text
stringlengths
454
608k
url
stringlengths
17
896
dump
stringclasses
91 values
source
stringclasses
1 value
word_count
int64
101
114k
flesch_reading_ease
float64
50
104
This is what I am trying to do: I am fairly new so please be as descriptive as possible so that I may learn what I'm doing wrong and (hopefully) not repeat those mistakes. On my game I have a handle which rotates right or left depending on mouse input clicks on the GUI. That aspect works fine, when the handle rotates it goes through colliders which in turn are supposed to be detected by this script when ran. That part (in theory) works fine, except that it is not running everytime the object moves to a new collision box. So In essence I am trying to find a way to make the thing run the script every so often (roughly 1 second). From looking around it seems the method using invoke repeat won't work with the way it's written so I had to use a coroutine. My knowledge with coroutines is very weak, in the compiler I show no errors but when I try to execute the script within the play preview of unity an error appears: "script error: OnTriggerStay2D The Message must have 0 or 1 parameters" Because of this error it does not work so could someone please take the time and explain why I'm getting this error and how to make it go away. Or alternatively, a different means in which to repeat my script (the original context of the script is included in the bottom but disabled). Note:The functions for the script are incomplete since it does little good to complete the conditions if it cannot even do part of it properly. using UnityEngine; using System.Collections; using UnityEngine.UI; public class Throttle_Triggers : MonoBehaviour { private Collider2D other; // Use this for initialization void Start() { //Call your function StartCoroutine(OnTriggerStay2D(other)); } IEnumerator OnTriggerStay2D(Collider2D other, float delayTime = 0f) { yield return new WaitForSeconds(delayTime); //You can then put your code below //......your code { if (GetComponent<Collider2D>() && gameObject.name == "Throttle_Stop") gameObject.tag = "Throttle_Stop"; else if (GetComponent<Collider2D>() && gameObject.name == "Throttle_1-3") gameObject.tag = "Throttle_1-3"; else gameObject.tag = "Throttle1-3"; } } // void OnTriggerStay2D(Collider2D other) // { // if (GetComponent<Collider2D>() && gameObject.name == "Throttle_Stop") //{ // gameObject.tag = "Throttle_Stop1"; //} //else if (GetComponent<Collider2D>() && gameObject.name == "Throttle_13") //{ // gameObject.tag = "Throttle_13"; //} //else //{ // gameObject.tag = "Throttle_23"; //} } Please also be aware there are 3 handles which function much the same, so bear that in mind in your response. Thank you for your time in reading this and hopefully helping me solve it. Answer by Mikeypi · Mar 05, 2016 at 12:32 PM I was able to solve the problem by using a Switch instead of the If/Else system... Even still if someone could explain where I went wrong with that since I will likely need it again I would. Errors With Game Over Script 1 Answer How do i make an uppercut attack? 0 Answers How to get variables of a behavior tree with RAIN AI 0 Answers HELP ! Highscore name input after each Death 0 Answers Highscore shows 0 upon restart, and the correct highscore appears again once the player scores 1 point 2 Answers
https://answers.unity.com/questions/1150871/script-problem-trying-to-make-the-code-of-this-scr.html
CC-MAIN-2020-40
refinedweb
513
63.09
Can you please take a look at the attached set of documents and figure out why the first row in the second table disappears when converting to PDF, and also why the values in the second row of the same table have disappeared? This is the code that processes the generated document prior to export: NodeCollection runs = document.GetChildNodes(NodeType.Run, true); Regex arabicMatchExpr = new Regex(@"\p{IsArabic}+"); foreach (Run run in runs) { Match arabicMatch = arabicMatchExpr.Match(run.Text); if (arabicMatch.Success) { run.Font.Bidi = true; } else { Match dateMatch = dateExpr.Match(run.Text); if (dateMatch.Success) { run.Font.Bidi = true; run.Text = DateTime.Parse(run.Text).ToString(“dd/MM/yyyy”); } } } MemoryStream tempStream = new MemoryStream(); document.Save(tempStream, SaveFormat.AsposePdf); Pdf pdfDoc = new Pdf(); pdfDoc.BindXML(tempStream, null); pdfDoc.IsRightToLeft = true; pdfDoc.Save(String.Format("{0}_{1:yyyyMMddHHmmss}.pdf", docType, DateTime.Now), Aspose.Pdf.SaveType.OpenInAcrobat, httpResponse); httpResponse.End(); Thanks, Larkin Can you please take a look at the attached set of documents and Hi Larkin, I have tested the issue and I’m able to reproduce the same problem. I have logged it in our issue tracking system as PDFNET-7152. We will investigate this issue in detail and will keep you updated on the status of a correction. We apologize for your inconvenience.<?xml:namespace prefix = o I really need a fix for this as soon as possible. I did some debugging research and what I came up with is this: The generated XML has a number of the text segments within the table cells enclosed in elements. I don’t know why this is done–perhaps this is how Word treats text runs within table cells, but in any case it is fine IF the text runs within the heading element are LTR. If the text runs are RTL (or perhaps it is the Unicode flag that causes the problem) then the cell renders as empty. As you see in the documents I attached in my previous post, the entire first row of the second table is not missing, it has merely collapsed because all its cells are empty since the values of every cell are Arabic characters. To verify that this is the problem, generate the XML file from the test2.doc file and locate the following line: الحوض Change the element to a element, as shown below: الحوض Now call BindXml() using this modified XML file and you will see that in the resulting PDF the row now appears, and the value in the segment above displays correctly. So now that I’ve done your leg work for you, can you fix the bug for me?? Hi Larkin, Thank you very much for your help; the findings will help us resolve the issue. We'll let you know about the estimated time of the fix, as we get some information from our development team. We appreciate your patience. Regards, Dear Larkin, We appreciate for your help. I have confirmed that your investigation on the issue is right. I think the bug would be fixed in about one week. We will try our best to make it sooner and send you an update once the fix is ready. Best regards.
https://forum.aspose.com/t/table-rows-cell-values-disappearing-when-converting-to-pdf/123629
CC-MAIN-2022-40
refinedweb
534
64.71
Celery Periodic Tasks: From Installation to Infinity This is a quick example of how I got Periodic Tasks to work using Celery without Django. There are lots of examples out there for creating tasks and loose documentation on how to start Celery and Celery Beat, but most of them involve Django. I just wanted to run a simple example and I spent way too long trying to fill in the gaps to get even this simple task to run periodically. So hopefully this quick example will help somebody else out there save some time. Pre-requisites Installing Celery sudo pip install Celery You'll want to choose your broker; RabbitMQ is what comes recommended by default. Create a celeryconfig.py Once you install your broker of choice you can setup the configuration options in celeryconfig.py. In my case I was using MySQL. Here is an example of what my file looked like. BROKER_URL = 'sqla+mysql://user:password@localhost/database' CELERY_RESULT_BACKEND = "database" CELERY_RESULT_DBURI = 'mysql://user:password@localhost/database' Create some tasks Create your tasks (generally they will be saved into a file called tasks.py). This is what my tasks.py looks like: celery = Celery('tasks') celery.config_from_object('celeryconfig') @celery.task def add(x, y): return x + y Update your celeryconfig.py Now in order for Celery to know how often you want to run your periodic tasks, update your celeryconfig.py to add the following. CELERYBEAT_SCHEDULE = { 'every-minute': { 'task': 'tasks.add', 'schedule': crontab(minute='*/1'), 'args': (1,2), }, } Start Celery Now we just need to start a Celery worker with the --beat flag and our task will run every minute. This part was not so obvious, but the --beat flag needs to appear after worker, otherwise nothing will happen. $ celery -A tasks worker --loglevel=info --beat Wait a minute... You should see Celery output a bunch of info as tasks are ran. Among them you should see a line that looks like this: [2012-12-07 13:26:00,343: INFO/MainProcess] Task tasks.add[44477955-906c-4827-bda2-f6f19822c652] succeeded in 0.00760388374329s: 3 Awesome !! I was needing exactly this, and I 'm using Flask. I was doing a very simple test, an implementation, and I spent the past 4 hours trying to understand why it wasn't working at all !!. Thanks a lot ! John Prawynkumar.S Fri, 11/08/2013 - 06:44 Really helpful I was searcing for the same kind of tutorial. Helped me a lot. Soo Ling Lim Tue, 12/03/2013 - 15:58 Couldn't get celery to work until I read your post Hey Jonathan, Thanks so much for your post. Simple and very helpful! Cheers, Soo Ling Blibble Thu, 05/22/2014 - 21:13 nico Tue, 06/10/2014 - 05:52 how to get the result of the running periodic task? thanks for your help David McLEan Thu, 09/18/2014 - 11:31 Awesome tutorial dude! Quick Awesome tutorial dude! Quick question though, how do I get the RESULT of my scheduled task out of celery. Previously I'd been sending tasks to celery using the '.delay()' function. Lets say I had a task called "add" in my tasks.py file. I'd do: from tasks import add result = add.delay(6,7) result.get() >>13 how do I get beats to tell me what the result of it's periodic task is? Timur Thu, 09/25/2014 - 10:29 Can I run beat periodic task Can I run beat periodic task scheduler with the -c: celery worker -Q celery --beat -l info -c 4 ? Joachim Wed, 01/07/2015 - 22:30 jagadish Mon, 08/20/2018 - 07:44 celery beat scheduler In the below given code, what is "every-minute" means? we are mentioning 'crontab(minute='*/1')' to run the task for every 1 minute(If not , please correct me). So what is the use of 'every-minute' then ? CELERYBEAT_SCHEDULE = { 'every-minute': { 'task': 'tasks.add', 'schedule': crontab(minute='*/1'), 'args': (1,2), }, } No-one Thu, 01/03/2019 - 06:41 In reply to celery beat scheduler by jagadish every-minute is just an… every-minute is just an title for the scheduled task Jorge Omar Vazquez Mon, 05/06/2013 - 01:45
https://www.metaltoad.com/blog/celery-periodic-tasks-installation-infinity
CC-MAIN-2020-16
refinedweb
701
65.93
You can subscribe to this list here. Showing 1 results of 1 Dear André André Wobst venit, vidit, dixit 23.05.2012 21:19: > Dear Michael, > > I'm sorry, could we please rewind the whole discussion. And let us > not talk about code but about the way it should work. Sure, I should have marked the code as "RFD" ;) Let me point out that I think that pyx.deco is one of the gems of PyX, and that I only want to help polish it :) > To me, the current behavior is the correct (and only correct) one. > Let me explain again, why I think so. > > 1. The arrow should be "part" of the path, i.e. the tip and the > constriction point needs to be positioned on the path. Yes and no. The arrow head should be part of the path just like (the outline of) a stroke is part of the path. But the stroke extends transversally, of course, and also longitudinally (depending on linecap). > 2. For pos == 0 the arrow should be at the beginning, either in the > forward direction (reverse == 0) or backwards (reverse == 1). As it > is a common use-case, pos == 0 and reverse == 1 is available as > "barrow". > 3. For pos == 1 the arrow should be at the beginning. Again, it can > be reversed. pos == 1 and reverse == 0 is an "earrow". Yes and no: the begin and end case should be available as barrow and earrow. But one can really argue about what "at the beginning" and "at the end" mean. For the simple triangular arrow heads, one could also expect an end head to be attached to the end of a path (like a round linecap). This is difficult for PyX's path-shaped heads, of course. > 4. Between those limits the pos argument should interpolate > linearly. And yes, this means that for pos == 0.5 neighter the tip > nor the constriction point of the arrow is at 0.5*arclen(). > > Do you agree to this functionality? Not completely ;) For example, for a closed smooth curve it's quite reasonable to expect pos=0 and pos=1 to be equivalent since they are the same point with the same tangent! I think the main issue is that all other existing or conceivable decorations are "local" in the sense that they use a point and maybe the tangent at that point of the path; but arrow heads need to use a a whole segment of the path. This limits the available positions if one does not want to extend the path. Note that it is extended even now: _arrowhead() use a portion of the path of size "size", so the first allowed position should be at "size", not "constrictionlen": ---->%---- from pyx import * c = canvas.canvas() const = 0.5 c.stroke(path.circle(0,0,0.3), [deco.arrow(pos=0, constriction=const), deco.earrow(constriction=const)]) c.writePDFfile() ---->%---- See how the linear interpolation is used for pos=0 (i.e. arclenpos=constrictionlen)? So, following the existing reasoning pos from [0,1] should be mapped to an interval of length arclen-size. But that would require adding another exclusion range at the beginning! So, if we insist on avoiding path extrapolation, pos=0 would need to correspond to arclenpos=size (and analogous for reversed heads, of course). I don't think that's the look that we want even for pos=0! > I do understand, that you might wonder about my solution to split > the path to properly align the arrows. But this is a straight forward > and elegant solution to work around the intrinsic problem, that pos > interpolates on the range arclen-constrictionlen. It needs to be > that way. I'm glad you agree that this interpolation is the problem ;) Seriously: maybe we can, for a moment, _not_ take that range as given and see what could happen then: * Either by changing pos or introducing arclenpos, allow range 0 to arclen (or even more). * Make earrow put an arrow at arclenpos, barrow(reversed=true) at size and so on. * Either export size (like you did with constriction) or better introduce a poscorr (or shift) parameter which allows shifting the position by multiples of size. That way, the positioning would be analogous to other decorators, one could "center" the heads at will, and barrow/earrow results would be unchanged (except for a fix the size vs. constrictionlen problem). >> The difference between us is that you seem to care about begin and >> end arrows mainly, while I care about mid arrows (also). > > Well, to me an arrow is something like a text. It has some extend. Yes, exactly! > If you center it, the text starts before half of the path and ends > after half of the path. > Yes, exactly my thinking! And, just like for tick labels, there's no reason why the decoration should be clipped by the path. I've been meaning for a while now to code a deco.insert() which inserts a canvas as a decoration. deco.text() is just a special version of that. deco.arrow() is in some sense a special blend between a decorator and a deformer; but for a "uniformly shaped path", I would expect it to behave like the other decorators in the sense above: insert something at the path pos, where the insertion is centered/anchored at a natural reference point (which may be changed by options). Cheers Michael
http://sourceforge.net/p/pyx/mailman/pyx-devel/?viewmonth=201205&viewday=24
CC-MAIN-2014-41
refinedweb
904
72.05
Last updated on 2015-10-06 Previous tutorial: Creating an OPM GEF Editor – Part 24: Showing Feedback to the User One of the problems with visual programming languages is layout – you can stay endless hours just arranging the figures in the diagram without any real changes in the program, just to “make it look better”. This is a complete waste of time. For textual languages, it is a standard practice to have code auto-formatters that format the code every time you save. There are even some extremists who consider badly formatted code as having errors (if you are interested in the subject, there are programs like CheckStyle and Sonar to check coding standards, among others. If you use a good another tool and find it helpful, please leave me a note). Anyway, my solution to this problem is simple: make the layout part of the language, so that it is not “secondary information”. This means that the developer cannot change the layout without changing the meaning of the program. And this also means that the language UI has to be very smart. For example, an OPP process is displayed as an ellipse, with the name of the process inside the ellipse. This could be shown in a number of ways: So the developer has a problem – he has to decide how to resize his ellipse in the “best way” possible (which will definitely be different from the “best way” of all other programmers). My solution: don’t let the programmer decide (at least not directly), but have each model element decide on its size, based on pre-defined configuration. It took me some days to get this to work, and I had to dig and debug the GEF Logic example a couple of times, but finally I managed. I needed to do the following things: 1. calculate how large is the text; 2. calculate the desired width of the the text using a pre-determined width to height ratio (See my question on ux stackexchange regarding this ration); and 3. calculate the final size of the text after resizing it to the calculated width. The last step is needed because the text may actually end narrower because we only want to have new lines when there is a break in the text (spaces, commas, etc.). Draw2d has a component that knows how to do this (called TextFlow) but I couldn’t manage to find a way to calculate the height… I tried to use getPreferredSize() on the TextFlow after setting its size, bounds, etc, but didn’t work. And then I found it: I had to pass -1 as the desired height to the getPreferredSize() method so it calculated it alone! So now the solution is simple: calculate the size of the text in one line (using the TextUtilities class). Calculate the area that is used by the text and from this derive the desired width, using the width to height ratio. And last, ask the FlowPage to calculate it’s preferred size using this width and a height of -1. I created a new class that does just this: package com.vainolo.draw2d.extras; /******************************************************************************* * Copyright (c) 2012 Arieh 'Vainolo' Bibliowicz * You can use this code for educational purposes. For any other uses * please contact me: vainolo@gmail.com *******************************************************************************/ import org.eclipse.draw2d.TextUtilities; import org.eclipse.draw2d.geometry.Dimension; import org.eclipse.draw2d.text.FlowPage; import org.eclipse.draw2d.text.ParagraphTextLayout; import org.eclipse.draw2d.text.TextFlow; import com.vainolo.draw2d.extras.translate.JDraw2dToDraw2dTranslations; import com.vainolo.jdraw2d.HorizontalAlignment; /** * A figure with a {@link TextFlow} that "smartly" calculates its preferred size, * using a provided width to height ratio. */ public class SmartLabelFigure extends FlowPage { private final TextFlow textFlow; private double ratio; /** * Create a new smart label with the given width to height ratio (width = ratio * height) * * @param ratio * ratio to use when calculating the smart size of the label. */ public SmartLabelFigure(double ratio) { super(); this.ratio = ratio; textFlow = new TextFlow(); textFlow.setLayoutManager(new ParagraphTextLayout(textFlow, ParagraphTextLayout.WORD_WRAP_HARD)); add(textFlow); } public void setText(String text) { textFlow.setText(text); } public String getText() { return textFlow.getText(); } public void setRatio(double ratio) { this.ratio = ratio; } public double getRatio() { return ratio; } /** * Calculate the best size for this label using the class's width to height ratio. * This is done by calculating the area that the text would occupy if it was in only * one line, then calculate a new width that would give the same area using the * width to height ratio, and finally it sends this width to the {@link FlowPage}'s * {@link FlowPage#getPreferredSize(int, int)} method which calculates the real * height using line breaks. * * @return A close match to the size that this figure should have to match * the required width to height ratio. */ public Dimension calculateSize() { Dimension lineDimensions = TextUtilities.INSTANCE.getStringExtents(textFlow.getText(), getFont()); double area = lineDimensions.width() * lineDimensions.height(); double width = Math.sqrt(area / ratio) * ratio; invalidate(); return getPreferredSize((int) width, -1); } public void setHorizontalAlignment(HorizontalAlignment hAlignment) { setHorizontalAligment(JDraw2dToDraw2dTranslations.translateHorizontalAlignment(hAlignment)); } } (Note: I’m using here some special classes that I am creating in order to port the draw2d project outsize of eclipse. I’ll write more about this in a future post). So there it is, now the programmer can’t resize the figures, not matter how hard he tries. And the size is calculated automatically. Using a ratio of 2, the example above looks something like this: And here is another example: The code for this class (and more goodies to come) can be found here and my new jdraw2d library (in progress) can be found here. Update: the FlowPage stores a kind of “cache” of the last 3 saved layouts, so my implementation didn’t give the correct results sometime. Thankfully, invalidate() clears this cache. The code was updated to mach this. Next tutorial: Creating an OPM GEF Editor – Part 26: Activating Tools from the Context Menu 8 Comments Please, make a tutorial for nodes with labels outside of nodes. I.e. OPMObject with label under or over it. There are two ways to implement this: If you have followed my tutorials, both implementation should be straightforward. Hi, One Question He deployment in exercise this all page com.vainolo and running perfect. Exists one example the implementation this example Sorry, but I could not understand your question. Hello : ) I have a question. How can i change font size of textflow ? I tried with the method [textflow.setfont] before I had changed height value of fontdata. like this, Font tFont = m_Textflow.getFont(); FontData[] tFontDataList = tFont.getFontData(); tFontDataList[0].setHeight(aSize); m_Textflow.setFont(new Font(null, tFontDataList[0])); But that didn’t work correctly and made any space on head. Help me please T^T No idea… never tried to do this. Did you try debugging the code to see what happens? You should also ask in StackOverflow. Hey Vainolo, I have small silly question. Could you please let me know where the main() for all the tutorials is ? Thanks in advance This is an eclipse plugin, so there is no main(). Start reading from the first tutorial and thing will surely become more clear.
https://vainolo.com/2013/03/05/creating-an-opm-gef-editor-part-25-smart-multi-line-text-figure/
CC-MAIN-2021-43
refinedweb
1,187
56.66
Razor Pages is part of ASP.NET Core 2.0, which is in preview. That means that there is still development work going on and design decisions being made, so the final released version of Razor Pages could well be different to the one examined here. At the moment, ASP.NET Core 2.0 is scheduled for release some time between July and September 2017. Therefore there is plenty of time for anything described here to change. Razor Pages works with Visual Studio 2017 Update 3 which is also in preview. Aside from a code editor of some sort, you also need to install the preview of the .NET Core 2.0 SDK. Creating the sample application Update: The way to create a Razor Pages application in VS 2017 Update 3 Preview is to select ASP.NET Core Web Application as the project type, and then to change the target framework to .NET Core 2.0 in the selector on the next wizard screen: I couldn't find a way to get VS 2017 Update 3 to generate a Razor Pages application out of the box (I could eventually, see above, but I'll leave these steps here anyway), so I resorted to using command line tools instead. First thing to do is to ensure that you have the preview of .NET Core 2.0 installed correctly, so open a command prompt (cmd.exe, Powershell, bash, terminal or whatever you prefer on your operating system) and type dotnet --version. The result should confirm that your are using a preview of version 2.0: Create a folder for the Razor Pages application: Navigate to the folder and type dotnet new pagesfollowed by dotnet restore(even though the feedback in the terminal suggests that it has already been successfully run), then dotnet build. These commands generate the files for the project, retrieve referenced packages, and then build the application. You should be able to see the files and folders for the application in the file explorer: Open Visual Studio 2017 Update 3 and use the File... Open... Project/Solution dialog to navigate to the .csproj file in your new application. The default location for Razor Pages is the Pages folder. The sample application contains a number of files that will look familiar to Web Pages developers. However, this framework is not Web Pages. It is built on top of the ASP.NET Core MVC framework. As such, Razor Pages has all the view-related features of ASP.NET Core MVC available to it along with a host of other features, and I will look in more detail at those in future. It also works in a different way to a Web Pages site, which used the Web Site project type. Razor Pages sites use the Web Application project type. The chief difference between them is that class files can be placed in an App_Code folder in a Web Site project, and then get compiled on first request. Class files in a Web Application project can be placed anywhere in the application directly but the app needs to be compiled before being deployed to a web server. Some files in the Pages folder end with a .cshtml.cs extension. These class files are similar to code behind files in ASP.NET web forms in that there is a one-to-one mapping between the class file and the Razor page (or view) with the same file name. This file type however is new to Razor Pages. I will look at them a bit later. Simple Development Model The main driver behind the Web Pages framework was to provide a simple development model. In order to deliver on that goal, Web Pages included a lot of "helpers" that hid a lot of bare wires. These helpers included the data access library exposed by the Database class that relied on the dynamic type (with its pros and cons) and the WebSecurity helper that wrapped the SimpleMembership provider. There was also a range of extension methods, IsPost, IsEmpty, AsInt and so on, that provided shortcuts to commonly used methods. None of these exist in Razor Pages, but it is still possible to adopt a Web Pages style of programming where the processing code is included in a code block at the top of the .cshtml file. Here is a Razor Page called Form bulk of the code above for a simple form looks just like it would in a Web Pages page. The chief differences are that the Razor Page requires the @page directive at the top of the file. The HasFormContentType property is used to determine whether a form has been posted (instead of the IsPost method from Web Pages) and the specific collection ( Form) from the Request object is referenced unlike in other versions of ASP.NET including Web Pages where the shortened Request["name"] would work. A Better Way To Do Things The above code will work and kind of follows the old Web Pages maxim of reducing the concept count for learners. However, where possible it is advisable to adopt a strongly typed approach to development which relies on the compiler telling you about your typing mistakes at design time, rather than at run time. Razor Pages leverages the MVC model binding framework to help with this. Model Binding is a core part of MVC which takes values from an HTTP request and maps them to a specified "Model". The model can be a simple property such as an int or string, or it can be complex such as a user-defined class. Here's the same form from above with the use of Model Binding for the "name" value: @page @using Microsoft.AspNetCore.Mvc.RazorPages @functions { [BindProperty] public string Name { get; set; } public PageResult OnPost() { return Page(); } } <div style="margin-top:30px;"> <form method="post"> <div>Name: <input name="name" /></div> <div><input type="submit" /></div> </form> @if (!string.IsNullOrEmpty(Name)) { <p>Hello @Name!</p> } </div> Anyone familiar with Web Pages will recognise the @functions block. In this case, it is used to define a public property (string Name) and to specify is as the model for binding via the BindProperty attribute. The @Functions block is also used to specify what should happen in the Razor Page's OnPost method. In this case, the page is re-displayed with the incoming form values having been bound to the model. The OnPost method is called by the framework when the HTTP POST verb has been used for the request. To execute logic when a GET request is made, you would use the OnGet method. The benefit of taking a strongly typed approach is that you get Intellisense support for properties in the Page: So far, all code has been confined to Razor (.cshtml) files. These support Just-In-Time compilation, which means that you can mimick the Web Site project development model to a large extent. Once you have deployed your compiled application, you can replace existing Razor pages with amended ones without having to pre-compile anything and they will be compiled on demand. The PageModel The PageModel is the code behind file that I mentioned earlier. This allows you to move the processing code out of the Razor Page into a separate class file. This approach facilitates advanced scenarios such as unit testing the code for the view. The next iteration shows how to use the PageModel. It begins with a revised version of the Razor Page: @page @model FormModel <div style="margin-top:30px;"> <form method="post"> <div>Name: <input name="name" /></div> <div><input type="submit" /></div> </form> @if (!string.IsNullOrEmpty(Model.Name)) { <p>Hello @Model.Name!</p> } </div> All of the model definition and processing code has been replaced with an @model directive (familiar to anyone who has worked with MVC). It exposes the model type to the Page. The other code has been moved to new file: Form.cshtml.cs: using Microsoft.AspNetCore.Mvc; using Microsoft.AspNetCore.Mvc.RazorPages; namespace RazorPagesTest.Pages { public class FormModel : PageModel { [BindProperty] public string Name { get; set; } public PageResult OnPost() { return Page(); } } } The class is named after the page file itself, in this case FormModel. It implements PageModel, a class that acts like a Controller class in MVC. This approach is generally recommended above all others. It is less likely to lead to code bloat in a page, and will make migrating to MVC a lot easier, if that is a likely future requirement. Data Access And Membership As I mentioned earlier, there is no Razor Pages-specific data access technology similar to the Web Pages WebMatrix.Data.Database class. The recommended data access technology is Entity Framework Core. That is not to say, however that you cannot use any other data access technology that works with .NET Core, including plain ADO.NET. The same goes for managing users. There is no Razor Pages specific approach. You would be expected to use the ASP.NET Core Identity framework, which leverages Entity Framework, or (not recommended) to roll your own solution. Summary This completes a first look at the Razor Pages framework. There are obvious similarities with the old Web Pages framework, but there are significant differences. Some of these differences add significant complexity, and this brief look has barely scratched the surface of those yet. It is important to highlight some of the goals behind the Razor Pages framework to put it in context: - Simplify the code required to implement common page-focused patterns, e.g. dynamic pages, CRUD, PRG, etc. - Use and expose the existing MVC primitives as much as possible - Allow straightforward migration to traditional MVC structure And its only fair to repeat a couple of the non-goals - i.e. things that the framework developers specifically want to avoid doing: - Create a scripted page framework to compete with PHP, etc. - Create new primitives that are only applicable to Razor Pages If you were looking for something that directly replaced the simplicity of Web Pages, you are likely to be disappointed. On the other hand, the additional power provided by the fact that this framework sits on top of MVC will help you to build much more robust and maintainable applications.
https://www.mikesdotnetting.com/article/307/razor-pages-getting-started-with-the-preview
CC-MAIN-2020-40
refinedweb
1,702
63.39
While professional developers are waiting for the Visual Studio Tools and Designers for SQL Server Compact 4.0, I will show how impatient develoers can include SQL Server Compact with ASP.NET applications, and use it from ASP.NET pages. Previously, you had to circumvent the SQL Compact ASP.NET blocker by adding a line of code to global.asax, as I describe here. This is no longer required, and SQL Compact 4.0 can now reliably handle web load. Including SQL Server Compact 4.0 with your ASP.NET 4.0 app 1: Download and install the 4.0 CTP runtime. 2: Copy the contents of the folder C:\Program Files\Microsoft SQL Server Compact Edition\v4.0\Private (show below) to the bin folder in your web app (you may have to use Show All Files in VS to see this folder). 3: Place your SQL Compact sdf file in your App_Data folder, so your solution looks like this (with Show All Files on): You can include the database file and the SQL Compact as content (Do not copy), if desired (so they become part of your project file). 4: Add a connection string to web.config, notice the |DataDirectory| macro, which will expand to the App_Data folder. <connectionStrings> <add name ="NorthWind" connectionString="data source=|DataDirectory|\Nw40.sdf" /> </connectionStrings> 5: Write code to connect. For now, you must use vanilla ADO.NET.Later you will be able to use Entity Framework and other OR/Ms to provide and model of your database. (Not LINQ to SQL, however). – UPDATE: EF CTP with SQL Compact support just released: Microsoft announced a new Entity Framework CTP today. using System; using System.Configuration; using System.Data.SqlServerCe; namespace Ce40ASPNET { public partial class _Default : System.Web.UI.Page { protected void Page_Load(object sender, EventArgs e) { using (SqlCeConnection conn = new SqlCeConnection()) { conn.ConnectionString = ConfigurationManager.ConnectionStrings["Northwind"].ConnectionString; conn.Open(); using (SqlCeCommand cmd = new SqlCeCommand("SELECT TOP (1) [Category Name] FROM Categories", conn)) { string valueFromDb = (string)cmd.ExecuteScalar(); Response.Write(string.Format("{0} Time {1}", valueFromDb, DateTime.Now.ToLongTimeString())); } } } } } You can now upload you project files to any web hosting site running ASP.NET 4.0, and your database runtime will be included in the upload. No SQL Server subscription will be required. Hope this will get you started with SQL Compact 4.0 and ASP.NET.
http://erikej.blogspot.ch/2010_07_01_archive.html
CC-MAIN-2018-22
refinedweb
392
52.76
Many applications write a "record" of data and then call sync(). I just can't automatically do the "right thing" in the library. bool SdFile::close() { if (sync()) { type_ = FAT_FILE_TYPE_CLOSED; return true; } return false;} I am having an issue. I can not get any of the sample apps to compile. The compiler complains of an error on line 40 of ArduinoStream.h: expected ')' before '&' token. I took a look and the reference parameter seemed to be declared correctly.I tried several of the example files and each one gave me the same error. I'll try the SD classes without the stream IO and see if I have better luck. #include <SdFat.h> Please enter a valid email to subscribe We need to confirm your email address. To complete the subscription, please click the link in the Thank you for subscribing! Arduino via Egeo 16 Torino, 10131 Italy
http://forum.arduino.cc/index.php?topic=58549.msg434600
CC-MAIN-2015-22
refinedweb
148
76.42
h5dump − Displays HDF5 file contents. h5dump [OPTIONS] file h5dump enables the user to examine the contents of an HDF5 file and dump those contents, in human readable form, to an ASCII file. h5dump dumps HDF5 file content to standard output. It can display the contents of the entire HDF5 file or selected objects, which can be groups, datasets, a subset of a dataset, links, attributes, or datatypes. The --header option displays object header information only. Names are the absolute names of the objects. h5dump displays objects in the order same as the command order. If a name does not start with a slash, h5dump begins searching for the specified object starting at the root group. If an object is hard linked with multiple names, h5dump displays the content of the object in the first occurrence. Only the link information is displayed in later occurrences. h5dump assigns a name for any unnamed datatype in the form of #oid1:oid2, where oid1 and oid2 are the object identifiers assigned by the library. The unnamed types are displayed within the root group. Datatypes are displayed with standard type names. For example, if a dataset is created with H5T_NATIVE_INT type and the standard type name for integer on that machine is H5T_STD_I32BE, h5dump displays H5T_STD_I32BE as the type of the dataset. h5dump can also dump a subset of a dataset. This feature operates in much the same way as hyperslabs in HDF5; the parameters specified on the commnd line are passed to the function H5Sselect_hyperslab and the resulting selection is displayed. The h5dump output is described in detail in the DDL for HDF5, the Data Description Language document. Note: It is not permissable to specify multiple attributes, datasets, datatypes, groups, or soft links with one flag. For example, one may not issue the command WRONG: h5dump -a /attr1 /attr2 foo.h5 to display both /attr1 and /attr2. One must issue the following command: CORRECT: h5dump -a /attr1 -a /attr2 foo.h5 It’s possible to select the file driver with which to open the HDF5 file by using the −−filedriver (−f) command-line option. Acceptable values for the −−filedriver option are: "sec2", "family", "split", "multi", and "stream". If the file driver flag isn’t specified, then the file will be opened with each driver in turn and in the order specified above until one driver succeeds in opening the file. With the --xml option, h5dump generates XML output. This output contains a complete description of the file, marked up in XML. The XML conforms to the HDF5 Document Type Definition (DTD) available at. The XML output is suitable for use with other tools, including the HDF5 Java Tools. −h or −−help Print a usage message and exit. −B or −−bootblock Print the content of the boot block. (This option is not yet implemented.) −H or −−header Print the header only; no data is displayed. −A Print the header and value of attributes; data of datasets is not displayed. −i or −−object−ids Print the object ids. −r or −−string Print 1-bytes integer datasets as ASCII. −V or −−version Print version number and exit. −a P or −−attribute=P Print the specified attribute. −d P or −−dataset=P Print the specified dataset. −f D or −−filedriver=D Specify which driver to open the file with. −g P or −−group=P Print the specified group and all members. −l P or −−soft−link=P Print the value(s) of the specified soft link. −o F or −−output=F Output raw data into file F. −t T or −−datatype=T Print the specified named datatype. −w N or −−width=N Set the number of columns of output. −x or −−xml Output XML using XML schema (default) instead of DDL. −u or −−use−dtd Output XML using XML DTD instead of DDL. −D U or −−xml−dtd=U In XML output, refer to the DTD or schema at U instead of the default schema/DTD. −X S or −−xml−dns=S In XML output, (XML Schema) use qualified names in the XML: ":": no namespace, default: "hdf5:" −s L or −−start=L Offset of start of subsetting selection. Default: the beginning of the dataset. −S L or −−stride=L Hyperslab stride. Default: 1 in all dimensions. −c L or −−count=L Number of blocks to include in the selection. −k L or −−block=L Size of block in hyperslab. Default: 1 in all dimensions. −− Indicate that all following arguments are non-options. E.g., to dump a file called ‘−f’, use h5dump −− −f. file The file to be examined. The option parameters listed above are defined as follows: D which file driver to use in opening the file. Acceptable values are "sec2", "family", "split", "multi", and "stream". Without the file driver flag the file will be opened with each driver in turn and in the order specified above until one driver succeeds in opening the file. P The full path from the root group to the object T The name of the datatype F A filename N An integer greater than 1 L A list of integers, the number of which is equal to the number of dimensions in the dataspace being queried U A URI (as defined in [IETF RFC 2396], updated by [IETF RFC 2732]) that refers to the DTD to be used to validate the XML Subsetting paramaters can also be expressed in a convenient compact form, as follows: −−dataset="/foo/mydataset[START;STRIDE;COUNT;BLOCK]" All of the semicolons (;) are required, even when a parameter value is not specified. When not specified, default parameter values are used. 1. Dumping the group /GroupFoo/GroupBar in the file quux.h5: h5dump −g /GroupFoo/GroupBar quux.h5 2. Dumping the dataset Fnord in the group /GroupFoo/GroupBar in the file quux.h5: h5dump −d /GroupFoo/GroupBar/Fnord quux.h5 3. Dumping the attribute metadata of the dataset Fnord which is in group /GroupFoo/GroupBar in the file quux.h5: h5dump −a /GroupFoo/GroupBar/Fnord/metadata quux.h5 4. Dumping the attribute metadata which is an attribute of the root group in the file quux.h5: h5dump −a /metadata quux.h5 5. Producing an XML listing of the file bobo.h5: h5dump −−xml bobo.h5 > bobo.h5.xml 6. Dumping a subset of the dataset /GroupFoo/databar/ in the file quux.h5 h5dump −d /GroupFoo/databar −−start="1,1" −−stride="2,3" −−count="3,19" −−block="1,1" quux.h5 7. The same example using the short form to specify the subsetting parameters: h5dump −d "/GroupFoo/databar[1,1;2,3;3,19;1,1]" quux.h5 The current version of h5dump displays the following information: * Group o group attribute (see Attribute) o group member * Dataset o dataset attribute (see Attribute) o dataset type (see Datatype) o dataset space (see Dataspace) o dataset data * Attribute o attribute type (see Datatype) o attribute space (see Dataspace) o attribute data * Datatype o integer type − H5T_STD_I8BE, H5T_STD_I8LE, H5T_STD_I16BE, ... o floating point type − H5T_IEEE_F32BE, H5T_IEEE_F32LE, H5T_IEEE_F64BE, ... o string type o compound type − named, unnamed and transient compound type − integer, floating or string type member o opaque types o reference type − object references − data regions o enum type o variable-length datatypes − atomic types only − scalar or single dimensional array of variable-length types supported * Dataspace o scalar and simple space * Soft link * Hard link * Loop detection h5ls(1), h5diff(1), h5repart(1), h5import(1), gif2h5(1), h52gif(1), h5perf(1) * HDF5 Data Description Language syntax at * HDF5 XML Schema at * HDF5 XML information at
http://man.sourcentral.org/ubuntu904/1+h5dump
CC-MAIN-2019-26
refinedweb
1,252
64.81
![if !IE]> <![endif]> Cache Conflict and Capacity One of the notable features of multicore processors is that threads will share a single cache at some level. There are two issues that can occur with shared caches: capacity misses and conflict misses. A conflict cache miss is where one thread has caused data needed by another thread to be evicted from the cache. The worst example of this is thrashing where multiple threads each require an item of data and that item of data maps to the same cache line for all the threads. Shared caches usually have sufficient associativity to avoid this being a significant issue. However, there are certain attributes of computer systems that tend to make this likely to occur. Data structures such as stacks tend to be aligned on cache line boundaries, which increases the likelihood that structures from different processes will map onto the same address. Consider the code shown in Listing 9.22. This code creates a number of threads. Each thread prints the address of the first item on its stack and then waits at a barrier for all the threads to complete before exiting. Listing 9.22 Code to Print the Stack Address for Different Threads #include <pthread.h> #include <stdio.h> #include <stdlib.h> pthread_barrier_t barrier; void* threadcode( void* param ) { int stack; printf("Stack base address = %x for thread %i\n", &stack, (int)param); pthread_barrier_wait( &barrier ); } int main( int argc, char*argv[] ) { pthread_t threads[20]; int nthreads = 8; if ( argc > 1 ) { nthreads = atoi( argv[1] ); } pthread_barrier_init( &barrier, 0, nthreads ); for( int i=0; i<nthreads; i++ ) { pthread_create( &threads[i], 0, threadcode, (void*)i ); } for( int i=0; i<nthreads; i++ ) { pthread_join( threads[i], 0 ); } pthread_barrier_destroy( &barrier ); return 0; } The expected output when this code is run on 32-bit Solaris indicates that threads are created with a 1MB offset between the start of each stack. For a processor with a cache size that is a power of two and smaller than 1MB, a stride of 1MB would ensure the base of the stack for all threads is in the same set of cache lines. The associativity of the cache will reduce the chance that this would be a problem. A cache with an associa-tivity greater than the number of threads sharing is less likely to have a problem with conflict misses. It is tempting to imagine that this is a theoretical problem, rather than one that can actually be encountered. However, suppose an application has multiple threads, and they all execute common code, spending the majority of the time performing calculations in the same routine. If that routine performs a lot of stack accesses, there is a good chance that the threads will conflict, causing thrashing and poor application performance. This is because all stacks start at some multiple of the stack size. The same variable, under the same call stack, will appear at the same offset from the base of the stack for all threads. It is quite possible that the cache line will be mapped onto the same cache line set for all threads. If all threads make heavy use of this stack location, it will cause thrashing within the set of cache lines. It is for this reason that processors usually implement some kind of hashing in hard-ware, which will cause addresses with a strided access pattern to map onto different sets of cache lines. If this is done, the variables will map onto different cache lines, and the threads should not cause thrashing in the cache. Even under this kind of hardware fea-ture, it is still possible to cause thrashing, but it is much less likely to happen because the obvious causes of thrashing have been eliminated. The other issue with shared caches is capacity misses. This is the situation where the data set that a single thread uses fits into the cache, but adding a second thread causes the total data footprint to exceed the capacity of the cache. Consider the code shown in Listing 9.23. In this code, each thread allocates an 8KB array of integers. When the code is run with a single thread on a core with at least 8KB of cache, the data the thread uses becomes cache resident, and the code runs quickly. If a second thread is started on the same core, the two threads would require a total 16KB of cache for the data required by both threads to remain resident in cache. Listing 9.23 Code Where Each Thread Uses an 8KB Chunk of Data #include <pthread.h> #include <stdio.h> #include <sys/time.h> #include <stdlib.h> #include <sys/types.h> #include <sys/processor.h> #include <sys/procset.h> void * threadcode( void*id ) { int *stack = calloc( sizeof(int), 2048 ); processor_bind( P_LWPID, P_MYID, ((int)id*4) & 63, 0 ); for( int i=0; i<1000; i++ ) { hrtime_t start = gethrtime(); double total = 0.0; for( int h=0; h<100; h++ ) for( int k=0; k<256*1024; k++ ) total += stack[ ( (h*k) ^ 20393 ) & 2047 ] *stack[ ( (h*k) ^ 12834 ) & 2047 ]; hrtime_t end = gethrtime(); if ( total == 0 ){ printf( "" ); } printf( "Time %f ns %i\n", (double)end – (double)start, (int)id ); } } int main( int argc, char*argv[] ) { pthread_t threads[20]; int nthreads = 8; if ( argc > 1 ) { nthreads = atoi( argv[1] ); } for( int i=0; i<nthreads; i++ ) { pthread_create( &threads[i], 0, threadcode, (void*)i ); } for( int i=0; i<nthreads; i++) { pthread_join( threads[i], 0 ); } return 0; } Running this code on an UltraSPARC T2 processor with one thread reports a time of about 0.7 seconds per iteration of the outermost loop. The 8KB data structure fits into the 8KB cache. When run with two threads, this time nearly doubles to 1.2 seconds per iteration, as might be expected, because the required data exceeds the size of the first-level cache and needs to be fetched from the shared second-level cache. Note that this code contains a call to the Solaris function processor_bind(), which binds a thread to a particular CPU. This is used in the code to ensure that two threads are placed on the same core. Binding will be discussed in the section “Using Processor Binding to Improve Memory Locality.” Obviously, when multiple threads are bound to the same processor core, there are other reasons why the performance might not scale. To prove that the problem we are observing is a cache capacity issue, we need to eliminate the other options. In this particular instance, the only other applicable reason is that of instruction issue capacity. This would be the situation where the processor was unable to issue the instruc-tions from both streams as fast as it could issue the instructions from a single stream. There are two ways to determine whether this was the problem. The first way is to perform the experiment where the size of the data used by each thread is reduced so that the combined footprint is much less than the size of the cache. If this is done and there is no impact performance from adding a second thread, it indicates that the prob-lem is a cache capacity issue and not because of the two threads sharing instruction issue width. However, modifying the data structures of an application is practical only on test codes. It is much harder to perform the same experiments on real programs. An alterna-tive way to identify the same issue is to look at cache miss rates using the hardware per-formance counters available on most processors. One tool to access the hardware performance counters on Solaris is cputrack. This tool reports the number of hardware performance counter events triggered by a single process. Listing 9.24 shows the results of using cputrack to count the cache misses from a single-threaded version of the example code. The tool reports that for the active thread there are about 450,000 L1 data cache misses per second and a few hundred L2 cache miss events per second. This can be compared to the cache misses encountered when two threads are run on the same core, as shown in Listing 9.25. In this instance, the L1 data cache miss rate increases to 26 million per second for each of the two active threads. The L2 cache miss rate remains near to zero. Listing 9.25 Cache Misses Reported When Two Threads Run on the Same Core This indicates that when a single thread is running on the core, it is cache resident and has few L1 cache misses. When a second thread joins this one on the same core, the combined memory footprint of the two threads exceeds the size of the L1 cache and causes both threads to have L1 cache misses. However, the combined memory footprint is still smaller than the size of the L2 cache, so there is no increase in L2 cache misses. Therefore, it is important to minimize the memory footprint of the codes running on the system. Most memory footprint optimizations will tend to appear to be common good programming practice and are often automatically implemented by the compiler. For example, if there are unused local variables within a function, then the compiler will not allocate stack space to hold them. Other issues might be less apparent. Consider an application that manages its own memory. It would be better for this application to return recently deallocated memory to the thread that deallocated it rather than return memory that had not been used in a while. The reason is that the recently deallocated memory might still be cache resident. Reusing the same memory avoids the cache misses that would occur if the memory manager returned an address that had not been recently used and the thread had to fetch the cache line from memory. Related Topics Copyright © 2018-2020 BrainKart.com; All Rights Reserved. Developed by Therithal info, Chennai.
https://www.brainkart.com/article/Cache-Conflict-and-Capacity_9531/
CC-MAIN-2019-39
refinedweb
1,651
68.4
Migrating to Sourcegraph 3.7.2+Migrating to Sourcegraph 3.7.2+ Sourcegraph 3.7.2+ includes much faster indexed symbol search (on large searches, up to 20x faster). However, there are some aspects you should be aware of when upgrading an existing Sourcegraph instance: Upgrading and downgrading is safe, reindexing will occur in the background seamlessly with no downtime or harm to search performance. Please read this document in full before upgrading to 3.7. Increased disk space requirementsIncreased disk space requirements With indexed symbol search comes an increase in the required disk space. Please ensure you have enough free space before upgrading. Run the command below for your deployment to determine how much disk space the indexed search indexes are taking currently. Then, multiply the number you get times 1.3 to determine how much free space you need before upgrading. For example, in the below examples we see 126 GiB is currently in use. Multiplying 126 GiB * 1.3 gives us 163.8 GiB (the amount we should ensure is free before upgrading). Single-container Docker deployment Run the following on the host machine: $ du -sh ~/.sourcegraph/data/zoekt/index/ 126G /Users/jane/.sourcegraph/data Kubernetes cluster deployment Run the following, but replace the value of $POD_NAME with your indexed-search pod name from kubectl get pods: $ POD_NAME='indexed-search-974c74498-6jngm' kubectl --namespace=prod exec -it $POD_NAME -c zoekt-indexserver -- du -sh /data/index 126G /data/index Pure-Docker cluster deployment Run the following against the zoekt-shared-disk directory on the host machine: $ du -sh ~/sourcegraph-docker/zoekt-shared-disk/ 126G /home/ec2-user/sourcegraph-docker/zoekt-shared-disk/ Background indexingBackground indexing Sourcegraph will reindex all repositories in the background seamlessly. In the meantime, it will serve searches just as fast from the old search index. This process happens at a rate of about 1,400 repositories/hr, depending on repository size and available resources. Until this process has completed, search performance will be the same as the prior version. If you’re eager or want to confirm, here’s how to check the process has finished: Single-container Docker deployment The following command ran on the host machine shows how many repositories have been reindexed: $ ls ~/.sourcegraph/data/zoekt/index/*_v16* | wc -l 12583 When it is equal to the number of repositories on your instance, the process has finished! Kubernetes cluster deployment The following command will show how many repositories have been reindexed. Replace the value of $POD_NAME with your indexed-search pod name from kubectl get pods: $ kubectl --namespace=prod exec -it indexed-search-974c74498-6jngm -c zoekt-indexserver -- sh -c 'ls /data/index/*_v16* | wc -l' 12583 When it is equal to the number of repositories on your instance, the process has finished! Pure-Docker cluster deployment The following command ran on the host machine against the zoekt-shared-disk directory will show how many repositories have been reindexed. $ ls ~/sourcegraph-docker/zoekt-shared-disk/*_v16* | wc -l 12583 When it is equal to the number of repositories on your instance, the process has finished! DowngradingDowngrading As guaranteed by our compatibility promise, it is always safe to downgrade to a previous minor version. e.g. 3.7.2 to 3.6.x. There will be no downtime, and search speed will not be impacted. Please do not downgrade or upgrade to 3.7.0 or 3.7.1, though, as those versions will incur a reindex and search performance will be harmed in the meantime. Memory and CPU requirements (no substantial change compared to v3.5)Memory and CPU requirements (no substantial change compared to v3.5) - In v3.7, the indexed-search/ zoekt-indexservercontainer will use 28% more memory on average compared to v3.6. However, please take note that v3.6 reduced memory consumption of the same container by about 41% – so the net change from v3.5 -> v3.7 is still less memory usage overall. - CPU usage may increase depending on the amount of symbol queries your users run now that it is much faster. We suggest not changing any CPU resources and instead checking resource usage after the upgrade and reindexing has finished.
https://docs.sourcegraph.com/admin/migration/3_7
CC-MAIN-2021-43
refinedweb
696
56.76
org.apache.commons.lang3.ObjectUtils; 20 21 /** 22 * <p> 23 * A very simple implementation of the {@link ConcurrentInitializer} interface 24 * which always returns the same object. 25 * </p> 26 * <p> 27 * An instance of this class is passed a reference to an object when it is 28 * constructed. The {@link #get()} method just returns this object. No 29 * synchronization is required. 30 * </p> 31 * <p> 32 * This class is useful for instance for unit testing or in cases where a 33 * specific object has to be passed to an object which expects a 34 * {@link ConcurrentInitializer}. 35 * </p> 36 * 37 * @since 3.0 38 * @param <T> the type of the object managed by this initializer 39 */ 40 public class ConstantInitializer<T> implements ConcurrentInitializer<T> { 41 /** Constant for the format of the string representation. */ 42 private static final String FMT_TO_STRING = "ConstantInitializer@%d [ object = %s ]"; 43 44 /** Stores the managed object. */ 45 private final T object; 46 47 /** 48 * Creates a new instance of {@code ConstantInitializer} and initializes it 49 * with the object to be managed. The {@code get()} method will always 50 * return the object passed here. This class does not place any restrictions 51 * on the object. It may be <b>null</b>, then {@code get()} will return 52 * <b>null</b>, too. 53 * 54 * @param obj the object to be managed by this initializer 55 */ 56 public ConstantInitializer(final T obj) { 57 object = obj; 58 } 59 60 /** 61 * Directly returns the object that was passed to the constructor. This is 62 * the same object as returned by {@code get()}. However, this method does 63 * not declare that it throws an exception. 64 * 65 * @return the object managed by this initializer 66 */ 67 public final T getObject() { 68 return object; 69 } 70 71 /** 72 * Returns the object managed by this initializer. This implementation just 73 * returns the object passed to the constructor. 74 * 75 * @return the object managed by this initializer 76 * @throws ConcurrentException if an error occurs 77 */ 78 @Override 79 public T get() throws ConcurrentException { 80 return getObject(); 81 } 82 83 /** 84 * Returns a hash code for this object. This implementation returns the hash 85 * code of the managed object. 86 * 87 * @return a hash code for this object 88 */ 89 @Override 90 public int hashCode() { 91 return getObject() != null ? getObject().hashCode() : 0; 92 } 93 94 /** 95 * Compares this object with another one. This implementation returns 96 * <b>true</b> if and only if the passed in object is an instance of 97 * {@code ConstantInitializer} which refers to an object equals to the 98 * object managed by this instance. 99 * 100 * @param obj the object to compare to 101 * @return a flag whether the objects are equal 102 */ 103 @SuppressWarnings( "deprecation" ) // ObjectUtils.equals(Object, Object) has been deprecated in 3.2 104 @Override 105 public boolean equals(final Object obj) { 106 if (this == obj) { 107 return true; 108 } 109 if (!(obj instanceof ConstantInitializer<?>)) { 110 return false; 111 } 112 113 final ConstantInitializer<?> c = (ConstantInitializer<?>) obj; 114 return ObjectUtils.equals(getObject(), c.getObject()); 115 } 116 117 /** 118 * Returns a string representation for this object. This string also 119 * contains a string representation of the object managed by this 120 * initializer. 121 * 122 * @return a string for this object 123 */ 124 @Override 125 public String toString() { 126 return String.format(FMT_TO_STRING, Integer.valueOf(System.identityHashCode(this)), 127 String.valueOf(getObject())); 128 } 129 }
http://commons.apache.org/proper/commons-lang/xref/org/apache/commons/lang3/concurrent/ConstantInitializer.html
CC-MAIN-2016-44
refinedweb
561
55.13
#include <FXFont.h> #include <FXFont.h> Inheritance diagram for FX::FXFont: See also: Construct a font with given font description of the form:. fontname [ "[" foundry "]" ] ["," size ["," weight ["," slant ["," setwidth ["," encoding ["," hints]]]]]] For example: "helvetica [bitstream],120,bold,i,normal,iso8859-1,0" Typically, at least the font name, and size must be given for normal font matching. As a special case, raw X11 fonts can also be passed, for example: "9x15bold" Finally, an old FOX 1.0 style font string may be passed as well: "[helvetica] 90 700 1 1 0 0" FONTWEIGHT_NORMAL FONTSLANT_REGULAR FONTENCODING_DEFAULT FONTSETWIDTH_DONTCARE 0 Construct a font with given name, size in points, weight, slant, character set encoding, setwidth, and hints. The font name may be comprised of a family name and optional foundry name enclosed in square brackets, for example, "helvetica [bitstream]". Construct font from font description. [virtual] Destroy font. Create the font. Reimplemented from FX::FXId. Detach the font. Destroy the font. [inline] Get font family name. Get actual family name. Get size in deci-points. Get actual size in deci-points. Get font weight. Get actual font weight. Get slant. Get actual slant. Get character set encoding. Get actual encoding. Get setwidth. Get actual setwidth. Get hints. Change font description. Get font description. Change the font to the specified font description string. Return the font description as a string suitable for parsing with setFont(), see above. Find out if the font is monotype or proportional. See if font has glyph for ch. Get first character glyph in font. Get last character glyph in font. Left bearing. Right bearing. Width of widest character in font. Height of highest character in font. Ascent from baseline. Descent from baseline. Get font leading [that is lead-ing as in Pb!]. Get font line spacing. Calculate width of given text in this font. Calculate height of given text in this font. FONTWEIGHT_DONTCARE FONTSLANT_DONTCARE [static] List all fonts matching hints. If listFonts() returns TRUE then fonts points to a newly-allocated array of length numfonts. It is the caller's responsibility to free this array using FXFREE(). Save font data into stream. Load font data from stream.
https://fox-toolkit.org/ref14/classFX_1_1FXFont.html
CC-MAIN-2021-25
refinedweb
357
71.92
Advertisement static keyword u didnt mention any example for static keyword plz explain static and state that how it is different from c++ static Regarding Varibles please differntiate instance and static variable with program about import u dont mention above example about import why use import key word ? pls explain about import. java Hi I new to java.so i have no knowledge about java. so please help me to develope my java language .Thanku Please explain Interface methods cannot be declared as static..Can you explain the reason please. Thank you. java please differentiate the static and nonstatic variables with a program Static Keyword in java There will be times when you will want to define a class member that will be used independently of any object of that class. Normally a class member must be accessed only in conjunction with an object of its class. However, it is possible to create a class class content is good Java program Very good java example for fresher. Help full for learning Java in One Day. Nice example I find this example very use full, will be much better if we have rules of static keyword. thank u your service is very useful to me.so, thank u to given this service to us. Static keyword in public static void main :// Explanation...; The variable declared with static keyword is called class variable. This variable... by all instance of the class. Example of static keyword: class UseStatic static instances of the class. When static is applied to methods, the static keyword...static what r the main uses of static in java Hi Friend... and accessed each time an instance of a class, an object, is created. When static static Java Keyword static Java Keyword The static is a keyword defined in the java programming language. Keywords... in java programming language likewise the static keyword indicates the following static keyword in java .style1 { color: #0000FF; } static keyword in java We are going to discuss about static keyword in java. The static keyword is a special keyword in java programming language. A static member belongs to a class static keyword static keyword Hii, In which portion of memory static variables stored in java. Is it take memory at compile time? thanks deepak mishra static instance of that object (the object that is declared static) in the class... of the class that is initialized. Another thing about static objects is that you... is why if you declare the function main as static then java does not have Day for the given Date in Java java.util.*; import java.text.*; class CheckDay{ public static void main...Day for the given Date in Java How can i get the day for the user..."); String day=f.format(date); System.out.println(day static keyword static keyword please give some detail about Static keyword. Static Variables Static variables are class variables that are shared... instances of the class. 4)A static variable can be accessed directly by the class name Learn Java in a day Learn Java in a day  ... structure. Initially learn to develop a class in java. To meet this purpose we..., "Learn java in a day" tutorial will definitely create your Implementation of Day Class to be returned is Monday. Here is the code: public class Day { final static int...Implementation of Day Class Here we are going to design and implement the class Day. The class Day should store the days of the week. Then test the following
http://roseindia.net/tutorialhelp/allcomments/4043
CC-MAIN-2016-18
refinedweb
584
66.84
Performance tuning Another interesting thing I found when doing my timing tests. When I change the following to use Python's gzip reader vs. the system 'gzcat' f = os.popen("gzcat /Users/dalke/databases/flybase/dmel-2L-r4.2.1.gff.gz") #f = gzip.open("/Users/dalke/databases/flybase/dmel-2L-r4.2.1.gff.gz")my parse time goes from 47 seconds to 73 seconds. Guess I won't be using the built-in gzip if I can get away with it! Looks like the problem is the slow iter performance. The gzip.GzipFile.read function is very, very slow def readline(self, size=-1): if size < 0: size = sys.maxint bufs = [] readsize = min(100, size) # Read from the file in small chunks while True: if size == 0: return "".join(bufs) # Return resulting line c = self.read(readsize) i = c.find('\n') if size is not None: # We set i=size to break out of the loop under two # conditions: 1) there's no newline, and the chunk is # larger than size, or 2) there is a newline, but the # resulting line would be longer than 'size'. if i==-1 and len(c) > size: i=size-1 elif size <= i: i = size -1 if i >= 0 or c == '': bufs.append(c[:i+1]) # Add portion of last chunk self._unread(c[i+1:]) # Push back rest of chunk return ''.join(bufs) # Return resulting line # Append chunk to list, decrease 'size', bufs.append(c) size = size - len(c) readsize = min(size, readsize * 2)If I replace the f=popen("gzcat ...") call with f = open(filename).readlines() then my time is 45 seconds. This means the overhead of using an external process for uncompressing is a negligible 2 seconds, compared to 28 seconds. Many of the fields in a GFF3 file can be escaped using the URL %hex convention. Few of of the fields are so I had a quick test for "%" in s before incurring the function call overhead. According to the profiler it was still being called a lot. I tracked it down to the attributes section, which is a key/value table where the values may be multi-valued. I did a function call for every single value. Now I check for the common case, where I don't need to unescape values. That got me down to about 32 seconds for my test set. I then did an evil hack. Instead of calling the constructor, which does a __new__ and an __init__ Python function call, I called __new__ directly and set the attributes myself. ## return GFF3Feature(seqid, source, type, start, end, ## score, strand, phase, attributes) # This hack gives me about 5% better performance because # I skip the extra __init__ call x = GFF3Feature.__new__(GFF3Feature) x.seqid = seqid x.source = source x.type = type x.start = start x.end = end x.score = score x.strand = strand x.phase = phase x.attributes = attributes return xAs you can read, I got a 5% overall speedup from that. In my fastest implementation, with all the hacks and tricks including the split change I'm about to talk about, I can process 566,119 lines in 25.3 seconds. By skipping the __init__ the time goes to 27 seconds, which is a bit over 6% slower. I used line=line.rstrip("\n") to remove the terminal newline. The profiler said rstrip was kinda slow. Micro-timing it % python -m timeit 's="Hello!\n"' 'if s[-1] == "\n": s=s[:-1]' 1000000 loops, best of 3: 1.2 usec per loop % python -m timeit 's="Hello!\n";s=s.rstrip("\n")' 1000000 loops, best of 3: 1.72 usec per loopThat's a shame. I don't like having if statements like that. I would rather have a "chomp()" method which implements this logic. On the other hand, for my test data set that's only 0.3 seconds of (at final reckoning) 25 seconds; about a 1% speedup. The profiler now claims the slowest code is 'split'. ncalls tottime percall cumtime percall filename:lineno(function) 99995 11.960 0.000 21.160 0.000 gff3_parser.py:220(parse_gff3_line) 397148 7.640 0.000 7.640 0.000 :0(split) 100001 3.970 0.000 25.130 0.000 gff3_parser.py:392(parse_gff3) 1 1.880 1.880 27.010 27.010 gff3_parser.py:722(go) 99995 1.520 0.000 1.520 0.000 :0(__new__) 131 0.010 0.000 0.040 0.000 :0(map) 272 0.010 0.000 0.010 0.000 :0(join) 272 0.010 0.000 0.030 0.000 urllib.py:1055(unquote) ...I've removed as many split calls as I can. My profile test set works over the first 100,000 lines of the file, which means about 4 splits per line. Why? Ahh, each line contains a list of key/value attributes of the form: ID=CG11023-RA-u5;Dbxref=FlyBase:FBgn0031208;Parent=CG11023-RAEvery line has one, and most lines have several. I split on ';' then split on '=' to get the fields. Turns out i = field.index("=") d[field[:i]] = field[i+1:]is faster than k, v = field.split("=", 1) d[k] = vI looked at the Python's implementation of split. Here is the heart for (i = j = 0; i < len; ) { if (s[i] == ch) { if (maxcount-- <= 0) break; SPLIT_APPEND(s, j, i); i = j = i + 1; } else i++; } if (j <= len) { SPLIT_APPEND(s, j, len); } return list;For my most common case this requires two reallocs, because I have 9 elements and in listobject.c * The growth pattern is: 0, 4, 8, 16, 25, 35, 46, 58, 72, 88, ...I implemented an alternative which scanned the string to find how many characters matched and used that to build a list of the correct size. That sped up my code by a few percent, which was only about the same speed as the .index() approach. My solution isn't a good general purpose one though. I should only scan the string once to identify breakpoints. Suppose the string was in shared memory or memory mapped to a file and it changes underneath. Bad idea, but possible. One advantage to the current scheme is it will never cause problems even if the characters are mutated while it's working. My scheme will cause problems because it requires a double scan. My also causes problems with cache coherency because it will read long strings twice. Other solutions: only try to speed up if the string is less than, say, 200 bytes. That's going to be a frequent case. Use a local list of, say, 10 offset values. Find the split points and save to that list. If there are fewer than 10 values, prealloc the list with the right size and use the stored split points when making the substrings. If there are more than 10 points, or the string is too long, fall back to the existing method, because the realloc performance is less of a problem. This is interesting. Split without a maxsplit is faster than with a maxsplit % python -m timeit 's="xyz=123"; a,b=s.split("=")' 100000 loops, best of 3: 2.95 usec per loop % python -m timeit 's="xyz=123"; a,b=s.split("=", 1)' 100000 loops, best of 3: 3.34 usec per loopThat accounts for about 0.2 seconds of the 25 second performance time. However, I'm now at the 1% speedup point, where I need multiple runs to make sure I'm seeing real speedups and not system variances. 47 seconds to 25 seconds is nearly a factor of two. I think that's good enough for now. If I need more later, guess I'll have to look at C. Anyone want to spend time improving the performance of Python's native split function? If Raymond Hettinger's 'partition' gets in that could speed up the case of parsing the "key=value" terms but won't handle my need to split nearly every line on tab to get 9 fields. Andrew Dalke is an independent consultant focusing on software development for computational chemistry and biology. Need contract programming, help, or training? Contact me
http://www.dalkescientific.com/writings/diary/archive/2006/03/20/performance_tuning.html
CC-MAIN-2014-41
refinedweb
1,365
76.72
Describes the structure of a top-level window. It is the root node of a XUL document. It is by default a horizontally oriented box. As it is a box, all box attributes can be used. By default, the window will have a platform-specific frame around it. To set an icon for the window, create a platform-specific icon file <windowid> .ico and/or <windowid> .xpm and place or install these files into the <mozilla-directory> /chrome/icons/default/ directory. The <windowid> is the value of the id attribute on the window. This allows you to have a different icon for each window. Without including the css file at "chrome://global/skin/", the window will not be stylized and will be invisible and glitchy when opened as a dialog. Note: Starting in Gecko 1.9.2, you can detect when a window is activated or deactivated by watching for the "activate" and "deactivate" events. See Window activation events. More information is available in the XUL tutorial. - Attributes - accelerated, chromemargin, disablechrome, disablefastfind, drawintitlebar, fullscreenbutton, height, hidechrome, id, lightweightthemes, lightweightthemesfooter, screenX, screenY, sizemode, title, width, windowtype Examples <?xml version="1.0"?> <?xml-stylesheet <vbox> <hbox> <image src="application_form.png"/> <description>Register Online!</description> </hbox> <groupbox align="start"> <caption label="Your Information"/> <radiogroup> <vbox> <hbox> <label control="your-fname" value="Enter first name:"/> <textbox id="your-fname" value="Johan"/> </hbox> <hbox> <label control="your-lname" value="Enter last name:"/> <textbox id="your-lname" value="Hernandez"/> </hbox> <hbox> <button oncommand="alert('save!')"> <description>Save</description> </button> </hbox> </vbox> </radiogroup> </groupbox> </vbox> </window> Attributes accelerated - Type: boolean - Set this attribute to trueto allow hardware layer managers to accelerate the window. activetitlebarcolor - Type: color string - Specify background color of the window's titlebar when it is active (foreground). Moreover this hides separator between titlebar and window contents. This only affects Mac OS X. chrom. disablechrome - Type: boolean - Set this attribute to trueto disable chrome in the window. This is used to hide chrome when showing in-browser UI such as the about:addonspage, and causes the toolbars to be hidden, with only the tab strip (and, if currently displayed, the add-on bar) left showing. disable). draw. hidechrome - Type: boolean - Set this attribute to trueto have the chrome including the titlebar hidden. id - Type: unique id - A unique identifier so that you can identify the element with. You can use this as a parameter to getElementById()and other DOM functions and to reference the element in style sheets. inactivetitlebarcolor - Type: color string - Specify background color of the window's titlebar when it is inactive (background). Moreover this hides separator between titlebar and window contents. This affects only on Mac OS X. lightweightthemes - Type: boolean trueif the window supports lightweight themes, otherwise false. s. width -. window Properties Methods See also: DOM window object methods Note The error message "XML Parsing Error: undefined entity...<window" can be caused by a missing or unreachable DTD file referenced in the XUL file. A filename following the SYSTEM keyword in a DOCTYPE declaration may silently fail to load and the only error message will be an undefined entity error on the next XUL element. Related Elements Related Topics User Notes To change the Icon to a window's title bar check this page on Window icons. To add a favicon to the address bar and browser tab (ie dialog is not a popup) then use the following code snippet to use the html namespace and link. <window xmlns="" xmlns: <!-- Icon from chrome --> <html:link <!-- From a remote site --> <html:link Since Firefox 3.6 the above listed code does not work correctly - it produces the following message: "Warning: XUL box for box element contained an inline link child, forcing all its children to be wrapped in a block". If this code is placed between window tags it messes up all other controls on the window. If it is placed between box tags, window controls are rendered fine, but still there is this error message. The problem can be solved as follows: <html:link or <html:head> <html:link </html:head>
https://developer.mozilla.org/en-US/docs/Mozilla/Tech/XUL/window
CC-MAIN-2016-50
refinedweb
680
56.96
» Java » Java in General Author Filepath Help henri henri Ranch Hand Joined: Oct 03, 2005 Posts: 115 posted Jan 13, 2006 18:49:00 0 My picture panel gets the image from the same folder that stores my class files in NetBeans but I have a folder called Actors and would like the filepath to be directed there. I have tried to use ./Actors/file.jpg without any succes. public class Picture extends JPanel { Image img; public Picture(String file) { img = new ImageIcon(this.getClass().getResource(file + ".jpg")).getImage(); this.setBorder(BorderFactory.createLineBorder(DARK_BLU, 1)); } public Dimension getPreferredSize() { return new Dimension(img.getWidth(this), img.getHeight(this)); } public void paintComponent(Graphics gr) { super.paintComponent(gr); gr.drawImage(img, 0, 0, this.getWidth(), this.getHeight(), this); } } Layne Lund Ranch Hand Joined: Dec 06, 2001 Posts: 3061 posted Jan 13, 2006 21:55:00 0 Please describe the file directory hierarchy that you are using. We need this information in order to help you. Also which version of NetBeans are you using? I think NetBeans 4.1 creates a jar file and runs the program from there. However, the user directory will be set differently from the directory that contains the jar. You might want to include the image in the jar file and use Class.getResource() or Class.getResourceAsStream() in order to load the image. Again, we need some more information in order to help you. Layne [ January 13, 2006: Message edited by: Layne Lund ] Java API Documentation The Java Tutorial henri henri Ranch Hand Joined: Oct 03, 2005 Posts: 115 posted Jan 14, 2006 09:41:00 0 I am using NetBeans 4.1 Filepath: D:\Titres de Film DVD\build\classes\film_titles\Actors You can see by my code that my serialized file for my actor JavaBeans is in folder SER filepath: D:\Titres de Film DVD\SER I would like to write the code in the Picture class to be able to get the images in folder Actors D:\Titres de Film DVD\Actors If I use the "clean and build main project" command in NetBeans, the entire Actors folder is deleted. public class Actor implements MouseListener, Serializable { private static final Pattern pattern = Pattern.compile("(.*),\\s*(.*)"); private ActorContainer container = new ActorContainer(); private TLabel birth_Place, birth_Date, status; private JScrollPane scrollpane; private JFrame fr; private String file; public Actor(String actor_Name) { this.file = actor_Name; try { FileInputStream file = new FileInputStream("./SER/ActorContainer.ser"); ObjectInputStream input = new ObjectInputStream(file); container = (ActorContainer)input.readObject(); } catch(Exception ex) { ex.printStackTrace(); } Collection<ActorBean>values = container.values(); for(Iterator<ActorBean>it = values.iterator(); it.hasNext() :wink: { ActorBean actorBean = it.next(); String name = actorBean.getName(); if(name.equals(actor_Name)) { ArrayList<String>films = actorBean.getFilmNames(); String birthplace = actorBean.getBirthPlace(); Birthdate date = actorBean.getBirthDate(); int day = date.getDay(); String month = date.getMonth(); int year = date.getYear(); birth_Date = new TLabel("Date de Naissance: " + day + " " + month + " " + year); birth_Place = new TLabel(birthplace); TArea filmDisplay = new TArea(); for(Iterator<String>sit = films.iterator(); sit.hasNext() :wink: { String film_Name = sit.next().trim(); String film_title = this.reverse(film_Name); filmDisplay.append(film_title + "\n"); } scrollpane = new JScrollPane(filmDisplay); scrollpane.setBorder(BorderFactory.createLineBorder(WHITE, 1)); String actorName = this.reverse(actor_Name); ImageIcon icon = new ImageIcon("./Images/Bean.jpg"); if(films.size() == 1) { status = new TLabel(films.size() + " titre dans le capsule " + actorName, icon, JLabel.LEFT); } else if(films.size() > 1) { status = new TLabel(films.size() + " titres dans le capsule " + actorName, icon, JLabel.LEFT); } JPanel p1 = new JPanel(); p1.setBackground(GREEN); p1.setBorder(BorderFactory.createLineBorder(DARK_BLU, 1)); Box box = Box.createHorizontalBox(); box.add(birth_Date); box.add(Box.createHorizontalStrut(75)); box.add(birth_Place); p1.add(box); JPanel p2 = new JPanel(); p2.setBackground(WHITE); p2.setBorder(BorderFactory.createLineBorder(DARK_BLU, 1)); p2.setLayout(new BorderLayout()); p2.add(scrollpane, BorderLayout.CENTER); p2.add(status, BorderLayout.SOUTH); fr = new JFrame(); Container pane = fr.getContentPane(); pane.add(p1, BorderLayout.NORTH); pane.add(new Picture(file), BorderLayout.EAST); pane.add(p2, BorderLayout.CENTER); fr.setAlwaysOnTop(true); fr.addMouseListener(this); fr.setUndecorated(true); fr.setLocation(152, 189); fr.setSize(575, 306); fr.setVisible(true); fr.validate(); } } } public class Picture extends JPanel { Image img; public Picture(String file) { img = new ImageIcon(this.getClass().getResource("./Actors/" + file + ".jpg")).getImage(); this.setBorder(BorderFactory.createLineBorder(DARK_BLU, 1)); this.setBackground(WHITE); } public Dimension getPreferredSize() { return new Dimension(img.getWidth(this), img.getHeight(this)); } public void paintComponent(Graphics gr) { super.paintComponent(gr); gr.drawImage(img, 0, 0, this.getWidth(), this.getHeight(), this); } } public void mouseClicked(MouseEvent me) { if(me.getComponent() == fr) fr.dispose(); } public final String reverse(String name) { return pattern.matcher(name).replaceAll("$2 $1"); } public void mouseEntered(MouseEvent me) {} public void mouseExited(MouseEvent me) {} public void mousePressed(MouseEvent me) {} public void mouseReleased(MouseEvent me) {}} Layne Lund Ranch Hand Joined: Dec 06, 2001 Posts: 3061 posted Jan 14, 2006 14:26:00 0 You have only given the path for one directory or file. I was hoping that you would describe the WHOLE directory hierarchy for your project. Since you mentioned that you are using NetBeans 4.1, I think I can make some safe assumptions about this. I understand that your project lives in "D:\Titres de Film DVD". Since you are using NetBeans, there are probably the following directories under this: nbproject - files for NetBeans to manage your project src - this is probably where your source code (.java files) lives test - this is where any JUnit tests typically go dist - this is where NetBeans will put the JAR file and Javadocs if you generate them build - this is where NetBeans will put generated class files Can you fill in any details that I may have missed here? Also please correct some assumptions. So it sounds like you put the file.jpg image file under the build\classes\film_titles\Actors directory? This is a big HUGE mistake, especially if this is your master copy of that file. The build directory is TEMPORARY! If you accidentally tell NetBeans to clean your project, the build subdirectory will be deleted! Typically I create a directory called resources to store the master copy of such files. Now to answer your question: First you need to get the user directory. To do this you just use the following line of code: String userDir = System.getProperty("user.dir"); Any time you use a relative file path (i.e. one that doesn't start with C: or D: or whatever), it will be resolved relative to this user directory. For now, it will probably be simplest to put the Actors directory under this directory. You may want to write some test code that prints this out so you know what directory your program uses. You should also learn about different types of paths that are used to locate files. This will include absolute paths and relative paths. Also make sure you understand the meaning of special directory names like . and .. since these come into play when you use relative paths. Eventually, you may want to learn how to include these resource files in a JAR file and load them from there. To do so, you will need to look at the Class.getResource() and Class.getResourceAsStream() methods. This will make it easier to distribute your app since you will only need to distribute one JAR file rather than multiple image and class files. You should also learn how to modify the build.xml file so that NetBeans can include the resources in the JAR file automatically. NetBeans uses a tool called Ant that uses this build.xml file. Look for Ant documentation to find out more about making these modifications. I hope this helps. Regards, Layne henri henri Ranch Hand Joined: Oct 03, 2005 Posts: 115 posted Jan 16, 2006 13:37:00 0 Thank you very much for your response Layne. You laid out for me some very important concepts that I still need to learn in order to complete my program and have it distributed. All of your assumptions about my file structure were correct. The whole thing is just so overwhelming. The program's imbilical cord attached to NetBeans will eventually have to be severed. You also mentioned some things about how NetBeans works that I have not had time to learn. Thank you for that too. I came up with the following line of code in order to get an image in the Actors' folder. You also mentioned using System.getProperty(), I used this earlier on before posting this topic to get the sounds in my program. img = new ImageIcon("./Actors/" + file + ".jpg").getImage(); codeBase = new URL("file:" + System.getProperty("user.dir") + "/Sound/"); Once again Layne, I really appreciate all the ideas you laid out for me. Thank you very much. Stan James (instanceof Sidekick) Ranch Hand Joined: Jan 29, 2003 Posts: 8791 posted Jan 16, 2006 16:44:00 0 Since you brought it up, Layne, I've been having fits with getResourceAsStream(). I added a zip file to the Java build path in Eclipse and can't seem to read the files that are in it. I tried the method on Class and ClassLoader , with and without leading /. Is there something with waving dead chickens over the code that I'm missing? A good question is never answered. It is not a bolt to be tightened into place but a seed to be planted and to bear more seed toward the hope of greening the landscape of the idea. John Ciardi Layne Lund Ranch Hand Joined: Dec 06, 2001 Posts: 3061 posted Jan 17, 2006 16:44:00 0 Stan, Does getResourceAsStream() return a valid InputStream object? Can you read any data at all from it? Did you try wrappnig it in a ZipInputStream to access the information from the zip file? These are the only guesses I have at the moment. If this doesn't spark anything, perhaps you can post some code and a description of where your files are located. It might be helpful if you start your own thread on this. Regards, Layne Stan James (instanceof Sidekick) Ranch Hand Joined: Jan 29, 2003 Posts: 8791 posted Jan 17, 2006 17:48:00 0 It always returns null, which is documented for not found. When I get back on my work PC I'll post a sample. This code is in a POJO called by a servlet which may tell us something about where the jar should be stored. I agree. Here's the link: subject: Filepath Help Similar Threads Converting from Swing to Applet JPanel placing image on jpanel and then add other components how can i refresh a scrollpane getting problem in creating an image from int array All times are in JavaRanch time: GMT-6 in summer, GMT-7 in winter JForum | Paul Wheaton
http://www.coderanch.com/t/378861/java/java/Filepath
CC-MAIN-2015-18
refinedweb
1,777
58.99
"Point[1].h"//My test program #include <iostream> using namespace std; int main () { const Point p1(1,2); Point p2(2,4); Point p3(-5, 6); Point p4; p4 = p2.distance(p1); cout << "p1 is " << p1.x1() << "/" p1.y1() << endl; cout << "p2 is " << p2.x1() << "/" p2.y1() << endl; cout << "p3 is " << p3.x1() << "/" p3.y1() << endl; cout << "The resulting p4 is:" << endl; cout << p4.x1() << "/" p4.y1() << endl; return 0; } Class Point { // Your typical stuff friend ostream& operator<< (ostream& ostr, Point& pt); } above main: ostream& operator<<(ostream& ostr, Point& pt) { ostr << "X: " << pt._x << " Y: " << pt._y << endl; return ostr; } main() { const Point p1(1,2); Point p2(2,4); Point p3(-5, 6); Point p4; p4 = p2.distance(p1); cout << p4; } The reason we can access the Point's private data members is because we made the ostream a friend of the class Point. Normally, you do not want to make other classes or functions friends of classes, but making an ostream a friend is a well known old tactic and for this partical case, it is usually deemed appropriate. I do agree with jkr's answer as well. I only wish to provide an alternative, however, in my opinion, overloading the << operator would be a better choice here. Jkr, would you not agree? Hope this helps, Mactep. Also there was an error in that code Point(int x = 0, int y =0): _x(y), _y(y); // y is used in both cases. #ifndef _POINT_H_ #define _POINT_H_ class Point { public: // Constructors Point(int x = 0, int y =0): _x(x), _y(y){} Point(const Point& p): _x(p._x), _y(p._y){} ~Point(){} // Accessors int x1()const { return _x; } int y1()const { return _y; } // Modifiers void x1(int x) { _x = x; } void y1(int y) { _y = y; } void set_xy(int x, int y) { _x = x, _y = y; } // Member Functions void increment_x() { _x++; } void increment_y() { _y++; } Point operator +(const Point&) const; Point distance(const Point& p) const; friend ostream & operator << (ostream & os, Point & p); private: int _x; int _y; }; #include "Point[1].h"//function file #include <cmath> Point Point::operator + (const Point& p) const//function to add points { Point add((this->_x + p._x) + (this->_y + p._y)); return add;// how do I retrieve this result? } Point Point::distance(const Point& p) const { Point dist(sqrt(pow((this->_x - p._x),2) + pow((this->_y - p._y),2))); return dist;//how do I retrieve this result? } ostream & operator << (ostream & os, Point & p) { os<< "x1=" << p._x <<",y1=" << p._y << endl; return os; } #include "Point[1].h"//My test program #include <iostream> using namespace std; int main () { const Point p1(1,2); Point p2(2,4); Point p3(-5, 6); cout << p2; Point p4 = p1 + p2; cout << p4; p4 = p2.distance(p1); cout << p4; return 0; } _novi_ Mactep13-Thank you for the alternate solution. I haven't learned how to use the friend specifier yet, but it looks like a useful option. I wish there was a way to split solution points between the experts. Maybe there is let me know. Novitiate-thank you for that overloaded varaible catch-that also totally helped. You guys all rule as usual.
https://www.experts-exchange.com/questions/21394996/testing-my-class-member-functions.html
CC-MAIN-2018-22
refinedweb
528
74.79
Profiling — wiki You can use this code to profile your pygame at the python level. With it you can find out the bits that are slowing down your game. If your game is going too slow, you will be able to find out what areas to improve. If your main loop is not already in a main() function, just put it in one, and add the stuff below to your game. def main(): for x in range(10000): pass import cProfile as profile profile.run('main()') Warning: the hotshot module has some bugs and may produce incorrect timings. Read the a python-dev mailing list thread titled "s/hotshot/lsprof" for details. It's better to use cProfile if you have python2.5+ import traceback,sys def slow_function(): # your code goes here. for x in range(10000): pass def main(): # your code goes here. slow_function() if __name__ == "__main__": if "profile" in sys.argv: import hotshot import hotshot.stats import tempfile import os profile_data_fname = tempfile.mktemp("prf") try: prof = hotshot.Profile(profile_data_fname) prof.run('main()') del prof s = hotshot.stats.load(profile_data_fname) s.strip_dirs() print "cumulative\n\n" s.sort_stats('cumulative').print_stats() print "By time.\n\n" s.sort_stats('time').print_stats() del s finally: # clean up the temporary file name. try: os.remove(profile_data_fname) except: # may have trouble deleting ;) pass else: try: main() except: traceback.print_exc(sys.stderr) Or if you're too lazy to copy all that in, this works as well (although not quite as fancily.) Or you can use the @profile decorator from the profilehooks module by Marius Gedminas. from profilehooks import profile @profile def main(): ... # your code here if __name__ == '__main__': main()
https://www.pygame.org/wiki/Profiling?parent=index
CC-MAIN-2018-51
refinedweb
275
69.48
Sigfox library for LoPy4? Out of interest i tried to register a LoPy4 on the sigfox backend. But the most recent firmware for LoPy does not seem to include the Sigfox library. In REPL: from network import Sigfox Traceback (most recent call last): File "<stdin>", line 1, in <module> ImportError: cannot import name Sigfox What now? Wait for a new firmware version for the LoPy4? - Xykon administrators last edited by @xykon Hi, like I mentionnend in my message to you, I will go on details, maybe there is some step missing. I first went for the DFU upgrade on the Expansion board by installing the libusbK driver. Even if the upgrade was a success I could not see a COM port allocated, so I replaced it with a USB Serial Driver and from there on the port worked (that might be the critical point). After that I just connected the LoPy4 on the Expansion board (de-plugging/re-plugging after each step) and launched the upgrade by choosing the LoRa country and SigFox region. The update did not work (Screenshot below). So I went for downloading the last .tar.gz on github and upgraded via choosing the flash file (this worked btw). But therefore I guess I can't get an ID since it did not register the region? PS: I have a few LoPy4 to upgrade. Thanks a lot for your fast replies so far, much appreciated. - Xykon administrators last edited by Xykon @toppert Can you please run the following code? import binascii, machine print(binascii.hexlify(machine.unique_id())) Please send me the output from the command either via direct forum message or email: support@pycom.io P.S. can you confirm you did a firmware update? This is mandatory in order to receive the Sigfox credentials from the online server. Mhhhh, this seems weird, somehow I can't manage to get the ID and the PAC of my device for Sigfox. Update has been done on the Lopy, so importing the Library works fine now. (If you wonder about the outcome of the console, I typed myself some parts again to check everything). Something wrong with the update I made or did I forget something? - jmarcelino last edited by @toppert Yes everything is supported now, just update the firmware to the latest using the Pycom Firmware Updater and you'll be ready I have the same issue, is this now fixed and there is something to upgrade to? I just received my Lopy4 and wanted to test Sigfox on it. @jmarcelino i am having trouble getting LoRa code to work on the LoPy4, which worked fine on an old LoPy. Is this part of the same release / issue? If so, is there an estimated release date? Right now the LoPy4's i have are useless. - jmarcelino last edited by @robin Yes, our factory was very quick to produce and ship the LoPy4 but I'm afraid the firmware isn't fully baked yet. Full support is coming. Thank you for your patience.
https://forum.pycom.io/topic/2407/sigfox-library-for-lopy4
CC-MAIN-2018-39
refinedweb
504
73.37
Recently,. Meet mehdb, the Kubernetes-native key-value store The stateful app I wrote is called mehdb and you can think of it as a naive distributed key-value store, supporting permanent read and write operations. In mehdb there is one leader and one or more followers: a leader accepts both read and write operations and a follower only serves read operations. If you attempt a write operation on a follower, it will redirect the request to the leader. Both the leader and the followers are themselves stateless, the state is entirely kept and managed through the StatefulSet which is responsible for launching the pods in order and the persistent volumes, guaranteeing data being available across pod or node re-starts. Each follower periodically queries the leader for new data and syncs it. Three exemplary interactions are shown in the architecture diagram above: - Shows a WRITEoperation using the /set/$KEYendpoint which is directly handled by the leader shard. - Shows a READoperation using the /get/$KEYendpoint which is directly handled by a follower shard. - Shows a WRITEoperation issued against a follower shard using the /set/$KEYendpoint and which is redirected, using the HTTP status code 307, to the leader shard and handled there. To ensure that followers have time to sync data before they serve reads readiness probes are used: the /status?level=full endpoint returns a HTTP 200 status code and the number of keys it can serve or a 500 otherwise. StatefulSet In Action In order to try out the following, you'll need a Kubernetes 1.9 (or higher) cluster. Also, the default setup defined in app.yaml assumes that a storage class ebs is defined. Let's deploy mehdb first. The following brings up the StatefulSet including two pods (a leader and a follower), binds the persistent volumes to each pod as well as creates a headless service for it: $ kubectl create ns mehdb $ kubectl -n=mehdb apply -f app.yaml First, let's verify that StatefulSet has created the leader ( mehdb-0) and follower pod ( mehdb-1) and that the persistent volumes are in place: $ kubectl -n=mehdb get sts,po,pvc -o wide NAME DESIRED CURRENT AGE CONTAINERS IMAGES statefulsets/mehdb 2 2 28m shard quay.io/mhausenblas/mehdb:0.6 NAME READY STATUS RESTARTS AGE IP NODE po/mehdb-0 1/1 Running 0 28m 10.131.9.180 ip-172-31-59-148.ec2.internal po/mehdb-1 1/1 Running 0 25m 10.130.4.99 ip-172-31-59-74.ec2.internal NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE pvc/data-mehdb-0 Bound pvc-f464d0c3-7527-11e8-8993-123713f594ec 1Gi RWO ebs 28m pvc/data-mehdb-1 Bound pvc-6e448695-7528-11e8-8993-123713f594ec 1Gi RWO ebs 25m When I inspect the StatefulSet in the OpenShift web console, it looks like this: That looks all good so far, now let's check the service: $ kubectl -n=mehdb describe svc/mehdb Name: mehdb Namespace: mehdb Labels: app=mehdb Annotations: kubectl.kubernetes.io/last-applied-configuration={"apiVersion":"v1","kind":"Service","metadata":{"annotations":{},"labels":{"app":"mehdb"},"name":"mehdb","namespace":"mehdb"},"spec":{"clusterIP":"None... Selector: app=mehdb Type: ClusterIP IP: None Port: <unset> 9876/TCP TargetPort: 9876/TCP Endpoints: 10.130.4.99:9876,10.131.9.180:9876 Session Affinity: None Events: <none> As expected, the headless service itself has no cluster IP and created two endpoints for the pods mehdb-0 and mehdb-1 respectively. Also, the DNS configuration should be updated now to return A record entries for the pods, let's check that: $ kubectl -n=mehdb run -i -t --rm dnscheck --restart=Never --image=quay.io/mhausenblas/jump:0.2 -- nslookup mehdb nslookup: can't resolve '(null)': Name does not resolve Name: mehdb Address 1: 10.130.4.99 mehdb-1.mehdb.mehdb.svc.cluster.local Address 2: 10.131.9.180 mehdb-0.mehdb.mehdb.svc.cluster.local Great, we're all set to use mehdb now. Let's first write some data, in our case, we store test data under the key test: $ kubectl -n=mehdb run -i -t --rm jumpod --restart=Never --image=quay.io/mhausenblas/jump:0.2 -- sh If you don't see a command prompt, try pressing enter. / $ echo "test data" > /tmp/test / $ curl -sL -XPUT -T /tmp/test mehdb:9876/set/test WRITE completed/ $ Note that the -L option in above curl command makes sure that if we happen to hit the follower shard we get redirected to the leader shard and the write goes through. We should now be able to read the data from any shard, so let's try to get it directly from the follower shard: $ kubectl -n=mehdb run -i -t --rm mehdbclient --restart=Never --image=quay.io/mhausenblas/jump:0.2 -- curl mehdb-1.mehdb:9876/get/test test data OK, now that we know we can write and read data, let's see what happens when we scale the StatefulSet, creating two more followers (note that this can take several minutes until the readiness probes pass): $ kubectl -n=mehdb scale sts mehdb --replicas=4 $ kubectl -n=mehdb get sts NAME DESIRED CURRENT AGE mehdb 4 4 43m Looks like that worked out fine. But what happens if we simulate a failure, for example by deleting one of the pods, let's say mehdb-1? $ kubectl -n=mehdb get po/mehdb-1 -o=wide NAME READY STATUS RESTARTS AGE IP NODE mehdb-1 1/1 Running 0 42m 10.130.4.99 ip-172-31-59-74.ec2.internal $ kubectl -n=mehdb delete po/mehdb-1 pod "mehdb-1" deleted $ kubectl -n=mehdb get po/mehdb-1 -o=wide NAME READY STATUS RESTARTS AGE IP NODE mehdb-1 1/1 Running 0 2m 10.131.34.198 ip-172-31-50-211.ec2.internal We can see that the StatefulSet detected that mehdb-1 is gone, created a replacement for it with a new IP address (on a different node) and we can still get the data from this shard via curl mehdb-1.mehdb:9876/get/test thanks to the persistent volume. When you're done, remember to clean up. The default behavior of the StatefulSet is to remove its pods as well (if you want to keep them around, use --cascade=false): $ kubectl -n=mehdb delete sts/mehdb statefulset "mehdb" deleted When you delete the StatefulSet, it does not touch the persistent volumes nor the service, so we have to take care of that ourselves: $ for i in 0 1 2 3; do kubectl -n=mehdb delete pvc/data-mehdb-$i; done persistentvolumeclaim "data-mehdb-0" deleted persistentvolumeclaim "data-mehdb-1" deleted persistentvolumeclaim "data-mehdb-2" deleted persistentvolumeclaim "data-mehdb-3" deleted $ kubectl -n=mehdb delete svc/mehdb With this we conclude the exploration of StatefulSets in action and if you're interested to learn more about the topic, make sure to check out the following resources: - Orchestrating Stateful Apps with Kubernetes StatefulSets (2018) - Technical Dive into StatefulSets and Deployments in Kubernetes (2017) - How to Run a MongoDb Replica Set on Kubernetes PetSet or StatefulSet (2017) - Kubernetes: State and Storage (2017) - This Hacker News thread (2016) I've been using OpenShift Online for the experiment and would love to hear from others using it in a different environment (just make sure you have Kubernetes 1.9 or above). If you want to extend mehdb you need a little working knowledge of Go: the source clocks in at some 230 lines of code and it also shows how awesome the primitives Kubernetes provides are, allowing us to build custom systems on top of a well-designed API. Categories
https://www.openshift.com/blog/kubernetes-statefulset-in-action
CC-MAIN-2020-16
refinedweb
1,273
54.15
The fgets function reads a sequence of character, i. e., a character string from an input stream. Its prototype is given below. char *fgets(char *s , int n, FILE*fp); The characters from the input stream are read into a character arrays until a newline character is read, n - 1 characters are read or the end-of-file is reached. If a newline character is read, it is retained in the character array. The function then adds a terminating null character to strings and returns it. However, the function returns a NULL value if an end-of-file is encountered before any character is read or an error occurs during input. The fputs function writes a character string to an output stream. Its format is given below. int fputs (const char *s , FILE *.fp) ; All the characters in strings except the null terminator are written to stream fp. The function returns the numbers of characters written to the stream or EOF if an error occurs during output. These functions are similar to putchar () and getchar () which we have already used for standard input/output. The functions may also be used for standard input/output by changing the file stream. The header file <stdio.h> should be included in the program which uses these functions. A program to display a specified text file using the fgets and fputs functions is given below. #include <stdio.h> #include <stdlib.h> void main() { //Header fileincluded for function exit() FILE* Fptr; char ch; Fptr = fopen("Myfile","w"); clrscr(); if(Fptr ==NULL) //open myfilefor writing. // If return value is NULL exit { printf("File could not be opened."); exit (1); } else { printf("File is open. Enter the text.\n"); while ((ch= getchar()) != EOF) /*get characters from keyboard* fputc(ch, Fptr); //one by one and write it to file. fclose(Fptr); } //this code closes the file Fptr = fopen("Myfile","r"); // open file for reading if(Fptr ==NULL) // if return value is NULL, exit. { printf("File could not be opened"); exit(1); } else { printf("File is opened again for reading.\n"); while ((ch= fgetc(Fptr)) != EOF) // get characters from file printf("%c",ch); fclose(Fptr); } // till EOF and display it. } // This line closes the file. The expected output is as given below. The EOF is included by pressing Ctrl and Z. It is indicated by ^Z in the third line of output.
http://ecomputernotes.com/what-is-c/file-handling/fgets-and-fputs-functions
CC-MAIN-2020-05
refinedweb
393
68.47
- NAME - VERSION - SYNOPSIS - DESCRIPTION - VARIABLES - FUNCTIONS - QUOTING - EXAMPLES - INTERNATIONALIZATION - EXPORTS - BUGS - REQUIREMENTS - AUTHOR / COPYRIGHT NAME Time::Format - Easy-to-use date/time formatting. VERSION This is version 1.12 of Time::Format, September 27, 2012. SYNOPSIS use Time::Format qw(%time %strftime %manip); $time{$format} $time{$format, $unixtime} print "Today is $time{'yyyy/mm/dd'}\n"; print "Yesterday was $time{'yyyy/mm/dd', time-24*60*60}\n"; print "The time is $time{'hh:mm:ss'}\n"; print "Another time is $time{'H:mm am tz', $another_time}\n"; print "Timestamp: $time{'yyyymmdd.hhmmss.mmm'}\n"; %time also accepts Date::Manip strings and DateTime objects: $dm = Date::Manip::ParseDate('last monday'); print "Last monday was $time{'Month d, yyyy', $dm}"; $dt = DateTime->new (....); print "Here's another date: $time{'m/d/yy', $dt}"; It also accepts most ISO-8601 date/time strings: $t = '2005/10/31T17:11:09'; # date separator: / or - or . $t = '2005-10-31 17.11.09'; # in-between separator: T or _ or space $t = '20051031_171109'; # time separator: : or . $t = '20051031171109'; # separators may be omitted $t = '2005/10/31'; # date-only is okay $t = '17:11:09'; # time-only is okay # But not: $t = '20051031'; # date-only without separators $t = '171109'; # time-only without separators # ...because those look like epoch time numbers. %strftime works like POSIX's strftime, if you like those %-formats. $strftime{$format} $strftime{$format, $unixtime} $strftime{$format, $sec,$min,$hour, $mday,$mon,$year, $wday,$yday,$isdst} print "POSIXish: $strftime{'%A, %B %d, %Y', 0,0,0,12,11,95,2}\n"; print "POSIXish: $strftime{'%A, %B %d, %Y', 1054866251}\n"; print "POSIXish: $strftime{'%A, %B %d, %Y'}\n"; # current time %manip works like Date::Manip's UnixDate function. $manip{$format}; $manip{$format, $when}; print "Date::Manip: $manip{'%m/%d/%Y'}\n"; # current time print "Date::Manip: $manip{'%m/%d/%Y','last Tuesday'}\n"; These can also be used as standalone functions: use Time::Format qw(time_format time_strftime time_manip); print "Today is ", time_format('yyyy/mm/dd', $some_time), "\n"; print "POSIXish: ", time_strftime('%A %B %d, %Y',$some_time), "\n"; print "Date::Manip: ", time_manip('%m/%d/%Y',$some_time), "\n"; DESCRIPTION This module creates global pseudovariables which format dates and times, according to formatting codes you pass to them in strings. The %time formatting codes are designed to be easy to remember and use, and to take up just as many characters as the output time value whenever possible. For example, the four-digit year code is " yyyy", the three-letter month abbreviation is " Mon". The nice thing about having a variable-like interface instead of function calls is that the values can be used inside of strings (as well as outside of strings in ordinary expressions). Dates are frequently used within strings (log messages, output, data records, etc.), so having the ability to interpolate them directly is handy. Perl allows arbitrary expressions within curly braces of a hash, even when that hash is being interpolated into a string. This allows you to do computations on the fly while formatting times and inserting them into strings. See the "yesterday" example above. The format strings are designed with programmers in mind. What do you need most frequently? 4-digit year, month, day, 24-based hour, minute, second -- usually with leading zeroes. These six are the easiest formats to use and remember in Time::Format: yyyy, mm, dd, hh, mm, ss. Variants on these formats follow a simple and consistent formula. This module is for everyone who is weary of trying to remember strftime(3)'s arcane codes, or of endlessly writing $t[4]++; $t[5]+=1900 as you manually format times or dates. Note that mm (and related codes) are used both for months and minutes. This is a feature. %time resolves the ambiguity by examining other nearby formatting codes. If it's in the context of a year or a day, "month" is assumed. If in the context of an hour or a second, "minute" is assumed. The format strings are not meant to encompass every date/time need ever conceived. But how often do you need the day of the year (strftime's %j) or the week number (strftime's %W)? For capabilities that %time does not provide, %strftime provides an interface to POSIX's strftime, and %manip provides an interface to the Date::Manip module's UnixDate function. If the companion module Time::Format_XS is also installed, Time::Format will detect and use it. This will result in a significant speed increase for %time and time_format. VARIABLES - time $time{$format} $time{$format,$time_value}; Formats a unix time number (seconds since the epoch), DateTime object, stringified DateTime, Date::Manip string, or ISO-8601 string, according to the specified format. If the time expression is omitted, the current time is used. The format string may contain any of the following: yyyy 4-digit year yy 2-digit year m 1- or 2-digit month, 1-12 mm 2-digit month, 01-12 ?m month with leading space if < 10 Month full month name, mixed-case MONTH full month name, uppercase month full month name, lowercase Mon 3-letter month abbreviation, mixed-case MON mon ditto, uppercase and lowercase versions d day number, 1-31 dd day number, 01-31 ?d day with leading space if < 10 th day suffix (st, nd, rd, or th) TH uppercase suffix Weekday weekday name, mixed-case WEEKDAY weekday name, uppercase weekday weekday name, lowercase Day 3-letter weekday name, mixed-case DAY day ditto, uppercase and lowercase versions h hour, 0-23 hh hour, 00-23 ?h hour, 0-23 with leading space if < 10 H hour, 1-12 HH hour, 01-12 ?H hour, 1-12 with leading space if < 10 m minute, 0-59 mm minute, 00-59 ?m minute, 0-59 with leading space if < 10 s second, 0-59 ss second, 00-59 ?s second, 0-59 with leading space if < 10 mmm millisecond, 000-999 uuuuuu microsecond, 000000-999999 am a.m. The string "am" or "pm" (second form with periods) pm p.m. same as "am" or "a.m." AM A.M. same as "am" or "a.m." but uppercase PM P.M. same as "AM" or "A.M." tz time zone abbreviation Millisecond and microsecond require Time::HiRes, otherwise they'll always be zero. Timezone requires POSIX, otherwise it'll be the empty string. The second codes ( s, ss, ?s) can be 60 or 61 in rare circumstances (leap seconds, if your system supports such). Anything in the format string other than the above patterns is left intact. Any character preceded by a backslash is left alone and not used for any part of a format code. See the "QUOTING" section for more details. For the most part, each of the above formatting codes takes up as much space as the output string it generates. The exceptions are the codes whose output is variable length: Weekday, Month, time zone, and the single-character codes. The mixed-case "Month", "Mon", "Weekday", and "Day" codes return the name of the month or weekday in the preferred case representation for the locale currently in effect. Thus in an English-speaking locale, the seventh month would be "July" (uppercase first letter, lowercase rest); while in a French-speaking locale, it would be "juillet" (all lowercase). See the "QUOTING" section for ways to control the case of month/weekday names. Note that the " mm", " m", and " ?m" formats are ambiguous. %timetries to guess whether you meant "month" or "minute" based on nearby characters in the format string. Thus, a format of " yyyy/mm/dd hh:mm:ss" is correctly parsed as "year month day, hour minute second". If %timecannot determine whether you meant "month" or "minute", it leaves the mm, m, or ?muntranslated. To remove the ambiguity, you can use the following codes: m{on} month, 1-12 mm{on} month, 01-12 ?m{on} month, 1-12 with leading space if < 10 m{in} minute, 0-59 mm{in} minute, 00-59 ?m{in} minute, 0-59 with leading space if < 10 In other words, append " {on}" or " {in}" to make " m", " mm", or " ?m" unambiguous. - strftime $strftime{$format, $sec,$min,$hour, $mday,$mon,$year, $wday,$yday,$isdst} $strftime{$format, $unixtime} $strftime{$format} For those who prefer strftime's weird % formats, or who need POSIX compliance, or who need week numbers or other features %timedoes not provide. - manip $manip{$format}; $manip{$format,$when}; Provides an interface to the Date::Manip module's UnixDatefunction. This function is rather slow, but can parse a very wide variety of date input. See the Date::Manip module for details about the inputs accepted. If you want to use the %timecodes, but need the input flexibility of %manip, you can use Date::Manip's ParseDatefunction: print "$time{'yyyymmdd', ParseDate('last sunday')}"; FUNCTIONS - time_format time_format($format); time_format($format, $unix_time); This is a function interface to %time. It accepts the same formatting codes and everything. This is provided for people who want their function calls to look like function calls, not hashes. :-) The following two are equivalent: $x = $time{'yyyy/mm/dd'}; $x = time_format('yyyy/mm/dd'); - time_strftime time_strftime($format, $sec,$min,$hour, $mday,$mon,$year, $wday,$yday,$isdst); time_strftime($format, $unixtime); time_strftime($format); This is a function interface to %strftime. It simply calls POSIX:: strftime, but it does provide a bit of an advantage over calling strftimedirectly, in that you can pass the time as a unix time (seconds since the epoch), or omit it in order to get the current time. - time_manip manip($format); manip($format,$when); This is a function interface to %manip. It calls Date::Manip:: UnixDateunder the hood. It does not provide much of an advantage over calling UnixDatedirectly, except that you can omit the $whenparameter in order to get the current time. QUOTING This section applies to the format strings used by %time and time_format only. Sometimes it is necessary to suppress expansion of some format characters in a format string. For example: $time{'Hour: hh; Minute: mm{in}; Second: ss'}; In the above expression, the "H" in "Hour" would be expanded, as would the "d" in "Second". The result would be something like: 8our: 08; Minute: 10; Secon17: 30 It would not be a good solution to break the above statement out into three calls to %time: "Hour: $time{hh}; Minute: $time{'mm{in}'}; Second: $time{ss}" because the time could change from one call to the next, which would be a problem when the numbers roll over (for example, a split second after 7:59:59). For this reason, you can escape individual format codes with a backslash: $time{'\Hour: hh; Minute: mm{in}; Secon\d: ss'}; Note that with double-quoted (and qq//) strings, the backslash must be doubled, because Perl first interpolates the string: $time{"\\Hour: hh; Minute: mm{in}; Secon\\d: ss"}; For added convenience, Time::Format simulates Perl's built-in \Q and \E inline quoting operators. Anything in a string between a \Q and \E will not be interpolated as any part of any formatting code: $time{'\QHour:\E hh; \QMinute:\E mm{in}; \QSecond:\E ss'}; Again, within interpolated strings, the backslash must be doubled, or else Perl will interpret and remove the \Q...\E sequence before Time::Format gets it: $time{"\\QHour:\\E hh; \\QMinute:\\E mm{in}; \\QSecond\\E: ss"}; Time::Format also recognizes and simulates the \U, \L, \u, and \l sequences. This is really only useful for finer control of the Month, Mon, Weekday, and Day formats. For example, in some locales, the month names are all-lowercase by convention. At the start of a sentence, you may want to ensure that the first character is uppercase: $time{'\uMonth \Qis the finest month of all.'}; Again, be sure to use \Q, and be sure to double the backslashes in interpolated strings, otherwise you'll get something ugly like: July i37 ste fine37t july of all. EXAMPLES $time{'Weekday Month d, yyyy'} Thursday June 5, 2003 $time{'Day Mon d, yyyy'} Thu Jun 5, 2003 $time{'dd/mm/yyyy'} 05/06/2003 $time{yymmdd} 030605 $time{'yymmdd',time-86400} 030604 $time{'dth of Month'} 5th of June $time{'H:mm:ss am'} 1:02:14 pm $time{'hh:mm:ss.uuuuuu'} 13:02:14.171447 $time{'yyyy/mm{on}/dd hh:mm{in}:ss.mmm'} 2003/06/05 13:02:14.171 $time{'yyyy/mm/dd hh:mm:ss.mmm'} 2003/06/05 13:02:14.171 $time{"It's H:mm."} It'14 1:02. # OOPS! $time{"It'\\s H:mm."} It's 1:02. # Backslash fixes it. . . # Rename a file based on today's date: rename $file, "$file_$time{yyyymmdd}"; # Rename a file based on its last-modify date: rename $file, "$file_$time{'yyyymmdd',(stat $file)[9]}"; # stftime examples $strftime{'%A %B %d, %Y'} Thursday June 05, 2003 $strftime{'%A %B %d, %Y',time+86400} Friday June 06, 2003 # manip examples $manip{'%m/%d/%Y'} 06/05/2003 $manip{'%m/%d/%Y','yesterday'} 06/04/2003 $manip{'%m/%d/%Y','first monday in November 2000'} 11/06/2000 INTERNATIONALIZATION If the I18N::Langinfo module is available, Time::Format will return weekday and month names in the language appropriate for the current locale. If not, English names will be used. Programmers in non-English locales may want to provide an alias to %time in their own preferred language. This can be done by assigning \%time to a typeglob: # French use Time::Format; use vars '%temps'; *temps = \%time; print "C'est aujourd'hui le $temps{'d Month'}\n"; # German use Time::Format; use vars '%zeit'; *zeit = \%time; print "Heutiger Tag ist $zeit{'d.m.yyyy'}\n"; EXPORTS The following symbols are exported into your namespace by default: %time time_format The following symbols are available for import into your namespace: %strftime %manip time_strftime time_manip The :all tag will import all of these into your namespace. Example: use Time::Format ':all'; BUGS The format string used by %time must not have $; as a substring anywhere. $; (by default, ASCII character 28, or 1C hex) is used to separate values passed to the tied hash, and thus Time::Format will interpret your format string to be two or more arguments if it contains $;. The time_format function does not have this limitation. REQUIREMENTS Time::Local I18N::Langinfo, if you want non-English locales to work. POSIX, if you choose to use %strftime or want the C<tz> format to work. Time::HiRes, if you want the C<mmm> and C<uuuuuu> time formats to work. Date::Manip, if you choose to use %manip. Time::Format_XS is optional but will make C<%time> and C<time_format> much faster. The version of Time::Format_XS installed must match the version of Time::Format installed; otherwise Time::Format will not use it (and will issue a warning). AUTHOR / COPYRIGHT Copyright © 2003-2012.
https://metacpan.org/pod/Time::Format
CC-MAIN-2015-22
refinedweb
2,472
61.67
"Theodore Y. Ts'o" <tytso@MIT.EDU> writes:> True, we have to implement it. But we don't have to spend a lot of time> getting it super fast, or particularily elegant. I'd concetrate on> those parts of the POSIX.1b spec that are likely to be actually used by> real applications.....But I think that the named semaphores can be implemented on top ofthe unnamed semaphores. The kernel should keep a list of the names(the pseudo-filesystem) and return handle for a semaphore from thecommon pool.When I go back to my initial proposal: sem_init() returns simply the next free semaphore in the bufferwhile sem_open() looks whether there is such a semaphore. If not and O_CREAT is given is returns the next free semaphore, i.e., it implicitly calls sem_init()The functions which use semaphores behave the same.(Please note that I don't talk anymore about a second pool for local semaphores. Xavier pointed out that this is really stupid since it would be slower than using a user-level only implementation.)One point still not clear is whether it's worth to create apseudo filesystem for the named semaphores. I would vote for itand the standard author encourage this use. It would allow easymaintenance of the semaphore namespace. Keeping all names in aplane namespace (i.e., treating / as a normal character) wouldbe simpler but once the semaphores are implemented I would thinkhundreds of semaphores used at the same time is easily possible.How do you like a directory with hundreds of entries? The POSIXpeople gave us this mean of structurization so me should use it.-- Uli--------------. drepper@cygnus.com ,-. Rubensstrasse 5Ulrich Drepper \ ,--------------------' \ 76149 Karlsruhe/GermanyCygnus Support `--' drepper@gnu.ai.mit.edu `------------------------
http://lkml.org/lkml/1996/11/20/100
CC-MAIN-2013-48
refinedweb
286
66.03
3 posts in this topic You need to be a member in order to leave a comment Sign up for a new account in our community. It's easy! Register a new account Already have an account? Sign in here. Similar Content - By Simpel Hi. I'm trying to write a xml. Here is my code: #include <_XMLDomWrapper.au3> #include <Date.au3> Global $g_sXMLFileName Global $g_sDestPath = @DesktopDir & "\" Global $g_sReturnedBID = "A10829" _makeXML() _AddXML(1, "A10829_Thomas/wav/T001.wav") _AddXML(2, "A10829_Thomas/wav/T002.wav") Exit Func _makeXML() Local $sXMLtime = StringReplace(StringReplace(StringReplace(_NowCalc()," ","_"),":","-"),"/","-") ; in yyyy-mm-dd_hh-mm-ss $g_sXMLFileName = $g_sDestPath & $g_sReturnedBID & "_" & "EB-Ton-Upload" & "_" & $sXMLtime & ".xml" _XMLCreateFile($g_sXMLFileName, "gemagvl", 1,1) _XMLFileOpen($g_sXMLFileName) EndFunc Func _AddXML($iCount, $sDateiname) _XMLCreateRootNodeWAttr("row", "count", $iCount, "") _XMLCreateChildNode("//row", "picklistenname", $g_sReturnedBID & "_EB-Ton-Upload") _XMLCreateChildNode("//row", "picklisteninfo") _XMLCreateChildNode("//row", "bid", $g_sReturnedBID) _XMLCreateChildNode("//row", "audiodateiname", $sDateiname) _XMLCreateChildNode("//row", "titel", StringTrimRight(StringTrimLeft($sDateiname, 7), 4)) _XMLCreateChildNode("//row", "interpret", "EB") _XMLCreateChildNode("//row", "quelle", "Ton") EndFunc It returns: <> But it should return: <> The second inserted nodes are double. How will it be going right? Regards, Conrad - Guys, Since I'm able to get a Dell equipment warranty status thanks to my API key, I'm using an UDF to extract data from an XML file and get the end date. > Thing is, when using InetGet, the original file is in JSON format and the UDF is not working anymore, even if I download the file with the xml extension. Therefore, and when I manually download the page with Chrome, I have a proper XML file where the UDF is working fine. Here's my code: I even tried to convert the json to xml > I took a look here but I don't understand anything :/ The XML read UDF is just perfect for my needs but I'm stuck here... Thanks for any help you can provide -31290- 3MTXM12.json 3MTXM12.xml -.
https://www.autoitscript.com/forum/topic/176350-struggling-with-xml-parsing/
CC-MAIN-2018-09
refinedweb
316
55.24
Py Gtk Tutorial I recently understood that Gnome [1] , and also Xfce Desktop Environment [2] create GUI's (see Graphical User Interface) using a library called GTK: GIMP Tool Kit. This provides a stable and well balanced kit for creating almost any GUI in many desktop environments (mostly for *nix, but also Windows if you do some tricks). I have also known for quite some time that there are a few ways of creating a GUI using our favourite programming language Python. But I have unfortunately had little or no success and/or patience using TKInter (the first one I tried). But Yesterday I took the courage to try to investigate pygtk - an implementation of gtk for python. More on pygtk is found here [3] and a tutorial in different formats is found here: [4] There are a number of ways for building GUI's with gtk. There are also a number of standard dialogs already implemented, like the open/save file dialog. Example One: Included Dialogs This little example will, if it run from a terminal use a standard Open As-GUI and print the contents of the file selected on the terminal. (The example is based on the one included in the pygtk manual) Here is a screen shot of if: # needed imports import pygtk import gtk pygtk.require('2.0') # create a new dialog dialog = gtk.FileChooserDialog("Open..", None, gtk.FILE_CHOOSER_ACTION_OPEN, (gtk.STOCK_CANCEL, gtk.RESPONSE_CANCEL, gtk.STOCK_OPEN, gtk.RESPONSE_OK)) if dialog.run() == gtk.RESPONSE_OK: filename = dialog.get_filename() print "%s\n%s" % (filename, '-' * len(filename)) f = open(filename, 'r') print f.read() f.close() dialog.destroy() dump_file1.py And by adding a filter (see below) between the dialog creation and the dialog.run this is what we get instead: # if we want to apply filters filter1 = gtk.FileFilter() filter1.set_name("any file") filter1.add_pattern("*") dialog.add_filter(filter1) filter2 = gtk.FileFilter() filter2.set_name("python files") filter2.add_pattern("*.py") dialog.add_filter(filter2) Example Two: Building GUI's from scratch As you might have already figured out it is possible to create GUI's with at least some tools (see next section). But it is always nice to do it from scratch at least a few times to get the hang of it. Gtk uses callbacks (function-pointers that for example write a file to disk) and events (an event is triggered if you for example press a button) to make things happen (for example when clicking a button the file you work on is saved - this is real magic). The user must also connect the event with suitable callbacks. This might sound complicated but it is quite nice if for example more than one event triggers the same behavior (f.x. pressing Ctrl+S or clicking the save button BOTH results in a file being saved). The following example results in a simple window with a button: #!/usr/bin/env python import pygtk pygtk.require('2.0') import gtk class HelloWorld: # this is a function later used as a callback def hello(self, widget, data=None): print "Hello World" def destroy(self, widget, data=None): gtk.main_quit() def __init__(self): self.window = gtk.Window(gtk.WINDOW_TOPLEVEL) self.window.connect("destroy", self.destroy) self.window.set_border_width(10) self.button = gtk.Button("Hello World") # this next line connects the button, the click event and the function self.button.connect("clicked", self.hello, None) # here we add the button to the window self.window.add(self.button) # we must also show both the button and the window self.button.show() self.window.show() def main(self): gtk.main() if __name__ == "__main__": hello = HelloWorld() hello.main() Building more complicated GUI's require a little more planning and thinking - but it is far from impossible. One important thing to know is that one needs to add buttons and other things in horizontal and vertical boxes - than in turn can contain other horizontal and vertical boxes. A very complex GUI should most likely be built with an IDE - such as the one we will see in the next section. This is particularly important if a developer later wants to be able to add, move or remove some bells and whistles. Example Three: Building interfaces with Glade Interface Designer Glade An (or perhaps the) excellent tool for building Interfaces is Glade. It is of course free (as in freedom of speech). Glade comes with a few versions, and on my Xubuntu Linux installation I got Glade 2 as the default version (when just selected "Glade" in Synaptic package manager). Glade 2 is perhaps an excellent tool, but it is not suitable for python - it is more adapted towards creating Interfaces in good old fashioned C. A little searching in Synaptic revealed Glade 3. This versions is more modular and created interfaces for most programming languages that can work with GTK by using an intermediate layer of XML (the files a saved as "*.glade" - I like that having a file suffix that should surely be unique :D ). Using Glade This little screen shot shows a typical Glade session. - In the left column there are a number of Widgets you can add to your interface - for example buttons and tabbed views and so on. - The middle column shows the Interface you are currently working on. In this little example I have a silly editor with two buttons and a text-field. - The right column shows the Inspector - from wich you can select a particular button (or widget in general terms), and the properties of a particular item. Also note that the screen shot shows the layout of my interface (in the Inspector). First in my Main Window I have a vertical box. The vertical box has two slots. The second of these slots contain the text view (called textor). This first slot in the vertical box is occupied with a horizontal box. Do not let this confuse you - this is just the gtk-way to let you add a number of buttons. The horizontal box in this little example contains two buttons: an open-button and a save-button. Handlers As you can see in the inspector the save button has the signal "clicked" associated to the handler "init_save". When the interface is under planning I guess it is nice to keep a list and plan all these handlers - otherwise they will surely be forgotten and using your application will be strange. The handlers (f.x. init_save) must python later must be connected with a callback. The syntax is something like this: self.window.connect("do_stuff", self.function_that_does_the_stuff). Another tutorial I read used a dictionary to autoconnect many things at once: dic = { "init_save" : self.saveit, "init_load" : self.loadit} self.wTree.signal_autoconnect(dic) Connecting an interface and a python script To "connect" a glade project and a python script you can use a syntax similar to this one: #!/usr/bin/python import sys import pygtk pygtk.require("2.0") import gtk import gtk.glade class RandomTestGTK: def __init__(self): self.gladefile = "silly_edit.glade" self.wTree = gtk.glade.XML(self.gladefile, "MainWindow") As you can see in the above code snippet there a lot of important packages here that give you an idea of the underlying structure: pygtk, gtk, and gtk.glade. Handle the widgets In your program there may be items the user should be able to modify - such as the text in your editor. Also you might want to be able to change the title of your application. To obtain this it seems the following way is nice: self.mother = self.wTree.get_widget("MainWindow") self.textor = self.wTree.get_widget("textor").get_buffer() Here I use the get_widget function of self.wTree and use the name of the widget I want to handle. I just store them in a random handle in self. Please note that I use two items called textor - one is a widget that contains a buffer. The otherone is a handle to the buffer - don't let this confuse you I am just lazy and used the same name twice. Implementing the callbacks As you have seen above I have connected the click signals of the save button in my interface to the init_save-handler. I also connected the init_save-handler using a callback to the saveit-function. So now I need to implement these functions. Here is a stripped version of my loadit-function (It is silly but at least it does something). As you can se it opens a file and places its contents in self.textor. def loadit(self, widget): f = file('/home/per/.bashrc') self.textor.set_text(f.read()) f.close() Result A screen shot of the resulting application: Source Here is the glade-project [6] and here is the python project [7] . Beware: if you do not alter the source code it will not work under windows, also it will most likely not find the any file when you press load, but it will save a file in /tmp/my_old_bachrc.txt when you press save. Conclusions I really like pygtk - hopefully I'll learn a little more and give a tutorial on how to use multiple windows in the same project sometime soon. This page belongs in Kategori Programmering
http://pererikstrandberg.se/blog/index.cgi?page=PyGtkTutorial
CC-MAIN-2017-39
refinedweb
1,527
63.8
/09/13 18:57, William S Fulton wrote: > On 06/09/13 11:47, Simon Marchetto wrote: >> Le 05/09/2013 09:55, William S Fulton a écrit : >> Not overriding the test-suite swig & compile command >> lines (swig_and_compile_cpp,...) ? >> scilab & scilab_cpp targets are used only to compile the code. OK, we >> execute Scilab for this, because it has a simple command that does the >> job. We could do it by our own, it maybe be better, I don't know at this >> moment, but it would take time to change all this. >> >> I've introduced the get_swig_scilab_args to replace that following code: >> scilab: $(SRCS) >> if test ! -z "$(SRCS)"; then \ >> if test ! -z "$(INCLUDES)"; then \ >> $(SWIG) -scilab $(SWIGOPT) $(SCILABOPT) -addsrc $(SRCS) >> -addcflag $(INCLUDES) $(INTERFACEPATH); \ >> else \ >> $(SWIG) -scilab $(SWIGOPT) $(SCILABOPT) -addsrc $(SRCS) >> $(INTERFACEPATH); \ >> fi \ >> else \ >> if test ! -z "$(INCLUDES)"; then \ >> $(SWIG) -scilab $(SWIGOPT) $(SCILABOPT) -addcflag >> $(INCLUDES) $(INTERFACEPATH); \ >> else \ >> $(SWIG) -scilab $(SWIGOPT) $(SCILABOPT) $(INTERFACEPATH); \ >> fi \ >> fi >> >> This code is hard to read, and more hard to maintain, and we have it >> twice, in the two targets. >> Moreover, I had to add another option 'outdir' (to build in separate >> directory, I couldn't make swig work without it, having the message >> "Unable to open ...wrap.cxx", maybe you have an idea on this). >> So i would have a tree of if, with 8 possible combinations. And we may >> add other options in the future. >> OK, the $(eval, ...) is not a simple syntax, but globally the result >> looked much more readable & maintainable to me. >> The macro has a name so we know what it does. And to add an option, it >> is just simple as this: >> ifdef OUTDIR >> SWIG_SCILAB_ARGS += -outdir "$OUTDIR" >> #endif >> (by the way, ifdef ... looks simpler to me than theseif test ! -z >> "$... ) >> >> But I am not an expert in shell. And consistency is important to me. So >> what can we do ? >> > It all seems a lot more complicated than elsewhere. Model it on Java > which uses -outdir. And if you need to use -addsrc, add it to the > SWIGOPT, eg like this in test-suite/java/Makefile.in: > > SWIGOPT += $(JAVA_PACKAGEOPT) > These makefile changes still need doing. Here is a list of the remaining issues I can see. This concludes my review for the moment. I just want to look at the typemaps in more detail next, then probably we can think about merging into master once the issues mentioned including the ones below are addressed. - The banner is missing in builder.sce. You'll need to code something up like JAVA::emitBanner(). - typedef int SciObject; => All C/C++ symbols introduced by SWIG in the wrapper code should have Swig or SWIG as a prefix. Sci as a prefix ought to be reserved for symbols in the Scilab headers. There are a few of these, such as SciSequence_InputIterator should be SwigSciSequence_InputIterator, SciSwigIterator_T should be SwigSciIterator_T. Use 'grep -i sciswig' to find them or look at the symbols in a compiled shared object. Also there are other global variables such as fname and outputPosition which should be prefixed. - This include: #include "stack-c.h" It would be better to keep all the scilab #includes together, so I would move this further up the file. - extern "C" should be within the __cplusplus macro: #ifdef __cplusplus extern "C" { #endif - Should fname in all the wrappers, such as: int _wrap_Square_perimeter(char *fname, unsigned long fname_len) { not be const char *, or is that the correct convention? - Does Scilab support multiple threads? The use of globals such as fname and outputPosition is not thread safe. The goal of SWIG is to not remove thread safety if it already exists in the library being wrapped. - /* functionReturnTypemap */ comment in generated code should be removed. - Test-suite... constructor_copy.i => I've restored some lost code in this testcase (additional constructors) and committed it. If it breaks anything in scilab, the language module itself will need fixing. ignore_template_constructor.i => Scilab should work with this test. Let's see what breaks. Examples/test-suite/scilab/aggregate_runme.sci => we can change the name of 'move' to something else in the .i file (let's do when the merge to master is complete). - What the situation about identifiers being too long, what are the restrictions? Can you document details in Scilab.html? - There is some usage of mprintf for errors in the test-suite (swigtest.start). Errors should print to stderr - can this be done? William View entire thread
https://sourceforge.net/p/swig/mailman/message/31415609/
CC-MAIN-2018-22
refinedweb
731
74.9
Ok, I'm gonna explain this as best I can. I am trying to make an if/else statement in python that basically informs the user if they put in the right age based on raw input, but I keep getting an error in terms of my if statement. Can I get some help? from datetime import datetime now = datetime.now() print '%s/%s/%s %s:%s:%s' % (now.month, now.day, now.year, now.hour,now.minute, now.second) print "Welcome to the beginning of a Awesome Program created by yours truly." print " It's time for me to get to know a little bit about you" name = raw_input("What is your name?") age = raw_input("How old are you?") if age == 1 and <= 100 print "Ah, so your name is %s, and you're %s years old. " % (name, age) else: print " Yeah right!"
http://www.howtobuildsoftware.com/index.php/how-do/bFg/python-if-statement-if-else-problems-in-python
CC-MAIN-2018-51
refinedweb
144
92.53
Duplicate Objects in Java: Not Just Strings Duplicate Objects in Java: Not Just Strings Did you ever consider that instances of other classes can sometimes be duplicate and waste a considerable amount of memory? Join the DZone community and get the full member experience.Join For Free When a Java application consumes a lot of memory, it can be a problem on its own and can lead to increased GC pressure and long GC pauses. In one of my previous articles, I discussed one common source of memory waste in Java: duplicate strings. Two java.lang.String objects a and b are duplicates when a != b && a.equals(b). In other words, there are two (or more) separate strings with the same contents in the JVM memory. This problem occurs very frequently, especially in business applications. In such apps, strings represent a lot of real-world data, and yet, the respective data domains (e.g. customer names, country names, product names) are finite and often small. From my experience, in an unoptimized Java application, duplicate strings typically waste between 5 and 30 percent of the heap. However, did you ever think that instances of other classes, including arrays, can sometimes be duplicate as well, and waste a considerable amount of memory? If not, read on. Object Duplication Scenarios Object duplication in memory occurs whenever the number of distinct objects of a certain type is limited but the app keeps creating such objects without trying to cache/reuse the existing ones. Here are just a few concrete examples of object duplication that I've seen: In Hadoop File System (HDFS) NameServer, byte[]arrays rather than Strings are used to store file names. When there are files with the same name in different directories, the respective byte[]arrays are duplicates. See this ticket for more details. In some monitoring systems, periodic "events" or "updates" received from the monitored entities (machines, applications, components, etc.) are represented as small objects with two main fields: timestamp and value. When many updates arrive at the same time and all have the same value (for example, 0 or 1 to signal that the monitored entity's health is good), many duplicate objects get created. The Hive data warehouse once had the following problem. When multiple concurrent queries were executed against the same DB table with a large number of partitions, a separate per-query copy of metadata for each partition was loaded into memory. Partition metadata is represented as a java.util.Propertiesinstance. Thus, running 50 concurrent queries against 2000 partitions used to create 50 identical copies of Propertiesfor each of these partitions, or 100,000 such collections in total, that consumed a lot of memory. See this ticket for more details. These are just a few examples. Other, less-obvious ones include multiple identical byte buffers storing identical messages, multiple (usually small) object sets with same contents representing certain frequently occurring data combinations, and so on. Getting Rid of Duplicate Objects As mentioned above, strings are one category of objects that are especially prone to duplication. This problem has been realized by the JDK developers very long ago and addressed with the String.intern() method. The article mentioned above discusses it in detail. In brief, this method uses a global string cache (pool) with effectively weak references. It saves and returns the given string instance if it's not in the cache yet or returns the cached string instance with the same value. Performance and scalability of String.intern() , that once was not great, has substantially improved starting from JDK 7. Thus, when not overused, it is likely a good solution for many applications. However, as discussed in this article, in highly concurrent or performance-critical apps, it may become a bottleneck, and a different, "hand-rolled" interning method may be needed. Let's consider other object interning implementations. Keep in mind that the following discussion applies only to immutable objects, that is, those that don't change after creation. If object contents can change, eliminating duplicates becomes much harder and requires custom, case-by-case solutions. The most widely used off-the-shelf interning functionality is provided by the Guava library via the com.google.commmon.collect.Interners class. This class has two key methods that return an interner instance: newStrongInterner() and newWeakInterner(). The weak interner, which eventually releases objects that are not needed anymore (not referenced strongly from anywhere), generally uses less memory and is, therefore, used more frequently. It is implemented as a concurrent hash set with weak keys that resemble the standard JDK ConcurrentHashMap. In many situations, it's a good choice that helps reduce the memory footprint substantially with a small CPU performance overhead, which is often offset by the reduced GC time. However, consider the following situation: There are 20 million instances of some class C, each taking 32 bytes 10 million of these instances are all identical to each other, and another 10 million are all distinct. In practice, such a sharp division almost never occurs, but still, a situation when about a half of objects represent only a handful of unique values, and in another half most objects have very few or no duplicates, is very common. The simplified division just makes our calculations easier. In this scenario, when we don't intern any C instances, they use 32 * 20M = 640M bytes. But what happens if we invoke the Guava intern() for each of them? The first 10 million objects will be successfully reduced to just one instance of C, taking a negligible amount of memory. However, for each of the remaining 10 million objects, there will be no savings, since each of them is unique. Still, the Guava weak interner will maintain an internal table with 10 million entries to accommodate each of these objects. This table will use one instance of com.google.common.collect.MapMakerInternalMap$WeakKeyDummyValueEntry class per each interned object, plus one slot in the internal array referencing each entry. The size of a WeakKeyDummyValueEntry is 40 bytes, so the interner will need 40+4 = 44 bytes per interned object. 44 * 10M = 440M. Add to that 32 * 10M = 320M that the unique instances of C still take, and the total memory footprint now is 760M bytes! In other words, we use more memory than before, not less. This scenario may be a bit extreme, but in practice, the general rule holds: if in a given group of objects the percentage of unique objects is high, then memory savings achieved by a traditional interner, that stores a reference to each and every object given to it, may be small, if not negative. Can we do better? Fixed-Size Array, Lock Free (FALF) Interner It turns out that if we don't need to have a single copy of each unique object, but just want to save memory, and possibly still have a few duplicate objects — in other words, if we agree to deduplicate objects "opportunistically" — there is a solution that's surprisingly simple and efficient. I haven't seen it in the literature, so I named it "Fixed-Size Array, Lock Free (FALF) Interner." This interner is implemented as a small, fixed-size, open-hashmap-based object cache. When there is a cache miss, a cached object in the given slot is always replaced with a new object. There is no locking and no synchronization, and thus, no associated overhead. Basically, this cache is based on the idea that an object with value X that has many copies has a higher chance of staying in the cache for long enough to guarantee several cache hits for itself before a miss evicts X and replaces it with an object with a different value Y. Here is one possible implementation of this interner: /** Fixed size array, lock free object interner */ static class FALFInterner<T> { static final int MAXIMUM_CAPACITY = 1 << 30; private Object[] cache; FALFInterner(int expectedCapacity) { cache = new Object[tableSizeFor(expectedCapacity)]; } T intern(T obj) { int slot = hash(obj) & (cache.length - 1); T cachedObj = (T) cache[slot]; if (cachedObj != null && cachedObj.equals(obj)) return cachedObj; else { cache[slot] = obj; return obj; } } /** Copied from java.util.HashMap */ static int hash(Object key) { int h; return (key == null) ? 0 : (h = key.hashCode()) ^ (h >>> 16); } /** * Returns a power of two size for the given target capacity. * Copied from java.util.HashMap. */ static int tableSizeFor(int cap) { int n = cap - 1; n |= n >>> 1; n |= n >>> 2; n |= n >>> 4; n |= n >>> 8; n |= n >>> 16; return (n < 0) ? 1 : (n >= MAXIMUM_CAPACITY) ? MAXIMUM_CAPACITY : n + 1; } } To compare the performance of the FALF interner with the Guava weak interner, I wrote a simple multithreaded benchmark. The code generates and interns strings that are derived from random numbers obeying a Gaussian distribution. That is, strings with some values are generated much more frequently than others, and will, therefore, result in more duplicates. In this benchmark, the FALF interner runs faster than the Guava interner by about 17 percent. However, that's not all. When I measured the memory footprint with the jmap -histo:live command after the benchmark finished running, but before it exited, it turned out that with the FALF interner, the used heap size is nearly 30 times smaller than with the Guava interner! That's the difference in memory footprint of the fixed-size small cache versus a weak hashmap with an entry for each unique object ever seen. To be fair, the FALF interner would typically require more tuning than the traditional, one-size-fits-all interners. For one thing, since the cache size is fixed, you need to choose it carefully. On the one hand, to minimize misses, this size should be big enough — ideally equal to the number of unique objects of the type that you intern. On the other hand, our goal is to minimize used memory, so in practice, you will likely choose a (much) smaller size that's roughly equal to the number of objects with a high number of duplicates. Another important consideration is a choice of hash function for interned objects. In a small, fixed-size cache, it is very important to distribute objects across slots as uniformly as possible to avoid the situation when many slots are used infrequently, whereas there is a small group in which each slot is contended by several object values with many copies. This contention would result in many duplicates missed by the cache, and thus, a bigger memory footprint. When this happens for instances of a class with a very simple hashCode() method (e.g. with code similar to the one in java.lang.String class), it may signal that this hash function implementation is inadequate. A more advanced hash function, like one of those provided by the com.google.common.hash.Hashing class, can dramatically improve the efficiency of the FALF interner. Detection of Duplicate Objects So far, we haven's discussed how a developer can determine which objects in their app have many duplicates and thus need to be interned. For big applications, this can be non-trivial. Even if you can guess which objects are likely duplicate, it can be really hard to estimate their exact memory impact. From experience, the best way to solve this JXRay - the tool built specifically for heap dump analysis. Unlike most other tools, JXRay analyzes a heap dump right away for a large number of common problems, including duplicate strings and other objects. Currently, objects comparison is shallow. That is, two objects (for example ArrayLists) are considered duplicate only when they reference exactly the same group of objects x0, x1, x2, ... in the same order. To put it differently, for two objects a and b to be considered equal, all pointers to other objects in a and b should be equal, bit by bit. JXRay runs through the given heap dump once and_0<< So, in this dump, 24.7 percent of the used heap is wasted by duplicate non-array, non-collection objects! The table above lists all classes whose instances contribute most significantly to this overhead. To see where these objects come from (what objects reference them, all the way up to GC roots), scroll down to the "Expensive data fields" or "Full reference chain" subsection of the report, expand it, and click on the relevant line in the table. Here is an example for one of the above classes, TopicPartition: From here, we can get a good idea of what data structures manage problematic objects. To summarize, duplicate objects, i.e. multiple instances of the same class with identical contents, can become a burden in a Java app. They may waste a lot of memory and/or increase the GC pressure. The best way to measure the impact of such objects is to obtain a heap dump and use a tool like JXRay to analyze it. When duplicate objects are immutable, you can use an off-the-shelf or custom interner implementation to reduce the number of such objects, and thus their memory impact. Mutable duplicate objects are much more difficult to get rid of, and may only be eliminated with case-by-case, customized solutions. Opinions expressed by DZone contributors are their own. {{ parent.title || parent.header.title}} {{ parent.tldr }} {{ parent.linkDescription }}{{ parent.urlSource.name }}
https://dzone.com/articles/duplicate-objects-in-java-not-just-strings
CC-MAIN-2020-24
refinedweb
2,212
52.7
Groovy 1.7.2 was released 22 hours ago, so by now you have surely spent hours playing with the new bits and combing through the release notes looking for new nuggets of productivity. Or maybe not. For all those busier than me, here are the top 3 reasons to upgrade to the point release: Convert GPathResults to XML - JIRA 1178 XmlSlurper is a great way to parse, manipulate, and generally tease XML. The problem is that once you start drilling into XML using the dynamic syntax, then you are working with GPathResults objects and not plain text or Strings. It was quite hard to pull out snippets of XML as text. No more! Now you can use XmlSlurper to pull out pieces of XML and then easily manipulate them as text by converting a GPathResult into a String. Here is the classic XML structure from the XmlSlurper documentation:> ''' So anyway, XmlSlurper makes it trivial to pull out the second car element from the example XML: def records = new XmlSlurper().parseText(CAR_RECORDS) def secondCar = records.car[1] assert secondCar.getClass() == NodeChild assert secondCar.toString() == 'Isle of ManSmallest Street-Legal Car at 99cm wide and 59 kg in weight' There are a few problems with this example, and it isn't that the second car element has an index of 1. The secondCar variable is of type NodeChild. This is great if you want to perform more dynamic matching against NodeChild, but not so great if you want to get the full XML of the child and all descendants. The toString() is pretty meaningless, it's just a concatenation of all the child node values. In 1.7.2, StreamingMarkupBuilder lets you quickly get the child XML as a String. The assertion and code sample is messy, but the feature isn't: def xmlString = new StreamingMarkupBuilder().bindNode(secondCar).toString()Nice. I understand you could somehow get the XML in previous Groovy versions, but I never figured it out. I'm lucky if I can just get something to pretty print. assert xmlString == "<car name='P50' year='1962' make='Peel'><country>Isle of Man</country><record type='size'>Smallest Street-Legal Car at 99cm wide and 59 kg in weight</record></car>" Easier Map and Property Sorting - JIRA 2597 It is easier than ever to sort Maps and Properties objects. The first step is to create an unsorted Map: def map = [c: 1, b: 2, a: 3] In previous Groovy releases there were two options to sort a Map. One sort method took a closure that would be invoked like a Comparator, and the other way is just to wrap the Map in a TreeMap. Here are some assertions showing how they worked: assert map.sort { Entry it -> it.key }*.key == ['a', 'b', 'c'] assert new TreeMap(map)*.key == ['a', 'b', 'c'] Not bad, but now the Map object has a no-arg sort that does default sorting and an API that lets you pass a real Comparator: assert map.sort()*.key == ['a', 'b', 'c'] assert map.sort({ a, b -> a <=> b } as Comparator)*.key == ['a', 'b', 'c'] And remember, these methods return a new Map, they do not mutate the original. No Maps are harmed during the sorting of Maps. ncurry and rcurry - JIRA 4144 The curry method has been around Groovy for a long time. It lets you bind variables into closures from left to right: def multiply = { a, b -> a * b } def doubler = multiply.curry(2) assert doubler(4) == 8 In the above example, multiply has a type signature of "Object -> Object -> Object" (two Object parameters and an Object return type). When you curry multiply into doubler you get a type signature of "Object -> Object" (one Object parameter and an Object return type). Parameters are bound from left to right. Curry always bound the left most parameter in the parameter list. Until now. rcurry will bind parameters from right to left. So a halver can now be made from a divider. Whee! def divider = { a, b -> a / b } def halver = divider.rcurry(2) assert 5 == halver(10) ncurry is the API sibling of curry and rcurry. It lets you bind parameters based on the integer index of the parameter in the signature. So you can bind parameter 2, 3, 4, or whatever. Check it out: def operation = { int x, Closure f, int y -> f(x, y) } def divider = operation.ncurry(1) { a, b -> a / b } assert 5 == divider(10, 2) Before you go too crazy with ncurry and rcurry, be warned that I think there is a bug when you try to mix them. Using ncurry and rcurry alone seems to work great, but rcurrying an ncurried closure does not currently work. I have a hunch that this does not effect a whole lot of people. I'll post a comment when I know what is wrong or when it is fixed. Now go upgrade to 1.7.2! From
http://java.dzone.com/articles/groovy-172-three-features
CC-MAIN-2014-41
refinedweb
821
71.85
Spatial hashing is a simple technique that is used in databases, physical and graphics engine, particle simulation and everywhere, where you need to quickly find something. In short, the point is that we divide the space into cells with objects and adds them to the objects. Then, instead of looking for value very quickly looking for hash. Suppose that you have several objects and you want to know whether the collision between them. The simplest solution is to calculate the distance from each object to all other objects. However, this approach increases the amount of computation required is too fast. If a dozen objects have to do a hundred checks, then hundreds of objects has been published for tens of thousands of checks. This is the infamous quadratic complexity of the algorithm. Here and below we use the pseudo code suspiciously very similar to C #, but it is only an illusion. // Each object foreach (var obj in objects) { if (Distance(this.position, obj.position) < t) { // Found one neighbor } } You can improve the situation, if you break the space with the help of the grid. Then each object gets into some cell of the grid and will be fairly easy to find the adjacent cells to test their occupancy. For example, we can be sure that two objects don't collide when they are separated by at least one cell. Grid is already good, but there is a problem with the update of the grid. Permanent screening coordinate the cells can work well in two dimensions, but in three starts to slow down already. We can take a step further and bring a multidimensional search space is one-dimensional, hashing. Hashing will allow us to quickly fill cells and search for them. How does it work? Hashing is the process of converting large amounts of data to a fixed, with little difference in the source data results in a large difference in the fixed. In fact it is a lossy compression. We can assign each cell of your ID, and when we compress the coordinates of objects, drawing an analogy with pictures, to lower their permission, then automatically obtain the coordinates of the desired cell. Thus filling all the cells, then we can easily get the objects in the vicinity of any origin. We just hash the coordinates and get a cell identifier objects. It may look a hash function for a two-dimensional grid of fixed size? For example as follows: hashId = (Math.Floor(position.x / ÃâÃÂellSize)) + (Math.Floor(position.y / ÃâÃÂellSize)) * width; We divide the X coordinate of the size of the cell and cut off the fractional part, then do the same with the Y coordinate and multiply the width of the grid. It turns out one-dimensional array as in the picture below. In fact we have the same grid as before, but the search is faster by simplifying calculations. True, they are simplified due to the appearance of conflicts when the hash is very distant object coordinates may coincide with the hash coordinates close. Therefore important to choose a hash function with good distribution and acceptable frequency of collisions. Read more at Wikipedia. I better show you how to realize in practice the three-dimensional spatial hash. a little about hashes What is faster calculate 2 * 2 * 3.14 or 3,1415926535897932384626433832795028841971 ...? Of course the first thing. We get less accurate result, but who cares? Likewise, hashes. We simplify the calculations lose their accuracy but most likely still get the result that we want. This feature hashes and the source of their power. Let's start with the most important - a hash function for three-dimensional coordinates. In the article on the site Nvidia offer the following options: GLuint hash = ((GLuint)(pos.x / CELLSIZE) << XSHIFT) | ((GLuint)(pos.y / CELLSIZE) << YSHIFT) | ((GLuint)(pos.z / CELLSIZE) << ZSHIFT); Take the coordinate of each axis is divided by the size of a cell, use the bitwise shift and OR results with each other. The authors did not specify what the value should be a shift to the same bit math is not exactly "for the little ones." There is a function a little easier, as described in this this publication. If you translate it into code, we get something like that: hash = (Floor(pos.x / cellSize) * 73856093) ^ (Floor(pos.y / cellSize) * 19349663) ^ (Floor(pos.z / cellSize) * 83492791); Find the coordinates of the cell for each axis, multiplied by a large prime number, and XOR. Honestly not much easier, but at least without unknown changes. To work with a spatial hash convenient to have two containers, one for storing objects in cells other storage location numbers of objects. As the main container fastest hash table work, they are also dictionaries, they also heshmapy or as they are called in your favorite language. In one of them we get keyed-hash stored object in another object by key-cell number. Together, these two containers allow you to quickly find and check neighbors filled cells. private Dictionary<int, List<T>> dict; private Dictionary<T, int> objects; How did look to work with these containers? We insert objects using two parameters: the coordinates and the actual object. public void Insert(Vector3 vector, T obj) { var key = Key(vector); if (dict.ContainsKey(key)) { dict[key].Add(obj); } else { dict[key] = new List<T> { obj }; } objects[obj] = key; } Hashing position, check the presence of a key in the dictionary, we push on the key object and the object key. When we need to update the coordinates, we remove the object from the old box and insert the new one. public void UpdatePosition(Vector3 vector, T obj) { if (objects.ContainsKey(obj)) { dict[objects[obj]].Remove(obj); } Insert(vector, obj); } If the object is updated at a time too many, it is easier to clean all the dictionaries and refill each time. public void Clear() { var keys = dict.Keys.ToArray(); for (var i = 0; i < keys.Length; i++) dict[keys[i]].Clear(); objects.Clear(); } That's all full class code is shown below. It uses a Vector3 of engine Unity, but the code can easily be adapted for the XNA framework or another. using System.Linq; using UnityEngine; using System.Collections.Generic; public class SpatialHash<T> { private Dictionary<int, List<T>> dict; private Dictionary<T, int> objects; private int cellSize; public SpatialHash(int cellSize) { this.cellSize = cellSize; dict = new Dictionary<int, List<T>>(); objects = new Dictionary<T, int>(); } public void Insert(Vector3 vector, T obj) { var key = Key(vector); if (dict.ContainsKey(key)) { dict[key].Add(obj); } else { dict[key] = new List<T> { obj }; } objects[obj] = key; } public void UpdatePosition(Vector3 vector, T obj) { if (objects.ContainsKey(obj)) { dict[objects[obj]].Remove(obj); } Insert(vector, obj); } public List<T> QueryPosition(Vector3 vector) { var key = Key(vector); return dict.ContainsKey(key) ? dict[key] : new List<T>(); } public bool ContainsKey(Vector3 vector) { return dict.ContainsKey(Key(vector)); } public void Clear() { var keys = dict.Keys.ToArray(); for (var i = 0; i < keys.Length; i++) dict[keys[i]].Clear(); objects.Clear(); } public void Reset() { dict.Clear(); objects.Clear(); } private const int BIG_ENOUGH_INT = 16 * 1024; private const double BIG_ENOUGH_FLOOR = BIG_ENOUGH_INT + 0.0000; private static int FastFloor(float f) { return (int)(f + BIG_ENOUGH_FLOOR) - BIG_ENOUGH_INT; } private int Key(Vector3 v) { return ((FastFloor(v.x / cellSize) * 73856093) ^ (FastFloor(v.y / cellSize) * 19349663) ^ (FastFloor(v.z / cellSize) * 83492791)); } } All data posted on the site represents accessible information that can be browsed and downloaded for free from the web. No replies yet
https://unionassets.com/blog/spatial-hashing-295
CC-MAIN-2017-13
refinedweb
1,244
57.77
NAME Data::UUID - Globally/Universally Unique Identifiers (GUIDs/UUIDs) SEE INSTEAD? The module Data::GUID provides another interface for generating GUIDs. Right now, it relies on Data::UUID, but it may not in the future. Its interface may be just a little more straightforward for the average Perl programer. SYNOPSIS use Data::UUID; $ug = Data::UUID->new; $uuid1 = $ug->create(); $uuid2 = $ug->create_from_name(<namespace>, <name>); $res = $ug->compare($uuid1, $uuid2); $str = $ug->to_string( $uuid ); $uuid = $ug->from_string( $str ); DESCRIPTION This module provides a framework for generating v3. (See RFC 4122.). In all methods, <namespace> is a UUID and <name> is a free form string. # # Note that digits A-F are capitalized, which is contrary to rfc4122 $ug->create_str(); $ug->create_from_name_str(<namespace>, <name>); # creates UUID string as a hex string, # such as: 0x4162F7121DD211B2B17EC09EFE1DC403 # Note that digits A-F are capitalized, which is contrary to rfc4122 (using upper, rather than lower, case letters) ; # this creates a new UUID in string form, based on the standard namespace # UUID NameSpace_URL and name "" $ug = Data::UUID->new; (RFC 4122)
https://metacpan.org/pod/distribution/Data-UUID/UUID.pm
CC-MAIN-2019-26
refinedweb
174
52.19
Automatic Discovery for Firewall and Web Proxy Clients Overview Microsoft. Concepts and Procedures This section includes: - Configuring automatic discovery - Web Proxy clients - Firewall clients - Client support - Configuring WPAD entries - Configuring a WPAD server - References Configuring Automatic Discovery There are a number of configuration steps involved in setting up automatic discovery support for clients: - Configure Web Proxy clients and Firewall clients for automatic discovery. - Create WPAD entries containing a URL that points to a WPAD server on which the Wpad.dat and Wspad.dat files are located. You can create a WPAD entry in DNS, in DHCP, or in both. - Configure a WPAD server. The URL specified in the WPAD entry points to the WPAD server, which is the computer on which the WPAD and WSPAD files can be located. There are a number of possible configurations for the WPAD server: - In the simplest configuration, the WPAD server is located on the ISA Server computer that will service client requests. - Alternatively, the WPAD server might be located on a computer separate from the ISA Server computer. - If the ISA Server computer will act as the WPAD server, configure ISA Server to listen for automatic discovery requests, by publishing automatic discovery information on a specified port. These configuration steps are outlined in detail in the sections that follow. Web Proxy Clients. Enable Web Proxy Automatic Discovery in Internet Explorer On Web Proxy client computers running Internet Explorer 5 or later, do the following: On the Tools menu, click Internet Options. Click the Connections tab. Click LAN Settings. Click to select the Automatically detect settings check box, and then click OK two times. Enable Web Proxy Automatic Discovery on Firewall Client for ISA Server 2004 Computers To enable Web Proxy automatic discovery on a Firewall client, do the following: In the Web Browser tab of the Microsoft Firewall Client for ISA Server 2004 dialog box, select Enable Web browser automatic configuration. To apply settings immediately, click Configure now. Firewall Clients To implement automatic discovery for Firewall clients, ISA Server uses the WPAD protocol to locate a WPAD entry in DHCP or DNS. If a Firewall Client computer has automatic discovery enabled, the following occurs: - When the client makes a Winsock request, the client connects to the DNS or DHCP server. - The WPAD entry URL returned to the client contains the address of a WPAD server (a server on which the Wpad.dat and Wspad.dat files are located). - The client computer requests the automatic configuration information held in Wspad.dat, with a call to on the WPAD server, where port is the port listening for automatic discovery requests. For DNS entries, you must listen on port 80. DHCP can listen on any port. (By default ISA Server listens on port 8080). You can manually type this URL into the Firewall Client browser to check that Firewall Client settings on the ISA Server computer are displayed as expected. - The ISA Server computer identified in the Wspad.dat file is then used to service Winsock connections for all applications on the client computer configured to use the Firewall Client.. Enable Automatic Discovery for Firewall Clients in ISA Server 2004 To enable automatic discovery for Firewall clients for ISA Server 2004, do the following:. Enable Automatic Discovery for Firewall Clients in ISA Server 2000 To enable automatic discovery for Firewall clients for ISA Server 2000, do the following: In ISA Server Management, click the ISA Server computer name, and then click Client Configuration. In the details pane, right-click Firewall Client and then click Properties. On the General tab, select Enable automatic discovery in Firewall Clients. Client Support The following table summarizes automatic discovery support for Firewall and Web Proxy clients for various operating systems, such as Microsoft Windows Server„2003, Windows® XP, Windows 2000, Windows NT® Server 4.0, Windows Millennium Edition, Windows 98, and Windows 95. Note In ISA Server 2000, the following DHCP limitation applies: Web Proxy clients on computers running Windows 2000 can only use automatic discovery for users who are members of the Administrators or Power Users group. In Windows XP, the Network Configuration Operators group also has permission to issue DHCP queries. For more information, see article 307502, "Automatically Detect Settings Does Not Work if You Configure DHCP Option 252," in the Microsoft Knowledge Base. Configuring WPAD Entries You can create WPAD entries in DHCP, DNS, or both. There are advantages and disadvantages to both approaches: - To use DNS, ISA Server must publish automatic discovery information (listen for automatic discovery requests) on port 80. Using DHCP, you can specify any port. Note that by default the ISA Server computer listens on port 8080 for automatic discovery requests. - If clients are spread over multiple domains, you need to configure a DNS entry for each domain containing clients with automatic discovery enabled. - Clients enabled for automatic discovery must be able to directly access or query the DHCP server for option 252. Remote access and VPN clients cannot access the DHCP server to directly obtain option 252. If automatic discovery is configured using DHCP only, remote access clients will not be able to use this feature. - Generally, using DHCP servers with automatic detection works best for local area network (LAN)€“based clients, while DNS servers enable automatic detection on computers with both LAN-based and dial-up connections. Although DNS servers can handle network and dial-up connections, DHCP servers provide faster access to LAN users and greater flexibility. If you configure both DNS and DHCP, clients will attempt to query DHCP for automatic discovery information first, and then query DNS. DHCP To configure automatic discovery using DHCP, check the following: - Ensure you have a valid DHCP server, and that there is a DHCP scope defined for each subnet containing client computers. - Add a WPAD entry to the DHCP server by means of a DHCP Option 252 entry. Option 252 is typically used as a registration and query point for discovery of printers, Web proxies (through WPAD), time servers, and many other network services. The Option 252 entry is a string value indicating the URL of the WPAD server. - Configure the Option 252 entry for the appropriate scope, even if there is only a single scope. - Ensure that client computers are configured as DHCP clients. DHCP information is supplied as follows: - DHCP provides WPAD information to DHCP clients during the allocation process, or fetches the information as required. - On Firewall client computers, when you click Detect Now, the Firewall client queries the DHCP client for WPAD information. Create an Option 252 Entry in DHCP To create an Option 252 entry in DHCP, do the following:: - Computer_Name is the fully qualified domain name of the ISA Server computer. - Port is the port number on which automatic discovery - \information is published. You can specify any port number. By default ISA Server publishes automatic discovery information on port 8080. Right-click Server options, and then click Configure options. Confirm that the Option 252 check box is selected. Notes - When you specify the Option 252 string, be sure to use lowercase letters when typing wpad.dat. For example, if you type, the request will fail. ISA Server uses wpad.dat and is case-sensitive. For more information, see article 252898, "HOW TO: Enable Proxy Autodiscovery in Windows 2000," in the Microsoft Knowledge Base. - You do not need to create anything specifically for Wspad.dat. Wspad.dat uses the same 252 option as wpad.dat, and modifies the wpad.dat name to Wspad.dat as required. Configure Option 252 for a DHCP Scope To configure an Option 252 entry for a DCHP scope, do the following: Click Start, point to Programs, point to Administrative Tools, and then click DHCP. Right-click Scope Options, and then click Configure Options. Click Advanced, and then in Vendor Class, click Standard Options. In Available Options, select the 252 Proxy Autodiscovery check box, and then click OK. DNS To configure a DNS server to provide a WPAD entry to clients, you must create a DNS entry. This entry can be configured in a number of ways: - Configure a host (A) record for your). - As an alternative, configure a computer with the name WPAD, and add a host entry specifying the IP address or addresses for this computer, avoiding the need to resolve an alias.: -. - Manually configure the IP properties of the client computer with the correct domain suffix. Note that you should configure Firewall clients to resolve the WPAD entry using an internal DNS server. Create a WPAD Entry in DNS To create a WPAD entry in DNS, do the following:. Note The ISA Server computer or array needs a host (A) record defined before you can create an Alias entry. If a host (A) record is defined, you can click Browse to search the DNS namespace for the ISA Server computer. Configuring a WPAD Server This sections explains WPAD and WSPAD files, a standard configuration, and an alternative configuration. WPAD and WSPAD Files: - [Common] - Port=1745 - [Servers Ip Addresses] - Name=ISAServer.microsoft.com Standard Configuration In a single computer configuration, the WPAD server will run on the ISA Server computer used to service client requests. Note the following in such a configuration: - If the ISA Server computer is unavailable, clients cannot make requests to the ISA Server computer, or request WPAD or WSPAD information. The effect of this is that you cannot update the WPAD or WSPAD file to point to an alternative ISA Server computer. - To update the WPAD server, you update the DHCP or DNS WPAD entries that point to the server. However, information is cached on DNS or DHCP servers, and the WPAD entry returned by DCHP or DNS may not contain the most up-to-date ISA Server information. - The advantage of using the ISA Server computer as the WPAD server is that the Wpad.dat and Wspad.dat files are updated automatically according to the ISA Server configuration. - In the standard configuration when using a DHCP option entry, you should keep the URL structure in the following format:. The Wpad.dat file must be in the root folder, and you should not modify the file name. Publish Automatic Discovery Information. Enable and Configure ISA Server 2004 to Listen for Automatic Discovery Requests To enable and configure ISA Server 2004 to listen for automatic discovery requests, do the following: In the console tree of ISA Server Management, click Firewall Policy. In the details pane, select the applicable network (usually Internal). On the Tasks tab, click Edit Selected Network. On the Auto Discovery tab, select Publish automatic discovery information. Enable and Configure ISA Server 2000 to Listen for Automatic Discovery Requests To enable and configure ISA Server 2000 to listen for automatic discovery requests, do the following: In the console tree of ISA Server Management, right-click the ISA Server computer name, and then click Properties. On the Auto Discovery tab, select the Publish automatic discovery information check box. In Use this port for automatic discovery requests, type the appropriate port number. Alternative Configuration: - Using this method, you maintain WPAD and WSPAD files on the computer running IIS. This avoids cache latency issues that can occur when you consistently modify WPAD entries to point to alternative ISA Server computers. - Such a configuration provides some failover possibilities.. - The drawback to this approach is that the files on the server running IIS need to be updated manually. DHCP entries, the files can be located anywhere as long as option 252 points to the correct location, not just. References
https://docs.microsoft.com/en-us/previous-versions/tn-archive/cc713344(v=technet.10)?redirectedfrom=MSDN
CC-MAIN-2020-24
refinedweb
1,917
54.52
Distance between two points in 2D space Get FREE domain for 1st year and build your brand new site Reading time: 30 minutes | Coding time: 10 minutes Distance between two points is calculated by creating a right-angled triangle using the two points.The line between the two points is the hypotenuse (the longest side, opposite to the 90° angle). Lines drawn along the X and Y-axes complete the other sides of the triangle, whose lengths can be found by finding the change in X and change in Y between the two points.We will use the Distance formula derived from Pythagorean theorem. Diagram : The formula for distance between two point (x1, y1) and (x2, y2) is Algorithm step 1 : Take a two points as an input from an user. step 2 : Calculate the difference between the corresponding X-cooridinates i.e : X2 - X1 and Y-cooridinates i.e : Y2 - Y1 of two points. step 3 : Apply the formula derived from Pythagorean Theorem i.e : sqrt((X2 - X1)^2 + (Y2 - Y1)^2) step 4 : Exit. Implementation #include <iostream> #include <cmath> class DistancebwTwoPoints { public: void getCoordinates(); double calculateDistance(); private: double x1_, y1_, x2_, y2_; }; void DistancebwTwoPoints::getCoordinates() { std::cout << "\nEnter the Cooridinates of Two points "; std::cout << "\nEnter X1 : "; std::cin >> x1_; std::cout << "\nEnter Y1 : "; std::cin >> y1_; std::cout << "\nEnter X2 : "; std::cin >> x2_; std::cout << "\nEnter Y2 : "; std::cin >> y2_; } double DistancebwTwoPoints::calculateDistance() { return sqrt(pow(x2_ - x1_, 2) + pow(y2_ - y1_, 2)); } int main() { DistancebwTwoPoints d; d.getCoordinates(); std::cout << "\nDistance between given two points is " << d.calculateDistance() << " units."; } Complexity The time and space complexity to calculate Distance between two points : calculating distance between two points is Θ((log(N))^2).
https://iq.opengenus.org/distance-between-two-points-2d/
CC-MAIN-2021-43
refinedweb
285
51.68
04 November 2009 12:30 [Source: ICIS news] By Andy Brice LONDON (ICIS news)--Politicians and industry can no longer afford to gamble with climate change and must take more responsibility, Feike Sijbesma, chairman of DSM's managing board, said late on Tuesday. Speaking at a presentation to journalists ahead of December's UN climate change conference in ?xml:namespace> “When I meet with government leaders they all say that something has to happen,” said Sijbesma. “But what’s also changed in the last year is that CEOs are taking much more of a responsibility than ever before. It’s not only the politicians; we have to take responsibility too. If Considering the discussions that will be raised at the December 7-18 event, Sijbesma said that aside from advocating a strong authority to regulate the carbon dioxide (CO2) market, it was essential that there should be some kind of incentive mechanism to reduce CO2 emissions. “My philosophy is why not take a benchmark - a certain product - and look at how it’s produced,” he said. “The moment you are among the 10-15% best producing companies in the world – even where you are emitting CO2 - then you get your rights for free.” This would give companies an incentive to invest in better technologies. Firms that exceed the benchmark performance would not have to pay for their CO2 emissions under the European carbon emissions trading scheme. “The world wants that product and even when it results in CO2 you are doing it in the cleanest way that the world can do it,” said Sijbesma. “If you are among the 85% doing it with higher CO2 emissions then you have to pay for that. A kind of benchmarked system – that is what I think the industry is advocating for.” In addition to addressing CO2 emissions, he said that it was also important to consider cleaner production processes. “We are already developing technology in the world that can partly solve that problem,” said Sijbesma. “What should be part of Sijbesma said he remained convinced that one of the solutions would be biotechnology – a shift from chemical-based processes to those based on water and using raw materials other than oil. “At the end of the day, the scenarios are that we have maybe 100 years of oil left,” he said. “The oil price can only go in one direction. My point is that one day it will all be gone, it will get more expensive and thirdly, it is polluting. These are all reasons to find alternatives.” Sijbesma added that although biotech currently represented only a few percent of the global chemical industry, a study by global consultancy McKinsey & Co two years ago suggested that this could grow to 20% in the next 10-20 years. In December, the Dutch specialty chemicals company plans to bring a pilot plant onstream in Bookmark Paul Hodges’ Chemicals and the Economy Blog For more on Rhodia visit ICIS company intelligence To discuss issues facing the chemical industry go to ICIS connect Please visit
http://www.icis.com/Articles/2009/11/04/9260958/more-responsibility-must-be-taken-on-climate-change-dsm.html
CC-MAIN-2013-48
refinedweb
508
57.1
ulimit(2) ulimit(2) NAME ulimit - get and set user limits SYNOPSIS #include <<<<ulimit.h>>>> long ulimit(int cmd, ...); Remarks The ANSI C ", ..." construct denotes a variable length argument list whose optional [or required] members are given in the associated comment (/* */). DESCRIPTION ulimit() provides for control over process limits. Available values for cmd are: UL_GETFSIZE Get the file size limit of the process. The limit is in units of 512-byte blocks and is inherited by child processes. Files of any size can be read. The optional second argument is not used. UL_SETFSIZE Set the file size limit of the process to the value of the optional second argument which is taken as a long. Any process can decrease this limit, but only a process with an effective user ID of super-user can increase the limit. Note that the limit must be specified in units of 512-byte blocks. UL_GETMAXBRK Get the maximum possible break value (see brk(2)). Depending on system resources such as swap space, this maximum might not be attainable at a given time. The optional second argument is not used. ERRORS ulimit() fails if one or more of the following conditions is true. [EINVAL] cmd is not in the correct range. [EPERM] ulimit() fails and the limit is unchanged if a process with an effective user ID other than super-user attempts to increase its file size limit. RETURN VALUE Upon successful completion, a non-negative value is returned. Errors return a -1, with errno set to indicate the error. Hewlett-Packard Company - 1 - HP-UX Release 11i: November 2000 ulimit(2) ulimit(2) SEE ALSO brk(2), write(2). STANDARDS CONFORMANCE ulimit(): AES, SVID2, SVID3, XPG2, XPG3, XPG4 Hewlett-Packard Company - 2 - HP-UX Release 11i: November 2000
http://modman.unixdev.net/?sektion=2&page=ulimit&manpath=HP-UX-11.11
CC-MAIN-2017-17
refinedweb
294
67.25
Much cute, such lights! Yesterday (as I write this), Pimoroni, creators of the amazingly cute Bearables, put out the following tweet: When we said that was just a programming header on the back of the Bearables badges, we kind of lied… We’ll give a fully-assembled pHAT Stack to the first person who successfully hacks a new pattern onto their badge and provides code/video proof. Tag with #bearableshack pic.twitter.com/a59PiJ7DY5 — pimoroni (@pimoroni) November 17, 2017 Just like that, all the work I was about to do for the rest of Friday evaporated. I must admit, I did have some prior knowledge of this hidden capability – as Phil adorned me with a Bearable at CamJam, he informed me that ‘these are some power pins and a few data bus pins, sure you can figure them out’ – but I forgot to do anything about it… until now! The first step to working out what was what, was to look at the chip. A bit of googling and it was found to be a PIC16F1503 microcontroller. With datasheet in-hand, a spot of multimeter probing between the controller and the pins and I had the pinout: Prog. Vref / SDa / SCl / Gnd / VCC. I highly suspected the Bearables used I2C for the comms, as the same chip is used on all of Pimoroni’s Flotilla line, where I2C is the main communication protocol. Makes sense, right? After hooking it up to a Pi, and running a good ol’ i2cdetect, my suspicions were proven correct – the Bearable appeared as 0x15! I tried sending it some 1s and 0s down the command line (with, frankly no clue what I was doing – I have yet to delve into the world of raw I2C comms on a Pi) but nothing happened. I also found the rather excellent Raspberry Pi PIC Programmer, but I assumed that a full firmware rewrite wasn’t necessary. I also have absolutely no experience with PIC microcontrollers, and the programmer software didn’t support the chip in hand. Besides, everyone knows that Friday afternoons are not a good time to do important and serious things such as firmware flashing. So I started looking for other approaches. The Bearables line also includes a range of sensors – perhaps I could fake a sensor and see what that did? After shoving 3.3v and Gnd down the sensor connections without much luck, I tried just shorting out the pins. This worked, kind of: once the badge was in sensor mode, you could sort-of-ish make stuff happen by shorting the lines. But, there seemed to be an erratic time delay between the connection being made and a change being seen, so it wasn’t going to work for a new pattern. There’s only really one other interface on the badge: the button for changing modes/patterns. I noticed whilst poking around that if you spam it fast enough, it just interrupts the current pattern, instead of changing it to the next one along. To test this theory out, the switch was brutally removed from the badge, and the pads hooked up to an NPN transistor, with the base hooked up to the Pi again. I then wrote some very quick and dirty code (see below) to pulse the line as fast as possible. This had a pretty cool effect – it mostly made some rapidly flashing variants of the current pattern, but when it was on the chasing LEDs pattern you could get it to do almost single LED control. I had a new pattern! I must be completely honest, I didn't use the intended method, but it does in fact work! pic.twitter.com/GIgzPtAFjj — Archie Roques (@archieroques) November 17, 2017 You’ve earned yourself a bear badge at the very least, Archie. You can upgrade to the pHAT Stack with a “correct” solution. 😄 — pimoroni (@pimoroni) November 17, 2017 The offending code: from gpiozero import LED from time import sleep l = LED(3) x = 0.01 y = 0.002 while True: l.on() sleep(x) l.off() sleep(y) I was all chuffed with myself and ready to have a nice, relaxing evening. And I did, for a bit. Until Team Underwood came along with their I2C knowledge and properly formatted code…. We've done it! (We = my husband Phil) #bearableshack @pimoroni New pattern coming next pic.twitter.com/XPbGpIjQCx — Lorraine Underwood (@LMcUnderwood) November 17, 2017 And, what’s more, the Underwood method uses the actual intended method – I was quite close to begin with! If only I’d read up on my smbus eh… Their very proper code is online, which meant there was an opportunity for more hacking! Over the course of the night (and early next morning) I produced…. @pimoroni pokeable bearable! pic.twitter.com/LTwTyklLBi — Archie Roques (@archieroques) November 18, 2017 Get this… It's Bearablompass! Never before has there been a more cute but less useful way of measuring degrees of rotation. pic.twitter.com/SggTNusaGN — Archie Roques (@archieroques) November 18, 2017 And now, for my final masterpiece of the day, I present the world's most broken binary clock. So bad but so good. pic.twitter.com/dx8g972Mlm — Archie Roques (@archieroques) November 18, 2017 Amazing what you can achieve with a little badge! And it’s really easy once someone else has written the code for you (thanks team Underwood!) If you haven’t got one, get one (just for the cuteness factor!) and have a go at programming it. I quite fancy trying to write some firmware for a little ATTiny microcontroller that can be bodged onto the back of the badge…. maybe in the future.
https://roques.xyz/?p=71
CC-MAIN-2018-17
refinedweb
943
71.55
Microsoft Messenger Architect On The Future Of IM 277 CowboyRobot writes "ACM Queue has an interview with Peter Ford, chief architect for MSN Messenger, by Eric Allman, CTO of Sendmail. They discuss the present and future states of IM, the current big players as industry shuffles toward standardization, some of the social implications of IM versus email or telephone, and technical issues such as using SIP as opposed to XMPP (Microsoft is pushing for SIP, everyone else seems to favor XMPP). They don't bring up Wallop, Microsoft's community application that will be built into Longhorn, but that's surely part of the long-term discussion." Trillian, VM (Score:5, Insightful) The pundits of chargeable IM services socialize the use of the service, as a Freudian brainwash, by forming IM parties with other-sexy-trendy-phone-pundits, and I sit back wondering what the fuck is happening to the world; it should be all free, or at least the cost of hardware. It's obviously a ploy to put a price on a few bytes of data, and slap a carriage charge on top of it. Which is why I'm not at all surprised this Microsoft guy, PETER FORD (from the interview) is talking about IM. It seems that the fancier the names of the new protocols are, the more money it's going to cost. But it's mumbo-jumbo to the end user, who would gladly fork over the cash just to make it go away (and just work). That's what these pundits are counting on. One part of the article I found interesting was the design of voice mail. I agree. It would be better to build the message at the sender's location and *then* send it. Re:Trillian, VM (Score:3, Informative) Its not as polished as Trillian, but its OSS and cross platform, and thats whats important! Re:Trillian, VM (Score:5, Insightful) Of course, I'm a Pro member for the Jabber support, but little bonuses like this make it worthwhile. why Trillian when you have gaim ? (Score:4, Insightful) It's good, at least, that gaim/trillian developers collaborate in cracking proprietary protocols. Re:why Trillian when you have gaim ? (Score:5, Insightful) As long as the underlying protocols stay free and open (be it soap, irc, jabber or whatever) then if someone wants to write a closed source interface to it, that's their perogative, and of course they do so at their own risks. As great as it is to work as an (open) team, there is still something to be said for going it on your (closed) own. Re:why Trillian when you have gaim ? (Score:2) HAHAHAHAHHAA I've had bug reports in for locked dialogs and missing shortcut/default/cancel keys on them since somewhere around 0.60. The trillian "developers" are a joke. Re:why Trillian when you have gaim ? (Score:2) Gaim is the most active project on Sourceforge, and you can write your own plugins in perl. Dunno, I just don't see any reason to really ever not use gaim. SecureIM that's why (Score:3, Informative) Gaim is feature poor and the developers refuse to interoperate with Trillian's secure protocol. The secure Gaim spin-off doesn't want to play with Trillian eithe Re:SecureIM that's why (Score:2, Informative) 2. Gaim already has encrypted IM plugin 3. Trillian's SecureIM is a closed protocol, why should GAIM interact with something that could change at any moment? Re:SecureIM that's why (Score:2, Informative) Uh, yeah. Neither ssh or ssl are vulnerable to MITM attacks. Re:SecureIM that's why (Score:2) Re:SecureIM that's why (Score:2, Informative) Both are jabber clients, so you'll have to choose whether you find security or sticking to the current protocol the most important, but I like both of these clients (prefer jajc though, mor Freudian Brainwash (Score:2) Re:Freudian Brainwash (Score:2) Well, yeah, but... (Score:4, Insightful) SIP (Score:5, Interesting) IBM, which sells the #1 selling business IM solution (Lotus Instant Messaging), is using SIP. Apple is using SIP. So who are the "everyone else" who want XMPP? Re:SIP (Score:3, Informative) AIM Yahoo Chat ICQ ??? Profit!!! Re:SIP (Score:2) Re:SIP (Score:2) Still, whatever the case, Ford gives a compelling case for chosing SIP and while the lack of IM standards on SIP might appear to be a problem, to me it looks like room to grow in an area which clearly isn't anywhere close to reaching its full potential. It does, unfortunately, make interoperability a problem though until IM over SIP is better standardiz Re:SIP (Score:2) Apparently iChatAV is some kind of SIP variant. Some people were trying to get it to talk to IP phones, but could never get it quite right due to some irregularity in the way it opened ports (???). I totally agree with grandparent, the first thing I thought on reading (Microsoft is pushing for SIP, everyone else seems to favor XMPP) was *who* is this everyone else? As for the IM part of iChat, yeah, it's OSCAR, the AOL protocol, as far as I know. Nice product - the integration with the mailer is particul IBM sponsor Jabber. (Score:2, Informative) Re:SIP (Score:5, Informative) So, they are using XMPP in the local messaging stuff, but SIP to negotiate the exchange of A/V streams. Which is really what the two protocols were designed for. The SIP pushed for by MS discussed is actually an extension called SIMPLE. If you want proof of iChat using XMPP, either install a packet sniffer on your network, or run "strings", "otool -tV" or the 3rd party "class-dump" utility on the executable for iChatAgent, and grep the output for "Jabber". Re:SIP (Score:3, Insightful) Well, XMPP is orders of magnitude more popular, or at least more visible, among small businesses and end-users. There are clients for every platform you can name, and quite a few server software offerings. Many of these projects are open source. Search around on the web and you'll find a great number of fun Jabber-related projects, such as the Jabber World Map [ralphm.net], or the multitude of mailing lists and user communities [affinix.com] dedicated to Jabber. Even Trillian and Gai SIP over XMPP (Score:2, Interesting) While the architecture of XMPP allows for theoretically broader support of handwriting recognition systems, you rarely need more than two on any given system (your native language and English). I have a feeling Microsoft will win this small battle. Maybe I should RTFA, but... (Score:2, Redundant) Re:Maybe I should RTFA, but... (Score:5, Informative) SIP (Session Initiation Protocol) has been around for a long time and AFAIK is a binary protocol. SIMPLE is built on top of SIP and provides the instant messaging functionality. XMPP is relatively new and is based on XML (hence why it's so extensible.) There are two parts, the core (which might as well be equivalent to SIP's core) and the IM extensions. The glaring practical difference is that there seem to be about zero open-source SIP servers, and about a dozen open-source XMPP servers (going off the list at JabberStudio [jabberstudio.org] which might not represent all of them.) Re:Maybe I should RTFA, but... (Score:3, Informative) SIP messages look like HTTP messages, but can be encased in either TCP or UDP packets. (Which means you can add new HTTP style headers, just as web browsers do) SIP is mostly used for carrying VoIP session information at the moment (as an SDP message body), but SIMPLE would work really great for carrying IM. Re:Maybe I should RTFA, but... (Score:2) There is an (apparently) open SIP implementation at [vovida.org]. Microsoft vs. Everyone? Get your facts straight (Score:5, Informative) Microsoft, Lotus, Sun, and Novell seem to have settled on SIP. Intel, H-P, Hitachi, Sony, and more or less the entire open source world is going toward XMPP, sometimes better known as Jabber. and the poster says: Microsoft is pushing for SIP, everyone else seems to favor XMPP. Yeah, it's fun to paint the world in black and white but this is just a blatant lie. Re:Microsoft vs. Everyone? Get your facts straight (Score:5, Informative) This is too big a deal to ignore. SIP+SIMPLE will be a powerful platform and in many cases, already is. This isn't about Jabber vs. SIMPLE or Microsoft vs the world. SIP/SIMPLE is going to be able to leverage an amazing installed base of VoIP infractructure that Jabber will not have access to. Re:Microsoft vs. Everyone? Get your facts straight (Score:3, Informative) H.323 was really a mapping of H.320 ISDN videophones onto IP-based networks. The protocols are all binary. SIP was designed from scratch for IP networks, and is a test-based protocol with HTTP-like syntax. SIP was also designed from the outset to perform user-location via a search. This makes it appropria Re:Microsoft vs. Everyone? Get your facts straight (Score:3, Informative) Re:Microsoft vs. Everyone? Get your facts straight (Score:2) If the author had done a bit more research, he would have found the follwing: Lotus (SameTime): Native protocol proprietary, with a SIP gateway. Sun: Native protocol proprietary, no gateways at all right now. Microsoft: MSN Messenger proprietary, new Exchange 2003 SIMPLE plus extensions. Kind of a side question (Score:5, Insightful) So IM will be build into Windows, and Netmeeting, and this and that and whatnot. Isn't this getting slightly ridiculous to bundle everything in an OS ? I'm sure nobody wants *all* of that installed on their hard-drive, just as I wouldn't want to install all the packages that come with my Linux distro CD, but instead I want to choose what I install and nothing else, and save disk space. What's beyond me is why don't we hear a great number of people (regular users) complaining about this waste of disk space, and also why so few OS experts voice their concern about the fact that the OS/application boundary in Windows is so blurry it's frightening in terms of security and stability Re:Kind of a side question (Score:2, Insightful) If it is in the OS, MS will say "it's part of the system" to try and avoind future monopoly abuse charges. And/Or they want to control everything that happens on a computers. really, no other reasons. It make no technical sense to bundle this crap into the OS. Re:Kind of a side question (Score:2) I mean, the issue touches people's wallets, where it really hurts, why doesn't anybody say anything ? Re:Kind of a side question (Score:4, Insightful) Re:Kind of a side question-BAAAAAHH!. (Score:3, Informative) The examples I give in the grandparent post are real. Some people that I personally know really do think that a worm spreading through email is normal, and don't understand that it could be prevented Re:Kind of a side question (Score:2) It's hardly a phenomenon unique to Windows. [...] after all, it's not like it takes a computer genius to see that it's necessary to upgrade hard-disk (or entire machines, more likely) every 2 or 3 years, [...] People aren't buying new hard disks to fit newer versions of Windows on, they're buying them to put the dozens of gigabytes of mp3s, warez and porn they're downloadin Re:Kind of a side question (Score:2) Re:Kind of a side question (Score:3, Informative) If by compulsory you mean "I have to use it because I'm too stupid/ignorant/lazy/indifferent to use something else", then yes. If you're referring to the more traditional use of the word "compulsory", then bollocks. On OS X if, for some weird reason, I chose to not use iChat at all, thought that the software was crap, hated the icon, whatever, all I have to do is drag the app to the trashc Re:Kind of a side question (Score:3, Informative) So Mr. I Know it All... explain to me why where I live (Italy) the average iLliterate user started using en-masse Messenger with the coming of XP. Lazy and/or indifferent users. I've tried for years to convince people that IM was cool and better thatn email for this kind of comms but NADA, ZILCH, NO! I am not at all suprised Microsoft does a better job of marketing a product than you do. But now everybody uses MSN and the only ones I can iChat with are thos Re:Kind of a side question (Score:3, Interesting) I'll put aside the personal attacks as those are points for me anyway. Then presumably your "personal attacks" on me even it all out ? Let me pick on your first remark: lazy users! Oh, so now it's not MS's or any other SW provider's fault: it's the user's fault. It is the user's responsibility to choose the software they use. So, yes, it most certainly is the user's "fault" if they don't switch to an alternative. Let's see.. Re:Kind of a side question (Score:4, Interesting) The OS/application boundary (if you mean DLLs) is a different thing. Re:Kind of a side question (Score:2, Interesting) Messaging built into the OS isn't exactly new... think syslog. The only addition is the ability for the messages to span (more easily?) outside the source maching. Presumably this means "bundled into the OS" the same way Internet Explorer is "bundled into the OS", that is, not. It just comes with the OS... pretty much like Messenger and NetMeeting already do. Re:Kind of a side question (Score:2) Messaging built into the OS isn't exactly new... think syslog. How about Windows (MSN) Messenger? Re:Kind of a side question (Score:2, Informative) Re:Kind of a side question (Score:2, Insightful) I don't know if IM will go the same way, but it might make sense for some apps to sort of integrate with it. Disk space? You're paying $100 for windows Re:Kind of a side question (Score:4, Informative) Re:Kind of a side question (Score:3, Informative) Re:Kind of a side question (Score:3, Insightful) With the lowliest entry-level Windows PCs offering P4s and 80 GB hard drives as standard, no one gives a damn about minor performance hits or O/S bloat. Re:Kind of a side question (Score:5, Insightful) For the market they're aiming at ? No, not at all. Remember, they're trying to sell a single, everything-you-need solution to normal people who just want to go out and buy a single thing to do it all. There are people out there who think adjustable seats, air conditioning and radios are worthless fluff in cars, as well. Fortunately they're in the minority and most manufacturers ignore them. I'm sure nobody wants *all* of that installed on their hard-drive, just as I wouldn't want to install all the packages that come with my Linux distro CD, but instead I want to choose what I install and nothing else, and save disk space. These people are few in number and generally not at all interested in Windows (or a similar product like OS X) anyway. What's beyond me is why don't we hear a great number of people (regular users) complaining about this waste of disk space [...] Because their last PC came standard with an 80G hard disk. 1.5G for Windows isn't even 2% of that (relative to common hard disk sizes, Windows XP isn't really any bigger than Windows 3.1). Disk space is dirt cheap - a few hundred megs here or there is pocket change. Personally, I've lost interest in the carefully chosen custom install - because I've now got so much disk space that the only really compelling reason for doing so has disappeared. Why should I care if an application wants to install to 100MB or 150MB when I've got 50G free on the machine and another half a terabyte sitting on a fileserver ? and also why so few OS experts voice their concern about the fact that the OS/application boundary in Windows is so blurry it's frightening in terms of security and stability ... The OS/application line has been blurry ever since the first machine that used a CLI shell instead of a bunch of flashing lights and switches rumbled into life. "Bundling" an IM client (or a web browser) is logically no different to bundling a text editor, or ping, or ftp, or any number of "core applications" that have been being "bundled" with operating systems for decades. Not to mention unix boxes have been shipping with an IM client for donkey's years - talk. "OS experts" aren't voicing their opinions because by and large they have grasped the concept that the thing academically defined as an "operating system" bears little resemblence to the thing commercially defined as an "operating system". The only commercial products that are even remotely similar to the academic definition of "operating system" are embedded OSes. Re:Kind of a side question (Score:2) Mac OS X does actually allow you to install or not install what you want. Just click the Customize button and you can leave iChat or most anything else Re:Kind of a side question (Score:2) It isn't about disk space. It's about complexity and how it relates to security. If I'm not interested in using IM, why do I have to have it installed, with the extra security risks that comes with it? It's about who is in control of your computer. You (the owner) or someone else. "Bundling" an IM client (or a web browser) is logically no differ Re:Kind of a side question (Score:2) What's beyond me is why don't we hear a great number of people (regular users) complaining about this waste of disk space... Most users use what came preinstalled on their computer from the factory, or if they ever do install/upgrade Windows, just click the pretty dialog boxes until the setup program is done. Windows XP does not even give you an option of what to install anymore. Previous versions did, but not XP. You have to go back after the install and manually (un)install OS applications in the contr Re:Kind of a side question (Score:2) You and I may enjoy choosing which packages we want, and what office-like suite we'll use, but my 50-year old mother doesn't care. She wants her PC to just work. She does really well at taking pics with her camera, plugging it in, emailing it, etc. She's a damned Re:Kind of a side question (Score:2) Real Improvements (Score:4, Interesting) Re:Real Improvements (Score:2) Am I the only one who doesn't use IM? (Score:5, Interesting) Re:Am I the only one who doesn't use IM? (Score:5, Interesting) No, you are not weird. It's a well-known fact that IM, even more than computer games, is a notorious productivity killer. So much so that many companies have started to firewall IM clients off and edict company rules forbidding the use of IM at the office. Now Windows will propose it by default in all standard installs, I bet that Microsoft decision will be very popular amongst IT personels : it's hard enough to discourage the use of third-party applications without having to deal with the Microsoft trojan-horsish IM client Re:Am I the only one who doesn't use IM? (Score:2) It's a well-known fact that IM, even more than computer games, is a notorious productivity killer. So much so that many companies have started to firewall IM clients off and edict company rules forbidding the use of IM at the office. Yeah, email and web are probably even bigger productivity killers. Hell, they should just forbid internet access. Please. IM can be very useful. If people aren't going to be productive, they're not going to be productive. Take away everything in the room except their work Re:Am I the only one who doesn't use IM? (Score:4, Insightful) Hmm no, your logic is backward: there is a certain category of people who, IM or web or nothing at all, will do nothing. Those need to be fired. Another category is the people who do their work equally well and/or fast regardless of the shiny toys they have on their computer. Those need to be praise, they're not many. And the last category, the vast majority of workers, work well most of the time, but work even better without the distraction of IM, the web and whatever else. So yeah, in many cases, they should just forbid the internet. Most accountants don't need it to do accountancy, for example. Most secretaries don't either. Re:Am I the only one who doesn't use IM? (Score:5, Insightful) As a communications medium, it combines the immediacy of cellular phones with the subtlety of e-mail. Likewise, you can copy/paste, a big bonus in many technical fields. Unfortunately, if not taken seriously this can lead to abuses and general slacking, but so could phones and e-mail if that sort of thing weren't frowned upon. Still, the holy grail is achieving a single unified standard that will allow all IM systems to interact. This is not a technical hurdle, but a financial one. Much like how the lack of inter-network text messaging killed SMS in the US, the messaging companies are all fighting hard to earn a piece of the surprisingly non-lucrative IM market. Apparently they are under the delusion that infinity times free equals a large sum of money for sufficiently large values of infinity. If everyone ran a Jabber client, it would quickly become as indispensable as e-mail. Re:Am I the only one who doesn't use IM? (Score:2) So yes, my hat is off to IM... but: - For every thesis there is an antithesis. * IM is a source of spam * IM is a source of virii and worms * It can distract even the most ideal workaholic So lets face this - every invention or product no matter which industry it belongs to will always have an upside and the d Re:Am I the only one who doesn't use IM? (Score:2) Re:Am I the only one who doesn't use IM? (Score:5, Insightful) IM is one of those things you want other people to have so you can get hold of them at a moment's notice, but you don't like when it interrupts what you're doing. I'd say the next big thing in on-line communication will have more in common with phpBB than ICQ. Re:Am I the only one who doesn't use IM? (Score:2) With email (like a letter) you compose with a certain number of questions, chekc your speelling for errors, and then to make sure you tone isn't "over the top". Sometimes you can hold the channel open too long (a long email), maybe even explaining what you don't need to because you don't get any sense of what the reader understands already. You might even repeat your m Re:Am I the only one who doesn't use IM? (Score:4, Insightful) Pool messages, read when available, then let them queue up again. Eghads! You mean... taking responsibility and not being a slave to the device? Holy hell, what is this world coming to? Personal accountability? Nope, that's right out. Just blame the tools and not the people for using them in the way that best suits them. Re:Am I the only one who doesn't use IM? (Score:2) Come off it, email is an open standard, easy to manage and available for every device with an Internet connection. I can send email from my mobile phone. If all you're going to do with an IM is treat it like email, what's the point of moving away from a low-bandwidth open standard? Re:Am I the only one who doesn't use IM? (Score:2) Re:Am I the only one who doesn't use IM? (Score:2) Re:Am I the only one who doesn't use IM? (Score:2) Microsoft Messenger Architect speaks (Score:5, Funny) My favorite quote (Score:2) The problem I think is not the technology but the market. Remember that AT&T failed to GIVE those things away. Re:Microsoft Messenger Architect speaks (Score:2) I didn't even watch the VMA special features until my brother practically made me. His class at full-sail watched it and I thank them for that. Without his recommendation I might have missed witnessing possibly one of the funniest spoofs I've seen in ages. Oh, F***, Not Again ... (Score:5, Funny) We're gonna go through this spam thing again, aren't we? Man it's like living in Groundhog Day. On the other hand, this does give us a use for Bunker Buster bombs - instant localized retaliation against any spammer. And their families. And friends. And neighborhood. Which is as it should be. Re:Oh, F***, Not Again ... (Score:2) Don't worry, IM spam won't last as long since, as you can see, the Internet makes the Internet faster! It's all so clear to me now! And here I thought the solution was broadband... My views on the future of IM... (Score:5, Interesting) No, I don't think the major IM players will settle on a standard. The best thing we can hope for is that the Jabber protocol catches on and we all have an open IM standard. That's most likely not going to happen, though, until the rest of the world catches on to the whole OSS movement. And at that point, there are going to be so much better things out there than text IM that people are working on together that it won't matter anyway. What category does Jabber fall under? (Score:2) I'm a bit unclear about the differences between SIP and XMPP and where Jabber, which could have been used as an interoperability standard, all fit together. At the high end, these all seem like simple namespace issues and would map onto Jabber nicely. An AIM user, for example, could be user@aim but the end user doesn't need to know that, they could just be presented with an icon representing AOL or something. The real issue is that there doesn't seem to be much in the way of motivation into making the IM Er, k.. (Score:2, Interesting) Seriously, I'm not trolling, but am I the only one who is saying to himself, "it's just IM, what's the big deal." Maybe there is something massive to gain by pushing for one tech over another in this area, but come on, it's just IM. What's perhaps even sillier is the concept of someone being a chief arc Re:Er, k.. (Score:3, Insightful) Control (Score:5, Insightful) Meanwhile, I plan to wash my hands of the whole mess and use Jabber [jabber.org]. Remember back when we had standards, and the internet was decentralized? It actually worked - there wasn't a single point of failure. When was the last time the entire email system went down? Jabber can offer the same reliability, and you don't aren't locked into a single server or client. Besides being decentralized, Jabber tries to offer gateways, and many Jabber clients (such as GAIM [sourceforge.net]) also play the "keep up with the proprietary protocol" game. So have the best of both worlds - get a Jabber account somewhere, and whenever your friends's servers lock out their clients of choice, convince them to get a Jabber account also. On Wallop .. I find it threatening (Score:4, Informative) great story, but one thing wasn't touched on (Score:5, Interesting) The thing that interests me is the way that Ford talked about differences in accessibility (can people you don't know communicate with you?), and verifiability (do I know who you are?) in various systems, and how one system (say chat) might be used to allow rough and tumble anonymous communications with strangers, while another (IMing) might be limited to friends on a whitelist. Another characteristic that's particularly important to me is real time vs. instant response. I *hate* systems that interrupt me in real time, which is why I use email instead of IMs. I've pretty much stopped answering my phone, too, because I can, and now I depend on my machine to queue up calls, so I can deal with them when it makes sense to do so. The question that all of this raises, for me, is whether or not it's practical to have a comprehensive messaging service that will allow people to tweak all of these different parameters in combinations that they like. Is there any need for email and IMs to be distinct? Maybe we need a messaging "account" to be open, and another to be whitelisted, or one to be real time, and another to be queued -- but can't they be the same general sort of accounts, configured differently? (I'm not talking about trying to twist email itself into this shape... but about a new system that would cover much of the same ground.) Re:great story, but one thing wasn't touched on (Score:2) Re:great story, but one thing wasn't touched on (Score:2) I almost never call people -- I use email. People who know me, usually get in touch with me via email. If someone wants to call me, they can deal with my machine. Peer networks, network agnostic clients (Score:4, Insightful) In this sense I see even Jabber as a dead-end - give me GAIM and the other multinetwork clients anyday, and open up more peer networks to them as they are populated. Microsoft's Idea of Innovation 30+ Years Old (Score:4, Insightful) End users are beginning to ask for it. They get jazzed about the idea of being able to start an IM session with somebody, then if that person goes offline at some point, the message being sent would be saved and retrieved at a later time. IBM Reality (1972) [faqs.org] You can also leave a message for wdd to receive when he logs on by typing: send 'message' user(wdd) logon. a different observation (Score:2) Re:a different observation (Score:5, Insightful) He's a first-rate architect. He's one of those people who understands more than just the protocols he's dealing with at the time -- he gets the reason those protocols came into existence, what drives them, who wants to use them, how they fit with othe protocols, etc. Peter has been pushing for SIP inside Microsoft for a long time. I was part of the design process for a couple of years, and it was a real pleasure to work with so many excellent engineers and thinkers. There is a real desire to make interoperable, public network products at Microsoft -- don't laugh, it's true. We spent YEARS making H.323 work (which is a public protocol -- anyone can implement it), but it didn't matter because, in the end, H.323 sucked. Even the Windows Messenger guys want to move to SIP, because it solves a lot of headaches for them. The best thing about SIP is that it is fairly decentralized. It's exactly as decentralized as DNS+SMTP. If you have a domain, you can publish your SIP service records, and you can handle your own communications any way you want to (similar to SMTP). This is in contrast to the way that all of the current IM protocols work -- extremely centralized, where all of your messages go to a server, that just re-sends them to the other person. I don't know anything about XMPP. If it's a good protocol -- awesome. But whether it's XMPP or SIP, or whatever -- it's gotta happen. Instant messaging (and other similar services) need to be decentralized, standard, and open. And for once, the people inside Microsoft agree, and are actively working on it. I just hope they can convince the upper management layer. Cross-platform IM solutions with apt-get (Score:4, Troll) Basically, apt-get is a kick-ass system for making sure your Debian system is up to date, has the latest packages installed, and manages conflicts. At the core, what is an IM system about? Making sure your message 'packages' are up to date, has the latest messages 'installed', and manages conflicts, that is, a reply had been requested, yet hasn't been sent! All the key infrastructure was already in place, including an interface (dselect), which could easily be ported to all the required platforms to allow easy reading and sending of instant messages. The first step was to use apt-get itself to distribute a modified apt.sources file, which contained the IP addresses of all of the IM clients on the network. Some people had suggested DNS as a solution to this, but my feeling was that DNS wouldn't scale so well (this was a large LAN, with over 10,000 clients...I'd like to see DNS cope with that!!). Once each client had it's apt.sources file updated, you could basically send a 'message' (your ASCII message encapsulated into a .deb file by a custom packager I created that runs as a background process) to any host specified in the apt.sources file. To do this, I had to create a daemon-ized version of apt-get, listening on a predefined port. The daemon would be contacted by the apt-get client, would receive the .deb package containing the message, and then 'install' it to the dselect based client on the receiving system. Without trying to sound like I'm blowing my own trumpet, the system was a huge success, and the many features of apt-get for package management really came in handy for managing IM flows. For instance, just say you've just sent a message to a colleague via apt-get saying "Let's meet for lunch at 1pm": apt-get install host=fred-pc "Let's meet for lunch at 1pm" But then...you're called into an emergency meeting and you can't make lunch until 2pm. You need to 'upgrade' your message to the latest version: apt-get upgrade host=fred-pc "Make that 2pm!" Easy! The whole project was essentially wrapped up in 6 months, and because of the open-source nature of apt-get, we'd managed to port to all of the platforms in our specification. If Microsoft can swallow their pride a little, I think they could really learn something from the power of apt-get! Re:Cross-platform IM solutions with apt-get (Score:2, Interesting) Where is IRC? (Score:5, Interesting) How many millions of people use IRC? Why not adopt it as a mainstream system? I was surprised that the interviewer, being from Sendmail, so glaringly ignored throwing this into the mix. IRC can do everything instant messaging can, and then some. Both the Mr. Ford and the interviewer failed in their mission: the former may not be much of an architect if he's willing to overlook this, and the latter should've asked more incisive questions. Cheers, Eugene Idiot RTFA more closely... (Score:2) P.S. Did you actually read all 8 pages of the indepth interview? Do not RTFA (Score:5, Insightful) There is no technical comparison of SIP vs XMPP. From indepth one I would expect to see why features of SIP are better or worse than features of XMPP. They don't even list features to compare. Also, they talk about XMPP and ignore Jabber user community, which has recently overgrown ICQ community by amount of users. They talk about interoperability IM-gateway in a future tense, whil most of Jabber users already use interoperability today. For example, my Jabber client doesn't communicate directly to my ICQ or AIM buddies - it does it through the Jabber server instead. I don't wonder they don't talk about personal/SOHO Jabber servers, which some percentage of Jabber users connect to, instead of direct connecting to public server, in a process to communicating with the rest of the world. Of course, Microsoft prefers everyone will connect directly to MSN - they don't like people building communities out of their control. And, yes, IRC is missed. I don't like some features of IRC protocol personally too, but the fact is that IRC is here for many years, has a community, applications, and still good concepts. Well, what do you expect from the guy, who works for Microsoft (the company responsible for so many viruses due to poor architectural design of their products) and Sendmail (the company responsible for so many spam due to poor architectural design of their main product)? I am so disappointed that I wasted my time on RTFA. Re:Where is IRC? (Score:2) Even though I love IRC myself, it wouldn't be much of a IM service for everyone. There are several problems with IRC as an IM. There have been attempt to fix some of the problems with different methods like bots, and serverside modifications. One of the problems are authentication. IRC servers don't give any guaranties by default that a person is what he claims to be. Some time we could count on the hostmask, but that isn't very good when there are large ISPs where many users would have a hostmask that woul What might happen... (Score:3, Insightful) If you're going to draw parallels to email, as so many have done with these discussions in the past, you have to consider right now... how much of the world's mail infrastructure is handled by the 'open standard', SMTP, and how much of it is handled by Exchange? Exchange may be quite popular with corporations, but outside the corporation the servers tend to be a little more standard. You might see a mail going via Exchange all the way to the boundaries of the company's network, then via SMTP to another company, and back to Exchange again. It would be interesting to see this same phenomenon emerge here. It isn't a stretch because Jabber to SIMPLE gateways have been done already, companies such as Altova supporting both in their servers. Wait (Score:4, Insightful) Did I miss an edit in the force, or what? Do these guys actually use IM? Oh, and Yahoo (Score:3, Interesting) For example: without altering my firewall config, I get far far better cam performance with MSN than I do with Yahoo. Interesting point, if one is talking about Microsoft's protocols. (And yes, I *do* use cam for exactly what you are suspecting.) Secondly, what the fuck is this point ahout?: Yahoo has queued messages for years, it's one of the things which I love about Yahoo. MSN is all about re-doing windows in a messenger: same crap all over again, with an improved NetMeeting (which as I said, really has very good video performance). AOL is in my opinion just an add-on, for years rubbish and not much better now. It's just an extension to the AOL 'portal environment' and in its own way a logical extension of the same. OK, but not breathtaking. ICQ and Yahoo though, are very very different: they build real communities, and are NOT JUST ABOUT IM. Yahoo for one -- and yeah I just love this IM -- is just bursting with features, like IMvironments, Archived messages, Queueing, had Cam *way* before other clients even considered it, and has a thriving chat-mode which makes conferencing in NetMeeting look like something out of the Stone Age. Whyowhy doesn't Yahoo *advertise* it's own brilliance? It has so much good stuff, and it behaves like Apple. Invent gobsmackingly cool apps, and then halfheartedly advertise them. And all the while Microsoft papers the planet with adverts which announce a 'brand! new! chat! system!' for windows. Great. Nalfy
https://slashdot.org/story/03/11/27/0014206/microsoft-messenger-architect-on-the-future-of-im
CC-MAIN-2017-34
refinedweb
6,850
70.02
Hi, From: Alexander Motin <m...@freebsd.org> Date: Mon, 13 Jul 2015 18:29:36 +0300 > Hi. > > On 13.07.2015 11:51, Kohji Okuno wrote: >>> On 07/13/15 10:11, Kohji Okuno wrote: >>>> Could you comment on my quesion? >>>> >>>>> I found panic() in scsi_da.c. Please find the following. >>>>> I think we should return with error without panic(). >>>>> What do you think about this? >>>>> >>>>> scsi_da.c: >>>>> 3018 } else if (bp != NULL) { >>>>> 3019 if ((done_ccb->ccb_h.status & CAM_DEV_QFRZN) != 0) >>>>> 3020 panic("REQ_CMP with QFRZN"); >>>>> >>> >>> It looks to me more like an KASSERT() is appropriate here. > > As I can see, this panic() call was added by ken@ about 15 years ago. > I've added him to CC in case he has some idea why it was done. From my > personal opinion I don't see much reasons to allow CAM_DEV_QFRZN to be > returned only together with error. While is may have little sense in > case of successful command completion, I don't think it should be > treated as error. Simply removing this panic is probably a bad idea, > since if it happens device will just remain frozen forever, that will be > will be difficult to diagnose, but I would better just dropped device > freeze in that case same as in case of completion with error. Advertising Thank you for your comment. I have a strange USB HDD. When I access the specified sector, the kernel causes panic("REQ_CMP with QFRZN") always. After I modified the following, I think that I can recover from this state, although the specified sector access fails. This recovery means that I can access other sectors after this recovery. What do you think about my idea? @@ -3016,8 +3016,17 @@ dadone(struct cam_periph *periph, union ccb *done_ccb) /*timeout*/0, /*getcount_only*/0); } else if (bp != NULL) { +#if 0 if ((done_ccb->ccb_h.status & CAM_DEV_QFRZN) != 0) panic("REQ_CMP with QFRZN"); +#else + if ((done_ccb->ccb_h.status & CAM_DEV_QFRZN) != 0) + cam_release_devq(done_ccb->ccb_h.path, + /*relsim_flags*/0, + /*reduction*/0, + /*timeout*/0, + /*getcount_only*/0); +#endif if (state == DA_CCB_DELETE) bp->bio_resid = 0; else Best regards, Kohji Okuno _______________________________________________ freebsd-current@freebsd.org mailing list To unsubscribe, send any mail to "freebsd-current-unsubscr...@freebsd.org"
https://www.mail-archive.com/freebsd-current@freebsd.org/msg161276.html
CC-MAIN-2018-26
refinedweb
360
67.55
: This will generate the following assemblies.. Use the Process class found in the System.Diagnostics namespace. Use the Process class found in the System.Diagnostic namespace. Define the struct you want, for example RECT Then use the Message.GetLPAram method To maximize the main window, you can get the handle of the main window in the new process, and then send a SC_MAXIMIZE system command message to it. You can get the handle through the Process.MainWindowHandle property. To send a message you should use the DllImportAttribute attribute to import the API function. This is a sample code: (from lion_noreply@microsoft.com on microsoft.public.dotnet.framework.windowsforms) Please check out this article from the October 2000 issue of MSDN Magazine. Allen Weng gives the following explanation in a post on the microsoft.public.dotnet.framework.windowsforms newgroup. If what you are looking for is just to intercept and handle generic Windows messages such as WM_NCPAINT or alike, you can override WndProc (). You don't need to use the hook procedure. Here is some code that does this: But if you need to use a hook procedure, be cautious since they might interfere with normal execution of other applications. In some extreme cases, they might bring the whole system down if not processed correctly. Here is the code snippet that shows you how to do implement and use the hook procedure in .NET: To install the hook procedure Bill Zhang (Microsoft) responds to this question in a posting on microsoft.public.dotnet.frameworks.windowsforms newsgroup. Here is a sample prepared by Matthias Heubi that uses interop to access SHGetFileInfo to retrieve icons. You could also use the ExtractIconEx native api via PInvoke to extract the app icon. You can download a working project that uses this code.: When you want to hide it: Use the DllImport attribute that is a member of the System.Runtime.InteropServices namespace. Assume your exported function found in MyDLL.dll has a signature: The code below shows how you can access this function from within C#. You have to get a handle to the desktop and draw on the desktop. This means that whatever you draw will not be automatically refreshed when another window is dragged over it. from a microsoft.public.dotnet.framework.windowsforms posting by Lion Shi (MS) Yes, you can use the FormatMessage Win32 API. Sample projects for C# and VB.NET are enclosed. This is how the declaration looks like: Called like so: | | |
http://www.syncfusion.com/FAQ/WindowsForms/FAQ_c70c.aspx
crawl-002
refinedweb
412
58.48
Buildout recipe for Django Djangorecipe: easy install of Django with buildout With djangorecipe you can manage your django site in a way that is familiar to buildout users. For example: - bin/django to run django instead of bin/python manage.py. - bin/test to run tests instead of bin/python manage.py test yourproject. (Including running coverage “around” your test). - bin/django automatically uses the right django settings. So you can have a development.cfg buildout config and a production.cfg, each telling djangorecipe to use a different django settings module. bin/django will use the right setting automatically, no need to set an environment variable. Djangorecipe is developed on github at, you can submit bug reports there. It is tested with travis-ci and the code quality is checked via landscape.io: Setup You can see an example of how to use the recipe below with some of the most common settings: [buildout] show-picked-versions = true parts = django eggs = yourproject gunicorn develop = . # ^^^ Assumption: the current directory is where you develop 'yourproject'. versions = versions [versions] Django = 1.8.2 gunicorn = 19.3.0 [django] recipe = djangorecipe settings = development eggs = ${buildout:eggs} project = yourproject test = yourproject scripts-with-settings = gunicorn # ^^^ This line generates a bin/gunicorn-with-settings script with # the correct django environment settings variable already set. Earlier versions of djangorecipe used to create a project structure for you, if you wanted it to. Django itself generates good project structures now. Just run bin/django startproject <projectname>. The main directory created is the one where you should place your buildout and probably a setup.py. Startproject creates a manage.py script for you. You can remove it, as the bin/django script that djangorecipe creates is the (almost exact) replacement for it. See django’s documentation for startproject. You can also look at cookiecutter. Supported options The recipe supports the following options. - project - This option sets the name for your project. - settings - You can set the name of the settings file which is to be used with this option. This is useful if you want to have a different production setup from your development setup. It defaults to development. - test - If you want a script in the bin folder to run all the tests for a specific set of apps this is the option you would use. Set this to the list of app labels which you want to be tested. Normally, it is recommended that you use this option and set it to your project’s name. - scripts-with-settings Script names you add to here (like ‘gunicorn’) get a duplicate script created with ‘-with-settings’ after it (so: bin/gunicorn-with-settings). They get the settings environment variable set. At the moment, it is mostly useful for gunicorn, which cannot be run from within the django process anymore. So the script must already be passed the correct settings environment variable. Note: the package the script is in must be in the “eggs” option of your part. So if you use gunicorn, add it there (or add it as a dependency of your project). - eggs - Like most buildout recipes, you can/must pass the eggs (=python packages) you want to be available here. Often you’ll have a list in the [buildout] part and re-use it here by saying ${buildout:eggs}. - coverage - If you set coverage = true, bin/test will start coverage recording before django starts. The coverage library must be importable. See the extra coverage notes further below. The options below are for older projects or special cases mostly: - dotted-settings-path - Use this option to specify a custom settings path to be used. By default, the project and settings option values are concatenated, so for instance myproject.development. dotted-settings-path = somewhere.else.production allows you to customize it. - extra-paths - All paths specified here will be used to extend the default Python path for the bin/* scripts. Use this if you have code somewhere without a proper setup.py. - control-script - The name of the script created in the bin folder. This script is the equivalent of the manage.py Django normally creates. By default it uses the name of the section (the part between the [ ]). Traditionally, the part is called [django]. - initialization - Specify some Python initialization code to be inserted into the control-script. This functionality is very limited. In particular, be aware that leading whitespace is stripped from the code given. - wsgi - An extra script is generated in the bin folder when this is set to true. This is mostly only useful when deploying with apache’s mod_wsgi. The name of the script is the same as the control script, but with .wsgi appended. So often it will be bin/django.wsgi. - wsgi-script - Use this option if you need to overwrite the name of the script above. - deploy_script_extra - In the wsgi deployment script, you sometimes need to wrap the application in a custom wrapper for some cloud providers. This setting allows extra content to be appended to the end of the wsgi script. For instance application = some_extra_wrapper(application). The limits described above for initialization also apply here. - testrunner - This is the name of the testrunner which will be created. It defaults to test. Coverage notes Starting in django 1.7, you cannot use a custom test runner (like django-nose) anymore to automatically run your tests with coverage enabled. The new app initialization mechanism already loads your models.py, for instance, before the test runner gets called. So your models.py shows up as largely untested. With coverage = true, bin/test starts coverage recording before django gets called. It also prints out a report and export xml results (for recording test results in Jenkins, for instance) and html results. Behind the scenes, true is translated to a default of report xml_report html_report. These space-separated function names are called in turn on the coverage instance. See the coverage API docs for the available functions. If you only want a quick report and xml output, you can set coverage = report xml_report instead. Note that you cannot pass options to these functions, like html output location. For that, add a .coveragerc next to your buildout.cfg. See the coverage configuration file docs. Here is an example: [run] omit = */migrations/* *settings.py source = your_app [report] show_missing = true [html] directory = htmlcov [xml] output = coverage.xml> Corner case:, normally bin/django, that is generated). The following 2.2.1 (2016-06-29) - Bugfix for 2.2: bin/test was missing quotes around an option. [reinout] 2.2 (2016-06-29) - Added optional coverage option. Set it to true to automatically run coverage around your django tests. Needed if you used to have a test runner like django-nose run your coverage automatically. Since django 1.7, this doesn’t work anymore. With the new “coverage” option, bin/test does it for you. [reinout] - Automated tests (travis-ci.org) test with django 1.4, 1.8 and 1.9 now. And pypi, python 2.7 and python 3.4. [reinout] 2.1.2 (2015-10-21) - Fixed documentation bug: the readme mentioned script-with-settings instead of scripts-with-settings (note the missing s after script). The correct one is script-with-settings. [tzicatl] 2.1.1 (2015-06-15) - Bugfix: script-entrypoints entry point finding now actually works. 2.1 (2015-06-15) Renamed script-entrypoints option to scripts-with-settings. It accepts script names that would otherwise get generated (like gunicorn) and generates a duplicate script named like bin/gunicorn-with-settings. Technical note: this depends on the scripts being setuptools “console_script entrypoint” scripts. 2.0 (2015-06-10) Removed project generation. Previously, djangorecipe would generate a directory for you from a template, but Django’s own template is more than good enough now. Especially: it generates a subdirectory for your project now. Just run bin/django startproject <projectname>. See django’s documentation for startproject. You can also look at cookiecutter. This also means the projectegg option is now deprecated, it isn’t needed anymore. We aim at django 1.7 and 1.8 now. Django 1.4 still works, (except that that one doesn’t have a good startproject command). Gunicorn doesn’t come with the django manage.py integration, so bin/django run_gunicorn doesn’t work anymore. If you add script-entrypoints = gunicorn to the configuration, we generate a bin/django_env_gunicorn script that is identical to bin/gunicorn, only with the environment correctly set. Note: renamed in 2.1 to ``scripts-with-settings``. This way, you can use the wsgi.py script in your project (copy it from the django docs if needed) with bin/django_env_gunicorn yourproject/wsgi.py just like suggested everywhere. This way you can adjust your wsgi file to your liking and run it with gunicorn. For other wsgi runners (or programs you want to use with the correct environment set), you can add a full entry point to script-entrypoints, like script-entrypoints = gunicorn=gunicorn.app.wsgiapp:run would be the full line for gunicorn. Look up the correct entrypoint in the relevant package’s setup.py. Django’s 1.8 wsgi.py file looks like this, see: import os from django.core.wsgi import get_wsgi_application os.environ.setdefault("DJANGO_SETTINGS_MODULE", "yourproject.settings") application = get_wsgi_application() The wsgilog option has been deprecated, the old apache mod_wsgi script hasn’t been used for a long time. Removed old pth option, previously used for pinax. Pinax uses proper python packages since a long time, so it isn’t needed anymore. 1.11 (2014-11-21) - The dotted-settings-path options was only used in management script. Now it is also used for the generated wsgi file and the test scripts. 1.10 (2014-06-16) - Added dotted-settings-path option. Useful when you want to specify a custom settings path to be used by the manage.main() command. - Renamed deploy_script_extra (with underscores) to deploy-script-extra (with dashes) for consistency with the other options. If the underscore version is found, an exception is raised.. Release History Download Files Download the file for your platform. If you're not sure which to choose, learn more about installing packages.
https://pypi.org/project/djangorecipe/
CC-MAIN-2017-43
refinedweb
1,694
60.61
Write operation Hadoop 2.0 - The client creates the file by calling create() method on DistributedFileSystem. - DistributedFileSystem makes an RPC call to the namenode to create a new file in the filesystem’s namespace, with no blocks associated with it. The namenode performs various checks to make sure the file doesn’t already exist and the client has the right permissions to create the file. If all these checks pass, the namenode makes a record of the new file; otherwise, file creation fails and the client is thrown an IOException. TheDistributedFileSystem returns an FSDataOutputStream for the client to start writing data to datanode. FSDataOutputStream wraps a DFSOutputStream which handles communication with the datanodes and namenode. - default. -. - When the client has finished writing data, it calls close() on the stream.It. Below diagram summarises file write operation in hadoop. Read operation Hadoop 2.0 - The client opens the file by calling open() method on DistributedFileSystem. - DistributedFileSystem makes an RPC call to the namenode to determine location of datanodes where files is stored in form of blocks.For each blocks,the namenode returns address of datanodes(metadata of blocks and datanodes) that have a copy of block. Datanodes are sorted according to proximity(depending of network topology information).. DFSInputStream,which has stored the datanode addresses for the first few blocks in the file, then connects to the first (closest) datanode for the first block in the file. - Data is streamed from the datanode back to the client(in the form of packets) and read() is repeatedly called on the stream by client. - When the end of the block is reached, DFSInputStream will close the connection to the datanode, then find the best datanode for the next block (Step 5 and 6) - When the client has finished reading, it calls close() on the FSDataInputStream (step. Reference :- Hadoop definitive guide by Tom White.
http://www.devinline.com/2015/11/read-and-write-operation-in-hadoop-20.html
CC-MAIN-2017-26
refinedweb
309
63.09
>> a number has same number of set and unset bits in C++ C in Depth: The Complete C Programming Guide for Beginners 45 Lectures 4.5 hours Practical C++: Learn C++ Basics Step by Step 50 Lectures 4.5 hours Master C and Embedded C Programming- Learn as you go 66 Lectures 5.5 hours In this section we will check whether a number has same number of set bits and unset bits or not. Suppose the number 12 is there. Its binary representation is 1100. This has same number of 0s and 1s. The approach is simple. We will check each bit of the number, and if that is 1, then increase set_bit_count, and if it is 0, then increase unset_bit_count. Finally, if they are same, then return true, otherwise false. Example #include <iostream> using namespace std; bool hasSameSetUnset(int n) { int set_count = 0, unset_count = 0; while(n){ if((n & 1) == 1){ set_count++; }else{ unset_count++; } n = n >> 1; //shift to right } if(set_count == unset_count) return true; return false; } int main() { int num = 35; //100011 if(hasSameSetUnset(num)){ cout << "Has same set, unset bits"; }else{ cout << "Not same number of set, unset bits"; } } Output Has same set, unset bits - Related Questions & Answers - Check if a number has two adjacent set bits in C++ - Check if bits of a number has count of consecutive set bits in increasing order in Python - Check if a number has bits in alternate pattern - Set 1 in C++ - Count unset bits of a number in C++ - Check if all bits of a number are set in Python - Check if a number has bits in alternate pattern - Set-2 O(1) Approach in C++ - Check whether the number has only first and last bits set in Python - Next higher number with same number of set bits in C++ - Find the largest number with n set and m unset bits in C++ - Maximum number of contiguous array elements with same number of set bits in C++ - Program to find higher number with same number of set bits as n in Python? - Minimum number using set bits of a given number in C++ - Check if the binary representation of a number has equal number of 0s and 1s in blocks in Python - Maximum sum by adding numbers with same number of set bits in C++ - Number of integers with odd number of set bits in C++ Advertisements
https://www.tutorialspoint.com/check-if-a-number-has-same-number-of-set-and-unset-bits-in-cplusplus
CC-MAIN-2022-40
refinedweb
399
55.61
. Alcatraz is a package manager for Xcode, which allows you to install additional plugins right inside of your IDE. In the past, Xcode hasn’t been very friendly with add-ons, but with the release of Xcode 5, they have eased up on what they allow developers to add-on to the IDE. Installing Alcatraz is as easy using the following command. curl -fsSL | sh Once complete, it will ask you to restart Xcode. You will now see a “Package Manager” option after clicking “Window” -> “Package Manager” as shown in Figure 1. Once launched, you have the option to select from a variety of Xcode plugins as shown in Figure 2 below: One of my favorite plug-ins for XCode is called BBUDebuggerTuckAway, which allows you to dismiss the debug output window as you start typing for extra screen space. A sample of this is shown in Figure 3. After taking a look at a package manger for Xcode, you may be asking if a dependency manager exists? Absolutely! Cocoapods is a well-known dependency manager for Objective-C projects. There are literally thousands of libraries currently available. Sometimes programmers forget how useful it really is. Cocoapods was built with Ruby and is easy to install using the default Ruby available on OS X. You can simply install it with the following command in your terminal window: sudo gem install cocoapods Once installed you should see the following in your terminal window (the actual version number depends on when you install it): Successfully installed cocoapods-0.33.1 Parsing documentation for cocoapods-0.33.1 1 gem installed Go ahead and open Xcode and create a new “Single-View Application” and give it a name. I named mine TDNiOSApp. Navigate inside of the project you just created using the terminal and type the following command: pod init You will now have a Podfile and will need to edit it with the following command: open -a Xcode Podfile Let’s edit the file and use one of my favorite libraries called AFNetworking as shown in Figure 4. Figure 4 : Your completed Podfile should look like this except for the project name. It is worth noting that even though our project targets 7.1, we simply added 7.0 for the platform line. We also added an additional line to install the Pod and gave it the version number. In this case, we want the latest 2.x version. Finally, save the file and type the following command to install the pod. pod install You will get a warning message that says the following: From now on use TDNiOSApp.xcworkspace. Go ahead and close your current project and launch the new workspace. You will see that we now have a “Pods” project along with AFNetworking already installed for us along with our original project as shown in Figure 5. We can now reference the new library as shown with the red arrow and hit CMD-B to build our project. It should build successfully and we can begin adding code specific to that library. FontasticIcons is simply an Objective-C wrapper for iconic fonts. Those of you that work with CSS have probably been using iconic fonts for some time and there is plenty of reasons to do so. Some of those reasons include: Since you already have CocoaPods installed, you can use FontasticIcons in your Xcode project by adding the following line in your Podfile as described earlier and then installing the Pod: pod 'FontasticIcons' Now if you open your xcodeproj file and navigate inside of Pods -> FontasticIcons -> Resources, you should pay special attention to the .string files. These files will let you know what icons are available as shown in Figure 6* below: You can now reference them in code by adding code similar to the the following snippet that creates a button such as using the “camera” icon. UIButton *button = [UIButton buttonWithType:UIButtonTypeRoundedRect]; [self.view addSubview:button]; button.frame = CGRectMake(80.0, 210.0, 160.0, 40.0); FIIcon *icon = [FIEntypoIcon cameraIcon]; FIIconLayer *layer = [FIIconLayer new]; layer.icon = icon; layer.frame = button.bounds; layer.iconColor = [UIColor blueColor]; [button.layer addSublayer:layer]; Looking at this code, we can see that we are creating a simple button and setting some simple properties on it. We then call set our icon with the cameraIcon on line 5. The only thing left to do is to add it as a layer and set the color. Which will generate the following screen as shown in Figure 7. If our customer decides they want it to be green instead of blue, then you would only have to change the following line instead of adding yet another image to the project: layer.iconColor = [UIColor greenColor]; The new image is displayed in Figure 8. The same could be said for the image dimensions. You may need 4 icons with different resolutions and you could do this just as easily by modifying the button.frame code. Besides all of the icons already bundled, there is also a commercial-free library called Font Awesome that integrates nicely with FontasticIcons. As a matter of fact, some of the icons are already added to the default installation and can be found in the FontAwesomeRegular.strings file in the same Resources folder that we looked at earlier. However, you probably want to update it as the latest version includes 439 icons! Prepo takes the hassle out of preparing app icons and artwork with its simple user interface and drag and drop file support. You simply add in your icon artwork at 1024×1024 and it generates icons for iOS 1-7, iPhone or iPad and even gives you the option to add a “Shine” look and feel to it. After you are ready, you simply click the “Export” button and you have images available to drop directly into XCode. I’ve shown an icon sample in Figure 9 below: You may optionally click the “Copy Plist” button and generate the code needed for XCode if you are not using the asset catalog as shown below. <key>CFBundleIconFiles</key> <array> <string>Icon.png</string> <string>Icon@2x.png</string> <string>Icon-60.png</string> <string>Icon-60@2x.png</string> <string>Icon-72.png</string> <string>Icon-72@2x.png</string> <string>Icon-76.png</string> <string>Icon-76@2x.png</string> <string>Icon-Small-50.png</string> <string>Icon-Small-50@2x.png</string> <string>Icon-Spotlight-40.png</string> <string>Icon-Spotlight-40@2x.png</string> <string>Icon-Small.png</string> <string>Icon-Small@2x.png</string> </array> There is also a @2x artwork option that will automatically convert and rename @2x images to @1x. Also, a paid plan is available, but the functionality described in this article is completely free. CocoaLumberJack is basically a logging framework and a replacment for NSLog. It does much more than just log to the console; you can log messages to a file, console, or even a database if you wish. A lot of iOS developers love the ability to remove the log statements out of the release build and archive log files to review later. With CocoaPods installed, you can use CocoaLumberJack in your Xcode project by adding the following line in your Podfile as described earlier and then installing the Pod: pod 'CocoaLumberjack' Head over to your AppName-Prefix.pch file and add the following code: #import <Availability.h> #ifndef __IPHONE_5_0 #warning "This project uses features only available in iOS SDK 5.0 and later." #endif #ifdef __OBJC__ #import <UIKit/UIKit.h> #import <Foundation/Foundation.h> #import "DDLog.h" #endif #ifdef DEBUG static const int ddLogLevel = LOG_LEVEL_VERBOSE; #else static const int ddLogLevel = LOG_LEVEL_ERROR; #endif This will ensure that the macros contained in DDLog.h are available throughout the project, as well as set the ddLogLevel required by the framework. If we are running in debug mode, then we will set our log level to verbose. If we are in release mode then it will be logged as an actual error. Switch back to your AppDelegate and add the following header files. #import "DDASLLogger.h" #import "DDTTYLogger.h" #import "DDFileLogger.h" You will also need to add the following code to your application delegate’s application:didFinishLaunchingWithOptions: method. // Override point for customization after application launch. [DDLog addLogger:[DDASLLogger sharedInstance]]; [DDLog addLogger:[DDTTYLogger sharedInstance]]; return YES; Switch over to your ViewController and add the following code: DDLogError(@"This is an error."); DDLogWarn(@"This is a warning."); DDLogInfo(@"This is a message."); DDLogVerbose(@"This is a verbose message."); If you run the app, then you will see the following in your console window: 2014-07-02 18:11:32:373 TDNiOSApp[16749:60b] This is an error. 2014-07-02 18:11:32:374 TDNiOSApp[16749:60b] This is a warning. 2014-07-02 18:11:32:374 TDNiOSApp[16749:60b] This is a message. 2014-07-02 18:11:32:374 TDNiOSApp[16749:60b] This is a verbose message. If you switch the scheme by selecting Product -> Scheme -> Edit Scheme and select “Release” as shown in Figure 10 then you will only get the first error message. Figure 10 : Switch from Debug to Release in the Build Configuration. The real magic comes in the form of Log files. Add the following code to your application delegate’s application:didFinishLaunchingWithOptions: method. DDFileLogger *fileLogger = [[DDFileLogger alloc] init]; [fileLogger setRollingFrequency:60 * 60 * 24]; [[fileLogger logFileManager] setMaximumNumberOfLogFiles:7]; [DDLog addLogger:fileLogger]; Now if you deployed this in your simulator, navigate to the following directory: ~/Library/Application Support/iPhone Simulator/#ios simulator version no#/Applications/#yourapplication uuid#/Library/Caches/Logs/company_identifier_appname.date.log If you open the file with the default program, then you will see what is shown in Figure 11. We’ve looked at a variety of tools that can enhance your productivity developing apps using XCode. We started at looking at a package manager called Alcatraz, which makes it very easy to find, install and keep packages updated. Then we looked at our first dependency manager called CocoaPods which has thousands of libraries readily available. Now that we have a great way for developers to find what they need, they can rely on the help of FontasticIcons to get a great icon free of charge and Presto to resize it appropriately for all devices. We wrapped up with CocoaLumberJack which is a super-charged logging framework on steroids! With these 5 tools and XCode, you have nothing stopping you from creating awesome apps. So what are you waiting for, go build something awesome! Now that you have seen all these useful tools, also check out our control suite that enriches the iOS SDK at.
https://developer.telerik.com/featured/5-killer-tools-ios-developers/
CC-MAIN-2018-22
refinedweb
1,777
54.93
Introduction to the JSP Java Server Pages source code. Introduction to JSP Java Server Pages or JSP... applications. JSP Tutorials - Introducing Java Server Pages... to embed java code in html pages. JSP files are finally compiled index Fortran Tutorials Java Tutorials Java Applet Tutorials Java Swing and AWT Tutorials JavaBeans Tutorials... Flash Tutorials JSP Tutorials Perl Tutorials java server pages - Java Server Faces Questions java server pages code for jsp login page if user forgot password reminding password with hint question.the table login is created in database Hi Friend, Try the following code: 1)email.jsp: Enter Email Java Server Pages(JSP) Java Server Pages(JSP) JavaServer Pages (JSP) technology provides... and platform-independent. The JavaServer Pages specification extends the Java Servlet API Tutorials - Java Server Pages Technology Tutorials - Java Server Pages Technology  ... server-side Java a very exciting area. JavaServer Pages - An Overview... web servers, and its underlying server components. That is, JSP pages JSP - Java Server Pages JSP JSP stands for Java Server Pages. JSP is Java technologies for creating dynamic web applications. JSP allows the developer to embed Java code inside HTML. It makes the development of dynamic web application very easy in Java Java Server Pages (JSP) Java Server Pages (JSP) In this tutorial we are going to give you over view of JSP. What is JSP ? JSP is java based technology by using that you can... Dynamic Elements in HTML Pages itself. JSP are always compiled before it's java pages run java pages run how do we run jsp file in the browser. do we need... Tomcat Server to run the jsp code over it. Follow these steps to run the simple... a jsp file:'hello.jsp' <%@page language="java"%> <%String st="Hello Tutorial | J2ME Tutorial | JSP Tutorial | Core Java Tutorial...; | Java Servlets Tutorial | Jsp Tutorials | Java Swing Tutorials... applications, mobile applications, batch processing applications. Java is used How to upload files to server using JSP/Servlet? How to upload files to server using JSP/Servlet? How to upload files to server using JSP/Servlet Web Server pages to the client. Apache and Microsoft's Internet Information Server (IIS) are two leading web servers. In the case of java language, a web server is used to support Servlet and JSP web components. A web server does not provide support get files list - Ajax get files list Please,friend how to get files list with directories from ftp server or local files on web browser. Thanks, Tin Linn Soe Hi friend, Index of Files Index - Java Server Faces Questions jsp how to copy files using jsp JSP Tutorials with working source code. JSP stands for Java Server Pages and is a technology...: Introduction to JSP Java Server Pages or JSP for short is Sun's solution...; JSP Tutorials - Introducing Java Server Pages Technology Iterating through pages with a sample java/ jsp code or technical details of how to iterate through the pages...". And the next 100 records will get displayed in the page2. The pages go on wrt Web Server pages and files), SMTP server (to support mail services), FTP server ( for files... e-mail and building and publishing web pages. A web server works on a client... program. While talking about Java language then a web server is a server JSP Training JSP Training Java Server Pages... Model 1 architecture Model 2 architecture Java Server Pages The Role... of the JSP lifecycle, JSP syntax and architecture, fundamentals of using the latest Java FTP Server : List Files and Directories In this tutorial we will discuss how to list files and directories on FTP server using java files upload to apache ftp server - Ajax files upload to apache ftp server Please, how to upload multiple files to apache ftp server using ajax . I want to upload files using drag drop style in javascript. I am okay for uploading files to ftpserver using command What is JSP? types of documents to server the web client. The JSP technology allows the programmers to embed Java code into html (.jsp) page. Java Server Pages are first...The JSP Java Server Pages Technology: Java Server Pages is a technology index of javaprogram index of javaprogram what is the step of learning java. i am not asking syllabus am i am asking the step of program to teach a pesonal student. To learn java, please visit the following link: Java Tutorial files . Please visit the following link: articles given belows: Java Server Pages Technology JSP Architecture JSP...JSP Tutorials JavaServer Pages (JSP) is a server-side programming technology... for creating the UI for web applications easily. JSP pages are first compiled into Java files files write a java program to calculate the time taken to read a given number of files. file names should be given at command line. Hello Friend, Try the following code: import java.io.*; import java.util.*; class How to avoid Java Code in JSP-Files? How to avoid Java Code in JSP-Files? How to avoid Java Code in JSP-Files Uploading Files - Java Server Faces Questions Tomcat an Introduction ; Apache Tomcat: Apache Tomcat server is one of the most popular open source web server that implements the Java Servlet and the JavaServer Pages (JSP) specifications from Sun Microsystem to provide the platform to run Java code on a web upload files to apache ftp server - Ajax files to apache ftp server. I am using ajax framework for j2ee.Now I am okay... server using javascript and I don't know how to integrate javascript and java...."); // //Get the list of files on the remote server FTP Server: List all files name This tutorial represents how to list the entire files name on FTP server using java JSP Error Pages JSP Error Pages  .... To know more about JSP error pages click on the link: http... a server's default exception page. Even though most of the well-designed JSF Tutorial for Beginners ; JAVA SERVER FACES ( not to be confused with JSP..JAVA SERVER PAGES.... --------- JAVA SERVER FACES is the equivalent of ASP.Net's... in Swing, etc. ( 'Mastering Java Server Faces' by Bill Dudney & others..Wiley JSP JSP What is JSP? JavaServer Pages (JSP) is a server... document types. Hi, Here is the answer. JavaServer Pages (JSP) is a server-side Java technology an extension to the Java servlet technology using session variable determine the no of pages viewd by the user /java/jsp/trackuserSession.html Thanks...using session variable determine the no of pages viewd by the user using session variable determine the no of pages viewd by the user Hi Uploading Multiple Files Using Jsp Uploading Multiple Files Using Jsp  ... to understand how you can upload multiple files by using the Jsp. We should avoid... logic, but at least we should know how we can use a java code inside the jsp page Introduction to JSP ; Developing first JSP Java Server Pages... as a small stand-alone server for testing servlets and JSP pages before... for testing servlets and JSP pages, or can be integrated into the Apache Web server tomcat server - Java Server Faces Questions server error */ Jul 28, 2008 10:42:07 PM...) at org.apache.coyote.tomcat5.CoyoteConnector.initialize(CoyoteConnector. java:1326...) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl. java:39 EasyEclipse Server Java EasyEclipse Server Java EasyEclipse Server Java For development of server-side Java applications, such as JavaServer Pages, EJBs and Web Services. EasyEclipse JSP ARCHITECTURE ; JSP pages are high level extension of servlet and it enable the developers to embed java code in html pages. JSP files are finally compiled into a servlet by the JSP engine. Compiled Java server Faces Books Java server Faces Books Java server Faces...; Core Java server Faces index - Java Beginners index Hi could you pls help me with this two programs they go hand in hand. Write a Java GUI application called Index.java that inputs several... the number of occurrences of the character in the text. Write a Java GUI jsf - Java Server Faces Questions server javax.faces.CONFIG_FILES /WEB-INF/faces.../ The requested resource (/SimpleLoginPage/) is not available. Mycode for xml files... /pages/login.jsp #{SimpleLogin.CheckValidUser} success Description of GlassFish Application Server the latest versions of technologies such as JavaServer Pages (JSP) 2.1... Description of GlassFish Application Server GlassFish is a free, open source application server JSP JSP Hi, What is JSP? What is the use of JSP? Thanks Hi, JSP Stands for Java Server Pages. It is Java technology for developing web applications. JSP is very easy to learn and it allows the developers to use Java About tld files - JSP-Interview Questions is the difference between tld files and java beans? Hi Friend, The .tld... the tag files or to implement thecustom tags with tag handlers in Java..._Tag_Library.shtml Difference between tags of .tld files and java beans: 1 Developing JSP files Developing JSP Files This login and registration application uses JSP for presentation layer. Our application has 10 JSP pages. login.jsp success.jsp useraccount.jsp Java arraylist index() Function Java arrayList has index for each added element. This index starts from 0. arrayList values can be retrieved by the get(index) method. Example of Java Arraylist Index() Function import linking of pages in java a register page how can i do this in java (applet ie frames)i made to pages one for login and other for register pls rply Java linking two pages...linking of pages in java i made a login form accountno: Tomcat Server is Tomcat Server? Tomcat Server is a pure Java HTTP web server environment to run Java code. Tomcat Server is an open source web server. Tomcat Server... container. It applies the Sun Microsystems Java Servlet and JavaServer Pages JSP Arraylist Index Description: ArrayList is a class and a member of Java Collection Framework. It is the resizable-array and permit all element including the null.... Code for JSP Arraylist index.jsp & postindex.jsp: index.jsp <%@page passing - Java Server Faces Questions JSP How i can create a directory from the JSP program ? Hi friend, Code to Create a directory using JSP : For more information on JSP visit to : Thanks here i need to downloadall the pages from the websit by giving url - JSP-Servlet the pages from that website. It should take the depth of retrieval from the user input. All the files/pages must be stored in a folder...here i need to downloadall the pages from the websit by giving url   JSP Architecture FTP server FTP server How to store a series of files in a ftp server using java jsf - Java Server Faces Questions jsf1.1,copied all jar files from jsf/lib directory to tomcat/common/lib directory,and copied "jstl.jar" and "standard.jar" from tomcat/webapps/jsp-examples/WEB-INF/lib... the classpath for these jar files ,how to set the classpath for these jarfiles Java JAR Files Java JAR Files  ... In computing, a JAR file (or Java ARchive) is used for aggregating many files.... WAR (Web Application aRchive) files store XML files, java classes jsp - Java Server Faces Questions jsp how to lock and unlock a folder by using jsp programming? thank u in advance server - JSP-Servlet server - set the environment variable How to set the environment variable in Java Servlet Hi,To set the environment variables right click on the My Computer->properties- >advance->Environment Variables-> about threading in java - Java Server Faces Questions need to index some files and URL's every day at certain time.. so plz help me jsp - Java Server Faces Questions in my JSP page .please tell me the code Hi friend, Please give... information : Thanks Hello Tarun You mean pass the value to another jsp or what? Post clearly exactly what you want Create Web Page with jsp server. JSP simply application that work with Java inside HTML pages. Now we... the current date/time for display on JSP page. Start the tomcat server, open... Web Page with jsp   jsp - Java Server Faces Questions jsp Thanks for answering previous questions . Q)I need a code to draw line graph in jsp Thank u Happy new year Hi friend, Code to help in solving the problem : Thanks - Java Server Faces Questions JSP how to set a lock to the folder by using jsp? Hi friend, Code to help in solving the problem. FileInputStream in = new FileInputStream(file) { try { java.nio.channels.FileLock lock including index in java regular expression including index in java regular expression Hi, I am using java regular expression to merge using underscore consecutive capatalized words e.g., "New York" (after merging "New_York") or words that has accented characters JSF - Java Server Faces Tutorials JSF - Java Server Faces Tutorials Complete Java Server Faces (JSF... language, Messages etc... Java Server Faces (JSF) is a great... effort brought a new technology named Java Server Faces (JSF... like examples and put jsp file into it. 7)Then start the tomcat server Developing JSP, Java and Configuration for Hello World Application Writing JSP, Java and Configuration for Hello World Application In this section we will write JSP, Java and required configuration files for our Struts 2 Hello World application. Now Creating a String Creating a String In jsp we create a string as we does in a java. In jsp we can declare it inside the declaration directive or a scriptlet directive Tomcat Server Tomcat Server Why my tomcat server installation stop at using:jvm c:\program files\java\jdk 1.6.0\bin\client\jvm.dll. Even though i trying to install several times. please help me.... Installing Tomcat Server JSP FUNDAMENTALS ; JSP termed as Java Server Pages is a technology introduced by Sun...JSP FUNDAMENTALS  ... of them itself define the JSP i.e. JSP separates the presentation logic jsp - Java Server Faces Questions Programming Books server: Microsoft ASP, PHP3, Java servlets, and JavaServer Pages? (JSP[1...; More Servlets and Java Server Pages... guide to creating custom JSP tag libraries for server-side Java applications What is a Tag Library in JSP explanation of the Tag Library in JSP. In the Java Server Pages Technology, multiple actions are accessed by using the tags of the JSP whether the tag is standard... What is a Tag Library in JSP   web server web server we need to restart tomcat server when we upload java class file but no need to restart when we upload jsp file. why
http://www.roseindia.net/tutorialhelp/comment/44807
CC-MAIN-2014-49
refinedweb
2,403
56.55
turquoise wall clock clocks uk aqua blue swimming pool. click to view additional images turquoise wall clock large antique metal oval. turquoise wall clock big n. silver wall clock decorative round turquoise clocks uk handmade glass mosaic quiet motion design by. import collection wall clock available at turquoise clocks uk. turquoise wall clock large antique metal oval i m a fiesta. turquoise wall clock oversized graves kitchen. turquoise wall clock teal blue shiny glitter print effect sparkle luxury backdrop vintage clocks uk e. orbit 8 wall clock turquoise large rusty round metal retro. turquoise wall clock large. turquoise wall clock blue. yin yang wall clock turquoise canada oriental furniture. turquoise plastic wall clock by art of mosaic x 1 inch. wall clock blue green turquoise clocks uk. starfish wall clock turquoise clocks uk 2. mixed geometrical tones wall clock 5 colors modern decor geometric art turquoise amazon. medallion bright teal round clock turquoise wall large rusty metal. classy square wall clock glass turquoise battery operated oversized contemporary clocks. colorful wall clock with thin frame promotional turquoise large round. industrial reed wall clock turquoise large antique metal oval. turquoise wall clock clocks uk scuba diving office. wall clock i pare acrylic digital clocks turquoise large rusty round metal. blue wall clocks medium image for wonderful clock turquoise large rusty round metal navy na. new large indoor outdoor wooden decorative rustic vintage country wall clocks turquoise clock metal. turquoise wall clock for room decoration amazon. modern wall clocks sticker clock fashion home decoration numerals for turquoise uk. turquoise wall clock large colourful blue white. sea glass beach driftwood ocean square wall clock turquoise blue.
http://wearethefarm.co/turquoise-wall-clock/
CC-MAIN-2019-04
refinedweb
272
78.55
Introducing Essence#: A Smalltalk-based Language for .NET - | - - - - - - Read later Reading List As programming languages become more powerful, they also become more complicated. To an experienced developer, C# holds few surprises. But to a novice who needs to write code for his non-IT job, it can be a bit daunting. This scenario led to the creation of Essence#. Alan Lovejoy created Essence# as a superset of syntax found in ANSI-Standard Smalltalk. As the # suffix implies, this is a .NET compatible language. More specifically, it runs on top of the Dynamic Language Runtime (DLR). Recently we spoke with Alan Lovejoy, creator of Essence# about his creation. InfoQ: Before we get into the details of the language, can we talk a little about the history of Essence#? Was this originally a research project or was it built for a specific use case? Alan Lovejoy: Essence# was built for a specific use case: Trading financial markets. I and my business partners use it to write “trading plan scripts” that are executed whenever a trading signal (to buy or to sell) is generated by our market analyst software. A particular trading plan script can generate a trading plan that is appropriate to the situation at the moment the trading signal is issued. For example, it may vary the size of the position based on the current balance in your trading account, or use 3 profit targets instead of just one because all trends are in the same direction (all trends up or all trends down,) or it may optionally use timed exits because the trend on the daily chart, the trend on the hourly chart and the trend on the 15-minute chart conflict with each other and it’s a narrow range day. We would also like to make it possible to write our market analyst software in Essence#, but that’s for the future. My trading partners are not programmers, and so they needed a programming language that they could use that didn’t require them to master computer science. So they left the computer science up to me :-). Essence# meets their needs. We feel it’s easier than TradeStation’s EasyLanguage, which is used for similar purposes, and it’s far easier than C#, which is what everyone else must use in order to write trading strategies and trading plan generation logic on NinjaTrader, which is what we use. The fact that NinjaTrader is implemented in C# is the reason I implemented Essence# as a .Net language, using the Microsoft Dynamic Language Runtime. InfoQ: At InfoQ we have a wide mix of readers. The C# and Java developers tend to think in terms of classes with a fixed set of features while others of objects that can and should be extended at runtime. And then we have some in the middle, like Python where you can modify functionality at runtime but you’ll be accused of “monkey-patching” if you do. What is the correct mindset for working with Essence#? Alan: That completely depends on the use case. Essence# can be used as though it were a language like Java or C#, or as though it were a language such as Python, or as though it were a language such as Smalltalk. The best use case for being fully and shamelessly dynamic is hosting software development tools in the language, so that a programmer can use the development tools to operate on and change a program while the program is running. I haven’t tried that yet, but I designed the language to support it. My language design philosophy is that a programming language should make it possible to do whatever you want, but should do so using as little syntax as possible. I view the programmer as a professional and artist, not as a cog in a machine who shouldn’t be trusted. That’s why I designed Essence# to use an optimistic typing paradigm which assumes that the programmer is innocent until proven guilty. In other words, Essence# is dynamically typed. There is no static type checking that can be specified in the syntax of the language, and the Essence# compiler does not attempt to enforce type safety at compile time. But Essence# is nevertheless strongly typed, although type safety is enforced solely at run time. InfoQ: Sitting between Essence# and the actual runtime is the DLR or Dynamic Language Runtime. Some of our readers are familiar with the DLR from its use in IronPython and IronRuby. How heavily do you use the DLR? Do you see it as a collection of helpful functions or is more core to how Essence# works? Alan: I use the DLR rather heavily, but I have also gone my own way in some respects. I use LINQ expressions to generate all code. I use the DLR’s ParameterExpressions for all method/function parameters, and for all stack-resident variables. I rely on the DLR closure implementation for the Essence# lambda functions. And I use the DynamicMetaObject protocol: Essence# objects implement the IDynamicMetaObjectProvider interface, and so can be sent messages by other languages that also use the DLR’s DynamicMetaObject protocol. And Essence# uses the DynamicMetaObject protocol in order to operate on any “foreign” objects it may encounter. The Essence# compiler generates DLR dynamic call site for each and every message send, regardless of the receiver. The compiler does not and cannot know the type of the object that is receiving a message, so it just emits a DLR CallSite for all message sends, and the CallSite for a message send is always an instance of the ESMessageSendBinder class. An ESMessageSendBinder figures out, at run time, how to implement the message send. That’s done one way in the case of native Essence# objects, done another way in the case of the CLR primitive types, done yet another way for any non-essence# objects that implement the IDynamicMetaObjectProvider interface (using the protocol of DynamicMetaObjects and the DLR’s standard set of DynamicMetaObjectBinders as fallback binders,) and done yet another way for all other objects (using LINQ.Expression trees.) The bottom line is that Essence# objects are fully interoperable with code written in other .Net programming languages. The methods of Essence# objects can be called by foreign code (however, the fields of Essence# objects are never accessible externally, not even in the case of other Essence# code.) And Essence# code can instantiate instances of foreign types, can access the fields and/or properties of those types, and can invoke any methods defined by those types. In fact, that’s how we are able to use it with NinjaTrader. Language Highlights To fully review Essence# we would have to also review Smalltalk, so instead we are just going to look at some of the features and design concepts that may be novel to C# developers. Namespaces in Essence# Namespaces in Essence# are unlike anything you see in C# or VB programming. Rather than just being part of a name, they are actual objects that one can interact with. In addition to classes, programmers can store loose variables, constants, and functions in a namespace. You may find it beneficial to think of this as a formalization of the pseudo-namespace conventions used by JavaScript libraries such as jQuery. There are four primary namespaces in any Essence# project. The first is the Root namespace, which is the parent of all other namespaces. Under this is the default namespace, which is where “all of the core system classes reside”. Unless you specify a different namespace, all code is executed in this context. For working with external libraries, there is a CLR namespace and its children. From the documentation: The "CLR" namespace is the default parent namespace for all root namespaces known to .Net and the CLR. Unless overridden by the program, all namespaces that are direct or indirect children of the CLR namespace are automatically bound to a .Net/CLR namespace with the corresponding qualified path name, where "corresponding" means "without the prefix Root.CLR." Consequently, there is a default, standard qualified name for the Essence# class that corresponds to any type (struct or class) known to the CLR. (The runtime system makes it appear as though there is always an Essence# class that shadows/represents/corresponds to every CLR type, although what it actually does is to create any such classes dynamically, when and as needed--but only once, of course.) The last of the primary namespaces is called “Undeclared”. If you try to reference an undeclared variable, and don’t have the compiler set to abort in that scenario (Option Explicit Off in VB parlance), then the undeclared variables are stored in this namespace. Items stored in a namespace must have one of three visible levels. “Public” is the same as public in any other language. “Local” is similar Java’s “package private” in that anything in the same namespace can access to the namespace entry. The final option is “InHierarchy”, which is like Local expect child namespaces can also access the entry. Object State Architecture This is a hard concept to introduce because we don’t have a commonly used term to refer to a “class of classes”. Haskell has the term “Type Classes”, but that implies more than we want to say. If we were to use the biological classification system, the next level above “class” would be “phylum”. In Essence# it is called “Object State Architecture”. If we were to apply the concept of object state architectures to C# we would find six basic architectures: primitives, value types, reference types, finalizable reference types, arrays, and static classes. Each of these architectures has specific implications when it comes to runtime issues such as memory layout and garbage collection. In Essence#, most of the classes will derive from one of the following object state architectures. - Abstract: A class whose instance architecture is #Abstract cannot have any instances. - Stateless: The instances of a class whose instance architecture is #Stateless cannot have any state at all. For example, the class Object. - NamedSlots: The instances of a class whose instance architecture is #NamedSlots can have named instance variables ("fields" in CLR-speak.) The instance variables ("fields") are dynamically typed; they work as though they had the C# type "Dynamic." (Note that there are more specific object state architectures that can also have named instance variables. So #NamedSlots is just the most abstract or general case.) - IndexedObjectSlots: The instances of a class whose instance architecture is #IndexedObjectSlots can have any number of indexable slots--including none at all. They can also optionally have named instance variables. In both cases, the slots work as though they had the C# type "Dynamic." Such objects are the Essence# equivalent of C# object arrays. - Indexed[Type]Slots: These represent arrays of various types. The current list is Byte, Char, HalfWord, Word, LongWord, SinglePrecision, DoublePrecision, and QuadPrecision (128-bit floating point). There is a variety of system architectures as well. These architectures are used for the language’s infrastructure rather than for custom types. Some of the more interesting architectures include: - Message: A Message instance specifies a message that was or could be sent, along with the message arguments, if any. Instances are created by the run time system when and as needed, although application code may also create and use instances. Message instances cannot have programmer-accessible named instance variables. - Namespace: Namespace instances serve as dynamic namespaces at runtime (See the documentation on namespaces for a far more detailed description]). Instances may optionally have programmer-accessible named instance variables. - Pathname: A Pathname instance serves as a hierarchical key whose elements are Strings. It's used for identifying namespaces, file pathnames, URLs, etc. Instances may optionally have programmer-accessible named instance variables. - Block: A block is an anonymous function with full closure semantics. The implementation uses CLR delegates. Blocks cannot have programmer-accessible named instance variables. - Method: A method is executable code that runs in the context of a specific class, with full access to the internal state of the distinguished object that receives the message that invokes the method. Methods cannot have programmer-accessible named instance variables. - Behavior: A Behavior is a proto-class. There can actually be instances--it's not abstract. Instances may optionally have programmer-accessible named instance variables. (See the documentation on classes for a more detailed description]). - Class: A Class is a full Essence# class which is a subclass of Behavior, is an instance of a Metaclass, and whose instances (if it's allowed to have any) can be an object (value) of any type. The term 'class' is usually intended to refer to an (indirect) instance of the class Class, but technically can refer to any Object that can create instances of itself, such as a Behavior or a Metclass (i.e., any instance of Behavior or anything that inherits from Behavior.) Instances may optionally have programmer-accessible named instance variables. (See the documentation on classes for a more detailed description]). - Metaclass: A Metaclass is an Essence# class which is a direct subclass of the (Essence#) class Behavior. A Metaclass is an instance of the class Metaclass, and its instances must be Classes. A Metaclass can have only one instance which is called either the canonical instance or the sole instance. Note that the superclass of the Metaclass of any root Behavior (e.g., the metaclass of class Object) is (and must be) the class Class. Instances may optionally have programmer-accessible named instance variables. (See the documentation on classes for a more detailed description]). - BehavioralTrait: A Trait is a composable unit of behavior. Traits can be "used" by a class or by another Trait with the effect of adding the methods defined (or used) by the Trait to the method dictionary of the using class or of the using Trait. A BehavioralTrait is a Trait usable by any BehavioralTrait or by any Behavior (i.e., by any instance of the class BehavioralTrait, or by any instance of the class Behavior, or by any instance of any subclass of either the class BehavioralTrait or of the class Behavior.) - HostSystemObject: A "host system object" is simply an instance of any CLR type which is not a formal part of the Essence# runtime system. One of the requirements for an Essence# class to represent a CLR type (which may or may not be a "class" as the CLR defines that term) is that its instance type must be #HostSystemObject. Traits Traits are Essence#’s answer to multiple inheritance. Traits are sets of functionality that can be imported by a class, but they are more than just cut-and-paste methods handled by the compiler. So first, let us talk about how they differ from normal methods. With one exception, methods imported from a trait should be indistinguishable from those implemented locally by the importing class or trait. The exception is that methods defined in a trait bind to non-local variables (e.g, class variables) based on the namespace environment of the trait that defines the methods, and not based on the namespace environment of the class or trait that uses (“imports”) those methods. Currently Essence# does not allow instance variables to be defined or referenced by traits. That may change in the future. Traits don’t have to be used entirely as-is; they can be combined to create new traits. Combining traits isn’t like combining interfaces where you have to include every method from each source. Instead, you have to work with three operators. The '+' operator allows you to combine the unique methods of two traits. As a way of dealing with name collisions, if a method is defined in both traits being combined with the '+' operator, the method is excluded. Since that is probably not what you want to happen, you can use the '-' operator to remove the unwanted method from one trait so that the method will be expressed by the other trait. Or you can use the '@' operator to rename the method on one of the base traits. Essence# is available on CodePlex under the Simplified BSD License. About the Interviewee Alan Lovejoy is currently a discretionary trader specializing in automated trading systems. Before becoming a trader, he worked for many years in the software industry, initially as a programmer, and then as a software engineer, software team lead, software/solution architect, enterprise architect and finally as Director Of Engineering And Architecture for a small startup company. He worked on trading/financial software for BankAmerica and JP Morgan. His first exposure to Object-Oriented programming was using Smalltalk for BankAmerica on a prototype of a foreign-exchange trading platform, staring in November 1985. In addition to Essence#, he is the author the Chronos Date/Time Library (Smalltalk,) and of “Smalltalk: Getting The Message,” a primer/tutorial for Smalltalk. Rate this Article - Editor Review - Chief Editor Action
https://www.infoq.com/articles/Introducing-Essence-Sharp
CC-MAIN-2018-30
refinedweb
2,814
62.17
[SOLVED] QSerialPort::waitForBytesWritten() returns false I upgraded from Qt 5.2.0 to 5.3 - now a piece of code that worked before does not work. My serial port lives in a worker thread, running without an event loop. To write data to the port, I'm calling write() and then waitForBytesWritten() with a 50 msecs timeout. It seems like the calls actually do "work" (the data is written). BUT QSerialPort reports failures. The first waitForBytesWritten() returns false, the error code is 12 and the description is "Unknown error". Every attempt after that still returns false, error code still 12 and description is "Overlapped I/O operation is in progress." Looks like it might be this bug: It's a bit hard to follow the discussion there but am I right in understanding that this is only going to be fixed in 5.3.1? Should I go back to 5.2.0 or 5.2.1? bq. in understanding that this is only going to be fixed in 5.3.1? Yes. Will be in 5.3.1. But in any case you can build QtSerialPort from Git sources and do not worry. :) Thanks kuzulis. Up until now I only used the binaries (with the web installer). Could you point me to what exactly I need to get from github? The branch named Stable? bq. Could you point me to what exactly I need to get from the github? I don't know nothing about GitHub, but from Gitorious it is enough to do: @git clone git://gitorious.org/qt/qtserialport.git@ see Wiki.: bq. The branch named Stable? Yes. Need the stable branch. (also by default will be downloaded the stable branch, will be active). Also you can download from the WEB: as written on Wiki. Thanks for the links kuzulis - but I'm still having issues. After using the stable Qt5SerialPortd.dll - indeed my waitForBytesWritten works - but now waitForReadyRead does not. To be exact, the first calls to waitForReadyRead correctly return false, with a TimeoutError (code 12) and the errorString() is Unknown Error. As soon as one of the following things happens - A. there is actually data to read. B. I called write() and waitForBytesWritten() - the next call to waitForReadyRead() returns false, with same error code 12 and error string "Overlapped I/O operation is in progress." Can you please provide an simple complete code to reproduce a problem? Simplified as much as I can - this is just for the error for reading data. I didn't include the writing code and the threading related stuff besides the minimal to reproduce. SerialPortWorkerThread.h @ class SerialPortWorkerThread : public QThread { Q_OBJECT public: SerialPortWorkerThread(QObject *parent = 0); ~SerialPortWorkerThread(); virtual void run(); private: bool abortThread; bool openPort(QSerialPort &serialPort, QString &errorMsg); }; @ SerialPortWorkerThread.cpp @ #include "serialportworkerthread.h" #include <QDebug> SerialPortWorkerThread::SerialPortWorkerThread(QObject *parent) : QThread(parent), abortThread(false) { } SerialPortWorkerThread::~SerialPortWorkerThread() { if (isRunning()) abortThread = true; } void SerialPortWorkerThread::run() { QSerialPort serialPort; QString error; bool opened = openPort(serialPort, error); if (!opened) qDebug() << QString("Error opening port: %1").arg(error); while (opened && !abortThread) { bool haveData = serialPort.waitForReadyRead(15); if (!haveData) { QString error = serialPort.errorString(); int errorId = serialPort.error(); //! skip the UNKNOWN if (!error.contains("Unknown error", Qt::CaseInsensitive)) qDebug() << QString("Error %1 - %2").arg(errorId).arg(error); } else { qDebug() << QString("Data was read!"); QByteArray data = serialPort.readAll(); } } qDebug() << "Serial Port Worker Thread - we got out of while loop"; } bool SerialPortWorkerThread::openPort(QSerialPort &serialPort, QString &errorMsg) { errorMsg.clear(); if (serialPort.isOpen()) serialPort.close(); serialPort.setPortName("COM20"); bool success = serialPort.open(QIODevice::ReadWrite); //! connection error - just quit here if (!success) { int errorId = serialPort.error(); QString errorDescription = serialPort.errorString(); errorMsg = tr("Could not open port: (errorId=%1) %2").arg(errorId).arg(errorDescription); return false; } //! Try to apply settings one after another, storing potential error message success = serialPort.setBaudRate(QSerialPort::Baud9600); if (!success) errorMsg = tr("Could not set baud rate."); if (success) success = serialPort.setDataBits(QSerialPort::Data8); if (!success) errorMsg = tr("Could not set data bits number."); if (success) success = serialPort.setParity(QSerialPort::NoParity); if (!success) errorMsg = tr("Could not set parity check option."); if (success) success = serialPort.setStopBits(QSerialPort::OneStop); if (!success) errorMsg = tr("Could not set stop bits."); if (success) success = serialPort.setFlowControl(QSerialPort::NoFlowControl); if (!success) errorMsg = tr("Could not set flow control mode."); //! If necessary, close if (!success) serialPort.close(); return success; } @ Then I create a regular UI application, with a SerialPortWorkerThread* member and in the constructor I call: @ serialPortWorker = new SerialPortWorkerThread(this); serialPortWorker->start(); @ Ohh.. it is not simple example.. :) Though yes, I found some problems with waitForReadyRead() in own research. I will try to prepare a patch. :( Thank you for confirming. Should I do anything in the Qt Bug Tracker or would you update any existing issues? You can try a patch, see issue: Thanks kuzulis. I will try this. I hate to hijack you to another thread (which is way more creative than hijacking a thread) - but is there a chance you could help out over here? (if not, I'll work it out). Unfortunately still doesn't work. I took qtserialport-stable from git and manually replaced the qserialport_win.cpp with the patched version. I then cleaned and rebuilt, with 5.3.0. In my test project I removed @QT += serialport@ and added @ INCLUDEPATH += "C:/qtserialport-stable/src/serialport" LIBS += "C:/build-qtserialport-Desktop_Qt_5_3_0_MSVC2010_OpenGL_32bit-Debug/lib/Qt5SerialPortd.lib" @ For the first time data should be read, I get "Error 12 - Overlapped I/O operation is in progress." "Error 12 - The handle is invalid." After that, the code returns one or two of the "overlapped" errors each time data is sent to the port and should be read. [quote] For the first time data should be read, I get “Error 12 – Overlapped I/O operation is in progress.” “Error 12 – The handle is invalid.” [/quote] Hmm.. This is strange problem.. Can you please provide a more simple example without threads? E.g. try to modify an "creadersync" example.. UPD: I just checked a patch with a modified "creadersync" example: I got a such output: @ waitForReadyRead failed, 12, error: Unknown error waitForReadyRead failed, 12, error: Unknown error. waitForReadyRead failed, 12, error: Протекает наложенное событие ввода/вывода.. @ Where "Протекает наложенное событие ввода/вывода." is an "Overlapped I/O operation is in progress." Thus, everything is fine. It seems, the error description has incorrect text (seems, the last GetLastError() is decrypted), possibly it needs to be fix in the future (but it isn't critical). To be guided by the error code number =12 (that there is QSerialPort::TimeoutError), and this is correct code. PS: If to be honest, for me do not like your error text: "The handle is invalid.” Seems, that your code has some problems.. You trying to manage on closed device.. or something else.. So you're saying I should ignore the Overlapped I/O string and just treat this is an incorrect error string? OK - that's better. I see that indeed now with the patch and if I ignore this specific error, it should eliminate the problems I had in my mini project. About the "The handle is invalid" - I got this error when my waitForReadyRead() calls followed a write() and waitForBytesWritten() calls. I indeed didn't get that error in your modified CReaderSync. BUT I did manage to reproduce it based on your code with a few steps and without any threads :) Create a new Qt Widgets Application and copy all of your suggested CReaderSync code to the constructor of the MainWindow. Modification 1: open the port as QIODevice::ReadWrite. Modification 2: Add the following code (picked up from CWriterSync and slightly reformatted output string) before starting the while loop - so it happens only once. @ QByteArray writeData(QString::number(QDateTime::currentMSecsSinceEpoch()).toLatin1()); qint64 bytesWritten = serialPort.write(writeData); if (bytesWritten == -1) { qDebug() << QObject::tr("Failed to write the data to port, error: %1 - %2").arg(serialPort.error()).arg(serialPort.errorString()) << endl; return; } else if (bytesWritten != writeData.size()) { qDebug() << QObject::tr("Failed to write all the data to port, error: %1 - %2").arg(serialPort.error()).arg(serialPort.errorString()) << endl; return; } else if (!serialPort.waitForBytesWritten(5000)) { qDebug() << QObject::tr("Operation timed out or an error occurred for port, error: %1 - %2").arg(serialPort.error()).arg(serialPort.errorString()) << endl; return; } @ - Modification 3: I removed the return values (since I threw code into constructor) and changed standardoutput to qDebug() (to see messages in the Application Output in Qt Creator). Copied a copy of the patched Qt5SerialPortd.dll into the shadow build folder. Made it compile. What I get here is the mentioned: "waitForReadyRead failed, 12, error: Overlapped I/O operation is in progress." "waitForReadyRead failed, 12, error: The handle is invalid." Hmm.. I can not reproduce: bq. “waitForReadyRead failed, 12, error: The handle is invalid.” with our modifications (If I correctly understand): Quite possibly that problems with the driver of your device or something else. Because someone trying to have access to your device... I.e. someone earlier set this error, maybe you trying to access to other I/O resource? (file, pipe and so on)? In any case, don't pay attention to the text of this error in case the timeout works correctly (is fired after 10 sec). :) You mean this error is not necessarily related to the specific QSerialPort I'm accessing? I'm not sure how that part works. Anyway, I think this solves the issue for me - so thank you! Will the patch be integrated into the stable branch of QSerialPort and will be released as a part of 5.3.1? You mean this error is not necessarily related to the specific QSerialPort I’m accessing? Yes, because someone can do something on some I/O resources in context of current thread (maybe you have an some other code in your app, not only the constructor, as you mention??). And this maybe lead to the access to invalid descriptor of some resource (not necessarily that it will serial port).. I just assumption.. QSerialPort and will be released as a part of 5.3.1? Yes, most likely in 5.3.1 if reviewers don't block this patch. You can vote or add comments for this patch here: Many thanks for your vote. :) BTW: Is ready an other patch to display an correct error message for TimeoutError: You can also try it. :) Thanks Kuzulis, What's the best way to apply these kind of patches? Should I just replace the .cpp in the qtserialport-stable\src\serialport folder and then qmake, build and finally nmake install from the shadow build folder? I'm a bit new to this patching business - usually I'm only working with the binary releases. I looked in the Wiki page and I didn't understand this sentence: - add a new make "Build Step" and write to the "Make arguments" the install target Could you explain exactly what I need to do here? bq. What’s the best way to apply these kind of patches? Usual, like: [code] $ git clone git://gitorious.org/qt/qtserialport.git $ cd qtserialport $ git fetch refs/changes/06/86406/1 && git checkout FETCH_HEAD [/code] where "git fetch" it is an copy/paste string from this: , patch set #1 with selected: checkout tab + Anonymoys HTTP + click to copy to clipboard. After this manipulations, the current patch will be applied in your local copy of QtSerialPort and you can try it. bq. Could you explain exactly what I need to do here? This is an QtCreator's project property (build-step) in the build-page (you should click to build step to expand it).. By default there is just like: bq. make: [ ] where [ ] is the line edit. So, you should just write there install . And re-build project. Thanks again! Is there any chance this will enter 5.3.1? I'm looking at the QT-BUG entry and the Fix Version/s field is still None. bq. Is there any chance this will enter 5.3.1? Yes, will be in 5.3.1. bq. I’m looking at the QT-BUG entry and the Fix Version/s field is still None. Oops.. I'm sorry, I forgot to close a bug. Done. :)
https://forum.qt.io/topic/41833/solved-qserialport-waitforbyteswritten-returns-false
CC-MAIN-2018-39
refinedweb
2,021
60.11
Build UAC aware apps with VS2008 - Posted: Aug 16, 2007 at 6:22 PM - 25,755 Views - 11 Comments Something went wrong getting user information from Channel 9 Something went wrong getting user information from MSDN Something went wrong getting the Visual Studio Achievements Right click “Save as…” Author: Hi, I am Daniel Moth Introduction: User Account Control is the top compatibility hurdle for some applications moving to Windows Vista. It is relatively easy to comply with some elements of UAC (e.g. embedding a manifest in your app) and with Visual Studio 2008 it is even easier as I show in this video. Relevant blog posts of mine are why is this so late? Did the VS2005 team really not know that Vista was coming? And why does the "UAC Options" button in VB.NET mode actually create a file? Isn't that just about the last action that anyone would expect given the labeling on the button? Lastly, this video is grossly mistitled. It's not about making UAC aware apps - the app knows nothing about UAC except for a "magic string" in the manifest. It would be nice to know if an app can detect that it's elevated, or handle the user not running it. But since these apps aren't actually UAC aware at all, that's not possible. Hi rsclient Thank you for the thoughtful comments and questionsManifests are just another resource, but in the native sense. So if you are prepared to play with RC files then you can also embed the manifest. You can read that approach for VS2005 here. . The VS2005 team (and probably the VS.NET 2003 and the VS.NET 2002 and the VS6 teams) knew that Vista was coming but it hadn't come yet when their product RTMd. As a reminder, VS2005 was released in November 2005 and Windows Vista in January 2007. That is why VS2005 *itself* is not UAC-aware whereas VS2008, as I demonstrate in the video, is. I also share in the video what you must do to embed manifests using VS2005. Not sure what action most users would expect from that button. If you think it is bad UI design or if you prefer the C# approach, please feed that back directly to the product groups or raise it in the forums. I am just the messenger explaining how things work This video does not cover *everything* there is about writing UAC-aware apps, but it does cover some hence the title (note the "with VS2008" in the title). Hopefully, by watching the video you learnt why it is important to embed a manifest and thus taking the first step to making your app UAC-aware (e.g. not taking advantage of virtualization as I demonstrate) and towards logo certification. I also show how to add the Shield icon to the button that performs admin functionality, hence making that bit UAC-aware. Finally, in the video at 13' I provide some overall generic advice for building UAC-aware apps. If you still feel the video is mistitled then apologies. To see how to determine if your app is elevated, Download my other UAC video from June 2006 here. Thanks again for your feedback. Cheers Daniel public static class UAC { public static bool AmElevated(){ return new WindowsPrincipal(WindowsIdentity.GetCurrent()) .IsInRole(WindowsBuiltInRole.Administrator); } public static void Elevate(){ if(AmElevated()) return; ShellExecute(IntPtr.Zero, "runas\0", Application.ExecutablePath + "\0", "\0", "\0", 1); } } ... public class Program { public static void Main(){ if(!UAC.AmElevated()){ MessageBox.Show("I'm not elevated, asking you to elevate..."); UAC.Elevate(); }else{ MessageBox.Show("I'm elevated, yays!"); } Application.EnableVisualStyles(); Application.SetCompatibleTextRenderingDefault(false); Application.Run(new Form1()); } } The checking of admin rights is essentially the same as what the video I referenced shows. You do not need to use ShellExecute directly since it is possible with the managed Process class (set the Verb to "runas" - example here). Cheers Daniel? BTW, if you encoutner any issues with developing for Vista these forums are very useful. Hi Daniel,. supachannel, sorry I don't know. Please see the MSDN documentation:"> ..and failing that, please use the MSDN forums. Cheers Daniel Remove this comment Remove this threadclose
http://channel9.msdn.com/Blogs/DanielMoth/Build-UAC-aware-apps-with-VS2008
CC-MAIN-2015-06
refinedweb
701
65.42
Changing Math node input value Xpresso On 23/11/2017 at 06:54, xxxxxxxx wrote: Hi everyone, I'm trying to change a math node value and unfortunately can't do it with: math[0][2000, 1001] = 1.0 it gives the TypeError: __setitem__ got unexpected type 'float'. The error appears with any assigned value type. Interestingly enough if I change the value manually first and than run the line it works fine. Don't understand what I'm missing. Thank you very much in advance! Andre On 23/11/2017 at 07:14, xxxxxxxx wrote: import c4d def main() : nodeMaster = doc.GetActiveTag().GetNodeMaster() node = nodeMaster.GetRoot().GetDown() # GET lv1 = c4d.DescLevel(2000, c4d.DTYPE_SUBCONTAINER, 0) lv2 = c4d.DescLevel(1000, c4d.DTYPE_REAL, 0) # or DTYPE_DYNAMIC that will automaticly adapt print node.GetParameter(c4d.DescID(lv1, lv2), c4d.DESCFLAGS_GET_0) # SET lv1 = c4d.DescLevel(2000, c4d.DTYPE_SUBCONTAINER, 0) lv2 = c4d.DescLevel(1000, c4d.DTYPE_DYNAMIC, 0) node.SetParameter(c4d.DescID(lv1, lv2), 50.0, c4d.DESCFLAGS_SET_0) c4d.EventAdd() if __name__=='__main__': main() As you may know or not, [] operators are just a python wrapper around Get/SetParameter, and sometime it fails. So you need to build it yourself On 23/11/2017 at 07:34, xxxxxxxx wrote: Hi Graphos, It works great and understand better the containers. Thank you very much once again! You are a live saver and a technical wizard
https://plugincafe.maxon.net/topic/10477/13926_changing-math-node-input-value-xpresso
CC-MAIN-2020-10
refinedweb
229
53.68
lstat(2) lstat(2) NAME [Toc] [Back] lstat - get symbolic link status SYNOPSIS [Toc] [Back] #include <sys/stat.h> int lstat( const char *path, struct stat *buf ); PARAMETERS [Toc] [Back] The parameters for the lstat() function are as follows: path is a pointer to a path name of any file within the mounted file system. All directories listed in the path name must be searchable. buf is a pointer to a stat structure where the file status information is stored. DESCRIPTION [Toc] [Back]. If the chosen path name or file descriptor refers to a Multi-Level Directory (MLD), and the process does not have the multilevel effective privilege, the i-node number returned in st_ino is the i-node of the MLD itself. The stat */ Hewlett-Packard Company - 1 - HP-UX 11i Version 2: August 2003 lstat(2) lstat(2) dev_t st_rdev; /* Device ID; this entry defined */ /* only for char or blk spec files */ off_t st_size; /* File size (bytes) */ time_t st_atime; /* Time of last access */ time_t st_mtime; /* Last modification time */ time_t st_ctime; /* Last file status change time */ /* Measured in secs since */ /* 00:00:00 GMT, Jan 1, 1970 */ long st_blksize; /* File system block size */.) RETURN VALUE [Toc] [Back] Upon successful completion, lstat() returns 0. Otherwise, it returns -1 and sets errno to indicate the error. ERRORS [Toc] [Back] The lstat() function will fail if: [EACCES] A component of the path prefix denies search permission. [ELOOP] Too many symbolic links were encountered in resolving path. [ENAMETOOLONG] The length of a pathname exceeds {PATH_MAX}, or the pathname component is longer than {NAME_MAX}. [ENOTDIR] A component of the path prefix is not a directory. [ENOENT] A component of path does not name an existing file or path is an empty string. [EOVERFLOW] A 32-bit application is making this call on a file where the st_size or other field(s) would need to hold a 64-bit value. [EFAULT] buf points to an invalid address. The reliable detection of this error is implementation dependent. Hewlett-Packard Company - 2 - HP-UX 11i Version 2: August 2003 lstat(2) lstat(2) The lstat() function may fail if: [ENAMETOOLONG] The pathname resolution of a symbolic link produced an intermediate result with a length exceeding {PATH_MAX}. NETWORKING FEATURES [Toc] [Back] NFS The st_basemode, st_acl and st_aclv fields are zero on files accessed remotely. The st_acl field is applicable to HFS File Systems only. The st_aclv field is applicable to JFS File Systems only. WARNINGS [Toc] [Back] Access Control Lists - HFS and JFS File Systems Only Access control list descriptions in this entry apply only to HFS and JFS file systems on standard HP-UX operating systems. For 32-bit applications, st_ino will be truncated to its least significant 32-bits for filesystems that use 64-bit values. DEPENDENCIES (CD-ROM) [Toc] [Back] The st_uid and st_gid fields are set to -1 if they are not specified on the disk for a given file. AUTHOR [Toc] [Back] stat() and fstat() were developed by AT&T. lstat() was developed by the University of California, Berkeley. SEE ALSO [Toc] [Back] touch(1), acl(2), chmod(2), chown(2), creat(2), fstat(2), link(2), lstat64(2), mknod(2), pipe(2), read(2), readlink(2), rename(2), setacl(2), stat(2), symlink(2), sysfs(2), time(2), truncate(2), unlink(2), utime(2), write(2), acl(5), aclv(5), stat(5), <sys/stat.h>. STANDARDS CONFORMANCE [Toc] [Back] lstat(): AES, SVID3 CHANGE HISTORY [Toc] [Back] First released in Issue 4, Version 2. Hewlett-Packard Company - 3 - HP-UX 11i Version 2: August 2003
https://nixdoc.net/man-pages/HP-UX/man2/lstat.2.html
CC-MAIN-2022-27
refinedweb
592
61.16
NCache is an in-memory .NET distributed cache and has been developed with C#. Therefore, unlike some other Java based distributed caches that only provide a .NET client API, NCache is a 100% .NET / .NET Core product that fits very nicely in your .NET application environments. You can use NCache as a .NET distributed cache from any .NET application including ASP.NET, WCF and .NET web services, .NET grid computing applications, and any other server-type .NET applications with high transactions. So, whether you're making client-side API calls to NCache as your .NET distributed cache or developing server-side code for Read-thru/Write-thru, rest assured that you will always be in native .NET environment.ASP.NET Support Documentation NCache allows you to store your ASP.NET Session State in an extremely fast in-memory .NET distributed cache with intelligent replication. And, you can do that without making any code changes to your application. This is a much better option than storing your ASP.NET Session State in StateServer or SqlServer provided by ASP.NET. NCache is faster and more scalable .NET distributed cache than these options. And, NCache replicates your sessions to multiple cache servers so there is no loss of session data in case any server goes down. NCache as your .NET distributed cache allows you to accelerate the content delivery from IIS to your user browser and significantly improve ASP.NET response times. NCache does this by providing ASP.NET View State caching, JavaScript and CSS Minify, and JavaScript and Imaging merging. And, no code changes are required in your .NET application to use NCache as your .NET distributed cache. NCache caches the ASP.NET View State at the server-side and returns a unique ID in place of it to the browser. And, this reduces the payload and improves performance. NCache also minifies JavaScript and CSS files to reduce their sizes. And, then it merges all JavaScript files into one and also merges images. This reduces the number of HTTP calls the browser makes to load a page and speeds up response times.View State Caching ADO.NET Entity Framework is rapidly becoming very popular because it greatly simplifies database programming. NCache provides a way for you to easily incorporate caching into Entity Framework and boost performance and scalability of your .NET applications through a .NET distributed cache. Entity Framework has implemented a stackable provider model for leading databases. NCache has developed a .NET distributed cache provider called EF Caching Provider that plugs in-between Entity Framework and the original database provider and intercepts all calls and caches query responses. This means you can start caching application data in a .NET distributed cache without any code changes to your Entity Framework based application. NHibernate is a leading open source Object Relational Mapping (ORM) solution and simplifies database programming for .NET applications. NHibernate provides a local InProc cache that cannot be used in a multi-server environment. Therefore, NCache provides an extremely fast and highly scalable level-2 .NET distributed cache for NHibernate. This allows applications using NHibernate to now scale to multi-server environments and also remove any database bottlenecks. You can incorporate NCache as your .NET distributed cache into your applications without any code changes You only change your configuration file to use NCache. .NET 4.0 now has a System.Runtime.Caching namespace. The classes in this namespace provide a way to use caching facilities like those in ASP.NET, but without a dependency on the System.Web assembly. And, most importantly, this caching is extensible. Therefore, NCache has developed a provider for .NET 4.0 Cache that results in an extremely fast and highly scalable .NET distributed cache. This allows applications using .NET 4.0 Cache to now scale to multi-server environments and also remove any database bottlenecks. You can incorporate NCache as your .NET distributed cache without any code changes to your application. You only change your configuration file to use NCache. +1 (214) 764-6933 (US) +44 20 7993 8327 (UK)
https://www.alachisoft.com/ncache/dot-net-support.html
CC-MAIN-2021-04
refinedweb
674
61.93
_info(3) form_field_info(3) NAME form_field_info - retrieve field characteristics SYNOPSIS #include <form.h> int field_info(const FIELD *field, int *rows, int *cols, int *frow, int *fcol, int *nrow, int *nbuf); int dynamic_field_info(const FIELD *field, int *rows, int *cols, *max); DESCRIPTION The function field_info returns the sizes and other attributes passed in to the field at its creation time. The attributes are: height, width, row of upper-left cor- ner, column of upper-left corner, number off-screen rows, and number of working buffers. The function dynamic_field_info returns the actual size of the field, and its maximum possible size. If the field has no size limit, the location addressed by the third argument will be set to 0. (A field can be made dynamic by turning off the O_STATIC). RETURN VALUE These routines return one of the following: E_OK The routine succeeded. E_SYSTEM_ERROR System error occurred (see errno). E_BAD_ARGUMENT Routine detected an incorrect or out-of-range argu- ment.. 1
http://www.rocketaware.com/man/man3/form_field_info.3.htm
CC-MAIN-2019-22
refinedweb
160
54.52
I've got a python script that outputs unicode to the console, and I'd like to redirect it to a file. Apparently, the redirect process in python involves converting the output to a string, so I get errors about inability to decode unicode characters. So then, is there any way to perform a redirect into a file encoded in UTF-8? When printing to the console, Python looks at sys.stdout.encoding to determine the encoding to use to encode unicode objects before printing. When redirecting output to a file, sys.stdout.encoding is None, so Python2 defaults to the ascii encoding. (In contrast, Python3 defaults to utf-8.) This often leads to an exception when printing unicode. You can avoid the error by explicitly encoding the unicode yourself before printing: print (unicode_obj.encode('utf-8')) or you could redefine sys.stdout so all output is encoded in utf-8: import sys import codecs sys.stdout=codecs.getwriter('utf-8')(sys.stdout) print(unicode_obj)
https://codedump.io/share/AnBalOMMUEG0/1/can-i-redirect-unicode-output-from-the-console-directly-into-a-file
CC-MAIN-2017-51
refinedweb
165
50.84
Namespace Selector In the CSS 2.1 standard, the element selectors ignore the namespaces of the elements they are matching. Only the local name of the elements are considered in the selector matching process. Oxygen XML Author uses a different approach that is similar to the CSS Level 3 specification. If the element name from the CSS selector is not preceded by a namespace prefix it is considered to match an element with the same local name as the selector value and ANY namespace. Otherwise, the element must match both the local name and the namespace. In CSS up to version 2.1 the name tokens from selectors are matching all elements from ANY namespace that have the same local name. Example: <x:b xmlns: <y:b xmlns: Are both matched by the rule: b {font-weight:bold} Starting with CSS Level 3 you can create selectors that are namespace aware. Example: Defining both prefixed namespaces and the default namespace @namespace sync ""; @namespace ""; In a context where the default namespace applies: - sync|A represents the name A in the. - |B represents the name B that belongs to NO NAMESPACE. - *|C represents the name C in ANY namespace, including NO NAMESPACE. - D represents the name D in the. Example: Defining only prefixed namespaces @namespace sync ""; Then: - sync|A represents the name A in the. - |B represents the name B that belongs to NO NAMESPACE. - *|C represents the name C in ANY namespace, including NO NAMESPACE. - D represents the name D in ANY namespace, including NO NAMESPACE. Example: Defining prefixed namespaces combined with pseudo-elements defelement its namespace will be declared, bind it to the abs prefix, and then write a CSS rule: @namespace abs ""; Then: - abs|def represents the name "def" in the. - abs|def:before represents the :before pseudo-element of the "def" element from the.
http://www.oxygenxml.com/doc/versions/19.1/ug-author/topics/dg-namespace-selectors.html
CC-MAIN-2018-05
refinedweb
306
64.3
> From: Costin Manolache [mailto:cmanolache@yahoo.com] > > > I think you started with wrong assumptions here. > > There is no need to change anything in the core or optional tasks, > you can have an antlib that uses multiple jars ( and most > likely antlibs > will eventually use some dependency mechanism and have > mutliple jars ). > > So you want to continue with the aberration of allowing having tasks in the descriptor that cannot be loaded or fail to resolve due to lack of dependencies? This is what we have to allow today in order to have those magic property files that contain all the optional tasks. At least I was striding to get rid of it. But if you impose the rule "one antlib one URI" (as it was suggested) then you have to continue with this aberration just for backward compatibility. Let me just make clear that I am assuming the content of an "antlib" is defined by its XML (or whatever) descriptor. > > I have no problem using XML namespaces as long as they are > independent > > of the antlib and under complete user control (not antlib designer > > control). In other words the user should be able to decide > if s/he wants > > to load the library on some particular namespace or in the > default "" > > namespace which is the one used by core. > > The namespace is not under user control - by definition. Read > the W3C spec, > it is designed to be fixed, stable, durable, etc. > And there is no point in the user changing the namespace URI > - the ns is > the id or name of the library. > There is definetly nothing on W3C about antlib libraries. The URI is just a string and W3C imposes no meaning to it. Whether it is universally unique or not is just suggested not impossed, the same as public-ids for DTDs. As long as you can distinguish between two diferent things that is all that is required. > Regarding use of the core namespace if no name conflicts: +1 > So I am an <antlib> writer and have a <shitch/> task and because it does not conflict with core I fix it in the core namespace. So ANT 1.7 comes along and decides to add a <switch/> task to core. Now what, did 1.7 broke backward compatibility? My antlib is now incompatible with the new version of ANT eventhough no APIs where mofified. Is ant.apache.org gonna monitor all 3rd party <antlibs> to make sure core never uses a name someone else already used? This is a administration nightmare. This is why I am saying let the user resolve the conflicts, if he decides to use two antlibs with clashes, then allow him to load them is separate namespaces to resolve its own conflicts. > > > > So if I say: > > > > <antlib location="antcontrib.jar"/> > > > > I will be able to use: <if/>, <switch/>, etc. But is I do: > > > > <project xmlns: > > Again - the URI is not under user control, but under the antlib author > control. Just like the "if" and the other task names. > Allowing the user to > rename tasks would be very wrong and confusing. ( let's > rename "<delete>" > to "copy", then import few files - and figure out what the > build file is > actually doing ). > Tell me where in the introspection engine this is required. This is just your idea. Antlib could pick the namespace dynamically at loading time. And lets just say that today I could rename all tasks to french names if I wanted just need to write a properties file. (granted the inner-elements will continue in english, but hey). Jose Alberto
http://mail-archives.apache.org/mod_mbox/ant-dev/200305.mbox/%3C747F247264ECE34CA60E323FEF0CCC0C0F50F4@london.cellectivity.com%3E
CC-MAIN-2017-04
refinedweb
595
72.46
Hi guys, i did solve these two problems myself without googling. I have little confusion on nested loops here: 1st code: def exponents(bases, powers): new_list = [] for i in range(len(bases)): for j in range(len(powers)): new_list.append(bases[i] ** powers[j]) return new_list print(exponents([2,3,4],[1,2,3])) 2nd code: def larger_sum(lst1, lst2): sum1 = 0 sum2 = 0 for i in range(len(lst1)): sum1 += lst1[i] for j in range(len(lst2)): sum2 += lst2[j] if sum1 > sum2: return lst1 elif sum1 == sum2: return lst1 else: return lst2 print(larger_sum([1, 9, 5], [2, 3, 7])) In the second code, i was not getting the solution due to using of nested loops, I have little doubts that, in the second code too, there are two parameter lists. But , why don’t we use nested loop here, while in the 1st code, we used nested loop as I need to multiply the base index to the power index. Somebody, please clear my doubts. I will be very thankful to you. Thanks New coder
https://discuss.codecademy.com/t/nested-loop-confusion/491566
CC-MAIN-2020-40
refinedweb
179
66.17
In my last article, I discussed, in quite some detail, the process that GCC uses to convert a C source file into an executable program file. These steps included preprocessing the source to remove comments, include other files as required, and string substitution. The resulting file was then compiled into assembly language. The assembly language output was then used to create an object file containing machine language, which was then linked with other standardized libraries to create an executable. As mentioned in the previous article, that article as well as this one, are based on a software development class I taught a few years ago. Some people are going to find this to be quite a dry subject; others will be delighted to see some of the magic that the compiler performs on our creations in order to make them executable. I happened to fall into the later category and I hope you do to. And so, last time I had concluded my article with a very light discussion of the linking process. I intend to go a bit deeper into the linking process in this article, as well as some discussion about some of the optimizations that GCC can perform for you. Before we get too deep into things, let's see a quick example of what the linking process does for us. For this example, we have two files, main.c and funct.c main.c: #include <stdio.h> extern void funct(); int main () { funct(); } Yes, this is a pretty simple program. Notice that we've not defined the function, funct(), only declared it as an external function that accepts no parameters and returns no value. We will define this function in the next file, funct.c: void funct () { puts("Hello World."); } Most of you, by now, see where this is headed. This is the proverbial “Hello World” program, only we've broken it up into two separate files, for the sake of instruction. In a real project, you'd use the make program to arrange for all of the files to be compiled, but we're going to do the compilation by hand. First we compile the main.c into a main.o file: gcc main.c -c -o main.o This command tells GCC to compile the source file, but not to run the linker so that we are left with an object file, which we want named main.o. Compiling funct.c is much the same: gcc funct.c -c -o funct.o Now we can call GCC one more time, only this time, we want it to run the linker: gcc main.o funct.o -o hello In this example, we supplied the names of a couple “.o” object files, requested that they all be linked, and that the resulting executable be named hello. Would you be surprised if executing ./hello resulted in “Hello World.”? I didn't think so. So why would we take the simplest program possible and split it into two separate files? Well, because we can. And what we gain from doing it this way is that if we make a change to only one of the files, we don't have to recompile any of the files that didn't change; we simply re-link the already existing object files to the new object file that we created when we compiled the source file that we changed. This is where the make utility comes in handy as it keeps track of what needs to be recompiled based on what files have been changed since the last compilation. Essentially, let's say that we had a very large software project. We could write it as one file and simply recompile it as needed. However, this would make it difficult for more than one person to work on the project, as only one of us could work at a given time. Also, it would mean that the compilation process would be quite time consuming since it would have to compile several thousands of lines of C source. But if we split the project into several smaller files, more than one person can work on the project and we only have to compile those files that get changed. The Linux linker is pretty powerful. The linker is capable of linking object files together, as in the example above. It's also able to create shared libraries that can be loaded into our program at run time. While we won't discuss the creation of shared libraries, we will see a few examples that the system already has. In my last article, I used a source file called test.c for the sake of discussion: #include <stdio.h> // This is a comment. #define STRING "This is a test" #define COUNT (5) int main () { int i; for (i=0; i<COUNT; i++) { puts(STRING); } return 1; } We can compile this program with: gcc test.c -o test We can use the ldd command to get a list of shared libraries that our program depends upon. ldd test And we see: linux-gate.so.1 => (0xffffe000) libc.so.6 => /lib/libc.so.6 (0xb7e3c000) /lib/ld-linux.so.2 (0xb7f9a000) The libc.so.6 entry is fairly easy. It's the standard C library that contains such things as puts() and printf(). We can also see which file provides this library, /lib/libc.so.6. The other two are a bit more interesting. The ld-linux.so.2 is a library that finds and load all of the other shared libraries, that a program needs in order to run, such as the libc mentioned earlier. The Linux-gate.so.1 entry is also interesting. This library is actually just a virtual library created by the Linux kernel that lets a program know how to make system calls. Some systems support the sysenter mechanism, while others call system calls via the interrupt mechanism, which is considerably slower. We'll be talking about system calls next. System calls are a standardized interface for interacting with the operating system. Long story short, how do you allocate memory? How do you output a string to the console? How do you read a file? These functions are provided by system calls. Let's take a closer look. We can see what function calls a program uses by using the strace command. For example, let's take a look at our test program above with the strace program. strace ./test This command results in output similar to what we see below except that I've added line numbers for the sake of convenient reference. 1 execve("./test", ["./test"], [/* 56 vars */]) = 0 2 brk(0) = 0x804b000 3 access("/etc/ld.so.preload", R_OK) = -1 ENOENT (No such file or directory) 4 open("/etc/ld.so.cache", O_RDONLY) = 3 5 fstat64(3, {st_mode=S_IFREG|0644, st_size=149783, ...}) = 0 6 mmap2(NULL, 149783, PROT_READ, MAP_PRIVATE, 3, 0) = 0xb7f79000 7 close(3) = 0 8 open("/lib/libc.so.6", O_RDONLY)= 3 9 read(3, "\177ELF\1\1\1\0\0\0\0\0\0\0\0\0\3\0\3\0\1\0\0\0\220g\1\0004\0\0\0"..., 512) = 512 10 fstat64(3, {st_mode=S_IFREG|0755, st_size=1265948, ...}) = 0 11 mmap2(NULL, 4096, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = 0xb7f78000 12 mmap2(NULL, 1271376, PROT_READ|PROT_EXEC, MAP_PRIVATE|MAP_DENYWRITE, 3, 0) = 0xb7e41000 13 mmap2(0xb7f72000, 12288, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_FIXED|MAP_DENYWRITE, 3, 0x131) = 0xb7f72000 14 mmap2(0xb7f75000, 9808, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_FIXED|MAP_ANONYMOUS, -1, 0) = 0xb7f75000 15 close(3) = 0 16 mmap2(NULL, 4096, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = 0xb7e40000 17 set_thread_area({entry_number:-1 -> 6, base_addr:0xb7e406c0, limit:1048575, seg_32bit:1, contents:0, read_exec_only:0, limit_in_pages:1, seg_not_present:0, useable:1}) = 0 18 mprotect(0xb7f72000, 8192, PROT_READ) = 0 19 mprotect(0x8049000, 4096, PROT_READ)= 0 20 mprotect(0xb7fb9000, 4096, PROT_READ) = 0 21 munmap(0xb7f79000, 149783) = 0 22 fstat64(1, {st_mode=S_IFCHR|0600, st_rdev=makedev(136, 3), ...}) = 0 23 mmap2(NULL, 4096, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = 0xb7f9d000 24 write(1, "This is a test\n", 15This is a test 25 )= 15 26 write(1, "This is a test\n", 15This is a test 27 )= 15 28 write(1, "This is a test\n", 15This is a test 29 )= 15 30 write(1, "This is a test\n", 15This is a test 31 )= 15 32 write(1, "This is a test\n", 15This is a test Lines 1 and 2 are simply the calls needed by the shell execute an external command. In ines 3 through 8 we see the system trying to load various shared libraries. Line 8 shows is where the system tries to load libc. Line 9 shows us the results of the first reading of the library file. In lines8-15, we see the system mapping the contents of the libc file into memory. This is how the system loads a library into memory for use by our program; it simply reads the file into memory and gives our program a pointer to the block of memory where the library got loaded. Now our program can call functions in libc as though they were a part of our program. Line 22 is where the system allocates a tty to send it's output to. Finally, we see our output getting sent out in lines 24-32. The strace command lets us see what what our program is doing under the hood. It's good for learning about the system, as in this article, as well as helping to find what a misbehaved program is attempting to do. I've had numerous occasions to run strace on a program that had apparently “locked up” only to find that it was blocking on some sort of file read or such. Strace is a sure fire way of locating those types of problems. Finally, GCC supports various levels of optimization, and I'd like to discuss just what that means. Let's take a look at another program, test1.c: #include <stdio.h> int main () { int i; for(i=0;i<4;i++) { puts("Hello"); } return 0; } When we convert that to assembly language with the gcc -s command, we get: .file "t2.c" .section .rodata .LC0: .string "Hello" $3, -8(%ebp) jle .L3 movl $0, %eax addl $20, %esp popl %ecx popl %ebp leal -4(%ecx), %esp ret .size main, .-main .ident "GCC: (GNU) 4.2.4 (Gentoo 4.2.4 p1.0)" .section .note.GNU-stack,"",@progbits We can see the for loop starting at .L3:. It runs until the jle instruction right after .L2. Now let's compile this program into assembly language, but with 03 optimization turned on: gcc -S -O3 test.c What we get is: main: leal 4(%esp), %ecx andl $-16, %esp pushl -4(%ecx) pushl %ebp movl %esp, %ebp pushl %ecx subl $4, %esp movl $.LC0, (%esp) call puts movl $.LC0, (%esp) call puts movl $.LC0, (%esp) call puts movl $.LC0, (%esp) call puts movl $.LC0, (%esp) call puts addl $4, %esp movl $1, %eax popl %ecx popl %ebp leal -4(%ecx), %esp ret .size main, .-main .ident "GCC: (GNU) 4.2.4 (Gentoo 4.2.4 p1.0)" .section .note.GNU-stack,"",@progbits Here we can see that the for loop has been completely factored out and that gcc has replaced it with 5 separate calls to the puts system call. The entire for loop is gone! Nice. GCC is an extremely sophisticated compiler that is even capable of factoring out loop invariants. Consider this code snippet: for (i=0; i<5; i++) { x=23; do_something(); } If you write a quick program to exercise this code snippet, you will see that the assignment to the x variable gets factored to a point outside of the for loop, as long as the value of x isn't used inside the loop. Essentially, GCC, with -O3, rewrites the code into this: x=23; for (i=0; i<5; i++) { do_something(); } Very nice. Bonus points for anyone who can guess what gcc -O3 does to this program: #include <stdio.h> int main () { int i; int j; for(i=0;i<4;i++) { j=j+2; } return 0; } Na, I always hated bonus questions, so I'll just give you the answer. GCC factors our program out completely. Since it does nothing, GCC doesn't write anything. Here is the output of that program: .file "t3.c" .text .p2align 4,,15 .globl main .type main, @function main: leal 4(%esp), %ecx andl $-16, %esp pushl -4(%ecx) xorl %eax, %eax pushl %ebp movl %esp, %ebp pushl %ecx popl %ecx popl %ebp leal -4(%ecx), %esp ret .size main, .-main .ident "GCC: (GNU) 4.2.4 (Gentoo 4.2.4 p1.0)" .section .note.GNU-stack,"",@progbits As you can see, the program starts up and immediately terminates. The for loop is gone, as well as the assignment to the j variable. Very, very nice. So, GCC is a very sophisticated compiler that is capable of handling very large projects and performing some very sophisticated optimizations on a given source file. I hope that reading this article, and the one before it, has lead you to a greater appreciation of just how intelligent the Linux compiler suite actually is, as well as given you some understanding that you can use to debug your own programs. 3 sec ago - Reply to comment | Linux Journal 2 hours 52 min ago - Reply to comment | Linux Journal 4 hours 9 min ago - great post 4 hours 44 min ago - Google Docs 5 hours 7 min ago - Reply to comment | Linux Journal 9 hours 55 min ago - Reply to comment | Linux Journal 10 hours 42 min ago - Web Hosting IQ 12 hours 16 min ago - Thanks for taking the time to 13 hours 53 would like to recommend you very useful file search Hello I have fond a lot of useful and interesting files on visit and enjoy it! thanks! nice articles.. thank you.. have learnt a lot.... amazing Thanks for the articles. I learned a lot reading it. Any other article in this subject is very welcome!! =) Congratulations Thanks Great series, easy to follow. Wish you had included a similarly, simple discussion of "make". Thanks I read part 1 and 2 and I learned a lot. So thank you! I want to learn the Assambly language, can you advise me a good book or website? I have Assembly (Intel 32-bits) at University but it's like: add eax,eax And not like %eax, so I guess this is some kind of different Assembly? Grtz These are just syntax These are just syntax flavours for x86 Assembly. Intel: add eax, eax AT&T: add %eax, %eax Please note that the order (destination and origin) is dest-orig in Intel, and orig-dest in AT&T. For the Assembly book, go to "The Art of Assembly Programming", by Randall Hyde. Re: Thanks Thank you for the kind words. Unless you really need it for a particular project, I wouldn't recommend learning Assembly. As you can see from this article, the gcc C compiler is very efficient. As to the %eax in the instruction that you cited, that's just a different addressing mode. Like I said, my Assembly skills are a bit rusty, though. Mike Diehl There's a problem with your There's a problem with your HTML tags. They're not displaying correctly, at least not in Firefox 3.0.3 on Ubuntu. Change your carets by < and > .
http://www.linuxjournal.com/content/examining-compilation-process-part-2?quicktabs_1=0
CC-MAIN-2013-20
refinedweb
2,588
72.87
The QMacCocoaViewContainer class provides a widget for Mac OS X that can be used to wrap arbitrary Cocoa views (i.e., NSView subclasses) and insert them into Qt hierarchies. More... #include <QMacCocoaViewContainer> This class was introduced in Qt 4.5. The QMacCocoaViewContainer class provides a widget for Mac OS X that can be used to wrap arbitrary Cocoa views (i.e., NSView subclasses) and insert them into Qt hierarchies. While Qt offers a lot of classes for writing your application, Apple's Cocoa framework offers lots of functionality that is not currently in Qt or may never end up in Qt. Using QMacCocoaViewContainer, it is possible to put an arbitrary NSView-derived class from Cocoa and put it in a Qt hierarchy. Depending on how comfortable you are with using objective-C, you can use QMacCocoaViewContainer directly, or subclass it to wrap further functionality of the underlying NSView. QMacCocoaViewContainer works regardless if Qt is built against Carbon or Cocoa. However, QCocoaContainerView requires Mac OS X 10.5 or better to be used with Carbon. It should be also noted that at the low level on Mac OS X, there is a difference between windows (top-levels) and view (widgets that are inside a window). For this reason, make sure that the NSView that you are wrapping doesn't end up as a top-level. The best way to ensure this is to make sure you always have a parent and not set the parent to 0. If you are using QMacCocoaViewContainer as a sub-class and are mixing and matching objective-C with C++ (a.k.a. objective-C++). It is probably simpler to have your file end with .mm than of subclassing QMacCocoaViewContainer to wrap cocoaViewToWrap with parent, parent. QMacCocoaViewContainer will retain cocoaViewToWrap. cocoaViewToWrap is a void pointer that allows the header to be included with C++ source. Destroy the QMacCocoaViewContainer and release the wrapped view. Returns the NSView that has been set on this container. The returned view has been autoreleased, so you will need to retain it if you want to make use of it. See also setCocoaView(). Sets the NSView to contain to be cocoaViewToWrap and retains it. If this container already had a view set, it will release the previously set view.
http://doc.qt.nokia.com/main-snapshot/qmaccocoaviewcontainer.html#cocoaView
crawl-003
refinedweb
377
56.76
How to get the list of all installed color schemes in Vim? How to get the list of all installed color schemes in Vim? Accepted Answer Type :colorscheme then Space followed by TAB. or as Peter said, :colorscheme then Space followed by CTRLd The short version of the command is :colo so you can use it in the two previous commands, instead of using the "long form". If you want to find and preview more themes, there are various websites like Vim colors Read more... Read less... Just for convenient reference as I see that there are a lot of people searching for this topic and are too laz... sorry, busy, to check themselves (including me). Here a list of the default set of colour schemes for Vim 7.4: blue.vim darkblue.vim, delek.vim desert.vim elflord.vim evening.vim industry.vim koehler.vim morning.vim murphy.vim pablo.vim peachpuff.vim ron.vim shine.vim slate.vim torte.vim zellner.vim If you are willing to install a plugin, I recommend. to cycle through all installed colorschemes. Nice way to easily choose a colorscheme. Here is a small function I wrote to try all the colorschemes in $VIMRUNTIME/colors directory. Add the below function to your vimrc, then open your source file and call the function from command. function! DisplayColorSchemes() let currDir = getcwd() exec "cd $VIMRUNTIME/colors" for myCol in split(glob("*"), '\n') if myCol =~ '\.vim' let mycol = substitute(myCol, '\.vim', '', '') exec "colorscheme " . mycol exec "redraw!" echo "colorscheme = ". myCol sleep 2 endif endfor exec "cd " . currDir endfunction If you have your vim compiled with +menu, you can follow menus with the :help of console-menu. From there, you can navigate to Edit.Color\ Scheme to get the same list as with in gvim. Other method is to use a cool script ScrollColors that previews the colorschemes while you scroll the schemes with j/k.
https://ask4knowledgebase.com/questions/7331940/how-to-get-the-list-of-all-installed-color-schemes-in-vim-
CC-MAIN-2021-10
refinedweb
317
74.9
So, you just finish a program you write in Python and you want to share it with friends, family and upload it to the internet. Problem is, a lot of people aren't going to know what to do with a .py or .pyc file. So the solution is to make an exe file. There are several 'programs' that can be used to make a .exe file from a .py file. Some of them include; Pyinstaller, Py2exe, Gui2exe, cx_Freeze just to name a few. For me creating my first .exe file was a big task. I struggled to find a suitable program to do it and didn't know how to use it when I did. Unfortunately for me, py2exe does not work on my computer so I was unable to use it. So I am going to show you how to use pyinstaller to make an .exe file. There is a help guide for using pyinstaller, but for some of the features I found that guide confusing. Getting Pyinstaller First of all, you have to download pyinstaller. You can download it from here:. After downloading the file, you need to unzip it. It should extract to a pyinstaller-1.4 folder. You don't need to install pyinstaller and it doesn't need to go in the usual Python\lib\site-packages. For the sake of this tutorial, I will use pyinstaller as if it was on the desktop. Preparing your script Before you make your .py file a .exe file you want to make sure it works. So run the script and make sure it works. For this tutorial I will use the following script for examples: import sys print "Hello World!" sys.exit() I have purposely imported the sys module for something later. For me, I often use PythonCard for my GUI's. For pyinstaller, you need to add a few imports to your script. So, for example if I make a GUI with a button that says 'Hello' and some static text which says 'Goodbye', I need to have the following in my script for pyinstaller: from PythonCard.components import button, statictext This could apply to other GUI's and modules so if you keep getting weird errors where things aren't working, make sure you don't need to do an import thing like shown above. The reason you have to have imports that you usually wouldn't need is because when pyinstaller runs, it imports all the dependencies of your script and in a GUI, all the parts need to be there for them to show up on the GUI. This mainly applies to PythonCard because there is a separate GUI file but could apply to other modules so I thought I should mention it. Making an exe file Ok, so you've made sure your script works so it's time to make the exe file. For this, I am assuming that pyinstaller is on the computer desktop. It doesn't have to be there it just makes it easier. So, open up the command prompt by going start>all programs>accessories or clicking run and typing cmd. For windows 7, you can just type cmd into the start search bar. When the cmd is open, you need to set it to where pyinstaller is. So for the desktop we enter the following: cd desktop\pyinstaller-1.4 That sets it so it operates from that directory for when you launch the functions of pyinstaller. Then run Configure.py to set up pyinstaller. Now, you need to copy your script (.py file) and put in in the pyinstaller directory. If you have a custom module/dependency which isn't in the lib\site-packages, you need to put that with the .py file when you copy it. If you want to have an icon for your program, it has to be a .ico file and it must also be put with the .py file. If you don't use an your own icon, you will get the default icon of a blue butterfly. If you need to convert a picture (.jpg, .gif, .png, .bmp etc) to a .ico you can use the following website: convert icon Ok, for this bit, I am going to make a single .exe file with an icon and no command console. In the cmd, enter in the following: Makespec.py --icon=icon.ico --onefile --noconsole -nProgramName Program.py (if you are using the script I have used above as an example, you need to change --noconsole to --console. It should say something like: wrote C:\...ProgramName.spec now run Build.py to build the exe. So, now we run Build.py to make the exe file. You can run it by entering this: Build.py ProgramName\ProgramName.spec It will come up with a whole bunch of stuff about what it is doing in making the exe file. then it should tell you it appended the .exe file to a directory. Go to that directory and then go to the folder called dist. In the folder you should have a single .exe file and it should have your icon as the image. Double click it and see if it runs correctly. One thing you may notice about the .exe file is that it is usually over 1mb big, depending on your script. This is because all of the python launchers and readers need to be packaged into the .exe file. For me on a standard 350 line script (with a GUI) that is about 10kb, after running through pyinstaller it works out to be about 8.4mb which is quite large seeing as all the program does is move some files. You can use upx if you need a smaller .exe file, but I have found it doesn't really help much. It changed my 8.4mb .exe to 8.3mb. All of the options you enter into the cmd for the Makespec.py are optional arguments. You don't have to have an icon or only a single file. For more information about the arguments you can look here: Pyinstaller Manual Some of the options are only available on certain platforms (e.g the --noconsole is windows only) so be careful! good luck making .exe files! Happy Coding!
http://www.dreamincode.net/forums/topic/192592-making-an-exe-file-with-pyinstaller/
CC-MAIN-2016-22
refinedweb
1,051
84.47
Hi all, is it possible to activate a thousands separator for numbers in WebAccess or in Service Desk in general (However WebAccess would be more important) i.e. an integer field has the number "2500". This should be displayed as "2,500". Can anyone point me to the right direction if this is even possible? Thanks, Kai Hi Kai, I don't think the standard decimal or int32 fields will display this thousands separator, however you could raise an enhancement request for this to be available. If this is only for display purposes you could use a calculation to reformat the decimal value as a string and add in the separator, however this wouldn't allow the value to be updated, and would require a separate field for the number to be enetered in. Cheers, Hadyn Thanks for the hint. I did it with a calculation that modified the format: import System static def Get AttributeValue(Change): Value = Change.integerNumber return Value.ToString('N0') Retrieving data ...
https://community.ivanti.com/thread/19282
CC-MAIN-2017-22
refinedweb
166
55.44
Imagine a world where every occupation had the type of power a tool like Stack Overflow has bestowed upon Software Engineers. Surgeons could repeatedly look up the difference between slicing and splicing, and mechanics could crowdsource the best way to remove a transmission from a Buick. The internet is full of information on almost anything you want to know, however, for students, finding answers to specific questions, explained for the right grade level is a challenge. Kids learning at home under quarantine, without ready access to their teacher, would greatly benefit from a community like Stack Overflow. So I decided to take a crack at building it and I’m going to show you how I went about architecting the application. Building Stack Overflow today is far easier than it was in 2008. With the rise of serverless technologies we now have ways to launch applications faster, with less code, less setup, and that can scale to millions of users as needed. The setup I used for StudyVue cost zero dollars to launch and will only start to incur a cost if usage increases. The best part is if your application goes viral, these serverless setups can scale up to handle the load and scale back down again with no effort on your part. Without further ado let’s get started. Product Definition First I wanted to make sure to have the core product features squared away. I was not going to try to replicate all of the features of Stack Overflow, but still wanted to make sure to have a minimum viable version that gives students and teachers access to the most valuable pieces. Those pieces being a way to ask questions, receive multiple answers, and for users to be able to validate or invalidate those answers with a simple, binary voting system. I also wanted to be cognizant of the fact that the target audience would be school-aged students. Therefore,being careful with personally identifiable information is a must and knowing how kids can be, there was going to have to be a way for users to flag abusive content. For this project I decided the best way to deal with personal information is to not ask for it in the first place. A simple login that only required an email address was an important feature. Email seems to be universal across generations so this will be a consistent way for students, teachers, and parents to verify their identity. So the core feature list I went for was: - Users can verify identity using their email with no other personal information required. - Users can post a question. - Users can post an answer. - Users can vote on answers no more than once. - Users can easily search for questions already posted. - Users can report an abusive question or answer. - Anyone can browse questions and answers. I also took into consideration a few other requirements. The most important being that these pages could be indexed by search engines. As such, server side rendering of the question pages in particular was going to be necessary. Although google claims they do render and crawl client side rendered content, it has been my experience that if you want to be indexed and rank well with google, server side rendering (SSR) or pre-rendering via static site generation (SSG) is a requirement. In this case, since the data is dynamic and ever-changing, pre-rendering won’t be an option, I would need to make sure the public facing pages used SSR. Another nice feature of Next.js is that all of our markup is still written in JSX and are still just react components. These are served as static markup and then hydrated client side with interactivity. You are still free to render elements client side that do not need to be indexed as well. Next.js supports all three major use cases, SSR, pre-rendering, and client-side rendering out of the tin. The Stack When evaluating the feature set there were a few things I wanted. I wanted to use React for the frontend and a serverless setup for my API. I would need to server side render most of the application, a cloud hosted database, and a way to handle search. I also wanted to consider how to deploy the app easily to keep this as simple and painless as possible. Right now the most robust framework that supports server side rendered content for react is Next.js. I personally like NextJS for a few reasons. It integrates easily with Vercel (formerly Zeit) for serverless deployment, it supports server side rendering of our UI, api routes that are deployed as lambdas to Vercel, and it supports typescript out of the box. Being as this is a side project we are looking to develop quickly, I find typescript helps me write safer code without compromising my development speed. For a database I chose FaunaDB. FaunaDB is a cloud-hosted, NoSql database that is easy to set up and can scale to millions of users. It has pay as you scale pricing, so you won’t incur any costs at startup. FaunaDB was easy to play around with in their web UI and model out my data before I ever wrote a single line of code. No need to run local copies of the databases, deal with running migrations, or worry about crashing the whole thing with a bad command. FaunaDB has user authentication and permissions features baked in as well so I can save some time building the authentication without bringing in another vendor. Last, we are going to need search to be as robust as possible. The last thing users want is to be stuck with exact text matches or have to type questions in a specific way to return results. Search is messy in the wild and users expect even small apps to be able to deal with that. Algolia is the perfect solution for this. They bring the robustness of google style search to your datasets with little overhead. They also have a react component library that can drop right into the frontend. Initial Setup Next.js + Vercel Setting up a project with Next.js and Vercel can be ready to go and deployed in a few minutes by following the Vercel docs. One of the nice things about Vercel is they have a powerful CLI that you can run locally that closely mimics the production environment. I like to think about it as something like Docker for serverless apps. Setting up Vercel locally is simple, however, finding your way around their docs after the name change from Zeit can be a challenge. Once you setup the Vercel CLI to run your application locally, you can further hook your Vercel project up to github to create staging URLs for every git branch you have, and have any merges into master automatically deploy to production. This way you are set up for rapid and safe iteration post launch without having to setup pipelines or containers and the like. I like to get this all squared away at the start of the project since you will need to start storing secrets and environment variables right away when setting up FaunaDB. I personally enable typescript right away when working on a Next.js project. With Next.js this is pre-configured to work out of the box and FaunaDB also has type definitions published so it’s a great combination. I find strong types help me avoid silly errors as well as help me remember my data types and key names while I’m writing code. It can also be incrementally adopted. You don’t need to start off in strict mode right away. You can get a feel for it and gradually work your way up to a complete, strongly typed codebase. I have left the type definitions in my examples here so you can see how this looks but also may have stripped out some of the more defensive error handling for greater readability. Setting Up the Database I want to walk through the initial set up of FaunaDB inside of a Next.js app to be able to read and write to the database. I think that setting up environment variables with Next.js can be somewhat tricky so here’s a quick rundown of what I did. You’ll want to first install the FaunaDB package from npm. Now head over to the FaunaDB console, go to the SECURITY tab and create a new API key. You’ll want to assign this key a role of Server since we just want this to work on this specific database. We want to copy this key now since this is the last time you will see it. We can now add this to our codebase, which requires that you add this info to four different files to work properly. First, you will want to put this in your .env and .env.build files. // .env and .env.build files FAUNADB_SECRET_KEY = '<YOUR_SECRET_KEY>' Next, we want to add this to our Vercel environment. This can be done with the following command: $ now secrets add studyvue_db_key <YOUR_SECRET_KEY> This saves your key into Vercel and will be available when you deploy your app. We can now add this key to our now.json and our next.config.json files. // now.json { "version": 2, "build": { "env": { "FAUNADB_SECRET_KEY": "@studyvue_db_key", } }, "builds": [{ "src": "next.config.js", "use": "@now/next" }] } // next.config.js module.exports = { target: 'serverless', env: { FAUNADB_SECRET_KEY: process.env.FAUNADB_SECRET_KEY, } } Note how in our now.json file we reference the Vercel secret prefixed by the @ symbol. We namespace the key since right now Vercel keeps all of your secrets available to all applications. If you launch other apps or sites on Vercel you will likely want to prefix these secrets with the app name. After that, we can utilize the standard process.env.FAUNADB_SECRET_KEY throughout the application. Now we can head back over to the FaunaDB console and begin modelling out our data. Modeling Our Data One of the best things about FaunaDB is how easy it is to set up your database. When I started out I just created an account and created all of my collections and indexes right in the GUI they provide. I’ll give a brief walk through of what that process was like to show the ease. After you create your account you are taken right to the FaunaDB console where you can start by clicking NEW DATABASE in the top left hand corner. I’ll start by calling this StudyVue and leave the "Pre-populate with demo data" option unchecked. Once you create your database you are brought to the main dashboard for that database. You can already see that FaunaDB offers a lot of options like child databases and multi-tenancy, GraphQL, and functions. For this project, I just needed to deal with three things; collections, indexes, and security. Collections Collections are similar to tables in a traditional SQL database. If you are familiar with MongoDB, this is the same concept. We know from our product description we need five collections. - Users - Questions - Answers - Votes - Abuse Reports Creating these is simple, just go into the COLLECTIONS tab and click NEW COLLECTION. Here is an example of creating the users collection: You’ll notice two additional fields, one is History Days, which is how long FaunaDB will retain the history of documents within the collection. I left this set to 30 days for all my collections since I don’t need to retain history forever. The TTL option is useful if you want to remove documents that have not been updated after a certain period of time. I didn’t need that for my collections either but again it’s good to take note that it is available. Click save, and your new collection is ready to go. I then created the other five collections the same way with the same options. That’s it, no schemas, no migration files, no commands, you have a database. Another thing you will notice is that I decided to store votes as their own collection. It is common when working with NoSql databases to get into the habit of storing these votes on the Answer document itself. I tend to always struggle with the decision to store data on the related document in one-to-many relationships or to make a new collection. In general, I like to avoid nesting too much data in a single document, especially when that data could relate back to other collections, for example, a vote belonging to both a user and an answer. It can become unwieldy over time to manage this from within another document. With a relational approach, if we ever need to reference another document we just add an index and we have it. We may want to show a user all their up voted or down voted answers, or have an undo vote feature. Keeping votes in their own collection thus offers a bit more flexibility long term in the face of not knowing exactly where you will go. Another advantage is that the relational model is less costly to update. For instance removing a vote from an array of votes requires us to store the complete array again, whereas with the relational model we are just removing a single item from an index. While it may be easier to just store things nested in the same document, you’ll typically want to take the time to have more flexible, normalized models. Indexes Indexes are what you use to query the data in your collections. Creating indexes requires you to think about the relationships between your collections and how you want to be able to query and manipulate that data. Don’t worry if you are unsure of every possible index at this moment. One of the advantages of FaunaDB is that indexes and models are flexible and can be made at any time whenever you want. I started with the obvious relations first and later on was able to add additional indexes as the product evolved. For example, right away I knew that I was going to want to be able to display all questions either on the homepage or on a page that houses a list of all the questions asked. This would allow users and most importantly search engine crawlers to be able easily find newly created questions. To create an index go into the INDEXES tab and click NEW INDEX. Here you can select which collection you want this index to work with, in this case, questions, and the name of the index, which I will call all_questions. I also knew I was going to need to fetch a question by its ref id. This can be done easily without creating an index. However, I needed to be able to fetch all of the answers related to a question. So I have an index called answers_by_question_id that will allow me to perform a join between these two collections. In this case, I want the Source Collection to be answers and I want to populate the Terms field with the data attribute I will need to be able to query by, which is data.question. The question attribute will be what I am going to use to store the ref to the question that a particular answer is associated with. I also know I am going to want to be able to fetch votes that are tied to a specific answer. I can now make an index called votes_by_answer that pulls from the votes collection and use data.answer to represent the attribute we want to be able to look up on. Setting up more indexes follows the same process. For collections where you only want to allow one entity with the same attributes to exist, such as users that should have a unique email address, we can make sure that only unique email addresses are allowed by checking the unique field. As you can see, we effectively model our entire database within the dashboard and are now ready to use this in the code base. What is FQL? FaunaDB has two ways to query the database. One is the more familiar GraphQL and the other is something called FQL. FQL is Fauna’s proprietary query language. It’s what is called an embedded domain-specific language (DSL), which is a powerful way to compose queries in the languages they support. It gives us the ability to use it to create composable functions and helpers throughout our codebase. For instance here is a function I made to create a user document. export function createUserDocument(data: FaunaUserData) { return q.Create(q.Collection('users'), data); } We can take this a step further by utilizing a functional programming technique called composing functions. If you look at the FQL above what we see is that FQL is just composed of functions that take other functions as arguments. Let’s take a bit more of an advanced example. Let's say we wanted to retrieve all questions from the questions index. The FQL looks like this: const questions = await client.query( q.Map( q.Paginate( q.Match( q.Index('questions') ) ), ref => q.Get(ref) ) ) We can see functional composition at work here where Map() takes two arguments that are functions. If we focus on the first argument we see a chain of unary functions, which are just functions that take one argument, the Paginate() function takes the Match() function which takes the Index() function. Without going into too much detail about functional programming, these types of unary function chains are ripe for functional composition. In this case I used the ramda library to compose more general, powerful helpers. So taking our above example and using ramda's compose helper we can create a function getAllByIndex(). export const getAllByIndex = compose(q.Paginate, q.Match, q.Index); We read the compose function's arguments as being executed from right to left. So getAllByIndex() takes our index as a string and then passes it into Index() the output of which goes into Match() the output of which goes into Paginate(). We can now use this to cleanup our questions FQL query. const questions = await client.query( q.Map( getAllByIndex('questions'), ref => q.Get(ref) ) ) We can continue to use this technique to create more helpers for common operations, like the below helper I created to get a collection's document by ref id. export const getCollectionDocumentById = compose(q.Get, q.Ref); While it was a little hard to get used to at first, the power of using FQL and readability when coupled with functional composition, was a worthwhile investment over GraphQL in my opinion. Authenticating Users When it came to user management, I wanted a way to verify that users are real people and I wanted a way to make sure we had a user’s email so that we could eventually build notifications for when their questions had new answers. I also wanted to make sure it was as simple as possible to create an account and move forward. I didn’t want to interfere with the spontaneity of wanting to ask or answer a question. One thing I personally hate is having to create new passwords for every new service I sign up for. I loved the idea of creating a magic link type login where the user submits their email and they click on a link that logs them into the app. This type of login has a major pitfall for mobile users that we will discuss in just a bit but let's begin modeling this out with FaunaDB’s internal authentication. FaunaDB's internal authentication allows you to pass in an email and a credentials object with a password key. That password is then stored as an encrypted digest in the database and returns to us a token that can be used to authenticate that user. The tokens do not expire unless the user logs out, but the same token is never issued twice. We can use this system to create our magic login. The Login First, whether a user is logging in for the first time or returning to the site we want to make sure there is a single login pathway. To do this we can query the database first to see if that users' email exists already. If it does not exist, we'll create a new user and assign a randomized password. If the user does exist, we will update the user with a new randomized password. In both cases, we are going to receive back an authentication token we can now use to persist the login of that user. In order to do this, we'll need a new index in order to fetch users by email. We can go ahead and call this users_by_email and this time check off the unique option so that no emails can be submitted to the collection twice. Here's an example of how we can build this logic inside of our API. Notice that for our FQL query we use the Paginate() method instead of Get(). Get throws an error when no results are found, what we want to do is detect when there are no results and go on to create a new user. let user: FaunaUser | undefined = undefined; const password = uuidv4(); const { email } = JSON.parse(req.body); // use paginate to fetch single user since q.Get throws error obj when none found const existingUser: FaunaUsers | undefined = await client?.query( q.Map( q.Paginate( q.Match( q.Index('users_by_email'), email ) ), ref => q.Get(ref) ) ); if (existingUser?.data.length === 0 ) { // create new user with generated password user = await client?.query(createUserDocument({ data: { email }, credentials: { password } })); } else { // update existing user with generated password user = await client?.query( q.Update( existingUser?.data[0].ref, { credentials: { password } } ) ); } Passing the Token We still want the user to click a link in the email. We can send the entire token in the email link as a part of the URL to complete the authentication, however I'd like to be a bit more secure than this. Sending the entire token means that it is likely going to sit forever in plain text in a users inbox. While we aren't handling payment or personal information, there still is the potential for someone to accidentally share the link or forward the wrong message, exposing a valid token. To be extra secure, we really want to ensure that this link only works for a short duration of time, and it only works in the device and browser the user used to generate it. We can use Http only cookies to help us with this. We can first take a section from the start of the token, let's say 18 characters, and then take the rest of the token and send it back in a temporary cookie that will be removed from the browser after 15 minutes. The section at the start of the token we can send in our email. This way the link will only work for as long as the cookie is persisted in the browser. It will not work if anyone else clicks on it since they do not have the other segment. After the two pieces are put back together by our API, we can send back the new Http cookie as a header with a thirty-day expiration to keep the user logged in. Here we can log in the user we created and split the returned token into the piece we are going to email, and the piece we are going to store in the browser. // login user with new password const loggedInUser: { secret: string } | undefined = await client?.query( q.Login( getUserByEmail(email), { password } ) ); // setup cookies const emailToken = loggedInUser?.secret?.substring(0, 18); const browserToken = loggedInUser?.secret?.substring(18); // email link and set your http cookie... Just to put our minds at ease, let's consider how easy it would be to brute force the other half of the token. FaunaDB tokens are 51 characters long, meaning the other half of our token contains 33 alphanumeric characters including dashes and underscores. That’s 64 possible characters so the total number of combinations would be 64^33 or 1.37371891×10^16. So the short answer is, brute-forcing just a piece of this token would take quite a long time. If this were a banking application or we were taking payments from people we'd want to possibly use an encryption scheme for the tokens and use a temporary token that expired for the login before getting the real long term token. This is something that Fauna's built-in TTL options on a collection item would be useful for. For the purposes of this app, breaking the token in two will work just fine. Creating the API To build out these features securely we are going to utilize api routes with Next.js. You are now seeing one of the advantages of the Next and Vercel combination. While we are technically deploying this a serverless app, we can manage our API and our client in a single monorepo. For small projects that you are maintaining yourself this is incredibly powerful as you no longer need to sync your deployment of client-side and API features. As the project grows, your test suites can run on the entire application and when we add FaunaDB to the mix we don’t have to worry about running migrations post-deploy. This gives you the scalability of microservices in practice but without the added overhead of maintaining multiple codebases and deployments. To set up an API simply create an api directory inside of the pages directory and now you can build out your API using file system routing. So if we create a login.ts file, we can now make requests to /api/login. Here is an example login route where we can handle a GET or POST request that will be deployed as a serverless function: import { NextApiRequest, NextApiResponse } from 'next' export default async function main(req: NextApiRequest, res: NextApiResponse) { switch(req.method) { case 'GET': try { // Check if user is logged in return res.status(200).json({ isLoggedIn: true }); } catch(e) { return res.status(500).json({ error: e.message }); } case 'POST': try { // login or create user and send authentication email here return res.status(200).json({ userId, isLoggedIn: true }); } catch(e) { return res.status(500).json({ error: e.message }); } default: Return res.status(500).json({ error: 'Bad Request.'}); } In this case, we can use a GET request to verify if a given token is valid and use a POST to log in a user and send the authentication email. Sending the Auth Email To send the emails with the passwords, I used nodemailer and mailgun. I won’t go into setting up mailgun here since you could use another provider like sendgrid, but I will mention that it is important to make sure you are careful sending your email inside of a callback instead of using async / await or promises. If you return out of a serverless function before receiving a success message from the email server, the serverless function instance shuts down without waiting for the email send call to resolve. The Mobile Pitfall When I first created and launched this app I built the magic link system and it was great on desktop. I thought it was incredibly seamless until I handed it off to my friends who primarily opened it on mobile phones or inside of a Facebook or Twitter browser. I'll give you the benefit of hindsight here and let you know that magic links are an awful experience on mobile devices. Mobile devices, iOS specifically in this case, do not allow users to set a different default browser. Therefore many users would generate a link in the browser they like using (like Google Chrome) only to open the link in their default browser (Safari) through their preferred email application. Since our authentication system requires using the same browser and device to maintain security, nobody could log in with our magic links. On top of that, if users were using the browser inside of a social application like Facebook, there was no way to open the link inside the Facebook browser. I decided on a different UX to account for this. Instead, I would email a section of the token to be copy and pasted into a password input field instead. This had the added advantage of allowing the user to stay in the same browser tab while they authenticated and it would work well inside of all browsers even those who were inside of social applications that had their own internal browser windows. Architecting the API Now that we have a way to authenticate users, we can submit a question and save it to the database we’re going to create two things. First, we’ll create a page for asking a question, second, we’ll make an API route with a cloud function that can receive a POST request and save the data to our database. This has the advantage of allowing us to authenticate users in our API and ensuring they can’t manipulate our queries. FaunaDB also has ways that you can safely do this on the client-side, however, I chose to only access the database from inside the API. Personally, I like the added security that working with our database through an API can provide. This also allows for some more freedom down the line should we incorporate other external services for things like monitoring, email notifications, caching, or even bringing in data from another database. I find having a server environment to unite these services allows for better performance tuning and security than trying to do it all in the browser. You are also not tied to Javascript, should you want to change the API to a more performant language like Go, which is supported by FaunaDB and Vercel, you are free to do so. We can expand our API by creating a questions directory inside of the api directory with an index.ts file. This will be our main endpoint for creating questions. The endpoint can now be accessed at /api/questions, we'll use this endpoint to POST new questions and to GET the list of all questions. We are also going to need a way to fetch a single question by its id. We’ll create a new endpoint by creating a [qid].ts file in the same questions directory. This allows us to call /api/questions/:qid with a dynamic question id as the last part of the URL. Api Routes vs getServerSideProps() In Next.js you have two parts to your server-side processes. You have your API directory, which are your serverless functions that always execute on the backend. In my app I used these to fetch the raw data we need from the database. Here’s an example of our /api/questions/:qid route, where we fetch our question, the answers with a reference to it, and all the votes with references to that answer. We then return that data in the response. export default async function main(req: NextApiRequest, res: NextApiResponse) { const { cookies, method, query: { qid = '' } = {} } = req; switch(method) { case 'GET': try { const question: {data: FaunaQuestion} | undefined = await client?.query( getQuestionById(typeof qid === 'string' ? qid : '') ) const answers: {data: FaunaAnswer[]} | undefined = await client?.query( q.Map( q.Paginate( q.Match( q.Index('answers_by_question_id'), questionRef(qid) ) ), ref => q.Get(ref) ) ) const votes: {data: FaunaVote[]} | undefined = await client?.query( q.Map( q.Paginate( q.Join( q.Match( q.Index('answers_by_question_id'), questionRef(qid) ), q.Index('votes_by_answer') ) ), ref => q.Get(ref) ) ) return res.status(200).json({ question, answers, votes }) } catch (e) { return res.status(500).json({ error: e.message }) } case 'POST': // ...for posting an answer to a question default: return } } You can see some of my helpers like questionRef() and getQuestionById() that are more good examples of using FQL to help make your code more readable and reusable all without a complex abstraction or ORM. export const getCollectionDocumentById = compose(q.Get, q.Ref); export function getQuestionById(id: string) { return getCollectionDocumentById(q.Collection('questions'), id); } export function questionRef(id: string | string[]): faunadb.Expr { return q.Ref(q.Collection('questions'), id); } The other part of our Next.js app that executes on a server is actually within our /pages/questions/[qid].tsx file that represents a page component in our app. Next.js allows you to export a function called getServerSideProps() that fetches the data necessary to render your page server-side before serving it. This is where I prefer to do any map reduces, sorting, or aggregating of the data itself. You can choose to do this in your API routes as well but I like to keep a separation of concerns here, where my API routes simply return the necessary data from the database and any aggregation needed for rendering and display is done in my getServerSideProps() functions. export const getServerSideProps: GetServerSideProps = async ({req, params}) => { try { const host = req?.headers.host; const res = await fetch(`{host}/api/questions/${params?.qid}`) const resJson: QuestionResponse = await res.json() const { question, answers, votes } = resJson; return { props: { question, answers: mergeAndSortAnswersAndVotes(answers, votes) } } } catch (error) { throw new Error('Oops! Something went wrong...'); } }; I went on to use a similar setup for creating the other endpoints, with the API routes fetching data from fauna and the data processing done on the backend of our pages. The other added advantage of this is the data processing bit used for display may not be necessary for other things we may need these endpoints for like sending out notifications to users when a question is answered. In a sense we are doing a serverless take on the classic MVVM pattern, where our model sits in the API folder and our view models are our getServerSideProps functions.. This just showcases how even though we have a single repository with Next.js for code management, we can easily maintain separate domains for our services and renderings. We can also just as easily change this if need be in the future. The Frontend For this prototype I wanted to keep the frontend as simple as possible. Next.js already comes set up to use react out of the box but what about our styles? I personally love tachyons, which is a lightweight atomic CSS framework not unlike tailwind, just considerably lighter weight. While tailwind is more configurable, tachyons is far easier to memorize so I find myself just adding the classes without thinking or referring back to the documentation. For any custom CSS I have to write or any styles that require dynamic variables I like to use the styled jsx that Next.js comes with out of the box. Typically with this setup I write very few styles or modifications myself. In this case I will be designing as I code as well so I just stuck to the tachyons defaults which are good for this project. Here’s a quick look at the Header component: <header className="Header flex items-center justify-between w-100 pb3 bb"> <Link href="/"> <a className="Header__logoLink db mv2 pa0 black link b"> <img className="Header__logo db" alt="studyvue logo" src="/logo.svg" /> </a> </Link> <nav className="Header__nav flex items-center"> {userInfo.isLoggedIn && ( <Link href="/me"> <a className="Header__logoutLink db black f5 link dark-blue dim"> <span className="di dn-ns pr2">Me</span><span className="dn di-ns pr3">My Stuff</span> </a> </Link> )} <Link href="/questions/ask"> <a className="Header__askQuestionLink db ph3 pv2 ml2 ml3-ns white bg-blue hover-bg-green f5 link"> Ask <span className="dn di-ns">a Question</span> </a> </Link> </nav> <style jsx>{` .Header__logo { width: 12rem; } @media screen and (min-width: 60em) { .Header__logo { width: 16rem; } } `}</style> </header> At this point, you may also notice that I am adding my own class names as well like Header and Header__logo. This is a bit of a take on the classic BEM CSS methodology. I have modified this a bit for use with React and to be Component, Element, Modifier instead. Where the component name prefixes all class names used in that component, followed by two underscores, followed by the name of the element itself. Right now, I'm not managing a lot of styles, however, call me old school, but I still like to be able to comb the DOM in my developer tools and know exactly what I am looking at. So while most of these class names do not have style attached to them right now, I love the meaning it conveys as I develop so I've made a habit of adhering to this. It's also nice when the time comes to write end to end tests to be able to query any element easily. User Context All of the forms and UI elements inside of the application follow very standard React architectural methods so I won’t go into those in detail here. One thing that I think is worth talking about in the context of Next.js is how to have a global context to our app that lets us know if a user is logged in and what their user id is. At this point, we have already set up our app to use an Http only cookie that will be passed on every request back to our API. The notable exception to this is our getServerSideProps function. This will receive the cookie, but in order to use it to fetch data from our API we will have to forward that cookie along. In this case, we don’t have to worry about this because all of our data is public-facing. Therefore any calls to fetch questions, answers, and votes can just use our standard server token from the API. Where we do need to pass the user token is any time we POST data to the database, when we want to have a page that shows a user's asked questions, and when changing layouts based on a user's logged-in status. In all of the above cases, we can make those calls from the client directly to our API so the saved token is passed along by default in cookies every time. What we don't want to happen is see a re-render on every page load as we update our header to reflect if the user is logged in or not. The ideal scenario is when the user opens up the app, we check if the token saved to cookies is valid and then update our global context with a boolean value isLoggedIn and the userId from our database. I've opted not to pass the email back to the frontend under any circumstances to provide some additional protection of the only PII we do store in the database. In Next.js this is done by creating a _app.tsx file in the pages directory. This is a wrapper component that we can use React's useEffect() hook in and run once when the application loads and it will hold that value until the browser is refreshed again. By using Next's Link components to navigate, the DOM is updated only where needed and our user context persists as our users navigate the application. You could do this user check during server-side rendering as well, however, I found keeping these user functions client-side to result in less code in my getServerSideProps functions since we don’t need to check for the presence of a token and forward that cookie along to the API. Here is an example of my _app.tsx file: import { useEffect, useState } from 'react'; import { AppProps } from 'next/app'; import { UserContext } from '../utils/contexts'; function MyApp({ Component, pageProps }: AppProps) { const [userInfo, setUserInfo] = useState<{isLoggedIn: boolean, userId: string | null}>({isLoggedIn: false, userId: null}); useEffect(() => { checkLogIn() .then((userData: {userId: string | null, isLoggedIn: boolean}) => { setUserInfo(userData); }) .catch((_error) => { setUserInfo({ userId: null, isLoggedIn: false }); }) }, []); return ( <UserContext.Provider value={[userInfo, setUserInfo]}> <Component {...pageProps} /> </UserContext.Provider> ); } async function checkLogIn() { try { const res = await fetch('/api/auth/login'); const resData = await res.json(); return resData; } catch(error) { throw new Error(`Error: ${error.message}`); } } export default MyApp Above you can see how the UserContext wraps the entire app and provides a method to update this from within the app via the setUserInfo() method. We can use this at the various login points in the application to update the context without refreshing the page after a new login. This allows for many points of login throughout the application and does not force users to go to a /login or /create-account route in order to participate. This, in conjunction with our easy two-step authentication, keeps the user in the experience at the place where they decided to login without forcing them to find their way back to the question or answer forms. Algolia Search So in order for our product to be effective we need to have robust search. Ideally the search will be able to handle returning results in the event of misspellings and be able to query on the question as well as the additional description of the question. FaunaDB does have search features built into it for exact text search but to build out the kind of robustness we want is going to be quite a bit of overhead. Thankfully Algolia is a product designed to deal with this exact issue. Setting up Algolia, like FaunaDB can all be done through their GUI interface. You create what are called Indices, which are just going to be copies of your FaunaDB objects. In this case, I only want to create an Index for the questions since this is what users need to be able to search on. In the future I could see a world where we also add the top voted answers to the search so we can get even richer results, but for now all that is needed on day one is indexing of the questions. The way that I do this is upon successful saving of our question to FaunaDB in our API, I then follow that up with POST of a flattened copy of that object to Algolia. It’s important to only pass the fields you want to be able to search on to Algolia as well as the Ref of the Question. The Ref Id is what we are going to use to link to the actual question in our app at the route /questions/:qid. By doing this users can now search question titles and their descriptions and the results returned by Algolia can easily be used to link to the actual question page. Here is an example of that flow inside the api: const postQuestion: FaunaQuestion | undefined = await userClient?.query( createQuestionDocument(formattedBody) ) try { const algoliaClient = algoliasearch('<your_algolia_id>', process.env.ALGOLIA_SECRET); const questionsIndex = algoliaClient.initIndex('prod_QUESTIONS'); const refId = await userClient?.query(q.Select(['ref', 'id'], postQuestion)); const indexableQuestionObj = { objectID: refId, question: postQuestion.data.question, description: postQuestion.data.description, } await questionsIndex?.saveObject(indexableQuestionObj) } catch (error) { console.error('Error indexing question with algolia: ', postQuestion); } return res.status(200).json(postQuestion); The key thing to note here is I didn’t want any failures to index a question with Algolia to interrupt the user experience. Here we simply wrap that up in a try… catch block and in our catch where I am logging the error we can send that off to our error logging software like Sentry or LogRocket or Honeybadger. This will let us manually correct the issue if need be but all that would happen in a failure is the question won’t come up in search results. In that case, we don’t want users to try to double save the question since we’d end up with it in FaunaDB twice. In the future, we can create a system to retry adding failures to Algolia asynchronously outside the user flow to make this more robust, but either way, we want users to be able to move on as long as the data makes it to FaunaDB, our source of truth. Algolia on the Client Now that Algolia just saved us time on the building of search, we can use Algolia to save us some time building the actual search bar. Algolia has React components ready to go for us that can just be dropped into our app and styled with some CSS to match our theme. We can just install the react-instantsearch-dom package from npm and we'll use the same Algolia search package that we used in our api on the client to fetch our results. I will admit actually finding a code sample that showcased how this worked was a bit tough so here’s my approach. I made a component called SearchBar that wrapped up the Algolia InstantSearch and SearchBox components. I also defined a component called Hit that will represent the list item of a hit and showcase our data the way we want it to. Here’s an example: const searchClient = algoliasearch( '<YOUR_ALGOLIA_ID>', '<YOUR_ALGOLIA_KEY>' ); const Hit = ({ hit: { question, hashtags, objectID }}: Hit) => { return ( <div className="Hit pv3 bt b--silver"> <Link href="/questions/[qid]" as={`/questions/${objectID}`}> <a className="Hit__question db f5 link dark-blue dim"> <span>{question}</span> </a> </Link> </div> ); } const Search = () => ( <div className="Search"> <InstantSearch indexName="prod_QUESTIONS" searchClient={searchClient} > <SearchBox translations={{ placeholder: "Search questions or hashtags..." }} /> <Hits hitComponent={Hit} /> </InstantSearch> <style jsx global>{` .ais-SearchBox-form { position: relative; display: block; } .ais-SearchBox-input { position: relative; display: block; width: 100%; padding: 1rem 2rem; border: 1px solid #999; border-radius: 0; background-color: #fff; } .ais-SearchBox-submit, .ais-SearchBox-reset { position: absolute; top: 50%; transform: translateY(-50%); height: 1rem; appearance: none; border: none; background: none; } .ais-SearchBox-submitIcon, .ais-SearchBox-resetIcon { width: 1rem; height: 1rem; } .ais-SearchBox-submit { left: 0.2rem; } .ais-SearchBox-reset { right: 0.2rem; } .ais-Hits-list { padding: 0; list-style: none; } `}</style> </div> ); As you can see I just used Next.js styled-jsx block with a global scope to style the classes inside of the Algolia components. And there you have it, professional-grade search and an easy to use component ready to go in under an hour. Deployment At this point deployment is as simple as typing now into the command line. One thing about using Vercel is that our deployment pipeline is effectively done for us before we even start writing the app. Rather than deploy directly from the terminal I set up their GitHub integration which does two things. - Any merges into master are automatically deployed to production. - Any new branches deploy an instance of our app with those changes. These effectively become our QA branches. Now if you have any test suites to run in your deployment pipeline you will need another tool to run tests before deploy. In this case I am ok to run tests manually for a while as this is just a prototype and be cautious about merging into master. The nice thing is I can spin up QA branches and have people try out new changes and updates before sending them off to the public release. In Conclusion All in all the construction of the entire application took a few hours over about three weekends with the above approach. I have a performant, scaleable prototype to test my idea out with that I can grow into. I have found that combining Next.js with Vercel makes microservices less painful by abstracting the difficulties in managing multiple services into simple configuration files. Infrastructure as code is empowering for solo developers running on limited time. FaunaDB was also an excellent choice as I got the flexibility of a NoSql database, but was also able to model out a normalized data model with ease. FQL was a powerful easter egg whose power I didn’t realize until I started actually working with it. I know I’ve just scratched the surface on how we can leverage this to optimize the various queries we need to make. Depending upon how this experiment goes the future for this application can go in many directions. I think the key benefit to this type of architecture is that it's humble enough to not be too opinionated, flexible enough to allow for pivots, and with enough structure to not get sucked down wormholes of configuration and build steps. That’s all most developers can ask for, the ability to work efficiently on the business problem at hand. Please take a look at the project here, and ask or answer some questions if you feel inclined! Discussion (0)
https://practicaldev-herokuapp-com.global.ssl.fastly.net/mrispoli24/building-a-serverless-stack-overflow-for-students-learning-at-home-38in
CC-MAIN-2021-21
refinedweb
8,143
61.36
Sorry, I forgot to label the kernel tree. It is 2.6.26-rc5-mm2.On Tue, Jun 10, 2008 at 10:37 AM, Michael Buesch <mb@bu3sch.de> wrote:> On Tuesday 10 June 2008 16:34:21 Michael Buesch wrote:>> On Tuesday 10 June 2008 16:29:17 Vegard Nossum wrote:>> > On Tue, Jun 10, 2008 at 4:23 PM, Michael Buesch <mb@bu3sch.de> wrote:>> > > On Tuesday 10 June 2008 16:09:37 Miles Lane wrote:>> > >> BUG: unable to handle kernel NULL pointer dereference at 00000000>> > >> IP: [<f8e783d5>] :b43:b43_dma_mapping_error+0x16/0x155>> > >>> > >>> > > It seems to crash at>> > > 60 extern const struct dma_mapping_ops *dma_ops;>> > > 61>> > > 62 static inline int dma_mapping_error(dma_addr_t dma_addr)>> > > 63 {>> > > 64 if (dma_ops->mapping_error)>> > > 65 return dma_ops->mapping_error(dma_addr);>> > > 66>> > > 67 return (dma_addr == bad_dma_address);>> > > 68 }>> >>> > No, this is wrong.>> >>> > /* Check if a DMA mapping address is invalid. */>> > static bool b43_dma_mapping_error(struct b43_dmaring *ring,>> > dma_addr_t addr,>> > size_t buffersize, bool dma_to_device)>> > {>> > if (unlikely(dma_mapping_error(ring->dev->dev->dma_dev, addr)))>> >>> > It crashes on this line ---^>>>> Which calls dma_mapping_error(), correct?>>>> But you are right. I see the bug now.>> ring->dev is assigned after the call.>> I wonder why it works reliably on all of my machines.>>>> Ehm no wait a second...> What strange tree are you looking at?> This is a copy of the code from my local tree.>> 516 /* Check if a DMA mapping address is invalid. */> 517 static bool b43_dma_mapping_error(struct b43_dmaring *ring,> 518 dma_addr_t addr,> 519 size_t buffersize, bool dma_to_device)> 520 {> 521 if (unlikely(dma_mapping_error(addr)))> 522 return 1;>> This code is perfectly fine.> This must be some merge error in some upstream tree.>> --> Greetings Michael.>
http://lkml.org/lkml/2008/6/10/175
CC-MAIN-2017-26
refinedweb
270
65.83
. Merchify™ Custom Merchify™ lets you easily sell awesome, made-on-demand merch from your Shopify Store. Print Aura Custom Print Aura prints & ships YOUR art on T-Shirts and custom products. Orders ship under your brand with custom invoices & return labels. Kite - Print and Dropshipping on Demand Custom Sell your designs on apparel, prints, canvas, phone cases & more. Delivered worldwide and printed on demand. Inventory Source Free Fully Integrate with ANY Dropship Supplier or choose from our 100+ Supplier Directory to auto-upload products and sync inventory. MODALYST $35.00 / month Source & sell cool, edgy brands – with products available for dropshipping – integrated with your site through real-time product feeds. Gooten Custom The Gooten Shopify App allows you to add dropshipping and fulfillment to your store. Sell your custom designs on our products in minutes. Expressfy $14.95 – $24.95 / month Easily import products from AliExpress directly into your Shopify store, all in just a few clicks! Inkthreadable Custom Sell custom printed t-shirts, mugs, cases, posters & more on your Shopify store. Printed in house, in the UK & shipped worldwide. Scalable Press Custom Instantly fulfill your t-shirt, mug, phone case, and poster orders. Scalable Press prints and drop ships orders directly to your customer. Teescape Fulfillment Custom Design products and automatically add them to your store, with high-quality printing, fast shipping, and Low Prices! Pillow Profits Fulfillment $29.99 / month The premier Shopify fulfillment app for selling high-quality custom printed footwear and more! SMAR7 Express - Fulfillment Automation $7.99 / month SMAR7 Express allows anyone to easily setup and completely automate their very own drop shipping business with one click fulfillment. Sunrise Wholesale Product Dropshipping Free – $29.00 / month Great products that you add and sell from your Shopify Store. We drop-ship those items right to your customers. Art of Where Custom Print your artwork on leggings, silk scarves, dresses, and more. Build your brand with custom labels and packaging. Worldwide shipping. All Over Print Custom We Print-on-Demand All Over Print (Dye-sublimation) & Direct to Garment products and dropship to your customers with your brand worldwide!. Aliexpress Dropshipping $5.00 – $20.00 / month import desired products on AliExpress.com to add to your store with ease RageOn Connect Custom Connect your RageOn products to your Shopify store. Automatically process orders to your customers and fulfill. SEMBLY - print & ship shirts on demand Custom Printing, packing, shipping and tracking your t-shirts on demand Teezily Plus Custom Teezily Plus is a complete e-commerce solution that enables worldwide sellers to set up an online store to sell custom items in a few clicks Collective Fab From $29.00 / month Add thousands of fashion & beauty dropship products to your store with just a few clicks. Orders are shipped directly to your customers. Canvas by CG Pro Prints Custom Sell your artwork on canvas, fine art paper, adhesive wall clings, and more. We build and dropship directly to your customer. PersonalizationPop Custom Access to 1,000's of custom and personalized items. Fully automated drop ship fulfillment system. Earn healthy margins! RSpider From $0.00 / month RSpider provides smart product sourcing and allows 1-click products importing to your Shopify store. Cimpress Open Custom Create custom printed products on demand with one of the most trusted names in the business. Pixels Custom Add 100+ print-on-demand products to your Shopify store, including: t-shirts, canvas prints, framed prints, phone cases, towels, and more. Air Waves OnDemand (Shirts and Apparel) Custom Air Waves gives you the highest quality custom apparel, dropshipped to your customers at prices that will give you plenty of room for margin Streetshirts Custom Send your t-shirt orders to streetshirts - we print & ship directly to your customers. White-label, drop-ship DTG fulfillment from the UK. MODALYST for Suppliers Free Increase distribution, raise brand awareness, earn higher margins, & manage 1,000s of retailers through our automated dropshipping platform. Tshirtgang Printing & Fulfillment Free Integration with Tshirtgang.com, your T-shirt fulfillment partner. VaultDrop Custom Quality controlled jewelry, order automation & billing, fast fulfillment & delivery from the US, this is the App for scaling Jewelry stores. BigBuy Dropshipping Inventory Sync Free – $29.99 / month Automatically synchronize your inventory with the BigBuy wholesaler dropshipping service. Ltd. Ed. Custom Leggings are one of the fastest growing products. We offer competitive pricing and easy set up. Start selling today. Mothership Connect Custom Connect to dropshippers that sell with Mothership. Easily import and manage products. Automatically route orders to Mothership suppliers. Doba $9.99 / month Instantly add the products your customers are searching for directly to your Shopify store. Fast, reliable delivery direct to your customer. Watchify $15.00 / month Watchify is the leading dropship supplier of high quality watches for your store Spocket Free Import hundreds of high quality and handmade products from Etsy. Genie $0.95 – $43.95 / month Genie, the app that imports your eBay items directly to your Shopify ecommerce store with a single click. ArtGun Custom We're your backend printing and fulfillment partner, for on-demand graphic apparel. Expand your merchandise offering w/ no inventory risk. Spark Innovation Dropshipping Fulfillment Custom FiberFix was created in 2012. After appearing on Shark Tank and landing a deal with Lori Grenier FiberFix has seen roll outs in large retail Product Creations Group Dropship Fulfillment (Electronics) Custom Join the PCG Network of stores and enjoy the benefits of factory direct sourcing. Free access to quality products and US-based drop-shipping
https://apps.shopify.com/featured/find-products-to-sell-3
CC-MAIN-2017-26
refinedweb
911
57.77
Getting Started With An Express And ES6+ JavaScript Stack This article is the second part in a series, with part one located here, which provided basic and (hopefully) intuitive insight into Node.js, ES6+ JavaScript, Callback Functions, Arrow Functions, APIs, the HTTP Protocol, JSON, MongoDB, and more. In this article, we’ll build upon the skills we attained in the previous one, learning how to implement and deploy a MongoDB Database for storing user booklist information, build an API with Node.js and the Express Web Application framework to expose that database and perform CRUD Operations upon it, and more. Along the way, we’ll discuss ES6 Object Destructuring, ES6 Object Shorthand, the Async/Await syntax, the Spread Operator, and we’ll take a brief look at CORS, the Same Origin Policy, and more. In a later article, we’ll refactor our codebase as to separate concerns by utilizing three-layer architecture and achieving Inversion of Control via Dependency Injection, we’ll perform JSON Web Token and Firebase Authentication based security and access control, learn how to securely store passwords, and employ AWS Simple Storage Service to store user avatars with Node.js Buffers and Streams — all the while utilizing PostgreSQL for data persistence. Along the way, we will re-write our codebase from the ground up in TypeScript as to examine Classical OOP concepts (such as Polymorphism, Inheritance, Composition, and so on) and even design patterns like Factories and Adapters. A Word Of Warning There is a problem with the majority of articles discussing Node.js out there today. Most of them, not all of them, go no further than depicting how to setup Express Routing, integrate Mongoose, and perhaps utilize JSON Web Token Authentication. The problem is that they don’t talk about architecture, or security best practices, or about clean coding principles, or ACID Compliance, Relational Databases, Fifth Normal Form, the CAP Theorem or Transactions. It’s either assumed that you know about all of that coming in, or that you won’t be building projects large or popular enough to warrant that aforementioned knowledge. There appear to be a few different types of Node developers — among others, some are new to programming in general, and others come from a long history of enterprise development with C# and the .NET Framework or the Java Spring Framework. The majority of articles cater to the former group. In this article, I’m going to do exactly what I just stated that too many articles are doing, but in a follow up article, we are going to refactor our codebase entirely, permitting me to explain principles such as Dependency Injection, Three-Layer Architecture (Controller/Service/Repository), Data Mapping and Active Record, design patterns, unit, integration, and mutation testing, SOLID Principles, Unit of Work, coding against interfaces, security best practices like HSTS, CSRF, NoSQL and SQL Injection Prevention, and so on. We will also migrate from MongoDB to PostgreSQL, using the simple query builder Knex instead of an ORM — permitting us to build our own data access infrastructure and to get close up and personal with the Structured Query Language, the different types of relations (One-to-One, Many-to-Many, etc.), and more. This article, then, should appeal to beginners, but the next few should cater to more intermediate developers looking to improve their architecture. In this one, we are only going to worry about persisting book data. We won’t handle user authentication, password hashing, architecture, or anything complex like that. All of that will come in the next and future articles. For now, and very basically, we’ll just build a method by which to permit a client to communicate with our web server via the HTTP Protocol as to save book information in a database. Note: I’ve intentionally kept it extremely simple and perhaps not all that practical here because this article, in and of itself, is extremely long, for I have taken the liberty of deviating to discuss supplemental topics. Thus, we will progressively improve the quality and complexity of the API over this series, but again, because I’m considering this as one of your first introductions to Express, I’m intentionally keeping things extremely simple. ES6 Object Destructuring ES6 Object Destructuring, or Destructuring Assignment Syntax, is a method by which to extract or unpack values from arrays or objects into their own variables. We’ll start with object properties and then discuss array elements. const person = { name: 'Richard P. Feynman', occupation: 'Theoretical Physicist' }; // Log properties: console.log('Name:', person.name); console.log('Occupation:', person.occupation); Such an operation is quite primitive, but it can be somewhat of a hassle considering we have to keep referencing person.something everywhere. Suppose there were 10 other places throughout our code where we had to do that — it would get quite arduous quite fast. A method of brevity would be to assign these values to their own variables. const person = { name: 'Richard P. Feynman', occupation: 'Theoretical Physicist' }; const personName = person.name; const personOccupation = person.occupation; // Log properties: console.log('Name:', personName); console.log('Occupation:', personOccupation); Perhaps this looks reasonable, but what if we had 10 other properties nested on the person object as well? That would be many needless lines just to assign values to variables — at which point we’re in danger because if object properties are mutated, our variables won’t reflect that change (remember, only references to the object are immutable with const assignment, not the object’s properties), so basically, we can no longer keep “state” (and I’m using that word loosely) in sync. Pass by reference vs pass by value might come into play here, but I don’t want to stray too far from the scope of this section. ES6 Object Destructing basically lets us do this: const person = { name: 'Richard P. Feynman', occupation: 'Theoretical Physicist' }; // This is new. It’s called Object Destructuring. const { name, occupation } = person; // Log properties: console.log('Name:', name); console.log('Occupation:', occupation); We are not creating a new object/object literal, we are unpacking the name and occupation properties from the original object and putting them into their own variables of the same name. The names we use have to match the property names that we wish to extract. Again, the syntax const { a, b } = someObject; is specifically saying that we expect some property a and some property b to exist within someObject (i.e, someObject could be { a: 'dataA', b: 'dataB' }, for example) and that we want to place whatever the values are of those keys/properties within const variables of the same name. That’s why the syntax above would provide us with two variables const a = someObject.a and const b = someObject.b . What that means is that there are two sides to Object Destructuring. The “Template” side and the “Source” side, where the const { a, b } side (the left-hand side) is the template and the someObject side (the right-hand side) is the source side — which makes sense — we are defining a structure or “template” on the left that mirrors the data on “source” side. Again, just to make this clear, here are a few examples: // ----- Destructure from Object Variable with const ----- // const objOne = { a: 'dataA', b: 'dataB' }; // Destructure const { a, b } = objOne; console.log(a); // dataA console.log(b); // dataB // ----- Destructure from Object Variable with let ----- // let objTwo = { c: 'dataC', d: 'dataD' }; // Destructure let { c, d } = objTwo; console.log(c); // dataC console.log(d); // dataD // Destructure from Object Literal with const ----- // const { e, f } = { e: 'dataE', f: 'dataF' }; // <-- Destructure console.log(e); // dataE console.log(f); // dataF // Destructure from Object Literal with let ----- // let { g, h } = { g: 'dataG', h: 'dataH' }; // <-- Destructure console.log(g); // dataG console.log(h); // dataH In the case of nested properties, mirror the same structure in your destructing assignment: const person = { name: 'Richard P. Feynman', occupation: { type: 'Theoretical Physicist', location: { lat: 1, lng: 2 } } }; // Attempt one: const { name, occupation } = person; console.log(name); // Richard P. Feynman console.log(occupation); // The entire `occupation` object. // Attempt two: const { occupation: { type, location } } = person; console.log(type); // Theoretical Physicist console.log(location) // The entire `location` object. // Attempt three: const { occupation: { location: { lat, lng } } } = person; console.log(lat); // 1 console.log(lng); // 2 As you can see, the properties you decide to pull off are optional, and to unpack nested properties, simply mirror the structure of the original object (the source) in the template side of your destructuring syntax. If you attempt to destructure a property that does not exist on the original object, that value will be undefined. We can additionally destructure a variable without first declaring it — assignment without declaration — using the following syntax: let name, occupation; const person = { name: 'Richard P. Feynman', occupation: 'Theoretical Physicist' }; ;({ name, occupation } = person); console.log(name); // Richard P. Feynman console.log(occupation); // Theoretical Physicist We precede the expression with a semicolon as to ensure we don’t accidentally create an IIFE (Immediately Invoked Function Expression) with a function on a previous line (if one such function exists), and the parentheses around the assignment statement are required as to stop JavaScript from treating your left-hand (template) side as a block. A very common use case of destructuring exists within function arguments: const config = { baseUrl: '<baseURL>', awsBucket: '<bucket>', secret: '<secret-key>' // <- Make this an env var. }; // Destructures `baseUrl` and `awsBucket` off `config`. const performOperation = ({ baseUrl, awsBucket }) => { fetch(baseUrl).then(() => console.log('Done')); console.log(awsBucket); // <bucket> }; performOperation(config); As you can see, we could have just used the normal destructuring syntax we are now used to inside of the function, like this: const config = { baseUrl: '<baseURL>', awsBucket: '<bucket>', secret: '<secret-key>' // <- Make this an env var. }; const performOperation = someConfig => { const { baseUrl, awsBucket } = someConfig; fetch(baseUrl).then(() => console.log('Done')); console.log(awsBucket); // <bucket> }; performOperation(config); But placing said syntax inside the function signature performs destructuring automatically and saves us a line. A real-world use case of this is in React Functional Components for props: import React from 'react'; // Destructure `titleText` and `secondaryText` from `props`. export default ({ titleText, secondaryText }) => ( <div> <h1>{titleText}</h1> <h3>{secondaryText}</h3> </div> ); As opposed to: import React from 'react'; export default props => ( <div> <h1>{props.titleText}</h1> <h3>{props.secondaryText}</h3> </div> ); In both cases, we can set default values to the properties as well: const personOne = { name: 'User One', password: 'BCrypt Hash' }; const personTwo = { password: 'BCrypt Hash' }; const createUser = ({ name = 'Anonymous', password }) => { if (!password) throw new Error('InvalidArgumentException'); console.log(name); console.log(password); return { id: Math.random().toString(36) // <--- Should follow RFC 4122 Spec in real app. .substring(2, 15) + Math.random() .toString(36).substring(2, 15), name: name, // <-- We’ll discuss this next. password: password // <-- We’ll discuss this next. }; } createUser(personOne); // User One, BCrypt Hash createUser(personTwo); // Anonymous, BCrypt Hash As you can see, in the event that name is not present when destructured, we provide it a default value. We can do this with the previous syntax as well: const { a, b, c = 'Default' } = { a: 'dataA', b: 'dataB' }; console.log(a); // dataA console.log(b); // dataB console.log(c); // Default Arrays can be destructured too: const myArr = [4, 3]; // Destructuring happens here. const [valOne, valTwo] = myArr; console.log(valOne); // 4 console.log(valTwo); // 3 // ----- Destructuring without assignment: ----- // let a, b; // Destructuring happens here. ;([a, b] = [10, 2]); console.log(a + b); // 12 A practical reason for array destructuring occurs with React Hooks. (And there are many other reasons, I’m just using React as an example). import React, { useState } from "react"; export default () => { const [buttonText, setButtonText] = useState("Default"); return ( <button onClick={() => setButtonText("Toggled")}> {buttonText} </button> ); } Notice useState is being destructured off the export, and the array functions/values are being destructured off the useState hook. Again, don’t worry if the above doesn’t make sense — you’d have to understand React — and I’m merely using it as an example. While there is more to ES6 Object Destructuring, I’ll cover one more topic here: Destructuring Renaming, which is useful to prevent scope collisions or variable shadows, etc. Suppose we want to destructure a property called name from an object called person, but there is already a variable by the name of name in scope. We can rename on the fly with a colon: // JS Destructuring Naming Collision Example: const name = 'Jamie Corkhill'; const person = { name: 'Alan Turing' }; // Rename `name` from `person` to `personName` after destructuring. const { name: personName } = person; console.log(name); // Jamie Corkhill <-- As expected. console.log(personName); // Alan Turing <-- Variable was renamed. Finally, we can set default values with renaming too: const name = 'Jamie Corkhill'; const person = { location: 'New York City, United States' }; const { name: personName = 'Anonymous', location } = person; console.log(name); // Jamie Corkhill console.log(personName); // Anonymous console.log(location); // New York City, United States As you can see, in this case, name from person ( person.name) will be renamed to personName and set to the default value of Anonymous if non-existent. And of course, the same can be performed in function signatures: const personOne = { name: 'User One', password: 'BCrypt Hash' }; const personTwo = { password: 'BCrypt Hash' }; const createUser = ({ name: personName = 'Anonymous', password }) => { if (!password) throw new Error('InvalidArgumentException'); console.log(personName); console.log(password); return { id: Math.random().toString(36).substring(2, 15) + Math.random().toString(36).substring(2, 15), name: personName, password: password // <-- We’ll discuss this next. }; } createUser(personOne); // User One, BCrypt Hash createUser(personTwo); // Anonymous, BCrypt Hash ES6 Object Shorthand Suppose you have the following factory: (we’ll cover factories later) const createPersonFactory = (name, location, position) => ({ name: name, location: location, position: position }); One might use this factory to create a person object, as follows. Also, note that the factory is implicitly returning an object, evident by the parentheses around the brackets of the Arrow Function. const person = createPersonFactory('Jamie', 'Texas', 'Developer'); console.log(person); // { ... } That’s what we already know from the ES5 Object Literal Syntax. Notice, however, in the factory function, that the value of each property is the same name as the property identifier (key) itself. That is — location: location or name: name. It turned out that that was a pretty common occurrence with JS developers. With the shorthand syntax from ES6, we may achieve the same result by rewriting the factory as follows: const createPersonFactory = (name, location, position) => ({ name, location, position }); const person = createPersonFactory('Jamie', 'Texas', 'Developer'); console.log(person); Producing the output: { name: 'Jamie', location: 'Texas', position: 'Developer' } It’s important to realize that we can only use this shorthand when the object we wish to create is being dynamically created based on variables, where the variable names are the same as the names of the properties to which we want the variables assigned. This same syntax works with object values: const createPersonFactory = (name, location, position, extra) => ({ name, location, position, extra // <- right here. }); const extra = { interests: [ 'Mathematics', 'Quantum Mechanics', 'Spacecraft Launch Systems' ], favoriteLanguages: [ 'JavaScript', 'C#' ] }; const person = createPersonFactory('Jamie', 'Texas', 'Developer', extra); console.log(person); Producing the output: { name: 'Jamie', location: 'Texas', position: 'Developer', extra: { interests: [ 'Mathematics', 'Quantum Mechanics', 'Spacecraft Launch Systems' ], favoriteLanguages: [ 'JavaScript', 'C#' ] } } As a final example, this works with object literals as well: const id = '314159265358979'; const name = 'Archimedes of Syracuse'; const location = 'Syracuse'; const greatMathematician = { id, name, location }; ES6 Spread Operator (…) The Spread Operator permits us to do a variety of things, some of which we’ll discuss here. Firstly, we can spread out properties from one object on to another object: const myObjOne = { a: 'a', b: 'b' }; const myObjTwo = { ...myObjOne }: This has the effect of placing all properties on myObjOne onto myObjTwo, such that myObjTwo is now { a: 'a', b: 'b' }. We can use this method to override previous properties. Suppose a user wants to update their account: const user = { name: 'John Doe', email: 'john@domain.com', password: ' ', bio: 'Lorem ipsum' }; const updates = { password: ' ', bio: 'Ipsum lorem', email: 'j@domain.com' }; const updatedUser = { ...user, // <- original ...updates // <- updates }; console.log(updatedUser); /* { name: 'John Doe', email: 'j@domain.com', // Updated password: ' ', // Updated bio: 'Ipsum lorem' } */ The same can be performed with arrays: const apollo13Astronauts = ['Jim', 'Jack', 'Fred']; const apollo11Astronauts = ['Neil', 'Buz', 'Michael']; const unionOfAstronauts = [...apollo13Astronauts, ...apollo11Astronauts]; console.log(unionOfAstronauts); // ['Jim', 'Jack', 'Fred', 'Neil', 'Buz, 'Michael']; Notice here that we created a union of both sets (arrays) by spreading the arrays out into a new array. There is a lot more to the Rest/Spread Operator, but it is out of scope for this article. It can be used to attain multiple arguments to a function, for example. If you want to learn more, view the MDN Documentation here. ES6 Async/Await Async/Await is a syntax to ease the pain of promise chaining. The await reserved keyword permits you to “await” the settling of a promise, but it may only be used in functions marked with the async keyword. Suppose I have a function that returns a promise. In a new async function, I can await the result of that promise instead of using .then and .catch. // Returns a promise. const myFunctionThatReturnsAPromise = () => { return new Promise((resolve, reject) => { setTimeout(() => resolve('Hello'), 3000); }); } const myAsyncFunction = async () => { const promiseResolutionResult = await myFunctionThatReturnsAPromise(); console.log(promiseResolutionResult); }; // Writes the log statement after three seconds. myAsyncFunction(); There are a few things to note here. When we use await in an async function, only the resolved value goes into the variable on the left-hand side. If the function rejects, that’s an error that we have to catch, as we’ll see in a moment. Additionally, any function marked async will, by default, return a promise. Let’s suppose I needed to make two API calls, one with the response from the former. Using promises and promise chaining, you might do it this way: const makeAPICall = route => new Promise((resolve, reject) => { console.log(route) resolve(route); }); const main = () => { makeAPICall('/whatever') .then(response => makeAPICall(response + ' second call')) .then(response => console.log(response + ' logged')) .catch(err => console.error(err)) }; main(); // Result: /* /whatever /whatever second call /whatever second call logged */ What’s happening here is that we first call makeAPICall passing to it /whatever, which gets logged the first time. The promise resolves with that value. Then we call makeAPICall again, passing to it /whatever second call, which gets logged, and again, the promise resolves with that new value. Finally, we take that new value /whatever second call which the promise just resolved with, and log it ourselves in the final log, appending on logged at the end. If this doesn’t make sense, you should look into promise chaining. Using async/ await, we can refactor to the following: const main = async () => { const resultOne = await makeAPICall('/whatever'); const resultTwo = await makeAPICall(resultOne + ' second call'); console.log(resultTwo + ' logged'); }; Here is what will happen. The entire function will stop executing at the very first await statement until the promise from the first call to makeAPICall resolves, upon resolution, the resolved value will be placed in resultOne. When that happens, the function will move to the second await statement, again pausing right there for the duration of the promise settling. When the promise resolves, the resolution result will be placed in resultTwo. If the idea about function execution sounds blocking, fear not, it’s still asynchronous, and I’ll discuss why in a minute. This only depicts the “happy” path. In the event that one of the promises reject, we can catch that with try/catch, for if the promise rejects, an error will be thrown — which will be whatever error the promise rejected with. const main = async () => { try { const resultOne = await makeAPICall('/whatever'); const resultTwo = await makeAPICall(resultOne + ' second call'); console.log(resultTwo + ' logged'); } catch (e) { console.log(e) } }; As I said earlier, any function declared async will return a promise. So, if you want to call an async function from another function, you can use normal promises, or await if you declare the calling function async. However, if you want to call an async function from top-level code and await its result, then you’d have to use .then and .catch. For example: const returnNumberOne = async () => 1; returnNumberOne().then(value => console.log(value)); // 1 Or, you could use an Immedieately Invoked Function Expression (IIFE): (async () => { const value = await returnNumberOne(); console.log(value); // 1 })(); When you use await in an async function, the execution of the function will stop at that await statement until the promise settles. However, all other functions are free to proceed with execution, thus no extra CPU resources are allocated nor is the thread ever blocked. I’ll say that again — operations in that specific function at that specific time will stop until the promise settles, but all other functions are free to fire. Consider an HTTP Web Server — on a per-request basis, all functions are free to fire for all users concurrently as requests are made, it’s just that the async/await syntax will provide the illusion that an operation is synchronous and blocking as to make promises easier to work with, but again, everything will remain nice and async. This isn’t all there is to async/ await, but it should help you to grasp the basic principles. Classical OOP Factories We are now going to leave the JavaScript world and enter the Java world. There can come a time when the creation process of an object (in this case, an instance of a class — again, Java) is fairly complex or when we want to have different objects produced based upon a series of parameters. An example might be a function that creates different error objects. A factory is a common design pattern in Object-Oriented Programming and is basically a function that creates objects. To explore this, let us move away from JavaScript into the world of Java. This will make sense to developers who come from a Classical OOP (i.e, not prototypal), statically typed language background. If you are not one such developer, feel free to skip this section. This is a small deviation, and so if following along here interrupts your flow of JavaScript, then again, please skip this section. A common creational pattern, the Factory Pattern permits us to create objects without exposing the required business logic to perform said creation. Suppose we are writing a program that permits us to visualize primitive shapes in n-dimensions. If we provide a cube, for example, we’d see a 2D cube (a square), a 3D cube (a cube), and a 4D cube (a Tesseract, or Hypercube). Here is how this might be done, trivially, and barring the actual drawing part, in Java. // Main.java // Defining an interface for the shape (can be used as a base type) interface IShape { void draw(); } // Implementing the interface for 2-dimensions: class TwoDimensions implements IShape { @Override public void draw() { System.out.println("Drawing a shape in 2D."); } } // Implementing the interface for 3-dimensions: class ThreeDimensions implements IShape { @Override public void draw() { System.out.println("Drawing a shape in 3D."); } } // Implementing the interface for 4-dimensions: class FourDimensions implements IShape { @Override public void draw() { System.out.println("Drawing a shape in 4D."); } } // Handles object creation class ShapeFactory { // Factory method (notice return type is the base interface) public IShape createShape(int dimensions) { switch(dimensions) { case 2: return new TwoDimensions(); case 3: return new ThreeDimensions(); case 4: return new FourDimensions(); default: throw new IllegalArgumentException("Invalid dimension."); } } } // Main class and entry point. public class Main { public static void main(String[] args) throws Exception { ShapeFactory shapeFactory = new ShapeFactory(); IShape fourDimensions = shapeFactory.createShape(4); fourDimensions.draw(); // Drawing a shape in 4D. } } As you can see, we define an interface that specifies a method for drawing a shape. By having the different classes implement the interface, we can guarantee that all shapes can be drawn (for they all must have an overridable draw method as per the interface definition). Considering this shape is drawn differently depending upon the dimensions within which it’s viewed, we define helper classes that implement the interface as to perform the GPU intensive work of simulating n-dimensional rendering. ShapeFactory does the work of instantiating the correct class — the createShape method is a factory, and like the definition above, it is a method that returns an object of a class. The return type of createShape is the IShape interface because the IShape interface is the base type of all shapes (because they have a draw method). This Java example is fairly trivial, but you can easily see how useful it becomes in larger applications where the setup to create an object might not be so simple. An example of this would be a video game. Suppose the user has to survive different enemies. Abstract classes and interfaces might be used to define core functions available to all enemies (and methods that can be overridden), perhaps employing the delegation pattern (favor composition over inheritance as the Gang of Four suggested so you don’t get locked into extending a single base class and to make testing/mocking/DI easier). For enemy objects instantiated in different ways, the interface would permit factory object creation while relying on the generic interface type. This would be very relevant if the enemy was created dynamically. Another example is a builder function. Suppose we utilize the Delegation Pattern to have a class delegate work to other classes that honor an interface. We could place a static build method on the class to have it construct its own instance (assuming you were not using a Dependency Injection Container/Framework). Instead of having to call each setter, you can do this: public class User { private IMessagingService msgService; private String name; private int age; public User(String name, int age, IMessagingService msgService) { this.name = name; this.age = age; this.msgService = msgService; } public static User build(String name, int age) { return new User(name, age, new SomeMessageService()); } } I’ll be explaining the Delegation Pattern in a later article if you’re not familiar with it — basically, through Composition and in terms of object-modeling, it creates a “has-a” relationship instead of an “is-a” relationship as you’d get with inheritance. If you have a Mammal class and a Dog class, and Dog extends Mammal, then a Dog is-a Mammal. Whereas, if you had a Bark class, and you just passed instances of Bark into the constructor of Dog, then Dog has-a Bark. As you might imagine, this especially makes unit testing easier, for you can inject mocks and assert facts about the mock as long as mock honors the interface contract in the testing environment. The static “build” factory method above simply creates a new object of User and passes a concrete MessageService in. Notice how this follows from the definition above — not exposing the business logic to create an object of a class, or, in this case, not exposing the creation of the messaging service to the caller of the factory. Again, this is not necessarily how you would do things in the real world, but it presents the idea of a factory function/method quite well. We might use a Dependency Injection container instead, for example. Now back to JavaScript. Starting With Express Express is a Web Application Framework for Node (available via an NPM Module) that permits one to create an HTTP Web Server. It’s important to note that Express is not the only framework to do this (there exists Koa, Fastify, etc.), and that, as seen in the previous article, Node can function without Express as a stand-alone entity. (Express is merely a module that was designed for Node — Node can do many things without it, although Express is popular for Web Servers). Again, let me make a very important distinction. There is a dichotomy present between Node/JavaScript and Express. Node, the runtime/environment within which you run JavaScript, can do many things — such as permitting you to build React Native apps, desktop apps, command-line tools, etc. — Express is nothing but a lightweight framework that permits you to use Node/JS to build web servers as opposed to dealing with Node’s low-level network and HTTP APIs. You don’t need Express to build a web server. Before starting this section, if you are not familiar with HTTP and HTTP Requests (GET, POST, etc.), then I encourage you to read the corresponding section of my former article, which is linked above. Using Express, we’ll set up different routes to which HTTP Requests may be made, as well as the related endpoints (which are callback functions) that will fire when a request is made to that route. Don’t worry if routes and endpoints are currently non-sensical — I’ll be explaining them later. Unlike other articles, I’ll take the approach of writing the source code as we go, line-by-line, rather than dumping the entire codebase into one snippet and then explaining later. Let’s begin by opening a terminal (I’m using Terminus on top of Git Bash on Windows — which is a nice option for Windows users who want a Bash Shell without setting up the Linux Subsystem), setting up our project’s boilerplate, and opening it in Visual Studio Code. mkdir server && cd server touch server.js npm init -y npm install express code . Inside the server.js file, I’ll begin by requiring express using the require() function. const express = require('express'); require('express') tells Node to go out and get the Express module we installed earlier, which is currently inside the node_modules folder (for that’s what npm install does — create a node_modules folder and puts modules and their dependencies in there). By convention, and when dealing with Express, we call the variable that holds the return result from require('express') express, although it may be called anything. This returned result, which we have called express, is actually a function — a function we’ll have to invoke to create our Express app and set up our routes. Again, by convention, we call this app — app being the return result of express() — that is, the return result of calling the function that has the name express as express(). const express = require('express'); const app = express(); // Note that the above variable names are the convention, but not required. // An example such as that below could also be used. const foo = require('express'); const bar = foo(); // Note also that the node module we installed is called express. The line const app = express(); simply puts a new Express Application inside of the app variable. It calls a function named express (the return result of require('express')) and stores its return result in a constant named app. If you come from an object-oriented programming background, consider this equivalent to instantiating a new object of a class, where app would be the object and where express() would call the constructor function of the express class. Remember, JavaScript allows us to store functions in variables — functions are first-class citizens. The express variable, then, is nothing more than a mere function. It’s provided to us by the developers of Express. I apologize in advance if I’m taking a very long time to discuss what is actually very basic, but the above, although primitive, confused me quite a lot when I was first learning back-end development with Node. Inside the Express source code, which is open-source on GitHub, the variable we called express is a function entitled createApplication, which, when invoked, performs the work necessary to create an Express Application: A snippet of Express source code: exports = module.exports = createApplication; /* * Create an express application */ // This is the function we are storing in the express variable. (- Jamie) function createApplication() { // This is what I mean by "Express App" (- Jamie) var app = function(req, res, next) { app.handle(req, res, next); }; mixin(app, EventEmitter.prototype, false); mixin(app, proto, false); // expose the prototype that will get set on requests app.request = Object.create(req, { app: { configurable: true, enumerable: true, writable: true, value: app } }) // expose the prototype that will get set on responses app.response = Object.create(res, { app: { configurable: true, enumerable: true, writable: true, value: app } }) app.init(); // See - `app` gets returned. (- Jamie) return app; } GitHub: With that short deviation complete, let’s continue setting up Express. Thus far, we have required the module and set up our app variable. const express = require('express'); const app = express(); From here, we have to tell Express to listen on a port. Any HTTP Requests made to the URL and Port upon which our application is listening will be handled by Express. We do that by calling app.listen(...), passing to it the port and a callback function which gets called when the server starts running: const PORT = 3000; app.listen(PORT, () => console.log(`Server is up on port {PORT}.`)); We notate the PORT variable in capital by convention, for it is a constant variable that will never change. You could do that with all variables that you declare const, but that would look messy. It’s up to the developer or development team to decide on notation, so we’ll use the above sparsely. I use const everywhere as a method of “defensive coding” — that is, if I know that a variable is never going to change then I might as well just declare it const. Since I define everything const, I make the distinction between what variables should remain the same on a per-request basis and what variables are true actual global constants. Here is what we have thus far: const express = require('express'); const app = express(); const PORT = 3000; // We will build our API here. // ... // Binding our application to port 3000. app.listen(PORT, () => { console.log(`Server is up on port ${PORT}.`); }); Let’s test this to see if the server starts running on port 3000. I’ll open a terminal and navigate to our project’s root directory. I’ll then run node server/server.js. Note that this assumes you have Node already installed on your system (You can check with node -v). If everything works, you should see the following in the terminal: Server is up on port 3000. Go ahead and hit Ctrl + C to bring the server back down. If this doesn’t work for you, or if you see an error such as EADDRINUSE, then it means you may have a service already running on port 3000. Pick another port number, like 3001, 3002, 5000, 8000, etc. Be aware, lower number ports are reserved and there is an upper bound of 65535. At this point, it’s worth taking another small deviation as to understand servers and ports in the context of computer networking. We’ll return to Express in a moment. I take this approach, rather than introducing servers and ports first, for the purpose of relevance. That is, it is difficult to learn a concept if you fail to see its applicability. In this way, you are already aware of the use case for ports and servers with Express, so the learning experience will be more pleasurable. A Brief Look At Servers And Ports A server is simply a computer or computer program that provides some sort of “functionality” to the clients that talk to it. More generally, it’s a device, usually connected to the Internet, that handles connections in a pre-defined manner. In our case, that “pre-defined manner” will be HTTP or the HyperText Transfer Protocol. Servers that use the HTTP Protocol are called Web Servers. When building an application, the server is a critical component of the “client-server model”, for it permits the sharing and syncing of data (generally via databases or file systems) across devices. It’s a cross-platform approach, in a way, for the SDKs of platforms against which you may want to code — be they web, mobile, or desktop — all provide methods (APIs) to interact with a server over HTTP or TCP/UDP Sockets. It’s important to make a distinction here — by APIs, I mean programming language constructs to talk to a server, like XMLHttpRequest or the Fetch API in JavaScript, or HttpUrlConnection in Java, or even HttpClient in C#/.NET. This is different from the kind of REST API we’ll be building in this article to perform CRUD Operations on a database. To talk about ports, it’s important to understand how clients connect to a server. A client requires the IP Address of the server and the Port Number of our specific service on that server. An IP Address, or Internet Protocol Address, is just an address that uniquely identifies a device on a network. Public and private IPs exist, with private addresses commonly used behind a router or Network Address Translator on a local network. You might see private IP Addresses of the form 192.168.XXX.XXX or 10.0.XXX.XXX. When articulating an IP Address, decimals are called “dots”. So 192.168.0.1 (a common router IP Addr.) might be pronounced, “one nine two dot one six eight dot zero dot one”. (By the way, if you’re ever in a hotel and your phone/laptop won’t direct you to the AP captive portal, try typing 192.168.0.1 or 192.168.1.1 or similar directly into Chrome). For simplicity, and since this is not an article about the complexities of computer networking, assume that an IP Address is equivalent to a house address, allowing you to uniquely identify a house (where a house is analogous to a server, client, or network device) in a neighborhood. One neighborhood is one network. Put together all of the neighborhoods in the United States, and you have the public Internet. (This is a basic view, and there are many more complexities — firewalls, NATs, ISP Tiers (Tier One, Tier Two, and Tier Three), fiber optics and fiber optic backbones, packet switches, hops, hubs, etc., subnet masks, etc., to name just a few — in the real networking world.) The traceroute Unix command can provide more insight into the above, displaying the path (and associated latency) that packets take through a network as a series of “hops”. A Port Number identifies a specific service running on a server. SSH, or Secure Shell, which permits remote shell access to a device, commonly runs on port 22. FTP or File Transfer Protocol (which might, for example, be used with an FTP Client to transfer static assets to a server) commonly runs on Port 21. We might say, then, that ports are specific rooms inside each house in our analogy above, for rooms in houses are made for different things — a bedroom for sleeping, a kitchen for food preparation, a dining room for consumption of said food, etc., just like ports correspond to programs that perform specific services. For us, Web Servers commonly run on Port 80, although you are free to specify whichever Port Number you wish as long they are not in use by some other service (they can’t collide). In order to access a website, you need the IP Address of the site. Despite that, we normally access websites via a URL. Behind the scenes, a DNS, or Domain Name Server, converts that URL into an IP Address, allowing the browser to make a GET Request to the server, get the HTML, and render it to the screen. 8.8.8.8 is the address of one of Google’s Public DNS Servers. You might imagine that requiring the resolution of a hostname to an IP Address via a remote DNS Server will take time, and you’d be right. To reduce latency, Operating Systems have a DNS Cache — a temporary database that stores DNS lookup information, thereby reducing the frequency of which said lookups must occur. The DNS Resolver Cache can be viewed on Windows with the ipconfig /displaydns CMD command and purged via the ipconfig /flushdns command. On a Unix Server, more common lower number ports, like 80, require root level (escalated if you come from a Windows background) privileges. For that reason, we’ll be using port 3000 for our development work, but will allow the server to choose the port number (whatever is available) when we deploy to our production environment. Finally, note that we can type IP Addresses directly in Google Chrome’s search bar, thus bypassing the DNS Resolution mechanism. Typing 216.58.194.36, for example, will take you to Google.com. In our development environment, when using our own computer as our dev server, we’ll be using localhost and port 3000. An address is formatted as hostname:port, so our server will be up on localhost:3000. Localhost, or 127.0.0.1, is the loopback address, and means the address of “this computer”. It is a hostname, and its IPv4 address resolves to 127.0.0.1. Try pinging localhost on your machine right now. You might get ::1 back — which is the IPv6 loopback address, or 127.0.0.1 back — which is the IPv4 loopback address. IPv4 and IPv6 are two different IP Address formats associated with different standards — some IPv6 addresses can be converted to IPv4 but not all. Returning To Express I mentioned HTTP Requests, Verbs, and Status Codes in my previous article, Get Started With Node: An Introduction To APIs, HTTP And ES6+ JavaScript. If you do not have a general understanding of the protocol, feel free to jump to the “HTTP and HTTP Requests” section of that piece. In order to get a feel for Express, we are simply going to set up our endpoints for the four fundamental operations we’ll be performing on the database — Create, Read, Update, and Delete, known collectively as CRUD. Remember, we access endpoints by routes in the URL. That is, although the words “route” and “endpoint” are commonly used interchangeably, an endpoint is technically a programming language function (like ES6 Arrow Functions) that performs some server-side operation, while a route is what the endpoint is located behind of. We specify these endpoints as callback functions, which Express will fire when the appropriate request is made from the client to the route behind which the endpoint lives. You can remember the above by realizing that it is endpoints that perform a function and the route is the name that is used to access the endpoints. As we’ll see, the same route can be associated with multiple endpoints by using different HTTP Verbs (similar to method overloading if you come from a classical OOP background with Polymorphism). Keep in mind, we are following REST (REpresentational State Transfer) Architecture by permitting clients to make requests to our server. This is, after all, a REST or RESTful API. Specific requests made to specific routes will fire specific endpoints which will do specific things. An example of such a “thing” that an endpoint might do is adding new data to a database, removing data, updating data, etc. Express knows what endpoint to fire because we tell it, explicitly, the request method (GET, POST, etc.) and the route — we define what functions to fire for specific combinations of the above, and the client makes the request, specifying a route and method. To put this more simply, with Node, we’ll tell Express — “Hey, if someone makes a GET Request to this route, then go ahead and fire this function (use this endpoint)”. Things can get more complicated: “Express, if someone makes a GET Request to this route, but they don’t send up a valid Authorization Bearer Token in the header of their request, then please respond with an HTTP 401 Unauthorized. If they do possess a valid Bearer Token, then please send down whatever protected resource they were looking for by firing the endpoint. Thanks very much and have a nice day.” Indeed, it’d be nice if programming languages could be that high level without leaking ambiguity, but it nonetheless demonstrates the basic concepts. Remember, the endpoint, in a way, lives behind the route. So it’s imperative that the client provides, in the header of the request, what method it wants to use so that Express can figure out what to do. The request will be made to a specific route, which the client will specify (along with the request type) when contacting the server, allowing Express to do what it needs to do and us to do what we need to do when Express fires our callbacks. That’s what it all comes down to. In the code examples earlier, we called the listen function which was available on app, passing to it a port and callback. app itself, if you remember, is the return result from calling the express variable as a function (that is, express()), and the express variable is what we named the return result from requiring 'express' from our node_modules folder. Just like listen is called on app, we specify HTTP Request Endpoints by calling them on app. Let’s look at GET: app.get('/my-test-route', () => { // ... }); The first parameter is a string, and it is the route behind which the endpoint will live. The callback function is the endpoint. I’ll say that again: the callback function — the second parameter — is the endpoint that will fire when an HTTP GET Request is made to whatever route we specify as the first argument ( /my-test-route in this case). Now, before we do any more work with Express, we need to know how routes work. The route we specify as a string will be called by making the request to. In our case, the domain is localhost:3000, which means, in order to fire the callback function above, we have to make a GET Request to localhost:3000/my-test-route. If we used a different string as the first argument above, the URL would have to be different to match what we specified in JavaScript. When talking about such things, you’ll likely hear of Glob Patterns. We could say that all of our API’s routes are located at the localhost:3000/** Glob Pattern, where ** is a wildcard meaning any directory or sub-directory (note that routes are not directories) to which root is a parent — that is, everything. Let’s go ahead and add a log statement into that callback function so that altogether we have: // Getting the module from node_modules. const express = require('express'); // Creating our Express Application. const app = express(); //}.`) }); We’ll get our server up and running by executing node server/server.js (with Node installed on our system and accessible globally from system environment variables) in the project’s root directory. Like earlier, you should see the message that the server is up in the console. Now that the server is running, open a browser, and visit localhost:3000 in the URL bar. You should be greeted with an error message that states Cannot GET /. Press Ctrl + Shift + I on Windows in Chrome to view the developer console. In there, you should see that we have a 404 (Resource not found). That makes sense — we have only told the server what to do when someone visits localhost:3000/my-test-route. The browser has nothing to render at localhost:3000 (which is equivalent to localhost:3000/ with a slash). If you look at the terminal window where the server is running, there should be no new data. Now, visit localhost:3000/my-test-route in your browser’s URL bar. You might see the same error in Chrome’s Console (because the browser is caching the content and still has no HTML to render), but if you view your terminal where the server process is running, you’ll see that the callback function did indeed fire and the log message was indeed logged. Shut down the server with Ctrl + C. Now, let’s give the browser something to render when a GET Request is made to that route so we can lose the Cannot GET / message. I’m going to take our app.get() from earlier, and in the callback function, I’m going to add two arguments. Remember, the callback function we are passing in is getting called by Express behind the scenes, and Express can add whatever arguments it wants. It actually adds two (well, technically three, but we’ll see that later), and while they are both extremely important, we don’t care about the first one for now. The second argument is called res, short for response, and I’ll access it by setting undefined as the first parameter: app.get('/my-test-route', (undefined, res) => { console.log('A GET Request was made to /my-test-route.'); }); Again, we can call the res argument whatever we want, but res is convention when dealing with Express. res is actually an object, and upon it exist different methods for sending data back to the client. In this case, I’m going to access the send(...) function available on res to send back HTML which the browser will render. We are not limited to sending back HTML, however, and can choose to send back text, a JavaScript Object, a stream (streams are especially beautiful), or whatever. app.get('/my-test-route', (undefined, res) => { console.log('A GET Request was made to /my-test-route.'); res.send('<h1>Hello, World!</h1>'); }); If you shut down the server and then bring it back up, and then refresh your browser at the /my-test-route route, you’ll see the HTML get rendered. The Network Tab of the Chrome Developer Tools will allow you to see this GET Request with more detail as it pertains to headers. At this point, it’ll serve us well to start learning about Express Middleware — functions that can be fired globally after a client makes a request. Express Middleware Express provides methods by which to define custom middleware for your application. Indeed, the meaning of Express Middleware is best defined in the Express Docs, here) function in the stack. In other words, a middleware function is a custom function that we (the developer) can define, and that will act as an intermediary between when Express receives the request and when our appropriate callback function fires. We might make a log function, for example, that will log every time a request is made. Note that we can also choose to make these middleware functions fire after our endpoint has fired, depending upon where you place it in the stack — something we’ll see later. In order to specify custom middleware, we have to define it as a function and pass it into app.use(...). const myMiddleware = (req, res, next) => { console.log(`Middleware has fired at time ${Date().now}`); next(); } app.use(myMiddleware); // This is the app variable returned from express(). All together, we now have: // Getting the module from node_modules. const express = require('express'); // Creating our Express Application. const app = express(); // Our middleware function. const myMiddleware = (req, res, next) => { console.log(`Middleware has fired at time ${Date().now}`); next(); } // Tell Express to use the middleware. app.use(myMiddleware); //}.`) }); If you make the requests through the browser again, you should now see that your middleware function is firing and logging timestamps. To foster experimentation, try removing the call to the next function and see what happens. The middleware callback function gets called with three arguments, req, res, and req is the parameter we skipped over when building out the GET Handler earlier, and it is an object containing information regarding the request, such as headers, custom headers, parameters, and any body that might have been sent up from the client (such as you do with a POST Request). I know we are talking about middleware here, but both the endpoints and the middleware function get called with req and res. req and res will be the same (unless one or the other mutates it) in both the middleware and the endpoint within the scope of a single request from the client. That means, for example, you could use a middleware function to sanitize data by stripping any characters that might be aimed at performing SQL or NoSQL Injections, and then handing the safe req to the endpoint. res, as seen earlier, permits you to send data back to the client in a handful of different ways. next is a callback function that you have to execute when the middleware has finished doing its job in order to call the next middleware function in the stack or the endpoint. Be sure to take note that you will have to call this in the then block of any async functions you fire in the middleware. Depending on your async operation, you may or may not want to call it in the catch block. That is, the myMiddleware function fires after the request is made from the client but before the endpoint function of the request is fired. When we execute this code and make a request, you should see the Middleware has fired... message before the A GET Request was made to... message in the console. If you don’t call next(), the latter part will never run — your endpoint function to the request will not fire. Note also that I could have defined this function anonymously, as such (a convention to which I’ll be sticking): app.use((req, res, next) => { console.log(`Middleware has fired at time ${Date().now}`); next(); }); For anyone new to JavaScript and ES6, if the way in which the above works does not make immediate sense, the below example should help. We are simply defining a callback function (the anonymous function) which takes another callback function ( next) as an argument. We call a function that takes a function argument a Higher Order Function. Look at it the below way — it depicts a basic example of how the Express Source Code might work behind the scenes: console.log('Suppose a request has just been made from the client.\n'); // This is what (it’s not exactly) the code behind app.use() might look like. const use = callback => { // Simple log statement to see where we are. console.log('Inside use() - the "use" function has been called.'); // This depicts the termination of the middleware. const next = () => console.log('Terminating Middleware!\n'); // Suppose req and res are defined above (Express provides them). const req = res = null; // "callback" is the "middleware" function that is passed into "use". // "next" is the above function that pretends to stop the middleware. callback(req, res, next); }; // This is analogous to the middleware function we defined earlier. // It gets passed in as "callback" in the "use" function above. const myMiddleware = (req, res, next) => { console.log('Inside the myMiddleware function!'); next(); } // Here, we are actually calling "use()" to see everything work. use(myMiddleware); console.log('Moving on to actually handle the HTTP Request or the next middleware function.'); We first call use which takes myMiddleware as an argument. myMiddleware, in and of itself, is a function which takes three arguments - req, res, and next. Inside use, myMiddlware is called, and those three arguments are passed in. next is a function defined in use. myMiddleware is defined as callback in the use method. If I’d placed use, in this example, on an object called app, we could have mimicked Express’s setup entirely, albeit without any sockets or network connectivity. In this case, both myMiddleware and callback are Higher Order Functions, because they both take functions as arguments. If you execute this code, you will see the following response: Suppose a request has just been made from the client. Inside use() - the "use" function has been called. Inside the middleware function! Terminating Middleware! Moving on to actually handle the HTTP Request or the next middleware function. Note that I could have also used anonymous functions to achieve the same result: console.log('Suppose a request has just been made from the client.'); // This is what (it’s not exactly) the code behind app.use() might look like. const use = callback => { // Simple log statement to see where we are. console.log('Inside use() - the "use" function has been called.'); // This depicts the termination of the middlewear. const next = () => console.log('Terminating Middlewear!'); // Suppose req and res are defined above (Express provides them). const req = res = null; // "callback" is the function which is passed into "use". // "next" is the above function that pretends to stop the middlewear. callback(req, res, () => { console.log('Terminating Middlewear!'); }); }; // Here, we are actually calling "use()" to see everything work. use((req, res, next) => { console.log('Inside the middlewear function!'); next(); }); console.log('Moving on to actually handle the HTTP Request.'); With that hopefully settled, we can now return to the actual task at hand — setting up our middleware. The fact of the matter is, you will commonly have to send data up through an HTTP Request. You have a few different options for doing so — sending up URL Query Parameters, sending up data that will be accessible on the req object that we learned about earlier, etc. That object is not only available in the callback to calling app.use(), but also to any endpoint. We used undefined as a filler earlier so we could focus on res to send HTML back to the client, but now, we need access to it. app.use('/my-test-route', (req, res) => { // The req object contains client-defined data that is sent up. // The res object allows the server to send data back down. }); HTTP POST Requests might require that we send a body object up to the server. If you have a form on the client, and you take the user’s name and email, you will likely send that data to the server on the body of the request. Let’s take a look at what that might look like on the client side: <!DOCTYPE html> <html> <body> <form action="" method="POST" > <input type="text" name="nameInput"> <input type="email" name="emailInput"> <input type="submit"> </form> </body> </html> On the server side: app.post('/email-list', (req, res) => { // What do we now? // How do we access the values for the user’s name and email? }); To access the user’s name and email, we’ll have to use a particular type of middleware. This will put the data on an object called body available on req. Body Parser was a popular method of doing this, available by the Express developers as a standalone NPM module. Now, Express comes pre-packaged with its own middleware to do this, and we’ll call it as so: app.use(express.urlencoded({ extended: true })); Now we can do: app.post('/email-list', (req, res) => { console.log('User Name: ', req.body.nameInput); console.log('User Email: ', req.body.emailInput); }); All this does is take any user-defined input which is sent up from the client, and makes them available on the body object of req. Note that on req.body, we now have nameInput and input tags in the HTML. Now, this client-defined data should be considered dangerous (never, never trust the client), and needs to be sanitized, but we’ll cover that later. Another type of middleware provided by express is express.json(). express.json is used to package any JSON Payloads sent up in a request from the client onto req.body, while express.urlencoded will package any incoming requests with strings, arrays, or other URL Encoded data onto req.body. In short, both manipulate req.body, but .json() is for JSON Payloads and .urlencoded() is for, among others, POST Query Parameters. Another way of saying this is that incoming requests with a Content-Type: application/json header (such as specifying a POST Body with the fetch API) will be handled by express.json(), while requests with header Content-Type: application/x-www-form-urlencoded (such as HTML Forms) will be handled with express.urlencoded(). This hopefully now makes sense. Starting Our CRUD Routes For MongoDB Note: When performing PATCH Requests in this article, we won’t follow the JSONPatch RFC Spec — an issue we’ll rectify in the next article of this series. Considering that we understand that we specify each endpoint by calling the relevant function on app, passing to it the route and a callback function containing the request and response objects, we can begin to define our CRUD Routes for the Bookshelf API. Indeed, and considering this is an introductory article, I won’t be taking care to follow HTTP and REST specifications completely, nor will I attempt to use the cleanest possible architecture. That will come in a future article. I’ll open up the server.js file that we have been using thus far and empty everything out as to start from the below clean slate: //. // ... // Binding our application to port 3000. app.listen(PORT, () => console.log(`Server is up on port ${PORT}.`)); Consider all following code to take up the // ... portion of the file above. To define our endpoints, and because we are building a REST API, we should discuss the proper way to name routes. Again, you should take a look at the HTTP section of my former article for more information. We are dealing with books, so all routes will be located behind /books (the plural naming convention is standard). As you can see, an ID does not need to be specified when POSTing a book because we’ll (or rather, MongoDB), will be generating it for us, automatically, server-side. GETting, PATCHing, and DELETing books will all require that we do pass that ID to our endpoint, which we’ll discuss later. For now, let’s simply create the endpoints: //}`); }); The :id syntax tells Express that id is a dynamic parameter that will be passed up in the URL. We have access to it on the params object which is available on req. I know “we have access to it on req” sounds like magic and magic (which doesn’t exist) is dangerous in programming, but you have to remember that Express is not a black box. It’s an open-source project available on GitHub under an MIT LIcense. You can easily view it’s source code if you want to see how dynamic query parameters are put onto the req object. All together, we now have the following in our server.js file: //}.`)); Go ahead and start the server, running node server.js from the terminal or command line, and visit your browser. Open the Chrome Development Console, and in the URL (Uniform Resource Locator) Bar, visit localhost:3000/books. You should already see the indicator in your OS’s terminal that the server is up as well as the log statement for GET. Thus far, we’ve been using a web browser to perform GET Requests. That is good for just starting out, but we’ll quickly find that better tools exist to test API routes. Indeed, we could paste fetch calls directly into the console or use some online service. In our case, and to save time, we’ll use cURL and Postman. I use both in this article (although you could use either or) so that I can introduce them for if you haven’t used them. cURL is a library (a very, very important library) and command-line tool designed to transfer data using various protocols. Postman is a GUI based tool for testing APIs. After following the relevant installation instructions for both tools on your operating system, ensure your server is still running, and then execute the following commands (one-by-one) in a new terminal. It’s important that you type them and execute them individually, and then watch the log message in the separate terminal from your server. Also, note that the standard programming language comment symbol // is not a valid symbol in Bash or MS-DOS. You’ll have to omit those lines, and I only use them here to describe each block of cURL commands. // HTTP POST Request (Localhost, IPv4, IPv6) curl -X POST curl -X POST curl -X POST http://[::1]:3000/books // HTTP GET Request (Localhost, IPv4, IPv6) curl -X GET curl -X GET curl -X GET http://[::1]:3000/books/book-abc123 // HTTP PATCH Request (Localhost, IPv4, IPv6) curl -X PATCH curl -X PATCH curl -X PATCH http://[::1]:3000/books/some-id // HTTP DELETE Request (Localhost, IPv4, IPv6) curl -X DELETE curl -X DELETE curl -X DELETE http://[::1]:3000/books/217 As you can see, the ID that is passed in as a URL Parameter can be any value. The -X flag specifies the type of HTTP Request (it can be omitted for GET), and we provide the URL to which the request will be made thereafter. I’ve duplicated each request three times, allowing you to see that everything still works whether you use the localhost hostname, the IPv4 Address ( 127.0.0.1) to which localhost resolves, or the IPv6 Address ( ::1) to which localhost resolves. Note that cURL requires wrapping IPv6 Addresses in square brackets. We are in a decent place now — we have the simple structure of our routes and endpoints set up. The server runs correctly and accepts HTTP Requests as we expect it to. Contrary to what you might expect, there is not long to go at this point — we just have to set up our database, host it (using a Database-as-a-Service — MongoDB Atlas), and persist data to it (and perform validation and create error responses). Setting Up A Production MongoDB Database To set up a production database, we’ll head over to the MongoDB Atlas Home Page and sign up for a free account. Thereafter, create a new cluster. You can maintain the default settings, picking a fee tier applicable region. Then hit the “Create Cluster” button. The cluster will take some time to create, and then you’ll be able to attain your database URL and password. Take note of these when you see them. We’ll hardcode them for now, and then store them in environment variables later for security purposes. For help in creating and connecting to a cluster, I’ll refer you to the MongoDB Documentation, particularly this page and this page, or you can leave a comment below and I’ll try to help. Creating A Mongoose Model It’s recommended that you have an understanding of the meanings of Documents and Collections in the context of NoSQL (Not Only SQL — Structured Query Language). For reference, you might want to read both the Mongoose Quick Start Guide and the MongoDB section of my former article. We now have a database that is ready to accept CRUD Operations. Mongoose is a Node module (or ODM — Object Document Mapper) that will allow us to perform those operations (abstracting away some of the complexities) as well as set up the schema, or structure, of the database collection. As an important disclaimer, there is a lot of controversy around ORMs and such patterns as Active Record or Data Mapper. Some developers swear by ORMs and others swear against them (believing they get in the way). It’s also important to note that ORMs abstract a lot away like connection pooling, socket connections, and handling, etc. You could easily use the MongoDB Native Driver (another NPM Module), but it would talk a lot more work. While it’s recommended that you play with the Native Driver before using ORMs, I omit the Native Driver here for brevity. For complex SQL operations on a Relational Database, not all ORMs will be optimized for query speed, and you may end up writing your own raw SQL. ORMs can come into play a lot with Domain-Driven Design and CQRS, among others. They are an established concept in the .NET world, and the Node.js community has not completely caught up yet — TypeORM is better, but it’s not NHibernate or Entity Framework. To create our Model, I’ll create a new folder in the server directory entitled models, within which I’ll create a single file with the name book.js. Thus far, our project’s directory structure is as follows: - server - node_modules - models - book.js - package.json - server.js Indeed, this directory structure is not required, but I use it here because it’s simple. Allow me to note that this is not at all the kind of architecture you want to use for larger applications (and you might not even want to use JavaScript — TypeScript could be a better option), which I discuss in this article’s closing. The next step will be to install mongoose, which is performed via, as you might expect, npm i mongoose. The meaning of a Model is best ascertained from the Mongoose documentation: Models are fancy constructors compiled from Schemadefinitions. An instance of a model is called a document. Models are responsible for creating and reading documents from the underlying MongoDB database. Before creating the Model, we’ll define its Schema. A Schema will, among others, make certain expectations about the value of the properties provided. MongoDB is schemaless, and thus this functionality is provided by the Mongoose ODM. Let’s start with a simple example. Suppose I want my database to store a user’s name, email address, and password. Traditionally, as a plain old JavaScript Object (POJO), such a structure might look like this: const userDocument = { name: 'Jamie Corkhill', email: 'jamie@domain.com', password: 'Bcrypt Hash' }; If that above object was how we expected our user’s object to look, then we would need to define a schema for it, like this: const schema = { name: { type: String, trim: true, required: true }, email: { type: String, trim: true, required: true }, password: { type: String, required: true } }; Notice that when creating our schema, we define what properties will be available on each document in the collection as an object in the schema. In our case, that’s name, password. The fields type, trim, required tell Mongoose what data to expect. If we try to set the name field to a number, for example, or if we don’t provide a field, Mongoose will throw an error (because we are expecting a type of String), and we can send back a 400 Bad Request to the client. This might not make sense right now because we have defined an arbitrary schema object. However, the fields of type, trim, and required (among others) are special validators that Mongoose understands. trim, for example, will remove any whitespace from the beginning and end of the string. We’ll pass the above schema to mongoose.Schema() in the future and that function will know what to do with the validators. Understanding how Schemas work, we’ll create the model for our Books Collection of the Bookshelf API. Let’s define what data we require: Title ISBN Number Author Publishing Date Finished Reading (Boolean) I’m going to create this in the book.js file we created earlier in /models. Like the example above, we’ll be performing validation: const mongoose = require('mongoose'); // Define the schema: const mySchema = { title: { type: String, required: true, trim: true, }, isbn: { type: String, required: true, trim: true, }, author: { firstName:{ type: String, required: true, trim: true }, lastName: { type: String, required: true, trim: true } }, publishingDate: { type: String }, finishedReading: { type: Boolean, required: true, default: false } } default will set a default value for the property if none is provided — finishedReading for example, although a required field, will be set automatically to false if the client does not send one up. Mongoose also provides the ability to perform custom validation on our fields, which is done by supplying the validate() method, which attains the value that was attempted to be set as its one and only parameter. In this function, we can throw an error if the validation fails. Here is an example: // ... isbn: { type: String, required: true, trim: true, validate(value) { if (!validator.isISBN(value)) { throw new Error('ISBN is invalid.'); } } } // ... Now, if anyone supplies an invalid ISBN to our model, Mongoose will throw an error when trying to save that document to the collection. I’ve already installed the NPM module validator via npm i validator and required it. validator contains a bunch of helper functions for common validation requirements, and I use it here instead of RegEx because ISBNs can’t be validated with RegEx alone due to a tailing checksum. Remember, users will be sending a JSON body to one of our POST routes. That endpoint will catch any errors (such as an invalid ISBN) when attempting to save, and if one is thrown, it’ll return a blank response with an HTTP 400 Bad Request status — we haven’t yet added that functionality. Finally, we have to define our schema of earlier as the schema for our model, so I’ll make a call to mongoose.Schema() passing in that schema: const bookSchema = mongoose.Schema(mySchema); To make things more precise and clean, I’ll replace the mySchema variable with the actual object all on one line: } }); Let’s take a final moment to discuss this schema. We are saying that each of our documents will consist of a title, an ISBN, an author with a first and last name, a publishing date, and a finishedReading boolean. titlewill be of type String, it’s a required field, and we’ll trim any whitespace. isbnwill be of type String, it’s a required field, it must match the validator, and we’ll trim any whitespace. authoris of type objectcontaining a required, trimmed, stringfirstName and a required, trimmed, stringlastName. publishingDateis of type String (although we could make it of type Dateor Numberfor a Unix timestamp. finishedReadingis a required booleanthat will default to falseif not provided. With our bookSchema defined, Mongoose knows what data and what fields to expect within each document to the collection that stores books. However, how do we tell it what collection that specific schema defines? We could have hundreds of collections, so how do we correlate, or tie, bookSchema to the Book collection? The answer, as seen earlier, is with the use of models. We’ll use bookSchema to create a model, and that model will model the data to be stored in the Book collection, which will be created by Mongoose automatically. Append the following lines to the end of the file: const Book = mongoose.model('Book', bookSchema); module.exports = Book; As you can see, we have created a model, the name of which is Book (— the first parameter to mongoose.model()), and also provided the ruleset, or schema, to which all data is saved in the Book collection will have to abide. We export this model as a default export, allowing us to require the file for our endpoints to access. Book is the object upon which we’ll call all of the required functions to Create, Read, Update, and Delete data which are provided by Mongoose. Altogether, our book.js file should look as follows: const mongoose = require('mongoose'); const validator = require('validator'); // Define the schema. } }); // Create the "Book" model of name Book with schema bookSchema. const Book = mongoose.model('Book', bookSchema); // Provide the model as a default export. module.exports = Book; Connecting To MongoDB (Basics) Don’t worry about copying down this code. I’ll provide a better version in the next section. To connect to our database, we’ll have to provide the database URL and password. We’ll call the connect method available on mongoose to do so, passing to it the required data. For now, we are going hardcode the URL and password — an extremely frowned upon technique for many reasons: namely the accidental committing of sensitive data to a public (or private made public) GitHub Repository. Realize also that commit history is saved, and that if you accidentally commit a piece of sensitive data, removing it in a future commit will not prevent people from seeing it (or bots from harvesting it), because it’s still available in the commit history. CLI tools exist to mitigate this issue and remove history. As stated, for now, we’ll hard code the URL and password, and then save them to environment variables later. At this point, let’s look at simply how to do this, and then I’ll mention a way to optimize it. const mongoose = require('mongoose'); const MONGODB_URL = 'Your MongoDB URL'; mongoose.connect(MONGODB_URL, { useNewUrlParser: true, useCreateIndex: true, useFindAndModify: false, useUnifiedTopology: true }); This will connect to the database. We provide the URL that we attained from the MongoDB Atlas dashboard, and the object passed in as the second parameter specifies features to use as to, among others, prevent deprecation warnings. Mongoose, which uses the core MongoDB Native Driver behind the scenes, has to attempt to keep up with breaking changes made to the driver. In a new version of the driver, the mechanism used to parse connection URLs was changed, so we pass the useNewUrlParser: true flag to specify that we want to use the latest version available from the official driver. By default, if you set indexes (and they are called “indexes” not “indices”) (which we won’t cover in this article) on data in your database, Mongoose uses the ensureIndex() function available from the Native Driver. MongoDB deprecated that function in favor of createIndex(), and so setting the flag useCreateIndex to true will tell Mongoose to use the createIndex() method from the driver, which is the non-deprecated function. Mongoose’s original version of findOneAndUpdate (which is a method to find a document in a database and update it) pre-dates the Native Driver version. That is, findOneAndUpdate() was not originally a Native Driver function but rather one provided by Mongoose, so Mongoose had to use findAndModify provided behind the scenes by the driver to create findOneAndUpdate functionality. With the driver now updated, it contains its own such function, so we don’t have to use findAndModify. This might not make sense, and that’s okay — it’s not an important piece of information on the scale of things. Finally, MongoDB deprecated its old server and engine monitoring system. We use the new method with useUnifiedTopology: true. What we have thus far is a way to connect to the database. But here’s the thing — it’s not scalable or efficient. When we write unit tests for this API, the unit tests are going to use their own test data (or fixtures) on their own test databases. So, we want a way to be able to create connections for different purposes — some for testing environments (that we can spin up and tear down at will), others for development environments, and others for production environments. To do that, we’ll build a factory. (Remember that from earlier?) Connecting To Mongo — Building An Implementation Of A JS Factory Indeed, Java Objects are not analogous at all to JavaScript Objects, and so, subsequently, what we know above from the Factory Design Pattern won’t apply. I merely provided that as an example to show the traditional pattern. To attain an object in Java, or C#, or C++, etc., we have to instantiate a class. This is done with the new keyword, which instructs the compiler to allocate memory for the object on the heap. In C++, this gives us a pointer to the object that we have to clean up ourselves so we don’t have hanging pointers or memory leaks (C++ has no garbage collector, unlike Node/V8 which is built on C++) In JavaScript, the above need not be done — we don’t need to instantiate a class to attain an object — an object is just {}. Some people will say that everything in JavaScript is an object, although that is technically not true because primitive types are not objects. For the above reasons, our JS Factory will be simpler, sticking to the loose definition of a factory being a function that returns an object (a JS object). Since a function is an object (for function inherits from object via prototypal inheritance), our below example will meet this criterion. To implement the factory, I’ll create a new folder inside of server called db. Within db I’ll create a new file called mongoose.js. This file will make connections to the database. Inside of mongoose.js, I’ll create a function called connectionFactory and export it by default: // Directory - server/db/mongoose.js const mongoose = require('mongoose'); const MONGODB_URL = 'Your MongoDB URL'; const connectionFactory = () => { return mongoose.connect(MONGODB_URL, { useNewUrlParser: true, useCreateIndex: true, useFindAndModify: false }); }; module.exports = connectionFactory; Using the shorthand provided by ES6 for Arrow Functions that return one statement on the same line as the method signature, I’ll make this file simpler by getting rid of the connectionFactory definition and just exporting the factory by default: // server/db/mongoose.js const mongoose = require('mongoose'); const MONGODB_URL = 'Your MongoDB URL'; module.exports = () => mongoose.connect(MONGODB_URL, { useNewUrlParser: true, useCreateIndex: true, useFindAndModify: true }); Now, all one has to do is require the file and call the method that gets exported, like this: const connectionFactory = require('./db/mongoose'); connectionFactory(); // OR require('./db/mongoose')(); You could invert control by having your MongoDB URL be provided as a parameter to the factory function, but we are going to dynamically change the URL as an environment variable based on environment. The benefits of making our connection as a function are that we can call that function later in code to connect to the database from files aimed at production and those aimed at local and remote integration testing both on-device and with a remote CI/CD pipeline/build server. Building Our Endpoints We now begin to add very simple CRUD related logic to our endpoints. As previously stated, a short disclaimer is in order. The methods by which we go about implementing our business logic here are not ones that you should mirror for anything other than simple projects. Connecting to databases and performing logic directly within endpoints is (and should be) frowned upon, for you lose the ability to swap out services or DBMSs without having to perform an application wide refactor. Nonetheless, considering this is a beginner’s article, I employ these bad practices here. A future article in this series will discuss how we can increase both the complexity and the quality of our architecture. For now, let’s go back to our server.js file and ensure we both have the same starting point. Notice I added the require statement for our database connection factory and I imported the model we exported from ./models/book.js. const express = require('express'); //}.`)); I’m going to start with app.post(). We have access to the Book model because we exported it from the file within which we created it. As stated in the Mongoose docs, Book is constructable. To create a new book, we call the constructor and pass the book data in, as follows: const book = new Book(bookData); In our case, we’ll have bookData as the object sent up in the request, which will be available on req.body.book. Remember, express.json() middleware will put any JSON data that we send up onto req.body. We are to send up JSON in the following format: { "book": { "title": "The Art of Computer Programming", "isbn": "ISBN-13: 978-0-201-89683-1", "author": { "firstName": "Donald", "lastName": "Knuth" }, "publishingDate": "July 17, 1997", "finishedReading": true } } What that means, then, is that the JSON we pass up will get parsed, and the entire JSON object (the first pair of braces) will be placed on req.body by the express.json() middleware. The one and only property on our JSON object is book, and thus the book object will be available on req.body.book. At this point, we can call the model constructor function and pass in our data: app.post('/books', async (req, res) => { // <- Notice 'async' const book = new Book(req.body.book); await book.save(); // <- Notice 'await' }); Notice a few things here. Calling the save method on the instance we get back from calling the constructor function will persist the req.body.book object to the database if and only if it complies with the schema we defined in the Mongoose model. The act of saving data to a database is an asynchronous operation, and this save() method returns a promise — the settling of which we much await. Rather than chain on a .then() call, I use the ES6 Async/Await syntax, which means I must make the callback function to app.post async. book.save() will reject with a ValidationError if the object the client sent up does not comply with the schema we defined. Our current setup makes for some very flaky and badly written code, for we don’t want our application to crash in the event of a failure regarding validation. To fix that, I’ll surround the dangerous operation in a try/catch clause. In the event of an error, I’ll return an HTTP 400 Bad Request or an HTTP 422 Unprocessable Entity. There is some amount of debate over which to use, so I’ll stick with a 400 for this article since it is more generic. app.post('/books', async (req, res) => { try { const book = new Book(req.body.book); await book.save(); return res.status(201).send({ book }); } catch (e) { return res.status(400).send({ error: 'ValidationError' }); } }); Notice that I use the ES6 Object Shorthand to just return the book object right back to the client in the success case with res.send({ book }) — that would be equivalent to res.send({ book: book }). I also return the expression just to make sure my function exits. In the catch block, I set the status to be 400 explicitly, and return the string ‘ValidationError’ on the error property of the object that gets sent back. A 201 is the success path status code meaning “CREATED”. Indeed, this isn’t the best solution either because we can’t really be sure the reason for failure was a Bad Request on the client’s side. Maybe we lost connection (supposed a dropped socket connection, thus a transient exception) to the database, in which case we should probably return a 500 Internal Server error. A way to check this would be to read the e error object and selectively return a response. Let’s do that now, but as I’ve said multiple times, a followup article will discuss proper architecture in terms of Routers, Controllers, Services, Repositories, custom error classes, custom error middleware, custom error responses, Database Model/Domain Entity data mapping, and Command Query Separation (CQS). app.post('/books', async (req, res) => { try { const book = new Book(req.body.book); await book.save(); return res.send({ book }); } catch (e) { if (e instanceof mongoose.Error.ValidationError) { return res.status(400).send({ error: 'ValidationError' }); } else { return res.status(500).send({ error: 'Internal Error' }); } } }); Go ahead and open Postman (assuming you have it, otherwise, download and install it) and create a new request. We’ll be making a POST Request to localhost:3000/books. Under the “Body” tab within the Postman Request section, I’ll select the “raw” radio button and select “JSON” in the dropdown button to the far right. This will go ahead and automatically add the Content-Type: application/json header to the request. I’ll then copy and paste the Book JSON Object from earlier into the Body text area. This is what we have: Thereafter, I’ll hit the send button, and you should see a 201 Created response in the “Response” section of Postman (the bottom row). We see this because we specifically asked Express to respond with a 201 and the Book object — had we just done res.send() with no status code, express would have automatically responded with a 200 OK. As you can see, the Book object is now saved to the database and has been returned to the client as the Response to the POST Request. If you view the database Book collection through MongoDB Atlas, you’ll see that the book was indeed saved. You can also tell that MongoDB has inserted the __v and _id fields. The former represents the version of the document, in this case, 0, and the latter is the document’s ObjectID — which is automatically generated by MongoDB and is guaranteed to have a low collision probability. A Summary Of What We Have Covered Thus Far We have covered a lot thus far in the article. Let’s take a short reprieve by going over a brief summary before returning to finish the Express API. We learned about ES6 Object Destructuring, the ES6 Object Shorthand Syntax, as well as the ES6 Rest/Spread operator. All three of those let us do the following (and more, as discussed above): // Destructuring Object Properties: const { a: newNameA = 'Default', b } = { a: 'someData', b: 'info' }; console.log(`newNameA: ${newNameA}, b: ${b}`); // newNameA: someData, b: info // Destructuring Array Elements const [elemOne, elemTwo] = [() => console.log('hi'), 'data']; console.log(`elemOne(): ${elemOne()}, elemTwo: ${elemTwo}`); // elemOne(): hi, elemTwo: data // Object Shorthand const makeObj = (name) => ({ name }); console.log(`makeObj('Tim'): ${JSON.stringify(makeObj('Tim'))}`); // makeObj('Tim'): { "name": "Tim" } // Rest, Spread const [c, d, ...rest] = [0, 1, 2, 3, 4]; console.log(`c: ${c}, d: ${d}, rest: ${rest}`) // c: 0, d: 1, rest: 2, 3, 4 We also covered Express, Expess Middleware, Servers, Ports, IP Addressing, etc. Things got interesting when we learned that there exist methods availabile on the return result from require('express')(); with the names of the HTTP Verbs, such as app.get and app.post. If that require('express')() part didn’t make sense to you, this was the point I was making: const express = require('express'); const app = express(); app.someHTTPVerb It should make sense in the same way that we fired off the connection factory before for Mongoose. Each route handler, which is the endpoint function (or callback function), gets passed in a req object and a res object from Express behind the scenes. (They technically also get next, as we’ll see in a minute). req contains data specific to the incoming request from the client, such as headers or any JSON sent up. res is what permits us to return responses to the client. The next function is also passed into handlers. With Mongoose, we saw how we can connect to the database with two methods — a primitive way and a more advanced/practical way that borrows from the Factory Pattern. We’ll end up using this when we discuss Unit and Integration Testing with Jest (and mutation testing) because it’ll permit us to spin up a test instance of the DB populated with seed data against which we can run assertions. After that, we created a Mongoose schema object and used it to create a model, and then learned how we can call the constructor of that model to create a new instance of it. Available on the instance is a save method (among others), which is asynchronous in nature, and which will check that the object structure we passed in complies with the schema, resolving the promise if it does, and rejecting the promise with a ValidationError if it does not. In the event of a resolution, the new document is saved to the database and we respond with an HTTP 200 OK/201 CREATED, otherwise, we catch the thrown error in our endpoint, and return an HTTP 400 Bad Request to the client. As we continue you building out our endpoints, you’ll learn more about some of the methods available on the model and the model instance. Finishing Our Endpoints Having completed the POST Endpoint, let’s handle GET. As I mentioned earlier, the :id syntax inside the route lets Express know that id is a route parameter, accessible from req.params. You already saw that when you match some ID for the param “wildcard” in the route, it was printed to the screen in the early examples. For instance, if you made a GET Request to “/books/test-id-123”, then req.params.id would be the string test-id-123 because the param name was id by having the route as HTTP GET /books/:id. So, all we need to do is retrieve that ID from the req object and check to see if any document in our database has the same ID — something made very easy by Mongoose (and the Native Driver). app.get('/books/:id', async (req, res) => { const book = await Book.findById(req.params.id); console.log(book); res.send({ book }); }); You can see that accessible upon our model is a function we can call that will find a document by its ID. Behind the scenes, Mongoose will cast whatever ID we pass into findById to the type of the _id field on the document, or in this case, an ObjectId. If a matching ID is found (and only one will ever be found for ObjectId has an extremely low collision probability), that document will be placed in our book constant variable. If not, book will be null — a fact we’ll use in the near future. For now, let’s restart the server (you must restart the server unless you’re using nodemon) and ensure that we still have the one book document from before inside the Books Collection. Go ahead and copy the ID of that document, the highlighted portion of the image below: And use it to make a GET Request to /books/:id with Postman as follows (note that the body data is just left over from my earlier POST Request. It’s not actually being used despite the fact that it’s depicted in the image below): Upon doing so, you should get the book document with the specified ID back inside the Postman response section. Notice that earlier, with the POST Route, which is designed to “POST” or “push” new resources to the server, we responded with a 201 Created — because a new resource (or document) was created. In the case of GET, nothing new was created — we just requested a resource with a specific ID, thus a 200 OK status code is what we got back, instead of 201 Created. As is common in the field of software development, edge cases must be accounted for — user input is inherently unsafe and erroneous, and it’s our job, as developers, to be flexible to the types of input we can be given and to respond to them accordingly. What do we do if the user (or the API Caller) passes us some ID that can’t be cast to a MongoDB ObjectID, or an ID that can be cast but that doesn’t exist? For the former case, Mongoose is going to throw a CastError — which is understandable because if we provide an ID like math-is-fun, then that’s obviously not something that can be cast to an ObjectID, and casting to an ObjectID is specifically what Mongoose is doing under the hood. For the latter case, we could easily rectify the issue via a Null Check or a Guard Clause. Either way, I’m going to send back and HTTP 404 Not Found Response. I’ll show you a few ways we can do this, a bad way and then a better way. Firstly, we could do the following: app.get('/books/:id', async (req, res) => { try { const book = await Book.findById(req.params.id); if (!book) throw new Error(); return res.send({ book }); } catch (e) { return res.status(404).send({ error: 'Not Found' }); } }); This works and we can use it just fine. I expect that the statement await Book.findById() will throw a Mongoose CastError if the ID string can’t be cast to an ObjectID, causing the catch block to execute. If it can be cast but the corresponding ObjectID does not exist, then book will be null and the Null Check will throw an error, again firing the catch block. Inside catch, we just return a 404. There are two problems here. First, even if the Book is found but some other unknown error occurs, we send back a 404 when we should probably give the client a generic catch-all 500. Second, we are not really differentiating between whether the ID sent up is valid but non-existent, or whether it’s just a bad ID. So, here is another way: const mongoose = require('mongoose'); app.get('/books/:id', async (req, res) => { try { const book = await Book.findById(req.params.id); if (!book) return res.status(404).send({ error: 'Not Found' }); return res.send({ book }); } catch (e) { if (e instanceof mongoose.Error.CastError) { return res.status(400).send({ error: 'Not a valid ID' }); } else { return res.status(500).send({ error: 'Internal Error' }); } } }); The nice thing about this is that we can handle all three cases of a 400, a 404 and a generic 500. Notice that after the Null Check on book, I use the return keyword on my response. This is very important because we want to make sure we exit the route handler there. Some other options might be for us to check if the id on req.params can be cast to an ObjectID explicitly as opposed to permitting Mongoose to cast implicitly with mongoose.Types.ObjectId.isValid('id);, but there is an edge case with 12-byte strings that causes this to sometimes work unexpectedly. We could make said repetition less painful with Boom, an HTTP Response library, for example, or we could employ Error Handling Middleware. We could also transform Mongoose Errors into something more readable with Mongoose Hooks/Middleware as described here. An additional option would be to define custom error objects and use global Express Error Handling Middleware, however, I’ll save that for an upcoming article wherein we discuss better architectural methods. In the endpoint for PATCH /books/:id, we’ll expect an update object to be passed up containing updates for the book in question. For this article, we’ll allow all fields to be updated, but in the future, I’ll show how we can disallow updates of particular fields. Additionally, you’ll see that the error handling logic in our PATCH Endpoint will be the same as our GET Endpoint. That’s an indication that we are violating DRY Principles, but again, we’ll touch on that later. I’m going to expect that all updates are available on the updates object of req.body (meaning the client will send up JSON containing an updates object) and will use the Book.findByAndUpdate function with a special flag to perform the update. app.patch('/books/:id', async (req, res) => { const { id } = req.params; const { updates } = req.body; try { const updatedBook = await Book.findByIdAndUpdate(id, updates, { runValidators: true, new: true }); if (!updatedBook) return res.status(404).send({ error: 'Not Found' }); return res.send({ book: updatedBook }); } catch (e) { if (e instanceof mongoose.Error.CastError) { return res.status(400).send({ error: 'Not a valid ID' }); } else { return res.status(500).send({ error: 'Internal Error' }); } } }); Notice a few things here. We first destructure id from req.params and updates from req.body. Available on the Book model is a function by the name of findByIdAndUpdate that takes the ID of the document in question, the updates to perform, and an optional options object. Normally, Mongoose won’t re-perform validation for update operations, so the runValidators: true flag we pass in as the options object forces it to do so. Furthermore, as of Mongoose 4, Model.findByIdAndUpdate no longer returns the modified document but returns the original document instead. The new: true flag (which is false by default) overrides that behavior. Finally, we can build out our DELETE endpoint, which is quite similar to all of the others: app.delete('/books/:id', async (req, res) => { try { const deletedBook = await Book.findByIdAndDelete(req.params.id); if (!deletedBook) return res.status(404).send({ error: 'Not Found' }); return res.send({ book: deletedBook }); } catch (e) { if (e instanceof mongoose.Error.CastError) { return res.status(400).send({ error: 'Not a valid ID' }); } else { return res.status(500).send({ error: 'Internal Error' }); } } }); With that, our primitive API is complete and you can test it by making HTTP Requests to all endpoints. A Short Disclaimer About Architecture And How We’ll Rectify It From an architectural standpoint, the code we have here is quite bad, it’s messy, it’s not DRY, it’s not SOLID, in fact, you might even call it abhorrent. These so-called “Route Handlers” are doing a lot more than just “handing routes” — they are directly interfacing with our database. That means there is absolutely no abstraction. Let’s face it, most applications will never be this small or you could probably get away with serverless architectures with the Firebase Database. Maybe, as we’ll see later, users want the ability to upload avatars, quotes, and snippets from their books, etc. Maybe we want to add a live chat feature between users with WebSockets, and let’s even go as far as saying we’ll open up our application to let users borrow books with one another for a small charge — at which point we need to consider Payment Integration with the Stripe API and shipping logistics with the Shippo API. Suppose we proceed with our current architecture and add all of this functionality. These route handers, also known as Controller Actions, are going to end up being very, very large with a high cyclomatic complexity. Such a coding style might suit us fine in the early days, but what if we decide that our data is referential and thus PostgreSQL is a better database choice than MongoDB? We now have to refactor our entire application, stripping out Mongoose, altering our Controllers, etc., all of which could lead to potential bugs in the rest of the business logic. Another such example would be that of deciding that AWS S3 is too expensive and we wish to migrate to GCP. Again, this requires an application-wide refactor. Although there are many opinions around architecture, from Domain-Driven Design, Command Query Responsibility Segregation, and Event Sourcing, to Test-Driven Development, SOILD, Layered Architecture, Onion Architecture, and more, we’ll focus on implementing simple Layered Architecture in future articles, consisting of Controllers, Services, and Repositories, and employing Design Patterns like Composition, Adapters/Wrappers, and Inversion of Control via Dependency Injection. While, to an extent, this could be somewhat performed with JavaScript, we’ll look into TypeScript options to achieve this architecture as well, permitting us to employ functional programming paradigms such as Either Monads in addition to OOP concepts like Generics. For now, there are two small changes we can make. Because our error handling logic is quite similar in the catch block of all endpoints, we can extract it to a custom Express Error Handling Middleware function at the very end of the stack. Cleaning Up Our Architecture At present, we are repeating a very large amount of error handling logic across all our endpoints. Instead, we can build an Express Error Handling Middleware function, which is an Express Middleware Function that gets called with an error, the req and res objects, and the next function. For now, let’s build that middleware function. All I’m going to do is repeat the same error handling logic we are used to:' }); } }); This doesn’t appear to work with Mongoose Errors, but in general, rather than using if/else if/else to determine error instances, you can switch over the error’s constructor. I’ll leave what we have, however. In a synchronous endpoint/route handler, if you throw an error, Express will catch it and process it with no extra work required on your part. Unfortunately, that’s not the case for us. We are dealing with asynchronous code. In order to delegate error handling to Express with async route handlers, we much catch the error ourselves and pass it to So, I’ll just permit next to be the third argument into the endpoint, and I’ll remove the error handling logic in the catch blocks in favor of just passing the error instance to next, as such: app.post('/books', async (req, res, next) => { try { const book = new Book(req.body.book); await book.save(); return res.send({ book }); } catch (e) { next(e) } }); If you do this to all route handlers, you should end up with the following code: const express = require('express'); const mongoose = require('mongoose'); //', async (req, res, next) => { try { const book = new Book(req.body.book); await book.save(); return res.status(201).send({ book }); } catch (e) { next(e) } }); // HTTP GET /books/:id app.get('/books/:id', async (req, res) => { try { const book = await Book.findById(req.params.id); if (!book) return res.status(404).send({ error: 'Not Found' }); return res.send({ book }); } catch (e) { next(e); } }); // HTTP PATCH /books/:id app.patch('/books/:id', async (req, res, next) => { const { id } = req.params; const { updates } = req.body; try { const updatedBook = await Book.findByIdAndUpdate(id, updates, { runValidators: true, new: true }); if (!updatedBook) return res.status(404).send({ error: 'Not Found' }); return res.send({ book: updatedBook }); } catch (e) { next(e); } }); // HTTP DELETE /books/:id app.delete('/books/:id', async (req, res, next) => { try { const deletedBook = await Book.findByIdAndDelete(req.params.id); if (!deletedBook) return res.status(404).send({ error: 'Not Found' }); return res.send({ book: deletedBook }); } catch (e) { next(e); } }); // Notice - bottom of stack.' }); } }); // Binding our application to port 3000. app.listen(PORT, () => console.log(`Server is up on port ${PORT}.`)); Moving further, it would be worth separating our error handling middleware into another file, but that’s trivial, and we’ll see it in future articles in this series. Additionally, we could use an NPM module named express-async-errors as to permit us to not have to call next in the catch block, but again, I’m trying to show you how things are done officially. A Word About CORS And The Same Origin Policy Suppose your website is served from the domain myWebsite.com but your server is at myOtherDomain.com/api. CORS stands for Cross-Origin Resource Sharing and is a mechanism by which cross-domain requests can be performed. In the case above, since the server and front-end JS code are at different domains, you’d be making a request across two different origins, which is commonly restricted by the browser for security reasons, and mitigated by supplying specific HTTP headers. The Same Origin Policy is what performs those aforementioned restrictions — a web browser will only permit requires to be made across the same origin. We’ll touch on CORS and SOP later when we build a Webpack bundled front-end for our Book API with React. Conclusion And What’s Next We have discussed a lot in this article. Perhaps it wasn’t all fully practical, but it hopefully got you more comfortable working with Express and ES6 JavaScript features. If you are new to programming and Node is the first path down which you are embarking, hopefully the references to statically types languages like Java, C++, and C# helped to highlight some of the differences between JavaScript and its static counterparts. Next time, we’ll finish building out our Book API by making some fixes to our current setup with regards to the Book Routes, as well as adding in User Authentication so that users can own books. We’ll do all of this with a similar architecture to what I described here and with MongoDB for data persistence. Finally, we’ll permit users to upload avatar images to AWS S3 via Buffers. In the article thereafter, we’ll be rebuilding our application from the ground up in TypeScript, still with Express. We’ll also move to PostgreSQL with Knex instead of MongoDB with Mongoose as to depict better architectural practices. Finally, we’ll update our avatar image uploading process to use Node Streams (we’ll discuss Writable, Readable, Duplex, and Transform Streams). Along the way, we’ll cover a great amount of design and architectural patterns and functional paradigms, including: - Controllers/Controller Actions - Services - Repositories - Data Mapping - The Adapter Pattern - The Factory Pattern - The Delegation Pattern - OOP Principles and Composition vs Inheritance - Inversion of Control via Dependency Injection - SOLID Principles - Coding against interfaces - Data Transfer Objects - Domain Models and Domain Entities - Either Monads - Validation - Decorators - Logging and Logging Levels - Unit Tests, Integration Tests (E2E), and Mutation Tests - The Structured Query Language - Relations - HTTP/Express Security Best Practices - Node Best Practices - OWASP Security Best Practices - And more. Using that new architecture, in the article after that, we’ll write Unit, Integration, and Mutation tests, aiming for close to 100 percent testing coverage, and we’ll finally discuss setting up a remote CI/CD pipeline with CircleCI, as well as Message Busses, Job/Task Scheduling, and load balancing/reverse proxying. Hopefully, this article has been helpful, and if you have any queries or concerns, let me know in the comments below.
https://www.smashingmagazine.com/2019/11/express-es6-javascript-stack-mongodb-mongoose-servers/
CC-MAIN-2021-31
refinedweb
18,173
61.97
This documentation is archived and is not being maintained. Hey, Scripting Guy! Scripting Around the Squiggly Red Line_06.exe (151KB) When you’re a Microsoft Scripting Guy, you get used to people falling all over themselves in order to do things for you: "Can I get you a fresh cup of coffee, Scripting Guy?" "Hey, Scripting Guy! There’s only one cookie. Why don’t you take it?" "Get up, Grandma. Let the Scripting Guy have the chair." Yes, everyone wants to help the Scripting Guys—everyone. Well, everyone except Microsoft® Word. That’s right, Microsoft Word. Now, in general, the Scripting Guys like Microsoft Word. And we suppose that it’s possible that Word has better things to do than cater to the Scripting Guys’ every whim. (Although we can’t imagine what could be better than that.) Still, it’s hard to deny that Microsoft Word can be a bit lazy at times. For example, take a look at Figure 1, a screenshot showing an extract from a draft of this column. Figure 1 Document in Word showing spelling mistakes (Click the image for a larger view) Notice anything peculiar? That’s right: Word has drawn squiggly lines under scripting terms like Msgbox. Why? Because Word doesn’t recognize those terms. Doesn’t recognize those terms? Come on, Word; after all, over the past few years the Scripting Guys have typed the term Msgbox several million times. Several million more times they have run spell-check and told you that Msgbox was spelled correctly. And yet still you mark Msgbox and StrReverse and LTrim as being misspelled words? What’s up with that? Now, before you rush to the defense of Word, yes, we know that we can manually add these terms to the Microsoft Word dictionary. But, come on, we’re the Scripting Guys, for crying out loud; what do we know about manual labor? Besides, manually adding VBScript functions and Windows® Management Instrumentation (WMI) class names and Active Directory® Security Interfaces (ADSI) object attributes can be a bit tedious, to say the least. And what if we want to do this on all of our computers? What if we want to share these correct spellings with our devoted readers? What if... well, you get the idea. The point is this: surely there must be an easier way to add scripting terms (or any terms, for that matter) to the dictionary without having to do it by hand, one word at a time. As it turns out, there isn’t a way. See you all next month. No, hey, we’re just kidding. Of course there’s an easier way to go about adding terms to the dictionary. And in just a moment we’ll show you how to do that. First, however, let’s take a minute to chat about dictionaries. Speaking of which, here’s an interesting fact about dictionaries. Why do Americans spell color without a u (color, not colour)? Why do Americans spell the word as center rather than centre? We know the answer: because Noah Webster, who put together the first American dictionary, didn’t like English spelling rules and thus somewhat arbitrarily changed the spelling of many words. Well, OK. Not enthralling. But you try coming up with an interesting fact about dictionaries! In addition to the standard dictionary built into the application, Microsoft Word also allows you to create any number of custom dictionaries. By default, Word comes with a custom dictionary named Custom.dic, which is typically found in the Application Data\Microsoft\Proof folder within your User Profile folder (\Documents and Settings\username by default in Windows XP). If you right-click a misspelled word and select Add to Dictionary, that word will be added to Custom.dic. As it turns out, Custom.dic is just a plain old text file. Open it up in Notepad and you’ll see a list of words similar to this: And that’s how we can programmatically add scripting terms to the dictionary in Microsoft Word. All we have to do is create a text file containing those terms, then configure Word to use that file as a custom dictionary. How hard is that going to be? Considering the fact that the Scripting Guys are willing to do this ourselves, obviously not very hard at all. But hold on: we need to take one more quick side trip before doing that. In the Microsoft Word object model, information about custom dictionaries is stored in the CustomDictionaries collection. In turn, that collection can be programmatically accessed by referencing the Word.Application object and the CustomDictionaries property. For example, Figure 2 shows a script that returns information about all the custom dictionaries associated with your copy of Microsoft Word. Next objWord.Quit Run the script in Figure 2, and you’ll get back information similar to this: Pretty slick, huh? Of course, getting information about existing dictionaries isn’t quite the same thing as creating a new custom dictionary; it’s sort of like the difference between reading about someone who has a million dollars and being someone who has a million dollars. Therefore, let’s see if we can figure out how to create a new custom dictionary in Word. We’re going to start out by adding all the VBScript functions to a new dictionary named Scripting.dic. We’re starting out with VBScript functions because we have to do a little work here. After that, however, we’ll show you how to add WMI classes, ADSI attributes, and Windows PowerShell™ cmdlets to your dictionary without having to do hardly any work at all. (Needless to say, when it comes to hardly doing any work at all, the Scripting Guys know what they’re talking about.) To begin with, creating a custom dictionary in Microsoft Word is as easy as this: As you can see, we start out by creating an instance of the Word.Application object. You might notice that we never set the Visible property to True, and thus never make Word visible onscreen. Why not? Well, simply because this script requires only a second or so to run. Because of that, we didn’t see much reason to have Word flash briefly onto the screen and then immediately disappear. But if you like flashing instances of Word (if for no other reason than to let you know that the script is actually doing something) then simply add this as the second line of code in your script: After creating the Word.Application object, we next use this line of code to create an object reference to the CustomDictionaries collection: We then call the Add method to add the custom dictionary. Note that this file doesn’t have to exist; if Word can’t find a file named Scripting.dic, then it will simply create a new, blank file by that name. Here’s the code for adding the custom dictionary: You might have noticed as well that we didn’t specify a path when adding the new dictionary. Instead, we simply handed the file name (Scripting.dic) to the Add method. That’s fine. In that case, Scripting.dic will be created in the default folder for dictionaries and other proofing tools (for example, C:\Documents and Settings\kenmyer\Application Data\Microsoft\Proof). What if we wanted our dictionary to reside in a different location? No problem. This command creates Scripting.dic in the C:\Scripts folder: And what if we decide later on that we want to get rid of that custom dictionary? Once again, that’s no problem. In that case, we can just bind to the dictionary in question and use the Delete method to get rid of it: Note that this removes C:\Scripts\ Scripting.dic from the CustomDictionaries collection in Word. However, it does not delete the file itself. If you motor on over to C:\Scripts, the Scripting.dic file will still be there, just the way you left it. At this point, all we have to do is start adding terms to our custom dictionary. One way to do that is to simply find a list of, say, VBScript functions, copy them, and then paste them into the file Scripting.dic. For example, you can find a list of VBScript functions in the VBScript Language Reference on MSDN®. Copy those terms, paste them into Scripting.dic, then fire up Word. Figure 3 shows the same section of this article we showed you earlier in Figure 1. See that squiggly red line beneath Msgbox? Of course you don’t; that’s because Word now views Msgbox as a correctly spelled word. Figure 3 Document in Word after adding a custom dictionary (Click the image for a larger view) That’s pretty handy, if we do say so ourselves, especially when you consider the fact that you can share Scripting.dic and your custom dictionary creation script with anyone you want. Do so and in no time at all their versions of Word will also view Msgbox as a correctly spelled word. The truth is, if we Scripting Guys were lazy, we could just end this month’s column right here and now and you’d still walk away with something useful. But, as the saying goes, you ain’t seen nothin’ yet. Note: actually, we are lazy. But seeing as how we had to fill up the excess space in this column, we decided to toss in a bonus script or two. As you probably know, both VBScript and Windows Script Host are kind of shy; they don’t really like to talk about themselves. That isn’t true of WMI, ADSI, or Windows PowerShell, however. All you have to do is ask and those technologies will tell you all about their classes, properties, methods—pretty much anything and everything you’d ever want to know. Take ADSI, for example. Figure 4 shows a script that binds to the Active Directory schema and retrieves all the attributes of the user account object. In turn, it then writes those attributes to the file C:\Scripts\Scripting.dic. Next For Each strAttribute in objSchemaUser.OptionalProperties objFile.WriteLine strAttribute Next objFile.Close See how cool that is? With just a few lines of code we’ve created a dictionary containing all the attributes of the Active Directory user object. Share this code with your fellow scripters, and none of you will ever have to worry about attribute names like streetAddress or givenName being flagged as misspelled words. And, of course, this is infinitely extensible. Want to add the properties for the computer class? Then substitute in this line of code: Or how about WMI classes? Figure 5 shows a script that retrieves a list of all the WMI classes found in the root\cimV2 namespace and writes that information to C:\Scripts\Scripting.dic. Next objFile.Close Forget our Windows PowerShell users? The Scripting Guys would never do that. Well, OK, actually we did forget you. But we did remember to come up with this one-line script, which returns all the cmdlet names and then writes them to C:\Scripts\Scripting.dic: The nice thing about this month’s column is that we all learned something. You learned how to create custom dictionaries in Microsoft Word and how to programmatically configure Word to use those custom dictionaries. And the Scripting Guys learned that if you want something done, well, it’s still way better to try and get someone else to do it for you. But if all else fails, you can always do it yourself. Show:
https://technet.microsoft.com/en-us/library/2007.06.heyscriptingguy.aspx
CC-MAIN-2017-43
refinedweb
1,943
73.37
Hi, I on google but no success. Thanks So, waited until the last minute to do your project assignment, huh? No this is not an assignment. I searched for it on google but no success. Thanks Well if it is not on google you will have to write it yourself. If you have any trouble, post your code and you will get help. Thanks for replying. So here is the code which I have written so far /* * To change this template, choose Tools | Templates * and open the template in the editor. */ package testing; /** * * @author Danish */ import javax.swing.*; import javax.swing.tree.*; import java.awt.event.*; import java.awt.*; import java.util.*; import java.io.*; public class SimpleTree extends JPanel { JTree tree; DefaultMutableTreeNode root; public SimpleTree() { root = new DefaultMutableTreeNode("root", true); getList(root, new File("c:/Program Files")); setLayout(new BorderLayout()); tree = new JTree(root); tree.setRootVisible(false); add(new JScrollPane((JTree)tree),"Center"); } public Dimension getPreferredSize(){ return new Dimension(200, 120); } public void getList(DefaultMutableTreeNode node, File f) { if(!f.isDirectory()) { // We keep only JAVA source file for display in this HowTo if (f.getName().endsWith("java")) { System.out.println("FILE - " + f.getName()); DefaultMutableTreeNode child = new DefaultMutableTreeNode(f); node.add(child); } } else { System.out.println("DIRECTORY - " + f.getName()); DefaultMutableTreeNode child = new DefaultMutableTreeNode(f); node.add(child); File fList[] = f.listFiles(); for(int i = 0; i < fList.length; i++) getList(child, fList[i]); } } public static void main(String s[]){ MyJFrame frame = new MyJFrame("Directory explorer"); } } class WindowCloser extends WindowAdapter { public void windowClosing(WindowEvent e) { Window win = e.getWindow(); win.setVisible(false); System.exit(0); } } class MyJFrame extends JFrame { JButton b1, b2, b3; SimpleTree panel; MyJFrame(String s) { super(s); panel = new SimpleTree(); getContentPane().add(panel,"Center"); setSize(300,300); setVisible(true); setDefaultCloseOperation(JFrame.DO_NOTHING_ON_CLOSE); addWindowListener(new WindowCloser()); } } As you can see from the code that I specify "C:/Program Files" folder as a root folder to this tree. I want "My Computer" to be the top level root node. Thanks I don't know if it is possible to get the "My Computer" as root, but check this out: File [] files = File.listRoots(); for (int i=0;i<files.length;i++) { System.out.println(files[i]); } This will get you all the files that are under the "My Computer". So instead of having one root ("C:/program files"), you can create one father, and under that father put as nodes the "Files" that the above code returns: C://, E://, D:// I don't know if it makes sense, but you can have this: root = new DefaultMutableTreeNode("root", true); File fList[] = File.listRoots(); for(int i = 0; i < fList.length; i++) getList(root , fList[i]); } After all, under "My Computer", all you have are the drives of your computer: C:/, D:/ Also you can check this tutorial: And I would suggest not to populate the tree recursively because it takes awfully long. Try to put Listeners and when a folder is clicked, then take and add the children. Question: Have you tried JFileChooser? Thankyou very much for your response. This solves the problem. In this perticular situation I do not need JFileChooser. I need something similar to "Project Explorer" in Net Beans IDE.
http://www.daniweb.com/software-development/java/threads/216139/file-and-folder-explorer-code-in-java
CC-MAIN-2014-15
refinedweb
532
58.18
its quite good thing to have in once software lib.Reply Originally posted by: Michael Shamgar Im using 'SHBrowseForFolder', and have managed to work around all the issues with it, except for one : I have two 'removable drives' (actually drives that map to my flash card reader, for my digital camera). Everytime the dialog opens, or attempts to scroll up/down near these drives, it tries to access them, but can't - because there are no devices in them. As a result it pops up the classic : 'There is no disk in the drive. Please insert a disk..' Cancel/TryAgain/Continue (etc..) has anyone come across this, or have a solution for fixing it? Mike Originally posted by: Paul Marston I recently had to write part of an application that would accept an addition of relative paths to an existing directory under win NT 4. Unfortunately, the SHCreateDirectoryEx is unavailable to NT (only 2K and above :<) The way I got around this, was to simply break the directory string down (using CString [x]), and analysing if there was a \ or // or . etc. This meant that I could use a simple if else chain to analyse whether or not the relative path should be added or removed. For example: user Path ..\Fred Existing path D:\Bod\app.exe became D:\Fred\app.exe It is clunky, and I am sure that there are hundreds of holes in it, but it works on NT ! More importantly, it is fairly straightforward to do...Reply Originally posted by: bennyfan Yeah, it's very great,but how can I add all my driver path to it, so I can select it just like Windows Explorer! Originally posted by: Sarath Bhooshan S ITs working perfect in Win2000 Professional. Thanks a lotReply Originally posted by: kenshee Is there any way I could get the system to scan if the flopppy or any drives are used using MFC??Thanks..Reply Originally posted by: Sebastien Martin I built a similar class as the one posted here using only API functions, but I couldn't figure out how to make the icons highlighted. (Not just turning every second pixel to a blue color, but acutally shading every pixel to a shade of blue like Win98/NT4/2K do. The original author mentioned something about masks but I'm still kind of new to image manipulation. Even if you provide an MFC solution, I could probably base my code on it and get it working with win GDI. Thanks.Reply Originally posted by: David Netherwood Following Tobias Sch�nherr's suggestion I used the SHBrowseForFolder to pick a directory. I works fine, but there is quite a bit of work goes into to getting it work smoothly, so I thought I'd post a fuller example. #include <windows.h> #include <shlobj.h> #include <iostream.h> // call back function // we only se it to set the current directory int CALLBACK BrowseCallbackProc(HWND hwnd, UINT uMsg, LPARAM , char* path) { switch (uMsg) { case BFFM_INITIALIZED: // Driretory browser intializing - tell it where to start from PostMessage (hwnd, BFFM_SETSELECTION, TRUE, (LPARAM)path); break; } return 0; } int main(int argc, char** argv) { // Browser parameter block BROWSEINFO bi; memset (&bi, 0, sizeof(bi)); // Path variable char path[_MAX_PATH] = "C:\\Temp"; // Initialize COM CoInitialize(NULL); // Get root point to browse from // Can use NULL for all - which in effect is what we are doing here LPITEMIDLIST pidlRoot = NULL; SHGetSpecialFolderLocation(HWND_DESKTOP, CSIDL_DESKTOP /*CSIDL_DRIVES*/, &pidlRoot); // Setup Browse parameter block bi.pidlRoot = pidlRoot; bi.ulFlags = BIF_RETURNONLYFSDIRS; bi.lpfn = (BFFCALLBACK)BrowseCallbackProc; bi.lParam = (LPARAM)path; // Call browse LPITEMIDLIST Selection; Selection = SHBrowseForFolder(&bi); // If went OK, decode return to produce the selected path if (!NULL) SHGetPathFromIDList(Selection, path); // Now free up the memory allocated by the various COM // objects we accessed indirectly LPMALLOC lpMalloc = NULL; if (!SHGetMalloc(&lpMalloc) && (NULL != lpMalloc)) { if (pidlRoot) lpMalloc->Free(pidlRoot); if (Selection) lpMalloc->Free(Selection); } // Print path to console cout << path << endl; return 0; } Originally posted by: TERAHI Originally posted by: Xiaolong Wu this comment concerns the author's improvement plan #2. the plan is to add icons to the edit box. if we have a closer look at the "combobox" on the top of the CFileDiaolog, we can see that it is not a usual CComboBox. the original edit box is replaced by a CButton. one cannot type in it. When one clicks on it, it behaves just like the arrow button on the right. so for the author's improvement plan #2, we need to combine two buttons and a CListBox.the CListBox will be hiden most of the time until user clicks on the buttons. I tried to install a button on a combobox to cover the editbox. but didn't work. when the combobox gets focus, the button dispeared. CComboBox does not give us a handle to the edit box. CComboBoxEx does, but the item data stucture is quite different. xiaolong wu
http://www.codeguru.com/comment/get/48156210/
CC-MAIN-2016-44
refinedweb
816
62.27
7. Electronics design¶ This week I learned a lot , I am an architect and though I have been around labs lately but with each project I did someone in our team used to design electronics but this week I did it alone. make sure you check 5. Electronics production week because you are going to need it in this week . P.s get ready for the back pain Individual assignment:¶ Redraw one of the echo hello-world boards or something equivalent, add (at least) a button and LED (with current-limiting resistor) or equivalent input and output, check the design rules, make it, test it. CAD Software¶ So you need first to choose a software to design your PCB , there is a lot of options like EAGLE and KiCad. I chose Eagle yet I am willing to learn KiCad soon. What is EAGLE? EAGLE is electronic design automation (EDA) software that lets printed circuit board (PCB) designers seamlessly connect schematic diagrams, component placement, PCB routing, and comprehensive library content. Note : I love how Fab academy spreads the culture of Free software and open source data. This helps with spreading knowledge to Everyone. First you need to download EAGLE Step 1 : to design the circuit in Eagle. We start with creating a project Right click on “Projects” and then select “New Project” and name it Step 2 : you should start with a schematic within the project. use the fab.lbr file provided to add fab lab components to my Eagle library. right click on your project and select “New > Schematic” Step 3 then started to add part by part using the Fab.lib after you download it . start by library > open library manger > Step 4 : Find available libraries > browse to fab.lib > open Step 5 : After you find it press use and then go to In Use and search for it to make sure Step 6 Then using the add tool started to add the parts one by one. I will add list of parts and their name in the library Step 7 : You need to find the data sheet for every part and do the connections based on that for this assignment you can use the connection diagram provided by the academy this is a list of the parts how we decided on those parts? Step 8 We start with the function of the board and the data sheet for the programmer and each other part Step 9 Add a net and a label with right name and value this will help you when you switch to board Step 10 Add ISP 3X2 and LEDs with resistors connect the LEDs to GND and to each pin to the microcontroller . Step 11 for the button We will and pull up resistor also I added 20 MHZ crystal and Step 12 After you are done with the schematic press switch board design Step 13 in “Schematic Editor” go to “File > Switch to board”. Click “Yes” for the warning message “Create from schematic?” “Board Editor” window opens and all components are placed using there footprints and connections made in schematic editor. Click on “Group” and draw around your components. Select “Move” and right click on components and select “Move: group”. Place your components to the center of board. this will take time and you can do it manually or automatic but by time you will learn to do it alone and fast Step 14 the yellow lines show the connections between parts as the schematic All those wirings should be replaced using “Route Airwire” tool. but before we start, we need to load design rule check file (fabcity-designrules.dru). Download the file In Board Editor go to “Tools > DRC…” and then select “File” tab. Click on “Load”. now make sure you use 16 as a thickness minimum you can you use 12 for small spaces make sure you use 20 for the border around the components Now export it try to download older version because I have Eagle 9.6.2 free but this version has an issue when exporting to PNG. that it duplicates the size , I couldn’t find a final solution yet but to download older versions or while uploading the file to Fab mods you need to change the DPI. example, if you exported with 1000 dpi you need to make it 2000 dpi. after that Do the same steps for FAB mods same as Electronics production process¶ Step 1 : after you are done with PCB design you should be able to export traces” and cut .png files for inner traces for outer cuts Step 2 : _40<< _47<<. _61<<_62<< Step 6: Click on “Cut” and click on “Delete All” to remove any old jobs. Click on “Add” and select the traces rml file. Click on “Output”. The machine should start now. after I was done with the first cut then I did the outline cut got my part of list ready for soldering Hero shots¶ Programming Using Fab ISP I checked that I have soldered all pieces correctly using multimeter , testing is an important part I used my Fab ISP that I built before and use it to give program the Echo Hello board. .How to do it ? C and Make files Download.¶ Step 1: I Downloaded the C and Makefile. hello.ftdi.44.echo.c C file hello.ftdi.44.echo Makefile Save to your computer and change the name of the makefile to “Makefile1”. Step 2: Connect the board to FabTiny which we made before through the ISP cable. . Make sure that you connect cables in the right direction such that the board, Connect the programmer to USB port how to do it : I uploaded libraries to Arduino to work with the micro controller you can check that in the embedded programming then you can upload the code using Arduino #include <avr/io.h> #include <util/delay.h> void switchLED(unsigned char); int main(void) { //Clock divider set to one using lfuse bit CKDIV8 = 1 unsigned char counter = 1; //Counts push button presses double Tdebounce = 150; //Delay to remove debouncing DDRA |= (1 << DDA2); //Output: Red LED DDRA |= (1 << DDA3); //Output: Blue LED DDRB &= ~(1 << DDB2); //Input: Push button. Default = 1 (pulled up through external 10k resistor) while(1){ if(!(PINB & (1 << PINB2))){ //If push button pressed _delay_ms(Tdebounce); while(!(PINB & (1 << PINB2))){} //Looping as long as push button is pressed counter++; if(counter == 5){ counter = 1; } } switchLED(counter); } } void switchLED(unsigned char c){ switch(c){ case 1: //Both LEDs ON PORTA |= (1 << PORTA2); PORTA |= (1 << PORTA3); break; case 2: //Red LED ON PORTA |= (1 << PORTA2); PORTA &= ~(1 << PORTA3); break; case 3: //Blue LED ON PORTA &= ~(1 << PORTA2); PORTA |= (1 << PORTA3); break; case 4: //Both LED OFF PORTA &= ~(1 << PORTA2); PORTA &= ~(1 << PORTA3); break; } } For programming process more in the embedded programming week Group assignment:¶ Use the test equipment in your lab to observe the operation of a microcontroller circuit board (in minimum, check operating voltage on the board with multimeter or voltmeter and use oscilloscope to check noise of operating voltage and interpret a data signal) Document your work (in a group or individually) Using the digital multimeter, in the normal state the measured value is 5.05 V (digitally 1 or high), and when pressed, the measured value is -0.3 mV (digitally 0 or low). We used oscilloscope to measure the same input signal. To connect the oscilloscope probe, connect the spring loaded end (hock or clamp) to GND and the pin head to test point, which is the push button signal. Using the test equipment in our lab I started by watching a sparkfun’s youtube video on an oscilloscope I learned: that an oscilloscope measures voltage over time: getting amplitude, frequency, transient signals; The important buttons… We used oscilloscope to measure the same input signal. To connect the oscilloscope probe, connect the spring loaded end (hock or clamp) to GND and the pin head to test point, which is the push button signal. - Our Lab oscilloscope - Calibrating the probe/Adjusting compensation capacitor a.on the probe select to X10; b.connect the probe to channel one plug; c.connect the probe to the 1kHz 5V square wave generator d.turn on oscilloscope e.make sure only channel one is on, click on CH2 button until its off and the other way around on CH1 button; f.select DC coupling with the F1 key; g.press button F4 to select 10X on probe mode; h.press trig/menu button and select type as “edge” and source CH1 i.Select slope as “Rise” j.Adjust the Vertical and horizontal knobs until you see the wave k.if wave is moving adjust trig knob until it stops l.to calibrate the probe adjust the screw on the base of the probe until the wave is square References¶ special thanks to Nadine Tuhaimer I used her documentation as a Reference and also the work of Aziz Wadi.
https://fabacademy.org/2021/labs/techworks/students/zaid-marji/assignments/week07/
CC-MAIN-2022-05
refinedweb
1,492
68.1
What is Web Service? Web Service is an application that is designed to interact directly with other applications over the internet. In simple sense, Web Services are means for interacting with objects over the Internet. History of Web Service or How Web Service comes into existence? As I have mention before that Web Service is nothing but means for interacting with objects over the Internet. 1. Initially Object - Oriented Language comes which allow us to interact with two object within same application. 2. Then comes Component Object Model (COM) which allows interacting two objects on the same computer, but in different applications. 3. Then comes Distributed Component Object Model (DCOM) which allows interacting two objects on different computers, but within same local network. 4. And finally the web services, which allows two object to interact internet. That is it allows interacting between two objects on different computers and even not within same local network. Example Communication:Web Services communicate by using standard web protocols and data formats, such as 1. HTTP 2. XML 3. SOAP Advantages of Web Service CommunicationWeb Service messages are formatted as XML, a standard way for communication between two incompatible systems. And this message is sent via HTTP, so that they can reach to any machine on the internet without being blocked by firewall. Here is one question arising in your mind and that is testing a Web Service? Answer is: You can test web service without building an entire client application. With Asp.net you can simply run the application and test the method by entering valid input parameters. You can also use .Net Web Service Studio Tool comes from Microsoft. Example of Creating Web Service in .Net This Web Service will take two numbers from client web page and add these on web service page then return to client page. Step1:- Create a Web Service Application by File > New > Web Site > Asp.net Web Services Step2:- Write a method named “Add” on “Service.cs” file using System; using System.Collections.Generic; using System.Linq; : WebService { [WebMethod] public string HelloWorld() { return "Hello World"; } [WebMethod] //creating method Add which will accept two arguments. public int Add(int a, int b) { //returning value return a + b; } } Step3:- Build Web Service and Run the Web Service for testing by pressing F5 function key. Copy the URL string for “Add Web Reference” in your project Step4:- Create a Web Site by File > New > Web Site > Asp.net Web Site and create UI as per as below format Step5:- In Visual Studio 2008, Right Click on Solution Explorer and Choose "Add Web Reference". Or in Visual Studio 2010, Right Click on Solution Explorer and Choose "Add Service Reference" then click on button “Advance”, then chose “Add Web Reference” Step6:- Paste copy string in “URL” box, then click button “go”. After few second button “Add Reference” enable then click on it. Step7:- After adding “Add Reference”, Solution Explorer look like this….. Step8:- Write codes on button “Add” click event using System; public partial class _Default : System.Web.UI.Page { protected void Page_Load(object sender, EventArgs e) { } protected void btnAdd_Click(object sender, EventArgs e) { //creating object of service created earlier. localhost.Service ws = new localhost.Service(); //using Add method of web service and displaying the result in the page. Response.Write((ws.Add(int.Parse(txtNo1.Text), int.Parse(txtNo2.Text))).ToString()); } } Step9:- Run this web site and enter values in TextBox then click button “Add” and see output. E.g. 1st value is “5” and 2nd value is “15”. Output is “20” Your article is very informative but when I'm trying to create web service in Visual Studio 2010, ASP.NET Web Service option is not show. Hi Parvesh Singh! In ASP.NET Web Service Application project template is not available for .Net framework 4.0, however, available for .Net Framework 3.5. If you're building your application on .net framework 4.0, You can use WCF Service Application as ASMX in legacy. Please note that you'd need to enable AspNetCompatibilityMode to access HttpContext objects. If you still want to use ASMX, choose ASP.NET Empty Web application and then you can add ASMX files to the project.
https://www.mindstick.com/Articles/970/web-service-in-dot-net
CC-MAIN-2017-47
refinedweb
700
57.98
This is the interface for 2D renderer. More... #include <ivideo/graph2d.h> Detailed Description This is the interface for 2D renderer. The 2D renderer is responsible for all 2D operations such as creating the window, switching pages, returning pixel format and so on. Main creators of instances implementing this interface: - OpenGL/Windows canvas plugin (crystalspace.graphics2d.glwin32) - OpenGL/X11 canvas plugin (crystalspace.graphics2d.glx) - DirectDraw canvas plugin (crystalspace.graphics2d.directdraw) - X11 canvas plugin (crystalspace.graphics2d.x2d) - Memory canvas plugin (crystalspace.graphics2d.memory) - Null 2D canvas plugin (crystalspace.graphics2d.null) - Some others. - Note that it is the 3D renderer that will automatically create the right instance of the canvas that it requires. Main ways to get pointers to this interface: Main users of this interface: - 3D renderers (iGraphics3D implementations) Definition at line 180 of file graph2d.h. Member Function Documentation Enable/disable canvas resizing. This routine should be called before any draw operations. It should return true if graphics context is ready. Blit a memory block. Format of the image is RGBA in bytes. Row by row. Clear backbuffer. Clear all video pages. Clip a line against given rectangle. Function returns true if line is not visible. Close the device. Create an off-screen canvas so you can render on a given memory area. If depth==8 then the canvas will use palette mode. In that case you can do SetRGB() to initialize the palette. The callback interface (if given) is used to communicate from the canvas back to the caller. You can use this to detect when the texture data has changed for example. - Deprecated: - Deprecated in 1.3. Offscreen canvases are deprecated, use iGraphics3D::SetRenderTarget() Enable or disable double buffering; returns success status. Draw a box. Draw a line. Draw a pixel. Draw an array of pixel coordinates with the given color.. This routine should be called when you finished drawing. Free storage allocated for a subarea of screen. Retrieve clipping rectangle. Get the double buffer state. Get the dimensions of the framebuffer. Returns 'true' if the program is being run full-screen. Get gamma value. Return the height of the framebuffer. Get the name of the canvas. Get the native window corresponding with this canvas. If this is an off-screen canvas then this will return. Get the palette (if there is one). As GetPixel() above, but with alpha. Query pixel R,G,B at given screen location. Returns the address of the pixel at the specified (x, y) coordinates. Return the number of bytes for every pixel. This function is equivalent to the PixelBytes field that you get from GetPixelFormat(). Return information about the pixel format. Retrieve the R,G,B,A tuple for a given color index. Retrieve the R,G,B tuple for a given color index. Get the currently set viewport. Return the width of the framebuffer. Open the'. Flip video pages (or dump backbuffer into framebuffer). The area parameter is only a hint to the canvas driver. Changes outside the rectangle may or may not be printed as well. Resize the canvas. Restore a subarea of screen saved with SaveArea(). Save a subarea of screen and return a handle to saved buffer. Storage is allocated in this call, you should either FreeArea() the handle after usage or RestoreArea () it. Set clipping rectangle. The clipping rectangle is inclusive the top and left edges and exclusive for the right and bottom borders. Change the fullscreen state of the canvas. Set gamma value (if supported by canvas). By default this is 1. Smaller values are darker. If the canvas doesn't support gamma then this function will return false. to one of predefined shape classes (see csmcXXX enum above). If a specific mouse cursor shape is not supported, return 'false'; otherwise return 'true'. If system supports it the cursor should be set to its nearest system equivalent depending on iShape argument and the routine should return "true". Set a color index to given R,G,B (0..255) values. Only use if there is a palette. Set the viewport (the rectangle of the framebuffer to draw to). - Parameters: - Write a text string into the back buffer. A value of -1 for bg color will not draw the background. The documentation for this struct was generated from the following file: Generated for Crystal Space 1.4.1 by doxygen 1.7.1
http://www.crystalspace3d.org/docs/online/api-1.4/structiGraphics2D.html
CC-MAIN-2017-30
refinedweb
725
62.64
Template Engines¶ There are many benefits to using templates. - Separation of the view from the controller and model code. - Increased productivity. - Ease of creating a site with a unified design. - Division of labor. Designers can now work on template files using html plus a bit more without needing to know how to write Python. At the simplest level templates are not much different than a Python formatted string. However they are much more convenient to use, and have many more features that we will explore shortly. Here is a simple example: from jinja2 import Template template = Template('Hello {{ name }}!') template.render(name='John Doe') A slightly more complicated example goes like this: from jinja2 import Environment, FileSystemLoader env = Environment(loader=FileSystemLoader('/path/to/templates)) t = env.get_template('foo.html') t.render(name='Luther') This example shows how we can set up an environment that can be shared among many functions. This environment takes care of the details behind locating and reading our templates from a file when we want to use them. The foo.html file could look like this: <html> <body> <h1>Hello {{ name }}</h1> </body> </html> The values inside the double curlies are not limited to being string objects, although they must have an __str__ method if they are going to be useful. Consider an instance of a Student class that has instance variables for firstname, lastname, and gpa. Lets change our template to look like the following: <html> <body> <h1>Hello {{ s.firstname }} {{ s.lastname }}</h1> <p>Your gpa is {{ s.gpa }}.</p> </body> </html> Assuming we have a student object called joe we can render the template above with the line: t.render(s=joe) This would also work if joe was a dictionary and had keys firstname, lastname, and gpa. The dot notation works for either attributes or items in a dictionary ( __getattr__ or __getitem__) for those of you who like magic method speak. Loops in Templates¶ Lets suppose you want to make a table in a template. The ideal would be to pass render a list of things, and have the template turn the list into a table. (or an unordered list or whatever) This is easy to do. <html> <body> <h1>The first {{ plist|length }} prime numbers</h1> <table> {% for i in plist: %} <tr><td>{{ i }}</td></tr> {% endfor %} </table> </body> </html> This introduces several interesting new features of templates. - The {% ... %}notation is used to include a non-rendering bit of code in the template. In this example we introduce a for loop. Notice that since html does not require you to indent things we need an endfor to delimit the end of the for loop. - Jinja2 includes a huge number of filters that you can use on a variable. The filter plist|lengthwill render as the number of elements in the plist list. Conditionals in Templates¶ In addition to loops you can also have a conditional in a template for example: <html> <body> {% if name %} <h1>Hello {{ name }} </h1> {% else %} <h1>Hello World</h1> {% endif %} </body> </html> Template Inheritance¶ The real power of templates comes when you use template inheritance. The following scenario is very common: 1. base.html - This file contains the layout that will be used throughout the site, along with all of the links to css files and includes of javascript. The base.html file will define a set of blocks that have default content, but can be overridden by other templates. 1. index.html – The landing page, that inherits from base.html and customizes some blocks for the main page. 1. other child pages, will also inherit from base.html annd make their own customizations. For example lets suppose you have a base.html file that looks like this: <html> <head> {% block head %} <link rel="stylesheet" href="static/style.css" /> <title>{% block title %}{% endblock %} - My Webpage</title> {% endblock %} </head> <body> <main>{% block content %}{% endblock %}</main> <footer> {% block footer %} Creative Commons 2014 by <a href="">you</a>. {% endblock %} </footer> </body> Running this through the Jinja2 renderer gives us this: <html> <head> <link rel="stylesheet" href="static/style.css" /> <title> - My Webpage</title> </head> <body> <main></main> <footer> Creative Commons 2014 by <a href="">you</a>. </footer> </body> Now lets create a child template that contains a title and some real content. .. code-block:: html {% block content %} <h1>Tempates are awesome for 10 reasons</h1> <ol> {% for i in reasons: %} <li>Reason {{ i }}</li> {% endfor %} </ol> {% endblock %} And render it with render(reasons=[1,2,3,4,5]) <html> <head> <link rel="stylesheet" href="static/style.css" /> <title> Great Title - My Webpage</title> </head> <body> <main> <h1>Tempates are awesome for 5 reasons</h1> <ol> <li>Reason 1</li> <li>Reason 2</li> <li>Reason 3</li> <li>Reason 4</li> <li>Reason 5</li> </ol> </main> <footer> Creative Commons 2014 by <a href="">you</a>. </footer> </body> Notice that the header and footer are intact, however the child has the title “Great Title” and the content of the child has been inserted into the content block. Templates in Flask¶ To use Jinja templates in flask is easy. - You need to make a templates subdirectory in your main project directory. - Add from flask import render_templateto your Python. - Then from one of your controller functions, rather than returning a big string, you simple invoke the render_templatefunction: ``return render_template(‘todo.html’,todolist=todolist)) Remember that in flask our controller functions return an iterable. The render_template function returns such an interable. Its just a string, so you can call the render_template function and print the results if you like.
https://runestone.academy/runestone/static/webfundamentals/Frameworks/templates.html
CC-MAIN-2018-17
refinedweb
926
63.29
#include "HX711.h"// HX711 circuit wiringconst int LOADCELL_DOUT_PIN = A2;const int LOADCELL_SCK_PIN = A3;HX711 scale;void setup() { Serial.begin(57600); scale.begin(LOADCELL_DOUT_PIN, LOADCELL_SCK_PIN); byte gain = 64; long average = scale.read_average(10); //Reads the average (how many times) scale.set_scale (1.f); //Sets the scale to convert raw data to human readable data (psia). What does 1.f mean? float get_scale(); //Reads the current scale scale.set_offset (100); //This section should print once Serial.println("START"); Serial.print("OFFSET: "); Serial.println(scale.get_offset()); //Serial Print OFFSET: Value Serial.print("AVERAGE: "); Serial.println(average); //Serial Print AVERAGE: Value Serial.print("CURRENT SCALE: "); Serial.println(scale.get_scale()); //Serial Print CURRENT SCALE: Valuedelay(10000);} void loop() { if (scale.is_ready()) { long reading = scale.read(); long average = scale.read_average(10); //Reads the average (how many times) byte gain = 64; //Sets the Gain on Channel A // This section should print continuously Serial.print("AVERAGE: "); Serial.println(average); //Serial Print AVERAGE: Value } else { Serial.println("HX711 not found."); } delay(1000); } It sounds like several hardware issues, there is no bypass in your circuit, no power supply information, etc. It would help if you include a complete schematic with all the power and ground connections. You are working at the mV level and will have a lot of noise problems if not properly designed including leed dress. By using a multi meter set on 200mV range I have tried to measure a voltage across combinations of the A+, A-, E+, E- pins. I never got a reading other than zero. The HX711 is designed for exactly this purpose and if you use the recommended circuitI don't see a problem with noise as it has a low effective bandwidth as you might expect for a load-cell instrumentation amp/ADC.Which actual HX711 module are you using BTW?A bad connection, or a fake part ?
https://forum.arduino.cc/index.php?topic=728866.msg4905876
CC-MAIN-2021-10
refinedweb
307
52.05
KMP (Knuth-Morris-Pratt) Algorithm Reading time: 15 minutes | Coding time: 10 minutes Knuth Morris Pratt pattern searching algorithm searches for occurrences of a pattern P within a string S using the key idea that when a mismatch occurs, the pattern P has sufficient information to determine where the next potential match could begin thereby avoiding several unnecessary matching bringing the time complexity to linear. The algorithm was developed in 1970 by Donald Knuth and Vaughan Pratt. KMP borders or prefix function. What is border? Border is the prefix of the string that is equal to its suffix but is not equal to the whole string. That might have been confusing so let us clear the doubts with an example: "ab" is a border of "ab c ab" but it isn't a border of "ab". Here we can see, "ab" is prefix and also a suffix of string "abcab" but we are not considering it a border of string "ab" because the whole string is equal to border. Likewise, "aba" is border of "ababa" but "ab" isn't. Now, our prefix function just calculate border of maximum length of string S[0....i] for every position i (from 0 to n-1) of string S. How do we calculate that you ask.... Well suppose we already have a string S and its maximum length border. Now, if we append a character in the end it can increase the length of maximum border by 1 at max. Ex. if "aba" is a border of "ababa" and we append "b" to end then new string is "ababab" with border of maximum length 4 as "abab". So for any index i+1 it can have its value of prefix function, f(i+1) < f(i) + 1 . If suppose the border doesn't get increased by 1 we try to find a border smaller than the previous one where it can be increase by 1. If no such border exists then it means this index doesn't form any border yet. Approach is given below: 1. prefix[] <---- Array of integers to store prefix function values for every index of S. 2. prefix[0] = 0, border = 0 3. for i := 1 to |S| - 1: 4. while (border > 0) and (S[i] != S[border]) : 5. border = prefix[border - 1] 6. if S[i] == S[border] : 7. border = border + 1 8. else: 9. border = 0 10. prefix[i] = border Now you may be wondering how this border and prefix function helps us finding the pattern in string. It is clear that our prefix function will have values smaller than the length of the string itself. The next step is a little difficult to comprehend at first but it is the breakthrough. If we see all the values of prefix function and consider only the ones equal to the length of pattern then it means a string of length equal to our pattern is present as prefix and suffix in the string i.e, at every i where prefix[i] = |P| there is a string of length |P| as prefix and suffix in string. What if this string is equal to our pattern P ? Then we only need to look at these positions and voila this is our breakthrough. What we do is construct a new string out of our pattern and string like: (P + "#" + S) where "#" sign is used to append the pattern and string together. We can use any character to append our pattern and string together unless it doesn't appear in both pattern and string. If we calculate prefix function for this string then for every position i where prefix function value is equal to length of pattern, the prefix is our pattern or string before "$" sign and suffix is contained in our string which we have loacted. PseudocodeThe pseudocode of given Problem is as follows: 1. S = P + # + T <--- here P is pattern and T is text 2. prefix[] = prefix_function(S) 3. result = [] 4. for i from |P|+1 to |S|-1: 5. if prefix[i] = |P|: 6. result.append(i - 2*|P|) ComplexityTime Complexity: O(m + n) Space Complexity: O(m + n) Implementation #include <bits/stdc++.h> using namespace std; void call_prefix(string &kmp, vector<int> &prefix, int len){ int border = 0; for(int i = 1; i < len ;i++){ while(border > 0 && kmp[i] != kmp[border]) border = prefix[border - 1]; if(kmp[i] == kmp[border]) border++; else border = 0; prefix[i] = border; } } int main(){ string pattern, text; cin >>pattern >>text; string kmp = pattern + '$' + text; int len = kmp.length(), len_of_pat = pattern.length(); vector<int> prefix(len, 0); call_prefix(kmp, prefix, len); for(int i = len_of_pat + 1; i < len ;i++){ if(prefix[i] == len_of_pat) cout <<i - 2 * len_of_pat <<" "; } return 0; } ExampleGiven below is an example of how KMP is skipping positions to get a better worst case complexity - In below pic upper string is text and lower one is pattern and we search for pattern in text - In first iteration we check occurence. - Now same process is done for rest of the positions. - finding border for matched string. - Skipping next position. - Bingo! we found a match at 5th position. Application The KMP Algorithm is an efficient exact pattern searching algorithm and is used where fast pattern matching is required but there is a drawback. For differnt patterns and text KMP has to be applied multiple times. So, it is not feasible in case of multiple patterns or texts. In that case, more advanced data structures like: Trie, Suffix Trees or Suffix arrays are used.
https://iq.opengenus.org/knuth-morris-pratt-algorithm/
CC-MAIN-2018-51
refinedweb
921
72.05
Am 26.05.2013 03:23, schrieb mdroth: On Sat, May 25, 2013 at 01:09:50PM +0200, Stefan Priebe wrote:Am 25.05.2013 00:32, schrieb mdroth:On Sat, May 25, 2013 at 12:12:22AM +0200, Stefan Priebe wrote:Am 25.05.2013 00:09, schrieb mdroth:I would try to create a small example script.I use qmp-shell and other little scripts very often.Am this be due to the fact that I don't wait for the welcome banner right now?If you're not reading from the socket, then you'll get the banner back when you read your first response. But qom-set shouldn't fail because of that.I can workaround it by adding this patch: diff --git a/monitor.c b/monitor.c index 62aaebe..9997520 100644 --- a/monitor.c +++ b/monitor.c @@ -4239,7 +4239,8 @@ static int monitor_can_read(void *opaque) static int invalid_qmp_mode(const Monitor *mon, const char *cmd_name) { int is_cap = compare_cmd(cmd_name, "qmp_capabilities"); - return (qmp_cmd_mode(mon) ? is_cap : !is_cap); +// return (qmp_cmd_mode(mon) ? is_cap : !is_cap); + return ((is_cap > 0) ? 0 : (qmp_cmd_mode(mon) ? is_cap : !is_cap)); }I think this is unrelated to your original issue. If you issue 'qmp_capabilities' command more than once you will get CommandNotFound, and that behavior seems to be present even with v1.3.0. This patch seems to be masking the problem you're having (which seems to be state from previous monitor sessions/connections leaking into subsequent ones).That sounds reasonable. I'm using proxmox / PVE which does a lot of qmp queries in the background. So i might see situations where X connections in parallel do qmp queries.It's possible the GSource-based mechanism for handling I/O for chardev backends is causing a difference in behavior. Still not sure exactly what's going on though.Can i revert some patches to test?I think somewhere prior to this one should be enough to test: 2ea5a7af7bfa576a5936400ccca4144caca9640bYES! I used 2ea5a7af7bfa576a5936400ccca4144caca9640b~1 for my tests and this works absoluty fine.Turns out the real culprit was a few commits later: 9f939df955a4152aad69a19a77e0898631bb2c18 I've sent a workaround this fixes things for QMP, but we may need a more general fix. Please give it a shot and see if it fixes your issues: no i got again:The command qom-set has not been found JSON Reply: {"id": "21677:2", "error": {"class": "CommandNotFound", "desc": "The command qom-set has not been found"}} JSON Query: {"execute":"qom-set","id":"21677:2","arguments":{"value":2,"path":"machine/peripheral/balloon0","property":"guest-stats-polling-interval"}} at /usr/share/perl5/PVE/QMPClient.pm line 101. Stefan
https://lists.gnu.org/archive/html/qemu-devel/2013-05/msg03627.html
CC-MAIN-2022-33
refinedweb
434
58.38
Anthony Liguori wrote: > Stefano Stabellini wrote: >> -#if 0 >> case 8: >> - r = (rgba >> 16) & 0xff; >> - g = (rgba >> 8) & 0xff; >> - b = (rgba) & 0xff; >> - color = (rgb_to_index[r] * 6 * 6) + >> - (rgb_to_index[g] * 6) + >> - (rgb_to_index[b]); >> + color = ((r >> 5) << 5 | (g >> 5) << 2 | (b >> 6)); >> break; >> -#endif >> > > This fix seems orthogonal to the rest of the patch. You're adding > support for an 8-bit DisplayState depth that's 3-3-2. It would be good > to document this somewhere. You are right: I saw the "#if 0" and I just tried to "fix" it. Reading the code again I realize that this change doesn't make sense for at least two different reasons, so I'll just drop it. >> >> +static void vnc_colourdepth(DisplayState *ds, int depth); >> > > For the purposes of consistency, please stick to American English > spellings. OK, no problems, I'll start practicing with this email :) >> static inline void vnc_set_bit(uint32_t *d, int k) >> { >> d[k >> 5] |= 1 << (k & 0x1f); >> @@ -332,54 +334,73 @@ static void vnc_write_pixels_copy(VncState *vs, >> void *pixels, int size) >> /* slowest but generic code. */ >> static void vnc_convert_pixel(VncState *vs, uint8_t *buf, uint32_t v) >> { >> - unsigned int r, g, b; >> + uint8_t r, g, b; >> >> - r = (v >> vs->red_shift1) & vs->red_max; >> - g = (v >> vs->green_shift1) & vs->green_max; >> - b = (v >> vs->blue_shift1) & vs->blue_max; >> - v = (r << vs->red_shift) | >> - (g << vs->green_shift) | >> - (b << vs->blue_shift); >> + r = ((v >> vs->red_shift1) & vs->red_max1) * (vs->red_max + 1) / >> + (vs->red_max1 + 1); >> > > I don't understand this change. The code & red_max but then also rounds > to red_max + 1. Is this an attempt to handle color maxes that aren't > power of 2 - 1? The spec insists that the max is always in the form n^2 > - 1: > > "Red-max is the maximum red value (= 2n − 1 where n is the number of > bits used for red)." > > Is this just overzealous checks or was a fix for a broken client? This code is meant to convert pixels from the vnc server internal pixel format to the vnc client pixel format. red_max refers to the vnc client red max, while red_max1 refers to the vnc server internal red max. Before we were just handling the case red_max1 = 0xff, this code should be able to handle other cases as well (necessary for handling the shared buffer). Does this answer your question? May be with the assumption that red_max = 2^n - 1 is still possible to simplify the conversion code... >> switch(vs->pix_bpp) { >> case 1: >> - buf[0] = v; >> + buf[0] = (r << vs->red_shift) | (g << vs->green_shift) | >> + (b << vs->blue_shift); >> break; >> case 2: >> + { >> + uint16_t *p = (uint16_t *) buf; >> + *p = (r << vs->red_shift) | (g << vs->green_shift) | >> + (b << vs->blue_shift); >> if (vs->pix_big_endian) { >> - buf[0] = v >> 8; >> - buf[1] = v; >> - } else { >> - buf[1] = v >> 8; >> - buf[0] = v; >> + *p = htons(*p); >> } >> > > I think this stinks compared to the previous code. I don't see a > functional difference between the two. Can you elaborate on why you > made this change? It seems that these last changes can be dropped all together. The color conversion changes were fixed in multiple steps on xen-unstable, so now the latter changes seem pointless. I'll drop them and do all the tests again... >> send_framebuffer_update_hextile(VncState *vs, int x, int y, int w, i >> { >> int i, j; >> int has_fg, has_bg; >> - uint32_t last_fg32, last_bg32; >> + void *last_fg, *last_bg; >> >> vnc_framebuffer_update(vs, x, y, w, h, 5); >> >> + last_fg = (void *) malloc(vs->depth); >> + last_bg = (void *) malloc(vs->depth); >> > > Probably should just have uint8_t last_fg[4], last_bg[4]. That avoids > error checking on the malloc. OK. >> has_fg = has_bg = 0; >> for (j = y; j < (y + h); j += 16) { >> for (i = x; i < (x + w); i += 16) { >> vs->send_hextile_tile(vs, i, j, >> MIN(16, x + w - i), MIN(16, y + h - >> j), >> - &last_bg32, &last_fg32, &has_bg, >> &has_fg); >> + last_bg, last_fg, &has_bg, &has_fg); >> } >> } >> + free(last_fg); >> + free(last_bg); >> + >> } >> >> static void send_framebuffer_update(VncState *vs, int x, int y, int >> w, int h) >> @@ -1135,17 +1173,6 @@ static void set_encodings(VncState *vs, int32_t >> *encodings, size_t n_encodings) >> check_pointer_type_change(vs, kbd_mouse_is_absolute()); >> } >> >> -static int compute_nbits(unsigned int val) >> -{ >> - int n; >> - n = 0; >> - while (val != 0) { >> - n++; >> - val >>= 1; >> - } >> - return n; >> -} >> - >> static void set_pixel_format(VncState *vs, >> int bits_per_pixel, int depth, >> int big_endian_flag, int true_color_flag, >> @@ -1165,6 +1192,7 @@ static void set_pixel_format(VncState *vs, >> return; >> } >> if (bits_per_pixel == 32 && >> + bits_per_pixel == vs->depth * 8 && >> host_big_endian_flag == big_endian_flag && >> red_max == 0xff && green_max == 0xff && blue_max == 0xff && >> red_shift == 16 && green_shift == 8 && blue_shift == 0) { >> @@ -1173,6 +1201,7 @@ static void set_pixel_format(VncState *vs, >> vs->send_hextile_tile = send_hextile_tile_32; >> } else >> if (bits_per_pixel == 16 && >> + bits_per_pixel == vs->depth * 8 && >> host_big_endian_flag == big_endian_flag && >> red_max == 31 && green_max == 63 && blue_max == 31 && >> red_shift == 11 && green_shift == 5 && blue_shift == 0) { >> @@ -1181,6 +1210,7 @@ static void set_pixel_format(VncState *vs, >> vs->send_hextile_tile = send_hextile_tile_16; >> } else >> if (bits_per_pixel == 8 && >> + bits_per_pixel == vs->depth * 8 && >> red_max == 7 && green_max == 7 && blue_max == 3 && >> red_shift == 5 && green_shift == 2 && blue_shift == 0) { >> vs->depth = 1; >> @@ -1193,28 +1223,116 @@ static void set_pixel_format(VncState *vs, >> bits_per_pixel != 16 && >> bits_per_pixel != 32) >> goto fail; >> - vs->depth = 4; >> - vs->red_shift = red_shift; >> - vs->red_max = red_max; >> - vs->red_shift1 = 24 - compute_nbits(red_max); >> - vs->green_shift = green_shift; >> - vs->green_max = green_max; >> - vs->green_shift1 = 16 - compute_nbits(green_max); >> - vs->blue_shift = blue_shift; >> - vs->blue_max = blue_max; >> - vs->blue_shift1 = 8 - compute_nbits(blue_max); >> - vs->pix_bpp = bits_per_pixel / 8; >> + if (vs->depth == 4) { >> + vs->send_hextile_tile = send_hextile_tile_generic_32; >> + } else if (vs->depth == 2) { >> + vs->send_hextile_tile = send_hextile_tile_generic_16; >> + } else { >> + vs->send_hextile_tile = send_hextile_tile_generic_8; >> + } >> + >> vs->pix_big_endian = big_endian_flag; >> vs->write_pixels = vnc_write_pixels_generic; >> - vs->send_hextile_tile = send_hextile_tile_generic; >> } >> >> - vnc_dpy_resize(vs->ds, vs->ds->width, vs->ds->height); >> + vs->red_shift = red_shift; >> + vs->red_max = red_max; >> + vs->green_shift = green_shift; >> + vs->green_max = green_max; >> + vs->blue_shift = blue_shift; >> + vs->blue_max = blue_max; >> + vs->pix_bpp = bits_per_pixel / 8; >> > > I think the previous way was better. This code seems to be trying to > deal with red_max that's not in the form of 2^n-1, but it has to be in > that form according to the spec. > same as before: we are trying to handle a changing vnc server internal resolution in order to be able to support a shared buffer with the guest.
http://lists.gnu.org/archive/html/qemu-devel/2008-09/msg00437.html
CC-MAIN-2014-42
refinedweb
964
51.82
Histograms With Python Histograms With Python Histograms are extremely helpful in comparing and analyzing data. This article provides the nitty-gritty of drawing a histogram using the matplotlib library in Python. Join the DZone community and get the full member experience.Join For Free In this article, we will look into the details of drawing a histogram using the matplotlib library. The importance of a histogram is how information is grouped together so that it can be compared and analyzed. This article provides the nitty-gritty of drawing a histogram using the matplotlib library. A histogram is a powerful technique in data visualization. Drawing a histogram in Python is relatively easy. All you have to do is code for 3-4 lines of code. That looks pretty straightforward. But complexity is involved when we deal with live data for visualization. In order to draw a histogram, we need to know the following concepts clearly. They are as follows: Axis: y-axis and x-axis. Data: The data can be represented as an array. Height and width of bars. This is determined based on the analysis. The width of the bar is called bin or intervals. Title of the histogram. Color of the bar. Border color of the bar. Based on the above information, we can draw a histogram using the following code. import numpy as np import matplotlib.pyplot as plt data = [1,11,21,31,41]plt.hist([1,11,21,31,41, 51], bins=[0,10,20,30,40,50, 60], weights=[10,1,40,33,6,8], edgecolor="red") plt.show() This will draw a histogram, as shown below. The above dataset is uniformly distributed in this tutorial. The interval is considered as 10 in this example, which means that the x-axis is marked a width of 10 units. In the given data, the width is distributed so that each bar takes a width of 10. This means that we have grouped the data based on the bin size. Here, the size of the bin is taken as 10. However, you can change the size of this interval based on the requirements. The histogram is drawn from the smaller values to the highest on the x- and y-axis. In this case, the smaller value of x is 1 and highest value is 51 for y. The arguments to plot in the histogram are dataset, bin, weights, face color, and edge color. The weight in the above arguement represents the y-axis values. Basically, the dataset, bin, and weight attributes contribute in drawing a histogram. We can calculate the value of the bin dynamically, as well. The following code snippet does this job for you: bins = range(min(data), max(data) + interval, interval) interval is the width that is marked on the x-axis. In the above graph, the default color is taken as blue while drawing. In order to change the color of a histogram, we can use face color attribute in the method arguments list. The above method can be defined as below: plt.hist([1,11,21,31,41, 51], bins=[0,10,20,30,40,50, 60], weights=[10,1,0,33,6,8], facecolor='y', edgecolor="red") plt.show() It will display a histogram with a yellow color. The edge color attribute will draw a border around the bar using the color mentioned. In this case, it is r for red. There are so many attributes mentioned in the matplotlib API for your perusal. Finally, we need to give a name to the histogram by calling plt.title("Histogram for 2018"). The histogram can be saved by clicking on the Save button on the GUI. Also, the following code will save the histogram as a PNG image. plt.savefig("foo.png") plt.show() plt.close() It is important to understand the order of the commands. The savefig() method needs to be called before the show() method or else it will not save the current drawing. You can go through the matplotlib API for more data visualization support. Opinions expressed by DZone contributors are their own. {{ parent.title || parent.header.title}} {{ parent.tldr }} {{ parent.linkDescription }}{{ parent.urlSource.name }}
https://dzone.com/articles/histogram-with-python
CC-MAIN-2020-29
refinedweb
696
68.26
Wikiversity:Colloquium/archives/August 2010 Contents - 1 m:Requests for comment/Global banners - 2 WV:Custodian feedback#Ottava Rima - 3 WV:AGF - 4 The alleged discretionary "rights" of wikiversity custodians - 5 EDP updates need approval - 6 Friendly treat v hostile threat - 7 Help:OTRS - 8 Wikiversitans in London - 9 Call for bureaucrats - 10 Community input needed - 11 Importation proposal - 12 Redirect upload to Commons - 13 THE IMPORTANCE OF MATHEMATICS TO EDUCATION AND GOVERNMENT - 14 Annotated Bibliography - 15 I have source material for a new course. What do I do with it? - 16 sound shapes - phonograms - 17 feedback if you're interested :-) - 18 wikt:MediaWiki:Gadget-WiktSidebarTranslation.js - 19 Vector is coming! - 20 "Wikimedia Studies": perhaps we should have a policy or CR? - 21 Wikiversity:Embassy - 22 Subtitled movies - 23 Difficult navigation - 24 Quiz options - 25 Countervandalism channel IRC - 26 Wikiversity Signpost - 27 New encyclopedism - 28 Solution against the broken external links: back up the Internet - 29 Category casing. - 30 The Sandbox Server II -- The Sandbox strikes back - 31 Adding a maps extension - 32 Semantic Mediawiki extension - 33 Personal attacks ^ --MZMcBride 22:37, 1 August 2010 (UTC) WV:Custodian feedback#Ottava Rima I have opened a custodian feedback section regarding some recent actions of custodian Ottava Rima. Elsewhere on this page, there are links to Community Reviews. When a person has a problem with a specific custodian, Custodianship/Problems with Custodians suggests first filing a report on Custodian feedback, hoping to find community advice regarding the problem, which might possibly resolve it. That step has been skipped with some of the CRs that are open. Accordingly, I'm asking that as many members of the community as possible look at this review and comment, or watch it for a time. The feedback report cannot result in desysopping, but it's an opportunity to provide advice that may avert it (or avert problems with the filing user!). I would think that if someone comments in the Feedback, and if a Community Review is filed because we cannot resolve the dispute at this lower level, those who have commented will be notified on their Talk pages, and prior comment would be referenced in the Community Review, so this is efficient. Thanks. --Abd 22:23, 2 August 2010 (UTC) WV:AGF I've had a bit more spare time than usual over the past week or so, and have spent some of it reading a lot of things here that are frankly pretty depressing. Here's a snippet from WV:AGF, one of our core policies: - When you disagree with someone, remember that they probably believe that they are helping the project. That's an important tip, and I think everyone needs to take that more seriously. --SB_Johnny talk 00:39, 4 August 2010 (UTC) The alleged discretionary "rights" of wikiversity custodians I absolutely retain the right to use my custodian rights in accordance with what I judge to be in the interests of the project and in accordance with the views of the community. . Adambro certainly have mixed "rights" with "priviledges". But that is not my main point. In essense he believes he can competently act as police, judge and jury at the same time, and at every instance that he interests himself in, just based on what he thinks wikiversity should be, and even in the face of community opposition. I do not know if that has ever been the consensus or custom of wikiversity. However, from what I know from the very early days wikiversity custodians has never been explicitly endowed with any "rights" and they only act as functionaries who would act on behalf of the community, and hence the title "custodian", in clear distinction from wikipedia "administrator". [So please don't say we are wikimedia blablabla and it has always been like that blablabla.] The use of the custodial tools by custodians may have changed through practice, and I would like the wikiversity community to establish consensus on what discretions, in particular blocking others from participations, we allow our functionaries to make. Hillgentleman | //\\ |Talk 12:32, 5 August 2010 (UTC) - You might be better raising any concerns you have about the general use of blocks at Wikiversity talk:Blocking policy where the development of a policy regarding this can be discussed. I'm not quite sure how you conclude that I believe I can act "based on what he thinks wikiversity should be, and even in the face of community opposition" when in the quote you highlight I specifically said that I would use my custodian rights in accordance with both "what I judge to be in the interests of the project and in accordance with the views of the community". On the issue of my "custodian rights", I refer to the ability to block as a custodian right because that is what it is widely referred to as. You can find a list of the rights that members of the Custodians group have at Special:ListGroupRights. Adambro 12:46, 5 August 2010 (UTC) - These problems are not here because of missing policies. Policies may help to solve actually problems, but on the other hands, they will build up barriers around custodians, who will no longer be possible to use their brains and fantasy, to fix problems.--Juan de Vojníkov 12:49, 5 August 2010 (UTC) - I agree that would be nice to know. I'd also like to know in general what discretion anyone has to make any independent decisions. Is the impact that a decision has part of what discretion people are willing to give anyone? I believe that is usually true on other wikimedia projects. A decision to rename a resource usually only impacts the people working on the resource and the people reading the resource for example. If most people do the things that Custodians block for that would have more of a direct impact on the Wikiversity community, than if only a few people do the things that Custodians block for. Is having a direct impact a consideration as well? I think where people stand on issues that have no direct impact on them seems to vary. -- darklama 13:05, 5 August 2010 (UTC) - I retain the right to say hi, Hillgentleman, how have you been? Ottava Rima (talk) 13:49, 5 August 2010 (UTC) - Hi, Ottava! Been busy. Hillgentleman | //\\ |Talk 20:44, 6 August 2010 (UTC) - Are you going to be sticking around? Say yes. I'd like to see you involved in the community again (and I don't mean the drama stuff). It would be nice to drag some people back. Ottava Rima (talk) 20:53, 6 August 2010 (UTC) - Hillgentleman wrote, "I would like the wikiversity community to establish consensus on what discretions, in particular blocking others from participations, we allow our functionaries to make". According to Wikiversity policy, Custodians "can protect, delete and restore pages as well as block users from editing as prescribed by policy and community consensus." There are four policies that prescribe how the block tool can be used. As is being documented at the community review, a few rogue sysops have claimed the right to misuse the block tool. One of the proposals arising from the community review is for an official policy on blocking, a policy that will protect the Wikiversity community from further misuse of the block tool by sysops who ignore the existing policy. Such a policy on blocking was developed over the past four years. Darklama has attempted to hijack the proposed policy on blocking. Darklama, as one of the sysops who has misused the block tool, has a conflict of interest and should not be altering the proposed policy on blocking. The Wikiversity community needs to protect itself from further misuse of custodial tools by rogue sysops. --JWSchmidt 18:49, 5 August 2010 (UTC) - You've said similar to the above, that "There are four policies that prescribe how the block tool can be used", a few times recently. Could you just confirm which four policies you are referring to? Adambro 18:52, 5 August 2010 (UTC) - Adambro asked, "which four policies you are referring to?" The policy on custodianship, the policy on bots, the civility policy and the Research policy. --JWSchmidt 19:02, 5 August 2010 (UTC) - Slightly confused. You link to betawikiversity:Wikiversity:Review board/En but referred to it as "Research policy". Did you mean to link to betawikiversity:Wikiversity:Research guidelines/En? In which case, that doesn't seem to be a policy or mention blocking. I note Wikiversity:Bots only mention of blocking is "Bots running anonymously may be blocked". Would you not agree therefore that we don't really have much policy on how blocks should be used? Is it not the case that the only mention of blocking in our policies of any real substance is the small section of Wikiversity:Custodianship? Adambro 19:19, 5 August 2010 (UTC) - Adambro, an actual Custodian would have become familiar with Wikiversity policy while being mentored. I see no evidence that Adambro was mentored as a probationary custodian. Adambro was never listed at Wikiversity:Probationary custodians. During the community discussion of his candidacy for full custodianship, Adambro refused to answer important questions about his participation at Wikiversity, including his policy violations, which continue. Adambro says, "we don't really have much policy on how blocks should be used", but how blocks can be used is explicitly and clearly described in Wikiversity policy. The only problem is a few rogue sysops who ignore Wikiversity policy. The Wikiversity research policy exists on three related pages. --JWSchmidt 21:14, 5 August 2010 (UTC) - You again suggest that how blocks can be used is "explicitly and clearly described in Wikiversity policy" despite me showing that simply isn't the case. Of the four policies you've suggested define how blocks can be used, the Research guidelines on beta doesn't seem to even discuss blocks, the bots policy has one sentence and the civility policy also says little about how blocks should be used. All we have of any substance is the section of the custodianship policy but even that is only one paragraph which provides little guidance as to how blocks can and can't be used on Wikiversity. The first sentence simply says that custodians can block users, IP address or ranges. The second again just states a fact, that block can be temporary or permanent. The third section just describes how blocks are most commonly used, "in response to obvious and repeated vandalism". The fourth just requires a reason to be given in the block log. The fifth sentence again just states a fact about the blocking feature and the final sentence just provides a link to the proposed policy. - What I conclude from all of that is that it isn't accurate to say that "how blocks can be used is explicitly and clearly described in Wikiversity policy", which seems to be supported by your enthusiasm for Wikiversity:Blocking policy to be developed. I note that rather than responding to my points in my previous comment about this you just restated your concerns about how I was made a custodian. What you say may or may not be true but it doesn't help answer the question as to whether ow blocks can be used is actually "explicitly and clearly described in Wikiversity policy". Here's another opportunity for you. You can respond to the points I've raised and demonstrate that I am incorrect to conclude that Wikiversity doesn't have much which says how blocks should be used. Alternatively you could not bother and just restate your opinion that I've abused my custodian rights or whatever. Adambro 22:11, 5 August 2010 (UTC) - Adambro says, You again suggest that how blocks can be used is "explicitly and clearly described in Wikiversity policy" despite me showing that simply isn't the case, but I'm not "suggesting" anything. How blocks can be used is explicitly and clearly described in Wikiversity policy, as anyone can verify by reading the policies, starting with: "A Wikiversity custodian is an experienced and trusted user who can protect, delete and restore pages as well as block users from editing as prescribed by policy and community consensus." "the Research guidelines on beta doesn't seem to even discuss blocks" <-- The research policy says, "...Custodians take action to delete pages or block editors who refuse to follow the research guidelines". "responding to my points" <-- Adambro, I have responded, but you you don't seem to want to follow policy that clearly says how the block tool can be used at Wikiversity. The failure of a few people to follow existing Wikiversity policy is a matter under community review and the reason why Wikiversity needs an official policy on blocking that will protect the community from people who misuse the block tool. Adambro, if there are remaining "points" please make a numbered list so that we can discuss them. --JWSchmidt 06:27, 6 August 2010 (UTC) - Perhaps I should keep things simple and only ask one thing at a time. You've said 'The research policy says, "...Custodians take action to delete pages or block editors who refuse to follow the research guidelines"'. Where in betawikiversity:Wikiversity:Research guidelines/En does it say that? You seem to be referring to Wikiversity:Review board/En (or betawikiversity:Wikiversity:Review board/En) again. Is Wikiversity:Review board an official policy? Adambro 09:12, 6 August 2010 (UTC) - As I said before, the research policy exists on three related pages. --JWSchmidt 21:33, 6 August 2010 (UTC) Adambro has succinctly stated the common-law rule for administrators. In the absence of specific and clear policy to the contrary, this is the most that we can expect from any custodian, it is actually a lesser standard that the custodian would, for example, agree to only act within the clear confines of explicit policy. Wikiversity could establish this latter standard, but I'd highly recommend against it. "Police" are not "judges," they exercise executive power, which only allows temporary, ad-hoc "judgment," pending a deeper process where the community (or government, or university administrator, for, say, campus police, up to and including courts) reviews the actions. If Adambro regularly abuses discretion, that should be specifically addressed, not the principle of discretion, which is essential. Until this community gives much clearer guidance to Adambro, he cannot be deeply faulted for his actions as long as they are not clearly contrary to policy. If someone believes that a specific action is problematic, or a set of specific actions, and this cannot be resolved by direct discussion, that should be taken to a report on Wikiversity:Custodian feedback for the community to advise the custodian and the person(s) with a complaint. Instead, we have a habit of ineffective complaint through useless discussion, here and there, which just wastes everyone's time while accomplishing nothing. --Abd 18:30, 5 August 2010 (UTC) - "the common-law rule for administrators" <-- Abd, what does "the common-law rule for administrators" mean and how is it relevant to Wikiversity? "agree to only act within the clear confines of explicit policy" <-- Wikiversity policy says, Custodians "can protect, delete and restore pages as well as block users from editing as prescribed by policy and community consensus." That means, in particular, that any block not prescribed by policy must be made by community consensus. "campus police" <-- Abd, why are police relevant to Wikiversity? Custodians clean up vandalism. "the principle of discretion" <-- Abd, what "principle" are you talking about and how is it relevant to Wikiversity? "he cannot be deeply faulted for his actions" <-- Policy violations by sysops and misuse of IRC chat channel operator tools are among the problematic actions that are under community review. a report on "Wikiversity:Custodian feedback" <-- Such a "report" already exists. --JWSchmidt 19:23, 5 August 2010 (UTC) - what does "the common-law rule for administrators" mean and how is it relevant to Wikiversity? Good question, thanks. "Common law" refers to what is known and accepted by precedent and shared understanding, in the absence of specific law, statutory law in legal terms. Here it refers to what someone who has administrative experience, and users with general wiki experience, will expect as a norm, quite aside from explicit policies. Legally, policy would trump common law, except that wikis in general also follow some form or other of what is called on Wikipedia Ignore All Rules, which in public common law is called Public Policy. Means the same thing. So, unless you give custodians here guidance through establishing consensus on contrary policy, they will generally follow, providing they have sufficient experience to understand it, "common law." Inexperienced custodians may not, and even experienced custodians, I found, on Wikipedia, sometimes didn't have a clue about what makes wikis really work, the large body of shared experience. Violations of common law will often outrage people, who won't then have a policy to point to prohibiting the action. They just "know" it's wrong. How do they know that? Good policy and guidelines will stay close to common law, or they will confuse people. Deviations from common law should be very well justified by the specific conditions of a wiki. --Abd 20:11, 5 August 2010 (UTC) - Common law?? Custodians do a lot of things, some of which I don't like, big and small. Sometimes I say so and sometimes I don't. So what? It hardly makes any difference since the custodians can always get their way, since most contentious issues have, by defintion, supporters and detractors. If THAT is how wikiversity establishes its precedents, it gives far too much leeway for the custodians. They can easily get away with whatever they want. I have seen custodians allowing themselves more and more discretions through the years. And now I want to say enough is enough. A Wikiversity "Admin" who has a habit of equating what he thinks is the community consensus to the actual consensus can simply cite "my right and my discretion!" to stuff whatever he likes down the throat on the community. Wikiversity isn't supposed to be a place where a caste of admins have an advantage. You are supposed to use persuation, not your drawn tools. Hillgentleman | //\\ |Talk 00:03, 6 August 2010 (UTC) - So, unless you give custodians here guidance through establishing consensus on contrary policy, they will generally follow, providing they have sufficient experience to understand it, "common law." <-- Abd, I don't understand what you are trying to say. Destructive practices from other websites are not relevant to Wikiversity and its Mission. I agree that some sysops were never mentored and they seem not to understand/respect Wikiversity policy. Any sysop who does not understand and respect Wikiversity policy cannot be trusted and should not be a Custodian. Hillgentleman is correct. Custodians are empowered to do what is described in policy. A few sysops who ignore Wikiversity policy and try to give themselves additional powers are disrupting the Wikiversity community. --JWSchmidt 06:41, 6 August 2010 (UTC) - See also: "The voice of one crying in the wilderness" -- KYPark [T] 03:35, 7 August 2010 (UTC) EDP updates need approval Hello, I've proposed two EDP updates here. Please comment and approve. Geoff Plourde 21:49, 7 August 2010 (UTC) Friendly treat v hostile threat - Synopsis - A statistics, for your reference, shows the number of edits, made by editors and custodians, counting more than a hundred in the last 30 days as of 8 August 2010. The number includes a more or less portion of Talks, which in turn includes more or less portions of friendly treats and hostile threats, perhaps depending on the user's temperament, civility, or the respect and love of the community. - -- KYPark [T] 02:28, 9 August 2010 (UTC) Help:OTRS Hi could you proofread this text and optionally place comments, please.--Juan de Vojníkov 11:13, 8 August 2010 (UTC) - Looks good, are we going to add this to the no thanks template? Geoff Plourde 19:57, 8 August 2010 (UTC) Well, it should be there.--Juan de Vojníkov 21:26, 8 August 2010 (UTC) - It can go also to db-copyvio but the previous possition is much important.--Juan de Vojníkov 21:28, 9 August 2010 (UTC) Wikiversitans in London I would be interested to hear whether there are any other wikiversitans in London. I regularly go to the London wikipedia meetups, and perhaps we could meet up there too! Harrypotter 09:05, 10 August 2010 (UTC) Call for bureaucrats Who would you like to see as a bureaucrat?. See also: Current bureaucrats. Current custodians. -- Jtneill - Talk - c 02:11, 12 August 2010 (UTC) - is it just me, or is the custodian list really weird / wrong? Privatemusings 06:25, 12 August 2010 (UTC) - I am a technical guru, and have fixed it. (I think) :-) Privatemusings 06:27, 12 August 2010 (UTC) Community input needed A Wikiversity community member is being prevented from participating at Wikiversity. See Wikiversity:Request custodian action#Ethical Accountability.2C aka Thekohser.2C request unblock. --JWSchmidt 17:19, 12 August 2010 (UTC) - What is your definition of "community member"? I ask because Thekohser has 61 edits whereas KillerChihuahua and Salmon of Doubt had over 100, two people you have stated were not really part of the community and therefore had no right to express opinions about Moulton's ban. Ottava Rima (talk) 18:17, 12 August 2010 (UTC) - Ottava Rima, please provide a link to where I said they "had no right to express opinions about Moulton's ban". If I was forced to define "community member", my definition would involve the idea that a person is editing in support of the Wikiversity Mission. --JWSchmidt 18:24, 12 August 2010 (UTC) - I have yet to see any of that. And JWS, you challenged KillerChihuahua's statements and Salmon of Doubt's statements as being from outsiders quite often. Or are you going to say that they were part of our community and therefore their votes on Moulton's ban were correct? Ottava Rima (talk) 19:47, 12 August 2010 (UTC) - When Wikipedians make decisions in secret, off-wiki, decisions that disrupt the Wikiversity community and deflect Wikiversity from its mission, and when they edit at Wikiversity so as to impose those decisions on the Wikiversity community then it is fair to characterize them as acting as outsiders. "votes on Moulton's ban were correct?" <-- The decision to ban Moulton was made in secret, off-wiki. I don't know of any votes on Moulton's ban that were "correct", certainly there were none that were announced to the community as being a vote to community ban Moulton. There have been quite a few calls for bans at Wikiversity and none of them were justified, thus they were all serious violations of Wikiversity policy. There were some show trials that do not constitute a fair and just treatment of Moulton. "you challenged KillerChihuahua's statements and Salmon of Doubt's statements" <-- I have challenged some of their statements. Can you link to an edit by me where I challenged one of their statements on the basis of them being outsiders? --JWSchmidt 20:07, 12 August 2010 (UTC) - In the document that I keep citing that was used to verify why you needed to be desysopped, he section on IRC abuse included you being unkind to Salmon of Doubt and KC in IRC. Perhaps they did things in secret because you were using anything public to cause them discomfort? Ever think that perhaps you drove people to such? Ottava Rima (talk) 01:04, 13 August 2010 (UTC) - Ottava Rima, the show trial document that you keep linking could not be the basis for a policy-violating emergency desysop when no emergency existed. "unkind to Salmon of Doubt and KC in IRC" <-- Exactly what does "unkind" mean? When they made unsubstantiated claims about Moulton I asked for evidence to support those claims? I objected to a disruptive sockpuppet from Wikipedia coming to Wikiversity on a self-declared mission to get a Wikiversity community member banned? I objected to the unauthorized use of a bot at Wikiversity? If my questioning of their actions caused them "discomfort" the source of their discomfort was their own actions and their inability to explain how their actions supported the Mission of Wikiversity. "drove people to such" <-- A group of bullies decided to violate Wikipedia's BLP policy and use Wikipedia biographical articles in an inept and misguided effort to paint some scientists as being unscientific. When they were caught violating Wikipedia policy, they blocked Moulton from editing. Not satisfied with that, they resorted to vile online harassment which resulted in it being revealed that one of the policy violating Wikipedians was using corporate computing resources to violate Wikiversity policies and carry out online harassment. In an attempt to cover all that up, there was an orchestrated effort to ban Moulton from participation at Wikiversity. It is a truly sad saga, and a huge embarrassment for the Wikimedia Foundation, made even sadder by a few misguided Wikimedia Functionaries who continue to defend the policy-violating Wikipedians who harassed Moulton. --JWSchmidt 05:02, 13 August 2010 (UTC) Gad this spins out quickly. Looks to me like, far from preventing the member from editing, a possible or even probable result is that the impediments to editing will be lifted, in short order. I didn't notify the community here of that Request Custodian Action because, really, I was just looking for a single neutral custodian to look at this and make an ad hoc decision, trying to keep it simple, avoiding Community Review -- maybe -- unless someone wanted to push it. But there is now a poll going on there, it's true. And various fireworks. For example, SB_Johnny's back! As a custodian and bureaucrat again.... --Abd 01:32, 13 August 2010 (UTC) Importation proposal Hello, I find {{Tabbed portal}} a little bit obsolete comparatively to this template, which I've just adapted on the Wikiversité in French today. It would necessitate a Mediawiki:Common.js + Mediawiki:Common.css modification. JackPotte 04:59, 13 August 2010 (UTC) vote for my bot Redirect upload to Commons I would like to propose that Wikiversity will redirect file upload to Wikimedia Commons. There are some pros and contras of course, but I think pros have majority: - Advantages: - Wikimedia Commons is a server for all media used on WMF projects, than files can be used everywhere. There are many useful files on wv, which may be useful also on other projects, but there is now personnel to move them to Commons (Economicaly: why we should do the work of someone else?) - We will have less work, less controlling, less work with moving files to Commons (the true is no one actually do this). - en.wv doesn't have different license policies, so what can be here can be also at Commons. - en.wv doesn't have extended upload, it means file types which differ from file types of Commons. - Disadvantages: - there might be problems with categorizing of some works to Commons, but why not set there special categories such as "Wikiversity works". - Users may have problems moving to different environment. Finally we can prohibit the upload to en.wv at all and redirect everybody directly to Commons. Than in the future if license policy or file types will be extended, we may allow to upload just these specific file types, which doesn't fit to Commons.--Juan de Vojníkov 20:33, 4 August 2010 (UTC) - I agree that we should be working towards directing uploaders to Commons for all media files and licensees which are accepted there. I think we still need to allow local uploads for things like screenshots. I have been doing a bit of work on and off for a while now with the idea of at some point proposing change the upload form. Wikiversity:Upload is part of this work. It is the development of a upload page to direct uploaders to the most appropriate form. Adambro 20:40, 4 August 2010 (UTC) - Yes, as we just talked on IRC. Fair use works can go to Commons, so they probably should stay here. Than just the redirection.--Juan de Vojníkov 20:56, 4 August 2010 (UTC) - Thats what we are talking John, that Fair use works will stay here, but other will go to Commons. It can be done like Adrignola say or there could still exist local upload but kind of hidden. On the end, Fair use may look like open for contributers, but it is not open for other people, who would like to share. Is it possible to use fair use works in Australia, UK, SA or Germany?--Juan de Vojníkov 06:01, 5 August 2010 (UTC) - Wikibooks uses a special group called "uploaders" that administrators can add/remove to allow people to upload fair use files locally. The "upload file" link has been changed to direct to Commons, but people can still visit Special:Upload if they are a member of the uploaders group (admins don't have to add themselves to that group). Keep in mind that fair use files are not permitted at Commons. The system used at Wikibooks directs uploads to Commons while not disabling uploads entirely. Finally, if you look at the fine-grained permissions at Special:ListGroupRights, you'll see that even under that system, people who aren't members of the uploaders group can still overwrite files they've uploaded. It will be a long process to push all the existing files to Commons, but that system takes the burden of file upload license checking off administrators and keeps files from being limited to Wikiversity's use only (which makes cross-project cooperation difficult). Adrignola 00:44, 5 August 2010 (UTC) - Yes, the other option is not to allow Fair use, because our mission is not to collect the higher number of files at all costs but offer the free content. So how many English speaking countries recognize Fair use? All?--Juan de Vojníkov 06:01, 5 August 2010 (UTC) - From Fair dealing, it would appear that there are at least seven: Australia, Canada, New Zealand, Singapore, South Africa, United Kingdom, and United States. I also replied to your other question on my talk page. Adrignola 17:15, 5 August 2010 (UTC) I want to inject support for JWSchmidt here; I have had recent dealings with the WP where I extended myself as a "fisherman" for images. I cannot possibly tell you how much grief I went through. WP seems to be the opposite of free when it comes to sharing information; if you read all the copyright documentation there is now way to describe WP except as a supporter of copyright law. They seem to create their own, needlessly. WP is so opposed to fair use that pictures of buildings are illegal. Fair use is the most important tool for education there is, because all information is built on existing information--all work is derivative of other work! Here is writing on the topic from the WP that attempts to loosen things there. As an educational entity and a genuine wiki, we need to head the opposite direction with respect to uploads. Also, see WP as an educational entity.--John Bessatalk 13:01, 14 August 2010 (UTC) THE IMPORTANCE OF MATHEMATICS TO EDUCATION AND GOVERNMENT --41.138.169.70 12:58, 14 August 2010 (UTC) - Yeah?--Juan de Vojníkov 09:36, 15 August 2010 (UTC) Annotated Bibliography I am reading IF Stone's Trial of Socrates to understand the open-education environment of Athens (look in the upper left corner), and the nature of the Socratic circle especially with respect to Aristotle, inventor of the Scientific Method. I will create an annotated bibliography that includes my reactions creating what will be the first of what I call "mediated citations." I hope to attract opinions' of others, within the rules of MCs, that will very likely say I am all wrong! (pervious text)--John Bessatalk 15:35, 7 August 2010 (UTC) - I read, or re-read, the descriptive sections about Athens, the Socratics, and Greece that are important to me (I find trials boring) and annotated them on post-its. As you may have seen below, I am starting my counseling masters, so I have to ration my time between requirements and interest. - As an aside, the mediation concept is becoming very useful; I developed the term "mediated glandular responses" to describe, for instance, the feeling a gambler is looking for when winning.--John Bessatalk 12:53, 13 August 2010 (UTC) - Getting closer. I have these two examples of conversations that I hope will be typical of discussions: [2], [3]. Since I will have to also start reading psych texts now, I will want to create the same types of bibliographies for the texts. I think that writing per text is somehow more "open," as the text book industry is notorious for "price fixing."--John Bessatalk 14:19, 15 August 2010 (UTC) I have source material for a new course. What do I do with it? I have translated some of Maimonides' work and I think it would make great source material for a survey course in Judaica. Is it something the Wikiversity could use? Is this the right place for it? --Rebele 14:19, 16 August 2010 (UTC) - Is Miamonides' original untranslated work in the public domain or released under the CC-BY-SA license? -- darklama 14:27, 16 August 2010 (UTC) - Maimonides died over 800 years ago. His works are PD. en.wikisource.org tends to have PD collections of works. If they don't want a translation, then you can post here and we can figure out how to accomodate you. Ottava Rima (talk) 15:15, 16 August 2010 (UTC) sound shapes - phonograms --155.150.223.150 14:47, 17 August 2010 (UTC)Have a list US language sounds and their graphic representation like ough_ow - oo - uf - off - all - bough dough through rough caugh bought. A set of cards with the symbol on one side and the sound words on the other side. Munson short hand is similar but with shapes. There are 26 letters and 76 phonograms, very confusing but very flexiable, US language is not like NORWAY. NORWAY HAS GOVERNMENT CONTROL OF SPELLING.14:47, 17 August 2010 (UTC)14:47, 17 August 2010 (UTC)~~ feedback if you're interested :-) any and all feedback on this, a recent post of mine, is most welcome. cheers, Privatemusings 04:13, 19 August 2010 (UTC):11, 19 August 2010 (UTC) --Justin carlo masangya 12:21, 20 August 2010 (UTC) White blood cells (WBCs), or leukocytes (also spelled "leucocytes"), are cells of the immune system involved in --Justin carlo masangya 12:45, 20 August 2010 (UTC)justin carlo masangya meaning deforestation-is the clearance of forest by logging[popularity known as slash and burn.] effects the effects are: 1.erosion of soil 2.disruption of the water cycle 3.loss of biodiversity 4.flooding 5.drought 6.climate change Vector is coming! Guys, Wikimedia Usability has set August 25th as the date when Vector will become the default skin for all other projects, including Wikiversity. Geoff Plourde 07:05, 8 August 2010 (UTC) - Thanks Geoff. Ill be prepared that day, to switch back everywhere.--Juan de Vojníkov 08:31, 8 August 2010 (UTC) - Just a note -- it's the target date, and may move. Just clarifying. Historybuff 14:40, 9 August 2010 (UTC) "Wikimedia Studies": perhaps we should have a policy or CR? Original research projects related to Wikipedia, the Wikimedia Foundation, and so on have proven to be rather problematic for Wikiversity in the past. Should we simply set these studies outside of our scope? There are problems with doing so, of course, since this would in fact be censorship. However, a blanket ban on the subject would be easier to digest than a ban on only those projects that are critical, or bans that only apply to certain people. Thoughts? Comments? Angry rants at the very thought? --SB_Johnny talk 16:33, 16 August 2010 (UTC) - Jimbo himself when asked said research about Wikimedia projects are fine. I see no need to put a blanket ban on the subject. I'm not aware of any current problems, are you? -- darklama 16:47, 16 August 2010 (UTC) - How have research projects related to Wikipedia and the Wikimedia Foundation been problematic? What has been problematic are people who disrupt such projects. Rather than ban useful research, I favor putting in place at Wikiversity some protections for scholarly research projects and researchers. --JWSchmidt 16:54, 16 August 2010 (UTC) - I agree, John: the projects weren't the issue, but the reaction to them did a lot of damage. The point is that any such effort is going to attract the same reaction. Projects that are critical will attract the attention of those who want to defend Wikimedia from criticism, and likewise projects that are not critical will attract the attention of those who feel Wikimedia deserves some criticism. - I don't, of course, think this approach would in any way be good for the sort of academic freedom that WV should ideally stand for and encourage. It might be the only way to survive. --SB_Johnny talk 17:14, 16 August 2010 (UTC) - Moulton commented here, it was properly reverted as being by a blocked user, and I restored it with redaction and a note. This was removed. To respond, I agree with Moulton that the study of wiki ethics could do much to avoid future problems, by delineating existing problems, which may lead to suggestions for improvement. We need be particularly careful, here, to avoid the assignment of blame. In my view, most problems on the wikis are due to defective structure; this is at variance with what seems to be a popular, easily-assumed view that ascribes problems to problem users. --Abd 19:18, 19 August 2010 (UTC) - I think Wikiversity needs to recognize that any research can attract all kinds of people, and some may seek to have their views dominate discussion and research. I think Wikiversity needs people that know how to quickly bring about a cease fire when strong views clash. If Wikiversity needs a policy it might be that dominating discussion and research and seeking "victory" harms Wikiversity, and those are acceptable reasons to block when people don't stop after being asked to cease fire and come to a truce. -- darklama 17:38, 16 August 2010 (UTC) - It might seem contradictory, but the solution may be (at first) more blocks rather than fewer. And even more than more blocks, more warnings for incivility or revert warring or other disruption, followed by short blocks upon disregard. A short block is like a sergeant-at-arms at a meeting asking a disruptive member to leave a meeting. It is not a ban, and the member can come back when there is no more immediate risk of disruption. It is purely procedural, and it's understood that some people can be hot-headed and that the community can and should restrain this. But members aren't punished for being hot-headed, rather the disruption is directly addressed. Temporary exclusion is not punishment and should never be presented as such. A failure to understand this is behind a great deal of tenacious disruption on Wikipedia and here. Basically, if anyone thinks a person is causing disruption, they can and should warn the person. If the person disregards this, any custodian can look and short-block, if warranted. Ideally, the one warning should not be from someone involved in a dispute, and the reason is that people will tend to discount warnings from others who are involved,nor should the custodian be involved, except in an emergency as I've elsewhere described. But it's still okay for someone involved to warn; a reviewing custodian can decide whether or not to proceed with a block or confirm the warning -- and then block for continued disregard beyond that. The point is to gain voluntary compliance, and not to allow the user to believe that they are being excluded. - Generally, a user who has violated agreements many times should be unblocked promptly upon assurances that the user will not continue the blockworthy behavior. The blocking custodian should always consider this, and can even set conditions for prompt unblock, but should not coerce; humiliating conditions and unclear conditions should be avoided, they cause trouble. If the blocking custodian does not wish to unblock, that custodian should never decline an unblock request. If there is no other custodian available, the blocking custodian should simply leave it in place. Blocks should always be applied with utmost civility and with support for acceptable behavior. Wikipedia deprecated "cool-down blocks." I suppose the reason is that, as a block reason, it represents mind-reading and, indeed, that could be offensive. But, in fact, a properly applied block will accomplish cool-down. "Okay, I was out of line there, thanks for considering unblocking me, I'll try not to repeat that." Most adults are capable of that kind of admission, and they will do so sincerely. It's not even any kind of moral offense to be "out of line." We get angry for good reasons, often. But we, if we are sane, also understand that if we start shouting at a judge in a court, for example, we'll be restrained. Only the truly crazy will take this as a personal insult. - A better understanding of block policy would go a long way. We need better documentation, to guide custodians, and also to assure users that they will get fair treatment, if the policy is followed. We should never allow the appearance to arise that a single custodian is "in charge" of an editor's behavior, unless the editor has accepted that arrangement. Even my young children know, instinctively, to resist this kind of control! I'm in charge of what I will permit and what I will prevent, as the parent, but they are always in charge of their own behavior, and if I don't respect that, I'm failing as a parent. - There are deeper solutions that are possible, using bots, allowing for the flexibility of temporary narrow or broad "topic bans" that would be bot-enforced (by automatic reversion), but that's down the road. For now, seeking and encouraging voluntary compliance, with stronger response as needed, with judicious use of the block tool, should be adequate. --Abd 20:24, 16 August 2010 (UTC) - A ban is not needed. What is needed are guidelines accepted by consensus that handle how to avoid unnecessary disruption when individual users or the WMF are criticized or appear to be so. It is easy for such study projects to become wheels on which to grind axes. Now, need these guidelines be developed in advance? No, except for one, which we should write. When WV content becomes controversial because of "cross-wiki issues" -- or even local issues -- we need to have procedures in place to address this and prevent disruption. In the current project started by Privatemusings, I called for work to "come to a screeching halt" when objections appeared, until the objections themselves are addressed and consensus found. Not "cancelled." Not "deleted," except that ordinary content deletion, still in history, should be fine if needed temporarily, while it's under discussion. We should not allow any user to barge ahead with insisting on controversial content. If there is "outing" perhaps revision deletion may be needed, and even short blocks if revision deletion is needed. Otherwise, we need what Moulton calls a social contract, an agreement that provides for means to resolve disputes, and the default situation is blank, i.e., no content. When someone objects to content that has not been established by consensus, it should be blanked or deleted, by default. Then it can be discussed, whether or not to allow it, with the community assisting to keep the discussions civil and to the point. The legitimate needs of "outsiders" must be respected, but also the academic freedom of this community. We need to do both. And it takes time. - The key is to establish process that seeks consensus, not "victory" for one side or another. --Abd 17:09, 16 August 2010 (UTC) - What about a temporary ban. To wait until Wikiversity has grown in learning communities on other issues than Wikimedia? If there is a very large community of users, mostly occupied with topics that have nothing to do with studying Wikimedia, than fights on these kind projects for studying Wikimedia will have far less influence on the whole Wikiversity community.Daanschr 17:27, 16 August 2010 (UTC) - We have a current project which is not causing any disruption. And more are being opened, with no sign of disruption. Why fix it if it isn't broken? See Response_testing/WMF_Projects. Note that if someone objects to some work there, there will indeed be a kind of "temporary ban." I.e., an informal ban will arise upon complaint, enforced by users who have both academic freedom and avoidance of unnecessary disruption as goals, and who will seek consensus before barging ahead. It's really like just about any wiki decision. --Abd 20:30, 16 August 2010 (UTC) < I think CR is 'community review', right? - I suppose that's actually what's happening here - I hope to be able to carry on real slow with the Response testing project (which, following a suggestion from sj, has a 'wmf' section) - I don't think it's really creating any trouble at the mo - and I have a feeling that the root causes of the broo ha ha's are both interesting and important as subjects to discuss, learn about, analyse etc. There's a hint of an intimation in sbj's post that perhaps the closure of wv remains on the table somewhere - personally I'd raise an eyebrow were wmf to shut the project down on the basis that it became critical - but sure, things like sue's blog (she's the executive director of the wmf - so the boss on the staff side) could be read as warnings to pull some heads in. Is wikiversity really seen as harbouring people who" (Sue's paraphrase of Gary Marx's description of how people attack social / political movements here) The idea that the above could in any way be aimed at, well, me I suppose, I find both amusing and troubling - were it shown to be the case that those in ultimate control of this project are forming that view, I think shutting down wv would probably be a good thing - there probably wouldn't be much point in it, I guess? Privatemusings 00:35, 17 August 2010 (UTC - PM, that comment of Sue Gardner was not at all aimed at you. Sue was writing much more generally. However, a shallow understanding of Marx could lead her to think of Wikipedia vs. The Enemies, which would be a serious mistake. --Abd 15:24, 17 August 2010 (UTC) - w:Wikipedia:Wikipedia_Signpost/2010-08-16/Spam_attacks describes a recent "research" aka vandalism project on Wikipedia. Any research which harms or otherwise disrupts other WMF projects shouldn't be permitted here. More generally, we shouldn't have to ban all research of other WMF projects, we just shouldn't pretend that there aren't certain limitations and issues to consider due to Wikiversity being a WMF project. As far as I can tell, the WMF research projects here that have been controversial, such as trying to research into past conflicts on Wikipedia, have failed to recognise some of these issues. Adambro 15:58, 17 August 2010 (UTC) - There is a possible misunderstanding here, based on there being two kinds of "research." There is experimental research, which is something done by someone who undertakes "response testing," for example, and there is research as in the study of evidence already available. The research Adambro mentions (thanks for the link, by the way, fascinating story) was, however, not organized on-wiki, nor was prior response testing research, to my knowledge. In other words, these were not "research projects here." (But they may have been headed in that direction, hence were properly interrupted, given the lack of guidelines and supervision.) - I'll note that the researcher involved in the Signpost report was unblocked per an agreement with ArbComm that did not prohibit further research; rather, it contained it and set up private review processes to precede future projects. My own opinion is that research like that which was done is actually very important, even though it involved "vandalizing" Wikipedia for a very short time. (With fake spam designed to test real user response.) The intention underlying the research was to reduce vandalism and spam and to reduce its persistence. Actions should be judged by intention, as well as by immediate effect, sometimes a negative immediate effect can have a benefit, long-term, that far outweighs the immediate effect. There are ways to address the problem of consent to participation in research involving human beings, and I hope that the WMF obtains some real expert advice in this area. - I do agree with Adambro's conclusion, however. First of all, experimental research involving human beings requires fairly complex ethical guidelines, just as response testing in business is best done under ethical restraints. Study of particular past conflicts, however, is normally research of the second kind, without the same ethical considerations. However, because such research can create what are effectively partial biographies of human beings, there are still serious requirements to respect, and these are guidelines that we need to develop. These should be developed and applied wherever such study takes place, whether here at a WMF project, or on, say, the alternative netknowledge wiki, independently controlled. If undue interference develops here, I assume that the project would move elsewhere. But I don't expect that outcome, except for minor subprojects, perhaps. I expect cooperation between the "academic institutions," which is the norm. --Abd 17:21, 17 August 2010 (UTC) - @ The signpost Adam pointed reminds us of the reality, loud and clear: You can get away with doing the same harmful things or worse more easily if you are powerful enough, like being a developer or a researcher working on a project in a computer science department in a major university. Guys, if you want to make a splash, make a big one, and don't talk about your plan if it isn't mature. Hillgentleman | //\\ |Talk 22:58, 17 August 2010 (UTC) The Arbitration Committee has reviewed your block and the information you have submitted privately, and is prepared to unblock you conditionally. - I think the moral of the story is if you've contributed things of great worth in the past, you are more likely to be forgiven and allowed to get away with doing something harmful. I think waiting until plans are mature to discuss them discourages early collaboration. I think people just need to be absolutely clear that plans aren't final yet and need to indicate when a plan is final when discussing plans. I think people should avoid acting on plans before plans are clearly finalized though. -- darklama 23:26, 17 August 2010 (UTC) - I think HG's point is valid, though. The research project in question was not planned with the approval of ArbComm or the Foundation. It was outside. It did minor harm, short-term. The breaching experiment with a set of unwatched BLPs, done by Greg, was by the cooperation of a WP administrator and the blocked Greg Kohs. The experiment did much less harm, probably, than the university experiment. Yet the admin was desysopped, and that experiment might have been a factor in the global lock, as I recall (what was the timing? I forget). The conclusion and resolution of ArbComm in the university case was one that I agree with. But there is, in fact, a double standard being applied here, and it probably has to do with Greg being a prominent critic. And that sucks, in short. Nevertheless, this is really moot here. As far as I can see we are not going to allow Wikiversity to be a base for organizing "breaching experiments." Period. We might study those, however, sometimes, afterwards. Carefully. --Abd 23:45, 17 August 2010 (UTC) - For simplicity what I'm saying is: Do get permission before acting. Don't wait until a proposal is mature to discuss a proposed experiment. To use the specific experiment being discussed as an example, the researcher should of been able to use Wikiversity to develop there plan and to discuss the plan with other people, and than once people at Wikiversity felt the proposal was mature, the researcher should of sought permission from the Wikimedia Foundation, Wikipedia's ArbCom, or Wikimedia Research Committee, pointed to the development here, and answered any questions that WMF, ArbCom, or the Research Committee had, and ensured any actions or experiment carried out was within the limits that WMF, ArbCom, or the Research Committee permitted or not done it at all if they opposed the proposed research experiment entirely. -- darklama 14:25, 19 August 2010 (UTC) - "Experiment" is a loaded term here. I agree with Darklama, generally, but I wrote the response before carefully reading all of it! So this is an independent take: We can say that experiments involving testing the responses of human beings raise ethical questions that must be addressed before proceeding with actual experiment. If someone proposes such an experiment here, discussion is necessary before action, and, in fact, that discussion should eventually be brought to the attention of those that might be affected. If, for example, some response testing on en.WP were proposed, users here should be discouraged from acting to run the experiment before there is consensus for it; users who disregard that might be sanctioned, if there was activity here that was improper (such as active and specific planning of an experiment, with operational details, etc.) If an experiment is to be run on another wiki, such as en.WP, the proposal should be cleared, first, with either the community of the wiki involved, or, on WP, if confidentiality and some level of secrecy were required, with ArbComm there, or with some WMF body. WP ArbComm has an established procedure, it looks like, for such testing. - However, we do not have to wait for wide consensus to develop resources studying wiki history. There are still issues, but anticipating them all could be difficult, so normal wiki process suggests proceeding with caution, being sensitive to criticism and warnings. Normal process (such as deletion of contributions considered too hot to stand at the surface), avoidance of revert warring, and ordinary discussion should handle this well enough. An Ethics Committee might be formed to consider ethical issues, with, possibly, some special process, but we'll cross that bridge when we come to it. --Abd 17:44, 19 August 2010 (UTC) - Templates could be used to indicate the progress of a research project like: This proposed research project or experiment may still be in development, under discussion, or in the process of gathering approval. You may be sanctioned if you follow suggestions in this draft proposal without approval. This research project or experiment is mature and has gained approval. Please check the edit history to ensure no significant divergences from the approved proposal has happened before following suggestions to avoid any sanctions. This research project or experiment sought approval and was rejected. This resource is kept for historical interest and for people to learn what not to do. - -- darklama 13:35, 20 August 2010 (UTC) - Yes. This would be clearly appropriate for proposals involving response testing planned as part of the study. A somewhat different template would be used for simple documentation research, pointing to guidelines for such. (Suppose I put up a link to all contributions of Editor X. No problem. Suppose I put up a link to selected contributions in a way that make the editor look like a complete bozo. Problem. Maybe! A proper research project would be designed to avoid cherry-picking, would use stated, objective criteria, if selection is to be done.) A goal of stating or inferring blame should be carefully avoided, even the appearance of such a goal should be avoided. It's impossible to anonymize the necessary evidence, on-wiki, but certain pieces of a project could be developed off-wiki, or on-wiki under certain conditions, that would, top-level, draw anonymized conclusions, with raw evidence being buried, not available in current pages, so not searchable under the person's name. For the studies, the identity of the person is, ultimately, not relevant. It's tricky, and no specific rule is likely to apply well under all conditions, sometimes identity is important, which is why the Privacy Policy allows breaking privacy rules when it's needed. And when one does that, a review process is needed, which is, I think, OTRS, though there is also Ombudsman if checkuser is involved. --Abd 14:06, 20 August 2010 (UTC) Wikiversity:Embassy Pardon my French, but It seems to be requiered. JackPotte 02:21, 19 August 2010 (UTC) - Not quite. Nobody was watching or using Wikibooks:English Embassy and there were no complaints when I delinked it from the above page and removed it. Adrignola 12:35, 19 August 2010 (UTC) Subtitled movies Hello, the en.w, fr.w, fr.v et fr.b have installed this gadget, it can be useful as we can use a video in any language by pasting some customized subtitles above, with eventually some hyperlinks. JackPotte 13:48, 21 August 2010 (UTC) - I agree, this would be very useful if someone wants to use videos on their educative posts. Some admin please import it. Diego Grez 17:40, 23 August 2010 (UTC) - Bugzilla advises to put it directly in Mediawiki:Common.js in order to be able to give them enough feedback about the tool. JackPotte 19:48, 24 August 2010 (UTC) --91.65.132.76 12:23, 21 August 2010 (UTC) As a studdent, I find the navigation trough Wikiversity really confused. For example, I was interested in learning Biology, but first I reached the primary school and then a link to "university" level, and from there just a boring list in alphabetic order... I understand the lak of content, but why do I get different places form the same link and viceversa? The spanish wikiversity is oftenly easier to navigate, may be you could get some ideas from there, or what is better, to colaborate together! :) I agree, wikiversity as a whole is kind of all over the place. Perhaps it might be useful to develop some pages for those who would like to make things easier. That is what led me to this page, but there appears to be a series of disparate discussions here!Harrypotter 19:41, 21 August 2010 (UTC) - There also needs to be worthwhile contents. If there are to be links, the best thing would be to link to finished products, or to mention from the start where finished products are not to be found. - The disparate discussions deal with the issue whether conflicts on Wikipedia should be studied here. And these conflicts are not only studied... The rest of Wikiversity didn't cause much trouble as far as i know.--Daanschr 20:53, 21 August 2010 (UTC) I think there is a lot of useful and worthwhile contents, but content is too deeply nested in navigation to find. I think most people give up after 1 or 2 pages, and with the current navigation you could end up having to goto 6 or more pages before finding the course contents: Math portal > Math school > Math department > Math topic > Math category > Math list > Calculus topic > Calculus category > Calculus list > Introduction to calculus. when it should be just 1 or 2 pages away: Math > Calculus course > Introduction to calculus -- darklama 22:28, 21 August 2010 (UTC) - An inventory could be made of all worthwhile contents, and than a good navigation system to get these contents. I got little to do now, so i can work on it, but i don't want to do everything on my own.Daanschr 08:16, 22 August 2010 (UTC) - I think everyone will consider their own work worthwhile. I think navigation needs to start small and grow as more contents becomes available. I think part of the problem is people at the beginning were too ambitious and didn't put enough thought into how to organize contents so it could be easily navigated. I think if we are to avoid repeating that, we need to have a organized plan that most people can agree with, even if only a few people are willing to volunteer their time to implement it. In the past I suggested that the portal, school, and topic namespaces be replaced with a single course namespace. We could always attempt to implement a course pseudo-namespace with the intent to replace those 3 namespaces once all/most works have been organized into it. What do you think? -- darklama 12:35, 22 August 2010 (UTC) - Well i have come across a vast undergrowth of incomplete pages which have been abandoned often a year or two ago. In working on the British Empire, I parked this as an archive, and proceeded developing a much smaller element of this enormous topic (those interested can see Tudor Origins of the British Empire). In a rather chaotic fashion I have stumbled across wonderful navigation aids, quizzes and other useful tools, but largely through trial and error. What I feel would be useful is: - Student Navigation, which would lead potential students to peer-reviewed educational resources - perhaps some thing like wikipedia Good and Featured articles - Teacher Navigation, which would include partially completed material and various resources like quizzes, navigational bars etc. as well as active working groups. It seems to me that there was a lot of enthusiasm a year or two ago, but that activity has declined and that now quite a few people check the site, but after a little while give up. Before being active here, I was active on wikieducator, which has quite different problems. I am currently trying to find a practical way of using both sites to get the benefit of each.Harrypotter 14:11, 22 August 2010 (UTC) - We could make a split in the categorization between featured contents and all contents. And put a warning on the last categorization, that lots of the contents aren't finished learning materials. Determining the difference between featured and non-featured contents will require a lot of politics. How can that be managed in a decent manner? - It would be best to make a categorization of active courses and learning communities, to ensure that new users don't enter a desert, but can become part of a community. One way to stimulate this would be to organize fairs. In the Middle Ages, merchants organized fairs to ensure that they could buy and sell products and wouldn't be alone on a market. If we set time periods, like a week, in which certain topics could be studied as a group activity than that could stimulate new users to stay. I fear that lots of people will turn away when they can choose between 30 featured contents, all developed by a single user of Wikiversity.Daanschr 09:00, 23 August 2010 (UTC) - Harrypotter 09:12, 23 August 2010 (UTC) - By accident I came across this page: Category:Featured resources! Harrypotter 09:19, 23 August 2010 (UTC) - I think Cormac made it. But, i wasn't part of the people who made this category. My main focus here on Wikiversity was to come to some kind of learning communities. The reading groups have been operational, one was aired for a couple of months with weekly activities. Two others had problems with upstarts. - The idea of the fair can be used in several ways. Suppose there will be a fair on history in the first week of February 2011. Than a couple of people can prepare something for this week with the aim to attract more users for Wikiversity who are interested in history. There can be discussions on the chat, there can be the development of a game on history, or we can discuss certain writers or sources. The same can be done with topics like climate change or Einstein's Theory of relativity, or major political philosophers. - I have tried to organize congresses, focused on editing a group of articles on Wikipedia. But, that looks a lot like WikiProjects. It could also be simply about discussing a subject and collectively writing an essay on it. A date can be set when the congress will start. In preparation of this congress, literature can be discussed and read, people can be invited. An organization can be set up in order to manage the congress in such a way that the participants felt comfortable with it. - A fair is more broad and less determined from a congress. In the Middle Ages fairs were attended by merchants and some customers, who traded their products with each other in order to sell them in different areas. On Wikiversity a fair could be a gathering of people, all with different ideas, who want to find some like-minded people in order to get their ideas worked out, with whatever they want to do regarding learning and Wikiversity. I guess it is best not to use the word fair, because it is distracting. What can better be done is to just put a meeting on an agenda to of a field of study (like history, or physics), to talk about this on the Wikiversity chat. But, maybe there are too little people at the moment to man these kind of meetings. - We are both interested in history, i graduated at the university in history. So, we could try this out with history. - The problem with the under construction tag is that most contents on Wikiversity are under construction. If you add the tag, it should also be removed when nothing is done with the article anymore. On Wikipedia there was a campaign to remove tags from articles, otherwise half of the encyclopedia appeared to be under construction. But, in some cases it might be a good idea to use those tags. Suppose we have a busy well-organized community, that cleans up all the mess it leaves behind online, than an under construction tag would work very well.Daanschr 14:55, 23 August 2010 (UTC) - What you say is very interesting and brings to mind the Champagne fairs, which I always link with the development of the narrative form, i.e. through Chrétien de Troyes. Perhaps we should use the term Fair and encourage existing participants to prepare material to showcase during the period, have a number of people who will agree to respond to queries during the week and try to raise the profile of wikiversity outside the existing community, particularly as regards other wikimedia projects and wikieducator - who I feel we could work more with. How does that sound?Harrypotter 18:58, 23 August 2010 (UTC) - I derived the idea from the Champagne fairs. Never heard of Chrétien though. - One way to organize such a fair would be to make an article on the subject and to have people join by stating their own learning projects on it and tell what they are doing now with them. Added to it could be chat sessions, to make the communication quicker. - It won't help with the problem of difficult navigation on wikiversity ;-).Daanschr 20:59, 23 August 2010 (UTC) - I think we should go ahead and see what comes of it. Actually I think we could do an article on the Champagne Fairs. Chrétien was a medieval writer of romances, and the fairs became an important culture focus as well as just trade. I think the navigation issue is vast and somewhat daunting, and the solutions will come about through creating islands of collaboration which can then be linked up. If the Signpost idea comes off as well, and we work together, this could help.Harrypotter 09:41, 24 August 2010 (UTC) - So, what kind of fair do you want to organize? I know it is my own idea, but i still doubt the usefulness and my own satisfaction of it. I would also be happy to do something with history!Daanschr 17:18, 24 August 2010 (UTC) - Well let's start with history. I've been mucking about a bit with Portal:Social Sciences, Portal:History and School:History. Actually it's a bit like stepping on board the Marie Celeste - I keep on expecting to find someone's half eaten sandwich - stale after having been left for 18 months - whenever I follow a link. Perhaps if we do some work together there, we can see how the idea of a fair works out a little bit later?Harrypotter 00:09, 25 August 2010 (UTC) - Okay, i will continue the discussion on the School:History article and talk page.Daanschr 08:58, 25 August 2010 (UTC) Quiz options Making a quiz with single submit options --Rahul08 09:56, 27 August 2010 (UTC) What I am talking about is the ability to submit the answer to one question at a time.For e.g., in english basics 101,[basics 101 numbers] numbers section, you can see a series of questions where the user has to enter an answer.Now if he clicks the submit button for the first question, the present format tends to check all the questions, including the ones he hasn't answered.I was wondering if there was something which would allow only one question to be corrected each time rather than the whole quiz. - There is a (rather experimental) way to do it with recursive conversions. The idea is to write a substituted template and you put the answers in as parameters; if your answer is right then the template will give you the next question; if you answer is wrong then you will be told so and the question repeated. Hillgentleman | //\\ |Talk 14:36, 27 August 2010 (UTC) Countervandalism channel IRC Hello dear Wikiversitarians, It's been a while since the last check, but I noticed the #cvn-wv-en on irc.freenode.net channel is abandoned. The recentchanges-bot from the m:CVN (named MartinBot) has been offline for about a year, but not. The advantage of a CVN-channel over the regular feed from irc.wikimedia.org is that it filters down to suspicious edits (anonymous edits and otherwise notable edits for vandal fighters) and filters these based on a globally shared database of blacklisted and whitelisted usernames and other patterns (these are shared amongst all cvn-channels.). But if the amount of edits isn't up to a point where one can't follow the edits via Special:RecentChanges such a channel may be overkill. Right now when I look at Special:RecentChanges I can look back 2-3 days in the last 100 edits so if there are enough people watching that one could check 'everything' without having to filter them down to a lower number. Either way, setting up the channel is no hassle at all. So state below what you think about it and whether or not you would like such a channel again. Krinkle 14:21, 27 August 2010 (UTC) - Wikiversity is very much more of an experiment on using wiki for education than a development reference resource; I doubt the counter-vandalism heuristics of other sites would work here. There are many new/ip users which are students and they may not be very familiar with wiki editing. Experimentations are actually encouraged. Hillgentleman | //\\ |Talk 14:28, 27 August 2010 (UTC) Wikiversity Signpost I'm trying to start up a Wikiversity equivalent of the Wikipedia Signpost. Does anyone have any article ideas? Would you liekt o write an article yourself? Thanks, Rock drum (talk • contribs) 19:54, 23 August 2010 (UTC) - If me and Harrypotter will succeed, than we might be writing some articles. At the moment though, i don't have any material.Daanschr 21:02, 23 August 2010 (UTC) - Hey, great idea! Try checking in with some of the profs and active editors. User:MrABlair23, for instance, who just created and then blanked a fascinating course. Or Prof. Loc Vu-Quoc who has all of his students post all of their assignments here on WV. –SJ+> 02:20, 24 August 2010 (UTC) - Well, maybe we should post the fair idea there. Do you know what it's going to be called?Harrypotter 09:36, 24 August 2010 (UTC) - SJ, the reason I blanked the course is that I had a bit of a problem going on and that was the only possible solution. It is no big deal and plus, I have thought of quite a better way to deliver that fascinating course. So, if anyone is interested in doing it, please sign up to it now! --MrABlair23 14:32, 24 August 2010 (UTC) - Thanks, MrABlair -- I didn't mean to focus on the blanking, just that you're working on a cool course and making lots of updates to it. A story about the course itself and your ideas for running it would be quite interesting. As an aside, I was just at the NYC Wikiconference this past weekend, and there were dozens of WP editors there interested in Wikiversity once they heard about it. Most of them had only edited Wikipedia, and some didn't even know WV existed... so a signpost, or a regular story in the en:wp signpost, and other ways to share what's happening here, will make a real difference. –SJ+> 06:42, 31 August 2010 (UTC) - This edit, a response to MrABlair23, was made to this section by a blocked editor, and was reverted as such by me, and listed for review (with other edits made the same day), with a comment that it looked "good." Taking responsibility for the content of this edit, today I restored it. However, my restoration was reverted, so I'm making this comment for transparency. --Abd 19:48, 28 August 2010 (UTC) New encyclopedism Just in case you may be interested in any of: - w: Wikipedia:Categories for discussion/Log/2010 August 23#Category:New encyclopedism - w: Talk:New encyclopedism - User:KYPark/Encyclopaedism/Timeline (Maybe more proper here than in Wikipedia.) BTW, please anyone advise me why v: User:KYPark/Encyclopaedism/Timeline doesn't work here, which works at w: Talk:New encyclopedism. -- KYPark [T] 09:44, 30 August 2010 (UTC) - (edit conflict wtih below) Looks like one of the templates you created here, as you had created at Wikipedia, had an error in it (that wasn't on Wikipedia). I think I fixed it. The wikipedia page isn't the one you cited. Rather, it's w:User:KYPark/Encyclopaedism/Timeline. --Abd 15:54, 30 August 2010 (UTC) - Huh! That WP page doesn't exist. I must have been confused. The template w:Template:show-head2 is only used at w:User:KYPark/Sandbox, permanent link, I have no idea now what KYPark means by his comment that the Timeline page doesn't work here, and that page content is not where he referenced it to be. w:Template:show-tail is used on his private Sandbox and on a series of year pages in his user space. The Talk page he references refers to the Timeline page here. --Abd 18:01, 30 August 2010 (UTC) - The development of the British Museum Library Catalogue was very important in what you are describing.Harrypotter 15:50, 30 August 2010 (UTC) - FYI, for templates you can request Import which has the advantage of copying the template exactly and also preserves the edit history to give credit to those who contributed to the work. Another advantage is that the import can also copy subpages (like the /doc documentation for a template) in one easy click. --mikeu talk 15:59, 30 August 2010 (UTC) - Sure. In this case, though, KYPark is the author of the WP templates.... Yes, import is better, for the reason of giving credit, but it also requires a custodian to act, which delays the process; if I just want to experiment with templates to make a page work here, that delay and extra hassle will probably mean that it won't get done. But it would be pretty simple to fix this later. I'll review templates I've brought from WP and make a list to be imported. The process should be done in such a way as to merge the present content, which has often been altered from Wikipedia to make it work here, with the old content underneath in History. That should be simple. Hey, quite a bit of what I do would be simpler with the tools .... but I'll need a mentor. One step at a time, I suppose. --Abd 17:42, 30 August 2010 (UTC) - Thanks FYI, mikeu, but the case is roughly as Abd suggested. The WV version is improved, whereas the WP version was improvised (original), as it were. To be honest, I'm giving WP up for one reason or another. One more reason has been added; the w:Category:New encyclopedism was after all deleted on August 30 unjustly without any deleting guy responding to my w: Talk:New encyclopedism, though individually invited. May Wikipedia pay for this obvious injustice! -- KYPark [T] 09:35, 2 September 2010 (UTC) - Again, why v: User:KYPark/Encyclopaedism/Timeline doesn't work here? - which is just User:KYPark/Encyclopaedism/Timeline. - It is on WP and WV that the very link w: Talk:New encyclopedism works. So I expect v: User:KYPark/Encyclopaedism/Timeline to work on WP and WV as well. As you see, however, this link is red on WV, while the same code works on WP as you experience at w: Talk:New encyclopedism.:29, 31 August 2010 (UTC) Here is an example: do you see the reference at the bottom of wikt:fr:welcome? I've just added it and the archive link is already available. JackPotte 12:23, 3 September 2010 (UTC) Category casing. I have been thinking about putting some hard work cleaning up categories and uncategorized pages. It was suggested that I standardize the casing of category names. But the suggester and I immediately disagreed about which was best. So I thought I would get a quick straw pole about a sense of how the communtiy feels. Let me know what you think. Thenub314 12:49, 3 August 2010 (UTC) strstract Algebraact Algebra - Wikiversity once had a policy page where these kinds of issues were discussed by the community, but User:Darklama made the unilateral decision to disrupt the development of that policy. --JWSchmidt 13:16, 3 August 2010 (UTC) - Ok, but here is our chance to decide what we would like to do regardless of Darklama's edit. Express your opinion one way or the other about the issue on the table, and when enough people have, or I get bored of waiting then I will get to work. Thenub314 17:05, 3 August 2010 (UTC) - "Not very relevant." <-- Wrong. The page that Darklama hid away is the page where Wikiversity community members should decide such matters. --JWSchmidt 17:30, 3 August 2010 (UTC) - While that may be true, until we are ready to enforce page discipline (easily done, it's not censorship), here we are. The man wants an answer, and if that answer, perhaps based on shallow discussion here, conflicts with general practice, we'll have to look at it some more. --Abd 20:39, 3 August 2010 (UTC) - Let me be very clear about why it is not (in my opinion) very relevant. First regardless of that edit, or what the policy said, I would have asked again. Why? Because I have a preference, and for all I know someone who wrote that page many years ago made a choice by eeny-meeny-miny-moe. Before I steel my nerves to make several hundred edits it would take to clean things up, I am going to ask questions to make sure my work reflects the current feelings of the community. I would have always started the discussion at this page instead of the page you point to because it is more visible. Now you can continue to complain about Darklama's edit, that is fine, but I have nothing more to say on the matter. Might I suggest though that it would be more productive to give your opinion below. Because I am not interested in any of this politics, I just want to get stuff done. Thenub314 20:45, 3 August 2010 (UTC) - Production, Politics, Prediction. My prediction is that you will change a bunch of category names and then a year from now someone else who "wants to get stuff done" will change them all back to the way they are are now. I admit that politics often works in futile cycles of needless activity, but I don't think that recording hard-earned cultural wisdom in guidelines and policies is "politics"...it is good community practice. --JWSchmidt 22:47, 4 August 2010 (UTC) Categories should be title cased (as in Category:Abstract Algebra) - Thenub314 12:49, 3 August 2010 (UTC) - For courses, Geoff Plourde 17:42, 3 August 2010 (UTC) #Being an educational entity, I think that title casing is necessary down to second headings. - With some more thought, I think that title casing is necessary for the top few levels of articles/lessons, but not necessarily everything else.--John Bessatalk 12:18, 26 August 2010 (UTC) - Beyond that I think we should be as familiar as possible, and hence capitalize as little as possible.--John Bessatalk 18:09, 25 August 2010 (UTC) - With some experience, casing needs to be in context. When doing more social work (in counseling), casing is more common because of the ego-centric nature of "theories." But with neuroanatomy, casing seems entirely unnecessary as everything is objective, and (easily allows itself to be object-oriented, or OO--or perhaps, functionally-oriented). - Further, ego-centricism in anatomy that has resulted in upper-cased parts should be de-ego-centralized with lower casing.--John Bessatalk 16:01, 3 September 2010 (UTC) Categories should be sentence cased (as in Category:Abnormal psychology) - Abd 20:39, 3 August 2010 (UTC) There are strong convenience reasons to use sentence case. It allows someone to neglect case in citing the page or adding a category from memory, and case is always a little bit harder to type. Because most of us have strong Wikipedia experience, as well, which uses sentence case except for proper nouns, it's more in line with our habits. Wikiversity's somewhat common usage of "title case" -- which is ambiguous in fact, just clear in the example, has often delayed me completing an edit until I figured out what the used form is. Example of ambiguity: Category:Category:Solutions to problems in Abstract Algebra. Abstract Algebra can be taken as a proper noun, but it's easier if we use "title case," i.e, all lower case except for obvious and clear proper nouns, i.e., Category:Solutions from Isaac Newton on integral calculus. And if anyone thinks that a capitalization error will be common, a redirect can be put in. I think that sentence case will require fewer redirects. First letter is by convention capitalized in page names: first letter case is ignored by the software, I believe. Sentence case thus allows someone to type all lower case letters, usually, which can help with, say, an iPhone. I have some vague memory that there may be an exception. --Abd 20:39, 3 August 2010 (UTC) - Categories for courses should be title-cased, but general subject categories like those seen at Wikiversity:Browse should be sentence case. As a side effect, this makes it easier for interwiki adders to match up categories. Adrignola 22:43, 3 August 2010 (UTC) - For general subjects, Geoff Plourde 22:46, 3 August 2010 (UTC) Well I am relatively happy with the compromise suggested by Adrignola, a similar scheme is used at wikibooks. There are just a few things that should be kept in mind: - It is often the case that a resourse may be neither a course nor a subject, but rather some other learning resource. I it may be better to think of things in terms of learning resources and subjects. - Resources (and hence courses) are sentenced cased. So for example, Philosophy of mathematics would have a corresponding category would be Category:Philosophy of Mathematics to hold the subpages for the course. The casing would not match, this is no big deal in my opinion, but thought I would point it out. There is also a small potential for confusion. If someone later creates a category for the subject of the philosophy of mathematics it would be Category:Philosophy of mathematics, which now matches the case of the course. Maybe we should consider the reverse? That is, sentence case categories corresponding to learning resources and title case categories that exist to collect similar resources together. We used this type of scheme at wikibooks to avoid this type of name collision but it took the opposite form since our resources are usually title cased. Of course the other obvious choice is to make one of the names explicit by appending something like a (subject), but this could get a little messy. Thenub314 09:11, 4 August 2010 (UTC) I generally use and encourage sentence casing unless there is a particularly good reason e.g., proper nouns/names. -- Jtneill - Talk - c 11:18, 4 August 2010 (UTC) WV as an educational entity It makes sense for WV to be distinct from WP as WP is an encyclopedia built from wikis, and WP is an educational wiki. Wikis are only a reasonably new concept, being about the same age as the Web, but they continually growing into complex collaborative knowledge construction entities, a concept that be-devils WP because it is only an encyclopedia. The exact opposite is true here; when we finally get this community site harmonized and are able to attract those who are truly wrapped in the wikis' construction potentials, then the WP will be a widely-respected as a source for new and revolutionary information. Wikis are educational by nature, so we can embrace the wiki potential in ways the WP cannot.--John Bessatalk 12:43, 14 August 2010 (UTC) - To prepare for my courses, I have been familiarizing myself with the topics by deconstructing over-view material. It seems to make most sense to title-case, or fully capitalize pages, be they articles or lessons, and also first heading topics (=Topic=), but then sentence case second headings (==Second topic==), and then use as much lower case as possible from then on, so as to poise wiki-structured "byte code" writing to be converted into prose.--John Bessatalk 16:57, 25 August 2010 (UTC) The Sandbox Server II -- The Sandbox strikes back Hi all, We're getting another shot with a sandbox server -- but we need some projects!! Is there a course, learning experiment, interaction or other bit that could utilize a server? Please let us know! We're putting together projects that will be going on to the server when it gets set up, hopefully in the next few weeks. We're hoping to start with 3 strong projects, but once we've got those we'll be rolling out more in the future. So let us know what you've got. --Historybuff 05:55, 9 August 2010 (UTC) Here is fine -- if things get busy, we can move it to another page. Historybuff 14:39, 9 August 2010 (UTC) - Moodle!!! Geoff Plourde 05:14, 10 August 2010 (UTC) Geoff, you are a man of few words. I think you've nominated yourself to help out with the Moodle project -- I like that idea. Any other contributors there, and any other ideas? Historybuff 23:22, 10 August 2010 (UTC) Darklama -- I think Wikimedia (and WM software development) would be a _fantastic_ project, but it could be a large one, and one (at present) which I won't have time to manage or lead. If we can find a tech lead and a learning project manager, I think it would work well. Wordpress is a great idea. I like the idea of just a blog, and I'll fiddle around with this. There are other good ideas (other CMS, LMS, etc) which could be explored. Keep the ideas coming! Historybuff 14:28, 11 August 2010 (UTC) - I can run a Moodle installation, and I',m sure JWS could assist. other LMSs are iffy, I'd say. Geoff Plourde 05:55, 13 August 2010 (UTC) Hi Geoff. Do you have a Learning Project on WV about Moodle, or somewhere to talk on-wiki about it? I think you've got the Moodle thing if you want it, just let me know what's needed to get started. I think we'll be "going live" in a couple weeks. Historybuff 18:50, 17 August 2010 (UTC) - I am on moodle myself for my masters program -- it's OK, not great. I think a wiki implementation would be better with a side-site for social gathering. Where moodle rules is in teacher evaluation of participation over tested grades -- "teach" can monitor all activity and see who is really doing the work. But, as we all know, you can really drill down on activity with mediawiki!--John Bessatalk 13:18, 6 September 2010 (UTC) Adding a maps extension Could somebody please take a look at Geographic touchpoints, and note in the table that maps are not rendering in the way we had gotten them to render at the equivalent page on NetKnowledge.org. Could someone guide me as to how to either adopt that same extension here at Wikiversity, or to modify the touchpoints table so that it will comply with another existing maps extension that is in use here at Wikiversity? -- Thekohser 16:07, 27 August 2010 (UTC) I think the maps extension is unlikely to be enabled here, even if the community demonstrated support for it, because the extension relies on querying servers outside of WMF's control which they would likely be consider a privacy issue, since 3rd parties would have access to "private" information. -- darklama 15:36, 30 August 2010 (UTC) - mmm... DL, I think you meant "querying." Has this been discussed somewhere? --Abd 16:00, 30 August 2010 (UTC) - If we are indeed prohibited by privacy policy from acquiring an external map extension within a Wikimedia Foundation project, then I suppose just text coordinates with an external link to a community-decided "safe enough" external map site, plus perhaps a freely-licensed bitmap image (though static) of the city location (something like this) would be sufficient. I'd like to leave this discussion open, though, for another week or two, just to make sure that Darklama's (helpful) opinion is not mistaken. Certainly, some work has been done to attempt solutions for mapping: - So, some people are clearly working on the problem as we speak. It just may be months or years away from acceptable implementation. Frankly, I find it discouraging that the Wikimedia Foundation hasn't taken a more active role in either launching or acquiring a free, open-source mapping project, but I'll leave that comment where it is. -- Thekohser 16:14, 31 August 2010 (UTC) - Yes there has been work done to try to address the issue as you found. I didn't mention any of them because like you said they may be years away from an acceptable implementation. You could use an imagemap with some image to link to other pages on Wikiversity, if that would work for you. External links are fine because the person acknowledges/accepts the risk by clicking the link. If a person doesn't click the external link than supposedly there is no risk to their privacy. Presumably if an extension required people to opt-in through there preferences to see maps by a 3rd party (like Google Maps) that would be fine too. -- darklama 18:22, 31 August 2010 (UTC) - This kind of sucks. It's a shame for a "university" type of forum to lack anything more technically useful than the old "pull down" maps of the world, Europe, Asia, Africa, South America, and North America that I remember from ninth grade. Now I need to simply decide if the interactive mapping found at NetKnowledge outweighs or not the apparently larger and more active community here at Wikiversity. Of course, I am also open to the suggestion that Geographic touchpoints are simply not worth compiling at all. -- Thekohser 16:01, 7 September 2010 (UTC) Semantic Mediawiki extension Is the Semantic Mediawiki extension installed on English Wikiversity? If not, could it be, or has that been deprecated? If so, I'm thinking that it would be a better framework for the Geographic touchpoints project here. -- Thekohser 16:31, 31 August 2010 (UTC) - No it is not installed. That would be good to have, but I wonder if the developers position would be the same as using the latest DynamicPageList? Bug requests to use the latest DPL have long been quickly closed as WONTFIX. -- darklama 18:25, 31 August 2010 (UTC) Support - Support per SJ's comment above in that case. Wikiversity would be a good test ground for that extension, just like with the Quiz extension. -- darklama 22:25, 7 September 2010 (UTC) Discussion Personal attacks - I think this discussion has produced Yet Another World-shattering NuanceHarrypotter 22:25, 5 September 2010 (UTC) - If we did embrace and even made a custodian of such a user as user:Salmon of Doubt who came here to contribute nothing but to edit war with other wikiversiters, you guys are really making a storm out of very little. Hillgentleman | //\\ |Talk 16:33, 11 September 2010 (UTC)
https://en.wikiversity.org/wiki/Wikiversity:Colloquium/archives/August_2010
CC-MAIN-2019-43
refinedweb
15,780
59.64
If you knew .NET like I know .NET… Sun, 11/11/01 Many years ago when Java was new, I dove in. My first program I wrote like a C++ programmer and I didn’t get it. Then I rewrote the program as a Java program and it was much nicer, but I still didn’t get it. Then I discovered the lack of deterministic finalization and that’s all she wrote; my C++ thinking turned me away (the fact that Java was ashamed of my favorite platform and, in fact, all platforms didn’t help). Now I’ve spent the last year doing .NET programming, mostly focused on Managed C++ and I didn’t get it. I’ve participated in DevelopMentor’s ASP.NET and Guerrilla .NET and loved both of them, but still didn’t get it. I rewrote (with my good friend Jon’s help) my entire web site in ASP.NET and didn’t get it. And I spent most of last week preparing for my teach of the latest version of DevelopMentor’s Essential .NET and I *still* didn’t get it. Until today. Today I finally got it. The major power of Java wasn’t that it was platform independent, as Sun so often touts. The power of Java (and therefore the power of .NET) is that it provides a major productivity boost for the programmer. I realized this today for the first time. It started yesterday. Yesterday I ported a tiny little application (a time client) from MFC and C++ to .NET and C#. The first time I ported it, I ported it like it was still C++ and the result was ugly (I was trying to do the standard library trick of representing time as a the number of seconds since some marker time and then translating it into a formatted string at the last moment). In fact, I couldn’t even do the nice formatting of the current time since .NET didn’t support that means of conversion. But then I remembered that, unlike C++, .NET has proper date, time and span classes. Once I rewrote it in .NET style, it was a thing of beauty. With knowledge of only the name of the sockets namespace (System.Net), I was able to learn .NET and build a time client in under an hour. To paraphrase the closing sentence of one of my first articles, it just makes you want to grab the back the of computer monitor and feel the power of .NET. Encouraged by my success and armed with the courage of my convictions (and a couple of web sites given me by Jason and Peter), I sat down to write a MSN Instant Messenger application. Those of you unlucky enough to have me as an IM contact saw me log in and log out about a hundred times yesterday (sorry : ). This was because every step of the way, I’d run my client and every step of the way, I’d make more progress. 90% of my code was reading and writing from the sockets and parsing strings. For the former, I used the StreamReader and StreamWriter and for the latter, I used the Regex class. Both did a ton of heavy lifting so I didn’t have to. It was amazing. But yesterday, I still didn’t get it. I was so focused on getting my IM client working (about a half day’s work) that I completely missed the magnitude of what I was doing. I was implementing an async notification-based socket protocol after reading an example doc, skimming a spec and digging in. And the code I ended up with wasn’t spaghetti. It wasn’t production ready yet (my error handling, while present, was rudimentary) and I didn’t implement the entire spec, but I had refactored along the way and now I have the beginnings of a nice little namespace for doing IM work. And all of this happened the same day that I first discovered the existence of the System.Net namespace. It’s only after reflection (and a good night’s sleep) that I finally get it. The power of .NET is the programmer productivity. This productivity comes from a combination of ease of use and flexibility that’s going to attract almost everyone. It will attract the VB programmers because of the continued ease of use and the new functionality of the .NET Foundation Classes. It’s also going to attract the C++ and the Java programmers for the same reason, but they won’t admit that’s why they like it. They’ll claim to love .NET because of the power and flexibility. Not only are the .NET languages themselves allowed to be fully-featured, but the framework itself has tons of amazing functionality built right in. So much so that it’s easy to miss some of it. When I needed to calculate an MD5 hash in my IM client, I went to the net, downloaded some C++ code and built myself a COM object (although I could have easily built a MC++ component, too) and then brought it into my app via interop (see what not being ashamed of the platform can do? : ). That was great, but this morning Peter asked me why I hadn’t just used the MD5 functionality built into .NET. There’s so much stuff in there that I missed it. As a community, we’ll be digging into it for a long time to come. In the past, I had scoffed at the value of programmer productive over user productivity. My argument was that programmers are soldiers on the field of combat taking bullets and protecting the users at home. This was how I justified the complexity of C++, but measured it against the flexibility of the language and the performance of the produced code, and therefore the pleasure of the user. Don’t get me wrong. We’re still going to be using C++ for high-performance, shrink-wrap software for some time to come. Windows and Office XQ (or whatever they’re going to call them) will still be implemented in C++. Client applications that cannot dictate their deployment environment, i.e. can’t dictate the presence of the .NET runtime, will still be implemented in C++ or VB6. Most of the rest is going to be built in .NET without 12-24 months. Stand-alone corporate applications can dictate the presence of the runtime, as can server-side n-tier applications. Web-based applications can take advantage of the compiled and caching performance gains using ASP-style programming, will still enjoying the flexibility and power of ISAPI-style programming via .NET modules and handlers. Everyone in these spaces that works under Windows is going to be moving to .NET, especially the Windows Java programmers who already know the power of such a platform, but want easy access to their DLLs and their COM servers. The reason that people in these environments can now afford to move is that continued research in virtual machine-style environments and increase in machine power has brought about the first platform where ease of use for the programmer does not mean that the user has to suffer. Applications built with .NET are going to be fast enough and when it comes to ASP, they’re going to be much faster than what we’ve had in the past. The combination of ease of use for the home-by-5 style programmer and the flexibility for the computers-are-my-life style programmer was so great in Java that tens of thousands flooded the Java conferences from the first year. Mix in a user experience that doesn’t have to suffer from the programmers’ choice of .NET and I finally get it.
http://sellsbrothers.com/12598/
CC-MAIN-2017-51
refinedweb
1,307
73.58
Hi, I've spent several hours on this and I can't figure out what's wrong :( Please help. Edit 2: I tried everything in Dev-C++ and there were no errors (I was originally using VC++), so it's a debugger problem :/ I still don't know how to solve it... Edit: Here's a smaller program that I thought was pretty straightforward that still gives me the same error when debugging Code: #include<iostream> using namespace std; int main() { int m=4; int ** Rows; Rows = new int * [m]; return 0; } Code: #include<iostream> using namespace std; struct Node { int row; int col; int value; Node * next_in_col; Node * next_in_row; }; int main() { int m, n; cin >> m >> n; Node * * Rows; Rows = new Node * [m]; } When I debug I get this error message: "There is no source code available for the current location. OK / Show Disassembly" (the yellow arrow points to the first line: "mov edi,edi")(the yellow arrow points to the first line: "mov edi,edi")Quote: --- f:\dd\vctools\crt_bld\self_x86\crt\src\newaop.cpp -------------------------- 002C14C0 mov edi,edi 002C14C2 push ebp 002C14C3 mov ebp,esp 002C14C5 mov eax,dword ptr [count] 002C14C8 push eax 002C14C9 call operator new (2C1186h) 002C14CE add esp,4 002C14D1 pop ebp 002C14D2 ret Also, I know it runs, but when I put it in a larger program it crashes miserably :(
https://cboard.cprogramming.com/cplusplus-programming/126426-there-no-source-code-available-current-location-printable-thread.html
CC-MAIN-2017-30
refinedweb
228
55.85
I'm trying to cluster using DBSCAN (scikit learn implementation) and location data. My data is in np array format, but to use DBSCAN with Haversine formula I need to create a distance matrix. I'm getting the following error when I try to do this( a 'module' not callable error.) From what i've reading online this is an import error, but I'm pretty sure thats not the case for me. I've created my own haversine distance formula, but I'm sure the error is not with this. This is my input data, an np array (ResultArray). [[ 53.3252628 -6.2644198 ] [ 53.3287395 -6.2646543 ] [ 53.33321202 -6.24785807] [ 53.3261015 -6.2598324 ] [ 53.325291 -6.2644105 ] [ 53.3281323 -6.2661467 ] [ 53.3253074 -6.2644483 ] [ 53.3388147 -6.2338417 ] [ 53.3381102 -6.2343826 ] [ 53.3253074 -6.2644483 ] [ 53.3228188 -6.2625379 ] [ 53.3253074 -6.2644483 ]] distance_matrix = sp.spatial.distance.squareform(sp.spatial.distance.pdist (ResultArray,(lambda u,v: haversine(u,v)))) File "Location.py", line 48, in <module> distance_matrix = sp.spatial.distance.squareform(sp.spatial.distance.pdist (ResArray,(lambda u,v: haversine(u,v)))) File "/usr/lib/python2.7/dist-packages/scipy/spatial/distance.py", line 1118, in pdist dm[k] = dfun(X[i], X[j]) File "Location.py", line 48, in <lambda> distance_matrix = sp.spatial.distance.squareform(sp.spatial.distance.pdist (ResArray,(lambda u,v: haversine(u,v)))) TypeError: 'module' object is not callable Simply scipy's pdist does not allow to pass in a custom distance function. As you can read in the docs, you have some options, but haverside distance is not within the list of supported metrics. (Matlab pdist does support the option though, see here) you need to do the calculation "manually", i.e. with loops, something like this will work: from numpy import array,zeros def haversine(lon1, lat1, lon2, lat2): """ See the link below for a possible implementation """ pass #example input (your's, truncated) ResultArray = array([[ 53.3252628, -6.2644198 ], [ 53.3287395 , -6.2646543 ], [ 53.33321202 , -6.24785807], [ 53.3253074 , -6.2644483 ]]) N = ResultArray.shape[0] distance_matrix = zeros((N, N)) for i in xrange(N): for j in xrange(N): lati, loni = ResultArray[i] latj, lonj = ResultArray[j] distance_matrix[i, j] = haversine(loni, lati, lonj, latj) distance_matrix[j, i] = distance_matrix[i, j] print distance_matrix [[ 0. 0.38666203 1.41010971 0.00530489] [ 0.38666203 0. 1.22043364 0.38163748] [ 1.41010971 1.22043364 0. 1.40848782] [ 0.00530489 0.38163748 1.40848782 0. ]] Just for reference, an implementation in Python of Haverside can be found here.
https://codedump.io/share/hckvep5saNMo/1/distance-matrix-creation-using-nparray-with-pdist-and-squareform
CC-MAIN-2018-09
refinedweb
425
54.83
Rendering SVG Files Qt SVG provides classes for rendering SVG files. To include the definitions of the module's classes, use the following directive: #include <QtSvg> To link against the module, add this line to your qmake .pro file: QT += svg Rendering SVG Files Scalable Vector Graphics (SVG) is a language for describing two-dimensional graphics and graphical applications in XML. SVG 1.1 is a W3C Recommendation and forms the core of the current SVG developments in Qt. SVG 1.2 is the specification currently being developed by the SVG Working Group, and it is available in draft form. The Mobile SVG Profiles (SVG Basic and SVG Tiny) are aimed at resource-limited devices and are part of the 3GPP platform for third generation mobile phones. You can read more about SVG at About SVG. Qt supports the static features of SVG 1.2 Tiny. ECMA scripts and DOM manipulation are currently not supported. SVG drawings can be rendered onto any QPaintDevice subclass. This approach gives developers the flexibility to experiment, in order to find the best solution for each application. The easiest way to render SVG files is to construct a QSvgWidget and load an SVG file using one of the QSvgWidget::load() functions..
https://doc.qt.io/qt-6/svgrendering.html
CC-MAIN-2022-27
refinedweb
206
54.63
An application might be written that sources needed files, makes use of images files, etc. All the files are relative to the physical location of the actual program.ProblemHow do you determine the directory (without hardcoding it in the program) that contains your executable? This isn't as easy as you might first think. What if the executable is executed via an alias (UNIX) or shortcut (Windows). Or perhaps a symbolic link (UNIX) to the executable is used.SolutionThis bit of code will do the trick. The while loop is used to resolve multiple symbolic links. set originalPath [pwd] set scriptPath $::argv0 set workingPath [file dirname $::argv0] while {![catch {file readlink $scriptPath} result]} { cd $workingPath set scriptPath [file join [pwd] $result] set workingPath [file dirname $scriptPath] } cd [file dirname $scriptPath] set scriptPath [pwd] cd $originalPathSee Making a Path Absolute for a related discussion If you are using a Starkit, the root directory is contained in the starkit::topdir variable package require starkit puts stderr "root directory = $starkit::topdir" JOB: The following start sequence in the starkit's main.tcl allows to handle both cases: manually call up main.tcl during development as well as for regular usage of the starkit package: #!/usr/bin/env tclkit # startup if {[catch { package require starkit if {[starkit::startup] eq {sourced}} return}]} { namespace eval ::starkit { variable topdir [file normalize [file dirname [info script]]] } } # set auto_path [linsert $auto_path 0 [file join $::starkit::topdir]] FW: Of course, if you're satisfied with it only working if the application is invoked directly, then you can just invoke this code to retrieve the application root: file dirname [info script] jys: I wish this functionality was built into Tcl, it's really a necessity for writing code that'll run from your PATH. Anyway, this is my more compact version: set resolved_path [info script] while {![catch [list file readlink $resolved_path] target]} { set resolved_path $target} source [file join [file dirname $resolved_path] file_to_source.tcl] US: See also: Techniques for reading and writing application configuration files
http://wiki.tcl.tk/1710
CC-MAIN-2018-39
refinedweb
333
51.38
importing video - From: "Darko_Chika" <Darko_Chika@xxxxxxxxxxxxxxxxxxxxxxxxx> - Date: Thu, 31 Mar 2005 17:51:02 -0800 well im really new to using XP and i hope i explain my problem correctly, but here it goes...i have a video file saved on the hard drive that i want to import into Windows Movie maker, but when i try to do this i get an error that say something like: "The file could not be imported. An interface has to many methods to fire events from." Does anyone know what i should do or even know what im talking about? please reply ASAP. Thanx a lot! . - Follow-Ups: - Re: importing video - From: Graham Hughes - Prev by Date: audio/music and audio - Next by Date: what video editing software can do this? - Previous by thread: audio/music and audio - Next by thread: Re: importing video - Index(es):
http://www.tech-archive.net/Archive/WinXP/microsoft.public.windowsxp.video/2005-04/msg00047.html
crawl-002
refinedweb
144
79.6
.TH PCREBUILD 3 .SH NAME PCRE - Perl-compatible regular expressions .SH "PCRE BUILD-TIME OPTIONS" .rs .sp This document describes the optional features of PCRE that can be selected when the library is compiled. It assumes use of the \fBconfigure\fP script, where the optional features are selected or deselected by providing options to \fBconfigure\fP before running the \fBmake\fP command. However, the same options can be selected in both Unix-like and non-Unix-like environments using the GUI facility of \fBCMakeSetup\fP if you are using \fBCMake\fP instead of \fBconfigure\fP to build PCRE. .P The complete list of options for \fBconfigure\fP (which includes the standard ones such as the selection of the installation directory) can be obtained by running .sp ./configure --help .sp The following sections include descriptions of options whose names begin with --enable or --disable. These settings specify changes to the defaults for the \fBconfigure\fP command. Because of the way that \fBconfigure\fP works, --enable and --disable always come in pairs, so the complementary option always exists as well, but as it specifies the default, it is not described. . .SH "C++ SUPPORT" .rs .sp By default, the \fBconfigure\fP script will search for a C++ compiler and C++ header files. If it finds them, it automatically builds the C++ wrapper library for PCRE. You can disable this by adding .sp --disable-cpp .sp to the \fBconfigure\fP command. . .SH "UTF-8 SUPPORT" .rs .sp To build PCRE with support for UTF-8 Unicode character strings, add .sp --enable-utf8 .sp to the \fBconfigure\fP command. Of itself, this does not make PCRE treat strings as UTF-8. As well as compiling PCRE with this option, you also have have to set the PCRE_UTF8 option when you call the \fBpcre_compile()\fP function. .P. . .SH "UNICODE CHARACTER PROPERTY SUPPORT" .rs .sp UTF-8 support allows PCRE to process character values greater than 255 in the strings that it handles. On its own, however, it does not provide any facilities for accessing the properties of such characters. If you want to be able to use the pattern escapes \eP, \ep, and \eX, which refer to Unicode character properties, you must add .sp --enable-unicode-properties .sp to the \fBconfigure\fP command. This implies UTF-8 support, even if you have not explicitly requested it. .P Including Unicode property support adds around 30K of tables to the PCRE library. Only the general category properties such as \fILu\fP and \fINd\fP are supported. Details are given in the .\" HREF \fBpcrepattern\fP .\" documentation. . .SH "CODE VALUE OF NEWLINE" .rs .sp By default, PCRE interprets the linefeed (LF) character as indicating the end of a line. This is the normal newline character on Unix-like systems. You can compile PCRE to use carriage return (CR) instead, by adding .sp --enable-newline-is-cr .sp to the \fBconfigure\fP command. There is also a --enable-newline-is-lf option, which explicitly specifies linefeed as the newline character. .sp Alternatively, you can specify that line endings are to be indicated by the two character sequence CRLF. If you want this, add .sp --enable-newline-is-crlf .sp to the \fBconfigure\fP command. There is a fourth option, specified by .sp --enable-newline-is-anycrlf .sp which causes PCRE to recognize any of the three sequences CR, LF, or CRLF as indicating a line ending. Finally, a fifth option, specified by .sp --enable-newline-is-any .sp causes PCRE to recognize any Unicode newline sequence. .P Whatever line ending convention is selected when PCRE is built can be overridden when the library functions are called. At build time it is conventional to use the standard for your operating system. . .SH "WHAT \eR MATCHES" .rs .sp By default, the sequence \eR in a pattern matches any Unicode newline sequence, whatever has been selected as the line ending sequence. If you specify .sp --enable-bsr-anycrlf .sp the default is changed so that \eR matches only CR, LF, or CRLF. Whatever is selected when PCRE is built can be overridden when the library functions are called. . .SH "BUILDING SHARED AND STATIC LIBRARIES" .rs .sp The PCRE building process uses \fBlibtool\fP to build both shared and static Unix libraries by default. You can suppress one of these by adding one of .sp --disable-shared --disable-static .sp to the \fBconfigure\fP command, as required. . .SH "POSIX MALLOC USAGE" .rs .sp When PCRE is called through the POSIX interface (see the .\" HREF \fBpcreposix\fP .\" \fBmalloc()\fP for each call. The default threshold above which the stack is no longer used is 10; it can be changed by adding a setting such as .sp --with-posix-malloc-threshold=20 .sp to the \fBconfigure\fP command. . .SH "HANDLING VERY LARGE PATTERNS" .rs .sp Within a compiled pattern, offset values are used to point from one part to another (for example, from an opening parenthesis to an alternation .sp --with-link-size=3 .sp to the \fBconfigure\fP command. The value given must be 2, 3, or 4. Using longer offsets slows down the operation of PCRE because it has to load additional bytes when handling them. . .SH "AVOIDING EXCESSIVE STACK USAGE" .rs .sp When matching with the \fBpcre_exec()\fP function, PCRE implements backtracking by making recursive calls to an internal function called \fBmatch()\fP. In environments where the size of the stack is limited, this can severely limit PCRE's operation. (The Unix environment does not usually suffer from this problem, but it may sometimes be necessary to increase the maximum stack size. There is a discussion in the .\" HREF \fBpcrestack\fP .\" documentation.) An alternative approach to recursion that uses memory from the heap to remember data, instead of using recursive function calls, has been implemented to work round the problem of limited stack size. If you want to build a version of PCRE that works this way, add .sp --disable-stack-for-recursion .sp to the \fBconfigure\fP command. With this configuration, PCRE will use the \fBpcre_stack_malloc\fP and \fBpcre_stack_free\fP variables to call memory management functions. By default these point to \fBmalloc()\fP and \fBfree()\fP, but you can replace the pointers so that your own functions are used. .P Separate functions are provided rather than using \fBpcre_malloc\fP and \fBpcre_free\fP because the usage is very predictable: the block sizes requested are always the same, and the blocks are always freed in reverse order. A calling program might be able to implement optimized functions that perform better than \fBmalloc()\fP and \fBfree()\fP. PCRE runs noticeably more slowly when built in this way. This option affects only the \fBpcre_exec()\fP function; it is not relevant for the the \fBpcre_dfa_exec()\fP function. . .SH "LIMITING PCRE RESOURCE USAGE" .rs .sp Internally, PCRE has a function called \fBmatch()\fP, which it calls repeatedly (sometimes recursively) when matching a pattern with the \fBpcre_exec()\fP function. By controlling the maximum number of times this function may be called during a single matching operation, a limit can be placed on the resources used by a single call to \fBpcre_exec()\fP. The limit can be changed at run time, as described in the .\" HREF \fBpcreapi\fP .\" documentation. The default is 10 million, but this can be changed by adding a setting such as .sp --with-match-limit=500000 .sp to the \fBconfigure\fP command. This setting has no effect on the \fBpcre_dfa_exec()\fP matching function. .P In some environments it is desirable to limit the depth of recursive calls of \fBmatch()\fP, .sp --with-match-limit-recursion=10000 .sp to the \fBconfigure\fP command. This value can also be overridden at run time. . .SH "CREATING CHARACTER TABLES AT BUILD TIME" .rs .sp PCRE uses fixed tables for processing characters whose code values are less than 256. By default, PCRE is built with a set of tables that are distributed in the file \fIpcre_chartables.c.dist\fP. These tables are for ASCII codes only. If you add .sp --enable-rebuild-chartables .sp to the \fBconfigure\fP command, the distributed tables are no longer used. Instead, a program called \fBdftables\fP is compiled and run. This outputs the source for new set of tables, created in the default locale of your C runtime system. (This method of replacing the tables does not work if you are cross compiling, because \fBdftables\fP is run on the local host. If you need to create alternative tables when cross compiling, you will have to do so "by hand".) . .SH "USING EBCDIC CODE" .rs .sp .sp --enable-ebcdic .sp to the \fBconfigure\fP command. This setting implies --enable-rebuild-chartables. You should only use it if you know that you are in an EBCDIC environment (for example, an IBM mainframe operating system). The --enable-ebcdic option is incompatible with --enable-utf8. . .SH "PCREGREP OPTIONS FOR COMPRESSED FILE SUPPORT" .rs .sp By default, \fBpcregrep\fP reads all files as plain text. You can build it so that it recognizes files whose names end in \fB.gz\fP or \fB.bz2\fP, and reads them with \fBlibz\fP or \fBlibbz2\fP, respectively, by adding one or both of .sp --enable-pcregrep-libz --enable-pcregrep-libbz2 .sp to the \fBconfigure\fP command. These options naturally require that the relevant libraries are installed on your system. Configuration will fail if they are not. . .SH "PCRETEST OPTION FOR LIBREADLINE SUPPORT" .rs .sp If you add .sp --enable-pcretest-libreadline .sp to the \fBconfigure\fP command, \fBpcretest\fP is linked with the \fBlibreadline\fP library, and when its input is from a terminal, it reads it using the \fBreadline()\fP function. This provides line-editing and history facilities. Note that \fBlibreadline\fP is GPL-licenced, so if you distribute a binary of \fBpcretest\fP linked in this way, there may be licensing issues. .P Setting this option causes the \fB-lreadline\fP option to be added to the \fBpcretest\fP build. In many operating environments with a sytem-installed \fBlibreadline\fP this is sufficient. However, in some environments (e.g. if an unmodified distribution version of readline is in use), some extra configuration may be necessary. The INSTALL file for \fBlibreadline\fP says this: .sp "Readline uses the termcap functions, but does not link with the termcap or curses library itself, allowing applications which link with readline the to choose an appropriate library." .sp If your environment has not been set up so that an appropriate library is automatically included, you may need to add something like .sp LIBS="-ncurses" .sp immediately before the \fBconfigure\fP command. . . .SH "SEE ALSO" .rs .sp \fBpcreapi\fP(3), \fBpcre_config\fP(3). . . .SH AUTHOR .rs .sp .nf Philip Hazel University Computing Service Cambridge CB2 3QH, England. .fi . . .SH REVISION .rs .sp .nf Last updated: 17 March 2009 Copyright (c) 1997-2009 University of Cambridge. .fi
http://opensource.apple.com/source/pcre/pcre-4.2/pcre/doc/pcrebuild.3
CC-MAIN-2014-42
refinedweb
1,800
58.48
Layout each tab in QTabBar Hi guys... if someone of you use OS X, you should know that in Safari, when we open a new tab, each tab has the same width, so if I have only 2 tabs, them width will be the half of the width in the QTabBar... well, I'm trying to do this and I can't find on internet any solution, because the only one I've found is use the style property, and set there the width of each tab... I can do that, but If I resize the application, I'l must recalculate all again... my question is if QTabWidget or QTabBar has some kind of property which let me show all tabs without any scroll on the tab bar and at the same time, fix each tab width to be sure that all of them has the same width. If you want to see what I'm talking about, you can see Safari or Finder applications on OS X Yosemite or El Capitan and you'll see... how can I do that ?? what is the best way to make it ?? regards If I understand you correctly, I'd try: - setting QTabBar::expanding to true; and - overriding QTabBar::tabSizeHint to return the same width for all tabs. eg: MyTabBar::MyTabBar(QWidget *parent) : QTabBar(parent) { setExpanding(true); // At least by default. }; } MyTabWidget::MyTabWidget(QWidget *parent) : QTabWidget(parent) { setTabBar(new MyTabBar); } Of course, there may be a simpler way, but that's what I'd try experimenting with if no-one suggests anything nicer :) Cheers. Hi Paul and thank you for your answer. That doesn't work for me. :( In 1st place, the value of QTabBar::tabSizeHint(index) when I call is the value of the widget but without apply its layout. Layout is apply after start event loop until i know, so, at this point, the widget has its initials values for width and height. I didn't see any signal to know if the we are finish to render the widget, to know if i can update the value of the width of the tabs... and I that's the idea, i have to do the same every time when I resize the windows. Am I wrong ? I'm going to share the example what I'm testing That is the mainwindos.h file #ifndef MAINWINDOW_H #define MAINWINDOW_H #include <QMainWindow> #include <QTabWidget> #include <QTabBar> #include <QDebug> class TabBar : public QTabBar { Q_OBJECT public: double _width; TabBar(QWidget* parent = nullptr) : QTabBar(parent){ setExpanding(true); } virtual QSize tabSizeHint(int index) const { Q_UNUSED(index); QSize mySyze = QTabBar::tabSizeHint(index); qDebug() << "width: " << mySyze.width() << "height: " << mySyze.height() << " count: " << count(); return QSize(mySyze.width() / count(), mySyze.height()); } }; class MyTabWidget : public QTabWidget { Q_OBJECT public: MyTabWidget(QWidget* parent = nullptr):QTabWidget(parent){ } virtual void setMyTabBar(QTabBar* tabBar){ setTabBar(tabBar); } virtual ~MyTabWidget(){} }; namespace Ui { class MainWindow; } class MainWindow : public QMainWindow { Q_OBJECT public: explicit MainWindow(QWidget *parent = 0); ~MainWindow(); private: Ui::MainWindow *ui; }; #endif // MAINWINDOW_H that is my mainwondows.cpp file #include "mainwindow.h" #include "ui_mainwindow.h" MainWindow::MainWindow(QWidget *parent) : QMainWindow(parent), ui(new Ui::MainWindow) { ui->setupUi(this); MyTabWidget* tabWidget = new MyTabWidget(this); TabBar* tabBar = new TabBar(this); tabWidget->setMyTabBar(tabBar); ui->centralWidget->layout()->addWidget(tabWidget); tabWidget->addTab(new QWidget(tabWidget),"tab1"); tabWidget->addTab(new QWidget(tabWidget),"tab2"); } MainWindow::~MainWindow() { delete ui; } another advise my friend ? regards to know if i can update the value of the width of the tabs... and I that's the idea, i have to do the same every time when I resize the windows. Am I wrong ? You don't need to know. Basically, the code that lays out the tab should already be subscribed to the relevant signals / events, and should be calling your MyTabBar::tabSizeHint as necessary. Did you try my MyTabBar::tabSizeHint implementation above? ie; } The trick here is that we're always returning the same width for all tabs, whereas your version will return different widths as tabs are added (or deleted). It might not work, but give it a try if you haven't already. Cheers. I had a quick play, and it seems that setExpanding is not working as I expected... I'll have to look a little closer :) @Paul-Colby Yes paaul, it doesn't work for me neither any other advise ? regards
https://forum.qt.io/topic/66257/layout-each-tab-in-qtabbar/
CC-MAIN-2020-16
refinedweb
715
54.12
I am starting with Android and facing some little problems with coding my application. The idea is simple : the device receives SMS, my application filter them and do some action upon such or such type of SMS. Thanks to your forum, I found how to do so, and I can listen for incoming SMS and perform action when such or such type of SMS arrives (in my application, this is SMS from an hard coded telephone number). The goal of this piece of code is to retrieve the very last SMS that has been received. In order to work around, and try how Android works, I just wanna count the number of SMS in inbox. The matter with my code (you'll see it below) is that the result is not stable : when it is more than 1 call to this code, the number of SMS is not accurate anymore. As an example, let's say that my device doesn't contain ANY SMS, and that all the SMS the device receives are from the number on which I decide whether or not there is action to perform. 1 - first SMS received : it shows there is 1 sms in box (OK) 2 - second SMS received : it shows there is 1 sms in box (NOK, should be 2) 3 - third SMS received : it shows there are 2 sms in box (NOK, should be 3, but at least increased from last step) .. and so on. IN THE DDMS PANEL (with Eclipse), if I press Debug on the process of my application, then the counter becomes good. So from my previous example, let's say after 3 I pressed the debug button, I would have: 4 - fourth SMS received : it shows there are 4 sms in box (OK, went back to good counting) 5 - fifth SMS received : it shows there are 5 sms in box (OK) ... and so on. Please can you give me your opinion on where there is a pb in my code ?? Thanks so much for you help !!! JAVA CODE : - Code: Select all package com.degetel.mobilelocator.respappli; import android.content.BroadcastReceiver; import android.content.Context; import android.content.Intent; import android.database.Cursor; import android.net.Uri; import android.os.Bundle; import android.provider.Telephony; import android.telephony.gsm.SmsMessage; public class SMSListener extends BroadcastReceiver implements Runnable { private Context _ctxt; private Intent _intent; private Thread t = new Thread(this); @Override public void onReceive(Context ctxt, Intent intent) { _ctxt = ctxt; _intent = intent; t.start(); } public void run() { // we need to check if the sender is the server Bundle bundle = _intent.getExtras(); if (bundle != null) { /* Get all messages contained in the Intent */ SmsMessage[] messages = Telephony.Sms.Intents .getMessagesFromIntent(_intent); /* retrieve the message */ for (SmsMessage currentMessage : messages) { if ((currentMessage.getDisplayOriginatingAddress()) .equalsIgnoreCase("01101981")) { Cursor cursor = _ctxt.getContentResolver().query( Telephony.Sms.CONTENT_URI, null, null, null, "_ID"); System.out.println("on a " + cursor.getCount() + " message(s)"); } } } } } XML code - Code: Select all <?xml version="1.0" encoding="utf-8"?> <manifest xmlns: <uses-permission android: <uses-permission android: <uses-permission android: <application android: <activity android: </activity> <receiver android: <intent-filter> <action android: </intent-filter> </receiver> </application> </manifest>
http://www.anddev.org/other-coding-problems-f5/unstable-results-sms-notification-with-broadcastreceiver-t3435.html
CC-MAIN-2018-09
refinedweb
520
57.06
13 December 2007 15:06 [Source: ICIS news] LONDON (ICIS news)--Gulf-based United Arab Chemical Carriers (UACC) has signed contracts with the South Korean yard SLS Shipbuilding Co to build 10 45,000 dwt chemical and product tankers, UACC said on Thursday. One of the major shareholders of UACC is the Dubai-based United Arab Shipping Company (UASC) which is primarily a container ship operator. “We had realised that many of our container clients were shipping more chemical products and they were expanding their production of petrochemical products,” said UACC spokesman Brian Baker. The company was formed in July of this year in response to this demand and it is developing its capacity in order to be able to meet the needs of its customers. The new ships will be delivered in 2011 and early 2012 and will be additions to the one 2004-built second-hand tanker and four other second-hand vessels due for delivery soon. UACC chairman ?xml:namespace> "It is our aim to ensure that the expansive projects of our customers are well supported by cost-efficient, safe and environmentally friendly means of transportation," he added. Chemical products expected to be transported from the region are monoethylene glycol (MEG), methanol and methyl tertiary butyl ether (MT.
http://www.icis.com/Articles/2007/12/13/9086677/s-korean-yard-builds-10-chem-tankers-for-uacc.html
CC-MAIN-2013-20
refinedweb
211
50.8
Find duplicate files and perform action. Project Description sweeper Find duplicate files and perform action. Usage Print duplicates from sweeper import Sweeper swp = Sweeper(['images1', 'images2']) dups = swp.file_dups() print(dups) Remove duplicate files from sweeper import Sweeper swp = Sweeper(['images1', 'images2']) swp.rm() Perform custom action from sweeper import Sweeper swp = Sweeper(['images']) for f, h, dups in swp: print('encountered {} which duplicates with already found duplicate files {} with hash {}'.format(f, dups, h)) As script: python -m sweeper/sweeper --help As installed console script: sweeper --help Installation from source: python setup.py install or from PyPI: pip install sweeper Documentation this README.rst, code itself, docstrings sweeper can be found on github.com at: Tested With Python2.7, Python3 Download Files Download the file for your platform. If you're not sure which to choose, learn more about installing packages.
https://pypi.org/project/sweeper/
CC-MAIN-2018-09
refinedweb
142
59.7
Collecting metrics and logs from Amazon EC2 instances and on-premises servers with the CloudWatch agent The unified CloudWatch agent enables you to do the following: Collect internal system-level metrics from Amazon EC2 instances across operating systems. The metrics can include in-guest metrics, in addition to the metrics for EC2 instances. The additional metrics that can be collected are listed in Metrics collected by the CloudWatch agent. Collect system-level metrics from on-premises servers. These can include servers in a hybrid environment as well as servers not managed by AWS. Retrieve custom metrics from your applications or services using the StatsDand collectdprotocols. StatsDis supported on both Linux servers and servers running Windows Server. collectdis supported only on Linux servers. Collect logs from Amazon EC2 instances and on-premises servers, running either Linux or Windows Server. Note The CloudWatch agent does not support collecting logs from FIFO pipes. You can store and view the metrics that you collect with the CloudWatch agent in CloudWatch just as you can with any other CloudWatch metrics. The default namespace for metrics collected by the CloudWatch agent is CWAgent, although you can specify a different namespace when you configure the agent. The logs collected by the unified CloudWatch agent are processed and stored in Amazon CloudWatch Logs, just like logs collected by the older CloudWatch Logs agent. For information about CloudWatch Logs pricing, see Amazon CloudWatch Pricing Metrics collected by the CloudWatch agent are billed as custom metrics. For more information about CloudWatch metrics pricing, see Amazon CloudWatch The CloudWatch agent is open-source under the MIT license, and is hosted on GitHub The steps in this section explain how to install the unified CloudWatch agent on Amazon EC2 instances and on-premises servers. For more information about the metrics that the CloudWatch agent can collect, see Metrics collected by the CloudWatch agent. Supported operating systems The CloudWatch agent is supported on x86-64 architecture on the following operating systems: Amazon Linux version 2014.03.02 or later Amazon Linux 2 Ubuntu Server versions 20.04, 18.04, 16.04, and 14.04 CentOS versions 8.0, 7.6, 7.2, and 7.0 Red Hat Enterprise Linux (RHEL) versions 8, 7.7, 7.6, 7.5, 7.4, 7.2, and 7.0 Debian version 10 and version 8.0 SUSE Linux Enterprise Server (SLES) version 15 and version 12 Oracle Linux versions 7.8, 7.6, and 7.5 macOS, including EC2 Mac1 instances 64-bit versions of Windows Server 2019, Windows Server 2016, and Windows Server 2012 The agent is supported on ARM64 architecture on the following operating systems: Amazon Linux 2 Ubuntu Server versions 20.04 and 18.04 Red Hat Enterprise Linux (RHEL) version 7.6 SUSE Linux Enterprise Server 15 Installation process overview You can download and install the CloudWatch agent manually using the command line, or you can integrate it with SSM. The general flow of installing the CloudWatch agent using either method is as follows: Create IAM roles or users that enable the agent to collect metrics from the server and optionally to integrate with AWS Systems Manager. Download the agent package. Modify the CloudWatch agent configuration file and specify the metrics that you want to collect. Install and start the agent on your servers. As you install the agent on an EC2 instance, you attach the IAM role that you created in step 1. As you install the agent on an on-premises server, you specify a named profile that contains the credentials of the IAM user that you created in step 1. Contents
https://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/Install-CloudWatch-Agent.html
CC-MAIN-2021-49
refinedweb
601
55.44
We are excited to announce new precompiled PDFNet Python3 libraries are available to download -- another small step in making the dev experience with PDFTron the best it can be. New Python 3 support makes it a snap to install powerful document processing capabilities while enabling your Python developers to leverage the latest Python features with the highest security. We currently support Python 3.5 - 3.8 on Windows (x64, x86), Linux (x64, x86), and macOS. By using PIP - Python package manager, you can download and install our Python packages published on Python Package Index (PyPI) with ease. linkHow to Install Installation is now just a single one-step command. Enter the following -- and that's it! linkOn Windows: python -m pip install PDFNetPython3 linkOn Linux/macOS: python3 -m pip install PDFNetPython3 After the installation completes, you can add import PDFNetPython3 or from PDFNetPython3 import * and then fully employ our PDFNetPython3 library. Below is a sample Python script called HelloWorld.py showing how our PDFNetPython3 is integrated. # You can add the following line to integrate PDFNetPython3 # into your solution from anywhere on your system so long as # the library was installed successfully via pip from PDFNetPython3 import * def main():() Next, to test that your code works, run the script in the folder containing HelloWorld.py via the command prompt: linkOn Windows: python -u HelloWorld.py linkOn Linux/macOS: python3 -u HelloWorld.py linkFurther Resources To learn more about PDFNet Python3 installation, check out our user guide. We look forward to your feedback -- and don't hesitate to contact us should you have any questions.
https://www.pdftron.com/blog/python/python3/
CC-MAIN-2020-45
refinedweb
263
62.88
Device and Network Interfaces - converts mouse protocol to Firm Events #include <sys/stream.h> #include <sys/vuid_event.h> #include <sys/vuid_wheel.h> int ioctl(fd, I_PUSH, vuidm3p); int ioctl(fd, I_PUSH, vuidm4p); int ioctl(fd, I_PUSH, vuidm5p); int ioctl(fd, I_PUSH, vuid2ps2); int ioctl(fd, I_PUSH, vuid3ps2); The STREAMS modules vuidm3p, vuidm4p, vuidm5p, vuid2ps2, and vuid3ps2 convert mouse protocols to Firm events. The Firm event structure is described in <sys/vuid_event.h>. Pushing a STREAMS module does not automatically enable mouse protocol conversion to Firm events. The STREAMS module state is initially set to raw or VUID_NATIVE mode which performs no message processing. You must change the state to VUID_FIRM_EVENT mode to initiate mouse protocol conversion to Firm events. This can be accomplished by the following code: int format; format = VUID_FIRM_EVENT; ioctl(fd, VUIDSFORMAT, &format); You can also query the state of the STREAMS module by using the VUIDGFORMAT option. int format; int fd; /* file descriptor */ ioctl(fd, VUIDGFORMAT, &format); if ( format == VUID_NATIVE ); /* The state of the module is in raw mode. * Message processing is not enabled. */ if ( format == VUID_FIRM_EVENT ); /* Message processing is enabled. * Mouse protocol conversion to Firm events * are performed. The remainder of this section describes the processing of STREAMS messages on the read- and write-side. Incoming messages are queued and converted to Firm events. The read queue of the module is flushed of all its data messages and all data in the record being accumulated are also flushed. The message is passed upstream. Messages sent downstream as a result of an ioctl(2) system call. The two valid ioctl options processed by the vuidmice modules are VUIDGFORMAT and VUIDSFORMAT. The write queue of the module is flushed of all its data messages and the message is passed downstream. This option returns the current state of the STREAMS module. The state of the vuidmice STREAMS module may either be VUID_NATIVE (no message processing) or VUID_FIRM_EVENT (convert to Firm events). This option sets the state of the STREAMS module to VUID_FIRM_EVENT. If the state of the STREAMS module is already in VUID_FIRM_EVENT,
https://docs.oracle.com/cd/E18752_01/html/816-5177/vuidmice-7m.html
CC-MAIN-2019-04
refinedweb
344
66.13
Catkin and eclipse Hello, i am using ROS Groovy on Ubuntu 12.04. I want edit my packages in eclipse. On Fuerte i runed make eclipse-project and import this result as eclipse project. At groovy i can include my catkin workspace as project. This is working. But now i have a problems with my includes. #include "ros/ros.h" warning: Unresolved inclusion. Or ROS_INFO("text") error: Type 'std_msgs::String::ConstPtr' could not be resolved and i couldnt see with crtl + mouse the functions. What i do wrong ? what i must edit in eclipse that this is working? thank you for your help
https://answers.ros.org/question/52013/catkin-and-eclipse/
CC-MAIN-2019-43
refinedweb
103
88.02
eesh, Sorry it took so long to get back to you. My review notes are below. > I mostly finished the assigned files. But then there are some changes > which i marked as FIXME. I guess we can handle them while building the > source. Some of them are logic issues. Others are locking changes where > 2.4 was not taking the lock and 2.6 takes new locks. I have them as > #error in the code. Now that I'm done reviewing this first cut of your merge, I'd like you to revisit these FIXME and #error problems and try to fix them. Ask questions of the appropriate developer if you need more information. Ask me if you need some help figuring out a solution to any thorny problems. > fs/proc/base.c was rewritten a bit. I guess right now we can merge it. > Later i would like Laura or somebody else to review the same. This is > due to TGID and TID split and also the proc_inode changes. I'll ask Laura to take a look at it. > I also moved the cttydev and cttynode from task_struct to signal_struct. > That is where we find tty_struct now. Sounds good. I did not review any of the arch/alpha/ and include/asm-alpha/ files. These are your domain. It's up to you to look at the changes I make to files in arch/i386/ and include/asm-i386/, and determine whether equivalent changes need to be made to your Alpha files. Although drivers/char/pty.c, n_tty.c and tty_io.c were on my list, I'm actually glad to have let you merge them. They're not a core part of vproc, and it's less that I have to do. ;) A few things that I noticed: - drivers/char/Makefile.REJ was entirely ignored. How did you determine that these changes are no longer needed? - What about drivers/video/modedb.c.REJ? - You still have arch/i386/kernel/i386_ksyms.c.REJ to do. Overall, you did a great job! I made a few changes to your code: > diff -rux 'cscope*' linux-ssi.master.sep28.raw/drivers/char/pty.c linux-ssi.master.sep28.reviewed/drivers/char/pty.c > --- linux-ssi.master.sep28.raw/drivers/char/pty.c 2004-09-28 16:30:06.000000000 -0700 > +++ linux-ssi.master.sep28.reviewed/drivers/char/pty.c 2004-09-28 16:18:33.000000000 -0700 > @@ -69,7 +69,7 @@ > set_bit(TTY_OTHER_CLOSED, &tty->link->flags); > if (tty->driver->subtype == PTY_TYPE_MASTER) { > set_bit(TTY_OTHER_CLOSED, &tty->flags); > -#ifndef CONFIG_SSI /* FIXME SSI_XXX check this */ > +#ifndef CONFIG_SSI > #ifdef CONFIG_UNIX98_PTYS > if (tty->driver == ptm_driver) > devpts_pty_kill(tty->index); Looks good. > @@ -414,7 +414,11 @@ > > #ifdef CONFIG_UNIX98_PTYS > /* Unix98 devices */ > +#ifdef CONFIG_SSI > + devfs_mk_symlink("pts", "/cluster/dev/pts"); > +#else > devfs_mk_dir("pts"); > +#endif > ptm_driver = alloc_tty_driver(NR_UNIX98_PTY_MAX); > if (!ptm_driver) > panic("Couldn't allocate Unix98 ptm driver"); This was from the original .REJ file. I guess you didn't recognize where it should be applied. My review of your merge was not 100% thorough, so please make sure that there aren't other cases like this. > diff -rux 'cscope*' linux-ssi.master.sep28.raw/drivers/char/tty_io.c linux-ssi.master.sep28.reviewed/drivers/char/tty_io.c > --- linux-ssi.master.sep28.raw/drivers/char/tty_io.c 2004-09-28 16:30:06.000000000 -0700 > +++ linux-ssi.master.sep28.reviewed/drivers/char/tty_io.c 2004-09-20 15:45:35.000000000 -0700 > @@ -465,12 +465,13 @@ > struct tty_struct *tty = (struct tty_struct *) data; > struct file * cons_filp = NULL; > struct file *filp, *f = NULL; > - struct task_struct *p; > - struct pid *pid; > - int closecount = 0, n; > #ifdef CONFIG_SSI > struct vproc *vp; > +#else > + struct task_struct *p; > + struct pid *pid; > #endif > + int closecount = 0, n; > > if (!tty) > return; The original patch #ifdef'd out the p and pid variable when CONFIG_SSI was enabled. This is because they're not used by the SSI version of the code, so the compiler would throw a warning for each variable that's not used. > @@ -639,8 +640,8 @@ > #else > struct task_struct *p; > struct list_head *l; > -#endif > struct pid *pid; > +#endif > int tty_pgrp = -1; > /* > * FIXME!! SSI_XXX 2.4 was not taking BKL. This needs fix. I haven't looked The pid variable is only used in the non-SSI case. > @@ -1814,6 +1815,8 @@ > file->f_flags &= ~O_NONBLOCK; > return 0; > } > + > +#error This should be a heavily hooked tiocsctty(), as in 2.4 > #ifdef CONFIG_SSI > static int ssi_tiocsctty(struct tty_struct *tty, int arg) > { Please put all of this back the way it was with modified code scattered throughout tiocsctty(). Separating it out into ssi_tiocsctty() obscures what you had to do to adapt it to 2.6.8.1, and it unnecessarily duplicates code that is common to the SSI and non-SSI cases. > diff -rux 'cscope*' linux-ssi.master.sep28.raw/drivers/mtd/mtdblock.c linux-ssi.master.sep28.reviewed/drivers/mtd/mtdblock.c > --- linux-ssi.master.sep28.raw/drivers/mtd/mtdblock.c 2004-09-28 16:30:06.000000000 -0700 > +++ linux-ssi.master.sep28.reviewed/drivers/mtd/mtdblock.c 2004-09-20 14:57:59.000000000 -0700 > @@ -17,9 +17,6 @@ > #include <linux/vmalloc.h> > #include <linux/mtd/mtd.h> > #include <linux/mtd/blktrans.h> > -#ifdef CONFIG_SSI > -#include <linux/vproc.h> > -#endif > > static struct mtdblk_dev { > struct mtd_info *mtd; Not really needed, since now nothing else is changed in this file. > diff -rux 'cscope*' linux-ssi.master.sep28.raw/include/linux/proc_fs.h linux-ssi.master.sep28.reviewed/include/linux/proc_fs.h > --- linux-ssi.master.sep28.raw/include/linux/proc_fs.h 2004-09-28 16:30:07.000000000 -0700 > +++ linux-ssi.master.sep28.reviewed/include/linux/proc_fs.h 2004-09-20 13:50:41.000000000 -0700 > @@ -237,7 +237,7 @@ > struct proc_inode { > #ifdef CONFIG_SSI > struct vproc *vproc; > - /* FIXME This need to be removed .This is needed to track > + /* FIXME SSI_XXX: This need to be removed .This is needed to track > * the remote process open files > */ > struct file *file; Use SSI_XXX in these kinds of comments. > @@ -263,7 +263,4 @@ > return PROC_I(inode)->pde; > } > > - > - > - > #endif /* _LINUX_PROC_FS_H */ Why all the extra blank lines? > diff -rux 'cscope*' linux-ssi.master.sep28.raw/missing.patch linux-ssi.master.sep28.reviewed/missing.patch > --- linux-ssi.master.sep28.raw/missing.patch 2004-09-20 13:46:31.000000000 -0700 > +++ linux-ssi.master.sep28.reviewed/missing.patch 2004-09-20 13:47:31.000000000 -0700 > @@ -392,20 +392,6 @@ > extern struct upc_channel izo_channels[MAX_CHANNEL]; > > /* message types between presto filesystem in kernel */ > -diff -r -U 4 linux-ci/include/linux/proc_fs_i.h linux-ssi.missing/include/linux/proc_fs_i.h > ---- linux-ci/include/linux/proc_fs_i.h 2000-04-07 13:38:00.000000000 -0700 > -+++ linux-ssi.missing/include/linux/proc_fs_i.h 2004-06-11 12:07:32.000000000 -0700 > -@@ -1,6 +1,10 @@ > - struct proc_inode_info { > -+#ifdef CONFIG_SSI > -+ struct vproc *vproc; > -+#else > - struct task_struct *task; > -+#endif > - int type; > - union { > - int (*proc_get_link)(struct inode *, struct dentry **, struct vfsmount **); > - int (*proc_read)(struct task_struct *task, char *page); > diff -r -U 4 linux-ci/kernel/ksyms.c linux-ssi.missing/kernel/ksyms.c > --- linux-ci/kernel/ksyms.c 2004-04-28 13:11:56.000000000 -0700 > +++ linux-ssi.missing/kernel/ksyms.c 2004-04-01 16:33:46.000000000 -0800 You've merged this code into one of the existing 2.6 files, so I've removed it from missing.patch. Is there any other code you've merged from this file? One other thing is that I'm not including your changes to kernel/ptrace.c in the next master sandbox. John's taking over the merge of VPROC for me, and I will give him this patch along with the others that you sent on August 30. He can review what you've done and incorporate it with his merge. Keep up the good work! Best Regards, Brian I agree to receive quotes, newsletters and other information from sourceforge.net and its partners regarding IT services and products. I understand that I can withdraw my consent at any time. Please refer to our Privacy Policy or Contact Us for more details
https://sourceforge.net/p/ssic-linux/mailman/ssic-linux-devel/?viewmonth=200409&viewday=29
CC-MAIN-2017-34
refinedweb
1,330
61.63
Data Infrastructure Management Software Discussions The NFS-connections and snapmirror dashboard is giving me this error. This might be a Grafana backward compabilitiy issue of dashboard. What version do you have? If it's older than 5, I think updating Grafana might solve this for you. I will look at that. I do get these error on the extension scripts...not sure if that impacts the dashboard. I would expect not as it should show an empty one at any rate. snapmirror [2019-11-08 11:24:00,511] [ERROR] [poll_snapmirrors] ZAPI request failed: [SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed (_ssl.c:618)[2019-11-08 11:25:01,396] [ERROR] [poll_snapmirrors] ZAPI request failed: [SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed (_ssl.c:618)[2019-11-08 11:26:00,793] [ERROR] [poll_snapmirrors] ZAPI request failed: [SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed (_ssl.c:618)[2019-11-08 11:27:02,743] [ERROR] [poll_snapmirrors] ZAPI request failed: [SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed (_ssl.c:618) i resolved the nfs-connections one. [2019-11-08 10:30:41] [WARNING] [sshpass] not installed, can't run package [nfs-connections.sh], exiting[2019-11-08 10:46:01] [WARNING] [sshpass] not installed, can't run package [nfs-connections.sh], exiting[2019-11-08 11:01:01] [WARNING] [sshpass] not installed, can't run package [nfs-connections.sh], exiting[2019-11-08 11:15:59] [WARNING] [sshpass] not installed, can't run package [nfs-connections.sh], exiting[2019-11-08 11:31:03] [DEBUG ] Session started[2019-11-08 11:31:03] [DEBUG ] Session ended The Dashboard import error is unrelated to those ones. Re: snapmirror Are you using SSL authentication? Unfortunately that's not supported in the Extention Manager (although we are planning to support that sometime soon). For now you can use the extension packages user/password authentication. Re: nfs-connections Seems like the dependency sshpass is missing. (`apt install sshpass` will install it on a Debian-like system). If you already installed it, it might be the same authentication issue as above. As we jump topics...... I ugraded to Grafana 6.4.4 and I still get the same error. Like the variable DS_GRAPHITE is not confgiured somewhere? I am not using SSL authentication for any part of the Harvest environment. Seems like a python thing #====== Polled host setup defaults ============================================host_type = FILERhost_port = 443host_enabled = 1template = defaultdata_update_freq = 60ntap_autosupport = 0latency_io_reqd = 10auth_type = password Seems like I did something wrong when exporting the SnapMirror dashboard. The ${DS_GRAPHITE} is a variable name from my Grafana configuration and is not recognized by your Grafana server. Here is a quick fix you can use: - Navigate to the SnapMirror Replications dashboard and enter Dashboard settings (click on the gear icon on right top), - Go to JSON Model and copy the code to a text editor - Replace all "datasource": "${DS_GRAPHITE}" with "datasource": null - Copy the code back, save and reload the dashboard. cc @yannb I upgraded grafana to 6.4 and harvest to 1.6, but not able to see any data for nfs connections . I followed your post and changed datasource to null for both snapmirror relationships and nfs3 connections nfssnapmirror Can you check the log files of these two extensions? If there isn't much there, can you run them in verbose mode and check the logs again. To run in verbose mode, just restart the poller for which you have activated in these extensions with -v, e.g.: ./netapp-manager -restart -poller POLLER -v Btw, I found a bug in the snapmirrors extensions (making it collect only destination metrics), I'll fix it and post an updated version here soon. Hello, Have you managed to use the Snapmirror and NFS Connection dashboards ? Thanks ! I'm also hitting this issue with NFS Connections and Snapmirror Replications. Any updates? Chris Same issue. Datasource named ${DS_GRAPHITE} was not found same issue any suggestion ? Hi, You can either run the updater here and use the new Grafana dashboards or follow my instructions here to fix the dashboard manually. Hope that helps! I believe that the link on how to manually fix the dashboard is wrong. Can you please advise if this is the case and update it with the correct one. Thanks. Grafana v6.5.3 Harvest 1.6 Thanks, @vachagan_gratian I was able to sort out the issue by following steps 1. Copy python module Unzip netapp-manageability-sdk-9.7.zip (/tmp/netapp-manageability-sdk-9.7/lib/python/NetApp) Copy these files NaElement.py NaErrno.py NaServer.py to /opt/netapp-harvest/lib 2. Update poller # csgxxxxxxxx[csgxxxxxx]hostname = XXX.XX.XX.XXgroup = XXX_cDottemplate = default,extension.conf 3. Ignore SSL validation # due to error ZAPI request failed: [SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed (_ssl.c:618) vi /opt/netapp-harvest/extension/snapmirror_replications.py Add following red line after import logging & before def main(); import logging import ssl try:_create_unverified_https_context = ssl._create_unverified_contextexcept AttributeError:# Legacy Python that doesn't verify HTTPS certificates by defaultpasselse:# Handle target environment that doesn't support HTTPS verificationssl._create_default_https_context = _create_unverified_https_context def main(): 4. Add privileges # ZAPI request failed: Insufficient privileges: user 'netapp-harvest' does not have read access to this resource Add snapmirror show command to role security login role create -role netapp-harvest-role -access readonly -cmddirname "snapmirror show" 5. Change all "datasource": null, from Snapmirror Replication Settings - JSON Model ${DS_GRAPHITE} to "datasource": null Then Refresh Browser Going to try the above fix but the initial fix provided certainly doesn't work. On saving the dashboard you get Datasource named null was not found when you make the initial fix.
https://community.netapp.com/t5/Data-Infrastructure-Management-Software-Discussions/NetApp-Harvest-1-6-snapmirror-and-NFS-connections-dashboard/m-p/152204
CC-MAIN-2021-17
refinedweb
921
50.12
13.7. Text Sentiment Classification: Using Recurrent Neural Networks¶ Text classification is a common task in natural language processing, which transforms a sequence of text of indefinite length into a category of text. This section will focus. sys sys.path.insert(0, '..') import d2l from mxnet import gluon, init, nd from mxnet.gluon import data as gdata, loss as gloss, nn, rnn, utils as gutils from mxnet.contrib import text import os import tarfile 13. 13.7.1.1. Reading Data¶ We first download this data set to the “../data” path and extract it to “../data/aclImdb”. In [2]: data_dir = './' url = '' fname = gutils.download(url, data_dir) with tarfile.open(fname, 'r') as f: f.extractall(data_dir) Next, read the training and test data sets. Each example is a review and its corresponding label: 1 indicates “positive” and 0 indicates “negative”. In [3]: def read_imdb(folder='train'): data, labels = [], [] for label in ['pos', 'neg']: folder_name = os.path.join(data_dir, 'aclImdb', folder, label) for file in os.listdir(folder_name): with open(os.path.join(folder_name, file), 'rb') as f: review = f.read().decode('utf-8').replace('\n', '') data.append(review) labels.append(1 if label == 'pos' else 0) return data, labels train_data, test_data = read_imdb('train'), read_imdb('test') print('# trainings:', len(train_data[0]), '\n# tests:', len(test_data[0])) for x, y in zip(train_data[0][:3], train_data[1][:3]): print('label:', y, 'review:', x[0:60]) # trainings: 25000 # tests: 13.7.1.2. Tokenization and Vocabulary¶ We use a word as a token, which can be split based on spaces. In [4]: def tokenize(sentences): return [line.split(' ') for line in sentences] train_tokens = tokenize(train_data[0]) test_tokens = tokenize(test_data[0]) Then we can create a dictionary based on the training data set with the words segmented. Here, we have filtered out words that appear less than 5 times. In [5]: vocab = d2l.Vocab([tk for line in train_tokens for tk in line], min_freq=5) 13.7.1.3. Padding to the Same Length¶ Because the reviews have different lengths, so they cannot be directly combined into mini-batches. Here we fix the length of each comment to 500 by truncating or adding “<unk>” indices. In [6]: max_len = 500 def pad(x): if len(x) > max_len: return x[:max_len] else: return x + [vocab.unk] * (max_len - len(x)) train_features = nd.array([pad(vocab[line]) for line in train_tokens]) test_features = nd.array([pad(vocab[line]) for line in test_tokens]) 13.7.1.4. Create Data Iterator¶ Now, we will create a data iterator. Each iteration will return a mini-batch of data. In [7]: batch_size = 64 train_set = gdata.ArrayDataset(train_features, train_data[1]) test_set = gdata.ArrayDataset(test_features, test_data) Lastly, we will save a function get_data_imdb into d2l, which returns the vocabulary and data iterators. 13.7.2. Use. In [9]:,, d2l.try_all_gpus() net = BiRNN(len(vocab), embed_size, num_hiddens, num_layers) net.initialize(init.Xavier(), ctx=ctx) 13.7.2.1. Load Pre-trained Word Vectors¶ Because the training data set. In [11]: glove_embedding = text.embedding.create( 'glove', pretrained_file_name='glove.6B.100d.txt') Query the word vectors that in our vocabulary. In [12]: embeds = glove_embedding.get_vecs_by_tokens(vocab.idx_to_token) embeds.shape Out[12]: (49339,. In [13]: net.embedding.weight.set_data(embeds) net.embedding.collect_params().setattr('grad_req', 'null') 13.7.2.2. Train and Evaluate the Model¶ Now, we can start training. In [14]: lr, num_epochs = 0.01,.6124, train acc 0.663, test acc 0.748, time 41.9 sec epoch 2, loss 0.4304, train acc 0.806, test acc 0.810, time 41.9 sec epoch 3, loss 0.3806, train acc 0.833, test acc 0.840, time 42.3 sec epoch 4, loss 0.3485, train acc 0.850, test acc 0.834, time 41.8 sec epoch 5, loss 0.3256, train acc 0.863, test acc 0.847, time 41.3 sec Finally, define the prediction function. In [15]: def predict_sentiment(net, vocab, sentence): sentence = nd.array(vocab[sentence.split()], ctx=d2l.try_gpu()) label = nd.argmax(net(sentence.reshape((1, -1))), axis=1) return 'positive' if label.asscalar() == 1 else 'negative' Then, use the trained model to classify the sentiments of two simple sentences. In [16]: predict_sentiment(net, vocab, 'this movie is so great') Out[16]: 'positive' In [17]: predict_sentiment(net, vocab, 'this movie is so bad') Out[17]: 'negative' 13.7.3. Summary¶ - Text classification transforms a sequence of text of indefinite length into a category of text. This is a downstream application of word embedding. - We can apply pre-trained word vectors and recurrent neural networks to classify the emotions in a text. 13.7.4. Exercises¶ - Increase the number of epochs. What accuracy rate can you achieve on the training and testing data sets? What about trying to re-tune other hyper-parameters? -”. 13.
http://d2l.ai/chapter_natural-language-processing/sentiment-analysis-rnn.html
CC-MAIN-2019-18
refinedweb
793
55
Hi, I have my 2d iPhone game setup so that once a player touches the screen a ray is fired and if it hits an object the player can drag it around. I was just wondering how I could go about allowing the user to 'flick' objects in Java script so that if they quickly touch an object and drag in a direction the object will move in that direction, but this depends on how much speed was put into the swipe - so the quicker the swipe the faster and further the objects will travel. At the same time if they drag and hold the object will still follow the players finger as normal. Thanks for your time :) Did you get a solution for this? I'm looking for the same. Hi apple741 1, I think i have a half-baked solution. You are aware of the new Unity Events system and the new UI system that has come into unity, I was working on something similar, A UI Joystick for a 2D touch game, sort of what you see in that COD Zombies game for Iphone. This lead me to use the events that are called on other things such as Sliders. It took me FOREVER to get used to thanks to the incredibly poor or nonexistent docs, and a lack of tutorials. basically you will need these events OnDrag, OnBeginDrag, OnEndDrag and OnDrop. now to have them used we do this We inherit the various interfaces that the events are named after. to access them add the using statements i have provided. using UnityEngine; using UnityEngine.Events; using UnityEngine.EventSystems; using System.Collections; piblic class FlickableObject : Monobehaviour, IBeginDragHandler, IDragHandler, IEndDragHandler, IDropHandler { public void OnBeginDrop(PointerEventData data) { } public void OnDrag(PointerEventData data) { } public void OnEndDrag(PointerEventData data) { } public void OnDrop(PointerEventData data) { } } Now all it left is implementing it to control your object position and calculating it velocity to continue when you let go. Ok it turns out Monobehaviour doesn't invoke the Events i proposed earlier. I know Selectable does, but I'm assuming you aren't using UI object for this right? if you aren't, you could implement, or recycle the event system i proposed somehow to work with all objects. According to THe manual it should work if you add a Physics 2D ray caster to the camera, but so far that's failed me dismally. which means you will have to build your own. Uh, no actually I've got a 3D game, but I want the ball to move on the ground only. And my ground is on the X and Z plane, but most codes online have a X and Y plane movement only. ANd Vector2 objects are usually X and Y. Vector3 confuses me. :S Any suggetions? Answer by Fornoreason1000 · Mar 02, 2015 at 08:08 AM i have my 2d iPhone game i have my 2d iPhone game No worries it just means you will use the Physics Raycaster Component and add it to the camera that renders your Ball, your ball MUST have a Rigid body AND a Collider or it won't work. BTW I release i had some sort of memory problem with unity, which screwed up the physic, the raycaster actually do work. Back to it. now basically in OnDrag you want to be constantly setting the position to the position of your player finger. public void OnDrag(PointerEventData data) { //even data is based on Screen View so no need to worry about the XY XZ thing rigidbody.velocity = new Vector3(data.delta.x, 0, data.delta.y) * 10f } use the others for honing the control of the ball. you'll notice that the ball will "lag" behind the users. Make objects easier to select 1 Answer How to support Retina display 1 Answer Can I stop a lerp completely when it gets to 20% of its original value? 3 Answers Making a GetComponent array 1 Answer Flipping textures 0 Answers
https://answers.unity.com/questions/37498/how-to-flick-objects-in-a-2d-game.html
CC-MAIN-2019-39
refinedweb
665
70.84
Create a Desktop App With Angular 2 and Electron Create a Desktop App With Angular 2 and Electron Electron is an open-source project from GitHub that lets us create cross-platform desktop applications with web technologies. Join the DZone community and get the full member experience.Join For Free. Check out the code on GitHub. You can also check out our other Angular 2 material, including tutorials on working with pipes, models, and Http. Developing desktop applications is arguably harder than developing for the web. Throw in the fact that you would need three versions of the same desktop app to make it available for all the popular operating systems, plus all the work that needs to go into preparing for distribution, and it can be a daunting task for web developers to port their skills to native. This is where Electron comes in. Electron is an open-source project from GitHub that makes it easy to build cross-platform desktop apps using web technologies. With it, we get a nice set of APIs for interacting with Windows, OS X, and Linux operating systems, and all we need is JavaScript. There are, of course, other ways to create desktop applications with web technologies, but Electron is unique in its ability to easily target three operating systems at once. The other nice part of building apps with Electron is that we're not barred from using any particular framework. As long as it works on the web, it works for Electron. In this article, we'll explore how to wire up a simple image size calculator app using Electron and Angular 2. While the steps here are specific to Angular 2, remember that any front end framework will work with Electron, so these instructions could be adapted to make use of others. Setting Up Angular 2 and Electron We'll use Webpack for our Angular 2 setup, and we'll base the config loosely on the awesome Angular 2 Webpack Starter by AngularClass. Let's start with our package.json file to list our dependencies, along with some scripts that will let us easily run our webpack commands and also run the electron command to start the app. // package.json ... "scripts": { "build": "webpack --progress --profile --colors --display-error-details --display-cached", "watch": "webpack --watch --progress --profile --colors --display-error-details --display-cached", "electron": "electron app" }, "devDependencies": { "electron-prebuilt": "^0.35.4", "es6-shim": "^0.34.0", "ts-loader": "^0.7.2", "typescript": "^1.7.3", "webpack": "^1.12.9", "webpack-dev-server": "^1.14.0" }, "dependencies": { "angular2": "2.0.0-beta.0", "zone.js": "^0.5.10", "bootstrap": "^3.3.6", "gulp": "^3.9.0", "es6-shim": "^0.33.3", "reflect-metadata": "0.1.2", "rxjs": "5.0.0-beta.0" } } ... Next, we need some configuration for Webpack. // webpack.config.js var path = require('path'); var webpack = require('webpack'); var CommonsChunkPlugin = webpack.optimize.CommonsChunkPlugin; module.exports = { devtool: 'source-map', debug: true, entry: { 'angular2': [ 'rxjs', 'reflect-metadata', 'angular2/core', 'angular2/router', 'angular2/http' ], 'app': './app/app' }, output: { path: __dirname + '/build/', publicPath: 'build/', filename: '[name].js', sourceMapFilename: '[name].js.map', chunkFilename: '[id].chunk.js' }, resolve: { extensions: ['','.ts','.js','.json', '.css', '.html'] }, module: { loaders: [ { test: /\.ts$/, loader: 'ts', exclude: [ /node_modules/ ] } ] }, plugins: [ new CommonsChunkPlugin({ name: 'angular2', filename: 'angular2.js', minChunks: Infinity }), new CommonsChunkPlugin({ name: 'common', filename: 'common.js' }) ] }; We're telling Webpack to bundle up the Angular 2 scripts and serve them from a single angular2.js bundle that will be in the build directory. The scripts for our app will be served from a separate bundle called app.js. We also need some TypeScript configuration in a tsconfig.json file at the project root. { "compilerOptions": { "target": "ES5", "module": "commonjs", "removeComments": true, "emitDecoratorMetadata": true, "experimentalDecorators": true, "sourceMap": true }, "files": [ "app/app.ts" ] } All the files specific to our application will live inside the app subdirectory. There, we need to provide a package.json file that will simply tell Electron which script to use for bootstrapping. This will be the main.js file, and in it, we will tell Electron how to open and close our app. // app/package.json ... { "name": "image-size-calculator-app", "version": "0.0.1", "main": "main.js" } ... Now let's configure the application window. // app/main.js'); // Clear out the main window when the app is closed mainWindow.on('closed', function () { mainWindow = null; }); }); The main.js script is really just some boilerplate that Electron needs to fire up. We are keeping a reference to mainWindow so that garbage collection doesn't interfere and close the window on us. We create a browser window with specific dimensions and then load an index.html file from the app directory. Let's create this file next. Just like with a regular web app, we need an index.html entry point. <!-- app/index.html --> <html> <head> <meta charset="UTF-8"> <title>Image Size Calculator</title> <link rel="stylesheet" href="../node_modules/bootstrap/dist/css/bootstrap.min.css"> </head> <body> <div class="container"> <h1>Hello Electron</h1> </div> <script src="../node_modules/angular2/bundles/angular2-polyfills.js"></script> <script src="../build/common.js"></script> <script src="../build/angular2.js"></script> <script src="../build/app.js"></script> </body> </html> Aside from angular2-polyfills.js, The scripts that we're referencing aren't actually there yet, and that's because we haven't run our webpack command to generate them. The last thing we need to do before bundling our scripts is to create an empty app.ts file, as this is what our webpack.config.js file expects. With an empty app.ts in place, let's bundle the scripts. npm run watch This command was set up in our package.json file in the project root, and it runs webpack with some options. One of these options is to watch for changes, so we can now edit our app.ts file and everything will automatically get bundled again. If we look in our project root, we should now see our build directory. With all these files in place, we should be able to run the app. Remember that we've set up a command in our package.json file to do this. npm run electron If everything is wired up properly, we should now see our "Hello Electron" message. Yikes, that was a lot of boilerplate needed just to set things up! It's worth noting that a lot of this was just to set up Angular 2 and wasn't because of Electron specifically. If we were using a simpler framework (e.g., no TypeScript) or just plain JavaScript, then we wouldn't have needed as much boilerplate. The good news is that all we need to worry about now is the actual Angular 2 code. It's time to start building the app just as we would if it were on the web! Creating the Image Uploader Our simple app will let users drop images in so that they can find out their sizes. Why wouldn't they just check the image's properties? Good point. However, this app will give us a chance to see how Electron adapts web APIs for the desktop. Let's create the dropzone first. We'll do all of our Angular 2 work in one top-level component. // app/app.ts import {bootstrap} from 'angular2/platform/browser'; import {Component} from 'angular2/core'; import {NgFor} from 'angular2/common'; @Component({ selector: 'app', template: ` <div (dragover)="false" (dragend)="false" (drop)="handleDrop($event)" style="height: 300px; border: 5px dotted #ccc;"> <p style="margin: 10px; text-align: center"> <strong>Drop Your Images Here</strong> </p> </div> ` }) export class App { constructor() {} handleDrop(e) { var files:File = e.dataTransfer.files; Object.keys(files).forEach((key) => { console.log(files[key]); }); return false; } } bootstrap(App); <!-- app/index.html --> ... <div class="container"> <app></app> </div> ... To define some custom behavior for dropping an image into our app, we need to first pass false to the dragover and dragend events. The drop event is what we want to hook into, and for now we are simply logging out the details of the images we drop. That's right--we can see the same dev tools that we would in Chrome. If you're on a Mac, just do Option + Command + I to open them up. Note that to get hold of the event information for the drop, we pass $event, just like we would in Angular 1.x. So how are we getting this information, exactly? Electron provides an abstraction around native files so that we can use the HTML5 file API. With this, we get the path to the file on the filesystem. This is useful in our case, because we can link to our images and show them in our app. Let's set that up now. Displaying the Images Let's now put in some templating to display the images. For this, we'll want to use ngFor to iterate over the images we drop in. Note: As of Beta, templates are now case-sensitive. This means that what used to be ng-foris now ngFor. // app/app.ts ... template: ` <div class="media" * <div class="media-left"> <a href="#"> <img class="media-object" src="{{ image.path }}" style="max-width:200px"> </a> </div> <div class="media-body"> <h4 class="media-heading">{{ image.name }}</h4> <p>{{ image.size }} bytes</p> </div> </div> ` ... export class App { images:Array<Object> = []; constructor() {} handleDrop(e) { var files:File = e.dataTransfer.files; var self = this; Object.keys(files).forEach((key) => { if(files[key].type === "image/png" || files[key].type === "image/jpeg") { self.images.push(files[key]); } else { alert("File must be a PNG or JPEG!"); } }); return false; } } ... Now we push the dropped files onto an array called images and iterate over it in our template to get the details. To avoid other file types being dropped in, we are only accepting png and jpeg. Getting the Image Stats We want to have a way to display the total number of images dropped into the app, as well as the total size of those images. For this, we can create an imageStats function that returns these details. // app/app.ts ... template: ` <h1>Total Images: {{ imageStats().count }}</h1> <h1>Total Size: {{ imageStats().size }} bytes</h1> ` ... imageStats() { let sizes:Array<Number> = []; let totalSize:number = 0; this .images .forEach((image:File) => sizes.push(image.size)); sizes .forEach((size:number) => totalSize += size); return { size: totalSize, count: this.images.length } } ... Adding a Byte Conversion Pipe We're getting the number of bytes for each image, but ideally we would be able to get them in different units. It would be great if we had something to automatically convert the bytes to KB, MB, and GB, and display the appropriate units. We can do this easily with a custom pipe. // app/app.ts import {bootstrap} from 'angular2/platform/browser'; import {Component, Pipe, PipeTransform} from 'angular2/core'; import {NgFor} from 'angular2/common'; @Pipe({ name: 'byteFormat'}) class ByteFormatPipe implements PipeTransform { // Credit: transform(bytes, args) { if(bytes == 0) return '0 Bytes'; var k = 1000; var sizes = ['Bytes', 'KB', 'MB', 'GB']; var i = Math.floor(Math.log(bytes) / Math.log(k)); return (bytes / Math.pow(k, i)).toFixed(1) + ' ' + sizes[i]; } } @Component({ selector: 'app', pipes: [ByteFormatPipe], template: ` <h1>Total Images: {{ imageStats().count }}</h1> <h1>Total Size: {{ imageStats().size | byteFormat }}</h1> ` ... This pipe checks for the file size in bytes and returns the appropriate conversion. We then just apply the pipe to our template and we're able to get the desired output. Preparing for Distribution When distributing Electron apps, it's essential to generate an archive of the application files so that the source code is concealed. This can be done with the asar utility. npm install -g asar asar pack image-size-calculator app.asar The archive file can then be used for the app, and it will be read-only. We'll obviously want to change the name of the application and also provide a unique icon. Instructions for this, along with the other steps involved with distribution, can be found in the Electron docs. Aside: Authentication with Auth0 No matter which framework you use with your Electron app, you can easily add authentication to it with Auth0! Our Lock widget allows you to get up and running quickly. Sign up for your free Auth0 account to work with these directions. Before getting started with the code, you'll need to whitelist the file://* protocol in your Auth0 dashboard. This can be done in the Allowed Origins (CORS) area. To begin, include the Auth0-Lock library from the CDN and provide a button or other element to hook into. <!-- index.html --> ... <!-- Auth0Lock script --> <script src=""></script> <!-- Setting the right viewport --> <meta name="viewport" content="width=device-width, initial-scale=1.0, maximum-scale=1.0, user-scalable=no" /> <body> <h1>Authenticate with Auth0!</h1> <button id="login">Login</button> ... Next, create a new instance of Lock and set window.electron to an empty object to trigger the proper login flow for Electron. <!-- index.html --> <script> var lock = new Auth0Lock('YOUR_CLIENT_ID', 'YOUR_CLIENT_DOMAIN'); window.electron = {}; </script> Finally, trigger the Lock widget to be shown when the user clicks the Login button. In the callback, set the returned user profile and token into local storage for use later. <!-- index.html --> <script> ... document.getElementById('login').addEventListener('click', function() { lock.show(function(err, profile, token) { if (err) { // Error callback console.error("Something went wrong: ", err); } else { // Success calback. Save the profile and JWT token. localStorage.setItem('profile', JSON.stringify(profile)); localStorage.setItem('id_token', token); } }); }); </script> With the token in local storage, it can now be used as an Authorization header to access secured API endpoints. The way to attach the header to HTTP calls differs depending on which library or framework you're using. If you're using Angular 2 in your Electron app, you can use angular2-jwt. Follow the steps in the Angular 2 docs for more details. Not using Angular 2? We've got intergrations for many other frameworks and libraries as well! Wrapping Up Electron offers developers a way to create desktop applications with the web technologies they already know instead of needing to learn new languages that are specific to various operating systems. This is great, because skills can easily be ported, and code can be reused. Electron doesn't care about which framework we use for our apps. Even though it's in beta, Angular 2 is a great framework to use inside an Electron app and, once everything is set up, works just the same as if we were developing for the web. Published at DZone with permission of Ryan Chenkie , DZone MVB. See the original article here. Opinions expressed by DZone contributors are their own. {{ parent.title || parent.header.title}} {{ parent.tldr }} {{ parent.linkDescription }}{{ parent.urlSource.name }}
https://dzone.com/articles/create-a-desktop-app-with-angular-2-and-electron
CC-MAIN-2019-43
refinedweb
2,457
59.4
Opened 11 years ago Closed 11 years ago #4739 closed (wontfix) len(queryset) is slow Description len(queryset) uses queryset.iterator, which performs SQL SELECT instead of SQL COUNT. queryset.len should instead call queryset.count() which is much faster as it will use the cached result of SELECT if available, otherwise performing a SQL COUNT. Index: django/db/models/query.py =================================================================== --- django/db/models/query.py (revision 5583) +++ django/db/models/query.py (working copy) @@ -106,7 +106,7 @@ return repr(self._get_data()) def len(self): - return len(self._get_data()) + return self.count() def iter(self): return iter(self._get_data()) Change History (2) comment:1 Changed 11 years ago by comment:2 Changed 11 years ago by Your solution requires a second database query, which isn't so great. If a user only wants the size of a set of results, they can call count() explicitly. If they are calling len() on the queryset, then they've already created the queryset for other reasons, so actually using the results we've already queried is efficient (recalling that the queryset is going to cache the results anyway). All that being said, one enhancement that will be introduced soon is to speed up len() for most database backends by using the cursor's rowcount attribute and (only if that isn't implemented) then falling back to the size of the result set. So, yes, we can make len() slightly more efficient for most database backends (not SQLite, but life's like that sometimes), but it isn't going to be by making another database call. The patch again wrapped in a code block:
https://code.djangoproject.com/ticket/4739
CC-MAIN-2018-09
refinedweb
272
64