text stringlengths 454 608k | url stringlengths 17 896 | dump stringclasses 91 values | source stringclasses 1 value | word_count int64 101 114k | flesch_reading_ease float64 50 104 |
|---|---|---|---|---|---|
Originally posted on my blog - selbekk.io
When you're working on a component library, or just creating reusable components in general, you often end up creating small wrapper components that only adds a css class or two. Some are more advanced, but you still need to be able to imperatively focus them.
This used to be a hard problem to solve back in the days. Since the ref prop is treated differently than others, and not passed on to the component itself, the community started adding custom props named
innerRef or
forwardedRef. To address this, React 16.3 introduced the
React.forwardRef API.
The
forwardRef API is pretty straight-forward. You wrap your component in a function call, with is passed props and the forwarded ref, and you're then supposed to return your component. Here's a simple example in JavaScript:
const Button = React.forwardRef( (props, forwardedRef) => ( <button {...props} ref={forwardedRef} /> ) );
You can then use this component like ref was a regular prop:
const buttonRef = React.useRef(); return ( <Button ref={buttonRef}> A button </Button> );
How to use forwardRef with TypeScript
I always screw this up, so I hope by writing this article I can help both you and me to figure this out.
The correct way to type a
forwardRef-wrapped component is:
type Props = {}; const Button = React.forwardRef<HTMLButtonElement, Props>( (props, ref) => <button ref={ref} {...props} /> );
Or more generally:
const MyComponent = React.forwardRef< TheReferenceType, ThePropsType >((props, forwardedRef) => ( <CustomComponentOrHtmlElement ref={forwardedRef} {...props} /> ));
It was a bit un-intuitive at first, because it looks like you can pass a regular component to ForwardRef. However, regular components don't accept a second ref parameter, so the typing will fail.
I can't count how often I've done this mistake:
type Props = {}; const Button: React.RefForwardingComponent< HTMLButtonElement, Props > = React.forwardRef( (props, ref) => <button ref={ref} {...props} /> );
This is a mistake, because the RefForwardingComponent is the type of the render function you create (the one that receives props and ref as arguments), and not the result of calling React.forwardRef.
In other words - remember to pass your type variables directly to
React.forwardRef! It will automatically return the correct type for you.
Another gotcha is the order of the type variables - it's the ref type first, then the props type. It's kind of counter-intuitive to me, since the arguments to the render function is the opposite (props, ref) - so I just remember it's the opposite of what I'd guess. 😅
I hope this article helped you figure out this pesky typing issue that have gotten me so many times in a row. Thanks for reading!
Discussion (0) | https://practicaldev-herokuapp-com.global.ssl.fastly.net/selbekk/forwarding-refs-in-typescript-elp | CC-MAIN-2021-43 | refinedweb | 440 | 57.16 |
Currently I have a jQuery plugin which not only sets up its function in $.fn but also $ itself, checking if (typeof(this) == ‘function’) to see if it has been called through $.pluginname or $(elm).pluginname… is this bad practice, and if so, is there a better way to make providing the element optional?
Javascript – Storing function in object – bad practice?
Is it considered bad coding-practice to store functions in an object instead of just defining them (and therefore globally)? Consider: 1. Foo = { bar: function() { alert(baz); } } Foo.bar(); vs. 2.
Bad practice to use Variable functions?
Ok, really simple question – is it bad practice to use Variable functions in php? I have a validator class that can use these and it makes sense to me but it does seem like it may be bad practice to d
Are anonymous functions a bad practice in JavaScript?
I was reading that using anonymous functions in javascript is bad practice, because it can make debugging a pain, but I haven’t seen this for myself. Are anonymous functions in JavaScript really bad p
Is supplying alike default special member functions a bad practice?
We just have a discussion over here about explicitly declaring the special member functions. Is it bad practice doing so? What I mean: The compiler generated versions perform memberwise operations. If
Attaching underscore to Object prototype, bad idea?
The common wisdom is the just because you can augment native types, doesn’t mean you should. That augmenting a native types prototype is a bad idea always, with the only exception being to polyfill be
Static method get – is this bad practice?
Had a discussion with a colleague about wether this is bad practice or not. Now I can not find immediate examples of this online. We have a lot of database object mappers and call it’s functions like
Is it bad practice to instantiate an object as an argument inside a method/function call?
Is it bad practice to instantiate an object as an argument? For example, is it bad practice to do this? myFunction(new Foo); Or should you always do it this way: $foo = new Foo; myFunction($foo); My
is it bad practice to parameters through a string of functions?
im building a reasonably large JS application and i wanted to get your opinions on some of the logic. i wanted to know if its considered bad practice to pass a parameter through a string of functions
Why is it considered bad practice to use “global” reference inside functions? [duplicate]
Possible Duplicate: Are global variables in PHP considered bad practice? If so, why? global in functions Project summary – I am writing a web CMS to get my feet wet with PHP / MySQL. In order to bre
Is it bad practice to echo out functions in php?
Is it bad practice to echo out a bunch of HTML using a function in php and having it something like this: function my_function() { global $post; $custom_fields = get_post_custom(); $some_field = $cust
Answers
The jquery extend function is exposed on both namespaces:
The jQuery source defines both as:
$.extend = $.fn.extend = ...
This would lead me to believe that they intended for you to be able to do this. However, if you look at the way the jQuery authors have set up their own functions to use both, you may get a better idea of the best practice.
For instance, you would normally want the $.fn function to call the $ function. (Much like the $.data function in the jQuery source)
$.extend({ data: function(elem, key, value) {...} });
and (a bit simplified):
$.fn.extend({ data: function(key, value){ return this.each(function(){ $.data(this, key, value); }; } });
This way, one calls the other and takes care of the check for which version you are using, and if you wanted to, you could just check for an undefined ‘elem’ param in the $ namespace.
Have you had a read through the jquery authoring docs? The log function in the example is considered the standard way of adding functions to the jquery object for developers. | http://w3cgeek.com/is-attaching-functions-to-the-jquery-object-bad-practice.html | CC-MAIN-2019-04 | refinedweb | 676 | 72.87 |
.
Program Description:-
In this example we have created Properties class and we have put three key/value pairs and every key/value pairs are both String values. We have used setProperties() method.We have store values setProperty(String key, String value) method Calls the Hashtable method put And we have use FileOutputStream method. This method we can used to be write or store the values in properties file. The OutputStream we used when load() the method in a program.
Example:-
import java.io.*; import java.util.Properties; public class WriteProperties { public static void main(String[] args) { Properties file = new Properties(); try { File file2 = new File("c://write.properties"); FileOutputStream fos = new FileOutputStream(file2); file.setProperty("database", "localhost"); file.setProperty("userName", "Naulej"); file.setProperty("Password", "naulej"); file.store(fos, ""); fos.close(); } catch (FileNotFoundException e){ e.printStackTrace(); } catch (IOException e) { e.printStackTrace(); } } }
How to Run on command prompt
Output Display setProperties values properties file in java
Post your Comment | http://roseindia.net/java/example/java/core/write-properties-file-in-java.shtml | CC-MAIN-2016-22 | refinedweb | 157 | 53.27 |
Christoph Hellwig wrote:>>George Anzinger <george@mvista.com>:>> o POSIX clocks & timers> > > Care to explain what FOLD_NANO_SLEEP_INTO_CLOCK_NANO_SLEEP> is supposed to do? It's always defined in signal.h, so we can> aswell get rid of it..It is there in case some one might want nanosleep to NOT be folded into clock_nanosleep. For a while this was a moving target and I got a bit paranoid :) I see no real reason to keep it...> > And what's this:> > #ifndef div_long_long_rem> +#include <asm/div64.h>> +> +#define div_long_long_rem(dividend,divisor,remainder) ({ \> + u64 result = dividend; \> + *remainder = do_div(result,divisor); \> + result; })> +> +#endif /* ifndef div_long_long_rem */> > Any reason you can't just use do_div directly like everyone else? :)Actually I have coded a better function as part of the expanded high-res-timers which does a 64-bit/32-bit div in a much cleaner way. Again, it is part of the full high-res-timers patch. I have been thinking of submitting the complete set of these math routines outside of the high-res patch. They are designed to make scaled math easy. I have both a generic and a i386 header file, but they still need a bit of testing.The issue is getting around the C limitation of not being able to use the div and mpy instructions that take 64-bits/32-bits and return 2 32-bit results and the mpy which takes 2 32-bit operands and returns a 64-bit result.For scaled operations, they also roll a shift into the mix in an efficient way (i.e. a small inline asm function).-- | http://lkml.org/lkml/2003/2/25/292 | CC-MAIN-2013-20 | refinedweb | 264 | 65.22 |
Eclipse Community Forums - RDF feed Eclipse Community Forums how to do @Overrides in EDT <![CDATA[In RBD I have a number of widgets that have field called children widget[]; I could then create a function like function getChildren() returns (widget [ ]) {@Override {}} return (things); end This @Override allowed for my function to get called rather than the widget children getters and setters. Override is no longer valid in EDT but EDT does not automatically start using my getChildren or setChildren. How would I do this now where my widget has children and my getChildren and setChildren is used rather than the widget default? ]]> Aaron Allsbrook 2011-11-08T19:21:45-00:00 Re: how to do @Overrides in EDT <![CDATA[just to follow-up on why i want to use/overrired children - I want my widget to be usable from the palette such that you can drop it from the palette but also drop into the widget. When you drop into a widget it automatically uses the children attribute to capture the parenting.]]> Aaron Allsbrook 2011-11-08T19:50:43-00:00 Re: how to do @Overrides in EDT <![CDATA[Aaron, I believe the Box widget (Box.egl) is an example of another widget that used to specify @Override for the getChildren() function. The @Override annotation is no longer required in EDT due to the fact that the RUIWidget stereotype now specifies a defaultSuperType of Widget. The overrides in EDT should be automatic. Can you provide a small example that highlights the problem you are having? Can you step through a test case using Box and your widget to see why Box's getChildren function is being invoked and yours is not? -Brian ]]> Brian Svihovec 2011-11-08T20:36:01-00:00 Re: how to do @Overrides in EDT <![CDATA[Thanks Brian, Ill give it a try and let you know]]> Aaron Allsbrook 2011-11-09T16:16:49-00:00 | http://www.eclipse.org/forums/feed.php?mode=m&th=261764&basic=1 | CC-MAIN-2015-06 | refinedweb | 319 | 70.23 |
Need to edge out the competition for your dream job? Train for certifications today.
Submit
<xsl:template
Select all
Open in new window
<xsl:template
<xsl:template
<xsl:copy>
<xsl:apply-templates
</xsl:copy>
</xsl:template>.
<xsl:template match="ns1:ListOfFields/ns
...
You have two templates with the same match.
This is not allowed, and two can not be active at the same time
The last one will be picked and depending on the processor, you will get a warning or an error
the trick is to be more specific in the template itself
meaning that the best practice in XSLT is to have three templates in this case (and drop the choose construct)
first template
Open in new window
second template
Open in new window
The empty template will delete these nodes
A third template should serve as a fallback for cases that don't apply to the condition
You don't need a specific template for these "otherwise", since these will be picked up by the generic identity copy you already have
Open in new window
If no longer relevant, I hope you learn something from this
Alternatively you can have 1 template and two when clauses in a big choose...
but what I propose is commonly considered best practice
So, you should
-always avoid choose constructs in favour of multiple templates with a predicate in the match
- never have two templates matching the same node with equal preference and equal mode
cheers
Geert
string MyXmlPath = Request.PhysicalApplicatio
// Xslt
string MyXsltPath = Request.PhysicalApplicatio
// Now create an instance of the XPathDocument class and store in there the XML document
XPathDocument xmlDoc = new XPathDocument(MyXmlPath);
// Now create an instance of the XslCompiledTransform class that we will use to transform the XML data using an XSLT stylesheet
XslCompiledTransform XSLTransform = new XslCompiledTransform();
XSLTransform.Load(MyXsltPa
// Then execute the transform and output the results.
XSLTransform.Transform(MyX
Do not forget to add these namespaces in the Default.aspx.cs file
using System.Xml;
using System.Xml.Xsl;
using System.Xml.XPath;
Run your application and see the MyXmlData's related data (transformed by the XLST file) being displayed in the screen.
If you want the transformation results to be saved to an .htm file you must ,
1. comment out this line of code
// XSLTransform.Transform(MyX
2. Add another item to your site, an .htm file. Name it MyHtmlPage.htm.
iii) Add these lines of code to the Page_Load routine.
string MyHTMLPath = Request.PhysicalApplicatio
XSLTransform.Transform(MyX
Run your application and see the MyXmlData's related data (transformed by the XLST file) being written into the html file.Open the MyHtmlPage.htm file
I was using below online xslt tester to get my result. Later I might integrate with ANT and/or vbscript (when using vbscript I was using microsoft command line util "msxsl.exe")
please help if you can
This course will introduce you to the interfaces and features of Microsoft Office 2010 Word, Excel, PowerPoint, Outlook, and Access. You will learn about the features that are shared between all products in the Office suite, as well as the new features that are product specific.
not response. please delete
Sorry for not paying attention to this question earlier.
I will object to the deletion and explain what you are doing wrong.
I hope it is still relevant to you, I only saw this now.
If it is no longer relevant... ignore my response and continue deletion, no problem with that
Geert
I realized that after few days (reading) and when ahead with choose (for now)
Thanks and regards
(note that there is a Topic Area specific for XSLT, I follow that one up more closely) | https://www.experts-exchange.com/questions/27825469/Apply-multiple-xsl-match-template-on-a-xml.html | CC-MAIN-2018-26 | refinedweb | 613 | 61.77 |
It happens all the time. You download an example application from the web but it does not run without exactly the correct versions of jars to execute against. The best solution to this problem is to release a working demo of the application that has the bare minimum of jars required for it to execute.
When developers use lazy names for jars, future users have no idea what versions were used in a project. For example, some of the jars for a real project were listed below.
46,725 commons-codec.jar 71,442 commons-discovery.jar 279,781 commons-httpclient.jar 39,443 commons-logging.jar 313,898 dom4j.jar 47,531 ehcache.jar 421,601 quartz.jar
The file names give no indication of the versions of .jars used – only the file size and occasionally the manifest can be used to tell them apart.
46,725 commons-codec-1.3.jar 71,442 commons-discovery-0.2.jar 279,781 commons-httpclient-3.0.1.jar 38,015 commons-logging-1.0.4.jar 313,898 dom4j-1.6.1.jar 47,531 ehcache-1.1.jar 421,601 quartz-1.6.0.jar
In the second example, only exactly what was needed is defined in the lib directory. There are no overlapping namespaces of package names within the jars so they can't interfere with each other.
Tip: Maven allows specific management of JAR versions but can also burn a lot of time on projects if you do not have Maven expertise and slow builds to a crawl if you don't manage a local repository. It's not worth the pain for simple JAR version management. | http://javagoodways.com/jar_hell_Jar_hell.html | CC-MAIN-2021-21 | refinedweb | 280 | 61.63 |
Stefano Mazzocchi wrote:
> I find myself in the strange situation where XSP is something in between
> JSP and E-XSLT....
> . . .
> That's how I feel XSP right now: a little squeezed between two big guys
> :)
Hmmm, unfortunately I've been so _incredibly_ busy these
days I haven't had the time it takes to write a well thought out
I'll try to present one of the main points now before all of
us end up losing our religion:
XSP Layer 1 is a *code-generation* vocabulary. E-XSLT is
part of a transformation vocabulary.
This is a critical distinction!
XSP Layer 1 is a code generation vocabulary aimed at
producing DOM transformation code. As such, it can be
used to generate server pages (as it does now) or to
generate E-XSLT extension handlers.
Being a DOM transformation language, XSLT provides
means of dynamically building nodes. When you say:
<xsl:element
you're saying "insert an element called 'continuous-function'
at the current result tree position"
In XSP Layer 1 (being a code-generation, server page
vocabulary) when you say:
<xsp:element
you're actually saying "insert executable code to create
an element called 'continuous-function' at the current
source program position".
These 2 are _not_ the same! Dynamic node building tags
in XSLT and XSP cannot be unified.
Let's assume we're using XSP's code generation facilities
to generate *XSLT extension handlers.* (a perfectly
legitimate usage, as XSP doesn't dictate the final "rendition"
of the generated source program as a servlet, a Cocoon
producer or, yes, an XSLT extension handler).
Think carefully: what would we use in this case? <xsl:element>
or <xsp:element>?
Are you done thinking? You see? XSP Layer 1 defines its
own namespace, so it comes naturally a beast like
<xsp:element> rightfully exists.
The 2 syntaxes DO NOT overlap in regard to dynamic
node creation.
The fact that both <xsp:element> and <xsl:element> yield
a comparable end result does not mean their semantics
are the same.
There's much more to this. I just wanted to present one
of the key facts. As long as we (wrongfully) look at XSP
as a plain transformation language we'll end up perceiving
it as redundant in regard to E-XSLT. As soon as we introduce
the code-generation factor, our function becomes clearer.
Ricardo | http://mail-archives.apache.org/mod_mbox/cocoon-dev/200001.mbox/%3CNDBBJIOLGLAOBAAGJJIMOEDLCOAA.ricardo@apache.org%3E | CC-MAIN-2016-30 | refinedweb | 393 | 54.63 |
In this article I would like to show how JavaScript charting libraries can be hooked up to server-side Java code. Many Java developers don't like dealing with JS. I know I was one of them. But now I've seen the light and undergone somewhat of a conversion.
Over the years JavaScript hasn't gone away and has only become more important as other browser based technologies have fallen by the wayside like Flash, Silverlight and Applets. A comment that is sure to trigger comments :-)
There are dozens and dozens of JS Charting Libraries but for the purposes of this article I'm going to to use Google Charts. The approach outlined in this article can can easily be applied to other charting libraries.
You can get started with Google charts without writing any server side code by following this link for the parent website and this link to a starter tutorial which is pretty much copy & paste to get started.
One thing to be aware of with Google Charts, is that according to the license you are not allowed to download and host the JS yourself, so your solution will only work when the browser has access to the Internet. See this link. There used to be an API limit, but that applied to similar but different API call Google Image Charts which has now be depecrated.
The example project is available here on BitBucket.
If your Maven environment is all setup and working you should be able to just do the following from the command line to build and run the Jetty server.
mvn jetty:run
If you don't have Maven then it shouldn't be too hard to use you favourite Web Framework, include the sample code and the three Jersey jar files as decribed in the pom.xml.
Once the Jetty server has started the demo pages are available at;
Technique
You could just print all of the data into the Html page when your Java Web Framework rendered the page like ExampleOne, but that's not very interactive so in this approach I'm going to show you how to respond to user input on the browser and then update the chart with new data from the server.
If you want the chart to respond to the user then you'll need AJAX of some kind to make the background request to get the data and then update the chart.
The high level sequence of events is;
- The Select component fires an event, or the page is drawn
- JQuery performs the AJAX request to make a REST call against the server
- JAX-RS/Jersey on the Java server to responds to the REST call and returns a Json response.
- Google Charts draws the chart
Here are some of the highlights from the code and an explanation of what they are doing.
This bit of code listens for a change value event on the Select component and triggers the whole sequence of events when the user selects something from the combo-box.
$(document).ready(function(){ $("#startingYearSelect").change(function() { drawChart(); }); });
The $() syntax is a JQuery selector to find the element that we want to interrogate. "#startingYearSelect" tells JQuery that we want to find the element by the HTML ID attribute. If it had been ".something" then JQuery would have looked for all elements that matched the "something" CSS class. There are many other JQuery selectors, but these are the two you will probably use most often.
This bit of the code is JQuery to read the current value out of the "select" combo-box.
// Get the current value of the select box var startingYear = $("#startingYearSelect").val();
JQuery is also used to retrieve the Json data from the server using an AJAX request.
var jsonData = $.ajax({ url: "/rest/ExampleTwoResource/" + startingYear, dataType: "json", async: false }).responseText;
You can execute this URL yourself from the browser with;
The URL is processed on the server by the Jersey REST class;
@Path("/ExampleTwoResource/{startingYear}") public class ExampleTwoResource { @GET @Produces({MediaType.APPLICATION_JSON}) public Object[] getMethod(@PathParam("startingYear") Integer startingYear) { System.out.println("ExampleTwoResource: startingYear = " + startingYear); ... } }
This method will return an answer of something like;
[["Year","Sales","Expenses"],["1980",14100,8300],["1981",5400,7700],["1982",11300,11600],["1983",6000,5300]]
Just to make this a little clearer, think of this data as a table of rows and columns. The column headers are the data series labels and the rows are the values for each column on the x-axis.
[
["Year", "Sales","Expenses"],
["1980", 14100, 8300],
["1981", 5400, 7700],
["1982", 11300, 11600],
["1983", 6000, 5300]
]
Back on the browser the Json data that we get is then converted into a JavaScript object
// jsonData is just a String, flip it into a JS object var jsObject = jQuery.parseJSON(jsonData); And then we draw the Chart... // Specify some options about how the chart is to be drawn var chartOptions = { title: "Company Performance", hAxis: {title: "Year"}, }; // This is where we want to put the chart var chartDiv = document.getElementById("chart_div"); // Create the Chart var chart = new google.visualization.ColumnChart(chartDiv); // Ask the Chart to draw itself with these data and options chart.draw(chartData, chartOptions); | http://java.dzone.com/articles/javascript-charts-java | CC-MAIN-2014-49 | refinedweb | 860 | 66.88 |
Microsoft To Banish Memcpy() 486
kyriacos notes that Microsoft will be adding memcpy() to its list of function calls banned under its secure development lifecycle. This reader asks, "I was wondering how advanced C/C++ programmers view this move. Do you find this having a negative impact on the flexibility of the language, and do you think it will restrict the creativity of the programmer?"
No - there are plenty of safer alternatives (Score:5, Insightful)
Lame story (Trying for flamebait here?)
Re: (Score:3, Insightful)
And they aren't even removed, but (by default) a warning is issued when using them. I'd say it's a good move - passing the size of the destination buffer is usually not that complicated.
Re:No - there are plenty of safer alternatives (Score:5, Informative)
Are you high? It already takes a size argument. If this were about strcpy(3), then you'd have a point, but I do not think memcpy(3) means what you think it means.
I'm not saying you can't get yourself into trouble with inappropriate use of memcpy(3), but buffer overruns aren't the go-to threat every time.
Re:No - there are plenty of safer alternatives (Score:5, Insightful)
It's a psychological thing. Having a separate parameter for the size of the destination buffer forces the programmer to think about what that size is. Too often programmers call memcpy passing the size of the data that needs copying and forget to check that the destination is big enough. And that's why we see so many buffer overflows.
If you never make this mistake continue to use memcpy. I don't care and neither does Microsoft.
Re:No - there are plenty of safer alternatives (Score:5, Insightful)
It still will not help.
If they are a sloppy enough programmer not to look at what is going on, and to ensure the size of the destination, they will be sloppy enough to use the same dratted variable in both spots, drool all over the keyboard and move on to the next sloppy bit of code.
Re:No - there are plenty of safer alternatives (Score:4, Insightful)
Re: (Score:3, Insightful)
Who do you think has to write those high level wrappers? memcpy is one of the most ridiculously popular functions in systems level C/C++ code, especially for copying arrays or sub-arrays, where it can be much faster than a hand-written loop. You can wrap it in a function for every type you need, but that's still a lot of memcpy you have to write properly. Fortunately it's easy and this whole argument is moot.
Re:No - there are plenty of safer alternatives (Score:4, Insightful)
First of all, memcpy IS a libary call.
"but they should have to explain their need and the benefit over using a higher level wrapper to lots and lots of people."
One source tree, many O/S's. Memcpy is a ANSI C library call, I have been using it for more than 20yrs without a problem. IF MS want to pop up a warning that tells me my source will compile on gcc I can't stop them from doing so.
Re:No - there are plenty of safer alternatives (Score:4, Funny)
I'm not saying you can't get yourself into trouble with inappropriate use of memcpy(3), but buffer overruns aren't the go-to threat every time.
Didn't we already defeat the goto threat?
More to the point, if the developer doesn't know what memcpy does and how to use it correctly
... I mean ...
You might aswell write the 3 lines of code behind memcpy yourself.
The goto threat == Raptors (Score:4, Funny)
Re:No - there are plenty of safer alternatives (Score:5, Insightful)
Re:No - there are plenty of safer alternatives (Score:4, Insightful)
Technically one size argument is enough, but in a large enough software project the code that allocates the destination buffer is maintained separately from the code that copies into it. Any failure in communication (e.g. building against an outdated library) will lead to someone's linker writing a binary with code that will overrun a buffer.
With an explicit destination size parameter, the buffer copy code is no longer as sensitive to changes at the allocation site. A breakdown in communication will lead to a binary that produces a controlled runtime error instead of a buffer overrun.
Re:No - there are plenty of safer alternatives (Score:5, Insightful)
Whilst you are correct, if Microsoft is going to essentially replace the standard C library with one that has an incompatible API, why not just call it a new library and have done with it?
Or, better yet, if security really was the goal, develop a C-like language that was secure by design?
By simply making things awkward for people to write portable code, all they do is ensure that there are multiple code bases for projects (which increases the opportunity for error) or ensures that people won't write portably. Which is a more likely goal, given who we are talking about.
Re: (Score:3, Insightful)... [microsoft.com]
Re:No - there are plenty of safer alternatives (Score:4, Interesting)
>Or, better yet, if security really was the goal, develop a C-like language that was secure by design?
Or, better yet, if security really was the goal, use Ada.
There, fixed that for you
:o)
Re: (Score:3, Informative)
Perhaps that was a little bit ambiguous of me. What I was referring to were programming languages which reduce the possibility of error (eg: ADA) and/or which are designed to enforce good programming practice and rigorous standards (eg: Occam).
I consider these to be "secure by design" because they were designed to make the more common security flaws impossible and were also designed to make it possible to validate the software. (Both, if I understand the histories correctly, were linked to military efforts
Re:No - there are plenty of safer alternatives (Score:4, Insightful)
I understand the problem you are describing, but I fail to see how this solution addresses it. If there is already a disconnect between the programmer doing the copying and the programmer doing the allocating, then making the programmer doing the copying repeat himself is not going to fix the problem.
The only problem this function solves is buffer over flows caused by a programmer calculating a number of bytes to copy at runtime (e.g. by reading it from a Content-Length header) and failing to check the calculated value against what he believes is the actual size of the buffer. If the value that he believes to be the size of the buffer is wrong, changing from memcpy to memcpy_s will not catch the mistake. In other words, changing from memcpy to memcpy_s will only protect against sloppy programmers, and if they don't understand what the function is supposed to be protecting them from (which is likely) they'll probably just use the same value for copy_size and dst_size anyway (or switch to memmove), which will completely defeat the purpose of blacklisting memcpy in the first place.
Not to mention, if you're doing any pointer arithmetic and writing to an offset some number of bytes past *buffer, then passing the size of *buffer doesn't really help, unless the function is smart enough to know that (I don't see how it could be unless we pass that as a parameter as well), or the user is smart enough to calculate the remaining size of *buffer. If the user is one of the sloppy programmers that this function is meant to protect against in the first place, I think that is highly unlikely, don't you?
Re: (Score:3, Informative)
Obviously you have never made a parser of any kind. Any time you read a file in, or use a data stream (cin, cout, cerr, etc.), and many more situations (printf, aprintf, aprintf, etc. not to mention document editors, web browsers, etc) you need to be able to have a dynamically sized buffer to at least manage
Re:No - there are plenty of safer alternatives (Score:4, Insightful)
safe versions - if you prefer to blindly program away, not worrying about where your objects end up in memory. But - what is "safe"? Is there any replacement for properly testing all I/O from all possible sources?
Re:No - there are plenty of safer alternatives (Score:5, Informative)
That's physically impossible, even given infinite time. Read up on the halting problem.
However, programming a framework in which we may rule out certain things, for example a process jumping over and altering the OS, is perfectly possible. It just has to be verified through reasoning, rather than testing. The unit testing methodology is really the problem here. You cannot unit test everything.
Don't get me wrong, testing is a good start, but it's no proof of security, and a proof of security, while very hard, is possible. Kudos to Microsoft..
And when you're being intentionally unclear to the computer in addition to the reader, your code has no place in a secure production setting.
Re: (Score:3, Insightful)
That's physically impossible, even given infinite time. Read up on the halting problem.
No it's not. A computer is not a Turing machine - it has finite memory (=> finite # of states), an algorithm has to halt or visit a state it's already been in..
Fair enough, well done MS. But their new memcpy can be lied to (memcpy_s(dst, 9999, src, 40)) and guys who aren't keeping track of (and checking) their remaining destination size are the guys likely to lie to memcpy_s
Re:No - there are plenty of safer alternatives (Score:5, Funny)
Re: (Score:3, Insightful)
When it comes to programming languages, that approach just means either lots more dead or broken code, or a lot less code AND a lot less good code.
There's a higher percentage of C programs where "an attacker can execute arbitrary code of the attacker's choice", compared to say Java or Python programs. Just a look at Bugtraq over the years.
I'm half joking but there might be fewer than 10 people in the wo
"memmove()" is safer than "memcpy()". (Score:2, Funny)
As Windows products are now (and have been) mainstream products used extensively in banks and other financial institutions, reliability and security (RS) have prime importance. The speed that "memcpy()" gets you is not worth the price of reduced RS.
Re: (Score:3, Informative)
Internally to Microsoft, "banned" means that no products can be shipped using these functions. Externally, this is just a recommendation.
Re:No - there are plenty of safer alternatives (Score:5, Informative)
Just like removing printf, scanf, and most other copy/string functions. There are safe versions of memcpy that work just fine and are just as easy to use...
There's nothing unsafe about printf (since compilers started doing format type checking), as long as you don't use user input as the format string. To print user input, you use printf("%s", user_input).
strcpy() is unsafe because you don't know how many bytes you are going to be copying. strncpy() is completely safe as long as you aren't brain dead and set the 'n' to the size of the destination buffer (as opposed to strlen(src) which would be brain dead) and then slap an '\0' into the last index of the dest. sprintf, same deal, just use snprintf and tell it the max bytes it can print.
So what's unsafe about memcpy()? You explicitly specify the number of bytes to copy. If that number of bytes is greater than the known size of the destination buffer, then you've got a problem that simply adding a second 'size of dest' paramater to the copy won't fix because you already screwed the pooch on figuring that out now didn't you?
Yes memcpy() doesn't work if src and dest overlap. When that's happening, you typically know about it (you've got some clever in-situ array modification going on) and can use memmove(). memmove(), on the other hand, is equally unsafe if you can't properly specify the number of bytes to copy.
Bottom line: There's no such thing as a "safe" copy in C when we're assuming the programmer can't figure out the destination buffer size.
Re:No - there are plenty of safer alternatives (Score:4, Informative)
There's nothing unsafe about printf (since compilers started doing format type checking), as long as you don't use user input as the format string. To print user input, you use printf("%s", user_input).
%n writes to the stack. It's disabled by default in VS2005 onwards. More at and
Re:No - there are plenty of safer alternatives (Score:4, Informative)
So why is strncpy in the banned [microsoft.com] function list?
I think this is just Microsoft trying to embrace and extend. There's no better way to do that then making most existing C and C++ code invalid. The quickest alternative, of course, is to write it in C# or some other embraced language.
Hypocritically, Microsoft did NOT add memset to the banned list despite it having almost exactly the same problems as memcpy. Why? Almost every MSDN example begins with "memset(somestruct,0,sizeof(somestruct))" and invalidating every MSDN example would probably look bad.
As you pointed out, the size of the destination buffer makes no sense when dealing with pure pointers. Often memcpy is used to move memory around inside larger buffers, which completely invalidates memcpy_s as a safe replacement. memcpy is also often used to copy smaller buffers into larger ones, and accidentally copying the uninitialized (or carefully crafted by some exploit) data that comes after the source object can be just as dangerous. The correct replacement, memcpy_overkill(void *source_object, size_t source_size, size_t source_offet, void *dest_object, size_t dest_size, size_t dest_offset, size_t count) is what they're REALLY looking for, but this is impractical primarily because of the heavy use of context-less pointers (to objects within arrays, or within some other structure; the void * in memcpy's prototype hints at further possibilities) in C and C++.
Re:No - there are plenty of safer alternatives (Score:4, Interesting)
So why is strncpy in the banned function list?
Because strncpy() is as bad as strcpy(). The problem lies in the fact that if the source string is longer than the destination len, then strncpy simply stops the copy without writing a NULL. The next str* function used on the string is likely to crash.
Re:No - there are plenty of safer alternatives (Score:5, Interesting)
If you're a competent programmer then nothing is unsafe, but obviously there are a lot of stupid programmers out there who make fundamental mistakes fucking with memory when they don't understand what they're doing. What Microsoft is trying to do here is to eliminate a low hanging fruit of software security that has led to hundreds if not thousands of buffer overflow conditions and associated vulnerabilities/exploits.
The trouble is, it doesn't. Banning functions like strcpy made sense, because they were nearly always unsafe to use. On the other hand, if you're memcpying too much data for the destination, there's probably something more fundamentally wrong with your code. This, at best, conceals the bug by truncating the copy - leading to unpredictable issues later in execution instead.
Re:No - there are plenty of safer alternatives (Score:5, Insightful)
What Microsoft is trying to do here is to eliminate a low hanging fruit of software security that has led to hundreds if not thousands of buffer overflow conditions and associated vulnerabilities/exploits.
They might be trying, but they are failing, because the mistake that leads to the error in the first place (miscalculating destination buffer size) has the same effect (buffer overrun) whether you use memcpy() or memcpy_s().
Re: (Score:3, Informative)
Have a look at strlcpy [wikipedia.org]. It's non-standard, sure, having originated in OpenBSD. But it can now be found in the libc of all the *BSDs, Mac OS X, and even Solaris.
It guarantees the destination is always nul-terminated and it makes it easy to check if your destination buffer was short.
Re: (Score:3, Insightful)
So copying with the destination buffer size specified makes it safe? How so?
If I knew that the value I want to copy won't fit into the destination buffer, I wouldn't copy it there. Simple as that. Oh, because the size of the source might be variable? Then maybe the coder should do a size_ssize_d?size_s:size_d as the size argument. Taking a variable that isn't under the coder's full control as a size in any memory manipulation operation is asking for trouble anyway.
Do you think it will be safer now that you
Should have been done 30 years ago. (Score:2, Insightful)
This should have been done thirty years ago.
They should go one better... (Score:5, Insightful)
...and pop up a message box asking the user to confirm they want to copy the memory, and if they press OK then they should have to enter a captcha.
Seriously though, how is it supposed to make your code safer if you pass the size you think your destination buffer is? With memcpy, that size is implicitly greater or equal to the copy size and it's the caller's responsibility to make sure this is the case. Putting bounds checking into the copy function is ridiculous if you're responsible for passing the bounds yourself, and it goes against basic good design. I'm surprised they aren't passing the source buffer size too, just to be extra safe. Also, what happened to the __restrict keyword? It's strangely absent from the memcpy_s function declaration.
=Smidge=
Re:They should go one better... (Score:5, Informative)
The problem is memcpy returns a void *. If this is dynamically cast, it needs to be checked at runtime and may even be set to a value the programmer never intended (say unsigned 16 bit values instead of unsigned 8 bit characters). It may be an issue with updating the code - say the code was originally written for 8 bit ASCII and got updated to, say UTF-16 (16 bit). A dynamically cast void* doesn't care what the size is, it just shoves the values in the buffer. This may work fine in basic testing even, because you never overflow the buffer with 1-2 characters, and maybe even gets past a QA team, but once you go past 1/2, you've got a buffer overrun.
As I understand it, __restrict wouldn't work in a C++ program using dynamic_cast because it doesn't know the size at compile time (sorry, I'm not sure what is done in C as I haven't kept up with the language, so I have to use a C++ example). My guess is memcpy_s does runtime bounds checking (it isn't specified on the memcpy_s page, maybe the security ref - too busy to read it though).
Re: (Score:3, Informative)
Eh? the 'n' in memcpy call is number of -bytes-. not "things" you're trying to copy. it doesn't matter if you give it an array of signed 8 bit characters and copy it over to 32bit unsigned longs... you just specify n to be number of -bytes- to copy.
How can this be confusing?
Re:Should have been done 30 years ago. (Score:5, Insightful)?
if (dest_sz < copy_sz) throw; else memcpy(...); (Score:2)?
Yes. The article describes a replacement function memcpy_s that compares the copy size to the destination buffer size and throws an exception if the copy size is larger. It's still unsafe if the program lies to memcpy_s about the destination buffer size, and now it appears to needs exception support in the runtime (which based on my tests can add an extra 64 KB to your elevator controller's firmware [catb.org]).
Re:Should have been done 30 years ago. (Score:4, Interesting)
If you haven't tried Ada yet, I highly suggest looking into it. It keeps track of data sizes, types, etc... for the programmer but it will also let you get close to the hardware like C does. It's often used for safety critical software such as that used in aviation.
Unfortunately I can't recommend using Ada to develop windows apps. It's technically possible but you end up importing C library functions to do it. And if you're going to do that, you might as well just use a native development environment that is better suited to the task.
malloc() and free() (Score:5, Funny)
the worst offender is main() (Score:5, Funny)
Most any security problem can be traced back to this function.
Re:the worst offender is main() (Score:5, Funny)
you mean WinMain()
Python is done (Score:4, Funny)
Figures, Microsoft had to go kill of python and do it all in the name of security. No more accessing MEMory in C structures from our
.PY files, damn it this really pisses me off.
Re:Python is done (Score:5, Informative)
No its not. This is only banned under Microsoft's Security Development Lifecycle, which means you only care about this if you're following those set of development guidelines. Its still in the language. And you can always use memcopy_s:
Developers who want to be SDL compliant will instead have to replace memcpy() functions with memcpy_s, a newer command that takes an additional parameter delineating the size of the destination buffer.
No mention of memmove... (Score:5, Informative)
Do you find this having a negative impact on the flexibility of the language, and do you think it will restrict the creativity of the programmer?"
You can replace memcpy entirely with memmove (the latter is slightly slower and handles overlaps), and nothing in the article suggests that memmove is banned.
But, no, it shouldn't hurt creativity--they're introducing a memcpy_s, which is the same aside from taking a size parameter for the destination. That's something that is generally easy to track in new code (obviously this secure developement lifecycle is not backwards compatible).
Re: (Score:3, Insightful)?
Or are we just talking about a convenience feature that will make it easier for lazy programmers?
Re:No mention of memmove... (Score:4, Informative)?
That's the error that this is trying to fix. I'm skeptical as to how much this will help; if you're that lazy, you can just set the destination size parameter to the same value as the amount to copy.
But it might be easier to enforce at a code-review level in the organization: destination size always has to be a size tracked based on memory allocation.
Re: (Score:3, Informative)
Given that dst is a pointer, sizeof(dst) is generally going to be 4 or 8, and not do what you want.
It's more likely that programmers will just pass len to both parameters, defeating the point. Unless you define a pointer type that contains a length attribute (which wouldn't be a bad idea, but MS haven't done that) you're just relying on lengths being passed around the code being accurate, which isn't any safer.
A bad programmer will always be a bad programmer. Someone who would use strcpy on user data or m
Re: (Score:2, Insightful)
instead of
If they've been screwing up and using the wrong size for the number of bytes to copy, what's going to stop them from screwing up and putting the wrong size for the size of the destination buffer? Nothing! Now the coders that have been using something like
for the last parameter for years will have to change their code. Oh well...it's job security f
Re:No mention of memmove... (Score:4, Informative)
Now developers will write
memcpy_s(dst, sizeof(dst), src, sizeof(dst));
I get the feeling that this is mainly for Microsoft internally developed code which conforms to their security guidelines. As such, it's probably mainly intended to help in code reviews. Still pretty dubious.
Now the coders that have been using something like
MIN(sizeof(dst), bytes_to_copy)
for the last parameter for years will have to change their code.
That fails in the common case of dst being a real pointer (whether it's indexing into a static array or dynamically allocated memory or whatever).
Re: (Score:3, Informative)
memcpy_s(dst, sizeof(dst), src, sizeof(dst));
Whilst that will work, it probably doesn't do what you think.
Hint: dst and src are pointers.
First they take my gets.. (Score:5, Funny)
Re:First they take my gets.. (Score:5, Funny)
First they came for gets, And I didn't speak up because I didn't use gets
Then they came for scanf, And I didn't speak up because I didn't use scanf
Then they came for strcpy, And I didn't speak up because I didn't use strcpy
And then... they came for memcpy... And by that time there was no one left to speak up.
What an idiotic idea. (Score:5, Informative)
Re: (Score:3, Insightful)
Re: (Score:3, Informative)
I think you misunderstand his point... the 'size' parameter isn't the number of bytes in either buffer, it's the number of bytes you want to move. Obviously this has a lot to do with the size of either allocated buffer, but it's not the same thing.
memcpy doesn't know what a buffer is-- no, it really doesn't. At it's heart, all it does is copy a byte from one pointer to another and increment bo
Typical (Score:2)
Lots of hand-waving marketing bullshit. I'm sure they're going to keep using it internally, and the exploits will still happen with Microsoft code. Just like Microsoft applications will be ignored by UAC in Windows 7.
If they wanted to do something useful, they should have removed createRemoteThread [microsoft.com] instead.
#define memcpy memmove (Score:3, Insightful)
I have often used memcpy instead of strcpy when I have known the length of the strings, and also known the destination to be large enough.
I'm guessing many developers will just #define memcpy to something else and continue as nothing happened.
then why are you using C? (Score:2, Insightful)
If you consider memcpy too dangerous then you should be using something besides C. If you're using C++ and memcpy then you really do need to know what both you and the compiler are doing.
The difference bewteen memcpy() and strcpy() (Score:3, Insightful)
The problem with strcpy() and sprintf() and like functions is that you don't know when calling them the length of the source to be copied into the supplied buffer. But with memcpy() you specify this length.
Frequently, the size of the target is calculated at run time, so bugs in memcpy() tend to be in the area of this calculation, rather than in not checking if the source fits the target.
Any lack of memcpy() would be easy to overcome, just use
memcpy_s (dst, len, src, len)
which is functionally identical to
memcpy (dst, src, len)
Removal vs deprecation (Score:2)
I don't trust outright removals without a decent period of deprecation. Microsoft has a bad history of deciding some API or function is dirty and obsolete, only to find that they've broken some of their own code or made some functions impossible to implement in some environments.
For example, they deprecated a Windows Mobile database API without having support for the new version in ActiveSync. Lots of suckers upgraded only to find out that the Grand Pubas down at Seattle Central pulled the rug out from u
Re: (Score:2)
Silly and useless (Score:5, Insightful)
This is nothing like sprintf. In sprintf there is no way to know how much data will be created ahead of time, so limit on buffer size is useful to make sure there is no buffer overrun.
With memcpy it is *precisely* known how much data will be copied. It is right there, 3rd parameter. If a developer can't do "if (sizetocopy = sizeofdstbuffer)", it is just as unlikely that he will be able to properly state that additional parameter that specifies the destination buffer size.
Of course if Microsoft is so concerned with security, why the heck did it take them years to add snptinf()? All this is is another attempt to make crossplatform development that much harder (much like all those "obsolete" POSIX functions that will barf warnings unless you use a cryptic define).
That said, if this silliness ever becomes a rule, I have an easy solution:
#define memcpy(dst, src, size) memcpy_s((dst), (src), (size), (size))
Problemo solved, now let's go actually write some real code.
Re: (Score:3, Funny)
ouch.
Ban = operator also (Score:3, Funny)
If a developer can't do "if (sizetocopy = sizeofdstbuffer)"
Uh oh, we'd better ban the = operator too, so no one can mistake it for == in an if statement ever again.
When will MS learn? (Score:5, Insightful)
Yes, you read that right. Microsoft is deprecating parts of an ISO Standard all by themselves. Not that this should surprise anyone. I would have absolutely no objection to them proposing to WG14 to deprecate those functions; heck, I'd encourage it! But besides going out and deciding to 'deprecate' parts of the standards, the replacement functions actually violate those same standards.
And the warnings are irritating. You can't write a nice cross-platform library without either spewing tons of warnings or having to put in a bunch of #defines to shut the compiler up. And if you do that, your users get irritated if they depend on these warnings because you just turned them off (and of course, if you don't, they'll complain that your library is unsafe).
Screw Microsoft.
Re:When will MS learn? (Score:5, Insightful)
In case anyone is curious, this is the type of thing that coppro is talking about:
c:\Program Files\Microsoft Visual Studio 8\VC\include\io.h(318) : see declaration of 'close'
Message: 'The POSIX name for this item is deprecated. Instead, use the ISO C++ conformant name: _close. See online help for details.'
Now, as far as I know, no ISO body has deprecated functions like close(2), open(2), read(2), and write(2). And I've always heard that methods that start with an underscore are internal compiler functions and shouldn't be called directly. I don't know why the MS compiler writers think they can do this, but it is really annoying to get hundreds of warnings like this when compiling. In addition, it hides legitimate warnings that could indicate real problems.
As to the article in question, I can't think of any good reason why memcpy(3C) would be considered unsafe, since it specifies the amount of memory to copy. Sure, you could use it to copy outside the bounds of dst, but that's just calling it incorrectly. It's not like sprintf(3C) where you could easily accidentally write outside the bounds of the string.
Re:When will MS learn? (Score:4, Insightful)
That's correct, because ISO C++ never included those functions in the first place. POSIX != ISO C. (Not that MSVC is on any kind of reasonable schedule for keeping up with ISO standards, but that's a whole different issue...)
Basically MS is deprecating their own terrible implementation of some POSIX compatibility. This is actually required for ISO C compliance: the compiler is not supposed to define a bunch of extraneous functions in the global namespace, because they might conflict with your names. Once those functions are removed entirely (and I believe you can #define them away right now) you can implement your own compatibility functions for software you're porting to Windows.
Now, this is all entirely separate from the SDL warnings GP is complaining about, which show up when you use standard ISO C functions like strcpy, sprintf, and apparently now memcpy. Which, honestly, I wish weren't quite so irritatingly implemented, although I'm torn because using those functions really is terrible.
It's not really that worth getting up in arms about, though, because JESUS CHRIST there's a compiler flag to disable the warnings, just put it in your makefile and quit bitching already!
Re: (Score:3, Interesting)
Wrong, wrong, wrong.
open, close, etc.. are not in ISO C++. Functions not in the spec (vendor-specific) are supposed to have an _ in front.
Any function you can think of (including strcpy) is "safe" if the developer specifies correct parameters. Whoopdy Do. That doesn't mean it's something that's easily verified by runtime checkers, static analysis, etc... That's why it's deprecated - it's easy to perform additional safety checks if you include both the source and destination sizes.
Re: (Score:3, Informative)
This is not the first time MS has done this. They have plenty of other standard functions that they have deprecated.
Yes, you read that right. Microsoft is deprecating parts of an ISO Standard all by themselves.
No, Microsoft isn't deprecating "parts of an ISO Standard" - only the standard committee can do that, by marking those parts as deprecated in the next version of the standard. Microsoft has enabled warnings on use of those "unsafe" functions by default, yes, but it is very much not the same thing.
Regarding "all by themselves" part - do you realize that all those "safe" *_s functions are actually covered by an ISO C99 TR? [open-std.org]. There's also a FOSS implementation [sourceforge.net] available under the MIT license.
And the warnings are irritating. You can't write a nice cross-platform library without either spewing tons of warnings or having to put in a bunch of #defines to shut the compiler up.
You don't have to u
Workaround (Score:2, Insightful)
Bootleg memcpy() Moonshine . . . (Score:2)
. . . so ban it. If I really need it, I'll write my own. What happened when the US banned alcohol? Bootlegged Moonshine.
" . .
.do you think it will restrict the creativity of the programmer?"
Quite the opposite, it will inspire them to find other creative ways around the restrictions.
Oh, and you can grep your code and say, "Look! No memcpy()! I'm secure!" But what about self-written functions that does the same thing as memcpy(), with 1,000 different names?
Stop protecting me from me! (Score:5, Interesting)
As a competent developer, I get extremely annoyed by this sort of shit.
Removing/banning memcpy doesn't change a damn thing cause the first thing I do with things that have to compile in VisualStudio now is add the following defines which turn this shit off:
_CRT_SECURE_NO_WARNINGS
_CRT_NONSTDC_NO_DEPRECATE
If the remove that option I'll simply add memcpy to my standard MS compatibility library that deals with all the other bullshit MS decides to do.
You can't fix stupid. Stop trying. People fuck up VB and C# apps just as much as the fuck up C and C++ apps. So they don't do it with a buffer overflow, they do it by shear stupidity. You'll be more secure by taking away languages that allow non-programmers to pretend to be programmers than making it harder on those of us that are just going to work around what you do anyway.
You're not going to fix broken shitty apps with exploits by removing functions, the functions aren't the problem they do exactly as they are told (or atleast they are supposed to
:). You need to fix the programmers who can't clarify what they want done. [xkcd.com]
Second pane:
You'll never find a programming language that frees you from the burden of clarifying your ideas.
easy fix (Score:5, Insightful)
Just write a one-liner that replaces all calls to memcpy with a call to memcpy_s, duplicating the size parameter.
I'm only half-joking. This is exactly how people will (mis)use memcpy_s. If you want safe memory access, you need to ban the entire C language. For those cases where you need C, you'll just have to make sure your programmers know what they're doing.
The wole thing is just a bunch of nonsense (Score:5, Insightful)
Firstly, the specification of C anf C++ standard library is governed by the corresponding standard commitee. Microsoft has absolutely no authority to "banish" anything from neither C nor C++. They can deprecate it in their
.NET code, C# etc., but it has absolutely no relevance to C and C++ languages. So, why would the author of the original question direct it to "advanced C and C++" programmers is beyond me. In general, C and C++ programmers will never know about this "interesting" development.
Secondly, the tryly unsafe and useless functions in the C standard library are the functions like "gets", which offer absolutely no protection agains buffer overflow, regardless of how careful the develoiper is. Functions like 'memcpy', on the other hand, offer sufficient protection to a qualified developer. There's absolutely no sentiment against these functions in C/C++ community and there is absolutely no possiblity of these functions to get deprecated as long as C language exists.
Shooting themselves in the foot. (Score:3, Insightful)
Now, all that's going to happen is that programmers are going to write their own memcpy-like routines using a quicky for-loop or something. It'll be just as bug prone, and harder to detect via automated source code analysis.
Re:Shooting themselves in the foot. (Score:4, Interesting)
Now, all that's going to happen is that programmers are going to write their own memcpy-like routines using a quicky for-loop or something. It'll be just as bug prone, and harder to detect via automated source code analysis.
Not to mention it'll be slower. memcpy is one of the most optimized functions in any C library. It's frequently handled as a compiler intrinsic, that can do stuff like unroll short copies, generate optimal machine code, etc.
Re:Shooting themselves in the foot. (Score:4, Interesting)
Actually, memcpy in and of itself is slow. Hand writing your own asm version of memcpy using extended cpu functions is a lot faster as memcpy itself is usually kept basic enough to work on any cpu, including the older cpu's without MMX, SSE, etc.
glibc contains specific implementations for sparc32, powerpc32, powerpc64, i386, i586, cris, i860, rs6000, and m68k. I don't know where you got your idea.
Lock in from Microsoft (Score:5, Interesting)
There have been several suggestions to replace memcpy with memcpy_s as the safer alternative. That's fine, I guess, if memcpy_s is part of the ANSI/ISO standard for C, which as far as I know, it is not; just like all the *_s functions.
Microsoft says your code is safer when using the *_s, but it will no longer be portable, it'll be Microsoft-only. They put in a warning in the compiler from VS2005 onwards about using "unsafe" functions, and that you should use *_s, which is a pain because you have to disable it as the project level, there doesn't seem to be anywhere that I've found that can just turn it off permanently. Even using the STL that comes with VS2008 will generate these warnings, even if you never do any explicit memory stuff yourself.
Microsoft did the same thing with the _* functions; a lot of them are just wrappers around their ANSI-compliant versions (_sprintf -> sprintf), but are also not portable; I worked with a guy who wrote/tested all his C code in VS6 then gave it to me to port to Unix and VMS, and the compilers would choke on not having these particular functions.
Microsoft is trying to get lock-in at the language level instead of providing a good set of Win32 API-based functions that make using memcpy() unnecessary.
Mixed views... (Score:3, Insightful)
Please ban them (Score:3, Interesting)
(I have 20+ years proffesional programming experience, a good chunk of it in C/C++, and in recent years have spent a lot of time debugging other peoples code, especially ported, re-ported and generally hacked around code !)
IMHO a lot of the posts here are missing the point when they say, "these functions are safe, you just need to check the size before hand" etc. The fact is, in the Real World, not everyone is an ace-programmer, deadlines loom, mistakes get made. Problems do not appear in testing, but can further down the line. IMHO a language simply should *not allow* the programmer to do crazy things. Potential buffer overflows, de-referencing null pointers etc should not get past the *compiler*.
To be honest I do not have a prefered language to suggest, but I do not think that tweaking C is the answer, but better, high level, languages are needed.
Re:Isn't security the programmer's responsibility? (Score:5, Informative)
you didnt read.
MSFT is banning it from their development process, not the language, use it as much as you like.
Re: (Score:2)
Re: (Score:2)
there's no need to break already compiling code
That compiling code is the problem - it could potentially be exploited (and the software can't really analyse your binaries to determine if you're being safe and checking everything properly)
Re: (Score:2)
Security used to be the programmer's responsibility, yes. The shift in language design has been to move the responsibility from the coder to the language. When you have an enforcing language doing the checking, the theory is that it reduces costs in the maintenance phase.
Ada is a prime example of this. To get code to even compile is a real chore, but once you have it compiling there's a strong chance that it will be relatively free of careless errors that would drive up costs when in maintenance. It doesn't
Re:A half-measure, at best... (Score:4, Funny)
So, Ben... or is it Peter? Do you always copy your comments verbatim from the linked article, or only when you agree with them?
Re: (Score:2)
Re: (Score:2)
Java, and other managed languages are the way to go.
Until you're trying to do something for which Java has no standard API. Last time I checked, USB joysticks were like this. There is JInput, but if you include JInput in your distribution, your project is no longer 100% Pure Java and will not run as an applet.
Or unless your target platform is incapable of running managed code. Some handheld platforms have only 4 MB of RAM, not enough for a JIT compiler and the bytecode in addition to the translated code except in trivial examples that act a generation old
Re: (Score:3, Interesting)
And for [platforms with small CPU and small RAM] we have Java ME edition.
That is, if your platform vendor offers a port of Java ME.
If you want bit-by-bit accuracy, you absolutely don't want to try translating C code to C++! You even have to worry about changing C compilers if you are doing that.
I already test my code on GCC targeting x86 and ARM. Or did you mean compilers that a hobbyist looking to start a business can't easily afford?
What you should be thinking of doing is to write an interface between the existing model code and your new one. This can be done in both C++ and Java. When the new code has to manipulate the model, it goes through this interface and calls the old code to do the necessary operation.
Mostly I was thinking about trying to port an existing video game to a phone that runs Java ME MIDP or a game console that runs XNA, while preserving frame-for-frame accuracy of the physics. You can't run the old model on such hardware because you don't have the certificate to digitally sign native code. Or di
Re: (Score:3, Insightful)
If you wrote a program that used 8+ gigs of memory that means you're an incompetent code monkey.
I have an hourly job that processes about 8GB of input data files. We found that copying the data to a tmpfs filesystem instead of leaving it on a HDD cut the work time from 20 minutes to about 30 seconds because the job necessarily requires an enormous number of random seek()s. Since mmap()ing a file on tmpfs is roughly identical to read()ing the whole file into RAM, I guess that makes me an incompetent code monkey.
Of course, my boss who got a 40x speedup in exchange for $250 worth of RAM might see thing
Re: (Score:3, Insightful)
If = is ambiguous, then you must have a habit of abusing it. While you can overload = to mean anything you want, I suppose, it would seem like you should try to preserve the general notion of the assignment operator in C and C++, which is that = never modifies the right-hand side of the equal sign, only the left hand side.
"Does it mean duplicate the contents, transfer the contents and clear the original copy, or just swap the contents of the items, which might be quicker."
I would say if you are trying to st
Re: (Score:3, Interesting)
= never modifies the right-hand side of the equal sign, only the left hand side.
std::auto_ptr would like a word with you. This was one of the dumbest decisions the committee made.
Re: (Score:2)
I don't think I've seen many instances of people using memcpy to copy structures; it generally seems to get used for copying contiguous blocks of data between buffers, and not much else. Maybe I haven't worked with enough suspender-wearing graybeards, though.
How to easily ... (Score:4, Insightful)
How to easily make your code compliant with the new safety requirements:
#define memcpy(dest,src,len) memcpy_s(dest,len,src,len)
Re: (Score:3, Informative)
Re:How to easily ... (Score:5, Informative)
Re: (Score:3, Insightful)
Some of these reactions are quite funny.
The goal of asking you to specify the length of the destination buffer is to force you to think about the data you're working with *while* you're writing the code and not afterwards in an unconnected security audit. Furthermore, it provides documentation to other people reading the code who may not have the same mental model of what's going on as you do. And as usual "other people" includes you, six months after you wrote the code.
Uh-huh. Because everyone is not just going to add a line of code to one of the base headers:
#define memcpy(dst, len, src) memcpy_s((dst),(len),(src),(len))
C is not a safe language, and stupid programmers will always find ways to mess it up. There are safer languages where you can't hang yourself as easily, and if you don't understand C, you should use them. Microsoft can't fix this problem.
Re: (Score:3, Interesting)
The idea that smart people don't make mistakes is thoroughly ridiculous.
They make mistakes, they don't make that mistake.
Smart people recognize that they make mistakes, so they create systems that help them catch and prevent their own mistakes. If you're foolish enough to believe that you can't make mistakes, then you should just turn off all the warnings on the compiler and not bother with lesser workarounds like redefining a single symbol.
That's not the issue. The new function is still relying on the programmer not making a mistake. What makes you think that anyone who would make a mistake on the number of bytes to copy wouldn't make a mistake on the size of the buffer? Alternatively, what happens if the src buffer size isn't large enough? Do we need to add another length parameter here?
I have nothing against making a language safer, but if the language can't ensure the size of the buffers
Re: (Score:3, Funny)
Yes and yes. I've been a developer for over 30 years now(nearly all C or C++). How about you? | http://tech.slashdot.org/story/09/05/15/152213/microsoft-to-banish-memcpy | CC-MAIN-2013-20 | refinedweb | 8,150 | 60.85 |
I've never built any browser extension, much less one for the Devtools.
Out of curiosity I started looking around and I've found the Google Chrome DevTools Extensions docs that served as an introduction of the different parts involved, but it wasn't enough to get on my feet and start developing my own.
The problem was that I lacked knowledge about the basic concepts behind a browser extension.
The complete Google Chrome extensions documentation is extensive, and in many cases serves more like an API reference rather than a guide, but it gave me a broad picture about the multiple moving parts that are involved, and to learn that you even need to build an intercommunication bus between the different components of the extension.
But beyond that, there wasn't a good resource for me to have a complete picture of what was required and what would be the most useful for a DevTools extension since it's a subset of what browser extensions can do.
A pragmatic way to learn about all of this that I decided to take is through open-source code. Initially, I started looking into the React DevTools, but since it's part of the React monorepo it would take some time to identify each of the relevant packages.
Fortunately for my needs, the Vue DevTools repo is self-contained, allowing me to examine it in complete isolation from other parts of the Vue code.
This is a guide through the main parts of the official Vue DevTools extension to learn from it and understand a successful approach for building these kinds of tools.
I hope that this way you can learn with a real-world example what exactly each file does, and how everything fits together. This guide isn't Vue specific in any way, You don't need to be familiar with Vue at all to follow and hopefully learn something from this guide.
This guide is divided under different sections and goes step by step with links to the official source code and analyzing some relevant snippets along the way.
Let's dive right into it!
Table of contents
- Vue Devtools Overview
- Vue detector
- Background script
- Hook
- DevTools page
- Backend and Frontend
- Proxy
- Frontend
- Backend
Vue Devtools Overview
The code, which is hosted on GitHub, is organized as a monorepo consisting of different packages, under the
/packages directory.
I followed the manual installation instructions and I was able to get a development version of the extension up and running on my browser.
By following those instructions I learned that we should start by looking into the
shell-chrome directory, as the starting point of this journey. Here we find the
manifest.json file, which contains all the metadata related to the browser extension.
Manifest file
Here we can find some relevant entry points:
"devtools_page": "devtools-background.html", "background": { "scripts": [ "build/background.js" ], "persistent": false }, "content_scripts": [ { "matches": [ "<all_urls>" ], "js": [ "build/hook.js" ], "run_at": "document_start" }, { "matches": [ "<all_urls>" ], "js": [ "build/detector.js" ], "run_at": "document_idle" } ]
Each of those specified files can be seen as different entry points because browser extensions are composed of multiple scripts that run in different contexts.
Before jumping into studying these files in detail, I'll like to briefly focus on the build tooling for this project.
Notice how all of these paths start with
build/ but we don't have a
build directory inside
shell-chrome. Let's take a quick look at our inner
package.json file to understand why:
// shell-chrome/package.json { "name": "@vue-devtools/shell-chrome", "version": "0.0.0", "dependencies": { "@vue-devtools/app-backend": "^0.0.0", "@vue-devtools/app-frontend": "^0.0.0", "@vue-devtools/shared-utils": "^0.0.0" }, "devDependencies": { "@vue-devtools/build-tools": "^0.0.0", "webpack": "^4.19.0", "webpack-cli": "^3.1.0" } }
It defines other packages from the monorepo as dependencies. The internal packages are those prefixed with
@vue-devtools.
The way this monorepo is structured is by using Yarn workspaces. Let's go to the root
package.json of the whole project:
"workspaces": [ "packages/*" ],
Everything under the
packages directory is part of this monorepo. Now let's see what the main
build script looks like:
"build": "cd packages/shell-chrome && cross-env NODE_ENV=production webpack --progress --hide-modules"
That's it! Now we know that inside
packages/shell-chrome the project is using Webpack to produce a build. So that's when the
build folder must be being created.
Analyzing the whole build process of this extension is out of scope for this post but if you're interested in learning more about it, this
webpack.config.js file is a good place to start.
Types of scripts
The main type of scripts we are going to see are the following:
As part of this guide, I'll be introducing each one of them the moment we come across them on our journey through the Vue DevTools extension.
Now, let's jump into the actual logical architecture of this extension.
Vue DevTools architecture
Each different type of script represents a different entry point for a browser extension.
Vue detector
Let's start by looking at
src/detector.js. This is a content script.
Content scripts are the parts of an extension that are running in the context of the current web page. They can query the DOM, make changes to it, and communicate with the parent extension context.
Unlike regular page scripts, they have one important limitation. Content scripts live in "isolated worlds". They can't access variables created by other scripts, even if those variables are added to the
window global.
To workaround the "isolated worlds" limitation,
detector.js includes this helper:
// shell-chrome/src/detector.js function installScript (fn) { const source = ';(' + fn.toString() + ')(window)' if (isFirefox) { window.eval(source) // in Firefox, this evaluates on the content window } else { const script = document.createElement('script') script.textContent = source document.documentElement.appendChild(script) script.parentNode.removeChild(script) } }
It wraps the provided
fn function on a IIFE string to add it on the page. Now it can run just like any other regular script on the page.
// shell-chrome/src/detector.js if (document instanceof HTMLDocument) { installScript(detect) installScript(installToast) }
detector.js injects two functions using this technique,
detect and
installToast. These are known as... injected scripts.
The pattern of injected scripts is unofficial, but it became an ad-hoc standard by the community, based on the common case of needing to run scripts on the current page with full access to the
window global and changes performed by other scripts.
I'll start with the
installToast injected script. This function adds a
__VUE_DEVTOOLS_TOAST__(message, type) method to the
window object so that messages like "Remote Devtools Connected" can be shown. Its code is part of the
app-backend package of the repo, under the toast.js module. Seeing a reference to "backend" might seem odd at this point. Don't worry too much about it now, we are going to explain it later.
The main code of the
detector content script, however, is contained on the
detect function (see the source code here). It polls the document for 10 seconds and checks for one of these possibilities:
window.__NUXT__or
window.$nuxtare detected.
- There's an element inside the DOM tree that contains a
__vue__property.
In either case, the
Vue constructor is extracted and
postMessage is used to send a message to the
window (i.e. from the injected script to the content script).
detector.js attaches an
onMessage event listener to handle messages received from the injected scripts:
// shell-chrome/src/detector.js window.addEventListener('message', e => { if (e.source === window && e.data.vueDetected) { chrome.runtime.sendMessage(e.data) } })
You might be wondering what's that
chrome global object, where does it come from? That's the "magic" of a content script. Content scripts have access to the Chrome Extension API. In this case,
chrome.runtime.sendMessage is used to send the message received from the injected script to the background script.
Background script
Wait, what's a background script? Well, it's another type of script present in browser extensions.
A background script acts like an event listener which stays dormant until an event fires from either the DevTools page or a content script. It's used as a central message bus that communicates with the different scripts of our extension. They run in the context of the browser.
In the future, service workers are going to be used instead of background scripts as part of Google Chrome extensions. This change is part of a set of changes that are tracked under Manifest version 3 for extensions.
This background script, in particular, has a
chrome.runtime.onMessage listener registered that can be used by any process that is part of the extension. Here it's only used by
detector.js, so its code is not large:
// shell-chrome/src/backgroound.js chrome.runtime.onMessage.addListener((req, sender) => { if (sender.tab && req.vueDetected) { const suffix = req.nuxtDetected ? '.nuxt' : '' chrome.browserAction.setIcon({ tabId: sender.tab.id, path: { 16: `icons/16${suffix}.png`, 48: `icons/48${suffix}.png`, 128: `icons/128${suffix}.png` } }) chrome.browserAction.setPopup({ tabId: sender.tab.id, popup: req.devtoolsEnabled ? `popups/enabled${suffix}.html` : `popups/disabled${suffix}.html` }) } })
That's the logic that makes the Vue DevTools extension icon colorful when Vue is detected on the current page, and as you can see, even the HTML file for the corresponding popup is referenced.
That's enough background script for now 😅. Later on, we are going to explore the rest of it.
Hook
Like
detector.js, there was another content script declared on the manifest file. Remember, these are our entry points). This is
hook.js.
// shell-chrome/src/hook.js import { installHook } from '@back/hook'
This is the only line of specific code. The rest of the logic that you can check if you inspect its source code, is just the very same logic to inject a script that is used on
detector.js.
I suspect that the
installScript definition that we studied earlier could be extracted to a common module and imported from both content scripts. Might be something nice to try and perhaps send a PR for 👀.
@back on the
@back/hook module path is an alias that is defined using Webpack. They are defined here.
@back points to
app-backend/src, so to learn more about
installHook we need to open the
hook.js module.
As the comments on top of that file explain, this is mainly an event emitter implementation that is exposed under the
__VUE_DEVTOOLS_GLOBAL_HOOK__ global variable:
// app-backend/src/hook.js Object.defineProperty(target, '__VUE_DEVTOOLS_GLOBAL_HOOK__', { get () { return hook } })
After defining the event emitter, a listener for the
init event is added:
// app-backend/src/hook.js hook.once('init', Vue => { hook.Vue = Vue Vue.prototype.$inspect = function () { const fn = target.__VUE_DEVTOOLS_INSPECT__ fn && fn(this) } })
A
Vue property is set on
hook. It's a very important property since it's the main reference to the Vue instance of the currently inspected page.
I was confused for some time at this point. We already had
detector.js that knows when there's a
Vue instance, but it never invokes
__VUE_DEVTOOLS_GLOBAL_HOOK__ in any way. What's going on here? When is this
"init" event emitted? After a lot of debugging around the
vue-devtools repository, I wasn't able to find it, it was surely not related to
detector.js in any way, but where was the call to emit this event?
After A LOT of debugging, I found out that I wasn't looking at the correct place at all. Turns out it's done by the Vue runtime itself!!!
Here's the code under the core Vue repo:
import { devtools, inBrowser } from 'core/util/index' /// ... if (config.devtools) { if (devtools) { devtools.emit('init', Vue) } else if ( process.env.NODE_ENV !== 'production' && process.env.NODE_ENV !== 'test' ) { console[console.info ? 'info' : 'log']( 'Download the Vue Devtools extension for a better development experience:\n' + '' ) } }
Aha!
devtools.emit('init', Vue) is the call that starts the magic. But what exactly is this
config.devtools object?
If we follow the codebase we can check that the
devtools object is defined to something familiar to us:
// detect devtools export const devtools = inBrowser && window.__VUE_DEVTOOLS_GLOBAL_HOOK__
It's the exact
window.__VUE_DEVTOOLS_GLOBAL_HOOK__ reference injected by the
hook.js file that we saw earlier. Now we're closing the loop!
And that's it for the initial content scripts that unconditionally run for every web page we visit while the Vue DevTools extension is active. We also got to know our background script.
DevTools page
This journey continues by looking at the
devtools_page property defined in the manifest file. It specifies a page that will be used when the user opens the DevTools panel of the browser (e.g. using the
Ctrl/
⌘ + J keys combination). Usually, that page only inserts a
<script> tag that will handle all the actual logic that we want to run in the DevTools window context. In our case, this is the
devtools_background.js file. That file is what is known as a devtools script:
// shell-chrome/src/devtools-background.js // This is the devtools script, which is called when the user opens the // Chrome devtool on a page. We check to see if we global hook has detected // Vue presence on the page. If yes, create the Vue panel; otherwise poll // for 10 seconds.
Those are the top comments of the file. Pretty self-explanatory! The "global hook" refers to
window.__VUE_DEVTOOLS_GLOBAL_HOOK__.Vue, that as we just saw, will be defined if the Vue runtime emits the
"init" event.
You can check the
createPanelIfHasVue function to learn more about their polling mechanism (Recursive calls to
setTimeout with 1000 ms of delay until a counter increments up to 10, effectively trying for 10 seconds).
Here's what then happens when Vue is detected:
chrome.devtools.panels.create( 'Vue', 'icons/128.png', 'devtools.html', panel => { // panel loaded panel.onShown.addListener(onPanelShown) panel.onHidden.addListener(onPanelHidden) } )
That's all the code that is required to add a new panel to the Chrome DevTools window! We define the title of the tab, its icon, the page to render and a callback to be invoked after creation.
Backend and Frontend
The actual DevTools panel is unsurprisingly, a regular Vue.js SPA. The HTML on
devtools.html is mainly a placeholder to be filled once Vue takes over:
<body> <div id="container"> <div id="app"></div> </div> <script src="./build/devtools.js"></script> </body>
The SPA initialization logic is under the
src/devtools.js script:
// shell-chrome/src/devtools.js import { initDevTools } from '@front' import Bridge from '@utils/bridge' initDevTools({ connect (cb) { // 1. inject backend code into page injectScript(chrome.runtime.getURL('build/backend.js'), () => { // 2. connect to background to setup proxy const port = chrome.runtime.connect({ name: '' + chrome.devtools.inspectedWindow.tabId }) let disconnected = false port.onDisconnect.addListener(() => { disconnected = true }) const bridge = new Bridge({ listen (fn) { port.onMessage.addListener(fn) }, send (data) { if (!disconnected) { port.postMessage(data) } } }) // 3. send a proxy API to the panel cb(bridge) }) }
After all the initial boilerplate, here is where stuff gets interesting 🎉. This DevTools extension follows a model based on two main actors: backend and frontend.
We can think of this like any regular client/server application where these two parts interchange information with each other. In our case, the "frontend" is the Vue DevTools panel itself, and our backend is a pair of content and injected scripts that run in the context of the inspected web page.
devtools.js adds the
src/backend.js injected script to the page. Afterward, it establishes a connection to the background script and initializes an instance of a custom
Bridge class registering two callbacks on it,
listen and
send, based on messages received from and sent to the background script respectively.
Before diving further into the frontend, let's take a look at what happens on
src/backend.js:
// shell-chrome/src/backend.js function sendListening () { window.postMessage({ source: 'vue-devtools-backend-injection', payload: 'listening' }, '*') } sendListening()
The
window (of the inspected page) is used as a communication mechanism. As soon as this script starts, this
{source: 'vue-devtools-backend-injection', payload: 'listening'} message is sent.
// shell-chrome/src/backend.js window.addEventListener('message', handshake) function handshake (e) { if (e.data.source === 'vue-devtools-proxy' && e.data.payload === 'init') { window.removeEventListener('message', handshake) let listeners = [] const bridge = new Bridge({ listen (fn) { const listener = evt => { if (evt.data.source === 'vue-devtools-proxy' && evt.data.payload) { fn(evt.data.payload) } } window.addEventListener('message', listener) listeners.push(listener) }, send (data) { window.postMessage({ source: 'vue-devtools-backend', payload: data }, '*') } }) // ...[some code ignored] initBackend(bridge) } else { sendListening() } }
Just like on the DevTools panel, here a
Bridge instance registering a pair of
listen/
send callbacks is constructed. However, instead of relying on the background script to propagate the messages, the
window itself is used to listen to
MessageEvents or trigger
postMessage accordingly.
Bridge
Here is the
Bridge constructor itself that both backend and frontend are using:
// shared-utils/src/bridge.js import { EventEmitter } from 'events' const BATCH_DURATION = 100 export default class Bridge extends EventEmitter { send (event, payload) { // ... } // Log a message to the devtools background page. log (message) { this.send('log', message) } _flush () { // ... } _emit (message) { // ... } _send (messages) { // ... } _nextSend () { // ... } }
Bridge is an event emitter! And it's the main communication mechanism between the backend and the frontend.
Remember how as part of the Devtools panel initialization, on
src/devtools.js, a background script connection was established?
// shell-chrome/src/devtools.js // 1. inject backend code into page injectScript(chrome.runtime.getURL('build/backend.js'), () => { // 2. connect to background to setup proxy const port = chrome.runtime.connect({ name: '' + chrome.devtools.inspectedWindow.tabId })
Here is how the background script reacts to that:
// shell-chrome/src/background.js chrome.runtime.onConnect.addListener(port => { let tab let name if (isNumeric(port.name)) { tab = port.name name = 'devtools' installProxy(+port.name) } else { tab = port.sender.tab.id name = 'backend' } if (!ports[tab]) { ports[tab] = { devtools: null, backend: null } } ports[tab][name] = port if (ports[tab].devtools && ports[tab].backend) { doublePipe(tab, ports[tab].devtools, ports[tab].backend) } })
If
port.name from the incoming connection to the background script is numeric, then it's assumed to be the Devtools panel and thus,
installProxy is invoked (the
+ prefixed to
port.name is used to coerce the
string value to a
number).
// shell-chrome/src/background.js function installProxy (tabId) { chrome.tabs.executeScript(tabId, { file: '/build/proxy.js' }, function (res) { if (!res) { ports[tabId].devtools.postMessage('proxy-fail') } else { console.log('injected proxy to tab ' + tabId) } }) }
Proxy
installProxy adds a new content script:
src/proxy.js. Unlike the two initial content scripts that are declared on the
manifest.json file and are executed on every page load, this one is dynamically added using the
chrome.tabs.executeScript API under the condition we saw earlier. Let's analyze what's this
proxy.js content script is about:
// shell-chrome/src/proxy.js const port = chrome.runtime.connect({ name: 'content-script' }) port.onMessage.addListener(sendMessageToBackend) window.addEventListener('message', sendMessageToDevtools) port.onDisconnect.addListener(handleDisconnect)
In the first place,
proxy.js also connects to the background script and then sets up a listener for messages that the background script sends, in which case it forwards the message to the backend. Also, a listener for messages received from the inspected web page is set, in which case it forwards the message to the frontend - a.k.a. the Devtools panel.
// shell-chrome/src/proxy.js sendMessageToBackend('init') function sendMessageToBackend (payload) { window.postMessage({ source: 'vue-devtools-proxy', payload: payload }, '*') }
This might result familiar: An
init message is sent to the backend, which is, as we saw earlier, what
src/backend.js was waiting for on its
handshake function to continue its initialization.
// shell-chrome/src/proxy.js function sendMessageToDevtools (e) { if (e.data && e.data.source === 'vue-devtools-backend') { port.postMessage(e.data.payload) } else if (e.data && e.data.source === 'vue-devtools-backend-injection') { if (e.data.payload === 'listening') { sendMessageToBackend('init') } } }
For propagating messages back to the frontend, it uses the connection to the background script. Despite its name, there's one case it sends an
'init' message to the backend instead. If the message received from the
window is a
'listening' one. This is a special message that is sent by the backend itself to signal it's waiting for initialization.
Even though the
Bridge instances are constructed on
src/devtools.js and
src/backend.js, they both send those instances to the respective
frontend and
backend packages of the extension through callbacks.
In the case of
src/devtools.js:
// shell-chrome/src/devtools.js import { initDevTools } from '@front' initDevTools({ connect (cb) { injectScript(chrome.runtime.getURL('build/backend.js'), () => { // ... const bridge = new Bridge({ // ... }) cb(bridge) }) }
In the case of
src/backend.js:
// shell-chrome/src/backend.js import { initBackend } from '@back' function handshake (e) { if (e.data.source === 'vue-devtools-proxy' && e.data.payload === 'init') { // ... const bridge = new Bridge({ // ... }) // ... initBackend(bridge) } }
So now that both the frontend and backend implementations hold instances to their respective communication bridge, we can take a look at how they use it.
Frontend
Let's take a look at the
initDevTools function of
app-frontend/src/index.js:
// app-frontend/src/index.js export function initDevTools (shell) { initStorage().then(() => { initApp(shell) shell.onReload(() => { if (app) { app.$el.classList.add('disconnected') app.$destroy() } window.bridge.removeAllListeners() initApp(shell) }) }) }
shell is the object literal constructed on
shell-chrome/src/devtools.js that contains some methods that are invoked here.
initStorage uses the
chrome.storage API as a storage mechanism.
initApp is where the UI magic happens:
// app-frontend/src/index.js function initApp (shell) { shell.connect(bridge => { window.bridge = bridge // ...
The assignment where the fundamental communication link is established here,
window.bridge = bridge. Now it's available on the global context of the Devtools panel.
// app-frontend/src/index.js initSharedData({ bridge, Vue, persist: true }).then(() => { if (SharedData.logDetected) { bridge.send('log-detected-vue') } const store = createStore() bridge.once('ready', version => { store.commit( 'SHOW_MESSAGE', 'Ready. Detected Vue ' + version + '.' ) bridge.send('events:toggle-recording', store.state.events.enabled) if (isChrome) { chrome.runtime.sendMessage('vue-panel-load') } }) // ...
A set of shared data between the frontend and the backend is initialized. Once it's done, a Vuex store is created (after all, the devtools panel is a regular Vue.js app!) and a listener for the
ready event is added.
You can explore what's this "shared data" consists of by going to
shared-utils/src/shared-data.js. As part of the shared data initialization, more messages are transmitted using the bridge:
// shared-utils/src/shared-data.js bridge.on('shared-data:load', () => { // Send all fields Object.keys(internalSharedData).forEach(key => { sendValue(key, internalSharedData[key]) }) bridge.send('shared-data:load-complete') }) bridge.on('shared-data:init-complete', () => { clearInterval(initRetryInterval) resolve() }) bridge.send('shared-data:master-init-waiting') // In case backend init is executed after frontend bridge.on('shared-data:slave-init-waiting', () => { bridge.send('shared-data:master-init-waiting') })
Going back to the frontend, here are some additional listeners that are set up:
// app-frontend/src/index.js // ... bridge.on('instance-details', details => { store.commit('components/RECEIVE_INSTANCE_DETAILS', parse(details)) }) bridge.on('toggle-instance', payload => { store.commit('components/TOGGLE_INSTANCE', parse(payload)) }) bridge.on('vuex:init', () => { store.commit('vuex/INIT') }) bridge.on('vuex:mutation', payload => { store.dispatch('vuex/receiveMutation', payload) }) bridge.on('router:changed', payload => { store.commit('router/CHANGED', parse(payload)) }) bridge.on('routes:init', payload => { store.commit('routes/INIT', parse(payload)) }) bridge.on('routes:changed', payload => { store.commit('routes/CHANGED', parse(payload)) }) // ...
Those are just some examples of some hooks that are added so that the backend can instruct devtools about state mutations and router changes.
After all of this, the Vue app is mounted into the div element with id
app defined on
devtools.html, and that's it! You can keep exploring the different Vue components, Vuex mutations, bridge events and messages sent, etc.
Backend
Now it's the turn of the backend, what's happens on
app-backend/src/index.js?
// app-backend/src/index.js const hook = target.__VUE_DEVTOOLS_GLOBAL_HOOK__ export function initBackend (_bridge) { bridge = _bridge if (hook.Vue) { isLegacy = hook.Vue.version && hook.Vue.version.split('.')[0] === '1' connect(hook.Vue) } else { hook.once('init', connect) } initRightClick() }
Great, a reference to the bridge is also stored and a check exists to know if the
Vue instance was already detected. In case it hasn't, we wait for it. Otherwise, we proceed to
connect to it.
// app-backend/src/index.js function connect (Vue) { initSharedData({ bridge, Vue }).then(() => { // ...
Here the same shared data is also initialized, like what we saw for the frontend (hence, why it's been given that name). Then:
// app-backend/src/index.js hook.currentTab = 'components' bridge.on('switch-tab', tab => { hook.currentTab = tab if (tab === 'components') { flush() } }) // the backend may get injected to the same page multiple times // if the user closes and reopens the devtools. // make sure there's only one flush listener. hook.off('flush') hook.on('flush', () => { if (hook.currentTab === 'components') { flush() } })
Some listeners are set up using the
bridge and setting the
currentTab property of the hook (
window.__VUE_DEVTOOLS_GLOBAL_HOOK__) to know when to perform a
'flush' (which is a Vue instance status sync cycle where the component tree structure is sent over to the devtools, to avoid dealing with stale data).
// app-backend/src/index.js bridge.on('select-instance', id => { currentInspectedId = id const instance = findInstanceOrVnode(id) if (!instance) return if (!/:functional:/.test(id)) bindToConsole(instance) flush() bridge.send('instance-selected') }) bridge.on('scroll-to-instance', id => { const instance = findInstanceOrVnode(id) if (instance) { scrollIntoView(instance) highlight(instance) } }) bridge.on('filter-instances', _filter => { filter = _filter.toLowerCase() flush() }) bridge.on('refresh', scan)
Additional listeners are added, that allows the inspected page to respond to DOM instructions sent from the devtools panel. Such as scrolling to a component, scan the page for root Vue instances, or select a component instance.
After the backend initialization ends, a
ready event is sent through the bridge:
// app-backend/src/index.js bridge.send('ready', Vue.version)
That, if you remember from earlier, is picked up on the frontend.
That's it for our backend initialization walkthrough! I'd highly recommend you to keep exploring the multiple aspects of the extension, such as the Vuex initialization and routing initialization logic, and study the different interactions between the frontend and the backend.
Conclusion
And here is where this journey ends!
When I started studying how a production-level developer tools extension was made, I never imagined it would have this level of complexity and moving parts.
I hope that this write-up can be helpful if you're thinking about making the Vue Devtools even better, or if you need to build an awesome new Devtools extension for your use case
I realized that there aren't that many resources available explaining the different aspects of one so perhaps this can help a bit :)
Thank you for reading and have a nice day!
Discussion (1)
Hello
Thanks alot for sharing this usefull article
i need that to develop my own chrom devTools extension
:* | https://practicaldev-herokuapp-com.global.ssl.fastly.net/voluntadpear/how-a-devtools-extension-is-made-1em7 | CC-MAIN-2021-21 | refinedweb | 4,527 | 51.65 |
4. More Control Flow Tools¶
Besides the
while statement just introduced, Python uses the usual
flow control statements known from other languages, with some twists.
4.1.’ is short for ‘else if’, and is useful
to avoid excessive indentation. An
if …
elif …
elif … sequence is a substitute for the
switch or
case statements found in other languages.
If you’re comparing the same value to several constants, or checking for specific types or
attributes, you may also find the
match statement useful. For more
details see match Statements.
4.2.
Code that modifies a collection while iterating over that same collection can be tricky to get right. Instead, it is usually more straight-forward to loop over a copy of the collection or to create a new collection:
# Create a sample collection users = {'Hans': 'active', 'Éléonore': 'inactive', '景太郎': 'active'} # Strategy: Iterate over a copy for user, status in users.copy().items(): if status == 'inactive': del users[user] # Strategy: Create a new collection active_users = {} for user, status in users.items(): if status == 'active': active_users[user] = status
4.3.’):
>>> list(range(5, 10)) [5, 6, 7, 8, 9] >>> list(range(0, 10, 3)) [0, 3, 6, 9] >>> list
In most such cases, however, it is convenient to use the
enumerate()
function, see Looping Techniques.
A strange thing happens if you just print a range:
>>> a construct, while an example of a function
that takes an iterable is
sum():
>>> sum(range(4)) # 0 + 1 + 2 + 3 6
Later we will see more functions that return iterables and take iterables as
arguments. In chapter Data Structures, we will discuss in more detail about
list().
4.4.
break and
continue Statements, and
else Clauses on Loops¶
The
break statement, like in C, with that of
if statements: a
try statement’s
else clause runs
when no exception occurs, and a loop’s
else clause runs when no
break
occurs. For more on the
try statement and exceptions, see
Handling Exceptions.
The
continue statement, also borrowed from C, continues with the next
iteration of the loop:
>>> for num in range(2, 10): ... if num % 2 == 0: ... print("Found an even number", num) ... continue ... print("Found an odd number", num) ... Found an even number 2 Found an odd number 3 Found an even number 4 Found an odd number 5 Found an even number 6 Found an odd number 7 Found an even number 8 Found an odd number 9
4.5.! ...
4.6.
match Statements¶
A match statement takes an expression and compares its value to successive patterns given as one or more case blocks. This is superficially similar to a switch statement in C, Java or JavaScript (and many other languages), but it can also extract components (sequence elements or object attributes) from the value into variables.
The simplest form compares a subject value against one or more literals:
def http_error(status): match status: case 400: return "Bad request" case 404: return "Not found" case 418: return "I'm a teapot" case _: return "Something's wrong with the internet"
Note the last block: the “variable name”
_ acts as a wildcard and
never fails to match. If no case matches, none of the branches is executed.
You can combine several literals in a single pattern using
| (“or”):
case 401 | 403 | 404: return "Not allowed"
Patterns can look like unpacking assignments, and can be used to bind variables:
# point is an (x, y) tuple match point: case (0, 0): print("Origin") case (0, y): print(f"Y={y}") case (x, 0): print(f"X={x}") case (x, y): print(f"X={x}, Y={y}") case _: raise ValueError("Not a point")
Study that one carefully! The first pattern has two literals, and can
be thought of as an extension of the literal pattern shown above. But
the next two patterns combine a literal and a variable, and the
variable binds a value from the subject (
point). The fourth
pattern captures two values, which makes it conceptually similar to
the unpacking assignment
(x, y) = point.
If you are using classes to structure your data you can use the class name followed by an argument list resembling a constructor, but with the ability to capture attributes into variables:
class Point: x: int y: int def where_is(point): match point: case Point(x=0, y=0): print("Origin") case Point(x=0, y=y): print(f"Y={y}") case Point(x=x, y=0): print(f"X={x}") case Point(): print("Somewhere else") case _: print("Not a point"))
A recommended way to read patterns is to look at them as an extended form of what you
would put on the left of an assignment, to understand which variables would be set to
what.
Only the standalone names (like
var above) are assigned to by a match statement.
Dotted names (like
foo.bar), attribute names (the
x= and
y= above) or class names
(recognized by the “(…)” next to them like
Point above) are never assigned to.
Patterns can be arbitrarily nested. For example, if we have a short list of points, we could match it like this:
match points: case []: print("No points") case [Point(0, 0)]: print("The origin") case [Point(x, y)]: print(f"Single point {x}, {y}") case [Point(0, y1), Point(0, y2)]: print(f"Two on the Y axis at {y1}, {y2}") case _: print("Something else")
We can add an
if clause to a pattern, known as a “guard”. If the
guard is false,
match goes on to try the next case block. Note
that value capture happens before the guard is evaluated:
match point: case Point(x, y) if x == y: print(f"Y=X at {x}") case Point(x, y): print(f"Not on the diagonal")
Several other key features of this statement:
Like unpacking assignments, tuple and list patterns have exactly the same meaning and actually match arbitrary sequences. An important exception is that they don’t match iterators or strings.
Sequence patterns support extended unpacking:
[x, y, *rest]and
(x, y, *rest)work similar to unpacking assignments. The name after
*may also be
_, so
(x, y, *_)matches a sequence of at least two items without binding the remaining items.
Mapping patterns:
{"bandwidth": b, "latency": l}captures the
"bandwidth"and
"latency"values from a dictionary. Unlike sequence patterns, extra keys are ignored. An unpacking like
**restis also supported. (But
**_would be redundant, so it is not allowed.)
Subpatterns may be captured using the
askeyword:
case (Point(x1, y1), Point(x2, y2) as p2): ...
will capture the second element of the input as
p2(as long as the input is a sequence of two points)
Most literals are compared by equality, however the singletons
True,
Falseand
Noneare compared by identity.
Patterns may use named constants. These must be dotted names to prevent them from being interpreted as capture variable:
from enum import Enum class Color(Enum): RED = 'red' GREEN = 'green' BLUE = 'blue' color = Color(input("Enter your choice of 'red', 'blue' or 'green': ")) match color: case Color.RED: print("I see red!") case Color.GREEN: print("Grass is green") case Color.BLUE: print("I'm feeling the blues :(")
For a more detailed explanation and additional examples, you can look into PEP 636 which is written in a tutorial format.
4.7.. (More about docstrings can be found in the section Documentation Strings.) and variables of enclosing functions
cannot be directly assigned a value within a function (unless, for global
variables, named in a
global statement, or, for variables of enclosing
functions, named in a
nonlocal). 1 When a function calls another function, or calls itself recursively, a new local symbol table is created for that call.
A function definition associates the function name with the function object in the current symbol table. The interpreter recognizes the object pointed to by that name as a user-defined function. Other names can also point to that same function object and can also be used to access the function:
>>> fib <function fib at 10042ed0> >>> f = fib >>> f(100)) >>> Classes) The method
append()shown in the example is defined for list objects; it adds a new element at the end of the list. In this example it is equivalent to
result = result + [a], but more efficient.
4.8. More on Defining Functions¶
It is also possible to define functions with a variable number of arguments. There are three forms, which can be combined.
4.8.1. Default Argument Values¶
The most useful form is to specify a default value for one or more arguments. This creates a function that can be called with fewer arguments than it is defined to allow. For example:
4.8 <module> TypeError: function() got multiple values for argument 'a'
When a final formal parameter of the form
**name is present, it receives a
dictionary (see Mapping Types — dict) containing all keyword arguments except for
those corresponding) for kw in keywords: print(kw, ":", keywords[kw])
It could be called like this:
cheeseshop("Limburger", "It's very runny, sir.", "It's really very, VERY runny, sir.", shopkeeper="Michael Palin", client="John Cleese", sketch="Cheese Shop Sketch")
and of course it would print:
-- Do you have any Limburger ? -- I'm sorry, we're all out of Limburger It's very runny, sir. It's really very, VERY runny, sir. ---------------------------------------- shopkeeper : Michael Palin client : John Cleese sketch : Cheese Shop Sketch
Note that the order in which the keyword arguments are printed is guaranteed to match the order in which they were provided in the function call.
4.8.3. Special parameters¶.
4.8.3.1. Positional-or-Keyword Arguments¶
If
/ and
* are not present in the function definition, arguments may
be passed to a function by position or by keyword.
4.8.3.2. Positional-Only Parameters¶
Looking at this in a bit more detail, it is possible to mark certain parameters
as positional-only. If positional-only, the parameters’ order matters, and
the parameters cannot be passed by keyword. Positional-only parameters are
placed before a
/ (forward-slash). The
/ is used to logically
separate the positional-only parameters from the rest of the parameters.
If there is no
/ in the function definition, there are no positional-only
parameters.
Parameters following the
/ may be positional-or-keyword or keyword-only.
4.8.3.3. Keyword-Only Arguments¶
To mark parameters as keyword-only, indicating the parameters must be passed
by keyword argument, place an
* in the arguments list just before the first
keyword-only parameter.
4.8.3.4. Function Examples¶ some positional-only arguments passed as keyword arguments: :
>>> some positional-only arguments passed as keyword arguments: 'pos_only'
Finally, consider this function definition which has a potential collision between the positional argument
name and
**kwds which has
name as a key:
def foo(name, **kwds): return 'name' in kwds
There is no possible call that will make it return
True as the keyword
'name'
will always bind to the first parameter. For example:
>>> foo(1, **{'name': 2}) Traceback (most recent call last): File "<stdin>", line 1, in <module> TypeError: foo() got multiple values for argument 'name' >>>
But using
/ (positional only arguments), it is possible since it allows
name as a positional argument and
'name' as a key in the keyword arguments:
def foo(name, /, **kwds): return 'name' in kwds >>> foo(1, **{'name': 2}) True
In other words, the names of positional-only parameters can be used in
**kwds without ambiguity.
4.8.3.5. Recap¶
The use case will determine which parameters to use in the function definition:
def f(pos1, pos2, /, pos_or_kwd, *, kwd1, kwd2):
As guidance:
Use positional-only if you want the name of the parameters to not be available to the user. This is useful when parameter names have no real meaning, if you want to enforce the order of the arguments when the function is called or if you need to take some positional parameters and arbitrary keywords.
Use keyword-only when names have meaning and the function definition is more understandable by being explicit with names or you want to prevent users relying on the position of the argument being passed.
For an API, use positional-only to prevent breaking API changes if the parameter’s name is modified in the future.
4.8.4. Arbitrary Argument Lists¶
Finally, the least frequently used option is to specify that a function can be called with an arbitrary number of arguments. These arguments will be wrapped up in a tuple (see Tuples and Sequences).'
4.8.5. !
4.8.6. Lambda Expressions¶
The above example uses a lambda expression to return a function. Another use is to pass a small function as an argument:
>>> pairs = [(1, 'one'), (2, 'two'), (3, 'three'), (4, 'four')] >>> pairs.sort(key=lambda pair: pair[1]) >>> pairs [(4, 'four'), (1, 'one'), (3, 'three'), (2, 'two')]
4.8.7. Documentation Strings¶.
4.8.8. Function Annotations¶
Function annotations are completely optional metadata information about the types used by user-defined functions (see PEP 3107 and PEP 484 for more information). required argument, an optional argument, and the return
value annotated:
>>> def f(ham: str, eggs:, 'return': <class 'str'>, 'eggs': <class 'str'>} Arguments: spam eggs 'spam and eggs'
4.9. Intermezzo: Coding Style¶.
Use blank lines to separate functions and classes, and larger blocks of code inside functions.
When possible, put comments on a line of their own.
Use docstrings.
Use spaces around operators and after commas, but not directly inside bracketing constructs:
a = f(1, 2) + g(3, 4).
Name your classes and functions consistently; the convention is to use
UpperCamelCasefor classes and
lowercase_with_underscoresfor functions and methods. Always use
selfas the name for the first method argument (see A First Look at Classes for more on classes and methods).
Don’t use fancy encodings if your code is meant to be used in international environments. Python’s default, UTF-8, or even plain ASCII work best in any case.
Likewise, don’t use non-ASCII characters in identifiers if there is only the slightest chance people speaking a different language will read or maintain the code.
Footnotes | https://docs.python.org/3.11/tutorial/controlflow.html | CC-MAIN-2022-05 | refinedweb | 2,342 | 57.4 |
We have a new docs home, for this page visit our new documentation site!
You can use td-pyspark to bridge the results of data manipulations in Google Colab with your data in Arm Treasure Data.
Google Colab notebooks make it easy to model with PySpark in Google. PySpark is a Python API for Spark. Treasure Data's td-pyspark is a Python library that provides a handy way to use PySpark and Treasure Data based on td-spark.
Prerequisites
To follow the steps in this example, you must have the following Treasure Data items:
- Treasure Data API key
- td-spark feature enabled
Configuring your Google Colab Environment
You create an envelope, install pyspark and td-pyspark libraries and configure the notebook for your connection code.
Create an Envelope in Google Colab
Open Google Colab. Click File > New Python 3 notebook.
Ensure that the runtime is connected. The notebook shows a green check on the top right corner.
Prepare your Environment for the PySpark and TD-PySpark Libraries
Click the icon to add a code cell:
Enter the following code:
!apt-get install openjdk-8-jdk-headless -qq > /dev/null
!pip install pyspark td-pyspark
Create and Upload the td-spark.conf File
You specify your TD API key and site on your local file system. Create a file as follows:
An example of the format is as follows. You provide the actual values:
spark.td.apikey (Your TD API KEY)
spark.td.site (Your site: us, jp, eu01)
spark.serializer org.apache.spark.serializer.KryoSerializer
spark.sql.execution.arrow.enabled true
Name the file
td-spark.conf and upload the file by clicking Files > Upload on the Google Colab menu. Verify that the td-spark.conf file is saved in the /content directory.
Run the Installation and Begin Work in Databricks
Run the current cell by selecting the cell and clicking
shift + enter keys.
Create a second code cell and create a script similar to the following code:
import os
os-environ[‘PYSPARK_SUBMIT_ARGS’] = ‘--jars /usr/local/lib/python2.7/dist-packages/td_pyspark/jars/td-spark-assembly.jar --properties-file /content/td-spark.conf pyspark-shell’
os-environ[“JAVA_HOME”] = “/usr/lib/jvm/java-8-openjdk-amd64"
import td_pyspark
from pyspark import SparkContext
from pyspark.sql import SparkSession
builder = SparkSession.builder.appName("td-pyspark-test")
td = td_pyspark.TDSparkContextBuilder(builder).build()
df = td.table("sample_datasets.www_access").within("-10y").df()
df.show()
TDSparkContextBuilder is an entry point to access td_pyspark's functionalities. As shown in the preceding code sample, you read tables in Treasure Data as data frames:
df = td.table("tablename").df()
You see a result similar to the following:
Your connection is working.
Interacting with Treasure Data from Google Colab
In Google Colab, you can run select and insert queries to Treasure Data or query back data from Treasure Data. You can also create and delete databases and tables.
In Google Colab, you can use the following commands:
Read Tables as DataFrames
By calling
.df() your table data is read as Spark's DataFrame. The usage of the DataFrame is the same with PySpark. See also PySpark DataFrame documentation.
df = td.table("sample_datasets.www_access").df()
df.show()
Submit Presto Queries
If your Spark cluster is small, reading all of the data as in-memory DataFrame might be difficult. In this case, you can use Presto, a distributed SQL query engine, to reduce the amount of data processing with PySpark.
q = td.presto("select code, * from sample_datasets.www_access") q.show()
q = td.presto("select code, count(*) from sample_datasets.www_access group by 1")
q.show()
You see:
Create or Drop a Database
td.create_database_if_not_exists("<my_db>"
td.drop_database_if_exists("<my_db>")
Upload DataFrames to Treasure Data
To save your local DataFrames as a table, you have two options:
- Insert the records in the input DataFrame to the target table
- Create or replace the target table with the content of the input DataFrame
td.insert_into(df, "mydb.table1")
td.create_or_replace(df, "mydb.table2")
Checking Google Colab in Treasure Data
You can use td toolbelt to check your database from a command line. Alternatively, if you have TD Console, you can check your databases and queries. Read about Database and Table Management.
Please sign in to leave a comment. | https://support.treasuredata.com/hc/en-us/articles/360034951753-TD-Python-Spark-Driver-with-Google-Colab | CC-MAIN-2020-29 | refinedweb | 701 | 59.4 |
python equivalent to perl's qw()
I do this a lot in Perl:
printf "%8s %8s %8s\n", qw(date price ret);
However, the best I can come up with in Python is
print '%8s %8s %8s' % (tuple("date price ret".split()))
I'm just wondering if there is a more elegant way of doing it? I'm fine if you tell me that's it and no improvement can be made.
Answers
Well, there's definitely no way to do exactly what you can do in Perl, because Python will complain about undefined variable names and a syntax error (missing comma, perhaps). But I would write it like this (in Python 2.X):
print '%8s %8s %8s' % ('date', 'price', 'ret')
If you're really attached to Perl's syntax, I guess you could define a function qw like this:
def qw(s): return tuple(s.split())
and then you could write
print '%8s %8s %8s' % qw('date price ret')
which is basically Perl-like except for the one pair of quotes on the argument to qw. But I'd hesitate to recommend that. At least, don't do it only because you miss Perl - it only enables your denial that you're working in a new programming language now ;-) It's like the old story about Pascal programmers who switch to C and create macros
#define BEGIN { #define END }
"date price ret".split()
QW() is often used to print column headings using join() in Perl. Column heads in the real-world are sometimes long -- making join("\t", qw()) very useful because it's easier to read and helps to eliminate typos (e.g. "x","y" or "x\ty"). Below is a related approach in real-world Python:
print("\t".join('''PubChemId Column ESImode Library.mzmed Library.rtmed Metabolite newID Feature.mzmed Feature.rtmed Count ppmDiff rtDiff'''.split()))
The triple quote string is a weird thing because it doubles as a comment. In this context, however, it is a string and it frees us from having to worry about line breaks (as qw() would).
Thanks to the previous replies for reveling this approach.
Need Your Help
UnicodeDecodeError while using json.dumps()
python json python-2.7 unicode character-encodingI have strings as follows in my python list (taken from command prompt): | http://www.brokencontrollers.com/faq/3534714.shtml | CC-MAIN-2019-51 | refinedweb | 381 | 72.66 |
Hello,
I'm Having some problems when using Facebook components for iOS.
I use Facebook SDK (by The Outercurve Foundation) in Forms for some cross platform Facebook activities.
To get the access token from Facebook I was using Xamarin.Auth.
However It does not provide a good native facebook experience for login and after some research I saw this topic and decided to try it.
This way I decided to use just for get the access token the Facebook Android SDK (by Xamarin) for Android, and the Facebook iOS SDK (by Xamarin) for iOS
In Android all good, it worked very good since the Facebook Android SDK namespace was Xamarin.Facebook.
But for iOS Im having a lot of problems since the Facebook iOS SDK namespace is just Facebook.
Due this fact Im getting some conflicts with Facebook SDK (by The Outercurve Foundation) which also have the namespace Facebook.
I tried to use some extern alias to solve the conflicts between the assemblies namespace name but whithout success.
Someone could give some help?
How can I solve this problem??
Please Im really stuck in this...
I can't continue my project...
Anyone can give a hint?
Someone from Xamarin support or from the Facebook iOS SDK component developing team could help?
Is possible to disponibilize a version with the different namespace?
For what I could see in the comments and reviews in the component page, I'm not the only one having this problem!
It would be very helpful!
@IsraelSoto I saw you posted on Facebook iOS SDK component Page and is from Xamarin Team.
Looking in the other reviews in page I saw some guys with the same problem.
example:
"joaquin grech rates this with 1
Serious bugs:
1) It's on the Facebook namespace instead of Xamarin.Facebook which conflicts with all your other facebook stuff. (The android version is on Xamarin.Facebook, not sure why the iOS didn't follow the proper pattern). 2) Since ios9 it gets a IsCancelled on every try to login. Even after following instructions it won't log in. 3) It's not updated to the latest fb sdk.
Posted on: October 7th / Version: 4.5.1"
Any chance you can help?
Thank you very much.
Or
Don't use Outercurve, try using another library (check Nuget repo)
Hello @Nad Thanks for the help.
For some reason when I import one of them the other assembly lose the reference.
And even if I switch the name of the file, I can only use one of them... the another one's classes dont appears.
As I commented I tried extern alias, but seems like Xamarin Studio don't support it or I'm doing it wrong manually.
I use Outercurve because of the easy cross platform code. I can use the same code for iOS and Android in Xamarin Forms this way.
Hello,
I'm still struggling on this.
If anyone end up with this issue and find a solution let me know, please!
@JoaquinGrech I saw you posted on iOS Facebook component page about this issue too.
Did you find a solution?
Thanks very much
@RaphaelChiorlinRanieri sorry for not giving a reply sooner, I was out for a couple of weeks.
Unfortunately, we cannot change the Facebook namespace for now but you can do this. We have available to the world the Facebook iOS binding located at monotouch-bindings repo:
Change the namespace and build the binding to generate the dll.
I will remove this branch soon and will merge it into master, so, please, keep an eye on master branch if you cannot find the facebookios-update branch.
Hello again @IsraelSoto ,
I was doing some improvements in my app and remembered this issue.
Last time I used your component I had to build the project and generate other dll with a different name than Facebook .
This was on version 4.5.1 .
Unfortunately due this I'm not getting the updates.
I would like to know if you guys think about change the namespace so it can match the android version or if you will also push the last version in Bitbucket (I saw the last version released there, is 4.10 right? and in Nugget has 4.18)
@MigueldeIcaza , @AlexSoto , @IsraelSoto
Hey guys did you progress on Facebook.iOS ?
Any plans of changing the namespace?
Hello @RaphaelChiorlinRanieri, The facebook namespace is not changing anytime soon since it would break a lot of customers already using this API. That being said you can find the Facebook component source here if you really need this change you can definitely build your own version out of it.
Tks @AlexSoto !
That was what I was looking for
| https://forums.xamarin.com/discussion/comment/299706/ | CC-MAIN-2019-47 | refinedweb | 783 | 74.39 |
/** * Tests the Java serialization of an arbitrary object model * @author CostinCozianu */ public class SerializationTest? extends TestCase implements Serializable { Object[] allObjects; static int childCount = 10000; static int linkCount = 1000; public static void main(String[] args) { try { if (args.length > 0) { if (args[0].equals("-read")) { System.out.print("reading the serialized file"); Object x= new ObjectInputStream?(new FileInputStream?("testXXXX.ser")) .readObject(); System.out.println(" ok."); System.exit(0); } childCount = Integer.parseInt(args[0]); } if (args.length > 1) { linkCount= Integer.parseInt(args[1]); } TestRunner.run(SerializationTest?.class); } catch (Exception ex) { System.err.println(ex); ex.printStackTrace(System.err); } } public void setUp() { Random r = new Random(); allObjects = new Object[childCount]; for (int i = 0; i < childCount; i++) {allObjects[i] = new TestTarget?();} for (int i = 0; i < linkCount; i++) { TestTarget? x, y; int retries = 0; do { if (++retries == 1000) throw new RuntimeException("Giving up trying to set up the graph, please reduce the number of links"); x = (TestTarget?) allObjects[r.nextInt(childCount)]; y = (TestTarget?) allObjects[r.nextInt(childCount)]; } while (x == y || !x.canAdd(y)); x.add(y); } System.err.println( "Setup completed: memory" + (Runtime.getRuntime().totalMemory() - Runtime.getRuntime().freeMemory()) / 1024 + "K"); } public void testSerialize() throws Exception { System.err.println("trying to serialize"); long start = System.currentTimeMillis(); try { OutputStream? bOut = new FileOutputStream?("testXXXX.ser"); ObjectOutputStream? objOut = new ObjectOutputStream?(bOut); objOut.writeObject(this); objOut.close(); bOut.close(); long end = System.currentTimeMillis(); System.err.println( "Serialization completed in: " + ((end - start) / 1000.0) + "ms , memory consumption :" + (Runtime.getRuntime().totalMemory() - Runtime.getRuntime().freeMemory()) / 1024 + "K"); Object mirror = new ObjectInputStream?(new FileInputStream?("testXXXX.ser")) .readObject(); long end1 = System.currentTimeMillis(); System.err.println( "Read back completed in: " + ((end1 - end) / 1000.0) + "ms ," + " memory consumption: "+ (Runtime.getRuntime().totalMemory() - Runtime.getRuntime().freeMemory())/ 1024 + "K"); } catch (Exception ex) { System.err.println(ex); ex.printStackTrace(System.err); } } public static class TestTarget? implements Serializable { ArrayList links = new ArrayList(20); int index = 0; Object parent; TestTarget?() {} /* * If you uncomment this constructor andf use it, * it will make serialization problems much worse TestTarget?(Object parentLink) { parent= parentLink; }*/ public boolean canAdd(Object o) { if (links.contains(o)) return false; return true; } public void add(Object o) { links.add(o); } } }I ran the test above and it succeeds. Well, sort of. I'm assuming TestCase is junit.framework.TestCase, and my version of that class isn't serializable, so I created a non-TestCase class to save and restore. Was it supposed to demonstrate that JavaSerializationIsBroken? { As per Java serializtion specification, the fact that it inherits from TestCase is not relevant, the class is still serializable. The serialization/deserialization of a few objects still succeeds , so your assumption is wrong. } Run it again with different numbers of objects and links (the first argument and the second argument on the command line). Also if you past just "-read" it will try to read the previously serialized file. You should observe that for some numbers: serialization crashes, and for other numbers serialization succeeds but deserialization crashes. I guess a "safe" margin is 1000 10000 which crashes all the JVMs from 1.3 to 1.4.1_02. The reason being that the serialization algorithm calls itself recursively for all the attrbutes of an object that are object references. I tried it with 10000 100000 and it works fine. It might depend on memory and OS. For sure my JDK1.3 on linux behaves much better than JDK1.3.1-1.4.1 on XP. On XP it breaks even for 1000 objects 3000 links, on Linux I can go up reliably to 10000 and 50000. You either get a stack overflow error, or sometimes the JVM just crashes. Now, you have an object model that takes in memory below 1M or even a few megs, or even 10 megs, and you call ObjectOutputStream?.serialize(obj). There's nowhere in the documentation that this should blow the stack, or the JVM for that matter. Defining a meaningful equals/hashCode method on TestTarget? may be helpful. For those that don't care to read the code all the way through, also note that while the amount of memory /disk space occupied here may be small, the graph created is rather large and complicated--a 10,000 node graph with 1,000 distinct edges per node. Flat array serialization may not be the best, or necessarily an appropriate, strategy here. If you just want to see serialization break due to stack overflow for even a small amount of data, there are much simpler test cases. Define a linked list that relies upon recursion:
public class LinkedListNode? implements Serializable { public LinkedListNode?(LinkedListNode? next) { this.next = next; } private LinkedListNode? next = null; }create a reasonably long list of these:
LinkedListNode? head = null; for(int i=0;i<100000;i++) { head = new LinkedListNode?(head); }and serialize it. But you'll run into the same problem with any recursive algorithm eventually--the stack is finite. Add this method:
public int count() { if(null == next) { return 1; } else { return 1 + next.count(); } }and you'll get similar problems. Is java arithmetic broken also? Thanks for the recursive list example. Great. So you broke Java serialization yourself, also. The reason for which I chose a graph of objects rather than a linked list is that a graph of object simulate more or less typical object models, as they are created in practice by programmers. Now you claim that we're talking about a rather complicated graph. Not at all, on windows it breaks with as little as 1000 objects, 3 links per node, on Linux (which allows a larger stack size by default) it breaks with as little as 10000 objects , 10 links per node. This may seem a complicated graph to you, but the reality is that these kind of figures we are talking here are peanuts. Your analogy with the arithmetic is again flawed. If I write the function above I must be nuts, and by the way the function does not serve any good purpose. The problem is that it is nowhere advertised in JavaDoc that java serialization is recursive, and sound engineeering principles (read DavidParnas on information hiding) demand that the fact that the function is implemented recursively not be hidden from its users. All the user cares for is that serialize(Object o) works as documented. Crashing the JVM is not acceptable behaviour, and is not documented. If the documentation said: don't create Object graphs with chains longer than N or cycles longer than K or whatever other graph properties, than maybe you'd have a point, but it doesn't. And even if it did, we then could qualify the implementation as not broken, but we could qualify the design as lousy at best. I disagree. | http://c2.com/cgi/wiki?JavaSerializationAndTheStack | CC-MAIN-2014-35 | refinedweb | 1,110 | 50.94 |
NETWORK PROGRAMS ARE a natural application for threads. Threads were discussed in Section 7.6 in the context of GUI programming. (If you have not already read that section, it would be a good idea to do it now.) As we saw in that section, a thread could be used in a GUI program to perform a long computation in parallel with the event-handling thread of the GUI. Network programs with graphical user interfaces can use the same technique: If a separate thread is used for network communication, then the communication can proceed in parallel with other things that are going on in the program. Threads are even more important in server programs. In many cases, a client can remain connected to a server for an indefinite period of time. It's not a good idea to make other potential clients wait during this period. A multi-threaded server starts a new thread for each client. Several threads can run at the same time, so several clients can be served at the same time. The second client doesn't have to wait until the server is finished with the first client. It's like a post office that opens up a new window for each customer, instead of making them all wait in line at one window.
Now, there are at least two problems with the command-line chat examples, CLChatClient and CLChatServer, from the previous section.. The second problem has to do with opening connections in the first place. I can only run CLChatClient if I know that there is a CLChatServer running on some particular computer. Except in rather contrived situations, there is no way for me to know that. It would be nice if I could find out, somehow, who's out there waiting for a connection. In this section, we'll address both of these problems and, at the same time, learn a little more about network programming and about threads.
To address the first problem with the command-line chat programs, let's consider a GUI chat program. When one user connects to another user, a window should open on the screen with an input box where the user can enter messages to be transmitted to user on the other end of the connection. The user should be able to send a message at any time. The program should also be prepared to receive messages from the other side at any time, and those messages have to be displayed to the user as they arrive.. The run() method that is executed by the thread carries out the following algorithm:
Post the message "Hey, hello there! Nice to chat with you." while(running): Wait a random time, between 2 and 12 seconds Select a random message from the list Post the selected message in the JTextArea
The variable running is set to false when the applet is stopped, as a signal to the thread that it should exit. The thread is created and started in the actionPerformed method that responds when you press return or click the "Send" button for the first time. You can find the complete source code in the file ChatSimulation.java, but I really want to look at the programming for the real thing rather than the simulation. The GUI chat program that we will look at is ChatWindow.java. The interface in this program will look similar to the simulation, but there will be a real network connection, and the incoming messages will be coming from the other side of that connection. The basic idea is not much more complicated than the simulation. A separate thread is created to wait for incoming messages and post them as they arrive. The run() method for this thread has an outline that is similar to the one for the simulation:
while the connection is open: Wait for a message to arrive from the other side Post the message in the JTextArea
However, the whole thing is complicated by the problem of opening and closing the connection and by the input/output errors that can occur at any time. The ChatWindow class is fairly sophisticated, and I don't want to cover everything that it does, but I will describe some of its functions. You should read the source code if you want to understand it completely.
First, there is the question of how a connection can be established between two ChatWindows. As the ChatWindow class is designed, the connection must be established before the window is opened. Recall that one end of a network connection is represented by on object of type Socket. The connected Socket is passed as a parameter to the ChatWindow constructor. This makes ChatWindow into a nicely reusable class that can be used in a variety of programs that set up the connection in different ways. The simplest approach to establishing the connection uses a command-line interface, just as is done with the CLChat programs. Once the connection has been established, a ChatWindow is opened on each side of the connection, specified in args[0]. connection = new Socket(args[0],port); } out = new PrintWriter(connection.getOutputStream()); out.println(HANDSHAKE); out.flush(); in = new TextReader(connection.getInputStream()); message = in.getln(); if (! message.equals(HANDSHAKE) ) { throw new IOException( "Connected program is not a ChatWindow"); } System.out.println("Connected."); } catch (Exception e) { System.out.println("Error opening connection."); System.out.println(e.toString()); return; } ChatWindow w; // The window for this end of the connection. w = new ChatWindow("ChatWindow", connection);
As it happens, I've taken the rather twisty approach of putting this main() routine in the ChatWindow class itself. (Possibly, it would be better style to put the main() routine in a different class.) This means that you can run ChatWindow as a standalone program. If you run it with the command "java ChatWindow -s", it will run as a server. To run it as a client, use the command "java ChatWindow <server>", where <server> is the name or IP number of the computer where the server is running. Use "localhost" as the name of the server, if you want to test the program by connecting to a server running on the same computer as the client. Whether the program is running as a client or as a server, once a connection is made, the window will open, and you can start chatting.
The constructor for the ChatWindow has the job of starting a thread to handle incoming messages. It also creates input and output streams for sending and receiving. The part of the constructor that performs these tasks look like this (with just a few changes for the sake of simplicity):
try { incoming = new TextReader( connection.getInputStream() ); outgoing = new PrintWriter( connection.getOutputStream() ); // Here, connection is the Socket that will be used for // communication. Input and output streams are created // for writing and reading information over the connection. } catch (IOException e) { // An error occurred while trying to get the streams. // Set up user interface to reflect the error. The // "transcript" is the JTextArea where messages are displayed. transcript.setText("Error opening I/O streams!\n" + "Connection can't be used.\n" + "You can close the window now.\n"); sendButton.setEnabled(false); connection = null; } /* Create the thread for reading data from the connection, unless an error just occurred. */ if (connection != null) { // Create a thread to execute the run() method in this // applet class, and start the thread. The run() method // will wait for incoming messages and post them to the // transcript when they are received. reader = new Thread(this); reader.start(); }
The input stream, incoming, is used by the thread to read messages from the other side of the connection. It does this simply by saying incoming.getln(). This command will not return until a line of text has been received or until an error occurs. The output stream, outgoing, is used by the actionPerformed() method to transmit the text from the text input box.
When either user closes his ChatWindow, the connection must be closed on both sides. The connection might also be closed because an error occurs, such as a network failure. It takes some care to handle all this correctly. Take a look at the source code if you are interested.
There is still a big problem with running ChatWindow in the way I've just described. Suppose I want to set up a connection. How do I know who has a ChatWindow running as a server?. That's all the main routine does with the connection. The thread takes care of all the details, while the main program goes on to the next connection request. Here is the main() routine from ConnectionBroker:
public static void main(String[] args) { // The main() routine creates a listening socket and // listens for requests. When a request is received, // a thread is created to service it. int port; // Port on which server listens. ServerSocket listener; Socket client; if (args.length == 0) port = DEFAULT_PORT; else { try { port = Integer.parseInt(args[0]); } catch (NumberFormatException e) { System.out.println(args[0] + " is not a legal port number."); return; } } try { listener = new ServerSocket(port); } catch (IOException e) { System.out.println("Can't start server."); System.out.println(e.toString()); return; } System.out.println("Listening on port " + listener.getLocalPort()); try { while (true) { client = listener.accept(); // Get a connection request. new ClientThread(client); // Start a thread to handle it. } } catch (Exception e) { System.out.println("Server shut down unexpectedly."); System.out.println(e.toString()); System.exit(1); } }
Once the processing thread has been started to handle the connection, the thread reads a command from the client, and carries out that command. It understands three types of commands:, the applet will display an error notification saying that it can't connect to the server. You are likely to get an error message unless you have downloaded this on-line textbook and are reading the copy on your own computer. In that case, you should be able to run the ConnectionBroker server on your computer and use the applet to connect to it. (Just compile ConnectionBroker.java and then give the command "java ConnectionBroker" in the same directory. It will print out "Listening on port 3030" and start waiting for connections. You will have to abort the program in some way to get it to end, such as by hitting CONTROL-C.) Here is the applet:
If the applet does find a server, it will display the list of available chatters in the JComboBox on the third line of the applet. If no chatters are available on the server, then you'll just see the message "(None available)". Once you register yourself, you will be included in this list, and you can open a connection to yourself. (Not a very interesting conversation perhaps, but it will demonstrate how the program works.) The procedures for registering yourself with the server and for requesting a connection to someone in the JComboBox should be easy enough to figure out. When you register yourself, a ChatWindow will open and will wait for someone to connect to you. A ChatWindow will also open when you request a connection.
You can enter yourself multiple times in the list, if you want, and you can connect to multiple people on the list. A separate ChatWindow will open for each connection. ChatWindow.java.
Although I don't want to say too much about the ConnectionBroker program, there is still one general question I want to look at: What happens when two or more threads use the same data? When this is the case, it's possible for the data to become corrupted, unless access to the data is carefully synchronized. The problem arises when two threads both try to access the data at the same time, or when one thread is interrupted by another when it is in the middle of accessing the data. Synchronization is used to make sure that this doesn't happen. To see what can go wrong, let's look at a typical example: a bank account. Suppose that the amount of money in a bank account is represented by the class:
public class BankAccount { private double balance; // amount of money in account public double getBalance() { return balance; } public void withdraw(double amount) { // Precondition: The balance is >= the amount. balance = balance - amount; } . . // Other methods . }
Suppose that account is an object of type BankAccount, and that this variable is used by several threads. Suppose that one of these threads wants to do a withdrawal of $100. This should be easy:
if ( account.getBalance() >= 100) account.withdraw();
But suppose that two threads try to do a withdrawal at the same time from an account that contains $150. It might happen that one thread calls account.getBalance() and gets a balance of 150. But at that moment, the first thread is interrupted by the other thread. The other thread calls account.getBalance() and also gets 150 as the balance. Both threads decide its safe to withdraw $100, but when they do so, the balance drops below zero. Actually, its even worse than this. The statement "balance = balance - amount" is actually executed as several steps: Read the balance; subtract the amount; store the new balance. It's possible for a thread to be interrupted in the middle of this. Suppose that two threads try to withdraw $100. If they execute the withdrawal at about the same time, it might happen that the order of operations is:
1. First thread reads the balance, and gets $150 2. Second thread reads the balance, and gets $150 3. Second thread subtracts $100 from $150, leaving $50 4. Second thread stores the new balance, $50 5. First thread (continuing after interruption) subtracts $100 from $150, leaving $50 6. First thread stores the new balance, $50
The net result is that even though there have been two withdrawals of $100, the amount in the account has only gone down by one hundred. The bank will probably not be very happy with its programmers!
You might not think that this sequence of events is very likely, but when large numbers of computations are being performed by several threads on shared data, problems like this are almost certain to occur, and they can be disastrous when they happen. The synchronization problem is very real: Access to shared data must be controlled.
As I mentioned in Section 7.6, the Swing GUI library solves the synchronization problem in a straightforward way: Only one thread is allowed to change the data used by Swing components. That thread is the event-handling thread. If the some other thread wants to do something with a Swing component, it's not allowed to do it itself. It must arrange for the event-handling thread to do it instead. Swing has methods SwingUtilities.invokeLater() and SwingUtilities.invokeAndWait() to make this possible. This is the only type of synchronization that is used in the ChatSimulation, ChatWindow, and BrokeredChat programs.
In many cases, Swing's solution to the synchronization problem is not applicable and might even defeat the purpose of using multiple threads in the first place. Java has a more general means for controlling access to shared data. It's done using a new type of statement: the synchronized statement. A synchronized statement has the form:
synchronized ( <object-reference> ) { <statements> }
For example:
synchronized(account) { if ( account.getBalance() >= amount ) balance = balance - amount; }
The idea is that the <object-reference> -- account in the example -- is used to "lock" access to the statements. Each object in Java has a lock that can be used for synchronization. When a thread executes synchronized(account), it takes possession of account's lock, and will hold that lock until it is done executing the statements inside the synchronized statement. If a second thread tries to execute synchronized(account) while the first thread holds the lock, the second thread will have to wait until the first thread releases the lock. This means that it's impossible for two different threads to execute the statements in the synchronized statement at the same time. The scenarios that we looked at above, which could corrupt the data, are impossible.
It's possible to use the same object in two different synchronized statements. Only one of those statements can be executed at any given time, because all the statements require the same lock before they can be executed. By putting every access to some data inside synchronized statements, and using the same object for synchronization in each statement, we can make sure that that data will only be accessed by one thread at a time. This is the general approach for solving the synchronization problem. It is an approach that will work for multi-threaded servers, such as ConnectionBroker, where there are many threads that might need access to the same data. The ConnectionBroker program, for example, keeps a list of clients in a Vector named clientList. This vector is used by many threads, and access to it must be controlled. This is accomplished by putting all access to the vector in synchronized statements. The vector itself is used as the synchronization object (although there is no rule that says that the synchronization object has to be the same as the data that is being protected). Here, for your amusement is all the code from ConnectionBroker.java that accesses clientList:
/* These four methods synchronize access to a Vector, clientList, which contains a list of the clients of this server. The synchronization also protects the variable nextClientInfo. */ static void addClient(Client client) { // Adds a new client to the clientList vector. synchronized(clientList) { client.ID = nextClientID++; if (client.info.length() == 0) client.info = "Anonymous" + client.ID; clientList.addElement(client); } System.out.println("Added client " + client.ID + " " + client.info); } static void removeClient(Client client) { // Removes the client from the clientList, if present. synchronized(clientList) { clientList.removeElement(client); } System.out.println("Removed client " + client.ID); } static Client getClient(int ID) { // Removes client from the clientList vector, if it // contains a client of the given ID. If so, the // removed client is returned. Otherwise, null is returned. synchronized(clientList) { for (int i = 0; i < clientList.size(); i++) { Client c = (Client)clientList.elementAt(i); if (c.ID == ID) { clientList.removeElementAt(i); System.out.println("Removed client " + c.ID); c.ID = 0; // Since this client is no longer waiting! return c; } } return null; } } static Client[] getClients() { // Returns an array of all the clients in the // clientList. If there are none, null is returned. synchronized(clientList) { if (clientList.size() == 0) return null; Client[] clients = new Client[ clientList.size() ]; for (int i = 0; i < clientList.size(); i++) clients[i] = (Client)clientList.elementAt(i); return clients; } }
You don't have to understand exactly what is going on here, just that the synchronized statements are used to control access to data that is being shared by multiple threads. There is much more to learn about threads; synchronization is only one of the problems that arise. However, I will leave the topic here. (One reason why I covered this much was to fulfill a promise made back in Section 3.6, where there was a list of all the different types of statements in Java. The synchronized statement was the last of these that we needed to cover.)
[ Next Chapter | Previous Section | Chapter Index | Main Index ]
Ask Questions? Discuss: Java Programming: Section 10.5
Post your Comment | http://www.roseindia.net/javajdktutorials/c10/s5.shtml | CC-MAIN-2013-48 | refinedweb | 3,220 | 65.42 |
Windows Controls: The Group Box
Introduction to:
using System;
using System.Drawing;
using System.Windows.Forms;
public class Exercise : System.Windows.Forms.Form
{
GroupBox grpHolder;
public Exercise()
{
InitializeComponent();
}
private void InitializeComponent()
{
grpHolder = new GroupBox();
grpHolder.Left = 22;
grpHolder.Top = 18;
grpHolder.Width = 120;
grpHolder.Height = 58;
Controls.Add(grpHolder);
}
}
public class Program
{
static int Main()
{
System.Windows.Forms.Application.Run(new Exercise());
return 0;
}
}
This would produce:
Characteristics of a Group Box
The Caption of a Group Box
As you can see from the above picture, a group may or
may not display a caption. If you need to display a caption on it, at design
time, in the Properties window, click Text and type a string. To do
this programmatically, assign a string to the Text property of the group box
control. Here is an example:
private void InitializeComponent()
{
grpHolder = new GroupBox();
grpHolder.Left = 22;
grpHolder.Top = 18;
grpHolder.Width = 120;
grpHolder.Height = 58;
grpHolder.Text = "Cup Holder";
Controls.Add(grpHolder);
}
The Group Box as a Container
Besides serving a delimiter of an area on a form, a
group box can also serve as a container. That is, a group box can carry or
hold other containers. As such, you can create a control and add it to its
collection of controls. When you add a control to a group box, whether at
design or run time, the location you specify is relative to the group box
and not to the form. Because the group box will act as the parent, it is its
client area that is considered for the location of its child(ren)
control(s).
Here is an example of adding a control as a child of a
group box:
private void InitializeComponent()
{
grpHolder = new GroupBox();
grpHolder.Left = 22;
grpHolder.Top = 18;
grpHolder.Width = 120;
grpHolder.Height = 58;
grpHolder.Text = "Cup Holder";
Button btnDone = new Button();
btnDone.Left = 22;
btnDone.Top = 24;
grpHolder.Controls.Add(btnDone);
Controls.Add(grpHolder);
}
Automatically Resizing a Group Box
Since a group box can serve as a control container, at
design time (and at run time), you can add the desired controls to it. Here
is an example:
Notice that it is possible to have a control whose size
causes some of its section to be hidden. To accommodate the control(s)
positioned in a group box, you can make the container resize itself so as to
reveal the hidden part(s) of its controls. To support this, the GroupBox
class is equipped with the Boolean AutoSize property. The default
value of the GroupBox.AutoSize property is false. If you set it to
true, the group box would resize itself and all of its controls should
appear:
Giving Focus to a Group Box
If you are done programming in Win32, you would know
that the Microsoft Windows operating system classifies the group box as a
static controls. One of characteristics of static controls is that they
cannot receive focus. In other words, you cannot actually click a group box
and it cannot indicate that it has received focus. At the same time, in the
.NET Framework, the GroupBox class is equipped with the TabStop
and the TabIndex properties, which suggests that, by pressing Tab
while using a form that has a group box, the group box should receive focus
at one time. Still, because the group box is a static control, it cannot
receive focus. What actually happens is that, whenever a group box is
supposed to receive, it transfers the focus to its first or only control.
Using a Mnemonic
As mentioned already, a group box can be equipped with a
caption, which is created by assigning a string to its Text property. A
mnemonic is a character (or a letter) that the user can use to access a
group box. To create a mnemonic, precede one character (or one letter) of
its caption with &. Here is an example:
Using the mnemonic, the user can press Alt and the
underlined character to give focus to a control inside the group box. | http://functionx.com/vcsharp/controls/groupbox.htm | CC-MAIN-2018-34 | refinedweb | 671 | 58.18 |
gridbaylayout constraints
Leo Max
Ranch Hand
Joined: Sep 27, 2009
Posts: 36
posted
Jun 16, 2011 00:13:19
0
Hello. i don't understand the explanations of the constraints weightxy, ipadxy, anchor and fill.
Gridbaylayout example
is the site i'm learning from. There wasn't much luck in google. There were some explanations to these terms but I still don't get it. Can someone write some examples or point me to a site that does have them? I'm looking for examples with numbers and pictures. Or at least Gridbaglayout for dummies kinda explanation
Here's my code. I added the first row of components. the app window was small when it started. When maximizing the window, the textbox was stuck to the button. Setting a frame size didn't help either. When I added the rest of the components, that was no longer the case. i don't know why that happened. Also, max/minimizing the window shouldn't make any wierd looks. It's supposed to resize itself. Now my problem is that size is stuck to date. Why I don't know or how it can be fixed. I've attached a picture to help with visualizing what I'm going after. Oh and the first insets line I added was only supposed to affect the first component. Yet somehow, everything else got pushed to the right. I thought I was going to have to add inset to a few of them but one inset fixed it all :S:S:S
All I got was that the size of the smallest component in the column becomes the size of all the components in that column. And that's when weightxy affects the components? BAAAAH!!!
import java.awt.Container; import java.awt.GridBagConstraints; import java.awt.GridBagLayout; import java.awt.Insets; import javax.swing.JButton; import javax.swing.JFrame; import javax.swing.JLabel; import javax.swing.JTextArea; import javax.swing.JTextField; public class App extends JFrame { JLabel a, b, c, d; JTextField e; JButton f; JTextArea g; GridBagConstraints h; public App() { a = new JLabel("aaa"); e = new JTextField(); f = new JButton("fff"); b = new JLabel("bbb"); g = new JTextArea("ggg", 5,45); c = new JLabel("ccc"); d = new JLabel("ddd"); setTitle("Previewer"); setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE); setVisible(true); } public void addSearch () { getContentPane().setLayout(new GridBagLayout()); h = new GridBagConstraints(); h.fill = GridBagConstraints.HORIZONTAL; h.weightx = 0.5; c.insets = new Insets(0,0,0,10); h.gridx = 0; h.gridy = 0; getContentPane().add(a, h); h.fill = GridBagConstraints.HORIZONTAL; h.weightx = 0.5; h.gridx = 1; h.gridy = 0; getContentPane().add(e, h); h.fill = GridBagConstraints.HORIZONTAL; h.weightx = 0.5; h.gridx = 2; h.gridy = 0; getContentPane().add(f, h); h.fill = GridBagConstraints.HORIZONTAL; h.weightx = 0.5; h.gridx = 0; h.gridy = 1; getContentPane().add(b, h); h.fill = GridBagConstraints.HORIZONTAL; h.weightx = 0.5; h.gridx = 1; h.gridy = 1; getContentPane().add(g, h); h.fill = GridBagConstraints.HORIZONTAL; h.weightx = 0.5; h.gridx = 2; h.gridy = 1; getContentPane().add(c, h); h.fill = GridBagConstraints.HORIZONTAL; h.weightx = 0.5; h.gridx = 3; h.gridy = 1; getContentPane().add(d, h); pack(); }
pic.JPG
With a little knowledge, a
cast iron skillet
is non-stick and lasts a lifetime.
subject: gridbaylayout constraints
Similar Threads
JTextArea Problems
Problem displaying JTable inside a JScrollPane
Partitioning a JTable
ActionListener problems
GridBagLayout question
All times are in JavaRanch time: GMT-6 in summer, GMT-7 in winter
JForum
|
Paul Wheaton | http://www.coderanch.com/t/541947/java/java/gridbaylayout-constraints | CC-MAIN-2015-27 | refinedweb | 581 | 63.36 |
This article is the next step in Catharsis documented tutorial. Catharsis is a web-application framework gathering best-practices, using ASP.NET MVC (preview 5), NHibernate 2.0. All needed source code you can find here.
At this chapter we'll uncover the data project which is the only-responsible layer for persisting. There is no need to use database or even NHibernate for your application. You can even use more storage types, side by side. Let’s say that users and roles can be stored in
XML, localized values can be shared in one database for more applications, and NHibernate could handle only business entities.
The variety is unlimited because of separation of concern. Only Data tier can persist and access data from storage. Every upper layer does not have to care about persistence and that’s the win.
Next paragraph will describe the Catharsis data layer implementation. Always is good to know how things are working. But keep in mind, that in your application you'll only:
Nothing else.
As in the previous (Entity layer) chapter there is one big suggestion: use the Catharsis.Guidance. It’s ready to help you to create whole Entity infrastructure, or – as we'll see below – to create objects needed for one layer.
We'll continue with the previous example: There is entity
Person, stored in namespace
People. It has property
Code (business unique key) and
Second and
First name. We've already created the
Person.cs and
PersonSearch.cs files and classes. They are plain (almost without functionality) and they need to be stored.
As storage serves Microsoft SQL Server database; table Person (see Chapter V - Enter into Catharsis, adding new Entity).
As the
DAO (data access object) we will use a little bit customized NHibernate ISession. The core of NHibernateDao class comes from Billy McCafferty SharpArchitecture 0.7.3 and you should examine it for details (there is a lot of documented and described stories to get you quickly in, if you are interested in the low level design)
To smoothly work with NHibernate you need not only the
ISession object but the storage for its reference as well. It must be stored for all the time you are directly accessing the storage. And due to the
lazy-mode and the web based solution, the perfect and right place is the request
Items object.
Request is exactly the scope in which you are accessing storage. It means changing entities, store changes and display all needed (lazily loaded) properties on UI.
There are already implemented objects in Catahrsis.Mvc.dll, which will correctly create and delete
ISession object and store the reference in the request Items object. These are:
NHibernateSessionStarter,
WebSessionStorage and
CatharsisHttpModul.
Starting sequence is placed in
Global.asax (you can trace other calls in a source code…)
public class GlobalApplication : HttpApplication { protected void Application_Start(object sender, EventArgs e) { NHibernateSessionStarter.Execute(); // ....
For closing NHibernate
ISessions there is
CatharsisHttpModule which is observing the EndRequest event. It is crucial when you are redirecting among controllers, what in fact ends with new request for every controller! The unclosed
ISession disallows NHibernate to proceed and throws
exception.
protected virtual void EndRequest(object sender, EventArgs e) { var session = HttpContext.Current.Items["nhibernate.current_session"] as ISession; if (session != null) { session.Close(); } }
All that stuff is simply doing needed job for you. You can or should observe them. Once you are familiar with them, forget them. They will serve you (your application) and do not need your attention anymore.
Data layer is by default NHibernate based. There are generic
DAO objects providing all needed CRUD (create, read, update, delete) methods for your Entity. In Catharsis.Data.dll you can find out
NHGenericDao<T> with its partial extension for Tracked objects. In the ‘Firm.Product.Data.dll’ exists base object for your project
DaoBase<T>.
Catharsis is obsessed with inheritance and encapsulation (maybe me…). Therefore, instead of using Utils.cs files and static methods, better way is put all these commonly reusable methods into
DaoBase<T>. The good starting example is method
GetListByCriteria() which can help you with basic processing of the
EntitySearch object.
Paging is one of fancy things we could be proud of (but possible only due to NHibernate). Every Entity you are accessing has default Action ‘List’. It means, that even if there is a 100 thousand of rows in DB you are firstly navigated to List instead of Search (of course you can change it).
By default user gets 20 rows (you can change it in a SearchObject settings). Paging will throw current set of 20 rows and load new one and so on!
But even more, you can resort the list while you are standing on let’s say on 6th page. The list will be resorted but you'll be still watching the 6th page ...
And that's all stuff is built-in, every entity gains the sortable list and paging when firstly comes to the light of ... your solution. Please examine it, you can adjust almost anything. The main stuff could be found in just mentioned parent method
GetListByCriteria().
Guidance will generate three files; two will be placed in Data.dll and one into Common.dll. Just right click on a
Data project and the Guidance menu item will appear (clicking on a folder will be awarded by prefilled namespace). Quick resume is shown on the picture.
Every upper layer (it should be the business anyway) will never access the data.dll directly. As said in other chapters, there even does not have to be reference from business.dll to data.dll. The trick (good design) comes from using interfaces (
DI,
IoC see Chapter XIV - Dependency injection).
Guidance has created at first look almost empty interface:
[ConcreteType("Firm.Product.Data.People.PersonDao, Firm.Product.Data")] public interface IPersonDao : IDao<Person> { }
There are two points which must be mentioned.
DaoFactoryto produce object implementing
IPersonDaointerface
C#developer, but it should be at least one spoken: Façades are working with the
IPersonDaointerface, what really means that, what is not declared that is not accessible.
Final not goes to parent interface
IDao<T> which is armed with powerful set of basic methods. As a kick-off (providing basic CRUD) that will satisfy your needs:
public interface IDao<T> { T Add(T entity); T Update(T entity); void Delete(T entity); T GetById(IdT id); T GetById(IdT id, DateTime? historicalDateTime); IList<T> GetListByCriteria(ISearch searchObject); }
Base class methods have implemented almost everything to fulfill the CRUD circle. Only searching is Entity-dependent and you have to handle those customer needed requirements yourself.
If you were providing some (string type) properties in the
Catharsis.Guidance, few criteria were created for you. They are a little bit unsafe allowing users to provide * for searching (in SQL syntax %). In production environment it could badly injure the performance of your application.
public class PersonDao : DaoBase<Person>, IPersonDao { public IList<Person> GetBySearch(ISearchObject<Person> searchObject) { CreateLike(Criteria, "Code", searchObject.Example.Code); CreateLike(Criteria, "SecondName", searchObject.Example.SecondName); CreateLike(Criteria, "FirstName", searchObject.Example.FirstName); return GetListByCriteria(searchObject); } }
If users (attackers) will ask for ‘*words’ or ‘%words’, no indexes will be used to optimize the query. The database server then can be brought to troubles and unintended states (locks, etc.). Well, that is upon you to decide to allow that. Catharsis tries to be user-friendly, but you should know your users…
The last needed file for Data layer is mapping XML. There are many copies of so-called
ORM tools. But NHibernate natural separation of concern is exceptional. Why? In Catharsis, you'll meet NHibernate only on Data layer. And that’s almost shocking, because the Entity layer remains untouched, and upper levels don't know nothing about persistence.
ORM (object relational-database mapping) is in Catharsis stored in the Data project. Explaining .hbm.xml files is out of the scope this article. Anyway, you are expected to make your best to be NHibernate professional. You gain a lot, believe me.
Catharsis Guidance is doing a big part of needed mapping instead of you. Created xml file depends on a base class: Persistent entities are mapped as ‘
class’,
Tracked as ‘
joined-subclass’.
From the storage point of view, the Persistent entity (mapped as ‘class’) is stored in the separate table with defined primary key as auto incrementing. The
C# base class Persistent is (from the storage point of view) virtual, without need to be stored.
<?xml version='1.0' encoding='utf-8' ?> <hibernate-mapping <class name='Person' table='Person' lazy='true' > <id name='ID' column='PersonId' > <generator class='native'></generator> </id> <property not- <property not- <property not- </class> </hibernate-mapping>
The Tracked entities are opposite. The
Tracked abstract class has even 5 tables in the storage, needed to provide the built-in tracking. Entity class must be mapped as ‘
joined-subclass’ therefore (in
C# inherited). Also do not forget to switch off auto incrementing for its table. The unique db key (the property
ID) will be provided from
Tracked base tables (and shared among all others).
<?xml version='1.0' encoding='utf-8' ?> <hibernate-mapping <joined-subclass <key column='PersonId' /> <property not- <property not- <property not- </joined-subclass> </hibernate-mapping>
Catharsis allows you to forget on the above rows (almost). Guidance will prepare all needed stuff in the mapping
.hbm.xml file dependently on the base class. You will only append property-column mapping for other properties. That’s all.
Guidance is doing another very very very important thing for you. NHibernate can read .hbm.xml files only if they are embedded in DLL. That means that the property of any mapping file MUST be set to embedded resource. And that is done by Guidance for you as default!
It's always fine to know what's hidden inside. And even better is than forget and concern on business case. That's why Catharsis architecture and Guidance are there.
Catharsis Guidance creates on data level two files (project
.data) and puts interface into
.common project. Your next steps will only extend mapped properties in .hbm.xml file and adjust Dao for searching. If you'll decide to provide new special method on
EntityDao object, do not forget to publish it in
IEntityDao interface as well.
Enjoy Catharsis.
General
News
Question
Answer
Joke
Rant
Admin | http://www.codeproject.com/KB/applications/Catharsis_part8.aspx | crawl-002 | refinedweb | 1,713 | 58.48 |
Created on 2016-08-14 00:16 by benjamin.peterson, last changed 2016-08-14 09:43 by tehybel. This issue is now closed.
Thomas E Hybel on PSRT reports:
This vulnerability is an integer overflow leading to a heap buffer overflow. I
have attached a proof-of-concept script below.
The vulnerability resides in the Modules/_csv.c file, in the join_append and
join_append_data functions.
join_append initially calls join_append_data with copy_phase=0 to compute the
new length of its internal "rec" buffer. Then it grows the buffer. Finally it
calls join_append_data with copy_phase=1 to perform the actual writing.
The root issue is that join_append_data does not check for overflow when
computing the field rec_len which it returns. By having join_append_data called
on a few fields of appropriate length, we can make rec_len roll around and
become a small integer.
Note that there is already a check in join_append for whether (rec_len < 0). But
this check is insufficient as we can cause rec_len to grow sufficiently in a
single call to never let join_append see a negative size.
After the overflow happens, rec_len is a small integer, and thus when
join_append calls join_check_rec_size to potentially grow the rec buffer, no
enlargement happens. After this, join_append_data is called again, now with
copy_phase=1, and with a giant field_len.
Thus join_append_data writes the remaining data out-of-bounds of the self->rec
buffer which is located on the heap. Such a complete heap corruption should
definitely be exploitable to gain remote code execution.
Further details:
Tested version: Python-3.5.2, 32 bits
Proof-of-concept reproducer script (32-bits only):
--- begin script ---
import _csv
class MockFile:
def write(self, _):
pass
writer = _csv.writer(MockFile())
writer.writerow(["A"*0x10000, '"'*0x7fffff00])
--- end script ---
Python (configured with --with-pydebug) segfaults when the script is run. A
backtrace can be seen below. Note that the script only crashes on 32-bit
versions of Python. That's because the rec_len variable is an ssize_t, which is
4 bytes wide on 32-bit architectures, but 8 bytes wide on 64-bit arches.
(gdb) r
Starting program: /home/ubuntu32/python3/Python-3.5.2/python ../poc1.py
...
Program received signal SIGSEGV, Segmentation fault.
PyType_IsSubtype (a=0x0, b=b@entry=0x82d9aa0 <PyModule_Type>) at Objects/typeobject.c:1343
1343 mro = a->tp_mro;
(gdb) bt
#0 PyType_IsSubtype (a=0x0, b=b@entry=0x82d9aa0 <PyModule_Type>) at Objects/typeobject.c:1343
#1 0x080e29d9 in PyModule_GetState (m=0xb7c377f4) at Objects/moduleobject.c:532
#2 0xb7fd1a33 in join_append_data (self=self@entry=0xb7c2ffac, field_kind=field_kind@entry=0x1, field_data=field_data@entry=0x37c2f038,
field_len=field_len@entry=0x7fffff00, quoted=quoted@entry=0xbffff710, copy_phase=copy_phase@entry=0x1)
at /home/ubuntu32/python3/Python-3.5.2/Modules/_csv.c:1060
#3 0xb7fd1d6e in join_append (self=self@entry=0xb7c2ffac, field=field@entry=0x37c2f018, quoted=0x1, quoted@entry=0x0)
at /home/ubuntu32/python3/Python-3.5.2/Modules/_csv.c:1138
...
New changeset fdae903db33a by Benjamin Peterson in branch '2.7':
check for overflow in join_append_data (closes #27758)
New changeset afa356402217 by Benjamin Peterson in branch '3.3':
check for overflow in join_append_data (closes #27758)
New changeset 10b89df93c58 by Benjamin Peterson in branch '3.4':
merge 3.3 (#27758)
New changeset 55e8d3e542bd by Benjamin Peterson in branch '3.5':
merge 3.4 (closes #27758)
New changeset 609b554dd4a2 by Benjamin Peterson in branch 'default':
merge 3.5 (closes #27758)
Thanks for fixing this. I looked at the patch and it seems correct. | https://bugs.python.org/issue27758 | CC-MAIN-2020-34 | refinedweb | 570 | 51.44 |
custom resource._STRING_BLACKLIST: Specify a string array with the names of query string parameters to exclude from cache keys. All other parameters are included. You can specify
queryStringBlacklistor
queryStringWhitelist, but not both.
QUERY_STRING_WHIT: us-docker.pkg.dev/google-samples/containers/gke ingress my-ingress --namespace=cdn-how-to | grep "Address"Output:
Address: ADDRESS
ADDRESS.
Cleaning up
To prevent unwanted charges incurring on your account, release the static IP address that you reserved:
gcloud compute addresses delete cdn-how-to-address --global.
A BackendConfig allows you to precisely control the load balancer health check settings.must be less than or equal to the
INTERVAL. Units are seconds. Each probe requires an
HTTP 200 (OK)response code to be delivered before the probe timeout.
HEALTH_THRESHOLD: us-docker.pkg.dev/google-samples/containers/gke kind: Ingress metadata: name: my-ingress spec: rules: - http: paths: - path: /* pathType: ImplementationSpecific backend: service: name: my-service port: number: one (Service, port) pair can consume only one BackendConfig, even if multiple Ingress objects reference the (Service, port). This means all Ingress objects that reference the same (Service, port) must use the same configuration for Google Cloud Armor, IAP, and Cloud CDN.
IAP and Cloud CDN cannot be enabled for the same HTTP(S) Load Balancing backend service. This means that you cannot configure both IAP and Cloud CDN in the same BackendConfig.
You must use
kubectl to 1.18.20-gke.5099
- 1.19.10-gke.700 to 1.19.14-gke.299
- 1.20.6-gke.700 to 1.20.9-gke.899.
Upgrade your GKE control plane
to one of the following updated versions that patches this issue and allows
v1beta1 BackendConfig resources to be used safely:
- 1.18.20-gke.5100 and later
- 1.19.14-gke.300 and later
- 1.20.9-gke.900 and later
What's next
- GKE Ingress for single-cluster load balancing.
- Multi Cluster Ingress load balancing.
- Ingress tutorial for deploying HTTP(S) routing for GKE apps. | https://cloud.google.com/kubernetes-engine/docs/how-to/ingress-features?hl=fa | CC-MAIN-2021-43 | refinedweb | 327 | 50.84 |
I have an app that targets .net 3.5 but since 3.5 is backwards compatible with earlier versions of the .net framework users can still run my app but with some problems. The errors dont cause it to crash, but an error report is generated and if the user clicks continue the app appears to continue without problems but doesnt perform the actions defined correctly for whatever event would be taking place.
With removing imports to 3.5 only namespaces and changing the compiler target framework to 2.0 plus changing .count to .length where appropriate in my code I can make my app work fully on 2.0 without any problems.
Problem is, with MS supporting installation of .net 3.5 in Windows XP I'd like to keep my application's target framework at 3.5 so I can take advantage of newer features. Because of the problems users get running my app when they've got older versions of .net installed I would like to create an if condition in the app's load event to retrieve the .net version if possible for the end user's system. If its lower than 3.5, I'll have the app promptly let the user know of potential problems.
So...
What namespace/function do i use for .netversion?What namespace/function do i use for .netversion?Code:if (.netversion < 3.5) { MessageBox.Show("Update please!"); }
EDIT: Apologies, its a lot easier than I thought:
Code:if (System.Enviornment.Version < 3.5) { MessageBox.Show("Update please!"); } | http://cboard.cprogramming.com/csharp-programming/115998-get-net-framework-version.html | CC-MAIN-2014-35 | refinedweb | 257 | 68.87 |
#include <pcap/pcap.h> int pcap_stats(pcap_t *p, struct pcap_stat *ps);
pcap_stats() is supported only on live captures, not on ``savefiles''; no statistics are stored in ``savefiles'', so no statistics are available when reading from a ``savefile''.
A struct pcap_stat has the following members:
The statistics do not behave the same way on all platforms. ps_recv might count packets whether they passed any filter set with pcap_setfilter(3PCAP) or not, or it might count only packets that pass the filter. It also might, or might not, count packets dropped because there was no room in the operating system's buffer when they arrived. ps_drop is not available on all platforms; it is zero on platforms where it's not available. If packet filtering is done in libpcap, rather than in the operating system, it would count packets that don't pass the filter. Both ps_recv and ps_drop might, or might not, count packets not yet read from the operating system and thus not yet seen by the application. ps_ifdrop might, or might not, be implemented; if it's zero, that might mean that no packets were dropped by the interface, or it might mean that the statistic is unavailable, so it should not be treated as an indication that the interface did not drop any packets. | https://www.tcpdump.org/manpages/pcap_stats.3pcap.html | CC-MAIN-2019-04 | refinedweb | 216 | 62.01 |
Animating the Lorenz System in 3D
One of the things I really enjoy about Python is how easy it makes it to solve interesting problems and visualize those solutions in a compelling way. I've done several posts on creating animations using matplotlib's relatively new animation toolkit: (some examples are a chaotic double pendulum, the collisions of particles in a box, the time-evolution of a quantum-mechanical wavefunction, and even a scene from the classic video game, Super Mario Bros.).
Recently, a reader commented asking whether I might do a 3D animation example. Matplotlib has a decent 3D toolkit called mplot3D, and though I haven't previously seen it used in conjunction with the animation tools, there's nothing fundamental that prevents it.
At the commenter's suggestion, I decided to try this out with a simple example of a chaotic system: the Lorenz equations.
Solving the Lorenz System
The Lorenz Equations are a system of three coupled, first-order, nonlinear differential equations which describe the trajectory of a particle through time. The system was originally derived by Lorenz as a model of atmospheric convection, but the deceptive simplicity of the equations have made them an often-used example in fields beyond atmospheric physics.
The equations describe the evolution of the spatial variables $x$, $y$, and $z$, given the governing parameters $\sigma$, $\beta$, and $\rho$, through the specification of the time-derivatives of the spatial variables:
${\rm d}x/{\rm d}t = \sigma(y - x)$
${\rm d}y/{\rm d}t = x(\rho - z) - y$
${\rm d}z/{\rm d}t = xy - \beta z$
The resulting dynamics are entirely deterministic giving a starting point $(x_0, y_0, z_0)$ and a time interval $t$. Though it looks straightforward, for certain choices of the parameters $(\sigma, \rho, \beta)$, the trajectories become chaotic, and the resulting trajectories display some surprising properties.
Though no general analytic solution exists for this system, the solutions can be computed numerically. Python makes this sort of problem very easy to solve: one can simply use Scipy's interface to ODEPACK, an optimized Fortran package for solving ordinary differential equations. Here's how the problem can be set up:
import numpy as np from scipy import integrate # Note: t0 is required for the odeint function, though it's not used here. def lorentz_deriv((x, y, z), t0, sigma=10., beta=8./3, rho=28.0): """Compute the time-derivative of a Lorenz system.""" return [sigma * (y - x), x * (rho - z) - y, x * y - beta * z] x0 = [1, 1, 1] # starting vector t = np.linspace(0, 3, 1000) # one thousand time steps x_t = integrate.odeint(lorentz_deriv, x0, t)
That's all there is to it!
Visualizing the results
Now that we've computed these results, we can use matplotlib's animation and 3D plotting toolkits to visualize the trajectories of several particles. Because I've described the animation tools in-depth in a previous post, I will skip that discussion here and jump straight into the code:
import numpy as np from scipy import integrate from matplotlib import pyplot as plt from mpl_toolkits.mplot3d import Axes3D from matplotlib.colors import cnames from matplotlib import animation N_trajectories = 20 def lorentz_deriv((x, y, z), t0, sigma=10., beta=8./3, rho=28.0): """Compute the time-derivative of a Lorentz system.""" return [sigma * (y - x), x * (rho - z) - y, x * y - beta * z] # Choose random starting points, uniformly distributed from -15 to 15 np.random.seed(1) x0 = -15 + 30 * np.random.random((N_trajectories, 3)) # Solve for the trajectories t = np.linspace(0, 4, 1000) x_t = np.asarray([integrate.odeint(lorentz_deriv, x0i, t) for x0i in x = sum([ax.plot([], [], [], '-', c=c) for c in colors], []) pts = sum([ax.plot([], [], [], 'o', c=c) for c in colors], []) #:]) ax.view_init(30, 0.3 * i) fig.canvas.draw() return lines + pts # instantiate the animator. anim = animation.FuncAnimation(fig, animate, init_func=init, frames=500, interval=30, blit=True) # Save as mp4. This requires mplayer or ffmpeg to be installed #anim.save('lorentz_attractor.mp4', fps=15, extra_args=['-vcodec', 'libx264']) plt.show()
The resulting animation looks something like this:
Notice that there are two locations in the space that seem to draw-in all paths: these are the so-called "Lorenz attractors", and have some interesting properties which you can read about elsewhere. The qualitative characteristics of these Lorenz attractors vary in somewhat surprising ways as the parameters $(\sigma, \rho, \beta)$ are changed. If you are so inclined, you may wish to download the above code and play with these values to see what the results look like.
I hope that this brief exercise has shown you the power and flexibility of Python for understanding and visualizing a large array of problems, and perhaps given you the inspiration to explore similar problems.
Happy coding! | http://jakevdp.github.io/blog/2013/02/16/animating-the-lorentz-system-in-3d/ | CC-MAIN-2018-39 | refinedweb | 798 | 52.49 |
gcc.)
* -mtune=generic should probably favour movd/movq. I think it's better for a weighted-average of CPUs we care about for -mtune=generic. Most of the text below is an attempt to back up this claim, but I don't have hardware to test with so all I can do is look at Agner Fog's tables and microarch pdf.
movd is about break-even on Bulldozer, better on SnB-family, much better on Core2/Nehalem, and significantly worse only on AMD K8/K10. Or maybe use a hybrid strategy that does half with movd and half with store/reload, which can actually be better than either strategy alone on Bulldozer and SnB-family.
-----------
The tune=haswell issue is maybe separate from the others, since gcc already knows that bouncing through memory isn't the optimal strategy.
#include <immintrin.h>
__m128i combine64(long long a, long long b) {
return _mm_set_epi64x(b,a);
}
gcc8 -O3 -mtune=haswell emits:
movq %rsi, -16(%rsp)
movq %rdi, %xmm0
movhps -16(%rsp), %xmm0
(see for the wasted store with -msse4 -mno-avx).
I think what clang and ICC do is optimal for the SSE2-only case, for Intel CPUs and Ryzen:
movq %rsi, %xmm1
movq %rdi, %xmm0
punpcklqdq %xmm1, %xmm0
_mm_set_epi32(d,c,b,a) with -mtune=haswell gives us the expected movd/punpck (without SSE4), no store/reload.
-----
Using movd or movq instead of a store/reload is a code-size win: movd %eax, %xmm0 is 4 bytes (or 5 with a REX prefix for movq or high registers). Store/reload to -0x10(%rsp) is 10, 11, or 12 bytes, depending on operand size and high register(s).
movd int->xmm is lower latency than store/reload on most CPUs, especially Intel SnB-family where it's 1c latency, and also AMD Ryzen. On SnB family, store/reload's only advantage is rare cases where port5 is a throughput bottleneck and latency isn't important.
It replaces a store and a load uop with 1 ALU uop on Intel Core2 and later, and Atom/Silvermont/KNL. Also 1 uop on VIA Nano.
movd int->xmm is 2 ALU uops on AMD K10/Bulldozer-family and Jaguar, and P4, and 3 on K8/Bobcat. It never costs any more total uops for the front-end (since a movd load is 2 uops on K8/Bobcat), but decoding a multi-uop instruction can sometimes be a bottleneck (especially on K8 where a 3 m-op instruction is a "vectorpath" (microcode)).
Store/reload has one per clock throughput on every CPU, AFAIK. On most CPUs that have much weight in -mtune=generic, movd's throughput is one-per-clock or better. (According to Agner Fog's tables, only Bobcat, K8/K10, and P4 have throughput of one per 2 or 3 clocks for movd/movq int->xmm). The biggest problem is K10, with something like one per 2.8c throughput (according to a couple reports from, e.g.). Agner Fog says 3, but none of these are measuring with other instructions mixed in.
Some CPUs have better than one-per-clock throughput for movd/movq: Core2 is 0.5, and Nehalem is 0.33. So do we hurt them a lot to help PhenomII? I'd guess that Core2+Nehalem has somewhat more weight in tune=generic than K10. Some AMD PhenomII CPUs are still around, though. (But we could exclude them for code built with -mssse3)
---------
Probably the deciding factor for tune=generic is whether it hurts AMD Bulldozer-family significantly or at all. It looks there's not much difference either way: similar throughput and latency.
However, store/reload may have an advantage when two cores in a cluster are competing for their shared vector unit. Probably both of movd's macro-ops need to run on the shared vector unit, but for store/reload maybe only the load needs the shared resource. IDK if this is correct or relevant, though. Probably -mtune=bdver* should keep using store/reload, but this might not be enough of a reason to stop -mtune=generic from using movd.
Agner Fog's microarch pdf (Bulldozer section 18.11) says:
> Nevertheless, I cannot confirm that it is faster to move data from a general purpose register
> to a vector register through a memory intermediate, as recommended in AMD's optimization guide.
That AMD optimization guide advice may have been left over from K8/K10, where movd/movq from integer->vector has bad throughput.
As far as latency goes, scalar store -> vector reload is 10c on Bulldozer according to Agner Fog's numbers, while movd/movq is 10c on Bulldozer/Piledriver, and 5c on Steamroller. (Steamroller also appears to have reduced the store-forwarding latency to 6c. Agner's tables are supposed to have the store+load latencies add up to the store-forwarding latency.)
Store/reload is 2 instructions / 2 m-ops, but movd or movq is 1 instruction / 2 m-ops. This is mostly ok for the decoders, but bdver1 can't decode in a 2-2 pattern (ver2/ver3 can).
Scheduling instructions to avoid consecutive multi-uop instructions may help decode throughput on bdver1. But pairs of 2 m-op instructions are good on bdver2 and later.
With SSE4, pinsrd/q is probably good, because it's still only 2 m-ops on Bulldozer-family. Indeed, -mtune=bdver1 uses 2x store/reload and 2x pinsrd for
_mm_set_epi32(d,c,b,a).
movl %edx, -12(%rsp)
movd -12(%rsp), %xmm1
movl %edi, -12(%rsp)
movd -12(%rsp), %xmm0
pinsrd $1, %ecx, %xmm1
pinsrd $1, %esi, %xmm0
punpcklqdq %xmm1, %xmm0
Even better would probably be
movd %edx, %xmm1
movl %edi, -12(%rsp)
pinsrd $1, %ecx, %xmm1 # for bdver2, schedule so it can decode in a 2-2 pattern with the other pinsrd
movd -12(%rsp), %xmm0
pinsrd $1, %esi, %xmm0
punpcklqdq %xmm1, %xmm0
The store/reload can happen in parallel with the direct movd int->xmm1. This would be pretty reasonable for tune=generic, and should run well on Intel SnB-family CPUs.
-----
For -msse4 -mtune=core2, -mtune=nehalem, probably this is optimal:
movd %edi, %xmm0
pinsrd $1, %esi, %xmm0
pinsrd $2, %edx, %xmm0
pinsrd $3, %ecx, %xmm0
movd can run on any port and pinsrd is only 1 uop. So this has a total latency of 2 + 3*1 = 5c on Core2 Wolfdale. (First-gen core2 doesn't have SSE4.1). Front-end bottlenecks are more common on Core2/Nehalem since they don't have a uop-cache, so fewer instructions is probably a good bet even at the expense of latency.
It might not be worth the effort to get gcc to emit this for Core2/Nehalem, since they're old and getting less relevant all the time.
It may also be good for -mtune=silvermont or KNL, though, since they also have 1 uop pinsrd/q. But with 3c latency for pinsrd, the lack ILP may be a big problem. Also, decode on Silvermont without VEX will stall if the pinsrd needs a REX (too many prefixes). KNL should always use VEX or EVEX to avoid that.
Confirmed. We need to revisit a lot of the little details for generic tuning for recent GCC.
See also.
gcc -m32 does an even worse job of getting int64_t into an xmm reg, e.g. as part of a 64-bit atomic store.
We get a store-forwarding failure from code like this, even with -march=haswell
movl %eax, (%esp)
movl %edx, 4(%esp)
movq (%esp), %xmm0
Also, going the other direction is not symmetric. On some CPUs, a store/reload strategy for xmm->int might be better even if an ALU strategy for int->xmm is best.
Also, the choice can depend on chunk size, since loads are cheap (2 per clock for AMD since K8 and Intel since SnB). And store-forwarding works.
Doing the first one with movd and the next with store/reload might be good, too, on some CPUs. especially if there's some independent work that can happen for the movd result.
I also discussed some of this at the bottom of the first post in.
(In reply to Peter Cordes from comment #0)
> gcc.)
>
Yes for Ryzen, using direct move instructions should be better than using store-forwarding.
AVX512F with marge-masking for integer->vector broadcasts give us a single-uop replacement for vpinsrq/d, which is 2 uops on Intel/AMD.
See my answer on. I don't have access to real hardware, but according to reported uop counts, this should be very good: 1 uop per instruction on Skylake-avx512 or KNL
vmovq xmm0, rax 1 uop p5 2c latency
vpbroadcastq xmm0{k1}, rdx ; k1 = 0b0010 1 uop p5 3c latency
vpbroadcastq ymm0{k2}, rdi ; k2 = 0b0100 1 uop p5 3c latency
vpbroadcastq ymm0{k3}, rsi ; k3 = 0b1000 1 uop p5 3c latency
xmm vs. ymm vs. zmm makes no difference to latency, according to InstLatx64
(For a full ZMM vector, maybe start a 2nd dep chain and vinsert to combine 256-bit halves. Also means only 3 k registers instead of 7)
vpbroadcastq zmm0{k4}, rcx ; k4 =0b10000 3c latency
... filling up the ZMM reg
Starting with k1 = 2 = 0b0010, we can init the rest with KSHIFT:
mov eax, 0b0010 = 2
kmovw k1, eax
KSHIFTLW k2, k1, 1
KSHIFTLW k3, k1, 2
# KSHIFTLW k4, k1, 3
...
KSHIFT runs only on port 5 (SKX), but so does KMOV; moving from integer registers would just cost extra instructions to set up integer regs first.
It's actually ok if the upper bytes of the vector are filled with broadcasts, not zeros, so we could use 0b1110 / 0b1100 etc. for the masks. We could start with kxnor to generate a -1 and left-shift that, but that's 2 port5 uops vs. mov eax,2 / kmovw k1, eax being p0156 + p5.
Loading k registers from memory is not helpful: according to IACA, it costs 3 uops. (But that includes p237, and a store-AGU uop makes no sense, so it might be wrong.)
*** Bug 87976 has been marked as a duplicate of this bug. *** | https://gcc.gnu.org/bugzilla/show_bug.cgi?id=80820 | CC-MAIN-2020-16 | refinedweb | 1,683 | 69.62 |
.
One of the easiest ways to drive a mouse or keyboard is via the USB HID interface. Many of the Arduino boards (such as Arduino Leonardo, Arduino Micro or Arduino Due) can be conveniently turned into an HID device. For my implementation, I used an Arduino Due, and Arduino mouse and keyboard library is used to drive the mouse.
On the Android side, I wrote a simple app (largely based on the examples discussed here) which sends the current touch locations and button click events via Bluetooth. And on the Arduino side, the Bluetooth to serial adapter sends the received commands to Arduino’s serial port. The event commands are then parsed and used to drive the mouse. To simplify the serial communications, I used a simple serial protocol I created earlier. You can take a look at that post for more detailed explanation of the protocol.
The serial commands sent from the Android to the Arduino take the following format:
x pos, y pos, action
The action field indicate whether it is a mouse move event (0), a left click (1) or a right click (2).
Here is the Arduino source code in its entirety:
#include <SimpleSerialProtocol.h> SimpleSerialProtocol p; int xOrg = 0, yOrg = 0, xVal = 0, yVal = 0, xDelta = 0, yDelta = 0; void setup() { Serial1.begin(9600); p.CmdReceivedPtr = CmdReceived; Mouse.begin(); } void loop() { p.receive(); } void CmdReceived(byte* cmd, byte cmdLength) { String s = String((char *)cmd); String x = parseCSV(s, 0); String y = parseCSV(s, 1); xVal = x.toInt(); yVal = y.toInt(); if (xVal == 0 && yVal == 0) { //buton clicked String c = parseCSV(s, 2); int btnVal = c.toInt(); if (btnVal == 1) Mouse.click(MOUSE_LEFT); else if (btnVal == 2) Mouse.click(MOUSE_RIGHT); } else { // moving if (abs(xVal - xOrg) > 1) xDelta = (xVal - xOrg); else xDelta = 0; if (abs(yVal - yOrg) > 1) yDelta = (yVal - yOrg); else yDelta = 0; if (xDelta != 0 || yDelta != 0) { Mouse.move(xDelta, yDelta, 0); xOrg = xVal; yOrg = yVal; } } } String parseCSV(String data, int index) { int found = 0; int strIndex[] = {0, -1}; char separator = ',';]) : ""; }
The function used to parse the CSV command is modified from the code in this thread on stackexchange. The Java source code on the Android side is included towards the end. For this demonstration, I only implemented mouse move and mouse click events but you can easily implement other functionalities such as scrolling and gestures as well if you want to.
Here is a short video demonstration:
Hi Kerry,
I am looking to accomplish something similar to this project. I am attempting to use iOS13’s new mouse pointer function to control an iPhone from an android device. Only issue is that I have no ability at this time to change my hardware. Is there any way that you could think of to do this, where the intermediate Arduino step is eliminated?
Thanks in advanced,
Best,
Sasha Ohayon | http://www.kerrywong.com/2015/01/18/turn-your-android-phone-into-a-wireless-touchpad/comment-page-1/ | CC-MAIN-2020-29 | refinedweb | 475 | 71.44 |
- In Visual Studio .NET, create a class library solution and project.
- Add Reference from the Project menu or Solution Explorer, browse to the \inc or \incx64 directory of the ObjectARX SDK and select acdbmgd.dll and acmgd.dll.
- Create a new class or rename the auto-created class
- In the main class file, add the namespaces you will use.
using Autodesk.AutoCAD.ApplicationServices;
using Autodesk.AutoCAD.DatabaseServices;
using Autodesk.AutoCAD.Runtime;
- Add a C# function
Note the attributes are important and must be used, otherwise the command will not be fired.
- In AutoCAD command prompt, type “netload”, then browse to the C# Library assembly, and click OK.
- In AutoCAD command prompt, type the function name, in our case, it is “CreateIt”
Now add some more meaningful code:
public static void CreatePorousObject() { Point3d center=new Point3d(9.0, 3.0, 0.0); Vector3d normal=new Vector3d(0.0, 0.0, 1.0); Circle pCirc = new Circle(center, normal, 2.0); Database acadDB = HostApplicationServices.WorkingDatabase; Autodesk.AutoCAD.DatabaseServices.TransactionManager acadTransMgr =acadDB.TransactionManager; Transaction acadTrans = acadTransMgr.StartTransaction(); BlockTableRecord acadBTR = (BlockTableRecord)acadTrans.GetObject(acadDB.CurrentSpaceId,OpenMode.ForWrite); acadBTR.AppendEntity(pCirc); acadTrans.AddNewlyCreatedDBObject(pCirc, true); acadTrans.Commit(); }
Now re-compile and run, you will see a circle drawn in AutoCAD.
Advertisements
Eike
April 25, 2013 at 5:34 pm
Hi! I have tried your quick Plugin Tutorial.
However I get an errormessage, when I try to debug:
Here are my questions:
– I am using AutoCAD Architecture 2011and Visual Studio C# 2012 Express.
Are there any compatibility problems?
– I downloaded ObjectARX 2011 onto my C drive and am referencing to the .dll in here:
C:\ObjectARX 2011\inc
This should all be fine? Am I using a wrong version?
– When I debugg it tells me that a class library cannot be started directly and that i should add a new project to the project folder and start from here, referencing the class library.
However, when I do that i get an error again..and so on.
I’d be really happy for any comment.
Thanks,
Eike
Dileep
July 16, 2013 at 5:19 pm
Autocad managed assemblies are not going to work.
You need to use the Type Library Instead.
xinyustudio
July 16, 2013 at 5:23 pm
Can you offer some links regarding to aforementioned changes? Thanks. | https://xinyustudio.wordpress.com/2010/01/24/developing-autocad-addinplugin-using-c/ | CC-MAIN-2017-13 | refinedweb | 382 | 52.36 |
Test.LeanCheck.Function.ShowFunction
Contents
Description
This module exports the
ShowFunction typeclass,
its instances and related functions.
Using this module, it is possible to implement a Show instance for functions:
import Test.LeanCheck.ShowFunction instance (Show a, Listable a, ShowFunction b) => Show (a->b) where show = showFunction 8
This shows functions as a case pattern with up to 8 cases.
The module
Test.LeanCheck.Function.Show (
Show)
exports an instance like the one above.
Synopsis
- showFunction :: ShowFunction a => Int -> a -> String
- showFunctionLine :: ShowFunction a => Int -> a -> String
- type Binding = ([String], Maybe String)
- bindings :: ShowFunction a => a -> [Binding]
- class ShowFunction a where
- tBindingsShow :: Show a => a -> [[Binding]]
- class Listable a
Documentation
showFunction :: ShowFunction a => Int -> a -> String Source #
Given a number of patterns to show, shows a
ShowFunction value.
showFunction undefined True == "True" showFunction 3 (id::Int) == "\\x -> case x of\n\ \ 0 -> 0\n\ \ 1 -> 1\n\ \ -1 -> -1\n\ \ ...\n" showFunction 4 (&&) == "\\x y -> case (x,y) of\n\ \ (False,False) -> False\n\ \ (False,True) -> False\n\ \ (True,False) -> False\n\ \ (True,True) -> True\n"
This can be used as an implementation of show for functions:
instance (Show a, Listable a, ShowFunction b) => Show (a->b) where show = showFunction 8
showFunctionLine :: ShowFunction a => Int -> a -> String Source #
Same as showFunction, but has no line breaks.
showFunction 2 (id::Int) == "\\x -> case x of 0 -> 0; 1 -> 1; ..."
bindings :: ShowFunction a => a -> [Binding] Source #
Given a
ShowFunction value, return a list of bindings
for printing. Examples:
bindings True == [([],True)] bindings (id::Int) == [(["0"],"0"), (["1"],"1"), (["-1"],"-1"), ... bindings (&&) == [ (["False","False"], "False") , (["False","True"], "False") , (["True","False"], "False") , (["True","True"], "True") ]
class ShowFunction a where Source #
ShowFunction values are those for which
we can return a list of functional bindings.
As a user, you probably want
showFunction and
showFunctionLine.
Non functional instances should be defined by:
instance ShowFunction Ty where tBindings = tBindingsShow
Instances
tBindingsShow :: Show a => a -> [[Binding]] Source #
Re-exports
class Listable a Source #
A type is
Listable when there exists a function that
is able to list (ideally all of) its values.
Ideally, instances should be defined by a
tiers function that
returns a (potentially infinite) list of finite sub-lists (tiers):
the first sub-list contains elements of size 0,
the second sub-list contains elements of size 1
and so on.
Size here is defined by the implementor of the type-class instance.
For algebraic data types, the general form for
tiers is
tiers = cons<N> ConstructorA \/ cons<N> ConstructorB \/ ... \/ cons<N> ConstructorZ
where
N is the number of arguments of each constructor
A...Z.
Instances can be alternatively defined by
list.
In this case, each sub-list in
tiers is a singleton list
(each succeeding element of
list has +1 size).
The function
deriveListable from Test.LeanCheck.Derive
can automatically derive instances of this typeclass.
A
Listable instance for functions is also available but is not exported by
default. Import Test.LeanCheck.Function if you need to test higher-order
properties.
Instances | http://hackage.haskell.org/package/leancheck-0.6.1/docs/Test-LeanCheck-Function-ShowFunction.html | CC-MAIN-2019-26 | refinedweb | 497 | 52.49 |
Utility handle class for handling the reference counting and managuement of the RCPNode object. More...
#include <RTOpPack_SPMD_apply_op_def.hpp>
Utility handle class for handling the reference counting and managuement of the RCPNode object.
Again, this is *not* a user-level class. Instead, this class is used by all of the user-level reference-counting classes.
NOTE: I (Ross Bartlett) am not generally a big fan of handle classes and greatly prefer smart pointers. However, this is one case where a handle class makes sense. First, I want special behavior in some functions when the wrapped RCPNode pointer is null. Secound, I can't use one of the smart-pointer classes because this class is used to implement all of those smart-pointer classes!
Definition at line 692 of file RTOpPack_SPMD_apply_op_def.hpp.
Ouput stream operator for RCPNodeHandle.
Definition at line 964 of file RTOpPack_SPMD_apply_op_def.hpp. | http://trilinos.sandia.gov/packages/docs/r10.8/packages/rtop/src/support/doc/html/classRTOpPack_1_1Teuchos_1_1RCPNodeHandle.html | CC-MAIN-2014-15 | refinedweb | 143 | 59.09 |
Simple Temp/Light/Time setup with LCD 1.4 (no beta anymore)
Have a simple setup with a standard hitachi 16x2 display showing temperature, light level and current server time and connection status. Not done yet because i want it to show current scene for the location where this setup is add. But scenes are not supported yet.
The setup is using the 1.4 beta library setup.
The temperature and the light values are updated every ten seconds (approximately) and values are send to the server about every 59 seconds. Also every minute the time is updated by using the time request method and displays the time in current timezone and correct offset (daylight savings). It can be +59 or -59 seconds off because the server does not yet pushes the time on change and is set on request
.
On the display at the right top is a an antenna character which will blink of no data is received from the server. This is used in combination with the time request if there is no response within 3 seconds (quite a very large margin) the antenna will blink saying there is no server response. When the server responses again it will stop blinking. I on purpose set it on server responses and not at radio level, because both mean no server "connection" (interaction).
- korttoma Hero Member last edited by
Nice work!
Would you mind sharing the sketch file for this sensor node? I would like to do something similar but I'm not skilled enough to start from scratch.
This is the current sketch, it is not final yet, so improvements are not put in it yet and not all the doc comments.
#include <TimedAction.h> #include <SPI.h> #include <MySensor.h> #include <LiquidCrystal.h> #include <Time.h> /* * The LCD circuit: * LCD RS pin to digital pin 8 * LCD Enable pin to digital pin 7 * LCD D4 pin to digital pin 6 * LCD D5 pin to digital pin 5 * LCD D6 pin to digital pin 4 * LCD D7 pin to digital pin 3 * LCD R/W pin to ground * The used LCD is an PC1602-F Be aware that the pins 15 and 16 are next to pin one so from * Left to right viewed from on top (facing lcd screen) it is 14, 13, 12, 11, 10, 9, 8, 7, 6, 5, 4, 3, 2, 1, 16, 15 */ #define CHILD_ID_LIGHT 0 #define CHILD_ID_TMP 1 #define LIGHT_SENSOR_ANALOG_PIN 0 #define TMP_SENSOR_ANALOG_PIN 1 unsigned long SLEEP_TIME = 94; // Sleep time between reads (in milliseconds) (We do calculations and wait 5 ms between readings) int readingscounter = 0; int timeCheckCounter = 0; boolean netAvailSwap = false; MySensor gw; MyMessage msgLight(CHILD_ID_LIGHT, V_LIGHT_LEVEL); MyMessage msgTemp (CHILD_ID_LIGHT, V_TEMP); uint16_t luxTotal; uint16_t tmpTotal; byte temp[8] = { 0b01000, 0b10101, 0b00010, 0b01000, 0b10101, 0b00010, 0b00000, 0b00000 }; byte degree[8] = { 0b01100, 0b10010, 0b10010, 0b01100, 0b00000, 0b00000, 0b00000, 0b00000 }; byte sun[8] = { 0b10101, 0b01110, 0b11011, 0b01110, 0b10101, 0b00000, 0b00000, 0b00000 }; byte antennaOk[8] = { 0b11111, 0b01110, 0b01110, 0b00100, 0b00100, 0b00100, 0b00000, 0b00000 }; byte timeSymbol[8] = { 0b01110, 0b10101, 0b10101, 0b11001, 0b01110, 0b00000, 0b00000, 0b00000 }; LiquidCrystal lcd(3, 4, 5, 6, 7, 8); TimedAction timedAction = TimedAction(1000,updateSensors); void setup() { lcd.begin(16, 2); lcd.createChar(0, degree); lcd.createChar(1, sun); lcd.createChar(3, antennaOk); lcd.createChar(4, temp); lcd.createChar(5, timeSymbol); // Send the sketch version information to the gateway and Controller lcd.setCursor(0, 0); lcd.print("User status"); lcd.setCursor(0, 1); lcd.print("PiDome"); // Register all sensors to gateway (they will be created as child devices) gw.begin(); gw.sendSketchInfo("PiDome u-stat+lcd", "1.0"); gw.present(CHILD_ID_LIGHT, S_LIGHT_LEVEL); gw.present(CHILD_ID_TMP, S_TEMP); delay(3000); gw.requestTime(receiveTime); } void loop(){ gw.process(); timedAction.check(); } void updateSensors(){ int curLux = constrain(map(analogRead(LIGHT_SENSOR_ANALOG_PIN), 0, 1023, 0, 101),0,100); luxTotal = luxTotal + curLux; delay(1); analogRead(TMP_SENSOR_ANALOG_PIN);///first read can be off because we just read the light sensor, so do a read and discard it int curTemp = (((analogRead(TMP_SENSOR_ANALOG_PIN) * 5.0) / 1024) - 0.5) * 100; tmpTotal = tmpTotal + curTemp; if(readingscounter==10){ lcd.setCursor(0, 0); lcd.print(" ");lcd.setCursor(0, 0);lcd.write(byte(4));lcd.print(curTemp);lcd.write(byte(0));lcd.print("C ");lcd.write(byte(1));lcd.print(curLux);lcd.print("%"); } if(timeCheckCounter==62){ lcd.setCursor(15,0); (netAvailSwap==true)?lcd.write(byte(3)):lcd.print(" "); netAvailSwap=!netAvailSwap; } else { lcd.setCursor(15,0); if(netAvailSwap==false){ lcd.write(byte(3)); } netAvailSwap = true; } if(readingscounter==59){ gw.send(msgLight.set(luxTotal/readingscounter)); gw.send(msgTemp.set(tmpTotal/readingscounter)); readingscounter = 1; tmpTotal = 0; luxTotal = 0; gw.requestTime(receiveTime); } else { readingscounter++; if(timeCheckCounter!=62)timeCheckCounter++; } } void receiveTime(unsigned long time) { timeCheckCounter = 0; netAvailSwap = true; setTime(time); lcd.setCursor(0,1); lcd.write(byte(5)); lcdDigits(hour()); lcd.print(":"); lcdDigits(minute()); } void lcdDigits(int digits){ if(digits < 10) lcd.print('0'); lcd.print(digits); }
TimedAction is from the Arduino playground (I quickly needed something to do timed actions).
- korttoma Hero Member last edited by korttoma
Thanks for the sketch @JOHN
It gives me a few ideas how to set things up but it seems like you are using a different lcd library then I had in mind.
That one use way to many digital pins for my taste.
I was planing on using the LiquidCrystal_I2C.h once my ordered display gets here.
Thanks again
@korttoma
Your welcome. Yes it does use too many pins, but this one i had just laying around doing nothing and was playing with the libraries. i2c would definitely be better imho.
- korttoma Hero Member last edited by
I found my LCD in the mailbox just now. Time pull something together.
I've made a couple of small changes:
Having a NPN laying around the display's back-light is now handled by the Atmega. And the display now shows overall users status (Awake/Sleeping/Away).
The display turns on when the users status change, when a button is pressed or connection is "lost" (No server response). After 20 seconds of user change/button press/"connection" back the display's back-light goes off again.
Planning: An extra button to send a signal to the server where it toggles user status to sleep/awake (all lights off etc.../morning routines) depending on time settings on the server.
That leaves one pin unused. Any idea's?
A little update:
The below setup will run for a while until i have founded some suited casing for it... It has the same functionalities as above but in the end will get more buttons via a binary encoder i have laying around. So, it now is soldered and put in a case used in an other project (explains the BIG cutout where the LCD is seen. At the top the LDR, an the left a tactile used for the display backlight, and at the bottom (not seen) is a tmp36 sticking out.
I'm also busy with mysensors integration in the server i'm creating... below is a work in progress screenshot of a mysensors device to be added. All though it is shown in the screenshot, automatic node addressing is not yet supported. Also automatic device creation is not done yet, you will need first to define the device yourself (fields etc..).
I'm also working on a desktop and mobile client.. This is how it looks in the desktop client when this specific device is added to the server and assigned to the floor planner with light intensity (LUX) enabled. Currently i only have one device active with lux (which is the device on this page).
I hope to get full mysensors implementation soon done, i will also include mysensor example devices to the mysensor device declarations.
@hek
Yeah sure no problem, all though there is an extra button upcoming in the next couple of days(/week?) which will say something like "Auto create device" where the driver then will be able to create the needed xml for the shown device. I'm extremely busy at the moment need to prepare for a hell lot of things the upcoming weeks so there is no real ETA for that button yet.
Maybe it is handy to put an extra screenshot on how it looks when you add the serial gateway for the first time (page auto refreshes when an usb device is plugged in):
I have updated the support this evening and the implementation now also shows the last 20 messages being requested to be logged by the gateway.
Also i have added the possibility to auto assign node ID's but can not test it because the project i started this post with is running live with a server instance and is used in triggers which control my lighting... Current method is to turn on a sensor node, take a look at the last 20 messages (refresh the page by clicking on the MySensors driver). If the log shows an address is assigned, restart the sensor node.
Automatic creation of devices is put on hold because of thread: so devices still need to be created by hand.
[EDIT]
Maybe handy to create a different post about the mysensors support if that is allowed of course
[/EDIT]
I have added a device editor to the server. It uses the sensor types as a group and the sensor variable as the control.
Group (sensor):
Control (sensor variable):
This will be my last post here because my controller has got it's own controller page | https://forum.mysensors.org/topic/284/simple-temp-light-time-setup-with-lcd-1-4-no-beta-anymore | CC-MAIN-2022-21 | refinedweb | 1,572 | 64 |
Behaviours Activities
Create a Simple Behaviour
For this activity, you use the built-in menu options to change the behaviour of a field, restricting an action that you choose to a specific role. In our example, we build a behaviour that restricts a custom field called Customer Type to only project administrators. You can use this same approach in your lab environment, or you can experiment with a different field or behaviour condition.
Access Behaviours from the the Manage Apps page.
On the Behaviours page, under Add Behaviour, enter a Name and Description for your new behaviour, then click Add.
The new behaviour appears on the Behaviours page. Notice that it is not currently mapped to a project or issue type. In our example, we’ve added the behaviour Admins Only.
Next to the behaviour you added, under Operations, click Fields.
On the new page, update the behaviour settings. If you want to set a guide workflow, select that workflow in the Guide Workflow menu. Depending on your project, you see different workflow options here. Remember that the guide workflow helps you when selecting a condition for the behaviour. In our example, we are not using a validator plugin or an initialiser.
Scroll to the Add Field field, select a field to use for this new behaviour, and then click Add. You can select custom fields or system fields. Just remember to not update required fields (like Summary) as read-only or hidden — this will break your issue creation! In our example, we choose the field Customer Type.
Under Fields, update your field behaviour:
Optional/Required - Make the field optional or required. You cannot make required fields from a system configuration optional.
Writable/Readonly - Make this field viewable by users unless they meet a condition, or make it always read only.
Shown/Hidden - Make the field hidden or visible. Do not do this with required system fields such as Summary.
Under these options, you can add a new condition to the field. When you add a condition, you also need to set the options for when the condition occurs.
When - the behaviour happens when the condition is met.
Except - the behaviour does not happen when the condition is met.
In our example, we chose to make the Customer Type field readonly, set that the user must be in the Administrators project role and used the Except option, so this field appears as readonly to all users expect those in the project administrator role.
When you are finished updating your behaviour, click Save. You still need to map the behaviour.
Scroll to the top of the behaviour after saving, and under Mappings, click Add Mapping.
Then select the mapping type (Project/Issuetype or Service Desk) and then select the project(s) and issue type(s) to associate this behaviour with. For our example, we chose the Virtual Tours team and Story issue type.
Click Add Mapping to map the behaviour to the appropriate projects and issue types, and then you are all set! Test your behaviour by creating the appropriate issue type, paying attention to any conditions you may need to test.
Create a Select List With Other
For this activity, we create a behaviour that uses an option from a select list to show a text field.
Access Behaviours from the the Add-Ons page.
On the Behaviours page, under Add Behaviour, enter a Name and Description for your new behaviour, then click Add.
The new behaviour appears on the Behaviours page. Notice that it is not currently mapped to a project or issue type. In our example, we’ve added the behaviour Select List.
Next to the behaviour, click Fields.
On the Edit Behaviour page, in the Add Field menu, select the select field you plan to use for this behaviour. For this example, we select the Favorite Fruit field.
Now, we need to add a server-side script to the field for it to show the additional text field when someone selects Other from the first select list. Click Add server-side script and copy and paste the following in the inline script editor.
def otherFaveField = getFieldByName("Favorite)
Make sure you get the green success circle in the bottom right of the editor that indicates your script is correct. Be aware if you used something other than the examples we laid out, the script will likely need tweaking to work for your situation.
Click Save to save your new behaviour and make sure to map the behaviour to the appropriate project and/or issue. | https://scriptrunner.adaptavist.com/6.3.0/jira/tutorials/labs/behaviours-activity.html | CC-MAIN-2020-50 | refinedweb | 756 | 64.61 |
Let's say that I want create a specialized wpf control, "YellowTextBox". It will be the same of a common TextBox, but it will be... yellow!. Ok, I go to code:
public class YellowTextBox: TextBox
{
}
this.Background = Brushes.Yellow;
You really ought to initialize a specialized WPF control in the initializers for the dependency properties (for properties it introduces), and in the default
Style (for the new properties, and for anything it inherits that needs a different default value).
But you want to do it in C#, for some reason. Maybe Darth Vader is standing behind you, breathing heavily, threatening to choke the life out of you if you touch a XAML file. This is unlikely, but a lot of funny things happen in the Bay Area. For good or ill, that's the question you're asking.
In that case, we're talking about a) OOP theology, b) OOP reality, and C) WPF mechanics. In terms of all of those, do it in the constructor, and in WPF, in the constructor after
InitializeComponent() (if applicable, not in your case) is called. That'll precede any styles that get applied to the control in WPF, and it's good OOP practice and theology to initialize everything in the constructor that you didn't initialize in field initializers. A new instance of a class should be all shiny and ready to go, in a consistent state that won't throw any exceptions or do anything crazy if you start using it. So that means the initialization should be all complete at that point (hence the name; nobody calls it "somewhatlaterization"). Never, never, never leave any initialization to anybody else. There's no need and it's a shabby trick to play on consumers of your code. Spare people any thought of your internals. "Don't Write Booby Traps" is almost as important an aphorism as "Keep it Simple". Maybe it's the same aphorism.
Do read up on
InitializeComponent(), but in your specific case, the constructor for a subclass of a standard control, you won't be calling it.
A control subclass in WPF will apply styles after the constructor. It must! Before the constructor executes, it doesn't exist. "After the constructor" is basically all there is, aside from the guts of the constructor itself. You can override
OnApplyTemplate() to hook into things immediately after the template is applied. But that's much too late to be initializing much (with the sole exception of private fields which will refer to template children -- they can't be initialized until the template is applied, because the controls don't exist). That's when you'd, for example, hook up event handlers to
"PART_FooBar" or whatever in the template, assuming
"PART_FooBar" exists.
So if you initialize stuff in the constructor(s), it gets applied to every instance, and if it's a WPF control class (or any
FrameworkElement subclass), consumers of your class can override it by applying a
Style or a template later on. That's good WPF practice: You want to allow people maximum scope to customize your controls in ways that won't blow stuff up.
By the way, in C# you can chain constructors:
// Assume that Whoop declares Bar and initializes it in its constructor class Foo : Whoop { public Foo(int bar = 0) : base(bar) {} public Foo(int bar, string baz) : this(bar) { Baz = baz; } // C#6 public String Baz { get; set; } = "yabba dabba do"; }
This is handy if you've got a bunch of different sets of constructor parameters, plus initialization that's common to all of them but which can't be put in field/property initializers. | https://codedump.io/share/K5AQC6pu6hPk/1/where-initialize-a-specialized-wpf-control | CC-MAIN-2018-09 | refinedweb | 610 | 59.03 |
BTW, the following change is also required to make the cursor position correctly on ttys (the #ifdef will be replaced with a run-time test in the final code): diff -u src/dispnew.c.\~3\~ src/dispnew.c --- src/dispnew.c.~3~ Wed Nov 1 08:01:19 2000 +++ src/dispnew.c Thu Nov 16 00:54:23 2000 @@ -4795,13 +4795,18 @@ /* Now just clean up termcap drivers and set cursor, etc. */ if (!pause) { - if ((cursor_in_echo_area + if ( +#if 0 /* Not true if we're using `setup_echo_area_for_active_minibuffer'. */ + (cursor_in_echo_area /* If we are showing a message instead of the mini-buffer, show the cursor for the message instead of for the (now hidden) mini-buffer contents. */ || (EQ (minibuf_window, selected_window) && EQ (minibuf_window, echo_area_window) && !NILP (echo_area_buffer[0]))) +#else + cursor_in_echo_area +#endif /* These cases apply only to the frame that contains the active mini-buffer window. */ && FRAME_HAS_MINIBUF_P (f) | http://lists.gnu.org/archive/html/emacs-devel/2000-11/msg00210.html | CC-MAIN-2014-10 | refinedweb | 142 | 62.58 |
The HP Printer Display Hack (with financial goodness)
In early January, we were tasked with creating a unique, interactive experience for the SXSW Interactive launch party with Frog Design. We bounced around many ideas, and finally settled on a project that Rick suggested during our first meeting: boxing robots controlled via Kinect.
The theme of the opening party was Retro Gaming, so we figured creating a life size version of a classic tabletop boxing game mashed up with a "Real Steel"-inspired Kinect experience would be a perfect fit. Most importantly, since this was going to be the first big project of the new Coding4Fun team, we wanted to push ourselves to create an experience that needed each of us to bring our unique blend of hardware, software, and interaction magic to the table under an aggressively tight deadline.
The BoxingBots had to be fit a few requirements:
Creating a robot that could be beaten up for 4 hours and still work proved to be an interesting problem. After doing some research on different configurations and styles, it was decided we should leverage a prior project to get a jump start to meet the deadline. We repurposed sections of our Kinect drivable lounge chair, Jellybean! This was an advantage because it contained many known items, such as the motors, motor controllers, and chassis material. Additionally, it was strong and fast, it was modular, and the code to drive it was already written.
Jellybean would only get us part of the way there, however. We also had to do some retrofitting to get it to work for our new project. The footprint of the base needed to shrink from 32x50 inches to 32x35 inches, while still allowing space to contain all of the original batteries, wheels, motors, motor controllers, switches, voltage adapters. We also had to change how the motors were mounted with this new layout, as well as provide for a way to easily "hot swap" the batteries out during the event. Finally, we had to mount an upper body section that looked somewhat human, complete with a head and punching arms.
Experimenting with possible layouts
The upper body had its own challenges, as it had to support a ton of equipment, including:
Brian and Rick put together one of the upper frames
We had to solve the problem of getting each robot to punch hard enough to register a hit on the opponent bot while not breaking the opponent bot (or itself). Bots also had to withstand a bit of side load in case the arms got tangled or took a side blow. Pneumatic actuators provided us with a lot of flexibility over hydraulics or an electrical solution since they are fast, come in tons of variations, won't break when met with resistance, and can fine tuned with a few onsite adjustments.
To provide power to the actuators, the robots had two 2.5 gallon tanks pressurized to 150psi, with the actuators punching at ~70psi. We could punch for about five 90-second rounds before needing to re-pressurize the tanks. Pressurizing the onboard tanks was taken care of by a pair of off-the-shelf DeWalt air compressors.
It wouldn’t be a polished game if the head didn’t pop up on the losing bot, so we added another pneumatic actuator to raise and lower the head, and some extra red and blue LEDs. This pneumatic is housed in the chest of the robot and is triggered only when the game has ended.
To create the head, we first prototyped a concept with cardboard and duct tape. A rotated welding mask just happened to provide the shape we were going for on the crown, and we crafted each custom jaw with a laser cutter. We considered using a mold and vacuum forming to create something a bit more custom, but had to scrap the idea due to time constraints.
Our initial implementation for detecting punches failed due to far too many false positives. We thought using IR distance sensors would be a good solution since we could detect a “close” punch and tell the other robot to retract the arm before real contact. The test looked promising, but in practice, when the opposite sensors were close, we saw a lot of noise in the data. The backup and currently implemented solution was to install simple push switches in the chest and detect when those are clicked by the chest plate pressing against them.
Different items required different voltages. The motors and pneumatic valves required 24V, the LEDs required 12V and the USB hub required 5V. We used Castle Pro BEC converters to step down the voltages. These devices are typically used in RC airplanes and helicopters.
So how does someone ship two 700lb robots from Seattle to Austin? We did it in 8 crates.
. The key thing to note is that the tops and bottoms of each robot were separated. Any wire that connected the two parts had to be able to be disconnected in some form. This affected the serial cords and the power cords (5V, 12V, 24V).
The software and architecture went through a variety of iterations during development. The final architecture used 3 laptops, 2 desktops, an access point, and a router. It's important to note that the laptops of Robot 1 and Robot 2 are physically mounted on the backs of each Robot body, communicating through WiFi to the Admin console. The entire setup looks like the following diagram:
The heart of the infrastructure is the Admin Console. Originally, this was also intended to be a scoreboard to show audience members the current stats of the match, but as we got further into the project, we realized this wouldn't be necessary. The robots are where the action is, and people's eyes focus there. Additionally, the robots themselves display their current health status via LEDs, so duplicating this information isn't useful. However, the admin side of this app remains.
The admin console is the master controller for the game state and utilizes socket communication between it, the robots, and the user consoles. A generic socket handler was written to span each computer in the setup. The SocketListener object allows for incoming connections to be received, while the SocketClient allows clients to connect to those SocketListeners. These are generic objects, which must specify objects of type GamePacket to send and receive:
public class SocketListener<TSend, TReceive> where TSend : GamePacket where TReceive : GamePacket, new()
GamePacket is a base class from which specific packets inherit:
public abstract class GamePacket { public byte[] ToByteArray() { MemoryStream ms = new MemoryStream(); BinaryWriter bw = new BinaryWriter(ms); try { WritePacket(bw); } catch(IOException ex) { Debug.WriteLine("Error writing packet: " + ex); } return ms.ToArray(); } public void FromBinaryReader(BinaryReader br) { try { ReadPacket(br); } catch(IOException ex) { Debug.WriteLine("Error reading packet: " + ex); } } public abstract void WritePacket(BinaryWriter bw); public abstract void ReadPacket(BinaryReader br); }
For example, in communication between the robots and the admin console, GameStatePacket and MovementDescriptorPacket are sent and received. Each GamePacket must implement its own ReadPacket and WritePacket methods to serialize itself for sending across the socket.
Packets are sent between machines every "frame". We need the absolute latest game state, robot movement, etc. at all times to ensure the game is functional and responsive.
As is quite obvious, absolutely no effort was put into making the console "pretty". This is never seen by the end users and just needs to be functional. Once the robot software and the user consoles are started, the admin console initiates connections to each of those four machines. Each machine runs the SocketListener side of the socket code, while the Admin console creates four SocketClient objects to connect to each those. Once connected, the admin has control of the game and can start, stop, pause, and reset a match by sending the appropriate packets to everyone that is connected.
The robot UI is also never intended to be seen by an end user, and therefore contains only diagnostic information.
Each robot has a wireless Xbox 360 controller connected to it so it can be manually controlled. The UI above reflects the positions of the controller sticks and buttons. During a match, it's possible for a bot to get outside of our "safe zone". One bot might be pushing the other, or the user may be moving the bot toward the edge of the ring. To counter this, the player's coach can either temporarily move the bot, turning off Kinect input, or force the game into "referee mode" which pauses the entire match and turns off Kinect control on both sides. In either case, the robots can be driven with the controllers and reset to safe positions. Once both coaches signal that the robots are reset, the admin can then resume the match.
Phidget hardware controlled our LEDs, relays, and sensors. Getting data out of a Phidget along with actions, such as opening and closing a relay, is shockingly easy as they have pretty straightforward C# APIs and samples, which is why they typically are our go-to product for projects like this.
Below are some code snippets for the LEDs, relays, and sensor.
LEDs – from LedController.cs
This is the code that actually updates the health LEDs in the robot's chest. The LEDs were put on the board in a certain order to allow this style of iteration. We had a small issue of running out of one color of LEDs so we used some super bright ones and had to reduce the power levels to the non-super bright LEDs to prevent possible damage:
private void UpdateLedsNonSuperBright(int amount, int offset, int brightness) { for (var i = offset; i < amount + offset; i++) { _phidgetLed.leds[i] = brightness / 2; } } private void UpdateLedsSuperBright(int amount, int offset, int brightness) { for (var i = offset; i < amount + offset; i++) { _phidgetLed.leds[i] = brightness; } }
Sensor data – from SensorController.cs
This code snippet shows how we obtain the digital and analog inputs from the Phidget 8/8/8 interface board:
public SensorController(InterfaceKit phidgetInterfaceKit) : base(phidgetInterfaceKit) { PhidgetInterfaceKit.ratiometric = true; } public int PollAnalogInput(int index) { return PhidgetInterfaceKit.sensors[index].Value; } public bool PollDigitalInput(int index) { return PhidgetInterfaceKit.inputs[index]; }
Relays – from RelayController.cs
Electrical relays fire our pneumatic valves. These control the head popping and the arms punching. For our application, we wanted the ability to reset the relay automatically. When the relay is opened, an event is triggered and we create an actively polled thread to validate whether we should close the relay. The reason why we actively poll is someone could be quickly toggling the relay. We wouldn't want to close it on accident. The polling and logic does result in a possible delay or early trigger for closing the relay, but for the BoxingBots the difference of 10ms for a relay closing is acceptable:
public void Open(int index, int autoCloseDelay) { UseRelay(index, true, autoCloseDelay); } public void Close(int index) { UseRelay(index, false, 0); } private void UseRelay(int index, bool openRelay, int autoCloseDelay) { AlterTimeDelay(index, autoCloseDelay); PhidgetInterfaceKit.outputs[index] = openRelay; } void _relayController_OutputChange(object sender, OutputChangeEventArgs e) { // closed if (!e.Value) return; ThreadPool.QueueUserWorkItem(state => { if (_timeDelays.ContainsKey(e.Index)) { while (_timeDelays[e.Index] > 0) { Thread.Sleep(ThreadTick); _timeDelays[e.Index] -= ThreadTick; } } Close(e.Index); }); } public int GetTimeDelay(int index) { if (!_timeDelays.ContainsKey(index)) return 0; return _timeDelays[index]; } public void AlterTimeDelay(int index, int autoCloseDelay) { _timeDelays[index] = autoCloseDelay; }
Since the theme of the party was Retro Gaming, we wanted to go for an early 80's Sci-fi style interface, complete with starscape background and solar flares! We wanted to create actual interactive elements, though, to maintain the green phosphor look of early monochrome monitors. Unlike traditional video games, however, the screens are designed not as the primary focus of attention, but rather to help calibrate the player before the round and provide secondary display data during the match. The player should primarily stay focused on the boxer during the match, so the interface is designed to sit under the players view line and serve as more of a dashboard during each match.
However, during calibration before each round, it is important to have the player understand how their core body will be used to drive the Robot base during each round. To do this, we needed to track an average of the joints that make up each fighter's body core. We handled the process by creating a list of core joints and a variable that normalizes the metric distances returned from the Kinect sensor into a human-acceptable range of motion:
private static List<JointType> coreJoints = newList<JointType>( newJointType[] { JointType.AnkleLeft, JointType.AnkleRight, JointType.ShoulderCenter, JointType.HipCenter }); private const double RangeNormalizer = .22; private const double NoiseClip = .05;
And then during each skeleton calculation called by the game loop, we average the core positions to determine the averages of the players as they relate to their playable ring boundary:
public staticMovementDescriptorPacket AnalyzeSkeleton(Skeleton skeleton) { // ... CoreAverageDelta.X = 0.0; CoreAverageDelta.Z = 0.0; foreach (JointType jt in CoreJoints) { CoreAverageDelta.X += skeleton.Joints[jt].Position.X - RingCenter.X; CoreAverageDelta.Z += skeleton.Joints[jt].Position.Z - RingCenter.Z; } CoreAverageDelta.X /= CoreJoints.Count * RangeNormalizer; CoreAverageDelta.Z /= CoreJoints.Count * RangeNormalizer; // ... if (CoreAverageDelta.Z > NoiseClip || CoreAverageDelta.Z < -NoiseClip) { packet.Move = -CoreAverageDelta.Z; } if (CoreAverageDelta.X > NoiseClip || CoreAverageDelta.X < -NoiseClip) { packet.Strafe = CoreAverageDelta.X; } }
In this way, we filter out insignificant data noise and allow the player's average core body to serve as a joystick for driving the robot around. Allowing them to lean at any angle, the move and strafe values are accordingly set to allow for a full 360 degrees of movement freedom, while at the same time not allowing any one joint to unevenly influence their direction of motion.
Another snippet of code that may be of interest is the WPF3D rendering we used to visualize the skeleton. Since the Kinect returns joint data based off of a center point, it is relatively easy to wire up a working 3D model in WPF3D off of the skeleton data, and we do this in the ringAvatar.xaml control.
In the XAML, we simply need a basic Viewport3D with camera, lights, and an empty ModelVisual3D container to hold or squares. The empty container looks like this:
<ModelVisual3D x: <ModelVisual3D.Transform> <Transform3DGroup> <RotateTransform3D x: <RotateTransform3D.Rotation> <AxisAngleRotation3D x: </RotateTransform3D.Rotation> </RotateTransform3D> </Transform3DGroup> </ModelVisual3D.Transform> </ModelVisual3D>
In the code, we created a generic WPF3DModel that inherits from UIElement3D and is used to store the basic positioning properties of each square. In the constructor of the object, though, we can pass a reference key to a XAML file that defines the 3D mesh to use:
public WPF3DModel(string resourceKey) { this.Visual3DModel = Application.Current.Resources[resourceKey] as Model3DGroup; }
This is a handy trick when you need to do a fast WPF3D demo and require a certain level of flexibility. To create a 3D cube for each joint when ringAvatar is initialized, we simply do this:
private readonly List<WPF3DModel> _models = new List<WPF3DModel>(); private void CreateViewportModels() { for (int i = 0; i < 20; i++) { WPF3DModel model = new WPF3DModel("mesh_cube"); viewportModelsContainer2.Children.Add(model); // ... _models.Add(model); } // ... }
And then each time we need to redraw the skeleton, we loop through the skeleton data and set the cube position like so:
if (SkeletonProcessor.RawSkeleton.TrackingState == SkeletonTrackingState.Tracked) { int i = 0; foreach (Joint joint in SkeletonProcessor.RawSkeleton.Joints) { if (joint.TrackingState == JointTrackingState.Tracked) { _models[i].Translate( joint.Position.X * 8.0, joint.Position.Y * 10.0, joint.Position.Z * -10.0); i++; } } // ... }
There are a few other areas in the User Console that you may want to further dig into, including the weighting for handling a punch as well dynamically generating arcs based on the position of the fist to the shoulder. However, for this experience, the User Console serves as a secondary display to support the playing experience and gives both the player and audience a visual anchor for the game.
The character in a first person shooter (FPS) video game has an X position, a Y position, and a rotation vector. On an Xbox controller, the left stick controls the X,Y position. Y is the throttle (forward and backward), X is the strafing amount (left and right), and the right thumb stick moves the camera to change what you're looking at (rotation). When all three are combined, the character can do things such as run around someone while looking at them.
In the prior project, we had existing code that worked for controlling all 4 motors at the same time, working much like a tank does, so we only had throttle (forward and back) and strafing (left and right). Accordingly, we can move the motors in all directions, but there are still scenarios in which the wheels fight one another and the base won't move. By moving to a FPS style, we eliminate the ability to move the wheels in an non-productive way and actually make it a lot easier to drive.
Note that Clint had some wiring "quirks" with polarity and which motor was left VS right, he had to correct in these quirks in software
:
public Speed CalculateSpeed(double throttleVector, double strafeVector, double rotationAngle) { rotationAngle = VerifyLegalValues(rotationAngle); rotationAngle = AdjustValueForDeadzone(rotationAngle, AllowedRotationAngle, _negatedAllowedRotationAngle); // flipped wiring, easy fix is here throttleVector *= -1; rotationAngle *= -1; // miss wired, had to flip throttle and straff for calc return CalculateSpeed(strafeVector + rotationAngle, throttleVector, strafeVector - rotationAngle, throttleVector); } protected Speed CalculateSpeed(double leftSideThrottle, double leftSideVectorMultiplier, double rightSideThrottle, double rightSideVectorMultiplier) { /* code from Jellybean */ }
The Boxing Bots project was one of the biggest things we have built to date. It was also one of our most successful projects. Though it was a rainy, cold day and night in Austin when the bots were revealed, and we had to move locations several times during setup to ensure the bots and computers wouldn't be fried by the rain, they ran flawlessly for the entire event and contestants seemed to have a lot of fun driving them.
Boxing Bots, violent robots that punch each other, also fulfills a hidden requirement, that the result should appeal to men only. Then everyone can sit around and wonder why there aren't more women getting into programming.
What a Debbie Downer you are. I'm sure plenty of girls got to try this and even, heaven forbid, enjoyed such a violent, reprehensible (and don't forget sexist) activity, and they aren't ashamed of it either. You seem to be incredibly out of touch with the reality that among today's younger generations girls can very well find enjoyment in "what boys like"(note the quotes) and vise-versa, AND be proud of it. Seeing as you so readily subscribe to traditional gender roles so I can only assume so.
You don't see the beauty here, only that this is violent and misogynistic? Really?
This reminds me of Real Steel, a film starring Hugh Jackman. Only with less resistance to attack, lol.
The idea, the hand-making, the codes, you guys are amazing!
Is there any video? Anybody can photoshop some pictures and paste code snippets. Video doesn't lie!
(Well done!)
@chad:
footage of the SXSW event:
hilarious informecial:
Best,
Golnaz
they had this technology in the movie "The Toy" with richard prior from 30 years ago (probably not, but the resemblance is strong)
Great one !!!
Congrats ....
Great work guys!!!!
Really nice project | https://channel9.msdn.com/coding4fun/articles/Boxing-Bots-An-Overview | CC-MAIN-2021-21 | refinedweb | 3,233 | 51.99 |
Given an octal number as input. Now, write a Kotlin program to convert Octal number to Decimal Number.
Input: 20 Output: 16
Input: 324 Output: 212
Input: 1000 Output: 512
1. Program to Convert Octal to Decimal
Pseudo Algorithm
- Initialise a variable decimalNum, that stores equivalent decimal value, with 0
- Extract each digit from the octal number.
- While extracting, multiply the extracted digit with proper base (power of 8).
- For example, if octal number is 110, decimalNum = 1 * (8^2) + 1 * (8^1) + 0 * (8^0) = 72
Sourcecode –
import java.util.* fun main() { val read = Scanner(System.`in`) println("Enter n:") val octalN = read.nextLong() var decimalNum: Long = 0 if(checkOctalNumber(octalN)) { var n = octalN var base = 1 while (n != 0.toLong()) { val lastDigit = n % 10 n /= 10 decimalNum += lastDigit * base base *= 8 } println("Equivalent Decimal : $decimalNum") } else { println("$octalN is not a octal number") } } private fun checkOctalNumber(octalNum: Long): Boolean { var isOctal = true var n = octalNum while (n != 0.toLong()) { val lastDigit = n % 10 if(!((lastDigit >= 0.toLong()) || (lastDigit <= 7.toLong()))) { isOctal = false break } n /= 10 } return isOctal }
When you run the program, output will be –
Enter n: 2432 Equivalent Decimal : 1306
Explanation:
Here, we have created an object of Scanner. Scanner takes an argument which says where to take input from.
System.`in` means take input from standard input – Keyboard.
read.nextLong() means read anything entered by user before space or line break from standard input – Keyboard.
We stored value entered by user in variable octalN.
At first, we checked if entered value is octal or not.
If NO, we print a message ” number is not octal number”.
- We use checkOctalNumber function to check if number is octal or not. If any digit in number is neither 0 nor 1, then, it is not octal number.
If Yes, we start converting those number into decimal –
- variable base contains power of 8 to the position at which current last digit we have in lastDigit
- Decimal value is sum of lastDigit * (power of 8 to the position at which current lastDigit is at)
- For example,
octalN = 110
decimalN = 1 * (8 ^ 2) + 1 * (8^1) + 0 * (8 ^ 0) = 72
Thus, we went through Kotlin Program to Convert Octal Number to Decimal Number. | https://tutorialwing.com/kotlin-program-to-convert-octal-number-to-decimal/ | CC-MAIN-2022-21 | refinedweb | 371 | 55.34 |
Created on 2018-03-22 09:21 by Jonathan Huot, last changed 2018-03-24 04:25 by ncoghlan. This issue is now closed.
Executing python modules with -m can lead to weird sys.argv parsing.
"Argument parsing" section at mention :
- When -m module is used, sys.argv[0] is set to the full name of the located module.
The word "located" is used, but it doesn't mention anything when the module is not *yet* "located".
For instance, let's see what is the sys.argv for each python files:
$ cat mainmodule/__init__.py
import sys; print("{}: {}".format(sys.argv, __file__))
$ cat mainmodule/submodule/__init__.py
import sys; print("{}: {}".format(sys.argv, __file__))
$ cat mainmodule/submodule/foobar.py
import sys; print("{}: {}".format(sys.argv, __file__))
Then we call "foobar" with -m:
$ python -m mainmodule.submodule.foobar -o -b
['-m', '-o', 'b']: (..)/mainmodule/__init__.py
['-m', '-o', 'b']: (..)/mainmodule/submodule/__init__.py
['(..)/mainmodule/submodule/foobar.py', '-o', 'b']: (..)/mainmodule/submodule/foobar.py
$
We notice that only "-m" is in sys.argv before we found "foobar". This can lead to a lot of troubles when we have meaningful processing in __init__.py which rely on sys.argv to initialize stuff.
IMHO, it either should be the sys.argv intact ['-m', 'mainmodule.submodule.foobar', '-o', '-b'] or empty ['', '-o', '-b'] or only the latest ['-o', '-b'], but it should not be ['-m', '-o', '-b'] which is very confusing.
Two of your 3 suggested alternatives could lead to bugs. To use your example:
python -m mainmodule.submodule.foobar -o -b
is a convenient alternative and abbreviation for
python .../somedir/mainmodule/submodule/foobar.py -o -b
The two invocations should give equivalent results and to the extent possible the same result.
[What might be different is the form of argv[0]. In the first case, argv[0] will be the "preferred" form of the path to the python file while in the second, it will be whatever is given. On Windows, the difference might look like 'F:\\Python\\a\\tem2.py' versus 'f:/python/a/tem2.py']
Unless __init__.py does some evil monkeypatching, it cannot affect the main module unless imported directly or indirectly. So its behavior should be the same whether imported before or after execution of the main module. This means that argv must be the same either way (except for argv[0]). So argv[0:2] must be condensed to one arg before executing __init__. I don't see that '' is an improvement over '-m'.
Command line arguments are intended for the invoked command. An __init__.py file is never the command unless invoked by its full path: "python somepath/__init__.py". In such a case, sys.argv access should be within a "__name__ == '__main__':" clause or a function called therein.
This is deliberate, and is covered in the documentation at where it says 'If this option is given, the first element of sys.argv will be the full path to the module file (while the module file is being located, the first element will be set to "-m").'
The part in parentheses is the bit that's applicable here.
We've not going to change that, as the interpreter startup relies on checking sys.argv[0] for "-m" and "-c" in order to work out how it's expected to handle sys.path initialization. | https://bugs.python.org/issue33119 | CC-MAIN-2019-26 | refinedweb | 555 | 69.79 |
In C++, the programmer abstracts real world objects using classes as concrete types. Sometimes it is required to convert one concrete type to another concrete type or primitive type implicitly. Conversion operators play smart role in such situations.
For example consider the following class
#include <iostream> #include <cmath> using namespace std; class Complex { private: double real; double imag; public: // Default constructor Complex(double r = 0.0, double i = 0.0) : real(r), imag(i) {} // magnitude : usual function style double mag() { return getMag(); } // magnitude : conversion operator operator double () { return getMag(); } private: // class helper to get magnitude double getMag() { return sqrt(real * real + imag * imag); } }; int main() { // a Complex object Complex com(3.0, 4.0); // print magnitude cout << com.mag() << endl; // same can be done like this cout << com << endl; }
We are printing the magnitude of Complex object in two different ways.
Note that usage of such smart (over smart ?) techniques are discouraged. The compiler will have more control in calling an appropriate function based on type, rather than what the programmer expects. It will be good practice to use other techniques like class/object specific member function (or making use of C++ Variant class) to perform such conversions. At some places, for example in making compatible calls with existing C library, these are unavoidable.
— Venki. Please write comments if you find anything incorrect, or you want to share more information about the topic discussed above. | http://www.geeksforgeeks.org/advanced-c-conversion-operators/ | CC-MAIN-2016-18 | refinedweb | 235 | 55.24 |
First thing, rcov
compare controller size, model size, look for largest files
then wc -l on app/models
Biggest files are most important and/or biggest messes
Observer functionality with any kind of external resource go in sweepers often
observe ActiveRecord::Base
uber-sweeper
weird
not necessarily bad, certainly unusual
before_create - initializes magic number for global config - needs description, and probably relocation - no intention revealed - should be described for what it is
assigning id directly -> "major leaky abstraction"
Chad dissed some of his own code from Rails Recipes - live and learn
set associations through the association code - the value of semantic code
use Symbol#to_proc
"never make an action have more than five lines."
"whenever it's more than five lines, it's bad."
Eric Evans - Domain-Driven Design - huge recommendation
(And Kent Beck Smalltalk Best Practice Patterns)
def fullname, def reversed_fullname, no, set a :reverse keyword or even a reverse method on the attribute itself (not even the class)
Don't you think rules like "Never make X have more that N lines" are just arbritary and silly? Why not six or seven or even four?
Maybe they should at least pick a number that allows the incredibly common create action with XML support to exist without being considered "bad".
It's notes, dude. It's not meant as an endorsement. It's just so I remember what was said. And yes, of course a hard and fast rule is silly. These aren't hard and fast rules. They're guidelines. You're taking this way too seriously.
def fullname, def reversed_fullname, no, set a :reverse keyword or even a reverse method on the attribute itself (not even the class)
Giles, can you expand on this any? I presume for the sake of the example, it does something sensible like like person.fullname is 'Jamie Macey' and person.reversed_fullname is 'Macey Jamie', how would a :reverse keyword help with that?
This comment has been removed by the author. | http://gilesbowkett.blogspot.com/2007/08/rails-edge-notes-chad-fowler-marcel.html?showComment=1188159540000 | CC-MAIN-2017-34 | refinedweb | 329 | 60.14 |
Today we have a guest article by Ravikanth who works at Dell, Inc., as a lead engineer in the SharePoint Solutions Team. He loves automation and is a Windows PowerShell fanatic. He writes regularly on his blog about topics related to Windows PowerShell, SharePoint, and Microsoft server virtualization. He is also moderator on the Official Hey, Scripting Guy! Forum and a regular speaker at BangaloreITPro User Group meetings.
In today’s post, we will see an example of how to use Windows Forms TreeView control. I will explain this by showing an example of a Windows PowerShell help tree.
First, we design a simple GUI form using SAPIEN’s PrimalForms Community Edition. I placed the TreeView control, a rich text box, a link label, and a button. This is seen in the image following this paragraph. The intention of this form is to load the Windows PowerShell help for all Windows PowerShell core modules in the form of a tree, and when you select a cmdlet within the tree, help text for that cmdlet will be shown in the text box along with a link to a TechNet article at the bottom of the form.
After the design is complete, we export the form to a Windows PowerShell script using Export PowerShell option. This generates the necessary code to create the GUI form in Windows PowerShell. Now, we need to edit this script to add the custom code we need to create our process manager:
Before we go further in to the details of this script, let us quickly look at how the TreeView control is created. The following snippet shows the code generated by PrimalForms to add the TreeView control to the main form:
$treeView1 = New-Object System.Windows.Forms.TreeView $System_Drawing_Size = New-Object System.Drawing.Size $System_Drawing_Size.Width = 224 $System_Drawing_Size.Height = 563 $treeView1.Size = $System_Drawing_Size $treeView1.Name = "treeView1" $System_Drawing_Point = New-Object System.Drawing.Point $System_Drawing_Point.X = 13 $System_Drawing_Point.Y = 37 $treeView1.Location = $System_Drawing_Point $treeView1.DataBindings.DefaultDataSourceUpdateMode = 0 $treeView1.TabIndex = 0 $form1.Controls.Add($treeView1)
We use the System.Windows.Forms.TreeView namespace to create an instance as $treeView1. After this instance is created, we assign values to properties such as size, name, and various other properties. Now, we need to build the tree and show it as the form loads.
To build the Windows PowerShell help tree, we create a function and add it to the Form Load event—$form1.Add_Load().
The following section of code shows the function we created to build the Windows PowerShell help tree:
function Get-HelpTree { if ($script:cmdletNodes) { $treeview1.Nodes.remove($script:cmdletNodes) $form1.Refresh() } $script:cmdletNodes = New-Object System.Windows.Forms.TreeNode $script:cmdletNodes.text = "PowerShell Help" $script:cmdletNodes.Name = "PowerShell Help" $script:cmdletNodes.Tag = "root" $treeView1.Nodes.Add($script:cmdletNodes) | Out-Null #Generate Module nodes $modules = @("Microsoft.PowerShell.Core","Microsoft.PowerShell.Diagnostics","Microsoft.PowerShell.Host","Microsoft.PowerShell.Management","Microsoft.PowerShell.Security","Microsoft.PowerShell.Utility") $modules | % { $parentNode = Add-Node $script:cmdletNodes $_ "Module" $moduleCmdlets = Get-Command -Module $_ $moduleCmdlets | % { $childNode = Add-Node $parentNode $_.Name "Cmdlet" } } = "" } }) $script:cmdletNodes.Expand() }
Let us now explore how the above function is building our Windows PowerShell tree. First, we add the root of the tree and call it “PowerShell Help”. Underneath this, we build help for all the Windows PowerShell core modules as different nodes. The first part is achieved by:
$script:cmdletNodes = New-Object System.Windows.Forms.TreeNode
$script:cmdletNodes.text = "PowerShell Help"
$script:cmdletNodes.Name = "PowerShell Help"
$script:cmdletNodes.Tag = "root"
Notice that I am using the Tag property to identify the node. In this case, the Tag property for root node is set to root for obvious reasons. You will later see why this is so important.
Now, we need to generate the rest of the tree and child nodes. For that we need to know which Windows PowerShell core modules are available, and the easiest way to achieve that is to store all the module names in an array as shown here:
$modules = @("Microsoft.PowerShell.Core","Microsoft.PowerShell.Diagnostics","Microsoft.PowerShell.Host","Microsoft.PowerShell.Management","Microsoft.PowerShell.Security","Microsoft.PowerShell.Utility")
The $modules variable has a list of all Windows PowerShell core modules. We can now use this to generate the child nodes. To do that, we use the Get-Command cmdlet to get a list of all cmdlets provided by each of the above Windows PowerShell modules, and then add the names of those cmdlets to the TreeView as child nodes:
$modules | % {
$parentNode = Add-Node $script:cmdletNodes $_ "Module"
$moduleCmdlets = Get-Command -Module $_
$moduleCmdlets | % {
$childNode = Add-Node $parentNode $_.Name "Cmdlet"
}
To simplify the script and enable re-use, we added another function to our script and called it Add-Node. This function will be called recursively to generate the child nodes as required:
function Add-Node {
param (
$selectedNode,
$name,
$tag
)
$newNode = new-object System.Windows.Forms.TreeNode
$newNode.Name = $name
$newNode.Text = $name
$newNode.Tag = $tag
$selectedNode.Nodes.Add($newNode) | Out-Null
return $newNode
We pass the node name and a tag associated with the node to the Add-Node function. Notice, again, that we are using tags to differentiate several levels of our Windows PowerShell tree. In this case, all the module names use Module as the tag name and cmdlets underneath a module use Cmdlet.
After we have the entire tree built, we need to define what should happen when someone selects a node or cmdlet within our tree. In this example, as mentioned earlier, when a cmdlet gets selected in the tree, we display complete help text for the cmdlet in the text box. This is achieved by adding the necessary code to the After_Select() event of every node:
= ""
})
In the above code, observe how we are checking if the selected node is a cmdlet or not. This is the reason why we need to use the Tag property at every level of the tree we generated. It helps us identify what has been selected and take an action based on that. In the case when the selected node is a cmdlet, we use the node name along with the Get-Help cmdlet to generate the help and display the same in the form’s text box.
Also, we refresh the link label at the bottom of the form to point to the online version of help for the selected cmdlet. This link label can be clicked to open an Internet browser and reach the online help version for the selected cmdlet. This is achieved by adding the necessary code as shown here:
$linkLabel1.add_click($linkLabel1_OpenLink)
$linkLabel1_OpenLink=
{
[system.Diagnostics.Process]::start($linkLabel1.text)
In case of nodes other than Cmdlets, we just default the values to something as shown above. In the end, we enable a Close button by adding $form1.Close() to $button1.Add_Close() event.
You can find the complete code for this article in the Script Repository. You can also see a bit more advanced example of data grid control along with TreeView control in the Windows PowerShell remote file explorer script at
Well, scripters, that is all there is to using Windows Forms TreeView control.
There's a typo in the link at the end. Its PsRemoteExplorer the O and M in remote got switched round
Thanks Jamesone, we have corrected the spelling.
Is there a way to determine the expand(+) and collapse(-) treenode clicks i.e. which node on Add_AfterCollapse/Expand events?
.SelectedNode is not the answer...
Using TreeView Control for WinForms at design time, | http://blogs.technet.com/b/heyscriptingguy/archive/2010/06/15/hey-scripting-guy-how-can-i-use-the-windows-forms-treeview-control.aspx | CC-MAIN-2014-23 | refinedweb | 1,244 | 57.77 |
General Question How would one create a direct input / output stream to and from an instance of command prompt (cmd.exe) using some form of a C# application, whether it be WinForms, WPF, or UWP. Effectively using CMD as if it were just a compiled library What I Actually Need I need to be able ..
Category : windows
My ..
I am looking for using barcode capture event to executing a program (either powershell or simple vbscript) on Windows 10 machine. How can I achieve that? Do I need to create a barcode program embedded in it – if yes, how will it execute? Do I need to capture the scan event in windows machine ..
I have two .cs files and I want to use a button to change value of string channel but it calls error CS0236: (A Field Initializer Cannot Reference The Nonstatic Field, Method, Or Property), please help me, I am looking for a solution two days. Thank you. in Form1.cs I have using System; using System.Collections.Generic; ..
import tkinter canvas = tkinter.Canvas() canvas.pack() tkinter.mainloop() canvas.create_text(150, 100, text = "HELLO") Hello. Sorry for my English. Start in Python. Canvas is on screen, but NO text What is wrong? Thanks. Source: Windows..
After a Windows 10 Creator Update there is a registry entry: "SOFTWAREMicrosoftWindowsCurrentVersionPrecisionTouchPadAAPThreshold" with a default vaule of 2 The touchpad is deactivated for a short time after keystrokes. For applications wants to support special actions by press a keyboard key and move the cursor by touchpad it means: The interaction hangs sporadical and is not ..
Here’s the code that doesn’t work. The spsip.txt file has IP addresses in it for the script to loop through. The code does not seem to evaluate the IF command condition. No matter what I type, it always evaluates as false and never runs the ping command. Any ideas? @echo off for /F %%f in ..
I’m automating some Excel and Powerpoint functionality with Python and win32com. Sometimes the apps don’t start up because there are "recovery" files, so I want to blow away the recovery files before my automation script starts up. To do that, I need to know where these files are kept. I can’t find them. I’m using ..
If a Pc has two network adapters both with different ips and both are connected , and i want to ping a device on the network which ip will be used ? Source: Windows.. ..
Recent Comments | https://windowsquestions.com/category/windows/page/2/ | CC-MAIN-2021-21 | refinedweb | 412 | 65.62 |
Differences between current version and predecessor to the previous major change of smb.conf(5).
Other diffs: Previous Revision, Previous Author, or view the Annotated Edit History
@@ -434,9 +434,9 @@
the architecture of the remote machine. Only some are
recognized, and those may not be 100% reliable. It currently
-recognizes Samba, WfWg, Win95, WinNT and Win2k. Anything
+recognizes Samba, WfWg, Win95, WinNT and Win2k. Anything
else will be known as
__%I__
@@ -1904,9 +1904,9 @@
__browse list (G)__
This controls whether __smbd(8)__will serve a browse list
-to a client doing a __NetServerEnum__ call. Normally set
+to a client doing a __ NetServerEnum__ call. Normally set
to true. You should never need to change this.
Default: __browse list = yes__
@@ -2211,9 +2211,9 @@
hex representation, i.e. :AB.
CAP - Convert an incoming Shift-JIS character to the 3 byte
-hex representation used by the Columbia AppleTalk Program
+hex representation used by the Columbia AppleTalk Program
(CAP), i.e. :AB. This is used for compatibility between
Samba and CAP.
@@ -2537,9 +2537,9 @@
With the introduction of MS-RPC based printer support for
Windows NT/2000 clients in Samba 2.2, it is now possible to
-delete printer at run time by issuing the DeletePrinter()
+delete printer at run time by issuing the DeletePrinter()
RPC call.
For a Samba host this means that the printer must be
@@ -2709,12 +2709,12 @@
If this option is set to true, then Samba will attempt to
recursively delete any files and directories within the
vetoed directory. This can be useful for integration with
-file serving systems such as NetAtalk which create
+file serving systems such as NetAtalk which create
meta-files within directories you might normally veto
DOS/Windows users from seeing (e.g.
-''.AppleDouble'')
+''. AppleDouble'')
Setting __delete veto files = yes__ allows these
directories to be transparently deleted when the parent
@@ -3182,9 +3182,9 @@
default
__enumports command'' to point to a program which should
generate a list of ports, one per line, to standard output.
This listing will then be used in response to the level 1
-and 2 EnumPorts() RPC.
+and 2 EnumPorts() RPC.
Default: __no enumports command__
@@ -3647,9 +3647,9 @@
Default: __no file are hidden__
Example: __hide files =
-/.*/DesktopFolderDB/TrashFor%m/resource.frk/__
+/.*/DesktopFolderDB/ TrashFor%m/resource.frk/__
The above example is based on files that the Macintosh SMB
client (DAVE) available from Thursby
@@ -4465,9 +4465,9 @@
This tells Samba to return the above string, with
substitutions made when a client requests the info,
-generally in a NetUserGetInfo request. Win9X clients
+generally in a NetUserGetInfo request. Win9X clients
truncate the info to \servershare when a user does __net
use /home__ but use the whole string when dealing with
profiles.
@@ -5377,9 +5377,9 @@
__message command (G)__
This specifies what command to run when the server receives
-a WinPopup style message.
+a WinPopup style message.
This would normally be a command that would deliver the
message somehow. How this is to be done is up to your
@@ -5435,9 +5435,9 @@
If you don't have a message command then the message won't
be delivered and Samba will tell the sender there was an
-error. Unfortunately WfWg totally ignores the error code and
+error. Unfortunately WfWg totally ignores the error code and
carries on regardless, saying that the message was
delivered.
@@ -5905,11 +5905,11 @@
-For example, a valid entry using the HP LaserJet 5 printer
-driver would appear as __HP LaserJet 5L = LASERJET.HP
-LaserJet 5L__.
+For example, a valid entry using the HP LaserJet 5 printer
+driver would appear as __HP LaserJet 5L = LASERJET.HP
+ LaserJet 5L__.
The need for the file is due to the printer driver namespace
problem described in the Samba Printing HOWTO. For more
@@ -6308,9 +6308,9 @@
__postscript (S)__
This parameter forces a printer to interpret the print files
-as PostScript. This is done by adding a %! to the start of
+as PostScript. This is done by adding a %! to the start of
print output.
This is most useful when you have lots of PCs that persist
@@ -6641,9 +6641,9 @@
See also ''printer driver file''.
-Example: __printer driver = HP LaserJet 4L__
+Example: __printer driver = HP LaserJet 4L__
__printer driver file (G)__
@@ -7188,10 +7188,10 @@
__security = share__ mainly because that was the only
option at one stage.
-There is a bug in WfWg that has relevance to this setting.
-When in user or server level security a WfWg client will
+There is a bug in WfWg that has relevance to this setting.
+When in user or server level security a WfWg client will
totally ignore the password you type in the
If your PCs use usernames that are the same as their
@@ -7257,9 +7257,9 @@
map''), is added as a potential username.
If the client did a previous __logon__ request (the
-SessionSetup SMB call) then the username sent in this SMB
+ SessionSetup SMB call) then the username sent in this SMB
will be added as a potential username.
The name of the service the client requested is added as a
@@ -7523,20 +7523,20 @@
Windows NT/2000 client in Samba 2.2, a
Under normal circumstances, the Windows NT/2000 client will
-open a handle on the printer server with OpenPrinterEx()
+always cause the OpenPrinterEx() on the server to fail. Thus
the APW icon will never be displayed. __Note :__This does
not prevent the same user from having administrative
privilege on an individual printer.
@@ -8431,19 +8431,19 @@
+attempt to issue the OpenPrinterEx() call requesting access
rights associated with the logged on user. If the user
possesses local administator rights but not root privilegde
-on the Samba host (often the case), the OpenPrinterEx() call
+on the Samba host (often the case), the OpenPrinterEx() call
will fail. The result is that the client will now display an
If this parameter is enabled for a printer, then any attempt
to open the printer with the PRINTER_ACCESS_ADMINISTER right
is mapped to PRINTER_ACCESS_USE instead. Thus allowing the
-OpenPrinterEx() call to succeed. __This parameter MUST not
+ OpenPrinterEx() call to succeed. __This parameter MUST not
be able enabled on a print share which has valid print
driver installed on the Samba server.__
@@ -8511,9 +8511,9 @@
The ''username'' line is needed only when the PC is
unable to supply its own username. This is the case for the
-COREPLUS protocol or where your users have different WfWg
+COREPLUS protocol or where your users have different WfWg
usernames to UNIX usernames. In both these cases you may
also be better using the \servershare%user syntax
instead.
@@ -8590,9 +8590,9 @@ .
+on your UNIX machine, such as AstrangeUser .
Default: __username level = 0__
@@ -8687,9 +8687,9 @@
Also note that no reverse mapping is done. The main effect
this has is with printing. Users who have been mapped may
-have trouble deleting print jobs as PrintManager under WfWg
+have trouble deleting print jobs as PrintManager under WfWg
will think they don't own the print job.
Default: __no username map__
@@ -8871,11 +8871,11 @@
; Veto any files containing the word Security,
; any ending in .tmp, and any directory containing the
; word root.
veto files = /*Security*/*.tmp/*root*/
-; Veto the Apple specific files that a NetAtalk server
+; Veto the Apple specific files that a NetAtalk server
; creates.
-veto files = /.AppleDouble/.bin/.AppleDesktop/Network Trash Folder/
+veto files = /. AppleDouble/.bin/. AppleDesktop/Network Trash Folder/
__veto oplock files (S)__
@@ -8893,13 +8893,13 @@
You might want to do this on files that you know will be
heavily contended for by clients. A good example of this is
-in the NetBench SMB benchmark program, which causes heavy
:
+the particular NetBench share :
Example: __veto oplock files = /*.SEM/__ | http://wiki.wlug.org.nz/smb.conf(5)?action=diff | CC-MAIN-2015-14 | refinedweb | 1,285 | 62.38 |
This is a guest post by Hartley Brody, whose book “The Ultimate Guide to Web Scraping” goes into much more detail on web scraping best practices. You can follow him on Twitter, it’ll make his day! Thanks for contributing Hartley!
Hacker News is a treasure trove of information on the hacker zeitgeist. There are all sorts of cool things you could do with the information once you pull it, but first you need to scrape a copy for yourself.
Hacker News is actually a bit tricky to scrape since the site’s markup isn’t all that semantic — meaning the HTML elements and attributes don’t do a great job of explaining the content they contain. Everything on the HN homepage is in two tables, and there aren’t that many
classes or
ids to help us hone in on the particular HTML elements that hold stories. Instead, we’ll have to rely more on patterns and counting on elements as we go.
Pull up the web inspector in Chrome and try zooming up and down the DOM tree. You’ll see that the markup is pretty basic. There’s an outer table that’s basically just used to keep things centered (85% of the screen width) and then an inner table that holds the stories.
If you look inside the inner table, you’ll see that the rows come in groups of three: the first row in each group contains the headlines and story links, the second row contains the metadata about each story — like who posted it and how many points it has — and the third row is empty and adds a bit of padding between stories. This should be enough information for us to get started, so let’s dive into the code.
I’m going to try and avoid the religious tech wars and just say that I’m using Python and my trusty standby libraries — requests and BeautifulSoup — although there are many other great options out there. Feel free to use your HTTP requests library and HTML parsing library of choice.
In its purest form, web scraping is two simple steps: 1. Make a request to a website that generates HTML, and 2. Pull the content you want out of the HTML that’s returned.
As the programmer, all you need to do is a bit of pattern recognition to find the URLs to request and the DOM elements to parse, and then you can let your libraries do the heavy lifting. Our code will just glue the two functions together to pull out just what we need.
import requests from BeautifulSoup import BeautifulSoup # make a single request to the homepage r = requests.get("") # convert the plaintext HTML markup into a DOM-like structure that we can search soup = BeautifulSoup(r.text) # parse through the outer and inner tables, then find the rows outer_table = soup.find("table") inner_table = outer_table.findAll("table")[1] rows = inner_table.findAll("tr") stories = [] # create an empty list for holding stories rows_per_story = 3 # helps us iterate over the table for row_num in range(0, len(rows)-rows_per_story, rows_per_story): # grab the 1st & 2nd rows and create an array of their cells story_pieces = rows[row_num].findAll("td") meta_pieces = rows[row_num + 1].findAll("td") # create our story dictionary story = { "current_position": story_pieces[0].string, "link": story_pieces[2].find("a")["href"], "title": story_pieces[2].find("a").string, } try: story["posted_by"] = meta_pieces[1].findAll("a")[0].string except IndexError: continue # this is a job posting, not a story stories.append(story) import json print json.dumps(stories, indent=1)
You’ll notice that inside the for loop, when we’re iterating over the rows in the table two at a time, we’re parsing out the individual pieces of content (link, title, etc) by skipping to a particular number in the list of
<td> elements returned. Generally, you want to avoid using magic numbers in your code, but without more semantic markup, this is what we’re left to work with.
This obviously makes the scraping code brittle, if the site is ever redesigned or the elements on the page move around at all, this code will no longer work as designed. But I’m guessing from the consistently minimalistic, retro look that HN isn’t getting a facelift any time soon. ;)
Extension Ideas
Running this script top-to-bottom will print out a list of all the current stories on HN. But if you really want to do something interesting, you’ll probably want to grab snapshots of the homepage and the newest page fairly regularly. Maybe even every minute.
There are a number of cool projects that have already built cool extensions and visualizations from (I presume) scraping data from Hacker News, such as:
-
-
-
It’d be a good idea to set this up using
crontab on your web server. Run
crontab -e to pull up a vim editor and edit your machine’s cron jobs, and add a line that looks like this:
* * * * * python /path/to/hn_scraper.py
Then save it and exit (<esc> + “:wq”) and you should be good to go. Obviously, printing things to the command line doesn’t do you much good from a cron job, so you’ll probably want to change the script to write each snapshot of stories into your database of choice for later retrieval.
Basic Web Scraping Etiquette
If you’re going to be scraping any site regularly, it’s important to be a good web scraping citizen so that your script doesn’t ruin the experience for the rest of us… aw who are we kidding, you’ll definitely get blocked before your script causes any noticeable site degradation for other users on Hacker News. But still, it’s good to keep these things in mind whenever you’re making frequent scrapes on the same site.
Your HTTP Requests library probably lets you set headers like
User Agent and
Accept-Encoding. You should set your user agent to something that identifies you and provides some contact information in case any site admins want to get in touch.
You also want to ensure you’re asking for the gzipped version of the site, so that you’re not hogging bandwidth with uncompressed page requests. Use the Accept-Encoding request header to tell the server your client can accept gzipped responses. The Python requests library automagically unzips those gzipped responses for you.
You might want to modify line 4 above to look more like this:
headers = { "User-Agent": "HN Scraper / Contact me: ", "Accept-Encoding": "gzip", }
r = requests.get("", headers=headers)
Note that if you were doing the scraping with some sort of headless browser or something like Selenium which actually downloads all the resources on the page and renders them, you’d also want to make sure you’re caching the stylesheet and images to avoid unnecessary extra requests.
If you liked this article, you might also like:
- Scraping Web Sites which Dynamically Load Data
- Ideas and Execution Magic Chart (includes a Hacker News Search Hack)
- Running Your Own Anonymous Rotating Proxies | http://blog.databigbang.com/tag/hartley-brody/ | CC-MAIN-2017-13 | refinedweb | 1,175 | 67.99 |
Infopath 2007 provides an out of the box contact selector control to select the user and validate against the Active Directory.
In this blog, We will see, how to get more out of this control. Performing some advanced functions using managed code.
For basic usage of this control, see this blog entry on infopath blog:
To start with basics, this control has predefined schema, since it simultaneously stores the display name, account id and account type.
user display name
DOMAIN/user account
user or group type
It is interesting to note that this control behaves like a repeating control, in a sense,
the user can select multiple users from the same control. Internally, the XML schema
shown above is repeated for multiple users.
2. Get the Display Names and Login Names for all users in contact Selector Control
To get the display names and login names, we just need to parse the generated XML
schema. We will store the names and login names as comma separated values.
Assuming that our control name is gpContactSelector, the code below extracts the
display names and login names.()+”;”;
}
}
The code above is pretty self explanatory. It parses the generated XML Schema
and stores the Names and login ids in two variables, names and accid as semicolon
separated values. Further operations can be then performed on these.
2. Sending Emails to All users selected in Contact Selector
To send emails, we obviously need email addresses of the contacts selected.
However, contact selector does not automatically grabs out the email addresses
of the contacts. To get the email addresses, we will first extract the login names
from the XML schema and then use the Microsoft.SharePoint.Utilities.SPUtility.GetFullNameandEmailfromLogin
class to get the email addresses.
The code below accepts the login names as semicolon separated values and builds
a string containing email addresses as semicolon separated values.
private string GetEmails(string final)
{
char[] a = { ‘;’ };
string[] loginIds = final.Split(a, StringSplitOptions.RemoveEmptyEntries);
string[] emailids = new string[loginIds.Length];
for (int i = 0; i <>
{
Microsoft.SharePoint.Administration.SPGlobalAdmin ga = new Microsoft.SharePoint.Administration.SPGlobalAdmin();
string dispname, email;
Microsoft.SharePoint.Utilities.SPUtility.GetFullNameandEmailfromLogin(ga, loginIds[i], out dispname, out email);
}
string finalstring = string.Empty;
for (int i = 0; i <>
finalstring = finalstring + emailids[i] + “;”;
return finalstring;
}
Now, we can use using System.Net.Mail namespace to send mails. This namespace
overrides the System.Web.Mail used in .NET 1.1. For those who are new to this
namespace, below is the sample code given to send mail.
private void SendMail()
{
MailMessage mail = new MailMessage();
mail.From = new MailAddress(“Admin@domain.com”, “Administrator”);
char[] a = { ‘;’ };
string[] emailIds = to.Split(a, StringSplitOptions.RemoveEmptyEntries);
for (int i = 0; i < style=””>
mail.To.Add(new MailAddress(emailIds[i]));
mail.Subject = “New Meeting Request”;
mail.Priority = MailPriority.Normal;
mail.IsBodyHtml = true;
mail.Body = GetBody();
new SmtpClient(“smtpserver”).Send(mail);
}
Hi ,
I am not able to get all the names selected seperated by ‘;’ in the contact selector control.
I am only getting the first name .present but the other name are not getting
Thanks,
Dev
Can you show the code which you have used ?
Hi ,
I used the same code as given here in the blog.
2. Get the Display Names and Login Names for all users in contact Selector Control
It seems like this code is incomplete .for (int i = 0; i <> ??
Could you please send me the exact code to be used to get all the selected names in the contact selector control.
Sorry for tat, this is the code:()+";";
}
}
Or refer here:
In the above code, for loop is incomplete.
for (int j = 0; j <> ?
for (int i = 0; i <> ?
Could you please send the exact code?
Sorry about that, this is due to the blog migration issues. Till I fix this, you can use the following url:
Hi,
Sorry to trouble you.
I checked the other link [].
In that one also, the code(for loop) is incomplete.
Thanks,
Dev
Sorry again. There seems to be mess up. Anyways, will correct that. The loop should be
for (int i = 0; i <emailids.Length;++i)
{
….
Hi,
I have a requirement like, in a QuickMail button click, I have to pass the email id’s of the selected person’s in the contact selector control to the ‘To’ part of an outlook message.So, first I am trying to get the names separated by ‘;’. This is the code that I am using.
XPathNavigator xNavMain = thisorNotifyOthers/my:Person/my:DisplayName", xNameSpace);
nodes[1] = xNavMain.Select("/my:myFields/my:gpContactSelectorNotifyOthers/my:Person/my:AccountId", xNameSpace);
nodes[2] = xNavMain.Select("/my:myFields/my:gpContactSelectorNotifyOthers/my:Person/my:AccountType", xNameSpace);
string names=string.Empty;
string accid=string.Empty;
for (int i = 0; i <= nodes.Length; ++i)
{
nodes[i].MoveNext();
if (nodes[2].Current.ToString() == "User")
{
names = names + nodes[0].Current.ToString() + ";";
accid = accid + nodes[1].Current.ToString() + ";";
}
}
But it is not getting inside the if loop.
I debugged the code to check the values.
I could find that in nodes[0].Current.ToString(),nodes[1].Current.ToString and nodes[2].Current.ToString I am not getting the dispalyname, id and ‘User’.In my Infopath form there are a lot of controls.The value of the nodes is coming as ‘nt[Some textboxvalue]….nt[some dropdown value]…..nt[Displayname]nt[id]………’.
Am I doing anything wrong here.
Please correct me if I am wrong.
Thanks for your time.
Thanks,
Dev.
alot of ad servers use the username@domain.local as the email address.
Using code is the LONGGGGG way to do it. you should be able to just resolve an email address as a function using this:
concat(AccountId,"@domain.local")
replacing the domain with your company’s domain, ex:
concat(AccountId,"@corp.local")
The for loop is incomplete in section 1 also. Could you please update or let me know what should be at
string names=string.Empty;
string accid=string.Empty;
for (int j = 0; j <>?????????????????
{
for (int i = 0; i <>??????????????????????
nodes[i].MoveNext();
if (nodes[2].Current.ToString() == "User")
{
names = names + nodes[0].Current.ToString()+";";
accid = accid + nodes[1].Current.ToString()+";";
}
}
Get and send emails or expoising control data are too simple tasks to use code. Save your skills for somthing more chalenging.
Where the stsadm command addtemplate stores the site template
Regards
Intekhab
Hi Elfoamerican,
i did followed but of no use. When i enter the user in contact slector it doesnot returnthe email. Any idea????
Thanks. | https://blogs.msdn.microsoft.com/mahuja/2008/04/01/performing-operations-on-infopath-2007-contact-selector-control-using-managed-code/ | CC-MAIN-2017-22 | refinedweb | 1,071 | 61.53 |
I have an array like this:
A = array([1,2,3,4,5,6,7,8,9,10])
B = array([[1,2,3],
[2,3,4],
[3,4,5],
[4,5,6]])
width = 3 # fixed arbitrary width
length = 10000 # length of A which I wish to use
B = A[0:length + 1]
for i in range (1, length):
B = np.vstack((B, A[i, i + width + 1]))
Actually, there's an even more efficient way to do this... The downside to using
vstack etc, is that you're making a copy of the array.
Incidentally, this is effectively identical to @Paul's answer, but I'm posting this just to explain things in a bit more detail...
There's a way to do this with just views so that no memory is duplicated.
I'm directly borrowing this from Erik Rigtorp's post to numpy-discussion, who in turn, borrowed it from Keith Goodman's Bottleneck (Which is quite useful!).
The basic trick is to directly manipulate the strides of the array (For one-dimensional arrays):
import numpy as np def rolling(a, window): shape = (a.size - window + 1, window) strides = (a.itemsize, a.itemsize) return np.lib.stride_tricks.as_strided(a, shape=shape, strides=strides) a = np.arange(10) print rolling(a, 3)
Where
a is your input array and
window is the length of the window that you want (3, in your case).
This yields:
[[0 1 2] [1 2 3] [2 3 4] [3 4 5] [4 5 6] [5 6 7] [6 7 8] [7 8 9]]
However, there is absolutely no duplication of memory between the original
a and the returned array. This means that it's fast and scales much better than other options.
For example (using
a = np.arange(100000) and
window=3):
%timeit np.vstack([a[i:i-window] for i in xrange(window)]).T 1000 loops, best of 3: 256 us per loop %timeit rolling(a, window) 100000 loops, best of 3: 12 us per loop
If we generalize this to a "rolling window" along the last axis for an N-dimensional array, we get Erik Rigtorp's "rolling window" function: w. Examples -------- >>> x=np.arange(10).reshape((2,5)) >>> rolling_window(x, 3) array([[[0, 1, 2], [1, 2, 3], [2, 3, 4]], [[5, 6, 7], [6, 7, 8], [7, 8, 9]]]) Calculate rolling mean of last dimension: >>> np.mean(rolling_window(x, 3), -1) array([[ 1., 2., 3.], [ 6., 7., 8.]]) """)
So, let's look into what's going on here... Manipulating an array's
strides may seem a bit magical, but once you understand what's going on, it's not at all. The strides of a numpy array describe the size in bytes of the steps that must be taken to increment one value along a given axis. So, in the case of a 1-dimensional array of 64-bit floats, the length of each item is 8 bytes, and
x.strides is
(8,).
x = np.arange(9) print x.strides
Now, if we reshape this into a 2D, 3x3 array, the strides will be
(3 * 8, 8), as we would have to jump 24 bytes to increment one step along the first axis, and 8 bytes to increment one step along the second axis.
y = x.reshape(3,3) print y.strides
Similarly a transpose is the same as just reversing the strides of an array:
print y y.strides = y.strides[::-1] print y
Clearly, the strides of an array and the shape of an array are intimately linked. If we change one, we have to change the other accordingly, otherwise we won't have a valid description of the memory buffer that actually holds the values of the array.
Therefore, if you want to change both the shape and size of an array simultaneously, you can't do it just by setting
x.strides and
x.shape, even if the new strides and shape are compatible.
That's where
numpy.lib.as_strided comes in. It's actually a very simple function that just sets the strides and shape of an array simultaneously.
It checks that the two are compatible, but not that the old strides and new shape are compatible, as would happen if you set the two independently. (It actually does this through numpy's
__array_interface__, which allows arbitrary classes to describe a memory buffer as a numpy array.)
So, all we've done is made it so that steps one item forward (8 bytes in the case of a 64-bit array) along one axis, but also only steps 8 bytes forward along the other axis.
In other words, in case of a "window" size of 3, the array has a shape of
(whatever, 3), but instead of stepping a full
3 * x.itemsize for the second dimension, it only steps one item forward, effectively making the rows of new array a "moving window" view into the original array.
(This also means that
x.shape[0] * x.shape[1] will not be the same as
x.size for your new array.)
At any rate, hopefully that makes things slightly clearer.. | https://codedump.io/share/R77wDeOrAj4m/1/efficient-numpy-2d-array-construction-from-1d-array | CC-MAIN-2017-39 | refinedweb | 855 | 71.55 |
Consider.”).
July 27th, 2006 at 9:24 pm
If you don’t want Python to be strict (in this context) you could have it return a default value for a key if it isn’t found by using the dict’s ‘get’ method.
print “Name is “+name.get(’first’, ”)+” “+name.get(’given’, ”)+”\n”
July 27th, 2006 at 10:11 pm
Thanks for pointing out. What’s good about that is it’s explicit - if you do that, you have to give some thought to the impact. I like that about Python - default to “safe behaviour” while providing the tools to override when needed.
July 27th, 2006 at 10:22 pm
Of course if you removed the -w from the call to perl you wouldn’t get your warning. Default Perl without the warnings and strict pragma is so forgiving it’s scary. (Though it can be useful for some one off comandline stuff).
July 27th, 2006 at 10:23 pm
The code you posted is pretty common in the PHP and Perl worlds, but I’m guessing that in Ruby and Python you’d probably see people creating simple “Name” classes with two attributes, “first” and “given”. Then one could force that they are both provided in the constructor (raising an exception there if they are not valid), or alternatively overridding the property getter to return a sane non-nil value if the property isn’t defined. That’d look a lot cleaner too, as it would be something like:
puts(”Name is “+name.first+” “+name.given+”\n”)
(I’m guessing here, as I haven’t used Ruby or Python too much yet…)
July 27th, 2006 at 10:43 pm
I should have said something about that I guess - to my mind the benefits of using -w and “use strict” have been hammered out so often that they should be default for anyone writing even half-serious Perl. But I guess it is a violation of this test, examining the default behaviour.
Maybe - maybe not. Think there are different schools on that - some for and others resenting the extra lines of code for the class definition. The view of the functional programmer might also vary on this point. Also, in PHP, think it’s pretty common these days to use a class for this kind of purpose these days.
July 27th, 2006 at 11:50 pm
Ruby also allows you to specify a default value when retrieving a member from a Hash, using the Hash#fetch method. Furthermore, Ruby would not halt execution if you had used string interpolation or sprintf-style string formatting.
July 27th, 2006 at 11:57 pm
Harry,
If you wanted to that Hash to generate a default value when it does not find a key (which it can) I’d do the following in your code
#!/usr/bin/ruby def get_hash(hash) Hash.new("").update(hash) end names = [ get_hash('first'=>'Bob','given'=>'Smith'), get_hash('given'=>'Lukas'), get_hash('first'=>'Mary','given'=>'Doe'), ]; def printName(name) puts("Name is "+name['first']+" "+name['given']+"\n") end names.each { |name| printName(name) }
This will make the hash generate the default string
July 28th, 2006 at 12:16 am
You forgot Haxe ;)
nice article harry..
July 28th, 2006 at 2:09 am
Now this is the kind of language comparison I can get behind (getting sick of Rails rulz! and PHP teh sux0r! and all that).
One comment: if you had an inkling in advance that one of the array keys might be missing, you could use the error suppression operator (@) to get rid of the notice (i. e., @name['first'] ). In PHP templating, this is often very useful because you might want to echo out content only if it exists, otherwise you can just ignore it (and sprinkling if’s all over the place gets old fast).
July 28th, 2006 at 3:50 am
I use a custom error handler (set_error_handler()) on PHP which mimics the Python behaviour by stopping for any error condition; this is also a good place to use things like debug_backtrace() to dump a stack trace which includes the values passed to functions or database-specific information (e.g. sql error, connection info) when appropriate.
July 28th, 2006 at 6:55 am
Installing Xdebug (for PHP) on a server will invariably call some sort of debugging backtrace when a script halts unexpectedly. It also color-codes print_r and var_dump output. I like color. :)
July 28th, 2006 at 10:29 pm
I don’t think it’s really debatable which is the best behaviour. Failing properly when something goes wrong is such an advantage for real software that it’s even got a name (FailFast). It’s an important principle and one that should be used in all ‘real’ software. Debating which language is better for non-’real’ software is not, imo, very interesting :)
July 28th, 2006 at 10:51 pm
Since you feel that way, I’m assuming that you must not consider any dynamic language appropriate for ‘real’ software, since languages that defer type checking until runtime when they could be checking types at compile time obviously haven’t put FailFast at the top of their priority list…
July 29th, 2006 at 7:12 am
You can still consider dynamic languages appropriate in a FailFast context. It’s not about type checking. It’s not about undiscovered errors. It’s about what happens when the language knows an error occurred, whether or not it’s due to a type error, and whether or not it raised an error for a variable that was binded late or early. Either the program is allowed to continue or it’s halted after an error.
If an error is consistent in its behavior and causes, then it’s easier to find and fix, and you can be more confident your code isn’t a mess. It’s just a simple bug.
If an error is sporadic and there seems to be little rhyme or reason to its causes and effects, the bug is harder to fix. It’s also a reason to start wondering if there aren’t many other subtle problems with your code.
Just my two cents.
July 29th, 2006 at 7:45 am
P.S. I think the second two paragraphs I wrote might be unclear.
It *is* about undiscovered errors in the sense that although continuing after an error is in some cases desireable, the rule and not the exception is that it’s best not to do so.
But in any case it should be a deliberate decision to let the program continue, not the default behavior of the language. The last thing you want is to believe everything is fine meanwhile your data is quietly getting corrupted as the program continues. Halting the program is one obvious way to prevent that.
I believe a language should trust the programmer to know when to catch an error and when to let it pass. But to do that, the programmer must at least be given the decision first.
From The Zen of Python:
Errors should never pass silently.
Unless explicitly silenced.
July 29th, 2006 at 7:52 am
Well, first of all, I was playing devil’s advocate; I’m a fan of both dynamically-typed languages AND statically typed ones (different tools for different tasks). I was attempting to point out the fact that FailFast isn’t a black or white philosophy (as it seems that doug above is a fan of absolutes). There are degrees of failing fast, and while it’s a good general approach, obviously sometimes other considerations take priority.
I disagree. It’s not ONLY about type checking, but type errors *are* a common source of application bugs. Catching a parameter with an incorrect type at the beginning of a method invocation (say, with a type hint) is “fail fast” when compared with not checking the type and then 20-30 lines of code later attempting to call an undefined method on that parameter. If you are one to see “fail fast” as an absolute rule, then obviously you would prefer to know as soon as the method is called that the parameter type is wrong. My contention is that if you take the fail fast philosophy to its extreme (which is probably ill-advised), you must find static, compile-time type checking to be the best approach (since that is about as early in the process as you can get), and you will have to abandon dynamically typed languages altogether.
Example (obviously contrived):
someMethod(); ?>
This is perfectly legal PHP code, yet it will fail at runtime about half the time. With static typing, the language would “know” that an error exists, and would refuse to compile it. However, dynamically typed languages would accept the equivalent of the above code without any complaints. Hence, they are not “failing fast”.
However, clearly dynamically typed languages have some (or many, depending on whom you ask) advantages over statically typed ones in certain contexts. My only point is that sometimes other concerns take priority over failing as fast as possible.
July 29th, 2006 at 7:58 am
Oops, I guess I didn’t properly escape my tags in the code snippet above (there is apparently no way to preview your comments here); let me try again:
class Foo {}
$f = new Foo();
if (mt_rand(1, 100) == 1) $f->someMethod();
Well put. This is one of the reasons I’ve never been a fan of MySQL (but that’s a whole other discussion…)
July 29th, 2006 at 10:48 pm
I think the classification of weak/strong typing when expanded is a pretty good correctly placing the languages.
PHP and perl are weak/dynamic.
Ruby and python are strong/dynamic.
And just for contrast:
C++ is weak/static
Java is strong/static
In terms of error reporting, I think we have known for quite awhile that Ruby is pretty crappy at it.
August 1st, 2006 at 7:22 pm
Ruby returns a nil if a value in a hash doesn’t exist. This is useful if you want to check for a key:
if name[:first] printName(name) else puts 'No first name' end
If you want a stricter behavior use Hash#fetch instead of Hash#[].
def printName(name) puts "Name is #{name.fetch :first} #{name.fetch :given}" end
Now the script will raise an IndexError.
August 5th, 2006 at 4:46 pm
Harry F wrote: “So one dividing line here is whether some kind of fatal error should be raised. Perl and PHP continue execution by default while Ruby and Python halt, unless you explicitly handle the problem. Which is better?”
This depends entirely on the nature of the error. A major failure that is likely to really screw the works should abort with a truly meaningful message. Many languages allow explicit handling of such events and these should be used where possible. Not all possible error conditions can be forseen, but as many as can be should be given this treatment.
Lesser problems can usually be permitted to continue, though perhaps with a warning message and a return to an input field. A custom error handler, such as that mentioned by Chris Adams can help with both types of problems, but I need to ask Chris if he uses it only during debug, or if it remains in the finished code.
Rob Walker makes an excellent point when he says, “The last thing you want is to believe everything is fine meanwhile your data is quietly getting corrupted as the program continues.”
There is a real problem with programs that continue with no error control. Subtle errors can easily be introduced into data and soon an entire database is contaminated and totally useless. My primary background is in database programing and, trust me, this is a type of problem you do not want to discover after a client has invested thousands of dollars in data collection/entry efforts. If the language has no intrinsic method to validate data, the programmer must implicitly code data checking with appropriate action before that data is used or stored.
I prefer to accomplish this myself and specify the program response rather than depend only upon the language’s intrinsic type checking. I do have to admit, however, that I am rather a newcomer to web scripting, client or server, but this means I have no hard and fast favorites. Right now I am improving my JavaScript and just beginning with PHP, so for the most part, I will just sit back and learn from y’all.
Lee Eschen
August 5th, 2006 at 10:57 pm
[...] Share and Enjoy:These icons link to social bookmarking sites where readers can share and discover new web pages. [...]
August 6th, 2006 at 3:13 am
Ah … but you *wouldn’t* write a web application in Perl, Python, PHP or Ruby (or Java, etc.) — you’d write it in a *framework*, like Rails! The question isn’t “How strict is your language?” The question is, “How good a web application programmer are you?” and “How well does your *framework* support designing secure, scalable, etc. web applications?”
October 18th, 2006 at 4:47 am
comming late to the party ;) here is lua 5.1 example
names = { { first = "Bob", given = "Smith"}, { given = "Lukas"}, { first = "Mary", given ="Doe"}, }; function printName(name) print("Name is " .. name.first .. " " .. name.given) end for x,y in pairs(names) do printName(y) end
and the runtime output
C:\lua\5. | http://www.sitepoint.com/blogs/2006/07/27/how-strict-is-your-dynamic-language/ | crawl-002 | refinedweb | 2,252 | 68.4 |
#include <HAPI_Common.h>
Definition at line 754 of file HAPI_Common.h.
Path to the .otl library file.
Definition at line 775 of file HAPI_Common.h.
Full asset name and namespace.
Definition at line 777 of file HAPI_Common.h.
Geometry inputs exposed by the asset. For SOP assets this is the number of geometry inputs on the SOP node itself. OBJ assets will always have zero geometry inputs. See Asset Inputs.
Definition at line 793 of file HAPI_Common.h.
Definition at line 781 of file HAPI_Common.h.
It's possible to instantiate an asset without cooking it. See Cooking.
Definition at line 771 of file HAPI_Common.h.
For incremental updates. Indicates whether any of the asset's materials have changed. Refreshed only during an asset cook.
Definition at line 801 of file HAPI_Common.h.
For incremental updates. Indicates whether any of the assets's objects have changed. Refreshed only during an asset cook.
Definition at line 797 of file HAPI_Common.h.
Asset help marked-up text.
Definition at line 778 of file HAPI_Common.h.
This is what any end user should be shown.
Definition at line 774 of file HAPI_Common.h.
Instance name (the label + a number).
Definition at line 773 of file HAPI_Common.h.
Use the node id to get the asset's parameters. See Nodes Basics.
Definition at line 758 of file HAPI_Common.h.
Definition at line 78067 of file HAPI_Common.h.
Transform inputs exposed by the asset. For OBJ assets this is the number of transform inputs on the OBJ node. For SOP assets, this is the singular transform input on the dummy wrapper OBJ node. See Asset Inputs.
Definition at line 787 of file HAPI_Common.h.
User-defined asset version.
Definition at line 776 of file HAPI_Common.h. | http://www.sidefx.com/docs/hengine/struct_h_a_p_i___asset_info.html | CC-MAIN-2018-30 | refinedweb | 291 | 64.47 |
Intersphinx is a very neat way to point at other projects’ documentation from within your own. Handy if you want to reference a Python module or an official Django setting or class.
You have to do some setup work, like enabling the intersphinx extension. Most
of the time, your automatically generated sphinx configuration will already
have it enabled out of the box. Now you need pointers at the other projects’
documentation (there needs to be a special
objects.inv file with link
target definitions). At the end of your file, add something like this:
intersphinx_mapping = { 'python': ('', None), 'django': ('', None), 'sphinx': ('', None), }
Now you can do fun stuff like this:
Django knows about your URL configuration because you told it where to find it with the :django:setting:`ROOT_URLCONF` setting.
Only… I got an error:
ERROR: Unknown interpreted text role "django:setting".
What? I couldn’t figure it out and asked a question on stackoverflow. I got some tips, but couldn’t get it to work.
Then I got an idea. Sphinx has a lot of things build in, but not Django’s
custom Sphinx roles like
setting! That’s not a standard Sphinx
one. Perhaps you need to copy/paste the other project’s custom sphinx
extensions to get it to work? Not terriby elegant, but worth a shot.
In the end I got it working by copying a small snippet from Django’s
sphinx extension as
_ext/djangodocs.py next to my documentation:
def setup(app): app.add_crossref_type( directivename = "setting", rolename = "setting", indextemplate = "pair: %s; setting", )
And I added the following to my Sphinx’ conf.py:
import os import sys ... sys.path.append( os.path.abspath(os.path.join(os.path.dirname(__file__), "_ext"))) # ^^^ I'll do that neater later on. extensions = ['djangodocs', # ^^^ I added that one. 'sphinx.ext.autodoc', ... ] ...
And… Yes, it worked!
So: intersphinx works, but if you point at a custom role, you need to have that custom role defined): | https://reinout.vanrees.org/weblog/2012/12/01/django-intersphinx.html | CC-MAIN-2021-21 | refinedweb | 324 | 66.94 |
How to Turn OFF or Turn ON all bits in C++
As the title of this blog suggests we are going to learn how to Turn OFF/ON all the bits of particular number in C++ using the Bitwise Not Operator.
Turning OFF/ON bits means to take complement of all the bits present in the binary form of the number. As we know that negative numbers are stored as the two’s complement of the positive counterpart. Now to Turn OFF/ON the bits we use Bitwise Not ( ~ ), commonly known as One’s Complement. We will take an example to understand how the Bitwise Not or One’s Complement work in C++.
Exempli Gratia(e.g.) : 0000 0010 is the binary equivalent of the decimal number 2, taking a complement of it’s binary equivalent we get 1111 1101.
Now take a look at the binary equivalent of 3 i.e. 0000 0011 now taking a compliment of it’s binary equivalent we get 1111 1100 and adding a +1 to it we get 1111 1101, which is the binary equivalent of -3.
From this we can yield the result that ~2 = -3, and therefore a general result can be observed i.e. ~n = -(n+1).
C++ program to Turn OFF/ON all bits of number
#include <iostream> using namespace std; int main() { int num1 = 10; int num2 = 0; num2 = ~num1; cout << "Value of num2 is: " << num2 << endl; return 0; }
There is also a keyword present in C++, compl which stands for complement and it can be used as an alternative to Bitwise Not ( ~ ).
num2 = compl num1;
Output :
Value of num2 is: -11
Also read: | https://www.codespeedy.com/how-to-turn-off-or-turn-on-all-bits-in-cpp/ | CC-MAIN-2020-24 | refinedweb | 276 | 64.24 |
transpose of this matrix (Read Only).
The transposed matrix is the one that has the Matrix4x4's columns exchanged with its rows.
using UnityEngine;
public class ExampleScript : MonoBehaviour { // You construct a Matrix4x4 by passing in four Vector4 objects // as being COLUMNS and not ROWS Matrix4x4 matrix = new Matrix4x4( new Vector4(1, 2, 3, 4), new Vector4(5, 6, 7, 8), new Vector4(9, 10, 11, 12), new Vector4(13, 14, 15, 16));
void Start() { Debug.Log(matrix); // This outputs // 1, 5, 9, 13, // 2, 6, 10, 14, // 3, 7, 11, 15, // 4, 8, 12, 16
Debug.Log(matrix.transpose); // This outputs // 1, 2, 3, 4, // 5, 6, 7, 8, // 9, 10, 11, 12, // 13, 14, 15, 16 } }
Did you find this page useful? Please give it a rating: | https://docs.unity3d.com/ScriptReference/Matrix4x4-transpose.html | CC-MAIN-2020-05 | refinedweb | 128 | 68.4 |
Details
Description
stats.
Activity
Also, food for thought, when (hopefully not if) the VelocityResponseWriter is moved into core, we can deprecate stats.jsp and skin the output of this request handler for a similar pleasant view like stats.jsp+client-side xsl does now.
Any thoughts on the naming of this beast?
How about SysInfoRequestHandler - bonus: SIRH evokes RFK's assassin
"stats" is a bit overloaded (StatsComponent). as is "system" (SystemInfoHandler).
I swear when I read this, before I suggested SIRH, you had written "SystemStatsHandler" instead of "SystemInfoHandler". Not sure how you changed it without a red "edited" annotation in the header for your comment.... Et tu, Atlassian?
Anyway, pathological paranoia aside, SIRH is too close to SystemInfoHandler - I hereby begin the process of formally withdrawing it from consideration. Ok, done.
stats.xsl creates a title prefix "Solr Statistics" - how about SolrStatsRequestHandler?
+1 on SolrStatsRequestHandler
You might want to consider either omitting or making optional the Lucene Fieldcache stats; they can often be very slow to be generated ( see ). One use case for this request handler that I can see is high frequency (every few seconds) monitoring as part of performance testing, for which a fast response is pretty mandatory.
Any thoughts on the naming of this beast?
SystemInfoHandler sounds good.
This would probably also be a good time to retire "registry.jsp" ... all we need to do is add a few more pieces of "system info" to this handler (and add some param options to disable the "stats" part of the output)
Also, food for thought, when (hopefully not if) the VelocityResponseWriter is moved into core, we can deprecate stats.jsp and skin the output of this request handler for a similar pleasant view like stats.jsp+client-side xsl does now.
Even if/when VelocityResponseWRiter is in the core, i'd still rather just rely on client side XSLT for this to reduce the number of things that could potentially get missconfigured and then confuse people why the page doesn't look right ... the XmlResponseWRriter has always supported a "stylesheet" param that (while not generally useful to most people) let's you easily reference any style sheet that can be served out of the admin directory ... all we really need is an updatd .xsl file to translate the standard XML format into the old style stats view.
Some updates to Erik's previous version...
- adds everything from registry.jsp
- lucene/solr version info
- source/docs info for each object
- forcibly disable HTTP Caching
- adds params to control which objects are listed
- (multivalued) "cat" param restricts category names (default is all)
- (multivalued) "key" param restricts object keys (default is all)
- adds (boolean) "stats" param to control if stats are outputed for each object
- per-field style override can be used to override per object key
- refactored the old nested looping that stast.jsp did over every object and every category into a single pass
- switch all HashMaps to NamedLists or SimpleOrderedMaps to preserve predictable ordering
Examples...
- ?cat=CACHE
- return info about caches, but nothing else (stats disabled by default)
- ?stats=true&cat=CACHE
- return info and stats about caches, but nothing else
- ?stats=true&f.fieldCache.stats=false
- Info about everything, stats for everything except fieldCache
- ?key=fieldCache&stats=true
- Return info and stats for fieldCache, but nothing else
I left the class name alone, but i vote for "SystemInfoRequestHandler" with a default registration of "/admin/info"
Whoops .. i botched the HTTP Caching prevention in the last version
Committed revision 917812.
I went ahead and commited the most recent attachment under the name "SystemInfoRequestHandler" with slightly generalized javadocs.
Leaving the issue open so we make sure to settle the remaining issues before we release...
- decide if we want to change the name
- add default registration as part of the AdminRequestHandler (ie: /admin/info ?)
- add some docs (didn't wnat to make a wiki page until we're certain of hte name)
- decide if we want to modify the response structure (should all of the top level info be encapsulated in a container?)
Thanks Hoss for committing!
naming: I'm fine with how it is, but fine if the name changes too and +1 to adding default
Correcting Fix Version based on CHANGES.txt, see this thread for more details....
Please add an option that just lists the catalog of MBeans.
Please add an option that just lists the catalog of MBeans.
It's already there – if stats=false it just returns the list of SolrInfoMBeans from the registry (like registry.jsp)
what do you think of the proposed name change & path: SolrInfoMBeanHandler & /admin/mbeans ?
- rename to o.a.s.handler.admin.SolrInfoMBeanHandler
- add default registration as part of the AdminRequestHandler /admin/mbeans
- eliminate duplication of functionality w/SystemInfoHandler
- "docs" are left in explicit order returned by plugin
- if "cats" param is used, categories are returned in that order
Committed revision 953886. ... trunk
Committed revision 953887. ... branch 3x
re: naming. If you're someone like me who is becoming fairly familiar with using solr, but not with the solr code – then "SolrInfoMBeanHandler" or "admin/mbean" doesn't mean anything to me, and is kind of confusing. I want to get info on my indexes and caches-- it would be very non-obvious to me (if i hadn't read this ticket) that "MBean" has anything to do with this, since I don't know what an MBean is – and probably shouldn't have to to use solr through it's APIs.
So seems to me that a name based on the functions provided (not the underlying internal implementation) is preferable. But i recognize the namespace conflict problems, so much stuff in Solr already (some of it deprecated or soon to be deprecated or removed, some of it not) that it's hard to find a non-conflicting name.
Even if the underlying class is SolrInfoMBeanHandler, would it be less (or more) confusing for the path to be /admin/info still? That might be less confusing, as someone like me would still see /admin/info in the config and think, aha, that might be what I want. Or the lack of consistency might just be more confusing in the end.
I don't know what the current SystemInfoHandler does, what's the difference between that and this new one? There might be hints to naming in that. If the new one does everything the old one does, perhaps call it NewSystemInfoHandler, but still register it at /admin/info, with the other one being deprecated? Just brainstorming. Or rename the other one to OldSystemInfoHandler.
Bulk close for 3.1.0 release
The /admin/stats handler is not registered by default, nor is it included in example config. I had to add <requestHandler name="/admin/stats" class="org.apache.solr.handler.admin.SolrInfoMBeanHandler" /> to my solrconfig to get it working.
Jan: as stated above the registration i picked was /admin/mbeans - stats is too specific since the component can be used for other purposes then getting stats.
it's also not a "default" handler – it's registered if you register the AdminHandler
Jonathan: i overlooked your comment until now. the existing SystemInfoHandler isn't deprecated – it's still very useful and provides information about the entire "system" solr is running in (the jvm, the os, etc...)
I'll commit this in the near future.
Any thoughts on the naming of this beast? "stats" is a bit overloaded (StatsComponent). as is "system" (SystemInfoHandler). | https://issues.apache.org/jira/browse/SOLR-1750?attachmentOrder=desc | CC-MAIN-2015-11 | refinedweb | 1,239 | 54.73 |
Anyone who works with the Java programming language is well aware of Scanner class in Java. And for aspiring Java Developers who don’t know what Scanner class is and how to use Scanner class in Java, this article is the perfect introduction to it.
In this post, we’ll engage in a detailed discussion of Scanner class in Java, its different methods, and how they function. So, if you are looking forward to knowing more about Scanner class in Java, keep reading till the end!
What is the Scanner class in Java?
The Scanner class in Java is primarily used to obtain user input. The java.util package contains it. The Scanner class not only extends Object class, but it can also implement Iterator and Closeable interfaces. It fragments the user input into tokens using a delimiter, which is by default, whitespace.
It is pretty easy to use the Scanner class – first, you create an object of the class and then use any of the available methods present in the Scanner class documentation.
Besides being one of the simplest ways of obtaining user input data, the Scanner class is extensively used to parse text for strings and primitive types by using a regular expression. For instance, you can use Scanner class to get input for different primitive types like int, long, double, byte, float, and short, to name a few.
You can declare Java Scanner class like so:
public final class Scanner
extends Object
implements Iterator<String>
If you wish to obtain the instance of the Scanner class that reads user input, you have to pass the input stream (System.in) in the constructor of Scanner class, as follows:
Scanner in = new Scanner(“Hello upGrad”);
Read: Top 6 Reasons Why Java Is So Popular With Developers
What are the different Scanner class constructors?
Here are the six commonly used Scanner class constructors:
- Scanner(File source) – It creates a new Scanner to generate values scanned from a particular file.
- Scanner(InputStream source) – It creates a new Scanner to produce values scanned from a specified input stream.
- Scanner(Readable source) – It creates a new Scanner to deliver values scanned from a specified source.
- Scanner(String source) – It creates a new Scanner to produce values scanned from a particular string.
- Scanner(ReadableByteChannel source) – It creates a new Scanner to produce values scanned from a specified channel.
- Scanner(Path source) – It creates a new Scanner to generate values scanned from a specified file.
What are the different Scanner class methods?
Just like Scanner class constructors, there’s also a comprehensive suite of Scanner class methods, each serving a unique purpose. You can use the Scanner class methods for different data types. Below is a list of the most widely used Scanner class methods:
- void [close()] – This method is used to close the scanner.
- pattern [delimiter()] – This method helps to get the Pattern that is currently being used by the Scanner class to match delimiters.
- Stream<MatchResult> [findAll()] – It gives a stream of match results that match the specified pattern string.
- String [findInLine()] – It helps to find the next occurrence of a pattern created from a specified string. This method does not consider delimiters.
- String [nextLine()] – It is used to get the input string that was skipped of the Scanner object.
- IOException [ioException()] – This method helps to obtain the IOException last projected by the Scanner’s readable.
- Locale [locale()] – It fetches a Locale of the Scanner class.
- MatchResult [match()] – It offers the match result of the last scanning operation performed by the Scanner.
- BigDecimal [nextBigDecimal()] – This method is used to scan the next token of the input as a BigDecimal.
- BigInteger [nextBigInteger()] – This method scans the next token of the input as a BigInteger.
- byte [nextByte()] – It scans the next token of the user input as a byte value.
- double [nextDouble()] – It scans the next token of the user input as a double value.
- float [nextFloat()] – This method scans the next token of the input as a float value.
- int [nextInt()] – This method is used to scan the next token of the input as an Int value.
- boolean:
- [hasNext()] – This method returns true if the Scanner has another token in the user input.
- [hasNextBigDecimal()] – This method checks if the next token in the Scanner’s input can be interpreted as a BigDecimal by using the nextBigDecimal() method.
- [hasNextBoolean()] – It checks if the next token in the Scanner’s input can be interpreted as a Boolean using the nextBoolean() method.
- [hasNextByte()] – It checks whether or not the next token in the Scanner’s input is interpretable as a Byte using the nextBigDecimal() method.
- [hasNextFloat()] – It checks whether or not the next token in the Scanner’s input is interpretable as a Float using the nextFloat() method.
- [hasNextInt()] – It checks whether or not the next token in the Scanner’s input is interpretable as an Int using the nextInt() method.
How to use Scanner class in Java?
As we mentioned before, using the Scanner class in Java is quite easy. Below is an example demonstrating how to implement Scanner class using the nextLine() method:
import java.util.*;
public class ScannerExample {
public static void main(String args[]){
Scanner in = new Scanner(System.in);
System.out.print(“Enter your name: “);
String name = in.nextLine();
System.out.println(“Name is: ” + name);
in.close();
}
}
If you run this program, it will deliver the following output:
Enter your name: John Hanks
Name is: John Hanks
Also Read: What is Type Casting in Java | Understanding Type Casting As a Beginner
Wrapping up
This article covers the fundamentals of the Scanner class in Java. If you acquaint yourself with the Scanner class constructs and methods, with time and continual practice, you will master the craft of how to use Scanner class in Java programs.. | https://www.upgrad.com/blog/scanner-class-in-java/ | CC-MAIN-2020-40 | refinedweb | 961 | 61.77 |
A thing that makes a reader go hmmm is;
long x = whatever;.
There is one case where it fails if the result must be an integer, but not if the result is a long.
long x = -1;
x = Integer.MinValue / -1;
Makes sense.
Also, what about -2147483648 / ((long)-1)? If the result type of the expression was an int, the answer would be -2147483648. Since the result is long, you get 2147483648.
Igor
Aren’t all divisions natively cast to the whatever type is largest in the division itself?
int / decimal = decimal?
byte / float = float?
long / int = long?
Fiddling with this code seemed to confirm that very idea.
— to test, made a simple console app exe
— fiddling around with the number and the types of vars a and b
using System;
namespace NamespaceOrama
{
class Program
{
static void Main(string[] args)
{
byte a = 5;
int b = 23;
var c = (a / b);
Console.WriteLine(c.GetType().ToString());
Console.ReadLine();
}
}
}
— EOF
So, after all that, I say, "So what, and where’s the C+C music factory reference?"
Then I wait and say… "Oooh, better yet… let’s try Marky Mark, just to stay fresh."
C/C++ have integral promotions. C# specification also says almost the same thing.
However, which one is the egg and which one is chicken here?
@Christopher: The divisions are not cast. These are the defined operators:
int / int
long / long
byte / byte
decimal / decimal
When you try to divide long / int (or int / long) the int is cast to a long because that’s the best match the overload resolution can find, and there’s an implicit cast defined from int to long.
hey Eric,
you ask "First, why is it even desirable to have the result fit into an int?"
One thought that immediately came to mind is that if I’m working with ints that came from an SQL DB, divide them for some reason, and want to write the result back to the DB, then what I’m writing back darn well better be an int or the write/update will fail. Sometimes it’s harder to make changes to old DB schemas (esp if it was made when memory was expensive) than it is to change data types in a program.
private const char c = ‘C’;
private const string abc = "AB" + c;
Hm, my recent post got a bit to short. That my little code snippet is illegal makes me go hmmm.
Here is the snippet again:
private const char c = ‘C’;
private const string abc = "AB" + c;
Thank you for submitting this cool story – Trackback from DotNetShoutout
There is another important reason: to minimize mental load of understanding the language.
The rule "Any operator, when fed a mix of ints and longs, always returns a long" is much simpler and easier to remember than the rule "Any operator, when fed a mix of ints and longs, always returns a long except for division, where an int divided by a long returns an int because the result must necessarily fit into an int."
— Michael Chermside
This is well covered in computer science’s Compiler 101 course.
Now, I’m no C++ or Visual Studio/Express coder, I prefer PureBasic myself a nice procedural language that easily match C in most cases.
But I have looked at quite a bit of C and C++ code, and I have looked up the Windows SDK type definitions.
An int and a long are both actually a signed __int32.
From:
INT is a 32-bit signed integer. The range is -2147483648 through 2147483647 decimal.
This type is declared in WinDef.h as follows:
typedef int INT;
LONG is a 32-bit signed integer. The range is –2147483648 through 2147483647 decimal.
This type is declared in WinNT.h as follows:
typedef long LONG;
And in
long is 4 bytes, other names is long int, signed long int, range is –2,147,483,648 to 2,147,483,647
__int32 is 4 bytes, other names are signed, signed int, int, range is –2,147,483,648 to 2,147,483,647
int is 4 bytes, other names is signed int, range is –2,147,483,648 to 2,147,483,647
So any compiler that treats INT and LONG differently (they are both a 32bit signed integer) is bugged and unpredictable per the definitions.
“why is it even desirable to have the result fit into an int? You’d be saving merely four bytes of memory”
Sounds to me like you are talking about int as if it was a LONG LONG or __int64, and don’t forget LONG_PTR (32bits on x86, 64bits on x64)
Of course, I could be wrong and in C# a long is actually a signed __int64 rather than a signed __int32 as the Windows SDK states… they can’t both be right can they? (I trust the SDK more than C# compiler in this case)
Yes, you are wrong. Indeed, in the 32 bit windows SDK for C/C++ both INT and LONG are aliases for a 32 bit signed integer. That has nothing whatsoever to do with C#, a completely different language that targets the .NET Runtime, not the Win32 SDK. The C# compiler has nothing whatsoever to do with the Win32 SDK. There’s not a contradiction there; they are just completely different systems. In C#, an int is 32 bits and a long is 64 bits. — Eric
Maybe time for back to basics adventure in coding article to highlight this typedef mess that has stayed with C through C++ to C# (and bled into some other languages as well) and the two MSDN links I gave which everyone should have bookmarked at the very least.
It hasn’t bled through to C# at all. These definitions for C/C++ programmers have nothing whatsoever to do with C#, a completely different language that targets a different platform. — Eric | https://blogs.msdn.microsoft.com/ericlippert/2009/01/28/long-division/ | CC-MAIN-2017-09 | refinedweb | 985 | 69.41 |
Using ML.NET in Jupyter notebooks
Cesar
I do believe this is great news for the ML.NET community and .NET in general. You can now run .NET code (C# / F#) in Jupyter notebooks and therefore run ML.NET code in it as well! – Under the covers, this is enabled by ‘dotnet-try’ and its related .NET kernel for Jupyter (as early previews).
The Jupyter Notebook is an open-source web application that allows you to create and share documents that contain live code, equations, visualizations and narrative text.
In terms of ML.NET this is awesome for many scenarios like data exploration, data cleaning, plotting data charts, documenting model experiments, learning scenarios such as courses or hands-on-labs, quizzes, etc.
Show me the code and run it!
Although I’m showing in the following steps most of the code, step by step, it is always useful, especially when dealing with Jupyter notebooks to have the Jupyter notebook code and simply run it!
I set up a Jupyter environment in MyBinder (public service in the Internet) which is a great way to try notebooks if you don’t have Jupyter setup in your own machine. You can run it by simply clicking on the below link:
Ready-to-run ML.NET Jupyter notebook at MyBinder
In that ready-to-run Jupyter notebook you can directly try ML.NET code, plotting charts from C#, display the training time and quality metrics, etc. as shown in the image below:
You can also download the Jupyter notebook with ML.NET code that I’m using in this Blog Post from here (MLNET-Jupyter-Demo.ipynb).
Note that after some time if your MyBinder environment was not active, it’ll be shutdown. Therefore, if you want to have a stable environment you might want to set it up on your own machine, as explained below.
Setting it up on your local machine
If you want to set it up on your local machine/PC, you need to install:
- Jupyter (Easiest way is to install Anaconda).
- ‘dotnet try’ global tool.
- Enable the .NET kernel for Jupyter.
Install Jupyter on your machine
The easiest and recommended way to install Jupyter notebooks is by installing Anaconda
(conda) but you can also use
pip.
When installing anaconda, it’ll also install Python. However, I want to highlight that ML.NET doesn’t have any dependency on Python, but Jupyter has.
For more details on how to install Anaconda and Jupyter please checkout the Jupyter installation guide.
Install the ‘dotnet try’ tool
The Jupyter kernel for .NET is based on the ‘dotnet try’ tool you need to install first.
The ‘dotnet try’ tool is a CLI Global Tool so you install it with the ‘dotnet CLI’.
Since these versions are early previews, they are still not in NuGet.org but in MyGet, therefore you need to provide the MyGet feed, like in the following CLI line:
dotnet tool install -g dotnet-try
Note: If you have the dotnet try global tool already installed, you will need to uninstall before grabbing the kernel enabled version of the dotnet try global tool.
List what Global Tools you have installed:
dotnet tool list -g
Update dotnet-try:
dotnet tool update -g dotnet-try
Uninstall:
dotnet tool uninstall dotnet-try -g
Issues and Open Source code
The ‘dotnet try’ tool open source repo is here: . You can research there for deeper details about it.
Issues and Feedback: If you have any issue with dotnet-try or the .NET kernel on Jupyter, please post it here:
Install the .NET kernel in Jupyter
- If you have Jupyter using Anaconda then you should execute the commands below inside the Anaconda command prompt
- Run the following command
dotnet try jupyter install
Test that it is working
- Start the Anaconda Navigator app (Double click on ‘Anaconda Navigator’ icon)
- Launch Jupyter from the ‘Launch’ button in the ‘Jupyter Notebook’ tile.
- Alternatively, from the Anaconda Prompt you can also start Jupyter by typing the following command positioned at your user’s home path.:
jupyter notebook
- You will see Jupyter and your User’s folders by default.
- Open the ‘New’ menu option and you should see the ‘.NET (C#)’ and ‘.NET (F#)’ menu options:
- Select ‘.NET (C#)’ and start hacking in C# in a new Jupyter notebook! 🙂
- For instance, you can test that C# is working with simple code like the following:
Ok, let’s hack for a while and start writing ML.NET C# code in a Jupyter notebook! 🙂
Install NuGet packages in your notebook
First things first. Before writing any ML.NET code you need the notebook to have access to the NuGet packages you are going to use. In this case, we’re going to use ML.NET and XPlot for plotting data distribution and the regression chart once the ML model is built.
For that, write code like the following. Versions might vary and you could also add the ‘using’ namespaces later on or in this same Jupyter cell:
Run this cell once. It ‘ll take some time in order to download and install the NuGet packages, that’s why it is a good idea to have this installation in a separated cell.
Declare the data-classes
When loading the datasets and when training or predicting you need to use an input class and a prediction class, like the following classes:
Here’s the code you can copy/paste in your notebook:
Load dataset in IDataView
The way you load data is exactly the same way you’d do in a regular C# project. You only need to place the dataset files in the same folder where you have your just created Jupyter notebook which by default will be your user’s root folder. You can copy the .csv files from this GitHub repo:
Then, just write the following code and run it so you see the training IDataView schema:
Here’s how you see it in Jupyter:
You can also visualize a few rows of the data loaded into any IDataView such as here:
This action is a bit more verbose, but we’re working on another data structure in .NET for exploring data named ‘DataFrame’ very similar to the DataFrame in Pandas in Python which is a lot simpler than when working with the IDataview because the DataFrame is eager instead of lazy loading plus you don’t need to work with typed data classes just for exploring data.
Plotting data with XPlot
XPlot is a popular plotting library in the F# community that you can also use from C#:
In the initial cell you already installed its Nuget package so now you can simply use it in Jupyter.
Prepare data in arrays
XPlot works with any IEnumerable based type but the most common way is by using arrays, so first of all we’re going to extract some input variables data in a few arrays:
After running that in a Jupyter cell, you can now plot data distributions such as the following histogram where you can see that most of the taxi trips were between $5 and $10.
Or more interestingly, you can see how the ‘distance’ input variable impacts the fare/price of the taxi trips, although you can also see that some other variables might be influencing, as well, because when the distance is higher the dots are more sparse probably due to the ‘time’ variable that you can also plot.
You can check the Jupyter notebook file (MLNET-Jupyter-Demo.ipynb) I’m providing and see additional plotting charts I explored.
Create the ML Regression model with ML.NET
Now, let’s get into ML.NET code. We’ll first work on the data transformations then we’ll add the trainer/algorithm and finally we’ll train the model which creates the model itself.
Data transformations in the model pipeline
In order to create a regression model we first need to make some data transformations (convert text to numbers, normalize and concatenate input variables) in our pipeline such as the following:
You should run that code in a new Jupyter cell you create.
If you want to learn more about the data transformations needed for a regression problem, take a look to this tutorial:
Add the trainer/algorithm and train the model
In the following code we add the trainer/algorithm SDCA (Stochastic Dual Coordinate Ascent) to the pipeline and then we train the model by calling the fit() method and providing the training dataset:
And here’s the execution in Jupyter with just some more ‘displaying info’ lines of code:
A very interesting thing you can use in C# when running a cell is the ‘%%time’ code which will measure the time it needed to run all the code in that Jupyter cell. This is especially interesting when you know something is going to take its time, like when training an ML model, depending on how much data you have for training. In that case above it tells us it needed almost 3 seconds, but if you have a lot of data it could be minutes or even hours.
Evaluate the model’s quality: Metrics
Once you have the model another important step is to figure out how good it is by calculating the performance metrics with some predictions that are compared to the actual values from a test-dataset, like in the following code:
Here you can directly see the metrics in the Jupyter notebook in a very neat way by simply calling ‘display(metrics)’ 🙂
Make predictions in bulk and show a bar diagram comparing predictions vs. actual values
Here’s the code on how to make a few predictions and show in a bar chart a comparison of predictions versus actual values from the test dataset:
And here’s the bar chart in Jupyter:
Plotting Predictions vs. Actual values plus the Regression line
Finally, with the following code you can plot the predictions vs. the actual values. If the regression model is working well the dots should be most of them around a straight line which is the regression line. Also, the closer the regression line is to the ‘perfect line’ (prediction is equal to the actual value in the test dataset), the better quality your model has.
Here’s the code:
And this is how you’ll see the regression line and plot chart:
Save the ML model as a file
Finally, you can also save the ML.NET model file and see it in the same folder than your Jupyter notebook:
You can the take that .ZIP file (ML.NET model) and deploy it (consume it) in any .NET application like you can see here for making predictions in an Azure Function or an ASP.NET Core app/WebAPI:
- Deploying an ML.NET model into an Azure Function
- Deploying an ML.NET model into an ASP.NET Core app/WebAPI
Conclusions and take aways
Jupyter is a great environment for scenarios such as:
- Data exploration and plotting
- Documenting Machine Learning model experiments and conclusions
- Creating courses based on Jupyter notebooks. Great for many learning scenarios
- Labs or Hands on labs
- Creating quizzes for learning environments
And now with the .NET kernel for Jupyter you can take advantage of it for all those scenarios.
Please, feel free to send us your feedback through this blog post comments or into the following GitHub issues:
dotnet-try feedback:
ML.NET feedback:
We can’t wait to hear from you about the ideas and assets you can create with Jupyter+ML.NET! 🙂
Happy coding! | https://devblogs.microsoft.com/cesardelatorre/using-ml-net-in-jupyter-notebooks/ | CC-MAIN-2019-47 | refinedweb | 1,925 | 67.89 |
Platform-independent types. More...
Platform-independent types.
#include <mi/base/types.h>
The
printf format specifier for mi::Difference.
The
printf format specifier for mi::Sint64.
The
printf format specifier for mi::Size.
The
printf format specifier for mi::Uint64.
Value of Pi.
Value of Pi / 2.
Value of Pi / 4.
Signed integral type that is large enough to hold the difference of two pointers.
It corresponds to a 32-bit signed integer on 32-bit architectures and a 64-bit signed integer on 64-bit architectures.
32-bit float.
64-bit float.
16-bit signed integer.
32-bit signed integer.
64-bit signed integer.
8-bit signed integer.
Unsigned integral type that is large enough to hold the size of all types.
This type is for example used for dimensions and indices of vectors.
It corresponds to a 32-bit unsigned integer on 32-bit architectures and a 64-bit unsigned integer on 64-bit architectures.
16-bit unsigned integer.
32-bit unsigned integer.
64-bit unsigned integer.
8-bit unsigned integer.
An enum for a three-valued comparison result.
The three values, -1, 0, and 1, have several symbolic names that can be used interchangeable depending on the context. The symbolic names group together as indicated in their order.
Reverses the sign of a three valued enum.
Returns the three valued comparison result between two values of a numerical type
T.
Tmust be comparable.
lhsor
rhsis NaN.
Returns the three valued sign for a numerical type
T.
Tmust be comparable against 0.
tis NaN.
The maximum value for
Difference.
The minimum value for
Difference.
The maximum value for
Size. | https://raytracing-docs.nvidia.com/iray/api_reference/math/html/group__mi__base__types.html | CC-MAIN-2019-22 | refinedweb | 271 | 55.3 |
Troubleshooting¶
So something has gone wrong… what do you do?¶
When Qtile is running, it logs error messages (and other messages) to its log
file. This is found at
~/.local/share/qtile/qtile.log. This is the first
place to check to see what is going on. If you are getting unexpected errors
from normal usage or your configuration (and you’re not doing something wacky)
and believe you have found a bug, then please report a bug.
If you are hacking on Qtile and you want to debug your
changes, this log is your best friend. You can send messages to the log from
within libqtile by using the
logger:
from libqtile.log_utils import logger logger.warning("Your message here") logger.warning(variable_you_want_to_print) try: # some changes here that might error raise Exception as e: logger.exception(e)
logger.warning is convenient because its messages will always be visibile
in the log.
logger.exception is helpful because it will print the full
traceback of an error to the log. By sticking these amongst your changes you
can look more closely at the effects of any changes you made to Qtile’s
internals.
Capturing an
xtrace¶
Occasionally, a bug will be low level enough to require an
xtrace of
Qtile’s conversations with the X server. To capture one of these, create an
xinitrc or similar file with:
exec xtrace qtile >> ~/qtile.log
This will put the xtrace output in Qtile’s logfile as well. You can then demonstrate the bug, and paste the contents of this file into the bug report.
Note that xtrace may be named
x11trace on some platforms, for example, on Fedora. | http://docs.qtile.org/en/latest/manual/troubleshooting.html | CC-MAIN-2021-39 | refinedweb | 275 | 73.88 |
Welcome to this month's installment of "Java In Depth." One of the earliest challenges for Java was whether or not it could stand as a capable "systems" language. The root of the question involved Java's safety features that prevent a Java class from knowing other classes that are running alongside it in the virtual machine. This ability to "look inside" the classes is called introspection. In the first public Java release, known as Alpha3, the strict language rules regarding visibility of the internal components of a class could be circumvented though the use of the
ObjectScope class. Then, during beta, when
ObjectScope.)
Look deeply into my files...:
public interface Application { public void main(String args[]); }
If the above interface was defined, and classes implemented it, then at least you could use the
instanceof operator in Java to determine if you had an application or not and thus determine whether or not it was suitable for invoking from the command line. The bottom line is that you can't (define the interface), it wasn't (built into the Java interpreter), and so you can't (determine if a class file is an application easily). So what can you do?
Actually, you can do quite a bit if you know what to look for and how to use it.
Decompiling class files
The Java class file is architecture-neutral, which means it is the same set of bits whether it is loaded from a Windows 95 machine or a Sun Solaris machine. It is also very well documented in the book The Java Virtual Machine Specification by Lindholm and Yellin. The class file structure was designed, in part, to be easily loaded into the SPARC address space. Basically, the class file could be mapped into the virtual address space, then the relative pointers inside the class fixed up, and presto! You had instant class structure. This was less useful on the Intel architecture machines, but the heritage left the class file format easy to comprehend, and even easier to break down.
In the summer of 1994, I was working in the Java group and building what is known as a "least privilege" security model for Java. I had just finished figuring out that what I really wanted to do was to look inside a Java class, excise those pieces that were not allowed by the current privilege level, and then load the result through a custom class loader. It was then that I discovered there weren't any classes in the main run time that knew about the construction of class files. There were versions in the compiler class tree (which had to generate class files from the compiled code), but I was more interested in building something for manipulating pre-existing class files.
I started by building a Java class that could decompose a Java class file that was presented to it on an input stream. I gave it the less-than-original name
ClassFile. The beginning of this class is shown below.
public class ClassFile { int magic; short majorVersion; short minorVersion; ConstantPoolInfo constantPool[]; short accessFlags; ConstantPoolInfo thisClass; ConstantPoolInfo superClass; ConstantPoolInfo interfaces[]; FieldInfo fields[]; MethodInfo methods[]; AttributeInfo attributes[]; boolean isValidClass = false; public static final int ACC_PUBLIC = 0x1; public static final int ACC_PRIVATE = 0x2; public static final int ACC_PROTECTED = 0x4; public static final int ACC_STATIC = 0x8; public static final int ACC_FINAL = 0x10; public static final int ACC_SYNCHRONIZED = 0x20; public static final int ACC_THREADSAFE = 0x40; public static final int ACC_TRANSIENT = 0x80; public static final int ACC_NATIVE = 0x100; public static final int ACC_INTERFACE = 0x200; public static final int ACC_ABSTRACT = 0x400;
As you can see, the instance variables for class
ClassFile define the major components of a Java class file. In particular, the central data structure for a Java class file is known as the constant pool. Other interesting chunks of class file get classes of their own:
MethodInfo for methods,
FieldInfo for fields (which are the variable declarations in the class),
AttributeInfo to hold class file attributes, and a set of constants that was taken directly from the specification on class files to decode the various modifiers that apply to field, method, and class declarations.
The primary method of this class is
read, which is used to read a class file from disk and create a new
ClassFile instance from the data. The code for the
read method is shown below. I've interspersed the description with the code since the method tends to be pretty long.
1 public boolean read(InputStream in) 2 throws IOException { 3 DataInputStream di = new DataInputStream(in); 4 int count; 5 6 magic = di.readInt(); 7 if (magic != (int) 0xCAFEBABE) { 8 return (false); 9 } 10 11 majorVersion = di.readShort(); 12 minorVersion = di.readShort(); 13 count = di.readShort(); 14 constantPool = new ConstantPoolInfo[count]; 15 if (debug) 16 System.out.println("read(): Read header..."); 17 constantPool[0] = new ConstantPoolInfo(); 18 for (int i = 1; i < constantPool.length; i++) { 19 constantPool[i] = new ConstantPoolInfo(); 20 if (! constantPool[i].read(di)) { 21 return (false); 22 } 23 // These two types take up "two" spots in the table 24 if ((constantPool[i].type == ConstantPoolInfo.LONG) || 25 (constantPool[i].type == ConstantPoolInfo.DOUBLE)) 26 i++; 27 }
As you can see, the code above begins by first wrapping a
DataInputStream around the input stream referenced by the variable in. Further, in lines 6 through 12, all of the information necessary to determine that the code is indeed looking at a valid class file is present. This information consists of the magic "cookie" 0xCAFEBABE, and the version numbers 45 and 3 for the major and minor values respectively. Next, in lines 13 through 27, the constant pool is read into an array of
ConstantPoolInfo objects. The source code to
ConstantPoolInfo is unremarkable -- it simply reads in data and identifies it based on its type. Later elements from the constant pool are used to display information about the class.
Following the above code, the
read method re-scans the constant pool and "fixes up" references in the constant pool that refer to other items in the constant pool. The fix-up code is shown below. This fix-up is necessary since the references typically are indexes into the constant pool, and it is useful to have those indexes already resolved. This also provides a check for the reader to know that the class file isn't corrupt at the constant pool level.
28 for (int i = 1; i < constantPool.length; i++) { 29 if (constantPool[i] == null) 30 continue; 31 if (constantPool[i].index1 > 0) 32 constantPool[i].arg1 = constantPool[constantPool[i].index1]; 33 if (constantPool[i].index2 > 0) 34 constantPool[i].arg2 = constantPool[constantPool[i].index2]; 35 } 36 37 if (dumpConstants) { 38 for (int i = 1; i < constantPool.length; i++) { 39 System.out.println("C"+i+" - "+constantPool[i]); 30 } 31 }
In the above code each constant pool entry uses the index values to figure out the reference to another constant pool entry. When complete in line 36, the entire pool is optionally dumped out.
Once the code has scanned past the constant pool, the class file defines the primary class information: its class name, superclass name, and implementing interfaces. The read code scans for these values as shown below.
32 accessFlags = di.readShort(); 33 34 thisClass = constantPool[di.readShort()]; 35 superClass = constantPool[di.readShort()]; 36 if (debug) 37 System.out.println("read(): Read class info..."); 38 39 /* 30 * Identify all of the interfaces implemented by this class 31 */ 32 count = di.readShort(); 33 if (count != 0) { 34 if (debug) 35 System.out.println("Class implements "+count+" interfaces."); 36 interfaces = new ConstantPoolInfo[count]; 37 for (int i = 0; i < count; i++) { 38 int iindex = di.readShort(); 39 if ((iindex < 1) || (iindex > constantPool.length - 1)) 40 return (false); 41 interfaces[i] = constantPool[iindex]; 42 if (debug) 43 System.out.println("I"+i+": "+interfaces[i]); 44 } 45 } 46 if (debug) 47 System.out.println("read(): Read interface info...");
Once this code is complete, the
read method has built up a pretty good idea of the structure of the class. All that remains is to collect the field definitions, the method definitions, and, perhaps most importantly, the class file attributes.
The class file format breaks each of these three groups into a section consisting of a number, followed by that number of instances of the thing you are looking for. So, for fields, the class file has the number of defined fields, and then that many field definitions. The code to scan in the fields is shown below.
48 count = di.readShort(); 49 if (debug) 50 System.out.println("This class has "+count+" fields."); 51 if (count != 0) { 52 fields = new FieldInfo[count]; 53 for (int i = 0; i < count; i++) { 54 fields[i] = new FieldInfo(); 55 if (! fields[i].read(di, constantPool)) { 56 return (false); 57 } 58 if (debug) 59 System.out.println("F"+i+": "+ 60 fields[i].toString(constantPool)); 61 } 62 } 63 if (debug) 64 System.out.println("read(): Read field info...");
The above code starts by reading a count in line #48, then, while the count is non-zero, it reads in new fields using the
FieldInfo class. The
FieldInfo class simply fills out data that define a field to the Java virtual machine. The code to read methods and attributes is the same, simply replacing the references to
FieldInfo with references to
MethodInfo or
AttributeInfo as appropriate. That source is not included here, however you can look at the source using the links in the Resources section below.
Ok, so now what?
At this point you might be asking, "What good does this do me?" The answer is "Quite a bit."
If you've compiled up these classes and have them in your class path, the simplest thing you can do is to print them out and have a look.
The
ClassFile class defines a method named
display for dumping the structure of the class file out to the terminal. I wrote a simple program named
dumpclass to show how it is used. The source code to
dumpclass is shown below.
import java.io.*; import java.util.*; import util.*; public class dumpclass { public static void main(String args[]) { try { FileInputStream fi = new FileInputStream(args[0]); util.ClassFile cf = new util.ClassFile(); // cf.debug = true; // cf.dumpConstants = true; if (! cf.read(fi)) { System.out.println("Unable to read class file."); System.exit(1); } cf.display(System.out); } catch (Exception e) { e.printStackTrace(); } } }
The code above shows how
dumpclass easily reads in a named class file and then displays it using the
display method. The output of the display is shown below. If you look at the output you will see that generic imports in the source such as
import java.io.*; are regenerated with the specific files that the
dumpclass code actually imports. If nothing else, using
dumpclass on your class files, and cutting and pasting the specific imports in for your generic imports, will save compile time on some compilers. The other interesting thing is that the source code looks like, well, source code. This is because the class file structure contains structural as well as implementation information. You should not use such information to illegally decompile other people's class files.
import java.io.FileInputStream; import java.io.PrintStream; import java.lang.Exception; import java.lang.System; import java.lang.Throwable; import util.ClassFile; /* * This class has 1 optional class attributes. * These attributes are: * Attribute 1 is of type SourceFile * SourceFile : dumpclass.java */ public synchronized class dumpclass extends java.lang.Object { /* Methods */ public static void main(java.lang.String a[]); public void dumpclass(); }
More interesting to me when I wrote these classes was the optional class file attribute. Since the
ClassFile class can write as well as read class files, it is ideal for "adding on" an optional class file attribute.
For those of you who haven't seen the specification on class files, the optional class file attribute is a chunk of opaque data that has a string typename and a chunk of opaque binary data. Sun defines a few well-known attributes (the "SourceFile" attribute shown above is one such attribute), but you can use the attributes to store arbitrarily interesting data. In my secure system prototype I had space reserved in an optional class attribute for a public key signature and a capabilities certificate.
Another interesting application of class file attribute is demonstrated by the SBKTech application Jinstall, which uses an attribute to store the compressed data for its self-extracting archive process. Using these classes and the new ZIP file routines in 1.1 makes it pretty easy to generate this type of application.
Finally, perhaps the most intriguing application of reading and rewriting class files uses attributes and class loaders. Referring back to my article on writing class loaders, and knowing that attributes can be associated with methods, in addition to being generic to the class (and in fact there is an attribute with the method to indicate the exceptions it throws), consider the following application.
Let's say you have a Java class whose method code was stored in an attribute associated with that method and encrypted by a key known only to the author's server. The actual code associated with a method was some Java code that simply threw an
UnlicensedUsageException. (Note that this is a fictional exception used to illustrate the design.) Now bundle with an application a custom class loader that was designed to load such a class. This class loader would work in the following way.
First, the code for the class would be read. Then the class would be decomposed into a
ClassFile structure. After this, the methods in the class would be checked for encryption. The class loader, once satisfied such a thing was allowed, would contact, via the Internet, the author's server and request a decryption key. That key would be applied to the encrypted code, and the decrypted code would be substituted for the place holder code. The class would be rewritten into a byte stream and then fed into the class loader for loading and execution.
The result of these steps would be a Java class file that was very much more difficult to decompile than a "normal" Java class. Further, since the decryption happens on the fly, only a modified virtual machine could be used to extract the running code (assuming a secure decrypting key exchange).
I had thought about coding an example but realized that such a class loader would no doubt be declared to be a munition and I would be branded an arms dealer. So this description will have to suffice!
Wrapping up and further thoughts
Being able to see inside a Java class can enable a Java application to manipulate that class in useful ways. I've looked at reading and writing class files directly, and then through a custom class loader importing the class into the Java run time. Being able to write classes enables such applications as "self extracting" classes. These are meta classes around a distribution of classes. Another interesting application is the notion of an encrypted class whose contents are self-decrypted just prior to running by accessing a remote key. It all goes to show that we can learn new skills by looking inside ourselves!
Next month we will look at the Reflection API and how it achieves introspection while keeping a rein on security, and I'll show you how I'd write the initial code of the Java interpreter if I had an opportunity to update that code.
Learn more about this topic
- "SBKTech Tools" -- Cool tools that take advantage of class file knowledge.
- Source files for this column:
- ClassFile.java
- AttributeInfo.java
- FieldInfo.java
- MethodInfo.java
- ConstantPoolInfo.java
- Here are the files in ZIP format
- Here are the files in TAR format
- "How to build an interpreter in Java, Part 2The structure"
The trick to assembling the foundation classes for a simple interpreter.
- "How to build an interpreter in Java, Part 1The BASICs"
For complex applications requiring a scripting language, Java can be used to implement the interpreter, adding scripting abilities to any Java app.
- "Lexical analysis, Part 2Build an application"
How to use the StreamTokenizer object to implement an interactive calculator.
- "Lexical analysis and JavaPart 1"
Learn how to convert human-readable text into machine-readable data using the StringTokenizer and StreamTokenizer classes.
- "Code reuse and object-oriented systems"
Use a helper class to enforce dynamic behavior.
- "Container support for objects in Java 1.0.2"
Organizing objects is easy when you put them into containers. This article walks you through the design and implementation of a container.
- "The basics of Java class loaders"
The fundamentals of this key component of the Java architecture.
- "Not using garbage collection"
Minimize heap thrashing in your Java programs.
- "Threads and applets and visual controls"
This final part of the series explores reading multiple data channels.
- "Using communication channels in applets, Part 3"
Develop Visual Basic-style techniques to applet design -- and convert temperatures in the process.
- "Synchronizing threads in Java, Part II"
Learn how to write a data channel class, and then create a simple example application that illustrates a real-world implementation of the class.
- "Synchronizing threads in Java"
Former Java team developer Chuck McManis walks you through a simple example illustrating how to synchronize threads to assure reliable and predictable applet behavior. | https://www.javaworld.com/article/2076993/learn-java/take-a-look-inside-java-classes.amp.html | CC-MAIN-2018-22 | refinedweb | 2,895 | 55.34 |
On Tue, 2010-10-26, Carl Banks wrote: > On Oct 25, 11:20 pm, Jorgen Grahn <grahn+n... at snipabacken.se> wrote: >> On Mon, 2010-10-25, bruno.desthuilli... at gmail.com wrote: >> >. >> >> Which mainstream languages are you thinking of? Java? Because C++ is >> as flat as Python. > > Not in my experience. The only way to get dynamic polymorphism (as > opposed to the static polymorphism you get with templates) in C++ is > to use inheritance, so when you have a class library in C++ you tend > to get hierarchies where classes with all kinds of abstract base > classes so that types can be polymorphic. I should have mentioned that I talked about the standard C++ library: almost no inheritance[1] and just one namespace level. Of course you can make a layered mess of C++ if you really try[2], but it's not something the language encourages. IMHO. > In Python you don't need > abstract base classes so libraries tend to be flatter, only inheriting > when behavior is shared. > > However it's not really that big of a difference. Right, that's one level, and you can't avoid it if you really *do* need inheritance. /Jorgen [1] Not counting the black sheep, iostreams. [2] I have seen serious C++ code trying to mimic the Java bottomless namespace pit of despair: com::company::division::product::subsystem::... -- // Jorgen Grahn <grahn@ Oo o. . . \X/ snipabacken.se> O o . | https://mail.python.org/pipermail/python-list/2010-October/590700.html | CC-MAIN-2014-15 | refinedweb | 237 | 65.32 |
Our engineering team at Aol Europe have been working on a number of exciting new projects over the past few months including a brand new high-performance real-time web framework called SocketStream.We're going to take a first look at how you can start building applications with it today.
The brain-child of software engineer Owen Barnes, SocketStream is built on top of node.js, resolves around the popular single-page application (SPA) paradigm and utilizes HTML5 WebSockets, Socket.IO, Redis and other techologies to provide an extremely responsive experience for the web.
It's effectively a complete-stack for all of your client and server development needs.
The team launched SS at Hacker News London this week and we were humbled by the positive reaction on both GitHub (where we very surprisingly became the trending repo of the week) and Twitter including some interest from Jeremy Ashkenas, the developer behind Backbone.js, Underscore.js and of course, CoffeeScript.
Why do we believe the industry needs SocketStream?
Last year, we took a look at the current state of web development and noticed that in a lot of cases developers were having to write similar logic on both the server-side and client-side repeatedly.
This is something that many of us have now become used to, managing MVC-style architecture on the server in a language like PHP or Rails and then another MVC-like instance in JavaScript on the client. Although a means to an end, no developer should ideally be doubling their efforts on a codebase unless completely necessary. It makes web development feel broken as a process.
Not only this, but in a lot of situations the best experiences being offered to end-users only provided a snapshot of data from the past (neither Ajax requests nor long-polling give you a truly optimal, real-time level of data syncronization between the front and backend), so Owen decided to come up with a solution that would solve these problems. Imagine being able to change data on the backend and have it update in real-time on the user's screen.
Rather than constantly requiring a page-refresh or long-polled call to the server to maintain syncronization, why not simply pass all of the data needed back and forth between the client and server through WebSockets and packets containing updates in JSON-form instead?. This is effectively what SocketStream does, however it also does a lot more.
Let's take a brief look at WebSockets before we continue.
Understanding WebSockets
If you haven't played around with the WebSocket API before it's effectively a means for providing low-complexity, low-latency, bi-directional communicational channels over a single TCP socket and is relatively straight-forward to use.
The protocol works by initiating an HTTP-like handshake and provides developers a simple way to send packets of data back and forth between the client and server.
Whilst a complete tutorial on WebSockets is beyond the scope of this article, a basic demonstration of how they function can be seen below:
/*Create a new WebSocket, connect to the server at test.com specifying a customized protocol*/ var socket = new WebSocket("", "a-protocol"); /*Once the connection has been established, we can begin transmitting data to the server by using the WebSocket objects .send() method:*/ socket.send("lolwat?"); /*As an event-driven API , a message event is delievered to an onmessage function when messages are received. We can easily listen for data that arrives as follows:*/ socket.onmessage(function( e ){ console.log('Data arrived:' + e.data); } /*Connections can then be closed using the WebSocket's .close() method*/ socket.close(); //easy!
The low level of latency is one of the primary values of using WebSockets because it effectively enables bi-directional communication without requiring a large overhead.
As mentioned on the Mozilla blog as a comparitive example: Google Wave attempted to bring the world real-time communication with keystrokes but had a several-kilobyte overhead for each keystroke due to the TCP start-up, teardown and HTTP headers involved where realistically, only a few bytes should have been sent down the tubes.WebSockets would have allowed them to do this and this is where their power truly shines. (In case you're interested, there are some notes about the Wave protocol vs. WebSockets here along with suggestions of using them together).
Although the WebSocket API is currently still being standardized by the W3C, developers are already able to take advantage of them in browsers such as Safari 5, mobile Safari, FireFox 5 and Chrome 4+.
This is why Owen thought it was time to start using them – they will be supported by all browsers in the future but in the mean-time for browsers that don't support WebSockets natively, a polyfill such as FlashSockets can be used instead.
The rest of the SocketStream stack
- Data storage – SocketStream uses Redis which been described as many things, but it's generally considered a multi-purpose swiss army knife. It's a key-value store, a data-structure server, a non-blocking event bus and a lot more. It's very difficult to precisely categorize Redis, however it tends to fall under the NoSQL brand most of the time.
- SocketStream was built with a number of design patterns in mind, including Pub/Sub (which you can read more about here). Scalable pub/sub is baked right into the framework so you can easily keep the rest of your application decoupled from the get go (as long as you remember to use it!).
- Namespacing is one of the areas of large-scale application development that is often neglected (but as we know is very-much required) – this is why SocketStream offers the concept of API trees. These are basically a way of representing your entire application's folder structure in a JavaScript object so that code can be easily namespaced, accessed and executed on both the client and server side.
- You don't have to worry about handling your own authentication or bolting on your own HTTPS module onto the framework – both of these are built-in modules with their own APIs.
- Intelligent tools for the modern-developer – SocketStream comes with Stylus (for dynamic, expressive CSS), Jade (a node-based templating engine) for static HTML files and JavaScript templating for any dynamic templating needs. The jQuery templates plugin is provided by default, however this can be easily switched out for Mustache, Handlebars or any other client-side templating solution you might prefer.
How does SocketStream work?
As a front-end developer, build processes as essential for making projects production-ready. This includes everything from concatonating files to minification and beyond. SocketStream automatically handles all of this for you (via UglifyJS) and sends all of this data through to users the first time they connect to your site.
After this, any additional data is transmitted as serialized JSON over WebSockets instantaneously establishing when the user first connections and reestablishing if the connection drops for any reason. The end-result of this is that we no longer need worry about issues like latency or header overhead – simply streaming data between clients and servers.
Writing a real-time group chat client using SocketStream
To help you get started with SocketStream, Paul and I have written a simple application for bi-directional messaging between the client and server (effectively a simple chat app). In this section of the post, we'll take you how to install and create your first SocketStream application in next to no-time.
First, select a language – CoffeeScript or JavaScript?
In case you haven't tried it out yet, CoffeeScript is a new abstracted way of writing code that compiles to JavaScript in a concise, elegant fashion inspired by some of the syntatic sugar found in Rails and Python. Discussions on the pros and cons of it have been ongoing in the JavaScript community however SocketStream doesn't make any discrimination – you can easily code applications using whichever you prefer.
Simply use the .js or .coffee extension for new files in your application directory and the SocketStream build process will take care of the rest.
Our team however love using CS and encourage developers to give it a try as it doesn't have a large learning curve. We believe that EcmaScript.next may well incorporate aspects of CS as a part of it's specs so it may well be the future. In case you're interested in giving it a go, Arcturo wrote a free book on CS that you can read here. Alternatively, if you're a JavaScript developer just looking for a simpler way to port your code over to CS, I recommend trying out js2coffee.
Step 1 – Getting Started: Installation
Now that you've decided which language to use, let's get on to the installation of SocketStream, it's dependencies and the sources to the SocketChat application.
Before you can run SocketStream, you will need to install the following software:
- Node.js ()
- Node Package Manager ()
- Redis ()
Instructions on how to install those items of software are included below for reference, and are taken from the home pages of those software libraries.
Note: You may also need to install the following packages:
yum install gettext-devel expat-devel curl-devel zlib-devel openssl-devel
Step 1.1 – Install Node.js if you don't currently have it on your system.
Firstly, Check you have the dependencies for Node.js installed first (see here). If you satisfy them, then follow the command line instructions below:
wget # (check this is the latest version) tar xzf node-v0.4.8.tar.gz cd node-v0.4.8 ./configure make sudo make install
Step 1.2 – Install Node Package Manager (npm)
curl | sh
Step 1.3 – Install Redis
wget tar xzf redis-2.2.7.tar.gz cd redis-2.2.7 make
Step 1.4 – Install SocketStream
There are two ways you can do this. Either by using a standard git clone as follows:
git clone git://github.com/socketstream/socketstream.git cd socketstream/ npm install -g
or by directly using NPM:
sudo npm install socketstream -g
To test that SocketStream is correctly installed, create and run a test project:
socketstream new test cd test socketstream start
Step 1.5 – Install SocketChat
git clone cd socketchat/ npm link
The complete installation process should take less than 20 minutes including dependencies.
Step 2 – Understanding folder structure and API trees
Generating a new SocketStream project is then as simple as typing in:
socketstream new project_name_here
This generates a number of project directories which are covered in detail on our repo's ReadMe. These directories are:
- /app/client (CoffeeScript or JavaScript files containing application logic for the client)
- /app/server (effectively controllers in traditional MVC, logic for the server-side)
- /app/shared (code shared between the client and server)
- /app/css (the Stylus files containing your style/CSS definitions)
- /app/views (Jade views and templates)
- /lib (libraries used such as jQuery)
- /public (for static files. Used by SS for compiling your project)
- /static (for warnings)
- /vendor (optional)
You can structure the sub-folders in these directories as needed but remember that this entire API Tree can also be accessed as a JavaScript object. Effectively, we can call SS.client.app<export name> on either the browser console or on the server to get the same application logic executed.
Eg. if sum was an export defined in /app/server/, you could easily call SS.client.app.sum(25,34) to add the two numbers on either console. Once again, for information on exports, I recommend reading our official documentation.
Step 3 – Creating our chat application views
We'll be using Jade to create all of our application views. If you haven't used it before, Jade's a great templating engine that's quite heavily influenced by Haml. The basic idea is that you're able to structure your mark-up without the need for semantic tags which simplifies the amount of work needed to define your layout. As a part of our build-process, Jade templates are later compiled to actual HTML that can be rendered in the browser.
Below, we're defining a layout with a simple div messageContainer and a form (sendMessage) for submitting new messages to other users. Jade is very readable so even if its new to you, you should be able to read the code:
/app/views/app.jade – Main template
!!! 5 html(lang:"en") head meta(http-equiv="Content-Type", content="text/html;charset=UTF-8") meta(name: "apple-mobile-web-app-capable", content: "yes") meta(name: "apple-mobile-web-app-status-bar-style", content: "black-translucent") meta(name: "viewport", content: "width = 1024, initial-scale = 1, user-scalable = no") != SocketStream title SocketChat body #wrapper #main.views.hidden #header #name SocketChat #left.column #content #messageContainer #messages form#sendMessage input(id='newMessage', type='text', name='message', autocomplete='off') input(id='sendButton', type='submit', value='Send') #footer form(onsubmit: 'return false')#signIn.views.hidden h1 Please enter a username to begin: input(type='text')
The last view we need for this single-page application is a template for new messages that are sent and received. We've going to keep this simple as it's a basic chat client but you can easily extend this to support gravatar profile images or a more rich template. The user and body variables used in the below will be dynamically populated at run-time.
/app/views/templates/message.jade – Messages template
.message .user {{= user}} .body {{= body}}
Step 4 – Posting messages from the client to the server
Broadcasting messages using SocketStream is actually quite straight-forward. We'll be using the server-side .sendMessage() function defined in Step 5 that utilises Pub/Sub to publish a message centrally.
The client-side code then listens out for an event called newMessage to check (subscribe) to any new messages being published by other users that are also connected.
The best part about all of this is that we don't in any way need to worry about which server instance users are connected to. Messages are always passed on to the correct server because every SS server subscribes to the same overall instance of Redis.
Walking through the code below, we check to see if the current user has correctedly been initialized/signed-in and display a sign-in form if not. Assuming they have signed in correctly, they're able to send new messages to all of the other users that are logged in. Because SocketStream comes with built-in authentication, this process is extremely simplified.
The .renderMessage() function is used to render new messages using the jQuery templates plugin to our messages list and jQuery is also used for a minor level of DOM-manipulation and effects such as fading in login forms and new messages.
/app/client/app.coffee – Client side code
# This function is called automatically once the websocket is setup exports.init = -> $('.message.template').hide() SS.server.app.init (user) -> if user then $('#main').show() else displaySignInForm() # Bind to Submit button $('form#sendMessage').submit -> newMessage = $('#newMessage').val() SS.server.app.sendMessage newMessage, (response) -> if response.error then alert(response.error) else $('#newMessage').val('') false # Bind to new incoming message event SS.events.on 'newMessage', renderMessage # Display the user sign-in form displaySignInForm = -> $('#signIn').show().submit -> SS.server.app.signIn $('#signIn').find('input').val(), (response) -> $('#signInError').remove() displayMainScreen() false displayMainScreen = -> $('#signIn').fadeOut(230) and $('#main').show() renderMessage = (params) -> $('#templates-message').tmpl(params).appendTo('#messages') SS.client.scroll.down('#messages', 450)
Step 5 – Broadcasting messages received to all of the clients currently connected
As mentioned in Step 4, our server-side code primarily handles broadcasting new mesages but also takes care of user sign-in and session handling. As you can see, the amount of code required to get this type of application done is very minimal.
/app/server/app.coffee – Server-side code
exports.actions = init: (cb) -> if @session.user_id R.get "user:#{@session.user_id}", (err, data) => if data then cb data else cb false else cb false sendMessage: (message, cb) -> SS.publish.broadcast 'newMessage', {user: @session.user_id, body: message} cb true signIn: (user, cb) -> @session.setUserId(user) cb user
Fork or download SocketChat
You can download, fork or check out a demo of SocketChat below. Please note that we recommend running SocketStream demos in a WebKit browser such as Chrome or Safari as those currently have stable WebWorker implementations. SS is definitely going to target all modern browsers including FF5, however support is currently being expanded beyond those recommended.
From multi-user chat to multi-user gaming
SocketChat is of course just a drop in the ocean in terms of what you can create with the framework.
Another of our team-mates, Alan Milford, (who had never used CoffeeScript, Jade or SS before) built a complete multi-player game called SocketRacer using CSS3 and SocketStream in under a week (see below).
He built this on top of the basic bi-directional communication examples to syncronize telemetary data of cars that were racing but it also still supports multi-user chat. You can play the game here if you're interested in checking it out.
There are already a number of other demo apps in the pipeline but at the moment sky really is the limit with what's possible.
SocketStream and the road forward
SS is currently in it's very early alpha stages of development and there's still quite a lot of work to get done. The first area that we absolutely need to cover is unit testing, which we intend on getting into the project as soon as possible. Next, we want to focus on getting models into a release and we've been preliminarily looking at Mongoose to assist with this.
Remember that this is really a preview release and there will be bugs, so we don't advise deploying sensitive applications to the web using SS just yet.
We do however encourage developers to start playing around with SocketStream right away as we're more than happy to take on any feedback during these early stages of development. Your feedback is valued and may be considered for future releases. Have fun building some SocketStream apps!
Nice read for weekend. Well written. Gonna try now
Thanks..
Cheers. Please do let us know if you end up creating anything fun with it
Informative article about Sockets (and SocketStream)! BTW, IE9 users can add the sockets api at:
Interesting! We're reviewing this at the moment.
Why dont you use Jade for client side rendering too.. Found it much more flexible than the crappy jqtmpl
We'll certainly consider your feedback for a future release.
Thanks for the write up. SS sure looks awesome.
+1 for unit testing and models.
Thanks Addy, made a little live doc/wiki demo following this tutorial, which was super-helpful.
You're very welcome, Mike. Please feel free to hook us up to a github link to your demo
Glad you enjoyed playing around with SS.
Thank you Addy, SocketStream let dreams come true. I've tried various ways to explore the Realtime / WebSocket / node.js universe and find the perfect and modern all-in-one solution. SocketStream is near to that.
Great article. I've been trying to run through your code (OS X Snow Leopard) and am encountering some small errors.
———-
sudo socketstream start
29 Jun 11:08:06 – Starting SocketStream server…
29 Jun 11:08:07 – Generating essential asset files to get you started…
29 Jun 11:08:07 – Error: ENOENT, No such file or directory './public/assets'
node.js:134
throw e; // process.nextTick error, or 'error' event on first tick
^
Error: Unable to generate client assets libraries. Please ensure you have the latest version of SocketStream and try again
———-
Any thoughts?
Thanks!
Me also got the same error "Error: ENOENT, No such file or directory './public/assets'"
How do I resolve it?
This issue is fixed in the latest npm (0.1.2). Thanks for letting us know.
I just downloaded all of the files (NPM included) and still got this error. How can I fix it?
installation problem solved… had to do a little tweak in configuring redis
This is really a great article, and thanks for putting the code online!!
You're welcome!
Pingback: Node Roundup: Porting Node to Windows, socketstream, EventEmitter2
I'm getting an "Incompatible Browser" error message with Firefox 5. Does it not support the minimal requirements?
Hi Jake
Socket.IO 0.6 is a little temperamental with Firefox, so developers can chose to disable 'flashsockets' support and only support browsers with native websockets. This is the SS.config.browser_check.strict setting which I believe was enabled for SocketChat.
A new unstable version of SocketStream with Socket.IO 0.7 is nearly ready. This will have much better support for Firefox and other browsers – but we still want developers to have the final say in which browsers they wish to support (as it would be difficult to support fast real-time gaming on some of the fallback transports Socket.IO provides).
Owen
oh so nice!
If i think that until 3 years i use irc-applet chat
Pingback: SocketStream: A WebSocket Web Framework | FunctionSource Development
Do you plan integrating other template engines such as EJS? It would be very nice fro those unhappy with jade.
Chat demo crashes Safari 5.0.2 on OSX Snow with a Segmentation Fault
But this is a nice and inspiring work.
Hey guys! Just a few quick tips for anyone trying to get the chat demo working:
1. Please make sure you have the latest version of SocketStream, Redis and Node installed.
2. For the time being, it's best to run SocketStream in WebKit browsers. It *should* work fine in FF5 and Opera (latest) with the WebSockets flags turned on (as per our docs), but you'll get the best experience in Chrome and Safari for the time being.
3. If you receive further error messages, please open a ticket on the repo's 'issues' section on GitHub here as either I or another team member will instantly be alerted to help patch whatever behaviour is broken –
Cheers!
Addy
Great article! I'm trying to get it running on latest node 0.5.2-pre but get this error:
/usr/local/lib/node_modules/socketstream/node_modules/stylus/lib/visitor/evaluator.js:539
Evaluator.prototype.visitImport = function(import){
^^^^^^
node.js:189
throw e; // process.nextTick error, or 'error' event on first tick
^
Error: Unable to start SocketStream as we're missing stylus.
Please install with 'npm install stylus'. Please also check the version number if you're using a version of npm below 1.0
But I do have stylus installed. Any ideas?
Answering myself: downgraded to node 0.4.10 and now it works.
very nice! thanks for sharing. i´ve been playing around with websockets for a month now.
Websocket API kicks ass! Iamjuststarting to learn it, it makes everything so much quicker! Love your site and all these step-by-stap examples! More of it please!
your tutorials are very good, we have created our own Javascript Framework that can either work individually or you can combine it with Jquery, Prototype or any Javascript Framework. our Framework also comes with 13 Modules that ease your work and handle everything for you. Many People have Downloaded it and using it.
it is fully documented so you can easily understand everything.
hope you can check it on your own.
Thanks Addy!
It’s a useful post! And with SS it can built amazing real-time apps
Man, I wish I stumbled upon SocketStream sooner. I basically built almost the same thing with almost the exact same dir structure but in a much more rapid and less organized way. Anyway, I plan on switching to it and use at our startup and maybe contribute back.
REALLY GOOD WORK!
Hey Mario! I’m glad you’re interested in giving SocketStream a try. If you run into any issues with it at all feel free to reach out and we’ll do our best to help.
Thanks so much for the post and example app. Socketstream feels like it might be a fit for a simple multiplayer game I’m trying to build. Are you planning to update the example code to be 0.3.x compliant?
I don’t have plans on updating this specific example but will be building a new app using SocketStream 0.3.x that will be accompanied with it’s own tutorial at some point in the near future
Fantastic. Thanks for putting so much of yourself out there for the rest of us. Congratulations on the move to Google.
Pingback: Node Roundup: Porting Node to Windows, socketstream, EventEmitter2 - Jobbook News | http://addyosmani.com/blog/building-real-time-coffeescript-web-applications-with-socketstream/ | CC-MAIN-2014-42 | refinedweb | 4,147 | 55.24 |
Asked by:
ForeignKey Annotations Confusing for DerivedTypes
I know that often EF Code First needs some help (I.e. annotations) to properly handle mapping associations to DB relationships.
In my case I have a model of (BONDS) and their interest payments (COUPONS). In my case, BOND is an abstract base class which I further subtyped into FixedRateBond, and FloatingRateBond etc...
public abstract class Bond { public ICollection<Coupon> } public class FixedRateBond : Bond { } public class FloatingRateBond: Bond {}
Without a reference property from the COUPON class, EF would raise an error "unable to retrieve association information .... only models that include foreign key information are supported. To solve this, I seem to need my Coupon Class to be written as follows
public class Coupon { public Guid FloatingRateBondId {get;Set;} public FloatingRateBond {get;Set;} public Guid FixedRateBondId {get;Set;} public FixedRateBond {get;Set;} // etc... }
this quickly becomes unmanageable if my bond hierarchy is more complex... as an example if I were to subtype bond to 10 - 15 derived types that I'd like to support in my application, that would mean I'd have to explicitly add foreign key / reference property for each subtype and the Coupon Class would look quite messy.
What I would like to ask is if there is anyway to benefit from the fact that all my derived bonds have a common ancestor base class BOND? in other words, Is there a way to get just have a single foreign key / reference property from the Coupon class back to the parent Bond base class
like this ?
public Class Coupon { public Guid BondId {get;set;} public Bond Bond {get;set;} }Any guidance would be greatly appreciated..
- Edited by DoWorkAync Saturday, June 29, 2013 11:30 AM Typo
Question
All replies
Hi DoWorkAync,
I didn't see you have specified the Id for the entities.
By convention, code first can specify the relationship if you have added the navigation properties.
If I use the following code, I will create the database without any errors.
public abstract class Bond { public Guid BondId { get; set; } public ICollection<Coupon> Coupons { get; set; } } public class FixedRateBond : Bond { } public class FloatingRateBond : Bond { } public class Coupon { public Guid CouponId { get; set; } public Guid BondId { get; set; } public Bond Bond { get; set; } } public class FKContext : DbContext { public DbSet<Bond> Bonds { get; set; } public DbSet<Coupon> Coupons { get; set; } }
Best regards,
Chester Hong
MSDN Community Support | Feedback to us
Develop and promote your apps in Windows Store
Please remember to mark the replies as answers if they help and unmark them if they provide no help
Hello Chester,
Yes, I am sorry for the omission, I forgot to mention that my BOND does in fact, inherit from an ancestor class which contains the Id field. As I am still having the issue, I will review the code and try to submit further details about my particular case.
I suppose from your post above, that the answer to my question is that Entity Framework will be able to work with a model where:
- COUPON has FOREIGN KEY to a base class BOND
- Subtypes of BOND are those classes which have properties of type ICollection<COUPON>
Can you confirm that ?
Hello,
Further to my earlier post, I've reviewed my code, and it is possible another part of the model is causing the error. Elsewhere in my domain m
public interface ICouponSchedule { ICollection<CouponPayment> CouponPaymentSchedule { get; }
and I've attempted to include a FK in the child class:
public class ICouponPayment: EntityBase { public Guid CouponScheduleId { get; set; } public ICouponSchedule CouponSchedule{ get; set; } }
I suspect EF can't deal with a FK property (Coupon Schedule) being an interface. Am I right ?
- Edited by DoWorkAync Monday, July 01, 2013 2:39 PM Type | https://social.msdn.microsoft.com/Forums/en-US/89207ccc-5dbb-4727-931f-8c0c885ba086/foreignkey-annotations-confusing-for-derivedtypes | CC-MAIN-2017-43 | refinedweb | 618 | 52.43 |
IsoHunt.Search
Contents
Description.
Synopsis
- search :: Query -> IO Response
- data Query = Query {
- simpleQuery :: String -> Query
- data Sort
- data Order
- = Descending
- | Ascending
- data Response
- data Item
- data Image
- data MalformedJSON = MalformedJSON !ByteString
- data MalformedResponse = MalformedResponse !String !Value
The main function
search :: Query -> IO ResponseSource
Search IsoHunt with the given query.
Throws
MalformedJSON or
MalformedResponse if the result is of an
expected format.
Query
See also
simpleQuery and
def for constructing queries
Constructors
Instances
simpleQuery :: String -> QuerySource
A default query for the given search term
Response.
Exceptions
data MalformedJSON Source
The response was invalid JSON. The unparsed contents are included.
Constructors
Instances
data MalformedResponse Source
The response was valid JSON, but not of the expected format. Error message and the JSON value are included.
Constructors
Instances | http://hackage.haskell.org/package/isohunt-0.1.3/docs/IsoHunt-Search.html | CC-MAIN-2014-52 | refinedweb | 124 | 50.53 |
Let me make this clear first:
this has almost nothing to do with transformation matrixes, except that they follow similar patters, and thus, can be interchanged.
Alright! onto the good part:
I wanted to make some way to do matrix math in AS3, so i built a simple class for it, and thought i might as well share. it supports the common matrix operations, addition, subtraction, multiplication, and inverses of 1x1, 2x2, and 3x3 matrices. This can be used for solving linear systems of equations, which may be useful.
here's a link:
A small script i wrote up for testing is below. It solves a 3 variable system of equations with fairly good accuracy. ( To within math errors )
:):)Code:package { import ASUtil.Math.matrix; import flash.display.Sprite; public class Main extends Sprite { /* 1x - 3y + 3z = -4 2x + 3y - 1z = 15 4x - 3y - 1z = 19 The solution is ( 5, 1, -2 ) We solve this using an inverse matrix: [ 1 -3 3 ] [ x ] [ -4 ] [ 2 3 -1 ] * [ y ] = [ 15 ] [ 4 -3 -1 ] [ z ] [ 19 ] Gets turned into [ 1 -3 3 ] ^ -1 [ -4 ] [ x ] [ 2 3 -1 ] * [ 15 ] = [ y ] [ 4 -3 -1 ] [ 19 ] [ z ] Which should result in about [ 5 ] [ 1 ] [ -2 ] --------------------------------------------------------------- [ -4 ] [ 15 ] [ 19 ] */ private var m:matrix = new matrix ( 1, 3, -4, 15, 19 ); /* [ 1 -3 3 ] [ 2 3 -1 ] [ 4 -3 -1 ] */ private var m2:matrix = new matrix ( 3, 3, 1, -3, 3, 2, 3, -1, 4, -3, -1 ); public function Main() { trace ( m2 ); trace ( m ); trace ( m2.inverse ().mulMatrix ( m ) ); } } } | https://www.kirupa.com/forum/showthread.php?371674-AS3-Matrix-class-matrix-math&p=2636970 | CC-MAIN-2016-36 | refinedweb | 253 | 69.35 |
Xcode on steroids
Xcode projects at any scale
A user-friendly language
Project.swift
Make maintaining projects everyone's task by describing them using a plain language. And... no more Git conflicts!
import ProjectDescriptionimport ProjectDescriptionHelperslet project = Project.featureFramework(name: "Home",dependencies: [.project(target: "Features", path: "../Features"),.framework(path: "Carthage/Build/iOS/SnapKit.framework").package(product: "KeychainSwift")])
Features
Developers love simple things
We take care of the complex things — you focus on building great apps
Plain and easy language
Describe your projects as you think about them. Build settings, phases and other intricacies become implementation details.
Reusability
Instead of maintaining multiple Xcode projects, describe your project once, and reuse it everywhere.
Focus
Generated projects are optimized for your focus and productivity. They contain just what you need for the task at hand.
Early errors
If we know your project won’t compile, we fail early. We don't want you *to* waste time waiting for the build system to bubble up errors.
Conventions
Be opinionated about the structure of the projects; define project factories that teams can use to create new projects.
Scale
Tuist is optimized to support projects at scale. Whether your project is 1 target, or 1000, it should make no diffference.
Videos
A video is worth a thousand words
Watch our series of videos that explain different features of Tuist.
Video
Introduction to Tuist
In this video, I give a quick introduction to Tuist. I talk about how to install the tool and bootstrap a new modular app using the init command. Moreover, I show how to use the focus command to generate and open and Xcode project, as well as how to use "tuist edit" to edit the manifest files using Xcode.
Pedro Piñera
Testimonies
You don't need a tooling team
Tuist is already trusted by companies that let us do the heavy-lifting and complex work for them.
Tuist has delivered more than the SoundCloud iOS Collective expected! We aimed to make modularization more accessible and maintainable. We got this... and better build times!. | https://tuist.io/ | CC-MAIN-2020-29 | refinedweb | 339 | 57.98 |
See
Which is fine unless you have an old computer with old applications that still works. For example, a 2002-vintage iMac G4 still works. Slowly.
When someone jumps 11 years to a new iMac, they find that their 2002 iMac with 2007 apps has files which are essentially unreadable by modern applications.
How can someone jump a decade and preserve their content?
1. iWork Pages is cheap. Really. $19.99. I could have used it to convert their files to their new iMac and then told them to ignore the app. Pages can be hard to learn. For someone jumping from 2007-vintage apps, it's probably too much. However, they can use TextEdit once the files are converted to RTF format.
2. iWork for iCloud may be a better idea. But they have to wait a while for it to come out. And they want their files now.
3. Try to write a data extractor.
Here are some places to start.
-
- This appears to have a known bug in chaining through the ETBL resources.
- This project is more notes and examples than useful code.
Documentation on the Appleworks file format does not seem to exist. It's a very weird void, utterly bereft of information.
In the long run $19.99 for a throw-away copy of Pages is probably the smartest solution.
In the long run $19.99 for a throw-away copy of Pages is probably the smartest solution.
However, if you're perhaps deranged, you can track down the content through a simple brute-force analysis of the file. This is Python3 code to scrape the content out of a .CWK file.
import argparse import struct import sys import os from io import open class CWK: """Analyzes a .CWK file; must be given a file opened in "rb" mode. """ DSET = b"DSET" BOBO = b"BOBO" ETBL = b"ETBL" def __init__( self, open_file ): self.the_file= open_file self.data= open_file.read() def header( self ): self.version= self.data[0:4] #print( self.version[:3] ) bobo= self.data[4:8] assert bobo == self.BOBO version_prev= self.data[8:12] #print( version_prev[:3] ) return self.version def margins( self ): self.height_page= struct.unpack( ">h", self.data[30:32] ) self.width_page= struct.unpack( ">h", self.data[32:34] ) self.margin_1= struct.unpack( ">h", self.data[34:36] ) self.margin_2= struct.unpack( ">h", self.data[36:38] ) self.margin_3= struct.unpack( ">h", self.data[38:40] ) self.margin_4= struct.unpack( ">h", self.data[40:42] ) self.margin_5= struct.unpack( ">h", self.data[42:44] ) self.margin_6= struct.unpack( ">h", self.data[44:46] ) self.height_page_inner= struct.unpack( ">h", self.data[46:48] ) self.width_page_inner= struct.unpack( ">h", self.data[48:50] ) def dset_iter( self ): """First DSET appears to have content. This DSET parsing may not be completely correct.
But it finds the first DSET, which includes all of the content except for headers and footers. It seems wrong to simply search for DSET; some part of the resource directory should point to this or provide an offset to it. """ for i in range(len(self.data)-4): if self.data[i:i+4] == self.DSET: #print( "DSET", i, hex(i) ) pos= i+4 for b in range(5): # Really? Always 5? size, count= struct.unpack( ">Ih", self.data[pos:pos+6] ) pos += size+4 #print( self.data[i:pos] ) yield pos def content_iter( self, position ): """A given DSET may have multiple contiguous blocks of text.""" done= False while not done: size= struct.unpack( ">I", self.data[position:position+4] )[0] content= self.data[position+4:position+4+size].decode("MacRoman") #print( "ENDING", repr(self.data[position+4+size-1]) ) if self.data[position+4+size-1] == 0: yield content[:-1] done= True break else: yield content position += size+4
The function invoked from the command line is this.
def convert( *file_list ): for f in file_list: base, ext = os.path.splitext( f ) new_file= base+".txt" print( '"Converting {0} to {1}"'.format(f,new_file) ) with open(f,'rb') as source: cwk= CWK( source ) cwk.header() with open(new_file,'w',encoding="MacRoman") as target: position = next( cwk.dset_iter() ) for content in cwk.content_iter(position): # print( content.encode("ASCII",errors="backslashreplace") ) target.write( content ) atime, mtime = os.path.getatime(f), os.path.getmtime(f) os.utime( new_file, (atime,mtime) )
This is brute-force. But. It seemed to work. Buying Pages would have been less work and probably produced better results.
This does have the advantage of producing files with the original date stamps. Other than that, it seems an exercise in futility because there's so little documentation.
What's potentially cool about this is the sane way that Python3 handles bytes as input. Particularly pleasant is the way we can transform the file-system sequence of bytes into proper Python strings with a very simple bytes.decode(). | http://slott-softwarearchitect.blogspot.com/2013/07/almost-good-idea.html | CC-MAIN-2018-51 | refinedweb | 803 | 71.51 |
Red Hat Bugzilla – Bug 207134
codec lookup fails due to unsafe case conversion
Last modified: 2007-11-30 17:11:43 EST
Description of problem:
Python/codecs.c uses uses tolower() for case normalization in codec lookup. This
doesn't work in turkish locale when normalizing encoding name "ISO-8859-9".
Version-Release number of selected component (if applicable):
$ rpm -q python
python-2.4.3-15.fc6
How reproducible:
always
Steps to Reproduce:
(with tr_TR.UTF-8 locale)
>>> import locale
>>> locale.setlocale(locale.LC_ALL, "")
'tr_TR.UTF-8'
>>> unicode("test", "ISO-8859-9")
Traceback (most recent call last):
File "<stdin>", line 1, in ?
LookupError: unknown encoding: ISO-8859-9
Additional info:
Turkish i18n of fedora tools written in python is almost always broken due to
this bug triggered by rhpl and Iso-8859-9 encoded .po files. I'm using this hack
to make things work:
--- Python-2.4.3/Python/codecs.c.orig 2006-03-28 00:47:54.000000000 +0300
+++ Python-2.4.3/Python/codecs.c 2006-09-19 17:54:30.000000000 +0300
@@ -69,6 +69,8 @@
register char ch = string[i];
if (ch == ' ')
ch = '-';
+ else if (ch == 'I')
+ ch = 'i';
else
ch = tolower(ch);
p[i] = ch;
Better approach is to set locale to 'C' as codecs are in ascii letters anyways,
so it should be something like;
#include <locale.h>
setlocale (LC_ALL, "C");
...
ch = tolower (ch);
(In reply to comment #1)
> Better approach is to set locale to 'C' as codecs are in ascii letters anyways,
Your code is kind of a "walk around" rather than a "work around". It suggests
using C locale instead of turkish :)
Baris means that the tolower() in codecs.c should be in the C locale ... and
personally I'd lean towards doing the tolower() by hand assuming ASCII, thus.
not having to swap locale's. But I'll have to see what upstream says about this.
Also all the examples I see of people using unicode()/encode()/etc. have the
code argument lowered already, like:
unicode("test", "iso-8859-9")
...can you find documentation that the ASCII toupper()'d argument is supposed to
work?
(In reply to comment #3)
> Baris means that the tolower() in codecs.c should be in the C locale ...
i misunderstood then, sorry.
> Also all the examples I see of people using unicode()/encode()/etc. have the
> code argument lowered already, like:
>
> unicode("test", "iso-8859-9")
Uppercase is also used in the wild, as was the case for .po files I mentioned in
the original report. fortunately utf-8 is prefered nowadays.
I guess the installation problems with turkish locale is also due to sqlite
package using ISO-8859-1 in the source file coding declaration.
> ...can you find documentation that the ASCII toupper()'d argument is supposed to
> work?
>
from python-docs, section 4.8.3 Standard Encodings:
> [...] Notice that spelling alternatives that only differ in case or use a
> hyphen instead of an underscore are also valid aliases.
also there are aliases that contain 'I' letter like IBM037 and EBCDIC which will
trigger this bug.
Lowering is troublesome as well, since lower 'I' is 'ı' in Turkish.
Considering manual tolower without changing locale would be much more efficient
in terms of memory as it won't touch locale data, sample code is very trivial:
else if (ch >= 'A' && ch <= 'Z') {
ch += 35; // makes ascii lowercase char
} else {
// not an ascii letter, bork!
}
Sorry,
> else if (ch >= 'A' && ch <= 'Z') {
> ch += 35; // makes ascii lowercase char
That's
ch += 32;
(In reply to comment #6)
> ch += 32;
ch |= 32; /* would be even better :) */
But they might think about adding a utility function, as this is not the only
place with similar error in python code.
This breaks Turkish installs (related to bug 191096) and needs to be fixed for F8.
Fix committed built and requested to be tagged
Tagged for release.
Turkish install from DVD completed and also did a quick verification with the
specific case mentioned in the first omment.
This problem continues in rc2 iso (Oct 30). Was your fix in that iso?
The fix is in python-2.5.1-15.fc8, which AFAIK was built after the rc2 isos were
made. Check the python version in rc2 to be sure.
I've also verified the fix by installing in Turkish from the rc3 iso.
Ok, rc2 has python-2.5.1-14.
Thanks.
I double checked Turkish installation today, with every details that I can think
of. I again confirm that it works.
Thanks everyone.
Regards, Devrim. | https://bugzilla.redhat.com/show_bug.cgi?id=207134 | CC-MAIN-2016-30 | refinedweb | 759 | 66.74 |
Introduction to Xmap debugging
This page is the result of a paragraph write by AndreasHartmann and a discussion on the Lenya ML (subject : [Debugging] see the stream anywhere in a sitemap).
With Lenya/Cocoon XMAP, debugging a complex process with multiple pipelines in multiple XMAPs can be a challenge.
But... Don't worry ! Here you will see 3 different techniques for debugging your Xmap.
1) Cut the stream and see it
When first creating an XMAP, serialize after each generator, aggregator, and transformer and save the XML from the browser to a file. Knowing the results of each process greatly helps with writing the next stage.
You may often discover namespaces in element names that need to be either dropped in the current process or handled by the next process. The saved output provides reference for later development. [1]
TODO : add an example with explanation of matcher and url address that we have to type.
2) Don't cut the stream and see it
After an XMAP is completed, debugging is more difficult. The final results are often combined from several XMAPs. Logging intermediate results to a file is a very good solution. [1]
1 -- Create your own datetime
Datetime is create to prevent overwrites with multiple threads and to provide an history. A disadvantage is that many files are created and must be deleted manually.
A datetime is create because Lenya's default date format is "yyyy-M-dd
mm:ss Z" and most operating systems do not allow colons in filenames.
To configure a new datetime : Open the build\lenya\webapp\WEB-INF\cocoon.xconf Inside the <input-modules> element, add:
<component-instance class="org.apache.cocoon.components.modules.input.DateInputModule" logger="core.modules.input" name="filedatetime> <format>yyyy-MM-d-HH-mm-ss</format> </component-instance>
Restart your Lenya app.
2 -- Create the log to file resource :
This resource groups some transformations that allow you to write the stream.
Copy-paste this in your module's sitemap for a local usage, or in the /lenya/sitemap.xmap for a global usage :
<map:resources> <map:resource <map:transform <map:parameter </map:transform> <map:transform <map:parameter </map:transform> <map:transform </map:resource> </map:resources>
The value in <map:parameter can be configure as you want.
3 -- Do your debug :
In your pipeline just adapt the value of fileprefix and write this :
<!-- DEV DEBUG 1 BEGIN --> <map:call <map:parameter </map:call> <!-- DEV DEBUG 1 END -->
3) Log SAX events as XML
For log this you can use the tee transformer :
<map:transform
Thanks to
Solprovider [1]
-
Thorsten Scherler | http://wiki.apache.org/lenya/DebuggingTips | crawl-002 | refinedweb | 430 | 54.93 |
Ruby Module
Module in Ruby is used to group resources together and avoid namespace collisions.
We can store Ruby resources like classes, functions, data etc in a module.
Using Modules, we resolve naming issues. Example, a Bank module can contain Model class for Bank. an Employee package can contain Model class for Employee etc.
There are two types of modules:
Built-in Modules - Back to topBuilt-in Modules are Modules provided by Ruby.
Examples of a few built in modules are:
Package Name
Purpose
Kernel
Contains several important methods like puts. The Object class includes Kernel module, so every Object will have Kernel module included.
Marshal
Contains functionality to serialize and deserialize Ruby data structures. It can convert Ruby objects to Byte stream. It can also convert the read Byte streams back into the data structure.
Math
Provides functionality for mathematical operations. It defines two mathematical constants, E and PI. It provides functions to calculate sin, cosine, tangent etc.
User defined Modules - Back to topWe can create our own Modules in Ruby using the module statement.
To create classes or files related to Teleporter you can create a root level module as shown below.
module Teleporter end
Similarly if you have another amazing product called Time Machine, you can create another package and store classes or files related to Time Machine in that.
module Timemachine end
Inside the Timemachine module, create a class called TimeMachine Add the following code.
As you can see in the above example, we defined the package as the first line of the program.As you can see in the above example, we defined the package as the first line of the program.
module Timemachine class TimeMachine def goBack100Years() puts "Back to Bullock Carts" end def goForward100Years() puts "Space Elevators! Yay!" end end tm = TimeMachine.new() tm.goBack100Years() tm.goForward100Years() end
The name of the package closely follows the folder structure we created.
We then declared an interface ITimeMachine and implemented the interface in the TimeMachine class.
Next step is to run the file.
Output:
Back to Bullock Carts
Space Elevators! Yay!
Importing Modules:
Sometimes we would need to use classes in other packages.
Rather than tediously writing out the full name of the package, we can include modules using include statement.
For example, if we want to include the TimeMachine class in another class in another module, we can do that as shown below.
include Timemachine
Example given below demonstrates on how to include and use a class in our modules.
include Timemachine tm = TimeMachine.new() tm.goBack100Years() tm.goForward100Years()
Output:
Back to Bullock Carts
Space Elevators! Yay! | https://www.cosmiclearn.com/ruby/packages.php | CC-MAIN-2019-51 | refinedweb | 433 | 57.98 |
I've noticed that some kind of chained class loader would fix the issue and
I already have such mechanism in place - I set it as a thread context class
loader (TCCL). The problem here is OpenEjb sets its internal UrlClassLoader
as TCCL for some reason so multiloader also won't work in this case.
Is there a way to stop OpenEjb from changing the TCCL? Also going back to
my first post isn't it a safer bet to define the class with the internal
loader, after all later it will always be loaded with it?
Thanks for your quick replies!
Best Regards
Borislav
On Mon, Feb 27, 2012 at 2:27 PM, Jean-Baptiste Onofré <jb@nanthrax.net>wrote:
> Hi guys,
>
> A ThreadContextClassLoader could also fix the issue, but it requires some
> change in the code.
>
> Regards
> JB
>
>
> On 02/27/2012 01:24 PM, Romain Manni-Bucau wrote:
>
>> Hi,
>>
>> in tomcat we use this classloader as paretn of the webapp classloader so
>> everything is fine.
>>
>> In OSGi i think a kind of multipleclassloader can fix this issue: try to
>> load the class in openejb application classloader before the bundle
>> classloader itself.
>>
>> - Romain
>>
>>
>> 2012/2/27 Borislav Kapukaranov<b.kapukaranov@**gmail.com<b.kapukaranov@gmail.com>
>> >
>>
>> Hey folks,
>>>
>>> I'm trying to get OpenEJB running on Equinox and it is going fairly well
>>> so
>>> far. :-)
>>> Until I got stucked in the following issue - I have a WebApp that has two
>>> local EJBs (*X* and *Y*) and a Servlet. *X* has annotated field with type
>>> *Y
>>> *. The Servlet has annotated fields for both EJBs.
>>> My web container is GeminiWeb and with the help of my own ObjectFactory
>>> it
>>> handles the Servlet binding and the injection of the EJB's in the Servlet
>>> just fine.
>>> However there is one more injection that is needed - *Y* proxy into *X*.
>>> This is where I got trouble.
>>>
>>> I'm using a bundle that triggers the deployment of the WebApp as an
>>> OpenEJB
>>> module by plugging into Tomcat's mechanics. My bundle calls
>>> *org.apache.openejb.assembler.**DeployerEjb.deploy(String
>>> location)* - this method does the heavy lifting for me by processing the
>>> annotations and binds all EJB's in OpenEjb's internal JNDI so all is fine
>>> here.
>>> When a bean is injected somewhere, a proxy is created for this bean.
>>> In this code snippet from *LocalBeanProxyGeneratorImpl* we can see how a
>>> proxy is generated:
>>>
>>> private Class createProxy(*Class<?> clsToProxy*, String proxyName,
>>> *ClassLoader
>>> cl*) {
>>> String clsName = proxyName.replaceAll("\\.", "/");
>>> try {
>>> return *cl.loadClass(proxyName);*
>>> } catch (Exception e) {}
>>> try {
>>> byte[] proxyBytes = generateProxy(clsToProxy, clsName);
>>> return (Class<?>) defineClass.invoke(unsafe, proxyName,
>>> proxyBytes,
>>> 0, proxyBytes.length, *clsToProxy.getClassLoader()*,
>>> clsToProxy.**getProtectionDomain());
>>> } catch (Exception e) {
>>> throw new InternalError(e.toString());
>>> }
>>> }
>>>
>>> I've highlighted what is important for the OSGi case
>>> *- clsToProxy* is the bean's class - this is loaded and defined by the
>>> WebApp's bundle loader (*clsToProxy.getClassLoader()***).
>>> *- cl* is an UrlClassLoader which is internally created by OpenEjb for
>>> its
>>> own purposes and contains as resources the WebApp's WEB-INF/classes.
>>>
>>> What happens here is that the first time this proxy is created OpenEjb
>>> actually defines it with the WebApp's bundle loader - *
>>> defineClass.invoke(..,**clsToProxy.getClassLoader(),..**.)*. Later when
>>> this
>>> bean is injected again we end up in the same place to create a proxy,
>>> expecting to load it with OpenEjb's internal classloader - *
>>> cl.loadClass(proxyName)* - and since it knows nothing about this proxy we
>>> try to define it again with the WebApp's class loader which results in a
>>> LinkageError.
>>>
>>> Do you know how is this expected to work in OSGi? And would it be better
>>> if
>>> OpenEjb both tried to load and define with the same loader? I admit this
>>> should work outside OSGi, but in OSGi the two loaders are
>>> potentially(almost certainly) different.
>>>
>>> Any help is much appreciated! :-)
>>>
>>> Best Regards
>>> Borislav
>>>
>>>
>>
> --
> Jean-Baptiste Onofré
> jbonofre@apache.org
>
> Talend -
> | http://mail-archives.us.apache.org/mod_mbox/tomee-users/201202.mbox/%3CCAGE-agEcXS=tLQ8PKJXRn4tb+OyXSctga_mxdZWiN+ttWJ1-kA@mail.gmail.com%3E | CC-MAIN-2019-30 | refinedweb | 645 | 55.54 |
...one of the most highly
regarded and expertly designed C++ library projects in the
world. — Herb Sutter and Andrei
Alexandrescu, C++
Coding Standards
The header file 'boost/algorithm/cxx11/all_of.hpp' contains four variants
of a single algorithm,
all_of.
The algorithm tests all the elements of a sequence and returns true if they
all share a property.
The routine
all_of takes
a sequence and a predicate. It will return true if the predicate returns
true when applied to every element in the sequence.
The routine
all_of_equal
takes a sequence and a value. It will return true if every element in the
sequence compares equal to the passed in value.
Both routines come in two forms; the first one takes two iterators to define the range. The second form takes a single range parameter, and uses Boost.Range to traverse it.
The function
all_of returns
true if the predicate returns true for every item in the sequence. There
are two versions; one takes two iterators, and the other takes a range.
namespace boost { namespace algorithm { template<typename InputIterator, typename Predicate> bool all_of ( InputIterator first, InputIterator last, Predicate p ); template<typename Range, typename Predicate> bool all_of ( const Range &r, Predicate p ); }}
The function
all_of_equal
is similar to
all_of, but
instead of taking a predicate to test the elements of the sequence, it takes
a value to compare against.
namespace boost { namespace algorithm { template<typename InputIterator, typename V> bool all_of_equal ( InputIterator first, InputIterator last, V const &val ); template<typename Range, typename V> bool all_of_equal ( const Range &r, V const &val ); }}
Given the container
c containing
{ 0, 1,
2, 3, 14, 15 },
then
bool isOdd ( int i ) { return i % 2 == 1; } bool lessThan10 ( int i ) { return i < 10; } using boost::algorithm; all_of ( c, isOdd ) --> false all_of ( c.begin (), c.end (), lessThan10 ) --> false all_of ( c.begin (), c.begin () + 3, lessThan10 ) --> true all_of ( c.end (), c.end (), isOdd ) --> true // empty range all_of_equal ( c, 3 ) --> false all_of_equal ( c.begin () + 3, c.begin () + 4, 3 ) --> true all_of_equal ( c.begin (), c.begin (), 99 ) --> true // empty range
all_of and
all_of_equal work on all iterators except
output iterators.
All of the variants of
all_of
and
all_of_equal run in
O(N) (linear) time; that is, they compare against each
element in the list once. If any of the comparisons fail, the algorithm will
terminate immediately, without examining the remaining members of the sequence.
All of the variants of
all_of
and
all_of_equal take their
parameters by value or const reference, and do not depend upon any global
state. Therefore, all the routines in this file provide the strong exception
guarantee.
all_ofis also available as part of the C++11 standard.
all_ofand
all_of_equalboth return true for empty ranges, no matter what is passed to test against. When there are no items in the sequence to test, they all satisfy the condition to be tested against.
all true (where
iteris an iterator to each element in the sequence) | http://www.boost.org/doc/libs/1_64_0/libs/algorithm/doc/html/algorithm/CXX11.html | CC-MAIN-2017-43 | refinedweb | 483 | 55.34 |
Using Base Controllers In AngularJS - An Experiment
Yesterday, in a conversation that I was having with Chris Schetter, I was rehashing how my JavaScript style has changed somewhat since I've started using AngularJS. Specifically, I've started using the Revealing Module Pattern a lot more than I ever have before. While many things have contributed to this shift, one feature, or lack thereof, was the inability to use Base Controllers in AngularJS. Since Controllers cannot be generated within Factories, it seemed that Base Controllers could only be provided through global references (which goes against everything that AngularJS stands for). That said, after my conversation yesterday, I wanted to give Base Controllers one more try.
NOTE: This is just an experiment in using Factories to generate Controllers. I am not necessarily advocating this approach - see final paragraph.
In JavaScript, when you invoke a constructor function, the return value of the construtor matters. If you don't return anything, JavaScript will use the instantiated object as the result of the invocation. If, however, you return a value (other than "this") from the constructor function, JavaScript will use that return value as the result of the invocation.
When it comes to AngularJS, we can leverage this language feature to fake the ability to create Controllers inside of Factories. The idea is simple, but a bit confusing. And, after some trial and error, I finally got it to work. What we're going to do is turn our Controller into a factory.
Generally speaking, when you need to create a Controller in AngularJS, the following workflow happens:
AngularJS -> Controller constructor.
In this experiment, however, we're going to turn the Controller constructor into a factory that instantiates our "sub class" controller:
AngularJS -> Controller constructor (factory) -> Controller constructor.
This workflow can happen because the intermediary Controller can return a value other than itself. And, when it does this, AngularJS will ultimately use the return value as the target Controller. And, since the intermediary construtor can use dependency injection, it allows our sub-class controller to also use dependency injection.
In the following proof-of-concept, I am overriding the core "factory" convenience method so that I can use a single point of entry for both service factories and controller factories. This approach requires that I create a special naming convention for Controllers (which aligns nicely with my typical approach). Specifically, you have to give Controllers a dot-delimited namespace that ends in "Controller". For example:
friends.ListController
In the following demo, I have am defining a Base-Controller object; then, I am sub-classing it with a Controller that is used to render the AngularJS view-model.
- <!doctype html>
- <html ng-
- <head>
- <meta charset="utf-8" />
- <title>
- Using Base Controllers In AngularJS
- </title>
- </head>
- <body ng-
- <h1>
- {{ title }}
- </h1>
- <p>
- Foo: {{ foo }}.
- </p>
- <p>
- Bar: {{ bar }}.
- </p>
- <!-- Load jQuery and AngularJS from the CDN. -->
- <script
- type="text/javascript"
-
- </script>
- <script
- type="text/javascript"
-
- </script>
- <script type="text/javascript">
- // Create an application module for our demo.
- var app = angular.module( "Demo", [] );
- // -------------------------------------------------- //
- // -------------------------------------------------- //
- // Set up the Controller-Factory. This is not a feature that
- // AngularJS provides out of the box. As such, we have to
- // jerry-rig our own factory using the core factory.
- (function( core, coreFactory ) {
- // Controllers will be defined by dot-delimited namespaces
- // that end in "Controller" (ex. foo.BarController).
- var pattern = /\.[^.]*?Controller$/i;
- // As the factories are invoked, each will return the
- // constructor for the given Controller; we can cache these
- // so we don't have to keep re-wiring the factories.
- var constructors = {};
- // I proxy the core factory and route the request to either
- // the Controller provider or the underlying factory.
- function factory( name, controllerFactory ) {
- // If the given injectable name is not one of our
- // factories, then just hand it off to the core
- // factory registration.
- if ( ! pattern.test( name ) ) {
- return(
- coreFactory.apply( core, arguments )
- );
- }
- // Register the Controller Factory method as a
- // Controller. Here, we will leverage the fact that
- // the *RETURN* value of the constructor is what is
- // actually being used as the Controller instance.
- core.controller(
- name,
- function( $scope, $injector ) {
- var cacheKey = ( "cache_" + name );
- var Constructor = constructors[ cacheKey ];
- // If the cached constructor hasn't been built
- // yet, invoke the factory and cache the
- // constructor for later use.
- if ( ! Constructor ) {
- Constructor
- = constructors[ cacheKey ]
- = $injector.invoke( controllerFactory )
- ;
- }
- // By returning something other than _this_,
- // we are telling AngularJS to use the following
- // object instance as the Controller instead of
- // the of the current context (ie, the Factory).
- // --
- // NOTE: We have to pass $scope through as an
- // injectable otherwise the Dependency-Injection
- // framework will not know how to create it.
- return(
- $injector.instantiate(
- Constructor,
- {
- "$scope": $scope
- }
- )
- );
- }
- );
- // Return the core to continue method chaining.
- return( core );
- };
- // Overwrite the Angular-provided factory.
- core.factory = factory;
- })( app, app.factory );
- // -------------------------------------------------- //
- // -------------------------------------------------- //
- // Define the base-controller; since this is not a name-spaced
- // controller, it will be routed to the underlying, core
- // factory method.
- app.factory(
- "BaseController",
- function() {
- function BaseController( $scope ) {
- return( this );
- }
- BaseController.prototype = {
- getFoo: function() {
- return( "Foo ( from BaseController )" );
- }
- };
- return( BaseController );
- }
- );
- // -------------------------------------------------- //
- // -------------------------------------------------- //
- // Define the "sub-class" controller; since this is a name-
- // spaced controller, it will be routed to the wrapper factory
- // that will proxy the controller instantiation.
- app.factory(
- "demo.SubController",
- function( BaseController, $document ) {
- function SubController( $scope, $document ) {
- BaseController.call( this, $scope );
- // Store foo/bar for use in the View-model.
- $scope.foo = this.getFoo();
- $scope.bar = this.getBar();
- // Add some other injectables, just to make sure
- // the factory wrapper didn't screw up the
- // dependency-injection framework.
- $scope.title = $document[ 0 ].title;
- }
- // Extend the base controller.
- SubController.prototype = Object.create( BaseController.prototype );
- // Add sub-class methods.
- SubController.prototype.getBar = function() {
- return( "Bar ( from SubController )" );
- };
- // Override base method; decorate the value provided
- // by the super-class.
- SubController.prototype.getFoo = function() {
- return(
- BaseController.prototype.getFoo.call( this ) +
- "( overridden by SubClass )"
- );
- };
- return( SubController );
- }
- );
- </script>
- </body>
- </html>
When I run the above code, I get the following page output:
Using Base Controllers In AngularJS
Foo: Foo ( from BaseController )( overridden by SubClass ).
Bar: Bar ( from SubController ).
As you can see, the "Bar" value was taken directly from the sub-class. The "Foo" value, on the other hand, was taken from the sub-class, but the value provided was ultimately a decorated value from the base-class.
While I was very interested in creating a Base Controller when I first got into AngularJS, now that I've been using AngularJS for about a year, I am not sure that a Base Controller would actually add any value. With the way that $scope is used, your controller typically has to re-create all "public" methods with each Controller instance; as such, the function sharing provided by prototypal inheritance doesn't really get leveraged properly in a Controller context. That said, I am happy that I finally got a proof-of-concept controllers so big and adds more boilerplate code to it.
If you need to share some common code between controllers (some sort of inheritance), why you're not using Services? I think that's the right tool. are needed across controllers, and what are you left with? creating services and sending a reference to the scope to be manipulated there instead. The only other examples i've seen so far are not realistic in that the controller functions are all global and pollute the namespace. It seems like that just like you can do
var module = angular.module('myModule');
and you get a module back, why can't we at least do something like this?
var controller = module.controller('ControllerName');
and when you don't pass the constructor, you get the constructor back?
it's extremely frustrating that there is not a good way using existing angular conventions that there is not a good way to do this.ulating the values in the dropdownlists using ng-model="item.projectId"
Now when I submit the form, its quite obvious that the projectId will
be posted to the server due to the double-binding nature of ng-model
directive.
I want to get the selected (user selects while filling up the form) values collectively (possibly into an array ) and send it to the controller.cs file where there an array to consume those values and
store that in the table. Can anybody give me tips on this?
@Masih,
When I first got into AngularJS, I wanted to use base-controller because that's how I write everything on the server-side; BaseController's group together features that all controllers can use. In an AngularJS context, I did have things that I thought could be shared; specifically, this had to do with handling deferred results. I wanted to create something like:
this.handlePromise resource.get(), callback, errorCallback );
Now, the reason I wanted that to be shared is because I wanted the core "handleDeferred" to actually wrap my callbacks with references to "this". Something like:
wrapper = function(){ callback.apply( self, arguments ); };
This way, the shared controller methods could take care of the binding of methods to Controller instances. Plus, our application has a special Deferred class which extends the core AngularJS $q class to allow for cached data:
Now, with the complexity of creating base-controller, we actually did end up going with services / help classes.
That said, my Controllers are, in many cases, very complex. Granted, I probably put too much logic in them; but, at the same time, my User Interfaces are very complex as well. Lots of states and minor interactions.
I'm sure I can clean them up a bit; but I'm still learning :)
@Kelly,
It is a bit surprising that there's not something a bit more baked-in for it. Especially when you consider that the $scope and the Controller used to be one-in-the-same (at least from what I've read). In that case, Controllers *would* necessitate prototypal inheritance; although, that said, that original design may be why they couldn't allow for it - if they [AngularJS] had to make sure that the Controller inheritance chain worked like the current $scope chain, they *couldn't* allow you to monkey with the inheritance as it would break their workflow.
So, maybe the lack of Controller insight is just a left-over of their original approach?
Hello Sir, did you get an idea of my problem? I am really stuck sir. It would be great if you would help me out. custom factory. Create the service in the usual way:
Inject it the usual way:
If you want to share methods across different templates, add them to the scope in the base controller.
have a new instance of those methods instead of being shared on the prototype.
also another interesting way that I have working code for is in this article:
using injector.invoke allows for different dependency injections in the base controller and the child controller.
for now I am resigned to creating services. I may still write my controllers in a more OO way even though inheritance isn't happening...
How do I do it?
Code: answered you yet. It is better to stay on the subject so it's easier to discuss about specific topics;
I would recommend you to go over Google Groups on AngularJS' group at this address:
The community there is more than helpful and is definitely more of a better place to ask this kind of question.
I hope this helps you in the future
looking for prototypal inheritance across controllers too, and from these articles I understand it's not going to happen; Now, should I use a Factory as described as Phil, or $injector.invoke ? Both methods seems great in their way, despite the lack of having functions on the prototype of the baseController, but what would be the plusses and minuses of wither methods?
Thanks
@Olivier,
> "I am basically looking for prototypal inheritance across controllers too, and from these articles I understand it's not going to happen"
My hack of creating the controller inside a "factory" does give you *true* prototypal inheritance in the Controller instances. That said, it's clearly not the way AngularJS intended controllers to be used (otherwise they would have built it that way).
This is where I really start to get torn between ease-of-use and memory-efficiency. Clearly, using prototypal inheritance is a memory-plus since you don't have to re-create all your Functions every time a controller is instantiated; but, putting all your methods in the controller is just *easier*.
In the long run (before I had this hack), I moved stuff into Services that could be injected into the Controller, and that's been working out ok.
and factory might be the "way to go" indeed; thing is that I need access and manipulate the $scope of my controllers and would have liked not to have to pass the object to the service's method and what not
@Olivier,
Good luck. I'm definitely finding that with AngularJS, you get away with anything on smaller pages; but, when pages grow to an unfortunate size (when a UI isn't so great), you really have to pay attention to where tiny performance hits actually add up.
I've become good friends with the "Time Line" profiler in Chrome Dev Tools and trying to weed out where lots of Paints / Layouts are forced by various Directives.
It's fun, but frustrating to worry about things at that level.
What about writing a controller as a "normal" JavaScript class and then use Angular "as" syntax
This combination allows you to define a controller that inherit from whatever class you want
See my post about it at:
@Ori,
Interesting idea. The biggest objection to it would likely be that you can't use dependency-injection to pass around the Base Controller. This seems to go against the "angular way".
That said, since sub-classing is sufficiently difficult in AngularJS, I've just given in and started using a Function for everything. The truth is, I kind of love it. It means never having to worry about dynamic "this" binding. Everything is done using closed-over variables and it just "works." Of course, I find that you get the occasional memory leak ;) But, thank goodness for Chrome Dev Tools and the memory allocation snap-shot.
With the "Controller As" syntax, I fooled around with trying to get back into sub-classing. But, to me, it just looks awkward at this point.
@Ben did you think of keeping base types defined as angular values? This way DI wouldn't instantiate them and you would have access to the type definition directly. Do you think that could work?
```
app.value("BaseController", BaseController);
BaseController.$inject = [...]
function BaseController() { ... }
BaseController.prototype...
```
And then have a sub controller
```
app.controller("SubController", function(BaseController, $scope) {
SubController.$inject = ["$scope"];
function SubController($scope) {
BaseController.call(this);
...
}
SubController.prototype = Object.create(BaseController.prototype);
return new SubController($scope);
});
```
Could you create subclass controllers that inherit from a base controller using ES6's classes and extends?
I did this with Angular Services, and it worked well.
See here: | http://www.bennadel.com/blog/2521-using-base-controllers-in-angularjs---an-experiment.htm | CC-MAIN-2015-48 | refinedweb | 2,503 | 53.81 |
Search:
Forum
Beginners
Where To Learn?
Where To Learn?
Nov 19, 2012 at 1:48am UTC
JoshuaJ
(7)
Hey everyone, no doubt this has been asked before, but I want to ask it again, this time including some things.
So, I need to know what the best way to learn C++ for free is, I cannot afford to pay for a book, so that is why I need free. I don't want a reference site or something, I need a website that will tell me what does what and how to use it. Don't point me towards Bucky because although he can teach a few things, it bugs me that he says "You don't need to learn this" at least once in every single one of his videos.
I know some basics of programming (I studied Python for a while).
Who can help me?
P.S: if you need any other information, please ask.
Nov 19, 2012 at 5:41am UTC
Meden
(41)
You can see if a local library carries or will send for a textbook for you. That would be free. I am not sure if my library deals with textbooks, but I know if I want something they do not have, they will get it for me without me paying for anything at all.
Nov 19, 2012 at 8:07am UTC
jaded7
(104)
You should have a look at this:
You do not want to try to finish that in 21 days. Read a chapter, learn it, practice it, and then move to the next. If you have any C++ questions while learning I'd be happy to help you out (I'm not a professional, just a uni student, so take my advice with a grain of salt).
Nov 19, 2012 at 4:15pm UTC
JoshuaJ
(7)
I'll try the 21 days thing first, if I still need to learn more, then I'll try a local library :) thanks guys. BTW if i do try the library, are there any specific books you'd recomend?
Nov 19, 2012 at 11:19pm UTC
JoshuaJ
(7)
What's a good IDE? I'm using CODE::Blocks at the moment,is there one I should be using? I've heard DEV is out dated.
Nov 20, 2012 at 6:09am UTC
Meden
(41)
On the book question, I have had a book for a long time which I never gave much attention to until recently. The name of the book is "Starting Out with C++." My edition was written by Tony Gaddis in 2001. The book is riddled with typos and all the main functions are used with void. It is really a terrible book. The greatest thing about it though is that it has nice review questions at the end of every chapter. I think that helps a lot. On the advice of a forum person I purchased two other books at Amazon. "C++ Primer" by Stanley Lippman, Josee Lajoie, Barbara Moo is the first one. The second book is "Accelerated C++" by Andrew Koenig and Barbara Moo. I think Primer is a fantastic book in every way except that it does not have the "Create these programs" in a review section at the end of a chapter like "Starting Out with C++" has. Beyond that though it explains everything very well, but yea, it is more something to read than something to work through. Accelerated C++ is a little better with the exercise inclusion, but I haven't spent much time with it. Primer is such a fantastic book though in my opinion. I think a perfect match would be Primer and something that can follow along with it and suggest programs that you could write which would not be identical in purpose to the examples given in Primer, but that the examples and material covered would allow you to do if you thought about it. This is of course all from my perspective. I am a beginner as well trying to find the best way to learn. Starting Out with C++ is dirt cheap on Amazon, about $0.30, but the other two together cost me about $100. If your library will get you C++ Primer that would be really nice for you.
Nov 20, 2012 at 7:19am UTC
JoshuaJ
(7)
Either that 21 days thing is outdated or something, or my compiler is different? I have to add
using
namespace
std;
after
#include <iostream>
or else it wont work.
Nov 20, 2012 at 10:09am UTC
Moschops
(5981)
Either that 21 days thing is outdated
Sounds it. Namespaces have been part of C++ since 1998.
Nov 20, 2012 at 2:38pm UTC
JoshuaJ
(7)
so should i not read it?
Nov 20, 2012 at 3:59pm UTC
LGonzales
(48)
I'd like to recommend this book since this is the book i'm reading right now and its using codeblocks too.
ISBN: 978-0-470-31735-8
C++ All-in-One For Dummies®, 2nd Edition
Published by
Wiley Publishing, Inc.
111 River Street
Hoboken, NJ 07030-5774
Published by Wiley Publishing, Inc., Indianapolis, Indiana
Published simultaneously in Canada
Nov 20, 2012 at 4:11pm UTC
JoshuaJ
(7)
I'd like to use one of these books because they are free online
which one do you think I should use :)
Nov 20, 2012 at 4:20pm UTC
LGonzales
(48)
JoshuaJ,
if your concern is that its free, why not look no further and look at the free PDF they have on this site?
I use this a lot too just for quick searches because its a PDF.
Nov 20, 2012 at 4:59pm UTC
JoshuaJ
(7)
because the so called "tutorial" on this site is more of a reference than anything.
Nov 20, 2012 at 5:10pm UTC
LGonzales
(48)
good point.
Well, looking at your list of books, i'd have to point out that the book c++ from Bjarne Stroustrup is probably one of the better ones. thats the one that they taught with at my university.
Also, the thinking in C++ is really good.
Nov 20, 2012 at 5:30pm UTC
closed account (
3qX21hU5
)
Well you get what you pay for I would say. If you don't want outdated books or lesser quality books you got two options.
1) Go to your library and try and find a good quality C++ book. If you live by a big city you should be able to find one in stock or if they don't have one in stock they will probably ship one over.
2) Purchasing I know you said you don't have the money for it right now but if you are serious about learning C++ I would save up some money and purchase a one (Some good C++ books can be as low as $30 or so).
So your best bet would probably be to head over to the library and see if you can find a 2 or 3 good books. And also start putting away some money every paycheck to order some books if possible. I started out with 1 book on C++, now I'm up to about 8 ;p. Its nice to have everything you need by you so if you need to look something up boom its right on your bookshelf.
Here is two books that I have read and highly recommend like others here.
- Accelerated C++ - Andrew Koenig & Barbara E Moo - This book is probably the best place to start I would say. Its pretty short, but it uses a very fast paced approach to teaching C++. It doesn't teach you all the basics of C before it jumps into C++. The way they teach C++ in this book makes you feel like you are actually learning stuff that matters. Like for example probably around page 30 or so you are learning things that most other books wait till around page 150-200 to teach you how to do. The only problem with this book is they don't go to much into detail on some things which is where another book to cross reference with comes in handy.
- C++ Primer 4th Edition - Stan Lippmann, Josie Lajoie & Barbara Moo - This would be another book to pick-up with Accelerated C++. It goes into very good detail on every subject in the book, so I use it mainly as a reference for things I don't know. I also used it while reading Accelerated C++ for cross referencing the subjects I couldn't understand.
Last edited on
Nov 20, 2012 at 5:31pm UTC
Nov 20, 2012 at 6:04pm UTC
cnoeval
(605)
You're wasting your time looking for a magic book. Just pick anything and use it to learn the basics. Then work on what's really important... Writing code that challenges you to become a critical thinking, problem solver. No book can teach you that. It just takes time and practice.
Nov 20, 2012 at 6:30pm UTC
closed account (
3qX21hU5
)
I would disagree cnoeval to a certain extent. For beginners usually a good book will help you progress a lot faster and also teach you a lot more. One thing that good programming books have the you cant find in just any book is that they teach you how to go about creating your program.
So if someone just picked anything to learn C++ from yes they will learn how to be proficient in C++, but it will take a lot longer and they will be missing some key skills that a good book can teach them.
Programming for me at least is a good mix of studying from books, references, ect. and getting in their and doing it because yes you wont learn anything if you don't take what you just learned and use it and change it.
Nov 20, 2012 at 6:31pm UTC
LGonzales
(48)
cnoeval is right,
there is no one correct book, we are all bias towards our own learning style.
visual learning , or code by example, etc, we're all different in that respect.
You'll more than likely end up with a several good books that you'll constantly grab for examples and reference from.
PDF format is also great now because you can search the text quickly.
Topic archived. No new replies allowed.
C++
Information
Tutorials
Reference
Articles
Forum
Forum
Beginners
Windows Programming
UNIX/Linux Programming
General C++ Programming
Lounge
Jobs
|
v3.1
Spotted an error? contact us | http://www.cplusplus.com/forum/beginner/85363/ | CC-MAIN-2015-27 | refinedweb | 1,752 | 78.38 |
As my journey continues in learning about working with other developers, I ran into the com. prefix question. A lot of Java people I had asked about why do they do that prefixing of com to their class paths replied, “Because it’s industry common practice.” Hogwash… OOP is industry practice, right? Bleh.
People who say things like that to me quickly turn me off to doing the “common practice”. It’s the same reason my math teacher disliked me in her class in school; I’d always ask “I know we’re solving a problem, but what problem is this equation solving?” I always wanted to know why algebra and geometry was important; where would you use it, and if her answer didn’t suffice, then screw it, why waste time on it?
I’ve grown a tad more mature since then; instead of asking 1 person, I ask many. I know someone’s got the answer, it’s just a matter of finding someone who does, or who does AND is willing to part with it.
I figured since Macromedia doesn’t do it, and I see no point of com, why should I?
One of the developers I work with who is helping me replied:
– in java, if you deploy code bases server-side, you’ll have different packages for different projects and/or server configurations. So, com will be for the site.com, whilst org. will be for the site.org; the open source version.
So, it’s more about an implementation detail made more encapsulated into a high level package path giving it name and purpose vs. “just because everyone else does it.”
Granted, Flash code doesn’t get deployed server-side (but Flex does, and I’ve made Central run as a server…), but it does get deployed client-side. Case in point, if we decided to release an open-source version of our code from here at work, we’d release it as org.roundboxmedia instead of com, since com applies to our commercial products that either we own, or the client does (although, from my understanding, we own most of the code and sell it as various products), whilst org would be us supporting an open source initiative(s).
So:
– name context helps in server deployment
– name context helps in code distribution to various clients
That was enough to sell me!
* B, sorry for the late response; took me awhile to gather my thoughts on this.
Anyone else got other reasons?
3 Replies to “Putting the Com Dot in “com.business””
Not sure if im stating the obvious or not, but i personally use the domain name of my website in package names to ensure that the package name and thus the ‘namespace’ that my classes are executed in are unique and i can therefore safely assume that the code will run without any conflicts with other code in the movie, which ‘may’ be written by other people either in my team, or an open-source library from the web.
‘Why a domain name?’ A domain name not only adds indentity to code if you share it open-source, it also guarantees that the package name is unique, as only one person can own a domain name at any given time. Granted, somebody else could use your domain name as their package name, but a person would only do that if they were purposely trying to cause conflicts with your code.
I believe it was Sun that originally asked people to use their reverse domain name as the namespace for their code when writing Java applets, to solve this problem and i guess it just became a standard from there.
I see no reason, why i shouldn’t use a domain name, so ive never questioned it, can you think of something better that solves the same problem? Random characters maybe? nahhh…
At first, it was our company name:
’roundboxmedia.controls’
Just like:
‘mx.controls’
But, after discussion I see the point; I also see, after your explanation, how a domain name adds 1 extra layer of uniqueness.
I wrote something about that when somebody from the community asked me the reason.
Class naming convention, Reverse domain | https://jessewarden.com/2005/03/putting-the-com-dot-in-combusiness.html | CC-MAIN-2021-04 | refinedweb | 705 | 68.81 |
From: Roland Schwarz (roland.schwarz_at_[hidden])
Date: 2006-09-29 02:20:11
Rene Rivera wrote:
> Boost Inspection Report
> Run Date: 16:03:07 UTC, Thursday 28 September 2006
>
> An inspection program <>
> checks each file in the current Boost CVS for various problems,
> generating this as output. Problems detected include tabs in files,
> missing copyrights, broken URL's, and similar misdemeanors.
[ ... ]
> 38 usages of unnamed namespaces in headers (including .ipp files)
[ ... ]
> |thread|
>.
Can you please give me a hint, how I should correct this?
Thank you
Roland
Boost list run by bdawes at acm.org, gregod at cs.rpi.edu, cpdaniel at pacbell.net, john at johnmaddock.co.uk | https://lists.boost.org/Archives/boost/2006/09/110878.php | CC-MAIN-2021-25 | refinedweb | 109 | 69.38 |
On Wed, Apr 16, 2014 at 07:29:29PM +0200, Andreas Gruenbacher wrote:
> > Btw, I think the man page is wrong - given that the tmpfile is not
> > visible in the namespace it is obviously not created in the directory.
> > The directory passed in is just a handle for the filesystem it should be
> > created in.
>
> I don't agree. If the file is created with O_TMPFILE | O_EXCL, it is clear
> that the file will never be linked into the namespace. Even then, there are
> operations which are affected by the inode permissions and label of the
> anonymous file, and those should still behave reasonably. In this context,
> I would expect them to behave as if the file was actually created in the
> specified directory, not in the file system root or "nowhere" with no clearly
> defined permissions and security label.
So you want to define the files as being in a directory, but not
actually visible? That's defintively a new and strange state to be in.
> > Inheriting any ACL on creating an anonymous file seems utterly wrong.
>
> Why?
Because it has no parent to inherit it from. | http://oss.sgi.com/archives/xfs/2014-04/msg00569.html | CC-MAIN-2017-43 | refinedweb | 188 | 61.26 |
I have stayed with VC (through managed C++ and now C++/CLI) and stayed away from C#. Initially, because I did not want to learn a new language. BUT! the learning curve to C# could not have been more than what I went through to Mangaged C and then onto C++/CLI.
I am currently porting a complex C# control to C++. Hence, I'm seeing more C# than I have previously looked at in detail. And, I am impressed! C# does some things more directly than C++. So... Once again... I am thinking... I should have moved to C# 2 years ago. But... I'm still resistant. I like C++/CLI. I actually like the casts, pointers, and namespace qualifiers -- gives me a more solid feel.
But, I find, that C# is not just a fancy Visual Basic. If someone can give me a good reason to stay with C++/CLI I would like to read it. (I do interface work -- not web related.)
Last | http://forums.codeguru.com/showthread.php?391421-RESOLVED-Why-stay-with-C-and-not-move-to-C&p=1412317 | CC-MAIN-2015-18 | refinedweb | 166 | 85.99 |
Hi,
I would like to reload portion of my application without having to refresh it completely.
In order to do this I use a Ajax request to load the code (classes) and evaluate it.
The problem is that the Ext.define doesn't work if you try to "redefine" the same class.
For instance if I load the following class:
This is working the first time, but if I reload it a second time this is not working.This is working the first time, but if I reload it a second time this is not working.Code:Ext.define('Test.namespace.MyClass', { extend: 'Ext.Base', sayHello: function() { Ext.Msg.alert('say hello'); } });
How could I do to "undefine" or "unload" a class in order to be able to reload it.
Best regards
Daniel | https://www.sencha.com/forum/showthread.php?152033-Dynamic-reload&p=663570&viewfull=1 | CC-MAIN-2017-04 | refinedweb | 133 | 76.22 |
Analyze videos in near real time
This article demonstrates how to perform near real-time analysis on frames that are taken from a live video stream by using the Computer Vision API. The basic elements of such an analysis are:
- Acquiring frames from a video source.
- Selecting which frames to analyze.
- Submitting these frames to the API.
- Consuming each analysis result that's returned from the API call.
The samples in this article are written in C#. To access the code, go to the Video frame analysis sample page on GitHub.
Approaches to running near real-time analysis
You can solve the problem of running near real-time analysis on video streams by using a variety of approaches. This article outlines three of them, in increasing levels of sophistication.
Design an infinite loop
The simplest design for near real-time analysis is an infinite loop. In each iteration of this loop, you grab a frame, analyze it, and then consume the result:
while (true) { Frame f = GrabFrame(); if (ShouldAnalyze(f)) { AnalysisResult r = await Analyze(f); ConsumeResult(r); } }
If your analysis were to consist of a lightweight, client-side algorithm, this approach would be suitable. However, when the analysis occurs in the cloud, the resulting latency means that an API call might take several seconds. During this time, you're not capturing images, and your thread is essentially doing nothing. Your maximum frame rate is limited by the latency of the API calls.
Allow the API calls to run in parallel
Although a simple, single-threaded loop makes sense for a lightweight, client-side algorithm, it doesn't fit well with the latency of a cloud API call. The solution to this problem is to allow the long-running API call to run in parallel with the frame-grabbing. In C#, you could do this by using task-based parallelism. For example, you can run the following code:
while (true) { Frame f = GrabFrame(); if (ShouldAnalyze(f)) { var t = Task.Run(async () => { AnalysisResult r = await Analyze(f); ConsumeResult(r); } } }
With this approach, you launch each analysis in a separate task. The task can run in the background while you continue grabbing new frames. The approach avoids blocking the main thread as you wait for an API call to return. However, the approach can present certain disadvantages:
- It costs you some of the guarantees that the simple version provided. That is, multiple API calls might occur in parallel, and the results might get returned in the wrong order.
- It could also cause multiple threads to enter the ConsumeResult() function simultaneously, which might be dangerous if the function isn't thread-safe.
- Finally, this simple code doesn't keep track of the tasks that get created, so exceptions silently disappear. Thus, you need to add a "consumer" thread that tracks the analysis tasks, raises exceptions, kills long-running tasks, and ensures that the results get consumed in the correct order, one at a time.
Design a producer-consumer system
For your final approach, designing a "producer-consumer" system, you build a producer thread that looks similar to your previously mentioned infinite loop. However, instead of consuming the analysis results as soon as they're available, the producer simply places the tasks in a queue to keep track of them.
// Queue that will contain the API call tasks. var taskQueue = new BlockingCollection<Task<ResultWrapper>>(); // Producer thread. while (true) { // Grab a frame. Frame f = GrabFrame(); // Decide whether to analyze the frame. if (ShouldAnalyze(f)) { // Start a task that will run in parallel with this thread. var analysisTask = Task.Run(async () => { // Put the frame, and the result/exception into a wrapper object. var output = new ResultWrapper(f); try { output.Analysis = await Analyze(f); } catch (Exception e) { output.Exception = e; } return output; } // Push the task onto the queue. taskQueue.Add(analysisTask); } }
You also create a consumer thread, which takes tasks off the queue, waits for them to finish, and either displays the result or raises the exception that was thrown. By using the queue, you can guarantee that the results get consumed one at a time, in the correct order, without limiting the maximum frame rate of the system.
// Consumer thread. while (true) { // Get the oldest task. Task<ResultWrapper> analysisTask = taskQueue.Take(); // Wait until the task is completed. var output = await analysisTask; // Consume the exception or result. if (output.Exception != null) { throw output.Exception; } else { ConsumeResult(output.Analysis); } }
Implement the solution
To help get your app up and running as quickly as possible, we've implemented the system that's described in the preceding section. It's intended to be flexible enough to accommodate many scenarios, while being easy to use. To access the code, go to the Video frame analysis sample page on GitHub.
The library contains the
FrameGrabber class, which implements the previously discussed producer-consumer system to process video frames from a webcam. Users can specify the exact form of the API call, and the class uses events to let the calling code know when a new frame is acquired, or when a new analysis result is available.
To illustrate some of the possibilities, we've provided two sample apps that use the library.
The first sample app is a simple console app that grabs frames from the default webcam and then submits them to the Face service for face detection. A simplified version of the app is reproduced in the following code:
using System; using System.Linq; using Microsoft.Azure.CognitiveServices.Vision.Face; using Microsoft.Azure.CognitiveServices.Vision.Face.Models; using VideoFrameAnalyzer; namespace BasicConsoleSample { internal class Program { const string ApiKey = "<your API key>"; const string Endpoint = "https://<your API region>.api.cognitive.microsoft.com"; private static async Task Main(string[] args) { // Create grabber. FrameGrabber<DetectedFace[]> grabber = new FrameGrabber<DetectedFace[]>(); // Create Face Client. FaceClient faceClient = new FaceClient(new ApiKeyServiceClientCredentials(ApiKey)) { Endpoint = Endpoint }; // Set up a listener for when we acquire a new frame. grabber.NewFrameProvided += (s, e) => { Console.WriteLine($"New frame acquired at {e.Frame.Metadata.Timestamp}"); }; // Set up a Face API call. grabber.AnalysisFunction = async frame => { Console.WriteLine($"Submitting frame acquired at {frame.Metadata.Timestamp}"); // Encode image and submit to Face service. return (await faceClient.Face.DetectWithStreamAsync(frame.Image.ToMemoryStream(".jpg"))).ToArray(); }; // Set up a listener for when we receive a new result from an API call. grabber.NewResultAvailable += (s, e) => { if (e.TimedOut) Console.WriteLine("API call timed out."); else if (e.Exception != null) Console.WriteLine("API call threw an exception."); else Console.WriteLine($"New result received for frame acquired at {e.Frame.Metadata.Timestamp}. {e.Analysis.Length} faces detected"); }; // Tell grabber when to call the API. // See also TriggerAnalysisOnPredicate grabber.TriggerAnalysisOnInterval(TimeSpan.FromMilliseconds(3000)); // Start running in the background. await grabber.StartProcessingCameraAsync(); // Wait for key press to stop. Console.WriteLine("Press any key to stop..."); Console.ReadKey(); // Stop, blocking until done. await grabber.StopProcessingAsync(); } } }
The second sample app is a bit more interesting. It allows you to choose which API to call on the video frames. On the left side, the app shows a preview of the live video. On the right, it overlays the most recent API result on the corresponding frame.
In most modes, there's a visible delay between the live video on the left and the visualized analysis on the right. This delay is the time that it takes to make the API call. An exception is in the "EmotionsWithClientFaceDetect" mode, which performs face detection locally on the client computer by using OpenCV before it submits any images to Azure Cognitive Services.
By using this approach, you can visualize the detected face immediately. You can then update the emotions later, after the API call returns. This demonstrates the possibility of a "hybrid" approach. That is, some simple processing can be performed on the client, and then Cognitive Services APIs can be used to augment this processing with more advanced analysis when necessary.
Integrate the samples into your codebase
To get started with this sample, do the following:
- Create an Azure account. If you already have one, you can skip to the next step.
- Create resources for Computer Vision and Face in the Azure portal to get your key and endpoint. Make sure to select the free tier (F0) during setup.
- Computer Vision
- Face After the resources are deployed, click Go to resource to collect your key and endpoint for each resource.
- Clone the Cognitive-Samples-VideoFrameAnalysis GitHub repo.
- Open the sample in Visual Studio 2015 or later, and then build and run the sample applications:
- For BasicConsoleSample, the Face key is hard-coded directly in BasicConsoleSample/Program.cs.
- For LiveCameraSample, enter the keys in the Settings pane of the app. The keys are persisted across sessions as user data.
When you're ready to integrate the samples, reference the VideoFrameAnalyzer library from your own projects.
The image-, voice-, video-, and text-understanding capabilities of VideoFrameAnalyzer use Azure Cognitive Services. Microsoft receives the images, audio, video, and other data that you upload (via this app) and might use them for service-improvement purposes. We ask for your help in protecting the people whose data your app sends to Azure Cognitive Services.
Summary
In this article, you learned how to run near real-time analysis on live video streams by using the Face and Computer Vision services. You also learned how you can use our sample code to get started.
Feel free to provide feedback and suggestions in the GitHub repository. To provide broader API feedback, go to our UserVoice site. | https://docs.microsoft.com/en-us/azure/cognitive-services/computer-vision/vision-api-how-to-topics/howtoanalyzevideo_vision | CC-MAIN-2020-29 | refinedweb | 1,571 | 57.77 |
#include <shaderNode.h>
A specialized version of
NdrNode which holds shading information.
Definition at line 81 of file shaderNode.h.
Constructor.
The list of string input properties whose values provide the names of additional primvars consumed by this node. For example, this may return a token named
varname. This indicates that the client should query the value of a (presumed to be string-valued) input attribute named varname from its scene description to determine the name of a primvar the node will consume. See
GetPrimvars() for additional information.
Definition at line 186 of file shaderNode.h.
Gets all vstructs that are present in the shader.
Returns the list of all inputs that are tagged as asset identifier inputs.
The category assigned to this node, if any. Distinct from the family returned from
GetFamily().
Definition at line 143 of file shaderNode.h.
Returns the first shader input that is tagged as the default input. A default input and its value can be used to acquire a fallback value for a node when the node is considered 'disabled' or otherwise incapable of producing an output value.
The departments this node is associated with, if any.
Definition at line 160 of file shaderNode.h.
The help message assigned to this node, if any.
Returns the implementation name of this node. The name of the node is how to refer to the node in shader networks. The label is how to present this node to users. The implementation name is the name of the function (or something) this node represents in the implementation. Any client using the implementation must call this method to get the correct name; using
getName() is not correct.
The label assigned to this node, if any. Distinct from the name returned from
GetName(). In the context of a UI, the label value might be used as the display name for the node instead of the name.
Definition at line 138 of file shaderNode.h.
Gets the pages on which the node's properties reside (an aggregate of the unique
SdrShaderProperty::GetPage() values for all of the node's properties). Nodes themselves do not reside on pages. In an example scenario, properties might be divided into two pages, 'Simple' and 'Advanced'.
Definition at line 168 of file shaderNode.h.
The list of primvars this node knows it requires / uses. For example, a shader node may require the 'normals' primvar to function correctly. Additional, user specified primvars may have been authored on the node. These can be queried via
GetAdditionalPrimvarProperties(). Together,
GetPrimvars() and
GetAdditionalPrimvarProperties(), provide the complete list of primvar requirements for the node.
Definition at line 177 of file shaderNode.h.
Gets the names of the properties on a certain page (one that was returned by
GetPages()). To get properties that are not assigned to a page, an empty string can be used for
pageName.
Returns the role of this node. This is used to annotate the role that the shader node plays inside a shader network. We can tag certain shaders to indicate their role within a shading network. We currently tag primvar reading nodes, texture reading nodes and nodes that access volume fields (like extinction or scattering). This is done to identify resources used by a shading network.
Get a shader input property by name.
nullptr is returned if an input with the given name does not exist.
Get a shader output property by name.
nullptr is returned if an output with the given name does not exist.
Definition at line 239 of file shaderNode.h.
Definition at line 240 of file shaderNode.h.
Definition at line 238 of file shaderNode.h.
Definition at line 241 of file shaderNode.h.
Definition at line 235 of file shaderNode.h.
Definition at line 234 of file shaderNode.h.
Definition at line 243 of file shaderNode.h.
Definition at line 244 of file shaderNode.h. | https://www.sidefx.com/docs/hdk/class_sdr_shader_node.html | CC-MAIN-2021-17 | refinedweb | 643 | 68.47 |
Name | Synopsis | Interface Level | Parameters | Description | Return Values | Context | See Also
#include <sys/ddi.h> #include <sys/sunddi.h> #include <sys/signal.h> void *proc_ref(void)
voidproc_unref(void *pref);
int proc_signal(void *pref, int sig);
Solaris DDI specific (Solaris DDI).
A handle for the process to be signalled.
Signal number to be sent to the process.
This set of routines allows a driver to send a signal to a process. The routine proc_ref() is used to retrieve an unambiguous reference to the process for signalling purposes. The return value can be used as a unique handle on the process, even if the process dies. Because system resources are committed to a process reference, proc_unref() should be used to remove it as soon as it is no longer needed.proc_signal() is used to send signal sig to the referenced process. The following set of signals may be sent to a process from a driver:
The device has been disconnected.
The interrupt character has been received.
The quit character has been received.
A pollable event has occurred.
Kill the process (cannot be caught or ignored).
Window size change.
Urgent data are available.
See signal.h(3HEAD) for more details on the meaning of these signals.
If the process has exited at the time the signal was sent, proc_signal() returns an error code; the caller should remove the reference on the process by calling proc_unref().
The driver writer must ensure that for each call made to proc_ref(), there is exactly one corresponding call to proc_unref().
The proc_ref() returns the following:
An opaque handle used to refer to the current process.
The proc_signal() returns the following:
The process existed before the signal was sent.
The process no longer exists; no signal was sent.
The proc_unref() and proc_signal() functions can be called from user, interrupt, or kernel context. The proc_ref() function should be called only from user context.
signal.h(3HEAD), putnextctl1(9F)
Name | Synopsis | Interface Level | Parameters | Description | Return Values | Context | See Also | http://docs.oracle.com/cd/E19253-01/816-5180/6mbbf02lk/index.html | CC-MAIN-2015-18 | refinedweb | 329 | 68.16 |
Talk:Proposed features/direction
Contents
Exact definitions needed
This proposal should try to provide the precise meaning of "direction" for common examples, as people might intuitively interpret this differently (e.g. "facing" vs "travel direction" for signs). If I understand the intention correctly, then the tag means "facing" for every object that has a "front" or door, such as billboards, signs, benches, vending machines, telephone booths, ...? For the more exotic meanings such as planting direction, which are not at all obvious, we should maybe even consider to use a more expressive key like planting_direction. --Tordanik 12:35, 10 August 2011 (BST)
heading not direction
"heading" is a more precise term than "direction". Brycenesbitt
- I agree. But direction is already widely used, I'm just documenting it to be added to the wiki. --Zverik 06:19, 25 August 2011 (BST)
Fixed angles.svg
As Fkv has noted in the voting section (thanks!), Image:angles.svg had typos for some of the compass directions. I've corrected them and uploaded a new image revision. --Tordanik 15:47, 5 September 2011 (BST)
direction=forward/backward
the proposal says that it doesn't interfere with the values forward/backward. i disagree. most of your values describe the physical orientation of an element, while forward/backward describe an access restriction. throwing those in the same pot by using the same tag is not a good idea. my suggestion, as i mentioned on a couple of other talk pages already, would be to use the access namespace. by using access.direction=forward/both/backward you could describe the access restriction, while direction (without a namespace) would describe the physical orientation of an element. --Flaimo 21:35, 8 September 2011 (BST)
"cardinal directions"
The example "NE" and the compass rose drawing make it clear that intercardinal/ordinal directions, as well as further divisions, are possible values. However, the text only mentions "cardinal directions". As far as I know, this term refers - strictly speaking - only to the 4 main directions. Can we rephrase this (either now or in the final documentation) to e.g. "An sequence of upper-case latin characters from [NWSE], meaning one of the 8 cardinal and intercardinal directions or their 8 direct subdivisions"? --Tordanik 22:15, 8 September 2011 (BST)
- Yes, if it would be more correct. I copied those terms from the previous version, but don't really understand the difference between ordinal, cardinal and intercardinal (having not spent time on thoroughly studying the wiki on directions). So I'd be glad for any help clarifying the description — but in the final documentation, since it's not good to change the proposal that's been in voting for several weeks. --Zverik 07:48, 9 September 2011 (BST)
Directions possible when passing a turnstile barrier
I am thinking of turnstiles. E.g., many public parks or pools in Germany have few entrance gates (where you have to pay), but some more turnstiles so people have a shorter way out.
I propose to use "direction=out" on such turnstile barriers.
Gpermant 10:06, 17 February 2012 (UTC)
- Wouldn't entrance=exit be appropriate for that situation? --Tordanik 13:02, 17 February 2012 (UTC)
- No, because the entrance or exit tag is on a node, and routing does not natively check nodes. And additionally, you would have to define "inside" and "outside". The best way is to tag the way oneway.
--Lulu-Ann 15:03, 17 February 2012 (UTC) | https://wiki.openstreetmap.org/wiki/Talk:Proposed_features/direction | CC-MAIN-2018-26 | refinedweb | 573 | 55.03 |
Recent:
Archives:
Google and its MapReduce framework may rule the roost when it comes to massive-scale data processing, but there's still plenty
of that goodness to go around. This article gets you started with Hadoop, the open source MapReduce implementation for processing
large data sets. Authors Ravi Shankar and Govindu Narendra first demonstrate the powerful combination of
map and
reduce in a simple Java program, then walk you through a more complex data-processing application based on Hadoop. Finally, they
show you how to install and deploy your application in both standalone mode and clustering mode.
Are you amazed by the fast response you get while searching the Web with Google or Yahoo? Have you ever wondered how these services manage to search millions of pages and return your results in milliseconds or less? The algorithms that drive both of these major-league search services originated with Google's MapReduce framework. While MapReduce is proprietary technology, the Apache Foundation has implemented its own open source map-reduce framework, called Hadoop. Hadoop is used by Yahoo and many other services whose success is based on processing massive amounts of data. In this article we'll help you discover whether it might also be a good solution for your distributed data processing needs.
We'll start with an overview of MapReduce, followed by a couple of Java programs that demonstrate the simplicity and power of the framework. We'll then introduce you to Hadoop's MapReduce implementation and walk through a complex application that searches a huge log file for a specific string. Finally, we'll show you how to install Hadoop in a Microsoft Windows environment and deploy the application -- first as a standalone application and then in clustering mode.
You won't be an expert in all things Hadoop when you're done reading this article, but you will have enough material to explore and possibly implement Hadoop for your own large-scale data-processing requirements.
MapReduce is a programming model specifically implemented for processing large data sets. The model was developed by Jeffrey
Dean and Sanjay Ghemawat at Google (see "MapReduce: Simplified data processing on large clusters"). At its core, MapReduce is a combination of two functions --
map() and
reduce(), as its name would suggest.
A quick look at a sample Java program will help you get your bearings in MapReduce. This application implements a very simple
version of the MapReduce framework, but isn't built on Hadoop. The simple, abstracted program will illustrate the core parts
of the MapReduce framework and the terminology associated with it. The application creates some strings, counts the number
of characters in each string, and finally sums them up to show the total number of characters altogether. Listing 1 contains
the program's
Main class.
public class Main { public static void main(String[] args) { MyMapReduce my = new MyMapReduce(); my.init(); } }
Listing 1 just instantiates a class called
MyMapReduce, which is shown in Listing 2.
import java.util.*; public class MyMapReduce ...
Download complete Listing 2
As you see, the crux of the class lies in just four functions:
init()method creates some dummy data (just 30 strings). This data serves as the input data for the program. Note that in the real world, this input could be gigabytes, terabytes, or petabytes of data!
step1ConvertIntoBuckets()method segments the input data. In this example, the data is divided into six smaller chunks and put inside an
ArrayListnamed
buckets. You can see that the method takes a list, which contains all of the input data, and another
intvalue,
numberOfBuckets. This value has been hardcoded to five; if you divide 30 strings into five buckets, each bucket will have six strings each. Each bucket in turn is represented as an
ArrayList. These array lists are put finally into another list and returned. So, at the end of the function, you have an array list with five buckets (array lists) of six strings each.
step2RunMapFunctionForAllBuckets()is the next method invoked from
init(). This method internally creates five threads (because there are five buckets -- the idea is to start a thread for each bucket). The class responsible for threading is
StartThread, which is implemented as an inner class. Each thread processes each bucket and puts the individual result in another array list named
intermediateresults. All the computation and threading takes place within the same JVM, and the whole process runs on a single machine.
step3RunReduceFunctionForAllBuckets()method collates the results from
intermediateresults, sums it up, and gives you the final output.
intermediateresultsneeds to combine the results from the parallel processing explained in the previous bullet point. The exciting part is that this process also can happen concurrently!
More
ResponseBy ravishankar on November 20, 2008, 5:55 pmHi, There are two things - a job tracker and and a nodetracker(Namenode and datanodes).The job tracker knows when a node completes the work. Usually the Map Reduce...
Reply | Read entire comment
Query:By Anonymous on October 14, 2008, 11:26 pmThe last sentence: Note that intermediateresults needs to combine the results from the parallel processing explained in the previous bullet point. The exciting...
Reply | Read entire comment
View all comments | http://www.javaworld.com/javaworld/jw-09-2008/jw-09-hadoop.html | crawl-002 | refinedweb | 859 | 53.92 |
Projects/KHTML
In-progress bugfixes
- SadEagle:bug #169988 --- focus/blur on all elements, head/body parsing #?????, multiple problems with onchange (bug #170451, many others)
- head/body parsing : bug #170694 ?
Acid3 stuff
See Projects/Acid3
Major targets
- SVG --- vtokarev
- DOM namespace changes --- vtokarev --- done (in trunk)
- Class attribute hashconsing for improved css selectors performance --- vtokarev
- Implementation that we have currently is significantly slower on specific test case (probably less on real-life pages). Anyway, it should improve performance (maybe even memory usage), with small drawback on complexity
-.
- I analyzed this some more, and it's very hard to fix; basically the whole restoreState on multiple frames thing is broken, as it tries to restore kids independently of parents, and each may have <script> fragments, etc. The code likely needs to be reworked to do deferred application of saved info such as scroll position, etc., once loaded. -Maks
This page was last edited on 19 November 2008, at 19:33. Content is available under Creative Commons License SA 4.0 unless otherwise noted. | https://techbase.kde.org/Projects/KHTML | CC-MAIN-2020-40 | refinedweb | 171 | 53.41 |
Member
1 Points
All-Star
43201 Points
Jun 07, 2019 01:05 PM|mgebhard|LINK
AlenpeteraarksLoading data in view by calling webapi and jquery? Between these two which one is better to load data ?
Unclear.
jQuery runs in the browser. jQuery has an AJAX function that can call a Web API URL then update the current page (DOM) with the data returned from Web API.
A View is a construct in MVC that runs on a web server.
Member
1 Points
All-Star
43201 Points
All-Star
190257 Points
Moderator
Jun 07, 2019 01:24 PM|Mikesdotnetting|LINK
If you are asking whether it is better to render the HTML on the server or the client, that really depends. There is no way anyone else can give you a definitive answer without knowing everything about your application, database, the specific workflow you are addressing etc. You should try both approaches and see which one performs better in your scenario.
Participant
770 Points
Jun 07, 2019 01:25 PM|AddWeb Solution|LINK
Hello Alenpeteraarks,
As you described , you want to load data on page using ajax and through Webapi. In MVC project to load data using AJAX? Where you get problem ? Define more . What you want ?
Jun 08, 2019 04:27 AM|yogyogi|LINK
AlenpeteraarksLoading data in view by calling webapi and jquery? Between these two which one is better to load data ?
I want to start that you will need both jQuery and Web API to load data on the View. That is both will work together to achieve this feature. Kindly refer this tutorial for understanding how to achieve it.
Contributor
3660 Points
Jun 10, 2019 07:42 AM|Yuki Tao|LINK
Hi Alenpeteraarks,
AlenpeteraarksBut i want to load data onnpage using direct webapi and through ajax.
For example by AJAX:
you could call ajax in Initialization function:
<script type="text/javascript" src=""></script> <script type="text/javascript"> $(function () { $("#btnGet").click(function () { var person = '{Name: "' + $("#txtName").val() + '" }'; $.ajax({ type: "POST", url: "/api/yourcontroller/youraction",//if in default route in webapi data: person, contentType: "application/json; charset=utf-8", dataType: "json", success: function (response) { alert("Hello: " + response.Name + ".\nCurrent Date and Time: " + response.DateTime); } }); }); }); </script>
in controller:
public class xxxController : ApiController { [Route("api/yourcontroller/youraction")] [HttpPost] public PersonModel youraction(PersonModel person) { person.DateTime = DateTime.Now.ToString(); return person; } }
More details,You could refer to this article about how to make a jQuery POST call to Web API 2 Controller’s method using jQuery AJAX in ASP.Net MVC Razor.
In additian,you could also Consume Web API in .NET using HttpClient,
HttpClient sends a request to the Web API and receives a response.
We then need to convert response data that came from Web API to a model and then render it into a view.
for example:
public ActionResult Index() { IEnumerable; } else //web api sent error response { //log response status here.. students = Enumerable.Empty<StudentViewModel>(); ModelState.AddModelError(string.Empty, "Server error. Please contact administrator."); } } return View(students); }
More details,You could refer to tutorial about how to Consume Rest api from MVC:
Best Regards.
Yuki Tao
7 replies
Last post Jun 10, 2019 07:42 AM by Yuki Tao | https://forums.asp.net/t/2156381.aspx?Load+data+in+view+ | CC-MAIN-2019-47 | refinedweb | 529 | 56.86 |
By Jonathan Lurie and Timothy Stockstill
Microsoft’s Visual Studio .NET has introduced many new concepts to the Visual Studio developer, including the Microsoft Intermediate Language (MSIL) with runtime compilation, garbage collection, Common Language Runtime (CLR), and, perhaps most misunderstood of all: namespaces and assemblies. To help you develop a better understanding of the .NET environment, this article will explore namespaces and assemblies and clarify the relationship between them.
The namespace
At first glance, it appears as if namespaces represent little more than the C++ include directive or the addition of a VB module to a project. But the concept of namespaces and assemblies with the C# using directive and the VB Imports statement in Visual Studio .NET extends beyond the inclusion of predefined header files. They represent a method of interacting with external code libraries that may be new to the Microsoft developer.
Put simply, a namespace is just a grouping of related classes. It's a method of putting classes inside a container so that they can be clearly distinguished from other classes with the same name. Programmers skilled in the Java language will recognize namespaces as packages. A namespace is a logical grouping rather than a physical grouping. The physical grouping is accomplished by an assembly, which equates most directly to a dynamic link library (DLL), COM object, or OCX module.
The .NET CLR consists of multiple namespaces, which are spread across many assemblies. For example, ADO.NET is the set of classes located in the System.Data namespace, and ASP.NET is the set of classes located in the System.Web namespace.
Figure A shows how classes are divided up in the namespaces that compose the .NET CLR. Each block represents a separate namespace. In the CLR, the classes and structures contained in each of the namespaces represent a common theme of development responsibility. Further, the namespace system can be hierarchical, allowing for compartmentalization of functionality inside of a parent namespace. For example, the System namespace is the root for all other namespaces.
It is important to understand that a namespace does not always equate to a functional library, such as in a traditional DLL, COM object, or OCX. In the .NET environment, namespaces represent the logical organization of code but not necessarily the physical location of the code.
Assemblies
In Visual Studio .NET, code is physically organized and structured in assemblies. An assembly is almost identical to a DLL, although one assembly consists of one or more DLL or EXEs. In fact, you will have a hard time finding references to the term DLL—it seems that Microsoft is keen to put the term into the annals of computer science history. Those with Java skills will recognize the assembly as a Jar file.
An assembly is a binary file that contains .NET executable code complied into MSIL. The relationship that exists between assemblies and namespaces can be rather confusing, since one assembly can contain one or more namespaces, and one namespace can be contained by one or more assemblies. Each assembly can be configured with a root namespace. If the root namespace value is left blank, the root namespace value defaults to the assembly name. You can set the root namespace through the project properties dialog box, as shown in Figure B.
Assemblies provide many benefits over traditional DLL, OCX, or COM files. The following are just some of the advantages offered by the assembly design:
- Assemblies form .NET’s security boundary, which, with the assistance of the computer's security policy, can specify what actions can be performed.
- Assemblies introduce a cleaner method of object versioning than that presented by COM. The assembly naming and versioning system allows multiple objects to share the same name while still allowing an application to identify the correct object without the necessity of resorting to GUIDS.
- Assemblies provide for side-by-side (SxS) execution. With SxS execution, two applications (or the same application, for that matter) can instantiate two versions of the same object.
- While standard assemblies can be registered and placed in the global assembly cache, an assembly does not have to be registered to be useful. Unlike COM objects, which required the presence of entries in the NT/9X registry, the presence of an assembly in your application's directory is sufficient for it to be recognized by your application. This permits the development of simple applications that can be executed without an install process. This could mark the end of DLL hell and result in the return to the era of xCopy deployments. All you crackers will be pirating software for years to come!
Using classes
Suppose you have a class called Manager residing in the Employee namespace, which resides in the Company.dll assembly. To use the class, you must first reference the assembly using the References section of the Solution Explorer. For Java programmers, this is analogous to placing the Jar file in the class path. This tells the project where to find the physical file. Once a reference is in place, you can use the imports Employee (VB) or using Employee (C#) directive to make use of the namespace. Imports can be done on a file basis or on a project basis. You should use the latter when you will use the namespace in a majority of the files in your project. For example, it is likely that you will use System.data in many of your files, so a project-level import for this namespace would be appropriate. Again, use the project properties dialog box to configure project level imports.
Conclusion
Although the relationship between namespaces and assemblies may be difficult to grasp at first, you must remember only that namespaces represent the logical view of your object model, and assemblies represent the physical deployment. Namespaces provide a hierarchical response to the difficulties of object identification and location. They remove many of the ambiguities found with other forms of object referencing and simplify library design by allowing related objects to reside together.
Through the use of assembly management, you can easily control versioning, security, and other aspects of library deployment that traditionally had to be performed by hand. Furthermore, the SxS assembly version enables hot swapping of loaded modules, not to mention simplified deployment. Welcome back xCopy, we’ve missed you. With all the advantages that the assembly/namespace pair offers, you may find yourself wondering how you ever got along without them. | http://www.techrepublic.com/article/whats-in-a-namespace/ | CC-MAIN-2017-30 | refinedweb | 1,072 | 55.03 |
.4
** Emacs can be compiled with POSIX ACL support.
This happens by default if a suitable support library is found at
build time, like libacl on GNU/Linux. To prevent this, use the
configure option `--without-acl'.
* Startup Changes in Emacs 24.4
* Changes in Emacs 24.4
+++
** .
+++
** .
** New option `scroll-bar-adjust-thumb-portion'.
* Editing Changes in Emacs 24.4
** New commands `toggle-frame-fullscreen' and `toggle-frame-maximized',
bound to <f11> and M-<f10>, respectively.
* Changes in Specialized Modes and Packages in Emacs 24.4
**.
** cl-lib
*** New macro cl-tagbody.
+++
*** letf is now just an alias for cl-letf.
**.
** ERC
*** New option `erc-accidental-paste-threshold-seconds'.
If set to a number, this can be used to avoid accidentally paste large
amounts of data into the ERC input.
** Icomplete is a bit more like IDO.
*** key bindings to navigate through and select the completions.
*** The icomplete-separator is customizable, and its default has changed.
*** Removed icomplete-show-key-bindings.
** Image mode
---
***.
** Isearch
*** `C-x 8 RET' in Isearch mode reads a character by its Unicode name
and adds it to the search string.
** MH-E has been updated to MH-E version 8 two types of operation:
when its arg ADJACENT is non-nil (when called interactively with C-u C-u)
it works like the utility `uniq'. Otherwise by default it deletes
duplicate lines everywhere in the region without regard to adjacency.
** Tramp
+++
*** New connection method "adb", which allows to access Android
devices by the Android Debug Bridge. The variable `tramp-adb-sdk-dir'
must be set to the Android SDK installation directory.
+++
*** Handlers for `file-acl' and `set-file-acl' for remote machines
which support POSIX ACLs.
** Woman
*** The commands `woman-default-faces' and `woman-monochrome-faces'
are obsolete. Customize the `woman-* faces instead.
** Obsolete packages:
*** longlines.el is obsolete; use visual-line-mode instead.
*** terminal.el is obsolete; use term.el instead.
* New Modes and Packages in Emacs 24.4
** New nadvice.el package offering.
* Incompatible Lisp Changes in Emacs 24.4
**'.
* Lisp changes in Emacs 24.4
** Support for filesystem notifications.
Emacs now supports notifications of filesystem changes, such as
creation, modification, and deletion of files. This requires the
'inotify' API on GNU/Linux systems. On MS-Windows systems, this is
supported for Windows XP and newer versions.
** Face changes
*** The in Emacs 24.4 on non-free operating systems
+++
** The "generate a backtrace on fatal error" feature now works on MS Windows.
The backtrace is written to the 'emacs_backtrace.txt' file in the
directory where Emacs was running.
* Installation Changes in Emacs 24.3
** The default X toolkit is now Gtk+ version 3.
If you don't pass `--with-x-toolkit' to configure, or if you use
`- about possibly-questionable C code. On a recent GNU system there
should be no warnings; on older and on non-GNU systems the generated
warnings may be useful.
**' and `vcdiff' have been removed
(from the bin and libexec directories, respectively). The former is
no longer relevant, the latter is replaced by lisp (in vc-sccs.el).
*' to nil.
*** `C-h f' now reports previously-autoloaded functions as "autoloaded",
even after their associated libraries have been loaded (and the
autoloads have been redefined as functions).
**.
*** Setting `imagemagick-types-inhibit' to t now disables the use of
ImageMagick to view images. (You must call `imagemagick-register-types'
afterwards if you do not use customize to change this.)
*** The new variable `imagemagick-enabled-types' also affects which
ImageMagick types are treated as images. The function
`imagemagick-filter-types' returns the list of types that will be
treated as images.
**.
** Internationalization
*** New language environment: Persian.
*** New input method `vietnamese-vni'.
** Nextstep (GNUstep / Mac OS X) port
*** Support for fullscreen and the frame parameter fullscreen.
***.)
***)
*** CL's main entry is now (require 'cl-lib).
`cl-lib' is like the old `cl' except that it uses the namespace cleanly;
i.e., all its definitions have the "cl-" prefix (and internal definitions
use the "cl--" prefix).
If `cl' provided a feature under the name `foo', then `cl-lib'
provides it under the name `cl-foo' instead; with the exceptions of the
few ")
** Diff mode
*** Changes are now highlighted using the same color scheme as in
modern VCSes. Deletions are displayed in red (new faces
`diff-refine-removed' and `smerge-refined-removed', and new definition
of `diff-removed'), insertions in green (new faces `diff-refine-added'
and `smerge-refined-added', and new definition of `diff-added').
*** The variable `diff-use-changed-face' defines whether to use the
face `diff-changed', or `diff-removed' and `diff-added' to highlight
changes in context diffs.
*** The new command `diff', and `dired-do-touch' yanks the attributes of the
file at point.
*** When the region is active, `m' (`dired-mark'), `u' (`dired-unmark'),
`DEL' (`dired-unmark-backward'), and .
** Compile has a new option `compilation-always-kill'.
** Customize
*** `custom-reset-button-menu' now defaults to t.
*** Non-option variables are never matched in `customize-apropos' and
`customize-apropos-options' (i.e., the prefix argument does nothing for
these commands now).
**.
*** Remote processes are now also supported on remote MS-Windows hosts.
**.
** notifications.el supports now version 1.2 of the Notifications API.
The function `notifications-get-capabilities' returns the supported
server properties.
** Flymake uses fringe bitmaps to indicate errors and warnings.
See `flymake-fringe-indicator-position', `flymake-error-bitmap' and
`flymake-warning-bitmap'.
** The FFAP option `ffap-url-unwrap-remote' can now be a list of strings,
specifying URL types that should be converted to remote file names at
the FFAP prompt. The default is now '("ftp").
** New Ibuffer `derived-mode' filter, bound to `/ M'.
The old binding for `/ M' (filter by used-mode) is now bound to `/ m'.
** New option `mouse-avoidance-banish-position' specifies where the
`banish' mouse avoidance setting moves the mouse.
** In Perl mode, new option `perl-indent-parens-as-block' causes non-block
closing brackets to be aligned with the line of the opening bracket.
** In Proced mode, new command `proced-renice' renices marked processes.
** New option `async-shell-command-buffer' specifies the buffer to use
for a new asynchronous `shell-command' when the default output buffer
`*Async Shell Command*' is already in use.
** `S' in Tabulated List mode
(and modes that derive from it), sorts the column at point, or the Nth
column if a numeric prefix argument is given.
** `which-func-modes' now defaults to t, so Which Function mode, when
enabled, applies to all applicable major modes.
** `winner-mode-hook' now runs when the mode is disabled, as well as when
it is enabled.
** Follow mode no longer works by using advice.
The option `follow-intercept-processes' has been removed.
** `javascript-generic-mode''.
** `random' by' contains a substring "\?",
that substring is inserted literally even if the LITERAL arg is
non-nil, instead of causing an error to be signaled.
** `select-window' now always makes the window's buffer current.
It does so even if the window was selected before.
** The function `x-select-font' can return a font spec, instead of a
font name as a string. Whether it returns a font spec or a font name
depends on the graphical library.
** `face-spec-set' no longer sets frame-specific attributes when the
third argument is a frame (that usage was obsolete since Emacs 22.2).
** `set-buffer-multibyte' now signals an error in narrowed buffers.
** The CL package's `get-setf-method' function' are'
** CL-style generalized variables are now in core Elisp.
`setf' is autoloaded; `push' and `pop' accept' also accepts a (declare DECLS) form, like `defmacro'.
The interpretation of the DECLS is determined by `defun-declarations-alist'.
** New macros `setq-local' and `defvar-local'.
** Face underlining can now use a wave. | https://emba.gnu.org/emacs/emacs/-/blame/21cd50b803cb63b66f81db0a18dbaac6d7269348/etc/NEWS | CC-MAIN-2021-17 | refinedweb | 1,284 | 59.9 |
Recently, I needed a piece of software where you can input some numbers, validate and process them, and print some result. Since I needed to do scientific computing with it I decided to use Python with NumPy and SciPyand because I needed validation I decided that HMTL input validations would be the easiest to use.
Then I had a choice from a ton of Python web frameworks and since “microframework” sounded like exactly what I need and I’ve heard the name before I decided to give Flask a try.
Getting started
All you need to get started after you’ve installed Flask is only 1 python file.
from flask import Flask app = Flask(__name__) @app.route("/") def index(): return "Hello there!"
Start it with a simple command:
FLASK_APP=app.py FLASK_ENV=development flask run
That’s all you need to render a simple string on the “/” page. But who the fuck needs to render a string? Even if all you need is only to render a string it would look awful without some CSS. So we’ll go deeper and get into rendering templates.
Rendering templates
In case you want to render a template you can use a built-in method called, you guessed it,
render_template.
from flask import Flask, render_template app = Flask(__name__) @app.route("/") def index(): return render_template('index.html')
You’ll also need to create
index.html page in
/templates folder.
<!doctype html> <html lang="en"> <head> <meta charset="UTF-8"> <meta name="viewport" content="width=device-width, initial-scale=1"> <title>Computer math algorithms</title> </head> <body> <h1>Computer math algorithms</h1> <ul> <li><a href="/equations">System of linear equations</a></li> </ul> </body> </html>
As you can see, I’ve added a link to
/equations, but if you click the link now all you’ll see is “404 Not Found” page. To fix this you’ll need to add another route and another template the same way we did before.
@app.route("/equations") def equations(): return render_template('equations.html')
The first form
The current project is going to solve a system of linear equations. First of all, we need to know how many equations there are. To get that number we can use a form with one number input.
<form action="/equations"> <label for="equations_number">Number of equations:</label> <input type="number" name="equations_number" required <br> <input type="submit" value="Submit"> </form>
By the way, for some simple CSS I recommend reading a great website. To find CSS used there you can use developers tools.
This form can’t be submitted without number of equations because of the required attribute and on the input tag. Also, the number cannot be less than 1 and more than 100 because of
min and
max attributes respectively.
The number could be bigger than 100, but the website won’t be usable (it’s not really usable after 10 already). In that case, I would add import from some sort of spreadsheet file.
After entering any valid number, click on “Submit” button and a parameter will be added to the URL
/equations?equations_number=2.
Dealing with arguments
To read that parameter on the backend
request.args can be used like so:
from flask import request @app.route("/equations") def equations(): equations_number = request.args.get('equations_number', type = int) return render_template('equations.html', equations_number=equations_number)
Here we’re passing a name of the needed parameter to request.args.get and a type so that it converts to integer. equations_number argument is then passed to render_template so that it is available in the template.
Is this a system of linear equations?
All we need for the system of equations is a matrix of coefficients
A and a column vector
b so that the equation would be
Ax=b. Example:
x + 2y = 3 4x + 5y = 6
In this case, we would have the following
A and
b:
A = [[1, 2], [4, 5]] b = [[3], [6]]
Making a form with a two-dimensional array input
Right now we need a form for
A and
b.
>
What this code does is it creates a form and
equations_number times renders input for each equation. And each equation needs
equations_numbercoefficients (that’s what for loop with
j is for) and one number after the equals sign (
b).
Another interesting thing to note here is that input with
name="a" gets rendered
equations_number^2 times and equations_number times for
name="b". Flask supports this types of forms and to read data from them
request.args.getlist can be used like so:
b = request.args.getlist('b', type=int)
We could do the same for the
A array, but we need it to be two-dimensional. This is where NumPy comes in handy with a
reshape method. All you need to provide for it is an array and a tuple of the shape you want.
import numpy as np shape = (len(b), len(b)) A = request.args.getlist(‘a’, type=int) # 1-dimensional A = np.reshape(A, shape) # 2-dimensional
Conditional rendering
I would like to hide equations number form after it is submitted. To do that we can use “conditional rendering”. It’s pretty straightforward:
{% if equations_number %} <form action="/equations">... form for `A` and `b` ...</form> {% else %} <form action="/equations">... form for `equations_number` ...</form> {% endif %}
Finally solving the system
Once we have all the coefficients we can solve the system of equations. I could try to bother with how you can invert matrix
A and multiply it with a vector
b, but I won’t. Today we’ll use NumPy once again. This library has a function for exactly what we want — solving a system of linear equations and it is called
numpy.linalg.solve. All it needs is our matrix
A and
b, which we already have.
x = np.linalg.solve(A, b)
This is where things get a bit tricky. Since we’re using this one action for everything related to solving the system we have add some ifs. Here’s the final code:
equations_number = request.args.get('equations_number', type = int) b = request.args.getlist('b', type=int) A = np.array(request.args.getlist('a', type=int)).reshape(len(b), len(b)) x = np.array([]) if len(b) > 0: x = np.linalg.solve(A, b) return render_template('equations.html', equations_number=equations_number, A=A, b=b, x=x)
Let’s check it with some screens we’re using this action for:
- Start:
equations_numberis
None, and all other variables are empty arrays
- Number of equations is entered:
equations_numberis present, but all other variables are still empty arrays
- A and b are entered:
equations_numberis
Noneagain,
bis an array,
Ais a 2D array and
xis an answer for the system of equations
With that we can render our final form:
{% if x.size > 0 %} <p>A = {{ A }}</p> <p>B = {{ b }}</p> <p>x = {{ x }}</p> {% elif equations_number %} > {% else %} <form action="/equations"> <label for="equations_number">Number of equations:</label> <input type="number" name="equations_number" required <br> <input type="submit" value="Submit"> </form> {% endif %}
Areas of improvement (homework?)
- Separate views and actions for different screens of equation
- A base layout all views
- Form accessibility for bigger inputs and prettier output
Conclusion
In this story I’ve covered:
- Setting up your Flask application
- Rendering templates
- Making HTML forms
- How to read arguments with Flask (regular, arrays, and even 2-dimensional arrays)
- Conditional rendering
- How a system of linear equations can be represented and solved with NumPy
You can find the full code of this project, as well as my other projects, on my GitHub page. If you liked this article you can follow me and if you didn’t — you can leave an angry comment down below.
This article was originally posted on my blog. | https://learningactors.com/what-to-do-when-you-need-a-web-app-quickly/ | CC-MAIN-2020-10 | refinedweb | 1,285 | 64.1 |
On Sunday 27 July 2008 14:31, Matthew Dillon wrote: > :When a forward commit block is actually written it contains a sequence > :number and a hash of its transaction in order to know whether the > :... > ). > >. So I do not want users to fixate on that detail. The mount option allows them to choose between "fast but theoretically riskier" and "warm n fuzzy zero risk but not quite so fast". If the example of Ext3 is anything to go by, almost everybody chooses the "ordered data" mode over the "journal data" mode given the tradeoff that the latter is about 30% slower but offers better data integrity for random file rewrites. The tradeoff for the option in Tux3 will be maybe 1 - 10% slower in return for a miniscule reduction in the risk of a false positive on replay. Along the lines of deciding to live underground to avoid the risk of being hit by a meteorite. Anyway, Tux3 will offer the option and everybody will be happy. Actually implementing the option is pretty easy because the behavior variation is well localized. >. Yes, a continuous running space, not preallocated. The forward log will insert itself into any free blocks that happen to be near the the transaction goal location. Such coopted free space will be implicitly unavailable for block allocation, slightly complicating the block allocation code which has to take into consideration both the free space map and all the outstanding log transactions. > > :... > > Ok, here I spent about 30 minutes constructing a followup but then > you answered some of the points later on, so what I am going to do > is roll it up into a followup on one of your later points :-) > > > > Wait, it isn't? I thought it was. I think it has to be because > the related physical B-Tree modifications required can be unbounded, Logical logging is not idempotent by nature because of the uncertainty of the state of the object edited by a logical operation: has the operation already been applied or not? If you know something about the structure of the target you can usually tell. Suppose the operation is a dirent create. It has been applied to the target if the direct already exists, otherwise not. I do not like this style of special case hacking to force the logical edit to be idempotent. Instead, I choose to be sure about the state of the target object by introducing the rule that after a logical log operation has been generated, nothing is allowed to write to the object being edited. The logical operation pins the state of the disk image of target object. The object stays pinned until the logical log entry has been retired by a physical commit to the object that updates the object and retires the logical edit in one atomic log transaction. I was working on a specific code example of this with pseudocode and all disk transfers enumerated, but then your email arrived. Well after this response the example will probably be better. > and because physical B-Tree modifications are occuring in parallel the > related physical operations cause the logical operations to become > bound together, meaning the logical ops *cannot* be independantly > backed out Tux3 never backs anything out, so there is some skew to clear up. The logical and physical log operations are indeed bound together at both the disk image and the processor level, but not in the way that you might expect. The primary purpose of rolling up logical log entries into physical updates is to control resource consumption; the secondary purpose is to reduce the amount of replay required to reconstruct "current" memory images of the btree blocks on reboot or remount. In the first case, resource exhaustion, a high level vfs transaction may have to wait for a rollup to finish, cleaning a number of dirty buffer cache blocks and releasing resources needed for the rollup, such as bio structs and lists of transaction components. Such blocking is nominally enforced by the VMM by sending the requesting process off to scan for and try to free up some dirty memory (a horribly bad design idea in Linux that causes recursive calls into the filesystem, but that is the way it works) during which time the process is effectively blocked. Tux3 will add a more responsible level of resource provisioning by limiting the number of transactions that can be in flight, much as Ext3 does. This allows the filesystem to fullfill a promise like "never will use up all of kernel reserve memory after having been given privileged access to it in order to clean and recover some dirty cache blocks". In the second case, keeping the replay path short, high level operations can proceed in parallel with physical updates, because a transaction is _logically_ completed as soon as committing the associated logical edits to disk has completed. So high level operations block on logical commit completions, not on physical commit completions except in the case of resource exhaustion. >. If the user writes: rm a/b/c& rm a/b& rm a& Then they deserve what they get, which is a probable complaint that a directory does not exist. If the operations happen to execute in a favorable order then they will all succeed. With Tux3 this is possible because destroying the 1TB file can proceed asynchronously while the VFS transaction completes and returns as soon as a commit containing ['unlink', inum_a/b/c, dnum_a/b] ['destroy', inum_a/b/c] has been logged. So deleting the 1TB file can be very fast, although it may be some time before all the occupied disk space becomes available for reuse. To make the deletion visible to high level filesystem operations, some cached disk blocks have to be updated. For a huge deletion like this it makes sense to invent a versioned "orphan" attribute, which is added to the inode table leaf, meaning that any data lookups arriving from still-open files should ignore the file index tree entirely. > Second, the logical log entry for "rm a/b/c" cannot be destroyed > (due to log cycling based on available free space) until after the > related physical operation has completed, which could occur seconds > to minutes later. True, but that only pins one commit block on disk. >. I do not think I have that problem because recovering the space used by the big file can proceed incrementally: free a few blocks from the inode index leaves; commit the revised file index blocks and free tree blocks; repeat as necessary. After a crash, find the logical inode destroy log entry and resume the destroy. >). Yes indeed. A matter of totalling all that up, which is fortunately bounded to a nice low number. Then in grand Linux tradition, ignore the totals and just rely on there being "megabytes" of reserve memory to handle the worst case. Obviously a bad idea, but that is how we have done things for many years. > :... > :ugly or unreliable when there are lot of stacked changes. Instead I > :introduce the rule that a logical change can only be applied to a known > :good version of the target object, which promise is fullfilled via the > :physical logging layer. > :... > . > >. It sounds like a good idea, I will ponder. > ? > > hope that question is cleared up now. > . > >. The presence of a "link" record in the logical log implies a link count in addition to whatever is recorded in the on-disk inode table leaf. On the other hand, the "current" image of the inode table leaf in the buffer cache has the correct link count when the sys_link returns. > . > > True enough. Those holes will create significant fragmentation > once you cycle through available space on the media, though. It depends on how high the density of such holes is. I will try to keep it down to a few per megabyte. The nice think about having some one block holes strewn around the disk is, there is always a nearby place for a commit block to camp out for a while. > :> So your crash recovery code will have to handle=20 > :> both meta-data undo and completed and partially completed transaction= > : > . I am hoping not to mess that up ;-) The thing about physical logging in Tux3 is, the logged data blocks are normally immediately linked into the index blocks as actual file data or btree metadata blocks, so they are only written once, and they are hopefully written near where the rest of the structure lives. It is only in the optional "update in place" style of physical logging that the commit data blocks are written twice, which will be no worse than a traditional journal and probably a lot better because of not needing to seek to the journal region. >) Nice, now I understand. But do you not have to hold all filesystem transactions that depend on the modification until the btree node has completed writing to disk? With logical logging you only have to wait on the logical commit to complete, into which may be packed a number of other changes for efficiency. > (NOTE: I do realize that the REDO log can be compressed just as the > UNDO one, by recording actual data shifts from B-Tree insertions > and deletions, it gets really complex when you do that, though, and > would not be idempotent). As described above, it is not how I do. I think I see it, but I have my doubts because you have to block transactions waiting for the up to date copy of the btree data to land on disk. Either that, or you may give userspace the impression that some transaction has gotten onto stable storage when that is not the case. If you do in fact block transactions until the current image of the btree leaf has been written out then the blocking behavior is no better than REDO style, and REDO can batch together many updates to different logical blocks into a single commit block, which seems like fewer transfers overall. > . > > Yah. I see. Noting that issue I brought up earlier about the > "rm a/b/c". Locking the caboose of your log until all related physical > operations have been completed could create a problem. I hope that is cleared up now. > . > > am gravitating towards that style for Tux3's commit-related short term allocations. > . > > The cookies are 64 bits in DragonFly. I'm not sure why Linux would > still be using 32 bit cookies, file offsets are 64 bits so you > should be able to use 64 bit cookies. It is not Linux that perpetrates this outrage, it is NVFS v2. We can't just tell everybody that their NFS v2 clients are now broken. > For NFS in DragonFly I use a 64 bit cookie where 32 bits is a hash key > and 32 bits is an iterator to deal with hash collisions. Poof, > problem solved. Which was my original proposal to solve the problem. Then Ted told me about NFS v2 :-O Actually, NFS hands you a 62 bit cookie with the high bits of both s32 parts unused. NFS v2 gives you a 31 bit cookie. Bleah. > . > >. Yes, I noticed that. Check out dx_hack_hash: It distributed hashes of ascii strings very well for me, with few funnels. It way outperformed some popular hashes like TEA. Ted's cut down crytographic hash is yet more uniform but costs much more CPU. > . > > Well, the B-Tree fanout isn't actually that big a deal. Remember > HAMMER reblocks B-Tree nodes along with everything else. B-Tree > nodes in a traversal are going to wind up in the same 16K filesystem > block. It affects the number of probes you have to do for the lookup. Binsearching hits one cache line per test while each node lookup hits a lot more. >. It is not just the cache footprint but the time it takes to get your key tables into L1 cache. To be sure, we are talking about small details here, but these small details can add up to a difference of a factor of two in execution speed. Which maybe you don't care much about now that you are orders of magnitude faster than what came before ;-) > :... > :>. > >. Following my prescription above, the file will be full of zeros when re-extended because the data pointers were removed. But I think you are right, truncate really is a delete and I actually argued myself into understanding that. > . > >. Yes, I have thought about it a little more, and I imagine something like a small array of dirty bits that climb up the btree with the help of logical logging, where each bit means that there is something interesting for some corresponding replication target to look at, somewhere down in the subtree. > :>. > >. The Linux dentry cache actually implements proper namespace semantics all by itself without needing to flush anything, which is what Ramfs is. Ramfs just lives entirely in cache without even being backed by a ramdisk, until the power goes off. > In particular, caching a namespace deletion (rm, rename, etc) > without touching the meta-data requires implementing a merged lookup > so you can cancel-out the directory entry that still exists on-media. > So it isn't entirely trivial. Linux implements "negative dentries" to say "this entry is not here". >). I have not looked at that part for a while now, and I did not look at it just now, but Al Viro has been fiddling with it for years getting the bugs out one at a time. The last major one I heard of was some time ago. It works somehow, I should look more closely at it. > Most filesystems will dirty meta-data buffers related to the media > storage as part of executing an operation on the frontend. I don't > know of any which have the level of separation that HAMMER has. Modern Linux filesystems get close I think. Particularly in journalled data mode, Ext3 marks all the buffers it deals with as "don't touch" to the VFS and VMM, which have no idea how to obey the necessary ordering constraints. There is also this thing called the "journalling block device" that provides an abstract implementation of a physical journal, which is actually used by more than one filesystem. (Three I think, including Ext4, however I now hear noise about rewriting it.) > . > > Jeeze, BSD's have been doing that forever. That's what VOP_BMAP is > used for. I'm a little surprised that Linux doesn't do that yet. > I'll expand on that down further. Either you forgot to expand or I missed it. I am interested. > :Directory format to index the extent within a leaf: > : > : struct entry { unsigned loglo:24, offset:8 }; > : struct group { unsigned loghi:24, count:8 }; > ... > Man, those are insanely small structures. Watch out for the cpu > and in-kernel-memory tracking overhead. I have written the code, mostly, and it is tight. A similar idea has been part of ddsnap for 5 years now. > : * :-) I remember it well. You were the one who put Rik up the reverse map design that caused the main fireworks between 2.3 and 2.4.9. (I was the one who finally got it work by social-engineering the merging of the reverse map part with Andrea's more robust LRU design.) I also remember the time when BSD buffers were far superior to Linux ones, yes. In 2.0 the arrangement sucked enormously: sort-of coherency between the buffer cache and the page cache was achieved by physically copying changes from one to the other. Today, the buffer cache and page cache are nearly fully unified. But the unification will never be complete until the page size is variable, so the remaining tasks done by buffers can be done by pages instead. Anyway, this part of your OS is more similar than different, which will help a lot with a port. > The typical BSD (Open, Net, Free, DragonFly, etc) buffer cache structure > is a logically indexed entity which can also contain a cached physical > translation (which is how the direct data bypass works). Linux caches physical translations by gluing one or more buffer heads onto each page, which is one of the few remaining uses of buffer heads. To finally get rid of buffer heads the physical cache needs to be done some other way. I am sure there are lots of better ways. >. This is similar to what I do when I write out a physical block and use a logical log entry to make the on-disk parent point at it (actually, it is a promise to make the parent point at it some time in the future. But then I do not let those things live very long. Eventually it might make sense to let them live a little longer, perhaps by generalizing the idea of the single linear log to a forest of little logs, some of which could be left around for a long time instead of being quickly rolled up into the canonical structures. > :>. > >. Indeed, I need to handle that. Since I do want to stick with a simple linear sequence of logical log commits for now, I do not want to leave any of them sitting around for a log time. One easy thing to do is to put any orphans aside at rollup time, in an unversioned orphan file, say. > Maybe the ticket for DragonFly is to simply break storage down into > a reasonable number of pieces, like cutting up a drive into 256 pieces, > and create a layer to move and glue those pieces together into larger > logical entities. Linux LVM is all about device mapper, and actually, device mapper does not really have a reason to be a separate subsystem, it can just be the way the block layer works. If interested, check out my posts on bio stacking and rolling device mapper's "clone_and_map" idea up into the generic block layer. I think all unixen should do this. Other than that, device mapper is just about making device numbers point at different virtual devices, transparently to userspace. Even device stacking works like that: extract the "virtual" device (which may or may not be a real device) from the device number you are going to remap, store it in some other device number, create a new virtual device on top of the other device number, finally store the new virtual device in the old device number. The old device number has now been transparently stacked. Nothing to it. > I'm editing down as much as I can :-) This one took 6 hours, time to > get lunch! Only 3 hours so far here... hmm, make that four. Daniel | http://leaf.dragonflybsd.org/mailarchive/kernel/2008-07/msg00139.html | CC-MAIN-2014-15 | refinedweb | 3,121 | 67.59 |
A few days ago I blogged about a nasty bug in .NET 1.1 SP1, which made a nested groupbox control show up with garbled caption text (see here and here).
It took some hoops but Microsoft has fixed this now. It's a fix that's available through PSS, and has KB number article number 890828. The fix is still under review so it might be the KB article is not yet available, the fix though is available through PSS.
Posted
Friday, January 28, 2005 12:57 PM
by
FransBouma
| 6 comment(s)
Yesterday I blogged about a horrible huge bug in the groupbox control for winforms in .NET 1.1 SP1 on a themed XP system (and that's pretty much all XP systems nowadays, since .NET 1.1 SP1 is a mandatory fix on windows update.). Today I'll show you a repro case. It's a silly form with just two nested groupboxes. On a themed XP machine you'll see that the inner groupbox' caption is garbled and has a horrible font.
I've tested this on more XP machines and they all showed the same results, with different themes (native XP ones). I'll try to contact PSS later today to get a fix for this AND to get this fixed publicly, because a private PSS-call-us-fix is useless, as users of applications by ISV's first have to call PSS to grab the special fix, which most of them won't do.
Update: It seems to occur on .NET 2.0 beta1 as well.
Full code: WinAppTest.zip
Form1.cs:
using System;
using System.Drawing;
using System.Collections;
using System.ComponentModel;
using System.Windows.Forms;
using System.Data;
namespace WinAppTest
{
///
/// Summary description for Form1.
///
public class Form1 : System.Windows.Forms.Form
{
private System.Windows.Forms.GroupBox groupBox1;
private System.Windows.Forms.GroupBox groupBox2;
private System.ComponentModel.Container components = null;
public Form1()
{
InitializeComponent();
}
#region Windows Form Designer generated code
///
/// Clean up any resources being used.
///
protected override void Dispose( bool disposing )
{
if( disposing )
{
if (components != null)
{
components.Dispose();
}
}
base.Dispose( disposing );
}
///
/// Required method for Designer support - do not modify
/// the contents of this method with the code editor.
///
private void InitializeComponent()
{
this.groupBox1 = new System.Windows.Forms.GroupBox();
this.groupBox2 = new System.Windows.Forms.GroupBox();
this.groupBox1.SuspendLayout();
this.SuspendLayout();
//
// groupBox1
//
this.groupBox1.Controls.Add(this.groupBox2);
this.groupBox1.FlatStyle = System.Windows.Forms.FlatStyle.System;
this.groupBox1.Location = new System.Drawing.Point(21, 27);
this.groupBox1.Name = "groupBox1";
this.groupBox1.Size = new System.Drawing.Size(336, 174);
this.groupBox1.TabIndex = 0;
this.groupBox1.TabStop = false;
this.groupBox1.Text = "groupBox1";
//
// groupBox2
//
this.groupBox2.FlatStyle = System.Windows.Forms.FlatStyle.System;
this.groupBox2.Location = new System.Drawing.Point(81, 45);
this.groupBox2.Name = "groupBox2";
this.groupBox2.TabIndex = 0;
this.groupBox2.TabStop = false;
this.groupBox2.Text = "groupBox2";
//
// Form1
//
this.AutoScaleBaseSize = new System.Drawing.Size(5, 13);
this.ClientSize = new System.Drawing.Size(502, 278);
this.Controls.Add(this.groupBox1);
this.Name = "Form1";
this.Text = "Form1";
this.groupBox1.ResumeLayout(false);
this.ResumeLayout(false);
}
#endregion
///
/// The main entry point for the application.
///
[STAThread]
static void Main()
{
Application.EnableVisualStyles();
Application.DoEvents();
Application.Run(new Form1());
}
}
}
Posted
Friday, January 21, 2005 11:50 AM
by
FransBouma
| 11 comment(s)
I decided to upgrade to .NET 1.1 SP1. Then I ran one of my .NET winforms applications which has nested groupbox controls on a winform and FlatStyle set to System so they will be XP themed.
Then I saw that the nested groupbox controls had their captions in bold arial font and this made the caption to be too big and the text wasn't readable. See this screenshot:
As you can see: a butt-ugly caption which is also crippled. This is an existing app, so I didn't do anything. It worked perfectly on .NET 1.1.
What to do? How can this be solved? I googled on this but all I find are some people having the same problem. Needless to say, I'm really pissed off by this. I hope there is an easy fix for this.
Update: It seems that when you add a panel and dock the groupbox inside the panel it works.So I now have to replace all nested group boxes with panels and dock the groupboxes inside these panels.
So Microsoft, how about FIXING this stupid bug A.S.A.P. ?
Posted
Thursday, January 20, 2005 3:36 PM
by
FransBouma
| 6 comment(s)
Via.
Posted
Thursday, January 06, 2005 11:20 AM
by
FransBouma
| 21 comment(s)
Today we released Template Studio, a full-featured IDE for creating / editing / testing templates for LLBLGen Pro. Template Studio is free for our customers and therefore one of the benefits if you decide to join the largest O/R mapper-family for .NET!
Below you'll find 3 screenshots. Clicking them will bring up the 1600x1200 version.
The main screen
The main screen shows you the multiple-document IDE with at the left a loaded LLBLGen Pro project. In the center you'll see multiple templates loaded, using TDL (our own template language) at the top and C# at the bottom. Furthermore you see an example of the intellisense build in for C# and VB.NET. At the left you see the viewer with the currently loaded templateset and all the templates defined in that templateset.
Compiler feedback
This screenshot shows you the compiler feedback. TDL templates are interpreted but C#/VB.NET templates are compiled into an assembly which is then executed to produce the output. As you can see the template at the bottom has an error, which is listed in the Application Output window at the bottom. Double-clicking that error will bring you to the C# code generated from the templates (which is not the output, but the actual code which is executed to produce the output) and the error found is visualized with a red line, similar to the ones we're all familiar with from VS.NET.
Run single task configuration screen
The code generator engine of LLBLGen Pro is build around tasks: a nested set of tasks is executed and each task can produce code or perform a code-generation supporting task like creating a directory, checking out code, compiling assemblies etc. etc. This powerful engine is directly integrated in Template Studio so testing a template is a breeze. This screenshot shows the configuration screen to run a single task. You can select one of your favorite tasks or setup a new one, for example based on one of the many pre-defined tasks in the list.
Template Studio is free for LLBLGen Pro customers and is created using Janus Systems' .NET Windows Forms controls v2.0
Posted
Monday, January 03, 2005 6:06 PM
by
FransBouma
| 10 comment(s)
As they say in the part of Holland I come from: Folle Lok en Seine! or in plain English: Happy New Year! :)
Posted
Saturday, January 01, 2005 12:23 AM
by
FransBouma
| 6 comment(s) | http://weblogs.asp.net/fbouma/archive/2005/01.aspx | crawl-001 | refinedweb | 1,169 | 60.41 |
This documentation is archived and is not being maintained.
list::back
Visual Studio 2005
Returns a reference to the last element of a list.
If the return value of back is assigned to a const_reference, the list object cannot be modified. If the return value of back is assigned to a reference, the list object can be modified.
When compiling with _SECURE_SCL 1, a runtime error will occur if you attempt to access an element in an empty list. See Checked Iterators for more information.
// list_back.cpp // compile with: /EHsc #include <list> #include <iostream> int main( ) { using namespace std; list <int> c1; c1.push_back( 10 ); c1.push_back( 11 ); int& i = c1.back( ); const int& ii = c1.front( ); cout << "The last integer of c1 is " << i << endl; i--; cout << "The next-to-last integer of c1 is " << ii << endl; }
Referencelist Class
list::back and list::front
Standard Template Library
Other Resourceslist Class Members
Show: | https://msdn.microsoft.com/en-US/library/y4d8h412(v=vs.80).aspx | CC-MAIN-2016-44 | refinedweb | 153 | 66.74 |
Hey all,
Im pretty new to c++ and alot of the constructs I am using in this program - strings, functions, pointers, arrays, structures etc.
The main below is fleshed out and is compiling correctly. I am having difficulty passing around values and getting my three functions in this program to work properly. I have the arguments and parameters allready set up for each function as a skeleton and I have commented in each of the three functions what I am trying to get them to do.
Any help w/my functions is greatly appreciated!
Code:///////////////////////////////// // WORK IN PROGRESS // //////////////////////////////// //compilable // This is a program for "renting" movies. There are only two movies that will be used in the // program - Jurassic Park & Lord of the Rings. The user can do a search for a movie // (only Jurassic Park or Lord of the Rings will produce a result as they are the // only ones that will be used for the time being), view all the movies // not currently "rented", and then proceed to "rent" a movie // thus making it unavailable at that point. Each option corresponds // to a function set up in the // selection menu set up in main. #include <iostream> using namespace std; struct movie_structure { char movie_name[40]; char director[40]; char product_number[10]; bool rented; }; void rent_movie (movie_structure* movie) { //Will "rent" a movie making it no longer available (i.e. set boolean value to true) } movie_structure* get_movie (movie_structure movie_library[], char* movie_name, int library_size) { //will return a pointer to a movie based on title supplied, else will return null pointer } void print_movies_available (movie_structure movie_library[], int library_size) { //will display the movies not currently "rented" (i.e. movies w/false boolean value) } int main() { char user_input_title[81]; // user's input movie_structure* movie; // movie pointer int selection = 0; // for the menu selections int number_of_movies = 2; // number of movies in the 'movie library' movie_structure movie_library[] = { //title //director //product #'s //boolean value {"Jurassic Park", "Steven Spielberg", "913564180", 0}, {"Lord of the Rings", "Peter Jackson", "376145212", 0} }; cout<<"*******************************"<<endl; cout<<"* *"<<endl; cout<<"* WELCOME TO THE VIDEO STORE! *"<<endl; cout<<"* *"<<endl; cout<<"*******************************"<<endl<<endl; do{ cout<<"OPTIONS:"<<endl<<endl; cout<<"1 - Search for a movie."<<endl<<endl; cout<<"2 - View all available movies."<<endl<<endl; cout<<"3 - Rent a movie."<<endl<<endl; cout<<"4 - Exit store."<<endl<<endl; cout<<"SELECTION: "; cin>>selection; cout<<endl; if(cin.good() && selection > 0 && selection < 5){ if(selection == 1 || selection == 3){//selections that require a movie title cin.ignore(); cout<<"Enter the title of the movie."<<endl<<"Title: "; cin.getline(user_input_title,81); movie = get_movie(movie_library, user_input_title, number_of_movies); if (movie == NULL) //invalid title cout<<endl<<user_input_title<<" not found. Try again."<<endl<<endl; else if (movie->rented == true) //valid movie, but movie is currently "rented" cout<<movie->movie_name<<" is checked out. Sorry!"<<endl<<endl; else if (selection == 1) //movie available for "rent" cout<<endl<<movie->movie_name<<" by "<<movie->director<<" is available."<<endl<<endl; else{ //movie available cout<<endl<<movie->movie_name<<" is available. Let me check it out for you!"<<endl<<endl; rent_movie(movie); } } else if(selection == 2){ //selection 2 will print all movies available print_movies_available(movie_library, number_of_movies); } else { cout<<"Goodbye."<<endl; return 0; } } else { //invalid selection cout<<endl<<"Invalid selection."<<endl<<endl; selection = 1; } } while(selection < 4); return 0; }
Any input, tips, or coding ideas?
Thanks for your help! | https://cboard.cprogramming.com/cplusplus-programming/117639-trying-get-few-functions-work-properly.html | CC-MAIN-2017-43 | refinedweb | 543 | 62.88 |
Here we provide an overview of the MongoDB database. In subsequent posts we will give more in depth examples of how to use MongoDB.
First, MongoDB is a noSQL big data database. It fits the definition of big data, because it scales (i.e., can be made larger) simply by adding more servers to a distributed system. And it is does not require any schema, like an RDBMS database, such as Oracle.
MongoDB data records are stored in JSON (JavaScript Object Notation) format, which is self-describing, meaning the metadata (i.e., schema) is stored together with the data.
Command Line Shell
Mongo has an interactive command shell. JavaScript programmers will love this because the syntax is JavaScript. To open the shell you simply type:
mongo
Concepts
MongoDB records are called documents. Each MongoDB database (You can have many.) includes collections, which are a set of JSON documents. Each collection and document has an ObjectID created by MongoDB or supplied by the programmer.
To illustrate, suppose we have one database called products.
We could have two collections to contain all products grouping them by where they are sold:
Data storage is cheap and memory and CPU costs more. So, some big data databases, like Cassandra and MongoDB, throw out the idea of a normalized database, which is one of the key principles of an RDBMS database.
For example, with Oracle you would have a product category in a product record. The product category table contains fields common to all of those products. Each product record points to a product category record, so that such common data is not stored more than once:
But then you have to do a join operation if you want to know the color or weight of a product. But a join is a computationally expensive operation. That takes time. MongoDB would store the data like this:
RDBMS programmers say that creates duplication and wastes space. MongoDB programmers would say “yes,” but speed is more important than storage.
In other words, MongoDB records might look like this:
Obviously. When you know the category you know the color.
We will illustrate that by creating the products database and adding some products there. Paste these commands into the MongoDB shell.
First create the products database.
use products
switched to db products
Then these two collections:
> db.createCollection("boyDiapers")
{ "ok" : 1 }
> db.createCollection("girlDiapers")
{ "ok" : 1 }
>
Then add some data:
db.boyDiapers.insert([
{
size: 1,
color: 'blue',
brand: 'toddler tyke',
}
])
db.girlDiapers.insert([
{
size: 1,
color: 'pink',
brand: 'little angel',
}
])
Notice two things. First, we use the format db.(collection).insert to add the document. Second, we use the brackets [], which indicates an array, do that we can add more than one document at a time.
Now create some more data so that we can query for data:
db.boyDiapers.insert([
{
size: 2,
color: 'white',
brand: 'boy large white'
}
])
db.girlDiapers.insert([
{
size: 2,
color: 'while',
brand: 'girl large'
}
])
Selecting Data
If you use find with no arguments it lists all documents. Use pretty to display the results in easy-to-read indented JSON format:
> db.girlDiapers.find().pretty()
{
"_id" : ObjectId("59d1e9d5ccf50b62c5a7af55"),
"size" : 1,
"color" : "pink",
"brand" : "little angel"
}
{
"_id" : ObjectId("59d1f022ccf50b62c5a7af57"),
"size" : 1,
"color" : "while",
"brand" : "girl large"
}
{
"_id" : ObjectId("59d1f565ccf50b62c5a7af59"),
"size" : 2,
"color" : "while",
"brand" : "girl large"
}
Find all girl diapers of size 2 add arguments to the find statement:
db.girlDiapers.find({"size":2})
{ "_id" : ObjectId("59d1f565ccf50b62c5a7af59"), "size" : 2, "color" : "while", "brand" : "girl large" }
Now, you could not search both boy’s and girl’s diapers collections at the same time. MongoDB does not do that. Instead you have to program that in your application that you would code using some driver (See below).
Normalized Documents
We just said that in MongoDB there is no normalization because storage is cheap and computational power expensive. But you can create normalize documents.
For example we can create a sales record for each size 2 girl large document like this with the diaper field pointing to the diaper object. That might make more sense in this case as you would not want the diaper collection to grow many times larger each time you make a sale.
db.girlDiapers.insert([
{ "diaper" : ObjectId("59d1f565ccf50b62c5a7af59"),
"price" : 45.2,
"quanity" : 10,
"sku" : "case"
}
])
MongoDB Drivers
Of course, you probably would not use the command line shell for an application. Instead you would write a program to interact with MongoDB using any of the many drivers available. There are drivers for C++, C#, Java, Node.JS, Scala, Python, and more.
For example, to use Python:
sudo pip install pymongo
Then to query for size 2 diapers across the boy and girl collections:
from pymongo import MongoClient
client = MongoClient()
db = client.products
x=db.collection_names()
for i in range(len(x)):
c=x[i] d = db.get_collection(c)
for e in d.find({"size": 2}):
print(e)
Outputs:
{'size': 2.0, 'brand': 'boy large white', 'color': 'white', '_id': ObjectId('59d1f564ccf50b62c5a7af58')}
{'size': 2.0, 'brand': 'girl large', 'color': 'while', '_id': ObjectId('59d1f565ccf50b62c5a7af59')}
In the next post we will get into some more advanced MongoDB topics.
Wikibon: Automate your Big Data pipeline
These postings are my own and do not necessarily represent BMC's position, strategies, or opinion.
See an error or have a suggestion? Please let us know by emailing blogs@bmc.com. | http://www.bmc.com/blogs/mongodb-overview-getting-started-with-mongodb/ | CC-MAIN-2019-09 | refinedweb | 887 | 66.54 |
[
]
Henry Saputra commented on GORA-32:
-----------------------------------
Sorry for the late response, was having some issue with my env. Will apply this by end of
day today.
> Map type with long values generates non-compilable Java class
> -------------------------------------------------------------
>
> Key: GORA-32
> URL:
> Project: Gora
> Issue Type: Bug
> Components: schema
> Affects Versions: 0.1-incubating
> Reporter: Yves Langisch
> Attachments: GORA-32.patch, unboxing.patch
>
>
> I have the following Avro JSON schema:
> {
> "type": "record",
> "name": "Request",
> "namespace": "ch.test.generated",
> "fields" : [
> {
> "name": "data",
> "type": {
> "type": "map",
> "values": "long"
> }
> }
> ]
> }
> Compiling the schema I get a Java class that does not compile. The problem is that primitive
types are not allowed in generic maps:
> ...
> public Map<Utf8, long> getData() {
> return (Map<Utf8, long>) get(0);
> }
> ...
> The issue seems to be that in the {{GoraCompiler}} class the unboxed types are used.
--
This message is automatically generated by JIRA.
For more information on JIRA, see: | http://mail-archives.apache.org/mod_mbox/incubator-gora-dev/201106.mbox/%3C945015264.25822.1308685427547.JavaMail.tomcat@hel.zones.apache.org%3E | CC-MAIN-2018-05 | refinedweb | 147 | 59.09 |
Font Awesome is an... awesome (sorry I had to) product. React is a brilliant coding library. It would only make sense to use them together. I've been using Font Awesome for a long time and was stoked when their Kickstarter for the new version went live.
There's a whopping 3,978 icons as of the time of this writing!
Table of Contents
- Ways to use Font Awesome
- Using Font Awesome 5 and React
- Choosing Fonts
- Using Icons From Specific Packages
- Using Pro Fonts
- Installing Font Awesome
- Creating an Icon Library
- Importing an Entire Icon Package
- Importing Icons individually
- Sizing Icons
- Coloring Icons and Backgrounds
- Transforming Icons
- Fixed Width Icons
- Spinning Icons
- Advanced: Masking Icons
- Using react-fontawesome and Icons Outside of React
- Conclusion
We use React and Font Awesome together whenever you see an icon here on Scotch. That includes the user navigation, cards, brand icons, and more.
While the Font Awesome team has made a React component to make this integration easy, I found a couple gotchas and had to understand some fundamental things about the new Font Awesome 5 and how it's structured. I'll write up what I found and the ways to use the React Font Awesome component.
Ways to use Font Awesome
Normally, if you were used to how Font Awesome worked in their previous versions, then you would add the
.css file to the head of your document and then use something like:
<i class="fa fa-user-happy"></i>
This was cool in the previous version, but the downside was that we had to bring in the entire Font Awesome library even if we only used some fonts
Font Awesome 5
With Font Awesome 5, there are a few different ways we can use the icons.
The SVG way has benefits detailed by the Font Awesome team and contrary to what I had originally thought, faster than the font-face way.
Another big benefit to the SVG way is that we can pick and choose what fonts we need and only include those in our final bundle size.
The problem with Font Awesome and React together
With the SVG and JS way, the JS to parse our HTML and add the SVG may fire before React has time to mount it's components. So we have to find a way to parse the HTML once React has mounted its components.
Using Font Awesome 5 and React
Lucky for us, the Font Awesome team has created a React component to use Font Awesome with React. With this library, we are able to do the following after you pick your icon. We'll use the boxing-glove icon and do everything right inside
App.js
import React from "react"; import { render } from "react-dom"; // get our fontawesome imports import { faHome } from "@fortawesome/free-solid-svg-icons"; import { FontAwesomeIcon } from "@fortawesome/react-fontawesome"; // create our App const App = () => ( <div> <FontAwesomeIcon icon={faHome} /> </div> ); // render to #root render(<App />, document.getElementById("root"));
Now we have a little home icon! Notice how we can pick out only the
home icon so that only one icon is added to our bundle size.
It's tiny and not styled, but we have it!
Now, Font Awesome will make sure that this component will replace itself with the SVG version of that icon once this component is mounted!
Choosing Fonts
I'm placing this section before installing/using because it's important to know how the Font Awesome libraries are laid out. Since there are so many icons, the team decided to split them up into multiple packages.
These packages are differentiated by the following. I'm also placing the package name that you would
npm install here:
When picking and choosing which fonts you want, I recommend visiting the Font Awesome icons page. Notice the filters along the left. Those are very important because they will indicate what package to import your icon from.
In the example above, we pulled the home icon out of the
@fortawesome/free-solid-svg-icons package.
Knowing which package an icon belongs to
You can figure out which package an icon belongs to by seeing the filters on the left. You can also click into an icon and see the package it belongs to.
Once you know which package a font belongs to, it's important to remember the three-letter shorthand for that package. Here they are:
You can search for a specific type from the icons page:
Using Icons From Specific Packages
If you browse the Font Awesome icons page page, you'll notice that there are usually multiple versions of the same icon like this one:
In order to use a specific icon, you will need to change up your
<FontAwesomeIcon> a little bit. Here's using mutiple types of the same icon from different packages. Remember those three-letter shorthands we talked about earlier.
Note: The below examples won't work until we build an icon library in a few sections.
// solid version <FontAwesomeIcon icon={['fas', 'code']} /> // defaults to solid version if not specified <FontAwesomeIcon icon={faCode} />
And the light version using
fal:
// light version <FontAwesomeIcon icon={['fal', 'code']} />;
We had to switch our
icon prop to be an array instead of a simple string. Normally, the icon would default to the solid (
fas) version so you could rewrite the above as:
Using Pro Fonts
Since the pro fonts are the fonts you have to pay for, they have to be paywalled somehow. How do we authenticate and then
npm install? How does handle paid packages?
We are going to add an
.npmrc to the root of our application and you can find your secret key in our Font Awesome settings:
Add your
.npmrc to the root of your project with the following:
@fortawesome:registry= //npm.fontawesome.com/:_authToken=YOUR-TOKEN-HERE
Installing Font Awesome
I know this part sounds simple, but it actually tripped me up a bit. Since there are multiple versions of an icon, multiple packages, and free/pro packages, installing them all isn't as simple as installing one npm package.
You'll need to install multiple and pick and choose which icons you want. You can:
For this article, we'll install everything so we can demonstrate the multiple ways:
// the base packages npm i -S @fortawesome/fontawesome-svg-core @fortawesome/react-fontawesome // regular icons npm i -S @fortawesome/free-regular-svg-icons npm i -S @fortawesome/pro-regular-svg-icons // solid icons npm i -S @fortawesome/free-solid-svg-icons npm i -S @fortawesome/pro-solid-svg-icons // light icons npm i -S @fortawesome/free-light-svg-icons npm i -S @fortawesome/pro-light-svg-icons // brand icons npm i -S @fortawesome/free-brands-svg-icons
Or if you prefer to get them all installed in one go:
// if you just want the free things npm i -S @fortawesome/fontawesome-svg-core @fortawesome/react-fontawesome @fortawesome/free-regular-svg-icons @fortawesome/free-solid-svg-icons @fortawesome/free-light-svg-icons @fortawesome/free-brands-svg-icons // if you have pro enabled and an .npmrc npm i -S @fortawesome/fontawesome-svg-core @fortawesome/react-fontawesome @fortawesome/free-regular-svg-icons @fortawesome/pro-regular-svg-icons @fortawesome/free-solid-svg-icons @fortawesome/pro-solid-svg-icons @fortawesome/free-light-svg-icons @fortawesome/pro-light-svg-icons @fortawesome/free-brands-svg-icons
We've installed the packages, but haven't actually used them in our application or added them to our app bundles just yet. Let's look at how we can do that now.
Creating an Icon Library
It can be tedious to import the icon you want into multiple files. Let's say you use the Twitter logo in a bunch of places, you don't want to write that whole thing everywhere.
To *import everything in one place *instead of importing each icon into each separate file, we'll create a Font Awesome library.
I like creating a
fontawesome.js in the
src folder and then importing that into
index.js. Feel free to add this file wherever as long as the components you want to use the icons in have access (are child components).
You could even do this right in your
index.js or
App.js, but I moved it out since this file can get large.
// import the library import { library } from '@fortawesome/fontawesome-svg-core'; // import your icons import { faMoneyBill } from '@fortawesome/pro-solid-svg-icons'; import { faCode, faHighlighter } from '@fortawesome/free-regular-svg-icons'; library.add( faMoneyBill, faCode, faHighlighter // more icons go here );
If you did this in its own file, then you'll need to import into
index.js:
import React from 'react'; import { render } from 'react-dom'; // import your fontawesome library import 'fontawesome'; render(<App />, document.getElementById('root'));
Importing an Entire Icon Package
This isn't recommended because you're importing every single icon into your app. Bundle size could get large but if you're so inclined, you can import everything from a package.
Let's say you wanted all the *brand icons *in
@fortawesome/free-brands-svg-icons.
import { library } from '@fortawesome/fontawesome-svg-core'; import { fab } from '@fortawesome/free-brands-svg-icons'; library.add(fab);
fab represents the entire brands package.
Importing Icons individually
The recommended way to use Font Awesome icons is to import them one by one so that your final bundle sizes are as small as possible. Only use what you need.
You can create a library from multiple icons from the different packages like so:
import { library } from '@fortawesome/fontawesome-svg-core'; import { faUserGraduate } from '@fortawesome/pro-light-svg-icons'; import { faImages } from '@fortawesome/pro-solid-svg-icons'; import { faGithubAlt, faGoogle, faFacebook, faTwitter } from '@fortawesome/free-brands-svg-icons'; library.add( faUserGraduate, faImages, faGithubAlt, faGoogle, faFacebook, faTwitter );
Importing the Same Icon from Multiple Styles
What about if you want all the types of boxing-glove for the
fab,
fas, and
fal packages? Import them all as a different name and then add them.
import { library } from '@fortawesome/fontawesome-svg-core'; import { faBoxingGlove } from '@fortawesome/pro-light-svg-icons'; import { faBoxingGlove as faBoxingGloveRegular } from '@fortawesome/regular-light-svg-icons'; import { faBoxingGlove as faBoxingGloveSolid } from '@fortawesome/solid-light-svg-icons'; library.add( faBoxingGlove, faBoxingGloveRegular, faBoxingGloveSolid );
You can then use them using the different prefixes:
<FontAwesomeIcon icon={['fal', 'boxing-glove']} /> <FontAwesomeIcon icon={['far', 'boxing-glove']} /> <FontAwesomeIcon icon={['fas', 'boxing-glove']} />
Sizing Icons
The ability to size icons was always a pain. Font Awesome 5 makes this incredibly easy. I find myself using this a ton.
Once you've installed everything and adding your icons to your Font Awesome library, let's use them and size them. I'll use the light (
fal) since that's what we use around Scotch.io.
// normal size <FontAwesomeIcon icon={['fal', 'code']} /> // named sizing <FontAwesomeIcon icon={['fal', 'code']} <FontAwesomeIcon icon={['fal', 'code']} <FontAwesomeIcon icon={['fal', 'code']} <FontAwesomeIcon icon={['fal', 'code']} // numbered sizing (up to 6) <FontAwesomeIcon icon={['fal', 'code']} <FontAwesomeIcon icon={['fal', 'code']} <FontAwesomeIcon icon={['fal', 'code']} <FontAwesomeIcon icon={['fal', 'code']} <FontAwesomeIcon icon={['fal', 'code']} // decimal sizing <FontAwesomeIcon icon={['fal', 'code']}
Coloring Icons and Backgrounds
Font Awesome has a cool way of styling the SVGs it uses. It just takes the text-color of the CSS!
If you were to place a
<p> tag where this icon were to go, what color would the paragraph be? That's the color of the icon!
<FontAwesomeIcon icon={faHome} style={{ color: 'red' }} />
Transforming Icons
Font Awesome has a nifty power transforms feature where you can string together different transforms.
<FontAwesomeIcon icon={['fal', 'home']}
You can use any of the transforms found on the Font Awesome site:
I've been using this a lot to move icons up/down/left/right to get the positioning perfect next to text or inside of buttons.
Fixed Width Icons
When using icons in a spot where they all need to be the same width and uniform, Font Awesome lets us use the fixedWidth prop. For instance, we needed fixed widths for our navigation dropdown:
<FontAwesomeIcon icon={['fal', 'home']} fixedWidth /> <FontAwesomeIcon icon={['fal', 'file-alt']} fixedWidth /> <FontAwesomeIcon icon={['fal', 'money-bill']} fixedWidth /> <FontAwesomeIcon icon={['fal', 'cog']} fixedWidth /> <FontAwesomeIcon icon={['fal', 'usd-square']} fixedWidth /> <FontAwesomeIcon icon={['fal', 'play-circle']} fixedWidth /> <FontAwesomeIcon icon={['fal', 'chess-king']} fixedWidth /> <FontAwesomeIcon icon={['fal', 'sign-out-alt']} fixedWidth />
Spinning Icons
Spinning is a cool trick that I use for form buttons when a form is processing. You can use the spinner icon to make a nice loading effect.
<FontAwesomeIcon icon={['fal', 'spinner']} spin />
You can use the
spin prop on anything!
<FontAwesomeIcon icon={['fal', 'code']} spin />
Advanced: Masking Icons
I haven't used this too much yet, but Font Awesome let's you combine two icons to make some cool effects with masking.
All you have to do is define your normal icon and then use the
mask prop to define a second icon to lay on top. The first icon will be constrained within the masking icon.
We created our Tag Filters using masking:
<FontAwesomeIcon icon={['fab', 'javascript']} mask={['fas', 'circle']} transform="grow-7 left-1.5 up-2.2" fixedWidth />
Notice how you can chain together multiple
transform props to move the inner icon to fit inside the masking icon.
We even colorize and change out the background logo with Font Awesome:
Using react-fontawesome and Icons Outside of React
This is a tricky problem to have. Let's say that your entire site isn't a single-page-app (SPA). You have a traditional site and have sprinkled React on top, much like our own Scotch.io.
It wouldn't be good to import the main SVG/JS library and then also import the react-fontawesome library. So which do we choose?
The Font Awesome team has seen this and has provided a way to use the React libraries to watch for icons outside of React components.
If you have any
<i class="fas fa-stroopwafel"></i> , we can tell Font Awesome to watch and update those using:
import { dom } from '@fortawesome/fontawesome-svg-core' dom.watch() // This will kick of the initial replacement of i to svg tags and configure a MutationObserver
MutationObserver's are a cool new web technology that allow us to watch the DOM for changes performantly. Find out more about this technique on the React Font Awesome docs.
Conclusion
Using Font Awesome and React together is a great pairing. The move to the multiple packages and styles of icons threw me off when I first started using the two together. Hopefully this helped you out and you are well on your way to adding those hundreds of great icons to your projects. | https://scotch.io/tutorials/using-font-awesome-5-with-react | CC-MAIN-2019-22 | refinedweb | 2,440 | 59.43 |
get short name for QQ, RDF, AA etc
I wonder whether there is a way to get back the short name of the function of the various rings and fields like QQ, RDF, AA, RLF, RR, etc. as a string.
For example
sage: a=QQ sage: str(a) 'Rational Field'
But I am looking for something like:
sage: function_I_want(a) 'QQ'
I guess I could predefine a dictionary like this:
shortnames={eval(name):name for name in ['QQ', 'RDF', 'AA', 'RLF', 'RR']}
and then have a function
def function_I_want(a): return shortnames[a]
But this seems a bit clumsy. Is there a better way to do this; or is there somewhere in the sage code such a dictionary already defined?
The reason I am thinking about this is the following tiny bug:
The name
QQis not an intrinsic property of the field of rational numbers. It's just that for convenience, the default toplevel environment has the binding
QQ=RationalField(). You cannot really let the Rational Field depend on it. For the bug you're referring to: You cannot assume rings have a "short, globally defined name". There are infinitely many possible finite fields and number fields already, in different representations.
thanks for the explanation! | https://ask.sagemath.org/question/36150/get-short-name-for-qq-rdf-aa-etc/?answer=36152 | CC-MAIN-2019-39 | refinedweb | 205 | 68.7 |
Sugar Install
Budget $30-100 USD
I have an account on [url removed, login to view], and they
installed sugar open source on my account for me. I
have looked through sugar and it looks great for a
friend of mine and i who are trying to get a small
business off the ground and live 2 hours apart and
don't meet regularly.
He currently uses Act 6 and i want us to use sugar as
we can access it online, but it needs to do all that
he can do in Act. It would also need to be able to
import his current list of about 550 contacts from
Act.
The upshot is, that i am a happy amateur (dangerous!!)
and can kind of find my way round in terms of web
design etc, and i can talk the talk to a degree, but
when it comes down to it i am lacking the skills to
get us moving with sugar and i do want it to work for
us as i am all for open source.
I have been into the sugar app on my server and have
set up outgoing e-mail no problem. I am struggling
with getting the inbound e-mail to work, though i can
get the test to find the inbox. I have added the code
in sugar to the crontab on my hosting account and i
have spoken to my support who gave me a different
script and i changed it to that. But i still can't get
the script to fire to pick up inbound e-mail.
As well as this i need a pointer on importing from Act
and also about the possibility of importing from excel
or other software.
Then there is mail merge. I will need to mail merge
for it to do the same as Act. I don't have an issue
with buying a word merge module, but so far all
attempts to upload modules have failed. I just don't
have the skills to do it and i need help!!!!
3 freelance font une offre moyenne de $71 pour ce travail
I can install this for you within a few hours. Please private message me if your interested. Hope to hear from you you soon. | https://www.fr.freelancer.com/projects/php/sugar-install/ | CC-MAIN-2018-26 | refinedweb | 382 | 84.2 |
Hi
I am happy to announce the 0.3.1 release of ActiveRBAC Engine. The
biggest improvement on the 0.3 release is that it runs with Rails 1.1
now.
Get your personal copy now from
There is a manual PDF with a tutorial available at
which is also included in the full downloads.):
removed_email_address@domain.invalid
You can sign up here:
Changelog
- The RDOC documentation now only contains the API reference. The
manual is available as a PDF at
releases/ActiveRbacManual.pdf (#121)
- Fixed the namespace problem of controllers & models (#119)
- Fixed a problem with the “railfix” code in Rails 1.1 (#114)
- ActiveRBAC now runs with Rails 1.1 (tested with 1.1.2) (#118)
- The files in app/model become stubs which simply import the
ActiveRBAC mixins. This should make extending Models in your own code
easier. (#112)
- Moving the constants User::DEFAULT_PASSWORD_HASH_TYPES and
User::DEFAULT_STATES to private class methods with lowercased names.
(CHECK FOR DEPENDENCY IN YOUR CODE)
- Renaming the “redirect_to” parameter/session variable nam of
LoginController to “return_to” (#103)
- Adding “all_static_permissions” method to User. (#109)
- Adding Version identifier as described in-
engines.org/engines/classes/Engine.html (#104)
- Removing 3 lines from user_controller.rb that expected
InvalidStateTransition to be thrown (#113)
- adding support for redirect_to feature to LoginController (#100)
- Adding migration for schema import
- fixing a documentation issue (#94) | https://www.ruby-forum.com/t/ann-activerbac-0-3-1-released/58513 | CC-MAIN-2018-47 | refinedweb | 225 | 50.94 |
15. Re: How do you make rich:datascroller do true pagination?kewldude Aug 2, 2007 1:17 AM (in response to rickarcmind)
All right I was able to make it work. I was also able to put a boolean variable that gets tag whenever a new query to the database needs to be done (this is to get the count(*) for the getTotalRowsCount() method). My problem right now is whenever I go through the paging...I've observed that the method getPagedDataModel() is called twice every time...that two database hits every time a page is clicked...Has anyone expereinced that? What would cause the double method call to getPagedDataModel()?
16. Re: How do you make rich:datascroller do true pagination?sergeysmirnov Aug 2, 2007 2:34 AM (in response to rickarcmind)
put phase tracker to your app and see when.
Most likely, once when the component tree is restored at the beginning of the lifecycle. The second time when the view is rendered at the end of it. If so, it is usual behaviour for JSF. You need to take care about caching the data during one request cycle (if you did not expect the data might be changed between the first and last phases)
17. Re: How do you make rich:datascroller do true pagination?kewldude Aug 2, 2007 3:37 AM (in response to rickarcmind)
Yeah you're right. Lets say I have a dataTable (row to display = 10) that has 20 items in it so the scroller will have 2 pages.
First time the dataTable was diplayed the call to getPagedDataModel happens during the Render Response of the current request.
Then page 2 was clicked the getPagedDataModel was called during the Apply Request Values stage, but the call to getPagedDataModel is the same call as the previous request @ the Render Response phase (same call meaning, the call to DB contains parameters to display the 1st page of the table).
Then another call to getPagedDataModel during the Render Response phase eventually having the parameters to display the 2nd page of the table. Is this the right behavior? What about the caching of data? What class/objects do I need to play around with?
18. Re: How do you make rich:datascroller do true pagination?kewldude Aug 7, 2007 2:02 AM (in response to rickarcmind)
^ anyone has any suggestion regarding my problem?
19. Re: How do you make rich:datascroller do true pagination?ishabalov Aug 7, 2007 5:30 PM (in response to rickarcmind)
You need to play with rich:dataTable with
org.ajax4jsf.model.ExtendedDataModel and org.ajax4jsf.model.SerializableDataModel
That may help you.
I has hope to put together small example of this, but it is still just a plans.
20. Re: How do you make rich:datascroller do true pagination?amitev Aug 7, 2007 7:11 PM (in response to rickarcmind)
Igor, an example for this would be great. There are many people that want to know how to do it "the right way"
21. Re: How do you make rich:datascroller do true pagination?ishabalov Aug 8, 2007 11:17 PM (in response to rickarcmind)
Adrian, you can look here :-)
22. Re: How do you make rich:datascroller do true pagination?dinesh.gupta Nov 6, 2007 6:43 AM (in response to rickarcmind)
Hi,
In data Iteration section, under datatable, extendedDataTable, it doesn't render amount column In IE6 but correctly displayed in Mozilla 2.0.
Any one idea about this???
Regards,
Dinesh Gupta
23. Re: How do you make rich:datascroller do true pagination?dinesh.gupta Nov 7, 2007 7:57 AM (in response to rickarcmind)
Hi,
Please tell me how can I get currently displayed page no
and records are displaying from - to .
for example I want to display this format.
20 -24 of 2500 First previous 1|2|3|4| New |Last
can I get no of record currently displayed.
Beacuse We using Lucene so we require from & to value and page no.
Please Help me.
Reply ASAP.
Thanks in advance.
Regards
dinesh.gupta01@hotmail.com
24. Re: How do you make rich:datascroller do true pagination?ilya_shaikovsky Nov 8, 2007 4:23 AM (in response to rickarcmind)
pageIndexVar and attribute could be used to get current page number.
25. Re: How do you make rich:datascroller do true pagination?maxmustang Nov 24, 2007 3:21 PM (in response to rickarcmind)
Hi all,
I found a very interesting article with example:
Its in german but the code and solution seems be there; Ill need it too soon :-)
Max
26. Re: How do you make rich:datascroller do true pagination?vladimir.kovalyuk May 15, 2008 2:03 PM (in response to rickarcmind)
Reference from the manual implies we could find usable example in this thread. Having read all the pages I've found just some excerpts that could be threated as something to start with not more. I'll give it a try but nevertheless I expected something simplier.
I believe rich:datascroller cannot be considered as first class component because:
1. it is not scallable at all
2. it is tied to datatable and cannot be used separately
From my perspective binding datascroller to datatable was a short-sighted decision. Generally speaking we need to navigate pages whatever they consist of. DataTable is a case but not the rule.
I think the datascroller should have been fine with two attributes: the total number of pages and the current page. That's all!
I'd suggest publishing planned changes to richfaces prior to implementing them in order to discuss pros and cons with the community (and avoid strange APIs like this or tree). And RTFM MVC for emotional posters.
27. Re: How do you make rich:datascroller do true pagination?bostone May 15, 2008 4:12 PM (in response to rickarcmind)
Will all this insanity work with scrollableDataTable? I wasn't able successfully use extended data model with it
P.S. Mr. Rick - I do enjoy your sense of humor :)
28. Re: How do you make rich:datascroller do true pagination?ngotau1989 Jun 24, 2011 3:04 AM (in response to rickarcmind)
calm down guys
ok, let do it in asian way.
The principle is simple.
View layer catch paging event, send page number to the controller.
<f:facet
<rich:dataScroller
</f:facet>
Controller now has page number, pageSize already defined. So it knows the range of records to load from database.
public String search(){
userSearchCondition.setStart((page-1) * userSearchCondition.getPageSize());
SearchResult<User> searchResult = userService.search(userSearchCondition);
users = new ArrayList<UserAdapter>();
for(User user : searchResult.getRows()){
users.add(new UserAdapter(user));
}
users = new ResultList<UserAdapter>(users, searchResult.getTotalRow(), userSearchCondition.getPageSize());
return "success";
}
public void setPage(int page) {
this.page = page;
search();
}
and here's the trick
public class ResultList<T> extends AbstractList<T> {
private final List<T> rows;
private final int total;
private int pageSize;
public ResultList(List<T> rows, int total, int pageSize){
this.pageSize = pageSize;
this.rows = rows;
this.total = total;
}
@Override
public T get(int index) {
index = index%pageSize;
return rows.get(index);
}
@Override
public int size() {
return total;
}
}
29. Re: How do you make rich:datascroller do true pagination?itoito Sep 7, 2011 9:34 AM (in response to ngotau1989)
Hi, I had the same problem last week and I think a different solution only with RichFaces.
- First I load the 10 first records by Java in a List
- Then, I do a COUNT of ALL the records.
- I load the rest of the List with the same empty Object as to the total; they are pointing to the same empty Object, no cost os memory.
- Then I have the paginator done.
<rich:datascroller
- In myBean, on the method "setPageNumber", I load the data for this page and replace the empty Object with the Data Base Object, showing it in the table.
PROS of this method:
- It´s Simple
- It´s clear
- No more jar
- Pages cached (I have a page in memory if I return to this page)
Thanks for your ideas | https://developer.jboss.org/message/625250 | CC-MAIN-2021-49 | refinedweb | 1,336 | 66.03 |
I am totally new to any type of programing and I was hesitant in posting this question but I cannot figure out what is wrong with this I am currently reading the C++ primer plus book and I copied this code and it doesn't want to run I have know idea why it doesn't work
the errors I am getting is thisthe errors I am getting is thisCode://bodini.cpp -- using escape sequences #include <iostream> using namespace std; int main() { cout << "\aOperation "HyperHyde" is now activated!\n"; cout << "Enter your agent code:_________\b\b\b\b\b\b"; long code; cin >> code; cout << "\aYou entered " << code << " ...\n"; cout << "\aCode verified! Proceed with Plan Z3!\n"; cin.get(); return 0; }
but I can't see anything wrong with itbut I can't see anything wrong with it6 C:\Documents and Settings\Server2003\My Documents\C++ Projects\Bondini.cpp expected `;' before "HyperHyde" | http://cboard.cprogramming.com/cplusplus-programming/97640-could-someone-explain-what-i-did-wrong.html | CC-MAIN-2014-41 | refinedweb | 153 | 60.95 |
At Factorial, we maintain an engineering Handbook where we document aspects such as common abstractions, programming principles and documentation of our architecture.
Among these, there are a set of aphorisms stemming from coding best practices and common pitfalls we have been encountering during the last 4 years building our product. Let's share some of these.
Beware, these aphorisms are language-agnostic and, thus, you might find them more or less regularly in your day to day depending on the characteristics of your preferred programming language.
Depend on contracts rather than data structures
This is my personal favorite because it's so simple yet so ubiquitous. Put
bluntly, data structures make for lousy interfaces: they are often opaque, mutable and nullable, all characteristics you wouldn't want for an interface — and yet we use them as such constantly.
The most common example is accepting a hash as a method argument:
def method(hash) puts hash[:foo][:bar] # This can print something, null or raise an unexpected error. end
NOTE: This is especially prevalent in languages that make it easier to work with data structures and nullables such as Ruby or Javascript.
Using a data structure introduces an implicit dependency between your method and the data structure's shape, a dependency that could have been easily avoided by simply passing the expected value:
def method(bar: nil) puts bar # This can print something or null end
In cases where the dependency with the data structure cannot be avoided, isolate it behind an API:
def method(hash) puts HashAccessor.new(hash).get(:bar) # This can print something or null end class HashAccessor def initialize(hash) @hash = hash end def get(attr) # ... end end
Although this might come across as a superfluous refactor it minimizes the spread of the dependency and paves the way for a future refactor in which we would get rid of this data structure dependency altogether.
Avoid null values
Hardly a surprise. All the new cool kids in the block are using it.
The problem with null values was already hinted at in the previous aphorism: it's a very hard contract to enforce. This will result in errors happening far away from the underlying issue:
# main.rb ROLES = { admin: 'admin', manager: 'manager' } role = ROLES[user.role] user_presenter(role) # user_presenter.rb def user_presenter(role) return 'Admin' if role == ROLES[:admin] ManagerRolePresenter.render(role) end # manager_role_presenter.rb def manager_role_presenter(role) "#{role.capitalize} – Lead" # << Unexpected `capitalize` message for `nil`! This is very far from the origin of the `nil` and hard to fix end
To avoid null values you can enforce types (if your language supports it), implement the optional pattern, use a custom contract with data validation (such as a Struct) or directly raise an exception closer to the null source.
Code should be greppable
Or, put another way, do not be too smart for your own good.
As a rule of thumb, your teammates should be able to easily navigate your code using only a pattern-matching tool like
grep (or
ctrl-f for those using fancier editors).
If that weren't the case it probably means your code is not explicit enough and is hiding dependencies with dangerous techniques like meta-programming:
module Jon class Snow # ... end end # main.rb def main "Jon::#{name}".constantize.new # This can blow up end
As if the potential error wasn't bad enough, this technique obfuscates the
author's intention and makes navigating your code more difficult than necessary.
To avoid these pitfalls and improve code quality make extensive use of static analysis tools
such as linters or static type analysis.
In the particular case of meta-programming, replace it with a good ol' Map or switch statement.
Depend on abstractions rather concretions
Lastly, a general principle. If you take a look at most of our previous examples and their proposed solutions, they all share a common property: the solution implements an abstraction.
Do not depend on data structures, depend on an abstraction that gives you access to the underlying data. Do not depend on nullable values, depend on an abstraction that handles the nullable state for you... You can see the pattern.
The advantage of abstractions over concretions is that they change less often, and this is good for code maintainability:
You often cannot control the amount of changes over time, but you can control the number of dependencies. Stay away from zone B and you will be fine.
That's it! We hope you will find some useful tips from this list that you can put to good use in your daily coding practice.
Cheers.
Discussion | https://practicaldev-herokuapp-com.global.ssl.fastly.net/factorial/a-set-of-programming-aphorisms-at-factorial-3j0m | CC-MAIN-2020-34 | refinedweb | 767 | 52.09 |
> > <xsl:template
> > <xsl:param
> > <xsl:param
> > <xsl:param
> > <xsl:param
> >
> > <create source="{$source}" target="graphics/{$id}-header.jpg"
> >
> > <processor name="xslt">
> > <parameter name="stylesheet"
> >
> > </processor>
> > </create>
> > </xsl:template>
>
> whoa! is this part of the XSLT spec or is it Stylebook specific. I mean
> the ability to apply a separate stylesheet to some element (subtree of
> elements) in XML document and save the results to disk or include it in
> the doc, instead of the original element.
<create> as far as I know is a Stylebook tag (I'm guessing this but it is
not in the XSL namespace so can't be part of the XSL spec).
> What I would like to do is to hand off the processing of tag2 to
> subSheet.xsl instead of processing it within rootSheet.xsl. And after
> processing, tag2 would be replaced with whatever contents come out of
> transformation with subSheet.xsl:
> <root stylesheet="rootSheet.xsl">
> <tag1 />
> <tag2 stylesheet="subSheet.xsl">
> <subtag1 />
> </tag2>
> </root>
>
> The example is probably very incorrect from the syntax point of view,
> but hopefully you understand what I mean...
I don't know how to do this but you may want to look at the spec,, specifically xsl:import and xsl:include.
Ross Burton | http://mail-archives.apache.org/mod_mbox/cocoon-dev/200005.mbox/%3C00c001bfc4fd$0bd2d640$e06b8cd4@eddie%3E | CC-MAIN-2015-32 | refinedweb | 204 | 64.81 |
Subsets and Splits
No community queries yet
The top public SQL queries from the community will appear here once available.