Subject: Re: [boost] Boost.DLL formal review
From: Antony Polukhin (antoshkka_at_[hidden])
Date: 2015-07-11 17:50:46
2015-07-11 14:51 GMT+03:00 Bjorn Reese <breese_at_[hidden]>:
> It is unclear to me whether there are any limitations about aliasing.
> For instance, can I alias constructors, operators, or functions in an
> anonymous namespace?
>
Functions and classes from anonymous namespaces can be exported using
aliases. There are no anonymous-namespace-related restrictions.
> -.
>
This is avoided on Windows (we have the Boost.Winapi module that takes
care of that), but for Linux, MacOS, and other platforms this could be an
issue. Extremely heavy headers are avoided (for example, those that
contain platform-specific binary format descriptions). Taking care of the
others would require a new module, Boost.POSIX, which is not in my plans right now.
> -]"
>
I'll fix those issues.
> The design rationale and the FAQ seems to be in disagreement about ABI
> portability.
>
Did not get that. At what place do they disagree? ABI != portability.
-- Best regards, Antony Polukhin
Boost list run by bdawes at acm.org, gregod at cs.rpi.edu, cpdaniel at pacbell.net, john at johnmaddock.co.uk
Tech Blog, ASP.Net, VB.Net, C#.Net, Programming Help, Help Guide
Newest Topics
Using the ASP.NET AJAX UpdatePanel (ASP.Net)
As web developers we are faced with many obstacles unique to the medium in which we work. The internet provides an unprecedented audience for our work, but it also bears the burden that its stateless structure often makes it a challenge to develop applications that provide the swift and familiar functionality of desktop software. Today's trends are continually headed toward an ever-growing interconnected network of people, computers, and information, and the demand for seamless integration is growing amongst end users.
Background
A huge step toward bridging the gap between the flexibility of the thick desktop client and that of the thin web client has been the development of client-side callbacks, which is a way to harness the client's browser to communicate with the web server, effectively targeting information that changes - but leaving the remaining markup untouched. This technology grew to be known as AJAX, which stands for Asynchronous JavaScript and XML. The AJAX movement has become incredibly large as people have discovered how to harness the ideas and benefits of AJAX techniques and applied them to create richer and more interactive applications on the web. One striking aspect of true AJAX in action is that it involves a real knowledge...
Using System.Windows.Threading DispatcherTimer.Tick event in VB.Net (VB.Net)
The DispatcherTimer.Tick event is the best way in .Net to create an animation loop for either a game or a graphical application, but use of it in VB.Net is tricky. I've seen several articles posted around the internet on how to use the DispatcherTimer's Tick event with C#, and I will show you that here in case you're a C# programmer who happened upon this article:
C#
using System.Windows.Threading;
private DispatcherTimer _dpTimer;
_dpTimer = new DispatcherTimer();
_dpTimer.Interval = TimeSpan.FromMilliseconds(20);
_dpTimer.Tick += new EventHandler(CalledFunction);
_dpTimer.Start();
void CalledFunction(object sender, EventArgs e)
{
    // Perform your per-tick work here
}
What you tend to find when translating this code to VB.Net is that the .Tick += new EventHandler line will not work; it will give you a confusing syntax error about handlers.
To make this work, you have to make sure you're importing the System.Windows.Threading namespace into your form.
Here's how I addressed using the .Tick function in VB.Net
VB
Imports System.Windows.Threading

Private dpTimer As DispatcherTimer

Public Sub New()
    dpTimer = New DispatcherTimer()
    dpTimer.Interval = TimeSpan.FromMilliseconds(20)
    AddHandler dpTimer.Tick, AddressOf CalledSub
    dpTimer.Start()
End Sub

Private Sub CalledSub(ByVal sender As Object, ByVal e As EventArgs)
    'Perform your per-tick work here
End Sub
The 20-millisecond interval will make your loop run about 50 times per second (assuming there isn't any intense workload for the computer running it). VB.Net also will not automatically recognize the .Tick when you add it to the dpTimer...
Working with a CrystalReportViewer with an AJAX UpdatePanel (ASP.NET)
The CrystalReportViewer built into Visual Studio provides a simple and powerful way to display Crystal Reports. The Report Engine actively works behind the scenes to produce interactive reports appropriate for a web-based environment. One negative aspect of the CrystalReportViewer for ASP.NET is, however, that it requires a large amount of postbacks to do its work. This of course produces undesirable screen flicker. Anyone familiar with the recent trend in web development toward seamless transitions via AJAX will know that this technology has become tightly integrated into Visual Studio in its latest 2008 release, and is available as an extension to 2005. The logical idea is, of course, to throw AJAX into the mix to solve this problem. The problem is, it doesn't quite work as expected...
The CrystalReportViewer itself produces output that is not directly compatible with the ASP.NET AJAX UpdatePanel control. What you will run into quickly is that the Export and Print functionality provided by the CrystalReportViewer will not function - no print or export dialog boxes will appear to the user. The solution is more of a compromise - you cannot at the time being (until BusinessObjects produces an updated version) use the UpdatePanel to entirely suppress...
Defining a Custom ORDER BY Sort Order (SQL Server)
The SQL ORDER BY clause is very useful for sorting your results quickly and easily. An underlying problem with ORDER BY is that, in its simplest implementation, we are quite limited in how it actually functions. We leave the SQL engine of our database of choice to decide what the order is. Typically we could use the ORDER BY clause to either modify the sort order -
SELECT * FROM EMPLOYEES ORDER BY EMP_ID [ASC|DESC]
or even numerically order the columns to sort based on the SELECT clause -
SELECT EMP_ID, NAME FROM EMPLOYEES ORDER BY 1,2
These approaches give us some control over how this clause functions; however, it isn't very friendly or functional when we have very specific needs. What if we need to specify a custom order that the basic functions of ORDER BY are incapable of?
Using our Employee example again, what if we need to order by a specific seniority level - say we want the supervisors to appear first, then associates, then executives. We may get lucky with a standard ORDER BY, but odds are it won't work out as we intended. For simplicity's sake, the seniority level is a field within the EMPLOYEE table.
To solve this...
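The article cuts off before showing the solution. The standard approach is a CASE expression inside the ORDER BY clause, which maps each seniority value to an explicit rank. The sketch below is illustrative, not from the article: it runs the CASE against a throwaway SQLite database via Python purely so the example is executable (the same CASE syntax works in SQL Server), and the EMPLOYEES schema and SENIORITY values are assumptions based on the article's scenario.

```python
import sqlite3

# Hypothetical EMPLOYEES table matching the article's scenario.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE EMPLOYEES (EMP_ID INTEGER, NAME TEXT, SENIORITY TEXT)")
conn.executemany(
    "INSERT INTO EMPLOYEES VALUES (?, ?, ?)",
    [(1, "Ann", "executive"), (2, "Bob", "supervisor"), (3, "Cid", "associate")],
)

# The CASE expression assigns each seniority level an explicit rank,
# giving a fully custom sort order: supervisors, then associates,
# then executives.
rows = conn.execute(
    """
    SELECT NAME, SENIORITY FROM EMPLOYEES
    ORDER BY CASE SENIORITY
               WHEN 'supervisor' THEN 1
               WHEN 'associate'  THEN 2
               WHEN 'executive'  THEN 3
               ELSE 4
             END
    """
).fetchall()
print(rows)  # [('Bob', 'supervisor'), ('Cid', 'associate'), ('Ann', 'executive')]
```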
Formatting Numbers using FormatNumber() Function (VB.Net)
Formatting numbers in programming becomes a very important task, especially when dealing with percentages, currency, and dates. For more about formatting dates, visit this article:
First, I'm going to start with general number formatting and the FormatNumber function. The return value of this function is the number formatted in the manner specified in the parameters being passed. What are these parameters? Here is what it looks like:
FormatNumber(value [, trailing digits] [, leading digit] [, parentheses] [, group digits])
All of the parameters for this function are optional, so you can simply call FormatNumber(9876.54321) and you will get back 9,876.54 as the return. This is good for a quick & simple two-decimal-place return with commas, because 2 trailing digits is the default.
To specify the decimals, we'll pass that number in the trailing digits parameter: FormatNumber(9876.54321, 6) and get a return of 9,876.543210 because we requested 6 trailing digits.
For the leading digit parameter, we pass a Boolean value of True or False to say whether you want a leading digit to show up... so when your number is less than 1, you're specifying whether you want it to appear as 0.99 or just...
Creating a custom collection object (VB.Net)
Collections are used all over in .Net programming, and there are quite a few native collections that you can utilize... however, many programmers find the need to create their own collection object, because collecting your instances of objects into arrays works but doesn't give you any additional properties. When I create new objects, I create both a Collection and a regular instance of the object in the same .vb file (the same thing pertains to objects in C# as well, though the syntax will be different).
I start off with this:
Public Class NewObjectCollection
Inherits CollectionBase
End Class
Public Class NewObject
End Class
Go ahead and set your variables and properties for the regular object like you normally would, but hold off on that for the collection until you fully understand what the collection does.
Once you've got all the properties set up, then inside your collection object, you'll want a method to fill the collection somehow. For most of us, this is a SELECT from a database table to get all the records and we're going to be storing each as an object in the collection object. Before we create the fill method, we want to create our...
Basic Beginners Guide to Writing XAML for Silverlight (Silverlight)
XAML (pronounced "zammel"), which stands for Extensible Application Markup Language, is what we use when we're drawing the output for Silverlight applications. You can get XAML authoring tools such as Microsoft Expression Blend, but most of us developers who are learning to program in Silverlight probably don't have that tool or any others. For now, in starting your learning experience, you're going to have to write the XAML. Visual Studio 2008 allows us to drag & drop some of the controls from the Toolbox to the XAML window, saving some time, but because there isn't a properties window yet, you have to know what the commands are to make edits. When you start, you should see something like this:
<UserControl x:Class="SilverlightApplication1.Page"
    xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"
    xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml">
<Grid x:Name="LayoutRoot" Background="Blue">
</Grid>
</UserControl>
You will want to drag your new controls or type them into the Grid section of the basic layout. Above, I've converted the background color to a basic blue.
First, I'm going to drag a TextBlock in:
<TextBlock></TextBlock>
This is much like a Label in ASP.Net pages (.aspx). Well, in order to actually see...
PageTitle.InnerText bug in Visual Studio.NET 2003 (Visual Studio)
One tricky bug in Visual Studio 2003 I ran across recently involved an exception being thrown when programmatically setting a page title by reading in a key from the web.config file. We first need to define our title as being run on the server side:
<title runat="server" id="PageTitle">
We can then set this value dynamically in the code-behind:
PageTitle.InnerText = ConfigurationSettings.AppSettings("title")
The exception in particular stated that a control being referenced does not exist, which is very strange - nothing is incorrect with that particular bit of code. I scratched my head at this for a while. The html tag we defined was clearly set up to run at the server level, and our Page_Load() code is valid. The problem is that Visual Studio.NET 2003 gets too helpful here and actually removes the runat="server" portion of our <title> tag. This is an intermittent issue, but it is something to keep in the back of your mind when working with the page through a configuration file!
Dynamically inserting JavaScript with RegisterClientScriptInclude (ASP.NET)
ASP.NET offers programmers a wealth of options and features to provide users with deep and interactive data-driven applications. As its name implies, this functionality is made possible by the server it resides on. As ASP.NET developers, we are (for the most part) restricted to working exclusively on the server end - the client is fairly well protected from us accessing their machine. This works well most of the time - but what about when we want to harness the power of the user's browser or computer? This is an arena that JavaScript has reigned supreme over for a number of years. While the two technologies are separate in their own right, we can use the .NET library to mix them together, allowing us to programmatically utilize JavaScript functionality in our ASP.NET code-behinds.
A project I recently worked on decided very late in the project lifecycle that the free Google Analytics service was to be included on every page in our application. At this point we were looking at a fairly intensive manual process to update all of our pages - and possibly even worse was that there was not a real intuitive way to safeguard this in...
How to Use Paging with the .Net Repeater Control (ASP.Net)
One of the most common tasks when building ASP.NET web applications is displaying data. When a web application contains a high volume of data, paging functionality can help the user view each page easily and clearly. Unlike the paging functionality built into the ASP.Net DataGrid control, the ASP.Net Repeater control provides more user-defined flexibility. The following code will show you how to create a paging control with the ASP.Net Repeater control.
On the html page:
<html>
<head>
</head>
<body>
<form runat="server">
<table>
<tr>
<td align="center"><asp:HyperLink ID="lnkPrev" Runat="server">Previous</asp:HyperLink> <asp:HyperLink ID="lnkNext" Runat="server">Next</asp:HyperLink>
</td>
</tr>
<asp:repeater id="rptTemp" runat="server">
<ItemTemplate>
<tr>
<td >
<asp:Label ID="lblID" Runat="server" Text='<%# DataBinder.Eval(Container.DataItem, "ID") %>'></asp:Label>
</td>
<td >
<asp:Label ID="lblName" Runat="server" Text='<%# DataBinder.Eval(Container.DataItem, "Name") %>'></asp:Label>
</td>
<td >
<asp:Label ID="lblDesc" Runat="server" Text='<%# DataBinder.Eval(Container.DataItem, "Description") %>'></asp:Label>
</td>
</tr>
</ItemTemplate>
</asp:repeater>
</table>
</form>
</body>
</html>
In the code behind:
Dim objPds As PagedDataSource = New PagedDataSource
Dim dv As DataView = New DataView(bindTable) ' bindTable holds the data to display
objPds.DataSource = dv
objPds.AllowPaging = True
objPds.PageSize = 10 ' how many rows you wish to display per page
If objPds.PageCount > 1 Then
lnkPrev.Visible = True
lnkNext.Visible = True
Dim curpage As Integer
If Not IsNothing(Request.QueryString("Page")) Then
curpage = Convert.ToInt32(Request.QueryString("Page"))
Else
curpage = 1
End If
objPds.CurrentPageIndex = curpage - 1
...
About StellarPC.com
StellarPC.com is a new, free tech community where we encourage intelligent discussion of programming techniques, website development, troubleshooting, and many other areas of IT.
SIGPENDING(2)                                                    SIGPENDING(2)

NAME
     sigpending - return set of signals pending for thread (POSIX)

SYNOPSIS
     #include <signal.h>

     int sigpending (sigset_t *maskptr);

DESCRIPTION
     sigpending returns the set of signals pending for the calling thread
     (i.e., blocked from delivery) in the space pointed to by maskptr.
     Routines described in sigsetops(3) are used to examine the returned
     signal set.

ERRORS
     sigpending will fail if:

     [EFAULT]       maskptr points to memory that is not a part of the
                    process's valid address space.

SEE ALSO
     kill(2), sigaction(2), sigprocmask(2), sigsuspend(2), sigsetops(3).

DIAGNOSTICS
     A 0 value indicates that the call succeeded. A -1 return value
     indicates an error occurred and errno is set to indicate the reason.

WARNING
     The POSIX and System V signal facilities have different semantics.
     Using both facilities in the same program is strongly discouraged and
     will result in unpredictable behavior.
Generics in XAML
The .NET Framework XAML Services as implemented in System.Xaml provides support for using generic CLR types. This support includes specifying the constraints of generics as a type argument and enforcing the constraint by calling the appropriate Add method for generic collection cases. This topic describes aspects of using and referencing generic types in XAML.
x:TypeArguments is a directive defined by the XAML language. When it is used as a member of a XAML type that is backed by a generic type, x:TypeArguments passes constraining type arguments of the generic to the backing constructor. For reference syntax that pertains to .NET Framework XAML Services use of x:TypeArguments, which includes syntax examples, see x:TypeArguments Directive.
Because x:TypeArguments takes a string, and has type converter backing, it is typically declared in XAML usage as an attribute.
In the XAML node stream, the information declared by x:TypeArguments can be obtained from XamlType.TypeArguments at a StartObject position in the node stream. The return value of XamlType.TypeArguments is a list of XamlType values. Determination of whether a XAML type represents a generic type can be made by calling XamlType.IsGeneric.
In XAML, a generic type must always be represented as a constrained generic; an unconstrained generic is never present in the XAML type system or a XAML node stream and cannot be represented in XAML markup. A generic can be referenced within XAML attribute syntax, for cases where it is a nested type constraint of a generic being referenced by x:TypeArguments, or for cases where x:Type supplies a CLR type reference for a generic type. This is supported through the XamlTypeTypeConverter class defined by .NET Framework XAML Services.
The XAML attribute syntax form enabled by XamlTypeTypeConverter alters the typical MSIL / CLR syntax convention that uses angle brackets for types and constraints of generics, and instead substitutes parentheses for the constraint container. For an example, see x:TypeArguments Directive.
If you use XAML 2009 instead of mapping the CLR base types to obtain XAML types for common language primitives, you can use XAML 2009 built-in types as information items in x:TypeArguments. For example, you could declare the following (prefix mappings not shown, but x is the XAML language XAML namespace for XAML 2009):
For XAML 2006 usage when specifically targeting WPF, x:Class must also be provided on the same element as x:TypeArguments, and that element must be the root element in a XAML document. The root element must map to a generic type with at least one type argument. An example is PageFunction<T>.
Possible workarounds to support generic usages include defining a custom markup extension that can return generic types, or providing a wrapping class definition that derives from a generic type but flattens the generic constraint in its own class definition.
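The wrapping-class workaround mentioned above usually takes the following shape (an illustrative sketch, not code from this page): a non-generic subclass gives the closed generic type a plain name that XAML 2006 can reference.

```csharp
using System.Collections.Generic;

// XAML 2006 cannot reference List<string> directly, but it can
// reference the non-generic name StringList.
public class StringList : List<string> { }
```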
In WPF and targeting .NET Framework 4, you can use XAML 2009 features together with x:TypeArguments, but only for loose XAML (XAML that is not markup-compiled). Markup-compiled XAML for WPF and the BAML form of XAML do not currently support the XAML 2009 keywords and features.
Custom workflows in Windows Workflow Foundation for .NET Framework 3.5 do not support generic XAML usage.
Better way to initialize a Java builder class from a Clojure map?
Hi everyone,
I'm trying to write a function to wrap CsvReadOptions.Builder in Clojure.
The function will take a map like this: {:header true :locale "US"}, and configure the builder according to the map.
(defn reader-options [opts]
  (let [b (CsvReadOptions$Builder.)]
    (cond-> b
      (contains? opts :locale) (.locale (:locale opts))
      (contains? opts :header) (.header (:header opts))
      true (.build))))
Sorry if it is too much to ask, but is there a better way in Clojure to accomplish this? Each key ends up duplicated within a single line, as in:
(contains? opts :locale) (.locale (:locale opts))
Thank you again for any suggestions.
In this problem you have several factors present:
- Optional parameters
- Mutable objects
- Java interop
This is the reason you are getting locale and header replicated three times on each line. I cannot think of a straightforward way of reducing this duplication. If this were a common pattern in your application, you could write a macro (compiler extension) to make it easier. Unless this is a very frequent occurrence in your code, the cost (in complexity, documentation, misunderstandings, etc.) is going to greatly exceed the benefit.
Kind of like buying a new car instead of cleaning the old car. Except in extreme circumstances, the cost is probably not worth the benefit.
;)
You could use destructuring in let:

(let [{:keys [a b c]} {:a 1 :b false}]
  [a b c])
;; => [1 false nil]
or in function arguments:
(defn args-demo [{:keys [a b c]}]
  [a b c])

(args-demo {:a 1 :b false})
;; => [1 false nil]
The issue is that it binds to nil if a specific key is not present in the map. If your values can be nil then it won't work.
You could use some "marker" values for not-present values:

(let [{:keys [a b c] :or {a ::absent b ::absent c ::absent}} {:a 1 :b nil}]
  (cond-> {}
    (not= a ::absent) (assoc :a2 a)
    (not= b ::absent) (assoc :b2 b)
    (not= c ::absent) (assoc :c2 c)))
;; => {:a2 1, :b2 nil}
You could also create a macro:
(defmacro when-key-present [k m & body]
  `(let [{:keys [~k] :or {~k ::not-found}} ~m]
     (when (not= ~k ::not-found)
       ~@body)))

(when-key-present a {:a false :b nil}
  (println "a was" a))
;; prints "a was false"

(when-key-present b {:a false :b nil}
  (println "b was" b))
;; prints "b was nil"

(when-key-present c {:a false :b nil}
  (println "c was" c))
;; doesn't print anything
And your function would become:
(defn reader-options [opts]
  (let [builder (CsvReadOptions$Builder.)]
    (when-key-present locale opts
      (.locale builder locale))
    (when-key-present header opts
      (.header builder header))
    (.build builder)))
You could go even further and create a macro that assumes the key in the opts map is identical to the builder method that should be invoked, so you could then use it like:
(let [builder (CsvReadOptions$Builder.)]
  (doseq [k (keys opts)]
    (set-builder-property builder k opts)))
but this becomes more and more magical and harder to understand, so I would think twice before resorting to macros.
Lambda Lambda Lambda
Posted May 20, 2013 at 10:13 AM | categories: programming
Updated June 26, 2013 at 06:56 PM
Is that some kind of fraternity? of anonymous functions? What is that!? There are many times when you need a small, callable function in python, and it is inconvenient to have to use def to create a named function. Lambda functions solve this problem. Let us look at some examples. First, we create a lambda function, and assign it to a variable. Then we show that the variable is a function, and that we can call it with an argument.
f = lambda x: 2*x
print f
print f(2)
<function <lambda> at 0x0000000001E6AAC8>
4
We can have more than one argument:
f = lambda x, y: x + y
print f
print f(2, 3)
<function <lambda> at 0x0000000001E3AAC8>
5
And default arguments:
f = lambda x, y=3: x + y
print f
print f(2)
print f(4, 1)
<function <lambda> at 0x0000000001E9AAC8>
5
5
It is also possible to have arbitrary numbers of positional arguments. Here is an example that provides the sum of an arbitrary number of arguments.
import operator

f = lambda *x: reduce(operator.add, x)
print f
print f(1)
print f(1, 2)
print f(1, 2, 3)
<function <lambda> at 0x0000000001DFAAC8>
1
3
6
You can also make arbitrary keyword arguments. Here we make a function that simply returns the kwargs as a dictionary. This feature may be helpful in passing kwargs to other functions.
f = lambda **kwargs: kwargs
print f(a=1, b=3)
{'a': 1, 'b': 3}
Of course, you can combine these options. Here is a function with all the options.
f = lambda a, b=4, *args, **kwargs: (a, b, args, kwargs)
print f('required', 3, 'optional-positional', g=4)
('required', 3, ('optional-positional',), {'g': 4})
One of the primary limitations of lambda functions is they are limited to single expressions. They also do not have documentation strings, so it can be difficult to understand what they were written for later.
1 Applications of lambda functions
Lambda functions are used in places where you need a function, but may not want to define one using def. For example, say you want to solve the nonlinear equation \(\sqrt{x} = 2.5\).
from scipy.optimize import fsolve
import numpy as np

sol, = fsolve(lambda x: 2.5 - np.sqrt(x), 8)
print sol
6.25
Another time to use lambda functions is if you want to set a particular value of a parameter in a function. Say we have a function with an independent variable, \(x\), and a parameter \(a\), i.e. \(f(x; a)\). If we want to find a solution \(f(x; a) = 0\) for some value of \(a\), we can use a lambda function to make a function of the single variable \(x\). Here is an example.
from scipy.optimize import fsolve
import numpy as np

def func(x, a):
    return a * np.sqrt(x) - 4.0

sol, = fsolve(lambda x: func(x, 3.2), 3)
print sol
1.5625
Any function that takes a function as an argument can accept a lambda function. Here we use a lambda function that adds two numbers in the reduce function to sum a list of numbers.
print reduce(lambda x, y: x + y, [0, 1, 2, 3, 4])
10
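Another common place for a lambda is the key argument of sorted (an extra example in the same spirit, not from the original post):

```python
# Sort a list of (id, label) pairs by their second element.
pairs = [(1, 'b'), (2, 'a'), (3, 'c')]
print(sorted(pairs, key=lambda p: p[1]))
# [(2, 'a'), (1, 'b'), (3, 'c')]
```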
We can evaluate the integral \(\int_0^2 x^2 dx\) with a lambda function.
from scipy.integrate import quad

print quad(lambda x: x**2, 0, 2)
(2.666666666666667, 2.960594732333751e-14)
2 Summary
Lambda functions can be helpful. They are never necessary. You can always define a function using def, but for some small, single-use functions, a lambda function could make sense. Lambda functions have some limitations, including that they are limited to a single expression, and they lack documentation strings.
Copyright (C) 2013 by John Kitchin. See the License for information about copying. | http://kitchingroup.cheme.cmu.edu/blog/2013/05/20/Lambda-Lambda-Lambda/ | CC-MAIN-2018-30 | refinedweb | 657 | 72.66 |
Code should be reusable. An expression traversing a data structure shouldn’t be written multiple times, it should be pulled out into a generic traversal function. At a larger scale, a random number generator shouldn’t be written multiple times, but rather pulled out into a module that can be used by others.
It is important that such abstractions must be done carefully. Often times a type is visible to the caller, and if the type is not handled carefully the abstraction can leak.
For example, a set with fast random indexing (useful for random walks on a graph) can be implemented with a sorted Vector. However, if the Vector type is leaked, the user can use this knowledge to violate the invariant.
import scala.annotation.tailrec

/** (i in repr, position of i in repr) */
def binarySearch(i: Int, repr: Vector[Int]): (Boolean, Int) = /* elided */

// The body of this object was garbled in this copy; it is reconstructed
// here from the usage below.
object IntSet {
  type Repr = Vector[Int]
  val empty: Repr = Vector()
  def add(i: Int, r: Repr): Repr = binarySearch(i, r) match {
    case (true, _)  => r
    case (false, p) => (r.take(p) :+ i) ++ r.drop(p)
  }
  def contains(i: Int, r: Repr): Boolean = binarySearch(i, r)._1
}
import IntSet._
// import IntSet._

val good = add(1, add(10, add(5, empty)))
// good: IntSet.Repr = Vector(1, 5, 10)

val goodResult = contains(10, good)
// goodResult: Boolean = true

val bad = good.reverse // We know it's a Vector!
// bad: scala.collection.immutable.Vector[Int] = Vector(10, 5, 1)

val badResult = contains(10, bad)
// badResult: Boolean = false

val bad2 = Vector(10, 5, 1) // Alternatively..
// bad2: scala.collection.immutable.Vector[Int] = Vector(10, 5, 1)

val badResult2 = contains(10, bad2)
// badResult2: Boolean = false
The issue here is that the user knows more about the representation than they should. The function add enforces the sorted invariant on each insert, and the function contains leverages this to do an efficient look-up. Because the Vector definition of Repr is exposed, the user is free to create any Vector they wish, which may violate the invariant, thus breaking contains.
In general, the name of the representation type is needed but the definition is not. If the definition is hidden, the user is only able to work with the type to the extent the module allows. This is precisely the notion of information hiding. If this can be enforced by the type system, modules can be swapped in and out without worrying about breaking client code.
It turns out there is a well understood principle behind this idea called existential quantification. Contrast with universal quantification which says “for all”, existential quantification says “there is a.”
Below is an encoding of universal quantification via parametric polymorphism.
trait Universal { def apply[A]: A => A }
Here
Universal#apply says for all choices of
A, a function
A => A can be
written. In the Curry-Howard Isomorphism, a profound
relationship between logic and computation, this translates to “for all propositions
A,
A implies
A.” It is therefore acceptable to write the following, which picks
A to be
Int.
def intInstantiatedU(u: Universal): Int => Int = (i: Int) => u.apply(i) // intInstantiatedU: (u: Universal)Int => Int
Existential quantification can also be written in Scala.
trait Existential { type A def apply: A => A }
Note that this is just one way of encoding existentials - for a deeper discussion, refer to the excellent Type Parameters and Type Members blog series.
The type parameter on
apply has been moved up to a type member of the trait.
Practically, this means every instance of
Existential must pick one choice of
A, whereas in
Universal the
A was parameterized and therefore free. In the
language of logic,
Existential#apply says “there is a” or “there exists some
A such that
A implies
A.” This “there is a” is the crux of the error when trying
to write a corresponding
intExistential function.
def intInstantiatedE(e: Existential): Int => Int = (i: Int) => e.apply(i) // <console>:19: error: type mismatch; // found : i.type (with underlying type Int) // required: e.A // (i: Int) => e.apply(i) // ^
In code, the type in
Existential is chosen per-instance, so there is no way
of knowing what the actual type chosen is. In logical terms, the only guarantee is
that there exists some proposition that satisfies the implication, but it is not
necessarily the case (and often is not) it holds for all propositions.
In the ML family of languages (e.g. Standard ML, OCaml), existential quantification and thus information hiding, is achieved through type members. Programs are organized into modules which are what contain these types.
In Scala, this translates to organizing code with the object system, using the same
type member feature to hide representation. The earlier example of
IntSet can then
be written:
/** Abstract signature */ trait IntSet { type Repr def empty: Repr def add(i: Int, repr: Repr): Repr def contains(i: Int, repr: Repr): Boolean } /** Concrete implementation */ object VectorIntSet extends }
As long as client code is written against the signature, the representation cannot be leaked.
def goodUsage(set: IntSet) = { import set._ val s = add(1, add(10, add(5, empty))) contains(5, s) } // goodUsage: (set: IntSet)Boolean
If the user tries to assert the representation type, the type checker prevents it at compile time.
def badUsage(set: IntSet) = { import set._ val s = add(10, add(1, empty)) // Maybe it's a Vector s.reverse contains(10, Vector(10, 5, 1)) } // <console>:23: error: value reverse is not a member of set.Repr // s.reverse // ^ // <console>:24: error: type mismatch; // found : scala.collection.immutable.Vector[Int] // required: set.Repr // contains(10, Vector(10, 5, 1)) // ^
Abstract types enforce information hiding at the definition site (the definition
of
IntSet is what hides
Repr). There is another mechanism that enforces information
hiding, which pushes the constraint to the use site.
Consider implementing the following function.
def foo[A](a: A): A = ???
Given nothing is known about
a, the only possible thing
foo can do is return
a. If
instead of a type parameter the function was given more information..
def bar(a: String): String = "not even going to use `a`"
..that information can be leveraged to do unexpected things. This is similar to
the first
IntSet example when knowledge of the underlying
Vector allowed unintended
behavior to occur.
From the outside looking in,
foo is universally quantified - the caller gets to
pick any
A they want. From the inside looking out, it is
existentially quantified - the implementation knows only as much
about
A as there are constraints on
A (in this case, nothing).
Consider another function
listReplace.
def listReplace[A, B](as: List[A], b: B): List[B] = ???
Given the type parameters,
listReplace looks fairly constrained. The name and signature
suggests it takes each element of
as and replaces it with
b, returning a new list.
However, even knowledge of
List can lead to type checking implementations with strange behavior.
// Completely ignores the input parameters def listReplace[A, B](as: List[A], b: B): List[B] = List.empty[B]
Here, knowledge of
List allows the implementation
to create a list out of thin air and use that in the implementation. If instead
listReplace
only knew about some
F[_] where
F is a
Functor, the implementation becomes much more
constrained.
trait Functor[F[_]] { def map[A, B](fa: F[A])(f: A => B): F[B] } implicit val listFunctor: Functor[List] = new Functor[List] { def map[A, B](fa: List[A])(f: A => B): List[B] = fa.map(f) } def replace[F[_]: Functor, A, B](fa: F[A], b: B): F[B] = implicitly[Functor[F]].map(fa)(_ => b)
replace(List(1, 2, 3), "typelevel") // res8: List[String] = List(typelevel, typelevel, typelevel)
Absent any knowledge of
F other than the ability to
map over it,
replace is
forced to do the correct thing. Put differently, irrelevant information about
F is hidden.
The fundamental idea behind this is known as parametricity, made popular by Philip Wadler’s seminal Theorems for free! paper. The technique is best summarized by the following excerpt from the paper:
Write down the definition of a polymorphic function on a piece of paper. Tell me its type, but be careful not to let me see the function’s definition. I will tell you a theorem that the function satisfies.
Information hiding is a core tenet of good program design, and it is important to make
sure it is enforced. Underlying information hiding is existential quantification,
which can manifest itself in computation through abstract types and
parametricity. Few languages support defining abstract type members, and fewer
yet support higher-kinded types used in the
replace example. It is therefore
to the extent that a language’s type system is expressive that
abstraction can be enforced.
This blog post was tested with Scala 2.11.7 using tut.
Unless otherwise noted, all content is licensed under a Creative Commons Attribution 3.0 Unported License.Back to blog | https://typelevel.org/blog/2016/03/13/information-hiding.html | CC-MAIN-2019-13 | refinedweb | 1,434 | 56.25 |
Related
Tutorial
Creating a Live Search Feature in React Using Ax.
Axios is a powerful HTTP client that allows to easily implement Ajax requests in a JavaScript app. We’ve covered the basics of using Axios with React here, so you can read that first if Axios or Axios + React is all new to you.
In this tutorial we’ll be building a live search feature inside a React app with the help of Axios. Our app will allow us to do a simple movie search using the API from themoviedb.org.
This tutorial is divided into 3 section:
- Part 1: How to make live search work in React with Axios
- Part 2: Preventing unnecessary requests
- Part 3: Caching HTTP requests and resonses
Initializing the App
This tutorial assumes that you have some experience using React, so we’ll skip the initializing step to save our valuable time. You can use your any favorite boilerplate, and in this tutorial we’ll simply use Create React App to initialize the app.
Once the app is initialized, let’s add
axios to it:
$ yarn add axios or npm i axios
Next, copy the code below to your
App component:
import React, { Component } from 'react'; import axios from 'axios'; import Movies from './Movies'; class App extends Component { state = { movies: null, loading: false, value: '' }; search = async val => { this.setState({ loading: true }); const res = await axios( `{val}&api_key=dbc0a6d62448554c27b6167ef7dabb1b` ); const movies = await res.data.results; this.setState({ movies, loading: false }); }; onChangeHandler = async e => { this.search(e.target.value); this.setState({ value: e.target.value }); }; get renderMovies() { let movies = <h1>There's no movies</h1>; if (this.state.movies) { movies = <Movies list={this.state.movies} />; } return movies; } render() { return ( <div> <input value={this.state.value} onChange={e => this.onChangeHandler(e)} {this.renderMovies} </div> ); } } export default App;
Note:
Movies is just presentational/dumb component and simply renders the data we give it. It does not touch our data.
Input Component
So, we have a controlled
input element that calls
onChangeHandler method when we type something in.
onChangeHandler changes the value property in the
state and calls the
search method, giving it the input’s value as an argument.
Search
Take the following piece of code from above:
search = async val => { this.setState({ loading: true }); const res = await axios( `{val}&api_key=dbc0a6d62448554c27b6167ef7dabb1b` ); const movies = await res.data.results; this.setState({ movies, loading: false }); };
In the
search method we are making a
GET request to our API to get the movies we want. Once we get the results we update the component’s
state via
setState. And when we change the state via
setState, the component re-renders with the changed state.
Simple as that!
Preventing Unnecessary Requests
You may notice that we send requests every time when we update the input. This can lead to an overload of requests, especially when we receive large responses.
To see this problem in action, open the network tab in your browser’s DevTools. Clear the network tab. Type some movie’s name in the input.
As you see we’re downloading all the data every time a keystroke happens. To solve this issue let’s create a
utils.js file in the
src directory:
$ cd src $ touch utils.js
Copy the following code into
utils.js:
import axios from 'axios'; const makeRequestCreator = () => { let token; return (query) => { // Check if we made a request if(token){ // Cancel the previous request before making a new request token.cancel() } // Create a new CancelToken token = axios.CancelToken.source() try{ const res = await axios(query, {cancelToken: cancel.token}) const result = data.data return result; } catch(error) { if(axios.isCancel(error)) { // Handle if request was cancelled console.log('Request canceled', error.message); } else { // Handle usual errors console.log('Something went wrong: ', error.message) } } } } export const search = makeRequestCreator()
Change the
App component as well to make use of our new utility function:
// ... import { search } from './utils' class App extends Component { // ... search = async val => { this.setState({ loading: true }); // const res = await axios( const res = await search( `{val}&api_key=dbc0a6d62448554c27b6167ef7dabb1b` ); const movies = res; this.setState({ movies, loading: false }); }; // ...
What’s happening there now?
Axios has so called cancel tokens that allow us to cancel requests.
In
makeRequestCreator we create a variable called
token. Then with a request, if the
token variable exists we call its
cancel method to cancel the previous request. Then we assign
token a new
CancelToken. After that we make a request with the given query and return the result.
If something goes wrong we catch the errors in the
catch block and we can check and handle whether a request was cancelled or not.
Let’s see what happens in the network tab now:
As you see we downloaded only one response. Now our users pay for only what they use.
Caching HTTP Requests and Responses
If we type the same text in the input multiple times, again, we make a new request each time.
Let’s fix this. We will change our utility function in
utils.js a little bit:
const resources = {}; const makeRequestCreator = () => { let cancel; return async query => { if (cancel) { // Cancel the previous request before making a new request cancel.cancel(); } // Create a new CancelToken cancel = axios.CancelToken.source(); try { if (resources[query]) { // Return result if it exists return resources[query]; } const res = await axios(query, { cancelToken: cancel.token }); const result = res.data.results; // Store response resources[query] = result; return result; } catch (error) { if (axios.isCancel(error)) { // Handle if request was cancelled console.log('Request canceled', error.message); } else { // Handle usual errors console.log('Something went wrong: ', error.message); } } }; }; export const search = makeRequestCreator()
Here we created a
resources constant which keeps our downloaded responses. And when we are doing a new request we first check if our
resources object has a result for this query. If it does, we just return that result. If it doesn’t have a suitable result we make a new request and store the result in
resources. Easy enough!
Let’s summarize everything in a few words. Every time when we type something in the
input:
- We cancel the previous request, if any.
- If we already have a previous result for what we typed we just return it without making a new request.
- If we don’t have that result we make a new one and store it.
If you’re interested, you can find a Redux version of this app in this repo
Conclusion
Congrats 🎉🎉🎉! We’ve built a live search feature which doesn’t download unnecessary responses as well as caches responses. I hope you’ve learned a thing or two about how to build an efficient live search feature in React with the help of Axios.
Now, our users spend as little traffic data as possible. If you enjoyed this tutorial, please share it out! 😄
You can find final result and source code here for this post in this CodeSandbox. | https://www.digitalocean.com/community/tutorials/react-live-search-with-axios | CC-MAIN-2020-34 | refinedweb | 1,143 | 58.99 |
In this example we will use the
scipy.constants.physical_constants dictionary to determine which are the least-accurately known constants. To do this we need the relative uncertainties in the constants' values; the code below uses a structured array to calculate these and outputs the least well-determined constants.
import numpy as np from scipy.constants import physical_constants def make_record(k, v): """ Return the record for this constant from the key and value of its entry in the physical_constants dictionary. """ name = k val, units, abs_unc = v # Calculate the relative uncertainty in ppm rel_unc = abs_unc / abs(val) * 1.e6 return name, val, units, abs_unc, rel_unc dtype = [('name', 'S50'), ('val', 'f8'), ('units', 'S20'), ('abs_unc', 'f8'), ('rel_unc', 'f8')] constants = np.array([make_record(k, v) for k,v in physical_constants.items()], dtype=dtype ) constants.sort(order='rel_unc') # List the 10 constants with the largest relative uncertainties for rec in constants[::-1][:10]: print('{:.0f} ppm: {:s} = {:g} {:s}'.format(rec['rel_unc'], rec['name'].decode(), rec['val'], rec['units'].decode()))
The output is shown below. When this exercise was first written, the Newtonian gravitational constant, $G$, was not known to better than about 120 ppm; newer measurements have put its accuracy at 46 ppm.
9447 ppm: weak mixing angle = 0.2223 6971 ppm: proton rms charge radius = 8.751e-16 m 1168 ppm: deuteron rms charge radius = 2.1413e-15 m 428 ppm: proton magn. shielding correction = 2.5691e-05 428 ppm: proton mag. shielding correction = 2.5691e-05 92 ppm: tau mass = 3.16747e-27 kg 91 ppm: tau mass energy equivalent = 2.84678e-10 J 91 ppm: proton-tau mass ratio = 0.528063 91 ppm: muon-tau mass ratio = 0.0594649 91 ppm: neutron-tau mass ratio = 0.52879 | https://scipython.com/book/chapter-8-scipy/examples/the-least-well-determined-physical-constants/ | CC-MAIN-2021-39 | refinedweb | 286 | 62.14 |
FNS policies specify the types and arrangement of namespaces within an enterprise and how such namespaces can be used by applications. For example, which namespaces can be associated with which other namespaces. The FNS policies described here include some extensions to XFN policy. These are explicitly defined with notes.
The FNS enterprise policies deal with the arrangement of enterprise objects within the namespace. Each enterprise objects has its own namespace.
By default, there are seven FNS enterprise objects and namespaces:
Organization (orgunit). Entities such as departments, centers, and divisions. Sites, hosts, users, and services can be named relative to an organization. The XFN term for organization is organizational unit. When used in an initial context the identifier org can be used as an alias for orgunit.
Site (site). Physical locations, such as buildings, machines in buildings, and conference rooms within buildings. Sites can have files and services associated with them.
Host (host). Machines. Hosts can have files and services associated with them.
User (user). Human users. Users can have files and services associated with them.
File (fs). Files within a file system.
Service (service). Services such as printers, faxes, mail, and electronic calendars.
Printer (service/printer). The printer namespace is subordinate to the service namespace.
Figure 22-1 shows how these enterprise namespaces are arranged.
Some of these namespaces, such as users and hosts, can appear more than once in a federated namespace.
The policies that apply to these namespaces are summarized in Table 22-2.
Enterprise.
There are seven namespaces supplied with FNS:
Organization. (See "Organizational Unit Namespace")
Site. (See "Site Namespace")
Host. (See "Host Namespace")
User. (See "User Namespace")
File. (See "File Namespace")
Service. (See "Service Namespace")
Printer. (See "Service Namespace")
The organizational unit namespace provides a hierarchical namespace for naming subunits of an enterprise. Each organizational unit name is bound to an organizational unit context that represents the organizational unit. Organization unit names are identified by the prefixes org/, orgunit/, or _orgunit/. (The shorthand alias org/ is only used in the initial context, never in the middle of a compound name. See "Initial Context Bindings for Naming Within the Enterprise" and "Composite Name Examples".)
In an NIS+ environment, organizational units correspond to NIS+ domains and subdomains.
Under NIS+, organization units must map to domains and subdomains. You must have an organizational unit for each NIS+ domain and subdomain. You cannot have "logical" organization units within a domain or subdomain. In other words, you cannot divide an NIS+ domain or subdomain into smaller organization units. Thus, if you have a NIS+ domain doc.com. and two subdomains sales.doc.com. and manf.doc.com., you must have three FNS organizational units corresponding to those three domains.
Organizational units are named using dot-separated right-to-left compound names, where each atomic element names an organizational unit within a larger unit. For example, the name org/sales.doc.com. names an organizational unit sales within a larger unit named doc.com. In this example, sales is an NIS+ subdomain of doc.com.
Organizational unit names can be either fully qualified NIS+ domain names or relatively named NIS+ names. Fully qualified names have a terminal dot; relative names do not. Thus, if a terminal dot is present in the organization name, the name is treated as a fully qualified NIS+ domain name. If there is no terminal dot, the organization name is resolved relative to the top of the organizational hierarchy. For example, orgunit/west.sales.doc.com. is a fully qualified name identifying the west organization unit, and _orgunit/west.sales is a relatively qualified name identifying the same subdomain.
In a NIS environment there is only one organization unit per enterprise which corresponds to the NIS domain. This orgunit is named orgunit/domainname where domainname is the name of the NIS domain. For example, if the NIS domain name is doc.com, the organizational unit is org/doc.com.
In an NIS environment, you can use an empty string as a shorthand for the organizational unit. Thus, org// is equivalent to org/domainname.
There is only one FNS organization unit and no subunits when your primary enterprise-level name service is files-based. The only permitted organization unit under files-based naming is org//.
The site namespace provides a geographic namespace for naming objects that are naturally identified with their physical locations. These objects can be, for example, buildings on a campus, machines and printers on a floor, conference rooms in a building and their schedules, and users in contiguous offices. Site names are identified by the prefixes site/or _site/.
In the Solaris environment, sites are named using compound names, where each atomic part names a site within a larger site. The syntax of site names is dot-separated right-to-left, with components arranged from the most general to the most specific location description. For example, _site/pine.bldg5 names the Pine conference room in building 5, while site/bldg7.alameda identifies building 7 of the Alameda location of some enterprise.
The.
The.
A file namespace (or file system) provides a namespace for naming files. File names are identified by the prefixes fs/or _fs/. For example the name fs/etc/motd identifies the file motd which is stored in the /etc directory.
The file namespace is described in more detail in "FNS File Naming". and file contexts are discussed in "File Contexts Administration".
The.
The.
The.
F.
This.
F. | http://docs.oracle.com/cd/E19455-01/806-1387/6jam692dp/index.html | CC-MAIN-2015-32 | refinedweb | 904 | 51.04 |
direct.showbase.Pool¶
from direct.showbase.Pool import Pool
Pool is a collection of python objects that you can checkin and checkout. This is useful for a cache of objects that are expensive to load and can be reused over and over, like splashes on cannonballs, or bulletholes on walls. The pool is unsorted. Items do not have to be unique or be the same type.
Internally the pool is implemented with 2 lists, free items and used items.
Example
p = Pool([1, 2, 3, 4, 5]) x = p.checkout() p.checkin(x)
Inheritance diagram
- class
Pool(free=None)[source]¶
Bases:
object
cleanup(self, cleanupFunc=None)[source]¶
Completely cleanup the pool and all of its objects. cleanupFunc will be called on every free and used item. | https://docs.panda3d.org/1.10/python/reference/direct.showbase.Pool | CC-MAIN-2020-16 | refinedweb | 126 | 67.35 |
Website Redesign and added functionality
Bütçe $300-1500 USD
In need of a website redesign/update. We are an Adobe partner and sell Adobe software and offer training as well. The site will need a registration system for clients to register for a free seminar (online) as well as register/order training courses either at a clients site or in a public class.
A new logo, banners and SEO are desired as well. Please send PM for more info and current URL (which will be changing).
51 freelancers are bidding on average $1022 for this job
Hi, PM more details for a meaningful discussion. We have designed & developed more than 1000+ websites & can assure you of professionlism, quality & timely deliveries. View our reviews + projects won list mn you w Daha Fazla
Hi, We have over 5 years of experience of web designing, search engine optimisation and web programming on php and other web technologies and programmed many applications. We are confident to complete this project with Daha Fazla
hello, Kindly see PMB Thanks
Hi, pls check pmb,thanx
I run a Sacramento, CA website design firm with 8 years of design experience developing over 80 websites, and 13 years of IT Engineering experience. Your bid includes a complete design of your site to your specific req Daha Fazlala
Please check your PMB for our proposal and our full portfolio. Thank you. EncodedART Inc – A Solution Provider Company.
plz check PMB
Hello, Japple! We are very glad you gave us a chance to place a bid on the project. I encourage you to view our portfolio at [url removed, login to view] We have good opportunities to develop this projec Daha Fazla
We have a team of highly qualified and creative professionals. Give us a chance to show our talents and we assure you quality. [url removed, login to view]
Hello Sir We are an offshore well established company for about more than five years. We have experienced developers (PHP, ASP, ASP.NET, MYSQL, C#,JAVA,JSP, C++ etc.) and those are dedicated in their respective fields. Daha Fazla
Please check the PMB. Thanks
Hello, Thank you for your time to read our proposal. Mxicoders is a leading IT solutions company to provide e-commerce, e-business and branding solutions to small and medium size business worldwide. Mxicoders Daha Fazla | https://www.tr.freelancer.com/projects/flash-website-design/website-redesign-added-functionality/ | CC-MAIN-2018-13 | refinedweb | 389 | 63.19 |
Fixed working with time type in Fluent Nhibernate "Cannot apply object of type" System.DateTime "to type" NHibernate.Type.TimeType "
I founded NHibernate for several built-in types that are not present in
C#
, but are present in some of the SGBDs.
Now I have the following:
public class TimeSlot : EntityBase { public virtual NHibernate.Type.TimeType FromTime { get; set; } public virtual NHibernate.Type.TimeType ToTime { get; set; } } public class TimeSlotMap : ClassMap<TimeSlot> { public TimeSlotMap() { Id(c => c.Id).GeneratedBy.Identity(); Map(c => c.FromTime); Map(c => c.ToTime); } }
In MSSQL, this table is similar to the attached image
Now when I try to query this table, I get the following exception:
Cannot overlay object of type 'System.DateTime' on type 'NHibernate.Type.TimeType
What am I doing wrong? And how does Fluent NHibernate work with the date type?
source to share
I would suggest using
TimeSpan
for DB type
Time
public class TimeSlot : EntityBase { public virtual TimeSpan FromTime { get; set; } public virtual TimeSpan ToTime { get; set; } }
And then NHibernate has a special type to handle this trick:
Map(c => c.FromTime) .Type<NHibernate.Type.TimeAsTimeSpanType>(); ...
This will allow you to work with the .NET native "type time" -
MSDN - TimeSpan Structure
Represents a time interval.
What is NHibernate.Type.TimeType then?
If we prefer to use
DateTime
which NHibernate can use to express time, we should do it like this:
public class TimeSlot : EntityBase { public virtual DateTime FromTime { get; set; } public virtual DateTime ToTime { get; set; } }
And now we can use the question type -
NHibernate.Type.TimeType
Map(c => c.FromTime) .Type<NHibernate.Type.TimeType>(); ...
What's the description:
/// <summary> /// Maps a <see cref="System.DateTime" /> Property to an DateTime column that only stores the /// Hours, Minutes, and Seconds of the DateTime as significant. /// Also you have for <see cref="DbType.Time"/> handling, the NHibernate Type <see cref="TimeAsTimeSpanType"/>, /// the which maps to a <see cref="TimeSpan"/>. /// </summary> /// <remarks> /// <para> /// This defaults the Date to "1753-01-01" - that should not matter because /// using this Type indicates that you don't care about the Date portion of the DateTime. /// </para> /// <para> /// A more appropriate choice to store the duration/time is the <see cref="TimeSpanType"/>. /// The underlying <see cref="DbType.Time"/> tends to be handled differently by different /// DataProviders. /// </para> /// </remarks> [ ] public class TimeType : PrimitiveType, IIdentifierType, ILiteralType
Also check:
Date and time support in NHibernate
... Time-bound DbTypes only store the time, not the date. There is no Time class in .NET, so NHibernate uses DateTime with date component set to 1753-01-01, minimum value for SQL time, or System.TimeSpan - depending on the selected DbType ...
source to share | https://daily-blog.netlify.app/questions/2169299/index.html | CC-MAIN-2021-49 | refinedweb | 439 | 59.19 |
by Derek Hildreth – Technologic Systems
This comprehensive and easy to read example C code is in this article, but first and foremost, here’s the source code we’ll be working with:
To get started, download or clone this repository to your board, extract it, and change your directory to it, like so:
wget unzip master.zip cd gpio-sysfs-demo-master/ the standalone utility using (essentially):
gpioctl –ddrout 59
gpioctl –setout 59
gpioctl –clrout 59:
#include <stdlib.h> #include <stdio.h> #include <unistd.h> #include "gpiolib.h" int main(int argc, char **argv) { int gpio_pin = 59; gpio_export(gpio_pin); gpio_direction(gpio_pin, 1); for(int i = 0; i < 5; i++) { printf(">> GPIO %d ON\n", gpio_pin); gpio_write(gpio_pin, 1); sleep(1); printf(">> GPIO %d OFF\n", gpio_pin); gpio_write(gpio_pin, 0); sleep(1); } return 0; }hz.. | https://www.electronics-lab.com/robust-c-library-utility-gpio-sysfs-interface-linux/ | CC-MAIN-2022-05 | refinedweb | 134 | 52.19 |
There are many good reasons why you might want to migrate a Web Pages site to MVC, but it's important to state at the outset that performance and scalability should not be among them. A Razor Web Pages site is based on the same ASP.NET framework as MVC and Web Forms sites, and pound for pound, will scale and perform just the same as the other two types of site. Whatever your reasons for wanting to make the change, you should bear in mind that the Web Pages framework was partly introduced as a smoother on-ramp to ASP.NET development because MVC was (and still is) considered a complex framework. It has a number of moving parts, and their roles will be explained here by comparing them to equivalent parts of the Web Pages framework. Your work with Razor Web Pages certainly puts you at an advantage in terms of learning about MVC.
There is little point in retaining the dynamic-based
Database helper when moving across to MVC, so this migration will ditch it in favour of the Entity Framework for data access.
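To make that swap concrete, here is a sketch of the difference between the two approaches. The Web Pages version is how the Bakery template queries its data; the Entity Framework version replaces the dynamic results with a strongly typed entity class and a context. The `Product` property names and the `BakeryContext` name are assumptions for illustration, not part of the template:

```csharp
using System.Data.Entity; // Entity Framework 6

// Web Pages: dynamic, untyped data access with the Database helper
//   var db = Database.Open("bakery");
//   var products = db.Query("SELECT * FROM Products");

// MVC with Entity Framework: a strongly typed entity mapped to the
// Products table. Property names here are illustrative assumptions.
public class Product
{
    public int Id { get; set; }
    public string Name { get; set; }
    public string Description { get; set; }
    public decimal Price { get; set; }
    public string ImageName { get; set; }
}

// The context is the EF equivalent of Database.Open — one DbSet per table
public class BakeryContext : DbContext
{
    public DbSet<Product> Products { get; set; }
}
```

The payoff is compile-time checking: a typo such as `product.Nmae` fails to build with EF, whereas the dynamic `Database` helper would only fail at runtime.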
The Bakery Site
Just to recap, the Bakery template site that comes with WebMatrix is a simple application that features a three-step ordering process: bakery items are displayed; the user selects one by clicking on it; the user completes a form and submits it. Product details are stored in a database and their images in the file system. As such, the application includes enough features at a basic level to help illustrate the foundations of MVC.
Model, View, Controller
Model View Controller is an architectural pattern chiefly concerned with generating appropriate UI based on user input. It is often referred to as a Presentation Pattern. One of the key selling points behind the pattern is that it promotes "separation of concerns". The concerns that should be separated are the logical layers within an application. These include presentation, business logic, data access and any service layers. The reasons why you should consider separating your concerns include easier maintenance, greater potential for reuse, and easier testing. The Razor Web Pages development model doesn't do anything to promote good separation. A typical page consists of database calls, business logic such as validation and calculations, service layer artifacts like email generation and sending, and of course, presentational HTML intermixed with server-side code. If you have tried to keep as much server-side code as possible in a code block at the top of the page, you have already exercised a certain separation, and that will make your migration much easier. The Bakery template, along with the Photo Gallery and Starter Site templates, all demonstrate some discipline in terms of where the server-side code goes.
So what goes where in an MVC application? The View belongs in the presentational layer and generally, the stuff below the code block in your Razor Web Page will allow itself to be transplanted straight into an MVC Razor View file with few complaints (so long as you don't have any database calls down there...). Database calls, emailing and validation are all part of the Model, which is really a catch-all area for server-side logic. Therefore the contents of your top-of-the-page code block will find itself in one form or another somewhere in the Model. The Controller is the new bit to Web Pages developers. Its role is to process incoming requests, ensure that appropriate application logic is executed based on the request, and to see that the correct response is generated by calling a particular View. It basically controls the flow of the application between browser and server.
Some of these ideas can seem abstract at first, but they soon become clearer through example. And so to work.
Creating an MVC Application
ASP.NET MVC applications are built using what is known as the Web Application Project type. Web Pages sites are built using the Web Site Project type. The chief difference between the two of them is that Web Application projects must be precompiled before they are deployed to a web server, while you can FTP raw source files from a Web Site project to a web server, and they will be compiled on demand when the first request comes into the application. WebMatrix only supports Web Site projects, so it is unsuitable for building MVC sites. So the migration will involve creating a new MVC application in Visual Studio, and porting as much code across to it as possible.
Choose the New Project option in Visual Studio and select ASP.NET Web Application
Provide a name for your application and click OK.
Choose MVC from the available templates and click OK.
You now have a basic ASP.NET MVC web site. The structure of the default site is illustrated below
There are folders for all three parts of MVC - a Models folder, a Views folder and a Controllers folder. The only one that the framework relies on by default is the Views folder. ASP.NET MVC expects to find view files there. You can place controllers and model classes pretty much anywhere within your application. Typically, most developers put controller classes into the Controllers folder, but will place other classes that belong to the Model wherever they like. Some even delete the Models folder up front. In the second and third parts of this tutorial, you will place Model code in a number of different locations.
At this stage, you should copy across the database and image files from the existing Bakery site. You should do this by right clicking on folders and choosing Add Existing Item. That way the items are included in the project automatically. However, you might just want to use Windows Explorer to copy the images across, and then click the Show All Files icon at the top of Solution Explorer (with the red box around it in the preceding image) and then right click on the images folder and choose Include in Project.
Layouts and Views
The Views folder contains a folder per controller and one called Shared. By convention, you place layout and "partial" files into the Shared folder. Partial files are the MVC equivalent to the content blocks which are called via the
RenderPage method in Web Pages. The following code shows the Bakery template layout page transplanted to the Views\Shared\_Layout.cshtml file. Changes to the code are highlighted in yellow:
<!DOCTYPE html> <html lang="en"> <head> <meta charset="utf-8" /> <title>Fourth Coffee - @ViewBag.Title</title> <link href="~/Content/Site.css" rel="stylesheet" /> <link href="~/favicon.ico" rel="shortcut icon" type="image/x-icon" /> <script src="~/Scripts/modernizr-2.6.2.js"></script> <script src="~/Scripts/jquery-1.10.2.min.js"></script> @RenderSection("Scripts", required: false) </head> <body> <div id="page"> <header> <p class="site-title"><a href="~/">Fourth Coffee</a></p> <nav> <ul> <li><a href="~/">Home</a></li> <li><a href="~/About">About Us</a></li> </ul> </nav> </header> <div id="body"> @RenderBody() </div> <footer> ©@DateTime.Now.Year - Fourth Coffee </footer> </div> </body> </html>Only one change was necessary:
Page.Titlebecame
ViewBag.Title.
ViewBagis MVC's equivalent mechanism for passing small pieces of data from page to page, or controller to page. In the WebMatrix Bakery site, the layout is specified in _PageStart.cshtml. The equivalent in MVC is a file named _ViewStart.cshtml. You can find that in the root of the Views folder with the following single line of code:
@{ Layout = "~/Views/Shared/_Layout.cshtml"; }
If you run the application at this stage, you will see that the home page has adopted the Bakey layout just as expected:
However, if you click on the About Us link, you will get a Page Not Found error - despite the fact that there is an About.cshtml view file in the Views\Home folder. In order to be able to fix this, you need to know a little about Routing and Controllers.
A Little About Routing And Controllers
By default, incoming requests to a Razor Web Pages site are mapped to physical files on disk. So a request for will be matched to a file called about.cshtml in the root folder of your site as I have described in a previous article on routing in Web Pages. The Web Pages framework receives a request and locates the appropriate file based on the URL, then executes the code in the file, returning the result (usually HTML, but could be JSON, XML, text, binary data etc.) as a response. In ASP.NET MVC, requests are not mapped to files. They are mapped to methods on controller classes instead. A request comes in and the framework locates the correct controller, instantiates an instance of it, then calls the method associated with the request, returning the result as a response to the client. The following code shows the
HomeController with its
Index() method:
public class HomeController : Controller { public ActionResult Index() { return View(); } }
When the
Index() method is called, it in turn calls the
Controller.View() method which locates the appropriate View file based on a convention, and executes the logic in it, returning the result. The convention used to locate the view is first to look in the Views folder for a subfolder named after the current controller (Home), then to look for a file in that folder named after the current method being executed (Index).
It's not difficult to conceptualise a system locating physical files based on a URL - you can write code yourself easily enough to do that, but how does a URL get matched up to a method on a class? The mechanism reponsible for that is called Routing. The following piece of code comes from App_Start\RouteConfig.cs. It features a method called
RegisterRoutes which is responsible for defining how URLs are to be mapped to controller methods.
public static void RegisterRoutes(RouteCollection routes) { routes.IgnoreRoute("{resource}.axd/{*pathInfo}"); routes.MapRoute( name: "Default", url: "{controller}/{action}/{id}", defaults: new { controller = "Home", action = "Index", id = UrlParameter.Optional } ); }
The
RouteCollection.MapRoute method call sets up the default mapping between URLs and the controller method that is invoked. The route is given a name of
Default, and a url pattern is specified. The pattern says that the first segment of the URL should be treated as the name of the controller to be invoked, and the second segment should be matched to the name of the method to be called on that controller. The last part of the pattern represents parameters that might be passed in to the controller's method. So a request for will be mapped to a method called
Show() that accepts an
int parameter on a controller called
Products. The default route is a method called
Index() on a controller called
Home. Quite often, this route configuration covers all of the needs for a site.
Without any alterations, the existing About page (Views\Home\About.cshtml) will be reached at, but the link in the Bakery site's layout page that we just migrated points to (without the name of the controller). A new route needs to be registered to cater for that. The highlighted block shows how to specify it.
public static void RegisterRoutes(RouteCollection routes) { routes.IgnoreRoute("{resource}.axd/{*pathInfo}"); routes.MapRoute( name: "About", url: "about", defaults: new { controller = "Home", action = "About"} ); routes.MapRoute( name: "Default", url: "{controller}/{action}/{id}", defaults: new { controller = "Home", action = "Index", id = UrlParameter.Optional } ); }
It says that if the URL consists of just one segment with the value "about", invoke the
About method on the
Home controller. It has been placed above the default route because routes are registered in the order that they are declared, and when the routing system looks at the
RouteTable for a route that matches the incoming URL, it will take the first match and ignore all other possible matches.
Having defined a route, it's time to migrate the About page.
Make the alteration outlined above to the RouteConfig.cs file.
Replace the content of the Views\Home\About.cshtml file with the HTML from the Bakery About.cshtml file.
Open the HomeController.cs file in the Controllers folder and make the highlighted change to the
Aboutmethod:
public ActionResult About() { ViewBag.Title = "About"; return View(); }
Rebuild the project by pressing Shift+Ctrl+B.
Run the application by pressing Ctrl+F5 and then click the About Us link in the layout page. The About page should appear.
The change that you made to the controller was to replace the unused
ViewBag.Message with
ViewBag.Title.
ViewBag.Title is used to set the
<title> element in the layout page. You could also have left this declaration in a code block at the top of your view file. Ideally, you should strive to minimise the amount of server-side code you place in a view files. But whichever approach you take, you should aim for consistency.
Summary
You have begun the process of migrating a Razor Web Pages site to ASP.NET MVC. You have looked at the roles of the View and the Controller in MVC. You've seen how routing is configured to determine which controller's method is invoked in response to a requested URL, and you have managed to get a page to run. It is perfectly possible to build an entire application with just this knowledge so long as the application is static HTML. However, the Bakery site is dynamic and includes data, validation and sending of email - all aspects of the Model. The second part of this tutorial will explore MVC's Model in more detail, and will start the process of migrating the code blocks to the new site. | https://www.mikesdotnetting.com/article/262/migrating-from-razor-web-pages-to-asp-net-mvc-5-views-and-controllers | CC-MAIN-2020-05 | refinedweb | 2,293 | 62.58 |
The.
56 thoughts on “Stator Library Makes Your Arduino Code Easier To Read”
States are great, but really, just naming your variables and constants something interesting to begin with is an awesome first step for a lot of people! Get rid of those “magic” numbers which seemingly appear out of nowhere.
Useless for coders with good habits,
and giving bed habits to lazy coders.
Very bad idea
Arduino isn’t a desktop computer… Many of the people never really learned to program, before the started on Arduino. Just like many of them never really learned electronics, since they use the ready-made modules, and just need to connect wires to the header pins.
My experience, I just recently started using Arduino, mostly because there are a lot of modules in the surplus stores really cheap. Strip board, perf board, even etching your own boards, is sort of a side project of their own. I never learned C, mostly because I’m not a structured programmer, so stuck with BASIC and machine language, from my 8-bit computer programming days. I play with ATTiny microcontrollers some, but it was a lot of work, then my programmer died. So, Arduino started to look pretty good, everything already on a board. plugs right into the USB port. I still don’t like C, but learning how to get around. I’m more of a code-pirate though, I’ll pick through someone else’s code, for the parts I need in my projects, still have to make some changes, but works. I don’t find it too terrible, unless they use higher level functions, or heavy into library functions.
Arduino is really all about programming, it’s a necessary evil, which some people just push through, best they can. I don’t see myself using libraries like this, just more work and confusion.
I disagree, the library would be helpful just as Lewin makes clear, it lowers the cognitive load. If you have two values you have to track two values and you have to check and update them consistently everywhere. It’s a bug waiting to happen. The Stator class would be more trustworthy because it takes care of both values.
It could be very good if the Stator class were shown with an enum of defined states for a state machine as well:
enum MachineState {Initialise, WaitForCommand, Work, ReadTemperature};
…
Stator machineState;
machineState = Initialise;
Calling an object of the Stator class ‘mystuff’ is a bad idea though, that is clear.
Agree.
I get a laugh every time I see “my” anywhere in code. Waste of characters! No matter who is reading it, it’s “mine”! So therefore it specifies nothing at all – except in very special cases where the words server and client or master and slave would have made more sense anyway.
The use of “my” in sample code just helps the reader to understand that the particular object isn’t part of the class.
my = beware that I am making something up that broken the conventional way of doing things.
Haha, just noticed that wordpress ate my use of the template usage:
Stator machineState;
If you really think that “mystuff.changed()” is more readable than “state != lastState”, I don’t know what to say. Adding a function/method called changed() just means that you have to go look at that function to know what it’s checking.
It’s this kind of adding code for no good reason that leads to both code bloat and insanity. Anybody who’s the least bit familiar with C/C++ knows what the “state != lastState … lastState = state” construct does, because it’s all right in front of you. NOBODY knows what the changed() function means without looking it up.
This. Thanks
+1
By the way, the examples on the github page, with and without the library… they’re not even functionally equivalent. One resets the “old” variables only if both conditions are met (both values other than their respective old variables), but the library changed() method checks whether the respective values changed with their last assignment. Different behavior. And that’s the problem with these “helpers”. You forget what exactly they’re doing.
It is a waste of time to look up what they actually do over what they have to offer.
+1
But I would like to add that ALL code is difficult to understand (some more then others).
So usefull comments should be added. See is as a note to yourself in the far future (3 months from now).
There’s no shame in writing comments in your code. Some people are lazy others are arrogant and think you won’t need it, but many think that they will add it later… but somehow they never do. And so you’ll end up with code that is difficult to read when you see it the fist time. Especially when variable names are badly chosen, numbers instead of enums are used and brackets are left out or are all over the place. A proper case statement can improve a simple if ifelse ifelse else structure in a few seconds… etc.
BAD HABBIT:
writing code without any comments
BAD COMMENT: (writing exactly what each line does and not what it means as a whole*/
/*check if flag is set, loop until cleared*/
/*copy data from buffer to SSP register*/
/*increment pointer*/
/*decrement variable*/
/*check if variable is zero*/
/*do again*/
GOOD COMMENT:
/*transfer all stringdata to serial port*/
Yes, the frustration you’re saving is likely your own! I find the best comments are along the lines of “what was I thinking” and “why did I structure this the way I did”. No sin to put a short paragraph in here and there.
I find the best comments are the ones that tell me why I didn’t do it the other (shorter / more obvious) way.
// can’t just do xyz because the abc won’t def
I prefer comments as specifications. The comments explain what the code *should* do or what the objective is, rather than what the code does or how it does it. The comments can then be written before the code PDL-style and form the basis of unit tests too. Such comments only disagree with the code when the specification changes rather than merely the implementation.
+1. Every function should begin with a specification. What the parameters mean, what it does, and what it returns. Also, every source file should indicate what group of functions, classes, or whatever, are in it. I’ve seen WAY too many “open source” projects whose files begin with their legal statements, then jump right into code.
+1
Not trying to be snarky, but.. So there is a library for hiding the “complexity” of integer comparison? What next, will there be a isSmaller(), isBigger() and isEqual() too, as the “” and “==” are equally complex concepts? How about localized versions for us with English as second or third language?
Perhaps one should try using the visual programming languages, if the normal mathematical operations are too hard to read.
Well, mystuff.changed() is quite readable in my opinion. But that’s not the biggest worry I have. The Stator class is also keeping track of time, which adds more functionality to the class than you’d need. There’s no need for this in the first place!
The other drawback in my opinion is that this library requires a C++ compiler while it’s more practical to limit yourself to C without the classes, as that’s easier to use. Classes can make things slightly more complex but also tend to eat up more memory.
Besides, if you want to implement something like this then it’s easier to implement an OnChanged() event instead by using a callback function that gets called whenever the value changes.
But it’s not good as it requires a bit more code for implementing a simple check.
The idea is that as soon as you have more than variable to track you get a lot of repeated code and a the simple state != lastState becomes state1 != lastState1 && state2 != lastState2 && state3 != lastState3 and all of its accompanying logic code to handle the state transition.
So you gotta make sure you have written the logic correctly for each of the states. Small mistakes creep in fairly quickly so its easier to rely on a smaller construct that behaves the same every time and isn’t prone to you forgetting an assignment for state3.
I mostly do rapid prototyping. More abstraction to keep me from repetitive errors is helpful.
If it doesn”t help you that’s perfectly fine. It’s not a “this fixes everyone’s problems” library.
Doing fast prototyping is no excuse for writing bad code. When you start to use state-flags that are either true or false, you’re doing it wrong! This is when you would need bitfields instead, where you can use a single int to keep the state for 16 to 32 different boolean flags in a single variable. So yeah, you would need to define a few constants like STATE1TRUE=1, STATE2TRUE=2, STATE3TRUE=4 and even STATE1AND3TRUE=STATE1TRUE || STATE3TRUE. Then you can compare if (flag && STATE1AND3TRUE==STATE1AND3TRUE) to check for multiple flags all at once…
You can also check if (flag && STATE1AND3TRUE==STATE1TRUE) to make sure state 1 is true and state 3 is not. Or check for STATEFALSE (=0) to check if both are false…
Using bitfields is just something you should get used to. The related AND/OR/NOT/XOR logic tends to be challenging for new developers but once you understand the concept, it should be relatively easy. And for more complex situations, you can always write things down to work it out. With pen and paper!
Wasn’t meant to be a comment to you but to BrightBlueJim.
Anyways, you honestly think that construct you describe is easier to read?
That’s exactly what I was thinking with the given examples. It’s so safe to use “!=” to represent “not equal to” that I don’t remember the last time someone that wasn’t a programmer asked me what that meant or showed any sign they didn’t understand what it meant.
Last time I tried to work with a programmer that insisted on writing stupid “helper” functions like “changed()”. I began to pepper the code with alternatives like: “state ^ lastState”, “state – lastState”, “~(state | lastState) == ~state & ~lastState” and a boatload of other methods of testing equivalence reaching all the way into linear algebra. Really didn’t take long for that dude to stop wasting time on such functions.
My pet peeve is ‘{‘ and ‘}’ !!
Sitting through a code review and the author of a piece of code can not find the start and end of his functions, because he could not find the ‘{}’ in the B/W editor that everyone loves to hate.
White space is just as important as useful variable names.
Isn’t that why python uses tabs/indents to make the code more readable ??
Even the interpreter enforces that.
Why can’t the C/C++ people take the hint ?
Tabs and spaces are not more readable.
Ive spent hours looking at other peoples code trying to find out why somethings not working only to find they put a tab in the wrong place. Looking for something thats not there is a lot harder than trying to find a “}” put in the wrong place.
Theres no excuse for being lazy and not properly setting your code out in a readable form.
Enforced white space in a chunk of code is not a remedy. Its just easier to make it unreadable.
Even the very simplest editors have a bracket matching function. And most of the C or C++ I read – and all that I write, does also use indentation to make things clear. Further, I rarely put a bracket on the same line as code.
There they are, with whitespace after.
You can do this sort of thing with any language I know of.
Languages that force me to do it their way and insult my intelligence don’t get used around here.
What are you talking about?
He isn’t missing any brackets in his code.
In one part of your comment you seem to be whining that you don’t like brackets and prefer Python style where grouping happens by whitespace. In another part you seem to be complaining about someone not using brackets, saying he could not find the keys.
Well which is it?
Also for every editor there are a lot of people that love to hate them. What editor is limited to B/W? Even the simplest commandline editor runs in a terminal where the user can usually choose whatever colors. You seem to think it is obvious which editor you are talking about but I am not sure you even know.
As for white space.. you are right. That is important too. But most programmers I deal with seem to already know that. The lesson I think people need to learn is that TOO MUCH whitespace is just as bad as too little. Too many spaces per indent means the eye has to scan left/right more. Scanning left and right is a great way to lose one’s place and makes concentrating on the meaning harder. 3 spaces is plenty, 2 is good with a blocky font! Also, devoting a line to an opening bracket? WHY! It does nothing for readability, it just means more scrolling. Arguably that is better than left/right scanning but extra scrolling for no benefit is still a detriment. Put your opening brackets at the end of the method name, conditional or loop statement. Do not put it on the next line down.
Oh, and the way indentation is handled for switch statements in pretty much every style guide and as the default behavior of pretty much every editor… OMFG, WTF is wrong with people?!?!?!
Ok, that’s enough. /rant
Perfect example:
>> Ok, that’s enough. /rant ( end rant )
Where is the start rant ?
Oh, you forgot to add it in your hurry to rant !
Hey Lewin,
“working with Amazon Dash buttons.”
should read:
“working with Arduino Dash buttons.”
You know, truth in advertising.
Perhaps a good (and easy) start would be to add an “auto-formater” in the Arduino editor.
which it has
which is ctrl-t?
I can see what it might be useful for the author, but does that level of abstracting help? And is really clear what they are doing, or are we just adding a layer of bloat?
Clean code would probably be a better start – he could start with putting the ‘{‘ in the right place…
It would be much better when Arduino coders learn to code more properly. Seriously, if these coders did a better job then this library would not be required!
The C language has an excellent tool for this, which happens to be the bitwise operations. This allows you to hold various states inside a single variable. The use of bit-fields is generally preferred as it saves memory. Interestingly enough, you can actually use multi-bit fields in your structures like this:
struct {
unsigned int suit : 2;
unsigned int value : 4;
unsigned int face : 1;
unsigned int state : 1;
} card;
And yes, that would create an 8-bit playing card for the four suits (clubs, spades, hearts and diamonds) plus the card value from 0 to 15 there 0 would be a joker, 1 the ace and 11, 12 and 13 the Jack, Queen and King, allowing you to add two more card values. The face would be an indication if the face value is up, thus visible and the state would indicate if the card is still in the game or not. Or whatever you like, really. So you can check card.face to see if it’s face up or down or card.value to check it’s value. And if need be, it can be streamed quite easily as it’s just a single byte in total…
You can make larger fields too, btw. For example, a whole deck might be defined as:
struct {
unsigned int spades: 13;
unsigned int clubs: 13;
unsigned int hearts: 13;
unsigned int diamonds: 13;
} deck;
This is 52 bits in total but the compiler will turn it into 64-bits. And every field in the structure would indicate if that specific card is inside the deck or not. You can also use this structure to represent the hand of a player or even check for specific winning patterns. You can even use the 12 remaining bits for additional states about the deck, if need be.
But bitwise operations like these are too challenging for most, even though they make coding a lot easier when properly used…
I’m not really sure what you’re trying to say with this, the helpful thing about the library isn’t the fact it puts two values into a single object, it’s that it uses C++ operator overloading to ease the burden of tracking the state. If I’d been implementing it I wouldn’t have gone for the two templated versions, but that’s neither here nor there.
I’m just saying that making a generic library to handle states is a bit overkill and leads to unnecessary code. You can have multiple flags inside a single variable by using bitwise operators AND/OR/XOR/NOT logic together with a list of constants or predefined values. And in case you need more than just boolean flags you can also just use bit-fields within a single type and specify the length in bits for every value. My example is a card game where I can store a whole deck inside a single int or a single card with two states in a single 8-bit value.
All this without using C++…
And as I said, the library is doing way too much, yet doesn’t do the important things… It keeps track of the age of the variable, which is a bit pointless. But it also doesn’t do anything useful when the value is changed. It would be useful if it calls a callback function when the value is changed so you respond immediately on any change.
The art of using scruffy code to make others’ code readable ;-)
bool changed(){
if(_state != _lastState){
return true;
}else{
return false;
}
}
should be written
bool changed() {
return (_state != _lastState);
}
(and we are speaking of microcontrollers, where cpu cycles and memory are a concern)
no, it should be
bool changed()
{
return (_state != _lastState);
}
I would put three spces before that return keyword, but this style is better. Sometimes, I have had to print the code and draw lines with a marker to find some misbehaved “}” …
Or it could be …
bool changed()
{
return (_state != _lastState);
}
This is a rare indentation style but I have used it for decades, and find it very readable.
Hard to comment on indentation style on a forum that strips leading spaces.
OOPS did not know that about leading spaces. Just pretend 3 spaces before every line after the first.
Thanks Andy
I am not a proper programmer, but I seem to code a lot of state machines. And I use “switch”
static int state
switch (state){
case 0; // Idle
if read_input() <= 0 break;
state = 1;
timer = 10000;
// fall-through
case 1; // wait for reset
if (timer -= fperiod) 0) break ; // continue waiting
state = 3 ;
// fall-through
case 2;
case 3;
…
and so on
…
}
in C++ usign switch with a enum struct , and you save a lot of time/memory space …. and easy to coding
The named states also improves readability over a bunch of numerical constants.
People… Just switch over to PlatformIO on Visual Studio Code (or whatever IDE) and enjoy updates, code formatting, linting and a stable IDE…
Just make sure to do it on a rainy day, it will take you some time to get used to the new flow.
Sublime text for me, but I even use gedit…any modern editor is decent these days. All match brackets easily, almost all are better than the Arduino IDE…
All have updates, and the list of languages that syntax highlight might be a better criterion. Or the ability for refactoring to really work despite odd rules in some language – where the same name can be used as a function or a variable depending on context…
+1
Many opinions, and many of them well reasoned. I started to read the article and thought I disliked the idea, then considered I have also created simple functions in the past just to simplify reading the code, when tired or in a hurry.
Lewin´s solution is not for all people, nor is something that should be enforced. It Is just one more suggestion of ways to make your code more readable to yourself, mainly, and a way to induce thioughts in “how could I make my code clearer ” .
In the case of his examples, if one uses the state comparision in just one place, no need to have another abstraction and the bloat that comes with it. But like my first coding teacher taught us, “if you have to repeat that piece of code in a lot of places, it should be turned into a function”.
It surprises me how much tricks get reinvented because the young kids don’t know about it, and then everyone cheers about it.
It doesn’t surprise me, it is new if you haven’t been taught it.
The problem is the crappy way we’ve documented and communicated our “innovations” and “passed” that knowledge on to others.
whats wrong with a simple c macro:
#define changed(ab) (was_##ab != ab)?(was_##xy=ab),true:false
What’s wrong is it’s so unreadable I doubt if it would do anything useful. Seems like something missing…
Not that difficult, but requires some basic knowledge of C macro concatenation.
I would at least put anther bracket around it just in case. | https://hackaday.com/2019/05/09/stator-library-makes-your-arduino-code-easier-to-read/?replytocom=6148380 | CC-MAIN-2019-39 | refinedweb | 3,693 | 70.63 |
How to save or write files to Lopy flash
I am sending an ASCII file from one Lopy4 to another over LoRa. I can send the ASCII characters in packets, and the receiving Lopy4 gets them (I confirmed this by printing the received packets). I am also trying to save/write the received ASCII packets to a file named "image.txt" in the flash memory of the receiving Lopy4. I used the code below, but when I download "image.txt" and view it, it still appears as an empty file.
from network import LoRa
import socket
import time

lora = LoRa(mode=LoRa.LORA, frequency=915000000)
s = socket.socket(socket.AF_LORA, socket.SOCK_RAW)
s.setblocking(True)

while True:
    # send some data
    rec = s.recv(1024)
    print(rec)
    fo = open("image.txt", "wb")
    fo.write(rec)
I may be doing something wrong. Can anyone help me with how to write/save the received packets to the "image.txt" file that I created?
Your kind help will be appreciated
Thank you
@rcolistete If using the nvs class by @oligauc below: that one indeed writes to the flash file system. If you use the nvs_xxxx methods from the pycom module, data is stored in the separate nvs partition and not in the file system partition.
- rcolistete last edited by
@robert-hh But isn't '/flash/store.nvram' a common file in '/flash' file system ?
@gcapilot No, it does not. It write to a special area of the flash, but not the file system in flash, which is located in the so called nvs partition at offset 0x9000, size 0x7000.
@oligauc said in How to save or write files to Lopy flash:
Doesn't this code just create a regular file and write/read to it?
- rcolistete last edited by
Another example: using 'with', there is no need to close the file (it is done at the end of the 'with' block):
import os

try:
    with open('/flash/log.txt', 'a') as f:
        f.write('test')
    os.sync()
except OSError as e:
    print('Error {} writing to file'.format(e))
You have to close the file. If you don't close or flush the stream, the payload may remain in the buffer and will not be written to your device. Because there is no working flush, you have to close the file.
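Applying that advice to the original loop, a corrected sketch could look like the following (MicroPython, device-only code, so shown untested). Note the original also reopens the file with "wb" on every packet, which truncates it each time; "ab" appends instead:

```python
from network import LoRa
import socket

lora = LoRa(mode=LoRa.LORA, frequency=915000000)
s = socket.socket(socket.AF_LORA, socket.SOCK_RAW)
s.setblocking(True)

while True:
    rec = s.recv(1024)
    print(rec)
    # open in append mode so earlier packets are kept, and let 'with'
    # close the file so the data actually reaches flash
    with open('image.txt', 'ab') as fo:
        fo.write(rec)
```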
@phusy012 Please find example code below.
Create a file named nvs.py
import uos

class Nvs:
    def __init__(self, delimiter, file_size):
        self.max_file_size = file_size
        self.delimiter = delimiter
        self.file_path = '/flash/store.nvram'
        self.file_handle = None

    def empty(self):
        isEmpty = False
        try:
            if uos.stat(self.file_path)[6] == 0:
                isEmpty = True
        except Exception as e:
            isEmpty = True
        return isEmpty

    def full(self):
        isFull = True
        try:
            if uos.stat(self.file_path)[6] < self.max_file_size:
                isFull = False
        except Exception as e:
            isFull = False
        return isFull

    def open(self):
        if self.file_handle is None:
            self.file_handle = open(self.file_path, "a+")

    def close(self):
        if self.file_handle:
            self.file_handle.close()
            self.file_handle = None

    def store(self, data):
        if self.full():
            return False
        if self.file_handle is None:
            self.open()
        try:
            self.file_handle.write(data)
            self.file_handle.write(self.delimiter)
            self.file_handle.flush()
        except Exception as e:
            print("NVRAM: Exception storing data: " + str(e))
            return False
        return True

    def read_all(self):
        buf = ''
        if self.file_handle is None:
            self.open()
        try:
            self.file_handle.seek(0, 0)
            buf = self.file_handle.read()
        except Exception as e:
            print("NVRAM: Exception reading file: " + str(e))
        return buf
Create a file named main.py
What is ML.NET?
ML.NET is a free, cross-platform, open-source machine learning framework provided by Microsoft. It is made specifically for the .NET community. With the help of the ML.NET framework, you can easily implement and integrate machine learning features into your existing or new .NET applications.

To start working with ML.NET, you don’t have to be a machine learning expert. You can just start building simple ML.NET applications while teaching yourself.
In this article, I am going to give a demo of a simple yet exciting ML.Net based project. We will develop an application that will be used to forecast bike rental service demand using univariate time series analysis.
The code for this sample can be found on the CloudBloq/BlazorWithML.NET repository on GitHub.
What You’ll Need to Get Started
- Visual Studio 2019 with the ".NET Core cross-platform development" workload installed.
Needed Projects
We will be making use of three projects:
- An ASP.NET Blazor WebAssembly project which serves as the UI to display the forecast.
- An ASP.NET Web Api project which feeds the UI with data (the bike rental service demand forecast) from the service.
- A service [C# Class Library (.NET Standard)] which will contain the logics used to forecast bike rental service demand.
If you clone the CloudBloq/BlazorWithML.NET repository, the database already resides in the Data folder in project BlazorWithML.NET.Api. When you run the application for the first time, the database will be automatically created and filled with the needed data. The database already contains the data that will be used to train the model.
Setting Up the Projects
- ASP.NET Blazor WebAssembly project
Launch Visual Studio, click the Create a new project link. Next, type Blazor in the search box and click the first option that comes up (Blazor App):
Give the project and solution a name, e.g. BlazorWithML.NETPOC then click the Create button.
Next, select the Blazor WebAssembly App option and click the Create button. This will create the project and solution for you. This link will help you understand the Blazor WebAssembly project structure.
- ASP.NET Web Api project
Right click the Solution in the Solution Explorer and select Add -> New Project, search for Web Application and select the ASP.NET Core Web Application option:
click Next, give the project a name BlazorWithML.NET.Api, then click Create, select the API option:
and finally click Create.
- C# Class Library (.NET Standard)
Right click the Solution in the Solution Explorer and select Add -> New Project, search for Class Library (.NET Standard) and select the C# version. Click Next, give the project a name Forecasting_BikeSharingDemandLib, then click Create.
Adding Projects Dependencies
We need to add the ML.NET and SqlClient package to the service: Right-click the Forecasting_BikeSharingDemandLib project, select Manage NuGet Packages for Solution. Under the Browse section, search for Microsoft.ML and click on it then click the Install button. After that, search for Microsoft.ML.Timeseries then click the Install button. Also search for System.Data.SqlClient and Install it.
We need to give the Api access to the Service: This can be done by adding a reference of the service to the Api. Right-click the Dependencies of project BlazorWithML.NET.Api and click Add project Reference, select the Forecasting_BikeSharingDemandLib checkbox and click ok.
Writing the Service
The service is the backend of the application. It contains the logics used to forecast bike rental service demand.
We will make use of univariate time series analysis algorithm known as Singular Spectrum Analysis to forecast the demand for bike rentals.
Understand the problem
In order to run an efficient operation, inventory management plays a key role. Having too much of a product in stock means capital sitting on shelves and not generating any revenue. Having too little product leads to lost sales. Therefore, the constant question is: what is the optimal amount of inventory to keep on hand? Time-series analysis helps answer these questions by looking at historical data, identifying patterns, and using this information to forecast values some time in the future.
The algorithm used in this article for analyzing data is univariate time-series analysis.
Univariate time-series analysis takes a look at a single numerical observation over a period of time at specific intervals, for example monthly sales. It works by decomposing a time-series into a set of principal components. These components can be interpreted as the parts of a signal that correspond to trends, noise, seasonality, and many other factors. Then, these components are reconstructed and used to forecast values some time in the future.
Create the needed files and folders
Right-click the Forecasting_BikeSharingDemandLib in the Solution Explorer then select Add -> New Folder and give it the name Model. This folder will contain the objects used to model the output.
The output will be in two forms:
- The Evaluation, which will contain the Mean Absolute Error and the Root Mean Squared Error and
- The Forecast, which will contain Lower Estimate, UpperEstimate, Forecast...
Create the Forecast output object by right clicking the Model folder and select Add -> class then name it ForecastOutput.cs. The code for it is:
public class ForecastOutput
{
    public string Date { get; set; }
    public float ActualRentals { get; set; }
    public float LowerEstimate { get; set; }
    public float Forecast { get; set; }
    public float UpperEstimate { get; set; }
}
Next, right-click the Model folder and select Add -> class then name it EvaluateOutput.cs. The code for it is:
public class EvaluateOutput
{
    public float MeanAbsoluteError { get; set; }
    public double RootMeanSquaredError { get; set; }
}
We also need to add the class that will contain the logic. Right-click the project Forecasting_BikeSharingDemandLib and select Add -> class then name it BikeForcast.cs.
The Logic
The entire code for the file BikeForcast.cs is (I will break it down bit by bit):
GetConnectionString(), lines 15 to 22, is used to get the connection string where the data is stored and also the Model path. MLModel.zip can be found in the root folder of project BlazorWithML.NET.Api.
Lines 144 to 151 define the ModelInput class. This class is used to hold the database objects and it contains the following columns:
- RentalDate: The date of the observation.
- Year: The encoded year of the observation (0=2011, 1=2012).
- TotalRentals: The total number of bike rentals for that day.
Lines 153 to 161 define the ModelOutput class. This class contains the following columns:
- ForecastedRentals: The predicted values for the forecasted period.
- LowerBoundRentals: The predicted minimum values for the forecasted period.
- UpperBoundRentals: The predicted maximum values for the forecasted period.
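For reference, the two classes described above can be sketched as plain C# classes. The shape below is an assumption based on the column descriptions (the actual file may additionally decorate them with ML.NET column attributes):

```csharp
public class ModelInput
{
    public DateTime RentalDate { get; set; }
    public float Year { get; set; }
    public float TotalRentals { get; set; }
}

public class ModelOutput
{
    // arrays: one element per forecasted period
    public float[] ForecastedRentals { get; set; }
    public float[] LowerBoundRentals { get; set; }
    public float[] UpperBoundRentals { get; set; }
}
```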
Line 24 defines the GetBikeForcast() method that takes in a parameter numberOfDaysToPredict : int. The parameter will be gotten from the UI. The user will have to input the amount of days (between 1 and 500) he/she wants to predict for. This method holds the logic to forecast the demands with the help of two other methods for Evaluate and Forecast.
Line 26 defines the MLContext. The MLContext class is a starting point for all ML.NET operations, and initializing mlContext creates a new ML.NET environment that can be shared across the model creation workflow objects. It's similar, conceptually, to DBContext in Entity Framework.
Next we need to load the data from the database.
Line 29 creates a DatabaseLoader that loads records of type ModelInput. Line 32 defines the query to load the data from the database.
ML.NET algorithms expect data to be of type Single. Therefore, numerical values coming from the database that are not of type Real, a single-precision floating-point value, have to be converted to Real.
The Year and TotalRental columns are both integer types in the database. Using the CAST built-in function, they are both cast to Real.
We will create a DatabaseSource to connect to the database and execute the query on line 35, after which we then load the data into an IDataView on line 40.
The dataset contains two years' worth of data. Only data from the first year is used for training; the second year's data is held out to compare the actual values against the forecast produced by the model. Filter the data (line 43 and 44) using the FilterRowsByColumn transform.
For the first year, only the values in the Year column less than 1 are selected by setting the upperBound parameter to 1. Conversely, for the second year, values greater than or equal to 1 are selected by setting the lowerBound parameter to 1.
Next, we will define a pipeline that uses the SsaForecastingEstimator to forecast values in a time-series dataset, as shown on lines 47 to 56.
The forecastingPipeline takes 365 data points for the first year and samples or splits the time-series dataset into 30-day (monthly) intervals as specified by the seriesLength parameter. Each of these samples is analyzed through a weekly or 7-day window. When determining what the forecasted value for the next period(s) is(are), the values from the previous seven days are used to make a prediction.
The model is set to forecast a period of the number of day(s) the User input into the future as defined by the horizon parameter. Because a forecast is an informed guess, it's not always 100% accurate. Therefore, it's good to know the range of values in the best and worst-case scenarios as defined by the upper and lower bounds. In this case, the level of confidence for the lower and upper bounds is set to 95%. The confidence level can be increased or decreased accordingly. The higher the value, the wider the range is between the upper and lower bounds to achieve the desired level of confidence.
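Putting those parameters together, the pipeline definition is presumably along these lines — a sketch using ML.NET's `ForecastBySsa` extension method, with variable names assumed:

```csharp
var forecastingPipeline = mlContext.Forecasting.ForecastBySsa(
    outputColumnName: "ForecastedRentals",
    inputColumnName: "TotalRentals",
    windowSize: 7,                      // analyze each sample through a weekly window
    seriesLength: 30,                   // split the series into 30-day intervals
    trainSize: 365,                     // one year of training data
    horizon: numberOfDaysToPredict,     // user-supplied number of days to forecast
    confidenceLevel: 0.95f,             // 95% confidence for the bounds
    confidenceLowerBoundColumn: "LowerBoundRentals",
    confidenceUpperBoundColumn: "UpperBoundRentals");
```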
On line 59, we use the Fit method to train the model and fit the data to the previously defined forecastingPipeline.
Next, we will evaluate the model, and lines 74 to 104 defines an helper method that will be used to evaluate the model.
To evaluate performance, the following metrics are used:
- Mean Absolute Error: Measures how close predictions are to the actual value. This value ranges between 0 and infinity. The closer to 0, the better the quality of the model.
- Root Mean Squared Error: Summarizes the error in the model. This value ranges between 0 and infinity. The closer to 0, the better the quality of the model.
We evaluate how well the model performs by forecasting next year's data and comparing it against the actual values.
Inside the Evaluate method, forecast the second year's data by using the Transform method with the trained model on line 77.
We get the actual values from the data by using the CreateEnumerable method on line 80, and the forecasted values by using the CreateEnumerable method on line 85.
We then calculate the difference between the actual and forecasted values, commonly referred to as the error, on line 90.
We measure performance by computing the Mean Absolute Error and Root Mean Squared Error values on lines 93 and 94.
Next, we need to save the evaluated model. The model is saved in a file called MLModel.zip as specified by the previously defined modelPath variable from the GetConnectionString() method. Use the Checkpoint method to save the model on line 66.
Finally, we will use the model to forecast demand. Lines 106 to 140 define the Forecast helper method.
We create a List to hold the result of the forecast on line 108, then we use the Predict method on line 111 to forecast rentals for the number of day(s) entered by the user.
Then we align the actual and forecasted values on lines 113 to 131.
With this, we are done with the logic.
The Api
The Blazor WebAssembly app will need an Api to get the forecast from the service. First, we need to enable CORS on the Api. Go to the Startup.cs file of project BlazorWithML.NET.Api. Create a new field:
readonly string AllowedOrigin = "allowedOrigin";
Next, add the CORS service to your app by adding these lines of code to the ConfigureServices(IServiceCollection services) method:
services.AddCors(option =>
{
    option.AddPolicy("allowedOrigin",
        builder => builder.AllowAnyOrigin().AllowAnyMethod().AllowAnyHeader()
    );
});
Finally, add the CORS middleware to the Configure(IApplicationBuilder app, IWebHostEnvironment env) method:
app.UseCors(AllowedOrigin);
Ensure that this middleware is added as the first middleware in the pipeline that is right after the:
if (env.IsDevelopment())
{
    app.UseDeveloperExceptionPage();
}
The Controller
Create an empty Api controller in the Controllers folder, and name it BikeDemandForcastController.cs. The code for this controller is:
The first endpoint, lines 11 to 17, (/GetEvaluateOutput/{numberOfDaysToPredict}) is used to get the EvaluateOutput.
The second endpoint, lines 20 to 26, (/GetForecastOutput/{numberOfDaysToPredict}) is used to get the ForecastOutput.
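Since the controller listing itself is not shown here, the two endpoints suggest a controller roughly like the sketch below. The service method names are assumptions, not the article's actual code:

```csharp
[ApiController]
[Route("[controller]")]
public class BikeDemandForcastController : ControllerBase
{
    [HttpGet("GetEvaluateOutput/{numberOfDaysToPredict}")]
    public ActionResult<EvaluateOutput> GetEvaluateOutput(int numberOfDaysToPredict)
    {
        // hypothetical call into the Forecasting_BikeSharingDemandLib service
        return BikeForcast.GetEvaluateOutput(numberOfDaysToPredict);
    }

    [HttpGet("GetForecastOutput/{numberOfDaysToPredict}")]
    public ActionResult<List<ForecastOutput>> GetForecastOutput(int numberOfDaysToPredict)
    {
        return BikeForcast.GetForecastOutput(numberOfDaysToPredict);
    }
}
```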
The UI
First, we are going to set the base Uri of the Api endpoints. This can be done in the Program.cs of project BlazorWithML.NET by replacing:
builder.Services.AddScoped(sp => new HttpClient { BaseAddress = new Uri(builder.HostEnvironment.BaseAddress) });
with
builder.Services.AddScoped(sp => new HttpClient { BaseAddress = new Uri("") });
Model Objects
The Api returns data in JSON format so we need to map the response data to an object which would be displayed to the user. Create a new folder called Model and create a class called ForecastOutput.cs in the Model folder. The code for this class is:
Create another class called EvaluateOutput.cs:
We will also need an object to store the input from the user. In the Model folder create a class called BikeForcastInput.cs:
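The client-side model classes mirror the service's output classes shown earlier; `BikeForcastInput` is assumed to hold just the user's day count:

```csharp
public class ForecastOutput
{
    public string Date { get; set; }
    public float ActualRentals { get; set; }
    public float LowerEstimate { get; set; }
    public float Forecast { get; set; }
    public float UpperEstimate { get; set; }
}

public class EvaluateOutput
{
    public float MeanAbsoluteError { get; set; }
    public double RootMeanSquaredError { get; set; }
}

public class BikeForcastInput
{
    public int NumberOfDaysToPredict { get; set; }  // assumed field name
}
```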
Blazor Page
Blazor pages end with .razor and are called Razor components.

In the Pages folder, create a Razor Component and name it BikeForcast.razor. This component will be used to get the input (the number of days) from the user and display the Evaluate and Forecast output. The code is:
Line 1 defines the Uri the BikeForcast.razor will be mapped to. Lines 11 to 70 define the components that will be displayed to the user. The GetBikeDemandForcast() (lines 78 to 82) is used to call the endpoints to get the Evaluate and Forecast.
- Navigation
Replace the code in the NavMenu.razor component in the Shared folder with:
With this, we are done with coding. Now it's time to test our application.
Testing the application
First, we need to set the UI (BlazorWithML.NET) and Api (BlazorWithML.NET.Api) to start up at the same time when we run the application.
Right-click the solution then click Set Startup Projects. Select Multiple startup projects and set the Action of BlazorWithML.NET and BlazorWithML.NET.Api to Start. You should get the image below:
Click Apply then Ok. Right-click the solution and click Build Solution. After the build succeeds, press CTRL + F5 to run the application. If you get an IIS error, restart your VS, build the project again, and press CTRL + F5. Once the application is running, you should get the image below:
Click the Bike Demand Forcast link, enter a number between 1 and 500, and you should get the result of the Forecast in a table as shown below:
Do More Blazor or ML.NET!!
If you enjoyed this article and feel like learning more about Blazor WebAssembly or ML.NET, the links below might be just what you need:
Awesome! Please share it with anyone you think could use this information. Thanks for reading. As always if you have any questions, comments, or concerns about this post feel free to leave a comment below.
Discussion (6)
So, I have put to practice all that you have shared here and I have a bug since yesterday 13th February 2021, I will appreciate if you be willing to assist. Apparently I sent you a connection request on LinkedIn but I think you are yet to confirm it. How can I reach you so I could share my code base with you.
Regards
Hi, thanks for reading the article. I have accepted your LinkedIn request, thanks
Wow, I am blown away by the details of your article. I am working on a mini ML.NET project and the knowledge shared here will be valuable to my learning curve.
Hello Mr. Ojo, I have gone through everything you explained here and also applied same. I have a bug will you be willing to assist me please?
Yes I am willing to assist, thanks.
Hi! After running the app, and editing the number of days, I get the error: dev-to-uploads.s3.amazonaws.com/up... | https://practicaldev-herokuapp-com.global.ssl.fastly.net/jioophoenix/fun-with-asp-net-core-blazor-webassembly-and-ml-net-4i29 | CC-MAIN-2022-27 | refinedweb | 2,756 | 66.44 |
1. Who may be interested
2. What is Boot Loader
3. Be ready to go deeper
3.1. So what language you should know to develop Boot Loader
3.2. What compiler you need
3.3. How system boots
4. Let’s code
4.1 Program architecture
4.2 Development environment
4.3 BIOS interrupts and screen clearing
4.4 «Mixed code»
4.5 CString implementation
4.6 CDisplay implementation
4.7 Types.h implementation
4.8 BootMain.cpp implementation
4.9 StartPoint.asm implementation
5. Let’s assemble everything
5.1 Creation of COM file
5.2 Assembly automation
6. Testing and Demonstration
6.1 How to test boot loader.
6.2 Testing with the virtual machine VmWare
6.2.1 Creation of the virtual machine
6.2.2 Working with Disk Explorer for NTFS
6.3 Testing on the real hardware
6.4 Debug
7. Information Sources
8. Conclusion
Most of all I’ve written this article for those who have always been interested in the way different things work. It is for those developers who usually create their applications in high-level languages such as C, C++ or Java, but are faced with the necessity to develop something at low level.
We will describe what is going on after you turn on a computer and how the system boots. As a practical example, we will consider how you can develop your own boot loader, which is actually the first point of the system booting process.
Boot loader is a program situated at the first sector of the hard drive; and it is the sector where the boot starts from. BIOS automatically reads all the content of the first sector to the memory just after the power is turned on, and jumps to it. The first sector is also called the Master Boot Record. Actually, it is not obligatory for the first sector of the hard drive to boot something. This name has been formed historically because developers used to boot their operating systems with such a mechanism.
In this section I will tell about knowledge and tools you need to develop your own boot loader and also remind some useful information about system boot.
At the first stage of the computer's work, control of the hardware is performed mainly by means of BIOS functions known as interrupts. The implementation of interrupts is given only in Assembler, so it is great if you know it at least a little bit. But it's not a necessary condition. Why? We will use the technology of “mixed code” where it is possible to combine high-level constructions with low-level commands. It makes our task a little simpler.
In this article the main development language is C++. But if you have brilliant knowledge of C then it will be easy to learn the required C++ elements. In general, even C knowledge will be enough, but then you will have to modify the source code of the examples that I describe here.
If you know Java or C# well, unfortunately it won’t help for our task. The matter is that the code of the Java and C# languages that is produced after compilation is intermediate. A special virtual machine is used to process it (the Java Machine for Java, and .NET for C#), which transforms intermediate code into processor instructions. After that transformation it can be executed. Such architecture makes it impossible to use the mixed code technology – and we are going to use it to make our life easier, so Java and C# don’t work here.
So to develop the simple boot loader you need to know C or C++ and also it would be good if you know something about Assembler – language into which all high-level code is transformed it the end.
To use mixed code technology you need at least two compilers: for Assembler and C/C++, and also the linker to join object files (.obj) into the one executable.
Now let’s talk about some special moments. There are two modes of processor functioning: real mode and protected mode. Real mode is 16-bit and has some limitations. Protected mode is 32-bit and is fully used in OS work. When it starts processor works in 16-bit mode. So to build the program and obtain executable file you will need the compiler and linker of Assembler for 16-bit mode. For C/C++ you will need only the compiler that can create object files for 16-bit mode.
The modern compilers are made for 32-bit applications only so we won’t able to use them.
I tried a number of free and commercial compilers for 16-bit mode and chose the Microsoft product. The compiler along with the linker for Assembler, C, and C++ is included in the Microsoft Visual Studio 1.52 package and also can be downloaded from the official site of the company. Some details about the compilers we need are given below.
ML 6.15 – Assembler compiler by Microsoft for 16-bit mode;
LINK 5.16 – the linker that can create .com files for 16-bit mode;
CL – С, С++ compiler for 16-bit mode.
You can also use some alternative variants:
DMC – free compile for Assembler, C, C++ for 16 and 32-bit mode by Digital Mars;
LINK – free linker for DMC compiler;
There are also some products by Borland:
BCC 3.5 – С, С++ compiler that can create files for 16-bit mode;
TASM - Assembler compiler for 16-bit mode;
TLINK – linker that can create .com files for 16-bit mode.
All code examples in this article were built with the Microsoft tools.
In order to solve our task we should recall how the system is booting.
Let’s consider briefly how the system components are interacting when the system is booting (see Fig.1).
After the control has been passed to the address 0000:7C00, Master Boot Record (MBR) starts its work and triggers the Operating System boot. You can learn more about MBR structure for example here.
In the next sections we will be directly occupied with the low-level programming – we will develop our own boot loader.
Boot loader that we are developing is for the training only. Its tasks are just the following:

- correct loading to the memory by the 0000:7C00 address;
- calling the BootMain function that is developed in the high-level language;
- showing a message on the display.
Program architecture is described on the Fig.2 that is followed by the text description.
The first entity is StartPoint that is developed purely in Assembler as far as high-level languages don’t have the necessary instructions. It tells compiler what memory model should be used, and what address the loading to the RAM should be performed by after the reading from the disk. It also corrects processor registers and passes control to the BootMain that is written in high-level language.
Next entity – BootMain – is an analogue of main, which is in its turn the main function where all the program functioning is concentrated.

CDisplay and CString classes take care of the functional part of the program and show the message on the screen. As you can see from Fig.2, the CDisplay class uses the CString class in its work.
Here I use the standard development environment Microsoft Visual Studio 2005 or 2008. You can use any other tools but I made sure that these two with some settings made the compiling and work easy and handy.
First we should create the project of Makefile Project type where the main work will be performed (see Fig.3).
File->New\Project->General\Makefile Project
Fig.3 – Create the project of Makefile type
To show our message on the screen we should clear it first. We will use special BIOS interrupt for this purpose.
BIOS proposes a number of interrupts for the work with computer hardware such as video adapter, keyboard, disk system. Each interrupt has the following structure:
int [number_of_interrupt];
where number_of_interrupt is the number of the interrupt
Each interrupt has a certain number of parameters that should be set before calling it. The ah processor register is always responsible for the number of the function for the current interrupt, and the other registers are usually used for the other parameters of the current operation. Let’s see how the work of the int 10h interrupt is performed in Assembler. We will use the 00h function that changes the video mode and clears the screen:
mov al, 02h ; setting the graphical mode 80x25(text)
mov ah, 00h ; code of function of changing video mode
int 10h ; call interruption
We will consider only those interrupts and functions that will be used in our application. We will need:
int 10h, function 00h – performs changing of video mode and clears screen;
int 10h, function 01h – sets the cursor type;
int 10h, function 13h – shows the string on the screen;
Compiler for C++ supports inbuilt Assembler, i.e. when writing code in a high-level language you can also use the low-level language. Assembler instructions that are used in the high-level code are also called asm insertions. They consist of the key word __asm and the block of Assembler instructions in braces:
__asm ; key word that shows the beginning of the asm insertion
{ ; block beginning
… ; some asm code
} ; end of the block
To demonstrate mixed code let’s use the previously mentioned Assembler code that performed the screen clearing and combine it with C++ code.
void ClearScreen()
{
__asm
{
mov al, 02h ; setting the graphical mode 80x25(text)
mov ah, 00h ; code of function of changing video mode
int 10h ; call interrupt
}
}
CString class is designed to work with strings. It includes the Strlen() method that obtains a pointer to a string as the parameter and returns the number of symbols in that string.
// CString.h
#ifndef __CSTRING__
#define __CSTRING__
#include "Types.h"
class CString
{
public:
static byte Strlen(
const char far* inStrSource
);
};
#endif // __CSTRING__
// CString.cpp
#include "CString.h"
byte CString::Strlen(
const char far* inStrSource
)
{
byte lenghtOfString = 0;
while(*inStrSource++ != '\0')
{
++lenghtOfString;
}
return lenghtOfString;
}
CDisplay class is designed for the work with the screen. It includes several methods:

TextOut() – shows a string on the screen at the given position and with the given colors;
ShowCursor() – turns displaying of the cursor on and off;
ClearScreen() – changes the video mode and thus clears the screen.
// CDisplay.h
#ifndef __CDISPLAY__
#define __CDISPLAY__
//
// colors for TextOut func
//
#define BLACK 0x0
#define BLUE 0x1
#define GREEN 0x2
#define CYAN 0x3
#define RED 0x4
#define MAGENTA 0x5
#define BROWN 0x6
#define GREY 0x7
#define DARK_GREY 0x8
#define LIGHT_BLUE 0x9
#define LIGHT_GREEN 0xA
#define LIGHT_CYAN 0xB
#define LIGHT_RED 0xC
#define LIGHT_MAGENTA 0xD
#define LIGHT_BROWN 0xE
#define WHITE 0xF
#include "Types.h"
#include "CString.h"
class CDisplay
{
public:
static void ClearScreen();
static void TextOut(
const char far* inStrSource,
byte inX = 0,
byte inY = 0,
byte inBackgroundColor = BLACK,
byte inTextColor = WHITE,
bool inUpdateCursor = false
);
static void ShowCursor(
bool inMode
);
};
#endif // __CDISPLAY__
// CDisplay.cpp
#include "CDisplay.h"
void CDisplay::TextOut(
const char far* inStrSource,
byte inX,
byte inY,
byte inBackgroundColor,
byte inTextColor,
bool inUpdateCursor
)
{
byte textAttribute = ((inTextColor) | (inBackgroundColor << 4));
byte lengthOfString = CString::Strlen(inStrSource);
__asm
{
push bp
mov al, inUpdateCursor
xor bh, bh
mov bl, textAttribute
xor cx, cx
mov cl, lengthOfString
mov dh, inY
mov dl, inX
mov es, word ptr[inStrSource + 2]
mov bp, word ptr[inStrSource]
mov ah, 13h
int 10h
pop bp
}
}
void CDisplay::ClearScreen()
{
__asm
{
mov al, 02h
mov ah, 00h
int 10h
}
}
void CDisplay::ShowCursor(
bool inMode
)
{
byte flag = inMode ? 0 : 0x32;
__asm
{
mov ch, flag
mov cl, 0Ah
mov ah, 01h
int 10h
}
}
Types.h
Types.h is the header file that includes definitions of the data types and macros.
// Types.h
#ifndef __TYPES__
#define __TYPES__
typedef unsigned char byte;
typedef unsigned short word;
typedef unsigned long dword;
typedef char bool;
#define true 0x1
#define false 0x0
#endif // __TYPES__
BootMain.cpp
BootMain() is the main function of the program and the first entry point (an analogue of main()). The main work is performed here.
// BootMain.cpp
#include "CDisplay.h"
#define HELLO_STR "\"Hello, world…\", from low-level..."
extern "C" void BootMain()
{
CDisplay::ClearScreen();
CDisplay::ShowCursor(false);
CDisplay::TextOut(
HELLO_STR,
0,
0,
BLACK,
WHITE,
false
);
return;
}
StartPoint.asm
;------------------------------------------------------------
.286 ; CPU type
;------------------------------------------------------------
.model TINY ; memory of model
;---------------------- EXTERNS -----------------------------
extrn _BootMain:near ; prototype of C func
;------------------------------------------------------------
;------------------------------------------------------------
.code
org 07c00h ; for BootSector
main:
jmp short start ; go to main
nop
;----------------------- CODE SEGMENT -----------------------
start:
cli
mov ax,cs ; Setup segment registers
mov ds,ax ; Make DS correct
mov es,ax ; Make ES correct
mov ss,ax ; Make SS correct
mov bp,7c00h
mov sp,7c00h ; Setup a stack
sti
; start the program
call _BootMain
ret
END main ; End of program
Now when the code is developed, we need to transform it into the file for the 16-bit OS. Such files are .com files. We can start each of the compilers (for Assembler and C, C++) from the command line, pass the necessary parameters to them and obtain several object files as the result. Next we start the linker to transform all .obj files into the one executable file with the .com extension. That way works, but it’s not very convenient.
Let’s automate the process. In order to do it we create .bat file and put commands with necessary parameters there. Fig.4 represents the full process of application assembling.
Let’s put compilers and linker to the project directory. In the same directory we create .bat file and fill it accordingly to the example (you can use any directory name instead of VC152 where compilers and linker are situated):
.\VC152\ML.EXE /AT /c *.asm
.\VC152\LINK.EXE /T /NOD StartPoint.obj bootmain.obj cdisplay.obj cstring.obj
del *.obj
As the final stage in this section we will describe the way how to turn Microsoft Visual Studio 2005, 2008 into the development environment with any compiler support. Go to the Project Properties: Project->Properties->Configuration Properties\General->Configuration Type.
Configuration Properties tab includes three items: General, Debugging, NMake. Go to NMake and set the path to the build.bat in the Build Command Line and Rebuild Command Line fields – Fig.5.
Fig.5 –NMake project settings
If everything is correct then you can compile in the common way pressing F7 or Ctrl + F7. At that all attendant information will be shown in the Output window. The main advantage here is not only the assembly automation but also navigation thru the code errors if they happen.
This section will tell how to see the created boot loader in action, perform testing and debug.
You can test boot loader on the real hardware or using specially designed for such purposes virtual machine – VmWare. Testing on the real hardware gives you more confidence that it works while testing on the virtual machine makes you confident that it just can work. Surely we can say that VmWare is great method for testing and debug. We will consider both methods.
First of all we need a tool to write our boot loader to the virtual or physical disk. As far as I know there a number of free and commercial, console and GUI applications. I used Disk Explorer for NTFS 3.66 (version for FAT that is named Disk Explorer for FAT) for work in Windows and Norton Disk Editor 2002 for work in MS-DOS.
I will describe only Disk Explorer for NTFS 3.66 because it is the simplest method and suits our purposes the most.
We will need VmWare program version 5.0, 6.0 or higher. To test boot loader we will create the new virtual machine with minimal disk size for example 1 Gb. We format it for NTFS file system. Now we need to map the formatted hard drive to VmWare as the virtual drive. To do it:
File->Map or Disconnect Virtual Disks...
After that the window appears. There you should click Map button. In the next appeared window you should set the path to the disk. Now you can also chose the letter for the disk- see Fig.6.
Fig.6 – Parameters of virtual disk mapping
Don’t forget to uncheck the “Open file in read-only mode (recommended)” checkbox. When checked it indicates that the disk should be opened in read-only mode and prevent all recording attempts to avoid data corruption.
After that we can work with the disk of virtual machine as with the usual Windows logical disk. Now we should use Disk Explorer for NTFS 3.66 and record boot loader by the physical offset 0.
After program starts we go to our disk (File->Drive). In the window appeared we go to the Logical Drives section and chose disk with the specified letter (in my case it is Z) – see Fig.7.
Fig.7 – choosing disk in Disk Explorer for NTFS
Now we use menu item View and As Hex command. It the appeared window we can see the information on the disk represented in the 16-bit view, divided by sectors and offsets. There are only 0s as soon as the disk is empty at the moment. You can see the first sector on the Fig.8.
Fig.8 – Sector 1 of the disk Open. After that the content of the first sector should change and look like it’s shown on the Fig.9 – if you haven’t changed anything in the example code, of course.
You should also write signature 55AAh by the 1FE offset from the sector beginning. If you don’t do it BIOS will check the last two bytes, won’t find the mentioned signature and will consider this sector as not the boot one and won’t read it to the memory.
To switch to the edit mode press F2 and write the necessary numbers –55AAh signature. To leave edit mode press Esc.
Now we need to confirm data writing.
To apply writing we go to Tools->Options. Window will appear; we go to the Mode item and chose the method of writing - Virtual Write and click Write button – Fig.10.
A great number of routine actions are finished at last and now you can see what we have been developing from the very beginning of this article. Let’s return to the VwWare to disconnect the virtual disk (File->Map or Disconnect Virtual Disks… and click Disconnect).
Let’s execute the virtual machine. We can see now how from the some depth, from the kingdom of machine codes and electrics the familiar string appears ““Hello, world…”, from low-level…” – see Fig.11.
Testing on the real hardware is almost the same as on the virtual machine except the fact that if something doesn’t work you will need much more time to repair it than to create the new virtual machine. To test boot loader without the threat of existent data corruption (everything can happen), I propose to use flash drive, but first you should reboot your PC, enter BIOS and check if it supports boot from the flash drive. If it does than everything is ok. If it does not than you have to limit your testing to virtual machine test only.
The writing of boot loader to the flash disk in Disk Explorer for NTFS 3.66 is the same to the process for virtual machine. You just should choose the hard drive itself instead of its logical section to perform writing by the correct offset – see Fig.12.
Fig.12 – Choosing physical disk as the device
If something went wrong – and it usually happens – you need some tools to debug your boot loader. I should say at once that it is very complicated, tiring and time-eating process. You will have to grasp in the Assembler machine codes – so good knowledge of this language is required. Any way I give a list of tools for this purpose:
TD (Turbo Debugger) – great debugger for 16-bit real mode by Borland.
CodeView – good debugger for 16-bit mode by Microsoft.
D86 – good debugger for 16-bit real mode developed by Eric Isaacson – honored veteran of development for Intel processor in Assembler.
Bocsh – program-emulator of virtual machine that includes debugger of machine commands.
“Assembly Language for Intel-Based Computers” by Kip R. Irvine is the great book that gives good knowledge of inner structure of the computer and development in Assembler. You ca also find information about installation, configuration and work with the MASM 6.15 compiler.
This link will guide you to the BIOS interrupt list:.
Of course it is just a small piece comparing with the huge theme of low-level programming, but if you get interested of this article – it’s great.
See more case studies and research results at Apriorit. | http://www.codeproject.com/Articles/36907/How-to-develop-your-own-Boot-Loader?fid=1541607&df=90&mpp=25&sort=Position&spc=Relaxed&tid=4206253 | CC-MAIN-2014-23 | refinedweb | 3,425 | 64.81 |
MenuList API
API documentation for the React MenuList component. Learn about the available props and the CSS API.
Import
You can learn about the difference by reading this guide on minimizing bundle size.
import MenuList from '@mui/material/MenuList'; // or import { MenuList } from '@mui/material';
A permanently displayed menu following.
It's exposed to help customization of the
Menu component if you
use it separately you need to move focus into the component manually. Once
the focus is placed inside the component it is fully keyboard accessible.
Props
Props of the List component are also available.
The
refis forwarded to the root element. | https://mui.com/material-ui/api/menu-list/ | CC-MAIN-2022-27 | refinedweb | 103 | 50.43 |
Agenda
See also: IRC log
<trackbot> Date: 24 November 2011
pgroth: I can scribe
<pgroth> thanks stain!
<pgroth> Scribe: stain
will you do the magic things for bumping to the next agendum
<pgroth> i actually don't know how to do it
ok, I'll do it
<pgroth> I'll do the topics
that's what I meant :)
can we add an agenda item to ask when we should do the xmas break?
<pgroth> ok
<pgroth> yes
<dgarijo> well it looks like many people are on holiday today :)
<pgroth>
short meeting today
<pgroth> PROPOSED to accept the minutes of the Nov. 17 telecon
<dgarijo> +1
+1
<khalidbelhajjame> +1
<jcheney> +1
<GK> +1
<pgroth> ACCEPTED Minutes of Nov 17 telecon
<pgroth>
ACTION-43 - Pgroth organising now - just waiting for actual confirmation before sending out email - hopefully by end of tomorrow
ACTION-44 on Graham - we can come back to this when we talk about PAQ
<GK> Oops, that fell of my Radar
Stian asked about what we do over Christmas break
Luc: Propose to have last call just before Christmas, Thurs 22 - not call 29th - resume on 5th of Jan
<GK> (I'll be on holiday on 22 Dec)
(me too)
pgroth: sounds reasonable - but if too many o vacation 22nd we'll cancel
<dgarijo> I'll be on holidays, but I think I can make it
<scribe> ACTION: Pgroth to Send email about holiday break [recorded in]
<trackbot> Created ACTION-45 - Send email about holiday break [on Paul Groth - due 2011-12-01].
(I can probably make it, I will be in EDT for once)
dgarijo: discussed Luc's issues
on Monday, wrapping up
... updated document - almost ready for release
<dgarijo>
I'll timestamp it
pgroth: issues with (?) section - did you plan to address that?
dgarijo: not aware about concerns over constraints. Planning to put it in an annex - but to put it in a different document
<pgroth> zednik
<pgroth> ?
<satya> @Luc: Are we discussing the PROV-O?
Luc: dgarijo don't seem to be aware of comments on section 4 and 5, we said that they should not be part of the FPWD - instead they should be included in the (?) document
<khalidbelhajjame> Luc, that wasn't discussed in the last telecon
Luc: what is happening with section 4, 5
satya: had a discussion on
section 4. In email to Luc and Paul, we think that
extensibility of PROV-O is important to show - but we
understand they are really long
... we are suggesting similar javascript buttons to hide/show RDF/XML
<dgarijo> when did discussion happened? I was not aware :(. Sorry.
Monday
satya: also reviewing content of
section 4 - but believe some content should be there in
PROV-O
... on section 5.3 - they have moved to appendix - should improve readability
... can revisit these after issues in PROV-DM are propagated to PROV-O
(Annex: )
Luc: believe sec 4 is not by the charter - we should be domain independent
<khalidbelhajjame> Can then Section 4 be released as a note?
Luc: Section 4 explains how one
can extend ontology for specific needs - how can this be
normative? There are many different ways to extend it. Not by
the charter - not what applications can do to represent
provenance internally
... Focus on provenance exchange - not reached conclusion on how to represent provenance internally
... now section 5 -> appendix - most issues that are closed are removed or no longer relevant as PROV-DM has changed completely in tis point of view
... It does not show WG in a good light with raised issues flagged in document, when they have been closed
... what is the message of all those issues?
... For purpose of simplification of FPWD I would recommend to remove the whole section from the document
Satya: The issues raised in
section 5 removed from PROV-DM happened after I raised - or
wrongly stated.
... when we raise issues, and changes in PROV-DM - but we know propagating those changes in PROV-O will take time
... with section 4 - as GK mentioned in chat, 2 issues. Sec 4 is not normative, but we can make it even more explicitly clear. But we think it is important to show these examples to illustrate
<dgarijo> what is the problem of releasing section 4 in a separate document? I don't see the issue there.
satya: for instance if you did crime file example - how would you do it with existing concepts and wit extended concepts. And same for workflow. But we are not stating it is normative
<jcheney> I think we should say explicitly that it is non-normative, or put it into a non-normative document
GK: Agree with satya, don't think
it violates charter to discuss extension mechanism. In fact
charter invisions an extension mechanism.
... so it *is* supported by charter
<Zakim> GK, you wanted to say that I don't think explaining extension mechanisms violate the charter constraint of app independence
Could I propose to just make it clearer that it is non-normative
Luc: wit Workflow example, there were a number of.. domain-specific concepts
(but it's an example of a domain-specific approach?)
<dgarijo> @Luc: wf:seenAtPort, wf:sawValue, etc.
Luc: could not see the corresponding PROV-O concepts. But that was problematic for interoperability exchange needs. Even if we make it non-normative there would be problems.
stain: is issue that the example customizes PROV-O to the point of customizing away from PROV-O so that you can only see the PROV-O statements using OWL reasoning?
Luc: yes, that's what I meant
satya: using standard mechanism
should make it possible for semantic web applications - could
you point out exactly what are the issues so we can address
them?
... in particular if it prevents interoperability
Luc: (?) belongs to scientific workflow namespace
pgroth: I think we need to
separate questions
... q1 is if showing example of expansion shows interoperability..
... q2 is where this belongs
<GK> @paul - good intervention!
pgroth: in charter, extensibility
is often done through best practices
... now where sould this extensibility description/example go? that's main question.
... Right now this is a very long piece of detailed description on how to extend, and should go in a best practice note
... and confuses the issue of PROv-O just because it is large/long
... technical issues can then be discussed after FPWD
<Luc> +1 to Paul's comment
+1 to make a Best Practice document
Luc: not saying to bin examples, just to see them in a Best Practic document
<Luc> what about releasing a fpwd of teh best practice containing thes examples?
<GK> @satya - I still have sympathy for mentioning extension mechanism in prov-o, but maybe more briefly, and use best practice to provide the illustrative material?
stain: do we make a Best Practice document for the FPWD or just keep these on the shelf (remove from PROV-O) document for the first FPWD?
<dgarijo> +1 to Lucs comment: The examples are already done, right?
<GK> ... the extension mechanism used here is RDF specific, and prov-o is (in part) telling us how to use RDF to carry DM
satya: then section 4 should show
that PROV-O can be specialised
... Stian's wf example is a good example of modelling provenance information - but we can move it to a Best Practice document and leave a small example in section 4
... then it should not distract from the main point of PROV-O document
<khalidbelhajjame> +q
khalidbelhajjame: there are other examples on how to specify relationships specified in PROV-DM
<satya> @GK +1
khalidbelhajjame: don't like this medium solution with smaller examples
<dgarijo> +1 to Khalid's comment. Why not just add a reference to the best practice?
khalidbelhajjame: if this is not a good place, then they should all be removed and have an extension section only
GK: difficult now as we don't have such a Best Practice document - would be easier to talk about and refactor it once we have that.
<Zakim> GK, you wanted to tentatively suggest that we look to refactoring the text when we have a best practices document on the table. Meanwhile, just signal the current as
GK: suggestion is to recognize that it would happen - but for time being don't do it - just signal non-normative
+1
pgroth: issue is that it is a lot
of material
... as a first public workflow draft it makes a particular impression
... different people have different impressions of FPWDs
... good start for a Best Practice document - .. but..
GK: if worried about first impression, could it be sufficient with a big flag to say explicitly that this material will go to a best-practice document?
<khalidbelhajjame> +q
pgroth: would prefer just to move it out for now
khalidbelhajjame: People don't always read the whole document to know they can skip it. They look at TOC and just jump down
<Luc> what's the issue with creating today a first draft of the best practice document?
khalidbelhajjame: and so tey might not see it is non-normative
<GK> (So if readers don't go there, have they been given an adverse fiurst impression?)
Luc: OK, can do that :)
just copy and delete
<dgarijo> @stian:+1
<Luc> @stain, yes, plus a small intro
pgroth: two options a) Label Section 4 wit a big notice b) Just copy whole of section 4 and make it first draft of best practice document - and actually link to it
<pgroth> option a
+1
<jcheney> +1
<satya> +1
option a) Keep 4 as it is - label with NON-NORMATIVE-and-will-go-to-best-practice
option B) Create new Best PRactice document - just section 4 moved there
<GK> (a) +0.5, (b) +0.5
<dgarijo> +1 to b.
+1 to b
<khalidbelhajjame> @GK :-)
<satya> +1 to b
<smiles> +1 to b
<dcorsar> +1 to b
<khalidbelhajjame> +1 to b
I can take the action
<jcheney> Happy with either.
<Luc> proposal: release both documents at the same time as fpwd
<scribe> ACTION: Stian to Move section 4 of PROV-O to new best-practice document [recorded in]
<trackbot> Created ACTION-46 - Move section 4 of PROV-O to new best-practice document [on Stian Soiland-Reyes - due 2011-12-01].
satya: so think we should keep a paragraph about extension and linking to best practice document
pgroth: so keeping first paragraph (before 4.1) on
satya: yes, and with link to examples in best practice
Luc: sounds reasonable
<khalidbelhajjame> :-)
RESOLVED ..whatever we argued about :)
<pgroth> Resolved: keep roughly first paragraph of section 4, move rest of section 4 to best practice document
<GK> I heard: examples will be removed, but v brief descrioption of extension mechanism will remain
right
but that is the same
pgroth: Annex A Provenancespecific constraints to be removed - as it makes us look bad
<GK> @Stian yes --- I was typing that before Paul's summary got in.
;)
satya: what Luc/Pgroth wants is that those issues sould not be seen. Some of them have not gone away! But should not be seen in the document?
I think it should be in ere if PROV-DM and PROV-O is in kind of conflict
<khalidbelhajjame> We need another button: Show Issues only to WG members :-)
<satya> @Khalid :)
pgroth: Keeping track of them..
PROV-DM changes that have not been reflected in PROV-O
... but we commented it out from the FPWD
satya: ok, we can comment it out [from the FPWD], but keep it in the document
pgroth: does that resolve it?
Luc: Believe so
(issues are public anyway, remember!)
pgroth: then we should be ready to do an FPWD, right?
Luc: propose to vote on releasing both PROV-O and Primer FPWD [ at the same time ]
<dgarijo> +1 to that
sorry
the Best PRactice document
(which does not yet exist! ;) )
<khalidbelhajjame> Is there anything else that should be added to Best Practice document other than Section 4 of prov-o document?
GK hang, on, I'll be quick in mercurial!
it will only be section 4 for now
pgroth: sould vote on FPWD on PROV-O with intention to vote on Best Practice FPWD next week
<jcheney> I agree with not voting on FPWD for best practices now.
can't we link to Best Practice doc in Mercurial ?
Luc: (?) that best practice doc will contain the examples in 4.1 and 4.2 of PROV-O
<pgroth> Proposed: release PROV-O as first public working draft with above mentioned changes
<GK> +1
<smiles> +1
<khalidbelhajjame> +1
+1 (witout the ] thing)
<dgarijo> +1
<pgroth> +1
<jcheney> +1
<dcorsar> +1
<satya> +1
(we're all waiting for Luc!)
<pgroth> Accepted: release PROV-O as first public working draft with above mentioned changes
Luc: supportive - but don't vote as a chair
pgroth: but I've been voting as a chair !!
<satya> @Paul :)
congrats everyone!
<khalidbelhajjame> Hurray
pgroth: editors draft of best practice document which should be good to come along
<Luc> congrats to the prov-o team!
<dgarijo> :)
GK: moved issues to boxes -
cleaned up - not much else
... happy to do remaining things - but if I had problems.. could pgroth pick up if GK drops the ball?
pgroth: happy to do the test
GK: might not be available in the near future
<Zakim> GK, you wanted to say Can we really vote on a documen t that doesn't exist yet?
pgroth: getting close to FPWD
<Luc>
<pgroth> lots of echo
Luc: we voted on a number of
proposals, those changes are being implemented
... some questions on derivations
... being edited as we speak
... some proposal from Yolanda on agents.. and edits are in progress as well
... still very much editors draft, bouncing Luc <> Paolo
... you can have a look at it, but not yet ready for internal review
... don't file issues on the actual current document yet
... hoping to have feedback soon
... and mke it availabile to WG for internal evaluation
... hope is to have second working draft released as soon as possible
(You mean before christmas?)
<Luc> @stain, yes, hopefully, 2 weeks time
Paolo: Question on please do not
.. PROV-O alignment
... most changes would be simplifying
... and not throw everyting up in the air again
@Luc btw - when did we resolve vote on Process Execution -> Account ? I remember voting -1 ..
Paolo: flurry of activity last weeks.. nice things with chain of responsibility
<dgarijo> @Stian: you mean Activity, right?
<Luc> @stain, what is this? PE -> account?
yes, sorry
Activity
so when do we get the internal review?
if second WD is in 2 weeks
<Luc> @stain, hopefully, next week
pgroth: possilibity about note on
doing PROV-JSON with some support. How would we proceed?
... Southampton have actually worked on this - a JSON serialisation of PROV-DM
... then discussion on how WG would like to proceed
... given time.. let us hear about it
DongHuynh: observing WG
development
... first time in meeting
... in Southampton capture provenance in many applications
... to have a common format
... ow to represent in JSON? Here's our document showing thihs.
... when implementing this we wanted to ensure interoperability. Not just our 3 applications, but also future applications
... so stay close to PROV-DM
... as it will likely widely adopted when it is a W3C recommendation.
... so also lightweight - like using JSON datatypes where possible - but witout loosing expressitivity like custom data types
... don't want to bother with complex configurations when not needed.
... introduced some [shortcuts?]
<Luc> design rationale
examples
<DongHuynh>
DongHuynh: says that that
Document you just saw was derived from a document int he
Mercurial repository
... with a few examples they are all from PROV-DM - the PROV-DM namespace is the default
<DongHuynh>
DongHuynh: second example
exands
... introduces a prefix for applicatoin specific information
(line 35 is not valid JSON btw)
DongHuynh: in first level,
prefix/entity/activity, etc.. PROV-DM level
... at next level is the entity
... at third level attribute value pairs
<Luc> @stain, yes, looks like a typo
<khalidbelhajjame> +q
DongHuynh: questions?
GK: (skipping the queue!)
... JSON-LD?
... Providing possibility to link fairly well with RDF, but difficult to tell at first ga
glance
DongHuynh: will look at JSON LD for hints/clues
khalidbelhajjame: in examples..
entity, agent..
... is there a mechanism for (?) actually is.. (?)
... JSON schema?
... to say how it can be serialised
DongHuynh: could not hear very well..
khalidbelhajjame: you specify how
to specify PROV-DM assertions using JSON
... if you have a JSON document.. is there a way to know that it is valid PROV-DM [PROV-JSON] ?
... like using existing JSON Schema approaching
... to say ow instances of PROV-DM looks like in JSON
DongHuynh: one rational is to
maintain interoperability
... so we want a two-way mapping from PROV-DM to PROV-JSON
... no tool for checking conformity
... working on this
<pgroth>
DongHuynh: have workin progress
wich can convert a PROV-DM record in PROV-ASN to PROV-JSON
structure
... next step is the reverse to check semantics
... aware of JSON Schema
... could be good to describe what is now in the HTML
... not convinced about popularity of JSON Schema
... is it really used
... more useful to have a document that describe mapping by example
<khalidbelhajjame> Thanks Dong
DongHuynh: main readers would be developers, and examples should help to kickstart process
pgroth: we are running out of
time now
... very interesting work
... would want to discuss this more on the mailing list on how we want to proceed
Luc: Is it possible to have a
sense here now?
... who would be interested in working on this spec?
+1
<jcheney> +0.5 (what exactly is the specification going to specify?)
<khalidbelhajjame> +1 (I am far from being an expert but would like to participate)
Luc: not *this* specification - but A PROV-JSON specification from the WG
<GK> It depends on timing, and principles. I'd want us to see DM very stable first.
@GK +1
@GK perhaps this is a spring project
<GK> Yes, maybe in spring.
<jcheney> @GK - I also think this is lower priority and can happen later - otherwise we will have too many moving parts to sync
I am fully loaded with PROV involvement at the moment
<jcheney> same with PROV-XML
<GK> @jcheney +1
@jcheney +1
pgroth: ok, as chairs we will look at scheduling this
thanks everybody!
<jcheney> bye
<dgarijo> happy thanksgiving
<pgroth> trackbot, end telcon
This is scribe.perl Revision: 1.136 of Date: 2011/05/12 12:01:43 Check for newer version at Guessing input format: RRSAgent_Text_Format (score 1.00) Found Scribe: stain Inferring ScribeNick: stain Default Present: pgroth, Luc, stain, dgarijo, jcheney, khalidbelhajjame, GK, [IPcaller], Bjorn_Bringert, Satish_Sampath, Paolo Present: pgroth Luc stain dgarijo jcheney khalidbelhajjame GK [IPcaller] Bjorn_Bringert Satish_Sampath Paolo Regrets: Christian Runnegar Agenda: Found Date: 24 Nov 2011 Guessing minutes URL: People with action items: pgroth stian WARNING: Input appears to use implicit continuation lines. You may need the "-implicitContinuations" option.[End of scribe.perl diagnostic output] | http://www.w3.org/2011/11/24-prov-minutes.html | CC-MAIN-2017-17 | refinedweb | 3,148 | 62.68 |
*term.txt*	Nvim


		      NVIM REFERENCE MANUAL


Terminal UI						*tui*

Nvim (except in |--headless| mode) uses information about the terminal you
are using to present a built-in UI.  If that information is not correct, the
screen may be messed up or keys may not be recognized.

				      Type <M-]> to see the table of contents.

==============================================================================
Startup						*startup-terminal*

Nvim (except in |--headless| mode) guesses a terminal type when it starts.
|$TERM| is the primary hint that determines the terminal type.

					    *terminfo* *E557* *E558* *E559*
The terminfo database is used if available.  The Unibilium library (used by
Nvim to read terminfo) allows you to override the system terminfo with one in
the $HOME/.terminfo/ directory, in part or in whole.

Building your own terminfo is usually as simple as running this as a
non-superuser: >
    curl -LO https://invisible-island.net/datafiles/current/terminfo.src.gz
    gunzip terminfo.src.gz
    tic terminfo.src
<
							*$TERM*
The $TERM environment variable must match the terminal you are using!
Otherwise Nvim cannot know what sequences your terminal expects, and weird or
sub-optimal behavior will result (scrolling quirks, wrong colors, etc.).

$TERM is also important because it is mirrored by SSH to the remote session,
unlike other common client-end environment variables ($COLORTERM,
$XTERM_VERSION, $VTE_VERSION, $KONSOLE_PROFILE_NAME, $TERM_PROGRAM, ...).

   For this terminal		    Set $TERM to		|builtin-terms|
   -------------------------------------------------------------------------
   iTerm (original)		    iterm, iTerm.app		    N
   iTerm2 (new capabilities)	    iterm2, iTerm2.app		    Y
   anything libvte-based	    vte, vte-256color		    Y
     (e.g. GNOME Terminal)	    (aliases: gnome, gnome-256color)
   tmux				    tmux, tmux-256color		    Y
   screen			    screen, screen-256color	    Y
   PuTTY			    putty, putty-256color	    Y
   Terminal.app			    nsterm			    N
   Linux virtual terminal	    linux, linux-256color	    Y

					*builtin-terms* *builtin_terms*
If a |terminfo| database is not available, or no entry for the terminal type
is found in that database, Nvim will use a compiled-in mini-database of
terminfo entries for "xterm", "putty", "screen", "tmux", "rxvt", "iterm",
"interix", "linux", "st", "vte", "gnome", and "ansi".  The lookup matches the
initial portion of the terminal type, so (for example) "putty-256color" and
"putty" will both be mapped to the built-in "putty" entry.

The built-in terminfo entries describe the terminal as 256-colour capable if
possible.  See |tui-colors|.  If no built-in terminfo record matches the
terminal type, the built-in "ansi" terminfo record is used as a final
fallback.

The built-in mini-database is not combined with an external terminfo
database, nor can it be used in preference to one.  You can thus entirely
override any omissions or out-of-date information in the built-in terminfo
database by supplying an external one with entries for the terminal type.


Settings depending on terminal			*term-dependent-settings*

If you want to set terminal-dependent options or mappings, you can do this in
your init.vim.  Example: >

    if $TERM =~ '^\(rxvt\|screen\|interix\|putty\)\(-.*\)\?$'
      set notermguicolors
    elseif $TERM =~ '^\(tmux\|iterm\|vte\|gnome\)\(-.*\)\?$'
      set termguicolors
    elseif $TERM =~ '^\(xterm\)\(-.*\)\?$'
      if $XTERM_VERSION != ''
        set termguicolors
      elseif $KONSOLE_PROFILE_NAME != ''
        set termguicolors
      elseif $VTE_VERSION != ''
        set termguicolors
      else
        set notermguicolors
      endif
    elseif $TERM =~ ...
      ... and so forth ...
    endif
<
				      *scroll-region* *xterm-scroll-region*
Where possible, Nvim will use the terminal's ability to set a scroll region
in order to redraw faster when a window is scrolled.
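As an aside (a sketch of the underlying escape sequence, not Nvim source
code): the top/bottom scroll margins described by terminfo (the "csr"
capability) typically expand to the DECSTBM control sequence, which you can
emit by hand from a shell:

```shell
# Sketch: emit DECSTBM (CSI <top> ; <bottom> r), the escape sequence the
# terminfo "csr" (change_scroll_region) capability typically expands to.
# Row numbers in the raw sequence are 1-based; ESC [ r alone resets the
# scroll region to the full screen.
set_scroll_region() { printf '\033[%d;%dr' "$1" "$2"; }
reset_scroll_region() { printf '\033[r'; }

# Example: restrict scrolling to rows 2..20 (e.g. to protect a status line)
set_scroll_region 2 20
reset_scroll_region
```

Scrolling output while such a region is active moves only the lines inside
the margins, which is what lets a program redraw a scrolled window cheaply.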
If the terminal's terminfo description describes an ability to set top and
bottom scroll margins, that is used.  This will not speed up scrolling in a
window that is not the full width of the terminal.

Xterm has an extra ability, not described by terminfo, to set left and right
scroll margins as well.  If Nvim detects that the terminal is Xterm, it will
make use of this ability to speed up scrolling that is not the full width of
the terminal.  This ability is only present in genuine Xterm, not in the many
terminal emulators that incorrectly describe themselves as xterm.  Nvim's
detection of genuine Xterm will not work over an SSH connection, because the
environment variable, set by genuine Xterm, that it looks for is not
automatically replicated over an SSH login session.

							*tui-colors*
Nvim uses 256 colours by default, ignoring |terminfo| for most terminal
types, including "linux" (whose virtual terminals have had 256-colour support
since 4.8) and anything claiming to be "xterm".  It also does so when
$COLORTERM or $TERM contain the string "256".  Nvim similarly assumes that
any terminal emulator that sets $COLORTERM to any value is capable of at
least 16-colour operation.

					    *true-color* *xterm-true-color*
Nvim emits true (24-bit) colours in the terminal, if 'termguicolors' is set.
It uses the "setrgbf" and "setrgbb" |terminfo| extensions (proposed by
Rüdiger Sonderfeld in 2013).  If your terminfo definition is missing them,
then Nvim will decide whether to add them to your terminfo definition, using
the ISO 8613-6:1994/ITU T.416:1993 control sequences for setting RGB colours
(but modified to use semicolons instead of colons unless the terminal is
known to follow the standard).

Another convention, pioneered in 2016 by tmux, is the "Tc" terminfo
extension.  If terminfo has this flag, Nvim will add constructed "setrgbf"
and "setrgbb" capabilities as if they had been in the terminfo definition.
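For illustration, the semicolon-delimited form of those RGB control
sequences can be emitted by hand like this (a sketch of the standard
sequences, not Nvim source; a strictly standard-following terminal expects
colons after the "2" instead of semicolons):

```shell
# Sketch: 24-bit colour SGR sequences in the semicolon-delimited form
# described above: CSI 38;2;R;G;B m sets the foreground colour and
# CSI 48;2;R;G;B m the background.  The colon-delimited ISO 8613-6 variant
# would be CSI 38:2::R:G:B m.
fg_rgb() { printf '\033[38;2;%d;%d;%dm' "$1" "$2" "$3"; }
bg_rgb() { printf '\033[48;2;%d;%d;%dm' "$1" "$2" "$3"; }
sgr_reset() { printf '\033[0m'; }

# On a true-colour terminal this prints orange text, then resets attributes.
fg_rgb 255 128 0; printf 'true-colour text'; sgr_reset; printf '\n'
```

The "setrgbf"/"setrgbb" terminfo extensions are essentially parameterized
templates for these two sequences.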
If terminfo does not (yet) have this flag, Nvim will fall back to $TERM and
other environment variables.  It will add constructed "setrgbf" and "setrgbb"
capabilities in the case of the "rxvt", "linux", "st", "tmux", and "iterm"
terminal types, or when Konsole, genuine Xterm, a libvte terminal emulator
version 0.36 or later, or a terminal emulator that sets the COLORTERM
environment variable to "truecolor" is detected.

							*xterm-resize*
Nvim can resize the terminal display on some terminals that implement an
extension pioneered by dtterm.  |terminfo| does not have a flag for this
extension.  So Nvim simply assumes that (all) "dtterm", "xterm", "teraterm",
"rxvt" terminal types, and Konsole, are capable of this.

						*tui-cursor-shape*
Nvim will adjust the shape of the cursor from a block to a line when in
insert mode (or as specified by the 'guicursor' option), on terminals that
support it.  It uses the same |terminfo| extensions that were pioneered by
tmux for this: "Ss" and "Se".  If your terminfo definition is missing them,
then Nvim will decide whether to add them to your terminfo definition, by
looking at $TERM and other environment variables.  For the "rxvt", "putty",
"linux", "screen", "teraterm", and "iterm" terminal types, or when Konsole, a
libvte-based terminal emulator, or genuine Xterm are detected, it will add
constructed "Ss" and "Se" capabilities.

Note: Sometimes it will appear that Nvim when run within tmux is not changing
the cursor, but in fact it is tmux receiving instructions from Nvim to change
the cursor and not knowing what to do in turn.  tmux has to translate what it
receives from Nvim into whatever control sequence is appropriate for the
terminal that it is outputting to.  It shares a common mechanism with Nvim,
of using the "Ss" and "Se" capabilities from terminfo (for the output
terminal) if they are present.  Unlike Nvim, if they are not present in
terminfo you must add them by setting "terminal-overrides" in ~/.tmux.conf .
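For reference, "Ss" and "Se" conventionally expand to the DECSCUSR sequence
(CSI <n> SP q), as the "terminal-overrides" value shown below also
illustrates.  A hand-rolled sketch (the parameter meanings are the common
convention; a given terminal may support only a subset):

```shell
# Sketch: DECSCUSR (CSI <n> SP q), the sequence "Ss" conventionally maps to.
# Common parameter values: 1/2 blinking/steady block, 3/4 blinking/steady
# underline, 5/6 blinking/steady bar.  Omitting <n> (CSI SP q) resets to the
# terminal default, which is what "Se" typically does.
cursor_shape() { printf '\033[%d q' "$1"; }
cursor_reset() { printf '\033[ q'; }

cursor_shape 6   # thin bar, the shape typically used for insert mode
cursor_reset
```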
See the tmux(1) manual page for the details of how and what to do in the tmux configuration file. It will look something like: set -ga terminal-overrides '*:Ss=\E[%p1%d q:Se=\E[ q' or (alas!) for Konsole specifically, something more complex like: set -ga terminal-overrides 'xterm*:\E]50;CursorShape=%?%p1%{3}%<%t%{0}%e%{1}%;%d\00. ============================================================================== Window size *window-size* [This is about the size of the whole window Vim is using, not a window that is created with the ":split" command.] On Unix systems, three methods are tried to get the window size: - an ioctl call (TIOCGSIZE or TIOCGWINSZ, depends on your system) - the environment variables "LINES" and "COLUMNS" - from the |terminfo| entries "lines" and "columns" If everything fails a default size of 24 lines and 80 columns is assumed. If a window-resize signal is received the size will be set again. If the window size is wrong you can use the 'lines' and 'columns' options to set the correct values. See |:mode|. ==============================================================================' and 'ruler' options. The command characters and cursor positions will not be shown in the status line (which involves a lot of cursor motions and attribute changes for every keypress or movement). you are using a color terminal that is slow when displaying lines beyond the end of a buffer, this is because Nvim is drawing the whitespace twice, in two sets of colours and attributes. To prevent this, use this command: hi NonText cterm=NONE ctermfg=NONE This draws the spaces with the default colours and attributes, which allows the second pass of drawing to be optimized away. Note: Although in theory the colours of whitespace are immaterial, in practice they change the colours of cursors and selections that cross them. This may have a visible, but minor, effect on some UIs. 
============================================================================== Using the mouse *mouse-using* This section is about using the mouse on a terminal or a terminal window. How to use the mouse in a GUI window is explained in |gui-mouse|. For scrolling with a mouse wheel see |scroll-mouse-wheel|. These characters in the 'mouse' option tell in which situations the mouse will be used by Vim: n Normal mode v Visual mode i Insert mode c Command-line mode h all previous modes when in a help file a all previous modes r for |hit-enter| prompt. *xterm-copy-paste* NOTE: In some (older) xterms, it's not possible to move the cursor past column 95 or 223. This is an xterm problem, not Vim's. Get a newer xterm |color-xterm|.") *bracketed-paste-mode* Bracketed paste mode allows terminal applications to distinguish between typed text and pasted text. Thus you can paste text without Nvim trying to format or indent the text. See also Nvim enables bracketed paste by default. If it does not work in your terminal, try the 'paste' option instead. , Windows) Windows and for an xterm.. In Insert mode, when a selection is started, Vim goes into Normal mode temporarily. When Visual or Select mode ends, it returns to Insert mode. This is like using CTRL-O in Insert mode. Select mode is used when the 'selectmode' option contains "mouse". and X11 | https://neovim.io/doc/user/term.html | CC-MAIN-2017-39 | refinedweb | 1,701 | 53.71 |
A free and open dialogue is critical to the success of any civilisation. History shows that whenever free speech is oppressed, tyranny ensues.
A free and open dialogue is critical to the success of any civilisation. History shows that whenever free speech is oppressed, tyranny ensues.
I have just received another 30 day ban from FaceBook for sharing an image of the Hindu symbol for peace and unity, because either somebody flagged it or perhaps their algorithms mistakenly identified it as a Swastika, which is illegal in some countries.
Either way, the oppression of expression is the first step toward tyranny. We should all be wary of any attempts to suppress opinion, no matter how inane or offensive they may seem to some. The marketplace of ideas relies on this; good ideas will be encouraged and built upon, while bad ideas will eventually die out through a cultural form of natural selection.
Only those with invested interests seek to silence others who disagree with them. The truth will always emerge if open dialogue is allowed.. slow is that a bucket is only located in one geographical location. The location is selected when you create the bucket. For example, if your bucket is created on Amazon servers in California, but your users are in India, then images will still be served from California. This geographical distance causes slow image loading on your website.
Further, it is not uncommon to see very heavy images on S3, with large dimensions and high byte size. One can only speculate on the reasons, but it is probably related to the publication workflow and the convenience of S3 as a storage space.
Let’s see how we can accelerate image delivery while keeping S3’s workflow and convenience level.Speeding Up Image Delivery on S3
To catch the two flies of global distribution and image optimization at once, let's see how an image CDN like ImageEngine can leverage S3 as an origin.
Step1: Create the S3 bucket
Once logged in to the Amazon console it is easy to create the bucket and store content in it. By default buckets are private. In order for the Image CDN to reach the origin image, we must create a bucket policy to make the contents of the bucket available on the web.
Once you’ve implemented the policy that fits your needs, the bucket should be available in your browser using this scheme for a hostname:
https://<bucket>.s3.amazonaws.com/<file>
Alternatively:-<location>.amazonaws.com/<bucket>/<file>
For example:
Or
Step 2: Sign up for an ImageEngine account.
ImageEngine offers free trial accounts with no strings attached. The signup process will ask for a nickname for the account, a domain name intended for serving images, and an origin location.
Give the account a name you like and provide a domain name you think you’ll be using to serve images on your web site. This can, of course, be changed later. In our case, we choose “s3img.mydomain.com”.
The origin is the S3 bucket we created in step 1. There are two ways to configure the S3 bucket. You can use the S3 protocol or HTTP.
If you want to use the S3 protocol, check the S3 radio button and type the name of the bucket.
If you want to use HTTP, then select the HTTP radio button and type in the fully qualified hostname. Note that if you want HTTPS, you’ll need to use the notation with the bucket name in the path: s3-.amazonaws.com. Submit the hostname for now, and you can edit the origin later and add the bucket name.
On the question “Allow origin prefixing” it is safe to answer “No” for now. Hit submit and the account is created.
Step 3: Configure your DNS
This step is not strictly necessary, but if you want to serve images from the domain name provided in step1 you’ll need to add a CNAME record in the DNS. The DNS info needed is presented to you when submitting the form in step 2.
Record Name: s3img.mydomain.com Record Type: CNAME Record Value: s3img.mydomain.com.imgeng.in
Note that the Record Value can also be used to access images on S3:
Step 4: tune settings
In less than 5 minutes, the S3 bucket is configured, the ImageEngine account is created and ImageEngine is serving optimized images from S3. You can also log in to the ImageEngine dashboard to manage multiple domain names and origins. Also in the dashboard you can tune the default behaviour of ImageEngine. For example tune the image quality, size, formats and so on.ImageEngine Makes Amazon S3 Faster
By these simple steps, images stored on Amazon S3 are now globally distributed and optimized by ImageEngine. If you want more information about the optimization process and the resulting next-gen file formats, then you can read about them here.
To analyse the impact of ImageEngine we took a representative sample of the sites in the HTTParchive data set which store images on S3 and ran them through the ImageEngine demo tool, comparing original and optimized web site performance.
Looking at the optimization aspect isolated, the savings in bytes are dramatic! The average byte size of an original image stored on Amazon S3 is 2.9MB, while the average optimized by ImageEngine is only 0.6MB. 78% less data! The dramatic savings suggest that a typical workflow involves storing images on Amazon S3 does not include any particular steps to reduce the file size before it is stored on S3.
The primary benefit is faster page loading for better user experience. Fewer data delivered also saves the end-users data plan as well as battery life because lighter images require less processing power to decode and display. Additionally, images will be displayed on the screen sooner freeing processing power to other tasks.
Fewer data to transmit over the wire also has a direct impact on the time it takes to display the page in the user’s browser. We’ve looked at the visual complete time; the time it takes for the page to be completely rendered visually in the browser.
On average, the sample using ImageEngine is visually complete 3.4 seconds earlier than then sites with unoptimized images. 3.4 seconds is a huge improvement. Knowing that 53% of users leave the site if it takes more than 3 seconds to load,3.4 seconds improvement just by enabling ImageEngine is a giant leap towards the 3-second goal. Additionally, now that images are served from a CDN, not from a static S3 location, we get the advantage of global distribution and reduced latency.
This is also why addressing images to improve performance, conversion rates and ultimately revenue is considered a low hanging fruit. Relatively little effort, but huge impact. Give ImageEngine a try!
Not sure if SEO questions are relevant for StackOverflow. Feel free to correct me if not.
Not sure if SEO questions are relevant for StackOverflow. Feel free to correct me if not.
Have a "quiz" website that shows many duplicate images. For example - when users are going through a "goals" section of the quiz the image stays the same until the section changes.
<p class= I'm very motivated.</p>
<img id='image1' src='' alt= "Woman shooting bow">
<p class= I want to succeed.</p>
<img id='image2' src='' alt= "Woman shooting bow">
The images are hidden and shown depending on the question.
My question is:
What is the best way to handle this from an SEO standpoint? Just make different alt tags for the same image?
Should I create a variable and somehow use it for the images>
I have a component in React which displays (or doesn't at the moment) an image within an src tag.
I have a component in React which displays (or doesn't at the moment) an image within an src tag.
The file path & file name of the image is passed via props after being selected by the user, so I can't do an import at the top of the file. Apparently, I should be able to do something like src={require(file)} but this causes webpack to throw a hissy fit and give the following error: Error: cannot find module "."
As an e.g., a typical filepath/filename I pass to the component is: '../../../images/black.jpg'
This is a stripped-down version of the code in question:
import React, { Component } from "react";
class DisplayMedia extends Component {
render() {
return (
<div className="imgPreview">
<img src={props.file}
</div>
);
}
}
export default DisplayMedia; | https://morioh.com/p/3e7a1728296e | CC-MAIN-2020-05 | refinedweb | 1,437 | 63.7 |
Stellar Phoenix BKF Recovery 2.0 With Crack 64 Bit
Actual a Hidden Archive Z keygen crack trainer.. Stellar Phoenix BKF Recovery v2.0 with crack 64 bit SysTools BKF.
RUNNAR OMEGA QUARTZ ACTIVATION DIESEL OFFICE 3.8 crack. Stellar Phoenix BKF Recovery 2.0 v.4.3 Crack. Stellar Phoenix BKF Recovery.
Stellar Phoenix Windows Data Recovery – Professional v.5.0 serial. Accent ZIP Password Recovery (64-bit) v.4.2 keygen. Outlook Recovery Toolbox v.2.0.11 serial keygen. Stellar Phoenix Windows Data Recovery Professional v9.0.0.1 keygen.One person has died and three others have suffered life-threatening injuries after a collision between a bus and a stationary lorry in London, police said.
The bus collided with the lorry in Tower Bridge Road shortly before 11.30am on Monday.
Two people, an adult man and woman, and an eight-year-old girl who was a pedestrian, were taken to St Thomas’ Hospital.
A 40-year-old man was taken to King’s College Hospital after he was hit by a second lorry on Tower Bridge Road.
The road was closed while firefighters cut the lorries apart, Scotland Yard said.
A Metropolitan Police spokesman said: “A collision occurred on Tower Bridge Road where a bus collided with a lorry.
“A man has been taken to hospital in a critical condition. Three others were taken to hospital by London Ambulance Service and have been discharged.”Live4
Live4
Overall guest ratings
Nearby
About
Nestled in the mountains between Florence and Siena, this hotel is just 2.2 mi (3.5 km) from the center of Siena and the Opera di San Domenico. In addition, attractions like the Palazzo Pubblico and the Piazza del Campo are within 1 mi (2 km). With city views and a rooftop terrace, apartments are spacious and offer parquet flooring and free WiFi. Each apartment features a private bathroom with a shower and bathrobes. Guests at this Siena hotel can relax in the sunshine on the rooftop terrace. An on-site restaurant serves regional food and has outdoor seating. Siena Airport is 3 mi (5 km) away and a shuttle-
our most recent programs & applications,. and Stellar Phoenix BKF Recovery 2.0 With Crack 64 Bit. once again with a Microsoft Office 2013 product key on the market,. our clients a simple way to perform a logical recovery of all of the. The Compaq Hard Drive Diagnostic Tool.//===———————————————————————-===//
//
// The LLVM Compiler Infrastructure
//
// This file is dual licensed under the MIT and the University of Illinois Open
// Source Licenses. See LICENSE.TXT for details.
//
//===———————————————————————-===//
//
// template
// class lognormal_distribution
// lognormal_distribution(lognormal_distribution&&);
#include
#include
int main()
{
{
typedef std::lognormal_distribution D;
D d1(1.5, 1);
D d2;
assert(d1!= d2);
d2 = std::move(d1);
assert(d1 == d2);
}
{
typedef std::lognormal_distribution D;
D d1(1.5, 1);
D d2(1.5, 2);
D d3;
assert(d1!= d2);
d3 = std::move(d1);
assert(d1 == d3);
assert(d2!= d3);
}
}
using System.Runtime.InteropServices;
namespace Sledge.BspEditor.UserInterface.Dialogs
{
public static class ErrorControls
6d1f23a050 | https://ibipti.com/stellar-phoenix-bkf-recovery-2-0-with-upd-crack-64-bit/ | CC-MAIN-2022-40 | refinedweb | 503 | 61.53 |
Compiler Error CS0117
'type' does not contain a definition for 'identifier'
This error occurs when a reference is made to a member that does not exist for the data type.
Several common situations can generate this error:
Calling a method that does not exist.
Using the Item property followed by an indexer.
Calling a qualified method when a class name and its enclosing namespace name are the same.
Calling an interface written in a language that supports static members inside interfaces.
The following sample generates CS0117.
Example
In this example, the Item property is used with an indexer. In C#, you can use a property or an indexer to access a member, but not both. The following sample generates CS0117.
CS0017 also occurs if you use a library written in a language that allows static members in interfaces, and you try to access the static member from C#.
The following sample generates CS0117. | https://msdn.microsoft.com/en-us/library/c4aad8at(v=VS.80).aspx | CC-MAIN-2017-34 | refinedweb | 153 | 57.37 |
/* ** (c) COPYRIGHT MIT 1995. ** Please first read the full copyright statement in the file COPYRIGH. */
The stream part of the Library is a set of streams that can be used in many contexts throughout the Library. It also contains conversions between streams and other memory containers, for example Chunks.
#ifndef WWWSTREAM_H #define WWWSTREAM also buffers the result to find out the content length. If a maximum buffer limit is reached Content-Length is calculated for logs but it is not sent to the client -- rather the buffer is flushed right away.
#include "HTConLen.h" /* Content Length Counter */
This module contains a set of basic file writer streams that are used to dump data objects to disk at various places within the Library core. Most notably, we use these streams in the Format Manager in order to handle external presenters, for example post script viewers etc. These streams can of course also be used in other contexts by the application.
#include "HTFWrite.h" #include "HTFSave.
#include "HTGuess.h" /* Guess stream */
The Tee stream just writes everything you put into it into two oter streams. One use (the only use?!) is for taking a cached copey on disk while loading the main copy, without having to wait for the disk copy to be finished and reread it.
#include "HTTee.
#include "HTMerge.h"
If you do not like the stream model in libwww, then you can use this stream to convert a stream object into a Chunk object which is a dynamic character string buffer in memory.
#include "HTSChunk.h"
This version of the stream object is a hook for clients that want an unparsed
stream from libwww. The HTXParse_put_* and HTXParse_write routines copy the
content of the incoming buffer into a buffer that is realloced whenever
necessary. This buffer is handed over to the client in
HTXParse_free.
#include "HTXParse.h" /* External parse stream */ #include "HTEPtoCl.h" /* Client callbacks */
This is a filter stream suitable for taking text from a socket and passing it into a stream which expects text in the local C representation.
#include "HTNetTxt.h"
#ifdef __cplusplus } /* end extern C definitions */ #endif #endif | http://www.w3.org/Library/src/WWWStream.html | CC-MAIN-2015-18 | refinedweb | 353 | 62.48 |
#include <rtt/dev/AnalogInInterface.hpp>
Unit (MU) : Unit of what is actually read on the analog channel (e.g. Volt)
Definition at line 66 of file AnalogInInterface.hpp.
This enum can be used to configure the arefSet() function.
Definition at line 74 of file AnalogInInterface.hpp.
Create a nameserved AnalogInInterface.
When name is not "" and unique, it can be retrieved using the AnalogOutInterface::nameserver.
Definition at line 90 of file AnalogInInterface.hpp.
Set the analog reference of a particular channel.
We took (for now) the comedi API for this, where every aref (eg. Analog reference set to ground (aka AREF_GROUND) corresponds to an unsigned int.
Returns the binary highest value.
Definition at line 172 of file AnalogInInterface.hpp.
References rawRange().
Returns the binary lowest value.
Definition at line 166 of file AnalogInInterface.hpp.
Returns the binary range (e.g.
12bits AD -> 4096)
Definition at line 160 of file AnalogInInterface.hpp.
References rawRange().
Set the range of a particular channel.
We took (for now) the comedi API for this, where every range (eg. -5/+5 V) corresponds to an unsigned int. You should provide a mapping from that int to a particular range in your driver documentation
Returns the absolute maximal range (e.g.
12bits AD -> 4096).
Referenced by binaryHighest(), and binaryRange().
Read a raw value from channel chan.
Referenced by RTT::AnalogInput::rawValue().
Read the real value from channel chan.
Referenced by RTT::AnalogInput::value().
The NameServer for this interface.
Definition at line 178 of file AnalogInInterface.hpp. | http://people.mech.kuleuven.be/~orocos/pub/stable/documentation/rtt/v1.8.x/api/html/classRTT_1_1AnalogInInterface.html | crawl-003 | refinedweb | 248 | 54.69 |
How Haskell and I met¶
I started my programming journey (effectively) with Python and got quite fond of it before I learned Go and, after some reluctance, learned to see the upsides of both languages. Python and Go are my two strongest languages to this day, but some time after I got comfortable with Go I realized that I wasn't a huge fan of either language (though I still have a soft spot for Python and probably always will). I wanted the best of both worlds. I thought, "There has to be a better language out there, and it's worth my time to find it and learn it".
I tried out Perl, Ruby, Lisp, and probably one other I don't remember, but I didn't end up sticking with any of them. Ruby I lost interest in because I wasn't coming across any substantial difference from Python (and I didn't hear about any by web search either) so I figured it wouldn't be worth the investment. Ruby is dynamically typed, name errors at runtime, object-oriented... basically all the same flaws Python has. I knew that even if I had ended up preferring it to Python, it wouldn't be a big enough improvement to conclude my search.
I gave Perl the longest chance. I was referred to Perl by someone else and told that it was "way less verbose" than Python, not as readable but particularly good for scripts and batch editing - which I always used Python for at the time. I did get far enough into Perl to write an actually useful text processing script which I used to convert all the pose codes to a new paradigm in my VN project Return To The Portrait (I ended up rolling back the redesign after I realized it wouldn't work, but that wasn't Perl's fault, it was Renpy's). Still, I never liked Perl and couldn't understand what the hype was about. It has C-like syntax and just generally looked like an uglier version of Python with nameless function parameters, no nested arrays... you read that right. I also didn't find it to be any more concise.
Lisp I don't remember why I quit; I gave it at most a couple of hours. I do know I had some issues installing it and running something trivial. Probably I just tried it at a bad time in my life where I ended up not having the time to put into long-term skill investments for a week after that, and just never went back to it.
I came across Haskell. I knew very little of functional programming when I started. But when I read in Learn You A Haskell For Great Good that you can't change the value of a variable, I was too skeptical to not go on.
I had some bad experiences early on and thought about quitting, but someone told me to keep going and I did; eventually I got past the early blocks and started to think I actually really liked Haskell. But that wasn't the end of the road. My enthusiasm dropped over a long time as I watched the cost of learning all this stack higher and higher. It was months before I started to think I understood monads, and more months before I actually did understand them and the rest of the stuff like Writer/Tuple, Reader/Function, State, transformers, and the rest of that black magic.
As of this writing, I have no major accomplishments in Haskell, but I've used it with GTK using haskell-gi (as well as helped out with the missing GObject Introspection annotation problem there), briefly collaborated on a Snake game AI, dabbled in AST parsing, and I think I understand enough to use Haskell for something serious.
So while I'm nowhere near Haskell mastery (and probably never will be), I'm done holding off on giving my opinion of Haskell, even though it's not as informed as some of my other language opinions.
Compiled AND interactive¶
When I found out that there was a language that both compiles to native code and has an interactive mode, I started to think I really had stumbled onto the perfect language. In this regard at least, Haskell really is the best of both worlds between Go and Python.
The type system¶
Haskell's type system has so many great ideas. Without any "cheat the type system and give up safety" features, you can have parameterized types, sum types, interfaces, enums, and you don't suffer from the "losing type information when going through a general function" problem that some other statically typed languages have. What I mean is: in Go, for example, if you have a function that takes an interface type, does something to it, and returns it, it has to return it as the interface type because it doesn't know what its concrete type is. Meaning if you pass it a value of a known concrete type, you lose the information of which concrete type it is when it comes back out. In Haskell, the lost type information "reappears" on the other side. This lets you reuse a ton of stuff you wouldn't be able to reuse in other languages.
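A minimal sketch of what that means in practice (the function names here are mine, not from any library):

```haskell
-- 'applyTwice' works for any type 'a'. Unlike passing a value through
-- an interface-typed parameter in Go, the concrete type survives the
-- round trip: pass a String in, get a String back, no cast needed.
applyTwice :: (a -> a) -> a -> a
applyTwice f x = f (f x)

shout :: String
shout = applyTwice (++ "!") "hi"   -- still a String on the way out

doubled :: Int
doubled = applyTwice (* 2) 3       -- still an Int on the way out
```

The same generic function is reused at two unrelated types, and nothing about either type is erased.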
Not only is the type system extremely flexible and powerful, but you usually don't have to write the types. GHC will infer them as generally as possible: if you have a function that adds two variables, it'll know they have to be some type that implements the Num class, but it can be any such type.
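For example (a sketch; the names are invented):

```haskell
-- No type signature is written for 'addTogether', so GHC infers the
-- most general type:  addTogether :: Num a => a -> a -> a
addTogether x y = x + y

asInt :: Int
asInt = addTogether 2 3          -- the inferred function used at Int

asDouble :: Double
asDouble = addTogether 2.5 0.5   -- the very same function used at Double
```

One definition, no annotations, usable at any numeric type.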
It's still a good idea to put type signatures on top-level bindings as it improves readability and can help puzzle through type errors, but things are helped a lot by not needing type signatures on local variables and lambdas. (You also don't need to declare local variables.)
Null is dead. Long live types!¶
Sum types replace null! The Maybe and Either monads offer ways of dealing with failure in a safe, compile-time-checkable way. And you can also use exceptions when they're convenient.
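A small sketch of both (function names are mine):

```haskell
-- Maybe: the absence of a result is itself a value.
safeDiv :: Int -> Int -> Maybe Int
safeDiv _ 0 = Nothing
safeDiv x y = Just (x `div` y)

-- Either carries a reason for the failure.
checkAge :: Int -> Either String Int
checkAge n
  | n < 0     = Left "age cannot be negative"
  | otherwise = Right n

-- The compiler forces callers to handle both possibilities:
describeDiv :: Int -> Int -> String
describeDiv x y = case safeDiv x y of
  Nothing -> "division by zero"
  Just q  -> "quotient: " ++ show q
```

There is no way to "forget" the Nothing case and get a null-pointer crash at runtime; leaving a branch unhandled is a compile-time warning or a pattern-match error you can see coming.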
No struct namespacing¶
This is the big problem with Haskell's type system: there's no syntax for accessing a struct field; struct fields are actually functions that take the struct type and return the right part of it... and that of course means they're in the global namespace, so structs can never share field names.
Just let that sink in and think how horrible that is. Ever worked on an application with data models that share field names like customer_id, creation_date? I have, and I would pay a fuckton to not have to call them job_customer_id, quote_customer_id, customer_creation_date, job_creation_date...
Oh, and as a result, there obviously can't be struct inheritance. As if this could get worse.
It's like the Haskell designers never worked in that kind of application. It wouldn't surprise me if few of them did - Haskell's following is mostly academic mathematicians who publish papers on increasingly mind-blowing abstractions, with few people in the industry actually using them, so its development is kind of in an ivory tower.
There's a compiler extension to sort of fix the namespacing, but it solves at most half the problem.
You can also get around it with classes (ie. interfaces), by defining an instance of the class for each type, but that's insanely clunky as you have to write all the shared field names three times.
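Here's what that workaround looks like, sketched with invented record and class names. Note how the shared name effectively gets written three times: once as a prefixed field per record, once as the class method, and once per instance:

```haskell
data Job   = Job   { jobCustomerId   :: Int }
data Quote = Quote { quoteCustomerId :: Int }

-- The "shared field" is recovered as a class method...
class HasCustomerId a where
  customerId :: a -> Int

-- ...and then wired up once per type.
instance HasCustomerId Job   where customerId = jobCustomerId
instance HasCustomerId Quote where customerId = quoteCustomerId
```

After all that ceremony, `customerId` finally works uniformly on both types.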
Oh, and God help you if you have to deal with nested structs. Since you can't mutate anything, to update a field of a nested struct you have to also update the nested struct itself as a field in the parent struct. There's another elaborate abstraction to "simplify" that: lenses, which I've tried multiple times to learn after understanding monad transformers, and failed.
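For instance, changing one nested field without lenses means rebuilding every enclosing record by hand (the record names here are hypothetical):

```haskell
data Address = Address { city :: String }
data Person  = Person  { name :: String, home :: Address }

-- To change the city, the Address must be rebuilt, and then the
-- Person must be rebuilt around the new Address.
moveTo :: String -> Person -> Person
moveTo newCity p = p { home = (home p) { city = newCity } }
```

With one level of nesting this is merely awkward; with three or four levels it gets genuinely painful, which is the gap lenses exist to fill.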
Syntax¶
The big unusual thing about Haskell is that its function call syntax is shell-like: func arg1 arg2 instead of func(arg1, arg2). It makes more sense for a language with automatic currying, since there's no difference between evaluating a function's name and calling it with no arguments.
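For example, partial application falls out for free (names are mine):

```haskell
add3 :: Int -> Int -> Int -> Int
add3 x y z = x + y + z

-- Supplying only some of the arguments yields a new function;
-- no special partial-application syntax is needed.
addTen :: Int -> Int
addTen = add3 4 6
```

`add3 4 6` is just an expression like any other, and it happens to be a function waiting for its last argument.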
Haskell's generally pretty concise. The above syntax, combined with functional purity, the powerful type system and type inference, automatic currying, and ultra-terse lambdas, lets you express code in fewer lines than most other static languages - competitive with Python, in fact.
Branching is a mess¶
There are four different syntaxes for branching:
if-then-else
case ... of
guards
pattern matching in function definitions
And the types of things to branch based on:
Value comparisons
Structural comparisons (for example, whether an Either value is Left or Right)
Pattern matching in function definitions is really just syntactic sugar for case expressions; cases support structural comparisons and exact value comparisons, but not arbitrary value comparisons. For example, if you case on a number, you can't have a branch for x > 5.
if supports anything value-level, but nothing structural, and its indentation doesn't work the way it does in other languages; then is supposed to be indented under if, meaning nested if... else if expressions flow to the right (and multiline let bindings have weird quirks that still confuse me, making it even more difficult to use this workaround).
Guards are an if-elif-else tree-like syntax that can accept an arbitrary value comparison for each guard, but they can only be used in the context of pattern matching. For all the different syntaxes Haskell has for branching, there's just no good way to do a good old if-elif-else tree.
There are compiler extensions to fix this, like MultiWayIf, but they're compiler extensions, and they come with quirky syntax of their own (the if and first guard both have to be on the same line as the preceding =, which can be unreasonable).
Oh, and since let can only be used inside an expression (so it can't span over multiple guards), the where keyword exists, which does exactly the same thing but applies to a set of guards (and goes at the end). If only guards could just be used in an expression...
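To make the split concrete, here is the same kind of trivial logic under three of the syntaxes (a sketch with made-up functions):

```haskell
-- Guards: arbitrary comparisons, but only attached to a definition
-- (or a case alternative), never usable as a free-standing expression.
classify :: Int -> String
classify n
  | n > 5     = "big"
  | n > 0     = "small"
  | otherwise = "non-positive"

-- case: structural matching works nicely, but there is no way to
-- write a branch like "n > 5" directly in the patterns.
describe :: Either String Int -> String
describe e = case e of
  Left err -> "error: " ++ err
  Right n  -> "got " ++ show n

-- if-then-else: any condition goes, but long chains nest rightward.
classify' :: Int -> String
classify' n =
  if n > 5
    then "big"
    else if n > 0
      then "small"
      else "non-positive"
```

Each one covers a slice of what a plain if-elif-else tree does in other languages, and none of them covers all of it.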
Some links with more information on Haskell's myriad of should-be-but-aren't-redundant branching syntaxes:
A Gentle Introduction to Haskell
Someone else on Stack Overflow struggling with the mysteries of layout
Bad debugging¶
Some exceptions don't give stack traces, including some particularly common ones like Prelude.!!: index too large and Prelude.head: empty list (index out of range errors).
As for logging: pure functions can't do IO so you can't log from within them. And of course, a pure function can't call an impure one, so it takes a pretty onerous refactor to get a log statement deep inside the main logic; one that usually isn't worth it for debugging. (No, the Writer monad is not a solution because it's a comparably onerous refactor for each function it has to touch and still can't log without IO.)
In theory, you shouldn't need to log in a pure function because you can compose them from small pieces that you can prove are individually correct. At least, that's the talking point I heard from some Haskell evangelists when I was new. But that's not how things go in practice. One way or another, complex logic gets complicated, and there are bugs you won't find from testing components in isolation, even ignoring that some (eg. GTK stuff) can be difficult to test in isolation because of the nature of what they do.
The learning curve is absurd¶
Haskell might be worth it, but damn is it hard to understand. To really unleash the power of Haskell you have to understand things like Functors, Applicatives, Monads, Monoids, Monad transformers, and tons of other stuff I haven't even touched yet. This is a very real downside because time spent learning these concepts is time not spent using the language at its fullest.
In another language I could be on my first hour and look up the name of the standard library function to generate random integers and then import the module and get it done. In Haskell, to even start to do a lot of basic things you have to first understand these ridiculously abstract concepts. I remember spending about a day learning how to generate random numbers (and that was not my first day with the language).
Haskell feels like a never-ending rabbit hole of ludicrously elaborate abstractions to learn, and sometimes it seems like it's just to win back the functionality that was trivial in other languages. I'm not trying to pass a judgement here - I've felt both ways, and am still undecided on the philosophy.
This might sound like an "it's bad because I don't understand it" criticism, but it's not. A steep learning curve directly undermines the point of a tool (making work more efficient), even if it can be compensated for.
Recursion is looping, but harder
Since loops involve state, Haskell doesn't have loops. When mapping over a sequence doesn't cut it, you resort to recursion.
But that doesn't actually eliminate state as far as the benefits of doing so are concerned. Recursion is still effectively state. It's a bit harder to think about because it's a call stack instead of a loop, meaning the first iteration isn't removed from the picture when you loop, but you're still going over the same code with different values in the same names. For all intents and purposes, recursion is looping, but harder.
No default arguments... and it makes the lack of looping even worse
Haskell naturally doesn't have default arguments - they don't make much sense with automatic currying (although OCaml proves it's possible to reconcile the two). And combined with the way looping works, this very often means functions that want to have some sort of strictly internal state, like a running list of results, have to have a wrapper to start it with an empty list.
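That wrapper pattern can be sketched like this (hypothetical names, not from the original post):

```haskell
-- The recursive helper "go" carries the running list; since there are
-- no default arguments, a wrapper has to seed the accumulator.
evens :: [Int] -> [Int]
evens = go []
  where
    go acc []     = reverse acc
    go acc (x:xs)
      | even x    = go (x : acc) xs
      | otherwise = go acc xs
```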
Package management and build system hell
It's a bloody nightmare.
There are several different pieces of software involved: ghc-pkg, Cabal, cabal-install (which is a separate package from Cabal), Stack, and hpack (the Haskell Platform seems to just bundle a few of those things). All, besides hpack which I haven't used, are poorly documented and seem to freak out with inexplicable behavior at random times (by which I mean, every time).
The most obvious solution is just to use cabal-install with GHC directly. Unfortunately, cabal-install is a capricious demon that hates coders and exists to confound them and inflict depression. Sometimes I get errors saying an import is ambiguous because there are two packages it could refer to... and they're the same version of the same package. And that happens without doing anything weird, just right after running a single install command on an otherwise pristine system. Sometimes I can import something in the REPL but not build with it. Sometimes I finally get a package to work and then the next day I can't import it because "the package is hidden", and I have no idea what I changed.
ghc-pkg expose isn't the solution either. All the time I run into situations where it seems like the only way to fix a problem is to delete and reinstall everything related to Haskell.
Cabal also apparently doesn't install libraries by default, but the initial output you get if you forget the flag says it's building a library:
    In order, the following will be built (use -v for more details):
     - random-1.1 (lib) (requires download & build)
Followed by a successful-looking build log and then:
    Warning: You asked to install executables, but there are no executables in
    target: random. Perhaps you want to use --lib to install libraries instead.
And sure enough the install failed.
Oh, and the ability to remove packages is in a separate package.
Cabal is actually mainly a build tool, so I probably haven't even seen the worst of it.
Stack was actually created on top of Cabal to supposedly fix the problems with it. Unfortunately, I don't think it does anything of the sort. As far as I can find, it installs things in a way that's inaccessible to Haskell tools outside of Stack, meaning I can only use it as a package manager if I'm also using it as a build tool, and as a build tool it adds at least one new necessary config file in addition to the ones Cabal needs, is poorly documented, and that's just not worth it for me.
Also, it's not bad but install time is non-negligible, unlike with pip, npm, and go get, because Cabal seems to actually build everything it installs into a lib rather than just downloading source.

The one good thing about Haskell tooling-wise is that GHC can actually be pretty helpful. The type error messages are absurdly arcane, but that's mostly because the concepts are arcane, not because GHC is unfriendly. GHC can (with warning flags) point out unused variables and imports, and if you misspell or forget to import something, it sometimes knows what you mean or what module it's in. Even with the arcane errors, sometimes it suggests the solution directly (like "probable fix: use a type annotation to specify what a0 should be"), or recommends a compiler extension. It was GHC, not the Haskell community, that introduced me to the wonderful ScopedTypeVariables (which really should be part of the language standard).
Stdlib and ecosystem
The ecosystem reminds me of Javascript in that even a single package on top of a base install often pulls in two dozen dependencies. And the standard library is tiny compared to other languages.
That statement is actually a bit unclear because there's some contention about what counts as the standard library. I define the standard library to be whatever comes with the compiler install. In this case, that's almost nothing. Here's a map of a superset of the standard library - if I filter out stuff that doesn't come with GHC, what all is left? Besides type system stuff, basic operations on lists and strings, and stuff related to the internals of GHC, pretty much just an FFI, OS interfaces, and Windows-specific graphics stuff.
Here are a few examples of incredibly important stuff that's left out: regex, JSON, HTTP, and randomness. There are multiple packages for the first three of those, but being not built-in means you have to do research to decide which one to use, they don't tend to be well documented, and worst of all: you have to go through Cabal hell to get them.
Documentation
Documentation tends to be subpar. I heard someone remark that "the Haskell community doesn't understand that type signatures are not a substitute for documentation", and that's kind of true, although it started seeming a lot better to me once I got past beginner level in understanding the concepts (this was true even for things that I didn't think involved abstractions I hadn't got).
There also isn't an easy way to view it from the command-line as far as I know. But to compensate, there's Hoogle, which is something that doesn't seem to have an equivalent in other languages. Hoogle is a CLI tool that lets you search a database of packages for function names or type signatures. I don't know nearly everything about it, but I think it's really powerful.
The conceptual tutorials tend not to be very good - they're hard concepts to teach, with the curse of knowledge applying far stronger than usual. There are some pretty good ones though; part of the problem is that they're drowned by bad ones due to the monad tutorial fallacy.
Strings are a mess
5 different string types. With overloaded function names for conversion. The OverloadedStrings compiler extension helps, but the string types are still confusing and inconvenient.
List operations
Haskell is pretty par on concise list operations. Some are easy: index, search, reverse, sort, and of course map, filter, and comprehensions (though comprehensions are way less readable than Python's). But not others: negative index, slice, insert, remove (although take and drop fill the majority of your slice and remove needs), and update at position.
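For instance, a Python-style slice really is recoverable from those two (a hypothetical helper, not in the standard library):

```haskell
slice :: Int -> Int -> [a] -> [a]
slice from to = take (to - from) . drop from
-- slice 1 4 "abcdef" evaluates to "bcd"
```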
There's the Seq type in Data.Sequence which is more aimed at being a replacement for mutable lists so it supports insert/update/remove, but of course since it's a different type (and doesn't have the same interface), you have to convert to use it with anything that's designed to work on the normal list type.

Seq also doesn't have syntactic support. You have to use it with fromList or something similar. And you index it with lookup i items. And to make matters worse, their function names overlap so you have to use a qualified import.
Concurrency
I haven't done much with Haskell's concurrency, but it seems pretty solid. forkIO starts a green thread, and the thread communication mechanism, MVar, sounds a lot like Go's channels. Parallel evaluation in pure code involves a function called par from the parallel package, which takes two arguments and itself evaluates to the second one, but internally, makes GHC start doing the work of evaluating the other so it can already be done when a later expression evaluates to it. It's a counterintuitive approach, but it is incredibly concise - you could parallelize an expensive operation by editing only one line.
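A minimal sketch of that one-line parallelism (assuming the par function from the parallel package):

```haskell
import Control.Parallel (par)

-- Spark evaluation of "left" while the current thread computes "right";
-- by the time the sum needs "left", it may already be evaluated.
parSum :: [Int] -> Int
parSum xs = left `par` (right + left)
  where
    (ls, rs) = splitAt (length xs `div` 2) xs
    left     = sum ls
    right    = sum rs
```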
Haskell definitely taught me a lot about type system theory and language design. But there are a few reasons I won't be going farther with it.
The biggest one is that I don't believe in pure functional programming anymore. I was still unsettled on it when I last touched Haskell; it was mostly Rust that convinced me, by showing that most of the benefits of functional programming can be had without outlawing side effects.
The other reason is that I found out about Idris. Idris seems like basically a remake of Haskell with design mistakes and historical artifacts fixed. If I ever do want to use a pure functional language, I'd go for Idris. It seems like the effort I put into learning Haskell would carry over with about 90% efficiency (I played with Idris a bit). | https://yujiri.xyz/software/haskell | CC-MAIN-2021-04 | refinedweb | 3,819 | 66.88 |
I often find myself anticipating future contract requirements and wanting to add to my skill set. But staying on top of emerging technologies reminds me of trying to keep three beach balls under water at once. As soon as I have a solid grasp of one new programming language or concept or hardware interface, two more pop up.
A client engagement surfaced a few months ago that called for me to work with Python. I figured the easiest way for me to get up to speed would be to apply my knowledge of a similar language and translate an existing script to Python. After some investigation, I found out that Python was somewhat similar to Perl, a language I know fairly well. You can read about the Perl version of this script here.
I’m not going to show how I did the conversion but rather walk through the Python version of this script and illustrate the key statements that are used. The script I chose imports a delimited flat file containing 3,000 inventory items, extracts the item description field (which is variable-length) and converts it into three 30-character fields, and rewrites the file. You can see the script in its entirety in Listing A.
Note
In Python, loops and flow control statements aren’t terminated, which can get a little confusing. You'll notice that I've added comments to indicate where code blocks are terminated. This helps me better organize and read my code.
Here we go.
import string #use string library
import re #use regular expression library
These first two lines import the string and regular expression classes. Python is fairly object-oriented and allows for classes to be imported, increasing the expandability and modularity of one's code.
inputfile = "c:\Work\CNET\Inventory.txt" #set "inputfile" to be the name of the delimited file.
outputfile = "c:\Work\CNET\inv.txt" #set "outputfile" to the name of the output file.
This first fragment shows how to define a string variable in Python (e.g., variable_name = “string”). Python statements are terminated at the end of each line. Comments begin with a pound sign (#).
f = open(inputfile) #open "inputfile" for reading
o = open(outputfile, "w") #open "outputfile" for writing
These two statements create the file handles necessary for importing and exporting the two files. A file handle is simply a data structure that Python uses to access external files. When the open statement is used with only one argument, the file is opened for reading only; the “w” indicates that the file is opened for writing.
while (1): #process the input file
    offset = 0 #the first 30 character field offset
    line = f.readline() #assign the current line to "line"
    if not line : break #exit the while loop at the end of the file
The while statement will execute the code contained within it until the condition is false. I used 1 because I chose to use the if statement to exit the while loop from inside. Python uses only indentation to block code; this requires you to pay close attention to what you are doing but helps ensure readable code.
The f.readline() statement uses the readline method for file handle objects. The method returns a string containing the current line of the file and moves the pointer to the next line. Once the last line of the file has been read, subsequent calls return an empty string. The statement

if not line : break

is used to exit the loop: at the end of the file, line is an empty string, which is false in a boolean context, so the loop exits accordingly.
line = line.rstrip() #remove the newline character (and any ending whitespaces)
cols = line.split('\t') #split on tabs
The rstrip method will remove any ending white space characters from the string object and return the new string. The split method takes one argument, the character to be split upon, and returns a list (or array) of strings. These methods were imported at the beginning of the script when the string class was imported. I think now would be a good time to also point out that object types are not differentiated syntactically in Python. The object type is simply defined at the time the variable is declared.
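For instance, with a made-up inventory line (not from the actual data file):

```python
line = "SKU123\tWidget\t4.99\n"        # hypothetical record
fields = line.rstrip().split('\t')     # strip the newline, then split on tabs
print(fields)                          # ['SKU123', 'Widget', '4.99']
```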
splitme = cols[6] #set "splitme" to be the data from the 7th column
splitup = list(splitme) #set "splitup" to be a list of characters from the string "splitme"
The first line above shows how to point to an element in an array. The array is indexed from 0 to (n-1), where n is the number of elements. The second line demonstrates the list function, which takes a string as an argument and returns a list of characters.
p = re.compile('\s') #compile a regular expression object "p" to find spaces.
In this statement, I have used the regular expression class to create a regular expression object. The object must be “compiled” using the compile method. This method takes a regular expression as an argument that will be used in pattern matching. Here, ‘\s’ is used, indicating that a space is the only thing being sought. Different expressions could be used to match elements such as white spaces, any alphanumeric character, or any numeral.
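A quick illustration of compiling and matching (a single space matches '\s'; a letter does not):

```python
import re

p = re.compile(r'\s')              # pattern object matching one whitespace character
print(p.match(' ') is not None)    # True
print(p.match('x') is not None)    # False
</imports>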
if len(splitup) > 30: #if the item description contains more than 30 characters
This statement introduces the len function. This function takes a list as its argument and returns an integer whose value is equal to the number of elements in the list. I think it's quite handy.
for i in range(11): #count from 0 to 10
I found two interesting things when creating for loops in Python. The first is that for loops are iterated over a list of elements. I could have said
for i in [0,1,2, 3, 4, 5, 6, 7, 8, 9, 10]
but I chose the range function—which is the second interesting thing I discovered. The range function generates a list of integers automatically. It takes one or two arguments. If one argument is given (n), a list from 0 to n-1 is generated. If two arguments are given (m,n), a list from m to n-1 is generated.
Being able to iterate over lists is rather useful because you can iterate over a list of any object type. You could, for example, use a list of strings or a list of regular expression objects.
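For example (note that in today's Python 3, range returns a lazy sequence, so list() is used here to show its contents; in the Python 2 this article targets, range returned a list directly):

```python
print(list(range(5)))      # [0, 1, 2, 3, 4] -- one argument: 0 to n-1
print(list(range(3, 7)))   # [3, 4, 5, 6]    -- two arguments: m to n-1

for word in ["spam", "eggs"]:   # for loops iterate over any list
    print(word)
```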
m = p.match(splitup[(30 -i)]) #find the first space
The match method of the regular expression class takes a character or string as its argument and compares it with the regular expression that was compiled. It returns true if the expression was matched.
As I mentioned earlier, loops and flow control statements aren’t terminated in Python. In Listing B, you can see I've added comments to indicate where code blocks are terminated.
newguy = string.join(splitup,'') + '\t' #make the list a string
The join method is the opposite of the split method. It takes the list of strings or characters to be joined as its first argument and a separator as the second. To concatenate two strings, the plus sign (+) is used.
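For example (shown in the modern Python 3 spelling, where join is called on the separator; the article's string.join(list, sep) form is the older Python 2 library interface):

```python
parts = ['one', 'two', 'three']
print('-'.join(parts))    # 'one-two-three'
print('ab' + 'cd')        # 'abcd' -- + concatenates strings
```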
The portion of code in Listing C didn't change from the Perl version of the script.
When the if condition preceding it is not met, the else conditional statement shown in Listing D is executed.
In Listing E, the write method for file objects is used. The only argument passed to it is the string to be written to the file.
Summary
Here's a recap of the elements I covered in this Python script:
- Objects and classes
- Variables (object and class types, scalars, and lists)
- Flow control (while and for loops, and if/else statements)
- Functions (e.g., len)
- Methods of objects (e.g., join and split)
- File I/O
Python is a fairly simple language to pick up. The conversion of the Perl script took about six hours. Check python.org for some great information. Its Windows download contains an editor and debugger, so writing, testing and executing code is easy and pleasurable. | http://www.techrepublic.com/article/pick-up-some-python-with-this-script-walk-through/ | CC-MAIN-2017-34 | refinedweb | 1,349 | 71.04 |
Type: Posts; User: Crash8308
I have a WPF where I am implementing a WebBrowser control and I have a list of objects that when i select them it changes the url binding for the webBrowser control and all is well.
However, if...
I was wondering if there is any way to assign math operators to objects class that I constructed?
or to construct a math operation from objects. I am coding a Form app that does some pretty...
you need to have a second event trigger/button to transfer the checked items to do what you want with it.
that is where the code i gave you comes into play
+1
figured it out. this is for a DataGridView with 2 columns, granted you will have to refresh the entire view to get the entire list back but it is a start. It searches the entire grid cell by cell...
I have a datagridview that is not connected to a data set.
the assembly I am working with is the iControl for F5 systems, with it i am able to capture server IPs, names, and statuses into a struct...
this might help:
private void button2_Click(object sender, EventArgs e)
{
string[] checkeditems = new string[checkedListBox1.Items.Count];
for (int i = 0;...
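A hypothetical completion of the truncated snippet above (the control names are assumed):

```csharp
private void button2_Click(object sender, EventArgs e)
{
    // Copy the text of every checked item into an array...
    string[] checkedItems = new string[checkedListBox1.CheckedItems.Count];
    for (int i = 0; i < checkedListBox1.CheckedItems.Count; i++)
    {
        checkedItems[i] = checkedListBox1.CheckedItems[i].ToString();
    }
    // ...then transfer them, e.g. into another list box.
    listBox1.Items.AddRange(checkedItems);
}
```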
void button1_KeyPress(object sender, System.Windows.Forms.KeyPressEventArgs e)
{
if (e.KeyChar.Equals((char)13)) //(char)13 = "Return/Enter" on English Standard keyboards
...
I would suggest a SQL CE file and execute a query against it returning the value. XML and Text are fine too but have less options I think.
Instead of hard-coding queries into your source code you should be executing stored procedures. (mySQL 5.0+ and MSSQL server support stored procedures.)
that should increase the responsiveness of...
Something like this:
namespace Plaything
{
public partial class Form1 : Form
{
public Form1()
{
InitializeComponent();
} | http://forums.codeguru.com/search.php?s=695757392c28e928e968dab18ae03266&searchid=8424203 | CC-MAIN-2016-07 | refinedweb | 304 | 74.49 |
Feature #5378
Prime.each is slow
Description
See discussion here:

    x.report('primes_up_to') { primes_up_to(2000000).inject(0) { |memo,obj| memo + obj } }
    x.report('Prime.each') { Prime.each(2000000).inject(0) { |memo,obj| memo + obj } }
    end
$ ruby -v
ruby 1.9.2p290 (2011-07-09 revision 32553) [x86_64-darwin10.8.0]
$ ruby lol.rb
user system total real
primes_up_to 1.470000 0.020000 1.490000 ( 1.491340)
Prime.each 7.820000 0.010000 7.830000 ( 7.820969)
Related issues
History
#1
Updated by Hiroshi Shirosaki about 4 years ago
- File prime.patch added
It seems that converting from integer to bitmap tables in EratosthenesSieve class is slow.
This patch improves Prime performance.

    x.report('primes_up_to') { p primes_up_to(1500000).inject(0) { |memo,obj| memo + obj } }
    2.times do
      x.report('Prime.each') { p Prime.each(1500000).inject(0) { |memo,obj| memo + obj } }
    end
    end
before
$ ruby -v ~/prime_bench.rb
ruby 1.9.4dev (2011-10-01 trunk 33368) [x86_64-darwin11.1.0]
user system total real
primes_up_to 2.530000 0.020000 2.550000 ( 2.550595)
Prime.each 6.450000 0.010000 6.460000 ( 6.461948)
Prime.each 0.880000 0.000000 0.880000 ( 0.877138)
after
$ ruby -v -Ilib ~/prime_bench.rb
ruby 1.9.4dev (2011-10-01 trunk 33368) [x86_64-darwin11.1.0]
user system total real
primes_up_to 2.560000 0.020000 2.580000 ( 2.583900)
Prime.each 4.630000 0.010000 4.640000 ( 4.633154)
Prime.each 0.330000 0.000000 0.330000 ( 0.325838)
#2
Updated by Peter Vanbroekhoven about 4 years ago
Note that the primes_up_to method Mike posted is not quite optimal in that the intended optimization in the form of the reject doesn't do anything. The reject is executed before the loop and so the loop is still executed for all numbers instead of just for the primes.
If you use the version below instead, it is over 2.5 times faster for 2 mil primes on my machine. That would make the new built-in version still almost 5 times slower than the pure-Ruby version. Note also that in my benchmarks I changed the inject block to just return memo and not calculate the sum because that skews the results by quite a bit; there's the extra summing, but the sum gets in the Bignum range and so it adds object creation and garbage collection.
def primes_up_to(n)
  s = [nil, nil] + (2..n).to_a
  (2..(n ** 0.5).to_i).each do |i|
    if s[i]
      (i * i).step(n, i) { |j| s[j] = nil }
    end
  end
  s.compact
end
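For a quick sanity check of the sieve (restated here so the snippet is self-contained; small bounds only):

```ruby
def primes_up_to(n)
  s = [nil, nil] + (2..n).to_a
  (2..Math.sqrt(n).to_i).each do |i|
    next unless s[i]
    (i * i).step(n, i) { |j| s[j] = nil }
  end
  s.compact
end

p primes_up_to(30)  # => [2, 3, 5, 7, 11, 13, 17, 19, 23, 29]
```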
#3
Updated by Yusuke Endoh about 4 years ago
- Tracker changed from Bug to Feature
#4
Updated by Yusuke Endoh about 4 years ago
- Status changed from Open to Assigned
- Assignee set to Yuki Sonoda
Hello,
Just slowness is not a bug unless it is a regression, I think.
So I moved this ticket to the feature tracker.
I believe that there is no perfect algorithm to enumerate
primes. Any algorithm has drawback and advantage. Note that
speed is not the single important thing. I could be wrong,
but I guess that prime.rb does not priotize speed (especially,
linear-order cost), but high-abstract design.
Even in terms of speed, my version is about 2 times faster
than Peter's, though it uses extra memory. So, there are
trade-offs.
def primes_up_to_yusuke(n)
  primes = [2]
  n /= 2
  prime_table = [true] * n
  i = 1
  while i < n
    if prime_table[i]
      primes << j = i * 2 + 1
      k = i + j
      while k < n
        prime_table[k] = false
        k += j
      end
    end
    i += 1
  end
  primes
end
user system total real
primes_up_to_mike 1.720000 0.010000 1.730000 ( 1.726733)
primes_up_to_peter 0.780000 0.020000 0.800000 ( 0.795156)
primes_up_to_yusuke 0.410000 0.000000 0.410000 ( 0.419209)
Prime.each 4.760000 0.010000 4.770000 ( 4.765654)
I think every prime-aholic should implement their own favorite
algorithm by himself :-)
--
Yusuke Endoh mame@tsg.ne.jp
#5
Updated by Yutaka HARA almost 3 years ago
- Target version set to next minor
#6
Updated by Charles Nutter almost 3 years ago
JRuby numbers for the various implementations proposed (best times out of ten in-process iterations):
mconigliario's version:
user system total real
primes_up_to 2.100000 0.000000 2.100000 ( 1.062000)
Prime.each 0.980000 0.010000 0.990000 ( 0.883000)
h.shirosaki's version:
user system total real
primes_up_to 2.100000 0.010000 2.110000 ( 1.014000)
Prime.each 1.030000 0.000000 1.030000 ( 0.930000)
calamitas's version:
user system total real
primes_up_to 1.130000 0.020000 1.150000 ( 0.467000)
Prime.each 1.020000 0.000000 1.020000 ( 0.908000)
mame's version:
user system total real
primes_up_to 0.180000 0.000000 0.180000 ( 0.143000)
Prime.each 0.970000 0.000000 0.970000 ( 0.948000)
Ruby 1.9.3p286 running mame's version:
user system total real
primes_up_to 0.380000 0.000000 0.380000 ( 0.382392)
Prime.each 0.790000 0.000000 0.790000 ( 0.793005)
Definitely some room for improvement over the base implementation.
#7
Updated by Marc-Andre Lafortune 8 months ago
- Duplicated by Feature #10354: Optimize Integer#prime? added
Utility Stylesheets, Part Two
May 5, 2004
Last month we looked at some short utility stylesheets, each dedicated to a specific task that may be necessary with a wide variety of XML documents: stripping empty paragraphs, converting mixed content to element content, and adding ID values to elements. Stylesheets like these can serve as building blocks in the creation of a large, complex workflow composed of pipelined modular processes. This week, we'll look at several more such stylesheets.
Strip the Namespaces from a Document
XML namespaces play an important role in XML applications; they help to keep track of which elements and attributes come from where, but to be honest, they're such a pain sometimes. The following stylesheet copies all source tree nodes to the result tree, and it uses XPath 1.0's local-name() function to make sure that the elements and attributes on the result tree have no namespace prefix. (It must be useful -- when I suggested last month that readers send in their own short utility stylesheets, one sent me his own version of this stylesheet without knowing that I had planned to include one just like it.)
<!-- Copy document, stripping namespaces, i.e. for elements
     and attributes only copy the local part of their names. -->
<xsl:stylesheet xmlns:xsl="http://www.w3.org/1999/XSL/Transform" version="1.0">

  <xsl:template match="*">
    <xsl:element name="{local-name()}">
      <xsl:apply-templates select="@*|node()"/>
    </xsl:element>
  </xsl:template>

  <xsl:template match="@*">
    <xsl:attribute name="{local-name()}">
      <xsl:value-of select="."/>
    </xsl:attribute>
  </xsl:template>

  <xsl:template match="comment()|processing-instruction()|text()">
    <xsl:copy>
      <xsl:apply-templates/>
    </xsl:copy>
  </xsl:template>

</xsl:stylesheet>
In XSLT, xsl:copy elements and literal result elements are popular ways to add elements and attributes to result trees, but this stylesheet demonstrates a key advantage of using xsl:element and xsl:attribute elements instead: they offer more control over the names of those elements and attributes. The name attributes in these elements call the local-name() function to convert the original names to the ones with no namespace prefixes; using other function calls (or combinations of functions) can let you be even more creative in how you name your result elements and attributes.
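As a hypothetical variation (not from the article), the same computed-name mechanism could upper-case element names while stripping their prefixes, using XPath's translate() function:

```xml
<xsl:template match="*">
  <xsl:element name="{translate(local-name(),
                      'abcdefghijklmnopqrstuvwxyz',
                      'ABCDEFGHIJKLMNOPQRSTUVWXYZ')}">
    <xsl:apply-templates select="@*|node()"/>
  </xsl:element>
</xsl:template>
```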
Converting Attribute Value Qnames to URIs
The use of qualified names (names that include a namespace prefix) in attribute values is generally considered a Bad Idea in XML design. After all, a namespace prefix is only standing in for the full URI of the namespace it represents, and while XML parsers track the prefix/URL relationship for a document's element and attribute names, they don't do this for attribute values. See Kendall Clark's February 2002 XML Deviant column for a fuller discussion, which points out that XSLT 1.0 itself uses qnames in attribute values. For example, if you declare that xmlns:h="http://www.w3.org/1999/xhtml", you can then set your xsl:template element's match attribute to "h:h1" or "h:p" to define a template rule for h or p elements from the XHTML namespace.
When I read in a W3C IRC log that "XSLT 1.0 can't deal well with qnames," however, I took it as a challenge -- it can't deal well with qnames if you don't use the (little-used) namespace:: axis. With a bit of help from David Carlisle, I came up with a stylesheet that converts a namespace prefix in an attribute value to the corresponding URI:
<!-- qname2uri.xsl: convert namespace prefixes in attribute
     values to their associated URIs. -->
<xsl:stylesheet xmlns:xsl="http://www.w3.org/1999/XSL/Transform" version="1.0">

  <!-- For any attributes that have a colon in their value... -->
  <xsl:template match="@*[contains(.,':')]">

    <xsl:variable name="nsprefix">
      <xsl:value-of select="substring-before(.,':')"/>
    </xsl:variable>

    <xsl:variable name="nsURI">
      <!-- URI that the prefix maps to: namespace node of
           parent whose name() = the namespace prefix. -->
      <xsl:variable name="URIval" select="../namespace::*[name() = $nsprefix]"/>
      <xsl:choose>
        <xsl:when test="$URIval">
          <xsl:value-of select="$URIval"/>
        </xsl:when>
        <xsl:otherwise>
          <!-- Uncomment the following xsl:text element to flag
               prefixes that weren't declared. -->
          <!-- <xsl:text>NO-URI-DECLARED-FOR-PREFIX:</xsl:text> -->
          <xsl:value-of select="$nsprefix"/>
          <xsl:text>:</xsl:text>
        </xsl:otherwise>
      </xsl:choose>
    </xsl:variable>

    <!-- Add attribute to result tree, substituting URI for prefix. -->
    <xsl:attribute name="{name()}">
      <xsl:value-of select="$nsURI"/>
      <xsl:value-of select="substring-after(.,':')"/>
    </xsl:attribute>
  </xsl:template>

  <!-- Copy anything not covered by that first template rule. -->
  <xsl:template match="@*|node()">
    <xsl:copy>
      <xsl:apply-templates select="@*|node()"/>
    </xsl:copy>
  </xsl:template>

</xsl:stylesheet>
The stylesheet has two template rules: the first handles attributes with colons in their values, and the second copies any other source tree node to the result tree unchanged. The first defines two variables to make its logic more modular: the "nsprefix" variable stores the namespace prefix, and the "nsURI" variable stores the URI that corresponds to that namespace prefix. If the source document declares no URI for that prefix, "nsURI" just stores the prefix; uncommenting the xsl:text element with the value of "NO-URI-DECLARED-FOR-PREFIX:" adds that string to flag the lack of a properly declared URI for that prefix. You can easily change that to a proper URI or to any string you want.
To test this stylesheet, I used the following document as a source document:
<a xmlns:sn="">
  <b>this is a test</b>
  <b attr1="sn:blah">Second b element.</b>
  <b attr1="xx:blah">Third b element.</b> <!-- No declaration for xx. -->
  <c xmlns:sn=""> <!-- Redeclared prefix. -->
    <d color="red" direction="north"> <!-- No colons in these values. -->
      <x attr2="sn:foo">nested namespace</x>
    </d>
  </c>
</a>
The three commented lines attempt to trip up a conversion program that doesn't handle the URI-prefix mapping properly. Although it's not a very extensive test, it shows that the stylesheet works pretty well, creating this result from it:
<?xml version="1.0" encoding="utf-8"?><a xmlns:sn="">
  <b>this is a test</b>
  <b attr1="">Second b element.</b>
  <b attr1="xx:blah">Third b element.</b> <!-- No declaration for xx. -->
  <c xmlns:sn=""> <!-- Redeclared prefix. -->
    <d color="red" direction="north"> <!-- No colons in these values. -->
      <x attr2="">nested namespace</x>
    </d>
  </c>
</a>
The second b element's prefix was mapped to the snee.com URI, and the third b element's prefix was left alone because it had no corresponding URI. The d element's attribute values were left alone, and the x element's namespace prefix, which was the same as the one on the second b element, was mapped to a different URI: the one that the "sn" prefix was mapped to in the c element that contains the d element, thereby showing that the scoping of the declarations was respected.
Converting a Document's Encoding
There are several utilities available that can convert a file's encoding, but if you need to convert the encoding of an XML document, an XSLT processor and an eight-line stylesheet (OK, a little longer with blank lines added for readability) are all you need.
The following stylesheet has only one template rule: the same one we've seen in most of the utility stylesheets, which copies everything passed to it verbatim.
<xsl:stylesheet xmlns:xsl="http://www.w3.org/1999/XSL/Transform" version="1.0">

  <xsl:output encoding="ISO-8859-1"/>

  <xsl:template match="@*|node()">
    <xsl:copy>
      <xsl:apply-templates select="@*|node()"/>
    </xsl:copy>
  </xsl:template>

</xsl:stylesheet>
The stylesheet also has an xsl:output element. This element has many useful attributes, and the encoding one is particularly valuable: tell it what encoding to use when writing the result document, and your stylesheet is ready to convert some documents. If your XSLT processor can't handle the output encoding you've asked for, it will tell you.
The choice of encodings that your XSLT processor can read and write isn't entirely up to the processor. The XML parser that it uses determines which encodings it can read, and for a Java-based XSLT processor, the JVM in use may limit the number of supported output encodings. Check your processor's documentation -- for example, the "Character encodings supported" section of Saxon 6.5.3's Standards Conformance page lists four input encodings recognized by the built-in AElfred parser that it uses by default, and nine encodings that it supports for output, if your JVM supports them. | http://www.xml.com/pub/a/2004/05/05/tr.html | CC-MAIN-2017-13 | refinedweb | 1,318 | 59.74 |
I have been trying to make a script that extracts *.rar files, but I am receiving errors. I've been struggling to understand the documentation of the module to no avail (I'm new to programming so sometimes get a bit lost in all the docs).
Here is the relevant part of my code, and the error received.
Snippet from my code:
import rarfile

rarpath = '/home/maze/Desktop/test.rar'

def unrar(file):
    rf = rarfile.RarFile(file)
    rf.rarfile.extract_all()

unrar(rarpath)
File "unrarer.py", line 26, in unrar
rf.rarfile.extract_all()
AttributeError: 'str' object has no attribute 'extract_all'
Tags: rarfile, unrar, pip
Support for RAR files in general is quite poor, this experience is par for the course.
In order to get the rarfile Python module to work, you have to also install a supported tool for extracting RAR files. Your only two choices are bsdtar or unrar. Do not install these with Pip; you have to install them with your Linux package manager (or install them yourself, if you think that the computer's time is more valuable than your time). For example, on Debian-based systems (this includes Ubuntu) run,
sudo apt install bsdtar
Or,
sudo apt install unrar
Note that bsdtar does not have the same level of support for RAR files that Unrar does. Some newer RAR files will not extract with bsdtar.
Then your code should look something like this:
import rarfile

def unrar(file):
    rf = rarfile.RarFile(file)
    rf.extractall()

unrar('/home/maze/Desktop/test.rar')
Note the use of rf.extractall(), not rf.rarfile.extract_all().
If you are just extracting everything, though, there is no need to use the rarfile module at all. You can just use the subprocess module:
import subprocess

path = '/path/to/archive.rar'
subprocess.check_call(['unrar', 'x', path])
The rarfile module is basically nothing more than a wrapper around subprocess anyway.
Of course, if you have a choice in the matter, I recommend migrating your archives to a more portable and better supported archive format. | https://codedump.io/share/fdZ8eWvQKg5g/1/not-managing-to-extract-rar-archive-using-rarfile-module | CC-MAIN-2016-50 | refinedweb | 335 | 65.52 |
Hello,
I've been stuck on this homework assignment for some time now, and tried looking for possible solutions. The assignment goes like this...Write a Payroll class that uses the following arrays as fields:
employeeID - An array of seven integers to hold employee identification numbers.
The array should be initialized with the following numbers:
5658845 4520125 7895122 8777541
8451277 1302850 7580489
hours - An array of seven integers to hold the number of hours worked by each employee.
payRate - An array of seven doubles to hold each employee's hourly pay rate.
wages - An array of seven doubles to hold each employee's gross wages.
The class should relate the data in each array through the subscripts.
For example, the number in element 0 of the hours array should be the number of hours worked by the employee
whose identification number is stored in element 0 of the employeeID array.
That same employee's pay rate should be stored in element 0 of the payRate array.
In addition to the appropriate accessor and mutator methods,
the class should have a method that accepts an employee's identification number
as an argument and returns the gross pay for that employee.
Demonstrate the class in a complete program that displays each employee number
and asks the user to enter that employee's hours and pay rate.
It should then display each employee's identification number and gross wages.
Input Validation: Do not accept negative values for hours or numbers less than 6.0 for a pay rate.
My problem with this program is that everytime I try to print the employee ID's or the wages, I get hashcode or something like it (#[I1a77cdc or something like that). I tried using the toString method, but it lists all of the values, when I'm trying to display one at a time. Here is the code for the class:
Code Java:

// moduleArray class
public class moduleArray
{
    final int NUM_EMPLOYEES = 7;

    int[] employeeID = {5658845, 4520125, 7895122, 8777541,
                        8451277, 1302850, 7580489};
    int[] hours = new int[NUM_EMPLOYEES];
    double[] payRate = new double[NUM_EMPLOYEES];
    double[] wages = new double[NUM_EMPLOYEES];

    // setHours method
    public void setHours(int[] time)
    {
        hours = time;
    }

    // setPayRate method
    public void setPayRate(double[] pay)
    {
        payRate = pay;
    }

    // getEmployeeID method
    public int[] getEmployeeID()
    {
        return employeeID;
    }

    // getWages method
    public double[] getWages()
    {
        wages = hours[] * payRate[];
        return wages;
    }
}

This is the demo program to list the ID's. I've been messing with it for some time, and right now I just want it to display values. Any help is appreciated!

Code Java:
import java.util.Scanner;

public class moduleArrayDemo
{
    public static void main(String[] args)
    {
        final int NUM_EMPLOYEES = 7;
        int[] ID = new int[NUM_EMPLOYEES];
        int[] time = new int[NUM_EMPLOYEES];
        double[] pay = new double[NUM_EMPLOYEES];
        Scanner keyboard = new Scanner(System.in);

        moduleArray[] employee = new moduleArray[NUM_EMPLOYEES];

        for (int i = 0; i < employee.length; i++)
            employee[i] = new moduleArray();

        for (int i = 0; i < 7; i++)
            System.out.println(employee[i]);
    }
}
I'm trying to do the following using a linked list.
delete("is",2)
print 1:This 2:is 3:an 4:an 5:icorrect 6:sntence
delete("an",3)
print 1:This 2:is 3:an 4:icorrect 5:sntence
delete("icorrect",4)
print 1:This 2:is 3:an 4:sntence
insert("incorrect",4)
print 1:This 2:is 3:an 4:incorrect 5:sntence
delete("sntence",5)
insert("sentence",5)
print 1:This 2:is 3:an 4:incorrect 5:sentence
neighbors("is") 2:is previous:This next:an
It's like a sentence editor. It follows the commands on the left side (print, delete, insert), etc., so that "This is is an an icorrect sntence." becomes "This is an incorrect sentence."
I'm using fstream to read the text file this comes from and load it into the nodes, each word in a separate node. I have no idea how to build this linked list. The book only talks about deleting the first node, deleting the last node, retrieving data from the first node, etc. But that is not what I am looking for, I think.
Would this do it?
public class SLinkedList
{
    protected Node head;   // head node of the list

    /** Default constructor that creates an empty list */
    public SLinkedList()
    {
        head = null;
    }

    // ... update and search methods would go here ...
}
I have been trying forever to figure this out. Help is really appreciated!! | https://www.daniweb.com/programming/software-development/threads/316717/linked-lists | CC-MAIN-2017-26 | refinedweb | 231 | 75.71 |
We need to install CircuitPython on the Raspberry Pi Pico board, because it lets us use the WIZNET5K library for the W5500 Ethernet function. For more detail, I have already uploaded a first project covering the Raspberry Pi Pico, the WIZ850io (W5500), the CircuitPython environment, the hardware connection, and a ping demo example. Please visit that project first.

2. DHCP Example
visit my source repo.
Download (or fork, copy,...) samples and copy them into your Raspberry Pi Pico board.
1. Copy the "lib" folder onto your Raspberry Pi Pico board.
2. Open "Pico_W5500_Echo_Demo.py", or get the code from the "Pico_W5500_DHCP_Echo" code section below in this project.
In the lib folder, I modified some code in the WIZNET5K library from Adafruit.
- In adafruit_wiznet5k.py - before DHCP, check link status first.
# in the __init__ function
start_time = time.monotonic()
while True:
    if self.link_status or ((time.monotonic() - start_time) > 5):
        break
    time.sleep(1)
if self._debug:
    print("My Link is:", self.link_status)
- In adafruit_wiznet5k_dhcp.py - before DHCP message sending, init _BUFF byte array.
def send_dhcp_message(self, state, time_elapsed):
    # before making the send packet, init _BUFF; if not, DHCP sometimes
    # fails: wrong padding, garbage bytes, ...
    for i in range(len(_BUFF)):
        _BUFF[i] = 0
Latest code, you can download from
or
- (fork version)
3. copy all contents into "code.py" on your Pico board.
4. Save "code.py" and open the REPL.

3. DHCP processing
my REPL console window log:
The DHCP process is OK.
The TCP server socket is also opened.

4. ECHO Demo
First, I tried to ping the board: no problem.
Second, I tried to connect over TCP (as described in our Python code). I used the "Hercules" freeware tool for the socket test.
- Destination IP - 192.168.0.20
- Port - 50007
- Send data - "12345"
You can find a WebClient demo easily in the Adafruit WIZNET5K Ethernet library. Let's try this sample code as well. It's easy. Powerful CircuitPython!
original example, visit
I also prepared "Pico_W5500_WebClient_Demo.py" on my repo.
-
- or can copy code from "Pico_W5500_WebClient_Demo" in below code section
Just copy this code (or download, rename, copy) into "code.py" on your Raspberry Pi Pico board, then save. Done!
You can see the result log in your REPL console window as below.
1. IP lookup adafruit.com result
2. Getting page from ""
3. Getting JSON data from ""
*) Check again! For this example, you need the "adafruit_requests.mpy" file in the lib folder.
You can find all of the code in the repo below.
Happy coding! | https://www.hackster.io/bjnhur/how-to-add-w5500-ethernet-to-raspberry-pi-pico-python-2-77c78c | CC-MAIN-2022-27 | refinedweb | 392 | 71.71 |
Log message:
gnucash-docs: update to 4.10.1.
4.10.1- 28 March 2022
o Re-release 4.10 because a CMakeLists.txt error included only the
Portuguese version of the Tutorial and Concepts Guide.
4.10 - 27 March 2022
o Update Preferences documentation to match current state
o Fix 2 "[WARN] FOUserAgent - Destination: Unresolved ID reference"
o Updates to Gen Imp Tran Matcher other than for new Append checkbox
o Adjusted entity for image width in gnc-docbookx.dtd and removed a
duplicate entry.
o Improve the documentation of the Find dialog.
o Memo isn't a transaction field, Notes is.
o F::Q Link to IRC channel inserted as entity
o F::Q insert note on NAV, insert Entity for Data file
o Remove country codes from lang attribute and other minor formalities
Log message:
gnucash-docs: update to 4.9.
4.9 - 19 December 2021
o Bug 797950 - Reconcilation docs don't mention automatic
credit card payment feature.
o Guide:C: Add a directory with datafiles for faster regeneration of
images
o Chapter "Online-Quotes" created.
Description of the installation and configuration of F::Q (moved
from Help_ch_Account-Actions-xml).
o Remove several Autotools remains
o Drop TravisCI as we are using Github Workflows now.
o Removed the information of HACKING file from README.
o Add ghelp to the default target
At some point in the past ghelp didn't have to be built, as one could
develop and test simply from the source directory. That is no longer
the case so perform a build by default if ghelp is enabled.
Log message:
finance: Replace RMD160 checksums with BLAKE2s checksums
All checksums have been double-checked against existing RMD160 and
SHA512 hashes
Log message:
finance: Remove SHA1 hashes for distfiles
Log message:
gnucash-docs: update to 4.8.
4.8 - 28 September 2021
o Remove outdated files.
o Remove autotools.
4.7 - 26 September 2021
o Bug 798226 - minor mistakes in Tutorial and Concepts guide 2
o Bug 798226 - postprocessing: xmlformat
o Bug 798236 - Remove comment about swapped A/P & A/R terminology
o Replace COPYING file from GPL 3 to GPL 2.
o Substantial editing of the C documentation to make the meanings more
understandable to translators.
o Apply dtd-locale to help/de/Help_para-assist-intro.xml.
o Add ENTITY(s) prefix guisubmenu, guimenuitem, and guilabel as gsm,
gmi, and gl, respectively.
o Move untranslated entity messages from gnc-docbookx.dtd to each
locale file.
o Make DTD ENTITY(s) translatable. See docbook/README.
Bug 798273 - Consider a entity import system like in docbook-xsl
o Guide:C:Currency: update images Part 1
o Fix license file to use actual file instead of softlink.
o Unify words and account names. Fix minor typos and tags. Add commas
to the numbers. Add some tags. Fix according to the review comments.
o Add license file to git tracking
It is an autogenerated file from autotools but it was ignored by our
current git config. A previous commit chose to install the file, but
that's difficult if it's missing.
o Install license files COPYING and COPYING-DOCS
o Help/de: Crop Export screenshots
o Drop travis-ci in favour of github worflows
o Remove obsolete appendix B about FAQ from guide.
o Remove obsolete appendix C about VAT from guide.
o Guide/C: Note on fieldnames in CSV import
o Mark Guide's import chapter as outdated
o Minor improvements in C and de Help Tips on alphavantage
Log message:
gnucash-docs: update to 4.6.
Concurrent with the release of GnuCash 4.6 we're pleased to also release a new version of the companion Help and Tutorial and Concepts Guide.
Between 4.5 and 4.6, the following bugfixes were accomplished:
Bug 798178 - : Wrong Color in Scheduled Transactions Window text
Bug 798217 - minor mistakes in Tutorial and Concepts guide
The following fixes and improvements were not associated with bug reports:
Update ch_invest.xml
Help/C: New screenshots, remove unused images, and image optimization
New help/de/figures/Main-window-callouts + helper files
Add ENTITY vers-last-2 for reference of major changes
Make calibre optional in cmake
Create Github actions to replace TravisCI
Fixes reference to Help Manual
Help: link Setup for Online Transactions in C, de
Help: Replace most <literallayout> by <screen>
Help pt: Add missing xmlns:xi parameters
Replace most <literallayout> by <screen>; <screen> uses Monospace while <literallayout> keeps the default (proportional) font
Backport of improvements from de/Help_ch_GUIMenus.xml
Added new menu items
Corrected the order of menu items
Removed duplicate descriptions
insert <accel>-Tags
Update PACKAGE_URL of configure.ac
xmlformat all docs
EEC became EU decades ago, but we had still references
Several fixes of shortcuts in C and pt
Check for " >" to avoid unwanted wraps
Add xmlformat incl. configuration
Improve the wiki link in the note for translators
Log message:
gnucash-docs: update to 4.5.
4.5 - 28 March 2021
o Bug 798089 - Starting "Tutorial and Concepts Guide" writes
namespace error to console
o Add wiki links about Online Banking Setup
o Online banking: Table of protocols
o Rewording of tools abstract
o Add IDs to all html chunks of help
o Explain default sort order and a partial review of the register
view menus.
o Update links about tax report …
o New section "Country Specific Reports": Moved US:TXF, added de:ElStEr
o Report: Join several notes in one footnote
o Several updates in report-create
o Guide: New year
o Update saved-reports location
o Update copyright year of german guide
o Specify ISO currencies in overview (English, German)
Log message:
gnucash-docs: update to 4.4.
4.4 - 28 December 2020
o Bug 798038 - Incorrect spelling in german account templates 'common'
and 'full', part 3: docs
4.3 - 27 December 2020
o Bug 798031 - : Update default of Date Completion | https://pkgsrc.se/finance/gnucash-docs | CC-MAIN-2022-27 | refinedweb | 976 | 53 |
Recursion in C, or in any other programming language, is a programming technique where a function calls itself a certain number of times. A function which calls itself is called a recursive function, the call is a recursive call, and the process of function implementation is recursion.
Formally, recursion is a programming technique that comes from the recurrence relation, where the problem is divided further into subproblems smaller in size but the same in nature. This division stops when the problem cannot be divided further. This point is called the base case, from where control moves back towards the original problem by solving the subproblems and assembling their results in order to get the final result.
While studying recursion in C, suppose you are asked to write a function void printBin(int n), which takes a decimal integer as an argument and prints its binary form on screen. You can think of two implementations of this function. The first version is an iterative one that stores binary digits in an array as they are produced, then prints the array content in reverse order. The second one is a recursive version, void recPrintBin(int n). The recursive version recPrintBin calls itself by passing n/2 to the subsequent calls until n is either 1 or zero, then starts printing n%2 from the stack of calls, and so solves the problem recursively.
The piece of code below is the iterative version (void printBin(int n)) of decimal to binary conversion. In this implementation we first get the number converted into 1's and 0's, but not in the right order, so to print the string of binary digits in the correct order the string binS first needs to be reversed, and then printed.
/* File: dec_to_bin.c
   Program: decimal to binary conversion of positive integer (iterative version) */
#include <stdio.h>
#include <string.h>

void printBin (int);
void reverse (char *);

int main ()
{
    printf("Binary of 19 is: ");
    printBin(19);
    printf("\n");
}

void printBin (int n)
{
    char binS[256];          // contains binary digits
    int i = 0;

    do {
        binS[i++] = n % 2 + '0';
    } while (n /= 2);

    binS[i] = '\0';          // null at the end of string
    reverse (binS);          // reverse the order of 1's and 0's
    printf ("%s\n", binS);
}

/* helper function to reverse a string */
void reverse (char *s)
{
    int lastIndex = strlen(s) - 1;
    int firstIndex = 0;
    char ch;

    while (firstIndex < lastIndex)
    {
        ch = s[firstIndex];
        s[firstIndex] = s[lastIndex];
        s[lastIndex] = ch;
        firstIndex++;
        lastIndex--;
    }
}

OUTPUT
======
Binary of 19 is: 10011
Now, see the recursive version of the same problem.
/* Demonstration of Recursion in C
   File: rec_dec_to_bin.c
   Program: decimal to binary conversion of positive integer (recursive version) */
#include <stdio.h>
#include <string.h>

void recPrintBin (int);

int main ()
{
    int n;
    printf("Enter an integer: ");
    scanf("%d", &n);
    printf("Recursively computed binary of %d is: ", n);
    recPrintBin(n);
    printf("\n");
}

void recPrintBin (int n)
{
    if (n == 1 || n == 0)     // base case
    {
        printf ("%d", n);
        return;
    }
    recPrintBin (n/2);        // recursive call
    printf("%d", n%2);
}

OUTPUT
======
Enter an integer: 19
Recursively computed binary of 19 is: 10011
From above pieces of codes, it can be clearly seen that the second piece of code is more concise and cleaner. However, application of recursion is completely problem dependent and it may not be suitable for all problem types.
If you analyze the decimal to binary conversion problem, you will find that in the first solution (the iterative one), the binary digit generated in the last iteration should be printed first. That's the reason the character array binS needed to be reversed before printing on the console.
On the contrary, the recursive version does not need any reverse of binary digits because in recursion, the very last call to function is processed very first, then second last and so on. This is done with help of a stack of calls, where the last call to function is placed on top of the stack, therefore processed first. That's how recursion in C is tackled and processed.
There are some classical examples which can be better implemented using recursion rather than their iterative counterparts. Towers of Hanoi, recursive listing of directories or folders and Russian Matryoshka dolls are some of them.
A recursive function has following two parts:
1. The base case
2. Recursive call
The base case is the case for which the function is evaluated without recursion. In the above example the base case is if (n == 1 || n == 0).
A function is recursive if it makes a call to itself directly or indirectly. If a function f() calls itself from within its own body, it is called recursive. Secondly, if a function f() calls another function g() that ultimately calls back to f(), then it can also be considered a recursive function. The following variants of recursion show how recursive calls can be made in different ways depending upon the problem.
In linear recursion a function calls itself exactly once each time it is invoked, and the number of calls grows linearly in proportion to the size of the problem. Finding the maximum among an array of integers is a good example of linear recursion, as demonstrated below:
Let's assume that we are given an array arr of n decimal integers and we have to find the maximum element in the array. We can solve this problem using linear recursion by observing that the maximum among all integers in arr is arr[0] if n == 1, or else the maximum of the first n-1 elements of arr compared against the last element of arr.
/* File: rec_max_array.c
   Program: Find maximum among array elements recursively. */
#include <stdio.h>
#define SIZE 10

int recursiveMax (int *, int);
int max (int, int);

int main ()
{
    int arr[SIZE] = {1, 3, 5, 4, 7, 19, 6, 11, 10, 2};
    int max = recursiveMax(arr, SIZE);
    printf("Maximum element among array items is: %d\n", max);
}

int recursiveMax (int *arr, int n)
{
    if (n == 1)
        return arr[0];
    return max (recursiveMax (arr, n-1), arr[n-1]);
}

/* helper function to compute max of two decimal integers */
int max (int n, int m)
{
    if (n > m)
        return n;
    return m;
}

OUTPUT
======
Maximum element among array items is: 19
Tail recursion is another form of linear recursion, where the function makes a recursive call as its very last operation. Note that there is a difference between the last operation and the last statement. If you look at recursiveMax (defined above), you will find that the recursive call to recursiveMax sits in the last statement but is not the last operation: the code arr[n-1], and the surrounding call to max, are yet to be processed to complete the execution of that statement.
Therefore, a function is said to be tail recursive if the recursive call is the last thing the function performs. When a function uses tail recursion, it can be converted to a non-recursive one by iterating through the recursive calls rather than making them explicitly. Reversing array elements is a good example of tail recursion. The recursive solution of the array-reversal problem is illustrated below, followed by its iterative version.
Suppose we are given an array of decimal integers and we have to reverse the order in which the elements are stored. We can solve this problem using linear recursion. Reversal of an array can be achieved by swapping the first and last elements and then recursively reversing the remaining elements of the array.
/* File: rec_array_reverse.c
   Program: Reverse array elements recursively. */
#include <stdio.h>
#define SIZE 10

void recursiveRev (int *, int, int);

int main ()
{
    int arr[SIZE] = {1, 2, 3, 4, 5, 6, 7, 8, 9, 10};
    int i;

    recursiveRev (arr, 0, SIZE-1);
    printf ("Printing array elements after reversing them...\n");
    for (i = 0; i < SIZE; i++)
        printf ("%d\n", arr[i]);
}

void recursiveRev (int * arr, int i, int j)
{
    if (i < j)
    {
        int tmp;
        tmp = arr[i];
        arr[i] = arr[j];
        arr[j] = tmp;
        recursiveRev (arr, i+1, j-1);
    }
}

OUTPUT
======
Printing array elements after reversing them...
10
9
8
7
6
5
4
3
2
1
Recursive version of reversing array elements problem can easily be converted into iterative one as follows:
void iterativeRev (int *arr, int n)
{
    int i = 0, j = n - 1;
    int tmp;

    while (i < j)
    {
        tmp = arr[i];
        arr[i] = arr[j];
        arr[j] = tmp;
        i++;
        j--;
    }
}
But if you look at the signatures of recursiveRev and iterativeRev, you will find a little difference: the two functions are passed different numbers of arguments. This is because, to facilitate recursion, the problem sometimes needs to be redefined.
As name suggests, in binary recursion a function makes two recursive calls to itself when invoked, it uses binary recursion. Fibonacci series is a very nice example to demonstrate binary recursion. See the example below:
fib (1) = fib (2) = 1
fib (n) = fib (n-1) + fib (n-2), if n > 2
By definition, the first two numbers in the Fibonacci sequence are 0 and 1, and each subsequent number is the sum of the previous two [Definition Source wikipedia].
/* File: bin_rec_fib.c
   Program: Computes n-th fibonacci number using binary recursion. */
#include <stdio.h>

int recursiveFib (int);

int main ()
{
    int n = 6;
    printf ("%dth fibonacci number is %d\n", n, recursiveFib(n));
}

int recursiveFib (int n)
{
    // base case
    if (n <= 1)
        return n;

    // binary recursive call
    return recursiveFib(n-1) + recursiveFib (n-2);
}

OUTPUT
======
6th fibonacci number is 8
The above piece of code exercises binary recursion in order to compute the 6th fibonacci number. In this piece of code recursiveFib returns n in case n is less than or equal to 1. This is the base condition in recursiveFib, from where each recursive call returns to the previous call. The following figure shows how recursiveFib is executed.
Multiple recursion can be treated as a generalized form of binary recursion. When a function makes multiple recursive calls, possibly more than two, it is called multiple recursion.
When a function is called recursively, each call gets a fresh copy of all local/automatic variables, and every call is saved on a stack, where the most recent call is processed first. Every program maintains a stack (a special area of memory) at run time to remember where to go back when a function call finishes.
You can trace a recursive function by pushing every call it makes onto a stack and then returning to the previous call as each one completes. Look at the following piece of code and try to trace it; we will do so in a moment.
#include <stdio.h>

int mystery (int a, int b)
{
    if (b == 0)
        return 0;
    if (b % 2 == 0)
        return mystery (a+a, b/2);
    return mystery (a+a, b/2) + a;
}

int main()
{
    int i = 4, j = 5;
    printf("mystery of %d and %d is: %d\n", i, j, mystery (4, 5));
}

OUTPUT
======
mystery of 4 and 5 is: 20
In the figure below, I represent each call with a concentric box, where inner boxes represent deeper levels of recursion. As you can see, in the innermost box recursion terminates and no further recursive call is made; then it starts returning to the parent calls ladder by ladder.
The main advantage recursion provides to programmers is that it takes less code to write compared to the iterative version. The code written with the help of recursion is more concise, cleaner, and easier to understand.
There are some data structures and algorithms which are quite easy to code with the help of recursion. Tree traversals, quick sort, and merge sort are a few examples.
Despite the above benefits, recursion saves neither storage nor time. As we have discussed, every recursive call is saved in a special memory area the program maintains (i.e., the stack). Pushing a call frame onto the stack and removing it again is time consuming relative to the record keeping required by an iterative solution. There is also a chance that your recursive solution results in a stack overflow, in case the number of recursive calls does not fit into the stack.
Hope you have enjoyed reading Recursion in C. Having seen the pros and cons of recursion, you will have realized that recursion is programmer friendly, not system friendly, because the recursive version of a function is shorter to write, cleaner to see, and easier to maintain. And so, as it is said, "anyone can write code that computers can understand; wise programmers write code that humans can understand." So keep coding with recursion. And yes, how long do you think the following program will take to finish its execution?
#include <stdio.h>

long guessIt(int num)
{
    if (num == 0)
        return 0;
    if (num == 1)
        return 1;
    return guessIt(num-1) + guessIt(num-2);
}

int main()
{
    printf("num : guessIt(num)\n");
    printf("=========================\n");
    for (int num = 0; num < 100; num++)
    {
        printf("\n%3d: %ld", num, guessIt(num));
    }
}
In this tutorial we talked about recursion in the C language and how recursion works, then the types of recursion (linear, tail, binary, and multiple recursion), tracing recursive calls, and the pros and cons of recursion.
Worldwide Microcontroller Link for Under $20
Introduction: Worldwide Microcontroller Link for Under $20
Control your home thermostat from work. Turn on a sprinkler from anywhere in the world by flicking a switch. This Instructable shows how to link two or more $4 microcontrollers using the backbone of the internet and some simple VB.Net code.
This builds on an earlier Instructable which shows how to link a microcontroller to a PC and use a pot to control a servo This time we have a microcontoller talking to a VB.Net program then to an ftp website, back to another VB.Net program and thence a second microcontroller anywhere in the world, with or without human intervention.
How else are the machines in The Matrix ever supposed to take over if they can't talk to each other?
Step 1: Gather the Parts
Many of the parts are the same as in the PC Control Instructable and it is suggested that this be completed first before attempting to link two microcontrollers. While it is quite possible to use a pot to control a servo, this time round we are going to go for something simpler - a switch turning on a led. The switch could easily be a tank level sensor and the led could be a pump down near a river but let's get something simple working first.
Parts - Two Picaxe 08M chips - available from many sources including Rev Ed (UK), PH Anderson (USA) and Microzed (Australia). These chips are under $4US.
Two of: Protoboard, 9V battery and battery clips, 10k resistor, 22k resistor, 33uF 16V capacitor, 0.1uF capacitor, 7805L low power 5V regulator, wires (solid core telephone/data wire eg Cat5/6), LED, 1k resistor.
1 of: D9 female socket and cover and 2 metres of 3 (or 4) core data wire (for download) and a toggle switch.
2 computers with 9 pin serial ports (can be debugged on one computer though) and an internet connection.
For computers with no serial port, a USB to serial device and a small stereo socket.
Step 2: Download and Install Some Software
We will need the free VB.Net and the picaxe controller software and if you have done the PC controller Instructable you will already have these.
VB.Net (Visual Basic Express) is available from
The picaxe software is available from
You will need to register with microsoft to get the download - if this is a problem use a fake email or something. I actually found it helpful giving my real email as they send occasional updates.
I'm also going to mention the picaxe forum as this is the sort of forum staffed by teachers and educators and where students can usually get answers to questions within a few hours. The forum is very understanding of even the simplest questions as some of the students are still at primary school level. Please don't be scared to ask for help!
Step 3: Build a Download Circuit
This download circuit uses a picaxe chip, a couple of resistors, a regulator and a 9V battery. More information is available in the picaxe documentation/help which comes up in the help menu of the program. The circuit should only take a few minutes to build once all the parts are to hand. Once a chip is programmed it retains its program in EEPROM even when the power is turned off. Since we are programming two chips it might be worth labeling the chips so you know which is which. You can always go back and reprogram a chip by removing a link and moving a resistor.

Step 4: Program the Chips
We will call one program Tx and one Rx. Tx is the controlling chip and has a switch and a led. Rx also has a led. When the switch changes the signal goes from Tx to Rx, changes the led and also changes a second variable which then goes back to Tx. So flick the switch and in less than a minute the led changes on both circuits indicating that the message got there and the Rx is acting on the new switch position.
At the simplest level the picaxe has 14 single-byte registers. When a virtual network is created we link all those registers together so if a byte changes in one picaxe it changes in all the picaxes. Clearly if two picaxes are trying to change the same byte then it will get very confusing but if each picaxe only changes one byte then all the other picaxes can see that change and can act on it. Simple messages can be passed back and forward if a certain byte is only changed by one picaxe. A pot can change the value in a register and one or more other picaxes can sense that change and move a servo or whatever and turn on a heater. A second register could send back the temperature in the room.
Copy and paste the programs in turn into the picaxe programmer and download them to each of the respective chips using the blue download arrow from within the picaxe programmer.
Tx:
main:serin 3,N2400,("Data"),b0,b1,b2,b3,b4,b5,b6,b7,b8,b9,b10,b11,b12,b13' get packet from computer
if pin2=0 then' test the switch and set register b0 depending on status
b0=0
else
b0=1
endif
if b1=0 then' other picaxe sets b1 depending b0
and Rx:
main:serin 3,N2400,("Data"),b0,b1,b2,b3,b4,b5,b6,b7,b8,b9,b10,b11,b12,b13' get packet from computer
b1=b0' change register b1 to equal register b0
if b1=0 then
Step 5: Build the Tx Circuit
If you are flipping back and forth between a working circuit and a programming circuit be sure to change the connection to leg 2 and the location of the 22k resistor from leg 2 to leg 4. Or you can build a dedicated download circuit and move the chips across. Just note whether a circuit is running or downloading as it can get quite confusing. In particular, note that a running circuit will not work if leg 2 is left floating - it needs to be grounded. Leg 2 is the download pin and if it is left floating it picks up stray RF from flouro lights and the chip thinks another program is being downloaded.
It is also worth mentioning picaxe nomenclature which calls a physical pin a leg and a virtual pin a pin. Thus an output on pin 2 in code is actually an output on physical leg 5. This might seem strange but it means that code can be ported to bigger picaxes like the 28 and 40 pin versions and still work.
Step 6: Build the Rx Circuit
This circuit is almost the same as the transmitter - it just has no switch.
Step 7: Write Some VB.Net Code
I could have compiled the code and made this program available as a compiled .exe but learning some VB.Net is so incredibly useful that it is worth going through it step by step. If you are running this on two different computers you can Build the program into an .exe which creates a little setup program which can be installed on the second computer. Or you can put VB.Net on both computers and run the programs from within VB.Net
Let's assume you know how to open a new VB.net project from step 7 and 8 of
On the blank form let's add the following components from the toolbar and put them on the form in the locations as shown. For the labels and the textboxes, change the text property (over on the lower right) to what is needed. Don't worry about the settings for the timer - we will change them in the code but do make sure to put a timer in. You can move things around and there are no real rules about location. The big text box is a RichTextBox and the smaller three are ordinary Textboxes. In terms of order we are starting at the top of the form and moving down. If you leave something out there will be an error in the code which should give some sort of clue.
Please pick a random filename for Textbox3 - this is the name of your unique group of picaxes on the ftp server and obviously if we all use the same name then data is going to get all muddled!
Sorry about the dashes in this table - putting in spaces loses the formatting in the table.
Toolbox object-------Text-----------------------------------------Notes
Label1------------------Picaxe Communications
Label2------------------FTP Status
Label3------------------Status
Label4------------------Picaxe Registers
Label5------------------Register 0-13
Label6------------------Value 0-255
Label7------------------FTP link filename
Textbox1----------------0----------------------------------------------0 is a zero not an O
Textbox2----------------0
Textbox3----------------Myfilename-------------------------------Change so no clashes!
Button1------------------Modify
Richtextbox1
Picturebox1
Picturebox2
Timer1
Step 8: Add Some Code
See step 12 of the other instructable for the location of the button that flips between form view and code view. Switch to code view and paste the following code in. The colors should all reappear as in the screenshot . If a line hasn't copied properly due to a wordwrap problem then delete spaces till the error message goes away. I've tried to comment most of the lines so the code at least makes some sense. Delete the public class bit so the text is blank before pasting this - this code already has a public class. If an object like a text box hasn't been placed on the form or has the wrong name then it will come up in the text code with a squiggly blue line under
Dim ModifyFlag As Boolean
Private Sub Form1_Load(ByVal sender As Object, ByVal e As System.EventArgs) Handles Me.Load ' need all this rubbish stuff - .net puts it in automatically when go form1events above/load
Timer1.Enabled = True ' put this in code as defaults to false when created
Timer1.Interval = 20000 ' every 20 seconds
PictureBox1.BackColor = Color.Gray ' start with the comms boxes gray
PictureBox2.BackColor = Color.Gray
ModifyFlag = False ' if modify a value manually then skip download
RichTextBox1.Multiline = True ' so can display more than one line
Call DisplayPicaxeRegisters() ' display the 14 registers
Call ReadFTPFilename() ' read the filename off the disk (resaved every 20 secs)
End Sub
Sub SerialTxRx()
Dim DataPacket(0 To 17) As Byte ' entire data packet "Data"+14 bytes
Dim i As Integer ' i is always useful for loops etc
PicaxeRegisters(i - 4) = DataPacket(i) ' move the new data packet into the register array
PictureBox1.BackColor = Color.GreenYellow ' working
Catch ex As Exception
PictureBox1.BackColor = Color.Red ' not working
End Try
End Sub
Sub FTPUpload(ByVal Filename As String)
Dim localFile As String 'place to store data
Dim remoteFile As String ' filename is case sensitive this is really important
Const host As String = "" ' note the 0 is a zero not a character O
Const username As String = "picaxe.0catch.com"
Const password As String = "picaxetester"
Dim URI As String
localFile = Filename ' maybe not needed but if define a location eg c:\mydirectory can add easily this way
remoteFile = "/" + Filename ' file on ftp server needs "/" added in front
URI = host + remoteFile
Try
Dim ftp As System.Net.FtpWebRequest = CType(System.Net.FtpWebRequest.Create(URI), System.Net.FtpWebRequest) = New System.Net.NetworkCredential(username, password) ' log in = False ' will be disconnecting once done = True ' use binary comms = 9000 ' timeout after 9 seconds - very useful as ftp sometimes dies
'timeout (and the clock frequency of 20 secs) may need to be slower for dialup connections = System.Net.WebRequestMethods. ' start sending file
Dim fs As New FileStream(localFile, FileMode.Open) ' open local file
Dim filecontents(fs.Length) As Byte ' read into memory
fs.Read(filecontents, 0, fs.Length)
fs.Close() ' close the file
Dim requestStream As Stream = ' start ftp link
requestStream.Write(filecontents, 0, filecontents.Length) ' send it
requestStream.Close() ' close the link
PictureBox2.BackColor = Color.GreenYellow ' change the box to green to say worked ok
Label2.Text = "FTP Connected" ' text saying it connected
Catch 'can't connect
PictureBox2.BackColor = Color.Red ' box to red as no connection
Label2.Text = "FTP Upload Fail" ' text saying connection failed
End Try
End Sub
Sub FTPDownload(ByVal Filename As String)
' downloads remotefile to localfile
Dim localFile As String 'place to store data
Dim remoteFile As String ' filename is case sensitive this is really important
Const host As String = ""
Const username As String = "picaxe.0catch.com"
Const password As String = "picaxetester"
Dim URI As String
'localFile = "C:\" + Filename ' store in root directory but can change this
localFile = Filename ' so can add c:\ if need to define actual location
remoteFile = "/" + Filename ' added to remote ftp location
URI = host + remoteFile ' make up full address
Try
Dim ftp As System.Net.FtpWebRequest = CType(System.Net.FtpWebRequest.Create(URI), System.Net.FtpWebRequest) = New System.Net.NetworkCredential(username, password) ' log in = False ' will be disconnecting after finished = True ' binary mode = 9000 ' timeout after 9 seconds = System.Net.WebRequestMethods. ' download a file
' read in pieces as don't know how big the file is
Using response As System.Net.FtpWebResponse = CType(, System.Net.FtpWebResponse)
Using responseStream As IO.Stream = response.GetResponseStream
Using fs As New IO.FileStream(localFile, IO.FileMode.Create)
Dim buffer(2047) As Byte
Dim read As Integer = 0
Do
read = responseStream.Read(buffer, 0, buffer.Length) ' piece from ftp
fs.Write(buffer, 0, read) ' and write to file
Loop Until read = 0 ' until no more pieces
responseStream.Close() ' close the ftp file
fs.Flush() ' flush clear
fs.Close() ' and close the file
End Using
responseStream.Close() ' close it even if nothing was there
End Using
response.Close()
PictureBox2.BackColor = Color.GreenYellow ' green box as it worked
Label2.Text = "FTP Connected" ' and text saying it worked
End Using
Catch ' put error codes here
PictureBox2.BackColor = Color.Red ' red box as it didn't work
Label2.Text = "FTP Download Fail" ' and message to say this
End Try
End Sub
Private Sub Timer1_Tick(ByVal sender As System.Object, ByVal e As System.EventArgs) Handles Timer1.Tick
If ModifyFlag = False Then 'if user changed a byte then don't download
Label3.Text = "Downloading"
System.Windows.Forms.Application.DoEvents() ' so new label text displays
Call FTPDownload(TextBox3.Text) ' download remote file
Label3.Text = "Downloaded"
System.Windows.Forms.Application.DoEvents()
Call ReadRemoteFileToRegisters() ' save file numbers to the register array
Label3.Text = "Talking to picaxe"
System.Windows.Forms.Application.DoEvents()
Else
ModifyFlag = False 'reset the flag
End If
Call SerialTxRx() ' send to the picaxe and read it back
Label3.Text = "Sent and recieved from picaxe"
System.Windows.Forms.Application.DoEvents()
Call DisplayPicaxeRegisters()
Call SaveRegistersToLocalFile() ' save numbers to file
Label3.Text = "Uploading"
System.Windows.Forms.Application.DoEvents()
Call FTPUpload(TextBox3.Text) ' send back up to ftp site named as my name
Label3.Text = "Resting"
Call SaveFTPFilename() ' so reads in when restart
End Sub
Sub DisplayPicaxeRegisters()
Dim i As Integer
Dim registernumber As String
RichTextBox1.Multiline = True ' so can display more than one line in the text box
RichTextBox1.Clear() ' clear the text box
For i = 0 To 13
registernumber = Trim(Str(i)) ' trim off leading spaces
If i < 10 Then
registernumber = "0" + registernumber ' add 0 to numbers under 10
End If
RichTextBox1.AppendText(registernumber + " = " + Str(PicaxeRegisters(i)) + Chr(13))
Next ' chr(13) is carriage return so new line
End Sub
Private Sub Button1_Click(ByVal sender As System.Object, ByVal e As System.EventArgs) Handles Button1.Click
Dim i As Integer
' check out of range first
i = Val(TextBox1.Text)
If i < 0 Or i > 13 Then
TextBox1.Text = 0
End If
i = Val(TextBox2.Text)
If i < 0 Or i > 255 Then
TextBox2.Text = 0
End If
PicaxeRegisters(Val(TextBox1.Text)) = Val(TextBox2.Text) ' change the value
Call DisplayPicaxeRegisters() ' and refresh the display
ModifyFlag = True ' and next ftp link skip downloading
End Sub
Sub SaveRegistersToLocalFile() ' save register array in a local text file
Dim i As Integer
FileOpen(1, TextBox3.Text, OpenMode.Output) ' open the text file named in the text box
For i = 0 To 13
PrintLine(1, Str(PicaxeRegisters(i))) ' save 14 values
FileClose(1) ' close the file
End Sub
Sub ReadRemoteFileToRegisters() ' read local text file into the register array
Dim i As Integer
Dim LineOfText As String
Try
FileOpen(1, TextBox3.Text, OpenMode.Input) ' read the remote file name
For i = 0 To 13
LineOfText = LineInput(1) ' read in the 14 lines
PicaxeRegisters(i) = Val(LineOfText) ' convert text to values
FileClose(1)
Catch ex As Exception
FileClose(1) ' file doesn't exist so do nothing
End Try
End Sub
Sub ReadFTPFilename() ' so the name of the remote ftp file is the same next time this program is run
Dim LineOfText As String
Try
FileOpen(1, "FTPFilename.txt", OpenMode.Input) ' open the file
LineOfText = LineInput(1)
TextBox3.Text = LineOfText ' read the name
FileClose(1)
Catch ex As Exception
FileClose(1)
End Try
End Sub
Sub SaveFTPFilename()
FileOpen(1, "FTPFilename.txt", OpenMode.Output) ' save the remote ftp file name
PrintLine(1, TextBox3.Text)
FileClose(1)
End Sub
End Class
Step 9: Run the Program on Both PCs
Start running the program by clicking the green triangle at the top middle of the screen - the 'Start Debugging' button. Nothing will happen for 20 seconds and then the program will attempt to connect to the ftp server and will attempt to connect to the picaxe. The pictureboxes will either go red or green.
The ftp location is a free website and anyone can use this but you need to use a different ftp working filename (mine is DoctorAcula1) otherwise we could all end up with each other's data if we use the same filename! If you like you can eventually get your own ftp site - just change the ftp location, username and password in two places in the code from my 0Catch website. Most websites allow ftp. Multiple computers can access the same ftp file - the ftp fileserver sorts out in what order these happen. Occasionally there are data clashes or hangs and these seem to happen every 20 file reads. There is a timeout in the code if this happens so it returns no data rather than corrupted data.
Using a broadband connection with a 128kbs upload speed means a file upload takes about 3 seconds but sometimes up to 8 seconds, most of which is taken up in handshaking rather than data transfer. This sets the timer1 time of a minimum of about 20 seconds taking into account download, upload and chat with the picaxe. With very fast broadband you may be able to shorten the cycle time.
You can change a register manually within the VB program. If you do, the next timer cycle skips downloading from the ftp site and sends the new data to the picaxe and then reads it back and uploads it. The new data thus finds its way to all picaxes linked to this group. This is helpful for debugging and/or for linking PC software into the microcontroller hardware loop. Websites can also access the hardware loop using PERL script or similar to write a new file to the ftp site.
This screenshot was taken running the Tx chip, the switch was on and the register b0 = to 1 had been sent to the Rx chip which had then changed register b1 to 1 as well. The led was thus lit on both boards.
This is a trivial application but it is easy to turn on a 3.6Kw pump instead of a led. Some more ideas are at including linking picaxes via solar powered radio links. With radio links plus the internet it is possible for 'The Machines' to reach into many corners of the globe.
There are some ideas around on the picaxe forum about taking this idea further and replacing the PC and ftp site with dedicated webserver chips that plug straight into a router. Clearly this would decrease the power consumption of a link. If you are interested in further discussions please post on the Intstructable comments and/or on the picaxe forum.
Dr James Moxham
Adelaide, South Australia
Step 10: Screenshots of Code
By request, here are a series of screenshots of the vb.net code with all the formatting in place. This code was actually copied back of this instructable and the formatting reappeared automatically. It would be better to copy and paste the text than try to read these pictures but these will be useful if you are in an internet cafe and can't install vb.net.
Step 11: Screenshot2
Screenshot 2
Step 12: Screenshot 3
Screenshot 3
Step 13: Screenshot 4
Screenshot 4
Step 14: Screenshot 5
Screenshot 5
Step 15: Screenshot 6
Screenshot 6
I had a similar idea of remote control using an ftp server, I wrote a batch program that could read a file on the server and execute the command I uploaded. Good job.
1st of all:?
Thanks for the inspiring instructable. This FTP communication link opens up whole new avenues in communication with a micro. :)
Oh this is just a cool instructable, it inspires me to jump into microcontroler....it's the future...
can you buy these chips on Digi-key?
Hmm wouldn't it be cool if a master pic sends its code up and then all the salves connected to to reprogram them selves to the master. is this even possible?
I asked that question myself about a year ago. Finally got an answer! No, it can't be done with picaxe, you need something with a proper operating system. I'm working with 4 CP/M boards at the moment (N8VEM) connected by wireless. Yes, they can auto find each other, and yes, they can auto update new software and transfer it using wireless (xmodem). Working on getting a link between the wireless network and the internet - that is almost working now.
How do u complied hex files from vb2008?
ASAP!
Hey that sounds real cool you should keep us posted on that or maybe even write an instructable?
hi everyone i recently posted a instructable on how to program a picaxe automaticly from a bas file on an ftp server heres a link if your interested
What a great instructable! I thought with all that s/w there would be a problem or 2, but none. Everything worked great. I have Vista on one machine and XP on another. The Vista PC was short on com ports and an USB adaptor that is Prolific type was used on port 13. I will keep article and s/w for referance. Thanks for sharing this information. Bruce
Would this be possible to do with a Picaxe 28x chip?
Of course. And I don't think you would need to change the code either. Just pick which pin is going to be the input and which will be the output.
Thanks!
sigh* mac compatable?
I'm not an expert on Macs, do they have a real world interface (USB?). If not, maybe get an old PC (can I say that?). My kids have the fast computers in my house but much of my development is done on an old 300Mhz machine. Anything under 800Mhz is a giveaway item at computer stores (the pile is out the back mate, take what you want as we are taking it to the tip next week).
Can vb.net such as vb2008 will burned to pic16F628?
Macs do have USB (If I remember correctly, apple is even part of the USB-IF) and are able to use just about any USB-RS232 adaptor, As well as USB-Parallel adaptors. It is also very easy to create a program with the same (or better) functionality with the free, full version of Apple XCode (which comes on all OSX Discs, and you can download free at )
I've been using XCode 2.4 on tiger, and now 3.0 on leopard with a USB-RS232 adaptor, and OSX-AVR to do stuff with AVR microcontr
actually the basic stamp is mac compatable as long as you dont get the DAME SERIAL TO USB CABLE FROM RADIO SHACK!!!!!!!!!!!
Fabulous instructable Dr Acula. I've long dreamt of a remote control for my picaxe controlled brewing machine! () Thanks for giving me the steps to put that together.
That's a good idea. You can have the brewing kit at home and at work you could have a digital temp display and a manual override if it gets too hot or cold. Strobe lights could go off if the temp is out of range. And if the data link is broken because some nefarious individual breaks into the beer brewing shed and tries to steal the beer, you can rush home and catch them! Or at the very least the temp will stay in range and you won't end up with a toxic homebrew like my neighbour used to make.
Hmmm, I can see a case for a streaming web cam for shed security ;-) And with this great idea you could pump brewing water out of the stream and make the nectar in one of your remote sheds, only needing to walk up the hill when it is time to pull that pint. Cheers
Absolutely Amazing! I'll be shopping for parts right after Halloween. The hard ware is cheap, the software is done for me (Thanks). Have you thought of any practical applications? Or impractical for that matter? I see so much potential, but I'm not seeing the killer app. Anyone have useful or silly ideas on how to use this?
This may not have many practical applications exactly as it is because two computers need to be on to send messages. By the time a computer has been booted and the program started it probably would be easier to turn the thermostat on manually. But there are two things I think that are useful. The first is vb.net ftp code. With a few simple changes the file being sent can be index.htm which is the default home page for most websites. I have a vb6 program uploading pictures round my house to a website every 30 minutes and this can be easily ported to vb.net. The second application is to leave out the computer. There are a number of embedded web server chips available (eg Simplelan) which ought to be able to make direct chip-to-chip comms possible without needing the computer. Eg when away on holidays it could be possible to turn on sprinklers and lights at home. I'm sure there are devices available already that do this - the trick is to do it as cheaply as possible. I'll try to think of some more ideas...
Nice project! Don't know if it the fault of the poster or the posting software, but the code could be a lot easier to read if it had the structured indents.
VB.net will automatically put the indents and the colors back in when you paste in the text. One of the nifty new features of the newer versions of VB.
I'm aware of that feature of VB, but it is not the sort of thing one can easily do from an internet cafe if all one wants to do for now is read and follow the code. Nice project!
I have added 6 more steps with screenshots of the vb.net code. It is a bit hard to read due to the jpg compression but probably easier than the raw text. Hope this helps.
Clicking the "i" button will bring you to higher resolution version of the image ;) | http://www.instructables.com/id/Worldwide-microcontroller-link-for-under-20/ | CC-MAIN-2018-05 | refinedweb | 4,616 | 62.98 |
In February at the GoingNative conference, we promised to work on implementing more parts of the C++11 standard. We also made a commitment to progressively roll out these features on a faster cadence through out-of-band releases such as CTPs (customer technology previews).
We delivered!
The November 2012 CTP release is available immediately for download here. It contains the following C++11 additions:
- Variadic templates
- Uniform initialization and initializer_lists
- Delegating constructors
- Raw string literals
- Explicit conversion operators
- Default template arguments for function templates
For those eager to learn how to put these cool C++11 features into practice, Stephan Lavavej took the compiler out for a spin in part 6 of his ongoing Core C++ series on Channel 9. Check it out!
Installation and Usage
After downloading and installing the package, launch Visual Studio 2012, load your C++ project, and switch to the new compiler. We recommend creating a separate project configuration from the menu Build > Configuration Manager by duplicating your existing configuration, then following the steps below:
- Open Project Property Pages (Alt+F7 under the Visual C++ mappings)
- From the ‘General’ tab, change ‘Platform toolset’ from ‘Visual Studio 2012 (v110)’ to ‘Microsoft Visual C++ Compiler Nov 2012 CTP (v120_CTP_Nov)’ and close the Property Pages
- Launch a full rebuild of your project
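Equivalently, the toolset can be switched by editing the project file directly. The fragment below is a sketch rather than something taken from the release notes; the toolset string `v120_CTP_Nov` comes from the dialog text above, and the surrounding elements of your `.vcxproj` will differ:

```xml
<!-- Inside the per-configuration PropertyGroup of the .vcxproj -->
<PropertyGroup Condition="'$(Configuration)|$(Platform)'=='Debug|Win32'" Label="Configuration">
  <ConfigurationType>Application</ConfigurationType>
  <PlatformToolset>v120_CTP_Nov</PlatformToolset>
</PropertyGroup>
```

Duplicating the configuration first, as recommended above, keeps a working `v110` build alongside the experimental one.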
Important Notes
- This is a Customer Technology Preview and does not come with a ‘Go Live’ license.
- Visual Studio 2012 is required as a prerequisite for installing the package. If you don’t already have one, just download the free Desktop Express edition here.
- This package contains only the compiler, and does not yet come with an updated standard library to use the features (such as a std::vector initializer_list constructor).
- This version of the compiler is compatible with CRT 11.0 only and can be used as an alternative for the Visual C++ 2012 RTM compiler only.
- While a new Platform Toolset is provided for convenience of integrating the compiler as part of the Visual Studio 2012 build environment, the VS 2012 IDE, Intellisense, debugger, static analysis, and other tools remain essentially unchanged and do not yet provide support for these new C++11 features.
- For a list of known breaking changes introduced to support C++11, consult the documentation on the download site. It will always include the most up-to-date information.
We Want Your Feedback!
If you find any bugs (and a few likely escaped us!), please submit a report for Visual Studio via Microsoft Connect and use "[Codename Milan]" as a prefix in the bug title. You can also leave comments below and submit suggestions to Visual Studio UserVoice.
We’re very excited to have reached this milestone and hope you enjoy experimenting with this new compiler and report back. Remember, you can grab the CTP here and watch STL’s Core C++ Episode 6 of n video here.
Thank you so much. I love you.
Is there a define available to know what CTP I am compiling with?
For example, can I write code like:
#if CTP > 3
#define VARIADIC_TEMPLATES_SUPPORTED
#endif
How can this new version of the compiler be used from the command line? I downloaded and installed the upgrade, but command line compilation continues to reject code using e.g., variadic templates.
@A Nonymous
The command line compilation can be configured by updating the PATH and INCLUDE variables. After launching the VS2012 Developer Command Prompt, you need to:
set PATH=[drive:]\[Program Files]\Microsoft Visual C++ Compiler Nov 2012 CTP\bin;%PATH%
set INCLUDE=[drive:]\[Program Files]\Microsoft Visual C++ Compiler Nov 2012 CTP\include;%INCLUDE%
Everything should work after that.
Hope this helps,
Marian
Thanks very much for you C++ commitment.
When it is released in its final form, will this work with Windows XP targeting?
First of all: great work, you're delivering what you promised in terms of quicker incremental updates. Thank you for that!
Is there a known timeline (or approximate timeline) on the front end "red squiggles" support and (more importantly) library support for initialization lists on vectors etc? Just really looking forward to full support for these features! :)
Chris Blume: Test _MSC_FULL_VER. It is 170050727 for VC11 RTM and 170051025 for this CTP.
Will the CTP, when actually released, be link- and compile-compatible with the original VS2012?
Just wanted to make sure that if I build and develop using the newer compiler, including, presumably, an updated C++ standard library using these features, then as long as I don't use the newer features in my header files I can hand my headers and DLLs to someone using the original VS 2012 and expect no explosions?
Also, when released in production, is this going to be an optional download and install, or will it be more like an SP1 for Visual Studio 2012 which you would expect everyone to install and hence "upgrade" to?
Does this patch work with the express editions?
Uniform initialization:
#include <vector>
std::vector<int> a { 1, 2, 3, 4, 5}; // fails to compile
int a[] {1, 2, 3, 4, 5}; // compiles ok
what am I missing?
@belofn:
You missed the remark that the library has NOT been updated (it's just a compiler update currently)…
davidhunter22: We are going to break binary compatibility for the C++ Standard Library (as we have done between VS 2003, 2005, 2008, 2010, and 2012). We need to make significant changes which aren't compatible with preserving binary compatibility.
CTP appears to have problems with unnamed initializer lists in range-based for loops:
This compiles:
auto j = { 1, 2, 3, 4 };
for ( auto i : j ) {
std::cout << i << "\n";
}
But this doesn't…
for ( auto i : { 1, 2, 3, 4 } ) {
std::cout << i << "\n";
}
The above produces: Error 1 error C2668: 'std::begin' : ambiguous call to overloaded function
I assume that this is because std::begin on an initializer list is a library feature, and as we've been told, the library hasn't been updated for initializer lists yet.
Too bad I don't have time to test this; this is definitely a great update.
Let me extend the question: are the features listed here the only changes in this CTP, or will they be the only changes planned for VC12? If not, can you tell us what additional stuff you are planning? One can never have enough C++11 features :)
>>We delivered!
You delivered? If anything, you can say that you delivered only when the final (RTM) version is ready. For now the only thing you can say is: we've started working on it.
From STL – "We are going to break binary compatibility for the C++ Standard Library (as we have done between VS 2003, 2005, 2008, 2010, and 2012). We need to make significant changes which aren't compatible with preserving binary compatibility."
Well, if you carry on calling it VS 2012 and it breaks binary compatibility, I see confusion in the future! Hopefully mismatches will be detected at compile time and an error put out by the compiler. Note that I am all for this release policy and am really happy to see MS doing this; I would much rather have the possibility of a bit of confusion and new features than nothing for 2-3 years.
Any chance of anyone adding a new column, maybe "VS12++", here blogs.msdn.com/…/10209291.aspx, or is there a better place already than that?
BTW does the future official release of this already have an assigned _MSC_VER number?
"You delivered?"
Yes, they delivered. Obviously not what you're looking for, but they delivered what they said they delivered.
"If anything, you can say that you delivered only when the final (RTM) version will be ready. For now the only thing you can say is: We've started working on it."
No, if they could only say "We've started working on it" then there wouldn't be anything that you could use now, even if it is only for testing and development.
Now, some of the stuff they've done with VS2012 has been truly terrible (and much but not all has since been reversed), but let's not completely reject what they've done here. Yes, they still have work to do, but they put out a compiler update 2 months after the final release, something they've never even come close to doing before.
@Greg, the point is that when someone claims that s/he delivered something, the most common and accepted meaning is that the work has been completed. Of course we can argue about each word and the meaning of each phrase, but the main point is that they did not deliver anything except a CTP. What use is that? Would you really boast about it? Like, hey, we have an alpha version ready? And what?
To me, when we talk about software development, if someone states that s/he delivered something, it means that the project is as complete as this person was able/contracted to make it. Saying that someone delivered a CTP is similar to saying that you can download something for free, only in order to download it you have to register, and registration is not free (I know this isn't the best analogy but I'm sure you get my gist).
And in this particular example, the client expected the RTM version, and only when the RTM version is ready can they say that they delivered what they promised.
Unless the only thing they promised, and the only thing the client expected from them, was the CTP; in that case they are correct. But I'm pretty sure that most clients are really interested in the RTM, not the CTP, and delivering the RTM is what really matters. But hey, Greg, if you're happy with the way they talk to you and treat you, be my guest; swallow everything they tell you. I do not want to insult you personally, but I believe that your behaviour can actually do more harm than good. Why? Only by expecting high quality and high conformance do we as a developer community have a chance of getting high-quality tools. If, on the other hand, someone like you is happy with everything that is thrown at them and doesn't complain, why would those people try to improve anything? Why improve anything when the client is happy?
Think about it Greg.
>>but they put out a compiler update 2 months after the final release, something they've never even come close to doing before
And why do you think they did that? From the goodness of their hearts? I hope you realize that competition, yes, competition, not customers! made them do such dramatic move.
I hope you are aware of that.
Just realised that IntelliSense already appears to have some support for variadic templates – although it seems to crash the Visual C++ Package Server quite often (depending on the code)… :-(
I thought about it, doesn't change anything. I know exactly why they're doing it. I just don't think they deserve being blasted because what they delivered doesn't match your definition of delivered. There are plenty of things to get on them about. Their choice of words here isn't one of those things.
@Greg the whole point is that they AGAIN are trying to cover something with their not-so-clever words instead of being honest and transparent (as much as possible of course, no-one expects anything else but the standard). That's the whole point, and that's why I have problem with their attitude and approach to the customer – The pattern is: never tell the truth, tell something which resembles the truth and puts us (MS) in the bright spot. Because if we were to tell the actual truth people would get really pi**ed off.
I'm sure you remember the infamous //Build/, and the so-much-promise-loaded pre //Build/ with regards to C++ and its future at MS. They always follow the same pattern. Just observe them and you see how those puzzles starting to fit togheter.
And being happy at every little uncooked piece of scrap they throw at you? Most definitely doesn't make things any better.
If we – developers – want to have high quality tools and being threated seriously, we must be open and honest with them. We (I, and am sure many others) want high quality tools, we want to be threated as first class citizens, we DO NOT want to be promised anything and then having, mr Sutter telling me (us) that CX was necessary because it will look better in debugger. Ba*ls to that. If you read the thread in which mr Sutter made complete fool of himself, you'll see for yourself that, again, pattern has been repeated. Check if for yourself.
Regards
@davidhunter22: There is no such thing as VS12. There is VS2012, the whole package, that contains (among other things) VC11, a C++ compiler. This CTP adds / will add VC12 to VS2012. There is no reason calling it VC12++ or VS12++, you just need to understand what numbering means what.
I'm not talking about CX here. As I said, there are plenty of places to ding them.
Silicon Kiwi: That's a compiler bug, great catch. I've filed it as DevDiv#511531 "[Milan] "for (int x : { 44, 55, 66 })" doesn't compile in the presence of both <initializer_list> and <iterator>" in our internal database.
Regardless of whether std::initializer_list is being used explicitly or implicitly, it has member .begin()/.end(), so the compiler should always use that – it should never look for non-member begin()/end().
Dávid Róbert>?
That hasn't been implemented yet in VC. (The guest poster in question apparently tested his code with another compiler, hence the confusion. The post was apparently edited later to depict the private-and-unimplemented workaround – although with a typo, it should have said "File(const File&);".)
> Are these listed the only changes in this CTP
This list is exhaustive for this CTP.
> or will these be the only planned changes in VC12?
I would like to answer this question, but I am not allowed to.
davidhunter22> Any chance of anyone adding a new column
I will probably publish an updated table when Milan is closer to its final release.
> BTW does the future official release of this already have an assigned _MSC_VER number?
According to my knowledge, that hasn't been decided yet. Because the STL's #pragma detect_mismatch machinery is powered by _MSC_VER, I definitely want it to be updated, although I don't especially care about the precise value. (1729 would endlessly amuse me.)
atch666> If we – developers – want to have high quality tools and being threated seriously, we must be open and honest with them.
What would really help would be prompt and high-quality bug reports. (Prompt, as in trying out builds as soon as they're released. When bug reports arrive right after a CTP is released, we can actually do something about them. If people wait until an RC to report bugs, we can't fix anything less than kitten-vaporizing bugs. High-quality, as in self-contained test cases, preferably minimized as much as possible, ideally without using any libraries if you're trying to report a compiler bug.)
Last i checked, "open and honest" != "condescending and rude". Just saying.
@cHao Why is it that any kind of critique is being seen as condescending and rude? What's wrong with people? He didn't swear nor attack anyone, didn't pick a fight with anyone, used polite and neutral language – while he
was criticizing. This cannot be seen as rude and condescending. Grow up! When last time I've checked:
"being open with devs and transparent" != "telling fibs and fairy tails". And that's what MS is doing over and over again. Now they delivered and they are proud of it. CTP. CTP my as*. CTP is not what devs want, and not what is for devs important. Of course it is good sign that at least they did start to work on implementing more C++11 features, but having CTP and telling that they delivered? C'mon.
@Stephan T. Lavavej – I myself filled number of bugs – real bugs, not improvements suggestions, not anything else but real fleshy bugs. Most likely type of response from MS Connect is – won't fix, or by design. That's why I believe that you (MS not you personally) are not interested really in fixing bugs which were in your product for years now. Until such attitude on MS Connect won't change, why would anyone even bother to spent time, file the bug, describe it, provide use case, just to be told, at the end of prolonged and pointless in hindsight discussion, by some smart as* that – it won't fix. Just to be clear – I believe that you are REAL C++ guy with great passion for this beautiful langauge and I also believe that if it was up to you you'd have C++11 implemented long time ago. But unfortunately, things are as they are.
Regards
Hi, thanks for the heads up on C++11. Is there, by any chance, have you considered implementing C'11 updates?
I am trying to port some old C++ libraries to Windows Phone and Windows Store runtime. But apparently, where those libraries have dependencies on C (for I/O etc), its becomes tricky to port them into new runtimes. I am unable to find any OCR lib for WP and RT. The well-known terresact project depends on some base C libs for images IO (zlib, jpeg, tiff, bmp, png etc). Is there an easy way of porting them?
Or simply, is there a tool to convert win32 atl/lib/dll to winRT's one? Of course not! But its not impossible to make one.. well for guys at msft. :-)
Hey guys, fantastic improvements with this CTP! I look forward to putting them to use. When are you planning on updating the stl implementation to make use of the new language features? Also, I noted in the presentation and above you state these changes do not come with a 'Go Live' license – when will an updated version with this license be available?
@Knowing me knowing you, a-ha
Transparency is an aspiration. There is always room for improvement, but we must balance the need for transparency with other forces.
Customer bugs are taken seriously and triaged with care. Sometimes we do not / cannot include a full explanation for different bug resolutions — it does not mean the bug was not carefully considered.
I'd love to follow up with you. Ping me at ebattali@microsoft.com if you want to chat offline.
@Dennis is there a tool to convert win32 atl/lib/dll to winRT's one?
Not that I know of, but maybe someone in the community will create one :)
The document that describes the breaking changes, Visual C++ Nov 2012 CTP Breaking changes.docx, contains what appears to be a typo. It states under the heading "Breaking change #4: Uniform initializer":
Mitigation: Add an explicit narrowing conversion.
int i = 0;
char c = {(int)i};
I believe the cast in the initializer list should be an explicit narrowing conversion of '(char)'.
I am a bit confused now. This is *not* a "Visual Studio 2012 Update 1", aka "Quarterly Update" with promised XP support, right?
I understand that this is rather an early preview of the next VS120 C++ compiler.
Awesome news
Azarien, they have not yet announced when this will be released. I suspect that it will not be until after the first quarterly update with XP support.
Function pointers (including member function pointers) that use empty parameter packs do not compile. This example would obviously crash if run, but is the shortest code I have to demonstrate the issue:
template<class… Args>
void test(Args… args)
{
void (*func)(Args…) = nullptr;
func(args…);
}
test(); // fails to compile with error C2143: syntax error : missing ';' before '='
test(1); // compiles without error
Looky looky channel9.msdn.com/…/Herb-Sutter-at-Build-2012
C++ is secccccx! :D
Hi I am trying to make use of uniform initialization, but can't get it to work in one rather important case..
struct IntArray
{
int integers[2];
};
struct Object
{
Object(): Ints{{1,2}} //<– compiler error
{}
IntArray Ints;
};
int main()
{
IntArray WorksFine{{2,3}}; //<— this works
return 0;
}
I mostly want this for initializing and std::array in a constructor of a class
This code compiles on gcc 4.7.2, or on VC11 CTP with the braces replaced by parentheses. Is this a bug or is this related to the fact that the Standard Library has not been updated to use uniform initialization?
#include <iostream>
#include <string>
struct Meow
{
int i_;
Meow(std::string const& purr)
:
i_ { std::stoi(purr) } // parentheses work!
{
std::cout << i_ << "n";
}
};
int main()
{
Meow("1729");
}
Devin Doucette: I've filed DevDiv#513764 "[Milan] "void (*fp2)(Args…) = fp;" doesn't compile for empty Args" in our internal database.
froglegs: I've filed DevDiv#513777 "[Milan] "Bar() : foo{ { 11, 22, 33 } } { }" doesn't compile" in our internal database.
rhalbersma: That's a bug, and it's unrelated to the fact that we haven't updated the STL yet. The problem is triggered by the qualified call; if you use an unqualified call (with a using-directive for std or a using-declaration for std::stoi) it compiles. I've filed DevDiv#513785 "[Milan] "Meow() : i{Std::Stoi()} { }" doesn't compile" in our internal database.
I hope that template alias will also appear in the next update :-)
sadly, this didn't work.
devmaster.net/…/13973-c-template-metaprogramming-brainfck-interpreter
I tried to compile Qt 5, but it errored out almost immediately. Any ideas?
cl : Command line error D8027 : cannot execute 'C:Program Files (x86)Microsoft
Visual Studio 11.0VCBINamd64c2.dll'
NMAKE : fatal error U1077: '"C:Program Files (x86)Microsoft Visual C++ Compile
r Nov 2012 CTPbincl.EXE"' : return code '0x2'
This doesn't compile:
struct AB { int a; int b; };
AB ab() { return { 3, 141 }; }
Devin Doucette, rhalbersma: Your bugs have been fixed by JonCaves.
froglegs: Your bug's syntax error has been fixed by JonCaves. It is still suffering from a second problem (the compiler doesn't know how to do aggregate initialization when calling a function) which is being tracked separately.
Finally :)
Great work !
But still lack support for some needed features:
Alias templates.
Unrestricted unions.
Explicitly defaulted and deleted special member functions
Allow sizeof to work on members of classes without an explicit object
Thanks Stephan and JonCaves. That's great news. Do you know, or can you tell us, when the next CTP is likely to be released?
Please add more support for C++11 and will it be available for VC++ 2012 in the near future or is it for the next version? gcc supports more than 90% of all the C++11 standards: gcc.gnu.org/…/cxx0x.html so when are you guys gonna catch up? This is what happens when we depend on proprietary software when open source is the future
Devin Doucette: Sorry, I'm not allowed to talk about release dates.
Although release dates cannot be discussed, and i completely understand, my only question is when are we going to be able to get the bug fixes?
Thanks Stephan and JonCaves.
If you have free time, please, see what's going on here:
template <int … id>
inline void variadic_test()
{
int ar[] =
{
Loki::TL::IndexOf<
type_system::system_types,
typename Loki::TL::TypeAt<type_system::system_types, id>::Result // OK: id is used as const expr.
>::value …
};
typedef typename
type_tuple<typename Loki::TL::TypeAt<type_system::system_types, id>::Result …>::type type; // ERROR: id is not used as const expr. Why?
// invalid template argument for 'Loki::TL::TypeAt', expected compile-time constant expression
//error C3520: 'id' : parameter pack must be expanded in this context
}
Simply put. When applied ADL parameter pack is unpacked, but when we use pack in template parameters list explicit for call some function – we get error…
What is really going on…
template<typename … T>
void test_1(T … )
{
cout << __FUNCTION__ << endl;
}
template<typename … T>
void test_2()
{
cout << __FUNCTION__ << endl;
}
template <int … id>
inline void variadic_test()
{
test_1(TypeAt_<type_system::system_types, id>::Result() …); // OK: id is used as const expr, may be argument-dependent lookup helps us…
test_2<TypeAt_<type_system::system_types, id>::Result … >(); // ERROR: id is NOT used as const expr, expected compile-time constant expression;
// error C3520: 'id' : parameter pack must be expanded in this context;
// error C3546: '…' : there are no parameter packs available to expand
}
I submitted two bugs, but I think you QA doesn't know to test them with the new compiler as they requested projects and a video.
connect.microsoft.com/…/codename-milan-regression-warning-4554-erroneously-emitted
connect.microsoft.com/…/codename-milan-regression-making-function-operator-public-with-using-statement
Another issue not related to the new compiler is that I recently I had to move my 64 bit release builds to the native x64 compiler because link time code generation runs out of memory when using the default x86_amd64 one.
I expect the same thing to be an issue with the 32 bit build soon so what are the changes that you could start shipping amd64 cross compilers?
Just thrilled to see Microsoft continue down this road. Onwards to C++ 14 and 17!
I was under the impression that Uniform Initialization Syntax would allow you to allow you take this C style array init:
struct Wobble
{
char c;
short s;
int i;
float f;
double d;
};
Wobble ws[]=
{
{ 'a', 1, 2, 3.1f, 4.2 },
{ 'b', 5, 6, 7.3f, 8.4 },
{ 'c', 9, 10, 11.5f, 12.6 },
{ 'd' },
};
and now add a constructor to Wobble that this array init will call:
struct Wobble
{
Wobble( char _c = 'z', short _s = 32767, int _i = 65536, float _f = 3.14f, double _d = 2.718 )
: c( _c ), s( _s ), i( _i ), f( _f ), d( _d ) { cout << "+Wobble" << endl; }
char c;
short s;
int i;
float f;
double d;
};
However, adding the constructor makes the ws[] definition not compile with "error C2078: too many initializers". If I change ws[] to the following, it works:
Wobble ws[]=
{
{{ 'a', 1, 2, 3.1f, 4.2 }},
{{ 'b', 5, 6, 7.3f, 8.4 }},
{{ 'c', 9, 10, 11.5f, 12.6 }},
{{ 'd' }},
};
Having to add the extra {} for each array entry feels like a bug according to what I've read about the C++11 feature. Yes?
Even though static analysis wasn't advertised as supporting the new featureset, should any newly existing crashes with analysis be reported?
For instance:
#include <stdio.h>
int main()
{
const char* thing("thing");
int i = 0;
int& ri(i);
return 0;
}
Lines 4 and 6 cause the compiler to crash if /analyze is passed, with this error message:
fatal error C1001: An internal error has occurred in the compiler.
(compiler file 'f:ddvctoolscompilercxxfeslp1cxxgrammar.y', line 6298)
enotis: We really need self-contained test cases in order to investigate anything. (Snippets removed from their complete context are insufficient.) When you have such a test case, please file a bug through Microsoft Connect.
Erik Olofsson: Those responses were from Connect's first-tier support, who probably didn't understand "Codename Milan" in the title. (VC's QA team obviously would have.) Your bugs have now been assigned to the compiler team; internally they are DevDiv#520545 and 520547. In the future, it would probably help to make it really, really clear when you're reporting a bug against a preview release.
> I expect the same thing to be an issue with the 32 bit build soon so what are the changes that you could start shipping amd64 cross compilers?
Someone from the compiler team would have to answer that.
OII_Mark: Yep, that's a bug. I've filed it as DevDiv#524613 "[Milan] "Point points[] = { { 11, 22 }, { 33, 44 }, { 55, 66 } };" should compile when Point has an (int, int) constructor", with a walkthrough of the Standardese that says it should work.
Michael> Even though static analysis wasn't advertised as supporting the new featureset, should any newly existing crashes with analysis be reported?
Yes, please. (Our STL tests are currently happy under /analyze, but that's not 100% exhaustive.) I've confirmed that your bug still repros and I've filed it as DevDiv#524661 "[Milan] _AST_NEEDS_WORK_ assertion (grammar.y 6298) triggered by "int& ri(i);"". (The "retail" compiler just ICEs, but I can build the "checked" compiler that emits assertions when it gets confused, hence the title.)
@Aaron Kaluszka
You need to set proper environment variables:
set PATH=C:Program Files (x86)Microsoft Visual C++ Compiler Nov 2012 CTPbin;C:Program Files (x86)Microsoft Visual Studio 11.0Common7IDE;C:Program Files (x86)Microsoft Visual Studio 11.0VCbin;%PATH%
set INCLUDE=C:Program Files (x86)Microsoft Visual C++ Compiler Nov 2012 CTPinclude;C:Program Files (x86)Microsoft Visual Studio 11.0VCinclude;%INCLUDE%
set LIB=C:Program Files (x86)Microsoft Visual Studio 11.0VClib;C:Program Files (x86)Windows Kits8.0Libwin8umx86;%LIB%
It's not clear to me, does the Visual Studio 2012 update 1 contain the November CTP release announced at Build?
The CTP fails to compile this code involving variadic templates and decltype:
int foo (int)
{return 0;}
template <class… A>
auto bar (A… a) -> decltype (foo (a…))
{return foo (a…);}
int main ()
{return bar (1);}
I've filed a bug report. See:
[Codename Milan] Specialization fails for variadic function template (C2893)
connect.microsoft.com/…/772977
SEB.F: It doesn't. VS Updates are production-quality, while CTPs are alpha-quality.
Quick question, why does this fail to compile?
template<typename… P>
struct Foo1
{
template<P…> // error C3522: 'P' : parameter pack cannot be expanded in this context
// error C3520: 'P' : parameter pack must be expanded in this context
static void Bar(P… a){
// Empty
}
};
Is this in error or? g++ compiles it fine.
Thanks,
-Ryan
Hi everyone
If anyone wants to install the 'Visual Studio 2012 Update 1':
blogs.msdn.com/…/visual-studio-2012-update-1-now-available.aspx
Plz remember that it MUST be installed BEFORE install this 'November CTP' version. Otherwise the installation of Upd1 will encounter problems and the application made from your project won't running under the Windows XP even if you choiced the vc110_xp as the compiler for your project.
@Ryan Wright: You can report this at the
connect.microsoft.com/visualstudio
Never forgot to use “[Codename Milan]” as a prefix in the title.
Thanks for the great CTP.
The c++x11 I'm still hanging out for is: noexcept
Keep Up the great work!
////////////////////// This code does not compile.. isn't this a compiler bug?
int foo( int x, int y ) { return x + y; }
template< class… _Args >
auto call_foo( _Args… args ) -> decltype( foo( args… ) )
{ return foo( args… ); }
// some codes…
call_foo( 10, 20 ); // error C2893: Failed to specialize function template 'unknown-type call_foo(_Args…)'
Sorry.. it was already reported..
connect.microsoft.com/…/772977
///////// while finding workaround for above problem, encountered another one…
int foo( int, int ) { return 0; }
template< class… _Args >
struct get_foo_result
{
typedef decltype( foo( *static_cast< _Args* >( nullptr )… ) ) Type; // error C3520: '_Args' : parameter pack must be expanded in this context
};
// … some code
get_foo_result< int, int >::Type t;
/////////////// How can I expand the parameter pack which is already expanded ?
/////////////// This compiles fine in gcc 4.6.1
@Cameron – re: `noexcept`, below is what I have been using in the interim. It works for the common case, but not for noexcept(false) (which hopefully is very rare in usage anyway)
//
// Until C++11 support
//
#define _ALLOW_KEYWORD_MACROS // Required or xkeycheck.h will barf
#define noexcept throw()
@Ian Jirka:
Shouldn't that be declspec(nothrow)? | https://blogs.msdn.microsoft.com/vcblog/2012/11/02/announcing-november-ctp-of-the-c-compiler-now-with-more-c11/ | CC-MAIN-2018-39 | refinedweb | 5,324 | 62.07 |
Nepomuk
#include <Nepomuk/ResourceManager>
Detailed Description
The ResourceManager is the central Nepomuk configuration point.
Use the initialized() method to check the availabity of the Nepomuk system. Signals nepomukSystemStarted() and nepomukSystemStopped() can be used to enable or disable Nepomuk-specific GUI elements.
Definition at line 55 of file resourcemanager.h.
Member Function Documentation
Retrieve a list of all resource managed by this manager.
- Warning
- This list will be very big. Usage of this method is discouraged. Use Query::QueryServiceClient in combination with an empty Query::Query instead.
- Since
- 4.3
Retrieve a list of all resources of the specified type.
This includes Resources that are not synced yet so it might not represent exactly the state as in the RDF store.
- Warning
- This list can be very big. Usage of this method is discouraged. Use Query::QueryServiceClient in combination with a Query::Query containing one Query::ResourceTypeTerm instead.
Retrieve a list of all resources that have property uri defined with a value of v.
This includes Resources that are not synced yet so it might not represent exactly the state as in the RDF store.
- Parameters
-
- Warning
- This list can be very big. Usage of this method is discouraged. Use Query::QueryServiceClient in combination with a Query::Query containing one Query::ComparisonTerm instead.
- Deprecated:
- Use allResourcesWithProperty( const QString& type )
Create a new ResourceManager instance which uses model as its override model.
This allows to use multiple instances of ResourceManager at the same time. Normally one does not need this method as the singleton accessed via instance() should be enough.
- Parameters
-
- Since
- 4.3
- Deprecated:
- Use the Resource constructor directly.
Creates a Resource object representing the data referenced by uri. The result is the same as from using the Resource::Resource( const QString&, const QString& ) constructor with an empty type.
In KDE 4.3 support for multiple ResourceManager instances has been introduced.
To keep binary compatibility both the constructor's and destructor's access visibility could not be changed. Thus, instead of deleting a custom ResourceManager instance the standard way, one has to call this method or use QObject::deleteLater.
- Since
- 4.3
Whenever a problem occurs (like for example failed resource syncing) this signal is emitted.
- Parameters
-
Generates a unique URI that is not used in the store yet. This method ca be used to generate URIs for virtual types such as Tag.
Initialize the Nepomuk framework.
This method will initialize the communication with the local Nepomuk-KDE services, ie. the data repository. It will trigger a reconnect to the Nepomuk database.
There is normally no reason to call this method manually except when using multiple threads. In that case it is highly recommended to call this method in the main thread before doing anything else.
- Returns
- 0 if all necessary components could be found and -1 otherwise.
Retrieve the main data storage model.
Emitted once the Nepomuk system is up and can be used.
- Warning
- This signal will not be emitted if the Nepomuk system is running when the ResourceManager is created. Use initialized() to check the status.
- Since
- 4.4
Remove the resource denoted by uri completely.
This method is just a wrapper around Resource::remove. The result is the same.
This signal gets emitted whenever a Resource changes due to a sync procedure.
Be aware that modifying resources locally via the Resource::setProperty method does not result in a resourceModified signal being emitted.
- Parameters
-
NOT IMPLEMENTED YET. | http://api.kde.org/4.x-api/kdelibs-apidocs/nepomuk/html/classNepomuk_1_1ResourceManager.html | CC-MAIN-2014-15 | refinedweb | 569 | 51.75 |
Today's lab will focus on random numbers, indefinite loops, and more on command-line scripts.
Software tools needed: web browser and Python IDLE programming environment with the pandas, numpy, and folium package installed.
import randomThe random library includes a function that's similar to range, called randrange. As with range, you can specify the starting, stopping, and step values, and the function randrange chooses a number at random in that range. Some examples:
Notice that our turtle turns a degrees, where a is chosen at random between 0 and 359 degrees. What if your turtle was in a city and had to stay on a grid of streets (and not ramble through buildings)? How can you change the randrange() to choose only from the numbers: 0,90,180,270 (submit your answer as Problem #10).
We have been using for-loops to repeat tasks a fixed number of times (often called a definite loop). There is another type of loop that repeats while a condition holds (called a indefinite loop). The most common is a while-loop.
while condition: command1 command2 ... commandNWhile the condition is true, the block of commands nested under the while statement are repeated.
For example, let's have a turtles continue their random walk as long as their x and y values are within 50 of the starting point (to keep them from wandering off the screen):
Indefinite loops are useful for simulations (like our simple random walk above) and checking input. For example, the following code fragment:
age = int(input('Please enter age: ')) while age < 0: print('Entered a negative number.') age = int(input('Please enter age: ')) print('You entered your age as:', age)will ask the user for their age, and continue asking until the number they entered is non-negative (example in pythonTutor).
In Lab 3, we introduced the shell, or command line, commands to create new directories (folders) and how to list the contents of those folders (and expanded on this with relative paths in Lab 4 and absolute paths in Lab 5). In Lab 6, we wrote a simple script that prints: Hello, World. We can write scripts that take the shell commands we have learned and store them in a file to be used later.
It's a bit archaic, but we can create the file with the vi editor. It dates back to the early days of the Unix operating system. It has the advantage that it's integrated into the Unix operating system and guaranteed to be there. It is worth trying at least once (so if you're in a bind and need to edit Unix files remotely, you can), but if you hate it (which is likely), use the graphical gEdit (you can find it via the search icon on the left hand menu bar).
Let's create a simple shell script with vi:
#Your name here #October 2017 #Shell script: creates directories for project mkdir projectFiles cd projectFiles mkdir source mkdir data mkdir results
chmod +x setupProject(changes the "mode" to be executable).
./setupProject
Troubles with vi? It's not intuitive-- here's more on vi:
When done, see the. | https://stjohn.github.io/teaching/csci127/s19/lab10.html | CC-MAIN-2021-43 | refinedweb | 526 | 67.99 |
{
public static void main (string args [ ])
{
system.out.println ("I am a Beginners");
}
}
hi guys what i wrote in above is it correct? ...am just starting programming and please help void main (string args [ ])
{
system.out.println ("I am a Beginners");
}
}
hi guys what i wrote in above is it correct? ...am just starting programming and please help me out...
You tell us...does it compile? Does it work properly? Suggested reading:You tell us...does it compile? Does it work properly? Suggested reading:hi guys what i wrote in above is it correct?
If you don't have PC then how are you posting ? just curious
You should invest in a pc
To gain the better understanding of concepts, you must implement what you read. By the why, you can still predict if the given code will compile and run successfully or not, and that is only possible if you have enough knowledge of development.
In this case, you program will never compile. There are many errors (Syntax errors). So, it's recommended to invest to buy a computer for your experience of development. Good Luck
funny thing is only after getting my phone am interested in coding an app for my phone ....damn money!!!
See the following link: Lesson: A Closer Look at the "Hello World!" Application (The Java™ Tutorials > Getting Started)\
And if you want to learn to program, as the others have said - you need to invest in a computer.
Dear......
What Others have said is true.......unless you implement the code.......you wont be able to understand how it works or why it works the way it works......
Anyway.....answer to your question is this......
-------------------------------------------------------------------------
public static void main (string args [ ]){
system.out.println ("I am a Beginner");
}
---------------------------------------------------------------------------------------
I think that two parenthesis were out of order other than that one spealing mistake........
and everything else is correct......
------------------------------------------------
Go pursue Java and do it on a computer.
Regards,
Ashishkumar.
@ashish: You know your program has many syntax errors?
Use code tags and proper formatting.
Mr. 777 is correct.Mr. 777 is correct.
It should be more along the lines of:
public class whateverYouWantToCallThis
{
tabpublic static void main(String args[])
tab{
tabtabSystem.out.println("I am a beginner.");
tab}
}
...Although I'm also a beginner, so I may be wrong.
Anyway, you were mostly right! Just make sure to pay attention to your cases since java is a case-sensitive language. Good luck with the learning, dude.Anyway, you were mostly right! Just make sure to pay attention to your cases since java is a case-sensitive language. Good luck with the learning, dude.
Last edited by moonchild; November 15th, 2011 at 11:53 AM. | http://www.javaprogrammingforums.com/java-theory-questions/11615-help-beginners.html | CC-MAIN-2014-35 | refinedweb | 449 | 69.89 |
On Wed, Nov 03, 2004 at 08:17:56AM -0500, Albert Cahalan wrote: > If __ARMEB__ is not compiler-defined, something is > broken. I didn't know __ARMEB__ was set by compiler, I though it was kernel configuration stuff. anyway, that make the problem on ppc then. > Next time, start with i386. If Linus won't go for it, > then don't screw around adding __KERNEL__ to other places. > Stuff breaks, needlessly. I'd go so far as to say that > the Linux API has been broken on ppc. you could use even harder words; that's probably the end of world ... :) Anyway, Benjamin a trivial patch follow, can you apply it ? (2.6.9 and 2.6.10 seems equally bitten by that) Unless there's something we don't know about the __KERNEL__ protection. -- Tab --- unaligned.h.orig 2004-11-03 18:37:24 +0100 +++ unaligned.h 2004-11-03 18:37:29 +0100 @@ -1,4 +1,3 @@ -#ifdef __KERNEL__ #ifndef __PPC_UNALIGNED_H #define __PPC_UNALIGNED_H @@ -15,4 +14,3 @@ #define put_unaligned(val, ptr) ((void)( *(ptr) = (val) )) #endif -#endif /* __KERNEL__ */ | https://lists.debian.org/debian-powerpc/2004/11/msg00066.html | CC-MAIN-2016-22 | refinedweb | 179 | 76.72 |
The function definitions provided by Gnulib (
.c code) are meant
to be compiled by a C compiler. The header files (
.h files),
on the other hand, can be used in either C or C++.
By default, when used in a C++ compilation unit, the
.h files
declare the same symbols and overrides as in C mode, except that functions
defined by Gnulib or by the system are declared as ‘extern "C"’.
It is also possible to indicate to Gnulib to provide many of its symbols
in a dedicated C++ namespace. If you define the macro
GNULIB_NAMESPACE to an identifier, many functions will be defined
in the namespace specified by the identifier instead of the global
namespace. For example, after you have defined
#define GNULIB_NAMESPACE gnulib
at the beginning of a compilation unit, Gnulib's
<fcntl.h> header
file will make available the
open function as
gnulib::open.
The symbol
open will still refer to the system's
open function,
with its platform specific bugs and limitations.
The symbols provided in the Gnulib namespace are those for which the
corresponding header file contains a
_GL_CXXALIAS_RPL or
_GL_CXXALIAS_SYS macro invocation.
The benefits of this namespace mode are:
openhas to be overridden, Gnulib normally does
#define open rpl_open. If your package has a class with a member
open, for example a class
foowith a method
foo::open, then if you define this member in a compilation unit that includes
<fcntl.h>and use it in a compilation unit that does not include
<fcntl.h>, or vice versa, you will get a link error. Worse: You will not notice this problem on the platform where the system's
openfunction works fine. This problem goes away in namespace mode.
gnulib::openin your code, and you forgot to request the module ‘open’ from Gnulib, you will get a compilation error (regardless of the platform).
The drawback of this namespace mode is that the system provided symbols in
the global namespace are still present, even when they contain bugs that
Gnulib fixes. For example, if you call
open (...) in your code,
it will invoke the possibly buggy system function, even if you have
requested the module ‘open’ from gnulib-tool.
You can turn on the namespace mode in some compilation units and keep it turned off in others. This can be useful if your package consists of an application layer that does not need to invoke POSIX functions and an operating system interface layer that contains all the OS function calls. In such a situation, you will want to turn on the namespace mode for the application layer—to avoid many preprocessor macro definitions—and turn it off for the OS interface layer—to avoid the drawback of the namespace mode, mentioned above. | http://www.gnu.org/software/gnulib/manual/html_node/A-C_002b_002b-namespace-for-gnulib.html | CC-MAIN-2014-10 | refinedweb | 467 | 61.56 |
You've already seen how Core Data takes a huge amount of work away from you, which is great because it means you can focus on writing the interesting parts of your app rather than data management. But, while our current project certainly works, it's not going to scale well. To find out why open the Commit+CoreDataClass.swift file and modify its class to this:
public class Commit: NSManagedObject { override init(entity: NSEntityDescription, insertInto context: NSManagedObjectContext?) { super.init(entity: entity, insertInto: context) print("Init called!") } }
When you run the program now you'll see "Init called!" in the Xcode log at least a hundred times - once for every
Commit object that gets pulled out in our
loadSavedData() method. So what if there are 1000 objects? Or 10,000? Clearly it's inefficient to create a new object for everything in our object graph just to load the app, particularly when our table view can only show a handful at a time.
Core Data has a brilliant solution to this problem, and it's called
NSFetchedResultsController. It takes over our existing
NSFetchRequest to load data, replaces our
commits array with its own storage, and even works to ensure the user interface stays in sync with changes to the data by controlling the way objects are inserted and deleted.
No tutorial on Core Data would be complete without teaching
NSFetchedResultsController, so that's the last thing we'll be doing in this project. I left it until the end because, although it's very clever and certainly very efficient,
NSFetchedResultsController is entirely optional: if you're happy with the project as it is, you're welcome to skip over this last chapter.
First, add a new property to
ViewController that will hold the fetched results controller for commits:
var fetchedResultsController: NSFetchedResultsController<Commit>!
We now need to rewrite our
loadSavedData() method so that the existing
NSFetchRequest is wrapped inside a
NSFetchedResultsController. We want to create that fetched results controller only once, but retain the ability to change the predicate when the method is called again.
Before I show you the code, there are three new things to learn. First, we're going to be using the
fetchBatchSize property of our fetch request so that only 20 objects are loaded at a time. Second, we'll be setting the view controller as the delegate for the fetched results controller – you'll see why soon. Third, we need to use the
performFetch() method on our fetched results controller to make it load its data.
Here's the revised
loadSavedData() method:
func loadSavedData() { if fetchedResultsController == nil { let request = Commit.createFetchRequest() let sort = NSSortDescriptor(key: "date", ascending: false) request.sortDescriptors = [sort] request.fetchBatchSize = 20 fetchedResultsController = NSFetchedResultsController(fetchRequest: request, managedObjectContext: container.viewContext, sectionNameKeyPath: nil, cacheName: nil) fetchedResultsController.delegate = self } fetchedResultsController.fetchRequest.predicate = commitPredicate do { try fetchedResultsController.performFetch() tableView.reloadData() } catch { print("Fetch failed") } }
Because we're setting
delegate, you'll also need to make
ViewController conform to the
NSFetchedResultsControllerDelegate protocol, like this:
class ViewController: UITableViewController, NSFetchedResultsControllerDelegate {
That was the easy part. However, when you use
NSFetchedResultsController, you need to use it everywhere: that means it tells you how many sections and rows you have, it keeps track of all the objects, and it is the single source of truth when it comes to inserting or deleting objects.
You can get an idea of what work needs to be done by deleting the
commits property: we don't need it any more, because the fetched results controller stores our results. Immediately you'll see five errors appear wherever that property was being touched, and we need to rewrite all those instances to use the fetched results controller.
Second, replace the
numberOfSections(in:) and
numberOfRowsInSection methods with these two new implementations:
override func numberOfSections(in tableView: UITableView) -> Int { return fetchedResultsController.sections?.count ?? 0 } override func tableView(_ tableView: UITableView, numberOfRowsInSection section: Int) -> Int { let sectionInfo = fetchedResultsController.sections![section] return sectionInfo.numberOfObjects }
As you can see, we can read the
sections array, each of which contains an array of
NSFetchedResultsSectionInfo objects describing the items in that section. For now, we're just going to use that for the number of objects in the section.
Third, find this line inside the
cellForRowAt method:
let commit = commits[indexPath.row]
And replace it with this instead:
let commit = fetchedResultsController.object(at: indexPath)
As you can see, fetched results controllers use index paths (i.e., sections as well as rows) rather than just a flat array; more on that soon!
You’ll get another error inside the
didSelectRowAt method, because that was reading from the
commits array. Replace the offending line with this:
vc.detailItem = fetchedResultsController.object(at: indexPath)
The final two errors are in the
commit method, where deleting items happens. And this is where things get more complicated: we can't just delete items from the fetched results controller, and neither can we use
deleteRows(at:) on the table view. Instead, Core Data is much more clever: we just delete the object from the managed object context directly.
You see, when we created our
NSFetchedResultsController, we hooked it up to our existing managed object context, and we also made our current view controller its delegate. So when the managed object context detects an object being deleted, it will inform our fetched results controller, which will in turn automatically notify our view controller if needed.
So, to delete objects using fetched results controllers you need to rewrite the
commit method to this:
override func tableView(_ tableView: UITableView, commit editingStyle: UITableViewCellEditingStyle, forRowAt indexPath: IndexPath) { if editingStyle == .delete { let commit = fetchedResultsController.object(at: indexPath) container.viewContext.delete(commit) saveContext() } }
As you can see, that code pulls the object to delete out from the fetched results controller, deletes it, then saves the changes – we no longer touch the table view here.
That being said, we do need to add one new method that gets called by the fetched results controller when an object changes. We'll get told the index path of the object that got changed, and all we need to do is pass that on to the
deleteRows(at:) method of our table view:
func controller(_ controller: NSFetchedResultsController<NSFetchRequestResult>, didChange anObject: Any, at indexPath: IndexPath?, for type: NSFetchedResultsChangeType, newIndexPath: IndexPath?) { switch type { case .delete: tableView.deleteRows(at: [indexPath!], with: .automatic) default: break } }
Now, you might wonder why this approach is an improvement – haven't we just basically written the same code only in a different place? Well, no. That new delegate method we just wrote could be called from anywhere: if we delete an object in any other way, for example in the detail view, that method will now automatically get called and the table will update. In short, it means our data is driving our user interface, rather than our user interface trying to control our data.
Previously, I said "Using attribute constraints can cause problems with
NSFetchedResultsController, but in this tutorial we're always doing a full save and load of our objects because it's an easy way to avoid problems later." It's time for me to explain the problem: attribute constraints are only enforced as unique when a save happens, which means if you're inserting data then an NSFetchedResultsController may contain duplicates until a save takes place. This won't happen for us because I've made the project perform a save before a load to make things easier, but it's something you need to watch out for in your own code.
If you run the app now, you'll see "Init called!" appears far less frequently because the fetched results controller lazy loads its data – a significant performance optimization.
Before we're done with
NSFetchedResultsController, I want to show you one more piece of its magic. You've seen how it has sections as well as rows, right? Well, try changing its constructor in
loadSavedData() to be this:
fetchedResultsController = NSFetchedResultsController(fetchRequest: request, managedObjectContext: container.viewContext, sectionNameKeyPath: "author.name", cacheName: nil)
The only change there is that I've provided a value for
sectionNameKeyPath rather than
nil. Now try adding this new method to
ViewController:
override func tableView(_ tableView: UITableView, titleForHeaderInSection section: Int) -> String? { return fetchedResultsController.sections![section].name }
If you run the app now, you'll see the table view has sections as well as rows, although the commits inside each section won’t match the author name you can see. The reason for this discrepancy is that we’re still sorting by date – go to
loadSavedData() and change its sort descriptor to this:
let sort = NSSortDescriptor(key: "author.name", ascending: true)
Much better!
So,
NSFetchedResultsController is not only faster, but it even adds powerful functionality with a tiny amount of code – what's not to. | https://www.hackingwithswift.com/read/38/10/optimizing-core-data-performance-using-nsfetchedresultscontroller | CC-MAIN-2020-16 | refinedweb | 1,454 | 52.6 |
How can I set a variable to a random number from X to Y?
X being the minimum limit of the randomization.
Y being the maximum limit of the randomization.
Printable View
How can I set a variable to a random number from X to Y?
X being the minimum limit of the randomization.
Y being the maximum limit of the randomization.
hmmm is it bad that I don't understand the FAQ on it?
I mean it looks way too complicated...
why can't it be like Random(10-20) lol.
I guess if someone got into some details, of how it worked, it would help. I also have no idea what a "seed" is.
Thanks.
Without a seed the computer would issue the same set of random numbers every time the program runs.
A pseudo-random number generator is some sort of deterministic recurrence relation (in general). The type commonly implemented for rand( ) is called a linear congruential generator (LCG), and has the form:
x[i+1] = a * x[i] + c (mod m)
Now, of course you must have a starting point, an x value that is fixed. So, x[0] is the initial value, and is therefore the 'seed' value.
That is very interesting. Where can I find more information about this? Maybe I will be able to beat the kino machines :)
guys I'm a newbie lol... I don't even understand pointers that well yet. Those answers are a bit too technical lol.
So lets see, a seed is a starting point in a random range?
I can't seem to figure out the syntax for rand()
I've tried this:
randomnumber = rand(10)
in hopes that this would result in a random number 0-10. Instead this gave me the following compilation error:
Decision1.cpp(18) : error C2661: 'rand' : no overloaded function takes 1 arguments
secondly, what is an overloaded function?
Sorry guys for being so slow, I know it's annoying. I really appreciate anyone that helps. Thanks again.
The syntax for rand is simply:
int r = rand( );
This returns a random number on whatever the range of the modulus (m in the above equation is). That is, the range is 0...m-1. You can easily use modular arithmetic to get it is a range. Here's a quick demo function:
Should work, but I'm a bit tired, so no guarantees.Should work, but I'm a bit tired, so no guarantees.Code:
int random(int low, int high)
{
return rand() % (high - low + 1) + low ; // *edit* Fixed hideous error
}
The high order bits in these are considered to be more random than the low order bits, so you could exploit that by getting a random floating point number (rand( ) / RAND_MAX, I believe), and then if it falls within 0...1/n it is 0, if it is in 1/n...2/n, then it is 1, etc.
As for some sources on this stuff, The Art of Computer Programming Vol 2 is good. Numerical Recipes in C (or in C++) also has a good section ( ). CounterPane Labs has some stuff on cryptographically secure PRNGs ( ). Other than that, I'd say search Google (linear congruential generators, pseudo random number generators, Mersenne twister).
Cheers
randomnumber = rand() % 10;
You should #include <cstdlib>
rand() returns a pseudorandom number from 0 to some maximum, inclusive.
Modulo 10 would bring this to a 0 to 10-1=9 range, albeit with a possible small bias.
Function overloading would be having functions of the same name, but differentiated in some way, e.g. the arguments passed.
You get that error since the compiler tries to look for a function called rand, but with an argument. It doesnt find any though.
I suggest you read tutorials, search the web etc a bit more.
You probably want to use the rand function in conjunction with % (modulus).
I think rand() returns a number between 0 and max integer.
So number=rand() % 11; // returns a number between 0 and 10
say rand() returned 22 then
22 / 11 = 2 remainder 0
23 / 11 = 2 remainder 1
24 / 11 = 2 remainder 2
.
.
.
32 / 11 = 2 remainder 10
33 / 11 = 2 remainder 0 // hence the range 0-10 is established
number =rand() % (High-Low+1)+Low;
as in the tutorial returns a number between high and low and assigns it to number
btw an overloaded function is a function with the same name but different signature, different type of parameter
for instance you can hav
int funtion(int number);
or
int function(double number);
the function is overloaded to handle double or int parameters and may handle things different based on the parameters.
the mod operator returns the remainder after devision
Hmm, it doesn't seem to be working. I have no idea what the math is doing, but I pasted it in my code anyway as a function, and called on that function in my mainline.
For those who care, here is the program:
The program compiles with no errors or warnings. The problem is, no matter what number I enter, it always says I have found the magic number! Heh, this is so frustrating! lolThe program compiles with no errors or warnings. The problem is, no matter what number I enter, it always says I have found the magic number! Heh, this is so frustrating! lolCode:
// IF statements
// blah!!!!
#include <iostream>
using namespace std;
int random(int low, int high)
{
return rand() % (high - low) + low + 1;
}
int main()
{
// Initialization of Variables
int userpick = 0;
int randomnumber = 0;
// Prompt User Input & Processing
cout << "Enter a number 1-10: ";
cin >> userpick;
cout << endl;
randomnumber = random(1,10);
// IF Statement
if (userpick = randomnumber)
cout << "Great! You found the magic number!\n\n";
else
cout << "Sorry, wrong number!\n\n";
// Conclusion Statements
system("pause");
return 0;
}
that's where your problem lies.that's where your problem lies.Code:
if (userpick = randomnumber)
Err... well, there was a small error in my random function (its fixed in the original post though). The 1 is on the wrong side of the parens. Your problem is you are using the assignment ( = ) operator, not the equality test ( == ) operator, and unless the assignment is to the value 0, it will always return true (i.e. non-zero).
You'll probably want to do this before calling rand().
Code:
srand(time(NULL));
Quote:
Originally posted by anonytmouse
You'll probably want to do this before calling rand().
Code:
srand(time(NULL));
I have no idea what this is doing. What does time have to do with getting a random integer? what is with the syntax? and why null? | http://cboard.cprogramming.com/cplusplus-programming/47338-random-integer-printable-thread.html | CC-MAIN-2013-48 | refinedweb | 1,105 | 65.42 |
Dear all,
I am a computational physicist and a novice Julia programmer.I have written several simple codes using Julia, although mostly they still look like Fortran
.
With the release of Julia 0.7.0-alpha, I have been trying to update my code.
So far so good until I came to this serious problem (IMHO) which make me to hard restart my laptop several times.
I am sorry because I could not present the minimal code that reproduce the problem I have faced because I could not risk to do any more hard restart again on my laptop.
I will just give the link to github of the problematic code below.
The code is my effort to refactor PyQuante.jl code of Rick Mueller.
Using Julia 0.6.3 the code runs fine. However, when I use 0.7.0-alpha, I got the following error:
ERROR: LoadError: LoadError: error in method definition: function Base.push! must be explicitly imported to be extended Stacktrace: [1] top-level scope [2] include at ./boot.jl:314 [inlined] [3] include_relative(::Module, ::String) at ./loading.jl:1071 [4] include(::Module, ::String) at ./sysimg.jl:29 [5] include(::String) at ./client.jl:393 [6] top-level scope [7] include at ./boot.jl:314 [inlined] [8] include_relative(::Module, ::String) at ./loading.jl:1071 [9] include(::Module, ::String) at ./sysimg.jl:29 [10] exec_options(::Base.JLOptions) at ./client.jl:267 [11] _start() at ./client.jl:427 in expression starting at /home/efefer/WORKS/my_github_repos/ElectronicStructure.jl/ffr-ElectronicStructure.jl/LO_Gaussian/CGBF.jl:44 in expression starting at /home/efefer/WORKS/my_github_repos/ElectronicStructure.jl/ffr-ElectronicStructure.jl/LO_Gaussian/test_BasisSet.jl:9
So, I try to import
Base.push! in file CGBF.jl:
import Base.push! # This should become the constructor along with center and power function push!(cbf::CGBF,expn,coef) Base.push!(cbf.pgbfs, PGBF(expn, cbf.center, cbf.power)) Base.push!(cbf.coefs, coef) normalize!(cbf) end
When I do it like this the code run at some point before call to this function and then it hang up. I first thought that this is not a Julia problem and simply my laptop hangs (again because of unknown problem, which happened sometimes).
However, I run into this problem again when running
julia-0.7.0-alpha test_BasisSet.jl. After several hard restart, I tried to make sense of this problem and try to see what happened to my laptop when I run this. So, I open up
htop and observed that memory usage of Julia jumps up to 55% and before my laptop hang up.
I have encountered the hang up problems when I used Julia before (mainly because I tried to construct sparse matrix using
kron), but this problem is the most severe I have encountered so far.
This problem might be solved if I avoid using push and allocate memory beforehand which is the usual way I am handling this problem in Fortran. However, I might use push! somewhere and be aware of this problem.
Any help will be appreciated.
Best regards,
Fadjar Fathurrahman
I am using Ubuntu 16.04, 64 bit
julia> Sys.total_memory()/2^20 3835.27734375 julia> Sys.cpu_ cpu_info cpu_summary julia> Sys.cpu_info() 2-element Array{Base.Sys.CPUinfo,1}: Intel(R) Pentium(R) CPU B980 @ 2.40GHz: speed user nice sys idle irq 840 MHz 37364 s 141 s 7177 s 272661 s 0 s Intel(R) Pentium(R) CPU B980 @ 2.40GHz: speed user nice sys idle irq 971 MHz 40149 s 239 s 7743 s 268268 s 0 s | https://discourse.julialang.org/t/problem-with-base-push-in-0-7-0-alpha/11921 | CC-MAIN-2018-30 | refinedweb | 593 | 61.12 |
When I run my code it tells me: Type Error: unorderable types: str() < float(). I can't figure out why it won't let me compare these two numbers. The list I am using is defined, and the numbers in it have been redefined as floats, so I'm not sure what else to do. Any suggestions?
def countGasGuzzlers(list1, list2): total = 0 CCount = 0 HCount = 0 for line in list1: if num < 22.0: total = total + 1 CCount = CCount + 1 for line in list2: if num < 27.0: total = total + 1 Hcount = Hcount = 1 print('City Gas Guzzlers: ',CCount) print('Highway Gas Guzzlers: ',HCount) print('Total Gas Guzzlers: ',total)
This is my list definition. I'm pretty sure it's fine, but maybe there are some bugs in here as well?
CityFile = open('F://SSC/Spring 2015/CSC 110/PythonCode/Chapter 8/HW 4/carModelData_city','r') for line in CityFile: CityData = CityFile.readlines() for num in CityData: numCityData = float(num) CityList = numCityData HwyFile = open('F://SSC/Spring 2015/CSC 110/PythonCode/Chapter 8/HW 4/carModelData_hwy','r') for line in HwyFile: HwyData = HwyFile.readlines() for num in HwyData: numHwyData = float(num) HwyList = numHwyData | http://www.howtobuildsoftware.com/index.php/how-do/bJt/python-list-loops-if-statement-compare-python-using-a-created-list-as-a-parameter | CC-MAIN-2018-43 | refinedweb | 194 | 55.13 |
ASP updated, the cache can simply refresh itself. Furthermore, developers can define the length of time an item is to be cached, indicate cache dependencies, create cached versions per browser, and indicate where an item should be cached (client, server, proxy, etc.).
Caching is one of the greatest new features that ASP.NET provides regarding application performance.
There is no doubt that caching is one of the best methods of increasing your Web application's performance. ASP.NET and the .NET Framework provide a set of tools that make implementing caching simple. Data that is used over and over in your application, updated semi-regularly (like a catalog, list of contacts, store locations, etc.) make great candidates for caching. The right caching strategy will no doubt increase throughput on your servers and put smiles on your users' faces. This article describes how to implement caching in your applications using the .NET Framework.
Caching Basics (Output Caching)
Output caching involves the strategy of taking a dynamically generated Web page and caching its content so that subsequent requests for the page are fulfilled from an in-memory cache rather than through the execution of code. In high volume Web sites, the right caching strategy can dramatically improve scalability statistics.
ASP.NET gives developers multiple methods of implementing caching. For the most basic page-level caching, developers can use the declarative API, which is simply the term for defining caching parameters at the top of an ASP.NET page using HTML style tags.
First you need to find a candidate page for caching. For example, suppose that you have a Web-based report that displays monthly sales figures by region for the current year. This report's data is updated infrequently, but regularly, as different regions report at different times. The report is accessed many times by many people throughout the organization. Sounds like a good candidate for caching! To do so, you simply place the following declaration at the top of your ASP page.
<%@ OutputCache Duration="3600" VaryByParam="None" %>
This declaration instructs the .NET Framework to cache this Web page for a specified number of seconds (in this example, the page will be cached for 3600 seconds, or one hour). Requests made for the page during this time are returned from cache. The page does not execute its code to satisfy these requests, but instead uses in-memory results, thus conserving server resources and increasing page response times.
This declarative API allows you to indicate a simple and effective caching strategy for most of your pages. However, for developers who want more control over caching within their code, ASP.NET also provides a programmatic API. This API is a set of classes, methods and properties that wrap the caching features exposed by the .NET Framework. These classes enable you to set caching directives from within your ASP.NET applications. For example, the caching directive in the previous example could also be indicated programmatically by the following code:
Page.Response.Cache.SetExpires(Now.AddHours(1))
This code can be added to your code-behind class inside your Page_Init or Page_Load method. Notice that we call the SetExpires method to indicate a date on which the cached page should expire (again, one hour in our example).
This article will walk you through the classes related to caching with ASP.NET. It will further explore different caching strategies. It will also discuss updating the cache, removing items from the cache, and setting cache dependencies.
The Namespaces (System.Web and System.Web.Caching)
There are two namespaces containing ASP.NET caching classes: System.Web and System.Web.Caching. Caching-related classes found inside System.Web are used to cache Web pages and portions of Web pages. The System.Web.Caching class provides developers direct access to the cache, which is useful for putting application data into the cache, removing items from the cache and responding to cache-related events. Table 1 provides a quick reference to the various classes used in ASP.NET caching strategies.
Cacheability (Cache on the Client/Server)
Before a page can be cached, you must indicate its "cacheability." Setting this parameter indicates the devices on which your page can be cached. Valid devices include the client making the request, the server fulfilling the request, proxy servers, etc. To indicate a page's cacheability, you use the following method call at the top of your page:
Page.Response.Cache.SetCacheability( _ HttpCacheability.Server)
The SetCacheability method takes a member of the HttpCacheability enumeration as its only parameter. The members of this enumeration are designed to allow you to control where your page is cached. Table 2 presents the details for each enumeration member.
Version Caching (Versions of Pages)
Dynamically generated pages are typically created based on a set of parameters sent either to the page in the form of querystring parameters or posted using form fields and a submit button. ASP.NET makes it easy to cache these pages as well. The key to caching these pages is to cache versions of the page. That is, one version of the page for each value for a given parameter (or set of parameters).
The duration parameter in the example code represents the number of seconds the page should remain in the output cache before being refreshed.
For example, suppose our hypothetical regional sales report allows users to filter the data they view by month. Imagine the month parameter is passed to the page via the querystring. If we indicate that the page is cached, but do not cache based on this parameter, then only calls to the page without the querystring parameter will be cached. Each call using the querystring will override the cache and cause the page to execute its code rather than pull from cache. Of course this is not the caching behavior you want. Instead, each call to the page with a new month should result in a new page added to the cache as its own separate item. This way, subsequent requests for the same month can be delivered from the cache.
.NET makes version caching rather simple via the HttpCacheVaryByParams class. This class is accessed by a property exposed by Response.Cache. For example, the following code indicates that the page should cache versions of itself based on the Month querystring parameter.
Page.Response.Cache.VaryByParams( _ "Month") = True
You'll be happy to know that the same line of code works for parameters posted to the page. For instance, if you submit the month data to the page based on a user form field (text box, dropdown, etc.) with the same name of "month," then the same line of code results in multiple cached versions of the page.
Of course, pages are often rendered based on more than one parameter. For instance, both month and region might be parameters in our example. Each combination of month and region should result in a separate page being added to the cache. You can indicate a group of parameters on which to base caching for a given page using the same VaryByParams property. To do so, you simply separate each parameter with a semicolon, for example:
Page.Response.Cache.VaryByParams( _ "Month;Region") = True
You can also indicate that a page should be added to the cache based on all its parameters. To do this, you use an asterisk (*) as the VaryByParam value.
Fragment Caching (Page Portions / User Controls)
Many times, it is simply not practical to cache the entire contents of a given page. Suppose, for example, that our fictitious sales report required real-time updates for the current month. Past months might be able to stay in the cache for days without being updated. An ideal strategy in this case would be to cache the previous month's data while updating the current month on every request.
Valid devices include the client making the request, the server fulfilling the request, proxy servers, etc.
To cache portions of a page while dynamically generating the rest of the page you must employ a user control strategy. In the monthly reporting example you could sub-divide the page into a user control that rendered all past months and the page itself, which would be responsible for generating the data for the current month. The key to this type of page fragmentation is to ensure that the user control remains completely self-contained, which means that the user control should not rely on the code inside the Web page for execution (this happens to also be good user control design). Once fragmented, you simply tag the user control to be cached (inside its code-behind class). You can do so using the declarative API (see Caching Basics) or through the attribute class, PartialCachingAttribute. The following is an example using the PartialCachingAttribute class:
<PartialCaching(3600)> _ Public MustInherit Class PastMonths Inherits System.Web.UI.UserControl
In this example, we set the caching duration parameter to 3600 seconds (one hour). In addition to the required duration parameter, this attribute class also allows you to employ version caching of your control. To do so, you use the optional parameter, VaryByParam. This parameter works the same way as the VaryByParam property did in the previous section. However, because the control must remain completely self-contained in order to cache properly, the VaryByParam only works for data that is posted from the user control back to its code-behind class. That is, you must submit data from the control back to the control's code. You cannot, for instance, cache the control based on querystring values from the page hosting the control.
Application Data Caching (Frequently Accessed Data)
Often times, our application data does not present itself as a Web page or portion of a page that can be cached. Rather, we might have a data set that was expensive to create and is requested often or a database connection string that we want to add to the cache. Thankfully, in cases like these, ASP.NET provides us direct access to the Cache via the Cache class.
The Cache class represents the collection of cached items in an ASP.NET application. That is, one (and only one) Cache class is created by the framework for your application domain. For this reason, the Cache is dependent on your application. If your Web application is restarted, for instance, the Cache is flushed and re-created by the framework (empty, of course).
If you know how to work with a collection class, then you know how to work with the Cache object. Like a collection class, the Cache class exposes the Item property for accessing items, and a Count property for determining the number of items in the collection. The Cache class is accessed by your code from the HttpContext object, for example:
Response.Write( _ HttpContext.Current.Cache.Item("MyCachedItem"))
Of course, you do not have to specifically reference the HttpContext object in ASP.NET. The same line could be written as follows:
Response.Write(Cache("MyCachedItem"))
Note that when you attempt to retrieve an item from the cache, you will want to respond gracefully if the requested item does not exist. For instance, if you assign a local variable to hold the contents of your cached item and the item is not found, your variable will contain nothing. Therefore, a typical strategy is to check the variable's value using the Visual Basic IsNothing function. If the function returns true, often times you will want to call a procedure to add the item back to the cache.
To place items into the cache, the Cache class exposes both an Add and an Insert method. The Add method exists to support the object's contract as a collection class. Calling it returns a reference to the item added to the cache. However, for most operations, you should use the Insert method. Unlike the Add method, the Insert method allows you to overwrite an item already in the cache without first calling the Remove method (whereas the Add method fails if the item is already in the cache). For example, the following code inserts the contents of the connString variable into the cache:
Cache.Insert(key:="ConnString", _ value:=connString)
Of course, this is the most basic example. The Insert method has a number of other parameters that enable you to control items added to the cache. One of the most notable is slidingExpiration. You can use the slidingExpiration parameter to indicate when an item should be removed from the cache. Unlike the absoluteExpiration parameter, slidingExpiration enables you to set the number of minutes after the last request that an item in the cache should expire.
If the server requires memory resources it will sometimes claim them from the cache. This process is called scavenging. The framework can automatically remove an item from cache to reclaim memory. In this case, the priority parameter comes in handy. The priority parameter enables you to rank cached items relative to one another. This parameter takes a member of the CacheItemPriority enumeration whose values, in order of least likelihood of being removed, include NotRemovable, High, AboveNormal, Normal, BelowNormal and Low. Table 3 further presents a reference to all of the parameters available from the various overloaded versions of the Insert method.
You can explicitly remove items from the cache. Normally, the .NET caching system automatically manages cached items. However, sometimes you may want to remove items explicitly from the cache based on an event or user action. To do so, you simply call the Remove function as follows:
Cache.Remove(key:="MyCachedItem")
Caching Dependencies
The validity of items in the Cache is often times dependent on other objects. For instance, suppose we create another fictional report to display year-to-date sales. This page can also be cached, but rather than having a specific timeout period, it instead relies on the monthly sales data that was added to the cache separately. The page only needs to be refreshed if there is an update to the cached monthly sales data. Therefore, a dependency exists between this report and the monthly sales data. Additionally, let's suppose that our data store for the monthly sales data rests in an XML file (Listing 1). In this case, the cache only needs to be updated when this file changes. Therefore a dependency between the file and the cached information exists.
If you know how to work with a collection class, then you know how to work with the Cache object.
Thankfully, ASP.NET enables you to enforce these types of dependencies. In the first instance, we can use the AddCacheItemDependency method in the HttpResponse object to indicate a dependency between the year-to-date sales report and the cached monthly sales figures. To set up this dependency, simply add the following call to the response inside the Web page's code-behind class, where cacheKey represents the key item in the cache collection:
Response.AddCacheItemDependency( _ cacheKey:="MySalesData")
Of course, this works nicely for cached Web pages that are reliant on an item in the cache. However, you might want to use two items directly in the cache that are reliant on one another. In this case, you must access the cache directly through the Cache object (see Application Data Caching). The Insert method of the Cache class enables you to setup this dependency. You do so by creating an instance of the CacheDependency object and passing it as a parameter to the Insert method. Among other things, this class allows you to indicate one or more cacheKeys that the newly added item will be dependent upon. The following code creates an instance of CacheDependency for the Insert method:
Dim cacheDepend As New _ System.Web.Caching.CacheDependency( _ fileNames:=Nothing, _ cacheKeys:=New String(0) {"MyCachedItem"})
Just as there are two distinct ways to set dependencies between cached items (page to cached item and cached item to cached item), similar methods exist for setting dependencies between cached items and files/directories. First, to indicate that the validity of a cached response (Web page) is dependent on a file, you can call the AddFileDependency method in the HttpResponse object as follows:
Response.AddFileDependency( _ fileName:="C:\data\MonthlySalesByRegion.XML")
In addition, you can indicate that an item being added to the cache is dependent on a file (or set of files). You do so via the Cache class. Again, you call the Insert method and pass an instance of the CacheDependency object. The following code creates a CacheDependency object based on a file:
Now, you simply call Insert and pass the cacheDependency object as a parameter.
Putting it All Together (A Sample Application)
Okay, we've hinted at it long enough, now let's apply this caching knowledge to an actual version of the sales report page. The page we are going to create will read sales data from an XML file. The sales data is split out by region and month. Each region has an attribute called id, which uniquely identifies it. Of course it also has a name. The sales information is found as elements under the region node containing the month attribute. Listing 1 provides the complete contents of the XML file.
Our page is going to present the sales data to the user based on the Month querystring parameter. Therefore, we want to cache different versions of the page based on this parameter. To do so, we call the VaryByParams property of the Response.Cache object.
Additionally, we want to make the cached page dependent on our XML file. That is, when the XML file gets updated, the cached page should automatically expire and refresh itself. We do this by calling the AddFileDependency method off of the Response object. The rest of our code reads the contents of the XML file and adds it to an HTML table displayed to the user. Listing 2 presents the full code for our page's load event. Figure 1 shows the output of sales report.
As we've seen, caching can be a simple and effective strategy that can give your ASP.NET applications an extra edge in performance. You should now be ready to employ these same strategies in your code. | http://www.codemag.com/article/0205071 | CC-MAIN-2015-22 | refinedweb | 3,044 | 63.7 |
Changes to the Frontmatter
Changes to Editors and Authors to acknowlege the death of Robert Miner.
Changes to Abstract to highlight MathML may be used in HTML as well as XML.
Changes to the Chapter 1 Introduction
Changes to Section 1.3 Overview to highlight MathML may be used in HTML as well as XML.
Changes to the Chapter 2 MathML Fundamentals
Changes to Section 2.1.1 General Considerations to highlight MathML may be used in HTML as well as XML.
Add element markup to heading in Section 2.2 The Top-Level
<math> Element.
Changes to Section 2.1.2 MathML and Namespaces the
xmlns syntax for namespaces only applies to the XML serialisation.
Changes to Section 2.1.5.2 Length Valued Attributes to clarify that values specified with a
% or no unit are multiples of a reference value, which may differ from the default value used when the value is not specified.
Changes to namedspace in Section 2.1.5.2 Length Valued Attributes some attribute values such as "thinmathspace" were marked up as attribute names. (This affected formatting and also the index Section I.2 MathML Attributes).
Additional paragraph describing global document property defaults in Section 2.1.5.4 Default values of attributes
Changes to the Chapter 3 Presentation Markup
Refer to "characters" rather than "MathML Characters" in Section 3.1 Introduction.
Delete the note about bidi in HTML in Section 3.1.5.2 Bidirectional Layout in Token Elements (As there are proposals to change the HTML behavior).
Corrected mistaken refererence to
mtext, replaced by reference to
mo in Section 3.1.8.2 Warning: spacing should not be used to convey meaning
Refer to HTML rather than XHTML in Section 3.1.10 Mathematics style attributes common to presentation elements
Modify heading of Section 3.2.1
Token Element Content Characters,
<mglyph/>.
Note in Section 3.2.1.2 Using images to represent
symbols
<mglyph/> that the requirement to use
src and
alt is not enforced by the schema.
New section Section 3.2.2.2 Embedding HTML in MathML detailing the use of HTML elements on MathML token elements
Use U+2026 rather than
. . . in the example in Section 3.2.3 Identifier
<mi>.
Use percentage lengths rather than unitless lengths in examples in Section 3.2.5.8 Stretching of operators, fences and accents
and Section 3.3.2 Fractions
<mfrac>
Reference the Arabic Mathematical Symbols block in describing
mathvariant in Section 3.2.2 Mathematics style attributes common to token elements.
Do not specify that Math defaults may be set by using attributes in the MathML Namespace on the containing document, leave
the mechanism open. Section 3.2.5 Operator, Fence, Separator or Accent
<mo>
Changes to Section 3.2.5.2 Attributes to clarify defaults may be specified in any containing document.
Changes to Section 3.2.5.2.1 Dictionary-based attributes to clarify the interpretation of
maxsize,
minsize and
symmetric values.
Changes to Section 3.2.5.2.3 Indentation attributes to clarify behaviour if
indenttarget results in an unachievable alignment specification.
Changes to the examples in Section 3.2.5.5 Invisible operators so that each example is rendered as a separate
math expression.
Suggest CSS Counters as a possible mechanism for equation numbering in Section 3.5.3 Labeled Row in Table or Matrix
<mlabeledtr>
Minor improvements to the markup in Section 3.3.4 Style Change
<mstyle>.
Minor improvements to the markup in Section 3.3.9 Enclose Expression Inside Notation
<menclose>.
Changes to the attribute table Section 3.3.6.2 Attributes To clarify that unitless lengths are allowed on
mpadded, meaning, as usual, a multiplier of the stated default. Note that this change also affects the
mpadded-length grammar uin the extracted schema.
Explictly list mprescripts and none in heading for Section 3.4.7 Prescripts and Tensor Indices
<mmultiscripts>,
<mprescripts/>,
<none/>.
Changes to Section 3.5.1.2 Attributes to clarify that
displaystyle defaults to "false".
Minor improvements to the markup in Section 3.6.1 Stacks of Characters
<mstack>.
Editorial improvements to Section 3.6.8.1 Addition and Subtraction.
Modify the markup in the examples in Section 3.6.8.4 Repeating decimal so that MathML renderings are shown in some versions of this specification.
Note that attributes in other namespaces are not available in HTML in Section 3.7.1 Bind Action to Sub-Expression
<maction>
Changes to the Chapter 4 Content Markup
Add element markup to heading in Section 4.2.1.1 Rendering
<cn>,
<sep/>-Represented Numbers .
Add syntax table for qualifier elements in Section 4.3.3.1 Uses of
<domainofapplication>,
<interval>,
<condition>,
<lowlimit> and
<uplimit> and Section 4.3.3.2 Uses of
<degree>.
Modify the text in Section 4.1.5 Content MathML Concepts to clarify the role of the Qualifier row of syntax tables. (AM)
Spurious
apply removed from the "0" case in the example in Section 4.4.1.9 Piecewise declaration
<piecewise>,
<piece>,
<otherwise>.
Changes to Rewrite: partialdiffdegree The expression expression-in-x1-xk was rewritten to A. (AM)
Additional note added to the mathmltypes description clarifying that "complex" should be taken as an alias for "complex-cartesian" when rewriting to Strict Content MathML. (AM)
Changes to s_data1.mean, s_dist1.mean, s_dist1.moment and s_data1.moment examples to use new values for ⟨ and ⟩ so the result is in Unicode NFC form.
Changes to markup of syntax tables in Section 4.2.5 Function Application
<apply> and Section 4.2.7.1 The
share element to avoid redundant colspans, which make the html5 version invalid.
Clarify the behavior of qualifiers in Step 4b of the rewrite to Strict Content MathML. (AM)
Clarify that the types of the arguments are used to distinguish between set and multiset use of the
set constructor in Section 4.3.4.1.2 Rewriting to Strict Content MathML and Section 4.3.4.2.2 Rewriting to Strict Content MathML. (AM)
Fix spelling in Section 4.4.2.16 Not
<not/>.
Fix spelling in Section 4.4.3.1 Equals
<eq/>.
Split Section 4.4.7.1 Common trigonometric functions
<sin/>,
<cos/>,
<tan/>,
<sec/>,
<csc/>,
<cot/>
into separate sections Section 4.4.7.1 Common trigonometric functions
<sin/>,
<cos/>,
<tan/>,
<sec/>,
<csc/>,
<cot/>
,
Section 4.4.7.1 Common trigonometric functions
<sin/>,
<cos/>,
<tan/>,
<sec/>,
<csc/>,
<cot/>
,Section 4.4.7.2 Common inverses of trigonometric functions
<arcsin/>,
<arccos/>,
<arctan/>,
<arcsec/>,
<arccsc/>,
<arccot/>, Section 4.4.7.3 Common hyperbolic functions
<sinh/>,
<cosh/>,
<tanh/>,
<sech/>,
<csch/>,
<coth/>, Section 4.4.7.4 Common inverses of hyperbolic functions
<arcsinh/>,
<arccosh/>,
<arctanh/>,
<arcsech/>,
<arccsch/>,
<arccoth/> , add new presentation images for
arcsin.
Add element markup to heading in Section 4.4.7.7 Logarithm
<log/>
,
<logbase>
.
Minor rearrangement of heading in Section 4.4.8.6 Moment
<moment/>,
<momentabout>
Add syntax table for deprecated elements in Section 4.5.1 Declare
<declare>, Section 4.5.3 Relation
<fn> and Section 4.5.2 Relation
<reln>.
Changes to Chapter 5 Mixing Markup Languages for Mathematical Expressions.
Changes to Section 5.1.1 Annotation elements to highlight MathML may be used in HTML as well as XML.
Add additional note warning namespace extensibility exmple not applicable to HTML.
Add additional note warning namespace extensibility exmple not applicable to HTML.
Add additional note warning namespace extensibility exmple not applicable to HTML.
Additional section Section 5.2.3.3 Using
annotation-xml in HTML documents detailing the use of
annotation-xmlin HTML docuemnts
Show tag markup around element names in section headings in semantics, annotation and annotation-xml.
Changes to Chapter 6 Interactions with the Host Environment.
Editorial wording changes in Section 6.4 Combining MathML and Other Formats.
Editorial wording changes in Section 6.5 Using CSS with MathML.
Changes to wording on namespace use in Section 6.1 Introduction.
Additional section Section 6.2.2 Recognizing MathML in HTML.
Remove XML Declaration and
mml namespace prefix from the examples in Section 6.3.4 Examples.
Delete recommendation to use prefixed element names in XHTML in Section 6.4.1 Mixing MathML and XHTML.
Split HTML into a separate section from other non-XML use Section 6.4.3 Mixing MathML and HTML and Section 6.4.2 Mixing MathML and non-XML contexts
Remove the reference, Layout engines that lack native MathML support, to [MathMLforCSS] in Chapter 6 Interactions with the Host Environment.
Changes to Chapter 7 Characters, Entities and Fonts.
Change the DTD description in Section 7.3 Entity Declarations to reference the Combined HTML MathML entity set rather than the legacy ISO entity sets. This does not change any existing
definition, but adds the following 38 entity definitions:
" (U+0022),
& (U+0026),
< (U+003C),
> (U+003E),
© (U+00A9),
® (U+00AE),
Α (U+0391),
Β (U+0392),
Ε (U+0395),
Ζ (U+0396),
Η (U+0397),
Ι (U+0399),
Κ (U+039A),
Μ (U+039C),
Ν (U+039D),
Ο (U+039F),
Ρ (U+03A1),
Τ (U+03A4),
Χ (U+03A7),
ε (U+03B5),
ο (U+03BF),
ς (U+03C2),
ϑ (U+03D1),
ϒ (U+03D2),
(U+200C),
(U+200D),
(U+200E),
(U+200F),
‚ (U+201A),
„ (U+201E),
‹ (U+2039),
› (U+203A),
‾ (U+203E),
⁄ (U+2044),
€ (U+20AC),
™ (U+2122),
ℵ (U+2135),
↵ (U+21B5).
Changes to Appendix A Parsing MathML.
Modify the schema regular expression to allow the deprecated unitless length attributes.
The schema now enforces a mandatory space and optional minus sign before rownumber in the
align attribute of
mtable and
mstack.
Modify the schema (including DTD and XSD versions) to include the attributes listed in Section 3.2.5.2.3 Indentation attributes on
mspace to match the text description in Section 3.2.7 Space
<mspace/>.
Modify the regular expressions used for
mpadded-length and
length so that there must be at most one
. and at least one digit. (FW)
New sections: Section A.5 Parsing MathML in XHTML and Section A.6 Parsing MathML in HTML.
Changes to Appendix C Operator Dictionary.
Add entries for the characters listed in Section 7.7.2 Pseudo-scripts.
Changes to Appendix E Working Group Membership and Acknowledgments.
Changes to Section E.1 The Math Working Group Membership to note the death of Robert Miner.
Changes to Appendix G Normative References.
Changes to Appendix I Index.
Changes to Section I.2 MathML Attributes.
Changes Using images to represent
symbols
for Mathematical Expressions..
Three MathML 1 attrbutes on
math that were deprecated and undocumented in MathML2 but retained in the MathML2 DTD have been removed.
name (use
id instead),
baseline and
type (These are not used by any known implementation, so can be removed.) See Chapter 7 of [MathML1].. | http://www.w3.org/TR/2014/PER-MathML3-20140211/appendixf-d.html | CC-MAIN-2015-14 | refinedweb | 1,779 | 61.12 |
Node.js Step by Step: Introduction
An Introduction to Node.js
Screencast Transcript
Hi guys, my name is Christopher Roach, and I'll be your guide throughout this series of screencasts on Node.js. In this series we'll be using Node to create a simple blog engine, like the one made famous in the popular Ruby on Rails introductory video. The goal of this series is to give you, the viewer, a real feel for how Node works so that, even when working with any of the popular web development frameworks out there, such as Express or Geddy, you'll feel comfortable enough with the inner workings of Node to be able to drop down into its source and make changes to suit your needs as necessary.
Installation
Before we get into some of the details of what Node is and why you'd want to use it, I'd like to go ahead and get us started with the installation of Node, since, though super easy, it can take some time.
Node is still very young, and is in active development, so it's best to install from the source.
Node is still very young, and is in active development, so it's best to install from the source. That said, Node has very few dependencies, and so compilation is nowhere near as complicated as other projects you may have fought with in the past. To get the code, visit the Node.js website. If you scroll down the page to the download section, you'll find a couple of choices. If you have Git installed, you can do a clone of the repository and install from there. Otherwise, there's a link to a tarball that you can download instead. In this video, I'll keep things simple, and install from the tarball.
While this is downloading, now is a good time to mention that efforts are ongoing to provide a port of Node for Windows, and there are instructions for installing on Windows for Cygwin or MinGW. I believe there are even some binary packages out there that you can install from, but at the time of this writing, its primary environment is Unix- and Linux-based platforms. If you're on a Windows machine, you can click on the link for build instructions and follow the steps there for a Windows installation, or you can install a version of Linux, such as Ubuntu, and install Node there.
When it's finished download, simply untar and unzip the package with
tar -xvf and
cd into the directory it created. First we need to do a
./configure, then
make, and finally
make install. That's going to take a little time to build, so I'll let that run in the background and take this opportunity to talk a bit more about Node, and why it's causing such a stir in the web development community.
Introduction to Node
Node is JavaScript on the server.
So, if this article and video is your first introduction to Node, you're probably wondering just what it is and what makes it worth learning when there are already so many other web development frameworks out there to choose from. Well, for starters, one reason you should care is that Node is JavaScript on the server, and let's face it, if you work on the web, love it or hate it, you're going to have to work with JavaScript at some point. Using JavaScript as your backend language as well as for the client-side means a whole lot less context switching for your brain.
Oh, I know what you're thinking: "so Node is JavaScript on the server, well that's great, but there've been other JavaScript on the server attempts in the past that have basically just fizzled."
What makes Node any different from the rest?
Well, the short answer is: Node is server-side JavaScript finally done right. Where other attempts have basically been ports of traditional MVC web frameworks to the JavaScript language, Node is something entirely different. According to its website, Node is evented I/O for V8 JavaScript, but what exactly does that mean? Let's start with V8.
V8 is Google's super fast JavaScript implementation that's used in their Chrome browser.
Through some really ingenious application of "Just in Time" compilation, V8 is able to achieve speeds for JavaScript that make users of other dynamic languages, such as Python and Ruby, green with envy. Take a look at some of the benchmarks and I believe you'll be amazed. In many cases, V8 JavaScript is up there with JVM-based languages such as Clojure and Java, and even with compiled languages such as Go.
JavaScript's ability to pass around closures makes event-based programming dead simple.
The other key phrase in that statement is evented I/O. This one is the biggie. When it comes to creating a web server you basically have two choices to make when dealing with multiple concurrent connection requests. The first, which is the more traditional route taken by web servers such as Apache, is to use threads to handle incoming connection requests. The other method, the one taken by Node and some extremely fast modern servers such as Nginx and Thin, is to use a single non-blocking thread with an event loop. This is where the decision to use JavaScript really shines, since JavaScript was designed to be used in a single threaded event loop-based environment: the browser. JavaScript's ability to pass around closures makes event-based programming dead simple. You basically just call a function to perform some type of I/O and pass it a callback function and JavaScript automatically creates a closure, making sure that the correct state is preserved even after the calling function has long since gone out of scope. But this is all just technical jargon and I'm sure you're dying to see some code in action. I'm going to fast forward a bit to the end of this install, so we can start playing around with our brand new, freshly minted copy of Node.
Confirming the Installation
So, it looks like my build has finally finished; I want to quickly check and make sure that everything went well with the install. To do so, simply run
node --version from the command line, and you should see some indication that you're running the latest version of Node which, at this time, is version 0.4.5. If you see a version print out then you can rest assured that everything went swimmingly and you're ready to write your first Node app. So, let's
cd back into our home directory and create a new folder to hold all of our work during the course of this series of screencasts. Here I'm simply going to call mine '
blog' and let's
cd into that to get started.
Node - The Server Framework
Unlike other frameworks, Node is not strictly for web development. In fact, you can think of Node as a framework for server development of any kind. With Node you can build an IRC server, a chat server, or, as we'll see in this set of tutorials, an http server. So since we can't have an introductory tutorial without the obligatory '
Hello World' application, we'll begin with that.
Hello World
Let's create a new file called
app.js. Now Node comes with a handful of libraries to make the development of event-based servers easy. To use one of the available libraries, you simply include its module using the require function. The require function will return an object representing the module that you pass into it, and you can capture that object in a variable. This effectively creates a namespace for the functionality of any required module. For the creation of an HTTP server, Node provides the http library. So let's go ahead and require that now and assign the returned object to the http variable.
Next, we'll need to actually create our server. The http library provides a function called
createServer that takes a callback function and returns a new server object.
The callback function is what Node calls a listener function and it is called by the server whenever a new request comes in.
Whenever an HTTP request is made, the listener function will be called and objects representing the HTTP request and response will be passed into the function. We can then use the response object inside of our listener function to send a response back to the browser. To do so, we'll first need to write the appropriate HTTP headers, so let's call the
writeHead function on our response object.
The
writeHead function takes a couple of arguments. The first is an integer value representing the status code of the request which for us will be 200, in other words, OK. The second value is an object containing all of the response headers that we'd like to set. In this example, we'll simply be setting the Content-type to 'text/plain' to send back plain text.
Once we've set the headers, we can send the data. To do that, you'll call the
write function and pass in the data that you wish to send. Here, let's call the
write function on our response object and pass in the string "
Hello World".
To actually send the response, we need to signal to the server that we're done writing the body of our response; we can do that by calling
response.end. The
end function also allows us to pass in data as well, so we can shorten up our server code by getting rid of the call to the write function that we made earlier and instead passing in the string "
Hello World" to the end function, like so.
Now that we've created our server, we need to set it up to listen for new requests. That's easy enough to do: call the listen function on our server object and pass in a port number for it to listen on; in this case I'll be using port
8000. The listen function also takes an optional second parameter which is the hostname URL, but since we're just running this locally, we can safely skip that parameter for now.
Finally, let's print out a message to let us know that our server is running and on what port it's listening for new requests. You can do that by calling
console.log, just like we would in the browser, and passing in the string "
Listening on". There we go, now let's run our app by calling node and passing to it the name of the file we want it to execute.
The REPL
Before we bring this first article and video in the series to a close, let's flip back over to the terminal and quickly take a look at Node's REPL.
A REPL, for those unfamiliar with the acronym, stands for Read-Eval-Print-Loop which is nothing more than a simple program that accepts commands, evaluates them, and prints their results.
It's essentially an interactive prompt that allows you to do pretty much anything that you can do with regular Node, but without all the overhead of creating a separate file, and it's great for experimentation, so let's play around a bit with the REPL and learn a bit more about Node.
We'll first need to stop our server application by hitting
Ctrl-C. Then run node again, this time, however, without a filename. Running node without any arguments will bring up the REPL, as we can see here by the change in the prompt. The REPL is very simple: basically you can write JavaScript code and see the evaluation of that code. Despite its simplicity, though, the REPL does have few commands that can come in handy and you can get a look at each of these by calling the .help command at the prompt. Here (refer to screencast) we see a list of four commands, the first of which is the
.break command. If you are writing some code that spans several lines and you find that you've made some type of mistake, and need to break out for whatever reason, the
.break command can be used to do so. Let's try it out now...
I'm going to create a function here and I'll just call it
foo and open the function body and then hit
enter. Notice that, on the next line, rather than seeing the typical greater than symbol, we now see a set of three dots, or an ellipsis. This is Node's way of indicating to us that we have not yet finished the command on the previous line and that Node is still expecting more from us before it evaluates the code that we've typed in. So, let's go ahead and add a line of code now: we'll do
console.log and we'll print out the name of the function. Let's now hit enter, and, again, notice that the ellipsis character is being displayed once more. Node is still expecting us to finish the function at some point. Now let's assume that I've made a mistake and I just want to get back to a normal prompt. If, I continue to hit enter, Node continues displaying the ellipsis character. But, if I call the
.break command, Node will break us out of the current command and takes us back to the normal prompt.
Next, we have the
.clear command. This one will clear our current context. So if you've cluttered up the environment with the creation of several variables and functions and you want a clean slate, simply run the .
clear command and Voila, everything magically disappears.
.exit and
Finally, there's the
.exit and
.help commands. The
.help command is fairly obvious, since it's the command we used to see the list of commands in the first place. The
.exit command is equally obvious: you essentially just call it to exit the REPL, like so.
So, that pretty much covers all of the functionality that the REPL provides outside of the evaluation of the code you enter. But before we leave the REPL completely, I'd like to take this opportunity to discuss some differences and similarities between JavaScript in the browser and Node's flavor of JavaScript. So let's run Node again and jump back into the REPL.
The first difference between client-side JavaScript and Node is that, in the browser, any function or variable created outside of a function or object is bound to the global scope and available everywhere. In Node though, this is not true. Every file, and even the REPL, has its own module level scope to which all global declarations belong. We'll see this put to use later in the series when we discuss modules and create a few of our own. But for now, you can see the actual module object for the REPL by typing module at the prompt. Notice that there is a prompt attribute buried a few levels deep in our module object? This controls the prompt that we see when in the REPL. Let's just change that to something slightly different and see what happens. There now, we have a brand new prompt.
Another difference between Node JavaScript and browser JavaScript is that in the browser, you have a global window object that essentially ties you to the browser environment.
In Node, there is no browser, and, hence, no such thing as a
window object. Node does however have a counterpart that hooks you into the operating environment that is the process object which we can see by simply typing process into the REPL. Here you'll find several useful functions and information such as the list of environment variables.
One similarity that is important to mention here is the setTimeout function. If you're familiar with client-side JavaScript, you've no doubt used this function a time or two. It basically let's you setup a function to be called at a later time. Let's go ahead and try that out now.
> function sayHello(seconds) { ... console.log('Hello '); ... setTimeout(function() { ... console.log('World'); ... }, seconds * 1000); ... }
This will create a function that when called, prints out the string 'Hello' and then a few seconds later prints the string 'World'. Let's execute the function now to see it in action.
> sayHello(2);
There are a couple of important ideas to take notice of here. First, Ryan Dahl, the creator of Node, has done his best to make the environment as familiar as possible to anyone with client-side JavaScript experience. So the use of names such as
setTimeout and setInterval rather than sleep and repeat, for example, was a conscious decision to make the server-side environment match, wherever it makes sense, the browser environment.
The second concept that I want you to be aware of is the really important one. Notice that, when we call
sayHello, right after printing the first string, control is immediately given back to the REPL. In the time between when the first string is printed and the callback function executed, you can continue to do anything you want at the REPL's prompt. This is due to the event-based nature of Node. In Node, it's near impossible to call any function that blocks for any reason and this holds true for the setTimeout function. Lets call our
sayHello function again, however, this time let's pass in a slightly longer timeout interval to give us enough time to play around a bit and prove our point. I believe 10 seconds should do the trick.
There we see the first string. Let's go ahead and run some code of our own, how about
2 + 2. Great, we see that the answer is
4 and... there's our second string being printed out now.
Conclusion
So that brings us to the close of the first episode in this series. I hope this has been a fairly informative introduction to Node for you, and I hope I've done a decent enough job of explaining why it's so exciting, what it has to offer, and just how fun and simple it is to use. In the next episode, we'll actually start writing some of the code for our blog engine; so I hope you'll all join me again when things get a bit more hands on. See you then!
Tuts+ tutorials are translated into other languages by our community members—you can be involved too!Translate this post
| http://code.tutsplus.com/tutorials/this-time-youll-learn-node-js--net-19448 | CC-MAIN-2015-32 | refinedweb | 3,183 | 69.01 |
Introduction to programming¶
Written by Luke Chang
In this notebook we will begin to learn how to use Python. There are many different ways to install Python, but we recommend starting using Anaconda which is preconfigured for scientific computing. Start with installing Python 3.7. For those who prefer a more configurable IDE, Pycharm is a nice option. Python is a modular interpreted language with an intuitive minimal syntax that is quickly becoming one of the most popular languages for conducting research. You can use python for stimulus presentation, data analysis, machine-learning, scraping data, creating websites with flask or django, or neuroimaging data analysis.
There are lots of free useful resources to learn how to use python and various modules. See Jeremy Manning’s or Yaroslav Halchenko’s excellent Dartmouth courses. Codeacademy is a great interactive tutorial. Stack Overflow is an incredibly useful resource for asking specific questions and seeing responses to others that have been rated by the development community.
Jupyter Notebooks¶
We will primarily be using Jupyter Notebooks to interface with Python. A Jupyter notebook consists of cells. The two main types of cells you will use are code cells and markdown cells.
A code cell contains actual code that you want to run. You can specify a cell as a code cell using the pulldown menu in the toolbar in your Jupyter notebook. Otherwise, you can can hit esc and then y (denoted “esc, y”) while a cell is selected to specify that it is a code cell. Note that you will have to hit enter after doing this to start editing it. If you want to execute the code in a code cell, hit “shift + enter.” Note that code cells are executed in the order you execute them. That is to say, the ordering of the cells for which you hit “shift + enter” is the order in which the code is executed. If you did not explicitly execute a cell early in the document, its results are now known to the Python interpreter.
Markdown cells contain text. The text is written in markdown, a lightweight markup language. You can read about its syntax here. Note that you can also insert HTML into markdown cells, and this will be rendered properly. As you are typing the contents of these cells, the results appear as text. Hitting “shift + enter” renders the text in the formatting you specify. You can specify a cell as being a markdown cell in the Jupyter toolbar, or by hitting “esc, m” in the cell. Again, you have to hit enter after using the quick keys to bring the cell into edit mode.
In general, when you want to add a new cell, you can use the “Insert” pulldown menu from the Jupyter toolbar. The shortcut to insert a cell below is “esc, b” and to insert a cell above is “esc, a.” Alternatively, you can execute a cell and automatically add a new one below it by hitting “alt + enter.”
print("Hello World")
Package Management¶
Package managment in Python has been dramatically improving. Anaconda has it’s own package manager called ‘conda’. Use this if you would like to install a new module as it is optimized to work with anaconda.
!conda install *package*
However, sometimes conda doesn’t have a particular package. In this case use the default python package manager called ‘pip’.
These commands can be run in your unix terminal or you can send them to the shell from a Jupyter notebook by starting the line with
!
It is easy to get help on how to use the package managers
!pip help install
!pip help install
!pip list --outdated
!pip install setuptools --upgrade
Variables¶
Python is a dynamically typed language, which means that you can easily change the datatype associated with a variable. There are several built-in datatypes that are good to be aware of.
Built-in
Numeric types:
int, float, long, complex
String: str
Boolean: bool
True / False
NoneType
User defined
Use the type() function to find the type for a value or variable
Data can be converted using cast commands
# Integer a = 1 print(type(a)) # Float b = 1.0 print(type(b)) # String c = 'hello' print(type(c)) # Boolean d = True print(type(d)) # None e = None print(type(e)) # Cast integer to string print(type(str(a)))
<class 'int'> <class 'float'> <class 'str'> <class 'bool'> <class 'NoneType'> <class 'str'>
Math Operators¶
+, -, *, and /
Exponentiation **
Modulo %
Note that division with integers in Python 2.7 automatically rounds, which may not be intended. It is recommended to import the division module from python3
from __future__ import division
# Addition a = 2 + 7 print(a) # Subtraction b = a - 5 print(b) # Multiplication print(b*2) # Exponentiation print(b**2) # Modulo print(4%9) # Division print(4/9)
9 4 8 16 4 0.4444444444444444
String Operators¶
Some of the arithmetic operators also have meaning for strings. E.g. for string concatenation use
+sign
String repetition: Use
*sign with a number of repetitions
# Combine string a = 'Hello' b = 'World' print(a + b) # Repeat String print(a*5)
HelloWorld HelloHelloHelloHelloHello
Logical Operators¶
Perform logical comparison and return Boolean value
x == y # x is equal to y x != y # x is not equal to y x > y # x is greater than y x < y # x is less than y x >= y # x is greater than or equal to y x <= y # x is less than or equal to y
# Works for string a = 'hello' b = 'world' c = 'Hello' print(a==b) print(a==c) print(a!=b) # Works for numeric d = 5 e = 8 print(d < e)
False False True True
Conditional Logic (if…)¶
Unlike most other languages, Python uses tab formatting rather than closing conditional statements (e.g., end).
Syntax:
if condition: do something
Implicit conversion of the value to bool() happens if
conditionis of a different type than bool, thus all of the following should work:
if condition: do_something elif condition: do_alternative1 else: do_otherwise # often reserved to report an error # after a long list of options
n = 1 if n: print("n is non-0") if n is None: print("n is None") if n is not None: print("n is not None")
n is non-0 n is not None
Loops¶
for loop is probably the most popular loop construct in Python:
for target in sequence: do_statements
However, it’s also possible to use a while loop to repeat statements while
conditionremains True:
while condition do: do_statements
string = "Python is going to make conducting research easier" for c in string: print(c)
P y t h o n i s g o i n g t o m a k e c o n d u c t i n g r e s e a r c h e a s i e r
x = 0 end = 10 csum = 0 while x < end: csum += x print(x, csum) x += 1 print(f"Exited with x=={x}")
0 0 1 1 2 3 3 6 4 10 5 15 6 21 7 28 8 36 9 45 Exited with x==10
Functions¶
A function is a named sequence of statements that performs a computation. You define the function by giving it a name, specify a sequence of statements, and optionally values to return. Later, you can “call” the function by name.
def make_upper_case(text): return (text.upper())
The expression in the parenthesis is the argument.
It is common to say that a function “takes” an argument and “returns” a result.
The result is called the return value.
The first line of the function definition is called the header; the rest is called the body.
The header has to end with a colon and the body has to be indented. It is a common practice to use 4 spaces for indentation, and to avoid mixing with tabs.
Function body in Python ends whenever statement begins at the original level of indentation. There is no end or fed or any other identify to signal the end of function. Indentation is part of the the language syntax in Python, making it more readable and less cluttered.
def make_upper_case(text): return (text.upper()) string = "Python is going to make conducting research easier" print(make_upper_case(string))
PYTHON IS GOING TO MAKE CONDUCTING RESEARCH EASIER
Python Containers¶
There are 4 main types of builtin containers for storing data in Python:
list
tuple
dict
set
Lists¶
In Python, a list is a mutable sequence of values. Mutable means that we can change separate entries within a list. For a more in depth tutorial on lists look here
Each value in the list is an element or item
Elements can be any Python data type
Lists can mix data types
Lists are initialized with
[]or
list()
l = [1,2,3]
Elements within a list are indexed (starting with 0)
l[0]
Elements can be nested lists
nested = [[1, 2, 3], [4, 5, 6], [7, 8, 9]]
Lists can be sliced.
l[start:stop:stride]
Like all python containers, lists have many useful methods that can be applied
a.insert(index,new element) a.append(element to add at end) len(a)
List comprehension is a Very powerful technique allowing for efficient construction of new lists.
[a for a in l]
# Indexing and Slicing a = ['lists','are','arrays'] print(a[0]) print(a[1:3]) # List methods a.insert(2,'python') a.append('.') print(a) print(len(a)) # List Comprehension print([x.upper() for x in a])
lists ['are', 'arrays'] ['lists', 'are', 'python', 'arrays', '.'] 5 ['LISTS', 'ARE', 'PYTHON', 'ARRAYS', '.']
Dictionaries¶
In Python, a dictionary (or
dict) is mapping between a set of indices (keys) and a set of values
The items in a dictionary are key-value pairs
Keys can be any Python data type
Dictionaries are unordered
Here is a more indepth tutorial on dictionaries
# Dictionaries eng2sp = {} eng2sp['one'] = 'uno' print(eng2sp) eng2sp = {'one': 'uno', 'two': 'dos', 'three': 'tres'} print(eng2sp) print(eng2sp.keys()) print(eng2sp.values())
{'one': 'uno'} {'one': 'uno', 'two': 'dos', 'three': 'tres'} dict_keys(['one', 'two', 'three']) dict_values(['uno', 'dos', 'tres'])
Tuples¶
In Python, a tuple is an immutable sequence of values, meaning they can’t be changed
Each value in the tuple is an element or item
Elements can be any Python data type
Tuples can mix data types
Elements can be nested tuples
Essentially tuples are immutable lists
Here is a nice tutorial on tuples
numbers = (1, 2, 3, 4) print(numbers) t2 = 1, 2 print(t2)
(1, 2, 3, 4) (1, 2)
sets¶
In Python, a
set is an efficient storage for “membership” checking
setis like a
dictbut only with keys and without values
a
setcan also perform set operations (e.g., union intersection)
Here is more info on sets
# Union print({1, 2, 3, 'mom', 'dad'} | {2, 3, 10}) # Intersection print({1, 2, 3, 'mom', 'dad'} & {2, 3, 10}) # Difference print({1, 2, 3, 'mom', 'dad'} - {2, 3, 10})
{1, 2, 3, 'mom', 10, 'dad'} {2, 3} {1, 'mom', 'dad'}
Modules¶
A Module is a python file that contains a collection of related definitions. Python has hundreds of standard modules. These are organized into what is known as the Python Standard Library. You can also create and use your own modules. To use functionality from a module, you first have to import the entire module or parts of it into your namespace
To import the entire module, use
import module_name
You can also import a module using a specific name
import module_name as new_module_name
To import specific definitions (e.g. functions, variables, etc) from the module into your local namespace, use
from module_name import name1, name2
which will make those available directly in your
namespace
import os from glob import glob
Here let’s try and get the path of the current working directory using functions from the
os module
os.path.abspath(os.path.curdir)
'/Users/lukechang/Dropbox/Dartbrains/Notebooks'
It looks like we are currently in the notebooks folder of the github repository. Let’s use glob, a pattern matching function, to list all of the csv files in the Data folder.
data_file_list = glob(os.path.join('../..','Data','*csv')) print(data_file_list)
[]
This gives us a list of the files including the relative path from the current directory. What if we wanted just the filenames? There are several different ways to do this. First, we can use the the
os.path.basename function. We loop over every file, grab the base file name and then append it to a new list.
file_list = [] for f in data_file_list: file_list.append(os.path.basename(f)) print(file_list)
['salary_exercise.csv', 'salary.csv']
Alternatively, we could loop over all files and split on the
/ character. This will create a new list where each element is whatever characters are separated by the splitting character. We can then take the last element of each list.
file_list = [] for f in data_file_list: file_list.append(f.split('/')[-1]) print(file_list)
['salary_exercise.csv', 'salary.csv']
It is also sometimes even cleaner to do this as a list comprehension
[os.path.basename(x) for x in data_file_list]
['salary_exercise.csv', 'salary.csv']
Exercises¶
Find Even Numbers¶
Let’s say I give you a list saved in a variable: a = [1, 4, 9, 16, 25, 36, 49, 64, 81, 100]. Make a new list that has only the even elements of this list in it.
Find Maximal Range¶
Given an array length 1 or more of ints, return the difference between the largest and smallest values in the array.
Duplicated Numbers¶
Find the numbers in list a that are also in list b
a = [0, 1, 4, 9, 16, 25, 36, 49, 64, 81, 100, 121, 144, 169, 196, 225, 256, 289, 324, 361]
b = [0, 4, 16, 36, 64, 100, 144, 196, 256, 324]
Speeding Ticket Fine¶
You are driving a little too fast on the highway, and a police officer stops you. Write a function that takes the speed as an input and returns the fine.
If speed is 60 or less, the result is
$0. If speed is between 61 and 80 inclusive, the result is
$100. If speed is 81 or more, the result is
$500. | https://dartbrains.org/content/Introduction_to_Programming.html | CC-MAIN-2020-40 | refinedweb | 2,356 | 59.23 |
Tldr;
- Introduction
- About styled-components
- Installing styled-components
- Using styled-components
- Props in styled-components
- Building the app - Grocery UI
- Adding user avatar image
- Absolute Positioning in React Native
- Adding icons in a React Native app
- Adding horizontal ScrollView
- Adding a vertical ScrollView
- Building a card component
- Conclusion
Introduction
Whether you are a web developer or a mobile app developer, you know that without a good amount of styling, your application's UI would probably suck. Styling an application is important; I cannot put enough emphasis on how important it is for a mobile app to have a pleasing design and a good use of colors.
If you are getting into React Native or have already dipped your toes in, do know that there are different ways to style a React Native app. I have already discussed the basics and some of the different ways to style your React Native components in the article below, such as creating a new style object with the StyleSheet.create() method and encapsulating those style objects. Go check it out 👇
This tutorial is going to be about styling your React Native apps using 💅 Styled Components. Yes, styled-components is a third party library. Using it is a matter of choice, but it is also another way to style components, and many of you might find it easy to use, especially if you have used this library before with other frameworks. One common use case is React.
What is Styled Components?
Styled Components is a CSS-in-JS library that encourages developers to write each component together with its own styles, keeping both in one place. This approach has led to some happy times for some happy developers, optimizing their experience and output.
In React Native, components are already styled by creating JavaScript objects, and if you do not encapsulate them, in most cases your components and their styling are going to end up in one place.
React Native tends to follow certain conventions when it comes to styling your app, such as writing all CSS property names in camelCase. For example, background-color in React Native becomes:

backgroundColor: 'blue'
Some web developers are uncomfortable with these conventions. Using a third party library like styled-components can give you wings: you do not have to switch between conventions much, apart from the properties and React Native's own flexbox rules.
Behind the scenes, styled components just convert the CSS text into a React Native stylesheet object. You can check how it does that here.
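To get a feel for what that conversion means, here is a toy sketch in plain JavaScript. This is an assumption for illustration only, not the library's real implementation (the real one handles parsing, units, and shorthands); it simply shows kebab-case CSS declarations becoming a camelCased style object:

```javascript
// Toy sketch: kebab-case CSS declarations become a camelCased
// React Native style object. Numbers are converted, strings kept.
function toStyleObject(css) {
  const style = {};
  css.split(';')
    .map(decl => decl.trim())
    .filter(Boolean)
    .forEach(decl => {
      const [prop, value] = decl.split(':').map(s => s.trim());
      const camel = prop.replace(/-([a-z])/g, (_, c) => c.toUpperCase());
      style[camel] = isNaN(Number(value)) ? value : Number(value);
    });
  return style;
}

console.log(toStyleObject('background-color: papayawhip; flex: 1'));
// { backgroundColor: 'papayawhip', flex: 1 }
```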
Enough with the story, let's get to work!
Installing Styled Components
To install the styled-components library, we first need a React Native project. To get started quickly, I am going to use the awesome Expo. Make sure you have expo-cli installed.
# To install expo-cli
npm install -g expo-cli

# Generate a project
expo init [YourApp-Name]
When running the last command, the command line prompt will ask you a few questions. The first one is Choose a template, where I chose expo-template-blank. Then enter the display name of your app, and choose either npm or yarn to install dependencies. I am going with npm.

Once all the dependencies are installed, you can open this project in your favorite code editor. The next step is to install the latest version of the styled-components library.
npm install -S styled-components
That's it for installation.
Using Styled Components
Open up the App.js file right now. The default code generated by Expo looks like this:

import React from 'react';
import { StyleSheet, Text, View } from 'react-native';

export default class App extends React.Component {
  render() {
    return (
      <View style={styles.container}>
        <Text>Open up App.js to start working on your app!</Text>
      </View>
    );
  }
}

const styles = StyleSheet.create({
  container: {
    flex: 1,
    backgroundColor: '#fff',
    alignItems: 'center',
    justifyContent: 'center'
  }
});
From your favorite terminal window, run the command:
npm run ios if you are on macOS. For Linux and Windows users, the command is npm run android, but make sure you have an Android virtual device running in the background. Our code currently looks like below.
Let us make some changes to it and use our newly installed library. To get started, import the library like below.
import styled from 'styled-components';
Make changes to the component's render function like below. Replace both View and Text with Container and Title. These new elements are going to be custom components built with styled-components.
export default class App extends React.Component { render() { return ( <Container> <Title>React Native with 💅 Styled Components</Title> </Container> ); } }
styled-components utilizes tagged template literals to style your components using backticks. When creating a component in React or React Native using styled-components, each component is going to have styles attached to it.
const Container = styled.View` flex: 1; background-color: papayawhip; justify-content: center; align-items: center; `; const Title = styled.Text` font-size: 20px; font-weight: 500; color: palevioletred; `;
Notice how the Container is a React Native View, but has styling attached to it.
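If tagged template literals themselves are new to you, here is the mechanism in isolation with a hypothetical tag function (not part of styled-components). The tag receives the literal's static string parts and the interpolated values as separate arguments, which is exactly what lets styled-components process your CSS:

```javascript
// A minimal tag function: it receives the static parts of the
// literal and the interpolated values separately.
function tag(strings, ...values) {
  return { strings, values };
}

const mainColor = 'palevioletred';
const result = tag`color: ${mainColor};`;

console.log(result.strings); // [ 'color: ', ';' ]
console.log(result.values);  // [ 'palevioletred' ]
```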
The complete code for the App.js file after the changes:
import React from 'react'; import styled from 'styled-components'; export default class App extends React.Component { render() { return ( <Container> <Title>React Native with 💅 Styled Components</Title> </Container> ); } } const Container = styled.View` flex: 1; background-color: papayawhip; justify-content: center; align-items: center; `; const Title = styled.Text` font-size: 24px; font-weight: 500; color: palevioletred; `;
In the above snippet, take note that we are not importing any React Native core components such as View, Text, or the StyleSheet object. It is that simple. styled-components uses the same flexbox model that React Native layouts do. The advantage here is that you get to use the same, understandable syntax that you have been using in web development.
Using Props in Styled Components
Often you will find yourself creating custom components for your apps. This gives you the advantage of staying DRY. Using styled-components is no different: you can leverage this programming pattern by building custom components that require props from their parent components.
props are commonly known as additional properties passed to a specific component. To demonstrate this, create a new file called CustomButton.js.
Inside this file we are going to create a custom button that requires props such as backgroundColor, textColor, and the text for the button itself. You are going to build this custom button from TouchableOpacity and Text, but without importing the react-native library, using a functional component called CustomButton.
import React from 'react';
import styled from 'styled-components';

const CustomButton = props => (
  <ButtonContainer
    onPress={() => alert('Hi!')}
    backgroundColor={props.backgroundColor}
  >
    <ButtonText textColor={props.textColor}>{props.text}</ButtonText>
  </ButtonContainer>
);

export default CustomButton;

const ButtonContainer = styled.TouchableOpacity`
  width: 100px;
  height: 40px;
  padding: 12px;
  border-radius: 10px;
  background-color: ${props => props.backgroundColor};
`;

const ButtonText = styled.Text`
  font-size: 15px;
  color: ${props => props.textColor};
  text-align: center;
`;
By passing an interpolated function ${props => props...} to a styled component's template literal, you can extend its styles. Now add this button to the App.js file.
render() { return ( <Container> <Title>React Native with 💅 Styled Components</Title> <CustomButton text="Click Me" textColor="#01d1e5" backgroundColor="lavenderblush" /> </Container> ); }
On running the simulator, you will get the following result.
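One thing worth knowing before we move on: each interpolation is just a plain JavaScript function of props, so you can pull it out, give it a fallback value, and even test it on its own. The helper names below are illustrative, not part of the styled-components API:

```javascript
// Extracted interpolation functions with fallback values, so a
// styled component still renders sensibly when a prop is omitted.
const backgroundColor = props => props.backgroundColor || 'palevioletred';
const textColor = props => props.textColor || 'white';

// They would then be dropped into the template literal, e.g.:
// const ButtonContainer = styled.TouchableOpacity`
//   background-color: ${backgroundColor};
// `;

console.log(backgroundColor({})); // 'palevioletred'
console.log(backgroundColor({ backgroundColor: 'lavenderblush' })); // 'lavenderblush'
```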
Building the app - Grocery UI
What are we building in this section? A UI screen for an app that might be a Grocery Store. You are going to build the home screen that looks like the one below.
We will be using our knowledge of styled-components, so let's get started! Open up App.js. Declare a new Container view using styled. Inside the backticks, you can put pure CSS code with the exact same syntax. The View element is like a div in HTML or web programming in general. Also, create another view called Titlebar inside Container.
Inside Titlebar, it will contain three new elements. One is going to be an image, Avatar, and the other two are text: Title and Name.
import React from 'react'; import styled from 'styled-components'; export default class App extends React.Component { render() { return ( <Container> <Titlebar> <Avatar /> <Title>Welcome back,</Title> <Name>Aman</Name> </Titlebar> </Container> ); } } const Container = styled.View` flex: 1; background-color: white; justify-content: center; align-items: center; `; const Titlebar = styled.View` width: 100%; margin-top: 50px; padding-left: 80px; `; const Avatar = styled.Image``; const Title = styled.Text` font-size: 20px; font-weight: 500; color: #b8bece; `; const Name = styled.Text` font-size: 20px; color: #3c4560; font-weight: bold; `;
Run
npm run ios and see it in action.
Right now, everything is in the middle of the screen. We need the Titlebar and its contents at the top of the mobile screen. So the styles for Container will be as below.
const Container = styled.View` flex: 1; background-color: white; `;
Adding user avatar image
I am going to use an image that is stored in the assets folder in the root of our project. You are free to use your own image, or you can download the assets for this project below.
To create an image, even with styled-components, you need the Image component. You can use the source prop to reference the image based on where it is located.
<Titlebar> <Avatar source={require('./assets/avatar.jpg')} /> <Title>Welcome back,</Title> <Name>Aman</Name> </Titlebar>
The styling for Avatar will begin with a width and height of 44 pixels. Giving it a border-radius of exactly half the width and height makes the image a circle. border-radius is a property that you will be using a lot to create rounded corners.
const Avatar = styled.Image` width: 44px; height: 44px; background: black; border-radius: 22px; margin-left: 20px; `;
You will get the following result.
Now notice that the avatar image and the text are piling up; they are taking up the same space on the screen. To avoid this, you are going to use the position: absolute CSS property.
Absolute Positioning in React Native
CSS properties such as
padding and
margin are used to add space between UI elements in relation to one another. This is the default layout position. However, you are currently in a scenario where it will be beneficial to use absolute positioning of UI elements and place the desired UI element at the exact position you want.
In React Native and CSS in general, if the position property is set to absolute, then the element is laid out relative to its parent. CSS has other values for position, but React Native only supports absolute and relative (the default).
Modify
Avatar styles as below.
const Avatar = styled.Image` width: 44px; height: 44px; background: black; border-radius: 22px; margin-left: 20px; position: absolute; top: 0; left: 0; `;
Usually, with position absolute property, you are going to use a combination of the following properties:
- top
- left
- right
- bottom
In our case above, we use
top and
left both set to
0 pixels. You will get the following output.
Adding icons in a React Native app
The Expo boilerplate comes with a set of different icon libraries, such as Ionicons, FontAwesome, Glyphicons, Material icons, and many more. You can find the complete list of icons here, on a searchable website.
To use the library, all you have to do is write the import statement.
import { Ionicons } from '@expo/vector-icons';
Inside the
Titlebar view, add the icon.
<Titlebar>
  {/* ... */}
  <Ionicons name="md-cart" size={32} color="red" />
</Titlebar>
Each icon needs name, size, and color props. Right now, if you look at the simulator, you will notice the same problem we had when adding the avatar image: there is no space between the icon and the other UI elements inside the title bar.
To solve this, let us apply absolute positioning as an inline style on <Ionicons />.
<Ionicons name="md-cart" size={32} color="red" style={{ position: 'absolute', right: 20, top: 5 }} />
Why an inline style? Because
Ionicons is not generated using styled-components.
Mapping through a List
Inside the components/ folder, create a new file called Categories.js. This file is going to render a list of category items for the Grocery UI app.
import React from 'react'; import styled from 'styled-components'; const Categories = props => ( <Container> <Name>Fruits</Name> <Name>Bread</Name> <Name>Drinks</Name> <Name>Veggies</Name> </Container> ); export default Categories; const Container = styled.View``; const Name = styled.Text` font-size: 32px; font-weight: 600; margin-left: 15px; color: #bcbece; `;
Right now all the data is static. Import this component in App.js and place it after Titlebar.
import Categories from './components/Categories'; // ... return ( <Container> <Titlebar>{/* ... */}</Titlebar> <Categories /> </Container> );
You will get the following output.
There can be plenty of categories. To make the category names dynamic, we can send them from the App.js file.
const items = [
  { text: 'Fruits' },
  { text: 'Bread' },
  { text: 'Drinks' },
  { text: 'Veggies' },
  { text: 'Meat' },
  { text: 'Paper Goods' }
];

// Inside the render function, replace <Categories /> with
{items.map((category, index) => (
  <Categories name={category.text} key={index} />
))}
In the above snippet, you are using the map function from JavaScript to iterate through an array and render a list of items, in this case category names. Adding a key prop is required.
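If map is new to you, here it is in isolation with plain strings instead of JSX. It returns a new array by applying a function to each element, and the callback's second argument is the element's index, which is what we used above for the key prop:

```javascript
const items = [{ text: 'Fruits' }, { text: 'Bread' }, { text: 'Drinks' }];

// map returns a new array; the callback gets the element and its index.
const rendered = items.map((category, index) => `${index}: ${category.text}`);

console.log(rendered); // [ '0: Fruits', '1: Bread', '2: Drinks' ]
```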
To make this work, also modify
Categories.js.
const Categories = props => <Name>{props.name}</Name>;
Adding Horizontal ScrollView
This list is not scrollable right now. To make it scrollable, let us place it inside a ScrollView. Open up the App.js file and place the categories inside a ScrollView, but first import it from the React Native core.
import { ScrollView } from 'react-native';

// ...

<ScrollView>
  {items.map((category, index) => (
    <Categories name={category.text} key={index} />
  ))}
</ScrollView>;
You will not notice a single change in the UI. By default, scrollable lists in React Native using ScrollView are vertical. Make this one horizontal by adding the prop horizontal.
<ScrollView horizontal={true}>
  {items.map((category, index) => (
    <Categories name={category.text} key={index} />
  ))}
</ScrollView>
It works, but does not look good.
Let us add some inline styles to the ScrollView.
<ScrollView
  horizontal={true}
  style={{
    padding: 20,
    paddingLeft: 12,
    paddingTop: 30,
    flexDirection: 'row'
  }}
  showsHorizontalScrollIndicator={false}
>
  {items.map((category, index) => (
    <Categories name={category.text} key={index} />
  ))}
</ScrollView>
Now it looks better. The prop showsHorizontalScrollIndicator hides the horizontal scroll bar that by default appears beneath the name of the categories.
Adding a vertical ScrollView
The next step is to add a ScrollView that acts as a wrapper inside the Container view such that the whole area becomes scrollable vertically. There is a reason to do this: you are now going to have items separated into two columns, as images with text related to a particular category.
Modify the App.js file.
return (
  <Container>
    <ScrollView>
      <Titlebar>{/* and its contents */}</Titlebar>
      <ScrollView horizontal={true}>
        {/* Categories being rendered */}
      </ScrollView>
      <Subtitle>Items</Subtitle>
    </ScrollView>
  </Container>
);
Notice that we are adding another styled component called Subtitle, which is nothing but a text.
const Subtitle = styled.Text`
  font-size: 20px;
  color: #3c4560;
  font-weight: 500;
  margin-top: 10px;
  margin-left: 25px;
  text-transform: uppercase;
`;
It renders like below.
Building a card component
In this section, we are going to create a card component that will hold an item's image, the name of the item, and the price as text. Each card component is going to have curved borders and a box shadow. This is how it is going to look:
Create a new component file called Card.js inside the components directory. The structure of the Card component is going to be:
import React from 'react';
import styled from 'styled-components';

const Card = props => (
  <Container>
    <Cover>
      <Image source={require('../assets/pepper.jpg')} />
    </Cover>
    <Content>
      <Title>Pepper</Title>
      <PriceCaption>$ 2.99 each</PriceCaption>
    </Content>
  </Container>
);

export default Card;
Currently, it has static data, such as the image, title and content. Let us add the styles for each styled UI element in this file.
const Container = styled.View`
  background: #fff;
  height: 200px;
  width: 150px;
  border-radius: 14px;
  margin: 18px;
  margin-top: 20px;
  box-shadow: 0 5px 15px rgba(0, 0, 0, 0.15);
`;

const Cover = styled.View`
  width: 100%;
  height: 120px;
  border-top-left-radius: 14px;
  border-top-right-radius: 14px;
  overflow: hidden;
`;

const Image = styled.Image`
  width: 100%;
  height: 100%;
`;

const Content = styled.View`
  padding-top: 10px;
  flex-direction: column;
  align-items: center;
  height: 60px;
`;

const Title = styled.Text`
  color: #3c4560;
  font-size: 20px;
  font-weight: 600;
`;

const PriceCaption = styled.Text`
  color: #b8b3c3;
  font-size: 15px;
  font-weight: 600;
  margin-top: 4px;
`;
The Container view has a default background color of white. This is useful in scenarios where you are fetching images from third-party APIs. It also provides a background to the text area below the image.
Inside the Container view, add an Image and wrap it inside a Cover view. In React Native there are two ways you can fetch an image. If you are getting an image from a static resource, as in our case, you use the source prop with the keyword require, which contains the relative path to the image asset stored in the project folder. In the case of network images, or getting an image from an API, you use the same prop with a different keyword called uri. Here is an example of an image being fetched from an API.
<Image source={{ uri: '' }} />
The Cover view uses rounded corners with the overflow property. This is done to reflect the rounded corners; iOS clips the image if it comes from a child component. In our case, the image comes from a Card component, which is a child of the App component.
The Image component takes the width and height of the entire Cover view.
Now let us import this component inside the App.js file, after the Subtitle, and see what results we get.
render() {
  return (
    <Container>
      <ScrollView>
        {/* ... */}
        <Subtitle>Items</Subtitle>
        <ItemsLayout>
          <ColumnOne>
            <Card />
          </ColumnOne>
          <ColumnTwo>
            <Card />
          </ColumnTwo>
        </ItemsLayout>
      </ScrollView>
    </Container>
  )
}

// ...

const ItemsLayout = styled.View`
  flex-direction: row;
  flex: 1;
`;

const ColumnOne = styled.View``;

const ColumnTwo = styled.View``;
After Subtitle, add a new view called ItemsLayout. This is going to be a layout that allows different cards to be divided between two columns in each row. This can be done by giving this view a flex-direction property with the value row.
ColumnOne and ColumnTwo are two empty views.
On rendering, the screen of the simulator looks like below.
Conclusion
Have you tried styled-components with React Native before? If not, are you going to try it now in your next project? Do comment below if you do or do not find styled-components a comfortable way to use in your React Native applications. You can extend this application too! Let your imagination wander. Submit a PR if you do so.
You can find the complete code for this article in the Github repo 👇
This post was originally published here.
I am available on Twitter so feel free to DM me if you need to. I also send a weekly newsletter to developers who are interested in learning more about web technologies and React Native.
Discussion (5)
import styled from 'styled-components'; does it work? Not import styled from 'styled-components/native';?
Both works.
Good content !
And the link to your repository is broken
And I didn't see you import any RN components
Where? | https://dev.to/amanhimself/using-styled-components-with-react-native-4k15 | CC-MAIN-2021-31 | refinedweb | 3,119 | 58.28 |
For a start, just create a Windows application project. Then add a picture box control on your form and name it pbMyPictureBox or whatever you like. I changed the back color of my picturebox so it is easy to see where it is positioned.
After that, open the code view of your form; we are going to modify and add a few lines of code. I modified the default Form1 constructor so that it accepts an array of strings. Then I created a private method named InitCustomComponents that also accepts an array of strings and is called just after Form1 is initialized.

In my InitCustomComponents method I first check whether the passed arguments are null and whether there are any items in the array at all. Then I read the first item in the array, which is actually the path of the image file that was dropped on our application's executable.
Of course it is possible that the user drags and drops some other file type (not an image); that is why I enclosed the following code inside a try catch block, to catch this kind of scenario. If the file is an image, we just create a new Bitmap from that file and show it in the picture box; otherwise an exception will be thrown and the user will be informed that there was an invalid parameter (but you can write any other message to be displayed to the user).
using System;
using System.Drawing;
using System.Windows.Forms;

namespace DroppingFileApp
{
    public partial class Form1 : Form
    {
        //Modified Form1 constructor to accept arguments
        public Form1(string[] args)
        {
            InitializeComponent();
            //Call the method that reads arguments
            //and shows the image in picturebox
            InitCustomComponents(args);
        }

        private void InitCustomComponents(string[] args)
        {
            //Verify if there are any arguments at all
            if (args != null && args.Length == 1)
            {
                //read the first argument in args string array
                string file = args[0];
                //Wrap the code in try catch block since it is
                //possible that argument is not a file or not
                //a picture
                try
                {
                    Bitmap image = new Bitmap(file);
                    pbMyPictureBox.SizeMode = PictureBoxSizeMode.Zoom;
                    pbMyPictureBox.Image = image;
                }
                catch (Exception ex)
                {
                    //Inform a user that there was an error
                    MessageBox.Show(ex.Message);
                }
            }
        }
    }
}
Now that our Form1 class is complete, we can make just a few more changes to our Program.cs class. This class is actually the entry point for our application, and it is the one that creates our Form1 instance and initializes it.
As you can see I only modified two lines of code in this file. First of all, I modified the Main method to accept an array of strings (args), and then when Application.Run is called I passed the args variable to the Form1 constructor.
using System;
using System.Windows.Forms;

namespace DroppingFileApp
{
    static class Program
    {
        /// <summary>
        /// The main entry point for the application.
        /// </summary>
        [STAThread]
        static void Main(string[] args) //Modified Main to accept arguments
        {
            Application.EnableVisualStyles();
            Application.SetCompatibleTextRenderingDefault(false);
            //Pass the args argument in Form1 constructor
            Application.Run(new Form1(args));
        }
    }
}
Now, compile your application, go to your Windows Explorer, find the application's executable file and then drag some image file and drop it on our executable and our application should open and show the picture.
But there is more :-) You can even open an image with our application using the Command Prompt (a feature most Windows programs provide):
To do this, open the Command Prompt, navigate to the directory where your application's executable exists and type the following (use the name of your application instead):
C:\>DroppingFileApp.exe "C:\Ales\Temp\firstStep.jpg"
That's it. I hope you like this tutorial and that you learned something new. | http://www.dreamincode.net/forums/topic/147466-drag-and-drop-a-file-on-another-file/page__pid__1979786__st__0 | CC-MAIN-2016-07 | refinedweb | 602 | 50.97 |
Both SyntaxError and IndentationError indicate a problem with the syntax of your program, but an IndentationError is more specific: it always means that there is a problem with how your code is indented.
Another very common type of error is called a NameError, and occurs when you try to use a variable that does not exist. For example:
print(a)
Variable name errors come with some of the most informative error messages, which are usually of the form "name 'the_variable_name' is not defined". One reason for getting this error is that you meant to use a string, but forgot to put quotes around it, so Python reads the would-be string as a variable name.
The second is that you just forgot to create the variable before using it. In the following example, count should have been defined (e.g., with count = 0) before the for loop:
for number in range(10):
    count = count + number
print("The count is: " + str(count))
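For comparison, defining count before the loop makes the same code run cleanly:

```python
# Define the accumulator before the loop so the name exists
# on the first iteration.
count = 0
for number in range(10):
    count = count + number
print("The count is: " + str(count))  # The count is: 45
```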
Here, Python is telling us that there is an IndexError in our code, meaning we tried to access a list index that did not exist.
The last type of error we'll cover today is the one associated with reading and writing files: FileNotFoundError. If you try to read a file that does not exist, you will receive a FileNotFoundError telling you so.
file_handle = open('myfile.txt', 'r')
One reason for receiving this error is that you specified an incorrect path to the file. For example, if I am currently in a folder called myproject, and I have a file in myproject/writing/myfile.txt, but I try to just open myfile.txt, this will fail. The correct path would be writing/myfile.txt. It is also possible (like with NameError) that you just made a typo.
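A small sketch of handling this defensively; the path here is made up, and err.filename echoes back whatever path Python failed to find, which makes typos and wrong directories easy to spot:

```python
path = 'writing/myfile.txt'  # hypothetical path for illustration
try:
    with open(path, 'r') as file_handle:
        text = file_handle.read()
except FileNotFoundError as err:
    # err.filename holds the path Python could not find.
    print('Could not find:', err.filename)
```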
Another issue could be that you used the "write" flag instead of the "read" flag. Python will not give you an error if you try to open a file for writing when the file does not exist. However, if you meant to open a file for reading, but accidentally opened it for writing, and then try to read from it, you will get an IOError telling you that the file was not opened for reading:
file_handle = open('myfile.txt', 'w')
file_handle.read()
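In modern Python 3 the exception raised here is io.UnsupportedOperation, which is a subclass of OSError (and IOError is just an alias of OSError), so either name catches it. A sketch using a temporary file so nothing real gets clobbered:

```python
import io
import tempfile

# mode='w' gives a write-only text handle; reading it is unsupported.
with tempfile.NamedTemporaryFile(mode='w', suffix='.txt') as file_handle:
    try:
        file_handle.read()
    except io.UnsupportedOperation as err:
        print('Cannot read a write-only handle:', err)
```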
These are the most common errors with files, though many others exist. If you get an error that you’ve never seen before, searching the Internet for that error type often reveals common reasons why you might get that error.
Run the code below and read the traceback. Identify the following pieces of information about it:

1. How many levels does the traceback have?
2. What is the file name where the error occurred?
3. What is the function name where the error occurred?
4. On which line number in this function did the error occur?
5. What is the type of error?
6. What is the error message?
import errors_02
errors_02.print_friday_message()
#1. 3
#2. ~/workshop_sunpy_astropy/errors_02.py
#3. print_message()
#4. 11
#5. KeyError
#6. 'Friday'
Run the code below and read the error message. Is it a SyntaxError or an IndentationError?
def another_function
  print("Syntax errors are annoying.")
   print("But at least python tells us about them!")
  print("So they are usually not too hard to fix.")
# def another_function - SyntaxError
# print("But at least...") - IndentationError
What kind of NameError do you think this is? In other words, is it a string with no quotes, a misspelled variable, or a variable that should have been defined but was not?
for number in range(10):
    # use a if the number is a multiple of 3, otherwise use b
    if (Number % 3) == 0:
        message = message + a
    else:
        message = message + "b"
print(message)
#1.
# if (Number % 3) == 0: - NameError
# message = message + a - NameError (twice)
#2.
# misspelled variable
# variable not defined
# string with no quotes
seasons = ['Spring', 'Summer', 'Fall', 'Winter']
print('My favorite season is ', seasons[4])
#1.
# seasons[4] - IndexError
Mapping Corporate Twitter Account Networks Using Twitter Contributions/Contributees API Calls
Savvy users of social networks are probably well-versed in the idea that corporate Twitter accounts are often "staffed" by several individuals (often identified by the ^AB convention at the end of a tweet, where AB are the initials of the person wearing that account's hat (^)); they may also know that social media accounts for smaller companies may actually be operated by a PR company or "social media guru" who churns out tweets on their behalf via Twitter accounts operated in the company's name and in support of its online marketing activity.
Rooting around the Twitter API looking for something else, I spotted a GET users/contributees API call, along with a complementary GET users/contributors call, that return "an array of users (i.e. Twitter accounts) that the specified user can contribute to", and the accounts that can contribute to a particular Twitter account, respectively.
I didn't know this functionality existed, so I put out a fishing tweet to see if anyone knew of any accounts running this feature other than the twitterapi account used by way of example in the API documentation. A response from Martin Hawksey (on whom I'm increasingly reliant for helping me keep up and get my head around the daily novelties that the web throws up!) suggested it was a feature that has been quietly rolling out to premium users: Twitter Starts Rolling Out Contributors Feature, Salesforce Activated. Via his reading of that post (I think), Martin suggested that a Bing(;-) search for site:twitter.com "via web by" would turn up a few likely candidates, and so it did…
So why's this interesting? Because given the ID of an account that a company uses for corporate tweets, or the ID of a user who also contributes to a corporate account via their own account, we might be able to map out something of the corporate comms network for an organisation operating multiple accounts (maybe a company, but maybe also a government department, local council, or lobbyist group), or the client list of a "social media guru" operating various accounts for different SMEs.
Anyway, here's a quick script for exploring the Twitter contributors/contributees API. The output is a graphml file we can visualise in Gephi.
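The gist embed has not survived in this copy of the post, so to give a flavour of what such a script involves, here is a minimal sketch of the crawl logic: breadth-first over a contributors-style lookup, then writing a bare-bones GraphML file for Gephi. The endpoint URL and function names are illustrative only (the unauthenticated v1 API this relied on is long gone):

```python
import json
import urllib.request
from xml.sax.saxutils import escape

# Illustrative only: the shape of the old unauthenticated v1 endpoint.
API = 'https://api.twitter.com/1/users/{rel}.json?screen_name={name}'

def fetch_related(name, rel='contributors'):
    # Return screen names related to `name` ('contributors' or
    # 'contributees'). Real code should back off on rate-limit errors.
    with urllib.request.urlopen(API.format(rel=rel, name=name)) as resp:
        return [u['screen_name'] for u in json.load(resp)]

def crawl(seed, depth=3, fetch=fetch_related):
    # Breadth-first crawl; every edge is (account, related_account).
    edges, seen, frontier = set(), {seed}, [seed]
    for _ in range(depth):
        nxt = []
        for name in frontier:
            for other in fetch(name):
                edges.add((name, other))
                if other not in seen:
                    seen.add(other)
                    nxt.append(other)
        frontier = nxt
    return edges

def to_graphml(edges):
    # Emit a minimal GraphML document that Gephi can open directly.
    nodes = {n for edge in edges for n in edge}
    lines = ['<?xml version="1.0" encoding="UTF-8"?>',
             '<graphml xmlns="http://graphml.graphdrawing.org/xmlns">',
             '<graph edgedefault="directed">']
    lines += ['<node id="%s"/>' % escape(n) for n in sorted(nodes)]
    lines += ['<edge source="%s" target="%s"/>' % (escape(s), escape(t))
              for s, t in sorted(edges)]
    lines += ['</graph>', '</graphml>']
    return '\n'.join(lines)
```

Swapping a canned dict in for fetch makes the crawl testable without touching the network, which is also how you would cope with the 150-calls-an-hour rate limit while developing.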
And here are a couple of views over what it comes up with. Firstly, a map bootstrapped from the @twitterapi account:
And here’s one I built out from HuffingtonPost:
So what do we learn from this? Firstly it’s yet another example of how networks get everywhere. Secondly, it raises the question (for me) of whether there are any cribs in other multi-contributor social network apps (maybe in tweet metadata) that allow us to identify originating authors/users and hence find a way into mapping their contribution networks.
As well as building out from an account name to which users contribute, we can bootstrap a map from a user who is known to contribute to one or more accounts (code not included in Github gist atm).
So for example, here’s a map built out from user @VeeVee:
I guess one of the next questions from a tool building point of view is: is there a more reliable way of getting cribs into possible contributor/contributee networks? Another is: are any other multi-contributor services (on Twitter or other networks, such as Google+) similarly mappable?
PS Just noticed this: Google to drop Google Social API. I also read on a Google blog that the Needlebase screenscraping tool Google acquired as part of the ITA acquisition will be shut down later this year…
Big red circles around things always catch my attention which is why the TechCrunch article stuck in my mind (which I only went looking for to try and work out what was going on). One more trawl for info and it was interesting to see the Salesforce twitter account had gone back to ^AB type signatures to use their 'Social Media Monitoring and Engagement' platform radian6 for tweeting. Given that the search term is 'via web by' suggests that almost 2 years on Twitter hasn't got around to a post as a contributor part of their API (imagine this has left some businesses scratching their head).
The HuffingtonPost web is interesting. Given that it appears updates are via the web why bother with a network of nameless accounts.
[Didn’t know Social API was closing – balance is restored ;)]
Martin
Hi there,
Goldsmiths CAST student here. I get the following error when I run your script:
Traceback (most recent call last):
  File "twContribs.py", line 15, in <module>
    fpath='/'.join(['reports','contributors','_'.join(args.contributeto)])
TypeError
—
Am I doing something wrong here?
Thanks,
Sam
@sam how are you running the script… it’s all a bit clunky (didn’t I warn you?!;-)
An example way of calling it is:
python twContribs.py -contributeto twitterapi -depth 5
PS I also updated the gist just now to the copy I currently have running locally, just in case..
@tony Thanks for this, the updated script seemed to work, however, now I only have one Node when I open the graph.graphml into Gephi. Should I change something in the script?
Thanks
Sam
@tony. Not sure what’s happening, as it looks like I’m putting in the required data in lines 6-9:
parser = argparse.ArgumentParser(description='Mine Twitter account contributions')
parser.add_argument('-contributeto', nargs='*', help="MichelleObama,whitehouse,datastore,castlondon,GdnDevelopment")
parser.add_argument('-contributeby', nargs='*', help="danmcquillan,zia505,datastore,castlondon,AlexGraul")
parser.add_argument('-depth', default=3, type=int, metavar='N', help='5')
—
But I get this at the ned of my python output and no nodes:
fetching fresh copy of fetched url:
oops
{‘userlist': [], ‘graph': , ‘contributors': {‘twitterapi': []}, ‘accountlist': [‘twitterapi’], ‘contributees': {}}
contributors {‘twitterapi': []}
contributees {}
accountlist [‘twitterapi’]
userlist []
@sam is simplejson and urllib2 loading? From a console, run Python (just type: python); then:
import simplejson
import urllib2
Or start putting print statements everywhere to try to track what’s going on;-)
@sam: also note that: 'help="A space separated list of account names (without the @) for whom you want to find the contributors.")' is a statement that appears when you call the help file relating to the script from the command line ( python twContribs.py -h ), not a "PUT SPACE SEPARATED VALUES HERE" instruction to the user.
The ‘parser’ commands in the script set up a parser that python uses when you execute a command from the command line. So from the command line, if you type something like:
python twContribs.py -contributeto twitterapi
the script knows about -contributeto
whereas it doesn’t know about:
python -someRandomCommandLineArgument somerandomvalue
(If you look up the python documentation – use your favourite search engine to search for: python argparse – it will explain what argparse is about.)
Also note that the script accepts space separated multiple values [help="A space separated list of account names (without the @) for whom you want to find the contributors."] so you can run things like: python twContribs.py -contributeto twitterapi starbucks
If you try comma separated vals, it probably won’t work…
It’s also worth bearing in mind that most accounts aren’t associated with contributions to/by other accounts…
@sam oh yes, one final thing… the script uses unauthenticated twitter api calls, so it maxes out quite quickly (150 calls an hour). I should probably print an error message when this happens, but I don’t (feel free to add it into the script). A quick way to check (though it uses an API call) is just to call the API from your browser eg paste:
into your browser location bar. If you get a message along the lines of “Error – too many calls/API rate limit exceeded, back off for an hour..” then you’ve maxed out for a bit… You can get more calls per hour using OAuth/authenticated calls to the API, but that’s more code, more things to go wrong, etc etc.
@tony Ah, understood. Thanks for your input – I’ll give this a try – appreciated.
@sam any joy?
@tony Almost got it working – just a few glitches which I am ironing out though printing – will give it another try first thing in the morning – my eyes are going a bit matrix at the mo’. Thanks for your help!
@sam what sort of glitches??? The script is tiny – would be handy for me to know how many different things can go wrong with it…. ;-)
@tony Nothing wrong with your script – just one of my libraries not properly installed. All sorted now. Thanks for your help. :-)
@sam Ah, thanks… I maybe need to write a diagnostic script that tests for libraries I commonly use that folk can run as a test script; would that be useful?
@tony That would be extremely useful. Thanks. | http://blog.ouseful.info/2012/01/23/mapping-corporate-twitter-account-networks-using-the-twitter-multiple-author-contributionscontributees-api/ | CC-MAIN-2015-32 | refinedweb | 1,477 | 58.21 |
XML is inherently a difficult data structure to manipulate. Whitespace, carriage returns, and newlines make a big difference in validating signatures and/or digest values. If you accidentally miss a character in your text manipulation, perform the wrong canonicalization, etc., your one-way SHA hash can easily be affected, leaving you unable to verify the signature of the data.
One of the idiosyncrasies of the lxml library, described best in this lxml document, is that the internal data structures are stored as Element objects with a .text and a .tail property. The .text property represents the text inside the tag, while the .tail property represents the text between tags. This data structure differs from the DOM model, in which the text after an element is represented by the parent. For example, consider this XML structure:
<a>aTEXT
<b>bTEXT</b>bTAIL
</a>aTAIL
This can be represented with the following lxml code:
from lxml import etree

a = etree.Element('a')
a.text = "aTEXT"
a.tail = "aTAIL"

b = etree.SubElement(a, 'b')
b.text = "bTEXT"
b.tail = "bTAIL"
What happens if you remove the 'b' node? Ideally, the text with the 'b' tag disappears, while the bTAIL gets moved up. The structure would look like the following:
<a>aTEXTbTAIL</a>aTAIL
The command to remove the lxml node would be:
a.remove(b)
Upon making this change, however, it appears that in lxml v2.3 the output appeared as: <a>aTEXT</a>aTAIL
In order to understand what's going on, I had to download the source for lxml, install the Cython library that converts the .pyx code to .c bindings, recompile, and link the new etree.so binary. If you're curious, the instructions for doing so are posted here.
Upon inspecting etree.pyx, I noticed the code to move the tail occurred after unlinking the node. What we really want is for the tail to be moved before the node is unlinked. Otherwise, the information about the tail would also potentially be removed, which may explain why the tail was never copied.
 def remove(self, _Element element not None):
     ...
-    tree.xmlUnlinkNode(c_node)
     _moveTail(c_next, c_node)
+    tree.xmlUnlinkNode(c_node)
Examining the _moveTail code also points to something interesting. The .tail is represented internally by XML text nodes, which are siblings of the current node (reached via the .next pointer). Text nodes are also XML text nodes, but appear as children of the node. There is a loop that traverses the linked list of nodes, such that there can be multiple text nodes, which could happen if multiple subelements were removed and you were left with a chain of XML-based .tail nodes.
cdef void _moveTail(xmlNode* c_tail, xmlNode* c_target):
    cdef xmlNode* c_next
    # tail support: look for any text nodes trailing this node and
    # move them too
    c_tail = _textNodeOrSkip(c_tail)
    while c_tail is not NULL:
        c_next = _textNodeOrSkip(c_tail.next)
        tree.xmlUnlinkNode(c_tail)
        tree.xmlAddNextSibling(c_target, c_tail)
        c_target = c_tail
        c_tail = c_next
Upon fixing this code, the test_xinclude_text test started failing. If I recompiled and reverted back to the original etree.pyx, the test passed fine. One even more unusual aspect was the invocation of self.include(), which appeared to be overridden depending on whether the lxml library would rely on the native implementation of the xinclude() routine, or rely on its Python-based version that allows external URLs to be referenced in ElementInclude.py.
def test_xinclude_text(self):
    filename = fileInTestDir('test_broken.xml')
    root = etree.XML(_bytes('''\
    ''' % filename))
    old_text = root.text
    content = read_file(filename)
    old_tail = root[0].tail
    self.include( etree.ElementTree(root) )
    self.assertEquals(old_text + content + old_tail, root.text)
The test_xinclude_text() is a routine to verify that one can use <xi:include> directives to incorporate other files within an XML document. When such a tag is discovered, the contents of the file are read (in this case, the contents of test_broken.xml) and the entire node is substituted with this text. The parent node's .text property will then be set and the <xi:include> is removed.
It appears that code within ElementInclude.py masked this issue by appending the tail before removing the element:
@@ -204,7 +204,8 @@ def _include(elem, loader=None, _parent_hrefs=None, base_url=None):
         elif parent is None:
             return text # replaced the root node!
         else:
-            parent.text = (parent.text or "") + text + (e.tail or "")
+            parent.text = (parent.text or "") + text
             parent.remove(e)
The entire pull request for this fix is located here:
Update on this PR:
Note that this is a deliberate design choice. It will not change.
In other words, if you remove a subelement, you have to take care of the .tail and move it to the right tag. The lxml library will not change so this PR request was rejected.
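Concretely, "taking care of the .tail" means re-attaching it yourself before calling remove(). The stdlib xml.etree.ElementTree shares lxml's text/tail model, so the idea can be sketched without lxml installed (the helper name here is my own):

```python
import xml.etree.ElementTree as ET

def remove_keep_tail(parent, child):
    # Re-attach the child's tail before unlinking it: either to the
    # previous sibling's tail, or to the parent's text if the child
    # is the first element.
    children = list(parent)
    idx = children.index(child)
    tail = child.tail or ''
    if idx > 0:
        prev = children[idx - 1]
        prev.tail = (prev.tail or '') + tail
    else:
        parent.text = (parent.text or '') + tail
    parent.remove(child)

a = ET.fromstring('<a>aTEXT<b>bTEXT</b>bTAIL</a>')
remove_keep_tail(a, a.find('b'))
print(ET.tostring(a, encoding='unicode'))  # <a>aTEXTbTAIL</a>
```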
Hi, can you tell me how to remove HTML-formatting tags such as 'i', 's' or 'em', but preserve the text?
For example, this:
1st i 1st b1st em
should become
1st i 1st b 1st em
Greetings
> In other words, if you remove a subelement, you have to take care of the .tail and move it to the right tag.
Please tell me how to do this. I've tried several times, but it doesn't do what I want.
Sorry to ask you again,
afix
I know this is old, but for anyone looking, use the drop_tree() method, documented on this page:
unfortunately, the issue is not limited to just 'removing' or 'dropping subtrees'; it's a major pain when moving nodes/subtrees around.
IMHO, I find it a real shame because this issue renders a great (fast) library rather unusable -- unless I do what Roger did in this post to set things right for myself before I use it.
It doesn't appear to remove the tail text. | http://hustoknow.blogspot.com/2011/09/lxml-bug.html | CC-MAIN-2017-51 | refinedweb | 993 | 57.98 |
I've been using Xcode for quite some time now, and I just installed it on my MacBook (I had it installed before, but I accidently erased my hard disk). Now I'm just trying to rewrite my game engine, so I started off with a simple cPoint2D class.
When I try to compile it, the linker keeps failing because "all the symbols in cPoint2D are undefined". I checked to make sure that cPoint2D.cpp was getting compiled, and it was. Then I tried moving all the code into main.cpp, and the problem went away. So, I figure this is more of a compiler issue than with my code, so I posted it here in the Tech Board.
But I just don't get it.......
Here is my code, just for you to observe:
Please do not criticize me for bad class design. This is a work in progress.
cPoint3D.h:
cPoint2D.cpp:cPoint2D.cpp:PHP Code:
// cPoint2D - basic class for 2D points
#ifndef CPOINT2D_H
#define CPOINT2D_H
#include <iostream>
using namespace std;
template <class T>
class cPoint2D {
public:
T x, y;
T operator*(cPoint2D<T> &value);
cPoint2D<T> operator+(cPoint2D<T> *value);
void print();
};
typedef cPoint2D<float> cfloat2D;
typedef cPoint2D<int> cint2D;
#endif
Basically, the compiler doesn't cough on this; it's just whenever I try to call any of the member functions from cPoint2D that I get a linker error saying that the function is undefined.Basically, the compiler doesn't cough on this; it's just whenever I try to call any of the member functions from cPoint2D that I get a linker error saying that the function is undefined.PHP Code:
// cPoint2D - basic class for 2D points
#include "cPoint2D.h"
template <class T>
T cPoint2D<T>::operator*(cPoint2D<T> &value) {
return x*(value->x) + y*(value->y);
}
template <class T>
cPoint2D<T> cPoint2D<T>::operator+(cPoint2D<T> *value) {
value->x += x;
value->y += y;
return value;
}
template <class T>
void cPoint2D<T>::print() {
cout << "cPoint2D: x=" << x << "; y=" << y << endl;
} | http://cboard.cprogramming.com/tech-board/81059-why-earth-xcode-%2A-%5E%25-acting-up.html | CC-MAIN-2014-49 | refinedweb | 335 | 68.5 |
On 08/25/2009 08:22 PM, Nitin Gupta wrote:
> On 08/25/2009 03:16 AM, Hugh Dickins wrote:
>> On Tue, 25 Aug 2009, Nitin Gupta wrote:
>>> On 08/25/2009 02:09 AM, Hugh Dickins wrote:
>>>> On Tue, 25 Aug 2009, Nitin Gupta wrote:
>>>>> On 08/24/2009 11:03 PM, Pekka Enberg wrote:
>>>>>>
>>>>>> What's the purpose of passing PFNs around? There's quite a lot of PFN
>>>>>> to struct page conversion going on because of it. Wouldn't it make
>>>>>> more sense to return (and pass) a pointer to struct page instead?
>>>>>
>>>>> PFNs are 32-bit on all archs
>>>>
>>>> Are you sure? If it happens to be so for all machines built today,
>>>> I think it can easily change tomorrow. We consistently use unsigned
>>>> long for pfn (there, now I've said that, I bet you'll find somewhere we
>>>> don't!)
>>>>
>>>> x86_64 says MAX_PHYSMEM_BITS 46 and ia64 says MAX_PHYSMEM_BITS 50 and
>>>> mm/sparse.c says
>>>> unsigned long max_sparsemem_pfn = 1UL << (MAX_PHYSMEM_BITS - PAGE_SHIFT);
>>>
>>> For PFN to exceed 32-bit we need to have physical memory > 16TB
>>> (2^32 * 4KB).
>>> So, maybe I can simply add a check in ramzswap module load to make
>>> sure that RAM is indeed < 16TB and then safely use 32-bit for PFN?
>>
>> Others know much more about it, but I believe that with sparsemem you
>> may be handling vast holes in physical memory: so a relatively small
>> amount of physical memory might in part be mapped with gigantic pfns.
>>
>> So if you go that route, I think you'd rather have to refuse pages
>> with oversized pfns (or refuse configurations with any oversized pfns),
>> than base it upon the quantity of physical memory in the machine.
>>
>> Seems ugly to me, as it did to Pekka; but I can understand that you're
>> very much in the business of saving memory, so doubling the size of some
>> of your tables (I may be oversimplifying) would be repugnant to you.
>>
>> You could add a CONFIG option, rather like CONFIG_LBDAF, to switch on
>> u64-sized pfns; but you'd still have to handle what happens when the
>> pfn is too big to fit in u32 without that option; and if distros always
>> switch the option on, to accomodate the larger machines, then there may
>> have been no point to adding it.
>
> Thanks for these details.
>
> Now I understand that use of 32-bit PFN on 64-bit archs is unsafe. So,
> there is no option but to include extra bits for PFNs or use struct page.
>
> * Solution of ramzswap block device:
>
> Use 48 bit PFNs (32 + 8) and have a compile time error to make sure
> that MAX_PHYSMEM_BITS is < 48 + PAGE_SHIFT. The ramzswap table can
> accommodate 48-bits without any increase in table size.

I went crazy. I meant 40 bits for PFN -- not 48. This 40-bit PFN should
be sufficient for all archs. For archs where 40 + PAGE_SHIFT <
MAX_PHYSMEM_BITS, ramzswap will just issue a compiler error.

Thanks,
Nitin
Odoo Help
This community is for beginners and experts willing to share their Odoo knowledge. It's not a forum to discuss ideas, but a knowledge base of questions and their answers.
How to delete an invoice after validation?
How to delete an invoice after validation?.
Go to Addons>>account module, open account_invoice.py file. find the unlink method which is same as:
def unlink(self, cr, uid, ids, context=None): if context is None: context = {} invoices = self.read(cr, uid, ids, ['state','internal_number'], context=context) unlink_ids = [] for t in invoices: if t['state'] in ('draft', 'cancel') and t['internal_number']== False: unlink_ids.append(t['id']) else: raise osv.except_osv(_('Invalid Action!'), _('You can not delete an invoice which is not cancelled. You should refund it instead.')) osv.osv.unlink(self, cr, uid, unlink_ids, context=context) return True
Just remove "and t['internal_number']== False" from if statement or change it to "and t['internal_number']== True", and save it. after doing it you have to restart the openerp server. Now go to admin and delete the canceled invoice, it will work perfectly.
There is also an another trick, which will work every time.
In account_invoice.py in the 'action_cancel' method
self.write(cr, uid, ids, {'state':'cancel', 'move_id':False})
just replace this code with below one:
self.write(cr, uid, ids, {'state':'cancel', 'internal_number':False ,'move_id':False})
If the Invoice has not been paid, you can CANCEL it.
Only do this if you are sure the Invoice has not been sent to your Customer/Supplier.
thank you very much for your answer, but how can I remove it
Also - if you unreconcile the payments to reopen it, you can cancel an invoice previously recorded as paid.
@Bista Solutions, I have unreconciled the payments and tried to re-open the invoice but. it's still in PAID state. This is in V7.
Hello,
I stuck at step 4 ... In my installation (openerp 7, python 2.6, centos) following fragment of account_invoice.py code is little different, eg:
def unlink(self, cr, uid, ids, context=None): if context is None: context = {} invoices = self.read(cr, uid, ids, ['state','internal_number'], context=context) unlink_ids = [] for t in invoices: if t['state'] not in ('draft', 'cancel'): raise openerp.exceptions.Warning(_('You cannot delete an invoice which is not draft or cancelled. You should refund it instead.')) elif t['internal_number']: raise openerp.exceptions.Warning(_('You cannot delete an invoice after it has been validated (and received a number). You can set it back to "Draft" state and modify its content, the n re-confirm it.')) else: unlink_ids.append(t['id']) osv.osv.unlink(self, cr, uid, unlink_ids, context=context) return True def onchange_partner_id(self, cr, uid, ids, type, partner_id,\ date_invoice=False, payment_term=False, partner_bank_id=False, company_id=False): partner_payment_term = False acc_id = False bank_id = False fiscal_position = False
There is no such thing like: t['internal_number']== False. I tried delete this line, modify content sth, but installation just hangs.
Also, I can modify account_invoice.py content using second method, openerp working, but with no effect (invoice still cant be deleted). In log i have following output:
Traceback (most recent call last): File "/usr/lib/python2.6/site-packages/openerp-7.0_20130908_231053-py2.6.egg/openerp/osv/osv.py", line 131, in wrapper return f(self, dbname, args, *kwargs) File "/usr/lib/python2.6/site-packages/openerp-7.0_20130908_231053-py2.6.egg/openerp/osv/osv.py", line 197, in execute res = self.execute_cr(cr, uid, obj, method, args, *kw) File "/usr/lib/python2.6/site-packages/openerp-7.0_20130908_231053-py2.6.egg/openerp/osv/osv.py", line 185, in execute_cr return getattr(object, method)(cr, uid, args, *kw) File "/usr/lib/python2.6/site-packages/openerp-7.0_20130908_231053-py2.6.egg/openerp/addons/sale/sale.py", line 1012, in unlink return super(account_invoice, self).unlink(cr, uid, ids, context=context) File "/usr/lib/python2.6/site-packages/openerp-7.0_20130908_231053-py2.6.egg/openerp/addons/account/account_invoice.py", line 474, in unlink
Any ideas will be welcomed :)
2) Allow Cancelling Entries of corresponding journals.
Hi, I am stuck on step 2. I have installed account_cancel module.
Yet, in settings-->>accounting-->>configuration -- I do not have any check options relating to journals. I have unistalled and re-installed account_cancel module, updated it, looked at the technical features tab etc and nothing.
It simply isn't showing up under settings. I can't select "cancel entries of corresponding journals" because I cannot find it!
any advice/ thoughts please? Very much appreciated.
What you are looking for is a checkbox on the Journal form view. Accounting --> Configuration --> Journals -> Journals. Open the Journal for which you want to allow cancelling, and make sure "Allow Cancelling Entries" is checked.
Hi Ray, thank you for taking the time to answer.
That option isn't showing. I cannot access journals under configuration. I have since discovered that the online edition doesn't allow for modules to be added / updated, thus explaining why this configuration option isn't showing up. Many thanks anyways!
The online edition allows for modules written by OpenERP to be installed. The module "Cancel Journal Entries" is one of these modules. You need to be logged in as the administrator of your database to install the module, and a member of the Technical Features group to see all of the hidden menus.
Is there any option same as that used to delete the invoice so that all the other documents like sales orders,delivery orders,internal moves etc can be deleted in various statuses without much difficulty?
You can set the sales journal to allow canceling entries. by going to Configuration > Journals. Then you can cancel the invoice to remove any journal entries and the invoice will appear grayed out in your list. OR you can refund it if thats more applicable to the particular situation.
However, you shouldn't be able to delete the record completely from OpenERP, once it has been validated. Full deletion would not be fantastic accounting practice as there wouldn't be any traceability :)
Hope that helps.
Hi! What if all you entries that you made until the moment you want to delete them are for test purpose only. How can you delete those entries (invoice, payments...) ?
If you are just testing, maybe its worth creating a separate database for testing purposes and keeping all your production entries in a real one. If its too late and you HAVE to remove entries you can do this manually by deleting them from postgres, but this is not advisable if there is an alternative.
I have faced the same problem, and I have deleted all records from account_invoice* tables in openerp database. That data were not for testing, is just a bad operation For example, I have created a proforma invoice, then I needed to change the price but proforma does not allow to change it. I tried to refund, but new records were created. After some trials to cancel an invoice, I got 6 new invoices in my accounting system. So the only solution was to delete data! | https://www.odoo.com/forum/help-1/question/how-to-delete-an-invoice-after-validation-20831 | CC-MAIN-2016-44 | refinedweb | 1,180 | 58.48 |
TL;DR - Here is a demo of what we are going to achieve. Once you have this basic implementation you can use it in many different ways - in a View, to send notifications, etc.
You should be familiar with:.
First, you should install the third-party you want to use.
$ pip install third-party
The solution I am going to introduce to you is pretty generic so you could use it for every third-party app you need to integrate. Most of them come with a
client you need to instantiate using an API key or something like that.
You actually use this client to communicate with the API of your third-party app. If there is no client interface in the library you can always create an abstract one and use it if needed.
Every third-party app has its own documentation and you have to follow it to be able to use their interface. Mostly, creating a new client instance is done this way:
from third_party import ThirdPartyClient # Initialize client using api key client = ThirdPartyClient(api_key=settings.THIRD_PARTY_API_KEY)
!!! I strongly recommend to store data such as API keys and stuff like that into environment variables and using it from your settings files.
Now, once we’ve finished with the installation, we have to figure out how we are going to use our third-party application. As calling the server usually takes some time to get the response back, we should make calls asynchronously.
I am going to use Celery as it’s a stable and easy-to-use option for our needs. It requires a message queue and a great option here is RabbitMQ. You can check their documentation for the steps of installation if you haven’t set them up before.
As we are all set up, we are ready to go and write a task to take some information from our third-party.
from third_party import ThirdPartyClient from celery import shared_task from django.conf import settings @shared_task def fetch_data(): client = ThirdPartyClient(api_key=settings.THIRD_PARTY_API_KEY) payload = client.fetch_data_method() return payload
Everything is great at the moment – we have third-party app in our system which has a client and we communicate with it via async celery tasks. Now we have to go back to the synchronous MVC world of Django.
First, we need to make a model to store the data we fetch from the third-party app in order to be able to use it in our views and templates.
from django.db import models class ThirdPartyDataStorage(models.Model): # your fields here ...
Since we already have where to store the fetched data, it’s about time to actually store it. This must happen in a specific moment – after we fetch it. As we do this via a celery task, we have to wait for the task to finish. Actually, there are 2 options:
one function would have to do a couple of things;
Now we clearly know what we have to do:
OK, let’s do the first two steps:
# in tasks.py from third_party import ThirdPartyClient from celery import shared_task from django.conf import settings from django.db import transaction from .models import ThirdPartyDataStorage @shared_task def fetch_data(): client = ThirdPartyClient(api_key=settings.THIRD_PARTY_API_KEY) fetched_data = client.fetch_data_method() return fetched_data @shared_task @transaction.atomic # In case you have complex logic connected to DB transactions def store_data(feched_data): container = ThirdPartyDataStorage(**fetched_data) container.save() return container.id
Our tasks are ready – now we have to chain them. This can be done easily using Celery’s chain().
To use it we can create a new task that just delays them:
from celery import shared_task, chain @shared_task def fetch_data_and_store_it(): t1 = fetch_data.s() t2 = store_data.s() return chain(t1, t2).delay()
Data returned from the first task will be given to the second one as we use signatures (
.s()). If you are not familiar with Celery I highly recommend that you read their documentation.
We’ve just come up with a single task from our
tasks.py file which we are going to use –
fetch_data_and_store_it().
The next question is: Where should I use it? In the view is probably the most natural answer you come up with and yes, you’re right.
In the MVC model (Django’s MTV), Controllers (Views) are responsible for generating the business logic. As your project gets bigger and bigger, you may need to use the task in different places – views, APIviews, etc.
In that case, you have to think of a better place of doing some more complicated business logic. For the needs of the current example, we will just use the task directly in a view but I recommend you to follow our blog for articles oriented on making your Django project nicely structured.
The example:
# in views.py from django.shortcuts import render from django.http import HttpResponseNotAllowed from .tasks import fetch_data_and_store_it def my_view(request): if request.method == 'GET': task = fetch_data_and_store_it.delay() return render(request, 'index.html') return HttpResponseNotAllowed(['GET'])
That’s all – our structure is now clear. Let’s think of the main reason for this article – What do I do when an error occurs when I call a third-party app?
As we can see, our tasks are just python functions decorated with @shared_task (there are several other decorators in Celery that can do nearly the same).
What do these decorators really do? They just transform our function into an
instance of Celery's Task class. Of course, as you expect, this class has all methods that we need if we want to plug into and make some custom logic.
What we are really interested in are the methods
on_failure() &
on_success() – the first one is called if an Exception is raised and the other one is called if there is no Exception.
We want to use the base Task class to handle third-party errors. Mostly all of the third-party apps raise an Exception when there is an error – you have messed up the API key (403 Permission Denied), a third-party server has some issues (500 Internal Server Error), and so on.
You may act differently depending on the errors, as Celery gives us the
retry() but you always want to know when an error occurs.
Furthermore, if your system is used by some administration you should tell them when an error occurs – for example, an email is not sent or something like that. It’s not proper to let them search in celery.logs.
When we know all these stuff we are ready to use the OOP powers and make our
BaseErrorHandler. You may’ve noticed that it’s Base so we have to make it abstract enough to use it in all third-party apps we want to integrate with. That’s why we are going to make a mixin:
class BaseErrorHandlerMixin: def on_failure(self, exc, task_id, args, kwargs, einfo): ''' exc – The exception raised by the task. task_id – Unique id of the failed task. args – Original arguments for the task that failed. kwargs – Original keyword arguments for the task that failed. ''' pass def on_success(self, retval, task_id, args, kwargs): ''' retval – The return value of the task. task_id – Unique id of the executed task. args – Original arguments for the executed task. kwargs – Original keyword arguments for the executed task. ''' pass
on_failure() method is run by the worker when the task fails.
on_success() is run by the worker when the task is executed successfully. These methods are called synchronously before the serialization so there is no problem in having an Exception.
Now we have the main abstraction – let’s dive right in!
As I’ve said before, every Celery task is actually an instance of Task class. Therefore, we can use it and inherit it.
This is easily done by changing the base of our tasks in the decorator we use. The problem is we’ve done just a mixin.
What is our new “base”? The answer is simple. We didn’t do a mixin to just be fancy – we want flexibility – now we can use it for every third-party task in our system.
The next step is to use the
Task class:
# in tasks.py from celery import Task from .mixins import BaseErrorHandlerMixin # Be careful with ordering the MRO class ThirdPartyBaseTask(BaseErrorHandlerMixin, Task): pass
In our example we just have to add the following in the
fetch_data() task decorator (you have to change the base of each task you want to track exceptions from – in our example
store_data() task is not making any third-party calls or doing any complicated logic so we don’t need to change its base):
@shared_task(base=ThirdPartyBaseTask) def fetch_data(): client = ThirdPartyCLient(api_key=settings.THIRD_PARTY_API_KEY) fetched_data = client.fetch_data_method() return fetched_data
Now we are ready to handle exceptions – let’s actually do it!
We have the architecture done and we just have to implement the logic to actually handle the errors.
First of all, we have to remember we are in the synchronous MVC world of Django. Yes, the option we have is to store the exceptions information in the database. This way we can use it to visualize it in a view and easily track all errors.
from django.db import models class AsyncActionReport(models.Model): PENDING = 'pending' OK = 'ok' FAILED = 'failed' STATUS_CHOICES = ( (PENDING, 'pending'), (OK, 'ok'), (FAILED, 'failed') ) status = models.CharField(max_length=7, choices=STATUS_CHOICES, default=PENDING) error_message = models.TextField(null=True, blank=True) error_traceback = models.TextField(null=True, blank=True) def __str__(self): return self.action
This is the basic implementation of a model we want. Other fields you may want to add are:
We have where to store the exceptions information so we have to use it now. Here is where our
BaseErrorHandlerMixin comes in place.
Here is a simple diagram that shows what we are going to
Since we are ready with the
AsyncActionReport model, we have to create an instance of it. But how and where should we create it if we want to use it in the
main task -> secondary tasks -> BaseErrorHandlerMixin?
Here comes the magic of Python! We are able to create the instance in the first position of the track (
fetch_data_and_store_it() task) and to pass it upstairs. Just use the
**kwargs!
In other words, once we create an
AsyncActionReport instance, we can pass its
id as a key-word argument in the secondary tasks we want to handle errors from (
fetch_data() task).
This is how our tasks.py file finally looks:
# in tasks.py from third_party import ThirdPartyClient from celery import shared_task, Task, chain from django.db import transaction from django.conf import settings from .models import ThirdPartyDataStorage, AsyncActionReport from .mixins import BaseErrorHandlerMixin class ThirdPartyBaseTask(BaseErrorHandlerMixin, Task): pass @shared_task(base=ThirdPartyBaseTask) def fetch_data(**kwargs): """ Expected kwargs: 'async_action_report_id': AsyncActionReport.id. This kwargs is going to be passed to the constructor of the ThirdPartyBaseTask so we can handle the exceptions and store it in the AsyncActionReport model. """ client = ThirdPartyClient(api_key=settings.THIRD_PARTY_API_KEY) fetched_data = client.fetch_data_method() return fetched_data @shared_task @transaction.atomic def store_data(fetched_data): container = ThirdPartyDataStorage(**fetched_data) container.save() return container.id @shared_task def fetch_data_and_store_it(): async_action_report = AsyncActionReport() t1 = fetch_data.s(async_action_report_id=async_action_report.id) t2 = store_data.s() return chain(t1, t2).delay()
Now we can add the actual logic for handling errors in the
BaseErrorHandlerMixin
from .models import AsyncActionReport class BaseErrorHandlerMixin: def on_failure(self, exc, task_id, args, kwargs, einfo): AsyncActionReport.objects.filter(id=kwargs['async_action_report_id'])\ .update(status=AsyncActionReport.FAILED, error_message=str(exc), error_traceback=einfo) def on_success(self, retval, task_id, args, kwargs): AsyncActionReport.objects.filter(id=kwargs['async_action_report_id'])\ .update(status=AsyncActionReport.OK)
Everything is ready and we are now able to handle the errors from the third-party apps we use. As our project gets bigger and bigger we may need to use more and more third-party apps.
What I can recommend to you is to make a different app called
integrations where you can add all logic connected with the integrations you use.
That’s how you can make your code easier to read and maintain and you can unittest it. And as I have just mentioned about “unittesting cellery tasks” you can look forward to an article about that soon. | https://www.hacksoft.io/blog/handle-third-party-errors-with-celery-and-django | CC-MAIN-2022-40 | refinedweb | 2,011 | 57.87 |
Do you want to check numpy version after the installion on numpy on your system. Then this article is for you. In this entire post you will know various ways to check numpy version in Python.
Various Method to Check Numpy Version in Python
Method 1: Checking numpy version using two dot operators
Here you have to use two dot operator and word “version” just like below. It output the version of currently installed numpy.
import numpy as np np.version.version
Method 2 : Two underlines in the beginning and in the end
You can use __version__ to know the current version of the installed numpy array.
import numpy as np np.__version__
Method 3: How to check the version of numpy in cmd
Just like you are writing code in your IDEs and outputting the numpy version. You can do so in command prompt also. Make sure that you have already installed numpy module.
Open your command prompt and type the following text and click on enter.
python -c "import numpy; print(numpy.__version__)"
You will get the version number
Method 4: Check numpy version using pip
You can use the pip show command to know the version of the numpy array. Copy and paste the command below to output the version number.
pip show numpy
Using this command you will get all the details about the python package like, Name,version, author ,summary ,location e.t.c which is very useful.
Method 5: Know numpy version with other modules.
There is also a way to know all the installed python pakages in your system. You will get module name and its version. Use the below command to get the all the modules name. Then look for the numpy and its version
pip list
END NOTES
These are various method I have compiled for you. You can use any method as per your convenience to check numpy version in python. If you ask me which one you should use then it depends upon where are you coding. For example If I am using pycharm then I will use the first and second one and if in command prompt then method 3 and 4. And method 5 if you are using python shell.
Hope this article has completed your confusion on how to numpy version in python. If you have any other queries then you can contact us.
Source:
Join our list
Subscribe to our mailing list and get interesting stuff and updates to your email inbox. | https://www.datasciencelearner.com/check-numpy-version-python/ | CC-MAIN-2021-39 | refinedweb | 417 | 82.54 |
table of contents
- buster 4.16-2
- buster-backports 5.04-1~bpo10+1
- testing 5.07-1
- unstable 5.08-1
NAME¶mbtowc - convert a multibyte sequence to a wide character
SYNOPSIS¶
#include <stdlib.h>
int mbtowc(wchar_t *pwc, const char *s, size_t n);
DESCRIPTION¶The known only to the mbtowc() function. If s does not point to a null byte ('\0'),zero if the encoding has nontrivial shift state, or zero if the encoding is stateless.
RETURN VALUE¶If s is not NULL, the mbtowc() function returns the number of consumed bytes starting at s, or 0 if s points to a null byte, or -1 upon failure.
If s is NULL, the mbtowc() function returns nonzero if the encoding has nontrivial shift state, or zero if the encoding is stateless.
ATTRIBUTES¶For an explanation of the terms used in this section, see attributes(7).
CONFORMING TO¶POSIX.1-2001, POSIX.1-2008, C99.
NOTES¶The behavior of mbtowc() depends on the LC_CTYPE category of the current locale.
This function is not multithread safe. The function mbrtowc(3) provides a better interface to the same functionality. | https://manpages.debian.org/buster-backports/manpages-dev/mbtowc.3.en.html | CC-MAIN-2020-45 | refinedweb | 188 | 67.25 |
How can one change the following text
The quick brown fox
jumps over the lazy dog.
to
The quick
brown fox + jumps over the + lazy dog.
using regex?
A solution for Ruby is still
missing... A simple one I came to so far is
def textwrap
text, width, indent=""
View Replies
wrap
long
lines
text
using
Regular
View Replies
I'm trying to read the source code from a website 100 lines at a
time
For example:
self.code =
urllib.request.urlopen(uri)#Get 100 first linesself.lines =
self.getLines()...#Get 100 next linesself.lines =
self.getLines()
My getLines code is like this:
def getLines(self): res = [] i = 0
Hi I am trying to set the space between horizontal grid lines of y axis
to 0.3 and vertical grid lines of x axis to 0.4 in XY Plot.
I
have tried setting the width and margins but,I see no difference in
horizontal and vertical grid lines of xy plot.
please can u let
me know how can we achieve it.
thanks.
I have a text file which is like:
#aabc ld
#ac bc acz c #hello
I want to
read this file and check the lines between lines started with "#". If the
line is started with "#" then ignore it and if it is not started with "#"
then redirect the whole line in another file.thus the content of the
file (new) should be at firs
I need to know the fastest way possible, in PHP or Linux Command Shell,
to reduce a text file that has more than 10 lines to only have the last 10
lines. I want to use 10 lines for this example.
Thanks :)
I have a simple search script that takes user input and searches across
directories & files and just lists the files it is found in. What I
want to do is to be able to is when a match is found, grab 4 lines above
it, and 3 lines below it and print it. So, lets say I have.
"a;lskdj a;sdkjfa;klsjdf a aa;ksjd a;kjaf
;;jk;kj asdfjjasdjjfajsd jdjdjdjajsdf
I have a file with contents
abc dbw ;xxx{ sample
test } bewolf bewolf test
I need to
check for
xxx{ sample test } bewolf
and comment out these line like
/*xxx{ sample test
}bewolf*/
I have tried with grep
but grep searches fo
grep
I just started off with OpenMP using C++. My serial code in C++ looks
something like this:
#include <iostream>#include
<string>#include <sstream>#include
<vector>#include <fstream>#include
<stdlib.h>int main(int argc, char* argv[]) { string
line; std::ifstream inputfile(argv[1]); if
I've racked my brain trying to come with a solution but in vain. Any
guidance would be appreciated.
_data_mascotfriendoceanparsimon**QUERY**applejujubeapricotmaple**QUERY**rosemahonia
....Given the search keyword is QUERY, it would output:
parsimon**QUERY**appl
I need to process data inside a richtextbox to output to an exact range
of cells and I need to process each line of data to a single cell.
The total amount of data is 52 lines in the richtextbox and I want
each data (each line) placed in a single cell range
(C17:C42,H17:H42)
I've already tried it with my code:
range = objSheet.get_Range("C17:C42,H17:H42", | http://bighow.org/tags/lines/1 | CC-MAIN-2017-22 | refinedweb | 547 | 67.28 |
React Router allows information to be read from the URL as parameters.
Creating a Parameterized Route
It’s just a matter of the
path property of a
Route; any segment that starts with a colon will be treated as a parameter:
class App extends Component { render() { return ( <BrowserRouter> <div> <Route exact path="/" component={HomePage}/> {/* Parameters are defined by placing a colon before a word. */} {/* In this case, `username` is a parameter. */} {/* Parameters will be passed to the component. */} <Route path="/Users/:username" component={UserPage}/> </div> </BrowserRouter> ); } }
When the URL matches the path (ex: ‘/Users/Kevin’), that route will be rendered.
🐊 Alligator.io recommends ⤵Fullstack Advanced React & GraphQL by Wes Bos
Using Parameters
Of course, it doesn’t mean much unless you can access the parameters. So, React Router passes them in as properties:
// Data from `Route` will be passed as a prop called `match`. function UserPage({ match }) { return ( <div> {/* The URL is passed as `match.url`. */} {/* `match.url` and `match.path` will be defined whether or not the path is parameterized. */} <div>{`The URL is "${match.url}"!`}</div> {/* The path (the one you gave `Route`) is passed as `match.path`. */} <div>{`It matched the path "${match.path}"!`}</div> {/* The parameters are passed as `match.params`. */} <div>{`The parameter is "${match.params.username}"!`}</div> </div> ); }
match.params will be populated with the values from the URL (i.e. for ‘/Users/Kevin’,
username would be ‘Kevin’).
Parameters can be in any part of the path, not just at the end; for example, if you wanted to add a page about friends of a user, you could make a route at
/Users/:username/Friends. | https://alligator.io/react/react-router-parameters/ | CC-MAIN-2019-09 | refinedweb | 271 | 66.94 |
ChoiceFormat converts between ranges of numeric values and strings for those ranges.
#include <choicfmt.h>
ChoiceFormat converts between ranges of numeric values and strings for those ranges.
The strings must conform to the MessageFormat pattern syntax.
ChoiceFormat is probably not what you need. Please use MessageFormat with plural arguments for proper plural selection, and select arguments for simple selection among a fixed set of choices!
A ChoiceFormat splits the real number line -∞ to +∞ into two or more contiguous ranges. Each range is mapped to a string.
ChoiceFormat was originally intended for displaying grammatically correct plurals such as "There is one file." vs. "There are 2 files." However, plural rules for many languages are too complex for the capabilities of ChoiceFormat, and its requirement of specifying the precise rules for each message is unmanageable for translators.
There are two methods of defining a ChoiceFormat; both are equivalent. The first is by using a string pattern. This is the preferred method in most cases. The second method is through direct specification of the arrays that logically make up the ChoiceFormat.
Note: Typically, choice formatting is done (if done at all) via MessageFormat with a choice argument type, rather than using a stand-alone ChoiceFormat.
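For illustration, a choice argument can be embedded directly in a MessageFormat pattern. The pattern below is a sketch, not taken from this header (and plural arguments remain the recommended alternative):

```
There {0,choice,0#are no files|1#is one file|1<are {0,number,integer} files}.
```

Here argument 0 selects one of three sub-messages depending on its numeric range, and the last sub-message nests a number format for the same argument.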
The pattern string defines the range boundaries and the strings for each number range. Syntax:
choiceStyle = number separator message ('|' number separator message)*
number = normal_number | ['-'] ∞ (U+221E, infinity)
normal_number = double value (unlocalized ASCII string)
separator = less_than | less_than_or_equal
less_than = '<'
less_than_or_equal = '#' | ≤ (U+2264)
message: see MessageFormat
Pattern_White_Space between syntax elements is ignored, except around each range's sub-message.
Each numeric sub-range extends from the current range's number to the next range's number. The number itself is included in its range if a less_than_or_equal sign is used, and excluded from its range (and instead included in the previous range) if a less_than sign is used.
less_than sign is used.
When a
ChoiceFormat is constructed from arrays of numbers, closure flags and strings, they are interpreted just like the sequence of
(number separator string) in an equivalent pattern string.
closure[i]==TRUE corresponds to a
less_than separator sign. The equivalent pattern string will be constructed automatically.
During formatting, a number is mapped to the first range where the number is not greater than the range's upper limit. That range's message string is returned. A NaN maps to the very first range.
During parsing, a range is selected for the longest match of any range's message. That range's number is returned, ignoring the separator/closure. Only a simple string match is performed, without parsing of arguments that might be specified in the message strings.
Note that the first range's number is ignored in formatting but may be returned from parsing.. (The round parentheses in the notation above indicate an exclusive boundary, like the turned bracket in European notation: [-Inf, 1) == [-Inf, 1[ )
{0, 1, 1}, {FALSE, FALSE, TRUE}, {"no files", "one file", "many files"}
Here is an example that shows formatting and parsing:
User subclasses are not supported. While clients may write subclasses, such code will not necessarily work and will not be guaranteed to work stably from release to release.
Definition at line 171 of file choicfmt.h.
Constructs a new ChoiceFormat from the pattern string.
Constructs a new ChoiceFormat with the given limits and message strings.
All closure flags default to
FALSE, equivalent to
less_than_or_equal separators.
Copies the limits and formats instead of adopting them.
Constructs a new ChoiceFormat with the given limits, closure flags and message strings.
Copies the limits and formats instead of adopting them.
Copy constructor.
Destructor.
Sets the pattern.
Sets the pattern.
Clones this Format object.
The caller owns the result and must delete it when done.
Implements icu::Format.
Formats a double number using this object's choices.
Implements icu::NumberFormat.
Formats an int32_t number using this object's choices.
Implements icu::NumberFormat.
Formats an int64_t number using this object's choices.
Reimplemented from icu::NumberFormat.
Formats an array of objects using this object's choices.
Returns NULL and 0.
Before ICU 4.8, this used to return the limit booleans array.
Returns a unique class ID POLYMORPHICALLY.
Part of ICU's "poor man's RTTI".
Implements icu::NumberFormat.
Returns NULL and 0.
Before ICU 4.8, this used to return the array of choice strings.
Returns NULL and 0.
Before ICU 4.8, this used to return the choice limits array.
Returns icu::NumberFormat.
Assignment operator.
Returns true if the given Format objects are semantically equal.
Objects of different subclasses are considered unequal.
Reimplemented from icu::NumberFormat.
Looks for the longest match of any message string on the input text and, if there is a match, sets the result object to the corresponding range's number.
If no string matches, then the parsePosition is unchanged.
Implements icu::NumberFormat.
Sets the choices to be used in formatting.
For details see the constructor with the same parameter list.
Sets the choices to be used in formatting.
For details see the constructor with the same parameter list.
Gets the pattern. | http://icu-project.org/apiref/icu4c/classChoiceFormat.html | CC-MAIN-2016-30 | refinedweb | 837 | 59.5 |
Windows Search Glossary
#
.osd file
OpenSearch Descriptor file.
.osdx file
An OpenSearch description XML file that describes available server connections and result formats for a specific web-based data source. It is used for interacting with the Windows Shell. See also: OpenSearch descriptor.
A
Advanced Query Syntax (AQS)
The default query syntax used by Windows Search to query the index and to refine and narrow search parameters. AQS is primarily user facing, and can be used by users to build AQS queries, but can also be used programmatically. See also: Natural Query Syntax (NQS).
AQS
See definition for: Advanced Query Syntax (AQS).
association
A mapping of a file name extension (for example, .mp3) or protocol (for example, http) to a programmatic identifier (ProgID). This mapping is stored in the registry as a per-user setting with a per-computer fallback. Applications that participate in the Default Programs system set the association mapping for the file name extension or protocol to point to the ProgID keys that they own.
association array
An ordered list of registry locations used to store information about an item type, including handlers, verbs, and other attributes like the icon and display name of the type. For example, a .jpg file has the following association array on a default Windows system: "HKCR\jpgfile", "HKCR\SystemFileAssociations\.jpg", "HKCR\SystemFileAssociations\image", "HKCR\*", "HKCR\AllFileSystemObjects".
Atom
An XML schema used for web feeds and content distribution, developed as an alternative to Really Simple Syndication (RSS). The Atom syndication format was published as an IETF proposed standard in RFC 4287.
B
bind
To load or associate code with data. For example, a handler may be associated with a Shell data source.
binding
A request in a search query for a column in a returned rowset. The binding specifies a property to be included in the search results.
bookmark
An indicator that uniquely identifies a row within a set of rows.
C
canonical name
The unique name of a resource. Canonical means "according to the rules." See also: canonical verb name.
canonical verb name
A language-neutral name that can be used programmatically to refer to a verb, regardless of the localized string in the user interface. See also: canonical name, verb.
catalog
The highest-level unit of organization in Windows Search. A catalog represents a set of indexed documents that can be queried. A catalog consists of a properties table with the text or value and corresponding location stored in columns of the table. Each row of the table corresponds to a separate document in the scope of the catalog, and each column of the table corresponds to a property. See also: index, Windows Search service.
category
A hierarchical grouping of rows. For example, a query result that contains author and title columns can be categorized based on author. In this example, each group of rows that contains the same value for author constitutes a category.
chapter
A collection of rows within a set of rows. See also: catalog, category.
column
The container for a single type of information in a row. Columns map to property names and specify which properties are used for the search query's command tree elements. See also: category.
command tree
A combination of restrictions, categories, and sort orders that are specified for the search query. See also: category.
container
A type of Shell item that can contain other items. Items in a container are exposed to the Shell namespace by using a Shell data source. Examples include folders, drives, network servers, and compressed files with a .zip file name extension. See also: Shell data source, folder, Shell item.
content
Text and properties associated with a Shell item or a content source that can be indexed.
content source
An item that can be accessed by the indexer. Content sources are addressable by a URL and are provided to the indexer by a protocol handler. Examples include: file system files and folders, Microsoft Outlook items and folders, database records, and Microsoft SharePoint stored items. A content source can be exposed as Shell items by implementing a Shell data source. See also: content, Shell item.
content view
A view in Windows Explorer (offered in Windows 7 and later) that displays the most relevant content for each item in the list based on its file name extension or Kind association. Content view uses a resizing logic that drops properties when the window size decreases to ensure that the most critical properties still have room to be clearly readable. See also: layout pattern, Kind, Kind association.
content view mode
See definition for: content view.
context menu
This term is sometimes used to mean shortcut menu. See definition for: shortcut menu.
context menu handler
This term is sometimes used to mean shortcut menu handler. See definition for: shortcut menu handler.
crawl
To iterate over a crawl scope, identifying content sources that require indexing or re-indexing.
crawl scope
A collection of data stores (identifiable by URL) that represents content that the indexer crawls and indexes.
cursor
In the context of the local index, a cursor is an indicator for working with one row or a small block of rows at a time in a set of data returned in a result set. After the cursor is positioned on a row, operations can be performed on that row or on a block of rows starting at that position.
D
Data Management, Exploration and Mining
See definition for: Database Mining Extensions (DMX).
data object handler
A handler that provides additional clipboard formats for the data object (IDataObject) of an item. Data objects are used in drag-and-drop and copy/paste scenarios.
data source
This term is sometimes used to mean data store or Shell data source. See definition for: data store, Shell data source.
data store
A repository of data. A data store can be exposed to the Shell programming model as a container using a Shell data source. The items in a data store can be indexed by the Windows Search system using a protocol handler.
Database Mining Extensions (DMX)
A query language used to create and manipulate data mining. The administrative templates for Windows 7, Windows Search, and Windows Explorer are .admx files, and rely on DMX technology. The following templates can be customized through Group Policy: Search.admx, Explorer.admx, and WindowsExplorer.admx.
DMX
See definition for: Database Mining Extensions.
document
A Shell item that contains text, and for which the IFilter interface could be implemented.
drop handler
A handler that enables a particular item type to support drag-and-drop and copy/paste scenarios.
drop target
A data object that is dragged and dropped onto a file. See also: data handler, drop handler.
dynamic verb
A verb that depends on the state of a Shell item or of the system; the appearance of the item is state based and requires that the executing code determine whether the item should appear. See also: shortcut menu handler, static verb, verb.
E
Explorer command
An object that can be presented as a button near the top of the Windows Explorer window that provides functionality for items and containers in that window. A Shell data source provides the Windows Explorer command objects for a particular container item. Commands are sometimes used as verbs.
F
federated search
An extensibility model that enables searching data stores and representing the results as Shell items in Windows Explorer. See also: federated search provider, search connector, OpenSearch descriptor, OpenSearch standard.
federated search connector
See definition for: search connector.
federated search provider
A web service, implemented by a data store, that supports the protocols used by Windows 7 so that Windows 7 and later versions can search that data store remotely. See also: OpenSearch descriptor, OpenSearch standard.
file association
See definition for: file type association.
file format
A format for data stored in a file that has a documented format specification. Examples include OLE DocFile, OPC, XML, ZIP and other well known file format specifications. File type creators generally use an existing file format as the basis of a new file type. A file format can be simply a definition that is not instantiated as a file type.
file format handler
This term is a synonym for file type handler. See definition for: file type handler.
file name extension
See definition for: file name extension.
file name extension
The primary indicator of a file type for file system items, it is the portion of the file name that follows the final dot. The file name extension cannot contain spaces or non-ASCII characters and applies only to files (not folders). File name extensions are compared using a comparison function that is not sensitive to case or locale. See also: file format, file type.
file type
A particular file name extension value, like ".htm" or ".jpg", that defines a class of files that are of the same type and have a common set of associations. See also: Kind, file type association.
file type association
For a particular file name extension, the association array elements that define where handlers and other attributes can be registered. See also: association array, file type.
file type customization
An association that enables Shell to customize how Shell treats a file type. File type customizations include: specifying the application used to open the file when double-clicked, adding commands to the shortcut menu for a file type, specifying a custom icon, specifying a MIME content type to associate with a file type, specifying a perceived type, and specifying one or more applications associated by file type with the Open With dialog box. See also: PerceivedType.
file type handler
A handler registered for a file type. See also: handler.
filter.
folder
See definition for: container.
H
handler
A COM object that provides functionality for a Shell item. Most Shell data sources offer an extensible system for binding handlers to items. For example, the file system folder uses the association system to look up the handlers for a particular file type. See also: file association, file type, file type customization.
I
icon handler
A handler that provides the information needed to generate and cache an icon for an item. The file system data store supports loading an icon handler for an item based on the file type, enabling that handler to provide an icon that is used for all instances of that file type.
index
n. A catalog that stores the content and properties of Shell items to enable fast searches. See also: catalog, indexer, indexing, inverted index. v. To access content sources, filter the sources for content and properties, and insert the extracted values into the index (for text) and the Windows Search property store (for properties). See also: content source, index, indexer, inverted index.
indexer
An application that does indexing or coordinates indexing. See also: index, indexing, inverted index.
infotip handler
A handler that provides pop-up text when the user hovers the mouse pointer over a user interface object.
inverted index
A persistent structure that contains the content pulled out of files by Windows Search. The text is organized into an index that maps from a word in a property to a list of the documents and locations within a document that contain that word. Hence, an inverted index is the inverse of the process of extracting the text and properties from the document and putting them into the indexer. See also: index, indexer, indexing.
item
See definition for: Shell item.
item class
See definition for: file type.
K
Kind
A property that provides a user-friendly Kind name, and can be associated with a list of properties and a layout pattern. Kind was introduced in Windows Vista to express a more end-user friendly notion of file type and it was defined to be a multi-value string property (canonical string values), thus you can have an "audio;video" or "link;document" Kind value. Some user-friendly Kind names are already associated with properties and layout patterns. For example, items associated with Kind.Picture and items associated with Kind.Document display different properties even when they are in the same view. Each item Kind can be associated with one of four unique layout patterns that define the number of properties displayed for each item and their layout. See also: Kind association, content view, layout pattern.
Kind association
A property in the property system, called System.Kind, that determines which UX templates are displayed for a file. This property also provides a user-friendly name for the type of the item and is linked to the file name extension. See also: Kind.
L
layout pattern
One of several arrangements for displaying properties. In Windows 7 and later, when you are registering a new file type, you can use the content view to register a custom property list and layout pattern for your file type. You can choose from four different layout patterns: Alpha (for document search results that contain code snippets), Beta (for email search results with code snippets), Gamma (similar to Alpha but with a two-line layout instead of four), and Delta (for showing many shorter properties, such as with music and pictures). See also: content view, Kind, Kind association.
M
metadata handler
This term is sometimes used to mean property handler. See definition for: property handler.
N
namespace extension
See definition for: Shell data source.
namespace walk
A helper process that traverses the namespace of a container or set of containers, discovering each item and possibly doing something with each. The INamespaceWalk interface can be used to walk any part of the Windows Explorer namespace or to discover the items referenced by a data object or view. Container verbs (like "play" on Artists containers) walk the namespace and discover the items.
natural language query
See definition for: Natural Query Syntax (NQS).
Natural Query Syntax (NQS)
A query syntax that is more relaxed than AQS and looks more like human language. NQS can be used by Windows Search to query the index if NQS is selected instead of the default, AQS. See also: Advanced Query Syntax (AQS).
noise word
A word that is ignored by Windows Search when it is present in the restrictions specified for the search query, because it has little discriminatory value. Examples include "and" and "the."
NQS
See definition for: Natural Query Syntax (NQS).
O
Object Linking and Embedding Database (OLE DB)
A standard set of interfaces that provides heterogeneous access to disparate sources of information located anywhere, such as file systems, email folders, and databases.
OLE DB
See definition for: Object Linking and Embedding Database.
OpenSearch descriptor
An XML file that describes available server connections and result formats for a specific web-based data source. This file contains one or more URL templates, and uses an .osdx file name extension when interacting with the Windows Shell. An OpenSearch description is sometimes referred to as a search connector, although it is only the description portion of a connector. See also: search connector.
OpenSearch standard
A collection of simple formats and protocols used for sharing search results. For more information, see the OpenSearch website ().
P
PerceivedType
A broad category of file format types. PerceivedType was introduced in Windows XP, and supports a limited set of known file types (examples include Image, Text, Audio, and Compressed file types). File types, generally public file types, can also have a perceived type. For example, the image file types .bmp, .png, .jpg, and .gif are also of the perceived type, image. At the programming layer, PerceivedType is expressed as an integer. Because there is code that uses Kind and PerceivedType, file format owners must register both. For example "play all" depends on PerceivedType. See also: file type.
preview handler
A handler that quickly produces a read-only, simplified view of the Shell item to be displayed in the Windows Explorer preview pane.
previewer
This term is sometimes used to mean preview handler. See definition for: preview handler.
property handler
A handler that translates data stored in a file into a structured schema that is recognized by and can be accessed by Windows Explorer, Windows Search, and other applications. These systems can then interact with the property handler to write and read properties to and from the file. The translated data includes details view, infotips, details pane, property pages, and so forth. Each property handler is associated with a particular file type, identified by the file name extension. See also: property system.
property sheet handler
A handler that is used to create custom property sheets with UI pictures and controls that permit custom interaction with a file type.
property system
An extensible read/write system of data definitions that uses properties implemented as name-value pairs. See also: property handler, Shell item.
property value
A value associated with a property name for a Shell item. For example, "Author", "Size", and "Date Taken" are properties. Property values are expressed as a PROPVARIANT structure.
protocol handler
A handler that accesses content sources and provides an IUrlAccessor object for a specified protocol and URL. Protocol handlers extend Windows Search functionality, and may provide change notifications to indexers. Different protocol handlers are required to index specific types of data stores. To provide a reasonable user experience, you must also provide a Shell data source for the data store in addition to implementing your protocol handler. The protocol handler exposes the items in the data store to the indexer, while the Shell data source exposes the items in the data store to the Shell.
R
restriction
A condition that a file must meet to be included in the search results that are returned by Windows Search.
row
The columns that contain the property values that describe a single result from the set of objects that matched the restrictions specified in a search query.
rowset
A set of rows returned in the search results.
S
search connector
An XML file that contains information about a data store. Search connectors are deployed for federated search.
search consumer
A component or application that queries the index.
search federation
See definition for: federated search provider.
search provider
A component or application that provides data to Windows Search.
search scope
This term is sometimes used to mean crawl scope. See definition for: crawl scope.
Shell data source
A component that is used to extend the Shell namespace and expose items in a data store. In the past, the Shell data source was referred to as the Shell namespace extension. See also: container, handler, Shell item.
Shell extension
This term is sometimes used to mean file type handler. See definition for: file type handler.
Shell extension handler
This term is sometimes used to mean file type handler. See definition for: file type handler.
Shell handler
This term is sometimes used to mean file type handler. See definition for: file type handler.
Shell item
A single piece of content. Some Shell items are content sources, and some are not. A folder is a content source, for example, but a .jpg file is not. File type handlers expose Shell items. In some contexts "item" is used to distinguish containers from noncontainers. See also: container, content source, file type handler.
Shell namespace extension
This term is sometimes used to mean Shell data source. See definition for: Shell data source.
shortcut menu
A user interface that is used to present a collection of verbs associated with a user interface element, such as a file or folder.
Shortcut menu handler
A handler that adds verbs for an item or items. These verbs are commonly displayed in a shortcut menu. See also: shortcut menu.
static verb
A verb that applies to a Shell item without needing to inspect the current state of an item or system. A static verb is based on a static registration of the associated elements of an item, and does not change.
T
thumbnail handler
A handler that provides a static image to represent a Shell item.
thumbnail provider
This term is sometimes used to mean thumbnail handler. See definition for: thumbnail handler.
U
URL template
A URL-based connection string that is used to query a web server for search results. The template looks like a URL, but contains several placeholder values (such as {searchTerms}) that the client must replace with data about the results it wants to retrieve. The definition of URL templates is key to implementing the federated search and OpenSearch standards.
user friendly kind name
See definition for: Kind.
V
verb
An individual action that can be called by a Shell item. Examples include open and print. Verbs are sometimes referred to as commands or tasks. See also: dynamic verb, shortcut menu handler, static verb.
verb handler
This term is sometimes used to mean shortcut menu handler. See definition for: shortcut menu handler.
W
walk
See definition for: namespace walk.
Windows Search
See definition for: Windows Search service.
Windows Search property store
The cache of property values used in the implementation of the Windows Search service. These property values can be programmatically queried by using the Windows Search OLE DB provider. The Windows Search property store collects and stores properties emitted by filter handlers or property handlers when an item such as a Word document is indexed. This store is discarded and rebuilt when the index is rebuilt.
Windows Search service
Refers to Windows Search 3.0 and above. This service analyzes a set of documents, extracts useful information, and then organizes the extracted information so that properties of those documents can be efficiently returned in response to queries. See also: catalog. | https://docs.microsoft.com/en-us/windows/win32/search/search-glossary | CC-MAIN-2020-10 | refinedweb | 3,589 | 57.67 |
First I should mention that I’m using the latest iOS 4.2 SDK.
I can build 1.3 for the iOS simulator, but when I make a device build using either armv6 or armv7 I get the following errors when compiling SDL_spinlock.c:
Code:
{standard input}:57:selected processor does not support
ldrex r1,[r3]' {standard input}:58:selected processor does not supportteq r1,#0’
{standard input}:59:selected processor does not support `strexeq r1,r2,[r3]'
Command /Developer/Platforms/iPhoneOS.platform/Developer/usr/bin/gcc-4.2 failed with exit code 1
and
Code:
{standard input}:60:thumb conditional instruction not in IT block
Command /Developer/Platforms/iPhoneOS.platform/Developer/usr/bin/gcc-4.2 failed with exit code 1
It looks like the offending code in SDL_AtomicTryLock() is (after the #else):
Code:
#if defined(arm)
#ifdef ARM_ARCH_5
int result;
asm volatile (
“swp %0, %1, [%2]\n”
: “=&r,&r” (result) : “r,0” (1), “r,r” (lock) : “memory”);
return (result == 0);
#else
int result;
asm volatile (
“ldrex %0, [%2]\nteq %0, #0\nstrexeq %0, %1, [%2]”
: “=&r” (result) : “r” (1), “r” (lock) : “cc”, “memory”);
return (result == 0);
#endif
Any one else see this?
I also could not build from the source fetched from hg because the file SDL_revision.h was not found but the .zip of SDL 1.3 on the download page had it so I copied it over and got past that problem. | https://discourse.libsdl.org/t/sdl-1-3-for-ios-device-build/17862 | CC-MAIN-2022-21 | refinedweb | 236 | 51.89 |
By Steve Oualline
Price: $34.95 USD
£24.95 GBP
Cover | Table of Contents | Colophon
int total; /* Total number accounts */
total. We let the compiler decide what particular bytes of memory to use; that decision is a minor bookkeeping detail that we don't want to worry about.
totalis a simple variable. It can hold only one integer and describe only one total. A series of integers can be organized into an array as follows:
int balance[100]; /* Balance (in cents) for all 100 accounts */
struct rectangle { int width; /* Width of rectangle in pixels */ int height; /* Height of rectangle in pixels */ color_type color; /* Color of the rectangle */ fill_type fill; /* Fill pattern */ };
?LSTUIT User is a twit
=instead of
==. These programs let you learn how to spot mistakes in a small program. Then, when you make similar mistakes in a big program, and you
% mkdir hello % cd hello
C:> MKDIR HELLO C:> CD HELLO
[File: hello/hello.c] #include <stdio.h> int main() { printf("Hello World\n"); return (0); }
% cc -g -ohello hello.c
-goption enables debugging. (The compiler adds extra information to the program to make the program easier to debug.) The switch
-ohellotells the compiler that the program is to be called
hello, and the final
hello.cis the name of the source file. See your compiler manual for details on all the possible options. There are several different C compilers for UNIX, so your command line may be slightly different.
C:> MKDIR HELLO C:> CD HELLO
C:> TC
INSERTkey to add a file to the project. The file we want to add is HELLO.C as seen in Figure 2-7.
ESCto get out of the add-file cycle.
UP-ARROWto go up one line. The line with HELLO.C should now be highlighted as seen in Figure 2-8.
ENTERto edit this file.
mancommand. (UNIX uses
manas an abbreviation for manual.) To get information about a particular subject, use the following command:
man subject
printffunction, you would type:
man printf
man -k keyword
man -k output
p,
q, and
r:
int p,q,r;
int account_number; int balance_owed;
balance_owedin dollars or cents? We should have added a comment after each declaration to explain what we were doing. For example:
int account_number; /* Index for account table */ int balance_owed; /* Total owed us (in pennies)*/
grepcan also help you quickly find a variable's definition.)
/******************************************************** * Note: I have no idea what the input units are, nor * * do I have any idea what the output units are, * * but I have discovered that if I divide by 3 * * the plot sizes look about right. * ********************************************************/
while (! done) { printf("Processing\n"); next_entry(); } if (total <= 0) { printf("You owe nothing\n"); total = 0; } else { printf("You owe %d dollars\n", total); all_totals = all_totals + total; }
while (! done) { printf("Processing\n"); next_entry(); } if (total <= 0) { printf("You owe nothing\n"); total = 0; } else { printf("You owe %d dollars\n", total); all_totals = all_totals + total; }
/* poor programming practice */ temp = box_x1; box_x1 = box_x2; box_x2 = temp; temp = box_y1; box_y1 = box_y2; box_y2 = temp;
/* * Swap the two corners */ /* Swap X coordinate */ temp = box_x1; box_x1 = box_x2; box_x2 = temp; /* Swap Y coordinate */ temp = box_y1; box_y1 = box_y2; box_y2 = temp;
/**************************************************** * ...Heading comments... * ****************************************************/ ...Data declarations... int main() { ...Executable statements... return (0); }
/**************************************************** * ...Heading comments... * ****************************************************/ ...Data declarations... int main() { ...Executable statements... return (0); }
main. The name
mainis special, because it is the first function called. Other functions are called directly or indirectly from
main. The function
mainbegins with:
int main() {
return (0); }
return(0);is used to tell the operating system (UNIX or MS-DOS/Windows) that the program exited normally (Status=0). A nonzero status indicates an error—the bigger the return value, the more severe the error. Typically, a status of 1 is used for the most simple errors, like a missing file or bad command-line syntax.
/*and
*/. Following this box is the line:
#include <stdio.h>
printffrom this package.
printf("Hello World\n");
;) to end a statement in much the same way we use a period to end a sentence. Unlike line-oriented languages such as BASIC, an end-of-line does not end a statement. The sentences in this book can span several lines—the end of a line is treated just like space between words. C works the same way. A single statement can span several lines. Similarly, you can put several sentences on the same line, just as you can put several C statements on the same line. However, most of the time your program is more readable if each statement starts on a separate line.
*), divide (
/), and modulus (
%) have precedence over add (
+) and subtract (-). Parentheses, ( ), may be used to group terms. Thus:
(1 + 2) * 4
1 + 2 * 4
(1 + 2) * 4.
int main() { (1 + 2) * 4; return (0); }
"Take your wheelbarrow and go back and forth between the truck and the building site.""Do you want me to carry bricks in it?""No. Just go back and forth."
sam,
Sam, and
SAMspecify three different variables. However, to avoid confusion, you should use different names for variables and not depend on case differences.
average /* average of all grades */ pi /* pi to 6 decimal places */ number_of_students /* number students in this class */
3rd_entry /* Begins with a number */ all$done /* Contains a "$" */ the end /* Contains a space */ int /* Reserved word */
total /* total number of items in current entry */ totals /* total of all entries */
entry_total /* total number of items in current entry */ all_total /* total of all entries */
answercan be:
int answer; /* the result of our expression */
answer. The semicolon (
;) marks the end of the statement, and the comment is used to define this variable for the programmer. (The requirement that every C variable declaration be commented is a style rule. C will allow you to omit the comment. Any experienced teacher, manager, or lead engineer will not.)
type name; /* comment */ | http://www.oreilly.com/catalog/9781565923065/toc.html | crawl-001 | refinedweb | 963 | 65.01 |
Portfolio Optimization in Python
Want to share your content on python-bloggers? click here.
We will show how you can build a diversified portfolio that satisfies specific constraints. For this tutorial, we will build a portfolio that minimizes the risk.
So the first thing to do is to get the stock prices programmatically using Python.
How to Download the Stock Prices using Python
We will work with the
yfinance package where you can install it using
pip install yfinance --upgrade --no-cache-dir You will need to get the symbol of the stock. You can find the mapping between NASDAQ stocks and symbols in this csv file.
For this tutorial, we will assume that we are dealing with the following 10 stocks and we try to minimize the portfolio risk.
- Google with Symbol GOOGL
- Tesla with Symbol TSLA
- Facebook with Symbol FB
- Amazon with Symbol AMZN
- Apple with Symbol AAPL
- Microsoft with Symbol MSFT
- Vodafone with Symbol VOD
- Adobe with Symbol ADBE
- NVIDIA with Symbol NVDA
- Salesforce with Symbol CRM
We will download the close prices for the last year.
import pandas as pd import numpy as np import yfinance as yf from scipy.optimize import minimize import matplotlib.pyplot as plt %matplotlib inline symbols = ['GOOGL', 'TSLA', 'FB', 'AMZN', 'AAPL', 'MSFT', 'VOD', 'ADBE', 'NVDA', 'CRM' ] all_stocks = pd.DataFrame() for symbol in symbols: tmp_close = yf.download(symbol, start='2019-11-07', end='2020-11-07', progress=False)['Close'] all_stocks = pd.concat([all_stocks, tmp_close], axis=1) all_stocks.columns=symbols all_stocks
Get the Log Returns
We will use the log returns or continuously compounded return. Let’s calculate them in Python.
returns = np.log(all_stocks/all_stocks.shift(1)).dropna(how="any") returns
returns.plot(figsize=(12,10))
Get the Mean Returns
We can get the mean returns of every stock as well as the average of all of them.
# mean daily returns per stock returns.mean()
GOOGL 0.001224 TSLA 0.007448 FB 0.001685 AMZN 0.002419 AAPL 0.002422 MSFT 0.001740 VOD -0.001583 ADBE 0.002146 NVDA 0.004077 CRM 0.001948 dtype: float64
# mean daily returns of all stocks returns.mean().mean()
0.0023526909011353354
Minimize the Risk of the Portfolio
Our goal is to construct a portfolio from those 10 stocks with the following constraints:
- The Expected daily return is higher than the average of all of them, i.e. greater than 0.003
- There is no short selling, i.e. we only buy stocks, so the sum of the weights of all stocks will ad up to 1
- Every stock can get a weight from 0 to 1, i.e. we can even build a portfolio of only one stock, or we can exclude some stocks.
Finally, our objective is to minimize the variance (i.e. risk) of the portfolio. You can find a nice explanation on this blog of how you can calculate the variance of the portfolio using matrix operations.
We will work with the
scipy library:
# the objective function is to minimize the portfolio risk def objective(weights): weights = np.array(weights) return weights.dot(returns.cov()).dot(weights.T) # The constraints cons = (# The weights must sum up to one. {"type":"eq", "fun": lambda x: np.sum(x)-1}, # This constraints says that the inequalities (ineq) must be non-negative. # The expected daily return of our portfolio and we want to be at greater than 0.002352 {"type": "ineq", "fun": lambda x: np.sum(returns.mean()*x)-0.003}) # Every stock can get any weight from 0 to 1 bounds = tuple((0,1) for x in range(returns.shape[1])) # Initialize the weights with an even split # In out case each stock will have 10% at the beginning guess = [1./returns.shape[1] for x in range(returns.shape[1])] optimized_results = minimize(objective, guess, method = "SLSQP", bounds=bounds, constraints=cons) optimized_results
Output:
fun: 0.0007596800848097395 jac: array([0.00113375, 0.00236566, 0.00127447, 0.0010218 , 0.00137465, 0.00137397, 0.00097843, 0.00144561, 0.00174113, 0.0014457 ]) message: 'Optimization terminated successfully.' nfev: 24 nit: 2 njev: 2 status: 0 success: True x: array([0.08447057, 0.17051382, 0.09077398, 0.10128927, 0.10099533, 0.09145521, 0.04536969, 0.09705495, 0.12378042, 0.09429676])
The optimum weights are the array x and we can retrieve them as follows:
optimized_results.x
Output:
array([ 8.44705689, 17.05138189, 9.07739784, 10.12892656, 10.09953316, 9.14552072, 4.53696906, 9.70549545, 12.37804203, 9.42967639])
We can check that the weights sum up to 1:
# we get 1 np.sum(optimized_results.x)
And we can see that the expected return of the portfolio is
np.sum(returns.mean()*optimized_results.x)
Output:
0.002999999997028756
Which is almost 0.003 (some rounding errors) which was our requirement.
Final Weights
So let’s report the optimized weights nicely.
pd.DataFrame(list(zip(symbols, optimized_results.x)), columns=['Symbol', 'Weight'])
Want to share your content on python-bloggers? click here. | https://python-bloggers.com/2020/11/portfolio-optimization-in-python/ | CC-MAIN-2021-10 | refinedweb | 817 | 68.67 |
Pathologically Polluting Perlby Brian Ingerson
February 06, 2001
Pathologically Polluting Perl''. To accomplish this, Inline must do away with nuisances such as interface definition languages, in Action - Simple examples in C
Inline addresses an old problem in a completely revolutionary way. Just describing Inline doesn't really do it justice. It should be seen to be fully appreciated. Here are a couple examples to give you a feel for the module.
Hello, world prep. (Where have I heard that before?)
Just Another ____ Hacker, after the special marker '
__C__'.. You can learn more about simple Perl internals by reading the
perlguts and
perlapi documentation distributed with Perl.
What about XS and SWIG?,++, Python, and CPR. There are plans to add many more.
One-Liners
Perl is famous for its one-liners. A Perl one-liner is short piece of Perl code that can accomplish a task that would take much longer in another language. It is one of the popular techniques that Perl hackers use to flex their programming muscles.
So you may wonder: ``Is Inline powerful enough to produce a one-liner that is also bonifide C extension?'' Of course it is!'
I have been using this one-liner as my email signature for the past couple months. I thought
Supported Platforms forSD's.
There are two common ways to use Inline on MS Windows. The first one is with ActiveState's ActivePerl for MSWin32.. This is an actual Unix porting layer for Windows. It includes all of the most common Unix utilities, such as
bash,
less,
make,
gcc and of course
perl.
The Inline Syntax
Inline is a little bit different than most of the Perl modules that you are used to. It doesn't import any functions into your namespace and it doesn't have any object oriented methods. Its entire interface is specified through
'use Inline ...' commands. The general Inline usage is:
use Inline C => source-code, config_option => value, config_option => value;
Where
C is the programming language, and
source-code is a string, filename, or the keyword '
DATA'. You can follow that with any number of optional '
keyword => value' configuration pairs. If you are using the 'DATA' option, with no configuration parameters, you can just say:
use Inline C;
Fine Dining - A Glimpse at the C Cookbook
In the spirit of the O'Reilly book ``Perl Cookbook'', Inline provides a manpage called C-Cookbook. In it you will find the recipes you need to help satisfy your Inline cravings. Here are a couple of tasty morsels that you can whip up in no time. Bon Appetit!
External Libraries.
It Takes All Types
Older versions of.0; }
Some Ware Beyond the C.
See Perl Run. Run Perl, Run!. But what if you could pass your C program to a perl program that could pass it to Inline? Then you could write this program:
#!/usr/bin/cpr int main(void) { printf("Hello, world\n"); }
and just run it from the command line. Interpreted C!
And thus, a new programming language was born. CPR. ``C Perl Run''. The Perl module that gives it life is called
Inline::CPR. ActiveState; }
Running this program prints:
Hello world, I'm running under Perl version 5.6.0
Using the
eval() call this CPR program calls Perl and tells it to use Inline C to add a new function, which the CPR program subsequently calls. I think I have a headache myself.
The Future of Inline
Inline version 0.30 was written specifically so that it would be easy for other people in the Perl community to contribute new language bindings for Perl. On the day of that release, I announced the birth of the Inline mailing list, inline@perl.org. (inline-subscribe@perl.org) and speak up.
My primary focus at the present time, is to make the base Inline module as simple, flexible, and stable as possible. Also I want to see Inline::C become an acceptable replacement for XS; at least for most situations.
Conclusion!''
| http://www.perl.com/pub/a/2001/02/inline.html | crawl-002 | refinedweb | 665 | 67.04 |
Here is a simple solution:
def edge(x, y): return (x, y) if x < y else (y, x) def create_tour(nodes): # there are lots of ways to do this # a boring solution could just connect # the first node with the second node # second with third... and the last with the # first tour = [] l = len(nodes) for i in range(l): t = edge(nodes[i], nodes[(i+1) % l]) tour.append(t) return tour
And here is a slightly more complicated solution with some randomness:
from random import randint def edge(x, y): return (x, y) if x < y else (y, x) def poprandom(nodes): x_i = randint(0, len(nodes) - 1) return nodes.pop(x_i) def pickrandom(nodes): x_i = randint(0, len(nodes) - 1) return nodes[x_i] def check_nodes(x, nodes, tour): for i, n in enumerate(nodes): t = edge(x, n) if t not in tour: tour.append(t) nodes.pop(i) return n return None def create_tour(nodes): connected = [] degree = {} unconnected = [n for n in nodes] tour = [] # create a connected graph # first, pick two random nodes for an edge x = poprandom(unconnected) y = poprandom(unconnected) connected.append(x) connected.append(y) tour.append(edge(x,y)) degree[x] = 1 degree[y] = 1 # then, pick a random node from the unconnected # list and create an edge to it while len(unconnected) > 0: x = pickrandom(connected) y = poprandom(unconnected) connected.append(y) tour.append(edge(x, y)) degree[x] += 1 degree[y] = 1 # now make sure each node has an even degree. # have the problem of not adding a duplicate edge odd_nodes = [k for k, v in degree.items() if v % 2 == 1] even_nodes = [k for k, v in degree.items() if v % 2 == 0] # there will always be an even number of odd nodes # (hint: the sum of degrees of a graph is even) # so we can just connect pairs of unconnected edges while len(odd_nodes) > 0: x = poprandom(odd_nodes) cn = check_nodes(x, odd_nodes, tour) if cn is not None: even_nodes.append(x) even_nodes.append(cn) else: # if we get here # the node is already connected to # all the odd nodes so we need to find an # even one to connect to cn = check_nodes(x, even_nodes, tour) # cn cannot be None, and needs to be # added to the odd_nodes list odd_nodes.append(cn) # but, x is now an even node even_nodes.append(x) return tour | https://www.udacity.com/wiki/cs215/problemset1code | CC-MAIN-2016-40 | refinedweb | 395 | 54.05 |
Optimal Interpolation Nodes
Project description
Optimal Interpolation Nodes
Computes a set of points with an optimal Lebesgue constant. Having an exact analytical solution for these points is an unresolved problem in mathematics. The code here approximates the values numerically. Numerical approximations have been available for many years, but to my knowledge, there are no other open source Python libraries with this functionality.
Usage
Installation
pip install optinterp
Example
import optinterp nds = optinterp.nodes(10)
This functions similarly to numpy's
chebpts1 but produces points with a slightly improved Lebesgue constant:
import numpy as np nds = np.polynomial.chebyshev.chebpts1(10) nds = nds / nds[-1]
Algorithm Description
This solution expoits the following properties:
- Optimal interpolation points can take values -1 and 1 for their minimum and maximum.
- To mimimize the global maximum of the Lebesgue function, all local maxima should be equal.
- Moving two adjacent nodes closer together reduces the local maximum of the Lebesgue function at the expense of increasing the other local maxima.
Start with two initial guesses for optimal points. The extended Chebyshev nodes and a set of slightly perturbed Chebyshev nodes. Then for each set of nodes define:
dx_i = x_{i+1} - x_i dL_i = L_i - L_{avg}
Where
L_i is the local maximum of the Lebesgue function between
x_{i+1} and
x_i. Now assuming each
dL_i is a function of
dx_i, use the Secant method to find roots:
dx_{i, n+1} = dx_{i, n} - dL_{i, n} * (dx_{i, n} - dx_{i, n-1}) / (dL_{i, n} - dL_{i, n - 1})
For the next iteration, calculate each node
x_i from these roots of
dx_i and scale the values to be from -1 to 1.
Project details
Release history Release notifications | RSS feed
Download files
Download the file for your platform. If you're not sure which to choose, learn more about installing packages. | https://pypi.org/project/optinterp/ | CC-MAIN-2022-33 | refinedweb | 308 | 52.19 |
A few weeks ago I read the article “Are poker ‘bots’ raking online pots?” on
MSNBC’s website. Being an avid poker player/enthusiast I started toying with
the idea of writing a Texas Hold’em bot just for the fun of it (ya, I know,
geek). Although it seemed like a great idea at the time my enthusiasm for
actually writing it wasn’t too high. Then I came across the article “Poker
Bots: How Bad of a Problem are they?” that stated that “many computer
programmers are going to try and fail in developing their poker bots.” Sadly, I
took this as a personal affront and decided to write the bot.
However, with a slight twist. I am not trying to have my bot play in any online
games or anything. I just want to see how it will do against bots created by
other programmers. After enlisting several of my co-workers to write bots I
started looking for other people to enter and thought that Code Project was a
great place to start. So, I am posting this article in an attempt to get some
of you programmers out there involved in our little tournament.
In this article I will go over the PlayingCardLibrary that I built to implement
the cards and the evaluations of hands. As well as, the requirements for anyone
wanting to submit a bot into the tournament. Not wanting to re-invent the wheel
I decided to look around and see what was already out there. During my browsing
around I found several libraries that were used to evaluate hands and implement
the cards but I didn’t care for them. They often seemed long and confusing, so
I decided to write my own.
The PlayingCardLibrary is a library that implements your standard 52 card deck.
Ace of Spades, down to the 2 of clubs. It has 4 classes of relevance. Card.cs,
Deck.cs, Hand.cs, and PokerGame5Cards.cs.
Card.cs
Deck.cs
Hand.cs
PokerGame5Cards.cs
Card.cs,of course, represents the basic card in a 52 card deck. It
has a member (rank) and a suit with properties to access that information. Card
also has a public method that can be used to get the image of the card in JPG
format.
public Image CardImage()
{
return (new CardImages.ImageRetrival()).GetImage(cardValue);
}
Card.cs implements IComparable which allows the cards
to be sorted in two ways. If sortWithSuit is set to true, then the
sorting will be done with regards to suit first, then rank, meaning that if you
placed all the cards in an ArrayList and called its sort method
the cards would be arranged exactly like a new deck of cards: Ace of Spades
down to the Two of Spades, then hearts, diamonds and clubs. If sortWithSuit
is false, then the cards are sorted Ace’s, King’s, Queen’s, Jack’s, …, 3’s,
2’s.
IComparable
sortWithSuit
ArrayList
false
Deck.cs is your basic 52 card deck. It has three public methods of
interest, Shuffle, which randomizes the cards in the deck. Resets the
deckIndex
to zero so new cards are dealt from the top of the deck (figuratively).
deckIndex
public void Shuffle()
{
// reset the index into the newly shuffled cards
deckIndex = 0;
// temp variable need to do the swaping
int temp = 0;
Card card;
// for every card in the deck switch it with another
for(int i = 0; i < cards.Length; i++)
{
temp = random.Next(0,cards.Length);
card = cards[temp];
cards[temp] = cards[i];
cards[i] = card;
}
}
The HasNext() and DealNext() methods work in
conjunction. HasNext() indicates if there is another card that can
be dealt and DealNext() deals that card. Calling DealNext()
without checking HasNext() can lead to an IndexOutOfRangeException
which is changed into a OutOfCardsException.
HasNext()
DealNext()
IndexOutOfRangeException
OutOfCardsException
PokerGames5Cards.cs is where things get a little more interesting.
After searching for an existing library that could evaluate a set of cards to
determine the best possible 5 card (Hand) set, I found that most of the
existing code was more complicated then it needed to be, often consisting of
several hundred lines of code that was confusing to use. After a few hours of
searching and examining existing packages I decided to write this library.
Central to this library is the evaluation method. PokerGames5Cards.cs
has only one public method: Evaluate, it takes a Card[]
array of 5 or more cards and an object, it returns the best 5 card
hand. The basic idea behind Evaluate is that it makes every possible 5 card
combination and then sorts them based on the relative strength of the hand.
PokerGames5Cards.cs
Evaluate
Card[]
object
public Hand Evaluate(Card[] cards, object player)
{
if(cards.Length < 5)
{
throw new ArgumentException("Not enough Cards to perform evaluation");
}
ArrayList combinations = new ArrayList();
int len = cards.Length;
// iterate through all possible combinations
for(int i = 0; i < len - 4; i++)
for(int j = i+1; j < len - 3; j++)
for(int k = j+1; k < len - 2; k++)
for(int r = k+1; r < len - 1; r++)
for(int s = r+1; s < len; s++)
{
// create a new hand
Hand h = new Hand(player);
// add the cards
h.AddCards(new Card[]{cards[i],
cards[j],
cards[k],
cards[r],
cards[s]});
// evaluate the hand
EvaluateHand(h);
// add it to the group
combinations.Add(h);
}
// sort the compiled hands
combinations.Sort();
// return the largest
return (Hand)combinations[0];
}
The code needed to determine if a hand falls with in a certain category (flush,
straight, …) is fairly straight forward and I’ll leave it up to the readers to
examine the code directly. The interesting part is how we evaluate each hand.
Because poker hands neatly partition themselves into categories and each
category can be further partitioned into subcategories, I decided to map all
possible hands into the set of integers. With this accomplished, any hand can
be quickly and easily compared to any other hand, without having to go into all
the checks (to see if they are both 2 pairs, then checking the pairs, then
checking the kickers if needed) so prominent in other packages.
I start by evaluating each hand to determine what major category it falls into
(Flush, straight, …), then I reorder the cards with the strongest cards
relative to the major category in the front. (The reordering isn’t really
necessary but it makes it easier to put on screen if they are already in
order).
Example one: (AH) = (rank, suit) = Ace of Hearts
3H, 5H, JH, 8H 2H falls into the major category for flushes. The cards are then
reordered to JH, 8H, 5H, 3H, 2H and mapped into a 32 bit integer using the
following private method.
private int SetHandValue()
{
int handValue = 0;
handValue = ApplyMask(handValue, (int) type, 20);
handValue = ApplyMask(handValue, (int) cards[0].GetMember, 16);
handValue = ApplyMask(handValue, (int) cards[1].GetMember, 12);
handValue = ApplyMask(handValue, (int) cards[2].GetMember, 8);
handValue = ApplyMask(handValue, (int) cards[3].GetMember, 4);
handValue = ApplyMask(handValue, (int) cards[4].GetMember, 0);
return handValue;
}
private int ApplyMask(int origninal, int value, int shift)
{
int temp = value << shift;
return origninal | temp;
}
Where type is of Type enumeration
type
Type
private enum Type
{
HighCard = 0,
Pair,
TwoPair,
ThreeOfAKind,
Straight,
Flush,
FullHouse,
FourOfAKind,
StraightFlush,
RoyalFlush
}
Example two:(already reordered)
(Hand 1)8H, 8D, 2H, 2D, KC = 0010 1000 1000 0010 0010 1101
(Hand 2)8S, 8C, 2S, 2C, AC = 0010 1000 1000 0010 0010 1110
now we can see that Hand 2 easily beats Hand 1 simply by checking it numerical
value.
Hand.cs is relatively simple, it is a place to hold the Card[]
for each hand, the numerical value of the hand, and implements the IComparable
interface so the hands can be compared against other hands.
In Summary, all of the classes in this library are short and to the point. Hands
are easily partitioned possible into groups and subgroups (including kicker
cards) that can be mapped directly to the set of integers for easy comparison.
Ok, so the fun stuff.... who is programmer enough to compete? On too, THE
PROGRAMMERS CHALLENGE!!! I have written a little Texas Hold’em game. And I’d
like to have as many people as possible submit their idea of a great poker bot.
In January 05 we’ll have a series (around 1000) of tournaments to determine the
winning bot. The reason for having more then one tournament is to give the bots
a chance to learn the moves of the other bots (if desired), rather then to just
rely on the statistics of the current hand, if they want to.
Please Examine the second zip file. It contains all the files that you need in
order to complete your bot. Of course, you need the card library along with: AbstractPlayer.cs,
HoldemPlayer.cs, HoldemGameState.cs, HoldemActionEvent.cs,
ActivePlayer.cs.
AbstractPlayer.cs
HoldemPlayer.cs
HoldemGameState.cs
HoldemActionEvent.cs
ActivePlayer.cs
AbstractPlayer.cs and HoldemPlayer.cs are to abstract
classes that provide a lot of the basic functionality of the player. Your bot
must extend HoldemPlayer.cs, inside the holdem player you have the
following information. Card[] cards contains your hole cards, you
can check isButton, isBigBlind, isSmallBlind
to determine if you are the dealer or one of blinds. Your inheriting class must
provide concrete implementation for the abstract methods in HoldemPlayer and
AbstractPlayer. That is, you must provide the functionality for the following
methods:
Card[] cards
isButton
isBigBlind
isSmallBlind
public override HoldemActionEvent
EvaluateCurrentAction(HoldemGameState state)
{
// Add your code here
}
In EvaluateCurrentAction you need to make some kind of move, you are passed the
current game state, which gives you PlayersChipCount (returns the
players chip count), AmountNeededToCall (returns the amounted
needed to call), GetPlayerList (returns a list of the active
players), GetCommunityCards (returns the table cards), GetBigBlind
(tells you what the current bigBlind is), GetPotSize (tells you
the current size of the pot) and GetLastAction (returns the last
action that was performed)
EvaluateCurrentAction
PlayersChipCount
AmountNeededToCall
GetPlayerList
GetCommunityCards
GetBigBlind
GetPotSize
GetLastAction
public override void HoldemActionUpdate(HoldemActionEvent action)
{
// your code here
}
ActionUpdate is a curtsy method that informs all the plays of
moves that are made, as they are made, including your moves. They can be used
to track the game. Then when you get the game state in EvaluateCurrentAction
you can verify your internal tracking.
ActionUpdate
EvaluateCurrentAction
HoldemActionUpdate gives you access to the event of the player. Specifically,
the methods Actions and Amount. Actions returns
the event action and Amount returns the amount associated with
that action. where Actions is the following enumeration.
HoldemActionUpdate
Actions
Amount
public enum Actions
{
FOLD = 1,
CALL,
RAISE,
ALLIN
}
OK, so, if you up for the action, email your bots (as text only - no
attachments) to the BotChallenge2k5@hotmail.com with the following subject
line: "BOT CHALLENGE 2K5 !!"
and the winner will be posted after the tournaments have been run... Good
programming and Good luck. These emails will be read in as text and placed into
a single file named after whatever you call your class that extend HoldemPlayer.cs.
so please be sure that all of the classes that you need are all working
properly... No extra text, if it doesn't compile, its deleted.
This article has no explicit license attached to it but may contain usage terms in the article text or the download files themselves. If in doubt please contact the author via the discussion board below.
A list of licenses authors might use can be found here
deck d
for(i = 0..51)
r = random(i,52)
swap(d[i],d[r])
// for every card in the deck switch it with another
for(int i = 0; i<cards.length; i++)="" {="" temp="random.Next(0,cards.Length);" card="cards[temp];" cards[temp]="cards[i];" cards[i]="card;" }="" <="" code="">
And here is what I propose instead:
//.
temp = random.Next(i,cards.Length);
// for every card in the deck switch it with another
for(int i = 0; i<cards.Length; i++)
{
temp = random.Next(0,cards.Length);
card = cards[temp];
cards[temp] = cards[i];
cards[i] = card;
}
// for every card in the deck switch it with another
for(int i = 0; i<cards.Length; i++)
{
temp = random.Next(i,cards.Length); card = cards[temp];
cards[temp] = cards[i];
cards[i] = card;
}
General News Suggestion Question Bug Answer Joke Praise Rant Admin
Use Ctrl+Left/Right to switch messages, Ctrl+Up/Down to switch threads, Ctrl+Shift+Left/Right to switch pages. | http://www.codeproject.com/Articles/8754/Poker-Card-Library-and-Poker-Bot-Challenge | CC-MAIN-2016-44 | refinedweb | 2,075 | 61.87 |
Hi, We have seen bad_pte_print when testing crashdump on an SN machine in recent 2.6.20 kernel. There are tons of bad pte print (pfn < max_low_pfn) reports when the crash kernel boots up, all those reported bad pages are inside initmem range; That is because if the crash kernel code and data happens to be at the beginning of the 1st node. build_node_maps in discontig.c will bypass reserved regions with filter_rsvd_memory. Since min_low_pfn is calculated in build_node_map, so in this case, min_low_pfn will be greater than kernel code and data. Because pages inside initmem are freed and reused later, we saw pfn_valid check fail on those pages. I think this theoretically happen on a normal kernel. When I check min_low_pfn and max_low_pfn calculation in contig.c and discontig.c. I found more issues than this. 1. min_low_pfn and max_low_pfn calculation is inconsistent between contig.c and discontig.c, min_low_pfn is calculated as the first page number of boot memmap in contig.c (Why? Though this may work at the most of the time, I don't think it is the right logic). It is calculated as the lowest physical memory page number bypass reserved regions in discontig.c. max_low_pfn is calculated include reserved regions in contig.c. It is calculated exclude reserved regions in discontig.c. 2. If kernel code and data region is happen to be at the begin or the end of physical memory, when min_low_pfn and max_low_pfn calculation is bypassed kernel code and data, pages in initmem will report bad. 3. initrd is also in reserved regions, if it is at the begin or at the end of physical memory, kernel will refuse to reuse the memory. Because the virt_addr_valid check in free_initrd_mem. So it is better to fix and clean up those issues. Calculate min_low_pfn and max_low_pfn in a consistent way. 
Below is the patch, please review and comments Signed-off-by: Zou Nan hai <nanhai.zou@intel.com> diff -Nraup a/arch/ia64/mm/contig.c b/arch/ia64/mm/contig.c --- a/arch/ia64/mm/contig.c 2007-02-27 00:42:06.000000000 -0500 +++ b/arch/ia64/mm/contig.c 2007-02-27 03:03:00.000000000 -0500 @@ -75,26 +75,6 @@ show_mem (void) unsigned long bootmap_start; /** - * find_max_pfn - adjust the maximum page number callback - * @start: start of range - * @end: end of range - * @arg: address of pointer to global max_pfn variable - * - * Passed as a callback function to efi_memmap_walk() to determine the highest - * available page frame number in the system. - */ -int -find_max_pfn (unsigned long start, unsigned long end, void *arg) -{ - unsigned long *max_pfnp = arg, pfn; - - pfn = (PAGE_ALIGN(end - 1) - PAGE_OFFSET) >> PAGE_SHIFT; - if (pfn > *max_pfnp) - *max_pfnp = pfn; - return 0; -} - -/** * find_bootmap_location - callback to find a memory area for the bootmap * @start: start of region * @end: end of region @@ -155,9 +135,10 @@ find_memory (void) reserve_memory(); /* first find highest page frame number */ - max_pfn = 0; - efi_memmap_walk(find_max_pfn, &max_pfn); - + min_low_pfn = -1; + max_low_pfn = 0; + efi_memmap_walk(find_max_min_low_pfn, NULL); + max_pfn = max_low_pfn; /* how many bytes to cover all the pages */ bootmap_size = bootmem_bootmap_pages(max_pfn) << PAGE_SHIFT; @@ -167,7 +148,8 @@ find_memory (void) if (bootmap_start == ~0UL) panic("Cannot find %ld bytes for bootmap\n", bootmap_size); - bootmap_size = init_bootmem(bootmap_start >> PAGE_SHIFT, max_pfn); + bootmap_size = init_bootmem_node(NODE_DATA(0), + (bootmap_start >> PAGE_SHIFT), 0, max_pfn); /* Free all available memory, then mark bootmem-map as being in use. 
*/ efi_memmap_walk(filter_rsvd_memory, free_bootmem); diff -Nraup a/arch/ia64/mm/discontig.c b/arch/ia64/mm/discontig.c --- a/arch/ia64/mm/discontig.c 2007-02-27 00:42:06.000000000 -0500 +++ b/arch/ia64/mm/discontig.c 2007-02-27 03:00:30.000000000 -0500 @@ -86,9 +86,6 @@ static int __init build_node_maps(unsign bdp->node_low_pfn = max(epfn, bdp->node_low_pfn); } - min_low_pfn = min(min_low_pfn, bdp->node_boot_start>>PAGE_SHIFT); - max_low_pfn = max(max_low_pfn, bdp->node_low_pfn); - return 0; } @@ -467,6 +464,7 @@ void __init find_memory(void) /* These actually end up getting called by call_pernode_memory() */ efi_memmap_walk(filter_rsvd_memory, build_node_maps); efi_memmap_walk(filter_rsvd_memory, find_pernode_space); + efi_memmap_walk(find_max_min_low_pfn, NULL); for_each_online_node(node) if (mem_data[node].bootmem_data.node_low_pfn) { diff -Nraup a/arch/ia64/mm/init.c b/arch/ia64/mm/init.c --- a/arch/ia64/mm/init.c 2007-02-27 00:42:06.000000000 -0500 +++ b/arch/ia64/mm/init.c 2007-02-27 03:08:10.000000000 -0500 @@ -616,6 +616,22 @@ count_reserved_pages (u64 start, u64 end return 0; } +int +find_max_min_low_pfn (unsigned long start, unsigned long end, void *arg) +{ + unsigned long pfn_start, pfn_end; +#if CONFIG_FLATMEM + pfn_start = (PAGE_ALIGN(__pa(start))) >> PAGE_SHIFT; + pfn_end = (PAGE_ALIGN(__pa(end - 1))) >> PAGE_SHIFT; +#else + pfn_start = GRANULEROUNDDOWN(__pa(start)) >> PAGE_SHIFT; + pfn_end = GRANULEROUNDUP(__pa(end - 1)) >> PAGE_SHIFT; +#endif + min_low_pfn = min(min_low_pfn, pfn_start); + max_low_pfn = max(max_low_pfn, pfn_end); + return 0; +} + /* * Boot command-line option "nolwsys" can be used to disable the use of any light-weight * system call handler. 
When this option is in effect, all fsyscalls will end up bubbling diff -Nraup a/include/asm-ia64/meminit.h b/include/asm-ia64/meminit.h --- a/include/asm-ia64/meminit.h 2007-02-27 00:42:07.000000000 -0500 +++ b/include/asm-ia64/meminit.h 2007-02-27 03:01:15.000000000 -0500 @@ -35,6 +35,7 @@ extern void reserve_memory (void); extern void find_initrd (void); extern int filter_rsvd_memory (unsigned long start, unsigned long end, void *arg); extern void efi_memmap_init(unsigned long *, unsigned long *); +extern int find_max_min_low_pfn (unsigned long , unsigned long, void *); /* * For rounding an address to the next IA64_GRANULE_SIZE or order - To unsubscribe from this list: send the line "unsubscribe linux-ia64" in the body of a message to majordomo@vger.kernel.org More majordomo info at on Wed Feb 28 12:34:34 2007
This archive was generated by hypermail 2.1.8 : 2007-02-28 12:34:49 EST | http://www.gelato.unsw.edu.au/archives/linux-ia64/0702/20072.html | CC-MAIN-2020-16 | refinedweb | 924 | 55.54 |
Every)
I’ve never felt like the limited feature set of Compressed Folders was an issue. It already offers far better functionality than most other operating systems’ file compression UIs, and if you really need additional features you’ll probably want something best implemented in a third-party application anyway.
This is one thing that if there was a Windows Uservoice I would upvote. Unfortunately this having been frozen in time means people are forced to use things like 7zip and WinRaR both of which have security issues an no patching mechanism. Having this sort of thing available as an OS API (preferably with an api compatible with gzip?) would be a huge boon. I know the code is there, IIS uses it, but as far as I know it’s not exposed anywhere documented.
The .zip file format was created circa 1989 and its creator, Phil Katz, made the specs freely available so that anyone could write their own zip handlers. That’s why free programs like 7zip are able to exist and why there are many programs out there that routinely handle zip files. This is a weird issue, especially for a company as large and rich as Microsoft.
Oh, Unicode file name support was exactly what I was going to complain about… I don’t care about all the fancy features like AES, I would just very much like if it could *create* archives with Unicode names, not just extract them.
In recent Windows 10 you can create ZIPs with PowerShell using Compress-Archive/Extract-Archive:
Only downside is it can’t handle archives >2gb (which I think is a limitation shared with the GUI-based archiver).
I believe that was a limitation in the ZIP spec in general until ZIP64 was introduced (in 2001, though it wasn’t implemented broadly until much later). Not sure why .NET wouldn’t implement that particular extension though… might be a limitation in the file streams it’s using. I’ve noticed some inconsistencies in buffers and streams where some expose a LongLength and LongIndex and others do not, forcing you to only have access to 2GB of a file.
I believe you can use System.IO.Compression namespace in .NETv4.5+ in PowerShell for older operating systems
2017: The most popular desktop OS gets support in its primary scripting language for the most common compression format. How can this be a -100 feature? Instead a poorly performing lz library was added, the lz algorithm is surely more patented than zip, which should have been bundled in windows since day 1. Fun isn’t something one considers when using winapi, but that nano edition removed lz32.dll recently does put a smile on my face.
But on another topic, doesn’t Windows happen to have another implementation for said “compression and decompression code” by which it implements e.g. loading of .appx or .docx files (which are zips under the hood)? Or is _that_ also licensed in strange ways?
I imagine that’s part of Microsoft Office code, not Windows code.
I would imagine there were a “licensing terms change” between Microsoft and PKWare, because .NET framework now provides programmatic mean to do both zip and unzip, and .NET runtime is an optional “windows feature” that comes with installation media of Windows OS.
Also, IMO those companies selling archiving algorithm nowadays only care about the ability to create archive but not extracting archive. RARSoft was giving away the unrar library so that utility programs such as 7-Zip can extract RAR but not create RAR file. The allows the algorithm to reinforce its “industrial standard” status. I suspect if Microsoft just ask them if Microsoft is allowed to add code to extract AES encrypted ZIP files, it would be an easy okay.
Why do you think it was PKWare?
Use the Open XML SDK to do that instead:
Presumably, the ZIP code in .NET is independent of the ZIP code powering Compressed Folders. If this is the case, then why not re-implement Compressed Folders using the .NET implementation (converted to native if necessary)? That way, MS would then be able to extend support to include AES encryption and ZIP64, and also allow programmatic manipulation of ZIP files.
And yes, I realise I’ve asked a question that has likely been asked dozens of times already :)
I’m sure you and I weren’t the only ones with that thought. It’s not clear to me if Raymond is trying to say that the “terms of the license” actually prohibits Microsoft from implementing their own Zip shell extension (hopefully only until some specific future date), or whether it’s just prohibiting Microsoft from using this licensed shell extension for those purposes.
Companies get upset with Microsoft when they start including features in the OS that these companies had been selling (see when MS added a web browser and when they added anti-malware, just to name a couple), and beyond legal anti-trust concerns, there’s also just the business concerns of annoying companies which provide the software that makes it worthwhile for people to buy Windows. I’m sure they sometimes feel like they can’t win, where if they continue to not include some feature inside the core OS people aren’t happy, and if they do include it then also people aren’t happy.
As far as I can tell there are at least three different compression implementations in the windows codebase: IIS, .NET, and Compressed folders. I’m pretty sure that none of them share code either.
It could be more than three. There is zlib in Windows HTTP library (client, not server). I discovered it by accident by placing a break-point into inflateInit2 function and the debugger matched it in my binary and, to my surprise, inside webio.dll.
All the more argument for having a single common OS maintained zlib.dll that everybody can share. I don’t think it would help compressed folders but it would allow for the rest of us to not have to duplicate functionality the OS can ship and patch.
As was mentioned, there are other things to spend development resources on. Even if there ARE three different compression libraries in Windows, if they all work, there’s not a big incentive to refactor stuff that is working.
There’s sqlite (winsqlite3.dll) in Windows 10, so that’s a start.
It’s likely another instance of the -100 point barrier. The existing UI and code work just fine for 90-some percent of cases, whereas using ZipFile would require someone to rewrite all the Explorer hooks, mimic the existing UI, and generally do a lot of drudge work for little to no gain. And like Raymond said, developer resources are limited.
That implementation had its own problems, and they eventually switched to zlib I thinl.
A third-party to implement ZIP functionality, really?
When zip file parsing/generation can easily be implemented from scratch in a day by someone who knows nothing about it, and, as for compression, zlib existed since 1995 and I believe its license is permissive enough.
A component that is mostly independent and implementable from scratch by someone who knows nothing about it in a reasonable timeframe is a slam dunk candidate for a component to outsource when you are short on time.
And yet, nobody is out there getting rich implementing this and selling it to Windows users. . .
I think one obvious answer to the customer would be, “why do you want this?” Because I’m pretty sure they just read about it in an in-flight magazine.
Compression algorithms in the shell are really nifty in theory, although I think much less useful than they were in 2000, because floppy disks are dead and bandwidth is exponentially better and, in fact, you can do compression on-the-fly on a decent network connection.
But encryption is a different animal. If, for some reason, you wanted to encrypt a file/directory at one end, and decrypt it at the other — just pipe the zipped file through your choice of encryption algorithm (and back again at the other end). And, of course, the more useful question is “which encryption algorithm do you, the sender, and your partner, the receiver, intend to use?” It might be AES. It might not.
I don’t even know that I’d budge the -100 point needle as much as a single digit on this one, were I a manager in charge of engineering resources.
Compression isn’t the only advantage to compressed archives. Since multiple files can be compressed into a single compressed archive, they’re a popular way to transfer or distribute files. For example, if you need to send someone a month’s worth of logs, you can individually send them one at a time, or you can package the whole logs folder up into a single ZIP and just send that one file.
“…or you can package the whole logs folder up into a single ZIP and just send that one file.”
Indeed you can. I should have made it clear that I’m talking about compressing anything — an entire disk if you want — not just files.
It’s important not to get fixated on the tool used for compression. Compression does one thing and one thing well — it reduces the size of the source. Encryption, on the other hand, does the opposite — if the entropy of the input equals the entropy of the output, well, it’s really not much of an encryption algorithm, is it?
You get basic encryption for free over the wire, these days. I was puzzling over my favorite news feed, which for some reason is delivered over https, and then it occurred to me: why not? The extra bandwidth is not relevant, and who knows when that newsfeed might need encryption?
Think of it as a pipeline. Source data -> appropriate size for transmission -> appropriate encoding -> appropriate decoding -> appropriate decompression -> back to source data.
Separating the two steps (compression and encoding) doesn’t lose you much, if anything, in terms of comprehension, but it gives you as much flexibility as you and the other end of the conversation might need.
Compression algorithms in the shell are more than just a way to save disk space, which yes, seems quaint in an era of TB-sized volumes. They’re a convenient and nearly universally understood file packager. It’s a lot easier to package a series of photos, for example, into a single Zip and send it off versus sending hundreds of separate files, even though the disk space savings is nil for an already-compressed format.
Sounds like a bad deal was made.
I really wish there was an option to disable Zip folders in explorer. I wind up using registry hacks to remove them, but they keep getting re-enabled on random updates.
My issue is that it’s treating them like a directory when they are not. So if I have a directory with a couple of dozen zip files in them, and I happen to open that directory in Explorer, it happily proceeds to expand the folder tree into all of the zip files.
I want my zip files to be treated as files.
At the least…it would be good if it didn’t just say “Unspecified Error”. for AES. It’s really confusing, especially for those without ever having encountered this before. If it just said something like “AES Encryption in Zip not supported natively in Windows, please download a Zip tool.” (well something that directed the user what to do)…it’d be easier to stomach Microsoft’s decision here. I think there was something similar with DVD playback, where it allow for downloading from the Windows Store or something like that (may be remembering wrong), instead of just failing with unspecified error.
Naturally, code written before the AES feature was introduced cannot have an error message talking about a feature that didn’t exist. As far as old code is concerned, the file appears to be corrupted.
Knowing nothing about the way the Windows or Zip code works, I wonder if Microsoft could somehow pre-process the zip header themselves in new code (before using whatever licensed zip tool you have used and are locked in to), and notify about the AES error beforehand, avoiding some confusion.
Just for you, I took a look at the code. It took a while even to find the part that checks the flags that would indicate AES. (Somebody didn’t believe in giving names to constants.) And I don’t know what sort of header preprocessing leads to automatic code generation that detects and reports AES.
I’m pretty sure he meant use the .NET code as a reference to write a new header parser to check if a file is AES.
I’m not going to throw out the old header parser and replace it with a new one inspired by the .NET version. That parser is almost certainly tightly-coupled to the rest of the code.
I feel like you can’t blame all deficiencies on old code, especially given MS resources.
Yes, pre-AES code didn’t have a crystal ball and could not have a clear error message for a future feature.
Maybe, ZIP is not high enough on the priority list to warrant assigning resources to support new features.
But in 15 years, you could have added some error handling and a helpful error message.
The user experience is bad and you’re just making up excuses.
In 15 years, I’ve not had a single ZIP file on any Windows machine that used AES encryption. So perhaps your case is an edge case, and Raymond isn’t making excuses for not implementing something that could cause bugs for the majority of users to satisfy a minority of users (who can probably figure out what the problem is themselves anyway).
Many people won’t try to open an AES-encrypted zip, ever. Good for you that you belong to this group.
Turns out some of my customers have sent me a bunch of files by e-mail in a single AES-encrypted zip archive. I would say it’s a reasonable use-case if you’re going to transmit files by e-mail.
As Raymond himself mentionned before, when you have 200 millions customers, even a small percentage of them is a lot of people.
I’m making up numbers, but if as few as 0.5% of users have seen an encrypted zip at least once, that would still be 1 million customers affected.
Turn it anyway you want but “Unspecified Error” is a terrible UX. That’s a fact.
It could be “AES encryption is not supported” with relatively little resources and little risk for existing users. I have a hard time believing that making this change is that much of a deal for a behemoth like Microsoft. Raymond could say “we just don’t care”, but instead he’s making up excuses about the code being older than the feature… If he _wanted_ to do something about, he _could_.
@bystander and Drak: if AES-encrypted Zip files were to somehow become as commonplace as unencrypted Zips, and if users were to run those files through the compressed Files feature, then I supposed that would move the needle on the -100 points deficit. The feature set is limited to just the most common Zip file cases circa 2000, which are still the most common cases today. I would reasonably expect that a user who receives Zip files with post-2000 encryption algorithms would have a separate tool to handle those files.
I believe that ZIP files are much less important as a transfer mechanism today than they were even a couple of years ago. I remember when most Web sites stored their downloadables as ZIP, RAR, or GZ files on an FTP server; now FTP servers as essentially gone on many sites and have been replaced with cloud storage or torrents (or some equivalent.) One of the first things that I used to do was install 7-Zip because I often encountered oddball archives; since 2010 or so, I haven’t bothered doing so and honestly rarely missed it. As for the new features that were added to the ZIP format this century, I think that, like CD-I and HD-DVD, most came too late to gain widespread adoption and other technologies filled the need: I certainly have never handled a ZIP file using any of the features except for ZIP64.
Also, hasn’t 7zip and Gzip even, made big advances every year in zip compression since the year 2000? It seems like there would be a benefit with much smaller files, maybe faster times, even without updating to AES, but just updating the zipping algorithm.
System.IO.Compression.DeflateStream that added in .NETv2 was switched to use zlib instead of built-in deflate algorithm as of .NETv4.5 because of the same reason.
I wish there was a good way to turn off zip folders in Windows (a files and folder setting would be awesome). The feature is confusing on it’s face (is it a file or is it a folder?) and typically causes more problems than it solves. It used to be a simple matter of unregistering the zipfldr.dll COM component but it’s no longer that simple.
If Microsoft isn’t going to update this feature (or make it user friendly) then they should at least allow us to turn it off.
Native Zip support was one of the new features in Windows ME I enjoyed the most. No more need for WinZip and other nagware!
I have never felt restricted by the limited functionality of Windows’ Zip support. Still, I do find it somewhat interesting that a lack of resources hinders the development of this (moderately advanced) feature, given the fact that Microsoft is one of the world’s largest companies. I suspect that the demand for improved Zip support is pretty low — maybe most users are as satisfied with the current functionality as I am.
There are some edge cases where the built-in zip functionality falls down badly. The main one is large zip archives with lots of subfolders and files. Use Windows to unzip, say, the Eclipse zip file, and it could take 10-15 minutes (even with an SSD, when you’re extracting from one physical drive to a different one!), whereas WinZip can do it in just a couple.
It may be slow, but it does work. The majority of Windows users aren’t going to be unzipping huge files anyway, and if they do, the don’t have any expectation that it’s going to be quick.
It dates back to Plus! for Windows 98 I think, and was also in Windows Me.
It had to be a different code: june 17, 2002 press release “We are very pleased to announce that Microsoft has chosen Inner Media’s DynaZip technology to provide the compression backbone for their Windows XP Compressed Folder features,” says Inner Media President, Neil Rosenberg. “
That could just be a continuing engagement kind of announcement sort of thing. It wouldn’t surprise me if Microsoft went back to the same vendor and said, “Hey, this needs to work against NT now.”
“Anybody who has worked with the Compressed Folders feature currently in Windows XP will know that DynaZip will probably be an improvement, as the Compressed Folders feature isn’t exactly a top performer. ” – source linked in “Website”.
Honestly, the bits I hate about ZIP in Explorer are the weird arbitrary limitations in what you can do from the UI.
eg you can’t drag’n’drop move or copy from one compressed folder into another, or even open a file with a non-default handler.
Plus it grinds to an absolute halt when simply directory listing inside a ZIP that contains a thousand files. No excuse for that, given how ZIP works internally.
I’m always amazed with the Cabinet (.cab) compression format. It has good command line utilities, more than competitive compression ratios, and Windows built-in support for decompression. It’s a convenient solution to attach text or configuration files as compressed resources to executable files.
There’s more!
0. .CAB is Windows native archive format!
1. the “Microsoft Update” files .MSU of Vista and newer versions are .CABs.
2. the “Microsoft Installer” uses external .CABs, or .CABs embedded into .MSI and .MSP, and processes them without unpacking the .CABs to disk.
3. the SetupAPI introduced with ’95 can install directly from .CABs (); it can even run .INF scripts packaged in .CABs ().
If it was made into a standard, it would be interesting to perform comparisons against the other major compression algorithms to see how well it fares. But my guess is that Microsoft is happy keeping it internal instead of effectively “productizing” it.
For everyday use, i.e. with a reasonable speed vs. archive size tradeoff, I’m absolutely happy with CAB (my “gut-feeling” is that the resulting archives are small enough).
There’s a nice zip utility [0] to produce standard zip-files even smaller than what’s possible with 7-Zip, but it’s taking much longer, so I’m using it only for automated tasks.
[0]
.CAB archives of Windows executables are typically just half the size of .ZIP archives, sometimes much less,
Which part of “it’s Windows native archive format”, i.e. the standard on Windows, is not understood?
JFTR: Microsoft published the format specification in the last millennium.
“Because adding features requires engineering resources, and engineering resources are limited” – I wish Microsoft didn’t waste so much of its resources on Windows Runtime and other such things that no one really wanted.
Ah, I experienced some of the unpleasantnesses involves with trying to use the Zip Folder extension programmatically. Including the fact that when you use the SHFileOperation trick to add a file, the adding will always be done asynchronously, with no non-kludgy way to wait for it (including that it doesn’t use SHGetInstanceExplorer() as explained in May 28, 2008 article: I know, I tried. — That said, I didn’t know these functions existed until I read that article, published 5+ years after Windows XP).
Somehow, learning at least some of those problems were intentional makes me both more and less angry about it.
Exactly why are engineering resources so limited, in a multi-billion dollar multi-national corporation? This excuse is one I’ve never understood.
My own company has three employees, we licenced the same zip code as Microsoft did, then we bought the source code, updated it to support things like Unicode filenames, and all is dandy. Took a few weeks.
Exactly how can your resources be this limited?
You probably don’t have to maintain a three-million file code base.
I have heard this excuse once too many. Yet, during Steve Ballmer time, Microsoft purchased other companies as if they were candy, rebranded their products, failed to gain traction, sold them to some other company and moved on. The ill-fated Expression Studio brand and the ForeFront brand were such examples. And don’t let me mention a certain very reputable European hardware manufacturer! Obviously, having to maintain a three-million codebase file has never been a problem.
My own experience at Microsoft (take it as you will) is that you can make local changes quite easily. Let’s say you want to change a logit analysis to a probabilistic analysis on the server — easy! A few test gates to production, and you’re done.
Now, imagine Raymond’s scenario, which he underestimates. Three million files is nothing at Microsoft. Add in the customer dependencies and the versions and the different operating systems and ask yourself, would you really want to build AES back into that?
I think not. Even if the product wasn’t licensed from somebody else, it would still be a -100 engineering points before you could come up with a good reason for doing it, rather than doing something rather more relevant to your customers.
On the plus side, from your point of view … this is why the world needs responsive three person software engineering companies that are responsive to the < 100 small companies they deal with.
Honestly I’d want to combine all the windows compression libs into one anyway and use that for the shell as well. I’d know at some point the Google project zero folks are going to test this code and find things I don’t want to have to fix. So personally to me the question is: What’s the cost to switch to something fairly common and standard (zlib) vs what we have now and what’s the time cost if someone finds a vuln (which they will).
Per dollar of revenue I guarantee we maintain more code than you do!
If that’s the case, Microsoft should have moved on to something free and open-source (like 7-zip) a long time ago. This would have cut down royalty fees too.But since PowerShell can now compress and decompress ZIP operations, I can only assume the restricting license terms have expired.
The licensing terms are still in full force. What you see in .NET is an unrelated implementation which therefore not subject to the licensing terms that apply to the Compressed Folders implementation.
Seems like the matter is more complex than it appears from your post. The interesting this is, you seem to have written to clarify things.
Oh, well. I guess you can’t; NDA, anyone?
It seems pretty clear to me: it does the minimum required for most people (which was deemed sufficient), and while Microsoft has the source code, and it’s not a codebase that anyone internally has any real knowledge of (i.e. it’s essentially a FCIB.) Making changes to it would be a massive pain, some of the extensions are patented (e.g.), and there are many, many alternatives that you can choose from to access ZIP files; this makes it unlikely as a candidate for updating, and rewriting a component that works fine in general seems silly!
The code is a FirstCarribbean International Bank?
Foreign checked-in binary:
I agree, it seems pretty straight forward to me. The ZIPed folders functionality in Windows is sufficient for the majority of users’ purposes; there’s no compelling reason to do extensive changes which would soak up development resources for little benefit.
Anyone who needs extended functionality, like AES or better compression, would be better served just installing one of the third party options, most of which integrate in to the shell window anyway (we standardised on using 7-zip at work ages ago).
Why did you not consider replacing it by an opensource library having a suitable feature set, then?
The interface to the library would be different, which means changing all the places where the existing code interacts with the library. (Oh sorry, you have different handle lifetime policies? And you encode locations in the ZIP file differently? Too bad we persisted that in the IDLIST, so now we have to add a translation layer that can take old IDLISTs and convert the locations into the new library’s format.)
When will you learn? No, you *don’t* keep such backwards compatibility, because that will make the whole OS irrelevant in the OS market when more of your development resources are spent keeping old stuff working instead of developing new stuff that more people want. A good opportunity to break backwards compat is for example each time when the hardware architecture are changing, like going to x86-64 (b/c then applications have to be recompiled anyway). You missed that opportunity, please don’t miss the next.
Also, as you probably know, the compression library that you licensed have had security issues because the infamous supplier stopped patching your library years ago. Please move to zlib instead, it’s free, it’s faster, supports encryption, and you’re already using it in other ms products, etc, etc. You’re really just making up excuses.
here’s a NSE (should work for windows explorer too) that does most archive types, including zip/7z/rar. Read-only at present
Raymond,
Windows 10 includes the open-source libarchive library by default (system32/archiveint.dll) and that library supports tar, pax, cpio, zip, xar, lha, ar, cab, mtree, rar, gzip, bzip2, lzip, xz, lzma etc… and even includes support for… encrypted zip archives! :)
The shell should deprecate support for “zip folders” and instead support “archives” considering the licence terms for “zip folders” are detrimental to the security of end-users while also preventing development of a modern shell experience with multi-archive support.
The libarchive library (in your words) is an “unrelated implementation which therefore not subject to the licensing terms that apply to the Compressed Folders implementation”… and it’s already included on Windows 10. Why doesn’t the shell make use of it and add multi-archive support to Explorer?
Because 99% of the users are not impacted by this issue, and the other 1% know how to install and use 7zip?
“archiveint.dll” does not exists on my Win10 x64 system at home. I suspect that it comes from WSL that’s currently for Insider only.
The story of compressed files in Windows is a good example of how shortsighted management and careless corner-cutting can end up creating more work for everyone in the long run. The ZIP support in Windows Explorer has been left to stagnate because management didn’t want to spend resources on maintaining it or building their own…but in the end, full ZIP support had to be implemented at least twice more elsewhere in the Microsoft codebases, and the Explorer implementation couldn’t be reused for that due to the limited featureset and restrictive license terms. An outsourcing decision that seemed like a labor-saver at the time ended up creating more work for Microsoft as a whole than if they’d just developed their own non-encumbered library in the first place and reused the code (or at least the expertise) across products.
Great post, as always, Raymond.
. ”
Has there ever been a reported security vulnerability regarding Zip Folders in Windows? I presume the limited expertise in this code base would make fixing them more difficult.
Besides Pinball and ZIP folders, how many other parts of Windows are like this? :-)
Did the old Win7 image preview application have special knowledge about zip files? It could show the previous/next image in those, while its Win10 counterpart cannot.
However seemingly from this functionality doesn’t always work on Win7 either.
For the record, ZIP shell folders *are* programmable. Scriptable, even. Create a COM object “Shell.Application”, then call Namespace(), passing the ZIP name. | https://blogs.msdn.microsoft.com/oldnewthing/20180515-00/?p=98755 | CC-MAIN-2018-47 | refinedweb | 5,184 | 61.77 |
I am developing an android application which loads a web application when started. To achieve the purpose I am using webview control. I want my webview to be displayed full screen so that it will give native feel to the users. I tried all the methods to display webthview full screen but nothing is working. […]
I’m trying to listen to key events in android web view. Example when user is filling a form, I should receive the key events. This is my WebView code public class MyWebView extends WebView implements OnKeyListener { ….. @Override public boolean onKeyDown(int keyCode, KeyEvent event) { Log.i(“PcWebView”, “onKeyDown keyCode=” + keyCode); return super.onKeyDown(keyCode, event); } […]
I am trying to pass JSON-formatted data from my Android WebView to a HTML page. However, the app crashes whenever I try to parse the original JSON data, which I am expecting to be of the format {“key”:”data”} The aim of my app will be to interpret this JSON data, form it into an array […]
In my application I have a webview in which initially I load any website, say. Later on a button click I want to clear this webview, for which I do webview.loadUrl(“about:blank”); But doing this does not clear my webview, it continues to display google.com I tried to re initialize the webview as webview = […] […]
I have a webview in my android app. I have loaded the url: There are links to various notifications. My WebView is doing nothing when I click on them. It is working on everything else – Chrome (Android), Chrome(Desktop). The links are fine. One thing I notice notifications have the href with PHP file. […]
I have some content on a webpage which contains æ ø å, but my webview cant show them properly. Does anyone know what the problem might be ?
(using Samsung Galaxy Tab and Android 3.0, this is intended to work on every 3.0+ tablet, like it does in their browsers, just not webview) I have a page with CSS and jQuery that loads fine in browsers, android browsers and iOS devices, but android webview refuses to load it consistently. I’ve narrowed down the […]
I’m developing a project that uses a Flash video within a webview. I solved all my problems regarding to code, but only worked below Honeycomb. Reading this I found out how to solve the problems for Android 3.0 and later (including ICS), but now it’s the big question… If I make my project compatible with […]
I’m using a WebView to open some files saved to the app. Is there a way to link to the app’s directory where files saved at runtime would be, in a similar way that does? By link I mean loadUrl( *path* ) and also in the HTML markup of the file being opened <img […] | http://babe.ilandroid.com/android/webview/page/9 | CC-MAIN-2018-22 | refinedweb | 477 | 64.51 |
In the previous article we showed how to draw a regular star polygon which was not degenerate, i.e. it could be drawn without lifting the pen from the paper. In this one we will attempt to come up with a way to draw two types of star polygons.
By two types I mean degenerate (must lift the pen) and non-degenerate/regular (no pen lift). I don't know the true names of these star polygon types, but I hope I get the idea across.
Repeated regular polygon method
In this method we are going to look at how to draw a degenerate star polygon by repeatedly drawing regular polygons (like triangles, squares and pentagons)
The Star of David
Let's begin with the simplest degenerate star we know of: the Star of David. This type of star can be drawn by simply drawing one equilateral triangle and then overlapping it with another one which is upside down
Note that you can draw a triangle using the circle function! See the code below
```python
import turtle as t

t.shape('turtle')
t.color('#4F86F7')
t.pensize(2)
LENGTH=200

t.circle(LENGTH)
t.circle(LENGTH,steps=3)
t.circle(LENGTH,steps=4)
t.circle(LENGTH,steps=5)
```
In the code above, we are drawing a triangle, square and a pentagon inscribed within the same circle, making use of the `steps` parameter. Here's what we get:
Now that we know how to use the `steps` argument, we can use it twice from two different places on a circle to draw our star
```python
import turtle as t

t.shape('turtle')
t.color('#4F86F7')
t.pensize(2)
LENGTH=200

def star_of_david():
    """Draws two overlapping triangles
    """
    t.circle(LENGTH,steps=3)
    t.penup()
    t.circle(LENGTH, 180)
    t.pendown()
    t.circle(LENGTH,steps=3)
    t.circle(LENGTH)

star_of_david()
```
In the code above, `t.circle(LENGTH, 180)` makes the turtle go to the top of the circle so that it can draw another triangle from there. Here's what we get
Degenerate Octagram
You can draw a degenerate octagram using a similar method
```python
import turtle as t

t.shape('turtle')
t.color('#4F86F7')
t.pensize(2)
LENGTH=200

def degenerate_octagram():
    t.circle(LENGTH,steps=4)
    segments=8
    t.circle(LENGTH,360/segments)
    t.circle(LENGTH,steps=4)
    t.circle(LENGTH)

degenerate_octagram()
```
After drawing the first square, we are moving the turtle to a location determined by `360/segments`. `segments = 8` because an octagram has 8 equal segments.
Remember that an n-gram star has n equal segments
Degenerate enneagram
According to Wikipedia, an enneagram is a 9-pointed star. Unlike the hexagram and the decagram, the enneagram cannot be drawn with just two regular polygons. But how many do we need?
A degenerate enneagram can be drawn using 3 equilateral triangles (you can try drawing this on a piece of paper too).
```python
import turtle as t

t.shape('turtle')
t.color('#4F86F7')
t.pensize(2)
LENGTH=200

def enneagram():
    segments=9
    t.circle(LENGTH, steps=3)
    t.circle(LENGTH, 360/segments)
    t.circle(LENGTH, steps=3)
    t.circle(LENGTH, 360/segments)
    t.circle(LENGTH, steps=3)
    t.circle(LENGTH)

enneagram()
```
In the code above, we are calling the `circle(LENGTH, steps=3)` function 3 times
A General method - 1 (Regular non star Polygon based)
Now, how do we know the number of times we need to repeat the circle and step commands? To answer that we can make a few observations about the examples above:
- A hexagram is a (6,2) star, where the vertex count 6 is divisible by the order 2, giving 3
- An octagram is a (8,2) star, where the vertex count 8 is divisible by the order 2, giving 4
- An enneagram is a (9,3) star, where the vertex count 9 is divisible by the order 3, giving 3
With the above, we might be able to say that for a degenerate `n`-gram with the order `m`, the quotient of n/m is the number of sides of the polygon, while the order m is the number of times we need to draw the polygon
So, we should try the following:

For an (n,m) degenerate star, draw m regular polygons, and the polygons drawn should have n/m sides
Let's code that
```python
import turtle as t

t.shape('turtle')
t.color('#4F86F7')
t.pensize(2)
LENGTH=200

def degenerate(n,m):
    segments = n
    order = m
    sides = int(n/m)
    for _ in range(order):
        t.circle(LENGTH, steps=sides)
        t.circle(LENGTH, 360/segments)
    t.circle(LENGTH)

degenerate(10,2)
```
The code above works! It drew the 10-gram and the 12-gram below
You can also draw (12,3) and (12,4)
Interestingly if you try to draw (10,5) and (12,6) it draws this:
This is a reminder that the possible orders (values of `m`) run from 2 through floor((n-1)/2). Basically, the order can't be 1 and it also can't be half of the number of vertices
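As a quick sanity check of that rule, we can enumerate the candidate orders for a given `n` and flag the degenerate ones. These helper functions are my own, not from the article; the gcd test anticipates the general method built further below:

```python
import math

def valid_orders(n):
    # Orders run from 2 up to floor((n - 1) / 2): 1 would give a plain
    # polygon, and n/2 would fold the star into back-and-forth sticks.
    return list(range(2, (n - 1) // 2 + 1))

def is_degenerate(n, m):
    # The pen must be lifted exactly when n and m share a common factor.
    return math.gcd(n, m) > 1

print(valid_orders(10))                                   # [2, 3, 4]
print([(m, is_degenerate(10, m)) for m in valid_orders(10)])
# [(2, True), (3, False), (4, True)]
```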
With the general code above, have we covered all the degenerate regular star polygons?
The (10,4) star
In the examples that we have looked at so far, a degenerate star polygon could be composed of multiple regular (non-star) polygons. But (10,4) is a curious case. In the previous article we had written code that draws regular stars. Let's see what happens when we use it here
```python
import turtle as t

t.shape('turtle')
t.color('#4F86F7')
t.pensize(2)
LENGTH=150

def regular_star(n,m):
    """
    n = number of pointies
    m = order of the star, (number of skips + 1)
    """
    angle = 360*m/n
    for count in range(n):
        t.forward(LENGTH)
        t.right(angle)

regular_star(10,4)
```
Yes we get a pentagram! But we were supposed to have 10 vertices :(
A new type of degenerate
Hence, (10,4) is a new type of degenerate star where `n` is not divisible by `m`, and as such we cannot draw it with the `degenerate` function we wrote earlier.
We can draw this type of degenerate star by overlapping two stars. Unfortunately, we cannot use the circle's `steps` parameter anymore. We need to draw a regular star and then rotate it to draw another star. To do that we will first begin by modifying our `regular_star` function as follows
```python
def regular_star(n,m):
    """
    n = number of pointies
    m = order of the star, (number of skips + 1)
    """
    angle = 360*m/n
    t.left(180-angle/2)
    for count in range(n):
        t.forward(LENGTH)
        t.right(angle)
    t.right(180-angle/2)
```
The code above adds `t.left(180-angle/2)` to place the regular star to the left side of the turtle. This is similar to how a circle command behaves
Notice above that both the stars are drawn to the left side of the turtle
We will additionally make the following changes to our code:

```python
regular_star(7,2)
t.circle(RADIUS)
```
In the code above, we have made the following changes:

- Replaced the global `LENGTH` with `RADIUS`. This is to allow us to draw a star of a defined radius
- Computed a `center_angle`, which is the angle subtended by `m` segments at the center of the circle. `center_angle` is the representation of the arc length in radians
- Used the `center_angle` and the `RADIUS` to compute the length of the arms of the star
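The geometry behind those bullet points can be checked without drawing anything: the arm of an (n,m) star inscribed in a circle is the chord spanning m of the n equal arcs. Here is a small sketch of that computation (my own helper, not the article's listing):

```python
import math

RADIUS = 150

def star_arm_length(n, m, radius=RADIUS):
    # Angle subtended at the centre by m of the n equal arcs, in radians.
    center_angle = 2 * math.pi * m / n
    # The arm of the star is the chord across that angle.
    return 2 * radius * math.sin(center_angle / 2)

print(round(star_arm_length(7, 2), 1))   # arm length of the (7,2) star drawn above
```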
We can now draw a star of a defined radius to the left side of the turtle. We can even overlap them all
A better general method
Our goal is to build a better general method to draw any regular star polygon, degenerate or not, given the number of vertices and the order. Let's begin again by listing the properties of the degenerate stars:

- an (n,m) star is degenerate exactly when n and m share a common factor, i.e. gcd(n,m) > 1
- a degenerate (n,m) star is made up of gcd(n,m) copies of the smaller regular (n/gcd, m/gcd) star
- each copy is shifted from the previous one by 360/n degrees along the circumcircle

The above can be verified by trying them out on a piece of paper (or by just thinking way too hard)
Here's an algorithm that we might be able to use to draw all of the above stars:
1. Find the greatest common divisor (gcd) of `n` and `m`
2. Divide `n` and `m` by the `gcd` to obtain a star (n/gcd, m/gcd)
3. Draw the (n/gcd, m/gcd) star
4. Move the turtle to another position on the circumcircle, determined by 360/n degrees
5. Repeat step 3 until we have drawn the star `gcd` times
Let's try it:

```python
def star(n,m):
    """
    n = number of pointies
    m = order of the star, (number of skips + 1)
    """
    gcd = math.gcd(n,m)
    new_n = int(n/gcd)
    new_m = int(m/gcd)
    segment_angle = 360/n
    for _ in range(gcd):
        regular_star(new_n, new_m)
        t.penup()
        t.circle(RADIUS, segment_angle)
        t.pendown()

star(10,4)
```
With the code above it turns out that when the gcd is 1, we get either a regular non-degenerate star or a polygon. So we have written code to generalize:
- Regular Polygons
- Regular Non-degenerate stars!
- Regular degenerate stars!~
- Weird stick star :)
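One way to see why a single function covers all four cases is to look at what each (n,m) pair reduces to. `reduce_star` below is my own naming; it simply mirrors steps 1-2 of the algorithm above:

```python
import math

def reduce_star(n, m):
    # Decompose the (n, m) star: gcd copies of the reduced (n/g, m/g) figure.
    g = math.gcd(n, m)
    return g, n // g, m // g

print(reduce_star(10, 4))   # (2, 5, 2): two pentagrams
print(reduce_star(12, 3))   # (3, 4, 1): three squares
print(reduce_star(7, 2))    # (1, 7, 2): a single non-degenerate star
print(reduce_star(8, 4))    # (4, 2, 1): four "sticks", the weird case
```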
Examples:
Bonus
You can overlap all orders for a fixed number of vertices to get beautiful images such as these
```python
def all_stars(n):
    for x in range(int(n/2)):
        star(n,x+1)

all_stars(10)
t.hideturtle()
```
Yay!!~
After using R notebooks for a while, I found it really unintuitive to use MATLAB in its IDE. I read that it's possible to use MATLAB with IPython but the instructions seemed a bit out of date. When I tried to follow them, I still could not run MATLAB with Jupyter (a spin-off from IPython).
I wanted to conduct analyses of electroencephalographic (EEG) activity and the best plug-ins to do it (EEGLAB and ERPLAB) were written in MATLAB. I still wanted to use a programming notebook so I had to combine Jupyter and MATLAB.
I spent a bit of time setting it all up, so I thought it might be worthwhile to share the process. Initially, I had three versions of MATLAB (2011a, 2011b, and 2016b) and two versions of Python (2.7 and 3.3). This did not make my life easier on Windows 7.
Eventually, I only kept the installation of MATLAB 2016b to avoid problems with paths pointing to other versions. MATLAB’s Python engine works only with MATLAB 2014b or later so keeping the older versions could only cause problems.
Instructions
- `pip install metakernel`
- `pip install matlab_kernel` - this will use the development version of the MATLAB kernel.
- `pip install pymatbridge` to install a connector between Python and MATLAB.
- … voilà!
MATLAB should now be available in the list of available languages.
Once you choose it, you can start using it in a Jupyter notebook:
Issues
Obviously, things were not always this smooth. Initially, I ran into problems with installing MATLAB's Python engine. The official website suggested running the following code:

```
cd "matlabroot\extern\engines\python"
python setup.py install
```
Which I did but it resulted in an error:
Luckily, the error message was clear so I had to point Python to run the 64-bit version. I double-checked my versions with:
```python
import platform
platform.architecture()
```
Which returned 64-bit as expected:
Using a command with the full path to Python solved the problem:
Summary
I hope this will be useful. I have been messing with other issues which were pretty specific to my system so I did not include them here. Hopefully, these instructions will be enough to make MATLAB work with Jupyter.
PS: I have also explained how to use MATLAB with Jupyter on Ubuntu. | https://walczak.org/2017/07/using-matlab-in-jupyter-notebooks-on-windows/ | CC-MAIN-2020-24 | refinedweb | 380 | 63.8 |
In this guide we will learn how to implement device logic in C/C++. In particular, our firmware will:

- read temperature from a DHT sensor every second and log it
- expose that reading via an RPC service so that it can be queried remotely
mos tool
Run the `mos` tool without arguments to start the Web UI. After start, the Web UI changes the current working directory to the directory where it finished last time. In other words, it "remembers" its settings: the working directory, chosen port, board, etc. In this example, it is an `app1` directory, a quickstart example I have done recently:
Since we are going to create our new app in a different directory, use the `cd DIRECTORY` command to change the current directory. I am going to do `cd ..` to go up one level. Notice the current directory change:
Now we are going to use a `mos clone URL DIRECTORY` command, in order to clone some remote app into a DIRECTORY. Press `Ctrl-n` - that populates the input field with `mos clone app1`. We don't want to use `demo-js` as a template, so change it to `empty` to use a minimal app, and change `app1` to `app2`:
Now press Enter to execute the command. Notice that the mos tool automatically enters the cloned directory:
Click on the folder icon on the bottom left corner to open a system file browser in the current directory:
Here is the meaning of all files:
```
fs/              -- All files we put here will end up on device's filesystem
└─ index.html    -- Device's HTTP server, if enabled, will serve this file
LICENSE
mos.yml          -- Describes how to build an app
README.md        -- Document your app in this file
src/
└─ main.c        -- Contains device logic. We are going to edit this file
```
Open the `mos.yml` file in your favorite editor and add support for the DHT sensor:

```yaml
libs:
  - origin:
  - origin:
  - origin:
  - origin:
  - origin:    # <-- Add this line!
```
Note - all available libraries are collected under the organisation. They are categorised and documented under the "API Reference" docs section.
Now, open `src/main.c`; you'll see the following skeleton code, which initialises an app that does nothing:

```c
#include "mgos.h"

enum mgos_app_init_result mgos_app_init(void) {
  return MGOS_APP_INIT_SUCCESS;
}
```
Let's add code that reads from a DHT temperature sensor every second. We make the pin to which the sensor is attached configurable, by editing the `config_schema:` section in `mos.yml` to look like this:

```yaml
config_schema:
  - ["app.pin", "i", 5, {title: "GPIO pin a sensor is attached to"}]
```
This custom configuration section will allow us to change the sensor pin at run time, without recompiling the firmware. That could be done programmatically or via the `mos` tool, e.g. `mos config-set app.pin=42`.
NOTE: see the mos.yml file format reference for the full documentation about `mos.yml`.
Then, edit `src/main.c` and add a timer (see the timer API docs) that reads the DHT and logs the value (error handling is intentionally omitted):

```c
#include "mgos.h"
#include "mgos_dht.h"

static void timer_cb(void *dht) {
  LOG(LL_INFO, ("Temperature: %lf", mgos_dht_get_temp(dht)));
}

enum mgos_app_init_result mgos_app_init(void) {
  struct mgos_dht *dht = mgos_dht_create(mgos_sys_config_get_app_pin(), DHT22);
  mgos_set_timer(1000, true, timer_cb, dht);
  return MGOS_APP_INIT_SUCCESS;
}
```
The `mgos_dht.h` file comes from the `dht` library that we have included in our app. To find its documentation and API, navigate to "API Reference" -> "Drivers" -> "DHT temp sensor". This should bring you to the DHT temp sensor page. Similarly, you can find out about any other library.
Connect the DHT sensor to pin 5. The sensor itself has the following pins:
This is an example with an ESP8266 NodeMCU. The red connector is VCC 3.3 volts, the black connector is ground GND, and the yellow one is data, connected to pin 5:
Build, flash the firmware, and attach the console to see device logs. Assume we're working with ESP8266:
Choose your board and port in the UI, and run the `mos build` command:

When finished, run `mos flash` to flash the firmware and see the output in the console:
Now let's use the cornerstone of Mongoose OS remote management capabilities. We can make any hardware function remotely accessible. This is done by creating an RPC service. Read more about it in the Overview and Core libraries sections; here we jump straight to it. Looking at the MG-RPC API doc, add an RPC service `Temp.Read`:

```c
#include "mgos.h"
#include "mgos_dht.h"
#include "mgos_rpc.h"

static void timer_cb(void *dht) {
  LOG(LL_INFO, ("Temperature: %lf", mgos_dht_get_temp(dht)));
}

static void rpc_cb(struct mg_rpc_request_info *ri, void *cb_arg,
                   struct mg_rpc_frame_info *fi, struct mg_str args) {
  mg_rpc_send_responsef(ri, "{value: %lf}", mgos_dht_get_temp(cb_arg));
  (void) fi;
  (void) args;
}

enum mgos_app_init_result mgos_app_init(void) {
  struct mgos_dht *dht = mgos_dht_create(mgos_sys_config_get_app_pin(), DHT22);
  mgos_set_timer(1000, true, timer_cb, dht);
  mg_rpc_add_handler(mgos_rpc_get_global(), "Temp.Read", "", rpc_cb, dht);
  return MGOS_APP_INIT_SUCCESS;
}
```
Run `mos build` followed by `mos flash`.
And now, call the device's RPC service by running `mos call Temp.Read`. You will see `{"value": 18.6}` printed.
This call could be performed over the serial connection as well as over a network connection - see the RPC section to learn more.
So I have been wandering around the net and absorbing as much as possible about C++. I feel like I have a decent understanding of C++, but using what I have learned in a practical way is another story. I feel like the best way is just to start making a simple program. So I decided to make a name-the-note-on-the-guitar-fretboard program, and here is my code so far:
```cpp
#include <iostream>
#include <string>
#include <cstdlib>
#include <ctime>
using namespace std;

int main(){
    srand(time(0));
    char guitarNotes[7] = { 'a', 'b', 'c', 'd', 'e', 'f', 'g' };
    char guitarStrings[5] = { 'e', 'g', 'b', 'd', 'f' };
    int note;
    int guitarNoteAr = rand() % 7; // Used to make the array guitarNotes random
    int guitarStringAr = rand() % 5; // Used to make array guitarString random
    string exit;
    cout << "This program will help you memorize the guitar fretboard.\n";
    cout << "Type Begin to start and Exit to quit. Good luck!" << endl;
    while (cin >> exit){
        if (exit == "Begin")
        {
            cout << "Where is the " << guitarNotes[guitarNoteAr] << " note on the " << guitarStrings[guitarStringAr] << " string?" << endl;
            cin >> note;
        }
    }
    system("pause");
    return 0;
}
```
Having a bit of trouble as to how I should write the code to compare `note`.
Could you explain (in words!) what this program should do?
And BTW, why does your guitar contain only 5 strings (neither 6, nor 7 nor 12)?
Victor Nijegorodov
When posting code, please use code tags. Go Advanced, select the code and click '#'.
What is the user expected to reply in answer to the question? Say the question was
Where is the b note on the f string?
What would be the expected reply and why? | http://forums.codeguru.com/showthread.php?545617-Guitar-fretboard-program-help | CC-MAIN-2017-39 | refinedweb | 271 | 79.09 |
Doc entry jobs
I need access to the formatted hdc of an html document in an ie component. The document can assume to be loaded, I need to get access to the device context as it would be formatted if it were printed. ## Deliverables 1) Complete and fully-functional working program(s) in executable form as well as complete source code of all work done. 2) Deliverables must be in ready-to-run condition, as f...
..
I have an 8 page word doc which I need '1 line footer' and page count showing at the bottom of each page. Can't fathom out how to do this. Will mail you document and you mail it back. Quick easy review. Thanks rice fu...
Private for BShukla7
...our website they are taken to their own personalized page with categorized image gallery (server-side image re-sizing and cropping for thumbnails), document uploads (.pdf, .doc, etc.), and contact information (multiple contacts with name, phone, fax, email for each) IMAGE LIBRARY * this is for our internal/private use, to upload many images (with
Create word doc from the gif's, I want the same format as in the images. I started doing this and included a text file that some of the text may be copied and pasted. The doc's will be legal size, 8.5 x 14. ## Deliverables 1) Complete and fully-functional working program(s) in executable form as well as complete source code of all work done.
Entry level C++ homework. 8 problems in all, due by 11/26/05. Please see attachment 11/27/05.doc for the actual problems. Attachments 10.15 and 10.16 are supplemental information used in one of the problems.
I need someone to go through the Installation Document for one of our products and create an index. The document is about 50 pages long. The index should contain at least 500 words. Each item in the Table of Contents should have at least 2 references in the index. I have attached the table of contents and some sample pages. ## Deliverables 1) Complete and fully-functional working program(s) in exe...
We require the following two page contract translated. The two missing words at the end of the contract in article 9, are "saibansho" and "saibansho". Regards Chris
We require the attached pages translated as soon as possible. The translation is FROM Japanese to English. For the right candidate we have a lot more translation work. Regards, Chris
I have 100 word doc articles (average 500 words and some with 2/3 images) that need uploading to my website under the articles section. They will be uploaded to the template and will have the ad on the bottom. Example: [log in to view URL] Excellent Feedback and quick payment GUARANTEED
I need you to take the 15 headlines that I provide for you and find the explaination in a doc file I also provide. This is a pretty easy job. Just read the headlines and find the answer in the file. All the headlines are answered in my document. Then I need you to take the paragraphs that you found and rewrite them in your own words. The
We have a number of PDF documents that we use on a daily basis. We would like these to be converted to MS Word template format, retaining the original formatting and layout, and ensuring the form fields are also available. We have approx 150 of these documents - some very easy, some a little complicated. ## Deliverables 1) Complete and fully-functional working program(s) in executable form as w...
Hi there, I have 22 PDF documents. I need them all converted to .doc files because the way they were created causes them not to be pronted correct. There are som images in the PDFs all vector though. I need each PDF made into a .doc file with the images. Identical to the PDF, only I can resize the text and still see the images, and above al, print
Convert this sales letter in word doc format into HTML and CSS by hand so it looks as close as possible to the letter but cleaned up formatting. Use minimal styles in CSS. ## Deliverables 1) Complete and fully-functional working program(s) in executable form as well as complete source code of all work done. 2) Deliverables must be in ready-to-run
Hi, I need an updated version of the word doc you previously converted for me to pdf, converted and test links to ensure they work in final copy. The pdf should be secured- highest security setting in adobe (master password you can choose just tell me) so they can open without a password and print but nothing else (ie NO document extraction or copying)
...have a 73 page word doc I need compiled into an ebook using EbookGold. I will provide the software to you. In addition I need page numbers on the contents page corrected and hyperlinking put in from the contents page to the differnet parts of the document. (This is partially done already). The software requires each page of the word doc be converted to
Modified Listing Mobile Phone Project It requires a DEMO. I will not pay a deposit to anyone until I see a demo please ! Also, I have spend days making a very good detailed word doc about this new site which will be very helpfull to whoever makes this site. There are 2 files attached, they will take you to the Vid and Photo Messages so you can try it for
I need a word document brochure converted to a different software that is supported by the printing company. The document is front and back 8.5 by 11 and includes some pictures. The supported software follows: I will need a pdf of the file and the native file. This is a quick job. Preferably done within a few hours. Thanks, Jim ## Deliverables 1) PDF and Raw file of the do...
I require a word expert to make this word document stand out and look really professional. [log in to view URL] Different Font, Colours, Tables Done professionally. One tricky point is the graph - this either needs drawn manually or on a pc - a gif picture is not suitable. I also need you to add a line of type underneath this. (page9) YOU WILL BE GIVEN THE HIGHEST POSSIBLE REVIEW I CA...
I have a 6 page report that needs typing in word. Much of it is already typed (just needs cut and pasting) however I have a graph and a couple of tables that need inserting - as well as a footer on each page. You will send full report. It will be in different text and will look professional. EXCELLENT FEEDBACK AND QUICK PAYMENT. ps This is not for a 12 year old who has word skills ( I could do ...
...methodical approach from an experienced professional. Examples of reference sites / works would be appreciated. **Project overview:** 3 part project. *1:* Coding data entry forms on the front end website from existing HTML templates *2:* Developing backend database applications *3:* Developing web based exception reporting tools Brief:
I would like this project in Excel format. I have a template that needs improvement. I need it more ...
We must customize Microsoft CRM 1.2 adding some field to data entry forms, and some more features. Please look at attached doc Please do not bid if you don't have a working test installation of Microsoft CRM 1.2
Macro 1: (see attached word doc in zip file) Determine range of worksheet that contains text Remove any "Subtotals" (if worksheet has been sorted and subtotaled previously) "TM Name" column will always have an entry Delete any rows than have no entry in "Sales Document Number" column Find column heading named "Order Reason" - case sensit...
**_Custom Component Designer_** We need a custom component designer that can be used in [log masuk untuk melihat URL] 2005 at design time. (see screenshot in attached doc) The designer should be nothing more than a simple data-entry form, in witch you can define properties and attributes for these properties. The goal is that the designer (re)generates the necessary code
...thumbnail clickable to a larger image) in each line item needs a feature that will allow the client to browse their computer in order to find the picture file. as well as data entry fields. Program needs to be extremely user friendly and will need to be set up and configured on our server. Continuing support may be necessary as well. Each line needs to
...want to take that data and populate an html document that will then be emailed on command or at a predetermined time in the future (such as 2 weeks after the trip). This html doc will be used to send a confirmation email, a pre arrival letter xx days in advance, thank you email and so forth. I have said that this will be html but really I do not care
I have a paper prepared in WORD format of about 14 pages. It needs to be formatted for inclusion into a conference proceedings. The format guidelines are attached. ## Deliverables 1) Complete and fully formatted paper in format as stated. 2) All deliverables will be considered "work made for hire" under U.S. Copyright law. Buyer will receive exclusive and complete copyrights to...
We need nine demo web pages with embedded doc, xls, ppt, WMA,WMV,MP3, MPEG-2, MPEG-4 and real player and strip of menu options across the top to select between samples. Ideally each page has just play and stop buttons for content. The pages should be set so that the content cannot be saved on a right click on the playing content (don't worry about the
Hi I'm not sure what technology would be used to do this - VBA? Please confirm. I would like to automatically insert an auto-incrementing reference code into a Word document before printing. So when I print multiple copies, each copy would have its own unique reference code. The format would be REF_CURRENTDATE_AUTONUMBER, e.g. REF_030805_001, REF_030805_002, REF_030805_003 and...
... SKU Range to use: 2110 and up Starting stock level: 99 Key word list: to be provided A long 300 to 500-word description should be prepared for each product. One entry for each product with variations. Products differing only in Color, size weight, can be on one page with the variations set up as options two types of options may be used, (ie
Require server-side conversion application which accepts common office documents and converts (using FlashPaper) into SWF. Please read attached PDF. Also see (as alternative): [log in to view URL] Professionals only please.
This is a straight forward Data entry. Im have about 600 Interview Question Papers of various companies in India, in .doc or .txt format. Most of these files range from half page upto 5-6 pages (per file). The content on these doc/txt files are not formatted preoperly & may contain mails/headers of mails/email addresses or such content. You need to
We require our basic text information pack to be made into a pdf file with graphics and artwork that uses the same colour and effect scheme as our website. We must also be able to edit the pdf at any time in future.
I have a PDF document to convert to Word as follows: - original is a scanned image (not text) - it is a legal document (Shareholder's Agreement) - 8 1/2 x 11 pages - 33 pages total - 3 pages are table of contents - font looks like Times New Roman 12 - single spaced - I want a proper table of contents in the Word Document I need this done ASAP. ## Deliverables 1) ...
...that allows for the licensing of music. It will be somewhat similar to Colaborata - go to [log in to view URL] Please download our design doc. to review the project. The program essentially allows registered users to search and browse genres of music and sample music in a player. When they decide on a track they
MS Word (.doc) files to PDFs with bookmarks ??" using Java The project, called doc2PDF, takes a marked up Microsoft Word .doc file and outputs a .pdf document, readable by Acrobat 7.0+, with the texts marked up in the .doc file that are to be rendered as bookmarks in Acrobat’s left hand navigation pane. An RAD product, Display Machine, must be used
I am interested in creating a website for my document, graphic, and photographic printer network. This website will have the functionality of being able to create documents using form layout wizards, utilizing seperate frames for Addresses, Messages, and photographs. The document form and function should be very similar to that into which I am now typing here on Rent A Coder's website page ti...
...product and product information that are found that keeps the same general theme of the home page. This [log in to view URL] document at [log in to view URL] has some web sites that I like the way they look or work. This will give you some indication of what I am looking for in an ecommerce web site home page. Any questions, please
..
Very small java program and java doc. Implement the Object array sorting functionality of the [log in to view URL] class. As skeleton code, this is public class Assignment3 { public static void sort(Object[] array) {} } In order for this sorting method to be possible, you must assume that every element in the array implements the Comparable
...format is that is broken down into 21 chapters. each chapters is listed as questions first then answers. spend a few minutes looking at the .doc and it will be obvious. your goal is to get the text from the .doc to the .mdb. Look at the questions table of the mdb and it should be obvious where the data should go. For "correctanswer", this should be
I need the attached document to be web based so it can be filled out online and printed or e-mailed. It needs to be user friendly. I am using FrontPage for this particular site. ## Deliverables 1) Complete and fully-functional working program(s) in executable form as well as complete source code of all work done. 2) Deliverables must be in ready-to-run condition, as follows (depending ...
Need a professional with experience in converting doc,xls,ppt document into PDF document
...([log in to view URL]) a. username/password login b. news c. random quotes d. 2. my Info 3. my Projects 4. my Gallary a. categories based 5. my Resume a. downloaded able doc or pfd b. dynamic text from database 6. my Message Board (Bulletin Board) a. Messages posted by my sites registered users 7. my Calendar a. will display upcoming events
...forms are designed to be written on, but that's silly and outdated - we're automating that process. So, every time you see "Borrower Name" (or just "Name"), you put in a text-entry-field with appropriate font size and a correctly named field. THE PARSER REPLACES THE "DEFAULT VALUE" of the field, so THAT is the portion that must be named correc...
...the documents of Microsoft Word (.doc files) without use of OLE/COM objects or components from Microsoft and to visualize the information from a .doc file with help of COM objects. Requirements: 1. Do not use OLE/COM objects or components from Microsoft for a reading the .doc files. 2. The program must read a .doc file created in Microsoft Word 95 | https://www.my.freelancer.com/job-search/doc-entry/129/ | CC-MAIN-2019-13 | refinedweb | 2,611 | 74.19 |
Get Extension Of File In Java
In this tutorial, we will learn how to get the extension of any file through a Java program easily. In Java, there is no built-in API to find the extension of a file, but you can do this with the below program.
Get Extension Of A File In Java
File-extension checks come up regularly when dealing with file input-output tasks.

As already mentioned, there is no API in Java to get the extension of a file, but with a little code we can find the file extension ourselves.
If you don’t know how to get the file extension then you are at the right place.
At first, we have to import a File package to deal with the file input-output stuff. We can add it by simply writing:-
import java.io.File;
The below code will help you to achieve your goal:-
import java.io.File;

public class GetFileExtension {
    public static void main(String[] args) {
        File file = new File("Example.txt");
        File file2 = new File("Example2.png");
        String extension = "";
        String extension2 = "";
        try {
            if (file != null) {
                String name = file.getName();
                extension = name.substring(name.lastIndexOf("."));
            }
        } catch (Exception e) {
            extension = "";
        }
        try {
            if (file2 != null) {
                String name2 = file2.getName();
                extension2 = name2.substring(name2.lastIndexOf(".")); // this will give the extension of a file
            }
        } catch (Exception e) {
            extension2 = "";
        }
        System.out.println("*****OUTPUT OF THE ABOVE CODE*****");
        System.out.println("Extension of a file 1 is" + extension);
        System.out.println("Extension of second file is" + extension2);
    }
}
You can identify any file extension through the above code.
The output of the following code will give the result:-
*****OUTPUT OF THE ABOVE CODE***** Extension of a file 1 is.txt Extension of second file is.png
Who are we ?
We are a team of two students in Engineering School (Polytech Sorbonne) in Paris. We decided, for our project, to help bees that are essential for our planet.
What is our project about?
How does it work?
Prerequisites
Have knowledge in
- Microcontroller programming (C)
- How to read a datasheet and extract the most important data
- UART
- How to design a PCB
- Electronics
How to send data using Sigfox:
- AT Command
- How many messages can be sent per day?
- How to build the message? (JSON language)
To start well
- Draw up a list of all the needed components (see the Things section):
- Battery and regulator for powering the project
- A timer for a very low consumption
- The sensor to measure the temperature and humidity
- A microcontroller, "brain" of the project
- A Sigfox module to send all the data to the network
- An IC to charge the battery via the STM32 nucleo USB
- Carry out a mapping of all the pins used for the project, depending on the technologies used to interpret the data
How to make it work?
Programming the microcontroller
To program the microcontroller, you can use the Mbed Compiler.
Programming the microcontroller STM32 NUCLEO-L432KC in terms of:
- Retrieving data from the sensors
- Converting the data to send using Sigfox
- Configuring the sleep mode of the microcontroller when the Sigfox module is not sending messages
- Libraries to use
#include "mbed.h" #include "mbed_dht.h"
- Pinout assignment
DigitalOut myled(LED1);
DigitalOut done(D2);
AnalogIn battery(A0);
DHT sensor(D4, DHT22);
Serial pc(SERIAL_TX, SERIAL_RX);
Serial Sigfox(D1, D0);
- Main
Main loop:
while(1) {
    myled = 1;
    done = 0;
    err = sensor.readData();

    /* ************************* */
    /* setting the battery level */
    /* ************************* */
    tempbatt = (2.1) - (battery * 3.3);
    tempbatt2 = 0.5 - tempbatt;
    niveaubatterie = (tempbatt2 * 100) / 0.5;
    nb = (unsigned char)niveaubatterie;

    /* ********************* */
    /* case without mistakes */
    /* ********************* */
    if (err == 0) {
        // manage the variables of temperature and humidity
        temperature = sensor.ReadTemperature(CELCIUS);
        humidite = sensor.ReadHumidity();
        tAV = (int)temperature;
        hAV = (int)humidite;

        // send data to Sigfox
        Sigfox.printf("AT$SF=%02x%02x%02x\n", tAV, hAV, nb);
        wait(10);
        done = 1;
    }
    else
        /* ****************** */
        /* case with mistakes */
        /* ****************** */
        pc.printf("\r\n\r\nErreur %i \r\n", err);

    myled = 0;
    wait(10);
    myled = 1;
    wait(1);
}
Configurate Sigfox module
Initialize the Sigfox module
The BRKWS01 board is the module used to send data to the Sigfox cloud. Sigfox is an IoT network created in France; it works around the world.
The BRKWS01 board requires at least +3.3v, ground and Tx Rx connections.
- 140 messages per day

Next, you can test your module with PuTTY over a UART connection. Connect the system in serial mode; the module is controlled with serial AT commands sent on the TX/RX pins.

Serial communication: 9600 baud, 8 bits
Below is the communication specification and the AT commands to use.
Once the Sigfox module is connected to PuTTY, enter the command "AT" in the console; you should receive "OK" if everything is normal.
You can consult this link for more explanation.
Then create a Sigfox account and fill in the information about the BRKWS01.

First, you can test the connection to the Sigfox network by configuring a callback to an email address.

Send a message from PuTTY, for example "AT$SF=0101", and you will receive it by mail.
How to link Sigfox and Ubidots
- First, you have to configure the Sigfox callback. We followed this tutorial to structure the Sigfox message in the callback configuration. The format is JSON (JavaScript Object Notation).
Three variables are declared:

- The temperature: "tAV::int:8", an 8-bit integer named tAV.
- The humidity: "hAV::int:8", an 8-bit integer named hAV.
- The battery charge level: "niveaubatterie::uint:8", an 8-bit unsigned integer named niveaubatterie.
This is the data sent by the Sigfox BRKWS01, as received on the backend:
- Then you have to create a Ubidots for Education account. The token must be entered in the Sigfox callback. When the body of your callback is completed, the variables will appear on Ubidots.
Above you can see the test results in real conditions; we placed our product in a beehive for 48 hours.
- You can create SMS or Mail alerts from ubidots, to be notified in case of critical data.
In our case, the temperature and the humidity should not drop below 30°C and 60%; otherwise a mail is sent to warn us that there is a problem in the beehive. Moreover, the battery should not go below 15% charge.
Make the PCB
- Test the circuit via breadboard
- Design the PCB via an electronic design App like Altium Designer, make it done and solder all the components on it.
Consumption balance
After measurements, the circuit consumes:
- 200uA on "shutdown" mode
- 90mA during data transmission by the BRKWS01 module (approx. 6 sec)
- 20 mA during measurement by DHT22 module
The low-consumption TPL5110 timer allows the circuit to be powered up at regular intervals to take a measurement; the regulator then returns to "shutdown" mode as soon as the transmission ends without errors, thanks to its "done" input driven by the microcontroller.
The system allows a battery life of about 112 days.
The timer turns on the system every 40 min.
Consumption chart :
Make a case for the measuring system
- First, take the measurements of the system.
- Next, use CAD software; we used SolidWorks.
- The box must not exceed a certain size, so it fits in the hive without disrupting the bees' life.
- The box must have an easy opening.
- Holes on the sides allow a flow of ambient air for measurements.
- Provide a hive attachment.
Once the plan is complete, you can print your box with a 3D printer. | https://www.hackster.io/156305/beewatched-the-connected-beehive-monitoring-box-dbf178 | CC-MAIN-2019-51 | refinedweb | 952 | 63.39 |
Hello, I am trying to debug my perl code under mod_perl and I have
followed all the instructions in this section

I managed to get my perl debugging console prompt, but this prompt
is only shown by
tail -f /var/log/apache2.log
DB<1> ModPerl::RegistryCooker::default_handler(/usr/local/lib/perl/5.8.8/ModPerl/RegistryCooker.pm:172):
172: return ($rc == Apache2::Const::OK && $old_status != $new_status)
173: ? $new_status
174: : $rc;
DB<1>
I was thrilled for a while that it did appear, but then it's just
output, not input. Where am I supposed to see and use the debugger?
Thanks. | http://mail-archives.apache.org/mod_mbox/perl-modperl/200805.mbox/%3Cb4a0c4930805271038j1a74b947idd2ad38ceefc3da9@mail.gmail.com%3E | CC-MAIN-2019-39 | refinedweb | 104 | 54.63 |
Checking Submissions the Boss Way using Celery and DRF
Documentation
The full documentation is at.
Quickstart
Install answerdiff:
pip install django-answerdiff
Then use it in a project:
import answerdiff
Features
- TODO
Running Tests
Does the code actually work?
source <YOURVIRTUALENV>/bin/activate
(myenv) $ pip install -r requirements-text.txt
(myenv) $ python runtests.py
Credits
Tools used in rendering this package:
History
0.1.0 (2015-10-02)
- First release on PyPI.
Whats the best approach to check if multiple bits are set with bitset<8>? For example &-operation with uint8_t variable ? (bitsetvariable & flag) == 0 or is something like this possible bitsetvariable.test(std::bitset<8>(flag)) ? Source: Windows Que..
Category : bitset
I have a bitset of any size and I would like to know the fastest way to get a list of 64 bits bitsets from my original bitset ? For example, from bitset<10000> b(‘010001110 …’), I would like to get a list of 64 bits bitsets containing the 1st 64th bits, then the next 64th ..
Following is the code:

int a = -1991287698;
int* ptr = &a;
short* p1 = (short*)ptr; // or reinterpret_cast<short*>(ptr)
cout << ptr << " " << *ptr << endl;
cout << p1 << " " << *p1 << endl;
p1++;
cout << p1 << " " << *p1 << endl;

I'm getting output as: 0x61fef8 ..
I was able to compress a string contained inside a text file as follows: string data = path; int num = data.size() % 8; for (int i=0; i < num; i++) { data += "0"; } stringstream sstream(data); string output; while (sstream.good()) { std::bitset<8> bits; sstream >> bits; auto c = char(bits.to_ulong()); output += c; ..
I have difficulties finding out why my bitset takes up 29MB of memory when writing it to disk. I.e. the file that is written out is 29MB large. I write out the bitset in the following way: #include <iostream> #include <string> #include <fstream> int main() { std::bitset<29621645> a; std::ofstream out; out.open("myfile"); out << a; out.close(); ..
I want to have a bitset constexpr variable in my program. bitset can have unsigned long long value as a constructor which is 64bit value, I need 100 bit value. As per this Q&A, we can use constructor that takes a string as an argument and initialize it that way, but it won’t be constexpr ..
I am implementing a bloom filter with help of bitsets in c++ for finding out malicious URLs. I have a bitset of 100 bits and a simple hash function. But still I get this error. #include<bits/stdc++.h> using namespace std; typedef long long int ll; #define m 100 #define k 1 ll hash1(string str) { ll ..
I am working with bitsets std::bitset<29621645> (i.e. with sizes of 29621645 bits) in C++. Each bitset fills around 29MB, which is a lot. I fell over a library called sdsl which have their own bitset called sdsl::bit_vector. I was amazed over the low size with same number of bits as above. When I instantiate sdsl::bit_vector ..
Im currently trying to declare an array of 17 std::bitsets, each 32 bits long. I’m doing it like this: std::bitset<32> mTestInstruction[17] { std::string("01000000001000000000000000000001"), std::string("01000000011000000000000001100011"), std::string("01000000101000000000000000000001"), std::string("10100000000000000000000000001010"), std::string("00000000100000010000000010000010"), std::string("00000000110001010010000000000001"), std::string("01001000111001010000000000000000"), std::string("01000100001000110000000000000011"), std::string("01000000001000010000000000000001"), std::string("10000000000000000000000000000011"), std::string("00000000010000000000000000000001"), std::string("00000000111000000000000000000001"), std::string("00000000111001110000100000000001"), std::string("01000000010000100000000000000001"), std::string("01000100001000100000000000000010"), std::string("10000000000000000000000000001100"), std::string("11100000000000000000000000001000"), }; And I’m receiving the following error: error: could not convert ‘std::__cxx11::basic_string<char>(((const char*)"01000000001000000000000000000001"), std::allocator<char>())’ from ‘std::__cxx11::string ..
Generating binary representation of numbers from 0 to 255. This is causing segmentation fault. Kindly enlighten. vector<bitset<7>> vb; for (i = 0; i < 256; i++) { bitset<7> b(i); vb[i] = b; } //print for(i=0;i<256;i++){ cout<<vb[i]<<"n"; Source: Windows Que..
cosh - hyperbolic cosine function
#include <math.h>

double cosh(double x);
The cosh() function computes the hyperbolic cosine of x.
An application wishing to check for error situations should set errno to 0 before calling cosh(). If errno is non-zero on return, or the returned value is NaN, an error has occurred.
Upon successful completion, cosh() returns the hyperbolic cosine of x.
If the result would cause an overflow, HUGE_VAL is returned and errno is set to [ERANGE].
If x is NaN, NaN is returned and errno may be set to [EDOM].
The cosh() function will fail if:
- [ERANGE]
- The result would cause an overflow.
The cosh() function may fail if:
- [EDOM]
- The value of x is NaN.
No other errors will occur.
acosh(), isnan(), sinh(), tanh(), <math.h>.
Derived from Issue 1 of the SVID. | http://www.opengroup.org/onlinepubs/007908799/xsh/cosh.html | crawl-002 | refinedweb | 140 | 76.93 |
delta_time 1.0.2
A simple library for keeping track of the delta time in games
To use this package, run the following command in your project's root directory:

dub add delta_time

Manual usage

Put the following dependency into your project's dependencies section:

"delta_time": "~>1.0.2"
delta_time
A simple library for keeping track of the delta time in games.
This library will give you the delta time in a double floating point number.
Here is a small example on how to use it:
import std.stdio;
import delta_time;

void main() {
    bool running = true;
    int count = 0;

    // This is the game loop
    while (running) {
        // The delta must be calculated first
        calculateDelta();

        // Do some random work as an example
        int w = 0;
        for (int i = 1; i < 2_000_000; i++) {
            w = w * i;
        }

        count++;
        if (count > 10_000) {
            running = false;
        }

        // Now we can just grab the delta anywhere in the project
        double delta = getDelta();
        writeln("The delta is: ", delta);
    }
}
- Registered by jordan4ibanez
- 1.0.2 released 3 days ago
- jordan4ibanez/delta_time
- MIT
- Authors:
- Dependencies:
- none
- Versions:
- Show all 4 versions
- Download Stats:
0 downloads today
2 downloads this week
2 downloads this month
2 downloads total
- Score:
- 0.3
- Short URL:
- delta_time.dub.pm | https://code.dlang.org/packages/delta_time | CC-MAIN-2022-33 | refinedweb | 196 | 57.23 |
HOUSTON (ICIS)--Here is Friday's end of day markets summary:
CRUDE: Jul WTI: $104.35/bbl, up 61 cents; Jul Brent: $110.54/bbl, up 18 cents
NYMEX WTI crude futures finished up on pre-long-weekend buying, boosted by upbeat economic data from China and the US driving the stock market higher. A stronger dollar eventually helped cap the rally, with WTI topping out at $104.50/bbl before retreating.
RBOB: Jun $3.0235/gal, up 1.77 cents/gal
Reformulated blendstock for oxygen blending (RBOB) gasoline futures settled higher on stronger crude and pre-weekend buying ahead of the long holiday weekend.
NATURAL GAS: Jun $4.405/MMBtu, up 4.6 cents
The front month contract on the NYMEX natural gas futures market closed the trading week back above the $4.40/MMBtu threshold after a 1% rise ate into the losses recorded earlier this week. Weather forecasts predict warmer than average temperatures across the US for much of the next two weeks.
ETHANE: higher at 28.75 cents/gal
Ethane spot prices moved slightly higher with natural gas futures.
AROMATICS: toluene flat at $3.50-3.60/gal, mixed xylenes flat at $3.50-3.60/gal
Activity was thin for US aromatics the fifth consecutive day in a row this week. There were no fresh trades heard during the day. Many trade participants were out of the market ahead of the extended holiday weekend.
OLEFINS: ethylene lower at 52.0-55.5 cents/lb, PGP bid higher at 66.5 cents/lb
US May ethylene bid/offer levels fell slightly to 52.00-55.50 cents/lb from 53.50-56.25 cents/lb on Friday as market activity was thin. US May polymer-grade propylene (PGP) was bid higher at 66.5 cents/lb from 64.0 cents/lb, while offers were steady at 68.0 cents/lb.
For more pricing intelligence please visit | http://www.icis.com/resources/news/2014/05/23/9784237/evening-snapshot-americas-markets-summary/ | CC-MAIN-2015-32 | refinedweb | 320 | 70.9 |
Library Interfaces and Headers
uio.h - definitions for vector I/O operations
#include <sys/uio.h>
The <sys/uio.h> header defines the iovec structure, which includes the following members:
void   *iov_base  /* base address of a memory region for input or output */
size_t  iov_len   /* size of the memory pointed to by iov_base */
The <sys/uio.h> header uses the iovec structure for scatter/gather I/O.
The ssize_t and size_t types are defined as described in <sys/types.h>.
The symbol {IOV_MAX} defined in <limits.h> should always be used to learn about the limits on the number of scatter/gather elements that can be processed in one call, instead of assuming a fixed value.
See attributes(5) for descriptions of the following attributes:
read(2), write(2), limits.h(3HEAD), types.h(3HEAD), attributes(5), standards(5) | http://docs.oracle.com/cd/E26505_01/html/816-5173/uio.h-3head.html | CC-MAIN-2016-07 | refinedweb | 136 | 58.58 |
In this part, we're going to add a setting page so that we can change the RSS feed that our Data Source downloads.
In current versions of the Reports Module, Data Source settings are stored in the ModuleSettings table provided by DotNetNuke. This ensures that all copies of the same module instance will share the same Data Source settings (Visualizer settings are not shared between copies). However, Data Source developers need not (and should not) access these directly. Instead, the Reports Module provides a layer on top of the ModuleSettings table called "Data Source Settings". When the Data Source is executed, the Reports Module passes a Dictionary containing these settings (inside the ReportInfo object). To allow users to edit the settings, we go back to the Settings.ascx file we created in Part I.
Let's start with the user interface for our settings page. All we need for now is a place for the user to enter a feed URL. When we're done, it should look like the screenshot below
Figure 1 - The finished settings UI
So, open up the Settings.ascx file and make sure you are in Design mode. First we need to add a label, so users know what they should type in our text box. In the Solution Explorer, find the file: controls/labelcontrol.ascx and drag it on to the design surface.
Figure 2 - Locating the LabelControl.ascx file
Next drag an ASP.Net TextBox control, from the Toolbox, on to the surface. You should have something that looks like this:
Figure 3 - Locating the LabelControl.ascx file
Now, set the properties of the controls to the following values:
Save your changes, and open up the code file: Settings.ascx.vb. Here, we need to add code to connect our text box to the Data Source settings. The key to this is the LoadSettings and SaveSettings methods provided by ReportsSettingsBase. First, we need to import some extra namespaces:
Imports DotNetNuke.Modules.Reports
Imports System.Collections.Generic
Then, we can implement the LoadSettings and SaveSettings methods.
Public Overrides Sub LoadSettings(ByVal Settings As Dictionary(Of String, String))
    MyBase.LoadSettings(Settings)
    feedUrlTextBox.Text = SettingsUtil.GetDictionarySetting(Settings, _
                                                            "FeedUrl", _
                                                            String.Empty)
End Sub

Public Overrides Sub SaveSettings(ByVal Settings As Dictionary(Of String, String))
    MyBase.SaveSettings(Settings)
    Settings("FeedUrl") = feedUrlTextBox.Text
End Sub
The SaveSettings method is also provided with the current settings. However, this method is responsible for retrieving the values entered by the user and updating the settings.
Save the file, and navigate to the Settings page on your test module. You should see our new Settings control displayed, as in the screenshot below:
Figure 4 - Settings page so far
Oops, we still need to put the label text in! We're going to support the DotNetNuke Localization Framework, so we need to put the text in two places.
First, lets add the localized text. Create a Resource File called Settings.ascx.resx in /DesktopModules/Reports/DataSources/RSS/App_LocalResources file and open it. In Visual Studio, you get a nice table interface for editing resource strings. Add the following entries to the table (feel free to tweak the values as you want, just keep the name the same):
Your resource file should look like this after making those changes
Figure 5 - Resource File after entering values
Now, go back to Settings.ascx and set the following properties on the label we created earlier:
Note: That's a colon (':') in the Suffix property. Also, CssClass is case sensitive.
Now, save and refresh your page. You should see something like the screen shot below:
Figure 6 - The finished Feed URL text box
Now, to change our Data Source code to use this new setting. Let's go back to the /App_Code/RSSDataSource/RSSDataSource.vb file and take a look at the signature for the ExecuteReport method:
Public Overrides Function ExecuteReport(ByVal report As ReportInfo, _
ByVal hostModule As PortalModuleBase, _
ByVal inputParameters As IDictionary(Of String, Object)) As System.Data.DataView
The important parameter here is the report parameter. There is a property called DataSourceSettings on that object which contains the same dictionary we created in SaveSettings. First, delete the FeedUrl constant we were using before. Then add following code to the beginning of the ExecuteReport method to get the Feed URL from the settings:
If Not report.DataSourceSettings.ContainsKey("FeedUrl") Then
    Throw New RequiredSettingMissingException("FeedUrl", MyBase.ExtensionContext)
End If
Dim feedUrl As Uri = Nothing
If Not Uri.TryCreate(report.DataSourceSettings("FeedUrl"), UriKind.Absolute, feedUrl) Then
    Throw New RequiredSettingMissingException("FeedUrl", MyBase.ExtensionContext)
End If
This code checks for the setting, and if it isn't present or if it isn't a valid URL we throw an exception provided by the Reports Module: RequiredSettingMissingException. We pass it the name of our setting and some contextual information about our Data Source (which is provided automatically by our base class. If the setting isn't present, the Reports Module will automatically display a useful error message indicating that the setting is missing.
While we're in the code, let's add HTML Decoding directly to the Data Source, so we don't have to use the HTML Decode converter in the Module Settings. To do that, we change the line that adds entries to the output table so that it automatically HTML Decodes the description:
dt.Rows.Add(title, New Uri(link), HttpUtility.HtmlDecode(description))
Make sure everything is saved and go back to the website. You should probably go back to the home page, just to make sure everything is properly recompiled. Go back to the settings page for the Reports Module, and make sure the RSS Data Source is selected. Then configure it with your favourite RSS feed. This time, I'll use my personal blog's RSS feed (WARNING: Shameless plug alert!).
Figure 7 - Testing the Settings UI
Once you've done that, make sure the HTML Decode property is set and the HTML Visualizer is properly configured, just like in part 2 and click Update. You should see the RSS feed displayed just like in Part 2, only now we can change the URL!
Figure 8 - The Final Results
At this point, we have a working RSS Data Source! You can stop here if you want, but in the next part I'll cover packaging the Data Source up so that it can be installed in any Reports Module installation.. | https://www.dnnsoftware.com/community-blog/cid/136335 | CC-MAIN-2019-35 | refinedweb | 1,117 | 56.15 |
One tool for storing data browser-side we might reach for is local storage. In this post, we'll use local storage in React by rolling our own useLocalStorage hook.
If you enjoy this tutorial, please give it a 💓, 🦄, or 🔖 and consider:
- subscribing to my free YouTube dev channel
Our Approach
To approach this problem, let's break it down into pieces.
- Provide a local storage key. Local storage works off of key-value pairs, so we'll want to be able to provide a key for our stored data.
- Provide a default value. If there's no existing data in local storage under the provided key, we'll want to be able to provide a defaultValue for our data.
- Load the local storage value into state (or default if no local storage value exists). We'll still be maintaining stateful information in our app, so we can still use the useState hook. The difference here is we'll use the local storage value if it exists before we consider the user-provided defaultValue.
- Save the stateful data to local storage. When our stateful data changes, we'll want to make sure local storage is kept up to date. Therefore, on any change to our variable, let's run an effect to sync up local storage.
- Expose the state variable and a setter. Much like the useState hook, our useLocalStorage hook will return a 2-element array. The first element will be the variable and the second will be a setter for that variable.
Creating the Hook
Let's create the hook! As noted above, the hook will take two inputs: the key that will be used in localStorage and the defaultValue, which will be used in the event that there's nothing in localStorage yet.
useLocalStorage.js
export const useLocalStorage = (key, defaultValue) => {};
Next up, let's load any data in localStorage under the provided key.
export const useLocalStorage = (key, defaultValue) => { const stored = localStorage.getItem(key); };
Now we know that the initial value for our stateful variable is going to be this stored value. However, if there's nothing in localStorage yet under the provided key, we'll default to the user-provided defaultValue.

Note: since localStorage data are stored as strings, we make sure to JSON.parse any data we retrieve from there.
export const useLocalStorage = (key, defaultValue) => { const stored = localStorage.getItem(key); const initial = stored ? JSON.parse(stored) : defaultValue; };
Now that we have our initial value for state, we can use our regular useState hook format and return our stateful variable and its setter.
import { useState } from 'react'; export const useLocalStorage = (key, defaultValue) => { const stored = localStorage.getItem(key); const initial = stored ? JSON.parse(stored) : defaultValue; const [value, setValue] = useState(initial); return [value, setValue]; };
Almost done! We still have one outstanding requirement we haven't met yet: we need to save any data back to localStorage when it's changed. I like doing this in a useEffect hook that's triggered when value changes.
import { useState, useEffect } from 'react'; export const useLocalStorage = (key, defaultValue) => { const stored = localStorage.getItem(key); const initial = stored ? JSON.parse(stored) : defaultValue; const [value, setValue] = useState(initial); useEffect(() => { localStorage.setItem(key, JSON.stringify(value)); }, [key, value]); return [value, setValue]; };
There we have it! Whenever value changes, our effect will run, meaning we'll set the localStorage item to the JSON.stringify of our value. Note that the provided key is also a dependency of our effect, so we include it in the dependency array for completeness even though we don't really expect it to change.
Testing Out Our New Hook
Let's take the hook out for a test drive! We'll create a simple component that has a text input whose value is based on our useLocalStorage hook.
App.jsx
import React from 'react'; import { useLocalStorage } from './useLocalStorage'; function App() { const [name, setName] = useLocalStorage('username', 'John'); return ( <input value={name} onChange={e => { setName(e.target.value); }} /> ); } export default App;
Now let's run our app. We can see that, when we first run the app, our stateful name variable is defaulted to the string John. However, when we change the value and then refresh the page, we're now defaulting to the value persisted to localStorage.
Success!
Discussion
Now time to implement something like this with IndexedDB
Nice post, I'll implement this on a Chrome extension, for settings state management. | https://practicaldev-herokuapp-com.global.ssl.fastly.net/nas5w/using-local-storage-in-react-with-your-own-custom-uselocalstorage-hook-45eo | CC-MAIN-2021-04 | refinedweb | 726 | 57.06 |
Text.ProtocolBuffers.ProtoCompile.MakeReflections
Description
The
MakeReflections module takes the
FileDescriptorProto
output from
Resolve and produces a
ProtoInfo from
Reflections. This also takes a Haskell module prefix and the
proto's package namespace as input. The output is suitable
for passing to the
Gen module to produce the files.
This achieves several things: It moves the data from a nested tree to flat lists and maps. It moves the group information from the parent Descriptor to the actual Descriptor. It moves the data out of Maybe types. It converts Utf8 to String. Keys known to extend a Descriptor are listed in that Descriptor.
In building the reflection info new things are computed. It changes
dotted names to ProtoName using the translator from
makeNameMaps. It parses the default value from the ByteString to
a Haskell type. For fields, the value of the tag on the wire is
computed and so is its size on the wire.
Documentation
makeProtoInfo :: (Bool, Bool) -> NameMap -> FileDescriptorProto -> ProtoInfo
31st July 2017
Phaser has been split into 3 major versions:
This is the last official release of Phaser 2. If you're here for the first time then we would recommend you start here.
Phaser CE (Community Edition)
In November 2016 we handed ownership of Phaser 2 over to the open source community so we could work freely on Phaser 3. Since then the community has added many enhancements, new features and fixed loads of bugs. This build is called Phaser CE (Community Edition) and at the time of writing is at version 2.8.3.
Phaser 3 is the next generation of the Phaser game framework. We have been working hard on it all year and have a development roadmap in place and regular dev logs. We've recently released the first Alpha build, so if you've time to help us test that would be appreciated! It is not yet production ready, but gets closer with every build.
For the latest information visit the Phaser web site, where we cover all three versions. Subscribe to Phaser World, our weekly newsletter, for the latest news, tutorials and development updates on both Phaser 3 and Phaser CE.
Want something more social? Then you can follow us on Twitter and chat with fellow Phaser developers in our Slack and Discord channels.
There are now more ways than ever to help support development of Phaser. The uptake so far has been fantastic, but this is an on-going mission. Thank you to everyone who supports our development, who shares our belief in the future of HTML5 gaming, and Phasers role in that.
Happy coding everyone!
Cheers,
Rich - @photonstorm
Developing Phaser takes a lot of time, effort and money. There are monthly running costs; such as hosting and services. As well as countless hours of development time, community support, and assistance resolving issues. We do this out of our love for Phaser,.
Extra special thanks to our top-tier sponsors: Orange Games, Zenva Academy and CrossInstall.
If you would like to sponsor Phaser then please get in touch. We have sponsorship options available on our GitHub repo, web site, and newsletter. All of which receive tens of thousands of eyeballs per day.
Every Monday we publish the Phaser World newsletter. It's packed full of the latest Phaser games, tutorials, videos, meet-ups, talks, and more. It also contains our weekly Development Progress updates, where you can read about what new features we've been working on.
Previous editions can found on our Back Issues page.
All Phaser versions are hosted on Github. You can:
- Clone the git repository via https, ssh or with the Github Windows or Mac clients.
- Download as zip or tar.gz
- Download just the build files: phaser.js and phaser.min.js
You can also get Phaser via Bower, npm and CDN. Please see the README files for the version you need for further details:folder of the repository. There are both plain and minified versions. The plain version is for use during development, and the minified version for production. You can also create your own builds.
Custom Builds
Phaser 2 2 was never written to be modular. Everything exists under one single global namespace, and you cannot
requireselected parts of it into your builds. It expects 3 global vars to exist in order to work properly:
Phaser,
PIXI 1 supported in Phaser 3.
Webpack
Starting from Phaser 2.4.5 we included a custom build for Webpack.
You need to add
p2as a dependency.
Webpack Config'); 2 from source you can take advantage of the provided Grunt scripts. Ensure you have the required packages by changing to the
v2or
v2-communityfolder, and running
npm installfirst.
Run
gruntto perform a default build to the
distfolder.folder.folder. They are for TypeScript 1.4+.
We have always been meticulous in recording changes to the Phaser code base, and where relevant, giving attribution to those in the community who helped with the change. You can find comprehensive Change Logs for all versions:
The Contributors Guide contains full details on how to help with Phaser development. The main points are:
Found a bug? Report it on GitHub Issues and include a code sample. Please state which version of Phaser you are using! This is vitally important.
Pull Requests can now be made against the
masterbranch (for years we only accepted PRs against the
devbranch, but with the release of Phaser CE we've relaxed this policy)
Before submitting a Pull Request run your code through JSHint using our 2017 Photon Storm Limited.
"Above all, video games are meant to be just one thing: fun. Fun for everyone." - Satoru Iwata | https://www.javascripting.com/view/phaser | CC-MAIN-2017-39 | refinedweb | 784 | 66.64 |
How to store information in the Journal using Mono
The aim of this lab is to discover how to store information in the Sugar journal. In this lab we will take the activity from the previous lab (Lab 1) and add to it a text entry that we'll save in the journal at the end of application.
Step 1: Add an entry field
Let's take the activity in the previous lab. If need you could retrieve the source code in the directory "/home/user/LabSource/lab1/".
First, we will add an entry field on the screen.
- Launch MonoDevelop environment (Applications/Programming/MonoDevelop)
- Open solution from lab1 (File/Recent Solutions/LabActivity)
- Open the file MainWindow.cs
- Add an Entry field as an instance variable of the class, and initialize this field in the class constructor. New lines to add are commented below.
Note that the following source code could be retrieve at "/home/user/LabSource/lab2/step1".
public class MainWindow : Sugar.Window { public new static string activityId=""; public new static string bundleId=""; Entry _entry; //(); // Added _entry.Text = ""; // Added vbox.Add(_entry); // Added Button _button = new Button(); _button.Label = "Quit"; _button.Clicked += new EventHandler(OnClick); vbox.Add(_button); this.Add(vbox); ShowAll(); }
Build and launch the application using "Project/Run", you should see this:
Step 2: Retrieve the id parameter
When an activity start from the journal, it receive one more parameter: its context. This parameter should be retrieved by the activity. This id is the identifier value of the corresponding entry in the DataStore.
So, we're going to update our application to process this new parameter.
First, we'll add it in the class. The new line to add is commented below:
public class MainWindow : Sugar.Window { public new static string activityId=""; public new static string bundleId=""; public static string objectId=""; // Added Entry _entry; ...
Let's now update the entry point of the activity to process and store the "objectid" parameter.
public static void Main(string[] args) { System.Console.Out.WriteLine("Lab Activity for OLPC"); if (args.Length>0) { IEnumerator en= args.GetEnumerator(); while (en.MoveNext()) { if (en.Current.ToString().Equals("-sugarActivityId")) { if (en.MoveNext()) { activityId=en.Current.ToString(); } } else if (en.Current.ToString().Equals("-sugarBundleId")) { if (en.MoveNext()) { bundleId=en.Current.ToString(); } } else if (en.Current.ToString().Equals("-objectId")) { // Added if (en.MoveNext()) { // Added objectId=en.Current.ToString(); // Added } // Added } // Added } } Application.Init(); new MainWindow(activityId, bundleId); Application.Run(); }
- Launch the build using "Project/Build Solution"
- You don't have any error at the end of the build. Don't run the activity yet.
Step 3: A new file type
To store the context of the application, we 're going to declare a new file type. All should be done in the "activity.info" file where all properties for the application.
You probably remind that this file is in the "LabActivity.activity/activity" directory. The property "mime_types" should be set.
[Activity] name = LabActivity activity_version = 1 host_version = 1 service_name = org.olpcfrance.LabActivity icon = activity-labactivity exec = labactivity-activity mime_types = application/vnd.labactivity
This property allow you to say which file types are usable for the activity. Here we set a new file type named "application/vnd.labactivity".
Step 4: Store activity context
We could now store the context for the activity. With Sugar, everything is done to avoid users (mostly children) thinking to save what they currently done. So, most often, the activity context is stored automatically, usually at the end of the activity. It's exactly what we're going to do now.
All these operations need to handle the "Datastore". The Datastore is the file system of Sugar. The DataStore provide an isolated means of storage for each activity. The Datastore saves both physical data and property to describe the data.
See for more information on DataStore.
Let's start by adding two namespace declaration that we need:
using System.IO; using Mono.Unix;
Here is the SaveFile method to add:
void SaveFile() { // 1) Get path for the instance String tmpDir=System.Environment.GetEnvironmentVariable("SUGAR_ACTIVITY_ROOT"); if (tmpDir==null) tmpDir = "./"; else tmpDir += "/instance"; // 2) Write the file with our context UnixFileInfo t = new UnixFileInfo(tmpDir+"/labactivity.dta"); StreamWriter sw = new StreamWriter(t.FullName); sw.WriteLine(_entry.Text); sw.Close(); // 3) Create the Datastore object DSObject dsobject=Datastore.Create(); dsobject.FilePath=t.FullName; dsobject.Metadata.setItem("title","LabActivity"); dsobject.Metadata.setItem("mime_type","application/vnd.labactivity"); byte[] preview=this.getScreenShot(); dsobject.Metadata.setItem("preview",preview); // 4) Write the object to the Datastore Datastore.write(dsobject); }
First, we need to retrieve the directory for the instance of the current activity. This directory could be retrieved in the value of the SUGAR_ACTIVITY_ROOT environment variable. The specific case of the null value is processed because the SUGAR_ACTIVITY_ROOT is not set by the Sugar emulator.
In the second step, the context is stored in a file created in the instance directory. Here, our context is just the content of the entry field. All file handling uses standard .NET features of StreamWriter.
Then, we will create a new object in the Datastore using Sugar library. This command will add a new entry in the Journal. We set to this object the file path for the created file. Then we set all properties for this object:
- title is the title for the activity. We're using the name of the activity but lot of activities allow user to custom this title,
- mime_type is the MIME type for this entry, we're using here the MIME type defined in the "activity.info" file to allow Sugar to match it to the activity runtime,
- preview is the image displayed in the detailed view of the Journal. The Sugar library allow us to set directly it with a screen capture. However, any image could be set here.
Finally, we just store the new object in the Datastore using "write" method.
Now, we need to call this method. As mentioned before, we will call it when the window is closed, just before the end of the activity (show the commented line below).
void OnClick(object sender, EventArgs a) { SaveFile(); // Added Application.Quit(); }
Launch build using Project/Build Solution" Then let's deploy the activity using the "./deploy" script. If asked, type the password "user". We're now starting Sugar emulator from the desktop.
Click on the activity's icon (the square) to run it, you should see the new entry field. Type a value in the field.
Close the application using "Quit" button then run the Journal by a click on its icon.
A new entry will appear in the Journal for the activity:
Click on the arrow to the right to see the detailed view and all the properties of this entry. You could see the screen capture that we set previously. If you've got good eyes, you could see the value you typed for the field.
Let's launch the activity from the Journal using the launch arrow to the upper left. Here what you could see:
The activity works but the entry field is not set. It's exactly what we expected: the next step in the lab will set the right value here.
Quit the activity using the "Quit" button then quit the emulator, for example using the shortcut "ALT+Q".
Step 5: Retrieve the context
As we saw at step 2, when the activity is launch from the Journal, it retrieve a new parameter called "objectid". This id is the record identifier in the Datastore.
Let's now write the method "LoadFile" that we need to retrieve the content of the context.
Go back to MonoDevelop. Add the method "LoadFile" in the file "MainWindow.cs".
string LoadFile() { // 1) Get the Datastore object if (objectId == null || objectId.Length == 0) return ""; DSObject result = Datastore.get(objectId); if (result == null) return ""; // 2) Load the included text file StreamReader sr = new StreamReader(result.FilePath); string text = sr.ReadLine(); sr.Close(); // 3) Return text content return text; }
The first step in the method is to retrieve the object in the Datastore. We just need to call the "Datastore.get()" method in the Sugar library with the "objectId" as parameter. A test condition is used because the "objectid" is not set when the activity is launch out of the Journal.
The second step is to retrieve the full path from the file using the instance variable "FilePath". Then, we just do a file read using a standard StreamReader object. Note that we could also access to other properties (title, mime_type, preview) using the "result.Metadata.getItem()" method.
Finally, the content of the file read is returned.
Now, we will update the constructor of the window to take into account the reading of the context. We just initialize the entry field with the content of the file (commented in the source code(); _entry.Text = LoadFile(); // Updated vbox.Add(_entry); Button _button = new Button(); _button.Label = "Quit"; _button.Clicked += new EventHandler(OnClick); vbox.Add(_button); this.Add(vbox); ShowAll(); }
Launch the build using "Project/Build Solution" Then, let's deploy the activity using the "./deploy" script. If asked, type the password "user". We're now starting Sugar emulator from the desktop.
Launch the Journal.
Start the activity from the Journal (CAUTION: you need to click on the entry before the last one because the last run has left an empty field).
Here what you could see:
Our message is now correctly displayed into the entry field.
So, we've got now an application able to store and retrieve its context in the Journal. | http://wiki.sugarlabs.org/index.php?title=How_to_store_information_in_the_Journal_using_Mono&oldid=36121 | CC-MAIN-2021-04 | refinedweb | 1,582 | 51.65 |
When building an API, there are times when you won’t want for your API to be publicly accessible to everyone. Whether it’s having a simple username and password, or a check that verifies that someone is a paid customer, you will need a way to guard your routes from users that aren’t authenticated. One of the best ways to do this is by using JWT. JWT is a stateless token that checks with your back-end system to validate your user requests.
In this example, I’ll be using my express ES6 blog API from a previous post here. The github repo can also be found here.
Project Setup
Assuming that you already have an API in place, you’ll want to add dependencies for authentication.
npm install –save passport passport-local passport-local-mongoose
Passport is a tool for authentication Node.js applications while Passport-Local and Passport-Local-Mongoose help to add the ability to use simple username and password authentication. Passport-Local-Mongoose specifically handles the passport hashing and salt in your User Document in Mongoose.
Now that we have those installed, we will walk to pull in dependencies for handling our JWT creating and authentication with express.
npm install –save express-jwt jsonwebtoken passport-jwt
Passport-jwt adds middleware to the Passport to allows for it to accept JSON web tokens as a valid authentication type. Now that we have all of our dependencies pulled in, let’s add Passport to our application.
First, we need to create a User model to represent our stored user and credential in our database.
import mongoose from 'mongoose'; const Schema = mongoose.Schema; import passportLocalMongoose from 'passport-local-mongoose'; let userSchema = new Schema({ firstName: String, lastName: String, email: String, password: String }); userSchema.plugin(passportLocalMongoose); let User = mongoose.model('User',userSchema); export default User;
As you can see, we add our email and passport to the model and then right under it, we ass the passport-local-mongoose plugin in order to handle our password hashing. Even though this can be handled manually, you should follow the principles of DRY. If there is a package to handle a common task, don’t re-invent the wheel.
Now that we have our User model created, we can now add that to our passport pipeline and configure passport to connect with Express.
Add these two import statements to your server.js (or app.js )file
import passport from 'passport'; import User from './models/user';
Then choose the method of authentication and JWT configuration you wish to use.
const passportJWT = require("passport-jwt"); const JWTStrategy = passportJWT.Strategy; const ExtractJWT = passportJWT.ExtractJwt; const LocalStrategy = require('passport-local').Strategy;
server.use(passport.initialize()); passport.use(new LocalStrategy({ usernameField: 'email', passwordField: 'password' }, User.authenticate() )); passport.serializeUser(User.serializeUser()); passport.deserializeUser(User.deserializeUser()); passport.use(new JWTStrategy({ jwtFromRequest: ExtractJWT.fromAuthHeaderAsBearerToken(), secretOrKey : 'ILovePokemon' }, function (jwtPayload, cb) { //find the user in db if needed. This functionality may be omitted if you store everything you'll need in JWT payload. return User.findById(jwtPayload.id) .then(user => { return cb(null, user); }) .catch(err => { return cb(err); }); } ));
As you can see here, we initialize an instance of passport, set the strategy to local, and then set the fields that we wish to use as well as the User model. The local Strategy just means using username and password.
Right after that, we tell passport how we plan to serials and deserialize our user, and how to configure our JWT with our JWT strategy. ExtractJWT.fromAuthHeaderAsBearerToken(), specifies that JWT tokens will be sent as Bearer tokens in incoming HTTP requests and that our secret key for encrypting our tokens is stored in secretOrKey.
Setting up User Login and JWT Authentication
How that we’ve set up passport, we will want to create controllers handle our registrations of users and our login that returns a valid JWT.
Create a controller called auth.controller.js and import the following
import User from '../models/user'; import bodyParser from 'body-parser'; import passport from 'passport'; const AuthController = {}; import jwt from 'jsonwebtoken';
We import passport, our model as well as jsonwebtoken as a method of signing our tokens.
We also create an object called AuthController to export later into other files.
Now that that is done, create the function to register the user.
AuthController.register = async (req, res) => { try{ User.register(new User({ username: req.body.email, firstName: req.body.firstName, lastName: req.body.lastName, }), req.body.password, function(err, account) { if (err) { return res.status(500).send('An error occurred: ' + err); } passport.authenticate( 'local', { session: false })(req, res, () => { res.status(200).send('Successfully created new account'); }); }); } catch(err){ return res.status(500).send('An error occurred: ' + err); } };
The User.register() method takes in a new User model, the password and the mother of authentication as parameters.
Now create the method to login
AuthController.login = async (req, res, next) => { try { if (!req.body.email || !req.body.password) { return res.status(400).json({ message: 'Something is not right with your input' }); } passport.authenticate('local', {session: false}, (err, user, info) => { if (err || !user) { return res.status(400).json({ message: 'Something is not right', user : user }); } req.login(user, {session: false}, (err) => { if (err) { res.send(err); } // generate a signed son web token with the contents of user object and return it in the response const token = jwt.sign({ id: user.id, email: user.username}, 'ILovePokemon'); return res.json({user: user.username, token}); }); })(req, res); } catch(err){ console.log(err); } };
This checks that the email and password fields are not empty. This is using the passport-local authentication to check if it’s a valid user. Then it signs a token using that users information and returns that token as json.
Guarding routes
Now that we have the auth methods created, we’re going to have to create routes for those.
Create the two files auth.routes.js and user.routes.js
import { Router } from 'express'; import AuthController from '../controllers/auth.controller'; const router = new Router(); router.post('/register', (req, res) => { AuthController.register(req, res); }); router.post('/login', (req, res, next) => { AuthController.login(req, res, next); }); export default router;
Here you can see that we import Express and get the router to later export. We also create Post routes for our AuthController.
In our user.routes.js file
import { Router } from 'express'; const router = new Router(); /* GET users listing. */ router.get('/', function(req, res, next) { res.send('respond with a resource'); }); /* GET user profile. */ router.get('/profile', function(req, res, next) { res.send(req.user); }); export default router;
This returns the user that made the request.
Now if we go back to our server.js file we can add those route files to our express pipeline
import user from './routes/user.routes'; import auth from './routes/auth.routes'; server.use('/auth', auth); server.use('/user', passport.authenticate('jwt', {session: false}), user);
Here we pass in passport as our method of authentication. The user must now have a JWT token in every header in order to make requests to that controller.
User Creation
User Login
JWT user profile
Conclusion
On top of having a JWT token in the header, you can also use this as a method of creating MongoDB documents that are related to a user. An example of this is having an authenticated user’s ID associated with a post or comment upon creation. Check out the GitHub repository for the full source code of this tutorial. | https://codebrains.io/add-jwt-authentication-to-an-express-api-with-passport-and-es6/ | CC-MAIN-2019-35 | refinedweb | 1,240 | 51.44 |
STOMP and Individual Ack?jscheid Jun 1, 2011 11:08 PM
Hi,
I'm in the process of porting an application from JBM+StompConnect to HornetQ with native STOMP protocol handler.
One issue I've just run into is that StompSession calls ServerSessionImpl.acknowledge rather than ServerSessionImpl.individualAcknowledge, forcing acknowledgment of messages in the order in which they were received. This breaks my application, which processes and acknowledges messages out-of-order using separate worker threads. StompConnect doesn't have the same limitation, and the STOMP protocol description also doesn't mention it. Could this be changed to use individualAcknowledge by default, or could this be made configurable?
Thanks,
Julian
1. Re: STOMP and Individual Ack?Tim Fox Jun 2, 2011 4:15 AM (in response to jscheid)
HornetQ is just following the STOMP 1.0 spec :
"When a client has issued a SUBSCRIBE frame with the ack header set to client any messages received from that destination will not be considered to have been consumed (by the server) until the message has been acknowledged via an ACK."
BTW StompConnect won't do anything different since it just uses the JMS API, and there is no way using JMS to acknowledge an individual message out of order.
2. Re: STOMP and Individual Ack?jscheid Jun 2, 2011 5:51 AM (in response to Tim Fox)
Thanks for your reply.
You are right that HornetQ is under no obligation to support out-of-order acknowledgements, I was mistaken. I missed the bit where vanilla JMS doesn't support out-of-order acks. I'm not sure I agree that the STOMP spec is clear in this regard -- it doesn't clarify what "any messages" are in the context of "the message" -- but you are probably right that the intention was to match JMS behaviour.
That said, if I'm not missing anything HornetQ could easily support out-of-order STOMP acks, and my application -- and probably others -- would benefit from it. Would you consider adding this as a feature that can be enabled in the configuration? I'd be happy to provide a patch to this end.
3. Re: STOMP and Individual Ack?jscheid Jun 2, 2011 6:03 AM (in response to jscheid)
Even better than a configuration option might be to introduce a third ack mode, something pseudo-namespaced like "hornetq:individual" maybe? The STOMP spec does mention ack modes beyond "auto" and "client" in passing... ( )
4. Re: STOMP and Individual Ack?jscheid Jun 2, 2011 9:22 PM (in response to jscheid)
Even betterer: STOMP 1.1 provides a new ack type, client-individual!
It would be fantastic to add support for this prior to adding full STOMP 1.1 support as per HORNET-553. I'll send over a patch shortly.
5. Re: STOMP and Individual Ack?jscheid Jun 2, 2011 9:56 PM (in response to jscheid)
I took the liberty of creating issue HORNETQ-713 with a patch for this. | https://developer.jboss.org/thread/167456 | CC-MAIN-2018-39 | refinedweb | 495 | 65.52 |
ASP.NET Web Pages with Razor is a framework that has many names. Sometimes referred to as ASP.NET Web Pages or Razor Pages, ASP.NET Web Pages is a new framework that brings together the Razor view engine and an ASP.NET website. It supports most of the constructs of the Razor view engine in the ASP.NET MVC framework (minus the MVC-specific offerings) and adds additional WebMatrix capabilities, all within the website project template format.
For the average Web Forms developer, ASP.NET Web Pages introduces quite a few shifts in development style. The framework no longer relies on a code-behind file; rather, it mixes server-side code and client-side HTML in a single file. This architecture is déjà vu for classic ASP developers, who built pages from these same building blocks in a very similar fashion. Similarly, ASP.NET Web Pages does not use controls or components; instead, pages render raw HTML or use helper methods (methods similar to the HTML helpers in MVC, but with a different naming convention and structure). This means that even ASP.NET Web Forms developers will have to get accustomed to some changes if they want to use the framework.
We'll start our exploration of ASP.NET Web Pages by quickly examining Razor constructs and syntax. After that, we will look at examples that demonstrate how to use some of Razor's features and examine layout and content pages, rendering techniques, and more Razor features.
Razor Syntax
ASP.NET Web Pages are constructed very similarly to an ASP.NET MVC view, using the new Razor syntax. A lot of the same constructs are supported; many of the features described in my article "Working with the ASP.NET MVC 3 Razor View Engine" also apply to ASP.NET Web Pages. Now let's take a quick look at some syntax examples.
Razor views support inline and code statements, as shown in the code sample in Figure 1.
Welcome @userName
@{
    if (DateTime.Now.Month == 12)
    {
        @:, did you get all of your Christmas shopping done?
    }
}
The Razor view engine provides some intelligence about inline and code statements, so that it recognizes where the server-side code stops and starts. Razor is smart enough to know that an inline statement such as @SomeObject.Name is server-side code while the markup surrounding it (a span tag, for instance) is client-side; this is because inline code is interpreted strictly. Code blocks, on the other hand, are denoted by curly braces (multi-line code blocks) or parentheses (single-line code statements) and don't end interpretively; they end at a specific designation (a closing curly brace or parenthesis). All server-side code is denoted by the "@" character. Razor also supports comments using an asterisk combination (@* *@) to denote comment blocks, escaping the "@" character to display it directly as client content using "@@", and using special keywords in combination with the "@" character.
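These constructs can be combined on a single page; here is a small sketch (the markup is invented for illustration):

```cshtml
@* This comment is stripped on the server and never reaches the browser. *@
<p>Contact us at support@@example.com</p>   @* "@@" renders a literal @ *@
<p>Next year will be @(DateTime.Now.Year + 1).</p>   @* single-line code statement *@
```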
Razor supports two types of common reusable components: functions and helpers, each designated with a special keyword. Functions are declared with the @functions keyword, and defined in a code block. Figure 2 shows the structure of a functions block. A function defines a method that returns an HTML string, a string, or whatever the method may need. HTML strings are handy for rendering markup directly to the browser later. To use the function, simply call it using its name.
@functions {
    public static IHtmlString GetFormattedAddress(string address1, string city, string state)
    {
        return new HtmlString(address1 + "<br />" + city + ", " + state);
    }
}
Functions differ from their MVC counterparts; instead of being defined in the view, they need to reside in the App_Code folder and must be static, as shown in Figure 2. Being in App_Code and being static makes functions a global feature.
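Assuming the Figure 2 block is saved as App_Code/Functions.cshtml (the file name here is hypothetical; the file name becomes the class name), a page can call the function by qualifying it with that class:

```cshtml
<address>
    @Functions.GetFormattedAddress("123 Main St.", "Springfield", "IL")
</address>
```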
Helpers are a little different; instead of acting like a traditional function by defining an access level and return type, helpers define a method structure without returning anything, rendering their contents directly. Inside the method can be any type of code statements -- if/else, while, and so on. Figure 3 shows a sample helper rendering HTML and server-side content directly to the browser.
@helper GetNameAsHtml(string first, string last) {
    <span>@last, @first</span>
}
A helper can render content to the browser in one of three ways: by mixing HTML markup directly with server-side code, by using the @: operator to emit a single line of plain text, or by wrapping multi-line literal content in a <text> element.
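A made-up helper exercising all three rendering techniques might look like this (the helper name and strings are invented for illustration):

```cshtml
@helper ShowStatus(int itemCount) {
    <p>Items in your cart: @itemCount</p>   @* markup mixed with code *@
    @:Thanks for shopping with us!
    <text>
        Orders placed before noon
        ship the same day.
    </text>
}
```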
There are two types of helpers: local (to the page only) or global (available across the entire application). A global helper looks a lot like a local helper, except that the definition of the helper resides in the App_Code folder. Global functions and helpers assume the name of the file as the class it resides in; for instance, the file GlobalHelpers.cshtml defining a helper named GetNameAsHtml can be referenced via GlobalHelpers.GetNameAsHtml().
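For example, with that file in place, any page in the application could render the global helper directly (the argument values here are made up):

```cshtml
<p>Signed in as @GlobalHelpers.GetNameAsHtml("Jane", "Doe")</p>
```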
Razor Offerings
Razor would be of little help to developers if an ASP.NET web page couldn't be broken into reusable pieces (as user controls enabled in Web Forms) or rendered within a single template container for the entire site (as master pages allow). Thankfully, ASP.NET Web Pages does not disappoint in this regard. It supports layout and content pages, similar to a Web Form with a master page. A layout page looks like the condensed sample in Figure 4; note that the head and root body are deliberately omitted.
@RenderPage("~/Layout/_Header.cshtml")
. . .
<h1>@Page.Title</h1>
@RenderBody()
Layout pages revolve around three methods, each of which serves a particular purpose:
- @RenderPage -- Renders a *.cshtml or *.vbhtml page directly at the current placeholder. This is similar to a server-side include of ASP, or equivalent to defining a user control in ASP.NET Web Forms. The difference here is that the view does not have a direct API to work with the rendered pages content; there are other ways to work around this limitation (using PageData, for instance, which we'll discuss later).
- @RenderSection -- Equivalent to a ContentPlaceholder control, a section defines a custom region whereby a view can render specific content at a specific location of the layout page. Sections may be required or optional.
- @RenderBody -- Rather than a view simply assuming that it has a section defined for the main content, the RenderBody statement renders the body that is not handled by an explicit section. The main content of the view does not define a section to render in.
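As a sketch of how a section wires a layout to a content page (the file and section names here are invented for illustration), the layout declares the placeholder and the content page fills it:

```cshtml
@* In the layout page, e.g. _SiteLayout.cshtml *@
@RenderSection("Scripts", required: false)

@* In a content page that uses that layout *@
@{ Layout = "~/_SiteLayout.cshtml"; }
@section Scripts {
    <script src="~/Scripts/chart.js"></script>
}
```

Because the section is declared with required: false, content pages that never define a Scripts section still render without error.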
Note that Razor has no user control concept; the closest equivalent is simply a page that is rendered within the context of a parent page. That would normally mean these reusable pages could be served to the browser directly; however, a special designation (an underscore at the beginning of the filename) prevents the page from being served to the client. In addition, the framework handles generating clean URLs without file extensions and has routing features similar to those provided in ASP.NET MVC.
Layout, content pages, and reusable pages can communicate with each other using a PageData collection. Data stored via PageData is persisted during the request only. It works similarly to TempData or ViewData containers in MVC; data pushed in can be read using a dictionary interface.
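A minimal sketch of the PageData handoff between a content page and its layout (the key name is arbitrary):

```cshtml
@* In the content page *@
@{ PageData["Title"] = "Customer List"; }

@* In the layout page or a rendered partial page *@
<h1>@PageData["Title"]</h1>
```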
Razor also provides special support for initialization through two special pages: _AppStart and _PageStart. _AppStart is a great way to establish application-wide settings or perform one-time application initialization; for instance, the application could initialize a dependency injection container here or set up other framework-specific components. Settings shared by the pages in a folder are established via the _PageStart page. Similar to a web.config file, _PageStart defines common settings, such as the layout file or common PageData values, for the pages in its folder and subfolders.
Both of these special pages run within the context of the executing page; _PageStart can also explicitly control when the page renders by executing an @RunPage method. This means part of the _PageStart code is run before execution of the page, and the rest is run after it, giving developers more control of the page process. This method is optional and implicitly called at the end if omitted.
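A sketch of the two startup pages follows. The membership call is the one the WebMatrix starter site uses; the layout path is illustrative:

```cshtml
@* _AppStart.cshtml -- runs once, when the application starts *@
@{
    WebSecurity.InitializeDatabaseConnection(
        "StarterSite", "UserProfile", "UserId", "Email", true);
}

@* _PageStart.cshtml -- wraps every page in its folder *@
@{
    Layout = "~/_Layout.cshtml"; // common setting for these pages
    RunPage();                   // execute the requested page here...
    // ...code after RunPage() runs once the page has finished
}
```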
Out with ASP.NET Controls, In with Helpers
If you're wondering how ASP.NET Web Pages uses the ASP.NET Web Forms architecture, there's a simple answer for you: It doesn't. Instead, the ASP.NET Web Pages architecture mirrors ASP.NET MVC with a set of HTML helpers to construct the UI, in addition to a new set of WebMatrix helpers for more advanced controls. Lacking a code-behind page or a controller, ASP.NET Web Pages defines code in line with the markup as the means of serving requests. ViewState is also another mechanism that didn't make it into the framework, meaning developers have to manage the request and response scenarios manually.
Let's take a look at building a form using the Html helper extensions. The Html property of the page defines a series of extension methods to render TextBox, CheckBox, and other types of controls. A developer can choose to go this route or instead define each HTML element directly. The sample form in Figure 5 displays an input form to the user and manages the request/response process completely; if any errors occur, input is validated appropriately.

@{
    if (IsPost) {
        if (string.IsNullOrEmpty(Request["FirstName"])) {
            this.ModelState.AddError("FirstName", "First name is required");
        }
        if (string.IsNullOrEmpty(Request["LastName"])) {
            this.ModelState.AddError("LastName", "Last name is required");
        }
        if (!this.ModelState.IsValid) {
            this.ModelState.AddFormError("Some errors have occurred on the form.");
        }
    }
}
@Html.ValidationSummary("These are the errors:")
...
The form has helpers for each input control, in addition to the ValidationMessage helper, which is an MVC-style construct for displaying validation errors in the UI. The form listens for errors added to ModelState, which is a dictionary, matching the validator name to the state of its validation. When an error added to the dictionary matches the name of the validation message, the validation message is displayed, giving the user an asterisk to denote that the error happened. Additionally, all validation errors are displayed in the ValidationSummary.
Notice how the page detects a postback by using the IsPost property. Upon post, the form checks the posted data, looking for errors. No automatic validation process happens here; each error needs to be checked for explicitly and, if found, added to the model state.
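Figure 5 elides the form markup itself; the following is a sketch of what it might contain. The field names match the validation code above, but the exact helper calls are my assumption, not the article's:

```cshtml
<form method="post">
    <label for="FirstName">First name:</label>
    @Html.TextBox("FirstName")
    @Html.ValidationMessage("FirstName", "*")

    <label for="LastName">Last name:</label>
    @Html.TextBox("LastName")
    @Html.ValidationMessage("LastName", "*")

    <input type="submit" value="Save" />
</form>
```

Each ValidationMessage call renders its asterisk only when ModelState contains an error under the matching name.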
Controls
Although controls no longer exist, their replacements exist in the form of either direct HTML tags, HTML Helper extension methods, or a web component such as the WebGrid control. The WebGrid control, from the System.Web.Helpers namespace and assembly, follows a chaining-like operation to set up the control and render to the browser. Most parameters are established in the constructor of the WebGrid control. At the end of the process is a GetHtml() method, the method that performs the rendering and accepts a variety of rendering options, such as applying various styles to the control.
The example in Figure 6 creates a grid control. Notice how the settings for binding data (e.g., the data to display) are defined in the constructor, whereas the Cascading Style Sheets (CSS) classes to use for rendering the styles in the UI are defined in the GetHtml() method.

@(new WebGrid(results,
      columnNames: new string[] { "UserId", "Email" },
      defaultSort: "Email")
      .GetHtml(rowStyle: "RowStyle",
               headerStyle: "HeaderStyle",
               tableStyle: "TableStyle"))
Data is supplied to the grid in the constructor as an object. The framework handles the binding in a similar manner as the GridView control would, matching fieldnames in the data to a specific column (defined by specifying the names of the columns).
Components
If you were to peruse published examples of Razor web pages, you would notice a series of helpers used to perform database, I/O, or web tasks. Some of these helpers still use objects in the .NET Framework, whereas others are web-specific. The ASP.NET Web Pages framework changes how you perform certain tasks compared with its Web Forms and MVC counterparts.
For instance, to work with a database and query the results, we can use the new Database class. This class handles the core ADO.NET operations, such as connecting to a database, opening the database connection, and executing a parameterized query or command. The Database class requires the same input as an ADO.NET command or data adapter; however, it encapsulates all this into a fluent interface, as shown in Figure 7.

var results = Database.Open("StarterSite").Query("select * from UserProfile");

if (IsPost) {
    Database.Open("StarterSite")
        .Execute("insert into UserProfile values (@0)", email);
}
Query and Execute are the two main operations; both support parameterization. Parameters are numeric-based; instead of supplying a name, you supply a numeric value starting at zero. Both direct queries and stored procedures are supported by these two methods.
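As a quick sketch of the positional-parameter convention (the table and column names follow the starter site used elsewhere in this article; the id and email variables are assumed to be in scope):

```csharp
var db = Database.Open("StarterSite");
// @0 and @1 are bound in argument order, starting at zero
var user = db.QuerySingle(
    "select * from UserProfile where UserId = @0 and Email = @1",
    id, email);
```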
Even though a Database class provides simple ADO.NET operations, Entity Framework can still handle the heavy-duty work of querying and modifying data. As ASP.NET Web Pages are still object-oriented, they can support instantiating the ObjectContext and reading/writing data. The code in Figure 8 shows an example of using Entity Framework with Web Pages. This code responds to a form post, then processes the data and inserts a new object into the database.

@{
    var oc = new WebPageSamples.Data.StarterSiteObjectContext();
    if (Request.Form["AddNewEmail"] != null) {
        var email = Request.Form["NewEmail"];
        if (!string.IsNullOrEmpty(email)) {
            oc.UserProfiles.AddObject(
                new WebPageSamples.Data.UserProfile { Email = email });
            oc.SaveChanges();
        }
    }
    var results = oc.UserProfiles.ToList();
}
@(new WebGrid(results,
      columnNames: new string[] { "UserId", "Email" },
      defaultSort: "Email")
      .GetHtml(rowStyle: "RowStyle",
               headerStyle: "HeaderStyle",
               tableStyle: "TableStyle"))
In addition, disk I/O is fully supported using the .NET Framework's File class. This is nothing specific to ASP.NET Web Pages but is the same File class we've had in any other project. Figure 9 shows an example of using the File class in ASP.NET Web Pages.

var entry = Request.Form["Entry"];
File.AppendAllLines(Server.MapPath("data.txt"), new string[] { entry });

var lines = File.ReadAllLines(path);
foreach (var line in lines) {
    ..
}
In ASP.NET Web Pages you have a variety of methods available to read, write, or append data, depending on your needs. However, you are not restricted to the File class to perform these operations; you can stick to the traditional way of using a StreamWriter class to output data to a file, as shown in Figure 10.

var entry = Request.Form["EntryWriter"];
using (var writer = new StreamWriter(File.Open(path, FileMode.Append))) {
    writer.WriteLine(entry);
}
Order Matters
If you look at the sample code provided with this article (click the Downloads button at the top of the article) and some of the code samples in the figures, you'll notice that views are linear. ASP.NET Web Pages has a different API than Web Forms', one in which views are rendered in a top-to-bottom fashion. This means that you have to place your code carefully and ensure that it renders at the exact spot on the page that you want.
With Web Forms, we used to have an API for the page, letting us change form values at any point in time in the page life cycle. Code didn't have to be in any specific order to get the page to render correctly. The approach I just described is a shift away from the Web Forms architecture, more toward an ASP or an ASP.NET MVC view style. So be aware that order matters.
Give ASP.NET Web Pages a Try
What you've seen here is a comprehensive overview of the ASP.NET Web Pages framework. ASP.NET Web Pages uses a combination of HTML helpers, WebMatrix components, the Razor view engine, influences from the MVC framework, and client-side HTML to generate the next-generation web page. A Razor web page consists of a single page formed by a combination of server-side code and client-side HTML, using the Razor conventions to denote the differences. Additionally, MVC-style HTML extensions build the UI and provide additional features such as validation, complete with model state support.
Web Pages can still leverage everything that the .NET Framework offers; with Web Pages you can use the new Database class (a WebMatrix offering), or you can use LINQ to SQL or Entity Framework for data management. WebMatrix also supports all .NET Framework capabilities out of the box, although it doesn't fully support everything with ASP.NET. I encourage you to give Web Pages a try and see if it improves your productivity.