Control.DeepSeq
Description
This module provides overloaded functions, such as
deepseq and
rnf, for fully evaluating data structures (that is, evaluating to
"Normal Form").
A typical use is to prevent resource leaks in lazy IO programs, by forcing all characters from a file to be read. For example:
import System.IO
import Control.DeepSeq
import Control.Exception (evaluate)

readFile' :: FilePath -> IO String
readFile' fn = do
    h <- openFile fn ReadMode
    s <- hGetContents h
    evaluate (rnf s)
    hClose h
    return s
Note: The example above should rather be written in terms of
bracket to ensure file descriptors are released in
a timely manner (see the description of
force for an example).
Synopsis
- class NFData a where
- deepseq :: NFData a => a -> b -> b
- force :: NFData a => a -> a
- ($!!) :: NFData a => (a -> b) -> a -> b
- (<$!!>) :: (Monad m, NFData b) => (a -> b) -> m a -> m b
- rwhnf :: a -> ()
- class NFData1 f where
- rnf1 :: (NFData1 f, NFData a) => f a -> ()
- class NFData2 p where
- rnf2 :: (NFData2 p, NFData a, NFData b) => p a b -> ()
NFData class
class NFData a where Source #
A class of types that can be fully evaluated.
Since: deepseq.
Note:
Generic1 can be auto-derived starting with GHC 7.4

{-# LANGUAGE DeriveGeneric #-}

import GHC.Generics (Generic, Generic1)
import Control.DeepSeq

data Foo a = Foo a String
           deriving (Eq, Generic, Generic1)

instance NFData a => NFData (Foo a)
instance NFData1 Foo
instance NFData Colour where rnf = rwhnf
or
{-# LANGUAGE BangPatterns #-}

instance NFData Colour where rnf !_ = ()
Instances
Helper functions
Finally, here's an exception safe variant of the
readFile' example:
readFile' :: FilePath -> IO String
readFile' fn = bracket (openFile fn ReadMode) hClose $ \h ->
    evaluate . force =<< hGetContents h
Since: deepseq-1.2.0.0
($!!) :: NFData a => (a -> b) -> a -> b infixr 0 Source #
(<$!!>) :: (Monad m, NFData b) => (a -> b) -> m a -> m b infixl 4 Source #
rwhnf :: a -> () Source #
Liftings of the NFData class
For unary constructors
class NFData1 f where Source #
A class of functors that can be fully evaluated.
Since: deepseq-1.4.3.0
Minimal complete definition
Nothing
Methods
liftRnf :: (a -> ()) -> f a -> () Source #
liftRnf should reduce its argument to normal form (that is, fully
evaluate all sub-components), given a function to reduce values of type
a, and then return
().
See
rnf for the generic deriving.
Instances
rnf1 :: (NFData1 f, NFData a) => f a -> () Source #
For binary constructors
class NFData2 p where Source #
A class of bifunctors that can be fully evaluated.
Since: deepseq-1.4.3.0
Methods
liftRnf2 :: (a -> ()) -> (b -> ()) -> p a b -> () Source #
Trouble with arduino/processing
- by mushroom glue
December 1, 2012
I've been trying to teach myself Arduino and Processing, and I've been hitting a few problems. I had a previous post Here, but it was getting a bit crowded with code, so I thought I'd start a new post.
I've been trying to send an integer value from Processing to Arduino via serial, to set the brightness of the on-board LED. The arduino sketch works, and I've tested it with the serial monitor. When I try to control it with my Processing sketch, though, it only adjusts between 47-60 rather than the full range of 0-100. I'm using the ControlP5 and serial libraries in processing, and the softPWM library on the arduino to PWM the onboard LED.
Code:
Arduino;
#include <SoftPWM.h>
#include <SoftPWM_timer.h>
int LED = 13;
int dim = 0;
void setup() {
Serial.begin(2400);
SoftPWMBegin();
SoftPWMSet(LED,0);
}
void loop() {
while (Serial.available() > 0) {
int dim = Serial.parseInt();
if (Serial.read() == '\n') {
SoftPWMSetPercent(LED,dim);
}
}
}
Processing (The reason the value is called servo is because I'd intended to control a servo, but didn't get round to it);
import processing.serial.*;
Serial myPort;
import controlP5.*;
ControlP5 cp5;
int Servo = 0;
int out = 0;
Knob controller;
void setup() {
size (200,200);
myPort = new Serial(this, "COM6", 2400);
smooth();
noStroke();
cp5 = new ControlP5(this);
controller = cp5.addKnob("Servo")
.setRange(47,60)
.setValue(0)
.setPosition(50,50)
.setRadius(50)
.setDragDirection(Knob.HORIZONTAL)
;
}
void draw() {
background(5);
myPort.write(Servo);
delay(15);
myPort.write('\n');
}
Any Ideas?
PS; can the arduino provide MIDI over USB?
Thanks in advance.
Integers, characters, and strings
After digging into the (horrible) processing documentation for 5 minutes or so, it seems that your line myPort.write(Servo); is actually sending the character whose ASCII (?) code is the number Servo, not a string of digits representing that number. The Arduino, however, is expecting a string of digits to assemble into an integer (parseInt). If you convert the number Servo to a string of digits then pass that to .write( string ); that should work. I don't use processing so I don't know how to tell you to do that.
EDIT: It appears the str() function will do what you need:
something like this myPort.write(str(Servo));
Thanks!
Thanks! Just tried this, and it's worked perfectly. I'm getting smooth control from 0 to 100 from the sketch.
Glad to help
Have fun!
Range yes, but also the int
I would have to say the 47,60 in the range variable is something to look at, but it should be noted that you are sending an INT via serial and it does not look like you are taking it apart (into bytes) first. If you are only using a 0-100 range (or even a 0-255 range), I would use a byte for your "servo" variable instead.
Thanks for the reply. I've
Thanks for the reply. I've given using byte instead of int a shot, but had no luck. The 47,60 range variable was originally 0,100, but I changed it to see how smooth it was within those values. It's just the range for the knob. I've also tried changing the port settings (parity etc.), but it didn't help, so I set it back. Do you have any idea how to monitor serial activity? I was just wondering if it's possible to use a bit of software to compare what the arduino serial terminal and the processing sketch were sending, so I could have a better guess as to what's wrong.
Suspicious...
I try to stay well away from anything that speaks Java (like Processing), but the statement "it only adjusts between 47-60 " certainly draws suspicion to this line of code:
.setRange(47,60)
Sorry
Thanks for the reply. I forgot to re-adjust that, as it was originally (0,100). That sets the range for the on-screen control. I just changed it so that I could see how precise it was within that range.
Restful API Interface Using Android Things
In this post, we take a look at how to build a RESTful API that can be used by and integrated with any device running Android Things!
Building a RESTful API interface using Android Things is useful whenever we have to connect Android Things to an external system. This article explains how to build one. Implementing a RESTful API interface in Android Things guarantees large compatibility with other systems implemented using different technologies.
We have already discussed how to exchange data between Android Things and Firebase, and how to use the MQTT protocol.
To focus our attention on the process of building a RESTful API interface in Android Things, we will use a simple sensor that reads temperature, pressure, and humidity. This sensor is BMP280 — an I2C sensor.
There are two different parts that make up this tutorial:
- The schematic: How to connect the sensor to Android Things
- How to build a RESTful API interface to read the sensor data
How to Connect the BMP280 to Android Things
In this first step, we cover how to connect the BMP280 sensor to Android Things. This topic was covered several times on this blog already, but just to refresh your memory, the BMP280 is an I2C sensor, so it connects to Android Things using four pins:
- Vcc
- Ground
- Clock (CLK)
- Data (SDA)
The schematic showing the connections is:
Please refer to the Android Things Peripherals I/O to know the pins used in this project. This project uses a Raspberry Pi, but you can use any platform compatible with Android Things.
Create a new Project using Android Things Studio and add the following dependency to the Gradle file:
dependencies { ... compile 'com.google.android.things.contrib:driver-bmx280:0.4' }
Let us create a new class that manages the connection to the sensor and reads the temperature and the pressure.
That’s all, now Android Things is connected to the sensor and the Android Things app can read its values.
Building a RESTful API Interface Using Android Things
This is the most interesting part of this article. The target of this section is building a RESTful API interface using Android Things so that we can expose a set of services to read the temperature and the pressure from the sensor. In other words, we can imagine we have an external application or an app that wants to remotely read the temperature and the pressure from a sensor.
For this purpose, this Android Things project uses the Restlet framework. This framework provides several implementations, and one of them is for Android.
We are supposed to implement two RESTful APIs: one reading the temperature and another one reading the pressure. For this purpose, it is necessary to implement two different classes. One handles the temperature.
The other handles the pressure. To keep the HTTP server running in the background, the project uses an IntentService, which allows the work to be completed even if the app UI is no longer available.
This Android Things project uses an IntentService, and the class is:
public class APIServerService extends IntentService {
    // ...
    restComponent.getServers().add(Protocol.HTTP, PORT); // listen on 8090
    // ...
}
This class is very simple: In the beginning, it defines the port where the server listens to and then defines a Router. In this context, the router is necessary to dispatch the requests to the different resources. We can attach the following URIs:
- /temp to get the current temperature
- /press to get the current pressure
Then the class implements the onHandleIntent to manage starting and stopping the server.
Step 3: Defining the Activity
This is the last step in this process and covers building the Android Things Activity.
In this class, the Android Things app simply invokes the IntentService defined previously using two types of Intents.
That’s all, we can run the Android Things app and test it.
Testing the Restful API Interface Using Android Things
In this step, we can test the app and check the results.
To read the temperature, for example, let us open a browser and write the following URL to get the temperature:
http://<raspberry_ip>:port/sensor/temp
While to get the pressure:
http://<raspberry_ip>:port/sensor/press
The result is JSON data holding the temperature or the pressure.
Summary
At the end of this post, hopefully, you gained the knowledge of how to implement a RESTful API interface using Android Things and how to invoke it. This is a simple project, but it demonstrates how we can use a RESTful API interface to integrate Android Things with external systems. We can further expand this Android Things project to implement other APIs. For example, we can use POST requests to send data and control remote peripherals like motors or LEDs.
#include <searchm.h>
True if reqid is valid.
True: reqid was given on the command line, and this is a requery of a previously created search thread. False: reqid will be/was generated, and this request is creating a search thread.
Similar to structure, but IN_ALL is (1<<8). Keeps track of which within parameters have been seen on the command/query line.
The searchm_qdisplay value.
The number of valid index parameters.
The number of valid limits given on cmdline
The number of within parameters given.
The sorted index element list.
The searchm_qpage value.
The reqid parameter value or the generated reqid.
The sort pairs information.
The sub-query information.
The calculated SwishSetStructure value. 0 implies not used. | http://searchm.sourceforge.net/html/structsearchm__cmd__line.html | CC-MAIN-2017-17 | refinedweb | 121 | 55.61 |
Published: 22 Oct 2007
By: Dino Esposito
Dino Esposito overviews the integration between WCF and AJAX
AJAX applications written for the ASP.NET platform work by sending HTTP requests to a server-side back-end. Aside from pages using the partial rendering engine - a sort of interceptor that uses XMLHttpRequest to place classic postbacks - an ASP.NET AJAX page requests data to a remote endpoint available over the HTTP protocol. Callable endpoints are public addresses for back end services. How would you write such services for an ASP.NET AJAX application?
The key thing that is going on here is that these services are not publicly available services, but just application services living inside the same application that calls them. More precisely, nothing really prevents developers and architects from designing services that any clients that understand the protocol can consume. However, any service should be considered primarily as part of the host application. So, how would you write an application HTTP service in ASP.NET?
The first, and fairly obvious, option that springs to mind is using ASP.NET Web Services; that is, ASMX endpoints. An AJAX application requires a different configuration than a classic ASP.NET application; this means that requests for an ASMX resource are filtered out and redirected to a special AJAX component if they contain a special header - the clear evidence that the request has been spawned by an AJAX page. The reason why the same software technology - ASP.NET Web services - is used for creating both WS-* Web services and AJAX local services in ASP.NET is that for a time it was just the only available option.
Starting with Visual Studio 2008 (code-named Orcas), now close to its RC status, you can use Windows Communication Foundation (WCF) to build AJAX-callable services. Overall, the developer's experience is mostly the same whether you use ASP.NET Web services or WCF services. However, the richness of the WCF platform - specifically designed to generate and support software services - is a no-brainer. A good question would be: why wait for Visual Studio 2008 to start using WCF services in ASP.NET AJAX pages?
Before the availability of the .NET Framework 3.5 "Orcas", the WCF platform had no built-in support for taking JSON as input and returning it as output. So what is it that the .NET Framework 3.5 really does with respect to WCF? It basically empowers WCF to support JSON serialization. Now WCF services can optionally output JSON, and not always SOAP envelopes, over HTTP. All that you have to do is configure an endpoint to use the webHttpBinding binding model and enable web scripting through a new attribute. More detail on this in a moment.
Having always been a huge fan of the bottom-up approach to things, I just can't learn anything without first testing it in a very simple scenario that then evolves as quickly as possible into a more realistic one. So let's simply create a new Web site in Visual Studio 2008 and click to add a new AjaxWcfService item and name it TimeService.
After confirming the operation, you find your project extended with a service endpoint (say, timeservice.svc) and its related code-behind file placed in the App_Code folder; say, timeservice.cs. In addition, the web.config file is also modified to provide registration and discovery information for the service being created.
The TimeService class implicitly represents the contract of the service and its explicit implementation. The ServiceContract and OperationContract attributes play the same role you may know from any previous exposure to WCF programming. For the sake of simplicity, no interface is used to define the contract.
In the end, the TimeService class exposes a couple of public endpoints - named GetTime and GetTimeFormat.
The endpoints used to reach the methods on this interface are defined in an SVC file, like the timeservice.svc file shown below.
The service host indicates the language being used to implement the service and the location of the source files. Finally, through the Service attribute it indicates the name of the contract being used.
The final step before you can test the service is registering its usage in the web.config file of the host ASP.NET application. Here's what you need to have:
First off, you register the list of behaviors for endpoints. In doing so, you define a behavior for your service - named TimeServiceAspNetAjaxBehavior - and state that it accepts requests from the Web via script. The enableWebScript element is logically equivalent to the ScriptService attribute you use to decorate a Web service class for the same purpose.
Next, you list the services hosted by the current ASP.NET application. The preceding web.config file has just one service named TimeService with one endpoint using the TimeService contract and the webHttpBinding binding model.
The service is pretty much all set. How would you use it from the <script> section of a client ASP.NET page? The steps required by a developer aren't much different from those required to invoke a Web Service. You start by registering the service with the script manager using the SVC endpoint.
When processing the markup, the ScriptManager control triggers additional requests to generate and download the JavaScript proxy class for the specified WCF service. The client page uses the proxy class to place calls.
The proxy class is named after the namespace of the service, as declared by the Namespace parameter of the ServiceContract attribute. If you leave the parameter to its default value (Tempuri.org), then the proxy class is named Tempuri.org.TimeService. Let's assume the following, instead:
In this case, the following JavaScript can be used to invoke the method GetTimeFormat.
Figure 1 shows the sample page in action.
The JavaScript proxy class is made of static methods whose name and signature match the prototype of the WCF service endpoints. In addition, and like the ASP.NET AJAX Web services, each JavaScript proxy method supports a bunch of additional parameters - callback functions to handle success and failure of the operation.
That's enough for a quick introduction to WCF services and AJAX from the perspective of a developer who knows enough about ASP.NET AJAX. You're now in the condition to dig out a few of the intricacies specific of WCF. Let's talk, for instance, about the ASP.NET compatibility mode.
When you create a new WCF service for ASP.NET AJAX, the service class is also decorated by default by the AspNetCompatibilityRequirements attribute, which deserves a few words of its own.
While designed to be transport independent, when employed in the context of an ASP.NET AJAX application, WCF services may actually work in a manner that is very similar to ASMX services. By using the AspNetCompatibilityRequirements attribute, you state that you want WCF and ASMX services to work according to the same model. One practical repercussion of this setting is that when a WCF service is activated, the runtime checks declared endpoints and ensures that all of them use the Web HTTP binding model.
The compatibility with ASMX services makes it possible for WCF services to access, for example, the HttpContext object and subsequently other ASP.NET intrinsic objects. The compatibility is required at two levels. First, in the web.config file where you use the following:
Second, developers need to explicitly choose the compatibility mode for a given WCF service by using the service AspNetCompatibilityRequirements attribute.
WCF services also support a zero-configuration mode where you don't need to enter anything in the web.config file in order to publish endpoints. All that you do is summarized in the following schema for the SVC file:
Unfortunately, in Beta 2 of the .NET Framework 3.5 this option just doesn't work as expected. If used, it takes you to an error message about the fact that the binding only supports specification of exactly one authentication scheme in IIS. There's nothing wrong in your IIS configuration. It's only a "classic" bug in Beta 2 that will be fixed in RTM.
AJAX applications require services for building the back end of the system. In ASP.NET, there are just two software technologies to build services - ASMX Web services and WCF services, with the latter being a superclass of the former. AJAX communication, though, should use JSON data strings to move data back and forth. Adjusting the ASP.NET runtime for making ASMX Web services support JSON was a relatively easy task fully accomplished in the ASP.NET AJAX Extensions 1.0 and then in ASP.NET 3.5. Doing the same for the far richer runtime of WCF services took a bit more. That's why only in ASP.NET 3.5 you can build your AJAX back end of services using WCF. In future columns, I'll dig out more and more about the integration between WCF and AJAX.
Granville
HTTP/1.1 200 OK
Cache-Control: private
Content-Type: application/json; charset=utf-8
Server: Microsoft-IIS/7.0
X-AspNet-Version: 2.0.50727
X-Powered-By: ASP.NET
Date: Sat, 10 Nov 2007 23:10:07 GMT
Content-Length: 29
{"d":"10-10-2007 [03:10:07]"} | http://dotnetslackers.com/articles/ajax/json-enabledwcfservicesinaspnet35.aspx | crawl-002 | refinedweb | 1,568 | 57.67 |
Create icon buttons in a grid that connect to applications in a folder
I am trying to create a simple application that contains a group of icons in a grid. These icons would link to applications (exes) I place in a folder.
I am working on the photoviewer example and trying to figure out how I can link the pictures to open a process (an exe), but I'm unsure how to link a QML object from the main and load an exe with QProcess. Ultimately I would like it to dynamically load the exes I am interested in, but currently I am content with hardcoded paths.
I am mainly a C++ developer and I am trying out Qt to see if it is suitable for further use, but it seems pretty overwhelming. I am trying to build my application from an example, but it seems like there are a thousand ways to skin the cat, which kinda impedes my learning, as it sometimes gets confusing with the different file types in different examples, like the qml, moc, and qm files. Any tips on the best way to approach my particular problem?
- SGaist Lifetime Qt Champion
Hi and welcome to devnet,
So you would like to create some sort of launcher app with QtQuick, correct ?
As for the "starting the application" part, you'll be interested by QProcess.
- ambershark Moderators
@dashtag This book may help you learn QML.
As for all the different file types and technologies. You don't need to use all of them. You could use QML for your UIs or you could do it with QtWidgets in C++. I use both for different things. For this project it sounds like QML would be good for you though. It would be very easy to make a nice looking interface with those requirements.
As for linking C++ and QML, it is quite easy. Here is a quick example main.cpp that would load your qml:
#include <QGuiApplication>
#include <QQmlApplicationEngine>

int main(int argc, char **argv)
{
    bool shouldExit = false;

    QGuiApplication app(argc, argv);
    QQmlApplicationEngine engine;

    // flag a failed QML load so we can bail out below
    QObject::connect(&engine, &QQmlApplicationEngine::objectCreated,
                     [&shouldExit](QObject *obj, const QUrl &) {
                         if (!obj)
                             shouldExit = true;
                     });

    // load our main window
    engine.load(QUrl("qrc:///qml/main.qml"));

    // if main window failed to load, exit
    if (shouldExit)
        return 1;

    return app.exec();
}
The QML is in a QRC file, which is just a resource file. If you're familiar with Visual Studio, it is like a .rc file. You can use raw QML on the file system too; you don't need it in a QRC.
If you need more info on connecting objects from QML to C++ let me know and I can show you an example of that as well.
Finally, Qt is a large and very mature library. It has been around for 20 years or so. Actively developed the whole time. It is not something you will learn over night but it is an amazing library that will definitely benefit you and your company. Once I learned it I never looked back. That was 16 years ago. It's worth it! :) I now hate getting contracts or projects where I can't use it. | https://forum.qt.io/topic/79339/create-icons-button-in-a-grid-that-connect-to-applications-in-a-folder | CC-MAIN-2018-30 | refinedweb | 521 | 72.16 |
RFM Analysis with Python
A complete guide on evaluating customer value with Python.
RFM modelling is a marketing analysis technique used to evaluate a customer's value. The RFM model is based on three factors:
- Recency: How recently a customer has made a purchase
- Frequency: How often a customer makes a purchase
- Monetary Value: How much money a customer spends on purchases
An RFM model comes up with numeric values for the three measures above. These values help companies better understand customer potential.
For example
If a customer made a purchase daily from Starbucks in the past year and hasn't bought anything in the last month, they could be moving to a competitor brand. They might have made a switch to The Coffee Bean and Tea Leaf now due to better deals or convenience.
Starbucks can then target these customers and come up with a marketing strategy to win them back.
In this article, I will show you how to build an RFM model with Python. We will use a dataset that contains over 4000 unique customer IDs, and will assign RFM values to each of these customers.
Download the dataset from Kaggle to get started. Make sure you have a Python IDE installed on your device, along with the Pandas library.
Step 1
Read the downloaded dataset with the following lines of code:
import pandas as pd

df = pd.read_csv('data.csv', encoding='unicode_escape')
Now, let's look at the head of the data frame:
df.head()
For this analysis, we will only be using four columns: Quantity, InvoiceDate, UnitPrice, CustomerID.
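Before moving on, it's worth dropping rows that can't be attributed to a customer. This is a hedged sketch, not part of the original walkthrough: copies of this Kaggle dataset typically contain rows with a missing CustomerID and cancelled orders with a negative Quantity, so check your own copy first. The tiny frame below stands in for the real df:

```python
import pandas as pd

# toy stand-in for the frame read from data.csv (same column names)
df = pd.DataFrame({
    'CustomerID': [17850.0, None, 13047.0],
    'Quantity': [6, 2, -1],            # -1 marks a cancellation/return
    'UnitPrice': [2.55, 3.39, 7.65],
    'InvoiceDate': ['12/1/2010 8:26', '12/1/2010 8:28', '12/1/2010 9:00'],
})

df = df.dropna(subset=['CustomerID'])  # keep only rows tied to a customer
df = df[df['Quantity'] > 0]            # drop cancellations/returns

print(len(df))  # 1 row survives in this toy frame
```

On the real data the same two filtering lines run unchanged; they just shrink the frame before the RFM columns are computed.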
Step 2
Let's start by calculating the value of M - monetary value. This is the simplest value to calculate. All we need to do is calculate the total amount spent by each customer.
To do this, we need to use the columns UnitPrice and Quantity. We will multiply these values first, to get the total amount spent by each customer for each transaction.
Here's the code to do this:
df['Total'] = df['Quantity']*df['UnitPrice']
Great!
Let's check the head of the data frame now:
We can see a new column with the total amount spent for each transaction.
Now, we need to find the total amount spent by the same customer throughout the entire dataset. We can do this with the following lines of code:
m = df.groupby('CustomerID')['Total'].sum()
m = pd.DataFrame(m).reset_index()
Looking at the head of our new data frame, we can see this:
Great! We have now successfully calculated the monetary value of each customer in the data frame.
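As a quick sanity check on the monetary values, you can peek at the biggest spenders. A sketch on a toy frame shaped like m (the CustomerID/Total pairs above):

```python
import pandas as pd

# toy stand-in for the m data frame built above
m = pd.DataFrame({'CustomerID': [1, 2, 3],
                  'Total': [120.5, 980.0, 45.2]})

# top spenders first
top = m.sort_values('Total', ascending=False).head(2)
print(top['CustomerID'].tolist())  # [2, 1]
```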
Step 3
Now, let's calculate the frequency. We want to find the number of times each customer has made a purchase.
Let's take a look at the data frame again to see how we can do this:
To find the number of times each customer was seen making a purchase, we need to use the columns CustomerID and InvoiceDate.
We need to calculate the number of unique dates each customer was seen making a purchase.
To do this, run the following lines of code:
# count unique purchase dates per customer, as described above
freq = df.groupby('CustomerID')['InvoiceDate'].nunique()
f = pd.DataFrame(freq).reset_index()
Taking a look at the head of the data frame, you should see this:
Great! We have successfully come up with a quantitative measure of frequency for each customer in the data frame.
Step 4
Finally, we can calculate the recency of each customer in the data frame.
To calculate recency, we need to find the last time the person was seen making a purchase. Was it a year ago? Months ago? Or a few days back?
To find this value, we need to use the CustomerID and InvoiceDate column. We first need to find the latest date each customer was seen making a purchase. Then, we need to assign some quantitative value to this date.
For example, if customer A was seen making a purchase two months ago and customer B was seen making a purchase two years ago, we need to assign a higher recency value to customer A.
To do this, we first need to convert the InvoiceDate column to a datetime object. Run the following lines of code:
df['Date'] = pd.to_datetime(df['InvoiceDate'])
Let's take a look at the head of the data frame again:
Notice that we now have a new 'Date' column.
Now, need to find the most recent date each customer was seen making a purchase.
To do this, we need to assign a rank to all the dates for each CustomerID. The most recent date will be ranked as 1, second most recent date as 2, and so on.
Run the following lines of code:
# rank dates per customer in descending order, so the most recent date gets rank 1
df['rank'] = df.sort_values(['CustomerID','Date']).groupby(['CustomerID'])['Date'].rank(method='min', ascending=False).astype(int)
Taking a look at the head of the data frame, we can see this:
Now we have different ranks based on the time the customer was seen making a purchase. The most recent purchase has a rank of 1.
Let's now filter the data frame and get rid of all the other purchases. We only need to keep the most recent ones:
# .copy() avoids a SettingWithCopyWarning when we add columns below
recent = df[df['rank'] == 1].copy()
Perfect!
Now all we need to do is come up with a quantitative recency value. This means that a person seen one day ago will be given a higher recency value as compared to someone seen one week ago.
To do this, let's just calculate the difference between every date in the data frame and the earliest date. This way, more recent dates will have a higher value.
Run the following lines of code:
# 2010-12-01 08:26:00 is the earliest invoice date in this dataset,
# so more recent purchases get larger values
recent['recency'] = recent['Date'] - pd.to_datetime('2010-12-01 08:26:00')
Now, let's take a look at the head of the data frame:
Notice that we have a new column labelled 'recency,' and the number of days from the oldest date in the dataset has been calculated. A value of 0 days indicates lowest recency.
We can now convert the recency values into numeric. To do this, run the following lines of code:
def recency(recency):
    # keep only the day count from the Timedelta, e.g. "42 days 03:15:00" -> 42
    res = str(recency).split(' ')[0]
    return int(res)

recent['recency'] = recent['recency'].apply(recency)
Finally, notice that the data frame above has many duplicate values for each CustomerID. This is because the breakdown is by product, and the same customers purchased multiple products at the same time.
Let's select only the CustomerID and recency columns and remove duplicates:
recent = recent[['CustomerID', 'recency']]
recent = recent.drop_duplicates()
Taking a look at the head of the data frame, you should see this:
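As an aside, the whole of Step 4 can be computed without the ranking trick: group by customer, take the latest date, and subtract the earliest date in the dataset. This is an equivalent sketch on a toy frame (assuming, as above, a datetime Date column):

```python
import pandas as pd

# toy stand-in: customer 1 bought twice, customer 2 once
df = pd.DataFrame({
    'CustomerID': [1, 1, 2],
    'Date': pd.to_datetime(['2010-12-01 08:26:00',
                            '2010-12-05 08:26:00',
                            '2010-12-03 08:26:00']),
})

last_seen = df.groupby('CustomerID')['Date'].max()   # most recent purchase per customer
recency = (last_seen - df['Date'].min()).dt.days     # days since the earliest date
recent = recency.rename('recency').reset_index()

print(recent['recency'].tolist())  # [4, 2]
```

The result has the same meaning as before: a larger number means a more recent purchase.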
Step 5
We're now done calculating RFM values. We have the results stored in separate data frames, so let's merge them together:
finaldf = f.merge(m,on='CustomerID').merge(recent,on='CustomerID')
Let's now take a look at the head of the final data frame:
That's it!
We have successfully managed to append RFM values to each customer ID in the dataset.
We can come up with customer insights based on these calculations, or build some sort of clustering model to group similar customers together.
If you managed to run the codes above successfully, then try going a step further and normalize these values. RFM values are usually presented on a scale of 1-5, so you can create bins for each of these values and group them together.
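As a starting point for that binning step, pandas' qcut can split each measure into quintile scores from 1 to 5, with 5 as the best score. A sketch on toy monetary values; on the real finaldf you would apply the same call to each of the three columns (adding duplicates='drop' if many customers share a value):

```python
import pandas as pd

# toy stand-in for finaldf's monetary column
finaldf = pd.DataFrame({'Total': [10, 25, 40, 55, 80, 120, 200, 310, 450, 900]})

# quintile score: 1 = lowest spenders, 5 = highest
finaldf['M_score'] = pd.qcut(finaldf['Total'], q=5, labels=[1, 2, 3, 4, 5])

print(finaldf['M_score'].tolist())  # [1, 1, 2, 2, 3, 3, 4, 4, 5, 5]
```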
Thanks for reading, I hope you enjoyed this tutorial. | https://www.natasshaselvaraj.com/rfm-analysis-in-python/ | CC-MAIN-2022-27 | refinedweb | 1,241 | 64.1 |
APACHE 2.x ROADMAP
==================

Last modified at [$Date: 2005-03-14 05:24:22 +0000 (Mon, 14 Mar 2005) $]

WORKS IN PROGRESS
-----------------

  * Source code should follow style guidelines.

    OK, we all agree pretty code is good.  Probably best to clean this up
    by hand immediately upon branching a 2.1 tree.

    Status: Justin volunteers to hand-edit the entire source tree ;)

    Justin says: Recall when the release plan for 2.0 was written:
      Absolute Enforcement of an "Apache Style" for code.
      Watch this slip into 3.0.

    David says: The style guide needs to be reviewed before this can be
      done.  The current file is dated April 20th 1998!

    OtherBill offers: It's survived since '98 because it's well done :-)
      Suggest we simply follow whatever is documented in styleguide.html
      as we branch the next tree.  Really sort of straightforward, if you
      dislike a bit within that doc, bring it up on the dev@httpd list
      prior to the next branch.

    So Bill sums up ... let's get the code cleaned up in CVS head.
    Remember, it just takes cvs diff -b (that is, --ignore-space-change)
    to see the code changes and ignore that cruft.  Get editing Justin :)

  * Replace stat [deferred open] with open/fstat in directory_walk.

    Justin, Ian, OtherBill all interested in this.  Implies setting up
    the apr_file_t member in request_rec, and having all modules use that
    file, and allow the cleanup to close it [if it isn't a shared, cached
    file handle.]

  * The Async Apache Server implemented in terms of APR.
    [Bill Stoddard's pet project.]
    Message-ID: <008301c17d42$9b446970$01000100@sashimi> (dev@apr)

    OtherBill notes that this can proceed in two parts...

    Async accept, setup, and tear-down of the request, e.g. dealing with
    the incoming request headers, prior to dispatching the request to a
    thread for processing.  This doesn't need to wait for a 2.x/3.0 bump.
    Async delegation of the entire request processing chain.  Too many
    handlers use stack storage and presume it is available for the life
    of the request, so a complete async implementation would need to
    happen in a 3.0 release.

    Brian notes that async writes will provide a bigger scalability win
    than async reads for most servers.  We may want to try a hybrid
    sync-read/async-write MPM as a next step.  This should be relatively
    easy to build: start with the current worker or leader/followers
    model, but hand off each response brigade to a "completion thread"
    that multiplexes writes on many connections, so that the worker
    thread doesn't have to wait around for the sendfile to complete.

MAKING APACHE REPOSITORY-AGNOSTIC (or: remove knowledge of the filesystem)

  [ 2002/10/01: discussion in progress on items below; this isn't planned yet ]

  * dav_resource concept for an HTTP resource ("ap_resource")

  * r->filename, r->canonical_filename, r->finfo need to disappear.  All
    users need to use new APIs on the ap_resource object.
    (backwards compat: today, when this occurs with mod_dav and a custom
    backend, the above items refer to the topmost directory mapped by a
    location; e.g. docroot)

    Need to preserve a 'filename'-like string for mime-by-name sorts of
    operations.  But this only needs to be the name itself and not a
    full path.

    Justin: Can we leverage the path info, or do we not trust the user?

    gstein: well, it isn't the "path info", but the actual URI of the
      resource.  And of course we trust the user... that is the resource
      they requested.  dav_resource->uri is the field you want.
      path_info might still exist, but that portion might be related to
      the CGI concept of "path translated" or some other further
      resolution.

      To continue, I would suggest that "path translated" and having
      *any* path info is Badness.  It means that you did not fully
      resolve a resource for the given URI.  The "abs_path" in a URI
      identifies a resource, and that should get fully resolved.
      None of this "resolve to <here> and then we have a magical second
      resolution (inside the CGI script)" or somesuch.

    Justin: Well, let's consider mod_mbox for a second.  It is sort of a
      virtual filesystem in its own right - as it introduces its own
      notion of a URI space, but it is intrinsically tied to the
      filesystem to do the lookups.  But, for the portion that isn't
      resolved on the file system, it has its own addressing scheme.
      Do we need the ability to layer resolution?

  * The translate_name hook goes away

    Wrowe altogether disagrees.  translate_name today even operates on
    URIs ... this mechanism needs to be preserved.

  * The doc for map_to_storage is totally opaque to me.  It has
    something to do with filesystems, but it also talks about security
    and per_dir_config and other stuff.  I presume something needs to
    happen there -- at least better doc.

    Wrowe agrees and will write it up.

  * The directory_walk concept disappears.  All configuration is tagged
    to Locations.  The "mod_filesystem" module might have some internal
    concept of the same config appearing in multiple places, but that is
    handled internally rather than by Apache core.

    Wrowe suggests this is wrong, instead it's private to filesystem
    requests, and is already invoked from map_to_storage, not the core
    handler.  <Directory > and <Files > blocks are preserved as-is, but
    <Directory > sections become specific to the filesystem handler
    alone.  Because alternate filesystem schemes could be loaded, this
    should be exposed, from the core, for other file-based stores to
    share.  Consider an archive store where the layers become
    <Directory path> -> <Archive store> -> <File name>

    Justin: How do we map Directory entries to Locations?

  * The "Location tree" is an in-memory representation of the URL
    namespace.  Nodes of the tree have configuration specific to that
    location in the namespace.
    Something like:

        typedef struct {
            const char *name;   /* name of this node relative to parent */
            struct ap_conf_vector_t *locn_config;
            apr_hash_t *children;        /* NULL if no child configs */
        } ap_locn_node;

    The following config:

        <Location /server-status>
            SetHandler server-status
            Order deny,allow
            Deny from all
            Allow from 127.0.0.1
        </Location>

    Creates a node with name=="server_status", and the node is a child
    of the "/" node.  (hmm. node->name is redundant with the hash key;
    maybe drop node->name)

    In the config vector, mod_access has stored its Order, Deny, and
    Allow configs.  mod_core has stored the SetHandler.

    During the Location walk, we merge the config vectors normally.

    Note that an Alias simply associates a filesystem path (in
    mod_filesystem) with that Location in the tree.

    Merging continues with child locations, but a merge is never done
    through filesystem locations.  Config on a specific subdir needs to
    be mapped back into the corresponding point in the Location tree for
    proper merging.

  * Config is parsed into a tree, as we did for the 2.0 timeframe, but
    that tree is just a representation of the config (for multiple runs
    and for in-memory manipulation and usage).  It is unrelated to the
    "Location tree".

  * Calls to apr_file_io functions generally need to be replaced with
    operations against the ap_resource.  For example, rather than
    calling apr_dir_open/read/close(), a caller uses
    resource->repos->get_children() or somesuch.

    Note that things like mod_dir, mod_autoindex, and mod_negotation
    need to be converted to use these mechanisms so that their functions
    will work on logical repositories rather than just filesystems.

  * How do we handle CGI scripts?  Especially when the resource may not
    be backed by a file?  Ideally, we should be able to come up with
    some mechanism to allow CGIs to work in a repository-independent
    manner.

    - Writing the virtual data as a file and then executing it?
    - Can a shell be executed in a streamy manner?  (Portably?)
    - Have an 'execute_resource' hook/func that allows the repository to
      choose its manner - be it exec() or whatever.
    - Won't this approach lead to duplication of code?  Helper fns?

    gstein: PHP, Perl, and Python scripts are nominally executed by a
      filter inserted by mod_php/perl/python.  I'd suggest that
      shell/batch scripts are similar.  But to ask further: what if it
      is an executable *program* rather than just a script?  Do we yank
      that out of the repository, drop it onto the filesystem, and run
      it?  eeewwwww...  I'll vote -0.9 for CGIs as a filter.  Keep 'em
      handlers.

    Justin: So, do we give up executing CGIs from virtual repositories?
      That seems like a sad tradeoff to make.  I'd like to have my CGI
      scripts under DAV (SVN) control.

  * How do we handle overlaying of Location and Directory entries?
    Right now, we have a problem when /cgi-bin/ is ScriptAlias'd and
    mod_dav has control over /.  Some people believe that /cgi-bin/
    shouldn't be under DAV control, while others do believe it should
    be.  What's the right strategy?
Cool Controls Library (v1.1)
I started to write CoolControls in October last year. The inspiration for me was Corel 8.0 and its great, well-designed UI, especially the dialog controls. If you have already seen Corel then you exactly know what I mean. If no, download the demo project and take a brief look at it now! One picture is often worth more than thousands of words, isn't it? Although the idea is borrowed from Corel, I wrote the entire code myself and I think that my implementation is faster and more accurate. Initially, I wrote support only for drop-down combo-boxes and edit controls and that early version required subclassing of each control individually. In fact this took me only two days, but I wasn't satisfied with that solution. So, I hit upon an idea to make a hook which could subclass all controls automatically. I wrote the code quickly because I've already had some experience with Windows hooks. It was working quite fine but I still had support only for basic controls, nothing more. Well, I realized that I've got to handle a variety of controls and its styles. It seemed to be a horrible work, but I didn't get scared and I was writing support for the rest of the controls. At last, I had to test the code under Windows 95/98 and NT including different system metrics and color sets. It took me a month to complete the code, pretty long time, but I hope that the code is good and doesn't contain too many bugs. That's a whole story.
What's new in version 1.1?
- Fixed bug with LVS_EX_HEADERDRAGDROP list controls (thanks to Vlad Bychkoff for pointing this out)
- UNICODE support added
- WH_CALLWNDPROCRET is no longer supported due to some weird problems with that type of hook
- Added support for multiple UI threads - (thanks to Mike Walter for the code)
- Class name has been changed to CCoolControlsManager (my own idea)
- Added support for SysTabControl32
How does it work?
Yeah, this is a very good question but the answer isn't easy. Writing the code is usually easier than writing good documentation for it :-). Nevertheless, I'll try to explain that...
Generally speaking the effect is done by subclassing a control and painting on the non-client area (in most cases) when the control needs to be drawn. The state of the control depends on the keyboard focus and mouse cursor position. The control is drawn with lighter borders (never totally flat) when it has no focus or the mouse is outside of the window. Otherwise, the control is drawn in a normal way (without any changes). In more detail, the library consists of two parts. The first is the one and only, global CCoolControlsManager object. The most important part of this class is a map of all subclassed controls, implemented as CMapPtrToPtr. The ControlsManager also provides a way to add a control manually by calling its AddControl() member function.
The second part is a set of classes (not CWnd-derived) which represent each control individually. All classes derive from CCMControl, which holds important control information and is responsible for drawing the control border. CCMControl derives from CCMCore, which is a virtual class that provides a skeleton for all of the rest. Each CCMControl-derived class typically implements its own DrawControl() function, which is the main drawing routine. It was necessary to respect all possible control styles, and hence it took a relatively long time to write this code and check all possible situations in different system configurations.
The first thing we have to do is install an app-wide hook of the WH_CALLWNDPROCRET type. Further processing depends on the m_bDialogOnly flag. If this flag is set to TRUE, we intercept WM_INITDIALOG and make a call to the ControlsManager's Install() method, which gets a handle to the dialog as a parameter. Next, this function iterates through all dialog controls and calls AddControl() for each of them. This approach allows subclassing only controls that are inserted into some dialog. If m_bDialogOnly is set to FALSE, WM_CREATE is intercepted instead of WM_INITDIALOG, so we are able to subclass all controls including those on toolbars and other non-dialog windows.
The AddControl() member gets a handle to the control, retrieves its window class name and then tries to classify the control into one of the currently supported groups.
Currently supported controls are:
- Pushbuttons (except those with BS_OWNERDRAW style)
- Checkboxes
- Radiobuttons
- Scrollbar controls
- Edit boxes
- List boxes
- List views
- Tree views
- Spin buttons
- Slider controls
- Date/time pickers
- Combo boxes (all styles)
- Header controls
- Hotkey controls
- IPAddress controls
- Toolbars (without TBSTYLE_FLAT)
- Month calendars
- Extended combo boxes
- Rich edit controls
- Tab controls
When the window class name matches one of the supported items, an object of the appropriate type is created, the control is subclassed and the object is added to the map. The ControlsManager checks periodically (using a timer with a 100 ms period) whether the mouse cursor is over any control in the map. If so, the state of that control is changed accordingly. In addition, we have to intercept some of the messages that may cause redrawing of the control, e.g. WM_KILLFOCUS, WM_SETFOCUS, WM_ENABLE etc., and the border of the control is redrawn after calling the original window procedure. The control is removed from the map when it receives WM_NCDESTROY, the last message that the system sends to a window.
My code is not strongly MFC-based because I've used only the CMapPtrToPtr class from MFC. Initially, I tried to use the 'map' class from the STL, but the resulting executable was bigger and slower than the one built using CMapPtrToPtr.
For further information look at the CoolControlsManager.cpp and .h files
How to use it?
Single-threaded applications:
This module is extremely easy to use; you have to add only two lines of code to your CWinApp-derived class implementation file. The first is a typical #include "CoolControlsManager.h" statement, the second is a call to the ControlsManager's InstallHook() method. The best place for this is the InitInstance() method of your CWinApp-derived class.
...
#include "CoolControlsManager.h"
...
BOOL CCoolControlsApp::InitInstance()
{
// Install the CoolControls
GetCtrlManager().InstallHook();
// Remaining stuff
}
Multithreaded applications:
The steps are the same as for the single-threaded case, but you must add a call to InstallHook() for any additional thread you're going to create. You can place this code in the InitInstance() of your CWinThread-derived class.
...
#include "CoolControlsManager.h"
...
BOOL CNewThread::InitInstance()
{
// Install the CoolControls for this thread
GetCtrlManager().InstallHook();
// Remaining stuff
}
BOOL CNewThread::ExitInstance()
{
// Uninstall the CoolControls for this thread
GetCtrlManager().UninstallHook();
// Remaining stuff
}
Of course don't forget to add CoolControlsManager.cpp to your project! That's all. The code can be compiled using VC5 as well as VC6 and has been tested under Win98 and WinNT 4.0.
Standard Disclaimer
These files may be redistributed unmodified by any means, provided they are not sold for profit without the author's written consent, and provided that this notice and the author's name and all copyright notices remain intact. This code may be used in compiled form in any way you wish with the following conditions:
If the source code is used in any commercial product, then a statement along the lines of "Portions Copyright (C) 1999 Bogdan Ledwig" must be included.
Download demo project -62 KB
Date Last Updated: May 17, 1999
Amazing!Posted by Legacy on 11/17/2003 12:00am
Originally posted by: Victor N
Looking for this thing for a long time...
Made since 1999, still useful in 2003.
Error, GetWindowText returned empty string in richedit on Win2000Posted by Legacy on 04/15/2002 12:00am
Originally posted by: Paradoxx
How to handle my own drawn button that conflict with yours?Posted by Legacy on 12/02/2001 12:00am
Originally posted by: devilsword
I derived my own drawn button from CButton.
When I used GetCtrlManager().InstallHook(),
my own drawn button could not work as it did before. How do I
disable your button handling without
affecting the other controls?
How can I add support for ownerdrawn push buttons?Posted by Legacy on 10/23/2001 12:00am
Originally posted by: Joerg Hoffmann
Yes, you did really good work.
The only missing thing is support for push buttons with the BS_OWNERDRAW style.
Any suggestions how or where I could add this?
THX
Does it need the virtual destructor for CCMControl?Posted by Legacy on 07/30/2001 12:00am
Originally posted by: Hyung-Wook Kim
It's a really cool job!
But studying this work, I wonder why there isn't a virtual destructor for CCMControl... The classes inherited from CCMControl are deleted in RemoveControl(), but through a pointer of type CCMControl. So they are destroyed as objects of CCMControl type, right? Is that safe with respect to memory leaks?
Truly amazingPosted by Legacy on 06/05/2001 12:00am
Originally posted by: Diarrhio
Great work, man. Your work really made a diff in my app! Thanks so much for your diligence!
D
Flat controlsPosted by Legacy on 05/21/2001 12:00am
Originally posted by: Iulian Costache
I want to make an application that has flat controls all the time (like they are before you move the mouse inside one). I want them to be flat even when you push a Scrollbar or a Thumb.
Can you help me?
Thnx
Possible Bug in Horizontal Scroll BarPosted by Legacy on 07/05/1999 12:00am
Originally posted by: Cris Tagle
Yes, indeed it is an excellent set of codes. Thank you very much for sharing such knowledge. However, it seems to have a minor glitch in displaying horizontal scroll bars. When the control is not active, it displays a combination of both the original and the new control. The thumb doesn't seem to appear in a single location thus producing a duplicate image. I'll try to trace the problem but since this is your work, you maybe able to track it faster than I could. And if you want a snap shot of the said control, please tell me and I'll email you the image.
Thanks again.
Error processing Tab ControlsPosted by Legacy on 05/20/1999 12:00am
Originally posted by: Simon Brown
More than just thanksPosted by Legacy on 05/13/1999 12:00am
Originally posted by: Ian Duff
As a programmer of more years than I care to remember, I would just like to add my congratulations to the author. He has done an enormous amount of work, and quite generously offers his source code to us in the programming community gratis. Well done Bogdan! and thank you very much for such a beautiful piece of work. I personally think that this is worthy of Code Guru status.Reply | http://www.codeguru.com/cpp/controls/controls/coolcontrols/article.php/c2157/Cool-Controls-Library-v11.htm | CC-MAIN-2014-10 | refinedweb | 1,790 | 63.09 |
In any modern web app, you probably want to have really cool and simple URLs like how WordPress does for your permalinks. E.g., the permalink for this posting is
MUCH better than the typical MSDN type urls:
Do you think when I complete this posting and publish it, WordPress will ACTUALLY PUT A FILE at?
NO you dummy!!
Anytime you see REALLY simple urls like most likely what is happening there is the web application is using something called REQUEST MAPPING. You do REQUEST MAPPING using what is called a FRONT CONTROLLER.
The FRONT CONTROLLER works to INTERPRET requests for specific URI’s AS requests to OTHER pages, engines, and so on.
Give this a watch.
First, NOTICE how simple that uri is? In case you didn’t follow the link through, its.
So CakePHP uses a front controller to do "request mapping!" WATCH THE VIDEO to get an idea, dude.
ANYWAY, how do you create a really simple front-controller and do request mapping from a Java servlet?
Just create a regular servlet, then make the web.xml entry for it something like this:
<servlet>
  <servlet-name>member</servlet-name>
  <servlet-class>member</servlet-class>
</servlet>

<servlet-mapping>
  <servlet-name>member</servlet-name>
  <url-pattern>/member/*</url-pattern>
</servlet-mapping>
NOTICE the /member/* for the <url-pattern>? THAT means that ANY requests that come in matching that pattern will AUTOMATICALLY be mapped down into the "member" servlet.
So the “member” servlet class might look something like this:
public class member extends HttpServlet {

  protected void doGet( HttpServletRequest request, HttpServletResponse response )
      throws ServletException, IOException {

    String requestURI = request.getRequestURI();

    // if the user hit
    // then the REQUEST URI looks like
    //   /member/BOB

    // All I'm going to do now is get everything after the last slash,
    // and that is what will tell me which member profile is desired:
    String desiredUserProfile = requestURI.substring( requestURI.lastIndexOf("/") + 1 );

    // work with desiredUserProfile to produce page output, whatever.
  }
}
Get the idea?
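By the way — here's a hypothetical extra helper, not from the original post. With the /member/* mapping above, the Servlet API also hands you just the wildcard part via request.getPathInfo() (e.g. "/BOB"), so you can split nested paths like /member/BOB/photos without slicing the full request URI yourself:

```java
public class PathSegments {

    // Split a getPathInfo() value like "/BOB/photos" into its segments.
    // getPathInfo() returns null when nothing follows the servlet path.
    static String[] segments(String pathInfo) {
        if (pathInfo == null || pathInfo.equals("/")) {
            return new String[0];
        }
        // drop the leading slash, then split on the remaining slashes
        return pathInfo.substring(1).split("/");
    }
}
```

Inside doGet() you would call segments(request.getPathInfo()) and route on the first element.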
{-| Simple utilities

    The \"Example\" section at the bottom of this module contains an
    extended example of how to interact with the @sdl@ library using the
    @mvc@ library
-}

module MVC.Prelude (
    -- * Controllers
      producer
    , stdinLines
    , inLines
    , inRead
    , tick

    -- * Views
    , consumer
    , stdoutLines
    , outLines
    , outShow

    -- * Handles
    , inHandle
    , outHandle

    -- * Example
    -- $example
    ) where

import Control.Applicative (pure, (<*))
import Control.Concurrent.Async (withAsync)
import Control.Concurrent (threadDelay)
import Data.IORef (newIORef, readIORef, writeIORef)
import MVC
import Pipes.Internal (Proxy(..), closed)

import qualified Pipes.Prelude as Pipes
import qualified System.IO as IO

{-| Create a `Controller` from a `Producer`, using the given `Buffer`

    If you're not sure what `Buffer` to use, try `Single`
-}
producer :: Buffer a -> Producer a IO () -> Managed (Controller a)
producer buffer prod = managed $ \k -> do
    (o, i, seal) <- spawn' buffer
    let io = do
            runEffect $ prod >-> toOutput o
            atomically seal
    withAsync io $ \_ -> k (asInput i) <* atomically seal
{-# INLINABLE producer #-}

-- | Read lines from standard input
stdinLines :: Managed (Controller String)
stdinLines = producer Single Pipes.stdinLn
{-# INLINABLE stdinLines #-}

-- | Read lines from a file
inLines :: FilePath -> Managed (Controller String)
inLines filePath = do
    handle <- inHandle filePath
    producer Single (Pipes.fromHandle handle)
{-# INLINABLE inLines #-}

-- | 'read' values from a file, one value per line, skipping failed parses
inRead :: Read a => FilePath -> Managed (Controller a)
inRead filePath = fmap (keeps parsed) (inLines filePath)
  where
    parsed k str = case reads str of
        [(a, "")] -> Constant (getConstant (k a))
        _         -> pure str
{-# INLINABLE inRead #-}

-- | Emit empty values spaced by a delay in seconds
tick :: Double -> Managed (Controller ())
tick n = producer Single $ lift (threadDelay (truncate (n * 1000000))) >~ cat
{-# INLINABLE tick #-}

-- | Create a `View` from a `Consumer`
consumer :: Consumer a IO () -> Managed (View a)
consumer cons0 = managed $ \k -> do
    ref <- newIORef cons0
    k $ asSink $ \a -> do
        cons <- readIORef ref
        let go cons_ = case cons_ of
                Request () fa -> writeIORef ref (fa a)
                Respond v  _  -> closed v
                M          m  -> m >>= go
                Pure       r  -> writeIORef ref (return r)
        go cons
{-# INLINABLE consumer #-}

-- | Write lines to standard output
stdoutLines :: View String
stdoutLines = asSink putStrLn
{-# INLINABLE stdoutLines #-}

-- | Write lines to a file
outLines :: FilePath -> Managed (View String)
outLines filePath = do
    handle <- outHandle filePath
    return (asSink (IO.hPutStrLn handle))
{-# INLINABLE outLines #-}

-- | 'show' values to a file, one value per line
outShow :: Show a => FilePath -> Managed (View a)
outShow filePath = fmap (contramap show) (outLines filePath)
{- outShow filePath = do
    handle <- outHandle filePath
    return (asSink (IO.hPrint handle))
-}
{-# INLINABLE outShow #-}

-- | Read from a `FilePath` using a `Managed` `IO.Handle`
inHandle :: FilePath -> Managed IO.Handle
inHandle filePath = managed (IO.withFile filePath IO.ReadMode)
{-# INLINABLE inHandle #-}

-- | Write to a `FilePath` using a `Managed` `IO.Handle`
outHandle :: FilePath -> Managed IO.Handle
outHandle filePath = managed (IO.withFile filePath IO.WriteMode)
{-# INLINABLE outHandle #-}

{- $example
    The following example distils a @sdl@-based program into pure and
    impure components.  This program will draw a white rectangle between
    every two mouse clicks.

    The first half of the program contains all the concurrent and impure
    logic.  The `View` and `Controller` must be `Managed` together since
    they both share the same initialization logic:

> import Control.Monad (join)
> import Graphics.UI.SDL as SDL
> import Lens.Family.Stock (_Left, _Right)  -- from `lens-family-core`
> import MVC
> import MVC.Prelude
> import qualified Pipes.Prelude as Pipes
>
> data Done = Done deriving (Eq, Show)
>
> sdl :: Managed (View (Either Rect Done), Controller Event)
> sdl = join $ managed $ \k -> withInit [InitVideo, InitEventthread] $ do
>     surface <- setVideoMode 640 480 32 [SWSurface]
>     white   <- mapRGB (surfaceGetPixelFormat surface) 255 255 255
>
>     let done :: View Done
>         done = asSink (\Done -> SDL.quit)
>
>         drawRect :: View Rect
>         drawRect = asSink $ \rect -> do
>             _ <- fillRect surface (Just rect) white
>             SDL.flip surface
>
>         totalOut :: View (Either Rect Done)
>         totalOut = handles _Left drawRect <> handles _Right done
>
>     k $ do
>         totalIn <- producer Single (lift waitEvent >~ cat)
>         return (totalOut, totalIn)

    Note the `Control.Monad.join` surrounding the `managed` block.  This
    is because the type before `Control.Monad.join` is:

> Managed (Managed (View (Either Rect Done), Controller Event))

    More generally, note that `Managed` is a `Monad`, so you can use @do@
    notation to combine multiple `Managed` resources into a single
    `Managed` resource.

    The second half of the program contains the pure logic.

> pipe :: Monad m => Pipe Event (Either Rect Done) m ()
> pipe = do
>     Pipes.takeWhile (/= Quit) >-> (click >~ rectangle >~ Pipes.map Left)
>     yield (Right Done)
>
> rectangle :: Monad m => Consumer' (Int, Int) m Rect
> rectangle = do
>     (x1, y1) <- await
>     (x2, y2) <- await
>     let x = min x1 x2
>         y = min y1 y2
>         w = abs (x1 - x2)
>         h = abs (y1 - y2)
>     return (Rect x y w h)
>
> click :: Monad m => Consumer' Event m (Int, Int)
> click = do
>     e <- await
>     case e of
>         MouseButtonDown x y ButtonLeft ->
>             return (fromIntegral x, fromIntegral y)
>         _ -> click
>
> main :: IO ()
> main = runMVC () (asPipe pipe) sdl

    Run the program to verify that clicks create rectangles.

    The more logic you move into the pure core the more you can exercise
    your program purely, either manually:

>>> let leftClick (x, y) = MouseButtonDown x y ButtonLeft
>>> Pipes.toList (each [leftClick (10, 10), leftClick (15, 16), Quit] >-> pipe)
[Left (Rect {rectX = 10, rectY = 10, rectW = 5, rectH = 6}),Right Done]

    ... or automatically using property-based testing (such as
    @QuickCheck@):

>>> import Test.QuickCheck
>>> quickCheck $ \xs -> length (Pipes.toList (each (map leftClick xs) >-> pipe)) == length xs `div` 2 +++ OK, passed 100 tests.

    Equally important, you can formally prove properties about your model
    using equational reasoning because the model is `IO`-free and
    concurrency-free.
-}
#include "subsystems/sensors/baro.h"
#include "generated/airframe.h"
#include "subsystems/abi.h"
#include "led.h"
threshold >0 && <1023
Definition at line 37 of file baro_board.c.
Referenced by baro_board_calibrate(), and baro_periodic().
scale factor to convert raw ADC measurement to pressure in Pascal.
Sensor Sensitivity -> SS = 0.045 mV / Pa
Sensor Gain -> G = 94.25
Sensitivity -> S = SS*G = 4.24125 mV / Pa
10 bit ADC -> A = 3.3 V / 1024 = 3.223 mV / LSB
Total Sensitivity -> SENS = A / S = 0.759837 Pa / LSB
For the real pressure you also need to take into account the (variable) offset
supply voltage: Vs = 5 V
real sensor sensitivity: Vout = Vs * (0.009 P - 0.095)
voltage variable offset: Voff(DAC) = Vs / 69.23 + (DAC * 3.3 / 1024) / 21.77
ADC voltage at init: Vadc = 3.3 * BARO_THRESHOLD / 1024 = Vout - Voff
=> Inverting these formulas can give the 'real' pressure
since we don't care that much in this case, we can take a fixed offset of 101325 Pa
Definition at line 60 of file baro_board.c.
Referenced by baro_periodic().

Definition at line 65 of file baro_board.c.
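As a quick sanity check of the scale-factor arithmetic documented above (the numbers are copied from this documentation; the variable names are just for illustration):

```python
SS = 0.045             # sensor sensitivity, mV / Pa
G = 94.25              # sensor gain
S = SS * G             # total sensitivity: 4.24125 mV / Pa
A = 3.3 / 1024 * 1000  # 10-bit ADC step in mV: ~3.223 mV / LSB
SENS = A / S           # scale factor: ~0.759837 Pa per LSB
```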
References BaroBoard::absolute, adc_buf_channel(), ADC_CHANNEL_BARO, baro_board, BB_UNINITIALIZED, BaroBoard::buf, DACSet(), DEFAULT_AV_NB_SAMPLE, LED_OFF, BaroBoard::offset, BaroBoard::status, and BaroBoard::value_filtered.
Definition at line 80 of file baro_board.c.
References BaroBoard::absolute, adc_buf::av_nb_sample, baro_board, baro_board_calibrate(), BARO_BOARD_SENDER_ID, BB_UNINITIALIZED, BOOZ_ANALOG_BARO_THRESHOLD, BOOZ_BARO_SENS, BaroBoard::buf, get_sys_time_usec(), BaroBoard::status, adc_buf::sum, and BaroBoard::value_filtered.
Definition at line 63 of file baro_board.c.
Referenced by baro_board_calibrate(), baro_init(), baro_periodic(), and lisa_l_baro_event(). | http://docs.paparazziuav.org/latest/booz_2baro__board_8c.html | CC-MAIN-2020-24 | refinedweb | 249 | 55.2 |
I want to use fabric:

def readsn():
    with open(hn) as f:
        while True:
            line = f.readline()
            if not line:
                break
            desthost = line.strip().lstrip().rstrip()
            env.host_string = desthost
            run('cp %s %s' % (path, path + time.strftime(r'%Y%m%d%H%M%S', time.localtime())))
            run(change_conf(path, old, new))

def change_conf(path, old, new):
    f = fileinput.input(path, backup='.bak', inplace=True)
    for line in f:
        line = line.rstrip()
        match = re.match(r, line)
        if match:
            print line.replace(old, new)
        print line
    f.close()
TypeError: cannot concatenate 'str' and 'NoneType' objects
run() expects a string with a command. It can run only programs/scripts which are on the remote server - it can't run your function.

BTW: right now Python first executes your function, which returns None, and then run() uses this result as the command to execute on the server.

If you have Linux on the remote server then you could use the sed command, i.e.:

sed "s/old_text/new_text/g" old_file > new_file

Or you have to copy your script to the remote server and then run it.

You can also download the file from the server (get()), change it locally (using your function) and send it back to the server (put()).
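Following the sed suggestion, one hedged way to wire it into fabric is to build the sed command string locally and hand it to run(). The helper below is my own sketch, not from the original post:

```python
def sed_command(path, old, new):
    # Build a sed invocation that replaces `old` with `new` in-place,
    # keeping a .bak backup of the original file (GNU sed syntax).
    # Caveat: `old` and `new` must not contain unescaped slashes or quotes.
    return "sed -i.bak 's/%s/%s/g' %s" % (old, new, path)

# In the fabric task you would then call:
#   run(sed_command(path, old, new))
```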
The java.io.BufferedReader and java.io.BufferedWriter classes buffer character input and output. The text is moved to the underlying output stream or other target only when the buffer fills up or when the writer is explicitly flushed, which can make writes much faster than they would be otherwise.
In order to create a BufferedReader, one of the following two constructors can be used.
BufferedReader(Reader in)
BufferedReader(Reader in, int bufferSize)
The first argument in is a Reader object which is the underlying character input stream from which data will be read. If the buffer size is not set, the default size of 8192 characters is used. Similarly, to create a BufferedWriter, one of the following two constructors is used.
BufferedWriter(Writer out)
BUfferedWriter(Writer out, int bufferSize}
The first argument out isa Writer object which is the underlying character output stream to which buffered data is written. If buffer size is not set, the default size of 8192 characters is used. The BufferedReader and BufferedWriter classes have the usual methods associated with Reader and Writer classes like read () , wri te (), close () etc. In addition, they also provide the following methods.
• String readLine () : The readLine () method of the BufferedReader class reads a single line of text and returns it as a string. The return string is null when the operation attempts to read past the end of the file (EOF).
• void newline () .:.
import java.io.*;
public class BufferedReaderWriter
{
public static void main(String[] args)
{
try
{
BufferedReader br = new BufferedReader(new FileReader("java.txt"));
BufferedWriter bw = new BufferedWriter(new FileWriter("java1.txt"));
int ch;
while ((ch = br.read()) != -1)
{
if (Character.isLowerCase((char) ch))
bw.write(Character.toUpperCase((char) ch));
else
bw.write((char) ch);
}
br.close();
bw.close();
}
catch(Exception | http://ecomputernotes.com/java/stream/bufferedreaderwriter | CC-MAIN-2018-51 | refinedweb | 280 | 57.98 |
Java is a platform-independent and object oriented programming language. This language was initially developed by Sun Microsystems. Since its release in 1995, Java has undergone several revisions. It is easy to learn and is presently one of the world’s most popular programming languages. Java programs are compiled into platform-independent byte code. This code is interpreted by JVM(Java Virtual Machine) of the platform its run on. The language is robust and incorporates the state-of-the-art security features. It is used to develop web applications as well as standalone applications. Java programs have automatic memory management.
In this beginner’s level tutorial, we take you through how to format a Date object in the Java programming language. We assume that you are familiar with the basics of programming. If you’re new to Java programming, take this beginners course to get started.
Date Class in Java
The Date class in Java is contained in the java.util package. The current date and time are encapsulated in Date class. This class has two constructors. The first constructor assigns the object the system date and time. The syntax is as follows
Date()
The second constructor accepts as a parameter the number of milliseconds that have elapsed since the date January 1, 1970. The syntax for this constructor is as follows:
Date(long millisec)
How to Format Dates in Java
In Java, the SimpleDateFormat is usually used to format a date. This class is subclass of DateFormat. Its format method converts a date to a string. On the other hand, its parse method converts a string to a date.
Example 1: Program to Get the Current Date and Time
import java.util.Date; public class Example{ public static void main(String args[]) { Date date1 = new Date(); System.out.println(date1.toString()); } }
The import statement imports the classes from the date package. Once done, you can use the library’s classes in your program. The name of the class used in this program is user defined and named Example. The keyword public is the access specifier that indicates everybody or any external class can instantiate this class. The keyword static is used to indicate there is only one instance of this class. In this program, we create a date object. The date object contains the current date and time. The toString() converts the date object into a string. This is printed on the screen. You can learn how to write your own Java programs with this course.
Example 2: Program to Format Date using SimpleDateFormat
SimpleDateFormat is a class responsible to format and parse dates. Let’s take a look at the program below to understand this better:
import java.util.*; import java.text.*; public class Example2 { public static void main(String args[]) { Date date1 = new Date( ); SimpleDateFormat x = new SimpleDateFormat ("E yyyy.MM.dd 'at' hh:mm:ss a zzz"); System.out.println("Today's Date: " + x.format(date1)); } }
The output of the above program is Mon 2014.05.26 at 13:36:21 PM PDT . The line with SimpleDateFormat specifies how we want the date to be shown. The ‘E’, ‘yyyy.MM.dd’ are the codes indicating the specific format we want. Check out the table below to see the different format codes available.
Simple DateFormat format codes
This is the list of most commonly used format codes with dates in Java. You can use these format codes in your Java programs to convert the date to the desired format.
Character Description Example
G Era designator AD
y Year in four digits 2014
M Month in year May or 05
D Day in month 26
H Hour in A.M./P.M. (1~12) 12
H Hour in day (0~23) 22
m Minute in hour 30
s Second in minute 55
S Millisecond 234
E Day in week Monday
D Day in year 360
F Day of week in month 2 (second Wed. in July)
w Week in year 47
W Week in month 3
a A.M./P.M. marker PM
k Hour in day (1~24) 21
K Hour in A.M./P.M. (0~11) 10
z Time zone Eastern Standard Time
‘ Escape for text Delimiter
” Single quote `
If you’d like to understand more about the date format in Java, we suggest that you take this course on basics of Java Programming.
Example 3 : To Format and Print a Date using the printf Function
import java.util.Date; public class Example3 { public static void main(String args[]) { Date date1 = new Date(); String new_str = String.format("Current Date/Time : %tc", date1 ); System.out.printf(new_str); } }
The string.format() function converts date1 from date format to string format. Then the printf() function prints the string passed to it as parameter to the output screen. The output of the above program is
Current Date/Time : Mon May 26 13:49:15 MST 2014
Advanced Date and Time Conversion Code
Like we mentioned earlier, Java is a pretty flexible language and gives you many ways to customize what you want. The list we gave above is of the most commonly used date-time format codes, but does not provide a complete range of possible formats for dates. Yes, there are more. Check out the table below for the more advanced date formatting codes.
Example 4: Program to Get the Complete Date and Time
import java.util.*; import java.text.*; public class Example4 { public static void main(String args[]) { Date date1 = new Date( ); SimpleDateFormat x = new SimpleDateFormat ("c"); System.out.println("Current Date: " + x.format(date1)); } }
The output of this program will be Mon May 26 21:24:34 CDT 2014. That is, it will display the current date and time, along with time zone.
We’ve tried to give you a comprehensive view of how to format date in Java. Do try out these programs on your own to get a feel of it. Once you’re ready to move on to the next level, you can take this advanced course on Java. | https://blog.udemy.com/java-date-format/ | CC-MAIN-2017-09 | refinedweb | 1,006 | 66.33 |
.
We are going to install the FANN library on Ubuntu and install the Python binding. Get and unzip the library:
wget sudo apt-get install unzip unzip fann-2.1.0beta.zip
Configure, make and install the library:
cd fann-2.1.0/ sudo apt-get install gcc make ./configure make sudo make install
Install the Python bindings:
cd python/ sudo apt-get install g++ python-dev swig sudo python setup.py install
The Python files are now located in a build directory. Copy them to a place where you can use them, e.g. your home directory:
cd build/lib.linux-i686-2.6/pyfann/ cp libfann.py ~ cp _libfann.so ~
And finally test that Python can now work with the library, start up Python and type:
import libfann print dir(libfann)
This should print out all the functions of the library.
johnJune 14, 2011 at 7:31pm
Really useful and straightforward. Thanks for sharing!
johnJune 16, 2011 at 12:48pm
Just two notes, in case it can be useful to whoever reads this “howto”.
In case you get troubles installing this binding on a x86-64 system, you can get help in this link:
And one question: wouldn’t be a better choice something like /usr/lib/ in order to place the libraries in order to be accessible from whatever dir we’re working on?
free_dougJune 4, 2012 at 1:55am
the best choice would be, instead of copying libfann.py file, to just write in your project:
from pyfann import libfann
and it works! 🙂
PastafarianistMarch 1, 2013 at 11:01pm
Thanks a lot for the tutorial, worked perfectly for me 🙂
It seems that there is a new version of FANN out there, namely 2.2.0, can I try to install it in the same manner? And if yes, do I have to uninstall FANN 2.1.0beta first, and how?
MatthewSeptember 21, 2015 at 4:10am
birdmw@birdmw-thinkpad:~/Desktop/final_project/fann-2.1.0/python$ sudo python setup.py install
Running SWIG before: swig -c++ -python pyfann/pyfann.i
running install
running build
running build_py
copying pyfann/libfann.py -> build/lib.linux-x86_64-2.7/pyfann
running build_ext
building ‘pyfann._libfann’ extension
x86_64-linux-gnu-gcc -pthread -fno-strict-aliasing -DNDEBUG -g -fwrapv -O2 -Wall -Wstrict-prototypes -fPIC -DSWIG_COMPILE -I../src/include -I/usr/include/python2.7 -c pyfann/pyfann_wrap.cxx -o build/temp.linux-x86_64-2.7/pyfann/pyfann_wrap.o
cc1plus: warning: command line option ‘-Wstrict-prototypes’ is valid for C/ObjC but not for C++ [enabled by default]
c++ /pyfann/pyfann_wrap.o ../src/doublefann.o -o build/lib.linux-x86_64-2.7/pyfann/_libfann.so
/usr/bin/ld: ../src/doublefann.o: relocation R_X86_64_32S against `.rodata’ can not be used when making a shared object; recompile with -fPIC
../src/doublefann.o: error adding symbols: Bad value
collect2: error: ld returned 1 exit status
error: command ‘c++’ failed with exit status 1 | https://jansipke.nl/installing-fann-with-python-bindings-on-ubuntu/ | CC-MAIN-2019-22 | refinedweb | 487 | 59.9 |
Lambda@Edge is the thing I always find something new to gripe about each time I come back to it. Almost every time the core of my issue is the lack of any ability to perform dynamic configuration of that running code.
This is a situation that I find the AWS blog doesn't help with. Go look and you will find all sorts of examples where the configuration and/or secrets are hard coded in their functions.
I don't consider this to be best practice at all. My serverless app should be able to be redeployed without branching to commit different resource ARNs, or customizing the build process to inject values at deployment. I need to be able to have my template - the definition of my app in CloudFormation - to be able to determine all of this using the standard tooling AWS provides.
Limitations
There are two sets of event types for Lambda@Edge functions with CloudFront:
viewer-request,
origin-request,
origin-response, and
viewer-response. I wrote them in the order they occur during the client request lifecycle. As the names imply,
viewer- event occur on the client's side of the CloudFront distribution, while
origin- events occur the origin/source's side.
For Lambda@Edge, the triggering defines where our limitations are going to be.
origin- events allow the most freedom. We can set function memory as high as we want, the timeout can be a full 30 seconds (same as an API Gateway event source), and the size of the function code can be up to 50 MB.
Switching to
viewer- events severely restricts our resources. Function memory can only be the default 128 MB, our timeout can not exceed 5 seconds, and the function code cannot be more than 1 MB.
Beyond this, both event types allow us network access. As long as our functions work without the bounds set on them, we are able to leverage AWS (or other) services in our code.
For any other serverless app we would rely on environment variables to determine how we configure our code at runtime. Things like ARNs of required resources, paths or locations to read in needed secrets, all the usual suspects. With Lambda@Edge we not allow to use environment variables. At all. Period.
Origin-Request Example
For
origin-request event functions there is a trick we have available: origin custom headers. These are a configuration on the origin for our CloudFront distribution, and we can dynamically set these values in our CloudFormation template. Essentially, the custom headers will become our missing environment variables.
Here's an example of one I implemented once. The use case here is that my distribution serves content from a variety of S3 buckets in numerous AWS regions. I have an API, deployed in all those regions, that can take a file identifier and then return the key name and bucket domain of where it is located. I need the Lambda@Edge function to take the file from the request, look it up in the API, and then set the origin for the request to serve it. My implementation is a variation on this AWS blog post.
In this example, my API uses a Cognito User Pool for authentication. I created an app client specifically for the CloudFront distribution's use, and it can obtain access tokens uses
client credentials flow.
I'm going to skip most of the template as the important piece is how we configure the default origin of the CloudFront distribution.
Origins: - DomainName: !GetAtt DefaultBucket.DomainName # See comment below Id: default-origin OriginCustomHeaders: - HeaderName:api-domain HeaderValue: !Ref ApiDomainName # Template parameter - HeaderName: cognito-domain HeaderValue: !Ref UserPoolDomain # Template parameter - HeaderName: client-id HeaderValue: !Ref DistributionClientId - HeaderName: client-secret HeaderValue: !Ref DistributionClientSecret S3OriginConfig: OriginAccessIdentity: !Sub origin-access-identity/cloudfront/${CloudFrontOAI} # Template parameter
In this template I do have a
DefaultBucket S3 bucket resource to use with this. You don't have to do that, but the
DomainName attribute must be set. In any event, this never gets used as our function is routing to another origin or returning 404 responses.
This isn't without drawbacks. We are exposing the client credentials to CloudFront (though they are also visible in the Cognito console). The client that made the original request will never see these headers, and because they're injected by CloudFront after receiving the request. Our Lambda@Edge function processes the event before the origin, and we have the option of removing these headers as we modify the event and pass it back to CloudFormation (thus preventing exposed to the origin).
Consider an adaptations of this pattern where you pass SSM or Secrets Manager paths that the code can then read at runtime and cache for successive invocations.
Viewer Request Example
The next example I'm going to show really applies to the remaining three event types (custom headers apply only to
origin-request events), but my example is specifically around implementing HTTP basic auth for resources behind my CloudFront distribution.
Here, my plan was to have the function perform a lookup in a DynamoDB table for the user that it would perform the comparison against and allow or deny access. My challenge is to
I'm not entirely sure how I was struck by this inspiration, but it occurred to me one day that the Lambda@Edge function does indeed already know exactly what it needs to talk to. That information is in the IAM role I assigned to it.
This solution can be distilled down very simply to granting the function the ability to read its own IAM role and then extract the DynamoDB ARN to configure the
boto3 client. It feels wrong, but it works, and it works within the 128 MB memory and 5 second timeout limits imposed by this event type (including cold start invocations).
Let's look at the fuction's definition in my CloudFormation/SAM template first.
DistributionAuthorizer: Type: AWS::Serverless::Function Properties: Runtime: python3.8 Handler: index.lambda_handler CodeUri: ./src/distribution/authorizer Role: !GetAtt DistributionAuthorizerRole.Arn MemorySize: 128 # Max for viewer-request Timeout: 5 # Max for viewer-request AutoPublishAlias: live DistributionAuthorizerRole: Type: AWS::IAM::Role Properties: Path: "/" ManagedPolicyArns: - arn:aws:iam::aws:policy/service-role/AWSLambdaBasicExecutionRole AssumeRolePolicyDocument: Version: 2012-10-17 Statement: - Effect: Allow Action: - sts:AssumeRole Principal: Service: - lambda.amazonaws.com - edgelambda.amazonaws.com Policies: - PolicyName: root PolicyDocument: Version: "2012-10-17" Statement: - Effect: Allow Action: iam:GetRole Resource: !Sub arn:aws:iam::${AWS::AccountId}:role/${AWS::StackName}-DistributionAuthorizer-* - Effect: Allow Action: dynamodb:GetItem Resource: !GetAtt ServiceTable.Arn
All very standard for a Lambda@Edge function, but I have added a new permission for this:
iam:GetRole. Because we have a chicken-and-the-egg scenario unfolding where I am grating a permission to read the IAM role that the permission is defined within, I'm taking advantage of how CloudFormation names resources ( {stack name}-{logical ID}-{random suffix} ) to be sure I've targeted down the resource as best as I can without having to name it (it is best practice to not name your resources and let CloudFormation do it for you).
To make this happen in code, I need to use STS
GetCallerIdentity to find out the name of the IAM role used for the credentials my function was provided, then use IAM
GetRolePolicy to read the policy of the role, and then after parsing it I can find the ARN of the DynamoDB table and configure my client.
Here's the code.
import boto3 session = boto3.Session() iam_client = session.client("iam") sts_client = session.client("sts") ROLE_NAME = sts_client.get_caller_identity()["Arn"].split("/")[-2] ROLE_POLICY = iam_client.get_role_policy(RoleName=ROLE_NAME, PolicyName="root")[ "PolicyDocument" ] for arn in ROLE_POLICY["Statement"][0]["Resource"]: if arn.startswith("arn:aws:dynamodb:"): arn_parts = arn.split("/") TABLE_NAME = arn_parts[-1] TABLE_REGION = arn_parts[0].split(":")[3] dynamodb_table = session.resource("dynamodb", region_name=TABLE_REGION).Table( TABLE_NAME ) def lambda_handler(event, context): result = dynamodb_table.get_item(Key={"pk": "U#username", "sk": "A"}) # Basic auth code will go here. L@E function always returns a 401 response for now. return { "status": "401", "statusDescription": "Unauthorized", "body": "Unauthorized", "headers": { "www-authenticate": [{"key": "WWW-Authenticate", "value": "Basic"}] }, }
This is an incomplete HTTP basic authorizer as it isn't parsing the authorization header to get the username to perform the lookup, but it does demonstrate all the working pieces of configuring
boto3 by referencing the IAM role policy and then performing a lookup against the table.
The runtime results, despite the number of AWS API calls being made (across three services) are encouraging:
# Cold Start Duration: 171.38 ms Billed Duration: 172 ms Memory Size: 128 MB Max Memory Used: 80 MB Init Duration: 519.87 ms # Warm Executions Duration: 24.46 ms Billed Duration: 25 ms Memory Size: 128 MB Max Memory Used: 80 MB Duration: 43.25 ms Billed Duration: 44 ms Memory Size: 128 MB Max Memory Used: 80 MB Duration: 25.76 ms Billed Duration: 26 ms Memory Size: 128 MB Max Memory Used: 80 MB Duration: 30.47 ms Billed Duration: 31 ms Memory Size: 128 MB Max Memory Used: 80 MB
Well within limits. It works.
It's not really good, through, is it?
We Shouldn't Have To Do This
All of the above might sound very clever, and if it does that should tell us that it shouldn't have to be this way. My issue with Lambda@Edge is I don't feel it's developer friendly. Look at the mental hoops I've gone through to figure out how to configure my Lambda@Edge functions the way I would a normal Lambda function. There's a massive ripple effect that comes from not having environmental configuration built in.
I've reached out on this topic before, and I've never received a really good answer on the "how" to do it. If anyone from AWS's CloudFront, Lambda@Edge, CloudFormation, and/or SAM teams are stumbling across this; I ask you to reflect on the what many other customers must also have to contend with for something that should be simple, straightforward, and is easy for developers to do.
Discussion (1)
Interesting solution.. I also "inject" env variables for my Lambda@Edge with CloudFront headers.
One thing you need to watch out for is the low limit on
iam:GetRoleAWS will throttle that call if it is made to often, the limit is quite low.
We actually create a Cloudfront per client as they need a custom domain anyway. So maybe the better way to do it is to create a cloudfront per client and then inject the table name in the headers. Then Lambda@Edge just uses that? | https://dev.to/aws-builders/dynamically-configure-your-lambda-edge-functions-2pkp | CC-MAIN-2021-31 | refinedweb | 1,774 | 53.71 |
hello :) also from Philippines :D
Type: Posts; User: destitute.developer
hello :) also from Philippines :D
yes, basically same principle applies there.
myPOJO pojo = new myPOJO("the String",2); // can use my defined constructor to initialize my pojo
//get values with the getter methods...
using your code above
jFormattedTextField1 = new JFormattedTextField(new SimpleDateFormat("MM/dd/yy - mm:HH"));
jFormattedTextField1.setValue(new Date());
...
hi,
correct me if i'm wrong.
your problem is using the 'Dog' class to return the user input to the text area?
well can create a variable of that class, kinda like this
public class...
hello,
As per my understanding of the problem, you want the user to input a date on the text field then as he/she type the text is automatically converted to the format that you want?
regards...
you have actually solved it already :)
though
message = message + String.format("Winning sales person: %d", counter);
this line will always display the max value of 'counter'.
Hi
how about arrays?
eg :
int[][] x= new int[10][2];
x[0][0] = 100; // sales
a yes sorry, my bad :) thank you for pointing it out
carelessness will definitely be the end of me
Hi,
as per my understanding of what you are trying to do, does sample output below fit your need?
eg :
Enter units sold by sales person # 1 : 5
Enter units sold by sales person # 2 : 1
...
nice :) i'll remember that. thanks.
good day
try this, also how do i insert this the way you inserted your code? sorry, im new to this
public static void main(String args[]) {
ArrayList<PhoneBookEntry> list = new...
[Still hoping to find help :D]
[sorry i had placed this in a wrong place. how do you move this thing?]
Good Day,
i am trying to integrate a mail application to my webapp,
currently it is... | http://www.javaprogrammingforums.com/search.php?s=607a3700b476148cfdb4a16f97b3bafb&searchid=1627315 | CC-MAIN-2015-27 | refinedweb | 305 | 64.81 |
Insert a Node at the Tail of a Linked List
devasood + 8 comments
Seriously, at least show the main function, even as uneditable.. it's tough to debug with only "wrong answer" and "right answer"
ravi1008 + 6 comments
Could you just share your code so that I can examine it and help you out?

Even I suffer a lot while coding on HackerRank due to the "uneditable" parts like the main() function. But you can always print intermediate results to cross-check the code flow and approach.
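Building on that advice: if the hidden main() makes debugging painful, you can reproduce a minimal harness locally and print the list after each insert. This is just a sketch in Python; the Node class, the insert_at_tail name, and the sample values are my own choices here, not HackerRank's hidden driver.

```python
# Minimal local debugging harness -- names in this sketch are assumptions,
# not the judge's hidden classes.

class Node:
    def __init__(self, data, next=None):
        self.data = data
        self.next = next

def insert_at_tail(head, data):
    """Standard iterative tail insert: walk to the last node, then attach."""
    node = Node(data)
    if head is None:              # empty list: the new node becomes the head
        return node
    current = head
    while current.next is not None:
        current = current.next
    current.next = node
    return head

def to_list(head):
    """Collect the values so intermediate states are easy to print."""
    values = []
    while head is not None:
        values.append(head.data)
        head = head.next
    return values

if __name__ == "__main__":
    head = None
    for value in (302, 183, 715):
        head = insert_at_tail(head, value)
        print(to_list(head))      # inspect the list after every insert
```

Running it prints the growing list step by step, which stands in for the intermediate prints suggested above.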
ankeet_s + 9 comments
    Node Insert(Node head, int data) {
        Node tmp = new Node();
        tmp.data = data;
        tmp.next = null;

        if (head == null) {
            head = tmp;
            return head;
        }
        Node current = head;
        while (current.next != null) {
            current = current.next;
        }
        current.next = tmp;
        return head;
    }
navkhan + 2 comments
How does this work? There is no link between the second-to-last item (formerly the last) and the new one just added. Also, current.next, when current is null, would throw an NPE, wouldn't it?
robbyoconnor + 4 comments
[deleted]
navkhan + 3 comments
I am not asking for a solution; I had already submitted a working one. So yes, to answer your unnecessarily rude question, I do have an understanding of how linked lists work. But I do like to see others' solutions as well, and if they're better than mine, I like to improve. I noticed he used current.next, which is the correct way to do it. I don't think questions are frowned upon here, nor is there
robbyoconnor + 2 comments
calm down.
robbyoconnor + 0 comments
Also, when you submit a correct working solution, you can see everyone else's on the leaderboard.
ray10mathew + 2 comments
Never in the history of calming down has someone calmed down by being told to calm down.
pratibha_vrawat + 2 comments
Please don't post solutions here. :/
manishku99251 + 0 comments
Why shouldn't he help us out?! You can't say "stop posting solutions here".
samfelder17 + 5 comments
I tried doing mine recursively in C. Any idea what went wrong?
    Node* Insert(Node *head, int data) {
        if (head->next == NULL) {
            Node *newn = (Node *) malloc(sizeof(Node));
            head->next = newn;
            newn->next = NULL;
            newn->data = data;
            return newn;
        }
        Insert(head->next, data);
        return head;
    }
bsiddarth29 + 1 comment
According to the problem statement:
1.) You need to figure out a way to handle head == NULL, i.e. when the list is empty, but by writing "if(head->next==NULL)" you are considering only the non-empty case.
2.) Even in that case, you are just returning "newn", but you need to return the address of the first node, which is "head"!

Try adding an if(head==NULL) block of statements as well.
vishal_gupta1_c1 + 0 comments
You are making changes to head only, which is wrong. Take another pointer, copy head into it, and then proceed; the rest of your code seems absolutely fine.
kalpitshah0078 + 1 comment
You have not covered the condition where the head is null. If you want to do it in this manner, your code should be something like this:

    Node* Insert(Node *head, int data) {
        if (head == NULL) {
            Node *newn = (Node *) malloc(sizeof(Node));
            newn->next = NULL;
            newn->data = data;
            head = newn;
        } else {
            Node *newn2 = (Node *) malloc(sizeof(Node));
            newn2 = head;
            while (newn2->next != NULL) {
                newn2 = newn2->next;
            }
            newn2->next = Insert(newn2->next, data);
        }
        return head;
    }
saikiranhs + 0 comments
Thanks! Nice one! You can reduce complexity by removing the while loop there, because in recursion the head already moves forward as you pass newn2->next, so there is no need to check whether it is pointing to the end.
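To illustrate the point about dropping the explicit loop: in a recursive insert, the call itself walks the list, so re-linking on the way back is enough. A hedged Python sketch of the same idea (the names and the Node constructor taking just the data are my own assumptions):

```python
class Node:
    def __init__(self, data, next=None):
        self.data = data
        self.next = next

def insert_at_tail(head, data):
    # Base case: we fell off the end of the list (this also covers an
    # empty list), so the new node gets attached right here.
    if head is None:
        return Node(data)
    # No while loop needed: the recursive call advances down the list,
    # and re-assigning head.next re-links each node on the way back up.
    head.next = insert_at_tail(head.next, data)
    return head
```

Each node's next pointer is reassigned to itself on the way back, except at the very end, where it picks up the new node.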
avik_dutta111191 + 0 comments
The only problem with the recursive one is how to return the start pointer of the list... (I tried it too.)
pandeysatyendra1 + 0 comments
Actually, you are not covering the case where head is already null; what will it print then?
pradeesh1998 + 2 comments
Try this in C++:

    typedef struct Node *list;
    list Insert(list head, int data) {
        if (head == NULL) {
            head = new Node;
            head->next = NULL;
            head->data = data;
            return head;
        }
        return Insert(head->next, data);
    }
lobheshdhakarma1 + 0 comments
How am I getting an error??? Plz explain:

    Node* Insert(Node *head, int data {
        Node n = (node) malloc(sizeof(node));
        n->data = data;
        n->next = NULL;
        if (head == NULL) {
            return head = n;
        } else {
            Node *rhead = head;
            while (head->next != NULL) {
                head = head->next;
            }
            head->next = n;
            return rhead;
        }
    }
soulthani + 0 comments
first, check if the head is null (means that its start with emapty list) then the first if condition will handle that
second, mantain head object by assigning its references to current then iterate over the current while current.next still containing Node, when it found current.next == null, means that current is the last Node, then it assign the new Node to Current.next and return the head
with this way, we makesure that current would never be found as null
AnuragPatil + 7 comments
Please suggest; I'm unable to find the mistake in this:
    Node* Insert(Node *head, int data) {
        Node *temp;
        temp->data = data;
        temp->next = NULL;
        if (head == NULL) {
            head = temp;
        } else {
            Node *p;
            p = head;
            while (p->next != NULL)
                p = p->next;
            p->next = temp;
        }
        return (head);
    }
Showing no response in output :(
rajinder0852 + 1 comment
You do not assign an address to the pointer temp:

    Node* Insert(Node *head, int data) {
        Node *temp;
        temp = new Node();   // that's the statement missing
        temp->data = data;
        temp->next = NULL;
        if (head == NULL) {
            head = temp;
        } else {
            Node *p;
            p = head;
            while (p->next != NULL)
                p = p->next;
            p->next = temp;
        }
        return (head);
    }
prannavk1 + 1 comment
    p->next = temp;   // in C++, giving a segmentation fault

Why is the above line giving me a segmentation fault?
desnesn + 0 comments
When you exit the while loop you have a pointer pointing to nowhere. It didn't matter on the Print exercise, but here it does.
In your else statement, create another Node* pointer variable (*tail, for instance), and make it receive your *p at each iteration.
At the last iteration, it will be pointing to the last Node structure, which will be the tail of the linked list. Then your p->next = temp; will work :-)!
varuos_oohas + 3 comments
    Node* Insert(Node *head, int data) {
        Node *temp = new Node();
        temp->data = data;
        temp->next = NULL;
        if (head == NULL) {
            head = temp;
        } else {
            Node *p;
            p = head;
            while (p->next != NULL)
                p = p->next;
            p->next = temp;
        }
        return (head);
    }
abbhishek971 + 6 comments
This is my solution in C++14
    Node* Insert(Node *head, int data) {
        Node *temp = new Node();
        temp->data = data;
        temp->next = NULL;
        if (head == NULL) {
            head = temp;
        } else if (head->next == NULL) {
            head->next = temp;
            return head;
        } else {
            Insert(head->next, data);
        }
        return head;
    }
neeraj1515_kaus1 + 1 comment
What is the difference between Node* temp and Node *temp?
abbhishek971 + 0 comments
Apologies for the delay in reply.
There's no difference. Both the notations result in the same identifier declaration.
abhishekjoshi261 + 0 comments
    solution.cc:46:1: error: ‘Node’ does not name a type; did you mean ‘modf’?
     Node* Insert(Node head, int data)
     ^~~~
     modf
    solution.cc: In function ‘int main()’:
    solution.cc:83:41: error: ‘insertNodeAtTail’ was not declared in this scope
     SinglyLinkedListNode llist_head = insertNodeAtTail(llist->head, llist_item);

It's showing this error and the code is not working.
shubhvratj + 3 comments
    SinglyLinkedListNode* insertNodeAtTail(SinglyLinkedListNode* head, int data) {
        SinglyLinkedListNode *new_node, *ptr;
        new_node = new SinglyLinkedListNode();
        new_node->data = data;
        new_node->next = NULL;
        if (head == NULL) {
            head = new_node;
            return head;
        } else {
            ptr = head;
            while (ptr != NULL) {
                ptr = ptr->next;
            }
            ptr->next = new_node;
            new_node->next = NULL;
            return head;
        }
    }
can you please help me with this code?
abhishekjoshi261 + 0 comments
    Node* Insert(Node *head, int data) {
        struct Node *temp, *tail, *ptr;
        temp = new Node;
        temp->data = data;
        temp->next = NULL;
        if (head == NULL) {
            head = temp;
        } else {
            ptr = head;
            while (ptr != NULL) {
                tail = ptr;
                ptr = ptr->next;
            }
            tail->next = temp;
        }
        return head;
    }

Bro, I think you missed the struct declaration. The above is the solution that worked for me; PM me if you find any problem.
asmitaverma + 1 comment
Here's my solution in C++14. The class definition of SinglyLinkedListNode specifies that the constructor has an argument which is the data part of the node to be inserted.
    SinglyLinkedListNode* insertNodeAtTail(SinglyLinkedListNode* head, int num) {
        SinglyLinkedListNode* node = new SinglyLinkedListNode(num);
        node->data = num;
        node->next = NULL;
        if (head == NULL) {
            head = node;
        } else {
            SinglyLinkedListNode* temp;
            temp = head;
            while (temp->next != NULL) {
                temp = temp->next;
            }
            temp->next = node;
        }
        return head;
    }
akkuarora07 + 0 comments
How can I remove this error?

    solution.cc:47:1: error: ‘Node’ does not name a type; did you mean ‘modf’?

Please help.
jackie88812 + 0 comments
You need to allocate memory space before temp->data = data;

My suggestion is below:

    Node *temp = (Node *) malloc(sizeof(Node));
    temp->data = data;
    temp->next = NULL;
kartikaythakkar + 1 comment
    Node* Insert(Node *head, int data) {
        Node *temp = new Node();
        temp->data = data;
        temp->next = NULL;
        if (head == NULL) {
            head = temp;
        } else {
            Node *p;
            p = head;
            while (p->next != NULL) {
                p = p->next;
            }
            p->next = temp;
        }
        return head;
    }
aashah7 + 1 comment
In your code:
    if (head == null) {
        head = tmp;
        return head;
    }

Why not simply return tmp instead of re-assigning head to tmp and then returning head?
Just curious. Thanks!
k_pradeep + 1 comment
Below is an approach with recursion:

    Node Insert(Node head, int data) {
        return insertDS(head, data, head);
    }

    Node insertDS(Node head, int data, Node parent) {
        if (head == null) {
            head = new Node();
            head.data = data;
            if (parent != null)
                parent.next = head;
        } else
            insertDS(head.next, data, head);
        return head;
    }
prem_hackerrank + 0 comments
cse1519210260 + 0 comments
Hope this helps you!! :)
    var newNode = new SinglyLinkedListNode(data);
    if (head == null) {
        head = newNode;
        return head;
    } else {
        var temp = head;
        while (temp.next != null) {
            temp = temp.next;
        }
        temp.next = newNode;
        return head;
    }
jjenner689 + 7 comments
I have a recursive solution in Python if anyone is interested:
def Insert(head, data):
    if head == None:
        return Node(data, None)
    else:
        if head.next == None:
            head.next = Node(data, None)
        else:
            Insert(head.next, data)
        return head
SirLemuel + 1 comment
In the 3rd line you return a new Node object. I thought we were supposed to return only the head object. Can you explain this part? My understanding with linked list is that the head never has any data but is used to reference the first element in the list, yet you return a single Node with data and no head.
xiangyu_li1990 + 0 comments
I can explain. The third line returns the newest Node as the inserted node of the linked list, because the program has already checked that the head is Null. Returning the newest node would be the answer.
SuperGogeta + 3 comments
I did it non-recursively; calling a function multiple times unless necessary would be inefficient.
def Insert(head, data):
    if head == None:
        return Node(data, None)
    temp = head
    while temp != None:
        prevFinalNode = temp
        temp = temp.next
    prevFinalNode.next = Node(data, None)
    return head
Also, you could have used elif instead of else > if/else, as such:
def Insert(head, data):
    if head == None:
        return Node(data, None)
    elif head.next == None:
        head.next = Node(data, None)
    else:
        Insert(head.next, data)
    return head
The code looks more readable this way I think, sorry for the nitpicking.
DanHaggard + 1 comment
I think you can trim one line out of your non-recursive solution:
def Insert(head, data):
    if head == None:
        return Node(data, None)
    curr_node = head
    while curr_node.next != None:
        curr_node = curr_node.next
    curr_node.next = Node(data, None)
    return head
pratik9044536615 + 2 comments
Node *temp = new Node();
temp->data = data;
temp->next = NULL;
if (head == NULL) {
    head = temp;
}
else {
    Node *current = head;
    while (current->next != NULL) {
        current = current->next;
    }
    current->next = temp;
}
return head;
samikshya_chand1 + 1 comment
my program is the same as yours except the first part of the code where mine says
Node *temp;   // did not use Node *temp = new Node
temp->data = data;
temp->next = NULL;
This is causing an error in my code. Can you please explain?
gurudashimself + 0 comments
This is because when you used Node *temp, you just made a pointer of type Node; no memory was allocated, unlike in pratik9044536615's code, where he uses
new Node();
So, I would suggest allocating memory either using new Node(), or going all C style and using
Node *temp = (Node *)malloc(sizeof(Node));
mallireddy_t + 0 comments
    Node* Insert(Node *head, int data) {
        Node *temp, *prev;
        temp->data = data;
        temp->next = NULL;
        prev = head;
        if (prev == NULL)
            head = temp;
        while ((prev->next) != NULL)
            prev = prev->next;
        prev->next = temp;
        return head;
    }

Could you tell me the error?
hjalmar_basile + 0 comments
I came to know about Harsha Suryanarayana and his sad story only yesterday. Today, by pure chance, I started solving this subdomain and found out that he's the creator of these list challenges; he was working with a friend on the MyCodeSchool start-up.
It seems that many people here are unaware of who he was; these are a few links:
Rest in peace Harsha, you were a good soul.
niranjan_wad + 0 comments
I second that. I tried my logic in Eclipse and it works fine according to my main method. I am not able to understand what seems wrong to them here.
NickFi + 0 comments
I usually post solutions in C#. And for all problems that are non-trivial (for my level), I copy the solution in SharpDevelop and then I do my coding and debugging. Once the solution seems to work, I copy and paste the code back on hackerrank. For C/C++ or C#, Visual Studio Community Edition is also a good alternative.
micahwood50 + 5 comments
This passes tests. (Python 3)
def Insert(head, data):
    print("Right Answer!")
    exit()
Eric_Day87 + 4 comments
Your python implementations break because of inconsistent use of tabs/spaces. It's in your main function, line 52. It's locked, so we can't even fix it.
aayushi_bhandar1 + 0 comments
I think this is because the function needs to be in the class and the predefined function is outside it, which we cannot even change. I think that is causing all these issues. Can someone please look into this? I want to practice using Python.
kishynivas10 + 0 comments
Came back to go through basic ds, I thought I was doing something wrong, wasted too much of my time in this. sigh!
tomatoeggs + 3 comments
Python 3 bug
Line: llist_head = insertNodeAtTail(llist.head, llist_item)
Sorry: TabError: inconsistent use of tabs and spaces in indentation (solution.py, line 57)
pedro_calais + 1 comment
Same thing here.
tomatoeggs + 0 comments
Many exercises have the same Python issue. Code should be tested before going online. I want to solve the challenges using both C++ and Python, but these bugs do not allow me to. Anyway... I have given up and I use only C++.
aayushi_bhandar1 + 0 comments
That seems to be some indentation issue with the main, which we cannot even edit. I need to practice DS using Python. What to do?
Pierre_Masse + 1 comment
Got the same problem here. Does anyone know how to get this problem corrected?
swatcat1 + 0 comments
Node Insert(Node head, int data) {
    Node newNode = new Node();
    newNode.next = null;
    newNode.data = data;
    if (head == null) {
        head = newNode;
    }
    else {
        Node tailNode = head;
        while (tailNode.next != null) {
            tailNode = tailNode.next;
        }
        tailNode.next = newNode;
    }
    return head;
}
Input (stdin) 3 247 678 159 17
Your Output (stdout) Right Answer!
Expected Output 247 678 159 17
Compiler Message Wrong Answer
Any pointer as to what needs to be fixed here? Any response is appreciated. Thank you.
cian_j_mcintyre + 0 comments
I seriously do not see how we're supposed to solve this without seeing the main. I really have no idea what is going into this function at all, or how I can even begin to debug it. I don't even see how we can possibly check if part of the answer is right, such as whether the first test case is working okay or what the inputs mean.
Elnaz + 3 comments
Java implementation :
if (head == null) {
    head = new Node();
    head.data = data;
} else {
    Node node = head;
    while (node.next != null) {
        node = node.next;
    }
    node.next = new Node();
    node.next.data = data;
}
return head;
shahab03 + 1 comment
what does this do?
    while (node.next != null) {
        node = node.next;
    }
and why is it needed?
The_Speck + 1 comment
Why must you declare a "Node node = head"? Initially I had:
if (head == null) {
    head = new Node();
    head.data = data;
} else {
    while (head.next != null)
        head = head.next;
    head.next = new Node();
    head.next.data = data;
}
return head;
But this method doesn't work. May you explain why? Thank you!
sanipatel2141 + 1 comment
because the question asks to "return the head of the updated linked list". In your case you're returning the pointer (is that correct Java terminology?) to the last node.
vishaltk + 0 comments
I made the same mistake initially and was scratching my head trying to figure out where I went wrong. Hence people say understanding the requirement is important. You may write functionally correct code, but if it does not meet the requirement, then the whole purpose of your program/app is defeated.
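For what it's worth, here is a minimal, runnable Python sketch of the point being made (the Node class here is my own stand-in, not the locked HackerRank one): returning the traversal cursor instead of the original head drops the front of the list.

```python
class Node:
    def __init__(self, data, next=None):
        self.data = data
        self.next = next

def insert_tail(head, data):
    node = Node(data)
    if head is None:
        return node                 # empty list: the new node is the head
    curr = head                     # walk with a separate cursor...
    while curr.next is not None:
        curr = curr.next
    curr.next = node
    return head                     # ...so the original head can be returned

def to_list(head):
    out = []
    while head is not None:
        out.append(head.data)
        head = head.next
    return out

head = None
for value in [247, 678, 159, 17]:
    head = insert_tail(head, value)
print(to_list(head))                # [247, 678, 159, 17]
```

Returning curr at the end instead of head would hand back only the tail of the list, which is exactly the requirement mismatch described above.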
haripriya_sreer1 + 1 comment [deleted]
shaikhkamal2012 + 0 comments
Hope this will work in C:

    SinglyLinkedListNode *ptr, *tmp;
    tmp = (SinglyLinkedListNode *)malloc(sizeof(SinglyLinkedListNode));
    tmp->data = data;
    if (head == NULL) {
        head = tmp;
        tmp->next = NULL;
        return head;
    }
    else {
        ptr = head;
        while (ptr->next != NULL)
            ptr = ptr->next;
        ptr->next = tmp;
        tmp->next = NULL;
    }
    return head;
ipmi(7) ipmi(7)
NAME
ipmi - intelligent platform management interface (IPMI) driver
SYNOPSIS
#include <sys/ipmi.h>
DESCRIPTION
The /dev/ipmi driver allows user processes to send IPMI messages to
the BMC (Baseboard Management Controller) System Message Interface.
The following data structures are provided in the <sys/ipmi.h> header
file for sending IPMI requests to the BMC.
ImbRequest
The ImbRequest structure is used to specify the fields in
the IPMI request.
typedef struct {
BYTE rsSa;
BYTE cmd;
BYTE netFn;
BYTE rsLun;
BYTE dataLength;
BYTE data[1];
} ImbRequest;
rsSa Responding device Slave Address (BMC_SA).
cmd IPMI Command specified in hexadecimal.
netFn IPMI Network Function in hexadecimal.
rsLun Hexadecimal value dependent on cmd.
dataLength Length of following data field.
data Request data if any.
ImbRequestBuffer
The ImbRequestBuffer structure is used to specify a timeout
value for the request. It also contains the IPMI request
itself.
typedef struct {
DWORD flags;
DWORD timeOut;
ImbRequest req;
} ImbRequestBuffer;
flags Currently unused. May be removed in the future.
Hewlett-Packard Company - 1 - HP-UX 11i Version 2: August 2003
timeOut Timeout in microseconds.
req Variable sized ImbRequest buffer.
ImbResponseBuffer
The ImbResponseBuffer structure contains the response from
the BMC.
typedef struct {
BYTE cCode;
BYTE data[1];
} ImbResponseBuffer;
cCode Completion code from BMC in hexadecimal.
data Response data from BMC excluding Completion
Code.
ioctl Commands
The commands used to send IPMI messages to the BMC are:
IOCTL_IMB_SEND_MESSAGE Allows a 64-bit user process to send a
message to the BMC. The arg parameter
points to an ipmi_data_t structure (defined
in the <sys/ipmi.h> header file) whose
members are as follows:
typedef struct ipmi_data {
caddr_t InBuffer;
DWORD InBufferLength;
caddr_t OutBuffer;
DWORD OutBufferLength;
DWORD * BytesReturned;
caddr_t Overlapped;
int status;
} ipmi_data_t;
IOCTL_IMB_SEND_MESSAGE_32 Allows a 32-bit user process to send a
message to the BMC. The arg parameter
points to an ipmi_data_32_t structure
(defined in the <sys/ipmi.h> header file)
whose members are as follows:
typedef struct ipmi_data_32 {
ptr32_t InBuffer;
DWORD InBufferLength;
ptr32_t OutBuffer;
DWORD OutBufferLength;
ptr32_t BytesReturned;
ptr32_t Overlapped;
int status;
} ipmi_data_32_t;
The fields used in these structures are defined as follows:
InBuffer Pointer to variable sized ImbRequestBuffer
structure.
InBufferLength Length of relevant data in InBuffer.
OutBuffer Pointer to variable sized ImbResponseBuffer
structure.
OutBufferLength Length of relevant data in OutBuffer.
BytesReturned Pointer to integer which returns output data
length.
Overlapped Currently unused.
status The status must be 0 if the operation was
successful. Otherwise, it contains a value
known internally to the driver. This field may
be obfuscated in future releases.
The return value of ioctl() and the status field must both be zero for
the operation to succeed. If ioctl() returns 0 and status is not 0,
the message may not have been successfully sent to the BMC, or the
message was successfully sent but return data was not successfully
received.
The application should check the Completion Code returned by the BMC
to further evaluate the status of the operation. The meaning of that
Completion Code is documented in the IPMI specifications.
RETURN VALUE
Unless specified otherwise, upon successful completion, the IPMI
ioctl() commands return a value of 0 (zero). Otherwise, a value of
-1 is returned.
ERRORS
[EBUSY] The caller is unable to access the BMC because too
many processes are contending for access.
[ETIMEDOUT] The caller is able to access the BMC but a timeout
occurred because either the timeOut value in
ImbRequestBuffer is too small or the BMC is busy.
[E2BIG] The caller is able to access the BMC but a timeout
occurred because either the timeOut value in
ImbRequestBuffer is too small or the BMC is busy.
[EFAULT] The buffer pointed to by InBuffer or OutBuffer in
ipmi_data_t or ipmi_data_32_t is invalid.
[ENXIO] The IPMI driver failed to attach to a device during
initialization.
[EINVAL] Incorrect input and/or output buffer lengths.
[EIO] An internal error has occurred.
EXAMPLES
The following segment of code sends the IPMI message Get SEL Info,
NetFn Storage, CMD 0x40. This is section 25.2 of the IPMI v1.5
Specification.
struct selinfo {
BYTE sel_version;
BYTE num_entry_ls;
BYTE num_entry_ms;
BYTE free_space_ls;
BYTE free_space_ms;
BYTE add_timestamp[4];
BYTE erase_timestamp[4];
BYTE op_support;
};
...
uint32_t bytesreturned;
ipmi_data_t ipmidata;
BYTE requestbuffer[64];
BYTE responsebuffer[64];
ImbRequestBuffer *request = (ImbRequestBuffer *)requestbuffer;
ImbResponseBuffer *response = (ImbResponseBuffer *)responsebuffer;
struct selinfo *selinfo;
request->flags = 0;
request->timeOut = 1000000;
request->req.rsSa = BMC_SA;
request->req.cmd = 0x40;
request->req.netFn = 0x0A;
request->req.rsLun = 0;
request->req.dataLength = 0;
ipmidata.InBuffer = request;
ipmidata.InBufferLength = sizeof(ImbRequestBuffer) - 1;
ipmidata.OutBuffer = response;
ipmidata.OutBufferLength = sizeof(responsebuffer);
ipmidata.BytesReturned = &bytesreturned;
int fd = open("/dev/ipmi", O_RDONLY);
ioctl(fd,IOCTL_IMB_SEND_MESSAGE,&ipmidata);
selinfo = (struct selinfo *)response->data;
FILES
/dev/ipmi IPMI driver file
sys/ipmi.h IPMI header file
SEE ALSO
ioctl(2)
STANDARDS CONFORMANCE
IPMI Interface Specification: v1.0, v1.5
#include <baselist.h>
Template for list head pointing to elements of T.
move constructor
move assignment operator
Gets the number of elements.
Checks if the list is empty. This is the same as
GetCount() == 0
Checks if the list contains anything. This is the same as
GetCount() != 0
Gets the element by index.
Gets the index of the element. The element must be part of the array, otherwise (e.g. if x is a copy of an array element) InvalidArrayIndex will be returned.
Gets the pointer to the virtual end node (the node after the last node). This is the address of a virtual (non-existing) node that contains this list head.
The Joomla registration form takes its size XML from one website, but those sizes are for retail and I will sell wholesale. So can you change this? Example: size 38 - 2, size 40 - 3, size 42 - 5, size 44 - 0. It has to write sizes 38-40-42 in serial.
I am looking for a new traders list, individual persons only, including name, address, email and phone numbers, and registration date.
Require: blue-based. 4. Do not have too much color. 5. Concise macro from Excel to website to pre-fill required information. The info will be emailed.
I need you to fill in a spreadsheet with data. I need you to develop some software for me. I would like this software to be developed for Windows using JavaScript. [log in to see the URL]
Import, search and filter a large JSON file; search, edit and export data in many ways (XLS, PDF, etc.). I will provide a sample of a small file and example data inside it once we discuss. The winner of this project is whoever places the best bid. Thanks.
...
Details will be shared with winning bidder. Please bid.
There is a pcap library - it is a bit of overkill if all you are trying to do is read pcap files. I have an (internal - could be made external to the company) library that does this sort of thing: it reads the pcap file using Binary and does the appropriate re-ordering of the bytes within the words depending on the pcap endianness.

Neil

On 12 Oct 2011, at 16:38, mukesh tiwari wrote:

> Hello all
> I was going through wireshark and read this pcap file in wireshark. I wrote a
> simple Haskell file which reads the pcap file and displays its contents;
> however, it looks completely different from wireshark. When I run this
> program, it does not produce anything, and when I press ^C (CTRL-C) it
> produces output.
>
> Output for the given file:
>
> ^C0xd4 0xc3 0xb2 0xa1 0x02 0x00 0x04 0x00 0x00 0x00 0x00 0x00 0x00 0x00
> 0x00 0x00 0xff 0xff 0x00 0x00 0x01 0x00 0x00 0x00 0x0b 0xd4 0x9e 0x43
> 0x41 0x38 0x01 0x00 0x3c 0x00 0x00 0x00 0x3c 0x00 0x00 0x00 0x00 0x04
> 0x76 0xdd 0xbb 0x3a 0x00 0x04 0x75 0xc7 0x87 0x49 0x08 0x00 0x45 0x00
> 0x00 0x28 0x1a 0x6a 0x40 0x00 0x40 0x88 0x6f 0x71 0x8b 0x85 0xcc 0xb0
> 0x8b 0x85 0xcc 0xb7 0x80 0x00 0x04 0xd2 0x00 0x00 0x38 0x45 0x68 0x65
> 0x6c 0x6c 0x6f 0x20 0x77 0x6f 0x72 0x6c 0x64 0x00 0x00 0x00 0x00 0x00
> 0x00
>
> The values displayed in wireshark:
>
> 0000  00 04 76 dd bb 3a 00 04  75 c7 87 49 08 00 45 00   ..v..:.. u..I..E.
> 0010  00 28 1a 6a 40 00 40 88  6f 71 8b 85 cc b0 8b 85   .(.j@.@. oq......
> 0020  cc b7 80 00 04 d2 00 00  38 45 68 65 6c 6c 6f 20   ........ 8Ehello
> 0030  77 6f 72 6c 64 0a 00 00  00 00 00 00               world... ....
>
> import System.IO   -- note: needed for Handle/openBinaryFile
> import Data.Char
> import Data.List
> import Text.Printf
> import Control.Monad
>
> fileReader :: Handle -> IO ()
> fileReader h = do
>     t <- hIsEOF h
>     if t then return ()
>     else do
>         tmp <- hGetLine h
>         forM_ tmp ( printf "0x%02x " )
>         fileReader h
>
> main = do
>     l <- openBinaryFile "udp_lite_full_coverage_0.pcap" ReadMode
>     fileReader l
>     print "end"
>
> I am simply trying to write a Haskell script which produces an interpretation
> of a pcap packet the same as wireshark (at least for UDP packets). Could
> someone please tell me a guide map to approach this? A general guideline for
> this project, like what to read that could be helpful, which Haskell library,
> or anything which you think is useful.
>
> Regards
> Mukesh Tiwari
> _______________________________________________
> Haskell-Cafe mailing list
> Haskell-Cafe at haskell.org

-------------- next part --------------
An HTML attachment was scrubbed...
URL: <>
Cross-site scripting vulnerabilities are Web-specific issues and can compromise a client's data through a flaw in a single Web page. Imagine the following ASP.NET code fragment:
<script language="C#" runat="server">
Response.Write("Hello, " + Request.QueryString["name"]);
</script>
How many of us have written or seen code like this? You may be surprised to learn that it's vulnerable. If a user requests the page with a query string such as

    ?name=<script>alert('hi!');</script>
You’d get a Web page that displays a dialog box, saying "hi!"
"So what?" you say.
There are two ways to avoid Cross-site scripting.
The first is not to trust the input and be strict about what comprises a user's name. For example, you could use regular expressions to check that the name contains only a common subset of characters and is not too big. The following C# code snippet shows the way that you can accomplish this
// Please include this namespace to run this code snippet:
// using System.Text.RegularExpressions;

Regex rg = new Regex(@"^[\w]{1,40}$");
if (rg.Match(vpuCode).Success)   // vpuCode holds the user-supplied name
{
    // Cool! The string is ok
}
else
{
    // Not cool! Invalid string
}
This code uses a regular expression to verify that a string contains between 1 and 40 alphanumeric characters and nothing else. This is the only safe way to determine whether a value is correct.
Don't use a regular expression to look for invalid characters and reject the request if such characters are found, because there is always a case that will slip by you.
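The complementary defense, which the post alludes to but the snippet does not show, is to encode output so the browser treats user data as text rather than markup (in ASP.NET this is typically done with Server.HtmlEncode). Here is the same idea sketched in Python, purely as an illustration:

```python
import html

def render_greeting(name):
    # Escape <, >, &, and quotes so injected markup is rendered as plain text.
    return "Hello, " + html.escape(name)

print(render_greeting("Bob"))
print(render_greeting("<script>alert('hi!');</script>"))
# The second call yields "Hello, &lt;script&gt;..." and the script never executes.
```

Validate input where you can, but always encode on output; the two defenses cover each other's gaps.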
The Securibot: a Small Survelliance Drone for Home Security
Introduction: The Securibot: a Small Survelliance Drone for Home Security
It's a simple fact that robots are awesome. Security robots, however, tend to be way too expensive for an average person to afford or are legally impossible to purchase; Private companies and the military tend to keep such devices to themselves, and for good reason. But what if you really want to have a personal security robot?
Enter the Securibot: A small all-wheel drive robot that can patrol around where you desire and feedback information with a wide array of sensors. It's small, robust and cheap, and will require only minimal understanding of wiring and programming in order to create.
Step 1: Gathering Materials
The following materials will be required. These are parts that must be purchased and consumed for the final product, and as such it may be wise to have additional backup materials in case an accident occurs. Just click on a part to open a new tab if you must purchase it!
POWER MANAGEMENT
- 9-Volt Battery 4-Pack x1
- AA Battery 8-Pack x1
- 4-Slot AA Battery Holder x1
- Male/Male Jumper Wires x1
- Male/Female Jumper Wires x1
- Female/Female Jumper Wires x1
- Mini Breadboard x1
- 1k Resistor x1
- 2k Resistor x1
- Red/Black Power Cables x1
- Rocker Switch x2
HARDWARE AND SENSORS
- Arduino Uno Rev3 x1
- ESP8266 Wi-Fi Module w/ NodeMCU x1
- HCSR04 Ultrasonic Sensor x1
- PIR Motion Sensor x1
- Motor Board x1
CHASSIS
ADDITIONAL MATERIALS*
- Soldier Iron and Solder
- Wire Strippers
- Wire Cutters
- 8" Acrylic
- Laser Cutter
- Electrical Tape
- Zipties
- Small Screws & Nuts
*These materials are not required, but certainly add an extra layer of organization and protection. Being optional, they can be more commonly found in hardware stores, and laser cutters are a more serious consideration for purchase rather than simply renting one or having parts shipped.
Step 2: Programming and Planning
The Securibot is a rather complex device in terms of wiring and programming that may seem intimidating at first, but if done in small steps can be made easier. Below is a diagram that shows the entire wiring scheme. Even though this is here now, it would be unwise to wire everything since this entire mechanism will be attached to the robot. This is simply here to get a better understanding of how the device is set up on paper.
To program the robot, we will be using two different languages: Python and C/C++. Also, it is important to understand that this is best done when programmed on MacOS.
Before we begin, physically attach the NodeMCU to the Motor Board. You can do this by lining up the small squiggle on the bottom with each other. DO NOT PUT IT BACKWARDS OR IT WILL FRY!
Once you have connected the NodeMCU + Motorboard to a computer, open a terminal window, and begin write these lines, ignoring to type anything after a #.
ls /dev/tty.* #Finds the port the NodeMCU is listening on.
screen /dev/tty.<your-port> 115200  #Use the port found by the command above.
#after this, hit enter until you see >>>, then type the following:
import network
sta = network.WLAN(network.STA_IF)
ap = network.WLAN(network.AP_IF)
ap.active(True)
sta.active(False)
If you have programmed this correctly, you should now see a connection for MicroPython-xxxxxx (the numbers will differ based on the ESP8266 used) in your Wi-Fi. Connect to it, the password for it is micropythoN (exactly as written)
Now, go to the WebREPL client page and press "Connect". DO NOT CHANGE THE IP. The default one that is given is what is required. You should be prompted to enter a password; Simply enter password.
After that, we are going to have to obtain all the code used in the control of the robot's motors. In this github repository, download crimsonbot.py. You may download other things for future use if need be. Now we can begin programming, but doing so may be too hard, so we have made another repository, located here. Grab demo.py and place it in the same location as crimsonbot.py.
Go back into the webrepl and connect again. Press "Connect" and log in with password again. On the right side, click "Choose File" and find where you put demo.py. After selecting demo.py, send it by pressing "Send to device". If you did it correctly, you should be able to type import demo and not get any error. Congratulations, you have all the software set up for control. Now it is time to assemble this into the robot itself.
Step 3: Building the Basics
Now that we have set up the primary part of the software, we can work on the hardware. Open up the package for the robot's Makerfire chassis and assemble it as instructed in the guide included. It should be noted that the wires do not come soldered, so be careful as always when working with a soldering iron. Once you have assembled the entire robot as per the guide provided, we actually don't need the top on for now, so you can put that aside.
Taking off the top, we can now attach some things. Grab an adhesive of your choosing and place the Motor Board and two 9V batteries in front of the blue section on the board. It goes without saying, but you can detach the Motor Board to do this.
Using soldered wires or alligator clips, attach the two 9V batteries in series, giving around 18V of power. Now take one end of that and connect it to a rocker switch. You should now have a negative/positive end attached to the rocker and one simply attached to one end. With wire strippers, remove a little of the red/black power cable to reveal some of the copper. You can now put the leads into the Motor Board on the blue section by sticking them in, using a small Phillips screwdriver to raise and lower the terminal screws to secure them properly. The red wire will attach to the outlet named VIN and the ground will attach to the outlet named GND.
Now is the hard part of the wiring. It's probably the hardest part since it is very intricate. Using the ends of the motors, connect it in the following way:
The two black wires on the left to outlet A-
The two red wires on the left to outlet A+
The two black wires on the right to outlet B-
The two red wires on the right to outlet B+
Electrical tape and zipties will come in very handy in order to keep the pairs of wires together. Now that that has been assembled, we can test if the motors are working properly.
Log in and follow all the parts of the programming step, from starting the webrepl to loading demo.py. After you have typed in import demo, type either one of the following commands:
demo.demo_fb() #Makes the robot go forward and reverse.
demo.demo_rot() #Makes the robot spin.
These will evaluate whether you can move forward and turn. If they both work as intended, then fantastic! If not, then double check your wiring and make sure your batteries are fully charged. Attached to this is a small video of the demo_fb() program and how it runs the wheels as an example. Notice that these are not powered fully, so we must check with a multimeter whether the power is sufficient for the four motors.
Step 4: Coloring a Sense of Things
Now that we have established that our bot can move around, it is finally time to begin the automation of the robot.
Much like how a guard is tasked with patrolling an area for a period of time, the robot is programmed using the code in demo.py to patrol an area by following a black line. The best candidate for this line is black electrical tape.
Using three female/female jumper wires, connect to the following pins on one of the color sensors: VCC (power), GND (ground) and DAT (data). Connect the other ends using also any pins from rows 2-8 on the Motor Board for the following connections:
VCC => V
GND => G
DAT => D
Note that all of these must be in the same row to function. The rows are labeled on the side of the Motor Board. Repeat this for a second sensor, and mount both in the front with some spare standoffs or anything you prefer. Keep in mind that the color sensors have to be very close to the ground. If they are not close enough, they will not function properly. Make sure to also mount them symmetrically on opposite sides for the intended effect.
Go back into the webrepl, send demo.py and import it once more. After that, lay it on a non-black surface and map out a line of black electrical tape a meter or two. Place the robot down with the line in between the two sensors. Type the following commands after powering:
demo.setup()
demo.loop()
The Securibot should now follow the line and correct itself when the color sensor is tripped. The code works by detecting what value is normal, meaning not black-colored, and when that value is sensed to be different, it corrects itself. Note that since the program is meant to run indefinitely, the only way to stop the robot is to power it off. Test this way a couple of times, and if you are really daring, try to make some curves and turns.
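The correction logic described above can be sketched as a small pure function. This is my own illustrative sketch, not the actual code in demo.py; the real sensor reads and motor calls live in crimsonbot.py:

```python
def correction(left_sees_tape, right_sees_tape):
    """Decide a steering command from the two color sensor readings."""
    if left_sees_tape and not right_sees_tape:
        return "turn_left"    # tape drifted under the left sensor: steer back left
    if right_sees_tape and not left_sees_tape:
        return "turn_right"   # mirror case for the right sensor
    return "forward"          # tape still between the sensors (or crossing a junction)

# The patrol loop would then poll the sensors and dispatch to the motors,
# something like: while True: drive(correction(read_left(), read_right()))
```

Each sensor first samples its "normal" (non-black) value during setup; a reading that differs from that baseline counts as seeing the tape.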
Step 5: Sounding Off
The diagram above shows how the ultrasonic sensor will be set up. The sensor works by transmitting an ultrasonic pulse of sound, higher than any human can hear, and calculating how long it takes for it to reflect back. This is where the male/female tabs will shine alongside the 1k and 2k resistors.
At this point, real estate will be difficult to manage, so now would be a good time to attach the top of the car back on. However, bear in mind that the grey TRIG wire and the white ECHO wire must connect to two separate D pins on the Motor Board underneath, so sneak them through and attach them. If you purchased the breadboard included in the materials section, then it will have an adhesive bottom that can be used by just peeling the paper away. Attach that to the front of the car, and then attach the battery pack using whatever adhesive you desire in the back of the car.
It should be noted that the copper wires that come with the AA battery pack do not have female ends, so you will need to strip the wire away before inserting them into the breadboard.
The code for the ultrasonic sensor is a little more complex but can still be accessed from this github repo again. Download HCSR04.py and motion_control.py and have them in the same location. With these, you can detect the distance from the sensor to any object. The range of the ultrasonic sensor is around two to three meters.
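As a rough illustration of the math behind the echo measurement (my numbers, not taken from HCSR04.py): sound travels at about 343 m/s, i.e. 0.0343 cm per microsecond, and the pulse covers the distance twice, out and back.

```python
def echo_to_cm(echo_duration_us):
    # Half the round-trip time, times the speed of sound in cm per microsecond.
    return (echo_duration_us * 0.0343) / 2

print(round(echo_to_cm(1000), 2))   # a 1000 us echo ~= 17.15 cm away
```

This is why the usable range tops out at a few meters: beyond that, the reflected pulse is too weak and too late for the sensor to register reliably.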
Step 6: Heat Signatures
Now that we have the other parts assembled, we can focus on using the Arduino Uno with the Passive Infrared Sensor (PIR) to detect thermal motion.
First of all, make sure to download the latest IDE for Arduino. Connect the required cable from your USB outlet to the Uno. You may be required to confirm security prompts for this, say "Yes" to all. Make sure that it recognizes this by checking under Tools > Board > Arduino/Genuino Uno and Tools > Port > dev/cu.Bluetooth-Incoming-Port. Once those are step up, go to Tools > Get Board Info and see if the board information pops up.
Now we can use the code back on the good old github repo in order to detect thermal motion. Download the .ino file in the repository and open it with the Arduino IDE. Click "Verify" to compile the code and push it to the Uno using the button next to it.
Now we must physically wire the Arduino Uno. Follow the diagram above to do so, and when attaching the PIR to the car, use some super glue to attach it on top of the ultrasonic sensor. Any adhesive will do to attach the additional 9V, switch and Uno.
Step 7: Coming Together
Now that everything is in place, load all the code onto the respective boards. Once finished and you have executed demo.loop(), the robot will be able to follow black lines and the sensors should bring in data on their respective terminal windows. Congratulations, you now have your very own personal Securibot!
In case you want to learn the logistics of the robot, then this section is supplementary material on how the software works. Essentially, the robot will continue to follow the line in a loop and the ultrasonic and passive infrared sensors will display the distance and motion of objects directly in front of the car.
If you wish to add more protocols onto it, here are additional resources that you may use in order to give the car better software or hardware. Since the Securibot is a bit basic, it serves as a platform for you to modify to your heart's content. Design laser-cut armor, write advanced detection programs, add spikes to make your own combat robot; the potential of what you can do with the Securibot is limitless!
If you want to add more acrylic armor to make the chassis look nicer, we have already made them on the github repository as .pdfs which can be loaded onto a laser cutter. The files are armor-side.pdf, front-back-plates-fixed.pdf, and hinge-fix.pdf. For more tutorials on how to laser cut, go to to learn more cutting projects.
The guide isn't downloading at this time. (12-7-17).
I like the idea and would love to see details to build this!
That would be useful to have :) Do you have any progress photos from when you were building it?
Dear @dabeaz, thank you for the nice course. From Exercise 7.7, found here: can someone kindly explain what the prop is that is returned in the typedproperty function? It's unclear to me whether the property is returned or whether the property setter function is returned.
# typedproperty.py

def typedproperty(name, expected_type):
    private_name = '_' + name

    @property
    def prop(self):
        return getattr(self, private_name)

    @prop.setter
    def prop(self, value):
        if not isinstance(value, expected_type):
            raise TypeError(f'Expected {expected_type}')
        setattr(self, private_name, value)

    return prop
And in the below block of code, how can the typedproperty('name', str) be assigned to the variable name when, as far as I can tell, the typedproperty function from above doesn't return any value?
from typedproperty import typedproperty

class Stock:
    name = typedproperty('name', str)
    shares = typedproperty('shares', int)
    price = typedproperty('price', float)

    def __init__(self, name, shares, price):
        self.name = name
        self.shares = shares
        self.price = price
Thanks in advance!
Am I understanding correctly that when a Stock instance is created, e.g.:

>>> stock = Stock('GOOG', 100, 490.1)

then the __init__() for the Stock instance runs, including:

self.name = name

or, for this particular instance, with 'GOOG' as name because that was supplied as the name argument:

self.name = 'GOOG'

If that's all correct, how does the code

name = typedproperty('name', str)

become involved?
name is a property, so this question is really the same as "how does a property get involved?"
class Stock:
    @property
    def name(self):
        return self._name

    @name.setter
    def name(self, value):
        if not isinstance(value, str):
            raise TypeError("Expected a value")
        self._name = value

    def __init__(self, name, shares, price):
        self.name = name
        self.shares = shares
        self.price = price

s = Stock('GOOG', 100, 490.1)  # Works
t = Stock(123, 100, 490.1)     # Fails
To answer this question, you'll need to do some research on how properties work. That's left as an exercise... (wink ;-).
Thank you--I believe I understand now. For simplicity, if only looking at how self.name = name works: when a Stock instance is created, even before __init__() is run we already have the following established:

name = typedproperty('name', str)

So, the above code has set a property instance to the variable name. Subsequently, when __init__() runs the following code:

self.name = name

self.name is equivalent to self.prop (prop being the closure assigned to name in the first block of code, which has retained the arguments that were given to the typedproperty() function initially). This means that the above statement causes the setter method defined in prop to be called, which raises an exception if the wrong type of value is passed to it.
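For anyone who lands here later, this little end-to-end check (my own sketch, with typedproperty pasted inline rather than imported from the course file) shows that the returned prop is a full property object whose setter runs on every assignment:

```python
# Sketch: verifying that typedproperty's returned prop enforces types.
def typedproperty(name, expected_type):
    private_name = '_' + name

    @property
    def prop(self):
        return getattr(self, private_name)

    @prop.setter
    def prop(self, value):
        if not isinstance(value, expected_type):
            raise TypeError(f'Expected {expected_type}')
        setattr(self, private_name, value)

    return prop  # this property object is what gets bound to the class attribute


class Stock:
    name = typedproperty('name', str)
    shares = typedproperty('shares', int)

    def __init__(self, name, shares):
        self.name = name      # goes through the setter defined in prop
        self.shares = shares


s = Stock('GOOG', 100)
print(s.name)             # reads through the getter

try:
    s.shares = 'lots'     # wrong type: the setter raises
except TypeError as e:
    print('rejected:', e)
```

Running it shows 'GOOG' printed via the getter and the string assignment rejected by the setter.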
308 874705.88 3478.83
309 877389.99  809.21
310 878199.2     3.37
311 878202.57    0.01
312 878202.59    0.0

However, my table kept running until month 447:

445 878202.59 0.0
446 878202.59 0.0
447 878202.59 0.0
Total paid: 878202.59
Months: 0.0

Given Dabeaz' response to issue 95, is this "good enough"? I ask because I am moving on to the next problems. I don't want to be stuck here forever. I think my real problem is that I don't know how to calculate a mortgage! ;-)
Dear all, I need help understanding the exercise 1.5 solution.
Question:
A rubber ball is dropped from a height of 100 meters and each time it hits the ground, it bounces back up to 3/5 the height it fell. Write a program bounce.py that prints a table showing the height of the first 10 bounces.
Solution:
height = 100
bounce = 1
while bounce <= 10:
    height = height * (3/5)
    print(bounce, round(height, 4))
    bounce += 1
My challenge is I could not comprehend what makes the value of height change at every iteration.
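One way to see it: in height = height * (3/5), the right-hand side is evaluated first using the current value, and the result is then rebound to the name height, so each pass through the loop starts from the previous bounce's height. A quick trace of the first few bounces (an illustrative sketch, not part of the original solution):

```python
# Trace the first three bounces to see height shrink by 3/5 each pass.
height = 100
for bounce in range(1, 4):
    height = height * (3 / 5)   # rebinds height: 60.0, then 36.0, then 21.6
    print(bounce, round(height, 4))
```

The printed heights are 60.0, 36.0 and 21.6, each 3/5 of the one before it.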
Dear all,
In the mortgage.py code as stated below, please can someone explain why we have (1+rate/12)
I am not able to wrap my head around why we need to add 1 to that line
principal = 500000.0
rate = 0.05
payment = 2684.11
total_paid = 0.0
while principal > 0:
    principal = principal * (1+rate/12) - payment
    total_paid = total_paid + payment
print('Total paid', total_paid)
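For what it's worth, here is how I read that line (my own explanation, not from the course): rate is an annual rate, so rate/12 is one month's interest fraction, and multiplying by (1 + rate/12) keeps the existing principal (the 1) while adding one month of interest on top, before the payment is subtracted. A quick check that the factored form matches the spelled-out form:

```python
principal = 500000.0
rate = 0.05
payment = 2684.11

monthly_interest = principal * (rate / 12)            # one month of interest
new_principal = principal + monthly_interest - payment

# Same computation, factored: keep the principal (the "1") and add rate/12 of it.
factored = principal * (1 + rate / 12) - payment

print(new_principal, factored)   # the two forms agree
```

Without the 1, principal * (rate/12) would throw away the loan balance and keep only the interest.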
def __getitem__(self, index):
return self._holdings[index]
import report
portfolio = report.read_portfolio('Data/portfolio.csv')
class Solution(object):
    def canCross(self, stones):
        # create a dictionary where the keys are the stones
        # and the values are empty sets that will contain
        # integers which represent the jump lengths that
        # can reach the stone represented by the key
        d = dict((x, set()) for x in stones)

        # catches a tricky test case: stones = [0,2]
        if stones[1] != 1:
            return False

        # the problem says that the first jump made is
        # always of length 1 and starts at stone 0. That
        # means the jump length that was used to reach
        # stone 1 is 1 so I add it into the set at stone 1
        d[1].add(1)

        # iterate over all the stones after 0
        for i in xrange(len(stones[1:])):
            # iterate over each jump length used to reach
            # the current stone
            for j in d[stones[i]]:
                # iterate over every jump length possible
                # (k-1, k, k+1) given the current jump length
                for k in xrange(j-1, j+2):
                    # if that jump length lands on a stone
                    if k > 0 and stones[i]+k in d:
                        # add the jump length used to get there to
                        # the set of jump lengths for the stone the
                        # jump puts the frog on
                        d[stones[i]+k].add(k)

        # if the last stone has any jump lengths in its
        # set, that means that it is possible to get to
        # the last stone
        return d[stones[-1]] != set()
Maybe the comments are good, but to be honest the first thing I did was remove them and only read the actual code :-P. I like seeing the whole code at once, and seeing how much it is. The missing syntax highlighting also makes it hard to just focus on the code. (You'd get syntax highlighting if you put the formatting backticks on their own line and also after the code.)
Anyway, I'd do a few things differently, like catching non-starters before I do any other work, and not looping over stones by index.
def canCross(self, stones):
    if stones[1] != 1:
        return False
    d = {x: set() for x in stones}
    d[1].add(1)
    for x in stones[:-1]:
        for j in d[x]:
            for k in xrange(j-1, j+2):
                if k > 0 and x+k in d:
                    d[x+k].add(k)
    return bool(d[stones[-1]])
Could even be for x in stones:, that might do a little more work at the end but would prevent creating that almost complete slice of stones.
Thanks for the protip on the back ticks!
TBH I don't know why I didn't put the check on top before I constructed my dict. I am going to leave the order just to avoid confusion.
Also, good call on taking out xrange in favor of for x in stones:
I was doing something differently when I first started and totally forgot that I wasn't leveraging the index anymore.
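For later readers, here is the same idea as a self-contained Python 3 sketch (xrange replaced with plain iteration; the function name and layout are mine, not from the thread):

```python
def can_cross(stones):
    # For each stone, the set of jump lengths that can land the frog on it.
    reachable = {x: set() for x in stones}

    # The first jump is always of length 1; if stone 1 isn't at position 1,
    # the frog can never leave stone 0.
    if stones[1] != 1:
        return False
    reachable[1].add(1)

    for x in stones[:-1]:
        for j in reachable[x]:
            for k in (j - 1, j, j + 1):   # the three allowed next jumps
                if k > 0 and x + k in reachable:
                    reachable[x + k].add(k)

    # Reaching the last stone with any jump length means success.
    return bool(reachable[stones[-1]])


print(can_cross([0, 1, 3, 5, 6, 8, 12, 17]))   # LeetCode's first example
print(can_cross([0, 1, 2, 3, 4, 8, 9, 11]))    # LeetCode's second example
```

The first example is crossable, the second is not (the gap before stone 8 is too wide for any reachable jump length).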
Type: Posts; User: Yves M
[ Moved here ]
Agreed, it's better here.
For the original question, I cannot say, since for me it loads correctly. Are you behind a proxy that blocks traffic? Are you maybe using a browser plugin that blocks...
Hi Paul,
Did you try running it again? I'm just curious whether it's the same issue or something else.
Regards,
Yves
When you override a pure virtual function, it has to have the same signature. Otherwise you haven't actually overridden it, you have just declared an unrelated virtual function.
virtual T...
That's not entirely correct. When you use delete [], the destructors for each individual object are called first. If you replace char and float by car and bike classes, it's obvious that using the...
For the deallocation speed, did you try running your program outside of the IDE? Just open a CMD window, go to the release directory and run it from there. The thing is that Visual Studio since 2003...
This forum is for computer programming professionals, not for malware writers. This type of questions is not condoned.
stdext::hash_map<std::string, CObjects*> m_SparseMap;
Just a mention that putting pointers to objects inside STL containers is not generally a good idea, precisely because you run into problems...
[ redirected from the Visual C++ forum ]
I think you are probably having issues with re-entrant code. If I understand your design correctly, multiple connections can be active at any time, so your program is running listeners concurrently....
That makes sense, thanks. So I guess I'll have to implement it on my own then.
Thanks,
Yves
Hi everyone,
I'd like to run a design by you and ask on how to implement it. Here is the basic problem:
I have a servlet A that returns full JPG images.
I have a servlet B that returns JPG...
Have you installed the platform SDK?
You could use unix commands for this.
grep ':CME,' filename | sed -e 's/expression to parse out what you want/\1/' | uniq
The grep selects the lines you're interested in from the source file....
It's in the FAQ.
Basically, templates need the source code.
1) Well, you are safe, because the lack of a virtual destructor is not a problem when there is no way to actually use it.
2) Basically yes. However, you should write a big comment at the start of...
Hi,
You can simply use next_permutation from STL algorithms. This is an example of how you would use it. The data structures are not the same as yours, but you should be able to adapt that with a...
If you can define your "similarity" mathematically, then you're halfway there. If you can't then you should probably try to find out what you think of when you think of your "similarity". Statistical...
\[MSG_74{[^\]]*}\]
Replace with
\["MSG_74\1"\]
I guess the name lookup is a bit weird in this case. Maybe try using
const _Ty& operator *() const
{
return (Acc::Value(iterator::_Ptr));
}
No, the memory issue is an aside (that can be very helpful of course, but still).
64 bit code is faster in these situations:
Your program's integer arithmetic takes advantage of 64 bit values....
It's pretty much the same speed on most processors.
However, why don't you also add a 64bit executable? If it's a new project, I think that would definitely make sense and then you could have...
[ moderator note ]
Please leave personal differences aside when posting on a technical forum. This forum is meant for helping with or solving precise questions that are stated by the original...
That's exactly how I read his question.
If you can modify the code a little bit, I would add a function that dumps the vector into a csv file and then open that with Excel. It means that you will have to add the function call to those...
CodePlex Project Hosting for Open Source Software
I currently initialize my modules using a UnityBootStrapper. The bootstrapper gets a list of ModuleInfos by deserializing(from a xml file) a list of ModuleInfos and the module infos are added to the ModuleCatalog. The modules are instantiated and initialized
by the bootstrapper and thats all good. My question is, how do I then inject custom data(deserialized from a file) that belongs to a specific module. For example I have a class below;
public class ModuleA : IModule
{
public List<string> NameList { get; set; }
}
What would would be the right practice for me to inject an existing(deserialized) NameList to ModuleA?
I read up on Annotating Objects into Constructor;
and
Annotating Objects into Properties;
but I'm still missing something here, because how do I match the serialized data with a particular module that will be initialized by the UnityBootStrapper based on what's in the ModuleCatalog?
Is there anywhere I can attach data to a ModuleInfo(or even a custom IModuleCatalogItem) that 'somehow' the ModuleCatalog in the UnityBootstrapper knows how to create the module and inject the custom data onto the modules property.
This would seem like a common problem because how else would you specify the region name of a shell that you want your view to inject onto during module.Initialize()? I definitely dont want to hardcode the region name. Wouldnt it be nice if you can somehow
easily set that in a xaml file or configuration file that specifies a region name for the module to mount that view onto?
Any help would be appreciated.
Would having a custom IModuleInitializer work in this?
Do you think for my case, a better approach would just be deserializing the modules(different module types that implements IModules) itself and use BuildUp to buildup the other necessary properties I use like the UnityContainer, RegionManager and EventAggregator?
In this case, my modules will have all the deserialized data with it.
Hi,
One possibility in order not to hard code the region names would be to define constants for the region names in an infrastructure project (one that would be referenced by your shell and all your modules).
As for the approach for deserializing your data, I wouldn't recommend to place such logic in the
IModule class. One possibility for that would be to place the logic for deserializing your data in the component that will be responsible of handling it, such as a controller or a service.
If your scenario strictly requires you to place that logic inside the IModule, the approach you're mentioning in your second post seems to be a valid one.
I hope you find this helpful.
Guido Leandro Maliandi
Thank you very much for your suggestions Guido. I will have to think abit more here in terms of how I would design my infrastructure.
David Abrahams wrote:
> John Reid <johnbaronreid at netscape.net> writes:
>
>> Hi,
>>
>> I'm trying to call a python function from C++. I pass the function to
>> C++ as a boost::python::object. I can call functions with no arguments,
>> when I try to pass a C++ created object as an argument I get the
>> following error:
>>
>> Traceback (most recent call last):
>>   File "python/test_python.py", line 8, in ?
>>     d.connect_new_deal_slot_py_2( f )
>> TypeError: No to_python (by-value) converter found for C++ type: class
>> module::classname
>
> Actually that would be namespacename::classname. It's a C++ class
> name, most probably of the return type of the C++ function
> implementing your connect_new_deal_slot_py_2 method. Another
> possibility is that it's the name of a C++ class of which you're
> passing an instance to f inside that C++ function implementing
> connect_new_deal_slot_py_2. Either way, you haven't wrapped that C++
> class.

I was pretty sure I had wrapped it. Was that not what I was doing with the following code?

class_< test, noncopyable >( "test" )
    .def( "connect_slot", &test::connect_slot_py )
    ;

Anyhow I tracked the problem down a little more and found out it did not like the noncopyable base class. Without this everything was OK. From the documentation it wasn't clear that this would make it difficult for me to pass C++ objects to python. I'm guessing there is a way to do this even with the noncopyable base class but I couldn't work it out.

It is clear that I had some problems understanding what was going wrong. I'm not really sure if that was a problem with me, the error messages or the documentation.

Thanks for your help.
Best,
John.
Hello, I'm new to C++ and I am stuck on a problem I was assigned. Below is the problem statement and I will post my code as well.
Write a C++ program that reads one line of text from the user via the keyboard, then reports the frequency of the word lengths found in that line of text. Here is a sample run of the program. The program first asks the user to enter a line of text:
Enter a line of text:
Then, the user types in a line such as this one (the user can type in any text of up to 80 characters):
The cat ran down the road. Meanwhile, a storm was gathering.
Requirements
1. Put your solution in a file name "WordFreq.cpp". You can have other files if you wish (the main function should be in WordFreq.cpp).
2. The user can type up to an 80 character line. Therefore, the longest possible word can be 80 characters.
3. Your "Word Frequency Report" should report up to and including the longest word found, and not beyond that length. For example, if the longest word in the user's input is three characters, then the report should go from 1 to 3. If the longest word in the user input was 21 characters, then the report should go from 1 to 21. In the sample run of the program, above, the longest words were of length ten, so the report went from 1 to 10. The longest possible word is 80 characters. In the second sample run of the program, below, the longest word was five characters so only word lenghths of 1 to 5 were reported. Your program should automatically adjust the length of the report.
Hints
1. You can use the cin.getline function to read a line of text from the user. See the textbook for details on getline.
2. You can use function strtok to break a line into its separate words. I posted a video on how to use strtok that you might want to look at (strtok is also covered in the textbook).
3. Get an early start on the program. If you get stuck you can send me your code via email.
Here is my code.
Like I said I'm new to programming all together so this seems very difficult to me. My output for that just counts the first token and also the amount of words in the input. So if someone was to input "I hate this program" the output would read:

Code:
#include "stdafx.h"
#include <iostream>
#include <cstring>
#include <iomanip>
using namespace std;
int main()
{
int wordCount = 0;
int letterCount = 0;
cout << "Enter a line of text" << endl;
char text[80];
cin.getline(text,80);
char *p = strtok(text, " ");
while (p != NULL)
{
p = strtok(NULL, " ");
letterCount = 0;
for ( int i =0; text[i] != '\0'; i++)
{
letterCount ++;
}
cout << letterCount << endl;
}
return 0;
}
1
1
1
1

1 being the amount in the first token and repeated 4 times for the amount of words in the input. I basically need to figure out how to get past the first token, and also how to implement the array and make a count on said array.
Sorry if this is long.
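In case a later reader wants a working version: the two bugs are (a) strtok is advanced to the next token before the current one is measured, and (b) the inner loop counts the whole text buffer instead of the token p. A sketch of a fix (the function name, layout, and the word-length tally the assignment asks for are mine):

```cpp
#include <cassert>
#include <cstring>

// Tally word lengths in `line` (modified in place by strtok).
// lengths[k] ends up holding the number of words of length k.
// Returns the length of the longest word found.
int wordLengthFreq(char line[], int lengths[], int maxLen)
{
    for (int i = 0; i <= maxLen; i++)
        lengths[i] = 0;

    int longest = 0;
    for (char *p = strtok(line, " "); p != NULL; p = strtok(NULL, " "))
    {
        int len = (int)strlen(p);   // measure THIS token, not the whole buffer
        if (len <= maxLen)
            lengths[len]++;
        if (len > longest)
            longest = len;
    }
    return longest;
}
```

In main you would cin.getline(text, 81), call wordLengthFreq(text, lengths, 80), then print lengths[1] through lengths[longest]. Advancing p at the end of each pass, after counting the current token, is what "gets past the first token" correctly.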
- What is HXT?
- Why HXT?
- Hello World
- Understanding Arrows
- Getting Started
- Parse a String as HTML
- Arrow Interlude #1: HXT Arrows
- Extracting Content
- Pretty-printing
- Selecting Elements
- Arrow Interlude #2
- Children and Descendents
- Working With Text
- Modifying a Node
- Modifying Children
- Conditionals (ifA)
- More Conditionals (when, guards, and filterA)
- Using Functions as Predicates
- Using Haskell Functions
- Working With Lists
- Introducing HandsomeSoup
- Avoiding IO
- Debugging
- Epilogue
Working With HTML in Haskell
updated: April 27, 2012
This is a complete guide to using HXT for parsing and processing HTML in Haskell.
What is HXT?
HXT is a collection of tools for processing XML with Haskell. It's a complex beast, but HXT is powerful and flexible, and very elegant once you know how to use it.
Why HXT?
Here's how HXT stacks up against some other XML parsers:
HXT vs TagSoup
TagSoup is the crowd favorite for HTML scraping in Haskell, but it's a bit too basic for my needs.
HXT vs HaXml
HXT is based on HaXml. The two are very similar, but I think HXT is a little more elegant.
HXT vs hexpat
hexpat is a high-performance xml parser. It might be more appropriate depending on your use case. hexpat lacks a collection of tools for processing the HTML, but you can try Parsec for that bit.
HXT vs xml (Text.XML.Light)
I haven't used Text.XML.Light. If you have used it and liked it, please let me know!
The one thing all these packages have in common is poor documentation.
Hello World
To whet your appetite, here's a simple script that uses HXT to get all links on a page:
import Text.XML.HXT.Core

main = do
    html <- readFile "test.html"
    let doc = readString [withParseHTML yes, withWarnings no] html
    links <- runX $ doc //> hasName "a" >>> getAttrValue "href"
    mapM_ putStrLn links
Understanding Arrows
I don't assume any prior knowledge of Arrows. In fact, one of the goals of this guide is to help you understand Arrows a little better.
The Least You Need to Know About Arrows
Arrows are a way of representing computations that take an input and return an output. All Arrows take a value of type a and return a value of type b. All Arrow types look like Arrow a b:
-- an Arrow that takes an `a` and returns a `b`:
arrow1 :: SomeType a b

-- an Arrow that takes a `b` and returns a `c`:
arrow2 :: SomeType b c

-- an Arrow that takes a `String` and returns an `Int`:
arrow3 :: SomeType String Int
Arrows sound like functions! In fact, functions are arrows.
-- a function that takes an Int and returns a Bool
odd :: Int -> Bool

-- also, an Arrow that takes an Int and returns a Bool
odd :: (->) Int Bool

Don't get confused by the two different type signatures! Int -> Bool is just the infix way of writing (->) Int Bool.
Arrow Composition
You'll be using >>> a lot with HXT, so it's a good idea to understand how it works. >>> composes two arrows into a new arrow.

We could compose length and odd like so: odd . length.

Since functions are Arrows, we could also compose them like so: length >>> odd or odd <<< length.

They're all exactly the same!

ghci> odd . length $ [1, 2, 3]
True
ghci> length >>> odd $ [1, 2, 3]
True
ghci> odd <<< length $ [1, 2, 3]
True
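These three spellings really are the same function, which you can check with nothing but Control.Arrow from base (a self-contained sketch; nothing HXT-specific is involved):

```haskell
import Control.Arrow ((>>>), (<<<))

-- Three equivalent spellings of "is the length odd?"
oddLength1 :: [a] -> Bool
oddLength1 = odd . length     -- plain function composition

oddLength2 :: [a] -> Bool
oddLength2 = length >>> odd   -- left-to-right Arrow composition

oddLength3 :: [a] -> Bool
oddLength3 = odd <<< length   -- right-to-left Arrow composition
```

Loaded in ghci, all three give True for [1, 2, 3] and False for [1, 2, 3, 4].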
A function is the most basic type of arrow, but there are many other types. HXT defines its own Arrows, and we will be working with them a lot.
Let's get started. Don't worry if Arrows still seem unclear. We will be writing a lot of examples, so they should become clear soon enough.
Getting Started
Step 1: Install HXT:
cabal install hxt
Step 2: Install HandsomeSoup:
cabal install HandsomeSoup
HandsomeSoup contains a powerful css function that will allow us to access elements using css selectors. We will use this function until we can write a basic version of it ourselves as explained here. For more info about HandsomeSoup, see this section.
Step 3: Here's the HTML we'll be working with:
<html>
<head>
<title>The Dormouse's story</title>
</head>
<body>
<p class='title'><b>The Dormouse's story</b></p>
<p class='story'>Once upon a time there were three little sisters; and their names were
<a href='' class='sister' id='link1'>Elsie</a>,
<a href='' class='sister' id='link2'>Lacie</a> and
<a href='' class='sister' id='link3'>Tillie</a>;
and they lived at the bottom of a well.</p>
<p class='story'>Some text</p>
</body>
</html>
Save it as test.html.
Step 4: Import HXT, HandsomeSoup, and the html file into ghci:

import Text.XML.HXT.Core
import Text.HandsomeSoup

html <- readFile "test.html"
Parse a String as HTML
Use readString:
ghci> let doc = readString [withParseHTML yes, withWarnings no] html
doc is now a parsed HTML document, ready to be processed!
Now we can do things like getting all links in the document:
ghci> doc >>> css "a"
Arrow Interlude #1: HXT Arrows
You just used your first Arrow!
css is an Arrow. Here's its type:
ghci> :t css
css :: ArrowXml a => String -> a XmlTree XmlTree

So css takes an XmlTree and returns another XmlTree. A lot of Arrows in HXT have this type: they all transform the current tree and return a new tree.
Extracting Content
doc is wrapped in an IOStateArrow. If you try to see the contents of doc, you'll get an error:
<interactive>:1:1:
    No instance for (Show (IOSLA (XIOState s0) a0 XmlTree))
      arising from a use of `print'
    Possible fix:
      add an instance declaration for (Show (IOSLA (XIOState s0) a0 XmlTree))
    In a stmt of an interactive GHCi command: print it
Use runX to extract the contents.

contents <- runX doc
print contents
Prints out:
[NTree (XTag "/" [NTree (XAttr "transfer-Status")...]
Pretty-printing
I don't want to see ugly Haskell types. Let's use xshow to convert our tree to HTML:

res <- runX . xshow $ doc
mapM_ putStrLn res

Prints out:

<html><head><title>The Dormouse's story</title> ...
Much better! Now use indentDoc to add proper indentation:

res <- runX . xshow $ doc >>> indentDoc
mapM_ putStrLn res

Prints out:

<html>
  <head>
    <title>The Dormouse's story</title>
...
Perfect.
Selecting Elements
Note: To keep things simple for now, these examples make use of our custom css Arrow.

Get all a tags

doc >>> css "a"

Check if those links have an id attribute

doc >>> css "a" >>> hasAttr "id"

Get all values for an attribute

doc >>> css "a" >>> getAttrValue "href"

See how easy it is to chain transformations together using >>>? Notice how using getAttrValue gets us the links for all a tags, instead of just one:

ghci> runX $ doc >>> css "a" >>> getAttrValue "href"
["","",""]

This is a core idea behind HXT. In HXT, everything you do is a series of transformations on the whole tree. So you can use getAttrValue and HXT will automatically apply it to all the elements.
Get all links that have a particular id
doc >>> css "a" >>> hasAttrValue "id" (== "link1")
Get multiple values at once
Use <+>:

-- get all p tags as well as all a tags
doc >>> css "p" <+> css "a"

Get all element names

doc //> hasAttr "id" >>> getElemName

We used the special function "//>" here! It's covered in this section.

Get all elements where the text contains "mouse"

import Data.List
runX $ doc //> hasText (isInfixOf "mouse")

Get the element's name and the value of id

ghci> runX $ doc //> hasAttr "id" >>> (getElemName &&& getAttrValue "id")
[("a","link1"),("a","link2"),("a","link3")]

Let's talk about the &&& function.
Arrow Interlude #2
&&& is a function for Arrows. The best way to see how it works is by example:
ghci> length >>> (odd &&& (+1)) $ ["one", "two", "twee"]
(True,4)

&&& takes two arrows and creates a new arrow. In the above example, the output of length is fed into both odd and (+1), and both return values are combined into a tuple (True, 4).

We used &&& to get an element's name and its id: (getElemName &&& getAttrValue "id").

Why is this function useful? Suppose we want to get all attributes on links:

runX $ doc >>> css "a" >>> getAttrl >>> getAttrName

Here's where it's nice to have &&&. The above line gives you something like this:

["href","class","id","href","class","id","href","class","id"]

The only problem: you have no idea what element each attribute belongs to! Use &&& to get a reference to the element as well:

ghci> runX $ doc >>> css "a" >>> (this &&& (getAttrl >>> getAttrName))
[(...some element..., "href"), (...another element..., "class")..etc..]
HXT has lots of other arrows for selecting elements. See the docs for more.
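The fanout shape of &&& is easy to examine in isolation with plain functions, no HXT required (a sketch mirroring the (getElemName &&& getAttrValue "id") pattern):

```haskell
import Control.Arrow ((&&&))
import Data.Char (toUpper)

-- One input is fed to both arrows and the results come back paired,
-- just like (getElemName &&& getAttrValue "id") pairs a name with an id.
lengthAndShout :: String -> (Int, String)
lengthAndShout = length &&& map toUpper
```

For example, lengthAndShout "link1" gives (5,"LINK1") — one traversal of the input, two results.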
Children and Descendents
HXT has a few different functions for working with children, and it can be tricky to decide which one to use.
So far we have been using the css function to get elements. Now let's see how we could implement a basic version of it:

css tag = multi (hasName tag)

css uses hasName to get elements with a given tag. Why don't we just use hasName instead of css?

ghci> runX $ doc >>> hasName "a"
[]

hasName only works on the current node, and ignores its descendents, whereas css allows us to look in the entire tree for elements. Here are some arrows for looking in the entire tree:
We could use
getChildren to get the immediate child nodes:
ghci> runX $ doc >>> getChildren >>> getName
["html"]

But what if we want the names of all descendents, not just the immediate child node? Use multi:

ghci> runX $ doc >>> multi getName
["/","html","head","title","body","p","b","p","a","a","a","p"]

multi recursively applies an Arrow to an entire subtree. css uses multi to search across the entire tree for nodes.
deep and deepest
These two Arrows are related to multi.

deep recursively searches a whole tree for subtrees for which a predicate holds. The search is performed top down. When a matching tree is found, it becomes an element of the result list and is not examined further for subtrees for which the predicate could also hold:

-- deep successfully got the name of the root element,
-- so it didn't go through the child nodes of that element.
ghci> runX $ doc >>> deep getName
["/"]

-- here, deep will get all p tags but it won't look for
-- nested p tags (multi *will* look for nested p tags)
ghci> runX $ doc >>> deep (hasName "p") >>> getName
["p","p","p"]

deepest is similar to deep but performs the search from the bottom up:

ghci> runX $ doc >>> deepest getName
["title","b","a","a","a","p"]
/> and //>

/> looks for a direct child (i.e. what getChildren does). //> looks for a node somewhere under this one (i.e. what deep does).
So, these two lines are equivalent:
doc /> getText
doc >>> getChildren >>> getText
And these two lines are equivalent:
doc //> getText
doc >>> getChildren >>> (deep getText)
Working With Text
Get the text in an element
ghci> runX $ doc >>> css "title" /> getText
["The Dormouse's story"]
Remember, this is the same as writing:
runX $ doc >>> multi (hasName "title") >>> getChildren >>> getText
Get the text in an element + all its descendents
doc >>> css "body" //> getText
Try using /> instead of //>. What do you get?
Get All Links + Their Text
The wrong way:
ghci> runX $ doc >>> css "a" >>> (getAttrValue "href" &&& getText)
[]

This returns [] because doc >>> css "a" >>> getText returns [].

We need to go deeper! (i.e. use deep):

ghci> runX $ doc >>> css "a" >>> (getAttrValue "href" &&& (deep getText))
[("","Elsie"),("","Lacie"),("","Tillie")]
Remove Whitespace
Use removeAllWhiteSpace. It removes all nodes containing only whitespace.

runX $ doc >>> css "body" >>> removeAllWhiteSpace //> getText

If you have used BeautifulSoup, this is kinda like the stripped_strings method.
Modifying a Node
Modifying text
Use changeText. Here's how you uppercase all the text in p tags:

import Data.Char

uppercase = map toUpper

runX . xshow $ doc >>> css "p" /> changeText uppercase
Add or change an attribute
Use addAttr:
runX . xshow $ doc >>> css "p" >>> addAttr "id" "my-own-id"
Modifying Children
processChildren and processTopDown allow you to modify the children of an element.

Add an id to the children of the root node

-- adds an id to the <html> tag
runX . xshow $ doc >>> processChildren (addAttr "id" "foo")

Add an id to all descendents of the root node

-- adds an id to all tags
runX . xshow $ doc >>> processTopDown (addAttr "id" "foo")
processChildren is similar to getChildren, except that instead of returning the children, it modifies them in place and returns the entire tree.

processTopDown is similar to multi.

processTopDownUntil is similar to deep.
Conditionals (ifA)
HXT has some useful functions that allow us to apply Arrows based on a predicate.
Using ifA:

ifA is the if statement for Arrows. It's used as ifA (predicate Arrow) (do if true) (do if false).

Uppercase all the text for p tags only:
runX . xshow $ doc >>> processTopDown (ifA (hasName "p") (getChildren >>> changeText uppercase) (this))
We use the identity arrow this here. You can read this as: if the element is a p tag, uppercase it, otherwise pass it through unchanged.
this has a complementary arrow called none.
none is the zero arrow. Here's how we can use none to remove all p tags:
runX $ doc >>> processTopDown (ifA (hasName "p") (none) (this))
More Conditionals (when, guards, and filterA)
when and guards can make your ifA code easier to read.
Uppercasing text for p tags using when instead of ifA:
runX . xshow $ doc >>> processTopDown ((getChildren >>> changeText uppercase) `when` hasName "p")

f `when` g -- when the predicate `g` holds, `f` is applied, else the identity filter `this`.
Deleting all p tags using guards:
runX $ doc >>> processTopDown (neg (hasName "p") `guards` this)

g `guards` f -- when the predicate `g` holds, `f` is applied, else `none`.
Deleting all p tags using filterA:
runX $ doc >>> processTopDown (filterA $ neg (hasName "p"))

filterA f -- a shortcut for f `guards` this
Using Functions as Predicates
How would we get all nodes that have "mouse" in the text? Here's one way:
runX $ doc //> hasText (isInfixOf "mouse")
But if the hasText function didn't exist, we could write it ourselves! Here's how:
First, import Text.XML.HXT.DOM.XmlNode. It defines several functions that work on Nodes.
import qualified Text.XML.HXT.DOM.XmlNode as XN
(Note the qualified import... this module has a lot of names that conflict with HXT.Core).
Here's a function that returns true if the given node's text contains "mouse":
import Data.Maybe
import Data.List

hasMouse n = "mouse" `isInfixOf` text
    where text = fromMaybe "" (XN.getText n)
isA lifts a predicate function to an HXT Arrow. Combined with isA, we can use hasMouse to filter out all nodes that don't have mouse as part of their text:
runX $ doc //> isA hasMouse
We can use isA wherever a predicate Arrow is needed: ifA, when, guards etc.
See the docs for more conditionals for Arrows.
See these docs for more functions you can use to write your own Arrows.
Using Haskell Functions
Suppose we have an array of link texts:
ghci> runX $ doc >>> css "a" //> getText
["Elsie","Lacie","Tillie"]
And we want to get the length of each bit of text. So we need an arrow version of the length function.
We can lift the length function into an HXT arrow using arr:
ghci> runX $ doc >>> css "a" //> getText >>> arr length
[5,5,6]
Note how length automatically gets applied to each element without us having to use
map. This is because Arrows in HXT always apply to the entire tree, not just one node. This behaviour is abstracted away so that you can just write a function that works on one node and have it apply to every node in the tree automatically.
Working With Lists
This section was written after Ywen asked this question on Reddit.
So far, we have applied arrows to one node at a time. In the previous section, we applied length to every node individually. What if we wanted to work with all the nodes at once, to do a map or a foldl over them?
HXT has some special functions that allow you to work on the entire list of elements, instead of working on just one element.
>>. and >.
We already know how to get the text for all links:
ghci> runX $ doc >>> css "a" //> getText
["Elsie","Lacie","Tillie"]
How do we get the text with the results reversed? Use >>.:
ghci> runX $ (doc >>> css "a" //> getText) >>. reverse
["Tillie","Lacie","Elsie"]
>>. takes a function that takes a list, and returns a list, so it allows us to use all our Haskell list functions.
We could sort all the letters in the names:
ghci> import Data.List
ghci> runX $ (doc >>> css "a" //> getText) >>. (map sort)
["Eeils","Lacei","Teiill"]
How do we count the number of links in the doc? Use >.:
ghci> runX $ (doc >>> css "a" //> getText) >. length
[3]
>. takes a function that takes a list and returns a single value.
Getting the length of the text of all links combined:
ghci> runX $ (doc >>> css "a" //> getText >>. concat) >. length
[16]
The parentheses are important here!
-- Counts the number of links in the doc
ghci> runX $ (doc >>> css "a" //> getText) >. length
[3]

-- Oops! Runs `>. length` on each link individually
ghci> runX $ doc >>> css "a" //> getText >. length
[1,1,1]
Introducing HandsomeSoup
HandsomeSoup is an extension for HXT that provides a complete CSS2 selector implementation, so you can use complicated selectors like:
doc >>> css "h1#title"
doc >>> css "li > a.link:first-child"
doc >>> css "h2[lang|=en]"
...or any other valid CSS2 selector. Here are some other goodies it provides:
Getting Attributes With HandsomeSoup
Use ! instead of getAttrValue:
doc >>> css "a" ! "href"
Scraping Online Pages
Use fromUrl to download and parse pages:
doc <- fromUrl url
links <- runX $ doc >>> css "a" ! "href"
Downloading Content
Use openUrl:
content <- runMaybeT $ openUrl url
case content of
    Nothing -> putStrLn $ "Error: " ++ url
    Just content' -> writeFile "somefile" content'
Parse Strings
Use parseHtml:
contents <- readFile [filename]
doc <- parseHtml contents
Avoiding IO
Look at the type of our html tree:
ghci> :t doc
doc :: IOSArrow XmlTree (NTree XNode)
It's in IO! This means that any function that parses the HTML will have to be in IO. What if you want a pure function for parsing the HTML?
You can use hread:
-- old way:
ghci> let old = runX doc

-- using hread:
ghci> let new = runLA hread contents
And here are their types:
ghci> :t old
old :: IO [XmlTree] -- IO!

ghci> :t new
new :: [XmlTree] -- no IO!
An Example: Getting All Links
ghci> runLA (hread >>> css "a" //> getText) contents
["Elsie","Lacie","Tillie"]
So why haven't we been using hread? Because IOSArrow is much more powerful; it gives you IO + State. hread is also much more stripped down. From the docs:
parse a string as HTML content, substitute all HTML entity refs and canonicalize tree. (substitute char refs, ...). Errors are ignored. This is a simpler version of readFromString without any options.
Debugging
HXT provides arrows to print out the current tree at any time. These arrows are very handy for debugging.
Use traceTree:
doc >>> css "h1" >>> withTraceLevel 5 traceTree >>> getAttrValue "id"
traceTree needs level >= 4.
Use traceMsg for sprinkling printf-like statements:
doc >>> css "h1" >>> traceMsg 1 "got h1 elements" >>> getAttrValue "id"
See the docs for even more trace functions.
Epilogue
I hope you found this guide helpful in your quest to work with HTML using Haskell.
Key Modules For Working With HXT
Arrows for working with nodes (the core stuff).
Arrows for working with children.
Function versions of most Arrows (useful with arr or isA).
More Guides
The HXT tutorial on haskell.org.
Introduction to Max Function in Python
Python max() is one of the common built-in functions. The max() function is used to find the maximum item of a given object, i.e., it returns the largest value among the arguments supplied to the function. If an iterable is passed as the argument, the largest item in the iterable is returned.
Syntax:
Basically, there are two different ways of calling the max() function in Python.
max(num1, num2, *args[,key])
- num1: required parameter, the first object for comparison
- num2: required parameter, a second object for comparison
- *args: optional parameters, any object for comparison
- key: optional parameter; it holds a function (built-in or user-defined), and the comparison is done based on the return value of this key function.
max(iterable, *[,key,default])
- iterable (required): an iterable object like list, tuple, dictionary, string, etc.
- key (optional): key function to customize the sort order. Max value will be returned on the basis of the return value of this applied function
- default (optional): the default value to be returned when the iterable is empty
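As a quick sketch of both call forms (the literal values here are made up for illustration):

```python
# Form 1: two or more separate arguments
print(max(3, 11, 7))                                # 11

# Form 2: a single iterable, optionally with key and default
print(max(["pea", "watermelon", "fig"], key=len))   # watermelon
print(max([], default=0))                           # 0 -- returned because the iterable is empty
```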
Examples of Max Function in Python
The below code snippets will help in understanding the functionality of the max() function.
Example #1
Program to illustrate max() functionality using python iterables.
Code:
# Program to illustrate max() functionality
# example 1: Input --> basic integers
print("input --> 1, 9, 72, 81, 28, 49, 90, 102, 150, 190")
print("max: ", max(1, 9, 72, 81, 28, 49, 90, 102, 150, 190))
# Input --> a string
print("input -->'aCDeFQwN'")
print("max: ", max('aCDeFQwN'))
# Input --> list
inp1 = [19, 40, 228, 112, 90, 506, 810, 119, 202]
print("input -->", inp1)
print("max: ", max(inp1))
# Input --> tuple
inp2 = (91, 40, 822, 112, 90, 506, 108, 119, 202)
print("input -->", inp2)
print("max: ", max(inp2))
# Input --> dictionary
inp3 = {
'John': [10, 20, 30],
'Rob': [12, 18, 36],
'Karen': [10, 15, 19] }
print("input -->", inp3)
print("max among dict keys: ", max(inp3.keys()))
print("max among dict values: ", max(inp3.values()))
Output:

input --> 1, 9, 72, 81, 28, 49, 90, 102, 150, 190
max:  190
input -->'aCDeFQwN'
max:  w
input --> [19, 40, 228, 112, 90, 506, 810, 119, 202]
max:  810
input --> (91, 40, 822, 112, 90, 506, 108, 119, 202)
max:  822
input --> {'John': [10, 20, 30], 'Rob': [12, 18, 36], 'Karen': [10, 15, 19]}
max among dict keys:  Rob
max among dict values:  [12, 18, 36]
Code Explanation:
- The first call of max() → parameters include basic integers, and the max value is returned among these integers.
- The second call of max() → parameter is a string that includes both lower and upper case characters. The max value is returned as per ASCII value.
- The third call of max() → parameter is a list of integers; the max value is returned among this list of values.
- The fourth call of max() → parameter is a tuple of integers; the max value is returned among this tuple.
- The fifth call of max() → parameter is a dictionary; the max value from the dictionary keys and the dictionary values is printed.
# Input values must be comparable with each other; mixing strings and integers raises a TypeError
print(max('a', 12, 18, 90, 'q', 29, 'b', 'd', 63))
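A minimal sketch of handling this error gracefully, should mixed input types be possible in your program:

```python
# Mixing strings and integers is not allowed -- max() raises a TypeError,
# which we can catch and report instead of crashing
try:
    max('a', 12, 18, 90)
except TypeError as err:
    print("cannot compare:", err)
```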
Example #2
Illustration of max() function with given key function
Code:
#Program to illustrate max() functionality, when key is set to some function
# Input --> list of strings
inp = ['John', 'Rob', 'Karen', 'Diana', 'Emanual', 'Alexnder', 'Tina']
print("input -->", inp)
def length(item):
return len(item)
# Key function --> len()
print("Using user defined function")
print("max: ", max(inp, key = length))
# This can be implemented using built-in function len()
print("Using built-in function")
print("max: ", max(inp, key = len))
# Using lambda function
print("Using lambda function")
print("max: ", max(inp, key = lambda item : len(item)))
# Input --> multiple iterables
x = [10, 20, 30]
y = [5, 10, 15, 20, 25]
print("max among x and y :", max(x, y, key = len))
print("max among x and y :", max(x, y))
Output:

input --> ['John', 'Rob', 'Karen', 'Diana', 'Emanual', 'Alexnder', 'Tina']
Using user defined function
max:  Alexnder
Using built-in function
max:  Alexnder
Using lambda function
max:  Alexnder
max among x and y : [5, 10, 15, 20, 25]
max among x and y : [10, 20, 30]
Code Explanation:
- The input list is defined which consists of different names (string type).
- First, max() is called with user-defined function, this will return the string value from the given input list which has a maximum number of characters. In this example “Alexnder” will be printed as len(Alexnder) is greater than all other names in the list
- Second max() function takes built-in function len() as its key function argument.
- The third max() call is implemented using a lambda function.
In all the examples we got the same output. Only the way for defining function is changed.
- The last max() call takes multiple iterables (lists).
When key=len → the list with the max number of items will be the output.
When the key is not set to any function → the lists are compared element-wise, as shown in the output above.
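To make the two behaviours concrete, here is a small sketch using the same x and y lists from the example above:

```python
x = [10, 20, 30]
y = [5, 10, 15, 20, 25]

# key=len: y wins because it has more items
print(max(x, y, key=len))   # [5, 10, 15, 20, 25]

# no key: lists are compared element-wise (lexicographically),
# so x wins at the very first pair (10 > 5)
print(max(x, y))            # [10, 20, 30]
```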
Example #3
Program to return list value with the max sum of digits.
Code:
def sum_digits(num):
sum = 0
while(num > 0):
rem = num % 10
sum = sum + rem
num = num // 10
return sum
# Input --> list of positive integers
inp = [120, 20, 42, 212, 802, 139, 175, 802, 468]
print("Input List = ", inp)
print("Maximum Value in the List = ", max(inp, key = sum_digits))
Output:

Input List =  [120, 20, 42, 212, 802, 139, 175, 802, 468]
Maximum Value in the List =  468
Code Explanation:
- A function “sum_digits” is defined which takes a number and returns its sum of digits.
- The input list which contains positive integers is created.
- max() is called with key function: sum_digits. This will return the list item which has a max sum of digits.
If an empty iterable is passed to max() without a default, a ValueError is raised:

inp = []
print("input:", inp)
print(max(inp))   # raises ValueError: max() arg is an empty sequence
The above error can be handled by providing the default parameter:
inp = []
print("input:", inp)
print(max(inp, default=0))
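The default parameter also combines with key. A small sketch (the values here are illustrative):

```python
# default is returned only when the iterable is empty
print(max([], key=len, default="n/a"))               # n/a

# with a non-empty iterable, default is ignored
print(max(["ab", "abcd"], key=len, default="n/a"))   # abcd
```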
Recommended Articles
This is a guide to Max Function in Python. Here we discussed the introduction and two different ways of calling the max() function in Python, along with different examples and their code implementation.
1. Introduction to AngularJS Services
In the AngularJS world, services are singleton objects or functions that carry out specific tasks. A service holds some business logic. Separation of concerns is at the heart of designing an AngularJS application. Your controller must be responsible for binding model data to views using $scope; it should not contain logic to fetch the data or manipulate it.
For that we must create singleton objects called services, which AngularJS manages for us. Wherever we want to use a service, we just have to specify its name and AngularJS auto-magically injects the object (more on this later).
Thus a service is a stateless object that contains some useful functions. These functions can be called from anywhere: controllers, directives, filters etc. We can therefore divide our application into logical units. The business logic, or the logic to call an HTTP URL to fetch data from the server, can be put within a service object.
Putting business and other logic within services has many advantages. First, it fulfills the principle of separation of concerns (segregation of duties): each component is responsible for its own work, making the application more manageable. Second, it makes each component more testable. AngularJS provides first-class support for unit testing, so we can quickly write tests for our services, making them robust and less error-prone.
Consider the above diagram. Here we divide our application into two controllers: 1. Profile and 2. Dashboard. Each of these controllers requires certain user data from the server. Thus instead of repeating the logic to fetch data from the server in each controller, we create a User service which hides the complexity. AngularJS automatically injects the User service into both the Profile and Dashboard controllers. Thus our application becomes more modular and testable.
2. AngularJS internal services
AngularJS internally provides many services that we can use in our application.
$http is one example (note: all AngularJS internal services start with the $ sign). There are other useful services such as $route, $window, $location etc.
These services can be used within any Controller by just declaring them as dependencies. For example:
module.controller('FooController', function($http){
    //...
});

module.controller('BarController', function($window){
    //...
});
3. AngularJS custom services
We can define our own custom services in an AngularJS app and use them wherever required.
There are several ways to declare angularjs service within application. Following are two simple ways:
var module = angular.module('myapp', []);

module.service('userService', function(){
    this.users = ['John', 'James', 'Jake'];
});
or we can use factory method
module.factory('userService', function(){
    var fac = {};
    fac.users = ['John', 'James', 'Jake'];
    return fac;
});
Both of the ways of defining a service function/object are valid. We will shortly see the difference between the factory() and service() methods. For now just keep in mind that both of these APIs define a singleton service object that can be used within any controller, filter, directive etc.
4. AngularJS Service vs Factory
We can register a service using either module.service or module.factory:

module.service( 'serviceName', function );
module.factory( 'factoryName', function );
In the below example we define MyService in two different ways. Note how in .service we create service methods using this.methodName, while in .factory we create a factory object and assign the methods to it.
AngularJS .service
module.service('MyService', function() {
    this.method1 = function() {
        //..
    }
    this.method2 = function() {
        //..
    }
});
AngularJS .factory
module.factory('MyService', function() {
    var factory = {};
    factory.method1 = function() {
        //..
    }
    factory.method2 = function() {
        //..
    }
    return factory;
});
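The difference can be sketched in plain JavaScript. This is a simplified illustration of what the injector does with each registration style, not Angular's actual implementation; the helper names asService and asFactory are made up for this sketch:

```javascript
// .service: the injector calls the registered function with `new`,
// so you receive an instance and attach methods via `this`.
function asService(Ctor) { return new Ctor(); }

// .factory: the injector simply invokes the registered function,
// so you receive whatever object the function returns.
function asFactory(fn) { return fn(); }

var svc = asService(function () {
  this.greet = function (name) { return 'Hello ' + name; };
});

var fac = asFactory(function () {
  var factory = {};
  factory.greet = function (name) { return 'Hello ' + name; };
  return factory;
});

console.log(svc.greet('John')); // Hello John
console.log(fac.greet('John')); // Hello John
```

Either way the caller ends up with an object exposing greet; the registration style only changes whether you build it with `this` or return it explicitly.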
5. Injecting dependencies in services
Angularjs provides out of the box support for dependency management.
In general the wikipedia definition of dependency injection is:
Dependency injection is a software design pattern that allows the removal of hard-coded dependencies and makes it possible to change them, whether at run-time or compile-time. …
We already saw in previous tutorials how to use AngularJS dependency management and inject dependencies in controllers. We injected the $scope object into our controller.
Dependency injection mainly reduces the tight coupling of code and create modular code that is more maintainable and testable. AngularJS services are the objects that can be injected in any other Angular construct (like controller, filter, directive etc). You can define a service which does certain tasks and inject it wherever you want. In that way you are sure your tested service code works without any glitch.
Just as it is possible to inject a service object into other Angular constructs, you can also inject other objects into a service object. One service might depend on another.
Let us consider an example where we use dependency injection between different services and controller. For this demo let us create a small calculator app that does two things: squares and cubes. We will create following entities in AngularJS:
- MathService – A simple custom angular service that has 4 methods: add, subtract, multiply and divide. We will only use multiply in our example.
- CalculatorService – A simple custom angular service that has 2 methods: square and cube. This service has dependency on MathService and it uses MathService.multiply method to do its work.
- CalculatorController – This is a simple controller that handles user interactions. For the UI we have one textbox to take a number from the user and two buttons: one to square and one to cube.
Below is the code:
5.1 The HTML
<div ng-app="app">
    <div ng-controller="CalculatorController">
        Enter a number: <input type="number" ng-model="number">
        <button ng-click="doSquare()">X<sup>2</sup></button>
        <button ng-click="doCube()">X<sup>3</sup></button>
        <div>Answer: {{answer}}</div>
    </div>
</div>
5.2 The JavaScript
var app = angular.module('app', []);

app.service('MathService', function() {
    this.add = function(a, b) { return a + b };
    this.subtract = function(a, b) { return a - b };
    this.multiply = function(a, b) { return a * b };
    this.divide = function(a, b) { return a / b };
});

app.service('CalculatorService', function(MathService){
    this.square = function(a) { return MathService.multiply(a,a); };
    this.cube = function(a) { return MathService.multiply(a, MathService.multiply(a,a)); };
});

app.controller('CalculatorController', function($scope, CalculatorService) {
    $scope.doSquare = function() {
        $scope.answer = CalculatorService.square($scope.number);
    }
    $scope.doCube = function() {
        $scope.answer = CalculatorService.cube($scope.number);
    }
});
Thus in the above example we injected one service object into another service, and in turn injected the final service object into the controller. You can inject the same service object into multiple controllers. An AngularJS service object is inherently a singleton, so only one service object is created per application.
6. End to End application using AngularJS Service
Let us apply the knowledge that we acquired so far and create a ContactManager application. This is the same app that we built in our last tutorial. We will add a service to it and see how we can divide the code between the service and the controller.
Next we add the AngularJS code to give life to our ContactManager application. We define a module 'app', which is then used to create the service and the controller.
See in the below code how ContactService is created. It has simple methods to save/delete/get a contact.
Note how the service object is injected into the controller.
6.2 The JavaScript
var module = angular.module('app', []);

module.service('ContactService', function () {
    //to create unique contact id
    var uid = 1;

    //contacts array to hold list of all contacts
    var contacts = [{
        id: 0,
        'name': 'Viral',
        'email': '[email protected]',
        'phone': '123-2343-44'
    }];

    //save method create a new contact if not already exists
    //else update the existing object
    this.save = function (contact) {
        if (contact.id == null) {
            //if this is new contact, add it in contacts array
            contact.id = uid++;
            contacts.push(contact);
        } else {
            //for existing contact, find this contact using id
            //and update it.
            for (i in contacts) {
                if (contacts[i].id == contact.id) {
                    contacts[i] = contact;
                }
            }
        }
    }

    //simply search contacts list for given id
    //and returns the contact object if found
    this.get = function (id) {
        for (i in contacts) {
            if (contacts[i].id == id) {
                return contacts[i];
            }
        }
    }

    //iterate through contacts list and delete
    //contact if found
    this.delete = function (id) {
        for (i in contacts) {
            if (contacts[i].id == id) {
                contacts.splice(i, 1);
            }
        }
    }

    //simply returns the contacts list
    this.list = function () {
        return contacts;
    }
});

module.controller('ContactController', function ($scope, ContactService) {
    $scope.contacts = ContactService.list();

    $scope.saveContact = function () {
        ContactService.save($scope.newcontact);
        $scope.newcontact = {};
    }

    $scope.delete = function (id) {
        ContactService.delete(id);
        if ($scope.newcontact.id == id) $scope.newcontact = {};
    }

    $scope.edit = function (id) {
        $scope.newcontact = angular.copy(ContactService.get(id));
    }
})
That’s All Folks
We saw what AngularJS service/factory objects are and how to define our own custom service/factory object in AngularJS. We also saw how dependency injection works. In the end we created a simple calculator application that wraps up all the concepts.
I hope you liked this tutorial. Feel free to post your comment below.
In this ongoing series, I will try to publish more AngularJS articles on topics like angularjs $http service, AngularJS filters, AngularJS directives etc.
Some great articles you have posted! Any article that you can refer to display grid with AngularJS service? Thanks!
Hi that is great quick start tutorial for AngularJS
but tell me how i use that AngularJS to persist data in db
and also tell me how you secure business logic from user side in AngularJS
Sorry you cant. This is a client side app framework. Only things which are not too sensitive must be coded in angular. You have the rest of the logic in the server side.
Client side code can always be viewed in the devtools.
But you can still make it really tough to understand for the person looking at your code by obfuscating it. Thats all I know!
Why dont you use HTML5’s Local storage or the webSQL. Both are great.
Is there any particular reason not to pass the newcontact directly from the ng-click as a parameter of save()?
and
This seems to me to have better reusability and encapsulation. For example, you may want to be able to add a new contact from somewhere else without binding to the single input you already have.
Anyway, I am honestly curious if there is any particular “Angular” reason to do it the way you did.
Also, I realize you clear the $scope.newcontact var after saving… if you use the variation I suggested, you can still do that in the ng-click:
I personally consider it a good practice to write minimal code in HTML that’s why I support his approach.
This is a great tutorial to start with anulgar’s service & factory…
Thanks a lot for giving a sweet & small expample…
Hi Viral, Have been following your tutorials on AngularJS. The explanation is very crisp and clear. Please keep posting more tutorials in the future.
Hi dude this is excellent example.
HATS of U.
Thank you for making the tutorial.
One thing that seems to be missing, and was in fact the reason I read the tutorial, was an answer to the question of when to use a factory vs a service? Can you provide different use cases for when one would be preferable over the other? Seems like a factory could just as easily return a function, which would make it identical to a service (Is this correct).
Thank you for the awesome tutorial!
Hi Viral,
I am newbie to AngularJS and this article has helped me a lot in understanding concepts.
I am trying to code something like :- I have a dropdown and when I select one value, it should in turn open another drop down and if not, the second dropdown should not be displayed.
For ex. I have dropdown with values A, B, C and when I select C, I should see another select element with values in it. But when I select A or B, the second select is not visible. How can I achieve this?
Thanks in advance.
You should look at ng-if, ng-show and ng-hide.
These will allow you to control whether or not the second selection box displays by setting an expression on it, as follows…
…
ng-if will add the element to the dom model if the condition is true. This means if the expression is false, the element does not even exist, so be careful if referencing it from elsewhere.
ng-show will show the element only if the condition is true, hide when false. The element will always exist in the DOM.
ng-hide is the exact reverse of show. Shows when the expression is false, hide when true.
Happy Coding!
can you please post an example on using multiple filters with multiple input text boxes in angularjs
Yay, another over simplified tutorial that ignores the complexity of any real-world app. Try doing this example when the model data is requested asynchronously from the server. Then add another controller that needs to share data using the service. That would be a very simple and practical example, however the code required to achieve that is way more complex than you would imagine. And you could take it one step further, how about instead of having a save button like you would on a web page from 10 years ago, you handle data automatically being updated when an input changes.
I’m sorry, but I’ve been through a hundred tutorials just like this. Afterwards I feel like I can jump right in and create a simple app, until I add one more controller and deal with real data from a server, then all the sudden everything in these tutorials is irrelevant. Tutorials titled “Sharing data between controllers using services” will have an example with static data, one service, and one controller. Yes, I know you can create a service and inject that service into all of your controllers, but so what? That’s the least of the problems with sharing data between controllers. The real issue is how do you keep your data in sync between all of your services and controllers and the server, and what is the best way to handle the asynchronous nature of loading data into the application. Those are the practical real world scenarios that tutorials need to explain.
John Smith – you seem like the smartest guy in the room. You should stop reading so many dumb tutorials and start writing good ones.
Viral (or anyone) – One thing I don’t understand is where $scope.newcontact is declared or exposed on the controller? It’s used in the 3 functions on the controller – but it is not actually on the controller itself anywhere. Is this an oversight or am I missing something?
great tutorial…Thanks.
The newContact value is created in the $scope.edit(id) function defined in the controller. It is called when the user clicks the edit button on a particular contact. It is also created in the save() method, but only after the save has completed. It is also created in the delete method if the id in the newContact matches the passed id.
From just a brief review of the code, this looks like it could blow up for many reasons – trying to save before editing, deleting before editing.
It would be much better to declare the $scope.newContact variable and its initial structure in the controller than to hide its creation in various methods. You could even have a function in the service return the initial structure so it could be updated in a single place later.
Excellet Viralpatel. Thanks i have learned AngularJS in just one hour with your tutorial.
Nice Article Viral. Keep Posting. Thanks a lot.
This is one great tutorial… I am able to create my own Angular Service… in 15 mins…
GREAT JOB… Mr.VIRUS
Nice tutorial. Thanks for your time
Viral – just wanted to quickly compliment you on your articles – they are really well-done, and easy to understand. I very much appreciate the time you take to write up these AngularJS tutorials, and I hope you continue to write more of them!
” You can inject same service object in multiple controllers. As angularjs service object is inheritedly singleton. Thus only one service object will be created per application.”
THANK YOU! I couldn’t find this answer anywhere, ended up just testing it to see what happened.
This is a excellent tutorial to start with anulgar’s service & factory…and my hart full thanx for providing such a great tutorial.
Nice Tutorial
Excellent Tutorial. Very Simple.
The this.add = function(a, b) { return a + b }; should be
this.add = function(a, b) { return ( parseInt(a) + parseInt(b) ) } ..
Because + is having concatenation property By default
I’ve been surfing different explanations of the differences between service and factory for the last hour and this is by far the clearest and best explanation for an experienced classical OO developer.
Excellent work, it addressed my use case exactly. If I can make one suggestion, it would have been great if you had provided some unit tests to show how they can be set up.
Thanks alot, this is really a very good tutorial for the starter/beginner’s. Hope you will continue doing the same further…
Nice Tutorial. Thanks 4 the simplicity
I’m getting a cannot read property ‘id’ of undefined flagging on this line;
[if (contact.id == null)]
Any ideas?
Hey Brad, I am having the same issue. Let me know if you fix it. I’m going to continue working on it.
if (contact.id == null) can just be if (contact.id) as it would be evaluated as true or false. It works in my app.
hello sir, angularjs provide a two way data binding can i possible two use angularjs with servle,jsp & JDBC if u have any idea post it in ur blog thanks sir
You definitely can not use JDBC directly. To use it from angular, create a web service on the server that performs all the database work and returns its results as JSON. You can then call this service from within an Angular application by using the angular $http service.
The same thing goes for a servlet – since it is already on one or more URLs, you can call it from angular using the $http service.
Best practice would have you create an angular service which encapsulates the calls to these services.
You would then call the angular services from your angular controller and assign the results to $scope values. If you bind your UI objects to these scope values, changes to the UI controls will update the data, and vice versa. In order to save the data back to the database, use the $scope variables as parameters to the save() methods on your angular service.
If you use spring on your server side, spring can cast the JSON data from the client into a matching java object for further processing.
Viral its really nice article for angular js
Hi,
I’ve a question related to your service definition.
From this post: “…service is a stateless object that contains some useful functions”
Then, in the “Custom services” section you create services like this one:
module.service(‘userService’, function(){
this.users = [‘John’, ‘James’, ‘Jake’];
});
Is the previous service stateless? (taking into account that it contains a list of users…)
Thanks.
Very Useful
Some great articles you have posted! Any article that you can refer to display grid with AngularJS service? Thanks!
Great post. Really useful. Just a quick question, in “5. Injecting dependencies in services” why did you use services, rather than factories? Both would work, right? Just trying to understand why to use one not the other.
Great post. Can you add few more words which will explain a difference between “instance of a method” that service returns and “object with assigned methods” that factory returns relatively to a real world applications? For newbies please )
Excellent for beginners…Thanks a lot Viral ….Keep Posting……
Hi Viral, can you please let me know: in the last example, how is $scope.contacts updated automatically when I push a contact into the service's contacts object? I know we bind $scope.contacts = ContactService.list();, but I am not clear how $scope.contacts gets updated every time (on edit, delete and add). Is it updated (called) each time?
Please let me know.
Thanks in advance. and Keep posting..
Viral, this is a nice article. A good explanation on Service vs. Factory. In your example, it would be great if you tested for success or error on the return.
Really nice post. Got a very good introduction to services and factory
Really nice post. Got a very good introduction to services
Excellent!! Very useful, Thanks
I need a code to Add a title in the popup based on the type of the any document using angularjs.
thanks in advance.
Superb. Thanks For YOur Example.
Superbbbb.. This works awesome. I completed angular services, and 50% of the chapters….
//Service code
var integrityService = angular.module('starter.IntegrityService', []);
integrityService.service('IntegrityAuthService', ['configData', function($http, 'serverDetails'){
//Controller code
var appController = angular.module('starter.controllers', []);
appController.controller('AppCtrl', function($scope, $ionicModal, $timeout, $location, IntegrityAuthService) {
Getting below error
Error: [$injector:unpr] Unknown provider: IntegrityAuthServiceProvider <- IntegrityAuthService
what could be its cause
Welcome to Spectrum Engineering Consortium LTD
[The commenter pasted the rendered output of their contact page here: a form with Name, Phone and Age fields, and a table with Name, Phone, Age and Action columns binding {{ contact.name }}, {{ contact.email }}, {{ contact.phone }} and {{ contact.age }}, with edit | delete links per row.]
function myFunction() {
  document.getElementById(val="");
  alert("Should not be blank");
}
var uid = 1;
function ContactController($scope) {
  $scope.contacts = [
    { id:0, 'name': 'Viral',
      'email': '[email protected]',
      'phone': '123-2343-44', 'age': '20'
    }
  ];
}
Great !
Wow super article
Excellent article, it is very useful to us…………. Thanks a lot………………….
Hi viral,
Thanks for the explanation. I have a query . We are developing a web application using angular js where the menu is created based on the role of the user who logs in. Now post login this menu is going to be static and will be a part of header div. for such views also should we rely on $scope for menu data or can we add it to session storage etc. your input would really help.
Thanks
Nice article, but I’m still a bit confused.
I did a mistake in my app today, and then stumbled across your article.
I mixed the methods and my script worked fine:
I wanted what you explained to be a factory, but I used “service” instead.
I then swapped back service for factory, and everything seemed to work identically. Of course my factory is still a work in progress, but the content didn’t seem to make any difference on the controller side (simply using the reference, not instantiating the object).
I’m not sure I understood better, but I’ll try to stick the factory “syntax” but it seems more natural to me XD.
Arg, sorry, I realised that in my last sentence, my second “but” should have been “because” XD
Excellent work . Keep it up…God bless you for all your efforts in helping others learning the technology….
Thanks for the article. But can you please explain how do we do the same Calculator app using factory?
Thanks. I am able to do the same using factory.
It is very difficult to understand for people who are just beginners.
Your tutorials on AngularJS helped me to startup with it (looking forward for more of this serie!). I wish they were included on the AngularJs website as they are very clear and easy to follow. Thanks!
Thanks Buddy for the great tutorial, it was very helpful for beginner like me.You have pointed the things in a very understandable manner.
it helped me, understandable
Thanks
Wonderful stuff! Was able to connect to SQL server db via express.js to serve up nice rest api and get some nice UI going
Thanks so much!!!
I am also trying to call a method of one service into the other service but it is giving error saying
Cannot read property ‘create’ of undefined
Please ignore syntax error if it is there
We needed an example of using a factory. Please work on your english. Thanks.
Thanks..Good article.
Thanks a million. Very well explained and easy to follow. Very much appreciated.
Well explained article ,thanks a lot for your blog.
Great example Viral, (and your English is just fine). Thank you.
One of the best tutorials I have used in a long time. Well done. Perhaps a little more info on pros and cons of factory\service approaches.
After seeing your tutorial I could get started with my AngularJS project. THANKS………..
it was very helpful. thanks
Thanks so much! I can see how service and factory work and how they connect with controller
Thanks for the tutorial..
the simplified way in which you explained the concepts is commendable
Very good example and easy to understand for everyone. Thanks a lot.
is there any way to create multiple instances of a factory in another factory ?
//factory 1
module.factory('MyService', function() {
});
//factory2
module.factory('MyService1', [ 'MyService' , function( MyService ) {
  var obj = new MyService();
}]);
This was an incredibly well-written and informative tutorial. Thanks for sharing your wealth of knowledge. This really clarified things for me. Cheers again!
Thank you so much. This article helped me a lot in understanding AngularJS. This is very easy to follow.
Awesome article !!
Welldone…Excellent Explanation…Keep the good work…
Thank you very much….great tutorial
nice article
Excellent tutorial. This is very clear and the example using the service approach works nicely! Thanks for posting it. You should help on the next O’Reilly Angular book. They could use it :-)
Section one, last sentence, “Thus our application becomes for modular and testable.” I think you meant to write more, not for.
Thanks,
Grammar Guy
Section one, third paragraph, first sentence. “Thus service is a stateless object that contains some useful functions.” I think you meant to write The, not Thus…
Nice tutorial, Good job…… Thnx
Thank you. It’s easy to follow.
Very good tutorial really even for beginners also it’s very easy to understand not only this post for Java, JQuery,spring i am getting good knowledge from this site….Thank you so much viral
Its great tutorial for beginner!!! Thank you a lot!!!
Thanks a lot, this was very informative about controllers and services. Well done.
Very simple way to explain. I cleared my concepts using your articles & examples.
Can you please add custom directive tutorial?
Thnx
Chandra
Great article, much easier to follow than a lot on the web – at last an article that KISS!
This was lucid. Thanks for taking time to write this.
Thanks Viral, for the excellent tutorial, it’s very easy to understand & to practice the tutorials these were really helpful, waiting to see the new interesting topics on Angular JS.
Thanks,
Sandeep.
Thanks Viral, excellent tutorial, nice flow, easy to understand.
Awesome tutorial with examples; even beginners can follow these and get into the knowledge of the subject easily.
Thanks a lot
Thanks Viral, Please send a post on AngularJS with RESTful web services.
An excellent tutorial. Thank you so much. By the way, could you please tell me whether it can read a json from an URL instead of directly initialize in the service.js file? (For example, you have written //contacts array to hold list of all contacts
var contacts = [{
  id: 0,
  'name': 'Viral',
  'email': '[email protected]',
  'phone': '123-2343-44'
}];
and I want to read dynamic data from an URL)
Thanks.
Thanks for this blog entry. I understood how to use service and controller together.. thank you again.
Nice job buddie, Thanks a lot.
nice guide,
But you don't explain much on the main question: when should I use Service, and when should I use Factory?
Nice explanation of service and factory. I was struggling a lot; now you made it clear. Thank you so much………. :)
Cleared many doubts regarding angular.
Thanks buddy (y)
This is classic, you explained with serenity .
Thanks Guru!
Good article. One thing not clear to me that when to use service and when to use factory?
Thanks, nice post
very nice for angularjs learner
thank you very much
Please help me understand how to create a controller and service in a project.
If I have more than one function at a time, like save, edit and dropdown, then how do I define the service in the service page?
Please help me.
I am a learner of AngularJS.
I hope you help me soon.
Many thanks brother.
thank you a lot, very to the point, no fuss and effective tutorial.
Really Awesome Article.. Patel Bhai….I have started learning Angular.. ..Good One..
Please……………..
create a tutorial on node.js.
I am impressed with your tutorial
Very nicely explained.. I got a clear picture of services. With your permission I would like to answer the question most of them are asking.
Factory vs Service : If we need an object in return use Factory … if we need a function or object use Service .
Please correct me if i am wrong.
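The mechanical difference behind that rule of thumb is easiest to see in plain JavaScript: Angular instantiates a service's function with `new` (so it populates `this`), while a factory's return value itself becomes the injectable. The provide* helpers below are illustrative stand-ins for what the injector does, not Angular API:

```javascript
function provideService(Ctor) { return new Ctor(); } // service: called with `new`
function provideFactory(fn)   { return fn(); }       // factory: return value is the instance

function UserService() { this.users = ['John', 'James', 'Jake']; }      // uses `this`
function userFactory() { return { users: ['John', 'James', 'Jake'] }; } // returns an object

var fromService = provideService(UserService);
var fromFactory = provideFactory(userFactory);
console.log(fromService.users.length, fromFactory.users.length); // 3 3
```

Either style yields an object with a users array; the factory form simply gives you full control over what gets returned.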
thanks so much. was great article!! waiting for more!
Great Tutorial thank for the help in understanding AugularJS.
It is nice article, very useful for beginners…..
Very useful. Thank you.
Thank you very much, this blog help me to clearly understand angular js. Realy nice blog
nice tutorial ..thank you
I don’t understand exactly how contacts-variable in the scope get updated? When I save something it pushes the new object to service’s contacts array but how these changes are updated in $scope?
Dear Vivvv!
First, please check the HTML form's hidden id field, and then keep your focus on the save function. It has very clear logic.
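In plainer terms (an illustrative aside, not from the original thread): ContactService.list() hands the controller a reference to the service's own array rather than a copy, so every push through the service mutates the same object that $scope.contacts points at, and Angular's digest cycle then re-renders the bound list. A framework-free sketch:

```javascript
var service = {
  contacts: [{ id: 0, name: 'Viral' }],
  list: function ()  { return this.contacts; },  // same array object every time
  save: function (c) { this.contacts.push(c); }
};

var scopeContacts = service.list(); // what $scope.contacts would hold
service.save({ id: 1, name: 'John' });
console.log(scopeContacts.length);  // 2 — the "bound" reference sees the push
```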
Good One.
ViralPatel – I really appreciate you for having given clear explanations with neat examples and code. I have never seen it in any other sites. Please continue the great job – your examples are great and works perfect
Thanks, very clear!
Thank you very much! Great tutorial!
hello world
God bless You. This tutorial rocks.
This is really helpful. Clean and very easy to understand. Thanks a ton…… :)
Really Awesome Article…
Great…
Thanks. A concrete tutorial. Just to the point
Very impressive tutorial – absolutely to the point with real world application, well done!!
Its what the actual post. I got to work with services now. Thanks.
Hello,
Really nice article I have a question,
Can a service object be injected into multiple controllers spread across many different .js files and be used to call service methods for calculations? I actually tried and failed to do so; that's why I'm asking.
Its really good. Thanks for your article. Please update services using $resource
This is excellent. It is very clear to understand what is service and what is factory with simple examples. I appreciate.
Thank you . Great article
Excellent Article. Nicely written and well explained. Thanks.!
Excellent and the great and then great and very great and super great article ! It clearly defines the difference between Service and Factory, and is an easy way to understand AngularJS.
This is a great article for a beginner like me. But I have one question. What if the SAVE button is changed to SUBMIT button. The reason being I can also enter filter information in the top portion of the form that should in essence restrict the list portion at the bottom. Can u please explain how we do this. This would be of great help.
Thanks
Also I forgot to mention. How to link this to the back end REST service that provides data. How to transfer the search data back to the client side?
Excellent
Nice job!
Really this one is a clear stuff.
Great tutorials! I enjoyed reading them.
Thank you :)
Perfect. Simple and clear.
Thanks bro.
can you make it little bit simple it is same like
Thank very much, you put off my headache.
Thanks, nice lesson!!!
This is an excellent tutorial to start with angular's service & factory… and my heartfelt thank you for providing such a great tutorial. It's a very nice tutorial for beginners……
Excellent explaination, you are superb.
great work
Such a concise and excellent tutorial! I appreciate the time you’ve put into this :)
Thanks for this article. The way you explained the topic is really good and the examples were also helpful!
Thank you too much
Really great, clear tutorial. Thanks for taking the time to put this together, and also adding two examples. Keep up the good work.
it's helpful..
Great tutorial. Thx a lot
Nice Article.
How to transfer the search data back to the client-side?
good
I need to create 3 views along with 3 controllers and views are going to load via controller. Could you suggest me how many ways are there to load the views.
AngularJS Service / Factory : To keep it simple, use the $http module (see $http vs $resource). This has also the advantage that $http returns a promise which you should be able to use as follows (adapted from delaying-controller-initialization).
Alternatively, if the two controllers are on the same route and one is contained in the other, it may be easier to access the parent controller’s scope with $scope.$parent.data.
function tricoreContrller($scope, data) {
    $scope.users = data;
}
function webdevController($scope, data) {
    $scope.users = data;
}
var resolveFn = {
    data : function($http) {
        return $http({
            method: 'GET',
            url: ''
        });
    }
};
tricoreContrller.resolve = resolveFn;
webdevController.resolve = resolveFn;
var myApp = angular.module('myApp', [], function($routeProvider) {
    $routeProvider.when('/', {
        templateUrl: '/editor-tpl.html',
        controller: tricoreContrller,
        resolve: tricoreContrller.resolve
    });
    // same for the other route for webdevController
});
This is awesome beginners guide to AngularJS. Thank you! | https://viralpatel.net/blogs/angularjs-service-factory-tutorial/ | CC-MAIN-2018-34 | refinedweb | 5,566 | 59.4 |
In today’s Programming Praxis exercise, our goal is to implement an array-based heap. Let’s get started, shall we?
We’ll be using Vectors as our array datatype.
import qualified Data.Vector as V
The algorithm assumes that the array is 1-based instead of the usual 0-based. Having to do this at every array lookup would be annoying, so we define a new array index operator.
(!) :: V.Vector a -> Int -> a
v ! i = v V.! (i-1)
Swapping two elements can be done without having to use a temporary variable thanks to Vector’s bulk update feature.
swap :: Int -> Int -> V.Vector a -> V.Vector a
swap i j heap = heap V.// [(i-1, heap ! j), (j-1, heap ! i)]
Sifting up is fairly simple: keep swapping elements with their parents as long as necessary.
siftup :: Ord a => Int -> V.Vector a -> V.Vector a
siftup i heap = let j = div i 2
                in if i == 1 || heap ! j <= heap ! i
                   then heap
                   else siftup j $ swap i j heap
Sifting down is less convenient, since we need to count up to n, necessitating a worker function. A quick tip on recursive worker functions: make sure they call themselves rather than their parent functions. I initially didn’t, and it took me quite a while to find the bug.
siftdown :: Ord a => Int -> V.Vector a -> V.Vector a
siftdown n = f 1 where
    f i heap = if 2*i > n || heap ! i <= c then heap else f j $ swap i j heap
      where (c, j) = minimum [(heap ! x, x) | x <- [2*i, 2*i+1], x <= n]
Sorting is a matter of first sifting up and then sifting down.
hsort :: Ord a => V.Vector a -> V.Vector a
hsort heap = foldr (\i -> siftdown (i - 1) . swap 1 i)
                   (foldl (flip siftup) heap [2..V.length heap])
                   [2..V.length heap]
And finally a test to see if everything is working properly.
main :: IO ()
main = print $ hsort (V.fromList [4,7,8,1,5,3,2,9,6]) == V.fromList [9,8..1]
Tags: array, bonsai, code, Haskell, heap, kata, praxis, programming | http://bonsaicode.wordpress.com/2013/01/25/programming-praxis-imperative-heaps/ | CC-MAIN-2014-10 | refinedweb | 356 | 78.96 |
I went to a general store, but they wouldn't let me buy anything specific. - Steven Wright.
One of the challenges of working with a general purpose programming language (like C#) and a general purpose framework (like ASP.NET, Silverlight, or any other application framework) is building something specific.
The trick is to apply constraints and build abstractions to take ownership of the situation, and there is an entire bag of fancy tricks to pull from, like using fluent APIs or applying domain driven design. But taking real ownership goes beyond just building the right classes - it's also about circumventing the underlying framework to match the needs of the team, the application, and the business in general. If you look at opinionated frameworks like OpenRasta, FubuMvc, and Caliburn.Micro, you'll find they make assumptions about what you want from a framework. These assumptions simplify design decisions and you'll often find it easier to find the one right way to achieve some goal.
Let's say you are using a relatively general purpose web framework like ASP.NET MVC 3. Every controller action needs to respond with one of two views depending on some bit of user information. To simplify the scenario, we'll say we switch on the IsAuthenticated flag.
if (Request.IsAuthenticated)
{
    return View("private/index");
}
else
{
    return View("public/index");
}
Although it feels pretty good to keep the different views you need inside of separate folders, it's just a first step. The above code typifies what happens when you let the framework control you instead of taking control of the framework. When the above code appears inside of every primary controller action it reads like this: "the framework gives me a View method, and that's all I have to use."
To take this one step further you'll want to simplify the controller code and move the view decision out of the controller. Maybe this means you write a new helper method, a new action filter, or if the logic applies everywhere, a new view engine.
public class CustomViewEngine : IViewEngine
{
    ...

    public ViewEngineResult FindView(
        ControllerContext controllerContext, string viewName,
        string masterName, bool useCache)
    {
        if (controllerContext.HttpContext.Request.IsAuthenticated)
        {
            viewName = "private/" + viewName;
        }
        else
        {
            viewName = "public/" + viewName;
        }
        return _inner.FindView(controllerContext, viewName, masterName, useCache);
    }

    RazorViewEngine _inner = new RazorViewEngine();

    ...
}
Then configure the engine into your application.
protected void Application_Start()
{
    ViewEngines.Engines.Clear();
    ViewEngines.Engines.Add(new CustomViewEngine());

    ...
}
And now controller code is simple again.
return View();
The need for your application to offer two distinct views in every scenario is now implicitly baked into the infrastructure.
Taking control of the software often means you apply constraints (what I should not do) and conventions (what I don't need to do). The downside is having more implicit "magic" inside the code, but the upside is having simple code focused on specific solution.
Only problem I can think of is that if you do not break the content of the indexes into reusable partials, you then risk reintroducing duplication in each directory (private/public). IOW, you simply moved the problem to a different layer. | http://odetocode.com/blogs/scott/archive/2011/02/02/make-it-yours.aspx | CC-MAIN-2014-42 | refinedweb | 514 | 54.52 |
This instructable was created in fulfillment of the project requirement of the Makecourse at the University of South Florida ()
This Instructable will take you through all of the steps necessary to build your own Arduino-controlled RFID lockbox. The final product will use an RFID reader along with RFID chips to lock and unlock your deposit box.
The Lockbox uses a rechargeable battery to power itself.
One can open it with the RFID Chips, but if for some reason you want to use the original key, of course you can do so. Only now you need to disarm it with your own secret combination.
I found the project fun, and I learned a lot about programming.
This was my first Arduino Project, so enjoy, but keep in mind there could be many improvements.
Step 1: Aquire Materials
This project will call for the following materials:
1 X RFID Module with card/dongle (Here)
1 X ON/OFF Switch (Local electronics store)
2 X LED's (Local electronics store)
1 X Rechargeable Lithium battery Pack (Here)
And Miscellaneous Wires, Solder and Screws
Step 2: Design Your Box in CAD and 3D-Print Parts
Attached are the Solidworks Parts that I designed for my particular Lockbox. If you go with a different type of box you will have to modify accordingly. The holes for the Buttons and the ON/OFF switch as well as the LED's are not accounted for because those are very easily made after printing and could vary very much depending on what hardware was used.
Step 3: Design Your Control System
My Control System is pretty simple, but looks scary at first.
We have the Arduino as brains of the operation. It gets information from the tilt switch as well as the RFID module. It also receives inputs from the buttons. Once it has received all of its inputs and digested them. It unlocks the box, or the buzzer as well as red LED will be on. The green LED is simply an indicator for the tilt.
I will explain which pin's on the Arduino are used in the next Step.
Step 4: Write the Arduino Sketch
The hardest part of programming this system for me, was to get the buttons to work. The buttons need to store a value (in this case 1,2 or 3) in a vector and then compare them with a preset "Code". This can be challenging using simple buttons.
At the top of my arduino code you will see two include commands. You will have to download these and add them to your arduino libraries so that the code will work.
Then you see a lot of define statements. This is where you will see how to wire the components.
Here is my Code:
#include <SPI.h>
#include <MFRC522.h>
#define RST_PIN 9
#define SS_PIN 10
#define button1 3
#define button2 5
#define button3 4
#define greenLed 7
#define redLed 6
#define buzzer 8
#define relay 2
#define tilt 14
MFRC522 mfrc522(SS_PIN, RST_PIN);
int counter = 0;
unsigned long lastPush = 0;
int lastPushValue = -1;
unsigned long lastOpened = 0;
int array1[5] = {1, 2, 3, 2, 1};
int array2[5] = { -1, -1, -1, -1, -1};
bool gLocked = true;
unsigned long lastScan = 0;
void setup() {
Serial.begin(9600); // Establishing Serial connection
pinMode (button1, INPUT); // Defining Input or Output pins
digitalWrite(button1, HIGH);
pinMode (button2, INPUT);
digitalWrite(button2, HIGH);
pinMode (button3, INPUT);
digitalWrite(button3, HIGH);
pinMode(greenLed, OUTPUT);
digitalWrite(greenLed, LOW);
pinMode(redLed, OUTPUT);
digitalWrite(redLed, HIGH);
pinMode(buzzer, OUTPUT);
digitalWrite(buzzer, LOW);
pinMode(relay, OUTPUT);
digitalWrite(relay, LOW);
pinMode(tilt, INPUT);
SPI.begin();
mfrc522.PCD_Init();
Serial.println(F("..."));
}
int buttonValue (int b) {
if (b == button1) {
return 1;
}
else if (b == button2) {
return 2;
}
else if (b == button3) {
return 3;
}
return 0;
}
String printButton(int b) {
int value = buttonValue (b);
return "button " + String(value);
}
int checkButton(int refresh, int c, int b) { //c=counter b=button
int ret = refresh;
unsigned long currentTime = millis();
if (digitalRead(b) == LOW && refresh == 1 && (currentTime - lastPush > 500 || lastPushValue != buttonValue(b))) {
array2[counter] = buttonValue(b);
counter++;
lastPush = currentTime;
lastPushValue = buttonValue(b); //Serial.println("counter: " + String(c) + " " + printButton(b) );
for (int i = 0; i < 5 ; i++)
{
Serial.println(array2[i]);
}
ret = 0;
}
return ret;
}
void loop() {
bool tilted = tiltCheck(); // I keep the loop very simple by just putting the 4 main functions we made together
lockKey();
lockChip();
output(gLocked, tilted);
}
void output(bool locked, bool tilted) { // all possible outcomes
if (locked && tilted ) {
digitalWrite(buzzer, HIGH);
}
else {
digitalWrite(buzzer, LOW);
}
if (locked) {
digitalWrite(redLed, HIGH);
digitalWrite(relay, LOW);
}
else {
digitalWrite(redLed, LOW);
digitalWrite(relay, HIGH);
}
if (tilted) {
digitalWrite(greenLed, HIGH);
}
else {
digitalWrite(greenLed, LOW);
}
}
void lockKey() { // unlock or not using buttons
bool match = true;
int refresh = 1;
refresh = checkButton(refresh, 3, button3);
refresh = checkButton(refresh, 1, button1);
refresh = checkButton(refresh, 2, button2);
for (int i = 0; i < 5 ; i++) {
if (array1[i] != array2[i]) {
match = false;
}
}
if (match && gLocked) {
gLocked = false;
}
if (millis() - lastPush > 10000) {
for ( int erase = 0; erase < 5; erase++) {
array2[erase] = -1;
}
counter = 0;
}
}
void lockChip() { // unlock or not using keycard
// Look for new cards
if ( ! mfrc522.PICC_IsNewCardPresent())
{
return;
}
// Select one of the cards
if ( ! mfrc522.PICC_ReadCardSerial()) {
return;
}
// Dump debug info about the card; PICC_HaltA() is automatically called
String scannedKey = "";
byte readCard[4];
unsigned long currentTime = millis();
if (currentTime - lastScan > 2000) {
scannedKey = "";
for ( uint8_t i = 0; i < 4; i++) {
readCard[i] = mfrc522.uid.uidByte[i];
//Serial.print(readCard[i], HEX);
scannedKey += readCard[i];
}
Serial.println(scannedKey);
if (scannedKey == "86234213229" || scannedKey == "9712424785") {
gLocked = !gLocked;
}
lastScan = currentTime;
}
}
bool tiltCheck() { // check if tilted or not
return digitalRead(tilt) == LOW;
}
Step 5: Assemble and Enjoy
Now all that is left to do is assemble!
I started by cutting the lid of the lockbox out and placing the RFID chip in its housing. This is what covers up the hole you just cut. (RFID does not work through metal, so you must cut a hole)
I also cut out the existing locking mechanism from the box, as the lock now latches onto the solenoid.
Then I carefully disassembled the battery pack. I had to extend the wires going to the lcd screen. I cut the lcd screen out carefully and glued it into the hole in the Component plate.
Next I placed the lock into its bracket and screwed it down.
I drilled the appropriate holes for the LEDS, Buttons and ON/OFF switch and wired everything up.
Lastly I drilled holes into the Lockbox enclosure and attached the whole component plate in one go.
I finished the box with a coat of paint and it was done!
Now I have a place to store my belongings! (keep in mind this is not a "safe", one could simply unscrew the mechanism and the intruder would be in)
If the battery dies, one can simply recharge it using the supplied usb cable.
6 Discussions
1 year ago
I can't see any level translators between the 5V Arduino system and the 3V3 RFID board. While this might work initially it will, in time, fry your RFID card.
Reply 1 year ago
The RFID chip is hooked up to the 3.3V output of the Arduino. I failed to mention this in the writeup
Reply 1 year ago
That is just the power, but you are feeding the 5V SPI signals from the Arduino into a 3V3 system. Do you not understand the consequences of this?
1 year ago
Interesting Instructable. A couple of corrections are needed. The link to the RFID module is not there. Also, when clicking on step 5 there is an error message that it doesn't exist.
1 year ago
What a great idea! The use of RFID is so clever! | https://www.instructables.com/id/RFID-Lockbox/ | CC-MAIN-2019-04 | refinedweb | 1,279 | 60.55 |
SYNOPSIS
#include <pcre.h>
int pcre_dfa_exec(const pcre *code, const pcre_extra *extra,
const char *subject, int length, int startoffset,
int options, int *ovector, int ovecsize,
int *workspace, int wscount);
DESCRIPTION
PCRE_DFA_SHORTEST Return only the shortest match
PCRE_DFA_RESTART This is a restart after a partial match
There are restrictions on what may appear in a pattern when using this
matching function. Details are given in the pcrematching documentation.
The match_limit and match_limit_recursion fields are not used, and must not be set.
There is a complete description of the PCRE native API in the pcreapi
page and a description of the POSIX API in the pcreposix page. | http://www.linux-directory.com/man3/pcre_dfa_exec.shtml | crawl-003 | refinedweb | 102 | 59.64 |
Making HTTP Requests From Java
Continuing the Java network communication theme, let's examine how to make HTTP requests from a Java application, including how to add header parameters and proxy information. I've made requests to HTTP servers from Java in the past, but believe it or not I only recently had to consider adding proxy server information. As usual for me, I started at a low level, making the requests using a Java Socket.
To begin, you need to parse the given URL request to extract the host, path, port, and protocol (i.e., HTTP, HTTPS, and so on). For example, let's parse the host and path first:
String urlStr = "http://.../getdata?param1=abc&param2=xyz"; // some URL
URI uri = new URI(urlStr);
String host = uri.getHost();
String path = uri.getRawPath();
if (path == null || path.length() == 0) {
    path = "/";
}
String query = uri.getRawQuery();
if (query != null && query.length() > 0) {
    path += "?" + query;
}
Next, let's extract the protocol and port, and make sure they match:
String protocol = uri.getScheme();
int port = uri.getPort();
if (port == -1) {
    if (protocol.equals("http")) {
        port = 80; // http port
    }
    else if (protocol.equals("https")) {
        port = 443; // https port
    }
    else {
        return null;
    }
}
Now that the required information has been extracted, we need to do three main things. First, make a socket connection to the server; second, send a correctly formatted HTTP request; third, listen for the response. Connecting is simple; just create a new Java Socket with the host and port:
Socket socket = new Socket( host, port );
Sending the request is a little trickier. If the HTTP server requires authentication for the request, you need to encode this so it isn't sent in plain text (and hence viewable by anyone looking at the network traffic). The HTTP specification requires you to format it as
"username:password", as in "ericbruno:mypassword".
import org.apache.commons.codec.binary.Base64; … String username = "ericbruno"; String password = "mypassword"; String auth = username + ":" + password; String encodedAuth = Base64.encodeBase64String(auth.getBytes());
The HTTP request format for this request needs to look like this, according to the spec:
GET /getdata?param1=abc¶m2=xyz HTTP/1.1 Host: Authorization: Basic dXNlcm5hbWU6cGFzc3dvcmQ= Connection: close
The code to send this request uses the
Socket's
OutputStream and simply "prints" the request as a
String to it:
PrintWriter request = new PrintWriter( socket.getOutputStream() ); request.print( "GET " + path + " HTTP/1.1\r\n" + "Host: " + host + "\r\n" + "Authorization: Basic " + encodedAuth + "\r\n" + "Connection: close\r\n\r\n"); request.flush( );
The
"\r\n" (line feed, new line) combinations at the end of each line in the
String ensure that the request meets the HTTP specification formatting requirements. To get the response, use the
Socket's
InputStream and read the resulting text line-by-line:
InputStream inStream = socket.getInputStream( ); BufferedReader rd = new BufferedReader( new InputStreamReader(inStream)); String line; while ((line = rd.readLine()) != null) { // ... }
A Better Way
Using this code works, and it's obvious what's happening since it involves low-level socket communication. There are cases when understanding the low-level workings is important, and for some reason I tend to feel comfortable working at this level. I wrote the basis for this code years ago, and I simply copy/paste from my personal code library each time I need it. However, I ran into a snag when I had to add the proxy information. I'm sure it can be done using this low-level code, but I decided to find a better way. For me, that search ended with the Apache HTTP Client classes, part of the Apache HTTP Components library. Using this library, the code to make the request via a proxy server looks like this:
DefaultHttpClient httpclient = new DefaultHttpClient();
if ( useProxy == true ) {
    HttpHost proxy = new HttpHost(proxyStr, 80, "http");
    httpclient.getParams().setParameter(ConnRoutePNames.DEFAULT_PROXY, proxy);
}
HttpGet httpget = new HttpGet(urlStr);
httpget.addHeader("Authorization", "Basic " + encodedAuth);
HttpResponse response = httpclient.execute(httpget);
Here, the proxy I needed to use was a basic HTTP proxy, but there's support for others (e.g., SOCKS). If you don't require a proxy, simply set the useProxy flag to false. The response is returned when you make the request via the execute() method. Here's the code to extract the call status (e.g., "HTTP/1.1 200 (OK)"), the headers, and the body of the response:
String status = response.getStatusLine().toString();
Header[] headers = response.getAllHeaders();
HttpEntity entity = response.getEntity();
BufferedReader rd = new BufferedReader(
    new InputStreamReader(entity.getContent()));
String line;
while ((line = rd.readLine()) != null) {
    // ...
}
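As another aside, if all you need is the proxy hop and you'd rather stay inside the JDK, java.net.Proxy with HttpURLConnection covers the basic case. A sketch with a made-up proxy host (it only builds the objects and never connects, so the hostname is purely illustrative):

```java
import java.net.*;

public class ProxySketch {
    public static void main(String[] args) throws Exception {
        // proxy.example.com is a placeholder, not a real proxy;
        // createUnresolved avoids a DNS lookup at construction time
        Proxy proxy = new Proxy(Proxy.Type.HTTP,
                InetSocketAddress.createUnresolved("proxy.example.com", 80));
        URL url = new URL("http://example.com/getdata");
        // Opening the connection does not touch the network yet;
        // the request is only sent once you read from it.
        HttpURLConnection conn = (HttpURLConnection) url.openConnection(proxy);
        conn.setRequestProperty("Authorization", "Basic dXNlcm5hbWU6cGFzc3dvcmQ=");
        System.out.println(proxy.type()); // HTTP
    }
}
```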
That's much better! Happy coding!
-EJB | http://www.drdobbs.com/mobile/high-performance-io-with-java-nio/jvm/making-http-requests-from-java/240160966 | CC-MAIN-2018-09 | refinedweb | 766 | 57.37 |
8 Key Announcements for Android Developers at Google IO
By Chris Ward
SitePoint has already covered what developers can expect in Android N, and it has been updated to include new announcements.
Some of my favorite (user-focused) features coming to Android N include seamless updates (borrowed from ChromeOS) and just-in-time compilation (goodbye "Android is Updating").
But in this article I will focus on what you can work with right now and how to get started with it.
There’s a lot to cover, so time to dive into an Android-sized pool.
1. Android Studio 2.2
I love Android Studio, and the forthcoming version brings a lot of new features for improving an Android developer's experience. Oh, and whilst you download all those new SDKs, notice that Android Studio finally lets them download in the background.
Speed
Instant run has been improving with recent Android Studio releases and version 2.2 claims a further 10x improvement. Emulators are also faster, with claims that emulators may now be faster than some real devices.
Test Recorder
This feature is awesome. It enables you to run an app in debug mode and have Espresso tests written automatically for UI events to run locally or in a remote test lab.
I was hoping to record a demo of this working, but I could not see the menu option available in the preview I downloaded. This was disappointing and I’m not sure if I missed something.
Build Better
Opening your apps to a world of possibilities, Android Studio now includes support for the CMake and NDK-Build tools and adds support for Java 8. This also means that coding in C++ will no longer require the NDK library, and will allow for far better mixing of C++ and Java code. This is all thanks to functionality lifted from the CLion IDE, also from JetBrains.
If you're sticking with Gradle, there is a new File -> Project Structure menu item (it needs enabling in the Experimental pane of the preferences) that will alert you to out-of-date dependencies, allow you to update and install them, and write to your Gradle files for you.
For those of you struggling with migrating your app permissions to the new M and N model, highlighting an old-style permission and selecting the Refactor -> Convert to Android System Permissions menu item should help you along.
Layouts
Android Studio's tools for interface design still lag behind those found in Xcode, but version 2.2 improves things with better drag & drop layout (especially menus), a new blueprint mode for inspecting arrangements, constraint layouts for designing interfaces across multiple devices (like Auto Layout in iOS), and a new component inspector that shows extra computed runtime details.
Code Better
In a continuing effort to help you code better, version 2.2 adds new annotations for API version checking, thread handling and other uses. It adds a samples browser to help find examples of code for a particular symbol or method. It also offers an APK analyzer for tracking down memory issues and advice for reducing the file size of final compiled apps. You can find the analyzer under the Build -> Analyze APK menu item.
You can also see evidence of the instant run feature in the APK.
The new merged manifest view helps you see what other items get added to a manifest file from other sources. For example, in this demo Google Maps app you can see the entries added to the Manifest from play services and firebase:
2. Instant Apps
The announcement that has caused the most excitement/intrigue/confusion was that at some point in the future Android users will be able to open apps without installing them. Triggered by a user clicking a link with an associated app, the Play store will download the section of an app's code required to undertake that action. This makes use of Android's Activity concept and will work across all versions of Android back to KitKat. Intriguing stuff, and I wonder how this will affect installs or respect data limits users may have set.
3. Play with ChromeOS
Are the potential hundreds of millions of Android devices in use not enough of a user base for you? Google claim (confirmed by IDC) to have sold over 2 million Chrome-based devices in the first quarter of 2016, beating Apple’s Mac sales figures, and now, the Play store is available on the platform.
To achieve this, Google are using a combination of Linux namespaces to isolate Android and ChromeOS, but share some essential resources and compositing. There’s no emulation or virtualization, but a full Android OS is available within ChromeOS.
Most features should work on ChromeOS without much work, but bear in mind that many won't make sense, e.g. you don't go running with a Chromebook.
Find more details and advice here.
4. Virtual Reality Daydreams
Cardboard has been Google’s semi-successful foray into Virtual Reality (VR), thanks to its simplicity. But we all knew that more was needed to make VR with Android a bigger concern.
‘Daydream’ will be a part of Android N (Not to be confused with ‘daydream mode‘), supplemented by daydream-ready devices, which limits the user base, but Google claim that “hundreds of millions” of users should be able to use the platform. You are also advised to use a new headset and controller that is currently only a concept design, but Cardboard may also still be usable.
The Daydream home screen(s) are interesting, offering a futuristic, immersive interface for navigating content and media services. These include specialized apps from Netflix, Hulu, YouTube and (oddly) The New York Times.
I couldn’t test the development process for Daydream as for the time being you need a Nexus 6P. If you own one, then immerse yourself in the setup instructions here. SDKs are also available for Unity and iOS.
5. Android Wear 2.0
The next version of Android's wearable extension will ship later this year and will introduce many new features; here are some of my favorites:
- Watch faces are fully customizable and able to show data from any other application.
- Bizarrely, the OS will offer handwriting recognition and a keyboard, using the input method framework.
- On supported devices, a standalone mode, so apps can run on the wearable alone, without the need for a paired device.
- There are now official material design guidelines available for Android Wear.
Find more details here, and if you’re keen to get developing for Wear 2.0, find details of the preview here.
6. Android Auto
Using your car as a companion to an Android device is becoming a more viable option. Whilst it didn't receive many headline-grabbing announcements, there were some small updates announced that may interest developers. If you haven't tried developing for Android Auto yet, SitePoint recently published an article on getting started.
- Waze built in: There’s not much developers can do with this new set of mapping data, but if you are familiar (and working) with the Waze API, it’s another platform for your work.
- You don't need a compatible car: After spending so much time working on making car dashboards complement a phone, starting with updates later in the year, you won't even need a car anymore. Instead you can switch a phone into 'car mode' and have a large and accessible interface (now voice operated) purely on a device.
7. Firebase Joins the Mothership
Google acquired Firebase in 2014, and it complements Google's core business well, with both companies benefiting from each other's experience and product lineup. With Facebook announcing the closure of Parse, Firebase has a perfect opportunity to fill the gap and Google seems keen to do so.
Most of the improvements involve better integration of the individual Firebase tools, and better integration with the rest of the Google ecosystem.
These tools and improvements include app analytics, crash reporting, messaging frameworks, user growth tools and more. Best of all, these tools are enabled by default in Android Studio 2.2 with an easy to use plugin that will configure your app and code for you. Or you can drag snippets into your code.
Some of the most useful improvements with existing Google services are with the AdMob platform, the ability to export Firebase data into BigQuery for analysis and shifting billing into Google’s central cloud billing platform for much easier accounting. I hope this lays the foundation for even better connections between Firebase and Google in the future.
8. ‘N’ame It Yourself
Google claim that they are struggling to decide on a name for their N release and have opened the process to the public. I think people see this as a lame publicity stunt, with most submissions likely ignored, but I'm sure someone will be plucked out of obscurity as the winner of a Google-approved name.
The Google Gauntlet
Phew! What an event. I was excited by a lot of the new feature announcements, but was more thrilled by the improved tooling to implement them, and the maturation and consolidation of the Android/Google ecosystem.
With WWDC less than a month away, Apple have a lot of pressure to deliver to developers and consumers alike, and I’m skeptical they will. | https://www.sitepoint.com/8-key-announcements-for-android-developers-at-google-io/ | CC-MAIN-2020-24 | refinedweb | 1,566 | 59.94 |
Convert CSV to Swatch Library? Ginakra67, Aug 4, 2012 2:37 PM
Hi everyone
I have a list of 300+ colors that I need to make into a swatch library for Illustrator. The data looks like this:
GREEN GRASS,127,187,0
PALE YELLOW,241,235,135
LIGHT YELLOW,238,231,93
DAFFODIL,249,231,21
MOONBEAM,249,223,22
etc.
It's RGB I think. In any case, I am just starting with Illustrator and I know NOTHING about scripting. Can anyone help me get aaaaalllll these colors into a swatch library, please? I am getting a migraine just thinking about putting them in one by one. LOL
I found something here, but that didn't work for me. I get an error on processing on line 75.
Error 24: app.doScript is not a function, Line 75 -> app.doScript(speakThis, 1095978087); //AppleScript.
I get as far as choosing the csv file, and then I get the error. I think this outputs as CMYK, but not sure. Does anyone know of another script, or can anyone help me out?
Thanks,
Gina
1. Re: Convert CSV to Swatch Library? Muppet Mark, Aug 5, 2012 6:08 AM (in response to Ginakra67)
Funny, someone has just asked a similar question only this last couple of days… My memory is shot I don't recall that script at all… It will error at that line because Illustrator doesn't have a method to do AppleScript… ( it only makes the mac speak the errors anyhow )
2. Re: Convert CSV to Swatch Library? Ginakra67, Aug 5, 2012 2:54 PM (in response to Muppet Mark)
Hi
I went looking for the other thread you said was started in the last couple of days and found it. My search terms didn't pick it up for some reason. Oh, I found the thread I referenced by Google, so it must not have parsed the newer thread yet. I was desperate so I tried the script in the older thread, lol. I didn't know if it would work or not. Um... not.
I have a windows pc, btw.
I was hoping there would be something out there I could use without bothering anyone, but I haven't seen anything. I'd appreciate the help, but I can understand that someone creating the script for me is a big favor. The thread is two years old, so I think it's understandable that you don't remember it, lol.
Thanks,
Gina
3. Re: Convert CSV to Swatch Library? [Jongware], Aug 6, 2012 5:43 AM (in response to Ginakra67)
Muppet's script was for InDesign -- I guess so it could save the swatches as an .ASE file. Illustrator's Javascript lacks this command ...
Here is a variant, based upon MM's, but stripped of all ID specific stuff. This one creates your new swatches in the current document; it creates a new swatch group with the CSV file name.
function main() {
    if (isOSX()) {
        var csvFile = File.openDialog('Select a CSV File',
            function (f) { return (f instanceof Folder) || f.name.match(/\.csv$/i); });
    } else {
        var csvFile = File.openDialog('Select a CSV File', 'comma-separated-values(*.csv):*.csv;');
    }
    if (csvFile != null) {
        fileArray = readInCSV(csvFile);
        var columns = fileArray[0].length;
        //alert('CSV file has ' + columns + ' columns…');
        var rows = fileArray.length;
        //alert('CSV file has ' + rows + ' rows…');
        if (columns == 4 && rows > 0) {
            exchangeSwatches(csvFile);
        } else {
            var mess = 'Incorrect CSV File?';
            isOSX ? saySomething(mess) : alert(mess);
        }
    } else {
        var mess = 'Ooops!!!';
        isOSX ? saySomething(mess) : alert(mess);
    }
}
main();

function exchangeSwatches(csvFile) {
    // var docRef = app.documents.add();
    var docRef = app.activeDocuments;
    var swatchgroup = docRef.swatchGroups.add();
    swatchgroup.name = csvFile.name;
    with (docRef) {
        /*
        for (var i = swatches.length-1; i >= 0; i--) {
            swatches[i].remove();
        }
        */
        for (var a = 0; a < fileArray.length; a++) {
            var n = fileArray[a][0]; // First Column is name
            if (n == 'Cyan' || n == 'Magenta' || n == 'Yellow' || n == 'Black') {
                n = n + '-???'; // Reserved swatch name;
            }
            r = parseFloat(fileArray[a][1]); // Second Column is Red
            g = parseFloat(fileArray[a][2]); // Third Column is Green
            b = parseFloat(fileArray[a][3]); // Forth Column is Bloo
            if (r >= 0 && r <= 255 && g >= 0 && g <= 255 && b >= 0 && b <= 255) {
                var color = new RGBColor;
                color.red = r;
                color.green = g;
                color.blue = b;
                var swatch = swatches.add();
                swatch.name = n;
                swatch.color = color;
                swatchgroup.addSwatch(swatch);
            } else {
                var mess = 'Color values are out of range?';
                isOSX ? saySomething(mess) : alert(mess);
            }
        }
    }
}

function readInCSV(fileObj) {
    var fileArray = new Array();
    fileObj.open('r');
    fileObj.seek(0, 0);
    while (!fileObj.eof) {
        var thisLine = fileObj.readln();
        var csvArray = thisLine.split(',');
        fileArray.push(csvArray);
    }
    fileObj.close();
    return fileArray;
}

function saySomething(stringObj) {
    var speakThis = 'I say, "' + stringObj + '"';
    alert(speakThis);
}

function isOSX() {
    return $.os.match(/Macintosh/i);
}
4. Re: Convert CSV to Swatch Library? Ginakra67, Aug 7, 2012 2:11 PM (in response to [Jongware])
Oh, thank you very much Jongware However, I get an error when using the script
Error 21; undefined is not an object.
Line 30
-> var swatchgroup=docRef.swatchGroups.add();
I copied and pasted the above code into Notepad, then saved as a .js, ran the script within Illustrator CS5. It asked for the csv file, then gave me the error.
5. Re: Convert CSV to Swatch Library? Larry G. Schneider, Aug 7, 2012 2:51 PM (in response to Ginakra67)
The line before the error should be
var docRef = app.activeDocument; not
var docRef = app.activeDocuments;
6. Re: Convert CSV to Swatch Library? Ginakra67, Aug 7, 2012 4:34 PM (in response to Larry G. Schneider)
Larry, that fixed the problem! Now I have a swatch library of all 300 colors Woohoo! Thank you Jongware, for the script, and Larry for catching the error! I have health issues that keep my time on the computer very limited. This would have taken me weeks to do one by one (if I ever managed to get it done at all), so I thank you so much for the script that did it in... less than a minute. This is fantastic!!!
I really appreciate the time and effort. You did me a huge favor and I can't thank you enough. This is one happy gal.
7. Re: Convert CSV to Swatch Library? [Jongware], Aug 8, 2012 12:15 PM (in response to Ginakra67)
What a weird error to leave in ... I must've done "one slight adjustment" just prior to copying -- usually I make sure it works as advertised! Thanks to eagle-eyed Larry for correcting it.
8. Re: Convert CSV to Swatch Library? Ginakra67, Aug 13, 2012 4:59 PM (in response to [Jongware])
Just wanted to say thank you, again !! I am happily using my 300 color swatch library, it's awesome! Thanks a bunch for saving me a HUGE amount of time and at least four migraines, and for doing me such a huge favor, Jongware. I really appreciate it
9. Re: Convert CSV to Swatch Library? brandtryan, Feb 22, 2013 8:09 AM (in response to Ginakra67)
Any of you brave script writers want to take on another challenge? I've visited colormunki.com and played around with the Munsell color swatch/pallette app, and have managed to save ALL of the swatches in csv files. There are 40 files, one for each hue. Each csv file has four columns:
- Sample Name (2.5R, etc.)
- L* (lightness)
- a* (Red for pos #s and Green for neg #s)
- b* (Yellow for pos #s and Blue for neg #s)
I attempted to modify the scripts in this thread, but was a bit out of my league. There are seperate attributes/properties/classes, etc. for lab colors, and after getting a migraine headache, I threw in the towel. Below is a link to the zip file that contains the 40 csv files, if anyone wants to take a stab at it. AFAIK, since the colormunki site allows visitors to download the swatch data, I'm assuming it's ok to post this link. All due credit to colormunki.com and Munsell Color. (not sure why they don't just allow visitors to download entire Munsell library in one zip file)
I have not checked the files individually, so there may be a mistake here and there (missing color maybe, or double entry of a color) -- but it should be pretty clean.
10. Re: Convert CSV to Swatch Library? Muppet Mark, Feb 22, 2013 1:50 PM (in response to brandtryan)
Had a quick look reading the data is simple enough… I didn't look at the legality of this… Do you have AI, ID or both…? I used ID in the original linked post as it can save to an *.ase file which can be used throughout the suite…
11. Re: Convert CSV to Swatch Library? brandtryan, Feb 23, 2013 2:06 AM (in response to Muppet Mark)
I've got both -- (Creative Cloud subscription). An .ase would be great!
Somehow managed to miss seeing the "neutral" tabs on the colormunki site -- going to create csv files for those as well
12. Re: Convert CSV to Swatch Library? brandtryan, Feb 23, 2013 2:15 AM (in response to brandtryan)
I've updated the original link to the csv files -- the new .zip contains the additional 40 neutral swatches (white to black)
13. Re: Convert CSV to Swatch Library? mutagenicpixels, Feb 3, 2014 1:42 PM (in response to [Jongware])
Jongware, this is giving me the following error (using Illustrator CS5.5):
Error 21: undefined is not an object.
Line: 9
-> var columns = fileArray[0].length;
I'm in dire need of this script as well to quickly digest 900+ swatches :/
Dave
14. Re: Convert CSV to Swatch Library? Larry G. Schneider, Feb 3, 2014 2:30 PM (in response to mutagenicpixels)
What's the setup of your CSV files? Is this for RGB or CMYK colors? The script above is for RGB. If for CMYK, change the number of columns to 5 (Name,Cvalue,Mvalue,Yvalue,Kvalue)
15. Re: Convert CSV to Swatch Library? mutagenicpixels, Feb 3, 2014 2:41 PM (in response to Larry G. Schneider)
Thank you for replying so rapidly Larry! This is for RGB and my CSV file is setup as followed:
Aloe 309,167,217,172
Alpine 468,75,175,218
Aluminum 052,178,178,187
Amazon 313,119,149,85
Ambrosia 337,143,184,164
The number after the name is the colour code and the three numbers at the end are RGB values.
16. Re: Convert CSV to Swatch Library? Larry G. Schneider, Feb 3, 2014 4:31 PM (in response to mutagenicpixels)
17. Re: Convert CSV to Swatch Library? [Jongware], Feb 4, 2014 1:35 AM (in response to Larry G. Schneider)
18. Re: Convert CSV to Swatch Library? mutagenicpixels, Feb 4, 2014 5:40 AM (in response to [Jongware])
I created the CSV using Excel (also attempted it in TextEdit) and edited the script using ExtendScript Toolkit
19. Re: Convert CSV to Swatch Library? [Jongware], Feb 4, 2014 6:33 AM (in response to mutagenicpixels)
mutagenicpixels wrote:
I created the CSV using Excel (also attempted it in TextEdit) and edited the script using ExtendScript Toolkit
It works for both Larry and me, as you can see. These are actual Illustrator screen shots, not Photoshopped or otherwise mocked up.
Did you know that Mac OS X's TextEdit by default does not save a new file as plain text, but as RTF instead? If that is not the issue here (and you are aware of the difference between a plain text file and any other sort -- and know how to check), then there must be a fundamental mis-understanding somewhere.
For the moment, I'm going to stick to my PICNIC diagnostic.
20. Re: Convert CSV to Swatch Library? mutagenicpixels, Feb 4, 2014 6:53 AM (in response to [Jongware])
[Jongware] wrote:
Did you know that Mac OS X's TextEdit by default does not save a new file as plain text, but as RTF instead?
Quite aware of that. Besides if I was saving in Excel and LibreOffice into a new file it would have nothing to do with TextEdit's rtf now would it?
I don't think you're shopping or mocking it up, but writing "Mine works, so yours should" doesn't solve anything. Perhaps I should contact the original scripts creator instead of your knockoff
21. Re: Convert CSV to Swatch Library? [Jongware], Feb 4, 2014 7:30 AM (in response to mutagenicpixels)
Equally, noting "it does not work for me, please help" as the problem on your end does nothing to help us help you.
As I said, I am able to get the error you got -- by pointing the script to an invalid CSV file. Perhaps, if you upload your file somewhere on a public server, we can trouble-shoot that part.
22. Re: Convert CSV to Swatch Library? mutagenicpixels, Feb 4, 2014 7:37 AM (in response to [Jongware])
I replied the same way as Ginakra67 with quite different results; that's not my problem, that's yours...
Anyway if you're so inclined:
23. Re: Convert CSV to Swatch Library? Silly-V, Feb 4, 2014 8:25 PM (in response to mutagenicpixels)
I opened your file in Excel and it looked the way I expected it to. I opened it in Notepad and all of your text is in one line. I saved another CSV from my Excel (choosing Comma-separated Values csv, not the MSDOS ones or anything), and opened the new one in Notepad and the text was in columns & rows now. I must therefore advise that it is an issue with your text file. (O_o)
24. Re: Convert CSV to Swatch Library? Jmsa15369610, Jul 21, 2016 2:18 AM (in response to Ginakra67)
Hello Everyone,
I have used the script with the exported .csv file to create the color swatches using RGB values, and this works OK.
But there is a problem: it adds QUOTES to the beginning of the name after export. Is there any way these QUOTES can be removed, or not be added while exporting?
These swatches can be renamed one by one, but I would like to rename them all at once. I have used a renaming script, but it does not recognize these quotes at the beginning of each swatch name; it changes the name but leaves the quotes.
Thank you all for any help you can provide; it will be well appreciated.
25. Re: Convert CSV to Swatch Library? DCardillo, Oct 11, 2016 10:39 AM (in response to [Jongware])
hi there,
does anyone have an idea how to do the exact opposite? I found this thread because I have a list of CMYK swatches saved as spot colors in an ASE file and I want to parse them into CSV/TDV to bring them into Excel, where I can revise the data to import into another database.
26. Re: Convert CSV to Swatch Library? Silly-V, Oct 11, 2016 11:32 AM (in response to DCardillo)
Hey DCardillo - you can start off with this script snippet:
#target illustrator
function test(){
    var doc = app.activeDocument;
    var arr = [["name", "cyan", "magenta", "yellow", "black"]], thisSwatch, thisSwatchColorSpotColor;
    for(var i = 0; i < doc.swatches.length; i++){
        thisSwatch = doc.swatches[i];
        if(thisSwatch.color.typename != "SpotColor"){
            continue;
        }
        thisSwatchColorSpotColor = thisSwatch.color.spot.color;
        if(thisSwatchColorSpotColor.typename != "CMYKColor"){
            continue;
        }
        arr.push([
            thisSwatch.name,
            Math.floor(thisSwatchColorSpotColor.cyan),
            Math.floor(thisSwatchColorSpotColor.magenta),
            Math.floor(thisSwatchColorSpotColor.yellow),
            Math.floor(thisSwatchColorSpotColor.black)
        ]);
    };
    var newFile = File("~/Desktop/SwatchList.csv");
    newFile.open("w");
    newFile.write(arr.join("\n"));
    newFile.close();
};
test();
I recently posted a similar post to this one. The reason why I'm posting this again is because an administrator told me to put [ highlight=Java] and [/highlight] to make my post more readable I hope this is what he meant. Please help me. I'm trying to get at least a B in this course.
Code Java:
/**
 * A class to give students experience using loops. This class
 * creates and manipulates objects of Greg's Date class.
 */
public class SpeedDating {

    // Note: this class has no instance variables!

    /**
     * Creates an empty SpeedDating object so that you can call the methods
     * (a constructor that takes no parameters is known as a "default"
     * constructor)
     */
    public SpeedDating() {
    } // Constructor has empty body

    /**
     * Prints the day of the week (e.g. "Thursday") on which Halloween will
     * fall for 10 consecutive years.
     * @param startYear the first of the 10 consecutive years
     */
    public void printHalloweens(int startYear) {
        // TO DO: write body of this method here
    }
1.Create a SpeedDating object
2.Have the user enter a year, and call the printHalloweens method to print the day of the week on which Halloween will occur for the next 10 years, starting with the input year
3.Have the user enter another year, call the getThanksgiving method, and print the Date object returned. Print the Date in the main method, not in SpeedDating.
4.Have the user enter another year, call getThanksgiving again, and print the Date object returned again.
Right now I'm not worried about the test class. I just need some help in writing the methods. My assignment states specifically that loops must be used in order to get credit. If I can write the loops I'm pretty sure I can do the test class. Please someone help me. | http://www.javaprogrammingforums.com/%20whats-wrong-my-code/18908-how-write-body-methods-printingthethread.html | CC-MAIN-2015-27 | refinedweb | 294 | 74.08 |
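For reference, here is the shape such a loop can take. This sketch uses java.time.LocalDate rather than Greg's Date class (which isn't shown above), so it illustrates the pattern rather than being a drop-in method body:

```java
import java.time.LocalDate;

public class SpeedDatingSketch {
    // Same contract as printHalloweens: print the weekday of Oct 31
    // for 10 consecutive years starting at startYear
    public void printHalloweens(int startYear) {
        for (int year = startYear; year < startYear + 10; year++) {
            LocalDate halloween = LocalDate.of(year, 10, 31);
            System.out.println(year + ": " + halloween.getDayOfWeek());
        }
    }

    public static void main(String[] args) {
        new SpeedDatingSketch().printHalloweens(2015); // 2015: SATURDAY, 2016: MONDAY, ...
    }
}
```

The same counting-loop shape works for the assignment's Date class: loop from startYear to startYear + 9, build the October 31 date for that year, and print its day of the week.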
Understanding Access Modifiers in VB 2005
Access modifiers in Visual Basic .NET are represented by the four words—public, protected, friend, and private—that support a cornerstone of object-oriented programming (OOP): encapsulation. Encapsulation supports information hiding, and the general purpose of an access modifier is to determine who has access to what (or from whom certain information in your code is hidden). The reason you want to limit access is because all access, all the time is a very bad thing. Just ask anyone who's used procedural programming.
Blah! Blah! Blah! You've probably heard all of that before already, but it doesn't explain when or how you would use access modifiers. As far as I can tell, there are two reasons you need to know about access modifiers:
- You just need some simple rules to figure out which keyword to use.
- You need all of the theory mumbo jumbo to pass an OOP IQ test.
Simple Rules, Simple Tools
Almost everyone has heard the acronym KISS, or Keep It Simple Stupid. To keep it simple, consult the following three simple rules (listed in order of simplicity) to figure out which modifier to use:
- You can always use the Public modifier to allow everyone to have access to everything.
- Make everything Protected except what absolutely must be Public. I mean everything, and try to make as little as possible Public.
- You can always change access modifiers up until the point the software is shrink-wrapped. (Don't worry if you get it wrong in version 1; you have version 2 to get it right. Plus, even the experts make mistakes here.)
Now, if the simplicity of the above rules seems fishy, consider the policy I use:
- Fields are always Private and wrapped in Public properties.
- Only the subroutines and functions that define the class's primary behavior are made public; everything else is private.
- If I know a child class will be created—and I usually do because I design the classes beforehand—I will make some behaviors protected, but just the behaviors I think that I will change.
- Occasionally, I use the Friend modifier to reduce the signal-to-noise ratio between assemblies.
These rules will satisfy a huge percentage of the programming you do. I have simple, general rules because simple rules facilitate speed. Listing 1 is an example of the application of these rules.
Listing 1: Elements of a .vb File Containing Access Modifiers as I Would Apply Them
'Access modifier not needed here
Namespace AllAccess

    Friend Class OnlyInThisAssembly
        Public Function AddTwo(ByVal lhs As Integer, _
                               ByVal rhs As Integer) As Integer
            Return lhs + rhs
        End Function
    End Class

    Public Class CustomerEveryOneCanUse
        Private nameField As String
        Friend key As String

        Public Sub New(ByVal nameField As String)
        End Sub

        Public Property NameProperty() As String
            Get
                Return nameField
            End Get
            Set(ByVal value As String)
                nameField = value
            End Set
        End Property

        Protected Overridable Sub ChildClassCanExtend()
            ' do something
        End Sub
    End Class

End Namespace
Don't beat yourself up if it's not perfect the first time. No one gets it perfect the first time.
Encapsulation: OOP Cornerstone
Encapsulation enables you to hide information, but no one ever tells you what to hide and from whom. The answer is that you are hiding information from yourself mostly and other programmers sometimes. You hide the information because consumers—you and other programmers—don't need to know it to use the code element (like class). You are hiding it to make using the element easier.
A consumer is anyone who uses the element, which more often than not is you at a later date. When one consumes code, only the public members need to be considered directly and the general behaviors need to be considered indirectly. That is, a consumer needs to learn only how to use what is public and whether or not the code as a whole will complete the work needed.
Public members are members that anyone can use. Private members are like underwear; only you should see them. Protected members come into play only when you want members to appear private to external consumers, but accessible to child classes. By making members protected, you are conveying that these members can be changed by generalizers—those that will inherit from and extend the members. (Protected members are like underwear from Victoria's Secret: they are meant to be seen by a trusted few.) Finally, the Friend modifier means that an element is accessible only in the same assembly, like helper classes and members.
Most of the time, you will be creating private fields, private helper methods, public classes, public properties, and public methods and events that represent consumable behaviors. Most of your classes will be public.
For more information about using access modifiers, refer to How to: Control the Availability of a Variable on MSDN.
Practice Makes Perfect
Remember, no one gets all access modifiers correct all the time. If you are still a little unsure, try making everything private and then change one item at a time to public until the code you write satisfies the very minimal set of elements needed to make the code usable. If that prospect frightens you intellectually, make everything public and change one element at a time to private. If the private modifier makes your code stop working, change the item back to public and try the next item.
Finally, to do the best job you can; you will need to have to have an intimate understanding of encapsulation. Continue reading articles like this one, buy books on the subject, and get lots of practice._0<< | http://www.developer.com/net/vb/article.php/3582961/Understanding-Access-Modifiers-in-VB-2005.htm | CC-MAIN-2015-06 | refinedweb | 940 | 51.68 |
I'm trying to compile my program to a shared library that I can use from within Python code using ctypes.
The library compiles fine using this command:
g++ -shared -Wl,-soname,mylib -O3 -o mylib.so -fPIC [files] `pkg-config --libs --cflags opencv`
from ctypes import *
mylib = CDLL("/path/to/mylib.so")
print mylib.test() // Expected output: Hello World
libdc1394 error: Failed to initialize libdc1394
Very frustrating that nobody actually shows a concrete solution. I had this issue after installing OpenCV. For me the easiest solution to remove this warning was actually to disable this driver:
sudo ln /dev/null /dev/raw1394 | https://codedump.io/share/RGVGGwe3bg7x/1/ctypes-error-libdc1394-error-failed-to-initialize-libdc1394 | CC-MAIN-2018-22 | refinedweb | 103 | 58.28 |
Jaxb parsing : xml
JAXB is for converting java pojo to xml. It Supports annotation too. In jdk 1.6 and above it is aready package inside. For less then jdk 1.6 , download from link:- jaxb download link Project Structure :- Fruit Java Pojo:- package com.sandeep.jaxb.demo;import javax.xml.bind.annotation.XmlAttribute;import […]
XStream Parser : XML & JSON
Xstream parser is used to convert java POJO to xml or json and vice-versa. The main benefit is we can use different parsers for different file like Stax parser. download jar:- link to download xstream parser Project Structure :- Student Pojo Class :- package com.sandeep.example.value.objects;public class […]
Pretty Time Format :Java
Pretty time format is used mainly in social media site like twitter, Facebook. If you mark your posts in Facebook then you can see message like “posted 2mins ago”, “posted 1day ago”. These formats are pretty time format. Download the jar file from:- Project Structure:- Test the Pretty Time:- package com.sandeep.pretty.demo;import java.util.Calendar;import […]
Json and Jquery Demo
Building Project Structure : Download the Google GSON from this link download gson. Create a dynamic web project and put the gson jar file in lib folder of WEB-INF.Refer the screen shot below. JsonDemoServlet : Create a Java Servlet to process request from client (browser) .The response type of the servlet be application/JSON. package com.sandeep.example.servlet;import […] | http://www.tutorialsavvy.com/page/79/ | CC-MAIN-2017-09 | refinedweb | 236 | 59.9 |
Databases require more defined structure than Python lists or dictionaries 1.
When we create a database table we must tell the database in advance the names of each of the columns in the table and the type of data which we are planning to store in each column. When the database software knows the type of data in each column, it can choose the most efficient way to store and lookup the data based on the type of data.
You can look at the various data types supported by SQLite at the following url:
Defining structure for your data up front may seem inconvenient at the beginning, but the payoff is fast access to your data even when the database contains a large amount of data.
The code to create a database file and a table named Tracks with two columns in the database is as follows:
import sqlite3 conn = sqlite3.connect('music.sqlite3')cur = conn.cursor() cur.execute('DROP TABLE IF EXISTS Tracks ')cur.execute('CREATE TABLE Tracks (title TEXT, plays INTEGER)') conn.close()
The connect operation makes a “connection” to the database stored in the file music.sqlite3 in the current directory. If the file does not exist, it will be created. The reason this is called a “connection” is that sometimes the database is stored on a separate “database server” from the server on which we are running our application. In our simple examples the database will just be a local file in the same directory as the Python code we are running.
A cursor is like a file handle that we can use to perform operations on the data stored in the database. Calling cursor() is very similar conceptually to calling open() when dealing with text files.
Once we have the cursor, we can begin to execute commands on the contents of the database using the execute() method.
Database commands are expressed in a special language that has been standardized across many different database vendors to allow us to learn a single database language. The database language is called Structured Query Language or SQL for short.
In our example, we are executing two SQL commands in our database. As a convention, we will show the SQL keywords in uppercase and the parts of the command that we are adding (such as the table and column names) will be shown in lowercase.
The first SQL command removes the Tracks table from the database if it exists. This pattern is simply to allow us to run the same program to create the Tracks table over and over again without causing an error. Note that the DROP TABLE command deletes the table and all of its contents from the database (i.e. there is no “undo”). cur.execute('DROP TABLE IF EXISTS Tracks ')
The second command creates a table named Tracks with a text column named title and an integer column named plays.
cur.execute('CREATE TABLE Tracks (title TEXT, plays INTEGER)')
Now that we have created a table named Tracks, we can put some data into that table using the SQL INSERT operation. Again, we begin by making a connection to the database and obtaining the cursor. We can then execute SQL commands using the cursor.
The SQL INSERT command indicates which table we are using and then defines a new row by listing the fields we want to include (title, plays) followed by the VALUES we want placed in the new row in the table. We specify the values as question marks (?, ?) to indicate that the actual values are passed in as a tuple ( 'My Way', 15 )as the second parameter to the execute() call.
import sqlite3 conn = cur.execute('DELETE FROM Tracks WHERE plays < 100')conn.commit()cur.close()
First we INSERT two rows into our table and use commit() to force the data to be written to the database file.
Then we use the SELECT command to retrieve the rows we just inserted from the table. On the SELECT command, we indicate which columns we would like (title, plays) and indicate which table we want to retrieve the data from. After we execute the SELECT statement, the cursor is something we can loop through in a for statement. For efficiency, the cursor does not read all of the data from the database as the title and the second value as the number of plays. Do not be concerned that the title strings are shown starting with u’. This is an indication that the strings are Unicode strings that are capable of storing non-Latin character sets.
At the very end of the program, we execute an SQL command to DELETE the rows we have just created so we can run the program over and over. The DELETE command shows the use of a WHERE clause that allows us to express a selection criterion so that we can ask the database to apply the command to only the rows that match the criterion. In this example the criterion happens to apply to all the rows so we empty the table out so we can run the program repeatedly. After the DELETE is performed we also call commit() to force the data to be removed from the database.
- 瀏覽次數:725 | http://www.opentextbooks.org.hk/zh-hant/ditatopic/6810 | CC-MAIN-2021-17 | refinedweb | 873 | 68.4 |
jGuru Forums
Posted By:
m_dunne
Posted On:
Tuesday, February 19, 2002 02:21 AM
Hi,
I have a question about implementing an abstract method from an superclass
in a concrete subclass.
Currently I have an abstract class with an abstract with return type Map.
In a concrete subclass of this I have implemented this method but want to
return a HashMap. The reason that the abstract method is returning Map as
opposed to HashMap is that all implementing
subclasses are not restricted to returning a HashMap. (can also return a Hashtable).
However, I thought this was possible, but it seems that it is not- as I am
getting a compiler error.
Does anyone know a way to do this. The superclass has to be an abstract class
as opposed to an interface as there are a number of methods already implemented in it.
TIA
Re: implementing abstract methods in a concrete subclass
Posted By:
Paul_Connaughton
Posted On:
Tuesday, February 19, 2002 03:08 AM
You can implement an overridden method to return a subclass of the return type, but you can not specify that inthe method header. The return type must be the same as the parent class.
The following example will allow you to return a HashMap as a Map, but you have to cast it to access HashMap functionality.
public class SomeSubclass extends AbstractClass{ public Map overRiddenMethod(){ HashMap map = new HashMap(); return map; } } public static void main( String args[] ){ AbstractClass myClass = new SomeSubClass(); Map map = myClass.overRiddenMethod(); if ( map instanceof HashMap ){ //now you know that you have a HashMap HashMap hashMap = (HashMap) map; } }} | http://www.jguru.com/forums/view.jsp?EID=763269 | CC-MAIN-2014-15 | refinedweb | 265 | 59.43 |
Hey all,
Hash vs methods may seem like a silly question, but in certain
situations I don’t understand why you would use a method over a hash.
For example, take a look at this method:
#this is defined in a class called core.rb
def assign_attributes(attributes)
attributes.each_pair { |k,v| send("#{k}=", v) if respond_to?("#{k}=")
} if attributes
end
This method gets called with this:
assign_attributes @options
Options is a hash when it gets passed into argument list which contains
key/value pairs:
@options[:dom_id] = @options.delete(:id) @options[:dom_class] = @options.delete(:class) assign_attributes @options
It appears that assign_attributes method is taking the key/value pairs
and converting them into setter and getter methods in a class called
table builder:
attr_accessor :dom_class, :dom_id
So basically we pass a hash and the each_pair method iterates through
the key/value pairs and uses the send method to call a setter #{k} back
on the table builder class and passing ‘v’ as a parameter so now the
getter method :dom_class or dom_id can be used to retrieve a value like
here:
def sortable?
dom_class && dom_class.include?(“sortable”)
end
The dom_class is called and if it exists then we check if it includes
the string ‘sortable’, but now it appears the method is being used as an
array or can include? be called on a mehtod like a method chain? Or is
dom_class not even a method? And if it’s not then why even bother
creating setter/getter methods when you could have just used a hash? Or
am I misunderstanding what assign_attributes is doing?
Thanks for response. | https://www.ruby-forum.com/t/hash-vs-methods/205709 | CC-MAIN-2021-49 | refinedweb | 267 | 69.11 |
Prev
Java Serialization Code Index
Headers
Your browser does not support iframes.
Re: Convert String to Object - Newbie Question
From:
"A. Bolmarcich" <aggedor@earl-grey.cloud9.net>
Newsgroups:
comp.lang.java.help
Date:
Fri, 28 Sep 2007 15:33:48 -0000
Message-ID:
<slrnffq7ms.286.aggedor@earl-grey.cloud9.net>
On 2007-09-28, Marion <mchew02@hotmail.com> wrote:
Hello all,
I am trying to convert a myString declared as a String[] and convert
it into an Object so that I can use the non static method.
Currently, my program does not compile. The error I'm receiving is my
old friend error msg: "non-static method setData....cannot be
referenced from a static content.
This error likely has nothing to do with converting a String[] into an
Object.
Essentially, I want to read in an array of Strings into objects for
data manipulation (a sort to be exact.).
The program looks something like this:
...
public void init (String[] array, int size){
.// assume array[0] is the first string read in from the text file
String dataString = array[0];
Node.setData(dataString);
...
}
Class Node {
(constructor stuff declared here)
}
...
public void setData(Object data){
this.data = data;
}
I read from the Java docs that String was an object (or was extended
by it) so doesn't that mean that String is an object? I'm beginning
to dislike objects! lol,
Not only is a String an Object, but a String[] is alos an object.
I know how to turn integers into objects with:
Integer one = new Integer(1);
why doesn't "String myString = new String (array[0]);:
work?
It works. Here is a small class that demonstrates that it works (as
long as you run the class with at least one argument value).
public class Test {
public static void main(String[] array) {
String myString = new String(array[0]);
System.out.println("myString = " + myString);
}
}
If you dislike objects, you should not try to write Java programs.
I tried to read documentation for java.io.Serializable (not sure if
that's what it's called but something like that) and on reflections.
being new to java, those documents I read scared the begeezus off of
me. (didn't understand em, nosiree)
Neither serialization nor reflection is involved here.
I tried to use "stringToObject(string,class)" statement and sadly, I
am having trouble understanding how to use it in my program. There
are no examples anywhere.
All I want is to convert a string to object and it has gotten suddenly
complicated. :(
Ok, well it's 4am here, I'll try this again in the morning. :)
Anyone with hints or suggestions, please, I'm open to them.
Getting back to the actual problem: "non-static method setData....cannot
be referenced from a static content.", the Java compile is complaining
about the line
Node.setData(dataString);
Because setData is a non-static method, you need to invoke the method
on a Node object (or an object that has Node as an ancestor class). In
the code that you have given there are no Node objects. In the
epression
Node.setData(dataString);
"Node" is the name of a class, not a reference to a Node object (an
instance of the class). Where it the Node object whose data you want to
set? | http://preciseinfo.org/Convert/Articles_Java/Serialization_Code/Java-Serialization-Code-070928183348.html | CC-MAIN-2021-49 | refinedweb | 546 | 67.55 |
Red Hat Bugzilla – Full Text Bug Listing
Description of problem:
Started Live CD installer in VM with networking disabled.
Version-Release number of selected component:
anaconda-18.37.11-1.fc18
Additional info:
cmdline: /usr/bin/python /sbin/anaconda --liveinst --method=livecd:///dev/mapper/live-osimg-min --lang en_US.UTF-8
executable: /sbin/anaconda
kernel: 3.6.10-4.fc18.x86_64
uid: 0
Created attachment 675115 [details]
File: backtrace
Created attachment 675116 [details]
File: core_backtrace
Created attachment 675117 [details]
File: dso_list
Created attachment 675118 [details]
File: environ
Command-line:
$ qemu-kvm -m 2048 -hda f18-test-1.img -cdrom ~/xfr/fedora/F18/F18-Final/RC2/Fedora-18-x86_64-Live-Desktop.iso -usb -vga qxl -boot menu=on -usbdevice mouse
Created attachment 675119 [details]
anaconda.log
Created attachment 675120 [details]
program.log
Created attachment 675121 [details]
storage.log
ifcfg.log and packaging.log are empty.
*** Bug 896687 has been marked as a duplicate of this bug. ***
This seems to be reliable reproducer.
Start Live image:
$ qemu-kvm -m 2048 -cdrom ~/xfr/fedora/F18/F18-Final/Final/Fedora-18-x86_64-Live-Desktop.iso -vga qxl -usbdevice mouse
Click Live System User to login.
Click Try Fedora.
Click Close.
Click network icon in panel.
Click Wired to turn networking Off.
Click Activities in panel.
Click installer icon.
An ABRT notification appears.
Click Report.
There is no apparent response -- I believe this is because the exception is owned by root.
Click Activities.
Click Show Applications.
Click the ABRT icon.
Click Show all problems.
The Gtk.py ... RuntimeError exception is listed.
Testing is with an F17 host:
$ rpm -q qemu-kvm
qemu-kvm-1.0.1-2.fc17.x86_64
I've also encountered this issue a few times. I haven't tried to reproduce it consistently yet, but I've seen it happen two or three times already. What is notable for me is that it only occured so far when I had a network connection active instead of not having one like Steve.
Thanks Alex. Your comments in Bug 904014 helped me discover another reproducer for this one:
Start the Live image in a VM:
$ qemu-kvm -m 2048 -hda f18-test-2.img -cdrom ~/xfr/fedora/F18/F18-Final/Final/Fedora-18-x86_64-Live-Desktop.iso -vga qxl -boot menu=on -usbdevice mouse
Proceed to the desktop.
Start a terminal.
[liveuser@localhost ~]$ su
[root@localhost liveuser]# hostnamectl set-hostname f18-test-2
[root@localhost liveuser]# exit
exit
<module> <module>
import meh.ui.gui
File "/usr/lib/python2.7/site-packages/meh/ui/gui.py", line 23, in <module> <module>
raise RuntimeError("Gtk couldn't be initialized")
RuntimeError: Gtk couldn't be initialized
[liveuser@localhost ~]$
seen with dd USB of Mate-live Cd :
:fails with this error if no network connection.
:Works fine if reboot with wired connection present.
:Liveinst (Install to Hard disk) Works in VirtualBox install as there is a wired connection
On a physical test machine here I was able to repeat this error several times in a row, where the installer on KDE live image would crash with the above GTK error.
I then turned off my machine, unplugged my network cable and it then worked several times in a row.
I then plugged my network cable back in without a power off, and it still continued to work.
Not particularly helpful but at least it's not confined to VMs, and in my case it was the opposite to above (fails with error with network plugged in).
Will try to gather more information.
Can someone please see if this works for them:
xhost +
liveinst
This might be a re-occurrence of an old F15 bug:
(In reply to comment #16)
> Can someone please see if this works for them:
>
> xhost +
> liveinst
Chris,
yes, it works.
I thought one had to be # (su) for liveinst to work....It is that way in Soas spin- But this does not seem to apply in a live CD in Virtual Box with f19:
Testing with f19 TC5 live desktop x86_64 in Virtual Box
2358 MB 16,00 GB HD
1-) network detached + Live System User/settings:Airplane mode on
$su
#liveinst
localuser: root being added to access control list
Starting installer....
"anaconda 19.16.1 for Fedora 19 ... started.
Not asking for VNC beause we have don't have a network"
Anaconda appears in 2nd window
2-)VirtualBox: [x]Enable Network Adapter: Bridged Adapter eth0
$su
#liveinst
localuser: root being added to access control list
Starting installer....
"anaconda 19.16.1 for Fedora 19 ... started.
Anaconda appears in 2nd window
3-)VirtualBox: [x]Enable Network Adapter: Bridged Adapter eth0
(try with-out root)
$liveinst
localuser: root being added to access control list
Starting installer....
"anaconda 19.16.1 for Fedora 19 ... started.
Anaconda appears in 2nd window
*** This bug has been marked as a duplicate of bug 679486 *** | https://bugzilla.redhat.com/show_bug.cgi?format=multiple&id=893218 | CC-MAIN-2016-26 | refinedweb | 805 | 58.18 |
8. Formulas¶
Formulas define logical relations between the free variables used in expressions.
Depending on the values assigned to those free variables, a formula can be true or false.
When a formula is true, we often say that the formula “holds”.
For example, the formula
x = 4 + 5 holds if the value
9 is assigned to
x, but it
doesn’t hold for other assignments to
x.
Some formulas don’t have any free variables. For example
1 < 2 always holds, and
1 > 2
never holds.
You usually use formulas in the bodies of classes, predicates, and select clauses to constrain
the set of values that they refer to.
For example, you can define a class containing all integers
i for which the formula
i in
[0 .. 9] holds.
The following sections describe the kinds of formulas that are available in QL.
8.1. Comparisons¶
A comparison formula is of the form:
<expression> <operator> <expression>
See the tables below for an overview of the available comparison operators.
8.1.1. Order¶
To compare two expressions using one of these order operators, each expression must have a type and those types must be compatible and orderable.
For example, the formulas
"Ann" < "Anne" and
5 + 6 >= 11 both hold.
8.1.2. Equality¶
To compare two expressions using
=, at least one of the expressions must have a type. If
both expressions have a type, then their types must be compatible.
To compare two expressions using
!=, both expressions must have a type. Those types
must also be compatible.
For example,
x.sqrt() = 2 holds if
x is
4, and
4 != 5 always holds.
For expressions
A and
B, the formula
A = B holds if there is a pair of values—one
from
A and one from
B—that are the same. In other words,
A and
B have at least
one value in common. For example,
[1 .. 2] = [2 .. 5] holds, since both expressions have
the value
2.
As a consequence,
A != B has a very different meaning to the negation
not A = B [1]:
A != Bholds if there is a pair of values (one from
Aand one from
B) that are different.
not A = Bholds if it is not the case that there is a pair of values that are the same. In other words,
Aand
Bhave no values in common.
Examples
- If both expressions have a single value (for example
1and
0), then comparison is straightforward:
1 != 0holds.
1 = 0doesn’t hold.
not 1 = 0holds.
- Now compare
1and
[1 .. 2]:
1 != [1 .. 2]holds, because
1 != 2.
1 = [1 .. 2]holds, because
1 = 1.
not 1 = [1 .. 2]doesn’t hold, because there is a common value (
1).
- Compare
1and
none()(the “empty set”):
1 != none()doesn’t hold, because there are no values in
none(), so no values that are not equal to
1.
1 = none()also doesn’t hold, because there are no values in
none(), so no values that are equal to
1.
not 1 = none()holds, because there are no common values.
8.2. Type checks¶
A type check is a formula that looks like:
<expression> instanceof <type>
You can use a type check formula to check whether an expression has a certain type. For
example,
x instanceof Person holds if the variable
x has type
Person.
8.3. Range checks¶
A range check is a formula that looks like:
<expression> in <range>
You can use a range check formula to check whether a numeric expression is in a given
range. For example,
x in [2.1 .. 10.5] holds if the variable
x is
between the values
2.1 and
10.5 (including
2.1 and
10.5 themselves).
Note that
<expression> in <range> is equivalent to
<expression> = <range>.
Both formulas check whether the set of values denoted by
<expression> is the same as the
set of values denoted by
<range>.
8.4. Calls to predicates¶
A call is a formula or expression that consists of a reference to a predicate and a number of arguments.
For example,
isThree(x) might be a call to a predicate that holds if the argument
x is
3, and
x.isEven() might be a call to a member predicate that holds if
x is even.
A call to a predicate can also contain a closure operator, namely
* or
+. For example,
a.isChildOf+(b) is a call to the transitive closure of
isChildOf(), so it holds if
a is a descendent of
b.
The predicate reference must resolve to exactly one predicate. See Name resolution for more information about how a predicate reference is resolved.
If the call resolves to a predicate without result, then the call is a formula.
It is also possible to call a predicate with result. This kind of call is an expression in QL, instead of a formula. See Calls to predicates (with result) for the corresponding topic.
8.5. Parenthesized formulas¶
A parenthesized formula is any formula surrounded by parentheses,
( and
). This formula
has exactly the same meaning as the enclosed formula. The parentheses often help to improve
readability and group certain formulas together.
8.6. Quantified formulas¶
A quantified formula introduces temporary variables and uses them in formulas in its body. This is a way to create new formulas from existing ones.
The following “quantifiers” are the same as the usual existential and universal quantifiers in mathematical logic.
8.6.1. exists¶
This quantifier has the following syntax:
exists(<variable declarations> | <formula>)
You can also write
exists(<variable declarations> | <formula 1> | <formula 2>).
This is equivalent to
exists(<variable declarations> | <formula 1> and <formula 2>).
This quantified formula introduces some new variables. It holds if there is at least one set of values that the variables could take to make the formula in the body true.
For example,
exists(int i | i instanceof OneTwoThree) introduces a temporary variable of
type
int and holds if any value of that variable has type
OneTwoThree.
8.6.2. forall¶
This quantifier has the following syntax:
forall(<variable declarations> | <formula 1> | <formula 2>)
forall introduces some new variables, and typically has two formulas in its body. It holds
if
<formula 2> holds for all values that
<formula 1> holds for.
For example,
forall(int i | i instanceof OneTwoThree | i < 5) holds if all integers
that are in the class
OneTwoThree are also less than
5.
In other words, if there is a value in
OneTwoThree that is greater than or equal to
5,
then the formula doesn’t hold.
Note that
forall(<vars> | <formula 1> | <formula 2>) is
logically the same as
not exists(<vars> | <formula 1> | not <formula 2>).
8.6.3. forex¶
This quantifier has the following syntax:
forex(<variable declarations> | <formula 1> | <formula 2>)
This quantifier exists as a shorthand for:
forall(<vars> | <formula 1> | <formula 2>) and exists(<vars> | <formula 1> | <formula 2>)
In other words,
forex works in a similar way to
forall, except that it ensures that
there is at least one value for which
<formula 1> holds.
To see why this is useful, note that the
forall quantifier could hold trivially.
For example,
forall(int i | i = 1 and i = 2 | i = 3) holds: there are no integers
i
which are equal to both
1 and
2, so the second part of the body
(i = 3) holds for
every integer for which the first part holds.
Since this is often not the behavior that you want in a query, the
forex quantifier is a
useful shorthand.
8.7. Logical connectives¶
You can use a number of logical connectives between formulas in QL. They allow you to combine existing formulas into longer, more complex ones.
To indicate which parts of the formula should take precedence, you can use parentheses. Otherwise, the order of precedence from highest to lowest is as follows:
- Negation (not)
- Conditional formula (if … then … else)
- Conjunction (and)
- Disjunction (or)
- Implication (implies)
For example,
A and B implies C or D is equivalent to
(A and B) implies (C or D).
Similarly,
A and not if B then C else D is equivalent to
A and (not (if B then C else D)).
Note that the parentheses in the above examples are not necessary, since they highlight the default precedence. You usually only add parentheses to override the default precedence, but you can also add them to make your code easier to read (even if they aren’t required).
The logical connectives in QL work similarly to Boolean connectives in other programming languages. Here is a brief overview:
8.7.1. not¶
You can use the keyword
not before a formula. The resulting formula is called a negation.
not A holds exactly when
A doesn’t hold.
Example
The following query selects files that are not HTML files.
from File f where not f.getFileType().isHtml() select f
Note
You should be careful when using
not in a recursive definition, as this could lead to
non-monotonic recursion. For more information, see the section on Non-monotonic recursion.
8.7.2. if … then … else¶
You can use these keywords to write a conditional formula. This is another way to simplify
notation:
if A then B else C is the same as writing
(A and B) or ((not A) and C).
Example
With the following definition,
visibility(c) returns
"public" if
x is
a public class and returns
"private" otherwise:
string visibility(Class c){ if c.isPublic() then result = "public" else result = "private" }
8.7.3. and¶
You can use the keyword
and between two formulas. The resulting formula is called a
conjunction.
A and B holds if, and only if, both
A and
B hold.
Example
The following query selects files that have the
js extension and contain fewer
than 200 lines of code:
from File f where f.getExtension() = "js" and f.getNumberOfLinesOfCode() < 200 select f
8.7.4. or¶
You can use the keyword
or between two formulas. The resulting formula is called a
disjunction.
A or B holds if at least one of
A or
B holds.
Example
With the following definition, an integer is in the class
OneTwoThree if it is equal to
1,
2, or
3:
class OneTwoThree extends int { OneTwoThree() { this = 1 or this = 2 or this = 3 } ... }
8.7.5. implies¶
You can use the keyword
implies between two formulas. The resulting formula is called an
implication. This is just a simplified notation:
A implies B is the same as writing
(not A) or B.
Example
The following query selects any
SmallInt that is odd, or a multiple of
4.
class SmallInt extends int { SmallInt() { this = [1 .. 10] } } from SmallInt x where x % 2 = 0 implies x % 4 = 0 select x
Footnotes | https://help.semmle.com/QL/ql-handbook/formulas.html | CC-MAIN-2018-51 | refinedweb | 1,780 | 65.42 |
NAME
RSA_private_encrypt, RSA_public_decrypt - low-level signature operations
SYNOPSIS
#include <openssl/rsa.h> int RSA_private_encrypt(int flen, unsigned char *from, unsigned char *to, RSA *rsa, int padding); int RSA_public_decrypt(int flen, unsigned char *from, unsigned char *to, RSA *rsa, int padding);
DESCRIPTION
These functions handle RSA signatures at a low-level._PKCS1_PADDING
PKCS #1 v1.5 padding. This function does not handle the algorithmIdentifier specified in PKCS #1. When generating or verifying PKCS #1 signatures, RSA_sign(3) and RSA_verify(3) should be used.
- RSA_NO_PADDING
Raw RSA signature. This mode should only be used to implement cryptographically sound padding modes in the application code. Signing user data directly with RSA is insecure..)
Licensed under the OpenSSL license (the "License"). You may not use this file except in compliance with the License. You can obtain a copy in the file LICENSE in the source distribution or at. | https://www.openssl.org/docs/man1.1.1/man3/RSA_private_encrypt.html | CC-MAIN-2022-33 | refinedweb | 144 | 50.63 |
Recently I have been posting on the Arduino Developers email list, hoping to come up with a proposal for minor enhancements to the SPI interface that would allow users to more easily get more throughput on the SPI bus... As always, this has been interesting.
I have also experimented with SPI on a few different platforms to again get a better idea.
More on that later.
Right now playing with SPI on the Teensy LC. Was curious if it makes sense to have the buffer transfers try using 16-bit writes when possible, like I did for the 3.x boards. And again it might? That is, with 8-bit writes per entry I am getting something like a .24us gap between bytes (with the double buffering working). With the 16-bit writes, I am still getting around a .24us gap between words, and something in the nature of .16-.2us between the bytes of a word. So it gains a little.
With a quick and dirty test that output 128 bytes I was seeing:
If I do simple one-byte outputs, SPI.transfer(buffer[i]): it took about 300us to output the buffer
If I use SPI.transfer16((buffer[i] << 8) | buffer[i + 1]): it took 270us
If I use the current SPI.transfer(buffer, 128): 200us
SPI.transfer(buffer, 128) using 16-bit writes: 188us
What I am trying to see next is whether it makes sense to try to utilize the (maybe 64-bit?) FIFO on SPI on the LC. Has anyone tried this? Does this chip support it?
Trying it in a test program, like:
Code:
void transfer_lc_fifo(const void * buf, void * retbuf, uint32_t count) {
    if (count == 0) return;
    const uint8_t *p = (const uint8_t *)buf;
    uint8_t *pret = (uint8_t *)retbuf;
    uint8_t in;
    uint32_t count_in = count;
    Serial.println("Try fifo");
    Serial.printf("%x %x %x \n", SPI0_S, SPI0_C1, SPI0_C2);
    Serial.flush();
    Serial.printf("%x %x\n", SPI0_C3, SPI0_CI);
    Serial.flush();
    KINETISL_SPI0.C3 |= SPI_C3_FIFOMODE;
    Serial.println("fifo turned on");
    Serial.flush();
    KINETISL_SPI0.S;   // Read in status
    while (count_in) {
        if (count && !(KINETISL_SPI0.S & SPI_S_TXFULLF)) {
            // we have more characters to output and fifo not full
            KINETISL_SPI0.DL = p ? *p++ : 0;
            count--;
        }
        // note the parentheses: plain !S & FLAG would parse as (!S) & FLAG
        while (!(KINETISL_SPI0.S & SPI_S_RFIFOEF)) {
            // There is data available in fifo queue so extract
            in = KINETISL_SPI0.DL;
            if (pret) *pret++ = in;
            count_in--;
        }
    }
    KINETISL_SPI0.C3 = 0;  // turn off FIFO
    KINETISL_SPI0.S;       // Read in status
}
And when I call this function, the "Try fifo" message shows up in the debug terminal. The printing of S, C1 and C2 works (20, 50, 0).
The printing of CI or C3 does not work. Before I added these prints of the registers, the attempt to turn on FIFO mode (setting C3) appeared to die, as again the print after it fails.
So again I wonder if anyone has used the FIFO? I am mainly looking at the FIFO in case I attempt to add an async version of transfer as suggested on the email list:
SPI.transfer(txbuf, rxbuf, cnt, call back)
In these cases it would be nice to be able to load up the queues and minimize interrupts.
Kurt
I don't think SPI0 on KL26 has a FIFO. The manual only mentions a FIFO for SPI1.
Thanks, That is what I wondered. Where in the manual(s)? I was looking but for some reason I missed it... Will look again.
Thanks again!
Update: Still did not find it in documents. But experimented using SPI1 and have the fifo queue working, with the test case.
It was also good for testing some more issues with the main thing of this thread which is to have a version of SPI library that all of the objects are of the class SPIClass. Ran into issue that SPI1 was not using the right br value out of settings, which I now fixed.
Last edited by KurtE; 04-21-2017 at 03:05 AM.
KL26 Sub-Family Reference Manual.
3.9.2.1 SPI instantiation information
The number of SPI modules on this device is: two
The supported data length is: 16-bit
SPI1 includes a 4-deep FIFO of 16-bit entries.
...
Thanks,
I somehow missed that section in the manual
... (Again)
I mainly looked through chapter 37 and then the other TLC manual about electrical... What was interesting was that section 37.3 memory map for it showed addresses for SPI0_CL and SPI0_C3, however if you touch the addresses the CPU faults...
Thanks again.
So the question will be: if we add async SPI support into the main library, as suggested by those who are working on the STM32 SPI library, should I first attempt to do it using FIFO interrupts, or should it be set up to use DMA under the hood...
Should there be maximum sizes allowed for these? For example, when doing the DMA stuff for the ILI9341 code (which borrowed some of Frank's code), I think each DMASetting was set up to do a maximum of a 64K transfer; not sure if this is a hardware limit or not...
Back to playing and thanks again!
So I thought I would now try adding the ASYNC support. First thought is to try to get DMA to work. I probably should start with the T3.x boards as I know them a bit better, but I started with T-LC
So I wrote a quick and dirty prototype of function, and I tried specifically for SPI1. And when my code tries to initialize the DMA stuff it faults/hangs very early on...
The code dies in the call: _dmaTX->destination(KINETISL_SPI1.DL);

Code:
//=========================================================================
// Try Transfer using DMA.
//=========================================================================
DMAChannel *_dmaTX = NULL;
DMAChannel *_dmaRX = NULL;
uint8_t _dma_state = 0;
uint8_t _dma_dummy_tx = 0;
uint8_t _dma_dummy_rx;
void (*_dma_callback)();

void _dma_rxISR(void) {
    Serial.println("_dma_rxISR");
    _dmaRX->clearInterrupt();
    KINETISL_SPI1.C2 = 0;
    _dmaTX->clearComplete();
    _dmaRX->clearComplete();
    _dma_state = 1;  // set back to 1 in case our call wants to start up dma again
    if (_dma_callback) (*_dma_callback)();
}

bool transfer_lc_dma(const void * buf, void * retbuf, uint32_t count, void (*callback)(void)) {
    if (!_dma_state) {
        Serial.println("First dma call");
        _dmaTX = new DMAChannel();
        _dmaTX->destination(KINETISL_SPI1.DL);
        // _dmaTX->destination(_dma_dummy_tx);
        Serial.println("TAD"); Serial.flush();
        _dmaTX->triggerAtHardwareEvent(DMAMUX_SOURCE_SPI1_TX);
        Serial.println("TAT"); Serial.flush();
        _dmaTX->disableOnCompletion();
        Serial.println("TDOC"); Serial.flush();
        _dmaTX->disable();
        Serial.println("TDEST"); Serial.flush();

        _dmaRX = new DMAChannel();
        _dmaRX->disable();
        Serial.println("RDIS"); Serial.flush();
        _dmaRX->source(KINETISL_SPI1.DL);
        Serial.println("RDEST"); Serial.flush();
        _dmaRX->disableOnCompletion();
        _dmaRX->triggerAtHardwareEvent(DMAMUX_SOURCE_SPI1_RX);
        _dmaRX->attachInterrupt(_dma_rxISR);
        _dmaRX->interruptAtCompletion();
        _dma_state = 1;  // Should be first thing set!
        Serial.println("end First dma call");
    }
    if (_dma_state == 2) return false;  // already active

    // Now handle NULL pointers.
    if (buf) {
        _dmaTX->sourceBuffer((uint8_t*)buf, count);
    } else {
        _dmaTX->source(_dma_dummy_tx);   // maybe have setable value
        _dmaTX->transferCount(count);
    }
    if (retbuf) {
        _dmaRX->destinationBuffer((uint8_t*)retbuf, count);
    } else {
        _dmaRX->destination(_dma_dummy_rx);  // NULL ?
        _dmaRX->transferCount(count);
    }
    _dma_callback = callback;

    // Now try to start it? Setup DMA main object
    //Serial.println("Setup _dmatx");
    Serial.println("Before DMA C1");
    KINETISL_SPI1.C1 &= ~(SPI_C1_SPE);
    Serial.println("Before DMA C2");
    KINETISL_SPI1.C2 |= SPI_C2_TXDMAE | SPI_C2_RXDMAE;
    Serial.println("Before RX enable");
    _dmaRX->enable();
    Serial.println("Before TX enable");
    _dmaTX->enable();
    _dma_state = 2;
    Serial.println("DMA end of call");
    return true;
}
My looking through the DMAChannel code looks like it should properly set access to these parts of memory.
So I hacked up the DMA channel code and tried to verify where it was dying. I also rearranged the code to see if I could at least touch the CFG...
Currently looks like:
Code:
void destination(volatile signed char &p) { destination(*(volatile uint8_t *)&p); }
void destination(volatile unsigned char &p) {
    Serial.printf("D: %x %x\n", (uint32_t)&p, (uint32_t)CFG); Serial.flush();
    Serial.printf("%x %x %x %x\n\r", (uint32_t)CFG->SAR, (uint32_t)CFG->DAR, CFG->DSR_BCR, CFG->DCR); Serial.flush();
    CFG->DCR = (CFG->DCR & 0xF0F0F0FF) | DMA_DCR_DSIZE(1);
    Serial.println("set DCR"); Serial.flush();
    CFG->DAR = (void*)&p;
    Serial.println("set DAR"); Serial.flush();

And my output in the debug terminal:
Code:
First dma call
D: 40077006 40008100
0 0 0 20000000
set DCR

So it is dying on the line: CFG->DAR = (void*)&p;
The addresses look valid to me: that is, 40077006 is the address of the DL register and 40008100 is the address of the first DMA channel.
Suggestions?
I then tried running the DMASPI library example program for SPI0 as well as a copy of it where I tried on SPI1. They also both appear to die at the same place. Potentially it might be my updated SPI library? I did have to slightly modify the Dmaspi library as both SPI and SPI1 use the same class.
I did verify that the Example would run on a T3.6...
Question: Has anyone tried DMA SPI on T-LC lately? I probably should pull out a secondary dev machine and see if I can get it to work with currently released 1.8.2 Arduino with released SPI and current Teensyduino.
But again would really welcome suggestions... Will probably adapt this test to the T3.x boards while hopefully get some hints.
Thanks!
Update: Still curious about T-LC in previous post, but right now playing with a 3.x version and making progress.
It is actually going through and outputting and calling my callback. Which is good. Will then add it to my main SPI implementation for Async support.
There are a couple of issues I am trying to figure out, what makes sense:
T3.x
a) If your last SPI transfer was using 16 bit mode, your transfer will continue in 16 bit mode. That is, since you are only touching the low word of the PUSHR register, it keeps what was previously in the high word, which has things like the CS settings, the CONT flag, plus the setting for 8 bit or 16 bit transfer. This is not unique to my test code; the DMASPI test program has it as well, for example.
I added the transfer16 and all of the writes after this point were 16 bits (a zero byte was added to each transfer).

Code:
DMASPI0.begin();
DMASPI0.start();

// Wonder what happens if I do a SPI.transfer16()
SPI.transfer16(0xffff);

DmaSpi::Transfer trx(nullptr, 0, nullptr);
Serial.println("Testing src -> dest, single transfer");
Serial.println("--------------------------------------------------");
trx = DmaSpi::Transfer(src, DMASIZE, dest);
clrDest((uint8_t*)dest);
DMASPI0.registerTransfer(trx);
while (trx.busy()) { }
How to fix?
1) Punt - Maybe ok in some cases, but ... I would not like to in generic SPI library.
2) push/pop without DMA the first byte, such that it sets up the proper state. I did this with my DMA code for ili9341_t3n. Works OK in a specific case, but lots of stuff to figure out.
3) Maybe create a DMASettings chain on the TX where the first item is setup to PUSHR 32 bits with proper first byte plus settings and then chains to 2nd item that does the rest.. Several cases to think about - But...
b) T3.5 - Cannot use DMA support on T3.5 for SPI1/SPI2... Could do it using interrupts; we have a FIFO, but it is only one item deep, so it won't be much of a win.
Well now back to playing around
Again suggestions hoped for.
The problem is, DMAChannel is broken on Teensy LC. In some cases, GCC generates byte accesses to 'CFG->DAR', which will cause the CPU to hang.
Thanks tni,
Thanks that is helping on the LC.
I am still trying to figure some of the stuff out... But it is outputting something, just not fully right, and as such it is not getting to the end-of-transfer stuff.
In particular, when I look at the output using a logic analyzer I see it output bytes like:
00 80 01 81 02 82... when the array was initialized 00 01 02 ... 7f. So I am guessing that some of my/DMAChannel stuff is not setting up sizes properly.
Thought I would mention that I updated my version of the SPI library ()
This update I believe now has some DMA Async support added for the LC, plus a few fixes for the 3.x version of the DMA Async support.
There are still a few more issues I need to iron out. But a summary of what is in this version of the library.
All of the SPI objects are derived from SPIClass (like many of the other implementations). I did it similar to what Paul did for Wire and have a hardware table created for the differences between objects.
So hopefully in future we can have more libraries updated that can handle multiple SPI busses without having to do a bunch of work.
To help with this I added members that work with the same table, as we would use in, let's say, setMiso.
I added another function: pinIsMISO so you can verify for all of the different busses and boards.
The Transfer methods have been enhanced: Still have:
transfer(b)
transfer16(w)
transfer(buf, cnt) - It overwrites your buffer.
New stuff:
transfer(txbuf, rxbuf, cnt) // separate rx and tx either can be null.
(added member transferFillTXChar to set the TX char used if txbuf is null; default 0)
Async support - as suggested on email list
bool transfer(const void *txBuffer, void *rxBuffer, size_t count, void(*callback)(void));
void flush(void);
bool done(void);
Currently the Async is implemented using DMA. Have version for LC and version for 3.x
Issues to address:
On 3.x - Currently assuming 1 byte transfers. Issue if last PUSHR set special things like 16 bit mode or CS pins or... Need to address. I think I may try some/all of:
a) Detect I am in 16 bit mode and maybe detect that I am doing an even count output, then continue in 16 bit mode.
b) Switch to 8 bit mode. Maybe have two SPISetting chain where first byte is output doing a 4 byte write, which sets fully PUSHR register and then chain it to second item that does the rest.
c) Have the Async output first char sync if necessary...
Teensy 3.5 - SPI1/SPI2 - Don't have unique DMA TX/RX setup. Not sure if they work in the Read only or Write only case. Things to try.
a) Punt - have it not work on these
b) See if it works for only Read or only Write?
c) Implement instead using FIFO interrupts - But again these queues are only 1 entry in size, so would need to do interrupt for each input or output.
Another thing I want to decide on: I had a suggested implementation for the Teensy LC for the Transfer(txbuf, rxbuf, cnt) that broke up the code into three separate sections, READ, WRITE, TRANSFER, so as to avoid an if or two in the loop. They also special cased the instances where count = 1 or count = 2 and did special code for these cases.
What I am wondering is does it make enough sense to have three copies of the above code, to remove those two if statements in red for the three different versions?
But warning: On Teensy LC, you need the patch to get DMA to work with current release
Quote:
They also special cased the instances where count = 1 or count = 2 and did special code for these cases.

Probably a very rare use case.
You do. You have 2 transmit bytes scheduled, one in the output shift register and one in port.DL. If you get interrupted, you lose a received byte (only one can be stored). transfer() hangs when a byte is lost.
Quote:
What I am wondering is does it make enough sense to have three copies of the above code, to remove those two if statements in red for the three different versions?

Yes. You should also remove the entire

Code:
while (!(port.S & SPI_S_SPRF)) ; // wait
in = port.DL;
__enable_irq();
if (pret) *pret++ = in;

for the transmit-only case - no need to pick up the received bytes. Just empty port.DL at the very end.
Thanks TNI,
Makes sense for the disabling interrupts.
As for the write only case, yes it could remove the stuff you mentioned. I assume also the disable_irq...
Logically, something in the nature of:

But then we need some way to synchronize the return from this function until the last bits have been output. The version that was suggested still had code in the main loop waiting for each character to be received. It also did not handle the interrupt case you mentioned.

Code:
void SPIClass::write(const void * buf, uint32_t count) {
    if (count == 0) return;
    const uint8_t *p = (const uint8_t *)buf;
    while (count-- > 0) {
        while (!(port.S & SPI_S_SPTEF)) ; // wait until we can queue the next byte
        port.DL = *p++;
    }
    // Now need to somehow wait until the last character was output and then read in DL to clear.
}
So for now I will probably keep it as one version. But may play more later.
Thanks again!
Here is a version that is very fast without code duplication. It does 10.60Mbit. (An optimal transmit-only version can do 11.3Mbit.)
Code:
template<bool send, bool receive>
__attribute__((always_inline))
void SPIClass::transfer_(const uint8_t* send_ptr, uint8_t* recv_ptr, uint32_t count) {
    const uint8_t _transferFillTXChar = 0;
    auto& __restrict port = this->port;
    uint8_t dummy;
    while(!(port.S & SPI_S_SPTEF)) ; // wait
    port.DL = send ? *send_ptr++ : _transferFillTXChar;
    while(--count > 0) {
        while(!(port.S & SPI_S_SPTEF)) ; // wait
        __disable_irq();
        port.DL = send ? *send_ptr++ : _transferFillTXChar;
        while(!(port.S & SPI_S_SPRF)) ; // wait
        *(receive ? recv_ptr++ : &dummy) = port.DL;
        __enable_irq();
    }
    while(!(port.S & SPI_S_SPRF)) ; // wait
    *(receive ? recv_ptr++ : &dummy) = port.DL;
}

void SPIClass::transfer(const void* buf, void* retbuf, uint32_t count) {
    if(count == 0) return;
    const uint8_t* send_ptr = (const uint8_t*) buf;
    uint8_t* recv_ptr = (uint8_t*) retbuf;
    if(send_ptr) {
        if(recv_ptr) transfer_<true, true>(send_ptr, recv_ptr, count);
        else         transfer_<true, false>(send_ptr, recv_ptr, count);
    } else {
        if(recv_ptr) transfer_<false, true>(send_ptr, recv_ptr, count);
        else         transfer_<false, false>(send_ptr, recv_ptr, count);
    }
}
Last edited by tni; 04-26-2017 at 04:01 PM. Reason: Fixed. Posted wrong version.
Quick update:
Thought maybe better to continue in this thread instead of the Teensy 3.5 DMA SPI thread as not all specific to 3.5...
As I mentioned in the other thread, I went around in circles getting the DMA to work on the three 3.x boards as each one behaved slightly (or majorly) different. Like T3.5 on SPI1 and SPI2 only has one DMA channel...
Today I thought I would play with the Teensyview displays, to see if I can get multiple of them to work both SYNC and ASYNC.
As I mentioned in the other thread, I wondered if it made sense to allow the callback function to maybe have an optional parameter, like maybe a void* that can be passed to it, that gets passed back. I thought I would also ask on the Arduino mail list and received a few responses that were against it....
So I did a quick and dirty version of the Teensyview display update using Async. Right now I have it setup that I have three static call back functions in the class (one for each possible SPI), and the object on the first time it wishes to do async update, figures out which one to use and saves away it's this pointer in a static data member, such that when this callback is called it then uses the this pointer to call appropriate function for that class object.
This has several holes... Like what if I want two of these displays on the same SPI buss and wish to do async on both... Several solutions could be done, like when the last DMA output completes and the callback detects the display has completed, it could clear the this pointer and other instance could spin waiting for this to happen and then it claims the this pointer for that SPI...
Or could: build a simple Queue of outputs: (this, output buffer pointer, count, <maybe state change info: start trans, assert dc, unassert dc, end transaction). The code would need to be such that you do not intermix the transactions... ...
Anyway Currently I have three displays hanging off my T3.6 beta board (2 TV, 1 128x64) and I have a 3rd TV that I can also connect up to maybe have it try running through the TeensyView Screen demo program, where it does each of the main sections, one Teensyview at a time, So it first does Shapes on display0 then does shapes on display1 and then shapes on display2...
This works on all three displays each on different buss. So the multiple busses stuff is working here.
I then added a couple of draw screen functions from the ili9341 like update (draw Rectangles and draw circles pages and called it for all three displays without delays. Actually I alternated between the two displays 5 times with a small delay before it changed all three screens again and they all updated pretty quick.
Then I made the two functions take a parameter to optionally do the updates using the Async code. So far it appears like it worked. Again the updates are so quick, that may be hard to see if there are any visual differences. That is with the standard way, how much of a gap in time after I display the first screen before the next screen updates...
Tomorrow I may hack in a way to handle multiple displays async on the same buss... And Try the 4th display on the processor. Also may move the code to simpler test app, that includes more of the ili9341 graphic test like tests. Just not sure how many of the tests make sense on a monochrome display.
If anyone is interested could also upload test app and hacked up Teensyview. As I mentioned on other thread my one branch of my SPI fork was updated yesterday.
If you can't attach state to a callback, you can't have clean object oriented design that supports multiple instances of the same class (e.g. a Display class that uses SPI). If you have a 'void*' parameter, you can easily use that to dispatch to the correct instance.
Last edited by tni; 05-03-2017 at 02:54 PM.
Building consensus on the mail list is almost impossible. Even when things do go well, it can take a year or more of sustained effort to get any sort of substantial API adopted.
the developer mailing list is broken.
Thomas Roell claimed that 3 data entries would be needed for storing the callback (callback function pointer, this, data). A C-style function pointer + 'void*' context is quite enough for dispatching to an object. E.g.:
Code:
class S {
public:
    void callback();
};

void registerCallback(void (*callback_fn)(void*), void* context) {
    // ...
}

void test(S* s) {
    // ...
    registerCallback([](void* ctx){ ((S*) ctx)->callback(); }, s);
}

For dispatching to an object member with parameter(s), the context could be a struct containing the necessary data.
Glancing at the STM32L4 core, the PendSV dispatching that Thomas mentioned does support a 'void*' parameter for callbacks.
That would be very useful indeed. I'm currently storing a pointer to "s" in a global variable to get the same effect but that is kind of ugly. With the proposed default value of nullptr it should not break any existing code.
With a sufficiently smart compiler and sufficient template magic, you should be able to create a wrapper that "transparently" allows you to use either SPI object with whatever library, and not pay any indirection overhead.
Of course, the drawback being that the library needs to be updated to support this.
Thanks guys,
@tni - have you tried email to the address mentioned toward the end of the arduino forum posting you mentioned? Wonder if it worked?
@jwatte - Maybe at some point I should play more with templates... They are something that I have never used as it was not part of C back when I first learned (ice age)
I meant to respond yesterday, but was busy outside, and now playing with my Teensyview code to test out the Async SPI code, and trying to figure out why things are not responding the way I would expect... I need to test it out in a simple test case and see what happens... In particular setting up to do a new Async transfer within the callback. Also handling multiple SPI transfers at the same time.
Something like:
So basically testing out having the callback be called and issue a new request, plus multiple DMA requests going on at the same time.

Code:
uint8_t buffer1_1[6];
uint8_t buffer1_2[512];
volatile uint8_t state1 = 0;
uint8_t buffer2_1[6];
uint8_t buffer2_2[512];
volatile uint8_t state2 = 0;
uint8_t buffer3_1[6];
uint8_t buffer3_2[512];
volatile uint8_t state3 = 0;

void callback1() {
    if (state1 == 1) {
        SPI.transfer(buffer1_2, NULL, sizeof(buffer1_2), &callback1);
        state1 = 2;
    } else {
        state1 = 0;
    }
}

// ... Same for 2 and 3, but using SPI1 and SPI2...

void loop() {
    while (state1 != 0) ;
    state1 = 1;
    SPI.transfer(buffer1_1, NULL, sizeof(buffer1_1), &callback1);

    while (state2 != 0) ;
    state2 = 1;
    SPI1.transfer(buffer2_1, NULL, sizeof(buffer2_1), &callback2);

    while (state3 != 0) ;
    state3 = 1;
    SPI2.transfer(buffer3_1, NULL, sizeof(buffer3_1), &callback3);
}
I thought I had it working earlier yesterday, until I figured out that Teensyview was doing screen updates in a more difficult way than it needed to.
That is it was doing something like:
[header for page 0 (4 bytes)][data for page 0 (128 bytes)][header for page 1][data for page 1]....[header for page 3][data for page 3] (or 7 for 128x64)
But looking at Adafruit SSD1306, I found you could turn on horizontal memory mode, which when you filled page 0, would automatically advance to page 1... So once I did that the logical update is simply:
[header 6 bytes][data 512 bytes] or for 128x64 - 1024 bytes...
Hopefully today will figure it out... But it is sunny
Another quick update: I am probably running into a timing issue. I broke the draw function out on its own and tried with just one of them, and it hung after the first update... So I then ripped out all of the Teensyview code and just left in the logical draw functions, just going to use SPI... And with only one it was running... Then enabled the second one and it hangs...
Actually it appears to have issues if the count of bytes >=512 Works ok with 511...
In case anyone wishes to play along. Again this uses my other branch/fork of SPI.
Code:
#include <SPI.h>

///////////////////////////////////
// TeensyView Object Declaration //
///////////////////////////////////
#define USE_SPI1
//#define USE_SPI2

// Kurt's setup
#define PIN_RESET 15
#define PIN_SCK   13
#define PIN_MOSI  11
#define PIN_DC    21
#define PIN_CS    20

// Setup 2nd one SPI1
#define PIN_RESET1 16
#define PIN_SCK1   32
#define PIN_MOSI1  0
#define PIN_DC1    31
#define PIN_CS1    30

// Pins on connector on Beta T3.6 board (3.3, GND)(48, 47)(57 56) (51 52) (53 55)
#define PIN_RESET2 48
//#define PIN_MISO2 51
#define PIN_MOSI2  52
#define PIN_SCK2   53
#define PIN_DC2    55
#define PIN_CS2    56

// This is real c..p but a simple test to see how multiple DMA
// transfers work when called from a callback
uint8_t header1[] = {0x21, 0, 0x7f, 0x22, 00, 03};
uint8_t header2[] = {0x21, 0, 0x7f, 0x22, 00, 07}; // larger display.
uint8_t header3[] = {0x21, 0, 0x7f, 0x22, 00, 03};

#define BUFFER1_SIZE 511
#define BUFFER2_SIZE 250
#define BUFFER3_SIZE 128
uint8_t buffer1[BUFFER1_SIZE];
uint16_t buffer1_size = sizeof(buffer1);
volatile uint8_t state1 = 0;
uint8_t buffer2[BUFFER2_SIZE];
uint16_t buffer2_size = sizeof(buffer2);
volatile uint8_t state2 = 0;
uint8_t buffer3[BUFFER3_SIZE];
uint16_t buffer3_size = sizeof(buffer3);
volatile uint8_t state3 = 0;

void setup() {
  while (!Serial && millis() < 3000) ;
  Serial.begin(38400);
  SPI.begin();
  pinMode(PIN_CS, OUTPUT);
  pinMode(PIN_DC, OUTPUT);
  digitalWrite(PIN_CS, HIGH);
  digitalWrite(PIN_DC, HIGH);
#ifdef USE_SPI1
  SPI1.begin();
  pinMode(PIN_CS1, OUTPUT);
  pinMode(PIN_DC1, OUTPUT);
  digitalWrite(PIN_CS1, HIGH);
  digitalWrite(PIN_DC1, HIGH);
#else
  pinMode(PIN_CS1, OUTPUT); // use to debug
  digitalWrite(PIN_CS1, HIGH);
#endif
#ifdef USE_SPI2
  SPI2.begin();
  pinMode(PIN_CS2, OUTPUT);
  pinMode(PIN_DC2, OUTPUT);
#endif
  delay(1000); // Delay 1000 ms
}

void callback1() {
  if (state1 == 1) {
    state1 = 2;
    digitalWriteFast(PIN_DC, HIGH);
    if (!SPI.transfer(buffer1, NULL, buffer1_size, &callback1)) {
      Serial.println("SPI Transfer failed");
    }
  } else {
    state1 = 0;
    digitalWriteFast(PIN_CS, HIGH);
    SPI.endTransaction();
  }
}

#ifdef USE_SPI1
void callback2() {
  if (state2 == 1) {
    state2 = 2;
    digitalWriteFast(PIN_DC1, HIGH);
    SPI1.transfer(buffer2, NULL, buffer2_size, &callback2);
  } else {
    state2 = 0;
    digitalWriteFast(PIN_CS1, HIGH);
    SPI1.endTransaction();
  }
}
#endif

#ifdef USE_SPI2
void callback3() {
  if (state3 == 1) {
    state3 = 2;
    digitalWriteFast(PIN_DC2, HIGH);
    SPI2.transfer(buffer3, NULL, buffer3_size, &callback3);
  } else {
    state3 = 0;
    digitalWriteFast(PIN_CS2, HIGH);
    SPI2.endTransaction();
  }
}
#endif

uint8_t loop_counter = 0;

void loop() {
  elapsedMillis timer;
  loop_counter++;
#ifndef USE_SPI1
  digitalWrite(PIN_CS1, !digitalRead(PIN_CS1));
#endif
  timer = 0;
  while (state1 != 0) {
    if (timer > 10) {
      Serial.printf("Timeout SPI: %d %x %x\n", state1, header1, buffer1);
      Serial.printf("  TX: C:%d, err: %d, S: %x, D: %x\n", SPI._dmaTX->complete(), SPI._dmaTX->error(),
                    SPI._dmaTX->sourceAddress(), SPI._dmaTX->destinationAddress());
      Serial.printf("  RX: C:%d, err: %d, S: %x, D: %x\n", SPI._dmaRX->complete(), SPI._dmaRX->error(),
                    SPI._dmaRX->sourceAddress(), SPI._dmaRX->destinationAddress());
      Serial.println("Hit any key to continue");
      while (Serial.read() == -1) ;
      while (Serial.read() != -1) ;
      break;
    }
  }
  Serial.println("Start 1");
  state1 = 1;
  memset(buffer1, (loop_counter & 1) ? 0xff : 0, buffer1_size);
  SPI.beginTransaction(SPISettings(8000000, MSBFIRST, SPI_MODE0));
  digitalWriteFast(PIN_CS, LOW);
  digitalWriteFast(PIN_DC, LOW);
  SPI.transfer(header1, NULL, sizeof(header1), &callback1);

#ifdef USE_SPI1
  timer = 0;
  while (state2 != 0) {
    if (timer > 10) {
      Serial.printf("Timeout SPI1: %d %x %x\n", state2, header2, buffer2);
      Serial.printf("  TX: C:%d, err: %d, S: %x, D: %x\n", SPI1._dmaTX->complete(), SPI1._dmaTX->error(),
                    SPI1._dmaTX->sourceAddress(), SPI1._dmaTX->destinationAddress());
      Serial.printf("  RX: C:%d, err: %d, S: %x, D: %x\n", SPI1._dmaRX->complete(), SPI1._dmaRX->error(),
                    SPI1._dmaRX->sourceAddress(), SPI1._dmaRX->destinationAddress());
      Serial.println("Hit any key to continue");
      while (Serial.read() == -1) ;
      while (Serial.read() != -1) ;
      break;
    }
  }
  Serial.println("Start 2");
  state2 = 1;
  memset(buffer2, (loop_counter & 1) ? 0x0 : 0xff, buffer2_size);
  SPI1.beginTransaction(SPISettings(8000000, MSBFIRST, SPI_MODE0));
  digitalWriteFast(PIN_CS1, LOW);
  digitalWriteFast(PIN_DC1, LOW);
  SPI1.transfer(header2, NULL, sizeof(header2), &callback2);
#endif

#ifdef USE_SPI2
  while (state3 != 0) ;
  Serial.println("Start 3");
  state3 = 1;
  memset(buffer3, (loop_counter & 1) ? 0xff : 0, buffer3_size);
  SPI2.beginTransaction(SPISettings(8000000, MSBFIRST, SPI_MODE0));
  digitalWriteFast(PIN_CS2, LOW);
  digitalWriteFast(PIN_DC2, LOW);
  SPI2.transfer(header3, NULL, sizeof(header3), &callback3);
#endif
}

Which is interesting, as a little while ago my ili9341_t3n code used DMA to output a full screen, and there I use 3 DMASettings objects:
320*240/3 = 25600 words output, which is > 512... So need to figure out difference...
I think I figured it out:
Issue with DMAChannel.h
I have code that looks like:

Code:
_dmaRX->destination((uint8_t&)bit_bucket);
_dmaRX->transferCount(count);
rser = SPI_RSER_RFDF_RE | SPI_RSER_RFDF_DIRS | SPI_RSER_TFFF_RE | SPI_RSER_TFFF_DIRS;

Suppose I am doing 512 byte writes for the screen memory.
So I output my 6 byte header, then my 512 byte buffer, and then when I repeat I output my 6 byte header:
Now if we look at transferCount method:
Code:
void transferCount(unsigned int len) {
    if (len > 32767) return;
    if (len >= 512) {
        TCD->BITER = len;
        TCD->CITER = len;
    } else {
        TCD->BITER = (TCD->BITER & 0xFE00) | len;
        TCD->CITER = (TCD->CITER & 0xFE00) | len;
    }
}

I first call it with 6, and before this assume BITER = 0;
so BITER = (0 & 0xFE00) | 6 = 0x6
Then I call it with 0x200 (512):
so BITER = 0x200 (taking the len >= 512 branch)
Then I call again with 0x6:
(0x200 & 0xFE00) | 0x6 = 0x206, which is wrong...
The code should maybe be something like:
Code:
void transferCount(unsigned int len) {
    if (len > 32767) return;
    if (!(TCD->BITER & DMA_TCD_BITER_ELINK)) {
        TCD->BITER = len;
        TCD->CITER = len;
    } else {
        TCD->BITER = (TCD->BITER & 0xFE00) | len;
        TCD->CITER = (TCD->CITER & 0xFE00) | len;
    }
}

That is, if the LINK bit is not set then use the full 15 bits...
EDIT: That appears to make my app happy
Last edited by KurtE; 05-05-2017 at 02:25 AM. | https://forum.pjrc.com/threads/43048-How-best-to-manage-multiple-SPI-busses?s=0e66101b553c5d0a87c67c3e1b7fc936&p=140845 | CC-MAIN-2017-43 | refinedweb | 5,540 | 65.73 |
My solution for this would be to have a 'last login' field in the database and mark posts as 'new' that have a newer timestamp than the 'last login' field. This code isn't for that exact feature, but it gives the same idea.
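The 'last login' approach itself is just a timestamp comparison; a toy sketch of it (not the forum's actual schema, and in Python rather than PHP for brevity):

```python
from datetime import datetime, timedelta

# Toy sketch of the 'last login' approach described above (hypothetical schema).
def new_posts(posts, last_login):
    """Return the posts made after the user's previous login."""
    return [p for p in posts if p["created"] > last_login]

last_login = datetime(2024, 1, 1, 12, 0)
posts = [
    {"id": 1, "created": last_login - timedelta(hours=1)},  # seen before last login
    {"id": 2, "created": last_login + timedelta(hours=1)},  # posted since -> "new"
]
assert [p["id"] for p in new_posts(posts, last_login)] == [2]
```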
PHP Code:
function logip() {
    $ip = $_SERVER['REMOTE_ADDR'];
    $duration = 2;
    $old = mktime(date("G"), (date("i") - $duration), date("s"), date("n"), date("j"), date("Y"));
    mysql_query("UPDATE logs SET last='0' WHERE last<'".$old."'");
    $new = (int) mysql_num_rows(mysql_query("SELECT * FROM logs WHERE ip='".$ip."'"));
    if ($new == 0) {
        mysql_query("INSERT INTO logs VALUES('".$ip."', '".mktime()."', '".mktime()."')") or die(mysql_error());
    } else {
        mysql_query("UPDATE logs SET visit=last, last='".mktime()."' WHERE ip='".$ip."'") or die(mysql_error());
    }
    return (int) mysql_result(mysql_query("SELECT COUNT(*) FROM logs WHERE last>'0'"), 0);
}
Pull queues must be defined in queue.yaml. For more information, please see Defining Pull Queues on the Task Queue configuration page.
The following sections describe the process of enqueuing, leasing, and deleting tasks using pull queues.
- Pull queue overview
- Pulling tasks within App Engine
- Pulling tasks to a module
- An end-to-end example of pull queues
- Pulling tasks from outside App Engine
- Quotas and limits for pull queues
Pull queue overview
Pull queues allow a task consumer to process tasks outside of App Engine's default task processing system. If the task consumer is a part of your App Engine app, you can manipulate tasks using simple API calls from the
google.appengine.api.taskqueue module.
Before you begin, make sure to configure the pull queue in
queue.yaml.
Adding tasks to a pull queue
To add tasks to a pull queue, simply get the queue using the queue name defined in
queue.yaml, and set the Task
method to
PULL. The following example enqueues tasks in a pull queue named
pull-queue:
from google.appengine.api import taskqueue

q = taskqueue.Queue('pull-queue')
tasks = []
payload_str = 'hello world'
tasks.append(taskqueue.Task(payload=payload_str, method='PULL'))
q.add(tasks)
Leasing tasks
Once you have added tasks to a pull queue, you can lease one or more tasks using
lease_tasks(). To adjust the duration of a lease you already hold, use modify_task_lease().
Leasing a task makes it unavailable for processing by another worker, and it remains unavailable until the lease expires. If you lease an individual task, the API selects the task from the front of the queue. If no such task is available, an empty list is returned.
This method returns a list of Task objects leased from the queue.
from google.appengine.api import taskqueue

q = taskqueue.Queue('pull-queue')
tasks = q.lease_tasks(3600, 100)
Deleting tasks
In general, once a worker completes a task, it needs to delete the task from the queue. If you see tasks remaining in a queue after a worker finishes processing them, it is likely that the worker failed; in this case, the tasks need to be processed by another worker.
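The lease semantics described here (leased tasks invisible to other workers until the lease expires; deletion after successful processing) can be modeled in a few lines of plain Python. This is a toy illustration of the contract, not App Engine code:

```python
import time

class ToyPullQueue:
    """Minimal model of pull-queue lease semantics."""
    def __init__(self):
        self.tasks = {}    # task id -> payload
        self.leases = {}   # task id -> lease expiry time

    def add(self, task_id, payload):
        self.tasks[task_id] = payload

    def lease(self, num, duration, now=None):
        now = time.time() if now is None else now
        leased = []
        for tid in list(self.tasks):
            if len(leased) == num:
                break
            if self.leases.get(tid, 0) <= now:   # not currently leased
                self.leases[tid] = now + duration
                leased.append(tid)
        return leased

    def delete(self, task_id):
        self.tasks.pop(task_id, None)
        self.leases.pop(task_id, None)

q = ToyPullQueue()
q.add('t1', 'hello')
first = q.lease(100, 3600, now=0)     # leases 't1' until t=3600
second = q.lease(100, 3600, now=10)   # nothing available: lease unexpired
third = q.lease(100, 3600, now=4000)  # lease expired, 't1' re-leased
q.delete('t1')                        # done: remove it for good
```

Note how the crashed-worker case falls out naturally: if the first consumer never deletes the task, the expired lease makes it available again.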
Pulling tasks to a module
You can use App Engine Modules as workers to lease and process pull queue tasks. Modules allow you to process more work without having to worry about request deadlines and other restrictions normally imposed by App Engine. Using modules with pull queues gives you processing efficiencies by allowing you to batch task processing using leases.
For more information about using modules, check out the Modules documentation.
An end-to-end example of pull queues
For a simple but complete end-to-end example of using pull queues in Python, see appengine-pullqueue-counter.
Pulling tasks from outside App Engine
If you need to use pull queues from outside App Engine, you must use the Task Queue REST API. The REST API is a Google web service accessible at a globally-unique URI of the form:
Google provides the following client libraries that you can use to call the Task Queue methods remotely:
In the tables below, the first column shows each library's stage of development (note that some are in early stages), and links to documentation for the library. The second column links to available samples for each library.
These early-stage libraries are also available:
Prerequisites
The REST API uses OAuth as the authorization mechanism. When you configure your pull queue, make sure that your
queue.yaml file supplies the email addresses of the users that can access the queue using the REST API. The OAuth scope for all methods is.
Using the task queue REST API with the Python Google API library
This section demonstrates the use of the REST API in an application that allows you to interact with the REST API via the command line. It can continually grab tasks from a pull queue, and execute an arbitrary binary for each task that is pulled. It also supports sending the output of the binary to an arbitrary URL, using the Google APIs Client Library for Python to interact with the REST API. The command-line functions are based on the gflags Python library. The sections below show the Python code used to import the library and use it to lease and delete tasks. The final section describes how to implement scaling in your application.
Importing the Client Library for Python
To begin using the library, you need to install it in your local environment. After installation, you can import the appropriate client libraries and build the taskqueue service:
from apiclient.discovery import build

task_api = build('taskqueue', 'v1beta2')
Once you've built the task queue service, your application can access methods from the library allowing you to interact with the REST API. The following sections describe the two most common functions used with the Task Queue API, allowing you to lease and delete tasks.
Leasing Tasks
The Google APIs Client Library provides methods that invoke the
Tasks.lease method in the REST API. When you create a lease, you need to specify the number of tasks to lease (up to a maximum of 1,000 tasks); the API returns up to that number of tasks, in order of oldest task ETA.
You also need to specify the duration of the lease in seconds (up to a maximum of one week). The lease must be long enough to enable you to finish all the leased tasks, yet short enough that if your consumer crashes, the tasks will be available for lease by other clients relatively soon. Similarly, if you lease too many tasks at once and your client crashes, a large number of tasks will become unavailable until the lease expires.
You can specify a deadline, the amount of time to wait before aborting the
lease_tasks() method call.
The following code shows how to lease tasks using the library.
def _get_tasks_from_queue(self):
    """Gets the available tasks from the taskqueue.

    Returns:
      Lease response object.
    """
    try:
        tasks_to_fetch = self._num_tasks_to_lease()
        lease_req = self.task_api.tasks().lease(project=FLAGS.project_name,
                                                taskqueue=FLAGS.taskqueue_name,
                                                leaseSecs=FLAGS.lease_secs,
                                                numTasks=tasks_to_fetch,
                                                body={})
        result = lease_req.execute()
        return result
    except HttpError, http_error:
        logger.error('Error during lease request: %s' % str(http_error))
        return None
This code enables a command-line tool for leasing a specified number of tasks for a set duration:
gtaskqueue leasetask --project="gpullqueue1" \ --taskqueue_name=appengtaskpuller \ --lease_secs=30 \ --num_tasks=100
When run, this command-line tool constructs the following URI call to the REST API:
This request returns an array of 100 tasks with the following JSON structure:
{
  "kind": "taskqueues#tasks",
  "items": [
    {
      "kind": "taskqueues#task",
      "id": string,
      "queueName": string,
      "payloadBase64": string,
      "enqueueTimestamp": number,
      "leaseTimestamp": number
    }
    ...
  ]
}
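The payloadBase64 field is base64-encoded (the REST API may use the URL-safe alphabet; the round trip below uses the standard one purely for illustration):

```python
import base64

payload = b'hello world'
encoded = base64.b64encode(payload).decode('ascii')  # as it would travel in JSON
decoded = base64.b64decode(encoded)                  # consumer side
print(decoded == payload)  # True
```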
After processing each task, you need to delete it, as described in the following section.
Deleting Tasks
In general, once a worker completes a task, it needs to delete the task from the queue. If you see tasks remaining in a queue after a worker finishes processing, it is likely that the worker failed; in this case, the tasks need to be processed by another worker.
You can delete an individual task or a list of tasks using the REST method
Tasks.delete. You must know the name of a task in order to delete it. You can get the task name from the
id field of the
Task object returned by
Tasks.lease.
Call delete if you have finished a task, even if you have exceeded the lease time. Tasks should be idempotent, so even if a task lease expires and another client leases the task, performing the same task twice should not cause an error.
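Idempotency here just means that processing the same task twice leaves the same end state. A minimal sketch of a handler de-duplicated on task id (illustrative plain Python, not Task Queue API code):

```python
processed = set()
results = []

def handle(task_id, payload):
    """Process a task at most once, even if it is delivered twice."""
    if task_id in processed:   # duplicate delivery after a lease expired
        return
    processed.add(task_id)
    results.append(payload.upper())

handle('t1', 'hello')
handle('t1', 'hello')  # re-delivery is a safe no-op
print(results)  # ['HELLO']
```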
The following code snippet uses the Google APIs Client Library for Python to delete tasks from a queue:
def _delete_task_from_queue(self, task_api):
    try:
        delete_request = task_api.tasks().delete(project=FLAGS.project_name,
                                                 taskqueue=FLAGS.taskqueue_name,
                                                 task=self.task_id)
        delete_request.execute()
    except HttpError, http_error:
        logger.error('Error deleting task %s from taskqueue. Error details %s'
                     % (self.task_id, str(http_error)))
This code enables a command for naming a task to delete:
gtaskqueue deletetask --project_name="gpullqueue1" \ --taskqueue_name=appengtaskpuller \ --task_name=taskID
When run, this command constructs the following URI call to the REST API:
DELETE
If the delete command is successful, the API returns an HTTP 200 response. If deletion fails, the API returns an HTTP failure code.
Quotas and limits for pull queues
Enqueuing a task counts toward the following quotas:
- Task Queue Stored Task Count
- Task Queue API Calls
- Task Queue Stored Task Bytes
Leasing a task counts toward the following quotas:
- Task Queue API Calls
- Outgoing Bandwidth (if using the REST API)
The Task Queue Stored Task Bytes quota is configurable in
queue.yaml by setting total_storage_limit. This quota counts towards your Stored Data (billable) quota.
The following limits apply to the use of pull queues: | https://cloud.google.com/appengine/docs/python/taskqueue/overview-pull | CC-MAIN-2015-14 | refinedweb | 1,451 | 61.26 |
SQLObject was designed from the bottom up to make you faster, stronger, and better looking. Okay, probably not stronger, and definitely not better looking, but it will make you more productive. It's up to you what you do with the time you get back in your life when you learn to let SQLObject save you from tedious and error-prone tasks. Perhaps you could write more unit tests, or perhaps you could ask the new marketing manager out for a date. The choice is yours.
The following sections describe some of the cool productivity-enhancing features, shortcuts, and ease-of-use tricks SQLObject provides.
The main goal of SQLObject is to translate SQL into objects. That means that what you used to do with SQL before, you can do now with a Python object. It's an implementation of the ActiveRecord design pattern. ActiveRecords have a one-to-one relationship between model classes and tables in the database.
As previously discussed, you can define classes that derive from the base SQLObject and represent tables in the database. Rows in the table are represented by instances of your SQLObject derived class.
When you set attributes on an instance, it updates column values in this row and updates the DB. Actually, you can defer updates to the DB for performance reasons, but we get back to that idea later.
When you set an attribute on the class itself, you operate on the entire table. That includes managing DB connections and creating your DB schema, indexes, constraints, default values for columns, relationships between tables, CRUD operations, and sophisticated queries. In addition, you can control the behavior and interaction of your model classes with the underlying database.
Here's a quick glance over the offerings of SQLObject. We'll go over the nuts and bolts in detail later.
Every access to the database requires a DB connection. From the user's point of view, a connection is how you tell SQLObject to locate your database. It resembles a URI in the following format:
scheme://[user[:password]@]host[:port]/database[?parameters]
The connection is the only place where you specify the actual database you use. The rest of your data access code is totally DB-agnostic. Unfortunately, SQLObject doesn't provide a perfect abstraction; sometimes you need to consider the features of the underlying database you are using. Some databases don't support some features, or you might want to write a highly tuned query with database-specific features. With that said, SQLObject does a pretty darned good job, and you can write a large application using SQLite, and then switch to Postgres for production with no transition problems.
Here are a few sample database URLs:
mysql://user:pwd@/db
postgres://user:pwd@localhost:5432/db
sqlite:///full/path/to/db
sqlite:/C|full/path/to/db
sqlite:/:memory:
In TurboGears, you generally set this up in the dev.cfg file for your project, and it is then used by all the classes in your project. If you want more flexibility, however, there are multiple ways to specify the connection for a particular class. You can set the _connection class attribute, set the module's _connection_ variable (so that all classes in this module can share the same connection), or pass a connection object to the _init() method of an instance (to control the connection in a granular way per record). Probably, the easiest way is to use the sqlhub.processConnection that controls the connection for all SQLObject, in the current process. You can also have a connection per thread or custom connections.
Here's how you manually set up a connection to a SQLite database on Windows:
import os
from sqlobject import *

db_filename = os.path.abspath('test.db').replace(':\\', '|\\')
connection_uri = 'sqlite:/' + db_filename
connection = connectionForURI(connection_uri)
SQLite requires the full path to the database file, and on Windows you must replace the colon that follows the drive letter with a pipe (|). So, if your SQLite DB file is located in c:\db_dir\db_file.db the corresponding connection URI is sqlite:/c|\db_dir\db_file.db and in escaped form sqlite:/c|\\db_dir\\db_file.db. Note that this URI doesn't comply with URI format because there is only a single slash after the scheme. SQLite is a little different because it uses an actual filename for the connection. But don't let it distract you; other than the URI pointing to a file, the SQLite connection strings are exactly the same.
The connectionForURI() function creates a connection object. This connection object can now be used for declaring classes and accessing the DB. There are many ways to associate a connection with a model class. The simplest one is to assign one connection for the entire process. Every model class will automatically use this connection. Here is how it's done:
sqlhub.processConnection = connection
TurboGears provides a tg-admin sql create command, which we used to create the tables from the model class automatically.
But this is just a wrapper around functionality provided by SQLObject itself. This means that if you want to create tables programmatically from within your application, you can. Just define a class with some columns, call createTable(), and it will be created in the database. Here is how it's done:
class FighterRobot(SQLObject):
    name = StringCol()
    weapon_1 = StringCol()
    weapon_2 = StringCol()
    engine = StringCol(default='Basic engine')
    health = IntCol(default=100)

FighterRobot.dropTable(ifExists=True)  # drop previous definition if it exists
FighterRobot.createTable()
And here is the generated DB schema:
CREATE TABLE fighter_robot (
    id INTEGER PRIMARY KEY,
    name TEXT,
    weapon_1 TEXT,
    weapon_2 TEXT,
    engine TEXT,
    health INT
);
Note that the default values for engine and health didn't make it into the DB schema. SQLObject takes care of setting the values by itself, and whenever you create a new instance (insert a new row into the DB), it populates it properly.
Another important detail that you might notice is that the name of the class doesn't exactly match the name of the table that is created. SQLObject uses a special naming convention to translate Pythonic names to database names. That's why the FighterRobot class created a table called fighter_robot.
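The translation just described can be sketched in a couple of lines (an illustration of the convention, not SQLObject's actual Style machinery):

```python
import re

def mixed_to_under(name):
    # 'FighterRobot' -> 'fighter_robot': insert an underscore before each
    # non-initial capital, then lowercase everything.
    return re.sub(r'(?<=.)([A-Z])', r'_\1', name).lower()

print(mixed_to_under('FighterRobot'))  # fighter_robot
```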
Each model class has a nested class called sqlmeta that allows you to specify a plethora of interesting attributes such as caching and lazy updates, and a style object to control name translations. This is just a hint of what is to come. We discuss lazyUpdate thoroughly later on. Here is how to turn on lazy updates for the fighter robot:
class FighterRobot(SQLObject):
    class sqlmeta:
        lazyUpdate = True
    name = StringCol()
    weapon_1 = StringCol()
    weapon_2 = StringCol()
    engine = StringCol(default='Basic engine')
    health = IntCol(default=100)
This is the flip-side of automatic DB schema creation. If you already have an existing database, you don't have to labor through defining every column as an attribute of your model classes. You can just specify _fromDatabase = True and be done with it. Here is how it's done:
class FighterRobot(SQLObject):
    _fromDatabase = True
Note
This system only works if the database itself and the Python drivers for that database provide the introspection capabilities that SQLObject needs. For example, this doesn't work for SQLite right now, but MySQL and Postgres work just fine. If you use one of the less commonly used SQLObject backends, your experience may vary.
Remember that your DB should have a table called fighter_robot to match the FighterRobot class name. This feature is useful if your DB is created and maintained by an external person or group. This way you are protected (to some degree) from DB schema changes. Of course, you must convert the data in the DB itself and change the parts of the code that relied on the old schema, but things such as adding a new column to a table or changing the name/type of a column you didn't access in your code should be transparent. | https://flylib.com/books/en/4.370.1.72/1/ | CC-MAIN-2021-04 | refinedweb | 1,316 | 54.22 |
16.10. Creating a Rule Set
- Ensure the rule-set imports the ESB message. To test, use this code:
import org.jboss.soa.esb.message.Message
- Next, ensure the rule set defines the following global variable which creates the list of destinations:
global java.util.List destinations;

The message is now sent to the JBoss Rules engine's working memory. Now you can turn a regular rule set into one that can be used for content-based routing. Do this by evaluating the ESB message's matching rule and ensuring it outputs a list of service destination names.
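Putting the two pieces together, a rule file would then take roughly this shape (a hypothetical sketch — the rule name, empty condition, and destination string are illustrative, not from the original):

```
import org.jboss.soa.esb.message.Message

global java.util.List destinations;

rule "Route message"
when
    msg : Message()
then
    destinations.add("express-routing-service");
end
```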
21 December 2010 13:40 [Source: ICIS news]
LONDON (ICIS)--Cold weather conditions across Europe could lead to lower jet fuel differentials, market sources said on Tuesday.
Heavy snow meant that many short haul flights were cancelled and that planes remained grounded in airports such as Heathrow in the UK and Frankfurt, Germany.
One buyer said this had decreased demand and created a surplus of material in the market.
The source said that even if the weather became milder, there would be after-effects which would dampen demand and exert downwards pressure on differentials.
Grounded planes were causing congestion at airports, occupying sparse parking spaces which meant repositioning planes into correct positions for take-off would be a lengthy process when the snow eventually cleared, the source said.
Another source said that airlines were expected to start selling product in open trading as they were consuming less jet fuel, and agreed that this was likely to result in lower premiums.
However, the source also said that differentials could remain supported as there was currently strong demand from European majors.
On Tuesday morning, a major European airline was reported to have sold a 2,000 tonne parcel for 24-28 December at January plus-$58/tonne (€44/tonne) FOB (free on board) FARAG (Flushing, Amsterdam, Rotterdam, Antwerp, Ghent).
Jet fuel cargoes and barges in Europe are traded based on a premium to the ICE gasoil futures contracts.
At GMT 12:25, the January ICE gasoil contract was trading at $770.25/tonne, up $6.00/tonne from Monday’s closing price.
Jet fuel barge differentials had been trading in the low January ICE gasoil plus $60s/tonne FOB in recent weeks.
Re: [Zope-dev] Methods through the Web (security?)
Martijn Faassen wrote: Various things. What you'd need is turn off 'view' permission by default for just about *everything* except possibly DTML Documents, otherwise it's just too easy to set up a site that exposes too much. Exposure to URLs should be turned off by default. Well, this is why
Re: [Zope-dev] Re: Superuser ownership (was Adding LoginManager at the root)
Robin Becker wrote: [stuff about 2.2] You can't create any objects except User Objects as the superuser. Create a manager, re-login as that manager and then add yoru objects. Can't speak for TinyTables but make sure the Anonymous role has the 'access TinyTables contents' permission. A lot of
Re: [Zope-dev] Re: Superuser ownership (was Adding LoginManager at the root)
Robin Becker wrote: What kind of idiotic permissions model is this where God cannot create anything? What is the function of the super user if not to manage? The super user is not god, that was seen to be an insecure thing. The super user is a facilitator for creating god... ;-) Seems to be
Re: [Zope-dev] Re: Superuser ownership (was Adding LoginManager at the root)
Robin Becker wrote: What steps should I try to allow my currently unowned documents/tinytables to be made accessible. When I press the change ownership button I get a help browser with an error message in the content frame. That sounds like a 2.2 bug to me... oh well... it is Alpha so fair
Re: [Zope-dev] Q2: ZCatalog Intelligent Objects
Steve Alexander wrote: In PTK, PortalContent-derived classes have the method def SearchableText(self): "text for indexing" So, if I get this correctly, Catalog will index both attributes and methods(that have only the self argument?) with the same name as an index? If so, then
Re: [Zope-dev] how to download a entire ZWiki site?
Jephte CLAIN wrote: Hello, Where I work, I do not have access to the internet, and as such, I have to move software and docs back and forth to my office. I understand that it is better for collaborative work to use ZWikis, but I wonder, how do I download the entire site to view it
[Zope-dev] Strange, Random Error
Can anyone shed any light on the following? It's got me stumped... It has only happened from 1 request so far, although it triggered off the same error for all the images on the page... ?!? Chris Zope reported an error for: Error Type: Invalid
Re: [Zope-dev] Please Help!!! - HelpSys
Lena wrote: I actually need to create a simple web-editor, something very similar to HelpSys, but I need to link tree items to the folders I create dynamically in Zope. I'd suggest looking at the ZWiki Product, it may be exactly what you want: cheers,
[Zope-dev] ZCatalog and Unique IDs
Hi, I hope this is a stupid question but why does ZCatalog use the URL of an object to uniquely identify it? Why not just use a reference to the object? Persistent object identifiers must exist for the ZODB so why not use them instead? cheers, Chris PS: This would solve all the problems with
Re: [Zope-dev] [Fwd: ZCatalog.. Again.. I know..]
Michel Pelletier wrote: I'm looking into this right now... Many thanks :-) As you may have gathered from my posts, I've been having a bit of a ZCatalog/SiteAccess (not to mention MIME) nightmare today... ...I took a day off work to try and get Squishdot 0.4.0 out and have spent the day
[Zope-dev] Re: ZCVS Mixin class... feedback sought.
Jerry Spicklemire wrote: [snip] Steve, please clarify any of the concepts I've managed to mangle! Now this sounds like something I waffled about a few weeks back... ...cool :-) I don't think I'm gonna have a chance to play though :S I'll jump in at 1.0.0 though and I'm there in spirit now
Re: [Zope-dev] Observer / Notification Interface Proposal
Evan Simpson wrote: This exists! (sort of). ObjectManager.superValues accepts a list of meta_types, and returns a list of all objects of those meta_types which can be found in any ObjectManager in the current one's acquisition context. and has done for ages... the original Squishdot code
Re: [Zope-dev] Re: ZCatalog and Unique IDs
Evan Simpson wrote: There are generally two ways (at least) to think about "object identifiers". You can think of them as unique labels for specific objects, without regard for location ("Jim Fulton") Me too... or as addresses or slots in which objects can be found ("the CTO"). who's
Re: [Zope-dev] Observer / Notification Interface Proposal
Michel Pelletier wrote: There allready exists such a thing, superValues([meta_type]). But I think this is too weak to use as a discovery protocol. An observable may not know or care what type of observer it wants to discover. Also, when it finds a number of various resources, catalogs lets
Re: [Zope-dev] Re: ZCatalog and Unique IDs
Michel Pelletier wrote: At the moment this is true, but Jim and I have discussed the possibility of ZCatalog being treating the paths less atomically and more as a sequence of nodes from root to the target, this way, you could ask the catalog for all objects below a certain point that match
Re: [Zope-dev] Proposed change in the authentication
Jim Fulton wrote: I wonder whether this would fix the following problem: What I reckon is happening is that HTTP is being dumb and
[Zope-dev] TextIndex Questions
From: Text Index: property values are applied against a lexicon object that stems, stops, and parses the value into a full text index. The index may be queried with a simple boolean query language that allows 'and', 'or', phrasing,
Re: [Zope-dev] [Fwd: ZCatalog.. Again.. I know..]
"R. David Murray" wrote: On Wed, 24 May 2000, Chris Withers wrote: This is weird... the bit of code in question was: lib/python/SearchIndex/Lexicon.py line 132: [...] Now Michel's patch was to change this to: else: self.counter = self.counter + 1
[Zope-dev] Case Insensitive TextIndices
Since and classic.zope.org (not to mention digicool.com) all seem to be down, I'll dump my feature request here (although given the lack of mail this morning, maybe mailman is down too ;-) Could it at least be an option, my view is that it should be the default option ;-), for text
Re: [Zope-dev] Re: ZCatalog and Unique IDs
Re: [Zope-dev] Calling DTML methods from Python
Kevin Dangoor wrote: I believe this is in the Collector... My guess is that no one has yet figured out a good way to make the client and namespace be passed in automatically. This may not be desirable anyhow, because there may be times when you want to change the client... I've actually done
Re: [Zope-dev] New Help System in 2.2
Shane Hathaway wrote: 1) Enhance the permissions system to include "Accessible via HTTP", "Accessible via FTP", etc. Add to that 'execute in DTML method' ( + python methods perl methods;-) and you've got what I've been asking for for so long I've stopped trying :( This would prevent the nasty
[Zope-dev] Traversal Stuff
Hi, I just remembered, something I really miss from my Mason days are dhandlers. What this offers is having a URL like: processed by the 'object' object with a parameter of '/some/parameters' or better still a 'URL objects' list as a parameter.
[Zope-dev] Interfaces Wiki Changes
Hi Michel, I've been good and documented what I learned from the docs discussion in the interfaces wiki. Can you check and correct as appropriate please:
Re: [Zope-dev] FWIW, ZCVSMixin now at 0.0.9... and rising. ;-)
cool :-) Keep up the good work... How long before Zope is a web-accessible CVS repository? (also accessible by FTP, WebDAV, XML-RPC, etc...) Chris PS: When WinCVS can CICO of Zope, I will be a happy man ;-) Steve Spicklemire wrote: Hi Folks, ZCVSMixin is now reaching a
Re: [Zope-dev] Confera fix
Hmmm, you could just use Squishdot 0.4.1 with the 'plain' style, it looks and behaves pretty much identically to Confera except it works (although I'm sure it has bugs ;-) and is maintained (by me ;-) which, sadly, I don't think Confera is. HTH, Chris
[Zope-dev] Comments on ZPatterns
"Phillip J. Eby" wrote: At 02:33 PM 6/11/00 -0600, Shane Hathaway wrote: I believe I have come to understand the basics of ZPatterns and would like to be sure I understand correctly, as well as help others understand also. Feel free to flame me if I've got this wrong, but: I've
Re: [Zope-dev] Product Data Storage
Loren Stafford wrote: In that case we decided to put the Schedule in the root folder. There is no more global data than the catalog itself, but if there were it could be stored in the catalog folder, since catalogs are folderish. Hmm, this is messy. Could the Catalog not be stored in your
Re: [Zope-dev] ZPatterns Specialist Question
Steve Alexander wrote: I think you're getting your levels of abstraction confused with your meta-levels of abstraction :-) Confused? when talking about ZPatterns? n e v e r ;-) Chris ___ Zope-Dev maillist - [EMAIL PROTECTED]
[Zope-dev] Zope Add-on Installation
Hi, Zope currently has two ways of distributing products: 1. Tarballs containing python products 2. .zexps containing methods, ZClass products and the like. Things are now getting bad as products are emerging (PTK, Tracker, etc) which require both of these to get a single product working which
[Zope-dev] More Image Errors
Hi, does anyone know what's causing the following problems? I'm going to knock up a patch for this anyway and chuck it in the collector. As a rule, I don't think a browser providing duff information should cause Zope to throw errors... cheers, Chris Automatic Zope Response wrote: Zope
Re: [Zope-dev] More Image Errors
Chris Withers wrote: I'm going to knock up a patch for this anyway and chuck it in the collector. The patch (which is for the latest CVS, but shouldn't be too hard to modify for other versions ;-) Chris === RCS file: /cvs
[Zope-dev] More comments on ZPatterns
From a mail about the LinuxTag conference: P.S. ABout ZPatterns: everyone I spoke to was thought the basic idea behind ZPattern was good and sound and nice and so on. But _everyone_ complained about it being too pretentious (with all the computer science claims and theory behind it) and
Re: [Zope-dev] Zope Add-on Installation
Jimmie Houchin wrote: It would be nice to have a single unified way of preparing apps, products or whatever for distribution. The process be nicely automated or provide a user interface for building the distributable. The install process would also need to be just as painless. Totally agree.
Re: [Zope-dev] Help needed: why is this DTML not working in zope 2 ?
You could try moving stuff to the new dtml style for starters: dtml-var standard_html_header and the like. That might help, if not let us know.. cheers, Chris Gilles Lavaux wrote: Hello, I am a little bit disappointed not getting any echo from my previous question, so I report it.
Re: [Zope-dev] Help needed: why is this DTML not working in zope 2 ?
Gilles Lavaux wrote: But I found something: In my 'PUBLIC_Doc' document I display an image with a <!--# var image_name--> tag, and my problem disappears if I use <!--#var "image_name"-->, can someone explain this?? Are you sure the space after the # isn't your problem? I would STRONGLY recommend
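For reference, the syntax shift being recommended in this thread looks like this (sketched from the two DTML styles; standard_html_header is the usual example, not taken from Gilles' page):

```
<!--#var standard_html_header-->    old server-side-include style
<dtml-var standard_html_header>     newer dtml-tag style
```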
Re: [Zope-dev] ZCatalog lexicon bug is back?
Dieter Maurer wrote: I am very interested in ZCatalog. I promiss that I will look into it, when I come to 2.2. Thanks, Documentation of the cool stuff which is already there, and making it work 100% reliably are the two main things... cheers, Chris
[Zope-dev] Overriding a method in an instance.
Hi, I guess this should be a feature request for the collector but I thought I'd see what other people thought first... I'd really like to be able to override methods in an instance of an object. Examples I can think of are Squishdot and the Tracker. In Squishdot, or any ZCatalog for that
Re: [Zope-dev] Overriding a method in an instance.
Monty Taylor wrote: Make a folder that contains the overridden methods and call things through the context of that folder. Neat trick :-) We love acquisiton, but it won't quite do it :( The default index_html will get called, unless you put /folder/ on the end of your URL. which is horrible :(
[Zope-dev] NASTY error. Why?
Hi,
Re: [Zope-dev] NASTY error. Why?
Evan, Thanks, that worked... Evan Simpson wrote: From: Chris Withers [EMAIL PROTECTED] <dtml-call "REQUEST['where'][-1].manage_addFolder(id)"> Error Type: TypeError Error Value: read-only character buffer, Python Method Looks like the 'id' of something along the line i
Re: [Zope-dev] NASTY error. Why?
Chris Withers wrote: Can someone please tell me why folder.id is a method and everything-else.id is a string? Sorry, that should be folder.id is a string and everything-else.id is a method. cheers, Chris ___ Zope-Dev maillist - [EMAIL PROTECTED
[Zope-dev] manage_clone? (amongst others!)
I was going to title this 'I should give up using Zope' but I thought I'd give it one last go... Yesterday's 6hrs failing to do a tree-walk-and-copy in DTML failed, so I thought I'd try in an external method today... ..well, that proved to be just as stupid. Okay, firstup, has anything
Re: [Zope-dev] manage_clone? (amongst others!)
Shane Hathaway wrote: Chris Withers wrote: I'm still not clear on this. Why would I need access to the product factory when where.manage_addFolder(id,title) works just fine? I thought manage_addFolder was the product factoy method? manage_addFolder() is to be considered a shortcut
[Zope-dev] User not in User Folder problem solved! :-)
Brian Lloyd wrote: That's a problem. Root index_html is viewable by Anonymous user - Zope should not complain about wrong (not in acl_users) login/password. It seems Zope doesn't like being presented with Authentication information it knows nothing about. A more graceful way of
[Zope-dev] External Method Missery
I think this is a bgu so I'll chuck it into the collector unless someone tells me otherwise... I have an external method called navTree (dtml-tree was too broken to fix in the time frame :( ) with a spec as follows: def navTree(self,start): It's called in some DTML as: dtml-var
[Zope-dev] Another mystery for you ;-)
Hi Steve, Since you found the external methods arguments thing interesting, here's another challenge for you... ;-) (it's actually from the same nav_tree method) I was trying to use 'if o in REQUEST.PARENTS' to expand branches on the way to the currently displayed object and was running into
Re: [Zope-dev] Re: Another mystery for you ;-)
Steve Alexander wrote: Smells like an Acquisition Wrapper misunderstanding :-) You should change your name to Jim... ...or have you been bitten by this before? Do you know if objects in PARENTS are acquisition wrapped? cheers and much we're-not-worthy'ing, Chris
Re: [Zope-dev] Re: Another mystery for you ;-) : o.aq_base,PARENTS): ..nice... :/
[Zope-dev] Aquisition, in, == and is
Chris Withers wrote:
Re: [Zope-dev] External Method Missery
Shane Hathaway wrote: that last algorithm falls to pieces. The solution is to always provide the "self" argument. When calling or in the signature of your external method? If the former, then would that have to be: dtml-var "external_method(this(),...other args..." ? cheers, Chris
Re: [Zope-dev] Re: Another mystery for you ;-)
Steve Alexander wrote: Does the method aq_inContextOf() do what you want? Not really, sicne it's not linked to the URL traversal, which PARENTS is. Or am I getting that wrong? ;-) However, someone could write a Product with a class that doesn't support Acquisition. I think I'd consider that
Re: [Zope-dev] External Method Missery
Steve Alexander wrote: Both. def external_method(self, ...other args...): dtml-var "external_method(this(),...other args..." ? I'll go with this advice since I still can't make heads or tails which of the two Shane thinks I need to do ;-) Of course, it's not documented like this. I think
Re: [Zope-dev] External Method Missery
Shane Hathaway wrote: Here's the logic: ExternalMethod sets up func_* attributes so it can masquerade as a function. The trick works well enough to convince ZPublisher's mapply() to pass in a "self" argument as the first argument when needed. What 'self' does mapply pass? I always though it
Re: [Zope-dev] External Method Missery
Chris Withers wrote: ...which doesn't give quite what I want objects[-1] should be the object getting rendered, however, for folders getting rendered where index_html is a DTML document, it is the index_html document. Well, I think I solved it for now, but this is horrible! ;( def navTree
Re: [Zope-dev] External Method Missery
Steve Alexander wrote: Ah... but are you calling the external method from a DTML method? These are all methods, and therefore you'd expect the "self" object to be the object the methods are subobjects of. The exact turn of events is that index_html is a DTML method which shows a DTML
[Zope-dev] Errors causing half rendered pages
Hi, I'm still debugging and writing the navtree code I've mentioned before and I've noticed that not infrequently, when rendering a page that causes and error, I get a half rendered page rather than a nice Zope error page. This is a bit waffly to put into the collector so I was wondering if
[Zope-dev] Re: ZCallable
Yup, This looks good, as did your earlier post. However, it raises a question :( Something like a ZCatalog or a Squishdot Site (which I have a passing interest in ;-) are both folderish. However, their __call__ method does something quite different: it returns the results of searching the
Re: [Zope-dev] Errors causing half rendered pages
Dieter Maurer wrote: I saw this only when buggy HTML was generated. When I viewed the HTML source my Netscape browser sometimes showed me blinking parts that located the errors. Nope, this was with IE... I viewed source and sure enough, it ended after a few lines. I guess it might have
[Zope-dev] Multi-homed objects?
Is there any reason why an object cannot be contained in more than one Folder in the ZODB? Apparently what I'm talking about is very similar to hard linking in UNIX... I can't think of any reasons why this would be bad but I can't think how to implement an 'Add hard link to this object'
Re: [Zope-dev] Multi-homed objects?
Oleg Broytmann wrote: Hardlinks are prohibited on directories; and 5 minutes ago you said all objects are foldersih :) I'm not sure if my statement applies in this situation... ;-) Hardlinks are prohibited on directories because it'd cause infinite loops on traversing. Hmm, would
Re: [Zope-dev] Multi-homed objects?
Shane Hathaway wrote: Can you see any weirdities that might occur from having an object as an attribute of more than one object manager / folder? The recursion problem, for one. Hmmm, how much of Zope would need changing to use the visitor interface? Also, if you change the object in
Re: [Zope-dev] Errors causing half rendered pages
Martijn Pieters wrote: THis rings vague bells of IIS or some other proxy server or somthing converting LF tp CRLF but not updating the Content-Length header, thus having your browser drop part of the transmission. I could be talking absolute nonsense of course. To much work on ZopeStudio I
Re: [Zope-dev] Multi-homed objects?
Terry Kerr wrote: Where would the object aquire its attributes from? Would it try one of its folders, then the other? Which order would it choose? Now that's a very good question ;-) Probably from the folder you referenced it from and then normal acquisition from then on... cheers,
Re: [Zope-dev] Re: Comments on ZPatterns
"Phillip J. Eby" wrote: Yep, like "Acquisition" and "object publishing". :) Seriously, that is very much the level we're talking about here. I know, there are docs explaining acquisition and object publishing has an implicit meaning. Things like 'Specialist' and 'DataSkin' don't have
Re: [Zope-dev] Comments on ZPatterns
Ty Sarna wrote: In article [EMAIL PROTECTED], Chris Withers [EMAIL PROTECTED] wrote: 1. Too much jargon... by far... Lots of complicated words that are meanlingless to the layman and don't help to convey the concepts. This Can you point out some examples of which ones you think
Re: [Zope-dev] Zope Add-on Installation Skins (none right now,
Re: [Zope-dev] Request for amplification on new Product permissions API.
Brian Lloyd wrote: So, what do I need to do to allow controlled access to this object? I understand that Shopper, inheriting from SimpleItem, already has the access to unprotected subobjects flag. And I'd rather protect the object correctly, anyway grin. I tried adding an
Re: [Zope-dev] More comments on ZPatterns
"Phillip J. Eby" wrote: 2) I need a good way to make the methods overrideable without any subclassing (whether in Python or ZClasses), Ah, so it's not just me who wants this ;-) I think this may relate to an existing interest of yours regarding specification of interfaces and overriding
Re: [Zope-dev] Expanded access file (was Re: LoginManager patch consideredharmful)harmful)
"Phillip J. Eby" wrote: Maybe, maybe not. I think perhaps the most compelling argument from Digital Creations' viewpoint for having an expanded "access" file might be the simplification of the setup process for customers. And it would also make it easier to: 1) Phase out unownedness
Re: [Zope-dev] Expanded access file (was Re: LoginManagerpatch consideredharmful)harmful)
"Phillip J. Eby" wrote: You speak in the past tense. This is only a suggestion and a possibility. It's not as important as some other feature requests. Patch opportunity, perhaps? :) Ty and I would do it, no problem. Heck, I've been tempted to do it as a LoginManager function, since
Re: [Zope-dev] Re: Acquisition (was: [Zope-dev] Overriding a method in an instance.)
Shane Hathaway wrote: P.S. I wouldn't mind if someone posted this as a HOWTO. :-) Done :-) cheers, Chris ___ Zope-Dev maillist - [EMAIL PROTECTED] ** No cross posts or HTML encoding! ** (Related
Re: [Zope-dev] Overriding a method in an instance.
Shane Hathaway wrote: I have an idea: the _objects attribute of ObjectManagers could include a "configurable" flag, which would tell _checkId that the object can be overridden. Shane, Is this what became the ConfigurableInstances thing at:
Re: [Zope-dev] ZPatterns: missing docstring in getItem()
"Phillip J. Eby" wrote: The local roles stuff is the ability to have a DataSkin get local role information from "local role providers", thus allowing rules-based local roles to exist/co-exist with the standard Zope persistently-managed local roles. How does this fit in / compare / contrast
Re: [Zope-dev] Redirecting from the manage interfaces.
Erik Enge wrote: call, it won't redirect. So you should be able to achieve the same results just by invoking manage_addImage without including the REQUEST object. But I have to pass something with the REQUEST, or else it won't add the image, right? This is the problem we've
[Zope-dev] ZCallable the Renderable Folders Patch
Steve Alexander wrote: The __call__ method is what gets invoked when you call the method, either through the magic of dtml-var Catalog or with dtml-var "Catalog(client, namespace, args*)". Now you see, this is what confuses me... dtml-var dtml_method renders that method, implying in my mind
Re: [Zope-dev] Zope Add-on Installation
Bill Anderson wrote: I believe he is talkin gabout the __init__ function checking to see if the .zexp has been imported, and if not, importing it for you. It's an idea I have kicked around, but haven't tried yet. IIRC, the 'Distribution' tab creates all this for you... It creates a binary
Re: [Zope-dev] Re: ZCallable the Renderable Folders Patch
Shane Hathaway wrote: Offhand I would say that putting a __call__ method in standard folders would break a lot of DTML. Remember that a namespace lookup tries to execute the value before returning it. But don't take my word for it, add a __call__ method to ObjectManager and see what happens.
Re: [Zope-dev] Zope Add-on Installation
Shane Hathaway wrote: I've created a patch that allows you to create redistributable archives with the distribution tab. It's checked in to CVS on a branch. This is what was needed for us to start using the distribution tab more frequently. Great :-) Thanks again Shane... Is the patch
Re: [Zope-dev] several permissions for the same method
Jephte CLAIN wrote: snip different security for same method You could just check for the permissions specifically, here's a quote from Folder.py in Zope 2.2: checkPermission=getSecurityManager().checkPermission if createUserF: if not checkPermission('Add User Folders', ob): raise
Re: [Zope-dev] Redirecting from the manage interfaces.
Erik Enge wrote: On Wed, 19 Jul 2000, Chris Withers wrote: Why I'm asking is 'cos it'd be really nice not to have to keep re-writing UI when there's perfectly good stuff available in the management interface, things like add forms, edit forms, etc... Exactly. Great, so how do we
Re: [Zope-dev] getting request variables values
Evan Simpson wrote: The value you're after is stored in the 'environ' section of the request. Unlike 'other' and 'cookies' keys, 'environ' keys can't generally be fetched as attributes or keys of REQUEST. You need to access them as REQUEST.environ['keyname']. Heh, I thought so, I presume
[Zope-dev] Incorrect Padding?
Does anyone know what this means? The page views fine for me and this is the first error of this type I've seen since we launched the archives. I wonder what WebWhacker is doing to cause this? cheers, Chris Automatic Zope Response wrote: Zope reported an error for:
Re: [Zope-dev] Security Strangeness
Johan Carlsson wrote: First, you can't delegate the permissionto add and delete user except by assigning the user the role "manager". IMHO this is to limiting. Second, if you give a user the permission to Change Persmissions, that user can change permissions that she doesn't have the right
Re: [Zope-dev] Incorrect Padding?
Steve Alexander wrote: My guess is that the argument "auth" passed to validate() has some trailing characters. Either that, or WebWhacker passed just "Basic " as an auth string. Yuk, that sounds like a Zope bug. Collector time with patch? A judicious string.strip should solve the problem,
Re: [Zope-dev] Incorrect Padding?
Chris Withers wrote: Steve Alexander wrote: My guess is that the argument "auth" passed to validate() has some trailing characters. Either that, or WebWhacker passed just "Basic " as an auth string. Yuk, that sounds like a Zope bug. Collector time with patch? A jud
Re: [Zope-dev] Incorrect Padding?
Steve Alexander wrote: snip excellent patches Well, I think Brian Lloyd'd have to make the call... Nice work though, 2 for 2 on the day, that's pretty good going, are DC paying you yet? ;-) cheers, Chris ___ Zope-Dev maillist - [EMAIL PROTECTED]
Re: [Zope-dev] ZODB : mystery
Every time you create a Zope object, such as a DTML document or Folder, it gets stored in the ZODB. Perhaps you could be a little more specific in your aims? cheers, Chris [EMAIL PROTECTED] wrote: Hi, I've a question : How could I store and retrieve object in the ZoDB ? I know that I
Re: [Zope-dev] Zope 2.2.0 and SiteAccess 2.0.0b2 -- doesn't work?-- FIXED
Michael Monsen wrote: When I created the SiteRoot object I was using the superuser account, I thought this wasn't supposed to be possible in the first place?! Chris ___ Zope-Dev maillist - [EMAIL PROTECTED]
Re: [Zope-dev] Zope bug (w/ patch): hard coded Unix separator in special_dtml.py
Yves-Eric Martin wrote: Does that sound right to you? If no one disagrees, I'll fill a bug report w/ patch in the Collector. Sounds great, is it in the collector yet? Chris ___ Zope-Dev maillist - [EMAIL PROTECTED]
Re: [Zope-dev] getting request variables values
Steve Alexander wrote: def __getitem__/__getattr__ from HTTPRequest.py: """Get a variable value Return a value for the required variable name. The value will be looked up from one of the request data categories. The search order is environment variables,
Re: [Zope-dev] Request for amplification on new Product permissions API.
Brian Lloyd wrote: Yes - basically any class that defines *or inherits from a class that defines* permissions should do this to make sure that the permissions along the inheritance heirarchy are cobbled together correctly at class initialization time. I'm sure someone said recently that this
Re: [Zope-dev] Incorrect Padding?
Martijn Pieters wrote: Oops. You took out the strip. But IIRC, base64 does a strip as well. Not according to the original error which started this thread :( Chris ___ Zope-Dev maillist - [EMAIL PROTECTED]
Re: [Zope-dev] Incorrect Padding?
Steve Alexander wrote: Martijn Pieters wrote: Oops. You took out the strip. But IIRC, base64 does a strip as well. So it does! from base64 import * s = encodestring('foo') decodestring(s) 'foo' decodestring(s+' ') 'foo' decodestring(' '+s) 'foo' So what was causing the
Re: [Zope-dev] Incorrect Padding?
Martijn Pieters wrote: So what was causing the original error then? Buggy client? If so, surely Zope should just return an Unauthorized error rather than exposing its internals?! If you're a server and the client is buggy, tell it so, but don't look like you just screwed up really badly
[Zope-dev] Standard Error Messages (was Re: [Zope-dev] Incorrect Padding?)
Steve Alexander wrote: ! raise 'InternalError', request.response._error_html( Can someone enlighten me as to what this does? Does it reset the HTTP response code? Is _error_html something that gets the acquired standard_error_message? If not, it should do ;-) Has anyone made
[Zope-dev] Stuff hanging off user not in cookies
Hi, small rant (no change there for me ;-)
Re: [Zope-dev] Re: [Zope] Bi-directional update of Data.fs if one user updates and object on
Re: [Zope-dev] Bi-directional update of Data.fs
Steve Alexander wrote: What does Lotus Notes offer to do if you get such a conflict? Save both versions and ask the user to delete the one which isn't needed, or merge the changes... Use Jim's new conflict resolution algorithm to try to settle conflicts. Urm? First I heard of this and it
Re: [Zope-dev] Re: [Zope-PTK] PROPOSAL: Splitting ZPatterns into two products
Steve Alexander wrote: "Phillip J. Eby" wrote: So, I am thinking perhaps I should split ZPatterns into two products: PlugIns and DataSkins. If 'PlugIns' includes plugins, plugin groups and plugin containers, then that's a pretty good name :-) (If I've missed bits out and the like, please | https://www.mail-archive.com/search?l=zope-dev@zope.org&q=from:%22Chris+Withers%22 | CC-MAIN-2018-47 | refinedweb | 5,237 | 63.09 |
CPAN::PackageDetails::Header - Handle the header of 02packages.details.txt.gz
Used internally by CPAN::PackageDetails
The 02packages.details.txt.gz header is a short preamble that give information about the creation of the file, its intended use, and the number of entries in the file. It looks something like:
File: 02packages.details.txt URL: Description: Package names found in directory $CPAN/authors/id/ Columns: package name, version, path Intended-For: Automated fetch routines, namespace documentation. Written-By: Id: mldistwatch.pm 1063 2008-09-23 05:23:57Z k Line-Count: 59754 Last-Updated: Thu, 23 Oct 2008 02:27:36 GMT
Note that there is a Columns field. This module tries to respect the ordering of columns in there. The usual CPAN tools expect only three columns and in the order in this example, but
CPAN::PackageDetails tries to handle any number of columns in any order.
Create a new Header object. Unless you want a lot of work so you get more control, just let
CPAN::PackageDetails's
new or
read handle this for you.
In most cases, you'll want to create the Entries object first then pass a reference the the Entries object to
new since the header object needs to know how to get the count of the number of entries so it can put it in the "Line-Count" header.
CPAN::PackageDetails::Header->new( _entries => $entries_object, )
Write the date in PAUSE format. For example:
Thu, 23 Oct 2008 02:27:36 GMT
Returns a list of the the headers that should show up in the file. This excludes various fake headers stored in the object.
Add an entry to the collection. Call this on the
CPAN::PackageDetails object and it will take care of finding the right handler.
Returns true if the header has a field named FIELD, regardless of its value.
Returns the value for the named header FIELD. Carps and returns nothing if the named header is not in the object. This method is available from the
CPAN::PackageDetails or
CPAN::PackageDetails::Header object:
$package_details->get_header( 'url' ); $package_details->header->get_header( 'url' );
The header names in the Perl code are in a different format than they are in the file. See
default_headers for an explanation of the difference.
For most headers, you can also use the header name as the method name:
$package_details->header->url;
Returns the columns name as a list (rather than a comma-joined string). The list is in the order of the columns in the output.
Return the header formatted as a string.
This source is in Github:
brian d foy,
<bdfoy@cpan.org>
You may redistribute this under the same terms as Perl itself. | http://search.cpan.org/dist/CPAN-PackageDetails/lib/Header.pm | crawl-003 | refinedweb | 446 | 64.91 |
12 February 2009 17:45 [Source: ICIS news]
TORONTO (ICIS news)--US chemical railcar traffic fell 17.7% in the week ended 7 February from the same week in 2008, marking the 23rd straight decline, according to data released on Thursday.
US chemical railcar loadings were 25,146, down from 30,572 for the same week in 2008, the Association of American Railroads (AAR) said.
The weekly shipment data are a good early indicator of current chemical industry activity. Railroads transport more than 20% of the chemicals produced in the ?xml:namespace>
For the year-to-date period through 7 February, US chemical railcar loadings fell 18.7% to 126,981, down from 156,214 in the same period last year.
The AAR also provided comparable chemical railcar shipment data for
Canadian chemical rail traffic for the week ended 7 February declined 19.6% to 11,876 carloads, down from 14,766 last year.
Year-to-date, Canadian shipments were 60,489, a 20.4% decrease from 75,948 in the same period in 2008.
Mexican chemical rail traffic rose 23.4% to 977 carloads, up from 792 a year earlier.
Year-to-date, Mexican shipments were 4,947, up 4.9% from 4,716 in the same period last year.
For all of
Year-to-date to 7 February, North American chemical railcar traffic was 192,417 carloads, down 18.8% from 236,878 in the same period last year.
Overall, the
For all | http://www.icis.com/Articles/2009/02/12/9192427/us-weekly-chemical-rail-traffic-down-17.7.html | CC-MAIN-2014-41 | refinedweb | 245 | 67.86 |
Java 14 Record Keyword
Last modified: May 14, 2020
1. Introduction
Passing immutable data between objects is one of the most common, but mundane tasks in many Java applications.
Prior to Java 14, this required the creation of a class with boilerplate fields and methods, which were susceptible to trivial mistakes and muddled intentions.
With the release of Java 14, we can now use records to remedy these problems.
In this tutorial, we'll look at the fundamentals of records, including their purpose, generated methods, and customization techniques.
2. Purpose
Commonly, we write classes to simply hold data, such as database results, query results, or information from a service.
In many cases, this data is immutable, since immutability ensures the validity of the data without synchronization.
To accomplish this, we create data classes with the following:
- private, final field for each piece of data
- getter for each field
- public constructor with a corresponding argument for each field
- equals method that returns true for objects of the same class when all fields match
- hashCode method that returns the same value when all fields match
- toString method that includes the name of the class and the name of each field and its corresponding value
For example, we can create a simple Person data class, with a name and an address:
public class Person { private final String name; private final String address; public Person(String name, String address) { this.name = name; this.address = address; } @Override public int hashCode() { return Objects.hash(name, address); } @Override public boolean equals(Object obj) { if (this == obj) { return true; } else if (!(obj instanceof Person)) { return false; } else { Person other = (Person) obj; return Objects.equals(name, other.name) && Objects.equals(address, other.address); } } @Override public String toString() { return "Person [name=" + name + ", address=" + address + "]"; } // standard getters }
While this accomplishes our goal, there are two problems with it:
- There is a lot of boilerplate code
- We obscure the purpose of our class – to represent a person with a name and address
In the first case, we have to repeat the same tedious process for each data class, monotonously creating a new field for each piece of data, creating equals, hashCode, and toString methods, and creating a constructor that accepts each field.
While IDEs can automatically generate many of these classes, they fail to automatically update our classes when we add a new field. For example, if we add a new field, we have to update our equals method to incorporate this field.
In the second case, the extra code obscures that our class is simply a data class that has two String fields: name and address.
A better approach would be to explicitly declare that our class is a data class.
3. The Basics
As of JDK 14, we can replace our repetitious data classes with records. Records are immutable data classes that require only the type and name of fields.
The equals, hashCode, and toString methods, as well as the private, final fields, and public constructor, are generated by the Java compiler.
To create a Person record, we use the record keyword:
public record Person (String name, String address) {}
3.1. Constructor
Using records, a public constructor – with an argument for each field – is generated for us.
In the case of our Person record, the equivalent constructor is:
public Person(String name, String address) { this.name = name; this.address = address; }
This constructor can be used in the same way as a class to instantiate objects from the record:
Person person = new Person("John Doe", "100 Linda Ln.");
3.2. Getters
We also receive public getters methods – whose names match the name of our field – for free.
In our Person record, this means a name() and address() getter:
@Test public void givenValidNameAndAddress_whenGetNameAndAddress_thenExpectedValuesReturned() { String name = "John Doe"; String address = "100 Linda Ln."; Person person = new Person(name, address); assertEquals(name, person.name()); assertEquals(address, person.address()); }
3.3. equals
Additionally, an equals method is generated for us.
This method returns true if the supplied object is of the same type and the values of all of its fields match:
@Test public void givenSameNameAndAddress_whenEquals_thenPersonsEqual() { String name = "John Doe"; String address = "100 Linda Ln."; Person person1 = new Person(name, address); Person person2 = new Person(name, address); assertTrue(person1.equals(person2)); }
If any of the fields differ between two Person instances, the equals method will return false.
3.4. hashCode
Similar to our equals method, a corresponding hashCode method is also generated for us.
Our hashCode method returns the same value for two Person objects if all of the field values for both object match (barring collisions due to the birthday paradox):
@Test public void givenSameNameAndAddress_whenHashCode_thenPersonsEqual() { String name = "John Doe"; String address = "100 Linda Ln."; Person person1 = new Person(name, address); Person person2 = new Person(name, address); assertEquals(person1.hashCode(), person2.hashCode()); }
The hashCode value will differ if any of the field values differ.
3.5. toString
Lastly, we also receive a toString method that results in a string containing the name of the record, followed by the name of each field and its corresponding value in square brackets.
Therefore, instantiating a Person with a name of “John Doe” and an address of “100 Linda Ln.” results in the following toString result:
Person[name=John Doe, address=100 Linda Ln.]
4. Constructors
While a public constructor is generated for us, we can still customize our constructor implementation.
This customization is intended to be used for validation and should be kept as simple as possible.
For example, we can ensure that the name and address provided to our Person record are not null using the following constructor implementation:
public record Person(String name, String address) { public Person { Objects.requireNonNull(name); Objects.requireNonNull(address); } }
We can also create new constructors with different arguments by supplying a different argument list:
public record Person(String name, String address) { public Person(String name) { this(name, "Unknown"); } }
As with class constructors, the fields can be referenced using the this keyword (for example, this.name and this.address) and the arguments match the name of the fields (that is, name and address).
Note that creating a constructor with the same arguments as the generated public constructor is valid, but this requires that each field be manually initialized:
public record Person(String name, String address) { public Person(String name, String address) { this.name = name; this.address = address; } }
Additionally, declaring a no-argument constructor and one with an argument list matching the generated constructor results in a compilation error.
Therefore, the following will not compile:
public record Person(String name, String address) { public Person { Objects.requireNonNull(name); Objects.requireNonNull(address); } public Person(String name, String address) { this.name = name; this.address = address; } }
5. Static Variables & Methods
As with regular Java classes, we can also include static variables and methods in our records.
We declare static variables using the same syntax as a class:
public record Person(String name, String address) { public static String UNKNOWN_ADDRESS = "Unknown"; }
Likewise, we declare static methods using the same syntax as a class:
public record Person(String name, String address) { public static Person unnamed(String address) { return new Person("Unnamed", address); } }
We can then reference both static variables and static methods using the name of the record:
Person.UNKNOWN_ADDRESS Person.unnamed("100 Linda Ln.");
6. Conclusion
In this article, we looked at the record keyword introduced in Java 14, including their fundamental concepts and intricacies.
Using records – with their compiler-generated methods – we can reduce boilerplate code and improve the reliability of our immutable classes.
The code and examples for this tutorial can be found over on GitHub. | https://www.baeldung.com/java-record-keyword | CC-MAIN-2021-17 | refinedweb | 1,258 | 51.07 |
Opened 11 years ago
Closed 11 years ago
#2701 (copie de travail) @@ (12)
comment:1 Changed 11 years ago by
comment:2 Changed 11 years ago by
You can't mark it as wontfix:
comment:3 Changed 11 years ago by
comment:4 Changed 11 years ago by
So pydoctor now (in SVN head) avoids that module, but the offending docstring still turns up as the docstring of twisted.words.xish.xpathparser.XPathParser.init, so this alone won't fix the buildbot, so there's still a code change needed for this (adding a docstring to the previously mentioned method). Or I guess pydoctor could not consider docstrings of objects that are excluded but I'm not that keen on that idea.
comment:5 Changed 11 years ago by
Hmmmmmm.
Tricky. :/
comment:6 Changed 11 years ago by
Maybe I've missed something obvious, but:
Why are we including the yapps runtime rather than just depending on it? Is it not packaged?
If we are going to bundle it with Twisted, perhaps it could live outside the Twisted package, which should hopefully not get picked up by pydoctor.
comment:7 Changed 11 years ago by
Hrm. It _is_ packaged. I don't know why it's also in Twisted.
comment:8 Changed 11 years ago by
Seeing that #2502 has been reversed because of this, I'm not sure if packaging the Yapps runtime seperately resolves the issue. The generated
twisted.words.xish.xpathparser module would still have an
XPathParser.__init__ that inherits from the Yapps runtime and generate non-epydoc docstrings.
Maybe we need a script that calls the yapps compiler and applies the needed local fixes (import and the docstring) for generating xpathparser.py to resolve the issue.
I understand the suggestion of depending on a yapps runtime package, but I notice that there are platforms (e.g. FreeBSD) that don't provide yapps or the yapps runtime as a package (yet).
comment:9 Changed 11 years ago by
Distributing it with Twisted but outside twisted should address the issue of platforms which don't package it. If pydoctor still has a problem because the generated subclass inherits the non-parsing docstring, I'm not sure what to do about that (buildbot suggests it doesn't, though)
comment:10 Changed 11 years ago by
Okay, buildbot behavior is explained to my satisfaction, so it is actually working w/ yappsrt outside of the twisted namespace.
Please don't try to 'fix' this. This file is the yapps runtime and is by a third party.
The file
xpathparser.gused to have this functionality embedded, but people were subsequently trying to blindly 'fix' the API documentation in the generated
xpathparser.pyfile, making it a nightmare to maintain. So that is why we now include the official yapps runtime.
Could we maybe exclude
yappsrt.pyfrom pydoctor API generation and leave the file as-is?
I intend to mark this as
wontfix. | https://twistedmatrix.com/trac/ticket/2701 | CC-MAIN-2018-26 | refinedweb | 489 | 64.2 |
Calculates a hypotenuse by the Pythagorean formula
#include <math.h>
double hypot ( double x , double y );
float hypotf ( float x , float y );
long double hypotl ( long double x , long double y );
The hypot( ) functions compute the square root of the sum of the squares of their arguments, while avoiding intermediate overflows. If the result exceeds the function's return type, a range error may occur.
double x, y, h; // Three sides of a triangle
printf( "How many kilometers do you want to go westward? " );
scanf( "%lf", &x );
printf( "And how many southward? " );
scanf( "%lf", &y );
errno = 0;
h = hypot( x, y );
if ( errno )
perror( _ _FILE_ _ );
else
printf( "Then you'll be %4.2lf km from where you started.\n", h );
If the user answers the prompts with 3.33 and 4.44, the program prints this output:
Then you'll be 5.55 km from where you started.
sqrt( ), cbrt( ), csqrt( ) | http://books.gigatux.nl/mirror/cinanutshell/0596006977/cinanut-CHP-17-110.html | CC-MAIN-2018-43 | refinedweb | 153 | 84.68 |
Rose::HTML::Form::Field::Compound - Base class for field objects that contain other field objects.
package MyFullNameField; use base qw(Rose::HTML::Form::Field::Compound Rose::HTML::Form::Field::Text); sub build_field { my($self) = shift; $self->add_fields ( first => { type => 'text', size => 15, maxlength => 50 }, middle => { type => 'text', size => 15, maxlength => 50 }, last => { type => 'text', size => 20, maxlength => 50 }, ); } sub coalesce_value { my($self) = shift; return join(' ', map { defined($_) ? $_ : '' } map { $self->field($_)->internal_value } qw(first middle last)); } sub decompose_value { my($self, $value) = @_; return undef unless(defined $value); if($value =~ /^(\S+)\s+(\S+)\s+(\S+)$/) { return { first => $1, middle => $2, last => $3, }; } my @parts = split(/\s+/, $value); if(@parts == 2) { return { first => $parts[0], middle => undef, last => $parts[1], }; } return { first => $parts[0], middle => $parts[1], last => join(' ', @parts[2 .. $#parts]), }; } # Override these methods to determine how sub-fields are arranged sub html_field { ... } sub xhtml_field { ... } ... use MyFullNameField; $field = MyFullNameField->new( label => 'Full Name', name => 'name', default => 'John Doe'); print $field->internal_value; # "John Doe" $field->input_value('Steven Paul Jobs'); print $field->field('middle')->internal_value; # "Paul" print $field->html; ...
Rose::HTML::Form::Field::Compound is a base class for compound fields. A compound field is one that contains other fields. The example in the SYNOPSIS is a full name field made up of three separate text fields, one each for first, middle, and last name. Compound fields can also contain other compound fields.
Externally, a compound field must field look and behave as if it is a single, simple field. Although this can be done in many ways, it is important for all compound fields to actually inherit from Rose::HTML::Form::Field::Compound. Rose::HTML::Form uses this relationship in order to identify compound fields and handle them correctly. Any compound field that does not inherit from Rose::HTML::Form::Field::Compound will not work correctly with Rose::HTML::Form.
This class inherits from, and follows the conventions of, Rose::HTML::Form::Field. Inherited methods that are not overridden will not be documented a second time here. See the Rose::HTML::Form::Field documentation for more information.
A Rose::HTML::Form::Field::Compound-derived object behaves as if it is a single field made up of a group of sibling elements. These siblings are available through the fields method.
See the "hierarchy" sections of the "HIERARCHY" in Rose::HTML::Form::Field and "HIERARCHY" in Rose::HTML::Form documentation for more information about how field objects that are really "groups of siblings" behave with respect to the the child-related methods inherited from Rose::HTML::Object.
Actual compound fields must override the following methods: build_field(), decompose_value(), and coalesce_value(). The required semantics of those methods are described in the "OBJECT METHODS" section below.
Subfields are fields that are contained within another field. A field that has sub-fields is called a compound field. It is important to HTML form initialization that sub-fields be addressable from the top level. Since fields can be arbitrarily nested, some form of hierarchy must also exist in the field addressing scheme.
To that end, compound fields use the "." character to partition the namespace. For example, the "month" sub-field of a compound field named "date" could be addressed from the form that contains the field using the name "date.month". As a consequence of this convention, field names may not contain periods.
Subfields are addressed by their "relative" names from the perspective of the caller. For example, the Rose::HTML::Form::Field::DateTime::Split::MDYHMS custom field class contains a two compound fields: one for the time (split into hours, minutes, seconds, and AM/PM) and one for the date (split into month, day, and year). Here are a few ways to address the various sub-fields.
$datetime_field = Rose::HTML::Form::Field::DateTime::Split::MDYHMS->new( name => 'datetime'); ## Get the (compound) sub-field containing the month, day, and year $mdy_field = $datetime_field->field('date'); ## Get the year sub-field of the month/day/year sub-field ## in two different ways: # Fully-qualified sub-field access $year_field = $datetime_field->field('date.year'); # Relative sub-field access $year_field = $datetime_field->field('date')->field('year');
See the Rose::HTML::Form documentation for more information on how forms address and initialize fields based on query parameter names.
It is not the job of the coalesce_value() or decompose_value() methods to validate input. That's the job of the validate() method in Rose::HTML::Form::Field.
But as you'll see when you start to write your own decompose_value() methods, it's often nice to know whether the input is valid before you try to decompose it into sub-field values. Valid input can usually be divided up very easily, whereas invalid input requires some hard decisions to be made. Consequently, most decompose_value() methods have one section for handling valid input, and another that makes a best-effort to handle invalid input.
There are several ways to determine whether or not a value passed to decompose_value() is valid. You could actually call validate(), but that is technically a violation of the API since decompose_value() only knows that it's supposed to divvy up the value that it is passed. It is merely assuming that this value is also the current value of the field. In short, don't do that.
The decompose_value() method could try to validate the input directly, of course. But that seems like a duplication of code. It might work, but it is more effort.
The recommended solution is to rely on the fact that most overridden inflate_value() methods serve as an alternate form of validation. Really, the decompose_value() method doesn't want to "validate" in the same way that validate() does. Imagine a month/day/year compound field that only accepts dates in the 1990s. As far as validate() is concerned, 12/31/2002 is an invalid value. But as far as decompose_value() is concerned, it's perfectly fine and can be parsed and divided up into sub-field values easily.
This is exactly the determination that many overridden inflate_value() methods must also make. For example, that month/day/year compound field may use a DateTime object as its internal value. The inflate_value() method must parse a date string and produce a DateTime value. The decompose_value() method can use that to its advantage. Example:
sub decompose_value { my($self, $value) = @_; return undef unless(defined $value); # Use inflate_value() to do the dirty work of # sanity checking the value for us my $date = $self->SUPER::inflate_value($value); # Garbage input: try to do something sensible unless($date) { my($month, $day, $year) = split('/', $value); return { month => $month || '', day => $day || '', year => $year || '', } } # Valid input: divide up appropriately return { month => $date->month, day => $date->day, year => $date->year, }; }
This technique is sound because both decompose_value() and inflate_value() work only with the input they are given, and have no reliance on the state of the field object itself (unlike validate()).
If the inflate_value() method is not being used, then decompose_value() must sanity check its own input. But this code is not necessarily the same as the code in validate(), so there is no real duplication.
Convenience alias for add_fields().
Add the fields specified by ARGS to the list of sub-fields in this compound field.
If an argument is "isa" Rose::HTML::Form::Field, then it is added to the list of fields, stored under the name returned by the field's name() method.
If an argument is anything else, it is used as the field name, and the next argument is used as the field object to store under that name. If the next argument is not an object derived from Rose::HTML::Form::Field, then a fatal error occurs.
The field object's name() is set to the name that it is stored under, and its parent_field() is set to the form object.
Returns the full list of field objects, sorted by field name, in list context, or a reference to a list of the same in scalar context.
Examples:
$name_field = Rose::HTML::Form::Field::Text->new(name => 'name', size => 25); $email_field = Rose::HTML::Form::Field::Text->new(name => 'email', size => 50); # Field arguments $compound_field->add_fields($name_field, $email_field); # Name/field pairs $compound_field2->add_fields(name => $name_field, email => $email_field); # Mixed $compound_field3->add_fields($name_field, email => $email_field);
Get or set a boolean value that indicates whether or not the internal value of any parent fields are automatically invalidated when the input value of this field is set. The default is true.
This method must be overridden by subclasses. Its job is to build the compound field by creating and then adding the sub-fields. Example:
sub build_field { my($self) = shift; $self->add_fields ( first => { type => 'text', size => 15, maxlength => 50 }, middle => { type => 'text', size => 15, maxlength => 50 }, last => { type => 'text', size => 20, maxlength => 50 }, ); }
See the documentation for add_fields() for a full description of the arguments it accepts.
This method must be overridden by subclasses. It is responsible for combining the values of the sub-fields into a single value. Example:
sub coalesce_value { my($self) = shift; return join(' ', map { defined($_) ? $_ : '' } map { $self->field($_)->internal_value } qw(first middle last)); }
The value returned must be suitable as an input value. See the Rose::HTML::Form::Field documentation for more information on input values.
This method must be overridden by subclasses. It is responsible for distributing the input value VALUE amongst the various sub-fields. This is harder than you might expect, given the possibility of invalid input. Nevertheless, subclasses must try to divvy up even garbage values such that they eventually produce output values that are equivalent to the original input value when fed back through the system.
The method should return a reference to a hash of sub-field-name/value pairs.
In the example below, the method's job is to decompose a full name into first, middle, and last names. It is not very heroic in its efforts to parse the name, but it at least tries to ensure that every significant piece of the value ends up back in one of the sub-fields.
sub decompose_value { my($self, $value) = @_; return undef unless(defined $value); # First, middle, and last names all present if($value =~ /^(\S+)\s+(\S+)\s+(\S+)$/) { return { first => $1, middle => $2, last => $3, }; } my @parts = split(/\s+/, $value); # First and last? if(@parts == 2) { return { first => $parts[0], middle => undef, last => $parts[1], }; } # Oh well, at least try to make sure all the non-whitespace # characters get fed back into the field return { first => $parts[0], middle => $parts[1], last => join(' ', @parts[2 .. $#parts]), }; }
This method calls the
disabled() method on all fields that possess such a method, passing all arguments. Set to true to disable all eligible sub-fields, false to enable them.
Get or set the field specified by NAME. If only a NAME argument is passed, then the field stored under the name NAME is returned. If no field exists under that name exists, then undef is returned.
If both NAME and VALUE arguments are passed, then the field VALUE is stored under the name NAME. If VALUE is not an object derived from Rose::HTML::Form::Field, a fatal error occurs.
Returns the full list of field objects, sorted by field name, in list context, or a reference to a list of the same in scalar context.
Returns the internal_value of the sub-field named NAME. In other words, this:
$val = $field->field_value('zip_code');
is just a shorter way to write this:
$val = $field->field('zip_code')->internal_value;
Returns the HTML serialization of the field. The default implementation calls html_field on each of the fields and then concatenates and the results. Override this method in your compound field subclass to lay out your sub-fields as desired.
Invalidates the field's value, and the value of all of its parent fields, and so on. This will cause the field's values to be recreated the next time they are retrieved.
Returns true if all of the sub-fields are empty, false otherwise.
Returns false if any of the sub-fields are empty, true otherwise. Subclasses can override this method to indicate that a valid value does not require all sub-fields to be non-empty.
For example, consider a compound time field with sub-fields for hours, minutes, seconds, and AM/PM. It may only require the hour and AM/PM sub-fields to be filled in. It could then assume values of zero for all of the empty sub-fields.
Note that this behavior is different than making "00" the default values of the minutes and seconds sub-fields. Default values are shown in the HTML serializations of fields, so the minutes and seconds fields would be pre-filled with "00" (unless the field is cleared--see Rose::HTML::Form::Field's reset and clear methods for more information).
If a subclass does override the
is_full method in order to allow one or more empty sub-fields while still considering the field "full," the subclass must also be sure that its coalesce_value method accounts for and handles the possibility of empty fields.
See the Rose::HTML::Form::Field::Time::Split::HourMinuteSecond source code for an actual implementation of the behavior described above. In particular, look at the implementation of the
is_full and
coalesce_value methods.
Get or set the input value of the sub-field named NAME. If there is no sub-field by that name, a fatal error will occur.
This method has the same effect as fetching the sub-field using the field method and then calling input_value directly on it, but with one important exception. Setting a sub-field input value using the subfield_input_value method will not invalidate the value of the parent field.
This method is therefore essential for implementing compound fields that need to set their sub-field values directly. Without it, any attempt to do so would cause the compound field to invalidate itself.
See the source code for Rose::HTML::Form::Field::DateTime::Range's inflate_value method for a real-world usage example of the subfield_input_value method.
Returns the XHTML serialization of the field. The default implementation calls xhtml_field on each of the fields and then concatenates and the results. Override this method in your compound field subclass to lay out your sub-fields as desired.. | http://search.cpan.org/~jsiracusa/Rose-HTML-Objects-0.617/lib/Rose/HTML/Form/Field/Compound.pm | CC-MAIN-2014-35 | refinedweb | 2,373 | 53.41 |
11 minute read
Notice a tyop typo? Please submit an issue or open a PR.
To recap, here is an illustration diagramming the structure and content of a pandas DataFrame.
The DataFrame has columns, one for each symbol, and rows, one for each date. Each cell has a value; in this case, it is the closing price of the security represented by the symbol on the corresponding date.
To build out a DataFrame like the one pictured above, we need to consider a few things first.
For example, you might remember that our CSV file from a previous lecture contains data from 2000 through 2015. Since the DataFrame we want to build contains data from only 2010 to 2012, we need to figure out how to read in data from specific date ranges.
This new DataFrame contains information about multiple equities, whereas previous DataFrames only covered a single equity. We need to learn how to read in data for multiple stocks.
Additionally, we need a mechanism by which we can align dates. For example, if SPY traded on some days, and IBM traded on other days, we need to be able to create a DataFrame that aligns the closing prices correctly.
Finally, we need to undo the problem we discovered in the CSV from the last lecture; specifically, the dates were presented in reverse order. We need to build our DataFrame such that the chronology is as we expect: from past to present.
See breakdown here (for 2018 and 2019).
We can start constructing our DataFrame by first building an empty DataFrame
df, which we index by the dates that we want to consider.
Since our goal is to load
df with a column for SPY, IBM, GOOG, and GLD, we begin by reading in SPY data into a DataFrame
dfSPY.
When we compare
df1 and
dfSPY, we notice two interesting things.
First, there are many more dates in
dfSPY than there are in the target
df.
dfSPY contains all of the data for SPY, and we need a way to retrieve data from only the dates we are considering.
Second, there are dates present in
df that are not present in
dfSPY. When we constructed our index, we didn't skip weekends or holidays. Obviously, SPY did not trade on those dates since the market was not open. We will need to deal with this alignment issue.
We now need to combine
df and
dfSPY into a single DataFrame. Thankfully, pandas has several different strategies for doing just that. We are going to look at an operation called a join.
There are a few different types of joins, and the names may sound familiar if you have ever taken a databases course. The type of join that we are interested in retains only those rows that have dates present in both
df and
dfSPY. Formally, this is known as an inner join.
Not only does the inner join eliminate the weekends and holidays originally present in
df, but it also drops the dates in
dfSPY that fall outside of the desired date range.
After we have joined
df and
dfSPY, we can repeat the procedure for the other equities - IBM, GOOG, and GLD - by performing additional joins.
Before we start joining in our stock data, we first need to create our empty DataFrame using pandas.
We can create a date range
dates that will serve as the index of our DataFrame with the following code:
start_date = '2010-01-22' end_date = '2010-01-26' dates = pd.date_range(start_date, end_date)
If we print
dates, we see the following.
Note that
dates is not a list of strings, but rather a series of
DatetimeIndex objects.
We can retrieve the first object from
dates by calling
dates[0], which looks like the following if we print it.
The "00:00:00" represents the default timestamp, midnight, for a
DateTimeIndex object. The index for our DataFrame only concerns dates, so we can effectively ignore this timestamp.
We can now create a DataFrame
df that uses
dates as its index with the following code:
df = pd.DataFrame(index=dates)
If we print
df, we see the following.
We can see that
df is an empty DataFrame, with no columns, that uses
dates as its index.
With
df in place, we can now read SPY data into a temporary DataFrame,
dfSPY, with the following code:
dfSPY = pd.read_csv('data/SPY.csv')
We can attempt to join the two DataFrames with the following code:
df = df.join(dfSPY)
The default join is a left join, which means the following. First, all of the rows in
df are retained. Second, all of the rows in
dfSPY whose index values are not present in the index of
df are dropped. Third, all of the rows in
df whose index values are not present in the index of
dfSPY are filled with
NaN values.
If we print the resulting DataFrame, relabelled as
df, we see the following.
None of the values from
dfSPY appear in our new
df. Let's print
dfSPY to debug.
The issue here is that while
df has an index of
DatetimeIndex objects,
dfSPY has a simple, numeric index.
We can rectify this by telling pandas that the
Date column of the SPY CSV should be used as the index column and that the values in this column should be interpreted as dates. We accomplish this with the following code:
dfSPY = pd.read_csv('data/SPY.csv', index_col="Date", parse_dates=True)
If we print
dfSPY now, we see the following, correct DataFrame.
Since we only care about the adjusted close and the date columns, we can construct
dfSPY to only include those columns using the
usecols parameter.
Additionally, we can replace textual values representing null or absent values with proper
NaNs using the
na_values parameter. In the SPY CSV,
NaN is represented by the string "nan".
The full initialization of
dfSPY is demonstrated by the following code:
dfSPY = pd.read_csv('data/SPY.csv', index_col="Date", parse_dates=True, usecols=["Date", "Adj Close"], na_values=['nan'])
If we again print
df, the result of the join, we see the following DataFrame.
Finally, we can drop weekends and holidays - where adjusted close is
NaN - using the following code:
df = df.dropna()
If we print out
df, we see the following, correct DataFrame.
We can avoid having to explicitly call
dropna on line 22 by passing a certain value for the 'how' parameter of the
join call on line 19.
What is the value of that parameter?
We want to read in data about three more stocks - GOOG, IBM, and GLD - and create our complete DataFrame. We can iterate over the stock symbols, create temporary DataFrames, and join them to
df with the following code:
for symbol in symbols: df_temp = pd.read_csv('data/{}.csv'.format(symbol), index_col="Date", parse_dates=True, usecols=["Date", "Adj Close"], na_values=['nan']) df = df.join(df_temp)
However, when we try to print
df, we see an error.
The issue here is that we have multiple DataFrames that each has a column name "Adj Close". Pandas complains that it does not know how to resolve the overlap when joining DataFrames with identical column names. In other words, column names must be unique.
Instead of having four columns with the same name, we'd like to name each column after the symbol whose data it contains. We can accomplish that with the following code:
df_temp = df_temp.rename(columns={'Adj Close': symbol})
If we print
df now, we see the following, correct output."])
Here is our complete DataFrame, containing data from 2010 to 2012 for SPY, IBM, GOOG, and GLD.
Suppose we want to focus on a subset, or a slice or this data; for instance, we might be interested in just values of GOOG and GLD for 2/13/10 through 2/15/10.
Pandas exposes a syntax for creating such a slice. Given a DataFrame
df, we can retrieve the data for all columns between these two dates with the following expression:
df['2010-2-13':'2010-2-15']
We can further scope this slice to include information about only GOOG and GLD with the following expression:
df['2010-2-13':'2010-2-15', ['GOOG', 'GLD']]
Note that neither the rows nor the columns that we slice must be contiguous in
df. The following slice of nonadjacent columns IBM and GLD is just as valid as our original slice:
df['2010-2-13':'2010-2-15', ['IBM', 'GLD']]
Our original DataFrame only contained data for four days. We can read in data for the whole year of 2010 by changing the dates of the
DatetimeIndex object we built at the beginning of this lesson:
start_date = '2010-01-01' end_date = '2010-12-31' dates = pd.date_range(start_date, end_date)
If we rebuild
df with this new index and print it out, we see the following.
Pandas offers several different options for slicing DataFrames.
Row slicing gives us the requested rows and all of the columns. This type of slicing might be useful when you want to compare the movement of all the stocks over a subset of time.
If we want to retrieve data for all of the symbols during January, for example, we can use the following code:
df.ix['2010-01-01':'2010-01-31']
Note that this code is equivalent to the following:
df['2010-01-01':'2010-01-31']
However, the former is considered to be more pythonic.
aside: it's not. The
ixmethod has since been deprecated, and users should use either the
locor
ilocmethods.
Column slicing returns all of the rows for a given column or columns, and is helpful when we want to view just the movement of a subset of stocks over the entire date range.
The following code shows the syntax for retrieving information for just GOOG, as well as for both IBM and GLD.
df['GOOG'] df[['IBM', 'GLD']] # Note the second pair of []
Finally, we can slice through both rows and columns. For example, we can select information about just SPY and IBM, for just the dates between March 10th and March 15th:
df.ix['2010-03-10':'2010-03-15', ['SPY', 'IBM']]
It's pretty easy to generate a good-looking plot from a DataFrame; in fact, we merely have to call the
plot method.
One feature that we can immediately notice about this plot is that GOOG is priced quite higher than the other stocks. It's often the case that different stocks are priced at significantly different levels.
As a result, it can be hard to compare these stocks objectively. We'd like to adjust the absolute prices so that they all start from a single point - such as 1.0 - and move from there.
If we can normalize the stock prices in this manner, we can more easily compare them on an "apples to apples" basis.
While both of these are technically correct, the second approach leverages vectorization which is must faster than the iterative approach. Read more about vectorization here.
To plot our data, we first need to import
matplotlib. We can then define a
plot_data function that receives a DataFrame
df and calls
plot on it.
import matplotlib.pyplot as plt def plot_data(df): df.plot() plt.show()
If we run this function, we see the following graph.
We need to add more information to our graph, such as a title as well as x- and y-axis labels. Additionally, we can change the font-size of the text to improve readability. We can adjust the title and the font-size with the following code:
def plot_data(df, title="Stock Prices"): df.plot(title=title, fontsize=2) plt.show()
To generate axis labels, we need to call the
set_xlabel and
set_ylabel methods on the object that the
plot method returns.
def plot_data(df, title="Stock Prices"): ax = df.plot(title=title) # ax for axis ax.set_xlabel("Date") ax.set_ylabel("Price") plt.show()
If we run this function now, we see a more informative graph.]
Let's take another look at the plot we have created.
Here we see the absolute price movement of each stock. What we are more interested in, however, is the price movement of each stock relative to its starting price.
To create a plot that shows this relative movement, we first need to normalize the prices of each stock. We do this by dividing the values of each column by their price on day one. This ensures that each stock starts at 1.0.
We can create a
normalize_data method that normalizes the data in a DataFrame
df:
def normalize_data(df): return df / df.ix[0, :]
If we graph our normalized DataFrame, we see the following graph. Notice that all the stocks start at 1.0 and move from there.
OMSCS Notes is made with in NYC by Matt Schlenker. | https://www.omscs.io/machine-learning-trading/working-with-multiple-stocks/ | CC-MAIN-2022-33 | refinedweb | 2,140 | 71.95 |
Java expert needed code fixingpekerjaan
A client of mine is looking to add few checks/swaps in his java code. You are required to look into a already existing solution written in C and do similar in Java. If you understand the code below, then please place your bid. import java.security.*; import java.util.*; public class HelloWorld{ public static void main(String []args){
...for use in ebay. We would design the ebay listing, and then once client approves design, we would pass on PSD design to you with instructions and you will be required to code it. Here is some examples of our ebay listing designs: [log masuk untuk melihat URL] [log masuk untuk melihat URL] [log masuk untuk melihat URL] You
Sila Dafter atau Log masuk untuk melihat butiran. | https://www.my.freelancer.com/job-search/java-expert-needed-code-fixing/ | CC-MAIN-2018-47 | refinedweb | 131 | 61.56 |
The following section will show how to control the LED backpack from the board's Python prompt / REPL. You'll walk through how to control the LED display and learn how to use the CircuitPython module built for the display.
First connect to the board's serial REPL so you are at the CircuitPython >>> prompt.
First you'll need to initialize the I2C bus for your board. It's really easy: first import the necessary modules. In this case, we'll use board and BigSeg7x4.

Then just use board.I2C() to create the I2C instance using the default SCL and SDA pins (which will be marked on the board's pins if using a Feather or similar Adafruit board). Then to initialize the display, you just pass i2c in.
import board
from adafruit_ht16k33.segments import BigSeg7x4

i2c = board.I2C()
display = BigSeg7x4(i2c)
If you bridged the address pads on the back of the display, you could pass in the address. The addresses for the HT16K33 can range between 0x70 and 0x77 depending on which pads you have bridged, with 0x70 being used if you haven't bridged any of them. For instance, if you bridge only the A0 pad, you would use 0x71 like this:

display = BigSeg7x4(i2c, address=0x71)
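Each bridged pad adds a power of two to the base address 0x70, which is where the 0x70-0x77 range comes from. As a sketch, you could compute the address for any jumper combination with a small helper (the function name is our own, not part of the library):

```python
def ht16k33_address(a0=False, a1=False, a2=False):
    """Base address 0x70, plus 1/2/4 for bridged pads A0/A1/A2."""
    return 0x70 | (a0 << 0) | (a1 << 1) | (a2 << 2)

print(hex(ht16k33_address(a0=True)))  # 0x71

# display = BigSeg7x4(i2c, address=ht16k33_address(a0=True))
```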
You can set the brightness of the display, but changing it will set the brightness of the entire display and not individual segments. It can be adjusted in 1/16 increments between 0 and 1.0, with 1.0 being the brightest. So to set the display to half brightness, you would use the following:

display.brightness = 0.5
You can set the blink rate of the display, but changing it will set the blink rate of the entire display and not individual segments. It can be adjusted in whole-number increments between 0 and 3, with 3 being the fastest blinking. So to set the display to blink at full speed, you would use the following:

display.blink_rate = 3
To print text to the display, you just use the print function. For the 7-segment display, valid characters are 0-9, letters A-F, and a hyphen. So if we want to print ABCD, we would use the following:

display.print("ABCD")
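Before printing arbitrary strings, it can be handy to check them against the character set just listed (plus the blank space this guide uses later to clear a digit). This is a hypothetical helper of our own, not part of the library:

```python
# Characters the guide lists as displayable on the 7-segment
# display: digits, letters A-F, a hyphen, and a blank space.
DISPLAYABLE = set("0123456789ABCDEF- ")

def displayable(text):
    """Return True if every character can be shown on the display."""
    return all(ch.upper() in DISPLAYABLE for ch in text)

print(displayable("AB-1"))  # True
print(displayable("XYZ"))   # False
```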
Printing numbers is done similar to printing text, except without the quotes, though you can still print numbers in a string as well.
To set individual characters, you simply treat the
display object as a list and set it to the value that you would like.
display[0] = '1' display[1] = '2' display[2] = 'A' display[3] = 'B'
display[0] = '1' display[1] = '2' display[2] = 'A' display[3] = 'B'
To set individual segments to turn on or off, you would use the set_digit_raw function to pass the digit that you want to change and the bitmask. This can be really useful for creating your own characters. The bitmask corresponds to the following diagram. The highest bit is not used, so an X represents that spot to indicate that.
The bitmask is a single 8-bit number that can be passed in as a single Hexidecimal, Decimal, or binary number. This will use a couple different methods to display
8E8E:
display.set_digit_raw(0, 0xFF) display.set_digit_raw(1, 0b11111111) display.set_digit_raw(2, 0x79) display.set_digit_raw(3, 0b01111001)
display.set_digit_raw(0, 0xFF) display.set_digit_raw(1, 0b11111111) display.set_digit_raw(2, 0x79) display.set_digit_raw(3, 0b01111001)
To fill the entire display, just use the fill() function and pass in either 0 or 1 depending on whether you want all segments off or on. For instance, if you wanted to set everything to on, you would use:
If you want to scroll the displayed data to the left, you can use the
scroll() function. You can pass in the number of places that you want to scroll. The right-most digit will remain unchanged and you will need to set that manually. After scrolling, you will need to call the show function. For example if you wanted to print an A and then scroll it over to spaces, you would do the following.
display.print("A") display.scroll(2) display[3] = " " display.show()
display.print("A") display.scroll(2) display[3] = " " display.show()
There are a couple of different ways to display a colon on the 7-segment display. The first and easiest way is to use the print function:
The other way to control it is to access the colon with the colon property and set it to
True or
False:
There are a couple of different ways to set the left-side dots on the large 7-segment display. The first way is to use the colon property like above:
If you would like to set the dots individually, you can do that using the
top_left_dot and
bottom_left_dot properties and set them to
True or
False:
display.top_left_dot = True display.bottom_left_dot = True
display.top_left_dot = True display.bottom_left_dot = True
If you would like to set the upper-right dot, you can do this using the
ampm property:
To make displaying long text easier, we've added a marquee function. You just pass it the full string. Optionally, you can pass it the amount of delay between each character. This may be useful for displaying a phone number, words using only letters A-F, or other numeric data:
By default it is 0.25 seconds, but you can change this by providing a second parameter. You can optionally pass
False for a third parameter if you would not like to have it loop. So if you wanted each character to display for half a second and didn't want it to loop, you would use the following: | https://learn.adafruit.com/adafruit-led-backpack/circuitpython-and-python-usage-197dcbfa-4ccf-4b98-a152-3982411df681 | CC-MAIN-2020-45 | refinedweb | 941 | 73.47 |
).
public fields with get/set operations :
private
get
set. keyword is generating a strongly typed variable reference.
Let's use the same Point class defined earlier, and suppose we want to define an instance of this class. We will have to create the object and start setting its properties, the code would look like this:
Point calls to the Add() method to add elements to the collection one at a time.
Add()
This language feature enable us to define inline types without having to explicitly define a class declaration for this type. In other words, imagine we want to use a Point object you will get a list of properties that this anonymous type has.
p.
from...where...select namespace
This article, along with any associated source code and files, is licensed under The Code Project Open License (CPOL)
SELECT OrderID, CustomerID, EmployeeID, OrderDate, RequiredDate, ShippedDate, ShipVia, Freight, ShipName, ShipAddress, ShipCity, ShipRegion,
ShipPostalCode, ShipCountry
FROM Orders
WHERE (OrderDate =
(SELECT MAX(OrderDate) AS Expr1
FROM Orders AS Orders_1
WHERE (CustomerID = Orders.CustomerID)))
ORDER BY CustomerID, OrderDate
CIDev wrote:My only complaint is that the title of your article is “Understanding LINQ” and you don’t go into a lot of depth on that subject
CIDev wrote:VS 2008 (was codenamed Orcas)
Amro Khasawneh wrote:CIDev wrote:
My only complaint is that the title of your article is “Understanding LINQ” and you don’t go into a lot of depth on that subject
You're probably right as I didn't go into details about LINQ but I tried to explain the new language features in C# 3.0 that makes it possible, hence get a better understating of LINQ (i believe the subtitle is more clarifying). Keep in mind that I marked this article as "Beginner"..
Also there are a number of other articles here on CP that you could use.
Amro Khasawneh wrote:CIDev wrote:
VS 2008 (was codenamed Orcas)
At the time i wrote the article it was VS Orcas Beta 1, however the basics remain the same.
General News Suggestion Question Bug Answer Joke Rant Admin
Use Ctrl+Left/Right to switch messages, Ctrl+Up/Down to switch threads, Ctrl+Shift+Left/Right to switch pages. | http://www.codeproject.com/Articles/19154/Understanding-LINQ-C?msg=2394593 | CC-MAIN-2014-52 | refinedweb | 368 | 52.94 |
Run arbitrary Python functions indicated by JSON specs
Project description
SPyTEx
(this is still in early development, features will be added mostly according to internal needs, see planned features section at the end)
SPyTEx (Simple Python Task Executor) is a small library and CLI utility allowing to run arbitrary tasks defined in your Python code as ordinary functions and configured through simple but flexible JSON-based specifications.
Warning
Arbitrary Python code can be run through SPyTEx, even malicious! Never run SPyTEx on untrusted input files!
Motivation
SPyTEx has been created for use in dynamic codebases which do not have one or few well-defined and stable entry points, but contain a large amount of interrelated functions and classes which can represent either single tasks by themselves or be used as parts of larger tasks.
The goal of SPyTEx is to provide a single entry point from which users can run all these tasks, without the need to create a command line interface (CLI) for each of them. Once SPyTEx is deployed alongside with your codebase, e.g. in a Python package distribution or in a Docker container, you can use its CLI to call arbitrary functions with arbitrary arguments, even if they were not planned to be used as entry points. Function and arguments are specified in a JSON file, with a specific syntax allowing to create complex Python objects as arguments.
While simple Python scripts can be used to launch arbitrary functions inside a codebase, SPyTEx allows to define tasks in form of JSON files which are a more standard format and provide some short notations for functionality such as unpickling objects from local or remote files.
Installation
pip install spytex
Task specification
A task is a call to a function (or any callable object). SPyTEx represents
calls in JSON as an object with a
! entry specifying the full dotted name
(i.e.
package.subpackage.module.name) of the function to be invoked.
{"!": "acme.learn.train_model"}
This would be equivalent to launch a Python script like
from acme.learn import train_model train_model()
To pass keyword arguments, just add them as entries to the same object.
{ "!": "acme.learn.train_model", "data": "trainset.csv", "model": "svm" } # equivalent to: from acme.learn import train_model train_model(data='trainset.csv', model='svm')
To pass positional arguments, pass a list with
* as key.
{ "!": "acme.learn.train_model", "*": ["data1.csv", "data2.csv"], "model": "svm" } # equivalent to: from acme.learn import train_model train_model('data1.csv', 'data2.csv', model='svm')
If you have exactly one positional argument and no keyword arguments, you can use a shorter equivalent syntax (unless there is a clash with a magic function name, see below).
{"!acme.learn.train_model": "trainset.csv"} # equivalent to: from acme.learn import train_model train_model('trainset.csv')
In order to pass more complex objects as arguments, a nested invocation can be specified in place of a single value: such invocation can be a class instantiation. In the example below we instantiate a scikit-learn classifier.
{ "!": "acme.learn.train_model", "data": "trainset.csv", "model": { "!": "sklearn.svm.SVC", "C": 0.1, "kernel": "poly", "degree": 3 } } # equivalent to: from acme.learn import train_model from sklearn.svm import SVC train_model(data='trainset.csv', model=SVC(C=0.1, kernel='poly', degree=3))
To get a named object without calling it (e.g. a constant or a function to be
passed to an higher-order one), use
{".": "dotted.name"}.
{ "!": "acme.learn.train_model", "data": "trainset.csv", "model_class": {".": "sklearn.svm.SVC"} } # equivalent to: from acme.learn import train_model from sklearn.svm import SVC train_model(data='trainset.csv', model_class=SVC)
Some convenient "magic" calls in the form
{"!name": "argument"} are provided
for common operations. Currently supported magic functions are:
!run: invokes the task in the specified file and returns its result
!env: returns the value of the specified environment variable (
Noneif undefined)
!unpickle: returns an object deserialized from given file using
pickle.load(do not unpickle untrusted files!)
Example usage for
!unpickle:
{ "!": "acme.learn.validate_model", "data": "testset.csv", "model": {"!unpickle": "model.bin"} } # equivalent to: import pickle from acme.learn import validate_model with open('model.bin', 'rb') as f: model = pickle.load(f) validate_model(data='testset.csv', model=model)
Running a task
Once you have a
task_file.json following the syntax above, just run
spytex task_file.json
If the function returns a non-
None object, it will be
-q/
--quiet flag. Use the
-p file.bin/
--pickle file.bin option to
pickle.dump the returned object to a given file.
Use
spytex -h/
spytex --help to get the list of all options.
Remote files
SPyTEx uses smart-open to open file names specified both in the JSON files and in the CLI: this allows to fetch and write files from HTTP[S] (read only), Amazon S3 and other non-local sources. Refer to the smart-open documentation for more information.
Internals
The
spytex command performs the following key steps:
- the indicated source file is parsed using Python standard
jsonmodule into an object graph made of standard Python objects (dicts, lists, ...);
- such graph is compiled into a graph of
Definitionobjects, which formally represent the operators used in SPyTEx JSON (function calls, raw values, ...)
- the
Definitions in the graph are recursively resolved: each of them is turned into the object it represents (function calls are executed, raw values are unwrapped, ...)
Planned features
(in rough priority order)
- additional operators in JSON, e.g. to pass a date in "YYYY-MM-DD" format
- command-line parameters (referenceable from JSON file) and more options (e.g. logging configuration)
- support for different syntaxes (e.g. using keywords in place of symbols) and/or for JSON alternatives (e.g. TOML)
- proper documentation
Project details
Release history Release notifications | RSS feed
Download files
Download the file for your platform. If you're not sure which to choose, learn more about installing packages. | https://pypi.org/project/spytex/ | CC-MAIN-2021-10 | refinedweb | 962 | 50.12 |
This is a discussion on Is it not learning C++ shooting myself in the foot? within the General Discussions forums, part of the Community Boards category; Originally Posted by Malcolm McLean the vast majority of programs can be written in C as easily as C++. I ...
Code:namespace life { const bool change = true; }
Another C versus C++ thread. Great, just great.
I'm out.
Nominal, my response to that post can, honestly, only : O_o
Lol wut?
@MutantJohn: About once a year or so, a "C vs C++" debate breaks out on these boards. Some people argue rationally, others passionately. Emotions can run high during these debates.
Those active in the discussion typically fall into one of three categories:
1. C is better than C++ because [insert arguments here]
2. C++ is better than C because [insert arguments here]
3. Use the right tool for the right job
Lol omg.
If I had to comment on one of the things I really like in C++ is the use of namespaces. It's pretty nice because I like how it organizes variables.
Aside from that, that's pretty funny lol.
If you're trying to get into the world of numerical computation, I suggest learning Fortran 77 and 90/95 as well.
I've known a lot of physicists who write code. Most of them don't care about programming. Most of those who do severely over-estimate their ability as programmers. You might therefore need to be prepared for scepticism or chortling at your claim about being as apt a programmer, or more so, than a CS person (or someone else with a relevant background). Then again, many CS graduates and many programmers over-estimate their abilities as programmers. Academic snobbery remains alive and well.
You're right, grumpy. My peers are awful.
Um... I don't know how much you know about meshing or the techniques but what if I told you that I could create a successful Delaunay tree which is used as the base of generating a Voronoi tessellation of space? I'm not trying to use jargon, I'm trying to eloquently and curtly describe my experience in one sentence. If you are unfamiliar with the terminology then you are just like every recruiter I've spoken too, even about meshing jobs. Which is sad. I spoke to one recruiter who literally didn't know the difference between C and C++ and it was also for a meshing job.
What if I showed you this code and you saw that it worked, my tetrahedrons don't collide and the repairs happen as is needed without destroying the integrity of the search history.
Is that demonstrated enough? I'm not trying to be hostile in how I'm asking.
Making this mesh is really hard and it's tougher doing it alone and I only have an idea on how to handle repairs in the tree but I'm confident it'll work. If companies wouldn't take this as valid experience then I would be at a loss. So I'm curious, if a 23 year old BS in physics came up to you, told you he made a Voronoi tessellation code in C++, would you be intrigued? Would a company see that and raise an eyebrow?
I've googled this enough to know that what I'm doing is technically graduate level CS. I really wish I had a professor to help me with this but I don't. The good news is, I'm really good at geometry now.
But you're right, I can sense companies don't believe me. I feel like they read my cover letters and go, "No, you just read the wiki on it, you little troll" when in all reality I'm crying because this is so hard. But I love it even though I'm terrified of it. But then I think, I wouldn't be paid to solve the easy problems and even though I'm scared I can't back down. I'm starting to grow a backbone and repairing this tree is the biggest challenge that I have literally ever faced in my career as an unemployed at home programmer. But I think I got it. I think I got this.
And you're right, I am using the Interactive Data Language. Good guess.
Employers are not interested in what you can do in general, either. They are only interested in whether you can do work they need done, or that generates profits; and whether you can do it more efficiently (faster, cheaper) than the alternatives.
In many cases the interviewers don't even know what the work is that needs to be done, which means that "celebrities" and "people persons" are much more employable than experts. The prevalence of short term planning, fragmentation of careers, and the poor quality of management practices in general, has basically lead to current employment becoming a game.
It is a game where your technical skills are only a minor asset.
Make no mistake: I value knowledge and learning much more than I value any amount of money or financial security; it is a decision I've personally made a decade ago. I'm not trying to squash your dreams, I'm trying to give you realistic advice.
Since I do believe I am one of those Grumpy was thinking about when he said some people over-estimate their abilities, I'll just say I'm very confident in mine. I do base my beliefs on things I've done and problems I've solved, though.
Yet, in my own case, I consider myself unemployable, at least in the traditional sense. I just don't fit in the short-term economy well.
I do have two options left: either finding a computational physics research topic I can get funding for, or finding a large company that requires a multiple-domain problem-solving specialist that fits my skill set better than anyone else. If neither of those pans out in a year or two, I guess I'll have to adapt.
Do not despair. Plan your moves, and adapt when you have to. I'm sure you'll do fine.
So wait, why is the job market so dumb?
I will call it dumb because with the way you described it, it can't possibly be smart.
And if someone like you, Nominal, is unemployable then I am even more unemployable...
That's... disheartening.
Should I even finish this project then? I love it and I'd love to finish it but if it won't get me a job, then it's pointless. I only started coding it because I thought if I demonstrated my skills with a difficult project that has several applications in various areas of computer science, I would get a job. That an employer would see my resume and ask my portfolio and see that I'm able to handle complex geometries and code, difficult algorithms and structures.
But a company won't even consider how much I would grow in a year?
Why are they like that? The short-term economy seems like an awful idea and we're too smart to be dismissed! And by 'we', I mean the physicists who can actually program.
And even if I was a BS in CS, I'd be used like a tool and then tossed away at the drop of a hat. That's not security if I'm only supposed to come in, do my work and then be replaced like I'm some sort of machine.
So the people who get a BS in CS just basically get used and the people who get a BS in physics work at a grocery store.
As a 23 year old, that's really sad. Wow.
Everything I ever learned, studied or endeavoured myself to do is fundamentally meaningless. I'll never use my math, physics or programming skills...
*** this, I'm gonna be a butcher.
Find people who have hired people to jobs you'd like to try, and find out what they looked for. (Usually that means ignoring what they say they did, because most people are not that rational; better look at who got picked and why.)
Find out how to make a company want to hire you. Show why it would be smart business for them to hire you. If it has to be a gamble (say if you don't have much practical experience yet), show that what you bring to the organization greatly outweigh any possible risks picking you might have.
Be nimble. You're young, and you can find out what the world is like in different places. Remember that organizations are not human: they are not alive, and they have no feelings. Loyalty only exists in agreements and between living beings. Don't stay somewhere just because you're already there, and you feel you can make a difference. Make a plan, and execute it.
You need to show your value, in a way that even the recruiters can see.
I recommend against exaggeration (and definitely against lying), but planning ahead and exploiting the interviewers weaknesses is perfectly OK in my book. Treat it as a game, if it makes it easier.
Of course, if money or financial security is what you want, you have to go into finance and/or commerce. You don't get rich (or financially secure) by working; you get rich by owning and controlling stuff. That's just how this society works.
If you wish to apply your math, physics, and programming skills, and get a living out of it, you just have to find someone who can make money out of your skills, then convince you are a better bet than anyone else.
To be honest, this does seem quite ridiculous, a self-confessed "unemployable" person giving advice to someone just at the start of their career.
Perhaps I should have stayed out of this thread, and let those who have successfully executed a plan similar to yours advise you.
"Employers are not interested in what you can do in general, either. They are only interested in whether you can do work they need done, or that generates profits; and whether you can do it more efficiently (faster, cheaper) than the alternatives."
Pretty much sums it all up. Based on what has been said, it's obvious the OP is trying to get past getting a masters or a Ph.D for a job that clearly demands it; good luck with that.
Lol thank you, Nominal. That was a very helpful post and you have indeed talked me off of a ledge.
I've decided, I'm not going to get a programming job... yet. I will not abandon this project. I will, however, shift my priorities to just getting a job because I want out of my parents' house. I figure, if I keep coding on the side, I'll eventually break through and you're also right, I need to find someone who can recognize the value in a physicist/programmer. The thing is, it might just take longer than I'd like which is fine because I'm still young enough, I have this time to burn and search and grow.
Plus, I just really, really wanna live with my girlfriend again.
So, I am not giving up! I will make this Voronoi tessellation, even if it kills me. And I don't think I can afford butcher's school. So it looks like data entry for me which is cool. I can code on the side and no one'll be the wiser.
Also, Epy, it's on, son! It's on like Donkey Kong! It's on like... It's on like a HoN vs. DotA 2 debate.
All silliness aside, I'm actually quite far on the project. I've already established a tetrahedral triangulation routine that works and switching from using a long double to a double data type actually reduced the runtime of my triangulation by like a factor of 4 (from 3.5 seconds to like .86 in making the same 1514 tetrahedrons).
Also, the way to use a Delaunay tree is, have multiple parents to the same set of children! You use the tree for a search history but even though old tetrahedrons are invalid, the volume they span, however, is not! It's still valid! Volume is conserved for every insertion and repair so what you do is, have multiple leaves point to the same node which then points to the new children.
The way to make sure the search is still clean is to use an "occupied" field per leaf. For example, let's say we perform a 2-to-3 flip. You may be wondering, 2 tetrahedrons to 3? That can't conserve volume! Oh wait, but they do and it's all kinds of sexy.
Let's say we have two tetrahedrons, abcx and abcy. The line connecting xy intersects the triangle abc in its interior so we have met all the requirements for restoring Delaunayhood and I actually wrote code that does this but doesn't integrate back into the tree, it'll just produce 3 Delaunay tetrahedrons.
So basically, the new 3 tetrahedrons are abxy, acxy and bcxy. Uh, son, uh!
So how do we put this into the tree? Simple. Take the leaves abcx and abcy (every tetrahedron in the mesh is a leaf) and have them point to a parent node. This parent node then points to the leaves abxy, acxy and bcxy. So, when I search again, I can actually locate the repaired tetrahedrons just by using volume spanning. Simple enough.
But if you're really, really clever, you'll consider the search finding the same leaves twice as what if the point is located on the shared face between our original two tetrahedra, abcx and abcy? Simple, simple. Occupied leaves field. When traversing the tree, it'll find abcx and examine where the point would be in the 3 children but it keeps searching and finds it's also in abcy so it brings it back those same 3 children. Instead of inserting the same point twice, I do all my searching first, gather up the leaves to be fractured and then if the same leaf is found twice, I don't fracture it twice. I just discard the second find.
And that, ladies and gentleman, is how you code a Delaunay tree. Or at least, that should totally work.
Yeah, tell me I need a Master's to know? Pfft, I'll teach this to a Master's student and they'll be all like, "Whoa bro, you're, like, super smart, aren't you?" and I'd be all like, "No, I am only a man."
XD
For proof of volume conservation, I tested the two tetrahedrons with an online calculator (0, 0, 0), (7, 0, 0), (0, 7, 0) and (2, 2, +/- 2). The combined volumes were roughly 32.664.
My three children were simplywhose total volume spans roughly 32.668. This is due to rounding errors but hey, it largely conserves the volume enough for accurate searches because as at least I know, a real double extends more than just 3 decimal places.whose total volume spans roughly 32.668. This is due to rounding errors but hey, it largely conserves the volume enough for accurate searches because as at least I know, a real double extends more than just 3 decimal places.Code:{(0, 0, 0), (7, 0, 0), (2, 2, +/- 2)}, {(0, 0, 0), (0, 7, 0), (2, 2, +/- 2)}, and {(7, 0, 0), (0, 7, 0), (2, 2, +/- 2)}
So Epy, do you dare to challenge me? I am the King of the Beasts and I lost myself today. Today was a strong moment of weakness that threatened to overtake me but I know my path now and I will not suffer such insolence. Now boy, if you see any flaws with my algorithm, I gotta figure out a way to actually put this abstraction into practice.
Also, thank you again, Nominal, you're my hero. You're a very nice person and you really did help me through a pit today.
One thing you've got to realise is that your work is quite specialised - you are using one technique for a particular purpose. Unfortunately, unless you know you are talking to a specialist who knows about such things, you need to be able to discuss what your work achieves in concise laymans terms. Recruiters rarely have specialised knowledge - if you're lucky a recruiter might recognise a few relevant keywords or topics, from a brief conversation with someone in the area they are recruiting for.
If this was an interview situation, I would attempt to gauge how you went about solving the problem, possibly ask "what if" type questions on variations of the problem you addressed, and probably describe another (unrelated) problem and ask you to describe how you would go about addressing it. It is your willingness, ability, and approach to solving problems you haven't seen before - but are potentially relevant to a job you're being interviewed for - that will determine if I employ you.
The thing is, any employer will be interested in what value you offer them, how you will go about addressing problems relevant to the job they offer, and whether you will do so effectively under their employ. And you need to pitch your application accordingly. If you are seeking a specialised position doing similar work, then use the specialised language. If you are seeking work that will make use of your problem-solving ability rather than just your ability to solve a particular problem, then you need to describe your work to non-specialists. Including recruiters, if they are in the loop.
You don't need a master's to know, you need a master's for the job. Almost all jobs I've seen concerning numerics require a master's if not a Ph.D. It really doesn't matter what you can do, cause people are stuck on degrees.
I took a Ph.D level course on computational fluid dynamics during my junior year of my BS, think any employer cares? Nope. | http://cboard.cprogramming.com/general-discussions/158488-not-learning-cplusplus-shooting-myself-foot-2.html | CC-MAIN-2015-06 | refinedweb | 3,069 | 71.95 |
Code Map improvements in Visual Studio 2015 CTP6
February 23, 2015
Code Maps, previously known as Directed Graph Documents, are a great way to visualize the relationships and interdependencies between the components of your applications. They make it much easier to understand the architecture of your (or, even more useful, somebody else’s) application, and where you should start when you need to update the application. Code Maps are a core part of the features of Visual Studio that help you to visualize and get insights about code.
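Under the hood, a Code Map is persisted as a DGML (Directed Graph Markup Language) file, which is plain XML describing nodes and the links between them. As a minimal, illustrative sketch — the node IDs and labels below are made up, not taken from any real solution:

```xml
<!-- Minimal .dgml file: two assemblies and one dependency link. -->
<DirectedGraph xmlns="http://schemas.microsoft.com/vs/2009/dgml">
  <Nodes>
    <Node Id="App" Label="App.exe" />
    <Node Id="Core" Label="Core.dll" />
  </Nodes>
  <Links>
    <!-- App.exe depends on Core.dll -->
    <Link Source="App" Target="Core" />
  </Links>
</DirectedGraph>
```

Because the format is plain XML, you can also hand-edit or generate these files outside Visual Studio if you need to.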
Since Visual Studio 2015 Preview, and based primarily on your feedback, we have improved several features related to architectural and dependency analysis using Code Maps:
· Simplified Architecture menu
· Faster display and better responsiveness
· Progressively visible solution structure and dependencies
· Additional dependency link styles
· Assembly styling depending on the project type
· Less clutter with implicit .NET type dependencies hidden
· Filters for code elements as well as dependency links
Simplified Architecture menu
In Visual Studio 2013 Update 3, the Architecture menu is simpler than in previous releases—we added a Generate Dependency Graph submenu—but it is still complicated because it mixes UML designers, settings for UML code generation, and the Dependency Graph diagrams.
Then, in Visual Studio 2015 Preview, we added an option to create a new Code Map—making things even more confusing.
Now, in Visual Studio 2015 CTP6, you get a much cleaner Architecture menu:
– We unified the terminology so that it refers everywhere to Code Maps, rather than sometimes to Code Maps and sometimes to Dependency Graphs or Dependency Diagrams.
– We renamed the New Diagram command to New UML or Layer Diagram to make it clear what it does.
– We made the code generation settings and XML import commands available from the UML Explorer, where they really make sense.
– We added the tool windows that you can use as drag sources for Code Maps (Class View and Object Browser) to the Windows submenu.
Faster display and better responsiveness
In previous versions, it could take several minutes to generate Code Maps and Dependency Graphs. In Visual Studio 2015 CTP6, we’ve improved the performance of Code Maps so that dependency generation and background processing have much less impact on the responsiveness of the UI and Code Map.
For example, creating a Code Map of the Roslyn solution, with its 58 projects, takes only a few seconds. The map is responsive immediately, even though it is still being built—you can see that the dependencies between the assemblies are shown as simple gray lines until the map generation process is fully completed. Notice how Visual Studio is currently generating the map by searching for and inspecting assemblies in the background.
In addition, if you were using Code Maps to understand local dependencies in a bottom-up approach by creating the map from an element in the code editor or Solution Explorer—which requires the map to add related elements—a build was attempted every time, which could make the process very slow.
In Visual Studio 2015 Preview, we introduced the Skip Build toggle button in the Code Map toolbar to avoid automatic rebuilds.
However, the experience was still not completely right—until now. In Visual Studio 2015 CTP6, if you don’t want to see the details of the link dependencies and you are happy with the initial diagram, or you want to focus on a sub-diagram using the New Graph from Selection command, you can use the Cancel link located in the information bar of the Code Map.
Progressively visible solution structure and dependencies
In Visual Studio 2013, with very large solutions, the resulting Dependency Graphs can be virtually unusable because there are too many assemblies, connected to each other without any notion of grouping or layers. For example, a Dependency Graph of the 58 projects in the Roslyn solution is very large, and is missing any kind of grouping that would really make it usable. We heard from Visual Studio 2013 customers how difficult it is to get useful maps from large solutions such as this.
To be able to understand this graph, the first thing you had to do was to add groups manually and delete stuff you don’t want to see. This is possible, and we’ve worked with customers who found some value out in it, but it is extremely cumbersome.
In Visual Studio 2015 CTP6, you quickly get a map representing the structure of the solution, and the map is much more useful. Assuming you have opened the Roslyn solution in Visual Studio 2015 CTP6 and built the solution, you can choose Generate Code Map for Solution Without Building from the Architecture menu to get the following diagram almost immediately. This is because Code Map crawls through the solution in the background extracting information about the projects and the solution folders.
You can see that the projects are grouped into solution folders, which makes the diagram considerably more readable and accessible. For example, you can see the Compilers solution and, within this, the separate layers for CSharp, Visual Basic, some common Core stuff, and the Test project.
Another great feature with the progressive creation of Code Maps is that, if your solution does not build in its entirety, you will immediately know which projects built and which didn’t. The structured layout makes it easy to see which project must be built first, and to specfiy the overall build order. This helps you when planning how to fix a broken build, without being overwhelmed by compiler errors.
Additional dependency link styles
After the initial creation of the Code Map, it does the same as it did in previous versions of Visual Studio—except, now, it does all this in the background. This includes fetching the assemblies, indexing them if required, and analysing the links.
If you wait until the dependency links have been fully analysed (it takes a minute or so for this solution), you’ll notice that these links are colored, as was the case in Visual Studio 2013 Update 3 and later. The colors of the links relate to the type of dependency.
One interesting point you might notice is the gray dotted link between Roslyn.Services.VisualBasic.UnitTests.dll and Microsoft.CodeAnalysis.VisualBasic.Workspaces.dll. We’ve hovered over this link in the following enlarged section of the map to show the pop-up information tooltip.
This link tells us that the project generating Roslyn.Services.VisualBasic.UnitTests.dll references the project generating Microsoft.CodeAnalysis.VisualBasic.Workspaces.dll. But the link is not colored because the compiler does not really need this reference. So, unless you want to force a specific build order, it’s safe to remove this unrequired reference. In fact, removing it will probably ensure that msbuild better uses its parallel build capabilities, and therefore that your solution builds more quickly.
Assembly styling depending on the project type
If you look carefully at the earlier screenshots of the Code Maps, you’ll see that some of the assemblies have different colors and icons. This is because they are styled based on the project types. In the case of the Roslyn solution we’ve been using, most assemblies are libraries—the purple ones being Visual Studio extensions (VSIX).
However, the new styling is more obvious when you look at a Code Map for the solution we used in the keynote demonstrations during the Connect event when we released Visual Studio 2015 Preview.
In the following screenshot, which shows a close-up view, you can see that there is a Web Project, a Phone Project, and a Portable Library type. Note that the Test Projects are not yet styled in CTP6, but that’s something we are working on.
Less clutter with implicit .NET type dependencies hidden
In previous releases of Code Maps, every assembly in the display had a dependency link of type Inheritance with respect to mscorlib (and therefore to the Externals group). This is because all .NET types inherit from one of the mscorlib base types: System.Object, System.ValueType, System.Enum, or System.Delegate. You told us that this information is not useful because it’s not something that you had coded, but was simply an implementation detail that was adding confusion and clutter to the map—as well as adding processing time when generating the map.
Since Visual Studio 2015 CTP5, Code Maps no longer represent these dependencies. Instead, you see links between assemblies and externals in colors other than green (which represents inheritance).
Filters for code elements as well as dependency links
In Visual Studio 2015 Preview, we introduced filters for links. In CTP6, we’ve added a new and powerful node filtering mechanism; plus additional filters for the project references, and for the external references (references from a project to an assembly that is not built from a project in the solution). The links between nodes that belong to different groups (such as classes that belong to different namespaces) are now also filtered.
For example, assume that you want to build a Roslyn analyser similar to the one provided by FxCop. You can find one in Solution Explorer (for example, the class CA1813DiagnosticAnalyzer) and add it to a new Code Map. Get all of its base classes (open the shortcut menu, choose Advanced, and then choose Show All Base Types), and then get the containing namespaces and assemblies (Select All, and then Advanced | Show Containing Namespace and Assembly). The result is a diagram such as the following (we chose Bottom to Top Layout from the Layout menu so that the base class is at the top).
Now we can uncheck the check boxes in the Filters window so that the map shows only the details we’re interested in. For example, by unchecking Namespace we get a grouping of classes by the assemblies where they are defined, but skipping the Namespace level.
Conversely, we can check Namespace and un-check Assembly and Class to get the type of namespace dependency diagram that many customers have requested (with the caveat that if two namespaces have the same name but belong to different assemblies they will still be represented as different groups: that’s the current model of Code Maps).
Filtering on test assemblies is not yet available, but we’ve heard from you that you sometimes want to be able to focus only on production code. Test assembly filtering is something we want to do in future release.
Got feedback?
Please try the improved Code Maps and send your feedback, ideas, and requests to vsarchfeedback@microsoft.com
Thank you,
The Code Insights team
To be honest I can't find any usability for code map but wasting times, Why finding driven classes of a class (one the most importance features that VS still doesn't have it!) takes more than one second, One keystroke and one glance would be enough. Developers find relations, specially finding driven and concrete classes, dozen times per day. using Shift+F12 is still faster, even with a large result. This means productivity to me, doing in less time. What is the point of a visualizing classes in a large unreadable image in term of productivity. It takes more time to understand it, Scanning classes by eyes are still faster and easier. We all know that in every well designed project there are much more interfaces than classes. VS IntelliSence will fail to create an simple statement like IList<Something>list = new List<Something>() We have to write second part manually dozen times per day! I think VS team should have a better user experience for productivity! A better Shift+F12 will bring more productivity. "A picture is worth a thousand words" is not always true! . It depends on context.
It doesn't work for TypeScript, does it?
I like codemap, it helps in understanding new code and bring us to pace.Hope it does make my understanding better with the new code and link filtering.
Code map and DGML graph documents in general are about seeing more than the immediate Shift+F12 experience, so it is intended to be complementary to that. Think of it this way if you find your head hurting after jumping through a million Shift+F12 dependencies, try stepping back, and look at the big picture with code map and see if that helps understand how the code works, and better, how to clean it up.
@thorn – Code Map currently works for C#, VB and C++
Nima is not totally wrong. The user experience with current implementation is absolutly a waste of time. Especially in big projects it becomes more and more useless. I could become very useful, if there were better filter functions. For example a function that would hide all items/namespaces that are not in direct reference with my actual selection, where a selection could be multiple namespaces or a project folder.
Will the filter option be available in VS2013 or are we stuck with the VS2015 upgrade, as large projects makes this tool useless if there is no filter of the clutter?
@Denis: Filters won't be available in VS 2013. In any case I strongly advise you to upgrade to VS 2015 as Code Map is much better for large projects there (filters, but also the Code map for solution scales). See the MSDN magazine article about that: msdn.microsoft.com/…/mt238403.aspx | https://blogs.msdn.microsoft.com/devops/2015/02/23/code-map-improvements-in-visual-studio-2015-ctp6/ | CC-MAIN-2017-26 | refinedweb | 2,235 | 57.81 |
Class::Struct::FIELDS - Combine Class::Struct, base and fields
(This page documents
Class::Struct::FIELDS v.1.1.)
use Class::Struct::FIELDS; # declare struct, based on fields, explicit class name: struct (CLASS_NAME => { ELEMENT_NAME => ELEMENT_TYPE, ... }); use Class::Struct::FIELDS; # declare struct, based on fields, explicit class name # with inheritance: struct (CLASS_NAME => [qw(BASE_CLASSES ...)], { ELEMENT_NAME => ELEMENT_TYPE, ... }); package CLASS_NAME; use Class::Struct::FIELDS; # declare struct, based on fields, implicit class name: struct (ELEMENT_NAME => ELEMENT_TYPE, ...); package CLASS_NAME; use Class::Struct::FIELDS; # declare struct, based on fields, implicit class name # with inheritance: struct ([qw(BASE_CLASSES ...)], ELEMENT_NAME => ELEMENT_TYPE, ...); package MyObj; use Class::Struct::FIELDS; # declare struct with four types of elements: struct (s => '$', a => '@', h => '%', x => '&', $ary_element_value = $obj->a (2); # same thing $obj->a->[2] = 'new value'; # assign to array element $obj->a (2, 'newer value'); # same thing # hash type accessor: $hash_ref = $obj->h; # reference to whole hash $hash_element_value = $obj->h->{x}; # hash element value $hash_element_value = $obj->h (x); # same thing $obj->h->{x} = 'new value'; # assign to hash element $obj->h (x, 'newer value'); # same thing # code type accessor: $code_ref = $obj->x; # reference to code $obj->x->(...); # call code $obj->x (sub {...}); # assign to element # regexp type accessor: $regexp = $obj->r; # reference to code $string =~ m/$obj->r/; # match regexp $obj->r (qr/ ... /); # assign to element # class type accessor: $element_value = $obj->c; # object reference $obj->c->method (...); # call method of object $obj->c (My_Other_Class::->new); # assign a new object
Class::Struct::FIELDS exports a single function,
struct. Given a list of element names and types, and optionally a class name and/or an array reference of base classes,
struct creates a Perl 5 class that implements a "struct-like" data structure with inheritance., code or class.
Class::Struct
baseand
fields
Class::Struct::FIELDS is a combination of
Class::Struct,
base and
fields.
Unlike
Class::Struct, inheritance is explicitly supported, and there is better support for user overrides of constructed accessor methods. One result is that you may no longer use the array (
[]) notation for indicating internal representation. Also,
Class::Struct::FIELDS relies on
fields for internal representation.
Also,
Class::Struct::FIELDS supports code and regular expression elements. (
Class::Struct handles code and regular expressions as scalars.)
Lastly,
Class::Struct::FIELDS passes it's import list, if any, from the call to
use Class::Struct::FIELDS ... to
struct so that you may create new packages at compile-time.
Unlike
fields, each element has a data type, and is automatically created at first access.
use Class::Struct::FIELDS
You may call
use Class::Struct::FIELDS just as with any module library:
use Class::Struct::FIELDS; struct Bob => [];
However, if you try
my Dog $spot syntax with this example:
use Class::Struct::FIELDS; struct Bob => []; my Bob $bob = Bob::->new;
you will get a compile-time error:
No such class Bob at <filename> line <number>, near "my Bob" Bareword "Bob::" refers to nonexistent package at <filename> line <number>.
since the compiler has not seen your class declarations yet until after the call to
struct, by which time it has already seen your
my declarations. Oops, too late. Instead, create the package for
Bob during compilation:
use Class::Struct::FIELDS qw(Bob); my Bob $bob = Bob::->new;
This compiles without error as
import for
Class::Struct::FIELDS calls
struct for you if you have any arguments in the
use statement. A more interesting example is:
use Class::Struct::FIELDS Bob => { a => '$' }; use Class::Struct::FIELDS Fred => [qw(Bob)]; my Bob $bob = Bob::->new; my Fred $fred = Fred::->new;
structsubroutine
The
struct subroutine has three forms of parameter-list:
struct (CLASS_NAME => { ELEMENT_LIST }); struct (CLASS_NAME, ELEMENT_LIST); struct (ELEMENT_LIST);
The first form explicitly identifies the name of the class being created. The second form is equivalent. The second form assumes the current package name as the class name. The second and third forms are distinguished by the parity of the argument list: an odd number of arguments is taken to be of the second form.
Optionally, you may specify base classes with an array reference as the first non-class-name argument:
struct (CLASS_NAME => [qw(BASE_CLASSES ...)], { ELEMENT_LIST }); struct (CLASS_NAME => [qw(BASE_CLASSES ...)], ELEMENT_LIST); struct ([qw(BASE_CLASSES ...)], { ELEMENT_LIST }); struct ([qw(BASE_CLASSES ...)], ELEMENT_LIST);
(Since there is no ambiguity between CLASS_NAME and ELEMENT_LIST with the interposing array reference, you may always make ELEMENT_LIST a list or a hash reference with this form.)
The class created by
struct may be either a subclass or superclass of other classes. See base and fields for details.
The ELEMENT_LIST has the form
NAME => TYPE, ...
Each name-type pair declares one element of the struct. Each element name will be usually be defined as an accessor method of the same name as the field, unless a method by that name is explicitly defined (called a "user override") by the caller prior to the
use statement for
Class::Struct::FIELDS. (See "Replacing member access methods with user overrides".)
struct returns the name of the newly-constructed package.
The five element types -- scalar, array, hash, code and class -- are represented by strings --
$,
@,
%,
&,
/ and a class name.
\@ or
*@, a reference to the array
\% or
*%, a reference to the hash element is returned.
&,
\&or
*&)
The element is code,. (It is unclear of what value this facility is. XXX)
/,
\/or
*/)
If the element type is
/, the value of the element (after assignment) is returned. If the element type is
\/ or
*/, a reference to the element is returned. (It is unclear of what value this facility is. XXX)
Regular expressions really are special in that you create them with special syntax, not with a call to a constructor:
$obj->r (qr/^$/); # fine $obj->r (Regexp->new); # WRONG
Class_Name,
\Class_Name
\ or
*, the accessor returns the element value (after assignment). If the element type starts with a
\ or
*, a reference to the element itself is returned.
The class is automatically required for you so that, for example, you can safely write:
struct MyObj {io => 'IO::Scalar'};
and access
io immediately. The same applies for nested structs:
BEGIN { struct Alice { when => '$' }; struct Bob { who => 'Alice' }; } my Bob $b = Bob::->new; $b->who->when ('what');
Note, however, the
BEGIN block so that this example can use the
my Dog $spot syntax for
my Bob $b. Also, no actual import happens for the caller -- the automatic use is only for convenience in auto-constructing members, not magic. Another way to do this is:
{ package Bob; use Class::Struct::FIELDS; struct } my Bob $b = Bob::->new;
And of course the best way to do this is simply:
use Class::Struct::FIELDS qw(Bob); my Bob $b = Bob::->new;
*) and other funny types?
At present,
Class::Struct::FIELDS does not support special notation for other intrinsic types. Use a scalar to hold a reference to globs and other unusual specimens, or wrap them in a class such as
IO::Handle (globs). XXX code is a code reference.
The initializer for a class element is also a hash reference, and the contents of that hash are passed to the element's own constructor.
new tries to be as clever as possible in deducing what type of object to construct. All of these are valid:
use Class::Struct::FIELDS qw(Bob); my Bob $b = Bob::->new; # good style my Bob $b2 = $b->new; # works fine my Bob $b3 = &Bob::new; # if you insist my Bob $b4 = Bob::new (apple => 3, banana => 'four'); # WRONG!
The last case doesn't behave as hoped for:
new tries to construct an object of package
apple (and hopefully fails, unless you actually have a package named
apple), not an object of package
Bob.
See Example 3 below for an example of initialization.
init
You may also use
init as a constructor to assign initial values to new objects. (In fact, this is the preferred method.)
struct will see to it that you have a ready object to work with, and pass you any arguments used in the call to
new:
sub init { my MyObj $self = shift; @self->a->[0..3] = (a..d); return $self; }
It is essential that you return an object from
init, as this is returned to the caller of
new. You may return a different object if you wish, but this would be rather uncommon.
First,
new arranges for any constructor argument list to be processed first before calling
init.
Second,
new arranges to call
init for base classes, calling them in bottom-up order, before calling
init. This is so that ancestors may construct an object before descendents.
There is no corresponding facility for DESTROY. XXX
You might want to create custom access methods, or user overrides. The most straight forward way to do this and still retain
string and
warnings is:
use strict; use warnings; sub Bob::ff ($;$$); # declare sub so Class::Struct::FIELDS can see use Class::Struct::FIELDS Bob => { ff => '$' }; sub Bob::ff ($;$$) { my Bob $self = shift; &some_other_sub (@_); }
If you do not declare the user override prior to the
use statement, a warning is issued if the warning flag (-w) is set.
Notice that we changed the default sub signature for ff from
($;$) to
($;$$). Normally, this might generate a warning if we redefine the sub, but declaring the sub ahead of time keeps
strict and
warnings happy. You might prefer this construction:
{ package Bob; } sub Bob::ff ($;$$) { my Bob $self = shift; &some_other_sub (@_); } use Class::Struct::FIELDS Bob => { ff => '$' };
You might still want the advantages of the the constructed accessor methods, even with user overrides (for example, checking that an assigned value is the right type or package).
Class::Struct::FIELDS constructs the accessor with a special name, so that you may use it yourself in the user override. That special name is the regular field name prepended by a double underscore,
__. You can access these so:
use strict; use warnings; sub Bob::ff ($;$); # declare sub so Class::Struct::FIELDS can see sub Bob::gg ($;$); # declare sub so Class::Struct::FIELDS can see use Class::Struct::FIELDS Bob => { ff => '$', gg => '$' }; # This example is identical to having no user override. sub Bob::ff ($;$) { my Bob $self = shift; $self->__ff (@_); } # This example illustrates a workaround for v5.6.0. sub Bob::gg ($;$) { # This silliness is due to a bug in 5.6.0: it thinks you can't # fiddle with @_ if you've given it a prototype. XXX my @args = @_; $args[1] *= 2 if @args == 2 and defined $args[1]; @_ = @args; goto &Bob::__gg; }
Fields starting with a leading underscore,
_, are private: they are still valid fields, but
Class::Struct::FIELDS does not create subroutines to access them. Instead, you should access them the usual way for hash members:
$self->{_private_key}; # ok $self->_private_key; # Compilation error
See fields for more details.
If there exists a subroutine named
as_string at the time you invoke
struct (or, equivalently, during the call to
use), then
Class::Struct::FIELDS will glue that into auto-stringification with
overload for you.
Giving a struct element a class type that is also a struct is how structs are nested. Here,
timeval represents a time (seconds and microseconds), and
rusage has two elements, each of which is of type
timeval.
use Class::Struct::FIELDS;::FIELDS; # declare the struct struct (MyObj => {count => '$', stuff => '%'}); # override the default accessor method for 'count' sub count { my MyObj struct is specified as an anonymous hash of initializers, which is passed on to the nested struct's constructor.
use Class::Struct::FIELDS; struct Breed => { name => '$', cross => '$', }; struct Cat => { name => '$', kittens => '@', markings => '%', breed => 'Breed', }; my $cat = Cat->new (name => 'Socks', kittens => ['Monica', 'Kenneth'], markings => { socks => 1, blaze => "white" }, breed => { name => 'short-hair', cross => 1 }); print "Once a cat called ", $cat->name, "\n"; print "(which was a ", $cat->breed->name, ")\n"; print "had two kittens: ", join(' and ', @{$cat->kittens}), "\n";
Class::Struct::FIELDS has a very elegant idiom for creating inheritance trees:
use Class::Struct::FIELDS Fred => []; use Class::Struct::FIELDS Barney => [qw(Fred)]; use Class::Struct::FIELDS Wilma => [qw(Barney)], { aa => '@', bb => 'IO::Scalar' };
That's all the code it takes!
Class::Struct::FIELDS export
struct for backwards-compatibility with
Class::Struct.
The following are diagnostics generated by Class::Struct::Fields. Items marked "(W)" are non-fatal (invoke
Carp::carp); those marked "(F)" are fatal (invoke
Carp::croak).
(F) The caller failed to read the documentation for
Class::Struct::FIELDS and follow the advice therein.
(W) There is already a subroutine, with the name of one of the accessors, located in a base class of the given package. You should consider renaming the field with the given name.
(W) There is already a subroutine, with the name of one of the accessors, located in the given package. You may have intended this, however, if defining your own custom accessors.
(W) There is already a 'new' subroutine located in the given package. As long as the caveats for defining your own
new are followed, this warning is harmless; otherwise your objects may not be properly initialized.
(F) At runtime, the caller tried to assign the wrong type of argument to the element. An example which triggers this message:
use Class::Struct::FIELDS Bob => { ary => '@' }; my $b = Bob::->new; $b->ary ({hash => 'reference'}); # croaks
The last statement will croak with the message, "Initializer for 'ary' must be ARRAY reference".
(F) At runtime, the caller tried to assign the wrong class of argument to the element. An example which triggers this message:
use Class::Struct::FIELDS Bob => { mary => 'Mary' }; use Class::Struct::FIELDS qw(Fred); # NOT inherit from Mary my $b = Bob::->new; $b->ary (Fred::->new); # croaks
The last statement will croak with the message, "Initializer for 'aa' must be Mary object".
Please see the TODO list.
GIANT MAN-EATING HOLE: due to bugs in lvalue subs in 5.6.0 (try running some under the debugger), I had to disable the obvious syntax:
use Class::Struct::FIELDS Bob => { s => '$' }; my Bob $b = Bob::->new; $b->s = 3;
and provide the clumsier:
use Class::Struct::FIELDS Bob => { s => '$' }; my Bob $b = Bob::->new; $b->s (3);
Some of these constructs work fine as long as you don't try to debug the generated code.
Dean Roehrich, Jim Miner <jfm@winternet.com> and Dr. Damian Conway <damian@conway.org> wrote the original
Class::Struct which inspired this module and provided much of its documentation.
B. K. Oxley (binkley) <binkley@bigfoot.com>
Copyright (c) 2000 B. K. Oxley (binkley). All rights reserved. This program is free software; you can redistribute it and/or modify it under the same terms as Perl itself.
Class::Contract is an extension module by Damian Conway for writing in a design-by-contract object-oriented style. It has many of the features of
Class::Struct::FIELDS, and many more besides.
Class::Struct is a standard module for creating simple, uninherited data structures.
base is a standard pragma for establishing IS-A relationships with base classes at compile time.
fields is a standard pragma for imbuing your class with efficient pseudo-hashes for data members.
overload is a standard pragma for overloading Perl syntax with your own subroutines. | http://search.cpan.org/~binkley/Class-Struct-FIELDS-1.1/FIELDS.pm | crawl-002 | refinedweb | 2,494 | 60.85 |
Part 3: Understanding the Execution Environment
In this lesson we explore WinJS to learn more about what it can do for our apps. We learn about creating namespaces, classes, simple binding to objects, observability, and more.
Introduction to creating Windows Store apps using HTML and JavaScript
Chris Anderson & Josh Williams at Build 2012
Channel 9's JavaScript Fundamentals Series - Lesson 10: Understanding Function versus Global Scope
Organizing your code with WinJS.Namespace
WinJS.Class.define function
Actual format may change based on video formats available and browser capability.
Hi Bob,
I remember in a lesson in the JS for beginner series, you said we can solve the namespace issue by creating an object includes all our functions. Why don't we use that technique in here? What's the differentiation if using WinJS.Namespace?One more thing, in this line: Why does the "data-win-bind" know the exactly "time" they need? I think WinJS.Binding.as has something to do with it, but I do not quite understand this function, even thought I've read the documentary.
Thank you.
@Minowar: Absolutely nothing wrong with continuing to use that Namespace approach from before. You'll get a few benefits from using Microsoft's beefed up Namespaces, however. Check out this thread for a good conversation about this very topic:
Comments have been closed since this content was published more than 30 days ago, but if you'd like to send us feedback you can Contact Us. | https://channel9.msdn.com/Series/Windows-Store-apps-for-Absolute-Beginners-with-JavaScript/Part-4-Quick-Tour-of-WinJS-1-of-2?format=html5 | CC-MAIN-2017-17 | refinedweb | 246 | 53.51 |
08 May 2012 05:02 [Source: ICIS news]
SINGAPORE (ICIS)--?xml:namespace>
The official said the plant’s operating rate is not high because of feedstock carbon monoxide (CO) supply issues and the duration of the shutdown can not be ascertained yet.
The plant was taken off line on 14 March for scheduled maintenance and was restarted during the week ended 20 April.
After the March turnaround, Shanghai Wujing was producing acetic acid at a total daily output rate of around 1,500 tonnes, according to sources close to the company.
The company was running its 250,000 tonne/year acetic acid plant at the same site at 100% of capacity during the turnaround of the bigger unit and is currently still running the plant at full rates, said the source. | http://www.icis.com/Articles/2012/05/08/9557122/chinas-shanghai-wujing-chemical-may-shut-acetic-acid.html | CC-MAIN-2014-35 | refinedweb | 131 | 63.22 |
#011 Pytorch – RNN with PyTorch
Highlights: In this post, we will give a brief overview of Recurrent Neural Networks. Along with the basic understanding of the RNN model, we will also demonstrate how it can be implemented in PyTorch. We will use a sine wave, as a toy example signal, so it will be very easy to follow if you are encountering RNNs for the first time. Let’s start with our post!
Tutorial Overview:
- Introduction to Recurrent Neural Networks
- Introduction to Long Short Term Memory-LSTM
- RNN/LSTM model implemented with PyTorch
1. Introduction to Recurrent Neural Networks
Fully Connected Neural Networks and Convolutional Neural Networks mainly work with vector data types and images. However, not all data can be effectively represented in this way. For instance, text or time series are better modeled as sequences in time. So, in this post, we will see how time series can be modeled and forecasted using Recurrent Neural Networks.
One example of sequence signals is physiological signals such as heart rate or pulse. These signals can be used for classification or prediction. For instance, you can have signals with certain patterns recorded from healthy patients, and then similar recordings from unhealthy subjects. Then, you can train your model to determine whether a person is sick or healthy, for example, whether a person has an arrhythmia or not. The recordings may last for 24 hours, so it is impractical for a human observer / medical doctor to examine all this data, and therefore a computer-assisted solution in terms of Recurrent Neural Networks can be a way to predict this.
On the other hand, you may have an outbreak of COVID-19 cases and want to predict the trend of growth or decline of new cases, or the death rate. Another example can be the number of travelers who are going to visit a certain country during summertime. In that case, you would like to have data from summertime in the previous five years, and also the situation prior to summer, in order to track the trend of the current year. As long as external factors do not change much, your model can make meaningful predictions.
Last but not least, you can model text as a sequence. Have a look at the example below, where you are trying to predict the next word for a given sequence.
So, in this post, we will start with a very brief RNN theory that we also covered in a series of posts [1, 2, 3, 4, 5], together with LSTMs and Gated Recurrent Units (GRU). Then, we will see how to implement a simple time-series prediction using an RNN.
The main idea of predicting a sequence is that you will have several sequence elements as input. For instance, you can have the sequence \([1, 2, 3, 4] \), and then, one step into the future, you would like to predict the unknown value.
So, you can use Bitcoin’s price from the last 4 days, and then, you want to predict tomorrow’s price. One example of a time series is the following graph for the Bitcoin price change from June 2018.
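This sliding-window idea can be sketched in a few lines of Python. As a toy signal we use a sampled sine wave, the example signal announced at the top of this post; the variable names and the sampling choices below are illustrative, not taken from the post's own code:

```python
import numpy as np

# Toy signal: a sampled sine wave, the example signal used in this post.
t = np.linspace(0, 8 * np.pi, 200)
signal = np.sin(t)

def make_windows(series, window=4):
    """Split a 1-D series into (last `window` values, next value) pairs."""
    inputs, targets = [], []
    for i in range(len(series) - window):
        inputs.append(series[i:i + window])   # e.g. [x1, x2, x3, x4]
        targets.append(series[i + window])    # the unknown value one step ahead
    return np.array(inputs), np.array(targets)

X, y = make_windows(signal, window=4)
print(X.shape, y.shape)  # (196, 4) (196,)
```

Each row of `X` plays the role of "the last 4 days", and the matching entry of `y` is "tomorrow's value" that the network should learn to predict.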
So, the main idea of Recurrent Neural Networks is that a neuron should know about its previous outputs. A simple way to achieve this is to feed its output back to itself. In the following image, we can see an example of a simple neuron from a Feed-Forward Network, where we aggregate the input with the output from the previous time step.
Therefore, in a recurrent neuron, we send the output back to the neuron itself. So, here, the input at time instance \(t-1 \) gives us an output at \(t-1 \). Then, at the next time instance, the neuron receives the input at time \(t \), plus its output from the previous time instance \(t-1 \), and this gives us the output at \(t \). So, we can treat this as a simple Feed-Forward Network, but keep in mind that the input is combined with the output from the previous step in the sequence. Here, the cells that are a function of inputs from the previous time steps are also known as memory cells.
Next, we have an example of an RNN layer with three neurons. The output here goes back as an input to all three hidden neurons. One common procedure is to unroll an RNN in time. Observe in the image below how this network structure now looks.
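In PyTorch, such a layer can be created with `nn.RNN`. Here is a minimal sketch of a forward pass; the sizes are illustrative (3 hidden neurons as in the figure above, a batch of 8 sequences with 5 time steps of a single scalar feature):

```python
import torch
import torch.nn as nn

# One RNN layer with 3 hidden neurons, as in the figure above.
rnn = nn.RNN(input_size=1, hidden_size=3, batch_first=True)

x = torch.randn(8, 5, 1)  # batch of 8 sequences, 5 time steps, 1 feature
out, h_n = rnn(x)         # out: hidden state at every step, h_n: final hidden state

print(out.shape)  # torch.Size([8, 5, 3]): one 3-dim hidden vector per time step
print(h_n.shape)  # torch.Size([1, 8, 3]): final hidden state of the single layer
```

For a single unidirectional layer, the last time step of `out` is exactly the final hidden state `h_n`, which makes the recurrence explicit: each step's output is fed forward in time.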
There are several different types of Recurrent Neural Networks. We can categorize RNN architectures into the following four types:
- Many-to-many
- Many-to-one
- One-to-one
- One-to-many
For example, we can have a sequence to sequence type – many-to-many.
In this case, you will have five words and you have to predict the next five words.
Another common type of RNNs is many-to-one where we start with a sentence of five words we will end up with the one output.
This type of RNN can be used when we have five words in a sentence and we want to predict the word that most likely follows these five words. An example can be a model that has to learn and to provide the following word as the output while you type on your smartphone.
One shortcoming of the RNN models is that we don’t want to have only short-term memory. We would like to have models that would work much better if we are able to track a longer history in time. Also, there is one problem with the training which is known as “vanishing gradient”. Further in the post, we will see how they can be improved in models known as LSTM (Long Short Term Memory) Units.
To better understand the vanishing gradients let’s take a look at the following image.
Here, we can see the sigmoid function in black, and the sigmoid derivative in blue. We can see that values lower than -5 and higher than +5 are practically already zero. In such a case the gradients cannot propagate well due to small values. That is the reason why we call them a vanishing gradient. Therefore, training of the time-related sequences can be a problem.
2. Introduction to Long Short Term Memory – LSTM
Now, let’s have a look into LSTMs and GRU (Gated Recurrent Units). So, this was the main bottleneck of RNNs because it tends to forget very quickly. The information is lost when we go through the RNN, and therefore, we need to have a mechanism to provide a long-term memory for our models. The cell, known as Long-Short-Term Memory, can assist us to overcome the disadvantages of an RNN. In the following image, we can see a typical unrolled RNN.
Usually this output \(t-1 \) is called hidden state- \(H_{t-1} \). Also, it gives the output as a hidden state \(H_{t}\). We can treat this as a simple neuron that we’ve seen in fully connected neural networks. However, here, this \(H_{t} \) is calculated using a hyperbolic tangent. Basically, here we have a matrix of coefficients or weight matrix \(W \) that multiplies \(H_{t-1} \) and the input vector \(X_{t} \). Also, we have a bias term \(b \). So, that’s how we obtain the output \(H_{t} \).
On the other hand, with LSTMs we will have something different. Here, we can see two signals called Short Term Memory and Long Term Memory.
Here, the input at time \(t \) will provide the output at time \(t \), but it will also keep track of current and updated Short Term Memory as well as the current and an updated New Long Term Memory. Furthermore, there are four gates here: Forget gate, Input gate, Update gate, and Output gate.
All these four gates are very important. They pass information if the signal value is close to one, and they will not pass forward the information if it is close to zero.
Here, \(\sigma \) stands for sigmoid and \(tanh \) is hyperbolic tangent. Also, we create \(f_{t} \) which is a sigmoid of the following multiplication plus \(b_{f} \).
$$ f_{t}=\sigma\left(W_{f} \cdot\left[h_{t-1}, x_{t}\right]+b_{f}\right) $$
Then, we have a signal \(i_{t} \) which is a sigmoid of the following multiplication:
$$ i_{t}=\sigma\left(W_{i} \cdot\left[h_{t-1}, x_{t}\right]+b_{i}\right) $$
Then of course, we have a signal \(\tilde{C}_{t} \) which can be calculated in the following way:
$$ \tilde{C}_{t}=\tanh \left(W_{C} \cdot\left[h_{t-1}, x_{t}\right]+b_{C}\right) $$
Next, we obtain a signal \(C_{t} \):
$$ C_{t}=f_{t} * C_{t-1}+i_{t} * \tilde{C}_{t} $$
The first part of an equation is element wise multiplication, and then, we add signal \(i_{t} \) multiplied with \(\tilde{C}_{t}\).
Finally, we will obtain \(o_{t}\) and \(h_{t} \) as follows:
$$ o_{t}=\sigma\left(W_{o}\left[h_{t-1}, x_{t}\right]+b_{o}\right) $$
$$ h_{t}=o_{t} * \tanh \left(C_{t}\right) $$
Now, let’s explain this in more detail. In the image above we can see that the \(C_{t} \) signal goes down and passes through \(tanh \). Then, we have the output at the time \(t \), \(o_{t} \). These two signals are multiplied, using the so-called, element-wise multiplication, and we finally obtain the signal \(h_{t} \).
Fortunately, we have Deep Learning libraries like PyTorch or TensorFlow, so we don’t have to code all these equations on our own. We can just use them and apply them for our purposes. Before we proceed, let’s discuss how the data should look like for this model. A time-series usually looks like this: we have a sequence that is separated into two parts. We have a training sequence and the sequence value that should be predicted/forecasted.
For instance, we can have a set of numbers \([1,2,3,4,5] \). Then, for the training batch, we will use \([1,2,3,4] \) and for the desired output we will have \([5] \).
Therefore, we usually train sequences using batches. We can edit the size of the training data point, as well as how many sequences we want to feed per batch.
One interesting thing is that when we work with predictions a common way to estimate our error would be Root Mean Square Error ( RMSE ). Using this approach we can measure how good our prediction is.
Usually, once we have our data, we save the last 10% or 20% of our time series for the test. Next, we use the previous part for training and to run predictions. Then, the last part we use for the test to see how well we are performing.
An interesting event occurs when we select the last part of the sequence – the last remaining batch.
Then, we can use this predicted number 10. We can say that this will be a value of Bitcoin for tomorrow, and then we will predict the value of two days after today. After that, we can use that value and place it in the third sequence and once again calculate the output of the prediction. Finally, we will move our sliding window and we will have three numbers that we’ve already predicted. We will use this whole sequence to predict the final output. Of course, since all of these numbers are predictions they will have an error and that error will propagate through this more and more. Eventually, the longer we do these predictions without recording data, we will have more inaccurate predictions. That means that if we want to predict bitcoin price for five days, we can be reasonably successful. However, if we want to predict it for one year, our predictions will probably not be valid at all.
Now, let’s apply this knowledge in Python.
3. RNN/LSTM model implemented with PyTorch
First, let’s import necessary libraries.
import torch import torch.nn as nn import numpy as np import pandas as pd import matplotlib.pyplot as plt
Next, will define
x as an array of numbers from 0 to 799. Then we define a simple sin using the command
torch.sin(). We will take \(x\cdot 2\pi \) and we will divide this with 40 in order to achieve higher frequency.
Creating a dataset
x = torch.linspace(0,799, 800) y = torch.sin(x * 2 * np.pi / 40)
With the following code, we can plot these values.
plt.figure(figsize = (12,4)) plt.xlim(-10, 801) plt.grid(True) plt.plot(y.numpy() )
Then, we will use a size of 40 which means that our test will consist only of 40 elements. Also, we will split our data into
train_set and
test_set, so that
train_set will go from 0 to 759, and
test_set will have the last 40 elements.
test_size = 40 train_set = y[:-test_size] test_set = y[-test_size:]
We can plot these values and this is how they look.
plt.figure(figsize = (12,4)) plt.xlim(-10, 801) plt.grid(True) plt.plot(train_set.numpy())
So, this is the
train_set consisting of all the elements without the last 40 elements.
Now, this function is doing something interesting. As parameters, we will pass here an input sequence and window size. In our case, the window size will be 40. This function takes the length of a sequence minus window size. For example, if the window size is 40 our input training set will be 760 elements. So, this loop will go from 0 to 719. So, we can print this just to be sure.
def input_data(seq, ws): output = [] L = len(seq) for i in range((L) - ws): window = seq[i:i+ws] label = seq[i+ws:i+ws+1] print(i) output.append((window, label)) return output
The idea here is that we will loop over these elements and basically, we will start from zero and then go up to forty elements. So, this will be located in zero variable. Then we will have a label that will be exactly the element after the windows sequence. So, if we have 40 here this will be the 41st element. Then we will append only the window and this label in a tuple. So this will be a list consisting of tuples.
Next, we define
window_size to 40, and call this
input_data with
train_set and
window_size. We will obtain
train_data like a batch.
window_size = 40 train_data = input_data(train_set, window_size)
Output:
0,1,2,3,...719
As you can see the loop will go up to 719. So basically
train_data will be equal to 720 sequences, where each of them consists of 40 elements and the next value that needs to be predicted.
The first element from
train_data will have the first 40 elements of our sin signal. Then the 41st element will be the value that we want to predict. We can visualize one sequence using the function
plt.stem().
plt.stem(train_data[0][0])
Defining the LSTM model using PyTorch
Now, we will continue with our code by looking at the class
torch.nn.LSTMCell. Here we can see what this function is calculating for us:
$$ i=\sigma\left(W_{i i} x+b_{i i}+W_{h i} h+b_{h i}\right) $$
$$ f=\sigma\left(W_{i f} x+b_{i f}+W_{h f} h+b_{h f}\right) $$
$$ g=\tanh \left(W_{i g} x+b_{i g}+W_{h g} h+b_{h g}\right) $$
$$ o=\sigma\left(W_{i o} x+b_{i o}+W_{h o} h+b_{h o}\right) $$
$$ c^{\prime}=f * c+i * g $$
$$ h^{\prime}=o * \tanh \left(c^{\prime}\right) $$
Here \(\sigma \) is the sigmoid function and the asterisk sign is the Hadamard product also known as an element-wise multiplication.
So, these are the equations that we had for
torch.nn.LSTMCell. Note here that bias can be set to
True or
False and the default value is
True. So, we will leave it as such. The input consists of three terms. We have a common input, and then we have two more inputs, which are passed as a tuple: hidden state
h_0, and cell state
h_c. In this case,
h_0, and
h_c are initial states for each element in the batch.
For the output, we have something similar. We will have a tuple of a New long-term memory and a New short-term memory, and also we will have an output vector as well.
torch.nn.LSTM(input_size, hidden_size)
Now, we will create an LSTM model by creating our class. We will call it
myLSTM and we will derive all the members from the
nn.module. So, we will have an init function with the following parameters:
self,
input_size which will be equal to 1,
hidden_size which will be equal to 50, and
out_size which we will set to 1. So, you can think of a
hidden_size as the number of neurons in a hidden LSTM layer.
Next, we will call
super() and we will instantiate from the class that we are calling on. This means that in
nn.module we will have available all functions, parameters, and methods developed in the initial part.
We will start by defining the
hidden_size. In this case, we will also go from
hidden_size to the
output_size. Here we select that
hidden_size is equal to 50. Then, we will go from the LSTM layer to the output layer using a well-known, fully connected layer.
class myLSTM(nn.Module): def __init__(self, input_size=1, hidden_size=50, out_size=1): super().__init__() self.hidden_size = hidden_size self.lstm = nn.LSTM(input_size, hidden_size ) self.linear = nn.Linear(hidden_size, out_size) self.hidden = (torch.zeros(1,1,hidden_size ) , torch.zeros(1,1,hidden_size) ) def forward(self, seq): lstm_out, self.hidden = self.lstm(seq.view( len(seq),1,-1 ), self.hidden ) pred = self.linear(lstm_out.view( len(seq) ,-1 )) return pred[-1]
Now, with the parameter
self.hidden, we will initialize this hidden layer. Those are values
h_t and
c_t in mathematical formulas. Note that now we initialize
h_t and
c_t as zeros. So, we will start the initial state of our LSTM, with zero values.
Next, we will define the forward method with
self and
seq as parameters. So, the
seq variable will be the sequence that we work with within this class.
So, our first call will be
self.lstm() layer and as the output, we will provide two parameters. The first parameter will be
lstm_out which is an output of the LSTM. Then, we have
self.hidden parameters which will also be updated as a tuple. Next, we also have
h_t and
c_t variables that we use in this function. We can also see that this function has to provide the input to the LSTM layer that now will be
h_0 and
c_0.
So, an input will be a sequence. We need to make sure that the sequence is of adequate shape. So, as parameters, we will have
len(seq) and then we will have 1 and -1. The parameter
self.hidden is just passed as
self.hidden, where we have two values,
h_t, and
c_t.
Once we call the function
lstm(), we will proceed by calculating a prediction. For that, we will use our linear layer. The input to this linear layer will be the output of the LSTM layer. Again, we need to make sure that it is of the correct shape. So, it will have the parameter
len(seq) and we will include here -1 in case that we use batches. This -1 here stands that if we have an input system of, for instance, 32 batches, then instead of this -1, we will have a number 32.
Finally, we will return the prediction that is the last element of this sequence. For example, this prediction can be 1, 2, 3, 4. What we actually care about is the last element,
pred[-1], that we are predicting. In this example, this is the number 4.
Next, let’s instantiate the model with
myLSTM(). As we already said in the beginning, for the criterion we will use
nn.MSELoss(). Next, for the optimizer, we will use the Stochastic Gradient Descent model by calling
torch.optimSGD(). For this model, we will have an argument
model.parameters(), and we will set the learning rate to 0.01.
model = myLSTM() criterion = nn.MSELoss() optimizer = torch.optim.SGD(model.parameters(), lr = 0.01) print(model)
Output:
myLSTM( (lstm): LSTM(1, 50) (linear): Linear(in_features=50, out_features=1, bias=True) )
This is how our model looks. We can see that first, we have an LSTM layer with 1 input and 50 hidden neurons. Then, it is followed by a linear layer and output features.
Next, what we can do is to go through all the model parameters and we can print the number of elements using the function
p.numel().
for p in model.parameters(): print(p.numel())
Output:
200 10000 200 200 50 1
Training the LSTM model in PyTorch
We have defined our model and now we need to train it. First, we will define the number of epochs and set it to 10. Then, we can use the parameter
future and set it to 40. This parameter determines how many points we want to predict. This means that if we are dealing with the Bitcoin prices from today, we want to predict in the following consecutive 40 days what the daily price of bitcoin will be.
Then, for the number of epochs, we will create a
for loop. The next step is to iterate over the
train_data. What’s important here is that
train_data will be a tuple of 40 numbers along with one output. That means that for the values of Bitcoin daily prices, we want to predict tomorrow’s price. We will set gradients to zero in our optimizer. Then we will set the
model.hidden variable values to zeros. Note that there are two hidden vectors
C_t and
H_t. Finally, we will calculate
y_pred, and we will define our loss. Here, the
criterion is a root mean square error between the predicted value and train value that we have at the moment. The next step is to apply backpropagation through our network. Finally, we will call
optimizer.step() in order to update our parameters.
epochs = 10 future = 40 for i in range(epochs): for seq, y_train in train_data: optimizer.zero_grad() model.hidden = (torch.zeros(1,1,model.hidden_size) , torch.zeros(1,1,model.hidden_size)) y_pred = model(seq) loss = criterion(y_pred, y_train) loss.backward() optimizer.step() print(f"Epoch {i} Loss {loss.item()} ") preds = train_set[-window_size:].tolist() for f in range(future): seq = torch.FloatTensor(preds[-window_size:]) with torch.no_grad(): model.hidden = (torch.zeros(1,1,model.hidden_size) , torch.zeros(1,1,model.hidden_size)) preds.append(model(seq).item()) loss = criterion(torch.tensor(preds[-window_size :]), y[760:] ) print(f'Performance on test range: {loss}')
Now, we will print our loss, and also we will obtain prediction values in a variable
preds. We will do that in such a way that we will take the last window size elements. So, we will take the last 40 numbers using
preds[-window_size :]
The next step is to create another for loop to iterate over the future parameter. Note here that we are taking the last 40 elements from the training set with window size. So, that’s defined with predictions
train_set. That means that from these 40 elements, we are predicting tomorrow’s value. Then we will use this predicted value, we will append it, and then based on 39 values plus tomorrow’s predicted value we will predict the value in two days from today.
Next, we are defining a sequence. So, we will convert predictions into a
torch. FloatTensor by taking the last window size elements. Then, we will exclude gradients to perform calculations. We will set
model.hidden layers to zeros, and then we will calculate the predictions –
preds. That is, to the predictions, we will append the values that we obtained from our model when our input is the seq array. Then, with the function
.item() we will actually take these elements that will be appended into the prediction. So, because the output of the
myLSTM model is one we will append 40 numbers, from 40 iterations to predictions.
Once this loop is finished we will calculate the criterion on the last 40 predicted values-
preds[-window_size, :], and
y[760:]. So, basically,
y here are the last 40 elements of the whole input sequence, that is, the test set.
So, we had 10 epochs. After each epoch, we will plot the predictions. We can see that the more we train more this part will look and resemble the sine wave. So, in the end, it will look as follows.
plt.figure(figsize=(12,4)) plt.xlim(700, 801) plt.grid(True) plt.plot(y.numpy()) plt.plot(range(760,800), preds[window_size:]) plt.show()
So, we see that it’s not perfect, but we do have a very reasonable approximation of what should be coming after the end of a training part.
Finally, we would be willing to see how in an unknown future our predictions will behave. So, in reality, this means that we need to figure out whether today we should buy or sell Bitcoins. That is, if our model is accurate, we should trust it, and potentially in two or three days, we can estimate the unknown price. Of course, there’s too much volatility in Bitcoin data and I do not advise that you rely on such models to predict the Bitcoin data. However, if you’re working on something where there is not that much volatility present, then you should go for it!
The code for this will be similar to the previous one. We take the last 40 elements that previously have been used for the testing. The way we are going to use them is by predicting the next 40 elements based on them. This simply means that we are predicting the unknown future using these 40 last elements.
preds = y[-window_size:].tolist() for i in range(future): seq = torch.FloatTensor(preds[-window_size:]) with torch.no_grad(): model.hidden = (torch.zeros(1,1,model.hidden_size), torch.zeros(1,1,model.hidden_size)) preds.append(model(seq).item())
Now, we are going to print those predictions that are called Forecast into the unknown future. We will set the grid to
True. Here
y is a tensor and that means that we need to convert it to NumPy before plotting. In other words, now we will take the values from 760 to 799. We will use them to predict the following 40 numbers. They will correspond to the time instances from 800 till 839 (this is the unknown future, as we had a signal from 0 – 799)
Now, we can see the results, by plotting them.
plt.figure(figsize = (12,4)) plt.xlim(0,841) plt.grid(True) plt.plot(y.numpy()) plt.plot(range(800, 800+future), preds[window_size:])
This will be the output of the unknown future.
Summary
To summarize, we used a very simple toy example signal – a sine wave. It’s a deterministic signal and from that perspective, there is a really high chance that we can do a good job with our prediction. Here, we used the LSTM model and we managed to obtain very nice and accurate predictions. | http://datahacker.rs/011-pytorch-rnn-with-pytorch/ | CC-MAIN-2021-39 | refinedweb | 4,636 | 65.32 |
This problem is due to the case sensitivity of the iPhone device. The splash screen must be named ‘Default.png’, capital D included. Once you’ve renamed the file you need to remove the app from your device; you also need to go into the project folder and delete everything within the ‘Build’ folder.
Programming
Proper HTML Custom Menu in Moodle 2
If you’re developing a theme for Moodle 2, you probably want to support the new custom menu functionality. Problem is it uses YUI3 to make the menu ‘work’ rather than CSS alone. This means it’s a) Harder to style and b) results in a nasty ‘jump’ as the JS kicks in – you have to style both pre- and post- JS menu. Instead why not just output a normal nested unordered list and use CSS to provide the functionality? Well, you can! You need to add a little PHP first though. Create a file ‘renderers.php’ in your theme directory and add this code (make sure to change <theme name> to the name of your theme):
<?php class theme_<theme name>_core_renderer extends core_renderer { /** * Renders a custom menu object * * ++; $content .= html_writer::start_tag('ul', array('class'=>'custommenu')); // Render each child foreach ($menu->get_children() as $item) { $content .= $this->render_custom_menu_item($item); } // Close the open tags $content .= html_writer::end_tag('ul'); // Return the custom menu return $content; } /** * Renders a custom menu node as part of a submenu * * @see('title'=>$menunode->get_title())); $content .= html_writer::start_tag('ul'); foreach ($menunode->get_children() as $menunode) { $content .= $this->render_custom_menu_item($menunode); } $content .= html_writer::end_tag('ul'); $content .= html_writer::end_tag('li'); } else { // The node doesn't have children so produce a final menuitem $content = html_writer::start_tag('li'); if ($menunode->get_url() !== null) { $url = $menunode->get_url(); } else { $url = '#'; } $content .= html_writer::link($url, $menunode->get_text(), array('title'=>$menunode->get_title())); $content .= html_writer::end_tag('li'); } // Return the sub menu return $content; } } ?>
Then go into your config.php file and make sure $THEME->rendererfactory is set like so:
$THEME->rendererfactory = 'theme_overridden_renderer_factory';
And now you should have nice plain HTML output ready for styling the proper way!
Detect WordPress Home Page / Front Page
Chances.
Mahara Installation
[ENV].
Cannot convert lambda expression to type ‘string’ because it is not a delegate type
Getting the above error? You may have forgot to include the System.Linq namespace:
using System.Linq;
Aquiss Broadband Usage Checker for Mac OSX
I.
CIFilters with MonoMac_2<<
Implementing IParcelable in Mono for Android
If_7<<
.
Reachability in MonoTouch
If.
Using the Keychain in MonoMac
. | http://dan.clarke.name/category/programming/page/2/ | CC-MAIN-2017-22 | refinedweb | 407 | 60.01 |
CodePlexProject Hosting for Open Source Software
I've had an odd problem with a TimePicker. Another user has reported
something similar in the Silverlight forum.
The error is: "XamlParseException - The type 'TimePicker' was not found."
I re-created the problem by following a series of steps:
creating a new Silverlight Application project and adding a TimePicker to the MainPage.xaml - failed to recreate
created a UserControl in the application in a Views folder, moved the TimePicker to it, added the UserControl to the MainPage.xaml- failed to recreate
Added the Prism references (V4drop8) to the project and turned MainPage.xaml into Shell.xaml, adding a Bootstrapper.cs as per the standard instructions and poiting App.xaml.cs at the bootstrapper. I hadn't added any regions or modules- failed to recreate.
I added a module to the solution, gave the shell project a reference to it and it a ref to Prism., Created a Views folder, moved the UserControl with the TimePicker to the Views folder and changed the namespace to match the module project. Then I created
a region in the shell.xaml (using a tabcontrol) and told the module to stick the UserControl in the mainregion. Recreated the problem.
In both my actual project and my recreating-the-problem project, the TimePicker had been dragged off the Toolbox and had the xmlns:toolkit namespace applied. In both, I've added the xmlns:swcit="clr-namespace:System.Windows.Controls;assembly=System.Windows.Controls.Input.Toolkit"
namespace and changed the TimePicker line to point to that.
The error now becomes (in both projects):
XamlParseException - The type 'TimePicker' was not found because 'clr-namespace:System.Windows.Controls;assembly=System.Windows.Controls.Input.Toolkit' is an unknown namespace.
In my acual project (but not in my recreation) the latter error can be resolved by adding a Name attribute, e.g.
x:Name="Dennis"
just as with the other guy. I haven't reported this as an issue because I can't see how it's a problem with Prism but it only manifests itself in a Prism region.
Hi,
Thanks for reporting that. It would be helpful if you could send the repro sample that you mentioned. So if we found that this is an issue related to Prism, we will create a work-item for this. Additionally the community can also vote.
Fernando Antivero
Sorry about the delay - I've been out of the office. Where would you like me to send it?
I think you could upload this to the cloud, for example using
skydrive. Then you could post in this thread the URL. So all the community could contribute as well.
Thanks,
Okay. I think I've done that! It's here:
TimePickerError.zip. I've not used SkyDrive before so we'll see how this goes.
Hi. I was wondering if you've had time to look at this?
Sorry for the delay. I reproduced this situation using your repro sample. Additionally, I think that you can find the following thread in the Silverlight Forum interesting: SL4 runtime
error when parsing Generic.xaml: unknown namespace
Based on this thread, it seems to be something related to references. You can find a workaround over there.
Hope this helps.
Are you sure you want to delete this post? You will not be able to recover it later.
Are you sure you want to delete this thread? You will not be able to recover it later. | http://compositewpf.codeplex.com/discussions/229202 | CC-MAIN-2017-51 | refinedweb | 575 | 68.67 |
ACTION-11: Show how <div aria="something"> works with URI based extensibility
Show how <div aria="something"> works with URI based extensibility
- State:
- closed
- Person:
- Dan Connolly
- Due on:
- March 13, 2008
- Created on:
- November 10, 2007
- Associated Issue:
- aria-role
- Related emails:
- {minutes} 2008-04-03 HTML WG telcon (from mike@w3.org on 2008-04-04)
- Re: SVG and MathML in text/html (from hsivonen@iki.fi on 2008-03-10)
- Re: SVG and MathML in text/html (from connolly@w3.org on 2008-03-09)
- {minutes} HTML WG teleconference 2007-11-06 [resend] (from mike@w3.org on 2007-12-11)
- ARIA and URI based extensibility, part I (ACTION-11) (from connolly@w3.org on 2007-11-16)
Related notes:
some progress: A story about namespaces, MIME types, and URIs
I'd like to get this done sooner, but other important stuff keeps coming up. Anybody who needs it sooner will please let me know.Dan Connolly, 18 Dec 2007, 22:14:30
sigh... not this week either.Dan Connolly, 9 Jan 2008, 17:43:26
almost got into coding mode this week... soon, I hope...Dan Connolly, 7 Feb 2008, 18:02:38
crud; still no progressDan Connolly, 14 Feb 2008, 22:47:50
Can you point to the current status from the PFWG, too?Chris Wilson, 15 Feb 2008, 00:10:59
I'm inclined to withdraw this; it seems to be overtaken by events.Dan Connolly, 3 Apr 2008, 15:48:14
Changelog:
Created action 'Show how <div aria="something"> works with URI based extensibility' assigned to Dan Connolly, due 2007-11-1710 Nov 2007, 15:13:06
Due date changed to 2007-11-29Dan Connolly, 19 Nov 2007, 10:13:53
Due date changed to 2007-12-05Dan Connolly, 29 Nov 2007, 18:04:04
Due date changed to 2007-12-11Chris Wilson, 7 Dec 2007, 00:11:30
Due date changed to 2007-12-13Chris Wilson, 7 Dec 2007, 00:12:26
Due date changed to 2008-01-10Dan Connolly, 18 Dec 2007, 22:14:30
Due date changed to 2008-01-17Dan Connolly, 9 Jan 2008, 17:43:26
Due date changed to 2008-01-31Dan Connolly, 23 Jan 2008, 17:11:31
Due date changed to 2008-02-14Dan Connolly, 7 Feb 2008, 18:02:38
Due date changed to 2008-02-21Dan Connolly, 14 Feb 2008, 22:47:50
Due date changed to 2008-03-13Dan Connolly, 10 Mar 2008, 02:12:53
Status changed to 'pending review'3 Apr 2008, 15:48:14
Status changed to 'closed'3 Apr 2008, 16:45:57 | http://www.w3.org/html/wg/tracker/actions/11?changelog | CC-MAIN-2016-40 | refinedweb | 435 | 64.14 |
Harmonic numbers
The nth harmonic number, Hn, is the sum of the reciprocals of the integers up to and including n. For example,
H4 = 1 + 1/2 + 1/3 + 1/4 = 25/12.
Here’s a curious fact about harmonic numbers, known as Wolstenholme’s theorem:
For a prime p > 3, the numerator of Hp-1 is divisible by p2.
The example above shows this for p = 5. In that case, the numerator is not just divisible by p2, it is p2, though this doesn’t hold in general. For example, H10 = 7381/2520. The numerator 7381 is divisible by 112 = 121, but it’s not equal to 121.
Generalized harmonic numbers
The generalized harmonic numbers Hn,m are the sums of the reciprocals of the first n positive integers, each raised to the power m. Wolstenholme’s theorem also says something about these numbers too:
For a prime p > 3, the numerator of Hp-1,2 is divisible by p.
For example, H4,2 = 205/144, and the numerator is clearly divisible by 5.
Computing with Python
You can play with harmonic numbers and generalized harmonic numbers in Python using SymPy. Start with the import statement
from sympy.functions.combinatorial.numbers import harmonic
Then you can get the nth harmonic number with
harmonic(n) and generalized harmonic numbers with
harmonic(n, m).
To extract the numerators, you can use the method
as_numer_denom to turn the fractions into (numerator, denominator) pairs. For example, you can create a list of the numerators of the first 10 harmonic numbers with
[harmonic(n).as_numer_denom()[0] for n in range(10)]
What about 0?
You might notice that
harmonic(0) returns 0, as it should. The sum defining the harmonic numbers is empty in this case, and empty sums are defined to be zero.
7 thoughts on “Numerators of harmonic numbers”
Let s(n) be the sum of the divisors of n. The Riemann Hypothesis is equivalent to Problem E: for all n greater than 1, s(n) < H_n + exp(H_n) log(H_n). This is a result of Jeff Lagarias. ()
Very interesting!
Wrote a code snippet to test for primes using this property.
For fun I decided to create the harmonic function from what I just read, in Perl 6.
After maybe five minutes of testing and iterating on the REPL I came up with:
( That includes the amount of time I spent playing with the code before trying to put it in a subroutine )
Since that is a bit dense, this is what it would look like if I expanded it out a bit, and added comments for someone new to Perl 6:
If you want just the numerators for the first ten harmonic numbers
my @n = (harmonic($_).numerator for ^10);
If you want to format the output as a fraction:
say harmonic(10).nude.join('/');
Have you seen the chapter of Proofs That Really Count by Benjamin and Quinn that is all about combinatorial interpretations of harmonic numbers? I found it pretty amazing that such things could even exist at all! It’s a book I think you would enjoy, if you haven’t seen it, and it could give you a lot to add to this post.
The p^2 divisibility reminds me of Eisenstein’s irreducibility criterion. I wonder if there’s a connection. | https://www.johndcook.com/blog/2015/07/19/numerators-of-harmonic-numbers/ | CC-MAIN-2018-17 | refinedweb | 555 | 62.48 |
After you get information about your OPC servers, described in Discover Available Data Access Servers you can establish a connection to the server by creating an OPC Client object and connecting that client to the server. These steps are described in the following sections.
Note
To run the sample code in the following examples, you must have the Matrikon™ OPC Simulation Server available on your local machine. For information on installing this, see Install an OPC DA or HDA Simulation Server for OPC Classic Examples. The code requires only minor changes work with other servers.
To create an
opcda object, call the
opcda function specifying the hostname, and
server ID. You retrieved this information using the
opcserverinfo function
(described in Discover Available Data Access Servers).
This example creates an
opcda object to represent
the connection to a Matrikon OPC Simulation Server. The
opcserverinfo function
includes the default
opcda syntax in the
ObjectConstructor field.
da = opcda('localhost', 'Matrikon.OPC.Simulation.1');
To view a summary of the characteristics of the
opcda object
you created, enter the variable name you assigned to the object at
the command prompt. For example, this is the summary for the object
da.
da
The items in this list correspond to the numbered elements in the object summary:
The title of the
Summary includes the name of
the
opcda client object. The
default name for a client object is made up of the
'host/serverID'. You can change
the name of a client object using the
set function, described in
Configure OPC Toolbox Data Access Object Properties.
The
Server
Parameters provide information on the
OPC server that the client is associated with. The
host name, server ID, and connection status are
provided in this section. You connect to an OPC
server using the
connect
function, described in Connect a Client to the DA Server.
The
Object Parameters section
contains information on the OPC Data Access Group (
dagroup)
objects configured on this client. You use group objects to contain
collections of items. Creating group objects is described in Create Data Access Group Objects.
You connect a client to the server using the
connect function.
connect(da);
Once you have connected to the server, the
Status information
in the client summary display will change from
'disconnected' to
'connected'.
If the client could not connect to the server for some reason (for example, if the OPC server is shut down) an error message will be generated. For information on troubleshooting connections to an OPC server, see Troubleshooting.
When you have connected the client to the server, you can perform the following tasks:
Get diagnostic information about the OPC server, such
as the server status, last update time, and
supported interfaces. You use the
opcserverinfo function to obtain this
information.
Browse the OPC server name space for information on the available server items. See Browse the OPC DA Server Name Space for details.
Create group and item objects to interact with OPC server data. See Create OPC Toolbox Data Access Objects for information.
A connected client object allows you to interact with the OPC server to obtain information about the name space of that server. The server name space provides access to all the data points provided by the OPC server by naming each of the data points with a server item, and then arranging those server items into a name space that provides a unique identifier for each server item.
This section describes how you use a connected client object to browse the name space and find information about each server item. These activities are described in the following sections:
Get the DA Server Name Space describes
how to obtain a server name space, or a partial
server name space, using the
getnamespace and
serveritems functions.
Get Information about a Specific Server Item describes how to query the server for the properties of a specific server item.
You use the
getnamespace
function to retrieve the name space from an OPC server. You
must specify the client object that is connected to the
server you are interested in. The name space is returned to
you as a structure array containing information about each
node in the name space.
The example below retrieves the name space of the Matrikon OPC Simulation Server installed on the local host.
da = opcda('localhost','Matrikon.OPC.Simulation.1'); connect(da); ns = getnamespace(da) ns = 3x1 struct array with fields: Name FullyQualifiedID NodeType Nodes
The fields of the structure are described in the following table.
From the example above, exploring the name space shows.
ns(1) ans = Name: 'Simulation Items' FullyQualifiedID: 'Simulation Items' NodeType: 'branch' Nodes: [8x1 struct] ns(3) ans = Name: 'Clients' FullyQualifiedID: 'Clients' NodeType: 'leaf' Nodes: []
From the information above,, you need to
reference the
Nodes field of a branch
node. For example, the first node contained within the
'Simulation Items' node is
obtained as follows.
ns(1).Nodes(1) ans ='. You use the
fully qualified ID to refer to that specific node in the
server name space when creating items with OPC Toolbox™ software.
You can use the
flatnamespace function to flatten a
hierarchical name space.
In addition to publishing a name space to all clients, an OPC server provides information about the properties of each of the server items in the name space. These properties provide information on the data format used by the server to store the server item value, a description of the server item, and additional properties configured when the server item was created. The additional properties can include information on the range of the server item, the maximum rate at which the server can update that server item value, etc. See OPC DA Server Item Properties.
You access a property using a defined set of property IDs. A property ID is simply a number that defines a specific property of the server item. Property IDs are divided into three categories:
OPC Specific Properties (1-99) that every OPC server must provide. The OPC Specific Properties include the server item’s Value, Quality, and Timestamp. For more information on understanding OPC data, see OPC Data: Value, Quality, and TimeStamp.
OPC Recommended Properties (100-4999) that OPC servers can provide. These properties include maximum and minimum values, a description of the server item, and other commonly used properties..
Vendor Specific Properties (5000 and higher) that an OPC server can define and use. These properties may be different for each OPC server, and provide a space for OPC server manufacturers to define their own properties.
You query properties of a server item using the
serveritemprops function,
specifying the client object, the fully qualified item ID of
the server item you are interested in, and an optional
vector of property IDs that you want to retrieve. If you do
not specify the property IDs, all properties defined for
that server item are returned.
Note
You obtain the fully qualified item ID from the server
using the
getnamespace function or the
serveritems function, which simply
returns all fully qualified item IDs in a cell array
of character vectors.
The following example queries the Item Description property
(ID 101) of the server item
'Bucket
Brigade.ArrayOfReal8' from the example in
Get the DA Server Name Space.
p = serveritemprops(da, 'Bucket Brigade.ArrayOfReal8', 101) p = PropID: 101 PropDescription: 'Item Description' PropValue: 'Bucket brigade item.'
For a list of OPC Foundation property IDs, see OPC DA Server Item Properties. | https://uk.mathworks.com/help/opc/ug/connect-to-opc-data-access-servers.html | CC-MAIN-2021-17 | refinedweb | 1,232 | 53.21 |
palindrome
palindrome program to find the given string is palindrome...];
}
if(st.equalsIgnoreCase(reversedSt)){
System.out.println("String
writig a program - Java Beginners
writig a program Write an application that reads in a five digit integer and determines whether or not it is a palindrome. if the number is not five digits long, displays an error message indicating the problem to the user... was developed to execute the business logic. You can develop EJB, or Java Beans to process
Online Java Training for Beginners
Online Java Training for Beginners
The online java training for beginners... and benefits of Java.
In this age of advanced technology, Java has made... for the beginners.
The Online Java Programming course for the beginner's
palindrome array problem
palindrome array problem I'm having trouble figuring this assignment out. Can someone please help me?
Generate (write the code) and save in an array Palidrome[250][5] all the 5 letter words using {a, b, c}
Write the code
Roseindia JSP Tutorial
are made with an objective to help learners grasp the fundamentals of JSP in easy... for beginners as well as experienced programmers as the JSP tutorials... tutorials for you that makes it quite easy for you to create dynamic
a java program - Java Beginners
for more information.... a java program well sir, i just wanna ask you something regarding... about the second line...
i have made my program but not able to click
core java little bit but now i m confused as i don't know from where shud i start. Easy boss.
U r telling u stood first in d race & asking others "HOW SHOULD I WALK?".
If ur comfortable with C++(core java) then its too easy
java -
prime pal - Java Beginners
prime pal 1) WAP to print all prime-palindrome number range between 1-500
2) WAP to print all prime-Fibonacci number range between 1-500
Hi Friend,
1)Prime and Palindrome:
import java.util.. While designing it, i came across a problem. I tried to store the functional dependencies using the hashmap class. e.g. for a dependency A->B, i made... A->B. This i knew after reading the java docs. So is there any workaround beginners - Java Beginners
the following links: beginners what is StringTokenizer?
what is the funciton
Converting Anything to String
Java: Converting Anything to String
Summary: Converting any data to strings is easy.
You can do almost everything with concatenation, but can get more... for converting
any object into a string.
printf() was added in Java 5 to use a format
java features - Java Beginners
program malfunction.
the java designers made hard attemp to alter...java features Why java is called as neutral language hai frnd..
JAVA is not neutral language...but ARCHITECTURE-NEUTRAL..
1 of the main
: Java Compilation error. - Java Beginners
on java visit to :
Thanks
static...: Java Compilation error. what is the difference between static
java - Java Beginners
in Java. PROGRAM TO IMPLEMENT LINKED LIST
Implementation File... in current
}
previous.nxt=null; //last but one's pointer is made to null thus... of the given data's node is made to point to the link of the node to be deleted
java program - Java Beginners
java program plzzzzz help me on this java programming question?
hello people.. can u plzzzzzzzzzzzzzzzzzzz help me out with this java programm. its... when there are no valid moves left to be made, or
when there is a single peg
What is Abstraction - Java Beginners
What is Abstraction What is abstraction in java? How is it used in java..please give an example also Hi Friend,
Abstraction is nothing... is made up of different components,does not need to know how the different
Java-call by value - Java Beginners
Java-call by value Why java is called strictly "call by value...,
The Java programming language does not pass objects by reference; it passes... to the same actual object, changes made through one reference variable
java - Java Beginners
java Hi, i have some problem in my code.In this code i have made two Panels(p1 and p2) and one tabbed pane(tp) in which i have added
second panel(p2). on panel 1 i have one button(Go to Tabbed Pane). I want, when i will click
code problem - Java Beginners
there is no Guarantee of packet's delevery so send me code made of TCP,
plz help me
Wt is easy way to learn UML
Wt is easy way to learn UML Hello Friends,
Please guide me,How to learn UML in easy way and efficent way.
Thanks in Adavance
Thanks,
Abiram
number of stairs - Java Beginners
case made of text. The user should choose the text character and the number...();
}
}
}
-----------------------------------
read for more information,
interfaces - Java Beginners
something that is already made. It is one of the most important feature of Object... is called the base class or the parent class. To derive a class in java the keyword... to :
Thanks
what is xfire? - Java Beginners
and the CXF website.Codehaus XFire is a next-generation java SOAP framework. Codehaus XFire makes service oriented development approachable through its easy... is proud to announce XFire 1.2.6! XFire is an open source Java SOAP framework
java.lang.OutOfMemoryException - Java Beginners
javax.sound.sampled package for my voicechat application and made it as thread based
sorting - Java Beginners
; Easy Sorting: For all the values in the array A, find the largest and store
arrays help - Java Beginners
Easy Sorting: For all the values in the array A, find the largest and store
Hiiii - Java Beginners
made modification
import java.sql.Connection;
import java.sql.DriverManager
help in uml - Java Beginners
help in uml you are required to produce a design in UML and an implementation of the design in Java .the design should represent the following... is recorded . The vehicle is then made available for hire or put in for maintenance
String.fromCharCode - Java Beginners
String.fromCharCode This is a two part assignment.
Part one, input a text and have it converted to upper case. This is easy.
The second part is to record and display any key that is pressed.
I am unable to separate
Java Numeric Pyramids - Java Beginners
Java Numeric Pyramids Hi! I'm a beginner in Java can somebody help with these two pyramids, I just can't find the right code.
Here's the first...");
}
}
Thanks Hello friend thanks for your help, but i made a mistake my second
abt proj - Java Beginners
abt proj I made login page, when i click on login button , thaen how to go on home page on home page containe some tab like add, delete, update,etc... the following link:
Ask Questions?
If you are facing any programming issue, such as compilation errors or not able to find the code you are looking for.
Ask your questions, our development team will try to give answers to your questions. | http://www.roseindia.net/tutorialhelp/comment/95794 | CC-MAIN-2013-20 | refinedweb | 1,155 | 65.62 |
I have written a java program to add elements in an array using Linear Recursion. The output obtained is not as expected. Can anyone point what is wrong with this program?
public class TestSum {
public int count = 0;
public int sum(int[] a){
count++;
if(a.length == count){
return a[count -1];
}
return sum(a) + a[count -1] ;
}
public static void main(String[] args) {
int[] a = {1,2,3};
int val = new TestSum().sum(a);
System.out.println(val);
}
}
return a[count -1] + sum(a);
Generally, recursive programs that are not re-entrant (i.e. relying on external state) are suspicious. In your particular case
count will change between invocations of
sum, making the behavior hard to trace, and ultimately resulting in the error that you observe.
You should pass the index along with the array to make it work:
// The actual implementation passes the starting index private static int sum(int[] a, int start){ if(a.length == start){ return 0; } return sum(a, start+1) + a[start]; } // Make sure the method can be called with an array argument alone public static int sum(int[] a) { return sum(a, 0); }
Unlike an implementation that increments the count external to the method, this implementation can be called concurrently on multiple threads without breaking. | https://codedump.io/share/HENE0JwqfvRB/1/how-does-linear-recursion-work | CC-MAIN-2016-50 | refinedweb | 212 | 52.6 |
No, thats not a typo or a joke. I have created a Docker container containing a single Unix executable with no dependencies that occupies less than 1 KB of space on disk. There are no other files included in the container not even libc.
Before explaning how this was accomplished, it is worth explaining why this was accomplished. caddy-docker (which is another tool I wrote and explain in detail here) routes incoming requests to running containers based on their labels.
I needed caddy-docker to act as a reverse proxy for a particular host and the easiest way to do that was by spinning up a container whose sole purpose was to contain two special labels. The container should not do anything until it is stopped.
Thats when I came up with an idea.
I immediately began working on the application, naming it hang for its rather unusual purpose. Go can easily produce executables that have no dependencies, allowing the Docker container to inherit from
scratch. The only downside is that Go executables tend to be massive, even very simple ones often exceed 8 MB in size.
This would never do.
I reasoned that a C application could easily be written that registered a signal handler for SIGTERM and quit when it was received. Unfortunately, this meant that I would need to use libc, which in turn meant the container would quickly become comparable in size to the Go executable. This would provide no advantage at all.
Yes, the quickest way to produce a tiny executable with no dependencies is to write it in assembly. I prefer Intel-style syntax so NASM was the obvious choice.
Once upon a time, back in the early days of the x86 architecture, a syscall looked something like this:
mov eax, 0x01 mov ebx, 0x00 int 0x80
The first line specifies which syscall to invoke
sys_exit in this case. The second line specifies the exit value (0). The third line generates an interrupt which the kernel will then process.
x86 operating systems later moved to using
sysenter/
sysret while x86_64 introduced a new (aptly named) opcode:
syscall. Similar to the example above, the
rax register is used for specifying the specific syscall to invoke. The example above could be rewritten in x86_64 assembly like this:
mov rax, 0x3c mov rdi, 0x00 syscall
Note that the syscall number for
sys_exit is different on x86_64.
Registering a signal handler is fairly trivial in C:
#include <signal.h> void handler(int param) {} int main() { struct sigaction sa; sa.sa_handler = handler; sigaction(SIGTERM, &sa, 0); return 0; }
Unfortunately, a couple of things are being hidden by the C standard library:
SA_RESTORERis added to
sa.sa_flags
sa.sa_restorermember is set to a special function
We cannot directly translate the C code to assembly since the
sigaction struct doesnt correspond to the one
sys_rt_sigaction expects. Heres what the kernel struct looks like in NASM:
struc sigaction .sa_handler resq 1 .sa_flags resq 1 .sa_restorer resq 1 .sa_mask resq 1 endstruc
Each member is 8-bytes in size.
First, we must allocate space for the struct in the
.bss section:
section .bss act resb sigaction_size
Note that
sigaction_size is a special value the assembler creates for us it is equal to the size of
sigaction in bytes. The struct can then be initialized in the
.text section like so:
section .text global _start lea rax, [handler] mov [act + sigaction.sa_handler], rax mov [act + sigaction.sa_flags], dword 0x04000000 ; SA_RESTORER lea rax, [restorer] mov [act + sigaction.sa_restorer], rax
handler and
restorer are labels that well come to in a moment. Now we can invoke the
sys_rt_sigaction syscall:
mov rax, 0x0d ; sys_rt_sigaction mov rdi, 0x0f ; SIGTERM lea rsi, [act] mov rdx, 0x00 mov r10, 0x08 syscall
The next step is waiting for the
SIGTERM signal to arrive. The
sys_pause syscall easily accomplishes this:
mov rax, 0x22 ; sys_pause syscall
The handler itself is fairly trivial it doesnt really do anything:
The restorer is fairly simple as well, though it does need to invoke the
sys_rt_sigreturn syscall:
restorer: mov rax, 0x0f ; sys_rt_sigreturn syscall
Two commands are required to build the application. Assuming the source file is named
hang.asm, the commands are:
nasm -f elf64 hang.asm ld -s -o hang hang.o
This produces an executable named
hang and its small:
$ stat hang File: hang Size: 736
Yes, thats 736 bytes.
The
Dockerfile is fairly simple, requiring only two commands:
FROM scratch ADD hang /usr/bin/hang ENTRYPOINT ["/usr/bin/hang"]
Lets see if the container works:
$ docker build -t nathanosman/hang . $ docker run -d --name hang nathanosman/hang
At this point, the container should remain running:
$ docker ps -a CONTAINER ID IMAGE COMMAND STATUS f1861f628ea8 nathanosman/hang "/usr/bin/hang" Up 3 seconds
It should also immediately stop when
docker stop is run:
It works! Lets make sure the container is only as large as the executable:
$ docker images REPOSITORY TAG CREATED SIZE nathanosman/hang latest 2 minutes ago 736B
And there you have it a really tiny container!
You can find the source code in its entirety here:
github.com/nathan-osman/hang
Please enable JavaScript to view the comments powered by Disqus.
Continue reading on blog.quickmediasolutions.com | https://hackerfall.com/story/a-1-kb-docker-container | CC-MAIN-2018-17 | refinedweb | 866 | 64 |
Notifications have the potential to be a cumbersome feature to develop — and project managers might think they’re something you can just whip up a day before a product launch; after all, it’s just a tiny little box with a call to action! Luckily for those of us out there who don’t want to stress and work late to get these projects done, there are plenty of libraries out there to help us implement notifications quickly.
With close to half a million downloads a week by the end of 2020, React-toastify is a popular notification library offering easy implementation of a small, customizable notification system. With a library like this, you can get notifications up and running without taking up too much of your time.
One thing to consider when choosing to build vs. finding an existing library is the time it will take to roll out a custom solution vs. the size, functionality, and implementation effort of a library that already exists. At 7.1kb zipped, React-toastify is an attractive solution over building a React toast notification system yourself.
Before we get into the demo code, let’s take a look at some of the features that make React-toastify a tempting library to implement.
This is an obvious one — top or bottom on the left, right, or center. Where you place your notifications for maximum visibility and to ensure high engagement rates will depend on your product and your users’ habits. If this data isn’t available to you, consider creating a tracking plan to start gathering data about how users respond to your notifications (this rings true for all choices you make regarding notification design).
This is another obvious but important feature. Choose between the default notification type, or implement color-formatting notifications for information, success, warnings, and errors. There’s even support for dark mode!
There are multiple options for dismissing notifications, including auto-close with a timer you can customize or have pause on user hover. There are also the classic swipe/drag to close and the straightforward click to close options. All are supported on the web and mobile.
If your market includes those who speak languages that are written right to left, there’s support for RTL text! If your product currently only supports one left-to-right language, having right-to-left text support for future growth is always a good thing. We love inclusion.
If the four default animation options aren’t your cup of tea, the docs will point you to support for customization with react-transition-group as well as React-toastify’s own cssTransition helper. This allows you to create your own enter and exit transitions by either building custom solutions or bringing in other CSS animation libraries.
To get started on your React toast notifications, use either:
$ npm install --save react-toastify
or if you’re set up for yarn:
$ yarn add react-toastify
This is a basic setup as per the direction on their GitHub page:
import React from 'react'; import { ToastContainer, toast } from 'react-toastify'; import 'react-toastify/dist/ReactToastify.css'; function App() { return ( <div> <button onClick={() => toast("Wow so easy!")}> Notify! </button> <ToastContainer /> </div> ); }
Alright, so after a read through the docs, we can see the ToastContainer component comes with a bunch of default settings related to position, type, and auto-close. This default toast notification will be on the top right and set to React-toastify’s default unicorn rainbow 🦄 notification style. Let’s change up the notification location and type. We’ll customize the toast notification so it pops into view at the top center, and we’ll change the type of notification to an informative one, which will appear in the standard color format of blue.
To change the position, we’ll add the property to the ToastContainer component and specify where we want it:
<ToastContainer position="top-center" />
To change the notification type, we’ll need to change the toast emitter to the info type. Let’s also change the message to be content appropriate:
<button onClick={() => toast.info("You're informed!")}> Notify! </button>
And that’s pretty much it! In mere minutes you can have a React toast notification implementation that’s convenient and customizable.
If you’ve got React-toastify up and running, you can see the default auto-close is set to five seconds, and the auto-close will pause functionality on hover. By default, both the click to close and the drag/swipe to close properties are set to true. It’s got a nice bounce CSS animation in and out. Let’s customize the auto-close to 10 seconds and add a (subjectively) nicer slide transition:
<ToastContainer position="top-center" autoClose={10000} />
React-toastify comes with four built-in transitions if you don’t want to roll your own. They are bounce (default), zoom, flip, and slide. To use a slide animation, we need to import it from the library, and then add it as a transition inside the emitter:
import { ToastContainer, toast, Slide } from "react-toastify"; <button onClick={() => toast.info("You're informed !", { transition: Slide }) } > Notify ! </button>
And voilà! You’ve now got this wonderful little React toast notification app.
If you'd like to check out the working example, you can find it here.
As we’ve already touched on, React-toastify has some really cool features when you dig into it. Right-to-left text language support is important for the inclusion of all users and the languages your product supports (or could support in the future). Fully-customizable CSS animations are possible through the use of their CSS helper, and they also provide their list of CSS classes if you’d like to override them with your own styles. Even though this library is really popular, it isn’t the only option out there.
You can search around and select what library works best for you and your coding style and patterns. React-toast-notifications is a similar React toast notification library that offers a simple, clean design with plenty of customization options. Toasted notes looks very minimal upfront, but this sleek library also offers a completely customizable solution if needed through the use of a render callback.
So, not to worry if you’re asked to implement a notification system at the last minute! There are people who have already created and are continuing to contribute to great React toast notification libraries for you.
Don’t neglect workflow notifications! In today’s world of distractions throughout the workday, it can be easy to ignore or dismiss ill-timed workflow notifications. MagicBell offers a useful and usable in-app notification system for users to see their notifications when it’s convenient for them.
Get started with MagicBell’s easy-to-implement, customizable workflow notification system. Check out how to create a notification in the docs, and check out our tutorial on how to implement MagicBell in Angular.
Here are a few related articles!
Get a demo and see how you can add MagicBell's notification inbox to your product within an hour.Schedule a Demo | https://magicbell.com/blog/react-toast-notifications-made-easy | CC-MAIN-2021-43 | refinedweb | 1,188 | 52.9 |
User Details
- User Since
- Aug 4 2014, 10:15 AM (357 w, 5 d)
Thu, May 27
Thanks Renato. Short and long term plans look good to me.
Wed, May 26
Apr 21 2021
Dec 16 2020
I tested this change on Graviton2 aarch64-linux with clang -moutline-atomics.
Clang was configured with compiler-rt:
cmake -v -G Ninja \ -DCLANG_DEFAULT_RTLIB:STRING=compiler-rt \ -DLLVM_ENABLE_PROJECTS:STRING="clang;compiler-rt;libunwind" \ -DCLANG_DEFAULT_UNWINDLIB:STRING=libunwind \ -DCMAKE_BUILD_TYPE:STRING=Release \ -DCMAKE_INSTALL_PREFIX:PATH=/home/ubuntu/llvm/usr/ \ ../llvm
Dec 5 2020
I tested this change on Graviton2 aarch64-linux by building with clang -O3 -moutline-atomics and running make test: all tests pass with and without outline-atomics.
Clang was configured to use libgcc.
Dec 24 2019
This cleanup looks good to me.
Nov 5 2019
Oct 28 2019
Looks good to me. Thanks!
Oct 7 2019
Ping.
Sep 24 2019
Sep 19 2019
I looked at both the SLP and loop vectorizer and I think this is more work than I can do right now.
Sep 18 2019
To catch more dot product cases, we need to fix the passes above instruction selection.
Sep 17 2019
The new patch does not use the first argument of the dot product instruction: we now set it to zero.
Patch tested on x86_64-apple-darwin with make check-all.
Sep 16 2019
Sep 14 2019
Excellent!
Thanks for catching those patterns.
Please commit.
I still see a link error on aarch64-linux on master:
/usr/bin/ld: tools/clang/lib/AST/CMakeFiles/obj.clangAST.dir/AttrImpl.cpp.o: in function `clang::AttributeCommonInfo::getAttributeSpellingListIndex() const': /home/ubuntu/llvm-project/llvm/tools/clang/include/clang/Basic/AttributeCommonInfo.h:166: undefined reference to `clang::AttributeCommonInfo::calculateAttributeSpellingListIndex() const' collect2: error: ld returned 1 exit status
I reverted locally this patch and it finishes building.
Sep 13 2019
Sep 12 2019
Looks good to me, please apply.
Updated patch by removing the patterns that generate i16. Patch passes make check-all on aarch64-linux.
The tablegen error only happens on the i16 patterns:
def : Pat<(i16 (extractelt (v8i16 V128:$V), (i64 0))), (EXTRACT_SUBREG V128:$V, hsub)>; def : Pat<(i16 (extractelt (v4i16 V64:$V), (i64 0))), (EXTRACT_SUBREG V64:$V, hsub)>;
If I remove these two patterns, make check-all passes.
Sep 11 2019
Never mind. You cannot do the other ones as it would call Match too many times and would not follow the semantics of the original code.
Almost LGTM.
Sep 10 2019
I like the patch. Thanks!
Sep 7 2019
Sep 6 2019
Aug 20 2019
Updated patch to current llvm trunk.
Aug 15 2019
Ping patch.
The last version of the patch addresses all the comments from the reviews.
Ok to commit?
Jun 11 2019
For some reason asan/tests/asan_noinst_test.cc is not compiled by make check-asan, and that has exposed a compile error that was not triggered by the other tests:
sanitizer_double_allocator.h:30:11: error: use of non-static data member 'use_first_' of 'DoubleAllocator' from nested type 'DoubleAllocatorCache' if (use_first_) ^~~~~~~~~~
The updated patch fixes this by accessing the non-static field of the enclosing class through a this pointer to one of the instances:
-  if (use_first_)
+  if (this->use_first_)
The updated patch passes make check-lsan check-asan and is still under test for check-all on aarch64-linux.
The updated patch I will post is addressing all the review comments.
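The C++ rule behind this fix can be sketched in a standalone example (the names below are hypothetical, modeled on the snippet above rather than the actual sanitizer code): a nested type has no implicit pointer to an instance of the enclosing class, so the enclosing object's non-static members must be reached through an explicit object.

```cpp
#include <cassert>

struct DoubleAllocator {
  bool use_first_ = true;

  struct DoubleAllocatorCache {
    // A nested type has no implicit pointer to an enclosing instance, so
    // the outer object's non-static field must be reached through an
    // explicit pointer passed in by the caller.
    int Pick(DoubleAllocator *outer) { return outer->use_first_ ? 1 : 2; }
  };
};

int RunDemo() {
  DoubleAllocator a;
  DoubleAllocator::DoubleAllocatorCache cache;
  int first = cache.Pick(&a);   // 1: use_first_ is true
  a.use_first_ = false;
  int second = cache.Pick(&a);  // 2: use_first_ is now false
  return first * 10 + second;
}
```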
May 19 2019
Addressed comments from @vitalybuka: factored up the 3 versions and added more tests.
ninja check-all passes with no new failures on an AArch64 Graviton A1 instance.
May 10 2019
Fix the x86_64 overflow warning with 1ULL << 48.
Updated patch fixes ASan.
May 9 2019
I have verified that the updated patch compiles and that it reduces the execution time of a trivial leak-sanitized example.
Apr 12 2019
This patch reduces the number of #ifdefs as suggested by Kostya, and speeds up both the leak and address sanitizers on aarch64.
Passes check-all on x86_64-linux and aarch64-linux is still under test.
Worked with Brian Rzycki @brzycki.
Apr 8 2019
Also, this is changing only the standalone lsan, not lsan used as part of asan. Right?
standalone lsan is not widely used, AFAICT.
Address review comments from Kostya: move AArch64 lsan allocator to a separate file to avoid #ifdefs.
Apr 5 2019
Ok, I will prepare an updated patch.
Thanks Brian and Kostya for your reviews.
Looks good to me.
Apr 3 2019
Rebased patch on today's trunk.
Mar 29 2019
I am accepting the scalable vector types based on the comments in
Feb 21 2019
This seems to be a useful transform that is not yet covered by the current implementation of jump-threading.
I think GCC calls it control dependence DCE.
Please run and report performance numbers on the testsuite and other benchmarks that you have access to.
Nov 1 2018
Maybe you can add the testcase from my previous patch:
Oct 29 2018
Looks good to me.
Ok, thanks!
Oct 25 2018
Oct 24 2018
The change looks good to me. Thanks!
ok
Oct 23 2018
ok
The change looks good with some minor changes. Thanks!
ok
Oct 22 2018
Oct 20 2018
ok
Oct 15 2018
Fixed in
Are both fixes necessary to fix the issue (the one for back propagation and the one to bail out if the entry block is cold), or is either one sufficient? The patch description only mentions the former.
Fix two comments from @tejohnson.
I will post a patch to fix the comments from @tejohnson.
Does this patch fix
In which case you may want to add the testcases from that bug.
Oct 5 2018
Added an early return for outlining if function entry is cold.
Added a check for invoke calls: invoke should not be marked as cold by back propagation.
Oct 4 2018
Before this patch we have a 10% regression in sqlite with hot-cold-split pass.
With this patch I now see a 3% speedup on sqlite vs. no hot-cold-split pass.
Oct 2 2018
Looks good.
Oct 1 2018
Please add a testcase that will exercise the new flag.
Sep 27 2018
Sep 21 2018
lgtm | https://reviews.llvm.org/p/sebpop/ | CC-MAIN-2021-25 | refinedweb | 1,028 | 74.19 |
GWT is a framework developed by Google for building AJAX enabled web applications using the Java programming language. It comprises:
- An API for creating GUI (similar to Swing), which manipulates the web browser's Document Object Model (DOM).
- A Java-to-JavaScript compiler.
- An environment for running and debugging GWT applications.
This approach offers some advantages:
- Assuming you know Java, there is no need to learn a new programming language.
- You can build an AJAX enabled web application without writing JavaScript or HTML.
- Everything is written in Java, so Java's advanced development tools, including debugging and re-factoring support, are available.
- GWT shields the developer from browser idiosyncrasies
Finally, since everything is in Java (even the View part of the MVC pattern), we should be able to create UI tests easily. This article explores some approaches.
1. First attempt
1.1 GWT using TDD
We will start with a simple example: we want to display a text entry field, a button and a label. When we click on the button, the text field content entered by the user is placed in the label. The corresponding test (in JUnit 4) will be:
@Test
public void testClick() {
    GwtExample view = new GwtExample();
    Assert.assertNull(view.getLabel().getText());
    view.getTextBox().setText("my text");
    // creation of a basic "click" event
    NativeEvent event = Document.get().createClickEvent(0, 0, 0, 0, 0, false, false, false, false);
    // dispatch of the event
    DomEvent.fireNativeEvent(event, view.getButton());
    Assert.assertEquals("my text", view.getLabel().getText());
}
Then, we can write the corresponding View code:
public class GwtExample extends Composite {
    private Label label;
    private TextBox textBox;
    private Button button;

    public GwtExample() {
        FlowPanel fp = new FlowPanel();
        textBox = new TextBox();
        fp.add(textBox);
        button = new Button("validate");
        fp.add(button);
        button.addClickHandler(new ClickHandler() {

            public void onClick(ClickEvent event) {
                label.setText(textBox.getText());
            }
        });
        label = new Label("init");
        fp.add(label);
        initWidget(fp);
    }

    public Label getLabel() { return label; }
    public TextBox getTextBox() { return textBox; }
    public Button getButton() { return button; }
}
Finally, we launch the preceding JUnit test:
The above error means that GWT classes are not to be used in a standard JVM; they only work once they are compiled into JavaScript, and executed in a browser.
1.2 Some existing solutions
1.2.1 GWTTestCase
GWT provides a class to perform unit tests called GWTTestCase; however, it suffers from a number of limitations:
- The management of native events is only implemented from GWT 1.6. GWTTestCase does not allow testing the View part in an efficient manner as per the previous versions.
- Slowness. This class launches a hidden HostedMode (the GWT development environment), which needs a few seconds to initialize itself.
- Locating files. The unit tests have to be compiled by GWT. They must therefore be referenced by the MyApp.gwt.xml file. This complicates the application launch and packaging quite a bit, especially with Maven.
Moreover, using the GWTTestCase and GWTTestSuite greatly limits access to Java APIs: the unit tests must be compatible with the GWT compiler (which takes care of compiling the Java UI code into JavaScript). This means that, for example, using Java reflection is not possible. It is therefore not possible to use test libraries such as Unitils or Easymock.
The list of Java classes emulated in JavaScript is available here.
GWT 2.0 brings an improvement to GWTTestCase in that the class no longer uses native libraries to run tests. HtmlUnit is used instead of the browser used by Hosted Mode. In GWT 2, GWTTestCase is platform independent. But some limitations of GWTTestCase are still there: test execution is slow and there is no possibility to use a standard test framework.
1.2.2 Using interfaces
A solution to test a GWT application is to not use GWT objects while testing but instead replace all GWT objects with mock objects that will work in a standard JVM. This solution presents a major inconvenience however. Since the GWT objects are not interfaces but rather concrete classes, we will have to modify the code of our application so that it uses interfaces (you can find an example here). This solution impacts the design of our application.
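As a rough sketch of that approach (the interface and fake below are hypothetical, not any project's actual API), the view code would depend on a hand-written abstraction, and the tests would substitute a plain-JVM fake for the real GWT widget:

```java
// Hand-written abstraction the view depends on, instead of the concrete
// GWT widget classes.
interface HasText {
    String getText();
    void setText(String text);
}

// A plain-JVM fake used in unit tests in place of the real GWT Label.
class FakeLabel implements HasText {
    private String text = "";
    public String getText() { return text; }
    public void setText(String text) { this.text = text; }
}
```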
1.3 The heart of the problem
In order to move forward, we have investigated why Google blocked the execution of GWT classes in a standard JVM. The reason is simple: a good portion of the code for these classes uses JSNI code. JSNI code is presented in the following manner:
1 public static native void alert(String msg) /*-{ $wnd.alert(msg); }-*/;
This is a native function: at execution time, the JVM will look for a function with that name in a native library (a DLL or a .so). When these classes are compiled into JavaScript, the method is replaced by the code located between the /*- and -*/ markers. This is why these classes cannot be executed in a standard JVM: outside the browser, the JavaScript implementation simply does not exist.
In addition, part of GWT's behaviour is not implemented in Java, but in JavaScript (or by the browser's HTML rendering engine). Even if we succeed in circumventing the problem of the native methods, we have to find a way to reimplement this behaviour.
2. Gwt-Test-Utils framework
2.1 Objective
Our objectives for efficient GWT testing are as follows:
- Our test classes should not require any annoying loading time.
- We should be able to manipulate the GWT classes directly, without intermediary interfaces that render the project more complex.
- We should be able to use all the Java Standard APIs, specifically the introspection Java APIs (to use tools like Unitils).
- We want something that is light and compatible with Maven.
2.2 The "gwt-test-utils" framework
During a customer project, we developed a framework which met these objectives.
We have designed a test framework to modify the GWT classes without any additional work for the developer. It starts with "hot" modifications of the bytecode of the classes for the GWT objects, to replace the native JSNI methods by Java methods, as shown in the following diagram:
Note: the presentation of the technical implementation of the framework is not part of the scope for this article. We will concentrate on its usage.
We have published this framework as an open source project, Gwt-Test-Utils, so that anyone can use it.
2.3 Using the framework
We will start by writing a simple Junit 4 test to validate the creation of a GWT button:
@Test
public void checkText() {
    Button b = new Button();
    b.setText("toto");
    Assert.assertEquals("toto", b.getText());
}
As we explained earlier, such a test generates an error:
For Gwt-Test-Utils to be able to modify GWT classes’ bytecode, it will be necessary to execute our tests with a Java agent that we have specifically developed. We must therefore add a new argument to the JVM launching command: -javaagent:path_to_bootstrap.jar.
After that, we have to initialize Gwt-Test-Utils within the test code:
@BeforeClass
public static void setUpClass() throws Exception {
    // patch GWT standard components
    PatchGWT.init();
}
The test can now be validated: Gwt-Test-Utils replaces the GWT classes’ bytecode on the fly. This way, the GWT HostedMode is not launched and the execution time is in the order of a few milliseconds. And we can use all the standard tools.
For example, we can use Easymock to test a call to a GWT-RPC service:
static interface MyRemoteService extends RemoteService {
    String myMethod(String param1);
}

static class MyGwtClass {
    public String myValue;

    public void run() {
        MyRemoteServiceAsync service = GWT.create(MyRemoteService.class);
        service.myMethod("myParamValue", new AsyncCallback<String>() {
            public void onFailure(Throwable caught) { myValue = "error"; }
            public void onSuccess(String result) { myValue = result; }
        });
    }
}

@Mock
private MyRemoteServiceAsync mockedService;

@Test
public void checkGwtRpcOk() {
    // Setup

    // mock remote call
    mockedService.myMethod(EasyMock.eq("myParamValue"), EasyMock.isA(AsyncCallback.class));
    expectServiceAndCallbackOnSuccess("returnValue");

    replay();

    // Test
    MyGwtClass gwtClass = new MyGwtClass();
    gwtClass.myValue = "toto";
    Assert.assertEquals("toto", gwtClass.myValue);
    gwtClass.run();

    // Assert
    verify();

    Assert.assertEquals("returnValue", gwtClass.myValue);
}
Note: the @Mock annotation is similar to the one we can find in Unitils. It is used to declare a mocked object.
2.4 The constraints and non-constraints of this framework
- There is no need to change the design of / redevelop the GWT application in order to make it testable.
- You need to modify the launch command for the unit tests by adding the argument -javaagent:path_to_bootstrap.jar. This has to be done in the IDE settings and/or in the Maven configuration (in the surefire plugin configuration).
- It’s mandatory to use a Java 6 JVM to execute the tests (a Java 5 JVM does not allow you to modify the code of the native methods). This is easy with Eclipse, by changing the JRE execution. With Maven, you only need to change the JVM used by the surefire plugin.
These constraints are not insignificant, however we feel they are outweighed by the advantages of the test framework.
See Gwt-Test-Utils demo1 project for a complete example of a Maven configuration.
2.5 Results
In our project (a 26k line GWT application compiled using JRockit 1.5, tested under Hotspot 1.6) we achieved 85% code coverage with a total of 600 unit tests (14k lines of test code). However, we have concentrated our tests on the controller part of the GWT application, the purpose being not to re-test GWT, but rather to validate the behaviour that we had implemented.
3 Integration test
3.1 First limit
Gwt-Test-Utils framework allows us to test our UI efficiently. The weakness of these tests is that they are unit tests: we are testing the behaviour of a single view. The server part (which receives the GWT-RPC calls) is mocked. The majority of problems we have encountered are on view chaining, with lots of GWT-RPC calls in views.
3.2 Writing integration tests
In our case, the server backend uses Spring. Therefore, we test it by using SpringJUnit4ClassRunner, which starts the entire server backend for us under JUnit. So, by adding a bit of glue, we have "closed the loop": instead of mocking the GWT application server backend, we have connected the UI part to the server backend.
The GWT application and its server are therefore completely started and operational within a unique JVM, and are ready to perform the tests. For example, we can write a test scenario which:
- launches the server backend
- launches the GWT application (by simply calling its EntryPoint)
- stimulates the GWT application, which will call the server backend
- goes through the views
These tests are no longer unit tests; they are true integration tests.
3.3 In practice
No modifications are needed on the server side. We simply added some glue code to connect the GWT application to the Spring part. We already had integration tests that mocked the services consumed by the server such as databases, Webservices and so on. We simply reused this environment.
To simulate the GWT application, we can write some Java. For example:
MyView myView = (MyView) RootPanel.get(0);
myView.getButton().click();
It is not very practical; we need to add getters everywhere. Moreover, we will often want to fetch “the 4th checkbox located in table X which is itself located in container Y, itself located in the RootPanel”. This is why we have developed a small language that resembles XPath and provides a mechanism to call a method on a given component.
For example, to reach a label located in the first widget of a container, itself being in the third widget of the RootPanel, we can write:
/rootPanel/widget(2)/widget(0)
The text content of this label, normally accessible through the getText() method, is accessible as
/rootPanel/widget(2)/widget(0)/text
All this is made possible by a heavy usage of Java reflection.
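As an illustration of the reflection involved (a simplified sketch, not the framework's actual resolver), a trailing path segment such as text can be mapped to the matching zero-argument getter:

```java
import java.lang.reflect.Method;

class PathSegmentResolver {
    // Maps a segment like "text" to the getter getText() and invokes it
    // reflectively on the target widget.
    static Object readProperty(Object target, String name) throws Exception {
        String getter = "get" + Character.toUpperCase(name.charAt(0)) + name.substring(1);
        Method m = target.getClass().getMethod(getter);
        return m.invoke(target);
    }
}

// Stand-in for a widget with a getText() accessor.
class LabelStub {
    public String getText() { return "foo"; }
}
```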
Test scenarios can be written using these XPaths, in a CSV file. Here is an example of a CSV scenario which:
- Starts the GWT application
- Checks that the content of a label contains 'foo'
- Simulates a click on a button, which calls a Spring service via GWT-RPC, and replaces the label content
- Checks that the content of the label has changed and now contains 'bar'
Here is the scenario:
initApp;
assertExact;foo;/rootPanel/widget(2)/widget(0)/text
click;/rootPanel/widget(2)/widget(1)
assertExact;bar;/rootPanel/widget(2)/widget(0)/text
A small amount of code allows us to launch this scenario in JUnit. These tests are therefore executed the same way as the unit tests, but simply take longer to execute (due to the startup of the server part).
The integration test part is also in the Gwt-Test-Utils project. Documentation is provided here.
We could have done about the same thing with Selenium. There are three major differences:
- GWT does not fit well with Selenium due to the component IDs (however, it can be done).
- Our localisation language is, in our opinion, much simpler and more efficient.
- Our tests are launched by JUnit, which means we can launch them from Eclipse or during the Maven build, which makes their execution easier.
3.4 Conclusion
In the context of our project, we have written 900 UI unit tests and about 40 integration tests.
This collection of tests ensures an overall non-regression of all the features of our application:
- The initial investment was not that significant, because the testing system was developed as we built the application.
- The benefits are enormous: in just about one year of development, we have had virtually no regression.
- The maintenance of the scenarios is expensive (a few hours every 15 days), but we feel it is totally justified in comparison to the benefits incurred.
- The complete non-regression testing suite for the application (in a mocked environment) takes only 3 minutes to execute.
- We often do refactoring inside the GWT application without even launching GWT: if the non-regression tests pass, we are sure that nothing is broken.
GWT has turned out to be a UI technology which, with a few tools, enables highly advanced testing, further increasing its productivity.
The test framework, as well as its documentation, has been published as an open source project, Gwt-Test-Utils.
About the Authors
Bertrand Paquet works for Octo Technology, a French MIS Architect company. Based on the specific needs of his customers he can either be a Technical Leader, an Architect or a Scrum Master. His job is to facilitate Octo Technology’s corporate customers in realizing their development projects successfully. Bertrand’s favorite topics are continuous integration, tests, agile development and team productivity.
Gael Lazzari works as a consultant for Octo Technology. A Java specialist, he works at Octo’s customers’ sites as an Agile developer.
Community comments
JRE test Async Call
by simon wu /
Another idea to test an async call within a JRE test case is to use expectLastCall().andAnswer(), provided by EasyMock.
The following code shows the basic idea.
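A hand-rolled sketch of the same idea (hypothetical names, no EasyMock dependency): the fake service invokes the callback immediately, the way an andAnswer stub would.

```java
// Minimal callback interface standing in for GWT's AsyncCallback.
interface AsyncCallback<T> {
    void onSuccess(T result);
    void onFailure(Throwable caught);
}

// Fake service: "answers" the call immediately with a canned value,
// which is what expectLastCall().andAnswer(...) arranges with a mock.
class FakeServiceAsync {
    void myMethod(String param, AsyncCallback<String> callback) {
        callback.onSuccess("echo:" + param);
    }
}

class Client {
    String myValue;

    void run(FakeServiceAsync service) {
        service.myMethod("ping", new AsyncCallback<String>() {
            public void onSuccess(String result) { myValue = result; }
            public void onFailure(Throwable caught) { myValue = "error"; }
        });
    }
}
```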
Nice overview
by Dave LeBlanc /
I've been tackling the GWT-TDD challenge for a few years now, and I think you've done a good job of explaining the issues and providing some clever solutions.
One of the best approaches I've found, outlined elsewhere, is using Presenter-First or the MVC + Humble Dialog pattern. In this sense your controller and model are decoupled from any GWT specifics, and the view layer is simply a paper-thin adapter to GWT logic. This is a beneficial architecture, but it limits the amount of logic you can have in the view, plus the obvious downside that the view class remains untested. With your approach, those limitations are gone.
Also of note is the GwtMockUtilities, which allows you to provide a replacement for the GWT.create method. This can be handy when your controller interacts with GWT objects, say for internationalization, that are easily mockable. This might be a good middle-ground for those that find the restrictions of your framework too imposing.
Very nice! | https://www.infoq.com/articles/gwt_unit_testing?utm_source=articles_about_gwt&utm_medium=link&utm_campaign=gwt | CC-MAIN-2019-13 | refinedweb | 2,784 | 53.61 |
rfxcom 0.3.0
RFXCOM RFXtrx Library for Python 3.3+
A Python library for working with your RFXTrx for automation projects.
This library is designed to work with Python 3.3+ [1] and asyncio (but other transports can be implemented). Currently it is primarily used by the home project, a dashboard for managing and visualising your home automation devices.
This library is relatively new and thus the number of devices are limited to those that @d0ugal owns. This means, that the current list of fully supported protocols are:
- Status Packets from the RFXTrx itself.
- Energy usage sensors (such as the Owl CM119/160 and Owl CM180)
- Temperature and humidity sensors (such as the Oregon THGN132)
- Lighting and power control devices from LightwaveRF
If you want to use a device and you don’t think its supported or you are unsure then please open an issue.
Installation
It is recommended that you get the latest version from PyPI with pip:
pip install rfxcom
However, if you want to grab the latest development version you can download the repository directly from github and run python setup.py install in the root of the repository.
Documentation
View the documentation on Read The Docs.
Quick Example
The following example shows some basic usage of this library: it sets up the asyncio event loop, points the library at the serial device path and attaches a simple handler function which prints out information about all the packets it receives, for example from energy usage sensors:
from asyncio import get_event_loop

from rfxcom.transport import AsyncioTransport

dev_name = '/dev/serial/by-id/usb-RFXCOM_RFXtrx433_A1WYT9NA-if00-port0'

loop = get_event_loop()


def handler(packet):
    # Print out the packet - the string representation will show us the type.
    print(packet)

    # Each packet will have a dictionary which contains parsed data.
    print(packet.data)

    # You can access the raw bytes from the packet too.
    print(packet.raw)


try:
    rfxcom = AsyncioTransport(dev_name, loop, callback=handler)
    loop.run_forever()
finally:
    loop.close()
Contributing
If you would like to contribute to python-rfxcom, you will need to use tox to run the tests. This will test against Python 3.3, Python 3.4, pyflakes for code linting and build the documentation. To do this, you simply need to install tox and then call tox from the root of the python-rfxcom git repository.
pip install tox
tox
Don’t worry if you can’t test against Python 3.3 and Python 3.4 locally, many people will only have one installed. We use the brilliant Travis CI to verify all pull requests.
- Author: Dougal Matthews
- License: BSD
- Platform: any
- Categories
- Package Index Owner: d0ugal
- Package Index Maintainer: rfxcom
- DOAP record: rfxcom-0.3.0.xml | https://pypi.python.org/pypi/rfxcom/0.3.0 | CC-MAIN-2017-09 | refinedweb | 452 | 56.96 |
I have been running with this in my /etc/csh.cshrc for the last few days with a recent stable build (since the -I option is very new), as an experiment to see if it would annoy me.

if ( $?prompt ) then
    alias rm 'rm -I'
endif

The /etc/profile equivalent, for sh users:

alias rm='rm -I'

Well, it hasn't annoyed me at all! In fact, it has made me feel more comfortable using rm, even though I don't make the kinds of mistakes that rm -I saves one from anymore. I strongly recommend it to everyone, developers and users alike.

-Matt
Matthew Dillon <dillon@xxxxxxxxxxxxx>
In a pure brute-force solution, we would try every possible combination of assignments of variables to values. There are 7 variables, with at most 20 values per variable, for a total of $20^7$ combinations. This is over one billion combinations to check, which is too many to check.
One approach you can try is to count the number of assignments that make the expression odd, and subtract that from the total number of combinations. When checking whether a combination is odd, you can immediately note a couple of things - for example, M must be odd (since O + O is always even). Also, if you recursively assign values to variables and you see that one of the three terms in the product is even, you can skip every combination of the variables that you haven't yet assigned.
There is a much faster approach, though, that removes the dependency on checking individual combinations. Since you only care whether the product is even or odd, the important thing to know for each variable is how many even values and how many odd values it can take on. Once you've counted those, you can assign each variable a parity and check whether the product is even under those parities. If it is, multiply together the per-variable counts to get the number of assignments with those parities, and add that to the total.
With this approach, there are only $2^7=128$ combinations of parities to check, which is guaranteed to work quickly enough.
Here is Mark Gordon's code:
#include <iostream>
#include <cstdio>
using namespace std;

int num[256][2];

bool is_even(int x) {
  return x % 2 == 0;
}

int main() {
  freopen("geteven.in", "r", stdin);
  freopen("geteven.out", "w", stdout);

  int N;
  cin >> N;
  for (int i = 0; i < N; i++) {
    char letter;
    int val;
    cin >> letter >> val;
    if (is_even(val)) {
      num[letter][0]++;
    } else {
      num[letter][1]++;
    }
  }

  int result = 0;
  /* Try every possible way that the variables could be even or odd. */
  for (int B = 0; B < 2; B++)
  for (int E = 0; E < 2; E++)
  for (int S = 0; S < 2; S++)
  for (int I = 0; I < 2; I++)
  for (int G = 0; G < 2; G++)
  for (int O = 0; O < 2; O++)
  for (int M = 0; M < 2; M++) {
    if (is_even((B + E + S + S + I + E) * (G + O + E + S) * (M + O + O))) {
      /* If the expression is even then add the number of variable assignments
       * that have the variables odd/even. */
      result += num['B'][B] * num['E'][E] * num['S'][S] * num['I'][I] *
                num['G'][G] * num['O'][O] * num['M'][M];
    }
  }

  cout << result << endl;
  return 0;
}
C++ Introduction
C++ is a programming language used in application development and especially in game development.
What is C++?
C++ is a programming language used in application development and especially in game development. It was developed by Bjarne Stroustrup in 1979 to give programmers a high level of control over system resources.
Why C++?
- C++ is a popular language used in operating systems, graphical interfaces and embedded systems, and it is easy to learn.
- It is an object-oriented programming language, which gives programs a clear structure and allows code to be reused.
- It is portable and can be used to develop applications that run on multiple platforms.
What is the difference between C and C++?
C++ is an enhanced and extended version of C; the two share almost the same syntax.
The main difference is that C++ supports classes and objects, while C does not.
C++ syntax
#include <iostream>
using namespace std;

int main()
{
    cout << "Welcome to C++";
    return 0;
}
Welcome to C++
In the first line, #include <iostream> includes the input/output library.
- In the second line, using namespace std; means we can use names from the standard library (such as cout) without writing the std:: prefix.
NOTE: Without #include <iostream>, the program above will not compile.
- In the next line, cout is used to print output, together with the insertion operator <<.
- The next line, return 0;, returns the value 0 to indicate that the program finished successfully.
- The closing } ends the body of main(). It is required; without it the program will not compile.
- The line shown after the code is the output of the program.
Points to remember
- It is necessary to type #include to bring in a library.
- If you do not want to write using namespace std;, you can qualify names explicitly, like "std::cout <<"; one of the two forms is necessary.
- You can use cout as many times as you need.
- You can also write comments using // (single-line comments) or /* ... */ (multi-line comments).
Variables
What are Variables?
Variables are containers used to store values.
C++ has different variable types, declared with keywords such as:
- int - stores whole numbers; it does not hold decimals.
- double - stores floating-point values (numbers with decimals).
Place your cursor where you want the table to be inserted in your Microsoft Word document and choose "Insert" from the Table menu to open the Table dialog box. Select the number of columns and rows un
I'm writing a text-based game that creates a world based on the data inside of the specified file. However, the code I have won't let me use the function DataInputStream.ReadFully(byte[] b).
Code:
package textgame;

import java.io.DataInputStream;
import java.io.FileInputStream;
import java.io.IOException;
import java.util.Stack;

public class world {
    Stack<String> LevelData = new Stack<String>();
    int location;
    int EndLocation;

    public world(Stack<String> s, int l, int e) {
        s = this.LevelData;
        l = this.location;
        e = this.EndLocation;
    }
}

class world_render {
    public world RenderWorld(String fil
I need to store a ton of RGB color objects. These are taking up
between 8% & 12% of the total memory of my app for some common
usages. I presently have it defined as follows:
class MyColor {
byte red;
byte green;
byte blue;
}
I assume that (most) JVMs actually use an int for each of those
entries. The easiest alternative is:
class MyColor {
byte [] color = new byte[3];
private static final int red = 0;
private static final int green = 1;
private static final int blue = 2;
}
Will that put the entire array in a single int? Or is it an int[3]
under the covers? If the first, this is great. If the second, then the
best is:
class MyColor {
int color;
private static final int red_shift = 0;
private static final int green_shift = 8;
private static final int blue_shift = 16;
}
Or is there a better approach?
Update: I will also have a getRed(), setRed(int),
... as the accessors. I just listed the data components of the class
to keep it smaller. And size is the critical issue
here. The code doesn't spend a lot of time accessing these
values so performance is not a big issue.
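For reference, the shift-based accessors mentioned above could be sketched like this (illustrative only, not a recommendation among the three options):

```java
class PackedColor {
    private int color;  // 0x00BBGGRR layout, matching the shifts above

    public int getRed()   { return (color >> 0)  & 0xFF; }
    public int getGreen() { return (color >> 8)  & 0xFF; }
    public int getBlue()  { return (color >> 16) & 0xFF; }

    // Each setter clears its byte, then ORs in the new value.
    public void setRed(int r)   { color = (color & ~0x0000FF) | ((r & 0xFF) << 0); }
    public void setGreen(int g) { color = (color & ~0x00FF00) | ((g & 0xFF) << 8); }
    public void setBlue(int b)  { color = (color & ~0xFF0000) | ((b & 0xFF) << 16); }
}
```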
Update 2: I went and ran this usingSizeofUtil (referenced below - thank you). I did
this using code as follows:
protected int create() {
MyColor[] aa = new MyColor[100000];
for (int ind=0; ind<100000; ind++)
aa[ind] = new MyColor2();
return 2;
}
}.averageBytes());
And here's where it gets weird. First, if I don't do the for loop,
so it only is creating the array (with all values null), then it
reports 400016 bytes or 4 bytes/array element. I'm on a 64-bit system
so I'm surprised this isn't 800000 (does Java have a 32-bit address
space on a 64-bit O/S?).
But then came the weird part. The total numbers with the for loop
are:
First surprise, the 2nd approach with byte[3] uses less memory! Is
it possible that the JVM, seeing the byte[3] in the declaration, just
allocates it inline?
Second, the memory per object is (2,800,000 - 400,000) / 100,000 =
24. I'll buy that for the first approach where each byte is made a
native 64-bit int. 3 * 8 bytes = 24 bytes. But for the third case
where it's a single int? That makes no sense.
Code here in case I missed something:
package net.windward;
import java.util.Arrays;
public class TestSize {
public static void main(String[] args) {
new TestSize().runIt();
}
public void runIt() {
System.out.println("The average memory used by MyColor1 is "
+ new SizeofUtil() {
protected int create() {
MyColor1[] aa = new MyColor1[100000];
for (int ind = 0; ind < 100000; ind++)
aa[ind] = new MyColor1();
return 1;
}
}.averageBytes());
System.out.println("The average memory used by MyColor2 is "
+ new SizeofUtil() {
protected int create() {
MyColor2[] aa = new MyColor2[100000];
for (int ind = 0; ind < 100000; ind++)
aa[ind] = new MyColor2();
return 2;
}
}.averageBytes());
System.out.println("The average memory used by MyColor3 is "
+ new SizeofUtil() {
protected int create() {
MyColor3[] aa = new MyColor3[100000];
for (int ind = 0; ind < 100000; ind++)
aa[ind] = new MyColor3();
return 1;
}
}.averageBytes());
System.out.println("The average memory used by Integer[] is "
+ new SizeofUtil() {
protected int create() {
Integer[] aa = new Integer [100000];
for (int ind = 0; ind < 100000; ind++)
aa[ind] = new Integer(ind);
return 1;
}
}.averageBytes());
}
public abstract class SizeofUtil {
public double averageBytes() {
int runs = runs();
double[] sizes = new double[runs];
int retries = runs / 2;
final Runtime runtime = Runtime.getRuntime();
for (int i = 0; i < runs; i++) {
Thread.yield();
long used1 = memoryUsed(runtime);
int number = create();
long used2 = memoryUsed(runtime);
double avgSize = (double) (used2 - used1) / number;
// System.out.println(avgSize);
if (avgSize < 0) {
// GC was performed.
i--;
if (retries-- < 0)
throw new RuntimeException("The eden space is
not large enough to hold all the objects.");
} else if (avgSize == 0) {
throw new RuntimeException("Object is not large
enough to register, try turning off the TLAB with -XX:-UseTLAB");
} else {
sizes[i] = avgSize;
}
}
Arrays.sort(sizes);
return sizes[runs / 2];
}
protected long memoryUsed(Runtime runtime) {
return runtime.totalMemory() - runtime.freeMemory();
}
protected int runs() {
return 11;
}
protected abstract int create();
}
class MyColor1 {
byte red;
byte green;
byte blue;
MyColor1() {
red = green = blue = (byte) 255;
}
}
class MyColor2 {
byte[] color = new byte[3];
private static final int red = 0;
private static final int green = 1;
private static final int blue = 2;
MyColor2() {
color[0] = color[1] = color[2] = (byte) 255;
}
}
class MyColor3 {
int color;
private static final int red_shift = 0;
private static final int green_shift = 8;
private static final int blue_shift = 16;
MyColor3() {
color = 0xffffff;
}
}
} | http://bighow.org/14082543-_Convert_from_managed_byte___to_void__and_back_to_byte___again_.html | CC-MAIN-2018-47 | refinedweb | 898 | 54.63 |
Hello,
I’m fairly new to writing C code, so I’m not really sure where to start for the issue here. I have a .c file (a6.c) and a header file (a6Defs.c) in the same directory. When I try to include it into my a6.c file, as follows:
#include "a6Defs.h"
I get an error: “a6Defs.h: No such file or directory.” but I am able to make references to other functions that are in a6Defs.h while in a6.c. Can anyone please provide some insight on this issue?
Thanks and sorry if this ends up being not the correct place to post this. | https://discuss.atom.io/t/atom-will-not-recognize-my-h-file-in-my-c-code/35576 | CC-MAIN-2018-30 | refinedweb | 109 | 89.04 |
I've had this same crash 7 times so far in the last 6 or 8 hours, seems to happen at random when the machine is otherwise idle. I have had three or four pages up with an auto-refresh on them.
Mozilla/5.0 (Macintosh; U; Intel Mac OS X 10.5; en-US; rv:1.9.1pre) Gecko/20090620 Firefox/3.5pre Glubble/2.0.4.6 ID:20090630031219
ChatZilla 0.9.85
DownloadHelper 4.5
DownThemAll! 1.1.4
Firebug 1.4.0b3
Firefox PDF Plugin for Mac OS X 1.1
Glubble 2.0.4.6
Greasemonkey 0.8.20090123.1
Japanese-English Dictionary for rikaichan 1.10
Live HTTP headers 0.15
Mass Password Reset 1.04
Modify Headers 0.6.6
Nagios Checker 0.14.4
Names Dictionary for rikaichan 1.10
Nightly Tester Tools 2.0.2
NoScript 1.9.5
Personas for Firefox 1.2.1
Rikaichan 1.06
Update Channel Selector 1.5
User Agent Switcher 0.7.1
Web Developer 1.1.7
Mozilla/5.0 (Macintosh; U; Intel Mac OS X 10.5; en-US; rv:1.9.1.1pre) Gecko/20090630 Shiretoko/3.5.1pre Glubble/2.0.4.6
^^ My correct UserAgent... I had User-Agent Switcher active because of a Facebook sniffing bug, and NTT believed it when I told it to paste. :)
This is bug 337418 reincarnated.
OK, given the information in bug 337418 and the additional debugging info requested there, I'm suspecting it's a site with automatic-update/refresh usage since it happens at random without me actually touching the browser, and I have several tabs open with auto-refresh stuff (keeping an eye on release activity, etc).
Tab 1: - does ajax requests to poll for new status updates and notifications has address 69.63.184.30
Tab 2: - has a Refresh: 150 on it.
nagios.mozilla.org is an alias for dm-nagios01.mozilla.org.
dm-nagios01.mozilla.org has address 63.245.208.170
Tab 3: - has a Refresh: 60 on it.
nagios.mozilla.org is an alias for dm-nagios01.mozilla.org.
dm-nagios01.mozilla.org has address 63.245.208.170
Tab 4: - has <meta http-
people.mozilla.org is an alias for people.mozilla.com.
people.mozilla.com has address 63.245.208.169
Tab 5: - I assume it's using ajax of some sort to poll for the stats data.
downloadstats.mozilla.com is an alias for star-moz-com-php5.nslb.sj.mozilla.com.
star-moz-com-php5.nslb.sj.mozilla.com has address 63.245.209.77
Of course, the ones doing ajaxy stuff could be using other host names for where to pull data from.
What product/component owns class nsDNSRecord ?
IMO, the problem probably lies there, not in NSS.
Perhaps we should add members of the cc list for bug 337418 to this bug's list.
PR_EnumerateAddrInfo nsprpub/pr/src/misc/prnetdb.c:2095
nsDNSRecord::GetNextAddr netwerk/dns/src/nsDNSService2.cpp:134
nsDNSRecord::GetNextAddrAsString netwerk/dns/src/nsDNSService2.cpp:166
NS_InvokeByIndex_P xpcom/reflect/xptcall/src/md/unix/xptcinvoke_unixish_x86.cpp:179
XPCWrappedNative::CallMethod js/src/xpconnect/src/xpcwrappednative.cpp:2454
XPC_WN_CallMethod js/src/xpconnect/src/xpcwrappednativejsops.cpp:1590
js_Invoke js/src/jsinterp.cpp:1386
js_Interpret js/src/jsinterp.cpp:5179
justdave: are you using PAC?
(In reply to comment #5)
> justdave: are you using PAC?
What's PAC?
Nope, not using that.
Looks like NoScript is to blame. While running NoScript (no changes to base install), I loaded about 12 tabs, most to support.mozilla.com, along with three other tabs: Facebook, Nagios (open source monitoring tool, nagios.org) and OTRS (open source trouble ticket system, otrs.org).
The common thread between OTRS and Nagios is meta refreshes every so often (90 seconds for Nagios, 2 minutes for OTRS.
Additionally, Facebook does some triggered JS, which checks back to FB servers to see if there are any new status updates. I think it also interacts with their chat servers.
I did not have this crash before 6/30/09, and I was either up to date or a day or two behind Shiretoko Nightlies. I'm fairly certain that whatever updates were made to Shiretoko and thus FF 3.5 between 6/27/09 and release is causing this issue.
CrashID for the crash with only NoScript running:
41724f2c-19f4-47da-ab54-785202090703 7/3/09 5:01 PM
NoScript 1.9.5 was released on or around June 29, 2009. Since NoScript was the only AddOn installed, and the same tabs hadn't crashed Firefox with NO AddOns installed, I'm wondering if the combination is suspect.
I likely updated NoScript on June 29 or 30. Maybe I'll install a previous version and see if that still causes the problems. I also wonder if it is NoScript to blame, since between the 26th and 30th a bunch of non-3.5 browsers had the same issue.
Other related crashes of mine that show up in PR_EnumerateAddrInfo crash search:
3073a03a-093a-4a83-a7ba-2ba8e2090703
e18a0d58-1c80-469e-8bda-3f0082090630 Shiretoko
eea60f39-2c9a-4953-86eb-c610a2090630 Shiretoko
fa650c74-553b-425a-a1a0-a66742090702
42fc54ef-72f1-4bea-8276-e72032090701.
During development I found that if I cached nsIDNSRecord objects directly I caused random crashed like the one described here (I suppose for some pointer ownership issue), so I started storing the address strings returned by the query into an ad-hoc JavaScript DNSRecord object for caching purposes, and had no crash at all since then.
However this report seems to suggest there's still something broken in what I do. Should I deep-clone the strings (e.g. new String(record.getNextAddrAsString()) ) rather than pushing them directly into a JS array?
If you've got NoScript 1.9.5 (or even better 1.9.5.4 from) installed, the relevant code is all in chrome://noscript/content/DNS.js
Sorry, I actually read the stack now and it doesn't seem an ownership issue but rather a concurrence one. Nothing I can fix on my side, apparently.
Also, I wonder why none of my testers (nor I) had a single crash of this kind in one month of beta testing (the ones I got when I tried to cache nsIDNSRecord objects directly were in private preliminary builds) and I still can't see any report on the NoScript forum.
Stupid question related to comment 12 (unexpectedly few reports): might it be Mac OS X specific? I can see Darwin-specific code in prnetdb.c...
If you look at the crash-stats report linked from the URL field, you can see that almost all of the crashes with this signature are indeed on Mac. There's a couple Windows NT 6.0 in there, but not very many.
It does seem that the prnetdb.c does have OSX and Darwin specific ifdefs, the only one starts at line 196, with a possible elif at line 333, but the else is at line 403, and the endif at 416. There's one more DARWIN ifdef on line 2186 which defines function pr_StringToNetAddrFB(), which is called in PR_StringToNetAddr, which I don't see called, but then again, this code is pretty foreign to me, so it is possible that the Darwin specific code related to bug 404399 is to blame.
Darwin 8.8.4 was OSX 10.4.4, and while OSX10.5 was released in October 2007 (after the bug was opened), it is possible that it is causing problems in 10.5. I'm guessing that since no popular code used this code recently, it had not reared its head before, as it does now. And given that the related crashes are almost all OSX, it's possible this code needs a new review.
*** Bug 502360 has been marked as a duplicate of this bug. ***
It worked for me without a crash in Windows7, but crashed in OS X
TEMPORARY FIX FOR THIS ISSUE
If you want to avoid this bug temporarily, and this bug is due to your use of the NoScript Addon, open NoScript Preferences and Disable ABE.
Tools > AddOns > NoScript Preferences > Advanced Tab > ABE Sub-tab > Uncheck Enable ABE
According to NoScript developer Giorgio Maone confirmed in Bugzilla that ABE calls the code that seems to be broken in OSX version of Firefox, and that by disabling ABE, you still get the functionality of NoScript, without the crashes. However once this bug is fixed, you should re-enable ABE to have the utmost protection that NoScript can provide.
Is this going to get fixed? We've narrowed it down to a piece of OSX-specific code, probably a bad pointer. What do we do here to get this one fixed to we can use ABE again?
Out of curiosity, what is ABE?
(In reply to comment #20)
> Out of curiosity, what is ABE?
From comment #11:.
I wonder if the changes to class nsDNSService for bug 453403 are in some way
causal of these crashes.
It'd be great to get this issue resolved.
This has been driving my wife nuts (OSX Firefox user) for weeks now. I finally did some digging and found out about about:crashes, which finally pointed us to the NoScript ABE issue. Would really like to see this addressed, since it seems to be affecting a lot of people (as NoScript is one of the most popular Firefox addons).
She has just disabled the ABE feature and we're crossing our fingers that it stops the crashes.
ok, i claim this is noscript's fault:
timeless-mbp-2:ns timeless$ grep em:ver '/Users/timeless/Library/Application Support/Firefox/Profiles/*.default/extensions/{73a6fe31-595d-460b-a920-fcc0f8843232}/install.rdf'
<em:version>1.9.9.35</em:version>
timeless-mbp-2:ns timeless$ unzip '/Users/timeless/Library/Application Support/Firefox/Profiles/*.default/extensions/{73a6fe31-595d-460b-a920-fcc0f8843232}/chrome/noscript.jar'
timeless-mbp-2:ns timeless$ grep idn-ser content/noscript/DNS.js host = CC["@mozilla.org/network/idn-service;1"].createInstance(CI.nsIIDNService).convertUTF8toACE(host);
mao: the idn-service is a service. please use .getService().
Timeless, Please spell your suggestion out a little more explicitly.
You might even spell it out in a form that an adventurous user could try
out him/her self by editing his/her own noscript.jar file. If that fixes
the problem, then users get immediate relief, you get more fame & glory. :)
I think fixing NoScript to not cause the bug is not as good as fixing the underlying bug. One day, another extension or even some Firefox code might come along that triggers the same bug.
@timeless:
You're correct about nsIIDNService being, as the interface name says, a service.
However I happened to copy that code (which, BTW, gets called quite rarely) verbatim from MDC. Therefore I'm gonna fix my call, but someone should fix the docs:
That said, how is this supposed to fix this very bug?
i'm not sure i want to think about or explain it. i can guess. the code uses locks to protect objects, but if there are two locks each of which should be the *only* lock for something, then when things think they have "locked" the object, they might have just locked a lock which wasn't the same as the other guy's lock for the same object.
as for the wiki, wow, that sucks. in general, anyone who creates an account can edit any page on the wiki (including that one). I don't touch wikis. You're welcome to fix it, or i'll see about getting someone to fix it.
nelson: i'd just as soon giorgio fix it and send it to everyone.
jonathan louie: comment 29 has identified the underlying bug, it's bad documentation.
our arrangement with extension authors is this:
You have full access to the entire application, you can do anything, you can crash the browser. we expect you to follow certain rules and try not to crash the browser.
among the rules are this: a service should be used as a service.
we might someday enforce that codewise, however officially the design of xpcom is such that there might be a class for which it's legal to have both a service and multiple instances. Unfortunately, since xpcom is portable, it's possible some downstream vendor has taken advantage of that with their own objects, which means if we change how the service manager works, we're breaking our contract with them.
it's unfortunate that sheppy made this mistake in his documentation (or in copying it from elsewhere), but he was basically the only person creating any MDC content and he created a lot, getting everything right is a lot to ask for.
While I agree that fixing the nsIIDNService instantiation is due (and indeed I did it), I doubt it's the actual culprit here, especially since it gets called exclusively for hosts containing characters which are illegal in ASCII domains (i.e. quite rarely so far).
After re-digging Gecko's sources I found some more suspicious code:
By "doing the right thing" with the nsIDNSRecord interface (i.e. calling hasMore() before getNextAddrAsString()), NoScript is apparently calling getNextAddr() *twice*, and looking at the stack traces it crashes on the second call.
Furthermore, the thread safety of hasMore() seems dubious, so it plausible that it's preparing our crash since NoScript calls the DNS service asynchronously from an "unusual" thread (the UI one).
Therefore I eliminated the hasMore() call completely, and released both "fixes" (nsIIDNService and hasMore() elision) in NoScript 1.9.9.36, now available on
Let's cross our fingers and see if crashes go down in a week or so...
There are many startup crashes with this crash signature in 10.0b4.
It doesn't seem related to NoScript.
More reports at:
For crashes in 10.0b4, I filed bug 718389.
(In reply to Scoobidiver (away) from comment #33)
> For crashes in 10.0b4, I filed bug 718389.
I've closed the above bug report.
Does anyone believe this crash still exists?
I'm not seeing any on crash-stats
Resolved per whiteboard | https://bugzilla.mozilla.org/show_bug.cgi?id=501446 | CC-MAIN-2017-09 | refinedweb | 2,366 | 67.15 |
Walkthrough: Using a Business Object Data Source with the ReportViewer Windows Forms Control in Local Processing Mode
This walkthrough shows how to use an object data source using Business objects in a report in a Microsoft Visual Studio Windows Forms application. For more information about Business objects and object data sources, see Binding to Business Objects.
Perform the following steps to add a report to a Windows Forms application project. For this example, you will be creating the application in Microsoft Visual C#.
Create a new Windows Forms application project
On the File menu, point to New, and select Project.
In the New Project dialog box, in the Installed Templates pane, choose Visual C#, and then choose the Windows Forms Application template. The C# node may be under Other Languages, depending on your startup settings in Visual Studio.
Type BusinessObject for the project name, and click OK.
Create business objects to use as a data source
From the Project menu, select Add New Item.
In the Add New Item dialog box, choose Class, type BusinessObjects.cs for the file name, and click Add.
The new file is added to the project and automatically opened in Visual Studio.
Replace the default code for BusinessObjects.cs with the following code:
using System; using System.Collections.Generic; // Define the Business Object "Product" with two public properties // of simple datatypes. public class Product { private string m_name; private int m_price; public Product(string name, int price) { m_name = name; m_price = price; } public string Name { get { return m_name; } } public int Price { get { return m_price; } } } // Define Business Object "Merchant" that provides a // GetProducts method that returns a collection of // Product objects. public class Merchant { private List<Product> m_products; public Merchant() { m_products = new List<Product>(); m_products.Add(new Product("Pen", 25)); m_products.Add(new Product("Pencil", 30)); m_products.Add(new Product("Notebook", 15)); } public List<Product> GetProducts() { return m_products; } }
From the Project menu, select Build Solution. This creates an assembly for the object, which you will later use as a data source for the report.
Add a report to the project using the Report Wizard
From the Project menu, select Add New Item.
In the Add New Item dialog, select Report Wizard. Type a name for the report and click Add.
This launches the Report Wizard with the Data Source Configuration Wizard.
In the Choose a Data Source Type page, select Object and click Next.
In the Select the Data Objects page, expand the class hierarchy under BusinessObject until you see Product in the list. Select Product and click Finish.
You now return to the Report Wizard. Notice that the new data source object is added to your project in Solution Explorer.
In the Dataset Properties page, in the Data source box, verify that global is selected.
In the Available datasets box, verify that Product.
Add a ReportViewer control to the report
In the Solution Explorer, open the Windows form in Design view. By default, the form name is Form1.cs.
From the Toolbox, in the Reporting group, drag the ReportViewer icon onto the form.
In the ReportViewer control open the smart tags panel by clicking the smart-tag glyph on the top right corner.
In the Choose Report list, select the report you just designed. By default, the name is Report1.rdlc. Notice that a BindingSource object called ProductBindingSource is automatically created corresponding to each object data source used in the report.
In the open smart tags panel, choose Dock in parent container.
Supply data source instances to the BindingSource object
In the Solution Explorer, right-click Form1.cs and select View Code.
In Form1.cs, inside the partial class definition, add the following code as the first line, before the constructor.
In the Form1_Load() method, add the following code as the first line, before the RefreshReport call:
Run the application
Press F5 to run the application and view the report. | https://msdn.microsoft.com/en-us/library/ms251784 | CC-MAIN-2016-44 | refinedweb | 643 | 57.16 |
#include <CCEditBox.h>
\CControl.
Event callback that is invoked every time the CCNode leaves the 'stage'.
If the CCNode leaves the 'stage' with a transition, this event is called when the transition finishes. During onExit you can't access a sibling node. If you override onExit, you shall call its parent's one, e.g., CCNode::onExit().
Reimplemented from CCControl..
Reimplemented from CCNode. ccp(x,y) to compose CCPoint object. The original point (0,0) is at the left-bottom corner of screen. For example, this codesnip sets the node in the center of screen.
Reimplemented from CCNode.
Set the return type that are to be applied to the edit box.
Set the text entered in the edit box.
callback funtions
Unregisters a script function that will be called for EditBox events. | http://www.cocos2d-x.org/reference/native-cpp/V2.2/d7/dc3/classcocos2d_1_1extension_1_1_c_c_edit_box.html | CC-MAIN-2016-50 | refinedweb | 132 | 70.29 |
I wanted to develop a game using OpenGL but I was having trouble deciding on a windowing library. Somebody suggested I try Qt, so I decided to give it another shot. I heard it was good, but the editor it came with was so foreign and unintuitive. Unfortunately, this overshadowed what a beautiful library Qt really is! It takes a little getting used to, but it really is well designed.
Turns out Netbeans 6.8 (and perhaps earlier versions) have support for Qt, so it’s pretty easy to get set up. It’s probably way too early to be posting any code, but I’m going to anyways because I’m way too excited about this (yeah, I live a sad life).
main.cpp
#include <QtOpenGL/QGLWidget>
#include "GLWidget.h"
int main(int argc, char *argv[]) {
QApplication app(argc, argv);
GLWidget window;
window.resize(800,600);
window.show();
return app.exec();
}
GLWidget.h
#define _GLWIDGET_H
#include <QtOpenGL/QGLWidget>
class GLWidget : public QGLWidget {
Q_OBJECT // must include this if you use Qt signals/slots
public:
GLWidget(QWidget *parent = NULL);
protected:
void initializeGL();
void resizeGL(int w, int h);
void paintGL();
void mousePressEvent(QMouseEvent *event);
void mouseMoveEvent(QMouseEvent *event);
void keyPressEvent(QKeyEvent *event);
};
#endif /* _GLWIDGET_H */
All those functions you see above will get called for you automatically. No need to register any callbacks!
GLWidget.cpp
#include "GLWidget.h"
GLWidget::GLWidget(QWidget *parent) : QGLWidget(parent) {
setMouseTracking(true);
}
void GLWidget::initializeGL() {
glDisable(GL_TEXTURE_2D);
glDisable(GL_DEPTH_TEST);
glDisable(GL_COLOR_MATERIAL);
glEnable(GL_BLEND);
glEnable(GL_POLYGON_SMOOTH);
glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
glClearColor(0, 0, 0, 0);
}
void GLWidget::resizeGL(int w, int h) {
glViewport(0, 0, w, h);
glMatrixMode(GL_PROJECTION);
glLoadIdentity();
gluOrtho2D(0, w, 0, h); // set origin to bottom left corner
glMatrixMode(GL_MODELVIEW);
glLoadIdentity();
}
void GLWidget::paintGL() {
glClear(GL_COLOR_BUFFER_BIT);
glColor3f(1,0,0);
glBegin(GL_POLYGON);
glVertex2f(0,0);
glVertex2f(100,500);
glVertex2f(500,100);
glEnd();
}
void GLWidget::mousePressEvent(QMouseEvent *event) {
}
void GLWidget::mouseMoveEvent(QMouseEvent *event) {
printf("%d, %d\n", event->x(), event->y());
}
void GLWidget::keyPressEvent(QKeyEvent* event) {
switch(event->key()) {
case Qt::Key_Escape:
break;
default:
event->ignore();
break;
}
}
It just draws a red triangle and tracks your mouse movement. You need setMouseTracking(true); in order for the mouse move callback to fire when you aren’t holding a button down. I need this for my game.
Anyway, that’s all I’ve got so far, but that should get you up and running with Qt + OpenGL anyway, the rest is native OpenGL and your typical game layout; you shouldn’t need too much more Qt-specific stuff. Although, Qt does seem to have some nice OpenGL helpers and other neat stuff I have to dig into yet 🙂
Leave a comment if you need some help getting set up on Netbeans/Ubuntu: there are a couple quirks but nothing too terrible.
35 thoughts on “Qt + OpenGL Code Example”
The code is work fine for me.
Thanks for the post.
nofortee
Thanks a lot.
Brendler
Thanks!
Exactly what I was looking for. A simple clean, starting example.
Michael
printf was not declared in this scope….
#include
helped
and printf seam to be buffered, so you can mention about it in tutorial
thx for tutorial
przemo_li
#include “stdio.h”
przemo_li
instead of printf() you could also use qDebug().
angelos
Thanks for the tip! Didn’t know about that.
Mark
[…] following this tutorial for building a simple OpenGl application in C++ using […]
Problems compiling an c++ application using QT and OpenGL | SeekPHP.com
I am coming from a C++ and QT orientation. Happened to run into your page where OpenGL integration with QT is discussed. Great topic.
I tried downloading the code you had shown above into my C++ ready netbeans IDE to see if it would run on its own. Unfortunately, it failed. I believe some OpenGL library for windows is required.
What to install for OpenGL? How? Perhaps that will enable the above code to run.
Sai
Hey Sai, you need to make sure in your .pro file you have this line:
QT = core gui opengl
that tells Qt to use the OpenGL module
You also need to make sure you are linking to the opengl32.lib and glu32.lib
You can right click on the project folder on the left and then click “Add library” Just add both those libraries by pasting in their respective paths.
Hope that helps!
Art Vandelay
Hi guys,
Thanks for answering him, Art! I’ve forgotten what the process is myself, it’s been awhile since I’ve done any C++ programming, nevermind Qt + OpenGL!
Mark
Mark
It works very good. Thanks!
Damian
[…] […]
log | To dear future me
Thanks for example. Really work)))
Bob
Thanks. It was a ladder for me.
Sabourian
Looks nice. But when i want to compile it, he says that he didn’t found the function gluOrtho2D. What can i do??
Albrecht
gluOrtho2D was not declared in this scope
Helmut Kemper
One of the few examples that doesn’t use QMainWindow to connect a QGLWidget to a QApplication. Just what I was looking for. Thank you!
PietjePuk
Albrecht – Replace gluOrtho2D with glOrtho should solve your problem. Be sure to add #include
bzplayer
Thanks ..it works fine…
Additional changes include adding stdio.h header file and adding opengl in pro file as mentioned in previous comments
cybertooth3.39
Does it have a good performance? I’m thinking about to develop a game and it’s gonna be heavy to run, do you know if it’s a good idea to use Qt?
Bruno
I don’t have any benchmarks for you, but considering it’s C++ and OpenGL, I’m going to go ahead and say “yes”. OpenGL runs on your GPU, and it’s pretty low-level, so as long as you’ve coded your game properly, it should be pretty optimally efficient (your only other comparable choice is DirectX, which is Windows only). Qt, on the other hand, I’m fairly certain is going to be more efficient than GUI libraries written in other languages. Compared to other C++ GUI libraries, I can’t say for certain, but it is developed by Nokia, is pretty mature, and is used in KDE (the desktop environment used by several linux distros such as Kubuntu). Furthermore, it’s cross-platform. I doubt you’ll get much more bang for you buck.
That said, depending on your experience, you may want a more comprehensive game library. OpenGL and Qt aren’t game libraries; you’ll have to code everything yourself. That’s why I chose them as my base; there’s no cruft, just what I write.
Mark
i got an error of glortho2D which i replaced with glortho as suggested here in comments, but now it says No executables specified and then it asks me to specify executable by asking me a location for exectuable.
Imran
but yes i couldnt add any #include because you didnt mention which include file to include….? tried fining something with #include glortho…… but couldnt find anything with it
Imran
@Imran: Please see updated version of article.
Mark
i have trouble in this command :
unused parameter ‘event’ in
void GLWidget::mousePressEvent(QMouseEvent *event) {
}
please help me to my email. thank you.
Iqbal
With this problem
unused parameter ‘event’ in
void GLWidget::mousePressEvent(QMouseEvent *event) {
}
I have solved myseld before i only copied from main.cpp not the whole script only this
QApplication app(argc, argv);
GLWidget window;
window.resize(800,600);
window.show();
return app.exec();
because the original #include above already the same but that makes it some problem so i copied the whole one.
I hope this could give some solution to the others. thank you.
Iqbal
@Iqbal: You’re reading a very old version of this tutorial. Follow the link in the big yellow box at the top of this post.
Mark
Hi Mark,
first well done man u did a very gud job..
Actually i am very new to Opengl but using Qt from last one year.
i want to develop stereo image visualizer with Qt + Opengl …
can u plz help me .. it is very helpful if u give me ur email id so i can directly send u my Query … thanx
anuj
Hi Anuj, I’m glad you liked my tutorial, but sorry, no, I don’t provide personal 1-on-1 help. If you have a specific question you’re welcome to ask it here or I highly recommend Stackoverflow.com.
Mark
Hi Mark!
Im also interested in game developement, so I have a few questions related to that:) Does QT work with non-fixed pipeline openGL? Does QT support things like get the passed time in milliseconds(or CPU thicks)? Can I make a fullscreen openGL window with it?
+Is QT free even if I use it to make stuff that I will sell?(like games)
Thanks! And thanks for the code!
Aki
thank u very much:-)
Bharathi.S.K
[…] Qt + OpenGL Code Example | Program & Design – I am coming from a C++ and QT orientation. Happened to run into your page where OpenGL integration with QT is discussed. Great topic. I tried downloading the code …… […]
Fix Qt Error Codes Windows XP, Vista, 7, 8 [Solved]
What of other Glut OpenGL callback functions like glutIdleFunc, glutVisibiltyFunc, glutTimerFunc etc. What are their QGLWidget class method equivalents? I use these glut function for animations, but do not know how to go about same in QT. Please someone should educate me.
Emmanuel Chidinma
I have used Qt Creator for years and find it to be ideal for Qt projects. I downloaded the source, deleted all that Netbeans crap which Qt Creator does not need, added #include to GLWidget.cpp, changed the include statement in main.h for QApplication to #include , then created a .pro file containing these statements, which links in the GLU library required for gluOrtho2d:
QT += opengl widgets
LIBS += -lGLU
SOURCES += main.cpp\
GLWidget.cpp
HEADERS += GLWidget.h
Builds and run fine with the Qt v5.12.4 kit.
Raymond M. Wood | https://programanddesign.com/cpp/qt-opengl-code-example/comment-page-1/ | CC-MAIN-2022-33 | refinedweb | 1,659 | 65.22 |
I am using NodeJS, Angular2, and the ng2-chartjs2. Below I listed the relevant parts of my code that is rendering charts. The data is loaded into this.data from an API using a fixed date range. I would like to allow the user to select a date range and then update the chart. From here I know that you can call update() on the chart object to update the data in it, but I don't know how to get a hold of the chart object, since the component code never actually has a reference to it - it's done automagically when the template is rendered. Looking at the source code (line 13) I see that the author intended to make the object available. I contacted the author but haven't received a response yet and need to get moving. I have learned a lot about Angular2 but am no expert yet, so perhaps a deeper understanding of Angular2 makes this obvious. How can I either get access to the object to call update() on it, or do it some other clean way?
The template contains
<chart [options]="simple.options"></chart>
import { ChartComponent } from 'ng2-chartjs2';
...
@Component({
selector: 'home',
templateUrl: 'client/components/home/home.component.html',
styleUrls: ['client/components/home/home.component.css'],
directives: [DashboardLayoutComponent, CORE_DIRECTIVES, ChartComponent],
pipes: [AddCommasPipe],
})
...
setCurrentSimpleChart = (simpleType: number): void => {
this.simple.options = {
type: 'line',
options: this.globalOptions,
data: {
labels: this.data[simpleType].labels,
datasets: [{
label: this.titles[simpleType],
data: this.data[simpleType].data,
backgroundColor: 'rgba(255, 99, 132, 0.2)',
borderColor: 'rgba(255,99,132,1)',
borderWidth: 1
}],
},
};
...
}
You can hold a reference to the component by using
ViewChild:
@ViewChild(ChartComponent) chartComp: ChartComponent;
And then you can get the
chart object:
let chart = this.chartComp.chart;
Here is the corresponding plunker | https://codedump.io/share/gkekvBldxhoS/1/how-to-update-the-data-in-an-ng2-chartjs2-chart-in-angular2-and-nodejs | CC-MAIN-2017-09 | refinedweb | 292 | 52.36 |
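The update pattern the answer describes can be sketched without the framework. In this minimal sketch, `ChartStub` is a stand-in for the Chart.js instance you get from `this.chartComp.chart`, and `applyDateRange` is a hypothetical helper, not part of ng2-chartjs2:

```typescript
// ChartStub stands in for the Chart.js instance exposed by ng2-chartjs2
// (hypothetical minimal shape; the real object carries much more state).
interface Dataset { label?: string; data: number[]; }
interface ChartStub {
  data: { labels: string[]; datasets: Dataset[] };
  update(): void; // asks Chart.js to redraw using the mutated data
}

// Hypothetical helper: swap in data for a new date range, then redraw.
function applyDateRange(chart: ChartStub, labels: string[], data: number[]): void {
  chart.data.labels = labels;           // mutate the chart's data in place...
  chart.data.datasets[0].data = data;
  chart.update();                       // ...then trigger the redraw
}
```

In the component itself you would grab the real instance via `@ViewChild(ChartComponent) chartComp` and call something like `applyDateRange(this.chartComp.chart, newLabels, newData)` once the API responds with data for the user's selected range.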
There I was... in the cruel, dark, world of Algebra 2, when suddenly the most feared question of all was asked. "Draw a graph". I froze in my tracks, (I wasn't moving, but this needs to sound dramatic...) my face turned pale as dozens of questions raced through my mind... "How?", "Why?", "When?".
I was in a serious jam, my homework was to draw graphs by using the formulas given, but how? I'm completely out of graph paper!!! And everyone knows that JOSH + PENCIL = SLOPPY. Whatever shall I do? Well, after greatly pondering this serious predicament, I discovered I have two options:
Well, since I'm always doing things the easy way, I chose option 2. (I consider it easier than option 1, because it's FUN!). So I sat, scratched my head once or twice, stroked my imaginary beard a couple times, and then began. Coffee at hand, it didn't take too long. (At the cost of one slight headache) It only took 155 lines of code... so let's begin!
First and foremost, we must decided WHAT exactly our app will do! First, it will EXECUTE! Secondly, it will perform the first line of code! Then, it will... ok enough of this.
Print a graphEasy enough! But now we need to consider very carefully how we should go about doing this. Print in the same entry function? Create a new class? Derive a new class from an existing class?
Well the first rule I always go by, is to make extra sure you have a cup of coffee nearby. Secondly, we need to make this program as OO as possible. We need to take under consideration that someday we might need to use it again! (And I will! This graph business has only begun!) Therefore, this will require us to create a whole new class.
public class MathGraphPrint : PrintDocument
We're going to derive it from the class PrintDocument in the System.Drawing.Printing namespace. Now we get to the dirty work. First you must consider this, - are you ever going to come back and use it again? If so, you need to make it... - how should I say this? - easy to change and edit!
Instead of doing this:

gfx.DrawLine(Pens.Black, 0, 0, 100, 100); // first line of graph
Make it changeable! Supposing next week you need a graph, only slightly longer! You would then have to come back, edit those x-y values, and then recompile! So first of all, we need some public properties in our class to allow us to specify things like: Height, Width, Color, and so forth of our graph.
So here's the beginning of our new class:

private Size g_size;
private int width;
private bool shownums;
private Color color;
private Color midcolor;

public Size Size { set { g_size = value; } get { return g_size; } }
public int Width { set { width = value; } get { return width; } }
public bool ShowNums { set { shownums = value; } get { return shownums; } }
public Color Color { set { color = value; } get { return color; } }
public Color MidColor { set { midcolor = value; } get { return midcolor; } }
I deem all those properties absolutely necessary. The "Size" property will specify the size (x-y) of our graph, while the "Width" property will specify the width (in pixels) of each box in the graph. The boolean "ShowNums" property, will let us decide whether we want each bar in the graph numbered. The "Color" property specifies the color of each bar except the middle bar of the x bars and the y bars, that is where the property "MidColor" comes in.
That being said, it is now time to construct the constructor! Just in case we use this class sometime later and forget to specify each of those vital variables, we need to create some default values.

public MathGraphPrint() {
    g_size = new Size(378, 378);
    width = 13;
    text = "";
    shownums = true;
    color = Color.Black;
    midcolor = Color.Red;
    this.PrintPage += new PrintPageEventHandler(this.printme);
}
Simple enough! Something that may look different is the "text" variable. I just added that; it is simply the header for the graph. (I also added a "Text" property.) You will also notice we added an event handler to our class! This is where the code begins. We now need to create a new function called "printme"; this is where all the drawing takes place.
So we then create the function "printme"...
private void printme(object sender, PrintPageEventArgs ppea) {
    Graphics gfx = ppea.Graphics;
    // drawing code will go here
}
Simple enough... now we move on to start drawing the horizontal bars of the graph. We will need a FOR... loop for this...
for (int x = 0; x < (g_size.Width / width); x++) {
    if (x == ((g_size.Width / width) / 2)) {
        gfx.DrawLine(new Pen(midcolor), (x * width) + _startx, _starty,
                     (x * width + _startx), g_size.Height + _starty);
    } else {
        gfx.DrawLine(new Pen(color), (x * width) + _startx, _starty,
                     (x * width + _startx), g_size.Height + _starty);
    }
}
That's rather ugly if I may say so myself... but for any experienced programmer, that's just every day stuff!!! (I'm NOT one of those "experienced" programmers). First, it sees how long you want your graph! Then it checks to see the width of each box. With those two values, it determines how many bars it will be capable of drawing. (By dividing the width of the graph by the width of each individual box) Now that it knows how many bars it will be drawing, it then decides how many times it will need to loop.
Next you notice an interesting IF...ELSE statement... no fear, this is only to check whether it is drawing the MIDDLE bar or not, because if it is, we want it to be a different color! Remember?
The two DRAWLINE calls are identical, except for the new pen each one creates. It multiplies the current integer it's on with the desired width of each box, THEN it adds the "_startx" integer to that (just in case we don't want the graph to be drawn RIGHT on the edge of the paper). That's basically it!
Now all we need to do is to number each of these lines. (Which I found to be a tad bit more time consuming (and paper consuming) then I thought.)
Here it is:
gfx.DrawString((x - ((g_size.Width / width)) / 2).ToString(), littletext, Brushes.Black, (x * width + _startx) - 4, g_size.Height + _starty + 4);
Peachy! Just peachy! That's all we need... or that's all we WOULD need if the length of the string were always the same! But it won't be! It will vary from 1 to 3 characters. At one point it will be "5" (one character), and at another time it will be "-10" (THREE characters). So we're going to need to do some validating. The position of the number is going to vary with how many characters are in the string (because the length will be different!). Here's the solution:
gfx.DrawString((x - ((g_size.Width / width)) / 2).ToString(), littletext, Brushes.Black, (x * width + _startx) - ((x - ((g_size.Width / width) / 2)).ToString().Length * 4), g_size.Height + _starty + 4);
We basically do the same thing for drawing the vertical lines. We dispose the graphics object we're printing, and then we're done! A nice, clean (with STRAIGHT lines too!) graph to draw on, making my math teacher a happy camper. Now all I have to do is create a class that will perform all the algebraic formulas for me, and I'll have it made!!! (Ok, maybe not.)
There are a few things that I would do differently if I had the time, but for now, it's good enough. For example, instead of deriving our newly created class from the PRINTDOCUMENT class, derive it from the GRAPHICS object, so it can draw to the graphics object and then send it to the PRINTDOCUMENT object. (That way, if you later wanted to display it on the screen, it would be a piece of cake.) Maybe someday I'll do that, and then turn this into a WinForms app that will let the user draw it out and print it (instead of having to specify the width and height in pixels). But until then, I've got some homework I need to finish... Happy coding!!!
Command used to compile: csc /t:exe /out:C:\Josh\cs\graph\Graph.exe C:\Josh\cs\graph\Graph.cs /r:System.dll,System.Drawing.dll
Final results
/// Notice this AIN'T COPYRIGHTED!!!! (Hope me English teacher don't see this)
/// Have fun! And HAPPY CODING!!!!
/// Happy Easter and a MERRY ST. PATRICK'S DAY!!!!!
using System;
using System.Drawing;
using System.Drawing.Printing;

namespace Josh {
    public class Graph {
        [STAThread]
        static void Main(string[] args) {
            MathGraphPrint pp = new MathGraphPrint();
            pp.DocumentName = "Math Graph";
            Console.WriteLine("Will now print graph.");
            Console.WriteLine("Specify width in pixels of graph: ");
            int x = Convert.ToInt32(Console.ReadLine());
            Console.WriteLine("Specify height in pixels of graph: ");
            int y = Convert.ToInt32(Console.ReadLine());
            Console.WriteLine("Name of graph: ");
            string text = Console.ReadLine();
            Console.WriteLine("Width of squares: ");
            int width = Convert.ToInt32(Console.ReadLine());
            Console.WriteLine("Hold on to your hoola-hoops! CAUSE WE'RE PRINTING!!!");
            pp.Text = text;
            pp.Size = new Size(x, y);
            pp.Width = width;
            pp.Print();
            Console.ReadLine();
        }
    }

    public class MathGraphPrint : PrintDocument {
        private Size g_size;
        private int width;
        private string text;
        private bool shownums;
        private Color color;
        private Color midcolor;
        private int _mod;

        public string Text { set { text = value; } get { return text; } }
        public Size Size { set { g_size = value; } get { return g_size; } }
        public int Width { set { width = value; } get { return width; } }
        public bool ShowNums { set { shownums = value; } get { return shownums; } }
        public Color Color { set { color = value; } get { return color; } }
        public Color MidColor { set { midcolor = value; } get { return midcolor; } }
        public int Mod { set { _mod = value; } get { return _mod; } }

        public MathGraphPrint() {
            g_size = new Size(378, 378);
            width = 13;
            text = "";
            shownums = true;
            color = Color.Black;
            midcolor = Color.Red;
            _mod = 1;
            this.PrintPage += new PrintPageEventHandler(this.printme);
        }

        private void printme(object sender, PrintPageEventArgs ppea) {
            Graphics gfx = ppea.Graphics;
            int _starty = 28;
            int _startx = 5;
            Font littletext = new Font("Times New Roman", 8);
            // Draw text
            gfx.DrawString(text, new Font("Times New Roman", 15), Brushes.Black, 0, 0);
            // Draw horizontal lines.
            for (int x = 0; x < (g_size.Width / width); x++) {
                if (x == ((g_size.Width / width) / 2)) {
                    gfx.DrawLine(new Pen(midcolor), (x * width) + _startx, _starty, (x * width + _startx), g_size.Height + _starty);
                } else {
                    gfx.DrawLine(new Pen(color), (x * width) + _startx, _starty, (x * width + _startx), g_size.Height + _starty);
                }
                if (shownums == true) {
                    gfx.DrawString((x - ((g_size.Width / width)) / 2).ToString(), littletext, Brushes.Black, (x * width + _startx) - ((x - ((g_size.Width / width) / 2)).ToString().Length * 4), g_size.Height + _starty + 4);
                }
            }
            // Draw vertical lines.
            for (int y = 0; y < (g_size.Height / width); y++) {
                if (y == ((g_size.Height / width) / 2)) {
                    gfx.DrawLine(new Pen(midcolor), _startx, (y * width + _starty), g_size.Width + _startx, (y * width + _starty));
                } else {
                    gfx.DrawLine(new Pen(color), _startx, (y * width + _starty), g_size.Width + _startx, (y * width + _starty));
                }
                if (shownums == true) {
                    gfx.DrawString((((g_size.Height / width) / 2) - y).ToString(), littletext, Brushes.Black, g_size.Width + 4 + _startx, (y * width + _starty) - 6);
                }
            }
            gfx.Dispose();
        }
    }
}
View All | http://www.c-sharpcorner.com/article/mathgraph-in-C-Sharp/ | CC-MAIN-2017-22 | refinedweb | 1,692 | 69.89 |
Grzegorz Grzybek created CXF-7183:
-------------------------------------
Summary: CXF Blueprint namespace don't work well with blueprint-core 1.7.x
Key: CXF-7183
URL:
Project: CXF
Issue Type: Bug
Components: OSGi
Affects Versions: 3.0.12, 3.1.9
Reporter: Grzegorz Grzybek
Aries blueprint-core 1.7.1 changes (improves) the way namespace handlers (classes implementing the
{{org.apache.aries.blueprint.NamespaceHandler}} interface) resolve namespaces to URIs.
The problem arises when the XSD behind a handler imports other XSDs that may be resolved by other
handlers.
Around Aries blueprint-core 1.4.x, imported XSDs were looked up in all currently registered
handlers, but this was prone to race conditions, because a handler that could resolve a given
imported XSD might not have been registered yet.
The clean solution is to delegate at code level to correct handler.
--
This message was sent by Atlassian JIRA
(v6.3.4#6332) | http://mail-archives.apache.org/mod_mbox/cxf-issues/201612.mbox/%3CJIRA.13028325.1481790579000.527454.1481790599105@Atlassian.JIRA%3E | CC-MAIN-2017-43 | refinedweb | 144 | 59.3 |