text stringlengths 454 608k | url stringlengths 17 896 | dump stringclasses 91
values | source stringclasses 1
value | word_count int64 101 114k | flesch_reading_ease float64 50 104 |
|---|---|---|---|---|---|
Hello
I got this grid working as you can se in the code bellow, but when i try to make some printf (or cout) to make it show some numbers i only get "0 0 1 1 2 2" displayed in a line. anyone know why and how i could make it the display the grid & Numbers like it should?
Code:#include "stdafx.h" #include <iostream> #include <ctime> using namespace std; int main() { int grid[3][3]; // grid int count = 1; for(int i = 0; i < 3; i++){ for(int n = 0; n < 3; n++){ if(i == 2 && n == 2) break; // dont give out a number to square 9 grid[i][n] = count; // give out a number to square 1-8 count++; cout << i << endl; } } return 0; } | http://cboard.cprogramming.com/cplusplus-programming/107684-help-grind.html | CC-MAIN-2015-35 | refinedweb | 127 | 74.19 |
Code Editing Tools in Visual Studio 2019
To write good code, you need proper tools. Tools such as IntelliSense, Code Cleanup, and IntelliCode not only reduce our coding time but also assist us in conforming to most coding standards. Today, I will highlight a few tools that you can use.
IntelliSense
IntelliSense is a code-completion tool that assists us while we are typing our code. It helps us add property and method calls by using only a few keystrokes, it keeps track of the parameters being typed, and helps us learn more about the code we are busy using. IntelliSense is mostly language-specific.
IntelliSense includes the following features:
- List Members
- Parameter Info
- Quick Info
- Complete Word
Let's have a look at them one by one.
List Members
When a period (.) is pressed, a list of members from a type or namespace appears. The list filters as you type to provide a proper Camel Case selection. We can double-click the suggested item, or press the tab or space button.
You also can invoke this feature by typing Ctrl + J, choosing Edit, IntelliSense, List Members, or by choosing the List Members button on the editor toolbar.
Parameter Info
Parameter Info provides us with information on the number, names, or the types of parameters required by a certain method, template, or an attribute generic type parameter. The bold parameter indicates the next required parameter while we type. In an overloaded function, we can simply use the Up or Down arrow keys to view different parameter information for the function overloads.
Quick Info
Quick Info displays the complete declaration for any identifier in our code. After you select a member from the List Members box, Quick Info appears. You also can invoke Quick Info by choosing Edit, IntelliSense, Quick Info, or by pressing Ctrl + I, Ctrl + K, or by choosing the Quick Info button on the editor toolbar.
IntelliTrace
IntelliTrace is included in Visual Studio Enterprise only. It records and traces the code's execution history by performing the following functions: Recording specific events in code, examining code, Locals Window data, and function call information, debugging difficult-to-find errors, or errors that happen in deployment.
IntelliCode Extension for Visual Studio 2019
The IntelliCode Extension for Visual Studio 2019 enhances your software development efforts with the help of artificial intelligence (AI). It helps developers find issues faster and focus on code reviews. It improves developer productivity with features such as contextual IntelliSense, code formatting, and style rule inference. It combines existing developer workflows with machine-learning to provide an understanding of the code and its context.
IntelliCode is available on the Visual Studio Marketplace.
Code Cleanup
Code Cleanup formats the code and applies any and all code fixes as suggested by its current settings. The Health Inspector can be accessed at the bottom of the Code Window, indicated by a little brush icon, as shown in Figure 1.
Figure 1: Code Cleanup
There are two built-in Profiles. One is pre-set, but can be edited. Profile2 is initially empty but can be set and amended at any time. To edit these profiles, we can select Tools, Options, Text Editor, C#.
EditorConfig
EditorConfig helps us maintain a consistent coding style while working with multiple developers on the same project with different editors and IDEs.
Solution Filtering
We can share project-load configuration files by creating a Solution Filter File (with the extension .slnf); you can see this in Figure 2. Use the following steps:
- Right-click the solution.
- Select Save As Solution Filter.
- Choose a name and location.
Figure 2: Solution Filter
Conclusion
Visual Studio 2019 makes a lot of tedious tasks so much easier and fun to do. These tools that I have talked about help us to code better and faster. To find out about more tools like these, you are welcome to have a read through Visual Studio 2019 In Depth, my new book on the subject.
This article was originally published on December 3, 2019... | https://www.developer.com/net/net/code-editing-tools-in-visual-studio-2019.html | CC-MAIN-2020-10 | refinedweb | 669 | 55.24 |
Binding Error in Spring - Spring
Binding Error in Spring Error:
Neither BindingResult nor plain target object for bean name 'loginBean' available as request attribute
I am...
What could be the cause of this error?
Thanks
spring
spring package bean;
public interface AccountsDAOI{
double... normally.
i set the classpath=D:\java softwares\ST-IV\Spring\spring-framework-2
.5.1\dist\spring.jar;D:\java softwares\ST-IV\Spring\spring-framework-2.5.1\lib\c
error
HelloWorld
Deployment Error for module: HelloWorld: Error occurred during deployment: Exception while deploying the app [HelloWorld... [HelloWorld]. TargetNamespace.1 : Espace de noms " message, which appears when
authentication failed. The predefined error message with Hibernate - Spring
Spring with Hibernate When Iam Executing my Spring ORM module (Spring with Hibernate), The following messages is displaying on the browser window... The server encountered an internal error () that prevented it from fulfilling
Spring Hello World prog - Spring
Spring Hello World prog I used running the helloworld prog code mentioned in
I'm... visit the following link::
NoSuchBeanDefinitionException - Spring
org.springframework.beans.factory.BeanCreationException: Error creating bean with name 'ohfileUploadController... :
For more information on Spring visit to :
Thanks
Customizing the Default Error Page
In this section, you will learn about Customizing the Default Error Page in Spring MVC
org.hibernate.exception.GenericJDBCException - Spring
running application in tomcat server plz help me.
exception is:
Error is: Could... is
org.hibernate.exception.GenericJDBCException: Cannot open connection
Error:==>
error rectify
error rectify public class HelloWorld{
public static void main(String []args)throws IOException {
try {
for(int i=0;i<24;i++)
{
for(int j
What is Spring?
What is Spring?
Spring is great framework for development of Enterprise grade applications.
Spring is a light-weight framework for the development
display Helloworld through servlets using jboss
display Helloworld through servlets using jboss I'm beginner,Can You please Write the code for this.Including WEB.INF
Please visit the following link:
An introduction to spring framework
SPRING Framework... AN
INTRODUCTION...;
(PUBLISHED IN DEVELOPER IQ -
September2005)
Spring is an open-source....
Just as Hibernate
attacks CMP as primitive ORM technology, Spring attacks) aaa
2) bbb
c) ccc
when i click either one of them it should go
Spring asm Dependency
error in
your Spring framework then you can get the latest version of asm... dependency in your Spring 3 project you
might get the following error:
Jan... Spring 3 ASM - Spring asm Dependency
run time error
saved your class with HelloWorld.
Check it out. Even then if error occurs, post... HelloWorld
{
public static void main(String[] args)
{
System.out.println... showing output. Have you save the file name with HelloWorld? One more thing did
property datasource required - Spring
property datasource required Hi I am using java springs I am using mysql as backend I am getting one error when i compile the application.property... the following link:
regarding header files - Spring
regarding header files i am working on linux platform. i a using spring framework 2.5.4.while comlinig the client program i.e from demo which given in tutorial.
error is these two package does not exist
import
spring
spring sir how to access multiple jsp's in spring
Spring Validation
Spring Validation
In this tutorial you will see an example of how to perform validation in
spring-core. The spring provide Validator interface... that while
validation, report validation send by validators to the Error object
Tomcat Server error - Framework
Tomcat Server error I am doing problem on struts,spring integration.in that at the time of tomcat server starting SEVERE: Error listenerStart occurs.why this error will occurs please tell the reason
spring
spring i am a beginner in java.. but i have to learn spring framework.. i know the core java concepts with some J2EE knowledge..can i learn spring without knowing anything about struts
deployment error - XML
web.xml entries to support spring and site mesh?? here is the error on my console...deployment error hai, iam using sitemesh on spring framework, iam getting the following error after i deployed my war file on tomcat.I suspecting my
Crystal Reports with Spring: problem
Crystal Reports with Spring: problem Hi all,
having successfully... tool,
I would like to integrate Crystal Reports into my existing spring MVC... a ByteArrayInputStream in which there are report's data. The error occurs when returning from
spring in spring
Table Heading in HTML
Table Heading in HTML
... consists of heading
enclosed within <th> tag. It consists of rows ,which...
The Tutorial illustrates an example from Table Heading in HTML
Servlet error - Java Beginners
my own servlet program..
The error i'm facing is...
exception... mentioned URL:
Spring
Delete database Table through hibernate and Spring
Spring,Hibernate and Tapestry to save and also fetch data simultaneously from...);
private UserDao userDao = null;
/**
* setter to allows spring to inject... String images;
private String heading;
private String type;
public Info
Servlet error - Java Beginners
:
Hope... the following error:
org.apache.jasper.JasperException: Exception in JSP: /jsp
Java Compilation error. - Java Interview Questions
Java Compilation error. when ever i run my console application i get this error or message even if it is a helloworld program please help
AgentId [JavaApplication]c:\javase\bin\javaw.exe
spring - Spring
spring sample code for formvalidation by using spring with javascript as presentation
ex
textboxes like
dept id
dept name
Submit.../spring/userregistration.shtml
Hope that it will be helpful for you.
Thanks
ClassNotFound Exception:DispatcherServlet in spring application?
ClassNotFound Exception:DispatcherServlet in spring application? **I... org.apache.catalina.core.ApplicationContext log
SEVERE: Error loading WebappClassLoader
delegate... org.apache.catalina.core.ApplicationContext log
INFO: Initializing Spring FrameworkServlet 'dispatcher
spring - Spring
spring i want to work in spring.i m new to spring.
i have download the springframework 2.5.4 ,
i dont know how to use this in eclipse 3.3.
i m using JBOSS server.
help me plz
HelloWorld in PHP
Acegi Security exclusions Error - Security
-security 1.0.6 org.springframework spring-dao Hi,I used the following code...;groupId>org.springframework</groupId> <artifactId>spring-dao<...;org.springframework</groupId> <artifactId>spring-jdbc</artifactId>
error
error while iam compiling iam getting expected error
How to use spring validator with multiaction controller?
How to use spring validator with multiaction controller? Hi, I am trying to use spring validator with multiaction controller.
The configuration...;
And the controller has method called
update(..., BindException error)
----it shows...://
Error
Error I have created ajax with php for state and city. When I change state then city will not come in dropdown list and it give me error as 'Unknown Runtime Error'.
This error come only in IE browser, but in other brower
Spring and Eclipse - Spring
Spring and Eclipse How to add Spring API Docs to Eclipse
error
error java.lang.unsupportedclassversionerror:bad major version at offset 6
how to solve this????
Hi,
Please check the version of framework used and also the JDK version.
This type error also comes when java file
error
error
Spring - Spring
spring - Spring
error!!!!!!!!!
error!!!!!!!!! st=con.createStatement();
int a=Integer.parseInt(txttrno.getText());
String b=txttname.getText();
String c=txtfrom.getText();
String d=txtto.getText
Ask Questions?
If you are facing any programming issue, such as compilation errors or not able to find the code you are looking for.
Ask your questions, our development team will try to give answers to your questions. | http://www.roseindia.net/tutorialhelp/comment/93542 | CC-MAIN-2013-20 | refinedweb | 1,222 | 50.23 |
Introduction
The EPiServer CMS Change Log System (referred to simply as "Change Log" in this document) is a facility where changes to an EPiServer CMS site can be logged. For example, all changes to pages, files and directories are currently logged in the Change Log to support the Mirroring and OnlineCenter features in EPiServer CMS and also act as a general audit mechanism.
User Interface / Admin mode
You can access Change Log from the EPiServer CMS Admin Mode, go to > Config tab > Change Log (under Tool Settings). Select the View tab to view the Change Log, which also can be filtered by change date, category, action and changed by.
The state of the Change Log can also be viewed and changed from the Status tab of this page. The Change Log has the following states:
- Enabled means that the Change Log System will start automatically when the site starts and will be available for read and write operations.
- Disabled means that the Change Log System will not start when the site starts. Items written to the Change Log will be ignored but items may still be read from the Change Log.
- Auto means that the Change Log System will start as soon as any dependencies (such as a Mirroring Job) have been registered against the Change Log. If no dependencies exist the system will not start or will stop if already running.
The Change Log Auto Truncate scheduled job can also be scheduled and executed manually from Admin Mode. On the Admin tab under the Scheduled Jobs heading you will the menu item. This scheduled job will delete items from the Change Log that are over one month old and do not have a dependency registered against them by another part of EPiServer CMS, for example Mirroring.
Programmatic Interfaces
The classes and interfaces for the Change Log can be found in the EPiServer.ChangeLog namespace in the file EPiServer.dll. | https://world.episerver.com/documentation/Items/Developers-Guide/Episerver-CMS/7/Logging/Configuring-Change-Log/ | CC-MAIN-2019-26 | refinedweb | 320 | 66.88 |
.
- If you don’t care about the obsolete classes / methods, you can turn off warning level 4 down to level 3 in each project properties. If not you have to fix all the errors from warnings. If you want to be really cool you can use pragmas (yeah they finally got it right (geez C++ has had the for a while now).
- Any type in your configuration data that can be null can not be an abstract class. The problem is that the XmlSerializer does not care that something is null (nil), it stills tries to instantiate the class. To fix this, make any of these types concrete and not abstract. The two biggest classes are TransformerData and KeyAlgorithmPairStorageProviderData. So what most of you are seeing right now is that when you don’t specify a key storage provider for configuration, it blows up. That is because it is trying to instantiate KeyAlgorithmPairStorageProviderData class which is abstract.
- The XmlSerializer no longer supports the CDATA sections
. So in logging in the TextFormatterData class, you will have to change the template to something else like an XmlNodeList or if you don’t care about reading the template, a base64 encoded string.
- The ConfigurationDesignHost implements the IDictionaryService interface. Kill it. Remove it completely. In truth, this was never supposed to work (according to the System.ComponentModel guys). You are only to have one IDictionaryService per Site. Ok another reason to remove it, no one ever uses it
. This should get the tool working.
Since this may not be everything (I just have had one pass through the code), post or email me other changes and I will try and keep this article up to date.
Now playing: Soundgarden – Jesus Christ Pose
Hey,
I still get this error:
Warning 9 Cannot find custom tool ‘StringResourceTool’ on this system. C:Program FilesMicrosoft Enterprise LibrarysrcLoggingSinksDatabaseSR.strings 0 0
How do I fix this?
Scott,
Thanks, my morning (despite the rain) is already looking up!
Miguel,
You can pick up the string resource generator at:
If you download the latest source from that site, rename the file to string.zip (strangely is missing the extension), and unzip it there is an installation for the resource generator in there. Good luck!
Thanks, Scott.
Could you expand on bullet point 3, please? Assume it’s first thing in the morning and I have an awful head cold and don’t readily see what I need to do to this class. 🙂
On point #3, basically you want to remove the IDictionaryService interface from the class (ConfigurationDesignHost) like this:
public class ConfigurationDesignHost : ServiceContainer, IContainer, IComponentChangeService, IDictionaryService
…becomes…
public class ConfigurationDesignHost : ServiceContainer, IContainer, IComponentChangeService
You’ll also want to remove this line (#50):
AddService(typeof(IDictionaryService), this);
And if you are feeling adventurous you can remove any of the methods that have ‘seealso cref="IDictionaryService.xxx", but I don’t think that is necessary.
The changes seemed to go well, but the dreaded "ReadOnlyConfigurationSectionData is not found in schema" error is still there. I think you only see this if you are building the library into a web project (web.config).
I tried to get around it by inferring an XSD from the configuration XML and then referencing that in the xmlns tag on the enterpriselibrary.configurationSettings element. I know it is kinda crazy, and I’m not sure why it makes any difference but it at least lets me run up until the point where it starts deserializing the configuration. I’m getting close tho! 8)
Its working! Its working! ASP.NET 2.0 & Enterprise Library… seriously.. months I’ve been trying this.
Ok..
1.) Do everything above. On step 3 you can change the TextFormatterData templateData member to a string like this:
…
private string templateData = string.Empty;
…
[XmlElement("template")]
public string Template
{
get { return Convert.ToBase64String(System.Text.Encoding.ASCII.GetBytes(templateData)); }
set
{
byte[] conv = Convert.FromBase64String(value);
char[] chars = System.Text.Encoding.ASCII.GetChars(conv);
this.templateData = new string(chars);
}
}
…
2.) You’ll need to wipe out the CDATA section in the loggingdistributorconfiguration.config file and just leave <template></template> for now until it’s all working properly.
3.) Move all the enterprise library configuration stuff to a subfolder in your website (like Configuration) including the entries in web.config (I created a new enterpriseLibrary.config to hold this using the web.config template VS generated)
4.) Ok, here is where the real hack comes in. I’m certain there is a better way, but I’m desperate.. Since the configuration code looks to web.config by default, and I couln’t determine how to change this, I added this line to the ConfigurationBuilder at #577 just before it loads the configuration.
configurationFile = configurationFile.Replace("web.config", "Configuration\enterpriseLibrary.config");
So instead of looking at web.config (which is now empty of EntLib stuff so VS is happy), it looks into the Configuration subfolder for the ‘enterpriseLibrary.config’ file and works from there.
5.) You may get an error when the logger tries to access the security event log.. Another hack, but I went into regedit and gave ASPNET permissions to read/write this section and it went away. Here is the key I changed the permissions on:
HKEY_LOCAL_MACHINESYSTEMCurrentControlSetServicesEventlogSecurity
Phew…
So, very good luck to all and if you are unable to build the library in VS2005, you can email me at questions@rationalpath.com for suggestions.
Thanks Scott!
When i change KeyAlgorithmPairStorageProviderData from being abstract to just public class KeyAlgorithmPairStorageProviderData : ProviderData, ICloneable
It barked that TypeName wasn’t implemented from ProviderData Class. So i put it in as override. But then the CustomKeyAlgorithmPairStorageProviderData barked with: System.InvalidOperationException: Member ‘CustomKeyAlgorithmPairStorageProviderData.TypeName’ hides inherited member ‘KeyAlgorithmPairStorageProviderData.TypeName’, but has different custom attributes
I understand why i’m getting the exception but not sure how to get around it. It won’t let me set TypeName as virtual within KeyAlgorithmPairStorageProviderData because it is overriding the one from ProviderData.
What am i missing?
Hi,
I’m almost there. I guess the sunshine and warm weather helped :). I can build the solution and start the tool application but when I try to save the configuration I get ‘object reference not set to an instance of an object. It’s a System.NullReferenceException. I suspect it’s related to scott’s #2 issue. Scott(or anyone), could you elaborate more on it, specially this part that I don’t understand ‘To fix this, make any of these types concrete and not abstract.’ ?
Thanks!
Hello,
I just removed the abstract attribute for the classes that you mention, but now I am stuck with an exception on ConfigurationManagerSectionHandler.Create method.
The folowing line of code throw an exception:
XmlSerializer xmlSerializer = new XmlSerializer(typeof(ConfigurationSettings));
"There was an error reflecting type Microsoft.EnterpriseLibrary.Practices.Configuration.ConfigurationSettings"
I am using VS 2005 Beta 2
I get it working in the following way:
After removing abstract from the two classes TransformerData and KeyAlgorithmPairStorageProviderData, you should add a public default contructor to the classes (it seems that it is required while XmlSerializer reflects on types).
You also should declare and define the TypeName property as following:
[XmlAttribute("type")]
public override string TypeName
{
get { return typeof TransformerData).AssemblyQualifiedName; }
set {}
}
respectivelly
[XmlAttribute("type")]
public override string TypeName
{
get { return typeof KeyAlgorithmPairStorageProviderData).AssemblyQualifiedName; }
set {}
}
It should work then…
Regards
No more "ReadOnlyConfigurationSectionData is not found" ! 8)
In case you are still struggling with the changes, I’ve built a <soooper>Unofficial</soooper> release of the Enterprise Library for ASP.NET 2.0 assemblies which can be downloaded from the URL below. All the assemblies are signed, so you can also install them right to the GAC if you wish.
Thanks!
James
I got it to build succesfully, but i had to do a few extra things not listed by Scott.
For every project, changing the warning level from 4 to 3 did not remove the obsolete class/method warnings. I had to list warning number 0618 to the supress warnings text box. I also had to list 1717 to the Logging and ExceptionHandling.Logging projects because a field was begin set to itself. Other than that i didn’t have an issue. Just thought i would list this in case anyone else ran into it.
So I started compiling EL on VS.NET 2005. Everything works fine after a few tweaks but the web.config file does not like the tag "enterpriselibrary.configurationSettings", but later on that works fine too. Now the web services and everything else works fine but when I run the project in Debug mode, I get I get following error:
{}XmlFileStorageProviderData is not found in Schema.
I am pulling my hair…need some assistance. I would want to keep all these settings in Web.config.
Thanks,
Ram
Hi,
I have compiled parts of the library with Beta 2 of VS. Specifically, I am working on Configuration, Common and the Confiuration Console. I am past the XmlSerializer problems described in previous posts, but the output configuration files generated by the console only include the root node. I am suspecting more serialization problems. Can you provide any guidance on how to solve this?
Ok, This one took a lot of people a lot of time. It took me a couple days to compile the…
I don’t know how many of you are having the same problem I did. I am working now (Thank God.) I put together a quick list of exactly what I did to get myself up and running with enterprise library 1.9.9.9 from Jason White:
So I implemented James’ and Rick’s solution but the hack for web.config in ConfigBuilder.cs doesn’t work with me because I am also using WSE 2.0 and this hack will break my code on client. Additionally, isn’t there another way to get around it without the hack?
Thanks,
Ram
We are trying to get the EnterpriseLibrary to work in Visual studio 2005 beta 2. We want to use the library in the smart client application. Does anyone have the full version compiled and tested under 2005 beta 2? While compiling in beta 2,we changed the obsolete namespaces and are able to compile in beta 2. However there are NUNit test cases attached to it which fail. Most of them are on the XML seralizer. After applying the changes mentioned by Scott some of them passed. But there are still some which fail. Its basically in Configuration block where it expects a XML of certain type. like xmlns:xsd="“>" " +"xmlns:xsi="“>" " +
but it receives the string like "mlns:xsi="“>" first and then the "xmlns:xsd="“>" and the test case fails. Has anyone encountered the problem and know a work around. Any help would be appreciated.
-Vishal Karnik
Perhaps Microsoft could learn from the OSS community here and supply the changes as a plaintext diff/patch file?
Applying a patch is a lot easier than having to manually comb through the source code and replicate disparate changes and hope you got it right.
Hi everybody,
I have dowloaded the following version for starting to get it works with 2005 beta 2:.
Each time i try to add a reference to the configuration assembly then i got yellow exclamation mark on this reference and i can’t compile it.
Does anyone know what is the problem ?
Cheers,
Shmulik
For the exclamation with reference, i got the same problem … Just put your reference in a shorter path (like c:reference…) and it will work !
Ok enterprise 1.9.9.9 work in winform, but in web I still have problem with configuration, I do everything in, token key and version are correct, web.config doesn’t have anything about EL, I make the ‘Configuration’ repertory and copy all the .config, but when i ask for database instance name it is still empty ? Any idea?
Benoit
Okay, so here’s how you get Enterprise Library to work with Visual Studio 2005,…
Okay, so here’s how you get Enterprise Library to work with Visual Studio 2005,…
How to get enterprise library to work with beta 2.
Okay, so here’s how you get Enterprise Library to work with Visual Studio 2005,….
PingBack from
PingBack from
PingBack from | https://blogs.msdn.microsoft.com/scottdensmore/2005/04/27/how-to-get-enterprise-library-to-work-whidbey-beta-2/ | CC-MAIN-2017-26 | refinedweb | 2,041 | 57.06 |
In the previous chapter, we reviewed some of the basics of GraphQL, differences between REST and GraphQL and core concepts of GraphQL. Now let’s understand more about some of the internal working of core-concepts. And, What components suggested for testing GraphQL?
Table of Contents: The Definitive Guide to Testing GraphQL API
- Introduction and getting started with GraphQL
- You’re here → Testing GraphQL APIs
- GraphQL Test Playground
- Querying GraphQL API
- Testing GraphQL
- Testing GraphQL using Tools in the API Ecosystem
- Outro
- Cross-functional testing of GraphQL APIs – Coming Soon
GraphQL Test Playground
We will be working with a simple Movie and Actor based GraphQL API that shows some name of the movie and list of movies and actor details. I’ve developed and Open-Sourced here. Firstly, let us look at the Schema of the API that we are going to use for testing. And we will be using the same for the upcoming chapters in this course.
I’ve used code sandbox to deploy the source code of GraphQL server component and it is accessible here. This will be available for access for a limited time, and when you are reading this and the sandbox environment is not accessible, I will request you to clone the source code and build it locally.
To recap, we’ve learned from our previous chapter that, everything is perceived as a graph and it’s connected (think of the GraphQL Logo) and represented as a single endpoint. In REST, we can see multiple endpoints for different functions of your product. In GraphQL, it’s through a single endpoint, and the main entry point is via
RootQueryType as the name suggests, it’s the root and a single entry point towards accessing your server component. The below picture depicts precisely that, and we have four types
movie,
actor,
movies and
actors that users can query for data. And we’ve also seen different scenarios that users may query in the previous chapter.
Querying GraphQL API
Before we jump onto testing components of GraphQL, firstly we need to understand how to call a GraphQL API over HTTP. There are different ways in which you could trigger an API. I’ve shared below some easy and commonly used ways as below:
- cURL: Curl is a popular command-line tool and library for processing data over HTTP (and many other) protocols. You will have to send a curl command along with three parameters.
- Firstly, the content-type
application/jsonbecause the query is part of JSON.
- Secondly, the data sent will be actual query like
{ "query": "{ movies { name } }" }
- Finally, the GraphQL endpoint: mostly all the GraphQL calls would be based on HTTP verb POST.
- In summary, below is the curl command to query your GraphQL endpoint.
curl -X POST -H "Content-Type: application/json" --data '{ "query": "{ movies { name, genre } }" }' | jq .
- GraphiQL: It’s a React-based Front-end that provides a nice GUI for us to compose queries from the browser. Users have an option to set a boolean flag to set
graphiqlto be
Truein your
app.jsfile. Then, GraphiQL is accessible automatically the moment you build the server component via
nodemonor
node.
- Using
fetchlibrary: The code below is written based on JavaScript. But you can do this in any language. It is basically an extension of the cURL command that we saw above.
require('isomorphic-fetch'); fetch('', { method: 'POST', headers: { 'Content-Type': 'application/json' }, body: JSON.stringify({ query: '{ movies { name, genre } }' }), }) .then(response => response.json()) .then(response => console.log(response.data));
Testing GraphQL
There are several ways your application could break, and testing is very much an essential activity to release code with confidence. Speaking of testing GraphQL API’s, there are multiple ways and methodologies that you could perform. Firstly, let us take a look at the components to check in GraphQL and then we will look at implementing a testing methodology.
We all know the famous Test Pyramid by Mike Cohn. It’s a great visual metaphor telling about the tests you could write at different levels. We all know it is proven to be effective in many ways for different environments and scenarios.
There is also a new way of looking at the organizing of your tests called the Testing Trophy 🏆 that helps write tests at different granular levels. I like both the testing metaphors, and it’s best to apply whatever works per the environment and the scenario. Testing GraphQL APIs can utilize the Testing Trophy very well because the lower level of the testing trophy recommends checking for static checks. We have a Schema defined with its strong types and stands as a source of truth to other field type and parameters, you can easily do a lot of static checks. Let’s look at some of the core components in GraphQL that should be tested which will fit into different levels like the Unit, Integration and Static checks.
Components to test in GraphQL
The first choice to go about testing GraphQL APIs would be to look at Schema and check for those types that you’d have had defined. Here is a cool handy cheat-sheet for reference to write schema succinctly, useful for developers and testers. Schema can be tested by the following ways:
- Static Type Checks: A lot of these can be tested using two of my favourite tools, the graphql-schema-linter and eslint-plugin-graphql and of course you could use Flow to perform static checks.
- Mock Schema & Test queries: You should consider mocking the schema using
mock-serverthat should be available in both
graphql-toolsor
apollo-servertools and write tests(queries) with all possible combinations.
import { makeExecutableSchema } from '@graphql-tools/schema'; import { addMocksToSchema } from '@graphql-tools/mock'; import { graphql } from 'graphql'; const schemaString = `type Query { movie(id: ID): Movie actor(id: ID): Actor } type Actor { id:String! name:String! age:Int movies:Movie } type Movie { id:String name:String! genre:String actor:Actor } type Movies { movies:[Movie] } type Actors { actor:[Actors] }`; // Make a GraphQL schema with no resolvers const schema = makeExecutableSchema({ typeDefs: schemaString }); // Create a new schema with mocks const schemaWithMocks = addMocksToSchema({ schema }); const query = ` query getmoviewithid { movie(id: 6) { name, genre } } `; graphql(schemaWithMocks, query).then((result) => console.log('Got result', result));
Testing queries are easy with
GraphiQL if you are using JavaScript to build GraphQL server components. There are also other plugins to like Altair that can be used to test if you are using other language clients to build GraphQL server. The above GIF under GraphiQL section exactly shows the same.
- You can also test the queries in an automated fashion using libraries like
requestand
supertest
const supertest = require("supertest"); const expect = require("chai").expect; const schema = require("../schema/schema"); let baseURL = supertest(""); let list_users = `{ movie(id:"1234595830") { name id actor { name age id } } } `; describe("POST Request", async () => { let post_resp; it("makes a POST call ", async () => { post_resp = await baseURL .post(list_users); await console.log(post_resp.body); //Do any other validation here. }); });
- Using
easygraphql-testeryou could test for different queries against the schema definition.
const EasyGraphQLTester = require('easygraphql-tester') const fs = require('fs') const path = require('path') const schemaCode = fs.readFileSync( path.join("./schema/movies-schema.gql"), "utf8" ); const tester = new EasyGraphQLTester(schemaCode) describe("Test Schema, Queries and Mutation", () => { let tester; before(() => { tester = new EasyGraphQLTester(schemaCode); //just to make sure schema comes through swiftly //console.log(util.inspect(tester)) }); it("Should pass with a valid query", () => { const query = ` { movies { name genre actor{ name age } } } `; // First arg: true because the query is valid // Second arg: query to test tester.test(true, query); }); });
Mutations modify data in our database and return us a value. We’ve learned from the previous chapter that Mutations are equivalent to CRUD operations in the REST API world. Testing Mutations are very important as it involves testing data access and addition to databases. Mutations can be tested by the following ways:
- Using GraphiQL: Use the below mutation query in the GraphiQL client and test for valid response and message.
mutation{ addMovie(name: "MovieName", genre: "Action", actorId:"The ID of Actor"){ name genre } }
- Using
easygraphql-tester
it("Should fail with a Invalid mutation query", () => { const mutation = ` mutation addMovie($id: String!, $name: String!, $genre: String) { addMovie(id: $id, name: $name, genre: $genre) { id name genre } } `; // First arg: false because the query is Invalid // Second arg: query to test tester.test(true, mutation, { id: "id123", name: "testMovie", genre: "Action", }); });
Resolvers are query handlers and are pure JavaScript functions. The Resolver function would typically have the following syntax:
fieldName(obj, args, context, info) { result }
Let’s take our example query to get movies as below and see how Resolver functions would be like:
query { movies{ name genre actor{ name } } }
objin
Query.movieswill be whatever the server configuration passed for
rootValue.
objin
Movie.nameand
Movie.genrewill be the result from
movies, usually, a movies object from the backend.
objin
Actor.nameand will be one item from the
actorresult array.
GraphQL query is a series of function calls (resolver functions) nested in a way according to the query pattern. So testing Resolvers would be the same as unit testing a function in JavaScript.
I hope that by now, you are aware of the core component of GraphQL and how to test those components. Now let’s go ahead and look at how to test GraphQL APIs using some of the tools and libraries in the API testing ecosystem.
Testing GraphQL using Tools in the API Ecosystem
Testing GraphQL with Postman
Exploratory testing with GraphQL is quite simple, Postman has inbuilt support to run GraphQL queries. Let’s see how can you do that:
- Step 1: Select the Schema type as GraphQL under New API dialog. Upload your GraphQL Schema onto your Postman, this will help us assist writing easy queries and also help us with supporting query completion. This is optional, and you could even test it without uploading Schema.
- Step 2: Next up, you can start creating a collection and add an API call onto your collection. In this case, I’ve named our collection as
TestProject-GraphQLand have three
POSTcalls, as shown on the left-hand side in the below picture. I also have three different
POSTcalls testing for different data.
- Step 3: Next, the API test is just to demonstrate the usage of Query variable, say when you’d want to query for a particular
Movieor
Actorusing its unique identifier.
query($id: ID!){ actor(id:$id){ name age movies{ name } } }
- Step 4: We can make use of
teststab in Postman to automate and validate the response.
pm.test("Status code is 200", function () { pm.response.to.have.status(200); }); pm.test("Verify Movie Name", function () { var responsePayload = pm.response.json(); pm.expect(responsePayload.data.movies).to.be.oneOf(["The Matrix","Inception","Inferno"]); });
- Step 5: We can automate these tests using
Testssection and use Postman Runner to execute your GraphQL API tests via command line too.
🔱 Myth: GraphQL always returns 200 as status code.
It’s advisable not to check for status code, as you see on the above screenshot the
Get List Of All Moviestest has failed but still shows 200 as status code. GraphQL allows you to configure custom error codes, so the best practice would be to set custom error code for particular error and then assert.
Testing GraphQL using Rest Assured
Rest Assured allows us to test REST APIs easily and is well known in the Java world for automating REST APIs. Good news (!) is that you could still use Rest Assured to test for GraphQL APIs with one condition though! The GraphQL query(request) in itself is not a JSON payload and here is how you could achieve:
- Firstly, convert the GraphQL query into Stringified JSON format.
- Secondly, pass the converted String as Body parameters to the Request.
- Then finally, validate the response.
@BeforeClass public static void createRequestSpecification() { requestSpecification = new RequestSpecBuilder(). setBaseUri(""). setPort(4040). build(); } @Test public void SimpleTest() { given() .spec(requestSpecification) .accept(ContentType.JSON) .contentType(ContentType.JSON) .body("{\"query\":\"query {\\n movies{\\n name\\n }\\n}\",\"variables\":{}}" ) .post("/graphql") .then() .statusCode(200).log().all(); }
💸 Tip: The Generate Code Snippets option in Postman will help you get the Stringified JSON.
Testing GraphQL using Rest Sharp
[SetUp] public void Setup() { _restClient = new RestClient(""); } [Test] public void TestGraphQL() { //Arrange _restRequest = new RestRequest("/graphql" , Method.POST); _restRequest.AddHeader("Content-Type", "application/json"); _restRequest.AddParameter("application/json", "{\"query\":\"{\\n movie(id:\\\"5ec2caaaa0f98451a25d1429\\\") " + "{\\n name\\n id\\n actor {\\n name\\n age\\n id\\n }\\n }\\n}\\n\"," + "\"variables\":{}}", ParameterType.RequestBody); //Act _restResponse = _restClient.Execute(_restRequest); //Assert //do validation Console.WriteLine("Printing results for fun : "+_restResponse.Content); }
Testing GraphQL using Karate
The Karate framework has built-in support for testing GraphQL API and it’s seamless. Unlike the other options that we saw above. If you are using Karate for API testing then, it’s very straight forward to test, you can pass the GraphQL Query as it is and validate the response using Karate’s
match assertions.
Scenario: GraphQL request to get all data fields # note the use of text instead of def since this is NOT json Given text query = """ { movies{ name genre actor{ name age id } } } """ And request { query: '#(query)' } When method POST # pretty print the response * print 'response:', response # json-path makes it easy to focus only on the parts you are interested in # which is especially useful for graph-ql as responses tend to be heavily nested # '$' happens to be a JsonPath-friendly short-cut for the 'response' variable #* match $.data.movie.name == 'The Matrix' # the '..' wildcard is useful for traversing deeply nested parts of the json * def actor0 = get[0] response..actor * match actor0 contains { name: 'Keanu Reeves', id: '5ec2ac72abe66a4b8a184f96', age: 56 } * def actor2 = get[2] response..actor * match actor2 contains { name: 'Tom Hanks', id: '5ec2abab84395b4b4c71ed9d', age: 63 }
Testing GraphQL using TestProject
TestProject is a cloud-hosted, 100% free test automation platform built on Selenium and Appium for all testers and developers. It has also got Add-ons that users can use to do additional testing on the same cloud. Here, we can use the community Add-on called RESTful API Client and invoke a POST call to query a GraphQL API. Please follow the steps as mentioned in this blog post.
This is how your test would be, a very simple test to hit the endpoint using POST call and pass the Stringified JSON payload into the Body and you are done.
Once you execute the tests, the tests are executed in seconds and you could verify it from the Reports sections as well. A sample response as below:
Testing GraphQL using Test GraphQL Java
If you are not using any of the above BDD frameworks for API testing and looking for a standard Java project to test GraphQL APIs. Test-GraphQL-Java project by Vimal Selvam should be easy to implement. Simply add the below maven dependency to your Java project and you are done.
<dependency> <groupId>com.vimalselvam</groupId> <artifactId>test-graphql-java</artifactId> <version>1.0.0</version> </dependency> package test.graphql; import java.io.*; import com.fasterxml.jackson.databind.JsonNode; import com.fasterxml.jackson.databind.ObjectMapper; import com.fasterxml.jackson.databind.node.ObjectNode; import com.vimalselvam.graphql.GraphqlTemplate; import org.testng.Assert; import org.testng.annotations.Test; import okhttp3.*; /** * Test */ public class TestGraphQL { private static final OkHttpClient client = new OkHttpClient(); private final String graphqlUri = ""; private Response prepareResponse(String graphqlPayload) throws IOException { RequestBody body = RequestBody.create(MediaType.get("application/json; charset=utf-8"), graphqlPayload); Request request = new Request.Builder().url(graphqlUri).post(body).build(); return client.newCall(request).execute(); } @Test public void testGraphqlWithInputStream() throws IOException { // Read a graphql file as an input stream InputStream iStream = TestGraphQL.class.getResourceAsStream("/movies.graphql"); // Create a variables to pass to the graphql query ObjectNode variables = new ObjectMapper().createObjectNode(); variables.put("id", "5ec2caaaa0f98451a25d1429"); // Now parse the graphql file to a request payload string String graphqlPayload = GraphqlTemplate.parseGraphql(iStream, variables); // Build and trigger the request Response response = prepareResponse(graphqlPayload); Assert.assertEquals(response.code(), 200, "Response Code Assertion"); String jsonData = response.body().string(); JsonNode jsonNode = new ObjectMapper().readTree(jsonData); System.out.println(jsonData); Assert.assertEquals(jsonNode.get("data").get("movie").get("name").asText(), "The Matrix"); } }
Hi Manoj, First of all a very nice and comprehensive article. I am QA Automation Engg, I have been working on graphql with RestSharp, actually I am facing difficulties in performing request with body with variables, In this article you have mention variables in payload but Could you please gimme more insights on it. | https://blog.testproject.io/2020/06/23/testing-graphql-api/ | CC-MAIN-2020-40 | refinedweb | 2,738 | 53.81 |
]
Ajay Yadav(10)
Puran Mehra(4)
Jean Paul(3)
Vidya Vrat Agarwal(3)
Vijai Anand Ramalingam(3)
Kiranteja Jallepalli(2)
Kailash Chandra Behera(2)
Amit Choudhary(2)
G Gnana Arun Ganesh(2)
Chris Rausch(2)
Arun Choudhary(2)
Dhananjay Kumar (2)
Nitin Pandit(1)
Prashant Verma(1)
Vignesh Ganesan(1)
Venkatesan Jayakantham(1)
Santhakumar Munuswamy(1)
Manoj Kalla(1)
Priyaranjan K S(1)
Shridhar Sharma(1)
Zain Nisar(1)
Kantesh Sinha(1)
Banketeshvar Narayan(1)
Sabyasachi Mishra(1)
Neeraj Kumar(1)
Afzaal Ahmad Zeeshan(1)
Vithal Wadje(1)
Pankaj Bajaj(1)
Ketak Bhalsing(1)
Jasminder Singh(1)
Abhishek Singh(1)
Sourav Kayal(1)
Ritesh Sharma(1)
Brijendra Gautam(1)
Saineshwar Bageri(1)
Nimit Joshi(1)
Abhishek Jaiswal :)(1)
Scott Lysle(1)
Azad Chouhan(1)
Shiju Joseph(1)
Sharad Gupta(1)
Shankar M(1)
Vishal Gilbile(1)
Mahesh Alle(1)
Veena Sarda(1)
Rajesh VS(1)
Seakar Krishna(1)
Alessandro Del Sole(1)
Filip Bulovic(1)
Mihir Pathak)
Deepak Dwij(1)
Gohil Jayendrasinh(1)
Nipun Tomar(1)
Girish Nehte(1)
Ashish Shukla(1)
Mayur Gujrathi(1)
Karthikeyan Anbarasan(1)
Vulpes (1)
Abhishek Bhat(1)
Ramesh dharam(1)
prabhjot bakshi(1)
Divyesh Shah(1)
Kirtan Patel(1)
Hirendra Sisodiya(1)
Ahsan Murshed(1)
Tajuddin MD(1)
Praveen Moosad(1)
Resources
No resource found.
Workaround For Missing "Sign In As A Different User" Custom Action In SharePoint 2016
Jan 09, 2016.
In this article you will learn about a workaround for a missing "sign in as a different user" custom action in SharePoint 2016.#.
Sep 24, 2015.
This article explains how we can allow a user to sign up using LinkedIn and save data of user signing up after successfully sign.
How to Sign a Certificate For Use in PHA Application
May 15, 2015.
This article shows how to sign a certificate for use in a PHA application..
SharePoint Server 2013: "Sign in as Different User" Menu Option is Missing
Oct 14, 2014.
This article describes how to activate the Sign in as Different User option in SharePoint 2013 and if you want to activate it permanently how you can do that..
Creating Login Or Signin Page Using .NET Framework
Feb 08, 2014.
Here we learn how to create a Login or Signin page using the .NET Framework..
ASP.Net Page Directives
Jul 11, 2013.
As a ASP.NET developer everyone have to have knowledge about Page Directive. If you are a fresher and you want to know about the page directive then you can read this article.
How to Write Access 2013 Custom Web App on Office 365
Dec 19, 2012.
Sign into Office 365 enterprise and get a free version of Office as well as Sharepoint. I installed Access 2013 on my local machine and used SharePoint from the Office 365 enterprise version...
Reading Assembly attributes in VB.NET
Nov 10, 2012.
This article allows you to read the assembly attributes information using .NET. The information store in AssemblyInfo files like Title, Description, copyright, Trade mark can be read using reflection and assembly namespace... convert unsigned integer arrays to signed arrays and vice versa
Mar 18, 2011.
Here's a simple technique for converting between signed and unsigned integer arrays..
About assembly-signing. | http://www.c-sharpcorner.com/tags/assembly-signing | CC-MAIN-2016-26 | refinedweb | 532 | 61.06 |
for item in sequence: # use itemWhich is equivalent to the following loop:
iterator = iter(sequence) while True: try: item = next(iterator) except StopIteration: break # leave the loop # use itemAn iterator is the object returned from the __iter__ method of a sequence you'd like to iterate over. An iterator implements the __next__ method which, each time it is called, either will return the next item in the sequence or will raise a StopIteration exception to indicate that it has no more values to return.
def integers(): x = 1 while True: yield x x += 1The presence of the line yield x tells Python that this function describes a generator. Furthermore, whenever the interpreter reaches that line, it will produce x as a value and stop running the generator; the generator will later resume from that point in the code if/when called again to ask for another value. To actually use a generator, we can call the function defining it in a for loop in the same position where we might otherwise have used a sequence:
>>> for i in integers(): ... print(i) ... 1 2 3 4 etc.If we want a generator to finish producing values, we simply have the function exit. As an example we have the generator integers_to below:
def integers_to(n): current = 1 while current <= n: yield current current += 1 # The function stops here
class Stream(object): """A lazily computed recursive list.""" empty_stream = Stream(None, None, True)A Stream looks much like an RList in that it has a first item and a "rest", a reference to a Stream of the remaining items. However, unlike an RList, a Stream is "lazy"[1]: it does not immediately compute all of the values in the sequence, instead generally waiting until we ask for an item before actually computing its value. (Furthermore, you might notice that Streams, as implemented here, involve something like memoization: once we've computed a value we remember it and reuse it later whenever asked for it again.)
from random import random random_stream = Stream(random(), lambda: random_stream)What it is unsatisfactory about this? How can one fix it?
[1]: Hence my selection to write this lab. Laziness is my wheelhouse. -SR↩
[2]: My fun, not yours. -SR↩ | http://www-inst.eecs.berkeley.edu/~cs61a/sp12/labs/lab13/lab13.html | CC-MAIN-2017-26 | refinedweb | 370 | 57.3 |
So you’d like to log to your database, rather than a file. Well, here’s a brief rundown of exactly how you’d do that.
First we need to define a Log model for SQLAlchemy (do this in myapp.models):
from sqlalchemy import Column from sqlalchemy.types DateTime, Integer, String from sqlalchemy.sql import func from sqlalchemy.ext.declarative import declarative_base Base = declarative_base() class Log(Base): __tablename__ = 'logs' id = Column(Integer, primary_key=True) # auto incrementing logger = Column(String) # the name of the logger. (e.g. myapp.views) level = Column(String) # info, debug, or error? trace = Column(String) # the full traceback printout msg = Column(String) # any custom log you may have included created_at = Column(DateTime, default=func.now()) # the current timestamp def __init__(self, logger=None, level=None, trace=None, msg=None): self.logger = logger self.level = level self.trace = trace self.msg = msg def __unicode__(self): return self.__repr__() def __repr__(self): return "<Log: %s - %s>" % (self.created_at.strftime('%m/%d/%Y-%H:%M:%S'), self.msg[:50])
Not too much exciting is occuring here. We’ve simply created a new table named ‘logs’. | http://docs.pylonsproject.org/projects/pyramid_cookbook/en/latest/logging/sqlalchemy_logger.html | CC-MAIN-2013-48 | refinedweb | 185 | 54.49 |
watch a lot of KnpU tutorials, you know that I love to talk about how the whole point of a bundle is that it adds services to the container. But, even I have to admit that a bundle can do a lot more than that: it can add routes, controllers, translations, public assets, validation config and a bunch more!
Find your browser and Google for "Symfony bundle best practices". This is a really nice document that talks about how you're supposed to build re-usable bundles. We're following, um, most of the recommendations. It tells you the different directories where you should put different things. Some of these directories are just convention, but some are required. For example, if your bundle provides translations, they need to live in the
Resources/translations directory next to the bundle class. If you follow that rule, Symfony will automatically load them.
Here's our new goal: I want to add a route & controller to our bundle. We're going to create an optional API endpoint that returns some delightful lorem ipsum text.
Before we start, I'll open my PhpStorm preferences and, just to make this more fun, search for "Symfony" and enable the Symfony plugin. Also search for "Composer" and select the
composer.json file so that PhpStorm knows about our autoload namespaces.
Back to work! In
src/, create a
Controller directory and inside of that, a new PHP class called
IpsumApiController. We don't need to make this extend anything, but it's OK to extend
AbstractController to get some shortcuts... except what!?
AbstractController doesn't exist!
That's because the class lives in
FrameworkBundle and... remember! Our bundle does not require that! Ignore this problem for now. Instead, find our app code, open
AbstractController, copy its
namespace, and use it to add the
use statement manually to the controller.
Next, add a public function called
index. Here, we're going to use the
KnpUIpsum class to return a JSON response with some dummy text. When you create a controller in a reusable bundle, the best practice is to register your controller as a proper service and use dependency injection to get anything you need.
Add
public function __construct() and type-hint the first argument with
KnpUIpsum. I'll press Alt+Enter and choose Initialize Fields so that PhpStorm creates and sets a property for that.
Down below, return
$this->json() - we will not have auto-complete for that method because of the missing
AbstractController - with a
paragraphs key set to
$this->knpUIpsum->getParagraphs() and a
sentences key set to
$this->knpUIpsum->getSentences()
Excellent!
Next, we need to register this as a service. In
services.xml, copy the first service, call this one
ipsum_api_controller, and set its class name. For now, don't add
public="true" or
false: we'll learn more about this in a minute. Pass one argument: the main
knpu_lorem_ipsum.knpu_ipsum service.
Tip
In Symfony 5, you'll need a bit more config to get your controller service working:
<service id="knpu_lorem_ipsum.ipsum_api_controller" class="KnpU\LoremIpsumBundle\Controller\IpsumApiController" public="true"> <call method="setContainer"> <argument type="service" id="Psr\Container\ContainerInterface"/> </call> <tag name="container.service_subscriber"/> <argument type="service" id="knpu_lorem_ipsum.knpu_ipsum"/> </service>
For a full explanation, see this thread:
Perfect!
Finally, let's add some routing! In
Resources/config, create a new
routes.xml file. This could be called anything because the user will import this file manually from their app.
To fill this in, as usual, we'll cheat! Google for "Symfony Routing" and, just like we did with services, search for "XML" until you find a good example.
Copy that code and paste it into our file. Let's call the one route
knpu_lorem_ipsum_api. For
controller, copy the service id, paste, and add a single colon then
index.
Fun fact: in Symfony 4.1, the syntax changes to a double
:: and using a single colon is deprecated. Keep a single
: for now if you want your bundle to work in Symfony 4.0.
Finally, for
path, the user will probably want something like
/api/lorem-ipsum. But instead of guessing what they want, just set this to
/, or at least, something short. We'll allow the user to choose the path prefix.
And that's it! But... how can we make sure it works? In a few minutes, we're going to write a legitimate functional test for this. But, for now, let's just test it in our app!
In the
config directory, we have a
routes.yaml file, and we could import the
routes.xml file from here. But, it's more common to go into the
routes/ directory and create a separate file:
knpu_lorem_ipsum.yaml.
Add a root key -
_lorem_ipsum - this is meaningless, then
resources set to
@KnpULoremIpsumBundle and then the path to the file:
/Resources/config/routes.xml. Then, give this a prefix! How about
/api/ipsum.
Did it work? Let's find out: find your terminal tab for the application, and use the trusty old:
php bin/console debug:router
There it is!
/api/ipsum/. Copy that, find our browser, paste and.... nope. Error!
Controller ipsum_api_controller cannot be fetched from the container because it is private. Did you forget to tag the service with
controller.service_arguments.
The error is not entirely correct for our circumstance. First, yes, at this time, controllers are the one type of service that must be public. If you're building an app, you can give it this tag, which will automatically make it public. But for a reusable bundle, in
services.xml, we need to set
public="true".
Try that again! Now it works. And... you might be surprised! After all, our bundle references a class that does not exist! This is a problem... at least, a minor problem. But, because FrameworkBundle is included in our app, it does work.
But to really make things solid, let's add a proper functional test to the bundle that guarantees that this route and controller work. And when we do that, it'll become profoundly obvious that we are, yet again, not properly requiring all the dependencies we need.
//.10 } } | https://symfonycasts.com/screencast/symfony-bundle/routes-controllers | CC-MAIN-2021-31 | refinedweb | 1,023 | 67.96 |
Hello. This is my first post. Hopefully the first of many. I am currently starting my journey in collegiate Computer Science and as such I know some basics about programming - logic and a fledgling understanding of some languages.
Right now I am trying to create a program that uses StreamWriter and StreamReader objects to open two text files. One to read from and one to write to. The point is to build a comma-separated values parser. I'm not doing this for school, I just want to practice.
I'd like to note that I got the program to work. It printed the headers just fine. I had unexpected results with the read/write. It was able to do both, but some of the info was garbled.
The problem I have now is, even though my desk-check looks fine and it compiles fine, it doesn't write ANYTHING. Not even the headers. I have tried starting with an existing file, and starting without and it doesn't matter. All I get is a blank .txt file. My next step is to just start over, verifying that at each step the program is working correctly, however I figured I would try some DaniWeb advice. Be gentle.
using System; using System.Collections.Generic; using System.Linq; using System.Text; using System.IO; namespace FirstTry { class Program { public static void Main(string[] args) { string newTextFilePath = @"C:\Users\Richard\Documents\Visual Studio 2008\Projects\FirstTry\CSV_File1.txt"; string valueTextFilePath = @"C:\Users\Richard\Documents\Visual Studio 2008\Projects\FirstTry\CSV_FileValues.txt"; char[] buffer = new char[1]; const char DELIMIT = ','; char[] field = new char[255]; byte buffCount = 0; string[] HEADERS = new string[5] {"NAME", "ADDRESS", "CITY", "STATE", "ZIPCODE"}; try { if (File.Exists(newTextFilePath)) // Checks to see if the output file exists { Console.WriteLine("The file exists, deleting now ..."); Console.Read(); File.Delete(newTextFilePath); } StreamWriter sw = new StreamWriter(newTextFilePath); // Creates new file from newTextFilePath if (File.Exists(newTextFilePath)) // Checks to see if the output file exists { Console.WriteLine("The file has been recreated"); Console.Read(); } for (int i = 0; i < HEADERS.Length; i++) // Loop that controls the writing of the headers { sw.Write(HEADERS[i]); if (i < HEADERS.Length -1) { sw.Write(DELIMIT + " "); } } sw.Write("\n"); StreamReader sr = new StreamReader(valueTextFilePath); // Opens existing file for reading if (File.Exists(valueTextFilePath)) { while(sr.Peek() != -1) // Checks next char for EOF { sr.Read(buffer, 0, buffer.Length); // Reads next char into buffer if (buffer[0].Equals(DELIMIT)) // Control break logic checks char for ',' { for (int k = 0; k < buffCount; k++) { sw.Write(field[k]); // If control breaks then write field } sw.Write(DELIMIT + " "); buffCount = 0; } else { field[buffCount] = buffer[0]; // If char doesn't equal ',', add char to field buffCount++; } } } else { Console.WriteLine("File read failed: file doesn't exist"); } } catch(Exception e) { Console.WriteLine("The process failed: ", e.ToString()); } } } } | https://www.daniweb.com/programming/software-development/threads/290438/streamreader-writer-help | CC-MAIN-2018-30 | refinedweb | 473 | 61.73 |
#include <factory.hpp>
List of all members.
Definition at line 62 of file factory.hpp.
Definition at line 64 of file factory.hpp.
Definition at line 74 of file factory.hpp.
This is the assemblage which widget factories register the created widget (or wrapping structure) with, such that the widget gets deleted when the assemblage does.
Definition at line 103 of file factory.hpp.
A set for the contributing factors to the sheet (not the layout sheet)
Definition at line 108 of file factory.hpp.
Used by the eve_client_holder to disconnect itself from the root behavior.
Definition at line 140 of file factory.hpp.
The Eve engine which all of these widgets are being inserted into. This must be known by top-level windows, and by widgets which manage trees of Eve widgets (such as splitter widgets).
Definition at line 91 of file factory.hpp.
The layout sheet is sheet used to hold the layout state.
Definition at line 96 of file factory.hpp.
REVISIT (sparent) : We really need a generalized mechanism for deferring an action - a command queue of sorts - which collapses so things don't get done twice. We hack it here.
Definition at line 146 of file factory.hpp.
Path to the file loaded for this window.
Definition at line 156 of file factory.hpp.
Relayout is complicated. We need to maintain a visible update queue (VUQ) per-window, as we want to know when a window has hide/show elements in their respective queues; this helps to eliminate unnecessary calls to eve_t::evaluate for a given window if none is needed. However, we also need to call _all_ the VUQ update routines related to a given sheet- this is needed to make sure all the views are updated w.r.t. the state of the sheet so there are no hide/show sync issues. This reference is held to the 'root behavior', the one scoped the same as the sheet to which this view will be bound, and the behavior called when a user action requires us to check the VUQ set for the sheet for potential relayout.
Definition at line 135 of file factory.hpp.
Display token for the root item in the view.
Definition at line 151 of file factory.hpp.
Top-level widgets (windows, dialogs, etc) need to be told to show themselves when all of the child widgets have been created and inserted. This signal is issued when the top-level window needs to be shown -- so any factory function which creates a window needs to connect to this signal.
Definition at line 116 of file factory.hpp.
Definition at line 122 of file factory.hpp.
Use of this website signifies your agreement to the Terms of Use and Online Privacy Policy.
Search powered by Google | https://stlab.adobe.com/structadobe_1_1eve__client__holder.html | CC-MAIN-2016-50 | refinedweb | 464 | 66.44 |
Responding to the advice from Malx I added a faux PATH_INFO variable in PHPortal.
This basically means anything after the file extensions are trasnformed into an array of variables.
EXAMPLE:
PRODUCES:
$x->_URL = Array ( [0] => /index [1] => .text/string1/var1,val1/var2|val2/string2
)
$x->_PATH = Array ( [1] => string1 [var1] => val1 [var2] => val2 [4] => string2 )
All three of these characters are checked and can be used as delimters to create variables and values in urls.
$separators=array(',',':','|');
If you click the example link above view source and scroll to the end of the page and look for a XML namespace like so: <xpc:path str... />
You can also play with the link by adding and removing you own paths, delimiters, and variables to see how this works. | http://www.advogato.org/person/mglazer/diary.html?start=18 | CC-MAIN-2014-15 | refinedweb | 127 | 68.6 |
Soumis par Arch D. Robison (Intel) le Building Blocks (Intel® TBB), Intel® Cilk™ Plus, and OpenMP*.
For more detailed analysis of parallel quicksort, samplesort, and merge sort, see the book Structured Parallel Programming (by Michael McCool, James Reinders, and me). I’ve also provided numerous links in this article to background material.
Background
I wrote the original version of the parallel quicksort that ships with Intel TBB. At the time, I knew that theoretically it did not scale as well as some other parallel sorting algorithms. But in practice, for the small core counts common when Intel TBB first shipped, it was usually the fastest sort, because it was in place. The other contending sorts required temporary storage that doubled the memory footprint. That bigger footprint, and the extra bandwidth that it incurred, clobbered theoretical concerns about scalability. Furthermore, C++11 move semantics were not standardized yet (though many developers had hacked their own idiosyncratic versions), thus moving objects was sometimes expensive, which hurt alternative sorts that required moving objects. Now that more hardware threads are common (indeed a Intel® Xeon Phi™ coprocessor has 240 on a chip), and C++11 move semantics are ubiquitously available, fundamental scalability analysis comes back into play.
Why Parallel Quicksort Cannot Scale
Parallel quicksort operates by recursively partitioning the data into two subsets and sorting the subsets in parallel. The first level partitioning uses one thread, the second level uses 2 threads, the third 4 threads, then 8, 16 and so forth. This exponentially growing parallelism sounds great. But work-span analysis shows that parallel quicksort cannot scale.
Work-span analysis is the foundation for analyzing parallel algorithms. Let TP denote the time to execute an algorithm on P threads on an ideal machine (infinite available hardware threads, infinite memory bandwidth). The work for an algorithm is T1, which is the serial execution time. The span is T∞, the execution time on an infinite number of processors.For a parallel quicksort, T1=Θ(n lg n) and T∞=Θ(n). (Θ is like "big O", except that it denotes asymptotic upper and lower bounds. We need both bounds because we'll be computing asymptotic quotients.) The latter bound arises because the first partitioning step is serial and thus gets no benefit from having more than one thread.
The available parallelism is T1/T∞,in other words the maximum achievable speedup on an ideal machine. Using more threads than T1/T∞can’t help much, even if you are lucky enough to have an ideal machine. For parallel quicksort, T1/T∞=Θ(lg n). Thus the parallelism is about 30 if sorting a billion keys. That's well short of the parallelism available on a 240-thread Intel Xeon Phi coprocessor.
Sample Sort’s Achille’s Heel
Parallel samplesort overcomes the shortcoming of parallel quicksort by parallelizing the partitioning operation, and doing a many-way partitioning instead of a two-way partitioning. The analysis is a bit complicated (see the book), but the net result is that parallel samplesort scales nicely on an ideal machine. Unfortunately, on real machines, threads are not the only resource of concern. The memory subsystem can be the limiting resource. In particular, a many-way partitioning generates many streams of data to/from memory. When each stream occupies at least a page (commonly the case for a big sort), each stream will need an entry in theTranslation Lookaside Buffer (TLB). Having more streams then entries causes the TLB to thrash. This is not to say that samplesort is hopeless. My experience has been that samplesort does very well as long as enough TLB capacity exists, which it typically does for multi-core machines. Alas at the extremes of many-coremachines such as Intel Xeon Phi coprocessor, TLB capacity becomes a problem for samplesort.
Parallel Merge Sort
Parallel merges sort works by recursively sorting each half of the data, and then merging the two halves. The two subsorts can be done in parallel, and the merge can be done in parallel too, using a divide-and-conquer tactic. The parallel merge works like this: Given two sorted sequences to merge:
-.
Parallel merge sort is attractive because it can be written without any Θ(n) bottlenecks, is cache-oblivious, and has nice memory streaming behavior. It has T1=Θ(n lg n) and T∞=Θ(lg3 n), thus its parallelism is T1/T∞ = Θ(n / lg2 n). Thus the parallelism is on the order of a million if sorting a billion keys. Even for sorting just a million keys, the parallelism will be on the order of 2500.
However, there is scalability trap to avoid if using C++. Parallel merge sort requires a temporary buffer. The buffer objects must be initialized (default-constructed) and destroyed. For types with trivial constructors/destructors (such as int and float) , these operations take zero time. But for types such as std::string, these operations take time, and so constructing or destroying the buffer serially raises the span to Θ(n), clobbering parallelism back to the same order as parallel quicksort.
A simple solution is to construct (or destroy) the buffer objects in parallel. But doing so introduces more scheduling overhead, and has poor locality with respect to uses of those objects. A more elegant approach is to construct the buffer objects at the leaves of the merge sort recursions, and destroy them at the leaves of the parallel merge recursions. That way the work for construction/destruction is distributed across the threads, with good locality since the first construction almost immediately precedes first use, and the last use almost immediately precedes destruction.
Ready to Use Code
The attachment has four versions of the code, and a test. The code requires support for C++11 “move” semantics. All versions are not exception safe if the keys have non-trivial destructors. Making them exception safe would add much complexity, and in most applications, the operations on the keys are not going to throw exceptions, particularly since keys are relocated using move operations instead of copy construction or assignment.
The four versions, listed from highest level to lowest-level expression are written in:
-.
The test program checks that keys are sorted, the sort is stable, and that no objects are leaked or used incorrectly.
For exposition, all four versions share a common header pss_common.h. The top-level routine ispss::parallel_stable_sort, which has two overloads similar tostd::stable_sort. One of the overloads is in the common header. If using the code in a production environment, I suggest choosing one version, incorporating the content of pss_common.h directly into it, and renaming the namespace pss to whatever suits your fancy. You may also want to add a traditional single-inclusion #ifndef guard.
I've left out the obligatory performance/scaling graphs, because the performance is dependent on the hardware and key type. So try it on your own favorite dataset. I've been happy with it, particularly for a sort written in less than 150 lines of code.
Notes on the OpenMP Version
The OpenMP version demonstrates a generally useful trick when using OpenMP tasking. Tasking was grafted onto OpenMP well after it was founded on the notion of parallel regions, which creates a problem for a routine that uses OpenMP tasking, because there are two contexts to.
The code solves the problem by conditionally creating a parallel region and using the master thread to start the sort, as shown below:
if( omp_get_num_threads() > 1) internal::parallel_stable_sort_aux( xs, xe, (T*)z.get(), 2, comp ); else #pragma omp parallel #pragma omp master internal::parallel_stable_sort_aux( xs, xe, (T*)z.get(), 2, comp );
While translating the sort to OpenMP, I discovered (and reported) a bug in the Intel OpenMP implementation of firstprivate for parameters with non-trivial copy constructors. The code has a work-around for the issue (look for __INTEL_COMPILER in openmp/parallel_stable_sort.h to find it).
Acknowledgements
Andrey Churbanov diagnosed the nature of the Intel OpenMP problem and suggested the work-around. Alejandro Duran pointed out the trick of conditionally creating a parallel region.
*The OpenMP name and the OpenMP logo are registered trademarks of the OpenMP Architecture Review Board | https://software.intel.com/fr-fr/articles/a-parallel-stable-sort-using-c11-for-tbb-cilk-plus-and-openmp?language=ru | CC-MAIN-2015-40 | refinedweb | 1,358 | 54.93 |
Introduction to Tkinter Frame the complex widgets as a foundation class.
Syntax:
w=frame( master, option, …. )
Attributes:
The attributes are :
- Master: Master attribute helps us to represent the parent window
- Options: Options attribute helps us to list the options which are commonly used for the widget and these options are very much useful as the key-value pairs which can be separated by the commas.
Tkinter Frame Options
These are the options of the Tkinter frame which is helping a lot to control the Tkinter frame. Check out those options given below:
- bg: The bg option of the Tkinter frame is the normal bg( background color ) which is used to display behind the indicator and the label.
- bd: The bd option of the Tkinter frame is very much helpful in order to set the border size around the indicator and by default it’s size is only 2 pixels.
- cursor: The “cursor” option is very helpful in order to set this cursor option to a cursor name ( dot, arrow, etc.. ). With the help of this mouse cursor will helps to change the pattern when the cursor/mouse point is over the checkbutton/ we can call it as changing the pattern when the checkbutton is below the cursor point.
- height: The “height” option of this is only the new frame’s vertical dimension.
- highlightbackground: The “highlightbackground” option of the Tkinter frame is the focus highlight’s color when the frame don’t have any focus.
- highlightcolor: The “highlightcolor” option of the Tkinter frame only shows the color in the focus highlight when there is a focus for the frame.
- highlightthickness: The “highlightthickness” option is the focus highlight’s thickness.
- relief: The “relief” option of the Tkinter frame is the only checkbutton which don’t even stand out from the background and by default, the relief value is FLAT (relief = FLAT). You can set this option to all of the other styles.
- width: The “width” option of the Tkinter frame is the checkbutton’s width which is the size of the text or the image by default. One can also this option to the number of the characters and this width checkbutton also have room for many characters always.
Methods:
- Pack() method: This “pack()” method of the this is very much helpful in order to manage the rows and columns by filling, expanding and moving by controlling the pack geometry manager.
Examples
The following are code examples for showing how to use Tkinter.Frame.
Example #1
This is the program which displays the buttons with different colors and it’s color names on the buttons. Here pack() method is used which is used to align the Tkinter frame based on our requirement. Here in the below example, all Tkinter functions are called using the import and from functions. Frame1 is to fall the Tkinter frame from the root1 variable. Variables are created for each color button with the color names on it for the frame. redbutton1 is the variable for the button with master option as frame1 in order to call the Tkinter frame and then the option text is included in order to know what color is embedding to the button and then the font background color is added to the font using “fg” option. The “bg” color is an option to implement the background color.
After this redbutton1 variable is embedded with the pack() function in order to align the button based on our requirement. Likewise, for each button, I implemented the same code but with the different color using ‘fg”, “bg” options and alignment using the pack() method with the option side = “LEFT” or side = “BOTTOM” or side = “RIGHT” etc.. Check out the output below the output section/heading to know what is happened or happening when implemented the following program in the command prompt / in a Python interpreter or at any other software based on our requirement.
Red, Brown, Blue, Violet, Pink, Green, Yellow, Black color button are using in the Tkinter frame functions. Here for the yellow button bg color (background color = “RED”) option also used.
Code:
from tkinter import *
import tkinter
root1 = Tk()
frame1 = Frame(root1)
frame1.pack()
bottomframe1 = Frame(root1)
bottomframe1.pack( side = BOTTOM )
redbutton1 = Button(frame1, text="Red", fg="red")
redbutton1.pack( side = LEFT)
greenbutton1 = Button(frame1, text="Brown", fg="brown")
greenbutton1.pack( side = LEFT )
bluebutton1 = Button(frame1, text="Blue", fg="blue")
bluebutton1.pack( side = LEFT )
blackbutton1 = Button(bottomframe1, text="Black", fg="black")
blackbutton1.pack( side = BOTTOM)
yellowbutton1 = Button(bottomframe1, text="Yellow", fg="yellow", bg="red")
yellowbutton1.pack( side = RIGHT)
greenbutton1 = Button(bottomframe1, text="Green", fg="green")
greenbutton1.pack( side = RIGHT)
violetbutton1 = Button(bottomframe1, text="Violet", fg="violet")
violetbutton1.pack( side = LEFT)
pinkbutton1 = Button(bottomframe1, text="Pink", fg="pink")
pinkbutton1.pack( side = BOTTOM)
root1.mainloop()
Output:
Example #2
The below program will implement the series or colors one by one, one after the other using the for loop function with various colors mentioned in the loop. Colors mentioned in the loop are VIBGYOR (Violet, Indigo, Blue, Green, Yellow, Orange, Red). These are the colors of the prism when the incident of white light is passed through it. Here root1 variable is used in order to call the Tkinter frame in the loop function. The “fm” is the variable to show the color from the colors of the listed colors one at an instance.
Code:
from tkinter import *
root1 = Tk()
for fm in ['violet', 'indigo', 'blue', 'green', 'yellow','orange','red']:
Frame(height = 25,width = 740,bg = fm).pack()
root1.mainloop()
Output:
Recommended Articles
This is a guide to Tkinter Frame. Here we discuss the basic concept and what are its attributes and methods along with different examples and its code implementation. You may also look at the following articles to learn more – | https://www.educba.com/tkinter-frame/?source=leftnav | CC-MAIN-2021-04 | refinedweb | 957 | 53.21 |
Summary
plac is much more than a command-line arguments parser. You can use it to implement interactive interpreters (both on a local machine on a remote server) as well as batch interpreters. It features a doctest-like mode, the ability to launch commands in parallel, and more. And it is easy to use too!
I have just released version 0.7.4 of plac. The tool has grown considerably in its three months of life, both in size - now it is over a thousand lines of code - and capabilities. The core is still two hundred lines long, and the basic usage of plac as a command-line arguments parser can still be explained in 10 minutes, but there is much more now.
Judging from the downloads, I have a couple hundred of potential users. Since I have got a few people sending me emails with questions and/or feedback, I assume I have at least a few real users ;) I suspect most people use plac simply as a command-line argument parser. There is nothing wrong with that, it is its intended usage after all: in release 0.7 I have even improved its capabilities in such sense. Still, I would like to encourage people to start using the full capabilities of plac.
Officially plac is still in alpha status, i.e. I do not guarantee backward compatibility between releases. In practice, the basic API for parsing the command-line arguments (plac.call(callable, arglist)) has stayed unchanged from the beginning and it will never change. Possibly, some optional arguments may be added or may change, but not the basic API. Actually in its three months of life plac has seen only minor incompatible changes, on non fundamental features or experimental features. So, while formally plac is in alpha status, in practice you can pretty much rely on its basic features. Apparently they work fine too, since I never had any bug report. In other terms, there is no excuse not use plac, even if it is a still young project ;)
argparse - the library underlying plac - has support for subcommands, but writing a parser with subcommands is not as simple as it could be. plac makes it trivial thanks to the Interpreter.call classmethod (new in release 0.7).
Notice the shebang line (#!vcs.py:VCS) which specifies the name of the file where the interpreter is defined and the name of the plac factory to use (in this case it is the class VCS).
There is also a doctest mode to check that the execution returns the expected output; for an explanation, you should check out the full documentation of plac.
Perhaps you have thought in the past that it would be nice if plac could execute commands on a remote machine: the good news is that starting for release 0.7 it can.
The trick is to start the plac interpreter in server mode: then any client can connect to the server. Here is an example working on localhost:
# plac_runner --serve 2199 vcs.py:VCS $ telnet localhost 2199 Trying ::1... Trying 127.0.0.1... Connected to localhost. Escape character is '^]'. i> checkout url ok i> EOF Connection closed by foreign host.
It also works on a real remote host, if you use a port which is not firewalled. Notice that the plac server is in an early stage of development, so for the moment is does not support any authentication/authorization feature, nor any kind of security (that mean that it should be used on a protected LAN). Also, in order to avoid dependencies, it is implemented on top of the asyncore/asynchat modules in the standard library. Nothing stops you for implementing a better server using Twisted or any other technology, anyway: after all, implementing a line-oriented protocol is pretty trivial. Finally, at the moment there is no plac client: you can use telnet, but that means loosing the autocompletion features. There is still room for improvement there (patches are welcome!).
plac was not born as a tool for parallel programming and you should not expect too much from it in that area; still, it has some support to running commands in background, and it can be used to solve simple parallel tasks. Let me give a real life use case which happened to me a few weeks ago. I needed to perform a rather long query on our production database and on our testing database and to compare the results. The two databases are on two different machines and in order to compare the results the best way is to save the query results in a local database, say in two tables 'prod' and 'test': then one can compare the two tables with ease. In the past I have always solved such problem sequentially, by performing the two queries one after the other. One query takes nearly 5 minutes to run, so I had to wait 10 minutes to compare the results. However, this is a perfectly parallel job and since I had plac at my disposal I decided to use it to perform the queries together and to halve my waiting time. To this aim plac provides an utility function plac.runp(genseq, mode='p', start=True) which takes a sequence of generators and executes them in parallel, by using processes if mode='p' or threads if mode='t'. The functions returns a list of task objects; each task object has a .result property which returs the result of the task, i.e. the last yielded value. Then the problem was solved with code like the following:
def run_query_and_save(sourcedb, targetdb): output = run_query(sourcedv) save(output, targetdb) yield 'saved %d rows on %s' % (len(output), targetdb) tasks = plac.runp(run_query_and_save(prod_db, localdb_prod), run_query_and_save(test_db, localdb_test)) for t in tasks: print r.result
You can look at plac documentation for more details about task objects. The .result property was inspired by the concept of futures: it blocks the main thread until the running task has finished its job.
Speaking of futures, it is time to ask myself and my users where we want to go from here.
First of all I would like to reorganize the documentation, since now it has become quite large (around 45 pages in PDF form).
Then there is the long standing issue of multiline support: I would like to extend plac to support commands spanning multiple lines. The problem is that Python readline module does not expose the multiline support of the underline C-level readline module so that I cannot use such futures; moreover, I am also limited by the capabilities of pyreadline on Windows.
Multiple line commands are best entered in an editor anyway, so I would like to support some integration between plac and at least Emacs, which is the editor I use. In other words, it should be possible to run plac scripts in an inferior mode inside Emacs. Some help from Emacs experts would be welcome on that idea.
Other unplanned features may enter in plac as I see the need for them in my day-to-day work: for instance plac.Interpreter.call was not designed in advance, but it emerged from my practical needs.
What do you think? What users want for plac? Speak your mind, I am here to hear from you!
Have an opinion? Be the first to post a comment about this weblog entry.
If you'd like to be notified whenever Michele Simionato adds a new entry to his weblog, subscribe to his RSS feed. | http://www.artima.com/weblogs/viewpost.jsp?thread=301632 | CC-MAIN-2014-52 | refinedweb | 1,258 | 62.88 |
#ifdef FLEX_SCANNER /* dnl tables_shared.c - tables serialization code dnl dnl Copyright (c) 1990 The Regents of the University of California. dnl All rights reserved. dnl dnl This code is derived from software contributed to Berkeley by dnl Vern Paxson. dnl dnl The United States Government has rights in this work pursuant dnl to contract no. DE-AC03-76SF00098 between the United States dnl Department of Energy and the University of California. dnl dnl This file is part of flex. dnl dnl Redistribution and use in source and binary forms, with or without dnl modification, are permitted provided that the following conditions dnl are met: dnl dnl 1. Redistributions of source code must retain the above copyright dnl notice, this list of conditions and the following disclaimer. dnl 2. Redistributions in binary form must reproduce the above copyright dnl notice, this list of conditions and the following disclaimer in the dnl documentation and/or other materials provided with the distribution. dnl dnl Neither the name of the University nor the names of its contributors dnl may be used to endorse or promote products derived from this software dnl without specific prior written permission. dnl dnl THIS SOFTWARE IS PROVIDED ``AS IS'' AND WITHOUT ANY EXPRESS OR dnl IMPLIED WARRANTIES, INCLUDING, WITHOUT LIMITATION, THE IMPLIED dnl WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR dnl PURPOSE. dnl */ /* This file is meant to be included in both the skeleton and the actual * flex code (hence the name "_shared"). */ #ifndef yyskel_static #define yyskel_static static #endif #else #include "flexdef.h" #include "tables.h" #ifndef yyskel_static #define yyskel_static #endif #endif /** Get the number of integers in this table. This is NOT the * same thing as the number of elements. * @param td the table * @return the number of integers in the table */ yyskel_static flex_int32_t yytbl_calc_total_len (const struct yytbl_data *tbl) { flex_int32_t n; /* total number of ints */ n = tbl->td_lolen; if (tbl->td_hilen > 0) n *= tbl->td_hilen; if (tbl->td_id == YYTD_ID_TRANSITION) n *= 2; return n; } | http://opensource.apple.com/source/flex/flex-24.1/flex/tables_shared.c | CC-MAIN-2013-48 | refinedweb | 326 | 53.61 |
So happy to see you back!!
I had (once again) a fantastic time watching your video…!
Does anybody know how to get predictions of a single row from the test_df (not several)? i.e. I would like to do something like this:
So happy to see you back!!
I had (once again) a fantastic time watching your video…!
Does anybody know how to get predictions of a single row from the test_df (not several)? i.e. I would like to do something like this:
Interesting! I re-run today the notebook and both options worked!
I may have broken something yesterday while experimenting with the code.
Does anybody know what the outputs of learn.predict() are? I get from the documentation that it returns the following:
full_dec,dec_preds,preds = learn.predict(test_df.iloc[100]) but I observe no difference between
dec_preds and
preds…
dec_preds is decoded from the loss function, and preds is the raw preds
What is the difference…? Not seeing your point
The recording is available in this thread here (if you may not have access to this now that is okay). A basic summary is that when we call
learn.get_preds (which
predict does), we can pass in
decoded=True. What this entails is that our loss function will decode the values. Why and how this could be different is say with CrossEntropy, we’ll actually run a softmax regularly, but our decoded values will also perform an argmax. Converting it to an actual class name is done through the dataloader’s
decode_batch() function
Do check the contents of lesson 2 in form of Kaggle notebooks if interested
Next lecture will be on Wednesday night. We’re back and with a voice again so we’ll go to our regularly scheduled programming
(5pm CST)
I used Zach’s notebook to train a neural network on the adults dataset. I then extracted the embeddings to use as input for a random forest classifier to see if there is a difference in the accuracy vs a random forest without embeddings.
Is there a way to customize TabularPandas? I used the DataBlock to achieve the mapping between the categorical variables and the embeddings but the code I wrote feels quite clunky. Any tips appreciated
@faib awesome work! (Reading through now) IIRC @Pak looked into this and found that it didn’t really make that much of a difference at the end of the day (back in v1) so the results aren’t too surprising.
Thank you!
I just realized I could write a processor like
Categorify and append it to the
procs argument in TabularPandas right?
I still have problems understanding where
TabularPandas fits in. It’s not a high level API
like
TabularDataLoaders nor does it belong to the
DataBlock category. Does it logically sit below that or somewhere in between?
Somewhere in between. The role of TabularPandas is to prepare the data for being made into a DataLoader. It’s high-level API but it’s not a DataBlock (this is in development)
Got it! I’m looking forward to using the DataBlock API for tabular and being able to rely on using a unified interface
When I try to add an additional metric to my learner, it results in below error during learn.fit():
*TypeError: unsupported operand type(s) for : ‘AccumMetric’ and ‘int’
Learn object defined as below:-
from fastai2.callback.all import *
learn = tabular_learner(dls,
layers=[1000,500],
metrics=[accuracy,RocAuc])
Need some help on this …
Hi Haroon!
I think this can help you:
I also get below error on passing the dropouts (ps, embed_p). I remember this working fine earlier.
TypeError: init() got an unexpected keyword argument 'ps’
learn = tabular_learner(dls,
layers=[1000,500],
ps=[0.001, 0.01],
embed_p=0.04,
metrics=[accuracy])
Thanks this fixed the issue.
I think the more recent version deprecates dropouts from the tabular_learner thus there will be no ps nor embed_p…
It’s not deprecated, you just need to pass them in
config=tabular_config(). All customization of models are done this way in fastai v2, to avoid mixing the kwargs of the models with those of the
Learner.
Hi, I’ve watched Tabular lesson-1 and I’ve a couple of doubts.
Zach shows how to plot using matplotlib, but the .plot() function plots any column with the serial number only. So how to plot between two columns(like a scatter plot between age and working hours)?
Also towards the end, he creates a tabular model called ‘net’. Why can’t we do a lr_find or .fit on net like we do on a tabular_learner?
Thanks, | https://forums.fast.ai/t/a-walk-with-fastai2-tabular-study-group-and-online-lectures-megathread/64361/88 | CC-MAIN-2022-27 | refinedweb | 768 | 64.41 |
rssdler Google Group For now, any discussion related to development or user help of rssdler. If user base becomes sufficiently large, a separate rssdler-dev. For more about the project: 2012-10-13T01:07:49Z Google Groups getthistone...@gmail.com 2012-10-13T01:07:49Z Re: [rssdler] Permissions on downloaded files You're a champion. That worked. Here's the commands I entered: <br> chmod g+s /path/to/my/watchdir <br> setfacl -d -m g::rwx /path/to/my/watchdir <br> setfacl -d -m o::rx /path/to/my/watchdir Graham Dunn g...@kurai.org 2012-10-12T13:53:23Z Re: [rssdler] Permissions on downloaded files Look into setfacls(1), specifically <br> <p>default:user::rwx <br> default:group::r-x <br> default:group:media:rw- <br> default:mask::rwx <br> default:other::rw- Orionn getthistone...@gmail.com 2012-10-12T10:18:57Z Permissions on downloaded files I'm using rssdler to scan a torrent feed, and it's downloading the torrents <br> fine, but it's creating them with permissions that doesn't allow any other <br> user or group to read them (-rw------- 1 pi pi). I have tried setting the <br> GID(?) bit on the folder by using chmod g+s, and it works fine for all <br> files currently in the folder, but any new files get the "600" permissions Lauri Niskanen a...@ape3000.com 2012-04-22T13:25:38Z RSSDler memory usage Hello! <br> I have a simple RSSDler setup with just one feed. It consumes about 25 <br> megabytes of memory constantly. Is that normal behaviour or could there <br> be a problem with my setup? <br> Thanks. Raymond khumalo raymondkhumalo...@live.co.za 2011-09-20T06:36:56Z Immediate transfer of $25.350 Million Dollars to you PLEASE VIEW THE ATTACHED FILE FOR DETAILS. fabs...@gmx.com 2010-01-17T16:39:56Z Re: rssdler crash hey, got exactly the same error, need also advice ! Celicia Johnson andronikizotov24...@gmail.com 2010-01-01T16:53:53Z My wedding video Hi to all. Here isour wedding video. <br> Happy New Year! <br> <a target="_blank" rel=nofollow[link]</a> Anna F borisavdonov48...@gmail.com 2009-12-31T06:35:06Z A joke O-ha-ha <br> What are they doing? <br> PS Just a joke, but so funny :) <br> <a target="_blank" rel=nofollow[link]</a> danomac djqf...@gmail.com 2009-12-28T05:07:16Z rssdler crash Hi, <br> It seems that rssdler has been crashing on me. Here's a snippet of the <br> log: <br> 20091227.10:22 DEBUG determining filename <br> 20091227.10:22 DEBUG filename from url <br> 20091227.10:22 DEBUG determining size of file <br> 20091227.10:22 CRITICAL Unexpected Error: Traceback (most recent call <br> last): <br> File "/usr/lib/python2.6/site-packa ges/rssdler.py", line 2079, in Helga M. ivandyady...@gmail.com 2009-12-25T08:14:44Z Short joke for You One short joke clip to cheer Your up! <br> <a target="_blank" rel=nofollow[link]</a> box2k2 martin.linf...@gmail.com 2009-12-17T22:42:11Z Re: TypeError Similar problem here, kept crashing while downloading the following <br> feed: <br> <a target="_blank" rel=nofollow[link]</a> <br> Tried it using <a target="_blank" rel=nofollow[link]</a> and it was successful, so <br> there's something in the ShowRss it's failing on. <br> Following advice above, I changed: <br> except ValueError: Rafał Wieczorek blacksa...@gmail.com 2009-12-10T10:55:47Z TypeError With one of my rss feeds I keep getting: <br> <p>CRITICAL Unexpected Error: Traceback (most recent call last): <br> File "/usr/lib/python2.5/site-packa ges/rssdler.py", line 2079, in <br> main <br> run() <br> File "/usr/lib/python2.5/site-packa ges/rssdler.py", line 2049, in <br> run <br> rssparse(key) <br> File "/usr/lib/python2.5/site-packa ges/rssdler.py", line 1952, in Brian Derr bder...@gmail.com 2009-11-19T15:33:04Z Re: [rssdler] Re: Proposed code clean-up diff... I haven't looked through the whole thing yet, but thanks for taking a stab <br> at this. The few pieces that I looked at looked good. The code was <br> definitely in need of some TLC. <br> <p>bd dasnyderx dasnyd...@gmail.com 2009-11-18T20:34:24Z Re: Proposed code clean-up diff... Hmmm, it looks like the paste or the submission to the group messed <br> with the formatting of the patch. If you'd like a tarball of the <br> diff, contact me at this address or (better yet) dasnyderx _at_ yahoo <br> _dot_ com. <br> <p>Cheers, <br> David dasnyderx dasnyd...@gmail.com 2009-11-18T20:30:28Z Proposed code clean-up diff... In an effort to learn and mess with Python, I came across and began <br> using RSSDler. However, looking through the code and the log/debug <br> messages were as mess. So, I would like to propose the following <br> patch to clean up the code and messages. The bulk of the patch <br> consists of: <br> <p>- (Attempts at) proper punctuation. lostnihilist@gmail.com lostnihil...@gmail.com 2009-10-01T14:14:15Z RSSDler 0.4.2 Released Finally. As noted, I'm not actively developing this script anymore, <br> mostly because it works quite well for me. If anyone has the <br> inclination to take it further, email me and we'll see about adding <br> you to the project. I will try to squash bugs as I see them, but major <br> changes are unlikely at this point. lostnihilist lostnihil...@gmail.com 2009-08-26T23:52:55Z Re: [rssdler] Re: contribute? If you haven't noticed, a number of bugs listed in the issues as well as the <br> group emails have been fixed. Besides some comments in the tickets, I haven't <br> found any patches from people. So, if you have some patches, please send them <br> in again. Testing the recent svn would also be helpful. If I don't get any bug bder...@gmail.com 2009-08-22T01:36:41Z Re: [rssdler] Re: contribute? I'm glad to hear from you! I was about to sit down and open up a new project to fork this. I was delaying trying to come up with a good name (which I suck at and rssdler describes the project really well). I look forward to seeing patches and fixes making it into this project soon now. <br> <p>bd <br> <p>Oh, wow. Sorry guys. I've messed up big time here. I thought I had setup this email account to forward to another one but appear to have screwed something up. I've confirmed that it is working now. Further communication should not be a problem. lostnihilist lostnihil...@gmail.com 2009-08-21T23:11:24Z Re: [rssdler] Re: contribute? Oh, wow. Sorry guys. I've messed up big time here. I thought I had setup this <br> email account to forward to another one but appear to have screwed something <br> up. I've confirmed that it is working now. Further communication should not be <br> a problem. <br> <p>I'm open to giving people write access to the repository so that the project Graham Dunn g...@kurai.org 2009-08-18T13:45:13Z Re: [rssdler] Re: contribute? Well, technically, by maintaining a seperate repository, you've already <br> forked :) Just make it public... if lostnihilist wants to track your <br> changes, he(she?) can. <br> You may want to call it something else though, to avoid confusion. bderrly@gmail.com bder...@gmail.com 2009-08-18T01:22:13Z Re: contribute? I tried to contact lostnihilist individually a while back about being <br> added as a contributor but I received no response. I think that a fork <br> of the project might be a good idea since we won't be able to <br> contribute to this repository without being able to speak with <br> lostnihilist. I too have worked on fixing up some problems on my local Angel angel....@gmail.com 2009-06-28T23:36:39Z contribute? Hi lostnihilist (and everyone who uses rssdler), <br> <p>I like rssdler but think it could be easier to use and more stable. <br> <p>I have a couple patches of my own and can easily fix most of the open <br> defects listed on the 'issues' page in a day or two. Since you are too <br> busy to add features, make bugfixes, and put out new releases, do you infowolfe infowo...@gmail.com 2009-06-14T08:53:10Z bug in Safari cookie handling --- rssdler.py 2008-08-11 19:48:24.000000000 -0700 <br> +++ rssdler.fixed 2009-06-14 01:51:11.085078087 -0700 <br> @@ -500,7 +500,7 @@ <br> elif keyText == 'value': d['value'] = valueText <br> else: <br> if 5 == len(set(d.keys()).intersection <br> (('domain','path','expires','n ame','value'))): <br> - d['dspec'] = str(d['host'].startswith('.')) .upper() Irregular irra...@gmail.com 2009-06-01T19:53:21Z problem with percent encoding rssdler can't fetch torrent file, because it's replacing % with %25: <br> <p>20090601.04:08 DEBUG unQuoteReQuote <br> <a target="_blank" rel=nofollow[link]</a> <br> )%20%5b3A18C972%5d.mkv.torrent <br> 20090601.04:08 DEBUG checking download<x> Gelegrodan gelegro...@gmail.com 2009-05-08T23:26:23Z Make it include cookie while fetching .torrent? Hello <br> How do i make it include the cookie-file while fetching the torrent? Sanmarcos marcetcheve...@gmail.com 2009-03-21T04:21:05Z Re: TTL minutes or seconds It is weird, because with the RSS feed with a ttl of 10, and scanMins <br> at 11 I still get the warning. <br> <p>Does rssdler assume that the RSS ttl or scanMins is in seconds? <br> <p>I can't seem to figure out a way to fix this. Sanmarcos marcetcheve...@gmail.com 2009-03-21T04:17:49Z Issues with filenames 20090321.05:02 ERROR filename already taken xxx-blah.torrent <br> <p>Is there a way to tell rssdler to not download torrents if the <br> filename is already taken? <br> <p>Also, is there make it not download filenames that end in .html, <br> only .torrent? (Would the ifTorrent postDownloadFunction be another <br> solution for this?). Graham Dunn g...@kurai.org 2009-03-20T18:59:33Z Re: [rssdler] Re: Cookie file support in 0.4.1a1? Where in particular is it broken? I've been poking through the code, I see <br> this in the logs: <br> <p>INFO --- RSSDler 0.4.1a1 <br> DEBUG writing daemonInfo <br> INFO [Waking up] Fri Mar 20 14:24:29 2009 <br> DEBUG checking working dir, maybe changing dir <br> INFO Scanning threads <br> INFO finding new downloads in thread TRM lostnihilist lostnihil...@gmail.com 2009-03-20T18:17:56Z Re: [rssdler] Cookie file support in 0.4.1a1? svn is broken, yes. I need to fix it. Patches are accepted :D Graham g...@kurai.org 2009-03-20T17:11:22Z Cookie file support in 0.4.1a1? It doesn't seem to be working ... using wget pointing at the same <br> cookie file works. lostnihilist lostnihil...@gmail.com 2009-03-20T16:37:11Z Re: [rssdler] Re: Using reserved keywoards in postDownloadFunction well, the line it is tripping up on is: <br> s = socket(AF_INET,SOCK_DGRAM) <br> I'm not at all familiar with this netgrowl thing. I think if you change <br> socket() to socket.socket() you wil lbe good to go, assuming the rest of the <br> code is correct. What the error says is that you have effectively <br> done "import socket; socket()" which of course won't work since you cannot Sanmarcos marcetcheve...@gmail.com 2009-03-20T10:06:09Z Re: Using reserved keywoards in postDownloadFunction lostnihilist: Any ideas? lostnihilist lostnihil...@gmail.com 2009-03-17T17:15:35Z Re: [rssdler] Match against files in torrent? this would be possible with a postDownloadFunction. You would use rssdler's <br> bdecode function to read the torrent, and the iterate through the info <br> dictionary to find files ending with .rar and similar rar extensions. I am <br> trying to find the time to refactor rssdler to be more amenable to changes lostnihilist lostnihil...@gmail.com 2009-03-16T15:15:47Z Re: [rssdler] TTL minutes or seconds I will look to make sure that everything is cast to the right value, but I'm <br> pretty sure the code uses minutes for convenience and then multiplies by 60 <br> to get seconds. <br> I do not see this behavior with similar settings. Cokelid coke...@googlemail.com 2009-03-15T18:18:50Z Match against files in torrent? I've got rssdler downloading torrents just fine, but recently I'm <br> grabbing lots of "fake" torrents masquerading as a trusted uploader. <br> The "fake" torrents contain RAR files rather than AVIs (and then try <br> to get you to buy a password to decrypt the rar file). <br> <p>Is there any way to match against the files contained in the torrent Sanmarcos marcetcheve...@gmail.com 2009-03-14T08:41:18Z TTL minutes or seconds The RSS 2 spec states that the <ttl> value is in *minutes*, not <br> seconds. See <a target="_blank" rel=nofollow[link]</a> <br> <p>However, the rssdler docs say: <br> <p> scanMins: [Optional] An integer option. Default 15. Values are in <br> minutes. <br> The number of minutes between scans. Sanmarcos marcetcheve...@gmail.com 2009-03-14T08:36:23Z Using reserved keywoards in postDownloadFunction I am trying to use the socket module in a function, to send a <br> notification to my desktop client machine via the Growl notification <br> system. <br> <p>Here is my userFunctions.py <br> <p>from netgrowl import * <br> import sys <br> import datetime <br> <p>def growlNotify(directory=None,[link]</a> <br> mrss/" namespace to parse properly? This is what I'm trying to get <br> working: <a target="_blank" rel=nofollow[link]</a> <br> <p>Thanks, <br> Graham lostnihilist lostnihil...@gmail.com 2009-02-03T17:35:11Z Re: [rssdler] rssdler debian package? GPLv2 is right. I prefer the pseudonym. I'm not sure what trouble you may run <br> into, there is nothing particularly complicated about the package. Basically, <br> just a python module for the python library and a python script that gets <br> thrown in /usr/bin. However, do feel free to contact me with questions or David Spreen netzw...@debian.org 2009-02-02T06:42:07Z rssdler debian package? Hi, <br> I am thinking of packaging rssdler for inclusion in Debian. I have not <br> done a whole lot of research yet on whether there have been discussions <br> to do that. Before I do that, I would like to clarify the copyright. <br> According to the google code page the code is lincensed under the <br> GPLv2. Is there an official author? Is there a name and preferred lostnihilist lostnihil...@gmail.com 2009-01-29T00:34:42Z Re: [rssdler] Ignoring Feed Wait Times I won't be adding this feature so that rssdler is not implicated in abusing <br> servers/RSS feeds and getting itself banned. If the RSS wait time is not <br> sane, talk to the admin. Otherwise, it would be improper to disrespect that. ftnisthebe...@gmail.com 2009-01-28T22:59:59Z Ignoring Feed Wait Times Hello, <br> I have downoaded and installed rssdler and it works great. I wanted <br> to know if there was an option to ignore feed wait times? It is nice <br> that it does that but would be nice to disable it as well. sunwukung@gmail.com sunwuk...@gmail.com 2008-11-15T13:09:41Z bug report I started with rssdler last night and when testing I found --purge- <br> failed doesn't work. I'm guessing because it is misspelled at <br> <a target="_blank" rel=nofollow[link]</a> <br> <p>I hope FF3 cookies are still being worked on. I got this error: <br> 20081114.22:57 DEBUG testing cookieFile settings jav nob...@gmail.com 2008-11-10T10:09:06Z Re: cookie problem firefox3 For those of you who are intrested in cookies FF3 => FF2: <br> <a target="_blank" rel=nofollow[link]</a> lovedaddy greg.losco...@gmail.com 2008-11-08T21:12:24Z Skip writing out torrent if already exists Hey all, is there the functionality to skip downloading or writing a <br> torrent file if the file its about to download already exists. <br> <p>Example, got a config file with... <br> <p>[thread1] <br> download1 = this_series1 <br> download2 = this_series2 <br> download3 = this_series3 <br> <p>[thread2] <br> download1 = this_series1 jav nob...@gmail.com 2008-11-07T20:16:26Z Re: Trigger function after compleated download Turns out it was exactly what I needed. <br> Where can I find more info about the *feed() functions found in <br> userFunctions.py ? <br> I'm hesitant to what their use is and how to use them. <br> <p>Thanks // Jav jav nob...@gmail.com 2008-11-07T18:20:41Z Re: Trigger function after compleated download nope, I hadn't noticed it. <br> Sounds to me as if it's exactly what I'm looking for. <br> I'll look at it now, thanks! lostnihilist lostnihil...@gmail.com 2008-10-29T17:37:48Z Re: [rssdler] cookie problem firefox3 The issue is that the firefox _3_ cookie format is new, and no one (as far as <br> I could tell) has written a way to interpret them yet, except obviously the <br> mozilla people. Since I don't use FF, <br> I didn't have a way to test the cookie format extensively. I will try to see <br> what is going wrong. In the meantime, firefox 2 cookies work fine, as well as lostnihilist lostnihil...@gmail.com 2008-10-29T17:31:56Z Re: [rssdler] Download only items after a certain date well, when you add a new feed/filter, you could run it with the noSave option, <br> and then all torrents that would've been downloaded are not, and are <br> remembered as not to be downloaded. <br> Alternately, you could have rssdler save if to a safe location that won't be <br> picked up by other software, use a postDownloadFunction to check the date on lostnihilist lostnihil...@gmail.com 2008-10-29T17:27:29Z Re: [rssdler] installing RSSdler First of all, it looks like you are running rssdler as root. Don't do that! <br> Secondly, is /opt/local/bin in your $PATH? On the command line, do: "echo <br> $PATH" and see what is there, or write back here with the output if you don't <br> understand what I'm getting at. | http://groups.google.com/group/rssdler/feed/atom_v1_0_msgs.xml?num=50 | CC-MAIN-2013-20 | refinedweb | 3,111 | 66.64 |
See also: IRC log
Present: Hugo, Rigo, DBooth, Patrick
Regrets:
Chair: SV_MEETING_CHAIR
Scribe: UNKNOWN
<rigo> apparemment, il est deja trop tard pour les ricains ;)
<hugo> David m'a dit qu'il comptait venir
<hugo> Rigo: we are confirming our intention to do a P3P generic attribute
... ... I envision a WG note
... ... and then we could push it through Rec track rather fast
<hugo> ACTION: Rigo & Patrick (P3P Beyond HTTP task force) to write a P3P generic attribute document. [PENDING]
<hugo> ACTION: [DONE] Rigo to talk to MSM and/or HT to get help in defining the schema.
<hugo> ACTION: Hugo to look at how to apply the generic attribute to WSDL [IN PROGRESS]
<dbooth> Hugo: Started the document. Not done yet.
<hugo> Document started at
<hugo> David: in an arbitrary XML schema, if we add an attribute in a different namespace, what is the effect on the schema?
... Patrick: PLH proposed annotation and XML extension
<hugo> ACTION: David to ask MSM about effect of XML attribute on schema
<hugo>
... SOAP works in the opposite direction of P3P
... with SOAP, it would be better to use something like EPAL
... Consensus: finding a privacy policy for a Web service should be done by looking at the WSDL of this service
<hugo> ACTION: Hugo to organize call after 11 January | http://www.w3.org/2003/12/16-p3pws.html | CC-MAIN-2015-22 | refinedweb | 217 | 68.6 |
At my current project we’re working hard to get a new REST API running on top of ASP.NET Core. One of the things we need to do is communicate with a set of existing WCF Services in the back office of the company.
This means we need WCF clients inside our ASP.NET Core project. Something that isn’t very simple as it turns out.
In this article I will show you some of the options you have for building and connecting to WCF services. I will also show you which problems you may run into while building or connecting to WCF services.
Building WCF services using the .NET Core SDK
Before you start to think about building WCF services on top of .NET Core framework it’s important to know that only the client part of WCF is supported.
If you want to build WCF services you need to change your project.json so that you run on top of the full .NET Framework.
{ "version":"1.0.0-*", "dependencies": {}, "frameworks": { "net461": { "frameworkAssemblies": { "System.Runtime.Serialization":"4.0.0.0", "System.ServiceModel":"4.0.0" } } } }
When you build WCF services with the .NET Core SDK in combination with the full .NET Framework you need to know that you are limited to running on Windows. WCF isn’t fully supported by Mono and .NET Core framework only supports clients. Other than that it’s perfectly fine to build WCF services with the .NET Core SDK.
Right now there’s only one way to host your WCF service when you use the .NET Core SDK. Normally you’d make a .svc file with a servicehost directive to host on IIS. Applications build with the .NET Core SDK however don’t seem to support this.
The only way to host a WCF service is to create a self-hosting application.
using(ServiceHost host =newServiceHost(typeof(HelloWorldService), baseAddress)) { ServiceMetadataBehavior smb = newServiceMetadataBehavior(); smb.HttpGetEnabled = true; smb.MetadataExporter.PolicyVersion = PolicyVersion.Policy15; host.Description.Behaviors.Add(smb); host.Open(); Console.WriteLine("The service is ready at {0}", baseAddress); Console.WriteLine("Press <Enter> to stop the service."); Console.ReadLine(); host.Close(); }
This is fine when you don’t mind running console applications for each of your services. It does however present problems for companies that want to host several services in a single host process. So before you jump in and start building WCF services make sure that you think about the hosting model.
Building WCF clients using the .NET Core SDK
While WCF services aren’t supported with the .NET Core framework but can be build using the SDK it’s good to know that WCF clients are supported in the .NET core framework.
In order to generate a client for a WCF service you need an extension in Visual Studio 2015 called the WCF connected service . This extension makes it possible to add a WCF connected service to your project. Notice that it only works for Visual Studio 2015 right now. Support for Mac and Linux is being developed, but not available yet.
The good thing with the connected service code is that while you can only generate it in Visual Studio, you can use the code on your Mac or Linux machines. Once a WCF client is generated there’s no need for the Visual Studio extension.
The current version of WCF Core supports only a limited set of bindings and transports. For example WS-* support is missing. Also, you can’t use Windows Authentication on Mac and Linux.
If you need this kind of support you need to use the full .NET framework. Which means that you are required to run on Windows. Since Mono does not support all scenarios for WCF right now.
If you set your project.json to full framework and include the
System.ServiceModel
and
System.Serialization
assemblies you can then generate clients using the good old svcutil commandline utility. Provided that you work on Windows of course.
For our project I created a custom powershell script that does just that. It takes some settings and generates a WCF client for me.
$svcutil="C:\Program Files (x86)\Microsoft SDKs\Windows\v10.0A\bin\NETFX 4.6.1 Tools\SvcUtil.exe" functionGenerate-WcfClient { param([string]$wsdlPath, [string]$namespace, [string]$outputPath) $languageParam="/language:C#" $outputParam="/out:"+$outputPath $namespaceParam="/n:*,"+$namespace $svcutilParams=$wsdlPath,$languageParam,$namespaceParam,$outputParam & $svcutil$svcutilParams|Out-Null } Generate-WcfClient -wsdlPath "../../Metadata/WSDL/MyService.V1.wsdl"-namespace"MyProject.Agents.MyService"-outputPath"./Agents/MyService.cs"
You can extend this to generate several clients if you need to. To integrate it into the build you need modify your
project.json
file and add the script to it.
{ "version":"1.0.0-*", "buildOptions": { "debugType":"portable" }, "dependencies": {}, "frameworks": { "net461": { "frameworkAssemblies": { "System.Runtime.Serialization":"4.0.0.0", "System.ServiceModel":"4.0.0" } } }, "scripts": { "precompile": [ "powershell ./generateagents.ps1" ] } }
Every time you run a build the agents get generated automatically. Keep in mind though that because you have a precompile script the build no longer uses incremental compiles. It means things will be slower. I personally feel that this is not a problem for me, but your situation could be different.
Conclusion
So yes you can use WCF from .NET Core SDK projects, but you will have to spend some time to make a trade-off between cross platform support and the requirements of the WCF services you connect to.
If you don’t need to use WS-* extensions and Windows authentication I suggest you change the bindings of your WCF services and use .NET Core framework. If you can’t then it’s good to know you can still work with your existing WCF services by running on the full .NET Framework on Windows. | http://126kr.com/article/848jprqzrz9 | CC-MAIN-2016-50 | refinedweb | 950 | 60.01 |
Talk:mkinitcpio
Contents)
Mark /usr with a passno of
2, not
0
mkinitcpio#/usr as a separate partition firmly suggests to mark
/usr with a
passno of
0 in
/etc/fstab. The way I understand
man fstab,
2 is the correct number for the task. Am I wrong? Regid (talk) 19:09, 6 April 2017 (UTC)
- It also suggests adding the
fsckhook to your config. Seeing as the regular fsck binaries reside on the
/usrpartition, this is needed to actually make it possible to fsck it at startup. The entry in
/etc/fstabshould thus be
0. Koneu (talk) 07:27, 7 April 2017 (UTC)
- I see your point about the binaries residing on /usr. Still, with a passno of
0you set no automatic fsck at all, don't you? Would you set passno for / to
0when usr is an integral part of /?
- Indeed the fsck hook copies fsck binaries to the initramfs, and possibly ignores passno for / and /usr at runtime. As an aside, its help text should mention usr, not just /.
- Regid (talk) 10:12, 7 April 2017 (UTC)
- It seems the
passnoneeds to be
2instead, as otherwise I only get the message about / being clean at boot time. After changing
passnoto
2I get the additional message that /usr is also clean. The only error in the logs is in regard to / (/dev/sda2 is mounted. e2fsck: Cannot continue, aborting.) which could indicate that the
passnofor / needs to be
0instead. Maklor (talk) 22:27, 19 April 2020 (UTC)
Talking about LUKS/LVM with sd-encrypt example
Formerly there was a link to the forum that was ambiguous about where the UUID were generated from. enckse documented clearly the same use case within this page, which was reverted by Lahwaacz in this change. The current edit of the page removes a reference to the forum post entirely. Where should the writeup from enckse be placed? - storrgie 22:05, 23 October 2017 (UTC)
- In the previous form, ie., as a tutorial, it would belong in someone's namespace. If you just want to capture the bit about determining UUIDs, then that is already documented on the relevant page. Jasonwryan (talk) 22:19, 23 October 2017 (UTC)
- If you review the forum post mentioned it isn't that I'm saying users don't know how to capture UUID. I'm saying that in the case of LUKS/LVM the rd.luks.uuid requires the block device UUID and the root=UUID= requires the UUID of the root volume group. This is what is ambiguous in the original forum post. Let's be clear, this is a commonly followed install pathway (LUKS/LVM), so why let users be confused when we've got a documented use case? - storrgie 22:25, 23 October 2017 (UTC)
- Then that relevant factoid belongs on the dm-crypt page page. Jasonwryan (talk) 22:39, 23 October 2017 (UTC)
- I've tried to place it in an appropriate place on that page, please advise. - storrgie 22:50, 23 October 2017 (UTC)
- I said *not* the entire tutorial, just the relevant snippet about locating the correct uuids. Jasonwryan (talk) 23:20, 23 October 2017 (UTC)
- It'd be nice to have a proposed modification rather than just slapping the revert button. - storrgie 23:41, 23 October 2017 (UTC)
- enckse's writeup is really no better than the forum post, it just describes how to do one specific thing without explaining the steps. That's not the wiki way, the wiki should explain the steps in some organized way and let users combine them as they want.
- That said, I think that everything discussed here is already on the dm-crypt/System configuration page:
- mkinitcpio hooks: dm-crypt/System_configuration#mkinitcpio (two variants, base and systemd)
- bootloader config: dm-crypt/System_configuration#Using_encrypt_hook (for the base hook) and dm-crypt/System_configuration#Using_sd-encrypt_hook (for the systemd hook) - the second section explains what the
rd.luks.*parameters actually do, this cannot be found in the forum post
- If you don't know how to get the UUIDs, read the Persistent block device naming page - it is linked multiple times from dm-crypt/System configuration. There is no point to include a full example of
lsblk -feverywhere.
- Though I must admit, that I can't find the
rd.lvm.lvkernel parameter anywhere on the wiki. I have no idea what it's for and the forum post does not explain it either. Is it necessary for resume? Or for LVM? Or the LUKS/LVM combo? If you find out, feel free to explain it here so we can find an appropriate place on the wiki for it.
- -- Lahwaacz (talk) 07:25, 24 October 2017 (UTC)
rd.lvm.lvseems to be a dracut specific thing. -- nl6720 (talk) 13:26, 29 October 2017 (UTC)
- [1] confuses me. Is
/etc/crypttabmeant to be
/etc/crypttabon real root or
/etc/crypttabin initramfs (i.e.
/etc/crypttab.initramfson real root)? -- nl6720 (talk) 13:39, 29 October 2017 (UTC)
- Since the snippet is from the systemd-cryptsetup-generator(8) man page, I'd say it's the root where systemd-cryptsetup-generator is run. -- Lahwaacz (talk) 13:56, 29 October 2017 (UTC)
- mkinitcpio#Non-root drives are not decrypted by sd-lvm2 hook doesn't make the distinction, which makes the whole section nonsense. Why would you expect sd-lvm2 to have access to block devices that are decrypted after boot (those listed in
/etc/crypttabon real root)? Using
/etc/crypttab.initramfsis far simpler than messing around with all those
rd.luks.and
luks.parameters. -- nl6720 (talk) 14:06, 29 October 2017 (UTC)
- I have no idea, I just moved the section from systemd-boot. See also Talk:Systemd-boot#Move_section_.22Non-root_drives_are_not_decrypted_by_sd-lvm2_mkinitcpio_hook.22. -- Lahwaacz (talk) 14:31, 29 October 2017 (UTC))
mdadm deprecated
The section "Using RAID" may need to be edited because mdadm hook seems deprecated:
==> WARNING: Hook 'mdadm' is deprecated. Replace it with 'mdadm_udev' in your config
—This unsigned comment is by Noraj (talk) 12:40, 11 May 2019. Please sign your posts with ~~~~!
- So do what the message tells you to do. It says the same thing as the wiki page - mdadm_udev is preferred over mdadm. -- Lahwaacz (talk) 16:03, 11 May 2019 (UTC)
Mkinitcpio replacement with Dracut —This unsigned comment is by B1tninja (talk) 14:56, 27 September 2019 (UTC). Please sign your posts with ~~~~! | https://wiki.archlinux.org/index.php/Talk:Mkinitcpio | CC-MAIN-2020-45 | refinedweb | 1,061 | 64.2 |
As a single developer without a team of artists to create hand craft level content for me, I am considering creating my RPG’s world from Math. In this tutorial, I will create a means of visualizing such a world to show just how flexible and easy to use it can be. We will be making heavy use of Perlin noise, but don’t worry, this post will actually be light on math.
It is actually quite easy to create infinite worlds with a great degree of variation, providing a nice boost to exploration and replay ability. On the other hand, if I go with something too complex I may need to move away from the text based interface. Decisions, decisions!
Create the Scene
- Begin by creating a new empty scene.
- Add a Quad. From the menu bar choose “GameObject->3D Object->Quad”
- Create a new material. From the project tab, choose “Create->Material”
- With the material selected, use the inspector to set the material’s shader to “Unlit/Texture”, but leave the texture unassigned.
- Assign the newly created material to the Mesh Renderer of the Quad.
- Select the “Main Camera” object in the “Hierarchy” panel.
- In the Inspector, set the camera component’s “Projection” type to “Orthographic” and the “Size” to 0.5.
- Move the camera to position: (0, 0, -10).
At this point, the Game window should show the Quad taking up the full height of the screen. There may be bars on the side if you are using a non-square aspect ratio, but it doesn’t matter, we just want a convenient canvas to render on and show us the result.
Rendering to a Texture
Create a new script called “PerlinVisualizer.cs” and attach it to the Quad in the scene. We may want to change the resolution of the texture, so we will create a public Vector2 for the texture size. I created a small default size of 32×32 because I am imagining each pixel as a “room” or at least “tile” of a world. We will also need properties to hold a Texture2D which we will create and draw on, and an array of Color so that we can paint the texture in a single pass.
public Vector2 textureSize = new Vector2(32, 32); Texture2D texture; Color[] pix;
The texture itself will be created and assigned to our material in OnEnable. At the same time we will create the placeholder Color array for our pixel information. The texture format is important, not all of them support painting. Setting the “filterMode” to “Point” causes the pixels to display crisply. If you don’t set it, the renderer will try to blend everything together. Since we create the texture we are responsible for destroying it, do that in OnDisable.; }
Now we can do a sample render just to make sure everything works. In the Start method, we will trigger a test render that fills every pixel on our canvas with a randomly generated color. Note that I assign the color to the pix array, then assign the array to the texture. You must call “Apply” on the texture before it uploads to the graphics card and therefore appears on screen.
void Start () { Draw(); } public void Draw () { for (int y = 0; y < texture.height; ++y) { for (int x = 0; x < texture.width; ++x) { int index = y * texture.width + x; pix[index] = new Color( UnityEngine.Random.value, UnityEngine.Random.value, UnityEngine.Random.value ); } } texture.SetPixels(pix); texture.Apply(); }
Save the script and run the scene. If everything is configured correctly you should see your Quad appear as if a rainbow vomited all over it.
Draw Perlin Noise
Next, we want to draw something a little more intentional. Create a new script called “Mapper.cs”. It will be the base class of the script drawing perlin noise. We might later want to implement different patterns or means of drawing initial values. It indicates that we will take an x and y value (driven by pixels of the image) and return a float. We will use that float as a multiplier on a color value to paint the pixel.
public abstract class Mapper : MonoBehaviour { public abstract float Map (int x, int y); }
Here is the concrete implementation of the Mapper for Perlin Noise.
public class PerlinMapper : Mapper { /// <summary> /// The offset is used to determine a base shifting of the perlin pattern /// and will be added to the sample position /// </summary> public Vector2 offset = Vector2.zero; /// <summary> /// The scale is used to shrink or grow the overal size of the pattern /// </summary> public Vector2 scale = new Vector2(0.1f, 0.1f); /// <summary> /// Takes an x and y position and returns a value from 0-1 with respect /// to its offset and scale properties /// </summary> public override float Map (int x, int y) { return Mathf.PerlinNoise((x + offset.x) * scale.x, (y + offset.y) * scale.y); } }
So you can see the results of this bit of code, let’s plug it in to draw the perlin noise to our texture instead of the rainbow garbage we had earlier. The bit of code we are doing right now is throw-away and is just to let you see the purpose of each piece before it is fully assembled into something that might otherwise be too large to understand.
Add the Perlin Mapper script to our canvas object (called “Quad” in the scene unless you renamed it). Add another temporary property to the PerlinVisualizer script:
PerlinMapper mapper;
At the end of the OnEnable method get the component and assign it to our property.
mapper = gameObject.GetComponent<PerlinMapper>();
One last step. Modify the line that assigned a random color to each pixel to the following:
// Change this line... // pix[index] = new Color( UnityEngine.Random.value, UnityEngine.Random.value, UnityEngine.Random.value ); // To this... pix[index] = Color.white * mapper.Map(x, y);
Run the scene again, and now you should see a pixelated version of something that looks kind of like Photoshop’s render clouds filter.
Experiment with the pattern
The values you pass along to Perlin Noise can be pretty sensitive, so rather than having to change a value and run the scene repeatedly, let’s add an editor script to allow us to manually trigger a render, or to simply make it render every frame. Before that, we will need to expose some new functionality.
Add another property, which indicates whether or not our script is rendering the canvas every frame or not:
public bool isPlaying { get; private set; }
These simple methods handle starting, stopping and performing the draw loop:
public void Play () { if (isPlaying == false) { isPlaying = true; StartCoroutine("DrawLoop"); } } public void Stop () { if (isPlaying) { StopCoroutine("DrawLoop"); isPlaying = false; } } IEnumerator DrawLoop () { while (true) { yield return null; Draw (); } }
Now we can add a new editor script. Note that it needs to be placed in a folder called “Editor” in order to work properly. I named the script “PerlinVisualizerInspector.cs”. Editor scripts must include the “UnityEditor” namespace and should inherit from the “Editor” class. Marking the class as “CustomEditor” is what allows it to target our PerlinVisualizer component and insert new controls into the inspector.
using UnityEngine; using UnityEditor; [CustomEditor (typeof(PerlinVisualizer))] public class PerlinVisualizerInspector : Editor { protected PerlinVisualizer perlinVisualizer { get { return (PerlinVisualizer)target; } } public override void OnInspectorGUI () { DrawDefaultInspector (); if (GUILayout.Button("Draw")) perlinVisualizer.Draw(); if (perlinVisualizer.isPlaying && GUILayout.Button("Stop")) perlinVisualizer.Stop(); if (!perlinVisualizer.isPlaying && GUILayout.Button("Play")) perlinVisualizer.Play(); } }
Now run the scene. Modify some values on the Perlin Mapper, such as the offset position. Click the “Draw” button in the Perlin Visualizers inspector and the texture should update. If our calculations get too complex, or we display it on too large of a texture, we may want to leave play mode off and trigger renders manually with the Draw button like this.
Click “Play” and now drag your mouse across the Offset and you should see the texture scroll across the canvas. Since our texture is small and our system simple, we can get away with leaving it in play mode for now.
Make the pattern look like a Level
As it is, the Perlin noise looks like it could be nice for a textured ground or perhaps a nice height map for a heavily subdivided ground mesh. It can be tweaked for many other purposes, such as drawing out a walkable level. I am imagining something where we only see white or black pixels, where the white pixels represent traversable world and the black pixels are non-traversable. They could become mountain ranges or oceans surrounding an island. It doesn’t really matter, but you get the idea for now.
Create an abstract base class called Node. It will be used to modify the parameters we get from instances of the Mapper class in some way.
public abstract class Node : MonoBehaviour { public abstract float Calculate (float input); }
The first concrete implementation of the Node class I will call the “BarNode”. It will take a value and if it passes a “bar” value, will return 1, otherwise it will return 0. This will create the white or black extremes as I mentioned earlier.
public class BarNode : Node { /// <summary> /// Determines the value at which values are clamped /// to either 0 or 1 /// </summary> public float value = 0.5f; /// <summary> /// Values less than or equal to the input will be /// output as zero, one otherwise /// </summary> public override float Calculate (float input) { return (input <= value) ? 0f : 1f; } }
To see the effect of the Node on a Mapper, add this component to our Canvas object (the same one with the mapper and visualizer). Add another temporary property for it to our PerlinVisualizer script:
Node node;
Connect the Property at the end of the OnEnable method:
node = gameObject.GetComponent<BarNode>();
And run the mapper’s value through the node in the “Draw” method:
pix[index] = Color.white * node.Calculate( mapper.Map(x, y) );
Run the scene now and you can see that there are very defined regions of walkable vs non-walkable area. We could definitely create some sort of interesting maze like level out of this.
Enable Play mode on the Visualizer script and modify the value of the BarNode script. Lower values create pockets of black, which I might imagine as random boulders scattered over an open field, or if used as a mask, pockets of areas that will become forests or swamps etc. Higher values create pockets of white, which might be islands in an ocean. Use your imagination here.
Create a new script called “BandNode”. It will also clamp values to either 0 or 1, but will do so based on values within a value range which will be defined by a Vector2:
public class BandNode : Node { /// <summary> /// The range of values that return 1; /// </summary> public Vector2 range = new Vector2(0.4f, 0.6f); /// <summary> /// Input values >= range.x and values <= range.y output 1 /// </summary> public override float Calculate (float input) { return (input >= range.x && input <= range.y) ? 1f : 0f; } }
Add this script to the Canvas, and in the OnEnable method, get a reference to the BandNode instead of the BarNode.
// Change this... // node = gameObject.GetComponent<BarNode>(); // To this... node = gameObject.GetComponent<BandNode>();
Run the scene and you will have an even more maze like system of passageways than we saw before. The passages are narrow and winding.
Layer the Complexity
Now I want to be able to stack mappers and nodes and combine their functions to create even more interesting patterns and possibilities. Create a new script called “RenderLayer.cs”:
public class RenderLayer : MonoBehaviour { public Color color = Color.white; public Mapper mapper; public Node[] filters; public Color Render (int x, int y) { float value = mapper.Map(x, y); if (filters != null) { for (int i = 0; i < filters.Length; ++i) { value = filters[i].Calculate(value); } } return color * value; } }
Now I will remove all the temporary code I added to the Visualizer script and implement its final version. Here is the full class for clarity sake:
public class PerlinVisualizer : MonoBehaviour { public RenderLayer[] layers; public Vector2 textureSize = new Vector2(32, 32); public bool isPlaying { get; private set; } Texture2D texture; Color[] pix;; } void Start () { Draw (); } public void Play () { if (isPlaying == false) { isPlaying = true; StartCoroutine("DrawLoop"); } } public void Stop () { if (isPlaying) { StopCoroutine("DrawLoop"); isPlaying = false; } } public void Draw () { for (int y = 0; y < texture.height; ++y) { for (int x = 0; x < texture.width; ++x) { Color c = Color.black; for (int i = 0; i < layers.Length; ++i) { c += layers[i].Render(x, y); } pix[ y * texture.width + x ] = c; } } texture.SetPixels(pix); texture.Apply(); } IEnumerator DrawLoop () { while (true) { yield return null; Draw (); } } }
I removed the mapper and node components from the canvas. Then I created a new GameObject and parented it to the canvas. I named the object “Layer 0”. Add the RenderLayer script to this new object. Drag the object onto the Visualizer script’s “Layers” property so that it has an array size of 1 and the element is auto assigned.
Add another GameObject called “Perlin Mapper” and attach the “PerlinMapper” script to it. Parent this object to the “Layer 0” GameObject (note that none of the parenting has any effect on anything, it is simply for organization sake). Drag the “Perlin Mapper” object into the “Mapper” field of the Render Layer’s inspector. At this point you could play the scene and see the soft “Cloud” like filter from earlier.
Add another GameObject called “Bar Node” and attach the “BarNode” script to it. Parent this object to the “Layer 0” GameObject and drag the object onto the Layer’s “Filters” property so that it has an array size of 1 and the element is auto assigned. I will set the value of the Bar Node to 0.6 so that it creates little islands or pockets of white.
Create another layer like we did before. I made mine a separate hierarchy so that each layer is a sibling child of the Quad canvas object. Like before, the mapper will be a Perlin Mapper. To make things more interesting I used a different value for the scale of the noise (0.15, 0.15). Also we will add some variation by using a BandNode for the filter. Don’t forget to add the new layer to the Visualizer script’s Layers array or you won’t see it added to the picture. Run the scene and you will see a nice mixture of winding passages and larger more open areas.
Hopefully by now you see the potential of this system. Feel free to experiment with what I have already provided and try changing colors or adding more nodes into the mix:
The invert node will take white pixels and make them black and vice-versa.
public class InvertNode : Node { public override float Calculate (float input) { return 1f - input; } }
The multiply node will reduce the effect of a layer much like an alpha channel. values of 0 turn a layer off, and values of 1 are fully visible and unmodified.
public class MultiplyNode : Node { public float value; public override float Calculate (float input) { return input * value; } }
Enjoy! | https://theliquidfire.com/2014/12/12/procedural-world-visualizer/ | CC-MAIN-2021-10 | refinedweb | 2,511 | 54.42 |
25 February 2009 17:17 [Source: ICIS news]
HOUSTON (ICIS news)--Colorado lawmakers rejected a bill that would have made the state the nation’s first to ban the use of plastic bags by large retailers, the bill’s sponsor, state Sen. Jennifer Veiga (Democrat-Denver), said on Wednesday.
The bill, which narrowly passed through a state Senate committee two weeks ago, was universally opposed by Republicans, and several Senate Democrats joined with them on Tuesday in rejecting the measure.
The opponents claimed the ban would cause an increased use of paper bags, which they said take more energy and water to make than plastic bags. Additionally, some were concerned that paper bags take up more room in landfills.
Likewise, the American Chemistry Council (ACC) has maintained that "switching back to paper bags increases greenhouse gas emissions, energy use and waste". The trade group supports plastic-bag recycling programmes.
However, Veiga said that her intent was for shoppers to use reusable bags, not paper. In addition, she said plastic bags were a bigger problem than paper bags because they are used more widely, made from petroleum products and are not recycled as frequently as paper.
Veiga told the Associated Press (AP) that she thought ?xml:namespace>
While no statewide bans exist to ban plastic bags, city ordinances are already in place | http://www.icis.com/Articles/2009/02/25/9195788/colorado-lawmakers-reject-statewide-plastic-bag-ban.html | CC-MAIN-2015-18 | refinedweb | 221 | 60.14 |
One of the most confusing things in Python for the new programmer is string formatting. Not because it's difficult in Python, but because the way to do it has changed many times over the years.
Below is a reverse timeline of ways to do string formatting in Python. It is recommended that you follow the first method of formatting.
All the code snippets ask the user for their name (e.g.
"Jose"), and then print out a greeting (
"Hello, Jose!").
f-strings
The most modern method for string formatting is f-strings, new in Python3.6. This is the way I'd recommend doing string formatting!
user_name = input('Enter your name: ') # Ask user for their name greeting = f'Hello, {user_name}!' # Construct a greeting phrase print(greeting)
The f-string will replace whatever is inside the curly braces by the variable in the scope. Thus
{user_name} in the variable would be replaced by the value of the
user_name variable.
.format()
The
.format() method is a great way to do string formatting, but it can almost always be replaced by f-strings.
Here's an example of this:
user_name = input('Enter your name: ') greeting = 'Hello, {}!'.format(user_name) print(greeting)
In the
.format() method, we replace the
{} inside the string for the value inside the brackets.
Templates
Templates allow you to create re-usable strings where special placeholders can be replaced by values. They are much slower than any other form of formatting, and provide few or no benefits over the above two.
Here's an example:
from string import Template user_name = input('Enter your name: ') greeting_template = Template('Hello, $who!') greeting = s.substitute(who=user_name) print(greeting)
String concatenation
String concatenation is the simplest and oldest form of string formatting in Python. It's useful when you have very simple strings, with little formatting.
Here's an example:
user_name = input('Enter your name: ') greeting = 'Hello, ' + user_name + '!' print(greeting)
However, it's easy to create unexpected errors in your Python code when joining things together that are not strings, as we see from the examples below:
age = 30 greeting = 'You are ' + age + ' years old.' # This raises an error! print(greeting)
This was a quick, short overview of the existing ways of doing string formatting in Python. There's many different ways!
Many different tutorials may suggest different ways of doing string formatting because they may have been written at different times over Python's life. My recommendation: stick to f-strings if on Python3.6, and to the
.format method if on Python3.5 or earlier. | https://blog.tecladocode.com/learn-python-string-formatting-in-python/ | CC-MAIN-2019-26 | refinedweb | 423 | 66.74 |
String matching (KMP algorithm)
Girish Budhwani
・5 min read
The string matching problem also known as “the needle in a haystack is one of the classics. This simple problem has a lot of application in the areas of Information Security, Pattern Recognition, Document Matching, Bioinformatics (DNA matching) and many more. Finding a linear time algorithm was a challenge, then came our father Donald Knuth and Vaughan Pratt conceiving a linear time solution in 1970 by thoroughly analysing the naive approach. It was also independently discovered by James Morris in the same year. The three published the paper jointly in 1977 and from that point on it is known as the Knuth-Morris-Pratt aka KMP Algorithm.
This is my first blog in the series and the approach I follow is I start with the basics then keep building on it till we reach the most optimised solution. I will be using
Python for code snippets as it’s very much concise and readable. Here we go..
Problem statement:
To Find the occurrences of a word W within a main text T.
One naive way to solve this problem would be to compare each character of W with T. Every time there is a mismatch, we shift W to the right by 1, then we start comparing again. Let’s do it with an example:
T: DoYouSeeADogHere (it will be a case insensitive search for all examples)
W: dog
# Here is the working code of the naive approach. def bruteSearch(W, T): # edge case check if W == "": return -1 # getting the length of the strings wordLen = len(W) textLen = len(T) # i is the index of text T from where we will start comparing the # the word W i = 0 # length of the subtext has to be equal to the length of the word, # so no need to check beyond (textLen - wordLen + 1) while i < (textLen - wordLen + 1): # we set match to false if we find a mismatch match = True for j in range(wordLen): if W[j] != T[i + j]: # A mismatch match = False break if match: # We found a match at index i print "There is a match at " + str(i) # incrementing i is like shifting the word by 1 i += 1 return -1
Time complexity of this naive approach is O(mn), where m and n are length of the word W and the text T respectively. Let’s see how can we make it better. Take another wacky example with all unique characters in W.
T: duceDuck
W: duck
As you can see in the above image, there is a mismatch at index 3. According to naive approach next step would be to shift W by 1. Since all letters in W are different, we can actually shift W by the index where mismatch occurred (3 in this case). We can say for sure there won’t be any match in between. I would recommend to try with some other similar example and check for yourself.
The idea is to find out how much to shift the word W when there is a mismatch. So far we have optimised the approach only for a special case where all characters in W are unique. Let’s take another bizarre example. This one is gonna be little tricky so brace yourself.
T: deadElephant
W: deadEye
Make sure you understand what green cells convey. I will be using a lot of them. In the above image the green cells in the left substring is equal to the green cells in the right substring. It is actually the largest prefix which is also equal to the suffix of the substring till index 4 of the word “deadeye”. Assume for now we have found it somehow, we will work on finding out largest prefix(green cells) later. Now let's see how it works by taking an abstract example.
str1 = str2 (green cells) and str2 = str3. When there is a mismatch after str2, we can directly shift the word till after str1 as you can see in the image. Green cells actually tell us the index from where it should start comparing next, if there is a mismatch.
I suppose you now understand if we find out green cells for every prefix of the word W, we can skip few unnecessary matches and increase the efficiency of our algorithm. This is actually the idea behind knuth-Morris-Pratt(kmp) algorithm.
In search of green cells
We will be using aux[] array to store the index. Unlike Naive algorithm, where we shift the word W by one and compare all characters at each shift, we use a value from aux[] to decide the next characters to be matched. No need to match characters that we know will match anyway. Let’s take yet another weird example.
W: acabacacd
m and
i define the state of our algorithm and signify that prefix of the word W before
m is equal to the suffix for the substring till
i-1 i.e
W[0…m-1] = W[i-m…i-1]. For the above image state, 2(value of
m) is stored in the aux[] array for the substring till index 4(
i-1).
def createAux(W): # initializing the array aux with 0's aux = [0] * len(W) # for index 0, it will always be 0 # so starting from index 1 i = 1 # m can also be viewed as index of first mismatch m = 0 while i < len(W): # prefix = suffix till m-1 if W[i] == W[m]: m += 1 aux[i] = m i += 1 # this one is a little tricky, # when there is a mismatch, # we will check the index of previous # possible prefix. elif W[i] != W[m] and m != 0: # Note that we do not increment i here. m = aux[m-1] else: # m = 0, we move to the next letter, # there was no any prefix found which # is equal to the suffix for index i aux[i] = 0 i += 1 return aux
Following will be the aux array for the word acabacacd
Now let's use the above aux array to search the word acabacacd in the following text.
T: acfacabacabacacdk
W = "acabacacd" T = "acfacabacabacacdk" # this method is from above code snippet. aux = creatAux(W) # counter for word W i = 0 # counter for text T j = 0 while j < len(T): # We need to handle 2 conditions when there is a mismatch if W[i] != T[j]: # 1st condition if i == 0: # starting again from the next character in the text T j += 1 else: # aux[i-1] will tell from where to compare next # and no need to match for W[0..aux[i-1] - 1], # they will match anyway, that’s what kmp is about. i = aux[i-1] else: i += 1 j += 1 # we found the pattern if i == len(W): # printing the index print "found pattern at " + str(j - i) # if we want to find more patterns, we can # continue as if no match was found at this point. i = aux[i-1]
Below is the snapshot of above code at some intermediate running state.
You just nailed Knuth-Morris-Pratt algorithm :)
What would you like to see on your DEV profile?
If we were to rethink the DEV profile, what would you like to see on it?
Lets make the problem a little more realistic,you have to search for a word from the dictionary in a given string.Lets say that the comp saves large data without space,one simple large string.
How do find the most efficient way to find your word?
private void initDictionary() {
// All the words in the dictionary
}
KMP algorithm solves in linear time for generic case. You can apply KMP algorithm in your case as well. | https://practicaldev-herokuapp-com.global.ssl.fastly.net/girish3/string-matching-kmp-algorithm-cie | CC-MAIN-2019-30 | refinedweb | 1,297 | 76.05 |
Hi
When i compile my code, it works alright, no errors, but when i run it, i get to a certain point where the program sort of goes out of control.
Here is the code:
can anyone help me?can anyone help me?Code:#include <iostream> using namespace std; int main() { int DOB; int AGE; int MONTH; int DATE; do { cout << "Please enter your date of birth...\n" << endl; cin >> DOB; cin.ignore(); if (DOB < 2006) { cout << "Really? What day?\n" << endl; cin >> DATE; cin.ignore(); } else if (DOB == 2006) { cout << "Really? How old are you?\n" << endl; cin >> AGE; cout << "...and what month?\n" << endl; cin >> MONTH; cin.ignore(); } else { cout << "What are you on about? You arn't even born yet!\n" << endl; } } while(DOB != -1); cin.get(); } | http://cboard.cprogramming.com/cplusplus-programming/84049-strange-bug.html | CC-MAIN-2014-10 | refinedweb | 130 | 97.5 |
This article originally appeared in Spanish as [SECMCA Regional Economic Notes No. 93]() in December 2017. This Jupyter notebook is a slightly modified version of the original, and was prepared on July 28, 2018..
Those who wish to exploit the advantages of programming to do these tasks must first decide which of many programming languages to learn. For instance, to a greater or lesser extend the languages R, Python, Julia, Fortran, Gauss, and MATLAB are all used by economists. MATLAB has been especially popular in this field, and there are many tools that have been developed to be run in this program, among them DYNARE and IRIS (to solve and estimate DSGE models), CompEcon (for computational economics), and Econometrics (for spacial econometrics).
Despite the fact that Python is not yet as popular as MATLAB among economists, its popularity has certainly skyrocketed in recent years. For example, the following books use Python to do typical economists tasks:
Python is a versatile and easy-to-learn language ---in fact it is used extensively in America's best universities to teach introductory programming courses. Its syntax is very clear, which makes developing and maintaining code easy. Because it is one of the most popular languages among computer programmers, there are abundant resources to learn it (books, Internet pages). It is an excellent tool to perform scientific calculation tasks (thanks to packages such as Numpy and Scipy), data management (Pandas), visualization (Matplotlib) and econometric modeling (Statsmodels).
Another advantage of using Python is that, unlike proprietary programs, Python and many of these complementary packages are completely free. The best way to get Python is through Anaconda, a free distribution that includes more than 300 very useful packages in science, mathematics, engineering, and data analysis. Besides Python, Anaconda includes tools such as IPython (to run Python interactively), Jupyter (an editor that allows combining text, code and results in a single file, excellent for documenting your work), Spyder (a GUI for code editing, similar to that of MATLAB) and Conda (allows to install and update packages).
If you want to start working with Python, you should consider two issues. First, there are currently two versions of Python that are not entirely compatible with each other, the 2 (whose last update is 2.7) and the 3 (currently updated to 3.6). Personally, I recommend working with version 3.6 because it has significant improvements over version 2.7, and most of the packages needed to work in typical economists tasks have already been ported to 3.6.
Second, although Spyder facilitates code editing, more advanced users may prefer PyCharm, an excellent Python editor whose “Community” version can be used for free. This editor makes it much easier to edit programs, because of features such as autocomplete (especially useful when we have not yet memorized Python functions), syntax highlighting (shows keywords in different color, to make it easier to understand the code's logic), and debugger (to partially run a program when it is necessary to find a bug).
The purpose of this note is to illustrate some of the common tasks that economists can do using Python. First, we use numerical techniques to solve two Cournot competition models presented by Miranda and Fackler (2002) using the "CompEcon-python" package (This package was developed by the author and is based precisely on the *CompEcon toolbox* for MATLAB from Miranda and Fackler), which is freely available at Github (Readers interested in the topic of computational economics will find more of these examples in Romero-Aguilar (2016)). Second, it illustrates how to automate the collection of Internet data and its presentation in tables and graphs. Third, some examples of econometric models estimated with Python are shown.
For each of the problems, I provide Python code to solve it, along with brief explanations of how this code works. However, this note is not intended to teach programming in Python because, as mentioned above, there are already many high quality teaching resources for this purpose, including the site of Google developers, the site learnpython, and several online courses at edx. Likewise, in the first two examples, the numerical methods implemented in Python are presented concisely, but readers interested in this topic are advised to consult the textbooks of Miranda and Fackler (2002), Judd (1998), and Press (2007)
Assume the market is controlled by two firms that compete with each other. For this duopoly, the inverse of the demand function is given by \begin{equation*} P(q) = q^{-\alpha} \end{equation*} and both firms face quadratic costs \begin{align*} C_1 &= \tfrac{1}{2}\beta_1q_1^2 \\ C_2 &= \tfrac{1}{2}\beta_2q_2^2 \end{align*}
Firms profits are \begin{align*} \pi_1\left(q_1, q_2\right) &=P\left(q_1+q_2\right)q_1 - C_1\left(q_1\right) \\ \pi_2\left(q_1, q_2\right) &=P\left(q_1+q_2\right)q_2 - C_2\left(q_2\right) \end{align*}
In a Cournot equilibrium, each firm maximizes its profits taking as given the other firm's output. Thus, it must follow that \begin{align*} \frac{\partial \pi_1\left(q_1, q_2\right)}{\partial q_1} &= P\left(q_1+q_2\right) + P'\left(q_1+q_2\right)q_1 - C'_1\left(q_1\right) = 0\\ \frac{\partial \pi_2\left(q_1, q_2\right)}{\partial q_2} &= P\left(q_1+q_2\right) + P'\left(q_1+q_2\right)q_2 - C'_2\left(q_2\right) = 0 \end{align*}
Therefore, equilibrium output levels for this market are given by the solution to this nonlinear equation system \begin{equation} \label{eq:fzero} f\left(q_1, q_2\right) = \begin{bmatrix} \left(q_1+q_2\right)^{-\alpha} - \alpha q_1 \left(q_1+q_2\right)^{-\alpha - 1} - \beta_1q_1 \\ \left(q_1+q_2\right)^{-\alpha} - \alpha q_2 \left(q_1+q_2\right)^{-\alpha - 1} - \beta_2q_2\end{bmatrix} = \begin{bmatrix}0 \\ 0\end{bmatrix}\tag{1} \end{equation}
To find the root of the function defined in (1) we will use Newton's method. In general, this method is applied to the function $f: \Re^n \to \Re^n$ to find some (Notice that, depending on the function, there could be more than one solution, or no solution at all.) value $x^*$ such that $f(x^*)=0$. To that end, we start with a value $x_0 \in \Re^n$ and make the recursion \begin{equation}\label{eq:newton} x_{i+1} = x_i - J^{-1}(x_i) f(x_i) \tag{2} \end{equation} where $J(x_i)$ corresponds to the Jacobian of $f$ evaluated at $x_0$. In theory, following this recursion $x_i$ converges to $x^*$ as long as the $f$ function is continuously differentiable and the initial value $x_0$ is “sufficiently close” to the $x^*$ root.
import numpy as np import matplotlib.pyplot as plt from compecon import NLP, gridmake from compecon.demos import demo
To solve this model computationally, we need to assign values to the parameters, so we set $\alpha=0.625$, $\beta_1=0.6$ and $\beta_2=0.8$.
alpha = 0.625 beta = np.array([0.6, 0.8])
The unknowns in our problem are the firms' output levels, $q_1$ and $q_2$. We define the
market function to tell us total output and resulting price, given the levels of $q_1$ y $q_2$. Notice that both quantities are passed to this function in the
q vector
def market(q): quantity = q.sum() price = quantity ** (-alpha) return price, quantity
Then, we define the
cournot function, returning a two-element tuple: the objective function and its Jacobian matrix, both evaluated in a pair of quantities contained by the
q vector. To make the code easier, notice that the (1) function can be written more succinctly as
\begin{equation*}
f\left(q_1, q_2\right) = \begin{bmatrix}
P + \left(P' - c_1\right)q_1 \\
P + \left(P' - c_2\right)q_2\end{bmatrix}
= \begin{bmatrix}0 \\ 0\end{bmatrix}
\end{equation*}
and its Jacobian matrix is
If we define total output as $Q=q_1 + q_2$, notice also that \begin{equation*} P' = -\alpha\frac{P}{Q} \qquad\text{y que} \qquad P''=-(\alpha+1)\frac{P'}{Q} \end{equation*}
def cournot(q): P, Q = market(q) P1 = -alpha * P/Q P2 = (-alpha - 1) * P1 / Q fval = P + (P1 - beta) * q fjac = np.diag(2*P1 + P2*q - beta) + np.fliplr(np.diag(P1 + P2*q)) return fval, fjac
Next, we compute the equilibrium using Newton's method (equation (2)) to find the root of the
cournot function. We set $q_0 = \begin{bmatrix}0.2 & 0.2\end{bmatrix}'$ as our initial value and iterate until the norm of the change between two successive values of the recursion is less than $10^{-10}$.
q = np.array([0.2, 0.2]) for it in range(40): f, J = cournot(q) step = -np.linalg.solve(J, f) q += step if np.linalg.norm(step) < 1.e-10: break price, quantity = market(q) print(f'\nCompany 1 produces {q[0]:.4f} units, while just five iterations, Newton's method converges to the solution, which Python prints to screen:
Company 1 produces 0.8396 units, while company 2 produces 0.6888 units. Total production is 1.5284 and price is 0.7671
We see that the code has found the equilibrium to this market.
The
compecon package provides the
NLP (non-linear problem) class, useful to solve last problem without the need of coding Newton's algorithm. To use it, we create an instance of
NLP from the
cournot function, and simply call the
newton method, using
q0 as initial value.
q0 = np.array([0.2, 0.2]) cournot_problem = NLP(cournot) q = cournot_problem.newton(q0) price, quantity = market(q) print(f'\nCompany 1 produces {q[0]:.4f} units, while' + f' completing this code block, Python prints the following to screen:
Company 1 produces 0.8396 units, while company 2 produces 0.6888 units. Total production is 1.5284 and price is 0.7671
As expected, we got the same result.
Figure 1 illustrates the problem we just solved, where the axes represent the output levels of each firm. The quasi-vertical white line represents the profit-maximizing output level for firm 1, taking the output of firm 2 as given. Similarly, the quasi-horizontal line represents the profit maximizing output level for firm 2, given firm 1 output. The solution to the problem corresponds to the intersection of these two lines. See also the path to convergence (blue line) from the initial $q_0 = \begin{bmatrix}0.2 & 0.2\end{bmatrix}'$ point to the solution.
n = 100 q1 = np.linspace(0.1, 1.5, n) q2 = np.linspace(0.1, 1.5, n) z = np.array([cournot(q)[0] for q in gridmake(q1, q2).T]).T steps_options = {'marker': 'o', 'color': (0.2, 0.2, .81), 'linewidth': 2.5, 'markersize': 9, 'markerfacecolor': 'white', 'markeredgecolor': 'red'} contour_options = {'levels': [0.0], 'colors': 'white', 'linewidths': 2.0} Q1, Q2 = np.meshgrid(q1, q2) Z0 = np.reshape(z[0], (n,n), order='F') Z1 = np.reshape(z[1], (n,n), order='F') methods = ['newton', 'broyden'] cournot_problem.opts['maxit', 'maxsteps', 'all_x'] = 10, 0, True qmin, qmax = 0.1, 1.3 x = cournot_problem.zero(method='newton') demo.figure("Convergence of Newton's method", '$q_1$', '$q_2$', [qmin, qmax], [qmin, qmax]) plt.contour(Q1, Q2, Z0, **contour_options) plt.contour(Q1, Q2, Z1, **contour_options) plt.plot(*cournot_problem.x_sequence, **steps_options) demo.text(0.85, qmax, '$\pi_1 = 0$', 'left', 'top') demo.text(qmax, 0.55, '$\pi_2 = 0$', 'right', 'center')
To illustrate the implementation of the collocation method for implicit function problems, consider the case of a Cournot oligopoly. In the standard microeconomic model of the firm, the firm maximizes its profits by matching marginal revenue to marginal cost (MC). An oligopolistic firm, recognizing that its actions affect the price, knows that its marginal revenue is $p + q \frac{dp}{dq}$, where $p$ is the price, $q$ the quantity produced, and $\frac{dp}{dq}$ is the marginal impact of the product on the market price. Cournot's assumption is that the company acts as if none of its production changes would provoke a reaction from its competitors. This implies that: \begin{equation} \frac{dp}{dq} = \frac{1}{D'(p)} \tag{3} \end{equation}
where $D(p)$ is the market demand curve.
Suppose we want to derive the firm's effective supply function, which specifies the amount $q = S(p)$ that it will supply at each price. The effective supply function of the firm is characterized by the functional equation \begin{equation} p + \frac{S(p)}{D'(p)} - MC(S(p)) = 0 \tag{4} \end{equation}
for every price $p>0$. In simple cases, this function can be found explicitly. However, in more complicated cases, there is no explicit solution. Suppose for example that demand and marginal cost are given by \begin{equation*} D(p) = p^{-\eta} \qquad\qquad CM(q) = \alpha\sqrt{q} + q^2 \end{equation*}
so that the functional equation to be solved for $S(p)$ is \begin{equation} \label{eq:funcional} \left[p - \frac{S(p)p^{\eta+1}}{\eta}\right] - \left[\alpha\sqrt{S(p)} + S(p)^2\right] = 0 \tag{5} \end{equation}
In equation (5), the unknown is the supply function $S(p)$, which makes (5) and infinite-dimension equation. Instead of solving the equation directly, we will approximate its solution using $n$ Chebyshev polynomials $\phi_i(x)$, which are defined recursively for $x \in [0,1]$ as: \begin{align*} \phi_0(x) & = 1 \\ \phi_1(x) & = x \\ \phi_{k + 1}(p_i) & = 2x \phi_k(x) - \phi_{k-1}(x), \qquad \text{for} k = 1,2, \dots \end{align*}
In addition, instead of requiring that both sides of the equation be exactly equal over the entire domain of $p \in \Re^+$, we will choose $n$ Chebyshev nodes $p_i$ in the interval $[a, b]$: \begin{equation} \label{eq:chebynodes} p_i = \frac{a + b}{2} + \frac{ba}{2}\ cos\left(\frac{n-i + 0.5}{n}\pi\right), \qquad\text{for } i = 1,2, \dots, n \tag{6} \end{equation}
Thus, the supply is approximated by \begin{equation*} S(p_i) = \sum_{k = 0}^{n-1} c_{k}\phi_k(p_i) \end{equation*}
Substituting this last expression in (5) for each of the placement nodes (Chebyshev in this case) results in a non-linear system of $ n $ equations (one for each node) in $ n $ unknowns $ c_k $ (one for each polynomial of Cheybshev), which in principle can be solved by Newton's method, as in the last example. Thus, in practice, the system to be solved is\begin{equation} \label{eq:collocation} \left[p_i - \frac{\left(\sum_{k=0}^{n-1}c_{k}\phi_k(p_i)\right)p_i^{\eta+1}}{\eta}\right] - \left[\alpha\sqrt{\sum_{k=0}^{n-1}c_{k}\phi_k(p_i)} + \left(\sum_{k=0}^{n-1}c_{k}\phi_k(p_i)\right)^2\right] = 0 \tag{7} \end{equation}
for $i=1,2,\dots,n$ and for $k=1,2,\dots,n$.
import numpy as np import matplotlib.pyplot as plt from compecon import BasisChebyshev, NLP, nodeunif from compecon.demos import demo
and set the $\alpha$ and $\eta$ parameters
alpha, eta = 1.0, 3.5
For convenience, we define a
lambda function to represent the demand
D = lambda p: p**(-eta)
We will approximate the solution for prices in the $p\in [\frac{1}{2}, 2]$ interval, using 25 collocation nodes. The
compecon library provides the
BasisChebyshev class to make computations with Chebyshev bases:
n, a, b = 25, 0.5, 2.0 S = BasisChebyshev(n, a, b, labels=['price'], l=['supply'])
Let's assume that our first guess is $S(p)=1$. To that end, we set the value of
S to one in each of the nodes
p = S.nodes S.y = np.ones_like(p)
It is important to highlight that in this problem the unknowns are the $c_k$ coefficients from the Chebyshev basis; however, an object of
BasisChebyshev class automatically adjusts those coefficients so they are consistent with the values we set for the function at the nodes (here indicated by the
.y property).
We are now ready to define the objective function, which we will call
resid. This function takes as its argument a vector with the 25 Chebyshev basis coefficients and returns the left-hand side of the 25 equations defined by (7).
def resid(c): S.c = c # update interpolation coefficients q = S(p) # compute quantity supplied at price nodes return p - q * (p ** (eta+1) / eta) - alpha * np.sqrt(q) - q ** 2
Note that the
resid function takes a single argument (the coefficients for the Chebyshev basis). All other parameters (
Q, p, eta, alpha must be declared in the main script, where Python will find their values.
To use Newton's method, it is necessary to compute the Jacobian matrix of the function whose roots we are looking for. In certain occasions, like in the problem we are dealing with, coding the computation of this Jacobian matrix correctly can be quite cumbersome. The
NLP class provides, besides the Newton's method (which we used in the last example), the Broyden's method, whose main appeal is that it does not require the coding of the Jacobian matrix (the method itself will approximate it).
cournot = NLP(resid) S.c = cournot.broyden(S.c, tol=1e-12, print=True)
Solving nonlinear equations by Broyden's method it bstep change -------------------- 0 0 4.08e-01 1 0 8.95e-02 2 0 1.37e-02 3 0 2.01e-03 4 0 3.36e-04 5 0 8.11e-05 6 0 1.28e-05 7 0 2.80e-06 8 0 4.88e-07 9 0 7.55e-08 10 0 1.84e-08 11 0 1.88e-09 12 0 4.63e-10 13 0 5.88e-11 14 0 1.17e-11 15 0 2.48e-12 16 0 2.91e-13
After 17 iterations, Broyden's method converges to the desired solution. We can visualize this in Figure 3, which shows the value of the function on 501 different points within the approximation interval. Notice that the residual plot crosses the horizontal axis 25 times; this occurs precisely at the collocation nodes (represented by red dots). This figure also shows the precision of the approximation: outside nodes, the function is within $5\times10^{-11}$ units from zero.
One of the advantages of working with the
BasisChebyshev class is that, once the collocation coefficients have been found, we can evaluate the supply function by calling the
S object as if it were a Python function. Thus, for example, to find out the quantity supplied by the firm when the price is 1.2, we simply evaluate
print(S(1.2)), which returns
0.4650. We use this feature next to compute the effective supply curve when there are 5 identical firms in the market; the result is shown in Figure 2.
pplot = nodeunif(501, a, b) demo.figure('Cournot Effective Firm Supply Function', 'Quantity', 'Price', [0, 4], [a, b]) plt.plot(5 * S(pplot), pplot, D(pplot), pplot) plt.legend(('Supply','Demand'))
<matplotlib.legend.Legend at 0x1d4d5ffad30>
p = pplot demo.figure('Residual Function for Cournot Problem', 'Quantity', 'Residual') plt.hlines(0, a, b, 'k', '--', lw=2) plt.plot(pplot, resid(S.c)) plt.plot(S.nodes,np.zeros_like(S.nodes),'r*');
m = np.array([1, 3, 5, 10, 15, 20]) demo.figure('Supply and Demand Functions', 'Quantity', 'Price', [0, 13]) plt.plot(np.outer(S(pplot), m), pplot) plt.plot(D(pplot), pplot, linewidth=4, color='black') plt.legend(['m=1', 'm=3', 'm=5', 'm=10', 'm=15', 'm=20', 'demand']);
In Figure 4 notice how the equilibrium price and quantity change as the number of firms increases.
pp = (b + a) / 2 dp = (b - a) / 2 m = np.arange(1, 26) for i in range(50): dp /= 2 pp = pp - np.sign(S(pp) * m - D(pp)) * dp demo.figure('Cournot Equilibrium Price as Function of Industry Size', 'Number of Firms', 'Price') plt.bar(m, pp);
Oftentimes we need to keep track of some economic indicators. This work usually requires visiting the website of a data provider, looking for the required indicators, downloading the data (possibly in several different files), copying them to a common file, arranging them properly, and only after completing these cumbersome tasks, plotting them. If this work has to be done periodically then it is also necessary to thoroughly document each of these steps so we can replicate them exactly in the future. Needless to say, if it is necessary to do all these tasks with numerous indicators, the work ends up demanding a considerable amount of time and is prone to many errors.
To facilitate this work, we can use Python to download data available in Internet directly, thanks to packages such as pandas-datareader. This is easily done when data providers supply an API ---application program interface--- which specifies how a language like Python can find the desired data.
Let us illustrate this with an example. Suppose we want recent data on economic growth for the member countries of the CMCA. The World Bank provides the relevant data in its “World Database”, which we can read with the
wb module from
pandas_datareader.
from pandas_datareader import wb
To be able to download data from the World Bank, we first need to know the exact code of the indicator we want to read. The first time we do this task we will not know this code, but we can look for it in the World Bank website or more easily from Python itself. For example, to find data on real GDP per capita, we run the following using the
.search function:
wb.search('gdp.*capita.*const').iloc[:,:2]
where the dot followed by an asterisk (.*) indicates that any text in that position is a match. This function returns a data table with information about indicators that match the search criteria. In the preceding line, we use the code
.iloc[:,:2] so that Python only prints the first two columns from that table.
After running that search, we choose the 'NY.GDP.PCAP.KD' indicator, whose description is “GDP per capita (constant 2010 US\$)”. We define a variable with a list of country codes of the CMCA countries:
paises = ['CR', 'DO', 'GT', 'HN', 'NI', 'SV']
and we proceed to reed data from 1991:
datos = wb.download(indicator='NY.GDP.PCAP.KD', country=paises,start=1991, end=2016)
It is also possible to read data for more than one indicator in a single call to the
wb.download function, writing their codes in a list (just like we did to read data on all six countries at once). In any case, we get a data table in panel format, where each columns corresponds to one of the indicators. For our example in particular, where we only read one indicator, it would be useful if the table was arranged so that each row correspond to a year and each column to a country. We can achieve it with this instruction:
GDP = datos.reset_index().pivot('year','country')
Once data is arrange this way, it is very easy to compute growth for all countries in a single step:
GROWTH = 100 * GDP.pct_change()
or to generate a formatted data table to be included in a \LaTeX document
#GROWTH.tail(6).round(2).to_latex('micuadro.tex') print(GROWTH.tail(6).round(2).to_latex())
\begin{tabular}{lrrrrrr} \toprule {} & \multicolumn{6}{l}{NY.GDP.PCAP.KD} \\ country & Costa Rica & Dominican Republic & El Salvador & Guatemala & Honduras & Nicaragua \\ year & & & & & & \\ \midrule 2011 & 3.06 & 1.81 & 3.34 & 1.94 & 1.89 & 5.03 \\ 2012 & 3.59 & 1.42 & 2.34 & 0.80 & 2.24 & 5.24 \\ 2013 & 1.13 & 3.59 & 1.89 & 1.54 & 0.99 & 3.72 \\ 2014 & 2.40 & 6.35 & 1.49 & 2.03 & 1.29 & 3.60 \\ 2015 & 2.55 & 5.79 & 1.87 & 2.03 & 2.08 & 3.60 \\ 2016 & 3.10 & 5.41 & 2.06 & 1.04 & 2.02 & 3.50 \\ \bottomrule \end{tabular}
In last instruction, the
.tail(6) part indicates that we only want the last six observations, while the
.to_latex('micuadro.tex') part exports that table to a file named 'micuadro.tex', which can later te included in a document. The result of this code will look similar to this:
GROWTH.tail(6).round(2)
GROWTH.columns = paises GROWTH.plot();
GROWTH.plot(subplots=True, layout=[2,3], sharey=True);
where we have specified that each time series should be plotted separately (
subplots=True), be arranged in two rows and three columns (
layout=[2,3]), and all subplots must have the same “y” axis (
sharey=True, to facilitate country comparisons).
The Python
statsmodels package enable the estimation of many types of econometric models, although not as many as can be estimated using R. A simple illustration is the estimation of a Keynesian consumption function,
\begin{equation*}
\ln(c_t) = \beta_0 + \beta_1 \ln(y_t) + \epsilon_t
\end{equation*}
where $c_t$ stands for consumption, $y_t$ income, $\epsilon$ a stochastic shock. In this case $\beta_1$ corresponds to the income elasticity of consumption.
Just like in the previous example, we will use
pandas-datareader to import data from Internet. In this example we also import the
log function from the
numpy package to compute the logarithm of the data, as well as the
formula.api module form
statsmodels to estimate the model.
import pandas_datareader.data as web from numpy import log import statsmodels.formula.api as smf
Once this is done, we are ready to import data. In this example, we use quarterly data on consumption and production in the United States, available in FRED, a database from the Federal Reserve Bank of Saint Louis. For “consumption” we use the “PCEC” (Personal Consumption Expenditures) series, and for “income” we use “GDP” (Gross Domestic Product).
usdata = web.DataReader(['PCEC','GDP'],'fred', 1947, 2017)
After executing this instuction, the
usdata variable points to a
pandas data table, in which each column corresponds to a variable and each row to a quarter. We now estimate the model by ordinary least squares (
.ols) and print a summary of the results
mod = smf.ols('PCEC ~ GDP', log(usdata)).fit() print(mod.summary())
OLS Regression Results ============================================================================== Dep. Variable: PCEC R-squared: 1.000 Model: OLS Adj. R-squared: 1.000 Method: Least Squares F-statistic: 6.154e+05 Date: Sat, 28 Jul 2018 Prob (F-statistic): 0.00 Time: 18:27:34 Log-Likelihood: 584.98 No. Observations: 281 AIC: -1166. Df Residuals: 279 BIC: -1159. Df Model: 1 Covariance Type: nonrobust ============================================================================== coef std err t P>|t| [0.025 0.975] ------------------------------------------------------------------------------ Intercept -0.6712 0.010 -64.053 0.000 -0.692 -0.651 GDP 1.0268 0.001 784.505 0.000 1.024 1.029 ============================================================================== Omnibus: 51.339 Durbin-Watson: 0.075 Prob(Omnibus): 0.000 Jarque-Bera (JB): 85.582 Skew: 1.027 Prob(JB): 2.61e-19 Kurtosis: 4.758 Cond. No. 47.1 ============================================================================== Warnings: [1] Standard Errors assume that the covariance matrix of the errors is correctly specified.
Notice that the
.ols function takes two arguments, the formula specifying the model, and the name of the data table containing the variables. In this code block we specify the data as
log(usdata), which tells Python that we want the logarithm of the data, saving us the task of generating another data table with the transformed data beforehand (as would be necessary in, for example, Stata).
Alternatively, that line can also be written as
mod = smf.ols('log(PCEC) ~ log(GDP)', usdata).fit()
which is convenient in cases where not all variables must be transformed.
As it is expected in a regression of trending time series, the $R^2$ statistic is very close to one, and the Durbin-Watson statistic points to the high possibility of autocorrelation in the residuals. This document does not aim at being a guide of best practices in econometrics, but let us consider one last model in which consumption growth depends on income growth: \begin{equation*} \Delta\ln(c_t) = \beta_0 + \beta_1 \Delta\ln(y_t) + \epsilon_t \end{equation*}
with we estimate in Python with
smf.ols('PCEC ~ GDP', log(usdata).diff()).fit().summary()
We notice that now the $R^2$ is no longer close to one, and that the Durbin-Watson statistic is closer to 2.0, indicating lack of autocorrelation.
This last line of code, where we estimate the model with first-differenced data, highlights one of the reasons why code written in Python is so concise: it is not always necessary to store intermediate results in variable, because we can simply chain sevearal operations. In the case at hand, we have specified a model (
.ols), estimated it (
.fit) and gotten a table summarizing the results (
.summary). Similarly, we have computed the logarithm (
log) of data in
usdata table, and to its result we have computed its first-difference (
.diff). To better appreciate how concise this code is, let us compare that line with the following block, which takes 8 lines of code to perform the same operations:
usdata['lPCEC'] = log(usdata['PCEC']) usdata['lGDP'] = log(usdata['GDP']) usdata['dlPCEC'] = usdata['lPCEC'].diff() usdata['dlGDP'] = usdata['lGDP'].diff() model = smf.ols('dlPCEC ~ dlGDP', usdata) results = model.fit() table = results.summary() print(table)
With results from last Table at hand, we could predict that a one percentage point (p.p.) increase in GDP growth would lead to a 0.618 p.p. increase in consumption growth. However, given that the data sample covers such a long period (nearly 70 years of quarterly observation), it is reasonable to wonder whether the parameters in this model are constant, given that several structural changes could have occurred along these years. One way to evaluate such posibility is to estimate the model with a rolling sample. In particular, we are going to estimate this model with 24 quarterly observations rolling window, changing the sample by one quarter in every step.
In this case, since we are going to need growth data many times, it is more efficient to compute growth data only once and store it in a
growth variable. With the
[1:] code we are dropping the first observation, which we lose when we compute the first-order difference (
.diff). Furthermore, we use the
.shape property from the table to find out how many observations
T we have, and then we set the window range to
h=24 observations:
growth = (100*log(usdata).diff())[1:] T, nvar = growth.shape h = 24
To faciliate next step, we define función
window_beta1, which takes as its only argument the number of the last observation to be included in the estimation, and returns the value of the estimated GDP coefficient
def window_beta1(k): return smf.ols('PCEC~GDP',growth[k-h:k]).fit().params['GDP']
With this, we are ready to estimate the model many times, adding the results to the
growth table as the
beta1 “indicator”. Plotting the results we get Figure 8, where we clearly see that the effect of GDP growth on consumption growth is quite unstable, and thus the predictions made with the simple model could be very poor.
growth.loc[h-1:,'beta1'] = [window_beta1(k) for k in range(h,T+1)] growth[['beta1']].plot();
To conclude this note, I let the reader know that the original (PDF) version of this document is an example of what is known as a “dynamic document”, in the sense that it was generated by interweaving \LaTeX code with Python code. The main benefit of this is that if in the future we need to update the previous examples (say to use updated data in the tables and graphs), it would suffice to rerun the code that generated that document (similarly to what we would do with this Jupyter notebook). It will not be necessary to use an Internet browser to get data, nor to copy-and-paste the graphs in the document.
Dynamic documents are extremely useful, because they enable significant time savings in the updating of periodic reports. Readers who are interested in learning how to create one of these documents will need to know \LaTeX and to review the pythontex documentation.
Judd, Kenneth L. (1998). Numerical Methods in Economics. MIT Press. isbn: 978-0-262- 10071-7.
Miranda, Mario J. and Paul L. Fackler (2002). Applied Computational Economics and Finance. MIT Press. isbn: 0-262-13420-9.
Press, William H., Saul A. Teukolsky, and William T. Vetterling and Brian P. Flannery (2007). Numerical Recipes: The Art of Scientific Computing. 3rd ed. Cambridge University Press. isbn: 978-0521880688.
Romero-Aguilar, Randall (2016). CompEcon-Python . url:. | https://nbviewer.jupyter.org/github/randall-romero/teaching-materials/blob/master/python/Python%20for%20Economists.ipynb | CC-MAIN-2019-43 | refinedweb | 5,376 | 55.74 |
The incredible growth in new technologies like machine learning has helped web developers build new AI applications in ways easier than ever. In the present day, most AI enthusiasts and developers in the field leverage Python frameworks for AI & machine learning development. But looking around, one may also find that JavaScript-based frameworks are also being implemented in AI.
While the Python programming language feeds most machine learning frameworks, JavaScript has not lagged behind. This is the reason why JavaScript developers are using a number of frameworks for training and implementing machine learning models in the browser.
In this blog, we will discuss various top machine learning JavaScript frameworks that you must consider for you’re seeking business growth through AI & machine learning.
1. Brain.js
Brain.js is an open-source JavaScript library used to run and process neural networks. It is particularly useful for developers venturing into machine learning, and would best for those among them already acquainted with the complexities of JavaScript.
Brain.js is generally used with Node.js or a client-side browser to train machine learning models.
To set up Brain.js, use the following code:
npm install brain.js
However, to install the Naive Bayesian classifier, use the following code:
npm install classifier
You can also include library in the browser using the code given below:
<script src=""></script>
2. ML.js
ML.js primarily aims to make machine learning accessible to a broader audience which includes creators, students and artists. It’s a JavaScript library that provides algorithms and tools within the browser working on top of Tensorflow.js without any external dependency.
First, you need to set up the ML.js tool using the following code:
<script src=""></script>
Here, I have listed the machine learning algorithms which are supported:
Supervised learning includes:
K-Nearest Neighbor (KNN)
Simple linear regression
Naive Bayes
Random forest
Decision tree: CART
Partial least squares (PLS)
Logistic regression
Unsupervised learning includes:
K-means clustering
Principal component analysis (PCA)
3. Keras.js
Using KeraJS, you can easily run Keras models in the browser with support of GPU via WebGL. These models can also be run in Node.js but only in CPU mode.
I have listed out some Keras models that can be run in the browser:
Bidirectional LSTM for IMDB sentiment classification
DenseNet-121, trained on ImageNet
50-layer residual network, trained on ImageNet
Convolutional variational autoencoder, trained on MNIST
Basic convnet for MNIST
Auxiliary classifier generative adversarial networks (AC-GAN) on MNIST
Inception v3, trained on ImageNet
SqueezeNet v1.1, trained on ImageNet
4) Limdu.js
It is a machine learning framework used for Node.js.
Limdu.js is ideally suited for language processing chatbots and other dialog systems.
You can install it by using the following command:
npm install limdu
It supports some of the following:
Feature engineering
Binary classification
Multi-label classification
SVM
5) Tensorflow.js (Earlier known as deeplearn.js)
It is an open source machine learning JavaScript library maintained by Google.
It can be used for different purposes like understanding ML models, training neural networks in the browser, for educational purposes, etc.
Tensorflow.js allows training of machine learning models in JavaScript and facilitates its subsequent deployment in the browser or on Node.js.
By using this framework, you can run pre-trained models in an inference model. In fact, one can write the code in Typescript (ES6 JavaScript or ES5 JavaScript).
You can quickly start by including the following code within a header tag in the HTML file and writing JS programs to build the model.
<script src=""></script>
PropelJS
It is a machine learning JavaScript library that provides a numpy infrastructure backed by GPUs, especially for scientific computing. It can be used for both the browser and the NodeJS applications.
The following is the configuration code for the browser:
<script src=""></script>
For a nodejs application, you need to use the following code:
npm install propel
import { grad } from "propel";
7)
8) ConvNetJS
This JavaScript library is used to train neural networks(deep learning models) entirely in the browser. The NodeJs app can use this library too. To start with it, you need to get its minified version using ConvNetJS minified library.
Use the following code:
<script src="convnet-min.js"></script>
Conclusion
So far we have seen the top 8 JavaScript machine frameworks which you must consider for your web development in 2019.
Obviously, JavaScript is not becoming the language of choice for Machine Learning, far from it! However, common problems, such as performance, Matrix manipulations and the abundance of useful libraries, are slowly being overcome, closing the gap between common applications and using machine learning.
Hence, the above-listed machine learning JavaScript libraries will be helpful if you’re looking for an alternative to python frameworks for machine learning development. Moreover, I invite you to suggest more libraries or useful projects which can be added to the list. | https://hackernoon.com/top-javascript-based-machine-learning-frameworks-and-libraries-lz92j32w4 | CC-MAIN-2019-43 | refinedweb | 821 | 53.51 |
How To Python Flask Tutorial
Here is a how to setup Flask tutorial as of recent. I recommend following along in Linux/Mac/WSL, as my setup will be mostly done through the terminal. I will also assume some basic Python knowledge, as this isn’t a Python tutorial. This tutorial won’t be going over how to make a specific web app, as I think the best way to truly learn a technology is to build something for yourself. Rather than be stuck in a tutorial loop, learning the concepts and applying them to your own ideas would help you grow faster in my opinion. Use the docs to find things that weren’t covered in this tutorial that you need.
Basic Setup
Most of my setup will be coming from the docs on making the app installable. This will make flask able to run from any directory, instead of being limited to only the root directory. First we will setup the environment and install our dependencies. So in our terminal:
mkdir myproject
cd myproject
python -m venv venv
source venv/bin/activate
Make a file called
.env in the directory
myproject/, this is where we store our environment variables.
FLASK_APP=flaskr
FLASK_ENV=development
Make a file called
setup.py in the same directory.
from setuptools import find_packages, setupsetup(
name='flaskr',
version='1.0.0',
packages=find_packages(),
include_package_data=True,
zip_safe=False,
install_requires=[
'flask',
'python-dotenv',
'Flask-Migrate',
],
)
python-dotenv will auto load our
.env file when running the flask app.
Flask-Migrate is a package that helps with setting up and migrating our database without doing any manual work.
Install dependencies:
pip install -e .
The
-e flag makes the
flaskr folder editable pretty much. Without the
-e flag, any edits would not be reflected. As mentioned earlier, this method allows us to run
flask from any subdirectories of the project. Not doing it this way would mean that
flask can only be executed from the root folder.
Getting Flask to Run
Let’s start off by getting the app to run, and go over the concepts afterwards. Note that this is the way I organize my Flask app, feel free to change whatever you think would work better. Starting from our
myproject/ folder, run the following commands to get our folder structure going:
mkdir flaskr
touch flaskr/__init__.py
Now let’s edit
__init__.py . Here is a basic snippet from the documentation with some slight tweaks of mine.
import os
from flask import Flask
from .models import db
from flask_migrate import Migratedef create_app(test_config=None):
app = Flask(__name__, instance_relative_config=True)
app.config.from_mapping(
SECRET_KEY='dev',
DATABASE=os.path.join(app.instance_path, 'flaskr.sqlite'),
SQLALCHEMY_TRACK_MODIFICATIONS=False,
)# Database settings
app.config["SQLALCHEMY_DATABASE_URI"] = "sqlite:///sqlite.db"
db.init_app(app)
migrate = Migrate(app, db)if test_config is None:
app.config.from_pyfile('config.py', silent=True)
else:
app.config.from_mapping(test_config) try:
os.makedirs(app.instance_path)
except OSError:
pass# Register views
from .home import views as home_views
app.register_blueprint(home_views.bp) return app
In the
flaskr/ folder, we will setup the database:
touch flaskr/models.py
Edit
models.py :
from flask_sqlalchemy import SQLAlchemy
db = SQLAlchemy()class MyModel(db.Model):
__tablename__ = "mymodel"
# This id is required on every model, as it is the pk
id = db.Column(db.Integer, primary_key=True)
name = db.Column(db.String(128))# This is the display name for the model when queried
def __repr__(self):
return f"{self.name}"
In the
flaskr/ folder, lets make our
home app and it’s views:
mkdir flaskr/home
touch flaskr/home/__init__.py
touch flaskr/home/views.py
Edit the
flaskr/home/views.py :
from flask import render_template, Blueprint, request
from flaskr.models import db, MyModel
bp = Blueprint("home", __name__, url_prefix="")@bp.route("/", methods=("GET",))
def home():
q = MyModel.query.all()
new = MyModel(name=f"model no. {len(q)}")
db.session.add(new)
db.session.commit()
context = {
"models": MyModel.query.all()
}
return render_template("home/index.html", context=context)@bp.route("/mymodel/<int:pk>", methods=("GET",))
def func_mymodel(pk):
q = MyModel.query.filter_by(id=pk).first()
context = {
"model_query": q
}
return render_template("home/index.html", context=context)
Let’s make our html we are trying to display, in the
flaskr/ folder:
mkdir -p flaskr/templates/home
touch flaskr/templates/home/index.html
Edit the
flaskr/templates/home/index.html:
<h1>Hello Flask</h1>
{% for model in context["models"] %}
<p>{{ model.name }}</p>
{% endfor %}{% if context["model_query"] %}
<h1>Query:</h1>
<p>{{ context["model_query"].name }}</p>
{% endif %}
Now that we have organized our Flask app’s code, we can init the database and run the webserver:
flask db init
flask db migrate
flask db upgrade
flask run
Go to
localhost:5000 on your browser, and you should see a “Hello Flask” heading. You should see that every time you go to the home url, a new model should be created. Going to the link, should bring up a single model name only.
Folder Structure
Flask lets us have a good amount of freedom when it comes to customizing our folders and files structure. I personally like to keep things similar to Django, as that is what I use the most. Here is my layout for a single application on our flask:
myproject/
|--.env
|--setup.py
|--venv/
|--flaskr/
| |--__init__.py
| |--models.py
| | |--__init__.py
| | |--views.py
| |--templates/
| |--static/
__init__.py
This file will contain our entire app settings before it gets created. Check the docs on making this here. Some notable settings are registering views (our endpoints) with blueprints, registering custom commands, and database settings.
home/ (the app directory)
This directory is like when running
manage.py createapp home in Django. If you’re not familiar with Django, it is a way to organize your app’s functionalities into folders. For example, you should make a folder called
users/ if your app will have users and put related functions such as the models for users in that folder. Instead of
home/ , it could be
posts/ if you’re making a blog app. In the
posts/ folder, you would have a
views.py that has all endpoints that related to blog posts. This way it is organized, and also easier to debug since you know where to look if your posts has a problem.
templates/
This is where your HTML goes, you can reference them in views like a path with
/templates as the root. So if there is a folder in
/templates called
/home like in our demo, it would be
home/index.html .
static/
This is where your css, js, and images go for the most part. You can reference static files like this:
<link rel="stylesheet" type="text/css" href="{{ url_for('static', filename='css/style.css') }}">
Database: Models, Setup, and Wiping
Model attribute classes and parameters can be found in the docs. These docs also show how to do one-to-many and many-to-many relationships. Note that the
flask_sqlalchemy docs are minimal because it is built on
sqlalchemy, so the original docs on
sqlalchemy will give more detail on more complex functions and methods.
We are using the
Flask-Migrate package, which we call in our
create_app in the main
flaskr/__init__.py file. This lets us manage our changes to our database very easily, in a very similar way to how Django does it. Here are the 3 main commands we use for database management:
flask db init — Creates our
migrations/ folder if it does not exist
flask db migrate
flask db upgrade
— Run
migrate after making any changes to your model classes in
models.py — Run
upgrade after running
migrate to apply changes to the database
— Coming from Django:
migrate is similar to
makemigrations, and
upgrade is similar to
migrate in Django.
After running
flask db migrate, it will automatically create a sqlite database file as specified in our
__init__.py. Anytime we add or change a model class in our
models.py file, we would run
flask db migrate and
flask db upgrade .
Wiping the Database
If you want to wipe the entire database: just delete the sqlite file which we named
sqlite.db and run
flask db migrate; flask db upgrade; .
Wiping Migrations
If you want to start the migrations over: delete the sqlite file which we named
sqlite.db , and then delete everything from
migrations/versions/ . Then run
flask db migrate; flask db upgrade; to start with a fresh initial migration file.
Database: Read, Write, Edit
Now that we know how to setup our database, we need to know how to manipulate it through Flask’s ORM. I recommend using
flask shell in order to go into interactive mode for learning the syntax for this ORM.
# writing
from flaskr.models import db, MyModel
new = MyModel(name="hello")
db.session.add(new)
db.session.commit()
# querying
q = MyModel.query.all()
print(q)
filter = MyModel.query.filter_by(name="hello")
print(filter)
# updating
update = q[0]
update.name = "bye"
db.session.add(update)
db.session.commit()
# deleting
db.session.delete(update)
db.session.commit()
Displaying a Page: Views
Views contain our endpoints for the web app. Below is the minimal code we need in order to get a view up.
# in flaskr/home/views.py
from flask import render_template, Blueprint
bp = Blueprint("home", __name__, url_prefix="")@bp.route("/", methods=("GET",))
def home():
return render_template("home/index.html")
Endpoint Anatomy
The blueprint sets the view’s name, and also the url_prefix. The format for url_prefix is:
localhost:5000/[URL_PREFIX]/[ROUTE]/ . So if our blueprint had
url_prefix="/home" , and our route had
@bp.route(“/flask”) . Our endpoint would be
localhost:5000/home/flask .
Namespace Anatomy
The blueprint also lets us name our view’s namespace, meaning how we reference the view when using something like
url_for . The format for namespace is
“blueprint_name.function_name”. So if our blueprint was like this:
bp = Blueprint(“flaskhome”, __name__, url_prefix=””) , our url_for becomes
url_for("flaskhome.home") .
Parameters
Using parameters just requires 2 extra variables to add to our route:
@bp.route("/<int:pk>", methods=("GET",))
def home(pk):
q = MyModel.query.get(pk)
context = {
"mymodel": q
}
return render_template("home/index.html", context=context)
You can see in this example, we take an integer parameter to use for querying the id of a model. So the endpoint
localhost:5000/2 would return the model with the id 2 back to us.
Registering Views
We register our view’s blueprints in the
__init__.py .
# from the flaskr/__init__.py from the demo code above
from .home import views as home_views
app.register_blueprint(home_views.bp)
HTML Templating
Static files and urls for views:
# this will get static/css/style.css
{{ url_for('static', filename='css/style.css') }}
# getting a view's url
{{ url_for("home.function_name") }}
# getting a view's url with arguments
{{ url_for("home.function_name", name=name) }}
Note that the
home in this case is the blueprint’s name. If you check the example from above, you will see that the home corresponds to the view’s code:
bp = Blueprint(“home”, __name__, url_prefix=””) .
For loops
{% for i in context[“mylist”] %}
<p>{{ i }}</p>
{% endfor %}
If statements
{% if context[“mybool”] %}
<p>{{ context[“mybool”] }}</p>
{% endif %}
Referencing HTML Files Example
# in templates/navbar.html
<navbar>
<ul>
<li>Navbar Home</li>
</ul>
</navbar>
We will be using
{% include %} to display our navbar. This way we can write 1 line of code instead of rewriting the navbar on every other html file we create.
# in templates/index.html
<html>
<head>
<title>Myproject</title>
</head>
<body>
<!-- Including just adds that .html to here -->
{% include "navbar.html" %}
{% block content %}
<!-- Anything extending index.html will be shown between here -->
{% endblock %}
</body>
</html>
Our
index.html will be the base of almost every other html files we create, as it contains the layout of an html file. We will use
{% block content %}{% endblock %} so that we can extend
index.html to our main content.
# in templates/contents.html
{% extends "index.html" %}
{% block content %}
<h1>Hello Flask, I am in contents.html</h1>
{% endblock %}
Using
{% extends %} will display everything from
index.html , except whatever is in between
{% block content %} and
{% endblock %} . We add the same blocks in our
content.html in order to complete the block, so anything in between
{% block content %} and
{% endblock %} will be displayed on our
index.html
For more complex templating, search for
jinja . As that is what flask uses to template in HTML.
Getting User Input: Forms
In
templates/home/index.html
<form action="post" action="{{ url_for('home.func_form') }}">
<label for="pk">Model ID:</label>
<input id="pk" type="text" name="mymodel_pk">
<input type="submit" value="Submit">
</form>
In
home/views.py
from flask import render_template, Blueprint, request
bp = Blueprint("home", __name__, url_prefix="")@bp.route("/", methods=("GET",))
def home():
return render_template("home/index.html")@bp.route("/myform", methods=("POST",))
def func_form():
pk = request.form["mymodel_pk"]
context = {
"mymodel_pk": pk
}
return render_template("home/page.html", context=context)
In
templates/home/page.html
<p>{{ context["mymodel_pk"] }}</p>
This code example should show the rough idea on how to retrieve user input from a form. You can use the
pk in
func_form() to query the database to edit or delete that data, or use the user input to create new data and add it to the database.
Flask Commands Overview
Here are the main commands used in flask:
Database Management:
flask db init, flask db migrate, flask db upgrade Mainly migrate and upgrade are used throughout development after using init.
Flask Env’s Interactive Mode:
flask shell
Use this when you want to interact with the database for testing queries and methods.
Notable Go-to Resources
This framework is great for starting a small personal project, as not all projects need scalability. I hope these concepts help jump start whatever you are trying to do in Flask.
Like my content? — Support Me — Github — Twitter — Medium | https://antisyllogism.medium.com/python-flask-tutorial-463c6b2ba1bc?source=post_internal_links---------4---------------------------- | CC-MAIN-2021-49 | refinedweb | 2,284 | 59.6 |
In this project, you’ll create a web page that displays sensor readings in a plot that you can access from anywhere in the world. In summary, you’ll build an ESP32 or ESP8266 client that makes a request to a PHP script to publish sensor readings in a MySQL database.
As an example, we’ll be using a BME280 sensor connected to an ESP board. You can modify the code provided to send readings from a different sensor or use multiple boards.
To create this project, you’ll use these technologies:
- ESP32 or ESP8266 programmed with Arduino IDE
- Hosting server and domain name
- PHP script to insert data into MySQL database and display it on a web page
- MySQL database to store readings
- PHP script to plot data from database in charts
You might also find helpful reading these projects:
- ESP32/ESP8266 Insert Data into MySQL Database using PHP and Arduino IDE
- ESP32/ESP8266 Plot Sensor Readings in Real Time Charts – Web Server
Watch the Video Demonstration
To see how the project works, you can watch the following video demonstration:
1. Hosting Your PHP Application and MySQL Database
The goal of this project is to have your own domain name and hosting account that allows you to store sensor readings from the ESP32 or ESP8266. You can visualize the readings from anywhere in the world by accessing your own server domain. Here’s a high level overview of the project:
I recommend using one of the following hosting services that can handle all the project requirements:
- Bluehost (user-friendly with cPanel): free domain name when you sign up for the 3-year plan. I recommend choosing the unlimited websites option;
- Digital Ocean: Linux server that you manage through a command line. I only recommended this option for advanced users.
Those two services are the ones that I use and personally recommend, but you can use any other hosting service. Any hosting service that offers PHP and MySQL will work with this tutorial. If you don’t have a hosting account, I recommend signing up for Bluehost.
Get Hosting and Domain Name with Bluehost »
When buying a hosting account, you’ll also have to purchase a domain name. This is what makes this project interesting: you’ll be able to go your domain name () and see your ESP readings.
If you like our projects, you might consider signing up to one of the recommended hosting services, because you’ll be supporting our work.
Note: you can also run a LAMP (Linux, Apache, MySQL, PHP) server on a Raspberry Pi to access data in your local network. However, the purpose of this tutorial is to publish readings in your own domain name that you can access from anywhere in the world. This allows you to easily access your ESP readings without relying on a third-party IoT platform.
2. Preparing Your MySQL Database
After signing up for a hosting account and setting up a domain name, you can login to your cPanel or similar dashboard. After that, follow the next steps to create your database, username, password and SQL table.
Creating a database and user
Open the “Advanced” tab:
1. Type “database” in the search bar and select “MySQL Database Wizard”.
2. Enter your desired Database name. In my case, the database name is esp_data. Then, press the “Next Step” button:
Note: later you’ll have to use the database name with the prefix that your host gives you (my database prefix in the screenshot above is blurred). I’ll refer to it as example_esp_data from now on.
3. Type your Database username and set a password. You must save all those details, because you’ll need them later to establish a database connection with your PHP code.
That’s it! Your new database and user were created successfully. Now, save all your details because you’ll need them later:
- Database name: example_esp_data
- Username: example_esp_board
- Password: your password
Creating a SQL table
After creating your database and user, go back to cPanel dashboard and search for “phpMyAdmin”. Sensor ( id INT(6) UNSIGNED AUTO_INCREMENT PRIMARY KEY, Sensor in the example_esp_data database as shown in the figure below:
3. PHP Script HTTP POST – Insert Data in MySQL Database
In this section, we’re going to create a PHP script that receives incoming requests from the ESP32 or ESP8266 and inserts the data into a MySQL database.
If you’re using a hosting provider with cPanel, you can search for “File Manager”:
Then, select the public_html option and press the “+ File” button to create a new .php file.
Note: if you’re following this tutorial and you’re not familiar with PHP or MySQL, I recommend creating these exact files. Otherwise, you’ll need to modify the ESP sketch provided with different URL paths.
Create a new file in /public_html with this exact name and extension: post-data.php
Edit the newly created file (post-data.php) and copy the following snippet:
<?php /* Rui Santos Complete project details at Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files. The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software. */ $servername = "localhost"; // REPLACE with your Database name $dbname = "REPLACE_WITH_YOUR_DATABASE_NAME"; // REPLACE with Database user $username = "REPLACE_WITH_YOUR_USERNAME"; // REPLACE with Database user password $password = "REPLACE_WITH_YOUR_PASSWORD"; // Keep this API Key value to be compatible with the ESP32 code provided in the project page. If you change this value, the ESP32 sketch needs to match $api_key_value = "tPmAT5Ab3j7F9"; $api_key = $value1 = $value2 = $value3 = ""; if ($_SERVER["REQUEST_METHOD"] == "POST") { $api_key = test_input($_POST["api_key"]); if($api_key == $api_key_value) { $value1 = test_input($_POST["value1"]); $value2 = test_input($_POST["value2"]); $value3 = test_input($_POST["value3"]); // Create connection $conn = new mysqli($servername, $username, $password, $dbname); // Check connection if ($conn->connect_error) { die("Connection failed: " . $conn->connect_error); } $sql = "INSERT INTO Sensor (value1, value2, value3) VALUES ('" . $value1 . "', '" . $value2 . "', '" . $value3 . "')"; if ($conn->query($sql) === TRUE) { echo "New record created successfully"; } else { echo "Error: " . $sql . "<br>" . $conn->error; } $conn->close(); } else { echo "Wrong API Key provided."; } } else { echo "No data posted with HTTP POST."; } function test_input($data) { $data = trim($data); $data = stripslashes($data); $data = htmlspecialchars($data); return $data; }
Before saving the file, you need to modify the $dbname, $username and $password variables with your unique details:
// Your Database name $dbname = "example_esp_data"; // Your Database user $username = "example_esp_board"; // Your Database user password $password = "YOUR_USER_PASSWORD";
After adding the database name, username and password, save the file and continue with this tutorial. If you try to access your domain name in the next URL path, you’ll see the message:
4. PHP Script – Visualize Database Content in a Chart
Create another PHP file in the /public_html directory that will plot the database content in a chart on a web page. Name your new file: esp-chart.php
Edit the newly created file (esp-chart.php) and copy the following. --> <?php $ <script src=""></script> <style> body { min-width: 310px; max-width: 1280px; height: 500px; margin: 0 auto; } h2 { font-family: Arial; font-size: 2.5rem; text-align: center; } </style> <body> <h2>ESP Weather Station</h2> <div id="chart-temperature" class="container"></div> <div id="chart-humidity" class="container"></div> <div id="chart-pressure" class="container"></div> <script> var value1 = <?php echo $value1; ?>; var value2 = <?php echo $value2; ?>; var value3 = <?php echo $value3; ?>; var reading_time = <?php echo $reading_time; ?>; var chartT = new Highcharts.Chart({ chart:{ renderTo : 'chart-temperature' }, title: { text: 'BME280 Temperature' }, series: [{ showInLegend: false, data: value1 }], plotOptions: { line: { animation: false, dataLabels: { enabled: true } }, series: { color: '#059e8a' } }, xAxis: { type: 'datetime', categories: reading_time }, yAxis: { title: { text: 'Temperature (Celsius)' } //title: { text: 'Temperature (Fahrenheit)' } }, credits: { enabled: false } }); var chartH = new Highcharts.Chart({ chart:{ renderTo:'chart-humidity' }, title: { text: 'BME280 Humidity' }, series: [{ showInLegend: false, data: value2 }], plotOptions: { line: { animation: false, dataLabels: { enabled: true } } }, xAxis: { type: 'datetime', //dateTimeLabelFormats: { second: '%H:%M:%S' }, categories: reading_time }, yAxis: { title: { text: 'Humidity (%)' } }, credits: { enabled: false } }); var chartP = new Highcharts.Chart({ chart:{ renderTo:'chart-pressure' }, title: { text: 'BME280 Pressure' }, series: [{ showInLegend: false, data: value3 }], plotOptions: { line: { animation: false, dataLabels: { enabled: true } }, series: { color: '#18009c' } }, xAxis: { type: 'datetime', categories: reading_time }, yAxis: { title: { text: 'Pressure (hPa)' } }, credits: { enabled: false } }); </script> </body> </html>
After adding the $dbname, $username and $password save the file and continue with this project.
// Your Database name $dbname = "example_esp_data"; // Your Database user $username = "example_esp_board"; // Your Database user password $password = "YOUR_USER_PASSWORD";
If you try to access your domain name in the following URL path, you’ll see the following:
That’s it! If you see three empty charts in your browser, it means that everything is ready. In the next section, you’ll learn how to publish your ESP32 or ESP8266 sensor readings.
To build the charts, we’ll use the Highcharts library. We’ll create three charts: temperature, humidity and pressure over time. The charts display a maximum of 40 data points, and a new reading is added every 30 seconds, but you change these values in your code.
5. Preparing Your ESP32 or ESP8266
This project is compatible with both the ESP32 and ESP8266 boards. You just need to assemble a simple circuit and upload the sketch provided to insert temperature, humidity, pressure and more into your database every 30 seconds.
Parts Required
For this example we’ll get sensor readings from the BME280 sensor. Here’s a list of parts you need to build the circuit for this project:
- ESP32 board (read Best ESP32 dev boards)
- module we’re using communicates via I2C communication protocol, so you need to connect it to the ESP32 or ESP8266 I2C pins.
BME280 wiring to ESP32
The ESP32 I2C pins are:
- GPIO 22: SCL (SCK)
- GPIO 21: SDA (SDI)
So, assemble your circuit as shown in the next schematic diagram (read complete Guide for ESP32 with BME280).
Recommended reading: ESP32 Pinout Reference Guide
BME280 wiring to ESP8266
The ESP8266 I2C pins are:
- GPIO 5 (D1): SCL (SCK)
- GPIO 4 (D2): SDA (SDI)
Assemble your circuit as in the next schematic diagram if you’re using an ESP8266 board (read complete Guide for ESP8266 with BME280).
Recommended reading: ESP8266 Pinout Reference Guide
ESP32/ESP8266 Code
We’ll program the ESP32/ESP8266 using Arduino IDE, so you must have the ESP32/ESP8266 add-on installed in your Arduino IDE. Follow one of the next tutorials depending on the board you’re using:
- Install the ESP32 Board in Arduino IDE – you also need to install the BME280 Library and Adafruit_Sensor library
- Install the ESP8266 Board in Arduino IDE – you also need to install the BME280 Library and Adafruit_Sensor library
After installing the necessary board add-ons, copy the following code to your Arduino IDE, but don’t upload it yet. You need to make some changes. */ #ifdef ESP32 #include <WiFi.h> #include <HTTPClient.h> #else #include <ESP8266WiFi.h> #include <ESP8266HTTPClient.h> #include <WiFiClient.h> #endif #include <Wire.h> #include <Adafruit_Sensor.h> #include <Adafruit_BME280.h> // Replace with your network credentials const char* ssid = "REPLACE_WITH_YOUR_SSID"; const char* password = "REPLACE_WITH_YOUR_PASSWORD"; // REPLACE with your Domain name and URL path or IP address with path const char* serverName = ""; //"; /*#include <SPI.h> #define BME_SCK 18 #define BME_MISO 19 #define BME_MOSI 23 #define BME_CS 5*/); WiFi.begin(ssid, password); Serial.println("Connecting"); while(WiFi.status() != WL_CONNECTED) { delay(500); Serial.print("."); } Serial.println(""); Serial.print("Connected to WiFi network with IP Address: "); Serial.println(WiFi.localIP()); // (you can also pass in a Wire library object like &Wire2) bool status = bme.begin(0x76); if (!status) { Serial.println("Could not find a valid BME280 sensor, check wiring or change I2C address!"); while (1); } } void loop() { //Check WiFi connection status if(WiFi.status()== WL_CONNECTED){ HTTPClient http; //) + ""; Serial.print("httpRequestData: "); Serial.println(httpRequestData); //"; // Send HTTP POST request int httpResponseCode = http.POST(httpRequestData); // If you need an HTTP request with a content type: text/plain //http.addHeader("Content-Type", "text/plain"); //int httpResponseCode = http.POST("Hello, World!"); // If you need an HTTP request with a content type: application/json, use the following: //http.addHeader("Content-Type", "application/json"); //int httpResponseCode = http.POST("{\"value1\":\"19\",\"value2\":\"67\",\"value3\":\"78\"}"); if (httpResponseCode>0) { Serial.print("HTTP Response code: "); Serial.println(httpResponseCode); } else { Serial.print("Error code: "); Serial.println(httpResponseCode); } // Free resources http.end(); } else { Serial.println("WiFi Disconnected"); } //Send an HTTP POST request every 30 seconds delay(30000); }";
Setting your serverName
You also need to type your domain name, so the ESP publishes the readings to your own server.
const char* serverName = "";
Now, you can upload the code to your board. It should work straight away both in the ESP32 or ESP8266 board. If you want to learn how the code works, read the next section.
How the code works
This project is already quite long, so we won’t cover in detail how the code works, but here’s a quick summary:
- Import all the libraries to make it work (it will import either the ESP32 or ESP8266 libraries based on the selected board in your Arduino IDE)
- Set variables that you might want to change (apiKeyValue)
- The apiKeyValue is just a random string that you can modify. It’s used for security reasons, so only anyone that knows your API key can publish data to your database
- Initialize the serial communication for debugging purposes
- Establish a Wi-Fi connection with your router
- Initialize the BME280 to get readings
Then, in the loop() is where you actually make the HTTP POST request every 30 seconds with the latest BME280 readings:
//) + ""; int httpResponseCode = http.POST(httpRequestData);
You can comment the httpRequestData variable above that concatenates all the BME280 readings and use the httpRequestData variable below for testing purposes:
String httpRequestData = "api_key=tPmAT5Ab3j7F9 Sensor also use a Raspberry Pi for local access).
With this setup you control your server and can move to a different host if necessary. There are many cloud solutions both free and paid that you can use to publish your readings, but those services can have several disadvantages: restrictions on how many readings you can publish, number of connected devices, who can see your data, etc. Additionally, the cloud service can be discounted or change at any time.
The example provided is as simple as possible so that you can understand how everything works. After understanding this example, you may change the web page appearance, publish different sensor readings, publish from multiple ESP boards, and much more.
You might also like reading:
- [Course] Learn ESP32 with Arduino IDE
- ESP32 Publish Sensor Readings to Google Sheets (ESP8266 Compatible)
- Plot Sensor Readings in Real Time Charts (ESP Local Web Server)
- ESP32/ESP8266 Insert Data into MySQL Database using PHP and Arduino IDE
I hope you liked this project. If you have any questions, post a comment below and we’ll try to get back to you.
If you like ESP32, you might consider enrolling in our course “Learn ESP32 with Arduino IDE“. You can also access our free ESP32 resources here.
Thank you for reading.
98 thoughts on “Visualize Your Sensor Readings from Anywhere in the World (ESP32/ESP8266 + MySQL + PHP)”
Sara and Rui Santos,
Excellent post, didactic and accessible to all. Congratulations!
Carlos Bruni
Salvador Bahia Brazil.
Thank you!
do you have example that use input data from web server save to database, and from data base, we use data to control machine, for exemple if user input 1(led 1 on), input 2 led 2 on) and 3 to turn off led
At the moment, I don’t have any tutorial on that exact subject.
Thank you for your excellent tutorial!
How can I set up the weather data base so that anyone visiting my web page can download the data if they need to?
Thank you very much.
God bless you.
You’re welcome!
This is really great, thanks a lot.
You’re welcome! Thanks for reading
Excited to have it working! Thanks for the great tutorial and examples. Looking forward to the next.
You’re welcome! More tutorial on this subject will be posted in the upcoming weeks
I’m glad to read about the possibility to use Raspberry Pi instead of a server. Do you have additional information to do so?
Friedhelm
I don’t have, but I might write a tutorial on that subject. With the Raspberry Pi the data will be stored locally (not from anywhere in the world).
That’s want I want to do.
Can I do the same by installing PHP and mySQL standalone on my mac ?
I don’t want to use the hosting services.
You can definitely do it. Search for “PHP + MySQL” on Mac OS X and you’ll find plenty of tutorials on that subject.
Note: with that setup data will be stored locally (not from anywhere in the world).
Hi, When i execute from my FileManager the file esp-chart.php visualize the following lines:
Notice: Undefined variable: sensor_data in /storage/ssd3/152/2515152/public_html/esp-chart.php on line 38.
When execute post-data.php in my screen view is the following:
No data posted with HTTP POST.
When I view the structure the ‘sensor’ database ,I view following:
# Name Type Collation Attributes Null Default Comments Extra Action
1 idPrimary int(6) UNSIGNED No None AUTO_INCREMENT Change Change Drop Drop
More More
2 value1 varchar(10) utf8_unicode_ci Yes NULL Change Change Drop Drop
More More
3 value2 varchar(10) utf8_unicode_ci Yes NULL Change Change Drop Drop
More More
4 value3 varchar(10) utf8_unicode_ci Yes NULL Change Change Drop Drop
More More
5 reading_time timestamp No current_timestamp() ON UPDATE CURRENT_TIMESTAMP() Change Change Drop Drop
More More
What happens with in this case?
Thanks you
Do you have any values stored in your database already?
Even though the request is successful… Is data actually being inserted in your Sensor table?
Regards,
Rui
No, unfortunately only the following appears when I run esp-data.php:
ID Sensor Location Value 1 Value 2 Value 3 Timestamp
Please, view the following link:
qpower.000webhostapp.com/esp-data.php
You need to check why your ESP is not posting data. Is your sensor working?
What do you see in the Arduino IDE serial monitor?
Regards,
Rui
Hello Rui:.
Is it possible to have some kind of return from the database if it recorded well or not?
Thanks for helping
I’m honestly not sure why that happens… I can only imagine some small error in the post-data.php script (like wrong credentials) that can’t insert data in your Sensor table…
Hello Rui and Alberto,
I had a bit the same problem. But I discovered this was caused by a difference in table name SensorData as is used in this other example
Here in example this table is named Sensor. I took the database from the other example (saving work to do).
I hope this will help you.
Regards,
Jop
I had a problem posting to the database.
I solved it by noting the URL of the SQL server as indicated in phpMyAdmin and I changed the $servername value from ‘localhost’.
Thank you Santos’ for this tutorial.
I’m a fan of you guys. You are doing a wonderful job. This tutorial is but one of your great efforts. I appreciate it.
Please is it possible to make this same platform used in this tutorial to control appliances from anywhere in the world? Thanks for your reply.
It’s definitely possible, but it’s a bit tricky. I’ll probably create that tutorial too in a couple of weeks.
Right now I don’t have any tutorial on that subject.
Stunningly good example, like all your others 😊
I just wanted to add in that if things don’t work immediately as was the case with mine, try talking to your isp internet service provider.
I put this in my existing site and just couldn’t push data from sensor to database. I KEPT GETTING A FAILURE 301 Redirect Status
It turned out my isp had as Default a redirect in place. They commented that out and data started to appear in the database, but, not on the page.
More talking and it turned out this was a question of cache.
With super cache turned off I now get live data… After refresh…
I now want to find out what I need to do to have an auto-refresh as data arrives.
As I say, just my experience, but it may help someone.
Keep up the great work guys. I have been through my course twice now and feel quite empowered.
😁😁
got this when visited *****.com/esp-chart.php “Connection failed: Access denied for user ‘*****_esp_brd’@’localhost’ to database ‘*****_esp_data’ “
You’ve typed in the PHP script either the wrong database name, user or password…
Hello Akash,
I have the same issue.. ; but the database name, user and password are correct..
Did you solved the issue ?
If yes, please tell me how.
Thanks in advance,
alin
Ciao, thamks for your fantastic tutorials, i have learned a lot from them.
In this one i have found a problem. Opening the file esp-chart.php it give me this error and i don’t know how to solve it:
Fatal error: Uncaught Error: Call to a member function fetch_assoc() on boolean in /home/mhd-01/www.”myhost”.it/htdocs/ESP/esp-chart.php:34 Stack trace: #0 {main} thrown in /home/mhd-01/www.”myhost”.it/htdocs/ESP/esp-chart.php on line 34
Obviusly in “myhost” i have my domain name
You can leave localhost in the host (if your database is in the same server as your PHP code)
Hi Rui,
I have almost the same error. I’m trying to mimic this data visualisation on my local computer, using XAMPP. I don’t understand your answer to the question of Davide. What do you mean by “leave localhost in the host”?
This is the error text:
Fatal error: Uncaught Error: Call to a member function fetch_assoc() on bool in /Applications/XAMPP/xamppfiles/htdocs/websiteSWB/esp-chart.php:34 Stack trace: #0 {main} thrown in /Applications/XAMPP/xamppfiles/htdocs/websiteSWB/esp-chart.php on line 34
After hours of internet searching and trial and error I finally had success in using the XAMPP-environment to receive the ESP-data and plotting the chart in my browser locally. First I changed in the ‘esp-chart.php’ file the $username to “root” and the $password to “” (empty). Then in the Arduino sketch I set the servername to “”. This means: first the IP-address of my computer, followed by the foldername ‘websiteSWB’ which contains the ‘post-data.php file’. The folder is inside the xammp/htdocs/ folder structure.
Hi Rui, thanks for this tutorial! Your time is appreciated.
I have followed your tutorial to the letter and I am also recieving the same error regarding line 34? Very strange. I have only changed the database, username and password as intructed. The servername was left as localhost.
[06-Feb-2021 11:06:02 UTC] PHP Fatal error: Uncaught Error: Call to a member function fetch_assoc() on bool in /home2/myHost/public_html/esp-chart.php:34
Stack trace:
#0 {main}
thrown in /home2/myHost/public_html/esp-chart.php on line 34
Do you have any idea what I might have to do to fix this?
Thank you for your time.
Hi,
Im trying to follow and try the tutorial according to the code and step, I managed to setup the mysql database, create post-data.php, and esp-chart.php, seem to be ok, except that for esp-chart.php I didnt see nothing, it did not show the X-Y chart and the title, it just doesnt show anything, I just copy and paste from ur rawcode, please advise, the php code seem to be able to connect to the sql database, it just doest display the html or chart portions of it.
If you access PHPMyAdmin, can you open your Sensor table? Is your data inserted there?
I’ve just noticed that the table is called Sensor while, in the other tutorial, is called SensorData. It is working now.
Hi Rui,
Sent a msg yesterday about my code not working. Actually ended up a problem with the php version stored on my ISP’s server.
Just a suggestion, adding a sleep mode to the esp’s code, will make this project portable 😉
Thanks again and keep up with the excellent job!
esp-chart.php is returning me this error:
Connection failed: Access denied for user ‘xxxx_esp_boa’@’localhost’ (using password: YES)
My database name, user and password are correct, the same as in post-data.php, which is working fine. My user has all privileges. I have a online mysql server, I work with wordpress websites, so it would have to do the job.
What could be happening?
Your project is very interesting, I wish I could see it working for me. Thank you and congratulations.
Fabio.
To be honest that error only happens if you entered the wrong settings in your PHP script… Can you double-check the database name, username, and password again?
The user must have all the privileges as you’ve mentioned in order to work
I have had the same Problem and solved it. My fault was to use the wrong kind of “-Key around the username and password. It was the „username“ instead of “username”.
Check your database password if you are having access denied problems.
Invalid characters in my password were causing the similar problems.
Forgot to mention the function which solved my php version issue…
if (! function_exists(‘array_column’)) {
function array_column(array $input, $columnKey, $indexKey = null) {
$array = array();
foreach ($input as $value) {
if ( !array_key_exists($columnKey, $value)) {
trigger_error(“Key \”$columnKey\” does not exist in array”);
return false;
}
if (is_null($indexKey)) {
$array[] = $value[$columnKey];
}
else {
if ( !array_key_exists($indexKey, $value)) {
trigger_error(“Key \”$indexKey\” does not exist in array”);
return false;
}
if ( ! is_scalar($value[$indexKey])) {
trigger_error(“Key \”$indexKey\” does not contain scalar value”);
return false;
}
$array[$value[$indexKey]] = $value[$columnKey];
}
}
return $array;
}
}
on what line did you insert this?
pls help, im havingthe same problem. thanks!
Hi, Rui good work Rui, I want a bit modification, can I transfer and plot my sensor data within micro seconds delay rather than 30 seconds? Actually, I want to plot the real time current wave of my induction motor. So, please suggest what I have to change?
Hi,
many thanks for the good tutorial. I like the presented architecture very much.
I use the Highcharts library for the first time. Unfortunately the scaling on the x-axis is not nice because of the crooked values. I haven’t managed to get a reasonable scaling yet.
Do you have any idea to get this?
You can definitely modify the charts look as they are highly customizable (show less points or hide x axis data). Unfortunately I don’t have any examples on that subject…
I have entered the Sensor table in phpMyAdmin in me account at ONE.COM.
But where and how do I enter the PHP files?
There is no cPanel file manager.
Sorry if this is a silly question, I am new to all this sql php stuff.
You should definitely have a file manager of some sort where you’ll find a folder called public_html (you might need to contact your host for specific details on where they have that folder).
Hi guys, thanks again for a brilliant tutorial. I have set up an ESP32 and have it logging to my home computer hosting a webserver and SQL database using Wamp64. Your code worked fine after a bit of setting up localhosts on the Wamp server. Also had some errors initially in the chart php code in line 35. The code returned an error until there was some data in the sensor table. The php code is looking for an array which only gets populated when there is some data. After the first posting by the ESP all was fine.
Keep up these great tutorials………
Regards
Alan
This is really great, thanks a lot
can I replace the BME280 sensor with dht11 and DS18B20 and yl-69?
Yes Hamza, but I don’t have any tutorials on that exact subject… You would need to modify the code to read one of those sensors and the web page to only display those readings.
I still think you should use the BME280, here’s why: DHT11 vs DHT22 vs LM35 vs DS18B20 vs BME280 vs BMP180
Hello
I followed the tutorial to the letter.
A problem has arisen when I connect the ESP32 to an external power source, the ESP32 if it turns on but does nothing, does not send data to the web. It only works correctly when it is connected via USB to my PC.
What could be the problem?
It sounds like a power issue… Can you double-check that is supplying enough voltage/current?
Hi guys,
just finished a project where I combined 4 of your previous projects:
1. Sensor data to SQL database and available anywhere
2. BME280 temp, humidity and pressure sensor data logging
3. SD1306 OLED display with DS3231 Real Time clock
4. ESP32 Dual Core for split processing
Basically, I wanted to capture and display multiple sensor data together with real time video, real time clock data and display this on a dashboard with selectable update timing.
I used the 2 ESP32 cores to periodically poll the SQL database for data (5 minute intervals) and also to update a Real Time Clock display every second.
The results were posted to an IOBroker dashboard using Node Red logic behind the scenes. The outcome was a superb display with historical trends updated every 5 minutes and an IP camera live video.
I have to thank you guys for the great work done on these individual projects which enabled me to get the final result.
A big shout out to IOBroker and Node Red in providing the open source tools to integrate and display the data in a professional manner.
Anybody interested in the details please contact me via RNT Forum as I’m pleased to share RNTs valuable work.
I encounter error in my Arduino code
>> Error code: -1
My website is not updated. I’m using localhost, is this okay?
Can you double-check the IP address of your device? Can you open the URL path in your local web browser?
This error code was resolved by allowing the connection through the firewall.
Hello and thanks a lot for that tuto. I’m noob on that : just began today
I followed everything in your tuto.
The only diff is that i’ve setup a ubuntu server with apache2 MySQL PhpMyadmin etc…
Created databases etc…
Finally at the end i loaded the sketch and i only add a auto-refresh in “esp-chart.php”
I also put a DeepSleep function in the sketch, the chart is working and giving the good values : it’s great
But on MySQL side i always get that errors :
Unfortunately iam not a SQL expert so if someone can help me please ?
Also as a no SQL expert, is it possible to add a button to reset data-base to restart from zero value
”
Warning in ./libraries/sql.lib.php#613
count(): Parameter must be an array or an object that implements Countable
Backtrace
./libraries/sql.lib.php#2128: PMA_isRememberSortingOrder(array)
Hello again,
I finally find the why of my pb : MySQL was wrong installed and set
😉
Hello Max, unfortunately I’ve never experienced that error, so I’m not sure exactly what’s missing in your configuration…
Regards,
Rui
Hi
Thanks for another interesting project. Works fine.
I only wonder if I change my domain to ‘SSL’ or https from http,
do I need to include WiFiClientSecure etc. in the sketch?
Did one program that sends to Google Sheet and the fingerprint
(client.verify(fingerprint, host)) does not seem to make any difference.
Janne
Yes, you should be able to install a free SSL certificate (like letsencrypt ) using cPanel. Then, as you suggested you would need to run a WiFiClientSecure example
Hello again 😉
I’ve a question please :
I noticed that the chart is limited for one day display history
Is it possible to custom that view or get more large than now
a view for one week or one month
Is it possible to add an option like that in main esp_*.php page ?
or at minimum setup a larger view
Waiting for your feedback
thanks in advanced
Guys, this tutorial is just awesome! Even though I’ve never created a website, a database or even registered a domain before, I was able to complete the whole project and it works perfectly. I managed to change the sensor to others and, with some changes of the script, measure different variables and check them from anywhere in the world! 🙂
This tutorial is so well explained that it wasn’t as complicated to follow as it may seem at the beginning.
Thank you very much for posting these projects and helping so many people.
Best regards,
Joan
Hi Joan.
Thank you so much for your feedback.
We spend a lot of time making the tutorials as easy to follow as possible even if you don’t have any experience in these fields.
We’ll try to create more projects on this subject on a near future (like sending email notifications, setting a threshold on a web form, etc…)
Regards,
Sara
Hi Joan, can you please tell me how you were able to upload and display other data? I tried to modify this part:
String httpRequestData = “api_key=” + apiKeyValue + “&value1=” + String((1.8*(bme.readTemperature())+32))
+ “&value2=” + String(bme.readHumidity()) + “&value3=” + String(bme.readPressure()/100.0F) + “”;
to this:
String httpRequestData = “api_key=” + apiKeyValue + “&value1=” + String((1.8*(bme.readTemperature())+32))
+ “&value2=” + String(coach()) + “&value3=” + String(starter()) + “”;
I added these lines to confirm the data was being received:
Serial.print(“httpRequestData: “);
Serial.print(“Coach battery: “);
Serial.print(String(coach()));
Serial.println();
Serial.print(“Starter battery: “);
Serial.print(String(starter()));
Serial.println();
Serial.println(httpRequestData);
Serial monitor output:
20:02:44.638 -> httpRequestData: Coach battery: 9.54
20:02:44.706 -> Starter battery: 12.08
20:02:44.742 -> api_key=not_the_real_key&value1=80.71&value2=9.54&value3=12.09
20:02:45.381 -> HTTP Response code: 200
MySQL database never updates, webpage never updates. I only changed two of the variables.
If anyone has an ideas please help.
Thanks
It appears your problem is within the PHP file. Run an HTML test form to check your PHP code. Something like this:
hum, the html code didn’t show up. Let’s try again:
”
“
Hi Rui, Hi Sara, please let me know how to do to implement Autoconnect on ESP32 (Not ESP8266!)
I tried to do it but I’m getting compiling errors.
Thanks
Hi Rui –
Having saved this tutoprial when I first came across it, I called it up to solve a current problem. The approach taken to access HTTP services is much simpler than the one I have been using previously. Unfortunately, however, I cannot get past the #include line! The compiler throws errors. There seem to be countless variants of this library – HttpClient.h, ESP8266HTTPClient.h, ArduinoHttpClient.h and so on. None of these seems to compile! Help – what am I missing here?
Hi Phil.
What board are you using and what are the exact errors that you’re getting?
Regards,
Sara
Hi Sara –
Merry Christmas!
I am using a NodeMCU ESP-12E board but the confusion arose over exactly which version of the HTTP library I should be using as the version specified above didn’t seem to work with some of the other libraries I was using. However, I finally succeeded with the libraries you recommended having modified the rest of my program to work with these and the updating is now working.
Thanks for coming back to me – have a Happy New Year!
Regards
Phil
Hi Rui,
short feedback from my side.
Everything worked well. Thanks for all your work – very much appreciated.
Looking forward to go on with the esp32 GSM one to get rid off the wifi challenges.
Hi Mathias.
Thank you for your feedback.
Are you talking about the ESP32 with SIM800?
We have some tutorials:
Regards,
Sara
Hi Rui, Hi Sara
thank you for your great tutorial,
I would like to replace the Bme280 to DS18B20 or Dht 22, could you help me please,
Best,
excellent! changed last line from delay(30000) to ESP.deepSleep(9e8) for 15 minutes and tied D0 to reset pin for wake up.
changed pressure reading to inHg
String(bme.readPressure()/100.0F * .02953)
Hi and thanks for your valuable tutorials.
How I can have a chart with shifting data from right to left.
I have to press Enter key every time for reading and showing new data.
Would you please guide me for solving this problem.
I want to do like this.
Hello and thanks a lot for this tutorial
please is there any free way to Hosting server and domain name??
because is not free anymore
Not working at all. Exit with Error code -1
20:51:20.529 -> httpRequestData: api_key=tPmAT5Ab3j7F9&value1=24.20&value2=24.20&value3=993.83
20:51:20.576 -> Error code: -1
My website doesnt show any readings but in Arduino serial monitor shows:
httpRequestData: api_key=tPmAT5Ab3j7F9&value1=26.28&value2=84.59&value3=84.59
HTTP Response code: 200
I think the problem is in PHP file because there is no data shown in the SensorData table.
Please help me. Thanks!
Dear Sara and Rui Santos,
Thank you for all your projects. Not only are interesting, they are perfectly configured, with many included possibilities. I did it with another sensor like voltaje and DHT, but I think like you tha BM280 is much better. I used my NAS and my webpage and of course it works perfect due to the project is very clear and the I could find my errors quickly. The only thing that I didn’t find was how to show the part of the table I want to show based on the time or date that I want, but I think I have to study more about sql.
Thakns again and best regards,
Lorenzo
Sara & Rui,
Thanks for the great tutorial. This was my first ESP32 / BME280 project and I managed to get everything working. I would like to modify the date format though and I’m not exactly sure how I can do this.
I’ve been reading php.net’s manual for DateTime::format and tried to modify this line in esp-chart.php:
$readings_time[$i] = date(“Y-m-d H:i:s”, strtotime(“$reading – 1 hours”));
to
$readings_time[$i] = date(“D H:i”, strtotime(“$reading – 1 hours”));
I was hoping to get the date to display as “Mon 12:30” but this did not seem to work. Any suggestions? I am completely new to all of this so I’m a little puzzled.
Thanks
Hello,
First of all, congratulations for the excellent work you are doing with all these tutorials you are sharing with us.
I have a quick question regarding the HTTP POST, is ti possible to show me how to adapt the code in order to send a POST but using HTTPS.
I’ll really appreciate your help.
Regards and great job.
Hi Guillermo.
At the moment, we don’t have any tutorials about suing HTTPS.
Regards
Sara
Made this with a BMP085 sensor so no humidity available, had to changes the .h file with the correct one for the BMP085. and left out the part for the humidity reading.
Once I get the correct sensor i put this back in.
I like to read the tutorials and the comments, very useful.
Thanks
if I have data 1000 record to show .this chart can show 40 data . How can I do to show data next 40? I maybe add button to show next data. How can I do that ?
Thanks for advance
Or I would like to show history records or current records ? How can I do that ?
Thanks alot
Hi! I want to modify the HTML Portion of the esp-chart.php files (text and colors) . But when i edit and save it, nothing changes. Thanks for your effort Rui
Hi! Rui. How can i make the Chart display more than 40 points ?
Hi together,
points in my series here, are not always equidistant in time along the charts x-axis, but appear as that in the chart unfortunately.
I have invested so much efforts already but have still not been able to find out,
how Highcharts needs to be configured, so that the timestamp values appears at correct positions which reflect their proportional distance in time.
Was anyone perhaps already been able to solve this appropriate?
Thanks so much for sharing in that case !
Kind Regards to you all and to Sara and Rui over there in Porto
most of the problems above have to do with the browser not updating.
if you want to check this is your issue, just put an ‘ ? ‘ char behind your url and load webpage again. i d’ont know yow to fix this in the html code but someone over here probable does.
thanks and welcome | https://randomnerdtutorials.com/visualize-esp32-esp8266-sensor-readings-from-anywhere/?replytocom=460196 | CC-MAIN-2021-25 | refinedweb | 6,867 | 64.2 |
"Receiving SOAP faults is not supported due to Web browser limitations"
I'm scared, really
Mi autogenerated service proxy has my typed exception object, In the response soap message i see mi exception perfectly, but the silverlight application throw error 404 I suppose by "Receiving SOAP faults is not supported due to Web browser limitations"
EriC#
I have noticed this behaviour as well and when I posted about it before all I heard were crickets. What I wound up doing was catching the error in the proper method for the Reference.cs file that is generated in the proxy. Definitely not optimal because if you ever update your service reference it will over write your code.
Chandler ChaoiStreamPlanet
At least in beta 1, Fault Exception is not supported in Silverlight. I don't think beta2 changed that but I could be wrong.
Here is the way I do it:
I created a custom exception in my WCF side. I have one generic WCF function for my all WCF calls. The return type is called ResponseData I defined this way:
[DataContract] public class ResponseData { [DataMember] public string StringResult { get; set; } [DataMember] public int IntResult { get; set; } [DataMember] public double NumericResult { get; set; } [DataMember] public bool BooleanResult { get; set; } [DataMember] public CustomException Error { get; set; } [DataMember] public DataSetData DataSetResult { get; set; } [DataMember] public Dictionary<string, string> DictionaryResult { get; set; } }
[DataContract] public class CustomException { [DataMember(Order = 0)] public string Message { get; set; } [DataMember(Order = 1)] public CustomException InnerException; public Exception ToException() { Exception e; CustomException ce = this; if (ce.InnerException != null) { Exception inner = ce.InnerException.ToException(); e = new Exception(ce.Message, inner); } else e = new Exception(ce.Message); return e; } }
So if my WCF function I catch any exception and pass that exception to MyResponseData.Error;
[OperationContract] public ResponseData ProcessRequest(RequestData request) { return Process(request); }
private ResponseData Process(RequestData request)
{
ResponseData result = new ResponseData(); try
{ ...
result.StringResult = "Something"; // }
catch (Exception ex) { while (ex.InnerException != null) ex = ex.InnerException; result.Error = new CustomException(); result.Error.Message = ex.Message + ex.StackTrace; } return result; // this result could contains Error or could contain real data.
}
On Silverlight side in MyWebServiveCompleted call I check both e.Error and e.Result.Error
Hope this would help if somebody want to do this way. Use this CustomExeption I can also send my Exception in the Silverlight side back to Server to do Exception logging.
sladapterSoftware EngineerAprimo, IncPlease remember to mark the replies as answers if they answered your question
sladapter:
Here is the way I do it:
I created a custom exception in my WCF side. I have one generic WCF function for my all WCF calls. The return type is called ResponseData I defined this way:
I created a custom exception in my WCF side. I have one generic WCF function for my all WCF calls. The return type is called ResponseData I defined this way:
Hi sladapter,
I like this approach very much, thanks for sharing it!
vitya
Hi sladaper, I thought of creating an exception model such as this, but now that Beta 2 is out I see they have added support for FaultException.
However in my WCF method, if I try to throw a new FaultException from the catch(Exception) the runtime stops with error "FaultException was unhandled by user code.
Have you had a chance to play with FaultException in Beta 2 ??
thanks.
Have to dissapoint you there Jordan. They did change something concerning the FaultExceptions.In Beta 1, adding service reference to WCF service with FaultContracts specified caused compilation errors (as you probably now).Because this information was added to the Reference class, but Silverlight doesn't know what to do with this.In Beta 2 they changed it so this information is no longer added to the reference class.Hence no more compilation errors, but it still doesn't work.Two weeks ago I sent a mail to Tim Heuer about this (and other WCF/Silverlight stuff, you can only try to get an answer right :-)) and this was his answer:
Great, thanks for the detailed response. I do like sladapter's idea of a single generic operation contract (since it would reduce the download size on the client), though i don't see how I could do this to support the many different data contracts I need. These are being used in the client side app as well to simplify things and avoid code duplication.
I will look into your idea of using an out param for the exception.
thanks.
ps. anyone know when we can expect RC1?
No problem. I started of with the exception being part the response but got stuck :-).The reason I got stuck was because of our current WCF-service architecture (which in all modesty is pretty good ;)).Reason I got stuck was that we (as you have I guess) have many OperationContracts each returning a specific typed response.These responses all inherit from a base class. And it's in this base class I added the exception object.But the problem was I wanted to handle exceptions in one place (which our architecture provides). But currently I couldn't get to the type the response should be in that place.Maybe this is a bit of an abstract description. But it's just to say that it was indeed my intention to add it to the response.But didn't want to change to many things to the current architecture of the WCF service as this 'exception work around' is hopefully only temporary...
Of course I had to ask Mr Heuer about the future of SL aswell. Here is what he answered to that (I basically asked when final SL2 will be available and what is on their future roadmap, again I can only ask :)):
Surprises me too, as I've been in meetings with MS regarding silverlight, and every indication was a Fall release. I would assume this to be PDC. I can't think of a better time or place to release the final of Silverlight 2, if it is as important to them as they make it to be. I also spoke of roadmap with some people on the SL team, who stated that there's already plans up to version 5.
But i agree with Mr Heuer, no point getting ahead of themselves, just get the final v2 out and see where things stand. This workaround is fine for a while (out param works nicely btw), I would rather they fix things like localization which is im complete shambles right now.
My other big concern, which no one seems to be addressing, is how MS is planning on getting the runtime propogated. Believe me, NBC's olympic coverage won't be it. They should've bought youtube before google did... No corporate networks will install it until the final release has come out.
doolittle,
Thank you for all the information.
So currently we still need this workaround for passing WCF exception back to Silverlight. I agree that if you have a lot of DataContracts, using out parameter in all Service functions should be the best choice.
For me, I only need to pass generic "DataSetData" back and forth so I can use the generic method. I only have a few very generic OperationContracts and DataContracts even our application contains more than 1000 tables.
sladapter, it would be nice to have such simple data requirements.
Do you not need to bring back combined information? such as details of a client with lists of things they're allowed to do?
Yes, actually I bring back meta data and real data. The meta data is for describing each field of an object such as field data type, if it's required, if it's editable, the Max field length, security right and access right check id, data format, display control type and edit control type etc. With all those information we basically can automate control building process.
Most security check are done in the Business tier so the data I bring back is for the current user only. If the user does not have certain right for certain field, the data returned won't even contains that information.
Our data requirements is not simple at all, it need to be very dynamic. Each user can select different columns (within their right) to show their data. They can change this selection dynamically. Because the nature of our data requirement is so dynamic, generic way (using DataSet rather than define each DataObject) is a better way for us. It requires a lot of thinking before hand and it's not that straight forward either. But once the structure is there, adding more objects and pages are simple and easy.
Looks like you are already doing what we ultimatly would like to be doing :).At least the part where you send the info with information on what the client is actually allowed to do with it.
You might have noticed my thread'm really interested if for your, pretty big it seems, project you used any existing framework.If not what kind of structure are you using.I'm not asking for any code, but is it based on some guidelines?Did you find those somewhere??Would be very interested in that...
A Silverlight business app would need it's state managed on the server.
And it's probably no coincidence that the state machine workflow
service seems like it could be ideal for this. This could be a lot
more elegant than the old ASP.Net session state. However the current
ReceiveActivity seems to be designed for SOAP features. So when
a method is called of the wrong state all I get back is the 404 errors.
Does anyone know of a work around for this? And this seems pretty obvious way to make Silverlight apps so one of the gurus in the blogsphere must be working on a decent demo...? (that Calculator sample is pretty lame).
I can return errors within my functions using a property in a DataContract but I don't know how to override the workflow services when it throws a fault exception. I'm in process of learning how to use it so there are big gaps in my knowledge of it.
If you do no catch the Exception thrown in your Service code, you will get 404 error. But if you catch the final exception before your Service code return, wrap that error in ResultData back or you can use Out Parameter to return the error, you should get a meaningful error instead of 404 error.
agreed. I use the out parameter method as it's easier than adding a property if you have a lot of DataContracts. Also hoping the FaultException is fixed in the next release so can easily strip out the out param.
Instead of "out params" approach, I prefer to manage "Status".
Status management is a specific service dedicated to error handling with "strong-typed" errors (like non-generic FaultException).
Typically, when a method from a service returns "false", the client has to call a service like "GetLastStatus" that enables him to retrieve error details from the last call.
I design several DataContracts for different kind of Status. Basically: KernelErrorStatus, BusinessErrorStatus, and so on.
Of course this kind of approach depends on service implementation (single / per call / per session mode and multi-threading or not), and has to be designed correctly to fit well.
But I think it helps to get nice architecture for error handling and prevents developpers to "pollute" or "complicate" method signatures in the service contracts.
Hope this helps,
Little S t e p s
In order to keep the same signatures on my WCF services, I've done something like what you said (with a GetLastError() Method) :
Thanks
I have put together an article on Code project explaining in detail a re-usable set of classes to allow Service Exception handling in Silverlight.
The code wrapps itself around the service calls and adds extra info to the response message if an exception is handled.
This solution works for all scenarios including methods that return primitives or void, and keeps the compatibility with the standard WCF Fault handling.
The article can be found at Catch and Handle WCF Service Exceptions in Silverlight
To Err is human, To Arr is Pirate
surielb,
Thanks for the article.
Here is another similar solution from Microsoft Web Service team which basically has the same concept, but the code they provided is even more straightforward and simpler without having to change much of the current code. All you need to do to create BasicHttpMessageInspectorBinding instead of BasicHttpBinding and pass it to the Service constructor. Then the e.Error should return you all the exceptions thrown from the Service end.
See discussion in this blog:
You can download the Silverlight Fault Handle sample code from here:
Hi SLadapter,
Thanks for this link, I have implemented the code and it works well. I have one problem though...
It will only work when I have anonymous access permitted on the site which hosts the WCF Service.
For other reasons it is essential that my site has anonymous access disabled.
I assume that I have to change the BasicHttpMessageInspectorBinding to implement the equivalent of this:
<
</
But I am unclear how I would actually go about doing this in the context of the MessageInspector solution.
Any help would be gratefully received.
Cheers
Davedrat,
Sorry I have never done this. I'm not sure you need to change anything in MessageInspector code;
You might just need to add the Security to the binding when you create BasicHttpMessageInspectorBinding. However, I have not figure out how to specify the transport in the code.
EndpointAddress address = new EndpointAddress(new Uri(Application.Current.Host.Source, "../Service.svc")); BasicHttpMessageInspectorBinding binding = new BasicHttpMessageInspectorBinding(new SilverlightFaultMessageInspector()); binding.Security.Mode = BasicHttpSecurityMode.TransportCredentialOnly; // How to set the Transport here? // binding.Security.Transport.ClientCredentialType = HttpClientCredentialType.Windows; This line won't work in Silverlight. ServiceClient proxy = new ServiceClient(binding, address); proxy.DoWorkCompleted += new System.EventHandler<System.ComponentModel.AsyncCompletedEventArgs>(proxy_DoWorkCompleted); proxy.DoWorkAsync();
You might want to ask this question in the codeplex site where you download the sample code and see if the creator of this solution can provide an answer.
Thanks for your quick reply sladapter,
The approach you describe seems like the right one... but I'm getting stuck at the same point as you.
There is an additional difficulty though.. If I do not change anything in the Message Inspector code, or the service web.config, then Visual Studio will not even let me add the service reference to my client Silverlight app.
VS complains that "Security Settings for this service require 'Anonymous' Authentication but it is not enabled for the IIS application that hosts this service."
Any thoughts on this?
I have contacted the codeplex guys to see if they can provide some help. If a solution is found I'll post it back here.... | http://silverlight.net/forums/p/17944/60019.aspx | crawl-002 | refinedweb | 2,491 | 62.98 |
Talk:OSM2World
Contents
Known issues
The "TerainBoundaryAABB" debug view does not work.(fixed in 0.1.1) handling of multipolygons with more than one hole is buggy(fixed in 0.1.2)
Debian
Great work, Tordanik!
Just a hint: for running OSM2World on Debian Linux, I have to install libjogl-jni and extend the java library path as in java -Djava.library.path=/usr/lib/jni/ -jar OSM2World.jar. --Scai 20:06, 24 November 2011 (UTC)
- Thanks for testing OSM2World, that information could be useful for other Debian users. Can you describe what kind of error you get when running OSM2World with the bundled native jogl libraries (lib/jogl/linux-amd64 or lib/jogl/linux-i586, depending on your architecture)? --Tordanik 22:25, 24 November 2011 (UTC)
It would be nice to have a list of recognized tags, especially for building attributes.
A related question regarding the building roof: why do you evaluate roof:ridge instead of the documented building:roof:ridge? Also there seems to be a problem with building=entrance nodes disturbing the roof shape.--Scai 19:05, 25 November 2011 (UTC)
- What I can offer is this semi-automatically generated taglist. Because of the way it is generated, tags are often not listed individually, but only with their common key. This picture shows the roof shape values (minus "onion") supported by OSM2World. I hope I get around to providing more detailled documentation of supported tagging some time.
- About the "namespace" issue: I'm currently discussing the building attributes with another developer and OSM2World will probably end up supporting building:roof:* tags in the near future (even though building:roof:ridge:direction is awfully long...). After all, they are more popular in the database and with other applications.
- Finally, you are right about the entrance nodes. Some roof shapes, particularly those mapped with ridge and edge ways, are still quite buggy. --Tordanik 22:45, 26 November 2011 (UTC)
GUI
Although I understand that the GUI is primarily for debugging, I wonder if you would consider the following small enhancements:
- it would be great to have a dashboard, showing current height above ground, lat/lon location, direction of view and angle of head
- the keyboard controls are much more intuitive and easy than the mouse (I will add them to the wiki as I had to experiment to find them). Please note that +/- is not a good choice as these keys move around on international keyboards. On my UK keyboard, + is shift-= as doesn't work. Finally, there is no keyboard control for head tilt, I have to use the right mouse button for this. Would it be possible to add?
Really nice to have but probably more work:
- a map view showing a mapnik tile with user's location on it (this makes more sense when the user is at "ground level" alongside the 3D view
- dynamic loading of small areas, directly from OSM as the user moves around
- click on a item to show its tags
- offer a shortlink to "edit this in OSM"
Thanks for this - it is a great tool with lots of potential to bring OSM3D to life. OliverLondon 22:18, 6 May 2012 (BST)
- I appreciate your feedback and all of your ideas would make nice features (even though some stray quite a bit from "debugging interface" into "end user application" territory). Those features that require knowing the user's lat/lon location are currently held back by the unfinished projection implementations, but it will be straightforward to add them once that issue is solved.
- The keyboard controls are mostly undocumented because there isn't much of an underlying design context behind them, aside from trying to allow for keyboard+mouse control as well as two hands on the keyboard, and I'll readily shuffle them around if there are better ideas. Do you have any alternative suggestions for +/- as zoom keys? --Tordanik 14:47, 12 May 2012 (BST)
Thanks for this great tool - I've been using to show my kids what can be done with Open Data :-) For this purpose it would be very nice to have the roof:colour display more values, e.g. brown. Would'nt it be possible to use exactly the colour the tagger has entered?! Thanks!
- Depending on which of the many available colour name translation tables you use, "brown" might be very different colour. That's why I have refrained from adding a large number of words for colours until now, and pointed out the ability to add RGB triples as colour values. However, Kendzi3D has now added one such list of colour names, so I'm considering to use the same one. --Tordanik 20:52, 22 September 2013 (UTC)
Darstellungsfehler an Kreuzungen
Hallo, mir sind in letzter Zeit in meiner Gegend immer wieder seltsame Darstellungsfehler in der Slippy Map aufgefallen. Beispielsweise werden hier oder hier statt Kreuzungen Rechtecke im 45°-Winkel gezeichnet und an anderer Stelle versinken ganze Felder im Nichts. Ist da etwas falsch getaggt oder liegt der Fehler an OSM2World? --AndreR (talk) 21:13, 29 May 2013 (UTC)
- Die fehlenden Texturen (schwarze Felder) sind ein Server-Problem, das nach zu langem ununterbrochenen Betrieb auftritt - dafür gibt es bereits einen Bugreport, aber noch keine Abhilfe. Die Kreuzungen sind hingegen ein Bug direkt in OSM2World, der vor allem bei sehr spitzen Winkeln an Kreuzungen vorkommt. Grund ist die recht primitive Kreuzungsberechnung, bei der die Fläche der Kreuzung so lange in alle Richtungen vergrößert wird, bis keine einmündenden Straßen sich mehr gegenseitig überlappen. Bei diesem Link von dir ist das aber z.B. bei den beiden nördlichen Straßen erst sehr weit vom Kreuzungsnode entfernt gegeben. --Tordanik 00:20, 30 May 2013 (UTC)
OSM2World crashes on my laptop
When I will create a 3D rendering in OSM2World in my laptop, the program crashes for the latest version, but I am not sure for the older versions, which is more stable. Is OSM2World crashing upon download of .osm data an issue?--TagaSanPedroAko (talk) 19:44, 29 December 2016 (UTC) | http://wiki.openstreetmap.org/wiki/Talk:OSM2World | CC-MAIN-2017-04 | refinedweb | 1,002 | 59.13 |
On 20/12/05 12:50, in article do7knq$276$1@news.nchu.edu.tw, "swang"
wrote:
> Hi, everyone:
>
> I am trying to use studio11 for x86 to compile MM5( a meteorological
> model issued by NCAR. It is basically a FORTRAN95 program.) I have
> problems with the flags:
>
> CPPFLAGS = -I. -C -P -DSUN -DBIT32
> FCFLAGS = -ansi -free -stackvar -M.../util
>
> Is there anything that I sould modify? I don't quite understand the
> -DSUN option.
You probably need to throw us a bone and say *what* problems you're having
with the flags!
The -DSUN option is effectively the same as putting '#define SUN' at the
start of your source files, so any #ifdef SUN...#endif sections will get
compiled in.
Cheers,
Chris | http://fixunix.com/solaris/142128-flags-studio11-x86.html | CC-MAIN-2014-52 | refinedweb | 123 | 77.23 |
14 March 2012 10:09 [Source: ICIS news]
SINGAPORE (ICIS)--?xml:namespace>
Bayer said in a statement that it intends to further improve its market share of global toluene di-isocyanate (TDI) production to 24% in the “medium term”, from 23% in 2011.
In addition, it plans to defend its position as the global market leader for methyl di-p-phenylene isocyanate (MDI) and polycarbonate (PC) by expanding their fields of application in the automotive and construction industries, it said.
The group is targeting for 2014 sales figures of Bayer HealthCare and Bayer CropScience to reach around €20bn ($26.3bn) and €8bn respectively, it added.
The Bayer Group also aims to increase sales of its pharmaceuticals segment to €11.5bn by 2014, compared with €9.9bn in 2011.
“I am optimistic about Bayer’s medium-term development overall,” said Bayer Group CEO Marijn Dekkers.
Earlier this week, global analysts Bernstein Research forecast a weak first quarter for Bayer MaterialScience because of continually tightening margins in January.
Bayer’s net profit for 2011 grew by 89.9% year on year to €2.5bn, while sales rose by 4.1% to €36 | http://www.icis.com/Articles/2012/03/14/9541305/germanys-bayer-aims-to-strengthen-lead-in-high-tech-materials.html | CC-MAIN-2015-06 | refinedweb | 190 | 58.79 |
repeater node with sensor
can i combine a repeater node with sensor together?
i want to cover a large area with sensors and i need some thep/humid sensors on the way so i thought it could be nice to use the same node for the two missions.
to avoid sleep time for the repeater, and continous sampling time from the sensor, i manged to copy a function from sparkfun flow sensor sketch to calculate milliiseconds between readings (there it was 1000 ms and i want it to be 600000 ms (every 10 min))
is it possibe? also, what sketch to delcare on setup? humidity or repeater?
Yes you can combine sensor and repeaters.
Use the sensor sketch, but change gw.begin according to an change gw.sleep to gw.wait
I think that's all that needs to be done.
You sure can. I made early on a MotionSensorRepeater using 1.5.0 . The gotcha is you need to implement some sort of NON_BLOCKING rate limiting so the sensor doesn't spam the repeater. If you use DELAY(), you will lose messages. A lot of them.
Also, I'm using a junky Sparkfun PIR sensor for this, with NO potentiometers for sensitivity or triggered duration, so I had to do duration in software. That's why the extra complication.
#define NODE_ID 3 #define CHILD_ID 5 // Id of the sensor child String ROOM_ID = " ID 5, living room repeater" ; #include <MySensor.h> #include <SPI.h> unsigned long MIN_TIME_TRIGGERED = 60000 ; // Time for sensor to report "off". Normally set on the PIR itself but this is junky sparkfun. boolean previousTripped = LOW; unsigned long PREVIOUS_TRIPPED_TIME = millis(); unsigned long MIN_RETRIGGER = 3500; #define DIGITAL_INPUT_SENSOR 3 // The digital input you attached your motion sensor. (Only 2 and 3 generates interrupt!) #define INTERRUPT DIGITAL_INPUT_SENSOR-2 // Usually the interrupt = pin -2 (on uno/nano anyway) MySensor gw; // Initialize motion message MyMessage msg(CHILD_ID, V_TRIPPED); void setup() { gw.begin(NULL, NODE_ID, true); // Send the sketch version information to the gateway and Controller gw.sendSketchInfo("Motion Sensor", "1.0"); pinMode(DIGITAL_INPUT_SENSOR, INPUT); // sets the motion sensor digital pin as input // Register all sensors to gw (they will be created as child devices) gw.present(CHILD_ID, S_MOTION); } void loop() { boolean tripped = digitalRead(DIGITAL_INPUT_SENSOR) == HIGH; // Sensor must be HIGH and not yet sent the status of the sensor if ((tripped == HIGH) && (tripped != previousTripped) ) { Serial.print(tripped); Serial.println(ROOM_ID); gw.send(msg.set(tripped ? "1" : "0")); // Send tripped value to gw previousTripped = tripped; PREVIOUS_TRIPPED_TIME = millis(); } // Is sensor HIGH, already sent to gateway, and under the minimum time to trigger? Then don't send if ( (PREVIOUS_TRIPPED_TIME + MIN_RETRIGGER) < millis() ) { if ( (tripped == HIGH) && (tripped == previousTripped) ) { Serial.println("Counter Reset"); PREVIOUS_TRIPPED_TIME = millis(); } } // Is sensor low and not sent? Then send LOW to gateway if (tripped == LOW) { if ( ((PREVIOUS_TRIPPED_TIME + MIN_TIME_TRIGGERED) <= millis() ) && (tripped != previousTripped) ) { Serial.print(tripped); Serial.println(ROOM_ID); gw.send(msg.set(tripped ? "1" : "0")); // Send tripped value to gw previousTripped = tripped; PREVIOUS_TRIPPED_TIME = millis(); } } gw.process(); // makes the node repeat }
Keep this in your loop otherwise it will not process messages:
gw.process();
PS @cranky beat me with an example
gw.wait() already calls process() so there is no need to call process if gw.wait is used
See
thanks a lot all of you,
cranky - i made a similar sketch using the same method with if XXXX > millis:
#include <SPI.h> #include <MySensor.h> #include <DHT.h> #define CHILD_ID_HUM 0 #define CHILD_ID_TEMP 1 #define HUMIDITY_SENSOR_DIGITAL_PIN 3 MySensor gw; DHT dht; float lastTemp; float lastHum; boolean metric = true; unsigned long oldTime; MyMessage msgHum(CHILD_ID_HUM, V_HUM); MyMessage msgTemp(CHILD_ID_TEMP, V_TEMP); void setup() { gw.begin(NULL, AUTO, true); dht.setup(HUMIDITY_SENSOR_DIGITAL_PIN); oldTime = 0; // Send the Sketch Version Information to the Gateway gw.sendSketchInfo("Repeater Node", "1.0"); // Register all sensors to gw (they will be created as child devices) gw.present(CHILD_ID_HUM, S_HUM); gw.present(CHILD_ID_TEMP, S_TEMP); metric = gw.getConfig().isMetric; } void loop() { if((millis() - oldTime) > 600000) { oldTime = millis();.process(); }
its just the regular humidity sensor with gw.begin in repeater mode and gw.process in the end(with no sleep time) and the "anti-spam" is the if((millis() - oldtime) > 600000) statement so i still have reading every 10 minutes.
is that ok or the dht22 delay can interfere with the repeater operation?
also, the send.sketchc should be "repeater" or "humidity"?
Using delay can cause the node to miss messages received during that time. Use gw.wait instead.
I would call the sketch something like "Repeater+Humidity". The name is just for you to make you remember which node does what. It is not used by the system.
By the way, your indentation is off. The code would be easier to read if you use Ctrl+T in the Arduino IDE (CMD+T on Mac)
delay(dht.getMinimumSamplingPeriod());
ok that ctrl+t is realy a nice tip. thanks.
about this delay - delay(dht.getMinimumSamplingPeriod());
it comes with the humidity sensor sketch and i thought it is crucial for the sensor to work.
did you meant replace this line with the gw.wait?
maybe i can delete it anyway because the whole loop doesnt perform until 10 minutes passed.
mfalkvidd means:
gw.wait(dht.getMinimumSamplingPeriod());
Looking at the code, there's no good reason to use that delay at all. You're already waiting for 600 seconds with the
if((millis() - oldTime) > 600000)
loop. The Delay is only recommended in the main codebase so that the sensor has time to regenerate the data. Some sensors have to wait for a bit or the measurement will be off, but we're talking about milliseconds here.
Just delete the whole line:
delay(dht.getMinimumSamplingPeriod());
and you should be perfectly fine. If you wanted to be on the safe side, you could do the following...
if((millis() - oldTime) > (600000 +dht.getMinimumSamplingPeriod() )
and that would include any delay that the sensor library might need.
All the best,
cranky
- Heizelmann last edited by Heizelmann
If cycle interval might become a parameter and for smaller values of this it is better to do it this way
if((millis() - oldTime) > max(cycleInterval, dht.getMinimumSamplingPeriod() )
- Heizelmann last edited by
... and to give the DHT sensor a chance on reading errors to retry and not to wait until next cycle which might be very long:
for (unsigned int i = 0; i < DHT_MAX_TRIES; i ++) { float temperature = dht.getTemperature(); if (isnan(temperature)) { Serial.println("Failed reading temperature from DHT"); gw.wait(dht.getMinimumSamplingPeriod()); } else if (temperature != lastTemp) { lastTemp = temperature; if (!metric) { temperature = dht.toFahrenheit(temperature); } gw.send(msgTemp.set(temperature, 1)); Serial.print("T: "); Serial.println(temperature); break; } }
Repeat the same for humidity of cause. | https://forum.mysensors.org/topic/2717/repeater-node-with-sensor/3 | CC-MAIN-2019-22 | refinedweb | 1,096 | 59.6 |
nose: a discovery-based unittest extension.
nose provides extended test discovery and running features for unittest.. responsive, since nose begins running tests as soon as the first test module is loaded. See Finding and running tests for more.
Setting up your test environment is easier
nose supports fixtures at the package, module, class,. nose comes with a number of builtin plugins, for instance:, functional tests will be run in the order in which they appear in the module file. TestCase derived tests and other test classes are run in alphabetical order.
Fixtures, whether or not the test or tests pass. For more detail on fixtures at each level, see below.
Test packages classes that do not descend from unittest.TestCase may also include generator methods, and class-level fixtures. Class level fixtures may be named setup_class, setupClass, setUpClass, setupAll or setUpAll for set up and teardown_class, teardownClass, tearDownClass, teardownAll or tearDownAll for teardown and must be class methods.
Test functions(): # ... def teardown_func(): # ... @with_setup(setup_func, teardown_func) def test(): # ...
For python 2.3 or earlier, add the attributes by calling the decorator function like so:
def 4.
nose, by default, follows a few simple rules for test discovery.
- If it looks like a test, it's a test. Names of directories, modules, classes and functions are compared against the testMatch regular expression, and those that match are considered tests. Any class that is a unittest.TestCase subclass is also collected, so long as it is inside of a module that looks like a test.
- Directories that don't look like tests and aren't packages are not inspected.
- Packages are always inspected, but they are only collected if they look like tests. This means that you can include your tests inside of your packages (somepackage/tests) and nose will collect the tests without running package code inappropriately.
- When a project appears to have library and test code organized into separate directories, library directories are examined first.
- When nose imports a module, it adds that module's directory to sys.path; when the module is inside of a package, like package.module, it will be loaded as package.module and the directory of package will be added to sys.path.
- If an object defines a __test__ attribute that does not evaluate to True, that object will not be collected, nor will any objects it contains.-2008 | https://bitbucket.org/jpellerin/nosedeprecated/src | CC-MAIN-2017-39 | refinedweb | 393 | 66.74 |
okular
#include "textpage.h"
#include "textpage_p.h"
#include <QDebug>
#include "area.h"
#include "debug_p.h"
#include "misc.h"
#include "page.h"
#include "page_p.h"
#include <cstring>
#include <QtAlgorithms>
#include <QVarLengthArray>
Go to the source code of this file.
Typedef Documentation
Definition at line 284 of file textpage.cpp.
Function Documentation
Add spaces in between words in a line.
It reuses the pointers passed in tree and might add new ones. You will need to take care of deleting them if needed
- Call makeAndSortLines before adding spaces in between words in a line
- Now add spaces between every two words in a line
- Finally, extract all the space separated texts from each region and return it
Definition at line 1821 of file textpage.cpp.
Calculate Statistical information from the lines we made previously.
For the region, defined by line_rects and lines
- Make line statistical analysis to find the line spacing
- Make character statistical analysis to differentiate between word spacing and column spacing.
Step 0
Step 1
Step 2
Definition at line 1386 of file textpage.cpp.
Definition at line 55 of file textpage.cpp.
Definition at line 63 of file textpage.cpp.
Definition at line 1127 of file textpage.cpp.
Definition at line 1135 of file textpage.cpp.
Definition at line 103 of file textpage.cpp.
Definition at line 108 of file textpage.cpp.
Create Lines from the words and sort them.
We cannot assume that the generator will give us texts in the right order. We can only assume that we will get texts in the page and their bounding rectangle. The texts can be character, word, half-word anything. So, we need to:
- Sort rectangles/boxes containing texts by y0(top)
- Create textline where there is y overlap between TinyTextEntity 's
- Within each line sort the TinyTextEntity 's by x0(left)
Definition at line 1292 of file textpage.cpp.
We will read the TinyTextEntity from characters and try to create words from there.
Note: characters might be already characters for some generators, but we will keep the nomenclature characters for the generator produced data. The resulting WordsWithCharacters memory has to be managed by the caller, both the WordWithCharacters::word and WordWithCharacters::characters contents
We will traverse characters and try to create words from the TinyTextEntities in it. We will search TinyTextEntity blocks and merge them until we get a space between two consecutive TinyTextEntities. When we get a space we can take it as a end of word. Then we store the word as a TinyTextEntity and keep it in newList.
We create a RegionText named regionWord that contains the word and the characters associated with it and a rectangle area of the element in newList.
Definition at line 1181 of file textpage.cpp.
Remove all the spaces in between texts.
It will make all the generators same, whether they save spaces(like pdf) or not(like djvu).
Definition at line 1156 of file textpage.cpp.
Returns true iff segments [
left1,
right1] and [
left2,
right2] on the real line overlap within
threshold percent, i.
e. iff the ratio of the length of the intersection of the segments to the length of the shortest of the two input segments is not smaller than the threshold.
Definition at line 78 of file textpage.cpp.
Definition at line 787 of file textpage.cpp.
Implements the XY Cut algorithm for textpage segmentation The resulting RegionTextList will contain RegionText whose WordsWithCharacters::word and WordsWithCharacters::characters are reused from wordsWithCharacters (i.e.
no new nor delete happens in this function)
- calculation of projection profiles
- Cleanup Boundary White Spaces and removal of noise
- Find the Widest gap
- Cut the region and make nodes (left,right) or (up,down)
Definition at line 1560 of file textpage.cpp.
Documentation copyright © 1996-2020 The KDE developers.
Generated on Sun May 24 2020 23:25:00 by doxygen 1.8.7 written by Dimitri van Heesch, © 1997-2006
KDE's Doxygen guidelines are available online. | https://api.kde.org/4.x-api/kdegraphics-apidocs/okular/html/textpage_8cpp.html | CC-MAIN-2020-24 | refinedweb | 655 | 66.44 |
the tgz:
the zip:
I get an illegal operation on startup. What is wrong with it? Thank you.
the tgz:
the zip:
I get an illegal operation on startup. What is wrong with it? Thank you.
This war, like the next war, is a war to end war.
You can't store data with these pointers until you point them to some valid allocated memory.You can't store data with these pointers until you point them to some valid allocated memory.Code:char * out; char * instring; char * foutstring; char * comm;
Why not use std::string instead?
I put char intring[100] and things like that as i got syntac error before ';'. However it still doesn't work. What am i doing wrong??
This war, like the next war, is a war to end war.
Try this code. I changed your character arrays to use STL strings and also changed the "end" command to "kill" so that the input file would work correctly.
Code:#include <iostream> #include <string> using namespace std; #include <cstring> #include <fstream> #include <conio.h> #include <windows.h> const char * file = "main.ec"; const char * outfile = "Out File.txt"; ofstream fout(outfile); string out; string instring; string foutstring; string comm; void output() { cout << out; } void newline() { cout << endl; } void input() { getline(cin, instring); } void fileout() { fout.open(outfile); fout << foutstring; fout.close(); } void filenewline() { fout.open(outfile); fout << endl; fout.close(); } void pause() { getche(); } void wait() { Sleep(1000); } void kill() { exit(0); } int main() { ifstream fin; fin.open(file); while (1) { getline(fin, comm); if(comm == "out") { getline(fin, out); output(); } else if (comm == "newline") { newline(); } else if (comm == "input") { input(); } else if (comm == "fileout") { getline(fin, foutstring); fileout(); } else if (comm == "filenewline") { filenewline(); } else if (comm == "pause") { pause(); } else if (comm == "sleep") { wait(); } else if (comm == "kill") { fin.close(); kill(); } } return 0; }
Hey, thanks, it works...
No only if i can learn OGL, i can make that scripted.... lol.
I owe you one, pal. Ill give you a PM if any more problems come up. And thanks again!
This war, like the next war, is a war to end war.
Glad to have helped. | https://cboard.cprogramming.com/cplusplus-programming/44756-scripting-language.html | CC-MAIN-2017-43 | refinedweb | 358 | 86.1 |
What's New in the Development Environment (C++)
In the Visual Studio Integrated Development Environment (IDE), the following features are new or are enhanced for Visual C++ 2005.
Browsing Source Code
The Call Browser window, which helps you easily navigate to code that either makes calls to a function, or makes calls from a function.
Inheritance browsing from Class View. For more information, see How to: Display Inheritance Graphs.
Live browsing enables features like Call Browser, Find Symbol Results Window, and all tool windows to operate for Visual C++ without generating a BSC file.
IntelliSense
Identifiers that are defined with the The #define Directive directive are now supported in IntelliSense.
Symbols from namespaces that are specified with the using Directive (C++) directive are now supported in IntelliSense.
List Members no longer populates completion lists with symbols from all common libraries, such as Win32, ATL, STL, and MFC. Instead, it populates them with symbols from header files included in your program with the #include directive.
Templates symbols are now fully supported in IntelliSense. Furthermore, Explicit Template Specializations and Partial Template Specializations are also fully supported in IntelliSense.
The scalability of IntelliSense has increased from a maximum of 16,000 files per solution to 65,535 files per solution, with a limitation of 65,536 symbols per file.
Application Wizards and Project Templates
Create New Project from Existing Code Files Wizard, which helps you port existing code into a new project. For more information, see How to: Create a C++ Project from Existing Code.
SQL Server Project Template, which helps you create class library projects for SQL Server 2005.
Project and Build System
VCBUILD.EXE, which builds Visual C++ projects and solutions from the command line. For more information, see VCBUILD Reference.
64-bit platforms support. For more information, see How to: Configure Visual C++ Projects to Target 64-Bit Platforms.
Property Sheets (C++), which enable you save project settings in files that you can apply to additional projects on multiple computers. Property sheets also allow you to create User-defined Macros.
Property Manager, which helps you manage property sheets. To display this feature, select the Property Manager menu item from the View menu.
Custom build rules, which are defined in Rule Files. This feature facilitates building file extensions that require external build tools.
Physical view in Solution Explorer, which is available through the Show All Files button. Now you can drag files from Windows Explorer onto project nodes in Solution Explorer. The Show All Files button displays all file references in your project.
The References node was removed from Solution Explorer. The new References, Common Properties, <Projectname> Property Pages Dialog Box enables you to add references to .NET assemblies, COM components, or project components, to your .NET projects.
Profile-Guided Optimizations (PGO) build commands available through the project context menu (right-click a project node) in Solution Explorer; and PGO project property settings available through project property pages.
Multiprocessor Builds, which help you build multiple projects simultaneously.
General Features
Visual C++ Settings, which are settings that customize the IDE for Visual C++ developers.
Unicode in the C/C++ Code and Text Editor, Resource Editors, IntelliSense, the Object Browser, wizards for Visual C++ Projects, and all tool windows.
RAD features (Dataset Designer and Server Explorer Window) for remote databases. | http://msdn.microsoft.com/en-us/library/6db3z985(v=vs.80).aspx | CC-MAIN-2014-35 | refinedweb | 547 | 56.76 |
SCA/SCA Component/SCA Signature
Contents
SCA signature
An SCA signature is an object which identifies an SCA element (a component, a service...) in a unique manner.
Or said differently, SCA signatures are computed element IDs, similar to a unique HashCode. They only make sense for SCA.
SCA signatures were necessary
Within SCA tools, SCA elements are manipulated by different technologies.
The SCA Composite Designer and the Composite editor rely on EMF and GMF. So, they manipulates EObjects.
The SCA XML editor and the Form editor rely on the WTP XML model. So, they manipulate DOM nodes.
The SCA builder rely on EMF. But it also deals with resource markers. It must be able to write identifiers on resource markers, so that editors will be able to find where they should display markers (which line for a source editor, which page for the form editor, which figure for the designer). That was the real problem. Be able to retrieve an SCA element in a model from a marker added by the SCA builder. And this, in EMF models, GMF diagrams and XML documents.
At first glance, an EMF URI could have worked.
Except that SCA tools deal with *.composite and *.composite_diagram files. All the editors edit directly the *.composite, except the designer which edits the *.composite_diagram. That's two resources working on the same contextual elements. But resource and EMF URIs would be different since the file is not the same.
More gnerally, resource URIs depend on resource locations. They are not based on the content of these resources. SCA signatures solve all these problems. They are computed from the content of SCA resources. If an SCA resource is moved or renamed, the associated signature does not change. It only changes with the content of the resource.
If you find two SCA elements with the same signature, there are only two possibilities. Either they are the same element, or the project contains errors. There is no other explaination. In a model instance, to find the element matching a given signature, you simply have to compute the signature of any element until you find one with the same signature. SCA signatures can be computed quickly and for any SCA element (in a composite, a constraining type - and soon for a component type). They can be created either from XML nodes or from EObjects.
SCA signatures in a concrete way
SCA signatures are computed from resource contents.
It looks like an XPath expression. It uses element names and sometimes also, attributes, which are used as identifiers.
Let's take an example by considering the composite below.
<?xml version="1.0" encoding="ISO-8859-15"?> <sca:composite xmlns: <!-- Test Component --> <sca:component <sca:implementation.java <sca:service <tuscany:binding.rmi xmlns: </sca:service> <sca:service<sca:binding.sca/></sca:service> <sca:service<sca:binding.sca/></sca:service> <sca:reference <tuscany:binding.rmi xmlns: </sca:reference> </sca:component> <!-- Tested Component --> <sca:component <sca:implementation.java <sca:service <sca:interface.java <tuscany:binding.rmi xmlns: </sca:service> <sca:reference<sca:binding.sca/></sca:reference> <sca:reference<sca:binding.sca/></sca:reference> </sca:component> <sca:wire <sca:wire <sca:wire <sca:service </sca:composite>
- The signature of the composite itself is
composite[TestRestaurantServiceComponent,]
- The signature of the first component service is
composite[TestRestaurantServiceComponent,]/component[TestComponent]/service[RunTestService]
- The signature of the last wire is
composite[TestRestaurantServiceComponent,]/wire[RestaurantServiceComponent/menuService -> TestComponent/MenuService]
Although they are readable, you should rather see SCA signatures like a long string identifier.
As you may have noticed also, an SCA signature is made up of segments. Each segment corresponds to an element in the XML tree.
An element signature has the signature of its ancestor (eContainer) as prefix.
Parts between square brackets are attributes which identify the given element. If one of these attributes changes, the element signature changes too.
Now, let's take a look at the way SCA signature segments are computed.
- For a composite, it is the element name "composite", followed by the two attributes that identify in a unique way a composite in a project scope: the name and the target namespace. In a project, you can't have two composites with the same name and target namespace. Otherwise, SCA composite inclusions can't work.
- For a component, a service, a reference and a property, it is the element name and the name attribute.
- For an include, it is the element name (include) and the name and target namespace of the included composite.
- For a wire, it is the element name (wire) and the source and target attributes (two wires can't have the same ones).
- For interfaces, implementations and bindings, the element name is enough. You can't have one of their ancestor having two such elements. That would be an invalid SCA application.
- For other elements, by default, only the element name is taken. It may change if new requirements appear.
To compute the signature of an SCA element, use one of the following:
new ScaSignature( EObject scaEObject ).toString();
new ScaSignature( Node scaNode ).toString();
SCA IDs and use cases
SCA signatures solved an important problem for SCA tools.
But they may also interest other tools.
Let's take the example of the STP-IM component.
The STP-IM aims at bridging STP tools thanks to a central meta-model defining SOA concepts and their relations. Each STP tool can then map the concepts it works on with the concepts represented in the intermediate model. One of the features the IM component provides is a builder to synchronize the different views within the tooling. As an example, we could imagine two BPMN and SCA models being linked through an IM instance. In this situation, some BPMN elements would be associated with certain SCA elements, and vice-versa. The association would be made through an STP-IM instance.
The IM builder aims at reporting modifications made in one model (e.g. the BPMN one) in the associated models (e.g. the SCA model in this example).
The issue is that it is not possible to intrinsically associate SCA elements with STP-IM elements. With BPMN, it is possible. The BPMN specification defines the graphical representation, not the way this representation is written. In the STP BPMN modeler, BPMN elements are associated a unique ID (UUID) by the modeler. This ID will never change as long as the element exists. Therefore, it is possible to associate this element with an IM element.
With SCA, this is not possible.
The specification defines the XML schemas (and also the graphical representation). Add an ID attribute, like in the BPMN modeler, would not be a clean solution. XML and EMF parsers would be able to deal with that. But only the editor which defines this ID would understand its meaning. Within SCA Tools, there are four editors, relying on 3 different model representations. Such an attribute would have to be strongly required in order to be correctly handled by all the editors. With BPMN, there is only one editor. That's the reason why it is difficult to imagine the IM builder remain synchronized with SCA resources.
But now, with SCA signatures, there is a basis of solution. An incomplete solution, but better than nothing.
SCA signatures are computed SCA IDs. They can be stored as string objects by the IM builder. To determine what has changed in an SCA model, the IM builder could proceed the following way:
- For very element of the first SCA model...
- Compute and store the SCA signature of the element.
- For every element of the second SCA model...
- Compute the SCA signature of the element. Search this signature in the first model signatures.
- If it is found, compare all the properties / attributes to determine what has changed.
- If it does not have it, then it may be a new element (store it).
- Process orphan the elements from both models (i.e. which were not found in the other SCA model).
- These elements may have been removed.
- Or their identifiers may have changed (e.g. if the name of a component has changed, then its signature has changed).
- In any case, the STP-IM builder would have to find a solution to deal with this last situation (it must be possible to compute the proximity between two signatures and/or provide a merge view). If we see the IM component as a helping tool, one nice approach could be to simply add markers on unresolved resources and let the user deal with that (rather than wanting the perfect matching in any situation and provide a wrong mapping).
Anyway, this is a suggestion of another use case for SCA signatures.
The discussion about this use case is obviously open. | http://wiki.eclipse.org/index.php?title=SCA/SCA_Component/SCA_Signature&oldid=226831 | CC-MAIN-2015-40 | refinedweb | 1,451 | 59.19 |
As I wrote about in my post “Playing with JDK 12’s Switch Expressions“, the JDK 12 Early Access Builds have made it easy to experiment with the implementation of JEP 325 [“Switch Expressions (Preview)”]. My post “JDK 12: Switch Statements/Expressions in Action” used code examples to demonstrate core features and characteristics of the enhanced
switch statement and the new
switch expression. In this post, I look at a special case explicitly called out in JEP 325 related to a runtime addition to an enum used in a
switch expression.
Because a
switch expression returns a value, it is necessary that all possible cases the
switch might encounter are handled via a
case (or covered by a
default for those not explicitly associated with a
case). JEP 325 states the following:
The cases of a switch expression must be exhaustive; for any possible value there must be a matching switch label. In practice this normally means simply that a default clause is required; however, have written code similar to that described in JEP 325 (“this is what developers do by hand today”) as discussed in my blog post “Log Unexpected Switch Options.” In the past, it was often wise to add logic for handling or logging
switch statement options that were not explicitly called out or handled in a
default. With the advent of
switch expressions via JDK 12 and JEP 325, it is now required.
JEP 325 addresses the case of a
switch expression on an enum and explicitly specifies how to support situations when all of an enum’s values were explicitly specified in
case clauses when the enum and code with the
switch expression were compiled, but then more values were later added to the enum without recompiling the
switch expression code using that enum.
To demonstrate this support, I will present a simple enum along with two examples based on JEP 325 and the JDK Early Access Build 10 to use that enum in a
switch statement and a
switch expression.
The following code listing shows a simple enum called
Response that only has two values.
package dustin.examples.jdk12.switchexp; /** * Enum representation of a response. */ public enum Response { YES, NO; }
The next code listing shows a class that includes two methods that use the enum shown above. One method uses a
switch statement against that enum and the other uses a
switch expression against that enum.
package dustin.examples.jdk12.switchexp; import static java.lang.System.out; /** * Demonstrates implicit handling of expanding enum * definition related to JEP 325 switch expressions and * switch statements. */ public class GrowingEnumSwitchDemo { public static void printResponseStringFromStatement(final Response response) { out.println("Statement [" + response.name() + "]:"); switch (response) { case YES: out.println("Si!"); break; case NO: out.println("No!"); break; } } public static void printResponseStringFromExpression(final Response response) { out.println("Expression [" + response.name() + "]:"); out.println( switch (response) { case YES -> "Si!"; case NO -> "No!"; }); } public static void main(final String[] arguments) { if (arguments.length < 1) { out.println("Provide an appropriate 'dustin.examples.jdk12.switchexp.Response' string as an argument."); System.exit(-1); } final String responseString = arguments[0]; out.println("Processing string '" + responseString + "'."); final Response response = Response.valueOf(responseString); printResponseStringFromStatement(response); printResponseStringFromExpression(response); } }
The code above (which is also available on GitHub) will compile without incident and when I execute the
main function on the
GrowingEnumSwitchDemo class and pass it the “YES” string, it works as expected. If I add a new value
MAYBE to the
Response enum and compile only that enum Java file and then run the
GrowingEnumSwitchDemo.main(String[]) with string “MAYBE”, I encounter an IncompatibleClassChangeError. The new
Response.java listing is shown next, followed by a screen snapshot that demonstrates the issue just described once the enum only was re-compiled with new value and run with the previously compiled calling code.
package dustin.examples.jdk12.switchexp; /** * Enum representation of a response. */ public enum Response { YES, NO, MAYBE; }
The presence of the IncompatibleClassChangeError makes it obvious immediately that there is a new value on the enum not previously handed by the
switch expression. This allows the developer to fix the
switch expression either by adding a
case for the enum value or by adding a catch-all
default. This is likely to be better than the current situation today where a
switch statement using the
:/
break syntax will silently move on without exception message in the same situation (which is also demonstrated in the previous code listing and screen snapshot).
There are several things to like about the enhancements coming to Java via JEP 325. The “arrow” syntax allows
switch expressions and
switch statements to not be burdened with surprising scope issues, risk of unintentional fall-through, or need for explicit
breaks. Furthermore,
switch expressions, which must return a value, can be used in conjunction with enums to ensure that all enum values are always handled at compile-time (won’t compile if all enum values are not handled at compile-time) or that an error is thrown if the enum being used has a value added to it and is used with the previously compiled client code. | https://www.javacodegeeks.com/2018/09/jdk-12-switch-expression-encountering-unanticipated-enum-value.html | CC-MAIN-2020-16 | refinedweb | 847 | 51.89 |
Religious Tolerance In Springfield and the Greenleaf Coven
cimerian@swbell.net
Religious intolerance is alive and well in Springfield Missouri. The
question is, to what degree, and in the terminology of "The Burning Times",
how high are the flames?
In 1999, the court case of Jean Webb vs. The City of Republic was a
headliner and major topic of conversation in this community. Republic, a
small suburb of Springfield which predominantly featured the ichthus, known
as a Christian symbol, on the city logo was under fire from a self-
proclaimed Witch who had the audacity to claim that this symbol violated
her religious rights.
Jean Webb, a member of the Wiccans, a non-Christian religion had moved to
Republic in 1995. She said that the fish symbol on the city seal made her
think her religious practices would not be tolerated in Republic so she and
her children concealed their beliefs. After she wrote an opinion piece in
the Republic newspaper opposing the city seal, Webb says she received hate
mail and harassing phone calls and that her children were ostracized. Webb
sued the city in July 1998. Her attorneys were from the American Civil
Liberties Union.
ACLU members searched for someone who lived in Republic for the lawsuit
after the Republic Board of Alderman refused to remove the fish symbol
voluntarily from the city seal. Jean Webb moved from the city to avoid the
harassment that she says she received, but Judge Clark said the suit could
continue even though she no longer lived there. Attorneys for Republic
argued that there was a dispute whether the ichthus is really a religious
symbol. They also argued that the city did not intend to endorse a
particular religion or exclude others by adopting the city seal.
U.S. District Judge Russell Clark ruled in favor of Webb first without a
trial in what's known as a summary judgment. Clark said the ruling is
appropriate because "there is no genuine issue of material fact present in
the case." Clark said, "Webb brings overwhelming evidence before the Court
to show that only one conclusion is possible: when viewing the fish on
Republic's flag, a reasonable observer would conclude that it is a
Christian religious symbol."
This stirred up all kinds of intolerance. Jean, a personal acquaintance of
mine, lost her home because she had to move, her husband died of a
respiratory ailment, her Coven found the public scrutiny too much to handle
so Jean released the Coven. She lost a lot. The intolerance towards her was
rather dramatic. It even culminated in a rather mild statement seen on
local cars in the form of a bumper sticker featuring the ichthus stating,
"Republic, this fish is for you". Quite a few people I personally know as
well as I, had cars with Wiccan or Pagan bumper stickers defaced around
this time.
Currently, in another suburb of Springfield called Ozark, there is a
sensational ongoing murder case of a woman of the "Wiccan faith" murdered
by another "Wiccan " with his "ritual knife" over a drug deal. This case is
too new to discuss here, but the religious connection has been mentioned in
the local paper, the News Leader.
The headlines in this laid back little Ozark town are slightly
inflammatory. Where are the flames today, and who or what has created
tolerance from intolerance?
Republic has since removed the ichthus from it's city vehicles, flags,
stationary, and literally anywhere it was displayed. Few bumper stickers
are seen now stating, "This fish is for you". Stores such as Renaissance
Books and Gifts, the local New Age/Craft store does not report any hate
mail or problems like in the early days when it first opened, and no one
wearing a pentagram has been reportedly asked to leave any of our local
establishments as in other Ozark cities in earlier years. My car with it's
large blue and black Wiccan bumper sticker has not been attacked in the
last 8 years, and unlike in the 80's in Springfield, no one has found their
pets, again, dead in the mailbox. No Pagan person's business has been
burned to the ground because it was found out they were Pagan, as happened
here in the 70's.The local Unitarian Universalist Church with it's Covenant
Group of Earth Based Religions has not been targeted for adverse publicity,
as was done also in the 80's. The local public TV station continues to air
the Pagan Pagan Show Friday nights on Channel 26 without protest or
protestors. The Annual Pagan Pride Picnic is held every year at Nathaniel
Green Park with no problems noted from the public. The First Annual Witches
Ball was held with guests Gavin and Yvonne Frost and daughter Bronwyn, and
no picketers visited.
In Springfield Missouri, the home of the rather intolerant Assemblies of
God World Wide Ministries, Pagans and Wiccans are today alive and well,
living and seemingly tolerated in this Ozark Community. Well, well. How do
you suppose THAT happened. Inflammatory headlines, but where are the
flames? Where did they go, and why did they go away?
Go and get a copy of Drawing Down the Moon by Margot Adler. Go ahead. It
won't bite. Turn to page 514 and look for Greenleaf Coven. Oh, can't find
a copy?
"Greenleaf Coven. Greenleaf is a loosely knit group of Witches, Pagans,
Shamans, and Independents who come together for worship, work and
fellowship. The traditions are eclectic, though primary influences from
Celtic and Germanic traditions, and there are ties with the Church and
School of Wicca. Greenleaf is totally public, publishes Spider, a free
monthly newsletter, and considers its primary focus to educate and
communicate. It also presents classes, marches in the local St. Patrick's
Day parade, communicates with the media, has an active prison ministry, and
works with the homeless and people with AIDS."
Who are these Greenleaf people and what are they doing for our Community?
Located here in Springfield Missouri since 1976 when they were originally
chartered by the Church and School of Wicca, Greenleaf became a legal
Church in l993.Greenleaf's web site says, "We teach karma, reincarnation,
and the Law of Multiple return. The Mother Earth and all life is sacred and
should be treated with love and respect. We encourage people to express
gratitude every day for all that we are given. We also encourage people to
ask questions, to recognize and develop their ability and talents, and to
try to get through life doing good and causing as little harm as possible."
How do they do that? Well first of all, Greenleaf began publishing it's
newsletter now called Spider, but originally called The Web, in
approximately 1986. Targeting the Wiccan/Pagan community, it was filled
with recipes, gossip, cheery news and quotes, and liberally spiced with
teachings of Wicca. Cleverly interwoven in this publication was networking
information. Yes, actually how to meet (gasp) witches. However, in
addition to handing this little publication out free to all stores and
persons even remotely related to Wicca, it was also hand delivered to law
enforcement officials in Springfield. The local Springfield Police
Department was the first stop, right along with popular record stores,
candle shops, and head shops. Then soon, the three local TV stations were
added to the list, shortly followed by the PBS TV Channel, and the local
Newspaper mentioned previously called The News Leader. What an idea. Let
the public know who we are, and what we are doing, and combat intolerance
in a positive manner.
Next, Greenleaf approached an offshoot of the Unity Church called the
Oneness Center and other businesses like the now defunct Celestial Horizons
Bookstore, and began having public classes. Free classes were held. In
hosting and teaching these classes, people from Greenleaf dressed casually,
in a manner completely non-descript, and usual for this town and climate.
Again, what an idea. Let the public see who we are, and in a positive, non-
threatening manner.
Shortly after open Wicca classes began, open Sabbats were held in local
parks and other public places, like Busiek State Park. Obtaining the
necessary permits was easy, and again dress was casual. Some times the
Newsleader came and did stories and took pictures. Seeing these sources,
local colleges contacted Greenleaf for speakers to come and do
presentations to college classes.
In l997, having established a working relationship with the local PBS
Channel, the Pagan Pagan Show was born. Still featured Fridays, it has been
running for 5 years now. Greenleaf says again, "Every week we stick our
faces out there to let our town know who we are and what we are about. We
let them see our kids, our pets, and we tell them about ourselves. We pass
along information about the Wheel of the Year, the Gods and Goddesses, and
a little bit about what we do in ritual. Remember, most people do not know
anything about Wicca, and most of what they hear or think they know is not
true. We are trying to dispel the myths. The Pagan Pagan Show can been
seen in areas of New York and Illinois. Segments of the show have been
featured in articles carried by "USA Today" and Greenleaf has been
contacted by "The Daily Show" on Comedy Central."
Members of the show and the Church continue to speak to local college
classes, and have expanded to now make themselves available and are called
in on Police matters as expert witnesses. In October 2000, they were
featured on local radio stations US 97 and Alice 95.5 to promote the
Samhain Seminar and First Annual Witches Ball.
Now new groups like Portal of Light and DragonStar Rising Coven have
reopened The Pantry, originally opened in about 1996. The Pantry is the
only free food bank in town that delivers and is open on weekends and
evenings for people who have jobs. I was nominated for Springfield Woman of
the Year in 1998 in the KGBX program sponsored by Today's Woman Magazine
for my work in forming and running this Pagan Organization. The Pantry is
an approved Community Alternative Sentencing Program site where offenders
can do their community service that is court ordered.
All of this activity has been taken, not with a plan to combat intolerance,
but to serve our community and make Wicca accessible to seekers of the
faith. It has been a realistic progression of becoming public, and
coordinating with the community.
Greenleaf began the process, and now others of us are following.
The causes of intolerance are too numerous to mention. Shall we start with,
"Thou shalt not suffer a witch to live". Yes, even in Springfield Missouri,
occasionally intolerance continues. Last week the HP of Greenleaf was
yelled at due to her beliefs. But then again, last week Greenleaf won third
prize in the Annual St. Patrick's Day Parade for their parade float, and
were awarded a trophy and $100.
I am able to wear my pentagram, display my bumper stickers, run my Coven,
hold classes, run The Pantry and go to a public park for the Pagan Pride
Picnic because Greenleaf has worked all these years, and taken all these
steps, realistically, lovingly, and with the guidance of the God and
Goddess to overcome intolerance in my community. Greenleaf, thank you.
Adler, Margot Drawing Down the Moon ( Penguin Books 1996)
The News Leader, Springfield Missouri, KY News Staff l999
yahoogroups.com Greenleaf Coven, March 2001
DragonStar Rising Coven and Portal of Light Springfield Missouri 65806
Patricia Allgeier HPS Greenleaf Coven Box 924 Springfield Missouri 65801
KGBX Radio and Today's Woman Journal, Laura Scott, Springfield Missouri
Suggested Pdf Resources
- Liste erstellt von Frau Prof. Sherryl Berg-Ridenour - FBG eG
- Business and Religion in the American 1920's, Rolf Lunden . Doggett of Springfield: A Biography of Laurence Locke Doggett, Ph.D.
-
- Download guide in PDF format (Filename = 1015000A.pdf, Size
- Religion. Collation: 96 p. ; 22 cm; Sabin No.
- microformguides.gale.com
- The American Genealogical and Biographical
- net.lib.byu.edu
Suggested Web Resources
- Religious Tolerance In Springfield and the Greenleaf Coven | RM
- Religious intolerance is alive and well in Springfield Missouri. The question is, to what degree, and in the terminology of "Th...
-
- Religious Tolerance In Springfield And The Greenleaf Coven | RM
-
- Symbol Etymology | RM.com ®
-
- Utopia Religious Utopia | RM.com ®
-
- Links of Interest
- Ontario Consultants on Religious Tolerance Good reference for anything The most well-known coven in Springfield, and very friendly and open to questions.
- organizations.missouristate.edu
Related searchescritical realism locke and descartes
dream interpretation auditorium
empathy empathy in animals
helicopter stability
music of argentina chamam
1054 ad
human sacrifice celtic sacrifice
sparta constitution | http://www.realmagick.com/7095/religious-tolerance-in-springfield-and-the-greenleaf-coven/ | CC-MAIN-2015-22 | refinedweb | 2,126 | 61.87 |
Hello,
i’m develop an app Ionic 5 with Capacitor.
My app needs to identify the devices by users, and for that, i use the Device API to get uuid :
import { Plugins } from '@capacitor/core'; const { Device } = Plugins; const info = await Device.getInfo(); this.deviceId = info.uuid;
With Android 10 we know that our apps can’t access IMEI numbers of devices running Android 10 :
Starting in Android 10, apps must have the READ_PRIVILEGED_PHONE_STATE privileged permission in order to access the device's non-resettable identifiers, which include both IMEI and serial number. Caution: Third-party apps installed from the Google Play Store cannot declare privileged permissions.
My app is a third-party app, so what is the impact of usage of the uuid number used in my application?
Will the above code still work on android 10?
Thank you in advance for your assistance. | https://forum.ionicframework.com/t/ionic-5-capacitor-restriction-to-get-imei-on-android-10/191234 | CC-MAIN-2021-04 | refinedweb | 145 | 51.89 |
Up tonight, some more behavior driven development of the Dart version of the ICE Code Editor. The ICE Code Editor is the thing that I have kids use while coding in 3D Game Programming for Kids, so it's got to be right. While adding new features to the original JavaScript version of ICE, it seemed like things were starting to slip: bugs, documentation, browser support—all the things Dart excels at. So why not convert to Dart?
I need to be able to set the content of the actual code editor (ICE has a code editor and preview layer for visualizations). I have js-interop working with ACE code editor to create a running instance of ACE. Hopefully I can continue to use that instance to do things like setting the content.
This is useful when the editor gets embedded on webpages. It is also necessary when switching between projects. In the JavaScript version, I had to use silly setter and getter methods (
setContent()/
getContent()). In Dart, I can make them real gettters and setters.
I start with a test to verify that I can set the content. The test will exercise both the setter and getter on the editor:
When I first run the test, I get a failure because I have not added the necessary code to ICE:When I first run the test, I get a failure because I have not added the necessary code to ICE:
test("can set the content", () { var it = new Editor('ice'); it.content = 'asdf'; expect(it.content, equals('asdf')); });
FAIL: content can set the content Caught Class 'Editor' has no instance setter 'content='. NoSuchMethodError : method not found: 'content=' Receiver: Instance of 'Editor' Arguments: ["asdf"]I can make the test pass fairly easily:
class Editor { // ... String _content; set content(String data) { _content = data; } get content => _content; }Heck, I could have defined a regular instance variable and gotten the default instance variable setter and getter methods. In either case, I am not doing what I want—setting and getting the the ACE content. So, with a passing test, I go about changing the implementation:
class Editor { var ace; Editor(el, {this.edit_only:false, this.autoupdate:true, this.title}) { ace = js.context.ace.edit(el); } set content(String data) { ace.setContent(data); } // ... }When I run the tests again, I get a failure:
FAIL: content can set the content Caught NoSuchMethodError : method not found: 'setContent' Receiver: Instance of 'Proxy' Arguments: ["asdf"]At first, the reference to
Proxymakes me think that I need to do more js-interop setup than I had been doing. Eventually, I realize that this is not the case. I should be able to call JavaScript methods on JavaScript objects from Dart—as long as the methods are actually defined in JavaScript.
I had simply gotten the method name wrong. So I switch to the correct
setValue():
import 'dart:html'; import 'package:js/js.dart' as js; class Editor { bool edit_only, autoupdate; String title; var _ace; Editor(el, {this.edit_only:false, this.autoupdate:true, this.title}) { _ace = js.context.ace.edit(el); } set content(String data) { _ace.setValue(data, -1); } String get content => _ace.getValue(); }With that, I now have four passing tests:
unittest-suite-wait-for-done PASS: defaults defaults to auto-update the preview PASS: defaults defaults to disable edit-only mode PASS: defaults starts an ACE instance PASS: content can set the content All 4 tests passed.The last thing that I do is try it out in the browser, so I update the example page to set some initial test content:
import 'package:ice_code_editor/editor.dart' as ICE; main() { var ice = new ICE.Editor('ace'); ice.content = ''' main() { print("Dart rocks"); }'''; }Then, when I reload the page, I see:
I love having the test around to ensure that things continue to work, but there is something exciting about actually seeing it. This will serve as a stopping point for tonight. Up tomorrow: the preview element.
Day #739
I think this has a lot of potential for mixing Dart and JavaScript, esp. if there can be a tool that will take advantage of the many TypeScript interfaces to popular JavaScript libs to create Dart wrappers. The list of TS interfaces will only continue to grow () Then, we could test the proxy performance hit in various use cases. To run everywhere, the goal would be to use Dart2js and have a pure JavaScript codebase. We need to measure how thin the proxy layer could be between the Dart2js and the native JS. Hopefully, it can be made small and fast.
Interesting. Do you think there is significant benefit to using these libraries with a TypeScript wrapper vs. pure JavaScript?
So far, I don't see much benefit in the additional TypeScript layer. Then again, I'm only just getting started with Dart + JS integration so I'm definitely open to the possibility that there's benefit to be realized once the integration becomes more complicated.
I can see I'm having a hard time communicating this because I am not being clear. I'm saying Dart programmers can leverage the work of TypeScript programmers, who are busy creating typed interfaces to many of the JavaScript libs out there, if we had a tool to parse a TypeScript interface and convert it to a Dart wrapper class such as you've done. No TypeScript is involved in the final result. Should be possible, but I don't know how hard it is. The alternative is to write an interface to a JavaScrip lib in Dart by hand, which is obviously time consuming. Automating by leveraging the existing TS interface definitions, even though not in the correct language, would be more accurate. Dart and Js-interop does not provide for interface definitions, so for example, you had your setContent vs setValue problem. I'd like to see Dart provide some sort of interface mechanism to external JavaScript to make all this easier. I don't think this is going to happen. Now I say all this not because I think Dart programmers targeting the Dart VM should do a lot of external JavaScript calls, but for the benefit of those of us using Dart2js to target V8 and coexist with other JS libs. Js-interop should be really be a thin layer in this case.
Ahhh... I gotcha. Thanks for the clarification. I think you made sense the first time -- I just didn't think it through :)
But yeah, that approach seems like it could be all kinds of useful. Somebody should get on that ;-)
Well, it's probably not worth doing unless Dart2js strips out most of that js-interop proxy code. Anyone concerned about performance that needs to use existing js libs will just go use TypeScript. | https://japhr.blogspot.com/2013/05/setting-ace-content-from-dart.html | CC-MAIN-2018-09 | refinedweb | 1,128 | 62.17 |
Let's take a look at a very simple example of an asychronous task using jQuery and its ajax method to load a file of data.
task.spawn(function() { var data = yield $.ajax("text.html"); $('#result').html(data);});
All of the Task.js methods are within the task namespace.
The spawn function adds the task to the scheduler and starts runing it. The first instruction:
var data = yield $.ajax(url);
Uses the jQuery ajax method to download an HTML file. The ajax method returns a promise and the value of the promise is the content of the file.
The yield returns control to the scheduler which stores the Promise and looks to see if there is another Task ready to run. If there is then it runs it, if not it releases the UI thread. At some point the Promise resolves and its onResolve function is called. This reactivates the scheduler which unpacks the Promise's value and sends it off to the the yield using send(value).
This may sound complicated and it is only rough outline of what happens but the net effect is that the ajax call is performed asynchronously without blocking the UI or any other task and the contents of the file is stored in the data variable.
This looks to the programmer just as if ajax was a synchronous blocking method.
Notice that the user hasn't had to worry about Promises, call backs or anything else. You can also fade in a message when the data has been loaded:
var status = $('#status').hide().html('Download complete.'); yield status.fadeIn().promise();
The reason that this is interesting is that many of the jQuery methods that you already know are equiped to return a promise object if you ask for it. The fadeIn method has been able to return a Promise object since jQuery 1.6 although it isn't often used because it seems complicated. Used with Task.js it makes it seem easy to wait for the fadeIn to complete without blocking the UI thread. Notice that in there is no value returned by the Promise in this case so we don't make use of it.
Perhaps the most interesting thing about Task.js is that it makes it possible to write "tight" loops that in standard JavaScript would bring the system to a halt.
For example, if you write:
var i=0 while(true){ console.log(i); i=i+1 }
Then what happens it that the UI freezes and you don't see any values displayed in the log.
This is a common beginners mistake but with the help of Task.js this tight loop need no longer be avoided. All you have to do is yield now and again so that other tasks have the opertunity to do some work:
task.spawn( function() { var i=0 while(true){ console.log(i); i=i+1 yield task.sleep(1000); } });
Now everything works as expected and you will see the values appear in the console and the UI remains active as well as any other tasks you might have started running. The sleep method simply puts the task to sleep for 1000 milliseconds and the scheduler runs other tasks during this period.
You can also use the sleep method to demonstrate the operation of two tasks:
task.spawn(function() { var i=0 while(true){ console.log("A"); i=i+1; yield task.sleep(100); } }); task.spawn(function() { var i=0 while(true){ console.log("B"); i=i+1; yield task.sleep(1000);
} });
The first task prints an A every tenth of a second and the second task prints B every second. If you run the program you will see As and Bs interleaved in the log with ten times more As than Bs.
Of course you can't use this sort of approach at all widely until yield is better supported. There are ways of achieving the same results but none quite as elegant as using Task.js - it is a clever use of a facility introduced for one purpose for another.
If you look at the Task.js documentation then you will find that there are lots of additional methods and facilities for managing tasks - most you will never need to use.
The one requirement to make Task.js useful in the future is the use of the Promise object by asychronous methods. This is yet another good reason, if you needed one, for using Promises within your own asychronous routines.
jQuery, Promises & Deferred
jQuery Promises, Deferred & WebWorkers
To be informed about new articles on I Programmer, subscribe to the RSS feed, follow us on Google+, Twitter, Linkedin or Facebook, install the I Programmer Toolbar, [ ... ] | http://i-programmer.info/programming/javascript/5815-taskjs-asynchronous-tasks-in-javascript.html?start=1 | CC-MAIN-2014-10 | refinedweb | 784 | 73.47 |
Listening to one of Full Stack Radio’s latest episodes, I was very impressed by the expertise of Matt Biilmann, CEO of Netlify. Adam Wathan and Matt talked a lot about how global state is handled in the Netlify web application. Although the Netlify app is built with React and Redux when he spoke of his philosophy for structuring the global state of the app, it motivated me to think a little more about this topic in the context of Vue.js and Vuex.
Global state best practices
The first rule you should bear in mind when dealing with global state is that it is not a panacea for all your state-related problems. I recommend that you always use your Vuex store as a means of last resort and only use it when there is a reason to do so. Always consider the alternatives to putting state into Vuex.
The second rule is to keep your global state tree flat. This means that you should not have nested entities like article data with the corresponding author information as nested objects in your state. Instead, lists of articles and authors should be separated.
Problems with deeply nested Vuex state
One of the main problems with a nested state tree is that it is more difficult to keep all your data up to date and synchronized. Suppose you have a few articles of the same author in your state, and now the author changes their profile, and at the same time, the user loads a new article of that author. Now the newly loaded article shows a different author profile than the rest of the articles that were loaded before the author updated their profile.
const articles = [ // This article was loaded first. { author: { avatar: '', id: 1, name: 'Jane Doe', }, id: 1, intro: 'Lorem ipsum dolor sit amet, consetetur sadipscing elitr.', title: 'Lorem Ipsum', }, // Here you can see that this article, // which was loaded later, references a // different avatar image. { author: { avatar: '', id: 1, name: 'Jane Doe', }, id: 2, intro: 'Stet clita kasd gubergren, no sea takimata sanctus est.', title: 'Dolor sit', }, ];
If you store author data and article data separately instead, there is only one author entry in your state and you can update this entry every time you fetch a new article.
const articles = { // IDs as keys to avoid duplicate // entries and enable easy access. 1: { // Reference authors by ID. author: 1, id: 1, intro: 'Lorem ipsum dolor sit amet, consetetur sadipscing elitr.', title: 'Lorem Ipsum', }, 2: { author: 1, id: 2, intro: 'Stet clita kasd gubergren, no sea takimata sanctus est.', title: 'Dolor sit', }, }; const authors = { // No duplicated author data anymore. 1: { avatar: '', id: 1, name: 'Jane Doe', }, };
In addition, nesting your state makes it more complicated to update deeply nested fields because you have to write quite complex code to get to the relevant property you want to update.
Normalizing Vuex state
In Vuex we can use modules to cleanly separate different entity types. And in addition to that we can use the concept of foreign keys, like in a traditional database, to relate certain entities to each other.
// src/store/modules/article.js import Vue from 'vue'; import { normalizeRelations, resolveRelations } from '../helpers'; import articleService from '../../services/article'; const state = { byId: {}, allIds: [], }; const getters = { // Return a single article with the given id. find: (state, _, __, rootGetters) => id => { // Swap ID referenes with the resolved author objects. return resolveRelations(state.byId[id], ['author'], rootGetters); }, // Return a list of articles in the order of `allIds`. list: (state, getters) => { return state.allIds.map(id => getters.find(id)); }, }; const actions = { load: async ({ commit }) => { const articles = await articleService.list(); articles.forEach((item) => { // Normalize nested data and swap the author object // in the API response with an ID reference. commit('add', normalizeRelations(item, ['author'])); // Add or update the author. commit('author/add', item.author, { root: true, }); }); }, }; const mutations = { add: (state, item) => { Vue.set(state.byId, item.id, item); if (state.allIds.includes(item.id)) return; state.allIds.push(item.id); }, }; export default { actions, getters, mutations, namespaced: true, state, };
Above you can see a simple implementation of a flat Vuex store module with
find and
list getters for conveniently returning a nested representation of our flat state.
The most interesting parts of this are the
normalizeRelations() and
resolveRelations() helper functions which help us to convert a nested state into a flat state and vice versa.
// src/store/helpers.js export function normalizeRelations(data, fields) { return { ...data, ...fields.reduce((prev, field) => ({ ...prev, [field]: Array.isArray(data[field]) ? data[field].map(x => x.id) : data[field].id, }), {}), }; } export function resolveRelations(data, fields, rootGetters) { return { ...data, ...fields.reduce((prev, field) => ({ ...prev, [field]: Array.isArray(data[field]) ? data[field].map(x => rootGetters[`${field}/find`](x)) : rootGetters[`${field}/find`](data[field]), }), {}), }; }
The use of these two simple helper functions requires that you follow the convention of always having an
id field for referencing other entities. If you have a more complex data structure you can use the normalizr package which was developed exactly for that use case.
Do you want to learn more about advanced Vue.js techniques?
Register for the Newsletter of my upcoming book: Advanced Vue.js Application Architecture.
Usage
Let’s take a look at how we can consume the data of our Vuex store in our Vue.js application.
<template> <div id="app"> <ArticleList : </div> </template> <script> // src/App.vue import { mapActions, mapGetters } from 'vuex'; import ArticleList from './components/ArticleList'; export default { name: 'App', components: { ArticleList, }, computed: { ...mapGetters('article', { articles: 'list' }), }, created() { this.loadArticles(); }, methods: { ...mapActions('article', { loadArticles: 'load' }), }, }; </script>
In our
App.vue component we map the relevant getter and action from our
article store module and we pass the
articles which we load in the
created() hook via
this.loadArticles() to an
ArticleList component.
<template> <ul class="ArticleList"> <li v- <h2>{{ article.title }}</h2> <p>{{ article.intro }}</p> <div class="ArticleList__author"> <img class="ArticleList__avatar" : {{ article.author.name }} </div> </li> </ul> </template> <script> export default { name: 'ArticleList', props: { articles: { required: true, type: Array, }, }, }; </script>
Here you can see that thanks to our getter function and the
resolveRelations() helper, we’re able to conveniently access the articles author data.
Wrapping it up
If you keep your Vuex state flat and avoid deeply nested state trees, it’s much easier to reason about your state architecture. And in my experience, it also makes it a lot easier when it comes to updating data in your Vuex Store. | https://markus.oberlehner.net/blog/make-your-vuex-state-flat-state-normalization-with-vuex/ | CC-MAIN-2020-50 | refinedweb | 1,069 | 55.64 |
Top Questions about the DataGrid Web Server Control
Mike Pope and Nikhil Kothari
Visual Studio Team
Microsoft Corporation
January 2002
Summary: Answers frequently asked questions about using the DataGrid Web server control. (17 printed pages)
Applies to
- Microsoft® Visual Studio® .NET
- ASP.NET
- Web Forms
- DataGrid Web server control
Introduction.
This.
This paper assumes that you are already familiar with the control — how to add it to a form and configure it to display data. You should also understand how to put a row in the grid into edit mode and other basic tasks. (For details, see DataGrid Web Server Control.) Finally, you will find it helpful to know how to work with templates — adding template columns to the grid and layout out controls inside a template.
Windows Forms versus Web Forms DataGrid Control
The Web Forms DataGrid control is not the same as the Windows Forms equivalent. It is a common (and not unreasonable) assumption that they are the same control, or at least have identical functionality. However, the entire programming paradigm for Web Forms is quite different from that for Windows Forms. For example, Web Forms pages perform a round trip to the server for any processing; they must manage state; they feature a very different data-binding model; and so on.
Because of these differences, there are also significant differences in their respective controls, including the DataGrid control. As a general rule, the Web Forms DataGrid control includes less built-in functionality. A few examples of differences in the Web Forms DataGrid control are:
- It.
On the other hand:
-. (Details are provided later in this paper.)
Controlling Column Width, Height, and Alignment
By default, the DataGrid control sizes rows and columns to fit the overall height and width that you have assigned to the grid. Within the overall grid width, it sizes columns according to the width of the column heading text. All data is displayed left-justified by default.
To control column characteristics, you should turn off auto column generation by setting the AutoGenerateColumns property to false. In fact, you should set this property to true only for short-term uses, such as quick proof-of-concept pages or demonstrations. For production applications, you should add columns explicitly. The individual columns can be bound columns or template columns.
To set the column width, you create a style element for that column and then set the element's Width property to standard units (say, pixels). The following example shows you what the HTML syntax looks like for an ItemStyle element with its Width property set.
Alternatively, you can do the same thing by setting the ItemStyle property directly in the element, as in the following example:
You can set alignment using the style element, setting it to "Right," "Left," and other values defined in the HorizontalAlign enumeration. (In Visual Studio, alignment is available for individual columns in the Format tab of the grid's Property builder.) The following is an example:
You can also set a column's height using the style element (or the ItemStyle-Height property). You will probably find this less flexible than setting the width, since setting the height for one column sets it for all of them.
You can set the width in code at run time as well. One place to do so is in an ItemCreated event handler. The following example sets the width of the first two columns to 100 and 50 pixels, respectively:
' Visual Basic Private Sub DataGrid1_ItemCreated(ByVal sender As Object, _ ByVal e As System.Web.UI.WebControls.DataGridItemEventArgs) _ Handles DataGrid1.ItemCreated e.Item.Cells(0).Width = New Unit(100) e.Item.Cells(1).Width = New Unit(50) End Sub // C# private void DataGrid1_ItemCreated(object sender, System.Web.UI.WebControls.DataGridItemEventArgs e) { e.Item.Cells[0].Width = new Unit(100); e.Item.Cells[1].Width = new Unit(50); }
Of course, there is little sense in setting a fixed width in code that you could set at design time. You would normally do this only if you wanted to set the width based on a run-time value. You can set the width of a cell or control in units (typically pixels), but it is not straightforward to translate the length of data — which is simply a character count — into pixels. But the data is available for you to examine when you are creating the item.
Customizing Column Layout in Display and Edit Mode
By default, the grid displays data in pre-sized columns. When you put a row into edit mode, the control displays text boxes for all editable data, regardless of what data type the data is.
If you want to customize the content of a column, make the column a template column. Template columns work like item templates in the DataList or Repeater control, except that you are defining the layout of a column rather than a row.
When you define a template column, you can specify the following template types:
- The ItemTemplate allows you to customize the normal display of the data.
- The EditItemTemplate allows you to specify what shows up in the column when a row is put into edit mode. This is how you can specify a control other than the default text box for editing.
- A HeaderTemplate and FooterTemplate allow you to customize the header and footer, respectively. (The footer is only displayed if the grid's ShowFooter property is true.)
The following example shows the HTML syntax for a template column that displays Boolean data. Both the ItemTemplate and EditItemTemplate use a check box to display the value. In the ItemTemplate, the check box is disabled so that users do not think they can check it. In the EditItemTemplate, the check box is enabled.
<Columns> <asp:TemplateColumn <ItemTemplate> <asp:Checkbox </asp:Checkbox> </ItemTemplate> <EditItemTemplate> <asp:Checkbox </asp:Checkbox> </EditItemTemplate> </asp:TemplateColumn> </Columns>
Note If you use a CheckBox control in the EditItemTemplate, be aware that at run time, the grid cell actually contains several LiteralControl controls (for spacing) in addition to the check box itself. Whenever you know the ID of the control whose value you want, use the FindControl method to create a reference to it, rather than using specific indexes into the Cells and Controls collections:
In Visual Studio, you can use the grid's Property builder to create the template column and use the template editor to specify the layout. In the Columns tab of the Properties window page for the grid, select the column and at the bottom, click Convert this column into a Template Column. Close the Properties window, right-click the grid, and choose Edit Template. You can then drag controls from the Toolbox into the template and add static text.
Formatting Dates, Currency, and Other Data
Information in a DataGrid control is ultimately displayed in an HTML table in the Web Forms page. To control how data is displayed, therefore, you can specify .NET string formatting for column values.. A slightly confusing aspect of format strings is that the same specifier — for example, "D" — can be applied to different data types (integers, dates) with different results.
Note In Visual Studio, you can specify a formatting expression in the Columns tab of the control's Property builder.
Some example formatting strings are listed in the following table. For more information, see the topics Formatting Types and BoundColumn.DataFormatString Property in the Visual Studio documentation.
Showing and Hiding Columns Dynamically
One way to have columns appear dynamically is to create them at design time, and then to hide or show them as needed. You can do this by setting a column's Visible property. The following example shows how to toggle the visibility of the second column (index 1) of the grid:
Adding Columns Dynamically
You can hide and show columns if you know in advance what columns you need. Sometimes, however, you do not know that until run time. In that case, you can create columns dynamically and add them to the grid.
To do so, you create an instance of one of the column classes supported by the grid — BoundColumn, EditCommandColumn, ButtonColumn, or HyperlinkColumn. (You can add template columns to the grid, but it is slightly more complex. For details, see Creating Web Server Control Templates Programmatically.) Set the column's properties, and then add it to the grid's Columns collection.
The following example shows how to add two bound columns to a grid.
' Visual Basic Private Sub Button1_Click(ByVal sender As System.Object, _ ByVal e As System.EventArgs) Handles Button1.Click 'Set data-binding properties of the grid DataGrid1.AutoGenerateColumns = False DataGrid1.DataSource = Me.dsBooks1 DataGrid1.DataMember = "Books" DataGrid1.DataKeyField = "bookid" ' Add two columns Dim dgc_id As New BoundColumn() dgc_id.DataField = "bookid" dgc_id.HeaderText = "ID" dgc_id.ItemStyle.Width = New Unit(80) DataGrid1.Columns.Add(dgc_id) Dim dgc_title As New BoundColumn() dgc_title.DataField = "title" dgc_title.HeaderText = "Title" DataGrid1.Columns.Add(dgc_title) Me.SqlDataAdapter1.Fill(Me.dsBooks1) DataGrid1.DataBind() End Sub // C# private void Button1_Click(object sender, System.EventArgs e) { DataGrid1.AutoGenerateColumns = false; DataGrid1.DataSource = this.dsBooks1; DataGrid1.DataMember = "Books"; DataGrid1.DataKeyField = "bookid"; // Add two columns(); }
Any time that you add controls to a page dynamically, you have the problem of persistence. Dynamically-added controls (or in this case, columns) are not automatically added to the page's view state, so you are obliged to add logic to the page to make sure the columns are available with each round trip.
An excellent way to do this is to override the page's LoadViewState method, which gives you an early opportunity to reestablish columns in the DataGrid control. Because the LoadViewState method is called before the Page_Load event is raised, re-adding columns in the LoadViewState method assures that they are available for normal manipulation by the time any event code runs.
The following example shows how you would expand the previous example to restore the columns each time the page runs again. As before, the Button1_Click handler adds two columns to the grid. (In this example, the event handler calls a separate routine called AddColumns to do so.) In addition, the page contains a simple Boolean property called DynamicColumnsAdded indicating whether the grid has had columns added; the property persists its value in view state. The LoadViewState method first calls the base class's LoadViewState method, which extracts view state information and configures controls with it. If columns were previously added to the grid (as per the DynamicColumnsAdded property), the method then re-adds them.
' Visual Basic Private Property DynamicColumnAdded() As Boolean Get If ViewState("ColumnAdded") Is Nothing Then Return False Else Return True End If End Get Set(ByVal Value As Boolean) ViewState("ColumnAdded") = Value End Set End Property Protected Overrides Sub LoadViewState(ByVal savedState As Object) MyBase.LoadViewState(savedState) If Me.DynamicColumnAdded Then Me.AddColums() End If End Sub Private Sub Button1_Click(ByVal sender As System.Object, _ ByVal e As System.EventArgs) Handles Button1.Click ' Check property to be sure columns are not added more than once If Me.DynamicColumnAdded Then Return Else Me.AddColums() End If End Sub Protected Sub AddColums() ' Add two columns Dim dgc_id As New BoundColumn() dgc_id.DataField = "instock" dgc_id.HeaderText = "In Stock?" dgc_id.ItemStyle.Width = New Unit(80) DataGrid1.Columns.Add(dgc_id) Dim dgc_title As New BoundColumn() dgc_title.DataField = "title" dgc_title.HeaderText = "Title" DataGrid1.Columns.Add(dgc_title) Me.DataGrid1.DataBind() Me.DynamicColumnAdded = True End Sub // C# private bool DynamicColumnAdded{ get { object b = ViewState["DynamicColumnAdded"]; return (b == null) ? false : true; } set { ViewState["DynamicColumnAdded"] = value; } } protected override void LoadViewState(object savedState) { base.LoadViewState(savedState); if (DynamicColumnAdded) { this.AddColumns(); } } private void Button1_Click(object sender, System.EventArgs e) { if(this.DynamicColumnAdded != true) { this.AddColumns(); } } private void AddColumns() {(); this.DynamicColumnAdded = true; }
Adding New Records to a Data Source Using the DataGrid Control
The DataGrid control allows users to view and edit records, but does not inherently include the facility to add new ones. However, you can add this functionality in various ways, all of which involve the following:
- Adding a new, blank record to the data source of the grid (in the dataset or database). If necessary, you will need to assign an ID for the record and put placeholder values into it for any columns that cannot be null.
- Rebinding the DataGrid control to the source.
- Putting the grid into edit mode for the new record. You need to be able to determine where in the grid the new record appears.
- Updating the record normally when the user clicks Update, thereby writing the new record to the source with user-provided values.
The following example shows the process for adding the new record, binding the grid, and putting it into edit mode. In this example, the data source is a dataset (DsBooks1 or dsBooks1) containing a table called "Books."
' Visual Basic Private Sub btnAddRow_Click(ByVal sender As System.Object, _ ByVal e As System.EventArgs) Handles btnAddRow.Click Dim dr As DataRow = Me.DsBooks1.Books.NewRow dr("title") = "(New)" dr("instock") = True Me.DsBooks1.Books.Rows.InsertAt(dr, 0) Session("DsBooks") = DsBooks1 DataGrid1.EditItemIndex = 0 DataGrid1.DataBind() End Sub // C# private void btnAddRow_Click(object sender, System.EventArgs e) { DataRow dr = this.dsBooks1.Books.NewRow(); dr["title"] = "(New)"; dr["instock"] = true; this.dsBooks1.Books.Rows.InsertAt(dr, 0); Session["DsBooks"] = dsBooks1; DataGrid1.EditItemIndex = 0; DataGrid1.DataBind(); }
Some things to notice:
- This code runs when a user clicks an Add button somewhere in the page.
- The new row is created using the NewRow method. It is then inserted into the dataset table using the InsertAt method, which allows you to place it at a specific, predefined location — in this case, as the first record in the table (that is, the first record in the Rows collection). Alternatively, you could add it to the end of the table, using the row count as the value. The important thing is that you know exactly where the row is in the table.
- Because you know that the record is in the first position of the table, you can set the grid's EditItemIndex value to zero to put the new row into edit mode. (If you created the row elsewhere in the table, you would set EditItemIndex to that location instead.)
- Because you have a new record in the dataset (but not yet in the database), you have to keep a copy of the dataset between round trips — you do not want to refill it from the database and lose the new record. Here, the code stores it in Session state. You need to reload the dataset from Session state when the page loads. The following example shows what your Page_Load handler might look like:
' Visual Basic Private Sub Page_Load(ByVal sender As System.Object, _ ByVal e As System.EventArgs) Handles MyBase.Load If Me.IsPostBack Then DsBooks1 = CType(Session("DsBooks"), dsBooks) Else Me.SqlDataAdapter1.Fill(Me.DsBooks1) Session("DsBooks") = DsBooks1 DataGrid1.DataBind() End If End Sub // C# private void Page_Load(object sender, System.EventArgs e) { if(this.IsPostBack) { dsBooks1 = (dsBooks) Session["DsBooks"]; } else { this.sqlDataAdapter1.Fill(this.dsBooks1); Session["DsBooks"] = dsBooks1; this.DataGrid1.DataBind(); } }
For information about maintaining state, see Web Forms State Management in the Visual Studio documentation.
You can update the record normally. For an example, see Walkthrough: Using a DataGrid Web Control to Read and Write Data in the Visual Studio documentation. After updating the dataset, update the database, then refresh the dataset. Be sure to save the refreshed dataset to Session state again. Here is an example of an update handler:
' Visual Basic Private Sub DataGrid1_UpdateCommand(ByVal source As Object, _ ByVal e As System.Web.UI.WebControls.DataGridCommandEventArgs) _ Handles DataGrid1.UpdateCommand Dim dr As Dataset.BooksRow 'Get a reference to row zero (where the row was inserted) dr = Me.DsBooks1.Books(0) Dim tb As TextBox = CType(e.Item.Cells(2).Controls(0), TextBox) dr.title = tb.Text Dim cb As CheckBox = CType(e.Item.Cells(3).Controls(1), CheckBox) dr.instock = cb.Checked Me.SqlDataAdapter1.Update(Me.DsBooks1) DataGrid1.EditItemIndex = -1 'Refresh the dataset from the database DsBooks1.Clear() Me.SqlDataAdapter1.Fill(Me.DsBooks1) 'Save the refreshed dataset in Session state agin Session("DsBooks") = DsBooks1 DataGrid1.DataBind() End Sub // C# private void DataGrid1_UpdateCommand(object source, System.Web.UI.WebControls.DataGridCommandEventArgs e) { dsBooks.BooksRow dr; //Get a reference to row zero (where the row was inserted) dr = this.dsBooks1.Books[0]; TextBox tb1 = (TextBox) e.Item.Cells[2].Controls[0]; dr.title = tb1.Text; CheckBox cb = (CheckBox) e.Item.Cells[3].Controls[1]; dr.instock = cb.Checked; this.sqlDataAdapter1.Update(this.dsBooks1); DataGrid1.EditItemIndex = -1; //Refresh the dataset from the database dsBooks1.Clear(); this.sqlDataAdapter1.Fill(this.dsBooks1); //Save the refreshed dataset in Session state agin Session["DsBooks"] = dsBooks1; DataGrid1.DataBind(); }
Displaying a Drop-Down List in Edit Mode
A common request is to present users with a drop-down list when a row is in edit mode. For example, the grid might show a list of books, including each book's genre. When users edit a book record, they might want to assign a different genre; ideally, they can select from a drop-down list that shows possible genre values such as "fiction," "biography," or "reference.". (Preselecting an item might not be an issue in all scenarios.)
There are many ways to populate the drop-down list. The following examples show you three possibilities: using static items; using records from a dataset; or by using a data reader to read information directly from a database.
Static Items
To display static items in the drop-down list, you do not data bind the control. Instead, you simply define items in the control's Items collection. In Visual Studio, you can invoke the Items collection editor from the Items property in the Properties window. Alternatively, you can add items in HTML view.
The following shows a complete column definition for a template column that displays the genre in display mode, and a static list of genre types in edit mode. The ItemTemplate contains a Label control whose Text property is bound to the "genre" field of the current record. The declarations for the static items in the EditItemTemplate are highlighted.
<asp:TemplateColumn <ItemTemplate> <asp:Label id=Label4 </asp:Label> </ItemTemplate> <EditItemTemplate> <asp:DropDownList <asp:ListItemfiction</asp:ListItem> <asp:ListItembiography</asp:ListItem> <asp:ListItemreference</asp:ListItem> </asp:DropDownList> </EditItemTemplate> </asp:TemplateColumn>
Dataset
If the data you want to display in the drop-down list is in a dataset, you can use ordinary data binding. The following shows the declarative syntax. The DropDownList control is bound to the Genre table in a dataset called DsBooks1. The data-binding settings are highlighted.
<asp:TemplateColumn <ItemTemplate> <asp:Label id=Label3 </asp:Label> </ItemTemplate> <EditItemTemplate> <asp:DropDownList id=DropDownList4 </asp:DropDownList> </EditItemTemplate> </asp:TemplateColumn>
Data Reader
You can also populate the drop-down list directly from a database. This method is more involved, but it can be more efficient, since you do not actually read the data from the database till the moment you need it.
A relatively easy way to do this is to take advantage of Web Forms data-binding expressions. Although it is most common to call the DataBinder.Eval method in a data-binding expression, you can in fact call any public member available to the page. This example shows you how to create a function that creates, fills, and returns a DataTable object that the drop-down list can bind to.
For this scenario, you will need to be able to execute a data command that gets the records you want. For example, you might define a data command whose CommandText property is
Select * from Genres. To simplify the example, it will be assumed that you have a connection object and a data command object already on the page.
Start by creating a public function in the page that creates a data table object and defines the columns you need in it. Then open the connection, execute the data command to return a data reader, and loop through the reader, copying the data to the table. Finally, return the table as the function's return value.
The following example shows how you can do this. In this case, there is only one column in the returned table ("genre"). When you populate a drop-down list, you usually need only one column, or two columns if you want to set the drop-down list's text and values to different columns.
' Visual Basic Public Function GetGenreTable() As DataTable Dim dtGenre As DataTable = New DataTable() If Application("GenreTable") Is Nothing Then Dim dr As DataRow Dim dc As New DataColumn("genre") dtGenre.Columns.Add(dc) Me.SqlConnection1.Open() Dim dreader As SqlClient.SqlDataReader = _ Me.SqlCommand1.ExecuteReader() While dreader.Read() dr = dtGenre.NewRow() dr(0) = dreader(0) dtGenre.Rows.Add(dr) End While Me.SqlConnection1.Close() Else dtGenre = CType(Application("GenreTable"), DataTable) End If Return dtGenre End Function //C# public DataTable GetGenreTable() { DataTable dtGenre = new DataTable(); if(Application["GenreTable"] == null) { DataRow dr; DataColumn dc = new DataColumn("genre"); dtGenre.Columns.Add(dc); this.sqlConnection1.Open(); System.Data.SqlClient.SqlDataReader dreader = this.sqlCommand1.ExecuteReader(); while(dreader.Read()) { dr = dtGenre.NewRow(); dr[0] = dreader[0]; dtGenre.Rows.Add(dr); } this.sqlConnection1.Close(); } else { dtGenre = (DataTable) Application["GenreTable"]; } return dtGenre; }
Notice that the function caches the table it creates into Application state. Since the table is acting as a static lookup table, you do not need to re-read it every time a different row is put into edit mode. Moreover, because the same table can be used by multiple users, you can cache it in the global Application state rather than in user-specific Session state.
The following shows the declaration for the template column. You will see that this is very similar to the syntax used for binding to a dataset table; the only real difference is that the DataSource binding calls your function. A slight disadvantage of this technique is that you do not get much design-type assistance from Visual Studio. Because you are defining the table to bind to in code, Visual Studio cannot offer you any choices for the DataMember, DataTextField, and DataValueField property settings. It is up to you to be sure that you set these properties to the names of the members you create in code.
<asp:TemplateColumn <ItemTemplate> <asp:Label id=Label1 </asp:Label> </ItemTemplate> <EditItemTemplate> <asp:DropDownList id=DropDownList1 </asp:DropDownList> </EditItemTemplate> </asp:TemplateColumn>
Preselecting an Item in the Drop-Down List
You often want to set the selected item in the drop-down list to match a specific value, usually the value displayed in the cell in display mode. You can do this by setting the SelectedIndex property of the drop-down list to the index of the value to display.
The following example shows a reliable way to do this in a handler for the DataGrid item's ItemDataBound event. This is the correct event to use, because it guarantees that the drop-down list has already been populated, no matter what data source the drop-down list is using.
The trick is in knowing what value to set the drop-down list to. Typically, the value is already available to you either in the current item (being displayed) or in the DataItem property of the current item, which returns a DataRowView object containing the current record. Once you have the value, you can use the DropDownList control's FindByText or FindByValue method to locate the correct item in the list; you can then use the item's IndexOf property to return the index.
' Visual Basic Private Sub DataGrid1_ItemDataBound(ByVal sender As Object, _ ByVal e As System.Web.UI.WebControls.DataGridItemEventArgs) _ Handles DataGrid1.ItemDataBound If e.Item.ItemType = ListItemType.EditItem Then Dim drv As DataRowView = CType(e.Item.DataItem, DataRowView) Dim currentgenre As String = CType(drv("genre"), String) Dim ddl As DropDownList ddl = CType(e.Item.FindControl("DropDownList1"), DropDownList) ddl.SelectedIndex = ddl.Items.IndexOf(ddl.Items.FindByText(currentgenre)) End If End Sub // C# private void DataGrid1_ItemDataBound(object sender, System.Web.UI.WebControls.DataGridItemEventArgs e) { if(e.Item.ItemType == ListItemType.EditItem){ DataRowView drv = (DataRowView) e.Item.DataItem; String currentgenre = drv["genre"].ToString(); DropDownList ddl = (DropDownList) e.Item.FindControl("DropDownList1"); ddl.SelectedIndex = ddl.Items.IndexOf(ddl.Items.FindByText(currentgenre)); } }
Selecting Multiple Items Using a Check Box (Hotmail Model)
In applications such as Microsoft Hotmail®, users can "select" rows by checking a box and then performing an operation on all the selected rows — for example, delete them or copy them.
To add functionality like this, add a template column to the grid and put a check box into the column. When the page runs, users will be able to check the items they want to work with.
To actually perform the user action, you can walk the grid's Items collection, looking into the appropriate column (cell) to see if the check box is checked. The following example shows how you can delete rows in a dataset corresponding to the items that a user has checked. The dataset, called dsBooks1, is assumed to contain a table called Books.
' Visual Basic Private Sub btnDelete_Click(ByVal sender As System.Object, _ ByVal e As System.EventArgs) Handles btnDelete.Click ' Walk the grid looking for selected rows Dim i As Integer = 0 Dim cb As CheckBox Dim dgi As DataGridItem Dim bookid As Integer Dim dr As dsBooks.BooksRow For Each dgi In DataGrid1.Items cb = CType(dgi.Cells(0).Controls(1), CheckBox) If cb.Checked Then ' Determine the key of the selected record ... bookid = CType(DataGrid1.DataKeys(i), Integer) ' ... get a pointer to the corresponding dataset record ... dr = Me.DsBooks1.Books.FindBybookid(bookid) ' ... and delete it. dr.Delete() End If i += 1 Next Me.SqlDataAdapter1.Update(DsBooks1) Me.SqlDataAdapter1.Fill(DsBooks1) DataGrid1.DataBind() End Sub // C# private void btnDelete_Click(object sender, System.EventArgs e) { int i = 0; CheckBox cb; int bookid; dsBooks.BooksRow dr; foreach(DataGridItem dgi in this.DataGrid1.Items) { cb = (CheckBox) dgi.Cells[0].Controls[1]; if(cb.Checked) { // Determine the key of the selected record ... bookid = (int) DataGrid1.DataKeys[i]; // ... get a pointer to the corresponding dataset record ... dr = this.dsBooks1.Books.FindBybookid(bookid); // ... and delete it. dr.Delete(); } i++; } this.sqlDataAdapter1.Update(this.dsBooks1); this.sqlDataAdapter1.Fill(this.dsBooks1); DataGrid1.DataBind(); }
Some points to note:
- You can determine whether the check box is checked by using the standard approach for getting a control value from a template column — getting an object from the Controls collection of the cell and casting it appropriately. If you are getting a Checkbox control, remember that it is usually the second control (index 1) because a literal control precedes it (even if it is blank).
- If you are deleting, you must do so by key and not by offset in the dataset. The index of an item in the DataGrid control might not match the index of the same record in the table. Even if it does at first, after the first record is deleted it will not. Here, the code gets the record key out of the grid's DataKey collection. It then uses the FindBy<key> method in the dataset table to locate the record to delete.
- After the records have been deleted from the dataset (technically, they are only marked for deletion), you delete them from the database by calling the data adapter's Update method. The code then refreshes the dataset from the database and re-binds the grid.
Editing Multiple Rows At Once
The standard way to edit rows in the DataGrid control — by adding an "Edit, Update, Cancel" button to the grid's columns — only allows users to edit one row at a time. If users want to edit multiple rows, they must click the Edit button, make their changes, and then click the Update button for each row.
In some cases, a useful alternative is to configure the grid so that it is in edit mode by default. In this scenario, the grid always displays editable data in text boxes or other controls; users do not explicitly have to put the grid into edit mode. Typically, users make whatever changes they want and then click a button (not a button in the grid) to submit all changes at once. The page might look something like the following:
Figure 1
You can use this style of editing grid with any data model, whether you are working against a dataset or directly against the data source using data commands.
To configure the grid for multiple-row edit, add the columns as you normally would and convert all editable columns to template columns. In the Columns tab of the grid's Property Builder, select the column and at the bottom of the window, choose Convert this column into a Template column. To edit the templates, right-click the grid and choose Edit Template.
Add the edit controls to the ItemTemplate. Note that you are not adding them to the EditItemTemplate, as you normally would, because the rows will not be displayed in edit mode. That is, the ItemTemplate will contain editable controls.
Set up data binding for the grid normally. You will need to bind each editable control individually. A typical data binding expression will look like this:
Loading the grid is no different than usual. Updating is slightly different, however, because when users click the Update button, you need to go through the entire grid, making updates for all the rows.
The following example shows one possibility. In this case, it is assumed that you are using a data command (dcmdUpdateBooks) that contains a parameterized SQL Update statement. The code walks through the grid, item by item, extracts values from the editable controls, and assigns the values to command parameters. It then executes the data command once for each grid item.
' Visual Basic Private Sub btnUpdate_Click(ByVal sender As System.Object, _ ByVal e As System.EventArgs) Handles btnUpdate.Click Dim i As Integer Dim dgi As DataGridItem Dim bookid As Integer Dim TextBoxTitle As TextBox Dim CheckBoxInStock As CheckBox Dim TextBoxPrice As TextBox Dim LabelBookId as Label For i = 0 To DataGrid1.Items.Count - 1 dgi = DataGrid1.Items(i) LabelBookId = CType(dgi.Cells(0).Controls(1), Label) bookid = CType(LabelBookId.Text, Integer) TextBoxTitle = CType(dgi.FindControl("TextBoxTitle"), TextBox) CheckBoxInStock = _ CType(dgi.FindControl("CheckBoxInstock"), CheckBox) TextBoxPrice = CType(dgi.FindControl("TextBoxPrice"), TextBox) Me.dcmdUpdateBooks.Parameters("@bookid").Value = bookid Me.dcmdUpdateBooks.Parameters("@Title").Value = TextBoxTitle.Text Me.dcmdUpdateBooks.Parameters("@instock").Value = _ CheckBoxInStock.Checked Me.dcmdUpdateBooks.Parameters("@Price").Value = TextBoxPrice.Text Me.SqlConnection1.Open() Me.dcmdUpdateBooks.ExecuteNonQuery() Me.SqlConnection1.Close() Next End Sub // C# private void btnUpdate_Click(object sender, System.EventArgs e) { int i; DataGridItem dgi; int bookid; TextBox TextBoxTitle; CheckBox CheckBoxInStock; TextBox TextBoxPrice; for(i = 0; i <= DataGrid1.Items.Count -1 ; i++) { dgi = DataGrid1.Items[i]; Label LabelBookId = (Label) dgi.Cells[0].Controls[1]; bookid = int.Parse(LabelBookId.Text); TextBoxTitle = (TextBox) dgi.FindControl("TextBoxTitle"); CheckBoxInStock = (CheckBox) dgi.FindControl("CheckBoxInStock"); TextBoxPrice = (TextBox) dgi.FindControl("TextBoxPrice"); this.dcmdUpdateBooks.Parameters["@bookid"].Value = bookid; this.dcmdUpdateBooks.Parameters["@Title"].Value = TextBoxTitle.Text; this.dcmdUpdateBooks.Parameters["@instock"].Value = CheckBoxInStock.Checked; this.dcmdUpdateBooks.Parameters["@Price"].Value = float.Parse(TextBoxPrice.Text); this.sqlConnection1.Open(); this.dcmdUpdateBooks.ExecuteNonQuery(); this.sqlConnection1.Close(); } }
Checking for Changed Items
One disadvantage of the update strategy illustrated above is that it can be inefficient to send updates to the dataset or database for each grid row if there have been only a few changes. If you are working with a dataset, you can add logic to check for changes between the controls in the grid and the corresponding columns in dataset rows. If you are not using a dataset — as in the example above — you cannot easily make this comparison, since it would involve a round trip to the database.
A strategy that works for both types of data sources is to establish a way to determine whether rows are "dirty" so you can check that before making an update..
Imagine that you want to follow this strategy for the example above. Create an instance of an ArrayList object as a member of the page class:
Then create a handler to add the book ID to the ArrayList object whenever a control is changed. The following code shows a handler that can be invoked when a TextBox control raises its TextChanged event or when a CheckBox control raises its CheckedChanged event:
' Visual Basic Protected Sub RowChanged(ByVal sender As Object, _ ByVal e As System.EventArgs) Dim dgi As DataGridItem = _ CType(CType(sender, Control).NamingContainer, DataGridItem) Dim bookidlabel As Label = CType(dgi.Cells(0).Controls(1), Label) Dim bookid As Integer = CType(bookidlabel.Text, Integer) If Not (bookidlist.Contains(bookid)) Then bookidlist.Add(bookid) End If End Sub // C# protected void RowChanged( object sender, System.EventArgs e) { DataGridItem dgi = (DataGridItem)(((Control)sender).NamingContainer); Label bookidlabel = (Label) dgi.Cells[0].Controls[1]; int bookid = int.Parse(bookidlabel.Text); if (!bookidlist.Contains(bookid)) { bookidlist.Add(bookid); } }
Note The method cannot be private, or you will not be able to bind to it later.
It is helpful to understand that change events do not, by default, post the page back to the server. Instead, the event is raised only when the page is posted some other way (usually via a Click event). During page processing, the page and its controls are initialized, and then all change events are raised. Only when the change event's handlers have finished is the Click event raised for the control that caused the post.
On to the RowChanged method illustrated above. The code needs to get the book ID out of the current item. The event does not pass the item to you (as it does for many DataGrid events, for example), so you have to work backwards. From the
sender argument of the event, get the NamingContainer property, which will be the grid item. From there, you can drill back down to get the value of the Label control that displays the book ID.
You need to check that the book ID is not already in the array. Each control in the row raises the event individually, so if there has been a change in more than one control, you could potentially end up adding the book ID to the array more than once.
The change events for controls are always raised and handled before click events. Therefore, you can build the array list in the change event and know that it will be available when the event handler runs for the button click that posted the form (in this example, the
btnUpdate_Click handler).
Now that you have the array list, you can make a minor modification to the handler that manages the update. In the
btnUpdate_Click, when you iterate through the data grid items, add a test to see if the current book ID is in the array list; if so, make the update.
' Visual Basic Private Sub btnUpdate_Click(ByVal sender As System.Object, _ ByVal e As System.EventArgs) Handles btnUpdate.Click Dim i As Integer Dim dgi As DataGridItem 'Rest of declarations here For i = 0 To DataGrid1.Items.Count - 1 dgi = DataGrid1.Items(i) LabelBookId = CType(dgi.Cells(0).Controls(1), Label) If bookidlist.Contains(bookid) Then TextBoxTitle = CType(dgi.FindControl("TextBoxTitle"), TextBox) ' Rest of update code here End If Next End Sub // C# private void btnUpdate_Click(object sender, System.EventArgs e) { int i; DataGridItem dgi; int bookid; //Rest of declarations here for(i = 0; i <= DataGrid1.Items.Count -1 ; i++) { dgi = DataGrid1.Items[i]; TableCell tc = dgi.Cells[0]; string s = dgi.Cells[0].Text; Label LabelBookId = (Label) dgi.Cells[0].Controls[1]; bookid = int.Parse(LabelBookId.Text); if (bookidlist.Contains(bookid)) { // Update code here } } }
One task is left: binding the handlers to the control events. In Visual Studio, you can only do this in HTML view. The controls are not explicitly instantiated in the code-behind file, so they are not supported by the code tools. Switch the .aspx file to HTML view and in the declarative elements for each of the controls, add the following highlighted syntax:
<asp:TemplateColumn <ItemTemplate> <asp:TextBox </asp:TextBox> </ItemTemplate> </asp:TemplateColumn> <asp:TemplateColumn <ItemTemplate> <asp:CheckBox id=cbInStock </asp:CheckBox> </ItemTemplate> </asp:TemplateColumn>
Both the TextBox and CheckBox controls can call the same method from their respective change methods, because the signature for both event handlers is the same. That would be true also if you had a list box or drop-down list control, whose SelectedIndexChanged events likewise pass the same arguments.
Selecting Rows by Clicking Anywhere
The default model for selecting rows in the grid is for you to add a Select button (actually, a LinkButton control) whose CommandName property is set to "Select." When the button is clicked, the DataGrid control receives the Select command and automatically displays the row in selected mode.
Not everyone likes having an explicit Select button, and a common question is how to implement the feature where users can click anywhere in a grid row to select it. The solution is to perform a kind of sleight-of-hand in the grid. You add the Select LinkButton control as normal. Users can still use it, or you can hide it. In either event, you then inject some client script into the page that effectively duplicates the functionality of the Select button for the row as a whole.
The example below shows how. In the grid's ItemDataBound handler, first make sure that you are not in the header, footer, or pager. Then get a reference to the Select button, which in this instance is assumed to be the first control in the first cell. You then call a little-known method called GetPostBackClientHyperlink. This method returns the name of the postback call for the designated control. In other words, if you pass in a reference to a LinkButton control, it returns the name of the client function call that will perform the postback.
Finally, you assign the client-side method to the item itself. When the grid renders, it renders as an HTML table. By assigning the method to the item, it is the equivalent of adding client-side code to each row (<TR> element) in the table. The grid's Item object does not directly support a way to assign client code to it, but you can do that by using its Attributes collection, which passes anything you assign to it through to the browser.
Note One small disadvantage of this technique is that it adds somewhat to the stream rendered to the browser, and it adds information for each row to view state.
' Visual Basic Private Sub DataGrid1_ItemDataBound(ByVal sender As Object, _ ByVal e As System.Web.UI.WebControls.DataGridItemEventArgs) _ Handles DataGrid1.ItemDataBound Dim itemType As ListItemType = e.Item.ItemType If ((itemType = ListItemType.Pager) Or _ (itemType = ListItemType.Header) Or _ (itemType = ListItemType.Footer)) Then Return Else Dim button As LinkButton = _ CType(e.Item.Cells(0).Controls(0), LinkButton) e.Item.Attributes("onclick") = _ Page.GetPostBackClientHyperlink(button, "") End If End Sub // C# private void DataGrid1_ItemDataBound(object sender, System.Web.UI.WebControls.DataGridItemEventArgs e) { ListItemType itemType = e.Item.ItemType; if ((itemType == ListItemType.Pager) || (itemType == ListItemType.Header) || (itemType == ListItemType.Footer)) { return; } LinkButton button = (LinkButton)e.Item.Cells[0].Controls[0]; e.Item.Attributes["onclick"] = Page.GetPostBackClientHyperlink(button, ""); } | http://msdn.microsoft.com/en-us/library/aa289519(v=vs.71).aspx | CC-MAIN-2014-52 | refinedweb | 6,536 | 57.47 |
Class Loading Fun with Groovy
Class Loading Fun with Groovy
Sometimes you need special measures. Not to make Groovy work, but to make your applications or frameworks a little bit more powerful or versatile.
Join the DZone community and get the full member experience.Join For Free
Get the Edge with a Professional Java IDE. 30-day free trial.
Sometimes you need special measures. Not to make Groovy work, but to make your applications or frameworks a little bit more powerful or versatile.
Here are two class loading techniques that I've used in combination with Groovy to make life more fun and interesting.
One: getting rid of Groovy
Sometimes you have to get hold of a
ClassLoader that doesn't include the Groovy classes. Sometimes you need to have a
ClassLoader that only contains the classes shipped with the JVM.
Every Java application has a
ClassLoader that only contains JVM classes and you can get hold of it, if you know how.
The
java.lang.ClassLoader class has a static
getSystemClassLoader() method that returns a
ClassLoader that contains all classes on the CLASSPATH when the JVM started:
ClassLoader.systemClassLoader.loadClass "some.class.on.the.ClassPath"
This
ClassLoader has one or more parent
ClassLoaders. The
ClassLoader on top of this hierarchy is the one that only contains JVM classes. When you find a
ClassLoader without parent you know you've found it:
def classLoader = ClassLoader.systemClassLoader while (classLoader.parent) { classLoader = classLoader.parent }
This little routine gets you the top
ClassLoader. You can now use this one to create a new
ClassLoader hierarchy. It only takes creating a new
URLClassLoader and provide the locations of the JAR files you want to include:
def classLoader = ClassLoader.systemClassLoader while (classLoader.parent) { classLoader = classLoader.parent } def newClassLoader = new URLClassLoader([new File("location/to.jar").toString().toURL()] as URL[], classLoader)
You now have a new
ClassLoader. Warning: it does not include Groovy if you don't add the Groovy JAR specifically, and any classes that have already been loaded will be loaded again, increasing the memory foot-print of your application. Also, don't create too many new
ClassLoaders in one application since they tend to cause memory leaks. Creating one or a few should be fine. Classes loaded with the new
ClassLoader will not be compatible with classes loaded by the current system
ClassLoader, except for the JVM classes.
You can improve this code by using the
org.apache.tools.ant.launch.Locator class (that ships with Ant). Its
fileToURL() method does a slightly better job at converting
java.io.File objects to
java.net.URL methods.
Two: getting hold of
GroovyClassLoader in Java code
Sometimes you have to get hold of a
groovy.lang.GroovyClassLoader object in Java code. This may for example happen when Java code is called from Groovy code.
GroovyClassLoaders tend the be at the bottom of
ClassLoader hierarchies, and sometimes you want to load classes in
GroovyClassLoader and not anywhere else in the
ClassLoader hierarchy.
Your Java code however was most likely not loaded by the
GroovyClassLoader, so how to get hold of it? You could pass is explicitly from your Groovy code but that's not practical. Instead, use the
sun.reflect.Reflection class? Never heard of it? Then it's time to meet the
getCallerClass() method.
When Groovy code calls Java code then somewhere up in the stack there is a Groovy class that's been loaded by a
GroovyClassLoader. Here's how to get hold of it from Java code:
import sun.reflect.Reflection; import groovy.lang.GroovyClassLoader; public class MyClass { public void myMethod() throws Exception { int frame = 1; // number of levels above you in the stack Class c = Reflection.getCallerClass(frame); while (c != null && !(c.getClassLoader() instanceof GroovyClassLoader)) { frame++; c = Reflection.getCallerClass(frame); } if (c != null) { GroovyClassLoader gcl = (GroovyClassLoader) c.getClassLoader(); // do Groovy stuff here Class newClass = gcl.parseClass("class MyNewClass{}"); } } }
These tricks show just how much Groovy is integrated with the JVM, and how Groovy generates proper Java byte-code.
Happy Groovy coding! }} | https://dzone.com/articles/class-loading-fun-groovy | CC-MAIN-2018-30 | refinedweb | 669 | 59.5 |
Mar 30, 2010 07:05 AM|den2005|LINK
Hi,
Does anyone have created 2 compositecontrols (dll), one having a textbox and a button (Search), when the button is clicked, the results of the search be displayed in another composite control (dll)?? I am looking at delegates and events...
All-Star
24320 Points
Mar 30, 2010 10:36 AM|karthicks|LINK
Apr 05, 2010 09:55 PM|CruzerB|LINK.
Apr 06, 2010 10:07 PM|den2005|LINK
Hi CruzerB,
Can you expose/share some details (codes) particular to #2 and #3?? Thanks for the reply...
Apr 06, 2010 11:02 PM|CruzerB|LINK
Dear den2005,
#2 'Define a custom event at ur composite control Public Event CustomClick(ByVal sender As Object, ByVal e As String) Private Sub SearchButton_Click(ByVal sender As Object, ByVal e As System.EventArgs) Handles SearchButton.Click 'Raise the custom event and pass the search value into 2nd parameter. RaiseEvent CustomClick(Me, "Searched Value here.") End Sub #3 'At ur page that consume the composite control Private Sub CompositeControl_CustomClick(ByVal sender As Object, ByVal e As String) Handles CompositeControl.CustomClick 'Pass "e" to ur display control (another composite control). DisplayControl.Display(e) End Sub
Apr 07, 2010 10:12 PM|den2005|LINK
Hi CruzerB,
Thanks for the reply..You misunderstood it is not the page that interacts with the button_click of a compositecontrol, it is another composite control that should be triggered when this button event is triggered from another composite control, both of these composite controls (in dlls) are loaded on a page dynamically...I am using C# and not VB.Net here...
Apr 07, 2010 10:53 PM|CruzerB|LINK
Dear den2005,
Because u'r develop 2 different composite controls and there are no ways to handler event raised by each other without put both control into a page.
Because the 1st composite is unknow for the 2nd control. Only the page (which contain these 2 controls) know it.
Hope this make sence to u.
Apr 09, 2010 11:13 PM|den2005|LINK
CruzerB,
both of this compositecontrols in form of dll are loaded to a page at runtime...so are you're saying thaye can be be wired to each at design time??
Apr 09, 2010 11:41 PM|CruzerB|LINK
Dear den2005,
Yes. For example, "Button" & "CheckBox" are in an assembly with namespace "System.Web.UI.WebControls". There are dunno each another.
The button is not able to handle the event raise by checkbox. But the page able to handler the event raise by each other. So if the button clicked (Event raise up), page will handle it at Button_Click (Event handler) then change the checkbox status by set
CheckBox.Enabled (Method in the CheckBox control).
Apr 11, 2010 02:48 AM|den2005|LINK
CruzerB,
Sad to hear that...
Anybody else has an idea on how to do this??
Apr 11, 2010 07:31 AM|CruzerB|LINK
Dear den2005,
!
Are u the one to develop the composite controls? If u are, then u should understand it. Else ppl will feel hard to use ur controls.
Apr 19, 2010 10:03 PM|den2005|LINK
CruzerB,
Yes I'm the one creating initial compositecontrols...I'm looking at Observer Pattern in solving this problem as suggested by our dev manager.
Apr 20, 2010 06:25 AM|CruzerB|LINK
Dear den2005,
Maybe u can just imaging..
U put a textbox(In ur case is: search be displayed in another composite control (dll)) into a page. Then add another button(In ur case is: one having a textbox and a button (Search)) to same page.
U want to set some text into the textbox after user click the button. What u can do is, u handle the button click event at ur page and set the text to the textbox text properties.
These are what I means at :.
<div style="position: absolute; left: -10000px; top: 0px; width: 1px; height: 1px; overflow: hidden;" id="_mcePaste">one having a textbox and a button (Search)</div>
Apr 20, 2010 11:38 PM|den2005|LINK
CruzerB,
That's is fine when you know all controls need to be added at the page before runtime. But as I said all these controls (dll) are added at runtime, so normal way of registering event for these controls in the page is not applicable....But thanks for the reply..
Apr 21, 2010 03:02 AM|CruzerB|LINK
Dear den2005,
I'm using AddHandler xxxx, Addressof xxxxx in VB.NET.
e.g.
AddHandler _insertButton.Click, AddressOf OnApplyInsertMode
to C#.NET with this:
this._insertButton.Click +=new EventHandler(this.OnApplyInsertMode);
So, u hv to something like this for ur custom control.
Apr 28, 2010 04:55 AM|den2005|LINK
This problem has been resolved using 2 base classes, one plugin with button inherits a base class and another plugin inherits different base class, then in the first plugin (with button) there is a method that allows to registers a child object (second plugin) ....and so on..
15 replies
Last post Apr 28, 2010 04:55 AM by den2005 | https://forums.asp.net/t/1541820.aspx?Event+between+2+compositecontrols | CC-MAIN-2021-25 | refinedweb | 841 | 65.32 |
You can subscribe to this list here.
Showing
2
results of 2
Hello,
Unfortunately I had to re-install my OS so I updated my emacs and cedet
versions (I'm using the last snapshots). I'm also using the same
configuration files that had worked pretty well with my c++ projects.
However, now the name auto-completion is not working.
More specifically, before re-installing my system if I started to write a
variable's name, cedet showed me all the possible matches among all the
variables already defined in the same file. Now I have no suggestions and I
thinks it's related to how semantic and auto-complete work together.
When I'm working on different types of files (lisp, latex, org...)
auto-complete works fine. Also semantic works as expected: I can jump from
definitions to implementations, I can search a method definition, I can see
all the members of a
particular object, and the function prototypes are showed too.
It's just the name complete that does not work, and as I'm using the same
configuration files and I followed step by step the same process I used the
first time to install cedet and other tools (gtags, cscope...), I can't see
the cause of this problem.
Here are my configurations files:
i) init.d
ii) my cedet configuration based on Alex's tutorial
Can someone give me some hints about what I'm doing wrong or missing here?
Also, I'm new to cedet and I would appreciate some suggestions about how to
improve my current configuration to work better on c++ projects.
Thanks,
Pamela
On 04/09/2013 12:18 PM, Barry OReilly wrote:
> The problem was tricky to narrow down because if I cycle back and forth
> between having and not having the ^M characters, reproduction becomes
> unpredictable. I think some undesired states were persisting between
> sessions. When I started deleting semanticdb files between each run,
> reproduction behaved a bit better. I've found that the problematic
> lines are the header guard:
>
> // Problematic:
> #ifndef MANAGER_H^M
> #define MANAGER_H^M
> // ...
> #endif // ^M here doesn't affect Semantic behavior
>
> > Could you check if it is parsing the ^M file correctly? Just do:
> >
> > M-x bovinate RET
> >
> > and see if it has stuff in it that matches your expectation.
>
> With the ^M after the header guards, I get only nil when I bovinate.
> When I remove them, I get more content.
Thanks for narrowing this down. I checked in a change to allow
characters such as ^M after the variable being checked by an ifdef.
Surprisingly, a SPC would have been similar. I guess the code out there
is pretty consistent about having no whitespace after such a statement. ;)
Eric | http://sourceforge.net/p/cedet/mailman/cedet-semantic/?viewmonth=201304&viewday=10 | CC-MAIN-2014-23 | refinedweb | 457 | 63.09 |
In the end of our last post (which was about Securing REST APIs) we mentioned about JWT. I made a promise that in the next post, we would discuss more about JWT and how we can secure our REST APIs using it. However, when I started drafting the post and writing the code, I realized the underlying concepts of JWT themselves deserve a dedicated blog post. So in this blog post, we will focus solely on JWT and how it works.
What is JWT?
We will ignore the text book definitions and try to explain the concepts in our own words. Don’t be afraid of the serious looking acronym, the concepts are rather simple to understand and comprehend. First let’s break down the term – “JSON Web Tokens”, so it has to do something with JSON, the web and of course tokens. Right? Let’s see.
Yes, a JWT mostly concerns with a Token that is actually a hashed / signed form of a JSON payload. The JSON payload is signed using a hashing algorithm along with a secret to produce a single (slightly long) string that works as a token. So a JWT is basically a string / token generated by processing a JSON payload in a certain way.
So how does JWT help? If you followed our last article, you now know why http basic auth is bad. You have to pass your username and password with every request. That is kind of bad, right? The more you send your username and password over the internet, the more likely it is to get compromised, no? Instead, on the first login, we can accept the username and password and return a token back to the client. The client passes that token with every request. We verify that token to see if it’s a logged in user or not. This is the idea behind Token based authentication.
Random Tokens vs JWT
How would you generate such token? You could generate a nice random string and store it in database against that user. Right? This is how cookie based session works too, btw. Now what if your application is scaled across multiple servers and all requests are load balanced? One server will not recognize a token / session generated by another server. Unless of course you also have one central database active all the time, serving all the incoming requests from all the servers. That setup is tricky and difficult, no?
There is another work around using sticky sessions where the requests from one particular user is always directed to the same server by the load balancer. This work around is also not as simple as JWT. Even if all these work nicely, we still have to make database queries to validate the token / session. What if we want to provide single sign on (users from one service wants to access resources on a different service all together)? How does that work? We will need a central auth server and all services will have to talk to it to verify the user token.
The benefit of JWT is that it’s lightweight but at the same time it’s a self contained JSON payload. You can store user identity in the JSON, sign it and send the token to the clients. Since it’s signed we can verify and validate it with just our secret key. No database overhead. No need for sticky sessions. Just share the secret key privately and all your services can read the data stored inside the JWT. Others can’t tamper or forge a new, valid token for an user without that secret key. Single sign on just becomes a breeze and less complicated. Sounds good? Let’s see how JWTs are constructed.
Anatomy of JWT
A JSON Web Token consists of three major parts:
- Header
- Payload
- Signature
These 3 parts are separated by dots (
.). So a JWT looks like the
If you look closely, there are 3 parts here:
- Header:
eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9
- Payload:
eyJzdWIiOiIxMjM0NTY3ODkwIiwibmFtZSI6IkpvaG4gRG9lIiwiYWRtaW4iOnRydWV9
- Signature:
TJVA95OrM7E2cBab30RMHrHDcEfxjoYZgeFONFh7HgQ
Okay, so we get the three parts, but what do they really do? Also those strings look like meaningless characters to me. What are they? How are they generated? Well, they are encoded in certain ways as we will see in the following sections.
Header
The header is a simple key value pair (dictionary / hashmap) data structure. It usually has two keys
typ and
alg short for type and algorithm. The standard is to have the keys at best 3 character long, so the generated token does not get too large.
Example:
{ "alg": "HS256", "typ": "JWT" }
The
typ value is
JWT since this is JWT we’re using. The
HS256 is the most common and most popular hashing algorithm used with JWT.
Now we just need to base64 encode this part and we get the header string. You were wondering why the strings didn’t make sense. That’s because the data is base64 encoded.
Payload
Here comes our favorite part – the JSON payload. In this part, we put the data we want to store in the JWT. As usual, we should keep the keys and the overall structure as small as possible.
{ "sub": "1234567890", "name": "John Doe", "admin": true }
We can add any data we see fit. These fields / keys are called “claims”. There are some reserved claims – keys which can be interpreted in a certain way by the libraries which decode the JWT. For example, if we pass the
exp (expiry) claim with a timestamp, the decoding library will check this value and throw an exception if the time has passed (the token has expired). These can often be helpful in many cases. You can find the common standard fields on Wikipedia.
As usual, we base64 encode the payload to get the payload string.
Signature
The signature part itself is a hashed string. We concatenate the header and the payload strings (base 64 encoded header and payload) with a dot (
.) between them. Then we use the hashing algorithm to hash this string with our secret key.
In pseudocode:
concatenated_string = base64encode(header) + '.' + base64encode(payload) signature = hmac_sha256(concatenated_string, 'MY_SUPER_SECRET_KEY')
That would give us the last part of the JWT, the signature.
Glue it all together
As we discussed before, the JWT is the dot separated form of the three components. So the final JWT would be:
jwt = header + "." + payload + "." + signature
Using a library
Hey! JSON Web Tokens sounded great but looks like there’s a lot of work involved! Well, it would seem that way since we tried to understand how a JSON Web Token is actually constructed. In our day to day use cases, we would just use a suitable library for the language / platform of our choice and be done with it.
If you are wondering what library you can use with your language / platform, here’s a comprehensive list of libraries – JSON Web Token Libraries.
Real Life Example with PyJWT
Enough talk, time to see some codes. Excited? Let’s go!
We will be using Python with the excellent PyJWT package to encode and decode our JSON Web Tokens in this example. Before we can use the library, we have to install it first. Let’s do that using
pip.
pip install pyjwt
Now we can start generating our tokens. Here’s an example code snippet:
import jwt import datetime payload = { "uid": 23, "name": "masnun", "exp": datetime.datetime.utcnow() + datetime.timedelta(minutes=2) } SECRET_KEY = "N0TV3RY53CR3T" token = jwt.encode(payload=payload, key=SECRET_KEY) print("Generated Token: {}".format(token.decode())) decoded_payload = jwt.decode(jwt=token, key=SECRET_KEY) print(decoded_payload)
If we run the code, we will see:
python jwt_test.py Generated Token: eyJ0eXAiOiJKV1QiLCJhbGciOiJIUzI1NiJ9.eyJ1aWQiOjIzLCJuYW1lIjoibWFzbnVuIiwiZXhwIjoxNDk0NDQ5OTQ0fQ.49okXifPSqc7n_n7wZRc9XVVqekTTeBIBBZdiH0nGJQ {'uid': 23, 'name': 'masnun', 'exp': 1494449944}
So it worked – we encoded a payload and then decoded it back. All we needed to do is to call
jwt.encode and
jwt.decode with our secret key and the payload / token. So simple, no? parties.
Bonus Example – Expiry
In the following example, we will set the expiry to only 2 seconds. Then we will wait 10 seconds (so the token expires by then) and try to decode the token.
import jwt import datetime import time payload = { "uid": 23, "name": "masnun", "exp": datetime.datetime.utcnow() + datetime.timedelta(seconds=2) } SECRET_KEY = "N0TV3RY53CR3T" token = jwt.encode(payload=payload, key=SECRET_KEY) print("Generated Token: {}".format(token.decode())) time.sleep(10) # wait 10 secs so the token expires decoded_payload = jwt.decode(jwt=token, key=SECRET_KEY) print(decoded_payload)
What happens after we run it? This happens:
jwt.exceptions.ExpiredSignatureError: Signature has expired
Cool, so we get an error mentioning that the signature has expired by now. This is because we used the standard
exp claim and our library knew how to process it. This is how we use the standard claims to ease our job!
Using JWT for REST API Authentication
Now that we’re all convinced of the good sides of JSON Web Tokens, the question comes into mind – how can we use it in our REST APIs?
The idea is simple and straightforward. When the user logs in the first time, we verify his/her credentials and generate a JSON Web Token with necessary details. Then we return this token back to the user/client. The client will now send the token with every request, as part of the authorization header.
The server will decode this token and read the user data. It won’t have to access the database or contact another auth server to verify the user details, it’s all inside the decoded payload. And since the token is signed and the secret key is “secret”, we can trust the payload.
But please make sure the secret key is not compromised. And of course use SSL (https) so that men in the middle can not hijack the token anyway.
What’s next?
JSON Web Token is not only about authentication. You can use it to securely transmit data from one party to another. However, it’s mostly used for authenticating REST APIs. In our next blog post, we shall go through that use case. We will see how we can authenticate our api using JWT.
In the mean time, you can subscribe to the mailing list so you can stay up to date with this blog. If you liked the article and/or learned something new, please don’t forget to share it with your friends. | http://polyglot.ninja/2017/05/page/2/ | CC-MAIN-2018-43 | refinedweb | 1,728 | 66.13 |
Red Hat Bugzilla – Bug 156294
ppt exported from sxi looks better in oo1.9.96 then the sxi
Last modified: 2007-11-30 17:11:05 EST
Open the sxi file attached to bug 156255 and take a look at the text on slide
85, notice how it doesn't fit on the slide.
Now open the ppt file attached to this bug, this was exported from the sxi file
using ooo 1.1.?, and again goto slide 85 now it does fit. The same happens on a
lott of other slides.
I would expect the sxi import of oo2.0 to be flawless or atleast better then the
ppt import.
Created attachment 113798 [details]
PPt file which does import correctly
Text seems to fit fine on page 85 with 1.9.112
Caolan,
You probably tried the file attached to this bug, which is a ppt exported with
ooo 1.1.x, which is attached as an example to show that a ppt exported file
works better then the original sxi file.
If you open the sxi file from which the ppt was exported it still doesn't fit,
the sxi file is attached to bug 156255. Also see my initial comment, which
states: "Open the sxi file attached to bug 156255"
Created attachment 116802 [details]
original sxw
Created attachment 116803 [details]
screenshot on opening the original sxw on 1.9.117-1
How's that, that's the original .sxw opened in 1.9.117-1 in rawhide, which will
be updated to fc4 soon
Looks better,
When I find the time I'll do a yum update on my rawhide partition, verify and close.
I'll leave as needinfo until then.
can we close this then ?
I'll risk assuming that it's acceptable now.
I'm sorry, but with openoffice.org-impress-1.9.122-3.2.0.fc5 from Rawhide this
bug still happens (or is back again)
I noticed a couple of days ago that this bug has changed. Now the sxi / ppt
import are consistent and both don't fit the page anymore :|
Hans, please attach a screenshot too. Are you able to test with OOo 2.0 in FC4
updates?
Created attachment 121830 [details]
screen shot of the sxi opened at slide 85
Sorry, I'm running rawhide on both my systems not FC4, my current ooo version
is:
2.0.1-0.142.1.2
Created attachment 121831 [details]
screenshot of ppt, which was exported with ooo 1.x from the sxi
Notice that the ppt is back as it should be once again, some versions back the
ppt had the same problem as the sxi but now the ppt "works" again.
On a related note the original presentation was created in ppt then migrated to
ooo. All later editing was done in ooo, the ppt export was to be able to show
it on windows computers.
let's move this upstream for some insight from the impress developers as it
affects upstream version, | https://bugzilla.redhat.com/show_bug.cgi?id=156294 | CC-MAIN-2016-40 | refinedweb | 503 | 80.92 |
Java Program to find and count a specific word in a Text File
Hey everyone! In this article, we will learn how to find a specific word in a text file and count the number of occurrences of that word in the text file. Before taking a look at the code, let us learn about BufferedReader class and FileReader class.
FileReader class in Java
FileReader class is used for file handling purposes in Java. It reads the contents of a file in a stream of characters. It is present in the java.io package. It reads the contents into a character array.
FileReader fr = new FileReader(“filename.txt”);
BufferedReader class in Java
The BufferedReader class in Java is used to read contents from a file in Java. It is used to read text from the input stream in Java. It is present in the java.io.BufferedReader package in Java. It buffers the character read into an array.
BufferedReader bfr = new BufferedReader(fr);
Java Program to find a specific word in a text file and print its count.
import java.io.File; import java.io.FileReader; import java.io.BufferedReader; import java.util.Scanner; public class Program { public static void main(String[] args) throws Exception { int cnt=0; String s; String[] buffer; File f1=new File("file1.txt"); FileReader fr = new FileReader(f1); BufferedReader bfr = new BufferedReader(fr); Scanner sc = new Scanner(System.in); System.out.println("Enter the word to be Searched"); String wrd=sc.nextLine(); while((s=bfr.readLine())!=null) { buffer=s.split(" "); for (String chr : buffer) { if (chr.equals(wrd)) { cnt++; } } } if(cnt==0) { System.out.println("Word not found!"); } else { System.out.println("Word : "+wrd+"found! Count : "+cnt); } fr.close(); } }
Output:-
Enter the word to be Searched krishna Word : krishnafound! Count : 4
I hope this article was useful to you!
Please leave a comment down below for any doubt/suggestion.
Also read: QR Code to Text converter in Java
This program is case sensitive. How to ignore the case sensitive and count all the specific words. For example, if I want to count the word “the”, it should count “The”, “THE, and “THe” etc. how to do that. Appreciate if you can make the changes. Thanks | https://www.codespeedy.com/java-program-to-find-and-count-a-specific-word-in-a-text-file/ | CC-MAIN-2020-50 | refinedweb | 370 | 69.68 |
Mar 10
NP .NET Profiler Tool is designed to assist in troubleshooting issues such as slow performance, memory related issues, and first chance exceptions in any .NET process. The tool has the following features:
- XCopy deployable: no install or reboot required
- Supports all types of .NET applications
- Generates true callstacks for exceptions, memory allocations, and function calls
- Can monitor a specific namespace to reduce overhead and generate a smaller output file
- Memory profiler reports total number of objects allocated per function
- Custom reports using SQL-like queries
- Wizard-based UI
- Supports all versions of .NET (1.0 , 1.1, 2.0 and 3.5)
- Supports all platforms (x86, x64 and IA64)
- Supports all OSes (Windows XP, Windows 2003, Windows 2008 and Vista)
- Support virtual machines
Download NP .NET Profiler
It would have been a very useful free tool but it looks like Microsoft have withdrawn it, does anyone know why?
Thanks.
Very strange, Microsoft killed the download page.
Oops. there was not.
MS has retained this tool for use by their DSE’s and PFE’s to support their premier contract customers. Playing for strategic advantage?
It’s back: | http://www.csharp411.com/np-net-profiler/ | CC-MAIN-2017-22 | refinedweb | 189 | 54.83 |
/* Reading and parsing of makefiles interpreting? 0=interpreting, 1=not yet interpreted, 2=already interpreted */, int len, const struct floc *flocp)); static void record_files PARAMS ((struct nameseq *filenames, char *pattern, char *pattern_percent, struct dep *deps, unsigned int cmds_started, char *commands, unsigned int commands_idx, int two_colon, const struct floc *flocp)); static void record_target_var PARAMS ((struct nameseq *filenames, char *defn, enum variable_origin origin, int enabled, const struct floc *flocp)); static enum make_word_type get_next_mword PARAMS ((char *buffer, char *delim, char **startp, unsigned int *length)); static void remove_comments PARAMS ((char *line)); static char *find_char_unquote PARAMS ((char *string, int stop1, int stop2, int blank, int ignorevars)); /* Read in all the makefiles and return the chain of their names. */ struct dep * read_all'; eval_makefile (name, RM_NO_DEFAULT_GOAL|RM_INCLUDED|RM_DONTCARE); } = alloc_dep (); d->file = enter_file (*p); d->file->dontcare = 1; /*; } /* Install a new conditional and return the previous one. */ static struct conditionals * install_conditionals (struct conditionals *new) { struct conditionals *save = conditionals; bzero ((char *) new, sizeof (*new)); conditionals = new; return save; } /* Free the current conditionals and reinstate a saved one. */ static void restore_conditionals (struct conditionals *saved) { /* Free any space allocated by conditional_line. */ if (conditionals->ignoring) free (conditionals->ignoring); if (conditionals->seen_else) free (conditionals->seen_else); /* Restore state. */ conditionals = saved; } static int eval_makefile (char *filename, int flags) { struct dep *deps; struct ebuffer ebuf; const struct floc *curfile; char *expanded = 0; char *included = 0; int makefile_errno; int r; ebuf.floc.filenm = strcache_add ) { included = concat (include_directories[i], "/", filename); ebuf.fp = fopen (included, "r"); if (ebuf.fp) { filename = included; break; } free (included); } /* If we're not using it, we already freed it above. */ if (filename != included) included = 0; } /* Add FILENAME to the chain of read makefiles. */ deps = alloc_dep (); deps->next = read_makefiles; read_makefiles = deps; deps->file = lookup_file (filename); if (deps->file == 0) deps->file = enter_file (xstrdup (filename)); filename = deps->file->name; deps->changed = flags; if (flags & RM_DONTCARE) deps->file->dontcare = 1; if (expanded) free (expanded); if (included) free (included); /*); alloca (0); return r; } int eval_buffer (char *buffer) { struct ebuffer ebuf; struct conditionals *saved; struct conditionals new;; saved = install_conditionals (&new); r = eval (&ebuf, 1); restore_conditionals (saved); reading_file = curfile; alloca (0); return r; } /* Read file FILENAME as a makefile and add its contents to the data base. SET_DEFAULT is true if we are allowed to set the default goal. */ static int eval (struct ebuffer *ebuf, int set_default) { char *collapsed = 0; unsigned int collapsed_length = 0; unsigned int commands_len = 200; char *commands; unsigned int commands_idx = 0; unsigned int cmds_started, tgts_started; int ignoring = 0, in_ignored_define = 0; int no_targets = 0; /* Set when reading a rule without targets. */, \ &fi); \ } \) { unsigned. Get more space if we need it; we don't need to preserve the current contents of the buffer. */ if (collapsed_length < linelen+1) { collapsed_length = linelen+1; if (collapsed) free ((char *) { int i = conditional_line (p, len, fstart); if (i != -2) { if (i == -1) no filenames, it's a no-op. */ if (*p == '\0') { free (p); continue; } /* Parse the list of file names. */ p2 = p; files = multi_glob (parse_file_seq (&p2, '\0', sizeof (struct nameseq), 1), sizeof (struct nameseq)); free (p); /* Save the state of conditionals and start the included makefile with a clean slate. */ save = install && !noerror) error (fstart, "%s: %s", name, strerror (errno)); free (name); } /* Restore conditional state. */ restore_conditionals (save); goto rule_complete; } if (try_variable_definition (fstart, p, o_file, 0)) /* This line has been dealt with. */ goto rule_complete; /* This line starts with a tab but was not caught above because there was no preceding target, and the line might have been usable as a variable definition. But now we know it is definitely lossage. */ if (line[0] == '; int exported; char *cmdleft, *semip, *lb_next; unsigned int len, plen = 0; char *colonp; const char *end, *beg; /* Helpers for whitespace stripping. */ /* Record the previous rule. */ record_waiting_files (); tgts_started = fstart->lineno; /* Search the line for an unquoted ; that is not after an unquoted #. */ cmdleft = find_char_unquote (line, ';', '#', 0, 1);" or "export" keyword; if so see if what comes after it looks like a variable definition. */ wtype = get_next_mword (p2, NULL, &p, &len); v_origin = o_file; exported = 0; if (wtype == w_static) { if (word1eq ("override")) { v_origin = o_override; wtype = get_next_mword (p+len, NULL, &p, &len); } else if (word1eq ("export")) { exported = 1;) { unsigned int l = p - variable_buffer; *(--semip) = ';'; variable_buffer_output (p2 + strlen (p2), semip, strlen (semip)+1); p = variable_buffer + l; } record_target_var (filenames, p, v_origin, exported, fstart); filenames = 0; continue; } /* This is a normal target, _not_ a target-specific variable. Unquote any = in the dependency list. */ find_char_unquote (lb_next, '=', 0,, 0); if (cmdleft != 0) *(cmdleft++) = '\0'; } } /*-style; /* Strip leading and trailing whitespaces. */ beg = p2; end = beg + strlen (beg) - 1; strip_whitespace (&beg, &end); if (beg <= end && *beg != '\0') { /* Put all the prerequisites here; they'll be parsed later. */ deps = alloc_dep (); deps->name = savestring (beg, end - beg + 1); } else deps = 0;'; } /* Determine if this target should be made default. We used to do this in record_files() but because of the delayed target recording and because preprocessor directives are legal in target's commands it is too late. Consider this fragment for example: foo: ifeq ($(.DEFAULT_GOAL),foo) ... endif Because the target is not recorded until after ifeq directive is evaluated the .DEFAULT_GOAL does not contain foo yet as one would expect. Because of this we have to move some of the logic here. */ if (**default_goal_name == '\0' && set_default) { char* name; struct dep *d; struct nameseq *t = filenames; for (; t != 0; t = t->next) { int reject = 0; name = t->name; /* We have nothing to do if this is an implicit rule. */ if (strchr (name, '%') != 0) break; /* See if this target's name does not start with a `.', unless it contains a slash. */ if (*name == '.' && strchr (name, '/') == 0 #ifdef HAVE_DOS_PATHS && strchr (name, '\\') == 0 #endif ) continue; /*) { define_variable_global (".DEFAULT_GOAL", 13, t->name, o_file, 0, NILF); break; } } } (); if (collapsed) free ((char *) collapsed); free ((char *) commands); return 1; } /* Remove comments from LINE. This is done by copying the text at LINE onto itself. */ static void remove_comments (char *line) { char *comment; comment = find_char_unquote (line, '#', 0, 0, 0); if (comment != 0) /* Cut off the line at the #. */ *comment = '\0'; } /* Execute a `define' directive. The first line has already been read, and NAME is the name of the variable to be defined. The following lines remain to be read. */ static void do_define ; nlines = readline (ebuf); ebuf->floc.lineno += nlines; /* -2 if the line is not a conditional at all, -1 if the line is an invalid conditional, 0 if following text should be interpreted, 1 if following text should be ignored. */ static int conditional_line (char *line, int len, const struct floc *flocp) { char *cmdname; enum { c_ifdef, c_ifndef, c_ifeq, c_ifneq, c_else, c_endif } cmdtype; unsigned int i; unsigned int o; /* Compare a word, both length and contents. */ #define word1eq(s) (len == sizeof(s)-1 && strneq (s, line, sizeof(s)-1)) #define chkword(s, t) if (word1eq (s)) { cmdtype = (t); cmdname = (s); } /* Make sure this line is a conditional. */ chkword ("ifdef", c_ifdef) else chkword ("ifndef", c_ifndef) else chkword ("ifeq", c_ifeq) else chkword ("ifneq", c_ifneq) else chkword ("else", c_else) else chkword ("endif", c_endif) else return -2; /* Found one: skip past it and any whitespace after it. */ line = next_token (line + len); #define EXTRANEOUS() error (flocp, _("Extraneous text after `%s' directive"), cmdname) /* An 'endif' cannot contain extra text, and reduces the if-depth by 1 */ if (cmdtype == c_endif) { if (*line != '\0') EXTRANEOUS (); if (!conditionals->if_cmds) fatal (flocp, _("extraneous `%s'"), cmdname); --conditionals->if_cmds; goto DONE; } /* An 'else' statement can either be simple, or it can have another conditional after it. */ if (cmdtype == c_else) { const char *p; if (!conditionals->if_cmds) fatal (flocp, _("extraneous `%s'"), cmdname); o = conditionals->if_cmds - 1; if (conditionals->seen_else[o]) fatal (flocp, _("only one `else' per conditional")); /* Change the state of ignorance. */ switch (conditionals->ignoring[o]) { case 0: /* We've just been interpreting. Never do it again. */ conditionals->ignoring[o] = 2; break; case 1: /* We've never interpreted yet. Maybe this time! */ conditionals->ignoring[o] = 0; break; } /* It's a simple 'else'. */ if (*line == '\0') { conditionals->seen_else[o] = 1; goto DONE; } /* The 'else' has extra text. That text must be another conditional and cannot be an 'else' or 'endif'. */ /* Find the length of the next word. */ for (p = line+1; *p != '\0' && !isspace ((unsigned char)*p); ++p) ; len = p - line; /* If it's 'else' or 'endif' or an illegal conditional, fail. */ if (word1eq("else") || word1eq("endif") || conditional_line (line, len, flocp) < 0) EXTRANEOUS (); else { /* conditional_line() created a new level of conditional. Raise it back to this level. */ if (conditionals->ignoring[o] < 2) conditionals->ignoring[o] = conditionals->ignoring[o+1]; --conditionals->if_cmds; } goto DONE; } if (conditionals->allocated == 0) { conditionals->allocated = 5; conditionals->ignoring = (char *) xmalloc (conditionals->allocated); conditionals->seen_else = (char *) xmalloc (conditionals->allocated); } o =[o] = 0; /* Search through the stack to see if we're already ignoring. */ for (i = 0; i < o; ++i) if (conditionals->ignoring[i]) { /* We are already ignoring, so just push a level to match the next "else" or "endif", and keep ignoring. We don't want to expand variables in the condition. */ conditionals->ignoring[o] = 1; return 1; } if (cmdtype == c_ifdef || cmdtype == c_ifndef) { char *var; struct variable *v; char *p; /* Expand the thing we're looking up, so we can use indirect and constructed variable names. */ var = allocated_variable_expand (line); /* Make sure there's only one variable name to test. */ p = end_of_token (var); i = p - var; p = next_token (p); if (*p != '\0') return -1; var[i] = '\0'; v = lookup_variable (var, i); conditionals->ignoring[o] = ((v != 0 && *v->value != '\0') == (cmdtype == c_ifndef)); free (var); } else { /* "Ifeq" or "ifneq". */ char *s1, *s2; unsigned int len; char termin = *line == '(' ? ',' : *line; if (termin != ',' && termin != '"' && termin != '\'') return -1; s1 = ++line; /* Find the end of the first string. */ if (termin == ',') {') EXTRANEOUS (); s2 = variable_expand (s2); conditionals->ignoring[o] = (streq (s1, s2) == (cmdtype == c_ifneq)); } DONE: /* Search through the stack to see if we're ignoring. */ for (i = 0; i < conditionals->if_cmds; ++i) if (conditionals->ignoring[i]) return 1; return 0; } /* Remove duplicate dependencies in CHAIN. */ static unsigned long dep_hash_1 (const void *key) { return_STRING_HASH_1 (dep_name ((struct dep const *) key)); } static unsigned long dep_hash_2 (const void *key) { return_STRING_HASH_2 (dep_name ((struct dep const *) key)); } static int dep_hash_cmp (struct nameseq *filenames, char *defn, enum variable_origin origin, int exported,; char *fname; char *percent; struct pattern_var *p; nextf = filenames->next; free ((char *) filenames); /* If it's a pattern target, then add it to the pattern-specific variable list. */ percent = find_percent (name); if (percent) { /* Get a reference for this pattern-specific variable struct. */ p = create_pattern_var (name, percent); p->variable.fileinfo = *flocp; /* I don't think this can fail since we already determined it was a variable definition. */ v = parse_variable_definition (&p->variable, defn); assert (v != 0); if (v->flavor == f_simple) v->value = allocated_variable_expand (v->value); else v->value = xstrdup (v->value);); fname = f->name; current_variable_set_list = f->variables; v = try_variable_definition (flocp, defn, origin, 1); if (!v) fatal (flocp, _("Malformed target-specific variable definition")); current_variable_set_list = global; } /* Set up the variable to be *-specific. */ v->origin = origin; v->per_target = 1; v->export = exported ? v_export : v_default; /* If it's not an override, check to see if there was a command-line setting. If so, reset the value. */ if (origin != o_override) { struct variable *gv; int len = strlen(v->name); gv = lookup_variable (v->name, len); if (gv && (gv->origin == o_env_override || gv->origin == o_command)) { if (v->value != 0) free (v->value); v->value = xstrdup (gv->value); v->origin = gv->origin; v->recursive = gv->recursive; v->append = 0; } } /* Free name if not needed further. */ if (name != fname && (name < fname || name > fname + strlen (fname))) free (name); } } /* (struct nameseq *filenames, char *pattern, char *pattern_percent, struct dep *deps, unsigned int cmds_started, char *commands, unsigned int commands_idx, int two_colon, const struct floc *flocp) { struct nameseq *nextf; int implicit = 0; unsigned int max_targets = 0, target_idx = 0; char **targets = 0, **target_percents = 0; struct commands *cmds; /* If we've already snapped deps, that means we're in an eval being resolved after the makefiles have been read in. We can't add more rules at this time, since they won't get snapped and we'll get core dumps. See Savannah bug # 12124. */ if (snapped_deps) fatal (flocp, _("prerequisites cannot be defined in command scripts")); *this = 0; char *implicit_percent; nextf = filenames->next; free (filenames); /* Check for special targets. Do it here instead of, say, snap_deps() so that we can immediately use the value. */ if (streq (name, ".POSIX")) posix_pedantic = 1; else if (streq (name, ".SECONDEXPANSION")) second_expansion = this is a static pattern rule: `targets: target%pattern: dep%pattern; cmds', make sure the pattern matches this target name. */ if (pattern && !pattern_matches (pattern, pattern_percent, name)) error (flocp, _("target `%s' doesn't match the target pattern"), name); else if (deps) { /* If there are multiple filenames, copy the chain DEPS for all but the last one. It is not safe for the same deps to go in more than one place in the database. */ this = nextf != 0 ? copy_dep_chain (deps) : deps; this->need_2nd_expansion = (second_expansion && strchr (this->name, '$')); }) { free_dep_chain (f->deps); f->deps = 0; } else if (this != 0) { /* Add the file's old deps and the new ones in THIS together. */ if (f->deps != 0) { struct dep **d_ptr = &f->deps; while ((*d_ptr)->next != 0) d_ptr = &(*d_ptr)->next; if (cmds != 0) /* This is the rule with commands, so put its deps last. The rationale behind this is that $< expands to the first dep in the chain, and commands use $< expecting to get the dep that rule specifies. However the second expansion algorithm reverses the order thus we need to make it last here. */ (*d_ptr)->next = this; else { /* This is the rule without commands. Put its dependencies at the end but before dependencies from the rule with commands (if any). This way everything appears in makefile order. */ if (f->cmds != 0) { this->next = *d_ptr; *d_ptr = this; } else (*d_ptr)->next = this; } } else f->deps = this; /* This is a hack. I need a way to communicate to snap_deps() that the last dependency line in this file came with commands (so that logic in snap_deps() can put it in front and all this $< -logic works). I cannot simply rely on file->cmds being not 0 because of the cases like the following: foo: bar foo: ... I am going to temporarily "borrow" UPDATING member in `struct file' for this. */ if (cmds != 0) f->updating = 1; } } else { /* Double-colon. Make a new record even if there already; } /* If this is a static pattern rule, set the stem to the part of its name that matched the `%' in the pattern, so you can use $* in the commands. */ if (pattern) { static char *percent = "%"; char *buffer = variable_expand (""); char *o = patsubst_expand (buffer, name, pattern, percent, pattern_percent+1, percent+1); f->stem = savestring (buffer, o - buffer); if (this) { this->staticpattern = 1; this->stem = xstrdup (f->stem); } } /* Free name if not needed further. */ if (f != 0 && name != f->name && (name < f->name || name > f->name + strlen (f->name))) { free (name); name = f->name; } /* If this target is a default target, update DEFAULT_GOAL_FILE. */ if (streq (*default_goal_name, name) && (default_goal_file == 0 || ! streq (default_goal_file->name, name))) default_goal_file = f; } if (implicit) { targets[target_idx] = 0; target_percents[target_idx] = 0; if (deps) deps->need_2nd_expansion = second_expansion;. STOPCHARs inside variable references are ignored if IGNOREVARS is true. STOPCHAR _cannot_ be '$' if IGNOREVARS is true. */ static char * find_char_unquote (char *string, int stop1, int stop2, int blank, int ignorevars) { unsigned int string_len = 0; register char *p = string; if (ignorevars) ignorevars = '$'; while (1) { if (stop2 && blank) while (*p != '\0' && *p != ignorevars && *p != stop1 && *p != stop2 && ! isblank ((unsigned char) *p)) ++p; else if (stop2) while (*p != '\0' && *p != ignorevars && *p != stop1 && *p != stop2) ++p; else if (blank) while (*p != '\0' && *p != ignorevars && *p != stop1 && ! isblank ((unsigned char) *p)) ++p; else while (*p != '\0' && *p != ignorevars && *p != stop1) ++p; if (*p == '\0') break; /* If we stopped due to a variable reference, skip over its contents. */ if (*p == ignorevars) { char openparen = p[1]; p += 2; /* Skip the contents of a non-quoted, multi-char variable ref. */ if (openparen == '(' || openparen == '{') { unsigned int pcount = 1; char closeparen = (openparen == '(' ? ')' : '}'); while (*p) { if (*p == openparen) ++pcount; else if (*p == closeparen) if (--pcount == 0) { ++p; break; } ++p; } } /* Skipped the variable reference: look for STOPCHARS again. */ continue; } (char *pattern) { return find_char_unquote (pattern, '%', 0, (char **stringp, int stopchar, unsigned int size, int strip) { struct nameseq *new = 0; struct nameseq *new1, *lastnew1;, 0); , 0); } , 0); (struct ebuffer *ebuf) { char *e). */ eol = ebuf->buffer = ebuf->bufnext; while (1) { int backslash = 0; char *bol = eol; char *p; /* Find the next newline. At EOS, stop. */ eol = p = strchr (eol , '\n'); if (!eol) { ebuf->bufnext = ebuf->bufstart + ebuf->size + 1; return 0; } /* Found a newline; if it's escaped continue; else we're done. */ while (p > bol && *(--p) == '\\') backslash = !backslash; if (!backslash) break; ++eol; } /* Overwrite the newline char. */ *eol = '\0'; ebuf->bufnext = eol+1; return 0; } static long readline __) && !defined(__EMX__) /*++; int e; if (dir[0] == '~') { char *expanded = tilde_expand (dir); if (expanded != 0) dir = expanded; } EINTRLOOP (e, stat (dir, &stbuf)); if (e ==) { int e; EINTRLOOP (e, stat (default_include_directories[i], &stbuf)); if (e == 0 && S_ISDIR (stbuf.st_mode)) dirs[idx++] = default_include_directories[i]; } dirs[idx] = 0; /* Now compute the maximum length of any name in it. Also add each dir to the .INCLUDE_DIRS variable. */; /* Append to .INCLUDE_DIRS. */ do_variable_definition (NILF, ".INCLUDE_DIRS", dirs[i], o_default, f_append, 0); } include_directories = dirs; } /* Expand ~ or ~USER at the beginning of NAME. Return a newly malloc'd string or 0. */ char * tilde_expand ; } | http://opensource.apple.com/source/gnumake/gnumake-129/read.c | CC-MAIN-2015-35 | refinedweb | 2,818 | 54.22 |
This is your resource to discuss support topics with your peers, and learn from each other.
06-12-2012 08:23 AM
Hi,
Would like to know if BB10 supports Qt Mobility API's.
Solved! Go to Solution.
06-14-2012 04:52 PM
Currently supported Qt classes for cascades are documented in:
Stuart
06-15-2012 04:20 AM
Looks like its not supported then...eg no QGeoPositionInfoSource
What is then the recommended way of using mobility like functionality (eg determining the current position) in a cascades project?
06-15-2012 06:29 AM
06-15-2012 12:22 PM
Hello,
You can access the geolocation, accelerometer and other data from the device sensors using the The Blackberry Platform Services API located here.
There is also the device namespace in Cascades namespace e.g. bb::cascades::Gravity class
Unforunately there is no maps API supported in the released version of the NDK.
Cheers
Selom
06-18-2012 05:23 AM
Hi Selom.
Thanks for the helpful reply.
We have some apps we're currently considering porting to blackberry. They all show one or more maps however.
What are our options? None for Blackberry 10? Can you give me any idea when its planned to make this possible if its not yet?
Thanks in Advance.
Best Wishes,
Declan
06-18-2012 10:26 AM
Hi Declan,
Unfortunately I do not have information regarding future features in the upcoming NDK releases. This is normally done through the offical RIM channels. You could use the device features available in the native NDK at this time from your Cascades app, minus the map.
Please monitor official RIM developer channels for announcements related to upcoming releases.
Cheers
Selom
06-19-2012 05:25 AM
Hi Selom.
Thanks.
>Please monitor official RIM developer channels for announcements related to upcoming releases.
Can you please advise me where the best places for this would be.
Thanks in Advance.
06-19-2012 01:26 PM
Hello again
The main developer page
and the twitter feed:
are two good places to monitor for new information.
Cheers
06-20-2012 07:36 AM | https://supportforums.blackberry.com/t5/Native-Development/Is-Qt-Mobility-API-s-supported/m-p/1766603/highlight/true | CC-MAIN-2017-13 | refinedweb | 353 | 67.55 |
Besides creating a class within another class we can also create many classes within the same python module. Let us take a look at how to create a python module with many classes in it.
Start a new Eclipse’s project as before then create this module :- doubleclassdemo.py and input below python script into it.
class FirstClass(object): def __init__(self, welcome = "welcome to first class server!"): print(welcome) class SecondClass(object): def __init__(self, welcome = "welcome to the second class server!"): print(welcome)
As you can see we have created two classes in one module. Next create another python module and name it mainserver.py then enter below script into it.
import doubleclassdemo if __name__ == '__main__': print(doubleclassdemo.FirstClass()) print(doubleclassdemo.SecondClass())
Run the mainserver.py module.
As you can see we have imported the doubleclassdemo module into this module and then create a new instance for both classes within the doubleclassdemo module inside that print method which will first call the __init__ method of each class to print out the welcome message and then will call the __str__ method to return the memory address of each class and print it out!
welcome to first class server! <doubleclassdemo.FirstClass object at 0x00FE5C10> welcome to the second class server! <doubleclassdemo.SecondClass object at 0x00FE5C10>
Do you realize that both classes do share the same memory address? Hmmm…interesting. | http://gamingdirectional.com/blog/2016/08/03/create-many-classes-within-a-same-python-module/ | CC-MAIN-2018-22 | refinedweb | 229 | 57.16 |
Well it is an assignment question here is the question :
Implement a base class Person. Derive classes Student and Instructor from Person. A person has
a name and a birthday. A student has a major, and an instructor has a salary. Write the class
definitions, the constructors, and the member functions print() for all classes.
this is what i have got so far any kind of help will be good :
#include <iostream> using namespace std; class Person{ public: Person( string nam, string day); string get_name()const; string get_date()const; void print()const; private: string name; string date; }; Person::Person( string nam,string day){ name= nam; date= day; } string Person::get_name() const { return name; } string Person::get_date() const { return date; } void Person:: print()const{ cout<<" Mr/Mrs."<<name<<" was born on "<<date<<"\n";} int main (){ string nam1; string dat1; cout<< " Please enter the name of student \n"; cin>>nam1; cout<< " Please enter the date of birth in the format dd/mm/yyyy \n"; cin>>dat1; Person(nam1,dat1); return 0; }
Well i have managed to get the base class person but it gives error can someone help me out with whats wrong and how to implement the other required derived class student and instructor :idea: | https://www.daniweb.com/programming/software-development/threads/74962/inheritance-class-help-needed-asap | CC-MAIN-2018-30 | refinedweb | 204 | 61.29 |
I know there are several ways to do this in Java and C that are nice, but in c++ I can't seem to find a way to easily implement a string trimming function.
This is what I currently have:
string trim(string& str)
{
size_t first = str.find_first_not_of(' ');
size_t last = str.find_last_not_of(' ');
return str.substr(first, (last-first+1));
}
trim(myString);
/tmp/ccZZKSEq.o: In function `song::Read(std::basic_ifstream<char,
std::char_traits<char> >&, std::basic_ifstream<char, std::char_traits<char> >&, char const*, char const*)':
song.cpp:(.text+0x31c): undefined reference to `song::trim(std::string&)'
collect2: error: ld returned 1 exit status
Your code is fine. What you are seeing is a linker issue.
If you put your code in a single file like this:
#include <iostream> #include <string> using namespace std; string trim(const string& str) { size_t first = str.find_first_not_of(' '); if (string::npos == first) { return str; } size_t last = str.find_last_not_of(' '); return str.substr(first, (last - first + 1)); } int main() { string s = "abc "; cout << trim(s); }
then do
g++ test.cc and run a.out, you will see it works.
You should check if the file that contains the
trim function is included in the link stage of your compilation process. | https://codedump.io/share/zwvJ7ka5vnhp/1/c-trim-whitespace-from-a-string | CC-MAIN-2016-50 | refinedweb | 201 | 68.77 |
.
Complete the wizard; the query is added to the TableAdapter.
Build your project.
TableAdapters are not actually located inside their associated dataset classes. Each dataset has a corresponding collection of TableAdapters in its own namespace. For example, if you have a dataset named
SalesDataSet,.)
Queries that we think of as returning no value actually do return a value — an integer containing the number of rows affected by the query. The complete code to declare an instance of a TableAdapter and execute an SQL statement that returns no in the data source that SqlConnection1 connects to. (Otherwise, you need a valid SQL statement for your data source).
Add the following code to a method that you want to execute the SQL statement from. Call the ExecuteNonQuery method of a command to return no value (for example, SqlCommand..::.ExecuteNonQuery).();
The application requires permission to access the database and execute the SQL statement. | http://msdn.microsoft.com/en-us/library/0k7hxwz6.aspx | crawl-002 | refinedweb | 150 | 57.77 |
(2014-02-18 23:47)Spider_mp3 Wrote: Hello sphere,
fantastic add-on! I really like effects very much!
I would like to make a suggestion: have you ever considered creating a visualization add-on based on your Multi Slideshow Screensaver? The idea is to use the looking modes you have already created for your screensaver to display slideshows of the artist currently playing in the audio player. You could use the pictures stored in the extrafanart folders inside the artists folders. Skins like Transparency! already have this functionality (extra fanart slideshow during audio play), but they don't have the beautiful look modes of your add-on.
So, when XBMC is playing music in fullscreen mode (tab key), the visualization add-on should detect what artist is currently playing, let's say Shakira, look in the artists folder (i. e. \Music\Shakira\extrafanart\) and use the images inside the extrafanart folder. Then, when a new track starts to play, the add-on would switch to the new artist accordingly.
This would be a awesome add-on! What do you say?
Best regards,
Spider
(2014-03-09 03:54)majorsl Wrote: I'm giving this a whirl on Gotham B1. On two XBMC installs it'll run for a random amount of time and then freeze. I can hear XBMC "behind" it working... pressing the "s" key actually brings up the shutdown panel on top of the screen saver so I can exit.
I'm wondering if I have a corrupted piece of movie fanart causing problems, although I'm at a loss to figure out how to find it.
--- screensaver_orig.py 2014-04-04 14:06:25.856481700 -0500
+++ screensaver.py 2014-04-04 14:11:06.191837500 -0500
@@ -19,6 +19,7 @@
import random
import sys
+from PIL import Image
if sys.version_info >= (2, 7):
import json
@@ -476,7 +477,7 @@
def process_image(self, image_control, image_url):
MOVE_ANIMATION = (
- 'effect=slide start=0,720 end=0,-720 center=auto time=%s '
+ 'effect=slide start=0,720 end=0,-1280 center=auto time=%s '
'tween=linear delay=0 condition=true'
)
image_control.setVisible(False)
@@ -486,7 +487,11 @@
# to be added to the window in size order.
width = image_control.getWidth()
zoom = width * 100 / 1280
- height = int(width / self.image_aspect_ratio)
+ # height = int(width / self.image_aspect_ratio)
+ im = Image.open(image_url)
+ height = im.size[1] * width / im.size[0]
+ self.log('image size original %d' % im.size[0] + '/%d' % im.size[1] + ' new %d' % width + '/%d' % height)
+ del im
# let images overlap max 1/2w left or right
center = random.randint(0, 1280)
x_position = center - width / 2
smb://germunraid/photos/2006-07 Hope Valley/'
11:04:45 T:804319232 NOTICE: Multi Slideshow Screensaver: _get_folder_images ends
11:04:45 T:804319232 NOTICE: Multi Slideshow Screensaver: _get_folder_images started with path: '
(2014-04-26 20:09)2TallKnowItAll Wrote: Does this screensaver work with Gotham betas? I am trying to run it, but all I get is the spinning wheel of death indefinitely
14:02:59 T:222068 NOTICE: Multi Slideshow Screensaver: _get_folder_images started with path: 'smb://WHS/Pictures/Holidays/Mexico/' | http://forum.kodi.tv/showthread.php?tid=173734&page=5 | CC-MAIN-2015-22 | refinedweb | 511 | 58.89 |
Details
Description.
Issue Links
- is related to
HADOOP-6901 Parsing large compressed files with HADOOP-1722 spawns multiple mappers per file
- Open
- relates to
MAPREDUCE-606 Implement a binary input/output format for Streaming
- Resolved
-
MAPREDUCE-5018 Support raw binary data with Hadoop streaming
- Patch Available
Activity
- All
- Work Log
- History
- Activity
- Transitions
I see we don't have any properties for streaming combiners and I am having some problems with the reducer being fed the wrong format by the combiner. Before I submit a bug, I would like to understand what the intended behavior is. Without properties for combiners, such as stream.combine.input and stream.combine.output, it seems combiners read the properties that apply to the reducer. When that happens, it is impossible to have the reducer read typedbytes and write text when the combiner is on, since the reducer expects typedbytes in input and the combiner will provide text. The workaround is to have a job complete using a single serialization format and then a conversion job that doesn't use combiners, which not only adds a job but also is surprising to users and a violation of orthogonality. This is in the context of the development of RHadoop/rmr, a mapreduce package for R.
Thanks Matt.
I've committed the patch to branch-1.0 too.
+1. Please push it to both branch-1 and branch-1.0. Thanks.
Matt - Matthias's patch applies clean to branch-1. I'd like to push it in for hadoop-1.0.2, you ok? Thanks.
Integrated in Hadoop-Mapreduce-trunk #285 (See)
MAPREDUCE-889. binary communication formats added to Streaming by HADOOP-1722 should be documented. Contributed by Klaas Bosteels.
adjusted
HADOOP-1722-v6.patch for Hadoop version 0.20.1
Editorial pass over all release notes prior to publication of 0.21.
Streaming documentation in forrest should be updated with this feature.
Zhuweimin, presumably you're expecting the number of bytes reported by "wc -c" to be equal to the number of bytes in your input files, but that's not what you should be expecting really. Here's a quick outline of what happens when you run that command:
- Since you didn't specify an InputFormat, the TextInputFormat is used which leads to Text values corresponding to "lines" (i.e. sequences of bytes ending with a newline character) and LongWritable keys corresponding to the offsets of the lines in the file.
- Because you use rawbytes for the map input, Streaming will pass the keys and values to your mapper as <4 byte length><raw bytes> byte sequences. These byte sequences are obtained using Writable serialization (i.e. by calling the write() method) and prepending the length to the bytes obtained in this way.
You could probably get the behavior you're after by writing a custom InputFormat and InputWriter, but out of the box it's not supported at the moment as far as I know.
Thanks, Klaas.
I tried -jobconf option and it worked.
But it looks like a part of the content was lost.
The test results are the following.
Do you have any idea what's wrong?
----------
$ bin/hadoop fs -ls data
Found 2 items
rw-r r- 1 hadoop supergroup 67108864 2009-03-06 17:15 /user/hadoop/data/64m_1.dat
rw-r r- 1 hadoop supergroup 67108864 2009-03-06 17:15 /user/hadoop/data/64m_2.dat
$ hadoop jar contrib/streaming/hadoop-0.19.1-streaming.jar
-input data \
-output dataoutput \
-mapper "wc -c" \
-numReduceTasks 0 \
-jobconf stream.map.input=rawbytes
...
09/03/06 17:17:08 INFO streaming.StreamJob: map 0% reduce 0%
09/03/06 17:17:16 INFO streaming.StreamJob: map 100% reduce 0%
09/03/06 17:17:18 INFO streaming.StreamJob: Job complete: job_200903061543_0012
09/03/06 17:17:18 INFO streaming.StreamJob: Output: dataoutput
$ hadoop fs -cat dataoutput/part*
67107830
67107830
----------
Zhuweimin, in that case you probably want to use:
-D stream.map.input=rawbytes
instead of:
-io rawbytes
(You can also use -jobconf instead of -D, but that option has been deprecated).
hi,Klaas Bosteels
my case is read binary data and generate text data by mappers.
can you teach me the usage for it.
the following is correct?
$ hadoop jar contrib/streaming/hadoop-0.19.1-streaming.jar
-input data
-output result
-mapper "wc -c"
-numReduceTasks 0
-io rawbytes
but the result is null in the output folder.
Attaching a backported patch for the 0.19 branch.
Integrated in Hadoop-trunk #756 (See)
Attaching a backported patch for the 0.18 branch. Typed bytes have been the default communication format for all our dumbo progams for a while now, and we haven't run into any issues with that so far. Some quick timings also revealed that dumbo programs appear to be 40% faster when typed bytes are used.
I just committed this. Thanks, Klaas!
Not sure if small changes like this need to go through Hudson again, but we can still cancel if it's not necessary I guess...
Added throws IOException to InputWriter.initialize().
Makes sense. Please file one ( hopefully) last patch.
That's just because waitOutputThreads() throws IOException now. I added throws IOException to createOutputReader(), and therefore also to startOutputThreads() and waitOutputThreads(), because OutputReader.initialize() can throw an IOException. While checking this I noticed that I forgot to add throws IOException to InputWriter.initialize() by the way, maybe that should be changed before this gets committed...
Klaas, could you please explain why in PipeMapRed.mapRedFinished, the call to waitOutputThreads() has been moved inside the try{} block? Did you spot a bug that made you do this change?
+1 overall. Here are the results of testing the latest attachment
against trunk revision 741330.
.
This is what changed in version 5 of the patch:
- I implemented Owen's suggestion by adding the package org.apache.hadoop.streaming.io. This package contains an IdentifierResolver class that resolves string identifiers like "typedbytes" and "text" (case insensitive) into an InputWriter class, an OutputReader class, a key output class, and a value output class (different OutputReader classes require different key and value output classes to be set). Since a different resolver can be used by setting the property stream.io.identifier.resolver.class, external code can add new identifiers by extending IdentifierResolver and pointing stream.io.identifier.resolver.class to this extension.
- I removed the -typedbytes ... option and added -io <identifier> instead. This latter option is less fine-grained since it triggers usage of the classes corresponding to the given identifier for everything (e.g. -io typedbytes means that typed bytes will be used for the map input, the map output, the reduce input, and the reduce output), but you can still use more fine-grained configurations by setting the relevant properties manually. As suggested by Owen, the properties now are:
- stream.map.input=<identifier>
- stream.map.output=<identifier>
- stream.reduce.input=<identifier>
- stream.reduce.output=<identifier>
- Since it was easy to do, I also added RawBytesInputWriter and RawBytesOutputReader, which implement Eric's original suggestion (i.e. just a 4 byte length followed by the raw bytes). Quoting could be added as well of course, but I would rather pass that on to someone else/another patch...
The patch is nearly done, I'll submit it later today...
Eric, the patch looks good. Klaas, if you could update your patch with Owen's last comment, we could look at it for commit..
Hi committers,
Could you estimate how close we are to being able to accept this patch?
E14
I thought about that, but since streaming is primarily used by developers who don't want to write Java, I think the usability is better if we just have a set of enums/strings that we map into classes in streaming.
"typed.bytes" -> TypedBytesMRInputWriter / TypedBytesMROutputProcessor
"text" -> TextMRInputWriter / TextMROutputProcessor
You might consider renaming the classes to something like StreamingInputWriter and StreamingOutputReader, which are more symmetric with each other.
if someone implemented my suggestion above, it could be called "backquoted" or something.
By the way, you of course could use a different identifier string, like "typedBytes" or "typed_bytes".
I guess we could even do:
stream.map.input.typed.bytes=true -> stream.map.input.writer=org.apache.hadoop.streaming.TypedBytesInputWriter
stream.map.output.typed.bytes=true -> stream.map.output.processor=org.apache.hadoop.streaming.TypedBytesOutputProcessor
etc, or do you think that goes to far?
I'd suggest that we generalize this a little bit more and make the MRInputWriter and MROutputWriter take the input and output stream. And then instead of:
stream.map.input.typed.bytes=true -> stream.map.input=typed.bytes
So that if someone wants to add another encoder it is trivial to do so.
At some point, we probably should make the AutoInputFormat more complete and promote it into mapreduce.lib, but clearly that can happen in a different patch.
I'm not that convinced that people will use the typed bytes format outside of python, where the library is already present, but it does give them an option, which is currently not there.
-1 overall. Here are the results of testing the latest attachment
against trunk revision 740532.
.
Actually, ignore my last comment. I realized that you need to get the length of the Writable.
One thing i noticed - TypedBytesWritableOutput.writeWritable makes a copy of the Writable passed and then writes it out. Can we instead write directly to the underlying out of TypedBytesOutput.
Looks like that last patch should actually have been version 5. It is the correct patch though, I just forgot that I already submitted a 4th version of it...
Version 4 of the patch moves everything that was added in core to streaming, as suggested by Deveraj.
Some comments:
- Since the typed bytes classes are still in the package org.apache.hadoop.typedbytes (and not in org.apache.hadoop.streaming.typedbytes or so), we can still move them to core later without braking sequence files that rely on TypedBytesWritable.
- I extended the streaming command-line format from "hadoop jar <streaming.jar> <options>" to "hadoop jar <streaming.jar> <command> <options>". This is backwards compatible because the command "streamjob" is assumed when no command is given explicitly, and it allowed me to add the commands "dumptb" and "loadtb" ("dumbtb" corresponds to the DumpTypedBytes class that used to be in tools, and "loadtb" is a new command that does (more or less) the reverse operation, namely, it reads typed bytes from stdin and writes them to a sequence file on the DFS).
Replies to Deveraj's comments:
- I guess it would indeed make sense to move everything I added in core to streaming. I'll attach a new version of the patch later today.
- The main advantage of the typed bytes is not that they lead to more compact representations, but rather that they can make it a lot easier to write certain streaming apps. Suppose for instance that you want to write a streaming app that consumes sequence files containing instances of VIntWritable as keys and instances of a custom Record as values. With typed bytes, the keys and values will then then be converted automatically to appropriate typed bytes (namely, the keys will be converted to a typed bytes integer and the values to a typed bytes list consisting of the typed bytes objects that correspond to the attributes of the record), whereas your streaming app would have to implement VIntWritable and Record deserialization itself if streaming would only support raw bytes. So using typed bytes does indeed make things a bit more complex, but it's definitely worth it in my opinion (and you can still use raw bytes if you want to by using a 0 byte as type code).
Looks good overall! The one thing that should be considered here is moving the typedBytes package from core to streaming. Is there currently a usecase where typedBytes might be used elsewhere? The same argument holds for DumpTypedBytes & AutoInputFormat as well. Could we have the DumpTypedBytes integrated with StreamJob (as in, if someone wants to test out things, he uses the streaming.jar and passes an option to invoke DumpTypedBytes tool).
The other thing is about special handling for the basic types, as opposed to using raw bytes for everything. How typical is the use case where we have Key/Value types as the basic types. I understand that it makes the on-disk/wire representation compact in the cases where the native types are used, but it would simplify the framework if we dealt with only raw bytes instead (and probably use compression).
It would help if you include an example of a streaming app where binary data is consumed/produced.
Klaas, I got your points now. Thanks.
-1 overall. Here are the results of testing the latest attachment
against trunk revision 738744.
.
Replies to Runping's questions:
- As I said, letting the mapper output typed bytes and the reducer take text as input probably will not be used much in practice, but that does not mean we should remove that option in my opinion.
- You can always set the properties directly via -D stream.map.input.typed.bytes=false etc. so it is still possible to let only the reducer output typed bytes. The -typedbytes command line option just provides shorthands for the most common combinations really, and if you are going to output typed bytes in the reducer then you might as well output typed bytes in the mapper too (since that will be faster and probably also more convenient from a programming perspective because types are preserved), so it seemed better to me to let the -typedbytes output shorthand correspond to using typed bytes for everything except for the map input. Moreover, the old implementation of -typedbytes output would lead to a sequence file containing Text objects when it is combined with -numReduceTasks 0 and -outputformat org.apache.hadoop.mapred.SequenceFileOutputFormat, which seems counterintuitive to me. When the programmer wants typed bytes as output, then all output sequence files should always contain TypedBytesWritables (as is always the case with the modified implementation of -typedbytes output).
Note also that all of this does not really matter that much. Since text gets converted to a typed bytes string, most people will be using typed bytes for everything in practice. The -typedbytes input|output|mapper|reducer options are mostly intended to make it possible to convert existing streaming programs gradually...
I'm glad to see you working on this! We'll take it for a spin.
I don't understand your reasoning about the format for the map output does not necessarily have to be the same as the format for the reduce input.
Why the programmer would want the streaming framework to convert the typed bytes outputted by a mapper to strings before passing them on to a reducer?
If the programmer wants that, why doesn't the mapper generate text output at the first place?
Another issue: it is not uncommon that the mapper output is in text format and the reducer output is in binary format (typed bytes).
Your last change of the semantics cannot express this case.
The failed contrib unit tests are not caused by the patch as far as I can tell, but I did find a small unrelated bug (serializing lists did not work properly in all cases). The 4th version of the patch contains a fix for this bug (as well as an update to the unit test that did not catch this bug before). Furthermore, I also changed the semantics of -typedbytes output once more. It now corresponds to
- stream.map.input.typed.bytes=false
- stream.map.output.typed.bytes=true
- stream.reduce.input.typed.bytes=true
- stream.reduce.output.typed.bytes=true
because it is more likely that you want to take text as input but use type bytes for everything else (instead of only using typed bytes for the reducer output).
-1 overall. Here are the results of testing the latest attachment
against trunk revision 738479.
.
Yes, that is what the current patch provides, but it also adds a stream.reduce.input.typed.bytes because the format for the map output does not necessarily have to be the same as the format for the reduce input (it makes most sense to use the same format of course, but the streaming framework can convert the typed bytes outputted by a mapper to strings before passing them on to a reducer if the programmer would want that for some reason). So the patch is a bit more general, but it includes all the cases you listed.
I think you need to have the flags for the three cases:
1. stream.reduce.output.typed.bytes=true
In this case, the PipeMapRed and PipeReducer classes needs to interpret the reduce output as typed bytes and deserialize it accordingly.
This is the case where the user wants to generate binary data by reducers and output them in the typed bytes format..
2. stream.map.output.typed.bytes=true
In this case, the PipeMapRed and PipeMapper classes needs to interpret the mapper output as typed bytes and deserialize it accordingly.
This is the case where the user wants to generate binary data by mappers. In this case, the types for the map output key/value pairs
(and that for the reducer input key/value pairs) are typed bytes. The types for map output must be the same as those for the reduce input.
3. stream.map.input.typed.bytes=true
The intended use case for this setting may be that the user knows that the input data is in typedbytes, and does not want to PipMapRed (PipeMapper) class to convert them into text by calling toString. Rather, PipeMapRed class should serialize them according to typedbytes.
The mapper program will interpret the serialized format properly.
I realized that it is probably more convenient/intuitive to make -typedbytes input correspond to
- stream.map.input.typed.bytes=true
- stream.map.output.typed.bytes=false
- stream.reduce.input.typed.bytes=false
- stream.reduce.output.typed.bytes=false
instead of
- stream.map.input.typed.bytes=true
- stream.map.output.typed.bytes=false
- stream.reduce.input.typed.bytes=true
- stream.reduce.output.typed.bytes=false
and similarly that it would be better to let -typedbytes output correspond to
- stream.map.input.typed.bytes=false
- stream.map.output.typed.bytes=false
- stream.reduce.input.typed.bytes=false
- stream.reduce.output.typed.bytes=true
instead of
- stream.map.input.typed.bytes=false
- stream.map.output.typed.bytes=true
- stream.reduce.input.typed.bytes=false
- stream.reduce.output.typed.bytes=true
Maybe this was also (part of) what Runping was trying to say in his comment? In any case, the attached third version of my patch incorporates this minor change.
This second version of my patch addresses the issues raised by Runping:
- The javadoc for the package org.apache.hadoop.typedbytes now includes a detailed description of the typed bytes format.
- The (boolean-valued) typedbytes-related properties for streaming are now:
- stream.map.input.typed.bytes
- stream.map.output.typed.bytes
- stream.reduce.input.typed.bytes
- stream.reduce.output.typed.bytes
- The command line option -typebytes was changed such that it can take the values none|mapper|reducer|input|output|all (that should cover most cases, and otherwise the properties listed above can be set manually).
BTW: The comment "The reduce input type flag should always be the same as the map output flag" is not really valid, since TypedBytesWritable's toString() outputs sensible strings and hence it would not be impossible to output typed bytes in the mapper and let streaming convert to strings and pass those strings as input to the reducer.
Any other comments?
Looks good.
A couple things.
1. The type flags: The user may need to specify two different output type flags, one for the map output, and the other is for the reducer output.
2. The reduce input type flag should always be the same as the map output flag, and thus it is completely independent of the input type flag for the mapper
3. Since the mapper/reducer may be implemented in other languages, such as C, we must document the serialization format for the TypedBytesWritable clearly in a language agnostic way. It will be great to have a library for the serialization/deserialization for each common languages.
Are there any comments on the attached patch? It basically implements an extended version of Eric's idea concerning the addition of an option that triggers the usage of a new binary format. However, instead of a 4 byte length it uses a 1 byte type code (and the number of following bytes is derived from this type code). This leads to a slightly more compact representation for basic types (e.g. a float requires 1 + 4 bytes instead of 4 + 4 bytes), and it also solves another important Streaming issue, namely, that all type information is lost when everything is converted to strings.
Contents
The patch consists of the following parts:
- A new package org.apache.hadoop.typedbytes in src/core that provides functionality for dealing with sequences of bytes in which the first byte is a type code. This package also includes classes that can convert Writables to/from typed bytes and (de)serialize Records to/from typed bytes. The typed bytes format itself was kept as simple and straightforward as possible in order to make it very easy to write conversion code in other languages.
- Changes to Streaming that add the -typedbytes none|input|output|all option. When typed bytes are requested for the input, the functionality provided by the package org.apache.hadoop.typedbytes is used to convert all input Writables to typed bytes (which makes it possible to let Streaming programs seamlessly take sequence files containing Records and/or other Writables as input), and when typed bytes are used for the output, Streaming outputs TypedBytesWritables (i.e. instances of the org.apache.hadoop.typedbytes.TypedBytesWritable class, which extends BytesWritable).
- A new tool DumpTypedBytes in src/tools that dumps DFS files as typed bytes to stdout. This can often be a lot more convenient than printing out the strings returned by the toString() methods, and it can also be used to fetch an input sample from the DFS for testing Streaming programs that use typed bytes.
- A new input format called AutoInputFormat, which can take text files as well as sequence files (or both at the same time) as input. The functionality to deal with text and sequence files transparantly was required for the DumpTypedBytes tool, and putting it in an input format makes sense since the ability to take both text and sequence files as input can be very useful for Streaming programs. Because Streaming still uses the old mapred API, the patch includes two versions of AutoInputFormat (one for the old and another for the new API).
Example
Using the simple Python module available at, the mapper script
import sys import typedbytes input = typedbytes.PairedInput(sys.stdin) output = typedbytes.PairedOutput(sys.stdout) for (key, value) in input: for word in value.split(): output.write((word, 1))
and the reducer script
import sys import typedbytes from itertools import groupby from operator import itemgetter input = typedbytes.PairedInput(sys.stdin) output = typedbytes.PairedOutput(sys.stdout) for (key, group) in groupby(input, itemgetter(0)): values = map(itemgetter(1), group) output.write((key, sum(values)))
can be used to do a simple wordcount. The unit tests include a similar example in Java.
Remark
This patch renders
HADOOP-4304 mostly obsolete, since it provides all underlying functionality required for Dumbo. If this patch gets accepted, then future versions of Dumbo will probably only consists of Python code again and thus be very easy to install and use, which makes adding Dumbo to contrib less of requirement.
I think arkady's point is much more to the point than this quoting proposal, which I think is going the wrong way!
There are two interfaces here - that between man & reduce and that into map and out of reduce. I think they deserve different handling.
1) map in & reduce out - Should by default just consume bytes and produce bytes. The framework should do no interpretation or quoting. It should not try to break the output into lines, keys & values, etc. It is just a byte stream. This will allow true binary output with zero hassle. Some thought on splits is clearly needed...
2) map out & reduce in - Here we clearly need keys and values. But i think quoting might be the wrong direction. It should certainly not be the default. I think we should consider just providing an option that specifies a new binary format will be used. here. Maybe a 4 byte length followed a binary key followed by a 4 byte length and then a binary value? Maybe with a record terminator for sanity checking?
Two observations:
1) Adding quoting by default will break all kinds of programs that work with streaming today. This is undesirable. We should add an option, not change the default behavior.
2) Streaming should not use utf8 anywhere! It should assume that it is processing a stream of bytes that contains certain signal bytes '\n' and '\t'. I think we all agree on this. treating the byte stream as a character stream just confuses things.
Yes, it should have been backslash. I guess it would be ok to unquote the \0 as a null byte, but the point of the exercise is require the minimal amount of quoting. Null bytes should be ok to pass through since they won't be confused with the special field/record delimiters .
In owen's table, I assume \ \ is a backSLASH, not a backquote.
More substantively, I think I would rather see an escape for the zero byte, perhaps \0 imaginatively enough, added to Owen's table.
-dk
bq
+1
would it be good to have an option which translate it, preserving the current behavior. It would be easier for few map/reduce scripts for framework to translate it.
I think the right way to handle this is to support a standard quoting language on input and output from each streaming process. In particular, I think that streaming should have:
tab = field separator
new line = record separator
\t = literal tab
\n = literal newline
\ \ = literal backslash.
Thoughts?
Passing data from from DFS to streaming mapper should be transparent,
By default, the mapper task should receive the exactly the same bytes as stored in DFS without any transformation.
There should also be command line parameters that specify other useful options, including custom input format, decompressions, etc.
There should be no requirements on the command that is used as Streaming Mapper.
This has been broken twice – in Sept. 2006, and in July 2007.
It would be nice to restore the functionality, and make it part of specification. (This implies adding regression cases, etc.)
This does seem like a bug. I'd expect the combiner to ignore the output property and always issue its output in the same format as its input. So this shouldn't require new properties unless I'm confused. | https://issues.apache.org/jira/browse/HADOOP-1722?focusedCommentId=12542991&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel | CC-MAIN-2015-35 | refinedweb | 4,520 | 54.73 |
Odoo Help
Odoo is the world's easiest all-in-one management software. It includes hundreds of business apps:
CRM | e-Commerce | Accounting | Inventory | PoS | Project management | MRP | etc.
How to block duplicating RECORDS (with the same employee_id)?
Hello everybody!!!!
Please, i have a class that contains an employee_id field.
So for the different records of my class, i want to block those with the same employee_id.
I have tried to work with sql_constraint but it doesnt work well.
Here is my code.
('uniq_name', 'unique(employee_id)', "A stuff already exists with this name . Stuff's name must be unique!"),
]
def on_change_test_id(self, cr, uid, ids, employee_id, context=None):
obj = self.pool.get('solde.compte')
obj_ids = obj.search(cr, uid, [('employee_id', '=',employee_id)])
vals = obj.read(cr,uid,obj_ids,['id','employee_id'],context=context)
if vals == []:
res = {'value':{'remaining_leave': self.get_inputs(cr, uid, ids, employee_id, context=context),
}
}
return res
else:
My_error_Msg = 'Attention!! Vous avez deja rempli un SOLDE DE TOUT COMPTE de cet employe'
raise osv.except_osv(_("Error!"), _(My_error_Msg))
return False | https://www.odoo.com/forum/help-1/question/how-to-block-duplicating-records-with-the-same-employee-id-94135 | CC-MAIN-2017-04 | refinedweb | 170 | 62.54 |
Photo editing tends to involve a lot of repetitive processes, especially when you’re working with a large album of images. If you’re willing to dabble in scripting, you can use GIMP to automate some of these actions to save yourself time and effort.
Scripting in GIMP isn’t easy, but it’s very rewarding if you’re prepared to learn the ropes. Here’s how to get started with a couple of very basic scripts.
Creating a Python Script
Before we start start working on our project in earnest, we need to lay some foundations. First, open up a text editor, then copy and paste the code below:
#!/usr/bin/python from gimpfu import * def first_plugin(timg, tdrawable): print "Hello, world!" register( "hello_world", "Presents a Hello, World! message", "Presents a Hello, World! message", "Brad Jones", "Brad Jones", "2017", "<Image>/Image/Hello, World!", "RGB*, GRAY*", [], [], first_plugin) main()
Here’s a brief rundown of what’s going on up there. The first two lines initialize the script and give us access to some helpful libraries. The portion of code following def first_plugin contains the instructions we’re giving GIMP. The information that follows the word register is everything GIMP needs to know about our plug-in.
This is the information we need to give GIMP to register our script:
- Name: the name of the command (e.g. hello_world)
- Blurb: a brief description of the command (e.g. Presents a Hello, World! message)
- Help: the help message to be displayed (e.g. Presents a Hello, World! message)
- Author: the person that created the script (e.g. Brad Jones)
- Date: the date the script was created (e.g. 2017)
- Label: the way that the script will be referred to in the menu (e.g. <Image>/Image/Hello, World!)
- Parameters: parameters attached to the plug-in (e.g. [] — none in this case)
- Results: Results from the plug-in (e.g. [] — none in this case)
- Function: the name used to refer to the action in our code (e.g. first_plugin)
Finally, we need to call main().
Save your script, and select All Files from the Save as type dropdown. Make sure to include the .py extension in your file name.
Next, place this file into GIMP’s plug-in folder, which can be found in Windows at Program Files > GIMP 2 > lib > gimp > 2.0 (or ~\Library\Application Support\GIMP\2.8\scripts on a Mac). You may need administrator privileges How to Create a Backup Administrator Account in Windows 10 How to Create a Backup Administrator Account in Windows 10 Having a second administrator account on your PC makes it easier to reset your own password if you forget it. Here's an easy way to create one. Read More to do so.
Initialize GIMP and open the Image menu. You should see Hello, World! right there at the bottom.
Now it’s time for us to make our script a little more useful.
Adding Some Functionality
Now we’re going to rewrite our script so that it actually does something practical. Open up the text file once again, and copy and paste the following code:
#!/usr/bin/env python from gimpfu import * def test_script(customtext, font, size): img = gimp.Image(1, 1, RGB) layer = pdb.gimp_text_fontname(img, None, 0, 0, customtext, 10, True, size, PIXELS, font) img.resize(layer.width, layer.height, 0, 0) gimp.Display(img) gimp.displays_flush() register( "python_test", "TEST", "TEST", "Brad Jones", "Brad Jones", "2017", "TEST", "", [ (PF_STRING, "customtext", "Text string", 'Scripting is handy!'), (PF_FONT, "font", "Font", "Sans"), (PF_SPINNER, "size", "Font size", 100, (1, 3000, 1)), ], [], test_script, menu="<Image>/File/Create") main()
This is a little bit more complex than our Hello, World! script, but it shares a very similar structure. First we create an image.
img = gimp.Image(1, 1, RGB)
Then we add text based on parameters supplied by the user.
layer = pdb.gimp_text_fontname(img, None, 0, 0, customtext, 10, True, size, PIXELS, font)
Next, we resize the image in accordance with the size of the text.
img.resize(layer.width, layer.height, 0, 0)
Finally, we tell GIMP to display the image on-screen.
gimp.Display(img) gimp.displays_flush()
All that’s left to do is add the registration information that GIMP needs, with the addition of some parameter settings that we didn’t include earlier.
[ (PF_STRING, "customtext", "Text string", 'Scripting is handy!'), (PF_FONT, "font", "Font", "Sans"), (PF_SPINNER, "size", "Font size", 100, (1, 3000, 1)), ],
Save this just like we saved the Hello, World! script, move it to the plug-ins folder, and restart GIMP. Head to File > Create > TEST to try out our plug-in.
You’ll see a window where you can set various parameters.
Click OK and you’ll create an image that looks something like this.
This demonstrates how you can use scripting in GIMP to automate a process 5 Resources for Excel Macros to Automate Your Spreadsheets 5 Resources for Excel Macros to Automate Your Spreadsheets Searching for Excel macros? Here are five sites that have got what you're looking for. Read More consisting of several different actions. Now let’s write a script that makes changes to an image we already have open.
Inverting a Layer
Once you are comfortable with scripting with Python in GIMP, you can automate all kinds of tweaks to your images. However, we’re going to start as simple as possible by implementing a script that inverts the colors of the current layer.
To get started, open up a text editor again, then copy and paste the following script:
#!/usr/bin/env python from gimpfu import * def invert_current_layer(img, layer): pdb.gimp_invert(layer) register( "python_fu_invert_current_layer", "Invert layer", "Invert colors in the current layer", "Brad Jones", "Brad Jones", "2017", "<Image>/Filters/Custom/Invert current layer", "*", [], [], invert_current_layer) main()
This follows from the script we created earlier. The first couple of lines of code lay down some foundations, and the last several lines take care of registration. Here is the important section:
def invert_current_layer(img, layer): pdb.gimp_invert(layer)
We’re defining our process, telling GIMP what components we’re going to refer to, then using pdb.gimp_invert to instruct the program to adjust the colors. Save this in the .py file format, add it to the plug-ins folder, then open up GIMP to check that it works.
Navigate to Filters > Custom > Invert current layer.
You should get a result similar to the one above. Of course, it’s already relatively easy to perform an invert operation in GIMP, but this is just a starting point. The great thing about writing your own scripts 10 Rewarding Hobbies That Involve Programming or Scripting 10 Rewarding Hobbies That Involve Programming or Scripting What kind of programming and/or scripting can you do that don't involve big corporations or contracted clients? Here are several ideas that you can start exploring right now. Read More is that you can create something that’s completely tailored to you.
Next Steps in GIMP Scripting
Once you understand the basics of scripting in GIMP, it’s time to start experimenting. Think about what kind of processes you do a lot and that would be useful. Read More . Then comes the tricky part: figuring out how to use code to realize those ideas.
Fortunately, GIMP can offer up some assistance. Navigate to Help > Procedure Browser and you’ll be able to access a list of all the procedures you can utilize.
The Procedure Browser not only lists the procedures themselves, but also gives you information regarding what parameters you need to supply in your code.
You can scroll through the entire list of procedures or use the search bar to narrow the field. Then just insert the procedure name and parameters into your script.
This information will be invaluable as you work on your scripts. Start out with some simple stuff, and before you know it you’ll be making some really useful automated processes!
Do you need help scripting with GIMP? Or do you have a tip that you want to share with other users? Either way, why not join the conversation in the comments section below?
Image Credits: Volkova Vera/Shutterstock
such a nice and informative info.
thank you | http://www.makeuseof.com/tag/automating-gimp-scripts/ | CC-MAIN-2017-51 | refinedweb | 1,367 | 66.23 |
Getting started with YUI’s Connection Manager in Rails and PHP; or “All Happy Families Are Not Alike”February 28th, 2007 by
This is not meant to supplant the excellent yui! tutorials which you should read in detail for thorough explanations and examples. What I am adding here are a few examples of using this in the Rails framework and some thoughts on the callback object and scope.
Your “AJAX” goals are simple: you want to communicate with your server, get a response back that you can use (or not), do something with that response (or not), and move on. As this is asynchronous, you want to do this without reloading your web page. Or, as a client once said to me, referring to certain animated gifs in the upper right-hand corner of certain browsers, without “making the world spin”. To this end, Yahoo! has supplied us one line of code:
var transaction = YAHOO.util.Connect.asyncRequest( method, uri, callback, postData);
or, same line with some data plugged in:
var transaction = YAHOO.util.Connect.asyncRequest( 'POST', 'php/post.php', callback,"id=1&old_id=2");
When I was making the switch from synchronous to a-, it helped me to visualize a standard web form to see how form elements and attributes are translated to an AJAX request. It’s pretty obvious, but if you need an “aha!” moment, the above line is akin to the html form printed below (though unless you’re one of those people whose definition of interactivity is “The Monologue”, do refrain from creating forms with 2 hard-coded hidden inputs and nothing else! :)).
<form method="post" action="php/post.php"> <input type="hidden" name="id" value="1" /> <input type="hidden" name="old_id" value="2" /> <input type="submit"> </form>
So, excluding a reference to the callback for the moment (which is not addressed in this example), that form maps to the Connection Manager call quite simply: method, action (uri), data. Let’s look at the arguments required:
method: the method of the server request (POST, GET and others also available).
uri: the uri that’s receiving and processing the data you send (in our example, “php/post.php”). YUI’s examples use php, but, if you’re using the Connection Manager in a Rails app, it’s easy to adapt: your argument uri might read “/projects/update which would pass the data to the update method in projects_controller.rb, which would then be able to access the data through the params array, like so:
def update @project = Project.find(params[:id]) end
In php you’d probably do some type of db query [assume input cleanup and some type of database abstraction layer, such as PEAR/DB_DataObject, here]
$project = DB_DataObject::factory('Projects'); $project->get($_POST['id']);
Callback: a reference to the callback object you are supplying. This is how everything is handled. More on that in a minute.
postData: the data itself in standard query-string format (”new=1&old=2″). NOTE: if you’re doing a GET transaction, your 4th argument would be false and your second argument would include the url and query string, like so:
“php/post.php?new=1&old=2".
So, what’s this callback?. In a synchronous transaction, you have the luxury of redrawing the page to process your data (and yes, nothing says luxury like a nice, slow, page reload…). In an asynchronous transaction you need to essentially “sneak” your data back into the page without reloading it. This is where your callback object comes in. It helps you get your data “in the door”, so to speak, so your page or application can change in a way that feels seamless to the user but often returns a visible result (changing a div, displaying some text, etc.) and if not a visible result at least a meaningful one (setting the value of a hidden form element, for example). Your callback is responsible for executing actions based on the data retrieved (or the failure to retrieve data) from the uri. In a standard synchronous form this action might be “generate an HTML table that displays your database results”. Or, “Print a message saying there are no results”. Of course, you can do anything you want with your data, that’s just an example of a fairly common scenario.
Once you’ve sent your data to the uri for processing, you need to wait for your response — without, of course, appearing to wait (save for the ubiquitous web 2.0 spinner you know you’re dying to try!). And, of course you want to know if your transaction failed. If you don’t watch for these things — “success” and “failure” in technical terms — you’re not going to be able to make an appropriate decision about what to do next in your app. So you feed your AJAX request a callback object: an object that defines functions for what to do in the cases of success and failure. In simplest terms, we’ve got
var callback = { success: handleSuccess, failure: handleFailure };
where “handleSuccess” and “handleFailure” are user-defined functions that take the http response object and do stuff with it.
handleSuccess = function(o){ // cheer! (or process data returned from the server) } handleFailure = function(o){ // cry, vow to try again! (or display failure message) }
There’s also the ability to pass scope, timeouts, and additional arguments to the callback object. To do so you’d read the great tutorials at the links above and add the lines below, of course changing the values to values meaningful in your application.
scope: Ajaxobject, timeout: 5000, args: ['arg1', 'arg2']
The handlers. handleSuccess and handleFailure both take an object o, which is the http response object. There’s a detailed list of all the properties of o on the yui page (Not to be confused with the Story of O, which I will not link to as it is beyond the scope of this article, you dirty rascals, you…). The property you’ll likely use most often is o.responseText, which is the server’s response as a string. This is what you pass back from good old ‘php/post.php’, and getting it is simple: echo. What? echo. What? echo..o..o… ok, sorry, moving on. For instance, if we wanted to capture the update_date in our successHandler to print to our page and we’re using php, we’d write something like this:
echo $project->update_date;
and if we’re in Rails? something like this:
render_text @project.update_date
If you need more data than a string — an array or collection of objects passed back from the server, you’ll find that’s simple, too: call the ruby method to_json() on your array instead. This essentially serializes your object so it can survive the journey to the Client. Once there, you can access the data using JavaScript’s magic wand: eval(). It’s great. So if you had an array of users connected to a project (and your database relationships are set up correctly), you could write
render_text @project.users.to_json
in php, assume you’ve got your $users array, and use print_r
print_r($users);
The in your JavaScript successHandler use eval(), like so:
var users = eval(o.responseText)
and bingo: in two lines you’re happily in your JavaScript parsing your users array like you would any other JavaScript array. You have connected your server to your DOM and no one is the wiser for it.
All this is great. We have our AJAX call and our callback object and are ready to go. But, suppose you don’t want to rewrite the AJAX call all over your app? The Yahoo folks have a great example of a ‘“hypothetical ajax object” (mysteriously named “AjaxObject”) that encapsulates success, failure, and process methods and calls a callback object that defines AjaxObject as it’s scope. Encapsulating your AJAX request so you can call it from wherever you want in your scripts in a DRY fashion makes your code cleaner and easier to manage. Yahoo! does this well in their example: in my usage I changed it up a little bit to meet my needs.
To quote the great Chicago writer Leo Tolstoy from his famous novel Anna Karenina Does Lake Michigan, ‘“Happy families are all alike; every unhappy family is unhappy in its own way”. I’ve learned that when working on an app, the opposite is true: success cases call for a range of actions: failures can more easily handled (log, display error, abort). Based on this, I’ve adapted the yui AjaxObject example to accept a postAction, successHandler, and object (used to define scope). This allows you to call AjaxObject from other objects, pass specific success handlers, and pass this (a reference to your current object) so you can access it in your successHandler from within the calling object. The AjaxObject builds the callback using those arguments. Like so:
var AjaxObject = { handleFailure:function(o){ // Fail gracefully }, /** * Wrapper for AJAX calls using YUI connector * * @param postAction {String} URL to post to * @param callBackSuccess {String} Success handler * @param postData {String} Data to post * @param obj {Object} Object that handler has scope in * */ startRequest:function(postAction, callBackSuccess, postData, obj) { var callback = { success:callBackSuccess, failure:this.handleFailure, scope:obj } // ASSUME you've shortened your yui connection mgr to $C $C('POST', postAction, callback, postData); } };
Then you can call AjaxObject from within a class, like so, and pass it a class method as it’s success-handler:
var Project = function Project(){ // initialize project however you like this.foo = "bar"; ... // CREATE in db and return id AjaxObject.startRequest('/project/create', this._generateDbId, postData, this); } // Success handler Project.prototype._generateDbId = function(o){ if(o.responseText !== undefined){ this._setDbid(Number(o.responseText)); // DO other stuff.. } }
This way your AJAX calls are in one place, you can use them in the scope of the calling object and define as many success handlers as success cases (or pass false); and fail in one standard way (gracefully, of course). Of course, this can be adapted to pass failure cases in too, or however you like. This was a way I found helpful in my work, and I hope it’s helpful to you as well. And thanks again to the folks at Yahoo! for providing so much great stuff to work with in the first place.
April 17th, 2007 at 5:32 am
Lev Tolstoy is from Russia
April 20th, 2007 at 2:19 pm
An excellent how-to with style and character. I like. :)
June 19th, 2007 at 2:41 am
@Leo: No Shit Sherlock!
@Sarah G: Thanks for the tutorial ;-)
November 27th, 2007 at 9:35 am
Thanks. Great tutorial
March 10th, 2008 at 1:35 pm
how about a working live demo?
April 8th, 2009 at 10:10 pm
I am using YUI file uploader with connection manager.I need to check the content of the server
response but i am unable to get the response because javascript always says the response is
undefined.I tested with FireBug which shows the correct response from the server.
Where is the problem? Is it parse error or something else? | http://www.devchix.com/2007/02/28/getting-started-with-yui%e2%80%99s-connection-manager-in-rails-and-php-or-all-happy-families-are-not-alike/ | crawl-002 | refinedweb | 1,854 | 60.95 |
You are not logged in.
Pages: 1
I am currently doing r&D as how to use splice to write and receive files from the socket .
Here is my code to send data from a file to socket
#include <iostream> #include <sys/socket.h> #include <sys/types.h> #include <netinet/tcp.h> #include <arpa/inet.h> #include <fcntl.h> #include <sys/stat.h> #include <errno.h> struct MyFds { int fdin; int fdout; int readpipe; int writepipe; loff_t size; MyFds(int p_fdin, int p_fdout, int p_readpipe, int p_writepipe, loff_t p_size) : fdin(p_fdin), fdout(p_fdout), readpipe(p_readpipe), writepipe(p_writepipe), size(p_size) { }; }; static void * splicecopywriter(void * threadparam) { const MyFds * fds = (const MyFds *) threadparam; std::cout << "In writer thread " << std::endl; loff_t offset = 0; size_t bytesleft = fds->size; int splices=0; while (bytesleft > 0) { splices ++; ssize_t bytes = splice(fds->readpipe, (loff_t *) 0, fds->fdout, & offset, bytesleft, 0 /* flags */ ); if (bytes == -1) { break; } bytesleft -= bytes; } int spliceerr = errno; std::cout << "writer:splices= " << splices << " errno=" << spliceerr << std::endl; std::cout << "still in writer thread " << std::endl; return (void *) 0; } static int splicecopyreader(const MyFds *fds) { loff_t offset = 0; size_t bytesleft = fds->size; int splices = 0; while (bytesleft > 0) { splices ++; ssize_t bytes = splice(fds->fdin, &offset, fds->writepipe, (loff_t *) 0, (size_t) fds->size, 0 /* flags */ ); if (bytes == -1) { break; } bytesleft -= bytes; } int spliceerr = errno; std::cout << "reader: splices=" << splices << " errno=" << spliceerr << std::endl; close(fds->writepipe); } static int splicecopypipes(const MyFds *fds) { // Main thread reads from disc and into pipe; // writerthread reads from pipe and on to disc. pthread_t writerthread; int res = pthread_create(&writerthread, (const pthread_attr_t *) 0, splicecopywriter, (void *) fds); std::cout << "In main thread" << std::endl; splicecopyreader(fds); int threadres; pthread_join(writerthread, (void **) (& threadres) ); std::cout << "Joined" << std::endl; return threadres; } static int splicecopyfd(int fdin, int fdout) { // Work out how big it is struct stat st; if (fstat(fdin, &st) != 0) { std::cerr << "Stat failed" << std::endl; return 3; } int pipes[2]; if (pipe(pipes) != 0) { std::cerr << "Pipe failed" << std::endl; return 4; } int readpipe = pipes[0]; int writepipe = pipes[1]; MyFds fds(fdin, fdout, readpipe, writepipe, st.st_size); int res = splicecopypipes(&fds); close(pipes[0]); close(pipes[1]); return res; } static int splicecopy(const char *srcfile) { int fdin = open(srcfile, O_RDONLY); if (-1 == fdin) { std::cerr << "Failed to open src file" << std::endl; return 1; } struct sockaddr_in lstServerAddr. inet_aton("172.20.101.38", &(lstServerAddr.sin_addr)); lstServerAddr.sin_port = htons(8000); lstServerAddr.sin_family = AF_INET; int fdout = socket(AF_INET, SOCK_STREAM, 0); if (fdout == -1) { printf("failed due to %s", strerror(errno)); return 1; } if (connect(fdout, (struct sockaddr*)&lstServerAddr, sizeof(sockaddr_in)) == -1) { printf("connect failed due to %s", strerror(errno)); return 1; } int res = splicecopyfd(fdin, fdout); close(fdin); close(fdout); return res; } int main(int argc, const char * argv[]) { if (argc < 2) { std::cerr << "Source required" << std::endl; return 1; } const char *srcfile = argv[1]; return splicecopy(srcfile); }
while writing to the socket it gives error as Invalid argument ie in the writer thread it give error Invalid Argument (EINVAL).
I have also tried setting both the offset to null while sending that doesn't work.
Please suggest as what os wrong exactly.
Thanks
Pankaj
You've really tried using NULL for all offsets? Because, that would be my guess as to your problem... Specifically, trying to specify an offset for the socket, which is non-seekable...
Also, I found when using splice(), it was wise to limit single calls to no more than about 60K at a time, due to the pipe buffer size... And, when copying to a socket, performance is greatly improved by using the SPLICE_F_MORE flag when you know you have more to write still to come... And, when doing a double-splice() with your own pipe (to copy between non-pipe FDs) as you're doing, you'd also be well-advised to add SPLICE_F_MOVE in the second splice() (the one copying from the pipe to the socket)...
I have sucessfully send data from a file to a socket using splice. Now while receiving / writing data from a socket to file using socket it
gives me EWOULDBLOCK error.
here is the Sample code.
int ReadData(int nSocketFD, int FileFd, int readpipe, int writepipe) { int lnRetValue = 0; loff_t lSocketOffSet = 0; loff_t lFileOffSet = 0; while(true) { lnRetValue = splice(lnFd, &lSocketOffSet/*NULL*/, writepipe, (loff_t*)0, 1024, 0) ; if (lnRetValue > 0) { if (lnRetValue == 1024) { break; } else { lnRetValue += lnRetValue; } } else if (lnRetValue == -1) { printf("Splice failed due to %s", strerror(errno)); return 1; } } lnRetValue = splice(readpipe, (loff_t*)0, lnFileFd, &lFileOffSet, 1024, 0); if (lnRetValue < 0) { printf("splice 1 failed due to %s", strerror(errno)); return 1; } return 0; }
pipe used are non blocking. tried setting the socketoffset to NULL.
Please advise as what am i doing wrong.
Regards,
Pankaj
Unless I can see the real code (and all of it), there's not really any way I can know what you're doing wrong... If you're getting EINVAL and truly are passing NULL for both offsets, then I'd have to guess your write pipe isn't really a pipe, or has been closed, or something like that... *shrug* All I can do is guess randomly without seeing all the real code which is actually producing the problem...
Pages: 1 | https://developerweb.net/viewtopic.php?id=7222 | CC-MAIN-2021-39 | refinedweb | 866 | 57 |
.
Recap 💬 end ## # end ## # end
reduce)
The end
Not too bad. But now another business requirement comes in to skip any number
under 5:
def halfly_even_doubly_odd(enum) enum.reduce(0) do |result, i| if i < 5 result else result + i * (i.even? ? 0.5 : 2) end end end
Ugh. That’s not very nice ruby code. Using
next it could look like:
def halfly_even_doubly_odd(enum) enum.reduce(0) do |result, i| next result if i < 5 next result + i * 0.5 if i.even? result + i * 2 end end
next works in any enumeration, so if you’re just processing items using
.each , you can use it too:
(1..10).each do |num| next if num.odd? puts num end # 2 # 4 # 6 # 8 # 10 # => 1..10
break 🛑
Instead of skipping to the next item, you can completely stop iteration of a an
enumerator using
break.
If we have the same business requirements as before, but we have to return the
number 42 if the item is exactly 7, this is what it would look like:
def halfly_even_doubly_odd(enum) enum.reduce(0) do |result, i| break 42 if i == 7 next result if i < 5 next result + i * 0.5 if i.even? result + i * 2 end end
Again, end find_my_red_item([ { name: "umbrella", color: "black" }, { name: "shoe", color: "red" }, { name: "pen", color: "blue" } ]) # => 'shoe'
StopIteration
You might have heard about or seen
raise StopIteration.
It is a special exception that you can use to stop iteration of an enumeration,
as it is caught be
Kernel#loop, but its use-cases are limited as
you should not try to control flow using
raise or
fail. The
airbrake blog has a good article about this
use case.
When to use reduce
If you need a guideline when to use
reduce, look no further. I
use the four rules to determine if I need to use
reduce or
each_with_object or something else.
- reducing a collection of values to a smaller result (e.g. 1 value)
- grouping a collection of values (use
group_byif possible)
- changing immutable primitives / value objects (returning a new value)
- you need a new value (e.g. new Array or Hash)
Alternatives 🔀 end # => [12, 14, 16, 18, 20]
Use
each_with_object when:
- building a new container (e.g. Array or Hash). Note that you’re not really reducing the current collection to a smaller result, but instead conditionally or unconditionally map values.
- you want logic in your block without repeating the result value (because you must provide a return value when using
reduce)
My use case
The reason I looked into control flow using
reduce is because I was iterating
through a list of value objects that represented a migration path. Without using
lazy, I wanted an elegant way of representing when these
migrations should run, so used semantic versioning. The migrations enumerable is
a sorted list of migrations with a semantic version attached.
migrations.reduce(input) do |migrated, (version, migration)| migrated = migration.call(migrated) next migrated unless current_version.in_range?(version) break migrated end
The function
in_range? determines if a migration is executed, based on the
current “input” version, and the semantic version of the migration. This will
execute migrations until the “current” version becomes in-range, at which point
it should execute the final migration and stop.
The alternatives were less favourable:
take_while,
selectand friends are able to filter the list, but it requires multiple iterations of the migrations collection (filter, then “execute”);
findwould be a good candidate, but I needed to change the input so that would require me to have a bookkeeping variable keeping track of “migrated”. Bookkeeping variables are almost never necessary in Ruby.
Discussion (0) | https://dev.to/xpbytes/control-flow-in-reduce-inject-ruby-25b4 | CC-MAIN-2021-49 | refinedweb | 607 | 55.34 |
So first off my programming skills are weak, but I have my code somewhat working. My goal is to use my touch sensor to turn my servo and when its not pressed it would go back. The code right now when I press the touch sensor and lift my finger the servo turns for a millisecond or so. I want it to move when I push it, and I don’t know how to code the turning. Can anyone help? Thank you so much!!! Also in my code I used pushbutton as my touch sensor
You have not created a servo object, attached the servo object to a pin or written the position to the servo. Why would you expect it to move ?
Have you looked at and tried the Servo examples ?
There are examples in the IDE
Try this:
#include <Servo.h>; Servo myservo; // create servo object to control a servo int position = 0; const byte pushButton = 2; void setup() { Serial.begin(9600); pinMode(pushButton, INPUT_PULLUP); myservo.attach(9); // <-----<<<<< pin 9 geos to the servo input } void loop() { int buttonState = digitalRead(pushButton); //Serial.println(buttonState); if (buttonState == HIGH && position < 180) { digitalWrite(6, HIGH); myservo.write(180); } else { digitalWrite(6, LOW); myservo.write(0); } }
@larryd
I can see what you meant to do, but your program does not update the position variable so it will always be less than 180
I just typed some new code into the OP's sketch.
You are correct, I should have just had:
if (buttonState == HIGH)
. | https://forum.arduino.cc/t/coding-a-servo-as-a-lock/473650 | CC-MAIN-2022-27 | refinedweb | 252 | 74.39 |
How do I convert Neo4j logs from base UTC to local timezone
With the introduction of Neo4j 3.3.1 it is possible to represent date timestamps in
your $NEO4J_HOME/logs/* in either UTC or SYSTEM timezone through the implementation of
dbms.logs.timezone
However for prior releases all Neo4j logs will preface each line with a date/time string of the format
<YYYY-MM-DD HH24:MM:SS.MMM+0000>
for example
2016-12-01 15:51:00.222+0000 INFO [o.n.k.i.DiagnosticsManager] --- INITIALIZED diagnostics START ---
where the +0000 above indicates the date/time is expesssed in UTC format. Logging in UTC is helpful for analysis when a cluster is defined with members in different timezones. However, when cluster members are in the same timezone or you are running a single instance you may want to log in local timezone. There is a pending product improvement to request the date/time string be configurable based upon timezone.
In the absence of this feature, one can run the following Perl script to convert any file from UTC timezone to the machine timezone where the perl script is run.
For most Unix implementations to determine the timezone, if one runs
$ date
this will return output similar to
Mon Jan 16 14:38:06 EST 2017
indicating the EST timezone.
To convert a log from UTC to EST run
$ ./utc.pl debug.log > debug.EST.log
To install the script, copy the following lines from here to a file named
utc.pl on your linux server.
#!/usr/bin/perl -w use strict; use Time::Local; #needed for timegm() my $file = $ARGV[0] or die "USAGE: $0 <filename>\n"; open(my $data, '<', $file) or die "Could not open '$file' $!\n"; while (my $line = <$data>) { # where a line might start as # 2017-01-11 23:22:28.372+0000 INFO ... .... .... chomp $line; # check to make sure the line begins with a YYYY-MM-DD HH if ( $line =~ /\d\d\d\d-\d\d-\d\d \d\d/ ) { my $newstring = UTC2LocalString($line); print "$newstring\n"; } else { print "$line\n"; } } sub UTC2LocalString { # below attributed to Marshall at my $t = shift; my ($datehour, $rest) = split(/:/,$t,2); # $datehour will represent YYYY-MM-DD HH (i.e. 2017-01-14 12) # $rest represents the rest of the line after # and this will reassemble and return $datehour (adjusted) + $rest my ($year, $month, $day, $hour) = $datehour =~ /(\d+)-(\d\d)-(\d\d)\s+(\d\d)/; # proto: $time = timegm($sec,$min,$hour,$mday,$mon,$year); my $epoch = timegm (0,0,$hour,$day,$month-1,$year); # proto: ($sec,$min,$hour,$mday,$mon,$year,$wday,$yday,$isdst) = # localtime(time); my ($lyear,$lmonth,$lday,$lhour,$isdst) = (localtime($epoch))[5,4,3,2,-1]; $lyear += 1900; # year is 1900 based $lmonth++; # month number is zero based #print "isdst: $isdst\n"; #debug flag day-light-savings time return ( sprintf("%04d-%02d-%02d %02d:%s", $lyear,$lmonth,$lday,$lhour,$rest) ); }
Make the script executable by running:
$ chmod +x utc.pl
Run the script as:
$ ./utc.pl <log file>
replacing
<log file> with a filename.
With Neo4j 3.3 and as a result of PR 10127 the timestamp timezone can be configured through
parameter
dbms.logs.timezone.
Was this page helpful? | https://neo4j.com/developer/kb/how-do-i-convert-neo4j-logs-from-base-utc-to-local-timezone/ | CC-MAIN-2020-50 | refinedweb | 535 | 61.46 |
This is the program that causes the acces violation. Can't be more simple... #include <python.h> #include <boost/python.hpp> namespace python = boost::python; void func() { python::handle<> main_module( python::borrowed( PyImport_AddModule( "__main__" ) ) ); //python::handle<> main_namespace( python::borrowed( PyModule_GetDict( main_module.get() ) ) ); PyRun_SimpleString("a = 3"); python::handle<> hmain_namespace( python::borrowed( PyModule_GetDict(main_module.get() ) ) ); python::dict main_namespace( hmain_namespace ); } int main( int argc, char** argv ) { Py_Initialize(); func(); Py_Finalize(); return 0; } The acces violation occurs in the last line of the func() code: python::dict main_namespace( hmain_namespace ); I have been trying to debug what happens. This is what I have got. Don't know if it will be useful: In the template function template <class T> explicit dict(T const& data) : base(object(data)) { } data, in this case, the handle<> named main_namespace, holds the following information - data {...} - m_p 0x00ae33b0 + _ob_next 0x00ae3338 + _ob_prev 0x00ae3478 ob_refcnt 2 + ob_type 0x00574038 _PyDict_Type inside the function object object::object(handle<> const& x) : object_base(python::incref(python::expect_non_null(x.get()))) {} x should have the same information as data had before, because is the same handle<>, but I find the following: - x {...} - m_p 0x00ae33b0 ob_refcnt 11416376 + ob_type 0x00ae3478 It seems that the PyObject structure pointed by m_p has... dynamically changed? Well, sorry if I am sayin nonsenses, but I don't know what to do. Maybe it would be better for me to use objects and only use handles, but then there is no sense in using boost, because I could only get the advantage PyObject handling. More details of my project: It is a windows 32 console application, made in Visual C++ 6.0 I am using the directives _DEBUG, and the boost_python_debug.lib library that this morning compiled using bjam and the source code that checked out from boost- consulting.com yesterday at night. The version of the puthon libraries is 2.2.3, but it fails also with 2.2.1. The include path of the proyect is: E:\proxp\prog\DirectX 9.0 SDK\Lib E:\PROXP\PROG\DIRECTX 8.1 SDK VC++\INCLUDE e:\proxp\prog\VC++ 6.0\INCLUDE e:\proxp\prog\VC++ 6.0\ATL\INCLUDE e:\proxp\prog\VC++ 6.0\MFC\INCLUDE E:\PROXP\PROG\PYTHON 2.2.1\INCLUDE (when i tried with 2.2.3 i changed it to 2.2.3 of course) E:\proxp\prog\boost checkout-14-09-2003 boost-consulting.com The library path of the proyect is: E:\proxp\prog\DirectX 9.0 SDK\Lib E:\PROXP\PROG\DIRECTX 8.1 SDK VC++\LIB e:\proxp\prog\VC++ 6.0\LIB e:\proxp\prog\VC++ 6.0\MFC\LIB E:\proxp\prog\Python 2.2.1 source\PCbuild (because I use python_d.dll for debug version. As above, i change to 2.2.3 when i using 2.2.3 includes) E:\proxp\prog\boost checkout-14-09-2003 boost-consulting.com\libs\python\build\bin- stage ( I have also tried E:\proxp\prog\boost checkout-14-09-2003 boost- consulting.com\libs\python\build\bin\boost_python.dll\msvc\debug\runtime-link-dynamic with the same ACCES VIOLATION result ) I also have tried to compile the boost.python libraries using msvc with the project located in E:\proxp\prog\boost checkout-14-09-2003 boost- consulting.com\libs\python\build\VisualStudio but I get tons of erros from the e:\proxp\prog\VC++ 6.0\INCLUDE\mmreg.h file, saying things lik 'WORD' : missing storage-class or type specifiers and other types as well that VC does not find. This seems strange to me, that a file from VC++ include has this kind of errors. Any hint would be appreciate. If need more details, just ask me. David Lucena | https://mail.python.org/pipermail/cplusplus-sig/2003-September/005185.html | CC-MAIN-2014-15 | refinedweb | 615 | 53.07 |
We are switching to Xcode 10 and are noticing some issues with Xcode 10's code coverage. Entire swaths of code are getting no coverage. So far I have noticed
* Computed vars
* class functions in extensions
* guard statements in extensions
For example:
import UIKit extension Date { func dayNumberOfWeek() -> Int? { return Calendar.current.dateComponents([.weekday], from: self).weekday } } extension UIView { // Testing with this function compiled will cause no code coverage // for steveWasHereHelper to be generated. // // However, if I comment out steveWasHere, then code coverage for // steveWasHereHelper is generated, despite that this function is also // called by wyattWasHere @discardableResult public class func steveWasHere(value: Int) -> Bool { return self.steveWasHereHelper(value: value) } // The usage of Date here is to try and rule out compiler optimization @discardableResult public class func steveWasHereHelper(value: Int) -> Bool { let d = Date().dayNumberOfWeek() let testValue = value + d! if testValue >= 42 { return true } return false } fileprivate class func neverCalled() { print("This is never called") } // steveWasHereHelper is also called here @discardableResult public class func wyattWasHere(value: Int) -> Bool { return self.steveWasHereHelper(value: value) } }
In this example, steveWasHereHelper will not get any coverage. It won't be green, it won't be red, its as if the code is not called. If I remove all tests EXCEPT for the direct test of steveWasHereHelper, it still gets no coverage.
HOWEVER, if I then comment out steveWasHere, THEN steveWasHereHelper shows coverage.
I have opened a Radar with Apple on this issue (45468318) but I was wondering if anyone else has seen this (This is not the only example of issues with code coverage in Xcode 10) and if there is a workaround.
I have also tested this in 10.1b3, getting the same results.
self. was added to the function calls just to see if it had any impact.
Narrator: It didn't
Re: Missing code coverage with Xcode 10hybridcattt Oct 30, 2018 11:24 AM (in response to stevebay)
Hi Steve,
I am seeing a similar issue on Xcode 10.0 (10A255). Getting no coverage for an extension of a struct with a where clause.
I don't have a workaround yet - will post if I find one.
Re: Missing code coverage with Xcode 10aditiJ Nov 6, 2018 9:34 AM (in response to stevebay)
I am seeing similar issues. Actually not only that, what I am noticing :
1- Coverage number varies between multiple runs on XC 10 on the same binary. Like in first run it shows x% vs in another run it will show y% keeping the same code.
2- Coverage number/Number of tests varies while running on 11.4 simulator and 12.0 simulator, both ran on XC 10
3- Number of tests also a little different like in some of my run it was 5507 tests vs in some runs it was 5506.
XC 10 certainly came with lots of bugs.
Please provide more input if you faced similar issues or found any workaround or solutions @stevebay hybridcatttstevebay
Re: Missing code coverage with Xcode 10rsharp Dec 27, 2018 11:13 AM (in response to stevebay)
I'm finding code coverage with Xcode 10 very frustrating. I use it to help find areas that have no test coverage, but it's been very unreliable.
I have restarted Xcode, deleted derived data, turned code coverage off/on, changed what was covered (all vs specific targets) and nothing produces deterministic results.
In the current state of my project, Xcode reports large blocks of code as never being hit (right-hand gutter in source files show code counts in only parts of the file). Yet coverage reports show the file as being covering 100%. Setting breakpoints in numerous areas in that file are never hit, so the 100% claim is a lie.
Sometimes, the values as reported in the right-hand gutter seems correct. But then the reported coverage is less than 100%.
While massively painful, I think I will have to resort to literally adding breakpoints to every individual code path and prove I have ample test coverage.
Re: Missing code coverage with Xcode 10stevebay Jan 3, 2019 3:24 PM (in response to rsharp)
It's good to know it is not just us. I have opened a second TSI with Apple on this, also see: | https://forums.developer.apple.com/thread/110263?ru=9209&sr=stream | CC-MAIN-2019-26 | refinedweb | 706 | 61.97 |
Re: String drawing gives me headache... advise very welcome
- From: "Bob Powell [MVP]" <bob@xxxxxxxxxxxxxxxxxxxxxxx>
- Date: Thu, 06 Mar 2008 18:34:45 +0100
Just a tip.. You may want to clone a format such as GenericTypographic rather than create a new one. Otherwise I can't see what is "wrong" with your output.
--.
Fre wrote:
Hi all,.
I'm working on this for some time now, but it keeps giving me
headaches. I'm drawing strings in a label and in a
DataGridViewTextBoxCell. Why? Subparts of the string need other
colors. It's working, but not as should be. I'm not able to fully
understand how drawing works with TextFormatFlags and StringFormat.
I've the feeling I tried all possible combinations of flag settings,
but it's not working as should be (which is well formatted strings and
backgrounds, well formed spacing...). This is a complete example that
draws a string in a label (to test, create a project with a Form
(Form1) and a Label (label1). As you can see after testing the code,
the result is buggy. How does MS draws the strings?
using System;
using System.Drawing;
using System.Windows.Forms;
namespace TestDrawing
{
public partial class Form1 : Form
{
public Form1()
{
InitializeComponent();
this.Size = new Size(600, 80);
label1.Location = new Point(5, 5);
label1.Size = new Size(585, 21);
}
private void label1_Paint(object sender, PaintEventArgs e)
{
Rectangle rec = e.ClipRectangle;
Graphics g = e.Graphics;
Font f = new Font("Courier New", 10, FontStyle.Regular);
string[] writeThis = { "This drawing ",
"gives",
" me ",
"a",
" serious ",
"headache",
"! Why",
" isn't it ",
"w",
"orkin",
"g?"
};
// text format flags
TextFormatFlags flags = TextFormatFlags.Left;
flags |= TextFormatFlags.NoPadding;
flags |= TextFormatFlags.PreserveGraphicsClipping;
flags |= TextFormatFlags.NoPrefix;
flags |= TextFormatFlags.GlyphOverhangPadding;
// more text layout
StringFormat sf = new StringFormat();
sf.Alignment = StringAlignment.Near;
sf.LineAlignment = StringAlignment.Center;
sf.Trimming = StringTrimming.None;
sf.FormatFlags = StringFormatFlags.MeasureTrailingSpaces;
sf.FormatFlags |= StringFormatFlags.NoWrap;
sf.FormatFlags |= StringFormatFlags.NoFontFallback;
bool b = false;
Size propsize = new Size(int.MaxValue, int.MaxValue);
foreach (string s in writeThis)
{
Size size = TextRenderer.MeasureText(g, s, f, propsize,
flags);
rec.Width = size.Width;
// fill rectangle and draw string (toggle colors)
b = !b;
g.FillRectangle(b ? Brushes.Indigo : Brushes.LightGreen, rec);
g.DrawString(s, f, b ? Brushes.Wheat : Brushes.Black, rec,
sf);
// update the start position of our rectangle
rec.X = rec.X + size.Width;
}
}
}
}
Thanks in advance for your feedback.
Frederik
- Prev by Date: Re: Graphics.TextRenderingHint throw exception?
- Next by Date: Re: Graphics.TextRenderingHint throw exception?
- Previous by thread: Re: Loss of precision when drawing gridlines and blocks on top.....
- Next by thread: on-screen overlay connections : looking for the medium-priced solution
- Index(es): | http://www.tech-archive.net/Archive/DotNet/microsoft.public.dotnet.framework.drawing/2008-03/msg00013.html | crawl-002 | refinedweb | 436 | 54.49 |
A.
Doctrine DBAL
Doctrine DBAL (available as RC4 as of this writing) provides a wonderful abstraction layer to process all the needed database actions. DBAL is a lightweight layer built around a PDO-like API which offers features like database schema introspection, schema migration, database access and manipulation through an OO API. Note that DBAL API can be used independently of the Doctrine ORM API. So you can use the DBAL library even if you are unfamiliar with the ORM API.
Creating a MySQL schema from DBAL
In this post I’ve created a small example script using the DBAL API to generate a MySQL schema for a couple of tables. Note that the DBAL API uses php namespaces, so you will need to be familiar with them to understand the code, which also means that you must be at least using PHP 5.3.0
This will generate the following MySQL statements.
Migrating schemas to another platform
Now that you have generated the script to create sql schemas, you can easily migrate the generated schemas to another platform using a couple of statements. For example we can migrate the above MySQL statements to Oracle using the following code.
Basic column types
Below are the basic types that you can use in the ‘addColumn’ method when creating tables.
In the next few posts we will explore some more features of Doctrine DBAL.
4 thoughts on “Creating SQL schemas with Doctrine DBAL”
thanks for all your efforts to write such a nice and informative article. Is there any option to use this for versions lesser than 5.3.
Namespaces where added from version 5.3, so it is not possible to use the class prior to that version.
>> Doctrine uses a class loader to autoload the required classes
True; however, if you already have a PSR-0 compatible autoloader, you can skip the bundled option.
Great article. I only trying find out how set other DB options, primary key, unique, autoincement and etc. There I can find full list of array key? Assocc is really horrible thing in PHP you never have a clue that you suppose put in function. | https://www.codediesel.com/mysql/creating-sql-schemas-with-doctrine-dbal/ | CC-MAIN-2019-13 | refinedweb | 359 | 61.16 |
Contributor
2831 Points
May 16, 2005 03:47 PM|azamsharp|LINK
Contributor
2831 Points
May 16, 2005 03:57 PM|azamsharp|LINK
Jun 20, 2005 11:39 PM|Thanhq|LINK
Since you already added the using Microsoft.Practices.EnterpriseLibrary.Caching.Expirations; I don't think you need to use the long namespace. The only difference I can see between your code and theirs is the bold parameter and your is ASP.NET vs Windows:
primitivesCache.Add(id, name,
CacheItemPriority.Normal, new ProductCacheRefreshAction(),
new SlidingTime(TimeSpan.FromMinutes(5)));
When I look into the source code of the class ProductCacheRefreshAction, its primary purpose is to refresh the cache after an item is removed. However, there is no implementation there at all from their quickStart sample.
2 replies
Last post Jun 20, 2005 11:39 PM by Thanhq | https://forums.asp.net/t/885941.aspx?SlidingTime+is+not+reconized+by+Asp+net | CC-MAIN-2020-50 | refinedweb | 135 | 58.69 |
Qt Designer has changed significantly in the Qt 4 release. We have moved away from viewing Qt Designer as an IDE and concentrated on creating a robust form builder which can be extended and embedded in existing IDEs. Our efforts are ongoing and include the Visual Studio Integration, as well as integrating Designer with KDevelop and possibly other IDEs.
The most important changes in Qt Designer 4 which affect porting for UI files are summarized below:
The rest of this document explains how to deal with the main differences between Qt Designer 3 and Qt Designer 4:
See Porting to Qt 4 and qt3to4 - The Qt 3 to 4 Porting Tool for more information about porting from Qt 3 to Qt 4. See also the Qt Designer Manual.
In Qt 3, uic generated a header file and an implementation for a class, which inherited from one of Qt's widgets. To use the form, the programmer included the generated sources into the application and created an instance of the class.
In Qt 4, uic creates a header file containing a POD class. The name of this class is the object name of the main container, qualified with the Ui namespace (e.g., Ui::MyForm). The class is implemented using inline functions, removing the need of a separate .cpp file. Just as in Qt 3, this class contains pointers to all the widgets inside the form as public members. In addition, the generated class provides the public method setupUi().
The class generated by uic is not a QWidget; in fact, it's not even a QObject. Instead, it is a class which knows how to populate an instance of a main container with the contents of the form. The programmer creates the main container himself, then passes it to setupUi().
For example, here's the uic output for a simple helloworld.ui form (some details were removed for simplicity):
namespace Ui { class HelloWorld { public: QVBoxLayout *vboxLayout; QPushButton *pushButton; void setupUi(QWidget *HelloWorld) { HelloWorld->setObjectName(QString::fromUtf8("HelloWorld")); vboxLayout = new QVBoxLayout(HelloWorld); vboxLayout->setObjectName(QString::fromUtf8("vboxLayout")); pushButton = new QPushButton(HelloWorld); pushButton->setObjectName(QString::fromUtf8("pushButton")); vboxLayout->addWidget(pushButton); retranslateUi(HelloWorld); } }; }
In this case, the main container was specified to be a QWidget (or any subclass of QWidget). Had we started with a QMainWindow template in Qt Designer, setupUi()'s parameter would be of type QMainWindow.
There are two ways to create an instance of our form. One approach is to create an instance of the Ui::HelloWorld class, an instance of the main container (a plain QWidget), and call setupUi():
#include <QApplication> #include <QWidget> #include "ui_helloworld.h" // defines Ui::HelloWorld int main(int argc, char *argv[]) { QApplication app(argc, argv); QWidget w; Ui::HelloWorld ui; ui.setupUi(&w); w.show(); return app.exec(); }
The second approach is to inherit from both the Ui::HelloWorld class and the main container, and to call setupUi() in the constructor of the subclass. In that case, QWidget (or one of its subclasses, e.g. QDialog) must appear first in the base class list so that moc picks it up correctly. For example:
#include <QApplication> #include <QWidget> #include "ui_helloworld.h" // defines Ui::HelloWorld class HelloWorldWidget : public QWidget, public Ui::HelloWorld { Q_OBJECT public: HelloWorldWidget(QWidget *parent = 0) : QWidget(parent) { setupUi(this); } }; int main(int argc, char *argv[]) { QApplication app(argc, argv); HelloWorldWidget w; w.show(); return app.exec(); }
This second method is useful when porting Qt 3 forms to Qt 4. HelloWorldWidget is a class whose instance is the actual form and which contains public pointers to all the widgets in it. It therefore has an interface identical to that of a class generated by uic in Qt 3.
Creating POD classes from UI files is more flexible and generic than the old approach of creating widgets. Qt Designer does not need to know anything about the main container apart from the base widget class it inherits. Indeed, Ui::HelloWorld can be used to populate any container that inherits QWidget. Conversely, all non-GUI aspects of the main container may be implemented by the programmer in the application's sources without reference to the form.
Qt 4 comes with the tool uic3 for working with old .ui files. It can be used in two ways:
You can use both these methods in combination to obtain UI, header and source files that you can use as a starting point when porting your user interface to Qt 4.
The first method generates a Qt 3 style header and implementation which uses Qt 4 widgets (this includes the Qt 3 compatibility classes present in the Qt3Support library). This process should be familiar to anyone used to working with Qt Designer 3:
uic3 myform.ui > myform.h uic3 -impl myform.h myform.ui > myform.cpp
The resulting files myform.h and myform.cpp implement the form in Qt 4 using a QWidget that will include custom signals, slots and connections specified in the UI file. However, see below for the limitations of this method.
The second method is to use uic3 to convert a Qt Designer 3 .ui file to the Qt Designer 4 format:
uic3 -convert myform3.ui > myform4.ui
The resulting file myform4.ui can be edited in Qt Designer 4. The header file for the form is generated by Qt 4's uic. See the Using a Designer UI File in Your Application chapter of the Qt Designer Manual for information about the preferred ways to use forms created with Qt Designer 4.
uic3 tries very hard to map Qt 3 classes and their properties to Qt 4. However, the behavior of some classes changed significantly in Qt 4. To keep the form working, some Qt 3 classes are mapped to classes in the Qt3Support library. Table 1 shows a list of classes this applies to.
Converting Qt 3 UI files to Qt 4 has some limitations. The most noticeable limitation is the fact that since uic no longer generates a QObject, it's not possible to define custom signals or slots for the form. Instead, the programmer must define these signals and slots in the main container and connect them to the widgets in the form after calling setupUi(). For example:
class HelloWorldWidget : public QWidget, public Ui::HelloWorld { Q_OBJECT public: HelloWorldWidget(QWidget *parent = 0); public slots: void mySlot(); }; HelloWorldWidget::HelloWorldWidget(QWidget *parent) : QWidget(parent) { setupUi(this); QObject::connect(pushButton, SIGNAL(clicked()), this, SLOT(mySlot())); } void HelloWorldWidget::mySlot() { ... }
A quick and dirty way to port forms containing custom signals and slots is to generate the code using uic3, rather than uic. Since uic3 does generate a QWidget, it will populate it with custom signals, slots and connections specified in the UI file. However, uic3 can only generate code from Qt 3 UI files, which implies that the UI files never get translated and need to be edited using Qt Designer 3.
Note also that it is possible to create implicit connections between the widgets in a form and the main container. After setupUi() populates the main container with child widgets it scans the main container's list of slots for names with the form on_objectName_signalName().
If the form contains a widget whose object name is objectName, and if that widget has a signal called signalName, then this signal will be connected to the main container's slot. For example:
class HelloWorldWidget : public QWidget, public Ui::HelloWorld { Q_OBJECT public: HelloWorldWidget(QWidget *parent = 0); public slots: void on_pushButton_clicked(); }; HelloWorldWidget::HelloWorldWidget(QWidget *parent) : QWidget(parent) { setupUi(this); } void HelloWorldWidget::on_pushButton_clicked() { ... }
Because of the naming convention, setupUi() automatically connects pushButton's clicked() signal to HelloWorldWidget's on_pushButton_clicked() slot.
In Qt 3, the binary data for the icons used by a form was stored in the UI file. In Qt 4 icons and any other external files can be compiled into the application by listing them in a resource file (.qrc). This file is translated into a C++ source file using Qt's resource compiler (rcc). The data in the files is then available to any Qt class which takes a file name argument.
Imagine that we have two icons, yes.png and no.png. We create a resource file called icons.qrc with the following contents:
<RCC version="1.0"> <qresource prefix="/icons"> <file>yes.png</file> <file>no.png</file> </qresource> </RCC>
Next, we add the resource file to our .pro file:
RESOURCES += icons.qrc
When qmake is run, it will create the appropriate Makefile rules to call rcc on the resource file, and compile and link the result into the application. The icons may be accessed as follows:
QFile file(":/icons/yes.png"); QIcon icon(":/icons/no.png"); QPixmap pixmap(":/icons/no.png");
In each case, the leading colon tells Qt to look for the file in the virtual file tree defined by the set of resource files compiled into the application instead of the file system.
In the .qrc file, the qresource tag's prefix attribute is used to arrange the files into categories and set a virtual path where the files will be accessed.
Caveat: If the resource file was not linked directly into the application, but instead into a dynamic or static library that was later linked with the application, its virtual file tree will not be available to QFile and friends until the Q_INIT_RESOURCE() macro is called. This macro takes one argument, which is the name of the .qrc file, without the path or the file extension. A convenient place to initialize resources is at the top of the application's main() function.
In Qt Designer 4, we can associate any number of resource files with a form using the resource editor tool. The widgets in the form can access all icons specified in its associated resource files.
In short, porting of icons from a Qt 3 to a Qt 4 form involves the following steps:
Qt Designer 3 supported defining custom widgets by specifying their name, header file and methods. In Qt Designer 4, a custom widget is always created by "promoting" an existing Qt widget to a custom class. Qt Designer 4 assumes that the custom widget will inherit from the widget that has been promoted. In the form editor, the custom widget will retain the looks, behavior, properties, signals and slots of the base widget. It is not currently possible to tell Qt Designer 4 that the custom widget will have additional signals or slots.
uic3 -convert handles the conversion of custom widgets to the new .ui format, however all custom signals and slots are lost. Furthermore, since Qt Designer 3 never knew the base widget class of a custom widget, it is taken to be QWidget. This is often sufficient. If not, the custom widgets have to be inserted manually into the form.
Custom widget plugins, which contain custom widgets to be used in Qt Designer, must themselves be ported before they can be used in forms ported with uic3. The Porting to Qt 4 document contains information about general porting issues that may apply to the custom widget code itself, and the Creating Custom Widgets for Qt Designer chapter of the Qt Designer Manual describes how the ported widget should be built in order to work in Qt Designer 4.
[Previous: Porting to Qt 4 - Drag and Drop] [Next: Porting to Graphics View] | http://idlebox.net/2010/apidocs/qt-everywhere-opensource-4.7.0.zip/porting4-designer.html | CC-MAIN-2014-15 | refinedweb | 1,882 | 53.81 |
Tutorial: Custom QML components
In this tutorial, we'll learn about joining QML components together by creating a PostcardView component from an ImageView, a TextField, and some containers.
After we create our custom component, we'll see how to use it in an application and modify its appearance and behavior.
You will learn to:
- Create a custom QML component
- Use a custom component
- Rotate and scale a component
- Use JavaScript in QML
Before you begin
You should have the following things ready:
- The BlackBerry 10 Native SDK
- A device or simulator running BlackBerry 10
Set up your project
Create a project
The first thing we must do is create a Cascades project using the Standard empty project template. For more information on creating a project, see Managing projects.
Create a new QML file
Next, we need to create a new QML file for our PostcardView custom control:
- In your project, right-click the assets folder and select New > Other.
- Expand BlackBerry, click QML File, and click Next.
- In the File Name field, provide a name for your custom control (for example, PostcardView).
- In the Template drop-down list, select Container.
- Click Finish.
Add the image assets
The last part of the setup is adding our image resources to the project. The .zip file included as a part of this tutorial contains images of four different locations, a default image, and an overlay image.
To import images into your project:
- Download the images.zip file.
- Extract the images folder to the project's assets folder in your workspace. For example, C:\your_workspace\project_name\assets.
- In the Project Explorer view, refresh your project to display the imported images.
Create the PostcardView control.
Now that the project is set up, let's start creating our custom component. In the assets folder, double-click PostcardView.qml to open the file in the editor. Go ahead and remove all the existing code from the PostcardView.qml file since we will build it from scratch. The PostcardView component is pretty simple; it consists of two ImageView controls and a Label, all within a Container. One of the ImageView controls is the postcard image and the other is a static overlay that provides a border, a drop shadow, and a glossy finish for the postcard.
First, create a Container that uses an AbsoluteLayout as its layout. We're using AbsoluteLayout because we want to position our text directly on top of our postcard image.
import bb.cascades 1.0 // The root container for the custom component Container { layout: AbsoluteLayout {}
Next, let's add our two images. The image that is added first is the default postcard image and the second image that we add is the overlay. Since the postcard image is added to the container first, the overlay image is displayed in front of it on the screen.
The postcard image needs an id property. This will be necessary when we change the image that is being displayed. The layoutProperties for the postcard image are used to offset the image so that it is displayed in the center of the overlay.
// The postcard image ImageView { id: imageView imageSource: "asset:///images/default.png" layoutProperties: AbsoluteLayoutProperties { positionX: 14 positionY: 14 } } // The overlay image that sits on top of the postcard image ImageView { imageSource: "asset:///images/overlay.png" }
The last part of our custom component is a TextArea that we'll use to add some text on top of our image. The TextArea needs an id property so that we can dynamically change the text that is being displayed.
Along with the id property, there are other properties that must be set. The positioning and size of the text area are set using the layoutProperties, preferredWidth, and preferredHeight properties. To make our greeting stand out on our postcard, we set the textStyle to bold and white. We also set the editable property to false so that the end user can't modify the text.
// The text displayed on the postcard TextArea { id: textArea text: "Greetings from... " preferredWidth: 400 preferredHeight: 400 backgroundVisible: false editable: false textStyle { fontWeight: FontWeight.Bold color: Color.create ("#FFFFFF") } layoutProperties: AbsoluteLayoutProperties { positionY: 20 positionX: 25 } } } // End of the root container
Add the custom component to the app
Now that we've created our custom component, all we need to do is add it to our application. In the Project Explorer view, double-click the main.qml file to open it in the editor.
To start, create a Page component at the root of the QML document. Page is a subclass of AbstractPane, which is the required root component for QML documents in Cascades.
Within the Page component, create a Container with a DockLayout and bind it to the content property for the page. Within the container, create an instance of the PostcardView component and give it an id. Finally, set the layout properties so that the container is centered within its parent's DockLayout.
import bb.cascades 1.0 Page { // The root container content: Container { layout: DockLayout {} background: Color.create("#262626"); // The custom component PostcardView { id: postCard horizontalAlignment: HorizontalAlignment.Center verticalAlignment: VerticalAlignment.Center } } }
Add a TextArea and a Button
First, open the main.qml file in the editor and remove the root Container and its contents. When you're done, main.qml should look like this:
import bb.cascades 1.0 Page { content: }
Before we can add the TextArea and the Button, we must add some containers to help us position the various components.
We need a root container that fills the screen, and two child containers that divide the screen horizontally. The container on the top displays the TextArea and the Button, while the one on the bottom displays the custom component.
The first container we create is the root container. This container uses a StackLayout, since we want the content containers to be positioned one above the other.
import bb.cascades 1.0 Page { // The root container content: Container { layout: StackLayout {} background: Color.create("#262626");
Next, we create the top container that displays the TextArea and the Button. This container also uses a StackLayout, since the TextArea will be positioned above the Button. The container's layoutProperties are set so that it fills its parent's layout horizontally.
// The top container. This container holds a TextArea and a Button. Container { // Padding to create some space between the edge of the // container and its child controls leftPadding: 50 rightPadding: 50 topPadding: 50 layout: StackLayout {} horizontalAlignment: HorizontalAlignment.Fill
Within this container, we create the TextArea and Button.
The TextArea has a bottomMargin property, which inserts some space between it and the Button, and it has a text property that contains a simple greeting message. The Button has a clicked() signal that we need to capture when the user clicks the button. We'll define the behavior for that signal handler later on.
// The text area that contains the greeting text that is // displayed on the post card TextArea { id: greetingPhrase bottomMargin: 10 text: "Greetings from..." } // The button used to generate a new postcard Button { text: "Create" } }
Next, we add back in the Container and PostcardView controls that we previously created, with some modifications. Instead of specifying a preferred width and height, we must set the horizontal and vertical alignment to position the Container within its parent's StackLayout. We also set the spaceQuota property to 1.0 so that the Container expands to fill any remaining space in its parent container.
// The bottom container. This container has the custom component. Container { layout: DockLayout {} horizontalAlignment: HorizontalAlignment.Fill verticalAlignment: VerticalAlignment.Fill layoutProperties: StackLayoutProperties { spaceQuota: 1.0 } PostcardView { id: postCard horizontalAlignment: HorizontalAlignment.Center verticalAlignment: VerticalAlignment.Center } } // End of the bottom container } // End of the root container } // End of the page
Now, build and run the application again to see the new layout and components. At this point however, there's no functionality attached to the button.
In the next part of the tutorial, we'll add some JavaScript logic that randomly selects the postcard image that is displayed each time the button is pressed.
Lastly, we'll play with the rotation and scale properties from VisualNode to see how we can modify the appearance of the PostcardView component.
Generate a new postcard
First, open the PostcardView.qml file in the editor.
Before we can generate new postcards, we need to access some of the inner properties of PostcardView. Even though PostcardView contains a TextArea with a text property and an ImageView with an image property, they are not accessible from outside the component because they are not defined at the root of PostcardView.
To expose these properties, you must define new properties at the root of the component, and bind them to the inner properties that you want to expose using an alias. Now, when you use the PostcardView control, you can easily access its text and image properties.
// The root container for the custom component Container { property alias image: imageView.imageSource property alias text: textArea.text ...
Next, we need to create a JavaScript function that generates a new postcard when the user presses the button on the screen. This function must contain logic that selects an image at random, and then rotates and scales the postcard to produce a unique effect each time a new postcard is generated.
In main.qml, at the root of the Page, create a function called createPostcard().
import bb.cascades 1.0 Page { // The root container for the application Container { // The content for the root container } // Generates a new postcard function createPostcard () { // TODO: Add functionality } }
Within createPostcard() the first thing we'll do is add some logic that randomly selects an image from one of the four locations. We'll create a switch statement to handle each of the four options. In addition to setting the image property for the PostcardView, this statement also sets the text property by combining the greeting phrase with the city name that goes with the image.
// Choose a random number between 1 and 4. var r = Math.ceil (Math.random () * 4) switch (r) { case 1: // Set postcard image and message to Malmo. postCard.text = greetingPhrase.text + " Malmö!" postCard.image = "asset:///images/malmo.png" break case 2: // Set postcard image and message to Marseille. postCard.text = greetingPhrase.text + " Marseille!" postCard.image = "asset:///images/marseille.png" break case 3: // Set postcard image and message to Rome. postCard.text = greetingPhrase.text + " Rome!" postCard.image = "asset:///images/rome.png" break case 4: // Set postcard image and message to Waterloo. postCard.text = greetingPhrase.text + " Waterloo!" postCard.image = "asset:///images/waterloo.png" break default: // Use the fallback image postCard.text = greetingPhrase.text postCard.image = "asset:///images/default.png" } // Ends the switch statement
Next, we must rotate and scale our PostcardView component. The properties for handling visual effects such as rotate and scale are a part of VisualNode, which is inherited by all Cascades classes that have a visual component.
// Rotate the whole component to a random number between // -10 and 10 degrees. postCard.rotationZ = (Math.random () * 20) - 10 // Scale the whole component to a random number between // 0.7 and 1.2 var s = Math.random () * 0.5 + 0.7 postCard.scaleX = s postCard.scaleY = s
Lastly, just call the createPostcard() function from the onClicked signal handler on the button.
// The button used to generate a new postcard Button { ... ... // Invoke createPostcard() when the button is clicked onClicked: { createPostcard () } }
Congratulations! Build and run the application one last time and you're done.
Once you're done, try to:
Last modified: 2014-11-17
Got questions about leaving a comment? Get answers from our Disqus FAQ.comments powered by Disqus | http://developer.blackberry.com/native/documentation/cascades/ui/custom_components/custom_components_tutorial.html | CC-MAIN-2014-52 | refinedweb | 1,925 | 56.55 |
On 21 February 2013 22:40, <piterrr.dolinski at gmail.com> wrote: > Thanks to all for quick relies. > > Chris, you are (almost) spot on with the if blocks indentation. This is what I do, and it has served me well for 15 years. > > code > code > > if (some condition) > { > code > code > } > > code > code So you already indent blocks in an "if" construct? This is good practise in some languages and is enforced in Python. Once I got used to it I found that the compulsory whitespace made it easier to read conditional code blocks. > > This is what I call code clarity. With Python, I am having to do this > > code > code > > ############################## > > if (some condition): > code > code > > ############################## > > code > code > > It does the job, but is not ideal. Do you mean that you literally insert a line of '#' characters before and after in "if" block? There's no need to do that. Just allow yourself to acclimatise to the significant whitespace and you'll find that it's easy to see where the block begins and ends. > > I am nervous about using variables "out of the blue", without having to declare them. For example, when I write "i = 0" it is perfectly OK to Python without 'i' being declared earlier. How do I know that I haven't used this variable earlier and I am unintentionally overwriting the value? I find I constantly have to use the search facility in the editor, which is not fun. > > You see, Javascript, for one, behaves the same way as Python (no variable declaration) but JS has curly braces and you know the variable you have just used is limited in scope to the code within the { }. With Python, you have to search the whole file. No, you only have to search the whole function which for me is rarely more than 20 lines. The statement "i=0" when inside a function will not overwrite anything outside the function (unless you use the global/nonlocal statements). I rarely use global variables or module level variables and if I do then I usually have a special place in a module/script for defining them. I also tend to name them in ALLCAPS just like C-preprocessor macros that need to be carefully maintained in a separate "namespace". > > Thanks to Chris, Ian and Dave for explaining the () issue around if and for statement. I don't agree with this, but I understand your points. The reason why I like parentheses is because they help with code clarity. I am obsessed with this. :-) After all, there is a reason why so many languages have required them for several decades. You'll get used to using the colon in the same way. > > What about Python's ambiguity? > For example, in C you would write > > if (myVar != 0) > do something > > in Python, this is legal > > if (not myVar): > do something > > What does this mean? Is it a test for myVar being equal to zero or a test for null, or else? All of those things. It executes "do something" if myVar is 1) zero (whether int/float/complex etc.) 2) False 3) None 4) an empty collection (list/set/tuple etc.) 5) an empty string 6) and more... If the context doesn't make it clear what you are testing for then use a more specific test (myVar!=0 works just as well). > I want to learn a new language but Python's quirks are a bit of a shock to me at this point. I have been Pythoning only for about a week. > > In the mean time, thanks to most of you for encouraging me to give Python a chance. I will do my best to like it, w/o prejudice. Many of the things that have confused/concerned you are things that I actually like about Python. Given time you may do as well. Oscar | https://mail.python.org/pipermail/python-list/2013-February/641293.html | CC-MAIN-2019-35 | refinedweb | 645 | 82.85 |
Proposed features/Group restaurants
Proposal
The amenity=* tag is very crowded, we should split the restaurants (used for all kinds of places to eat something) off from it. So create one key with the values we've had in amenity before. That way we can more easy add more values (and the renderer doesn't have to know about all different kinds of eating places). I don't know how to best name the key, maybe some native English speakers can help there.
Discussion
Seems to be reasonable suggestion. And I think we need not only another group for "eating places", but other groups too. Especial including proposed tags. The only one problem I see in this global change is rewriting old tags. It'll be great if this can be done automatically.--LEAn 14:18, 5 June 2008 (UTC)
- We need some trainees :-D And yes, the same thing should be done with educational stuff. --Bkr 14:44, 6 June 2008 (UTC)
I think that namespaces is the only way to manage the ever growing number of tags. Could something like amenity:restaurant=* or amenity=restaurant:* work? Gustavf 21:28, 5 June 2008 (UTC)
- At the moment I only see amenity=fast_food and amenity=restaurant to split off. May be also amenity=biergarten. What about food=fast_food, food=restaurant? (food=biergarten?) And if there is more than one, tag food=restaurant;biergarten. --Bahnpirat 06:54, 6 June 2008 (UTC)
- Gustavf, why would it have to be in the amenity category? Literally, a shop or a fitness center is also an „amenity“. I think we should try to use amenity only for stuff that cannot be categorized somewhere else.
Why not something like
- amenity=gastro which will include all kinds of gastronomy and the type will go into separate and combinable keys.
- style_restaurant=yes
- style_cafe=yes
- style_ice_cafe=yes
- style_biergarten=yes
- style_night_club=yes
with this it would be possible to create a restaurant with a connected nightclub. For things like this today you have to create two different objects. --Zottel 19:22, 1 October 2008 (UTC)
I completely agree that amenity is getting overloaded. Any value that requires a secondary tag for usefulness should be split out. (cuisine in the case of amenity=restaurant). It is simple to provide some continuity by making the new tag name the same as the amenity value, like this
- restaurant=yes(generic), restaurant=italian, restaurant=sushi;fusion, fast_food=hamburgers etc.
-- StellanL 01:10, 7 October 2008 (UTC) | https://wiki.openstreetmap.org/wiki/Proposed_features/Group_restaurants | CC-MAIN-2018-13 | refinedweb | 412 | 65.73 |
Using ARKit and Image Tracking to Augment a Postcard
Prayash Thapa, Former Developer
Article Categories:
Posted on
ARKit 2 brings a suite of tools to help you build bigger and better AR experiences on iOS. One of its key features is the ability to look for certain 2D images in the real world, and anchor virtual content to the found images.
A Primer on AR
Augmented Reality (AR) is the blending of interactive digital elements with the physical world. It creates an illusion of digital content (2D or 3D) inhabiting the real world. Until AR headsets become widely adopted, we will most likely experience AR through our mobile devices for the years to come. AR combines device motion tracking and advanced scene processing through state-of-the-art computer vision and machine learning algorithms to understand the user's surroundings.
You may have already seen several AR apps in the wild (Pokemon, Snapchat etc.), but AR can be more than just fun and novelty. You can use AR to preview furniture at scale, like the IKEA Place app, or you can use it to get accurate measurements of real-life objects (like the Measure app that comes by default in iOS 12).
Once you're thinking about AR beyond day-to-day activities, it opens up a lot of possibilities. For example, museums could provide AR apps so that visitors can point their phones at a painting to access new content. Retail stores can provide AR experiences that respond to the packaging of the items on sale, providing supplementary content to give customers more information. As you can imagine, AR opens up a new world of displaying and delivering content, with the added benefit of it all being available on anyone's mobile device. If you want to learn more, I recommend checking out this primer on AR.
I also really enjoy Nathan Gitter's AR prototypes on Twitter, they gives us a glimpse into the future of AR. I can't recommend reading his blog post about AR prototypes enough, he's got a ton of insight to share.
Mobile AR
AR apps are super fun to play with, and they're quite fun to build too. You may have heard about ARKit or ARCore, SDKs provided by Apple and Google to create AR experiences on iOS/Android devices. These SDKs are opening up new horizons for developers by exposing high-level APIs to create AR applications. These APIs make tasks like plane detection (horizontal and vertical), 6-DOF motion tracking, facial recognition, and 3D rendering a lot more manageable. As these SDKs are advancing quite rapidly in parallel with computing power, these platforms will continue to roll out more advanced features for developers to start integrating right away.
Many AR experiences can be enhanced by using known features of the user's environment to trigger the appearance of virtual content, instead of letting it float about around the user. In iOS 11.3 and later, we can now add such features by enabling 2D image tracking in ARKit. We can provide these images as assets for our apps, and use them as references during tracking. When ARKit detects these images in the real world, we can then use that event to anchor 3D (or 2D) assets to the AR world.
Today, I'll be guiding you through creating your own AR app. We'll use ARKit's image tracking feature to look for a 2D image in the real world, and anchor 3D content to it. I'm using a postcard of an elephant, but feel free to use any 2D image you want. I encourage you to print the image out, but you could technically track the image on your screen as well.
Here's what your final result should look like (you'll be using your own reference image for tracking of course). This beautiful postcard is courtesy of Brad Wilson.
Setup
If you are new to iOS development, fear not, I've started a project for you to download and start hacking on right away. But before you download, you'll need to ensure that you have at least macOS High Sierra (10.13.xx) and Xcode 10 installed. These can take quite a bit time to install (and many of you may already have them installed), but you absolutely need them before moving forward. You will also need a physical iOS device with iOS 11.3+ installed so you can preview your AR experiences IRL.
This guide by Apple on the composition of iOS apps will give you a good understanding of what's going on and will make things easier to grasp down the road. A basic understanding of Swift and Object Oriented Programming will be helpful, but is not a strict requirement.
Ready? Brew yourself a strong cup of coffee/tea, because things are about to get real fun. Download the Git repo here and double click the
AugmentedCard.xcodeproj file to open it in Xcode.
The
master branch of the repo includes the final code and assets. I've created different branches for each step of this tutorial so you can refer to those if needed along the way. Before we start, run
git checkout 1-setup inside of your Terminal (assuming you've
cded into the root of the repo). You'll have to change the
Bundle Identifier and
Team before building the app to run on your device:
ARKit Foundation
ARKit is an abstraction built by Apple that makes it surprisingly easy for you to build AR apps. The beauty of this is that you do not need to know the implementation details of how scene comprehension works or how the device is estimating lighting. The point is that we now have high-level interfaces that let us reach into these technologies and leverage their superpowers.
iOS apps are comprised of one or many screens that are represented by a special class provided by Apple's UIKit called
UIViewController. Because our app will only have one screen, we'll be using a single instance of a
UIViewController to display the camera feed and the AR experience on top of it.
With that said, here is the entire controller along with some inline documentation for you to read:
import UIKit import ARKit class ViewController: UIViewController { // Primary SceneKit view that renders the AR session @IBOutlet var sceneView: ARSCNView! // A serial queue for thread safety when modifying SceneKit's scene graph. let updateQueue = DispatchQueue(label: "\(Bundle.main.bundleIdentifier!).serialSCNQueue") // MARK: - Lifecycle // Called after the controller's view is loaded into memory. override func viewDidLoad() { super.viewDidLoad() // Set the view's delegate sceneView.delegate = self // Show statistics such as FPS and timing information (useful during development) sceneView.showsStatistics = true // Enable environment-based lighting sceneView.autoenablesDefaultLighting = true sceneView.automaticallyUpdatesLighting = true } // Notifies the view controller that its view is about to be added to a view hierarchy. override func viewWillAppear(_ animated: Bool) { super.viewWillAppear(animated) guard let refImages = ARReferenceImage.referenceImages(inGroupNamed: "AR Resources", bundle: Bundle.main) else { fatalError("Missing expected asset catalog resources.") } // Create a session configuration let configuration = ARImageTrackingConfiguration() configuration.trackingImages = refImages configuration.maximumNumberOfTrackedImages = 1 // Run the view's session sceneView.session.run(configuration, options: ARSession.RunOptions(arrayLiteral: [.resetTracking, .removeExistingAnchors])) } // Notifies the view controller that its view is about to be removed from a view hierarchy. override func viewWillDisappear(_ animated: Bool) { super.viewWillDisappear(animated) // Pause the view's session sceneView.session.pause() } }
Hopefully the comments make it clear what the code is doing, but the gist of it is that we're setting up the 3D scene and the AR session here. The ViewController mounts a
sceneView onto the screen, which is a special view that draws the camera feed, and gives us lighting and 3D anchors to add virtual objects to. The
ARImageTrackingConfiguration is what tells ARKit to look for a specific reference image that we've loaded into memory called
refImages. We're also setting some debugging options so we can get stats like FPS on the screen.
Pro tip: If you want to refer to the documentation for any native SDK classes or functions, hold the
option key and click on a native class (like
UIViewController) to view the documentation in Xcode. This is a super handy feature of Xcode and will help you learn the API much quicker.
You might be wondering where our main
ViewController is actually getting instantiated and mounted on the screen. That is the responsibility of the Storyboard, Apple's Interface Builder that gives us a drag-and-drop interface to lay out the UI and assign screens. We then hook into some lifecycle methods of
UIVIewController to set up the AR session. You can see on the panel on the right that we've assigned our own custom class
ViewController to this View Controller. The Scene View has also been added to the view hierarchy, which we can reference from the code.
Adding a Reference Image
We'll now be adding our reference image. This is the image our app will be looking for in the real world. To add a reference image, we need to go to the
Assets.xcassets folder in Xcode. Right click on the Asset Catalog pane and select
New AR Resource Group. This will create an
AR Resources folder where you will drag your image into. A couple gotchas here:
- When creating the image, create a canvas of the physical size (like 5 in. x 7 in. in Photoshop), then paste your image onto that canvas for exporting your final reference image.
- Once imported into Xcode, you need to specify the exact size of the image in Xcode after adding the image. Xcode will warn you about this. Refer to the ARReferenceImage documentation for more on that.
There are a few guidelines you should abide by when choosing a reference image. Apple shared this slide during their WWDC presentation to give us an idea of what we should be going for:
If you violate these guidelines, Xcode will give you a warning about your reference image. You'll still be able to compile and run the app, but your results will depend greatly upon your adherence to the guidelines above.
Visualize Image Tracking Results
Before ARKit can start tracking your image, it needs to process the reference image first. It processes the image as grayscale and tries to find 'features' in the image that can be used as anchor points.
In order to visualize the result, we'll edit the
ViewController+ARSCNViewDelegate.swift file. This file extends our default
ViewController class and makes it a delegate for the
sceneView (
ARSCNView). To be a delegate means to be responsible for something. In this case, our extension will be responsible for all 3D rendering on the AR context. Because our
ViewController now conforms to the
ARSCNViewDelegate protocol, it needs to implement a specific set of methods to make the compiler happy. This is a key idea in Swift called Protocol-Oriented Programming. I've already added stubs for those methods so you don't need to worry about them, but if you do delete any of the stubbed methods then Xcode will yell at you.
The first thing we'll be doing is detecting when an anchor has been added. Because we configured our AR session to use the
ARImageTrackingConfiguration configuration in the
ViewController, it will automatically add a 3D anchor to the scene when it finds that image. We simply need to tell it what to render when it does add that anchor:
func renderer(_ renderer: SCNSceneRenderer, didAdd node: SCNNode, for anchor: ARAnchor) { guard let imageAnchor = anchor as? ARImageAnchor else { return } // Delegate rendering tasks to our `updateQueue` thread to keep things thread-safe! updateQueue.async { let physicalWidth = imageAnchor.referenceImage.physicalSize.width let physicalHeight = imageAnchor.referenceImage.physicalSize.height // Create a plane geometry to visualize the initial position of the detected image let mainPlane = SCNPlane(width: physicalWidth, height: physicalHeight) // This bit is important. It helps us create occlusion so virtual things stay hidden behind the detected image mainPlane.firstMaterial?.colorBufferWriteMask = .alpha // Create a SceneKit root node with the plane geometry to attach to the scene graph // This node will hold the virtual UI in place let mainNode = SCNNode(geometry: mainPlane) mainNode.eulerAngles.x = -.pi / 2 mainNode.renderingOrder = -1 mainNode.opacity = 1 // Add the plane visualization to the scene node.addChildNode(mainNode) // Perform a quick animation to visualize the plane on which the image was detected. // We want to let our users know that the app is responding to the tracked image. self.highlightDetection(on: mainNode, width: physicalWidth, height: physicalHeight, completionHandler: { // Introduce virtual content self.displayDetailView(on: mainNode, xOffset: physicalWidth) // Animate the WebView to the right self.displayWebView(on: mainNode, xOffset: physicalWidth) }) } }
Try building and running the code above on your device. Point your device at your reference image and you should see a brief animation of a flashing white rectangle over the image. We want to start our experience by indicating to the user that the image has been detected before adding content. The motivation behind this is that from a UX standpoint, AR applications incur big setup costs for users. They have to download the app, open it, give it the appropriate permissions to access the camera, and have to wait another few seconds for the camera to calibrate and start detecting features. By giving them an indication of the anchor in the scene, it notifies them that something is happening.
Skimming through the code above, we're using the detected image to render a SceneKit plane on top. This is the plane where we'll put our virtual content. You can see that we're first going to highlight the detected plane for a few seconds, then display a detail view, followed by the web view. These methods are empty for now, as we'll be filling them in next.
Introducing Virtual Content
Notice that we called 3 methods once the image was found:
highlightDetection,
displayDetailView, and
displayWebView, but nothing really happened except the flashing white rectangle overlaying the reference image. If you look at the implementation of the
displayDetailView method, it's empty. Let's go ahead and fill it out:
func displayDetailView(on rootNode: SCNNode, xOffset: CGFloat) { let detailPlane = SCNPlane(width: xOffset, height: xOffset * 1.4) detailPlane.cornerRadius = 0.25 let detailNode = SCNNode(geometry: detailPlane) detailNode.geometry?.firstMaterial?.diffuse.contents = SKScene(fileNamed: "DetailScene") // Due to the origin of the iOS coordinate system, SCNMaterial's content appears upside down, so flip the y-axis. detailNode.geometry?.firstMaterial?.diffuse.contentsTransform = SCNMatrix4Translate(SCNMatrix4MakeScale(1, -1, 1), 0, 1, 0) detailNode.position.z -= 0.5 detailNode.opacity = 0 rootNode.addChildNode(detailNode) detailNode.runAction(.sequence([ .wait(duration: 1.0), .fadeOpacity(to: 1.0, duration: 1.5), .moveBy(x: xOffset * -1.1, y: 0, z: -0.05, duration: 1.5), .moveBy(x: 0, y: 0, z: -0.05, duration: 0.2) ]) ) }
Hopefully this is easy for you to read through and get a sense of what's going on. We create another
SCNNode object which holds a plane geometry, and its material is actually going to be a SpriteKit scene. A SpriteKit scene is a 2D texture which can be created through Xcode's editor. Check out the
Resources/DetailScene.sks file and feel free to modify it. This is what we'll be displaying on the face of a 3D plane. We then set an initial position and opacity for the node (it's invisible and behind the tracked image). We can then use SceneKit Actions API to fade it in and move it to the left of the image.
Adding a WebView
We're almost there. Our next piece of virtual content will be an interactive Web view. This will load a specific URL and display it next to the postcard:
func displayWebView(on rootNode: SCNNode, xOffset: CGFloat) { // Xcode yells at us about the deprecation of UIWebView in iOS 12.0, but there is currently // a bug that does now allow us to use a WKWebView as a texture for our webViewNode // Note that UIWebViews should only be instantiated on the main thread! DispatchQueue.main.async { let request = URLRequest(url: URL(string: "")!) let webView = UIWebView(frame: CGRect(x: 0, y: 0, width: 400, height: 672)) webView.loadRequest(request) let webViewPlane = SCNPlane(width: xOffset, height: xOffset * 1.4) webViewPlane.cornerRadius = 0.25 let webViewNode = SCNNode(geometry: webViewPlane) // Set the web view as webViewPlane's primary texture webViewNode.geometry?.firstMaterial?.diffuse.contents = webView webViewNode.position.z -= 0.5 webViewNode.opacity = 0 rootNode.addChildNode(webViewNode) webViewNode.runAction(.sequence([ .wait(duration: 3.0), .fadeOpacity(to: 1.0, duration: 1.5), .moveBy(x: xOffset * 1.1, y: 0, z: -0.05, duration: 1.5), .moveBy(x: 0, y: 0, z: -0.05, duration: 0.2) ]) ) } }
We've instantiated a
UIWebView here and used it as a texture for a plane geometry. Finally, we animate its position and opacity so it moves to the right of the reference image..
Try running the app again. Voila! We should have a static PNG and an interactive web view loading next to our image now.
Parting Thoughts
Hooray! We're done. You should be able to run the app and preview it with your reference image. Make sure that your environment is well lit for optimal tracking. I hope this tutorial gave you a glimpse into the new world of possibilities that AR opens up. Do check out Apple's guide on image tracking which gives you more tips on best practices. We've laid a lot of groundwork here but you could easily extend this demo further by adding GIFs as SpriteKit textures, experimenting with other geometries and materials in SceneKit, playing around with animating other properties of the SceneKit nodes, and even importing your own 3D models to anchor to the reference image.
Personally, I'm excited about creating persistent AR experiences. ARKit 2 allows you to save mapping data of an environment and reload it on the fly, which is another exciting avenue to explore in AR. You could, for example, map your entire apartment or house and anchor 3D content to different areas in the house, and reload it later instantly. This could have some really interesting applications in the future, so as you play around with new AR features, try to think about how it could improve, or augment an ordinary experience. There are no rules, so have fun and let your imagination run wild!
Feel free to tweet at me @_prayash or leave a comment below if you want to share what you made, or if you have any follow-up questions. Have an AR idea that you'd like to prototype? Hit us up. We love exploring emerging tech with our clients. | https://www.viget.com/articles/using-arkit-and-image-tracking/ | CC-MAIN-2022-27 | refinedweb | 3,120 | 55.84 |
Hi,
I’m currently playing with Rails 3 (I’m new to rails, coming from PHP
background). Everything works good so far, until yesterday, when I
tried to setup a cronjob.
The most common way to starting regular jobs seems to be using “rails
runner”, right? So, as a test I’ve created a class “Mytest” in /vendor/
mytest.rb:
class Mytest
def self.hello
“hello world”
end
end
Now, when I try to run this method in rails runner like:
shell# rails runner “puts Mytest.hello”
I get this response:
====== snap ==========
/Users/christian/.rvm/gems/ruby-1.8.7-p249/gems/railties-3.0.0.beta/
lib/rails/commands/runner.rb:45: (eval):1: uninitialized constant
Mytest (NameError)
from /Users/christian/.rvm/gems/ruby-1.8.7-p249/gems/
railties-3.0.0.beta/lib/rails/commands.rb:60:in
eval' from /Users/christian/.rvm/gems/ruby-1.8.7-p249/gems/ railties-3.0.0.beta/lib/rails/commands/runner.rb:45 from /Users/christian/.rvm/gems/ruby-1.8.7-p249/gems/ railties-3.0.0.beta/lib/rails/commands.rb:60:inrequire’
from /Users/christian/.rvm/gems/ruby-1.8.7-p249/gems/
railties-3.0.0.beta/lib/rails/commands.rb:60
from /private/tmp/testapp/script/rails:10:in `require’
from /private/tmp/testapp/script/rails:10
I was wondering if I did something wrong. So I checked the same stuff
in Rails 2 on another machine. The same call "./script/runner “puts
Mytest.hello” works there.
I did some other test, but the rails runner script seems to throw this
error on other things too (like, when I created a simple model and try
to do "rails runner “User.new”).
The same call in “rails console” does work (in case of “puts
Mytest.hello” prints “hello world”).
Is this a bug in Rails 3? I’m using rvm and tried with serveral ruby
versions (ruby 1.8.7.p29, ruby 1.9.1.p378 and ruby 1.9.2-preview1 and
ruby-head). Rails is “3.0.0.beta”
Any ideas? Did I forget something here (beginner mistake), or is this
a (known) bug?
Regards,
Christian
P.S: I’ve found several ways to working with cron/background jobs with
Rails, like rake, background-rb, rails-runner or delayed_jobs. Is
there any recommended way to do this in Rails 3? I’m just wondering
because I feel cronjobs are a common task in web development, and
usually Rails does have a ‘recommendation’ for common taks.
| https://www.ruby-forum.com/t/rails-3-rails-runner-doesnt-work-here/184268 | CC-MAIN-2021-39 | refinedweb | 422 | 62.14 |
I need to test if an expression which returns an optional is
nil. This seems like a no-brainer, but here is the code.
if nil != self?.checklists.itemPassingTest({ $0 === note.object }) { … }
Which, for some reason, looks unpleasant to my eye.
if let item = self?.checklists.itemPassingTest({ $0 === note.object }) { … }
Looks much better to me, but I don’t actually need the item, I just need to know if one was returned. So, I used the following.
if let _ = self?.checklists.itemPassingTest({ $0 === note.object }) { … }
Am I missing something subtle here? I think
if nil != optional … and
if let _ = optional … are equivalent here.
Update to address some concerns in the answers
I don’t see the difference between
nil != varand
var != nil, although I generally use
var != nil. In this case, pushing the
!= nilafter the block gets the boolean compare of block mixed in with the boolean compare of the if.
The use of the Wildcard Pattern should not be all that surprising or uncommon. They are used in tuples
(x, _) = (10, 20), for-in loops
for _ in 1...5, case statements
case (_, 0):, and more (NOTE: these examples were taken from The Swift Programming Language).
This question is about the functional equivalency of the two forms, not about coding style choices. That conversation can be had on programmers.stackexchange.com.
After all this time, Swift 2.0 makes it moot
if self?.checklists.contains({ $0 === note.object }) ?? false { … }
After optimization, the two approaches are probably the same.
For example, in this case, compiling both the following with
swiftc -O -emit-assembly if_let.swift:
import Darwin // using arc4random ensures -O doesn’t just // ignore your if statement completely let i: Int? = arc4random()%2 == 0 ? 2 : nil if i != nil { println("set!") }
vs
import Darwin let i: Int? = arc4random()%2 == 0 ? 2 : nil if let _ = i { println("set!") }
produces identical assembly code:
; call to arc4random callq _arc4random ; check if LSB == 1 testb $1, %al ; if it is, skip the println je LBB0_1 movq $0, __Tv6if_let1iGSqSi_(%rip) movb $1, __Tv6if_let1iGSqSi_+8(%rip) jmp LBB0_3 LBB0_1: movq $2, __Tv6if_let1iGSqSi_(%rip) movb $0, __Tv6if_let1iGSqSi_+8(%rip) leaq L___unnamed_1(%rip), %rax ; address of "set!" literal movq %rax, -40(%rbp) movq $4, -32(%rbp) movq $0, -24(%rbp) movq [email protected](%rip), %rsi addq $8, %rsi leaq -40(%rbp), %rdi ; call println callq __TFSs7printlnU__FQ_T_ LBB0_3: xorl %eax, %eax addq $32, %rsp popq %rbx popq %r14 popq %rbp retq
The
if let syntax is called optional binding. It takes an optional as input and gives you back a required constant if the optional is not nil. This is intended for the common code pattern where you first check to see if a value is nil, and if it’s not, you do something with it.
If the optional is nil, processing stops and the code inside the braces is skipped.
The
if optional != nil syntax is simpler. It simply checks to see if the optional is nil. It skips creating a required constant for you.
The optional binding syntax is wasteful and confusing if you’re not going to use the resulting value. Use the simpler
if optional != nil version in that case. As nhgrif points out, it generates less code, plus your intentions are much clearer.
EDIT:
It sounds like the compiler is smart enough to not generate extra code if you write “if let” optional binding code but don’t end up using the variable you bind. The main difference is in readability. Using optional binding creates the expectation that you are going to use the optional that you bind.
I personally think it looks unpleasant because you are comparing nil to your result instead of your result to nil:
if self?.checklists.itemPassingTest({ $0 === note.object }) != nil { … }
Since you only want to ensure it is not nil and not use
item there is no point in using
let.
The answer of AirspeedVelocity show us that
let _ = and
!= nil produce the same assembly code, therefore I strongly suggest using the first approach.
In fact, if you have something like:
if let _ = optional { do_something() }
…and you want to add some code and now you need that optional, this change will be easier and quicker:
if let wrapped = optional { do_something() do_something_else(with: wrapped) }
Use
let _ = and write maintainable code. | https://exceptionshub.com/whats-the-difference-between-if-nil-optional-and-if-let-_-optional.html | CC-MAIN-2021-21 | refinedweb | 721 | 75.5 |
This is a realy dum question but why are my Strings and stuff not being used in my intro.h and other classes? It says theres a parce error but i don't see it.
I'll attach the project if anyone wants to look at it.I'll attach the project if anyone wants to look at it.Code:#include <IOSTREAM> #include <string.h> String torf,f_name,USER_name,commands,Name; //Says theres parce error before the; char User_MAP[20][20],Map[20][20],Get; bool newg,end,load; using namespace std; #include "Intro.h" #include "Instruct.h" #include "NewGame.h" #include "LMAP.h"
It's just a simple text adventure game I havent put everything in yet though. | http://cboard.cprogramming.com/game-programming/31788-small-problem.html | CC-MAIN-2013-48 | refinedweb | 119 | 79.77 |
NEW: Learning electronics? Ask your questions on the new Electronics Questions & Answers site hosted by CircuitLab.
Project Help and Ideas » Higher Resolution Large LED Display
I own the Nerdkit and LED Array kit and have built all the projects (just by following the directions). I don't understand a lot about them other than I got them to work.
I have a design in mind that I am trying to work on to have on display at my wedding reception (which is in July of this year). The idea is to have a large, higher resolution (maybe 10 x 100?), nicely finished led display that will hang on the wall and will scroll text.
The text would be able to be sent via txt message from the people who are attending the reception (a note on the table that says something like- "send a text message to 555-1234 with a special message for the bride and groom to have it displayed on the led sign".
I have combed the internet for a project that would be similar to this that I could tweak (minimally because of my understanding). I was given a tip from a friend to look for Arduino projects also since there is a lot of support for it online.
I am not looking for someone to spend the hours working out all the details for me on this, but I wanted to create this thread to open discussion about this project and also for the possibility that someone else may want to do something similar in the future.
Any comments welcome on this topic- I really want to finish this project to have at the reception. I think it would be really fun and different. No help is too small :) Thanks!
Hi Singlecoilx3 ,
For the recieving texts part - if i was u i would write a python script that reads texts sent from a cell phone to a google account or some sort of sms reader for your computer. Sryy if my typing is horrible, i just broke my wrist snowboarding a few days ago so i am trying to adjust.
-missle3944
Hi Singlecoilx3, what a great idea!! Congratulations on your upcoming marriage.
You should post a outline step by step of what you want to do.
Have you looked at the Nerdkits expanded Array using three mcus. Possible you could stack six of them.
Maybe using I2C port expanders and I2C Master/Slave mcu code could get you the processing power and large array.
I really like your concept. In fact it fits in with my water curtain project.
Ralph
Maybe Steven could help with you with the sms text messaging part.
I've left a message for him in his thread.
I am working on an outline and trying to research the best solution for receiving the text messages... google labs in gmail has a sms send/receive but it requires the gmail user start the conversation with each phone (will not work for this project). I have also come across many ways to SEND txt from a computer, but the receiving ones I have found are very limited, or like the google one- require the computer user initiate the sms conversation.
Thanks for the input on this so far guys, and keep the ideas rolling :)
Thanks
Singlecoilx3
Here is what I've got for an outline so far...
Wow, that's a lot of stuff to complete by July!
The large multiple panel LED display project already on the Nerdkits projects page allows for 'many' 5x20 panels (end to end) instead of a single 5x24 panel for the original LED display kit. I am wondering, as Ralphxyz mentioned, if the project could be easily adapted to have what would essentially be 10 panels, 5 across and 2 high. The programming change will most certainly have to change the way it displays text and scrolls since the letters would be 10x8 instead of 5x?. Can anyone with a good understanding of the multiple panel project speak to the difficulty of making this change?
This is a really cool idea! I've been looking around and there are a couple services/companies that offer SMS functionality with .NET APIs. Essentially you want to write a quick .NET program that polls the service for new messages and puts them in a queue. Simply send the message over the serial port to your NerdKit.
Here are some of those services:
Wow Steven, that is some great information! I will take a look into those links and also look into using .NET. I am familiar with VB and Python (and some C) but never (knowingly) made anything with .NET
Singlecoilx3
So after looking into .NET a little bit, it appears that I have been programming in a .NET environment (Microsoft Visual Basic 2010 Express)- correct me if I'm wrong. After looking into the links you suggested I was impressed with Esendex. Next on the agenda- learn to use APIs in my programs and then look over the Esendex API.
I made up this sample display just to get a feel for the dimensions and look of the grid size I was planning. I definitely like the increased clarity of the letters. Below is an embedded version of the graphic I created. Here is a link to a larger version of the same thing. The grid itself was created using a free online grid creator I found at Incompetech.com
I was doing some searches and found this program: SMS Enabler It looks like it can do exactly what I am looking for without paying a sms gateway/per message fee. It says it can be enabled to write the sms to an external file, database, forward to email, etc... It has a 45 day free trial (not sure if it is gimp-ware or fully functional) but as long as it works I think it would make my life easier.
Just an update-
I was not able to get SMS Enabler to work with the phone I have. But... I have been working with an Android developer who has an app in the market called 'SMS2PC'. He is currently writing an API that is compatible with vb.net that I'll be able to use for the app I am going to write. If he makes it freely available I will share it with you all.
As far as the physical construction goes- I have a friend with a CNC router. He is going to cut out the holes in the front plate so that the LEDs will be uniform and will fit snugly into the holes. This will also help with the angle the LEDs will be facing to cut down on issues that are related to viewing angle. I plan on using a 30deg viewing angle LED instead of the 15 deg LEDs that came with the LED Array kit from the Nertkits folks. This should also help with the problems I had with my original array in regards to the viewing angle. Any suggestions on the construction of the sign and the 'box' that will contain it are welcome (I would rather think about it before I start building than after...)
Okay- So here is going to be the largest struggle for me: designing the circuits. As I mentioned on my first post, I was able to get all the projects to work by following the directions in the kits, but my understanding of microcontrollers and circuit design is very limited. I definitely welcome any info, suggestions, calculations, explanations, links, comments, etc... to help with this part. Thank you in advance :)
Thanks again for all the great information so far. I think things are on track to come together in time. I will continue to post updates and pictures along the way to keep you all up on my progress. Is anyone else working on a similar project right now? Does Humberto or Mike want to chime in with their take/suggestions on this project? :)
Thanks!
My friend did up some renders in Solidworks:
Front View:
Back View:
Game Room (not to scale)
Here are the measurments:
---- LED Board ----
LED board length = 1020 millimeters = 3.34645669 feet = 40.1574803 inches
LED board hight = 120 millimeters = 0.393700787 feet = 4.72440945 inches
LED board thickness = 1/8 inch
LED spacing is 10mm (0.393700787 inches) from center to center in both directions
---- Oak Box ----
Thickness = 0.5"
Depth = 3.0"
Question: Which would work better for this project- I2C or SPI? The NK folks used SPI in the multi-panel array but I have also been reading up on I2C after Ralphxyz's suggestion. Any opinions on this?
(continued) ...or shift registers?
Either one will work fine but I2C will require only 2 pins while SPI will require 3 plus 1 to select each slave in turn. I2C also has a broadcast capability to send the same cmd/msg to all the slaves at once. Don't know if you could use that or not.
How many slaves do you plan to have?
10 slaves
Oh, by the way- I couldn't find any good 10x8 fonts online that I could use for my font index so I did an Google search and found this nifty website that lets you create your own font:. The font file itself isn't important to me, but being able to design the letters in a nifty grid definitely saved me some time and effort.
Just figured I would post it in case it helps someone else.
a* Google seach
In that case I would definately go with I2C over SPI or shift registers. Get the latest version of master/slave code (TWI.c & TWI.h) near the bottom of. I can also provide sample slave side code if you need it.
Thank you for posting that link Noter. I will check it out tomorrow and let you know if I want to take you up on that offer.
I just wanted to post this link for anyone who may be doing a similar project. As I posted above, I used fontstruct.com to create a font template for this project. Here is a link to the 10x8 font I created in case you would like to use it.
Note- the website does require you to login in order to download or 'clone' the font. I would recommend 'cloning' the font so you can look at each letter within the grid-view to make it easier to see for creating the font template for your project (assuming you are using the method the NK folks do).
Cheers,
Singlecoilx3
Thanks for the font. I grabbed a copy and will use it one of the days I'm sure.
Ralph's water curtain needs an 8 bit font too so he may use it even sooner.
Absolutely, this is great. I haven't looked at the font setup yet, I wonder if I'll be able to do a 32bit font?
I suppose I should just try making a font to find out.
I had gotten some transparency sheets that I was going to use as a overlay to take off the bit pattern but it looks like this font maker might save me the step.
Thanks for the link,
Ralph
Well I tried the font program. It is really nice and there have been some fantastic fonts made using it.
But how would one use a True Type Font (.ttf) in conjunction with a Nerdkit?
I was hoping it would make a bit pattern text file which I can see using on the mcu.
The .ttf is essentially a compiled font, good for pcs with a True Type driver but that is not available on a AVR.
Am I missing something?
I would recommend forgetting about the ttf file and just use the letter editor on the website just as a visual aid for creating your own ascii font file similar to the way the NK guys did. I was looking at this as a tool to get the letters to look correct. For example: I will have the "A" open on one screen and notepad open on the other while I 'map' the letter in notepad.
Check out the NK font.txt file to see how they implemented it.
It looks like the first line is the letter name, the second line is the width (which in my font will always be 8) and the following lines are the map of the letters.
Hope this helps...
I will be creating a map like this for myself, so I may just post a link to the map also since this may save you from doing the same work)
Singlexoilx3
Hi Singlexoilx3, can I just call you George?
Yeah that is what I was coming up with also, that will be a help.
I can definitely use it to get a take-off of the bit composition.
Thanks again,
Ralph,
You may call me George if you like, I will be more likely to respond to Eric though :)
Of course just to make sure I do not have any free time I am working on trying to develop a program to scan a image and to produce a text file from the scan.
"We", well I supervised the programmers that actually did the work, had to develop signature capture algorithms for hand held devices 15 - 20 years ago and the programmer once the program was done said it was really simple to implement. He explained the steps to me but I never needed to do anything with the knowledge so it just sorta evaporated from my mind.
I want to use a tablet pc as a picture box and capture the co-ordinates of where a mark is made on the screen.
In theory it sounds simple, and my programer described it as simple to implement (using C++) now I need to figure it out.
I am getting ready to purchase the MCUs from the NK store. I was wondering if the ATmega328P would be better for this project? My concern was making sure people that sent a long(ish) text message would not have the message capped because of having less memory on the chip. I don't know if having more memory on the chip would have any affect on this or not- or if it is only more memory that my programs can be stored in.
Sincerely,
Confused in Oregon
Eric
It's all of the above. The 328 has twice the flash, eeprom, and ram. I always buy the 328P although I have only one project so far that has actually exceeded 16k in flash requirement. The only 168 I have is the one that came with my original nerdkit. So, I think you will be fine with the ATmega168's but maybe buy one 328P to use as your master just in case.
Or maybe you want to buy a ~$25 USBISP type programmer off eBay and get plain 328P mcu's (without the bootloader) for about 1/2 the price. Then you can put the bootloader on them yourself if you wish as well as set fuses to use the internal 8mhz clock which has no requirement for an external crystal. Rick is the expert on those little programmers and can probably point you to the best one to get.
Thanks for the info Noter. I like the idea of being able to program the bootloader myself. I will see if Rick can give me a hand picking one of those out. I will probably just go with all 328s since they are still pretty cheap.
Have you any experience using the internal clock? Have you seen any problems come up compaired to external crystals? I guess if I have issues with it I can just use the external one that came with the original NK.
Yes, I use the internal clock quite a bit. I have a real time clock that runs timer2 asynchronously with a 32767hz watch crystal on XTAL1 and XTAL2 so the only choice for that configuration is using the internal clock. And I have another project where I just need all the pins for I/O so I use the internal clock there too. More general, if the 8mhz interal clock will do the trick I use it just to avoid the need for an external crystal.
The only issue I have had is some of the baud settings for the USART have an error value that is greater than 3% so they won't work with all PC's (especially mine it seems). You can see the baud error values for various clock rates on pp 200-201 of the ATmega328P datasheet. As you know this is no longer a problem for me since implementing the FT232R USB interface yesterday which lets me run at 250K baud while using the 8mhz internal clock.
There exist other requirements between masters and slaves with relation to their clock rates and bitrates used but none of these have been an issue for me with the 8mhz clock. I think they are more of an issue for very low slave clock rates of less than 1.5mhz.
By the way, there is risk in messing with the clock rates and fuses. If you accidently clear the reset enabled bit and burn the fuse, that chip is done until you get a high voltage programmer. I have to admit that I have a little bag of mcu's that someday I will recover. Best to buy more mcu's than you need so you'll have a few spares ...
and then even though you can save a little money it is nice to support the Nerdkits guys by purchasing from their store.
I can and do save by buying from DigiKey or Mouser but I'll also make some purchases from the Nerdkits store.
Their support here in the forums or by email has to be worth something.
Found Rick's post about it here:
Thanks Rick :)
No Problem... Any questions, just ask.
Rick
Update:
I placed an order today for fifteen ATMEGA328P-PU (of which I plan to use 11 for this project). As of the time of my order I was able to buy them from Mouser Electronics for $3.31/ea + about $7 for shipping. This price included a quantity discount of 10+ pieces. The 1+ piece price right was $4.28/ea.
Because these chips do not come with the bootloader like the ones you can purchase in the Nerdkits Store, I also needed to purchase a USBISP Programmer to program the bootloader into each MCU. I took Rick's advice and purchased the one from Fun4Diy.com. I ended up getting two of them, but I paid $8.50 for each unit + $2.50 for s/h. It came as a kit (see pictures on the website) including all the parts needed to build it as well as a schematic diagram.
I did decide to go with the SPI bus as opposed to I2C or shift registers, etc.. on the advice of an engineer I work with that I discussed the project with- as well as the availability of the NK source code for the multi panel sign they made in hopes that I can use parts of it and tweak it for my project.
I checked with the people at the wedding reception hall we booked and there is no wifi/internet access available in the hall :( :( :( . This means that the SMS > PC solution I go with must not involve the internet. I have an Android-based phone and have been corresponding with the developer of the SMS2PC app in the android market. He said that he may be able to write a simple API that is compatible with VB.NET that I can use to import the incoming SMS messages to a database where they will be usable for my project. I am quite nervous about this as this is the only potential solution I have that doesn't involve internet access and it is still not guaranteed that he will be able to get the API to me, and/or get it to me in enough time for me to get it worked into my project. Any suggestions for other non-internet based solutions are definitely welcome as this is the part that has got me the most stressed :-|.
The physical construction of the oak project enclosure and the front face with the holes for the 1000 LEDs should be getting underway soon. As I mentioned in a previous post, I have a friend who will be making the front face with his CNC router, and once that is finished I can get started on the oak project enclosure.
I hope to have a circuit diagram up soon. I had planned on having a slave for each 10x10 section of the grid. I haven't actually sat down to really look at this to see if it would indeed be the case. Any discussion on this is also welcome as I know there are many of you with much more experience in circuit design than I have.
Thanks again to all who have contributed so far to making this idea a reality. I am still very excited to get this project off the ground and see it in action.
Cheers,
Singlecoilx3 (Eric)
Eric, you might drive by the reception hall to see if there is WIFI. The hall might not have wifi but possible a carrier or someone might.
16x32 stackable LED matrix. Might save some time.
@Ralph- That is a good idea, it is only 2 mins from my house anyway...
@Bretm- thanks for posting the link. I wouldn't mind using some ready-made panels like that in a future project but I am set on the homemade variety this go around. I also appreciate the idea since it potentially could save a lot of time which is always nice when a project has a deadline.
I am knees deep in the schematic drawing process now, and I've run into something I am unsure about. I am using SPI and I have 1 Master and 10 slaves. I have all the chips tied together on pin17 for MOSI, all the chips tied together on pin19 for SCK, and I now need to figure out WHICH PINS I CAN USE ON THE MASTER FOR SS LINES that will be routed to pin16 on each of the slaves.
I see on the NK guys' multi-panel project they are using pins 15, 23, 24, and 25:
What other ones are legal? Bonus points if you can explain why, or point me to a page in the datasheet or a link/post/whatever that explains it.
Single
I think you can use any pin that you can tie high/low for CS (chip select). So if you can make a pin high than you can use it as chip select.
If you master is the regular nerdkit with the led display and crystal, you won't have enough pins available to select all your slaves unless you use the internal oscillator to free up PB6 and PB7. With the external crystal installed you have only 8 pins available - PB1, PB2, PC0, PC1, PC2, PC3, PC4, and PC5.
Would this not work?
PB6 and PB7 are still open for the external osc... Did I use some pins that are off limits, or did I forget about something else?
...or was the assumption made that I would be controlling some of the LEDs with the master also?
Sure that will work. I forgot you are going to program using the SPI instead of serial/usb. Isn't PD2 used by the LCD or are you not going to use the LCD? You can use PB2 as a regular I/O pin on the master because it is not needed for SPI.
You don't need any of the resistors to GND.
I Got the atmega328p chips in yesterday, going to try and follow Rick's instructions for copying the bootloader to the chips tonight.
Looking into the type of material I want to use for the front face of the sign (the piece with the holes in it). So far it looks like Black ABS Sheet is the leader in this race. Here is an example of what I am looking at. Does anyone have any experience with ABS? Is it usually shiny or flat looking?
I got an email from the SMS API guy and he says he is still working on the SMS2PC API for VB.net that I plan to use.
Things are pretty well on track I think for the deadline (July 2nd). I think the biggest struggle for me is going to be the microcontroller programming. I will have to start combing through the NK guys' code and see what I can pick out for my project.
I've run into some problems with installing the bootloader on the chips. I am currently working with Rick to try and get it worked out.
I just received my order for some small solderable perf-boards from WestFloridaComponents.com that I am going to use for the panels (1 per slave/panel and 1 for the master MCU). The holes are perfect spacing for the MCUs with solderable copper rings on one side. If I am smart- I will socket the MCUs on each perf board to allow for changes to the MCU code (and debugging).
I also just ordered the Black ABS Sheet - 1/8" that I am going to use for the front face with the holes for the LEDs. I got it from TapPlastics.com. It is being delivered to my friend with the CNC router who will cut the panel and holes and then mail it to me.
Once the exact size of the panel has been figured out (we had to change it a tiny bit because of the size limits of his CNC router) I will be purchasing the 1/2" oak for the project enclosure. This will be a fun part of the project, as I love woodworking.
I need to order the LEDs- (the ones I want are EXPENSIVE, sad day...) The trick here is getting a wide enough viewing angle (I am getting 30deg) and a nice high brightness rating. Oh- and having low power consumption doesn't hurt too...
Now that I have some of the various balls rolling I am going to start working on the software side of things. On this I shall certainly need some luck :)
Update-
I got the problem with copying the bootloader figured out (thanks to Rick for all the help). Now working on the C code for the master/slaves.
I am LED shopping and would like to get some clarification on the luminous intensity rating, 'mcd'
The NK guys advised me that the LED array kit comes with these LEDs which, according to the seller's website, have a luminous intensity of 2200-2500 mcd.
Some LEDs that we use on a product at my job are these ones which, according to the seller's website have a luminous intensity of 15 mcd. Since these ones are pretty stinkin' bright, I have a hard time believing they are only 15 mcd. OR- maybe it is that the other ones aren't actually 2200-2500 mcd.... OOORRR... some of the properties are different enough to rate it so much lower- viewing angle (20 vs 30), diffused lense (yes vs no), etc...
If I understand mcd correctly- 1 mcd is roughly equal to 1/1000th of 1 candlepower. So this means that this LED is only about 15/1000ths of the brightness of a candle?? That hardly seams right when the LEDs I got with my NK Array Kit were hardly any brighter than those, yet claiming 2.2 - 2.5 candlepower??
Thanks for any 'light' you can shed on this for me, <grin>
btw- the ones I were looking to buy for my Large LED Array project when I became perplexed by this oddity were these ones. I need over 1000 of them so it would be 5.4 cents each if I go with these. They are also claiming to be 15 mcd, which is the reason for my hesitation, and this post.
Thanks for the help!
....... how do these ones look?
Illumination Color: Red
Lens Color/Style: Diffused
Operating Voltage: 1.85 V
Wavelength: 640 nm
Luminous Intensity: 400 mcd
Operating Current: 20 mA
Viewing Angle: 30 deg
Lens Shape: Dome
Mounting Style: Through Hole
Package / Case: T-1 3/4
Since you're lighting up so many of them, I'd consider a low-power model. I don't know about brightness, though. That's certainly confusing.
This one is 4.5 cents at 20mcd but draws only 2mA current: WP7104LSRD They're T-1 size, so I don't know if they're big enough for you.
(1000 LEDs at 20mA is 20 amps)
It looks like you are already going a different direction, but have you heard of or looked into Google Voice? You can choose a phone number in any area code that links to your google e-mail. I get texts through my e-mail all the time, (which is useful when you don't have access to your phone). If you want me to send you an invite, let me know.
As for no wifi, many phone carriers offer a data plan through smart phones that allows you to tether your phone's internet to your laptop/pc. It may be more than most are willing to pay on a regular basis, but I think it would be worth it for a wedding :)
Also, you may want to ask friends and family if any of them already have a tether/data plan and are willing to share it for the night.
Josh
Also, have you looked at abcTronics? I saw that NerdKits buys from them, and 1000 Red LEDs would cost $29.99... that's pretty good. These ones are still rated at 20mA, but show an output of 2700 - 3000 mcd (3mm model). I don't know enough about driving LEDs yet, but perhaps driving it at a lower amperage will still yield decent light output. Plus, unless you plan to have every LED on at full strength at the same time you won't hit 20 amps (which, if I'm not mistaken, would fry your controllers)
Thanks for the good ideas folks. I did end up going with ABCTronics. The LEDs they offer aren't exactly what I was wanting (somewhat narrow viewing angle, non-diffused) but they are very bright and very cheap. Plus, if I really want to, I can use some fine grit sandpaper to diffuse them a bit. It cost me a total of $51.97 for 1200 red 5mm LEDs shipped to Oregon.
1000 RED LEDs 5mm - $29.99 (Qty1 = $29.99)
100 RED LEDs 5mm $3.99 (Qty2 = $7.98)
Shipping and handling $14.00
Total - $51.97
I have been so busy in the last week or so, I got laid off this past week and haven't had much time to work on this. I have been going through the nerdkit guides trying to better understand the programming for the MCUs and combing through the LED Array code and the multiple panel SPI master/slave code to try and figure out what modifications I will need to make. At this point I know that I will obviously need to change the pin assignments to match mine. Also, since I have 10 rows (for each 10x10 'panel') I will need to have my array be 2 bytes per column to hold my 10 rows and be wasting 6 bits per column, instead of the single byte 5 row column in the original nerdkit project (which wastes 3 bits per column).
As this is my first time really creating my own project with the NK instead of just following the directions on their creations step by step, I appreciate any tips, suggestions, or help with this part of it :)
Thanks!
Singlecoil (Eric)
Wedding is 30 days from today!! Running a bit behind schedule...
I am definitely going to need some help with the C code part of this project. I will post the modifications to the spi_master.c and spi_slave.c files soon if someone wouldn't mind checking out the code and pointing out some things I am not doing correctly (get your red inkpens ready!) Thanks in advance.
On another note, I just purchased the last of the parts I need for this project (with the only exception being the wood for the outside frame). In case it helps anyone working on a similar project- here is what I got:
a 500ft spool of this 24 gauge solid hookup wire from bulkwire.com ($20.29 + $7.90 s/h = $28.19) I'm sure I could have done with much less, but I wanted to get enough to have around for more projects.
15 (4 extra) of these STMicroelectronics L7805CV 5V voltage regulators. I'm not sure I am really going to need more than 1, but now I will have them if I need them.
15 (4 extra) of these 0.1uF 50volt capacitors.
15 (4 extra) of these 14.7456MHz Crystals (FOX / FOXSLF/147-20).
here is the master code that I have edited so far... I can't test it yet because I am waiting for the stuff I ordered to come in the mail. See any issues so far? I don't understand a lot of this code so I am sure there are a lot of things wrong for changing it to a 10x100 display.
Modified Master Code:
// for NerdKits with ATmega328-PU
#define F_CPU 14745600
#include <stdio.h>
#include <avr/io.h>
#include <avr/interrupt.h>
#include <avr/pgmspace.h>
#include <inttypes.h>
#include "../libnerdkits/delay.h"
#include "../libnerdkits/uart.h"
#include "font.h"
// PIN DEFINITIONS:
//
// PB5 - SCK
// PB3 - MOSI (Master Out Slave In)
// PB1, PC0-5, PD0-2 - SS pins for slave panels
#define ROWS 10
#define COLS_PER_ARRAY 10
#define NUM_ARRAYS 10
#define COLS (NUM_ARRAYS * COLS_PER_ARRAY)
//keeps the entire array. NUM_ARRAYS * COLS_PER_ARRAY
uint8_t data[COLS];
//sets all columns in the array to 0
void blank_data(){
uint16_t i = 0;
for(i=0;i<COLS;i++){
data[i] = 0;
}
}
void data_left_shift(){
uint16_t i = 0;
for(i=0;i<COLS-1;i++){
data[i] = data[i+1];
}
data[COLS-1] = 0;
}
void all_on(){
uint16_t i;
for(i=0;i<COLS;i++){
data[i] = 0x3f;
}
}
inline uint8_t data_get(uint8_t i, uint8_t j) {
if(i < ROWS && j < COLS) {
if((data[j] & (1<<i)) != 0) {
return 1;
} else {
return 0;
}
} else {
return 0;
}
}
inline void data_set(uint8_t i, uint8_t j, uint8_t onoff) {
if(i < ROWS && j < COLS) {
if(onoff) {
data[j] |= (1<<i);
} else {
data[j] &= ~(1<<i);
}
}
}
void font_get(char match, char *buf) {
// copies the character "match" into the buffer
uint8_t i;
PGM_P p;
for(i=0; i<FONT_SIZE; i++) {
memcpy_P(&p, &font[i], sizeof(PGM_P));
if(memcmp_P(&match, p,1)==0) {
memcpy_P(buf, p, 7);
return;
}
}
// NO MATCH?
font_get('?', buf);
}
uint8_t font_width(char c) {
char buf[7];
buf[1] = 0;
font_get(c, buf);
return buf[1];
}
void font_display(char c, uint8_t offset) {
char buf[7];
font_get(c, buf);
uint8_t width = buf[1];
uint8_t i, j;
for(i=0; i<ROWS; i++) {
for(j=0; j<width; j++) {
if((offset + j) < COLS) {
if( (buf[2+j] & (1<<i)) != 0) {
data_set(i,offset + j,1);
} else {
data_set(i,offset + j,0);
}
}
}
}
// blank the next column to the right
for(i=0; i<ROWS; i++) {
data_set(i, offset+width, 0);
}
}
void do_scrolling_display() {
blank_data();
int16_t offset = 0, next_offset = 0;
uint16_t is_started = 0;
char x=' ';
while(1) {
//we are started, shift the array, and display the next character
if(is_started) {
delay_ms(60);
data_left_shift();
if(next_offset > 0) {
offset -= 1;
next_offset -= 1;
}
font_display(x, offset);
} else { //we are not started, just shift the next offset
offset = COLS-1;
}
// if we can now accept a new character, tell the computer
if(next_offset == COLS)
uart_write('n');
while(uart_char_is_waiting()) {
if(is_started)
offset = next_offset;
x = uart_read();
if(x=='a') {
blank_data();
return;
}
font_display(x, offset);
next_offset = offset + font_width(x)+1;
is_started = 1;
// if we can now accept a new character, tell the computer
if(next_offset <= COLS)
uart_write('n');
}
}
} we are using as outputs
DDRC |= (1<<PC0) | (1<<PC1) | (1<<PC2) | (1<<PC3) | (1<<PC4) | (1<<PC5);
DDRB |= (1<<PB1);
//set the SS pins logic high (slave not active)
PORTC |= (1<<PC0) | (1<<PC1) | (1<<PC2) | (1<<PC3) | (1<<PC4) | (1<<PC5); now corresponding to the ith panel (active)
inline void activate_array_ss(uint8_t i){
if(i == 0){
PORTB &= ~(1<<PB1);
} else if (i < 7){
PORTC &= ~(1<<(PC0+i-1));
}
delay_us(5);
}
//pull the slave line high corresponding ot the ith panel (not active)
inline void deactivate_array_ss(uint8_t i){
if(i == 0){
PORTB |= (1<<PB1);
} else if(i < 7){
PORTC |= (1<<(PC0+i-1));
}
delay_us(5);
}
//timer has overflowed, update all the panels
SIGNAL(SIG_OVERFLOW0) {
uint16_t i = 0;
uint16_t j = 0;
//cycle over the arrays
for(i=0;i<NUM_ARRAYS;i++){
//pull SS low - activate slave
activate_array_ss(i);
//cycle over each byte to send to this array.
for(j=0;j<COLS_PER_ARRAY;j++){
//grab the next byte and send it
SPDR = data[i*COLS_PER_ARRAY+j];
//wait for transmition to be over
while(!(SPSR & (1<<SPIF))){
}
//wait a little bit before sending the next byte
delay_us(15);
}
//send done byte
SPDR = 0xff;
//wait for transmition to be over
while(!(SPSR & (1<<SPIF))){
}
//pull SS high - deactivate slave
deactivate_array_ss(i);
}
}
int main() {
//ledarray_init();
master_init();
// activate interrupts
sei();
// init serial port
uart_init();
FILE uart_stream = FDEV_SETUP_STREAM(uart_putchar, uart_getchar, _FDEV_SETUP_RW);
stdin = stdout = &uart_stream;
// mode 1: test pattern
//do_testpattern();
do_scrolling_display();
while(1) {
//this loop never executes, scrolling display should loop forever in this mode of operation
//everythign exciting happens in the interrupt handler, and in do_scrolling display
}
return 0;
}
Eric,
I haven't seen any project updates since the beginning of the month. July is nearing and I was wondering if the project came together, is close to coming together, or didn't work out as you planned. It seemed to be such a interesting use for the kit I was hoping to see it come to be.
Hope all is well with you and your future wife, (and the project! ) - I know the days before the wedding can get somewhat hectic, and your's is only 4 days away.
Well Eric,
Are you Married?? Did the project make it?? Did you survive your bachelor party??
I hope everything was successful I know it's been less than a week since the day so you are probabaly on your honeymoon. Hopefully you'll see this when you get back.
Best wishes for a long and happy marriage.
Please log in to post a reply. | http://www.nerdkits.com/forum/thread/1478/ | CC-MAIN-2019-22 | refinedweb | 6,518 | 78.79 |
clock_gettime, clock_settime - Obtains or sets the time
for the specified clock (P1003.1b)
#include <time.h>
int clock_gettime ( clockid_t clock_id, struct timespec
*tp);
int clock_settime ( clockid_t clock_id, const struct timespec
*tp);
Realtime Library (librt.so, librt.a)
The clock type used to obtain the time for the clock that
is set. The CLOCK_REALTIME clock is supported and represents
the TIME-OF-DAY clock for the system. A pointer to
a timespec data structure. (zero) is returned.
On an unsuccessful call, a value of -1 is returned and
errno is set to indicate that an error occurred.
The clock_gettime and clock_settime functions fail under
the following conditions: The clock_id argument does not
specify a known clock. The value pointed to by the tp
argument to the clock_settime function is outside the
range for the given clock_id. The tp argument specified a
nanosecond value less than zero or greater than or equal
to 1000 million. The requesting process does not have the
appropriate privilege to set the specified clock.
Functions: time(1), clock_getres(3), ctime(3), timer_gettime(3)
Guide to Realtime Programming
clock_gettime(3) | http://nixdoc.net/man-pages/Tru64/man3/clock_gettime.3.html | CC-MAIN-2018-43 | refinedweb | 182 | 58.08 |
Combinatorics
Authors: Jesse Choe, Aadit Ambadkar, Dustin Miao
How to count.
If you've never encountered any combinatorics before, AoPS is a good place to start.
ResourcesResources
If you prefer watching videos instead, here are some options:
Binomial CoefficientsBinomial Coefficients
Focus Problem – try your best to solve this problem before continuing!
The binomial coefficient (pronounced as " choose " or sometimes written as ) represents the number of ways to choose a subset of elements from a set of elements. For example, , because the set has subsets of elements:
There are two ways to calculate binomial coefficients:
Method 1: Pascal's Triangle (Dynamic Programming) -Method 1: Pascal's Triangle (Dynamic Programming) -
Binomial coefficients can be recursively calculated as follows:
The intuition behind this is to fix an element in the set and choose elements from elements if is included in the set or choose elements from elements, otherwise.
The base cases for the recursion are:
because there is always exactly one way to construct an empty subset and a subset that contains all the elements.
This recursive formula is commonly known as Pascal's Triangle.
A naive implementation of this would use a recursive formula, like below:
C++
/** Computes nCk mod p using naive recursion */int binomial(int n, int k, int p) {if (k == 0 || k == n) {return 1;}return (binomial(n - 1, k - 1, p) + binomial(n - 1, k, p)) % p;}
Additionally, we can optimize this from to using dynamic programming (DP) by caching the values of smaller binomials to prevent recalculating the same values over and over again. The code below shows a bottom-up implementation of this.
C++
/** Computes nCk mod p using dynamic programming */int binomial(int n, int k, int p) {// dp[i][j] stores iCjvector<vector<int>> dp(n + 1, vector<int> (k + 1, 0));// base cases described abovefor (int i = 0; i <= n; i++) {/** i choose 0 is always 1 since there is exactly one way* to choose 0 elements from a set of i elements
Method 2: Factorial Definition (Modular Inverses) -Method 2: Factorial Definition (Modular Inverses) -
Define as . represents the number of permutations of a set of elements. See this AoPS Article for more details.
Another way to calculate binomial coefficients is as follows:
Recall that also represents the number of ways to choose elements from a set of elements. One strategy to get all such combinations is to go through all possible permutations of the elements, and only pick the first elements out of each permutation. There are ways to do so. However, note the the order of the elements inside and outside the subset does not matter, so the result is divided by and
Since these binomial coefficients are large, problems typically require us to output the answer modulo a large prime such as .
Fortunately, we can use modular inverses to divide by and modulo for any prime . In our case, is prime, so we can utilize modular inverses. However, computing inverse factorials online can be very time costly. Instead, we can precompute all factorials in time and inverse factorials in by taking the inverses of each factorial. See the code below for the implementation.
C++
const int MAXN = 1e6;long long fac[MAXN + 1], inv[MAXN + 1];/** Computes x^y modulo p in O(log p) time. */long long exp(long long x, long long y, long long p) {long long res = 1; x %= p;while (y) {if (y & 1) {res *= x; res %= p;
Solution - Binomial CoefficientsSolution - Binomial Coefficients
The first method for calculating binomial factorials is too slow for this problem since the constraints on and are (recall that the first implementation runs in time complexity). However, we can use the second method to answer each of the queries in constant time by precomputing factorials and their modular inverses.
C++
#include <iostream>using namespace std;using ll = long long;const int MAXN = 1e6;const int MOD = 1e9 + 7;ll fac[MAXN + 1];ll inv[MAXN + 1];
DerangementsDerangements
Focus Problem – try your best to solve this problem before continuing!
The number of derangements of numbers, expressed as , is the number of permutations such that no element appears in its original position. Informally, it is the number of ways hats can be returned to people such that no person recieves their own hat.
Method 1: Principle of Inclusion-ExclusionMethod 1: Principle of Inclusion-Exclusion
Suppose we had events , where event corresponds to person recieving their own hat. We would like to calculate .
We subtract from the number of ways for each event to occur; that is, consider the quantity . This undercounts, as we are subtracting cases where more than one event occurs too many times. Specifically, for a permutation where at least two events occur, we undercount by one. Thus, add back the number of ways for two events to occur. We can continue this process for every size of subsets of indices. The expression is now of the form:
For a set size of , the number of permutations with at least indicies can be computed by choosing a set of size that are fixed, and permuting the other indices. In mathematical terms:
Thus, the problem now becomes computing
which can be done in linear time.
C++
#include <bits/stdc++.h>// (included in grader)#include <atcoder/modint>using mint = atcoder::modint;using namespace std;int main() {int N, M;
Method 2: Dynamic ProgrammingMethod 2: Dynamic Programming
Suppose person 1 recieved person 's hat. There are two cases:
- If person recieves person 1's hat, then the problem is reduced to a subproblem of size . There are possibilities for in this case, so we add to the current answer .
- If person does not recieve person 1's hat, then we can reassign person 1's hat to be person 's hat (if they recieved person 1's hat, then this would become first case). Thus, this becomes a subproblems with size , are there ways to choose .
Thus, we have
which can be computed in linear time with a simple DP. The base cases are that and .
C++
#include <bits/stdc++.h>// (included in grader)#include <atcoder/modint>using mint = atcoder::modint;using namespace std;int main() {int N, M;
ProblemsProblems
Module Progress:
Join the USACO Forum!
Stuck on a problem, or don't understand a module? Join the USACO Forum and get help from other competitive programmers! | https://usaco.guide/gold/combo | CC-MAIN-2022-40 | refinedweb | 1,055 | 50.26 |
Multiple C# Views loaded from ExtJS
My ASP.NET MVC3/ExtJS application requires multiple Views that need to be maintained as C# code. What would be the best way to use these Views from the client-side ExtJS? Can a Ext.Direct call to a Controller Action, which returns a View, be used? What would the syntax be in the ExtJS code? If this isn't the best way to accomplish the task, what is? The way I see this working is for each C# View to have a <script> tag to load the ExtJS .js file for the client-side code.
Hi,
it seems we found an issue using the lastest version in a non "en-Us" environnement : when used to call a method with a Date parameter, Ext.Direct.Mvc return an invalid date or an exception when parsing the String : this problem occurs with the basic sample included with the source.
After looking at the source code, it seems the issue occurs in ReadJson method of the RequestDataConverter class where each JValue sent from the client are converted to string : if the JValue contains a DateTime the toString must specify the InvariantCulture to avoid parsing issue in the DirectValueProvider. To resolve the problem we've only add a check if the value is an IFormattable. Hence the following modification in the ReadJson method resolves the problem :
Code:
if (value is IFormattable) { data.Add(value == null ? null : ((IFormattable)value).ToString(null,CultureInfo.InvariantCulture)); } else { data.Add(value == null ? null : value.ToString()); }
Using Ext.Direct with RequestVerificationToken
HI, I need some help to use [ValidateAntiForgeryToken] in some methods inside controller
I was trying something like that but i get error:
Code:
[ValidateAntiForgeryToken] public ActionResult GetList(int start, int limit) { if (ModelState.IsValid){ var total = db.Contacts.Count(); var contacts = db.Contacts.OrderBy(c => c.FirstName).ThenBy(c => c.LastName).Skip(start).Take(limit).ToList(); return Json(new { total = total, data = contacts }); } else { return Json(new { message = "invalidrequest", }); } }
and from extjs I am doing it
onDirectstoreBeforeLoad: function(store, operation, eOpts){
var token = document.getElementsByName('__RequestVerificationToken').item(0).value;
Ext.apply(Ext.getStore('MyDirectStore').getProxy().extraParams, {
__RequestVerificationToken : token
});
}
HTML Code:
The required anti-forgery form field "__RequestVerificationToken" is not present. Description: An unhandled exception occurred during the execution of the current web request. Please review the stack trace for more information about the error and where it originated in the code. Exception Details: System.Web.Mvc.HttpAntiForgeryException: The required anti-forgery form field "__RequestVerificationToken" is not present. Source Error: Line 268: try {Line 269: controller.ActionInvoker = new DirectMethodInvoker();Line 270
any advice please
regards
Frank
I get this solution
in the client side:
Code:
onDirectstoreBeforeLoad: function(store, operation, eOpts){ var afth = document.getElementsByName('__RequestVerificationToken').item(0).value; Ext.Ajax.defaultHeaders = { '__RequestVerificationToken': afth }; }
in the server side:
I create this class: (thanks to )
Code:
[AttributeUsage(AttributeTargets.Class)] public class ValidateAntiForgeryTokenOnAllPosts : AuthorizeAttribute { public override void OnAuthorization(AuthorizationContext filterContext) { var request = filterContext.HttpContext.Request; // Only validate POSTs if (request.HttpMethod == WebRequestMethods.Http.Post) { // Ajax POSTs and normal form posts have to be treated differently when it comes // to validating the AntiForgeryToken if (request.IsAjaxRequest()) { var antiForgeryCookie = request.Cookies[AntiForgeryConfig.CookieName]; var cookieValue = antiForgeryCookie != null ? antiForgeryCookie.Value : null; AntiForgery.Validate(cookieValue, request.Headers["__RequestVerificationToken"]); } else { new ValidateAntiForgeryTokenAttribute() .OnAuthorization(filterContext); } } } }
and then in the controller I add it:
Code:
[ValidateAntiForgeryTokenOnAllPosts] public class ContactsController : DirectController {
regards
Frank
What/where is Basic.Echo?
In main.js it calls Basic.Echo(). But I cannot find it documented anywhere. (Granted, a Google search on Basic.Echo finds a million other things first.) Where it is documented?
thanks - dave
Hi;
I'm trying to figure out how MvcTest works. I think I understand about half of it (maybe). If anyone could explain the following, it would be very appreciated:
- ContactForm.js has load: Contacts.Get. That maps to ContactsController.Get. How? Does it use reflection on all DirectController child classes and on the name strip the Controller part off the name? (That's my best guess.)
- Contact.js and Contact.cs are the same class. Is either auto-generated (and if so how) or do I have to create both sides and make sure I match?
- The sample has <script type="text/javascript" src="@Url.Content("~/directapi?assembly=Test")"></script> - is there documentation of what params can be passed to directapi?
- What are WebApiConfig, FilterConfig, & RouteConfig doing? Is that part of the plumbing between the server & client side? And do I need to understand it or do I just copy what is there for any app using this? (Even if I can just copy it blindly, is it explained anywhere? I'd like to understand it.)
- Can I put the client side in a separate project from the server side? I want to create the client side in Sencha Architect.
Hello,
I'm have some troubles with parsing dates, we build a application with is used in different time zones. While using the ISO datetime parser Ext JS see it as a string, so you have to parse it manually. When using the JS parser the client time zone is added what result in different time between the client and server.
How do you handle datetime fields?
Thanks
I get around the timezone problem by putting in a convert function on the model fields that are dates to explicitly convert the string representation to a date
//2012-11-12T00:00:00
var thedate=new Date(v.substr(0, 4), v.substr(5, 2) - 1, v.substr(8, 2), v.substr(11, 2), v.substr(14, 2));
return thedate;
Nice!
Do you use the IsoDateConvert or the JavascriptConvert for serializing?
Thanks) | https://www.sencha.com/forum/showthread.php?72245-Ext.Direct-for-ASP.NET-MVC/page38 | CC-MAIN-2015-35 | refinedweb | 939 | 51.34 |
How To Import Modules in Python 3
Introduction
The Python programming language comes with a variety of built-in functions. Among these are several common functions, including:
print()which prints expressions out
abs()which returns the absolute value of a number
int()which converts another data type to an integer
len()which returns the length of a sequence or collection
These built-in functions, however, are limited, and we can make use of modules to make more sophisticated programs.
Modules are Python
.py files that consist of Python code. Any Python file can be referenced as a module. A Python file called
hello.py has the module name of
hello that can be imported into other Python files or used on the Python command line interpreter. You can learn about creating your own modules by reading How To Write Modules in Python 3.
Modules can define functions, classes, and variables that you can reference in other Python
.py files or via the Python command line interpreter.
In Python, modules are accessed by using the
import statement. When you do this, you execute the code of the module, keeping the scopes of the definitions so that your current file(s) can make use of these.
When Python imports a module called
hello for example, the interpreter will first search for a built-in module called
hello. If a built-in module is not found, the Python interpreter will then search for a file named
hello.py in a list of directories that it receives from the
sys.path variable.
This tutorial will walk you through checking for and installing modules, importing modules, and aliasing modules.
Checking For and Installing Modules
There are a number of modules that are built into the Python Standard Library, which contains many modules that provide access to system functionality or provide standardized solutions. The Python Standard Library is part of every Python installation.
To check that these Python modules are ready to go, enter into your local Python 3 programming environment or server-based programming environment and start the Python interpreter in your command line like so:
- python
From within the interpreter you can run the
import statement to make sure that the given module is ready to be called, as in:
- import math
Since
math is a built-in module, your interpreter should complete the task with no feedback, returning to the prompt. This means you don’t need to do anything to start using the
math module.
Let’s run the
import statement with a module that you may not have installed, like the 2D plotting library
matplotlib:
- import matplotlib
If
matplotlib is not installed, you’ll receive an error like this:
OutputImportError: No module named 'matplotlib'
You can deactivate the Python interpreter with
CTRL + D and then install
matplotlib with
pip.
Next, we can use
pip to install the
matplotlib module:
- pip install matplotlib
Once it is installed, you can import
matplotlib in the Python interpreter using
import matplotlib, and it will complete without error.
Importing Modules
To make use of the functions in a module, you’ll need to import the module with an
import statement.
An
import statement is made up of the
import keyword along with the name of the module.
In a Python file, this will be declared at the top of the code, under any shebang lines or general comments.
So, in the Python program file
my_rand_int.py we would import the
random module to generate random numbers in this manner:
import random
When we import a module, we are making it available to us in our current program as a separate namespace. This means that we will have to refer to the function in dot notation, as in
[module].[function].
In practice, with the example of the
random module, this may look like a function such as:
random.randint()which calls the function to return a random integer, or
random.randrange()which calls the function to return a random element from a specified range.
Let’s create a
for loop to show how we will call a function of the
random module within our
my_rand_int.py program:
import random for i in range(10): print(random.randint(1, 25))
This small program first imports the
random module on the first line, then moves into a
for loop which will be working with 10 elements. Within the loop, the program will print a random integer within the range of 1 through 25 (inclusive). The integers
1 and
25 are passed to
random.randint() as its parameters.
When we run the program with
python my_rand_int.py, we’ll receive 10 random integers as output. Because these are random you’ll likely get different integers each time you run the program, but they’ll look something like this:
Output6 9 1 14 3 22 10 1 15 9
The integers should never go below 1 or above 25.
If you would like to use functions from more than one module, you can do so by adding multiple
import statements:
import random import math
You may see programs that import multiple modules with commas separating them — as in
import random, math — but this is not consistent with the PEP 8 Style Guide.
To make use of our additional module, we can add the constant
pi from
math to our program, and decrease the number of random integers printed out:
import random import math for i in range(5): print(random.randint(1, 25)) print(math.pi)
Now, when we run our program, we’ll receive output that looks like this, with an approximation of pi as our last line of output:
Output18 10 7 13 10 3.141592653589793
The
import statement allows you to import one or more modules into your Python program, letting you make use of the definitions constructed in those modules.
Using
from ...
import
To refer to items from a module within your program’s namespace, you can use the
from ...
import statement. When you import modules this way, you can refer to the functions by name rather than through dot notation
In this construction, you can specify which definitions to reference directly.
In other programs, you may see the
import statement take in references to everything defined within the module by using an asterisk (
*) as a wildcard, but this is discouraged by PEP 8.
Let’s first look at importing one specific function,
randint() from the
random module:
from random import randint
Here, we first call the
from keyword, then
random for the module. Next, we use the
import keyword and call the specific function we would like to use.
Now, when we implement this function within our program, we will no longer write the function in dot notation as
random.randint() but instead will just write
randint():
from random import randint for i in range(10): print(randint(1, 25))
When you run the program, you’ll receive output similar to what we received earlier.
Using the
from ...
import construction allows us to reference the defined elements of a module within our program’s namespace, letting us avoid dot notation.
Aliasing Modules
It is possible to modify the names of modules and their functions within Python by using the
as keyword.
You may want to change a name because you have already used the same name for something else in your program, another module you have imported also uses that name, or you may want to abbreviate a longer name that you are using a lot.
The construction of this statement looks like this:
import [module] as [another_name]
Let’s modify the name of the
math module in our
my_math.py program file. We’ll change the module name of
math to
m in order to abbreviate it. Our modified program will look like this:
import math as m print(m.pi) print(m.e)
Within the program, we now refer to the
pi constant as
m.pi rather than
math.pi.
For some modules, it is commonplace to use aliases. The
matplotlib.pyplot module’s official documentation calls for use of
plt as an alias:
import matplotlib.pyplot as plt
This allows programmers to append the shorter word
plt to any of the functions available within the module, as in
plt.show(). You can see this alias import statement in use within our “How to Plot Data in Python 3 Using
matplotlib tutorial.”
Conclusion
When we import modules we’re able to call functions that are not built into Python. Some modules are installed as part of Python, and some we will install through
pip.
Making use of modules allows us to make our programs more robust and powerful as we’re leveraging existing code. We can also create our own modules for ourselves and for other programmers to use in future programs. | https://www.digitalocean.com/community/tutorials/how-to-import-modules-in-python-3 | CC-MAIN-2017-39 | refinedweb | 1,468 | 59.64 |
[Update 30 October 2007]: I moved this library to a CodePlex project, called DotNetZip. See. It does zip creation, extraction, passwords, ZIP64, Unicode, SFX, and more. It is open source, free Free FREE to use, has a clear license, and comes with .NET-based ZIP utilities. It works on the Compact Framework or the regular .NET Framework. It is not the same as #ziplib or SharpZipLib. DotNetZip is independent.
There's a new namespace in the .NET Framework base class library for .NET 2.0, called System.IO.Compression. It has classes called DeflateStream and GZipStream.
These classes are streams; they're useful for compressing a stream of bytes as you transfer it, for example across the network to a cooperating application (a peer, or a client, whatever). The DeflateStream implements the Deflate algorithm, see the IETF's RFC 1951. "DEFLATE Compressed Data Format Specification version 1.3." The GZipStream is an elaboration of the Deflate algorithm, and adds a cyclic-redundancy-check. For more on GZip, see the IETF RFC 1952, "Gzip".
The GZip format described in RFC 1952 is also used by the popular gzip utility included in many *nix distributions. The Base Class Library team at Microsoft previously published example source code for a simple utility that behaves just like the *nix gzip, but is written in .NET and based on the GZipStream class. This simple utility can interoperate with the *nix gzip, can read and write .gz files.
As a companion to that example, enclosed here as an attachment (see the bottom of this post) is an example class than can read and write zip archives. It is packaged as a re-usable library, as well as a couple of companion example command-line applications that use the library. The example apps are useful on their own, for example for just zipping up a directory quickly, from within a script or a command-prompt. But the library will be useful also, for including zip capability into arbitrary applications. For example, you could include a zip task in a msbuild session, or into a smart-client GUI application. I've included both the binaries and source code here.
This is the class diagram for the ZipFile class, and the ZipEntry class, as generated by Visual Studio 2005. The ZipFile is the main class.
If you don't quite grok all that notation, I will point out a few highlights. The ZipFile itself supports a generic IEnumerable interface. What this means is you can enumerate the ZipEntry's within the ZipFile using a foreach loop. Makes usage really simple. ( Implementing that little trick is also dead-simple, thanks to the new-for-2.0 support for iterators in C# 2.0, and the "yield return" statement.)
You can extract all files from an existing .zip file by doing this:
ZipFile zip = ZipFile.Read("MyZip.zip");
foreach (ZipEntry e in zip)
{
e.Extract("NewDirectory");
}
Of course, you don't have want to extract the files, you can just fiddle with the properties on the ZipEntry things in the collection. Creating a new .zip file is also simple:
ZipFile zip= new ZipFile("MyNewZip.zip");
zip.AddDirectory("My Pictures", true); // AddDirectory recurses subdirectories
zip.Save();
You can add a directory at a time, as shown above, and you can add individual files as well. It seems to be pretty fast, though I haven't benchmarked it. It doesn't compress as much as winzip; This library is at the mercy of the DeflateStream class, and that class doesn't support multiple levels of compression.
I am no lawyer, but it seems to me the ZIP format is PKware's intellectual property. PKWare has some text in their zip spec which states:
PKWARE is committed to the interoperability and advancement of the .ZIP format. PKWARE offers a free license for certain technological aspects described above under certain restrictions and conditions. However, the use or implementation in a product of certain technological aspects set forth in the current APPNOTE, including those with regard to strong encryption or patching, requires a license from PKWARE. Please contact PKWARE with regard to acquiring a license.
I checked with pkware for more on that. I described what I was doing with this example, and got a nice email reply from Jim Peterson at PKWare, who wrote:
From the description of your intended need, no license would be necessary for the compression/decompression features you plan to use.
Which would mean, anyone could use this example without a license. But like I said, I am no lawyer.
Later,
-Dino
[Update 11 April 2006 1036am US Pacific time]: After a bit of testing it seems that there are some anomalies with the DeflateStream class in .NET. One of them is, it performs badly with already compressed data. The DeflateStream in .NET can actually Inflate the size of the stream. The output is still a valid Deflate stream, but it isn't compressed as you'd like. The DotNetZip implementation works around this by using the STORE method rather than DEFLATE when data size increases. But still....The base class library team is aware of this anomaly and is considering it. If you'd like to weigh in on this behavior, and I encourage you to do so if you value this class, use the Product Feedback Center, see here.
If you would like to receive an email when updates are made to this post, please register here
RSS
Finding a way to use system.io.compression for zip archives
Overview SharpZipLib provides best free .NET compression library, but what if you can't use it due to
Hai
i have used it in my code but have a problem with it. my folder size before zipping is 599 kb and after zipping is 998 kb so what is the way to zip it in a way to decrease the file size
Mohan - What version of the Zip Library are you using? You will want to get the latest version of this library, from. It corrects the problem where some files get "inflated" when they are zipped
the 4gig limit is probably due to the physical memory limitations of addressing space in the system.
IE the .net framework is not designed to touch the disk whilst it compresses
If you wanted to exceed this then you would have to write a pagefile like system to store processed data whilst using the 4 gig as a buffer
@john, The 4g limit mentioned above has nothing to do with the physical memory of the machine. It is related to the DeflateStream implementation. I haven't explored it well, so
I cannot say more than that.
it does not have to do with whether the implementation is streaming or not (viz, "not designed to touch the disk while compressing").
Comment Policy: No HTML allowed. URIs and line breaks are converted automatically. Your e–mail address will not show up on any public page.
Connecting .NET to just about anything else | http://blogs.msdn.com/dotnetinterop/archive/2006/04/05/.NET-System.IO.Compression-and-zip-files.aspx | crawl-002 | refinedweb | 1,167 | 65.01 |
Opened 6 years ago
Last modified 3 months ago
#6320 reopened Bugs
race condition in boost::filesystem::path leads to crash when used in multithreaded programs
Description
following app crashes on VC10 (Windows 7) (multibyte and Unicode builds) at point of creation of path (im working around the issue by doing dummy string to wstring conversion) [
wstring r; std::copy(s.begin(),s.end(),r.begin());
]
#include <boost/thread.hpp> #include <boost/filesystem.hpp> int main(void) { std::string sPath("c:\\Development"); boost::thread_group tg; for(int i = 0; i < 2; i++) { tg.create_thread([&sPath](){ boost::this_thread::sleep(boost::posix_time::milliseconds(10)); boost::filesystem::path p(sPath); boost::filesystem::directory_iterator di(p), end; while(di != end) std::cout << (*(di++)).path().string() << std::endl; }); } tg.join_all(); int a; std::cin >> a; }
VC10 CallStack?:
> msvcp100d.dll!std::codecvt<wchar_t,char,int>::in(int & _State, const char * _First1, const char * _Last1, const char * & _Mid1, wchar_t * _First2, wchar_t * _Last2, wchar_t * & _Mid2) Line 1521 + 0x1f bytes C++ filesystem_crash.exe!`anonymous namespace'::convert_aux(const char * from, const char * from_end, wchar_t * to, wchar_t * to_end, std::basic_string<wchar_t,std::char_traits<wchar_t>,std::allocator<wchar_t> > & target, const std::codecvt<wchar_t,char,int> & cvt) Line 84 + 0x25 bytes C++ filesystem_crash.exe!boost::filesystem3::path_traits::convert(const char * from, const char * from_end, std::basic_string<wchar_t,std::char_traits<wchar_t>,std::allocator<wchar_t> > & to, const std::codecvt<wchar_t,char,int> & cvt) Line 165 + 0x20 bytes C++ filesystem_crash.exe!boost::filesystem3:, const std::codecvt<wchar_t,char,int> & cvt) Line 174 + 0x7e bytes C++ filesystem_crash.exe!boost::filesystem3::path::path<std::basic_string<char,std::char_traits<char>,std::allocator<char> > >(const std::basic_string<char,std::char_traits<char>,std::allocator<char> > & source, void * __formal) Line 135 + 0x13 bytes C++ filesystem_crash.exe!`anonymous namespace'::<lambda0>::operator()() Line 15 + 0x10 bytes C++ filesystem_crash.exe!boost::detail::thread_data<`anonymous namespace'::<lambda0> >::run() Line 62 C++ filesystem_crash.exe!boost::`anonymous namespace'::thread_start_function(void * param) Line 177 C++ msvcr100d.dll!_callthreadstartex() Line 314 + 0xf bytes C msvcr100d.dll!_threadstartex(void * ptd) Line 297 C
Attachments (0)
Change History (31)
comment:1 Changed 6 years ago by
comment:2 Changed 6 years ago by
.
comment:3 Changed 5 years ago by
This issue still appears to be present, using:
MSVC 2012 (Win7, 32-bit). Statically linked to boost::filesystem.
Just modified the example code to use "C:/" as the directory search path (to ensure existence).
Code asserts in .../libs/filesystem/src/path.cpp, line 888:
#if defined(BOOST_WINDOWS_API) && defined(BOOST_FILESYSTEM_STATIC_LINK) const path::codecvt_type& path::codecvt() { BOOST_ASSERT_MSG(codecvt_facet_ptr(), "codecvt_facet_ptr() facet hasn't been properly initialized"); return *codecvt_facet_ptr(); }
comment:4 Changed 5 years ago by
I agree the issue is still present, I am getting crashes due to invalid access inside the path conversion routines. As original poster says issue is that the initialization of static members is not threadsafe in all pre c++11 compilers. This seems to be compiler dependent, gcc seems to default to adding serialization code, but provides -fno-threadsafe-statics to disable, MSVC seems to not be. I am currently using MSVC 2010.
Although this comment states that the local is thread local
// The path locale, which is global to the thread, can be changed by the // imbue() function. It is initialized to an implementation defined locale.
It looks like the same path object is shared between threads
std::locale& path_locale() { static std::locale loc(default_locale()); return loc; }
Adding explicit one time initialization code to the shared local object with boost::call_once works for me.
void make_loc(std::locale& loc) { loc = default_locale(); } std::locale& path_locale() { static std::locale loc; static boost::once_flag once; boost::call_once(once, boost::bind(&make_loc, boost::ref(loc))); return loc; }
void make_wchar_t_codecvt_facet(const path::codecvt_type *& cvt) { cvt = &std::use_facet<std::codecvt<wchar_t, char, std::mbstate_t> > (path_locale()); } const path::codecvt_type *& path::wchar_t_codecvt_facet() { static const std::codecvt<wchar_t, char, std::mbstate_t> * facet; static boost::once_flag once; boost::call_once(once, boost::bind(&make_wchar_t_codecvt_facet, boost::ref(facet))); return facet; }
Another workaround that seemed to work is to do a path string operation on one thread in advance of multithread operations.
comment:5 Changed 5 years ago by
The call once struct should probably also not be static, I should pull those out.
comment:6 Changed 5 years ago by
Turning on /we4640 to make these warnings errors produces a large number of hits, there are other places where const static "empty path" objects are local to functions.
comment:7 Changed 5 years ago by
comment:8 Changed 5 years ago by
I used revision 83062 and it still crashed because of race condition in codecvt(). Call to codecvt() resulted from my call to fs::exists. I used VC9 and static linking.
I uncommented locale_mutex and its locks in codecvt() and imbue(const std::locale& loc), now everything is fine.
So, please, include mutex into next boost release. Or at least provide some build variant with it, that can be turned on.
comment:9 Changed 4 years ago by
Have not found "boost-root/libs/filesystem/doc/reference.html#path-Usage" section you are referring to. The following workaround works for me:
#ifdef _MSC_VER // fix for boost::filesystem::path::imbue( std::locale( "" ) ); #endif
if placed in main().
comment:10 Changed 4 years ago by
If you're using Boost as a static lib in a DLL you have to apply the fix in the DLLMain example:
#ifdef WIN32 #ifndef WIN32_LEAN_AND_MEAN #define WIN32_LEAN_AND_MEAN #endif #include <Windows.h> #include <boost/filesystem/path.hpp> static void init_boost () { boost::filesystem::path p("dummy"); }; BOOL APIENTRY DllMain( HMODULE /* hModule */, DWORD ul_reason_for_call, LPVOID /* lpReserved */) { switch (ul_reason_for_call) { case DLL_PROCESS_ATTACH: init_boost(); break; case DLL_THREAD_ATTACH: case DLL_THREAD_DETACH: case DLL_PROCESS_DETACH: break; } return TRUE; } #endif
comment:11 Changed 3 years ago by
I'm experiencing this too on 1.55 and MSVC10. Vadim Panin's hack works fine for me.
comment:12 Changed 3 years ago by
comment:13 Changed 3 years ago by
comment:14 Changed 3 years ago by
comment:15 Changed 20 months ago by
comment:16 Changed 17 months ago by
comment:17 Changed 14 months ago by
Experiencing this on 1.61.0 too.
comment:18 Changed 14 months ago by
1.61.0 MSVC120
comment:19 Changed 14 months ago by
I exeperience the same issue with 1.60.0 with VS2010
comment:20 follow-up: 21 Changed 13 months ago by
Just had 1.61.0 crash due to this issue and investigated a bit. The root of the problem is the initialization of a local static inside path_locale().
See path.cpp line 925:
static std::locale loc(default_locale());
This obviously leads to a race between threads with compilers that do not implement thread-safe initialization of local statics. BTW, none of the MSVC compilers before VC2015 do implement this feature, which Microsoft likes to call "magic statics".
So I did some tests with Visual Studio 2015 Update 3. I tested with the default v140 toolset and with the XP-compatibility toolset v140_xp. Here are the rather interesting results:
v140 toolset: works fine. v140_xp toolset: race, crash and burn.
If you step through the code compiled with v140, you can see the thread sync stubs that are automatically inserted by v140. Step through the same code compiled with v140_xp -> no thread sync stubs.
comment:21 Changed 12 months ago by
v140 toolset: works fine.
v140_xp toolset: race, crash and burn.
Clarification: the above mentioned toolsets can be selected in VS 2015 C++ project settings, and are not to be confused with the bjam "toolset" parameter. In the above tests I used a special build of boost built with the VS2015 v140_xp toolset. You cannot normally do this with unmodified boost.build scripts.
However, Windows XP woes aside, the boost download page clearly states for the Visual C++ compilers that the following compilers are supported:
Visual C++: 7.1, 8.0, 9.0, 10.0, 11.0, 12.0, 14.0
This bug is still present in the latest boost v1.62.0 release, and leads to crashes with all of the above compilers up to 12.0, and even with 14.0 when using the command line switch "/Zc:threadSafeInit-" which disables thread-safe initialization of scoped statics [and is the default with v140_xp toolset for DLLs using ATL].
comment:22 Changed 12 months ago by
comment:23 follow-up: 24; }
comment:24 follow-up: 26().
comment:25 Changed 12 months ago by
BOOST 1.61.0. MSVC 12 Update 4
std::string some_path("C:\\something"); boost::filesystem::file_size(some_path);
Workaround from Vadim Panin works.
comment:26 Changed 6().
boost::barrier can help a lot to catch threading issues:
const int num = 10; boost::barrier b(num + 1); std::vector<std::thread> threads; for (int i = 0; i < num; ++i) threads.emplace_back([&b] { b.wait(); boost::filesystem::path::codecvt(); }); b.wait(); for (auto& thread: threads) thread.join();
comment:27 Changed 6 months ago by
I've got a suggestion to resolve the issue.
I think we should rewrite
std::locale& path_locale() in the unnamed namespace in path.cpp
Instead of using
static std::locale (the original code):
std::locale& path_locale() { static std::locale loc(default_locale()); return loc; }
in pre-c++11 we could use an atomic pointer:
std::mutex& locale_mutex() { static std::mutex mutex; return mutex; } std::unique_ptr<std::locale>& locale_unique_ptr() { static std::unique_ptr<std::locale> locale; return locale; } std::atomic<std::locale*>& locale_pointer() { static std::atomic<std::locale*> locale; return locale; } std::locale& path_locale() { if (!locale_pointer().load()) { std::lock_guard<std::mutex> l(locale_mutex()); if (!locale_pointer().load()) { locale_unique_ptr() = std::make_unique<std::locale>(default_locale()); locale_pointer().store(locale_unique_ptr().get()); } } return *locale_pointer().load(); }
In the path.hpp we could add a dummy static object (int) to initialise the mutex, the unique_ptr and the atomic ptr.
namespace { int dummy = boost::filesystem::path::codecvt_init(); }
This static dummy goes to every compilation unit where boost/path.hpp is included and causes the initilisation of the necessary local static helper objects (mutex, atomic and unique ptrs).
codecvt_init() is a static member of the path class and it is defined in the end of the path.cpp file (where path::codecvt is defined, too):
int boost::filesystem::path::codecvt_init() { locale_mutex(); locale_pointer(); locale_unique_ptr(); return 0; }
If our program never calls
path::codecvt the unique ptr remains empty (but initialised!) so it is up the the client when the locale (and the facet) initialisation happens. This is important because on POSIX platforms it can result an exception to be thrown so we do not want to do the initialisation until the
main is called. (Please note this is not always possible, e.g. there is a static logger class instance that uses
path::codecvt before the
main is reached.)
Only bit I do not like (have not been able to sort out) that the path class public interface would have a new static function (
codecvt_init). Calling it by the clients would not cause any harm, but it would be better to hide it somehow. I do not think this the end of the world. Eliminating the crash, i guess, is more important than paying a little price in the form of polluting the interface with a dummy function.
I used
std::atomic,
std::uinique_ptr,
std::mutex and
std::lock_guard that have to be replaced with the corresponding boost classes. It is also neccessary to add an #if <pre c++11> ... #else ... #endif preprocessor condition to switch between the proposed or the original code.
I wonder whether this would be less efficient than the local static stuff.
- Not if we use c++11 (the preprocessor condition would switch)
- I guess, even the local static magic implementation needs some kind of synchronisation for initialisation/access local static objects that may not be less costly than that the atomic ptr version demands
I would be grateful if someone could comment on the idea. If you think it's worth considering it as a solution I could work on patch for review.
comment:28 Changed 6 months ago by
Just noticed that one load could be spared:
std::locale& path_locale() { std::locale* locale = nullptr; if (!(locale = locale_pointer().load())) { std::lock_guard<std::mutex> l(locale_mutex()); if (!(locale = locale_pointer().load())) { locale_unique_ptr() = std::make_unique<std::locale>(default_locale()); locale_pointer().store(locale = locale_unique_ptr().get()); } } return *locale; }
comment:29 Changed 6 months ago by
comment:30 Changed 3 months ago by
With 1.64 it is still there. Has it some priority? Or it will be gone with static inicializing thread-safe compilers (I use MSVC 2013, which is probably not)?
Unhandled exception thrown: read access violation. _Loc._Ptr was nullptr.
Incriminated thread (optimized code):
[Inline Frame] My.dll!std::locale::_Getfacet(unsigned __int64) Line 468 My.dll!std::use_facet<std::codecvt<wchar_t,char,int> >(const std::locale & _Loc) Line 572 My.dll!MyCode() Line 692 // calling boost::filesystem::path(const std::string &)
I also suspect path_locale(), where is static inicializing global shared object across threads.
comment:31 Changed 3 months ago by
Kind of curious, why this one is not fixed until now? We have found a few crash reports related to this bug.
Maybe we have to upgrade from vs2013 to vs2015? Unfortunately, our code base is not ready for that yet:-(
I think the problem is in file filesystem/v3/source/path.cpp in method
This method is entered by multiple threads concurrently and the static pointer "facet" gets initialized by the first thread (which takes some time) but the other threads don't wait and just continue while facet is a NULL pointer.
In boost 1.44 this crash didn't occur. It seems that the initialization of global variable const fs::path dot_path(L"."); in filesystem/v3/source/path.cpp resulted in a call of path::wchar_t_codecvt_facet(), so the static "facet" pointer got initialized during program startup while no other threads were running.
In boost 1.48 the initialization of the same globale variable doesn't result in a call of path::wchar_t_codecvt_facet() so race conditions might occur during initialization of the static "facet" pointer (in C++ 03). | https://svn.boost.org/trac10/ticket/6320 | CC-MAIN-2017-47 | refinedweb | 2,318 | 56.15 |
This is the mail archive of the gcc-patches@gcc.gnu.org mailing list for the GCC project.
On Fri, Feb 27, 2004 at 09:47:31AM -0800, Caroline Tice wrote: > + ret_val = (scan_ahead_for_unlikely_executed_note (BB_HEAD (e->src)) == > + scan_ahead_for_unlikely_executed_note (BB_HEAD (e->dest))); The number of times you're scanning instructions for this note is bad. You should just keep a bitmap or sbitmap, depending on the expected density of the set bits. > { > + ... > + > + } > + > + } Don't add silly whitespace. Lots of ocurrences. > ! if (round < last_round - 1 > + && (round == last_round - 1) > + && (round <= last_round - 1) I think these tests are confusing. It would be much better to have a predicate or nicely named boolean variable to accurately describe what's being tested here. Fold flag_reorder_blocks_and_partition into said predicates. > + static > + void color_basic_blocks (int *bb_colors, int *bb_has_label, Bad formatting. Several ocurrences. > + /* Find all the edges that cross between hot and cold sections. */ ... > + /* Check to see if cur_bb ends in an unconditional jump. If > + so, there is nothing that needs to be done for it. */ Well that would seem to deny the function block comment. What exactly is this function supposed to be collecting? > + if ((GET_CODE (last_insn) == JUMP_INSN) > + && (! any_condjump_p (last_insn))) > + continue; Redundant check form jump_insn. Redundant parenthesis. Lots of ocurrences. > + if ((GET_CODE (last_insn) != JUMP_INSN) > + && (succ1->dest != cur_bb->rbi->next)) > + continue; Another check for jump_insn? I'm not sure what you're not just doing FOR_EACH_BB(bb) for (e = bb->succ; e ; e = e->succ_next) Of course, that means you're not necessarily capped at 2*n_basic_blocks, which means crossing_edges needs to be allocated and reallocated within this function. Alternately, I might guess that you could use e->aux to mark those that cross sections, so you don't have to allocate memory at all. You'd have to double-check that, since I don't know which aux bits are currently used by bb-reorder. You might well be able to add your bits to those datastructures, however. A final alternative is to allocate a bit in e->flags. > + /* If any destination of a crossing edge does not have a label, add label; > + Convert any fall-through crossing edges (for blocks that do not contain > + a jump) to unconditional jumps. */ > + > + static > + void add_labels_and_missing_jumps (edge *crossing_edges, > + int n_crossing_edges, > + int *bb_has_label, int *bb_has_jump) The creating label part of this is wrong -- we have block_label that will create or find labels for blocks on demand. I see that there's the no-jump fallthrough part remaining; perhaps this function becomes clearer without the label bits. > + if (dest && (dest->index != -2)) /* Not EXIT BLOCK */ Never ever compare vs -2/-1 directly. > + if (fall_thru && (fall_thru->dest->index >= 0)) rtl_verify_flow_info assures us that we never fallthru to exit. > + /* Find the jump instruction. */ > + > + for (cur_insn = BB_HEAD (cur_bb); > + cur_insn != NEXT_INSN (BB_END (cur_bb)); > + cur_insn = NEXT_INSN (cur_insn)) > + if (GET_CODE (cur_insn) == JUMP_INSN) The jump insn is always at the end. Always. > + if (cond_jump) Don't we already know that both edges already exist? > + /* Find label in fall_thru block. We've already added > + any missing labels, so there must be one. */ block_label. > + /* This is the case where both edges out of the basic > + block are crossing edges. Here we will fix up the > + fall through edge. The jump edge will be taken care > + of later. */ > + > + new_bb = force_nonfallthru (fall_thru); Seems like this new block should be reused for other crossings, but I can't see how it would get found. > + REG_NOTES (BB_END (new_bb)) = gen_rtx_EXPR_LIST > + (REG_CROSSING_JUMP, > + NULL_RTX, > + NULL_RTX); Don't all of these get added at the end? > + static basic_block > + find_jump_block (rtx old_label, basic_block *block_list, int max_idx) Again this seems like something that we shouldn't be searching for. Either we can process all crossing edges incomming to each block and create just the one bounce block (and never have to record it) or we should have this placed in some lookaside data structure (like RBI). > + /* Find last, last hot, and last cold basic blocks. These will be > + used as insertion points for new bb's containing unconditional > + jumps (to cross section boundaries). */ Why wouldn't this be taken care of with the bb-reordering block ordering data structures? > + /* Find the jump insn in cur_bb. */ > + > + found = 0; > + old_jump = NULL; > + for (cur_insn = BB_HEAD (cur_bb); > + (cur_insn != NEXT_INSN (BB_END (cur_bb))) && !found; > + cur_insn = NEXT_INSN (cur_insn)) Again, it's last. > + /* Check to make sure the jump instruction is a > + conditional jump. */ We have predicates in jump.c for this sort of thing. > + /* Check to see if bb ends in an unconditional jump. */ > + > + if ((GET_CODE (last_insn) == JUMP_INSN) > + && (! any_condjump_p (last_insn))) > + { > + succ = cur_bb->succ; > + > + if (succ->succ_next) > + { > + rtx set_src; > + if (GET_CODE (PATTERN (last_insn)) == SET) > + set_src = SET_SRC (PATTERN (last_insn)); > + else if (GET_CODE (PATTERN (last_insn)) == PARALLEL) > + { > + set_src = XVECEXP (PATTERN (last_insn), 0, 0); > + if (GET_CODE (set_src) == SET) > + set_src = SET_SRC (set_src); > + else > + set_src = NULL_RTX; > + > + if (set_src && (GET_CODE (set_src) == REG)) > + continue; > + else > + abort (); > + } > + else > + abort (); > + } > + What kind of unconditional direct jump has two edges? Are you looking to exclude tablejump and computed jump here or what? Again, there's predicates for this sort of thing in jump.c. > *************** try_simplify_condjump (basic_block cbran > *** 148,153 **** > --- 149,163 ---- > return false; > jump_dest_block = jump_block->succ->dest; > > + /* If we are partitioning hot/cold basic blocks, we don't want to > + mess up unconditional or indirect jumps that cross between hot > + and cold sections. */ > + > + if (flag_reorder_blocks_and_partition > + && (scan_ahead_for_unlikely_executed_note (BB_HEAD (jump_block)) != > + scan_ahead_for_unlikely_executed_note (BB_HEAD (jump_dest_block)))) > + return false; Ok, you use this data outside your own pass. This *definitely* wants some sort of basic_block/edge annotation. > *************** try_crossjump_to_edge (int mode, edge e1 > *** 1367,1372 **** > --- 1418,1430 ---- > rtx newpos1, newpos2; > edge s; > > + /* If we have partitioned hot/cold basic blocks, it is a bad idea > + to try this optimization. */ > + > + if (flag_reorder_blocks_and_partition && no_new_pseudos) > + return false; Hum? I don't see why not. You just want to avoid crossjumping between hot/cold blocks. I'm also quite certain that you don't want to insert NOTE_INSN_UNLIKELY_EXECUTED_CODE or REG_CROSSING_JUMP until late in the compilation process. They'll just cause you problems trying to keep them up-to-date. Also, you'll want to have verify_flow_info changes that validate things like fallthru only between same sections. > + if (current_function_decl->decl.section_name) > + fprintf (asmfile, SECTION_FORMAT_STRING, > + TREE_STRING_POINTER (current_function_decl->decl.section_name)); > + else > + fprintf (asmfile, SECTION_FORMAT_STRING, HOT_TEXT_SECTION_NAME); Hmm? Are you absolutely certain you don't want function_section()? > #ifndef HOT_TEXT_SECTION_NAME > - #define HOT_TEXT_SECTION_NAME "text.hot" > + #define HOT_TEXT_SECTION_NAME ".text" > #endif This conflicts with existing usage of HOT_TEXT_SECTION_NAME. > #ifndef UNLIKELY_EXECUTED_TEXT_SECTION_NAME > - #define UNLIKELY_EXECUTED_TEXT_SECTION_NAME "text.unlikely" > + #define UNLIKELY_EXECUTED_TEXT_SECTION_NAME ".textunlikely" > #endif Why? > + static bool > + is_jump_table_basic_block (rtx insn) You mean something like tablejump_p? > + if (!in_unlikely_text_section()) > + unlikely_text_section (); Section changing should already take care of the if. > case NOTE_INSN_BASIC_BLOCK: > + > + /* If we are performing the optimization that paritions > + basic blocks into hot & cold sections of the .o file, > + then at the start of each new basic block, before > + beginning to write code for the basic block, we need to > + check to see whether the basic block belongs in the hot > + or cold section of the .o file, and change the section we > + are writing to appropriately. */ > + > + if (flag_reorder_blocks_and_partition) > + if (in_unlikely_text_section()) > + if (!(scan_ahead_for_unlikely_executed_note (insn))) > + text_section (); NOTE_INSN_BASIC_BLOCK occurs after the CODE_LABEL. > ! if (!unlikely_section_label_printed) > ! { > ! fprintf (asm_out_file, "_%s_unlikely_section:\n", > ! current_function_name ()); > ! unlikely_section_label_printed = true; > ! } This should be some debug callback, since dwarf2 is going to want to handle this in some other way. And it would have to be two leading underscores to get it out of the user's namespace. > + #define SECTION_FORMAT_STRING ".section\t\"%s\"\n\t.align 2\n" What is this for? r~ | https://gcc.gnu.org/legacy-ml/gcc-patches/2004-03/msg00318.html | CC-MAIN-2020-16 | refinedweb | 1,211 | 58.79 |
In Ruby it isn't possible to execute a piece of code that isn't defined on an object, because there is nothing in Ruby that is not an object - this way, strictly speaking, a concept of a function does not exist, there are only methods.
Methods in Ruby are defined using the reserved word
def.
Blocks, Procs, Methods and Lambdas are variances of a function in Ruby. All functions in Ruby act (or can be made to act) like some variant of a Proc. lambdas in Ruby are objects of class Proc. Proc objects don't belong to any object. They are called without binding them to an object.
add = lambda { |a,b| a + b } add = ->(a, b) { a + b }
Lambdas are like Procs, but with stricter argument passing and localised returns. Create a lambda which takes two arguments and returns the result of calling
+ on the first, with the second as its arguments.
In Ruby 1.9, the Proc class gained the method:
#curry.
plus_five = add.curry[5] puts plus_five[8]
plus_five = add.curry.(5) puts plus_five.(8)
Blocks in Ruby are a special syntactic sugar to create Procs.
def my_method yield if block_given? end my_method do puts 3 + 9 end # => 12
def my_method(&block) block.class end my_method do end # => Proc
def my_method yield end my_method &proc { puts "Hello World!" } # => "Hello World!" syntactic sugar for some variation of an underlying Proc.
def executor yield end def greet "Hello World" end executor &method(:greet)
Lambdas and methods validate the arguments they receive while Procs do not: if you pass only one argument to a Lambda taking two arguments, you’ll get an
ArgumentError. If you do the same to a Proc, it will accept it and set the rest of the arguments to
nil. Any
return statements used in a Proc will return from the method that called that Proc. Lambdas, on the other hand, will not - you can call a Lambda, get its return value, and process it, all within the one method.
Higher-order functions are functions that accept a function as an argument and/or return a function as the return value.
def adder(a, b) lambda { a + b } end adder_fn = adder(5, 9) adder_fn.call # => 14
Partial function aplication is calling a function with some number of arguments, in order to get a function back that will take that many less arguments. Currying is taking a function that takes n arguments, and splitting it into n functions that take one argument.
apply_math = lambda do |fn, a, b| a.send(fn, b) end Result (best) add = apply_math.curry.(:+) subtract = apply_math.curry.(:-) multiply = apply_math.curry.(:*) divide = apply_math.curry.(:/) add.(4, 7) # => 11 increment = add.curry.(1)
My Tech Newsletter
Get emails from me about programming & web development. I usually send it once a month | https://zaiste.net/posts/functions-methods-ruby/ | CC-MAIN-2021-10 | refinedweb | 468 | 65.52 |
TestNG is a powerful testing framework, an enhanced version of JUnit which was in use for a long time before TestNG came into existence. NG stands for 'Next Generation'.
TestNG framework provides the following features −
Step 1 − Launch Eclipse and select 'Install New Software'.
Step 2 − Enter the URL as '' and click 'Add'.
Step 3 − The dialog box 'Add Repository' opens. Enter the name as 'TestNG' and click 'OK'
Step 4 − Click 'Select All' and 'TestNG' would be selected as shown in the figure.
Step 5 − Click 'Next' to continue.
Step 6 − Review the items that are selected and click 'Next'.
Step 7 − "Accept the License Agreement" and click 'Finish'.
Step 8 − TestNG starts installing and the progress would be shown follows.
Step 9 − Security Warning pops up as the validity of the software cannot be established. Click 'Ok'.
Step 10 − The Installer prompts to restart Eclipse for the changes to take effect. Click 'Yes'.
Annotations were formally added to the Java language in JDK 5 and TestNG made the choice to use annotations to annotate test classes. Following are some of the benefits of using annotations. More about TestNG can be found here).
Step 1 − Launch Eclipse and create a 'New Java Project' as shown below.
Step 2 − Enter the project name and click 'Next'.
Step 3 − Navigate to "Libraries" Tab and Add the Selenium Remote Control Server JAR file by clicking on "Add External JAR's" as shown below.
Step 4 − The added JAR file is shown here. Click 'Add Library'.
Step 5 − The 'Add Library' dialog opens. Select 'TestNG' and click 'Next' in the 'Add Library' dialog box.
Step 6 − The added 'TestNG' Library is added and it is displayed as shown below.
Step 7 − Upon creating the project, the structure of the project would be as shown below.
Step 8 − Right-click on 'src' folder and select New >> Other.
Step 9 − Select 'TestNG' and click 'Next'.
Step 10 − Select the 'Source Folder' name and click 'Ok'.
Step 11 − Select the 'Package name', the 'class name', and click 'Finish'.
Step 12 − The Package explorer and the created class would be displayed.
Now let us start scripting using TestNG. Let us script for the same example that we used for understanding the WebDriver. We will use the demo application,, and perform percent calculator.
In the following test, you will notice that there is NO main method, as testNG will drive the program execution flow. After initializing the driver, it will execute the '@BeforeTest' method followed by '@Test' and then '@AfterTest'. Please note that there can be any number of '@Test' annotation in a class but '@BeforeTest' and '@AfterTest' can appear only once.
package TestNG; import java.util.concurrent.TimeUnit; import org.openqa.selenium.*; import org.openqa.selenium.firefox.FirefoxDriver; import org.testng.annotations.AfterTest; import org.testng.annotations.BeforeTest; import org.testng.annotations.Test; public class TestNGClass { WebDriver driver = new FirefoxDriver(); @BeforeTest public void launchapp() { // Puts an Implicit wait, Will wait for 10 seconds before throwing exception driver.manage().timeouts().implicitlyWait(10, TimeUnit.SECONDS); // Launch website driver.navigate().to(""); driver.manage().window().maximize(); } @Test public void calculatepercent() { //); if(result.equals("5")) { System.out.println(" The Result is Pass"); } else { System.out.println(" The Result is Fail"); } } @AfterTest public void terminatetest() { driver.close(); } }
To execute, right click on the created XML and select "Run As" >> "TestNG Suite"
The output is thrown to the console and it would appear as shown below. The console output also has an execution summary.
The result of TestNG can also be seen in a different tab. Click on 'HTML Report View' button as shown below.
The HTML result would be displayed as shown below. | https://www.tutorialspoint.com/selenium/selenium_test_ng.htm | CC-MAIN-2019-47 | refinedweb | 610 | 59.9 |
Hi There,
I am very new to Python scripting and have started to learn through my lab PaloAlto Firewall.
I wrote a basic script to just return me the output of an operational command. This script works on my lab SRX but hangs on the PaloAlto.
The code snippet : (indentation is correct)
root# vi PAoutTest.py
#!/usr/bin/python
import paramiko
import time
import inspect
ssh = paramiko.SSHClient()
ssh.set_missing_host_key_policy(paramiko.AutoAddPolicy())
timeout = 30
endtime = time.time() + timeout
iin = paramiko.sys.stdin
out = paramiko.sys.stdout
err = paramiko.sys.stderr
ssh.connect("x.x.x.x",username="xxxxx",password="xxxx")
iin, out, err = ssh.exec_command("show clock")
print out
#print out.read()
while not out.channel.eof_received:
time.sleep(1)
if time.time() > endtime:
out.channel.close()
break
print out.read()
I have included a while loop because without it the script would just hang forever and I have to manually kill the process.
So after the while loop the script times out without providing any output.
Is this not possible with PaloAlto or only way to automate is via API?
Please help.
Yes it is possible to log into the Palo Alto using Paramiko, as long as you get all the timing and everything correct.
I use this instead of using exec, I believe exec is only able to run one command.
channel = ssh.invoke_shell()
output = channel.recv(10000)
print output
print "connected to device"
channel.send('show interface all\n')
output3 = channel.recv(100! | https://live.paloaltonetworks.com/t5/Automation-API-Discussions/Paramiko-with-PaloAlto/m-p/131462 | CC-MAIN-2020-05 | refinedweb | 246 | 62.44 |
Unit testing C# with NUnit and .NET Core
This tutorial takes you through an interactive experience building a sample solution step-by-step to learn unit testing concepts. If you prefer to follow the tutorial using a pre-built solution, view or download the sample code before you begin. For download instructions, see Samples and Tutorials.
This article is about testing a .NET Core project. If you're testing an ASP.NET Core project, see Integration tests in ASP.NET Core.
Prerequisites
- .NET Core 2.1 SDK or later versions.
- A text editor or code editor of your choice.
Creating the source project
Open a shell window. Create a directory called unit-testing-using-nunit to hold the solution. Inside this new directory, run the following command to create a new solution file for the class library and the test project:
dotnet new sln
Next, create a PrimeService directory. The following outline shows the directory and file structure so far:
/unit-testing-using-nunit unit-testing-using-nunit.sln /PrimeService
Make PrimeService the current directory and run the following command to create the source project:
dotnet new classlib
Rename Class1.cs to PrimeService.cs. You create a failing implementation of the
PrimeService class:
using System; namespace Prime.Services { public class PrimeService { public bool IsPrime(int candidate) { throw new NotImplementedException("Please create a test first."); } } }
Change the directory back to the unit-testing-using-nunit directory. Run the following command to add the class library project to the solution:
dotnet sln add PrimeService/PrimeService.csproj
Creating the test project
Next, create the PrimeService.Tests directory. The following outline shows the directory structure:
/unit-testing-using-nunit unit-testing-using-nunit.sln /PrimeService Source Files PrimeService.csproj /PrimeService.Tests
Make the PrimeService.Tests directory the current directory and create a new project using the following command:
dotnet new nunit
The dotnet new command creates a test project that uses NUnit as the test library. The generated template configures the test runner in the PrimeService.Tests.csproj file:
<ItemGroup> <PackageReference Include="nunit" Version="3.12.0" /> <PackageReference Include="NUnit3TestAdapter" Version="3.16.0" /> <PackageReference Include="Microsoft.NET.Test.Sdk" Version="16.4.0" /> </ItemGroup>. Use the
dotnet add reference command:
dotnet add reference ../PrimeService/PrimeService.csproj
You can see the entire file in the samples repository on GitHub.
The following outline shows the final solution layout:
/unit-testing-using-nunit unit-testing-using-nunit.sln /PrimeService Source Files PrimeService.csproj /PrimeService.Tests Test Source Files PrimeService.Tests.csproj
Execute the following command in the unit-testing-using-nunit directory:
dotnet sln add ./PrimeService.Tests/PrimeService.Tests.csproj
Creating the first test
You write one failing test, make it pass, then repeat the process. In the PrimeService.Tests directory, rename the UnitTest1.cs file to PrimeService_IsPrimeShould.cs and replace its entire contents with the following code:
using NUnit.Framework; using Prime.Services; namespace Prime.UnitTests.Services { [TestFixture] public class PrimeService_IsPrimeShould { [Test] public void IsPrime_InputIs1_ReturnFalse() { PrimeService primeService = CreatePrimeService(); var result = primeService.IsPrime(1); Assert.IsFalse(result, "1 should not be prime"); } /* More tests */ private PrimeService CreatePrimeService() { return new PrimeService(); } } }
The
[TestFixture] attribute denotes a class that contains unit tests. The
[Test] attribute indicates a method is a test method.
Save this file and execute
dotnet test to build the tests and the class library and then run the tests. The NUnit test runner contains the program entry point to run your tests.
dotnet test starts the test runner using the unit test project you've created.
Your test fails. You haven't created the implementation yet. Make this test pass by writing the simplest code in the
PrimeService class that works:
public bool IsPrime(int candidate) { if (candidate == 1) { return false; } throw new NotImplementedException("Please create a test first."); }
In the unit-testing-using-nunit directory, run
dotnet test again. The
dotnet test command runs a build for the
PrimeService project and then for the
PrimeService.Tests project. After building both projects, it runs this single test. It passes.
Adding more features
Now that you've made one test pass, it's time to write more. There are a few other simple cases for prime numbers: 0, -1. You could add new tests with the
[Test] attribute, but that quickly becomes tedious. There are other NUnit attributes that enable you to write a suite of similar tests. A
[TestCase] attribute is used to create a suite of tests that execute the same code but have different input arguments. You can use the
[TestCase] attribute to specify values for those inputs.
Instead of creating new tests, apply this attribute to create a single data driven test. The data driven test is a method that tests several values less than two, which is the lowest prime number:
[TestCase(-1)] [TestCase(0)] [TestCase(1)] public void IsPrime_ValuesLessThan2_ReturnFalse(int value) { var result = _primeService.IsPrime(value); Assert.IsFalse(result, $"{value} should not be prime"); }
Run
dotnet test, and two of these tests fail. To make all of the tests pass, change the
if clause at the beginning of the
Main method in the PrimeService.cs file:
if (candidate < 2)
Continue to iterate by adding more tests, more theories, and more code in the main library. You have the finished version of the tests and the complete implementation of the library.
You've built a small library and a set of unit tests for that library. You've structured the solution so that adding new packages and tests is part of the normal workflow. You've concentrated most of your time and effort on solving the goals of the application.
Feedback | https://docs.microsoft.com/en-us/dotnet/core/testing/unit-testing-with-nunit | CC-MAIN-2020-05 | refinedweb | 930 | 51.14 |
Hello Kevin, I don't know if it can be a solution to your problem but for my Master Thesis I'm working on making Stackless Python distributed. What I did is working but not complete and I'm right now in the process of writing the thesis (in french unfortunately). My code currently works with PyPy's "stackless" module onlyis and use some PyPy specific things. Here's what I added to Stackless: - Possibility to move tasklets easily (ref_tasklet.move(node_id)). A node is an instance of an interpreter. - Each tasklet has its global namespace (to avoid sharing of data). The state is also easier to move to another interpreter this way. - Distributed channels: All requests are known by all nodes using the channel. - Distributed objets: When a reference is sent to a remote node, the object is not copied, a reference is created using PyPy's proxy object space. - Automated dependency recovery when an object or a tasklet is loaded on another interpreter With a proper scheduler, many tasklets could be automatically spread in multiple interpreters to use multiple cores or on multiple computers. A bit like the N:M threading model where N lightweight threads/coroutines can be executed on M threads. The API is described here in french but it's pretty straightforward: The code is available here (Just click on the Download link next to the trunk folder): You need pypy-c built with --stackless. The code is a bit buggy right now though... -------------- next part -------------- An HTML attachment was scrubbed... URL: <> | https://mail.python.org/pipermail/pypy-dev/2010-July/006041.html | CC-MAIN-2018-26 | refinedweb | 257 | 62.17 |
On 11/22/2011 07:23 PM, Tejun Heo wrote:> Hello,> > On Tue, Nov 22, 2011 at 03:11:02PM +0400, Pavel Emelyanov wrote:>>> Hmmm... I hope this could be prettier. I'm having trouble following>>> where the MAY_OPEN comes from. Can you please explain?>>>> From this calltrace:>>>> pid_ns_ctl_permissions>> sysctl_perm>> proc_sys_permission>> inode_permission>> do_last <<<<< MAY_OPEN appears here>> path_openat>> do_filp_open>> do_sys_open>> sys_open> > Thanks a lot. :)> >>> Can't we for now allow this for root and then later allow CAP_CHECKPOINT >>> that Cyrill suggested? Or do we want to allow setting pids even w/o CR >>> for NS creator?>>>> I think that systemd guys can play with it. E.g. respawning daemons with predefined>> pids sounds like an interesting thing to play with.> > But wouldn't CAP_CHECKPOINT be enough for systemd?It would, but what's the point in granting to a systemd (which can be a container'sinit by the way) the ability to use the _whole_ checkpoint/restore engine?Even more - protecting with the capability implies, that any task might want to playwith it. But what's the point for an arbitrary task, that just _lives_ in a pid namespaceto set the last_pid of its namespace?>>>> +static int pid_ns_ctl_handler(struct ctl_table *table, int write,>>>> + void __user *buffer, size_t *lenp, loff_t *ppos)>>>> +{>>>> + struct ctl_table tmp = *table;>>>> + tmp.data = ¤t->nsproxy->pid_ns->last_pid;>>>> + return proc_dointvec(&tmp, write, buffer, lenp, ppos);>>>> +}>>>>>> Probably better to call set_last_pid() on write path instead?>>>> Why? The usage of this sysctl is going to be synchronized by external locks,>> so why should we care?> > I think the question should usually be the other way around. Why> deviate when the deviation doesn't earn any tangible benefit? If you> think setting it explicitly is justified, explain why in the comment> of the setter and places where those explicit settings are.The set_last_pid() is the way to update the last_pid by two concurrent updaters. Sincesetting the last_pid via sysctl is racy by its nature, using that race protection isjust pointless.And yes, I agree, that writing this comment is a good idea :)> Thanks.> | https://lkml.org/lkml/2011/11/22/305 | CC-MAIN-2019-43 | refinedweb | 341 | 73.78 |
After what seems like a long slumber, along with work being done on other projects such as Topshelf and Stact, it is our great pleasure to announce the first beta release of MassTransit v2.0. What originally started out as a minor “1.3” update has turned into a full-out cleanup of the codebase, including a refinement of the configuration API. Since there were some breaking changes to the configuration, we felt a 2.0 moniker was better to ensure users of the framework understood the depth of the changes made.
And what a list of changes it is (TL;DR = We filled it with awesomeness):
Configuration
MassTransit v2.0 now includes a streamlined configuration model built around an extensible fluent interface (inspired by Stact and Topshelf and sharing a common, consistent design). As a result, getting started with MassTransit is now easier than ever. In version 2.0, all configuration starts with the ServiceBusFactory and Intellisense guides you from there forward. The result is a clean, understandable API and a quicker out-of-the-box experience.
Container-Free Support
With the release of MassTransit 2.0, using a dependency injection container is now optional. When we started MassTransit, we leveraged the container extensively to assemble the internal workings of the bus. As we added support for other containers, required features that were not supported by a particular container led to some creative solutions (read: hacks) that were less than optimal. By moving away from a “container-first” approach, we have increased the reliability of the software and now provide container-specific extensions to subscribe consumers from the container in one simple step. We also threw in support for Autofac!
Quick-Start
By simplifying the configuration, and dropping the need for a container, it is now fast and easy to get started using our new QuickStart:
#NuGet
NuGet packages have been added for the base MassTransit project, with any external dependencies (log4net and Magnum) resolved using the proper NuGet packages. Any additional references are downstream in additional NuGet packages, such as support for persisting sagas using NHibernate (MassTransit.NHibernate), and the various dependency injection containers supported.
Multiple Subscription Service Options
In addition to the existing RuntimeServices included with MassTransit, an all-new peer-to-peer subscription service has been added. By leveraging the reliable multi-cast support in MSMQ, services can now exchange subscription information without the need for a centralized subscription service. To ensure everything is setup correctly, a VerifyMsmqConfiguration method has been added that will check the installation of MSMQ and install any missing components. This is the first iteration of multi-cast support, and we need to get some mileage on it. In the meantime, the original run-time services continue to work as expected.
Documentation
Which brings us to the next big update. DOCS! They’re not perfect, and they’re far from complete, but we have focused on the configuration story to help get you up and running. As we see a need for more documentation in a given area, we will continue to flush out the docs appropriately. The docs are located at and are being hosted by the fine people at. [Thanks Eric!]
Support for .NET 4.0 and .NET 3.5
The project files and solution have all been updated to Visual Studio 2010 SP1. By default, all projects are now built in the IDE targeting .NET 4.0. The command-line build (which has been revamped to use Rake and Albacore) builds both .NET 3.5 and .NET 4.0 assemblies, including the run-time services and System View. The NuGet packages also include the proper bindings for the target project run-time version (you must use the full .NET 4.0 profile with MassTransit, the client profile is not supported).
Transport Support
Internally, the transports and endpoints have been redesigned to improve the support for new transports like RabbitMQ (and improve our ActiveMQ support). For example, transports are now inbound, outbound, or both, allowing us to properly leverage fan-out exchanges on RabbitMQ for publishing and subscribing to messages. There is more to come in this area as we take greater advantage of these advanced transport features. If you’re a RabbitMQ or ActiveMQ user and don’t mind getting your hands dirty, now is a great time to jump in and help improve transport support.
Distributor Consumer And Saga Support
Work on the MassTransit distributor subsystem continues to be improved. Testing on a multi-master system has been completed which will allow it to serve multiple distributors to improve load balancing efficiency. Support for all sagas (previously only state machine sagas were supported) has been added as well.
Swinging the Feature Axe
Some previous troublesome and poorly supported features (Batching and Message Grouping) were removed from the 2.0 release to reduce code complexity. Also in light of the new Parallel Tasks work in the framework the Parallel namespace has been removed.
In the next few days, I’ll be posting an annotated walkthrough of the new configuration API. In the meantime, fire up Visual Studio 2010, create ConsoleApplication69, switch to the full .NET 4.0 framework, and Add a Library Package Reference to MassTransit using NuGet. Paste the code from the Quick Start into your program.cs and check it out! | https://lostechies.com/chrispatterson/2011/05/03/masstransit-v2-0-beta-available-now-on-nuget/ | CC-MAIN-2016-44 | refinedweb | 884 | 54.32 |
Of business.
I've drawn up this table, which compares Scala and Clojure, feature for feature, pound for pound.
So there are certainly big differences between these 2 languages. Clojure being a Lisp is read as a series of expressions all passing their data upstream, like
(myfunc (first (iterate inc 1)))
You read that backwards,
That's how most Lisp code looks. Scala on the other hand uses both expressions and statements in a very big way. Although the collections in scala.* are all immutable I've seen alot of code based on Arrays which perform well, but are mutable. An example written by Martin Odersky (author <_ style="color: #afeeee; font-weight: bold;" span="span" _="_" j="j">while (xs(i) < pivot) i += 1 while (xs(j) > pivot) j = 1 if (i <_ style="color: #afeeee; font-weight: bold;" span="span" i="i" swapi="swapi" _="_" j="1">if (l < j) sort1(l, j) if (j < r) sort1(i, r) } sort1(0, xs.length 1) }
This is one way to implement a Quick Sort routine in Scala - It's not the nicest way, but it works. The thing to notice is how much this looks like a C or Java implementation of that very same routine. Values are mutated, nested while-loops, the works. Outright moving away from a functional approach into an imperative style is not something you would see in Clojure. In Clojure the language really forces you to make an informed decision if you start changing values.
A second thing to notice is that Clojure has explicitly rejected the OOP approach to programming, because
Scala has done quite the opposite, they've wholeheartedly embraced OOP making everything objects. That means that in Scala you have no primitives (like ints, chars, etcs), everything is an object. Martin Odersky, says what he likes about OOP is that it makes it easy to
...complex systems. I think he has a point in saying that complex systems are easy to extend when built on OOP, I'll make the argument though that the complexity which comes from an OOP approach makes the building of the system unecessarily difficult. But thats another story.
Through the rest of this post, keep in mind that some of the differences shouldn't be viewed as a wrong/right discussion. Scala has a different approach to concurrency than Clojure and although my preference is clear, that doesn't make Scala's way wrong or inferior.
Yes, Scala has Static typing but this is coupled with type inference. In practice you need to specify types in your function definitions and
return types (update: As RSchulz correctly points out the Scala compiler checks exit points and infers return types), but throughout most of your logic Scala will be able to discern which types your dealing with. For example
scala> 2 + 2
res0: Int = 4
I ask the Scala REPL to calculate 2 + 2, I don't specify that these are of type Integer. It comes back with the first result of the day (res0) and says: Its an Int! So thats not unbearable like C is for instance. I would personally hate to have to specify types for all my functions. For one because I feel it's unnecessary, and second I want most of my functions to work with a multitude of types.
Before hearing the Scala community I considered this to be a fundamental weakness in Scala, that you have to submit to it's type system. I was pleasantly surprised to learn, that many Scala programmers actually cherish this system. One guy asked me 'Havent you ever passed arguments in the wrong order and expected a different return than what you got ?" and I'll be honest: No, I haven't. But for people who have a hard time getting arguments order and return types right, this system is a great help. So it has it's place, I must say.
This a big point. Clojure is functional and pure, it protects you. If you want to mutate transients for example and one of those is accessed from an outside thread, you get an exception. Not so with Scala.
Scala is functional in the broadest sense of the word: It has immutable constructs. Scala does not in anyway discourage it's users from side-effects and mutable code. This is baaaaaaaaaad. Programmers sometimes need the box to be visible in order to not step out of it, when it's unnecessary. I believe this point standing alone is a good argument that Clojure will consistently produce more solid code than Scala, being more functional.
I'll walk you though the building of a factorial + benchmark in both languages. That'll give you a feel of what it's about.
Factorial
Written as: x!
Example 1: 5! = 5 * 4 * 3 * 2 * 1 = 120
Example 2: 0! = 1
Thats factorial in a mathematical sense, now lets code it.Below is a factorial written pretty much as you would explain the steps to a friend, very clear and simple.
So there you have 2 versions, doing the exact same thing. Take an argument x, if x = 0 return 1, if not then return x * the factorial of x - 1. This recurses like so
(factorial 5) (5 * (factorial 4)) (5 * 4 * (factorial 3)) (5 * 4 * 3 * (factorial 2)) (5 * 4 * 3 * 2 * (factorial 1)) (5 * 4 * 3 * 2 * 1 * (factorial 0)) (5 * 4 * 3 * 2 * 1 * 1) --------------------- 120
So the result is correct, but due to us building up the stack, calculating 3600! will blow the stack. Not wanting to look bad, we need to fix this and manually handle our stack. But before we do, please notice how readable the above code is. Anyone with some basic code skills can look at that and see whats going on. We don't want to loose this as we move on.
To fix this, we add an accumulator as an argument so we don't build up the stack. One way to do this, would look like this:
We are still concise on both parts. Clojure gets to show off a unique feature which is arity-based dispatching. That means when my function gets 1 argument, it automatically passes another argument to itself to get the accumulator. This is a clever way of embedding helper functions, which are specific to a particular function, inside the main logic.
Scala uses a general switch case architecture which simply checks if x = 0 or not and then acts as before. Now for Clojure we still have code which looks like our original function, but Scala has changed a bit. Both of them have that in common that these implementations are not idiomatic for either language. To get back to the business of manipulating sequences, I'll show you both of these implemented as idiomatic one-liners that don't blow the stack:
Decide for yourselves what would suit your style of code best. I think both are classy examples :)
Alright, so we've written code which earns us the envy of the world but how does it perform? Firstly, microbenchmarking is an art in itself, and often times a pointless one, so don't read more into the results than you should! Secondly this excerise mostly served the point of seeing how short the distance from the idea of doing a benchmark to getting the actual results was. One of the things I've always loved about Lisp is the awesome velocity you work with. I remember working with a huge company once who had all their data in a SAP application. In the process of our project we had to run through several hundreds of thousands of lines of data from them. Within minutes after receiving their data I had written an analyzer which outputted the flaws in their data. I sent it back and it took them days to reach the same results. This is Lisp, can Scala compete?
Off the top of my head I know I'll want to run about 200 hits on factorial(5000) and calculate average cpu time. That'll give us real computations (and not just cached results) and it will also put us through a round of garbage collection. Since Clojure has macros (functions where I control the evaluation myself), its very simple to get the time of a computation. There is a time-macro in core and it looks something like this:
(defmacro time [& body] (let [start-time (now) result (evaluate body)] (println "time: " (- start-time (now))) result))
(pseudo code)
So thats very simple. Record the start-time, compute the body, subtract now from start-time. Output is like so:
user> (time (+ 2 2)) "Elapsed time: 0.093447 msecs" 4
Alright, for the purpose of my little benchmark this is not optimal because I need to work with the number (ie. calculate an average) and not have them printed, but fortunately with-out-str rebinds *out* for me, so benchmarking is actually very straightfoward:
user> (let [values (for [i (range 200)] (with-out-str (time (fac 5000)))) doubles (map #(Double. (nth (.split % " ") 2)) values)] (println "Average cpu time: " (/ (apply + doubles) (count doubles)))) Average cpu time: 143.15819786499998
Line by line, this is what happends
Clojure weighs in at
Now for Scala. I didn't want to be presumptuous, so I requested the help of the Scala community in doing this benchmark. Scala is not as straight forward as Clojure in this regard, because it doesn't have macros. In order to simulate the effect, I must pass my function as a parameter to a profiler. In the current situation, it's not a big problem since we're passing 1 function, it would have been another story if we we're passing an entire body of code.
def time(n: Int, fun: () => Unit) = { val start = System.nanoTime (0 until n).toList.foreach((_: Int) => fun()) ((((System.nanoTime - start): Double) / (n: Double)) / 1000000.0) } val t = time(200, () => fact(5000)) println("Runtime: " + t + "ms")
Obviously, this is also very concise and as with Clojure, you get a lot of mileage on a few lines of code! Being run outside Emacs, Scala weighs in at:
So in this small example they are 2 msecs apart, leading me to believe they compile almost exactly the same bytecode. Although it's worth noting that Scalas 'Range' isn't Lazy where Clojure's sequence is, in theory that should have had a very bad effect on our performance, but it seems to be able to hold it's own.
Now you've seen a bit of Scala and I hope you're intrigued like I was when I first stumbled upon it. There are a few things which I don't like.
When I first entered #scala someone said "Martin broke the build again?". The next night when I logged on another said "Hmm, the build looks broken?" and this goes on and on. I suppose it's not criminally bad that a development build is broken, but it just doesn't happen with Clojure. Finally - Since nobody seems to be in regular contact with Martin Odersky, no immediate action was/could-be taken. (at the time of writing this, it just got fixed I think)
Secondly it was decided that ";" semicolons at the end of each line was not mandatory - I think because it's a unnecessary ceremony - but instead could be sprinkled throughout the code. This had led to several examples I've come across where people have put in abundance of code on one line, killing clarity:
var y = 5; val x = (y + 5) + 10; y += 2;
I realize though, that to bring this up as a complaint is almost a compliment.
I'm also not a fan of OOP and Scala (the scalable language) gives me a way out by also letting me write small scripts where there are no objects in the user-code.
Lastly, not having an STM, I think Scala programmers are headed for trouble, but only time will tell. The Actor model they looted from Erland is far from bad, but it doesn't suit my style and their concerns seem to be on performance where mine are on correctness - I think that's important when we're talking concurrency.
Finally, Clojure does lazy evaluation per default, Scala evaluates strictly. In practice this means that when I do this in Clojure
user> (range 1000) clojure.lang.LazySeq
A series of computations are lined up but are never performed - Nothing is computed! If I take out the first 5 items of that LazySeq then those 5 items gets computed, nothing else: That's lazy! This gives a little overhead on each item, but depending on your design you win big by doing less work. It's not Lazy like Haskell though, there difference is that our laziness rests on LazySeq, but thats a topic for another talk.
In scala its different:
scala> (1 until 1000) res1: Range = Range(1, 2, 3, 4, 5, 6, 7, 8, 9.......1000)
Everything is calculated. This doesn't divide the camps though, you can still do lazy evaluation in Scala, it's just not the default. (update: Range is in fact lazy)
Ok, so Scala gets a 3.5 star rating. I reached that conclusion because that crowd is generally very friendly and helpful. After spending a short time on #scala it seems that RSchulz, DRMcIver, jesnor, mapreduce, eikki and a few others are pillars of the community and willing to lend a hand when you need one. I asked theoretical and practical questions on both #clojure and #scala and generally got more well rounded answers in #clojure. I deducted 1 star from Scala because Martin Odensky doesn't attend it. I asked how people got in touch with him and I got comments like "He contacted me on Skype once", "He uses white rats as messengers" etc. He's not involved and interacting with the community in the same way Rich Hickey (author of Clojure) is and that really makes all the difference. The last 0.5 star I deducted was because I actually got really unfriendly messages from some of the members - not Common Lisp style evil - but getting there. In all the time I've spent on #clojure I remember 1 single instance where somebody dropped a rude childish remark and that person was immediately corrected by Rich Hickey - So the modesty and etiquette which Rich nurtures is not found on #scala in the same scale - and that's a real shame.
Finally, by implication, Martin Odersky can't take public debates live on IRC like Rich does when he's considering features/extensions. He does however have some activity on certain mailing lists.
Why does Clojure then get the full 5 stars? People are as friendly there as on #scala. I've often joked saying we have no FAQ we have Chouser (Chris Houser), but there's some truth to it. Almost every question gets answered, everyone gets help regardless of the level on which you ask. Secondly Rich Hickey attends daily, taking debates, giving advice, educating the people. I've even seen him do a public code-review on the Google group which was just an amazing piece of consultant assistance that we got for free. And lastly - Never ever, does the language/tone/etiquette drop below common decency and friendliness.
Scala is awesome, Clojure is awesome. Both are breaking new ground in Software Development. If you thrive on OOP and need Static Typing then Scala is for you. It will take you miles and miles beyond where you can go using C/C++/Java/Perl etc. It's a very welcomed addition on the scene of JVM languages.
Both will get you along way with concurrent programming, I can't say for now who will go the distance, but I give the STM the best odds. On the recommendation of a Scala community member, I'll consider doing a post only looking at Actors Vs STM.
If you're not keen on OOP and can administrate a large project without types and leverage the power of Lisp, Clojure is for you. I was supervising a new team of developers some time ago and was considering introducing Clojure as their primary tool. I described the scenario for Rich Hickey and asked for his advice, he said something like this "If you have a small and sharp team, you should consider it. If not, it's probably not for you". Lisp isn't for everybody and the sooner you reach that conclusion, the better.
In a business setting, what would I use? Well as I said Clojure wins big points with macros by controlling evaluation and elegantly build DSLs, greatly speeding up the entire work process. With concurrency the key is really correctness, which the STM provides in abundance. If you can harness the power of Lisp by all means do so! Clojure is the businessmans best friend. However! If your team is not able to adobt and fully appreciate Clojure in all it's aspects I would not hesitate a moment in recommending Scala.
/Lau
note: This article has a sequel | http://www.bestinclass.dk/index.clj/2009/09/scala-vs-clojure-lets-get-down-to-business.html | CC-MAIN-2014-15 | refinedweb | 2,874 | 70.43 |
DBIx::SQLEngine::Docs::Changes - Revision history for DBIx::SQLEngine.
2004-11-?? Released as DBIx-SQLEngine-0.93.
2004-11-29 Rearranged some documentation in top-level module's POD.
2004-11-29 Removed test config logic from Makefile to top of test.pl, and related updates of test.pl and test_cfg.pl.
2004-11-29 Fixed case-insensitive matching of "from" in sql_limit of several drivers; thanks to Michael Kroll for pointing out the issue.
2004-11-28 Released as DBIx-SQLEngine-0.92.
2004-11-21 Added incomplete implementation of RecordSet::NextPrev class.
2004-11-21 Moved table-specific Record code into new Record::Table class.
2004-11-21 Sketched out interface for a new Record::PKey object.
2004-11-16 Modified Record class factory to use new Class::MixinFactory distribution.
2004-11-16 Eliminated unnecessary extra level in Record namespace for mixins.
2004-11-16 Moved Record::Set to a new RecordSet namespace to allow new mixins.
2004-11-16 Fixed SQLite definition for sequential fields, thanks to Ron Savage.
2004-11-13 Adjusted Related.pod docs for Rosetta based on email from Darren Duncan.
2004-11-13 Fixed test.config.pl to not die when run for the first time.
2004-11-12 Released as DBIx-SQLEngine-0.91.
2004-11-12 Replaced compile-time @MIXINS in Record hierarchy with run-time NEXT method.
2004-11-12 Ran coverage tests and added test cases and documentation for missing items. We're now at over 90% for POD coverage, but still under 60% overall.
2004-11-12 Changed test_core/examplep.t to case-insensitive regex to compensate for case mangling on VMS file system, thanks to pitch from Peter (Stig) Edwards.
2004-11-09 Additional work on driver tests.
2004-11-09 Bumped up version number in preparation for a 1.0 release soon.
2004-11-08 Moved common behaviors of CSV and AnyData drivers to new shared trait named PerlDBLib. Added driver class for XBase.
2004-11-08 Additional work on test scripts.
2004-11-06 Added test.pl and reorganized test scripts for better driver testing.
2004-11-06 Reorganized ReadMe.pod and clipped credits from end of SQLEngine.pod. Other minor documentation cleanups.
2004-10-08 Fixed problem with define_named_connections and array reference reported via CPAN RT ticket 7464.
2004-06-06 Applied Driver::Oracle patch from Michael Kroll that should improve optimization of queries with rownum limits.
2004-04-19 Released as DBIx-SQLEngine-0.028.
2004-04-19 Fixed file missing from manifest which made 027 totally broken.
2004-04-19 Released as DBIx-SQLEngine-0.027.
2004-04-19 Moved Driver docs and Default code to a separate Driver module.
2004-04-18 Documentation tweaks, including labeling methods public or internal.
2004-04-16 Released as DBIx-SQLEngine-0.026.
2004-04-15 Added subclasses to Driver::Mysql for various versions.
2004-04-15 Added Driver::Sybase and Driver::Sybase::MSSQL.
2004-04-15 Added Driver::Trait::DatabaseFlavors, Driver::Trait::NoPlaceholders.
2004-04-15 Minor bits of documentation cleanup.
2004-04-14 Fixed missing import of clone_with_parameters for interpret_named_connection(); thanks to Ron Savage for the bug report.
2004-04-14 Documentation fixes in Record::Trait::Cache POD.
2004-04-14 Added new makefile targets for development convenience; the following are now all supported:
make everything - clean build with testsuite make again - rebuilds Makefile and blib make cleandist - clean build with manifest, docs, and dist make cleanmanifest - rebuilds MANIFEST make docs - runs pod2text on SQLEngine/Docs/*.pod files make compile - makes sure that all of the modules compile make testsuite - runs tests against multiple data sources make t/*.t - runs whichever test scripts you name make cover - runs testsuite under Devel::Cover with report
2004-04-14 Released as DBIx-SQLEngine-0.025.
2004-04-14 Fixed undocumented (and un-necessary) dependency on String::Escape in Record::Trait::Cache which was causing a test failure. Thanks to the folks running automated cpan smokers, which caught this a few hours after I posted it.
2004-04-14 Released as DBIx-SQLEngine-0.024.
2004-04-13 Fixed support for literal SQL in Schema::Table, Record::Trait::Cache.
2004-04-13 Added a few more methods to Record::Trait::Extras for compatibility with older DBO classes.
2004-04-11 Fixed a few problems in Record::Trait::Cache and Cache::TrivialCache. Added initial test script for record caching.
2004-04-10 Released as DBIx-SQLEngine-0.023.
2004-04-10 Added feature matrix to Docs::Related. Added Docs::Comparison.
2004-04-10 Fixed test crasher in visit_select for Driver::Trait::NoUnions.
2004-04-09 Released as DBIx-SQLEngine-0.022.
2004-04-09 Merged many methods from DBIx::DBO2::RecordSet into Record::Set.
2004-04-09 Added Record::Trait::Accessors and increased the minimum version of Class::MakeMethods to 1.006 to get the Autoload interface it uses.
2004-04-09 Added Record::Trait::Cached based on DBIx::DBO2::Record::Cached. Added cache packages DBIx::SQLEngine::Cache::TrivialCache and DBIx::SQLEngine::Cache::BasicCache.
2004-04-09 Added Record::Trait::Hooks based on DBIx::DBO2::Record and built a few tests.
2004-04-09 Initial working support for multiply-combinable Record class traits. Moved traits into nested namespaces Driver::Trait and Record::Trait.
2004-04-09 Revised synopsis and description to include more recent features.
2004-04-09 Added support for sql_select( columns => \%colnames_as_aliases ) and sql_select( tables => { "table1.column1" => "table2.column2" } ), both inspired by the interface described in the Sql::Simple documentation.
2004-04-08 Fixed tight loop detection (self-reference) in CloneWithParameters.
2004-04-06 Beginning support for multiply-combinable Record class traits.
2004-04-05 Released as DBIx-SQLEngine-0.021.
2004-04-05 Doc adjustments. Additional work on Record classes.
2004-04-05 Released as DBIx-SQLEngine-0.020.
2004-04-05 Added initial Row classes.
2004-04-04 Added named_connection interface.
2004-04-04 Migrated clone function to new Utility::CloneWithParams package. Generalized string substitution. Added more tests and documentation.
2004-04-04 Renamed Mixin::* packages to DriverTrait::*.
2004-04-04 Added "where" as a synonym for "criteria" in sql_* methods and changed documentation to use it instead. Added "distinct" support to sql_select.
2004-04-04 Released as DBIx-SQLEngine-0.019.
2004-04-03 Added initial support for constructing complex joins and Mixin::NoComplexJoins to provide emulation for the simplest inside joins.
2004-04-03 Added Mixin::NoUnions to provide emulation of unions where needed.
2004-04-02 Added initial support for selects with unions.
2004-03-30 Released as DBIx-SQLEngine-0.018.
2004-03-30 Fixed sequential type in Driver::SQLite to say "integer" not "int".
2004-03-30 Removed driver for DBD::File, which isn't intended to be used directly.
2004-03-30 Changed Criteria::HashGroup to asciibetically sort the hash keys, so that criteria are predictable for testing and better statement handle caching.
2004-03-26 Fixed Mixin::SeqTable's seq_fetch_current, which returned undef due to scope confusion with return inside an eval block.
2004-03-26 Minor cleanup to Schema::Table to improve documentation.
2004-03-26 Fixed Schema::Column's new_from_hash() and type() methods.
2004-03-26 Released as DBIx-SQLEngine-0.017.
2004-03-26 Fixed parens in Oracle's sql_limit, with apologies to Michael Kroll.
2004-03-25 Brough Schema::Table up to workable condition and added basic tests.
2004-03-25 Added simple last_query method to NullP for faster testing.
2004-03-23 Added fetch_select_rows() and visit_select_rows().
2004-03-23 Re-ordered a few chunks of code so that the POD pages are clearer.
2004-03-23 Released as DBIx-SQLEngine-0.016.
2004-03-23 Fixed files missing from manifest.
2004-03-22 Released as DBIx-SQLEngine-0.015.
2004-03-22 Added some database capability methods inspired by DBIx::Compat.
2004-03-22 Added create_ and drop_index methods inspired by DBIx::Renderer.
2004-03-22 Added create_ and drop_database methods inspired by DBIx::DataSource.
2004-03-22 Removed do_ prefix from create_ and drop_table methods, with aliases for backwards compatibility.
2004-03-22 Added initial interface for DBMS stored procedures.
2004-03-22 Added NullP subclass and null.t test with basic SQL generation and named query tests.
2004-03-22 Added named_query interface to support libraries of named queries.
2004-03-22 Added visit_sql_rows.
2004-03-22 Applied patch to Driver::Oracle's sql_limit() from Michael Kroll.
2004-03-22 Incorporated support for mixing explicit SQL and additional criteria by having sql_where() splice them together, based on a patch from Michael Kroll.
2004-03-14 Released as DBIx-SQLEngine-0.014.
2004-03-13 Fixed spacing problem for long column names in sql_create_columns, curtesy of a patch from Ron Savage.
2004-03-13 Ported schema classes from DBO2 to DBIx::SQLEngine::Schema namespace. These are not yet in use, but will be supported through additions to the SQLEngine interface in upcoming versions in a way that should not break any existing code.
2004-03-12 Released as DBIx-SQLEngine-0.013.
2004-03-12 Fixed another minor POD error.
2004-03-11 Released as DBIx-SQLEngine-0.012.
2004-03-11 Fixed several minor POD errors. Adjusted checking of wantarray to preserve its value across eval{} boudaries.
2004-03-10 Released as DBIx-SQLEngine-0.011.
2004-03-10 Moved driver-specific subclasses to a new Driver:: namespace and added documentation to each of them. The external public interface is identical, but this may break naughty code that checked ref() or isa(); however the fix is straightforward and this keeps our namespaces clearer. If your code was negatively impacted by this, let me know for future reference!
2004-02-24 Integrated some patches from Michael Kroll, including new Oracle subclass, and created a new Criteria::StringComparison class as an alternate way of addressing an issue raised in email.
2003-09-07 Released as DBIx-SQLEngine-0.010.
2003-02-02 Released as DBIx-SQLEngine-0.009.
2003-02-02 Merged the "Default" class used by AnyDBD into base class via stash aliasing, simplifying documentation.
2002-11-02 Released as DBIx-SQLEngine-0.008.
2002-11-02 Added basic subclass for MSSQL.
2002-11-02 Added "is null" support based on patches from Michael Kroll at University of Innsbruck.
2002-11-02 Added some introductory documentation to Criteria::Comparison.
2002-11-02 For compatibility with the CPAN installer and automated testing tools, the Makefile.PL now suggests that you set your environment's DBI_DSN, rather than prompting you to enter it interactively in test.pl.
2002-11-02 Refactored test script into several separate files.
2002-11-02 Added initial basic transaction support.
2002-05-24 Minor error-handling improvements.
2002-05-24 Collect and return the results from visit_* methods.
2002-05-24 Added DBIx::SQLEngine::Criteria::Not, contributed by Christian Glahn at University of Innsbruck.
2002-04-10 Added "fetch without execute" to the catch_query_exception handler for MySQL and Pg.
2002-03-23 Released as DBIx-SQLEngine-0.006.
2002-03-23 Added basic support for passing SQL functions and expressions to SQLEngine::Default and SQLEngine::Critera::Comparison in a way that allows them to be used directly, rather than being treated as literals and bound to placeholders.
2002-03-01 Fixed syntax errors in MySQL subclass. Fixed suspected Perl-version dependency (return from inside eval) in Default detect_any.
2002-03-01 Released as DBIx-SQLEngine-0.005.
2002-02-19 Simon: Added Pg catch_query_exception REDO for "out of range 0..-1". I don't know what this error means but it always seems to succeed when we retry.
2002-02-10 Simon: Modified format of error messages in Default.pm
2002-02-02 Eliminated complaint about empty subclauses in Criteria::Compound
2002-01-31 Extracted Default sql_* criteria handling into DBIx::SQLEngine::Criteria->auto* methods.
2002-01-31 Added catch_query_exception to Pg subclass for automatic reconnection.
2002-01-31 Created new Criteria::LiteralSQL package. Remove redundant parentheses around single-item list in Criteria::Compound.
2002-01-30 Released as DBIx-SQLEngine-0.004.
2002-01-30 Expanded Pg subclass. Fixed Default's type_info handling to not fall over when a type has multiple info hashes (needed for Pg). Adjusted error handling to call $sth->finish() if there's an exception after we prepare but before we complete retrieving the results.
2002-01-27 Filled out documentation, especially of the public interface in SQLEngine.pm. Added skeletal Pg subclass.
2002-01-25 Released as DBIx-SQLEngine-0.003
2002-01-21 Fixed AnyData sql_create_column_text_long_type.
2002-01-20 Fixed typo in sql_drop_table. Added detect_any and detect_table methods.
2002-01-15 Improvements to sql_create_table to support primary key declarations. Completed DBIx::SQLEngine::Mixin::SeqTable. Added DBIx::SQLEngine::CSV.
2002-01-14 Added initial version of DBIx::SQLEngine::Mixin::SeqTable.
2002-01-14 Fixed bug in test script; user and password were not used! (Bug report from Terrence Brannon.)
2002-01-14 Released as DBIx-SQLEngine-0.002
2002-01-13 Fixed logging scope errors. Filled in several mising chunks of documentation. Makefile.PL touchups for distribution.
2001-12-01 Separated from object-relational mapping functionality. Moved to DBIx::SQLEngine namespace.
2001-11-30 Began adjusting namespace for CPAN distribution. Fixed use of AnyDBD to not require non-standard patch.
2001-06-28 Use DBIx::AnyDBD for platform-specific issues. Moved SQL-generation code from Table to SQLEngine.
2001-06-28 Ported some Criteria classes from earlier code into the new SQLEngine class hierarchy.
2001-06-27 New version moved into separate DBIx::DBO2 namespace. Renamed Adaptor to SQLEngine. Flattened class hierarchy; we're only going to support DBI. Expunged old code. Switched to new Class::MakeMethods distribution.
2001-02-14 Reversed boolean sense of 'nullable' to 'required'.
2001-01-29 Fixed code to detect length and nullable column information.
2001-01-29 Added boolean required to store whether column is nullable.
2001-01-13 Added retry on "Lost connection to MySQL" errors
2000-12-22 Added new_with_contents constructor to Criteria::Compound.
2000-12-13 Substantial revisions to record set functionality
2000-11-29 Switched to use of Class::MethodMaker 2.0 features.
2000-03-31 Added SQL-string to execution failure.
2000-03-03 Adjusted clear_connection.
1999-08-18 Added MySQL tinyint column code
1999-07-27 Added use of DBO::Column package.
1999-04-21 Added explicit disconnect if existing connection's ping fails.
1999-04-05 Added reconnect if can't ping server behaviour; add'l type codes
1998-12-08 Cleanup of logging.
1998-12-08 Added parameter handling and on-demand class loading.
1998-11-01 Changed importing style for Class::MethodMaker.
1998-10-05 Revised. Rewrote POD.
1998-05-09 Moved connection behaviour from SQL Table into Adaptor classes.
1998-03-16 DBAdaptor package (and subpackages) renamed to DBO::Table.
1997-11-18 Simon: Updated to current practice.
1997-09-24 IntraNetics97 Version 1.00.000
1997-08-20 Eric: Extracted DBAdaptor class from prior code.
DBIx::SQLEngine::Docs::ReadMe | http://search.cpan.org/dist/DBIx-SQLEngine/SQLEngine/Docs/Changes.pod | CC-MAIN-2017-09 | refinedweb | 2,495 | 53.78 |
Subsets and Splits
No community queries yet
The top public SQL queries from the community will appear here once available.