In this section we will discuss the command-line Java I/O standard error stream.
System.out and System.err are both output streams, but System.err separates out the error output. System.err is provided by Java for printing error messages.
System.err
In System.err, err is a public static final field of type PrintStream that represents the standard error output stream, which is already open and ready to accept output data. This output can be displayed at the console, written to a file, or sent to any other output destination, depending on the user or host environment. Because System.err is a PrintStream, we can use the methods of PrintStream with it (as demonstrated in the example given below).
public static final PrintStream err
Example
Here I am giving a simple example which demonstrates how standard error can be printed. In this example I have created a Java class named JavaSystemErrExample.java. This class takes a file name as input at the console and then reads the contents of that file. If you enter the file name incorrectly, or if the file doesn't exist in the specified directory, an error message will be printed. To print this error message I have used standard error, i.e. System.err.
Source Code
JavaSystemErrExample.java
import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.io.File;
import java.io.FileInputStream;
import java.io.IOException;

public class JavaSystemErrExample {
    public static void main(String[] args) throws Exception {
        BufferedReader br = null;
        FileInputStream fis = null;
        String str;
        try {
            br = new BufferedReader(new InputStreamReader(System.in));
            System.out.print("Enter File Name: ");
            str = br.readLine();
            File file = new File(str);
            fis = new FileInputStream(file);
            int r;
            while ((r = fis.read()) != -1) {
                System.out.print((char) r);
            }
        } catch (IOException ioe) {
            System.err.println("File doesn't exist in the specified directory");
        }
    }
}
Output
Execute this example twice.
1. The first time, provide the name of a file that exists in the specified directory; the contents of the file will be read and displayed.
2. The second time, give a wrong file name; the error message will be displayed instead.
Posted on: December 22, 2012
What Are The Available DGH Properties For Search And Find Action?
DGH uses a custom property set, dghProp, to store data and state information for the Search/Find features.
- saved data: contains the dgData saved by DGH when it turns the data grid in search mode
- dg is in search mode: returns true if the data grid is in search mode
- text to find: contains the text to find inside a form or table
- find color: change the background text color of the found strings. Default is yellow
- columns query list: contains a list of the columns to search in. Separator between columns is carriage return (cr)
- action found rows count label: set the long id of the label to use to display the found lines
- action found rows text pattern: contains the text pattern to display inside the count label.
Use the #FoundRows# keyword to display the number of found rows
Use the #RowsNumber# keyword to display the number of rows inside the data grid
Examples
How to change the find color?
set the dghProp["find color"] of grp "datagrid 1" to "orange"
How to change the columns query list?
set the dghProp["columns query list"] of grp "datagrid 1" to "Price" & cr & "Product"
How to define a label for displaying found rows count?
set the dghProp["action found rows count label"] of grp "datagrid 1" to the long id of field 1
How to use patterns in the count label field?
set the dghProp["action found rows text pattern"] of grp "datagrid 1" to "Found lines: #FoundRows# on #RowsNumber# rows"
How to populate another datagrid with the saved data of the datagrid?
set the dgData of grp "datagrid 2" to the dghProp["saved data"] of grp "datagrid 1"
SpaCy Introduction for NLP | Linguistic Features Extraction
Getting Started with spaCy
This tutorial is a crisp and effective introduction to spaCy and the various NLP linguistic features it offers. We will perform several NLP-related tasks, such as tokenization, part-of-speech tagging, named entity recognition, dependency parsing, and visualization using displaCy.
spaCy is a free, open-source library for advanced Natural Language Processing (NLP) in Python. spaCy is designed specifically for production use and helps you build applications that process and understand large volumes of text. It's written in Cython and is designed to build information extraction or natural language understanding systems, or to pre-process text for deep learning.
Linguistic Features in spaCy
Processing raw text intelligently is difficult: most words are rare, and it’s common for words that look completely different to mean almost the same thing.
That's exactly what spaCy is designed to do: you put in raw text, and get back a Doc object that comes with a variety of linguistic annotations.
spaCy acts as a one-stop-shop for various tasks used in NLP projects, such as Tokenization, Lemmatisation, Part-of-speech(POS) tagging, Name entity recognition, Dependency parsing, Sentence Segmentation, Word-to-vector transformations, and other cleaning and normalization text methods.
Setup
!pip install -U spacy
!pip install -U spacy-lookups-data
!python -m spacy download en_core_web_sm
Once we've downloaded and installed a model, we will load it via spacy.load(). spaCy has different types of pretrained models. The default model for the English language is en_core_web_sm.
Here, the nlp object is a Language instance of the spaCy model. spacy.load() returns a Language object containing all the components and data needed to process text.
import spacy
nlp = spacy.load('en_core_web_sm')
Tokenization
Tokenization is the task of splitting a text into meaningful segments called tokens. The input to the tokenizer is a unicode text and the output is a Doc object.

A Doc is a sequence of Token objects. Each Doc consists of individual tokens, and we can iterate over them.
doc = nlp("Apple isn't looking at buyig U.K. startup for $1 billion")
for token in doc:
    print(token.text)
Apple
is
n't
looking
at
buyig
U.K.
startup
for
$
1
billion
Lemmatization
Lemmatization is the process of reducing a word to its base form, or lemma. We will use the same doc:
Apple isn't looking at buyig U.K. startup for $1 billion
for token in doc:
    print(token.text, token.lemma_)
Apple Apple
is be
n't not
looking look
at at
buyig buyig
U.K. U.K.
startup startup
for for
$ $
1 1
billion billion
Part-of-speech tagging
Part-of-speech tagging is the process of assigning a POS tag to each token depending on its usage in the sentence.
for token in doc:
    print(f'{token.text:{15}} {token.lemma_:{15}} {token.pos_:{10}} {token.is_stop}')
Apple Apple PROPN False
is be AUX True
n't not PART True
looking look VERB False
at at ADP True
buyig buyig NOUN False
U.K. U.K. PROPN False
startup startup NOUN False
for for ADP True
$ $ SYM False
1 1 NUM False
billion billion NUM False
Dependency Parsing
Dependency parsing is the process of extracting the dependency parse of a sentence, i.e. the grammatical relationships between its words.
Noun chunks are "base noun phrases" – flat phrases that have a noun as their head. To get the noun chunks in a document, simply iterate over Doc.noun_chunks.
for chunk in doc.noun_chunks:
    print(f'{chunk.text:{30}} {chunk.root.text:{15}} {chunk.root.dep_}')
Apple Apple nsubj
buyig U.K. startup startup pobj
Named Entity Recognition
Named Entity Recognition (NER) is the process of locating named entities in unstructured text and then classifying them into pre-defined categories, such as person names, organizations, locations, monetary values, percentages, time expressions, and so on.
It is used to populate tags for a set of documents in order to improve the keyword search. Named entities are available as the ents property of a Doc.
doc
Apple isn't looking at buyig U.K. startup for $1 billion
for ent in doc.ents:
    print(ent.text, ent.label_)
Apple ORG
U.K. GPE
$1 billion MONEY
Sentence Segmentation
Sentence segmentation is the process of locating the start and end of sentences in a given text. This allows you to divide a text into linguistically meaningful units. spaCy uses the dependency parse to determine sentence boundaries. In spaCy, the sents property is used to extract sentences.
doc
Apple isn't looking at buyig U.K. startup for $1 billion
for sent in doc.sents:
    print(sent)
Apple isn't looking at buyig U.K. startup for $1 billion
doc1 = nlp("Welcome to KGP Talkie. Thanks for watching. Please like and subscribe")
for sent in doc1.sents:
    print(sent)
Welcome to KGP Talkie.
Thanks for watching.
Please like and subscribe
doc1 = nlp("Welcome to.*.KGP Talkie.*.Thanks for watching")
for sent in doc1.sents:
    print(sent)
Welcome to.*.KGP Talkie.*.Thanks for watching
In the above example, our sentence segmentation process fails to detect the sentence boundaries because of the delimiters. In such cases we can write our own custom rules to detect sentence boundaries based on the delimiters.
Here's an example, where an ellipsis (...) is used as the delimiter.
def set_rule(doc):
    for token in doc[:-1]:
        if token.text == '...':
            doc[token.i + 1].is_sent_start = True
    return doc
nlp.add_pipe(set_rule, before = 'parser')
text = 'Welcome to KGP Talkie...Thanks...Like and Subscribe!'
doc = nlp(text)
for sent in doc.sents:
    print(sent)
Welcome to KGP Talkie...
Thanks...
Like and Subscribe!
for token in doc:
    print(token.text)
Welcome
to
KGP
Talkie
...
Thanks
...
Like
and
Subscribe
!
Visualization
spaCy comes with a built-in visualizer called displaCy. We can use it to visualize a dependency parse or named entities in a browser or a Jupyter notebook.
You can pass a Doc or a list of Doc objects to displaCy and run displacy.serve to run the web server, or displacy.render to generate the raw markup.
from spacy import displacy
doc
Welcome to KGP Talkie...Thanks...Like and Subscribe!
Visualizing the dependency parse
The dependency visualizer, dep, shows part-of-speech tags and syntactic dependencies.
displacy.render(doc, style='dep')
The argument options lets you specify a dictionary of settings to customize the layout.
displacy.render(doc, style='dep', options={'compact':True, 'distance': 100})
Visualizing the entity recognizer
The entity visualizer, ent, highlights named entities and their labels in a text.
doc = nlp("Apple isn't looking at buyig U.K. startup for $1 billion")
displacy.render(doc, style='ent')
Conclusion
spaCy is a modern, reliable NLP framework that quickly became the standard for doing NLP with Python. Its main advantages are: speed, accuracy, extensibility.
We have gained insights into linguistic annotations like tokenization, lemmatization, part-of-speech (POS) tagging, entity recognition, dependency parsing, sentence segmentation and visualization using displaCy.
Hi, I am in need of an explanation for the following:
I have created a program to take in a string (stored in a character array).
With my string, I must pass it into a function; the 2 functions I have created are stringLength and toUpperCase. (I know there are string functions to do these, it is just practice.)
The following code is what I have done for this; however, I am not sure that I am doing this correctly, as when I call the first function it will give me the length, but then crash.
Similarly, when I pass in the array to my second function, it will print out the string in upper case, but random characters will sometimes follow after, even though I have used the '\0' test in the for loop to stop it once it reaches this character.
What I am basically asking is: am I passing the array correctly with the use of pointers? I have just ended up confusing myself so much that I can't see a way around it.
Thanks for any help.
MAIN()
#include <iostream>
#include "string.cpp"
using namespace std;

main(){
    const int max_chars = 100;
    int length = 0;
    char* letters[max_chars + 1];
    String* string;
    string = new String();

    cout << "Enter string: ";
    cin.getline(*letters, max_chars + 1, '\n');

    string->stringLength(*letters);
    string->toUpperCase(*letters);
}
Class implementation file
// Class implementation file
#include <iostream>
#include "String.h"
using namespace std;

String::String(){
    length = 0;
    letters[length];
}

// function to find length of string
void String::stringLength(char* _letters){
    //letters[length];
    //int length = 0;
    for(length = 0; _letters[length] != '\0'; length++)
    {
        letters[length] = _letters[length];
    }
    cout << "Length of string is " << length << endl;
}

// Function to covert string to uppercase characters.
void String::toUpperCase(char* _letters){
    int length;
    // puts driver program array into function array
    for(length = 0; _letters[length] != '\0'; length++)
    {
        letters[length] = _letters[length];
    }
    for(int i = 0; i < length; i++)
    {
        if( (letters[i]>=97) && (letters[i]<=122) ) // checking if character is lowercase.
        {
            letters[i] -= 32; // convert to uppercase.
        }
    }
    cout << letters;
}
#ifndef STRING_H
#define STRING_H

class String
{
    private:
        int length;
        char letters[];
    public:
        String();
        void toUpperCase(char* _letters);
        void stringLength(char* _letters);
};

#endif
From: John Eddy (johneddy_at_[hidden])
Date: 2005-02-15 12:00:15
Hello Darren,
I very much appreciate your feedback and will answer your questions as
best I can.
Darren Cook wrote:
You can easily remove all logging code from your release build by not
defining BOOST_LOGGING_ON. If you do that, all the headers wind up
doing nothing with the exception of the macro header which causes all
the macros to expand to nothing.
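To make the compile-out idea concrete, here is a stripped-down sketch of the mechanism (the names LOGGING_ON, LOG, and g_sink are illustrative stand-ins, not the library's actual macros):

```cpp
#include <sstream>
#include <string>

// Stand-in for a real log destination.
static std::ostringstream g_sink;

// Comment this out to strip every LOG() statement at compile time.
#define LOGGING_ON

#ifdef LOGGING_ON
// Logging enabled: append the message to the sink.
#define LOG(msg) do { g_sink << (msg) << '\n'; } while (0)
#else
// Logging disabled: the macro expands to an empty statement and
// the argument expression is never evaluated.
#define LOG(msg) do { } while (0)
#endif

// Helper for inspecting what was logged.
inline std::string drain_log() {
    std::string s = g_sink.str();
    g_sink.str("");
    return s;
}
```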
I agree that the macro names are long and a bit confusing. They all
resolve to nothing if BOOST_LOGGING_ON is not defined with the exception
of the BOOST_LOGGING_IF_OFF macro which I'm not sure is useful at all
anyway. The difference between the various *LOG* macros is the ability
to determine whether or not an entry will get logged before the entry is
actually created. If that can be done (which it can using the macros)
the process of logging in the case where it is common for log entries to
be rejected can become much cheaper.
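In other words, the macros let you guard the construction of an entry behind a cheap check, along these lines (toy_logger, would_log, and TOY_LOG are invented names for illustration, not part of the library):

```cpp
#include <string>
#include <vector>

// Illustrative logger that can reject entries below a threshold level.
struct toy_logger {
    int threshold = 2;
    std::vector<std::string> entries;

    bool would_log(int level) const { return level >= threshold; }
    void log(int level, const std::string& entry) {
        if (would_log(level)) entries.push_back(entry);
    }
};

// The cheap pre-check: when the level is rejected, the entry
// expression is never evaluated, so no entry object is built.
#define TOY_LOG(lg, level, expr) \
    do { if ((lg).would_log(level)) (lg).log((level), (expr)); } while (0)
```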
If I continue to develop this as a possible boost library, I will of
course accept suggestions for renaming the macros and anything else for
that matter. I can see how something like BOOST_LOG() would be
simpler than BOOST_LOGGING_LOG() etc.
/*
* This example shows how to use the basic timestamp entry.
*/
#include <iostream>
#include <boost/logging/logs/ostream_log.hpp>
#include <boost/logging/entries/timestamp_entry.hpp>
#include <boost/logging/log_managers/basic_log_manager.hpp>
using namespace std;
using namespace boost::logging;
int main(int argc, char* argv[])
{
ostream_log<> alog(cerr);
basic_log_manager<ostream_log<> > alm(alog);
alm.log(timestamp_entry<>("Hello. Its ")
<< timestamp_entry<>::stamp
<< ". Have a nice day."
);
return 0;
}
If you wanted the timestamp to be the very first thing, simply don't
supply anything to the timestamp entry constructor. ie
timestamp_entry<>() instead of timestamp_entry<>("Hello...
> 3. Logging with file, line number and/or function name included.
I have not built the facilities for this yet but I certainly plan to do
the file and line number at least. It is a fairly simple matter of
creating the proper entry helpers and implementing some entry types that
use them.
I would like to note that anything "stream-outable" can be inserted to
any of the basic_entry derivative types. The only thing that you get by
using the helpers is that the entry will store the information (such as
a timestamp) separately from the rest of the text so that it can be
recalled. This is so that entries can be searched or otherwise
differentiated based on their properties. Something as simple as the
file and line numbers using the macros __FILE__ and __LINE__ could be
inserted into any basic_entry right now so long as you don't wish to
query for entries based on that information in your code. So if you are
just printing this stuff to a file or other ostream or whatever and then
forgetting about it (in code anyway) you could replace the above with:
alm.log(basic_entry<>("Hello. Its ")
<< boost::posix_time::microsec_clock::local_time()
<< " and we're in file " << __FILE__
<< " on line " << __LINE__
<< ". Have a nice day."
);
This would be a matter of creating a new log type. Currently I have
implemented an ostream_log, file_log, list_log, appending_log,
decorator_log, and null_log. These are all fairly simple and I see the
need for many more log types. The only requirement of a log is that it
have a method called log that accepts a single entry parameter. The
method should probably be templatized but of course, it needn't be
depending on the types of things you are logging. The log managers will
accept the log type as the first template argument and use the log
method when appropriate.
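For instance, a minimal custom log type meeting that one requirement might look like the following sketch (buffered_log is a hypothetical name, not one of the logs implemented in the library):

```cpp
#include <ostream>
#include <sstream>
#include <string>
#include <vector>

// Sketch of a custom log: the only requirement described above is a
// log method accepting a single entry parameter. Entries are
// buffered as strings and written out together on flush.
class buffered_log {
public:
    template <typename EntryT>
    void log(const EntryT& entry) {
        std::ostringstream os;
        os << entry;               // works for any "stream-outable" entry
        buffer_.push_back(os.str());
    }

    void flush(std::ostream& out) {
        for (std::size_t i = 0; i < buffer_.size(); ++i)
            out << buffer_[i] << '\n';
        buffer_.clear();
    }

    std::size_t size() const { return buffer_.size(); }

private:
    std::vector<std::string> buffer_;
};
```

A log manager could then accept such a type as its log template parameter, just as with the built-in logs.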
To do what you want, I think you should plan to use the appending log
with your custom buffered log and an appender (callback) to output to a
stream - or the decorator log with your custom log as the first and an
ostream log as the second. Then each entry as it comes in would be
written into your custom log and then out to the ostream.
The decorator is fairly simple to use. For example, here is a decorator
for writing to cerr and to a file called "run.log".
ostream_log<> olog(cerr);
file_log<> flog("run.log");
decorator_log<ostream_log<>, file_log<> > dlog(olog, flog);
basic_log_manager<decorator_log<ostream_log<>, file_log<> > > alm(dlog);
Log whatever to alm as you normally would and it will wind up on
standard error and in run.log. To actually do this right now I noticed
that you will have to comment out the code in the constructor bodies of
the file log. I am in the process of getting rid of the exception
handling and so what's there won't compile.
>
> Darren
Boost list run by bdawes at acm.org, gregod at cs.rpi.edu, cpdaniel at pacbell.net, john at johnmaddock.co.uk
For the most part, the Script Component Wizard succeeds in automating the process of creating a script component so that you can focus on the code needed to implement your component's logic, rather than on the code needed to implement basic "plumbing" so that the component can work properly. In a number of areas, however, WSC offers functionality that either requires some additional coding or that extend the functionality of VBScript in significant ways. These include handling events, using interface handlers, taking advantage of resources, and building object models.
9.4.1 Handling Events
VBScript itself provides no native support for firing or handling custom events. Its support for events is limited to the Initialize and Terminate events, which are fired when a new instance of a class defined by the Class...End Class construct is created or destroyed, respectively. (And, in fact, they're not real events: the scripting runtime simply calls the routines if they're present.) Support for any other events must be provided by the environment in which VBScript is running.
In the case of Windows Script Components, WSC requires that an event be declared using the <event> element. Its syntax is:

<event name="name" [dispid="dispid"]/>
where name defines the name of the event, and dispid is an optional attribute that assigns the event's dispatch ID. Ordinarily, WSC automatically provides a dispatch ID to identify an event. You might want to provide your own dispatch ID to map a custom event to a standard COM event, or to insure that dispatch IDs remain the same across different versions of your component.
Once the event is defined, you can fire it from your code. For this, you use the WSC fireEvent method. Its syntax is:
fireEvent eventName[,...]
where eventName is a string containing the name of the event to be fired. Multiple events can be fired by separating them from one another with a comma. The use of the fireEvent method is illustrated by the boldface line of code in Example 9-3.
Once the event is fired, it must also be handled by the client application using the event definition facilities provided by the client environment. Example 9-3, shown earlier in Section 9.3.2, illustrates how an event is handled in a WSH script. In the code, the ConnectObject method of the WScript object is invoked to indicate that the script should receive event notifications for the math object.
9.4.2 Using an Interface Handler: ASP
The <implements> element in a .wsc file allows you to define the interface handlers that are available to your script. The element's syntax is:

<implements type="handlerName" [id="internalName"] [assumed={true|false}]>
The element has the following attributes:
type
The name of the interface handler. In scrobj.dll WSC provides an ASP handler for Active Server Pages and a Behavior handler for DHTML. A third handler for COM automation is automatically referenced without an <implements> element if the <public> element is encountered in a .wsc file.
id
An optional attribute that defines the name by which the interface handler will be referenced in code. Since referenced interfaces are in the script's global namespace (that is, they do not have to be referenced through an interface object), id is typically used only to uniquely identify an object or member when there is a naming conflict between multiple interfaces.
assumed
An optional Boolean that determines whether the value of the internalName attribute is assumed in scripts, so that the referenced interface resides in the script's global namespace and does not have to be referenced through an object. By default, its value is true.
Ordinarily, once the interface handler is defined, interface classes and members can be referenced as if they were native to the component. In the case of ASP, for instance, an implements element like:

<implements type="ASP"/>
means that the ASP intrinsics are globally accessible to a WSC component. As a result, the number of items in the Contents collection of the Application object, for instance, can be retrieved with the following line of code, which is identical to the code that would be used within an Active Server Page itself:
Dim iCount
iCount = Application.Contents.Count
Example 9-4 shows a simple ASP component that displays information from the intrinsic ASP Request object. Although most of the code is straightforward, several features are worth noting. For instance, because the referenced ASP interface resides in the script's global namespace, the user agent string does not have to be retrieved through an interface object, as in:
ASP.Request.ServerVariables("Http_User_Agent")
but is instead accessed in Example 9-4 as:
Request.ServerVariables("Http_User_Agent")
Example 9-4. A simple component for ASP
Example 9-5 provides the HTML source for a page that requests the ASP page whose listing appears in Example 9-6.
Example 9-5. An HTML page
Using an ASP Component
Enter your name:
Example 9-6. An ASP page that uses a Windows Script Component
<%
Dim info
Set info = CreateObject("ASPInfo.WSC")
Response.Write "Your Browser: " & Info.Browser & " "
Response.Write "Server Name: " & info.ServerName & " "
Response.Write "Your IP Address: " & info.RemoteAddress & " "
Response.Write "Your Name: " & Server.HTMLEncode(info.Value("name")) & " "
%>
9.4.3 Using Resources
Typically, strings are handled by hardcoding their values throughout one or more scripts. This creates a maintenance nightmare when the strings need to be modified or localized. To deal with this problem, WSC offers the <resource> element, which allows a value to be associated with a resource identifier. The syntax of the resource element is:

<resource id="resourceID">value</resource>
resourceID must be a string that uniquely identifies the resource in the component; it is, in other words, a key value. value is the string or number that is associated with the resource identifier.
Example 9-7 illustrates one possible way to use resources. The component has a SayHello method that returns a string in one of four languages. The language name serves as the key or resource ID that provides access to the localized string. The user can then select his native language from a drop-down list box (see the HTML page in Example 9-8). An ASP page (see Example 9-9) instantiates the component, retrieves the user's name and language choice from the Request object's Form collection, and uses the language as the key to look up the localized version of the greeting.
Example 9-7. A component that uses resources
<resource id="English">Good day</resource>
<resource id="Croat">Dobar dan</resource>
<resource id="French">Bonjour</resource>
<resource id="German">Guten tag</resource>
Example 9-8. HTML page allowing the user to select a language
Using a Resource
Enter your name:
Your native language: English / French / Croat / German
Example 9-9. ASP page that uses the Greeting component
<%
Dim greet, lang, name
Set greet = CreateObject("Greeting.WSC")
lang = Request.Form.Item("language")
name = Request.Form.Item("name")
If Not name = "" Then
    Response.Write greet.SayHello(lang) & ", " & name
Else
    Response.Write "You have failed to provide us with your name."
End If
%>
9.4.4 Building an Object Model
Often when you work with your component, you don't want to instantiate just one object. Instead, you want to instantiate a parent object, which in turn builds a hierarchy of child objects.
To build an object model in this way with Windows Script Component, you can include multiple components in your .wsc file. This requires some modification to the basic .wsc file created by the Script Component Wizard:
You can then instantiate all but the parent or top-level component by calling the Windows Script Component's createComponent method. Its syntax is:
Set object = createComponent(componentID)
where object is the variable that will contain the object reference, and componentID is the name assigned to the component by the id attribute of the <component> element.
Example 9-10 illustrates the use of the createComponent method to instantiate child components. A parent Workgroup object contains a Users component, which in turn contains zero or more User components. When the workgrp component is instantiated, a users object is also automatically instantiated; it is accessible only through the workgrp object's Users property. When the users object's Add method is called, a user object is added to the array held by the users object.
Example 9-10. A three-component object model
Backstory
Nowadays, there are many text editors and IDEs available for developers, and it is often a hard decision to choose a particular one. Since most of them offer quite similar interfaces and functionality, it doesn't really matter which editor a beginner developer uses. Nevertheless, for an intermediate or advanced developer, choosing the right editor can give a significant performance boost.
There are many modern and powerful editors and IDEs, such as JetBrains IDEs, Visual Studio Code, Atom, etc. Nevertheless, I would like to concentrate on one of the relatively old text editors, which can be as powerful (or more powerful :) ) as the other text editors while giving you a better typing experience. If you haven't guessed it, I am talking about Vim: a text editor released in 1991 that is still very popular. Although Vim isn't very beginner-friendly and isn't as powerful out-of-the-box as other IDEs, with several plug-ins and some configuration it can give you better performance than your standard IDE.
In my previous post I talked about configuring Vim to compete with other IDEs. It was more C/C++ oriented, as at that time my main language of coding was C++. A year ago, I changed my workplace and Python became my main programming language. Since almost everyone at my workplace was using PyCharm for Python development, I decided to give it a shot. I have to admit, PyCharm is a really good IDE and I enjoyed its smart functionality. Nevertheless, I missed my typing experience in Vim. Installing a vim layout for PyCharm didn't really help either, so I decided to spend a day and configure my vim to have all the functionalities of PyCharm that I really needed.
Now let's go through some of the features I managed to bring to Vim that made me not miss the PyCharm IDE.
Note that the plugins mentioned here are not Python-specific, but since I have mostly tested them on Python, I cannot say for sure how they will work with other languages. If you have tested them, it would be interesting to hear about your experience in the comment section:
Configuration
Code Completion
In my previous post I talked about YouCompleteMe, which is an awesome, open-source plug-in that offers very good code suggestions for many languages. It was one of the best plug-ins I found while I was developing in C++; nevertheless, for Python I found a better alternative. Kite claims to use machine learning to offer useful code completion. Since Kite is closed-source, we can't really be sure whether they actually use machine learning or not, but I can guarantee that you will like its code suggestions.
One of the disadvantages of Kite is the fact that it is closed-source, and if you are a sworn open-source person, YouCompleteMe is still a pretty good option.
Error Detection
For syntax checking and error detection I use the ALE (Asynchronous Lint Engine) plug-in, which allows you to check your syntax while you type. It runs external linters asynchronously in the background and shows the results as you edit.
Furthermore, after some configuration, ALE can check whether your Python code is PEP 8 compliant and fix it if it is not.
Navigation
Fast and effective navigation across files is an essential feature for fast development. One of the well-known plug-ins for navigation in vim is NERDTree. Nevertheless, I found vim-vinegar to be a better alternative. It offers a cleaner interface and better shortcuts for navigation.
ctrlp is another useful plug-in for easy navigation in vim. It is a
Full path fuzzy file, buffer, mru, tag, ... finder for Vim.
- Written in pure Vimscript for MacVim, gVim and Vim 7.0+.
- Full support for Vim's regexp as search patterns.
- Built-in Most Recently Used (MRU) files monitoring.
- Built-in project's root finder.
- Open multiple files at once.
- Create new files and directories.
- Extensible.
tmux
Although tmux is not a vim plug-in, it really improves the experience of coding in vim. Not only can I ssh to my machine anytime I want from anywhere in the world and get my session and layout back, but it also takes my multitasking ability to a whole new level.
Other plugins?
As I mentioned in my previous post there is a very awesome website full of various vim plugins, called vimawesome.com. You can find many many more plugins there and make your vim much closer to an actual IDE.
You can comment about your vim configuration below. By the way this is my first blog post, so I am waiting for your positive criticism in the comment section.
Discussion (9)
Thanks for the nice article. Just a suggestion: I think most of this is available by default in SpaceVim, and for sure this is all available in Spacemacs with just the C and Python layers. In my opinion these are very good starter (and expert) distributions. Also in my opinion, despite a few little downsides, Spacemacs is a better vim than (Space)vim itself and has many upsides compared to vim (especially the easier hackability due to the describe functionality). Of course you might call this opinion subjective, but you cannot blame me for not mentioning it. Anyway, I recommend that anyone (beginner or expert) check these out... both are a breeze to install.
Thank you, very good note. I tried spacevim, but it was just too much, the vim was opening slowely, some parts were reacting really slow, i just brought back what I had. Maybe it makes sense to spend a little more time on it :)
Yo !
Like the setup, wish to know what debugger you're using and if you could write a lil' about it's setup.
Have you tested out LSP for python in vim? If so what are your thoughts?
I personally found pyls to be lacking autocomplete functionality but for C/C++ It proved better results than YCM.
Hey, thanks for the feedback!
Actually the debugging is one of the things I miss from PyCharm. I usually use debugger only when I feel that debugging with printing (logging in python) will not help me or will take more time (While coding in C++ I used debugger a lot, and had GDB integrated in vim with ConqueGDB Plugin). Sometimes when I really need a debugger I open PyCharm, debug and go back to vim :). Nevertheless, I am going to give vimpdb a try, will write about it if it proves itself useful.
No, I haven't tested LSP, my autocomplete path was this way:
Actually I will try LSP now, and see how it works for me!
Thanks for the response :)
I’m still a student so in the mean time it doesn’t bother me using kite even though it’s a lil’ unsafe. I didn’t try YCM with python, what are your thoughts about it compared to kite ?
As for the debugger, It’s a lil’ annoying to hit 2 keys instead of 1 compared to PyCharm using the standard pdb. Vimpdb seems a better way to integrate it with my workflow but based on the documentation, it won’t support python 3.0 +, only 2.4-2.7.
Please prove me wrong haha..
Nvim terminal is a bit buggy for me, so I’ll
test out ConqueGBD.. Demo looks good, hope you can customize it (color wise :p)
PS: Wish to find some more vim buddies as nobody in my college even heard of vim, nor wishes to learn about it unfortunately.. Didn’t find a way to contact you privately though.
Kite was pretty good for me, nevertheless, I have seen reviews that since it works locally now (or so they say :) ), it became worse. YCM is pretty good. The difference is that Kite uses ML to make suggestions. For example when you type
import numpy, it will suggest
as npas continuation. Not smth necessary but pretty cool :D
Actually, I have in mind to build smth like Kite but completely open-source (If I find time of course) :D I think LSTM will do the job :)
Ah Yeah, I missed that part about python 3. Guess I will not try it then :D Will search for alternatives, or stick to logging and occasionally PyCharming.
Keep me informed if you find something cool.
I used ConqueGBD really much when I was working on C++ projects. It is really good. Actually I believe there is a way to make GDB work for python as well. Will need some tweaking though. If I remember correctly (I used it last around 2 years ago) it picks your color scheme colors.
Hi,
Could you give your list of plugins or even create guide how to setup vim the way you did.
Looks awesome!
Hi, I am glad you liked my vim setup :D Yes, I have plans to write a blog post about, but I am not sure whether the time will allow me to do so in near future. In the meanwhile I will give you the list of plugins I use. I hope it will be helpful for you.
And also, don't forget about kite :D
Thanks a lot :).
Looking forward to your setup guide if you catch some time.
Cheers. | https://practicaldev-herokuapp-com.global.ssl.fastly.net/bezirganyan/editor-wars-vim-as-a-perfect-python-ide-19ne | CC-MAIN-2021-10 | refinedweb | 1,556 | 71.44 |
Command go
Use "go help [topic]" for more information about that topic.
Compile packages and dependencies
Usage:
go build [-o output] [-i] [build flags] [packages]
Build compiles the packages named by the import paths, along with their dependencies, but it does not install the results.
If the arguments to build are a list of .go files, build treats them as a list of source files specifying a single package.
When compiling a single main package, build writes the resulting executable to an output file named after the first source file ('go build ed.go rx.go' writes 'ed' or 'ed.exe') or the source code directory ('go build unix/sam' writes 'sam' or 'sam.exe'). The '.exe' suffix is added when writing a Windows executable.
When compiling multiple packages or a single non-main package, build compiles the packages but discards the resulting object, serving only as a check that the packages can be built.
When compiling packages, build ignores files that end in '_test.go'.
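As an illustration of the naming rule above, building a single-file main package (a hypothetical hello.go, assumed here for the sketch) writes an executable named after the source file:

```go
// hello.go: 'go build hello.go' writes 'hello' (or 'hello.exe' on Windows).
package main

import "fmt"

// greeting is split out only to keep the sketch easy to test.
func greeting() string {
	return "hello from go build"
}

func main() {
	fmt.Println(greeting())
}
```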
The -o flag, only allowed when compiling a single package, forces build to write the resulting executable or object to the named output file, instead of the default behavior described in the last two paragraphs.
The -i flag installs the packages that are dependencies of the target.
The build flags are shared by the build, clean, get, install, list, run, and test commands:
	-a
		force rebuilding of packages that are already up-to-date.
	-n
		print the commands but do not run them.
	-p n
		the number of programs, such as build commands or
		test binaries, that can be run in parallel.
		The default is the number of CPUs available.
	-race
		enable data race detection.
		Supported only on linux/amd64, freebsd/amd64, darwin/amd64 and windows/amd64.
	-msan
		enable interoperation with memory sanitizer.
		Supported only on linux/amd64,
		and only with Clang/LLVM as the host C compiler.
	-v
		print the names of packages as they are compiled.
	-work
		print the name of the temporary work directory and
		do not delete it when exiting.
	-x
		print the commands.
	-gcflags 'arg list'
		arguments to pass on each go tool compile invocation.
	-installsuffix suffix
		a suffix to use in the name of the package installation directory,
		in order to keep output separate from default builds.
		If using the -race flag, the install suffix is automatically set to race
		or, if set explicitly, has _race appended to it. Likewise for the -msan flag.
		Using a -buildmode option that requires non-default compile flags
		has a similar effect.
	-ldflags 'flag list'
		arguments to pass on each go tool link invocation.
	-linkshared
		link against shared libraries previously created with -buildmode=shared.
	-pkgdir dir
		install and load all packages from dir instead of the usual locations.
		For example, when building with a non-standard configuration,
		use -pkgdir to keep generated packages in a separate location.
	-tags 'tag list'
		a list of build tags to consider satisfied during the build.
		For more information about build tags, see the description of
		build constraints in the documentation for the go/build package.
	-toolexec 'cmd args'
		a program to use to invoke toolchain programs like vet and asm.
		For example, instead of running asm, the go command will run
		'cmd args /path/to/asm <arguments for asm>'.
The list flags accept a space-separated list of strings. To embed spaces in an element in the list, surround it with either single or double quotes.
For more about specifying packages, see 'go help packages'. For more about where packages and binaries are installed, run 'go help gopath'. For more about calling between Go and C/C++, run 'go help c'.
Note: Build adheres to certain conventions such as those described by 'go help gopath'. Not all projects can follow these conventions, however. Installations that have their own conventions or that use a separate software build system may choose to use lower-level invocations such as 'go tool compile' and 'go tool link' to avoid some of the overheads and design decisions of the build tool.
See also: go install, go get, go clean.
Remove object files
Usage:
go clean [-i] [-r] [-n] [-x] [build flags] [packages]

Clean removes object files from package source directories. The go command builds most objects in a temporary directory, so go clean is mainly concerned with object files left by other tools or by manual invocations of go build.

Specifically, clean removes the following files from each of the source directories corresponding to the import paths:

	_obj/            old object directory, left from Makefiles
	_test/           old test directory, left from Makefiles
	_testmain.go     old gotest file, left from Makefiles
	test.out         old test log, left from Makefiles
	build.out        old test log, left from Makefiles
	*.[568ao]        object files, left from Makefiles

	DIR(.exe)        from go build
	DIR.test(.exe)   from go test -c
	MAINFILE(.exe)   from go build MAINFILE.go
	*.so             from SWIG

In the list, DIR represents the final path element of the directory, and MAINFILE is the base name of any Go source file in the directory that is not included when building the package.
The -i flag causes clean to remove the corresponding installed archive or binary (what 'go install' would create).
The -n flag causes clean to print the remove commands it would execute, but not run them.
The -r flag causes clean to be applied recursively to all the dependencies of the packages named by the import paths.
The -x flag causes clean to print remove commands as it executes them.
For more about build flags, see 'go help build'.
For more about specifying packages, see 'go help packages'.
Show documentation for package or symbol
Usage:
go doc [-u] [-c] [package|[package.]symbol[.method]]
Doc prints the documentation comments associated with the item identified by its arguments (a package, const, func, type, var, or method), followed by a one-line summary of each of the first-level items "under" that item.

When run with one argument, the argument is treated as a Go-syntax-like representation of the item to be documented, which is schematically one of these:

	go doc <pkg>
	go doc <sym>[.<method>]
	go doc [<pkg>.]<sym>[.<method>]
	go doc [<pkg>.][<sym>.]<method>

When run with two arguments, the first must be a full package path (not just a suffix), and the second is a symbol or symbol and method; this is similar to the syntax accepted by godoc:

	go doc <pkg> <sym>[.<method>]

Flags:

	-c
		Respect case when matching symbols.
	-cmd
		Treat a command (package main) like a regular package.
		Otherwise package main's exported symbols are hidden
		when showing the package's top-level documentation.
	-u
		Show documentation for unexported as well as exported
		symbols and methods.
Print Go environment information
Usage:
go env [var ...]
Env prints Go environment information.
By default env prints information as a shell script (on Windows, a batch file). If one or more variable names are given as arguments, env prints the value of each named variable on its own line.
Start a bug report
Usage:
go bug
Bug opens the default browser and starts a new bug report. The report includes useful system information.
Run go tool fix on packages
Usage:
go fix [packages]
Fix runs the Go fix command on the packages named by the import paths.
For more about fix, see 'go doc cmd/fix'. For more about specifying packages, see 'go help packages'.
To run fix with specific options, run 'go tool fix'.
See also: go fmt, go vet.
Run gofmt on package sources
Usage:
go fmt [-n] [-x] [packages]

Fmt runs the command 'gofmt -l -w' on the packages named by the import paths. It prints the names of the files that are modified.

The -n flag prints commands that would be executed. The -x flag prints commands as they are executed.
To run gofmt with specific options, run gofmt itself.
See also: go fix, go vet.
Generate Go files by processing source
Usage:
go generate [-run regexp] [-n] [-v] [-x] [build flags] [file.go... | packages]
Generate runs commands described by directives within existing files. Those commands can run any process but the intent is to create or update Go source files.
Go generate is never run automatically by go build, go get, go test, and so on. It must be run explicitly.
Go generate scans the file for directives, which are lines of the form,
//go:generate command argument...
(note: no leading spaces and no space in "//go") where command is the generator to be run, corresponding to an executable file that can be run locally. It must either be in the shell path (gofmt), a fully qualified path (/usr/you/bin/mytool), or a command alias, described below.
Note that go generate does not parse the file, so lines that look like directives in comments or multiline strings will be treated as directives.
The arguments to the directive are space-separated tokens or double-quoted strings passed to the generator as individual arguments when it is run.
Quoted strings use Go syntax and are evaluated before execution; a quoted string appears as a single argument to the generator.
Go generate sets several variables when it runs the generator:
	$GOARCH
		The execution architecture (arm, amd64, etc.)
	$GOOS
		The execution operating system (linux, windows, etc.)
	$GOFILE
		The base name of the file.
	$GOLINE
		The line number of the directive in the source file.
	$GOPACKAGE
		The name of the package of the file containing the directive.
	$DOLLAR
		A dollar sign.
Other than variable substitution and quoted-string evaluation, no special processing such as "globbing" is performed on the command line.
As a last step before running the command, any invocations of any environment variables with alphanumeric names, such as $GOFILE or $HOME, are expanded throughout the command line. The syntax for variable expansion is $NAME on all operating systems. Due to the order of evaluation, variables are expanded even inside quoted strings. If the variable NAME is not set, $NAME expands to the empty string.
A directive of the form,
//go:generate -command xxx args...
specifies, for the remainder of this source file only, that the string xxx represents the command identified by the arguments. This can be used to create aliases or to handle multiword generators. For example,
//go:generate -command foo go tool foo
specifies that the command "foo" represents the generator "go tool foo".
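A minimal sketch of a source file carrying a directive (echo stands in here for a real generator, so the example runs anywhere; the directive is inert at compile time and does not affect 'go build'):

```go
// pill.go: 'go generate' runs the directive below; to the compiler
// it is an ordinary comment.
package main

import "fmt"

// $GOFILE expands to this file's name when go generate runs the command.
//go:generate echo regenerating $GOFILE

type Pill int

const (
	Placebo Pill = iota
	Aspirin
)

func main() {
	fmt.Println(int(Aspirin)) // prints 1
}
```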
Generate processes packages in the order given on the command line, one at a time. If the command line lists .go files, they are treated as a single package. Within a package, generate processes the source files in a package in file name order, one at a time. Within a source file, generate runs generators in the order they appear in the file, one at a time.
If any generator returns an error exit status, "go generate" skips all further processing for that package.
The generator is run in the package's source directory.
Go generate accepts one specific flag:
	-run=""
		if non-empty, specifies a regular expression to select
		directives whose full original source text (excluding
		any trailing spaces and final newline) matches the
		expression.
It also accepts the standard build flags including -v, -n, and -x. The -v flag prints the names of packages and files as they are processed. The -n flag prints commands that would be executed. The -x flag prints commands as they are executed.
For more about build flags, see 'go help build'.
For more about specifying packages, see 'go help packages'.
Download and install packages and dependencies
Usage:
go get [-d] [-f] [-fix] [-insecure] [-t] [-u] [build flags] [packages]
Get downloads the packages named by the import paths, along with their dependencies. It then installs the named packages, like 'go install'.
The -d flag instructs get to stop after downloading the packages; that is, it instructs get not to install the packages.
The -f flag, valid only when -u is set, forces get -u not to verify that each package has been checked out from the source control repository implied by its import path. This can be useful if the source is a local fork of the original.
The -fix flag instructs get to run the fix tool on the downloaded packages before resolving dependencies or building the code.
The -insecure flag permits fetching from repositories and resolving custom domains using insecure schemes such as HTTP. Use with caution.
The -t flag instructs get to also download the packages required to build the tests for the specified packages.
The -u flag instructs get to use the network to update the named packages and their dependencies. By default, get uses the network to check out missing packages but does not use it to look for updates to existing packages.
The -v flag enables verbose progress and debug output.
Get also accepts build flags to control the installation. See 'go help build'.
When checking out a new package, get creates the target directory GOPATH/src/<import-path>. If the GOPATH contains multiple entries, get uses the first one. For more details see: 'go help gopath'.
When checking out or updating a package, get looks for a branch or tag that matches the locally installed version of Go. The most important rule is that if the local installation is running version "go1", get searches for a branch or tag named "go1". If no such version exists it retrieves the most recent version of the package.
When go get checks out or updates a Git repository, it also updates any git submodules referenced by the repository.
Get never checks out or updates code stored in vendor directories.
For more about specifying packages, see 'go help packages'.
For more about how 'go get' finds source code to download, see 'go help importpath'.
See also: go build, go install, go clean.
Compile and install packages and dependencies
Usage:
go install [build flags] [packages]
Install compiles and installs the packages named by the import paths, along with their dependencies.
For more about the build flags, see 'go help build'. For more about specifying packages, see 'go help packages'.
See also: go build, go get, go clean.
List packages
Usage:
go list [-e] [-f format] [-json] [build flags] [packages]
List lists the packages named by the import paths, one per line.
The default output shows the package import path:
bytes encoding/json github.com/gorilla/mux golang.org/x/net/html
The -f flag specifies an alternate format for the list, using the syntax of package template. The default output is equivalent to -f '{{.ImportPath}}'. The struct being passed to the template is:
	type Package struct {
		Dir           string // directory containing package sources
		ImportPath    string // import path of package in dir
		ImportComment string // path in import comment on package statement
		Name          string // package name
		Doc           string // package documentation string
		Target        string // install path
		Shlib         string // the shared library that contains this package (only set when -linkshared)
		Goroot        bool   // is this package in the Go root?
		Standard      bool   // is this package part of the standard Go library?
		Stale         bool   // would 'go install' do anything for this package?
		StaleReason   string // explanation for Stale==true
		Root          string // Go root or Go path dir containing this package
		ConflictDir   string // this directory shadows Dir in $GOPATH
		BinaryOnly    bool   // binary-only package: cannot be recompiled from sources

		// Source files
		GoFiles        []string // .go source files (excluding CgoFiles, TestGoFiles, XTestGoFiles)
		CgoFiles       []string // .go source files that import "C"
		IgnoredGoFiles []string // .go source files ignored due to build constraints
		CFiles         []string // .c source files
		CXXFiles       []string // .cc, .cxx and .cpp source files
		MFiles         []string // .m (Objective-C) source files
		HFiles         []string // .h, .hh, .hpp and .hxx source files
		FFiles         []string // .f, .F, .for and .f90 Fortran source files
		SFiles         []string // .s source files
		SwigFiles      []string // .swig files
		SwigCXXFiles   []string // .swigcxx files
		SysoFiles      []string // .syso object files to add to archive

		// Dependency information
		Imports []string // import paths used by this package
		Deps    []string // all (recursively) imported dependencies

		// Error information
		Incomplete bool            // this package or a dependency has an error
		Error      *PackageError   // error loading package
		DepsErrors []*PackageError // errors loading dependencies

		// Test information
		TestGoFiles  []string // _test.go files in package
		TestImports  []string // imports from TestGoFiles
		XTestGoFiles []string // _test.go files outside package
		XTestImports []string // imports from XTestGoFiles
	}

The template function "join" calls strings.Join.
The template function "context" returns the build context, defined as:

	type Context struct {
		GOARCH        string   // target architecture
		GOOS          string   // target operating system
		GOROOT        string   // Go root
		GOPATH        string   // Go path
		CgoEnabled    bool     // whether cgo can be used
		UseAllFiles   bool     // use files regardless of +build lines, file names
		Compiler      string   // compiler to assume when computing target paths
		BuildTags     []string // build constraints to match in +build lines
		ReleaseTags   []string // releases the current release is compatible with
		InstallSuffix string   // suffix to use in the name of the install dir
	}
For more information about the meaning of these fields see the documentation for the go/build package's Context type.
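The -f argument uses the standard text/template syntax. As a rough illustration of how such a template renders against a struct (Pkg below is a simplified stand-in for the real Package type, with matching field names):

```go
package main

import (
	"bytes"
	"fmt"
	"strings"
	"text/template"
)

// Pkg is a simplified stand-in for the Package struct that
// 'go list' feeds to the -f template.
type Pkg struct {
	ImportPath string
	GoFiles    []string
}

// render executes a 'go list -f'-style template against p.
func render(format string, p Pkg) string {
	// 'go list' also provides a "join" template function.
	funcs := template.FuncMap{"join": strings.Join}
	t := template.Must(template.New("list").Funcs(funcs).Parse(format))
	var buf bytes.Buffer
	if err := t.Execute(&buf, p); err != nil {
		panic(err)
	}
	return buf.String()
}

func main() {
	p := Pkg{ImportPath: "example.com/demo", GoFiles: []string{"a.go", "b.go"}}
	fmt.Println(render(`{{.ImportPath}}: {{join .GoFiles ", "}}`, p))
	// prints "example.com/demo: a.go, b.go"
}
```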
The -json flag causes the package data to be printed in JSON format instead of using the template format.
The -e flag changes the handling of erroneous packages, those that cannot be found or are malformed. By default, the list command prints an error to standard error for each erroneous package and omits the packages from consideration during the usual printing. With the -e flag, the list command never prints errors to standard error and instead processes the erroneous packages with the usual printing. Erroneous packages will have a non-empty ImportPath and a non-nil Error field; other information may or may not be missing (zeroed).
For more about build flags, see 'go help build'.
For more about specifying packages, see 'go help packages'.
Compile and run Go program
Usage:
go run [build flags] [-exec xprog] gofiles... [arguments...]
Run compiles and runs the main package comprising the named Go source files. A Go source file is defined to be a file ending in a literal ".go" suffix.
By default, 'go run' runs the compiled binary directly: 'a.out arguments...'. If the -exec flag is given, 'go run' invokes the binary using xprog:
'xprog a.out arguments...'.
If the -exec flag is not given, GOOS or GOARCH is different from the system default, and a program named go_$GOOS_$GOARCH_exec can be found on the current search path, 'go run' invokes the binary using that program, for example 'go_nacl_386_exec a.out arguments...'. This allows execution of cross-compiled programs when a simulator or other execution method is available.
For more about build flags, see 'go help build'.
See also: go build.
Test packages
Usage:
go test [build/test flags] [packages] [build/test flags & test binary flags]
'Go test' automates testing the packages named by the import paths. It prints a summary of the test results in the format:
	ok   archive/tar   0.011s
	FAIL archive/zip   0.022s
	ok   compress/gzip 0.033s
	...
followed by detailed output for each failed package.
'Go test' recompiles each package along with any files with names matching the file pattern "*_test.go". Files whose names begin with "_" (including "_test.go") or "." are ignored. These additional files can contain test functions, benchmark functions, and example functions. See 'go help testfunc' for more. Each listed package causes the execution of a separate test binary.
Test files that declare a package with the suffix "_test" will be compiled as a separate package, and then linked and run with the main test binary.
The go tool will ignore a directory named "testdata", making it available to hold ancillary data needed by the tests.
By default, go test needs no arguments. It compiles and tests the package with source in the current directory, including tests, and runs the tests.
The package is built in a temporary directory so it does not interfere with the non-test installation.
In addition to the build flags, the flags handled by 'go test' itself are:
	-c
		Compile the test binary to pkg.test but do not run it
		(where pkg is the last element of the package's import path).
		The file name can be changed with the -o flag.
	-exec xprog
		Run the test binary using xprog. The behavior is the same as
		in 'go run'. See 'go help run' for details.
	-i
		Install packages that are dependencies of the test.
		Do not run the test.
	-o file
		Compile the test binary to the named file.
		The test still runs (unless -c or -i is specified).
The test binary also accepts flags that control execution of the test; these flags are also accessible by 'go test'. See 'go help testflag' for details.
For more about build flags, see 'go help build'. For more about specifying packages, see 'go help packages'.
See also: go build, go vet.
Run specified go tool
Usage:
go tool [-n] command [args...]
Tool runs the go tool command identified by the arguments. With no arguments it prints the list of known tools.
The -n flag causes tool to print the command that would be executed but not execute it.
For more about each tool command, see 'go tool command -h'.
Print Go version
Usage:
go version
Version prints the Go version, as reported by runtime.Version.
Run go tool vet on packages
Usage:
go vet [-n] [-x] [build flags] [packages]

Vet runs the Go vet command on the packages named by the import paths.

For more about vet, see 'go doc cmd/vet'. For more about specifying packages, see 'go help packages'.

To run vet with specific options, run 'go tool vet'.

The -n flag prints commands that would be executed. The -x flag prints commands as they are executed.

For more about build flags, see 'go help build'.

See also: go fmt, go fix.
Calling between Go and C
There are two different ways to call between Go and C/C++ code.
The first is the cgo tool, which is part of the Go distribution. For information on how to use it see the cgo documentation (go doc cmd/cgo).
The second is the SWIG program, which is a general tool for interfacing between languages. For information on SWIG see http://swig.org/. When running go build, any file with a .swig extension will be passed to SWIG. Any file with a .swigcxx extension will be passed to SWIG with the -c++ option.
When either cgo or SWIG is used, go build will pass any .c, .m, .s, or .S files to the C compiler, and any .cc, .cpp, .cxx files to the C++ compiler. The CC or CXX environment variables may be set to determine the C or C++ compiler, respectively, to use.
Description of build modes
The 'go build' and 'go install' commands take a -buildmode argument which indicates which kind of object file is to be built. Currently supported values are:
	-buildmode=archive
		Build the listed non-main packages into .a files. Packages named
		main are ignored.

	-buildmode=c-archive
		Build the listed main package, plus all packages it imports,
		into a C archive file. The only callable symbols will be those
		functions exported using a cgo //export comment. Requires
		exactly one main package to be listed.

	-buildmode=c-shared
		Build the listed main packages, plus all packages that they
		import, into C shared libraries. The only callable symbols will
		be those functions exported using a cgo //export comment.
		Non-main packages are ignored.

	-buildmode=default
		Listed main packages are built into executables and listed
		non-main packages are built into .a files (the default
		behavior).

	-buildmode=shared
		Combine all the listed non-main packages into a single shared
		library that will be used when building with the -linkshared
		option. Packages named main are ignored.

	-buildmode=exe
		Build the listed main packages and everything they import into
		executables. Packages not named main are ignored.

	-buildmode=pie
		Build the listed main packages and everything they import into
		position independent executables (PIE). Packages not named main
		are ignored.

	-buildmode=plugin
		Build the listed main packages, plus all packages that they
		import, into a Go plugin. Packages not named main are ignored.
File types
The go command examines the contents of a restricted set of files in each directory. It identifies which files to examine based on the extension of the file name. These extensions are:
	.go
		Go source files.
	.c, .h
		C source files. If the package uses cgo or SWIG, these will be
		compiled with the OS-native compiler (typically gcc); otherwise
		they will trigger an error.
	.cc, .cpp, .cxx, .hh, .hpp, .hxx
		C++ source files. Only useful with cgo or SWIG, and always
		compiled with the OS-native compiler.
	.m
		Objective-C source files. Only useful with cgo, and always
		compiled with the OS-native compiler.
	.s, .S
		Assembler source files. If the package uses cgo or SWIG, these
		will be assembled with the OS-native assembler (typically gcc
		(sic)); otherwise they will be assembled with the Go assembler.
	.swig, .swigcxx
		SWIG definition files.
	.syso
		System object files.
Files of each of these types except .syso may contain build constraints, but the go command stops scanning for build constraints at the first item in the file that is not a blank line or //-style line comment. See the go/build package documentation for more details.
GOPATH environment variable
The Go path is used to resolve import statements. It is implemented by and documented in the go/build package.
The GOPATH environment variable lists places to look for Go code. On Unix, the value is a colon-separated string. On Windows, the value is a semicolon-separated string. On Plan 9, the value is a list.
If the environment variable is unset, GOPATH defaults to a subdirectory named "go" in the user's home directory ($HOME/go on Unix, %USERPROFILE%\go on Windows), unless that directory holds a Go distribution. Run "go env GOPATH" to see the current GOPATH.
See https://golang.org/wiki/SettingGOPATH to set a custom GOPATH.
Each directory listed in GOPATH must have a prescribed structure:

The src directory holds source code. The path below src determines the import path or executable name.

The pkg directory holds installed package objects. As in the Go tree, each target operating system and architecture pair has its own subdirectory of pkg (pkg/GOOS_GOARCH).

If DIR is a directory listed in the GOPATH, a package with source in DIR/src/foo/bar can be imported as "foo/bar" and has its compiled form installed to "DIR/pkg/GOOS_GOARCH/foo/bar.a".
The bin directory holds compiled commands. Each command is named for its source directory, but only the final element, not the entire path. That is, the command with source in DIR/src/foo/quux is installed into DIR/bin/quux, not DIR/bin/foo/quux. The "foo/" prefix is stripped so that you can add DIR/bin to your PATH to get at the installed commands. If the GOBIN environment variable is set, commands are installed to the directory it names instead of DIR/bin. GOBIN must be an absolute path.
Here's an example directory layout:
	GOPATH=/home/user/go

	/home/user/go/
	    src/
	        foo/
	            bar/               (go code in package bar)
	                x.go
	            quux/              (go code in package main)
	                y.go
	    bin/
	        quux                   (installed command)
	    pkg/
	        linux_amd64/
	            foo/
	                bar.a          (installed package object)
Go searches each directory listed in GOPATH to find source code, but new packages are always downloaded into the first directory in the list.
See https://golang.org/doc/code.html for an example.
Internal Directories
Code in or below a directory named "internal" is importable only by code in the directory tree rooted at the parent of "internal". Here's an extended version of the directory layout above:
	/home/user/go/
	    src/
	        crash/
	            bang/              (go code in package bang)
	                b.go
	        foo/                   (go code in package foo)
	            f.go
	            bar/               (go code in package bar)
	                x.go
	            internal/
	                baz/           (go code in package baz)
	                    z.go
	            quux/              (go code in package main)
	                y.go
The code in z.go is imported as "foo/internal/baz", but that import statement can only appear in source files in the subtree rooted at foo. The source files foo/f.go, foo/bar/x.go, and foo/quux/y.go can all import "foo/internal/baz", but the source file crash/bang/b.go cannot.
See https://golang.org/s/go14internal for details.
Vendor Directories
Go 1.6 includes support for using local copies of external dependencies to satisfy imports of those dependencies, often referred to as vendoring.
Code below a directory named "vendor" is importable only by code in the directory tree rooted at the parent of "vendor", and only using an import path that omits the prefix up to and including the vendor element.
Here's the example from the previous section, but with the "internal" directory renamed to "vendor" and a new foo/vendor/crash/bang directory added:
	/home/user/go/
	    src/
	        crash/
	            bang/              (go code in package bang)
	                b.go
	        foo/                   (go code in package foo)
	            f.go
	            bar/               (go code in package bar)
	                x.go
	            vendor/
	                crash/
	                    bang/      (go code in package bang)
	                        b.go
	                baz/           (go code in package baz)
	                    z.go
	            quux/              (go code in package main)
	                y.go
The same visibility rules apply as for internal, but the code in z.go is imported as "baz", not as "foo/vendor/baz".
Code in vendor directories deeper in the source tree shadows code in higher directories. Within the subtree rooted at foo, an import of "crash/bang" resolves to "foo/vendor/crash/bang", not the top-level "crash/bang".
Code in vendor directories is not subject to import path checking (see 'go help importpath').
When 'go get' checks out or updates a git repository, it now also updates submodules.
Vendor directories do not affect the placement of new repositories being checked out for the first time by 'go get': those are always placed in the main GOPATH, never in a vendor subtree.
See https://golang.org/s/go15vendor for details.
Environment variables
The go command, and the tools it invokes, examine a few different environment variables. For many of these, you can see the default value on your system by running 'go env NAME', where NAME is the name of the variable.
General-purpose environment variables:
	GCCGO
		The gccgo command to run for 'go build -compiler=gccgo'.
	GOARCH
		The architecture, or processor, for which to compile code.
		Examples are amd64, 386, arm, ppc64.
	GOBIN
		The directory where 'go install' will install a command.
	GOOS
		The operating system for which to compile code.
		Examples are linux, darwin, windows, netbsd.
	GOPATH
		For more details see: 'go help gopath'.
	GORACE
		Options for the race detector.
		See https://golang.org/doc/articles/race_detector.html.
	GOROOT
		The root of the go tree.
Environment variables for use with cgo:
	CC
		The command to use to compile C code.
	CGO_ENABLED
		Whether the cgo command is supported. Either 0 or 1.
	CGO_CFLAGS
		Flags that cgo will pass to the compiler when compiling C code.
	CGO_CPPFLAGS
		Flags that cgo will pass to the compiler when compiling C or C++ code.
	CGO_CXXFLAGS
		Flags that cgo will pass to the compiler when compiling C++ code.
	CGO_FFLAGS
		Flags that cgo will pass to the compiler when compiling Fortran code.
	CGO_LDFLAGS
		Flags that cgo will pass to the compiler when linking.
	CXX
		The command to use to compile C++ code.
	PKG_CONFIG
		Path to pkg-config tool.
Architecture-specific environment variables:
	GOARM
		For GOARCH=arm, the ARM architecture for which to compile.
		Valid values are 5, 6, 7.
	GO386
		For GOARCH=386, the floating point instruction set.
		Valid values are 387, sse2.
Special-purpose environment variables:

	GOROOT_FINAL
		The root of the installed Go tree, when it is installed
		in a location other than where it is built. File names
		in stack traces are rewritten from GOROOT to GOROOT_FINAL.
	GO_EXTLINK_ENABLED
		Whether the linker should use external linking mode
		when using -linkmode=auto with code that uses cgo.
		Set to 0 to disable external linking mode, 1 to enable it.
	GIT_ALLOW_PROTOCOL
		Defined by Git. A colon-separated list of schemes that are allowed to be used
		with git fetch/clone. If set, any scheme not explicitly mentioned will be
		considered insecure by 'go get'.
Import path syntax
An import path (see 'go help packages') denotes a package stored in the local file system. In general, an import path denotes either a standard package (such as "unicode/utf8") or a package found in one of the work spaces (For more details see: 'go help gopath').
Relative import paths
An import path beginning with ./ or ../ is called a relative path. The toolchain supports relative import paths as a shortcut in two ways.
First, a relative path can be used as a shorthand on the command line. If you are working in the directory containing the code imported as "unicode" and want to run the tests for "unicode/utf8", you can type "go test ./utf8" instead of needing to specify the full path. Similarly, in the reverse situation, "go test .." will test "unicode" from the "unicode/utf8" directory. Relative patterns are also allowed, like "go test ./..." to test all subdirectories. See 'go help packages' for details on the pattern syntax.
Second, if you are compiling a Go program not in a work space, you can use a relative path in an import statement in that program to refer to nearby code also not in a work space. This makes it easy to experiment with small multipackage programs outside of the usual work spaces, but such programs cannot be installed with "go install" (there is no work space in which to install them), so they are rebuilt from scratch each time they are built. To avoid ambiguity, Go programs cannot use relative import paths within a work space.
Remote import paths
Certain import paths also describe how to obtain the source code for the package using a revision control system.
A few common code hosting sites have special syntax:
Bitbucket (Git, Mercurial) import "bitbucket.org/user/project" import "bitbucket.org/user/project/sub/directory" GitHub (Git) import "github.com/user/project" import "github.com/user/project/sub/directory" Launchpad (Bazaar) import "launchpad.net/project" import "launchpad.net/project/series" import "launchpad.net/project/series/sub/directory" import "launchpad.net/~user/project/branch" import "launchpad.net/~user/project/branch/sub/directory" IBM DevOps Services (Git) import "hub.jazz.net/git/user/project" import "hub.jazz.net/git/user/project.org/repo or repo.git.
When a version control system supports multiple protocols, each is tried in turn when downloading. For example, a Git download tries https://, then git+ssh://.
By default, downloads are restricted to known secure protocols (e.g. https, ssh). To override this setting for Git downloads, the GIT_ALLOW_PROTOCOL environment variable can be set (For more details see: 'go help environment'). meta tag should appear as early in the file as possible. In particular, it should appear before any raw JavaScript or CSS, to avoid confusing the go command's restricted parser.
The vcs is one of "git", "hg", "svn", etc,
The repo-root is the root of the version control system containing a scheme and not containing a .vcs qualifier.
For example,
import "example.org/pkg/foo"
will result in the following requests: (preferred) (fallback, only with -insecure) (For more details see: 'go help gopath').
The go command attempts to download the version of the package appropriate for the Go release being used. Run 'go help get' for more.
Import path checking
When the custom import path feature described above redirects to a known code hosting site, each of the resulting packages has two possible import paths, using the custom domain or the known hosting site.
A package statement is said to have an "import comment" if it is immediately followed (before the next newline) by a comment of one of these two forms:
package math // import "path" package math /* import "path" */
The go command will refuse to install a package with an import comment unless it is being referred to by that import path. In this way, import comments let package authors make sure the custom import path is used and not a direct path to the underlying code hosting site.
Import path checking is disabled for code found within vendor trees. This makes it possible to copy code into alternate locations in vendor trees without needing to update import comments.
See for details.
Description of package lists (For more details see: 'go help gopath').
If no import paths are given, the action applies to the package in the current directory.
There are four reserved names for paths that should not be used for packages to be built with the go tool:
- "main" denotes the top-level package in a stand-alone executable.
- "all" expands to all package directories found in all the GOPATH trees. For example, 'go list all' lists all the packages on the local system.
- "std" is like all but expands to just the packages in the standard Go library.
- "cmd" expands to the Go repository's commands and their internal libraries.
Import paths beginning with "cmd/" only match source code in the Go repository. importpath' 'github.com/user/repo'.
Packages in a program need not have unique package names, but there are two reserved package names with special meaning. The name main indicates a command, not a library. Commands are built into binaries and cannot be imported. The name documentation indicates documentation for a non-Go program in the directory. Files in package documentation are ignored by the go command.
As a special case, if the package list is a list of .go files from a single directory, the command is applied to a single synthesized package made up of exactly those files, ignoring any build constraints in those files and ignoring any other files in the directory.
Directory and file names that begin with "." or "_" are ignored by the go tool, as are directories named "testdata".
Description of testing flags
The 'go test' command takes both flags that apply to 'go test' itself and flags that apply to the resulting test binary.
Several of the flags control profiling and write an execution profile suitable for "go tool pprof"; run "go tool pprof -h" for more information. The --alloc_space, --alloc_objects, and --show_bytes options of pprof control how the information is presented.
The following flags are recognized by the 'go test' command and control the execution of any test:
. -covermode set,count,atomic Set the mode for coverage analysis for the package[s] being tested. The default is "set" unless -race is enabled, in which case it is "atomic". The values: set: bool: does this statement run? count: int: how many times does this statement run? atomic: int: count, but correct in multithreaded tests; significantly more expensive. Sets -cover. -coverpkg). -v Verbose output: log all tests as they are run. Also print all text from Log and Logf calls even if the test succeeds.
The following flags are also recognized by 'go test' and can be used to profile the tests during execution:
-benchmem Print memory allocation statistics for benchmarks. -blockprofile block.out Write a goroutine blocking profile to the specified file when all tests are complete. Writes test binary as -c would. -blockprofilerate n Control the detail provided in goroutine blocking profiles by calling runtime.SetBlockProfileRate with n. See 'go doc runtime.SetBlockProfileRate'. The profiler aims to sample, on average, one blocking event every n nanoseconds the program spends blocked. By default, if -test.blockprofile is set without this flag, all blocking events are recorded, equivalent to -test.blockprofilerate=1. -coverprofile cover.out Write a coverage profile to the file after all tests have passed. Sets -cover. -cpuprofile cpu.out Write a CPU profile to the specified file before exiting. runtime.MemProfileRate'. To profile all memory allocations, use -test.memprofilerate=1 and pass --alloc_space flag to the pprof tool. .
Description of testing functions
The 'go test' command expects to find test, benchmark, and example functions in the "*_test.go" files corresponding to the package under test.
A test function is one named TestXXX (where XXX is any alphanumeric string not starting with a lower case letter) and should have the signature,
func TestXXX(t *testing.T) { ... }
A benchmark function is one named BenchmarkXXX and should have the signature,
func BenchmarkXXX(b *testing.B) { ... }
An example function is similar to a test function but, instead of using *testing.T to report success or failure, prints output to os.Stdout. If the last comment in the function starts with "Output:" then the output is compared exactly against the comment (see examples below). If the last comment begins with "Unordered output:" then the output is compared to the comment, however the order of the lines is ignored. An example with no such comment is compiled but not executed. An example with no text after "Output:" is compiled, executed, and expected to produce no output.
Godoc displays the body of ExampleXXX to demonstrate the use of the function, constant, or variable XXX. An example of a method M with receiver type T or *T is named ExampleT_M. There may be multiple examples for a given function, constant, or variable, distinguished by a trailing _xxx, where xxx is a suffix not beginning with an upper case letter.
Here is an example of an example:
func ExamplePrintln() { Println("The output of\nthis example.") // Output: The output of // this example. }
Here is another example where the ordering of the output is ignored:
func ExamplePerm() { for _, value := range Perm(4) { fmt.Println(value) } // Unordered output: 4 // 2 // 1 // 3 // 0 }
The entire test file is presented as the example when it contains a single example function, at least one other function, type, variable, or constant declaration, and no test or benchmark functions.
See the documentation of the testing package for more information. | http://docs.activestate.com/activego/1.8/pkg/cmd/go/ | CC-MAIN-2018-47 | refinedweb | 6,305 | 66.03 |
I was reading a thread over at pylons-discuss about worker threads in pylons. Worker threads are useful if you need to execute a long running task, but want to return to the user immediately. You could run a new thread per request, but if you have many requests, it is probably better to queue things up.
To do this, you start a thread and use python’s Queue for managing the tasks. Here is a very basic implementation:
config/environment.py:
# PYLONS IMPORTS from myapp.lib.myworker import start_myworker def load_environment(global_conf, app_conf): """Configure the Pylons environment via the ``pylons.config`` object """ # PYLONS STUFF GOES HERE # start worker start_myworker()
lib/myworker.py:
import Queue import threading worker_q = Queue.Queue() class MyWorkerThread(threading.Thread): def run(self): print 'Worker thread is running.' while True: msg = worker_q.get() try: # do a long running task... print 'We got %s, do something with it!' % (msg) except Exception, e: print 'Unable to process in worker thread: ' + str(e) worker_q.task_done() def start_myworker(): worker = MyWorkerThread() worker.start()
And in your controller….
from myapp.lib.myworker import worker_q class WorkerController(BaseController): def do_task(self): worker_q.put({'id': generate_some_unique_id(), 'msg': 'your data goes here'}) | https://www.chrismoos.com/2009/03/04/pylons-worker-threads/ | CC-MAIN-2021-39 | refinedweb | 197 | 53.78 |
On Thu, Nov 16, 2000 at 09:24:32AM +0100, Przemys?aw G. Gawro?ski wrote: > but I can also access the variable i what is the difference between them > ( i and j ) and why the variable i isn't listed ??? Because i lives in the namespace of class A, and j lives in that of instance v. If Python cannot find the name in the instance-namespace, it looks for it in the namespace of the class. Very helpfull, because you can keep the common data in the class-space, and the data that are specific for a given instance, in the instance-space. egbert -- Egbert Bouwman - Keizersgracht 197 II - 1016 DS Amsterdam - 020 6257991 ======================================================================== | https://mail.python.org/pipermail/python-list/2000-November/041023.html | CC-MAIN-2017-17 | refinedweb | 116 | 70.13 |
Abstract::
In general, linear extrapolation of a time series is a dubious business, but in this case I think it is justified:
1) The distribution of running speeds is not a bell curve. It has a long tail of athletes who are much faster than normal runners. Below I propose a model that explains this tail, and suggests that there is still room between the fastest human ever born and the fastest possible human.
2) I’m not just fitting a line to arbitrary data; there the current top marathoners will be able to maintain this pace for two hours, but we have no reason to think that it is beyond theoretical human capability.
My model, and the data it is based on, are below.
-----
In April 2011, I collected the world record progression for running events of various distances and plotted speed versus:
def GeneratePerson(n=10):
factors = [random.normalvariate(0.0, 1.0) for i in range(n)]
logs = [Logistic(x) for x in factors]
return min(logs)
Yes, that's right, I just reduced a person to a single number. Cue the humanities majors lamenting the blindness and arrogance of scientists. Then explain that this is supposed to be an explanatory model, so simplicity is a virtue. A model that is as rich and complex as the world is not a model.:
def WorldRecord(m=100000, n=10):
data = []
best = 0.0
for i in xrange(m):
person = GeneratePerson(n)
if person > best:
best = person
data.append(i/m, best))
return data).
Maybe world records aren't the right data to be using, they can be quite discontinuous, making the line of best fit inaccurate. If you use best time in a year, the data shows a curve: . In addition to this, wouldn't using linear progression assumes that maybe in 2513 we'll be able to run 1:30:00, while using non-linear regression more sensibly suggests that we'll never break 1:30:00 (or at least suggests that that year is unfeasibly far away). Just my views, you're obviously much more qualified to discuss this than I am.
Hi James. Thanks for these comments. I am planning a future post to discuss model comparison and how to judge whether this kind of model should be considered reliable. One test is whether the extrapolation behaves well on very long time scales. As you pointed out, my linear model eventually exceeds human capability. Another criterion is whether the functional form of the model has a theoretical basis (as mine does) or whether it is chosen to fit the data (as in the example you cited). And one other criterion is the quality of fit, which includes things like the correlation coefficient and also analysis of residuals (are they uncorrelated and reasonably distributed?) Details to follow!
Oh P.S what's up with the second data point from the left? If that's a 12.25 mph marathon, how can the third, fourth, fifth and sixth (all under that pace) also be WRs?
That was Derek Clayton in 1969. The record was disputed because the course was short. I should probably discard that point, but it doesn't affect the results much.
Fascinating writeup.
Shows that you do not need to know the actual factors to model the overall shape of the distribution :)
I liked your potential list of factors.
If the factors were fixed and are mostly physical constraints (some mental capacities could translate to brain physics/chemistry, some may be emotional) , then one way to look at progress is how over time athletes have become better and better in overcoming limitations or adapting technique to work around things such as healing rate, tendon strength etc.
That gives me a view that sports trainers could focus on one such factor at a time and improve the technique - which I am sure they do. Bayesian multiple linear regression may provide us an optimal configuration of these factors all at a time, to shoot for. Maybe that bayesian optimization is what the practicing/improving athelete is doing - identifying optimal pressure on the feet, the pacing, the breathing etc that would keep most factors at their bests through the course of the entire run! Exciting stuff, Alan. Thanks.
Ravi
I really loved this line, and laughed out loud:
"A model that is as rich and complex as the world is not a model."
Very interesting, Allen. I think you are in the right ballpark and are right about the linear relationship between the rate (speed) and time. Another way to think about it is that it's a common negative exponential growth curve when marathon time (or percentage change in speed) is on the y-axis. Doing it that way makes it clear that we are running up against a limit at some point. | http://allendowney.blogspot.com/2013/10/one-step-closer-to-two-hour-marathon.html | CC-MAIN-2014-42 | refinedweb | 807 | 60.95 |
Microsoft ACT standalone installation
Microsoft ACT is great for stress testing web sites. The only "problem" is that you have to install Visual Studio .NET in order to use it. I use it frequently on my dev machine but some times it is useful have it on a remote machine for stress testing directly in a pre-production environment. The steps below shows how you can copy your local ACT installation to a standalone computer.
Pre-requisite: Internet Explorer 6.0
Steps by step instructions:
- Copy the C:\Program Files\Microsoft ACT directory from you dev PC to the same directory on the remote machine
- Create the Act.Reg and Register.cmd files below
- Execute Register.cmd
- Create a local user: ACTUser with the "User" rights
- Set the Identify of the following COM objects to ACTUser (using dcomcnfg):
- Application Center Test Broker
- Application Center Test Controller
- Give full control to ACTUser on the following WMI namespace using "Computer Management": Root/CIMV2/Application/MicrosoftACT
== Save as Register.cmd ==
c:
cd "C:\Program Files\Microsoft ACT"
regedit -s act.reg
for %%i in (*.dll) do regsvr32 /s %%i
ACTBroker.exe -regserver
actcontroller.exe -regserver
ACTRegMof.exe -i "C:\Program Files\Microsoft ACT\actnamespace.mof"
ACTRegMof.exe -i "C:\Program Files\Microsoft ACT\actbroker.mof"
ACTRegMof.exe -i "C:\Program Files\Microsoft ACT\actcontroller.mof"
== Save as Act.Reg ==
Windows Registry Editor Version 5.00
[HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\ACT]
"AppPath"="C:\\Program Files\\Microsoft ACT\\"
"ProductCode"="{E05F0409-0E9A-48A1-AC04-E35E3033604A}"
"Feature"="AppCenter_Test_for_VS.NET"
"Version"="1.0.0536"
Disclaimer: Follow the instructions at your own risk and make sure you have a working backup!
Egil I was curios how you have found out all those dependencies..
I was suspecting that you have a component that humans doesn't have: time! :-P
Thank you very much !
This allows to install ACT on a windows 2000 server for testing a server on a production LAN.
Egil, Thank you very much~ This is what I need!
To readers:
>Give full control to ACTUser on the following WMI namespace using "Computer >Management": Root/CIMV2/Application/MicrosoftACT
Run "wmimgmt.msc" edit security.
Anybody knows maybe how to enable the chart control in ACT installed this way?
could you recommend good book on ACT. thanx.
The rest was trial and a lot of error. ?
Ty but To readers:
>Give full control to ACTUser on the following WMI namespace using "Computer >Management": Root/CIMV2/Application/MicrosoftACT
Run "wmimgmt.msc" edit security.
Sorry wrong url for my blog with my last post :(
Thank you very much !
This allows to install ACT on a windows 2000 server for testing a server on a production LAN.
Very useful information, thanks.
Can I add this article to my site? | http://blog.egilh.com/2004/11/305aspx.html | CC-MAIN-2017-34 | refinedweb | 455 | 52.46 |
This is the second part of a two-part series on creating a video watch party application using the Vonage Video API and Ruby on Rails.
In the first article, we went through the steps of building the backend of the app. If you have not read that post yet, it would be a good place to start. Now we are going to focus on the frontend of our application. While the backend was written mainly in Ruby, the frontend will be a lot of client-side JavaScript.
Once we are done, we will have a video watch party app that we can use to chat with our friends and watch videos together!
Let's get started!
tl;dr If you would like to skip ahead and get right to deploying it, you can find all the code for the app and a one-click Deploy to Heroku button at the GitHub repository.
Table of Contents
- What Will We Be Building
- Creating the JavaScript Packs
- Styling the Application
- Putting It All Together
What Will We Be Building
Before we start coding, it is a good idea to take a moment and discuss what we will be building.
If you recall from the first post, we had instantiated a Video API Session ID, and are actively creating tokens for each participant. That information is being passed to the frontend by newly created JavaScript variables in the ERB view files. Additionally, we are also passing data from our environment variables to the frontend. We will be using all that information in the code we will write to create the experience of the app.
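Those values reach the packs below as plain global JavaScript variables; the code later in this post reads `name` and `moderator_env_name` directly. As a reminder of how that hand-off can look, here is an illustrative sketch of a view template. The instance variables and exact markup are assumptions, not the literal code from part one:

```erb
<%# Illustrative sketch only -- instance variable names are assumed %>
<script>
  var name = '<%= @name %>';
  var moderator_env_name = '<%= @moderator_name %>';
</script>
```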
Ruby on Rails has come a long way in integrating client-side JavaScript directly into the stack with the introduction of Webpack in Rails starting with version 5.1. JavaScript is incorporated through packs placed inside `/app/javascript/packs` and added as either `import` or `require()` statements inside the `application.js` file in that directory.
We will be separating out the various concerns of our code into different files so that at the end your folder will have the following files:
```
# app/javascript/packs
- application.js
- app_helpers.js
- chat.js
- opentok_screenshare.js
- opentok_video.js
- party.js
- screenshare.js
```
Each file, besides `application.js`, will contain code to cover distinct concerns:

- `app_helpers.js`: Cross-functional code that is needed across the frontend
- `chat.js`: Creating a `Chat` class that will be used to instantiate instances of the text chat
- `party.js`: Creating a `Party` class that will be used to instantiate instances of the video chat
- `screenshare.js`: Creating a `Screenshare` class that will be used to instantiate instances of the screenshare functionality
Prior to creating the code, let's add these files to the `application.js` file, which will instruct Webpack to compile them at runtime:
```javascript
// application.js
import './app_helpers.js'
import './opentok_video.js'
import './opentok_screenshare.js'
```
Creating the JavaScript Packs
In each subsection, we will create the JavaScript files that we enumerated above.
The `app_helpers.js` File
The `app_helpers.js` file will contain generic helper functions that we will export to the rest of the code to use throughout the app. We will create the `screenshareMode()`, `setButtonDisplay()`, `formatChatMsg()`, and `streamLayout()` functions.
The `screenshareMode()` function will take advantage of the Vonage Video API Signal API to send a message to the browsers of all the participants that will trigger a `window.location` change. The Signal API is the same API we will use for the text chat, which is its simplest use case. However, as we will see in this function, the Signal API provides an intuitive and powerful way to direct the flow of your application simultaneously for all the participants without needing to write lots of code:
```javascript
export function screenshareMode(session, mode) {
  if (mode == 'on') {
    window.location = '/screenshare?name=' + name;
    session.signal({
      type: 'screenshare',
      data: 'on'
    });
  } else if (mode == 'off') {
    window.location = '/party?name=' + name;
    session.signal({
      type: 'screenshare',
      data: 'off'
    });
  };
};
```
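Note that `screenshareMode()` only *sends* the signal; each participant's browser also needs a handler for the `signal:screenshare` event so that everyone navigates together. That wiring lives in the session setup packs rather than in this helper. As a sketch, the routing decision can be kept in a small pure function; the name `routeForSignal` is hypothetical, not part of the app:

```javascript
// Hypothetical helper: decide where a participant should navigate
// when the moderator's 'screenshare' signal arrives.
function routeForSignal(data, name) {
  return data === 'on'
    ? '/screenshare?name=' + name
    : '/party?name=' + name;
}

// Hooked up to the Signal API listener it would look roughly like this,
// where 'session' is the connected OpenTok session object:
//
// session.on('signal:screenshare', function(event) {
//   window.location = routeForSignal(event.data, name);
// });

console.log(routeForSignal('on', 'Alice'));  // → /screenshare?name=Alice
```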
The next function, `setButtonDisplay()`, changes the style for the HTML element containing the "Watch Mode On/Off" button to either `block` or `none` depending on whether the participant is the moderator or not. There are many other ways to do this, including more secure methods. However, in order to keep things simple for this app to watch videos amongst friends, we will keep it minimalist:
```javascript
export function setButtonDisplay(element) {
  if (name == moderator_env_name) {
    element.style.display = "block";
  } else {
    element.style.display = "none";
  };
};
```
The `formatChatMsg()` function takes in the text message the participant sent as an argument and formats it for presentation on the site. This function looks for any text bracketed by two colons and attempts to parse the text inside those colons as an emoji. It also appends the participant's name to each message so everyone knows who is talking.

In order to add the emojis, we need to install a node package called `node-emoji`. We can do that by adding `const emoji = require('node-emoji');` to the top of the file and running `yarn add node-emoji` in the command line. The function will utilize `match()` with a regular expression to search for strings of text bookended by two colons, and if it matches, it will invoke the `emoji` const we defined to turn that string into an emoji:
```javascript
export function formatChatMsg(message) {
  var message_arr;
  message_arr = message.split(' ').map(function(word) {
    if (word.match(/(?:\:)\b(\w*)\b(?=\:)/g)) {
      return word = emoji.get(word);
    } else {
      return word;
    }
  })
  message = message_arr.join(' ');
  return `${name}: ${message}`
};
```
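If you want to sanity-check the colon-token matching on its own, here is a self-contained sketch. It swaps `node-emoji` for a tiny hard-coded lookup table and passes the participant's name in as a parameter, purely so the example runs anywhere; both are assumptions for illustration, not part of the app:

```javascript
// Minimal stand-in for node-emoji, so this sketch runs without the package
const demoEmoji = { smile: '😄', pizza: '🍕' };

// Same matching logic as formatChatMsg(), with the name passed in
function demoFormatChatMsg(message, name) {
  const words = message.split(' ').map(function(word) {
    const match = word.match(/(?:\:)\b(\w*)\b(?=\:)/g);
    // A word like ':smile:' matches as ':smile'; strip the leading colon
    // and look it up in the demo table
    if (match && demoEmoji[match[0].slice(1)]) {
      return demoEmoji[match[0].slice(1)];
    }
    return word;
  });
  return `${name}: ${words.join(' ')}`;
}

console.log(demoFormatChatMsg('hello :smile: want :pizza: ?', 'Alice'));
// → Alice: hello 😄 want 🍕 ?
```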
The last function inside `app_helpers.js` we need to create is `streamLayout()`, which takes in arguments of the HTML element and the count of participants. The function will add or remove CSS classes on the element depending on the number of participants in order to change the video chat presentation into a grid format:
```javascript
export function streamLayout(element, count) {
  if (count >= 6) {
    element.classList.add("grid9");
  } else if (count == 5) {
    element.classList.remove("grid9");
    element.classList.add("grid4");
  } else if (count < 5) {
    element.classList.remove("grid4");
  }
};
```
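Since `streamLayout()` only manipulates `classList`, its threshold behavior can be exercised with a minimal stub object standing in for a real DOM element. The stub and function names below are just for illustration:

```javascript
// Tiny classList stub so the layout logic can run outside the browser
function makeStubElement() {
  const classes = new Set();
  return {
    classList: {
      add: (c) => classes.add(c),
      remove: (c) => classes.delete(c),
      contains: (c) => classes.has(c)
    }
  };
}

// Same thresholds as streamLayout()
function applyLayout(element, count) {
  if (count >= 6) {
    element.classList.add('grid9');
  } else if (count == 5) {
    element.classList.remove('grid9');
    element.classList.add('grid4');
  } else if (count < 5) {
    element.classList.remove('grid4');
  }
}

const el = makeStubElement();
applyLayout(el, 7);   // 6+ participants -> 3x3 grid
console.log(el.classList.contains('grid9')); // true
applyLayout(el, 5);   // drops to 5 -> 2x2 grid
console.log(el.classList.contains('grid4')); // true
```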
The `chat.js` File
The `chat.js` code is going to create the `Chat` class using a `constructor()`. This `Chat` class will be called and instantiated in both the video chat and screenshare views:
```javascript
// chat.js
import { formatChatMsg } from './app_helpers.js';

export default class Chat {
  constructor(session) {
    this.session = session;
    this.form = document.querySelector('form');
    this.msgTxt = document.querySelector('#message');
    this.msgHistory = document.querySelector('#history');
    this.chatWindow = document.querySelector('.chat');
    this.showChatBtn = document.querySelector('#showChat');
    this.closeChatBtn = document.querySelector('#closeChat');
    this.setupEventListeners();
  }
```
We have given several properties to `Chat`, mostly based on different elements in the DOM and the Video API session. The last one, `this.setupEventListeners()`, invokes a function that we now need to add to the file:
```javascript
  setupEventListeners() {
    let self = this;

    this.form.addEventListener('submit', function(event) {
      event.preventDefault();
      self.session.signal({
        type: 'msg',
        data: formatChatMsg(self.msgTxt.value)
      }, function(error) {
        if (error) {
          console.log('Error sending signal:', error.name, error.message);
        } else {
          self.msgTxt.value = '';
        }
      });
    });

    this.session.on('signal:msg', function signalCallback(event) {
      var msg = document.createElement('p');
      msg.textContent = event.data;
      msg.className = event.from.connectionId === self.session.connection.connectionId ? 'mine' : 'theirs';
      self.msgHistory.appendChild(msg);
      msg.scrollIntoView();
    });

    this.showChatBtn.addEventListener('click', function(event) {
      self.chatWindow.classList.add('active');
    });

    this.closeChatBtn.addEventListener('click', function(event) {
      self.chatWindow.classList.remove('active');
    });
  }
}
```
`setupEventListeners()` creates an `EventListener` for the text chat `submit` button. When a new message is submitted, it is sent to the Signal API to be processed and delivered to all the participants. Similarly, when a new message is received, a new `<p>` tag is added to the chat element, and the participant's text chat window is scrolled to view it.
The next two files we will create perform similar functionality in creating new classes for the video chat party and for the screenshare view.
The `party.js` File
In this file we will create the `Party` class, which will be used to instantiate new instances of the video chat:
```javascript
// party.js
import { screenshareMode, setButtonDisplay, streamLayout } from './app_helpers.js';

export default class Party {
  constructor(session) {
    this.session = session;
    this.watchLink = document.getElementById("watch-mode");
    this.subscribers = document.getElementById("subscribers");
    this.participantCount = document.getElementById("participant-count");
    this.videoPublisher = this.setupVideoPublisher();
    this.clickStatus = 'off';
    this.setupEventHandlers();
    this.connectionCount = 0;
    setButtonDisplay(this.watchLink);
  }
```
The `constructor()` function is given the Video API session as an argument and passes that to `this.session`. The rest of the properties are defined and given values. The `watchLink`, `subscribers`, and `participantCount` properties come from HTML elements, while `videoPublisher` is provided a function as its value, and `clickStatus` is given a default of `off`.
We will create the `setupVideoPublisher()` function at this point. The function invokes the Video API JavaScript SDK `initPublisher()` function to start the video publishing. It can take in optional arguments, and as such, we specify that the video should occupy 100% of the width and height of its element and should be appended to the element:
```javascript
  setupVideoPublisher() {
    return OT.initPublisher('publisher', {
      insertMode: 'append',
      width: "100%",
      height: "100%"
    }, function(error) {
      if (error) {
        console.error('Failed to initialise publisher', error);
      };
    });
  }
```
There are also several actions we must create event listeners for and add them to the class. We need to listen for when the session is connected, when a video stream has been created, when a connection has been added, and when a connection has been destroyed. When a connection has been added or destroyed, we either increment or decrement the participant count, and share the number of participants in the participant count `<div>` element on the page:
```javascript
  setupEventHandlers() {
    let self = this;
    this.session.on({
      // This function runs when session.connect() asynchronously completes
      sessionConnected: function(event) {
        // Publish the publisher we initialized earlier (this will trigger
        // 'streamCreated' on other clients)
        self.session.publish(self.videoPublisher, function(error) {
          if (error) {
            console.error('Failed to publish', error);
          }
        });
      },

      // This function runs when another client publishes a stream (eg. session.publish())
      streamCreated: function(event) {
        // Subscribe to the stream that caused this event, and place it into
        // the element with id="subscribers"
        self.session.subscribe(event.stream, 'subscribers', {
          insertMode: 'append',
          width: "100%",
          height: "100%"
        }, function(error) {
          if (error) {
            console.error('Failed to subscribe', error);
          }
        });
      },

      // This function runs whenever a client connects to a session
      connectionCreated: function(event) {
        self.connectionCount++;
        self.participantCount.textContent = `${self.connectionCount} Participants`;
        streamLayout(self.subscribers, self.connectionCount);
      },

      // This function runs whenever a client disconnects from the session
      connectionDestroyed: function(event) {
        self.connectionCount--;
        self.participantCount.textContent = `${self.connectionCount} Participants`;
        streamLayout(self.subscribers, self.connectionCount);
      }
    });
```
Lastly, we add one more event listener. This event listener is attached to the `click` action on the "Watch Mode On/Off" button. When it is clicked, the app moves to the screenshare view if the click status is `off`. You will recall that the click status is given a default of `off` in the construction of the class:
```javascript
    this.watchLink.addEventListener('click', function(event) {
      event.preventDefault();
      if (self.clickStatus == 'off') {
        // Go to screenshare view
        screenshareMode(self.session, 'on');
      };
    });
  }
}
```
The `screenshare.js` File
The final class we will create is a `Screenshare` class, which will be responsible for defining the video screenshare. The `constructor()` function takes the Video API session and the participant's name as arguments:
```javascript
// screenshare.js
import { screenshareMode } from './app_helpers.js';

export default class Screenshare {
  constructor(session, name) {
    this.session = session;
    this.name = name;
    this.watchLink = document.getElementById("watch-mode");
    this.clickStatus = 'on';
  }
```
Unlike the `Party` class, the `clickStatus` here defaults to `on`, since we want to move away from the screenshare and back to the video chat mode if the moderator clicks the "Watch Mode On/Off" button.
We also utilize `toggle()` to either share the participant's screen, if the participant is the moderator, or subscribe to the screenshare for everyone else:
```javascript
  toggle() {
    if (this.name === moderator_env_name) {
      this.shareScreen();
    } else {
      this.subscribe();
    }
  }
```
The `shareScreen()` function invoked in `toggle()` needs to be defined:
```javascript
  shareScreen() {
    this.setupPublisher();
    this.setupAudioPublisher();
    this.setupClickStatus();
  }
```
This function itself has three functions that also need to be created. The first function will publish the screen of the moderator. However, the screen publishing by itself does not include audio. Therefore, a second function will publish the audio from the moderator's computer. Then, the final function in `shareScreen()` will move back to the video chat view if the "Watch Mode On/Off" button is clicked:
```javascript
setupClickStatus() {
  // screen share mode off if clicked off
  // Set click status
  let self = this;
  this.watchLink.addEventListener('click', function(event) {
    event.preventDefault();
    if (self.clickStatus == 'on') {
      self.clickStatus = 'off';
      screenshareMode(self.session, 'off');
    };
  });
}

setupAudioPublisher() {
  var self = this;
  var audioPublishOptions = {};
  audioPublishOptions.insertMode = 'append';
  audioPublishOptions.publishVideo = false;
  var audio_publisher = OT.initPublisher('audio', audioPublishOptions, function(error) {
    if (error) {
      console.log(error);
    } else {
      self.session.publish(audio_publisher, function(error) {
        if (error) {
          console.log(error);
        }
      });
    };
  });
}

setupPublisher() {
  var self = this;
  var publishOptions = {};
  publishOptions.videoSource = 'screen';
  publishOptions.insertMode = 'append';
  publishOptions.height = '100%';
  publishOptions.width = '100%';
  var screen_publisher = OT.initPublisher('screenshare', publishOptions, function(error) {
    if (error) {
      console.log(error);
    } else {
      self.session.publish(screen_publisher, function(error) {
        if (error) {
          console.log(error);
        };
      });
    };
  });
}
```
All of the above creates the screenshare for the moderator. Everyone else in the app will want to subscribe to that screenshare. We will use the
subscribe() function to do that. This will be the last function inside the file:
```javascript
subscribe() {
  var self = this;
  this.watchLink.style.display = "none";
  this.session.on({
    streamCreated: function(event) {
      console.log(event);
      if (event.stream.hasVideo == true) {
        self.session.subscribe(event.stream, 'screenshare', {
          insertMode: 'append',
          width: '100%',
          height: '100%'
        }, function(error) {
          if (error) {
            console.error('Failed to subscribe to video feed', error);
          }
        });
      } else if (event.stream.hasVideo == false) {
        self.session.subscribe(event.stream, 'audio', {
          insertMode: 'append',
          width: '0px',
          height: '0px'
        }, function(error) {
          if (error) {
            console.error('Failed to subscribe to audio feed', error);
          }
        });
      };
    }
  });
}
}
```
We are now ready to put all these classes we have defined to work in the application by creating instances of them inside the pack files.
Creating opentok_video.js
The
opentok_video.js file will build a new video chat experience. Most of the work was done in the classes we defined above, so this file is relatively small. First, let's import the
Chat and
Party classes:
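The import statements themselves are not shown in this snippet; assuming chat.js and party.js sit alongside this pack (mirroring the imports in the screenshare pack), they would be:

```javascript
// opentok_video.js
import Chat from './chat.js'
import Party from './party.js'
```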
Then, we will define a global empty variable to hold the Video API session:
```javascript
var session = ''
```
Then we wrap the rest of the code in three checks to make sure we are on the correct website path, that the DOM is fully loaded and that the participant name is not empty:
```javascript
if (window.location.pathname == '/party') {
  document.addEventListener('DOMContentLoaded', function() {
    if (name != '') {
```
The rest of the code initiates a new Video API session if one does not exist and instantiates a new
Chat and new
Party. At the end, we also listen for the Signal API to send a
screenshare data message with the value of
on. When that message is received the
window.location is moved to
/screenshare:
```javascript
      // Initialize an OpenTok Session object
      if (session == '') {
        session = OT.initSession(api_key, session_id);
      }

      new Chat(session);
      new Party(session);

      // Connect to the Session using a 'token'
      session.connect(token, function(error) {
        if (error) {
          console.error('Failed to connect', error);
        }
      });

      // Listen for Signal screenshare message
      session.on('signal:screenshare', function screenshareCallback(event) {
        if (event.data == 'on') {
          window.location = '/screenshare?name=' + name;
        };
      });
    };
  });
}
```
Creating opentok_screenshare.js
The last JavaScript file we will create is very similar to the previous one. It is responsible for the screenshare view and leverages the
Screenshare and
Chat classes we defined earlier:
```javascript
import Screenshare from './screenshare.js'
import Chat from './chat.js'

// declare empty global session variable
var session = ''

if (window.location.pathname == '/screenshare') {
  document.addEventListener('DOMContentLoaded', function() {
    // Initialize an OpenTok Session object
    if (session == '') {
      session = OT.initSession(api_key, session_id);
    }

    // Hide or show watch party link based on participant
    if (name != '' && window.location.pathname == '/screenshare') {
      new Chat(session);
      new Screenshare(session, name).toggle();

      // Connect to the Session using a 'token'
      session.connect(token, function(error) {
        if (error) {
          console.error('Failed to connect', error);
        }
      });

      // Listen for Signal screenshare message
      session.on('signal:screenshare', function screenshareCallback(event) {
        if (event.data == 'off') {
          window.location = '/party?name=' + name;
        };
      });
    }
  });
};
```
Before we wrap up, last but certainly not least, we need to define the frontend styling of the application. All this code is useless if participants cannot access it.
Styling the Application
The stylesheet for this application would not have happened without the help of my friend and former colleague Hui Jing Chen, who taught me a lot about front-end design through this process. The app primarily uses Flexbox and CSS Grid to lay out the elements.
Let's start by creating a
custom.css file inside
app/javascript/stylesheets. We want to make sure that it is included in our application, so add an import line to
application.scss in the same folder:
@import './custom.css';
First, let's add the core styling in
custom.css:
```css
:root {
  --main: #343a40;
  --txt-alt: white;
  --txt: black;
  --background: white;
  --bgImage: url('~images/01.png');
  --chat-bg: rgba(255, 255, 255, 0.75);
  --chat-mine: darkgreen;
  --chat-theirs: indigo;
}

html {
  box-sizing: border-box;
  height: 100%;
}

*, *::before, *::after {
  box-sizing: inherit;
  margin: 0;
  padding: 0;
}

body {
  height: 100%;
  display: flex;
  flex-direction: column;
  background-color: var(--background);
  background-image: var(--bgImage);
  overflow: hidden;
}

main {
  flex: 1;
  display: flex;
  position: relative;
}

input {
  font-size: inherit;
  padding: 0.5em;
  border-radius: 4px;
  border: 1px solid currentColor;
}

button, input[type="submit"] {
  font-size: inherit;
  padding: 0.5em;
  border: 0;
  background-color: var(--main);
  color: var(--txt-alt);
  border-radius: 4px;
}

header {
  background-color: var(--main);
  color: var(--txt-alt);
  padding: 0.5em;
  height: 4em;
  display: flex;
  align-items: center;
  justify-content: space-between;
}
```
Then, let's add the styling for the landing page:
```css
.landing {
  margin: auto;
  text-align: center;
  font-size: 125%;
}

.landing form {
  display: flex;
  flex-direction: column;
  margin: auto;
  position: relative;
}

.landing input, .landing p {
  margin-bottom: 1em;
}

.landing .error {
  color: maroon;
  position: absolute;
  bottom: -2em;
  width: 100%;
  text-align: center;
}
```
We also want to add the styling for the text chat, especially making sure that it stays in place and does not scroll the whole page as it progresses:
```css
.chat {
  width: 100%;
  display: flex;
  flex-direction: column;
  height: 100%;
  position: fixed;
  top: 0;
  left: 0;
  z-index: 2;
  background-color: var(--chat-bg);
  transform: translateX(-100%);
  transition: transform 0.5s ease;
}

.chat.active {
  transform: translateX(0);
}

.chat-header {
  padding: 0.5em;
  box-shadow: 0 1px 5px rgba(0, 0, 0, 0.12), 0 1px 3px rgba(0, 0, 0, 0.24);
  display: flex;
  justify-content: space-between;
}

.btn-chat {
  height: 5em;
  width: 5em;
  border-radius: 50%;
  box-shadow: 0 3px 6px 0 rgba(0, 0, 0, .2), 0 3px 6px 0 rgba(0, 0, 0, .19);
  position: fixed;
  right: 1em;
  bottom: 1em;
  cursor: pointer;
}

.btn-chat svg {
  height: 4em;
  width: 2.5em;
}

.btn-close {
  height: 2em;
  width: 2em;
  background: transparent;
  border: none;
  cursor: pointer;
}

.btn-close svg {
  height: 1em;
  width: 1em;
}

.messages {
  flex: 1;
  display: flex;
  flex-direction: column;
  overflow-y: scroll;
  padding: 1em;
  box-shadow: 0 1px 5px rgba(0, 0, 0, 0.12), 0 1px 3px rgba(0, 0, 0, 0.24);
  scrollbar-color: #c1c1c1 transparent;
}

.messages p {
  margin-bottom: 0.5em;
}

.mine {
  color: var(--chat-mine);
}

.theirs {
  color: var(--chat-theirs);
}

.chat form {
  display: flex;
  padding: 1em;
  box-shadow: 0 1px 5px rgba(0, 0, 0, 0.12), 0 1px 3px rgba(0, 0, 0, 0.24);
}

.chat input[type="text"] {
  flex: 1;
  border-top-left-radius: 0px;
  border-bottom-left-radius: 0px;
  background-color: var(--background);
  color: var(--txt);
  min-width: 0;
}

.chat input[type="submit"] {
  border-top-right-radius: 0px;
  border-bottom-right-radius: 0px;
}
```
Now let's create the styling for the video chat and screenshare elements:
```css
.videos {
  flex: 1;
  display: flex;
  position: relative;
}

.subscriber.grid4 {
  display: grid;
  grid-template-columns: repeat(auto-fit, minmax(25em, 1fr));
}

.subscriber.grid9 {
  display: grid;
  grid-template-columns: repeat(auto-fit, minmax(18em, 1fr));
}

.subscriber, .screenshare {
  width: 100%;
  height: 100%;
  display: flex;
}

.publisher {
  position: absolute;
  width: 25vmin;
  height: 25vmin;
  min-width: 8em;
  min-height: 8em;
  align-self: flex-end;
  z-index: 1;
}

.audio {
  position: absolute;
  opacity: 0;
  z-index: -1;
}

.audio {
  display: none;
}

.dark {
  --background: black;
  --chat-mine: lime;
  --chat-theirs: violet;
  --txt: white;
}
```
Lastly, we will add a media query that will keep the text chat in proportion on smaller screens:
```css
@media screen and (min-aspect-ratio: 1 / 1) {
  .chat {
    width: 20%;
    min-width: 16em;
  }
}
```
That's it! The application, both the backend and the frontend, has been created. We are now ready to put it all together.
Putting It All Together
Even though the application is a combination of multiple programming languages, namely Ruby and JavaScript, with an intertwined backend and frontend, it is relatively straightforward to run it. This is because Rails allows us to seamlessly integrate it all together with one command.
From the command line, you can execute
bundle exec rails s and watch your Rails server start. You will also see the following almost magical line in your console output the first time you run the app:
[Webpacker] Compiling...
In fact, you will see that every time you make a change to any of your JavaScript or CSS packs. That output tells you that Rails is using Webpack to compile and incorporate all of your packs into the application. Once the
[Webpacker] Compiling... is done you will see a list of all your compiled packs:
```
Version: webpack 4.42.1
Time: 1736ms
Built at: 05/01/2020 12:01:37 PM
Asset                                               Size       Chunks               Chunk Names
js/app_helpers-31c49752d24631573287.js              100 KiB    app_helpers          [emitted] [immutable]  app_helpers
js/app_helpers-31c49752d24631573287.js.map          44.3 KiB   app_helpers          [emitted] [dev]        app_helpers
js/application-d253fe0e7db5e2b1ca60.js              564 KiB    application          [emitted] [immutable]  application
js/application-d253fe0e7db5e2b1ca60.js.map          575 KiB    application          [emitted] [dev]        application
js/chat-451fca901a39ddfdf982.js                     103 KiB    chat                 [emitted] [immutable]  chat
js/chat-451fca901a39ddfdf982.js.map                 46.1 KiB   chat                 [emitted] [dev]        chat
js/opentok_screenshare-2bc51be74c7abf27abe2.js      110 KiB    opentok_screenshare  [emitted] [immutable]  opentok_screenshare
js/opentok_screenshare-2bc51be74c7abf27abe2.js.map  51 KiB     opentok_screenshare  [emitted] [dev]        opentok_screenshare
js/opentok_video-15ed35dc7b01325831c0.js            109 KiB    opentok_video        [emitted] [immutable]  opentok_video
js/opentok_video-15ed35dc7b01325831c0.js.map        50.6 KiB   opentok_video        [emitted] [dev]        opentok_video
js/party-f5d6c0ccd3bb1fcc225e.js                    105 KiB    party                [emitted] [immutable]  party
js/party-f5d6c0ccd3bb1fcc225e.js.map                47.5 KiB   party                [emitted] [dev]        party
js/screenshare-4c13687e1032e93dc59a.js              105 KiB    screenshare          [emitted] [immutable]  screenshare
js/screenshare-4c13687e1032e93dc59a.js.map          47.9 KiB   screenshare          [emitted] [dev]        screenshare
manifest.json                                       2.38 KiB   [emitted]
```
The file names reflect that they have been compiled down, but you can still see your pack names in there if you look closely, like
party,
app_helpers, etc.
Running your application locally is great for testing by yourself, but you probably would like to invite friends to participate with you!
You can create an externally accessible link to your application running locally using a tool like ngrok. It gives an external URL for your local environment. The Nexmo Developer Platform has a guide on getting up and running with ngrok that you can follow.
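As a quick sketch — assuming your Rails server is running on its default port 3000 and you have ngrok installed — exposing it is a single command:

```shell
# Tunnel the local Rails server (port 3000) to a public ngrok URL
ngrok http 3000
```

ngrok then prints a public URL that you can share with your watch party participants.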
If you would like to just get up and running, you can also deploy this application from GitHub directly to Heroku with one click. Click on the
Deploy to Heroku button and within moments you will have a watch party app ready to be used and enjoyed.
I would love to hear what you built using the Vonage Video API! Please join the conversation on our Community Slack and share your story!
Discussion
Great finish, Ben!
Just curious, as I'm still learning about webpack every day... how come you put each class into the packs folder, essentially creating n packs?
Usually I see application.js importing from '../controllers' or whatever is appropriate to whatever framework is in place. Your configuration seems to essentially treat each pack as a separate file that needs to be sent down the wire, even though you don't appear to be doing any dynamic imports.
I'm not so uptight about a batch of HTTP requests, but I am curious if you had some great intent behind it.
Final question for the moment: do you have to load the OpenTok script on your global window object via a SCRIPT tag, or is there an npm package we can import?
Hello again,
Thanks so much!
There are different ways people choose to organize their assets, whether the stylesheets, javascript or images. If this was going to be a production app that I could reasonably expect would grow in complexity and in usage, I would probably do it differently. However, as the use case here was something fun for kids (or any other group of friends, etc.) I was less concerned with adopting a specific opinion on Webpack asset folder structure. As an aside, if this was to be anything more than what it was intended for, I also would not recommend implementing the "security" protocols the way I did either, as that is not very secure! A real production app would want to have more than hiding the moderator functions with a CSS element style! :)
There is indeed a Node SDK for OpenTok. Reference docs are here: tokbox.com/developer/sdks/node/.
I actually meant on the client: github.com/nexmo-community/rails-v...
I'm a Rails dev. :)
So, yeah... now that I've clarified I meant "is there a way to import the OpenTok library without needing a script tag in the document"... is there? Not a rhetorical question. Hoping to put this API to work!
Yes, there is, look at the last sentence in my last reply :)
I am clearly failing to communicate what I'm trying to ask. Let me try harder/better.
I'm a Rails developer, not a Node developer - so you can safely presume that I'm not looking for a Node library in any potential future inquiry. ;)
What I am trying to do is load whatever client library is necessary via my webpack application.js pack file, specifically so that I don't have to resort to putting a script tag in my HEAD and addressing the TokBox classes via the global namespace.
I think what I'm looking for is an ES6 module, probably downloaded and added to my package.json via
npm add?
FWIW, the answer could very well be "no, we haven't done that (yet)"!
Not a problem, I think I understand now.
If I understand correctly, then the answer is you can add the OpenTok JS/Node SDK to your application via Webpack with
yarn add opentok or
npm install opentok without the use of a script tag.
Thanks for trying, Ben. That package is, once again, the node server implementation. I'm looking for the client. :)
It's all good... I think that the @opentok/client is the module I'm looking for. I'll let you know how I make out!
Glad you followed the link! Best of luck to you!
Thanks for the tutorial. Is there any way we can share the desktop and allow remote control of each other's dashboards?
Hi Kamal,
Unfortunately, it's not possible to remotely control the screenshare. The screenshare functions as a published video stream in the same way that a webcam is a published video stream, except the media source is the screen instead of the camera. Just like a camera is a one-way stream (i.e. viewers view the camera), so too with the screenshare stream.
All the best,
Ben
Hi Ben,
Could you help me with this ?
I am trying to implement group video functionality and I have following code
While showing subscribers I am doing
But at random times the video goes black, and also when some of the subscribers refresh the page it goes blank and creates multiple subscribers with a null stream.
How can I resolve this?
Hi Kamal,
Good to hear from you here and in the Video Ruby SDK where you raised an issue with this question and in our Community Slack! Instead of responding in 3 separate places to the same question, we're following up with you in the Community Slack.
If anyone has any questions about the Video API, or any other Vonage API, please feel free to join us in the Vonage Community Slack where we would be glad to lend a hand! | https://practicaldev-herokuapp-com.global.ssl.fastly.net/vonagedev/create-a-video-party-app-with-rails-part-2-building-the-frontend-hfe | CC-MAIN-2021-04 | refinedweb | 4,675 | 50.02 |
[open] TMS parallel port managing delays
Dear all,
I am running a visual behavioral experiment where subjects must make a size judgment one 2 elements presented on a picture (which one is larger) and then press left or right. Experiment works fine using psychopy back-end but now i want to add TMS on it.
In the experiment I present the picture during 200 ms. and I want to use 5 different delays (0-50-100-150-200 ms) in the TMS pulse. in order to "send" the pulse instruction to the TMS i just need to send a "1" pulse through the parallel port and then come back to "0" state.
So I have two problems at the moment:
1- On one side I can't use the parrallel_port plugin because the parallel port of the computer is an external card whose number is E010 ... what means it is the port number 57360 in decimal values, and this number in not covered by the port plugin. is there anyway to overcome this issue?
2- In the case I could use the plugin, is there any way to "tell" the plugging to "wait" for an x amount of time before sending the pulse?
I think that both problems could be solved using inline code , but looking in the forum I just found the posibility to use something like:
from ctypes import windll port = 889 dev = windll.inpout32 val = dev.Inp32(port) print "Read %d from port %d" % (val, port)
but when I tried OpensSesame launches an error saying that there is no module ctypes (I alreaddy installed inpout32.dll).
I am using OpenSesame is 2.9.7 Hesitant Heisenberg on a Windows 7 machine.
Any ideas? Recomendations?
Thanks in advance.
Felipe
Hi Felipe,
Let's begin with the
ctypesissue. This module is part of the Python standard library, so it should always be available. What script did you try exactly, and what is the exact error message?
Cheers,
Sebastiaan
Hi Sebastiaan,
Thanks a lot for your fast answer...
So far I have the code line like this:
Using this code I get the following error message in the debug window:
Error while executing inline script
Could you help me here on what is wrong in the code in order to activate the parallel port? Is there any "easier" way to activate the port in between of the stimulus presentation?
Thanks again
Felipe
Hi Felipe,
It's a bit unclear where exactly this error message comes from. Could you upload the entire experiment script, for example to?
And are you running 32 bit or 64 bit Windows 7? It kind of looks like a compatibility issue, but I'm not sure.
I'm afraid not. The parallel port is very outdated, pretty much only still used by EEG researchers. It's always a bit of a problem to activate it under Windows. (But it can be done.)
Cheers,
Sebastiaan
Hi Sebastiaan,
Thanks for your answer.
There are two ways toa ccess the TMS serial or parallel port. The parallel seems to be much easier since you just need to sens a "1" to open the port and activate the tms pulse and a "0" to close it again.
Of course, as I tried to explain before, this is not really straight foward with OpenSesame when i need to use different time delays after a picture presentation.
I let you the code I have written here:
I am running it on a Windows 7 Profesisonal 32 bits machine.
Thanks a lot for your help
Felipe
Hi there,
It seems that my pastebin link expired so I reupload in case somebody is interested to give me some help on this issue.
Thanks in advance for any idea or help to deal with this problem.
Felipe
Hi Felipe,
Could you clarify a bit what exactly you're trying to do? From your description it sounds like you want to send triggers via the parallel port, but in your script you're reading from the parallel port.
Then we have the
inpout32.dllerror, which is quite mysterious. Is
inpout32.dllactually present in the OpenSesame folder?
As an alternative, you could use
dlportio.dllinstead, either through a script or the
parallel_port_triggerplugin? This is explained on this page:
Cheers,
Sebastiaan
Hi Sebastiaan,
Thanks for your reply.
You re totally right with the issue inpout32.dll ... I want to send a "1" and not to read the port.
I need to present a picture and send a "1" to the parallel port either 0-100-150-200 ms after the picture presentation, and inmediately after close the port. (The "1" triggers the TMS pulse) .
The pluggin does not work because the parallel port of the computer is an external card whose number is E010 ... what means it is the port number 57360 in decimal values, and this number in not covered by the port plugin. Is there anyway to overcome this issue? Solving this problem would be the easiest way.
Then , the inline code of dlportio.dll would be the only option. Still, the problem is how to delay 0-100-150-200 ms to send the trigger.
Any idea how to overcome this issue with the inline code?
Thanks again.
Felipe.
Hi Felipe, did you solve the problem? I am facing something smilar right now with the need to trigger TMS and a too large parallel port number. Thanks! Luigi
Hi Luigi,
Unfortunately I never found a solution . So I moved to another stimulus presentation software.
I hoipe you got better luck!
Cheers
F.
Hi Luigi,
Managed to get this working with port = 0xD010 by downloading the Binaries only from
Unpacking the Win32 folder.
Running InstallDriver.exe as Admin
Copy the file inpout32.dll from Win32 to c:\Windows\SysWOW64\ and rename it to dlportio.dll
Best,
Jarik
I tested this on Windows 7 64 bit with OpenSesame 3.0.7 using inline script:
with port = 0xD010 instead of port = 0x378
I used a StartTech PEX1PLP (PCI Express Parallel port card) | https://forum.cogsci.nl/discussion/comment/5885 | CC-MAIN-2021-04 | refinedweb | 1,004 | 73.47 |
I...
And there is, it's objcopy to the rescue. objcopy converts object files or executables from one format to another. One of the formats it understands is "binary", which is basicly any file that's not in one of the other formats that it understands. So you've probably envisioned the idea: convert the file that we want to embed into an object file, then it can simply be linked in with the rest of our code.
Let's say we have a file name data.txt that we want to embed in our executable:
# cat data.txt Hello worldTo convert this into an object file that we can link with our program we just use objcopy to produce a ".o" file:
# objcopy --input binary \ --output elf32-i386 \ --binary-architecture i386 data.txt data.oThis tells objcopy that our input file is in the "binary" format, that our output file should be in the "elf32-i386" format (object files on the x86). The --binary-architecture option tells objcopy that the output file is meant to "run" on an x86. This is needed so that ld will accept the file for linking with other files for the x86. One would think that specifying the output format as "elf32-i386" would imply this, but it does not.
Now that we have an object file we only need to include it when we run the linker:
# gcc main.c data.oWhen we run the result we get the prayed for output:
# ./a.out Hello worldOf course, I haven't told the whole story yet, nor shown you main.c. When objcopy does the above conversion it adds some "linker" symbols to the converted object file:
_binary_data_txt_start _binary_data_txt_endAfter linking, these symbols specify the start and end of the embedded file. The symbol names are formed by prepending _binary_ and appending _start or _end to the file name. If the file name contains any characters that would be invalid in a symbol name they are converted to underscores (eg data.txt becomes data_txt). If you get unresolved names when linking using these symbols, do a hexdump -C on the object file and look at the end of the dump for the names that objcopy chose.
The code to actually use the embedded file should now be reasonably obvious:
#include <stdio.h> extern char _binary_data_txt_start; extern char _binary_data_txt_end; main() { char* p = &_binary_data_txt_start; while ( p != &_binary_data_txt_end ) putchar(*p++); } 44 sec ago
- Reply to comment | Linux Journal
4 hours 56++ Linkage
NB: In order to compile with C++, declare the symbols as follows.
extern "C" {
extern char binary_data_txt_start;
extern char binary_data_txt_end;
}
whoa the version number on
whoa the version number on this article!
for 64bit x86's, use --output elf64-x86-64. The --binary-architecture option need not change, again somewhat unintuitively.
Its the program version number
The version number is the version of the "hello world" program, not the article. And could somebody please come up with a new standard first program. If I see "hello world" in one more language I'm gonna spit-up :).
Mitch Frazier is an Associate Editor for Linux Journal.
so much stuff for little problem...
man xxd for "xxd -i":
cat input_file | ( echo "unsigned char xxx[] = {"; xxd -i; echo "};" ) > output_file.c
There is another, portable way to do this
I was facing exactly the same problem when I wanted to embed 4tH bytecode into an executable. The trick is to convert the file into a C-file that can be compiled properly with any C compiler. 4tH features a program to do that. In essence it works like this: you read the file in binary mode byte by byte and convert those bytes to unsigned characters. A converted file looks like this:
'unit' is equivalent to 'unsigned char'. You can even embed several files like this. IMHO this method is more transparent to both the programmer and the compiler. The source to do this is pretty trivial:
Hans Bezemer
Same Thing Using "Standard" Linux Commands
As I allued to in my comment reply below about assembler output, you can create C (or assembler) data with standard Linux commands:
Using objcopy does this without the extra compilation step, although using the result is a bit more obscure. The other thing I like about using objcopy is that it doesn't leave a "temporary" ".c" file sitting around. Makes me nervous deleting ".c" files.
PS Try this, the hexdump command looks freaky but it actually does work!
Mitch Frazier is an Associate Editor for Linux Journal.
That is one of the most
That is one of the most interesting things I have ever seen in this magazine. It's almost an introduction to how a linker works. It would be really excellent to expand upon this article, although I'm not expert enough to suggest in what way.
Thanks.
Use reswrap instead
Or you just use a utility called reswrap which can convert any file into c/c++ data arrays. More portable and lot easier to use.
It's part of the fox toolkit. ():
Usage: reswrap [options] [-o[a] outfile] files...
Convert files containing images, text, or binary data into C/C++ data arrays.
Options:
-o[a] outfile Output [append] to outfile instead of stdout
-h Print help
-v Print version number
-d Output as decimal
-m Read files with MS-DOS mode (default is binary)
-x Output as hex (default)
-t[a] Output as [ascii] text string
-e Generate external reference declaration
-i Build an include file
-k Keep extension, separated by underscore
-s Suppress header in output file
-p prefix Place prefix in front of names of declarations and definitions
-n namespace Place declarations and definitions inside given namespace
-c cols Change number of columns in output to cols
-u Force unsigned char even for text mode
-z Output size in declarations
Each file may be preceded by the following extra option:
-r name Override resource name of following resource file
How about assembler?
Mitch Frazier is an Associate Editor for Linux Journal.
Ehhh...
.globl data_begin
.data
data_begin:
.incbin "data.txt"
.globl data_end
data_end:
Good luck to us,
Mikhail Kourinny
Macro version
(Thank you for the initial code that got me started.)
I turned the code into a macro, got rid of the global data_end and replaced it with data_len. You could go one big step forward and create a common header file containing the assembly and C macros. It could also contain a macro for C++. Then, just ifdef the macros based on the compiler flags. Then, you can just #include the same file, I think, in many places.
// Common Include File: test.h
// Assembly: test.S
// C or C++:
Hi mkourinny & Mitch
Hi mkourinny & Mitch Frazier,
Both of ur scripts mentioned above for assembly
give the same output.
But I don't understand what does "Converting to
assembly mean". Sorry if it sounds silly. I guess
its converting an assembly file (.s) to hex bytes.
Thanks,
Ram
Not Quite
Its converting a data file, of any type of data, into text that is valid assembly language. The resulting output could then be passed to the assembler and "assembled" (ie compiled by the assembler) into an object file.
Some of the other comments mention converting it to C and then compiling the C, this is the same idea only the target language is assembly language and not C.
The linux assembler is a program invoked with the command "as", it is sometimes referred to as "gas" for the GNU Assembler.
Mitch Frazier is an Associate Editor for Linux Journal.
Thank u much. :) Sorry for
Thank u much. :)
Sorry for posting many times.
It happened without my knowledge.
Ram
At Last an Assembly Language Programmer
Didn't know that!
Mitch Frazier is an Associate Editor for Linux Journal. | http://www.linuxjournal.com/content/embedding-file-executable-aka-hello-world-version-5967?quicktabs_1=1 | CC-MAIN-2013-20 | refinedweb | 1,298 | 63.8 |
view raw
I am programming a python user interface to control various instruments in a lab. If a script is not run interactively, the connection with instruments is lost at the end of the script, which can be very bad. I want to help the user to 'remember' running the script interactively.
I'm thinking of two possible ways to do it. First, as specified in the title, I could make an alias for run -i:
%alias_magic lab_run run -i
UsageError: unrecognized arguments: -i
In [1]: import sys
In [2]: run -i test.py random args
['test.py', 'random', 'args']
You can define your own magic function and use
%run -i in it:
from IPython.core.magic import register_line_magic @register_line_magic def r(line): get_ipython().magic('run -i ' + line) del r
Now:
%r
does the same as:
%run -i | https://codedump.io/share/RgH9nj0Uk8f4/1/ipython-run-magic-how-create-an-alias-for-quotrun--iquot | CC-MAIN-2017-22 | refinedweb | 138 | 65.32 |
I try to create some instance dynamically but these dynamic classes don't have any instances in this file with hard-code.
So the compiler can't find the path of the class, is there any way to make the compiler
import some packages forcibly without declare any instance of any class hard-code in the file.
ReferenceError: Error #1065: Variable GetPersonInfoCommand is not defined.
at global/flash.utils::getDefinitionByName()
var command:Class;
for each( var eve:String in evtArr.eventsArr ){
command = getDefinitionByName("frameworks.com.esint.command." + eve + "Command") as Class;
addCommand(eve, command, true);
}
See MXML option:
-includes
If I want to import some classes ?
How should I do ?
It should handle a comma delimited list or multiple appearances on the
command line.
You can also list them in a -config.xml file. Here's one from the framework
test code:
halo.scripts.TextAreaTestScript</symbol | https://forums.adobe.com/thread/764480 | CC-MAIN-2017-51 | refinedweb | 146 | 50.23 |
Angular 2: A Guide for Beginners
Angular 2: A Guide for Beginners
A comprehensive guide covering the introductory steps for the Angular 2 platform as well as a few other development tools.
Join the DZone community and get the full member experience.Join For Free
Deploying code to production can be filled with uncertainty. Reduce the risks, and deploy earlier and more often. Download this free guide to learn more. Brought to you in partnership with Rollbar.
For about half a year, I've been organizing a local Meetup group around Software Craftsmanship. I recently also published a video course on "Learning Angular 2 Directives" and given that Angular 2 finally released RC1, I decided to organize a Meetup session to introduce Angular 2 to our members.
Intro
This article is for those of you who are new to Angular 2 or even to web development in general. Here, I’m going to give you a good overview of what Angular 2 is all about, highlighting some of the main concepts behind. The idea is to give you a good starting point.
If you did already some coding examples in Angular 2, then I’m probably going to bore you.
But maybe you want to dive deeper with my Learning Angular 2 Directives video course I recently published.
Big Picture
Here’s a simple classification of today’s web application architectures.
In server-side rendered applications, most of the application’s logic resides and remains on the server. The user basically enters the URL, the request gets sent to the server, which then produces the final HTML containing the requested data and sends that back to the browser—which simply renders it out. When the user interacts with the page, that request again gets sent to the server, which in turn generates a new HTML page and serves it back to the browser.
This is how the web has been designed, a perfectly valid model that many pages still use today.
But modern web pages are often required to work more like applications do on the desktop. People demand a much better user experience, more interactivity, fast transitions between “pages” and even offline capabilities. That’s where the so-called SPAs (Single Page Applications) come into play.
When the user enters the URL, the web server responds with an HTML page, but also with a set of resources (JavaScript files and images) that make up our client-side application. The browser receives that, loads the JavaScript application and “boots it.” Now it’s the job of that application to dynamically generate the user interface (HTML) based on the data, right from within the browser. After that happens, every new user action doesn’t reload the entire website again, but rather the data for that specific user interaction is sent to the server (usually by using the JSON format) and the server, in turn, responds with only the data requested by the JavaScript client, again using JSON (normally). The JavaScript application gets the data, parses it and dynamically generates HTML code which is shown to the user.
As you can see, the amount of data that is being exchanged is optimized. However, a big downside of such type of applications is that the startup time is usually much longer. You might already have figured why: The browser doesn’t get the HTML code to show, but rather a bunch of JavaScript files that need to be interpreted and executed, which then generates the final HTML to be shown to the user.
I'd like to show you some examples of the third type of web application I’d like to show you. As you might guess, it aims at taking the best of the server-side rendered web apps and SPAs.
In a nutshell, when the user enters the URL, the server loads the JavaScript application on the server side, boots it up and then delivers the already (by the JavaScript app) pre-rendered HTML plus the JavaScript app itself back to the client. There, the browser interprets the JS app and runs it, which has to be intelligent enough to resume where the server has left off.
The advantage is obvious: You get fast startup times and the benefit of a SPA as mentioned before.
Why Angular 2? What’s Different?
Ok, let’s get to the meat. Why should you be interested in Angular 2? Here are a couple of things I picked up mainly from Brad Green and other core member’s talks from a couple of the latest conferences.
At his keynote at NgConf 2016, Brad Green named Angular 2 a platform rather than a library or framework. The main reason is that it is split up into modular pieces built upon each other, some of which can even be used outside the Angular ecosystem.
There are some building blocks, like the dependency injection, decorator support, zone.js (which can be used completely independently from Angular and is currently under discussion at stage 0 at TC39 for being included in the ECMA standard), a compilation service, change detection, and a rendering engine (which is platform independent). On top of that, there are other libraries such as Angular Material (a UI library with material design support), mobile, and Universal, etc.
There are even modules such as i18n, the router, and animation that can be used from within Angular 1.x as well.
Extremely fast!
Angular 2 is designed to be extremely fast. Well, every new JS library would probably claim that, but there are some differences in the approach Angular 2 is taking.
First of all, they’re currently working on a so-called “template compiler” or “offline compiler." Many JavaScript frontend frameworks basically render the templates dynamically into the browser’s DOM at runtime, which requires a templating engine of some kind. Angular 2 templates and components are made in a way that Angular 2 is able “to reason about your app’s templates” and is able to generate an AST and consequently to translate them into pure JavaScript at compile time. This is huge.
As a result, your deployed app doesn’t require any templating engine to run, but it rather contains highly optimized JavaScript code for directly manipulating the DOM. That’s super fast and, moreover, the resulting Angular 2 library gets a lot smaller, as you don’t need its templating engine anymore when you deploy in production.
And that leads us to the next part.
Small!
The library gets really, really small.
@John_Papa @arhoads76 @DanWahlin I believe current numbers are something like 49K via Rollup and 25K via jscompiler.— Brad Green (@bradlygreen) May 31, 2016
By being able to strip out useless parts, a lot can be dropped when deploying in production. Furthermore, the goal is to use a bundler that supports tree shaking and thus reduces the size of the final compiled JS files even more by eliminating everything that is not actively being used within your application. Frankly, if you don’t use the routing module of Angular, it simply won’t get included in your final app.
Tree shaking is basically “dead code elimination”. By analyzing which JavaScript modules are used and which aren’t, compilers that support such approach can eliminate the unused parts and thus produce a much more optimized and smaller bundle. Obviously a proper module system such as ES2015 modules has to be used for this to work.
Built-In Lazy Loading
Finally! Lazy loading has been a hot topic for Angular 1.x and many other frameworks. When you build a serious application, it might get quite big pretty quick. That said, you cannot force your users to download megabytes over megabytes just to get your app booted up, especially on flaky Internet connections or mobiles. That’s why lazy loading needs to be implemented. The idea is to load only those parts the users most heavily use and load other parts on demand when needed. This was particularly hard in Angular 1. ocLazyLoad being one possibility to achieve this.
Now with Angular 2 this is finally built-in right from the beginning, through the framework’s router and so-called “lazy routes”.
Angular Universal
This is Angular 2’s answer to isomorphic JavaScript or server side pre-rendering. Again, it’s all about performance, to get the app to the user as quickly as possible.
Angular-universal is a library that lives under the Angular 2 GitHub repository with the goal of making server-side rendering as easy and straightforward as possible. Since Angular 2 is made to be platform agnostic, it can be executed in non-browser environments without many issues.
So, when loading a universal Angular 2 app, from a high-level perspective, here's what happens:
- Your user opens your universal Angular app, perhaps also invoking a client-side route like
- Some angular-universal compatible server gets the request, knows it is a client-side route and thus boots the Angular 2 root component (usually
app.component.ts) on the server side and executes it.
- The server then delivers the already rendered application state of that invoked client route inside
index.htmlback to the browser.
- The browser renders the HTML, so the user will see the person with id 1 rendered immediately (as it is already present in the HTML)
- Meanwhile the Angular 2 app boots again, but this time on the client-side (the browser) in the background (in a hidden div, basically). A library,
preboot.js, records all user events, like clicks and input changes, until Angular 2 is fully loaded.
- When the Angular 2 app is ready, it will have the same rendered state as the server has delivered previously.
prebootthen replays all of the user events against that client-side rendered app.
- Finally, the client-side rendered app gets activated and the server-rendered HTML is dropped.
That was from a bird's-eye perspective, but you get the idea. A good place for more details is the quickstart guide. Also, the universal-starter repository has some good examples to play around with.
Currently, the angular-universal-supported server frameworks are Node and ASP.net. But support for Java, Go, and PHP is the works.
Unified Development
Something I’m particularly excited about is having a unified development model.
Angular Mobile Toolkit focuses mostly on a new architectural approach for creating web applications: Progressive Web Apps (PWA). These are normal web applications that facilitate modern web technologies like service workers for offline caching and a special (non-standard right now) manifest file that instructs Chrome to provide “installation like capabilities” for the app. It can be added onto your home screen. Even better, Google I/O 2016 was all about PWA development. Just check out some of the talks on Youtube. Also, JavaScriptAir’s latest talk on it might be relevant.
Ionic 2 is a hybrid mobile application framework. That just means you build a web application and package it in a native installable package for iOS and Android, which can be installed through the corresponding app stores. The app itself is served through a WebView component, which means you’re running a web application in the end. Access to underlying APIs is achieved through Apache Cordova. Ionic specializes in providing you the tools for setup and building the native app packages. Moreover, it gives you a highly tuned UI framework and mobile routing support. They recently also announced future support for PWAs, so it’ll be quite interesting to see it evolve.
NativeScript is a framework developed by Telerik. Different from Ionic, it “compiles” to a native application. Actually, your JavaScript code is being executed through a special JavaScript VM, like Chrome’s V8, which builds the bridge to the underlying native platform. But anyway, take a look at this video with John Papa, Burke Holland and TJ VanToll showing off some NativeScript capabilities.
React Native—It's even possible to use React-Native with Angular 2. Unfortunately, I haven't done anything with it yet, so you may want to browse the web for it if you’re particularly interested in this one.
Installed Desktop
When talking about “installed desktop” we don’t mean like running an Angular 2 app on a desktop browser, but rather to run it inside an installable application. This is powered by Electron and Windows Universal (UWP). Watch what Brad Green had to say about it at NGConf 2016.
A very important point here that’s easy to miss: By running Angular 2 directly from within a web worker, not only you get an enormous performance boost, as it runs in a separate thread, but you also get access to the underlying platform, databases etc..
Not convinced? Well, chances are you’re already using an Electron app, like VSCode, or Slack or some of these:
The 7 Key Concepts behind Angular 2
At NGConf 2016, John Papa had a slide in his talk describing the seven main key concepts behind Angular 2. These really nail it.
Modules and the ES2015 story
Ok, at this point I should probably stop, and we should take a broader look at what modules are all about.
Initially in web development, you most likely did something like this:
<html> <head>...</head> <body> <script src="./vendor/jquery.min.js"></script> <script src="./vendor/super-awesome-datepicker.min.js"></script> <script src="./myapp.js"></script> </body> </html>
You simply include the required JavaScript files in the correct order. Within your
myapp.js you might have:
function initDatePicker(someDateValue) { $('#myBirthDateInput').superAwesomeDatePicker({ value: moment(someDateValue) }); }
A couple of things to note here. We rely on global state and libraries to be loaded, like
$ for jQuery and
moment and
superAwesomeDatePicker. These libraries need to be present at the moment this function is executed. Meaning you have to load all of the scripts in the right order based on their respective dependencies. This is simply not feasible for large scale applications with hundreds of different JavaScript files. That’s why module systems have been created, like the AMD standard implemented by—for example—RequireJS:
define(['jquery', 'moment'], function($, moment) { function initDatePicker(someDateValue) { $('#myBirthDateInput').superAwesomeDatePicker({ value: moment(someDateValue); }); } return initDatePicker; });
Note, we now have imports where dependencies called
jquery and
moment get defined somewhere and imported as
$ and
moment inside this specific file. In turn, functionality within this file gets exported with
return initDatePicker. It can be imported by other files in the exact same way. And so on.
This worked, but it wasn’t always ideal. Or better, there have been different patterns around and they weren’t always that nicely compatible and interchangeable. With ES6 or ES2015, the TC39 (the committee deciding over the next ECMAScript features to be implemented by browsers) finally specified a standard syntax for defining JavaScript modules.
import * as $ from 'jquery'; import * as moment from 'moment/moment'; function initDatePicker(someDateValue) { $('#myBirthDateInput').superAwesomeDatePicker({ value: moment(someDateValue) }); } export initDatePicker;
A much more clear and expressive syntax.
Another notable feature that many developers coming from languages like Java or C# may like are classes and inheritance:
class MyApp { _someDateValue; constructor(someDateValue) { this._someDateValue = someDateValue; } get someDateValue() { return this._someDateValue; } set someDateValue(value) { this._someDatevalue = value; } static someStaticFunction() { ... } }
Finally, another construct we need to learn about to understand Angular 2 apps are decorators.
@DebugLog({ ... }) export class MyApp { ... }
These simply provide metadata to the underlying framework, in this case Angular, about the class. There is currently a proposal for decorator support in ECMAScript. Also, despite what some may think, decorators are not annotations in JavaScript.
Annotations and decorators are two competing and incompatible ways to compile the @ symbols that we often see attached to Angular components. Annotations create an "annotations" array. Decorators are functions that receive the decorated object and can make any changes to it they like.
Traceur gives us annotations. TypeScript gives us decorators. Angular 2 supports both. nicholasjohnsom.com
Also, check out the article on Thoughtram on this topic.
So, can I use all of this in the browser right now? No, unfortunately not. What you need is a compiler or transpiler. Currently, Babel and TypeScript are the most popular ones.
The Angular team decided to go with TypeScript and has written its entire codebase with it. TypeScript was created by Microsoft in 2010, going public in 2012. But it really kicked off only recently with Angular 2. The main difference to other transpilers is that it adds optional type support to JavaScript. First of all, this helps discover and prevent nasty errors at compile time and opens up numerous possibilities for better tooling support.
OK, wait, how is this relevant for Angular 2? Angular 2 is written entirely in TypeScript, and while it’s not impossible to write Angular 2 applications in ES5, it is highly recommended to write them in ES6 or TypeScript to get the best out of it. This way you can start using all of the mentioned features on modules, classes, decorators, and much more we didn’t even cover.
Long story short, you should get accustomed to the new features of ES2015. Browse the web or check out my article , which gives you a quick intro and links to many other useful resources.
<(web)-components>
Component-based architectures are the new paradigm for frontend development. This is not something particular to Angular, but it's something that’s shared among other libraries like React, Ember, or Polymer. The idea is to build autonomous pieces with clearly defined responsibilities that might even be reusable across multiple applications.
So what are web components about? Roughly speaking, it’s about defining custom HTML tags and their corresponding behavior.
<google-map</google-map>
It’s a powerful way to express semantics, isn’t it? By simply looking at this HTML tag, we know that it’ll render a Google map and set a pointer at the given coordinates. Neat! Currently, this isn’t something the browser understands natively, although there’s a draft spec document on the w3c website on the web component standard and what concepts it should embrace.
You might also want to check out Polymer and webcomponents.org.
Angular 2 fully embraces this component-based development style. In fact, since the beginning of Angular 1.x, allowing the user to define custom HTML elements with behavior was one of the core philosophies of the framework. A simple component in Angular 2 looks like this:
@Component({ selector: 'hello-world', template: `<p>Hello, world!</p>` }) class HelloWorldComponent { }
In the corresponding HTML, you would write this to instantiate it.
<hello-world></hello-world>
As you can see, decorators are being used to add meta information about the tag of the component, about the template that should be rendered and much more, which aren’t being used in this basic example here.
"Components are first-class citizens in Angular 2"
Now with components being a first-class citizen in Angular 2, there’s the concept of the so-called component-tree. Every Angular 2 application consists of such a component tree, having a top-level “application component” or root component and from there, lots of child and sibling components.
The component tree is of major importance in Angular 2, and you will come across it again. For example, this is how you compose your application, and the arcs from one component to the other are the way data is assumed to flow through your application as well as what Angular uses for change detection.
“Change detection” is the mechanism by which Angular determines which components need to be refreshed as a result of changes in the data of the application.
Templates and Data Binding
Obviously, when we write something like
<hello-world></hello-world>, we also need to define somewhere what Angular should render in place. That’s where templates and data binding come into play. As we’ve seen before, we define the template directly in the
@Component({}) annotation by either using the
template or
templateUrl property, depending on whether we want to define it inline or load it from some url.
@Component({ ... template: ` <p>Hello, world!</p> ` }) class HelloWorldComponent {}
We also need some data binding mechanism to get data into this template and out of it again. Let’s look at an example:
@Component({ selector: 'hello-world', template: ` <p>Hello, </p> ` }) class HelloWorldComponent { who: string = 'Juri' }
As you can see, the variable
who inside the component’s class gets bound to into the template. Whenever you change the value of
who, the template will automatically reflect that change.
Services and Dependency Injection
Besides components, Angular always had the concept of Services and Dependency Injection. So does Angular 2. While the component is meant to deal with the UI and related stuff, the service is the place where you put your “business logic” so that it can be shared and consumed by multiple components. A service is nothing else than a simple ES6 class:
@Injectable() export class PersonService { fetchAllPeople() { ... } }
From within some component, we can then use this service
import { PersonService } from './services/person.service'; @Component({ ... provider: [ PersonService ] }) class PersonComponent { people; constructor(private personService: PersonService) { // DI in all it's beauty, just provide TS type annotation and Angular will handle the rest // like adding the reference of personService to the class // no need for "this.personService = personService;" } ngOnInit() { this.people = this.personService.fetchAllPeople(); } }
Nice, we get a reference to
PersonService from within our component. But wait, who instantiates the class? You guessed it, Angular’s dependency injection. For this to work you need to two things:
- add the
@Injectableannotation.
PersonServiceas a provider either on the app, the top level component or from the part of the component tree (downwards) where you want to have the service injectable.
Reactive Programming With RxJs 5 and HTTP
Angular 2 uses a paradigm called “Reactive Programming." This is implemented through the RxJS 5 library. The Angular 2 HTTP service won’t return promises, but instead RxJS Observables.
This pattern is not new at all and has recently gained a lot of popularity in modern frontend development. I’m not going into the details here now, as it would be an entire article on its own. Just know that Angular 2 will heavily rely on it and that’s why you should probably go and learn more about it.
Lot’s of Stuff, Let’s Get Started With Some Code
In recent years, getting started quickly with frontend development got notably more difficult. Just creating some
index.html with a couple of
<script> tags is not enough. What you need is a transpiler and a build tool that transpiles the code and serves it up, not to then mention optimizations like minification, inclusion of HTML templates, CSS compilation etc.
$ ng new my-super-awesome-project $ ng g component my-new-component $ ng g route hero
Moreover, it generates these components by following the official styleguide and has even linting built-in.
Angular CLI is still under heavy development and still has some way to go till it’s fully usable. But it’s awesome for quickly getting started, and I’m quite sure it’ll get better and be a huge help, especially for newcomers, to get started with Angular 2 without having to know all the tooling in depth (recently Webstorm integrated Angular CLI support). However, I also strongly recommend learning these tools as you go along. The CLI will bring you quickly to a good point, but it’s indispensable to know your tooling to get further ahead.
Here are some other popular starters you definitely want to take a look at. They are community-based, have lots of best practices bundled and have been around for quite a while now.
Here's an Angular 2 starter kit featuring Angular 2 (router, HTTP, forms, services, tests, E2E, Dev/Prod, Material Design, Karma, Protractor, Jasmine, Istanbul, TypeScript, TsLint, Codelyzer, Hot Module Replacement, Typings, and Webpack by @AngularClass.
Okay, We’re All Set Up, I Guess. Time to Code!
I recorded a ~20 min screencast where I walk through some of these seven key concepts behind Angular 2. Hope you enjoy it!
Conclusion
Congrats, you’ve come to the end.
By now, you probably realize there’s lots of new stuff for you to learn. But the nice thing is that maybe it isn’t even only Angular 2 related. Like switching to ES2015 (ES6) and/or TypeScript, or adopting the reactive programming style with RxJS, or learning new toolings like Webpack and SystemJS. These are all things you can totally reuse, even though you don't plan to continue with Angular 2 in the end. Fortunately, the things you have to exclusively learn for Angular 2 got a lot smaller compared to Angular 1.x!
So this was basically just the beginning. From here, you can go more in depth. Follow the links I provided to get started. Also, feel free to drop me a line on Twitter. In general, try to connect with the Angular 2 community (over Twitter, GitHub, and Slack). There are lots and lots of awesome people willing to help you with their enormous expertise.
Thanks to Martin Hochel for reviewing this article
If you enjoyed this post you might want to follow me on Twitter for more news around JavaScript and Angular 2.
Note: This article has been reposted here on DZone with my permission. Please refer to the original article for future updates and more links. }} | https://dzone.com/articles/angular-2-a-getting-started-guide-for-beginners | CC-MAIN-2018-26 | refinedweb | 4,242 | 54.83 |
How do I adjust the size of axes labels and figure titles in plots?
I've been experimenting around with the following code:
from sage.calculus.desolvers import desolve_odeint y,dy = var('y,dy'); g = 9.8; l = 1; f = [dy,-g/l*cos(y)]; v = [y,dy]; t = srange(0,5,0.01); ci = [0,0]; sol = desolve_odeint(f,ci,t,v,rtol=1e-15, atol=1e-10,h0=1e-4,hmax=1e-2,hmin=1e-6,mxstep=10000) p = line(zip(t,sol[:,0]),title=r"$\frac{d^{2}\theta}{dt^2} = -\frac{g}{l} \cos{\theta}$",axes_labels=[r"$t$",r"$\theta$"]); p2 = line(zip(t,sol[:,1])); p2 += text(r"$\frac{d\theta}{dt}$",(-0.5,0),fontsize=27); p2 += text(r"$t(s)$",(5,0.5),fontsize=20); p.show()
which gives this plot:
As you can see the axes labels and title of the figure are insanely small (or at least in my opinion). I'd like the axes labels to be 20 pt and the figure title to be 30 pt. How do I do this? | https://ask.sagemath.org/question/26807/how-do-i-adjust-the-size-of-axes-labels-and-figure-titles-in-plots/ | CC-MAIN-2017-47 | refinedweb | 182 | 60.11 |
The QFileIconProvider class provides icons for QFileDialog to use. More...
#include <qfiledialog.h>
Inherits QObject.
List of all member functions.'s advisable to make all the icons QFileIconProvider returns be of the same size, or at least the same width. This makes the list view look much better.
See also QFileDialog.
Constructs an empty file icon provider.
[virtual]
Returns a pointer to a pixmap which should be used for visualizing the file with the information info.
If pixmap() returns 0, QFileDialog draws the default pixmap.
The default implementation returns particular icons for files, directories, link-files, link-directories, and blank for other types.
If you return a pixmap here, it should be of the size 16x16.
Search the documentation, FAQ, qt-interest archive and more (uses):
This file is part of the Qt toolkit, copyright © 1995-2005 Trolltech, all rights reserved. | https://doc.qt.io/archives/2.3/qfileiconprovider.html | CC-MAIN-2021-21 | refinedweb | 141 | 60.01 |
java.util.BitSet; 27 import net.jcip.annotations.Immutable; 28 29 /** 30 * An encoder useful for converting text to be used within a filename on common file systems and operating systems, including 31 * Linux, OS X, and Windows XP. This encoder is based upon the {@link UrlEncoder}, except that it removes the '*' character from 32 * the list of safe characters. 33 * 34 * @see UrlEncoder 35 */ 36 @Immutable 37 public class FilenameEncoder extends UrlEncoder { 38 39 /** 40 * Data characters that are allowed in a URI but do not have a reserved purpose are called unreserved. These include upper and 41 * lower case letters, decimal digits, and a limited set of punctuation marks and symbols. 42 * 43 * <pre> 44 * unreserved = alphanum | mark 45 * mark = "-" | "_" | "." | "!" | "˜" | "'" | "(" | ")" 46 * </pre> 47 * 48 * Unreserved characters can be escaped without changing the semantics of the URI, but this should not be done unless the URI 49 * is being used in a context that does not allow the unescaped character to appear. 50 */ 51 private static final BitSet SAFE_CHARACTERS = new BitSet(256); 52 private static final BitSet SAFE_WITH_SLASH_CHARACTERS; 53 54 public static final char ESCAPE_CHARACTER = '%'; 55 56 static { 57 SAFE_CHARACTERS.set('a', 'z' + 1); 58 SAFE_CHARACTERS.set('A', 'Z' + 1); 59 SAFE_CHARACTERS.set('0', '9' + 1); 60 SAFE_CHARACTERS.set('-'); 61 SAFE_CHARACTERS.set('_'); 62 SAFE_CHARACTERS.set('.'); 63 SAFE_CHARACTERS.set('!'); 64 SAFE_CHARACTERS.set('~'); 65 SAFE_CHARACTERS.set('\''); 66 SAFE_CHARACTERS.set('('); 67 SAFE_CHARACTERS.set(')'); 68 69 SAFE_WITH_SLASH_CHARACTERS = (BitSet)SAFE_CHARACTERS.clone(); 70 SAFE_WITH_SLASH_CHARACTERS.set('/'); 71 } 72 73 /** 74 * {@inheritDoc} 75 */ 76 @Override 77 public String encode( String text ) { 78 if (text == null) return null; 79 if (text.length() == 0) return text; 80 return encode(text, isSlashEncoded() ? 
SAFE_CHARACTERS : SAFE_WITH_SLASH_CHARACTERS); 81 } 82 83 /** 84 * @param slashEncoded Sets slashEncoded to the specified value. 85 * @return this object, for method chaining 86 */ 87 @Override 88 public FilenameEncoder setSlashEncoded( boolean slashEncoded ) { 89 super.setSlashEncoded(slashEncoded); 90 return this; 91 } 92 93 } | http://docs.jboss.org/modeshape/1.2.0.Final/xref/org/modeshape/common/text/FilenameEncoder.html | CC-MAIN-2015-11 | refinedweb | 314 | 50.43 |
Random Forest is one of the best algorithms after decision trees — you can think of it as a collection of independent decision trees. Each tree produces its own prediction, and the forest's final prediction aggregates all of them. But did you know you can improve the accuracy of the score by tuning the Random Forest's parameters? Rather than depending entirely on adding new data to improve accuracy, you can tune the hyperparameters. In this tutorial, you will learn how to improve the accuracy of a Random Forest classifier.
How Random Forest Works?
In a Random Forest, the algorithm selects a random subset of the training data set for each tree and builds a decision tree on each sub-dataset. For classification (Random Forest Classifier), it then aggregates the predictions of the individual trees — effectively a majority vote — to determine the class of the test object. For regression (Random Forest Regressor), it averages the predictions of the individual trees instead.
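The aggregation idea can be sketched with a toy NumPy example. Note this is only an illustration of the intuition — the tree outputs below are made-up numbers, and scikit-learn's classifier actually averages the trees' class probabilities rather than counting hard votes:

```python
import numpy as np

# Hypothetical predictions from 5 independent trees for one test sample.
tree_class_votes = np.array([1, 0, 1, 1, 0])             # classifier: each tree votes a class
tree_reg_outputs = np.array([3.2, 2.8, 3.0, 3.4, 2.6])   # regressor: each tree predicts a value

# Classifier: take the majority vote across trees.
values, counts = np.unique(tree_class_votes, return_counts=True)
majority_class = values[np.argmax(counts)]

# Regressor: take the average across trees.
averaged_prediction = tree_reg_outputs.mean()

print(majority_class)       # 1   (class 1 got 3 of 5 votes)
print(averaged_prediction)  # 3.0 (mean of the five tree outputs)
```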
When to use Random Forest?
There are various machine learning algorithms, and choosing the best one requires some knowledge. Here are the things you should keep in mind before using the Random Forest algorithm:
1. Random Forest works very well on both categorical variables (Random Forest Classifier) and continuous variables (Random Forest Regressor).
2. Use it to build a quick benchmark model, as it is fast to train.
3. It is very useful if your dataset has many outliers, missing values, or skewed data.
In the background, a Random Forest has hundreds of trees. Because of this, prediction takes more time, so you should not use it for real-time predictions.
Hyper Parameters Tuning of Random Forest
Step 1: Import the necessary libraries.

import numpy as np
import pandas as pd
import sklearn
Step 2: Import the dataset.
train_features = pd.read_csv("train_features.csv")
train_label = pd.read_csv("train_label.csv")
You can download the dataset here. It is the same dataset used for tuning a Support Vector Machine.
Step 3: Import the Random Forest Algorithm from the scikit-learn.
from sklearn.ensemble import RandomForestClassifier, RandomForestRegressor
print(RandomForestClassifier())
print(RandomForestRegressor())
Step 4: Choose the parameters to be tuned.
On running step 3, you will see a lot of parameters for both the Random Forest Classifier and Regressor. I am choosing the important ones: the number of estimators/trees (n_estimators) and the maximum depth of the tree (max_depth).
Step 5: Call the classifier constructor and make the expected list of all the parameters.
You will make a list of all the parameters you chose in step 4, like in this example.
rfc = RandomForestClassifier()
parameters = {
    "n_estimators": [5, 10, 50, 100, 250],
    "max_depth": [2, 4, 8, 16, 32, None]
}
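Before running the search, it is worth estimating its cost: GridSearchCV fits one model per parameter combination per cross-validation fold, so this grid with the 5-fold cross-validation used in the next step means 150 model fits. A quick sketch of the arithmetic:

```python
from itertools import product

n_estimators = [5, 10, 50, 100, 250]
max_depth = [2, 4, 8, 16, 32, None]

# GridSearchCV tries every combination of the two lists
combinations = list(product(n_estimators, max_depth))
total_fits = len(combinations) * 5   # times cv=5 folds

print(len(combinations), total_fits)  # 30 candidates, 150 fits
```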
Step 6: Use the GridSearchCV model selection for cross-validation
You will pass the classifier, the parameters, and the number of cross-validation folds to the GridSearchCV method. In this example, I am passing cv=5, i.e. 5-fold cross-validation. Then you will fit the GridSearchCV to the training features and the training labels.
Please note that you have to convert the label values into a one-dimensional array. That's why we are using the ravel() method.
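What ravel() does here is easy to see in isolation: a one-column label frame's .values has shape (n, 1), and ravel() flattens it to the (n,) shape that scikit-learn expects:

```python
import numpy as np

labels = np.array([[0], [1], [1], [0]])  # shape (4, 1), like train_label.values
flat = labels.ravel()                    # shape (4,)

print(labels.shape, flat.shape)
```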
from sklearn.model_selection import GridSearchCV
cv = GridSearchCV(rfc, parameters, cv=5)
cv.fit(train_features, train_label.values.ravel())
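The tutorial's CSV files are not included here, so as a self-contained illustration the same flow can be run end to end on a synthetic dataset (the dataset, grid values, and sizes below are made up to keep the search fast; they are not from the tutorial):

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import GridSearchCV

# Synthetic stand-in for train_features.csv / train_label.csv
X, y = make_classification(n_samples=120, n_features=8, random_state=0)

# A deliberately small grid so the example runs quickly
parameters = {"n_estimators": [5, 10], "max_depth": [2, 4]}

cv = GridSearchCV(RandomForestClassifier(random_state=0), parameters, cv=3)
cv.fit(X, y)

print(cv.best_params_)  # a dict with the winning max_depth and n_estimators
```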
Step 7: Print the best Parameters.
This feature is available in GridSearchCV. You can use cv.best_params_ to know the best parameters. But it doesn't print what the algorithm is doing inside. That's why we have defined a method for printing every iteration and its score.
def display(results):
    print(f'Best parameters are: {results.best_params_}')
    print("\n")
    mean_score = results.cv_results_['mean_test_score']
    std_score = results.cv_results_['std_test_score']
    params = results.cv_results_['params']
    for mean, std, params in zip(mean_score, std_score, params):
        print(f'{round(mean, 3)} + or -{round(std, 3)} for the {params}')
display(cv)
It will print all the iteration results produced by the function defined above, and you can clearly see it print out the best score and the parameters. In this example, the best parameters are:
{'max_depth': 8, 'n_estimators': 250}
Use it in your random forest classifier for the best score.
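One convenient way to apply them (using the values reported above) is to unpack the dictionary straight into the constructor:

```python
from sklearn.ensemble import RandomForestClassifier

best_params = {"max_depth": 8, "n_estimators": 250}  # the result reported above
rfc = RandomForestClassifier(**best_params)

print(rfc.max_depth, rfc.n_estimators)
```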
Conclusion
Parameter tuning is one of the best ways to improve the accuracy of a model. There are also other ways, like adding more data, etc., but they obviously add cost and time. Therefore I recommend you first go with parameter tuning if you have sufficient data, and then move on to adding more data.
That’s all for now. If you want to get featured on Data Science Learner Page. Then contact us to know what are the requirements. If you have any query, then message us. You can also message on our official Facebook Page. | https://www.datasciencelearner.com/how-to-improve-accuracy-of-random-forest-classifier/ | CC-MAIN-2020-29 | refinedweb | 818 | 58.58 |
Odoo Help
import xml file into PDF
Hello,
I have an official company PDF form, that I can not modify. We use this form to fill out and submit service tickets back to the parent company.
Right now we are manually entering the data, exporting it to XML and emailing it to the company.
Is it possible for OE7 to import the xml file into the PDF to populate the PDF form?
We are able to manually do this from Acrobat, but would like to automate if possible.
Thanks Er | https://www.odoo.com/forum/help-1/question/import-xml-file-into-pdf-48905 | CC-MAIN-2017-04 | refinedweb | 116 | 63.7 |
I:
- everything necessary for LINQ -- implicitly typed locals, anonymous types, lambda expressions, extension methods, object and collection initializers, query comprehensions, expression trees, improved method type inference
- partial methods
- automatically implemented properties.
oh man.. thats so bad.. I was expecting extension properties to be part of next ver of c#…. you will realize the value or importance of it only if you work with extension methods.. in last 6 months atleast 10 times I have thought why these guys(c# guys) didnt include extension properties… you ask your WPF team(i was missing it during my databindings) then you will realise its worth more than silly things(for me) like the partial methods.. hopefully you will get a big budget next time 🙂
I can think of far more uses for extension operators and extension constructors – but maybe that’s just my domain.
I think the thing that gets me is that not everyone who uses .NET uses WPF. In fact, I find when I have to put together a desktop app (my primary work is web-based), I use winforms because it’s generally the simplest thing that solves the problem. However, most of the time, I could care less about WPF (or WF, for that matter).
Disappointing.
Frankly, for WPF, some automated way to generate property change notifications and/or propertyinfoof() would result in a much higher productivity increase than extension properties, IMO. So if they are the main potential beneficiaries of this feature, I would hope to see it after those other two rather than before.
Side note: captchas are still acting weird. When I load a page, I see a new captcha (i.e. it’s different from what was there last time I posted), but when I actually try to enter it when posting, I get the "code was invalid" error message; I have to reload the captcha image itself to get a "valid" one.
@Rick – So you really DO care about WPF and WF [if you COULD care less, you much have a level of caring which could be decreased – it is only if you could NOT care less that you would have no caring]. Just as an FYI, I have adopted the use of WF for almost all of the logic even in Winforms apps, once it becomes "comfortable" there are really some amazing capabilities that can easily be leveraged.
@Pavel – I agree with you 100%. In fact, I am not sure extension properties really help databinding at all since they are not part of the intrinsic metadata of the type itself.
Can you elaborate any on the differing needs of WPF vs. the initial implementation?
My current expectation as a C# programmer is that an extension property "Foo" of type T would be about (nearly? almost exactly?) the same as two extension methods: "T get_Foo(this SomeObject)" and "void set_Foo(this SomeObject, T value)".
While I can understand the problems with making my "about the same as" statement working with the language, it’s not clear how extension properties would/could be (significantly) different than extension methods.
@TheCPUWizard – Yes, I meant "could not care less". Noticed it after posting, I had hoped that was clear by the context. Didn’t realize it needed clarification.
I second J. Daniel Smith’s request for elaboration, it would at least make an interesting article.
To me, the natural consumer of extension properties is fluent interfaces. Fluent nHibernate, fluent NUnit, home-grown fluent interfaces, etc. It’s a real pain to have to remember where you do need parens (because it’s an extension method) and when you don’t.
Here’s hoping it makes the cut in a future version. And I’ll also vote in favor of hearing the details — it definitely would be an interesting article.
I don’t understand how extension properties would be a good idea. Sure, they look cool and may get rid of some parenthesis, but i don’t understand how something that looks like a member types (but it’s actually static) isn’t a bad idea.
To be fair, I have no idea what extension properties would look like in the mind of the C# team.
This article makes it look as if the only customers that count are internal Microsoft customers. I expect a lot of developers outside of the WPF team would have been very happy to have extension properties, I know I would. What about the rest of your customers, especially those who actually pay money for your product?
Sorry, I must have done a really exceptionally poor job of explaining myself, since you’ve completely and utterly misunderstood me. The WPF team is a few dozen people, tops. We don’t design and implement language features for the convenience of a few dozen people, even if they are very nice people just across the plaza from us. That would be ludicrous. We design features for the WPF team’s hundreds of thousands of customers. You know, the people who pay us money for the WPF designer, and then use it to make applications that they then sell to their customers. The WPF team members are the people who best understand the needs of those thousands of customers; when I say we’re designing a feature “for the WPF team”, it’s not for their internal development team, it’s for their customers. Is that now clear? — Eric
@Scott Schlesier,
Could it possibly be taken one step further? I mean could the C# language become – in part, at least – an open-source project where the MS team people do what THEY can from what THEY think is necessary (on the gotta have / nice to have / bad idea basis), and the rest of the community does the rest, if they want it badly enough?
The users can then purchase the "original" C# with VS/.NET and download the extensions for free, or for a price that looks reasonable from the average developer’s salary (or average small consultancy business’ income) viewpoint…
Personally, I’ve got enough of ITIL/PRINCE2/SCRUM/etc. certifications to understand what the resource and financial restrictions are all about (and why, therefore, some features not only have to be cut, but also reviled as "not serving the needs of the customers" so that the team doesn’t feel so bad about it 🙂 ), but one thing I never learned (and never will) is to like and appreciate restrictions (I can tolerate them for now: that’s as much as you gonna get from me).
I know we can do it now: no one is stopping us, but what I mean is some sort of approval, endorsement, certification, whatever, from Microsoft, saying, basically, "this extension is approved by the C# designers, can be downloaded from MSDN, and is guaranteed to work as well as the rest of the language", to calm down the nervous Nellies from the executive management who want everything legal and official. Would that be possible?
You mention above : "and determined that we did not have the resources to do more than about half the things in the "gotta have" bucket. So we cut half that stuff."
So what happens to the half of the features that were cut for C# 4.0 – do they then become the absolutely-must-have top 5 or 6 things for C# 5.0? Or do you again start from scratch for C# 5.0?
Sure, things will change between now and the time for drawing up the spec for C# 5.0 comes around, but keeping the "dropped features from 4.0" as top priority for 5.0 will lend continuity to the process and also help reassure developers that if not in this version, then at least the next.
great posts , by the way.
Thanks.
@Eric,
What I got from your post was the feature didn’t meet the needs of WPF, but presumably it did address non-WPF needs. I suspect that there are still many more C# developers out there who are not using WPF than are using WPF. It seems the non-WPF folks missed out because the WPF team couldn’t use the implementation of extension properties.
I’m guessing the key is that the extension properties were intended to have been used in most of not all WPF applications, but outside WPF it would merely be an occaisionally used bit of syntactic sugar.
@Denis,
Mono is already the open source alternative. Though they have their hands pretty full just keeping up with the core features so far. Extending libraries is one thing, and I love that MS is embracing OSS more and more. But, I think getting acceptance for non-standard extensions to the C# language will be very difficult.
It would be very interesting to be able to see the list of features that were considered for C#3 in the gotta have/nice to have/bad idea buckets, maybe along with reasoning for a few of them, to see the behind-the-scenes thought processes…
Thank you. Every time you explain why C# doesn’t do something or does it a certain way, any frustration with the fact quickly evaporates and turns into understanding. This blog always makes me feel better about my most favourite language ever, C#. And I’m wishing MSDN docs linked to your posts whenever possible, because to me, answering the "why"s is no less important than answering the "how"s.
Eric, what about static extension methods? What I mean by that is extension methods that can apply to a type like static methods.
For example, you could have an extension method Yesterday that you could call on DateTime. Obviously, you could create a DateTimeUtil but other coders working on the same project might not know that it exists!
I know it would have been useful to me in a couple of cases. Has it been considered? If yes, why was it rejected? Could it be added in a future version?
Cool – I’ve actually often found myself wondering lately why there are no extension properties.
It’s a little disappointing, but really nothing to get bent out of shape over. I don’t see why so many people need to complain. There’s nothing you’d be able to do with extension properties that you can’t already do with extension methods, except for having a slightly cleaner interface. Properties ARE methods, fundamentally, they’re just syntactic sugar for a getter and setter.
I’m actually more interested in knowing what was so important for the WPF team (and its customers) that pushed the feature into the "must have" bucket in the first place.
From what I gather, the primary benefit of a property over a method is not that you don’t have to type (), it’s that the designers (such as WPF, windows forms, etc) can bind to properties but not to methods. It sounds like Eric is saying that due to some as-of-yet unexplained reason, their design did not allow this binding to work as expected. If it doesn’t help with that scenario, then all that’s left is the syntactic sugar letting you omit the (), which is not an overly useful language feature and is sure to get in the way of designing a better implementation later.
‘In retrospect, we should have gotten input and feedback from the primary customer much earlier…’
This is starting to sound a bit like Agile 😉
@Pavel
You wrote that an automatic way to do generate property change notifications would be the more interesting feature. I wrote something to do exactly that, here:
It is usable from standard .NET code, and it works entirely at runtime (no codegen).
The problem with automatic property change notifications is when I have a property like this:
string FullName { get { return FirstName + " " + LastName; } }
Somehow a change to either FirstName or LastName has to notify subscribers that FullName has also changed. You would have to be able to annotate FullName to indicate what proprties to notify changes for.
@Gabe, Actually my SmartData architecture addresses exactly that type of situation. An example (very basic) implementation is on CodePlex []. Please feel free to contact me directly for more information: david "dot" corbin "at" dynconcepts "dot" com.
In order to be able to set a property in XAML, it has to be an actual property. This means that if I want to be able to write <Foo Spam="eggs"/> then the Foo class has to have a Spam property of type string.
However, the needs of WPF are more than that. Any given object might have dozens of inherited properties, only a few of which you may care to set, so you don’t want to waste tons of storage on all those properties’ default values. This means that the property mechanism must support some sort of backing store other than a dedicated field (as automatic properties currently have).
Additionally, objects may have properties that they don’t even know about. For example, the child items of a grid need to have some way of storing what row and column of the grid they belong in. This is accomplished by saying that all UI elements have a Grid.Row and a Grid.Column property. That sort of thing is what I typically would think of as an extension property.
The types of properties used mostly in WPF objects are DependencyProperties, part of which includes a backing store for only the non-default property values. DependencyProperties also facilitate data binding, styling, animation, and more. Since declaring a DependencyProperty is currently a bit ugly, it would be nice if C# included a mechanism to support it better (hell, even macros would do fine).
Presumably the WPF team looked at the extension property mechanism they came up with, found that it didn’t meet at least one of these needs, and said they couldn’t use it. I’d like to know what the mechanism was and how it differed from what WPF needs.
Eric, thanks for sharing that story.
I think everybody would love to have extension properties.
In my opinion, properties are different members from methods in a sense that they highly need to be discoverable. Databinding is of course almost entirely based on properties and properties metadata, in WPF but also in any other interfaces (winforms, asp.net).
And if it’s been a problem for WPF, I think there are good chance that we all meet the same issues one day.
Personnaly, I prefer having to wait the best possible feature.
It’s very saddening to see extension properties meet the axe, I see very little in code that looks dumber than
if(session.Is().Not.Null)
but it’s nice to see Microsoft be open about the reason for something than sometimes that it feels features get the axe with little to not explanation.
Couldn’t extensions just be defined when you prefix a variable, method, or property with a class in perentesis?
(Person) private int height;
——————————-
(Person) public Person BestFriend
{
get;
set;
}
——————————-
(Person) public static void Person(string firstName, string lastName)
{…}
Seems that this would make extensions a lot simpler, (without the extra ‘this’ parameter) and would make properties and variables extension-able. Newbie programmer’s thoughts; is this confusable with some other C# format?
What about STATIC extension methods? It's very interesting there's no mention of it here, as it's an oft-requested feature that would be extremely useful.
Eric, We can count on this for 5.0 right?
I wouldn't if I were you. – Eric
With the rising popularity of fluent interfaces (including internally at Microsoft) it seems like extension properties would be more and more useful. From setting "properties" like IsRequired() in the new Entity Framework Code First fluent mappings, to being able to namespace ASP.NET MVC HTML Helpers (for example adding Html.Product.SomeProductRelatedHelper() where Product is an extension property on System.Web.Mvc.HtmlHelper), to the canonical Rubyesque 10.Days.Ago example, it seems like it would fill a missing gap in the language. Doing some searches it looks like there is a large demand for this, and I for one would love to see it!
I would love to see extension properties, if only to pretty my code by getting rid of many brackets (), but understand the reasons why they are not in yet. C# keeps getting better with each version. Good work.
BitFlipper does a great job of designing extension properties, constructors, operators, and methods here:
channel9.msdn.com/…/257556-C-Extension-Properties
I propose that design be used in the next version of C#. It requires moving on from the existing implementation, but it's much more natural, and makes the path forward obvious.
PyX — Example: graphstyles/density.py
Drawing a density plot
from pyx import *

# Mandelbrot calculation contributed by Stephen Phillips

# Mandelbrot parameters
re_min = -2
re_max = 0.5
im_min = -1.25
im_max = 1.25
gridx = 100
gridy = 100
max_iter = 10

# Set-up
re_step = (re_max - re_min) / gridx
im_step = (im_max - im_min) / gridy
d = []

# Compute fractal
for re_index in range(gridx):
    re = re_min + re_step * (re_index + 0.5)
    for im_index in range(gridy):
        im = im_min + im_step * (im_index + 0.5)
        c = complex(re, im)
        n = 0
        z = complex(0, 0)
        while n < max_iter and abs(z) < 2:
            z = (z * z) + c
            n += 1
        d.append([re, im, n])

# Plot graph
g = graph.graphxy(height=8, width=8,
                  x=graph.axis.linear(min=re_min, max=re_max, title=r"$\Re(c)$"),
                  y=graph.axis.linear(min=im_min, max=im_max, title=r'$\Im(c)$'))
g.plot(graph.data.points(d, x=1, y=2, color=3, title="iterations"),
       [graph.style.density(gradient=color.rgbgradient.Rainbow)])
g.writeEPSfile()
g.writePDFfile()
Description
Two-dimensional plots where the value of each point is represented by a color can be created with the density style. The data points have to be spaced equidistantly in each dimension, with the possible exception of missing data.
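The escape-time loop at the heart of the example can be exercised independently of PyX; a minimal sketch of the same computation:

```python
def mandelbrot_iterations(c, max_iter=10):
    """Number of iterations before z escapes |z| >= 2, capped at max_iter."""
    z = complex(0, 0)
    n = 0
    while n < max_iter and abs(z) < 2:
        z = (z * z) + c
        n += 1
    return n

print(mandelbrot_iterations(0))       # c = 0 never escapes -> 10
print(mandelbrot_iterations(1 + 0j))  # escapes quickly -> 2
```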
For data which is not equidistantly spaced but still arranged in a grid,
graph.style.surface can be used, which also provides a smooth representation by means of a color interpolation between the mesh points. Finally, for completely unstructured data,
graph.style.rect can be used. provides those gradients in other color spaces as shown in the example.
.swing.plaf;

import java.awt.Insets;
import javax.swing.plaf.UIResource;

/*
 * A subclass of Insets that implements UIResource. UI
 * classes that use Insets values for default properties
 * should use this class.
 * <p>
 * <strong>Warning:</strong>
 * Serialized objects of this class will not be compatible with
 * future Swing releases. The current serialization support is
 * appropriate for short term storage or RMI between applications running
 * the same version of Swing. As of 1.4, support for long term storage
 * of all JavaBeans™ has been added to the <code>java.beans</code> package.
 * Please see {@link java.beans.XMLEncoder}.
 *
 * @see javax.swing.plaf.UIResource
 * @author Amy Fowler
 */
public class InsetsUIResource extends Insets implements UIResource
{
    public InsetsUIResource(int top, int left, int bottom, int right) {
        super(top, left, bottom, right);
    }
}
Advanced MessageBox for Windows Phone
This article shows how to create a customised messagebox: asynchronous, custom button, no beep, etc.
Windows Phone 8
Windows Phone 7.5
Introduction
The standard phone message box is very easy to use, and is created with just a single line:

MessageBox.Show("Hello!");
But these message boxes aren't easy to customize.
We will learn in this article how to :
- create an asynchronous native message box
- customize buttons
- remove beep
Accessing advanced features through XNA
There is a way to access advanced features of the native messageBox through one back door using XNA.
Tip: You can access XNA framework in a Windows Phone 8 project, but only for some namespaces (you can't make full-XNA games for Windows Phone 8). The list of allowed namespaces is in XNA Framework and Windows Phone 8 development.
If you look at the Microsoft.Xna.Framework.GamerServices assembly, you will notice that the Guide object has the following methods:
The BeginShowMessageBox() method displays a native message box asynchronously, with the following parameters:
- title: Title of the message box
- text: text to be displayed in the message box
- buttons: Legends associated with the buttons on the message box. The maximum number of buttons is two.
- focusButton: 0-based index of the button that is highlighted by default.
- icon: Type of icon in the message box.
- callback: method to call when the asynchronous operation is complete.
- state: A unique, user-created object that identifies this request.
title and text correspond to the Silverlight parameters caption and messageBoxText.
One difference here: the text cannot exceed 256 characters; otherwise an exception is thrown.
The buttons parameter is much more advanced than in the Silverlight MessageBox: we can specify the button text, for example replacing the default "ok" and "cancel" labels.
This offers us a lot of new possibilities, but take care to localize your buttons.
How to make a synchronous message box?
Being able to launch an asynchronous dialog is pretty cool, but in most cases we expect it to be synchronous. In order to do this, simply retrieve the result of the asynchronous call and wait for its execution.
We now have a native, synchronous and customizable message box!
Test result of the dialog
If you want to test the result of your dialog box to know which button has been clicked, simply call the method EndShowMessageBox:
int? choice = Microsoft.Xna.Framework.GamerServices.Guide.EndShowMessageBox(result);
if (choice.HasValue)
{
    if (choice.Value == 0)
    {
        // user clicks the first button
    }
}
How to remove the messagebox sound?
To remove the beep or the vibration, just change the icon... It's not logical, but that's how it works.
XNA is a common platform across Windows, Xbox and Windows Phone. On Windows, we are used to displaying icons like 'warning', 'alert', etc. on the left of the dialog box, but this isn't ideal for a mobile screen. The distinction between a normal dialog and a warning/alert dialog box is therefore the sound/vibration. To remove the sound, we just have to set the icon type to None.
No sound will be played.
Conclusion
- Add a reference to Microsoft.Xna.Framework.GamerServices (not necessary with a Windows Phone 8 project)
- Write the following code:
IAsyncResult result = Microsoft.Xna.Framework.GamerServices.Guide.BeginShowMessageBox(
    "Quizz",
    "What is your favorite Windows Phone?",
    new string[] { "Nokia Lumia 820", "Nokia Lumia 920" },
    0,
    Microsoft.Xna.Framework.GamerServices.MessageBoxIcon.None,
    null,
    null);
result.AsyncWaitHandle.WaitOne();
int? choice = Microsoft.Xna.Framework.GamerServices.Guide.EndShowMessageBox(result);
if (choice.HasValue)
{
    if (choice.Value == 0)
    {
        // User clicks on the first button
    }
}
R2d2rigo - Wiki competition template

You marked this article as an internal wiki competition, but not the other you published. Please correct the wrong one.
r2d2rigo 03:06, 4 December 2012 (EET)
Chintandave er - Thanks and Sub-edited!
Hi, Thanks for this article.
I have sub-edited this article. Also I changed the article title. Also I have corrected contest template as it was for internal employee and I think you are not employee of Nokia or MS. Still let me know if I am wrong.
Have some suggestions for you. Please follow this suggestions and correct those in your articles.
You might want to check out wiki help articles Help:Formatting and Help:Wiki Article Review Checklist.
RegardsChintan Dave
Chintandave er 07:21, 4 December 2012 (EET)
Yan - You can access the XNA framework in a WP8 project??
yan_ 14:22, 4 December 2012 (EET)
R2d2rigo - XNA in WP8

Yes you can, yan_, but only for some namespaces. You can't make full-XNA games for WP8.
r2d2rigo 12:23, 5 December 2012 (EET)
Yan - But is it not deprecated to use XNA in a WP8 application?
yan_ 10:59, 6 December 2012 (EET)
R2d2rigo - You can only use some namespaces

Here is the list of allowed ones:
r2d2rigo 16:10, 6 December 2012 (EET)
Yan -thanks
yan_ 17:23, 6 December 2012 (EET)
Hamishwillee - Subedit/Review
Hi Rudy/All
I have given this a basic subedit for wiki style and added a "tip" with the above link on supported XNA namespaces.
Rudy, I like this article. It could be improved by providing references to the classes and guide on the standard message box, and by explaining what the standard box offers (buttons, text, icon) so it is clear what is the delta that your advanced approach offers. As it is now, you state that the new box is better but it isn't clear why. If it were me I'd have a "old" and "new" image next to each other.
It would also be excellent to have a downloadable app where we could try launching both the standard box and your one.
In terms of the competition, this looks like it isn't really a WP8 feature - correct? Which means that it isn't necessarily exactly what we wanted. It is however excellent material for the wiki (especially with that example code)
RegardsHamish
hamishwillee 06:41, 12 December 2012 (EET)
Mohamed.fekry - Need Help please
i need to style the text at the alert; all i need is to change the FlowDirection="RightToLeft" when the application language is Arabic. Any help please!!
mohamed.fekry (talk) 15:27, 24 March 2014 (EET) | http://developer.nokia.com/community/wiki/Advanced_MessageBox_for_Windows_Phone | CC-MAIN-2014-41 | refinedweb | 1,025 | 56.25 |
I have been neglecting the test suite in the Dart version of the ICE Code Editor. I still have the original four specs that I wrote for it and they still pass. The code still passes dart_analyzer. So it ought to be in pretty decent shape. Still, I need to go back to add some tests to catch regressions and the like.
But I'll leave that for another day. Maybe even for a #pairwithme session.
Tonight I am going to start working through new code, but I hope to be better about driving it with tests. The next feature that I want to add to ICE is the ability to interface with localStorage. Before I do that, I need to be able to deflate and inflate compressed data that will be stored therein. A little while back, I spiked some exploratory code to find that Dart could use its Zlib package to work with this data. Unfortunately, I also found that it was only capable of doing so on the server. Since this needs to work in the browser, I need to js-interop with the legacy JavaScript.
From the JavaScript version, I know how the string "Howdy, Bob!" should look when it is encoded (gzip'd and mime64 encoded). This makes it easy to write two tests:
import 'package:unittest/unittest.dart'; import 'package:ice_code_editor/store.dart'; import 'dart:html'; main() { group("gzipping", () { test("it can encode text", (){ expect(Store.encode("Howdy, Bob!"), equals("88gvT6nUUXDKT1IEAA==")); }); test("it can decode as text", (){ expect(Store.decode("88gvT6nUUXDKT1IEAA=="), equals("Howdy, Bob!")); }); }); }From there, the tests drive me. First up, I need to change the message indicating that I do not have a
store.dartfile:
Failed to load a file store_test.dart:-1 GET fixing that, my tests tell me that I need a
Storeclass, then that I need static methods
encodeand
decode. This leaves me with:
library ice; class Store { static String encode(String string) { } static String decode(String string) { } }At this point, the message that I need to change is:
FAIL: gzipping it can encode text Expected: '88gvT6nUUXDKT1IEAA==' but: expected String:'88gvT6nUUXDKT1IEAA==' but was null:<null>.Finally, to make that pass, I use a bit of knowledge left over from the spike. The
RawDeflate.deflate()call from Dart looks like:
js.context.RawDeflate.deflate(string)thanks to the magic of js-interop. That returns a string of gzip'd data. I then use
CryptoUtils.bytesToBase64()from the
dart:cryptopackage (why is base64 encoding in
dart:crypto, but zlib is not?) package to base64 encode the gzip compressed data. The end result that make my test pass is:
library ice; import 'dart:crypto'; import 'package:js/js.dart' as js; class Store { static String encode(String string) { var gzip = js.context.RawDeflate.deflate(string); return CryptoUtils.bytesToBase64(gzip.codeUnits); } static String decode(String string) { } }To support decoding, I implement the reverse as:
library ice;

import 'dart:crypto';
import 'package:js/js.dart' as js;

class Store {
  static String encode(String string) {
    var gzip = js.context.RawDeflate.deflate(string);
    return CryptoUtils.bytesToBase64(gzip.codeUnits);
  }

  static String decode(String string) {
    var bytes = CryptoUtils.base64StringToBytes(string);
    var gzip = new String.fromCharCodes(bytes);
    return js.context.RawDeflate.inflate(gzip);
  }
}

With that, I have two passing tests that will let me work with existing data stored from the JavaScript version of the code editor. This seems like a fine stopping point for tonight. At least until my next #pairwithme session!
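For readers without a Dart toolchain, the deflate-plus-base64 pipeline above can be sketched in Python. This is my illustration, not part of the original post, and `wbits=-15` is assumed to match RawDeflate's raw (headerless) deflate stream:

```python
import base64
import zlib

def encode(text: str) -> str:
    # Raw deflate (no zlib header/trailer), like JavaScript's RawDeflate.deflate()
    compressor = zlib.compressobj(9, zlib.DEFLATED, -15)
    deflated = compressor.compress(text.encode("latin-1")) + compressor.flush()
    return base64.b64encode(deflated).decode("ascii")

def decode(encoded: str) -> str:
    # Base64 -> raw deflate bytes -> original text
    deflated = base64.b64decode(encoded)
    return zlib.decompress(deflated, -15).decode("latin-1")

if __name__ == "__main__":
    # Decompression is deterministic, so the blog's known string decodes cleanly
    print(decode("88gvT6nUUXDKT1IEAA=="))   # Howdy, Bob!
    print(decode(encode("Howdy, Bob!")))    # Howdy, Bob!
```

Note that the encoded bytes can differ from RawDeflate's output byte-for-byte (compressors make different choices at different levels), which is why the safe check is the round trip plus decoding the known string.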
Day #745 | https://japhr.blogspot.com/2013/05/bdding-dart-gzip-with-js-interop.html
SimpleRNN

- class paddle.nn.SimpleRNN(input_size, hidden_size, num_layers=1, direction='forward', time_major=False, dropout=0.0, activation='tanh', weight_ih_attr=None, weight_hh_attr=None, bias_ih_attr=None, bias_hh_attr=None, name=None)
Multilayer Elman network(SimpleRNN). It takes input sequences and initial states as inputs, and returns the output sequences and the final states.
Each layer inside the SimpleRNN maps the input sequences and initial states to the output sequences and final states in the following manner: at each step, it takes step inputs (\(x_{t}\)) and previous states (\(h_{t-1}\)) as inputs, and returns step outputs (\(y_{t}\)) and new states (\(h_{t}\)).

\[
\begin{aligned}
h_{t} & = act(W_{ih}x_{t} + b_{ih} + W_{hh}h_{t-1} + b_{hh}) \\
y_{t} & = h_{t}
\end{aligned}
\]
where \(act\) is the activation function selected by the activation argument.
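The recurrence above can be sketched in plain NumPy (an illustration only, not PaddlePaddle's implementation; weight shapes follow the Variables section below):

```python
import numpy as np

def simple_rnn_step(x_t, h_prev, W_ih, b_ih, W_hh, b_hh, act=np.tanh):
    # h_t = act(W_ih @ x_t + b_ih + W_hh @ h_{t-1} + b_hh); y_t = h_t
    h_t = act(W_ih @ x_t + b_ih + W_hh @ h_prev + b_hh)
    return h_t, h_t  # (new state, step output)

input_size, hidden_size = 16, 32
rng = np.random.default_rng(0)
W_ih = rng.standard_normal((hidden_size, input_size))   # [hidden_size, input_size]
W_hh = rng.standard_normal((hidden_size, hidden_size))  # [hidden_size, hidden_size]
b_ih = rng.standard_normal(hidden_size)
b_hh = rng.standard_normal(hidden_size)

h = np.zeros(hidden_size)      # zero initial state, as in the layer's default
for t in range(5):             # walk a 5-step input sequence
    x_t = rng.standard_normal(input_size)
    h, y = simple_rnn_step(x_t, h, W_ih, b_ih, W_hh, b_hh)
print(y.shape)  # (32,)
```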
Using key word arguments to construct is recommended.
- Parameters
input_size (int) – The input size for the first layer’s cell.
hidden_size (int) – The hidden size for each layer’s cell.
num_layers (int, optional) – Number of layers. Defaults to 1.
direction (str, optional) – The direction of the network. It can be “forward” or “bidirect”(or “bidirectional”). When “bidirect”, the way to merge outputs of forward and backward is concatenating. Defaults to “forward”.
time_major (bool, optional) – Whether the first dimension of the input means the time steps. Defaults to False.
dropout (float, optional) – The dropout probability. Dropout is applied to the input of each layer except for the first layer. Defaults to 0.
activation (str, optional) – The activation in each SimpleRNN cell. It can be tanh or relu. Defaults to tanh.
weight_ih_attr (ParamAttr, optional) – The parameter attribute for weight_ih of each cell. Defaults to None.
weight_hh_attr (ParamAttr, optional) – The parameter attribute for weight_hh of each cell. Defaults to None.
bias_ih_attr (ParamAttr, optional) – The parameter attribute for the bias_ih of each cells. Defaults to None.
bias_hh_attr (ParamAttr, optional) – The parameter attribute for the bias_hh of each cells. Defaults to None.
name (str, optional) – Name for the operation (optional, default is None). For more information, please refer to Name.
- Inputs:
inputs (Tensor): the input sequence. If time_major is True, the shape is [time_steps, batch_size, input_size], else, the shape is [batch_size, time_steps, hidden_size].
initial_states (Tensor, optional): the initial state. The shape is [num_layers * num_directions, batch_size, hidden_size]. If initial_state is not given, zero initial states are used.
sequence_length (Tensor, optional): shape [batch_size], dtype: int64 or int32. The valid lengths of input sequences. Defaults to None. If sequence_length is not None, the inputs are treated as padded sequences. In each input sequence, elements whose time step index are not less than the valid length are treated as paddings.
- Returns
the output sequence. If time_major is True, the shape is [time_steps, batch_size, num_directions * hidden_size], else, the shape is [batch_size, time_steps, num_directions * hidden_size]. Note that num_directions is 2 if direction is “bidirectional” else 1.
final_states (Tensor): final states. The shape is [num_layers * num_directions, batch_size, hidden_size]. Note that num_directions is 2 if direction is “bidirectional” (the index of forward states are 0, 2, 4, 6… and the index of backward states are 1, 3, 5, 7…), else 1.
- Return type
outputs (Tensor)
- Variables:
weight_ih_l[k]: the learnable input-hidden weights of the k-th layer. If k = 0, the shape is [hidden_size, input_size]. Otherwise, the shape is [hidden_size, num_directions * hidden_size].
weight_hh_l[k]: the learnable hidden-hidden weights of the k-th layer, with shape [hidden_size, hidden_size].
bias_ih_l[k]: the learnable input-hidden bias of the k-th layer, with shape [hidden_size].
bias_hh_l[k]: the learnable hidden-hidden bias of the k-th layer, with shape [hidden_size].
Examples
import paddle

rnn = paddle.nn.SimpleRNN(16, 32, 2)
x = paddle.randn((4, 23, 16))
prev_h = paddle.randn((2, 4, 32))
y, h = rnn(x, prev_h)
print(y.shape)  # [4, 23, 32]
print(h.shape)  # [2, 4, 32]

https://www.paddlepaddle.org.cn/documentation/docs/en/api/paddle/nn/SimpleRNN_en.html
I recently found out about the technique called dynamic programming and I stumbled upon a problem which I can't figure out. You are given a list of numbers in the beginning and you need to do sums on it as if you were cutting it. If the list has only one element, you don't sum it. If it has more, you sum the elements and then cut it in every possible way. So if the list has n elements, there are just n-1 ways to cut it. The picture will explain:
I first wanted to sum up all of the summable parts and I expected the result 20 (11 + 9) (even though the correct answer is 9), but I thought it would be a good start. But my code returns the number 37 and I have no idea why. What am I doing wrong?
summ = 0
def Opt( n ):
global summ
if len( n ) == 1:
return 0
else:
summ += sum( n )
for i in range( 1,len( n ) ):
summ += Opt( n[ :i ] ) + Opt( n[ i: ] )
return summ
print( Opt( [ 1,2,3 ] ) )
I think this is what you want:
def Opt(n):
    if len(n) == 1:
        return 0
    else:
        return sum(n) + min(Opt(n[:i]) + Opt(n[i:]) for i in range(1, len(n)))
Example:
>>> Opt([1])
0
>>> Opt([1, 2])
3
>>> Opt([2, 3])
5
>>> Opt([1, 2, 3])
9
>>> Opt([1, 2, 3, 4])
19
Dynamic programming is about dividing the "big problem" into small subproblems.
So, first of all, you should identify how the big problem is related to the subproblems. You do this by writing a recurrence relation. In this case:
Opt(nums) = sum(nums) + min(...)
You also need a starting point:
Opt(nums) = 0 iff len(nums) == 1
As you can see, once you have written the recurrence relation, transforming it into Python code is often straightforward.
It's important to understand that each subproblem is self-contained, and should not need external input. Your use of global variables is not only producing the wrong result, but it's against the spirit of dynamic programming.
Your use of trees for expressing Opt() is nice. What you forgot to do was to write the relationship between each node and its children. If you did, I'm almost sure that you would have found the correct solution yourself.
By the way, remember to handle the case where n is empty (i.e. len(n) == 0).
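As a follow-up sketch (not part of the original answers): the accepted Opt recomputes the same sublists many times, so real dynamic programming adds memoization. Tuples are used as cache keys here because lists aren't hashable:

```python
from functools import lru_cache

def Opt(nums):
    @lru_cache(maxsize=None)
    def solve(items):
        # Base case: a single element is never summed
        if len(items) == 1:
            return 0
        # Recurrence: pay sum(items), then take the cheapest cut
        return sum(items) + min(solve(items[:i]) + solve(items[i:])
                                for i in range(1, len(items)))
    return solve(tuple(nums))

print(Opt([1, 2, 3]))     # 9
print(Opt([1, 2, 3, 4]))  # 19
```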
Calculating IRR and NPV using Excel
1) What is the internal rate of return (IRR) for a project whose initial after tax cost is $5,000,000 and it is expected to provide after tax operating cash flows of ($1,800,000) in year 1, $2,900,000 in year 2, $2,700,000 in year 3, and $2,300,000 in year 4?
2) A firm is evaluating a proposal that has an initial investment of $50,000 and has cash flows of $15,000 per year for 5 years. If the firm's required return or cost of capital is 15%, should it accept the project using the internal rate of return (IRR) as a decision criterion?
3) What is the net present value (NPV) for a project whose cost of capital is 12% and its initial after tax cost is $5,000,000 and it is expected to provide after tax operating cash flows of $1,800,000 in year 1, $1,900,000 in year 2, $1,700,000 in year 3, and ($1,300,000) in year 4?
Solution Preview
1) The IRR is the rate at which the present value of cash inflows = present value of cash outflows and so the NPV is zero. The IRR denotes the return that the project gives on the initial investment. We calculate IRR manually using trial and error or using the IRR ...
Solution Summary
The solution explains the IRR and NPV calculations and shows how to calculate NPV and IRR using Excel.
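The cash-flow arithmetic behind these answers can be sketched in plain Python as a cross-check on the Excel formulas (this code is my illustration and is not part of the purchased solution):

```python
def npv(rate, cash_flows):
    # cash_flows[0] is the time-0 flow (the initial outlay, as a negative number)
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cash_flows))

def irr(cash_flows, lo=-0.99, hi=1.0, tol=1e-9):
    # Bisection: narrow in on the rate where NPV crosses zero
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if npv(lo, cash_flows) * npv(mid, cash_flows) <= 0:
            hi = mid
        else:
            lo = mid
    return (lo + hi) / 2

# Problem 2: -50,000 followed by five 15,000 inflows
rate = irr([-50000] + [15000] * 5)
print(round(rate, 4))  # ~0.1524 -> IRR > 15%, so accept

# Problem 3: NPV at a 12% cost of capital
print(round(npv(0.12, [-5000000, 1800000, 1900000, 1700000, -1300000])))
# ~ -1,494,336 -> negative NPV, so reject
```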
use Data::Printer;
use JSON::Any;
my $foo = JSON::Any->new->jsonToObj('{"a":true, "b":false, "c":true}');
print Dumper ($foo), "\n";
print np ($foo), "\n";
That prints this unreadable garbage:
$VAR1 = {
'b' => bless( do{\(my $o = 0)}, 'JSON::PP::Boolean' ),
'a' => bless( do{\(my $o = 1)}, 'JSON::PP::Boolean' ),
'c' => $VAR1->{'a'}
};
\ {
a JSON::PP::Boolean {
public methods (0)
private methods (1) : __ANON__
internals: 1
},
b JSON::PP::Boolean {
public methods (0)
private methods (1) : __ANON__
internals: 0
},
c var{a}
}
When I just want to see:
$VAR1 = {
'a' => 1,
'b' => 0,
'c' => 1
};
None of this works:
$JSON::PP::false = 0;
sub JSON::PP::true { return 1 };
sub JSON::PP::false { return 0 };
$JSON::Any::true = 1;
$JSON::Any::false = 0;
sub JSON::Any::true { return 1 };
sub JSON::Any::false { return 0 };
This almost works:
sub JSON::PP::Boolean::_data_printer {
return ($_[0] eq JSON::PP::true ? "1" : "0") }
But this is still stupid:
a 0,
b 1,
c var{a}
}
$Data::Dumper::Freezer will let you set a method for serializing it.
$Data::Dumper::Deepcopy will make it not bother with the $VAR1->{a} thing. Not sure what the equivalent of that is for Data::Printer.
Freezer seems to allow you to side-effect an object's internals before it is printed, but not change how it is printed.
Toaster seems to just append the string "->toaster_method_name()" to the printed output.
sub Freezer { $_[0] = 'fuck off' }
Data::Dumper is ancient perl, so I think you have to mangle yourself.
$Data::Dumper::Freezer = '_debool';
sub JSON::PP::Boolean::_debool { $_[0] = ($_[0] eq JSON::PP::true ? "1" : "0"); }
print Dumper (JSON::Any->new->jsonToObj('{"a":true, "b":false, "c":true}'));
==>
$VAR1 = {
'c' => \undef,
'b' => \undef,
'a' => \undef
};
And this makes Perl segfault, whee:
sub JSON::PP::Boolean::_debool { $_[0] = ($_[0] eq JSON::PP::true ? 1 : 0); }
It expects the thing it passed you as $_[0] to still be a reference afterwards. You can speak a prayer and do this:
print do {
local $Data::Dumper::Freezer = '_debool';
local *JSON::PP::Boolean::_debool
= sub { $_[0] = \\\\\\\\\\\(0+$_[0]) if JSON::PP::is_bool $_[0] };
Dumper($foo) =~ s!\\\\\\\\\\\\\\\\\\\\\\!!gr;
};
But that may not be an option because it will modify the data structure in order to print it.
I can’t be bothered to look into Data::Printer.
I know two less horrible approaches that will work, though I don’t know if they’re applicable for you.
If you don’t need to preserve the fact that these were JSON booleans and you have control of the place where the JSON gets decoded, just change the JSON::PP true/false values before decoding:
my $foo = do {
local $JSON::PP::true = !!1, local $JSON::PP::false = !!0;
JSON::PP->new->decode('{"a":true, "b":false, "c":true}');
};
Then the data structure won’t have those objects in the first place and it will print sanely regardless of dumper with no further faffing around.
(And yes, perl’s canonical false value prints as an empty string rather than a numeric 0. Use 1 and 0 instead of !!1 and !!0 if you hate that.)
If that doesn’t work for you, but you can switch to a different dumper, use Data::Dump, which has a very simple/obvious/nice interface for what you want:
use Data::Dump::Filtered 'dump_filtered';
dump_filtered( $foo, sub { { dump => "!!$_[1]" } if JSON::PP::is_bool $_[1] } );
If you can’t do that either, then I guess the question would be if you can at least afford to copy the data structure to dump it. If so, you could
use Cloneand do that backslash train hack. Or maybe walk the data structure and twiddle the values into real true/false (or 1/0) values. Maybe with
use Data::Visitor::Callbackor something.
If you can’t do even that… well then I’m out of ideas.
P.S.: Using JSON::Any but only handling JSON::PP booleans is fragile if this isn’t just for printf debugger sessions.
P.P.S.: Your comment system sucks at code formatting. :-)
print JSON::Any->new(pretty=>true)->objToJson($foo);
That changes nothing. What do you expect it to do?
Oh wait, nevermind, I see what you did.
Well, "convert Perl objects back to JSON and pretty-print those instead" is not quite what I was hoping for, because it's actually the Perl objects I'm interested in.
the place I work at has many many (many) old modperl webservices we still maintain to this day, and we convert the perl objects to json (we use JSON::XS because that's what we had back in the day) and then print them (without prettifying them). looks nice in the logs without all the extra junk.
If I explicitly use JSON::PP instead of JSON::Any or JSON, one of the tricks you posted seems to work (with perl-5.20.2):
#!/usr/bin/perl
use strict;
use warnings;
use diagnostics;
use feature 'say';
use Data::Dumper;
use JSON::PP;
$JSON::PP::true=1;
$JSON::PP::false=0;
my $json = JSON::PP->new->allow_nonref;
my $foo = $json->decode('{"a":true, "b":false, "c":true}');
print Dumper ($foo), "\n";
outputs:
$VAR1 = {
'b' => 0,
'c' => 1,
'a' => 1
};
Stop using this shit and migrate to Python.
Translation: trade this shit for the other shit.
The cycle of "now they have two problems" jokes has been fully completed.
The var{a} reference can be overcome by setting _reftype in your printer method:
use Data::Printer;
use JSON::Any;
sub JSON::PP::Boolean::_data_printer {
my ($item, $p) = @_;
$p->{_reftype} = 1;
return ($item eq JSON::PP::true) ? "1" : "0" }
print np(JSON::Any->new->jsonToObj('{"a":true, "b":false, "c":true}'));
That's really good to know, thanks!
I've run into similar problems to jwz's original question, and I never found a better way than using '$Data::Dumper::Deepcopy = 1' to improve Data::Dumper's output. I've gone as far as writing my own DataDumper equivalent to try and make the output better for many data structures, but didn't realise Data::Printer even existed. It looks like I can use Data::Printer and add my own methods for rendering specific classes that I want to improve, instead of re-inventing everything. Thanks!
This makes me really sad, but then again it's Perl. Untested code, but you get the idea.
sub true {
return 1 if $caller[0] =~ /^Data::/;
return JSON::PP::Boolean::true;
}
Never mind that is obviously stupid.
You could dump a copy of the structure that didn't have those booleans in it, if the dumping code you're using doesn't do exactly what you wanted? This worked on your toy data structure.
Hmmm, could have sworn that I posted this earlier. Maybe I was trying to be too clever in the markup.
Anyway, this works for your toy example: rather than fight the dumper, mung (a copy of) the data structure. It works with various flavours of JSON back-ends as well.
use JSON::Any;
Ack, duplicate - sorry! Was the comment held for moderation or something? Either way, please delete the dupe.
Getting a bit closer but still doesn't deal with re-used variables:
use Data::Printer filters=>{'JSON::PP::Boolean' => sub {int($_[0])}}
use JSON::Any
my $foo = JSON::Any->new->jsonToObj('{"a":true,"b":false,"c":true,"d":"hello"}')
p($foo)
====>
\ {
a 1,
b 0,
c var{a},
d "hello"
}
Late to the party, but didn't see any answers that actually claimed to work so figured I'd better come up with a terrible hack. I couldn't figure out any way to override what Data::Dumper::Dumpxs does, so instead you have to use Data::Dumper::Dumpperl which does all you to override the _dump method and therefore have access to the $val before Data::Dumper gets it.
The following does print:
$VAR1 = {
'a' => 1,
'b' => 0,
'c' => 1
};
In all it's terribleness:
#!/usr/bin/perl
use strict;
use warnings;
use JSON::PP;
print JSON::Dumper->Dumpperl(
[ JSON::PP->new->decode('{"a": true, "b": false, "c": true }') ] );
package JSON::Dumper;
use parent 'Data::Dumper';
sub _dump {
my ($s, $val, $name) = @_;
if ( ( ref $val || '' ) eq 'JSON::PP::Boolean' ) {
$val = $val ? 1 : 0;
}
$s->SUPER::_dump($val, $name);
} | https://www.jwz.org/blog/2016/10/death-to-jsonppboolean/ | CC-MAIN-2018-39 | refinedweb | 1,376 | 68.91 |
Hello,
I have not got any reply for my problem that I mentioned below. I am
not getting any logic or I am doing any mistake in configuration/build. I am
building the Apache 2.0 with the following options - to enable DSO Support
and MPM is worker and then I will install it.
CC="cc" \
./configure --enable-so --with-mpm=worker
These two are enough?? or anything else I have to give.
And also I build Apache with "cc" compiler and My application/module is
written in C++ code and I build with "KCC" Compiler. Because use of
different compilers for Apache and my application is creating this problem.
When I load my library then Apache hangs and fails at the following point. I
am anxiously waiting for help, please anybody can help me out in this
regard, Anybody faced this type of problem before. I am very grateful if I
get any reply.
Thanks,
GS
-----Original Message-----
From: Guntupalli, Santhi
Sent: Saturday, May 10, 2003 6:13 PM
To: 'dev@httpd.apache.org'
Subject: RE: help - Apche 2.0 - Unix
Hello,
Thanks for your response. I had debugged the code. This is the point
where it is failing. this backtrace is at procsup.c.
(gdb) bt
#0 apr_proc_detach (daemonize=1074511840) at procsup.c:57
#1 0x00000001200525dc in worker_pre_config (pconf=0x140068028,
plog=0x1400b0828, ptemp=0x1400b5828) at worker.c:1900
#2 0x0000000120068038 in ap_run_pre_config (pconf=0x140068028,
plog=0x1400b0828, ptemp=0x1400b5828) at config.c:123
#3 0x0000000120054f98 in main (argc=7, argv=0x11ffff208) at main.c:615
after this daemonize=1 and going inside the if(daemonize) this condition,
after this fork() is being called and fails inside this if condition, this
is the error I am getting this
program received signal SIGSEGV, Segmentation fault.
warning: Hit heuristic-fence-post without finding
warning: enclosing function for address 0x3ff80dd74b0
0x000003ff80dd74b0 in ?? ()
when I give backtrace here this is the trace.
(gdb) bt
#0 0x000003ff80dd74b0 in ?? ()
#1 0x000003ff80176d68 in __do_atfork () from /usr/shlib/libc.so
#2 0x000003ff80176d68 in __do_atfork () from /usr/shlib/libc.so
Error accessing memory address 0x10: Invalid argument.
I have built apache with MPM as worker.
I have written small test program which uses this Apache APIs. This is my
small test program: test.cxx
extern "C"{
#include "httpd.h"
#include "http_config.h"
#include "http_protocol.h"
#include "http_log.h"
#include "util_script.h"
#include "http_main.h"
#include "http_request.h"
}
static int test_handler(request_rec *r)
{
int result = 500;
return 1;
}
static void register_hooks(apr_pool_t *p) {
ap_hook_handler(test_handler, NULL, NULL, APR_HOOK_MIDDLE);
}
extern "C"{
module AP_MODULE_DECLARE_DATA server_module = {
STANDARD20_MODULE_STUFF,
NULL, /* per-directory config creator */
NULL, /* dir config merger */
NULL, /* server config creator */
NULL, /* server config merger */
NULL, /* command table */
register_hooks, /* set up other request processing hooks */
};
}
This small test program test.cxx creates test.so
While creating this shared library I link with "-lpthread -lm -lexc" ( all
these are OS libraries ) apart from Apache shared libraries. Apache was
started. When I opened from webbrowser "localhost:80" "I am getting internal
error" and does not open the home page of Apache. When again I relink with
"-lpthread -lm -lexc -lrt" this time Apache does not start ( because of
"lrt" library) and failing at the same point as mentioned in the above
trace. I did not understand how this linking is making this difference for
apache to start. I am working on Tru64 4.OG and 5.1A. I have tested with
both C and C++ code.
I have seen same behavior.
What could be the problem. Any solution to this. Any help is very much
appreciated/
Thanks,
GS
-----Original Message-----
From: amit athavale [mailto:amit_athavale@lycos.com]
Sent: Thursday, May 08, 2003 9:00 PM
To: dev@httpd.apache.org; dev@httpd.apache.org
Subject: Re: help - Apche 2.0 - Unix
If you send gdb back trace of hanged process it will be helpful.
If you dont know how to do it :
use "ps -aef | grep httpd" command to find pid then use "gdb httpd <pid>" to
open gdb.
After gdb prompt comes up, type "bt" to see backtrace.
--
On Thu, 8 May 2003 20:01:49
Guntupalli, Santhi wrote:
>Hello,
> We are using Apache 2.0 in our application. our application supports
>both windows and Unix (Tru64). I had built my module which uses Apache 2.0
>APIs and creates DLL (test.dll) . We are loading this DLL by LoadModule in
>"httpd.conf". I was able to start Apache and also my application works
>properly without any problems on windows.
> The same application we support it on Unix also without any code
>change. I had built the same module with Apche 2.0 and created shared
>library "test.so". I load this library by LoadModule from httpd.conf.
>LoadModule test_handle modules/test.so
> If I add this in httpd.conf, when I start Apache, simply it hangs,
>it does not come to the prompt and does not spawn any httpd processes and
>also no error.
> If I don't load this shared library, Apache is starting no problems.
>If I load my shared library then only apache does not start. How Load
module
>( my shared library) holds the apache from start. How LoadModule is related
>in spawning the httpd processes. I had tried many ways, nothing was
working.
>what could be the problem. Same application is running on Windows, no
>problems at all, this problem I am facing it on Unix only, it does not give
>any error/log to locate the problem.
>Anybody has any clues or suggestions. any help is much appreciated.
>
>Thanks,
>GS
>
>
The new Release 20.03 is out! You can download binaries for Windows and many major Linux distros here.
1. Do not use lambda captures if you don't understand them.
2. Why do you need a capture in this case? The frame is passed as a parameter.
3. What does Temp mean? To me it is the same as using My, Mine, Random, Some, etc.
transientFrame has different meaning in some OS/wms.
Hi, I can not build the latest C::B trunk on Linux, because of the error message

    function is not a member of std

To fix this I have to add

    #include <functor>

at the top of the edited files
@bluehazzard: There is no <functor> header it is <functional> ...
Sorry about that, because I don't use Linux most of the time. This does not happen on my Windows GCC 7.2. I think you can push your changes. Thanks.
Well here we are.... I have commited with the wrong message text... Is there a way to edit the commit message?
I think this is because windows uses precompiled headers? And linux not? | https://forums.codeblocks.org/index.php/topic,23110.msg157394.html?PHPSESSID=878206de4525caac9863c7e4f79333c8 | CC-MAIN-2021-43 | refinedweb | 195 | 86.4 |
Scott Swigart
Microsoft Corporation
December 2005
Applies to:
Microsoft Visual Basic 2005
Microsoft Visual Basic 6.0
Development
Summary: Review the exciting new changes in version 2.0 of the Microsoft Visual Basic .NET framework that make the switch for Visual Basic 6.0 developers an easy and functional move. (18 printed pages)
I Haven't Needed .NET for Anything so Far... I'll Take Some of the Blame Giving Credit Where Credit Is Due Level 2 Gone with the Grid I Want That Toolbar You Know What I Mean Power Steering Try This, It's Free My, My... What Part of "I Don't Want to Install Visual Studio 2005" Don't You Understand? Conclusion Additional Resources
The Microsoft Hype Machine has already started turning up the volume on Visual Studio 2005. The question is, if you work primarily in Visual Basic 6, you're happy with Visual Basic 6, and you've looked at .NET and not found anything you really need, should you even care about this new release of Visual Studio 2005? Considering that MSDN has been nice enough to host this article, you can probably guess the conclusion that I've reached. But bear with me, there are some great things coming for the Visual Basic 6 developer.
First of all, let's just admit that for many of us, Visual Basic .NET left something to be desired. If you think back to when Microsoft unleashed .NET on the world, the megaphone was blaring only two things: Web Services and C#. Well, many of us Visual Basic 6 developers didn't need (or even want) Web Services. And, we weren't really interested in learning this new C# language when we already had a good language. After Web Services and C#, the other thing that Microsoft was blaring was ASP.NET. While many of us do some Web development, lots of Visual Basic 6 developers are primarily interested in desktop applications, so if .NET was all about Web Services, C#, and ASP.NET, then Microsoft should understand if many of us did not find anything in .NET that we couldn't live without.
Finally, let's just say that Visual Basic .NET wasn't 100 percent compatible with Visual Basic 6. This made it non-trivial to move a Visual Basic 6 application to .NET, assuming that I wanted to.
Like most Visual Basic developers, there were some things that I wanted for my language. I wanted the glass ceiling to be gone. I was tired of "you can't do X in Visual Basic 6." I wanted my language to be able to do anything. I also wanted the language to support at least some simple object oriented concepts like inheritance. Be careful what you ask for...
Visual Basic .NET provides 100 percent access to the full .NET framework. There is no glass ceiling. You can do anything in .NET from Visual Basic .NET. It's also fully object oriented, to a fault. Instead of something simple like "Form1.Show" working, you have to treat everything like the class that it really is. Meaning that to show a form, you have to do the following:
Listing 1. Showing a form with Visual Basic .NET
Dim f as New Form1()
f.Show()
In addition, some of my other favorite features were apparently just too hard to do in the architecture of .NET v1.0. Edit and Continue, for example, didn't make the cut.
I have to give Microsoft credit for one thing; they do listen. Since the release of Visual Basic .NET, Microsoft has been working very hard to bring back a lot of the favorite features of Visual Basic 6, without dumbing down the language. As a result, in Visual Studio 2005, "Form1.Show" works again.
Figure 1. Displaying forms, the Visual Basic 6 way
Edit and Continue is back as well. In fact, it's better than before. With Visual Basic 6, Edit and Continue worked almost too well. You could F5 for a long time, working on your application, just to find out that when you try to make the executable, it doesn't compile because there's an error someplace. With Visual Basic .NET, every time you hit F5, it makes sure that all the code will compile, but still gives you the full ability to change your code and continue execution.
Figure 2. Edit and Continue
Visual Studio 2005 doesn't just reintroduce lost Visual Basic 6 features. In many cases, it takes these features to a new level. First, think about designing a form in Visual Basic 6. When you design a form, there are a number of things that have always bothered me. First, if you just double-click a control in the toolbox, it places it on the form using some default size, but it's not the size you would ever want. Second, when you line up controls, you're doing it based on the grid dots. This makes it somewhat simple to align the top or left side of controls, but harder to align the bottom or right edges. Also, if you actually want the text inside of the controls to be on the same line, good luck.
Figure 3. Designing a form in Visual Basic 6
When I first looked at the Visual Studio 2005 form designer, my first thought was "Where's the grid? How can you possibly design a form without a grid?" You'll find the form designer in Visual Studio 2005 to be nothing short of awesome. What I realized is that you really want to align controls based on other controls, or the edges of the forms, and you want to be able to easily lay out controls with some standard spacing between them. Visual Studio 2005 does this with "snap lines." Visual Studio 2005 also defaults controls to correct sizes, and does lots of other little things to make it very productive to build forms.
Figure 4. Snaplines to align controls
Another common complaint about Visual Basic 6 was some of the controls it included. For example, the menus and toolbars were really limited compared to the menus and toolbars that Microsoft used in their own applications.
Figure 5. The office toolbar
Take the office toolbar, for example. This toolbar not only has buttons, but it can host text boxes, drop-downs, and separator controls. In addition, you can drag the toolbars around and stack them however you want. The menu bar can also contain text boxes and other controls. The chorus from the Visual Basic 6 camp has long been "Why can't you give us the same controls that you use?"
The answer from Visual Studio 2005: "Done."
Figure 6. Office look and feel from Visual Studio 2005
Simple mistakes are easy to make when coding. Visual Studio 2005 is able to figure out these common errors, give you a meaningful error message, and say "Would you like me to fix this for you?"
Figure 7. Autocorrect in Visual Studio 2005
If it can't be autocorrected as you type it, or caught at compile time, the next best thing is a good help when an error happens at runtime. With Visual Basic 6, if you forget to "New" an object you get the error "Object variable or with block not set." With Visual Studio 2005, you get an error that actually lets you know, "Hey, you forgot to use New."
Figure 8. The Exception Assistant
What I want from a development environment isn't a bunch of wizards. To use the analogy of a car, I don't want something that will drive the car for me, because (1) it will probably get me in a wreck, and (2) I wouldn't trust it to take the right route. However, I do love power steering, power brakes, cruise control, and so on. These just help take the grunt work out of driving. I'm still fully in control of every move the car makes, but I don't have to expend as much effort to wrestle the car down the road.
I like things that give me the same experience for the development environment. I want to stay fully in control of what code gets written, but I want the IDE to take a lot of the grunt work out of writing that code.
Visual Basic 6 made a simple start down that path with the Add Procedure tool. I use this tool quite a bit for properties, because it writes a lot of the boilerplate for me.
Figure 9. Add procedure tool
With Visual Studio 2005, you get this on steroids. First, if you just type in "Public Property UserName as String," the IDE will splat in all the boilerplate, giving you:
Listing 2. Visual Studio auto-generating a property procedure
Public Property UserName as String
Get
End Get
Set
End Set
End Property
And with the addition of the free Refactor! add-in, the IDE seems like it can read your mind.
Figure 10. Using Refactor! to create a property procedure
The Refactor! add-in can automate a number of common tasks. For example, if you want to turn a section of code into its own separate function, Refactor! will let you extract it, and will even figure out what arguments need to be passed in. You can easily convert hard-coded numbers or strings into constants. You can also highlight some expression, and convert it to a local variable (I find myself doing this all the time).
Microsoft's making another smart move with Visual Studio 2005. They're releasing free versions for Visual Basic .NET and Web development (as well as C# and C++). They're also releasing a replacement for MSDE that is actually easy to deploy with your applications. The light-weight, free, Visual Basic development environment is called Visual Basic Express. This gives you a good opportunity to experiment with .NET without making any expensive investments.
Figure 11. Visual Basic Express
Visual Basic Express contains the common functionality that you need to build stand-alone applications, or class libraries. You get the same toolbox, designer, base classes, code editor, and so on, that you get with the full Visual Studio product.
I mentioned earlier that things like "Form1.Show" work again. In Visual Basic parlance, this is referred to as a "default instance," meaning that by default, there's a Form1 created that you can start working with, and you don't have to New one up from scratch. There are, however, a lot of other default instances now at your disposal as well, and they're available off of the "My" keyword.
Table 1 My namespace commands
Figure 12. Using the My Namespace
Even if you never plan on installing Visual Studio 2005, there's a ton of stuff in version 2.0 of the .NET framework that you can use from your Visual Basic 6 applications. The whole point of Visual Basic Fusion is that you can use everything in .NET from Visual Basic 6, without any rewriting of your Visual Basic 6 code. This is just as true with version 2.0 of the framework as it is with version 1.1. You can just create a simple wrapper class around the .NET functionality that exposes it as a COM object. From there, you can use this functionality from Visual Basic 6, VBA, ASP, or VBScript.
Listing 3. Using Visual Basic .NET to expose .NET functionality as a COM object
<ComClass(MyWrapper.ClassId, MyWrapper.InterfaceId, MyWrapper.EventsId)> _
Public Class MyWrapper
#Region "COM GUIDs"
' These GUIDs provide the COM identity for this class
' and its COM interfaces. If you change them, existing
' clients will no longer be able to access the class.
Public Const ClassId As String = "36d8911d-0395-4961-9893-4325fcb5e522"
Public Const InterfaceId As String = "ae2c5486-fbc4-4043-83f2-1687161858c9"
Public Const EventsId As String = "0e5e9245-e331-41e8-b84c-6ad3dc69d9bd"
#End Region
' A creatable COM class must have a Public Sub New()
' with no parameters, otherwise, the class will not be
' registered in the COM registry and cannot be created
' via CreateObject.
Public Sub New()
MyBase.New()
End Sub
Public Function NetworkIsAvailable() As Boolean
Return My.Computer.Network.IsAvailable()
End Function
End Class
This creates a COM object that you can easily use from Visual Basic 6 or other COM based environments.
Listing 4. Determining network availability from Visual Basic 6
Dim c as NetFrameworkWrapper.MyWrapper
Set c = new NetFrameworkWrapper.MyWrapper
If c.NetworkIsAvailable Then
...
End If
I think version 2.0 of the .NET framework is a great gift to the Visual Basic 6 developer. You can redistribute the framework completely free of charge, and with Visual Basic Express, you have a free development environment that you can use to build libraries or whole applications. This means that you can extend your Visual Basic 6 applications with the .NET Framework without spending a dime. And, version 2.0 of the .NET framework provides some really great functionality that just wasn't available in Visual Basic 6. In addition, Visual Basic .NET has been fixed to be a lot more like Visual Basic 6, but without dumbing down the language in any way. The development environment lets you layout forms very quickly, and the IDE saves you umpteen keystrokes as it injects boilerplate code for you, and allows you to quickly rework existing code. There's also great help for coding and runtime errors with autocorrect and the Exception Assistant. All-in-all, I think that this is a strong addition for the Visual Basic 6 developer. If you want to continue to use Visual Basic 6, you have now been given thousands of new classes in version 2.0 of the framework that you can use from your existing Visual Basic 6 applications. If you like what you see in the development environment, it's free to use with Visual Basic Express.
Enjoy.
A Sneak Preview of Visual Studio 2005
Visual Basic Express
Refactor!
What's New in Windows Forms and Controls for Visual Studio 2005
Visual Basic Fusion – Using everything in .NET from Visual Basic 6
Scott Swigart () spends his time consulting with companies on how to best use today's technology and prepare for tomorrows. Along this theme, Scott is a proud contributor to the VB Fusion site, as it offers information and tactics of real use for Visual Basic developers who want to build the most functionality with the least effort. Scott is also a Microsoft MVP, and co-author of numerous books and articles. Scott can be reached at scott@swigartconsulting.com. | http://msdn.microsoft.com/en-us/library/ms364065(VS.80).aspx | crawl-002 | refinedweb | 2,445 | 65.32 |
Functional languages treat functions as first-class values.
This means that, like any other value, a function can be passed as a parameter and returned as a result.
This provides a flexible way to compose programs.
Functions that take other functions as parameters or that return functions as results are called higher order functions.
Consider the following programs.
Take the sum of the integers between
a and
b:
def sumInts(a: Int, b: Int): Int = if (a > b) 0 else a + sumInts(a + 1, b)
Take the sum of the cubes of all the integers between
a
and
b:
def cube(x: Int): Int = x * x * x def sumCubes(a: Int, b: Int): Int = if (a > b) 0 else cube(a) + sumCubes(a + 1, b)
Take the sum of the factorials of all the integers between
a
and
b:
def sumFactorials(a: Int, b: Int): Int = if (a > b) 0 else factorial(a) + sumFactorials(a + 1, b)
Note how similar these methods are. Can we factor out the common pattern?
Let's define:
def sum(f: Int => Int, a: Int, b: Int): Int = if (a > b) 0 else f(a) + sum(f, a + 1, b)
We can then write:
def id(x: Int): Int = x def sumInts(a: Int, b: Int) = sum(id, a, b) def sumCubes(a: Int, b: Int) = sum(cube, a, b) def sumFactorials(a: Int, b: Int) = sum(factorial, a, b)
The type
A => B is the type of a function that
takes an argument of type
A and returns a result of
type
B.
So,
Int => Int is the type of functions that map integers to integers.
Passing functions as parameters leads to the creation of many small functions.
Sometimes it is tedious to have to define (and name) these functions using
def.
Compare to strings: We do not need to define a string using
val. Instead of:
val str = "abc"; println(str)
We can directly write:
println("abc")
because strings exist as literals. Analogously we would like function literals, which let us write a function without giving it a name.
These are called anonymous functions.
Example of a function that raises its argument to a cube:
(x: Int) => x * x * x
Here,
(x: Int) is the parameter of the function, and
x * x * x is it's body.
The type of the parameter can be omitted if it can be inferred by the compiler from the context.
If there are several parameters, they are separated by commas:
(x: Int, y: Int) => x + y
An anonymous function
(x1: T1, …, xn: Tn) => e
can always be expressed using
def as follows:
{ def f(x1: T1, …, xn: Tn) = e ; f }
where
f is an arbitrary, fresh name (that's not yet used in the program).
One can therefore say that anonymous functions are syntactic sugar.
Using anonymous functions, we can write sums in a shorter way:
def sumInts(a: Int, b: Int) = sum(x => x, a, b) def sumCubes(a: Int, b: Int) = sum(x => x * x * x, a, b)
The
sum function uses linear recursion. Complete the following tail-recursive
version:
def sum(f: Int => Int, a: Int, b: Int): Int = { def loop(x: Int, acc: Int): Int = { if (x > b) acc else loop(x + res0, acc + f(x)) } loop(a, res1) } sum(x => x, 1, 10) shouldBe 55 | https://www.scala-exercises.org/scala_tutorial/higher_order_functions | CC-MAIN-2017-39 | refinedweb | 553 | 63.43 |
Background
- package handles
- Packages are referenced by a handle of the form
OWNER/NAME
- Teams packages include a prefix,
TEAM:OWNER/NAME
- READMEs
- A
README.mdis recommended at the root of your package. README files support full markdown syntax via remarkable. READMEs are rendered to HTML on the package landing page.
- Short hashes
- Commands that take hashes support "short hashes", up to uniqueness. In practice, 6-8 characters is sufficient to achieve uniqueness.
quilt install akarve/examples -x 4594b5 # matches hash 4594b58d64dd9c98b79b628370618031c66e80cbbd1db48662be0b7cac36a74e
- Requirements file (quilt.yml)
$ quilt install [@FILENAME] # quilt.yml is the default if @filename is absent
- Installs a list of packages specified by a YAML file. The YAML file must contain a
packagesnode with a list of packages:
packages: - USER/PACKAGE[/SUBPACKAGE][:h[ash]|:t[ag]|:v[ersion]][:HASH|TAG|VERSION]
- Example
packages: - vgauthier/DynamicPopEstimate # get latest - danWebster/sgRNAs:a972d92 # get a specific version via hash - akarve/sales:tag:latest # get a specific version via tag - asah/snli:v:1.0 # get a specific version via version
API
Team users
See teams docs for additional commands and syntax.
Core: build, push, and install packages
Versioning
Instances, hashes, tags, and versions
- A package instance is a package handle plus a hash.
akarve/sales:fc7f0bis an instance. Instances are immutable.
- Hashes are automatically generated by Quilt for each package build.
- Tags are human-readable strings associated with a package instance. Tags can be altered to point to different instances of the same package. The most recent build is automatically tagged
"latest".
- Versions are human-readable strings associated with a package instance. Unlike tags, versions can only ever point to a single package instance.
Access
Local storage
Registry search
Export a package or subpackage
Exporting to Symlinks
If a node references raw (file) data, symlinks may be used instead of copying data when exporting. But be cautious when using symlinks for export:
- When using any OS
- If a file is edited, it may corrupt the local quilt repository
- Preventing this is up to you
- When using Windows
- Symlinks may not be supported
- Symlinks may require special permissions
- Symlinks may require administrative access (even if an administrator has the appropriate permissions)
Import and use data
For a package in the public cloud:
from quilt.data.USER import PACKAGE
For a package in a team registry:
from quilt.team.TEAM_NAME.USER import PACKAGE
Using packages
Packages contain three types of nodes:
PackageNode- the root of the package tree
GroupNode- like a folder; may contain one or more
GroupNodeor
DataNodeobjects
DataNode- a leaf node in the package; contains actual data
Working with package contents
- List node contents with dot notation:
PACKAGE.NODE.ANOTHER_NODE
- Retrieve the contents of a
DataNodewith
_data(), or simply
():
PACKAGE.NODE.ANOTHER_NODE()
- Columnar data (
XLS,
CSV,
TSV, etc.) returns as a
pandas.DataFrame
- All other data types return a string to the path of the object in the package store
Enumerating package contents
quilt.inspect("USER/PACKAGE")shows package columns, types, and shape
NODE._keys()returns a list of all children
NODE._data_keys()returns a list of all data children (leaf nodes containing actual data)
NODE._group_keys()returns a list of all group children (groups are like folders)
NODE._items()returns a generator of the node's children as (name, node) pairs.
Example
from quilt.data.uciml import wine In [7]: wine._keys() Out[7]: ['README', 'raw', 'tables'] In [8]: wine._data_keys() Out[8]: ['README'] In [9]: wine._group_keys() Out[9]: ['raw', 'tables']
Editing Package Contents
PACKAGENODE._set(PATH, VALUE)sets a child node.
PATHis an array of strings, one for each level of the tree.
VALUEis the new value. If it's a Pandas dataframe, it will be serialized. A string will be interpreted as a path to a file that contains the data to be packaged. Common columnar formats will be serialized into data frames. All other file formats, e.g. images, will be copied as-is.
GROUPNODE._add_group(NAME)adds an empty
GroupNodewith the given name to the children of
GROUPNODE.
Example
import pandas as pd import quilt quilt.build('USER/PKG') # create new, empty packckage from quilt.data.USER import PKG as pkg pkg._set(['data'], pd.DataFrame(data=[1, 2, 3])) pkg._set(['foo'], "example.txt") quilt.build('USER/PKG', pkg)
This adds a child node named
data to the new empty package, with the new DataFrame as its value. Then it adds the contents of
example.txt to a node called
foo. Finally, it commits this change to disk by building the package with the modified object.
See the examples repo for additional usage examples. | https://docs.quiltdata.com/api.html | CC-MAIN-2018-22 | refinedweb | 760 | 59.3 |
In this tutorial, I will teach you how to load data in the ListView using C# and SQL Server 2005. Displaying data in the ListView is a little bit the same with the DataGridView. The only difference is, you need to add items in displyaing data in the ListView, while in the DataGridView, you have to set the data source of it.
So, lets get started:
Create a database and name it “testdb”.
After creating database, do the following query for creating a table in the database that you have created.
Open Microsoft Visual Studio 2008 and create new Windows Form Application for C#. Then do the following design of a Form as shown below.
Go to the Solution Explorer, double click the “View Code” to display the code editor.
In the code editor, declare all the classes that are needed.
Note: Put using System.Data.SqlClient; above the namespace to access sql server library.
After declaring the classes, go back to the design view double click the form and establish a connection between SQL server and C#.net.
After establishing the connection, go back to the design view double click the button and do the following codes for retrieving data in the database that will display in the ListView.
Output:
For all students who need programmer for your thesis system or anyone who needs a sourcecode in any programming languages. You can contact me @ :
Mobile No. – 09305235027 – tnt | https://itsourcecode.com/2016/06/load-data-listview-using-c-sql-server/ | CC-MAIN-2018-05 | refinedweb | 239 | 64 |
I Have a 2 year old son, Henry. Like most two year olds, he loves everythig to do with Christmas.
So, to make the most of the festive season, we got a real Christmas tree at the weekend. And he *loves* it :
Here is it in all it's glory. the problem is that it is still over two weeks from Christmas, and we'd like there to be some meat left on it.
The best way to preserve them is keep them cool, and give them *lots* of water. I mean lots... they drink water at a heckuva rate.
The holder thingy that it is mounted in hold about 2 litres of water, but that empties at an alarming rate...
Note the bits of wood jammed in there to keep it sturdy, upright and safe. The circular casting it is all sitting is about 8" across and about 4" deep.
To ensure it always has water we have a system: I top it up in the morning, and my wife tops it up in the evening. However, I dont have such a good memory. Here is what happened last time I forgot to water the tree* :
*I have been asked to point out that this has never actually happened.
In the name of marital harmony, I broke out my mbed, an original BoB, a clothes peg and a glue gun.
Casting my mind back to D&T at school, I seem to remember the project of choice was an alarm that hooks over the bath, so that when the bath water level reached the predetermined height, a current passed between two probes, triggering a circuit that switched on a lame buzzer to alert the happless incompetant that the bath is full. In this case we want to detect when the water is *below* a certain level, and the happless incompetant is someone who cant even remember to water the 7' tree that has just appeared in his lounge, even though it is covered in flashing lights.
So my plan is to get some 0.1" header strip and glue it to clothes peg with some flying wires. I can then clip the clothes peg over the side of the casting my tree is mounted in so that the 0.1" header will stop touching the water when it drops below a predetermined level. If I can sense the current, or lack of then the water stops shorting out the contacts, I can raise some kind of alarm.
So far so good. The 0.1" strip is just poiing out the front of the peg, the flying leads are securely attached by a long bead of hot glue.
The first thing to do is see if I can measure a current through water with *just* the mbed. In the days of D&T there was always a transistor involved.
#include "mbed.h" DigitalIn sensor(p20); DigitalOut led (LED1); int main() { sensor.mode(PullDown); while(1) { led = sensor; } }
I connect the flying leads between Vout (3.3v) and p20. Nothing. I lick the header, there is a slight tingle, but clearly not enough to detect.
I move the live flying wire to the Vu (5v), give it a lick... yup.. that has more bite...(LED still doesnt light though). Still this is a water level detector, not a tongue detector, so lets try properly.
Bingo! It works a treat. [video to follow]
So I can now go about working out the alert.
To detect that the tree needs more water, we are looking for a falling edge on my DigitalIn. Since the tree drinking process is slow, and the refills are not often i probably dont have to worry to much about debouncing and stuff. However, I do want to send multiple reminders, so I might just activate a ticker when I detect the water is low, and deactivate it when is filled.
#include "mbed.h" DigitalIn sensor(p20); DigitalOut led (LED1); Ticker activated; void alert (void) { led = !led; } int main() { sensor.mode(PullDown); while (1) { // while the water is deep enough, wait while(sensor) { } // we are here because the water is low activated.attach(&alert,0.1); // crude debounce wait (30); // while water remains low, wait while (!sensor) { } // We hare here as the water level is back up activated.detach(); // crude debounce wait (30); } }
Wow, that worked like a charm first time! Here is the test rig :
Okay, I say "some".. I have a good idea :-)
At the moment I am highly inspired by Rolfs work on the HTTP Client. I Have a Netgear HomePlug adaptor (XE103) and there is one hooked up to my broadband router, so I *could* get onto the internet pretty easily.. there is a socket right behind the Christmas tree, and I am using a BoB with an RJ45 connector. Coincidence? I think not :-)
The first port of call is Rolfs page:
It all looks too simple to be true.
I import the prebuilt library from :
I fuse Rolfs example code with my own. I decide that whatever I'll do I will do it in the ticker, and that I should make the ticker run on a 300 second interval. I can chande this easily enough... it will depend on the final alert mechanism.
I register a hosting account at for this experiment.
Here is the code :
#include "mbed.h" #include "HTTPClient.h" DigitalIn sensor(p20); DigitalOut led (LED1); HTTPClient http; Ticker activated; // new alert function calls a PHP script // I registered an account on void alert (void) { char result[64]; led = !led; http.get("", result); } int main() { sensor.mode(PullDown); while (1) { // while the water is deep enough, wait while(sensor) { } // we are here because the water is low activated.attach(&alert,10); // crude debounce wait (30); // while water remains low, wait while (!sensor) { } // We hare here as the water level is back up activated.detach(); // crude debounce wait (30); } }
The ticker is just going to call a PHP script at the address in the listing. So I need to write that script.
I'm no PHP wizzard, but I know PHP has a nice function called mail()
<?php
$to = 'chris_styles@yahoo.com';
$subject = 'The tree needs water';
$message = 'Quick, before it dries out';
$headers = 'From: "The Christmas Tree" <chris_styles@yahoo.com>' . "\r\n" .
'Reply-To: chris_styles@yahoo.com' . "\r\n" .
'X-Mailer: PHP/' . phpversion();
mail($to, $subject, $message, $headers);
?>
The first test is to call it by typing in the the URL of the script directly into IE, and see what happens...
IE seems to load the page and complete silently (I should have put some debug in there). Behold, 60 seconds later :
Right, that seems to work a treat. Can I trigger one from my mbed now. Moment of truth, I have to compile and download the code. Oh, and run a teraterm so I can see the mbed fetch it's IP address.
Phut, nothing, not even an IP address. I download Rolfs example program, run it, and teraterm happily reports 192.168.0.36.
So I try a program that is simpler :
#include "mbed.h" #include "HTTPClient.h" DigitalOut led(LED1); HTTPClient http; int main(void) { char result[64]; http.get("", result); printf("Done!\n"); while(1) { led = !led; wait(0.5); } }
This works like a charm :
After 30 seconds I get an email. Yay!
So why didn't it work with the ticker i wonder?
I'll remove the ticker and strip my code back a bit. Start simple.
#include "mbed.h" #include "HTTPClient.h" DigitalIn sensor(p20); DigitalOut led (LED1); HTTPClient http; int main() { sensor.mode(PullDown); while (1) { // while the water is deep enough, wait while(sensor) { } led = !led; char result[64]; http.get("", result); // crude debounce wait (30); // while water remains low, wait while (!sensor) { } // crude debounce wait (30); } }
That works great!
But what if i'm not picking up email? Surely there is some other method :-)
For this experiment I paid a visit to and registered and account. They have very nice APIs which allows you to send and receive SMS messages in all sorts of ways, the one that is of interest is via PHP script.
For more details have a look here:
For now, I intend to tweek thier script slightly, upload it to my 000webhost account, so that calls to will cause an SMS to be sent to me.
Using the txtlocal code as and example, I add my authentication details, customise my message and add my cell number :
<?php
// Authorisation details
$uname = "";
$pword = "";
// Configuration variables
$info = "1";
$test = "0";
// Data for text message
$from = "Xmas Tree";
$selectednums = "447957363xxx"; // my cell number
$message = "I'm your tree - I need water please!";
$message = urlencode($message);
// Prepare data for POST request
$data = "uname=".$uname."&pword=".$pword."&message=".$message."&from=". $from."&selectednums=".$selectednums."&info=".$info."&test=".$test; // Send the POST request with cURL
$ch = curl_init('');
curl_setopt($ch, CURLOPT_POST, true);
curl_setopt($ch, CURLOPT_POSTFIELDS, $data);
curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
$result = curl_exec($ch); //This is the result from Txtlocal
curl_close($ch);
?>
As always, the first and best test is to call the script directly from IE. I do this and get :
Cant argue with that!
Last job before it all gets deployed under the tree is to fuse email and SMS code so I get both. I'll test it from IE first - and yes, I'll rename stuff so no one can send me loads of emails and SMS :-)
Done.
It is all happily installed under the tree, powered, with network connection thanks to the Netgear XE103 homeplug.
All that remains to be seen now is when the first trigger happens. It is 00:55 GMT, so i'll see what my inbox and cellphone has for me tomorrow morning ;-)
'Night.
Please log in to post a comment.
Nice one! My upgrade would be to add a windscreen washer pump and jerrycan. I guess that would need a second waterlevel sensor to alert you to fill up the reserve tank... | http://mbed.org/users/chris/notebook/christmas-tree-watering/?page=1#comment-95 | crawl-003 | refinedweb | 1,664 | 82.14 |
Creating a simple but useful plugin
NOTE: This is a work in progress. Also, if you have access to a non-Windows platform, please test to ensure interoperability.
Contents
Prerequisites
You may want to read the tutorial Creating a simple "Hello_World" plugin before starting this tutorial.
This tutorial assumes you have read through at least some of the wxSmith tutorials, have a working version of Code::Blocks installed and some knowledge of how to deal with projects, in particular how to compile them. To use the Code::Blocks SDK you must also have a working version of wxWidgets installed.
Overview
This tutorial will guide you through the creation of a simple plugin, which acquires settings from the user to run upx on the project output.
Setting up the environment
Select the Plugin type: of Tool from the Code::Blocks Plugin Wizard. Select wxSmith->Add wxScrollingDialog. Select use Xrc File:. Add the xrc resource to the current project through Project->Add Files.... (This is not necessary, however, it is preferred as it ensures all files stay together.) Add the xrc resource to the plugin's resource zip through selecting Project->Build options..., navigating to default->Pre/post build steps and appending the xrc resource (PopUp.xrc) to the end of the first command 'zip -j9 Upx.zip manifest.xml'. A proper setup is key, as, for example, failing to add the xrc resource to the plugin's resources will mysteriously crash Code::Blocks whenever the plugin is run.
Drafting the UI
Assuming you have read through some of the wxSmith tutorials, constructing a dialog (off the wxScrollingDialog base added in the previous section) with these features should not be too difficult. The compression slider is set for 1-10, wxButtons were used for the OK and Cancel (not a wxStdDialogButtonSizer), and the Overlay drop-down includes copy and skip.
You can, of course, choose to add or remove as many features as you would like.
Adding functionality
The first point is to call the dialog when the plugin is executed. This can be achieved by changing the content (in the main source file, Upx.cpp) of the function
Upx.cpp
int Upx::Execute()
{
// do your magic ;)
NotImplemented(_T("Upx::Execute()"));
return -1;
}
to
Upx.cpp
int UPX::Execute()
{
PopUp dlg( Manager::Get()->GetAppWindow() );
dlg.ShowModal();
return 0;
}
There's one more thing we need to add before our application will compile. Jump to the beginning of the file and add the following code after the first group of other includes:
Upx.cpp
#include <sdk.h> // Code::Blocks SDK
#include <configurationpanel.h>
#include "upx.h"
#include "popup.h" // <- here
See the sdk documentation for full information on managers (and everything else).
Next, add an event handler to the OK button through wxSmith. This function will contain most of the action as very little needs to happen before or after it. There are two main jobs this fuction must complete: forming a command, and executing it. | http://wiki.codeblocks.org/index.php?title=Creating_a_simple_but_useful_plugin&oldid=7544 | CC-MAIN-2020-05 | refinedweb | 493 | 57.37 |
Hello all this is my first post so go easy on me :P ive been browsing these forums for a while and have finally decided to register and hoepfully contribute. Im having trouble getting my programme to reconise and sort invalid data. Any help would be great
My spec is to:
Is there a line in the file, if yes add 1 to count.
Read the line in, split it into column s (7)
validate
If valid then output the result and increment a valid count.
If invalid then break and display invalid count.
//imports the scanner
import java.io.File;
import java.io.FileNotFoundException;
import java.util.Scanner;
public class splitter
{
int count = 0;
int valid = 0;
public static void main(String[] args) throws FileNotFoundException {
Scanner s = new Scanner(new File("results.txt")); // create a scanner which scans from a file
String line = ""; // stores the each line of text read from the file
while ( s.hasNext() == true ) {
line = s.nextLine(); // read the next line of text from the file
String [] splitupText = line.split(","); // split the text into multiple elements
if (splitupText.length ==7)
for ( int i = 0; i < splitupText.length; i=i+1 ) { // loop over each element
splitupText [i] = splitupText [i].trim ();
if (splitupText [i].compareTo ("")==0);
String nextBit = splitupText[i]; // get the next element (indexed by i)
nextBit = nextBit.trim(); // use the trim method to remove leading and trailing spaces
}
//displaying data
System.out.println (splitupText [0] + " : " + "Gold " + "(" + splitupText [4] + ")" + ", " + "Silver "+ "(" + splitupText [5] + ")" + ", " + "Bronze " + "(" + splitupText[6] + ")") ;
}
}
}
Ive got to adding the invalid data into the text file and have some a bit unstuck, the programme reads and outputs valid data correctly but so far not luck with the invalid, if anyone could help that would be greatly appriciated, Thanks.
Sorry it's taken a while to answer your question - I appear to have overlooked it.
Please use code tags when posting code.
the programme reads and outputs valid data correctly but so far not luck with the invalid,
You haven't said what constitutes valid data so I can only give a generalised answer.
Anything that isn't valid (ie isn't output) must by definition be invalid so where ever you have a test for validity add an else clause that will be executed if the data is invalid.
If you test the data in more than one place then you should to put the invalid data code in a separate method so you can call the same code multiple times rather than writing it out in several places.
Posting code? Use code tags like this: [code]...Your code here...[/code]
Forum Rules | http://forums.codeguru.com/showthread.php?518058-Arrays-and-Methods-Help-in-Java&goto=nextoldest | CC-MAIN-2017-34 | refinedweb | 436 | 73.17 |
NAME
pmcd - performance metrics collector daemon
SYNOPSIS
pmcd [-f] [-i ipaddress] [-l logfile] [-L bytes] [-n pmnsfile] [-p port[,port ...]] [-q timeout] [-T traceflag] [-t timeout] [-x file]
DESCRIPTION
pmcd is the collector used by the Performance Co-Pilot (see PCPIntro(1)) to gather performance metrics on a system. As a rule, there must be an instance of pmcd running on a system for any performance metrics to be available to the PCP. pmcd accepts connections from client applications running either on the same machine or remotely, and provides them with metrics and other related information from the machine on which pmcd is executing. pmcd delegates most of this request servicing to a collection of Performance Metrics Domain Agents (or just agents), where each agent is responsible for a particular group of metrics, known as the domain of the agent.

The command line options are:

-i ipaddress
Accept connections only at the nominated IP address, which should be specified in the standard dotted form (e.g. 100.23.45.6). The -i option may be used multiple times to define a list of IP addresses. Connections made to any other IP addresses the host has will be refused. This can be used to limit connections to one network interface if the host is a network gateway. It is also useful if the host takes over the IP address of another host that has failed.

-n pmnsfile
An alternative Performance Metrics Name Space is loaded from the file pmnsfile, rather than the standard namespace.

-t timeout
To prevent misbehaving agents from hanging the entire Performance Metrics Collection System (PMCS), pmcd uses timeouts on PDU exchanges with its agents. timeout specifies the interval to wait before an exchange with an agent is deemed to have failed.

-T traceflag
Enable event tracing in pmcd. By default, event tracing is buffered using a circular buffer that is over-written as new events are recorded. The default buffer size holds the last 20 events, although this number may be over-ridden by using pmstore(1) to modify the metric pmcd.control.tracebufs. Similarly, once pmcd is running, the event tracing control may be dynamically modified by storing 1 (enable) or 0 (disable) into the metrics pmcd.control.traceconn, pmcd.control.tracepdu and pmcd.control.tracenobuf. These metrics map to the bit fields associated with the traceflag argument of the -T option.
When operating in buffered mode, the event trace buffer will be dumped whenever an agent connection is terminated by pmcd, or when any value is stored into the metric pmcd.control.dumptrace via pmstore(1). In unbuffered mode, every event will be reported when it occurs.

-x file
Before the pmcd logfile can be opened, pmcd may encounter a fatal error which prevents it from starting. By default, the output describing this error is sent to /dev/tty, but it may be redirected to file.

If a PDU exchange with an agent times out, the agent has violated the requirement that it delivers metrics with little or no delay. This is deemed a protocol failure and the agent is disconnected from pmcd. Any subsequent requests for information from the agent will fail with a status indicating that there is no agent to provide it.

It is possible to specify host-level access control to pmcd. This allows one to prevent users from certain hosts from accessing the metrics provided by pmcd, and is described in more detail in the access control part of the CONFIGURATION section below.
CONFIGURATION: pmcd normally runs as root. The configuration file may contain shell commands to create agents, which will be executed by root. To prevent security breaches the configuration file should be writable only by root. The use of absolute path names is also recommended. The case of the reserved words in the configuration file is unimportant, but elsewhere, case is preserved. Blank lines and comments are permitted (even encouraged) in the configuration file. A comment begins with a ``#'' character and finishes at the end of the line. A line may be continued by ensuring that the last character on the line is a ``\'' (backslash).
Fields within an agent specification are separated by whitespace characters; however, a single agent specification may not be broken across lines unless a \ (backslash) is used to continue the line. Each agent specification must start with a textual label (string) followed by an integer in the range 1 to 510. The label is a tag used to refer to the agent and the integer specifies the domain for which the agent supplies data. This domain identifier corresponds explicitly
The access control section of the configuration file is optional, but if present it must follow the agent configuration data. The case of reserved words is ignored, but elsewhere case is preserved. Lexical elements in the access control section are separated by whitespace or the special delimiter characters: square brackets (``['' and ``]''), braces (``{'' and ``}''), colon (``:''), semicolon (``;'') and comma (``,''). The special characters are not treated as special in the agent configuration section. The access control section of the file must start with a line of the form: [access] Leading and trailing whitespace may appear around and within the brackets and the case of the access keyword is ignored. No other text may appear on the line except a trailing comment. Following this line, the remainder of the configuration file should contain lines that allow or disallow operations from particular hosts or groups of hosts. There are two kinds of operations that occur via pmcd: fetch allows retrieval of information from pmcd. This may be information about a metric (e.g. its description, instance domain 129.127.112.2 : all except fetch; because they both refer to the same host, but disagree as to whether the fetch operation is permitted from that host. Statements containing more specific host specifications override less specific ones according to the level of wildcarding. For example a rule of the form allow * : all except store, maximum 2 connections; This says that only 2 client connections at a time are permitted for all hosts other than "clank", which is permitted 5. If a client from host "boing" is the first to connect to pmcd, its connection is checked against the second statement (that is the most specific match with a connection limit). As there are no other clients, the connection is accepted and contributes towards the limit for only the second statement above. 
If the next client connects from "clank", its connection is checked against the limit for the first statement. There are no other connections from "clank", so the connection is accepted. Once this connection is accepted, it counts towards both statements' limits because "clank" matches the host identifier in both statements. Remember that the decision to accept a new connection is made using only the most specific matching access control statement with a connection limit. Now, the connection limit for the second statement has been reached. Any connections from hosts other than "clank" will be refused. If instead, pmcd with no clients saw three successive connections arrived from "boing", the first two would be accepted and the third refused. After that, if a connection was requested from "clank" it would be accepted. It matches the first statement, which is more specific than the second, so the connection limit in the first is used to determine that the client has the right to connect. Now there are 3 connections contributing to the second statement's connection limit. Even though the connection limit for the second statement has been exceeded, the earlier connections from "boing" are maintained. The connection limit is only checked at the time a client attempts a connection rather than being re-evaluated every time a new client connects to pmcd. This gentle scheme is designed to allow reasonable limits to be imposed on a first come first served basis, with specific exceptions. As illustrated by the example above, a client's connection is honored once it has been accepted. However, pmcd reconfiguration (see the next section) re-evaluates all the connection counts and will cause client connections to be dropped where connection limits have been exceeded.
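For illustration only (the exact statement layout is my sketch of the syntax described above, reusing the host names from the text), an access control section of this kind might look like:

```
[access]
# "clank" may do everything, up to 5 clients at a time
allow clank : all, maximum 5 connections;

# everyone else may fetch but not store, 2 clients at a time
allow * : all except store, maximum 2 connections;

# refuse one specific address outright
disallow 129.127.112.2 : all;
```

Following the rules above, the more specific `clank` statement overrides the wildcard one when a client from that host connects, and the connection limits are checked only at connection time.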
RECONFIGURING PMCD
If the configuration file has been changed, or if an agent is not responding because it has terminated, or the PMNS has been changed, pmcd may be reconfigured by sending it a SIGHUP, as in # pmsignal -a -s HUP pmcd When pmcd receives a SIGHUP, it checks the configuration file for changes. If the file has been modified, it is reparsed and the contents become the new configuration. If there are errors in the configuration file, the existing configuration is retained and the contents of the file are ignored. Errors are reported in the pmcd log file. It also checks the PMNS file for changes. If the PMNS file has been modified, then it is reloaded. Use of tail(1) on the log file is recommended while reconfiguring pmcd. If the configuration for an agent has changed (any parameter except the agent's label is different), the agent is restarted. Agents whose configurations do not change are not restarted. Any existing agents not present in the new configuration are terminated. Any deceased agents that are still listed are restarted. Sometimes it is necessary to restart an agent that is still running, but malfunctioning. Simply stop the agent (e.g. using SIGTERM from pmsignal(1)), then send pmcd a SIGHUP, which will cause the agent to be restarted.
STARTING AND STOPPING PMCD
Normally, pmcd is started automatically when the system boots, and the system shutdown procedure is used to stop pmcd. Starting pmcd when it is already running is the same as stopping it and then starting it again. Sometimes it may be necessary to restart pmcd during another phase of the boot process. Time-consuming parts of the boot process are often put into the background to allow the system to become available sooner (e.g. mounting huge databases). If an agent run by pmcd requires such a task to complete before it can run properly, it is necessary to restart or reconfigure pmcd after the task completes. Consider, for example, the case of mounting a database in the background while booting. If the PMDA which provides the metrics about the database cannot function until the database is mounted and available, but pmcd is started before the database is ready, the PMDA will fail (however pmcd will still service requests for metrics from other domains). If the database is initialized by running a shell script, adding a line to the end of the script to reconfigure pmcd (by sending it a SIGHUP) will restart the PMDA (if it exited because it couldn't connect to the database). If the PMDA didn't exit in such a situation it would be necessary to restart pmcd, because if the PMDA was still running pmcd would not restart it. Normally pmcd listens for client connections on its default TCP port. The PMCD_PORT environment variable (or the -p command line option) is a comma-separated list of one or more numerical port numbers. Should both methods be used, or multiple -p options appear on the command line, pmcd will listen on the union of the set of ports specified via all -p options and the PMCD_PORT environment variable. If non-default ports are used with pmcd, care should be taken to ensure that PMCD_PORT is also set in the environment of any client application that will connect to pmcd.
In addition to the PCP environment variables described in the PCP ENVIRONMENT section below, the PMCD_PORT variable is also recognised as the TCP/IP port for incoming connections (default 44321). SEE ALSO: PCPIntro(1), pmdbg(1), pmerr(1), pmgenmap(1), pminfo(1), pmstat(1), pmstore(1), pmval(1), pcp.conf(4), and pcp.env(4).
DIAGNOSTICS
If possible to run pmcd..
CAVEATS
pmcd does not explicitly terminate its children (agents), it only closes their pipes. If an agent never checks for a closed pipe it may not terminate. The configuration file parser will only read lines of less than 1200 characters. This is intended to prevent accidents with binary files. The timeouts controlled by the -t option apply to IPC between pmcd and the PMDAs it spawns. This is independent of settings of the environment variables PMCD_CONNECT_TIMEOUT and PMCD_REQUEST_TIMEOUT (see PCPIntro(1)) which may be used respectively to control timeouts for client applications trying to connect to pmcd and trying to receive information from pmcd. | http://manpages.ubuntu.com/manpages/oneiric/man1/pmcd.1.html | CC-MAIN-2014-15 | refinedweb | 1,982 | 52.49 |
Jeff King <p...@peff.net> writes: > On Wed, Jan 16, 2013 at 03:53:23PM +0100, Max Horn wrote: > >> -#ifdef __GNUC__ >> +#if defined(__GNUC__) && ! defined(__clang__) >> #define config_error_nonbool(s) (config_error_nonbool(s), -1) >> #endif > > You don't say what the warning is, but I'm guessing it's complaining > about throwing away the return value from config_error_nonbool?
Yeah, I was wondering about the same thing. The other one looks similar, ignoring the return value of error(). Also, is this "some versions of clang do not like this"? Or are all versions of clang affected?
#include <iostream> #include <string> #include <ctype.h> using namespace std; void isphone(int number) { bool isphone = false; int size2= number.size(); for (int i=0; i<size2; i++) { if(isdigit(number[i])&& size2>=4) { isphone=true; } else isphone = false; if(isphone=true) { cout<<"yes this is a phone number"<<endl; } else cout<<"Invalid number!"<<endl; } } //for testing purpose int main(int argc, char* argv[]) { if (argc==2) isphone(string(argv[1])); }
I was thinking of using this function to read the user input to see if they input the right number. But it just won't compile! It gives me something like
testing.cpp: In function 'void isphone(int)':
testing.cpp:11: error: request for member 'size' in 'number', which is of non-class type 'int'
testing.cpp:14: error: invalid types 'int[int]' for array subscript
testing.cpp:22: warning: suggest parentheses around assignment used as truth value
testing.cpp: In function 'int main(int, char**)':
testing.cpp:38: error: 'isname' was not declared in this scope
Compilation failed.
O.O can someone plz help me figuring it out? I see no problem....or any suggestion to modify it? | https://www.daniweb.com/programming/software-development/threads/305788/how-come-my-function-can-t-read-number-plz-help | CC-MAIN-2017-34 | refinedweb | 191 | 61.02 |
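For reference, here is one way the function could be rewritten so it compiles. This is my sketch of a fix, not the original poster's code: the parameter must be a std::string instead of an int (an int has no size() or operator[]), the comparison needs == rather than =, and the verdict should be decided once after all characters have been checked, not printed inside the loop.

```cpp
#include <cctype>
#include <string>

// A phone number here means: at least 4 characters, all of them digits.
bool isphone(const std::string& number)
{
    if (number.size() < 4)
        return false;
    for (char c : number)
        if (!std::isdigit(static_cast<unsigned char>(c)))
            return false; // one non-digit is enough to reject
    return true;
}
```

In main you would then print one message based on the single boolean result, e.g. `cout << (isphone(argv[1]) ? "yes this is a phone number" : "Invalid number!") << endl;` (and note the original main calls `isphone`, while the compiler error mentions `isname`, so the posted code and the compiled code differ).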
Sourceware Bugzilla – Bug 10600
stdio/strfmon.c multiple vulnerabilities
Last modified: 2009-10-30 04:36:32 UTC
Affected Software (tested 27.08.2009):
- Fedora 11
- Slackware 12.2
- Ubuntu 9.04
- other Linux distributions
Previous URL:
--- 0.Description ---
strfmon -- convert monetary value to string
The strfmon() function places characters into the array pointed to by s as
controlled by the string pointed to by format. No more than maxsize bytes are
placed into the array.
The format string is composed of zero or more directives: ordinary characters
(not %), which are copied unchanged to the output stream; and
conversion specifications, each of which results in fetching zero or more
subsequent arguments. Each conversion specification is introduced by the %
character.
SYNOPSIS:
#include <monetary.h>
ssize_t
strfmon(char * restrict s, size_t maxsize, const char * restrict format, ...);
--- 1. glibc 2.10.1 stdio/strfmon.c Multiple vulnerabilities ---
In March 2008, our team published a security note (SREASONRES:20080325) about vulnerabilities in the strfmon(3) function. The issue was officially diagnosed in NetBSD, FreeBSD and MacOSX. However, from the source code it is clear that glibc is also vulnerable. We informed the glibc team, but the description of the issue and the fix was not enough for the GNU team. They changed the status to BOGUS and the response was:
---
And what exactly does an BSD implementation has to do with glibc?
---
Today we know that only NetBSD is secure against this, and all systems using glibc are affected. Despite the differences between the NetBSD libc code and the glibc code, the issue is the same, but the exploit differs from the one presented in (SREASONRES:20080325).
Description of the vulnerability: (SREASONRES:20080325)
Description of the fix:
To present this issue in Fedora 11, we will use the PHP client. money_format() uses the strfmon(3) function, so this program is perfect for the job.
[cx@localhost ~]$ php -r 'money_format("%.1073741821i",1);'
Segmentation fault
for 'money_format("%.1073741821i",1);' we will get
Program received signal SIGSEGV, Segmentation fault.
0x0019331a in __printf_fp () from /lib/libc.so.6
(gdb) bt
#0 0x0019331a in __printf_fp () from /lib/libc.so.6
#1 0x0018832b in __vstrfmon_l () from /lib/libc.so.6
#2 0x00187a36 in strfmon () from /lib/libc.so.6
strfmon() will call to __printf_fp() with overflowed arg. In result
(gdb) x/20s ($esi)-10
0x8448ff6: ""
0x8448ff7: ""
0x8448ff8: "0"
0x8448ffa: ""
0x8448ffb: ""
0x8448ffc: "0"
0x8448ffe: ""
0x8448fff: ""
0x8449000: <Address 0x8449000 out of bounds>
0x8449000: <Address 0x8449000 out of bounds>
0x8449000: <Address 0x8449000 out of bounds>
...
(gdb) i r
eax 0x30 48
ecx 0x0 0
edx 0x0 0
ebx 0x2bdff4 2875380
esp 0xbfffec14 0xbfffec14
ebp 0xbfffed78 0xbfffed78
esi 0x8449000 138711040
edi 0x810c 33036
eip 0x19331a 0x19331a <__printf_fp+3274>
Now let's see what will happen for 'money_format("%.1073741822i",1);'
Program received signal SIGSEGV, Segmentation fault.
0x0034b27b in hack_digit.12295 () from /lib/libc.so.6
php will crash in hack_digit().
(gdb) i r
eax 0x3ffffffe 1073741822
ecx 0x32 50
edx 0x2 2
ebx 0x476ff4 4681716
esp 0xbfffebc4 0xbfffebc4
ebp 0xbfffebf4 0xbfffebf4
esi 0x32 50
edi 0x3e 62
we can try change edi register.
For 'money_format("%.1073741824i",1);'
(gdb) i r
eax 0x40000000 1073741824
ecx 0x32 50
edx 0x2 2
ebx 0x35bff4 3522548
esp 0xbfffebbc 0xbfffebbc
ebp 0xbfffebec 0xbfffebec
esi 0x32 50
edi 0x42 66
But let's see what will happen for 'money_format("%.77715949976712904702i", 1.1);'
crash in
Program received signal SIGSEGV, Segmentation fault.
0x00e4327b in hack_digit.12295 () from /lib/libc.so.6
(gdb) i r
eax 0x3ffffffe 1073741822
ecx 0x34 52
edx 0x2 2
ebx 0xf6eff4 16183284
esp 0xbfffebb4 0xbfffebb4
ebp 0xbfffebe4 0xbfffebe4
esi 0x34 52
edi 0x3e 62
esi 52.
Interestingly, the PHP memory_limit has no control over what happens at the libc level. The strfmon(3) function can allocate a lot of memory without any control by the PHP memory_limit, and will crash.
For example:
php -r 'money_format("%.1343741821i",1);'
will allocate ~1049MB real memory.
memory_limit can be less than 1049M
It is strange that nobody checked the glibc code. The algorithm used in the BSD libc and in glibc is very similar. Funny.
(In reply to comment #0)
> Affected Software (tested 27.08.2009):
> - Fedora 11
> - Slackware 12.2
> - Ubuntu 9.04
> - others linux distributions
Looks like you should be listing architectures here too, as they do seem to matter here.
> ---
> And what exactly does an BSD implementation has to do with glibc?
> ---
That sounds like a reference to:
Further on, I'll be quoting this advisory:
> Let's see libc/stdlib/strfmon_l.c (glibc rev-1.5.2.4)
...
> if (width > LONG_MAX / 10
> || (width == LONG_MAX && val > LONG_MAX % 10))
> {
> __set_errno (E2BIG);
> return -1;
> }
...
> if (width >= maxsize - (dest - s))
> {
> __set_errno (E2BIG);
> return -1;
> }
..
> Perfect. The above code protects us.
For posterity and completeness of references, the integer overflow check was added via the following commit:;a=commitdiff;h=153aa31b93be22e01b236375fb02a9f9b9a0195f
This sounds like a reason why your original vector %99999999999999999999n does
not work any more.
> But what is below, is a mistake already
This seems to refer to missing integer overflow checks in the code converting left_prec / right_prec from string to number, as a similar approach is used there as for converting width:;a=blob;f=stdlib/strfmon_l.c#l242;a=blob;f=stdlib/strfmon_l.c#l259
But wait, how does that explain a crash on "%.1073741821i"? 1073741821 is less
than 2^31, so it won't overflow (signed) integer on either 32 bit or 64 bit
architectures, right?
> info.width = left_prec + (right_prec ? (right_prec + 1) : 0);
This should not overflow either, as left_prec is 0 here. So the problem seems
to be elsewhere...
So let's ignore strfmon for a while and try something simpler:
printf("%.1073741821f\n", 0.0);
Testing this on F11 glibc-2.10.1, this crashes when compiled with -m32, but does not with -m64. A little more looking leads to:;a=blob;f=stdio-common/printf_fp.c#l890
This is where the integer overflow occurs (when computing wbuffer_to_alloc). It should also explain where the ~1 GiB memory usage comes from with your "%.1343741821i" test.
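The overflow is of the classic unchecked size-computation kind: a product like (chars_needed + 2) * sizeof(wchar_t) wraps around when the requested precision approaches 2^30 on a 32-bit target. A hedged sketch of the kind of guard that prevents it follows; the function and variable names are mine, not glibc's.

```c
#include <stdint.h>
#include <stddef.h>
#include <wchar.h>

/* Return the byte count for a buffer of (chars_needed + 2) wchar_t
 * elements, or 0 if the multiplication would wrap around size_t.
 * With a 32-bit size_t and sizeof(wchar_t) == 4, the unchecked
 * product wraps once chars_needed nears 2^30, which is exactly the
 * range of the "%.1073741821f" precision used above. */
size_t checked_wbuffer_size(size_t chars_needed)
{
    if (chars_needed > SIZE_MAX / sizeof(wchar_t) - 2)
        return 0; /* would overflow: the caller must reject the request */
    return (chars_needed + 2) * sizeof(wchar_t);
}
```

The guard rearranges `(chars_needed + 2) * sizeof(wchar_t) <= SIZE_MAX` so that no intermediate expression can itself overflow.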
Ulrich, I bet your knowledge of this code is a lot better than the reporter's and mine combined, so you can come up with a proper fix. I just hope this additional info helps. Is it enough for a NEW -> ASSIGNED state change?;a=commitdiff;h=199eb0de8d
Only 32-bit had a problem and it's fixed. | http://sourceware.org/bugzilla/show_bug.cgi?id=10600 | CC-MAIN-2013-48 | refinedweb | 1,039 | 67.45 |
Basic Pong
I hope this is an understandable basic introduction to a game with motion. This is a simplified classic: Pong. Our version will have three sides, one ball and one paddle. Once you get the idea, you can make a two player version, or perhaps make a "brick out" version.
We'll need:
- a field with edges
- a ball that has a speed and direction
- a paddle that moves up and down
- an applet that handles user input and the passage of time
BasicPong.java
package pong;

import java.applet.Applet;
import java.awt.Color;
import java.awt.Font;
import java.awt.Graphics;
import java.awt.Image;
import java.awt.event.ActionEvent;
import java.awt.event.ActionListener;
import java.awt.event.MouseEvent;
import java.awt.event.MouseListener;
import java.awt.event.MouseMotionListener;
import javax.swing.Timer;

public class BasicPong extends Applet implements ActionListener, MouseListener, MouseMotionListener {
    private Timer timer;

    /**
     * These instance fields are for getting rid of flicker
     * on the Windows platform -- the Applet will draw the
     * picture in memory before putting it on the screen
     */
    private Image virtualMem;
    private Graphics g0;
    private Font font;
    private String message;

    public void init() {
        timer = new Timer(10, this);
        font = new Font("Helvetica", Font.BOLD, 18);
        message = "Pong: click to begin";
        timer.stop();
        addMouseListener(this);
        addMouseMotionListener(this);
    }

    public void paint(Graphics g) {
        // make a new buffer in case the applet size changed
        virtualMem = createImage(getWidth(), getHeight());
        g0 = virtualMem.getGraphics();
        g0.setColor(Color.BLACK);
        g0.fillRect(0, 0, this.getWidth(), this.getHeight());
        g0.setColor(Color.WHITE);
        g0.setFont(font);
        g0.drawString(message, 20, 20);
        g.drawImage(virtualMem, 0, 0, this); // set new display to screen
    }

    public void update(Graphics g) {
        paint(g); // get rid of flicker with this method
    }

    @Override
    public void actionPerformed(ActionEvent e) {
        repaint();
    }

    @Override
    public void mouseDragged(MouseEvent e) {}
    @Override
    public void mouseMoved(MouseEvent e) {}
    @Override
    public void mouseClicked(MouseEvent e) {}
    @Override
    public void mousePressed(MouseEvent e) {}
    @Override
    public void mouseReleased(MouseEvent e) {}
    @Override
    public void mouseEntered(MouseEvent e) {}
    @Override
    public void mouseExited(MouseEvent e) {}
}
Field Class
the field class should have the following methods:
- A constant for the thickness of the outside wall
- A constructor that makes two Rectangles, one for the outside and one for the area that the ball could contain
- A draw method
- A contains method that returns true if a rectangle is contained in the legal area for the ball
- getLeft, getRight, getTop and getBottom methods that return the int corresponding to the legal ball area
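To make the spec concrete, here is one possible sketch of the Field class. The field dimensions passed to the constructor, and treating all four sides as walls rather than three, are my simplifications and are not part of the assignment above:

```java
import java.awt.Graphics;
import java.awt.Rectangle;

class Field {
    public static final int BORDER = 10; // thickness of the outside wall

    private Rectangle outside; // the whole field, walls included
    private Rectangle inside;  // the legal area for the ball

    public Field(int width, int height) {
        outside = new Rectangle(0, 0, width, height);
        inside = new Rectangle(BORDER, BORDER,
                width - 2 * BORDER, height - 2 * BORDER);
    }

    // paint the walls, then clear the legal ball area
    public void draw(Graphics g) {
        g.fillRect(outside.x, outside.y, outside.width, outside.height);
        g.clearRect(inside.x, inside.y, inside.width, inside.height);
    }

    // true when the given rectangle lies entirely inside the legal area
    public boolean contains(Rectangle r) {
        return inside.contains(r);
    }

    public int getLeft()   { return inside.x; }
    public int getRight()  { return inside.x + inside.width; }
    public int getTop()    { return inside.y; }
    public int getBottom() { return inside.y + inside.height; }
}
```

The applet can then bounce the ball by comparing its bounding box against getLeft(), getRight(), getTop() and getBottom().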
Ball Class
- A constant for the ball size (try 15)
- A constructor that makes a bounding box of type Rectangle and initialises a deltaX and a deltaY
- A move method that translates the bounding box Rectangle
- A getLocation method that returns the bounding box
- changeX and changeY methods that change deltaX and deltaY to the negative of their current values
- A draw method
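A matching sketch of the Ball class; the starting position and speeds taken by the constructor are my choice, since the assignment leaves them open:

```java
import java.awt.Graphics;
import java.awt.Rectangle;

class Ball {
    public static final int SIZE = 15; // ball diameter in pixels

    private Rectangle box;      // bounding box of the ball
    private int deltaX, deltaY; // speed in pixels per timer tick

    public Ball(int x, int y, int deltaX, int deltaY) {
        box = new Rectangle(x, y, SIZE, SIZE);
        this.deltaX = deltaX;
        this.deltaY = deltaY;
    }

    // advance the ball one tick
    public void move() { box.translate(deltaX, deltaY); }

    public Rectangle getLocation() { return box; }

    public void changeX() { deltaX = -deltaX; } // bounce off a vertical wall
    public void changeY() { deltaY = -deltaY; } // bounce off a horizontal wall

    public void draw(Graphics g) {
        g.fillOval(box.x, box.y, SIZE, SIZE);
    }
}
```

Each timer tick the applet calls move(), checks getLocation() against the field edges and the paddle, and calls changeX() or changeY() on a hit.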
The Paddle class
- Needs two constants for the width and height of the paddle
- A constructor that initialises the bounding box
- A setHeight method that sets the y location of the paddle's bounding box
- A touches method that compares a given Rectangle with the paddle's bounding box (use the Rectangle class's intersects method)
Why QDoc don't work, it's really confusing.
- jsulm Qt Champions 2018
@Stephen-INF said in Why QDoc don't work, it's really confusing.:
warning: clang found diagnostics parsing \fn MainWindow::MainWindow(QWidget *parent)
error: use of undeclared identifier 'MainWindow'
error: unknown type name 'QWidget'
Can you show how you use \fn? It should be part of the doc comment.
@jsulm
Thank you for your help, here is my test code
/*! \fn MainWindow::MainWindow(QWidget *parent) Constructor Mainwindow */ MainWindow::MainWindow(QWidget *parent) : QMainWindow(parent), ui(new Ui::MainWindow) { ui->setupUi(this); }
But I think the issue might be related to "error: use of undeclared identifier 'MainWindow' ".
- jsulm Qt Champions 2018
@Stephen-INF This error comes from CLang code model. I guess it does not understand this QDoc syntax.
@jsulm
But isn't QDoc a usable tool? Comparing the comments in the Qt source code with the official documentation, it seems QDoc is used.
@raven-worx
Hi, I really want to try QDoc, and I have spent many hours but still don't know what I am doing wrong. If QDoc is usable for you, can you give me an example? Thanks a lot.
- raven-worx Moderators
@Stephen-INF
in the end we are talking about a CLANG warning. This shouldn't influence QDoc though.
QDoc should output something meaningful in the meantime or?
Hi
Maybe try a simple sample and see ?
@raven-worx
But all output is above.
@mrjj
I tried; QDoc works well with QML, but C++ classes still do not work.
I found that generating documentation for QML only needs one "source" file, but a C++ class needs both a "source" and a "header" file.
I guess there might be something wrong with the header file, so QDoc can't analyse the source file as normal?
A new C++ class "WidgetTest" was added to the project provided in the link.
header file:
#include <QWidget> class WidgetTest : public QWidget { Q_OBJECT public: explicit WidgetTest(QWidget *parent = nullptr); };
source file without "\fn":
/*! \class WidgetTest \brief WidgetTest for ui interface. \inmodule module0 */ /*! WidgetTest::WidgetTest(QWidget *parent) constructor WidgetTest */ WidgetTest::WidgetTest(QWidget *parent) : QWidget(parent) { }
output:
warning: Cannot tie this documentation to anything [qdoc found a /*! ... */ comment, but there was no topic command (e.g., '\fn', '\page') in the comment and no function definition following the comment.]
source file with "\fn":
/*! \class WidgetTest \brief WidgetTest for ui interface. \inmodule module0 */ /*! \fn WidgetTest::WidgetTest(QWidget *parent) constructor WidgetTest */ WidgetTest::WidgetTest(QWidget *parent) : QWidget(parent) { }
output:
warning: clang found diagnostics parsing \fn WidgetTest::WidgetTest(QWidget *parent) error: use of undeclared identifier 'WidgetTest' error: unknown type name 'QWidget'
widgettest.html is same:
- mrjj Lifetime Qt Champion
Hi
Did you change config file to match ?
sample uses
headers.fileextensions = "*.hpp" but often it's actually just "*.h"
@Stephen-INF
hmm odd. then
it seems clang has an issue parsing the header
( QDoc uses clang from Qt 5.11)
You did follow the steps to install it?
@mrjj
Yes,I did it
installed LLVM 6.0.1 and specified the Clang location with "set LLVM_INSTALL_DIR=C:\Program Files\LLVM"
@Stephen-INF
That seems pretty ok.
However, since the other part of generation seems to work, then
it must be something with clang and /fn
But I can't guess what's not right.
@Stephen-INF
I might :)
@FrancoF
Hi
I did try QDoc but could not get any \fn to work either.
What version of Qt are you using ?
- DevMachines
Re: Why QDoc don't work, it's really confusing.
I have the same issue with Qt5.12.0 on Windows. Does anyone have the solution for this?
Some notes - the enumerator was processed without problems:
class MyClass
{
public:
enum Type
{
}
void foo();
}
/*!
\enum MyClass::Type - parsed without errors
\value …
*/
/*!
\fn void MyClass::foo() - error: use of undeclared identifier 'MyClass' why???
*/
- DevMachines
To fix the error, you need to switch to VS2015 Build Tool. For VS2017 I could not get the compiler to work. But for 2015 everything works as expected. | https://forum.qt.io/topic/102055/why-qdoc-don-t-work-it-s-really-confusing/23 | CC-MAIN-2019-30 | refinedweb | 647 | 60.01 |
Hey guys I have a problem with one of my methods (overloading an operator, ~)
When I try to print my object, something unexpected happens... need some help
this is my whole code
#include"stdafx.h"
#include<iostream>
using namespace std;
class complex
{
private:
double Re, Im;
public:
complex(double _Re = 0, double _Im = 0) : Re(_Re), Im(_Im){} //class constructor
void print() const
{
cout << Re << " + " << "i(" << Im << ")" << endl;
}
complex operator~() const
{
return (Re, -Im);
}
};
void main()
{
complex x(2, 3);
x.print();
(~x).print();
}
If I compile it, I'll get the correct complex number on the screen, but when I try to execute the overloaded ~ operator it displays to me -
-3 + 0 i....
Really need some help.
Thanks
Sorry for posting such brain-dead questions, but I can't figure it out by myself....been looking at the damn code for more than 30 minutes and I can't see where I am wrong.
You are missing a
complex(Re, -Im);
in:
complex operator~() const { return (Re, -Im); }
Hence you return an implicitly converted
complex(-Im) (comma operator).
You might use explicit constructors to avoid a pitfall like this.
The comma in
return (Re, -Im);
does NOT do what you think it does.
Surrounding something in parentheses does NOT call a constructor. The parentheses are discarded and the expression is evaluated as
return Re, -Im;
The comma operator evaluates each term, and the result is the rightmost term.
Because the expression
Re doesn't do anything the expression is evaluated as
return -Im;
But this calls the constructor which has a default of 0 for the imaginary term. And so you get -3 for the real part and 0 for the imaginary part.
Instead that line should read
return complex(Re, -Im);
Which constructs what you thought it should.
return (Re, -Im);
This is using the comma operator, where it evaluates
Re (numbers do nothing), and then evaluates and "returns"
-Im (-3). Then, the return type is expected to be a
complex, so it tries to convert this -3 to a
complex. It finds this constructor:
complex(double _Re = 0, double _Im = 0), and uses it:
complex(-3, 0);
The immediate solution is to add the word
complex to the return
return complex(Re, -Im);
Or, in C++11, use
{} to tell the compiler that you meant to call the constructor:
return {Re, -Im};
Since you accept default values for your constructor (0), the compiler takes the expression return (Re, -Im);, evaluates Re (it is 2), throws it away, and creates a new complex from -Im by calling the constructor as complex(-3, 0). That is how you get the funny value.
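Pulling the answers together, here is a minimal sketch of the corrected class. The explicit keyword follows the suggestion above (with it, the original return (Re, -Im); would no longer compile silently), and the accessors re() and im() are my additions for demonstration, not part of the original class:

```cpp
// Minimal corrected version of the class from the question.
class complex {
private:
    double Re, Im;
public:
    // explicit: forbids accidental implicit conversion from a single double
    explicit complex(double _Re = 0, double _Im = 0) : Re(_Re), Im(_Im) {}

    complex operator~() const {
        return complex(Re, -Im); // name the constructor explicitly
    }

    double re() const { return Re; }
    double im() const { return Im; }
};
```

With this version, ~complex(2, 3) yields the conjugate 2 - 3i instead of -3 + 0i.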
version 2.0.
As there are no BPEL 2.0 upgrade tools, we had basically two options. The first was to completely rebuild all our processes, which was not a very good option since it means redoing all the work. The second option was to manually upgrade the BPEL 1.1 definitions to 2.0. Expecting this to be less work, we chose the second option.
The first step was to fool JDeveloper into thinking a BPEL 1.1 process is actually BPEL 2.0.
1.      In the BPEL source file I changed two namespaces:
xmlns="" into xmlns=""
xmlns:xxx="" into xmlns:bpel=""
2. In the composite.xml I added the version="2.0" attribute to the bpel component: <component name="name bpel process" version="2.0">
3.      Finally I closed JDeveloper, deleted the SCA-INF directory and restarted JDeveloper. The SCA-INF was created again. When I opened the bpel process in the editor it was recognized as a BPEL 2.0 process:
But it contained quite some errors. This was to be expected as there are differences between BPEL 1.1 and 2.0. I fixed the errors. The fixes I will discuss do not cover all possible errors that will arise when you upgrade to 2.0. I only described the ones I encountered.
- I clicked on all my partner links to check if all roles are still correct. Of the six processes I migrated only one partner link was corrupted and needed to be reassigned.
- Then I clicked on all invokes, replies and receives just to make sure the correct operation was still selected and the right variables still used. No problems here.
- The assignments (assign activity) needed some major fixing. In BPEL 1.1, if you look at the assign activity under the hood you will see this:
<assign name="Assign1">
  <copy>
    <from variable="inputVariable" part="payload" query="/ns2:StartRequest/ns2:procesId"/>
    <to variable="start_InputVariable" part="payload" query="/ns6:Parameters/ns6:procesId"/>
  </copy>
  <copy>
    <from expression="concat(bpws:getVariableData('inputVariable','in','/ns2:StartRequest/ns2:procesId'),' _tmp')"/>
    <to variable="tempvalue"/>
  </copy>
</assign>
In BPEL 2.0 an xsl variable notation is used. The above assignments now look like:
<assign>
  <copy>
    <from>$inputVariable.payload/ns2:procesId</from>
    <to>$start_InputVariable.payload/ns6:procesId</to>
  </copy>
  <copy>
    <from>concat($inputVariable.in/ns2:procesId,' _tmp')</from>
    <to>$tempvalue</to>
  </copy>
</assign>
First I opened the assign editor for all assignments and closed it again (press ok). If you look at the source before and after, you will notice the contents of the variable and part attribute have been moved to the correct location.
The expression and query attributes have not been moved and are still there. You will have to do this yourself. Add them to the variable but remove the first level of the xpath, as it is no longer needed.
If you have assigned an XML fragment to a variable you need to do the following: enclose the fragment inside a <literal> element.
Check all the assignments if they are correct.
- You need to add an additional attribute to the catch element when you are catching faults. This can be a faultMessageType or faultElement. Open the catch fault editor and select the right one
- In BPEL 1.1 the while condition is an attribute of the while element. In BPEL 2.0 it has become a child element instead. Remember you need to use the xsl variable format here also.
- The switch statement has been replaced by an if statement.
- Just rename the switch to if
- Make the condition attribute of the first case element a child element of the if element and change it to the xsl variable format (see earlier).
- All other case elements become elseif's. For the condition the same applies as explained before.
- otherwise becomes else.
- I enclosed the Embedded Java activity element (bpelx:exec) in the extensionActivity element and removed the version attribute.
- The checkpoint element can be replaced by the dehydrate element, also enclosed in an extensionActivity element, as it is an Oracle-specific BPEL 2.0 extension like the embedded java activity.
- The terminate activity just needs to be renamed to exit.
- The until attribute of the BPEL 1.1 Pick/onAlarm element has become a child element in 2.0 and needed to be moved.
- Not all annotations are supported anymore. I commented some of them out for now as they were mainly for documentation purposes only.
- Unfortunately the skipCondition attribute is no longer supported. I was a little disappointed about this although I can understand why it is removed. This attribute is just not part of the BPEL 2.0 standard. I had to add if/then/else constructions to mimic the same behavior polluting my once so beautiful BPEL processes.
So we painted our donkey brown but is it a horse now?  Our transformed BPEL 1.1 process validates as a BPEL 2.0 process so it looks like it. Now you can do some BPEL 2.0 refactorings to make it into a proper horse. Here are some suggestions:
- In BPEL 2.0 there is the notation of variable initialization. Where in BPEL 1.1 you need an additional assign activity to initialize a variable in 2.0 you can do this as part of the variable definition. So you can  make the BPEL process cleaner by moving the initialization to the definition part.
- The assign activity has some additional functionalities:
keepSrcElementName (in BPEL 2.0 projects only): Select this option to toggle the keepSrcElementName attribute on the copy rule on and off. This option enables you to replace the element name of the destination (as selected by the to-spec) with the element name of the source.
Change Rule Type (in BPEL 2.0 projects only): Select this option to change the type of the selected rule to one of the BPEL extension rules: bpelx:copyList, bpelx:insertAfter, bpelx:insertBefore, or bpelx:append.
For six simple to moderate complex BPEL processes this cost me about a day and a half work. For our project it was not cost effective to automate this. We could have created an xsl transformation to do parts of the migration. I wonder if you could fully automate this especially the refactoring part.
The explanation was very good. Thanks. | https://technology.amis.nl/2011/01/27/migrating-your-bpel-1-1-process-to-bpel-2-0-soa-suite-11gr1ps2-to-ps3/ | CC-MAIN-2017-43 | refinedweb | 1,094 | 68.36 |
Your browser does not seem to support JavaScript. As a result, your viewing experience will be diminished, and you have been placed in read-only mode.
Please download a browser that supports JavaScript, or enable it if it's disabled (i.e. NoScript).
Because data is coming from User Data, do I need additional info as per this post?
Hi,
There are quite a few hoops to jump through, but this example should get you the idea.
import c4d
def get_port_descid(node, port):
"""Returns the DescID for a ``GvPort``.
If all our ports would be for static description elements, this would be
easy. It would be just ``port.GetSubID()``. Unfortunately we are dealing
with a lot of dynamic descriptions in the context of Xpresso, and at
least I am not aware of any builtin methods that would return the
``DescID`` of a ``GvPort`` in a more sane way.
This solution is especially lazy, a solution less prone to errors
would be to iterate over the description of the ``GvNode`` and see which
combination of ``m, s, u`` is in there.
Args:
node (``c4d.modules.graphview.GvNode``): The node of the port.
port (``c4d.modules.graphview.GvPort``): The port to get the DescID for.
Returns:
``tuple[int]`` or ``None``: The resolved DescID.
"""
# The fragments (DescLevels) of the DescID of the port. The problem is
# that all methods always return an integer-
m, s, u = port.GetMainID(), port.GetSubID(), port.GetUserID()
# Now we are just trying to access the node and see if it raises an
# (attribute) error. If so, we just try the next combination.
# There are no user data DescLevels below 1.
if u > 0:
try:
node[(m, s, u)]
# It is a user data DescID.
return (m, s, u)
except AttributeError as e:
pass
try:
node[(m, s)]
# It is a dynamic description DescID.
return (m, s)
except AttributeError as e:
pass
try:
node[s]
# It is a static description DescID.
return s
except AttributeError as e:
pass
return None
def main():
"""Entry point.
"""
# This example is for a Xpresso GraphView instance, since I do not
# have access to RedShift. I did use two constant nodes and a math
# node as an example scene.
if not op or not op.GetTag(c4d.Texpresso):
return
tag = op.GetTag(c4d.Texpresso)
master = tag.GetNodeMaster()
root = master.GetRoot()
if not root:
return
nodes = root.GetChildren()
if not nodes:
return
# I just took the first top-level node as the node to disconnect,
# which was one of the constant nodes connected to the math node.
out_node = nodes[0]
# Go over all output ports in our node to disconnect/remove.
for out_port in out_node.GetOutPorts():
# Get all input ports the current output port is connected to.
connected_ports = out_port.GetDestination()
# Severe all connections from our output port.
out_port.Remove()
# Let Cinema catch up.
c4d.EventAdd()
# This one of the nastier parts. We will have to unwind the
# DescID the output port has been build for. See the
# respective function for details.
out_did = get_port_descid(out_node, out_port)
# We couldn't resolve the DescID properly.
if out_did is None:
continue
# Now we are just going over all input ports, get the DescID
# they are pointing at and write the value from our output_node
# to the input_node with these two DescIDs.
for in_port in connected_ports:
in_node = in_port.GetNode()
in_did = (in_port.GetMainID(), in_port.GetSubID())
in_did = get_port_descid(in_node, in_port)
if in_did is not None:
in_node[in_did] = out_node[out_did]
if __name__=='__main__':
main()
Cheers,
zipit
I did not really test the user data stuff, I just included it, since I knew you are going to need it. But looking at my code now, the user data stuff does not make much sense. It should probably be node[(m, u)] or node[(s, u)]to test for user data ids (probably the first one). User data DescIDs follow the form ID_USERDATA, x, so for example 700, 1 for the first element. You have to poke around a little bit to find out what is what, but my guess would be node[(m, u)].
node[(m, u)]
node[(s, u)]
ID_USERDATA, x
700, 1
If you keep running into errors, you should try the description stuff I mentioned in the function docstring instead. The approach of the function is not the safest
PS: You also do not need the line in_did = (in_port.GetMainID(), in_port.GetSubID()), this is just some garbage I forgot to delete before I realised that I was going to need a dedicated function for this
in_did = (in_port.GetMainID(), in_port.GetSubID())
The original code yields image 1:
m, u yields image 2:
s, u yields image 3:
It doesn't look like any of the actual User Data Values (3 of which are colour) are coming through. Here are images of the node and the UD:
Neither m,u or s,u causes the actual User Data to transfer to the nodes before the User Data node gets deleted. Is there something I am missing?
I actually meant doing that inside the get_port_descid function. However, I just did it for you and found out that for user data DescID elements the port returns some gibberish for m, s, u like for example 20000000 1000 -1 for a port for the first user data element for a null (we would at least expect 700 and 1 to pop up in there).
get_port_descid
DescID
m, s, u
20000000 1000 -1
700
1
The problem is that ports for user data elements and the DescID they are being build for always have been a bit weird (see GvNode.AddPort). To build a port for the first user data element of a node, we would initialise it with the DescID (700, 5, 1) (the weird part being the 5, which is the integer for the symbol DTYPE_SUBCONTAINER). The other problem is that ports do not really expose the element they point at to the outside world.
GvNode.AddPort
(700, 5, 1)
5
DTYPE_SUBCONTAINER
I am afraid that you will have to wait for MAXON to shed some light on the topic, but I would not hold my breath for them coming up with a solution. It might very well be the case that you cannot infer the DescID for ports pointing at user data elements.
Cool. I will try a different approach. Thanks for all your help on this, zipit.
I tried inputting the User Data directly into a RS port using Python and this code:
import c4d
def main():
obj = op.GetObject()
neon_col_INPUT = obj[c4d.ID_USERDATA,834]
# List: Spline, Colour, Power, Gas, Blend.
spline_UD = {1: 618}
for item in spline_UD:
rs_mat = doc.SearchObject ("S" + str (item) + " Gas")
rs_tag = rs_mat.GetTag(c4d.Ttexture)
rs_mat = rs_tag.GetMaterial()
outputNode = rs_mat[c4d.REDSHIFT_GRAPH_NODES]
gvMaster = outputNode.GetNodeMaster()
gvRoot = gvMaster.GetRoot()
colour = obj[c4d.ID_USERDATA,spline_UD[item]]
print colour
power = obj[c4d.ID_USERDATA,619]
gas = obj[c4d.ID_USERDATA,65]
blend = obj[c4d.ID_USERDATA,66]
currentNode = gvRoot.GetDown()
while currentNode is not None:
if currentNode.GetName() == "RS Material":
RSMaterial[c4d.REDSHIFT_SHADER_MATERIAL_EMISSION_COLOR] = colour
break
Now my code causes C4D to hang. Can you help?
I figured out a way to get this to work. They key is separating the User Data out of the RS XPresso node but still maintaining a connection to this node through a regular XPresso node. Cinema won't let me drag Redshift nodes from the RS XPresso window to the C4D XPresso window. Nor can I drag them from the AM.
The solution is to delete the User Data from the RS XPresso window, paste it into a new C4D Xpresso window, then use Set Driven (Absolute) on the RS nodes I need what is essentially double access to. Cinema then automatically creates a new XPresso node, which I then cut the contents from and paste them into my UD Xpresso node.
My only question is what is the difference between Set Driven (Absolute) and Set Driven (Relative)? Does it make a difference? Thanks.
sorry, I did not see your replies. But I cannot help you as I do neither have access to Redshift nor am very fluent when it comes to Cinemas features.
@Swinn said in Redshift deleting using old information:
My only question is what is the difference between Set Driven (Absolute) and Set Driven (Relative)? Does it make a difference?
My only question is what is the difference between Set Driven (Absolute) and Set Driven (Relative)? Does it make a difference?
Both setups add a Range Mapper node. In case of 'Relative', this range mapper creates an offset. See the online help.
Thanks, PluginStudent!
hi,
your question is sometimes not obvious
Even with the scene it's a bit compicated.
What i can say about your code is that you should be more defensive in your code. Always check the value before continue.
For example what if the object is not found ? You should continue to the next item or stop ?
for item in spline_UD:
rs_mat = doc.SearchObject ("S" + str (item) + " Gas")
if rs_mat is None:
# Because we don't have object to continue, we iterate to the next item.
continue
...
Cheers,
Manuel | https://plugincafe.maxon.net/topic/12627/redshift-deleting-using-old-information/16?lang=en-US | CC-MAIN-2022-05 | refinedweb | 1,508 | 66.23 |
EL expressions are one of the main driving forces for JavaServer Faces. Most dynamic characteristics of pages and widgets are governed by EL expressions. In JSF 1.x, there are some limitations for EL expressions that can at times be a little frustrating. One of the limitations is the fact that no custom functions or operators can be used in EL expressions. Quite some time ago, I wrote this article – – to demonstrate a trick for using a Map interface implementation to access custom functionality from EL expression after all.
However, things can even be better. Rather than jumping through the somewhat elaborate hoops of implementing the Map and consructing complex EL expressions, there are two other approaches. One is to create a custom EL Resolver can configure it in the faces-config.xml. Another is discussed in this article. It involves registering custom Java methods as eligible for use in EL expressions. And that really makes life a lot easier. It allows us to create EL expressions such as:
#{cel:concat (cel:upper( bean.property), cel:max(bean2.property, bean3.property), cel:avg(bean4.list))}
or
#{cel:substr(bean.property, 1, 5)}
Leveraging new custom operators in EL expressions is done in a few simple steps:
- Create custom class with static method(s)
- Create a tag library (.tld file)
- Register each method that should be supported in EL expressions
- Add a reference to the tag library’s URI in the jsp:root element for the page
- Use the registered functions in EL expressions in the page
As a very simple example, let’s take a look at two EL extensions: a concat operator and an upper.
1. Create custom class with static method(s)
The class could hardly be simpler:
package nl.amis.jsf; public final class ELFun { /** * Method that concattenates two strings. More strings can be concattenated * through nested calls such as * #{cel:concat('a', cel:concat('b','c'))} * * @param first string to concattenate * @param second string to concattenate * @return first and second string concattenated together */ public static String concat(final String first, final String second) { return (first == null ? "" : first) + (second== null ? "" : second); } /** * Function that returns the uppercased rendition of the input. * * to be used in EL expressions like this: * #{cel:upper('a')} * can be combined with other functions like this: * #{cel:concat( cel:upper('a'), cel:upper('B'))} * * @param input string to uppercase * @return input turned to uppercase */ public static String upper(final String input) { return (input== null ? "" : input.toUpperCase()); } }
2. Create a tag library descriptor (.tld file)
A TLD file is a straightforward XML document, used to descripe custom JSF UI components, Validators and other extension. And also custom functions. A TLD file is typically located in the WEB-INF directory of the application. An important element of the TLD is the uri. This element is used to identify the Tag Library when referenced from pages.
<> </taglib>
3. Register each method that should be supported in EL expressions
The TLD file contains a function entry for each operator to be enabled for use in EL expressions. For a function we need to indicate the name to be used in EL expressions, a reference to the class that contains the implementation for the function and the exact signature – name, result type and input parameters – for the method that is backing the function:
<> <function> <name>concat</name> <function-class>nl.amis.jsf.ELFun</function-class> <function-signature>java.lang.String concat(java.lang.String, java.lang.String)</function-signature> </function> <function> <name>upper</name> <function-class>nl.amis.jsf.ELFun</function-class> <function-signature>java.lang.String upper(java.lang.String)</function-signature> </function> </taglib>
4. Add a reference to the tag library’s URI in the jsp:root element for the page
<jsp:root xmlns:
5. Use the registered functions in EL expressions in the page
<h:form <h:outputText </h:form>
And that really is all there is to it. You could choose to create the Tag Library and custom class(es) in a separate project, deploy it to JAR and associate the JAR file with other JSF projects that then can leverage these custom functions in their EL expressions.
(Note: my thanks goes to Robert Willem of Brilman who first introduced me to this functionality)
Thanks for sharing. How is it different in JSF 2.0? The only step that is missing is referring .tld in web.xml as facelets tag library. | http://technology.amis.nl/2012/01/17/using-custom-functions-in-el-expressions-in-jsf-1-x/ | CC-MAIN-2014-49 | refinedweb | 736 | 54.93 |
A configuration element for the affine gap cost scheme. More...
#include <seqan3/alignment/configuration/align_config_gap_cost_affine.hpp>
A configuration element for the affine gap cost scheme.
Configures the gap scheme for the alignment algorithm. The gap scheme determines how gaps are penalised inside of the alignment algorithm. If the gap scheme is not configured, it will default to a linear gap scheme initialised with edit distance. Note that the gap open score is used as an additional score. This means that the score for opening a gap during the affine alignment execution is the sum of the gap score and the gap open score.
Construction from strongly typed open score and extension score.
The score for a sequence of
n gap characters is computed as
open_score + n * extension_score.
(n-1) * extension_score + open_score. | https://docs.seqan.de/seqan/3-master-user/classseqan3_1_1align__cfg_1_1gap__cost__affine.html | CC-MAIN-2021-39 | refinedweb | 131 | 58.69 |
The File menu is probably the most widely implemented menu in main-window-style applications, and in most cases it offers, at the least, "new", "save", and "quit" (or "exit") options.
def fileNew(self):
if not self.okToContinue(): return dialog = newimagedlg.NewImageDlg(self) if dialog.exec_():
self.addRecentFile(self.filename) self.image = QImage()
for action, check in self.resetableActions:
action.setChecked(check) self.image = dialog.image() self.filename = None self.dirty = True self.showImage()
self.sizeLabel.setText("%d x %d" % (self.image.width(), self.image.height())) self.updateStatus("Created new image")
okToCon- When the user asks to work on a new file we begin by seeing whether it is "okay Unuef) to continue". This gives the user the chance to save or discard any unsaved 186 "sal changes, or to change their mind entirely and cancel the action.
If the user continues, we pop up a modal NewImageDlg in which they can specify the size, color, and brush pattern of the image they want to create. This dialog, shown in Figure 6.9, is created and used just like the dialogs we created in the preceding chapter. However, the New Image dialog's user interface was mk-
pyqt.py and
Make
We set the filename to be None and the dirty flag to be True to ensure that the user will be prompted to save the image and asked for a filename, if they terminate the application or attempt to create or load another image.
We then call showImage() which displays the image in the imageLabel, scaled according to the zoom factor. Finally, we update the size label in the status bar, and call updateStatus().
def updateStatus(self, message):
self.statusBar().showMessage(message, 5000) self.listWidget.addItem(message) if self.filename is not None:
self.setWindowTitle("Image Changer - %s[*]" % \
os.path.basename(self.filename)) elif not self.image.isNull():
self.setWindowTitle("Image Changer - Unnamed[*]") else:
self.setWindowTitle("Image Changer[*]") self.setWindowModified(self.dirty)
We begin by showing the message that has been passed, with a timeout of five seconds. We also add the message to the log widget to keep a log of every action that has taken place.
If the user has opened an existing file, or has saved the current file, we will have a filename. We put the filename in the window's title using Python's os.path.basename() function to get the filename without the path. We could just as easily have written QFileInfo(fname).fileName() instead, as we did earlier. If there is no filename and the image variable is not a null image, it means that the user has created a new image, but has not yet saved it; so we use a fake filename of "Unnamed". The last case is where no file has been opened or created.
Regardless of what we set the window title to be, we include the string "[*]" somewhere inside it. This string is never displayed as it is: Instead it is used to indicate whether the file is dirty. On Linux and Windows this means that created using Qt Designer, and the user interface file must be converted into a module file, using pyuic4, for the dialog to be usable. This can be done directly by running pyuic4, or by running either mkpyqt.py or Make PyQt, both of which are easier since they work out the correct command-line arguments automatically. We will cover all of these matters in the next chapter.
If the user accepts the dialog, we add the current filename (if any) to the recently used files list. Then we set the current image to be a null image, to ensure that any changes to checkable actions have no effect on the image. Next we go through the actions that we want to be reset when a new image is created or loaded, setting each one to our preferred default value. Now we can safely set the image to the one created by the dialog.
the filename will be shown unadorned if it has no unsaved changes, and with an asterisk (*) replacing the "[*]" string otherwise. On Mac OS X, the close button will be shown with a dot in it if there are unsaved changes. The mechanism depends on the window modified status, so we make sure we set that to the state of the dirty flag.
def fileOpen(self):
if not self.okToContinue(): return dir = os.path.dirname(self.filename) \
if self.filename is not None else "." formats = ["*.%s" % unicode(format).lower() \
for format in QImageReader.supportedImageFormats()] fname = unicode(QFileDialog.getOpenFileName(self,
"Image Changer - Choose Image", dir, "Image files (%s)" % " ".join(formats)))
if fname:
self.loadFile(fname)
If the user asks to open an existing image, we first make sure that they have had the chance to save or discard any unsaved changes, or to cancel the action entirely.
If the user has decided to continue, as a courtesy, we want to pop up a file open dialog set to a sensible directory. If we already have an image filename, we use its path; otherwise, we use ".", the current directory. We have also chosen to pass in a file filter string that limits the image file types the file open dialog can show. Such file types are defined by their extensions, and are passed as a string. The string may specify multiple extensions for a single type, and multiple types. For example, a text editor might pass a string of:
"Text files (*.txt)\nHTML files (*.htm *.html)"
If there is more than one type, we must separate them with newlines. If a type can handle more than one extension, we must separate the extensions with spaces. The string shown will produce a file type combobox with two items, "Text files" and "HTML files", and will ensure that the only file types shown in the dialog are those that have an extension of .txt, .htm, or .html.
List compre hen-sions
In the case of the Image Changer application, we use the list of image type extensions for the image types that can be read by the version of PyQt that the application is using. At the very least, this is likely to include .bmp, .jpg (and .jpeg, the same as .jpg), and .png. The list comprehension iterates over the readable image extensions and creates a list of strings of the form "*.bmp", "*.jpg", and so on; these are joined, space-separated, into a single string by the string join() method.
The QFileDialog.getOpenFileName() method returns a QString which either holds a filename (with the full path), or is empty (if the user canceled). If the user chose a filename, we call loadFile() to load it.
Here, and throughout the program, when we have needed the application's name we have simply written it. But since we set the name in the application object in main() to simplify our QSettings usage, we could instead retrieve the name whenever it was required. In this case, the relevant code would then become:
fname = unicode(QFileDialog.getOpenFileName(self,
"%s - Choose Image" % QApplication.applicationName(), dir, "Image files (%s)" % " ".join(formats)))
It is surprising how frequently the name of the application is used. The file imagechanger.pyw is less than 500 lines, but it uses the application's name a dozen times. Some developers prefer to use the method call to guarantee consistency. We will discuss string handling further in Chapter 17, when we cover internationalization.
If the user opens a file, the loadFile() method is called to actually perform the loading. We will look at this method in two parts.
def loadFile(self, fname=None): if fname is None:
action = self.sender() if isinstance(action, QAction):
fname = unicode(action.data().toString()) if not self.okToContinue(): return else:
return
If the method is called from the fileOpen() method or from the loadInitial-File() method, it is passed the filename to open. But if it is called from a recently used file action, no filename is passed. We can use this difference to distinguish the two cases. If a recently used file action was invoked, we retrieve the sending object. This should be a QAction, but we check to be safe, and then extract the action's user data, in which we stored the recently used file's full name including its path. User data is held as a QVariant, so we must convert it to a suitable type. At this point, we check to see whether it is okay to continue. We do not have to make this test in the "file open" case, because there, the check is made before the user is even asked for the name of a file to open. So now, if the method has not returned, we know that we have a filename in fname that we must try to load.
if fname:
self.filename = None image = QImage(fname)
if image.isNull():
message = "Failed to read %s" % fname else:
self.addRecentFile(fname) self.image = QImage()
for action, check in self.resetableActions:
action.setChecked(check) self.image = image self.filename = fname self.showImage() self.dirty = False self.sizeLabel.setText("%d x %d" % (
image.width(), image.height())) message = "Loaded %s" % os.path.basename(fname) self.updateStatus(message)
We begin by making the current filename None and then we attempt to read the image into a local variable. PyQt does not use exception handling, so errors must always be discovered indirectly. In this case, a null image means that for add- some reason we failed to load the image. If the load was successful we add the new filename to the recently used files list, where it will appear only if another file is subsequently opened, or if this one is saved under another name. Next, we set the instance image variable to be a null image: This means that we are free to reset the checkable actions to our preferred defaults without any side effects. This works because when the checkable actions are changed, although the relevant methods will be called due to the signal-slot connections, the methods do nothing if the image is null.
After the preliminaries, we assign the local image to the image instance variable and the local filename to the filename instance variable. Next, we call showImage() to show the image at the current zoom factor, clear the dirty flag, and update the size label. Finally, we call updateStatus() to show the message in the status bar, and to update the log widget.
def fileSave(self):
if self.image.isNull(): return if self.filename is None:
self.fileSaveAs() else:
if self.image.save(self.filename, None):
self.updateStatus("Saved as %s" % self.filename) self.dirty = False else:
self.updateStatus("Failed to save %s" % self.filename)
Recent-File()
The fileSave() method, and many others, act on the application's data (a QImage instance), but make no sense if there is no image data. For this reason, many of the methods do nothing and return immediately if there is no image data for them to work on.
If there is image data, and the filename is None, the user must have invoked the "file new" action, and is now saving their image for the first time. For this case, we pass on the work to the fileSaveAs() method.
If we have a filename, we attempt to save the image using QImage.save(). This method returns a Boolean success/failure flag, in response to which we update the status accordingly. (We have deferred coverage of loading and saving custom file formats to Chapter 8, since we are concentrating purely on main window functionality in this chapter.)
def fileSaveAs(self):
if self.image.isNull(): return fname = self.filename if self.filename is not None else "." formats = ["*.%s" % unicode(format).lower() \
for format in QImageWriter.supportedImageFormats()] fname = unicode(QFileDialog.getSaveFileName(self,
"Image Changer - Save Image", fname, "Image files (%s)" % " ".join(formats)))
if fname:
if "." not in fname: fname += ".png" self.addRecentFile(fname) self.filename = fname self.fileSave()
When the "file save as" action is triggered we begin by retrieving the current filename. If the filename is None, we set it to be ".", the current directory. We then use the QFileDialog.getSaveFileName() dialog to prompt the user to give us a filename to save under. If the current filename is not None, we use that as the default name—the file save dialog takes care of giving a warning yes/no dialog if the user chooses the name of a file that already exists. We use the same technique for setting the file filters string as we used for the "file open" action, but this time using the list of image formats that this version of PyQt can write (which may be different from the list of formats it can read).
If the user entered a filename that does not include a dot, that is, it has no extension, we set the extension to be .png. Next, we add the filename to the recently used files list (so that it will appear if a different file is subsequently opened, or if this one is saved under a new name), set the filename instance variable to the name, and pass the work of saving to the fileSave() method that we have just reviewed.
The last file action we must consider is "file print". When this action is invoked the filePrint() method is called. This method paints the image on a printer. Since the method uses techniques that we have not covered yet, we will defer
Printing
Images sidebar. | https://www.pythonstudio.us/pyqt-programming/handling-file-actions.html | CC-MAIN-2019-51 | refinedweb | 2,252 | 66.13 |
Deletes a class.
Workload Manager Library (libwlm.a)
#include <sys/wlm.h>
int wlm_delete_class ( wlmargs)
struct wlm_args *wlmargs;
The wlm_delete_class subroutine deletes an existing superclass or subclass. A superclass cannot be deleted if it still has subclasses other than Default and Shared defined.
The caller must have root authority to delete a superclass and must have administrator authority on a superclass to delete a subclass of the superclass.
The following fields of the wlm_args structure and the embedded
substructures need to be provided:
All the other fields can be left uninitialized for this call.
Upon successful completion, the wlm_delete_class subroutine returns a value of 0. If the wlm_delete_class subroutine is unsuccessful, a non-0 value is returned.
For a list of the possible error codes returned by the WLM API functions, see the description of the wlm.h header file.
The mkclass command, chclass command, rmclass command.
The wlm.h header file.
The wlm_change_class (wlm_change_class Subroutine) subroutine, wlm_create_class (wlm_create_class Subroutine) subroutine.
Workload Management in AIX 5L Version 5.1 System Management Concepts: Operating System and Devices. | http://ps-2.kev009.com/wisclibrary/aix51/usr/share/man/info/en_US/a_doc_lib/libs/basetrf2/wlm_delete_class.htm | CC-MAIN-2022-33 | refinedweb | 176 | 52.26 |
Chapter 23. Class Coding Basics
Now that we’ve talked about OOP in the abstract, it’s time to see how this translates to actual code. This chapter and the next.
Classes have three primary distinctions. At a base level, they are mostly just namespaces, much like the modules we studied in Part V. But, unlike modules, classes also have support for generating multiple objects, for namespace inheritance, and for operator overloading. Let’s begin our
class statement tour by exploring each of these three distinctions in turn.
Classes Generate Multiple Instance Objects. Class ...
Get Learning Python, 3rd Edition now with O’Reilly online learning.
O’Reilly members experience live online training, plus books, videos, and digital content from 200+ publishers. | https://www.oreilly.com/library/view/learning-python-3rd/9780596513986/ch23.html | CC-MAIN-2021-21 | refinedweb | 121 | 58.38 |
I.
Even if I open the generated test case files, the "Code Cleanup" option is disabled (yet other ReSharper options are available). This is with VS2008 and RS EAP 807.
Why would RS exclude some partial class files in the same project? Here is a snippet of the start of a file that is skipped by RS cleanup (Note that RS does flag errors in the file, including global analysis):
using Microsoft.Pex.Framework;
using Bks.Framework.Common;
using Microsoft.VisualStudio.TestTools.UnitTesting;
using Microsoft.Pex.Framework.Generated;
using System;
namespace Bks.Framework.Common.Pex
{
public partial class CachedStringTest
{
[System.Diagnostics.CodeAnalysis.SuppressMessage ( "Microsoft.Naming", "CA1707:IdentifiersShouldNotContainUnderscores" ), System.Diagnostics.CodeAnalysis.SuppressMessage ( "Microsoft.Naming", "CA1709:IdentifiersShouldBeCasedCorrectly", MessageId = "op" ), TestMethod]
public void op_GreaterThanCachedStringObject_20080522_220038_000 ( )
{
PexValue.Generated.Clear ( );
this.op_GreaterThan ( ( CachedString ) null, ( object ) null );
PexValue.Generated.Validate ( "result", "False" );
}
Edited by: Brian Strelioff on May 25, 2008 5:26 PM.
Hello,
Could you please paste a piece of the .csproj file that includs the file
in question? Or make a screenshot of the Solution Explorer with that file
selected.
R# fails to reformat fils that are parented under other C# files in the solution
explorer.
—
Serge Baltic
JetBrains, Inc —
“Develop with pleasure!”
Here is a snipet from a csproj file:
AtgProjectTest.cs AtgRunnerTest.cs ]]>
Note that the .g. files are the ones skipped by RS code cleanup.
In the Solution explorer, they show up under their parent (i.e. the "DependentUpon" file via usual tree view display).
Hello,
Yes, that's it. R# would currently ignore .cs files that have a DependentUpon
metadata item pointing to another .cs file.
Removing this item should not affect compilation but will enable R# analyses
and code cleanups. R# regards such files as autogenerated and supposes that
they should not be altered. I do not know if there're any plans for adjusting
this heuristics.
—
Serge Baltic
JetBrains, Inc —
“Develop with pleasure!”
Maybe add an option on whether or not "DependentUpon" files should be skipped?
I am not sure all users would be satisfied with hardcoding this decision one way or the other.
On a sidenote: I don't recommend using tools like Pex. If you use TDD you
should focus on the behaviour of your code and not the implemenation. IMHO
it is a bad practice to first write your code and then using code coverage
tools to write tests for execution paths that are not covered by unit tests.
Microsoft should focus on providing tools and frameworks that actually help
us do our work and at the same time promote good practices instead of
providing us a tool like Pex that actually promotes the bad practices of
code first, unit test later.
Regards
Gabriel Lozano-Moran
"Brian Strelioff" <BKStrelioff@Hotmail.com> wrote in message
news:23114470.21611211577422569.JavaMail.jive@app4.labs.intellij.net...
.
>
>
>
>
This behavour is not likely to be modified in ReSharper 4.0
Once we discover more reliable way to tell generated files from manually
created, we'll fix this problem
--
Eugene Pasynkov
Developer
JetBrains, Inc
"Develop with pleasure!"
"Brian Strelioff" <BKStrelioff@Hotmail.com> wrote in message
news:6806554.23271211765773548.JavaMail.jive@app4.labs.intellij.net...
>
Realistically, while I support TDD there is *NEVER*enough time or budget allocated for it, nor can the cost of manually generating tests for existing software be justified. Automatic test generators are essential, not only for capturing existing behaviour (i.e. creating regression tests) but also for ensuring timely and cost-effective completeness/coverage of any test suite.
Something like Pex is a far more credible source for tests since it is based on run-time analysis of the code under test (i.e. how the code actually works), rather than potential misunderstanding of potentially missing/incorrect documents (design or usage). There will always be some thought required, for example does any particular test (TDD or generated) capture the proper behaviour, or does it reproduce an existing "unknown" bug behaviour. Similarly, Pex is far more reliable at discovering and writing test cases for "hidden" behaviours/requirements (i.e. those imposed by a subcomponent of the component under test).
TDD is a good idea, but it is not the solution to reliable, affordable, and timely software. Nor is Pex all by itself, or any other technology that I am aware of. But Pex does aid the TDD process (and other areas of software development), and as such reduces the cost while simultaneously increasing the quality of software development.
Anyway if you need tools like PEX you are not practicing TDD. I agree that
it takes at least a couple of months (2, 3 months) before the developers get
the hang of TDD and yes, TDD is the solution to reliable, affordable and
timely software. What you will test using PEX is the correctness of your
implemenation, not the behaviour. PEX will NEVER be adopted by the XP
community.
I just said that I don't recommend PEX but if you really want to use it, be
my guest and have loads of fun with it.
Using Continous Integration with a ten-minute build/first-stage build and
code coverage tool, you could easily detect when someone checks-in code that
was not covered by at least 1 unit test. If this is the case then PEX is
still not the answer. I would like to know why this developer checked in
code that had no covering unit test and have him/her throw away the code.
This is my 50 cent...
Cheers
Gabriel Lozano-Moran
"Brian Strelioff" <BKStrelioff@Hotmail.com> wrote in message
news:8660862.24861211809950226.JavaMail.jive@app4.labs.intellij.net...
>
> | https://resharper-support.jetbrains.com/hc/en-us/community/posts/206075219-ReSharper-and-Pex | CC-MAIN-2020-16 | refinedweb | 933 | 57.98 |
Announcing .NET 5.0 RC 1
Richard
Today, we are shipping .NET 5.0 Release Candidate 1 (RC1). It is a near-final release of .NET 5.0, and the first of two RCs before the official release in November. RC1 is a “go live” release; you are supported using it in production. At this point, we’re looking for reports of any remaining critical bugs that should be fixed before the final release. We need your feedback to get .NET 5.0 across the finish line.
We also released RC1 versions of ASP.NET Core and EF Core today.
You can download .NET 5.0, for Windows, macOS, and Linux:.
We recently published a few deep-dive posts about new capabilities in 5.0 that you may want to check out:
- F# 5 update for August
- ARM64 Performance in .NET 5
- Improvements in native code interop in .NET 5.0
- Introducing the Half type!
- App Trimming in .NET 5
- Customizing Trimming in .NET 5
- Automatically find latent bugs in your code with .NET 5
Just like I did for .NET 5.0 Preview 8 I’ve chosen a selection of features to look at in more depth and to give you a sense of how you’ll use them in real-world usage. This post is dedicated to records in C# 9 and
System.Text.Json.JsonSerializer. They are separate features, but also a nice pairing, particularly if you spend a lot of time crafting POCO types for deserialized JSON objects.
C# 9 — Records
Records are perhaps the most important new feature in C# 9. They offer a broad feature set (for a language type kind), some of which requires RC1 or later (like
record.ToString()).
The easiest way to think of records is as immutable classes. Feature-wise, they are closest to tuples. One can think of them as custom tuples with properties and immutability. There are likely many cases where tuples are used today that would be better served by records.
If you are using C#, you will get the best experience if you are using named types (as opposed to a feature like tuples). Static typing is the primary design point of the language. Records make it easier to use small types, and take advantage of type safety throughout your app.
Records are immutable data types
Records enable you to create immutable data types. This is great for defining types that store small amounts of data.
The following is an example of a record. It stores user information from a login screen.
public record LoginResource(string Username, string Password, bool RememberMe);
It is semantically similar (almost identical) to the following class. I’ll cover the differences shortly.
public class LoginResource { public LoginResource(string username, string password, bool rememberMe) { Username = username; Password = password; RememberMe = rememberMe; } public string Username { get; init; } public string Password { get; init; } public bool RememberMe { get; init; } }
init is a new keyword that is an alternative to
set.
set allows you to assign to a property at any time.
init allows you to assign to a property only during object construction. It’s the building block that records rely on for immutability. Any type can use
init. It isn’t specific to records, as you can see in the previous class definition.
private set might seem similar to
init;
private set prevents other code (outside the type) from mutating data.
init will generate compiler errors when a type mutates a property accidentally (after construction).
private set isn’t intended to model immutable data, so doesn’t generate any compiler errors or warnings when the type mutates a property value after construction.
Records are specialized classes
As I just covered, the record and the class variants of
LoginResource are almost identical. The class definition is a semantically identical subset of the record. The record provides more, specialized, behavior.
Just so we’re on the same page, the following comparison is between a
record, and a
class that uses
init instead of
set for properties, as demonstrated earlier.
What’s the same?
- Construction
- Immutability
- Copy semantics (records are classes under the hood)
What’s different?
- Record equality is based on content. Class equality based on object identity.
- Records provide a
GetHashCode()implementation that is based on record content.
- Records provide an
IEquatable<T>implementation. It uses the unique
GetHashCode()behavior as the mechanism to provide the content-based equality semantic for records.
- Record ToString() is overridden to print record content.
The differences between a record and a class (using
init) can be seen in the disassembly for LoginResource as a record and LoginResource as a class.
I’ll show you some code that demonstrates these differences.
Note: You will notice that the
LoginResource types end in
Record and
Class. That pattern is not the indication of a new naming pattern. They are only named that way so that there can be a record and class variant of the same type in the sample. Please don’t name your types that way.
This code produces the following output.
rich@thundera records % dotnet run Test record equality -- lrr1 == lrr2 : True Test class equality -- lrc1 == lrc2 : False Print lrr1 hash code -- lrr1.GetHashCode(): -542976961 Print lrr2 hash code -- lrr2.GetHashCode(): -542976961 Print lrc1 hash code -- lrc1.GetHashCode(): 54267293 Print lrc2 hash code -- lrc2.GetHashCode(): 18643596 LoginResourceRecord implements IEquatable<T>: True LoginResourceClass implements IEquatable<T>: False Print LoginResourceRecord.ToString -- lrr1.ToString(): LoginResourceRecord { Username = Lion-O, Password = jaga, RememberMe = True } Print LoginResourceClass.ToString -- lrc1.ToString(): LoginResourceClass
Record syntax
There are multiple patterns for declaring records that cater to different use cases. After playing with each one, you start to get a feel for the benefits of each pattern. You’ll also see that they are not distinct syntax but a continuum of options.
The first pattern is the simplest one — a one liner — but offers the least flexibility. It’s good for records with a small number of required properties.
Here is the LoginResource record, shown earlier, as an example of this pattern. That’s it. That one line is the entire definition.
public record LoginResource(string Username, string Password, bool RememberMe);
Construction follows the requirements of a constructor with parameters (including the allowance for optional parameters).
var login = new LoginResource("Lion-O", "jaga", true);
You can also use target typing if you prefer.
LoginResource login = new("Lion-O", "jaga", true);
The next syntax makes all the properties optional. There is an implicit parameterless constructor provided for the record.
public record LoginResource { public string Username {get; init;} public string Password {get; init;} public bool RememberMe {get; init;} }
Construction uses object initializers and could look like the following:
LoginResource login = new() { Username = "Lion-O", TemperatureC = "jaga" };
Maybe you want to make those two properties required, with the other one optional. This last pattern would look like the following.
public record LoginResource(string Username, string Password) { public bool RememberMe {get; init;} }
Construction could look like the following, with
LoginResource login = new("Lion-O", "jaga");
And with
LoginResource login = new("Lion-O", "jaga") { RememberMe = true };
I want to make sure that you don’t think that records are exclusively for immutable data. You can opt into exposing mutable properties, as you can see in the following example that reports information about batteries.
Model and
TotalCapacityAmpHours properties are immutable and
RemainingCapacityPercentange is mutable.
It produces the following output.
Non-destructive record mutation
Immutability provides significant benefits, but you will quickly find a case where you need to mutate a record. How can you do that without giving up on immutability? The
with expression satisfies this need. It enables creating a new record in terms of an existing record of the same type. You can specify the new values that you want to be different, and all other properties are copied from the existing record.
Let’s transform the username to lower-case. That’s how usernames are stored in our pretend user database. However, the original username casing is required for diagnostic purposes. It could look like the following, assuming the code from the previous example:
LoginResource login = new("Lion-O", "jaga", true); LoginResource loginLowercased = login with {Username = login.Username.ToLowerInvariant()};
The
login record hasn’t been changed. In fact, that’s impossible. The transformation has only affected
loginLowercased. Other than the lowercase transformation to
loginLowercased, it’s identical to
We can check that
with has done what we expect using the built-in
ToString() override.
Console.WriteLine(login); Console.WriteLine(loginLowercased);
This code produces the following output.
LoginResource { Username = Lion-O, Password = jaga, RememberMe = True } LoginResource { Username = lion-o, Password = jaga, RememberMe = True }
We can go one step further with understanding how
with works. It copies all values from one record to the other. This isn’t a delegation model where one record depends on another. In fact, after the
with operation completes, there is no relationship between the two records.
with only has meaning for record construction. That means for reference types, the copy is just a copy of the reference. For value types, the value is copied.
You can see that semantic at play with the following code.
Console.WriteLine($"Record equality: {login == loginLowercased}"); Console.WriteLine($"Property equality: Username == {login.Username == loginLowercased.Username}; Password == {login.Password == loginLowercased.Password}; RememberMe == {login.RememberMe == loginLowercased.RememberMe}");
It produces the following output.
Record equality: False Property equality: Username == False; Password == True; RememberMe == True
Record inheritance
It’s easy to extend a record. Let’s assume a new
LastLoggedIn property. It could be added directly to
LoginResource. That’s a fine idea. Records are not brittle like interfaces traditionally have been, unless you want to make new properties required constructor parameters.
In this case, I want to make
LastLogin required. Imagine the codebase is large, and it would be expensive to sprinkle knowledge of the
LastLoggedIn property in all the places where a
LoginResource is created. Instead, we’re going to create a new record that extends
LoginResource with this new property. Existing code will work in terms of
LoginResource and new code will work in terms of a new record that can then assume that the
LastLoggedIn property has been populated. Code that accepts a
LoginResource will happily accept the new record, by virtue of regular inheritance rules.
This new record could be based on any of the
LoginResource variants demonstrated earlier. It will be based on the following one.
public record LoginResource(string Username, string Password) { public bool RememberMe {get; init;} }
The new record could look like the following.
public record LoginWithUserDataResource(string Username, string Password, DateTime LastLoggedIn) : LoginResource(Username, Password) { public int DiscountTier {get; init}; public bool FreeShipping {get; init}; }
I’ve made
LastLoggedIn a required property, and taken the opportunity to add additional, optional, properties that may or may not be set. The optional
LoginResource record.
Modeling record construction helpers
One of the patterns that isn’t necessarily intuitive is modeling helpers that you want to use as part of record construction. Let’s switch examples, to weight measurements. Weight measurements come from an internet-connected scale. The weight is specified in Kilograms, however, there are some cases where the weight needs to be provided in pounds.
The following record declaration could be used.
public record WeightMeasurement(DateTime Date, double Kilograms) { public double Pounds {get; init;} public static double GetPounds(double kilograms) => kilograms * 2.20462262; }
This is what construction would look like.
var weight = 200; WeightMeasurement measurement = new(DateTime.Now, weight) { Pounds = WeightMeasurement.GetPounds(weight) };
In this example, it is necessary to specify the weight as a local. It isn’t possible to access the
Kilograms property within an object initializer. It is also necessary to define
GetPounds as a static method. It isn’t possible to call instance methods (for the type being constructed) within an object initializer.
Records and Nullability
You get nullability for free with records, right? Everything is immutable, so where would the nulls come from? Not quite. An immutable property can be null and will always be null in that case.
Let’s look at another program without nullability enabled.
using System; using System.Collections.Generic; Author author = new(null, null); Console.WriteLine(author.Name.ToString()); public record Author(string Name, List<Book> Books) { public string Website {get; init;} public string Genre {get; init;} public List<Author> RelatedAuthors {get; init;} } public record Book(string name, int Published, Author author);
This program compiles and will throw a
NullReference exception, due to dereferencing
author.Name, which is
null.
To further drive home this point, the following will not compile.
author.Name is initialized as
null and then cannot be changed, since the property is immutable.
Author author = new(null, null); author.Name = "Colin Meloy";
I’m going to update my project file to enable nullability.
<Project Sdk="Microsoft.NET.Sdk"> <PropertyGroup> <OutputType>Exe</OutputType> <TargetFramework>net5.0</TargetFramework> <LangVersion>preview</LangVersion> <Nullable>enable</Nullable> </PropertyGroup> </Project>
I’m now seeing a bunch of warnings like the following.
/Users/rich/recordsnullability/Program.cs(8,21): warning CS8618: Non-nullable property 'Website' must contain a non-null value when exiting constructor. Consider declaring the property as nullable. [/Users/rich/recordsnullability/recordsnullability.csproj]
I updated the
Author record with null annotations that describe my intended use of the record.
public record Author(string Name, List<Book> Books) { public string? Website {get; init;} public string? Genre {get; init;} public List<Author>? RelatedAuthors {get; init;} }
I’m still getting warnings for the
null, null construction of
Author seen earlier.
/Users/rich/recordsnullability/Program.cs(5,21): warning CS8625: Cannot convert null literal to non-nullable reference type. [/Users/rich/recordsnullability/recordsnullability.csproj]
That’s good, since that’s a scenario I want to protect against. I’ll now show you an updated variant of the program that plays nicely with and enjoys the benefits of nullability.
This program compiles without nullable warnings.
You might be wondering about the following line:
lord.RelatedAuthors.AddRange(
Author.RelatedAuthors can be null. The compiler can see that the
RelatedAuthors property is set just a few lines earlier, so it knows that
RelatedAuthors reference will be non-null.
However, imagine the program instead looked like the following.
Author GetAuthor() { return new Author("Karen Lord") { Website = "", RelatedAuthors = new() }; } Author lord = GetAuthor();
The compiler doesn’t have the flow analysis smarts to know that
RelatedAuthors will be non-null when type construction is within a separate method. In that case, one of two following patterns would be needed.
lord.RelatedAuthors!.AddRange(
or
if (lord.RelatedAuthors is object) { lord.RelatedAuthors.AddRange( ... }
This is a long demonstration of records nullability just to say that it doesn’t change anything about the experience of using nullable reference types.
Separately, you may have noticed that I moved the
Books property on the
Author record to be an initialized get-only property, instead of being a required parameter in the record constructor. This was driven by there being a circular relationship between
Author and
Books. Immutability and circular references can cause headaches. It is OK in this case, and just means that all
Author objects need to be created before
Book objects. As a result, it isn’t possible to provide a fully initialized set of
Book objects as part of
Author construction. The best we could ever expect as part of
Author construction is an empty
List<Book>. As a result, initializing an empty
List<Book> as part of
Author construction seem like the best choice. There is no rule that all of these properties need to be
init style. I’ve chosen to do that to demonstrate the behavior when you do.
We’re about to transition to talk about JSON serialization. This example, with circular references, relates to the Preserving references in JSON object graphs section coming shortly.
JsonSerializer supports object graphs with circular references, but not with types with parameterized constructors. You can serialize the
Author object to JSON, but not back to an
Author object as it is currently defined. If
Author wasn’t a
record or didn’t have circular references, then both serialization and deserialization would work with
JsonSerializer.
System.Text.Json
System.Text.Json has been significantly improved in .NET 5.0 to improve performance, reliability, and to make it easier for people to adopt that are familiar with Newtonsoft.Json. It also includes support for deserializing JSON objects to records, the new C# feature covered earlier in this post.
GetFromJsonAsync<T>() extension method.. Yes, in a future release..
Performance
JsonSerializer performance is significantly improved in .NET 5.0. Stephen Toub covered some
JsonSerializer improvements in his Performance Improvements in .NET 5 post. I’ll cover a few more here.
Collections (de)serialization
We made significant improvements for large collections (~1.15x-1.5x on deserialize, ~1.5x-2.4x+ on serialize). You can see these improvements characterized in much more detail dotnet/runtime #2259.
The improvements to
List<int> (de)serialization is particularly impressive, comparing .NET 5.0 to .NET Core 3.1. Those changes are going to be show up as meaningful with high-performance apps.
Property lookups — naming convention missing properties and case insensitivity has been greatly improved in .NET 5.0. It is ~1.75x faster in some cases.
The following benchmarks for a simple 4-property test class that has property names > 7 bytes.
3.1 performance | Method | Mean | Error | StdDev | Median | Min | Max | Gen 0 | Gen 1 | Gen 2 | Allocated | |---------------------------------- |-----------:|--------:|--------:|-----------:|-----------:|-----------:|-------:|------:|------:|----------:| | CaseSensitive_Matching | 844.2 ns | 4.25 ns | 3.55 ns | 844.2 ns | 838.6 ns | 850.6 ns | 0.0342 | - | - | 224 B | | CaseInsensitive_Matching | 833.3 ns | 3.84 ns | 3.40 ns | 832.6 ns | 829.4 ns | 841.1 ns | 0.0504 | - | - | 328 B | | CaseSensitive_NotMatching(Missing)| 1,007.7 ns | 9.40 ns | 8.79 ns | 1,005.1 ns | 997.3 ns | 1,023.3 ns | 0.0722 | - | - | 464 B | | CaseInsensitive_NotMatching | 1,405.6 ns | 8.35 ns | 7.40 ns | 1,405.1 ns | 1,397.1 ns | 1,423.6 ns | 0.0626 | - | - | 408 B | 5.0 performance | Method | Mean | Error | StdDev | Median | Min | Max | Gen 0 | Gen 1 | Gen 2 | Allocated | |---------------------------------- |---------:|--------:|--------:|---------:|---------:|---------:|-------:|------:|------:|----------:| | CaseSensitive_Matching | 799.2 ns | 4.59 ns | 4.29 ns | 801.0 ns | 790.5 ns | 803.9 ns | 0.0985 | - | - | 632 B | | CaseInsensitive_Matching | 789.2 ns | 6.62 ns | 5.53 ns | 790.3 ns | 776.0 ns | 794.4 ns | 0.1004 | - | - | 632 B | | CaseSensitive_NotMatching(Missing)| 479.9 ns | 0.75 ns | 0.59 ns | 479.8 ns | 479.1 ns | 481.0 ns | 0.0059 | - | - | 40 B | | CaseInsensitive_NotMatching | 783.5 ns | 3.26 ns | 2.89 ns | 783.5 ns | 779.0 ns | 789.2 ns | 0.1004 | - | - | 632 B |
TechEmpower improvement
We’ve spent significant effort improving .NET performance on the TechEmpower benchmark. It made sense to validate these
JsonSerializer improvements with the TechEmpower JSON benchmark. Performance is now ~ 19% better, which should improve the placement of .NET on that benchmark once we update our entries to .NET 5.0. Our goal for the release was to be more competitive with
netty, which is a common Java webserver..
If we look at
Min column, we can do some simple math to calculate the improvement:
153.3/128.6 = ~1.19. That’s a 19% improvement.
Closing
I hope you’ve enjoyed this deeper dive into records and
JsonSerializer. They are just two of the many improvement in .NET 5.0. The Preview 8 post covers a larger set of features, that provides a broader view of the value that’s coming in 5.0.
As you know, we’re not adding any new features in .NET 5.0 at this point. I’m using these late preview and RC posts to cover all the features we’ve built. Which ones would you like to see me cover in the RC2 release blog post? I’d like to know what I should focus on.
Please share your experience using RC1 in the comments. Thanks to everyone that has installed .NET 5.0. We appreciate all the engagement and feedback we’ve received so far.
Produces the following output: | https://devblogs.microsoft.com/dotnet/announcing-net-5-0-rc-1/comment-page-2/ | CC-MAIN-2021-10 | refinedweb | 3,335 | 60.82 |
Hi Martin,
thanks for your help. I know now there will be no easy way for using the
FileItem of the commons FileUpload. I think it is some kind of strange that
the struts owned FormFile is not serializable.
So the only way to store a file will be as byte array or String or something
like that.
As said some lines ago, thanks for your help.
greetings
Andreas Heinecke
Am Sonntag, 31. Juli 2005 18:42 schrieb Martin Cooper:
> The reason you're getting null returned from parseRequest() is that
> the request was already parsed, and therefore consumed, by Struts
> before your action was invoked. If you want to parse the request
> yourself, the only way to do that is to _not_ associate an action form
> with your action mapping. If there is no form bean, then Struts will
> obviously not try to populate it, and it is the population process
> that causes a multipart stream to be parsed by Struts. (In the old
> days, there was a way to disable multipart handling, but that got lost
> somewhere along the line, I'm afraid.)
>
> --
> Martin Cooper
>
> On 7/31/05, Andreas Heinecke <andreas@objectinc.de> wrote:
> > Hi,
> >
> > I encountered a problem with commons FileUpload and Struts. I decided to
> > use the commons FileUpload in my struts app because the upload with
> > struts (FormFile) isn't serializable. I need it to be serializable
> > because I want to make the uploaded file persistent with hibernate.
> > I found out that using the commons FileUpload will be serializable, since
> > it implements the Interface. But how do I integrate it with struts?
> >
> > Here is what I've done:
> >
> > I created a multipart-form with tags by struts:
> > <html:form
> > <table>
> > <tr>
> > <td>Titel </td>
> > <td><html:text > /></td> </tr>
> > <tr>
> > <td>Datei </td>
> > <td><html:file
> ></html:file> </td> </tr>
> > <tr>
> > <td> </td>
> > <td
> ><html:reset>reset</html:reset> <html:submit>eintragen<
> >/html:submit> </td>
> > </tr>
> > </table>
> > </html:form>
> >
> > The I created to corresponding Form class:
> > public class UploadForm extends ValidatorForm
> > {
> > private String title;
> > private FormFile file;
> >
> > // getters .. and setters left out for this post
> >
> > }
> >
> > This form will be sent to my Action class UploadAction:
> >
> > Here I am able to retrieve the FormFile ... but thats not serializable
> >
> > If I try to get the uploaded file as described at FileUpload Homepage,
> > like this:
> >
> > DiskFileUpload upload = new DiskFileUpload();
> > List<FileItem> items = upload.parseRequest(request);
> >
> > The returned list is null.
> >
> > Does anybody allready done something like that?
> > Any sugesstion is much appreciated!
> >
> > Thx in advance,
> >
> > regards
> >
> > Andreas Heinecke
>
> ---------------------------------------------------------------------
> To unsubscribe, e-mail: commons-user-unsubscribe@jakarta.apache.org
> For additional commands, e-mail: commons-user-help@jakarta.apache.org | http://mail-archives.apache.org/mod_mbox/commons-user/200507.mbox/%3C200507312303.11627.andreas@objectinc.de%3E | CC-MAIN-2016-07 | refinedweb | 442 | 62.98 |
Learn how to display temperature and humidity readings from a DHT11/DHT22 sensor in an SSD1306 OLED display using an ESP32 or an ESP8266 with Arduino IDE.
The idea of using the OLED display with the ESP32 or ESP8266 is to illustrate how you can create a physical user interface for your boards.
Project Overview
In this project we’ll use an I2C SSD1306 128×64 OLED display as shown in the following figure.
The temperature and humidity will be measured using the DHT22 temperature and humidity sensor (you can also use DHT11).
If you’re not familiar with the DHT11/DHT22 sensor, we recommend reading our DHT11/DHT22 introductory guide first.
Parts required
For this tutorial you need the following components:
- 0.96 inch OLED display
- ESP32 or ESP8266 (read ESP32 vs ESP8266)
- DHT22 or DHT11 temperature and humidity sensor
- Breadboard
- 10k Ohm resistor
- Jumper wires
You can use the preceding links or go directly to MakerAdvisor.com/tools to find all the parts for your projects at the best price!
Schematic
The OLED display we’re using communicates over the I2C protocol, so you need to connect it to the ESP32 or ESP8266 I2C pins.
By default, the ESP32 I2C pins are:
- GPIO 22: SCL
- GPIO 21: SDA
If you’re using an ESP8266, the default I2C pins are:
- GPIO 5 (D1): SCL
- GPIO 4 (D2): SDA
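If the default pins are already in use, both Arduino cores let you route I2C to other GPIOs by calling Wire.begin(SDA, SCL) with your chosen pins before initializing the display. The fragment below is only an example — the pin numbers are illustrative, not a recommendation, and exact re-initialization behavior varies between core versions, so check your core's Wire documentation:

```cpp
#include <Wire.h>

// Example alternative pins -- pick GPIOs that are free on your board.
#define I2C_SDA 33
#define I2C_SCL 32

void setup() {
  // Must run before display.begin(), so the SSD1306 library
  // talks I2C on the remapped pins.
  Wire.begin(I2C_SDA, I2C_SCL);
}

void loop() {}
```

Because the Adafruit_SSD1306 object is constructed with &Wire, it uses whatever pins the Wire instance was started with.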
Follow the next schematic diagram if you’re using an ESP32 board:
Recommended reading: ESP32 Pinout Reference Guide
If you’re using an ESP8266 follow the next diagram instead.
In this case we’re connecting the DHT data pin to GPIO 14, but you can use any other suitable GPIO.
Recommended reading: ESP8266 Pinout Reference Guide
Installing Libraries
Before uploading the code, you need to install the libraries to write to the OLED display and the libraries to read from the DHT sensor.
Installing the OLED libraries
There are several libraries available to control the OLED display with the ESP32 or ESP8266. In this tutorial we'll use two libraries from Adafruit: the Adafruit_SSD1306 library and the Adafruit_GFX library. You can install both through the Arduino IDE Library Manager (Sketch > Include Library > Manage Libraries…).
Installing the DHT sensor libraries
To read from the DHT sensor we'll also use libraries from Adafruit: the DHT sensor library and the Adafruit Unified Sensor library, both available through the Library Manager as well.
Installing the ESP boards
We’ll program the ESP32/ESP8266 using Arduino IDE, so you must have the ESP32/ESP8266 add-on installed in your Arduino IDE. If you haven’t, follow the next tutorial first that fits your needs:
- Install the ESP32 Board in Arduino IDE (Windows instructions)
- Install the ESP32 Board in Arduino IDE (Mac OS X and Linux instructions)
- Install the ESP8266 Board in Arduino IDE
Finally, restart your Arduino IDE.
Code
After installing the necessary libraries, you can copy the following code to your Arduino IDE and upload it to your ESP32 or ESP8266 board.
/*********
  Rui Santos
  Complete project details at Random Nerd Tutorials
*********/

#include <Wire.h>
#include <Adafruit_GFX.h>
#include <Adafruit_SSD1306.h>
#include <Adafruit_Sensor.h>
#include <DHT.h>

#define SCREEN_WIDTH 128 // OLED display width, in pixels
#define SCREEN_HEIGHT 64 // OLED display height, in pixels

// Declaration for an SSD1306 display connected to I2C (SDA, SCL pins)
Adafruit_SSD1306 display(SCREEN_WIDTH, SCREEN_HEIGHT, &Wire, -1);

#define DHTPIN 14 // Digital pin connected to the DHT sensor

// Uncomment the type of sensor in use:
//#define DHTTYPE DHT11 // DHT 11
#define DHTTYPE DHT22   // DHT 22 (AM2302)
//#define DHTTYPE DHT21 // DHT 21 (AM2301)

DHT dht(DHTPIN, DHTTYPE);

void setup() {
  Serial.begin(115200);
  dht.begin();

  if(!display.begin(SSD1306_SWITCHCAPVCC, 0x3C)) {
    Serial.println(F("SSD1306 allocation failed"));
    for(;;);
  }
  delay(2000);
  display.clearDisplay();
  display.setTextColor(WHITE);
}

void loop() {
  delay(5000);

  // read temperature and humidity
  float t = dht.readTemperature();
  float h = dht.readHumidity();
  if (isnan(h) || isnan(t)) {
    Serial.println("Failed to read from DHT sensor!");
  }

  // clear display
  display.clearDisplay();

  // display temperature
  display.setTextSize(1);
  display.setCursor(0, 0);
  display.print("Temperature: ");
  display.setTextSize(2);
  display.setCursor(0, 10);
  display.print(t);
  display.print(" ");
  display.setTextSize(1);
  display.cp437(true);
  display.write(167);
  display.setTextSize(2);
  display.print("C");

  // display humidity
  display.setTextSize(1);
  display.setCursor(0, 35);
  display.print("Humidity: ");
  display.setTextSize(2);
  display.setCursor(0, 45);
  display.print(h);
  display.print(" %");

  display.display();
}
How the code works
Let's take a quick look at how the code works.
Importing libraries
The code starts by including the necessary libraries. The Wire, Adafruit_GFX and Adafruit_SSD1306 are used to interface with the OLED display. The Adafruit_Sensor and the DHT libraries are used to interface with the DHT22 or DHT11 sensors.
#include <Wire.h>
#include <Adafruit_GFX.h>
#include <Adafruit_SSD1306.h>
#include <Adafruit_Sensor.h>
#include <DHT.h>
Create a display object
Then, define your OLED display dimensions. In this case, we’re using a 128×64 pixel display.
#define SCREEN_WIDTH 128 // OLED display width, in pixels
#define SCREEN_HEIGHT 64 // OLED display height, in pixels
Then, initialize a display object with the width and height defined earlier with I2C communication protocol (&Wire).
Adafruit_SSD1306 display(SCREEN_WIDTH, SCREEN_HEIGHT, &Wire, -1);
The (-1) parameter means that your OLED display doesn’t have a RESET pin. If your OLED display does have a RESET pin, it should be connected to a GPIO. In that case, you should pass the GPIO number as a parameter.
Create a DHT object
Then, define the DHT sensor type you’re using. If you’re using a DHT22 you don’t need to change anything on the code. If you’re using another sensor, just uncomment the sensor you’re using and comment the others.
//#define DHTTYPE DHT11 // DHT 11
#define DHTTYPE DHT22   // DHT 22 (AM2302)
//#define DHTTYPE DHT21 // DHT 21 (AM2301)
Initialize a DHT sensor object with the pin and type defined earlier.
DHT dht(DHTPIN, DHTTYPE);
setup()
In the setup(), initialize the serial monitor for debugging purposes.
Serial.begin(115200);
Initialize the DHT sensor:
dht.begin();
Then, initialize the OLED display.
if(!display.begin(SSD1306_SWITCHCAPVCC, 0x3C)) {
  Serial.println(F("SSD1306 allocation failed"));
  for(;;);
}
In this case, the address of the OLED display we’re using is 0x3C. If this address doesn’t work, you can run an I2C scanner sketch to find your OLED address. You can find the I2C scanner sketch here.
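If that sketch is not at hand, a minimal I2C scanner along the same lines looks like this (a generic sketch, not the exact one referenced; it assumes the default I2C pins):

```cpp
// Generic I2C scanner sketch (assumes the board's default I2C pins)
#include <Wire.h>

void setup() {
  Wire.begin();
  Serial.begin(115200);
  delay(1000);
  Serial.println("Scanning I2C bus...");
  for (byte address = 1; address < 127; address++) {
    Wire.beginTransmission(address);
    // endTransmission() returns 0 when a device ACKs at this address
    if (Wire.endTransmission() == 0) {
      Serial.print("Device found at address 0x");
      Serial.println(address, HEX);
    }
  }
  Serial.println("Scan done.");
}

void loop() {}
```

With only the OLED wired, you should see a single address printed, typically 0x3C or 0x3D, which is the value to pass to display.begin().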
Add a delay to give time for the display to initialize, clear the display and set the text color to white:
delay(2000);
display.clearDisplay();
display.setTextColor(WHITE);
In the loop() is where we read the sensor and display the temperature and humidity on the display.
Get temperature and humidity readings from DHT
The temperature and humidity are saved on the t and h variables, respectively. Reading temperature and humidity is as simple as using the readTemperature() and readHumidity() methods on the dht object.
float t = dht.readTemperature();
float h = dht.readHumidity();
In case we are not able to get the readings, display an error message:
if (isnan(h) || isnan(t)) {
  Serial.println("Failed to read from DHT sensor!");
}
If you get that error message, read our troubleshooting guide: how to fix “Failed to read from DHT sensor”.
Display sensor readings on the OLED display
The following lines display the temperature on the OLED display:
display.setTextSize(1);
display.setCursor(0, 0);
display.print("Temperature: ");
display.setTextSize(2);
display.setCursor(0, 10);
display.print(t);
display.print(" ");
display.setTextSize(1);
display.cp437(true);
display.write(167);
display.setTextSize(2);
display.print("C");
We use the setTextSize() method to define the font size, the setCursor() sets where the text should start being displayed and the print() method is used to write something on the display.
To print the temperature and humidity you just need to pass their variables to the print() method as follows:
display.print(t);
The “Temperature” label is displayed in size 1, and the actual reading is displayed in size 2.
To display the º symbol, we use the Code Page 437 font. For that, you need to set the cp437 to true as follows:
display.cp437(true);
Then, use the write() method to display your chosen character. The º symbol corresponds to character 167.
display.write(167);
A similar approach is used to display the humidity:
display.setTextSize(1);
display.setCursor(0, 35);
display.print("Humidity: ");
display.setTextSize(2);
display.setCursor(0, 45);
display.print(h);
display.print(" %");
Don’t forget that you need to call display.display() at the end, so that you can actually display something on the OLED.
display.display();
Recommended reading: ESP32 with DHT11/DHT22 Temperature and Humidity Sensor using Arduino IDE
Demonstration
The following figure shows what you should get at the end of this tutorial. Humidity and temperature readings are displayed on the OLED.
Troubleshooting
If your DHT sensor fails to get the readings or you get the message “Failed to read from DHT sensor”, read our DHT Troubleshooting Guide to help you solve that problem.
If you get the “SSD1306 allocation failed” error or if the OLED is not displaying anything on the screen, it can be one of the following issues:
Wrong I2C address
The I2C address for the OLED display we are using is 0x3C. However, yours may be different. So, make sure you check your display I2C address using an I2C scanner sketch.
SDA and SCL not connected properly
Please make sure that you have the SDA and SCL pins of the OLED display wired correctly. If you’re using:
- ESP32: connect SDA pin to GPIO 21 and SCL pin to GPIO 22
- ESP8266: connect SDA pin to GPIO 4 (D2) and SCL pin to GPIO 5 (D1)
Wrapping Up
We hope you’ve found this tutorial about displaying sensor readings on the OLED display useful. The OLED display is a great way to add a user interface to your projects. If you like this project, you may also like to know how to display sensor readings in your browser using an ESP Web Server:
- ESP32 DHT Web Server (Arduino IDE)
- ESP8266 DHT Web Server (Arduino IDE)
- ESP32/ESP8266 DHT Web Server (MicroPython)
You can learn more about the ESP32 and ESP8266 with our courses:
Thanks for reading.
37 thoughts on “ESP32/ESP8266: DHT Temperature and Humidity Readings in OLED Display”
It might be useful to do a version of this using the BME280 sensor. Although more expensive than the DHT sensors, the BME is much more accurate and reliable.
I started with the BME280 and found that there was a problem with using on the ESP8266. It was a few years ago and the problem may have been resolved but I’m thinking it was a data transfer rate issue. I wish I could post pictures here. I have an arduino nano clone running the bme280 and a RTC module with an analog TFT displaying an analog clock with time, temp and date. Also, I have an ESP8266 with a DHT11 pulling its time from the web to display an analog clock on a tiny oled. Most of my projects I use is RandomNerd – some I pilfer from others. The analog clock I got from rinkydink (I think).
Great stuff – thanks – keep it coming.
Thank you for following our work.
Regards,
Sara
Project worked the first time I tried it!
Great job, Rui and Sara!
Very cool project ! How do i connect it to my wifi ?
Hi.
You can follow one of these tutorials:
Regards,
Sara
I have been on the scene before ESP 🙂 I want to get back on track. This ESP8266 is a WIFI thing. Could you do this project with one ESP8266 outside in the wind and snow and another indoor with a LCD screen to retrieve this info?
Kind regards
Leslie
Thanks for the project suggestion, but I don’t have any tutorials on that exact subject.
Hello! For web server with esp32 that you have published on your website: you can access the web’s data only from the local network or from all internet??
Thanks
With this example you can only access the ESP32 data locally. However, you can open a port in your router to make the web server available from anywhere.
thanks you for the project , i am starting with esp8266 and i really need help on a personal project.
i want the servo motor to be controlled by an LDR and a esp8266.
thanks for the help.
You’re welcome! Thanks for the project suggestion, but I don’t have any tutorials on that exact subject.
Rui and Sara!
I have just started with Arduino and ESP8266 this Random Nerd Tutorials
as education for beginners like me.
Keep up going with more useful stuff. THANKS
Thanks 😀
Another great project-If I want to use a BME280 do I need to wire it to diferent gpios to the oled or is the address definition sufficient ?
Hi Peter.
You can use the same pins.
Or you can also create another I2C instance for the sensor.
You can read all about this here:
Regards,
sara
I come across the function isnan in the code and I couldn’t find what it does.
Can you explain it please?
Thank you.
It verifies that the temperature and humidity have been well received and formatted.
Sorry to bother you. I google it and found the answer.
Please ignore my post before.
Thank you.
Guess I have a slow OLED display. I could see text, but it was all over the display. Sometimes it was readable, sometimes not. So I added a slight delay at the bottom above display.display();
display.print(” %”);
delay(1000); // <<<<< added this line to make display readable
display.display();
}
Hi Jim.
Thanks for sharing that.
Other readers might be struggling with the same issue.
Regards,
Sara
How hard is it to change the readout from Celsius to Fahrenheit?
Thanks, LT
Hi.
To get temperature in Fahrenheit degrees, replace the following line
temp = dht.readTemperature();
with
temp = dht.readTemperature(true);
Regards,
Sara
Works great!
Thanks LT
Perfect! What would us “newbies” do without this site???? lol
😀
Hi Sara und Rui,
this was a perfect project for me.
I build a ESP8266 with DHT22 and a Telegram Bot.
When i send a Codeword from Mobilephone via Telegram to my Bot he will send me the Temperature and Humidity.
Then i found this OLED Display and your Website and then i have copy your code into my Telegram Bot Code and it works perfectly.
I get all time the Data on Display and when i want to my Telegram Bot.
Thank you for this.
Daniel
That’s great!
Regards,
Sara
I recommend use this sketch
Hi Sara & Rui,
Thanks for the project.
I plugged all the components, loaded the libraries into Arduino IDE, but when I’m trying to verify the copied and pasted sketch , i have an error message.
( i choose Generic ESP8266 for the board, and the correct USBport).
Any ideas how to help me ?
Thank you in advance
Hi.
What is the exact error that you get?
Regards,
Sara
Hi,
The error messages are :
WARNING : Category ‘Network’ in library 1wIP_enc28j60 is not valid. Setting to ‘uncategorized’
WARNING : Category ‘Network’ in library 1wIP_w5500 is not valid. Setting to ‘uncategorized’
Regards
That library is an Ethernet Library which I don’t think is used in that project…
Hi,
Actually the full error message is :
WARNING: Category ‘Network’ in library lwIP_PPP is not valid. Setting to ‘Uncategorized’
WARNING: Category ‘Network’ in library lwIP_enc28j60 is not valid. Setting to ‘Uncategorized’
WARNING: Category ‘Network’ in library lwIP_w5500 is not valid. Setting to ‘Uncategorized’
WARNING: Category ‘Network’ in library lwIP_w5500 is not valid. Setting to ‘Uncategorized’
Build options changed, rebuilding all
Multiple libraries were found for “DHT.h”
In file included from C:\Users\p******\Documents\Arduino\libraries\Adafruit_GFX_Library\Adafruit_GrayOLED.cpp:20:
Used: C:\Users\p******\Documents\Arduino\libraries\DHT_sensor_library
C:\Users\p******\Documents\Arduino\libraries\Adafruit_GFX_Library\Adafruit_GrayOLED.h:30:10: fatal error: Adafruit_I2CDevice.h: No such file or directory
Not used: C:\Users\p******\Documents\Arduino\libraries\Grove_Temperature_And_Humidity_Sensor
30 | #include <Adafruit_I2CDevice.h>
| ^~~~~~~~~~~~~~~~~~~~~~
compilation terminated.
Any help would be appreciated, as i am a beginner 🙂
Thank you
Hi Pascual it may not be my place to say this but this is what I think you should do.
Start a new project in IDE , delete the lines at the top that come with a new file.
Go to the project in RNT below the code is a button “raw code” click on that and the code opens in a new page. Copy and paste that into your new project.
You cannot copy code off a web page as you can get all sorts of weird errors.
Hope that helps Iain.
Code…this fixes degree & percent symbols…
Also configures for multi displays…
Adafruit_SSD1306 Display1(SCREEN_WIDTH, SCREEN_HEIGHT, &Wire, -1);
Adafruit_SSD1306 Display2(SCREEN_WIDTH, SCREEN_HEIGHT, &Wire, -1);
void setup() {
if(!Display1.begin(SSD1306_SWITCHCAPVCC, 0x3C)) {
Serial.println(F(“SSD1306 allocation failed”));
for(;;);
}
delay(10);
Display1.cp437(true);
Display1.clearDisplay();
Display1.setTextColor(WHITE,BLACK);
if(!Display2.begin(SSD1306_SWITCHCAPVCC, 0x3D)) {
Serial.println(F("SSD1306 allocation failed"));
for(;;);
}
delay(10);
Display1.cp437(true);
Display2.clearDisplay();
Display2.setTextColor(WHITE,BLACK);
}
void loop() {
// Correction: 248 is code for Degree Symbol…not 167
Display1.write(248);
// Correction: 37 is code for Percent Symbol…
Display1.write(37);
}
Is there a way to update changes in the temperature & humidity without issuing the
display.clearDisplay(); command?
How could I clear just a row, or any previous values at a location specified by the
display.setCursor(column, row); command?
Thanks for any assistance | https://randomnerdtutorials.com/esp32-esp8266-dht-temperature-and-humidity-oled-display/?replytocom=367498 | CC-MAIN-2022-27 | refinedweb | 2,729 | 66.03 |
Investors in FireEye Inc (Symbol: FEYE) saw new options begin trading this week, for the August 16th expiration. At Stock Options Channel, our YieldBoost formula has looked up and down the FEYE options chain for the new August 16th contracts and identified one put and one call contract of particular interest.
The put contract at the $14.00 strike price has a current bid of 58 cents. If an investor was to sell-to-open that put contract, they are committing to purchase the stock at $14.00, but will also collect the premium, putting the cost basis of the shares at $13.42 (before broker commissions). To an investor already interested in purchasing shares of FEYE, that could represent an attractive alternative to paying $14.71/share today. Should the contract expire worthless, the premium would represent a 4.14% return on the cash commitment, or 29.65% annualized — at Stock Options Channel we call this the YieldBoost.
Below is a chart showing the trailing twelve month trading history for FireEye Inc, and highlighting in green where the $14.00 strike is located relative to that history:
Turning to the calls side of the option chain, the call contract at the $15.00 strike price has a current bid of 79 cents. If an investor was to purchase shares of FEYE stock at the current price level of $14.71/share, and then sell-to-open that call contract as a "covered call," they are committing to sell the stock at $15.00. Considering the call seller will also collect the premium, that would drive a total return (excluding dividends, if any) of 7.34% if the stock gets called away at the August 16th expiration (before broker commissions). Of course, a lot of upside could potentially be left on the table if FEYE shares really soar, which is why looking at the trailing twelve month trading history for FireEye Inc, as well as studying the business fundamentals becomes important. Below is a chart showing FEYE's trailing twelve month trading history, with the $15.00 strike highlighted in red:
Considering the fact that the call seller also collects the premium, if the covered call contract were to expire worthless the premium would represent a 5.37% boost of extra return to the investor, or 38.44% annualized, which we refer to as the YieldBoost.
The implied volatility in the put contract example is 44%, while the implied volatility in the call contract example is 43%.
Meanwhile, we calculate the actual trailing twelve month volatility (considering the last 251 trading day closing values as well as today's price of $14.71) to be 39%.
Intro
Recently I released WpWinNlMaps for the Universal Windows Platform, a NuGet package that allows you to data bind map shapes to the awesome new map control for Windows 10. This map control recently got even more awesome with SDK 10586, when multipolygons, aka polygons with holes, aka donuts, were added to it. For those who have read this blog before, this binding code should not come as a surprise: I basically did this already in 2012 for the Bing Maps control for Windows, and there are incarnations of this for Windows Phone 8 and the Here Maps control for Windows Phone 8.1. The UWP binding, of course built as a behavior, is an evolution of the Windows Phone 8.1 behavior. Its most important new features are:
- It's built on top of the new UWP Behaviors NuGet Package
- MapShapeDrawBehavior can now also draw multi polygons (with holes)
- The supported geometry type used to be Geopath only (even if you wanted to draw just a MapIcon). Now you can use a BasicGeoposition for a MapIcon, a Geopath for a MapPolyline or a 'normal' MapPolygon, and an IList<Geopath> to create the new type of polygons-with-holes that I mentioned earlier.
- MapShapeDrawBehavior supports the new MapElementClick event for selecting objects on the map (and still supports the old MapTapped event, as well as Tapped, although the last one is still not recommended for use)
- The EventToCommandMapper is renamed to EventToHandlerMapper; now it can not only call a command, but also directly a method of the view model. This is to align with the way x:Bind introduces calling direct events as well.
- Speaking of - x:Bind to the MapShapeDrawBehavior's ItemSource is fully supported, although that's 99% thanks to the platform and 1% to my coding.
Getting started
Create a project and add the WpWinNl NuGet package to it. This will pull in the WpWinNlBasic package, as well as - of course - Microsoft.Xaml.Behaviors.Uwp.Managed, and Rx-Linq because I use that to dynamically react to events.
Then, of course, you will need some MVVM framework, be it something that you make yourself or something that is made by someone else. In my sample I opted for pulling in MVVMLight, this being more or less an industry standard now. I also pulled in the full WpWinNl package, because I use some more features from it in my sample code. And that automatically pulls in MVVMLight too, so that saves you the trouble of doing that yourself ;)
Concepts
These are basically still the same, but I will repeat them here for your convenience.
Typically, maps are divided into layers. You can think of this as logical units representing one class of real-world objects (or ‘features’ as they tend to be called in the geospatial word). For instance, “houses”, “gas stations”, “roads”. In WpWinNlMaps, a layer translates to one behavior attached to the map.
A MapShapeDrawBehavior contains the following properties
- ItemsSource – this is where you bind your business objects/view models to.
- PathPropertyName – the name of the property in a bound object that contains the BasicGeoposition, the Geopath or the IList<Geopath> describing the object’s location
- LayerName – the name of the layer. Make sure this is unique within the map
- ShapeDrawer – the name of the class that determines how the shape in PathPropertyName is actually displayed
- EventToHandlerMappers – contains a collection of events of the map that need to be trapped, mapped to a command or a method of the bound object that needs to be called when the map receives this event. Presently, the only events that make sense are "MapElementClick", "MapTapped" and "Tapped".
Sample
As always, a sample says more than a 1000 words. Our view model has a property:
MultiPolygons = new ObservableCollection<MultiPathList>();And a MultiPathList indeed as a
public List<Geopath> Paths { get; set; }Drawing a set of polygons with holes in it, is as easy as
<maps:MapControl x:Name="Map">
  <interactivity:Interaction.Behaviors>
    <mapbinding:MapShapeDrawBehavior LayerName="MultiPolygons"
        ItemsSource="{x:Bind ViewModel.MultiPolygons, Converter={StaticResource MapObjectsListConverter}}"
        PathPropertyName="Paths">
      <mapbinding:MapShapeDrawBehavior.ShapeDrawer>
        <mapbinding:MapMultiPolygonDrawer/>
      </mapbinding:MapShapeDrawBehavior.ShapeDrawer>
    </mapbinding:MapShapeDrawBehavior>
  </interactivity:Interaction.Behaviors>
</maps:MapControl>
So what we have here is a MapShapeDrawBehavior that binds to ViewModel.MultiPolygons, using a converter. Unfortunately, due to the nature of x:Bind, you will always need to use this converter. If you don't, you will run into this error: "XamlCompiler error WMC1110: Invalid binding path 'ViewModel.MultiPolygons' : Cannot bind type 'System.Collections.ObjectModel.ObservableCollection(WpWinNl.MapBindingDemo.Models.MultiPathList)' to 'System.Collections.Generic.IEnumerable(System.Object)' without a converter". So I give it a converter to make it happy, although the Convert method of the MapObjectsListConverter in fact only is this:
public override object Convert(object value, Type targetType, object parameter, CultureInfo culture)
{
  return value;
}
Event handling
Next up is the EventToHandlerMapper; in its EventName property you can put the following event names:
- MapElementClick
- MapTapped
- Tapped
The EventToHandlerMapper has two other properties: MethodName and CommandName. The first one is checked first, so if you are a smartypants who defines them both, only MethodName is used. Once again - this is a method or a command on the bound object, not the view model that hosts the ItemSource. The method or command should take a MapSelectionParameters object as a parameter. In the sample code you will see a class GeometryProvider that actually implements both, utilizing standard MVVMLight code:
public class GeometryProvider : ViewModelBase
{
  public string Name { get; set; }

  public ICommand SelectCommand => new RelayCommand<MapSelectionParameters>(Select);

  public void Select(MapSelectionParameters parameters)
  {
    DispatcherHelper.CheckBeginInvokeOnUI(
      () => Messenger.Default.Send(
        new MessageDialogMessage(Name, "Selected object", "Ok", "Cancel")));
  }
}
I use this as a base class for all types that I bind to the MapShapeDrawBehavior to provide an easy base for event handling.
Shape drawers
Shape drawersThese are classes that for actually converting the geometry into an actual shape, that is, a MapIcon, a MapPolyline, or a MapPolygon. Out of the box, there are four drawers with the following properties:
- MapIconDrawer
- AnchorX - sets the NormalizedAnchorPoint.X
- AnchorY - set the NormalizedAnchorPoint.Y
- CollisionBehaviorDesired - the CollisionBehaviorDesired of a MapIcon. See also here.
- Title - the optional MapIcon Title
- ImageUri - the optional MapIcon Image
- MapPolylineDrawer
- Color - line color
- StrokeDashed - dashed or solid line
- Width - line width
- MapPolygonDrawer
- Color - shape fill color
- StrokeDashed - dashed or solid shape outline
- StrokeColor - shape outline color
- Width - shape outline width
- MapMultiPolygonDrawer
- Same as MapPolygonDrawer
Thematic maps - making your own shape drawers
I wish to stress that it does not end with the four default drawers. If you want map elements to change color or other properties based upon values in the objects that you bind to, there is nothing that keeps you from doing that. You can do this by subclassing an existing drawer (or making a completely new one). Suppose you have this business object:
public class CustomObject
{
  public string Name { get; set; }
  public BasicGeoposition Point { get; set; }
  public int SomeValue { get; set; }
}
public class MyLineDrawer : MapPolylineDrawer { public override MapElement CreateShape(object viewModel, Geopath path) { var shape = (MapPolyline)base.CreateShape(viewModel, path); var myObject = (CustomObject)viewModel; switch (myObject.SomeValue) { case 0: { shape.StrokeColor = Colors.Black; break; } case 1: { shape.StrokeColor = Colors.Red; break; } //etc } return shape; } }
Drawer class hierarchyThe class drawers are built according to the following class hierarchy
I'd recommend overriding only the concrete classes when creating custom drawers. Be aware there are three virtual methods in MapShapeDrawer that you can override:
public abstract class MapShapeDrawer { public virtual MapElement CreateShape(object viewModel, BasicGeoposition postion) { return null; } public virtual MapElement CreateShape(object viewModel, Geopath path) { return null; } public virtual MapElement CreateShape(object viewModel, IList<Geopath> paths) { return null; } public int ZIndex { get; set; } }Make sure you override the right method for the right goal:
- CreateShape(object viewModel, BasicGeoposition postion) when you are dealing with icons
- CreateShape(object viewModel, Geopath path) when you are dealing with lines or polygons
- CreateShape(object viewModel, IList<Geopath> paths) when are are dealing with multipolygons
LimitationsBe aware this binding method respond to changes in the list of bound objects - that is, if you add or remove an object to or from the bound list, it will be drawn of the map or removed from it. If you change properties within the individual objects after binding and drawing, for instance the color, those will not reflect on the map - you will have to replace the object in the list.
Sample solutionsThis article comes not with one but two samples - it's amazing Mike! ;). The first one is actually in the code on GitHub and you can find it here. The drawback of that sample it that it actually requires you to compile the whole library as it uses the sources directly - it was my own test code. So for your convenience I made more or less the same solution, but then using the NuGet packages. You can find that here - it's an old skool downloadable ZIP file as I don't want to confuse people on GitHub. Both solutions work the same and show the same data as in an earlier post where I described the multipolygon feature first.
4 comments:
Hi Joost,
Thank you for your blog series on UWP mapping. It has been very helpful so far!
One thing I cannot find however (and maybe this is because of the new 3D functions of the control) is that there is no 'getMapBounds'.
In a Windows 8.1 app, I could just get the 'bounds' of a displayed map, and have a search query using the SW and NE corners (or NW and SE corners) to get all objects within the displayed map. But it seems like it's no longer there...
Can you explain or tell me where to look for this?
Kind regards,
Michel
Hi Michel,
If it is any help, I have run into the same problem. There is a kind of workaround, which you can find in my Manipulation_Drawing demo. Or follow this link directly
Caveat emptor - it only works properly on an orthogonal map. If you start using 3D settings (pitch, yaw, roll, the works) you are out of luck. You will need to look into Camera settings for that.
Hi Joost,
Thank you for your solution, this works perfectly!
But now, I have another question about your WpWinNlMaps library.
I do databinding to add Icons to a map by using the MapShapeDrawBehavior.
However, when I navagate away from the page, an Unhandled exception pops up:
System.ArgumentNullException: Value cannot be null.
Parameter name: key
at System.Collections.Generic.Dictionary`2.FindEntry(TKey key)
at WpWinNl.Maps.MapShapeDrawBehavior.RemoveObservable(Object viewModel)
at WpWinNl.Maps.MapShapeDrawBehavior.RemoveShape(Object viewModel)
at WpWinNl.Maps.MapShapeDrawBehavior.RemoveShapes(IList viewModels)
at WpWinNl.Maps.MapShapeDrawBehavior.OnDetaching()
at Microsoft.Xaml.Interactivity.Behavior.Detach()
at Microsoft.Xaml.Interactivity.Behavior
Is there something I must do before being able to unload the page or is it an issue in the library?
Kind regards,
Michel
Hi Michel,
My knee-jerk reaction is I most likely messed something up - however, I have added a second page to my GitHub Mapbinding demo and could not repro your error. So most likely you are doing something I have not foreseen. Are you clearing out observables, disposing stuff, whatever prior to navigating from the page, or maybe even directly after it?
You can, btw, also mail me directly or open a GitHub issue. That way, we can communicate directly - I now have no idea if and when you are reading this :) | http://dotnetbyexample.blogspot.com/2015/12/uwp-map-data-binding-with-wpwinnlmaps.html | CC-MAIN-2017-51 | refinedweb | 1,927 | 51.28 |
Before talking of memory layout of a C program and its various segments to store data and code instructions we should first understand that a compiler driver (that invokes the language preprocessor, compiler, assembler, and linker, as needed on behalf of the user) can generate three types of object files depending upon the options supplied to the compiler driver. Technically an object file is a sequence of bytes stored on disk in a file. These object files are as follows:
1. Relocatable object file: contains binary code and data in a form that can be combined with other relocatable object files at compile time to create an executable object file.
2. Executable object file: contains binary code and data in a form that can be copied directly into memory and executed.
3. Shared object file: a special type of relocatable object file that can be loaded into memory and linked dynamically, at either load time or run time.
The linker program ld takes a collection of relocatable object files and command line arguments as input and generates a fully linked executable object file as output that can be loaded into memory and run.
Object files have a specific format; however, this format may vary from system to system. Some of the most prevalent formats are:
COFF (Common Object File Format),
PE (Portable Executable), and
ELF (Executable and Linkable Format).
However, the actual layout of a program's in-memory image is left entirely up to the operating system, and often the program itself as well. This article focuses on the concepts of the code and data segments of a program and does not take any specific platform into account. For a running program, both the machine instructions (program code) and data are stored in the same memory space. The memory is logically divided into text and data segments. Modern systems use a single text segment to store program instructions, but more than one segment for data, depending upon the storage class of the data being stored there. These segments can be described as follows:
1. Text or Code Segment
2. Initialized Data Segments
3. Uninitialized Data Segments
4. Stack Segment
5. Heap Segment
Code segment, also known as text segment, contains the machine code of the compiled program. The text segment of an executable object file is often a read-only segment, which prevents a program from being accidentally modified.
Data segment stores program data. This data could be in the form of initialized or uninitialized variables, and it could be local or global. The data segment is further divided into four sub-segments (initialized data segment, uninitialized or .bss data segment, stack, and heap) to store variables depending upon whether they are local or global, and initialized or uninitialized.
Initialized data or simply data segment stores all global, static, constant, and external variables (declared with
extern keyword) that are initialized beforehand.
Contrary to initialized data segment, uninitialized data or .bss segment stores all uninitialized global, static, and external variables (declared with
extern keyword). Global, external, and static variable are by default initialized to zero. This section occupies no actual space in the object file; it is merely a place holder. Object file formats distinguish between initialized and uninitialized variables for space efficiency; uninitialized variables do not have to occupy any actual disk space in the object file.
Randal E. Bryant explains in his famous book on Computer Systems: A Programmer's Perspective, Why is uninitialized data called .bss?
The use of the term .bss to denote uninitialized data is universal. It was originally an acronym for the "Block Storage Start" instruction from the IBM 704 assembly language (circa 1957) and the acronym has stuck. A simple way to remember the difference between the .data and .bss sections is to think of "bss" as an abbreviation for "Better Save Space!"
Stack segment is used to store all local variables and is used for passing arguments to the functions along with the return address of the instruction which is to be executed after the function call is over. Local variables have a scope to the block which they are defined in; they are created when control enters into the block. Local variables do not appear in data or bss segment. Also all recursive function calls are added to stack. Data is added or removed in a last-in-first-out manner to stack. When a new stack frame needs to be added (as a result of a newly called function), the stack grows downward (See the figure 1).
Heap segment is also part of RAM where dynamically allocated variables are stored. In C language dynamic memory allocation is done by using
malloc and
calloc functions. When some more memory need to be allocated using
malloc and
calloc function, heap grows upward as shown in above diagram.
The stack and heap are traditionally located at opposite ends of the process's virtual address space.
The
size command, a GNU utility, reports the sizes (in bytes) of the text, data, .bss segments, and total size for each of the object or archive files in its argument. By default, one line of output is generated for each object file or each module in an archive.
For example, see the following C program and the size of its object file.
#include <stdio.h> int main () { unsigned int x = 0x76543210; char *c = (char*) &x; if (*c == 0x10) { printf ("Underlying architecture is little endian. \n"); } else { printf ("Underlying architecture is big endian. \n"); } return 0; }
For the above mentioned program
check-endianness.c (which finds whether the underlying architecture is little endian or big endian) the size of text, data, .bss segments, and the total size is examined as follows with help of the
size command. The fourth and fifth columns are the total of the three sizes, displayed in decimal and hexadecimal, respectively. You can read man page of
size for more details.
[root@host ~/cprogs]$ gcc check-endianness.c -o check-endianness [krishaku@adc6140630 ~/cprogs]$ size check-endianness text data bss dec hex filename 1235 492 16 1743 6cf check-endianness
In this tutorial we talked of memory layout of a C program, and its various segments (text or code segment, data, .bss segments, stack and heap segments). | http://cs-fundamentals.com/c-programming/memory-layout-of-c-program-code-data-segments.php | CC-MAIN-2017-17 | refinedweb | 984 | 55.13 |
How do I randomize responses?
Is there a way to give a bot a short list in the code so it can generate out one of a few responses? If someone types in p.h, the bot could respond with hi, hello, or hey. I've so far only seen tutorials that use APIs for random generation, but I don't need anything like that.
from random import choice print(choice(responses))
all you need to do is put in a different list for the different types of responses and ur good
It looks like you have a myriad of conditions, some using
startswith() and some just using
in. I guess the first thing I would ask is why not just use the
in nomenclature instead of using
startswith that way you can have a consistent format. And then secondly, you should separate all of those responses into a function that you can call with particular content like I'm going to do below.
Ideally, you would store all responses in a database, JSON store, and anywhere else with persistent storage and remove all of the
if statements and just retrieve the appropriate response based on the message content. But for now, just moving the logic out of the event into its own function will clean up that event a lot and separate things out nicely.
async def probeForResponse(message): if "What" in message.content: await message.add_reaction("Lacie:822228153685508156") #all of the other logic would be here
And then inside of the client event where a message is posted, call the
probeForResponse function.
@client.event async def on_message(message): if message.author == client.user: return await probeForResponse(message)
Good luck to you, Carnelian!
@Carnelian You can import a random and make a list.
ex:
Then you can make the random. It can be stored in a variable for future use of the same random.
ex:
or
The make the code do anything you want.
ex:
Hope that helps! | https://replit.com/talk/ask/How-do-I-randomize-responses/131889 | CC-MAIN-2021-17 | refinedweb | 329 | 72.46 |
Serge Wautier asks, "Why are the copy/cut/paste buttons not disabled when there's nothing to copy/cut/paste?", noting that the back/forward buttons do disable themselves when navigation is not possible in that direction.
To get to this question, we'll first go back in time a bit to a world without toolbars. In those early days, these dynamic options such as copy/cut/paste appeared solely on the Edit menu. Since the contents of Edit menu were visible only when the user clicked on it, the cut/copy/paste options needed to be updated only when the menu was visible. In other words, during
WM_INITMENUPOPUP handling.
This is also why it is somewhat risky to post
WM_COMMAND messages which correspond to a menu item to a window which is not prepared for it. The only way an end-user can generate that
WM_COMMAND message is by going through the menu: clicking the top-level menu to show the drop-down menu, then clicking on the menu item itself. Most programs do not maintain the menu item states when the menu is closed since there's no point in updating something the user can't see. Instead, they do it only in response to the
WM_INITMENUPOP message. Lazy evaluation means that the user doesn't pay for something until they use it. In this case, paying for the cost of calculating whether the menu item should be enabled or not. Depending on the program, calculating whether a menu item should be enabled can turn out to be rather expensive, so it's natural to avoid doing it whenever possible. ("I can do nothing really fast.")
When toolbars showed up, things got more complicated. Now, the affordances are visible all the time, right there in the toolbar. How do you update something continuously without destroying performance?
The navigation buttons disable and enable themselves dynamically because the conditions that control their state satisfy several handy criteria.
- The program knows when the state has potentially changed. (The program maintains the navigation history, so it knows that the button states need to be recalculated only when a navigation occurs.)
- Computing the state is relatively cheap. (All the program has to check is whether there is a previous and next page in the navigation history Since the navigation history is typically maintained as a list, this is easy to do.)
- They change in proportion to user activity within the program. (Each state change can be tied to a user's actions. They don't change on their own.)
- They change rarely. (Users do not navigate a hundred times per second.)
Since the program knows when the navigation stack has changed, it doesn't have to waste its time updating the button states when nothing has changed. Since recalculating the state is relatively cheap, the end user will not see the main user interface slow down while the program goes off to determine the new button state after each navigation. And finally, the state changes rarely, so that this cheap calculation does not multiply into an expensive one.
The copy/cut/paste buttons, on the other hand, often fail to meet these criteria. First, the copy and cut options:
- The program knows when the state has potentially changed. (Whenever the selection changes.) — good
- Computing the state is not always cheap. (For example, determining whether an item in Explorer can be cut or copied requires talking to its namespace handler, which can mean loading a DLL. If the item on the clipboard is a file on the network, you may have to access a computer halfway around the world.) — often bad
- It changes in proportion to user activity within the program. (Each state change can be traced to the user changing the selection.)
- They change with high frequency. (Dragging a rectangle to make a group selection changes the selection each time the rectangle encloses a new item.) — bad
Paste is even worse.
- The program doesn't know when the state has potentially changed. (The clipboard can change at any time. Yes, the program could install a clipboard viewer, but that comes with its own performance problems.) — bad
- Computing the state is not cheap. (The program has to open the clipboard, retrieve the data on it, and see whether it is in a format that can be pasted. If the clipboard contents are delay-rendered, then the constant probing of the clipboard defeats the purpose of delay-rendered clipboard data, which is to defer the cost of generating clipboard data until the user actually wants it. For Explorer, it's even worse, because it has to take the data and ask the selected item whether it can accept the paste. Doing this means talking to the namespace handler, which can mean loading a DLL. And if the file on the clipboard is on the network, the paste handler may need to open the file to see if it is in a format that can be pasted.) — bad
- It can change out of proportion to user activity. (Any time any other program copies something to the clipboard, the toolbar has to update itself. Then can happen even when the user is not using the program that has the toolbar! Imagine if Explorer started saturating your network because you copied a lot of UNC paths to the clipboard while editing some text file.) — bad
- The frequency of change is unknown. (The clipboard is a shared resource, and who knows what other people might be using it for.) — bad
This is one of those balancing acts you have to do when designing a program. How much performance degredation are you willing to make the user suffer through in order to get a feature they may never even notice (except possibly in a bad way)?
It’s not clear; but this doesn’t apply to cut/copy/paste buttons in the general case, just to the affordances in Windows Explorer.
There’s no reason why a simple application can’t update their cut & copy buttons in real-time. A paste button would be more work; but, as a clipboard "viewer" it doesn’t actually have to render anything, just check for supported formats when it’s notified of render messages.
Sounds like you’ve explained where things like WM_CLIPBOARDUPDATE and AddClipboardFormatListener were added to Vista…
The Windows Media Player shell extension opens an .avi file everytime it’s selected in explorer in order to display the movie length and resolution in the status bar. On my old computer this resulted in ~10sec 100% CPU usage peaks everytime I selected one.
It was also hard to select and delete them too, since they were being "used" by the shell extension.
Very nice essay. I nice investigation of cost/benefit from the inside. The comparison to why navigation does work is really good at illustrating the specifics of the differences between the two assessments.
I have a feeling the quote "I can do nothing really fast." is going to stick with me in future assessments :).
@Koro, I’ve had similar problems. Could this be an example of what would be happending if copy/paste were disabled/enabled in real time? Methinks it is.
This is described here;en-us;822430&Product=winxp
Raymond, thanks for such a detailed explanation.
I don’t buy every "bad" conclusion you draw but of course your overall point is valid even if I remove these ones.
BTW, I learned a new word today: affordance. :)
One text editor, TextPad, has a set of clipboard tool buttons that do change in realtime. I don’t believe it bothers checking the format of the data in the clipboard (perhaps they figured the cost wasn’t worth the minimal usefulness) but it does enable and disable Cut, Copy and Paste according to the current state of the selection. The result? An irritating and initially-inexplicable flicker out of the corner of your eye, making you think something is buggy until you eventually figure out what’s going on. Verdict: dumb idea.
Textpad does disable the Paste button when the clipboard’s format isn’t text [for example, when you hit PrtScn]. But a text editor scenario does further restrict when state changes and valid clipboard data and may make it worthwhile to do. OTOH Word can potentially paste most any clipboard format, so it may not be worthwhile there, as it isn’t in Explorer.
I wonder if there’s a usability issue in there as well, perhaps it’s better to have the troika of clipboard buttons always visible and enabled because they are close to the Most Useful Thing To Know (on a PC), and possibly The Only Thing Some People Know (on a PC) and therefore best not to confuse them.
As an example the lady who sits near me couldn’t work out how to Copy in IE7 last week because it doesn’t have Copy on the toolbar by default — if Paste had been visible and grayed out I might have been asked "Why is paste grayed out".
Anyways, the moral of the story is sometimes it’s not worth updating the user interface if it degrades the user experience…
Thanks for the post.
Off-topic, but regarding the clipboard: Has anyone ever suggested creating a more advanced clipboard manager? Something that retains a history of clipboard items with the ability to place an old item back into the ‘current’ buffer? I don’t know how many times I’ve wished the clipboard stored old stuff.
This would be a *great* addition to Windows or as a PowerToy for XP/Vista.
(Yes, I know you can pay to do it with Clipmate [], but it’d be nice as a part of Windows).
WordPad seems to have solved this in a different way. Even when it is not active it updates the paste button when the mouse is over one of the toolbar buttons.
Nice writeup on the differences between the cost of displaying internal state and external state
although i agree that "paste" activation handling is more expensive for shell formats, with good (sensible) idle processing and using IsClipboardFormatAvailable(SHIDLIST) the cost is negligible, this is pentium era after all :)
having said that i’ve noticed that xplorer2 (which implements the above UI strategy) does get the occasional cpu usage spike … nothing to write home about though
PingBack from
@Nick: do you mean something like the Office Clipboard?
Since the clipboard is an internal service, then should not any program making a change to it’s state be able to send a notification to all other programs that this state has change (e.g. a WM_NOTIFY_CLIPBOARD…) message, and let other programs decide to handle it or not (doing a query on the new formats available?)
Actually, I meant to say the clipboard service sends the norification to other applications (including the one who made the original notification)…
swautier:
The term ‘affordances’ originated with a different intent than is used here.
Donald Norman explains here:
and here:
Donald’s book, "Design of Everyday Things" somewhat popularized the term, and due to it’s popularity, the meaning has become skewed. This is too bad, since now it obscures the principles of graphical UI design which the skewed usage intends to mean.
UI gestures are often consistent metaphor and convention, with word labeling to get the perceived affordance across. This is different from actual, or real, affordance. In UI design, these are often separate, since you don’t actually "click" on a screen or "move" the screen when dragging a UI element (though the Wiimote emulates this – would this be a ‘proxied’ affordance?).
"the frequency of update is unknown"
I can’t see how I can put something on the clipboard faster than (say) the PrtSc key autorepeats (if it does) and if some app is constantly rewriting the clipboard then the format change notifications will bring all of my remote desktops to a crawl and it deserves to die. | https://blogs.msdn.microsoft.com/oldnewthing/20070122-05/?p=28323 | CC-MAIN-2016-40 | refinedweb | 2,003 | 60.95 |
While we have already incorporated the password hashing into our registration page, I wanted to take some time to go over what is actually happening. Maybe you end up working in another language, or maybe passlib doesn't support the version of Python you are using in the future. Because of this, you should know at least at a high level, how it works.
Not only is it important for security practices, it's also just pretty cool how it works!
To begin, you can probably understand why it is important to encrypt passwords to begin with. If your database stores plain-text passwords, at the very least, you are going to see the passwords yourself, and so will anyone who has access to your server. In a perfect world, no one would invade a user's privacy, but this world is not perfect. Not only might someone who works for you steal user passwords, a hacker might, or even the host to your server might, if you are using a virtual private server, or shared hosting.
So then how might we obscure passwords? Obscuring original text is easy enough, we can right a randomized algorithm that does this. The problem is, with passwords, we actually need to be able to validate what a user enters in the future as the original password.
One of the more primitive measures taken was simple password hashing. This was where a hash function was applied to what the user input, and that hash was what was stored as a password.
Here's a simple hashing script to illustrate this, which you can run:
import hashlib password = 'pa$$w0rd' h = hashlib.md5(password.encode()) print(h.hexdigest())
Import hashlib, set an example password, create the hash object, print the hash:
6c9b8b27dea1ddb845f96aa2567c6754
So that works pretty well. If you just saw that hash in a database, you'd have no idea what it meant. The problem, however, arises with the following: Run the script two times, or five times. You will find that the output is the same every time. Initially, with validation in mind, you may think well isn't this a requirement anyway? How else can we achieve validation?
The problem here is that people created massive hash tables, notably referred to as hash-lookup tables, where you could just search for the hash, and then find the corresponding plain-text password. You could also create one yourself, by just generating hashes for combinations of characters. It takes a bit longer to generate the tables, . These tables are big, but not too big to store on your laptop or netbook.
What we need instead, is a way to generate unique hashes, yet find a way to validate that hash by asking merely if two hashes came from the same input, despite being very different hashes.
Before arriving there, however, people came up with an easier solution: Why not place a secret pattern of text into every entered password, that only we the server knew. This is what is known as "salting."
Salting, while still used, initially started out pretty simple. Here's an example of how salting works, building off our last example:
import hashlib user_entered_password = 'pa$$w0rd' salt = "5gz" db_password = user_entered_password+salt h = hashlib.md5(db_password.encode()) print(h.hexdigest())
Here, the only major difference is we just have a salt that we append to the very end. Then, any time the user enters their password, we append the salt, hash it, and then compare those hashes.
de6e389819bdaa9e0ca60bb52cabccae
Now, the salt can be added anywhere. Maybe it's input right in the middle, maybe at the beginning, maybe at the end. May you have a salt at the beginning, another for the middle of the password, and one more at the end even.
This is pretty good, but there is inherent risk, still, and here's why:
The hash is always the same for the same password. This means if someone cracks how you generated your salt, then they have now cracked all passwords by generating a hash table. This, again, can take a lot of processing, but this is by no means out of reach by today's standards.
One of the adages for encryption is that you cannot depend on secrecy for security. A good test for your encryption is to ask yourself: "If someone discovers my encryption method, is my security compromised?" In many cases, like with a cipher for example, the answer to this is "yes!" That's a problem. Consider that many reasons why someone has access to your database also mean that they have access to your source code. This means someone can find out your salt. From here, it's relatively quick work to break the entire database of encrypted passwords.
What we want instead is a way to generate unique hashes, where their source can be validated easily, but brute forcing will require a brute forcing per password, not a brute forcing for the entire database. Let's bring in the big guns with passlib.
If you do not have passlib already, which you likely do not since it is not part of the standard library, do a quick:
pip install passlib
or...
sudo apt-get install python-passlib
Once you have passlib, let's play!
from passlib.hash import sha256_crypt password = sha256_crypt.encrypt("password") password2 = sha256_crypt.encrypt("password") print(password) print(password2) print(sha256_crypt.verify("password", password))
Here we're bringing in passlib's hashing ability, and using SHA256 as the algorithm. SHA256 is inherently better than md5, but you're free to replace "md5" with "sha256" in our above examples to see the hash that is output still remains the same, just a bit longer.
Next, we show that we use the sha256_crypt from passlib to hash "password" twice. Once to the variable of password and once more to password2.
Then we output the hashes of both, noticing they are different.
Finally, we validate that the two separate hashes came from the same source.
Sure enough, the boolean rings True, and we have a match!
Now, we have a great way to protect user passwords, while still being able to validate the user when they login.
Now, consider the requirements of the hacker who breaches our server and gains access to both our source code and our database. They can see everything, but now what?
Now, they will have to crack passwords by brute force, the same as before, only now it is one measily password at a time. Yikes. What they can do is take their password-dictionary (usually a massive list of possible passwords), generate a hash, then attempt to validate this hash against all passwords in the database by iterating through each one and running the sha256_crypt.verify against them for the True/False response. This process, however, is exceptionally cumbersome, and the results are slow. This is going to take a long time, and there's no way to pre-prepare here. You might think, well cannot they prepare by generating the SHA256 hashes in advanced? Nope, because sha256_crypt also uses a unique salt.
At the time of my writing this, there are no known weaknesses to this method.
Now, I would like to stress the use of "method" above. There exists a major difference between a method, and the application of the method.
Another adage for encryption and security in general goes something like:
"You can have the strongest, most impenetrable, reinforced door on the planet protecting a room, but that does no good if the walls are still weak."
It's really simple to forget about the walls, the ceiling, or even the ground.
Consider how many times you have written a program's logic, thought it was solid, then hit a bug and went "of course!" You're going to make mistakes constantly, and you probably know that you make them a lot. With security, these bugs often go unchecked, untested. Try your best to think like a hacker, but always remember that *every* system, connected to the world wide web, is hack-able. Just accept it, and work on that premise.
Accept that passlib might be flawed, and that, one day, or already, someone knows a flaw in SHA256. Also, plenty of password encryption systems are bypassed by hackers with server access extremely simply:
If a hacker gains access to your server, and finds that your database is encrypted securely, they can do something as simple as creating a logging function on your login form, where it just simply saves what the users typed into the field, before hashing, to a text file, or transmits the data elsewhere. This obviously isn't as great as getting the entire database at once, but this sort of thing happens.
A lot of people also put a lot of trust in things like 2FA (two factor authentication). I hate to burst your bubble, but, while this method makes a lot of sense, the application of this method by both you, the client, and the website you use matters greatly.
As a developer myself, I have set up 2FA a few times. There are many options you can select when installing 2FA that can increase, or hinder, security. One particular, very popular, bitcoin wallet website, for example, re-uses the public key for your 2FA. I discovered this when I changed phones. The result here is that someone could get access to your phone temporarily, get access to your account, revalidate 2FA, and you'd notice no change at all on your device. They are recycling the public keys that generate the code. Now they just wait for you to deposit a large sum, and then they take you. You'd never know you were even vulnerable. If the hackers could also fake your session cookie, this is another way they could do this, and they wouldn't even need your code. Faking sessions is pretty hard, but still possible. This is why websites usually require you to re-enter your password when making security changes on your account. This popular bitcoin wallet website? Nope, no need to re-enter your password.
Nice secure door, but weak walls.
Another great example of 2FA mistakes is when people use 2FA via something like Google. Great, but if the gmail account that you have your 2FA set up on with Google Authenicator is not also protected with 2FA, well you're screwed.
Great door, weak walls.
Finally, before leaving you feeling vulnerable, I will address the weakest point to all businesses and servers:
The people running them.
The weakest link is always the people. Whether it's because they make mistakes or it is because they can be easily social engineered, the people are usually the main target, or at least the reason for the vulnerabilities.
I cannot even count how many websites I have seen hacked, because someone posed as one of the admins, and was able to gain access. It sounds stupid, but this scam is easy to fall for, especially considering the world we live in today where developers are dispersed and usually not all local. I've personally been the victim of a successful version of this, I've had developers for the website be the hacker, and I've had endless attempts. That's what hackers do, they hack. They keep trying, and eventually they can get through. Your job is to just make it as challenging as possible.
It's like most crime. Most crimes are crimes of opportunity, your job is to not be the slowest, fattest, juiciest kid running from the bear. | https://pythonprogramming.net/password-hashing-flask-tutorial/ | CC-MAIN-2019-26 | refinedweb | 1,940 | 71.24 |
iTriangleMesh Struct Reference
[Geometry utilities]
This interface reprents a mesh of triangles. More...
#include <igeom/trimesh.h>
Detailed Description
This interface reprents a mesh of triangles.
It is useful to communicate geometry information outside of the engine. One place where this will be useful is for communicating geometry information to the collision detection plugin.
All Crystal Space mesh objects (things, sprites, ...) should implement and/or embed an implementation of this interface.
A triangle mesh has the concept of a vertex buffer and an array of triangles.
Main creators of instances implementing this interface:
- Almost all mesh objects have several implementations of this interface.
Main ways to get pointers to this interface:
Main users of this interface:
- Collision detection plugins (iCollideSystem)
- Visibility culler plugins (iVisibilityCuller)
- Shadow stencil plugin
Definition at line 112 of file trimesh.h.
Member Function Documentation
When this number changes you know the triangle mesh has changed (deformation has occured) since the last time you got another number from this function.
Get flags for this triangle mesh.
This is zero or a combination of the following flags:
- CS_TRIMESH_CLOSED: mesh is closed.
- CS_TRIMESH_NOTCLOSED: mesh is not closed.
- CS_TRIMESH_CONVEX: mesh is convex.
- CS_TRIMESH_NOTCONVEX: mesh is not convex.
- CS_TRIMESH_DEFORMABLE: mesh is deformable.
Note that if neither CS_TRIMESH_CLOSED nor CS_TRIMESH_NOTCLOSED are set then the closed state is not known. Setting both is illegal. Note that if neither CS_TRIMESH_CONVEX nor CS_TRIMESH_NOTCONVEX are set then the convex state is not known. Setting both is illegal.
Get the number of triangles for this mesh.
Get the triangle table for this mesh.
Get the number of vertices for this mesh.
Get the pointer to the array of vertices.
Lock the triangle mesh.
This prevents the triangle data from being cleaned up.
Unlock the triangle mesh.
This allows clean up again.
The documentation for this struct was generated from the following file:
Generated for Crystal Space 1.4.1 by doxygen 1.7.1 | http://www.crystalspace3d.org/docs/online/api-1.4.1/structiTriangleMesh.html | CC-MAIN-2015-48 | refinedweb | 317 | 52.56 |
Static and non-static members in C#
In this article, I am going to discuss static and non-static members in C# with some examples. Please read our previous article, where we discussed Data Types in C# with examples, before proceeding to this one. By the end of this article, you will have a good understanding of the following pointers.
- What are static and non-static members in C#?
- When do we need to use static and non-static members in C#?
- What are the differences between Static and Non-Static Members in C#?
The members of a class are divided into two categories:
- Static members
- Non-static members
In simple words, the members of a class that do not require an instance for initialization or execution are known as static members. On the other hand, the members that require an instance of the class for both initialization and execution are known as non-static members.
Static and Non-static variables in C#
Whenever we declare a variable using the static modifier, or declare a variable inside any static block, that variable is considered a static variable; all other variables are considered non-static variables.
If you want a variable to have the same value across all instances of a class, then you need to declare that variable as a static variable. In other words, static variables hold application-level data that is the same for all objects.
The static variable gets initialized immediately once the execution of the class starts whereas the non-static variables are initialized only after creating the object of the class and that is too for each time the object of the class is created.
A static variable gets initialized only once during the life cycle of a class whereas a non-static variable gets initialized either 0 or n number of times, depending on the number of objects created for that class.
If you want to access the static members of a class, then you need to access them using the class name whereas you need an instance of a class to access the non-static members.
Let us see an example for better understanding:
namespace StaticNonStaticDemo { class Example { int x; // Non statuc variable static int y = 200; //Static Variable public Example(int x) { this.x = x; } static void Main(string[] args) { //Accessing the static variable using class name //Before object creation Console.WriteLine("Static Variable Y = " + Example.y); //Creating object1 Example obj1 = new Example(50); //Creating object2 Example obj2 = new Example(100); Console.WriteLine($"object1 x = {obj1.x} object2 x = {obj2.x}"); Console.WriteLine("Press any key to exit."); Console.ReadLine(); } } }
OUTPUT:
Non Static variables Scope in C#:
The Non Static variables are created when the object is created and are destroyed when the object is destroyed. The object is destroyed when its reference variable is destroyed or initialized with null. So we can say that the scope of the object is the scope of its referenced variables.
Static and Non-Static methods in C#
If we declare a method using the static modifier then it is called as a static method else it is a non-static method. You cannot consume the non-static members directly within a static method. If you want to consume any non-static members with a static method then you need to create an object that and then through the object you can access the non-static members. On the other hand, you can directly consume the static members within a non-static method without any restriction.
Rules while working with static and non-static members in c#:
- Non-static to static: Can be consumed only by using the object of that class.
- Static to static: Can be consumed directly or by using the class name.
- Static to non-static: Can be consumed directly or by using the class name.
- Non-static to non-static: Can be consumed directly or by using the “this” keyword.
Let us understand this with an example:
namespace StaticNonStaticDemo { class Example { int x = 100; static int y = 200; static void Add() { //This is a static block //we can access non static members X with the help of Example object //We can access the static member directly or through class name Example obj = new Example(); //Console.WriteLine(obj.x + Example.y); Console.WriteLine("Sum of 100 and 200 is :" + (obj.x + y)); } void Mul() { //This is a non-static method //we can access static members directly or through class name //we can access the non-static members directly or through this keyword Console.WriteLine("Multiplication of 100 and 200 is :" + (this.x * Example.y)); Console.WriteLine("Multiplication of 100 and 200 is :" + (x * y)); } static void Main(string[] args) { // Main method is a static method // ADD() method is a static method // Statid to Static // we can call the add method directly or through class name Example.Add(); Add(); // Mul() method is a non-static method // we can call the non-static method using object only from a static method // Static to non-static Example obj = new Example(); obj.Mul(); Console.WriteLine("Press any key to exit."); Console.ReadLine(); } } }
OUTPUT:
The Static and Non-Static Constructor in C#:
If we create the constructor explicitly by the static modifier, then we call it as a static constructor and rest of the others are the non-static constructors.
The most important point that you need to remember is the static constructor is the fast block of code which gets executes under a class. No matter how many numbers of objects you created for the class the static constructor is executed only once. On the other hand, a non-static constructor gets executed only when we created the object of the class and that is too for each and every object of the class.
It is not possible to create a static constructor with parameters. This is because the static constructor is the first block of code which is going to execute under a class. And this static constructor called implicitly, even if parameterized there is no chance of sending the parameter values.
Let us understand this with an example:
namespace StaticNonStaticDemo { class Example { static Example() { Console.WriteLine("static constructor is called"); } public Example() { Console.WriteLine("non-static constructor is called"); } static void Main(string[] args) { Console.WriteLine("Main method is executed"); Example obj1 = new Example(); Example obj2 = new Example(); Console.WriteLine("Press any key to exit."); Console.ReadLine(); } } }
OUTPUT:
Static class in C#:
The class which is created by using the static modifier is called a static class. A static class can contain only static members in it. It is not possible to create an instance of a static class. This is because it contains only static members. And we know we can access the static members of a class by using the class name.
Let us understand this with an example.
namespace StaticNonStaticDemo {(); } } }
OUTPUT:
In the next article, I will discuss const and read-only variables in C# with examples.
SUMMARY:
In this article, I try to explain the static and non-static members in C# with some examples. I would like to have your feedback. Please post your feedback, question, or comments about this article. | https://dotnettutorials.net/lesson/static-and-non-static-members-csharp/ | CC-MAIN-2019-35 | refinedweb | 1,241 | 52.39 |
Django is the most popular Python web framework around. It makes it easy to build web apps more quickly and with less code. The demand for Django developers remains high as it's the most sought-after skill set right now.
If you’re aspiring to become a Django Developer, it’s essential to have strong knowledge of these core concepts before appearing for an interview. Through the medium of this article, we are sharing the top 60 most asked Django Interview Questions and Answers that will help you clear the interview with flying colors.
Django is a high-level Python web framework that enables the rapid development of secure and maintainable websites. It's free and open source. It takes care of much of the hassle of web development and allows you to focus on writing apps without any need to reinvent the wheel.
The purpose behind developing this framework is to make developers spend time on new application components instead of already developed components.
The reasons why Django is most preferred are:
Django is suitable for both the backend and frontend. It's a collection of Python libraries that allow you to develop useful web apps ideal for backend and frontend purposes.
The latest version of Django is Django 3.1. The new features of it are:
CDN Integration
Both Python and Django are intertwined but not the same. Python is a programming language used for various application developments: machine learning, artificial intelligence, desktop apps, etc.
Django is a Python web framework used for full-stack app development and server development.
Using core Python, you can build an app from scratch or craft the app with Django using prewritten bits of code
Django follows a Model-Template-View (MTV) architecture. It contains three different parts:
As discussed in the previous question, Django follows MTV architecture - Model, Template, View.
The below diagram depicts the working cycle of Django MTV architecture:
From the diagram, you'll notice Template is on the Client side, and both the Model and View are on the Server side. Django uses request and response objects to communicate between the client and server.
If the website receives the request, it is transmitted from browser to server to manage the view file using a template.
After sending the correct URL request, the app logic and Model initiate the right response to the presented request. After that, a detailed response is sent back to View to check the response and transmit it as an HTTP response or desired user format. Then it again passes to the browser via Templates.
For your clear understanding, let's take a real-life example:
While logging into Django based website, you open the login page. It happens because View will process the request and send it to the login page URL. Then the response is sent from a server to the browser.
After then, you'll enter the credentials in Template, and the data sent back to the View to rectify the request, and then data is presented in the Model. Then the Model verifies the data provided by the user in the connected database.
If the user's data matches, it sends the related data (profile name, image, etc.) to the Views.
Otherwise, the model passes the negative result to the Views.
That's how the Django MTV architecture is working.
Compared to other frameworks, Django offers more code reusability. As Django is a combination of apps, copying those apps from one directory to another with some tweaks to the settings.py file won't need much time to write new applications from scratch.
That is why Django is the rapid development framework, and this kind of code reusability is not allowed in any other framework.
Yes, Django is an easy-to-learn framework compared to others. Having some knowledge of Python and web-working helps you to start developing with Django.
The best features of Django that make it better compared to others are:
Django has many advantages, but we'll look at major ones that differentiate it from other frameworks.
Django offers three inheritance styles:
A model is a definitive source of information about data, defined in the “app/models.py”.
Models work as an abstraction layer that structures and manipulates data. Django models are a subclass of the "django.db.models". Model class and the attributes in the models represent database fields.
As the name implies, it's the main settings file of the Django file. Everything inside the Django project, like databases, middlewares, backend engines, templating engines, installed applications, static file addresses, main URL configurations, allowed hosts and servers, and security key stores in this file as a dictionary or list.
So when Django files start, it first executes the settings.py file and then loads the respective databases and engines to quickly serve the request.
No, Django is not CMS (Content Management System). It's just a web framework and programming tool that allows you to build websites.
In Django, static files are the files that serve the purpose of additional purposes such as images, CSS, or JavaScript files. Static files managed by “django.contrib.staticfiles”. There are three main things to do to set up static files in Django:
1) Set STATIC_ROOT in settings.py
2) Run manage.py collect static
3) Set up a Static Files entry on the PythonAnywhere web tab
Middlewares in Django is a lightweight plugin that processes during request and response execution. It performs functions like security, CSRF protection, session, authentication, etc. Django supports various built-in middlewares.
Every field in a model is an instance of the appropriate field class. In Django, field class types determine:
(e.g. <input type="text">, <select>)
Django includes a "signal dispatcher" to notify decoupled applications when some action takes place in the framework. In a nutshell, signals allow specific senders to inform a suite of receivers that some action has occurred. They are instrumental when we use more pieces of code in the same events.
Django provides a set of built-in signals that enable users to get notified of specific actions.
The app is a module that deals with the dedicated requirements in a project. On the other hand, the project covers an entire app. In Django terms, a project can contain different apps, while an app features in various projects.
Django allows you to design URL functions however you want. For this, you need to create a Python module informally called URLconf (URL configuration).
This module is purely a Python code and acts as a mapping between URL path expressions and Python functions. Also, this mapping can be as long or short as needed and can also reference other mappings.
The length of this mapping can be as long or short as required and can also reference other mappings. Django also provides a way to translate URLs according to the active language.
An exception is an abnormal event that leads to program failure. Django uses its exception classes and python exceptions as well to deal with such situations.
We define Django core exceptions in "Django.core.exceptions". The following classes are present in this module:
Django uses the session to keep track of the state between the site and a particular browser. Django supports anonymous sessions. The session framework stores and retrieves data on a per-site-visitor basis. It stores the information on the server side and supports sending and receiving cookies. Cookies store the data of session ID but not the actual data itself.
A cookie is a piece of information stored in the client's browser. To set and fetch cookies, Django provides built-in methods. We use the set_cookie() method for setting a cookie and the get() method for getting the cookie.
You can also use the request.COOKIES['key'] array to get cookie values.
Flask and Django are the two most popular Python web frameworks. The following table lists some significant differences between Django and Flask
To check the version of Django installed on your system, open the command prompt and enter the following command:
py -m django --version
You can also try to import Django and use the get_version() method as follows:
import django print(django.get_version())
Django Admin is the command-line utility for administrative tasks. It's a preloaded interface to fulfill all web developer's needs and is imported from the "django.contrib packages".
Django Admin interface has its user authentication and offers advanced features like authorizing the access, CMS (Content Management System), managing various models, etc.
You can even perform the following tasks using Django admin as listed out in the table:
To create a Django project, navigate to the directory where you want to do a project and type the following command:
$ django-admin startproject ABC
That will create an "ABC" folder with the following structure −
ABC/ manage.py myproject/ __init__.py settings.py urls.py wsgi.py
Note: Here, "ABC" is the name of the project. You can mention any name you want.
Various companies out there are using Django. Of them, major are Instagram, Pinterest, Udemy, Mozilla Firefox, Reddit, etc.
Django views are the critical component of the framework They serve the purpose of encapsulation. They encapsulate the logic liable to process a user's request and return a response to the user.
Either they return HTTP responses or raise an exception such as 404 in Django. Besides, Views also perform tasks like reading records from a database, generating PDF files, etc.
Every app in Django comes with a views.py file, and this contains the views functions. Views function can be imported directly in the URLs file in Django.
To achieve that, you have to import the view function in the urls.py file first and add the path/URL that the browser should request to call that View function.
Django Templates generate dynamic web pages. Using templates, you can show the static data and the data from various databases connected to the app through a context dictionary. You can create any number of templates based on project requirements. Even it's OK to have none of them.
Django template engine handles the templating in the Django web framework. Some template syntaxes declare variables, filters, control logic, and comments.
Django ships built-in backends for its template system called the Django template language (DTL).
In Django, the most notable feature is Object-Relational Mapper (ORM), which allows you to interact with app data from various relational databases such as SQLite, MySQL, and PostgreSQL.
Django ORM is the abstraction between web application data structure (models) and the database where the data is stored. Without writing any code, you can retrieve, delete, save, and perform other operations over the database.
The main advantage of ORMs is rapid development. ORMs make projects more portable. It's easier to change the database with Django ORM.
Iterators are containers in Python containing several elements. Every object in the iterator implements two methods that are __init__() and the __next__() methods.
In Django, the fair use of an iterator is when you process results that take up a large amount of memory space. For this, you can use the iterator() method, which evaluates the QuerySet and returns the corresponding iterator over the results.
Caching is the process of saving expensive calculation output to avoid performing the same calculation again.
Django supports a robust cache system to save web pages such that they don't have to be evaluated repeatedly for each request.
They are few strategies to implement caching in Django, and the following table lists them:
Whenever the Django Server receives a request, the system follows an algorithm to determine which Python code needs execution. Here are the steps that sum up the algorithm:
Python 3 is the most recommended version for Django. Because it's faster, has more features, and is better supported.
A typical Django project consists of these four files:
The final four files are inside a directory, which is at the same level as manage.py.
Django is known as a loosely coupled framework beca+use of its MTV architecture.
Django's architecture is a variant of MVC architecture, and MTV is beneficial because it completely discards server code from the client's machine. Models and Views are present on the client machine, and templates only return to the client.
All the architecture components are different from each other. Both frontend and backend developers can work simultaneously on the projects as they won't affect each other when changed.
Django REST framework is a flexible and powerful toolkit for building Web APIs rapidly.
The following are the significant reasons that are making REST framework perfect choice:
The Django framework is monolithic, which is valid to some extent. As Django's architecture is MTV-based, it requires some rules that developers need to follow to execute the appropriate files at the right time.
With Django, you get significant customizations with implementations. Through this, you cannot change file names, variable names, and predefined lists.
Django's file structure is a logical workflow. Thus the monolithic behavior of Django helps developers to understand the project efficiently.
Django comes with a built-in user authentication system to handle objects such as users, groups, permissions, etc. It not only performs authentication but authorization as well.
Following are the system objects:
Apart from this, there are various third-party web apps that we can use instead of the default system to provide more user authentication with more features.
When a View function returns a web page as HttpResponse instead of a simple string, we use the render function.
Render is a shortcut for passing a data dictionary with a template. This function uses a templating engine to combine templates with a data dictionary.
Finally, the render() returns the HttpResponse with the rendered text, the models' data.
Syntax:
render(request, template_name, context=None, content_type=None, status=None, using=None)
The request generates a response.
The template name and other parameters pass the dictionary.
For more control, specify the content type, the data status you passed, and the render you are returning.
Forms serve the purpose of receiving user inputs and using that data for logical operations on databases. Django supports form class to create HTML forms. It defines a form and how it works and appears.
Django's forms handle the following parts:
There are two ways to add the view function to the main URLs config:
1. Adding a function View
In this method, import the particular View's function and add the specific URL to the URL patterns list.
2. Adding a Class-based view
This one is a more class-based approach. For this, import the class from the views.py and then add the URL to the URL patterns. An inbuilt method is needed to call the class as a view.
Write the name of the function on the previous method as shown below:
class_name.as_view()
This will pass your view class as a view function.
Both function-based and class-based have their advantages and disadvantages. Depending on the situation, you can use them to get the right results.
Protecting user's data is an essential part of any website design. Django implements various sufficient protections against several common threats. The following are Django's security features:
AJAX (Asynchronous JavaScript And XML) allows web pages to update asynchronously to and from the server by exchanging data in Django. That means without reloading a complete webpage you can update parts of the web page.
It involves a combination of a browser built-in XMLHttpRequest object, HTML DOM, and JavaScript.
To handle Ajax requests in the Django web framework, perform the following:
Writing views is a heavy task. Django offers an easy way to set Views called Generic Views. They are classes but not functions and stored in "django.views.generic".
Generic views act as a shortcut for common usage patterns. They take some common idioms and patterns in view development and abstract them to write common views of data without repeating yourself quickly.
In case all your templates need the same objects, use "RequestContext." This method takes HttpRequest as the first parameter and populates the context with a few variables simultaneously as per the engine's context_processors configuration option.
For this, we have to set the SESSION_ENGINE settings to “Django.contrib.sessions.backends.file.”
“Django-admin.py load data” loads data in Django. This command line performs data searching and loads the contents of the named fixtures into the database.
CRUS is an acronym for Create, Read, Update, and Delete. It’s a mnemonic framework used for constructing models when building application programming interfaces (APIs).
When a process starts, the Django server receives a request and checks for a matching URL in the project-defined URL patterns. If the URL matches, it executes the associated code in the view file with the URL and sends a response. If the server can’t find a matching URL, it invokes a 404-status code.
For the following functions, you can use Middleware in Django:
Django does not support multiple-column primary keys. It only supports single-column primary keys.
In the context of Django, QuerySet is a set of SQL queries. To see the SQL query from the Django filter call, type the command print(b.query).
Make sure that the DEBUG setting is set to True, and type the following commands:
No, Django signals are synchronous. There is no background thread or asynchronous jobs to execute them. When we use signals in applications, they allow you to maintain the code to understand application behavior and solve issues faster and better.
Not suitable for small projects due to its monolithic size
1 /15 | https://mindmajix.com/django-interview-questions | CC-MAIN-2022-27 | refinedweb | 2,959 | 56.25 |
New Analysis Finds That Mondays Are the Best Days to Buy Bitcoin
MARKET ANALYSIS
This week saw Bitcoin price (BTC) hitting the $9,000 barrier amid the launch of CME Bitcoin options and Plaid’s acquisition by Visa, reaching a record price for the last two months.
Bitcoin’s 27% price gain since the beginning of the year along with the future bullish scenarios laid down by investors may attract new crypto holders. But since BTC/USD is traded 24/7, new investors may be wondering: is there a difference between investing on a particular day of the week?
Figure 1. Crypto market data, 1-day performance. Source: Coin360
The basis of a difference in a day of a week returns comes from traditional stock markets. It has been shown that stock returns on Mondays are, on average, negative. This is called the Weekend Effect. One explanation is that the effects on a particular stock will only be felt on Monday since the market is closed during the weekend. However, the cryptocurrency market is always open: Could we expect the same behavior on Mondays for Bitcoin?
Bitcoin weekly trend in 2019
Analyzing Bitcoin returns from the beginning of 2019 until Jan. 13, 2020, data shows that Fridays present the highest average return across the days of the week at 1.1%. In contrast, only two days of the week show negatively average returns, Tuesday (-0.24%) and Thursday (-0.97%).
If an investor only started investing at the start of 2019 on a particular day of the week, Friday would present the best cumulative return, followed by Monday (Figure 2). Taking Fridays as an example, it’s assumed that the strategy would be to buy BTC closing price on Thursdays and sell it at the closing price on Fridays.
The closing prices (UTC timezone, a rolling 24-hour period) are used for simplicity reasons since the desired time to buy and sell during those days is based on the investor’s preference. The same buy/sell rationale applies if another day of the week is chosen to conduct the strategy (i.e. Monday).
Figure 2: Cumulative Return for investing on a specific day only between January 2019 and January 2020
Bitcoin weekly trend in the long-term
Taking a deeper look at Bitcoin returns for a longer time period, as seen from Figure 3, we can conclude that Mondays offer the best average return from all the days of the week (0.54%).
On the other hand, Thursday and Wednesday are the worst days of the week to invest in Bitcoin with an average return of -0.09% and -0.23%, respectively.
Bitcoin’s Monday anomaly case is reinforced from a statistical perspective since Monday is the only day of the week with a statistically significant result from the used regression models.
Curiously, as a truly anti-status quo coin, Bitcoin shows a mean positive return on Mondays, in contrast to traditional stock markets’ Weekend Effect.
Figure 3: Average Daily Return for each Day of the Week between April 2013 and January 2020.
Using the same long-term sample starting in April 2013, an investor choosing exclusively one day of the week as a strategy would get the best option by choosing Mondays, followed by Saturdays, as seen from Figure 4.
Figure 4: Cumulative Return for specific day investment during the entire sample analyzed (Between April 2013 and January 2020)
Day of the week during market bubbles
We cannot ignore Bitcoin’s explosive gains from two highly volatile periods seen in 2017 and how those influence the average returns for the longer time sample. By isolating that year, we find that Monday still shows the highest average return (1.5%) across the days of the week, followed by Thursday (0.55%).
Figure 5: Average Daily Return for each Day of the Week between during 2017
In summary, Bitcoin’s unique features reveal an opposite behavior to traditional stock markets, showing a positive average return on Mondays when considering wider time periods. However, when dealing with shorter time frames, we identify Fridays as the day with the highest average returns across the days of the week.
As reported by Cointelegraph, a study in September 2019 showed that Bitcoin holders make a profit after an average of 1,335 days, or roughly three years and eight months. Overall, holding BTC has been profitable for over 94% of days Bitcoin has existed, according to the latest data from Bitcoin Hodl Calculator.
By Tiago Vidal | https://p2ps.medium.com/new-analysis-finds-that-mondays-are-the-best-days-to-buy-bitcoin-e2220d8a79f2?source=post_internal_links---------1---------------------------- | CC-MAIN-2022-27 | refinedweb | 751 | 56.89 |
Process escape sequences in a string in Python
Sometimes when I get input from a file or the user, I get a string with escape sequences in it. I would like to process the escape sequences in the same way that Python processes escape sequences in string literals.
For example, let's say myString is defined as:

>>> myString = "spam\\neggs"
>>> print(myString)
spam\neggs
I want a function (I'll call it process) that does this:

>>> print(process(myString))
spam
eggs
It's important that the function can process all of the escape sequences in Python (listed in a table in the link above).
Does Python have a function to do this?
The correct thing to do is use the 'string-escape' codec to decode the string.
>>> decoded_string = bytes(myString, "utf-8").decode("unicode_escape")  # python3
>>> decoded_string = myString.decode('string_escape')  # python2
>>> print(decoded_string)
spam
eggs
Don't use the AST or eval. Using the string codecs is much safer.
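As a rough sketch, the codec approach can be wrapped in a reusable function (Python 3 only; the name decode_escaped is my own, not a standard API — and note the caveat in the next answer about inputs that already contain literal non-ASCII characters):

```python
import codecs

def decode_escaped(s):
    # Round-trip through bytes so the unicode_escape codec can
    # interpret the backslash escape sequences in the text.
    return codecs.decode(s.encode("utf-8"), "unicode_escape")

print(decode_escaped("spam\\neggs"))  # prints "spam" then "eggs" on separate lines
```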
unicode_escape doesn't work in general
It turns out that the string_escape or unicode_escape solution does not work in general -- particularly, it doesn't work in the presence of actual Unicode.
If you can be sure that every non-ASCII character will be escaped (and remember, anything beyond the first 128 characters is non-ASCII), unicode_escape will do the right thing for you. But if there are any literal non-ASCII characters already in your string, things will go wrong.
unicode_escape is fundamentally designed to convert bytes into Unicode text. But in many places -- for example, Python source code -- the source data is already Unicode text.
The only way this can work correctly is if you encode the text into bytes first. UTF-8 is the sensible encoding for all text, so that should work, right?
The following examples are in Python 3, so that the string literals are cleaner, but the same problem exists with slightly different manifestations on both Python 2 and 3.
>>> s = 'naïve \\t test'
>>> print(s.encode('utf-8').decode('unicode_escape'))
naÃ¯ve 	 test
Well, that's wrong.
The new recommended way to use codecs that decode text into text is to call codecs.decode directly. Does that help?
>>> import codecs
>>> print(codecs.decode(s, 'unicode_escape'))
naÃ¯ve 	 test
Not at all. (Also, the above is a UnicodeError on Python 2.)
The unicode_escape codec, despite its name, turns out to assume that all non-ASCII bytes are in the Latin-1 (ISO-8859-1) encoding. So you would have to do it like this:

>>> print(s.encode('latin-1').decode('unicode_escape'))
naïve 	 test
But that's terrible. This limits you to the 256 Latin-1 characters, as if Unicode had never been invented at all!
>>> print('Ernő \\t Rubik'.encode('latin-1').decode('unicode_escape'))
UnicodeEncodeError: 'latin-1' codec can't encode character '\u0151' in position 3: ordinal not in range(256)
Adding a regular expression to solve the problem
(Surprisingly, we do not now have two problems.)
What we need to do is only apply the unicode_escape decoder to things that we are certain to be ASCII text -- in this case, the escape sequences themselves, which a regular expression can pick out.
import re
import codecs

ESCAPE_SEQUENCE_RE = re.compile(r'''
    ( \\U........      # 8-digit hex escapes
    | \\u....          # 4-digit hex escapes
    | \\x..            # 2-digit hex escapes
    | \\[0-7]{1,3}     # Octal escapes
    | \\N\{[^}]+\}     # Unicode characters by name
    | \\[\\'"abfnrtv]  # Single-character escapes
    )''', re.UNICODE | re.VERBOSE)

def decode_escapes(s):
    def decode_match(match):
        return codecs.decode(match.group(0), 'unicode-escape')

    return ESCAPE_SEQUENCE_RE.sub(decode_match, s)
And with that:
>>> print(decode_escapes('Ernő \\t Rubik'))
Ernő 	 Rubik
The ast.literal_eval function comes close, but it will expect the string to be properly quoted first. Of course Python's interpretation of backslash escapes depends on how the string is quoted ("" vs r"" vs u"", triple quotes, etc.), so you may want to wrap the user input in suitable quotes and pass it to literal_eval. Wrapping it in quotes will also prevent literal_eval from returning a number, tuple, dictionary, etc.
Things still might get tricky if the user types unquoted quotes of the type you intend to wrap around the string.
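A minimal sketch of that wrapping approach (my own illustration, not code from the original answer), assuming the input contains no unescaped double quotes:

```python
import ast

def decode_escapes_literal(s):
    # Wrap the raw input in double quotes so literal_eval always sees a
    # string literal (and cannot return a number, tuple, dict, etc.).
    # Assumption: s contains no unescaped double quotes.
    return ast.literal_eval('"%s"' % s)
```

For example, decode_escapes_literal('Ern\\u0151 \\t Rubik') yields the decoded string with a real tab and the ő character, without the Latin-1 limitation.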
This is a bad way of doing it, but it worked for me when trying to interpret escaped octals passed in a string argument.
input_string = eval('b"' + sys.argv[1] + '"')
It's worth mentioning that there is a difference between eval and ast.literal_eval (eval being way more unsafe). See Using python's eval() vs. ast.literal_eval()?
Choose Your Own Pyventure/old toc page
Both
(Optional) Install IPython
IPython is a Python package that gives a much nicer command-line environment, including syntax highlighting, history, and a variety of debugging tools and improvements. Download it from the IPython site.
Related Tools
PyScripter is a free IDE (available for Windows). If you have previous programming experience, this is similar to the Borland Delphi IDE. You can download PyScripter from the PyScripter project site.
Lessons
Lesson 0
print "Hello, world!"
After running this, you should see:
"Hello, world!"
Play Around With Turtle
The Python turtle module is a simple reimplementation of a LOGO-like language.
From your python prompt:
- # import everything from the turtle module
- #   import: make them available for use
- #   everything: (mostly) everything (there are some exceptions)
- #   the turtle module: a collection of functions (actions), constants,
- #   and other useful stuff that is grouped into one 'space' (a namespace)
- #   named turtle, for ease of memory, and general good sense
- >>> from turtle import *
- >>> circle(80)   # this will draw a circle with a radius of 80
- >>> reset()      # reset the screen
- >>> forward(10)  # make the turtle go forward 10
All the commands are listed at the Turtle reference documentation
Lesson 1
no line numbers, but can be copy and pasted
##!"
- ## anything from '#' is a comment, and gets ignored.
- ##!"
Other Pages
Additional Resources
Exercises (under construction)
Before getting started with Cocos2d you need to learn how to set up a project. This will make things easier than starting from scratch. By the end of this chapter, you will know how to install the templates that come with the source code and we'll take a look at the samples included.
In this chapter, we shall:
Learn how to get the example projects (called templates) working
Play with those templates
Take a look at the basic structure of a Cocos2d game
So let's get on with it.
Note
In this book we will always mention the term iOS programming, which refers to iPhone, iPod Touch, and iPad programming, as they all share the same iOS and are similar in programming terms.
There are many topics which this book does not cover because they are advanced topics; some of them are not so simple for beginners, but when you feel ready, give them a try.. We'll explore it in Chapter 9.file:.
Try taking a look at all of the samples included with Cocos2d; some are just performance tests, but they are still useful for knowing what to expect from the framework and what it can do.
Although these tests don't make anything that looks like a game, they are useful for learning the basics, and they are very good consulting material. So, when you are in doubt and reading the source code does not help, check these examples, and you will surely find your answer.
Cocos2d comes with three templates. These templates are the starting point for any Cocos2d game. They let you:
Which one you decide to use for your project depends on your needs. Right now we'll create a simple project from the first template..
Cocos2d templates will appear right there along with the other Xcode project templates, as shown in the following screenshot:
Select Cocos2d-0.99.1 Application.
Name the project HelloCocos2d and save it to your Documents folder.
Pretty boring, isn't it? As you advance in the book, you will build many more interesting things, but let's stop for a moment and take a look at what was created here.
When you run the application you'll notice a couple of things, as follows:
In a moment, we'll see how this is achieved by taking a look at the generated classes.
The CCDirector is the class whose main purpose is scene management. It is responsible for switching scenes, setting the desired FPS, the device orientation, and a lot of other things.
The CCDirector is the class responsible for initializing OpenGL ES.
Note: We'll talk about this in the next chapter.
The CCDirector handles scene management. It can tell the game which scene to run, suspend scenes, and push them onto a stack. In Chapter 5, you'll learn how to do this; for now you should know that in the AppDelegate you need to tell the Director which scene you wish to run when starting the application.
In the following chapters, you'll learn how to display a UIAlert, so that the action does not resume instantly. Later on, you'll learn how to do a nice pause screen with options to resume and quit the game.
Start by opening the HelloWorldScene.h file. Let's analyze it line by line:
#import "cocos2d.h"
Each class you create that makes use of Cocos2d should import its libraries. You do so by writing the preceding line.
// HelloWorld Layer
@interface HelloWorld : CCLayer
{
}

// returns a Scene that contains the HelloWorld as the only child
+(id) scene;

@end
These lines define the interface of the HelloWorld CCLayer. Next, open the HelloWorldScene.m file, where the action happens:
The preceding code is the one that will be called when the layer is initialized. What it does is create a CCLabel to display the Hello World text.
CCLabel* label = [CCLabel labelWithString:@"Hello World" fontName:@"Marker Felt" fontSize:64];
CCLabel is one of the three existing classes that allow you to show text in your game. We'll do a lot of things with labels throughout this book.
Note
Most Cocos2d classes can be instantiated using convenience methods, thus making memory management easier. To learn more about memory management check the Apple documents at the following URL:
CGSize size = [[CCDirector sharedDirector] winSize];
The preceding line gets the size of the current window. Right now the application is running in landscape mode so it will be 480 * 320 px.
Note
Remember that the screen sizes might vary from device to device and the orientation you choose for your game. For example, in an iPad application in portrait mode, this method would return 768 * 1024.
label.position = ccp( size.width /2 , size.height/2 );
This line sets the label's position in the middle of the screen.
Now, all that is left is to actually place the label in the layer.
[self addChild: label];
Note
You can find further information about parent-child relationships in the Cocos2d documentation. We'll see a lot of examples of this throughout the book as it is a very useful feature.
They can execute actions: For example, you could tell the CCLabel we had in the previous example to move to the position (0,0) in 1 second. Cocos2d allows for this kind of action in a very easy fashion.
#import "cocos2d.h"
Unit should inherit from CCNode to be able to schedule methods, so let's make the corresponding changes.
@interface Unit : CCNode { }
Import HelloWorldScene.h. We'll need this class soon.
#import "HelloWorldScene.h";
That is all you must change for now in the Unit.h file. Now, open the Unit.m file. You should see something like this:
#import "Unit.h" @implementation Unit @end
What we have to do now is fill it up.
Create the init method for the Unit class. This one is a simple example so we won't be doing a lot here:
-(id) initWithLayer:(HelloWorld*) game {
    if ((self = [super init])) {
        [game addChild:self];
        [self schedule:@selector(fire) interval:1];
    }
    return (self);
}
This init method takes the HelloWorld layer as a parameter. We are doing that because when the object is instantiated it will need to be added as a child of the layer node.
[game addChild:self];
Adding the unit object as a child of the layer node allows it to schedule methods.
Note
If you are scheduling a method inside a custom class and it is not running, check whether you have added the said object as a child of a layer. Not doing so won't yield any errors but the method won't run!
[self schedule:@selector(fire) interval:1];
This line is the one that does the magic! You pass the desired selector you want to run in the schedule parameter and a positive float number as the interval in seconds.
Now, add the fire method.
-(void)fire {
    NSLog(@"FIRED!!!");
}
You did expect a bullet being fired with flashy explosions, didn't you? We'll do that later! For now content yourself with this. Each second after the Unit instance is created a "FIRED!!!" message will be output in the Console.
We just need to make a couple of changes to the HelloWorldScene class to make this work.
In the HelloWorldScene.h file, add the following line:
#import "Unit.h"
Then in the HelloWorldScene.m file let's create an instance of the Unit class.
Unit * tower = [[Unit alloc]initWithLayer:self];
As you can see we are passing the HelloWorld layer to the Unit class to make use of it.
That is all. Now Build and Run the project; each second you should see a FIRED!!! message printed in the Console.
As you can see, the unit created is firing a bullet each second, defeating every enemy troop on its way. In the next chapter, you will learn how to use CCSprites, another subclass of CCNode.
These two messages show what was dealloced in this particular case. What these classes do does not matter right now. However, CCScheduler is responsible for triggering scheduled callbacks and CCTextureCache is responsible for handling the loading of textures:
CCSprite * image = [CCSprite spriteWithFile:@"Icoun.png"];
[self addChild:image];
That line of code creates a Sprite from an image file in your resource folder. As you may notice, "Icoun.png" is not present in the project's resource folder, so when the application is run and execution gets to that line of code, it will crash.
Run the application and see it crash.
Open the Console, and you will see the following output:
The debug messages tell you exactly what is failing. In this case, it couldn't use the image icoun.png. Why? Because it is not there!
Change the string to match the file's name to see the error go away.
In this chapter, we learned how to install the Cocos2d templates, create a project from them, explore the generated classes, and schedule a method on our own CCNode subclass.
In the next chapter, we will learn everything there is to know about CCSprites by building the first game of the book, "Colored Stones". | https://www.packtpub.com/product/cocos2d-for-iphone-0-99-beginner-s-guide/9781849513166 | CC-MAIN-2020-40 | refinedweb | 1,487 | 73.47 |
import "github.com/iotexproject/iotex-core/pkg/lifecycle"
Package lifecycle provides application models' lifecycle management.
Lifecycle manages the lifecycle of models. Currently a Lifecycle has two phases: Start and Stop. Lifecycle doesn't yet support soft-dependency models or multi-error handling, so all models in a Lifecycle are required to succeed in both phases.
Add adds a model into LifeCycle.
AddModels adds multiple models into LifeCycle.
OnStart runs the models' OnStart functions if the models implement them. All OnStart functions are run in parallel. The context passed into the models' OnStart methods is canceled the first time a model's OnStart function returns a non-nil error.
OnStop runs the models' OnStop functions if the models implement them. All OnStop functions are run in parallel. The context passed into the models' OnStop methods is canceled the first time a model's OnStop function returns a non-nil error.
Model is application model which may require to start and stop in application lifecycle.
StartStopper is the interface that groups Start and Stop.
Starter is Model has a Start method.
Stopper is Model has a Stop method.
Package lifecycle imports 2 packages and is imported by 10 packages. Updated 2019-07-27.
Hi guys,
I'm new to ASP.NET and MySQL. I've worked with Microsoft Access for a while (6 months or so) so I have an understanding on how databases work. However, now I am trying to create a website with ASP.NET 3.5 and MySQL. So far I've been able to get a connection to MySQL and insert records to tables. However, I'm struggling to learn how to do more because I cannot find any tutorials/information online.
From my experience with Access, I've gotten used to using recordsets. For example, filtering a recordset and being able to read/change single/multiple records from the database using the recordset. I haven't been able to find out how to do this sort of thing with ASP.NET and MySQL.
So pretty much I'm looking for a direction on how to use these two technologies together. Does anyone know any tutorials/books on this topic? I've been trying to find out how to use the MySql.Data.MySqlClient namespace just by looking at the method and property descriptions is visual studio but haven't had much luck. Any help is greatly appreciated! Thanks. | https://www.daniweb.com/programming/web-development/threads/145803/using-asp-net-with-mysql-tutorial | CC-MAIN-2018-09 | refinedweb | 200 | 68.47 |
One of the challenges we've been dealing with in the Yellowbrick library is the proper resolution of colors, a problem that seems to have parallels in matplotlib as well. The issue is that colors can be described by the user in a variety of ways, and that description then has to be parsed and rendered as specific colors. To name a few color specifications that exist in matplotlib:
- None: choose a reasonable default color
- The name of the color, e.g. "b" or "blue"
- The hex code of the color, e.g. "#377eb8"
- The RGB or RGBA tuples of the color, e.g. (0.0078, 0.4470, 0.6353)
- A greyscale intensity string, e.g. "0.76"
The pyplot api documentation sums it up as follows:
In addition, you can specify colors in many weird and wonderful ways, including full names (‘green’), hex strings ('#008000'), RGB or RGBA tuples ((0,1,0,1)) or grayscale intensities as a string (‘0.8’). Of these, the string specifications can be used in place of a fmt group, but the tuple forms can be used only as kwargs.
Things get even weirder and slightly less wonderful when you need to specify multiple colors. To name a few methods:
- A list of colors whose elements are one of the above color representations.
- The name of a color map object, e.g. "viridis"
- A color cycle object (e.g. a fixed length group of colors that repeats)
Matplotlib Colormap objects resolve scalar values to RGBA mappings and are typically used by name via the matplotlib.cm.get_cmap function. They come in three varieties: Sequential, Diverging, and Qualitative. Sequential and Diverging color maps are used to indicate continuous, ordered data by changing the saturation or hue in incremental steps. Qualitative colormaps are used when no ordering or relationship is required, such as in categorical data values.
Trying to generalize this across methodologies is downright difficult. So instead let's look at a specific problem. Given a dataset, X, whose shape is (n,d) where n is the number of points and d is the number of dimensions, and a target vector, y, create a figure that shows the distribution or relationship of points defined by X, differentiated by their target y. If d is 1 then we can use a histogram, if d is 2 or 3 we can use a scatter plot, and if d > 3, then we need RadViz or Parallel Coordinates. If y is discrete, e.g. classes, then we need a color map whose length is the number of classes, probably a qualitative colormap. If y is continuous, then we need to perform binning or assign values according to a sequential or diverging color map.
So, problem number one is detecting if y is discrete or continuous. There is no automatic way of determining this, so besides having the user directly specify the behavior, I have instead created the following rule-based functions:
import numpy as np
from collections import Counter

def is_discrete(vec):
    """
    Returns True if the given vector contains categorical values.
    """
    # Convert the vector to a numpy array if it isn't already.
    vec = np.array(vec)
    if vec.ndim != 1:
        raise ValueError("can only handle 1-dimensional vectors")

    # Check the array dtype
    if vec.dtype.kind in {'b', 'S', 'U'}:
        return True

    if vec.dtype.kind in {'f', 'c'}:
        return False

    # For vectors of 50 or more elements
    if vec.shape[0] >= 50:
        if np.unique(vec).shape[0] <= 20:
            return True
        return False

    # For vectors of fewer than 50 elements
    else:
        elems = Counter(vec)
        if len(elems.keys()) <= 20 and all([c > 1 for c in elems.values()]):
            return True
        return False

    # Raise exception if we've made it to this point.
    raise ValueError(
        "could not determine if vector is discrete or continuous"
    )


def is_continuous(vec):
    """
    Returns True if the given vector contains continuous values. To keep
    things simple, this is currently implemented as not is_discrete().
    """
    return not is_discrete(vec)
The rules for determining discrete/categorical values are as follows:
- If it is a string type - True
- If it’s a bool type - True
- If it is a floating point type - False
- If > 50 samples then if there are 20 or fewer discrete values
- If < 50 samples, then if there are 20 or fewer discrete samples that are represented more than once each.
These rules are arbitrary but work on the following test cases:
datasets = (
    np.random.normal(10, 1, 100),        # Normally distributed floats
    np.random.randint(0, 100, 100),      # Random integers
    np.random.uniform(0, 1, 1000),       # Small uniform numbers
    np.random.randint(0, 1, 100),        # Binary data (0 and 1)
    np.random.randint(1, 4, 100),        # Three integer classes (1, 2, 3)
    np.random.choice(list('ABC'), 100),  # String classes
)

for d in datasets:
    print(is_discrete(d))
The next step is to determine how best to assign colors for continuous vs. discrete values. One typical use case is to directly assign color values using the target variable, then provide a colormap for color assignment as shown:
# Create some data sets.
X = np.random.normal(10, 1, (100, 2))
yc = np.random.normal(10, 1, 100)
yd = np.random.randint(1, 4, 100)
f, (ax1, ax2) = plt.subplots(1, 2, sharey=True, figsize=(9,4))

# Plot the Continuous Target
ax1.scatter(X[:,0], X[:,1], c=yc, cmap='inferno')

# Plot the Discrete Target
ax2.scatter(X[:,0], X[:,1], c=yd, cmap='Set1')
Alternatively, the colors can be directly assigned by creating a list of colors. This brings us to our larger problem - how do we create a list of colors in a meaningful way to assign our colormap appropriately? One solution is to use the matplotlib.colors.ListedColormap object, which takes a list of colors and can convert a dataset to that list as follows:
- If the input data is in (0,1) - then uses a percentage to assign the color
- If the input data is an integer, then uses it as an index to fetch the color
This means that some work has to be done ahead of time, e.g. discretizing the values or normalizing them.
f, (ax1, ax2) = plt.subplots(1, 2, sharey=True, figsize=(9,4))

# Plot the Continuous Target
norm = col.Normalize(vmin=yc.min(), vmax=yc.max())
cmap = col.ListedColormap([
    "#ffffcc", "#ffeda0", "#fed976", "#feb24c", "#fd8d3c",
    "#fc4e2a", "#e31a1c", "#bd0026", "#800026"
])
ax1.scatter(X[:,0], X[:,1], c=cmap(norm(yc)))

# Plot the Discrete Target (c already carries RGBA values, so no
# cmap argument is needed here)
cmap = col.ListedColormap([
    "#34495e", "#2ecc71", "#e74c3c", "#9b59b6", "#f4d03f", "#3498db"
])
ax2.scatter(X[:,0], X[:,1], c=cmap(yd))
Note that in the above function, the indices 1-3 are used (not the 0 index) since the classes were 1-ordered.
Clearly color handling is tricky, but hopefully these notes will provide us with a reference when we need to continue to resolve these issues developing yellowbrick. | https://bbengfort.github.io/2017/01/resolving-matplotlib-colors/ | CC-MAIN-2021-17 | refinedweb | 1,131 | 55.34 |
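Tying the pieces together, a helper along these lines could pick the strategy automatically. This is my own sketch, not Yellowbrick's actual API, and it assumes matplotlib >= 3.5 for the matplotlib.colormaps registry:

```python
import numpy as np
import matplotlib
import matplotlib.colors as mcolors

def resolve_colors(y, palette, continuous_cmap="inferno"):
    """Return one RGBA color per element of y."""
    y = np.asarray(y)
    # Simplified discreteness rule: small integer/boolean/string vectors.
    discrete = y.dtype.kind in ("b", "i", "S", "U") and np.unique(y).size <= 20
    if discrete:
        # Map each distinct class to an index into the qualitative palette.
        _, indices = np.unique(y, return_inverse=True)
        return mcolors.ListedColormap(palette)(indices)
    # Continuous: normalize into [0, 1] and look up a sequential colormap.
    norm = mcolors.Normalize(vmin=y.min(), vmax=y.max())
    return matplotlib.colormaps[continuous_cmap](norm(y))
```

Using np.unique with return_inverse also sidesteps the 1-ordered class indexing issue noted above, since classes are always remapped to 0-based palette indices.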
This work is licensed under a Creative Commons Attribution 3.0 Unported License.
The current client managers depend on a relatively large number of configuration items, that is, the combination of all clients' parameters. This makes their migration to tempest-lib troublesome.
We have several plans about the client manager:
The current client managers depends on CONF, and it’s structure does not easily allow for runtime registration of extra clients.
For instance, in Manager class:
self.network_client = NetworkClient(
    self.auth_provider,
    CONF.network.catalog_type,
    CONF.network.region or CONF.identity.region,
    endpoint_type=CONF.network.endpoint_type,
    build_interval=CONF.network.build_interval,
    build_timeout=CONF.network.build_timeout)
Another issue with the current structure is that new API versions lead to proliferation of client attributes in the client manager classes. With service clients being split into pieces, the size of the client manager grows accordingly.
Split the client manager in two parts.
The first part provides lazy loading of clients, and it does not depend on tempest CONF, as it is planned for migration to tempest.lib. It covers the six client groups for the six core services covered by tempest in the big tent. It exposes an interface to register further service clients.
Lazy loading of clients provides protection against clients that try to make API calls at __init__ time; it also helps in running tempest with the minimum amount of CONF required for the clients in use by a specific test run.
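An illustrative sketch (my own, not actual tempest code) of what CONF-free lazy loading might look like — clients are built only on first attribute access and cached afterwards:

```python
class ClientManager:
    """Lazy client manager sketch: clients are built on first access."""

    def __init__(self, auth_provider, client_factories):
        self._auth = auth_provider
        self._factories = client_factories   # name -> callable(auth)
        self._cache = {}

    def __getattr__(self, name):
        # Called only when normal attribute lookup fails, i.e. for
        # client names that have not been materialized yet.
        try:
            factory = self._factories[name]
        except KeyError:
            raise AttributeError(name)
        if name not in self._cache:
            self._cache[name] = factory(self._auth)
        return self._cache[name]
```

Because no factory runs until its client is accessed, a client whose __init__ makes API calls cannot break an unrelated test run, and only the configuration for the clients actually used needs to be supplied.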
The second part passes tempest CONF values to the first one. It registers any non-core client, whether still in tempest tree or coming from a plug-in.
The client registration interface could look like:
def register_clients_group(self, name, service_clients, description=None,
                           group_params=None, **client_params):
    """Register a client group to the client manager

    The client manager in tempest only manages the six core clients.
    Any extra client, provided via a tempest plugin, must be registered
    via this API. All clients registered via this API must support all
    parameters defined in common parameters. Clients registered via this
    API must ensure uniqueness of client names within the client group.

    :param name: Name of the client group, e.g. 'orchestration'
    :param service_clients: A list with all service clients
    :param description: A description of the group
    :param group_params: A set of extra parameters expected by clients
        in this group
    :param client_params: each is a set of client specific parameters,
        where the key matches service_client.__name__
    """
The tempest plugin TempestPlugin interface is extended with a method to return the service client data specific to a plugin. Each plugin defines a new service clients group and the relevant data.
Service Clients data is stored in a singleton ServiceClientsData. ServiceClientsData is instantiated by the TempestTestPluginManager, which obtains the service client data from each plugin and registers it.
Client managers used by tests consume the service client data singleton, and dynamically defines a set of attributes which can be used to access the clients.
Attributes names are statically defined for now. They will be the same names as now, to minimize the impact on the codebase. For plugins, attributes names include the group name, to avoid name conflicts across service clients that belong to different plugins.
In future we may define a standard naming convention for attribute names and to enforce it by deriving names automatically. Future names may not contain the ‘_client’ suffix, to save space and allow for always specifying the client provider in test code, so to make the code more readable. This naming convention will not be implemented as part of this spec.
Work has started on this: Change-Id I3aa094449ed4348dcb9e29f224c7663c1aefeb23
sigqueue(2) sigqueue(2)
NAME
sigqueue() - queue a signal to a process
SYNOPSIS
#include <signal.h>
int sigqueue(pid_t pid, int signo, const union sigval value);

DESCRIPTION
The sigqueue() system call returns immediately. If SA_SIGINFO is set
for signo at the receiving process (see sigaction(2)) and if resources
are available to queue the signal, the signal will be queued and sent
to the receiving process. When the signal is delivered or accepted,
the field si_value of the siginfo parameter (see signal(5)) will be
set to value. If SA_SIGINFO is not set for signo, then signo, but not
necessarily value, will be sent at least once to the receiving
process.
If the value of pid causes signo to be generated for the sending
process, and if signo is not blocked, either signo or at least one
pending unblocked signal will be delivered to the sending process
before the sigqueue() system call returns. Should any of multiple
pending signals in the range SIGRTMIN to SIGRTMAX be selected for
delivery or acceptance, it will be the lowest numbered one. The
selection order between realtime and non-realtime signals, or between
multiple pending non-realtime signals, is unspecified.
Application Usage
Threads Considerations
sigqueue() can be used to post signals to another process but cannot
be used to post signals to a specific thread in another process. If
the value of pid causes signo to be generated for the calling
process, and if signo is not blocked, either signo or at least one
pending unblocked signal will be delivered to the calling thread
before the sigqueue() function returns.
LWP Considerations
Hewlett-Packard Company - 1 - HP-UX Release 11i: November 2000
Signals can not be posted to specific Lightweight Processes (LWPs) in
another process.
RETURN VALUE
Upon successful completion, the specified signal will be queued, and
the sigqueue() function returns a value of 0 (zero). Otherwise, a
value of -1 is returned, and errno is set to indicate the error.
ERRORS
sigqueue() fails and no signal is sent if any of the following
conditions occur:
[EAGAIN] No resources are available to queue the signal.
The process has already queued {SIGQUEUE_MAX}
signals that are still pending at the receiver(s),
or a systemwide resource limit has been exceeded.
SEE ALSO
      kill(2), sysconf(2), signal(5).
Adventures in deno land
8 min read - 2020-05-15
Earlier this week deno was released.
As I was very excited ever since I first heard about it on Ryan Dahl’s talk at jsconf, I had to give it a try.
This talk is one of my personal favorites, it is a lesson on humility. Having Ryan looking at what he built 10 years ago with a criticizing tone is interesting. Even when node is used by millions of people, its creator still feels bad about some decisions made at the time.
Getting back to what brought me here… After hearing of the launch of v1.0 I took some hours to learn more about it. The documentation is very well written and structured; by following what they call the manual, one can get a very good understanding of how to start using it.
Building something
After reading the documentation, it looked great, in theory. But my default way to learn is normally to build something with it. It normally helps me identify pains I’d have in the real world if I had to build a real application with it.
The decision was to build an API that connects to twitter and returns 15 tweets from a user with more than 5 likes, I called it popular tweets. This small server should then run on a Kubernetes environment.
If you wanna follow the code, here you have it
At first, I was kinda lost and didn't know any APIs, so I decided to go explore the standard library. I was very impressed by how approachable the code was; I took some time to read it and learned a ton.
I got this idea in the back of my mind, which might lead to a future article, similar to what Paul Irish did 10 years ago in 10 things I learned from the jquery source but for the deno source. I might actually do it!
After getting to know the basics, installing the VSCode plugin and deno, we were ready to start my adventure.
To be honest, it wasn’t a real adventure, everything looked so familiar that I almost forgot I was using a different runtime.
Getting to code
By using the standard library's http server it was very easy to build a server and get it up and running, handling requests.
import { serve } from "./deps.ts";

const s = serve({ port: 8080 });

for await (const req of s) {
  req.respond({
    status: 200,
    body: "Hello world",
  });
}
Step 2 was to connect it to the twitter API. Having fetch already included in deno made it very easy and familiar.
fetch("(from: ampsantos0 min_faves: 5)", {
  headers: new Headers([["content-type", "application/json"]]),
})
Deno opted for mimicking existing Web APIs where they existed, rather than inventing new proprietary ones. For APIs that are not web standard, the Deno namespace is used. This looks like a smart choice to me, improving discoverability and reusing knowledge developers already have of the existing APIs.
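Once the response is parsed, selecting the "popular" tweets could look something like the sketch below — the Tweet shape and field names are my assumptions for illustration, not Twitter's actual payload:

```typescript
// Hypothetical tweet shape; the real Twitter API payload differs.
interface Tweet {
  text: string;
  favoriteCount: number;
}

// Keep tweets with more than `minFaves` likes, preserving order,
// capped at 15 results as the project description states.
function popularTweets(tweets: Tweet[], minFaves = 5): Tweet[] {
  return tweets.filter((t) => t.favoriteCount > minFaves).slice(0, 15);
}
```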
Running it
Running the code was a breeze. One of deno’s selling points is security and I couldn’t agree more, it improved over node. You notice it the first time you try to run a program:
$ deno run ./index.ts
Since we are, in this case, using the network to both expose our endpoint (:8080) and access Twitter's API, without our explicit consent, here's what you get:
```
error: Uncaught PermissionDenied: network access to "0.0.0.0:8080", run again with the --allow-net flag
    at unwrapResponse ($deno$/ops/dispatch_json.ts:43:11)
    at Object.sendSync ($deno$/ops/dispatch_json.ts:72:10)
    at Object.listen ($deno$/ops/net.ts:51:10)
    at listen ($deno$/net.ts:152:22)
```
This is a very reasonable and comprehensible error. Again, good job on this!
A good approach to this is enabling whitelist permissions using the --allow-net flag, which deno handles in a very simple and pragmatic way:
$ deno run --allow-net=0.0.0.0:8080,api.twitter.com index.ts
When running the code, the --inspect flag enables developers to use Chrome Dev Tools the same way they did in node; the debugging experience is as good as developers are used to.
Module resolution
When Ryan first talked about deno, and the mistakes made in node's design, one of the big things he mentioned was that node's way of importing modules was too complicated and had lots of edge cases.
Example:
const path = require("path")
The dependency we're importing, path, might come from the node standard library. At the same time, it can come from node_modules, or you could have installed a dependency named path, right? OK, now you've found the dependency, but do you know which file you are requiring? Is it index.js? What if package.json has a different main file defined?
Lots of unknowns…
What about local imports? When you do:
const add1 = require("./utils/math")
Is math a file? Or a folder with an index.js inside of it? What is the file extension? Is it .js, or .ts?
You get the point… Node imports are hard.
Deno follows a golang-like approach of having absolute urls. If it sounds strange to you, bear with me. Let's look at the advantages:
- It solves local imports by adding the extension to it.
import { add1 } from "./utils/math.ts"
You know just from reading it that math.ts is a file.
- It solves third party imports by having an absolute URL
import { serve } from ""
No more magic module resolution.
This absolute module resolution enabled some fun stuff like what R. Alex Anderson did, running code from a set of gists.
Gang, you can throw Deno programs onto and it'll just work. Even the relative imports work correctly through Gists.— R. Alex Anderson 🚀 (@ralex1993) May 14, 2020
Throwing together a little demo and having someone else play around with it just by sending a link is 🔥
Note: the VSCode plugin functions well with the third party imports; you can cmd+click on a dependency and you're directed to the code, as usual.
Keeping track of dependencies
Let's talk about managing dependencies. As deno simplified module imports, it was able to automatically cache dependencies.
When you first try to run it, it downloads the dependencies, caches them, and then runs with the cached version.
To force the caching of a module without running it, you can run $ deno cache [module url].
You are probably thinking it is strange and error-prone to have URLs all around the code? That's right. You can manage it however you want; as all modules have absolute URLs now, it's just code at the end of the day.
Deno recommends having a deps.ts file. You can call it whatever you want, but since it is in the documentation, I can see this becoming a standard. In that file, you can import all the dependencies from the URLs and export the methods used.
```ts
// deps.ts
export { serve } from ""
export { parseDate } from ""

// index.ts
import { serve } from "./deps.ts"
```
Having one single deps.ts file allows you to do some caching (as you did with package.json) on docker builds.
```dockerfile
COPY deps.ts .
RUN deno cache deps.ts
```
By doing this, the RUN command will only run if the deps.ts file changed. With this, and as the installation step is now automatic, running it on docker became simpler.
There is one thing that has to be taken care of with deno: we have to send the flags for the permissions.
CMD ["run", "--allow-net", "index.ts"]
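Putting those pieces together, a complete Dockerfile might look roughly like the sketch below. The base image name and file layout here are illustrative assumptions, not taken from the article:

```dockerfile
# Sketch only: base image and paths are assumptions.
FROM hayd/deno:latest

WORKDIR /app

# Cached layer: re-runs only when deps.ts changes.
COPY deps.ts .
RUN deno cache deps.ts

# Copy the rest of the source code.
COPY . .

EXPOSE 8080

# The image's entrypoint is deno, so we pass the subcommand and flags.
CMD ["run", "--allow-net", "index.ts"]
```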
Deno binaries
Deno provides an install command. But, as I said earlier, it does not install dependencies on the project, as that is done automatically.
Its usage is similar to npm install --global. Citing the explanation on the official website about install:
This command creates a thin, executable shell script which invokes deno using the specified CLI flags and main module. It is placed in the installation root’s bin directory.
When you install a global binary, you have to specify what permissions it will need to run, again, secure by default.
$ deno install --allow-net --allow-read
And you can then run
$ file_server
Conclusion
Coming from the JS/TS world, I'd say deno got lots of things right. It has the familiarity of JS and TS with small twists, for the better. Having the standard library written in TS is a big plus, as it isn't always straightforward to set up in node.
The standard library is great: it looks both readable and well thought out. Quoting the deno_std main repo:
deno_std is a loose port of Go’s standard library. When in doubt, simply port Go’s source code, documentation, and tests.
This is funny and interesting at the same time: deno used the effort the golang community put into its standard lib to drive its own, and the result looks great.
The permission system is great and intuitive. Module resolution is now simpler and removes pretty much all the magic we got used to in node lands.
All the async APIs return Promises now. That means using await and .then everywhere, not running into callback hell, and not needing tools like promisify and such.
Adding to all of this, deno also got inspiration from golang by shipping a lot of the essential tools in the main binary. Discussions about bundler, formatter, and test runner will no longer be a thing, and even if they are, there’s an official way now. I haven’t tried the test suite and the documentation generator yet, I might write about it later.
Gotta say the overall experience of building a (very small) application with it was very good and intuitive. Can’t wait to build something more real with it!
I’m excited to see how this will evolve and thus I wrote another article, my second adventure in deno.land where I go a little deeper on the runtime.
I use VkLongPoll from vk_api.longpoll.
Authorization is done with a community token:
```python
vk = vk_api.VkApi(token=token)
longpoll = VkLongPoll(vk)

def main_loop(self) -> None:
    for event in self.longpoll.listen():
        # How do I check that someone has left the conversation,
        # and then kick them via removeChatUser, as I understand it?
```
How do I catch this event and kick the person who left the conversation?
Answer 1, Authority 100%
To begin with, it is worth saying that you are using the wrong module. You need vk_api.bot_longpoll.
You can kick a person out of the conversation by checking the message for the action key with the type chat_kick_user.
Example (Python 3.8+):
```python
from vk_api import VkApi
from vk_api.bot_longpoll import VkBotLongPoll, VkBotEventType

ACCESS_TOKEN = ''  # Substitute your own!
GROUP_ID = 0       # Substitute your own!

vk_session = VkApi(token=ACCESS_TOKEN)
vk = vk_session.get_api()
longpoll = VkBotLongPoll(vk_session, GROUP_ID)

def main():
    for event in longpoll.listen():
        if event.type == VkBotEventType.MESSAGE_NEW and (action := event.obj['message'].get('action')):
            if action['type'] == 'chat_kick_user':
                vk.messages.removeChatUser(
                    chat_id=event.chat_id,
                    user_id=action['member_id'],
                )

if __name__ == '__main__':
    main()
```
Do not forget that the bot must be made an administrator of the conversation!
numericube-twistranet 1.1.4
twistranet - An Enterprise Social Network
This is the twistranet project!
(c)2011 NumeriCube ()
Official website: / French version on
About
twistranet is an Enterprise Social Software. It's a Social Network you can use to help people collaborate. And it is also a nice CMS (Content Management System) with a social focus.
twistranet is published under the terms of the GNU Affero General Public License v3.
Requirements
TwistraNet is written in Python (>= 2.6, or >= 2.5 + simplejson). Twistranet is based on the Django framework (as of writing, Django >= 1.2 is mandatory; Django >= 1.3 is highly recommended).
If Django is already installed, you can install twistranet over your Django platform. Otherwise, the latest Django version will be downloaded and installed at setup.
Other requirements:
- python-setuptools
- python-imaging (aka PIL)
- python-ldap, only if you want to authenticate against LDAP/Active Directory.
Installation
Installation - short version
- Install requirements (Python, SetupTools and PIL)
- Download and untar (or unzip) twistranet from
- In the unzipped directory, just execute:
- (sudo) python ./setup.py install clean
twistranet is now installed. You can have many sites with just one twistranet installation, so you need to explicitly deploy and bootstrap your new site.
- (sudo) twistranet_project <path_to_my_new_site>
Don't forget to write down your generated admin password!!
Your server should now be fully working and running on !
If you want to start it again:
- cd <path_to_my_new_site>
- python ./manage.py runserver 0.0.0.0:8000
Installation - the Big Picture
Installation is in fact a 2-step process. You must install twistranet's core features as a python library, then you have to create a project (which is an instance of a twistranet site).
To install twistranet's core features:
- Download and install Python >= 2.6 (with setuptools and PIL)
- Execute (as a superuser) "easy_install numericube-twistranet" ; this will normally download and install twistranet and all dependencies.
To create a new project:
- In the directory you want your website files created, type "python twistranet_project -n [<template>] <project_path>",
where <project_path> is the name of your site (it will be created by the process) ; <template> is the name of the project template to deploy. Currently admitted values are:
- 'default' (which is... the default value), an empty project based on sqlite;
- 'cogip', a sample french-language project of a fictious company called COGIP.
The '-n' (or '--no-bootstrap') is an option to tell the twistranet_project script not to bootstrap it immediately (the bootstrapping process is the initial database feed).
You can do it by hand once (and only once!) with the following commands:
Go to your <project_path>
Review the settings.py file and local_settings.py, change to whatever suits your needs.
Among other things, carefully choose your DATABASE scheme, your LDAP/AD settings and the 'admin' password that has been generated for you.
Execute "./manage.py bootstrap" to build the database
Running Twistranet :
- Execute ./manage.py runserver 0.0.0.0 to start playing with twistranet.
- Point your browser at
More informations
You can get other informations in the "docs" folder inside this package about:
- installing/upgrading/uninstalling twistranet with PIP (quick and clean)
- installing Twistranet for testing and development (using virtualenv / installing in place the devel package / localization / running tests ...)
- Running Twistranet in debug mode
Troubleshooting
No image / thumbnail on my fresh twistranet instance!
This is probably a problem with python-imaging installation. Just install PIL for your OS.
Under debian, the easiest is to do "apt-get install python-imaging".
error: Could not find required distribution Django
If you've got this message, it means the autoinstall procedure of twistranet can't install django automatically. Just install django (see) either from sources or from a package for your OS, and run "python setup.py install" again.
Seems that it is a python-2.5 related problem.
I've lost my admin password!
It's easy to set a new one.
- Stop your server
- Run ./manage.py changepassword admin (and change your password)
- Start your server again
error when using mod_python
mod_wsgi is recommended, but if you need mod_python this little django 1.2.5 hack is needed:
- in django.http.__init__: do not use "from mod_python.util import parse_qsl"
replace the lines 7 to 11 with:
from cgi import parse_qsl
Thanks to esimorre
Greetings
Email templates are inspired from MailChimp's Email-Blueprints (). We do love Mailchimp and strongly recommend it if you want a powerful mailing-list solution!
MimeTypes Icons came from Farm Fresh Free icons Collection, under Creative Commons 3.0 License. Many thanks to
- Author: numeriCube
- Keywords: twistranet Enterprise Social Network
- License: GNU Affero General Public License v3
- Categories
- Development Status :: 4 - Beta
- Environment :: Web Environment
- Framework :: Django
- Intended Audience :: Information Technology
- License :: OSI Approved :: GNU Affero General Public License v3
- Operating System :: OS Independent
- Programming Language :: Python :: 2
- Topic :: Internet :: WWW/HTTP
- Topic :: Internet :: WWW/HTTP :: Dynamic Content
- Topic :: Internet :: WWW/HTTP :: WSGI
- Topic :: Software Development :: Libraries :: Application Frameworks
- Topic :: Software Development :: Libraries :: Python Modules
- Package Index Owner: numericube
- DOAP record: numericube-twistranet-1.1.4.xml | http://pypi.python.org/pypi/numericube-twistranet/1.1.4 | crawl-003 | refinedweb | 835 | 57.47 |
std::basic_istream::sync
Synchronizes the input buffer with the associated data source.
Behaves as UnformattedInputFunction, except that gcount() is not affected. After constructing and checking the sentry object:
- if rdbuf() is a null pointer, returns -1;
- otherwise, calls rdbuf()->pubsync(). If that function returns -1, calls setstate(badbit) and returns -1. Otherwise, returns 0.
Parameters
(none)
Return value
0 on success, -1 on failure or if the stream does not support this operation (is unbuffered).
Notes
As with readsome(), it is implementation-defined whether this function does anything with library-supplied streams. The intent is typically for the next read operation to pick up any changes that may have been made to the associated input sequence after the stream buffer last filled its get area. To achieve that, sync() may empty the get area, or it may refill it, or it may do nothing. A notable exception is Visual Studio, where this operation discards the unprocessed input when called with a standard input stream.
Example
Demonstrates the use of input stream sync() with file input, as implemented on some platforms.
```cpp
#include <iostream>
#include <fstream>

void file_abc()
{
    std::ofstream f("test.txt");
    f << "abc\n";
}

void file_123()
{
    std::ofstream f("test.txt");
    f << "123\n";
}

int main()
{
    file_abc(); // file now contains "abc"
    std::ifstream f("test.txt");

    std::cout << "Reading from the file\n";
    char c;
    f >> c;
    std::cout << c;

    file_123(); // file now contains "123"
    f >> c;
    std::cout << c;
    f >> c;
    std::cout << c << '\n';

    f.close();
    file_abc(); // file now contains "abc"
    f.open("test.txt");

    std::cout << "Reading from the file, with sync()\n";
    f >> c;
    std::cout << c;

    file_123(); // file now contains "123"
    f.sync();
    f >> c;
    std::cout << c;
    f >> c;
    std::cout << c << '\n';
}
```
Possible output:
Reading from the file abc Reading from the file, with sync() a23 | http://en.cppreference.com/w/cpp/io/basic_istream/sync | CC-MAIN-2017-17 | refinedweb | 307 | 64.41 |
Describes a resource, such as a buffer or texture.
#include <reshade_api_resource.hpp>
Describes a resource, such as a buffer or texture.
Used when resource type is a buffer.
If this is a 3D texture, depth of the texture (in texels), otherwise number of array layers.
Flags that describe additional parameters.
Data format of each texel in the texture.
Memory heap the resource allocation is placed in.
If this is a 2D or 3D texture, height of the texture (in texels), otherwise 1.
Maximum number of mipmap levels in the texture, including the base level, so at least 1. Can also be zero in case the exact number of mipmap levels is unknown.
The number of samples per texel. Set to a value higher than 1 for multisampling.
Size of the buffer (in bytes).
Structure stride for structured buffers (in bytes), otherwise zero.
Used when resource type is a texture or surface.
Type of the resource.
Flags that specify how this resource may be used.
Width of the texture (in texels). | https://crosire.github.io/reshade-docs/structreshade_1_1api_1_1resource__desc.html | CC-MAIN-2022-27 | refinedweb | 171 | 70.39 |
DatagramSocket and DatagramPacket are classes we use in Java to work with the UDP protocol. The classes are found in the java.net package. UDP is an unreliable protocol, as it does not perform a point-to-point handshake. It relies on IP to transmit data successfully.
UDP is one of the transport protocols used to transfer data. Working with UDP has a pro as well as a con. The pro is that it does not require a handshake with the other party before transmitting the data; the con is that the data may get lost along the way. Also, UDP does not use Socket and ServerSocket; instead it uses DatagramSocket and DatagramPacket.
The communication steps are as follows:
For Server Class:
1. Create a DatagramSocket which will operate on a particular port.
2. Now create a byte array which will contain the data received or to be sent.
3. Now a DatagramPacket is needed, which will use the byte array as the data to create a packet. The point to be noticed is that when receiving data you only need to specify the byte array, but while sending you must specify the InetAddress and port to which the packet is to be transmitted. This is because the UDP packet relies on IP to complete its transmission, as there is no point-to-point connection operating.
4. Send or receive the data.
5. In case you are receiving data, it will be in the byte array associated with the packet, so you will need to retrieve it from that DatagramPacket using the method getData(), which will return a byte array that in turn can be converted into a String easily.
So these five steps take you through the whole concept of data transmission using UDP with Java networking concepts. Now let us look at the program, which depicts these five steps in the form of code.
The logic of the program is to set up communication between the client and server; the client will send a message to the server and the server will send back an acknowledgement.

Note: in actual UDP communication there is no acknowledgement; in this program it is just a device to help understand the concept.
```java
package networking;

import java.io.*;
import java.net.*;

public class DGServer {
    public static void main(String[] args) throws Exception {
        DatagramSocket ds = new DatagramSocket(5000);
        System.out.println("Datagram Socket initialized");
        byte[] bb = new byte[128];
        DatagramPacket dp = new DatagramPacket(bb, bb.length);
        System.out.println("Packet created");
        System.out.println("Waiting for data from client");
        ds.receive(dp);
        System.out.println("Data Received");
        String s = new String(dp.getData());
        System.out.println(s);
        byte[] b = ("Data Acknowledged:" + s).getBytes();
        System.out.println("Sending back acknowledge to client");
        ds.send(new DatagramPacket(b, b.length, dp.getAddress(), dp.getPort()));
        System.out.println("Acknowledgement sent");
    }
}
```
This is the datagram server class, which includes many print statements to show the program's progress at each step. We are using port 5000 to communicate. This program creates a DatagramSocket at port 5000, and the byte array bb will hold the data received. We have bound this array to the DatagramPacket; as you will see, only the name and size of the array have been passed, however you may also specify the index from which to start. Please check the class definition for more details on the constructor.
After creating the packet we receive the data into it, waiting on the socket. DatagramSocket provides the methods send() and receive(), which take a DatagramPacket as the argument. receive() will take the packet and place the data in the byte array bound to it. To send the acknowledgement back to the client we use the same mechanism, except that now we need the address and port number on which the client is communicating, so we take them from the DatagramPacket with which the client contacted the server.

This address and port number are passed on to the packet we need to send back to the client; the rest of the procedure is the same. Just to save some precious reference variables we have used method chaining and wrapping, but we hope you are familiar enough with Java to understand these little concepts.
Here is the client class and the results. The results are shown in three steps.
```java
package networking;

import java.io.*;
import java.net.*;

public class DGClient {
    public static void main(String[] args) throws Exception {
        DatagramSocket ds = new DatagramSocket();
        byte[] b = {'h','e','l','l','o',' ','w','o','r','l','d'};
        ds.send(new DatagramPacket(b, b.length, InetAddress.getLocalHost(), 5000));
        byte[] ack = new byte[128];
        DatagramPacket dp = new DatagramPacket(ack, ack.length, InetAddress.getLocalHost(), 5000);
        ds.receive(dp);
        String s = new String(dp.getData());
        System.out.println(s);
    }
}
```
The client program creates the DatagramSocket without a port number because it will send the packet using the address and port contained in the packet. As you see in the program, both the sending and the receiving packet specify the address and port number; if you compare this to the server, you will find that when you receive a packet at the server it does not need the port or InetAddress, and that makes the difference.
Run the program.

Step 1: run the server
Datagram Socket initialized Packet created Waiting for data from client
Step 2: run the client
Data Acknowledged:hello world
Step 3: check the changes in the server output
Datagram Socket initialized Packet created Waiting for data from client Data Received hello world Sending back acknowledge to client Acknowledgement sent. | http://www.examsmyantra.com/article/63/java/working-with-udp-in-java-networking-the-datagram-socket-and-packets | CC-MAIN-2019-09 | refinedweb | 944 | 53.21 |
How to Create a Day & Night Cycle in Phaser
By Josh Morony
As I’ve stressed before, atmosphere is an extremely important element in a game that can turn a fun game into a masterpiece. One great way to add some atmosphere to your game is to add a day and night cycle (imagine how much less fun Minecraft would be with no night).
This tutorial, and the one before it, is inspired by the mobile game Alto which I think absolutely nails the atmosphere in the game. Like in Alto, the “night” in your game doesn’t necessarily need to have anything other than a visual effect on the game, but you could also quite easily modify the gameplay to tie into the time of day in the game (e.g. spawn night time monsters, change the music and so on).
Here’s what we will be creating:
There’s going to be two main things we need to implement here. We will need sprites for the sun and the moon, and we will need to tween them in our game so that the sun rises and sets and so does the moon.
As you will have hopefully noticed in real life, when the sun goes down it has a bit of an effect on the world around us… mainly that it gets pretty dark. Phaser doesn’t have a lighting system by default, and although you can create some pretty cool lighting effects with HTML5 it’s not going to be the best idea to implement something complex like this in Phaser if we want it to run well on mobile.
Instead we will be simulating lighting by tweening the colour of our background sprites. As the sun goes down, we will slowly transition their colour to go from the original, to something much darker, and back again when the sun comes back.
We will also be building on top of a previous tutorial, where we create a parallax background. It’s also not necessary to complete this tutorial first, but if you want to follow along step by step it will make it a lot easier as that is what I will be referencing.
For the most part we already have all the assets we need for this tutorial included, but we’ll be introducing two more sprites, our sun:
and moon:
if you haven’t already downloaded the source code for the tutorial, you should add both of these to your static/assets folder. These aren’t exactly the most amazing assets in the world, but they do look sunnny and moony, and I created them so you can feel free to use them in whatever projects you like.
If you missed it in the last tutorial I also created the background graphics, but they were copied almost exactly from the game Alto so please just use those graphics for this tutorial, not your own projects.
Update the Preload State
Since we have added some new sprites to our game we will need to load them in the preload state as well.
Modify Preload.js to reflect the following:
```js
class Preload extends Phaser.State {

  preload() {
    this.game.load.image('mountains-back', 'assets/mountains-back.png');
    this.game.load.image('mountains-mid1', 'assets/mountains-mid1.png');
    this.game.load.image('mountains-mid2', 'assets/mountains-mid2.png');
    this.game.load.image('sun', 'assets/sun.png');
    this.game.load.image('moon', 'assets/moon.png');
  }

  create() {
    this.game.state.start("Main");
  }

}

export default Preload;
```
Creating a Day Cycle Plugin
We’re going to do something a bit different in this tutorial. Rather than adding the code to make our day cycle happen directly to our main state, we are going to create a separate “plugin” or “object” that we can use to handle it instead.
It might look a little more complicated to do it this way, but it has the benefit of keeping our code much more organised, and allows us to easily reuse the plugin in other games as well.
IMPORTANT: The structure used in this tutorial requires the use of ES6. I will be using the Phaser game template discussed in this tutorial.
If you are using the template I’ve linked above, then you may notice that there is a file called ExampleObject.js in the objects folder, and in the main state we are importing that object which makes it available for us to use. We will be using the same structure to build our DayCycle object.
Create a new file in the objects folder called DayCycle.js and add the following code:
```js
class DayCycle {

  constructor(game, dayLength) {
    this.game = game;
    this.dayLength = dayLength;
    this.shading = false;
    this.sunSprite = false;
    this.moonSprite = false;
  }

  initSun(sprite) {
    this.sunSprite = sprite;
    this.sunset(sprite);
  }

  initMoon(sprite) {
    this.moonSprite = sprite;
    this.moonrise(sprite);
  }

  initShading(sprites) {
    this.shading = sprites;
  }

  sunrise(sprite) {
    //TODO: Implement
  }

  sunset(sprite) {
    //TODO: Implement
  }

  moonrise(sprite) {
    //TODO: Implement
  }

  moonset(sprite) {
    //TODO: Implement
  }

  tweenTint(spriteToTween, startColor, endColor, duration) {
    let colorBlend = {step: 0};

    this.game.add.tween(colorBlend).to({step: 100}, duration, Phaser.Easing.Default, false)
      .onUpdateCallback(() => {
        spriteToTween.tint = Phaser.Color.interpolateColor(startColor, endColor, 100, colorBlend.step, 1);
      })
      .start();
  }

}

export default DayCycle;
```
We have a partially finished class here, so let’s talk about the stuff we have so far and then we will work through implementing the more interesting functions that are missing.
First, our constructor handles setting up a few member variables that our class will use. We pass in a reference to our game object so that we can interact with it, and we also pass in a dayLength so that we can control how long the day and night cycle should be (from comically quick, to realistically slow).

The initSun, initMoon and initShading functions allow us to pass in the sprites we want to use for various parts of this process, and in the case of initShading the sprites we pass in will actually be objects that contain both the sprite, and the "from" and "to" colours for each sprite. The "from" colour is the tint that the sprite starts with, and the "to" colour is the tint the sprite will transition to at night time. The tinting is handled by the tweenTint function, which animates the sprite from one colour to another over the course of the day.
Now let’s implement the functions that we’ve left blank.
Modify the sunrise function to reflect the following:
```js
sunrise(sprite) {

  sprite.position.x = this.game.width - (this.game.width / 4);

  this.sunTween = this.game.add.tween(sprite).to({ y: -250 }, this.dayLength, null, true);
  this.sunTween.onComplete.add(this.sunset, this);

  if (this.shading) {
    this.shading.forEach((sprite) => {
      this.tweenTint(sprite.sprite, sprite.from, sprite.to, this.dayLength);
    });
  }

}
```
This function will be passed the sprite for our sun. It then sets the x position of the sun to be a little bit off from the right side of the game (one quarter of the game width from the right side of the screen, to be exact). Then we add a "tween" which will animate the sun sprite's y position all the way off the top of the screen; the total time for this tween is set to dayLength, which is the length of our day.

The important thing here is that we add an onComplete listener to the tween, so that when the sun has finished rising we can then trigger the sunset function. In that function we will do the same and trigger the sunrise function once that finishes. This creates an endless loop of sunrises and sunsets.
Let’s take a look at the
sunset function now.
Modify the
sunsetfunction to reflect the following:
```js
sunset(sprite) {

  sprite.position.x = 50;

  this.sunTween = this.game.add.tween(sprite).to({ y: this.game.world.height }, this.dayLength, null, true);
  this.sunTween.onComplete.add(this.sunrise, this);

  if (this.shading) {
    this.shading.forEach((sprite) => {
      this.tweenTint(sprite.sprite, sprite.to, sprite.from, this.dayLength);
    });
  }

}
```
As you can probably tell, it’s basically the exact same thing just reversed. This time we put the sun sprite on the left side of the screen, and we are animating it to the bottom of the screen instead.
Now that we’ve done that, we can quite easily do the same for the moon.
Modify the moonrise and moonset functions to reflect the following:
```js
moonrise(sprite) {
  sprite.position.x = this.game.width - (this.game.width / 4);
  this.moonTween = this.game.add.tween(sprite).to({ y: -350 }, this.dayLength, null, true);
  this.moonTween.onComplete.add(this.moonset, this);
}

moonset(sprite) {
  sprite.position.x = 50;
  this.moonTween = this.game.add.tween(sprite).to({ y: this.game.world.height }, this.dayLength, null, true);
  this.moonTween.onComplete.add(this.moonrise, this);
}
```
The only difference here is that the positions are reversed, and that we are not calling the tweenTint function, because the sunrise and sunset functions already handle that.
That's it! Our day cycle plugin is now officially created; all we have to do now is use it in our game.
Use the DayCycle Plugin
We’ve done most of the heavy lifting already, so implementing the DayCycle into our main state is going to be pretty easy.
Modify Main.js to reflect the following:
```js
import DayCycle from 'objects/DayCycle';

class Main extends Phaser.State {

  create() {

    this.game.physics.startSystem(Phaser.Physics.ARCADE);
    this.game.stage.backgroundColor = '#000';

    this.dayCycle = new DayCycle(this.game, 5000);

    let bgBitMap = this.game.add.bitmapData(this.game.width, this.game.height);

    bgBitMap.ctx.rect(0, 0, this.game.width, this.game.height);
    bgBitMap.ctx.fillStyle = '#b2ddc8';
    bgBitMap.ctx.fill();

    this.backgroundSprite = this.game.add.sprite(0, 0, bgBitMap);

    this.sunSprite = this.game.add.sprite(50, -250, 'sun');
    this.moonSprite = this.game.add.sprite(this.game.width - (this.game.width / 4), this.game.height + 500, 'moon');

    this.mountainsBack = this.game.add.tileSprite(0,
      this.game.height - this.game.cache.getImage('mountains-back').height,
      this.game.width,
      this.game.cache.getImage('mountains-back').height,
      'mountains-back'
    );

    this.mountainsMid1 = this.game.add.tileSprite(0,
      this.game.height - this.game.cache.getImage('mountains-mid1').height,
      this.game.width,
      this.game.cache.getImage('mountains-mid1').height,
      'mountains-mid1'
    );

    this.mountainsMid2 = this.game.add.tileSprite(0,
      this.game.height - this.game.cache.getImage('mountains-mid2').height,
      this.game.width,
      this.game.cache.getImage('mountains-mid2').height,
      'mountains-mid2'
    );

  }

  update() {
    this.mountainsBack.tilePosition.x -= 0.05;
    this.mountainsMid1.tilePosition.x -= 0.3;
    this.mountainsMid2.tilePosition.x -= 0.75;
  }

}

export default Main;
```
If you’ve followed along from the previous tutorial, you will notice there isn’t actually many changes here. First of all, we are importing our DayCycle plugin at the top of the file and then we create a new instance of it, starting with a dayLength of 5000, or 5 seconds. Obviously this is super quick, but you don’t want to wait around for 10 minutes to see if your code is working or not. You will probably want to bump this length up quite a bit.
Also notice that we are creating the background sprite with bitmap data. Since we are tweening the tint of sprites to simulate light levels, we can’t just have a normal background colour like this:
this.game.stage.backgroundColor = '#b2ddc8';
it needs to be a sprite. So we create a sprite that automatically takes up the full height and width of the game space, this saves us loading a giant sprite and it also means the game can be resized easily to any size.
We of course add our sun and moon sprites, but the most important bit of code here is the following:);
This is where we intialise the DayCycle plugin by passing it the sprites we want to use. We also create an array for the background sprites which defines what tint we want to transition from and to.
With that done, the game should now look something like this:
Summary
It wasn’t too much effort to create a pretty cool visual effect, and hopefully this tutorial has highlighted how useful it can be to separate functionality like this out into its own little class or plugin. The more modular your code is, the more easily you can reuse components, and now we could very easily drop that DayCycle object into a different game (or even a different state in the same game) and all we would have to do is initialise it in whatever state it is being used in. | https://www.joshmorony.com/how-to-create-a-day-night-cycle-in-phaser/ | CC-MAIN-2020-10 | refinedweb | 2,081 | 63.19 |
In C#, methods are used to define a block of code or statements, which we can use again and again, to simply the code readability and usablity. A method consist of one or more coding statements, which we can execute by calling methods name.
Syntax to create methods or functions in C#
<Access specifier> <Return Type> <Method Name> (Parameter List) { // Method Body //code statements }
Where,
Access Specifier = Private, Public, Protected etc, determines if method can be accessed from different class ( if public) or not (if private).
It is optional. If you don't define any, then the function will be private. by default.
Return type= return type can be of any datatype like int, string, any class, char, list etc.
return type= void, when method doesn't return anything.
Method Name= it can be anything, should be a useful name.
Parameter List = list of variables which we will pass to method and will be used inside it.
Example:
using System; public class Program { public static void Main() { //call method AddTwoNumber(a,b); } public static void AddTwoNumber() { int a=10, b=20; int c= a+b; Console.WriteLine("Sum ="+c); } }
Note: In C#, methods and functions are same things, these are two names of doing same thing in C#
Above example shows a simple method, which adds two numbers ( a+ b) and print the value (c).
In the above method, we have used
static keyword, using
static keyoword means we don't need to create it's class (Program in this example) instance and call it.
If you don't want to use it as static keyword, you need to create instance ( object ) of it's class and call it.
Note: If you are not familier with Class and objects, we will discuss it in later chapter in detail or you can read the article Object Oriented Programming (OOPS) concepts in c# with example
using System; public class Program { public static void Main() { //create class instance Program p= new Program(); //call method using class instance created above p.AddTwoNumber(); } //void is return type, means it will not return anything //public is acces type, means it can be accessed from anywhere in the program public void AddTwoNumber() { int a=10, b=20; int c= a+b; Console.WriteLine("Sum ="+c); } }
Static methods are the methods, which don't require class object to call them, we can directly invoke them without creating object.
As you can see in the last example, we created Program class object (
Program p = new Program()) and called the method using that object (
p), but in-case of static methods, we can call the method directly from another static method, without creating it's object.
Example:
using System; //this is class public class Program { //this is main method, which is also static public static void Main() { //static method AddTwoNumber(); } //we have used static keyword, so we don't need to class it's class object public static void AddTwoNumber() { int a=10, b=20; int c= a+b; Console.WriteLine("Sum ="+c); } }
Above program lines has been explained more using comments.
We can pass some datatype to methods in C#, which will be used inside the methods code. The list of parameters can be added in the parentheses followed by the method name. Each parameter has a name and a type. The syntax of parameter is similar to the declaration of the local variables.
Example of method with paramters
public void MethodWithParameters(int x, string y) { //some code }
As you can see in the above example, we are passing parameters
int x and
string y, to the method.
Parameters can be passed using the three mechanisms:
Passing parameters by value is the default type for passing parameters to the method. The value parameter is defined by specifying the data type followed by the variable name.
The values in the variable are passed when the method is invoked, here is the simple example
using System; public class Program { //method is returning int value public int AddOne(int i) { i=i + 1; return i; } public static void Main( ) { //creat program class object, p Program d = new Program(); int no = 5; //call method using p object and passing parameter, get update value no=d.AddOne(no); Console.WriteLine("The new number is:" +no); } }
In the above method
AddOne(int i) , we are passing value of no as 5 and getting it's value as 6, so return type= int.
Instead of passing value, we will be passing reference, that is, memory location of the variable. MethodByReference( ref int id, ref int age ) { }
Take a look at an example
using System; public class Program { public static void Main(string[] args) { int number = 20; //no need to return a value, as reference is updated AddFive(ref number); Console.WriteLine(number); } //reference is passed to this method public static void AddFive(ref int number) { number = number + 5; } }
As you can see, we've added the
ref keyword to the function declaration as well as to the call function. If you run the program now, you will see that the value of number has now changed, once we return from the function call.
A return statement is used for returning a value from the method. A return statement can be used to return only a single value. The output parameter is used to overcome this problem. The out modifier works pretty much like the ref modifier..
Example:
using System; public class Program { public static void OutSample( out int i ) { i=20; } public static void Main() { int no; OutSample( out no ); Console.WriteLine("Value for the number is: " +no ); } } | https://qawithexperts.com/tutorial/c-sharp/19/c-sharp-methods | CC-MAIN-2021-39 | refinedweb | 932 | 56.29 |
Introduction
Lists in Python is a broad concept and numerous problems can be framed on the same. Since it’s one of the most widely used Python built-in datatypes.
It is important for beginners to become familiar with the topic to the core. This can be achieved by solving a variety of problems based on Lists. Solving problems widens the logical thinking power not limited to [programming but also in solving life problems.
In the previous part, we look upon six highly popular Lists Programs explained in the best way for beginners, in this part, we will continue our list and look upon more sets of Lists problems. First, let’s revisit the Basics of Python Lists again.
Python List Basics
Lists are one of Python’s built-in datatypes used for storing data items. Lists are also known as collection type in Python. One can identify a Python List by square brackets [ ]. Lists are used to store the data items where each data item is separated by a comma (,). A Python List can have data items of any data type, ranging from string, int to boolean.
One of the primary reasons why lists are being widely used is that Lists are mutable. Being mutable means, any data item of a List can be replaced by any other data item. This makes Lists different from Tuples, which are also used for storing data items but are immutable.
For example,
list1 = ['Hello', 1, 4.63, True, "World", 2.0, "False"]
Understanding Indexing and Slicing a List is very necessary as it helps building and implementing logic rapidly and in an impromptu manner. Let’s talk about Indexing first.
Indexing in a List are of two types:
1. Positive Indexing – Here the indexing starts from 0, moving from left to right.
2. Negative Indexing – In this, the indexing starts from right to left and the rightmost element has an index value of -1.
Taking the above example as reference, Positive and Negative Indexing will look like:
Now as we know about Indices with respect to Lists, we can perform Slicing in lists effortlessly.
For example, taking our List named list1 which we defined above,
list1[1 : 3] gives [1, 4.63] list1[ : ] gives ["Hello", 1, 4.63, True, "World", 2.0, "False"] list1[ - 1: - 4 : -1] gives ["False", 2.0, "World"]
Now, let’s see how we can replace a data item with another data item in a list. FOr example, our list list1,
list1[3] = 2.47 print(list1) gives ['Hello', 1, 4.63, 2.47, "World", 2.0, "False"]
Now, let’s look at the Python Lists Programs for Absolute Beginners, continued from Part 1.
Python List Programs
1. Program to Print Reverse List
a = [4, 3, 2, 76, 32, 1, 23] print(a[ : :-1]) ''' Expected Output: [23, 1, 32, 76, 2, 3, 4] '''
Explanation: Given a list having certain elements in it. To print the reverse of this given list, we can use the concept of slicing, as discussed above.
In the slicing, we will include all elements and give a step size of -1. When we give a negative step size, the list traverse in the opposite direction, from right to left. Thus here, a step size of -1 means it will traverse from right to left with a step size of 1 and will include all the elements up to the left end.
2. Program to Check if List is Empty
b = [1, 65, 23, 'Hello', 3.23] if len(b) == 0: print("Given List is Empty") else: print("List is not empty") ''' Expected Output: List is not empty '''
Explanation: Given a list b with a certain number of elements. We used the condition statement here to differentiate an empty list. If the length of a list is 0, this means that the given list has no elements in it. The length of a collection item can be found using the len() function. Thus, if the length of the list is 0, print that the given list is empty else print the given list is not empty.
3. Program to Truncate the List
# Method 1: c = [True, 42, 9.23, 12, 22] c.clear() print(c) ''' Expected Output: [] '''
Explanation: Given a list c with a certain number of elements. In this method, we used the .clear() method of Python Lists. This would truncate the list. Thus, if we print our list again, it will give an empty list.
#Method 2: c = [True, 42, 9.23, 12, 22] c *= 0 print(c) ''' Expected Output: [] '''
Explanation: Given a list c with a certain number of elements. In this method, we multiplied the list by 0. Conventionally, when a list is multiplied by a number, it concatenates the list by the given number of times. Since we are multiplying by 0, this means we are concatenating 0 times and eventually would get zero.
#Method 3: c = [True, 42, 9.23, 12, 22] c = [] print(c) ''' Expected Output: [] '''
Explanation: Given a list c with a certain number of elements. In this method, we assigned an empty list to the same variable c. Thus, printing the variable c would give the value of c with the most recent assignment, i.e. an empty list.
4. Program to Find Length of a List
# Method 1 d = [4, 3.12, False, "Python", 66] print(len(d)) ''' Expected Output: 5 '''
Explanation: Given a list d with a certain number of elements. The length of a collection in Python can be found using the len() function. Thus, printing len(d) will give the number of elements inside the list d.
# Method 2 d = [4, 3.12, False, "Python", 66] count = 0 for i in d: count += 1 print(count) ''' Expected Output: 5 '''
Explanation: Given a list d with a certain number of elements. In this method, we used a for loop with a variable count that would increment every time the variable i iterates over elements of list b. Thus the value of variable count would increment up to the number of elements present inside the list d.
5. Program to Find 2nd Smallest Element of a List
e = [14, 57, 2, 43, 29] e.sort() print(e[1]) ''' Expected Output: 14 '''
Explanation: Given a list e with a certain number of elements. To find the 2nd smallest element from a list, here we sorted the list e using the .sort() method. Thus, in a sorted list, an element at index position 1 is the 2nd smallest element of that list.
6. Program to Get Combination of Pair of Elements
import itertools f = [2, 'Hello', 'World', 4.21] print(list(itertools.permutations(f, 2))) ''' Expected Output: [(2, 'Hello'), (2, 'World'), (2, 4.21), ('Hello', 2), ('Hello', 'World'), ('Hello', 4.21), ('World', 2), ('World', 'Hello'), ('World', 4.21), (4.21, 2), (4.21, 'Hello'), (4.21, 'World')] '''
Explanation: Given a list f with a certain number of elements. Here we used the .permutations() method of the library itertools. The .permutations() takes 2 arguments, first the list f and the second, the number of elements we want in each permutation. This will give permutations of elements in a pair.
Conclusion
Thus, understanding Lists in Python is extremely important if one wants to build a career that completely depends upon the usage of Python. Understanding these basic problems would help the beginner in acing the interviews as well.
But the problems in this part and earlier part are not just limited to beginners only. An intermediate or expert can always come back and take a quick overview of these problems as these are the most extremely used and asked Python Lists programs.
In a few of the problems, I have shown more than one method of solving the problem. I would encourage the readers to find their own way of solving these problems and try on different use cases.
As I said in the earlier part as well, these problems can be solved in more best possible ways, but the main objective of this article is to let beginners understand the concepts in the easiest and prompt way. This would help them building logic and framing their own solutions. Solving these problems would also encourage beginners to create their own problems and working towards them._1<<
One Comment
More popular method to check if list empty
if not b:
print(’empty’)
else:
print(‘not empty’)
or even more popular is to run code when list not empty
if b:
print(‘not empty’)
else:
print(’empty’) | https://www.analyticsvidhya.com/blog/2021/05/python-list-programs-for-absolute-beginners-part-ii/ | CC-MAIN-2021-25 | refinedweb | 1,417 | 73.78 |
Better JS Cases with Sum Types
Improving Semantics and Correctness · Illustrated via Redux State
Abstract
JavaScript has built-in atomic scalar types, such as numbers and booleans. It can also represent composite product types via arrays or record types via objects. However, it lacks an immediate solution for disjoint sum types. Sum types (a.k.a. tagged or discriminated unions) are a common tool in other languages; they allow a value to be one of a set of explicit cases, with easy and safe identification and data extraction. This article uses Redux state design to demonstrate common difficulties JS developers face when modeling a domain, shows how sum types mitigate those difficulties, and reviews a few libraries aiming to port sum types into the language.
Background: Redux State
Redux.js is “a predictable state container” inspired by the Elm Architecture. A developer can represent the canonical stateful data of their application in whatever form they wish, from a sophisticated Immutable.js `Record` to a straightforward POJO (Plain Old JavaScript Object). The state for an app which fetches and displays a list of adorable kittens might be as simple as:
const initialState = {
kittens: [] // no kittens fetched yet 😿
}
Developers specify the state logic of a Redux-based app in a “reducer” function with the signature `(oldState, action) -> newState`.
function reducer (oldState, action) {
if (action.type === 'GOT_KITTENS') {
return { kittens: action.kittens } // replaces the kittens
}
return oldState // default, do nothing
}
Given an “action” object representing an application event, the reducer determines how to produce the new state.
newState = reducer(initialState, {
type: 'GOT_KITTENS',
kittens: ['Snuggles', 'Mittens']
})

console.log(newState) // { kittens: ['Snuggles', 'Mittens'] } 😺
User interface code (e.g. React components) can subsequently read the state, creating a list of kittens.
const currentState = reduxStore.getState()
const listItems = currentState.kittens.map(kitten =>
<li>{ kitten.name }</li>
)
Aside: if you have never used JSX before, the above might appear unsettling. This domain-specific language compiles to vanilla JS:
const currentState = reduxStore.getState()
const listItems = currentState.kittens.map(kitten =>
React.createElement('li', null, kitten.name)
)
Motivating Example: Tackling Complexity
Initially, this direct representation of state works as expected. The application starts with no kittens. Mapping over the empty array `state.kittens` produces no list items, and our UI shows nothing. Later, when kitten data is fetched and the state is updated, our list will pop into view (assuming the rest of our AJAX / Redux / React code is wired correctly).

In practice, however, the user may be confused upon being shown a blank page. We really ought to let them know that the kittens are on their way:
const kittens = reduxStore.getState().kittens

if (!kittens.length) { // kittens are loading?
  return <p>Calling the kittens!</p>
} else { // kittens received, show them
  return (
    <ul>{
      kittens.map(kitten =>
        <li>{ kitten.name }</li>
      )
    }</ul>
  )
}
Now when the user first loads the page, they see a lovely paragraph informing them of impending kittens (how exciting!). Later, the paragraph is replaced with a list of kitten names.
Signs of Trouble
One day, however, a user submits a help ticket. “When I visit the kittens page, it says they are loading forever.” What went wrong?
Well, on that day, the kittens data from the server was empty. That is, `[]` is a valid value representing the kittens in our database. We’ve hijacked the empty array to mean that the kittens are still loading, but that’s not necessarily true. We started with an initial state of 0 kittens:
{
kittens: [] // intent: not yet fetched
}
And then ended up with a final state of 0 kittens:
{
kittens: [] // intent: fetched (empty list from db)
}
These two states are indistinguishable, though we intended them to be distinct. Our mistake was conflating data (the array) with metadata (information regarding the array).
Falling Down the Rabbit Hole
As intrepid JS developers, we may next try distinguishing status based not on `length`, but on type. What if we use `null` to indicate unloaded kitties?
const initialState = {
kittens: null
}
Unbidden, a wild error appears.
Error: cannot read property 'length' of null
In a way, we got lucky this time — the code failed noisily and immediately. The problem, of course, is that `null` values cannot have properties, so our old UI code checking `kittens.length` is broken. The fix isn’t especially difficult:
const kittens = reduxStore.getState().kittens

if (!kittens) { // kittens are loading
  return <p>Calling the kittens!</p>
} else if (!kittens.length) { // kittens are loaded but empty
  return <p>Sorry, no kittens available.</p>
} else { // kittens are loaded and can be shown
  return (
    <ul>{
      kittens.map(kitten =>
        <li>{ kitten.name }</li>
      )
    }</ul>
  )
}
That we have annotated the meaning of each case above should be considered a code smell; it reveals that our solution is not very semantic. Regardless, the unit tests pass, the app is deployed, and all seems well for a few days. Until…
When Zombies Attack
Error: cannot read property 'join' of null
Now what? Didn’t we solve this already? Ah, but this error is coming from another component:
const kittens = reduxStore.getState().kittens

return <p>{ 'Known kittens include: ' + kittens.join(' & ') }</p>
Oops. Someone forgot, or was never informed, that `kittens` might sometimes be `null`. The oversight escaped attention for a while because nobody ever ran this part of the application before the kittens were fetched. It was a trap waiting for the right edge case to come along.
Over time, multiple failures like this one are added and/or discovered. As long as developers think of `state.kittens` semantically as a collection of kittens, they keep trying to use it as an array — even if sometimes it isn’t one.
Out of the Frying Pan…
Fast-forward, and as product requirements have multiplied, so have the variety of ad-hoc solutions attempting to wrangle our state. The user stories now specify that kittens need to have four visible representations: unloaded, loading, fetched (with data), and failed (with an error). Across the app, similar needs are being dealt with for `state.puppies` and `state.bunnies`. Some leaves of the state tree use the strings `unloaded` and `null`, but that breaks some falsy checks: `if (!state.bunnies)` is now a mistake. The `state.puppies` leaf is being replaced with an `Error` object in the case of HTTP failure, forcing the verbose and unreliable check `if (state.puppies instanceof Error)` in the consuming code. Meanwhile, `state.kittens` is an object with a `.collection` array and an `.isError` boolean, forcing a different handling pattern — `if (!state.kittens.isError) return state.kittens.collection[0]`. Or maybe someone decided that some value might be `false` or `null`, and each stands for something different… good luck keeping them straight. With a deluge of ad-hoc reinvented cases to keep track of, developers are frequently forgetting to handle some of them, especially as the representations are inconsistent and inexpressive.
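One escape from this ad-hoc sprawl is to give every remote resource a single, consistent tagged shape. The sketch below is hand-rolled for illustration; the tag strings and constructor names are our inventions, not part of Redux or any library:

```javascript
// A hand-rolled "remote data" shape: every async leaf of state is always
// an object with a `status` tag, never a bare array, null, or Error.
const NotAsked = { status: 'NOT_ASKED' }
const Loading = { status: 'LOADING' }
const Success = data => ({ status: 'SUCCESS', data })
const Failure = error => ({ status: 'FAILURE', error })

// One consuming pattern for every resource: branch on the tag,
// and only touch `.data` inside the SUCCESS branch.
function describe (remote) {
  switch (remote.status) {
    case 'NOT_ASKED': return 'Not requested yet'
    case 'LOADING': return 'Calling the kittens!'
    case 'FAILURE': return 'Error: ' + remote.error
    case 'SUCCESS': return remote.data.length + ' kitten(s)'
  }
}

console.log(describe(Loading))                          // Calling the kittens!
console.log(describe(Success(['Snuggles', 'Mittens']))) // 2 kitten(s)
```

Because the tag is always present, consumers never need falsy checks or `instanceof Error`; `state.kittens`, `state.puppies`, and `state.bunnies` could all share this one pattern.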
“Many languages have trouble expressing data with weird shapes. They give you a small set of built-in types, and you have to represent everything with them. So you often find yourself using null or booleans or strings to encode details in a way that is quite error prone.” — Elm Language, Union Types Documentation
JavaScript developers often consider tackling these burdens to be a normal part of writing code. There is a common assumption that one simply must be more careful not to make errors like those detailed above. Losing type safety is the tradeoff one accepts in return for JS’s flexibility, right? We see examples of this all the time, especially when searching for data:
- `Array.prototype.indexOf` returns `-1` for “not found.” Not exactly clear.
- Worse, `Array.prototype.find` returns `undefined` for “not found;” how can we distinguish between “not found” and “found `undefined`?”
- Sequelize’s `findById` method returns a promise for `null` when the database row is not found. It’s important to remember that might be the case, or else `.then(user => user.name)` could one day throw an error.
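As a preview of the fix, a result that carries its own tag makes “found `undefined`” and “not found” trivially distinguishable. This wrapper is a sketch for illustration only; `safeFind` and its tag names are not part of any standard API:

```javascript
// Wrap the "search an array" pattern so the result says which case occurred.
const Found = value => ({ tag: 'FOUND', value })
const NotFound = { tag: 'NOT_FOUND' }

function safeFind (arr, predicate) {
  for (const item of arr) {
    if (predicate(item)) return Found(item)
  }
  return NotFound
}

// A stored `undefined` is no longer ambiguous:
console.log(safeFind([1, undefined, 3], x => x === undefined).tag) // FOUND
console.log(safeFind([], () => true).tag)                          // NOT_FOUND
```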
Programming paradigms that put the onus on human beings to be perfect inevitably result in confusion and mistakes. There must be a better way.
Sum Types, Defined in (Informal) Theory
The problems examined so far can be boiled down to two essential categories:
- No consistent and expressive way to represent separate data status cases. “Not found” ought to be clearly different from “found, with this data.”
- A high likelihood of error when consuming data that could be in one of several forms (i.e. polymorphic). Humans focus on the ordinary case and use variables assuming they will have certain properties or methods, forgetting that the actual value may sometimes be an exceptional case.
Sum types can improve both of the above. To understand sums, it helps to first examine scalar and product types.
Scalars
A scalar type contains atomic values: single items which cannot be decomposed into parts. Examples of scalars in JavaScript include built-in types like Boolean, Number, and Null. For instance, the number `42` in JavaScript is a single value inhabiting the Number scalar.
A scalar type has an intrinsic size (a.k.a. cardinality), which is just the number of values inhabiting that type.
- The Boolean type has a size of 2 as it contains two values, `true` & `false`.
- The Number type is of “infinite” size (if we ignore the limits of IEEE 754).
- The Null type has a size of 1 as it contains only one value, `null`.
Products
Product types are composite — they consist of multiple (possibly different, possibly identical) types, grouped together in an “and” fashion. For example, a type whose values consist of two Booleans (one Bool and one Bool) is a product. In JavaScript, we can represent custom product types using arrays as tuples, with position serving to distinguish each member type.
// FoodFacts are tuples of 2 bools: [isYummy, isHealthy]

const saladFacts = [true, true]
const burgerFacts = [true, false]
const vitaminFacts = [false, true]
const chalkFacts = [false, false]
Why “product?” Because the size of this new composite `FoodFacts` type is determined by multiplying the sizes of its constituent types. We can easily see above that there are only four different values in the type, obtained by multiplying 2 options for the first Bool × 2 options for the second Bool.
Positional notation (e.g. `chalkFacts[0]`) is not very clear with respect to meaning. A more expressive way to represent multiple values grouped together in JS is with objects, which can label each member value. Technically the labels make objects record types rather than products, but we will overlook that in this article, in the interest of making it easier to write examples:
// Person = { name: String, age: Number, employed: Boolean }

const mark = { name: 'Mark', age: 67, employed: false }
const jin = { name: 'Jin', age: 34, employed: true }
const ford = { name: 'Ford', age: 19, employed: true }
const sian = { name: 'Sian', age: 24, employed: false }
...
The size of the `Person` type, ignoring the labels, is Infinity × Infinity × 2. That is, the infinite number of Strings, times the infinite number of Numbers, times the two possible Booleans. Clearly, we will not be able to list every possible value in the `Person` type.
Sums
If scalar type size is an intrinsic number, and the size of a product type is the product of its constituent type sizes, you will probably not be surprised to hear that the size of a sum type is the sum of its constituent type sizes. Sum types are composite like product types, but in an “or” fashion; a single value in the type is only ever one of the constituent types, not a grouping of them all.
// FinitePrimitive = Boolean | Null | Undefined

const finitePrimitive1 = true
const finitePrimitive2 = false
const finitePrimitive3 = null
const finitePrimitive4 = undefined
The `FinitePrimitive` type we define above has a size of 2 + 1 + 1 = 4. That is, both of the Booleans, plus the single Null, plus the single Undefined. We can easily list out all four values, which we have done above. Notice, a value in this type is only one of the constituent types.
As another example, consider a sum type composed of some larger types:
// InfinitePrimitive = String | Number | Symbol

const infinitePrimitive1 = 'hello'
const infinitePrimitive2 = 'goodbye'
const infinitePrimitive3 = 42
const infinitePrimitive4 = Symbol('hmm')
const infinitePrimitive5 = 314159
const infinitePrimitive6 = 'ok we get it, there are a lot of these'
...
The `InfinitePrimitive` type has a size of Infinity + Infinity + Infinity. It can be any one of the infinite strings, or one of the infinite numbers, or one of the infinite symbols.
A Sum of Products
Products and sums can consist of scalars, but we didn’t define them as needing to consist of scalar types — on the contrary, the constituent types may themselves be products and/or sums. Here is a (non-JavaScript) sum of both scalar and product types:
scalar type Ghost (contains one value, `ghost`)

product type Character { (contains 2 × 2 = 4 possible values)
  afraidOfNoGhosts: Boolean,
  ghostbuster: Boolean
}

sum type Entity = Ghost | Character
How many values does the `Entity` type have? Well, it’s 1 possible Ghost value + (2 × 2) possible Character values = 5 different Entity values. Let’s enumerate them:
entity1 = Ghost ghost
entity2 = Character { afraidOfNoGhosts: true, ghostbuster: true }
entity3 = Character { afraidOfNoGhosts: false, ghostbuster: true }
entity4 = Character { afraidOfNoGhosts: true, ghostbuster: false }
entity5 = Character { afraidOfNoGhosts: false, ghostbuster: false }
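The same enumeration can be reproduced in JavaScript by encoding the tag as a field on each value. The encoding below is one illustrative sketch among many possible ones:

```javascript
// One possible JS encoding of the Entity sum type.
const Ghost = { tag: 'Ghost' }
const Character = (afraidOfNoGhosts, ghostbuster) =>
  ({ tag: 'Character', afraidOfNoGhosts, ghostbuster })

// Enumerate every Entity value: 1 Ghost + (2 × 2) Characters = 5.
const bools = [true, false]
const entities = [Ghost].concat(
  bools.flatMap(a => bools.map(g => Character(a, g)))
)
console.log(entities.length) // 5
```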
Tag, You’re It
We’ve almost finished defining sum types, but we’re missing a crucial characteristic which distinguishes them from the (very similar) union type. Suppose we define a sum type for name parts as being either a first name or a last name, where each is a string:
// NamePart can be a first name string OR a last name string
sum type NamePart = String | String

namePart1 = 'Wilson' // is this a first or last name?
namePart2 = 'Ashley' // is this a first or last name?
When we encounter a value in the wild, the fact that we know its type is String isn’t quite enough to know whether it was supposed to be from the first choice of `NamePart` strings, or the second choice.
For that, we need to somehow label the value — with a “tag.” The value of interest will not consist of just the string on its own, but also be accompanied by a symbolic identifier that allows the developer to know unambiguously which of the constituent types it belongs to:
namePart1 = <LastName 'Wilson'>
namePart2 = <FirstName 'Ashley'>
Ah, now we know exactly what roles `'Ashley'` and `'Wilson'` each play.
“This overall operation is called disjoint union. Basically, it can be summarized as ‘a union, but each element remembers what set it came from’.” — Waleed Khan, Union vs Sum Types
If you think about it, the combination of a tag and some data is itself a product, meaning we can reframe our example sum type as a sum of products, where every product includes a tag:
sum type NamePart = (FirstName & String) | (LastName & String)
Since each tag is only one value, it doesn’t affect the size of the sum type. `NamePart` now has a size of (1 × Infinity) + (1 × Infinity), equivalent to its earlier size of Infinity + Infinity. Tags are therefore unit types.
With tags, we can now discriminate between otherwise identical values of a given type; the tag is a minimal form of metadata. There is a preponderance of synonyms for the concept of sum types: discriminated unions, tagged unions, disjoint unions, choice types, variants, etc.
Sum types and product types, both being composite, are also known as algebraic data types.
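In plain JavaScript, the tag can be nothing more than a field on an object. The constructors below are illustrative only (no library involved):

```javascript
// Tagged NamePart values: the string plus a label saying which case it is.
const FirstName = value => ({ tag: 'FirstName', value })
const LastName = value => ({ tag: 'LastName', value })

const part1 = LastName('Wilson')
const part2 = FirstName('Ashley')

// Consumers can now discriminate between otherwise identical strings.
function label (part) {
  return part.tag === 'FirstName'
    ? 'first name: ' + part.value
    : 'last name: ' + part.value
}
console.log(label(part2)) // first name: Ashley
```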
Sum Types, Defined in Practice
We now have a loose theoretical understanding of what a sum type is, but what does one look like in actual code? For that we can turn to myriad typed languages which implement sum types as a native feature. Though they have been supported at least as far back as Algol68, using sum types as building blocks is especially important to many functional programming languages including Haskell, ReasonML, F#, Elm, and Rust.
Let’s compare and contrast an identical sum type across a couple different languages. The stock example, used in resources like Exploring ReasonML and Functional Languages and Learn You a Haskell for Great Good, is `Shape`:
- A
Shapeis a sum type, consisting of either a
Circleor a
Rectangle.
Circletags a product type, consisting of a
Point(the center location) and a
Float(the radius).
Rectanglealso tags a product type, consisting of a
Point(one corner location) and another
Point(the other corner).
- A
Pointis a product of two
Floats (x and y coordinates).
- A
Floatis a scalar type; we will use IEEE 754 double precision floats as supplied by each language.
Shapes in ReasonML
ReasonML is an impure functional, eagerly evaluated, strongly typed language with type inference. It is essentially a JavaScript-like syntax for OCaml, which can compile to native code, JavaScript or even back to OCaml. It’s a great way for a JS native to start moving towards coding in a typed and more functional style. Sum types are referred to in ReasonML as variants.
“Behold, the crown jewel of Reason data structures!” —ReasonML Language, Variant Documentation
It should be emphasized that we are defining shape, Circle, Rectangle, and point here. The only built-in datatype we used was float. The sum type is shape, which can be a Circle or a Rectangle. We also construct a couple of example shape instances using each member tag.
Perhaps confusingly, Circle can be considered a tag, a constructor function, and a type; it has properties of all three. It is used to identify a value as being from one of the shape types; that makes it a tag. It is used to create values (like circ1) in the language; that makes it a constructor. And those values consist of two other types, point and float, grouped together; that is a product type. We can either use Circle as the name of that product of two types, or, considering the implementation details, we could say that Circle is a value of the unit type within a product of three types. Whew!
On a side note, you might have noticed at the bottom of the code snippet that ReasonML ends up representing product types (like Point) as simple arrays in JavaScript, exactly as we showed in our earlier examination of products.
Shapes in OCaml
ReasonML can not only compile to JS but to OCaml, making it easy to see how we would define the same entities in that language.
OCaml’s type definition syntax, freed from the burden of seeming familiar to JavaScript developers, hews much closer to the purely theoretical concept of sum and product types — even to the point of using *. The subsequent construction of instances is quite noisy with parens, on the other hand, making it clear why OCaml is sometimes called “LISP with types”:
Shapes in Haskell
Haskell is a purely functional, lazily evaluated, strongly typed language with type inference. Haskell has a heavy theoretical focus, though it is also used for practical purposes; it is very powerful yet exceptionally terse.
Again, the only built-in datatype we used was Float. Using :type in the GHCi REPL dutifully reports that yes, each example value has the type Shape. Had we added deriving (Show) to Shape, we could also log out circ1 in the REPL, which would display Circle (Point 2.0 3.0) 6.5. In short, our values “know” what types they come from and which tags they have.
Shapes in Rust
OK, one last example. Many of the languages that emphasize the use of sum types are dialects of, or were heavily influenced by, the ML family of languages: ReasonML / OCaml, Standard ML, F# etc. It’s difficult to find dramatically different syntaxes for sums, which makes evolutionary sense.
Rust is a nice example as it allows the developer to define product types (as structs) and sum types (as enums) using positional or record-style notation. This isn’t unique to Rust — Haskell can do the same, for instance — but Rust’s syntax demonstrates yet more names for types. Note also how constructing instances requires using the sum type as a namespace (Shape::Circle):
Don’t mistake the Rust enum for an ANSI C-style enum. The classic enumerated type is just a set of source code aliases for integers. True sum types allow the “arms” of the union to hold varied product types. Tagged unions can be implemented in C via a union of structs with an enum of tags and manual tag-checking code.
Shapes in JavaScript, Perhaps?
Having seen the same pattern encoded in a few languages, one might begin wondering how we can emulate it in JavaScript. The essence of sum types is that they consist of disjoint tagged types; the essence of tagging a type is to create a product of some identifier with the type. In JavaScript, a symbolic identifier with no other meaning should sound familiar; that’s a perfect use case for Symbol. And we already know some ways (arrays, objects) to loosely represent products in JavaScript; even if it cheats the theory a bit (again, records are technically not products), let’s use objects for clarity.
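As a sketch of what such a first-pass attempt might look like (CircleTag, RectangleTag, and the Shapes namespace are illustrative names invented here, not taken from any library):

```javascript
// Illustrative first-pass attempt at tagged shapes; not from any library.
const CircleTag = Symbol('Circle');
const RectangleTag = Symbol('Rectangle');

const Point = (x, y) => {
  if (typeof x !== 'number' || typeof y !== 'number') {
    throw new TypeError('Point expects two numbers');
  }
  return { x, y };
};

// Namespaced factory functions double as constructors and taggers.
const Shapes = {
  Circle: (center, radius) => {
    if (typeof radius !== 'number') {
      throw new TypeError('radius must be a number');
    }
    return { tag: CircleTag, center, radius };
  },
  Rectangle: (corner1, corner2) => ({ tag: RectangleTag, corner1, corner2 }),
};

const circ1 = Shapes.Circle(Point(2, 3), 6.5);
const rect1 = Shapes.Rectangle(Point(1, 1), Point(4, 5));

// Discriminating on the tag works, but nothing marks both values as shapes.
console.log(circ1.tag === CircleTag); // true
console.log(rect1.tag === CircleTag); // false
```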
Oof. This concrete albeit unsophisticated first-pass attempt is not only much more laborious than the previous examples we saw, it is also incomplete. We do get some of the benefits correct:
- Expressive construction using namespaced and named factory functions
- Runtime type checks against constructor arguments
- Ability to discriminate unambiguously between values based on tag
On the other hand, we are missing some features:
- We have failed to truly create a sum type, as our rectangle and circle instances do not know they are shapes! This makes it difficult to specify that a function should receive any shape. We could manually write if (arg.tag !== CircleTag && arg.tag !== RectangleTag) in such a function, but that is brittle; what if we added a triangle case later?
- This code pattern is itself difficult, lengthy, and error-prone to replicate in a manual, ad-hoc fashion. It is also still abusable by human developers who may try to read the center of a rectangle or corner2 of a circle.
- Since JavaScript is dynamically typed, we only get failures at runtime. If a given function attempting to use a Shape is not immediately executed, we might not be aware that it contains a latent mistake.
Some of these points can be mitigated. Making values “remember” their types and tags together can be accomplished using a bit more code, and making it easier to define types in this fashion can be accomplished by abstracting the definition process to a function. Publish it as a library with a small API, and we’d be well on our way to capturing many of the benefits of sum types.
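One hedged sketch of that abstraction step, assuming a hypothetical defineSumType helper (invented here, not a published library):

```javascript
// Hypothetical helper that defines a sum type from a list of tag names.
const defineSumType = (typeName, tagNames) => {
  const typeId = Symbol(typeName); // private "brand" shared by all members
  const type = {
    is: (value) => value != null && value.typeId === typeId,
  };
  for (const tagName of tagNames) {
    // Each tag becomes a constructor producing branded, tagged values.
    type[tagName] = (...args) => ({
      typeId,       // values now "remember" their sum type...
      tag: tagName, // ...and their tag
      values: args,
    });
  }
  return type;
};

const Shape = defineSumType('Shape', ['Circle', 'Rectangle']);
const circ = Shape.Circle({ x: 2, y: 3 }, 6.5);

console.log(Shape.is(circ));              // true
console.log(Shape.is({ tag: 'Circle' })); // false -- missing the type brand
```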
However, let’s not get ahead of ourselves. Before we see some practical real-world versions of JS sum types, we need to address the other half of the equation; not only constructing sum type instances, but consuming them.
The Payoff: Pattern Matching
By now you may be impatiently wondering what the point of all this theoretical messing about is. Recall the two problems we identified earlier: lack of a semantic way to separate cases, and human errors when using polymorphic data.
Sum types definitely solve the first problem. With the addition of a constructor tag, now every value comes with its own metadata identifier explaining just what the data’s case is.
How do we address the second problem? Ah, this is where sum types shine: pattern matching.
Pattern matching is a language-supported syntax for doing two things at once: identifying which case a value represents, and extracting the data from such a value. It lets the user of a sum type easily, declaratively, and safely consume values in the type. It prevents the user from mishandling the values by inverting control; the user no longer preemptively tries to inspect (potentially nonexistent) properties of an unknown type, but instead provides all possible type-handling cases in order to produce a result.
Let’s see pattern matching on our shapes via an example area function, with the signature Shape -> Float.
Pattern Matching in ReasonML
A ReasonML function can directly specify that it takes a shape. How do we use the shape? By matching all the tags using switch:
When we call area at the end, passing in a circle or a rectangle, the switch will match the passed-in shape to the correct case. Note that there is no JS-style fall-through — no more than one case will ever match. We should provide cases for every tag; if we omit one, the compiler will actually warn us we’ve forgotten something!
File "", line 14, characters 2-90: Warning 8: this pattern-matching is not exhaustive. Here is an example of a value that is not matched: Rectangle (_, _)
Exhaustiveness means that the function will be able to handle every value it could accept. An exhaustive function is also known as a total function. Non-total (i.e. partial) functions throw runtime errors if applied to a value they cannot handle.
Going in the other direction and attempting something nonsensical, like adding Point to our cases, stops the compiler outright:
Line 22, 6: This variant pattern is expected to have type shape
The constructor Point does not belong to type shape
“But wait, there’s more!” Not only do we get to declaratively match to a specific case, but we can then destructure the data from that case using whatever variable names we want. The second argument of Circle is a float used for a radius; in our switch we pattern match to Circle and bind the second argument as the variable radius. We can also use _ to specify a value that we want to ignore, e.g. the center point for the circle; the area of a circle only depends on the radius after all.
Each case is a function from the matched pattern (with bindings from destructuring) to a result. In a very compact way, our code above states:
- if the passed-in shape is a circle,
- consisting of <nobody cares> and a radius,
- then return pi * radius²
Similar logic for the Rectangle case lets us destructure the x and y coordinates of each Point in the rectangle, and do the proper math.
Pattern matching in ReasonML can get more advanced; a common tool is a fallback case if no other pattern matches, which can be enumerated as _ => (your return value here). See here for other capabilities.
Pattern Matching in Haskell
Haskell also has a switch syntax, called case:
Calling area circ1 etc. in the REPL gives the expected results. Again, note that adding a nonsense case will cause the compiler to fail. Even more strict than ReasonML, omitting a case can also cause the compiler to fail, provided that you opt in to such behavior with the pragma {-# OPTIONS_GHC -Wincomplete-patterns -Werror #-}.
shape.hs:25:8: warning: [-Wincomplete-patterns]
Pattern match(es) are non-exhaustive
In a case alternative: Patterns not matched: (Circle _ _)
The popular LambdaCase language extension lets us skip binding a name for the shape argument:
Haskell has other ways of defining area, including guards or simply writing each case as a separate equality:
Finally, Haskell allows a fallback case with otherwise.
Pattern Matching in Rust
Rust uses the somewhat more verb-oriented keyword match to perform pattern matching. Like when constructing, Rust also requires the developer to cite which sum type the tag comes from. When destructuring arguments from a record-style product type, it uses .. to ignore unused fields. Not shown, it can use _ to ignore positional fields and also to define a fallback case.
Again, omitting or exceeding the exhaustive cases causes the compiler to fail, informing the developer that something is wrong before any code is run.
error[E0004]: non-exhaustive patterns: `Rectangle(_, _)` not covered
--> src/main.rs:10:18
|
10 | return match shape_arg {
| ^^^^^^^^^ pattern `Rectangle(_, _)` not covered
Sum Type Feature Wishlist
“Modelling data is important for a range of reasons. From performance to correctness to safety. Tagged unions give you a way of modelling choices that forces the correct handling of them, unlike predicate-based branching, such as the one used by if statements and other common control flow structures.” — Folktale Library, ADT Union Documentation
We’ve now seen sum types defined in theory, constructed in practice, and consumed in practice. If we were to list requirements for the ideal features of a language-supported sum type, they might read as follows.
Overall
- 📖 Feature a clean, expressive API
- 👮 Prevent naive direct inspection of data from a sum type value
- ✅ Enable easy type checking with an is or has function
- 💎 Luxury: enable extending sum types with methods
- 💎 Luxury: easy serialization / deserialization
Definition
- 🖋️ Define the name of the sum type
- 🗄️ Define the disjoint components of a sum type
- 📦 Allow each component to itself be another product, sum, or scalar type
- 🏷️ Label each component with a tag
- 👷 Have tags act as constructor functions, producing values from the type
- 📋 Enable tags to receive arguments, positionally and/or in record style
- 💎 Luxury: include integrated tools for working with product types
Matching
- 🔍 Enable matching a value to case by tag name
- 📜 Destructure component data during the match
- ➡️ Map destructured data to a return value for the match expression
- ❓ Allow for a catch-all fallback case in the match
- 🚨 Ideal: fail statically if possible (during compile-time) when omitting a required case or adding an incorrect case
- 🚨 Compromise: barring the above, fail immediately during runtime when omitting or adding a case
- 🚨 Worst: barring the above, fail eventually during runtime, upon discovering that a case cannot be matched
State of the Art: JavaScript Sum Type Libraries
Porting sum types into JS is hardly a new idea. A number of projects, some fairly prominent, have made efforts in this direction. Each implementation differs somewhat in terms of interface, capabilities, and guarantees. Below is a selection (far from exhaustive) of found examples, sorted by npm downloads per month:
Shapes in Folktale
Folktale is a general-purpose functional library with other tools besides adt/union. That inflates its numbers somewhat in the above table, but it also features one of the nicest (IMHO) discussions of tagged unions, so we’ll take a look at it. The embedded code below is editable; run it yourself.
- Biggest knock: developers can directly access .radius off of a circle. This could tempt them to try and grab .radius from an unknown shape.
- Lack of an integrated product type is slightly awkward but not a big deal.
- Lack of mandatory declarative argument types is potentially dangerous. We can manually do the checking using static hasInstance, but that is a chore and relies on developers to dot their i’s and cross their t’s.
- Using vanilla JS destructuring with aliasing makes the pattern matching quite similar to the ideal.
- There is no exhaustiveness check or failure on extra cases yet, though it is an active issue (Folktale #138).
- There is no fallback case yet, though it is an active issue (Folktale #139).
Folktale has other features not shown, such as extension via shared derivations (presumably inspired by Haskell). Overall, it definitely gets us much of the way towards the ideal, but could stand to have better type safety.
Shapes in Daggy
Daggy is part of the wider Fantasy-Land ecosystem of functional JS specs and tools. The documentation is minimal, just two barely-explained example snippets; nothing we cannot understand, however.
- Biggest knock: developers can still directly access .radius off of a circle. This could tempt them to try and grab .radius from an unknown shape.
- Including an integrated product type is a nice touch.
- Only declaring field names is minimalist, but prevents us from even being able to perform type checking.
- Using positional fields is more powerful than forcing destructuring, as you could always use a single field and destructure it if you wanted.
- There is no exhaustiveness check or failure on extra cases, nor obvious plans to add either.
- There is no fallback case, nor obvious plans to add one.
Daggy allows extension through idiomatic prototypal inheritance — adding a method to Shape.prototype will allow Circles and Rectangles to delegate to that method.
Shapes in Union-Type
Conveniently, shape is one of the demonstrated examples from this library’s documentation (adapted slightly here).
- Biggest knock: developers can still directly access .radius off of a circle. This could tempt them to try and grab .radius from an unknown shape.
- We are back to the slightly awkward Point.Point as there is no distinct product type.
- We get all the benefits of declarative field names, plus validations, including declarative validations using both built-in and predefined types. Union-type does the type-checking part quite well.
- Relatively seamless handling of arrays or objects in defining and constructing values for the type, albeit only positional arguments during a match.
- There is no exhaustiveness check yet, though there is an issue for this (Union-Type #52).
- No failure on extra cases, nor obvious plans to add either.
- There is a fallback case, _, though it doesn’t help for shape per se.
Union-type is one of the most fully-featured examples at first glance. It includes prototypal inheritance, some fancy tools (e.g. both instance and curried static case functions), variations like caseOn, recursive types, a fallback syntax and more.
Of note, one of the other libraries found (JAForbes/sum-type) was based on union-type, but with enhancements related to the sanctuary-def project.
So… Which Shall We Use?
This is just a small selection of libraries, and the comments above are not intended to make a final judgment as to viability, implementation details, and other concerns. Rather, the intent is to see how people have attempted to solve this issue to date, and consider how we would want such a library to work. We haven’t touched on potentially important features like serialization, for example. In short, given JavaScript’s dynamically-typed nature and variety of existing edge cases, any library attempting to implement sum types is likely to have some opinions baked in, and some issues left to iron out. For instance, none of the three libraries examined prevent developers from just directly attempting to grab a property from a sum type, even if not every member of the type has that property.
For the remainder of this article, I’ll be using union-type, mostly because it includes declarative type checks and a fallback case syntax.
Back to Redux
An age and a half ago, this article opened with an example JavaScript use case: a Redux state tree. If you recall, we were struggling to encode various states like “unloaded,” “loading,” “loaded with data,” and “failed with an error.” It should be clear that we could represent those states as a sum type.
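For illustration, here is one way those four states could be sketched as a hand-rolled tagged union with a small match helper. This uses plain objects with a tag field rather than the union-type library, and the state names and kitten-themed strings are invented for the sketch:

```javascript
// Illustrative sketch only: the four leaf states as a tagged union.
const Leaf = {
  Unloaded: () => ({ tag: 'Unloaded' }),
  Loading: () => ({ tag: 'Loading' }),
  Loaded: (data) => ({ tag: 'Loaded', data }),
  Failed: (error) => ({ tag: 'Failed', error }),
};

// A tiny match helper: the consumer supplies a handler for every case.
const matchLeaf = (leaf, cases) => {
  const handler = cases[leaf.tag];
  if (!handler) throw new Error(`Unhandled case: ${leaf.tag}`);
  return handler(leaf);
};

// Consuming code decides what UI to show for each state.
const render = (leaf) =>
  matchLeaf(leaf, {
    Unloaded: () => 'Press the button to load',
    Loading: () => 'Spinner...',
    Loaded: ({ data }) => `Got ${data.length} kittens`,
    Failed: ({ error }) => `Error: ${error}`,
  });

console.log(render(Leaf.Loaded([1, 2, 3]))); // "Got 3 kittens"
console.log(render(Leaf.Failed('timeout'))); // "Error: timeout"
```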
If we had additional leaves in our state tree — state.bunnies, state.puppies, etc. — they would also be one of the four Leaf types. Our consuming code would use union-type’s case function to determine what UI to display.
It’s very likely that you’ve noticed something familiar about Redux’s action objects. Why… they’re just a tagged union themselves! An action can be one of a set number of objects, each with a type (tag), and each with potentially any number of other properties (making it a product type). In the reducer, we switch on each case based on type — #mindblown. Why not represent actions as a bona-fide sum type, then?
Unfortunately, Redux currently asserts aggressively that action objects be POJOs — no fancy library constructs. But the idea is sound, and in fact is precisely what inspired Redux in the first place. Redux is based on Elm, which includes tagged unions as a language feature. Recognizing that actions are members of a sum type, we have come full circle.
Redux Conclusion
So what have we gained? We are no longer reinventing the wheel, using primitive types to encode various mutually exclusive cases. And when we want to extract the data, we are reminded to handle every possible case. It’s not perfect — the type checks are limited, there are undoubtedly edge cases to consider, etc. And yet, the gain in expressiveness, cleanliness, some degree of safety, and more ought to be pretty appealing. Modeling your domain explicitly like this is a natural part of working with typed languages, and with all of JavaScript’s vaunted flexibility, it makes sense to adopt some of those benefits.
Case Studies (Pun Intended)
Redux state was a motivating example for this article, but sum types are so generally useful that it would be a shame not to go over some of their “greatest hits.” These are constructs so fundamental that they are often included in other languages’ standard libraries. The following examples will be in pseudocode… see if you can implement them in a real language.
Maybe / Option
The Maybe type is either some data, or nothing at all.
sum type Maybe = Nothing | Just anything

firstOfList = list =>
  if (!list.length) return Nothing
  else return Just(list[0])

res = match firstOfList([])
  Nothing => 'Sorry, there was nothing there.'
  Just (something) => 'Ah, found something: ' + something

log(res) // 'Sorry, there was nothing there.'
No more -1, undefined, or null generating confusion. Methods like findIndex, find, and findById can now return a Maybe value; consuming code will then pattern match to decide what to do with the actual Nothing or Just something as the case may be. Try writing a safeDivide function which returns Nothing in case of divide by zero.
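As a hint for that exercise, here is a sketch using a hand-rolled Maybe in JavaScript (illustrative, not a library):

```javascript
// Minimal hand-rolled Maybe for the exercise.
const Nothing = { tag: 'Nothing' };
const Just = (value) => ({ tag: 'Just', value });

// Dividing by zero yields Nothing instead of Infinity or NaN.
const safeDivide = (a, b) => (b === 0 ? Nothing : Just(a / b));

const describe = (maybe) =>
  maybe.tag === 'Nothing'
    ? 'Sorry, there was nothing there.'
    : 'Ah, found something: ' + maybe.value;

console.log(describe(safeDivide(10, 2))); // "Ah, found something: 5"
console.log(describe(safeDivide(10, 0))); // "Sorry, there was nothing there."
```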
Cons List
This classic, recursive, closure-based, functional linked list is a workhorse of functional programming languages. Basically, a list is either the empty list Nil, or it is constructed from an element and a following list: Cons x xs.
sum type List = Nil | Cons anything List

myList = Cons(1, Cons(2, Cons(3, Nil)))

addList = someList =>
  match someList
    Nil => 0
    Cons num xs => num + addList(xs)

log(addList(myList)) // 6
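The same list can be sketched in plain JavaScript with tagged objects (illustrative; matching is done with a simple tag check rather than language-level pattern matching):

```javascript
// Cons list as tagged objects.
const Nil = { tag: 'Nil' };
const Cons = (head, tail) => ({ tag: 'Cons', head, tail });

const myList = Cons(1, Cons(2, Cons(3, Nil)));

// Recursive fold over the two cases.
const addList = (list) =>
  list.tag === 'Nil' ? 0 : list.head + addList(list.tail);

console.log(addList(myList)); // 6
```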
Binary Tree
Sum types excel at processing tree structures, which is part of why they are very handy in writing compilers.
sum type Tree = Leaf | Tree anything Tree Tree

myTree = Tree(1,
  Tree(0, Leaf, Leaf),
  Tree(2,
    Tree(1.5, Leaf, Leaf),
    Tree(2.5, Leaf, Leaf)))

addTree = someTree =>
  match someTree
    Leaf => 0
    Tree num left right => num + addTree(left) + addTree(right)

addTree(myTree) // 7
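The tree version can likewise be sketched in plain JavaScript (illustrative; tagged objects stand in for real pattern matching):

```javascript
// Binary tree as tagged objects.
const Leaf = { tag: 'Leaf' };
const Tree = (value, left, right) => ({ tag: 'Tree', value, left, right });

const myTree = Tree(1,
  Tree(0, Leaf, Leaf),
  Tree(2,
    Tree(1.5, Leaf, Leaf),
    Tree(2.5, Leaf, Leaf)));

// Recursive fold over the two cases.
const addTree = (t) =>
  t.tag === 'Leaf' ? 0 : t.value + addTree(t.left) + addTree(t.right);

console.log(addTree(myTree)); // 7
```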
Conclusion
When I started taking notes for this article, I told some friends that it would be “a quick blog post.” As it turns out, there was a lot more I wanted to say about sum types than I first realized. If you’ve made it this far, congratulations; I hope you enjoyed it and are interested to try using sum types in your next JavaScript project. The nature of the language may make the attempt imperfect, but an imperfect improvement is still an improvement. And with the possible future inclusion of native pattern matching, sum types in vanilla JS may become even more viable.
Resources
Here is a partial list of resources I found helpful in researching the subject, and can recommend for further reading.
Language Documentation
- ReasonML: Variant! [sic]. See also ReasonMLHub: Variant
- Haskell: Algebraic Data Type
- Elm: Union Types
- F#: Discriminated Unions
- Rust: Enums
JS Library Documentation
- Folktale: Union (particularly good doc-article hybrid)
- Union-Type
- Flow: Unions
Articles & Article Series
- Waleed Khan, Union vs Sum Types. Excellent article, really helped me understand the difference between these concepts.
- Gabriel Gonzalez, Sum Types. Examples in Haskell.
- Scott Wlaschin, F# for Fun and Profit: Designing with Types. Goes much further into how types can underpin the entire logic of an application. Also check out his talk on the same subject.
- Joel Burget, The Algebra (and Calculus!) of Algebraic Data Types. Goes more deeply into the theoretical and mathematical aspects of ADTs. | https://medium.com/fullstack-academy/better-js-cases-with-sum-types-92876e48fd9f | CC-MAIN-2021-21 | refinedweb | 6,618 | 62.17 |
Enforces a timeout on another thread.
#include <FailsafeThread.h>
The target thread needs to either complete execution or set the progressFlag to 'true' within the specified timeout period. If the progressFlag is set, it will be cleared at the end of the timeout, thus requiring the target to re-set within the next timeout period.
Definition at line 13 of file FailsafeThread.h.
constructor, specify target thread, timeout period, and optionally whether to start now
Definition at line 16 of file FailsafeThread.h.
returns true if the FailsafeThread is waiting for the target to stop running
This is useful for the target thread to check whether it is being stopped from a timeout (in which case isEngaged() will return true), or if it has been stopped for some other reason.
Definition at line 36 of file FailsafeThread.h.
[protected, virtual]
override this as a convenient way to define your thread -- return the number of *micro*seconds to sleep before the next call; return -1U to indicate end of processing
Reimplemented from Thread.
Definition at line 39 of file FailsafeThread.h.
microseconds to wait between checks on progressFlag
Changing this value won't change the current timeout period. You would need to stop and restart the thread for a change to immediately take effect.
Definition at line 28 of file FailsafeThread.h.
Referenced by runloop().
[protected]
set to true when FailsafeThread is in the process of stopping (and possibly restarting) the target thread
Definition at line 75 of file FailsafeThread.h.
Referenced by isEngaged(), and runloop().
the function to call on the target thread, defaults to Thread::stop, but Thread::interrupt may be useful
Definition at line 31 of file FailsafeThread.h.
should be set by target thread if it's still making progress and wants another delay
Definition at line 23 of file FailsafeThread.h.
if set to true, the failsafe thread will restart the target if it times out instead of just stopping it
Definition at line 20 of file FailsafeThread.h.
the thread being monitored (or at least the one that will be stopped if progressFlag isn't set)
Definition at line 72 of file FailsafeThread.h. | http://www.tekkotsu.org/dox/classFailsafeThread.html | CC-MAIN-2022-33 | refinedweb | 371 | 62.88 |
This example draws a simple line plot. Below is the source code needed to produce this chart.
../demos/linetest.py
from pychart import *

theme.get_options()

# We have 10 sample points total. The first value in each tuple is
# the X value, and subsequent values are Y values for different lines.
data = [(10, 20, 30), (20, 65, 33), (30, 55, 30), (40, 45, 51),
        (50, 25, 27), (60, 75, 30), (70, 80, 42), (80, 62, 32),
        (90, 42, 39), (100, 32, 39)]

# The format attribute specifies the text to be drawn at each tick mark.
# Here, texts are rotated -60 degrees ("/a-60"), left-aligned ("/hL"),
# and numbers are printed as integers ("%d").
xaxis = axis.X(format="/a-60/hL%d", tic_interval=20, label="Stuff")
yaxis = axis.Y(tic_interval=20, label="Value")

# Define the drawing area. "y_range=(0,None)" tells that the Y minimum
# is 0, but the Y maximum is to be computed automatically. Without
# y_range, PyChart will pick the minimum Y value among the samples,
# i.e., 20, as the base value of the Y axis.
ar = area.T(x_axis=xaxis, y_axis=yaxis, y_range=(0, None))

# The first plot extracts Y values from the 2nd column
# ("ycol=1") of DATA ("data=data"). X values are taken from the first
# column, which is the default.
plot = line_plot.T(label="foo", data=data, ycol=1, tick_mark=tick_mark.star)
plot2 = line_plot.T(label="bar", data=data, ycol=2, tick_mark=tick_mark.square)
ar.add_plot(plot, plot2)

# The call to ar.draw() usually comes at the end of a program. It
# draws the axes, the plots, and the legend (if any).
ar.draw()
To produce a PostScript chart, just feed the file to Python.
% python linetest.py >linetest.eps
Or, to produce a PDF chart, run python like below
% python linetest.py --format=pdf >linetest.pdf
To handle command-line options such as --format=pdf, you need to put theme.get_options() in the beginning of your file.
Every PyChart program starts with the line "from pychart import *" to import classes and objects provided by PyChart. Each chart is represented by an area object (see Section 6), which defines the size, the coordinate system (linear, log, etc; see Section 6.1), and plots to be drawn. The final line of a program should end with area.draw(), which draws all the components of the chart to the standard output.
This will be the final part for our Game Object classes and will cover factories and loading objects from a file at runtime
Last time we covered the great possibilities that polymorphism gives us and how we can make what would otherwise be a lot of code into nice reusable chunks of code. That itself was an extremely powerful idea and in this tutorial we will take it one step further.
This tutorial will show you how to use a factory: essentially a class that creates objects based on a value. In our case we will have each class override a specific method which returns its class type.
virtual const char* GetClassType() { return "TypeName"; }
Due to the way we structured our states in the previous tutorial, we essentially made it so that our state didn't need to know about the types of the game objects, and they could be used through a pointer to their base class "GameObject". That itself was an extremely powerful idea, but we were still having to hard code which objects we wanted to use into the Init() function, which poses problems for maintenance and reusability. What if we could make it so that the state would load the objects it needed from a file, thereby eliminating the need to recompile when we decide to add more objects? Also, what if the objects themselves could load their initial values from an external file? Sounds great doesn't it!
This is known as data driven design and is the goal of this tutorial. So open up your IDE with the previous tutorial project and let's get coding.
Create 2 new files, File.h and File.cpp; we will write our file loading and reading class inside these files.
File.h
#ifndef FILE_H
#define FILE_H

#include <fstream>
#include <string>

class File
{
public:
    bool Open(const std::string& filename);
    bool GetInt(int* pInt);
    bool GetFloat(float* pFloat);
    bool GetString(std::string* pString);
    bool EndOfFile() const;

private:
    std::ifstream m_ifstream;
};

#endif
Looking at this class in a little more detail, you can see that it has functions to open a file, read a string from a file, read a float, read an int, and check if the end of the file has been reached. That should be all we need at this point. Also notice that all of the functions return a boolean value; this is necessary because there may be errors in how we structure our text files.
File.cpp
#include "File.h" bool File::Open(const std::string& filename) { m_ifstream.open(filename.c_str(), std::ios::in); return m_ifstream.good(); } // use the GetString function and use atoi() to get its value as an int bool File::GetInt(int* pInt) { std::string s; if (!GetString(&s)) { return false; } *pInt = atoi(s.c_str()); return true; } // return a float value bool File::GetFloat(float* pFloat) { m_ifstream >> (*pFloat); return true; } // If you have read from files before this should be pretty easy to understand, if // not then a trip to msdn might be in order :)/> bool File::GetString(std::string* pString) { char buf[10000]; while (true) { if (EndOfFile()) { return false; } m_ifstream.getline(buf, 10000); *pString = buf; if (pString->size() > 0 && (*pString)[0] == '#') { continue; } if (!pString->empty()) { return true; } } } // check if the end of the file has been reached or the stream has failed bool File::EndOfFile() const { return m_ifstream.eof() || !m_ifstream.good(); }
Try to go through each line of this code and make sure you understand most of it, as it will serve you well in the future. It took me a while to write it and get it to work, so don't just use it by copying and pasting and never thinking about it again. But anyway, we now have a way to load and read from files.
Lets test it out using our game object class by creating two new variables
float m_positionx; float m_positiony;
Now we shall change our GameObject::Load() function slightly
#include "GameObject.h" #include "File.h" void GameObject::Load(char* filename) { m_pSprite = Sprite::Load(filename); File newFile; // here we create an instance of our file class newFile.Open("gameobject.txt"); // load the file newFile.GetFloat(&m_positionx); // read the values from the file newFile.GetFloat(&m_positiony); }
Next we will create the gameobject.txt file, it is essentially just a basic text file which you can create in windows by right clicking inside a folder and then going to new->txt file and then name it gameobject.txt. This file will need to be in the same folder as the rest of your project. Since we are only loading 2 values from this file it is pretty basic at the moment
10 10
those values are going to be the game objects x and y position values. One more thing is needed to test this out and that is to slightly change our Draw function to use these values.
void GameObject::Draw() { Sprite::Draw(GameInst::Instance()->GetScreen(), m_pSprite, m_positionx, m_positiony); }
Ok so now run the program and you will see that the objects are at the x = 10 and y = 10 positions. Change the values in the text file and run the program again, no need to recompile, WOW! this is great! Imagine the possibilities
This tutorial is to be continued but I thought I would release what I have so far due to the huge gap between this and the previous tutorial.
Don't fret though, we will be loading objects from files at runtime very soon!
| http://www.dreamincode.net/forums/topic/232911-beginning-sdl-part-53-data-driven-design/page__pid__1344589__st__0 | CC-MAIN-2013-20 | refinedweb | 906 | 67.08 |
This tutorial requires an understanding of C, provided in the previous tutorial. This teaches the beginnings of using the Allegro library. Once a basic understanding is achieved, using the library is as simple as reading the library docs, which is what I learned from, and learned very well. Allegro is one of the best documented libs I have ever seen. And it is for this reason that this tutorial will be fairly short. The real learning will come from the next tutorial on how to use Allegro in a game.
How is this more than restating the manual? Well basically a lot of areas are pieced together where they need to be. The bulk of new info comes from the Learning Allegro section, which basically are random tips of what to learn, and things I don't feel the documentation covers well or points out.
This tutorial applies to the latest Allegro 4.0.0.
The latest Allegro library needs to be installed, and can be found at:
Starting Allegro
Learning Allegro
Sample Code
Using Microsoft Visual C++ with Allegro
First thing is to include the header file:
#include <allegro.h>
Before doing anything with Allegro, you must initalize it:
alleg_init();
Before initializing other functions, it is best to setup the current .CFG file, which is created by Allegro's included setup program, or by your program. It defaults to allegro.cfg, but if you want to change it to the name of your application/game, use this command:
set_config_file("myapp.cfg");
If you want timing functions, or use the mouse/music/movies/etc which require the timing functions, install the timer. This is very important to keep your program running the same speed on all computers.
install_timer();
Next if you want to install the keyboard, do so now:
install_keyboard();
While Allegro has the keyboard, none of the C library commands will work on the keyboard.
Next step is installing the mouse, if needed, returns 0 on error (no buttons):
int buttons = install_mouse();
This returns the number of buttons on the mouse. The left button is #0, right is #1, and middle button is #2. This is important when detecting which button is pressed.
If you want the joystick, install it (returns non-zero on error):
if (install_joystick(JOY_TYPE_AUTODETECT)) { allegro_message("Joystick Error: %s", allegro_error); return 1; //exit to DOS }
All of the different joysticks are listed in the Allegro docs, but autodetect will pull info from the .CFG file if loaded, or it will pick a generic joystick type. Allegro_error is a global char* which contains an error message from most functions when they give an error.
If you want sound drivers, install them now:
if(install_sound(DIGI_AUTODETECT, MIDI_AUTODETECT, NULL)) { allegro_message("Sound Error: %s", allegro_error); return 1; }
If you don't want sound or MIDI music, replace the driver name with DIGI_NONE or MIDI_NONE, respectively. The NULL parameter refers to a char*, which was an old feature in Allegro, and is only there for compatability now. Nothing you put in there will have any effect. At this time you may also wish to set the volume levels, which could come from the .CFG file (this discussed later in the tutorial):
set_volume(255, 255); //digital and music to max levels
The last thing you do is initialize graphics. This is because the other functions will output text mode errors, which cannot be displayed in graphics mode. Another good reason for this is that if a problem occurs, video is most likely the incompatable part.
The first thing to do is set the color depth, and the screen size. I strongly recommend that the screen size be placed in constants in alleg.h or some global include file -- this has the advantage of being able to change these 2 constants and never having to change any code, and the application will adjust, if you base all your calculations on these constants, so you can have on-the-fly resolution changes.
set_color_depth(8); //256 color mode (8 bits mode) //in a global file, scrx is: const int scrx = 640; //in a global file, scry is: const int srcy = 480; if (set_gfx_mode(GFX_AUTODETECT, scrx, scry, 0, 0) { printf("Video Error: %s", allegro_error); //screen would still be in text mode on error return 1; }
You could also support multiple screen modes (for example if a card didn't support a certain resolution or color depth) by nesting if statements like the one above, to test all acceptable modes.
Now Allegro is set up and ready to go. Remember to add the line END_OF_MAIN() right after your main functions ending brace.
The Allegro documentation is written so well that it would be hopelessly redundant trying to write a tutorial for all of its hundreds of functions. So basically this section here is some Allegro tips on where to start playing around with Allegro and what methods I found to be good, and what I found that sucked. More advanced issues working with Allegro are in the game tutorial.
The first thing to do for learning Allegro is getting some cool graphics and musics off the net to play around with. Then try making a few small projects with Allegro, such as:
A ball screensaver project -- create a bouncing ball, start with a drawn circle, then move to a .bmp file of a better ball (perhaps a beach ball) and bounce that. If you want to expand the project some more, have a MIDI file play in the background, or have a sound effect (.wav file) when the ball hits the sides. If you want to get really fancy, add in some keyboard controls and gravity to the ball and let the user "nudge" the ball in directions, using geometric vectors, sin and cos to calculate the position.
Allegro Tips
A few random raves that the Allegro docs don't cover, plus what to waste your time on, and what to certainly not even try. . .
I see sooooo many Allegro programmers not using these features and they are the best in there, espically from a user point of view. Use datafiles whenever you can. There is nothing more annoying to me than having 100s of little .BMP files taking up valuable slack space on the hard drive, slowing it down. Datafiles are compressed and can knock those .BMPs to 1/4 of their size! Plus it's a lot faster to program since they load into an array -- no need to type in all the filenames into load_bitmap()!
Also on this issue, use the packfile functions to read and save everything you do (with the exception of .CFG files of course.) They take no time to learn since they perfectly emulate the C-style file functions, and give all the benefits of compressed data.
Although this is a very controversial subject in today's Pentium II and Pentium III world, fixed numbers might be something to consider when using them in a game, where floating accuracy is not needed. This has the advantage of using integer math. If you ever plan to run your game on a 486, definately use these, and also fixed point is very friendly to old (pre-Athlon)AMD and Cyrix processors. On the latest processors, though, double precision floating point variables were tested to be faster than fixed point (see fixtest.cpp for the test results). Also Allegro fixed point trig is fairly inaccurate and is noticeable over a distance of more than 200 pixels or so. I recommend using double type variables if you are planning on running the program on "today's" computers.
I've never used the 3D routines on Allegro. I would think you would be wasting your time. If you really want to do 3D, move to Windows and do Direct3D and OpenGL. The only good reason I could see for doing this would be to make some simple 3D rotating logo perhaps on the title screen or something small like that. I wouldn't base an application off the 3D functions. I suggest the Allegro GL library which you can find on as a library extension.
I would say the same thing about GUI. If you look at the GUI in my Project V2143 game, you will see that it looks nice, but I will warn you that I've spend literally 3x as much time on that GUI as I did on the game. I could probably do it in Windows in 1/4 of the time. If you want a GUI-heavy application, use Windows. It's not worth the trouble. I really should have made the map editor in Windows. . . Learn from my mistake.
RLE sprites and compiled sprites are usually not worth the trouble, since their speed boost is exteremely small. This boost may increase on lower class machines, but I haven't seen any yet.
Read the FAQ on making screenshots. For some reason I've seen a lot of Allegro programmers saying they can't do screenshots yet. It's just a simple one-line statement that works anywhere anytime:
save_bitmap("screen.pcx", BUFFER, pal);
Where BUFFER is the BITMAP* to your double buffer, and pal is the current pallete (use a dummy pallette if in truecolor mode.) Even if you aren't double buffering you can still save the screen.
And by the way, a final note about the BUFFER. Usually you don't draw directly to the screen. Use a technique like double buffering where you draw everything to a buffer then draw the buffer to the screen to eliminate flicker and increase speed. Dirty rectangles is also a good variation of double buffering which draws only changed areas to the screen.
This sample program was made and tested under both MSVC and DJGPP using the latest Allegro WIP as of 7/9/00. It displays Hello World!!! in the middle of the screen in yellow with a blue background then waits for a keypress. Many other examples can be found in Allegro's examples directory.
If you have seen Allegro 3.1 you may notice some differences from the code you are used to seeing. The function allegro_message works like printf but works under environments like Windows in the form of a pop-up box, since they have no default text output. In GUI environments the set_window_title function sets the window's title.
Each hicolor and truecolor depth is tested since different cards support different depths. For example my Voodoo3 card supports 24 but not 32 in VESA but 32 and not 24 under Windows. 15 and 16 bit color depths have roughly the same amount of colors and 24 and 32 bit color depths have the same amount of colors. The makecol function takes red, green, and blue components to form a color. See the Allegro documentation for detailed explanations of all of these functions.
#include <allegro.h> const int scrx = 640; const int scry = 480; int main(int argc, char* argv[]) { if (allegro_init()) { allegro_message("Cannot initalize Allegro.\n"); return 1; } //Set the window title when in a GUI environment set_window_title("Hello World"); if (install_keyboard()) { allegro_message("Cannot initalize keyboard input.\n"); return 1; } //set graphics mode, trying all acceptable depths set_color_depth(32); if (set_gfx_mode(GFX_AUTODETECT, scrx, scry, 0, 0)) { set_color_depth(24); if (set_gfx_mode(GFX_AUTODETECT, scrx, scry, 0, 0)) { set_color_depth(16); if (set_gfx_mode(GFX_AUTODETECT, scrx, scry, 0, 0)) { set_color_depth(15); if (set_gfx_mode(GFX_AUTODETECT, scrx, scry, 0, 0)) { allegro_message("Video Error: %s.\n", allegro_error); return 1; } } } } //set text background color to bright blue text_mode(makecol(0, 0, 255)); //prints yellow "Hello World!!!" in middle of screen textout_centre(screen, font, "Hello World!!!", scrx/2, scry/2, makecol(255, 255, 0)); //Wait for a key to be pressed while (!keypressed()) {} return 0; //Allegro will automatically deinitalize itself on exit } END_OF_MAIN()
I wrote these instructions using MSVC 6.0, and probably work for other MSVC versions too. This section assumes you have never seen MSVC before.
The first thing is to get Allegro installed. Follow the instructions in the Allegro package (at the time of writing, the readme.vc file).
Once Allegro is setup you will want to try to create new Allegro programs or port your Allegro 3.x programs to the new WIP, and to do this you will want to create a new project.
In the file menu, select new, then click projects tab, select Win32 application, and in the project name box type in the name for your new project. If you wish you may change the location of the created directory in the Location box. Press the OK button.
Note for those porting their old projects: it is not necessary for the source code to be in that same directory -- for example if you are working on a project that you want both DOS and Windows executables and you started it in DJGPP, you could leave your source code in the DJGPP directory so both DJGPP and MSVC share the same code, and what you change in MSVC changes in the DJGPP version and vice versa.
The next window will ask what kind of application to you want. Select "An empty project" then press the finish button.
If you do not have any source files to add (from a previous Allegro project you wish to port to Windows), select the new option from the file menu. Select the files tab. From here you can create C++ source and header files, as well as other files which you will probably not create while using Allegro. To create a file select the appropiate file from the list on the left, type in a file name, and press OK. By default the file will be added to your newly created project
If you are porting a program and already have the source code, you can add your code in the Project menu. Select Add To Project > Files, then locate your source files and add them. Remember you can drag a box and use the shift to select multiple files just like other Windows file dialogs. You should add all of your C/C++ source files and your headers as well. Although adding the headers is not required, adding them makes it easy to click on the file in the project to edit it. You can add any other file type as well, for example a readme.txt file or a Word document, for quick editing.
The left pane has the class view, which when you add your C++ files you can get a chart of all your classes and their data members and methods which you can double click to see/edit, a very valuable feature when using C++. You can click the tabs below to see files, resources, and a help window. Click on the FileView tab to see the source you added. Use the above tree to navigate the files in your project -- double click to edit.
If you are porting your program from Allegro 3.x, there are a few changes you need to make, which are mentioned in the Allegro documentation. Once you make these changes your program will compile under any compiler supported by Allegro.
Once you have your source code in MSVC, you will want to set up your project to compile for Allegro. In the project menu select settings. In the Settings For box make sure Win32 Debug is selected. Make sure your project's name is highlighted in the tree box below by clicking on it, then click on the Link tab. In the Object/library modules textbox add "alld.lib" to the end of the list (do not type the quotes). Lastly, change the Settings For box to Win32 Release and add "alleg.lib" to the end of the Object/library modules list and press OK.
To compile your program press the build button on the toolbar (shortcut F7). The message window at the bottom will show any errors and warnings -- double click on a line to be shown the error to be fixed. After all errors are fixed press the build again until the EXE builds (It will show 0 errors and warnings if it worked). To run your program press the the execute button which looks like an exclaimation point (shortcut Shift-F5) to run the program. If you want to run the program in debugging mode press the button next to it, the "Go" button (shortcut F5). Common debugging functions are shown on the debugging menu.
NOTE: The MSVC debugger cannot be used if your program is running in full-screen DirectX mode. Change your graphics driver to GFX_DIRECTX_WIN or GFX_DIRECTX_OVL if your card supports it, else use GFX_GDI which is unfortunately much slower. You obviously will also need to have your desktop in a higher resolution than your game if you intend to see other windows.
After your program is complete and you want to distribute it, you will want to compile the program in release mode. Note that release mode only works on the Professional and Enterprise editions of MSVC; MSVC standard does not have an optimizer and only runs in debugging mode. In the Build menu pick "Set Active Configuration" and pick the release mode for your project and rebuild your project as before. You will find the release and debug EXE files in the Release and Debug directories in your main project's directory. | http://www.gillius.org/allegtut/index.htm | crawl-002 | refinedweb | 2,891 | 70.53 |
Walkthrough: Binding Silverlight Controls to a WCF Data Service
Updated: May 2011
In this walkthrough, you will create a Silverlight application that contains data-bound controls. The controls are bound to customer records accessed through a WCF Data Service.
This walkthrough illustrates the following tasks:
Creating an Entity Data Model that is generated from data in the AdventureWorksLT sample database.
Creating a WCF Data Service that exposes the data in the Entity Data Model to a Silverlight application.
Running the Data Source Configuration Wizard to connect to the data service which populates the Data Sources window.
Creating a set of data-bound controls by dragging items from the Data Sources window to the Silverlight Designer.
Creating buttons that navigate forward and backward through records.
You need the following components to complete this walkthrough:
Visual Studio 2010..
Entity Data Models and the ADO.NET Entity Framework. For more information, see Introducing the Entity Framework.
Silverlight data binding. For more information, see Data Binding.
Start this walkthrough by creating an empty web application project to host a WCF Data Service.
To create the service project
On the File menu, point to New, and then click Project.
Expand Visual C# or Visual Basic, and then select Web.
Select the ASP.NET Empty Web Application project template.
In the Name box, type AdventureWorksWebApp and then click OK.
To expose data to an application by using a WCF Data Service, a data model must be defined for the service. In this walkthrough, create an Entity Data Model.
To create an Entity Data Model
On the Project menu, click Add New Item.
In the Data category, choose the ADO.NET Entity Data Model project item.
Change the name to AdventureWorksDataModel.edmx, and then click Add.
The Entity Data Model Wizard opens.
On the Choose Model Contents page, click Generate from database, and then click Next.
On the Choose Your Data Connection page, select one of the following options:
If a data connection to the AdventureWorksLT sample database is available in the drop-down list, select it.
or
Click New Connection and create a connection to the AdventureWorksLT database.
Verify that the Save entity connection settings in Web.Config as option is selected and then click Next.
On the Choose Your Database Objects page, expand Tables, and then select the Customer table.
Click Finish.
You must configure the service to operate on the Entity Data Model that you created.
To configure the service
In the AdventureWorksDataService.svc code file, replace the AdventureWorksDataService class declaration with the following code:
public class AdventureWorksDataService : DataService<AdventureWorksLTEntities> { //; } }
Build the project, and verify that it builds without errors.
Create a new Silverlight application, and then add a data source to access the service.
To create the Silverlight application
In Solution Explorer, right-click the solution node, click Add, and select New Project.
In the New Project dialog, expand Visual C# or Visual Basic, and then select Silverlight.
Select the Silverlight Application project template.
In the Name box, type AdventureWorksSilverlightApp and then click OK.
In the New Silverlight Application dialog box, click OK.
Create a data source that is based on the data returned by the service.
To create the data source
On the Data menu, click Show Data Sources.DataService.svc to the list of available services in the Services box.
In the Namespace box, type AdventureWorksService.
In the Services box, click AdventureWorksDataService.svc and then click OK.
In the Add Service Reference page, click Finish.
Visual Studio adds nodes that represent the data returned by the service to the Data Sources window.
Add buttons to the window by modifying the XAML in the Silverlight Designer.
To create the window layout
In Solution Explorer, double-click MainPage.xaml.
The window opens in the Silverlight Designer.
In the XAML view of the designer, add the following code between the <Grid> tags:
<Grid.RowDefinitions> <RowDefinition Height="75" /> <RowDefinition Height="525" /> </Grid.RowDefinitions> <Button HorizontalAlignment="Left" Margin="22,20,0,24" Name="backButton" Width="75" Content="<"></Button> <Button HorizontalAlignment="Left" Margin="116,20,0,24" Name="nextButton" Width="75" Content=">"></Button>
Build the project.
Create controls that display customer records by dragging the Customers node from the Data Sources window to the designer.
To create the data-bound controls
In the Data Sources window, click the drop-down menu for the Customers node and select Details.
Expand the Customers node.
For this example some fields will not be displayed so click the drop-down menu next to the following nodes and select None:
NameStyle
PasswordHash
PasswordSalt
rowguid
This prevents Visual Studio from creating controls for these nodes when they are dropped onto the designer. For this walkthrough, it is assumed that the end user does not want to see this data.
From the Data Sources window, drag the Customers node to the designer under the buttons.
Visual Studio generates XAML and code that creates a set of controls that are bound to the customer data.
Use the service to load data, and then assign the returned data to the data source that is bound to the controls.
To load the data from the service
In the designer, click an empty area next to one of the buttons.
In the Properties window, verify the UserControl is selected and then click the Events tab.
Locate the Loaded event and double-click it.
In the code file that opens (MainPage.xaml) add the following using (C#) or Imports (Visual Basic) statements:
Replace the event handler with the following code. Make sure that you replace the localhost address in this code with the local host address on your development computer:
private AdventureWorksLTEntities advWorksService; private System.Windows.Data.CollectionViewSource customersViewSource; private void UserControl_Loaded(object sender, RoutedEventArgs e) { advWorksService = new AdventureWorksLTEntities(new Uri("")); customersViewSource = this.Resources["customersViewSource"] as System.Windows.Data.CollectionViewSource; advWorksService.Customers.BeginExecute(result => Dispatcher.BeginInvoke(() => customersViewSource.Source = advWorksService.Customers.EndExecute(result)), null); }
Add code that enables scrolling through records by using the < and > buttons.
To enable users to navigate sales records
Open MainPage.xaml in the designer and double-click the < button.
Replace the generated backButton_Click event handler with the following code:
Return to the designer, and double-click the > button.
Visual Studio opens the code-behind file and creates a new nextButton_Click event handler.
Replace the generated nextButton_Click event handler with the following code:
Build and run the application to verify that you can view and navigate customer records.
To test the application
On Build menu, click Build Solution. Verify that the solution builds without errors.
Press F5.
Verify the first record in the Customers table appears.
Click the < and > buttons to navigate back and forward through the customer records.
Close the application.
After completing this walkthrough, you can perform the following related tasks:
Learn how to save changes back to the database. For more information, see Data Binding.
Learn how to incorporate more features using WCF Data Services in Silverlight applications. For more information, see ADO.NET Data Services (Silverlight). | http://msdn.microsoft.com/en-us/library/ee621313(VS.100).aspx | CC-MAIN-2013-20 | refinedweb | 1,154 | 50.43 |
Maximize the Confusion of an Exam
Introduction
Professor Reddy is a Physics teacher. He will be organizing a physics test that will contain True / False questions. Each question has an answer which is either true or false. He wants to design the question paper to maximize the confusion for the students. One way he can do this is to keep a series of questions whose answer is True, then keep a series of questions whose answer is False, and so on. Thus he can create confusion among the students. Let us now discuss the problem statement and the possible solution.
Problem Statement
A teacher is creating an exam containing N true/false questions, with the letters 'T' indicating true and 'F' indicating false. He intends to confuse the students by increasing the number of questions with the same answer in succession (multiple trues or multiple falses in a row). You're given a string ‘ANSWER_KEY’, where ANSWER_KEY[i] is the ‘i’th question's original answer. You're also provided with an integer ‘K’, which represents the maximum number of times you can conduct the following operation: Set any question's answer key to 'T' or 'F' (i.e., set ANSWER_KEY[i] to 'T' or 'F'). After completing the operation ‘K’ times, return the maximum number of consecutive 'T's or 'F's in the answer key.
Example 1:
ANSWER_KEY = ”FFTTTTTT”, K = 2.
We can toggle the first two ‘F’ to ‘T’ as K = 2. So we can do the operation two times at max. Thus, the final string will be “TTTTTTTT”. As a result, there are four consecutive ‘T’ in a row.
Output: 8
Example 2:
ANSWER_KEY = ”TTFTTTTT”, K = 1.
We can toggle ‘F’ at index 2. Thus the modifies ‘ANSWER_KEY’ will be “TTTTTTTT”. In both cases, we have 8 consecutive ‘T’. Therefore the output is 8.
Output: 8
Approach
The base condition occurs when ‘K’ is greater than half of the string length. We can argue that we can make the entire string either true or false. Now we calculate the number of consecutive T’s and the number of consecutive F’s using two separate function calls. Then we greedily try to find an answer such that the given two conditions are established. Finally, we return the answer.
Algorithm
- If ‘K’ is higher than half the length of the string, the answer is the string length. Since if the count of 'F' is greater than or equal to the count of 'T,' then the count of 'T' must be less than or equal to the count of 'F,' and vice versa.
- We've now called our function consecutiveCount() twice. We try to determine the maximum number of consecutive T's in the first call. Then, we try to determine the maximum number of consecutive Fs in the second call. The ‘BEGIN’ variable in the consecutiveCount() function represents the starting index from which a subsequent string begins.
- Now, we create a loop in which two conditions are established if character c is not equal to the string element.
- If ‘COUNT’ equals ‘K’, the ‘BEGIN’ variable is updated, which indicates the first modified character of the string is removed, and the current character is modified.
- If ‘M’ does not equal ‘K’, change the current character and increment ‘M.’Also, for each iteration, update ‘ANS’. Finally, return ‘ANS’.
Program
#include<iostream> #include<string> #include<vector> using namespace std; int consecutiveCount(string str, int k, char c) { int ans = 0, begin = 0, n = str.size(); // Stores index of modified characters in the string. vector <int> v(n); // Index of first character to be modified. int first = 0; // Index of last character to be modified. int last = 0; // Number of characters modified. int count = 0; for (int i = 0; i < n; i++) { if (str[i] != c) { // If count == k, we store update begin and modify the current character. if (k == count) { begin = v[first++] + 1; v[last++] = i; } // We modify the current character and increment count. else { v[last++] = i; count++; } } ans = max(ans, i - begin + 1); } return ans; } int maximizeConfusionOfAnExam(string answerKey, int k) { int ans1 = consecutiveCount(answerKey, k, 'T'); int ans2 = consecutiveCount(answerKey, k, 'F'); return max(ans1, ans2); } int main() { string answerKey; cout << "Enter the answer key: "; cin >> answerKey; cout << "Enter the value of K: "; int K; cin >> K; cout << "Answer is: " << maximizeConfusionOfAnExam(answerKey, K); return 0; }
Input
Enter the answer key: TTFTTFTT Enter the value of K: 1
Output
Answer is: 5
Time Complexity
O(N), where ‘N’ is the size of the string.
We are using one for loop in the consecutiveCount method. The loop iterates upon string length. Thus the time complexity is O(N).
Space Complexity
O(1). We are using constant space as we just declaring a few variables. Thus space complexity is O(1).
Key Takeaways
We saw an interesting ad-hoc problem i.e., Maximize the confusion of an Exam. It took O(N) time and constant space. But every ad-hoc problem is unique in itself and requires separate logic, thus don’t stop practising and Move to our industry-leading practice platform CodeStudio to practice more such problems and many more.
Thank You and Happy Coding! | https://www.codingninjas.com/codestudio/library/maximize-the-confusion-of-an-exam-789 | CC-MAIN-2022-27 | refinedweb | 866 | 65.73 |
On Mon, 09 Apr 2012 15:24:19 +0400
Stanislav Kinsbursky <skinsbursky@parallels.com> wrote:

> ... task not found, or its lockd wasn't started for its namespace, then grace
> period can be either restarted for all namespaces, or just silently dropped.
> This is the place where I'm not sure how to do it, because calling grace
> period for all namespaces would be overkill...
>
> There is also another problem with the "task by pid" search: the found task
> can actually be not the sender (which died already), but some other new task
> with the same pid number. In this case, I think, we can just neglect this
> probability and always assume that we located the sender (if, of course,
> lockd was started for the sender's network namespace).
>
> Trond, Bruce, could you please comment on these ideas?

I can comment, but I'm not sure that will be sufficient.

The grace period has a particular purpose. It keeps nfsd or lockd from
handing out stateful objects (e.g. locks) before clients have an
opportunity to reclaim them. Once the grace period expires, there is no
more reclaim allowed and "normal" lock and open requests can proceed.

Traditionally, there has been one nfsd or lockd "instance" per host.
With that, we were able to get away with a relatively simple-minded
approach of a global grace period that's gated on nfsd or lockd's
startup and shutdown.

Now, you're looking at making multiple nfsd or lockd "instances". Does
it make sense to make this a per-net thing? Here's a particularly
problematic case to illustrate what I mean:

Suppose I have a filesystem that's mounted and exported in two
different containers. You start up one container and then 60s later,
start up the other. The grace period expires in the first container and
that nfsd hands out locks that conflict with some that have not been
reclaimed yet in the other.

Now, we can just try to say "don't export the same fs from more than
one container". But we all know that people will do it anyway, since
there's nothing that really stops you from doing so.

What probably makes more sense is making the grace period a per-sb
property, and coming up with a set of rules for the fs going into and
out of "grace" status.

Perhaps a way for different net namespaces to "subscribe" to a
particular fs, and don't take the fs out of grace until all of the
grace period timers pop? If a fs attempts to subscribe after the fs
comes out of grace, then its subscription would be denied and reclaim
attempts would get NFS4ERR_NOGRACE or the NLM equivalent.

--
Jeff Layton <jlayton@redhat.com>
QCamera (auto) exposure/focus settings
Hi!
Some time ago I made a basic but functional implementation of a Stopmotion interface using the QCamera class. In fact, everything works pretty well, except for one thing: the auto exposure/focus settings are enabled all the time, generating a constant "blink" effect with a lot of horizontal lines moving around on my display.
Here is a short example of my problem:
So, my question is: how can I set the exposure/focus parameters of my webcam using Qt to avoid this behaviour? I noticed that there is a class called QCameraExposure and another called QCameraFocus, but the lack of documentation doesn't let me find the hint I need.
Any suggestion?
Hi,
AFAIK, you can get these directly from the QCamera instance and then modify them to change the camera settings.
- mrjj Qt Champions 2016
hi
From these
?
Just guessing, I tried this code:
QCamera *camera = new QCamera();
...
camera->setCaptureMode(QCamera::CaptureStillImage);

QCameraExposure *exposure = camera->exposure();
exposure->setExposureMode(QCameraExposure::ExposureManual);

QCameraFocus *focus = camera->focus();
focus->setFocusMode(QCameraFocus::ManualFocus);
focus->setFocusPointMode(QCameraFocus::FocusPointCenter);
But the result is exactly the same, the webcam keeps trying to calculate exposure and focus values every second. I am using a webcam FaceCam 320X. Suggestions?
- jsulm Moderators
Does this webcam support manual exposure and focus?
@jsulm It is a very basic webcam model, so I really doubt it. To me, this has been a slow learning process about dealing with cameras using Qt.
Now, I wonder if the right path to solve my issue (if there is a solution for this) is to look for some "Webcam system settings" in the operating system context and try to deal with the auto-exposure/focus from there.
Should I give up on managing webcams for doing stopmotion and focus on DSLR devices? Should I get used to the annoying blinking horizontal lines if I decide to work with webcams?
I would like to understand the technical boundaries of working with webcams, how far can I take my expectations?
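One way to answer that from code is to ask Qt whether the backend reports the manual modes at all before trying to set them. A sketch against the Qt 5 camera API (the helper name is mine; how accurately the macOS backend reports its capabilities is another question):

```cpp
#include <QCamera>
#include <QCameraExposure>
#include <QCameraFocus>
#include <QDebug>

// Query what the current backend claims to support; setting a mode
// the backend doesn't support may simply be ignored.
void dumpCameraCapabilities(QCamera *camera)
{
    QCameraExposure *exposure = camera->exposure();
    qDebug() << "Manual exposure supported:"
             << exposure->isExposureModeSupported(QCameraExposure::ExposureManual);

    QCameraFocus *focus = camera->focus();
    qDebug() << "Manual focus supported:"
             << focus->isFocusModeSupported(QCameraFocus::ManualFocus);
    qDebug() << "Center focus point supported:"
             << focus->isFocusPointModeSupported(QCameraFocus::FocusPointCenter);
}
```

If these all come back false for the FaceCam, the blinking is the driver's auto-exposure at work and there is nothing to switch off from QCamera's side.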
Something surprises me here, isn't that camera focus handling manual ?
Do you have the same phenomenon if you use QML ?
Something surprises me here, isn't that camera focus handling manual ?
In fact, you are right. The focus doesn't change when the finger appears. But what about the exposure? Why is the camera view blinking all the time? I tried setting the focus and exposure values to both Auto and Manual, with the same result.
Do you have the same phenomenon if you use QML ?
I was trying to learn QML but I must confess that I couldn't deal with it, different paradigm, just difficult to me. My apologies to QML developers.
It is just for the sake of testing, not trying to convince you to change your code base.
Just create a default QtQuick application and put
import QtQuick 2.6
import QtQuick.Window 2.2
import QtMultimedia 5.0

Window {
    visible: true

    Camera {
        id: camera
    }

    VideoOutput {
        source: camera
        anchors.fill: parent
        focus: visible // to receive focus and capture key events when visible
    }
}
in the main.qml file.
It will show you the first camera it finds.
This morning I was testing the QML code you shared with me and I compared it with my own implementation. Here is the result:
What surprises me most is how the blink effect disappears when the natural light is strong. I mean, I used to run my tests at night and, as you could see, the blinking effect was exaggerated, but in this latest test it is almost imperceptible. I made a comparison with the FaceTime app and there is definitely a big difference between working in night light and in daylight.
About the QML vs QWidget implementation, I must say that in my opinion the "quality" result is the same. My most important learning from this issue is that natural light matters when you are going to work with webcams, at least on Mac systems. I guess for some of you this could sound "really obvious", but please bear in mind that I am a newbie in this topic ;)
I ran some night tests on my laptop with Ubuntu using the same webcam and the same Qt code, and the auto-exposure effect was not as intense as on my Mac.
After all my tests, I consider that this issue goes far beyond the software itself: environment light matters. Maybe with DSLR devices it is a different story.
Do you know whether that camera provides a RGB or YUV stream ?
@SGaist Not sure how to answer your question, but this is all the info I could get from the manufacturer:
Do you mind building a custom QtMultimedia with two patches to test that ?
You can clone QtMultimedia from here and checkout the branch matching your current Qt.
Then:
git fetch refs/changes/04/156204/5 && git format-patch -1 --stdout FETCH_HEAD > yuv422.patch
git fetch refs/changes/45/156845/2 && git format-patch -1 --stdout FETCH_HEAD > yuv422_avcamera.patch
That will give you two patches that you can apply with:
patch -p1 -i yuv422.patchand
patch -p1 -i yuv422_avcamera.patch.
If you have trouble compiling QtMultimedia from git, you can also grab the sources from the installer and just apply the two patches on them. It shouldn't be problematic.
@SGaist Hi, following your instructions I downloaded the git branch corresponding to my Qt version (5.6), applied the patches and finally, compiled the whole source code with no issues (qmake and then make).
Now, what should I do with the content of this "qtmultimedia" folder?
PS: I am running my tests on a Mac system.
You need to call
make install, that will replace your current QtMultimedia. Then you only have to re-build your application, it should use the new available format if possible (just double check that's indeed the case)
@SGaist Just one question before continuing: Should I make a backup copy of some directory of my current Qt (official) installation? I don't want to break something :S
It shouldn't break anything but you can copy your QtMultimedia.framework as well as the mediaservices plugin folder.
@SGaist Sorry for my delay. I was busy working on my latest release.
Now that I had some time to run the test, I must say that I couldn't detect any difference in my camera behavior after installing the new version of the QtMultimedia module. The blinking effect remains.
Initially, I was expecting to run my tests on my Mac system. Unfortunately, I couldn't work with the Qt 5.6 version due to some qmake bug I found when I was trying to compile my project, so I decided to try the whole thing from my Linux box.
This is the output I got before compiling:
# qmake
Checking for openal... no
Checking for alsa... yes
Checking for pulseaudio... yes
Checking for gstreamer... yes
Checking for gstreamer_photography... no
Checking for gstreamer_encodingprofiles... yes
Checking for gstreamer_appsrc... yes
Checking for linux_v4l... yes
Checking for resourcepolicy... no
Checking for gpu_vivante... no
Not sure whether this test makes any sense on Linux. Please let me know if there are any other tests related to webcam management I could run.
From a quick look at the gstreamer plugin sources, I can't tell if you'll be using that format. You have to check that.
At least since GStreamer 1.0 the format is available (see the qgstutils.cpp file) | https://forum.qt.io/topic/66190/qcamera-auto-exposure-focus-settings/13 | CC-MAIN-2018-13 | refinedweb | 1,244 | 57.16 |
PugSQL is an anti-ORM that facilitates interacting with databases using SQL in files.
Project description
PugSQL is a simple Python interface for using parameterized SQL, in files, with any SQLAlchemy-supported database.
For more information and full documentation, visit pugsql.org.
import pugsql

# Create a module of database functions from a set of sql files on disk.
queries = pugsql.module('resources/sql')

# Point the module at your database.
queries.connect('sqlite:///foo.db')

# Invoke parameterized queries, receive dicts!
user = queries.find_user(user_id=42)
# -> { 'user_id': 42, 'username': 'mcfunley' }
In the example above, the query would be specified like this:
--- :name find_user :one select * from users where user_id = :user_id
So throw away your bulky ORM and talk to your database the way the gods intended! Install PugSQL today!
15 July 2008 10:04 [Source: ICIS news]
LONDON (ICIS news)--US phosphates exports continue to run below the corresponding period in 2006-2007, despite exports being 90% higher in May 2008 than in the same month of 2007, the latest US government data revealed on Tuesday.
A report by the US Department of Agriculture (USDA) showed that diammonium phosphate (DAP) exports were running around 7% lower in the July 2007-May 2008 period, at around 4.7m short tons, compared with around 5m short tons in the corresponding period of 2006-2007.
The deficit continued despite DAP exports in May reaching 624,158 short tons, around 90% higher than the 328,726 short tons exported in May 2007.
Monoammonium phosphate (MAP) exports for the period July 2007-May 2008 totalled 2.3m short tons, around 8% below the same period last year, when exports totalled around 2.5m short tons.
MAP exports in May totalled 243,781 short tons, a fall of around 31% on May 2007 when exports reached 352,621 short tons.
1. Why doesn't this compile?
val sum = {x, y -> x + y}
println(sum(1,2))
code from
is there any plan for local type inference for lambdas?
2. What's the benefit of local functions
fun main(args:Array<String>){
val sum = {(x : Int, y : Int) : Int -> x + y}
fun sum2(x:Int, y:Int):Int {return x + y}
}
if the language already has lambdas?
1. Seems to be a bug in the docs. Thanks
BTW, what type would you expect 'sum' to be?
2. This makes the language more regular: I can have a function anywhere I like.
Your argument can be extended to "why have functions at all when we can have val's of function types?", and this approach does not seem right.
1) Int -> Int
Nemerle, for example, have this feature, type inference by usage.
def sum(x,y) { x + y } doesn't compile
but if the local function is used — sum(1,2) — it's clear what type it has
Nemerle's type inference is indeed very powerful (and, as a consequence, rather slow). Kotlin aims for a simpler (faster and more predictable) type inference algorithm.
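For reference, the expected-type route does compile: annotating the val with a function type keeps the inference local to the declaration, so the lambda's parameter types come from the annotation. A sketch, using later-Kotlin syntax:

```kotlin
fun main() {
    // Parameter types are inferred from the declared function type,
    // so no whole-scope usage analysis is needed.
    val sum: (Int, Int) -> Int = { x, y -> x + y }
    println(sum(1, 2)) // prints 3
}
```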
fun <T,K,R> Tuple2<T,K>.apply(func: Function2<T,K,R>) : R {
return func(this._1, this._2)
}
Is it really
val sum = {x, y -> x + y}
sum(1,2)
more complex than the current type inference?
val r =#(1,2).apply({x,y -> x + y})
Local functions change often; it's annoying to change the type every time.
Yes, it is much more complex, because instead of a single call expression we have to analyze the whole local scope. | http://devnet.jetbrains.com/message/5452518 | CC-MAIN-2014-42 | refinedweb | 270 | 74.59 |
Grouping Excel cells ties a range of cells together so that they can be collapsed or expanded. But often we also need to ungroup Excel cells. Consequently, this article aims at introducing how to ungroup Excel cells in C#, through a professional Excel .NET component, Spire.XLS.
Just as the name implies, ungrouping Excel cells means ungrouping a range of cells that were previously grouped. Before ungrouping Excel cells, we should complete the preparatory work:
- Download Spire.XLS and install it.
Then here comes to the explanation of the code:
Step 1: Create an instance of Spire.XLS.Workbook.
Workbook workbook = new Workbook();
Step 2: Load the file base on a specified file path.
workbook.LoadFromFile(@"group.xlsx");
Step 3: Get the first worksheet.
Worksheet sheet = workbook.Worksheets[0];
Step 4: Ungroup the first 5 row cells.
sheet.UngroupByRows(1, 5);
Step 5: Save as the generated file.
workbook.SaveToFile(@"result.xlsx", ExcelVersion.Version2010);
Full code:
using Spire.Xls;

namespace UngroupCell
{
    class Program
    {
        static void Main(string[] args)
        {
            Workbook workbook = new Workbook();
            workbook.LoadFromFile(@"group.xlsx");
            Worksheet sheet = workbook.Worksheets[0];
            sheet.UngroupByRows(1, 5);
            workbook.SaveToFile(@"..\..\result.xlsx", ExcelVersion.Version2010);
        }
    }
}
Please preview the original group effect screenshot:
And the generated ungroup effect screenshot:
| https://www.e-iceblue.com/Tutorials/Spire.XLS/Spire.XLS-Program-Guide/Excel-Data/How-to-Ungroup-Excel-Cells-in-C.html | CC-MAIN-2022-40 | refinedweb | 205 | 52.36 |
_intr_v86()
Execute a real-mode software interrupt
Synopsis:
#include <x86/v86.h> int _intr_v86( int swi, struct _v86reg* regs, void* data, int datasize );
Arguments:
- swi
- The software interrupt that you want to execute.
- regs
- A pointer to a _v86reg structure that specifies the values you want to use for the registers on entry to real mode; see below.
- data
- A pointer to the data that you want to copy into memory; see below.
- datasize
- The size of the data, in bytes.
Library:
libc
Use the -l c option to qcc to link against this library. This library is usually included automatically.
Description:
The _intr_v86() function executes the real-mode software interrupt specified by swi, loading the registers on entry to real mode from the structure pointed to by regs and copying datasize bytes from data into real-mode memory for the call.
Errors:
- EBADF
- The connection to the system process is no longer connected to a channel, or the connection doesn't exist. The channel may have been terminated by the server, or the network manager if it failed to respond to multiple polls.
- EFAULT
- A fault occurred when accessing the information pointed to by the data or regs arguments.
- EINTR
- The call was interrupted by a signal.
- EOVERFLOW
- The sum of the IOV lengths being sent to the system process exceeds INT_MAX.
- EPERM
- The calling process doesn't have the required permission; see procmgr_ability().
- ETIMEDOUT
- A kernel timeout unblocked the underlying call to MsgSendvnc(). See TimerTimeout().
Examples:
A minimal sketch of a call — hedged: the struct _v86reg member names below are illustrative assumptions; verify them against <x86/v86.h>:

#include <string.h>
#include <x86/v86.h>

int call_video_bios(void)
{
    struct _v86reg reg;
    char buf[512];

    memset(&reg, 0, sizeof(reg));
    reg.eax = 0x4f00;    /* illustrative: VESA "get controller info" */

    if (_intr_v86(0x10, &reg, buf, sizeof(buf)) == -1) {
        /* the call failed; see Errors above */
        return -1;
    }
    return 0;
}