text stringlengths 454 608k | url stringlengths 17 896 | dump stringclasses 91 values | source stringclasses 1 value | word_count int64 101 114k | flesch_reading_ease float64 50 104 |
|---|---|---|---|---|---|
On 05/08/2010 12:23 AM, Jan Kara wrote:>> When rsv is the right hand side of goal, we should return NULL,>> because now rsv's prev is NULL, or we return rsv.>>>> Signed-off-by: Peter Pan(潘卫平)<wppan@redflag-linux.com>>>>> --->> fs/ext2/balloc.c | 6 +----->> fs/ext3/balloc.c | 6 +----->> 2 files changed, 2 insertions(+), 10 deletions(-)>>>> diff --git a/fs/ext2/balloc.c b/fs/ext2/balloc.c>> index 3cf038c..023990f 100644>> --- a/fs/ext2/balloc.c>> +++ b/fs/ext2/balloc.c>> @@ -323,11 +323,7 @@ search_reserve_window(struct rb_root *root,>> ext2_fsblk_t goal)>> * side of the interval containing the goal. If it's the RHS,>> * we need to back up one.>> */>> - if (rsv->rsv_start> goal) {>> - n = rb_prev(&rsv->rsv_node);>> - rsv = rb_entry(n, struct ext2_reserve_window_node, rsv_node);>> - }>> - return rsv;>> + return (rsv->rsv_start< goal) ? rsv : NULL;> Hmm, I'm not sure I understand your reasoning. Suppose we have an RB-tree> with two intervals 0-10, 20-30. Interval 0-10 is in the root. Now we search> for goal 15. In the root we go to right because 10<15, in the next node we go> to left because 15< 20. Then the loop terminates. Now your code would return> NULL but previous code would return rb_prev of interval 20-30 which is 0-10.> And that is what we want as far as I understand what we expect from the> function...>> HonzaYou got the point!Many thanks.Regards-- Peter PanRed Flag Software Co.,Ltd--To unsubscribe from this list: send the line "unsubscribe linux-kernel" inthe body of a message to majordomo@vger.kernel.orgMore majordomo info at read the FAQ at | http://lkml.org/lkml/2010/5/9/184 | CC-MAIN-2017-09 | refinedweb | 275 | 69.48 |
Provider
Since: BlackBerry 10.0.0
#include <bb/pim/account/Provider>
To link against this class, add the following line to your .pro file: LIBS += -lbbpim
A Provider.
This class represents a Provider record. The functions in the AccountService class allow you to populate a Provider record and retrieve information from it.
Overview
Public Functions Index
Protected Functions Index
Public Functions
Destructor.
BlackBerry 10.0.0
QList< QString >
QVariant
Provides access to the capabilities map field contained in the entry key. If the field is not found, it will return QVariant::Invalid.
QString
Provides access to the Provider object's id property. Use the AccountService::providers() function to obtain the complete list of current providers.
Property::EnterpriseType
Provides access to the Provider object's enterprise property. Note: An Account object created using this Provider object will inherit the enterprise property from this Provider object, except for a provider with enterprise set to Property::EnterpriseUnknown. Such a provider may create several types of accounts, some with enterprise set to Property::Enterprise and others with enterprise set to Property::NonEnterprise.
Returns the Property::EnterpriseType of the Provider object.
BlackBerry 10.0.0
bool
Accessor for read-only capability of service.
Returns whether the service is read-only for the provider. If it's not, it implies the service is read-write. Note: An Account object created using this Provider object will inherit the read-only capability for all services from this Provider object. Switching the read-only capability of a service for an account is not possible.
bool
Accessor for support capability of service.
Returns whether the service is supported for the provider. Note: An Account object created using this Provider object will inherit the support capability for all services from this Provider object. Switching the support capability of a service for an account from true to false is possible. See the Account::setServiceSupported() function for more details.
bool
Object correctness.
Determines whether or not the Provider object returned from AccountService function calls has acceptable attribute values.
QString
Provider &
QVariantMap
QList< QString >
QVariant
Protected Functions
void
void
void
void
void
Set function for read-only capability of service.
Assigns the value of serviceAccessReadOnly to the provider's read-only capability for service. If serviceAccessReadOnly is true, it implies service is read-only.
BlackBerry 10.0.0
void
Set function for support capability of service.
Assigns the value of serviceSupported to the provider's support capability for service. If serviceSupported is true, it implies service is supported.
BlackBerry 10.0.0
Got questions about leaving a comment? Get answers from our Disqus FAQ.comments powered by Disqus | http://developer.blackberry.com/native/reference/cascades/bb__pim__account__provider.html | CC-MAIN-2016-07 | refinedweb | 434 | 51.44 |
Hi!? Many thanks, David Dear List, this should be so basic that I feel bad asking this question here, but I don't get it. I am having a look at Sande's book "Hello World". The topic is 'Modules', and the code comes directly from the book. I have two files: Sande_celsius-main.py and Sande_my_module.py. I import the latter from within the former. #> First of all, this error message doesn't exactly tell me _where_ the problem is, does it? It could be a problem with(in) the imported function c_to_f... I wish he would tell me: "Problem in file x, line y". Secondly, the name celsius in the global namespace of ~-main.py is merely a variable, which later is then used as a parameter to c_to_f. I do not see a problem here. What is going on? | https://mail.python.org/pipermail/tutor/2010-February/074319.html | CC-MAIN-2018-05 | refinedweb | 142 | 86.4 |
Apache::AxKit::XSP::Language::SimpleTaglib - alternate XSP taglib helper'}.');'; }
This taglib helper allows you to easily write tag handlers with most of the common behaviours needed. It manages all 'Design Patterns' from the XSP man page plus several other useful tag styles.
A tag "<yourNS:foo>" will trigger a call to sub "foo" during the closing tag event. What happens in between can be configured in many ways using for subs and variables get created by replacing any non-alphanumeric character in the original tag or attribute to underscores. For example, 'get-id' becomes 'get_id'.
The called subs get passed 3 parameters: The parser object, the tag name, and an attribute hash. This hash only contains XML attributes declared using the 'attrib()' function attribute. (Try not to confuse these two meanings of 'attribute' - unfortunately XML and Perl both call them that way.) All other declared parameters get converted into local variables with prefix 'attr_'.
If a sub has any result attribute ('node', 'expr', etc.), it gets called in list context. If neccessary, returned lists get converted to scalars by joining them without separation. Plain subs (without result attribute) inherit their context and have their return value left unmodified.
If more than one handler matches a tag, the following rules determine which one is chosen. Remember, though, that only tags in your namespace are considered.
Apache::AxKit::Language::XSP contains a few handy utility subs:
Parameters to attributes get handled as if 'q()' enclosed them. Commas separate arguments, so values cannot contain commas..
In an expression context, passes on the unmodified return value.
These may appear more than once and modify output behaviour.
nodeAttr(name,expr,...)
Adds an XML attribute named
name to all generated nodes.
expr gets evaluated at run time. Evaluation happens once for each generated node. Of course, this tag only makes sense with 'node()' type handlers.
These tags specify how input gets handled. Most may appear more than once, if that makes sense.
attrib(name,...)
Declares
name as a (non-mandatory) XML attribute. All attributes declared this way get passed to the handler subs in the attribute hash.
child(name,...)
Declares a child tag
name. It always lies within the same namespace as the taglib itself. The contents of the tag, if any, get saved in a local variable named $attr_
name. the handler sub, instead of adding it to the enclosing element. Non-text nodes will not work as expected.
childStruct(spec)
Marks this tag to take a complex xml fragment as input. The resulting data structure is available as %_ in the sub. Whitespace is always preserved.
spec has the following syntax:
specconsists of a list of tag names, separated by whitespace.
specand a closing '}' must follow.
Example:
sub:
set_permission : childStruct(add{@permission{$type *name} $target $comment(lang)(day)} remove{@permission{$type *name} $target})
XML:
>
Result: a call to set_permission with %_ set like this:
%_ = ( add => { permission => [ { type => "user", name => 'foo' }, { type => "group", name => 'bar' }, ], target => '/test.html', comment => { 'en' => { 'Sun' => 'Test entry', 'Wed' => 'Test entry 2' }, 'de' => { '' => 'Testeintrag' }, } }, remove => { permission => [ { type => "user", name => 'baz' }, ], target => '/test2.html', }, );
See AxKit::XSP::Sessions and AxKit::XSP::Auth source code for full-featured examples..
Jörg Walter <jwalt@cpan.org>
Copyright (c) 2002 Jörg Walter. Documentation All rights reserved. This program is free software; you can redistribute it and/or modify it under the same terms as AxKit itself.
AxKit, Apache::AxKit::Language::XSP, Apache::AxKit::Language::XSP::TaglibHelper | http://search.cpan.org/dist/AxKit/lib/Apache/AxKit/Language/XSP/SimpleTaglib.pm | CC-MAIN-2016-26 | refinedweb | 572 | 59.8 |
In this Django tutorial, you will learn how to get data from get request in Django.
When you send a request to the server, you can also send some parameters. Generally, we use a GET request to get some data from the server. We can send parameters with the request to get some specific data.
For example, you can think of an e-commerce website where you can see a list of products. Now to see the details of a specific product, you send the id associated with that product as a GET parameter.
These GET parameters are parsed at the server and the server returns a response page showing the details of that particular product.
You can parse the GET parameters from the request.GET object in Python Django. You have to define the name of the HTML control inside this object to parse the value. For example, you can use an HTML textbox to send a parameter.
In this article, you will see some examples of how GET parameters work in Django.
- Django get data from get request example
- Add two numbers in django
Django get data from get request example
Now I will start with a simple example. I have created a simple HTML page where I will take input using a textbox. The HTML source file is:
<!DOCTYPE html> <html lang="en"> <head> <meta charset="UTF-8"> <meta content="IE=edge"> <meta name="viewport" content="width=device-width, initial-scale=1.0"> <title>Demo</title> </head> <body> <div> <form action = "result"> Enter your parameter : <input type="text" name="result" placeholder="Your input value"><br><br> <input type="submit"> </form><br> </div> </body> </html>
Now this HTML form will redirect the request to the result endpoint. I have defined this endpoint in the urls.py file of the application. The urls.py file is:
from django.urls import path from . import views urlpatterns = [ path('',views.index), path('result', views.result) ]
This endpoint will execute the result function in the view.py file. You can see the views.py file below.
from django.shortcuts import render def index(request): return render(request, 'index.html') def result(request): result = request.GET['result'] return render(request, 'result.html', {'result':result})
You can see in the file how I have implemented the result function to parse the GET parameter. After parsing the parameter, I have returned it to result.html template that will be rendered when the function is executed.
The above is the output of the first HTML page that I mentioned above.
When the Submit button is clicked, the request is made to the result endpoint along with the GET parameter as shown below in the image:
You can see that the value in the GET parameter got rendered.
In this way, you can get the value of a GET parameter in Django.
Read How to install Django
Add two numbers in django
Now let me show you how to add two numbers where I will add two GET parameters received from a request in Django.
For this, first of all, I will need an HTML page, where I will place my input controls to get data. Also, I will return the result on the same page. The HTML file is:
<!DOCTYPE html> <html lang="en"> <head> <meta charset="UTF-8"> <meta content="IE=edge"> <meta name="viewport" content="width=device-width, initial-scale=1.0"> <title>Demo</title> </head> <body> <div> <form action = "sum"> Enter First Number: <input type="text" name="first" placeholder="First Number"><br><br> Enter Second Number: <input type="text" name="second" placeholder="Second Number"><br><br> <input type="submit"> </form><br> </div> </body> </html>
You can see that I have redirected this page to the sum endpoint. Now I need to configure the urls.py file to define the path and the function mapping.
from django.urls import path from . import views urlpatterns = [ path('',views.index), path('sum', views.sum1) ]
You can see in the above file that the sum1() function in the views.py file will be executed when a request is made to the sum endpoint.
Now the views.py file looks like this:
from django.shortcuts import render def index(request): return render(request, 'index.html') def sum1(request): num1 = int(request.GET['first']) num2 = int(request.GET['second']) result = num1 + num2 return render(request, 'sum.html', {'result':result})
Here you can see how I have parsed the GET parameters and returned their sum to the sum.html template using the sum1() function.
You can parse the GET parameters from the request.GET object. You have to define the name of the input control inside this object to parse the value.
Now let us see the execution of this application.
- First of all, I will make a request to the root endpoint:
- Then I will input two numbers and click on Submit.
- Clicking on the Submit button will send a GET request to the sum endpoint along with the two parameters. You can see the GET parameters in the URL.
- The application will return the result as shown in the below image:
Hence, with this example, you might have learned how GET parameters work in a Django application.
You may like the following Django tutorials:
- Python Django length filter
- Python Django set timezone
- Python Django format date
- Python Change Django Version
- If statement in Django template
- Get URL parameters in Django
- How to Get Current time in Django
- Python Django vs Pyramid
Thus, you might have learned how you can get the data from a GET request as a parameter.
- Django get data from get request example
- Add two numbers in django
Entrepreneur, Founder, Author, Blogger, Trainer, and more. Check out my profile. | https://pythonguides.com/get-data-from-get-request-in-django/ | CC-MAIN-2022-21 | refinedweb | 952 | 66.74 |
.NET Interview Questions – Part 5
1. Which method do you invoke on the DataAdapter control to load your generated dataset with data?
DataAdapter’s fill () method is used to fill load the data in dataset.
2. What is the purpose of an Assembly?
An assembly controls many aspects of an application. The assembly handles versioning, type and class scope, security permissions, as well as other metadata including references to other assemblies and resources. The rules described in an assembly are enforced at runtime.
3..
4. What are the types of Authentication?
There are 3 types of Authentication.
Windows authentication
Forms authentication
Passport authentication.
5. What is a Literal Control?
The Literal control is used to display text on a page. The text is programmable. This control does not let you apply styles to its content.
6. What are the namespace available in .net?
Namespace is a logical grouping of class.
System
System.Data
System.IO
System.Drawing
System.Windows.Forms
System.Threading
7. What is Side-by-Side Execution?
The CLR allows any versions of the same-shared DLL (shared assembly) to execute at the same time, on the same system, and even in the same process. This concept is known as side-by-side execution.
8. What is the difference between System.String and System.StringBuilder classes?
System.String is immutable, System.StringBuilder was designed with the purpose of having a mutable string where a variety of operations can be performed.
9. What is the use of JIT ?
JIT (Just – In – Time) is a compiler which converts MSIL code to Native Code (ie. CPU-specific code that runs on the same computer architecture).
10. What is the difference between early binding and late binding?
Calling a non-virtual method, decided at a compile time is known as early binding. Calling a virtual method (Pure Polymorphism), decided at a runtime is known as late binding.
11. What are the different types of Caching?
There are three types of Caching :
Output Caching
Fragment Caching
Data Caching.
12. What is Reference type and value type?
Reference Type : Reference types are allocated on the managed CLR heap, just like object types. A data type that is stored as a reference to the value’s location..
13. What is Delegates?
Delegates are a type-safe, object-oriented implementation of function pointers and are used in many situations where a component needs to call back to the component that is using it.
14..
15. What is a Static class?
Static class is a class which can be used or accessed without creating an instance of the class.
16 What is sealed class?
Sealed classes are those classes which can not be inherited and thus any sealed class member can not be derived in any other class. A sealed class cannot also be an abstract class.
17. What are the two main parts of the .NET Framework?
There are the two main parts of the .NET Framework are :
The common language runtime (CLR).
The .NET Framework class library.
18. What is the advantage of using System.Text.StringBuilder over System.String?
StringBuilder is more efficient in cases where there is a large amount of string manipulation. Strings are immutable, so each time it’s being operated on, a new instance is created.
19. What is reflection?
All .NET compilers produce metadata about the types defined in the modules they produce. This metadata is packaged along with the module (modules in turn are packaged together in assemblies), and can be accessed by a mechanism called reflection.
20..
21. What is the difference between Compiler and Interpreter?
Compiler :A compiler is a program that translates program (called source code) written in some high level language into object code.
Interpreter:An interpreter translates high-level instructions into an intermediate form, which it then executes. Interpreter analyzes and executes each line of source code in succession, without looking at the entire program; the advantage of interpreters is that they can execute a program immediately. .
22. What is a class?
Class is concrete representation of an entity. It represents a group of objects, which hold similar attributes and behavior. It provides abstraction and encapsulations.
23. What is an Object?
Object represents/resembles a Physical/real entity. An object is simply something you can give a name.
24 What is Abstraction?
Hiding the complexity. It is a process of defining communication interface for the functionality and hiding rest of the things.
25. How do you convert a string into an integer in .NET?
Int32.Parse(string)
Convert.ToInt32()
26..
27. What Is Boxing And Unboxing?
Boxing :Boxing is an implicit conversion of a value type to the reference type.
Examples : Stuct Type, Enumeration Type
UnBoxing :Unboxing is an explicit conversion from the reference to a value type.
Examples : Class , Interface.
28. How do you create threading in .NET? What is the namespace for that?
System.Threading.Thread
29. What is Method overloading?
Method overloading occurs when a class contains two methods with the same name, but different signatures.
30. What is Method Overriding?
An override method provides a new implementation of a member inherited from a base class. The method overridden by an override declaration is known as the overridden base method.
31. What is difference between inline and code behind?
Inline code written along side the html in a page. Code-behind is code written in a separate file and referenced by the .aspx page.
32. What is an abstract class?
An abstract class is a class that must be inherited and have the methods overridden. An abstract class is essentially a blueprint for a class without any implementation.
33. What is the difference between datagrid and gridview?
Datagrid is used in windows application and gridview is used in web and in datagrid we cannot write the code for datagrid properties where as for grid view we can write the code like template column item template etc this will not be done in datagrid.
34. What is the use of System.Diagnostics.Process class?
The System.Diagnostics namespace provides the interfaces, classes, enumerations and structures that are used for tracing.
The System.Diagnostics namespace provides two classes named Trace and Debug that are used for writing errors and application execution information in logs.
35. What is the difference between static or dynamic assemblies?
Assemblies can be static or dynamic.
Static assemblies :can include .NET Framework types (interfaces and classes), as well as resources for the assembly (bitmaps, JPEG files, resource files, and so on).Staticassemblies are stored on disk in portable executable (PE) files.
Dynamic assemblies :which are run directly from memory and are not saved to disk before execution. You can save dynamic assemblies to disk after they have executed.
36. What are the difference between Structure and Class?
Structures are value type and Classes are reference type.
Structures can not have contractors or destructors. Classes can have both contractors and destructors.
Structures do not support Inheritance, while Classes support Inheritance
37..
38. What is the use of ErrorProvider Control?
The ErrorProvider control is used to indicate invalid data on a data entry form.
39. How many languages .NET is supporting now?
When .NET was introduced it came with several languages. VB.NET, C#, COBOL and Perl, etc. 44 languages are supported.
40. How many .NET languages can a single .NET DLL contain?
Many.
41. What is metadata?
Metadata means data about the data i.e., machine-readable information about a resource, . Such information might include details on content, format, size, or other characteristics of a data source. In .NET, metadata includes type definitions, version information, external assembly references, and other standardized information.
42. What is the difference between Custom Control and User Control?
Custom Controls are compiled code (Dlls), easier to use, difficult to create, and can be placed in toolbox. Drag and Drop controls. Attributes can be set visually at design time.
AUser Control is shared among the single application files.
43. What keyword is used to accept a variable number of parameter in a method?
“params” keyword is used as to accept variable number of parameters.
44. What is boxing and unboxing?
Implicit conversion of value type to reference type of a variable is known as BOXING, for example integer to object type conversion.
Conversion of reference type variable back to value type is called as UnBoxing.
45. What is object?
An object is an instance of a class. An object is created by using operator new. A class that creates an object in memory will contain the information about the values and behaviours (or methods) of that specific object.
46. Where are the types of arrays in C#?
Single-Dimensional
Multidimensional
Jagged arrays.
47. What is the difference between Object and Instance?
An instance of a user-defined type is called an object. We can instantiate many objects from one class.
An object is an instance of a class.
48. What are different types of JIT ?
There are three types of jit :
pre – jit
Econo – jit
Normal – jit.
49. What is difference between C# And Vb.net?
C# is case sensitive while VB is not case sensitive.
vb.net does not support xml while c# support xml
vb.net supports with constructor while c# do not.
50. What does assert() method do?
In debug compilation, assert takes in a Boolean condition as a parameter, and shows the error dialog if the condition is false. The program proceeds without any interruption if the condition is true.
51. Why string are called Immutable data Type?
The memory representation of string is an Array of Characters, So on re-assigning the new array of Char is formed & the start address is changed . Thus keeping the Old string in Memory for Garbage Collector to be disposed.
52. What is the difference between Convert.toString and .toString() method?
Convert.toString handles null while i.tostring() does not handles null.
53. How many types of Transactions are there in COM + .NET ?
There are 5 transactions types that can be used with COM+.
Disabled
Not Supported
Supported
Required
Required New
54. What is a DataTable?
A DataTable is a class in .NET Framework and in simple words a DataTable object represents a table from a database.
55. How many namespaces are in .NET version 1.1?
124.
56. What is a DataSet?
A DataSet is an in memory representation of data loaded from any data source
57..
58. What is the differnce between Managed code and unmanaged code?
Managed Code: Code that runs under a “contract of cooperation” with the common language runtime. Managed code must supply the metadata necessary for the runtimeto).
59. What is difference between constants, readonly and, static?
Constants: The value can’t be changed.
Read-only: The value will be initialized only once from the constructor of the class.
Static: Value can be initialized once.
60. What is the difference between Convert.toString and .toString() method?
Convert.toString handles null while i.tostring() does not handles null.
61. What are the advantages of VB.NET?
The main advantages of .net are :
.NET is a language independent
Automatic memory management(garbage collection)
Disconnected architecture
Object Oriented.
62. What is strong-typing versus weak-typing?
Strong type is checking at the variables in compile time.Weak typing is checking the variables at run-time.
63. What is the root class in .Net?
system.object is the root class in .net .
64. What is the maximum size of the textbox?
65536
65. What is managed code execution?
The .Net framework loads and executes the .Net applications, and manages the state of objects during program execution. This also provides automatically garbage collections.
66. What is the strong name in .net assembly?
Strong Name is similar to GUID (It is supposed to be unique in space and time).
In COM components. Strong name is only needed when we need to deploy assembly in GAC.
Strong names use public key cryptography (PKC) to ensure that no one can spoof it. PKC use public key and private key concept. Following are the step to generate a strong name and sign an assembly:
67. What are the types of comment in C#?
There are 3 types of comments in C#.
Single line (//)
Multi (/* */)
Page/XML Comments (///).
68. What are the namespaces used in C#.NET?
Namespace is a logical grouping of class.
using System;
using System.Collections.Generic;
using System.Windows.Forms;
69. What are the characteristics of C#?
There are several characteristics of C# are :
Simple
Type safe
Flexible
Object oriented
Compatible
Consistent
Interoperable
Modern
70. How to run a Dos command in Vb.net?
Shell(“cmd.exe /c c:first.exe < in.txt > out.txt”)
71. What are the assembly entry points?
An assembly can have only one entry point from DllMain, WinMain or Main.
72..
73. What are the types of Authentication?
There are 3 types of Authentication.
Windows Authentication
Forms Authentication
Passport Authentication
.74. What namespaces are necessary to create a localized application?
System.Globalization
System.Resources
75. Which namespaces are used for data access?
System.Data
System.Data.OleDB
System.Data.SQLClient
76. What is a SESSION and APPLICATION object?
Session object store information between HTTP requests for a particular user.
Session variables are used to store user specific information where as in application variables we can’t store user specific information.
while application object are global across users.
77. What is static constructor?
A static constructor is used to initialize a class. It is called automatically to initialize the class before the first instance is created or any static members are referenced.
78. What is C#?
C# (pronounced “C sharp”) is a simple, modern, object-oriented, and type-safe programming language.It will immediately be familiar to C and C++ programmers.C# combines the high productivity of Rapid Application Development (RAD) languages.
79. What are the different categories of inheritance?
Inheritance in Object Oriented Programming is of four types:
Single inheritance : Contains one base class and one derived class.
Hierarchical inheritance : Contains one base class and multiple derived classes of the same base class.
Multilevel inheritance : Contains a class derived from a derived class.
Multiple inheritance : Contains several base classes and a derived class.
80. What are the basic concepts of object oriented programming?
It is necessary to understand some of the concepts used extensively in object oriented programming.These include
Objects
Classes
Data abstraction and encapsulation
Inheritance
Polymorphism
Dynamic Binding
Message passing.
81. Can you inherit multiple interfaces?
Yes. Multiple interfaces may be inherited in C#.
82. What is inheritance?
Inheritance is deriving the new class from the already existing one.
83. Define scope?
Scope refers to the region of code in which a variable may be accessed.
84. What are the modifiers in C#?
Abstract
Sealed
Virtual
Const
Event
Extern
Override
Readonly
Static
New
85. What are the types of access modifiers in C#?
Access modifiers in C# are :
public
protect
private
internal
internal protect
86. Define destructors?.
87..
88. Define Constructors?
A constructor is a member function with the same name as its class. The constructor is invoked whenever an object of its associated class is created.It is called constructor because it constructs the values of data members of the class.
89. What is encapsulation?
The wrapping up of data and functions into a single unit (called class) is known as encapsulation. Encapsulation containing and hiding information about an object, such as internal data structures and code.
90. Does c# support multiple inheritance?
No,its impossible which accepts multi level inheritance.
91. What is ENUM?
Enum are used to define constants.
92. What is a data set?
A DataSet is an in memory representation of data loaded from any data source.
93..
94. Define polymorphism?
Polymorphism means one name, multiple forms. It allows us to have more than one function with the same name in a program.It allows us to have overloading of operators so that an operation can exhibit different behaviours in different instances.
95. What is Jagged Arrays?
A jagged array is an array whose elements are arrays.The elements of a jagged array can be of different dimensions and sizes.A jagged array is sometimes called an array–of–arrays.
96. what is an abstract base class?
An abstract class is a class that is designed to be specifically used as a base class. An abstract class contains at least one pure virtual function.
97. How is method overriding different from method overloading?
When overriding a method, you change the behavior of the method for the derived class. Overloading a method simply involves having another method with the same name within the class.
98. What is the difference between ref & out parameters?
An argument passed to a ref parameter must first be initialized. Compare this to an out parameter, whose argument does not have to be explicitly initialized before being passed to an out parameter.
99. What is the use of using statement in C#?
The using statement is used to obtain a resource, execute a statement, and then dispose of that resource.
100. What is serialization?
Serialization is the process of converting an object into a stream of bytes.De-serialization is the opposite process of creating an object from a stream of bytes.Serialization / De-serialization is mostly used to transport objects.
101. What are the difference between Structure and Class?
Structures are value type and Classes are reference type
Structures can not have contractors or destructors.
Classes can have both contractors and destructors.
Structures do not support Inheritance, while Classes support Inheritance.
102..
103. What is Delegates?
Delegates are a type-safe, object-oriented implementation of function pointers and are used in many situations where a component needs to call back to the component that is using it.
104.. | http://www.lessons99.com/dot-net-interview-questions-and-answers-5.html | CC-MAIN-2019-09 | refinedweb | 2,960 | 61.02 |
To be able to display an image on Holusion’s products using Unity, the first step is to flip camera images.
We provide a script to do this. It should be attached on every camera on he scene. (Download it here)
using UnityEngine; using System.Collections; [RequireComponent (typeof (Camera))] public class HSymmetry : MonoBehaviour { void OnPreCull () { Matrix4x4 scale; if(camera.aspect >2){ scale = Matrix4x4.Scale (new Vector3 (-1, 1, 1)); }else{ scale = Matrix4x4.Scale (new Vector3 (1, -1, 1)); } camera.ResetWorldToCameraMatrix (); camera.ResetProjectionMatrix (); camera.projectionMatrix = camera.projectionMatrix * scale; } void OnPreRender () { GL.SetRevertBackfacing (true); } void OnPostRender () { GL.SetRevertBackfacing (false); } }
It’s simply making an horizontal flip on cameras. Warning : this script is based on camera ratio to detect flip direction. It might require some adjustments to work properly on other projects.
Once image is properly setup, next step should be to place cameras on screen. As unity wouldn’t allow non-rectangular cameras, your cameras are probably going to overlap. It’s up to the developper to ensure nothing si going to be displayed on those surfaces.
one must disable resolution dialog to prevent it from blocking app from launching.
Go to
Edit -> Project Settings -> Player and in Resolution and Presentation, check
disable Display Resolution Dialog.
At the same place, activate default to fullscreen mode.
Use the
linux x86_64 build option.
It will provide you with a file *
On Windows :
right click -> Send to -> Compressed folder.
Then transfer this archive to your product. | https://dev.holusion.com/en/platform/unity_4/index | CC-MAIN-2018-22 | refinedweb | 241 | 52.66 |
0
I am having a problem with left alignment after adding a second JPanel. Both JPanels are using the BoxLayout with the first set to Y_AXIS and the second set to X_AXIS. If you comment out the add(twoPanel) it results in the desired left alignment. I don't understand why the alignment changes after adding the second JPanel. Thanks in advance.
import javax.swing.*; public class BoxManager extends JApplet{ JButton one, two, three; public void init() { one = new JButton("one"); two = new JButton("two"); three = new JButton("three"); setLayout(new BoxLayout(getContentPane(), BoxLayout.Y_AXIS)); add(new JLabel("This is the contentPane.")); JPanel twoPanel = new JPanel(); twoPanel.setLayout(new BoxLayout(twoPanel, BoxLayout.X_AXIS)); twoPanel.add(new JLabel("Second Panel")); twoPanel.add(new JTextField("Text Field")); add(twoPanel); // comment this line out add(one); add(two); add(three); } } | https://www.daniweb.com/programming/software-development/threads/355797/jpanel-boxlayout-nested-jpanel-alignment-issue | CC-MAIN-2016-50 | refinedweb | 137 | 52.66 |
I started using TDD to improve my quality and the design of my code but I usually encounter a problem. I'll try to explain it through a simple example:
I try to implement a simple application using passive view design. This means that I try to make the view as dumb as possible. Let's consider an application, where the GUI has a button and a label. If the user presses the button, a file get created with one random line in it. Then the label displays whether the creation was successful or not.
The code might look like this:
Instantiation looks like this:
As you can clearly see, there's a circular dependency in the design.
I usually try to avoid using events, I don't like testing with them and I think this type of design is more self explanatory as it clearly states what are the relation of the classes.
I'v heard of IoC design style but I'm not really familiar with it.
What are my "sticking point" in TDD regarding this issue? I always end up running into this problem and I want to learn a proper pattern or principle to avoid it in the future.
I would get rid of the GUIEventListener class. Seems like a overkill to me.
Since the view knows when the button is clicked, let the view share its knowledge with the world:
public interface IView { void DisplayMessage(string message); void AddButtonClickHandler(Action handler); }
The FileSaver is even simpler:
public interface IFileSaver { Boolean SaveFileWithRandomLine(); }
Just for fun, let's create an interface for the controller:
public interface IController { }
And the controller implementation:
public class Controller : IController { public Controller(IView view, IFileSaver fileSaver) { } }
OK, let's write the tests (I am using NUnit and Moq):
[TestFixture] public class ControllerTest { private Controller controller; private Mock<IFileSaver> fileSaver; private Mock<IView> view; private Action ButtonClickAction; [SetUp] public void SetUp() { view = new Mock<IView>(); //Let's store the delegate added to the view so we can invoke it later, //simulating a click on the button view.Setup((v) => v.AddButtonClickHandler(It.IsAny<Action>())) .Callback<Action>((a) => ButtonClickAction = a); fileSaver = new Mock<IFileSaver>(); controller = new Controller(view.Object, fileSaver.Object); //This tests if a handler was added via AddButtonClickHandler //via the Controller ctor. view.VerifyAll(); } [Test] public void No_button_click_nothing_happens() { fileSaver.Setup(f => f.SaveFileWithRandomLine()).Returns(true); view.Verify(v => v.DisplayMessage(It.IsAny<String>()), Times.Never()); } [Test] public void Say_it_worked() { fileSaver.Setup(f => f.SaveFileWithRandomLine()).Returns(true); ButtonClickAction(); view.Verify(v => v.DisplayMessage("It worked!")); } [Test] public void Say_it_failed() { fileSaver.Setup(f => f.SaveFileWithRandomLine()).Returns(false); ButtonClickAction(); view.Verify(v => v.DisplayMessage("It failed!")); } }
I think the tests are pretty clear, but I don't know if you know Moq.
The full code for the Controller could look like the following (I just hammered it into one line, but you don't have to, of course):
public class Controller : IController { public Controller(IView view, IFileSaver fileSaver) { view.AddButtonClickHandler(() => view.DisplayMessage(fileSaver.SaveFileWithRandomLine() ? "It worked!" : "It failed!")); } }
As you can see, this way you are able to test the controller, and we don't have even startet to implement the View or the FileSaver. By using interfaces, they don't have to know each other.
The View knows nothing (except that somebody may be informed when the Button is clicked), it is as dump as possible. Note that no events pollute the interface, but if you're going to implement the View in WinForms, nothing stops you from using events inside the View-implementation. But nobody outside has to know, and so we don't need to test it.
The FileSaver just saves files and tells if it failed or not. It doesn't know about controllers and views.
The Controller puts everything together, without knowing about the implementations. It just knows the contracts. It knows about the View and the FileSaver.
With this design, we just test the behaviour of the controller. We ask: 'If the button was clicked, was the view informed that it should display that information?' and so on. You could add more tests to check if the Save-Method on the FileSaver were called by the Controller if you like.
A nice resource on this topic is The Build Your Own CAB Series by Jeremy Miller
- GUIController class: UpdateLabel method which gets called from the FileSaver class's SaveFile
...
- FileSaver's ctor: FileSaver(GUIController controller)
Here's the flaw in your design. The FileSaver should be agnostic of who calls it (read: shouldn't hold a reference to the layer underneath it), it should just do its job i.e. save a file and inform the world how the operation went - typically through a return value.
This not really related to TDD, except maybe TDD would have forced you to think in terms of the most basic behavior that is expected from a FileSaver and realize it is not its responsibility to update a label (see Single Responsibility Principle).
As for the other parts of your system, like Roy said they'll most often be difficult to test in TDD except for the Controller.
Unit testing UIs is often a problem, for many reasons... The way I've done it in the past few years on MVC projects is to simply unit-test only the Controllers and to later test the application hands-on.
Controllers can be unit-tested easily because they are logic classes just like any other and you can mock out the dependencies. UIs, especially for Web applications, are much tougher. You can use tools such as Selenium or WatiN but that is really integration/acceptance testing rather than unit testing.
Here's some further reading:
How to get started with Selenium Core and ASP.NET MVC
This is how ASP.NET MVC controller actions should be unit tested
Good luck! | http://www.dlxedu.com/askdetail/3/20b4908d93a8f064d1e71ff131b44cf8.html | CC-MAIN-2018-22 | refinedweb | 975 | 62.27 |
Tales on my home automation project
To make the experience fit your profile, pick a username and tell us what interests you.
We found and based on your interests.
My first attempt to create an animation in OpenSCAD: How the modules will fit into the enclosure:
As I'm thinking further, it become obvious that I want to create a modular system. Something like a backplane, with equaly sized modules, the sensors, the actuators, the MCU, and the power source also reside on separate modules.
Even I went further. Designed a box, the backplane and the module form factor already. The modules will be implemented in 5x5cm form factor. Why? Because this can be cheaply ordered from the Chinese PCB factories - ~$10 for 10pcs.
The planed module connector has 1 SPI, 1 I2C and 8 GPIO connections. This is 17 pins together with the two power rails (3.3V and 5V). A 17 pin 0.1" single row header fits onto the 5cm edge of the board.
As I'm going further with it, I came up with quite a few modules, I want to implement. The list isn't complete, and the modules are subject to change:
In addition I plan to design Backplane for 4, 6 and 8 modules. The backplane will have an additional connector, what is able to connect the whole thing to a BeagleBone CAPE. This means, I'll able to use this modular system for the BBG based OpenHAB central also.
I also adding parametric 3D printable enclosure for it.
Some of the modules already designed. Although it can be ordered from a fab, but all of the designs ready until today uses single sided board, what can be easily created at home with toner transfer. Most (not all) of the designs uses exclusively trough hole components, what make the life easier.
No, I don't want to measure the mains voltage (yet) here.
I want to build my garage lamp switch as minimally invasive to our current setup, as I can.
What does that mean?
The lamp today is switched by two wall switches (one inside the garage, one in the house at the garage door) in alternating configuration. I want to keep the possibility to operate the lamp with this switches. As I'm not an electrician, I not really want to touch the wires, switches inside the wall (it would be a dirty job anyways). Fortunately I've continuous mains source also at the lamp itself.
So the plan is to connect the remote switching unit to the uninterrupted source and sense if the switches are on or off. This signal will be used to alternate the lamp and not to determine the status. Every time it change (from on to off or off to on), will change the light state.
For the sensing I need some electronics. The mains is not a child play so the proper isolation is crucial.
I want to stick to the simplest circuit possible:
A capacitive non-isolated power supply (two caps, two resistors, a diode and a zener) plus an optoisolator. The whole thing cost around $1.
Here is the design:
As I mentioned earlier I've several Conrad (ELV) FHT80B thermostats, with radio controlled valves, and I've a CUL v3.2 device. Now I'm try to connect these devices to my OpenHAB.
Before I started to work on this project I updated the firmware in the CUL to the one suggested for OpenHAB from here:
The references for the connection can be find here:
Setting up the binding:
1. Copy the required addons into the /opt/openhab/addons folder:
org.openhab.io.transport.cul-1.8.1.jar org.openhab.binding.fht-1.8.1.jar
If you have other devices than the heating from the FS20 family you may need the
org.openhab.binding.fs20-1.8.1.jar
also.
2. Edit the configuration file /opt/openhab/configurations/openhab.cfg. Add the following to the end:
fht:device=serial:/dev/ttyACM0 fht:baudrate=38400 fht:parity=0 fht:husecode=XXXX
The housecode is your choice. It is a four digit hexadecimal code. And it is absolutely necessary. If you don't provide it here, your system will not start without even a sign of the problem. You can just see the error message when you switch on the debug logging.
This code will be the code of your CUL device and has no connection with the codes of the FHT80b devices. In addition if you want to get readings from your devices, you have to be sure, that the "CEnt" setting in each of FHT80B devices is set to "nA".
3. You have to create some items in your items file for your FHT80B device. Something like this:
Number fhtRoom1Desired "Desired-Temp. [%.1f °C]" { fht="housecode=552D;datapoint=DESIRED_TEMP" } Number fhtRoom1Measured "Measured Temp. [%.1f °C]" { fht="housecode=552D;datapoint=MEASURED_TEMP" } Number fhtRoom1Valve "Valve [%.1f %%]" { fht="housecode=552D;address=00;datapoint=VALVE" } Switch fhtRoom1Battery "Battery [%s]" { fht="housecode=552D;datapoint=BATTERY" } Contact fhtRoom1Window "Window [MAP(en.map):%s]" { fht="housecode=52FB;address=7B;datapoint=WINDOW" }The housecode in the example is the code of the FHT80B device itself. You can read it from the device, the code1 is the first part, the code2 is the second part, and you must convert your readings to hexadecimal.
4. Now you have to insert the read values into your sitemap:
Frame label="Heating" { Setpoint item=fhtRoom1Desired minValue=6 maxValue=30 step=0.5 Text item=fhtRoom1Measured Text item=fhtRoom1Valve Text item=fhtRoom1Battery Text item=fhtRoom1Window }The result is something like this:
Sorry for the Hungarian text, as this is a real data from my house, I'll translate the whole frontend to Hungarian to my family.
One thing to add: Be patient. Getting the data is a time consuming task. Some of the data will just arrive when changed. So it can take several minutes, or even hours, to get your data.
Going further. This is just a copy-paste programming.
I grabbed the Arduino code from here:
Changed the SSID and the password, connected a 150ohm resistor and the first LED I found in my junk box to the corresponding pins of the ESP8266. Downloaded the code, and fired the serial monitor in the Visual Studio:
When I connect to the web server from a browser, I can switch the LED on and off.
Now integrate it to the OpenHAB.
We don't need anything else just modify the previously created rules. First I just added the http calls into the previously created rule, but the result wasn't satisfactory. I was able to control the LED from the remote, but not from the web browser. So I modified it a bit more.
Here is the result:
import org.openhab.core.library.types.* import org.openhab.core.persistence.* import org.openhab.model.script.actions.* rule "GarageLightRemote" when Item KeeLoq_Remote_B changed from CLOSED to OPEN then if(Garage_Light.state == ON) { sendCommand(Garage_Light, OFF) } else { sendCommand(Garage_Light, ON) } end rule "GarageLightOffAction" when Item Garage_Light changed from ON to OFF then sendHttpGetRequest("") end rule "GarageLightOffAction" when Item Garage_Light changed from OFF to ON then sendHttpGetRequest("") end
Here is the circuit:
I'm not a big fan of the whole Arduino ecosystem. The hateful IDE, and the lack of the debug-ability kept far from it.
On the other side I must admit that it is evolved since my first encounter, and it looks like something unavoidable.
I need some remote sensors, actuators, for my OpenHAB system. I want to build most of them. As I looked around, I seen two easily usable solution:
ESP8266 based WiFi modules
MySensors.org NRF24L01+ and Arduino based mesh network
I want to try both of them. As I not really have Arduinos on hand yet (actually I've few but different manufacturers, different hassles, and not really fits into this project), so I want to start to play with the ESP8266.
As I heard the LUA interpreter of it not really fits into bigger project, so I chosen the Arduino framework for it. Actually I'm just learning it, so I don't know where this lead to.
My first goal to be able to switch a GPIO of the ESP8266 via a HTTP REST API interface.
As I mentioned above I really hate the Arduino IDE. I run into various problems with it when I tried to configure and compile the Marlin firmware for my 3D printer. Then I found that you can use the trusted Microsoft Visual Studio (I'm using it for my daily work and a few contract projects) with this addon:
for the task.
Putting together the environment:
1. Install an Arduino IDE. Can be downloaded from here:
2. Install the Microsoft Visual Studio 2015. I'm using the Professional version, but there is a free version, can be downloaded from here:
Make sure that you enable the C++ component
3. Start the Visual Studio. Go into the Tools/Extensions and Updates menu. Select the Online in the left pane. Search for Arduino
Download the Arduino IDE for Visual Studio
Install it, and restart Visual Studio as instructed.
4. After restart the Arduino configuration will appear. You should set the Arduino IDE location and add the ESP8266 Board Manager URL:
Press OK
5. On the menu bar next to the Arduino board selector click on the magnifying glass icon
select the Manage Boards tab
Expand the esp8266 and click on the Version 2.1.0
On the popup dialog Click OK for the installation
And this is the point where I run into a great trouble. My machine is part of a domain, I'm running things as a regular user, means I've limited permissions (of course I'm admin on the other side, but I should specify it when I need it). It looks like the installer have a bug. Because I'm domain member and when I installed the machine at the beginning the user name I gave is the same I use in the domain. The result of this, that my profile name in the c:\Users folder is looks like this: <username>.<domainname>. This caused no problems in the past, but not here. The installer converted the dot between the name and the domain to underscore in some places and gave me an access denied error - yes because the folder in question didn't exists and I've no right to create it.
So I created the folder as admin and granted permission for me. After this the install run smoothly, but when I wanted to compile my first code, it failed. The error message wasn't too informative. After several trials, even using Process Monitor (Microsoft, formally Sysinternals tool), I was able to copy the files left in the temp folder under the underscore user folder I created, back into its place. Now it works, several hours wasted.
6. Try this out
I wanted to have a fast test, so I just created a demo project. The blink from the Visual Studio project list.
After selecting the correct board (NodeMCU v0.9 in my case) and the serial port, just uploaded the code and it started to work.
It looks like the ESP8266 environment is ready for more complicated tasks.
To be able to use the rolling code receiver in my OpenHAB, the first thing we need to enable the GPIO binding in the OpenHAB. There is a fairly good documentation of it in the OpenHAB wiki:
I mostly done, what is in it, but some modifications were needed.
1. Install the native JNA library:
apt-get install libjna-java
2. Modify /opt/openhab/start.sh
Add -Djna.boot.library.path=/usr/lib/jni into the list of java command line parameters
3. Modifiy the /etc/init.d/openhab
Add -Djna.boot.library.path=/usr/lib/jni into the DAEMON_ARGS parameters
and add the commands to unexport the gpio pins on stopping the daemon
4. From the previously downloaded distribution-1.8.1-addons.zip (part of the OpenHAB downloads) expand the following files:
org.openhab.io.gpio org.openhab.binding.gpiointo the /opt/openhab/addons directory
5. Restart openhab:
/etc/init.d/openhab restart
Now the OpenHAB is ready to handle the GPIO. We need to make the configuration changes, to be able to use it.
6. Add the required items
The GPIO inputs can be represented by Contacts in the OpenHAB, so edit one of the files in the /opt/openhab/configurations/items directory (I already dropped the demo config from here, so I've the config of my house) and add the following:
Contact KeeLoq_Remote_A "Remote A [MAP(en.map:%s]" (<ID of your Group>, Windows) { gpio="pin:66"}
Contact KeeLoq_Remote_B "Remote B [MAP(en.map:%s]" (<ID of your Group>, Windows) { gpio="pin:67"}
Contact KeeLoq_Remote_C "Remote C [MAP(en.map:%s]" (<ID of your Group>, Windows) { gpio="pin:69"}
Contact KeeLoq_Remote_D "Remote D [MAP(en.map:%s]" (<ID of your Group>, Windows) { gpio="pin:68"}
When everything is fine you should see the following after the OpenHAB reload the configuration:
And when you push a button on the remote, the closed text change to open, and back when you release.
7. For the lighting it will not be enough
We need to keep the status of the light, as it switched on or off. So I added a Switch into the items config:
Switch Garage_Light "Garage Light" (<ID of your Group>, Lights)
This is only a regular switch, so if you want to control it from the remote you need some rules. You can add it to a rules file in the /opt/openhab/configurations/rules folder:
rule "GarageLightToggle" when Item KeeLoq_Remote_B changed from CLOSED to OPEN then if(Garage_Light.state == ON) { sendCommand(Garage_Light, OFF) } else { sendCommand(Garage_Light, ON) } endAfter this, when you push the B button on the remote, it will switch the web control, on and off
This still not change the physical light. I have to create/buy some actor for this.
[2016.04.16]: When I wrote this article didn't realized, that I used the old location of the cape manager slots file. Corrected.
When you sit in your car and try to enter to your garage, I know it is not fancy, but the most convenient device is the tiny remote you can hang on your keyring (something integrated into your car would be better and fancier, I may add something like this later on).
To be able to remote control the OpenHAB and the garage door/lights through it, I bought this rolling code receiver on the aliexpress:
My plan is to use it for switching the lights in the garage, control the garage doors and a few more (as it has two more buttons - exactly two) tasks.
First I put this onto a breadboard, powered up, and tried out if it works.
It was working for the first try, finding out the different channels for the buttons was easy like a child's play.
The only problem with the device, that it has 5V logic outputs and I want to connect to my BeagleBone Greens GPIO what isn't 5V safe.
I decided to use the simplest level shifter on earth possible. A resistor divider. A 39K and a 68K resistor does its job like champ, if you measure it with a multimeter.
The next part is to setup the GPIO on the BeagleBone. The OpenHAB uses the Linux GPIO Sysfs interface, what is supereasy to use ().
First I tried to read the data from the command line. Connected one channel to the GPIO_60 (P9 pin 12) of the BBG.
On the console the following commands are needed:
echo '60' > /sys/class/gpio/export echo 'in' > /sys/class/gpio/gpio60/direction
After this you can read the pin value by the following command:
cat /sys/class/gpio/gpio60/value
The commands went well, but doesn't matter if I push the button on the remote or not, always get back a 1.
Measured the input with a multimeter. It gives 1.6V even the pin of the receiver is on 0V. This means, that the BBG input pin is pulled up.
From this point I've two choice:
1. Switch of the pull up somehow
2. Use some active circuit as level shifter instead of the resistors.
I went for the first one.
Looked around and it clearly turned out, that changing the pull-up is not part of the linux GPIO framework. Read many things and found that BBG uses Cape Manager Overlays for this task. It was sometimes there, sometimes not, but on the 4.1.x kernel what I'm using is already part of the mainline kernel.
Here are some articles about it:
The overlay repository (you can learn from it, but not necessary for the task here):
The most useful of my findings was this online tool:
It is able to create the overlay you need.
So for switching off the pull-up on Gpio 60 above you have to do the following things:
1. Save this file into the /lib/firmware under the name bspm_p9_12_2f-00A0.dts
/* * This is a template-generated file from BoneScript */ /dts-v1/; /plugin/; /{ compatible = "ti,beaglebone", "ti,beaglebone-black"; part_number = "BS_PINMODE_P9_12_0x2f"; exclusive-use = "P9.12", "gpio1_28"; fragment@0 { target = <&am33xx_pinmux>; __overlay__ { bs_pinmode_P9_12_0x2f: pinmux_bs_pinmode_P9_12_0x2f { pinctrl-single,pins = <0x078 0x2f>; }; }; }; fragment@1 { target = <&ocp>; __overlay__ { bs_pinmode_P9_12_0x2f_pinmux { compatible = "bone-pinmux-helper"; status = "okay"; pinctrl-names = "default"; pinctrl-0 = <&bs_pinmode_P9_12_0x2f>; }; }; }; };2. Compile it:
dtc -O dtb -o /lib/firmware/bspm_P9_12_2f-00A0.dtbo -b 0 -@ /lib/firmware/bspm_P9_12_2f-00A0.dts3. Load it:
echo bspm_P9_12_2f > /sys/devices/platform/bone_capemgr/slots...Read more »
Now here I install OpenHAB as the home automation controller, the brain of the whole system.
The OpenHAB is a java based system, so first we need an Oracle Java. To achieve this you have to add a PPA repository to the system:
BTW. I know that the sudo voodoo is absolutely necessary to keep the security. Even I'm working on my windows machine as a standard user and not admin since XP when the UAC wasn't even planed. But when I put together something on linux or windows and what I'm doing is clearly administrative task, I'm doing it as admin. So the first thing when I install several things, packages, etc. on linux I start with
sudo su
And don't bother to write sudo at beginning of every single line.
1. Add PPA repository of the Oracle Java and install it.
apt-get install software-properties-common add-apt-repository ppa:webupd8team/java apt-get update apt-get install oracle-java8-installe
2. Download OpenHAB files. You will need this ones:
Actually I usually download the files on my Windows machine and transfer them to the linux via WinSCP
3. Extract the files, create configuration
mkdir /opt/openhab unzip distribution-1.8.1-runtime.zip -d /opt/openhab/ unzip distribution-1.8.1-demo.zip -d /opt/openhab/ cp /opt/openhab/openhab_default.cfg /opt/openhab/openhab.cfg
4. As we still in sudo mode we can start the OpenHAB server to test the functionality of our system:
/opt/openhab/start.sh
After waiting for a while, we can test in a browser if it works:
http://<ip address>:8080/openhab.app?sitemap=demo
OK. Our server works, but we not yet finished. Just stop it now.
First: you may realized that I didn't created any specific user for the openhab. There is a reason why.
The Ubuntu not really designed for embedded systems. Therefore no mechanism (what I'm aware of) for regular user to access GPIO (if you know a simple method for it please don't keep inside). On the next part I want to use GPIO. Based on this I plan to run OpenHAB as root.
Second: I need a method to run OpenHAB as daemon So..
5. Download the daemon script from here:
Save as openhab into the /etc/init.d folder
Configure it:
chmod a+x /etc/init.d/openhab update-rc.d openhab defaults
Edit the file with your favorite editor. Most probably you only need to change the RUN_AS from openhab to root
Start it:
/etc/init.d/openhab startNow you can check it again if it still works via a web browser, and you can check the /var/log/openhab.log for errors.
Ok. So let start:
I'm not a linux guru. There are many favors of linux distribution, but I've just some experience with Ubuntu and Debian. In addition I usually not using any linux GUI.
So when I start a linux based project, it means to install an Ubuntu.
As I decided, this project will be based on a Beaglebone Green. Let start to install a 14.04 LTS to it.
The basic information reside here:
1. You need to download the image from here:
(today the 16.04 image already exists, but I stick to the older as it seems more stable), download the image writing tool (Win32DiskImager):
And you need an empty minimum 2GB uSD card.
2. Expand the image to a folder in you machine. For example the 7-Zip will do it. Install the Win32DiskImager.
3. Write the image to a uSD card with the Win32DiskImager
4. When the writing is finished, insert the uSD card into the Beaglebone. Press and hold the boot select button (the one close to the uSD slot) while powering up the board. After short booting sequence you will see the "knight rider pattern" on the LEDs. Wait until it finish and the leds turn on.
5. Connect the board to the ethernet network. If you can figure out the address get by DHCP the steps from 6-8 are optional. Just reboot the board by power cycling it.
6. Install the PC driver for the board (if you have a BeagleBone Green, this is absolutelly necessary as you don't have HDMI connector to connect monitor). The windows driver can be downloaded from here:
64 bit:
32 bit:
This is NOT a serial driver as you expect, but a virtual ethernet driver. You will need an SSH terminal to connect to Beaglebone.
7. Connect the Beaglebone to your PC. If it is was connected already, just cycle the power on it with removing and reconnecting the USB cable, when you disconnect the cable you can remove the uSD card, as you don't need it anymore.
8. If you run ipconfig on your machine a new ethernet interface will appear:
You'll get 192.168.7.1 as a new interface on your machine, the Beaglebone will appear on the 192.168.7.2. You can connect to it with an SSH terminal like PuTTY
9. If you didn't used the USB/Ethernet to log in, just login via the DHCP provided IP address. If you need static IP address edit the /etc/network/interfaces file and setup the address.
10. If you check the DNS settings of the system it is probably not working. Just by editing the /etc/resolv.conf will not resolve the problem, as it gets overwriten on the next reboot.
Edit the /etc/resolvconf/interface-order file and put eth* to the first on the list. This will give you the correct DNS settings.
11. Reboot the board.
12. Install the updates with:
apt-get update
apt-get upgrade
Sure, many things have been done, just not posted. I put this project aside for a while. I think it is just the time to reanimate it, for many reason:
1. Your project
2. Still missing Home Automation in my house
3. Knowledge I collected since on ESP8266
4. Ready made ESP8266 firmwares
5. New OpenHAB version
6. Acting as Amazon AWS consultant interested in IoT.
Hi, looks like it is going to be a very cool home automation project. I am also trying wireless devices with CC3200 wireless Cortex 4 MCU. Lets share ideas, though I have none at this point :).
Yea, cool.
Actually this project already advanced a bit further than the things you can see in the log. I made that mistake, not documenting everything right from the begining, so a few things will be recreated and documented, to be able to advance the logs until the current status.
Become a member to follow this project and never miss any updates
About Us Contact Hackaday.io Give Feedback Terms of Use Privacy Policy Hackaday API
This looks amazing buddy!
Did you get as far as fabricating the backplane at all? I really like that idea. In terms of what I'm thinking of a DIN rail enclosure could be made with a backplane installed and perhaps reqork my idea regarding using RJ45 connectors to link them all up... :-) | https://hackaday.io/project/10227-automatales | CC-MAIN-2020-50 | refinedweb | 4,157 | 65.01 |
Hi Hu,
To create simple 3D graphics in VB.NET, you need to imports System.Drawing and System.Drawing.Drawing2D namespaces, fill the background using Solid brush, Overrides OnPaint() method. If you want to create 3D animation, suggest using Microsoft DirectX.
Here are some samples.
1. GDI+ Samples - Rectangles, Ellipses, and 3D
The sample code in this article shows you how to use GDI+ and VB.NET to draw rectangles, ellipses, and 3D graphics objects.
2. Here is one 3D application, you can download its source project and view it.
3D tree rendering in C#/VB.NETThis program may help beginners learn more about graphics, GDI+, mathematical algorithms, fractals, recursive functions.
3. Using True Vision to Create 3D DirectX Animation in C# and .NET
This article will give you an easy way to create 3D animation using the True Vision Game Engine. True Vision wraps the DirectX framework for a more straightforward way of 3D game development in .NET.
I hope that can help you.
Regards,
Martin
Additional samples about drawing 3D graphics in .NET.
How to: Draw Points, Lines, and Other 3D Primitives
Microsoft is conducting an online survey to understand your opinion of the Msdn Web site. If you choose to participate, the online survey will be presented to you when you leave the Msdn Web site.
Would you like to participate? | https://social.msdn.microsoft.com/Forums/en-US/53632033-6bfe-433f-8ac6-090126327766/3d-graphics-in-vbnet?forum=Vsexpressvb | CC-MAIN-2016-07 | refinedweb | 224 | 59.9 |
I'm making a real simple RPG game at college and I would like to use pygame. However, my college has a group policy disabling any Windows executables from running so I can't install pygame. Is there any way I can just import pygame by just having it in the same folder?
One thing I do with external python libraries is to download the source, build it and copy the built library into ${PROJECT_HOME}/lib/${EXTERNAL_LIB_NAME}. This way, I can run my python scripts with just standard python installation on a machine. To be able to use the external lib, you'll have to include the lib directory to sys path like this:
import os from sys import path as syspath syspath.append(os.path.join(os.path.dirname(__file__), 'lib'))
P.S.: Since pygame has C/C++ files to be compiled, you'd have to use mingw instead. Refer to this answer for that: error: Unable to find vcvarsall.bat
EDIT: In case you're wondering how to build pygame from source, you'll need to run
setup.py build. This will build the python library into 'build' folder of the package directory and there you'd see how it should be placed in Python's directory. You will face the compilation problem in Windows as I've mentioned before, but you can easily fix that.
EDIT 2: Download link to contents of 'lib' for PyGame:
Python 2.7:
Python 3.2: | https://codedump.io/share/ecBYGOSs8LlK/1/using-pygame-without-installing | CC-MAIN-2017-04 | refinedweb | 244 | 72.56 |
NAME
Mail [v14.9.23] —
send and receive Internet mail.
Options
-
folder); if it is not accessible but contains a ‘
If only an input character set is specified, the input side is fixed, and no character set conversion will be applied; an empty or the special string hyphen-minus
-C "Blah: Neminem laede; imo omnes, quantum potes, juva"’. Standard header field names cannot be overwritten by custom headers. Runtime adjustable custom headers are available via the variable customhdr, and in (Compose mode)
-qfile,
--quote-file=..
- (Send mode) Initialize the message body with the contents of file, which may be standard input ‘
line options are used when contacting a file-based MTA, unless this automatic deduction is enforced by
settingvar[=value],
--set=...
If the setting of expandargv allows their
recognition all mta-option arguments given at the end
of the command line after a ‘
shquote’ constraint of
expandaddr, for more please see
On sending
mail, and non-interactive mode.
--’ separator will be passed through to a file-based mta (Mail-Transfer-Agent) and persist for the entire session. expandargv constraints do not apply to the content of mta-arguments. Command line receiver address handling supports the ‘
$ mail -#:/ -X 'addrcodec enc Hey, ho <silver@go>' -Xx
A starter
Mail mode
To
Fcc:’ header,
see Compose mode.
|’
-S to specify variables:
-:to disable configuration files in conjunction with repetitions of
$ env LC_ALL=C mail -:/ \ -Sv15-compat \ -Sttycharset=utf-8 -Smime-force-sendout \
As shown, scripts producing messages
If standard input is a terminal rather than the message to be
sent, the user is expected to type in the message contents. In compose mode
lines beginning with the character ‘
~v or
~e will start the
VISUAL
text
EDITOR, respectively, to revise the message in
its current state,
~h allows editing of the most
important message headers, with the potent
-C and customhdr).
[Option]ally
~’ (in fact the value of escape) are special – these are so-called COMMAND ESCAPES which can be used to read in files, process shell commands, add and edit attachments and more. For example
~^custom headers can be created, for example (more specifically than with
~?gives an overview of most other available command escapes.
To create file-carbon-copies the special recipient header
‘
Fcc:’ may be used as often as
desired, for example via.
~^. Its entire value (or body in standard terms) is interpreted as a
Once finished with editing the command escape
~x or
~q, the latter of
which will save the message in the file denoted by
DEAD unless nosave is set. And
unless ignoreeof is set the effect of
control-D’
(‘
^D’) at the beginning of an empty
line, and
~q is always reachable by typing
end-of-text (ETX) twice via
‘
control-C’
(‘
^C’).
~.(see there) will call hooks, insert automatic injections and receivers, leave compose mode and send the message once it is completed. Aborting letter composition is possible with either of
~.can also be achieved by typing end-of-transmission (EOT) via ‘ mode
When
PAGER for; the following will search for subjects:
? from '@Some subject to search for'
In the default setup all header fields of a message will be
typed, but fields can be white- or blacklisted for a
variety of applications by using the command
headerpick, e.g., to restrict their display to a
very restricted set for
type:
‘
’.
MBOX file, when the mailbox
is left, either by changing the active mailbox or by quitting Mail –.
%:’ modifier (to propagate it to a primary system mailbox), then messages which have been read (see Message states) will be automatically moved to a secondary mailbox, the user's
After examining a message the user can
reply ‘
HTML-only messages become more and more common, and many messages
come bundled with a bouquet of MIME (Multipurpose Internet Mail Extensions)
parts and attachments. To get a notion of MIME types there is a built-in
default set, onto which the content of
The mime.types files will be
added (as configured and allowed by
mimetypes-load-control). Types can also become
registered and listed with the command
mimetype. To
improve interaction with the faulty MIME part declarations of real life
mime-counter-evidence will allow verification of the
given assertion, and the possible provision of an alternative, better MIME
type. Note plain text parts will always be preferred in
‘
multipart/alternative’ MIME messages
unless mime-alternative-favour-rich is set.
Whereas a simple HTML-to-text filter for displaying HTML messages
is [Option]ally supported (indicated by
‘
,+filter-html-tagsoup,’ in
features), MIME types other than plain text cannot be
handled directly. To deal with specific non-text MIME types or file
extensions programs need to be registered which either prepare
(re-)integrable plain text versions of their input (a mode which is called
copiousoutput),=?t ? endif ? mimetype ?t application/mathml+xml mathml ? wysh set pipe-application/pdf='?&=? \ trap "rm -f \"${MAILX_FILENAME_TEMPORARY}\"" EXIT;\ trap "trap \"\" INT QUIT TERM; exit 1" INT QUIT TERM;\ mupdf "${MAILX_FILENAME_TEMPORARY}"' ? define showhtml { ? \localopts yes ? \set mime-alternative-favour-rich pipe-text/html=?h? ? \type "$@" ? } ? \commandalias html \\call showhtml
Mailing lists
Known or
Lfollowup, when
followup or
reply
search
@subject@'[[]open bracket'
^[*+?|$’; see re_format(7) or regex(7), dependent on the host system) will be compiled and used as one, possibly matching many addresses. It is not possible to escape the “magic”: in order to match special characters as-is, bracket expressions must be used, for example ‘
’.
?.
A central concept to S/MIME is that of the certification authority (CA). A CA is a trusted institution that issues certificates. For each of these certificates it can be verified that it really originates from the CA, provided that the CA's own certificate is previously known. A set of CA certificates is usually delivered
To sign outgoing messages, in order to allow receivers to verify the origin of these messages, a personal S/MIME certificate is required.
Variables of interest for S/MIME in general are
smime-ca-dir, smime-ca-file,
smime-ca-flags,
smime-ca-no-defaults,
smime-crl-dir, smime-crl-file.
For S/MIME signing of interest are smime-sign,
smime-sign-cert,
smime-sign-include-certs and
smime-sign.
[v15 behaviour may differ].
On URL syntax and credential lookup
/path’ for example
is used by the [Option]al Maildir
folder type and
the IMAP protocol, but not by POP3. If
‘
USER’ and
‘
PASSWORD’ are included in an URL
server specification, URL percent encoded (RFC 3986) forms are needed,
generable with
urlcodec.
[]’, optional either because there also exist other ways to define the information, or because the part is protocol specific. ‘
Normally that is all there is to do, given that] The user's locale environment is detected by looking at
the
LC_ALL environment variable. The internal
variable ttycharset will be set to the detected
terminal character set accordingly, and will thus show up in the output of
commands like
set and
varshow. This character set will be targeted when
trying to display data, and user input data is expected to be in this
character set, too.
When creating messages their character input data is classified.
7-bit clean text data and attachments will be classified as
charset-7bit. 8-bit data will [Option]ally be
converted into members of sendcharsets until a
character set conversion succeeds. charset-8bit is the
implied default last member of this list. If no 8-bit character set is
capable to represent input data, no message will be sent, and its text will
optionally be saved in
DEAD.
If that is not acceptable, for example in script environments,
mime-force-sendout can be set to force sending of
non-convertible data as
‘
application/octet-stream’ classified
binary content instead: like this receivers still have the option to inspect
message content (for example via
mime-counter-evidence). If the [Option]al character
set conversion is not available (features misses
‘
,+iconv,’),
ttycharset is the only supported character set for non
7-bit clean data, and it is simply assumed it can be used to exchange 8-bit
messages.
ttycharset may also be given an explicit
value to send mail in a completely “faked” locale environment,
which can be used to generate and send for example 8-bit UTF-8 input data in
a pure 7-bit US-ASCII ‘
LC_ALL=C’
environment (an example of this can be found in the section
On sending
mail, and non-interactive mode). Due to lack of programming interfaces
reading mail will not really work as expected in a faked environment:
whereas ttycharset might be addressable, any output
will be made safely printable, as via
vexpr
makeprint, according to the actual locale
environment, which is not affected by ttycharset.
Classifying 7-bit clean data as charset-7bit is a problem if the input character set (ttycharset) is a multibyte character set that is itself 7-bit clean. For example, the Japanese character set ISO-2022-JP is, but, otherwise an invalid email message would result! To achieve this, the variable charset-7bit can be set to ISO-2022-JP. (Today a better approach regarding email is the usage of UTF-8, which uses 8-bit bytes for non-US-ASCII data.)
When replying to a message and the variable
reply-in-same-charset is set, the character set of the
message being replied to is tried first as a target character set (still
being a subject of
charsetalias filtering, however).
Another opportunity is sendcharsets-else-ttycharset to
reflect the user's locale environment automatically, it will treat
ttycharset as an implied member of (an unset)
sendcharsets.
[Option] When reading messages, their text data is converted into
ttycharset as necessary in order to display them on
the user's terminal. Unprintable characters and invalid byte sequences are
detected and replaced by substitution characters. Character set mappings for
source character sets can be established with
charsetalias, which may be handy to work around
faulty or incomplete character set catalogues (one could for example add a
missing LATIN1 to ISO-8859-1 mapping), or to enforce treatment of one
character set as another one (“interpret LATIN1 as CP1252”).
Also see charset-unknown-8bit to deal with another
hairy aspect of message interpretation.TYPE locale and/or the variable
ttycharset. The best results are usually achieved when
running
_’ and hyphen-minus
‘
.’, underscore ‘
-’.
Message states
Mail differentiates in between several message states; the current
state will be reflected in the summary of
headers if was used) – however, because this may be
irritating to users which are used to “more modern”
mail-user-agents, the provided global mail.rc
template sets the internal hold and
keepsave variables in order to suppress this
behaviour.
new’
- Message has neither been viewed nor moved to any other state. Such messages are retained even in the primary system mailbox.
unread’
- Message has neither been viewed nor moved to any other state, but the message was present already when the mailbox has been opened last: Such messages are retained even in the primary system mailbox..
deleted’
- The message has been processed by one of the following commands:
delete,
dp,
dt. Only
undeletecan be used to access such messages.
preserved’
- The message has been processed by a
preservecommand and it will be retained in its current location. the internal variable keepsave is set.
In addition to these message states, flags which otherwise have no technical meaning in the mail system except allowing special ways of addressing them when Specifying messages can be set on messages. These flags are saved with messages and are thus persistent, and are portable between a set of widely used MUAs.
Specifying messages
[Only new quoting rules]
COMMANDS which, arguments of the previous command; needs to be quoted. (A convenient way to read all new messages is to select them via ‘
from :n’, as below, and then to read them in order with the default command —
next— simply by successively typing ‘
- magic regular expression characters is seen. If the optional @name-list part is missing the search is restricted to the subject field body, but otherwise name-list specifies a comma-separated list of header fields to search, for example
'@to,from,cc@Someone i ought to know'
In order to search for a string that includes a
body’ or ‘
text’ or ‘,c@@a\.safe\.domain\.match$'
- :c
- All messages of state or with matching condition ‘
c’, where ‘
c’ is one or multiple of the following colon modifiers:
- a
answeredmessages (cf. the variable markanswered).
- d
deleted’ messages (for the
undeleteand
fromcommands only).
- f
flagged messages.
- L
- Messages with receivers that match
mlsubscribed addresses.
- l
- Messages with receivers that match
mlisted addresses.
- n
new’ messages.
- o
- Old messages (any not in state ‘
read’ or ‘
new’).
- r
read’ messages.
- S
- [Option] Messages with unsure spam classification (see Handling spam).
- s
- [Option] Messages classified as spam.
- t
- Messages marked as
draft.
- u-Dec-2012’.
- specification cannot be used as part of another criterion. If the previous command line contained more than one independent criterion then the last of those criteria is used.
On terminal control and line editor
[Option] Terminal control:
\cA’
- Go to the start of the line (
mle-go-home).
\cB’
- Move the cursor backward one character (
mle-go-bwd).
\cC’
- raise(3) ‘
SIGINT’ (
mle-raise-int).
\cD’
- Forward delete the character under the cursor; quits Mail if used on the empty line unless the internal variable ignoreeof is set (
mle-del-fwd).
\cE’
- Go to the end of the line (
mle-go-end).
\cF’
- Move the cursor forward one character (
mle-go-fwd).
\cG’
- Cancel current operation, full reset. If there is an active history search or tabulator expansion then this command will first reset that, reverting to the former line content; thus a second reset is needed for a full reset in this case (
mle-reset).
\cH’
- Backspace: backward delete one character (
mle-del-bwd).
\cI’
- [Only new quoting rules] Horizontal tabulator: try to expand the word before the cursor, supporting the usual Filename transformations (
mle-complete; this is affected by
mle-quote-rndtripand line-editor-cpl-word-breaks).
\cJ’
- Newline: commit the current line (
mle-commit).
\cK’
- Cut all characters from the cursor to the end of the line (
mle-snarf-end).
\cL’
- Repaint the line (
mle-repaint).
\cN’
- [Option] Go to the next history entry (
mle-hist-fwd).
\cO’
- ([Option]ally context-dependent) Invokes the command
dt.
\cP’
- [Option] Go to the previous history entry (
mle-hist-bwd).
\cQ’
- Toggle roundtrip mode shell quotes, where produced, on and off (
mle-quote-rndtrip). This setting is temporary, and will be forgotten once the command line is committed; also see
shcodec.
\cR’
- [Option] Complete the current line from (the remaining) older history entries (
mle-hist-srch-bwd).
\cS’
- [Option] Complete the current line from (the remaining) newer history entries (
mle-hist-srch-fwd).
\cT’
- Paste the snarf buffer (
mle-paste).
\cU’
- The same as ‘
\cA’ followed by ‘
\cK’ (
mle-snarf-line).
then special-treated and thus cannot be part of any other sequence (because it will trigger the
mle-prompt-charfunction immediately).
\cW’
- Cut the characters from the one preceding the cursor to the preceding word boundary (
mle-snarf-word-bwd).
\cX’
- Move the cursor forward one word boundary (
mle-go-word-fwd).
\cY’
- Move the cursor backward one word boundary (
mle-go-word-bwd).
\cZ’
- raise(3) ‘
SIGTSTP’ (
mle-raise-tst the active sequence takes precedence and will consume the control code.
\c\’
- ([Option]ally context-dependent) Invokes the command ‘
z+
\c]’
- ([Option]ally context-dependent) Invokes the command ‘
z$
\c^’
- ([Option]ally context-dependent) Invokes the command ‘
z0
\c_’
- Cut the characters from the one after the cursor to the succeeding word boundary (
mle-snarf-word-fwd).
removes). Here is an
example, requiring it to be accessible. Spam can be checked automatically when opening specific folders by setting a specialized form of the internal variable folder-hook.
See also the documentation for the variables spam-interface, spam-maxsize, spamc-command, spamc-arguments, spamc-user, spamfilter-ham, spamfilter-noham, spamfilter-nospam, spamfilter-rate and spamfilter-rate-scanscore.
COMMANDS
Mail reads input in lines. An unquoted reverse solidus
‘
space,
tabulator,
newline as well as those defined by the variable
ifs are removed from the beginning and end. Placing
any whitespace characters at the beginning of a line will prevent a possible
addition of the command line to the [Option]al
history.
\’ at the end of a command line “escapes” the newline character: it is discarded and the next line of input is used as a follow-up line, with all leading whitespace removed; once an entire line is completed, the whitespace characters.
‘
? set one=value two=$one’ for example
will never possibly assign value to one, because the variable assignment is
performed no sooner but by the command (
set), long
after the expansion happened.
A list of all commands in lookup order is dumped by the command
list. [Option]ally the command
help (or
?t’, which
should be a shorthand of ‘
?type’; with
these documentation strings both commands support a more
verbose listing mode which includes the argument type
of the command and other information which applies; a handy suggestion might
thus be:
?), when given an argument, will show a documentation string for the command matching the expanded argument, as in ‘
? define __xv { # Before v15: need to enable sh(1)ell-style on _entire_ line! localopts yes;wysh set verbose;ignerr eval "${@}";return ${?} } ? commandalias xv '\call __xv' ? xv help set
Command modifiers
Commands may be prefixed by none to multiple command modifiers.
Some command modifiers can be used with a restricted set of commands only,
the verbose version of
list
will ([Option]ally) show which modifiers apply.
- The modifier reverse solidus, for example, for example
sh(1)ell-style, and
therefore POSIX standardized, argument parsing and quoting rules are used by
most commands.
space,
tabulator,
newline. The additional metacharacters left and
right parenthesis
|, ampersand
&, semicolon
;, as well as all characters from the variable ifs, and / or
(,
)and less-than and greater-than signs
<,
>that the sh(1) supports are not used, and are treated as ordinary characters: for one these characters are a vivid part of email addresses, and it seems highly unlikely that their function will become meaningful to Mail.
vput):
INTERNAL VARIABLES as well as
ENVIRONMENT (shell) variables can be
accessed through this mechanism, brace enclosing the name is supported
(i.e., to subdivide a token).
#’.
? echo one; wysh set verbose; echo verbose=$verbose.
Quoting is a mechanism that will remove the special meaning of metacharacters and reserved words, and will prevent expansion. There are four quoting mechanisms: the escape character, single-quotes, double-quotes and dollar-single-quotes:
-
- Arguments enclosed in ‘
$'dollar-single-quotes'’ extend normal single quotes in that reverse solidus escape sequences are expanded as follows:
\a’
- bell control character (ASCII and ISO-10646 BEL).
\b’
- backspace control character (ASCII and ISO-10646 BS).
\E’
- escape control character (ASCII and ISO-10646 ESC).
\e’
- the same.
\f’
- form feed control character (ASCII and ISO-10646 FF).
\n’
- line feed control character (ASCII and ISO-10646 LF).
\r’
- carriage return control character (ASCII and ISO-10646 CR).
\t’
- horizontal tabulator control character (ASCII and ISO-10646 HT).
\v’
- vertical tabulator control character (ASCII and ISO-10646 VT).
- emits a reverse solidus character.
- single quote.
- double quote (escaping is optional).
\NNN’
- eight-bit byte with the octal value ‘
NNN’ (one to three octal digits), optionally prefixed by an additional ‘
0’. A 0 byte will suppress further output for the quoted argument.
\xHH’
- eight-bit byte with the hexadecimal value ‘
HH’ (one or two hexadecimal characters, no prefix, see
vexpr). A 0 byte will suppress further output for the quoted argument.
.
\uHHHH’
- Identical to ‘
\UHHHHHHHH’ except it takes only one to four hexadecimal characters.
\cX’
- Emits the non-printable (ASCII and compatible) C0 control codes 0 (NUL) to 31 (US), and 127 (DEL). Printable representations of ASCII control codes can be created by mapping them to a different, visible part of the ASCII character set. Adding the number 64 achieves this for the codes 0 to 31, here 7 (BEL):, as in ‘
^G’, the reverse solidus notation has been standardized: ‘
\cG’. Some control codes also have standardized (ISO-10646, ISO C) aliases, as shown above (‘
\a’, ‘
\n’, ‘
\t’ etc) :.
\$NAME’
- Non-standard extension: expand the given variable name, as above. Brace enclosing the name is supported.
\`{command}’
- Not yet supported, just to raise awareness: Non-standard extension.
Caveats:
? echo 'Quotes '${HOME}' and 'tokens" differ!"# no comment ? echo Quotes ${HOME} and tokens differ! # comment ? echo Don"'"t you worry$'\x21' The sun shines on us. $'\u263A'
Message list arguments
Many will indicate whether a command searches for a
default message, or not.
Raw data arguments for codec commands
A special set of commands, which all have the string
“codec” in their name, like, for example
?, so a file ‘
diet\ is \curd.txt’ may be displayed as ‘
'diet\ is \curd.txt'’.
Commands
list.
- A synonym for the
pipecommand.is changed again. The special account ‘
null’ (case-insensitive) always exists, and all but it can be deleted by the latter command, and in one operation with the special name ‘, then parsed and expanded for real with comma as the field separator, therefore whitespace needs to be properly quoted, see Shell-style argument quoting. Using Unicode reverse solidus escape sequences renders a binding defunctional if the locale does not support Unicode (see Character sets), and using terminal capabilities does so if no (corresponding) terminal control support is (currently) available. Adding, deleting or modifying a key binding invalidates the internal prebuilt lookup tree, it will be recreated as necessary: this process will be visualized in most verbose as well as in debug mode.
The following terminal capability names are built-in and can be used in terminfo(5) or (if available) the two-letter termcap(5) notation. See the respective manual for a list of capabilities. The program infocmp(1) can be used to show all the capabilities of
TERM.
call_if
- Identical to
callif the given macro has been created via
define, but does not fail nor warn if the macro does not exist.
cd
- Synonym for
chdir.
certsave
- [Option] Only applicable to S/MIME signed messages. Takes an optional
Otherwise the second argument defines the mappable slot, the third argument a (comma-separated list of) colour and font attribute specification(s), and the optionally supported available preconditions depend on the mappable slot, regular expression characters is seen the precondition will be evaluated as (an extended) one.
- view-msginfo
- For the introductional message info line.
- view-partinfo
- For MIME part info lines.
The following (case-insensitive) colour definitions and font attributes are understood, multiple of which can be specified in a comma-separated list:
- ft=
- a font attribute: ‘ colours).
The command
uncolourwill remove for the given colour type (the special type asterisk ‘
? commandalias xx
find
- Search for the second in the first argument. Shows the resulting 0-based offset shall it have been found. ‘
substring
- Creates a substring of its first argument. The optional
readall(or
readand
readsh) command(s). Note: output must be consumed before normal processing can continue; for
digmsgobjects this means each command output has to be read until the end of file (EOF) state occurs.] As console user interfaces at times scroll error messages by too fast and/or out of scope, data can additionally be sent to an error queue manageable by this command: show or no argument will display and clear the queue, clear will only clear it. As the queue becomes filled with errors-limit entries the eldest entries are being dropped. There are also the variables ^ERRQUEUE-COUNT and ^ERRQUEUE-EXISTS.. A possibly set on-account-cleanup will be invoked, however. ‘ to deal with the file type, respectively,
?' ? set record=+sent.zst.pgp
flag,
unflag
- Take message lists and mark the messages as being flagged, or not being flagged, respectively, for urgent/special attention. See the section Message states.
Folder
- (Fold). Changing hooks will not affect already opened mailboxes. For example, the following creates hooks for the gzip(1) compression tool and a combined compressed and encrypted format:
? filetype \ gzip 'gzip -dc' 'gzip -c' \ zst.pgp 'gpg -d | zstd -dc' 'zstd -19 -zc | gpg -e'
The latter command always takes three or more arguments and can be used to remove selections, i.e., from the given context, the given type of list, all the given headers will be removed, the special argument ‘
headers
- (h) Show the current group of headers, the size of which depends on the variable screen. Via the question mark ‘
saturated’ is optional,
case’ is optional, ‘
==?case’ are identical.
Available string operators are ‘
When the [Option]al regular expression support is available, the additional string operators ‘
Conditions can be joined via AND-OR lists (where the AND operator is ‘
The results of individual conditions and entire groups may be modified via unary operators: the unary operator ‘
wysh set v15-compat=yes # with value: automatic "wysh"! if -N debug;echo *debug* set;else;echo not;endif if "$ttycharset" == UTF-8 || "$ttycharset" ==?cas:
`local'’
- command supports the command modifier
local.
`vput'’
- command supports the command modifier
vput.
- the error number is tracked in !.
needs-box’
- whether the command needs an active mailbox, a
folder.
ok:’
- indicators whether command is ...
batch/interactive’
- usable in interactive or batch mode (
send-mode’
- usable in send mode.
subprocess’
- allowed to be used when running in a subprocess instance, for example from within a macro that is called via on-compose-splice.
not ok:’
- indicators whether command is not ...
compose mode’
- available in Compose mode.
startup’
- available during program startup, like in Resource files.follow. For more documentation please refer to On sending mail, and non-interactive mode.
This may generate the errors ^ERR-DESTADDRREQ if no receiver has been specified, ^ERR-PERM if some addressees where rejected by expandaddr, ^ERR)(m) Takes a (list of) recipient address(es) as (an) argument(s), or asks on standard input if none were given; then collects the remaining mail content and sends it out. Unless the internal variable fullnames is set recipient addresses will be stripped from comments, names etc. For more documentation please refer to On sending mail, and non-interactive mode.
This may generate the errors ^ERR-DESTADDRREQ if no receiver has been specified, ^ERR-PERM if some addressees where rejected by expandaddr, ^ERRor
preservesh
- [Only new quoting rules] Like
read, but splits on shell token boundaries (see Shell-style argument quoting) rather than at ifs. [v15 behaviour may differ] Could become a
commandalias, maybe ‘
read --tokenize --’.command
name’ if there is no value, i.e., a boolean variable. If a name begins with ‘
no’, as in ‘
set nosave’, the effect is the same as invoking the
unsetcommand
- [Only new quoting rules] Manage the file- or pathname shortcuts as documented for
folder. The latter command deletes all shortcuts given as arguments, or all at once when given the asterisk ‘ible.command
ignerrhad been used, encountering errors will stop sourcing of the given input. [v15 behaviour may differ] Note that
sourcecannot,
Another numeric operation is
pbase, which takes a number base in between 2 and 36, inclusive, and will act on the second number given just the same as what equals sign ‘
Numeric operations support a saturated mode via the question mark ‘
saturated’ is optional, ‘
+?satu’, and ‘
+?saturated’ are therefore identical. In saturated modecharacter
VISUALdisplay editor on each message. Modified contents are discarded unless the writebackedited variable is set, and are not used unless the mailbox can be written to and the editor returns a successful exit status.
editcan be used instead for a less display oriented editor. convenience saving of each part may be skipped by giving an empty value, the same result as writing it to /dev/null. Shell piping the part content by specifying a leading vertical bar ‘
xcall
- [Only new quoting rules] The sole difference to
call. Without arguments this command scrolls to the next window of messages, likewise if the argument is ‘
Z
- [Only new quoting rules] Similar to
z, but scrolls to the next or previous window that contains at least one ‘
new’ or
flagged message.
COMMAND ESCAPES
Command
- The modifier dollar ‘
evaluates the remains of the line; also see Shell-style argument quoting. [v15 behaviour may differ] For now the entire input line is evaluated as a whole; to avoid that control operators like semicolon
Addition of the command line to the [Option]al history can be
prevented by placing whitespace directly after escape.
The [Option]al key
bindings support a compose mode
specific context. The following command escapes are supported:
- Insert the string of text in the message prefaced by a single ‘
-
- Can be used to execute COMMANDS (which are allowed in compose mode).
- Identical to
~r.
- command is executed using the shell. Its standard output is inserted into the message.
- [Option] Write a summary of command escapes.
- Append or edit the list of attachments. Does not manage the error number ! and the exit status ? (please use
(
For all modes, if a given filename solely consists of the number sign ‘
message/rfc822’ MIME message part. The number sign must be quoted to avoid misinterpretation as a shell comment character.
-.
- Inspect and modify the message using the semantics of
digmsg, therefore arguments are evaluated according to Shell-style argument quoting. Error number ! and exit status ? are not managed::
210’
- Status ok; the remains of the line are the result.
aliasprocessing) instead, the actual value will be in the second field.
212’
- Status ok; the rest of the line is optionally used for more status. What follows are lines of furtherly unspecified (quoted) string content, terminated by an empty line. All the input, including the empty line, must be consumed before further commands can be issued.
500’
- Syntax error; invalid command.
501’
- Syntax error or otherwise invalid parameters or arguments.
505’
- Error: an argument fails verification. For example an invalid address has been specified (also see expandaddr), or an attempt was made to modify anything in Mail's own namespace, or a modifying subcommand has been used on a read-only message.):
filename’
- Sets the filename of the MIME part, i.e., the name that is used for display and when (suggesting a name for) saving (purposes).
content-description’
- Associate some descriptive information to the attachment's content, used in favour of the plain filename by some MUAs.
content-id’
- May be used for uniquely identifying MIME entities in several contexts; this expects a special reference address format as defined in RFC 2045 and generates a ‘
505’ upon address content verification failure.
content-type’
- Defines the media type/subtype of the part, which is managed automatically, but can be overwritten.:
Mailx-Command:’
- The name of the command that generates the message, one of ‘
forward’, ‘
Lreply’, ‘
Reply’, ‘
reply’, ‘
resend’. This pseudo header always exists (in compose-mode).
Mailx-Raw-To:’
-
Mailx-Raw-Cc:’
-
Mailx-Raw-Bcc:’
- Represent the frozen initial state of these headers before any transformation (
alias,
alternates, recipients-in-cc etc.) took place.
Mailx-Orig-Sender:’
-
Mailx-Orig-From:’
-
Mailx-Orig-To:’
-
Mailx-Orig-Cc:’
-
- Insert the value of the specified variable into the message. The message remains unaltered if the variable is unset or empty. Any embedded character sequences ‘
\t’ horizontal tabulator and ‘
\n’ line feed are expanded in posix mode; otherwise the expansion should occur at
settime (
~wfilename
- Write the message onto the named file, which is object to the usual Filename transformations. If the file exists, the message is appended to it.
~x
- Same as
~q, except that the message is not saved at all.
INTERNAL VARIABLES
Internal
Dependent upon the actual option string values may become
interpreted as colour names, command specifications, normal text, etc. They
may be treated as numbers, in which case decimal values are expected if so
documented, but otherwise any numeric format and base that is valid and
understood by the
vexpr command may be used,
too.
There also exists a special kind of string value, the
settings
The standard POSIX 2008/Cor 2-2016 mandates the following initial
variable settings: noallnet,
noappend, asksub,
noaskbcc, noautoprint,
nobang, nocmd,
nocrt, nodebug,
nodot, escape set to
‘
5’.
~’,. The documentation is an [Option], the name is used if not available.
- ^ERRQUEUE-COUNT, ^ERRQUEUE-EXISTS
- The number of messages in the [Option]al queue of
errors, and a string indicating queue state: empty or (translated) “ERROR”. Always 0 and the empty string, respectively, unless features includes ‘
,+errors,’.
- *
- this:
N’
- new.
U’
- unread but old.
R’
- new but read.
O’
- read and old.
S’
- saved.
P’
- preserved.
M’
- mboxed.
F’
- flagged.
A’
- answered.
T’
- draft.
- [v15 behaviour may differ] start of a (collapsed) thread in threaded mode (see autosort,
thread);
- [v15 behaviour may differ] an uncollapsed thread in threaded mode; only used in conjunction with
-L.
- classified as spam.
- autosort=thread’.
- bang
- (Boolean) Enables the substitution of all not (reverse-solidus) escaped exclamation mark ‘
- bind-timeout
-
Keywords:’. Different to the command line option
-Cthe variable value is interpreted as a comma-separated list of custom headers: to include commas in header bodies they need to become escaped with reverse solidus ‘
?.
- errors-limit
- [Option] Maximum number of entries in the
errorsqueue.
- escape
- The first character of this value defines the escape character for COMMAND ESCAPES in Compose mode. The default value is the character tilde ‘
- expandaddr
- If
restrict’ really acts like ‘
restrict,-all,+name,+addr’, so care for ordering issues must be taken.
Recipient types can be added and removed with a plus sign
fail’. A lesser strict variant is the otherwise identical ‘
restrict’, which does accept such arguments in interactive mode, or if tilde commands were enabled explicitly by using one of the command line options
- features
- (Read-only) String giving a list of optional features. Features are preceded with a plus sign ‘
versionincludes.
- folder
- The default path under which mailboxes are to be saved: filenames that begin with the plus sign ‘
folderfor more on this topic, and know about standard imposed implications of outfolder. The value supports a subset of transformations itself, and if the non-empty value does not start with a solidus ‘can be used to set noheader.
- headline
- A format string to use for the summary of
headers. Format specifiers in the given string start with a percent sign ‘
- A plain percent sign.
- “Dotmark”: a space character but for the current message (“dot”), for which it expands to ‘
- “Dotmark”: a space character but for the current message (“dot”), for which it expands to ‘
- [Option] The spam score of the message, as has been classified via the command
spamrate. Shows only a replacement character if there is no spam support.
%a’
- Message attribute character (status flag); the actual content can be adjusted by setting attrlist.
%d’
- The date found in the ‘
Date:’ header of the message when datefield is set (the default), otherwise the date when the message was received. Formatting can be controlled by assigning a strftime(3) format string to datefield (and datefield-markout-older).
%e’
- The indenting level in ‘
thread’ed
sortmode.
%f’
- The address of the message sender.
%i’
- The message thread tree structure. (Note that this format does not support a field width, and honours headline-plain.)
.
%l’
- The number of lines of the message, if available.
%m’
- Message number.
%o’
- The number of octets (bytes) in the message, if available.
%S’
- Message subject (if any) in double quotes.
%s’
- Message subject (if any).
%t’
- The position in threaded/sorted order.
- ignoreeof
- (Boolean) Ignore end-of-file conditions (‘
control-D’) in Compose mode on message input and in interactive command input. If set an interactive command input session can only be left by explicitly using one of the commands
exitand
quit, and message input in compose mode can only be terminated by entering a period ‘
- inbox
- If this is set to a non-empty string it will specify the user's primary system mailbox, overriding
folderfor more on this topic. The value supports a subset of transformations itself.
- indentprefix
- String used by the
~m,
~Mand
~RCOMMAND without an encoding if possible):. By established rules and popular demand occurrences of ‘
^From_’ (see mbox-rfc4155) will be MBOXO quoted (prefixed with greater-than sign ‘
quoted-printable’ to be chosen, unless context (like message signing) requires otherwise., for example ISO-8859-1. The encoding will cause a large overhead for messages in other character sets: for example it will require up to twelve (12) bytes to encode a single UTF-8 character of four (4) bytes. It is the default encoding., and HTML mail and MIME attachments for how to internally or externally handle part content.
-. Also see mta-bcc-ok. [Option]ally expansion of aliases(5) can be performed by setting mta-aliases.
The otherwise occurring implicit usage of the following MTA command line arguments can be disabled by setting the boolean variable mta-no-default-arguments (which will also disable passing
-i(for not treating a line with only a dot ‘
-m(shall the variable metoo be set) and
-v(if the verbose variable is set); in conjunction with the
-rcommand line option or r-option-implicit
-fas well as possibly
-Fwill (not) be passed.
[Option]ally Mail can send mail over SMTP aka.
- mta-arguments
- Arguments to pass through to a file-based mta (Mail-Transfer-Agent), parsed according to Shell-style argument quoting into an array of arguments which will be joined onto MTA options from other sources, for example ‘
? wysh set mta-arguments='-t -X "/tmp/my log"'’.
- mta-no-default-arguments
- (Boolean) Avoids passing standard command line options to a file-based mta (please see there).
- mta-no-receiver-arguments
- (Boolean) By default all receiver addresses will be passed as command line options to a file-based mta. Setting this variable disables this behaviour to aid those MTAs which employ special treatment of such arguments. Doing so can make it necessary to pass a
-tvia mta-arguments, to testify the MTA that it should use the passed message as a template.
-.
- mta-bcc-ok
- (Boolean) In violation of RFC 5322 some MTAs do not remove ‘
Bcc:’ header lines from transported messages after having noted the respective receivers for addressing purposes. (The MTAs Exim and Courier for example require the command line option
-tto enforce removal.) Unless this is set corresponding receivers are addressed by protocol-specific means or MTA command line options only, the header itself is stripped before being sent over the wire.
- netrc-lookup-USER@HOST, netrc-lookup-HOST, netrc-lookup
- (Boolean)[v15-compat][Option] Used to control usage of the user's ~/
define t_ocl { vput ! i cat ~/.mysig if $? -eq 0 vput csop message-inject-tail trim-end $i end # Alternatively readctl create ~/.mysig if $? -eq 0 readall i if $? -eq 0 vput csop. Note: this runs late and so terminal settings etc. are already teared down.
-command without performing MIME and character set conversions.
- pipe-EXTENSION
- Identical to pipe-TYPE/SUBTYPE except that ‘
EXTENSION’ (normalized to lowercase using character mappings of the ASCII charset) denotes a file extension, for example ‘
xhtml’. Handlers registered using this method take precedence.
- pipe-TYPE/SUBTYPE
- A MIME message part identified as ‘
TYPE/SUBTYPE’ (case-insensitive, normalized to lowercase using character mappings of the ASCII charset) is displayed or quoted, its text is filtered through the value of this variable interpreted as a shell command. Unless noted only parts displayable as inline plain text (see
copiousoutput) are covered, other MIME parts will only be considered by and for
mimeview.
The special value question mark ‘
set pipe-application/xml=?’. (This can also be achieved by adding a MIME type-marker via
mimetype.) [Option]ally MIME type handlers may be defined via The Mailcap files to which should be referred to for documentation of flags like
copiousoutput. Question mark is indeed a trigger character to indicate flags that adjust behaviour and usage of the rest of the value, the shell command, for example:
? set pipe-X/Y='?!++=? vim ${MAILX_FILENAME_TEMPORARY}'
- The command output can be reintegrated into this MUA's normal processing:
copiousoutput. Implied when using a plain ‘’.
- Only use this handler for display, not for quoting a message:
x-mailx-noquote.
- Run the command asynchronously, do not wait for the handler to exit:
x-mailx-async. The standard output of the command will go to /dev/null.
- The command must be run on an interactive terminal, the terminal will temporarily be released for it to run:
needsterminal.
-; it is an error to use automatic deletion in conjunction with
x-mailx-async.
- Normally the MIME part content is passed to the handler via standard input; with this the data will instead be written into
MAILX_FILENAME_TEMPORARY(
x-mailx-tmpfile-fill), the creation of which is implied; in order to cause automatic deletion of the temporary file two plus signs ‘
t’
- Text type-marker: display this as normal plain text (for type-markers: The mime.types files). Identical to only giving plain ‘
copiousoutput.
h’
- [Option] HTML type-marker: display via built-in HTML-to-text filter. Implies
copiousoutput.
- To avoid ambiguities with normal shell command content another question mark can be used to forcefully terminate interpretation of remaining characters. (Any character not in this list will have the same effect.)
Some information about the MIME part to be displayed is embedded into the environment of the shell command:. URL targets should not be activated automatically, without supervision. is automatically squared with the environment variable
POSIXLY_CORRECT, changing the one will adjust the other..
- Each command has an exit ? and error ! status that overwrites that of the last command. In POSIX mode the program exit status will signal failure regardless unless all messages were successfully sent out to the mta; also see sendwait.
-’ also
- A plain percent sign.
%a’
- The address(es) of the sender(s).
%d’
- The date found in the ‘
Date:’ header of the message when datefield is set (the default), otherwise the date when the message was received. Formatting can be controlled by assigning a strftime(3) format string to datefield (and datefield-markout-older).
%f’
- The full name(s) (name and address, as given) of the sender(s).
%i’
- The ‘
Message-ID:’.
%n’
- The real name(s) of the sender(s) if there is one and showname allows usage, the address(es) otherwise.
and
Resendcommands.
- (Boolean) If this variable is set Mail first tries to use the same character set of the original message for replies. If this fails, the mechanism described in Character sets is evaluated as usual.
-.
- A list of addresses to put into the ‘
Reply-To:’ field of the message header. Members of this list are handled as if they were in the
alternateslist.
- replyto
- [Obsolete] Variant of reply-to.
- Controls whether a ‘
Reply-To:’ header is honoured when replying to a message via
replyor
Lreply. This is a quadoption; if set without a value it defaults to “yes”.
-. content-description-smime-message will be inspected for messages which become encrypted.
- smime-force-encryption
- (Boolean)[Option] Causes Mail to refuse sending unencrypted messages.
- smime-sign
- (Boolean)[Option] S/MIME sign outgoing messages with the user's (from) private key and include the users will
- [Obsolete][Option] Predecessor(s) of smime-sign-digest.
-
- ssl-ca-dir-USER@HOST, ssl-ca-dir-HOST, ssl-ca-dir, ssl-ca-file-USER@HOST, ssl-ca-file-HOST, ssl-ca-file
- termcap) capabilities (see On terminal control and line editor, escape commas with reverse solidus ‘
,+termcap,’.
\061’, are supported. To specify that a terminal supports 256-colours, and to define sequences that home the cursor and produce an audible bell, one might write:
? set termcap='Co#256,home=\E[H,bel=^G'
The following terminal capabilities are or may be meaningful for the operation of the built-in line editor or Mail in general:
am
auto_right_margin: boolean which indicates if the right margin needs special treatment; the
xenlcapability is related, for more see
COLUMNS. This capability is only used when backed by library support..
xenlor
xn
eat_newline_glitch: boolean which indicates whether a newline written in the last column of an
auto_right_marginindicating terminal is ignored. With it the full terminal width is available even on autowrap terminals. This will be inspected even without ‘
,+termcap,’ features.
- tls-config-pairs-USER@HOST, tls-config-pairs-HOST, tls-config-pairs
- [Option] The value of this variable chain will be interpreted as a comma-separated list of directive/value pairs. Directives and values need to be separated by equals signs
ALL’. Multiple protocols may be given as a comma-separated list, any whitespace is ignored, an optional plus sign ‘
-ALL, TLSv1.2’ enables only the TLSv1.2 protocol.
-commandwill include this information.
- writebackedited
- If this variable is set messages modified using the
editor
visualcommands mbox-rfc4155 ‘
From_’ quoting of newly added or edited content is also left as an exercise to the user.
ENVIRONMENT
The and
unset, causing
automatic program environment updates (to be inherited by newly created
child processes).
In order to integrate other environment variables equally they
need to be imported (linked) with the command
environ. This command can also be used to set and
unset non-integrated environment variables from scratch, sufficient system
support provided. The following example, applicable to a POSIX shell, sets
the
COLUMNS environment variable for settings this directory is a default write target for, for example,
MAILX_NO_SYSTEM_RC
- If this variable is set then reading of mail.rc (aka system-mailrc) at startup is inhibited, i.e., the same effect is achieved as if Mail had been started up with the option
-n. This variable is only used when it resides in the process environment.
MBOX
- The name of the user's secondary mailbox file. A logical subset of the special Filename transformations (also see
folder) (the portable)
Upon startup Mail reads in several resource files, in order:
- mail.rc
- System wide initialization file (system-mailrc). Reading of this file can be suppressed, either by using the
-ncommand line options, or by setting the ENVIRONMENT variable
MAILX_NO_SYSTEM_RC.
- ~/.mailrc
- File giving initial commands. A different file can be chosen by setting the ENVIRONMENT variable
MAILRC. Reading of this file can be suppressed with the
-
- If the line (content) starts with the number sign ‘
Errors while loading these files are subject to the settings of
errexit and posix. More files
with syntactically equal content can be
sourceed.
The following, saved in a file, would be an examplary content:
# This line is a comment command. And y\ es, it is really continued here. set debug \ verbose set editheaders
The mime.types files
As stated in
HTML mail and MIME
attachments Mail needs to learn about MIME (Multipurpose Internet Mail
Extensions) media types in order to classify message and attachment content.
One source for them are mime.types files, the
mimetypes-load-control. Another is the command
mimetype, which also offers access to
type-marker’:
#’, causing the remaining line to be discarded. Mail also supports an extended, non-portable syntax in especially crafted files, which can be loaded via the alternative value syntax of mimetypes-load-control, and prepends an optional ‘
.
Further reading: for sending messages:
mimetype,
mime-allow-text-controls,
mimetypes-load-control. For reading etc. messages:
HTML mail and MIME
attachments, The Mailcap
files,
mimetype,
mime-counter-evidence,
mimetypes-load-control,
pipe-TYPE/SUBTYPE,
pipe-EXTENSION.
The Mailcap files
[Option] RFC 1524 defines a “User Agent Configuration
Mechanism” resource files, and the
MAILCAPS
environment variable to overwrite that. Handlers found from doing the path
search will be cached, the command
mailcap operates
audio/*’ would match any audio type.
The second field is the
view shell command used to
display MIME parts of the given type.
*’ the entry is meant to match all subtypes of the named type, e.g., ‘ (because that is implemented by means of this temporary file).lin (see HTML mail and MIME attachments) does not support the additional
formats ‘
User credentials for machine accounts (see
On URL syntax and
credential lookup) can be placed in the .netrc
file, which will be loaded and cached when requested by
netrc-lookup. The default location
~/.netrc may be overridden by the
NETRC environment # Or, entirely IMAP based setup #set folder=imaps://imap.gmail.com record="+[Gmail]/Sent Mail" \ # imap-cache=~/spool/cache_COLOUR_FLAG} -aFlrS' commandalias llS '!ls ${LS_COLOUR
When storing passwords in ~/.mailrc
appropriate permissions should be set on this file with
‘
$ }
and, in the ~/.netrc file:
machine *.yXXXXx.ru login USER password PASS
This configuration should now work just fine:
$ echo text |
verify signed):
$ openssl req -nodes -newkey rsa:4096 -keyout key.pem -out creq.p
[Option].
In general it is a good idea to turn on
debug (
-d) and / or
verbose (
-v, twice) if
something does not work well. Very often a diagnostic message can be
produced that leads to the problems' solution.
Mail shortly hangs on startup
This (via OAuth)
Since
Two thinkable situations: the first is a shadowed sequence;
setting debug, or the most possible
verbose mode, causes a printout of the
bind tree
Newer git(1) versions
(v2.33.0) added the option
sendmailCmd. Patches can
also be send directly, for example:
$ git format-patch -M --stdout HEAD^ | mail -A the-account-you-need -t RECEIVER
Howto handle stale dotlock files
folder sometimes
imaps://mylogin@imap.myisp.example/INBOX.
should be used (the last character is the server's hierarchy delimiter). The following IMAP-specific commands exist:. Encoding will honour the (global) value of imap-delim.
The following IMAP-specific internal variables exist:
-
- imap-keepalive-USER@HOST, imap-keepalive-HOST, imap-keepalive
- IMAP servers may close the connection after a period of inactivity;
folders
M..
During the 1960s it was common to connect a large number of
terminals to a single, central computer. Connecting two computers together
was relatively unusual. This began to change with the development of the
ARPANET, the ancestor of today's Internet.. For
the first time it was necessary to specify the recipient's computer as well
as an account name. Tomlinson decided that the underused commercial at
‘
@’ would work to separate the two.,
K aka
When
eval
version may be helpful:
?’. Including the verbose output of the command
? wysh set escape=! verbose; vput version xy; unset verbose;\ eval mail $contact-mail Bug subject !I xy !.
Information on the web at ‘
$ mail -X 'echo
$contact-web; x'’. | https://man.archlinux.org/man/mail.1 | CC-MAIN-2022-21 | refinedweb | 8,361 | 54.83 |
Accessing Files in a OneDrive Account from Code
The last time, we saw how to authenticate to a Microsoft Live OneDrive account from within a standard Windows forms application. This time, we continue on from that previous post and create routines to make working with the file list much more friendly, followed by showing how to upload and download files from the account.
If you're jumping straight into this without reading the previous article, please be aware this post assumes that you already know how to authenticate to the connected OneDrive account and send rest requests to it. If you don't already know this, I recommend that you at least skim over it so you know how to get an accessToken. The previous article can be helpful.
Getting a File Listing
In the previous post, we added the following code:
private string GetOneDriveRootListing() { var accessToken = GetAccessToken(); string jsonData; string url = string.Format(@"; }
This code was used to return the JSON data that represents a complete file listing of the user's root folder.
This is all very useful, but parsing the folder and file list manually is a rather tricky task. File System objects in a OneDrive account can take many different forms; for example, a standard folder might look like this:
{ "id": "folder.4515677xxxxxxxxx.4515677xxxxxxxxx!223", "from": { "id": "4515677xxxxxxxxx" }, "name": "Blog images", "description": "", "parent_id": "folder.4515677xxxxxxxxx", "size": 66529, "upload_location": "", "comments_count": 0, "comments_enabled": true, "is_embeddable": true, "count": 3, "link": " xxxxxxxxx!161", "type": "album", "shared_with": { "access": "Shared" }, "created_time": "2009-05-23T10:55:58+0000", "updated_time": "2010-09-28T18:30:53+0000", "client_updated_time": "2010-09-28T18:30:53+0000" }
Whereas an image file might look like this:
{ " "", "source": "", "upload_location": "", , "shared_with": { "access": "Just me" }, "created_time": "2014-08-28T11:26:38+0000", "updated_time": "2014-08-28T11:46:03+0000", "client_updated_time": "2014-08-28T11:46:03+0000" }
The image file, as you can see straight away, has many extra fields that a folder does not have. It gets even more complicated with movies and Office documents, and also some folders such as "Pictures," which are (as MS refers to them) 'Albums'.
All of this means we can't just use a single simple class object created from the JSON like we did for the 'OneDriveInfo' object as in the code above.
Sidetip: Indecently, if you have Visual Studio 2012 or later, there's a great new feature built into it. Copy your JSON data from whatever source you have, and then in a new class file, rather than just performing a CTRL+V as normal, click Edit, Paste Special in your menus. You'll find 'Paste JSON as Classes', which will take your JSON code and turn in directly into a .NET class ready for you to use. Be careful, though; it can get very complicated, very fast.
If we look at just the basic properties that match on all the objects, we'll see that in actual fact, they are all the same as the whole set of properties in our 'OneDriveInfo' object. This makes sense, because when we get the info for the root of our OneDrive, what we're actually requesting is the file system object for our root folder, and this contains the most basic set of properties we should use.
At this point, we're going to repurpose this object as a base class, so use your Visual Studio refactoring tools (or find and replace) to rename your object class from 'OneDriveInfo' to something more suitable, such as 'FileSystemBase'. We now should be able to deserialize the 'data' property of our file list into a list of 'FileSystemBase' objects, by using newtonsoft.JSON.
Before we do that, however, we need a container to deserialize them into.
Why a Container?
Well, even though the Live API returns a JSON Array of file objects, it wraps that array in a single top-level object. Now, we could if we wanted, simply strip everything before the first '[' and after the last ']' in our data, but because we're using Newtonsoft.JSON, it much easier just to create a class called 'OneDriveFileList' and add the following code:
using System.Collections.Generic; namespace Onedrive { public class OneDriveFileList { public List<FileSystemBase> Data { get; set; } } }
This will allow us to take the JSON returned from the file list call:
{ "data": [ .... file objects here .... ] }
and save us a whole heap of work.
Our file listing method now simply becomes:
private OneDriveFileList GetOneDriveRootListing() { var accessToken = GetAccessToken(); string jsonData; string url = string.Format(@"{0}" using (var client = new WebClient()) { var result = client.OpenRead(new Uri(url)); var sr = new StreamReader(result); jsonData = sr.ReadToEnd(); } FileSystemBase driveInfo = JsonConvert.DeserializeObject<FileSystemBase>(jsonData); url = string.Format("{0}?access_token{1}", driveInfo.Upload_Location, accessToken); using (var client = new WebClient()) { var result = client.OpenRead(new Uri(url)); var sr = new StreamReader(result); jsonData = sr.ReadToEnd(); } OneDriveFileList rootList = JsonConvert.DeserializeObject<OneDriveFileList>(jsonData); return rootList; }
Which, when called within the app, should result in the following nicely typed structure:
Figure 1: Showing the nicely typed structure
Which, like the base object, has an upload location (allowing you to get the file list for that folder) along with the other items such as 'size', 'type', and share permissions.
Once we have the base object, we then can start to extend that class to take account of the other object types. In our demo root folder, for example, we have a small PNG image file. The extra data that this carries is as follows:
"picture": "", "source": "", ,
Note that I'm not showing the entire JSON structure here, just the items that are different.
If we turn just those properties into a class, and then inherit that class from file system base, we should end up with a class that looks similar to this:
namespace Onedrive { public class FileSystemImage : FileSystemBase { public int Tags_Count { get; set; } public bool Tags_Enabled { get; set; } public string Picture { get; set; } public string Source { get; set; } public ImageInfo[] Images { get; set; } public When_Taken { get; set; } public int Height { get; set; } public int Width { get; set; } public object Location { get; set; } public object Camera_Make { get; set; } public object Camera_Model { get; set; } public int Focal_Ratio { get; set; } public int Focal_Length { get; set; } public int Exposure_Numerator { get; set; } public int Exposure_Denominator { get; set; } } public class ImageInfo { public int Height { get; set; } public int Width { get; set; } public string Source { get; set; } public string Type { get; set; } } }
With the addition of these two new classes, and the following method in our Main Form class:
private List<FileSystemImage> GetIamgesFromFileList(OneDriveFileList inputList) { List<FileSystemImage> results = new List<FileSystemImage>(); using (var client = new WebClient()) { var images = inputList.Data.Where(x => x.Type.Equals("photo")); foreach (var fileSystemBase in images) { string url = string.Format("{0}?access_token={1}", fileSystemBase.Upload_Location.Replace("content/", string.Empty), GetAccessToken()); var webResult = client.OpenRead(new Uri(url)); var sr = new StreamReader(webResult); var jsonData = sr.ReadToEnd(); results.Add(JsonConvert.DeserializeObject<FileSystemImage>(jsonData)); } } return results; }
It's now trivial to get all the images, and, providing you create types for them, any other file types you wish to handle. All the JSON structures are documented in various pages scattered around the OneDrive developer portal. You may notice in the method for getting image lists that, when I grab the upload location to get the extra JSON information, I do a replace on the string to remove the 'content' leaf from the URL:
fileSystemBase.Upload_Location.Replace("content/", string.Empty)
The upload location for a file actually points to the download link. If you request that location directly, rather than getting a JSON payload as expected, you'll actually get a byte array of the file's contents.
Because all we want is the file information, we strip this off (rather than do things the other way and build the URL), and then just use that as the URL to make the request to. This will, in response, return to us a single JSON object of the same format of the single entry in the original file list, but with all of the properties intact.
There are many other ways we could achieve the same thing. We could, for instance, use a dynamic type and just simply populate the root file list with everything, and then build our lists on the fly from that. This would save the extra network accesses, and if you were using this on a mobile device, it's certainly something you might need to think about. For this post, though, it serves to show how to handle the different file types easily and simply.
All this talk of content leads into the next topic.
Downloading a File
As you've just seen, if you take the 'upload_location' property directly out of the file info object, and the object you're dealing with is a kind of file (rather than a folder or Album), the location URL will have "content/" on the end of it. Making a normal get request against this, as we have been doing with the other requests, will simply and efficiently return you a stream of bytes that consist entirely of the file being requested.
When making a content request, there is no JSON data or any other parts to the payload, just a pure data stream containing the file requested. How you get that file onto disk, or into memory, is entirely up to you. It is, however, sensible to use the StreamReader interface to handle it, especially if the file is something like a video file, which might end up being several hundred megabytes in size.
You can download a file from your OneDrive account by adding the following method to your application:
private void DownloadFile(OneDriveFileList inputList, string fileIdToDownload) { var fileToDownload = inputList.Data.FirstOrDefault(x => x.ID == fileIdToDownload); if (fileIdToDownload == null) return; using (var client = new WebClient()) { string url = string.Format("{0}?access_token={1}", fileToDownload.Upload_Location, GetAccessToken()); var webResult = client.OpenRead(new Uri(url)); if (webResult == null) return; using (var fileStream = File.Create(fileToDownload.Name)) { webResult.CopyTo(fileStream); } } }
Note that you first need to grab a copy of the file list (or at the very least the single object) of the file you want to download. This is because you need the 'upload_location' with its 'content' URL so that you have the location of the files binary data.
In the preceding method, all I've done is passed in the list of our files, and the ID I want to download. The method then uses this to get the file's original file name and upload location, makes the request to the upload location, and then finally streams the bytes into a local file with the same name as the original.
And, that's all there is to it. Downloading files is easy once you have the base file information.
Uploading Files
In the final part of the post, we'll now look at how to upload a file to your OneDrive account.
Like everything we've done so far, this is as simple as using a web client, making a request, processing the JSON that's returned, and acting on it. The major difference from everything we've done so far, however, is that files MUST be sent using either "POST" or "PUT" HTTP verbs.
Of the two, MS actually recommends that, if at all possible, you should try to use "PUT". You can use "POST", but you have to send this by using "multipart/form-data", and because you're not actually sending this from a web page, you'll need to build all the request headers and multipart boundaries, as well as the protocol information by hand.
The easiest way to upload files is with the following code:
private void UploadFileToSkyDrive(string localFile) { string url = string.Format( @" files/{0}?access_token={1}", Path.GetFileName(localFile), GetAccessToken()); using (var client = new WebClient()) { var result = client.UploadData(new Uri(url), "PUT" string strResult = Encoding.UTF8.GetString(result); } } private byte[] FileToByteArray(string fileName) { FileStream fileStream = File.OpenRead(fileName); byte[] fileData = new byte[fileStream.Length]; fileStream.Read(fileData, 0, fileData.Length); fileStream.Close(); return fileData; }
You'll notice in the URL that I've hard coded the root folder of my OneDrive instance. In reality, you'll most likely want to pass the folder name in as a parameter, allowing you to choose where on your OneDrive you want to upload the file to. Another improvement that you'll also likely want to make is to use a file stream, rather than converting the file to a byte array before sending it. Using a byte array is great for small files, but if you try to use that method for uploading very large files, you'll quickly consume all your available memory.
Once you upload the file, you'll get the obligatory JSON response, which for a successful file should look something like this:
{ "id": "file.4515677bdf99b35f.4515677BDF99B35F!1022", "name": "IMAG0067.jpg", "source": "" }
You'll see that this contains your file's ID, its original name, and a public link should you want to make it downloadable in a web page.
That's All, Folks?
Except that it really isn't.
The topics we've covered in the last two posts have only just begun to scratch the surface. With the correct scope requests, you can get contact lists from Outlook and profile information. You even can access Office documents and do specialist tasks with those that you can't do on normal files.
The LiveSDK has a massive amount of functionality in it, covering all of Microsoft's Live API. Unfortunately, its sheer size means that it's impractical for me to go any farther than I have done in these posts.
Remember, too, that even though we didn't make any direct use of it, we added the LiveSDK NuGet package to our project too, so it's worth exploring that. Look at the methods and routines in there and study the data structures available.
Finally, remember that this is just a rest-based interface. This means you can use it in PHP, Python, NodeJS, Perl, and just about ANY other language on the planet that's capable of making rest-based requests.
I'll make the sample code/project for these two articles available on my github account at:
For any one who wishes to clone them, happy OneDriving......
not understandPosted by al on 11/04/2015 05:51am
what the code languae and where i write itReply
Good Stuff!Posted by Gilberto Tezini on 10/17/2015 02:43am
Very usefull article! Good job!Reply
Upload gives a 403 Error?Posted by Brendon S on 05/28/2015 04:45am
Hi Peter, very helpful post thanks. In the published article, the FileUpload section of your code - line 12 is incomplete. I took it as: var result = client.UploadData(new Uri(url), "PUT", FileToByteArray(localFile)); I am calling it like: UploadFileToSkyDrive(@"C:\Script.txt"); But I am getting an error "The remote server returned an error: (403) Forbidden." Browsing files and downloading works fine. I looked on the OneDrive developer site but can see no settings that would stop a file upload. The file I am trying to upload is 1Kb of text. Any clues? RegardsReply
Useless without GetAccessTokenPosted by dick on 01/03/2015 12:41pm
Code is completely useless without knowing what GetAccessToken() is.
Read Previous PostPosted by Keenan on 02/23/2016 09:36am
Read the previous post to find out about access token and how to get one.Reply
Teaser....Posted by Tim on 11/17/2014 01:50pm
You say in your final closing that it would be better to use a filestream object rather than converting the file to a byte array - any hints on where to start with this? Thanks for the article I have learned a lot! Tim
RE: Teaser...Posted by Peter Shaw on 12/03/2014 04:45am
Hi Again Tim: There's a ton of stuff out there that will help you in this respect. I think your referring to using a stream reader however, as I already use a file stream in the demo code above :-) If you are then this stack overflow post will give you a good start: Feel free to find me on twitter and ask me directly, you can usually find me as @shawty_dsReply
Senior Manager, ProgrammingPosted by Richard Thomas on 09/29/2014 03:59am
Nice article, but the link to the previous article goes to "Creating Webforms with Friendly URLs". I don't see anything about OneDrive authenticating there.
RE:Posted by Peter Shaw on 09/30/2014 09:08am
Hi Richard, yes sorry about that, I've informed the site publishers. The article on Authentication was designed to go out before or soon after this one. Hopefully the authentication article will be published soon, and they will link them both together once they do. Regards ShawtyReply
Wrong link?Posted by RC on 09/29/2014 02:42am
Hi, I'm not sure if I'm getting the right link to your previous article but it points to a post on creating WebForms which doesn't tackle anything about authentication or OneDrive at all. I'd love to read the previous post so I could understand this one better. Could you post the right link? :)
Authenticating PostPosted by Peter Shaw on 11/06/2014 04:59am
The previous post about authenticating, is now on-line at :
RE: Wrong link?Posted by Peter Shaw on 09/30/2014 09:09am
Hi RC, Yes, sorry about that, I've notified the site publishers, hopefully they'll rectify things as soon as possible.Reply | http://www.codeguru.com/columns/dotnet/accessing-files-in-a-onedrive-account-from-code.html | CC-MAIN-2017-04 | refinedweb | 2,929 | 59.33 |
Create a Custom Loader in React Native
In a mobile app, we often fetch data from a web-service/API. And an app takes time to fetch this data and render it on the screen. In such cases, we want to render a loader on the screen indicating to the user that there is a process running in the background. Here is how you can add your own custom loader in react-native apps.
First things first, choosing a loader. Head over here and choose a loader that you like. Make sure you toggle the transparent background to yes, and have the file format selected as .gif. Download the file and copy it to /src/assets.
Next, we must add animated gif support to our app. To do this, open
android/appp/build.gradle and add the following line to your
dependencies block:
compile 'com.facebook.fresco:animated-gif:1.10.0'
Next, create a file
PreLoader.js in
/src/components/ and add the following code to the file:
import React, {Component} from 'react';
import {Platform, StyleSheet, Text, View, Image} from 'react-native';export default class PreLoader extends Component {
_renderLoader = () => {
if (this.props.preLoaderVisible) return (
<View style={styles.background}>
<Image source={require('../assets/preLoader.gif')} />
</View>
)
else return null;
}render () {
return (
this._renderLoader()
)
}
}const styles = StyleSheet.create ({
background: {
backgroundColor: <ENTER_BACKGROUND_COLOR_HERE>,
flex: 1,
position: 'absolute',
top: 0,
bottom: 0,
left: 0,
right: 0,
alignItems: 'center',
justifyContent: 'center'
}
});
Enter your choice of background color where mentioned. You also choose a partially or completely transparent background if you want.
Next, open the file where you want the loader to appear. Add an import statement as follows: (assuming your current file is in a separate directory in /src)
import PreLoader from '../components/PreLoader';
Include a state variable named loading and set it to true. Make sure to set
componentDidMount() which is the method called when the screen is rendered. Just before making any fetch call to an API, set the loading to true, and set it to false inside the
componentDidReceiveProps() method.
Add the following line in your
render() method:
<PreLoader preLoaderVisible={this.state.loading} />
And you’re done! Open up your app, and you should see your own custom loader when you make an API call. | https://shreyasnisal.medium.com/create-a-custom-loader-11ebfdf84ecd | CC-MAIN-2020-50 | refinedweb | 373 | 58.69 |
This document takes you through the basics of using NetBeans IDE 5.5 to develop
web applications. This document is designed to get you going as quickly as possible.
For more information on working with NetBeans IDE, see the Support
and Docs page on the NetBeans website..
Note: This document uses the NetBeans IDE 5.5 Release. If you
are using NetBeans IDE 6.0 or 6.1, see Introduction to Developing Web Applications. If you
are using NetBeans IDE 6.5, see Introduction to Developing Web Applications.
Before you start writing code, you have to make sure you have all of the necessary software
and that your project is set up correctly.
Before you begin, you need to install the following software on your
computer:
Optionally, you can download and use the Sun Java System (SJS) Application
Server (download),
JBoss, or WebLogic. However, the Tomcat Web Server that is bundled with the
IDE provides all the support you need for two-tier web applications such as
the one described in this guide. An application server (such as
the SJS Application Server, JBoss, or WebLogic) is only required when you
want to develop enterprise applications.
The bundled Tomcat Web Server is registered with the IDE automatically.
However, before you can deploy to
the SJS Application Server, JBoss, or WebLogic, you have to register a local instance
with the IDE. If you installed the NetBeans IDE 5.5/SJS Application
Server bundle, a local instance of the SJS Application Server is registered automatically.
Otherwise, take the following steps:
The IDE creates the $PROJECTHOME.
String name;
name = null;
package org.me.hello;
/**
*
* @author Administrator
*/
public class NameHandler {
private String name;
/** Creates a new instance of NameHandler */
public NameHandler() {
setName(null);
}
public String getName() {
return name;
}
public void setName(String name) {
this.name = name;
}
}>
The IDE uses an Ant build script to build and run your web applications.
The IDE generates the build script based on the options
you enter in the New Project wizard and the project's Project Properties dialog box.
The IDE builds the web application and deploys it, using the server you specified when creating the project.
Click OK. The response.jsp page should open and greet you:
For more information about developing web applications in NetBeans IDE 5.5, see the following resources:
To send comments and suggestions, get support, and keep informed on the latest
developments on the NetBeans IDE Java EE development features, join the
nbj2ee
@
netbeans.org
mailing list. For more information
about upcoming Java EE development features in NetBeans IDE, see.
Bookmark this page | http://www.netbeans.org/kb/55/quickstart-webapps.html | crawl-002 | refinedweb | 430 | 57.06 |
The MMIX Supplement to The Art of Computer Programming: Programming Techniques
Programming Techniques
1. Index Variables
Many algorithms traverse information structures that are sequentially allocated in memory. Let us assume that a sequence of n data items a0, a1, . . . , an−1 is stored sequentially. Further assume that each data item occupies 8 bytes, and the first element a0 is stored at address A; the address of ai is then A + 8i. To load ai with 0 ≤ i < n from memory into register ai, we need a suitable base address and so we assume that we have A = LOC(a0) in register a. Then we can write ‘8ADDU t,i,a; LDO ai,t,0’ or alternatively ‘SL t,i,3; LDO ai,a,t’. If this operation is necessary for all i, it is more efficient to maintain a register i containing 8i as follows:
Note how i advances by 8 when i advances by 1.
The branch instructions of MMIX, like most computer architectures, directly support a test against zero; therefore a loop becomes more efficient if the index variable runs toward 0 instead of toward n. The loop may then take the form:
In the above form, the items are traversed in decreasing order. If the algorithm requires traversal in ascending order, it is more efficient to keep A + 8n, the address of an, as new base address in a register an, and to run the index register i from −8n toward −8 as in the following code:
If a is used only to compute A+8n, it is possible to write ‘8ADDU a,n,a’ and reuse register a to hold A + 8n. Loading ai then resumes the nice form ‘LDO ai,a,i’, without any need for an. For an example, see Program 4.3.1S on page 63.
When computer scientists enumerate n elements, they say “a0, a1, a2, . . . ”, starting with index zero. When mathematicians (and most other people) enumerate n elements, they say “a1, a2, a3, . . . ” and start with index 1. Nevertheless when such a sequence of elements is passed as a parameter to a subroutine, it is customary to pass the address of its first element LOC(a1). If this address is in register a, the address of ai is now a + 8(i − 1). To load ai efficiently into register ai, we have two choices: Either we adjust register a, saying ‘SUBU a,a,8’ for a ← LOC(a0), or we maintain in register i the value of 8(i − 1), saying for example ‘SET i,0’ for i ← 1. In both cases, we can write ‘LDO ai,a,i’ to load ai ← ai.
Many variations of these techniques are possible; a nice and important example is Program 5.2.1S on page 76.
2. Fields
Let us assume that the data elements ai, just considered, are further structured by having three fields, two WYDEs and one TETRA, like this:
It is then convenient to define offsets for the fields reusing the field names as follows:
There is very little information in these lines, so these definitions are usually suppressed in a program’s display.
Computing the address of, say, the KEY field of ai requires two additions, A + 8i + KEY, of which only one must be done inside a loop over i. The quantity A + KEY can be precomputed and kept in a register named key. This simplifies loading of KEY(ai) as follows:
3. Relative Addresses
In a more general setting, this technique can be applied to relative addresses. Assume that one of the data items ai is given by its relative address P = LOC(ai) − BASE relative to some base address BASE.
Then again KEY(ai) can be loaded by a single instruction ‘LDT k,key,p’, if P is in register p, and BASE + KEY is in register key.
While an absolute address always requires eight bytes in MMIX’s memory, relative addresses can be stored using only four bytes, two bytes, or one byte, which allows tighter packing of information structures and reduces the memory footprint of applications that handle large numbers of links. Using this technique, the use of relative addresses can be as efficient as the use of absolute addresses.
4. Using the Low Order Bits of Pointers (“Bit Stuffing”)
Modern computers impose alignment restrictions on the possible addresses of primitive data types. In the case of MMIX, an OCTA may start only at an address that is a multiple of 8, a TETRA requires a multiple of 4, and a WYDE needs an even address. As a result, data structures are typically octabyte-aligned, because they contain one or more OCTA-fields—for example, to hold an absolute address in a link field. Those link fields, in turn, are multiples of eight as well. Put differently, their three low-order bits are all zero. Such precious bits can be put to use as tag bits, marking the pointer to indicate that either the pointer itself or the data item it points to has some special property. MMIX further simplifies the use of these bits as tags by ignoring the low-order bits of an address in load and store instructions. That convention is not the case for all CPU architectures. Still, these bits are usable as tags; they just need to be masked to zero on such computers before using link fields as addresses.
Three different uses need to be distinguished. First, a tag bit in a link may contain some additional information about the data item it links to. Second, it may tell about the data item that contains the link. Third, it may disclose information about the link itself.
An example of the first type of use is the implementation of two-dimensional sparse arrays in Section 2.2.6. There, the nonzero elements of each row (or column) form a circular linked list anchored in a special list head node. It would have been possible to mark each head node using one of the bits in one of its link fields, but it is more convenient to put this information into the links pointing to a head node. Once the link to the next node in the row is known, a single instruction is sufficient to test for a head node, as for example in the implementation of Program 2.2.6S on page 132:
If a head node would be marked by using a tag bit in its own UP link, the code would require an extra load instruction:
The great disadvantage of this method, so it seems, is the need to maintain all the tag bits in all of the links that point to a head node during the running time of the program. A closer look at the operations a Program like Algorithm 2.2.6S performs will reveal, however, that it inserts and deletes matrix elements but never deletes or creates head nodes. Inserting or deleting matrix elements will just copy existing link values; hence no special coding is required to maintain the tag bits in the links to head nodes.
The second, more common, type of use of a tag field is illustrated by the solution to exercise 2.3.5–4 on page 139. The least significant bit of the ALINK field is used to mark accessible nodes, and the least significant bit of the BLINK field is used to distinguish between atomic and non-atomic nodes. The following snippet taken from this code is typical for testing and setting of these tag bits:
An interesting variation of this use of a tag bit can be seen in exercise 2.2.3–26 on page 23. There, the data structure asks for a variable-length list of links allocated sequentially in memory. Instead of encoding the length of the list somewhere as part of the data structure, the last link of the structure is marked by setting a tag bit. This arrangement leads to very simple code for the traversal of the list.
As a final example, consider the use of tag bits in the implementation of threaded binary trees in Section 2.3.1. There, the RIGHT and LEFT fields of a node might contain “down” links to a left or right subtree, or they might contain “thread” or “up” links to a parent node (see, for example, 2.3.1–(10), page 324 ). Within a tree, there are typically both “up” and “down” links for the same node. Hence, the tag is clearly a property of the link, not the node. Searching down the left branch of a threaded binary tree, as required by step S2 of Algorithm 2.3.1S, which reads “If LTAG(Q) = 0, set Q ← LLINK(Q) and repeat this step,” may take the following simple form:
5. Loop Unrolling
The loop shown at the end of the last section has a SET operation that has no computational value; it just reorganizes the data when the code advances from one iteration to the next. A small loop may benefit significantly from eliminating such code by unrolling it or, in the simplest case, doubling it. Doubling the loop adds a second copy of the loop where the registers p and q exchange roles. This leads to
The new loop requires 2υ per iteration instead of 3υ. For another example, see the solution to exercise 5.2.1–33 on page 167. Further, Program 6.1Q′ on page 98 illustrates how loop unrolling can benefit loops maintaining a counter variable, and the solution to exercise 6.2.1–10 on page 184 shows how to completely unroll a loop with a small, fixed number of iterations.
6. Subroutines
The code of a subroutine usually starts with the definition of its stack frame, the storage area containing parameters and local variables. Using the MMIX register stack, it is sufficient for most subroutines to list and name the appropriate local registers. Once the stack frame is defined, the instructions that make up the body of the subroutine follow. The first instruction is labeled with the name of the subroutine—typically preceded by a colon to make it global; the last instruction is a POP. For a simple example see the solution to exercise 2.2.3–2 on page 124 or the solution to exercise 5–7 on page 162.
Subroutine Invocation. Calling a subroutine requires three steps: passing of parameters, transfer of control, and handling of return values. In the simplest case, with no parameters and no return values, the transfer of control is accomplished with a single ‘PUSHJ $X,YZ’ instruction and a matching POP instruction. The problem remains choosing a register $X such that the subroutine call will preserve the values of registers belonging to the caller’s stack frame. For this purpose, the subroutines in this book will define a local register, named t, such that all other named local registers have register numbers smaller than t. Aside from its role in calling subroutines, t is used as temporary variable. The typical form of a subroutine call is then ‘PUSHJ t,YZ’.
If the subroutine has n > 0 parameters, the registers for the parameter values can be referenced as t+1, t+2, . . . , t+ n. A simple example is Program 2.3.1T, where the two functions Inorder and Visit are called like this:
After the subroutine has transferred control back to the caller, it may use the return values. If the subroutine has no return values, register t (and all registers with higher register numbers) will be marginal and a reference to it will yield zero; otherwise, t will hold the principal return value and further return values will be in registers t+1, t+2, . . . . The function FindTag in the solution to exercise 2.5–27 on page 143 is an example of a function with three return values.
Nested Calls. If the return value of one function serves as a parameter for the next function, the schema just described needs some modification. It is better to place the return value of the first function not in register t but directly in the parameter register for the second function; therefore we have to adjust the first function call. For example, the Mul function in Section 2.3.2, page 42, needs to compute Q1 ← Mult(Q1,Copy(P2)), and that is done like this:
The Div function of exercise 2.3.2–15, which computes the slightly more complex formula
Q ← Tree2(Mult(Copy(P1),Q),Tree2(Copy(P2),Allocate(),“↑”),“/”),
contains more examples of nested function calls (see also the Pwr function of exercise 2.3.2–16).
Nested Subroutines. If one subroutine calls another subroutine, we have a situation known as nested subroutines. The most common error when programming MMIX is failing to save and restore the rJ register. At the start of a subroutine, the special register rJ contains the return address for the POP instruction. It will be rewritten by the next PUSHJ instruction and therefore must be saved if the next PUSHJ occurs before the POP.
There are two preferred places to save and restore rJ: Either start the subroutine with a GET instruction, saving rJ in a local register, and end the subroutine with a PUT instruction, restoring rJ, immediately before the terminating POP instruction; or, if the subroutine contains only a single PUSHJ instruction, save rJ immediately before the PUSHJ and restore it immediately after the PUSHJ. An example of the first method is the Mult function in Section 2.3.2; the second method is illustrated by the Tree2 function in the same section. If subroutines use the PREFIX instruction to create local namespaces, the local copy of ‘:rJ’ can simply be called ‘rJ’; that is the naming convention used in this book.
Tail Call Optimization. The Mult function of Section 2.3.2 is an interesting example for another reason: It uses an optimization called “tail call optimization.” If a subroutine ends with a subroutine call in such a way that the return values of the inner subroutine are already the return values of the outer subroutine, the stack frame of the outer subroutine can be reused for the inner subroutine because it is no longer needed after the call to the inner routine. Technically, this is achieved by moving the parameters into the right place inside the existing stack frame and then using a jump or branch instruction to transfer control to the inner subroutine. The POP instruction of the inner subroutine will then return directly to the caller of the outer subroutine. So, when the function Mult(u,v) wants to return Tree2(u,v,“×”), u and v are already in place and ‘GETA v+1,:Mul’ initializes the third parameter; then ‘BNZ t,:Tree2’ transfers control to the Tree2 function, which will return its result directly to the caller of Mult.
A special case of this optimization is the “tail recursion optimization.” Here, the last call of the subroutine is a recursive call to the subroutine itself. Applying the optimization will remove the overhead associated with recursion, turning a recursive call into a simple loop. For an example, see Program 5.2.2Q on page 82, which uses PUSHJ as well as JMP to call the recursive part Q2.
7. Reporting Errors
There is no good Program without good error handling. The standard situation is the discovery of an error while executing a subroutine. If the error is serious enough, it might be best to issue an error message and terminate the Program immediately. In most cases, however, the error should be reported to the calling Program for further processing.
The most common form of error reporting is the specification of special return values. Most UNIX system calls, for example, return negative values on error and nonnegative values on success. This schema has the advantage that the test for a negative value can be accomplished with a single instruction, not only by MMIX but by most CPUs. Another popular error return value, which can be tested equally well, is zero. For example, functions that return addresses often use zero as an error return, because addresses are usually considered unsigned and the valid addresses span the entire range of possible return values. In most circumstances, it is, furthermore, simple to arrange things in a way that excludes zero from the range of valid addresses.
MMIX offers two ways to return zero from a subroutine: The two instructions ‘SET $0,0; POP 1,0’ will do the job, but just ‘POP 0,0’ is sufficient as well. The second form will turn the register that is expected to contain the return value into a marginal register, and reading a marginal register yields zero (see the solution to exercise 2.2.3–4 on page 125 for an example).
The POP instruction of MMIX makes another form of error reporting very attractive: the use of separate subroutine exits for regular return and for error return (see exercise 2.2.3–3 and its solution on page 125 for an example). The subroutine will end with ‘POP 0,0’ in case of error and with ‘POP 1,1’ in case of success, returning control to the instruction immediately following the PUSHJ in case of error and to the second instruction after the PUSHJ otherwise. The calling sequence must then insert a jump to the error handler just after the PUSHJ while the normal control flow continues with the instruction after the jump instruction. The advantages of this method are twofold. First, the execution of the normal control path is faster because it no longer contains a branch instruction to test the return value. Second, this programming style forces the calling Program to provide explicit error handling; simply skipping the test for an error return will no longer work. | http://www.informit.com/articles/article.aspx?p=2303311 | CC-MAIN-2016-50 | refinedweb | 2,978 | 59.43 |
Directory Services
As discussed at the start of today's work a Directory Service is a Name Service with the ability to categorize names to support complex searches of the name space. The JNDI support for a Directory Service is differentiated from a Name Service by storing attributes as well as an object against a bound name. Typically you will probably use LDAP as your Directory Service, but NDS (Novell Directory Services) is also a Directory Service. The simple name services provided with J2SE (CORBA) and J2EE are not directory services.
An attribute is additional information stored with a name. Storing full name, address, phone number, and email with a person's name is a common use of a directory service. NDS uses attributes to control access to shared network drives and to configure a user's login environment.
A directory service stores attributes as values against a keyword (LDAP calls them IDs). Directory services usually support searching for names (objects) that have certain attributes defined (or not defined). Searching often supports looking for names with attributes that have a specific value (often wildcard pattern matching is supported). A simple search of a personnel database under an LDAP server might, for example, find all entries with the surname Washington.
LDAP uses a schema system to control which attributes an object must define and those that it may optionally define. Any attributes that you add or delete must not break the schema's requirements. LDAP servers may be able to disable schema checking, but disabling schema checking is usually a bad idea because the schema was created for a purpose.
If you want to see the capabilities of attributes, you must have access to a directory server. The rest of this section shows how to use an LDAP Directory Server.
Using LDAP
Using an LDAP Directory Service requires you to set the JNDI properties to specify the JNDI Service provider from Sun Microsystems and, of course, you must have an LDAP server running.
The J2EE RI does not include an LDAP server, so if you wish to work through this section, and you do not already have an LDAP server, you will have to obtain one from elsewhere. Only certain operating systems provide LDAP servers. Windows NT, 2000, and XP users will have to purchase the enterprise (or server) editions of these operating systems, which are typically significantly more expensive than the usual desktop or professional editions. Sun Microsystems' Solaris 8 Operating Environment includes an LDAP server.
Linux (and Solaris) users can download and install the OpenLDAP implementation, which is an open source server available free of charge for personal use. The Open LDAP server can be downloaded from.
Users of Microsoft Windows will have to make other arrangements as OpenLDAP is not available for this platform. If an Active Directory server (which supports an LDAP interface) is accessible on the network or you use the Enterprise (or Server) edition of the Operating System, you are ok. Otherwise, your best solution is to install Linux and OpenLDAP on a spare PC.
To use an LDAP server simply create a jndi.properties file with the following entries:
java.naming.factory.initial=com.sun.jndi.ldap.LdapCtxFactory java.naming.provider.url=ldap://localhost:389
If the LDAP server is not running on the current machine, replace the name localhost with the name or IP address of the actual LDAP server. Port number 389 is the default LDAP port number, and you can omit it if LDAP is running on the default port (or replace it by the actual port number if a non-standard port is being used).
LDAP names conform to the X.500 standard that requires a hierarchical namespace. A Distinguished Name (DN) unambiguously identifies each entry in the directory. The DN consists of the concatenation of the names from the root of the directory tree down to the specific entry.
A sample LDAP DN looks like the following:
cn=Martin Bond, ou=Authors, o=SAMS, c=us
This will be a familiar structure if you have worked with digital certificates or Active Directory.
Using a Directory Service
Directory Services cannot be accessed through the ordinary Context object. Instead, you must use a javax.naming.directory.DirContext class. The DirContext is a sub-class of Context, and you can use it in place of a Context when dealing with a Directory Service where you require directory functionality (such as attributes). For example,
DirContext ic = new InitialDirContext();
The DirContext class supports the same lookup(), bind(), rebind(), list() and other operations of the Context class. Additionally the DirContext provides support for attributes.
Attributes are read from the context just like you would look up a name from the context. The DirContext.getAttributes() method returns a NamingEnumeration that contains a collection of Attribute objects. Each Attribute has an ID (or key) and a list of values (an attribute can have more than one value for the same key). The following example prints all the attributes for a name specified by args[0]:
DirContext ctx = new InitialDirContext(); Attributes attrs = ctx.getAttributes(args[0]); NamingEnumeration ae = attrs.getAll(); while (ae.hasMore()) { Attribute attr = (Attribute)ae.next(); System.out.println(" attribute: " + attr.getID()); NamingEnumeration e = attr.getAll(); while (e.hasMore()) System.out.println(" value: " + e.next()); }
A second form of the getAttributes() method allows you to provide an array of attribute names, and it only returns the values for those attributes. It is not an error to query an attribute that isn't defined; a value is simply not returned for that attribute. The following fragment shows how to find the email and cellphone attributes for a name:
String[] IDs = {"email", "cellphone"}; Attributes attrs = ctx.getAttributes("cn=Martin Bond, ou=Authors, o=SAMS, c=us", IDs);
Overloaded versions of the bind() and rebind() methods in the DirContext class take a third Attributes parameter for binding a name. The following example could bind a new entry LDAP entry for cn=Martin Bond (the details of the Person object have been omitted for clarity and the email address has not been defined for privacy):
Person martin = ...; Attributes attrs = new BasicAttributes(); attrs.put(new BasicAttribute("email","...")); attrs.put(new BasicAttribute("description","author")); ctx.bind("cn=Martin Bond, ou=Authors, o=SAMS, c=us", martin, attrs);
As a final point, the DirContext.ModifyAttributes() method supports the addition, modification, and deletion of attributes for a name.
Searching a Directory Service
A powerful and useful feature of attributes is the ability to search for names that have specific attributes or names that have attributes of a particular value.
You use the DirContext.search() method to search for names. There are several overloaded forms of this method, all of which require a DN to define the context in the name tree where the search should begin. The simplest form of search() takes a second parameter that is an Attributes object that contains a list of attributes to find. Each attribute can be just the name, or the name and a value for that attribute.
The following code excerpt shows how to find all names that have an email attribute and a description attribute with the value author in the o=SAMS name space.
DirContext ctx = new InitialDirContext(); Attributes match = new BasicAttributes(true); match.put(new BasicAttribute("email")); match.put(new BasicAttribute("description","author")); NamingEnumeration enum = ctx.search("o=SAMS", match); while (enum.hasMore()) { SearchResult res = (SearchResult)enum.next(); System.out.println(res.getName()+", o=SAMS"); }
The search() method returns a NamingEnumeration containing objects of class SearchResult (a sub-class of NameClassPair discussed earlier). The SearchResult encapsulates information about the names found. The example code excerpt simply prints out the names (the names in the SearchResult object are relative to the context that was searched).
The SearchResult class also has a getAttributes() method that returns all the attributes for the found name. A second form of the search() method takes a third parameter that is an array of String objects specifying the attributes for the method to return. The following code fragment shows how to search and return just the email and cellphone name attributes:
Attributes match = new BasicAttributes(true); match.put(new BasicAttribute("email")); match.put(new BasicAttribute("description","author")); String[] getAttrs = {"email","cellphone"} NamingEnumeration enum = ctx.search("o=SAMS", match, getAttrs);
Yet another form of the search() method takes a String parameter and a SearchControls parameter to define a search filter.
The filter String uses a simple prefix notation for combining attributes and values.
You can use the javax.naming.directory.SearchControls argument required by search() to
Specify which attributes the method returns (the default is all attributes)
Define the scope of the search, such as the depth of tree to search down
Limit the results to a maximum number of names
Limit the amount of time for the search
The following example searches for a description of author and an email address ending with sams.com with no search controls:
String filter ="(|(description=author)(email=*sams.com))"; NamingEnumeration enum = ctx.search("o=SAMS", filter, null);
The JNDI API documentation and the JNDI Tutorial from Sun Microsystems provide full details of the search filter syntax. | https://www.informit.com/articles/article.aspx?p=174364&seqNum=5 | CC-MAIN-2020-40 | refinedweb | 1,522 | 55.03 |
- NAME
- DESCRIPTION
- APPETIZERS
- MEAT & POTATOES
- FAST FOOD
- THE MAIN COURSE
- JUST DESSERTS
- ENTERTAINING GUESTS
- FOOD FOR THOUGHT
- SEE ALSO
- AUTHORS
NAME
Inline::C::Cookbook - A Cornucopia of Inline C Recipes
DESCRIPTION adapted from email discussions I have had with Inline users around the world. It has been my experience so far, that Inline provides an elegant solution to almost all problems involving Perl and C.
Bon Appetit!
APPETIZERS
Hello, world
- Problem
-
It seems that the first thing any programmer wants to do when he learns a new programming technique is to use it to greet the Earth. How can I do this using Inline?
- Solution
use Inline C => <<'...'; void greet() { printf("Hello, world\n"); } ... greet;
- Discussion
Nothing too fancy here. We define a single C function
greet()which prints a message to STDOUT. One thing to note is that since the Inline code comes before the function call to
greet, we can call it as a bareword (no parentheses).
- See Also
See Inline and Inline::C for basic info about
Inline.pm.
- Credits
Brian Kernigan
Dennis Ritchie
One Liner
- Problem
A concept is valid in Perl only if it can be shown to work in one line. Can Inline reduce the complexities of Perl/C interaction to a one-liner?
- Solution
perl -e 'use Inline C=>q{void greet(){printf("Hello, world\n");}};greet'
- Discussion
Try doing that in XS :-)
- See Also :-(
- Credits
"Eli the Bearded" <elijah@workspot.net> gave me the idea that I should have an Inline one-liner as a signature.
MEAT & POTATOES
Data Types
- Problem
How do I pass different types of data to and from Inline C functions; like strings, numbers and integers?
- Solution
#; }
- Discussion
This script takes a file name from the command line and prints the ratio of vowels to letters in that file.
vowels.pluses.
It is very important to note that the examples in this cookbook use
char *to mean a string. Internally Perl has various mechanisms to deal with strings that contain characters with code points above 255, using Unicode. This means that naively treating strings as
char *, an array of 8-bit characters, can lead to problems. You need to be aware of this and consider using a UTF-8 library to deal with strings.
- See Also
The Perl Journal vol #19 has an article about Inline which uses this example.
- Credits
This example was reprinted by permission of The Perl Journal. It was edited to work with Inline v0.30 and higher.
Variable Argument Lists
- Problem
-
How do I pass a variable-sized list of arguments to an Inline C function?
- Solution
- Discussion
This little program greets a group of people, such as my coworkers. We use the
Cell.
- See Also
-
- Credits
-
Multiple Return Values
- Problem
How do I return a list of values from a C function?
- Solution
print map {"$_\n"} get_localtime(time); use Inline C => <<'END_OF_C_CODE'; #include <time.h> void get_localtime(SV * utc) { const time_t utc_ = (time_t)Sv
- Discussion
#includestatement is not really needed, because Inline automatically includes the Perl headers which include almost all standard system calls.
- See Also
For more information on the Inline stack macros, see Inline::C.
- Credits
Richard Anderson <starfire@zipcon.net> contributed the original idea for this snippet.
Multiple Return Values (Another Way)
- Problem
How can I pass back more than one value without using the Perl Stack?
- Solution; }
- Discussion.
- See Also
-
- Credits
Ned Konz <ned@bike-nomad.com> brought this behavior to my attention. He also pointed out that he is not the world famous computer cyclist Steve Roberts (), but he is close (). Thanks Ned.
Using Memory
- Problem
How should I allocate buffers in my Inline C code?
- Solution
print greeting('Ingy'); use Inline C => <<'END_OF_C_CODE'; SV* greeting(SV* sv_name) { return (newSVpvf("Hello %s!\n", SvPV(sv_name, PL_na))); } END_OF_C_CODE
- Discussionvdoes just that. And
newSVpvfincludes
sprintffunctionality.call.
- See Also
-
- Credits
-
FAST FOOD
Inline CGI
- Problem
How do I use Inline securely in a CGI environment?
- Solution
#!
- Discussion Also
See CGI for more information on using the
CGI.pmmodule.
- Credits
-
mod_perl
- Problem
-
How do I use Inline with mod_perl?
- Solution) }
- Discussion
This is a fully functional mod_perl handler that prints out the factorial values for the numbers 1 to 100. Since we are using Inline under mod_perl, there are a few considerations to , um, consider.
First, mod_perl handlers are usually run with
-Tt Also
See Stas Bekman's upcoming O'Reilly book on mod_perl to which this example was contributed.
Object Oriented Inline
- Problem
-
How do I implement Object Oriented programming in Perl using C objects?
- Solution"; } #--------------------------------------------------------- package Soldier; use Inline C => <<'END'; /* Allocate memory with Newx if it's available - if it's an older perl that doesn't have Newx then we resort to using New. */ #ifndef Newx # define Newx(v,n,t) New(0,v,n,t) #endif typedef struct { char* name; char* rank; long serial; } Soldier; SV* new(const char * classname, const char * name, const char * rank, long serial) { Soldier * soldier; SV * obj; SV * obj_ref; Newx(soldier, 1, Soldier); soldier->name = savepv(name); soldier->rank = savepv(rank); soldier->serial = serial; obj = newSViv((IV)soldier); obj_ref = newRV_noinc(obj); sv_bless(obj_ref, gv_stashpv(classname, GV_ADD));
- Discussion
-
savep.
- See Also
-
Read "Object Oriented Perl" by Damian Conway, for more useful ways of doing OOP in Perl.
You can learn more Perl calls in perlapi. If you don't have Perl 5.6.0 or higher, visit
THE MAIN COURSE
Exposing Shared Libraries
- Problem
You have this great C library and you want to be able to access parts of it with Perl.
- Solution
- Discussion';
- See Also
The
LIBSand
INCconfiguration options are formatted and passed into MakeMaker. For more info see ExtUtils::MakeMaker. For more options see Inline::C.
- Credits
This code was written by Matt Sergeant <matt@sergeant.org>, author of many CPAN modules. The configuration syntax has been modified for use with Inline v0.30.
Automatic Function Wrappers
- Problem
You have some functions in a C library that you want to access from Perl exactly as you would from C.
- Solutionimplementsoffers a much richer interface.
- Discussion
We access existing functions by merely showing Inline their declarations, rather than a full definition. Of course the function declared must exist, either in a library already linked to Perl or in a library specified using the
LIBSoption.
The first example wraps a function from the standard math library, so Inline requires no additional
LIBSdirective. The second uses the Config option to specify the libraries that contain the actual compiled C code.
This behavior is always disabled by default. You must enable the
autowrapoption to make it work.
- See Also
readline
Term::ReadLine::Gnu
- Credits.
Replacing h2xs
- Problem
You have a complete C library that you want to access from Perl exactly as you would from C.
- Solution
Just say:
use IO::All; use Inline C => sub { io('allheaders.h')->all =~ s/LEPT_DLL extern//gr }, enable => "autowrap", libs => '-lleptonica';
- Discussion
In principle, you can use h2xs to wrap a C library into an XS module. One problem with this is that the C parser code is a little out of date. Also, since it works by generating a number of files, maintaining it when the C library changes is a certain amount of work. Using Inline to do the work is much easier.
If the header file needs some processing, like removing some text that a full C compiler can deal with, but the Inline::C parser cannot, as in the example above? Well, Perl is good at text-processing.
Complex Data
- Problem
-
How do I deal with complex data types like hashes in Inline C?
- Solution => "Ingy döt Net", Nickname => "INGY", Module => "Inline.pm", Version => "0.30", Language => "C", ); dump_hash(\%hash);
- Discussionfunction call. This is the proper way to die from your C extensions.
- See Also
See perlapi for information about the Perl5 internal API.
- Credits
-
Hash of Lists
- Problem
-
How do I create a Hash of Lists from C?
- Solution)) { array = newAV(); hv_store(hash, word, strlen(word), newRV_noinc((SV*)array),; }
- Discussion
This is one of the larger recipes. But when you consider the number of calories it has, it's not so bad. The function
load_datatakes the name of a file as it's input. The file
cartoon.textmight Also
See perlapi for information about the Perl5 internal API.
- Credits
Al Danial <alnd@pacbell.net> requested a solution to this on comp.lang.perl.misc. He borrowed the idea from the "Hash of Lists" example in the Camel book.
JUST DESSERTS
Win32
- Problem
How do I access Win32 DLL-s using Inline?
- Solution
use Inline C => DATA => LIBS => '-luser32'; $text = "@ARGV" || 'Inline.pm works with MSWin32. Scary...'; WinBox('Inline Text Box', $text); __END__ __C__ #include <windows.h> int WinBox(char* Caption, char* Text) { return MessageBoxA(0, Text, Caption, 0); }
- Discussion Also
See Inline-Support for more info on MSWin32 programming with Inline.
- Credits
This example was adapted from some sample code written by Garrett Goebel <garrett@scriptpro.com>
Embedding Perl in C
- Problem
How do I use Perl from a regular C program?
- Solution
#!/usr/bin/cpr int main(void) { printf("Using Perl version %s from a C program!\n\n", CPR_eval("use Config; $Config{version};")); CPR_eval("use Data::Dumper;"); CPR_eval("print Dumper \\%INC;"); return 0; }
- Discussion Also
See Inline::CPR for more information on using CPR.
Inline::CPRcan be obtained from "/search.cpan.org/search?dist=Inline- CPR" in http:
- Credits.
ENTERTAINING GUESTS.
Event handling with Event.pm
- Problem
You need to write a C callback for the
Event.pmmodule. Can this be done more easily with Inline?
- Solution
- Discussion
The first line tells Inline to load the
Event.pmmodule. Inline then queries
Eventfor configuration information. It gets the name and location of Event's header files, typemaps and shared objects. The parameters that
Eventreturnsstructure that was passed to you.
In this example, I simply print values out of the structure. The Perl code defines 2 timer events which each invoke the same callback. The first one, every two seconds, and the second one, every three seconds.
As of this writing,
Event.pmis the only CPAN module that works in cooperation with Inline.
- See Also
Read the
Event.pmdocumentation for more information. It contains a tutorial showing several examples of using Inline with
Event.
- Credits
Jochen Stenzel <perl@jochen-stenzel.de> originally came up with the idea of mixing Inline and
Event. He also authored the
Eventtutorial.
Joshua Pritikin <joshua.pritikin@db.com> is the author of
Event.pm.
FOOD FOR THOUGHT
Calling C from both Perl and C
- Problem
I'd like to be able to call the same C function from both Perl and C. Also I like to define a C function that doesn't get bound to Perl. How do I do that?
- Solution
- Discussion.
Calling Perl from C
- Problem
So now that I can call C from Perl, how do I call a Perl subroutine from an Inline C function.
- Solution
use Inline C; for(1..5) { c_func_1('This is the first line'); c_func_2('This is the second line'); print "\n"; } sub perl_sub_1 { print map "$_\n", @_; } __DATA__ __C__ void c_func_2(SV* text) { dSP; ENTER; SAVETMPS; XPUSHs(sv_2mortal(newSVpvf("Plus an extra line"))); PUTBACK; call_pv("perl_sub_1", G_DISCARD); FREETMPS; LEAVE; } void c_func_1(SV* text) { c_func_2(text); }
- Discussion
This demo previously made use of Inline Stack macros only - but that's not the correct way to do it. Instead, base the callbacks on the perlcall documentation (as we're now doing).which calls
c_func_2. The second time we call
c_func_2directly.
c_func_2calls the Perl subroutine (
perl_sub_1) using the internal
perl_call_pvfunction. It has to put arguments on the stack by hand. Since there is already one argument on the stack when we enter the function, the
XPUSHs( which is equivalent to an
Inline_Stack_Push) adds a second argument.
We iterate through a 'for' loop 5 times just to demonstrate that things still work correctly when we do that. (This was where the previous rendition, making use solely of Inline Stack macros, fell down.)
- See Also
See Inline::C for more information about Stack macros.
See perlapi for more information about the Perl5 internal API.
Evaling C
- Problem
I've totally lost my marbles and I want to generate C code at run time, and
evalit into Perl. How do I do this?
- Solution
use Inline; use Code::Generator; my $c_code = generate('foo_function'); Inline->bind(C => $c_code); foo_function(1, 2, 3);
- Discussion
I can't think of a real life application where you would want to generate C code on the fly, but at least I know how I would do it. :)
The
bind()function of Inline let's you bind (compileloadexecute).
Providing a pure perl alternative
- Problem
I want to write a script that will use a C subroutine if Inline::C is installed, but will otherwise use an equivalent pure perl subroutine if Inline::C is not already installed. How do I do this?
- Solution
use strict; use warnings; eval { require Inline; Inline->import (C => Config => BUILD_NOISY => 1); Inline->import (C =><<'EOC'); int foo() { warn("Using Inline\n"); return 42; } EOC }; if ($@) { *foo =\&bar; } sub bar { warn("Using Pure Perl Implementation\n"); return 42; } my $x = foo(); print "$x\n";
- Discussion
If Inline::C is installed and functioning properly, the C sub foo is called by the perl code. Otherwise, $@ gets set, and the equivalent pure perl function bar is instead called.
Note, too, that the pure perl sub bar can still be explicitly called even if Inline::C is available.
Accessing Fortran subs using Inline::C
- Problem
I've been given a neat little sub written in fortran that takes, as its args, two integers and returns their product. And I would like to use that sub as is from Inline::C. By way of example, let's say that the fortran source file is named 'prod.f', and that it looks like this:
integer function sqarea(r,s) integer r, s sqarea = r*s return end
- Solution
We can't access that code directly, but we can compile it into a library which we can then access from Inline::C. Using gcc we could run:
gfortran -c prod.f -o prod.o ar cru libprod.a prod.o
The function is then accessible as follows:
use warnings; use Inline C => Config => LIBS => '-L/full/path/to/libprod_location -lprod -lgfortran'; use Inline C => <<' EOC'; int wrap_sqarea(int a, int b) { return sqarea_(&a, &b); } EOC $x = 15; $y = $x + 3; $ret = wrap_sqarea($x, $y); print "Product of $x and $y is $ret\n";
- Discussion
Note firstly that, although the function is specified as 'sqarea' in the source file, gfortran appends an underscore to the name when the source is compiled. (I don't know if all fortran compilers do this.) Therefore Inline::C needs to call the function as 'sqarea_'.
Secondly, because fortran subs pass args by reference, we need to pass the addresses of the two integer args to sqarea() when we call it from our Inline::C sub.
If using g77 instead of gfortran, the only necessary change is that we specify '-lg2c' instead of '-lgfortran' in our 'LIBS' setting.
SEE ALSO
AUTHORS
Ingy döt Net <ingy@cpan.org>
Sisyphus <sisyphus@cpan.org>
This program is free software; you can redistribute it and/or modify it under the same terms as Perl itself.
See | https://metacpan.org/pod/Inline::C::Cookbook | CC-MAIN-2015-22 | refinedweb | 2,549 | 65.32 |
Eclipse Community Forums - RDF feed Eclipse Community Forums Help regarding JDBC Driver <![CDATA[I use <b>windows 7 x64<b>. I downloaded and installed these latest version softwares and installed them using environment variables and other proceedures: <b>jdk7u9(64 bit)</b> <b>MySQL Server 5.5(64 bit)</b> <b>Apache 7.0.32</b> <b>Eclipse 4.2 Juno(64 bit)</b> and <b>mysql-connector-java-5.1.22.zip</b> The jdk, eclipse and apache had been integrated. I had them tested. MySQL also works fine on itself. I accessed it using the command prompt. But could'nt use the 'com.mysql.jdbc.Driver' within, neither my java application program nor my jsp program. These programs were tested successfully on another machine. So the codes are correct. I'm affraid it is some version incompatibility between these... The java application was just to insert a value into the database: <b>import java.sql.*; public class dbtest { public static void main(String[] args) { try{ Class.forName("com.mysql.jdbc.Driver"); Connection con=DriverManager.getConnection("jdbc:mysql://localhost:3306/student201","student201","tester201"); Statement st=con.createStatement(); st.executeUpdate("Insert into LoginRegister values('jhgjh','gfhj','jhf',2)"); con.close(); } catch(Exception e) { e.printStackTrace(); } } }</b> The error report and the complete stack at the time of error was... <b>java.lang.ClassNotFoundException: com.mysql.jdbc.Driver dbtest.main(dbtest.java:7)</b> Please Help!! ]]> Praveen Raj 2012-11-17T13:47:12-00:00 Re: Help regarding JDBC Driver <![CDATA. ]]> Praveen Raj 2012-11-17T17:32:41-00:00 Re: Help regarding JDBC Driver <![CDATA[On 11/17/2012 10:32 AM, Praveen Raj wrote: >. > The JDBC connector (JAR) must be in on the classpath (use Build Path).]]> Russell Bateman 2012-11-17T19:41:32-00:00 Re: Help regarding JDBC Driver <![CDATA[hi praveen, please could you explain me in detail what you did to fix this error as i did add the jar file to the buildpath and the simplest standalone java application to connect to mysql database still doesnt work , whereas it does work in comandprompt when i write the same code in a notepad and run it ... ]]> raji m 2013-11-10T23:18:27-00:00 | http://www.eclipse.org/forums/feed.php?mode=m&th=440266&basic=1 | CC-MAIN-2015-06 | refinedweb | 370 | 53.88 |
After working on the Ninject.RhinoMocks automocking container, I started using it in my current project right away and it wasn’t long before I started simplifying the usage of it with helper methods in my base test class.
From “MockingKernel.Get<T>()” To “Get<T>()”
I got tired of calling MockingKernel.Get<MyClass>() all over the place, so I created a helper method in my base ContextSpecification class called Get<T>(). This method does nothing more than forward calls to the MockingKernel.Get method, but it could easily be enhanced to do something more – like caching the object retrieved, so that the IoC container is not always resolving (even though it resolves to a singleton).
1: protected T Get<T>()
2: {
3: return MockingKernel.Get<T>();
4: }
This is a small change, but it makes a lot of code easier to ready. Compare this:
1: [Test]
2: public void it_should_do_that_thing()
3: {
4: MockingKernel.Get<IMyView>().AssertWasCalled(v => v.ThatThing());
5: }
To this:
1: [Test]
2: public void it_should_do_that_thing()
3: {
4: Get<IMyView>().AssertWasCalled(v => v.ThatThing());
5: }
A small amount of code reduction and a little easier to read.
From “Get<IMyView>().AssertWasCalled(…)” to “AssertWasCalled<IMyView>(…)”
After adding the Get<T> code, I realized that I could simplify the assertion even further by creating a helper method that would call Get<T> for me. RhinoMocks has two methods for AssertWasCalled. The first just takes the method and sets some defaults, like only expecting 1 call. The second allows you to specify method options for more advanced needs. I created to AssertWasCalled<T> methods to mimic the RhinoMocks methods and call Get<T> for me:
1: protected void AssertWasCalled<T>(Action<T> action)
2: {
3: T mock = Get<T>();
4: mock.AssertWasCalled(action);
5: }
6:
7: protected void AssertWasCalled<T>(Action<T> action, Action<IMethodOptions<object>> methodOptions)
8: {
9: T mock = Get<T>();
10: mock.AssertWasCalled(action, methodOptions);
11: }
This allowed me to simplify my specs down even further:
1: [Test]
2: public void it_should_do_that_thing()
3: {
4: AssertWasCalled<IMyView>(v => v.ThatThing());
5: }
6:
7: [Test]
8: public void it_should_do_the_other_thing_twice()
9: {
10: AssertWasCalled<IMyView>(v => v.TheOtherThing(), mo => mo.Repeat.Twice());
11: }
This is less code to read and easier to understand.
A Full Spec Example
With these helper methods in place, a full specification is much easier to read, now:
1: public class when_doing_something_with_that_thing : ContextSpecification
2: {
3: protected MyPresenter SUT;
4:
5: protected override void EstablishContext()
6: {
7: SUT = Get<MyPresenter>();
8: }
9:
10: protected override void When()
11: {
12: SUT.DoSomething();
13: }
14:
15: [Test]
16: public void it_should_do_that_thing()
17: {
18: AssertWasCalled<IMyView>(v => v.ThatThing());
19: }
20:
21: [Test]
22: public void it_should_do_the_other_thing_twice()
23: {
24: AssertWasCalled<IMyView>(v => v.TheOtherThing(), mo => mo.Repeat.Twice());
25: }
26: }
But Wait! There’s More!
It gets even better! In tomorrow’s blog post – part 2 of simplifying unit tests with automocking – I’ll reduce the full specification code even further by eliminating the need to declare and setup the System Under Test (SUT). | https://lostechies.com/derickbailey/2010/05/24/simplify-your-unit-tests-with-auto-mocking-part-1-helper-methods/ | CC-MAIN-2016-50 | refinedweb | 503 | 55.54 |
In this guide you will deploy a Consul datacenter on Azure Kubernetes Service (AKS).
» Prerequisites
To complete this guide successfully, you should have an Azure account with the ability to create a Kubernetes cluster.
All the tools you need are installed in the Azure Cloud Shell. Visit the Cloud Shell to run this example. We used the Linux bash shell.
The code for this example is in a git repository. Clone this repository within your cloud shell before starting the rest of the tutorial.
$ git clone
» AKS Configuration).
» Create an AKS Cluster with Terraform
First, create an Azure Kubernetes Service cluster. We'll use Terraform to create the cluster with the features we need for this demo.
Change into the
k8s/terraform/azure/01-create-aks-cluster directory.
$ cd k8s/terraform/azure/01-create-aks-cluster
Run the
az command with the following arguments to create an Active Directory
service principal account for this demo. If it works correctly, you'll see a
JSON snippet that includes your
appId,
password, and other values.
$ az ad sp create-for-rbac --skip-assignment { "appId": "aaaa-aaaa-aaaa", "displayName": "azure-cli-2019-04-11-00-46-05", "name": "", "password": "aaaa-aaaa-aaaa", "tenant": "aaaa-aaaa-aaaa" }
Use these values to configure Terraform. Open a new
terraform.tfvars file in
the in-browser text editor from the cloud shell with the
code command.
$ code terraform.tfvars
Next, copy the JSON output of the
az command above and paste it into the new
terraform.tfvars file. Edit the contents to conform to Terraform variable
style (remove curly braces and use the
= sign for assignment):
"appId"="aaaa-aaaa-aaaa" "displayName"="azure-cli-2019-04-11-00-46-05" "name"="" "password"="aaaa-aaaa-aaaa" "tenant"="aaaa-aaaa-aaaa"
Now you're ready to initialize the Terraform project.
$ terraform init Initializing provider plugins... - Checking for available provider plugins on... - Downloading plugin for provider "azurerm" (1.20.0)... Terraform has been successfully initialized!
The final step in this section is to run
terraform apply to create the
cluster. Respond with
yes when prompted.
$ terraform apply Plan: 2 to add, 0 to change, 0 to destroy. Do you want to perform these actions? Terraform will perform the actions described above. Only 'yes' will be accepted to approve. Apply complete! Resources: 2 added, 0 changed, 0 destroyed. Outputs: kubernetes_cluster_name = demo-aks resource_group_name = demo-rg
NOTE: It may take as many as 10 minutes to provision the AKS cluster.
Optionally, you may review the Terraform files to see the configuration code
needed to create the cluster on AKS. Note the lines which
specify that
role_based_access_control should be used. This will enable the
helm tool to work smoothly in a subsequent step.
resource "azurerm_kubernetes_cluster" "default" { # ... role_based_access_control { enabled = true } # ... }
» Provision the Tiller Service Account and Helm
Now, use the second Terraform configuration in the
02-fix-k8s-rbac directory
to configure the AKS cluster to run the
helm package manager tool.
Change into the
02-fix-k8s-rbac directory.
// Provision tiller service account, install helm $ cd ../02-fix-k8s-rbac
Run
terraform init on this separate project.
$ terraform init Initializing provider plugins... - Checking for available provider plugins on... - Downloading plugin for provider "kubernetes" (1.5.2)...
Now run
terraform apply to configure the cluster for
helm and install the
server-side components necessary to use
helm.
$ terraform apply Plan: 3 to add, 0 to change, 0 to destroy. Apply complete! Resources: 3 added, 0 changed, 0 destroyed.
At this point, all the necessary prerequisites should be installed and running.
While still in the cloud shell, you can run a
kubectl command to verify that
tiller is running (the server-side component of
helm).
$ kubectl get pods --all-namespaces | grep tiller kube-system tiller-deploy-aaaa-v5999 1/1 Running 0 35s
You can also use the
az aks browse command to open a new web browser tab with
the Kubernetes dashboard.
// View k8s dashboard $ az aks browse --resource-group demo-rg --name demo-aks
The Kubernetes dashboard will open in your web browser.
» Consul Configuration
Now that your AKS cluster is running and
helm is installed, you're ready to
install Consul to the cluster. Consul can run inside or outside of a Kubernetes
cluster but for this demo we will use containers to run Consul itself inside of
Kubernetes pods.
» Install Consul with Helm
Move out to the
k8s directory in the project. Clone the
consul-helm project
inside the
k8s directory so that Consul can be installed.
$ cd ~/demo-consul-101/k8s $ git clone
Optionally, open the
helm-consul-values.yaml file with the
code command to
review the configuration that the Helm chart will use. You'll see that a datacenter name is
specified, a load balancer is configured, and the Consul UI will be exposed
through the load balancer.
$ code helm-consul-values.yaml
We can now use
helm to install Consul using the
consul-helm chart that we cloned.
TIP: It is good to specify a
name so that you can more easily refer to
the release or optionally re-install the chart to the cluster without creating
unnecessary duplicates.
$ helm install -f helm-consul-values.yaml --name=azure ./consul-helm NAME: azure LAST DEPLOYED: Thu Apr 11 01:09:01 2019 NAMESPACE: default STATUS: DEPLOYED RESOURCES:
It may take a few minutes for the pods to spin up. When they are ready, you can view the Consul UI in your web browser.
TIP: Use the
--watch flag to wait for the load balancer to spin up.
// View Consul UI $ kubectl get service azure-consul-ui --watch NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE azure-consul-ui LoadBalancer 10.0.134.144 <pending> 80:31768/TCP 35s azure-consul-ui LoadBalancer 10.0.134.144 52.151.17.26 80:31768/TCP 104s
It may take a few minutes, but you'll see an entry for
EXTERNAL-IP. View that
IP address in your web browser and you'll see the Consul UI.
Click through to the Nodes screen and you'll see several Consul servers and agents running.
» Deploy Microservices
As the final deployment step, let's deploy a few containers which contain
microservices. A back end
counting service returns a JSON snippet with an
incrementing number. A
dashboard service displays the number that it finds
from the
counting service and also displays debugging information when the
backend service can be found or is unreachable.
The YAML files for these microservices are contained in the
04-yaml-connect-envoy directory.
Use the standard
kubectl command to
apply them to the cluster.
$ kubectl apply -f 04-yaml-connect-envoy pod/counting created pod/dashboard created service/dashboard-service-load-balancer created
You should see output showing that a
counting pod and a
dashboard pod have
been created, along with a load balancer for the
dashboard service.
Use the
kubectl command again to find the IP address of the
dashboard load balancer.
$ kubectl get service dashboard-service-load-balancer --watch NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE dashboard-service-load-balancer LoadBalancer 10.0.187.224 52.247.202.123 80:31622/TCP 54s
Open
EXTERNAL-IP in your web browser.
You'll see a number that was fetched from the backend
counting API. It will
increment every few seconds.
» Configure Intentions
Consul can be configured to allow access between services or block access.
Go to the Consul UI IP address as mentioned previously. Find the Intentions tab. Click the Create button.
Create an intention from
* to
* as a Deny intention. Click Save.
Back in the web browser, find the microservice dashboard as mentioned previously. You'll see that the Counting Service is Unreachable.
Back at the Consul UI, create another intention to allow communication. Click Create. Select a Source Service of dashboard and a Destination Service of counting. Choose the allow radio button. Finally, click the Create button.
Back at the microservice dashboard, you will see that it is again Connected and shows a new number every few seconds.
» Destroy the Demo
Now that you have created an AKS cluster, deployed Consul with
helm, and
deployed applications, you can
destroy the cluster. This requires only one
step.
Move back into the
terraform/azure/01-create-aks-cluster directory and run
terraform destroy.
$ cd terraform/azure/01-create-aks-cluster/ $ terraform destroy Plan: 0 to add, 0 to change, 2 to destroy. Destroy complete! Resources: 2 destroyed.
NOTE: This operation could take up to 10 minutes.
» Explanation
This guide covers the steps needed to deploy and configure a cluster as an operator. Additional steps not mentioned include development tasks such as creating a Golang web application, building Docker containers for each part of the application, configuring Consul and Kubernetes from init containers, writing YAML to deploy the containers and associated environment variables.
This guide will not go into detail about all the steps required, but the code is available for you to view. In particular, look for:
- Entire application and all configuration in the
k8sdirectory.
- YAML for Kubernetes in the
04-yaml-connect-envoydirectory. This includes configuration for the
countingand
dashboardservices, including annotations to enable Connect sidecar proxies and send environment variables to the relevant Docker containers.
- Init containers in the
counting-initand
dashboard-initdirectories. These contain shell scripts that register services with Consul. A Kubernetes init container runs before the related application container and has access to port numbers so the service can be configured.
- Application containers in the
counting-serviceand
dashboard-servicedirectories. These run several microservices and accept configuration via environment variables.
» Summary
In this guide you learned to deploy a Consul datacenter on Azure Kubernetes Service with the official Helm chart. Terraform configurations for AKS and Helm can make the process more consistent and automated. Helm charts and Docker containers run microservices and connect to each other securely with Consul Connect.
Further steps can be taken to secure the entire cluster, connect to other clusters in other datacenters, or deploy additional microservices that can find each other with Consul service discovery and connect securely with Consul Connect.
For additional reference documentation on Azure Kubernetes Service or HashiCorp Consul, refer to these websites: | https://learn.hashicorp.com/consul/getting-started-k8s/azure-k8s | CC-MAIN-2019-30 | refinedweb | 1,689 | 57.77 |
Recently there was a little bit of a ruckus about the correct way to talk to the Word object model in C# when it comes to missing arguments. If you've ever used the Word PIAs with C# (Primary Interop Assemblies) you will be familiar with the coding practice below. For example, this slightly modified example comes from the MSDN VSTO 1.0 documentation--an example of how to spell check a string using the word object model in C#:
internal void SpellCheckString() { string str = "Speling erors here."; object ignoreUpperCase = true; object missingType = Type.Missing; bool blnSpell = ThisApplication.CheckSpelling(str, ref missingType, ref ignoreUpperCase, ref missingType, ref missingType, ref missingType, ref missingType, ref missingType, ref missingType, ref missingType, ref missingType, ref missingType, ref missingType); MessageBox.Show(blnSpell.ToString(), "False if Errors, True if OK"); }
The first thing that probably comes to mind if you're a VB.NET programmer and you've never seen code written against Word in C# is “Why is this so verbose?“
VB.NET does some special things for you when there are optional arguments in a method, so the VB version of this looks like this:
Friend Sub SpellCheckString() Dim str As String = "Speling erors here." Dim blnSpell As Boolean = _ ThisApplication.CheckSpelling(str, , True) MessageBox.Show(blnSpell.ToString, "False if Errors, True if OK")End Sub
In VB.NET you don't have to worry about passing a value for each optional argument--the language handles this for you. You can even use commas as shown above to omit one particular variable you don't want to specify--in this case we didn't want to specify a custom dictionary, but we did want to set IgnoreUpperCase, so we ommitted the custom dictionary argument by just leaving it out between the commas.
The first thing that probably comes to mind if you're a C# programmer and you've never seen code written against Word in C# is “Why is all that stuff passed by reference?“
The first thing to understand is that when you are talking to Word methods, you are talking to the Word object model through interop. The PIA (Primary Interop Assembly) is the vehicle through which you talk to the unmanaged Word object model from managed code.
If you were to examine the IDL definition for “CheckSpelling“ generated from Word's COM Type Library you would see something like this:
Note that any parameter that is marked as optional--meaning you can omit the value and Word will pick a reasonable default value or ignore that option--is marshalled as a pointer to a VARIANT in Word (Excel doesn't typically use a pointer to a VARIANT for optional parameters so you don't have this by ref issue for most of Excel). When the PIA is generated, the generated IL ends up looking like this in the PIA:
.method public hidebysig newslot abstract virtual instance bool CheckSpelling([in] string marshal( bstr) Word, [in][opt] object& marshal( struct) CustomDictionary, [in][opt] object& marshal( struct) IgnoreUppercase, [in][opt] object& marshal( struct) MainDictionary, [in][opt] object& marshal( struct) CustomDictionary2, [in][opt] object& marshal( struct) CustomDictionary3, [in][opt] object& marshal( struct) CustomDictionary4, [in][opt] object& marshal( struct) CustomDictionary5, [in][opt] object& marshal( struct) CustomDictionary6, [in][opt] object& marshal( struct) CustomDictionary7, [in][opt] object& marshal( struct) CustomDictionary8, [in][opt] object& marshal( struct) CustomDictionary9, [in][opt] object& marshal( struct) CustomDictionary10) runtime managed internalcall{ .custom instance void [mscorlib]System.Runtime.InteropServices.DispIdAttribute::.ctor(int32) = ( 01 00 44 01 00 00 00 00 ) } // end of method _Application::CheckSpelling
Or, what you see in)
So the upshot of all this is that any optional argument in Word has to be passed by ref from C# and has to be declared as an object. Even though you'd like to strongly type the IgnoreUppercase to be a boolean in the CheckSpelling example, you can't. You have to type it as an object or you'll get a compile error. This ends up being a little confusing because you can strongly type the first argument--the string you want to check. That's because in the CheckSpelling method, the “Word“ argument (the string you are spell checking) is not an optional argument to CheckSpelling. Therefore, it is strongly typed and not passed by reference.
So this all brings us back to Type.Missing.
The way you specify in C# that you want to omit an argument because it's optional (after all, who really wants to specify 10 custom dictionaries?) is you pass an object by reference which you have set to Type.Missing. In our example, we just declared one variable called missingType and passed it in 11 times.
Now when you pass objects by reference to managed functions, you do that because the managed function is telling you that it might change the value of that object you passed into the function. So it might seem bad to you that we are passing one object set to missingType to all the parameters of CheckSpelling that we don't care about.
After all, imagine you have a function called DoStuff (shown below) that takes two parameters by ref. If you set the first parameter to true, it will do something happy. If you set the second parameter to true, it will delete an important file. But if you pass in Type.Missing to both parameters, it won't do anything--or so you thought.
Because you are passing by ref, what if the code evaluating the first parameter changes it from Type.Missing to true as a side-effect? Now, when the code executes later in the function to look at the second parameter, it will see the second parameter is now true because you passed the same instance to both parameters:
namespace
static void DoStuff(ref object DoSomethingHappy, ref object DeleteImportantFile) { if (DoSomethingHappy == Type.Missing) { // Don't do something happy but set DoSomethingHappy to true DoSomethingHappy = true; }
if (DeleteImportantFile == Type.Missing) { // Don't do anything } else if (((bool)DeleteImportantFile) == true) { // Do It System.Diagnostics.Debug.Assert(false, "About to delete an important file"); System.IO.File.Delete("c:\veryimportantfile.txt"); } } }}
static void Main(string[] args) { object missingType1 = Type.Missing; object missingType2 = Type.Missing; DoStuff(ref missingType1, ref missingType2); }
So you might guess that you might need to rewrite the first method, CheckSpelling, to declare a missingType1..missingType11 because of the possibility that Word might go and change one of the by ref parameters on you and thereby make it so you are no longer passing Type.Missing but something else like “true” that may cause unintended side effects...
WRONG!
Remember that Word is an unmanaged object and you are talking to it through interop. The interop layer realizes that you are passing a Type.Missing to an optional argument on a COM object. Word expects a missing optional argument to be a VARIANT of type VT_ERROR set to DISP_E_PARAMNOTFOUND. So interop obliges and instead of passing a reference to your missingType object in some way, the interop layer passes a variant of type VT_ERROR set to DISP_E_PARAMNOTFOUND. Your missingType object that you passed by reference is safe because it never really got passed directly into Word. It is impossible for Word to mess with your variable, even though you look at the syntax of the call and think it would be possible because it is passed by ref.
So the inital CheckSpelling code is completely correct. Your missingType variable is safe--it won't be changed on you by Word even though you pass it by ref.
But remember this is sort of a special case that only applies when talking through interop to an unmanaged object model that has optional arguments. Don't let this Word special case make you sloppy with other managed methods that you pass values to by ref. When talking to managed methods, you have to be careful when passing by ref because the managed method can change the variable you pass in as shown in the DoStuff example.
PingBack from
Trademarks |
Privacy Statement | http://blogs.msdn.com/eric_carter/archive/2004/04/15/114079.aspx | crawl-002 | refinedweb | 1,342 | 51.38 |
Introducing Truffle, a Blockchain Smart Contract Suite — SitePoint
In the early days of smart contract development (circa 2016) the way to go was to write smart contracts in your favorite text editor and deploy them by directly calling
geth and
solc.
The way to make this process a little bit more user friendly was to make bash scripts which could first compile and then deploy the contract … which was better, but still pretty rudimentary — the problem with scripting, of course, being the lack of standardization and the suboptimal experience of bash scripting.
The answer came in two distinct flavors — Truffle and Embark — with Truffle being the more popular of the two (and the one we’ll be discussing in this article).
To understand the reasoning behind Truffle, we must understand the problems it’s trying to solve, which are detailed below.
Compilation
Multiple versions of the
solc compiler should be supported at the same time, with a clear indication which one is used.
Environments
Contracts need to have development, integration and production environments, each with their own Ethereum node address, accounts, etc.
Testing
The contracts must be testable. The importance of testing software can’t be overstated. For smart contracts, the importance is infinitely more important. So. Test. Your. Contracts!
Configuration
Your development, integration and production environments should be encapsulated within a config file so they can be committed to git and reused by teammates.
Web3js Integration
Web3.js is a JavaScript framework for enabling easier communication with smart contracts from web apps. Truffle takes this a step further and enables the Web3.js interface from within the Truffle console, so you can call web functions while still in development mode, outside the browser.
Installing Truffle
The best way to install Truffle is by using the Node Package Manager (npm). After setting up NPM on your computer, install Truffle by opening the terminal and typing this:
npm install -g truffle
Note: the
sudo prefix may be required on Linux machines.
Getting Started
Once Truffle is installed, the best way to get a feel for how it works is to set up the Truffle demo project called “MetaCoin”.
Open the terminal app (literally Terminal on Linux and macOS, or Git Bash, Powershell, Cygwin or similar on Windows) and position yourself in the folder where you wish to initialize the project.
Then run the following:
mkdir MetaCoin cd MetaCoin truffle unbox metacoin
You should see output like this:
Downloading... Unpacking... Setting up... Unbox successful. Sweet! Commands: Compile contracts: truffle compile Migrate contracts: truffle migrate Test contracts: truffle test
If you get some errors, it could be that you’re using a different version of Truffle. The version this tutorial is written for is
Truffle v4.1.5, but the instructions should stay relevant for at least a couple of versions.
The Truffle Project Structure
Your Truffle folder should look a little bit ├── truffle-config.js └── truffle.js
Contracts folder
This is the folder where you will put all of your smart contracts.
In your contracts folder, there’s also a
Migrations.sol file, which is a special file — but more about that in the following section.
When Truffle compiles the project, it will go through the
contracts folder and compile all the compatible files. For now, the most used files are Solidity files with the
.sol extension.
In the future, this might transition to Vyper or SolidityX (both better for smart contract development, but less used for now).
Migrations Folder
What is a truffle migration? In essence it’s a script which defines how the contracts will be deployed to the blockchain.
Why do we need migrations?
As your project becomes more and more complex, the complexity of your deployments becomes more and more complex accordingly.
Let’s take an example:
- You have smart contracts
One,
Twoand
Three
- The smart contract
Threecontains a reference to the smart contract
Oneand requires the address of contract
Twoin its constructor.
This example requires that contracts not only to be deployed sequentially, but also that they cross reference each other. Migrations, in a nutshell, enable us to automate this process.
A rough overview of how you would do this would be as follows:
var One = artifacts.require("One"); var Two = artifacts.require("Two"); var Three = artifacts.require("Three"); module.exports = function(deployer) { deployer.deploy(One).then(function() { deployer.deploy(Two).then(function() { deployer.deploy(Three, One.address); }) }); };
Beyond that, migrations allow you to do a lot of other cool things like:
- set max gas for deployments
- change the
fromaddress of deployments
- deploy libraries
- call arbitrary contract functions
Initial migration
As you’ve noticed in your
MetaCoin project, you have a file called
1_initial_migration.js. What this file does is deploy the
Migrations.sol contract to the blockchain.
Usually you don’t have to do anything to this file once it’s initialized, so we won’t focus too much on this.
Test Folder
As I’ve said: YOU! MUST! TEST! SMART! CONTRACTS! No buts, no ifs, no maybes: you MUST do it.
But if you’re going to do it, it would be cool to have an automatic tool to enable you to do it seamlessly.
Truffle enables this by having a built-in testing framework. It enables you to write tests either in Solidity or JavaScript.
The examples in the MetaCoin project speak for themselves basically, so we won’t get too much into this.
The key is, if you’re writing Solidity tests, you import your contracts into the tests with the Solidity
import directive:
import "../contracts/MetaCoin.sol";
And if you’re writing them in JavaScript, you import them with the
artifacts.require() helper function:
var MetaCoin = artifacts.require("./MetaCoin.sol");
Configuration File
The configuration file is called either
truffle.js or
truffle-config.js. In most cases it’ll be called
truffle.js, but the fallback is there because of weird command precedence rules on Windows machines.
Just know that, when you see
truffle.js or
truffle-config.js, they’re the same thing, basically. (Also, don’t use CMD on windows; PowerShell is significantly better.)
The config file defines a couple of things, detailed below.
Environments
Develop, TestNet, Live (Production). You can define the address of the Geth note, the
network_id, max gas for deployment, the gas price you’re willing to pay.
Project structure
You can change where the files are built and located, but it isn’t necessary or even recommended.
Compiler version and settings
Fix the
solc version and set the
-O (optimization) parameters.
Package management
- Truffle can work with EthPM (the Ethereum Package Manager), but it’s still very iffy.
- You can set up dependencies for EthPM to use in your Truffle project.
Project description
Who made the project, what is the project name, contact addresses etc.
Running the Code
In order to compile your contracts, run this:
truffle compile
In order to run migrations, you can just use this:
truffle migrate
Or you can do it by specifying an environment:
truffle migrate --network live
In order to test your contracts run this:
truffle test
Or you can run a specific test by running this:
truffle test ./path/to/FileTest.sol
Conclusion
Truffle is a very handy tool that makes development in this brand new ecosystem a little easier. It aims to bring standards and common practices from the rest of the development world into a little corner of blockchain experimentation.
This quick tutorial has demonstrated and explained the basics, but to truly understand Truffle, you’ll need to dive in deeper and experiment on actual projects. That’s what we’ll explore throughout SitePoint’s blockchain hub. We next take a look in a bit more detail at testing smart contracts and Truffle migrations. | http://brianyang.com/introducing-truffle-a-blockchain-smart-contract-suite-sitepoint/ | CC-MAIN-2018-51 | refinedweb | 1,288 | 55.03 |
XAML is a declarative markup language. As applied to the .NET Framework programming model, XAML simplifies creating a UI for a .NET Framework application. You can create visible UI elements in the declarative XAML markup, and then separate the UI definition from the run-time logic by using code-behind files, joined to the markup through partial class definitions. XAML directly represents the instantiation of objects in a specific set of backing types defined in assemblies. This is unlike most other markup languages, which are typically an interpreted language without such a direct tie to a backing type system. XAML enables a workflow where separate parties can work on the UI and the logic of an application, using potentially different tools.
When represented as text, XAML files are XML files that generally have the .xaml extension. The files can be encoded by any XML encoding, but encoding as UTF-8 is typical.
The following example shows how you might create a button as part of a UI. This example is.
XAML Object Elements include a prefix, a concept that will be explained later.) After this, you can optionally declare attributes on the object element. To complete the object element tag, end with a closing angle bracket (>). You can instead use a self-closing form that does not have any content, by completing the tag with a forward slash and closing angle bracket in succession (/>). For example, look at the previously shown markup snippet again:
This specifies two object elements: <StackPanel> (with content, and a closing tag later), and <Button .../> (the self-closing form, with several attributes). The object elements StackPanel and Button each map to the name of a class that is defined by WPF and is part of the WPF assemblies. When you specify an object element tag, you create an instruction for XAML processing to create a new instance. Each instance is created by calling the default constructor of the underlying type when parsing and loading the XAML.
Attribute Syntax (Properties)
Properties of an object can often be expressed as attributes of the object element. An attribute syntax names the property that is being set in attribute syntax, followed by the assignment operator (=). The value of an attribute is always specified as a string that is contained within quotation marks.
Attribute syntax is the most streamlined property setting syntax and is the most intuitive syntax to use for developers who have used markup languages in the past. For example, the following markup creates a button that has red text and a blue background in addition to display text specified as Content.
Property Element Syntax
For some properties of an object element, attribute syntax is not possible, because the object or information necessary to provide the property value cannot be adequately expressed within the quotation mark and string restrictions of attribute syntax. For these cases, a different syntax known as property element syntax can be used.
The syntax for the property element start tag is <typeName.propertyName>. Generally, the content of that tag is an object element of the type that the property takes as its value . After specifying content, you must close the property element with an end tag. The syntax for the end tag is </typeName.propertyName>.
If an attribute syntax is possible, using the attribute syntax is typically more convenient and enables a more compact markup, but that is often just a matter of style, not a technical limitation. The following example shows the same properties being set as in the previous attribute syntax example, but this time by using property element syntax for all properties of the Button.
Collection Syntax
The XAML language includes some optimizations that produce more human-readable markup. One such optimization is that if a particular property takes a collection type, then items that you declare in markup as child elements within that property's value become part of the collection. In this case a collection of child object elements is the value being set to the collection property.
The following example shows collection syntax for setting values of the GradientStops property:
XAML Content Properties
XAML specifies a language feature whereby a class can designate exactly one of its properties to be the XAML content property. Child elements of that object element are used to set the value of that content property. In other words, for the content property uniquely, you can omit a property element when setting that property in XAML markup and produce a more visible parent/child metaphor in the markup.
For example, Border specifies a content property of Child. The following two Border elements are treated identically. The first one takes advantage of the content property syntax and omits the Border.Child property element. The second one shows Border.Child explicitly..
Text Content
A small number of XAML elements can directly process text as their content. To enable this, one of the following cases must be true:
The class must declare a content property, and that content property must be of a type assignable to a string (the type could be Object). For instance, any ContentControl uses Content as its content property and it is type Object, and this supports the following usage on a.
Content Properties and Collection Syntax Combined
Consider this example:
Here, each Button is a child element of StackPanel. This is a streamlined and intuitive markup that omits two tags for two different reasons.
Omitted StackPanel.Children property element: StackPanel derives from Panel. Panel defines Panel.Children as its XAML content property.
Omitted UIElementCollection object element: The Panel.Children property takes the type UIElementCollection, which implements IList. The collection's element tag can be omitted, based on the XAML rules for processing collections such as IList. (In this case, UIElementCollection actually cannot be instantiated because it does not expose a default constructor, and that is why the UIElementCollection object element is shown commented out).
Attribute Syntax (Events)
Attribute syntax can also be used for members that are events rather than properties. In this case, the attribute's name is the name of the event. In the WPF implementation of events for XAML, the attribute's value is the name of a handler that implements that event's delegate. For example, the following markup assigns a handler for the Click event to a Button created in markup:
There is more to events and XAML in WPF than just this example of the attribute syntax. For example, you might wonder what the ClickHandler referenced here represents and how it is defined. This will be explained in the upcoming section of this topic..
WPF XAML processors and serializers will ignore or drop all nonsignificant whitespace, and will normalize any significant whitespace. This is consistent with the default whitespace behavior recommendations of the XAML specification. This behavior is generally only of consequence when you specify strings within XAML content properties. In simplest terms, XAML converts space, linefeed and tab characters into spaces, and then preserves one space if found at either end of a contiguous string. The full explanation of XAML whitespace handling is not covered in this topic. For details, see Whitespace Processing in XAML.
Markup extensions are a XAML language concept. When used to provide the value of an attribute syntax, curly braces ({ and }) indicate a markup extension usage. This usage directs the XAML processing to escape from the general treatment of attribute values as either a literal string or a string-convertible value.
The most common markup extensions used in WPF application programming are Binding, used for data binding expressions, and the resource references StaticResource and DynamicResource. By using markup extensions, you can use attribute syntax to provide values for properties even if that property does not support an attribute syntax in general. Markup extensions often use intermediate expression types to enable features such as deferring values or referencing other objects that are only present at run time.
For example, the following markup sets the value of the Style property using attribute syntax. The Style property takes an instance of the Style class, which by default could not be instantiated by an attribute syntax string. But in this case, the attribute references a particular markup extension, StaticResource. When that markup extension is processed, it returns a reference to a style that was previously instantiated as a keyed resource in a resource dictionary. section, it was stated that the attribute value must be able to be set by a string. The basic, native handling of how strings are converted into other object types or primitive values is based on the String type itself, in addition to native processing for certain types such as DateTime or Uri. But many WPF types or members of those types extend the basic string attribute processing behavior, in such a way that instances of more complex object types can be specified as strings and attributes.
The Thickness structure is an example of a type that has a type conversion enabled for XAML usages. Thickness indicates measurements within a nested rectangle and is used as the value for properties such as Margin. By placing a type converter on Thickness, all properties that use a Thickness are easier to specify in XAML because they can be specified as attributes. The following example uses a type conversion and attribute syntax to provide a value for a Margin:. For most WPF application scenarios, and for almost all of the examples given in the WPF sections of the SDK, the default XAML namespace is mapped to the WPF namespace. The xmlns:x attribute indicates an additional XAML namespace, which maps the XAML language namespace.
This usage of xmlns to define a scope for usage and mapping of a namescope is consistent with the XML 1.0 specification. XAML namescopes are different from XML namescopes only in that a XAML namescope also implies something about how the namescope's elements are backed by types when it comes to type resolution and parsing the XAML.
Note that the xmlns attributes are only strictly necessary on the root element of each XAML file. xmlns definitions will apply to all descendant elements of the root element (this behavior is again consistent with the XML 1.0 specification for xmlns.) xmlns attributes are also permitted on other elements underneath the root, and would apply to any descendant elements of the defining element. However, frequent definition or redefinition of XAML namespaces can result in a XAML markup style that is difficult to read.
The WPF implementation of its XAML processor includes an infrastructure that has awareness of the WPF core assemblies. The WPF core assemblies are known to contain the types that support the WPF mappings to the default XAML namespace. This is enabled through configuration that is part of your project build file and the WPF build and project systems. Therefore, declaring the default XAML namespace as the default xmlns is all that is necessary in order to reference XAML elements that come from WPF assemblies.
The x: Prefix
In the previous root element's markup.
x:Class: Specifies the CLR namespace and class name for the class that provides code-behind for a XAML page. You must have such a class to support code-behind per the WPF programming model, and therefore you almost always see x: mapped, even if there are no resources.
x:Name: Specifies a run-time object name for the instance that exists in run-time code after an object element is processed. In general, you will frequently use a WPF-defined equivalent property for x:Name. Such properties map specifically to a CLR backing property and are thus more convenient for application programming, where you frequently use run time code to find the named elements from initialized XAML. The most common such property is FrameworkElement.Name. You might still use x:Name when the equivalent WPF framework-level Name property is not supported in a particular type. This occurs in certain animation scenarios.
x:Static: Enables a reference that returns a static value that is not otherwise a XAML-compatible property.
x:Type: Constructs a Type reference based on a type name. This is used to specify attributes that take Type, such as Style.TargetType, although frequently the property has native string-to-Type conversion in such a way that the x:Type markup extension usage is optional.
There are additional programming constructs in the x: prefix/XAML namespace, which are not as common. For details, see XAML Namespace (x:) Language Features.
For your own custom assemblies, or for assemblies outside the WPF core of PresentationCore, PresentationFramework and WindowsBase, you can specify the assembly as part of a custom xmlns mapping. You can then reference types from that assembly in your XAML, so long as that type is correctly implemented to support the XAML usages you are attempting.
The following is a very basic example of how custom prefixes work in XAML markup. The prefix custom is defined in the root element tag, and mapped to a specific assembly that is packaged and available with the application. This assembly contains a type NumericUpDown, which is implemented to support general XAML usage as well as using a class inheritance that permits its insertion at this particular point in a WPF XAML content model. An instance of this NumericUpDown control is declared as an object element, using the prefix so that a XAML parser knows which XAML namespace contains the type, and therefore where the backing assembly is that contains the type definition.
<Page xmlns="" xmlns: <StackPanel Name="LayoutRoot"> <custom:NumericUpDown ... </StackPanel> </Page> consist of both XAML markup and code-behind. Within a project, the XAML is written as a .xaml file, and a CLR language such as Microsoft Visual Basic or C# is used to write a code-behind file. When a XAML file is markup compiled as part of the WPF programming and application models, the location of the XAML code-behind file for a XAML file is identified by specifying a namespace and class as the x:Class attribute of the root element of the XAML.
In the examples so far, you have seen several buttons, but none of these buttons had any logical behavior associated with them yet. The primary application-level mechanism for adding a behavior for an object element is to use an existing event of the element class, and to write a specific handler for that event that is invoked when that event is raised at run time. The event name and the name of the handler to use are specified in the markup, whereas the code that implements your handler is defined in the code-behind.
Notice that the code-behind file uses the CLR namespace ExampleNamespace and declares ExamplePage as a partial class within that namespace. This parallels the x:Class attribute value of ExampleNamespace.ExamplePage that was provided in the markup root. The WPF markup compiler will create a partial class for any compiled XAML file, by deriving a class from the root element type. When you provide code-behind that also defines the same partial class, the resulting code is combined within the same namespace and class of the compiled.
Routed Events
A particular event feature that is fundamental to WPF is a routed event. Routed events enable an element to handle an event that was raised by a different element, as long as the elements are connected through a tree relationship. When specifying event handling with a XAML attribute, the routed event can be listened for and handled on any element, including elements that do not list that particular event in the class members table. This is accomplished by qualifying the event name attribute with the owning class name. For instance, the parent StackPanel in the ongoing StackPanel / Button example could register a handler for the child element button's Click event by specifying the attribute Button.Click on the StackPanel object element, with your handler name as the attribute value. For more information about how routed events work, see Routed Events Overview.
By default, the object instance that is created in an object graph by processing a XAML object element does not possess a unique identifier or object reference. In contrast, if you call a constructor in code, you almost always use the constructor result to set a variable to the constructed instance, so that you can reference the instance later in your code. In order to provide standardized access to objects that were created through a markup definition, XAML defines the x:Name attribute. You can set the value of the x:Name attribute on any object element. In your code-behind, the identifier you choose is equivalent to an instance variable that refers to the constructed instance. In all respects, named elements function as if they were object instances (the name references that instance), and your code-behind can reference the named elements to handle run-time interactions within the application. This connection between instances and variables is accomplished by the WPF XAML markup compiler, and more specifically involve features and patterns such as InitializeComponent that will not be discussed in detail in this topic.
WPF framework-level XAML elements inherit a Name property, which is equivalent to the XAML defined x:Name attribute. Certain other classes also provide property-level equivalents for x:Name, which is also generally defined as a Name property. Generally speaking, if you cannot find a Name property in the members table for your chosen element/type, use x:Name instead. The x:Name values will provide an identifier to a XAML element that can be used at run time, either by specific subsystems or by utility methods such as FindName.
The following example sets Name on a StackPanel element. Then, a handler on a Button within that StackPanel references the StackPanel through its instance reference buttonContainer as set by Name.
Just like a variable, the XAML name for an instance is governed by a concept of scope, so that names can be enforced to be unique within a certain scope that is predictable. The primary markup that defines a page denotes one unique XAML namescope, with the XAML namescope boundary being the root element of that page. However, other markup sources can interact with a page at run time, such as styles or templates within styles, and such markup sources often have their own XAML namescopes that do not necessarily connect with the XAML namescope of the page. For more information on x:Name and XAML namescopes, see Name, x:Name Directive, or WPF XAML Namescopes.
XAML specifies a language feature that enables certain properties or events to be specified on any element, regardless of whether the property or event exists in the type's definitions for the element it is being set on. The properties version of this feature is called an attached property, the events version is called an attached event. Conceptually, you can think of attached properties and attached events as global members that can be set on any XAML element/object instance. However, that element/class or a larger infrastructure must support a backing property store for the attached values.
Attached properties in XAML are typically used through attribute syntax. In attribute syntax, you specify an attached property in the form ownerType.propertyName.
Superficially, this resembles a property element usage, but in this case the ownerType you specify is always a different type than the object element where the attached property is being set. ownerType is the type that provides the accessor methods that are required by a XAML processor in order to get or set the attached property value.
The most common scenario for attached properties is to enable child elements to report a property value to their parent element.
The following example illustrates the DockPanel.Dock attached property. The DockPanel class defines the accessors for DockPanel.Dock and therefore owns the attached property. The DockPanel class also includes logic that iterates its child elements and specifically checks each element for a set value of DockPanel.Dock. If a value is found, that value is used during layout to position the child elements. Use of the DockPanel.Dock attached property and this positioning capability is in fact the motivating scenario for the DockPanel class.
In WPF, most or all the attached properties are also implemented as dependency properties. For details, see Attached Properties Overview.
Attached events use a similar ownerType.eventName form of attribute syntax. Just like the non-attached events, the attribute value for an attached event in XAML specifies the name of the handler method that is invoked when the event is handled on the element. Attached event usages in WPF XAML are less common. base classes are used for inheritance in the CLR objects model. Base classes, including abstract ones, are still important to XAML development because each of the concrete XAML elements inherits members from some base class in its hierarchy. Often these members include properties that can be set as attributes on the element, or events that can be handled. FrameworkElement is the concrete base UI class of WPF at the WPF framework level. When designing UI, you will use various shape, panel, decorator, or control classes, which all derive from FrameworkElement. A related base class, FrameworkContentElement, supports document-oriented elements that work well for a flow layout presentation, using APIs that deliberately mirror the APIs in FrameworkElement. The combination of attributes at the element level and a CLR object model provides you with a set of common properties that are settable on most concrete XAML elements, regardless of the specific XAML element and its underlying type.
XAML.
XAML can be used to define all of the UI, but it is sometimes also appropriate to define just a piece of the UI in XAML. This capability could be used to enable partial customization, local storage of information, using XAML to provide a business object, or a variety of possible scenarios. The key to these scenarios is the XamlReader class and its Load method. The input is a XAML file, and the output is an object that represents all of the run-time tree of objects that was created from that markup. You then can insert the object to be a property of another object that already exists in the. | http://msdn.microsoft.com/en-us/library/ms752059(d=printer,v=vs.100).aspx | CC-MAIN-2014-49 | refinedweb | 3,714 | 52.29 |
It can sometimes be necessary to reference two versions of assemblies that have the same fully-qualified type names, for example when you need to use two or more versions of an assembly in the same application. By using an external assembly alias, the namespaces from each assembly can be wrapped inside root-level namespaces named by the alias, allowing them to be used in the same file.
The extern keyword is also used as a method modifier, declaring a method written in unmanaged code.
To reference two assemblies with the same fully-qualified type names, an alias must be specified on the command line, as follows:
/r:GridV1=grid.dll
/r:GridV2=grid20.dll
This creates the external aliases GridV1 and GridV2. To use these aliases from within a program, reference them using their fully qualified name, rooted in the appropriate namespace-alias
In the above example, GridV1::Grid would be the grid control from grid.dll, and GridV2::Grid would be the grid control from grid20.dll.
For more information, see the following sections in the C# Language Specification:
25.4 Extern aliases | http://msdn.microsoft.com/en-us/ms173212(VS.80).aspx | crawl-002 | refinedweb | 185 | 52.29 |
Difference between revisions of "Introduction to Python"
Revision as of 19:57, 30 May 2014 bottom part:
(If you don't have it, click on View → Views → we dominate totally our interpreter, secondNumber = 20 print print varA + varB
This will give us an error, varA is a string and varB is an int, and Python doesn't know what to do. But,
The standard Python commands are not many.. Suppose we write a file like this:
def sum(a,b): return a + b print "test.py succesfully loaded"
and we save it as test.py in our FreeCAD/bin directory. Now, let's start FreeCAD, and in the interpreter window,__ here. There are three very important Python reference documents on the net:
- the official Python tutorial with way more information than this one
- the official Python reference
- the Dive into Python wikibook/ book.
Be sure to bookmark them!
| https://www.freecadweb.org/wiki/index.php?title=Introduction_to_Python&diff=88610&oldid=16730 | CC-MAIN-2019-51 | refinedweb | 149 | 70.73 |
This is a demo of the
ipythonblocks module available at.
ipythonblocks provides a
BlockGrid object whose representation is an HTML table. Individual table cells are represented by
Block objects that have
.red, .
green, and
.blue attributes by which the color of that cell can be specified.
ipythonblocks is a teaching tool that allows students to experiment with Python flow control concepts and immediately see the effects of their code represented in a colorful, attractive way.
BlockGrid objects can be indexed and sliced like 2D NumPy arrays making them good practice for learning how to access arrays.
from ipythonblocks import BlockGrid
grid = BlockGrid(10, 10, fill=(123, 234, 123))
grid
grid[0, 0]
BlockGrid objects support iteration for quick access to individual blocks.
Blocks have
.row and
.col attributes (zero-based) to help track where in the grid you are. The individual color channels on each
Block can be modified directly.
for block in grid: if block.row % 2 == 0 and block.col % 3 == 0: block.red = 0 block.green = 0 block.blue = 0 grid
BlockGrid objects have
.height and
.width attributes to facilitate loops over the grid. Individual
Blocks can be accessed via Python- or NumPy-like indexing.
for r in range(grid.height): for c in range(grid.width): sq = grid[r, c] sq.red = 100 if r % 2 == 0: sq.green = 15 else: sq.green = 255 if c % 2 == 0: sq.blue = 15 else: sq.blue = 255
The
BlockGrid.show() method can also be used to display the grid or individual blocks.
grid.show()
grid[5, 5].show()
Slicing a
BlockGrid returns a new
BlockGrid object that is a view of the original, much like NumPy arrays.
sub_grid = grid[:, 5] sub_grid.show()
for block in sub_grid: block.red = 255
sub_grid
grid
Slicing can be used with iteration to work on a sub-grid. The
Block.set_colors method can be used to update all the colors at once.
for block in grid[2:6, 2:4]: block.set_colors(245, 178, 34) grid
Like NumPy arrays, the
BlockGrid.copy() method can be used to get a completely independent copy of the grid or slice.
sub_copy = grid[3:-3, 3:-3].copy() sub_copy.show()
sub_copy[:] = (0, 0, 0) sub_copy.show()
grid
Blocks can also be modified by assigning RGB tuples. This type of assignment can be used on individual blocks, or on slices to change many blocks at once.
grid[5] = (0, 0, 0) grid.show()
grid[:, 5] = (255, 0, 0) grid.show()
grid[-3:, -3:] = (0, 124, 124) grid.show()
grid[1, 1] = (255, 255, 255) grid.show()
grid[:, :] = (123, 234, 123) grid.show()
The displayed size of blocks (in pixels) can be controlled with the
block_size keyword, allowing some flexibility in how many blocks can comfortably fit on the screen.
grid = BlockGrid(50, 50, block_size=5) grid
The block display size can be modified at any time by changing the
BlockGrid.block_size attribute.
grid.block_size = 2 grid
And the grid lines between individual cells can be toggled by setting the
BlockGrid.lines_on attribute.
grid.lines_on = False grid | http://nbviewer.jupyter.org/github/jiffyclub/ipythonblocks/blob/master/demos/ipythonblocks_demo.ipynb | CC-MAIN-2016-30 | refinedweb | 508 | 70.29 |
import repository.query.Query as Query p = rep.findPath('//Queries') k = rep.findPath('//Schema/Core/Query') q = Query.Query("for i in '//Schema/Core/Kind' where True", p, k) for i in q: print iA Query is a Chandler Item, so you must supply the repository parent and kind of the item being created. In the Chandler application, the parent should be
'//Queries'.
argsattribute to a dict keyed by the parameter name, including the "$". The value of an entry in the dict depends on how you used the parameter in the query. If you used the parameter to indicate a ref-collection between the in and where keywords, then the value of the entry should be a tuple of the UUID of the item containing the reference collection, and a string which is the name of the reference collection attribute. If you used the parameter in the boolean condition, then the value is a single valued tuple containing the value you wish to pass as an argument. So for example, An argument used in the boolean condition
q.args["$0"] = ( data, )An argument used to specify a reference collection as the source of a query
q.args["$0"] = ( item.itsUUID, "attributeName" )
subscribemethod on Query. This method has two mandatory parameters and two optional parameters. The first parameter is an Item that has the required callback item, and the second parameter is that name of that callback method. The two optional parameters are a little more difficult to explain. The repository's concurrency model gives each thread a separate view of the items in the repository. You can select when you would like to be notified of changes. The options are: | http://chandlerproject.org/Projects/QuerySystemReference | crawl-002 | refinedweb | 280 | 56.76 |
We’ll discuss:
ES6 Background
ES6 Arrow Functions
JSX Transpilation
ES6 Variables with let and const
ES6 Destructuring
Intro to ES6
ES6 stands for ECMAScript version 6. ECMAScript is another name for JavaScript. You’ll find that there are various versions – fourth edition, fifth edition, sixth edition. The fifth edition is what I consider to be old-school JavaScript. This is the kind of JavaScript that you’d write since 2009 – between 2009 and 2015. The JavaScript that you see is ES5.
var square = function (x) { return x * x; }
This is ES5 JavaScript. You use
var to declare variables, and you need to use the
return statement, and you always need to use curly braces with functions. So if we define that, and then we call
square(4) of it returns
16.
In Chrome, there are many ways to open the JavaScript console – you can click on these three dots, and then go under More tools and select Developer tools. You can also open with Ctrl + Shift + I. You can type any JavaScript in here, in the console tab, and hit Enter.
console.log(“Hello JavaScript”)
Any errors that occur in the files that you edit will also show up here in the console. So it’s very useful for debugging.
The 6th edition is ECMAScript version 6, also abbreviated ES6, and it’s also called ECMAScript 2015 or ES 2015. One of the biggest improvements in this version is ES6 modules. And that’s what lets you write this
import * as modulename syntax. Other changes include using
const and
let to define variables instead of var. And also this really sweet syntax for functions, namely the arrow syntax.
()=> {…}
ES6 Arrow Functions
const square = (x) => {return x * x}
const square equals a function where
x is the argument, and the return value is
x times
x.
This is what you would write in ES6. You can leave out these parentheses.
const square = x => {return x * x}
You can leave out these curly braces, and if you do, you can leave out this return statement.
const square = x => x * x
This implicitly returns whatever the expression is.
If you have a large expression that you want to return, you can put it in parenthesis, and this is common practice with JSX, with React.
const square = x => (x * x)
But anyway this is how I would write square in ES6.
const square = x => x * x
It does the same thing as the other ones. We can invoke it –
square(4) and it returns
16.
JSX Transpilation
Back in our code, here I’m going to start refactoring this to use React components. And by the way, we already have one React component, and it’s defined as an ES6 arrow function that returns some JSX. And by the way, when this JSX gets transpiled, because JSX is not part of JavaScript, we can take a look at the generated
bundle.js to see how it gets transpiled from JSX into ES6.
See here’s our App component. And this is actually the code that runs React. All of our attribute values get passed in like this. So I just want you to be aware that JSX doesn’t actually run. It gets transpiled into this, and then gets run. So what this (
const App) is assigned to, is a function that returns an object returned from
React.createElement. This is called virtual DOM. Virtual Document Object Model. And React takes care of making sure that the actual DOM on the page mirrors your intended DOM. The virtual DOM that gets returned by this function.
But anyway, back in
index.js, let me start refactoring.
We’ve got all of our logic sort of jumbled up inside of
index.js. We’ve got variables that are global to this module, and we’re not really using the component infrastructure that React provides. Just looking at this, I have no idea what each of these different things represents. I would much rather see eyes, mouth components. Names that have some kind of semantic meaning. And usually, the app component sort of orchestrates all the other components. It doesn’t directly render these DOM elements like this. So I’m gonna go ahead and start refactoring this to take advantage of components and modules.
The first thing that stands out is this circle when I just glance at the code I have no idea what this represents.
<circle r={centerY - strokeWidth / 2} fill="yellow" stroke="black" stroke-width={strokeWidth} />;
What I would prefer to do is say background circle perhaps?
</BackgroundCircle/>
And this is what it looks like to use a React component. It needs to be uppercase, because if it’s lowercase, then React JSX sort of interprets it as a native kind of a DOM element. But if it’s uppercase, it should be a defined component that you have. So let me define the component now. Say
const BackgroundCircle equals a function that returns in parentheses, the JSX that I had cut from before. There we have it. We have defined a React component and used it.
const BackgroundCircle = () => ( <circle r={centerY - strokeWidth / 2} fill="yellow" stroke="black" stroke-width={strokeWidth} /> );
One thing that doesn’t quite feel right about this component is that it refers to these variables that are defined in this scope here at the start of
index.js.
In order to make this component independent, we can pass in these things as props – React props or properties. I’d like to make it the responsibility of the outer component to calculate the radius and then pass it into
BackgroundCircle.
So I’m going to cut this code here, and put
radius instead. And we can get
radius from
props, and we can access it like this –
props.radius.
const BackgroundCircle = (props) => ( <circle r={props.radius} fill="yellow" stroke="black" stroke-width={strokeWidth} /> );
This is currently broken because we’re not passing in any radius. Let’s pass in the value for
radius to our
BackgroundCircle component. And that can look like this..
radius= and then, in curly braces, we can put any JavaScript expression. And here, I’m going to paste that expression that I had from earlier.
<BackgroundCircle radius={centerY - strokeWidth / 2} />
And Boom! It works. This is how you define React props –
radius is a prop that’s being defined as this value –
centerY - strokeWidth / 2, and then we can access it as
props.radius inside of this function assigned to
const BackgroundCircle.
ES6 Destructuring
This could be simplified though by using ES6 destructuring, which looks like this.
const BackgroundCircle = ({ radius }) => ( <circle r={radius} fill="yellow" stroke="black" stroke-width={strokeWidth} /> );
To make sure this is crystal clear, let me just show some examples of this.
const person = { firstName: 'Jane', lastName: 'Doe', };
Let’s declare an object called
person –
firstName Jane, and
lastName Doe. So we define this object, and then we can access
person.firstName and
person.lastName.
If we want first name and last name available to us as sort of local variables, we could extract them like this.
const firstName = person.firstName
const lastName = person.lastName.
And now we have firstName and lastName defined. So we can just confirm that by saying,
console.log(firstName, lastName)
The problem here is that this is kind of verbose, and we see that verbosity in a React context, with props.this, props.that, we don’t want props.this and that all over the place. One way to solve that is put all the props. in one place like this. But another, cleaner solution is to use ES6 destructuring. And that looks something like this.
const { firstName, lastName } = person
ES6 Variables with let and const
This does the exact same thing as these two lines together. If I run this I get an error, because I’m using
const. When you use
const you can only define these things once. They can’t be mutated. They can’t be changed after the fact. See, if I say
const foo=5, and then I try to say
foo=6, it doesn’t work. But if you want to change something, that’s where you need to use
let —
let bar=5
Since we used
let to declare it, we can say
bar=6. No problem. So there’s a little detour explaining the background of these ES6 language features, and this is why they’re useful in refactoring React components. BackgroundCircle = ({ radius }) => ( <circle r={radius} fill="yellow" stroke="black" stroke-width={strokeWidth} /> ); const App = () => ( <svg width={width} height={height}> <g transform={`translate(${centerX},${centerY})`}> <BackgroundCircle radius={centerY - strokeWidth / 2} /> ); | https://datavis.tech/datavis-2020-episode-8-lets-make-a-face-part-iv-react-components-es6/ | CC-MAIN-2020-45 | refinedweb | 1,442 | 74.79 |
At Sun, 25 Jul 2004 17:19:41 +0200, Andreas Jochens wrote: > On 04-Jul-25 23:57, GOTO Masanori wrote: > > Andreas Jochens wrote: > > > Additionally, the attached patch adds > > > > > > nptl_extra_cflags = -g0 -O3 -fomit-frame-pointer -D__USE_STRING_INLINES > > > > > > to 'debian/sysdeps/amd64.mk' which uses '-g0' instead of the default '-g1'. > > > This is also necessary to compile glibc with gcc-3.4. > > > > Why is this needed? Could you teach me the reason? > > Thank you for your reply. > > The '-g0' is needed because gcc-3.4 has a bug which will be triggered > when '-g1' is used to compile nptl (please see gcc-3.4 bug #260710 in > the BTS). This gcc-3.4 bug is known to upstream since a while ago and > has been discussed on the upstream gcc developers list. However, I > could not find a better fix. > This gcc-3.4 bug is not amd64-specific, it also occurs on i386. Hmm. It's actually bad bug. Applying -g0 makes libc6-dbg useless, so I hope it should be fixed soon. To make sure applying -g0 is only for gcc-3.4, I modified amd64.mk as follows: nptl_extra_cflags = -O3 -fomit-frame-pointer -D__USE_STRING_INLINES # work around patch for gcc-3.4: BUILD_CC_VERSION := $(shell $(BUILD_CC) -dumpversion | sed 's/\([0-9]*\.[0-9]*\)\(.*\)/\1/') ifeq ($(BUILD_CC_VERSION),3.4) nptl_extra_cflags += -g0 endif BTW, your patch could not be applied because of patch rejection: patching file ./debian/patches/fno-unit-at-a-time.dpatch patch: **** malformed patch at line 123: open libc_cv_z_initfirst libc_cv_Bgroup ASFLAGS_config libc_cv_z_combreloc libc_cv_have_initfini libc_cv_cpp_asm_debuginfo no_whole_archive exceptions LIBGD EGREP sizeof_long_double libc_cv_gcc_unwind_find_fde uname_sysname uname_release uname_version old_glibc_headers libc_cv_slibdir libc_cv_localedir libc_cv_sysconfdir libc_cv_rootsbindir libc_cv_forced_unwind use_ldconfig ldd_rewrite_script gnu_ld gnu_as elf xcoff static shared pic_default profile omitfp bounded static_nss nopic_initfini DEFINES linux_doors mach_interface_list VERSION RELEASE LIBOBJS LTLIBOBJS' Please send your no-unit-at-a-time.dpatch attached as normal file, not diff style. amd64-libs.dpatch and control* are already reflected in the cvs, thus we need to pay additional work to strip your patch. Regards, -- gotom | https://lists.debian.org/debian-glibc/2004/07/msg00331.html | CC-MAIN-2017-39 | refinedweb | 330 | 59.19 |
The importance of .self
I am a VERY new person to python and pythonista(not ot coding though, for I have been programming in Lua and C++ and Java for the last 8 months). So this error that i am getting is really bothering me here is the code and error. Please Help.
from scene import * class MyScene (Scene): def setup(self): # This will be called before the first frame is drawn. show = 0 pass def draw(self): # This will be called for every frame (typically 60 times per second). background(0, 0, 0) # Draw a red circle for every finger that touches the screen: fill(0.50, 1.00, 0.00) rect(100, 100, 150, 50) fill(1, 0, 0) if show == 1: #HERE IS WHERE MY ERROR IS TAKING PLACE ellipse(500, 400, 100, 100) for touch in self.touches.values(): #ellipse(touch.location.x - 50, touch.location.y - 50, 100, 100) if touch.location.x > 99: if touch.location.x < 251: if touch.location.y > 99: if touch.location.y < 151: show = 1 def touch_began(self, touch): pass def touch_moved(self, touch): pass def touch_ended(self, touch): pass run(MyScene())
The error says UnboundLocalError: local variable 'show' referenced before assignment
Change all instances of 'show' to 'self.show'. The show variable is a local variable only visible within a single function. The self.show variable is bound to the scene object and is thus shared across all functions (methods) of that object.
Two minor points to consider:<li>instead of assigning 0 and 1 to self.show, consider using True and False which are easier to read/understand</li>
<li>instead of using <i>if self.show == 1:</i> consider using <i>if self.show:</i> which is easier to type/read/understand and deals well with the situation where self.show is set to 2, 'a', True, etc.</li>
Thank you so much!!! That really helped!
So when ever I want to use a variable all throughout my code I should make it self.
All throughout the <b>class</b>. | https://forum.omz-software.com/topic/275/the-importance-of-self/1 | CC-MAIN-2018-09 | refinedweb | 344 | 67.45 |
You can change many of the basic properties of a table layout using the Table Layout Options tab in the Table Layout dialog box. Any properties or options that do not apply to the selected table layout, because of the data source it is associated with, are disabled.
You can view the effects of the changes you make in the data preview area in the dialog box.
To configure properties for a table layout:
Fixed Record Length – Select for data files where each record in the file has the same maximum length and the position of each field is consistent from record to record.
IBM Variable Record Length – Select for data files where the data file contains records of varying length.
Text File (CR or CRLF) – Select if the data file is a text file where the end of each record is marked by a carriage return (CR) or a carriage return and line feed sequence (CRLF).
Select the Hex checkbox to view the data in hexadecimal format. This option is useful if you are working with unprintable characters or compressed data, such as packed numeric data originating from an IBM mainframe computer, and you need to modify the Record Length or Skip Length. | https://help.highbond.com/acl/105/topic/com.acl.user_guide.help/tables/t_configuring_properties_for_table_layouts.html | CC-MAIN-2019-51 | refinedweb | 204 | 62.51 |
#include <sys/types.h>
#include <stdio.h>
#include <features.h>
libcgroup's cgroup_get_value_string() reads only relatively short parametrs of a group.
Use following functions to read stats parameter, which can be quite long.
stats
This iterator returns all subgroups of given control group.
It can be used to return all groups in given hierarchy, when root control group is provided.
Type of the walk.
Type of returned entity.
Use following functions to read tasks file of a group.
tasks
Use following function to list mounted controllers and to see, how they are mounted together in hierarchies.
Use cgroup_get_all_controller_begin() (see later) to list all controllers, including those which are not mounted.
Use following functions to list all controllers, including those which are not mounted.
The controllers are returned in the same order as in /proc/cgroups file, i.e. mostly random. | http://libcg.sourceforge.net/html/iterators_8h.html | CC-MAIN-2017-17 | refinedweb | 140 | 69.58 |
How to Get 'Active Model ' For One2many Field?
Hello All,
I have Facing One problem related to One2many field Active Model .
For Example:
i am in sale order and i am create any record in sale order line than i am get active_model = sale.order. So any one help me how to get active_model = sale.order.line. I am try to pass this value using context and for this i am using get_default method.
And
How to Change One2many field language according to partner(Customer) language.
Thanks In Advanced
Hello Nitin,
I am added one2many field in sale order and below my class.
class CustomerForm(models.Model):
_name = 'customer.from'
_rec_name = 'customer_name'
@api.model
def default_get(self, fields):
rec = super(CustomerForm, self).default_get(fields)
context = dict(self._context or {})
active_model = self.env.context.get('active_model', False)
return rec
My problem is when i am try to print Active Model than It's show 'sale.order' but can you tell me how to get active model name 'customer.from'.
anything wrong in above code ?
Could you share piece of code so that people will understand what you are exactly trying to do ? | https://www.odoo.com/ar/forum/help-1/question/how-to-get-active-model-for-one2many-field-141255 | CC-MAIN-2019-26 | refinedweb | 191 | 60.72 |
Starting with Python 2.5, the Python compiler (the part that takes your source-code and translates it to Python VM code for the VM to execute) works as follows [1]:
- Parse source code into a parse tree (Parser/pgen.c)
- Transform parse tree into an Abstract Syntax Tree (Python/ast.c)
- Transform AST into a Control Flow Graph (Python/compile.c)
- Emit bytecode based on the Control Flow Graph (Python/compile.c)
Previously, the only place one could tap into the compilation process was to obtain the parse tree with the parser module. But parse trees are much less convenient to use than ASTs for code transformation and generation. This is why the addition of the _ast module in Python 2.5 was welcome - it became much simpler to play with ASTs created by Python and even modify them. Also, the python built-in compile function can now accept an AST object in addition to source code.
Python 2.6 then took another step forward, including the higher-level ast module in its standard library. ast is a convenient Python-written toolbox to aid working with _ast [2]. All in all we now have a very convenient framework for processing Python source code. A full Python-to-AST parser is included with the standard distribution - what more could we ask? This makes all kinds of language transformation tasks with Python very simple.
What follows are a few examples of cool things that can be done with the new _ast and ast modules.
Manually building ASTs
import ast node = ast.Expression(ast.BinOp( ast.Str('xy'), ast.Mult(), ast.Num(3))) fixed = ast.fix_missing_locations(node) codeobj = compile(fixed, '<string>', 'eval') print eval(codeobj)
Let's see what is going on here. First we manually create an AST node, using the AST node classes exported by ast [3]. Then the convenient fix_missing_locations function is called to patch the lineno and col_offset attributes of the node and its children.
Another useful function that can help is ast.dump. Here's a formatted dump of the node we've created:
Expression( body=BinOp( left=Str(s='xy'), op=Mult(), right=Num(n=3)))
The most useful single-place reference for the various AST nodes and their structure is Parser/Python.asdl in the source distribution.
Breaking compilation into pieces
Given some source code, we first parse it into an AST, and then compile this AST into a code object that can be evaluated:
import ast source = '6 + 8' node = ast.parse(source, mode='eval') print eval(compile(node, '<string>', mode='eval'))
Again, ast.dump can be helpful to show the AST that was created:
Expression( body=BinOp( left=Num(n=6), op=Add(), right=Num(n=8)))
Simple visiting and transformation of ASTs
import ast class MyVisitor(ast.NodeVisitor): def visit_Str(self, node): print 'Found string "%s"' % node.s class MyTransformer(ast.NodeTransformer): def visit_Str(self, node): return ast.Str('str: ' + node.s) node = ast.parse(''' favs = ['berry', 'apple'] name = 'peter' for item in favs: print '%s likes %s' % (name, item) ''') MyTransformer().visit(node) MyVisitor().visit(node)
This prints:
Found string "str: berry" Found string "str: apple" Found string "str: peter" Found string "str: %s likes %s"
The visitor class implements methods that are called for relevant AST nodes (for example visit_Str is called for Str nodes). The transformer is a bit more complex. It calls relevant methods for AST nodes and then replaces them with the returned value of the methods.
To prove that the transformed code is perfectly valid, we can just compile and execute it:
node = ast.fix_missing_locations(node) exec compile(node, '<string>', 'exec')
As expected [4], this prints:
str: str: peter likes str: berry str: str: peter likes str: apple
Reproducing Python source from AST nodes
Armin Ronacher [5] wrote a module named codegen that uses the facilities of ast to print back Python source from an AST. Here's how to show the source for the node we transformed in the previous example:
import codegen print codegen.to_source(node)
And the result:
favs = ['str: berry', 'str: apple'] name = 'str: peter' for item in favs: print 'str: %s likes %s' % (name, item)
Yep, looks right. codegen is very useful for debugging or tools that transform Python code and want to save the results [6]. Unfortunately, the version you get from Armin's website isn't suitable for the ast that made it into the standard library. A slightly patched version of codegen that works with the standard 2.6 library can be downloaded here.
So why is this useful?
Many tools require parsing the source code of the language they operate upon. With Python, this task has been trivialized by the built-in methods to parse Python source into convenient ASTs. Since there's very little (if any) type checking done in a Python compiler, in classical terms we can say that a complete Python front-end is provided. This can be utilized in:
- IDEs for various "intellisense" needs
- Static code checking tools like pylint and pychecker
- Python code generators like pythoscope
- Alternative Python interpreters
- Compilers from Python to other languages
There are surely other uses I'm missing. If you're aware of a library/tool that uses ast, let me know.
| http://eli.thegreenplace.net/2009/11/28/python-internals-working-with-python-asts/ | CC-MAIN-2017-09 | refinedweb | 874 | 63.7 |
Detailed Description
The goal of this namespace is to make sure that, when the user needs to specify a file via the file selection dialog, this dialog will start in the directory most likely to contain the desired files.
This works as follows: Each time the file selection dialog is shown, the programmer can specify a "file-class". The file-dialog will then start with the directory associated with this file-class. When the dialog closes, the directory currently shown in the file-dialog will be associated with the file-class.
A file-class can either start with ':' or with '::'. If it starts with a single ':' the file-class is specific to the current application. If the file-class starts with '::' it is global to all applications.
- Since
- 4.6
Function Documentation
Associates
directory with
fileClass.
- Since
- 4.6
Definition at line 69 of file krecentdirs.cpp.
Returns the most recently used directory associated with this file-class.
- Since
- 4.6
Definition at line 63 of file krecentdirs.cpp.
Returns a list of directories associated with this file-class.
The most recently used directory is at the front of the list.
- Since
- 4.6
Definition at line 55 of file krecentdirs.cpp.
Documentation copyright © 1996-2019 The KDE developers.
Generated on Thu Apr 18 2019 02:43:02 by doxygen 1.8.11 written by Dimitri van Heesch, © 1997-2006
KDE's Doxygen guidelines are available online. | https://api.kde.org/frameworks/kio/html/namespaceKRecentDirs.html | CC-MAIN-2019-26 | refinedweb | 237 | 50.33 |
MVC 2 is quiet old and this article was written long years back. We would recommend you to start reading from our fresh Learn MVC 5 step by step series from here: -
So, what’s the agenda for MVC day 6?
Day 1:-Controllers, strong typed views and helper classes
Day 2:- Unit test, routing and outbound URLS
Day 3:- Partial views, Data annotations,Razor, Authentication and Authorization
Day 4:- JSON ,Jquery, State management and Asynch controllers
Day 5 :- Bundling, Minification , ViewModel , Exception handling and areas
Lab 22:- Display modes (MVCMobile)
Introduction
Step 1:- Create appropriate pages
Step 2:- That’s it, solet’s test.
Step 3:- More customization and control
Step 4:- Test your mobile customization
Lab 23:- MVC OAuth provider
Step 1: - Register your application and get the ID and Key
Step 2: - Create a MVC site for authenticating with OAuth
Step 3 :Start browsing
Lab 24:- MVC Model Binders
Step 1: - Create “EnterCustomer.aspx” and the controller
Step 2: - Create Customer model
Step 3: - Create binder which does mapping.
Step 4: - Attach the mapper with the action
Step 5: - Enjoy your output
Lab 25:- Razor Layout
Step 1: - Create Layout page
Step 2: - Create view and apply the layout
Step 3: - Create a controller and see your layout in action
Lab 26 :- Custom Html Helper methods
Step 1 :- Create a MVC project with simple class file
Step 2: Mark the class as Static and add methods
Step 3: Use the Helper class.
What is for the Lastday?
Are you completely new to MVC?
Do not miss MVC interview questions with answers
For day 6 we have five great labs: -
In case you have missed the previous days of Asp.net MVC tutorials, below are the links with what topics are covered.
It’s a world of small devices i.e. mobile. As a MVC developerand we expect a lot of support from
Microsoft MVC template for the same. Now desktop screen’s and mobile screens have a huge variation in size.
So we would like to create different screens for desktop and different screens for mobile. For example we would create “Home.aspx” for normal desktop and “Home.mobile.aspx” for mobile. If MVC can automatically detect the device type and render the appropriate page that would save lot of work. This is automated by using “MVC Display Mode”.
When any HTTP request comes to a web application this HTTP request has a value called as “User Agent”. This “User Agent” value is used by MVC display mode and appropriate view is picked and rendered as per device. So let’s do a demo and see it live in blood and flesh.
So let’s a create a sample MVC project which has two views “Index.aspx” for desktop and “Index.Mobile.aspx” for mobile as shown in the below figure.
Also let’s add a controller called as “Home” controller which will invoke the “Index” view.
Note :-You can see in the below code snippet we have created an action result as index. Because our view name and action name are same, we do not need to pass the view name in the “return View();”.
publicclassHomeController : Controller
{
publicActionResult Index()
{
return View();
}
}
And that’s all we need to do. Now let’s go and test if MVC display mode lives up to its promise.
So now if you go and hit the controller and action from the browser you will see the below left hand side output. If you hit the same controller and action using the android mobile emulator you will see the right part of the screen.
width="625" alt="Image 3" data-src="/KB/Articles/789278/3.png" class="lazyload" data-sizes="auto" data->
For simulating mobile testing in this lab we have used “Opera mobile” simulator. You can download the emulator from
But what if we want more customization and control:-
We have already implemented the first two conditions. Now for the third condition we need to perform some more extra steps. Relax they are absolutely small and baby steps but with great end results.
First step is to add one more page “Index.android.aspx” especially for android in your views folder as shown in the below figure.
The next step is to make some changes in your “Global.asax.cs” file. The first step is to add “Webpages” namespace as shown in the below figure.
using System.Web.WebPages;
Second step is to use the “DisplayModeProvider” class and add an “Android” entry in to the “Modes” collection as shown in the below code snippet. The below code searches for the string “Android” and if found it tries to render the “Index.Android.aspx” page.
protectedvoid Application_Start()
{
DisplayModeProvider.Instance.Modes.
Insert(0, newDefaultDisplayMode("Android")
{
ContextCondition = (context => context.
GetOverriddenUserAgent().IndexOf("Android",
StringComparison.OrdinalIgnoreCase) >= 0)
});
}
Now if you run the opera mobile simulator with Android as the user agent as shown in the below figure ,you will see android page ( Index.android.aspx ) getting rendered.
width="625" alt="Image 5" data-src="/KB/Articles/789278/5.png" class="lazyload" data-sizes="auto" data->
One of the most boring processes for an end user is registering on a site. Sometimes those long forms and email validation just puts off the user. So how about making things easy by validating the users using their existing facebook / twitter / linkedin / etc accounts. So the user uses something which he already has while the site is assured that this user is a proper user.
This is achieved by using MVC OAuth (Open standard for authorization).
To implement OAuth is a three step process , see the above figure :-
So the first step is to register your APP with the third party site. For this Lab we will use facebook for open authentication. Please note steps will vary for twitter , linked in and other sites. Go to developers.facebook.com and click on “Create new App” menu as shown in the below figure.
width="625" alt="Image 7" data-src="/KB/Articles/789278/7.png" class="lazyload" data-sizes="auto" data->
Give “app name”, ”category” and hit the “create App” button as shown in the below figure.
width="625" alt="Image 8" data-src="/KB/Articles/789278/8.png" class="lazyload" data-sizes="auto" data->
Once the app is registered you need to get “App ID” and “App Secret key” by hitting the show button as shown in the below figure.
width="625" alt="Image 9" data-src="/KB/Articles/789278/9.png" class="lazyload" data-sizes="auto" data->
Now that we have the ID and the key let’s go ahead and create a MVC Internet application. We are creating an internet application so that we get some readymade or you can say template code for “OAuth”.
width="465" alt="Image 10" data-src="/KB/Articles/789278/10.png" class="lazyload" data-sizes="auto" data->
Once the project is created open the “AuthoConfig.cs” from the “App_start” folder.
In this config file you will find “RegisterAuth” method and you will see lot of method calls for third party site. Uncomment “RegisterFacebookClient” method and put the ID and the Key as shown in the below code.
publicstaticclassAuthConfig
{
publicstaticvoidRegisterAuth()
{
// To let users of this site log in using their accounts from other sites such as Microsoft, Facebook, and Twitter,
// you must update this site. For more information visit
//OAuthWebSecurity.RegisterMicrosoftClient(
// clientId: "",
// clientSecret: "");
//OAuthWebSecurity.RegisterTwitterClient(
// consumerKey: "",
// consumerSecret: "");
OAuthWebSecurity.RegisterFacebookClient(
appId: "753839804675146",
appSecret: "fl776854469e7af9a959359a894a7f1");
//OAuthWebSecurity.RegisterGoogleClient();
}
}
Run your application and copy the localhost URL name with the port number.
width="625" alt="Image 12" data-src="/KB/Articles/789278/12.png" class="lazyload" data-sizes="auto" data->
Go back to your Developer FB portal , open the App you have just created , click on settings and click “Add platform” as shown in the below figure.
width="625" alt="Image 13" data-src="/KB/Articles/789278/13.png" class="lazyload" data-sizes="auto" data->
It open’s one more dialog box , choose website and click Add.
width="625" alt="Image 14" data-src="/KB/Articles/789278/14.png" class="lazyload" data-sizes="auto" data->
In the URL give your local host URL with the port number as shown in the below figure.
width="625" alt="Image 15" data-src="/KB/Articles/789278/15.png" class="lazyload" data-sizes="auto" data->
That’s it you are all set , now the run the application and click log in.
width="584" alt="Image 16" data-src="/KB/Articles/789278/16.png" class="lazyload" data-sizes="auto" data->
The screen open’s up to two options one at the left hand side is your local login using “Forms” authentication and the right hand side is your third party provided. Click on the facebookbutton , put your credential’s and enjoy the output.
width="625" alt="Image 17" data-src="/KB/Articles/789278/17.png" class="lazyload" data-sizes="auto" data->
In Learn MVC Day 1 lab 5 we had used HTML helper classes to map the HTML UI with the MVC model objects. So below is a simple HTML form which makes a post to “SubmitCustomer” action.
<
Customer code :- <
Customer name :- <
<input type=submit/>
</form>
The “SubmitCustomer” action takes in a customer object. This “Customer” object is produced automatically with the data what is filled in those textboxes without any binding’s and mappings.
publicclassCustomerController : Controller
{
publicActionResult SubmitCustomer(Customer obj)
{return View("DisplayCustomer");}
}
Do you know why the customer object fills automatically?. It’s because the name of the textboxes and the property names of the customer class is same.
publicclassCustomer
{
publicstring CustomerCode { get; set; }
publicstring CustomerName { get; set; }
}
But what if the textbox names are not same as the “Customer” class property names.
In other words the HTML text box name is “txtCustomerCode” and the class property name is “CustomerCode”. This is where model binders come in to picture.
Model binder maps HTML form elements to the model. It acts like a bridge between HTML UI and MVC model. So let’s do some hand’s on exercise for “ModelBinder”.
The first step is to create “EnterCustomer.aspx” view which will take “Customer” data.
<
Customer code :- <
Customer name :- <input name="TxtName"type="text"/>
<input type=submit/>
</form>
To invoke this form we need an action in the “Customer” controller because you cannot invoke a view directly in MVC you need to go via the controller.
publicclassCustomerController : Controller
{
publicActionResult EnterCustomer()
{
return View();
}
}
The next step is to create a “Customer” model. Please note the property name of the “Customer” class and the HTML UI element textbox names are different.
Now because the UI element name and the “Customer” class have different name’s we need to create the “Model” binder. To create the model binder class we need to implement “IModelBinder” interface. In the below code you can see how we have written the mapping code in “BindModel” method.
publicclassCustomerBinder : IModelBinder
{
publicobject BindModel(ControllerContext controllerContext, ModelBindingContext bindingContext)
{
HttpRequestBase request = controllerContext.HttpContext.Request;
string strCustomerCode = request.Form["TxtCode"];
string strCustomerName = request.Form["TxtName"];
returnnewCustomer
{
CustomerCode = strCustomerCode,
CustomerName = strCustomerName
};
}
}
So now we have the binder , we have the HTML UI it’s time to connect them. Look at the “SubmitCustomer” action code below. “ModelBinder” attribute binds the binder and the “Customer” object.
publicclassCustomerController : Controller
{
publicActionResult SubmitCustomer([ModelBinder(typeof(CustomerBinder))]Customer obj)
{
return View("DisplayCustomer");
}
}
So now hit the action (“EnterCustomer”) which invokes the customer data entry screen.
When you fill data and hit submit, you will see the filled “Customer” object below.
width="462" alt="Image 20" data-src="/KB/Articles/789278/20.png" class="lazyload" data-sizes="auto" data->
Layouts are like master pages in ASP.NET Web form. Master pages give a standard look and feel for Web form views ( ASPX) while layout gives standard look and feel for razor views (CSHTML). In case you are new to Razor see Lab 12 MVC Razor view.
In this lab we will see how to implement Razor Layout.
The first thing is we need to create a Layout page. So create a new MVC web application, go to the views folder, right click, add new item and select MVC Layout page template as shown in the below figure.
In the MVC layout page we need to define the common structure which will be applied to all razor pages. You can see in the below layout page we have three sections Header, Body and Footer. Header and Footer are custom sections while “RenderBody” something which comes from MVC and displays the body content.
<div>
@RenderSection("Header")
@RenderBody()
@RenderSection("Footer")
</div>
Now once we have created the layout the next thing is to apply that layout to a view. So right click on shared folders of the view and select razor view.
To apply layout page select the “…” browse option as shown in the above figure and select layout page as shown in the below screen.
Once the view is created the first line of code points out what layout the page is using. It looks something as shown in the below code.
@{Layout = "~/Views/Default1/_LayoutPage1.cshtml";}
So now the final thing in the view is to fill all sections. Footer and header section are custom sections so we need to use @section command followed by the section name and what we want to put in those sections. All the other text will be part of the body ( @RenderBody()).
This is body
@section Footer{Copyright 2015-2016}
@section Header{Welcome to my site}
In simple words the mapping between the layout and the razor view code is as shown below.
Now that we are all set its time to create a controller and action to invoke the view. You should see something as shown below. You can see how the layout template is applied to the view.
In day 1 we have talked about MVC Helper classes . It helps us to work with input controls in a more efficient manner. When you type ‘@Html.” in MVC razor view you get something like this in intellisense.
Html helper method let us create Html input controls like Textbox, Radio button, Checkbox, Text Area easily and quickly.In this lab we will go one step ahead and create “Custom” helper method.
To create a custom HTML helper method we need to use extension methods. Extension method concept was introduced in .NET 3.5.
In case you are new to extension methods watch the below youtube video by
Create a simple MVC project called CustomHtmlHelperDemo. Add a controller called HelperSample and an action called Index.Create a new Folder inside MVC project and call it ExtensionClasses.
width="515" alt="Image 27" data-src="/KB/Articles/789278/image_2.png" class="lazyload" data-sizes="auto" data->
For extension method we need to mark the class as static.
public static class HelperExtension
{
}
In this class let’s create a new static method called “HelloWorldLabel” which will return a value of type MvcHtmlStringand accepting a parameter of type HtmlHelper.
Note: Make sure to add “this” keyword before declaring first parameter because our target is to create an extension method for HtmlHelper class.
publicstaticMvcHtmlStringHelloWorldLabel(this HtmlHelper helper)
{
}
The final step is to import “System.Web.Mvc.Html” namespace.We need to import this namespace because default TextBoxFor, TextAreaFor and other html helper extension methods are available inside this namespace. It is required only if we are going to use one of these extension method.
returnhelper.Label("Hello World");
Simply write following code in the view and say build , you may end up with an error as shown below.
width="425" alt="Image 28" data-src="/KB/Articles/789278/image_3.png" class="lazyload" data-sizes="auto" data->
To resolve the above error simply put the using statement in the top of the view as follows
@using CustomHtmlHelperDemo.ExtensionClasses
Build the application, Press F5 and Test the application.
width="695" alt="Image 29" data-src="/KB/Articles/789278/image_4.png" class="lazyload" data-sizes="auto" data->
My last day would be mainly on how to integrate Javascript framework’s ( Angular , KO) with MVC.
Final note, you can watch my c# and MVC training videos on various sections like WCF, Silver light, LINQ, WPF, Design patterns, Entity framework etc. By any chance do not miss my .NET and c# interview questions and answers book from .
In case you are completely a fresher I will suggest to start with the below 4 videos which are 10 minutes approximately so that you can come to MVC quickly.
Lab 1:- A simple Hello world ASP.NET MVC application.
width="594" alt="Image 30" data-src="/KB/Articles/789278/21.png" class="lazyload" data-sizes="auto" data->
Lab 2:- In this Lab we will see how we can share data between controller and the view using view data.
width="585" alt="Image 31" data-src="/KB/Articles/789278/22.jpg" class="lazyload" data-sizes="auto" data->
Lab 3 :- In this lab we will create a simple customer model, flourish the same with some data and display the same in a view.
width="593" alt="Image 32" data-src="/KB/Articles/789278/23.png" class="lazyload" data-sizes="auto" data->
Lab 4 :- In this lab we will create a simple customer data entry screen with some validation on the view.
width="591" alt="Image 33" data-src="/KB/Articles/789278/24.png" class="lazyload" data-sizes="auto" data->
In case you want to start with MVC 5 start with the below video Learn MVC 5 in 2 days.
width="580" alt="Image 34" data-src="/KB/Articles/789278/mockup.png" class="lazyload" data-sizes="auto" data->
Every lab I advance in this 7 day series I am also updating a separate article which discusses about important MVC interview questions which are asked during interviews. Till now I have collected 60 important questions with precise answers you can have a look at the same from
For further reading do watch the below interview preparation videos and step by step video. | https://www.codeproject.com/Articles/789278/Learn-MVC-Model-view-controller-Step-by-Step-in-d | CC-MAIN-2022-27 | refinedweb | 2,989 | 56.45 |
Expose an Application with NGINX Plus Ingress Controller
This topic provides a walkthrough of deploying NGINX Plus Ingress Controller for Kubernetes to expose an application within NGINX Service Mesh.
Overview
Follow this tutorial to deploy the NGINX Plus Ingress Controller with NGINX Service Mesh and an example application.
Objectives:
- Deploy the NGINX Service Mesh.
- Install NGINX Plus Ingress Controller.
- Deploy the example
bookinfoapp.
- Create a Kubernetes Ingress resource for the Bookinfo application.
Note:
All communication between the NGINX Plus Ingress Controller and the Bookinfo application occurs over mTLS.
Note:
The NGINX Plus version of NGINX Plus Ingress Controller is required for this tutorial.
Install NGINX Service Mesh
Note:
If you want to view metrics for NGINX Plus Ingress Controller, ensure that you have deployed Prometheus and Grafana and then configure NGINX Service Mesh to integrate with them when installing. Refer to the Monitoring and Tracing guide for instructions.
Follow the installation instructions to install NGINX Service Mesh on your Kubernetes cluster.
You can either deploy the Mesh with the default value for mTLS mode, which is
permissive, or set it to
strict.
Caution:
Before proceeding, verify that the mesh is running (Step 2 of the installation instructions). NGINX Plus Ingress Controller will try to fetch certs from the Spire agent that gets deployed by NGINX Service Mesh on startup. If the mesh is not running, NGINX Plus Ingress controller will fail to start.
Install NGINX Plus Ingress Controller
Install NGINX Plus Ingress Controller with mTLS enabled. This tutorial will demonstrate installation as a Deployment.
Get Access to the Ingress Controller. This tutorial creates a LoadBalancer Service for the NGINX Plus Ingress Controller.
Find the public IP address of your NGINX Plus Ingress Controller Service.
kubectl get svc -n nginx-ingress NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE nginx-ingress LoadBalancer 10.76.7.165 34.94.247.235 80:31287/TCP,443:31923/TCP 66s
Note:
At this point, you should have the NGINX Plus Ingress Controller running in your cluster; you can deploy the Bookinfo example app to test out the mesh integration, or use NGINX Plus Ingress controller to expose one of your own apps.
Deploy the Bookinfo App
Use
kubectl to deploy the example
bookinfo app.
If automatic injection is enabled, NGINX Service Mesh will inject the sidecar proxy into the application pods automatically. Otherwise, use manual injection to inject the sidecar proxies.
kubectl apply -f bookinfo.yaml
Verify that all of the Pods are ready and in “Running” status:
kubectl get pods NAME READY STATUS RESTARTS AGE details-v1-74f858558f-khg8t 2/2 Running 0 25s productpage-v1-8554d58bff-n4r85 2/2 Running 0 24s ratings-v1-7855f5bcb9-zswkm 2/2 Running 0 25s reviews-v1-59fd8b965b-kthtq 2/2 Running 0 24s reviews-v2-d6cfdb7d6-h62cb 2/2 Running 0 24s reviews-v3-75699b5cfb-9jtvq 2/2 Running 0 24s
Optionally, verify that the application works:
Note:
The steps in this section only work with
permissivemTLS mode. With
strictmTLS mode, the sidecar will drop all traffic that is not encrypted with a certificate issued by NGINX Service Mesh, so the below steps won’t work. For
strictmTLS mode skip forward to the next section which covers how to Expose the Bookinfo App.
Port-forward to the
productpageService:
kubectl port-forward svc/productpage 9080
Open the Service URL in a browser:.
Click one of the links to view the app as a general user, then as a test user, and verify that all portions of the page load.
Expose the Bookinfo App
Create an Ingress Resource to expose the Bookinfo application, using the example
bookinfo-ingress.yaml file.
Important:
If using Kubernetes v1.18.0 or greater you must use
ingressClassNamein your Ingress resources. Uncomment line 6 in the resource below or the downloaded file,
bookinfo-ingress.yaml.
kubectl apply -f bookinfo-ingress.yaml
The Bookinfo Ingress defines a host with domain name
bookinfo.example.com. It routes all requests for that domain name to the
productpage Service on port 9080.
apiVersion: networking.k8s.io/v1 kind: Ingress metadata: name: bookinfo-ingress spec: # ingressClassName: nginx # use only with k8s version >= 1.18.0 tls: rules: - host: bookinfo.example.com http: paths: - path: / pathType: Exact backend: service: name: productpage port: number: 9080
Access the Bookinfo App
To access the Bookinfo application:
Modify
/etc/hostsso that requests to
bookinfo.example.comresolve to NGINX Plus Ingress Controller’s public IP address. Add the following line to your
/etc/hostsfile:
<INGRESS_CONTROLLER_PUBLIC_IP> bookinfo.example.com
Open your browser.
Click one of the links to view the app as a general user, then as a test user, and verify that all portions of the page load.
View Traffic Flow
After sending a few requests as a general or test user, you can view the flow of traffic throughout your application. If you have configured NGINX Service Mesh to export metrics to your Prometheus deployment, run the
nginx-meshctl top command to see traffic in the namespace your bookinfo application is deployed in:
$ nginx-meshctl top Deployment Incoming Success Outgoing Success NumRequests productpage-v1 100.00% 6 reviews-v2 100.00% 100.00% 4 reviews-v3 100.00% 2 details-v1 100.00% 6 ratings-v1 100.00% 2 reviews-v1 100.00% 3
Or, for a more in-depth look at the bookinfo components, run the
top command on a deployment:
$ nginx-meshctl top deployment/productpage-v1 Deployment Direction Resource Success Rate P99 P90 P50 NumRequests productpage-v1 To reviews-v3 100.00% 50ms 48ms 40ms 2 To details-v1 100.00% 20ms 15ms 3ms 6 To reviews-v1 100.00% 99ms 90ms 20ms 3 To reviews-v2 100.00% 100ms 95ms 75ms 4 From nginx-plus-ingress 100.00% 196ms 160ms 75ms 6
You can also view the Grafana dashboard, which provides additional statistics on your application, by following the Monitor your application in Grafana section of our Expose an Application with NGINX Plus Ingress Controller guide. | https://docs.nginx.com/nginx-service-mesh/tutorials/kic/ingress-walkthrough/ | CC-MAIN-2022-40 | refinedweb | 988 | 55.95 |
Command Sets¶
Command Sets are intimately linked with Commands and you should be familiar with Commands before reading this page. The two pages were split for ease of reading.
A Command Set (often referred to as a CmdSet or cmdset) is the basic unit for storing one or more Commands. A given Command can go into any number of different command sets. Storing Command classes in a command set is the only way to make the commands available to use in your game.
When storing a CmdSet on an object, will you make the commands in that
command set available to the object. An example is the default command
set stored on new Characters. This command set contains all the useful
commands, from
look and
inventory to
@dig and
@reload
(permissions then limit which players may use them, but that’s a
separate topic).
When an account enters a command, cmdsets from the Account, Character, its location and elsewhere are pulled together into a merge stack. This stack is merged together in a specific order to create a single “merged” cmdset, representing the pool of commands available at that very moment.
An example would be a
Window object that has a cmdset with two
commands in it:
look through window and
open window. The command
set would be visible to players in the room with the window, allowing
them to use those commands only there. You could imagine all sorts of
clever uses of this, like a
Television object which had multiple
commands for looking at it, switching channels and so on. The tutorial
world coming with Evennia showcases a dark room that replaces certain
critical commands with its on versions because the Character cannot see.
If you want a quick start into defining your first commands and use them with command sets, you can head over to the Adding Command Tutorial which steps through things without the explanations.
Defining Command Sets¶
A CmdSet is, as most things in Evennia, defined as a Python class
inheriting from the correct parent (
evennia.CmdSet, which is a
shortcut to
evennia.commands.cmdset.CmdSet). The CmdSet class only
needs to define one method, called
at_cmdset_creation(). All other
class parameters are optional, but are used for more advanced set
manipulation and coding (see the merge rules section).
# file mygame/commands/mycmdset.py from evennia import CmdSet # this is a theoretical custom module with commands we # created previously: mygame/commands/mycommands.py from commands import mycommands class MyCmdSet(CmdSet): def at_cmdset_creation(self): """ The only thing this method should need to do is to add commands to the set. """ self.add(mycommands.MyCommand1()) self.add(mycommands.MyCommand2()) self.add(mycommands.MyCommand3())
The CmdSet’s
add() method can also take another CmdSet as input. In
this case all the commands from that CmdSet will be appended to this one
as if you added them line by line:
def at_cmdset_creation(): ... self.add(AdditionalCmdSet) # adds all command from this set ...
If you added your command to an existing cmdset (like to the default cmdset), that set is already loaded into memory. You need to make the server aware of the code changes:
@reload
You should now be able to use the command.
If you created a new, fresh cmdset, this must be added to an object in
order to make the commands within available. A simple way to temporarily
test a cmdset on yourself is use the
@py command to execute a python
snippet:
@py self.cmdset.add('commands.mycmdset.MyCmdSet')
This will stay with you until you
@reset or
@shutdown the
server, or you run
@py self.cmdset.delete('commands.mycmdset.MyCmdSet')
Above a specific Cmdset class is removed. Calling
delete without
arguments will remove the latest added cmdset.
Note: Command sets added using
cmdset.addare by default not persistent in the database.
If you want the cmdset to survive a reload, you can do
@py self.cmdset.add(commands.mycmdset.MyCmdSet, permanent=True)
or you could add the cmdset as the default cmdset:
@py self.cmdset.add_default(commands.mycmdset.MyCmdSet)
An object can only have one “default” cmdset (but can also have none).
This is meant as a safe fall-back even if all other cmdsets fail or are
removed. It is always persistent and will not be affected by
cmdset.delete(). To remove a default cmdset you must explicitly call
cmdset.remove_default().
Command sets are often added to an object in its
at_object_creation
method. For more examples of adding commands, read the Step by step
tutorial. Generally you can customize which command sets are added to
your objects by using
self.cmdset.add() or
self.cmdset.add_default().
Important: Commands are identified uniquely by key or alias (see Commands). If any overlap exists, two commands are considered identical. Adding a Command to a command set that already has an identical command will replace the previous command. This is very important in order to easily overload default Evennia commands with your own, but you need be aware of this or you may accidentally “hide” your own command in your command set because you add a new one that happen to have a matching alias.
Properties on command sets¶
There are a few extra flags one can set on CmdSets in order to modify how they work. All are optional and will be set to defaults otherwise. Since many of these relate to merging cmdsets you might want to read up on next section for some of these to make sense.
key(string) - an identifier for the cmdset. This is optional but should be unique; it is used for display in lists but also to identify special merging behaviours using the
key_mergetypedictionary below. -
mergetype(string) - one of “Union”, “Intersect”, “Replace” or “Remove”.
priority(int) - This defines the merge order of the merge stack - cmdsets will merge in rising order of priority with the highest priority set merging last. During a merger, the commands from the set with the higher priority will have precedence (just what happens depends on the merge type). If priority is identical, the order in the merge stack determines preference. The priority value must be greater or equal to
-100. Most in-game sets should usually have priorities between
0and
100. Evennia default sets have priorities as follows (these can be changed if you want a different distribution):
- EmptySet:
-101(should be lower than all other sets)
- SessionCmdSet:
-20
- AccountCmdSet:
-10
- CharacterCmdSet:
0
- ExitCmdSet:
101(generally should always be available)
- ChannelCmdSet:
101(should usually always be available) - since exits never accept arguments, there is no collision between exits named the same as a channel even though the commands “collide”.
key_mergetype(dict) - a dict of
key:mergetypepairs. This allows this cmdset to merge differently with certain named cmdsets. If the cmdset to merge with has a
keymatching an entry in
key_mergetype, it will not be merged according to the setting in
mergetypebut according to the mode in this dict. Please note that this is more complex than it may seem due to the merge order of command sets. Please review that section before using
key_mergetype.
duplicates(bool/None default
None) - this determines what happens when merging same-priority cmdsets containing same-key commands together. The
dupicateoption will only apply when merging the cmdset with this option onto one other cmdset with the same priority. The resulting cmdset will not retain this
duplicatesetting.
None(default): No duplicates are allowed and the cmdset being merged “onto” the old one will take precedence. The result will be unique commands. However, the system will assume this value to be
Truefor cmdsets on Objects, to avoid dangerous clashes. This is usually the safe bet.
False: Like
Noneexcept the system will not auto-assume any value for cmdsets defined on Objects.
True: Same-named, same-prio commands will merge into the same cmdset. This will lead to a multimatch error (the user will get a list of possibilities in order to specify which command they meant). This is is useful e.g. for on-object cmdsets (example: There is a
red buttonand a
green buttonin the room. Both have a
press buttoncommand, in cmdsets with the same priority. This flag makes sure that just writing
press buttonwill force the Player to define just which object’s command was intended).
no_objsthis is a flag for the cmdhandler that builds the set of commands available at every moment. It tells the handler not to include cmdsets from objects around the account (nor from rooms or inventory) when building the merged set. Exit commands will still be included. This option can have three values:
None(default): Passthrough of any value set explicitly earlier in the merge stack. If never set explicitly, this acts as
False.
True/
False: Explicitly turn on/off. If two sets with explicit
no_objsare merged, priority determines what is used.
no_exits- this is a flag for the cmdhandler that builds the set of commands available at every moment. It tells the handler not to include cmdsets from exits. This flag can have three values:
None(default): Passthrough of any value set explicitly earlier in the merge stack. If never set explicitly, this acts as
False.
True/
False: Explicitly turn on/off. If two sets with explicit
no_exitsare merged, priority determines what is used.
no_channels(bool) - this is a flag for the cmdhandler that builds the set of commands available at every moment. It tells the handler not to include cmdsets from available in-game channels. This flag can have three values:
None(default): Passthrough of any value set explicitly earlier in the merge stack. If never set explicitly, this acts as
False.
True/
False: Explicitly turn on/off. If two sets with explicit
no_channelsare merged, priority determines what is used.
Command Sets searched¶
When a user issues a command, it is matched against the merged
command sets available to the player at the moment. Which those are may
change at any time (such as when the player walks into the room with the
Window object described earlier).
The currently valid command sets are collected from the following sources:
- The cmdsets stored on the currently active Session. Default is the empty
SessionCmdSetwith merge priority
-20.
- The cmdsets defined on the Account. Default is the AccountCmdSet with merge priority
-10.
- All cmdsets on the Character/Object (assuming the Account is currently puppeting such a Character/Object). Merge priority
0.
- The cmdsets of all objects carried by the puppeted Character (checks the
calllock). Will not be included if
no_objsoption is active in the merge stack.
- The cmdsets of the Character’s current location (checks the
calllock). Will not be included if
no_objsoption is active in the merge stack.
- The cmdsets of objects in the current location (checks the
calllock). Will not be included if
no_objsoption is active in the merge stack.
- The cmdsets of Exits in the location. Merge priority
+101. Will not be included if
no_exitsor
no_objsoption is active in the merge stack.
- The channel cmdset containing commands for posting to all channels the account or character is currently connected to. Merge priority
+101. Will not be included if
no_channelsoption is active in the merge stack.
Note that an object does not have to share its commands with its
surroundings. A Character’s cmdsets should not be shared for example, or
all other Characters would get multi-match errors just by being in the
same room. The ability of an object to share its cmdsets is managed by
its
call lock. For example, Character objects defaults to
call:false() so that any cmdsets on them can only be accessed by
themselves, not by other objects around them. Another example might be
to lock an object with
call:inside() to only make their commands
available to objects inside them, or
cmd:holds() to make their
commands available only if they are held.
Adding and merging command sets¶
Note: This is an advanced topic. It’s very useful to know about, but you might want to skip it if this is your first time learning about commands.
CmdSets have the special ability that they can be merged together into new sets. Which of the ingoing commands end up in the merged set is defined by the merge rule and the relative priorities of the two sets. Removing the latest added set will restore things back to the way it was before the addition.
obj.cmdset.delete()will never delete the default set. Instead one should add new cmdsets on top of the default to “hide” it, as described below. Use the special
obj.cmdset.delete_default()only if you really know what you are doing.
CmdSet merging is an advanced feature useful for implementing powerful
game effects. Imagine for example a player entering a dark room. You
don’t want the player to be able to find everything in the room at a
glance - maybe you even want them to have a hard time to find stuff in
their backpack! You can then define a different CmdSet with commands
that override the normal ones. While they are in the dark room, maybe
the
look and
inv commands now just tell the player they cannot
see anything! Another example would be to offer special combat commands
only when the player is in combat. Or when being on a boat. Or when
having taken the super power-up. All this can be done on the fly by
merging command sets.
Merge rules¶
Basic rule is that command sets are merged in reverse priority order. That is, lower-prio sets are merged first and higher prio sets are merged “on top” of them. Think of it like a layered cake with the highest priority on top.
To further understand how sets merge, we need to define some examples.
Let’s call the first command set A and the second B. We assume
B is the command set already active on our object and we will merge
A onto B. In code terms this would be done by
object.cdmset.add(A). Remember, B is already active on
object
from before.
We let the A set have higher priority than B. A priority is
simply an integer number. As seen in the list above, Evennia’s default
cmdsets have priorities in the range
-101 to
120. You are
usually safe to use a priority of
0 or
1 for most game effects.
In our examples, both sets contain a number of commands which we’ll
identify by numbers, like
A1, A2 for set A and
B1, B2, B3, B4 for B. So for that example both sets contain
commands with the same keys (or aliases) “1” and “2” (this could for
example be “look” and “get” in the real game), whereas commands 3 and 4
are unique to B. To describe a merge between these sets, we would
write
A1,A2 + B1,B2,B3,B4 = ? where
? is a list of commands that
depend on which merge type A has, and which relative priorities the
two sets have. By convention, we read this statement as “New command set
A is merged onto the old command set B to form ?”.
Below are the available merge types and how they work. Names are partly borrowed from Set theory.
Union (default) - The two cmdsets are merged so that as many commands as possible from each cmdset ends up in the merged cmdset. Same-key commands are merged by priority.
# Union A1,A2 + B1,B2,B3,B4 = A1,A2,B3,B4
Intersect - Only commands found in both cmdsets (i.e. which have the same keys) end up in the merged cmdset, with the higher-priority cmdset replacing the lower one’s commands.
# Intersect A1,A3,A5 + B1,B2,B4,B5 = A1,A5
Replace - The commands of the higher-prio cmdset completely replaces the lower-priority cmdset’s commands, regardless of if same-key commands exist or not.
# Replace A1,A3 + B1,B2,B4,B5 = A1,A3
Remove - The high-priority command sets removes same-key commands from the lower-priority cmdset. They are not replaced with anything, so this is a sort of filter that prunes the low-prio set using the high-prio one as a template.
# Remove A1,A3 + B1,B2,B3,B4,B5 = B2,B4,B5
Besides
priority and
mergetype, a command-set also takes a few
other variables to control how they merge:
duplicates(bool) - determines what happens when two sets of equal priority merge. Default is that the new set in the merger (i.e. A above) automatically takes precedence. But if duplicates is true, the result will be a merger with more than one of each name match. This will usually lead to the player receiving a multiple-match error higher up the road, but can be good for things like cmdsets on non-player objects in a room, to allow the system to warn that more than one ‘ball’ in the room has the same ‘kick’ command defined on it and offer a chance to select which ball to kick … Allowing duplicates only makes sense for Union and Intersect, the setting is ignored for the other mergetypes.
key_mergetypes(dict) - allows the cmdset to define a unique mergetype for particular cmdsets, identified by their cmdset
key. Format is
{CmdSetkey:mergetype}. Example:
{'Myevilcmdset','Replace'}which would make sure for this set to always use ‘Replace’ on the cmdset with the key
Myevilcmdsetonly, no matter what the main
mergetypeis set to.
Warning: The
key_mergetypesdictionary can only work on the cmdset we merge onto. When using
key_mergetypesit is thus important to consider the merge priorities - you must make sure that you pick a priority between the cmdset you want to detect and the next higher one, if any. That is, if we define a cmdset with a high priority and set it to affect a cmdset that is far down in the merge stack, we would not “see” that set when it’s time for us to merge. Example: Merge stack is
A(prio=-10), B(prio=-5), C(prio=0), D(prio=5). We now merge a cmdset
E(prio=10)onto this stack, with a
key_mergetype={"B":"Replace"}. But priorities dictate that we won’t be merged onto B, we will be merged onto E (which is a merger of the lower-prio sets at this point). Since we are merging onto E and not B, our
key_mergetypedirective won’t trigger. To make sure it works we must make sure we merge onto B. Setting E’s priority to, say, -4 will make sure to merge it onto B and affect it appropriately.
More advanced cmdset example:
from commands import mycommands class MyCmdSet(CmdSet): key = "MyCmdSet" priority = 4 mergetype = "Replace" key_mergetypes = {'MyOtherCmdSet':'Union'} def at_cmdset_creation(self): """ The only thing this method should need to do is to add commands to the set. """ self.add(mycommands.MyCommand1()) self.add(mycommands.MyCommand2()) self.add(mycommands.MyCommand3())
Assorted notes¶
It is very important to remember that two commands are compared both
by their
key properties and by their
aliases properties. If
either keys or one of their aliases match, the two commands are
considered the same. So consider these two Commands:
- A Command with key “kick” and alias “fight”
- A Command with key “punch” also with an alias “fight”
During the cmdset merging (which happens all the time since also things like channel commands and exits are merged in), these two commands will be considered identical since they share alias. It means only one of them will remain after the merger. Each will also be compared with all other commands having any combination of the keys and/or aliases “kick”, “punch” or “fight”.
… So avoid duplicate aliases, it will only cause confusion. | http://evennia.readthedocs.io/en/latest/Command-Sets.html | CC-MAIN-2018-13 | refinedweb | 3,277 | 63.39 |
I have a scanner Epson V600. Installed `iscan`, `iscan-data` from repos and `iscan-plugin-gt-x820` from AUR.
$ scanimage -h -d 'epkowa:interpreter:003:020' Usage: scanimage [OPTION]... Start image acquisition on a scanner device and write image data to standard output. Parameters are separated by a blank from single-character options (e.g. -d epson) and by a "=" from multi-character options (e.g. --device-name=epson). -d, --device-name=DEVICE use a given scanner device (e.g. hp:/dev/scanner) --format=pnm|tiff|png|jpeg file format of output file -i, --icc-profile=PROFILE include this ICC profile into TIFF file -L, --list-devices show available scanner devices -f, --formatted-device-list=FORMAT similar to -L, but the FORMAT of the output can be specified: %d (device name), %v (vendor), %m (model), %t (type), %i (index number), and %n (newline) -b, --batch[=FORMAT] working in batch mode, FORMAT is `out%d.pnm' `out%d.tif' `out%d.png' or `out%d.jpg' by default depending on --format --batch-start=# page number to start naming files with --batch-count=# how many pages to scan in batch mode --batch-increment=# increase page number in filename by # --batch-double increment page number by two, same as --batch-increment=2 --batch-print print image filenames to stdout --batch-prompt ask for pressing a key before scanning a page --accept-md5-only only accept authorization requests using md5 -p, --progress print progress messages -n, --dont-scan only set options, don't actually scan -T, --test test backend thoroughly -A, --all-options list all available backend options -h, --help display this help message and exit -v, --verbose give even more status messages -B, --buffer-size=# change input buffer size (in kB, default 32) -V, --version print version information Options specific to device `epkowa:interpreter:003:020': Scan Mode: --mode Binary|Gray|Color [Color] Selects the scan mode (e.g., lineart, monochrome, or color). --depth 8|16 [8] Number of bits per sample, typical values are 1 for "line-art" and 8 for multibit scans. --halftoning None|Halftone A (Hard Tone)|Halftone B (Soft Tone)|Halftone C (Net Screen) [inactive] Selects the halftone. --dropout None|Red|Green|Blue [inactive] Selects the dropout. --brightness-method iscan|gimp [iscan] Selects a method to change the brightness of the acquired image. --brightness -100..100 (in steps of 1) [0] Controls the brightness of the acquired image. --contrast -100..100 (in steps of 1) [0] Controls the contrast of the acquired image. --sharpness -2..2 [inactive] --gamma-correction User defined (Gamma=1.0)|User defined (Gamma=1.8) [User defined (Gamma=1.8)] Selects the gamma correction value from a list of pre-defined devices or the user defined table, which can be downloaded to the scanner --color-correction User defined [inactive] Sets the color correction table for the selected output device. --resolution 400|800|1600|3200dpi [400] Sets the resolution of the scanned image. --x-resolution 200|400|600|800|1200|1600|3200|6400dpi [200] Sets the horizontal resolution of the scanned image. --y-resolution 200|240|320|400|600|800|1200|1600|3200|4800|6400dpi [320] Sets the vertical -2..2 [1.2578] Controls red level --cct-2 -2..2 [-0.213989] Adds to red based on green level --cct-3 -2..2 [-0.0437927] Adds to red based on blue level --cct-4 -2..2 [-0.193893] Adds to green based on red level --cct-5 -2..2 [1.2856] Controls green level --cct-6 -2..2 [-0.0916901] Adds to green based on blue level --cct-7 -2..2 [-0.0257874] Adds to blue based on red level --cct-8 -2..2 [-0.264191] Adds to blue based on green level --cct-9 -2..2 [1.28999] Control blue level Preview: --preview[=(yes|no)] [no] Request a preview-quality scan. --preview-speed[=(yes|no)] [no] Geometry: --scan-area Maximum|A4|A5 Landscape|A5 Portrait|B5|Letter|Executive|CD [Maximum] Select an area to scan based on well-known media sizes. Maximum|A4|A5 Landscape|A5 Portrait|B5|Letter|Executive|CD [Maximum] Select an area to scan based on well-known media sizes. (DEPRECATED) Optional equipment: --source Flatbed|Transparency Unit [Flatbed]) --detect-doc-size[=(yes|no)] [inactive] Activates document size auto-detection. The scan area will be set to match the detected document size. --adf-auto-scan[=(yes|no)] [inactive] Skips per sheet device setup for faster throughput. --double-feed-detection-sensitivity None|Low|High [inactive] Sets the sensitivity with which multi-sheet page feeds are detected and reported as errors. --deskew[=(yes|no)] [inactive] Rotate image so it appears upright. --autocrop[=(yes|no)] [inactive] Determines empty margins in the scanned image and removes them. This normally reduces the image to the size of the original document but may remove more. --calibrate [inactive] Performs color matching to make sure that the document's color tones are scanned correctly. --clean [inactive] Cleans the scanners reading section. Type ``scanimage --help -d DEVICE'' to get list of all options for DEVICE. List of available devices: epkowa:interpreter:003:021 utsushi:esci:usb:/sys/devices/pci0000:00/0000:00:1a.7/usb9/9-5/9-5:1.0 utsushi:esci:networkscan://192.168.0.2:1865 utsushi:sane::epkowa:usb:0x04b8:0x0848
Trying simple `scanimage` with default depth works fine:
$ scanimage --verbose -d 'epkowa:interpreter:003:021' --format=tiff --source 'Transparency Unit' > test.tiff scanimage: scanning image of size 808x3052 pixels at 24 bits/pixel scanimage: acquiring RGB frame scanimage: min/max graylevel value = 0/254 scanimage: read 7398048 bytes in total
But setting the depth to 16bpc results in a crash?:
$ scanimage --verbose -d 'epkowa:interpreter:003:021' --format=tiff --source 'Transparency Unit' --depth 16 > test.tiff scanimage: scanning image of size 808x3052 pixels at 48 bits/pixel scanimage: acquiring RGB frame $ hexdump test.tiff 0000000 4949 002a 0008 0000 0010 00fe 0004 0001 0000010 0000 0000 0000 0100 0003 0001 0000 0328 0000020 0000 0101 0003 0001 0000 0bec 0000 0102 0000030 0003 0003 0000 00ce 0000 0103 0003 0001 0000040 0000 0001 0000 0106 0003 0001 0000 0002 0000050 0000 0111 0004 0001 0000 00f0 0000 0112 0000060 0003 0001 0000 0001 0000 0115 0003 0001 0000070 0000 0003 0000 0116 0004 0001 0000 0bec 0000080 0000 0117 0004 0001 0000 c540 00e1 0118 0000090 0003 0003 0000 00d4 0000 0119 0003 0003 00000a0 0000 00da 0000 011a 0005 0001 0000 00e0 00000b0 0000 011b 0005 0001 0000 00e8 0000 0128 00000c0 0003 0001 0000 0002 0000 0000 0000 0010 00000d0 0010 0010 0000 0000 0000 ffff ffff ffff 00000e0 0190 0000 0001 0000 0190 0000 0001 0000 00000f0
scanimage does not seem to segfault, but the output image is garbage and the scanner seems to be stuck and needs restart. I've tried xsane and skanlite, but since they all (presumably) work with sane, it didn't help.
Has anyone managed to make 16bpc transparency (analog film negative) work?
$ scanimage -V scanimage (sane-backends) 1.0.27; backend version 1.0.27 $ uname -roms Linux 4.20.6-arch1-1-ARCH x86_64 GNU/Linux
Last edited by JonnyRobbie (2019-03-08 10:19:10)
Offline
Over the past year I have scanned almost 7k prints, slides and negatives on my V600 -- it helps to be retired. When I started I quickly gave up on scan image. I tried out vuescan and after seeing the results I purchased a license and never looked back.
Offline
I'm using iscan only as a backend for sane. I don't really want to use epson's imagescan as a frontend. I just need 16bpc in sane/xsane. Are your scans in 16bpc?
Offline
My scans are not 16-bit but according to the docs for vuescan they do support 16-bit. Note that this is not imagescan but a different piece of software. It is proprietary but is in the AUR. Without purchasing a license you get a watermarked result.
Offline
So I was looking more into it. I know that the scanner supports 16bpc on win+mac, so the hardware is capable of that. I've tried running the scan with some more debug info. The output being quite substantial is hosted here. The most important being the last line:
$ SANE_DEBUG_EPKOWA=HEX scanimage --verbose --format=pnm --source 'Transparency Unit' --depth 16 > test16bpcV2.pnm ... dip-obj.c:572: [epkowa][F] failed: require (8 == buf->ctx.depth)
So I looked into the source of iscan and here's what I found:
$ cat backend/dip-obj.c ... /*! \todo Add support for 16 bit color values (#816). */ void dip_apply_color_profile (const void *self, const buffer *buf, const double profile[9]) { ... require (8 == buf->ctx.depth); ... } ...
Shaking my head at this point, it seems like while the hardware might support 16bpc, Epson could not be arsed enough to implement that in Linux. (Worth noting that 3rd party proprietary Vuescan does support 16bpc, I've tried that with the free watermarked version).
It seemed like I just should resign on the futility and move on, but this miraculous find happened:. Someone had the exact same problem more then five years ago, but knew enough of C to implement a patch. So I've checked out iscan using ABS, formatted a patch from hean01's changes:
--- backend/channel-usb.c +++ backend/channel-usb.c @@ -91,6 +91,7 @@ static ssize_t channel_usb_send (channel *, const void *, static ssize_t channel_usb_recv (channel *, void *, size_t, SANE_Status *); +static size_t channel_usb_max_request_size (const channel *); channel * channel_usb_ctor (channel *self, const char *dev_name, SANE_Status *status) @@ -119,7 +120,7 @@ channel_usb_ctor (channel *self, const char *dev_name, SANE_Status *status) self->send = channel_usb_send; self->recv = channel_usb_recv; - self->max_size = 128 * 1024; + self->max_request_size = channel_usb_max_request_size; return self; } @@ -265,9 +266,6 @@ channel_interpreter_ctor (channel *self, const char *dev_name, self->dtor = channel_interpreter_dtor; } } - - self->max_size = 32 * 1024; - return self; } @@ -283,3 +281,10 @@ channel_interpreter_dtor (channel *self) self->dtor = channel_dtor; return self->dtor (self); } + +static size_t +channel_usb_max_request_size (const channel *self) +{ + return (self->interpreter ? 32 : 128) * 1024; +} + --- backend/dip-obj.c +++ backend/dip-obj.c @@ -555,44 +555,70 @@ dip_change_GRB_to_RGB (const void *self, const buffer *buf) return; } -/*! \todo Add support for 16 bit color values (#816). - */ void dip_apply_color_profile (const void *self, const buffer *buf, const double profile[9]) { SANE_Int i; - SANE_Byte *r_buf, *g_buf, *b_buf; double red, grn, blu; - SANE_Byte *data; SANE_Int size; require (dip == self && buf && profile); - require (8 == buf->ctx.depth); + require (buf->ctx.depth == 8 || buf->ctx.depth == 16); if (SANE_FRAME_RGB != buf->ctx.format) return; - data = buf->ptr; - size = buf->end - buf->ptr; + if (buf->ctx.depth == 8) + { + SANE_Byte *r_buf, *g_buf, *b_buf; + SANE_Byte *data; + + data = buf->ptr; + size = buf->end - buf->ptr; - for (i = 0; i < size / 3; i++) + for (i = 0; i < size / 3;); + } + } + else if (buf->ctx.depth ==); + uint16_t *r_buf, *g_buf, *b_buf; + uint16_t *data; + + data = (uint16_t *)buf->ptr; + while(data < buf->end) + { +, 65535); + *data++ = clamp (grn, 0, 65535); + *data++ = clamp (blu, 0, 65535); + } } }
, and it seems to work!!! Makepkg went through and scanning with 16bpc actually worked.
So if anyone is having the issue, checkout iscan from arch repos using abs, apply this patch (tested with version 2.30.3) and makepkg. I'll try to file a bugreport to have this patch in the archrepos. Thanks hean01.
Offline
Thanks for your writeup @JohnRobbie.
I only seem to be able to get to 600dpi with 16bit color on my Epson Photo Perfection v370:
sudo scanimage --verbose --depth 16 --format=tiff --resolution 600 > /Ebooks/test-16.bit.600.dpi.tiff scanimage: scanning image of size 5096x7019 pixels at 48 bits/pixel scanimage: acquiring RGB frame scanimage: read 214612944 bytes in total
If I try 1200-4800dpi it results in :
$sudo scanimage -d epkowa:interpreter:002:006 --verbose --depth 16 --format=tiff --resolution 1200 > /Ebooks/test-16.bit.1200.dpi.tiff scanimage: sane_start: Invalid argument
Any ideas how to get the higher resolutions working? The specs suggests that it could be working. However, is there a way to see what is actually offered by the hardware other than scanimage -A?
First time using ABS, patch and editing the PKGBUILD. So it could be that I made some mistakes. Here's how I did it:
I installed subversion following the Arch wiki for ABS. Created the community directory, changed to it and used svn update iscan. Changed to the directory iscan/repos/community-x86_64. I copy pasted your patch into a file called support16bpc.patch, created a sha256sum and changed the beginning lines of PKGBUILD to:
# Maintainer: Muflone # Contributor: Frederic Bezies < fredbezies at gmail dot com> # Contributor: garion < garion @ mailoo.org > # Contributor: Alessio Sergi <asergi at archlinux dot us> pkgname=iscan pkgver=2.30.3.1 pkgrel=2 pkgdesc="EPSON Image Scan! front-end for scanners and all-in-ones" arch=('x86_64') url="" license=('GPL2' 'custom:AVASYSPL') depends=('gtk2' 'sane' 'libstdc++5') makedepends=('gettext' 'gimp') optdepends=('iscan-data: Image Scan! data files required for some devices') source=("{pkgname}_${pkgver%.*}-${pkgver/*.}.tar.gz" "libpng15.patch" "jpegstream.cc.patch" "support16bpc.patch" "epkowa.conf") sha256sums=('91a6cc1571e5ec34cee067eabb35f13838e71dfeda416310ecb5b5030d49de67' '1a75b8df945a813a297dfd6e3dabae3bc8b51898f23af31640091e31b901f0ba' '44990a5264e530a7a8ad1f95524e5d70e4f0f9009c53c8ea593cedf8d861a669' '8089bf64c0151667a41051b106ab8521ee60f639cb1b93ef085b6dc87ffdb834' '8e9e90fa50f1bd476b13766b19f100470c41dd253dc0605fbb1d0ac346a0beff') install="${pkgname}.install" backup=("etc/sane.d/epkowa.conf") prepare() { cd "${pkgname}-${pkgver%.*}" # patch for building iscan against libpng15 by giovanni patch -Np0 -i "../libpng15.patch" # patch for ambiguous div in jpegstream.cc patch -Np0 -i "../jpegstream.cc.patch" # add fix for CXX ABI different than 1002 ln -s libesmod-x86_64.c2.so non-free/libesmod-x86_64.so # add hean01 modification for 16bpc using JohnRobbie patch from> patch -Np0 -i "../support16bpc.patch" }
(I manually checked src/iscan/backend/channel-usb.c and the patches were applied.)
After that I ran in that directory
makepkg -s
and installed the resulting file with
pacman -U iscan-2.30.3.1-2-x86_64.pkg.tar.xz
Last edited by Markismus (2019-04-01 09:07:39)
Offline
Hi
I'm planning to integrate this patch into the main repository package but I'm not entirely sure if this could work for every firmware.
Could you please share your experience after installing this patch?
Offline
@Muflone It is an patch in the iscan package. So it is independent of specific firmware, isn't it? For the firmware I had to install a plugin package from the AUR.
That being said, although it enables 16bit scanning for the Epson Photo Perfection v370, it doesn't enable it for all resolutions. Only up to 600dpi, which is rather a low resolution for the scanning of photo's.
Offline
@Muflone It is an patch in the iscan package. So it is independent of specific firmware, isn't it? For the firmware I had to install a plugin package from the AUR.
As far I know iscan is able to perform scan for some models without any needed firmwares.
For some models you need to provide extra firmwares. I own a V30 perfection which needs a specific firmware to work.
As I wish to avoid breakage of an official package using an unofficial patch, any report about using this special patch is welcome.
Offline
First time using ABS, patch and editing the PKGBUILD. So it could be that I made some mistakes.
For the records, the ABS tool is deprecated and replaced by ASP.
asp export iscan
Offline
Thanks Muflone. Seems indeed a better way to go.
I hadn't read all the way to the tips of ABS. Shouldn't this be in the introduction of ABS? Or is it really still very much up for discussion whether ABS is deprecated?
Offline
As far as I can remember, there's more commits in that repo across three branches then what I've compiled in this patch. And some of those commits try to fix higher dpi scanning too.
Unfortunately, I can't remember exactly, which commits I took to form this patch. The reason being that the iscan version he worked on is a five years old one and if I would naively took all the commits there and just tried to merge that on top of the new version, it would most likely result in a merge hell. So I took the absolute minimum code of what I thought might solve my issue and created a patch.
You might try to take all those commits on that forked repo and patiently sift though merges to create a complete patchset - that might help to solve the issue.
Last edited by JonnyRobbie (2019-04-02 09:03:47)
Offline
Thanks @JonnyRobbie. That seems like something to do when there's time; a lot of time! Only a few commits. I'll look into it.
I'll first test the scanner on windows to see what can be expected to be supported.
Last edited by Markismus (2019-04-03 07:05:53)
Offline
TLDR: My Epson Photo Perfection V370 now works up to 4800dpi at 48bit color by patching iscan.
Workflow:
asp export iscan
Then get the PKGBUILD and hain01commits2dip-obj.patch by clicking their links. Put them into the newly created iscan directory.
makepkg -si
______________________
I have looked into these commits by Hain01 and most lines in the patch provided by JonnyRobbie seem to convert updates in iscan back to the older version.
In backend/channel-usb.c only one change is made over 2 commits changing the value from initially 1024 to 4096. Searching in the iscan\src\iscan-2.30.3 directory in no file the function channel_usb_max_request_size is mentioned. So I'm seeing this as an obsolete tweak. Why it worked for JonnyRobbie could well be because he reverted iscan to an older version. However, as I couldn't go beyond 600dpi it turned out not to be enough.
In backend/dip-obj.c a whole slew of changes is made with a small tweak in a second commit.
I've manually changed dip-obj.c, generated a patch file with diff -u and updated the PKGBUILD to include the patch file.
Testing results:
$sudo scanimage --verbose --depth 16 --format=tiff --resolution 600 > /Ebooks/test-16bps.600.dpi.tiff scanimage: scanning image of size 5096x7019 pixels at 48 bits/pixel scanimage: acquiring RGB frame scanimage: read 214612944 bytes in total $sudo scanimage --verbose --depth 16 --format=tiff --resolution 1200 > /Ebooks/test-16.bps.1200.dpi.tiff scanimage: scanning image of size 10192x14039 pixels at 48 bits/pixel scanimage: acquiring RGB frame scanimage: read 858512928 bytes in total $sudo scanimage --verbose --depth 16 --format=tiff --resolution 2400 > /Ebooks/test-16.bps.2400.dpi.tiff scanimage: scanning image of size 20392x28079 pixels at 48 bits/pixel scanimage: acquiring RGB frame scanimage: read 3435521808 bytes in total sudo scanimage --verbose --depth 16 --format=tiff --resolution 4800 -x 20 -y 20 > /Ebooks/test-16.bps.4800.dpi.tiff scanimage: scanning image of size 3776x3779 pixels at 48 bits/pixel scanimage: acquiring RGB frame scanimage: read 85617024 bytes in total
Manual verification shows that indeed the scans are finer grained as the dpi goes up. Scanning of a photo at 4800dpi finally shows the texture of the photo material!
That is 205MB at 600dpi, 818MB at 1200dpi and 3.2GB at 2400dpi. There is 6min between the timestamps of the 600dpi and 1200dpi files. The 2400dpi is took almost an hour. (For comparison a 300dpi scan of the entire scan bed takes 26.814s.)
@Muflone So concluding I would say that my experience is that I won't use the earlier patch generated by JonnyRobbie, but the manually made one based on Hain01 commits to his repo.
Thanks @JonnyRobbie for pointing out that there was a lot more commited than originally taken into account by your patch!
Last edited by Markismus (2019-04-12 16:28:37)
Offline
Using this patched version of iscan prevents my V39 from connecting. Reverting to the community version restores connectivity.
I don't have time to dig into the issue today, but would be happy to investigate further next week.
Offline
I am Assuming you meant my patch and not JohnRobbie’s. Does your V39 support 16bpc at higher resolutions in the hardware (or in windows)?
I am speculating that it is either a mistake while creating the package or the enabling of higher color modes that are not supported. The changes in the patch are so little that I don’t know what else would cause a scanner to be not recognized any more.
What messages are generated when it fails?
Offline
Another Epson V370 owner here.
@Muflone
It seems that iscan 2.30.3.1 doesn't work ("No scanners were identified") when built against glibc 2.29.
Both official and patched version of iscan work fine for me when built against glibc 2.28.
@Markismus
Thanks for your patches!
Could you try rebuilding iscan against glibc 2.29?
@matthew_2nHk
Could you try rebuilding iscan without any "unofficial" patches?
Offline
I was unable to rebuild the package in any way, even without the proposed patches. It will always segfault with my scanner firmware.
I was thinking to drop it to AUR until a solution can be found.
Offline
I was unable to rebuild the package in any way, even without the proposed patches. It will always segfault with my scanner firmware.
Have you tried rebuilding the package against glibc 2.28?
By the way, are you aware of a more or less "clean" way to do it?
Offline
@strang3r I build iscan with the patches against glibc 2.29. It's been around since January and I installed the second package glibc v2.29-2 at the end of April, so I never build it against 2.28 nor needed to.
@muflone I have no idea why you can't rebuild the package. Are we talking about something else? I used asp and makepkg. Reran both your package in the AUR and my patched version.
AUR: Finished making: iscan 2.30.3.1-2 (Sun 02 Jun 2019 09:37:18 PM CEST)
Patched: Finished making: iscan 2.30.3.1-2 (Mon 03 Jun 2019 10:34:06 AM CEST)
Offline
I build iscan with the patches against glibc 2.29
Could you post the output of
readelf -s /usr/lib/sane/libsane-epkowa.so.1.0.15 | grep GLIBC_2\.29
and
readelf -s /usr/lib/sane/libsane-epkowa.so.1.0.15 | grep pow
?
Offline
$ readelf -s /usr/lib/sane/libsane-epkowa.so.1.0.15 | grep GLIBC_2\.29 $ readelf -s /usr/lib/sane/libsane-epkowa.so.1.0.15 | grep pow 72: 0000000000000000 0 FUNC GLOBAL DEFAULT UND pow@GLIBC_2.2.5 (4)
The output of the first request is empty. Checking it with grep GLIBC_2 showed only GLIBC_2.2.5. The installed sane package is version 1.0.27-2. Install date is 9th November 2018. It's the current package in the extra repository.
So it looks your problem isn't with iscan, but with the sane epkowa library. I'll not update sane before this is resolved!
Last edited by Markismus (2019-06-03 11:20:22)
Offline
@Markismus
Well, I'm getting the exact same output when I build iscan against glibc 2.28.
This is what I get when I build iscan against glibc 2.29:
$ readelf -s libsane-epkowa.so.1.0.15 | grep GLIBC_2\.29 128: 0000000000000000 0 FUNC GLOBAL DEFAULT UND pow@GLIBC_2.29 (10)
Just to be clear:
pacman -Qo /usr/lib/sane/libsane-epkowa.so.1.0.15 /usr/lib/sane/libsane-epkowa.so.1.0.15 is owned by iscan 2.30.3.1-2
Last edited by strang3r (2019-06-03 11:44:36)
Offline
Ah, so it is from iscan. The file name and the directory name lulled me into thinking it would be from sane.
Could it be that I should add extra configuration to force build against another glibc library? Some environmental variable?
Last edited by Markismus (2019-06-03 12:55:27)
Offline
@Markismus
Assuming your system is up-to-date, could you try building iscan in a clean chroot? … ean_chroot
e.g.:
sudo pacman -S devtools --needed multilib-build
Offline | https://bbs.archlinux.org/viewtopic.php?id=244017 | CC-MAIN-2021-10 | refinedweb | 3,983 | 57.47 |
This is like KSS question: why we resolve a question with the first way we find? not the better one? Is the human being. But as Steve Jobs said some time ago: only excelence people don't stop in the first way to resolve a problem
Advertising
When I meet Zope for the first time I was absolutely impress with it There was nothing better to represent an object universe I know that Zopers have their own way (this was discuss some time ago, too) but my ideal scenario will be a minimalism Zope where everything was an URL In this ideal scenario you could ask for an URL with or without parameters as if you ask for to the browser We stop this theme to wait for a Plone2PDF or similar. I hope we could solve this in a simple way (I think if this needs a more complicated solution we will have failed) I only need to put a link with the PDF download to the current page! (this don't sound to much complicated, isn't it?) 2008/10/12 Dieter Maurer <[EMAIL PROTECTED]> > Garito wrote at 2008-10-11 16:39 +0200: > >Did you imagine another way to do what I need to do? > > I have not followed intensively "what you need to do". > Thus, what follows may not be adequate. > > When I remember right, then a PageTemplate's namespace is passed > on to a "Script (Python)" when this script binds "namespace". > Thus, this way you get access to the variables defined in the template. > > A simple path "var/s1/.../sn" is roughly requivalent to > > x = var.restricedTraverse("s1/.../sn") > if callable(x): x = x() # "callable" may not be available in "Script > (Python)" > > If the path contains "?var", these must be resolved beforehand. > > More complex paths "p1 | p2 | ... pn" are roughly equivalent > to > > exc = None > for p in (p1, p2, ... pn): > try: return path1(p, ...) > except <some standard exceptions>: > exc = sys.exc_info() > if exc is not None: raise exc[0], exc[1], exc[2] > > The most difficult part are for paths where "pn" is not a path > expression but an arbitrary one. In this case, the concrete > TALES implementation will be required for an interpretation. > > > Along this outline, a function "path(path_expr, variable_binding)" > can be defined which roughly behaves like "path(path_expr)" in > a PageTemplate with the current variable binding expressed > as "variable_binding". > > For simple cases, this function could be implemented in untrusted > code. Complex cases will require access to the TALES implementation > and therefore probably trusted code. > > > > -- > Dieter > -- Mis Cosas Zope Smart Manager
_______________________________________________ Zope maillist - Zope@zope.org ** No cross posts or HTML encoding! ** (Related lists - ) | https://www.mail-archive.com/zope@zope.org/msg31373.html | CC-MAIN-2016-50 | refinedweb | 442 | 72.05 |
Most large C++ programs make use of dynamically allocated memory. This is especially true of EDA applications (simulators and tools for VLSI chip design), where the designer can never safely set an upper limit on application size. using the default C++ delete function. Time consuming memory recovery can be avoided by using a pool based memory allocator..
If a C++ object is allocated from a memory pool, its constructor will not be called by default. This can sometimes be handled by initializing the object with an explicit call to the constructor. For example:
my_class *pMyClass; pMyClass = (my_class *)pool.GetMem( sizeof( my_class ) ); *pMyClass = my_class(); // invoke the constructor
This approach has several problems. The constructor invokation above creates a temporary instance of the class my_class. This class is then copied into the memory pointed to by pMyClass. After the class is copied, the destructor for my_class will be called. If memory allocation takes place in the constructor and deallocation takes place in the destructor, there can be a problem. For example, if the class pointed to by pMyClass is initialized with a pointer the memory allocated by the my_class constructor, this same memory will be deallocated by the destructor. When the assignment completes, the class pointed to by pMyClass will point to deallocated memory, which is not what the programmer intended.
The problem above can be handled by adding init and dealloc class functions which can be called explicitly to allocate memory. The init function can be called after the class constructor copy.
my_class *pMyClass; pMyClass = (my_class *)pool.GetMem( sizeof( my_class ) ); *pMyClass = my_class(); // invoke the constructor pMyClass->init(); // allocateion memory
Unfortunately, the scheme outlined above is totally inadequate if the class have virtual functions. The class constructor will not initialize the virtual function table with the appropriate function addresses.
Since at least 1993 C++ has defined a way to overload the new operator for a given class. By providing an overloaded verion of new for a class (e.g., my_class above), there will be a simple and natural way to allocate an object from a memory pool and initialize it properly. The overloaded new function allocates memory from the memory pool and the C++ compiler generates code to initialize the virtual function table and to invoke the class constructor.
The C++ code below has a base class (cleverly named base) and a derived class base_one. The base class has an overloaded version of new, which takes two arguments: the number of bytes to allocate and a pointer to a memory pool allocation object. The compiler automatically plugs in the type size (the first argument to the overloaded new). The call to new then takes the form
pClass = new( user args ) type
which is expanded into a call to the overloaded new function
void *operator new( type size, user args );
For more details see section 5.3.3 of the ANSI C++ standard.
class base { public: base() { } void *operator new( unsigned int num_bytes, pool *mem) { return mem->GetMem( num_bytes); } virtual void pr(void) = 0; }; class base_one : public base { private: int a; public: base_one() {} void pr(void) { // local print } }; main() { base *pB1; pool mem; pB1 = new( &mem ) base_one; pB1->pr(); }
In this example there is a memory allocation type pool. This is passed as an argument to new, which uses the GetMem class function to allocate memory.
A complete test case, demonstrating overloading of the new operator to allocate a class with virtual functions is shown here.
The complexity of C++ and the obscurity of the language standard in some areas takes the problems encountered in portability to new levels.
With older compilers from HP and IBM, use of the "positional new", where the operator new is passed a memory pool argument, as shown above, requires an overloaded version of the default new as well. Note that this is not required by Solaris C++ or current releases of HP or IBM C++. In the code shown below, the first function overloads the default version of operator new. Since this class will allocate memory from a pool, this version should never be used and contains an assert. The second version of the new operator allocates memory from the memory pool.
#ifdef _BRAIN_DAMAGED_IBM_ // IBM requires that the operator new size argument be unsigned long void *operator new( unsigned long num_bytes ) #else void *operator new( unsigned int num_bytes ) #endif { assert( FALSE ); return NULL; } #ifdef _BRAIN_DAMAGED_IBM_ void *operator new( unsigned long num_bytes, pool *mem ) #else void *operator new( unsigned int num_bytes, pool *mem ) #endif { return mem->GetMem( num_bytes ); } // operator new // There is no delete, since memory is recovered through // deallocation of the memory pool void operator delete( void * ) { /* do nothing */ }
The code shown above will compile on Sun and on the earlier versions of the IBM and HP compilers. Of course when this code is compiled on IBM the -D_BRAIN_DAMAGED_IBM_ flag must be used, since IBM requires that the num_bytes argument be unsigned long.
Ian Kaplan
Last updated May 1998
back to Notes on Software and Software Engineering
back to home page | http://www.bearcave.com/software/c++_mem.html | crawl-001 | refinedweb | 834 | 50.57 |
We learned about arrays and string in C++ now, we are going to divulge deep into the Strings as they constitute an important part in the C++ programming language. In this chapter, we will learn about the string functions.
What are String Functions?
As the name suggests, the string functions are actually the functions that manipulate the strings. These are a special type of functions that are already defined in the C++ header files and can be used by any programmer to manipulate the strings or the character arrays in C++.
To use the string functions in C++, we must include the header file string.h, which contains the definition of all the string functions.
String Functions in C++:
Following string functions are offered in C++:
– Copy Function:
strcpy(s1, s2);
Used to copy the value of s2 into s1. But it works only if s1 has a size big enough to accommodate the s2.
–Concatenate Function:
strcat(s1, s2);
Concatenates string s2 onto the end of string s1. But it works only if s1 has a size big enough to accommodate the contents of s2.
-Length Function:
strlen(str);
Returns the length of the string specified in the parenthesis. Its return type is an integer.
–Comparison Function:
strcmp(s1, s2);
Compares strings s1 and s2 with each other. This comparison is done in a case sensitive manner, thus, C++ and c++ are two different words here. Returns 0 if s1 and s2 are the same; less than 0 if s1<s2; greater than 0 if s1>s2.
–Case Insensitive Comparison Function:
strcmpi(s1, s2);
Compares strings s1 and s2 with each other. This comparison is done in a case insensitive manner, thus, C++ and c++ are the same words here. Returns 0 if s1 and s2 are the same; less than 0 if s1<s2; greater than 0 if s1>s2.
Below code illustrates the use of some of these string functions:
#include <iostream.h> #include <string.h> length of str1 after concatenation len = strlen(str1); cout << "strlen(str1) : " << len << endl; }
Output:
strcpy( str3, str1) : Hello strcat( str1, str2): HelloWorld strlen(str1) : 10
Report Error/ Suggestion | https://www.studymite.com/cpp/string-functions-in-cpp/?utm_source=sidebar_recentpost&utm_medium=sidebar | CC-MAIN-2019-43 | refinedweb | 357 | 72.66 |
From: Convector Editor (creyes123_at_[hidden])
Date: 2003-03-26 09:58:54
I took a look at the auto_unit_test.hpp source code. I believe that we are both
correct. The GCC "#pragma interface" feature lets it all work for me. I have a
pretty good idea how, but that is outside the scope of this thread. Rest
assured, I use BOOST_AUTO_UNIT_TEST across a dozen source files in a single
application, and they are all being identified and run correctly. Yes, I got
lucky and didn't even know it.
In your original e-mail, you asked for a solution to the multiple function
definition error. Wouldn't using a mechanism similar to cpp_main work? Ie, put
the init_unit_test_suite() definition in a CPP file that only gets included
once, such as:
#include <boost/test/included/auto_unit_test.hpp>
Which in turn includes
#include <libs/test/src/auto_unit_test.cpp>
I still stand by my original patch submission. Although, as you pointed out,
for the vast majority of users, it is of no value. However, it worked for me,
and it is likely to help others in the future.
> Hmmm. It works for me. I'm using BOOST_AUTO_UNIT_TEST across several of my
> source files without name collisions (after my patch). I'm also using the
GCC
> "#pragma interface" feature, which might make a difference. Pardon my
> ignorance, since I'm not very familiar with the auto unit test
implementation.
I couldn't imagine how it may work. Each module will have following:
static boost::unit_test_framework::test_suite* test = BOOST_TEST_SUITE(
"Auto Unit Test" );
boost::unit_test_framework::test_suite*
init_unit_test_suite( int /* argc */, char* /* argv */ [] ) {
return test;
}
So. You should get symbols conflicts. Even if compiler/linker is able to
somehow choose and bind one of multiple init_unit_test_suite with library
call, it still should only run tests registered in selected module.
I would not play on such shaky grounds.
Gennadiy.
__________________________________________________ | https://lists.boost.org/Archives/boost/2003/03/46240.php | CC-MAIN-2020-29 | refinedweb | 311 | 58.89 |
No update?
Would be even better if the pyto site-packages could be exposed to Pythonista...
I think it was possible with the last (disappeared) beta
alala pythonista is falling down falling down falling down~~
Would be even better if the pyto site-packages could be exposed to Pythonista...
I think it was possible with the last (disappeared) beta
@cvp, could you please elaborate? Do you refer to a beta of Pyto? It does not seem to expose site-packages today (in the same way that Working Copy exposes the repositories).
To clarify, I do not expect we could run pandas in Pythonista, but was thinking that it would be nice to get code completion going in some way.
@mikael Sorry, I was referring to Pythonista but I did a mistake, really sorry.
It is sure we could share an iCloud Drive file in an user directory, like
'/private/var/mobile/Library/Mobile Documents/com~apple~CloudDocs/MyFolder/MyFile.txt'
Still waiting on pythonista beta release to app store. the lack of shared directory access (resolves in the beta) greatly diminishes the value of the app. @omz please release it to the app store.
@cvp, I guess the biggest obstacle for a smooth ”Pythonista UI/Pyto pandas” experience is that Pyto needs to pop in front while it’s executing?
The solution suggested between pythonista and pyto is ugly, awful and stupid. If omz stopped updating pythonista, we should abandon it, too. Just let is dead. I’m trying pyto now. Thank god I have pycharm so I don’t have to view or write code with pyto.
@timtim said:
The solution suggested between pythonista and pyto is ... stupid
Thanks a lot
@timtim On the other hand, I consider stupid and useless the constant ranting against Pythonista, his active community and against some very active/skilled people here. Those who propose solutions or try to study them and make them available to others not only do something worthy of note but keep alive the interest and freedom to dream of many users, it is the most important thing for me.
Take a tour in this forum (since six years ago) and you will see that there are many posts where people have written very interesting thing and what thay have written, driven by their passion for something, have allowed other people to start dreaming or simply to be more productive with Pythonista.
@Matteo The forum is alive but how about the author of Pythonista? Do you think the forum can stay alive even the application is dead? I don't think?
- mithrendal
Hello Ole,
I am back on the app store version again. Why not just release the last beta to the app store? It was pretty stable. Come on don't be shy and just push the button !
;-).
I know, it is stupid, but, please, let me have fun.
I launch a Pyto script and a Pythonista script, sharing a local file in
Files App/On my Ipad/Pythonista_to_Pyto/pythonista_to_pyto.txt.
Both apps run together in MultiView MultiTasking, only to show that both apps could dialog...
but, I agree, if too ugly, don't watch this video
Pythonista script
import ui from math import pi,cos,sin class MyClass(ui.View): def __init__(self, *args, **kwargs): super().__init__(*args, **kwargs) self.iv = ui.View() self.r = ui.get_screen_size()[0]/2 self.iv.frame = (0,6,self.r*2,self.r*2) self.iv.corner_radius = self.r self.iv.background_color = 'gray' self.add_subview(self.iv) self.c = ui.View() self.ri = self.r / 10 self.c.frame = (0,0,self.ri*2,self.ri*2) self.c.corner_radius = self.ri self.c.background_color = 'red' self.iv.add_subview(self.c) self.path = '/private/var/mobile/Containers/Shared/AppGroup/E778DE4A-FE79-42E6-9200-353821BFF879/File Provider Storage/Pythonista_to_Pyto/pythonista_to_pyto.txt' self.update_interval = 0.02 self.ang = 0 def update(self): self.ang += 1 a = self.ang*pi/180 self.c.x = self.r + (self.r-self.ri)*cos(a) - self.ri self.c.y = self.r + (self.r-self.ri)*sin(a) - self.ri r = str(self.c.x)+','+str(self.c.y) with open(self.path,mode='wt') as fil: fil.write(r) def layout(self): w = self.bounds.size[0] self.iv.x = w - 2*self.r if __name__ == '__main__': mc = MyClass(bg_color='white', name='Pythonista') mc.present('fullscreen')
Pyto script
import pyto_ui as ui from math import pi,cos,sin from UIKit import UIScreen from time import sleep import threading class my_thread(threading.Thread): def __init__(self,view): threading.Thread.__init__(self) self.view = view def run(self): path = '/private/var/mobile/Containers/Shared/AppGroup/E778DE4A-FE79-42E6-9200-353821BFF879/File Provider Storage/Pythonista_to_Pyto/pythonista_to_pyto.txt' ang = 0 while True: #sleep(0.01) with open(path,mode='rt') as fil: r = fil.read() try: xy = r.split(',') self.view.c.x = float(xy[0]) self.view.c.y = float(xy[1]) except Exception as e: if str(e) == "could not convert string to float: ''": continue print(str(e)) break class MyClass(ui.View): def __init__(self, *args, **kwargs): super().__init__(*args, **kwargs) self.background_color = ui.COLOR_WHITE self.title = 'Pyto' self.iv = ui.View() self.r = UIScreen.mainScreen.bounds.size.width/2 self.iv.frame = (0,0,self.r*2,self.r*2) self.iv.corner_radius = self.r self.iv.background_color = ui.COLOR_GRAY self.add_subview(self.iv) self.c = ui.View() self.ri = self.r / 10 self.c.frame = (0,0,self.ri*2,self.ri*2) self.c.corner_radius = self.ri self.c.background_color = ui.COLOR_RED self.add_subview(self.c) server_thread = my_thread(self) server_thread.start() if __name__ == '__main__': mc = MyClass() ui.show_view(mc,ui.PRESENTATION_MODE_FULLSCREEN)
@timtim?
But in my opinion if Python.org updates Python to 4.0 version, it automatically doesn't mean it is a good thing also for Pythonista developer...it is like a car that has a lot of electronics: it is good for those who sell electronic components, not always for those who have to buy or maintain them (in this example the seller would be python.org or Apple (with ios versions) and who would buy and maintain the updates by python.org or Apple is omz: yes, about python no expense of money, because it is free, but certainly expense of time invested on maintaining).
I don't know what omz is doing, personally I don't care much, because I consider privacy important, but I hope the best for him. Sorry but I don't see an employment contract between us users and the developer, and I think it's a good thing that it doesn't exist...I'm sorry for you that you need to use the latest Python 3 features not present in Pythonista and you have spent money to buy (from what I understand) several Pythonista licenses for you and other people, thinking of making a good deal because, maybe, you were hoping that Pythonista would continue to be updated with the frequency of a few years ago. I have not spent so much (the equivalent of a pizza and drink) and above all, for what I have to do, the version of Pythonista I use is great for me (I almost completely ignore the benefits that I could have using the last version of python 3 in the things I do, also because the things I do concern things discovered many years ago by people like this, this, this, this, this ..., which have very little to do with python 3...).
I have a very personal opinion about these things, I'm not a programmer and for me a tool is valid as long as I can use it in the things I have to do, I don't need updates if I can add (someone can add) some extra capabilities to version I use. So for me the version of Pythonista may have stopped at the one I use, but clearly only for me, not for others that take more advantage of using the updated version.
I read about Pyto, it is very poweful, I would like to install it but the ios version that I use does not allow it and I have little free mem space, and I have no intention of updating the ios operating system for now with my phone. About python.org nobody can decide the fate of something in which he is not directly involved, many things happen or can happen in the future and we don't want it. For example I'm interested in environmental issues but I still use a diesel car, when diesel will cost more than electric I will change car. So new tool when I will need it or will be imposed on me by other things that I cannot govern (ie: global economy) ...
About an updated version of Pythonista I am neither an optimist nor a pessimist because the version I use (an old one) is fine and I know that here there are people who, with their knowledge and intellectual abilities, manage to increase the power of this application with programming (with Python as well or objc). I'm not a big fan of software updates in general, even less if they concern only graphic updates. I would still use Windows XP if Microsoft continued to update it with security patches, new and useful functions, optimization of ram and cpu usage, etc ... but the abandonment of certain versions of certain software is often related to economic and marketing aspects (new operating system that requires more power means that we need new hardware, even if, with programming only, someone could give more power to existing softwares / operating systems if they were extensible, even if they are not open-source, but this would involve less money...).
@cvp, heh! I understand why it had to be done.
Also, nice comparison of the syntax of the two implementations. | https://forum.omz-software.com/topic/6118/no-update/?page=4 | CC-MAIN-2021-21 | refinedweb | 1,652 | 63.09 |
12 October 2010 16:20 [Source: ICIS news]
By Anu Agarwal
DUBAI (ICIS)--The Indian finished lubricants market is expected to grow at around 4% annually, in both the automotive and industrial lube sectors, a senior industry executive said on Tuesday.
R Sudhakar Rao, Executive Director at ?xml:namespace>
Half of this was automotive oils, 25% industrial oils, 10% process oils, 10% greases and 5% marine lubricants.
India produces around 800,000 tonnes of base oils while the deficit of close to 1 million tonnes, is met through imports - largely from Korea and Iran, he said on the sidelines of the 7th ICIS Middle Eastern Base Oils & Lubricants conference.
Growing automotive production in
''India is also adding significant power generation capability in the coming years - plans to start some 60,000 megawatts of power generation plants over the next five years, is also contributing to the growth in demand for transformer oils'', he added.
Rao said that there was an ongoing shift in the quality of base oils being used in the country and this was being driven by a number of factors.
The government's regulatory push in terms of the Clean Air Act, as well new engine design and technology and consumer demand for warranties and extended drain intervals, were all driving the increasing use of higher quality base oils.
HPCL was currently in the midst of converting its 335,000 tonne/year group I base oils plant to produce some higher quality base oils- namely group II and group III base oils.
After the conversion is completed in December, the total base oils production capacity of the company will be 360,000 tonnes/year - of which 160,000 tonne/year will be group I, 180,000 tonne/year will be group II and around 20,000 tonne/year will be group III.
The company would continue to produce some 30,000 tonne/year of brightstock.
When asked whether
''The big challenge with base oils is the type of crude that is required to be processed at the refinery, whereas refinery economics in general are not based on base oils,'' he added.
Base oils usually require a paraffinic and waxy crude, as that provides optimum base oil yields. However, refinery economics work on fuels production and this could work on various different types of crudes.
Rao estimated Indian base oils demand to hit 3m tonnes by 2015 on the back of automotive, infrastructure and power generation sectors.
The main producers of base | http://www.icis.com/Articles/2010/10/12/9400858/hpcl-predicts-4-growth-in-indian-lubricants-market.html | CC-MAIN-2013-48 | refinedweb | 414 | 53.14 |
These are chat archives for FreeCodeCamp/Help
get help in general - we have more specialized help rooms here:
thekholm80 sends brownie points to @darrenfj :sparkles: :thumbsup: :sparkles:
darrenfj sends brownie points to @thekholm80 :sparkles: :thumbsup: :sparkles:
body ( background-color: lightblue; )
{ }instead of
( )
type="type/css"out of your
<style>tag it works
document.write()for all of this?
writeused after a page is loaded will destroy the HTML and replace it.
writebesides for testing or for some funky new tabs creation
document.writewill be like a brand new page. So if you want to use it as it is, you have to write the head as well
jeddionido sends brownie points to @marmiz :sparkles: :thumbsup: :sparkles:
pppppsssst... i told you how a second ago:
Jut to make it simple, everything you use in
document.writewill be like a brand new page. So if you want to use it as it is, you have to write the head as well
:shipit:
writewas with
window.opento test a backend feature :)
querySelectorand
appendChildstuff
olapri sends brownie points to @thekholm80 and @sjames1958gm and @heinoustugboat :sparkles: :thumbsup: :sparkles:
[i]to the
source[keySource[i]] !== item[keySource])- make it
source[keySource[i]] !== item[keySource[i]])
keySource[i]in your inner loop.
truewhen there is only a single match, it is not fully testing all of the sub-object properties - for example, the
sourcein your test case above has
{ "a": 1, "b": 2 }and the second object in the array only consists of
{ "a": 1 }. Unless you determine that the second key/value
"b": 1exists and matches in the item, you will not get the correct result. so you don't want to
return truein that case...
source[keySource[i]] !== item[keySource])since
keySourceis an array. That can be solved with
source[keySource[i]] !== item[keySource[i]])now with the second problem of it returning
truetoo early, that can be solved with having
return trueoutside the loop so something like this.
for(var i = 0; i < keySource.length; i++) { if(!item.hasOwnProperty(keySource[i]) || source[keySource[i]] !== item[keySource[i]]){ return false;} else {} } return true;
olapri sends brownie points to @ezioda004 and @khaduch :sparkles: :thumbsup: :sparkles:
#include <stdio.h> #include <cs50.h> #include <math.h> int main (void) { float f = 0; //prompt user for a valid input do { f = get_float("How much do we owe you?: \n"); } while (f < 0); printf("%f\n", f); }
float f; | https://gitter.im/FreeCodeCamp/Help/archives/2018/02/09 | CC-MAIN-2020-40 | refinedweb | 399 | 74.19 |
I have an idea of how to complete it, but i'm worried about the performance and how much time it would take to insert any of the values into the script. Its important for the script to load on the agents screen as fast as possible.
My idea
example script
Hi {Lead.FirstName}, my name is {Agent.FirstName} and im....
example model
public class ScriptModel{ public LeadModel Lead{get;set;} public EmployeeModel Agent{get;set;} }
I thinking i could do the following
1) take the "ScriptModel" and convert it to a JObject (using NewtonSoft.Json),
2) then search the script for the "{...}"
3) string.Split('.') the value inside of "{...}"
4) Loop through the JObject until the final split value
5) insert the final value into the script
My questions are,
1) what is the best (performance wise) way to search the script file for "{...}" and retrieve the value inside
2) Is there a more "Native" way to complete this task? | https://www.dreamincode.net/forums/topic/416774-map-value-in-file-to-a-c%23-model/ | CC-MAIN-2019-30 | refinedweb | 161 | 72.97 |
Components and supplies
Apps and online services
About this project
In this project, we’ll try to get some information from human body using simple sensors and Arduino. You can use these.
Code
In this code, we get temperature and perspiration from SHT20 sensor by I2C port, Heart rate from pulse sensor by analog input pin and breath rate from impedance pneumography circuit by analog input pin.!
#include "Wire.h" #include "DFRobot_SHT20.h" DFRobot_SHT20 sht20; void setup() { Serial.begin(9600);(temp, 2); Serial.print(","); Serial.print(analogRead(A3)); // Read Pulse sensor value Serial.print(","); Serial.print(humd, 1); Serial.print(","); Serial.println(analogRead(A2)); // Read Breath rate }
Assembling
All you need to do is just cut the sponge and place the sensors. Then put it on your hand and adjust the stretch rubber and stick it using the hot glue gun.
Now, open up the Tools >> Serial Plotter.
You can see PULSE and BREATH graphs in the plotter of Arduino.
What’s Next?
You can improve this project as you wish. Here are a few suggestions:
- Use ECG modules and connect them to Arduino.
- Try analyzing data from your body.
- Make a simple lie detector by more sensors and more coding.
Code
Pulse SensorC/C++
No preview (download only).
SHT20C/C++
No preview (download only).
Author
ElectroPeak
- 28 projects
- 277 followers
Published onDecember 12, 2018
Members who respect this project
you might like | https://create.arduino.cc/projecthub/electropeak/arduino-polygraph-machine-lie-detector-7d8b10 | CC-MAIN-2019-39 | refinedweb | 233 | 68.57 |
Apache::iNcom::CartManager - Object responsible for managing the user shopping cart.
$Cart->order( \%item ); my $items = $Cart->items; $Cart->empty;
This is the part of the Apache::iNcom framework that is responsible for managing the user shopping cart. It keep tracks of the ordered items and is also responsible for the pricing of the order. It this is module that computes taxes, discount, price, shipping, etc.
Well not completly since all these operations are delegated to user implemented functions implemented in a pricing profile. The idea behind it is to make policy external to the framework. One thing that varies considerably between different applications is the pricing, discount, taxes, etc. So this is left to the implementation of the application programmer.
The pricing profile is a file which is C{eval}-ed at runtime. (It is also reloaded whenever it changes on disk. It should return an hash reference which may contains the following key :
The function should return the price of the item. The function is passed only one parameter : the item which we should compute the price.
Ex: item_price => sub { my $item = shift; my $data = $DB->template_get( "product", $item->{code} ); return $data->{price}; }
The function should return the discounts that apply for that particular item. It can return zero or more discounts. It returning more that one discount return a an array reference. Discount are substracted from the item price so don't return a percentage.
Ex: item_discount => sub { my $item = shift; # Discount are relative to item and quantity my $data = $DB->template_get( "discount", $item->{code}, $item->{quantity} ); return unless $data; # No discount # Discount is proportional to the price return $item->{price} * $data->{discount}; }
The subtotal of the cart is equal to the sum of
($item->{price} - $item->{discount}) * $item->{quantity}
This function determines the shipping charges that will be added to the subtotal. The function receives as arguments the subtotal of the cart and an array ref to the cart's items. It should return zero or more shipping charges that will be added to the subtotal. If returning more that one charges, return an array reference.
Ex: shipping => sub { # Flat fee based shipping charges if ( $Session{shipping} eq "ONE_NIGHT" ) { return 45; } else { return 15; } }
That function determines discount that will be substracted from the subtotal. Function is called with 3 arguments, the subtotal of the cart, the shipping charges and an array reference to the cart's items. Again the function may elect to return zero or more discounts and should return an array reference if returning more that one discounts.
Ex: discount => sub { my $subtotal = shift; my $user = $Request->user; return unless $user->{discount}; return $subtotal * $user->{discount}; }
That functions determines the taxes charges that will be added to the order. It should return zero or more taxes. If the functions returns more that one taxes, it should return an array reference. The functions receives 4 arguments, the cart's subtotal, the shipping charges, the discount and the cart's items as an array reference.
Ex: taxes => sub { my ( $sub, $ship, $disc ) = @_; # We only charges taxes to Quebec's resident. All our # items are taxable and is shipping. if ( ${$Request->user}->{province} eq "QC" ) { my $taxable = $sub + $ship - $disc; my $gst = $taxable * 0.07 my $gsp = ($taxable + $gst) * 0.075 return [ $gst, $gsp ]; } else { return undef; } }
If one of these functions is left undefined. The framework will create one on the fly which will return 0. (No taxes, no discount, no shipping charges, item is free, etc).
All those functions are defined and execute in the namespace of the pages which will use the $Cart object. This means that those functions have access to the standard Apache::iNcom globals ($Request, %Session, $Localizer, $Locale, etc ). DONT ABUSE IT. Also, don't call any methods on the $Cart object or you'll die of infinite recursion.
An item is simply an hash with some reserved key names. All other keys are ignored by the CartManager. Each item with the same (non reserved) key values is assumed to be identic in terms of price, discount, etc.
This design was chosen to handle the infinite variety of item attributes (color, size, variant, ...). The framework doesn't need knowledge of those, only the application specific part. (The pricing functions.)
These are reserved names and can't be used as item attributes :
quantity,
price,
discount,
subtotal
An object is automatically initialized on each request by the Apache::iNcom framework. It is accessible through the $Cart global variable in Apache::iNcom pages.
This method will add all the specified items (hash reference) to the Cart. The quantity ordered should be specified in the
quantity attribute. (If unspecified, it is assumed to be one). If an identical item is already in the cart, the quantity will be added.
Use a negative quantity to remove from the quantity ordered. If the new quantity is lower or equal to zero it will be removed.
Use a quantity of 0 to remove an item.
Removes all items from the cart.
Return all the ordered items as an array. Each item have the following attribute set :
The quantity of the item ordered.
The price of that item.
The discounts applied to this item.
That item subtotal.
Returns the cart subtotal. (This is before global discount, shipping charges and taxes.)
Returns the taxes that will be added to the order.
Returns the order total. (subtotal + shipping charges - discounts + taxes ).
Returns the overall discount that applied to this order.
Returns the shipping charges for this order.
Returns the price of the item specified. If no quantity is specified, a quantity of 1 is assumed. This method doesn't modify the cart.
Returns the discounts associated with the specified item. It no quantity is specified, a quantity of 1 is assumed. This method doesn't modify the cart.
Returns the item as it would be added to the cart.
quantity,
price,
discount and
subtotal will be set in the returned item. This method doesn't modify the cart.::Request(3) Apache::iNcom::OrderManager(3) | http://search.cpan.org/~frajulac/Apache-iNcom-0.09/lib/Apache/iNcom/CartManager.pm | CC-MAIN-2015-35 | refinedweb | 1,005 | 66.94 |
#include <MThreadPool.h>
MThreadPool class. The thread pool is created with a number of threads equal to one less than the number of logical processors.
Initialize thread pool creation. No threads are created by this call.
Creates a new parallel region. All tasks created by createTask() will be added to this region. init() must be called to create the thread pool before calling this function.
Add a single task to the parallel region. The parallel region must already have been created by calling newParallelRegion.
Run all tasks in the present parallel region to completion. This function must be called from within a function invoked by the method newParallelRegion.
Release decreases the reference count on the thread pool. When the reference count reaches zero the thread pool is deleted. | http://download.autodesk.com/us/maya/2009help/API/class_m_thread_pool.html | crawl-003 | refinedweb | 128 | 69.99 |
In this tutorial we are going to talk about regular expressions and their implementation or usage in the Python programming language.
We’ll be covering the following topics in this tutorial:
What is RegEx?
Regular expressions in Python also write in short as RegEx, or you can also pronounce it as regX. In simple words, a regular expression is a sequence of characters that define the search pattern. We know that a sequence of characters is simply a string. A string that acts as a pattern for searching something in any given text can term as a regular expression.
what do we mean by a pattern?
It’s just a strategy that we use to identify text. So a pattern can mean something like three digits in a row or two alphabetic letters in a row or the letters BCA in sequence, or any number of whitespace characters in a row. It’s just a search pattern. It’s a strategy to identify text, and the applications in the real world are vast.
For example, we may need to pass a big chunk of text and find the nested email address within it. An email address has a particular pattern.
We have the @ sign in the middle, and then we have something before it and something afterward. Or we can, for example, be looking for a phone number. A phone number has a specific pattern as well. It’s a sequence of numbers. And usually, those numbers are separated by spaces or dashes or slashes or something like that. Or if we’re looking for something like a zip code within the United States, we can write a pattern to search for five digits in a row.
So regular expressions are just an internal language built into Python that allows us to identify and write out those strategies to help identify snippets of text within larger chunks of text.
To work with regular expressions will have to begin by importing a module from within the standard library called re. That is short for regular expressions.
If the pattern does not exist, we’re going to get a none object to represent nowness or nothingness. And if the pattern does match in the string that we pass in, we’re going to get a different type of object called a match.
Let’s take a look at both of those scenarios. First up, let’s pass in a string like candy.
import re pattern = re.compile("flower") print(type(pattern)) print(pattern.search("candy"))
So, again, Python and regular expressions is going to look for this combination of characters flower within this string of candy.
We’re going to see it’s going to be the none object whenever Python cannot find a match using the regular expression pattern and returns None.
So what I’m going to do below is I’m going to once again invoke the search method on my pattern object and I’m going to give it a string like flower power.
import re pattern = re.compile("flower") print(type(pattern)) print(pattern.search("candy")) match = pattern.search("flower power") print(type(match))
So now this combination of six characters that we specified in here is going to exist at some point in this string. So we’re going to get a match object right here on the right hand side.
Now, that match object is going to have some helpful methods to help us figure out where the match occurred.
For example, on my match object, I can call a method called group and group is going to return the actual string that’s matched.
import re pattern = re.compile("flower") print(type(pattern)) print(pattern.search("candy")) match = pattern.search("flower power") print(type(match)) print(match.group())
So within flower power with the pattern of flower, the pattern that was identified was flower.
Regular Expression in Python and Their Uses
Metacharacters
Metacharacters are characters which are interpreted in a particular way.
Metacharacter is a character with the specified meaning.
Metacharacter Description Example
[] Specifies set of characters to match. “[a-z]”
\ Treat meta characters as ordinary characters. “\r”
. Matches any single character except a newline. “Ja.v.”
^ Match the starting character of the string. “^Java”
$ Match ending character of the string. “point”
* Matches zero or more occurrence of the pattern left to it. “hello*”
+ Matches one or more occurrences of the pattern left to it. “hello+”
{} Match for a specific number of pattern occurrences in a string. “java{2}”
| Either/Or “java|point”
() Group various patterns.
Special Sequences
Special sequences are the sequences containing \ followed by one of the characters.
Character Description
\A Return a match if the pattern is at the start of the string.
\b Return a match if the pattern is at the beginning or end of a word.
\B Return a match if the pattern is present but not at the beginning or end of a word.
\d Return a match where the string contains digits.
\D Return a match where the string does not contain digits.
\s Return a match where the string contains a white space character.
\S Return a match where the string does not contain a white space character.
\w Return a match where the string contains any word character.
\W Return a match where the string does not contain any word character.
\Z Return a match if the pattern is at the end of the string.
Sets
A set is a group of characters given inside a pair of square brackets. It represents the special meaning.
SN Set Description
1 [arn] Returns a match if the string includes some defined characters in the sequence.
2 [a-n] Returns a match if the string contains any characters between a to n.
3 [^arn] Returns a match if the string includes the characters except a, r, and n.
4 [0123] Returns a match if the string includes any specified digits.
5 [0-9] Returns a match if the string is between 0 and 9 digits.
6 [0-5][0-9] Returns a match if the string is between 00 and 59 digits.
10 [a-zA-Z] Returns a match if there is some alphabet in the string (lower-case or upper-case).
Regular Expressions Methods in Python
1. let us suppose we are to find string for a particular match. So such for ape in the string.
import re # Search for ape in the string if re.search("ape","The ape was at the apex") print("There is an ape") Output: There is an ape
Now if we do this searching, we are finding that there is and if so, when this particular added or such return are true, then this respective message will get printed.
2. Next, we’re going to find all this function returns a list of matches.
import re # findall() return a list of matches # . is used to match only 1 character or space allApes = re.findall("ape.","The ape was at the apex") for i in allApes: print(i) Output: ape apex
So Dot it to match any one character.Dot Will is nothing but one wildcard character, which will be denoting any single character or espace.
3. Next, we are going for this finditer, which returns and iterator of matching objects and you spend to get the location.
theStr = "The ape was at the apex" for i in re.finditer("ape.",theStr): # Span returns a tuple locTuple = i.span() print(locTuple) # Slice the match out using the tuple values print(theStr[locTuple[0]:locTuple[1]]) Output: (4,8) ape (19,23) apex
4. Now Square brackets will match any one of the character between the brackets not including upper and lowercase varieties unless they are listed.
animalStr = Cat rat mat fat pat" allAnimals = re.findall("[crmfp]at",animalStr) for i in allAnimals: print(i) print() Output: rat mat fat pat
5. We can also allow for characters in a range.
animalStr = "Cat rat mat fat pat" someAnimals = re.findall("[c-mC-M]at",animalStr) for i in someAnimals: print(i) print() Output: Cat mat fat
6. Next Use ^ to denote any character but whatever characters are between the brackets.
animalStr = "Cat rat mat fat pat" someAnimals = re.findall("[^Cr]at", animalStr) for i in someAnimals: print(i) print() Output: mat fat pat
7. Replace maching items in a string
owlFood = "rat cat mat pat" # You can compile a regex into pattern objects which provide additional methods. regex = re.compile("[cr]at") # sub() replaces items that match the regex in the string with the 1st attribute string passed to sub owlFood = regex.sub("owl",owlFood) print(owlFood) Output: owl owl mat pat
8. Regex use the backslash to designate special characters and Python does the same inside strings which causes issues.Lets try to get “”\\stuff out of a string.
randStr = "Here is \\stuff" # This won't find it print("Find \\stuff : ",re.search("\\stuff", randStr)) #This does, but we have to put in 4 slashes which is messy print("Find \\stuff: ", research("\\\\stuff", randStr)) # You can get around this by using raw string which don't treat backslashes as special print("Find \\stuff: ", re.search(r"\\stuff", randStr)) Output Find \stuff: None Find \stuff: <_sre.SRE_Match object; span=(8,14), Find \stuff: <_sre.SRE_Match object; span=(8,14),
9. We saw that . matches any character, but what if we want to match a period. Backslash the period. You do the same with[,] and others
randStr= " F.B.I. I.R.S. CIA" print("Matches :", len(re.findall(".\..\..",randStr))) print("Matches :", re.findall(".\..\.."",randStr)) Matches : 2 Matches : ['F.B.I', 'I.R.S']
10. We can match many whitespace characters
randStr = """This is a long string that goes on for many lines""" print(randStr) #Remove newlines regex = re.compile("\n") randStr = regex.sub(" ", randStr) print(randStr) # You can also match # \b : backspace # \f : Form Feed # \r : Carriage Return # \t : Tab # \v : vertical Tab # You may need to remove \r\n on Windows Output : This is a long string that goes on for many lines This is a long string that goes on for many lines
import re # \d can be used instead of [0-9] # \D is the same as [^0-9] randStr = "12345" print("Matches :", len(re.findall("\d",randStr)))) Output: Matches : 5
12. You can match multiple digits by following the \d with {numOfValues}
#Match 5 numbers only if re.search("\d{5}","12345"): print("It is a zip code") # You can also match within a range. Match values that are between 5 and 7 digits. numStr = "123 12345 123456 1234567" print("Matches :", len(re.findall("\d{5,7}", numStr))) Output : It is a zip code Matches : 3 | https://ecomputernotes.com/python/regular-expression-in-python | CC-MAIN-2022-21 | refinedweb | 1,788 | 66.54 |
Hello,
Im hoping someone can help me. Im trying to write a program to determine if a word or phrase is palindrome or not, the same letters forward as backwards (bob, level, madam im adam). No matter what my keyboard input is, it always sets my "palindrome" flag to "1". I even added lines to verify my array was being read correctly. Even if the characters displayed are not the same, my if statement goes as true and sets the flag high. I have modified and tested it many times, and Im sure the problem is the If statement that compares the individual characters. Im sure it is something I have messed up, but I cant seem to find what it is. The code is pasted below. Any help would be much appreciated.
Code:
// project created on 11/18/2009 at 4:20 PM
#include <stdio.h>
#include <string.h>
main()
{
{
//The Following Lines Are Declarations And Initializations //
int x, first, last, palindrome = 1;
char str[100];
//The Following Lines Are User Input Statements//
printf("Enter A Word Or Phrase To Check Whether It Is A Palindrome Or Not- ");
printf("No Spaces Or Punctuation Please. ");
scanf("%s",str);
//The Following Lines Are Declarations and Initializations//
first = 0;
x = strlen(str);
last = x - 1;
//The Following Two Lines Were Added To Assist Me In Debugging//
//This Proves The Values Were Read Correctly From The Array//
printf ("The Beginning First Character Is %c. ",str[first]);
printf ("The Beginning Last Character Is %c. ",str[last]);
//The Following While Loop Sets The Loop For The String Length//
while (first <= last)
{
//The Following If/Else If Statement Evaluates Individual Characters In The str String//
if (str[last] == str[first])
first = (first + 1),last = (last - 1);
else
palindrome = 0, last = 0;
}
//The Following If/Else Statement Takes The Result From The Character Evaluation And Displays The Result//
if(palindrome = 0)
printf("This Is Not A Palindrome");
else if(palindrome = 1)
printf("This Is A Palindrome");
}
} | https://cboard.cprogramming.com/c-programming/121813-palindrome-problems-printable-thread.html | CC-MAIN-2017-22 | refinedweb | 329 | 67.59 |
std::num_get::do_get() cannot parse nan, infinity
-------------------------------------------------
Key: STDCXX-239
URL:
Project: C++ Standard Library
Type: New Feature
Components: 22. Localization
Versions: 4.1.2, 4.1.3
Environment: all
Reporter: Martin Sebor
Moved from the Rogue Wave bug tracking database:
****Created By: sebor @ Apr 04, 2000 07:13:59 PM****
The num_get<> facet's do_get() members fail to take the special strings [-]inf[inity]
and [-]nan into account. The facet reports an error when it encounters such strings. See 7.19.6.1
and 7.19.6.2 of C99 for a list of allowed strings.
The fix for this will not be trivial due to the messy implementation of the facets. It might
be easier just to rewrite them from scratch.
The testcase below demonstrates the incorrect behavior. Modified test case added as tests/regress/src/test_issue22564.cpp
- see p4 describe 22408.
$ g++ ... test.cpp
$ a.out 0 1 inf infinity nan INF INFINITY NAN
sscanf("0", "%lf") --> 0.000000
num_get<>::do_get("0", ...) --> 0.000000
sscanf("1", "%lf") --> 1.000000
num_get<>::do_get("1", ...) --> 1.000000
sscanf("inf", "%lf") --> inf
num_get<>::do_get("inf", ...) --> error
sscanf("infinity", "%lf") --> inf
num_get<>::do_get("infinity", ...) --> error
sscanf("nan", "%lf") --> nan
num_get<>::do_get("nan", ...) --> error
sscanf("INF", "%lf") --> inf
num_get<>::do_get("INF", ...) --> error
sscanf("INFINITY", "%lf") --> inf
num_get<>::do_get("INFINITY", ...) --> error
sscanf("NAN", "%lf") --> nan
num_get<>::do_get("NAN", ...) --> error
$ cat test.cpp
#include <iostream>
#include <locale>
#include <stdio.h>
#include <string.h>
using namespace std;
int main (int argc, const char *argv[])
{
num_get<char, const char*> nget;
for (int i = 1; i != argc; ++i) {
double x = 0, y = 0;
ios::iostate err = ios::goodbit;
nget.get (argv [i], argv [i] + strlen (argv [i]), cin, err, x);
if (1 != sscanf (argv [i], "%lf", &y))
printf ("sscanf(\"%s\", \"%%lf\") --> error\n", argv [i]);
else
printf ("sscanf(\"%s\", \"%%lf\") --> %f\n", argv [i], y);
if ((ios::failbit | ios::badbit) & err)
printf ("num_get<>::do_get(\"%s\", ...) --> error\n", argv [i]);
else
printf ("num_get<>::do_get(\"%s\", ...) --> %f\n", argv [i], x);
}
}
****Modified By: sebor @ Apr 09, 2000 09:31:49 PM****
Fixed with p4 describe 22544. Test case fixed with p4 describe 22545. Closed.
****Modified By: leroy @ Mar 30, 2001 03:09:11 PM****
Change 22544 by sebor@sebor_dev_killer on 2000/04/09 20:30:50
Added support for inf[inity] and nan[(n-char-sequence)] as described
in 7.19.6.1, p8 of C99.
nan(n-char-sequence) currently treated the same as nan due to poor
implementation of std::num_get<> and supporting classes - fix requires
at least a partial rewrite of the facet.
Resolves Onyx #22564 (and the duplicate #22601).
Affected files ...
... //stdlib2/dev/source/src/include/rw/numbrw#17 edit
... //stdlib2/dev/source/src/include/rw/numbrw.cc#12 edit
... //stdlib2/dev/source/vendor.cpp#17 edit
****Modified By: sebor @ Apr 03, 2001 08:46:50 PM****
It looks like this is actually not a bug and the fix is wrong (even as an extension). Here's
some background...
Subject: Is this a permissible extension?
Date: Thu, 8 Feb 2001 18:16:18 -0500 (EST)
From: Andrew Koenig <ark@research.att.com>
Reply-To: c++std-lib@research.att.com
To: C++ libraries mailing list
Message c++std-lib-8281
Suppose we execute
double x;
std::cin >> x;
at a point where the input stream contains
NaN
followed perhaps by other characters.
One might plausibly expect an implementation to set x to NaN
on an implementation that supports IEEE floating-point.
Surely the standard cannot mandate such behavior, because not
every implementation knows what NaN is. However, on an implementation
that does support NaN, is such behavior a permitted extension?
My first attempt at an answer is no, because if I track through the
standard, I find that the behavior of this statement is defined
as being identical to the behavior of strtod in c89, and that behavior
requires at least one digit in the input in order for the intput to
be valid. However, I might have missed something. Have I?
****Modified By: sebor @ Apr 03, 2001 08:48:03 PM****
Subject: Re: Is this a permissible extension?
Date: Fri, 09 Feb 2001 09:28:25 -0800
From: Matt Austern <austern@research.att.com>
Reply-To: c++std-lib@research.att.com
Organization: AT&T Labs - Research
References: 1 , 2
To: C++ libraries mailing list
Message c++std-lib-8284
Andrew Koenig wrote:
> Fred> In "C" locale, only decimal floating-point constants are valid.
> Fred> So, no NaN nor Infinity is allowed.
>
> Yes -- I was talking about the default locale.
Actually, I think that strtod isn't the important part, at least for
discussing C++. I think that this is an illegal extension in all
named locales.
First, let me explain why I said *named* locales. If you construct
a locale with locale("foo"), the way it works is that the locale is
built up out of _byname facets instead of base class facets. Except
that not all facets have _byname derived classes, so in some cases
you've still got the default behavior from the facet base class.
One of the facets that has no _byname variant is num_get<>. So if I
can construct an argument that the documented behavior of num_get<>
precludes this extension, I have also proved that this extension is
impossible in any named locale. This argument does not apply to
arbitrary locales, since an arbitrary locale may replace any base
class facet that with a facet that inherits from it.
OK, now the argument I promised, saying that num_get<> can't recognize
the character string "NaN".
22.2.2.1.2, paragraph 2: num_get's overloaded conversion function,
num_get::do_get(), works in three stages.
(1) It determines conversion specifiers. We're OK so far.
(2) It accumulates characters from a provided input character.
(3) It uses the conversion specifiers and the characters it has
accumulated to produce a number.
Stage 2 is the crucial one. it's described in 22.2.2.1.2/8-10, in
great detail.
For each character,
(a) We get it from a supplied input iterator.
(b) We look it up in a lookup table whose contents are prescribed
by the standard. (This has to do with wide characters, but there
is no exception for the special case where you're reading narrow
characters.)
(c) If a character is found in the lookup table, or if it's a decimal
point or a thousands sep, then it's checked to see if it can
legally appear in the number at that point. If so, we keep
acumulating characters.
The characters in the lookup table are "0123456789abcdefABCDEF+-".
Library issue 221 would amend that to "0123456789abcdefxABCDEFX+-".
"N" isn't present in the lookup table, so stage 2 of num_get<>::do_get()
is not permitted to read the character sequence "NaN".
If you want to argue that num_get<>::do_get() is overspecified, I
wouldn't disagree too violently.
--Matt
--
This message is automatically generated by JIRA.
-
If you think it was sent incorrectly contact one of the administrators:
-
For more information on JIRA, see: | http://mail-archives.apache.org/mod_mbox/stdcxx-dev/200606.mbox/%3C13481492.1151524530010.JavaMail.jira@brutus%3E | CC-MAIN-2016-22 | refinedweb | 1,174 | 66.94 |
GreenletExit not thrown when a greenlet is garbage-collected
Created originally on Bitbucket by antocuni (Antonio Cuni)
import greenlet import gc class A(object): def __del__(self): print 'A.__del__' def g1(): try: print 'g1: begin' a = A() print 'g1: switching' MAIN.switch() except greenlet.GreenletExit: print 'g1: GreenletExit' finally: print 'g1: finally' def main(): global MAIN MAIN = greenlet.getcurrent() greenlet.greenlet(g1).switch() gc.collect() main()
These are the results on CPython and PyPy;
A.__del__ is used just to show that the greenlet is actually garbage collected.
$ python green.py g1: begin g1: switching g1: GreenletExit g1: finally A.__del__ $ pypy green.py g1: begin g1: switching A.__del__
The following patch seems to be enough to fix the problem; however, since it’s a good practice to be very careful when dealing with greenlets and continuelets, I’d like some feeback before committing it:
diff --git a/lib_pypy/greenlet.py b/lib_pypy/greenlet.py --- a/lib_pypy/greenlet.py +++ b/lib_pypy/greenlet.py @@ -47,6 +47,9 @@ if parent is not None: self.parent = parent + def __del__(self): + self.throw() + def switch(self, *args, **kwds): "Switch execution to this greenlet, optionally passing the values " "given as argument(s). Returns the value passed when switching back."
To upload designs, you'll need to enable LFS and have an admin enable hashed storage. More information | https://foss.heptapod.net/pypy/pypy/-/issues/3162 | CC-MAIN-2021-10 | refinedweb | 225 | 50.33 |
Difference between revisions of "Implement a chat server"
Revision as of 05:43, 22 September 2017
Contents
Introduction
This page describes how to implement a simple chat server which can be connected to with telnet for basic chatting functionality. The server should support multiple connected users. Messages sent to the server are broadcast to all currently connected users. For this tutorial we'll use Network.Socket, which provides low-level bindings to the C-socket API.
(Socket, SockAddr) — this corresponds to a new socket object which can be used to send and receive data for a given connection. This socket object is then closed at the end of our
runConn method.
The
SockAddr, as you can see from the
runConn method, is largely uninteresting for this use-case and will simply be the initial socket address of 4242.
Using System.IO for sockets
Network.Socket incorrectly represents binary data in
send and
recv and, as a result, use of these functions is not advised and may lead to bugs.
Network.Socket actually recommends using these same methods defined in the ByteString module. However, to keep things simple, we'll stick to
System.IO for input and output.
Importing our new module and turning our
Socket into a
Handle now looks like the following:
-- in the imports our Main.hs add: import System.IO -- and we'll change our `runConn` function to look like: runConn :: (Socket, SockAddr) -> IO () runConn (sock, _) = do hdl <- socketToHandle sock ReadWriteMode hSetBuffering hdl NoBuffering hPutStrLn hdl "Hello!" hClose hdl
Concurrency
So far the server can only handle one connection at a time. This is
We do not need to explitely import the module because it is imported by
Control.Concurrent., fmap and fix. In short,
fmap allows us to elegantly lift a function over some structure, while
fix allows us to define a Monadic fixpoint.
-- at the top of Main.h <- fmap init (hGetLine hdl) broadcast line loop
Notice how
runConn, running in a separate thread from our main one, now forks another worker thread for sending messages to the connected user.
Cleanups and final code
There are two major problems left in the code. The first is the fact that the code has a memory leak because the original channel is never read by anyone. We can fix this by adding another thread just so that people have access to this channel.
The second issue is that we do not gracefully close our connections. This will require exception handling. Next we'll fix the first issue, handle the second case to a larger extend, and add the following cosmetic improvements:
- Make messages get echoed back to the user that sent them.
- Associate each connection with a name.
- Change
Msgto alias
(Int, String)for convience.
We'll import
Control.Exception and handle exceptions in our final code, below:
-- Main.hs, final code module Main where import Network.Socket import System.IO import Control.Exception import Control.Concurrent import Control.Concurrent.Chan import Control.Monad (liftM, when) import Control.Monad.Fix (fix) main :: IO () main = do sock <- socket AF_INET Stream 0 setSocketOption sock ReuseAddr 1 bind sock (SockAddrInet 4242 iNADDR_ANY) listen sock 2 chan <- newChan forkIO $ fix $ \loop -> do (_, msg) <- readChan chan loop mainLoop sock chan 0 type Msg = (Int, String) mainLoop :: Socket -> Chan Msg -> Int -> IO () mainLoop sock chan msgNum = do conn <- accept sock forkIO (runConn conn chan msgNum) mainLoop sock chan $! msgNum + 1 runConn :: (Socket, SockAddr) -> Chan Msg -> Int -> IO () runConn (sock, _) chan msgNum = do let broadcast msg = writeChan chan (msgNum, msg) hdl <- socketToHandle sock ReadWriteMode hSetBuffering hdl NoBuffering hPutStrLn hdl "Hi, what's your name?" name <- liftM init (hGetLine hdl) broadcast ("--> " ++ name ++ " entered chat.") hPutStrLn hdl ("Welcome, " ++ name ++ "!") commLine <- dupChan chan -- fork off a thread for reading from the duplicated channel reader <- forkIO $ fix $ \loop -> do (nextNum, line) <- readChan commLine when (msgNum /= nextNum) $ hPutStrLn hdl line loop handle (\(SomeException _) -> return ()) $ fix $ \loop -> do line <- liftM init (hGetLine hdl) case line of -- If an exception is caught, send a message and break the loop "quit" -> hPutStrLn hdl "Bye!" -- else, continue looping. _ -> broadcast (name ++ ": " ++ line) >> loop killThread reader -- kill after the loop ends broadcast ("<-- " ++ name ++ " left.") -- make a final broadcast hClose hdl -- close the handle
Run the server and connect with telnet
Now that we have a functional server, after building your executable and firing up the server we can start chatting! After running your server, connect to it with telnet like so:
$ telnet localhost 4242 Trying 127.0.0.1... Connected to localhost. Escape character is '^]'. Hi, what's your name?
Remember that to quit telnet, you need to
^] and run
quit after dropping into the telnet prompt.
Fire up two clients and have fun chatting! | https://wiki.haskell.org/index.php?title=Implement_a_chat_server&diff=prev&oldid=62132 | CC-MAIN-2020-50 | refinedweb | 785 | 65.52 |
>> Ben Zorn: So it's a great pleasure to introduce Harry Xu from Ohio State
University. Harry is a PhD student and he's going to graduate this year. So he's
here to talk about his PhD work.
Harry had some great internships at IBM research and some
-
-
two internships
sort of prompted his line of thinking around understanding how programs use
memory and the aspect of memory bloat which he's actually investigated over the
past few years with a number of really interesting publications.
Harry has rece
ived several awards, including an IBM Research fellowship, a
fellowship at the
--
or a distinguished paper award ICSE and also a departmental
fellowship at Ohio State University.
So with that, I'll introduce Harry. Thank you.
>> Guoqing (Harry) Xu:
Thank you. Thank you, Ben for introduction. So it's a
great pleasure here to talk about my research. So okay. So let's get started.
So I'm actually a program analysis person. I'm interested in both theoretical
foundations, program analysis and thi
s applications. So the important
applications that forms the basis of my PhD thesis is to use static and dynamic
program analysis techniques to help programmers find and remove what we call
runtime bloat in modern object
-
oriented software.
So this rese
arch here is motivated entirely by the really world problems that we
have regularly encountered and studied. So here I'm going to tell you the story.
All right. So as probably all of us here of already seen the past decade has
witnessed a tremendous i
ncrease in the size of software which now contains
more and more functionality and consume more and more resources in order to
solve increasingly important problems.
So this picture shows the growth of the total number of objects in the heap in the
larg
e Java server applications over the course of two years, between 2002 and
2004. So the total number of objects in the heap has increased, for example,
from less than half a million in the beginning all the way up to 30 million at the
end. Like more than
60 times over this two
-
year period.
So what happened to this application? Why does this application have
--
consume so much memory now? Because now we have this big pile
-
up thing.
So this picture shows the architecture of the SAP, the SAP netweaver a
pplication
server where each box in this picture represents a component of the server.
So this application server has millions of lines of code and needs about 20
components to work simultaneously in order to function. So one important thing
that can b
e seen from this picture is that this application server itself is a big
pile
-
up, right? It's built on top of layers, layers frameworks and libraries. So first
of all, this can be extremely easy for such a large scale framework intense
application to suf
fer from performance problems.
So for example suppose there exists small performance issue in one of
components here. Its effect can quickly get orders magnitude more significant,
when its components become nested in layers, right? And second, if this
application has a small performance problem, it will be extremely hard for their
developers to find why. Because the problem can easily cross many, many
layers of libraries and frameworks. And the library and frameworks can come
from different vendors.
In most cases their source code is not available. So as a
result significant performance degradation and scalability problems can be
regularly seen in large scale and real world applications.
All right. So a lot of evidence has been shown that, you k
now, most performance
problems in modern object oriented software can caused not by the lack of
hardware support but instead by the runtime redundancies or inefficiencies
during the execution. So what causes the general redundancies or inefficiencies
runt
ime bloat.
A typical example of bloat is to use a very complex heavyweight function to
achieve very simple task which should have been shipped in a much easier way.
Bloat has caused applications to consume increasing large memory space. So
for example
, a typical Java application Java heap has quickly grown from, you
know, for example 500 megabytes a few years ago to 2 to 3 gigabytes today. It's
very common. But it doesn't necessarily mean that we're now supporting more
users or functions. Right?
So here is a list of common bloat effects that we found in real world applications
that we have studied. So for example, a large scale application originally
designed to support millions of concurrent users can eventually scale only to
thousand users in p
ractice. And it was extremely hard for their developers to find
out the reason. So they gave it to IBM Research for performance tuning.
As another example the designer of a large
-
scale application initially expected
only two kilobytes memory for savin
g session state per user but eventually they
found 500K, you know, for saving session state per user, more than 250 times
larger than they initially expected.
So the consequence of a bloat is not just excessive memory usage, large
memory consumption usu
ally comes with the execution of lot of redundant
operations that can cause a significant slowdown of the application. So bloat can
have huge impact on scalability, power usage and performance of large scale
real world applications that form the backbone
of modern is enterprise computing
used every day by thousand businesses.
Bloat can also have a huge impact on mobile computing where most applications
have, you know, very restrict resource constraints. According to IBM Research,
millions of dollars ar
e spent every year by its customers. Memory like bloat
detection, like performance tuning and memory leak detection. So that's why
IBM Research is pushing very hard to develop useful tools that can help
programmers identify performance problems.
All r
ight. So what can we do? What can we do to deal with this every increasing
level of inefficiencies? The first thing that we can probably think of immediately is
to use compiler optimizations. Programmers, people with general, you know,
feeling is like
this. We don't need to worry about performance in object
orientation because we have this advanced compiler technology, we have this
garbage collector, right. Let's leave it entirely to the compilers and the runtime
systems because they are always smarte
r than ourselves. That's basically what
people thinking in the real world.
However, the usefulness of this compiler
--
traditional compiler technology is very
limited in optimizing bloat away because a general dataflow in large scale
application can ea
sily cross thousand method applications. Many, many layers
of libraries and even frameworks.
This is a very large scope, way beyond what a compiler analysis would inspect.
Of course optimizing a lot of real world problems may require a lot of develope
r
insight which the compiler doesn't have. On the other hand, it's not easy as well
for the human experts to perform many optimization, primarily because of a lack
of good tools that can help them make sense of large heaps with millions of
objects and lon
g executions that can last for hours and days.
So my entire thesis here is to find a better way that can identify larger
optimization opportunities with a small amount of developer time. How we do
that? The basic methodology is to advocate compiler
-
as
sisted manual tuning in
order to take advantage of both sides, the manual optimization side and the
compiler side. Specifically I've designed, implemented and evaluated static and
dynamic program analysis techniques that can first identify the root cause
of a
memory problem or performance problem and second remove automatically or
manually some kinds of bloat patterns that we, you know, identified with the
techniques. And third, prevent bloat from occurring in the early stage of software
development. So
this is actually three
-
step approach.
All right. So our techniques were implemented on a variety of different platforms.
So for example the dynamic analysis were implemented on, you know, JVM, you
know, the IBM J9 which is the commercial JVM of IBM.
Jikes RVM, which is
open source JVM within Java. And also used JVM TI which is like general tool
interface supported by all JVMs. And, you know, the [inaudible] analysis can be
used to generate online bloat warnings during the execution of the program.
The static analysis were completed on Soot, which is a popular program analysis
framework for Java. The static analysis can be used either to produce online
--
I
mean not online, to produce bloat warnings or to generate refactoring
discussions during co
ding when the developers write their code. So this really
demonstrates that our techniques are general enough and they are not limited by
any specific platform or framework.
All right. So this slide shows and overview of the set of techniques that I h
ave
developed for my thesis. For example for dynamic analysis, we start with
profiling the execution to identify certain bloat evidences like a lot of copies, large
cost benefit ratios, high memory leak confidences. They are all strong indicators
of bloa
t.
And then by analyzing the profiled activities we generate bloat or memory leak
For static analysis we analyze the bytecode of large scale applications to identify
certain bloat patterns that we have previously found using the dynamic analy
sis
in order to either generate bloat warnings or to transform the program to produce
optimized code.
So for example static analysis can be used to help programmers to identify
bloated containers. They can also be used to help programmers find and hois
t
loop invariant data structures.
So in addition to software bloat analysis, I've also contributed to some other
program analysis areas. For example I'm very interested in scalable points to
and dataflow analysis. I've done some work on control and da
taflow analysis for
AspectJ which is, you know, the major aspect of [inaudible] language. I also
have some experience with language
-
level checkpointing and the replaying.
In terms of research areas my work goes from very high level software
engineering
across programming languages, compilers, all the way down to
runtime systems. So this slide gives a detailed classification of the publications
in terms of research areas. So I actually have experience and expertise in these
three areas.
So now we're
done with the introduction part. Now let's get a little deeper into
technical problems and solutions. So here I talk about two specific program
analysis techniques to help programmers find and remove bloat. One dynamic
analysis and one static analysis.
The goal of the dynamic analysis is to find low
-
utility data structures to help
programmers do performance tuning. The second static analysis can be used to
help programmers find and hoist loop
-
invariant data structures.
So the goal
--
I mean the m
otivation of the second static analysis is based on a
lot of loop
-
invariant taught structures that we found using the first dynamic
analysis. That's how these two analysis are connected.
So eventually I'm going to talk about future work and conclusions
.
All right. So this is the first piece of work which was published in PLDI last
--
one
of my PLDI 2010 papers. The goal of this analysis is to identify
high
-
cost
-
low
-
benefit data structures that are likely to be performance
bottlenecks.
So how we
going to do that? We design a runtime analysis that computes a cost
measurement and a benefit measurement for each data structure in the heap.
And eventually we present to the user a list of data structures that are ranked
based on their cost
-
benefit rat
ios.
Intuitively data structures with high cost
-
benefit ratios are likely to be closely
related to performance problems. That's the motivation.
All right. So before getting deeper into the technical problems and the solutions,
let's first look at t
he
--
go through this motivation part. Yeah, sure?
>>: So you're talking about the cost but, in fact, the cost of extra memory isn't
--
I
mean there's sort of a subtle indirection in the sense that the cost of extra
memory isn't visible unless you do
this performance effect service, say for
example, you know, it's bigger than the cash, et cetera.
>> Guoqing (Harry) Xu: Oh, yes.
>>: So do you get to
--
when you're talking about cost, do you get to the actual
cost in terms of the impact on cycles
or do you actually
--
>> Guoqing (Harry) Xu: On cycles.
>>: Or is it just more memory is better [inaudible].
>> Guoqing (Harry) Xu: No, I mean
--
>>: [inaudible].
>> Guoqing (Harry) Xu: Right. So I think it's cost benefit is defined in te
rms of
like actual computation done, the model memory needed, used or something. So
it's like more about execution boat or
--
instead of memory bloat.
>>: Okay.
>> Guoqing (Harry) Xu: So
--
>>: Okay. So I'll let you go.
>> Guoqing (Harry) X
u: Yeah, yeah.
>>: [inaudible].
>> Guoqing (Harry) Xu: So okay. I mean this is actually
--
let's first go through
motivation. As we probably know
--
already know, runtime bloat can manifest
itself through many observable symptoms. So for exampl
e, a lot of temporary
objects that are really short lived objects, right, they're not long lived, a lot of pure
data copies without any computation done. And may be, you know, a lot of
highly
-
stale objects, meaning objects without any
--
I mean, that are
not used for
a long time. And many problematic containers. Maybe many, many other
systems that we're going to observe in the future. However, in any tuning task it
will probably be impossible for the developer to try all those different techniques,
all
those different
--
look at all those different symptoms to identify a performance
problem.
So that basically doesn't make sense. The immediate question to ask here is
that is there a common way or in other words more systematic way beyond all
those dif
ferent symptoms and techniques that can characterize different kind of
bloat even though they could exhibit different symptoms? As bloat is really about
runtime inefficiencies, we found that what is really common about different kinds
of bloat is that the
re exist operations that are very expensive, very hard to
execute but do not produce data values that have large benefit or impact on the
forward progress. In other words, the cost of those operations are out of line with
their benefits. This observation
actually motivates us of computing
--
to compute
this cost benefit measurements to help programmers do performance tuning.
So how are we going to do that? Let's first look at example. This example
shows a problem that we found from Eclipse framework
using our dynamic
analysis. So here is a method called package that takes a Java string as input
and eventually returns whether or not the string represents a valid Java
packaging name. Right? The way this method is implemented is that it first calls
an
other method called directory list in order to identify and return a list of all files
in this current directory represented by S here. And then it checks whether or not
this return list equals null. Equals null.
So it's easy to see that there's a muc
h easier way or smarter way to implement
this method, right? So for example we can directly parse this incoming stream
into two parts like Java and IO and check directly whether or not they correspond
to valid Java directory names, right? So we don't nec
essarily have to do
--
find
all the files in directory.
So in this case, for example, this, you know, big list data structure pointing to by
this variable packs has the highest cost
-
benefit ratios because intuitively there's a
lot of effort made to iden
tify those files and, you know, populate this entire list.
But eventually none of the elements in the list are used for any purpose. So the
cost
-
benefit ratio is the highest.
So for this example, I mean, this example actually shows a typical problem w
ith
the current object
-
oriented programming practice. So for example, the
programmers are encouraged to pay more attention to high
-
level abstractions
like modularity, reuse readability without considering performance.
So for example in this case, think
about why the programmer wants to do this,
why the programmer wants to implement this method in this way. The only
reason that I can see here is that the programmer just wants to reuse the
implementation of this method directory list instead of creating
a specialized
simplified version for it, right?
However, they're never aware that this piece of code can be executed for millions
of times and when this list becomes large then it really hurts performance. In
fact, by simply creating a specialized vers
ion for this method we're able to reduce
the running time for this application by eight percent, which is very impressive.
All right. Now let's go back to the definitions. So what is cost? Yeah, sure.
>>: [inaudible] eight percent easily apparent
in your profile or was it hidden by its
[inaudible] too much of the cost being [inaudible] and stuff like that?
>> Guoqing (Harry) Xu: Well, I think
--
generally I think it's the algorithm
problem.
>>: Yeah, but if someone looked and decided I car
e about the performance of
this program, I mean look at the profile, wouldn't they have been
--
>> Guoqing (Harry) Xu: Identified the problem by looking at the simple profiles?
>>: At least would he have been told to look at his package or would it
have
been spread out by mostly secondary GC costs or something like that?
>> Guoqing (Harry) Xu: Well, we don't have sort of a detailed profiles of the
--
you know, like what percentage of the eight percent comes from the GC and
what percentage comes f
rom [inaudible].
>>: [inaudible] if you just used a very simple technique for just, you know,
profiling the amount of time spent in each method.
>> Guoqing (Harry) Xu: Oh, okay.
>>: Wouldn't this directory list pop up?
>> Guoqing (Harry) Xu:
Well, I've
--
actually if you look at the G
--
and the simple
profiles of the
--
like the running time profiles of the large scale application like
Eclipse there's
--
I'll give you very specific example. So if you look at a large
scale applications like
we have profile, we have did
--
we have done this profiling
for this large
-
scale applications. We found that the most frequently executed
method for this big application is the method called hash map that I'll get.
So it turns out I mean methods like t
his can never be the most executed method
--
most frequently executed method.
>>: [inaudible].
[brief talking over].
>>: Would it have shown up as eight percent if you would have seen oh, eight
percent is for this little tiny package thing I'm go
ing to go look at that?
>> Guoqing (Harry) Xu: No. I mean, the profile cannot
--
I mean, you look at the
running time profiles it never show like eight percent for this method. Right?
Other than
--
>>: [inaudible].
>> Guoqing (Harry) Xu: Beca
use of cost, yeah.
>>: GC costs associated
--
>> Guoqing (Harry) Xu: There are a lot of different things going
--
yeah. There's
no
--
yeah. So that's probably the answer, right.
>>: Thank you. Of.
>> Guoqing (Harry) Xu: Okay. Yes. Yeah.
Yeah. Okay. Now, let's go back to
the cost benefit thing. So what is cost and what is benefit? So the absolute cost
of a heap value here is defined as the total number of instructions executed to
produce a value. So each instruction here has a three
address representation
like A equals B equals B plus C and each instruction here is considered to have
unit cost, right? So that part is a really easy to understand.
And then so it's a little tricky to define what the benefit is. As the benefit of a
v
alue is really related to how good consumption of that value is, right? So there's
no clear metric for that. So here I'm going to give you some intuitive definitions of
benefit. Later I'll talk more about the formal definitions.
So intuitively benefi
t of a value is defined a very large number if this value, this
heap value goes to program output like socket or file system because, you know,
it's actually used for any purpose
--
for some purpose.
Of course benefit is zero if this value, heap value i
s never used, you know. The
third case here is the most common case where this heap value is used to
produce another value, another heap value, V prime. In this case, the benefit of
V is defined as amount of work done to transform V into V prime.
So t
here is some kind of relationship between the cost of V prime and the benefit
of V. So again, these are just intuitive definitions. I'll give you more
--
you know,
more formal definitions later during the talk.
So first let's look at the cost computat
ion. How can we compute a cost? A
natural idea to compute a cost for a program like this will be to capture a dynamic
dependence graph, you know, that looks like this one here, right, where each
edge add should represent a data dependence relationship be
tween a pair of
instructions when writing to a memory location and the other reading from the
same location. Right?
So for example, suppose now we want to compute the cost of D here. It can be
computed efficiently by traversing backward this dependenc
e graph and counting
the total number of reachable nodes which represent the total number of
instructions executed, which is actually four in this case. So we cause this
problem of backward dynamic flow problem because in order to solve the
problem we nee
d to record and traverse backward some kind of a history
information of the execution. So this is a backward problem as opposed to a
forward problem. Right?
There are actually many, many other backward dynamic flow problems. This is
just one of them.
If you are interested in other problems you can look at the
paper or we can talk offline.
So in general the only way to solve this backward dynamic flow problem is to use
dynamic slicing that captures a dynamic dependence graph that look like this.
S
o for example
--
I mean dynamic slicing is actually a general dynamic technique
that needs to record all memory accesses and
--
during the execution and their
dependence relationship, right? So it's easy to see that dynamic slicing is
prohibitively expens
ive for large scale and long
-
run applications because the
trace that it generates, dynamic slicing generates is unbounded. It depends
completely on the dynamic behavior of the program. So it's prohibitively
expensive.
However, I mean in order to achie
ve efficiency and make our analysis more
scalable, we propose a new technique here called abstract dynamic slicing that
performs dynamic slicing over bounded abstract domains.
So the motivation of this new technique is as follows: If we look at a speci
fic
client analysis that uses the result of the slicing algorithm, the trace generated by
dynamic slicing usually provides many more details than this
--
you know, this
specific client would possibly need. So a natural question to ask here is that is it
p
ossible to let the slicing algorithm be aware of this client analysis so that it can
capture only part of the execution that is relevant only to the client? In other
words, we wonder whether or not it's possible to customize the client analysis
--
the sli
cing algorithm with the semantics of this client analysis.
The answer is yes. And we do this by looking at abstractions. So we found that
for many backward dynamic flow problems equivalence classes exist. So here is
some example that shows a fragment
of program trace that contains different
runtime instances of the same instruction A equals B.M. Each instruction
instance here is annotated with an integer I that represents the Ith execution of
that instruction. So given the client analysis, a specifi
c client analysis it would be
possible for us to divide those equivalence
--
those runtime instruction instances
into two equivalence classes like E1 and E2. So later it will be sufficient for the
client analysis to look only at those equivalence classes
E1, E2 instead of looking
at individual runtime instruction instance.
So in this way, we can potentially record only one runtime instruction instance
per equivalence class as is representative that can potentially lead to a
--
you
know, a significant re
duction in the amount of memory needed, right?
It's easy to see here that the dependence graph computed this way is an abstract
dependence graph where each node represents an equivalence class and each
edge connects two equivalence classes as long as th
eir exists a dependence
relationship between two runtime instruction instances aggregated into that
equivalence classes, right?
So we've actually generalized this idea and defined a more general theoretical
framework that can be instantiated to solve ma
ny, many other backward dynamic
flow problems. This talk focus only on high
-
level ideas. So if you're interested
--
yeah, sure.
>>: So how do you define equivalence classes?
>> Guoqing (Harry) Xu: Well, I mean, that's, you know, user has to provi
de some
kind of annotation like the designer of the analysis gave the semantics of
analysis so that our analysis, our profiling framework takes analysis
--
takes the
semantics as input and defines those equivalence classes.
>>: [inaudible] one example?
>> Guoqing (Harry) Xu: Yes, I'll give you a specific example, sure. Of course.
So what was your question now?
>>: The same thing.
>> Guoqing (Harry) Xu: Okay. Sure.
>>: [inaudible] example [inaudible].
>> Guoqing (Harry) Xu: Of cours
e. Of course. Here is example. So let's look at
the
--
you know, go back to cost computing. How do we use abstract dynamic
slicing to compute cost for a specific runtime value? So recall that the absolute
cost for a runtime value is defined as the tot
al number of instructions executed,
right, to produce the value.
So here, for example, the absolute cost for a specific runtime instruction instance
like this one, A equals B plus C annotated with 20 can be computed by traversing
backward this concrete
dependence graph and counting the total number of
reachable nodes which represent total number of instruction executed, right?
That's basically for absolute cost for the concrete dependent graph.
However, as we already know, concrete dependence graph i
s very expensive to
compute. In addition it's very hard for the programmer to make sense of specific
runtime instruction instance because the static instruction can potentially have
millions of runtime instances. So it doesn't make sense for the programm
er to let
the program make sense of the cost for specific one.
To solve the problem, though, instead of computing this absolute cost for a
specific runtime instruction instance, we're proposed to compute abstract cost for
an equivalence class of instanc
es. So for example, the abstract cost for an
equivalence class like this one, A equals B plus C with a
--
annotated with E0 can
be computed by traversing backward this abstract dependence graph and
calculating the sum of the sizes of the reachable equival
ence classes. Right, the
size of a equivalence class is essentially the total number of instruction instances
aggregated in that equivalence class.
So in this way, we can usually find that, you know, first of all, abstract
dependence graph is much easi
er to compute and, second, it's easier for the
programmer to make sense of it because the abstract cost for an equivalence
class is essentially the absolute cost for several runtime instances aggregated
based on some kind of abstractions, right?
Now the
question is what is the proper abstraction that we can use for computing
cost here? That's basically you guy's question, all right. All right. So here we
use calling context to define equivalence classes. In other words, those E102,
E47, E0 are object
-
sensitivity
-
based calling context, specifically each of the E
here is a chain of receiver object allocation site for the call site on the call stack.
In this way it will be natural for us to aggregate runtime instruction instances
based on object
-
oriente
d data structures.
So if you're interested in object sensitivity we can definitely talk offline. That's
basically the general idea. Okay?
>>: [inaudible].
>> Guoqing (Harry) Xu: Yes.
>>: So if the instruction is in a loop you count it only
once for that calling context
even if it's [inaudible] a million times [inaudible].
>> Guoqing (Harry) Xu: Right. Right. I mean, we only consider the method cost
like the chain of the object.
>>: [inaudible] the cost can still be very skewed away
from the actual cost?
>> Guoqing (Harry) Xu: Oh, yeah, they're still like different, yeah, sure.
>>: [inaudible].
>> Guoqing (Harry) Xu: This is not very accurate. I mean it's like an
approximation.
>>: [inaudible] approximation, I mean, i
t is always under approximated? It could
be over approximated, too, right?
>> Guoqing (Harry) Xu: I think it's always over approximation.
>>: I just gave you an example it's under approximated, right? If it's a loop
--
if
it's an instruction insi
de a loop you will count it only once based on the calling
context even though I execute it a million times because I go around the loop a
million times.
>> Guoqing (Harry) Xu: You go around the loop a million times for the
aggregate. I mean we
--
the
cost is actually the frequency, right. Consider the
frequency of that.
>>: [inaudible] count, you will count the number of times I execute the
--
>>: The trick here is he wants to limit the number of the size of the dynamic
--
>>: I understand.
>> Guoqing (Harry) Xu: Yeah.
>>: Is that right? So he's going to have
--
you're iterating a million times over
like a million objects, say, but the call stack remains the same. So you're not
--
>> Guoqing (Harry) Xu: The equivalence class
--
>>: So you're not create
--
>>: Okay. So you have one equivalence class and then you have a count
--
>> Guoqing (Harry) Xu: Yeah, yeah. Right, right, sure. It's considered
frequency actually.
>>: So the static analysis case you would use so
me estimate of [inaudible] get
real numbers for
--
>> Guoqing (Harry) Xu: This is dynamic analysis, so it's not static analysis.
>>: [inaudible] try to do this statically [inaudible].
>> Guoqing (Harry) Xu: [inaudible] statically, that's a little
--
I don't know. That's
not question for me. I don't
--
>>: You take this information and [inaudible] the static but you don't try to do it
statically?
>> Guoqing (Harry) Xu: No, no. This is completely a runtime analysis.
All right. So so fa
r all costs that we have talked about are cumulative costs that
measures that effort made from the very beginning of the execution to produce a
runtime value. So however we found that cumulative cost is now useful in
helping programmers understand perform
ance problems because it's almost
certain that value produced later during the execution has the higher cost than
the value produced earlier in the execution, right? There does exist a strong
correlation between a high cost an a performance problem.
So
in order to solve the problem, we propose to compute relative cost instead of,
you know, cumulative cost. So the relative cost for a runtime value like a heap
location, heap value, is defined as the amount of work done on the stack that
transforms values
read from other existing heap locations in order to produce this
value.
I show this by example. Let's consider this picture where boxes represent
objects, edges represent dataflow. So it's FG, HG represent opposite fields.
Suppose now we want to com
pute the cost for O3. If we want to compute its
cumulative cost we only need to consider. So we basically need to consider this
amount of work done and pretty much all the work done from the beginning in
order to produce the value as written in to O3, ri
ght? All the work.
However, if you want to compute the relative cost, we're only to consider this
amount of work done on the stack that transform values read from other existing
heap locations like O1, O2, in order to produce the values in to O3. So t
his is
actually the fundamental difference between relative cost and cumulative cost.
Completely symmetric to relative cost, relative benefit for heap value is defined
as the amount of work done on the stack that transform this value in order to
produce v
alues written into other heap locations.
Let's consider again this example. So for example now we want to compute
benefit, relative benefit for O3. We're only to consider this amount of work done
on stack that transform O3 into other heap locations li
ke O4 and O5. So it's clear
to see that relative benefit for heap location is determined by both the frequency
and the complexity of the use of their value. Right? So eventually, you know,
the cost of benefits computed by individual
--
for individual he
ap locations are
aggregated based on object structures in order to produce cost and benefits for
high
-
level data structures.
So this slide reviews some of the key problems and challenges and ideas in this
worker. So for example solve the first problem,
dynamic slicing is too expensive,
we propose to use a new feedback called abstract dynamic slicing that performs
dynamic slicing over bounded abstract domains.
To solve the second problem, which is how to abstract instances for OO data
structures we pro
posed to use object
-
sensitivity
-
based calling context as
abstraction.
To solve the third problem, which is cumulative cost is not correlated with
performance problems, we proposed to use relative cost instead of cumulative
cost.
So combining all the
three insights we eventually compute relative abstract cost
and the relative abstract benefit and use this cost
-
benefit ratio as an indicator to
performance problem.
So this analysis was implemented in IBM J9 virtual machine. And we performed
case stud
ies on the real world large
-
scale applications. So, in fact, all those
applications here except bloat have millions live code. This picture shows the
running time reductions that we have achieved after removing the problem that
we found using this dynami
c analysis cost benefit analysis.
So for example for bloat, there are 35 percent of running time reduction that we
have achieved after removing the problems. This is actually the very first
dynamic analysis targeting general bloat. All existing work t
arget different kind of
symptoms like symptom base upload. This is actually the only piece of a word
that targets
--
>>: Go back to the
--
>> Guoqing (Harry) Xu: Sure.
>>: The original example with the is package and explain how it relates to th
is
analysis? So what
--
how would the
--
yeah, how do you compute the relative
cost
--
sorry. Sorry, man.
>> Guoqing (Harry) Xu: All the way back.
>>: So what ends up being the relevant cost of whatever packs and relative
benefit
--
>> Guoqing
(Harry) Xu: Well, so first of all consider the role of benefit. The
benefit is really easy to compute in this case because none of the elements in the
heap locations, the heap values in this list are used for any purpose. Right? So
they're never used.
Because the only
--
you only use this reference value. You
never retrieve the heap values from this list. Right? So the benefit for this entire
list is zero.
>>: Not quite zero.
>>: Yeah, it can't be zero. Well, I mean, zero is
--
[brief tal
king over].
>> Guoqing (Harry) Xu: Yeah, for data memory, right, exactly.
>>: Equals whatever [inaudible].
>> Guoqing (Harry) Xu: Equals. I mean, this is a reference
--
I mean this is a
pointer value. This is not like the heap
--
the value retr
ieved from the heap
locations, right? So there is a large benefit
--
there's a large cost associated with
the list because, you know, you have
--
you compute all those files. You
populate the list. So every data member in the list has a large cost assoc
iated
with it, right? So to do all the computations in order to produce values and then
write them into that heap location, the list location, right. In this way, you know,
it's clear to see that there's a large cost but there's no benefit. I mean, very
little
benefit. So you get a large
--
>>: You can go on. I just
--
I'm a little
--
some benefit because you have to test
against string. Is it [inaudible] is a test then that gets encoded in the result, right,
if they match.
>> Guoqing (Harry) Xu
: Yeah, sure, sure. Yeah. We can definitely talk offline. I
mean some subtle issues here. Let's go all the way forward.
All right. So
--
>>: So can you just back up
--
>> Guoqing (Harry) Xu: Yeah, sure.
>>: So the bloat, so the bloat is
a constructed example, or is that [inaudible].
>> Guoqing (Harry) Xu: No.
>>: [inaudible].
>> Guoqing (Harry) Xu: Well, bloat is program knowledge framework written by
Purdue University like many years ago. A few
--
large Java program analysis
f
ramework.
>>: Why is it called bloat? I don't know actually. It was the name of the
application. It was clustered in the [inaudible] benchmark [inaudible].
>>: Interesting [inaudible].
>> Guoqing (Harry) Xu: Yeah. That many years ago. So tha
t's basically
--
>>: [inaudible].
>>: Yeah, so did I. Okay. Thanks.
>> Guoqing (Harry) Xu: Well, the reason I mean why we can't find this such a
large running time reduction is because bloat is pretty much written by graduate
students. So the
quality of the code is really
--
I mean really poor.
>>: Was it one issue that you found in this, or were there multiple issues?
>> Guoqing (Harry) Xu: There were multiple issues. Yeah.
>>: So there's not one that you could
--
>> Guoqing (H
arry) Xu: There's no, no. I can give you specific example later
offline. Yeah.
All right. So this is actually the very first dynamic analysis targeting general flow
--
general bloat. In addition in this work we have identified a lot of interesting
bloat patterns that can be regularly observed during the execution of large
-
scale
applications.
So a further step would naturally be to develop a static analysis that can identify
and remove such patterns in a source code so that the programmers can avo
id
such small performance problems during development, during coding, right, in
order
--
and before these problems really pile up and become significant.
So, in fact, some of the interesting bloat patterns that we found in this work have
already left to
the development of new analysis and the tools. So for example
give you specific examples. We found a lot of interesting like, you know,
container inefficiencies. So we develop a new static analysis that uses
context
-
free
-
language reachability formulati
on to help programmers identify
underutilized containers and overpopulated containers. That work was published
in another PLDI 2011 paper.
And we found there were a lot of loop
-
invariant data structures in this work. So
we used a type and effect syste
m to help programmers identify and hoist the
loop
-
invariant data structures. Some other patterns that we found in this work
include problematic implementations of a certain design patterns and anonymous
classes.
So actually the kind of work that can do
immediately will be to develop static
analysis to deal with like three months work.
All right. So now we're getting to the third part of the talk which is actually the
static analysis that can be used to help programmers identify and hoist
loop
-
invari
ant data structures. The motivation of this particular analysis is based
on the observation that in large scale applications there are a lot of places where
objects with the same content in the loop get created in many, many times by
different loop iterat
ions and all their instances are exactly the same. And it really
hurts performance in many cases. So by pulling those objects out of loops, in
other words, hoisting those objects in the semantics preserving way we can
potentially save a lot of computatio
n as well as garbage collection effort. Right?
So that's basically the motivation.
Let's first look at example. All right. So this piece of code showed the problem in
multiple applications. Not only one application like multiple applications that
many we stupid in IBM TJ Watson Research Center.
So the loop here either is over a string array called date. Date here is a string
array. In order to parse each of the string in this array into a date object, right.
So the way this loop is implemente
d is that it creates a simple data format object
per iteration of the loop. And they use this object to parse each of the strings.
So it's easy to see that you know this entire data structure reachable from this
OSDF which represents this allocation site
gets created many, many, many
times, right, by loop iterations. You know, so and all their instances are exactly
the same.
In many cases creating an object using this new keyword in Java is much more
than allocating the space for the object. It can i
nvolve a lot of heavyweight
operations to initialize a big data structure like this one here and create a lot of
other objects like this O1 and O2, right? So specifically for this case creating one
simple data format requires to load many resource models
from disk which can
involve a lot of very slow disk IO operations. So it's very
--
so it's perfectly okay
for us to put all of the loop and use only one simple data format object
--
data
structure to parse all the incoming strings. Right. Yeah, sure. Y
eah.
>>: You said it's perfectly okay. That depends on a lot of stuff.
>> Guoqing (Harry) Xu: Oh, yeah, sure. Yeah. Of course. We have a
--
yeah.
Definitely go to
--
I mean, there is five or six different checks like very specific
checks over
there. So it's not that simple. Not as simple as what I said.
>>: So in this particular case it's the case that parse treats SDF as a read only
object. Are there other mutating
--
>> Guoqing (Harry) Xu: No.
>>: Operations
--
>> Guoqing (Harr
y) Xu: No. Our static analysis
--
our static analysis can make
sure that
--
>>: On the class, on the entire class is there any way to mutate a simple data
format?
>> Guoqing (Harry) Xu: Out of the entire class? What's your question?
>>: [inau
dible] simple data formats that's constructed there, I mean
--
>> Guoqing (Harry) Xu: Yes. It's immutable, right?
>>: I don't know.
>> Guoqing (Harry) Xu: I think so. Yeah, of course. I mean
--
no, no, no. I think
that the only thing that is
not immutable is you have to load sort of the current
data from the
--
like a date and whatever
--
the date and the time format from the
local computer to initialize this
--
lot of resource models loaded.
>>: Once it's constructed
--
>> Guoqing (Har
ry) Xu: Yeah, once it's constructed it's immutable, of course.
>>: So could this be solved at the library level by having like a static like factory
method that yielded a singleton object that was only
--
>> Guoqing (Harry) Xu: Of course you can de
finitely do that, yeah. There's a
possible way of doing that, right. Yes. That's definitely one possible way of
doing this. Optimizing this case.
>>: So is it often
--
I'll talk to you later.
>> Guoqing (Harry) Xu: Well, okay.
>>: Okay?
>> Guoqing (Harry) Xu: Yeah. Sometimes for library designers they're not
careful enough to consider these kind of complex cases. All right. So it's clear to
see that the challenge, the biggest challenge in our work here is that we're one
hoist a big da
ta structure out of the loop instead of one single instruction or one
single object. That is the fundamental difference between our work here and the
traditional compiler loop optimization work.
So given the difficulty of hoisting a big data structure
out of the loop we divide
this technical problem into two subproblems, one focused on the data site, one
focused on the call site.
So the first solved problem here focused on hoistable data structure. So which
means that first of all, for each object c
reated in the loop we need to identify this
entire data structure, big data structure that is reachable from this object. And
then we check whether or not all fields in this big data structure are loop invariant
without considering any actual call signal
that access this data structure. So if
you'll fields in this data structure are loop invariant, we call this data structure a
hoistable data structure.
Back to the example our static analysis can make sure that this entire data
structure reachable from
OSDF is a hoistable data structure because no field in
any instance of this data structure can contain iteration specific value that can
change across iterations. Our static analysis can make sure of that. And once
we have a set of hoistable data struct
ures identified, the second step here tries to
hoist the actual call signal that access those data structures. So the key idea
here is that for each hoistable data structure we check each call site that is
invoked on this data structure and see if they're
hoistable. If this data structure is
--
I mean if this call site is indeed hoistable, we just pull it up.
Let's go back to the example. Once we make sure that this entire data structure
is reachable from OSDF, it's hoistable data structure, we check
each call site that
is invoked on each object in this data structure. So first of all we first check this
allocation site because call site, right, it's a structural call. And in this case the
allocation site itself is completely hoistable so we just pul
l it out of the loop like
this, and then we check the second call, which is the call to match the parts.
However, the second call here is not hoistable because the argument date here
contains iteration specific value, right, that can change it across iter
ations. Our
static analysis can identify that.
So there's no way to hoist this second statement, I mean the call site. So this is
actually what the call looks like eventually after our analysis of performance
hoisting. So it's important to know that,
you know, this analysis is interpretable
compiler analysis. It's not the source to source translation or and if some form of
refactoring. So this is completely a compiler analysis.
All right. So let's first look at this first, you know, technical pr
oblem, subproblem,
which is to identify
--
which is focus
--
which focuses on the data site which hard
to identify hoistable data structures. So we found there are three challenges to
identify hoistable data structure. So the first challenge here is to u
nderstand how
a data structure is built up. So for example, for each object creating the loop we
need to identify what other objects are reachable from this object. This requires
us to [inaudible] all points
-
to relationships. Right? This is very simple
.
Straightforward.
The second challenge here is to understand where the data comes from. So for
example we have to make sure that no field in the hoistable data structure can
contain iteration specific value. That can change across iterations. That
requires
us to reason about dependence relationships.
And the third challenge here is to understand in which iterations the objects are
created. So for example a data structure, a hoistable data structure cannot
contain objects that can be created in d
ifferent iterations, otherwise there's no
way to hoist it. To understand this information we propose to compute iteration
count abstraction for each allocation site that can have three abstract values, 0,
1, and the bottom. 0 here means that this allocat
ion site must be outside the
loop. In other words, all instances created by this allocation site must exist
before the loop starts.
The second case here is if the ICA for the allocation site is 1, it's guaranteed that
this allocation site is inside the
loop and no instance the lifetime for any instance
of the allocation site must be within the iteration where its instance gets created.
In other words, no instance of the allocation site can escape the iteration where
it's created to later iterations.
And the third case here is bottom, which means that, you know, some instance
--
the allocation site must be inside the loop and some instance of this allocation
site might escape situation where it's created to later iterations and may actually
be used by
those later iterations.
So it's clear to see that we're interested only in data structures where the ICAs
for all their objects are 1, right, because it's guaranteed that, you know, all
objects any instance of the data structure must be created in one
single iteration
and all die as they end all iteration.
So the ICAs for objects are computed
--
can be computed by general technique
called abstract interpretation. So this talk focus only on those high
-
level analysis
ideas. If you're interested in th
is low
-
level analysis details we can definitely talk
offline or you can read the paper. There are four page forms and a proof
[inaudible].
All right. Sure.
>>: So you came up loaded with an example which you create an object that's
immutable and al
so presumably the constructor depended functionally on
--
>> Guoqing (Harry) Xu: Right.
>>: On things that weren't modified in the loop.
>> Guoqing (Harry) Xu: Correct.
>>: But you have a much more ambitious analysis here that's trying to ide
ntify
--
that would handle objects that are immutable but not mutated within the loop.
>> Guoqing (Harry) Xu: It's immutable but not mutated in the loop.
>>: I mean you decide defining the concept of loop invariant.
>> Guoqing (Harry) Xu: Yeah.
>>: And there's nothing that requires the object to be deeply
--
>> Guoqing (Harry) Xu: Oh, yeah, sure.
>>: Immutable.
>> Guoqing (Harry) Xu: I mean, we have this dependence analysis to identify
this immutable fields. So it's immutable but n
ot mutated in the loop.
>>: No.
>> Guoqing (Harry) Xu: Right?
>>: Mutated.
>> Guoqing (Harry) Xu: Yeah, it's not mutated.
>>: So you could have a less ambitious analysis that would identify just
completely immutable after construction o
bject beyond a class basis.
>> Guoqing (Harry) Xu: Right.
>>: And it wouldn't matter what happened in the loop?
>> Guoqing (Harry) Xu: Uh
-
huh. Oh, yeah, sure. I definitely understand. Our
analysis is more ambitious in terms of your
--
like t
o identify
--
>>: You find actual cases in which
--
>> Guoqing (Harry) Xu: The class is immutable but it's immutable but is not
mutated in the loop.
>>: Where you get extra benefit over just the plain
--
>> Guoqing (Harry) Xu: Yeah, sure, sure
, of course. Yeah. Yeah. All right. So
how do we identify these hoistable data structures? We combine
--
I mean the
interesting idea here is we combine the three abstractions in a powerful way by
annotating points to and dependence relationships with
ICAs.
So here's an example. I don't know I have time to talk about this, but
--
so how
much time do I have, Ben?
>> Ben Zorn: We have the room until 12.
>> Guoqing (Harry) Xu: Oh, okay. Yeah. Okay. I'll continue. So let's first look
at exam
ple. So this very simple example. It contains one loop and the four
objects. So let's first look at the ICA for each object here. So the ICA for
example for O1 here is zero because the allocation site is outside loop, right?
That's very easy.
ICA f
or O2 is one because no instance of this allocation site can escape the
iteration where it's created to later iterations, right? This is the same case for O3.
But for O4 the ICA is bottom because some instance of this allocation site might
escape iterati
on, right? The current iteration where it's created to later iterations.
And it may actually be used in those later iterations, you know, for example here.
So basically 1, O, O, bottom are the ICAs for these four allocation site. It's pretty
easy to
understand. This picture shows the annotated point 2 graph where each
node represent an allocation site, an object, and each edge represent an
annotated points
-
to relationship, which contains a field name which FGH are field
and a pair of ICAs for the two
nodes, two objects connected by the edge.
So by [inaudible] this points relationship like this annotated points graph we can
easily conclude that this entire data structure reachable from O2 is not hoistable
because, you know, the ICA for O4 here is bo
ttom. Because there's
--
so there's
no
--
you know, there doesn't exist any way to hoist the
--
this entire data
structure.
This picture here shows the end [inaudible] dependence relationship where each
node represent either a heap location or a static
location. And each edge
represent a dependence relationship annotated with a pair of ICAs for the two
news connected by the edge. The ICA for heap location like O3 value is actually
the ICA for the object O3 which is a one in this case. The ICA for a s
tack
variable is actually determined by where the stack variable is declared. If the
variable is declared outside the loop, its ICA is zero. If the variable is declared
inside the loop, the ICA is one. So the ICA for stack variable can never be
bottom b
ecause in Java any variable declared in the loop must be initialized
every time the loop iterates, right? So, you know, for example from this case we
can easily see that this location O3 double value depends on a stack variable
that can depend on itself w
hich indicates that this heap location can contain
iteration specific value that can change across iterations.
So any data structure that contains this location is not hoistable. So basically.
We found there are four interesting properties in a hois
table data structure.
Missing any of the properties here can make a data structure not hoistable. So
the four interesting properties are disjointness, confinement, non escaping loop
invariant. So I don't think I'm
--
I will have time to go through all t
he details. I'm
probably going to skip those details. So if you are interested, we can definitely
talk offline.
So there's one important thing I can definitely report here, that is the verification
of those four important properties can be easily done
by performing checks on
the annotated points
-
to and the dependence graph. That's the only thing that
you need to know to understand this piece of work. Okay?
All right. So once we have a set of hoistable data structures identified it's really
easy f
or us to identify hoistable calls. You know, the data structure
--
the actual
[inaudible] that access [inaudible]. This is actually the second step, right? I
mean, the most important part of this work is the first step. As long as you
understand the fi
rst part it's really easy to understand the second part.
So the key idea again is to
--
for each hoistable data structure identified which
have each call site that is invoked on the data structure and see if it's hoistable.
This process can involve, yo
u know, like one, two, three, four, five, six, six
different checks, control dependence checks, argument checks, external
environment checks, exception throwing, thread, control flow checks. As long as
all those checks are satisfied this particular call s
ite can be hoisted.
So I'm not going to talk about all those details as they're very specific checks and
they are defined clearly in the paper. So we can definitely talk offline if you're
interested.
So that's pretty much about this automatic work.
The sound and automatic
transformation. However, we found that this completely sound and automatic
transformation is not quite effective in hoisting real world data structures. There
are basically two reasons here. First of all the real world usage of
Java data
structures is very complex and any static analysis has to be over conservative to
achieve 50, right? And the second reason here is that hoisting a lot of real world
data structures requires developer insight.
So for example consider a data st
ructure with 100 fields. If there's only one field
that is not loop invariant, this can make this entire data structure not hoistable.
However, if this information is given to the developer, the developer might have a
way of hoisting data structure. Rig
ht? So for example the developer might be
able to split the data structure into a hoistable part and non hoistable part, right,
and pull out only the hoistable part.
So in order to make our analysis more practical and help programmers perform
this kind
of manual hoisting, we proposed to compute hoistability measurements
for each data structure in the loop that indicates how likely is this data structure
can be manually hoisted.
So for example we consider metrics like data hoistability and code hoista
bility.
I'm not going to explain all those details again. They're defined in the paper. So
we can definitely talk offline. So but there's one important thing which is the
computation of those hoistability measurements fits nicely, very nicely with the
original analysis that we proposed for transformation.
In other words, we don't need any new analysis to compute those
measurements. They're computed automatically with the transformation, the
actual transformation.
So the analysis was implemented u
sing the Soot 2.3 framework and evaluated
on set of large
--
19 large Java applications. In those 19 large Java applications
a total 981 loop data structures are considered in which we found 155 data
structures are completely hoistable data structures.
However, our completely sound and automatic transformation was able to hoist
only four data structures in three programs, completely automatically because of
the two reasons that I mentioned.
So here's a list of the data structures
--
of the running ti
me that we have achieved
after hoisting those data structures completely automatically. From those
numbers we can easily see that there's a large optimization opportunity out there,
but only small portion of it has been captured by this completely automat
ic
transformation. To explore the rest, we do manual hoisting with the help of
hoistability measurements. Yeah?
>>: [inaudible] Java C by 10 percent?
>> Guoqing (Harry) Xu: Yes.
>>: By hoisting something out of the loop.
>> Guoqing (Harry)
Xu: Java C no
--
yes. Yes.
>>: So what's the
--
so what was the data structure that gave you that benefit?
>> Guoqing (Harry) Xu: I don't know actually. This is a completely automatic
analysis. And Java C is now open source. So I saw
--
you k
now, used the
analysis to produce a new byte code, a new version of the byte code and run the
new version. And it causes 11
percent running time reduction. I don't know what
was actually going on there.
>>: Okay.
>> Guoqing (Harry) Xu: So
--
>>
: So you actually
--
>> Guoqing (Harry) Xu: This is not open source. This
--
you know, you can't get
the source code for this application. Java C application. So. But we do have
open source
--
I mean do have sources, source code for some other appl
ications.
So we studied five large Java applications by inspecting top 10 data structures
ranked based on hoistability measurements. We've actually achieved much
larger performance improvements than this completely automatic transformation.
For example
for applications like PS, which is a
--
which is a postscript
interpreter, we're able to make the application run more than six times faster after
hoisting only a few data structures in this core components.
And as another example, for applications like
xalan, which is a real world XML
transformer processor there's 10 percent of running time reduction after we
hoisted only one XML transformer object of the loop. So this sort of a
performance bug has been conformed by the DaCapo development team. So
the
re's one
--
I mean for this case we hoisted only one XML transformer object.
>>: [inaudible] did in PS [inaudible].
>> Guoqing (Harry) Xu: Yes. Yes.
>>: What was hoisted?
>> Guoqing (Harry) Xu: We found there's
--
the problem is actually wi
th the
usage of the data structure called the stack. The programmer seemed to
understand
--
I mean they don't understand the stack is a subcause of the list.
For every operation they want to do with stack is they use push and pop. So
they keep pushing t
he popping step from the stack. They don't understand the
stack is a public class of the list so they can directly use something like get to get
through retrieve a specific element. That means you know they use
--
>>: [inaudible] so was that a change
in the code beyond just pulling it out of a
loop or what was
--
>> Guoqing (Harry) Xu: Yeah. There's
--
yes. There's something more complex
going on there. Because there's not, you know, just hoisting the data structure.
But we found this problem by
identifying the hoistable data structures. So we
found there's no problems that we found in this work have ever been reported
before in previous work.
All right. So we're getting to the final part which is future work and conclusions.
Of course I wi
ll continue this line of work on software bloat analysis because I
think it's wide open area that has so many interesting problems, challenges and
potential research opportunities.
We're actually very
--
one of the very few academic groups that are doin
g
fundamental research on this real world problem. Why do I think this is a wide
open area? Because if you look at deeper into the cause of the bloat, you will
see that the methodology of object
-
orientation itself encourages certain level of
excess or bl
oat. For example the programmers are encouraged to basically do
whatever they want to do to do favor reuse, modularity or readability leaving
performance entirely to the compilers and the runtime systems.
However, if we want to allocate, to explicitly
consider performance, this can have
impact on almost the entire software development cycle, right? So this picture
shows the kind of working that I'm planning to do in the future. Program we start
with detecting performance problems. So once we find, yo
u know, the root
cause of the problems, we can either do manual tuning or we can classify
interesting bloat patterns.
Once we have a set of interesting bloat patterns identified, we can do a lot of
different things. We can develop a static transformati
on that can automatically
remove such patterns in the source code. We can develop a self
-
adjusting
systems as part of the feedback directory of the compilation that can remove
such problems online as part of the JVM, right? Or like the COR or the runtime
in the Microsoft setting.
We can allocate performance conscious software engineering that can require a
whole lot of new things, new design principles, new modeling tools, new testing
analysis tools, new compilers parse and so on, so forth. And we can
also use a
compiler to synthesize a bloat
-
free implementation given a set of performance
specifications.
So there are a lot of interesting things we can do in the future. In addition I'm
also considering to leverage existing techniques from other fiel
ds such as
systems and architecture in order to deal with this every increasing level of, you
know, excess or inefficiencies.
Of course also look into other program analysis
--
actively look into other
program analysis areas. So definitely one of the t
hings is to adapt existing
techniques to solve MS
--
Microsoft specific problems like, you know, for the
existing techniques like all the existing techniques are implemented within JVM
so is there
--
where does the
--
what are the problems that are specifi
c to
Microsoft products, is there a possible way to adapt those techniques into the
CLR, you know, within this Microsoft family of languages? Is that possible? I
don't know. So definitely it's interesting to see.
There are some other interesting thin
gs that I'm planning to do in the future like
parallelism is one of the most interesting things. Still like optimizations,
data
-
based compilation. Model checking, testing, debugging, security improves,
compilers for high
-
performance computing, data local
ity. I mean the bottom line
here is that I'm open for collaboration with any researcher whose research deals
with static and dynamic program analysis. That's pretty much my goal.
So the conclusion here, I'm a program analysis person. I'm interested b
oth in
theoretical
--
static and dynamic analysis and both the principles and applications
of the program analysis.
My dissertation here deals with important application which is software bloat
analysis that contains both dynamic analysis that can help
programmers make
more sense of heaps and executions and static analysis that can help
programmers remove and prevent bloat.
So with a set of techniques that I have developed I hope that we can lower the
bar for performance tuning. In other words, with
automatically
--
automatic tool
supports I hope that tuning is no longer a daunting task. I also hope to educate
the programmers, developers in the real world, especially the OO programmers
and raise the awareness of this bloat problems in the real world
object
-
oriented
programming.
So for example developers should really be aware that performance is important.
They should do everything possible during development to avoid small
performance problems before they pile up and become significant.
So thi
s is because compilers and runtime systems are not always smarter than
human experts. Thank you very much. I'm ready to take questions.
[applause].
>> Ben Zorn: Questions?
>>: I have one question for you.
>> Guoqing (Harry) Xu: Yes, sir.
>>: In all the work you talked about, you didn't really talk at all about the
hardware or the cache, you know, sort of memory card key as a source of
performance. Are you
--
is that something that you intentionally didn't
investigate, are you interesting
in [inaudible] how does your work relate to that?
>> Guoqing (Harry) Xu: Well, actually I didn't get any chance to investigate that.
I'm interested in doing that, definitely. So yeah, I mean, for example the cost
benefit thing can be
--
the current
definition of cost benefit is defined in terms of
computation, right? The amount of work done to produce their value is can
naturally be extended to work for like the cache and other things. I don't know
how to do that, but I think it's
--
it's doable.
Yes, I mean the answer to the question is that I haven't done any work specific to
the cache in the memory hierarchy thing. So all the work that I have done are
related to program analysis. That's the only thing that I have done.
>> Ben Zorn: Any o
ther questions?
>> Guoqing (Harry) Xu: Yeah?
>>: For the first
--
the first part of
--
>> Guoqing (Harry) Xu: Yeah?
>>: You gave an example in which there were a series of writes that were never
read. And you had a considerably more ambitio
us framework for measuring the
cost benefit.
>> Guoqing (Harry) Xu: Right.
>>: And did you quantify
--
I mean, you could add a simpler one that would just
find
--
identify writes that were never read.
>> Guoqing (Harry) Xu: Oh, yeah, sure.
>>
: And can you quantify how much extra benefit the
--
extra [inaudible].
>> Guoqing (Harry) Xu: Oh, yeah, sure, sure, sure. I understand your problem
--
your question. Well I mean we do find sort of more complex cases where the
benefit part is not jus
t like no use. So I'll give a very specific example. We found
that for lot of data structures in large scale applications there are just purely
copied from one source to the other source. Like if you consider this large scale
Web based applications they
have
--
you know, different components may have
different representation of the same piece of data, right? So in order to be
transmitted the for example for J2EE applications, in order to be able to be
transmitted on the network they have to be wrapped i
nto this soap sort of
protocol.
So some piece of data keeps being wrapping
--
being wrapped, you know, in
order to be transmitted between, you know, different components. So our
analysis can perfectly find places like that. So for example a piece of d
ata is, you
know, is wrapped in this application
--
in this component but is unwrapped in that
without any computation done. So
--
and then there's definitely a lot of
optimization opportunities out there. So we did find a lot of cases, more complex
case
s than the simply like no use. Yeah. Yeah.
>> Ben Zorn: Okay.
>> Guoqing (Harry) Xu: Thank you.
[applause].
>> Guoqing (Harry) Xu: Thank you very much.
Enter the password to open this PDF file:
File name:
-
File size:
-
Title:
-
Author:
-
Subject:
-
Keywords:
-
Creation Date:
-
Modification Date:
-
Creator:
-
PDF Producer:
-
PDF Version:
-
Page Count:
-
Preparing document for printing…
0%
Log in to post a comment | https://www.techylib.com/en/view/redlemonbalm/ben_zorn_so_its_a_great_pleasure_to_introduce_harry_xu_from_ohio | CC-MAIN-2018-26 | refinedweb | 11,773 | 62.58 |
#include <wx/spinctrl.h>
wxSpinCtrl combines wxTextCtrl and wxSpinButton in one control.
This class supports the following styles:
wxEVT_TEXT_ENTERevents. Using this style will prevent the user from using the Enter key for dialog navigation (e.g. activating the default button in the dialog) under MSW.
The following event handler macros redirect the events to member function handlers 'func' with prototypes like:
Event macros for events emitted by this class:
You may also use the wxSpinButton event macros, however the corresponding events will not be generated under all platforms. Finally, if the user modifies the text in the edit part of the spin control directly, the EVT_TEXT is generated, like for the wxTextCtrl. When the use enters text into the text area, the text is not validated until the control loses focus (e.g. by using the TAB key). The value is then adjusted to the range and a wxSpinEvent sent then if the value is different from the last value sent.
Default constructor.
Constructor, creating and showing a spin control.
If value is non-empty, it will be shown in the text entry part of the control and if it has numeric value, the initial numeric value of the control, as returned by GetValue() will also be determined by it instead of by initial. Hence, it only makes sense to specify initial if value is an empty string or is not convertible to a number, otherwise initial is simply ignored and the number specified by value is used.
Creation function called by the spin control constructor.
See wxSpinCtrl() for details.
Gets maximal allowable value.
Gets minimal allowable value.
Gets the value of the spin control.
Sets the base to use for the numbers in this control.
Currently the only supported values are 10 (which is the default) and 16.
Changing the base allows the user to enter the numbers in the specified base, e.g. with "0x" prefix for hexadecimal numbers, and also displays the numbers in the specified base when they are changed using the spin control arrows.
Sets range of allowable values.
Notice that calling this method may change the value of the control if it's not inside the new valid range, e.g. it will become minVal if it is less than it now. However no
wxEVT_SPINCTRL event is generated, even if it the value does change.
Select the text in the text part of the control between positions from (inclusive) and to (exclusive).
This is similar to wxTextCtrl::SetSelection().
Sets the value of the spin control.
It is recommended to use the overload taking an integerRL events. | http://docs.wxwidgets.org/trunk/classwx_spin_ctrl.html | CC-MAIN-2014-42 | refinedweb | 432 | 65.12 |
Odoo Help
Odoo is the world's easiest all-in-one management software. It includes hundreds of business apps:
CRM | e-Commerce | Accounting | Inventory | PoS | Project management | MRP | etc.
How to strip (remove white spaces) inputs of forms before adding to database?
I want to strip the input data to remove leading and trailing spaces.
I tried this, but didn't work...
class Test(models.Model):
_name = 'test.mymodel'
device_id = fields.Char(string="ID", required=True).strip()
and this too didn't work...
class Test(models.Model):
_name = 'test.mymodel'
device_id_strp = fields.Char(string="ID", required=True)
device_id = device_id_strp.strip()
please help me with a solution...
I don't know if this solution is the most efficient, but you could add an onchange method to the field.
In this onchange, strip the white spaces.
edit:
@api.onchange("device_id")
def onchange_device_id(self):
self.device_id = self.device_id.strip()
hello Jerome... isn't there a way to do this in the back end?
+1, this is the behaviour that should probably be used. And Rizan when you trigger an on_change it will trigger the Python which is your back-end in essence. Another way is to remove all whitespaces the moment you hit the save button.
It is backend if you declare a method in your model. Not a javascript onchange :) in your model, do something like: @api.onchange("device_id") def onchange_device_id(self): self.device_id = self.device_id.strip()
About This Community
Odoo Training Center
Access to our E-learning platform and experience all Odoo Apps through learning videos, exercises and Quizz.Test it now | https://www.odoo.com/forum/help-1/question/how-to-strip-remove-white-spaces-inputs-of-forms-before-adding-to-database-93488 | CC-MAIN-2017-17 | refinedweb | 261 | 70.9 |
Advertising
p858snake <p858sn...@gmail.com> changed: What |Removed |Added ---------------------------------------------------------------------------- CC| |p858sn...@gmail.com Component|Search |General/Unknown --- Comment #1 from p858snake <p858sn...@gmail.com> 2010-07-31 05:27:56 UTC --- WP: is a name space alias of Wikipedia:, and RFA is a page in the Wikipedia namespace with a simple redirect, I'm pretty sure that it could/would break things if we did that due to the possibility of that sub-page having a redirect somewhere else. Example: (Moving into General/Unknown, It's not directly a search issue) -- Configure bugmail: ------- You are receiving this mail because: ------- You are the assignee for the bug. You are on the CC list for the bug. _______________________________________________ Wikibugs-l mailing list Wikibugs-l@lists.wikimedia.org | https://www.mail-archive.com/wikibugs-l@lists.wikimedia.org/msg45319.html | CC-MAIN-2018-22 | refinedweb | 125 | 56.55 |
I wanna show the lines in a file, let the user decide which line should be deleted and then write all lines back to the file, except the one the user wants to delete.
This is what I tried so far, but I'm kinda stuck.
def delete_result():
text_file = open('minigolf.txt', 'r')
zork = 0
for line in text_file:
zork = zork + 1
print zork, line
delete_player = raw_input ("Who's result do you want to delete?")
text_file.close()
def delete_result():
text_file = open('minigolf.txt', 'r')
for line in text_file:
if ';' in line:
line2 = line.split(";")
print line2
print "***"
delete = raw_input ("Who's result do you want to delete? ")
text_file.close()
Sara;37;32;47;
Johan;44;29;34;
Kalle;33;34;34;
Oskar;23;47;45;
This will solve your issue and give you a more robust way of handling user input:
def delete_result(): with open('minigolf.txt', 'r') as f: text_file = f.readlines() # strip newlines from endings text_file = [t[:-1] if t[-1] == '\n' else t for t in text_file] users = set() for line_number, line in enumerate(text_file): print line_number + 1, line users.add(line[:line.index(';')].lower()) print(line[:line.index(';')].lower()) # get result from user with exception handling result = None while not result: delete_player = raw_input('Which user do you want to delete? ') try: result = str(delete_player).lower() assert result in users except ValueError: print('Sorry, I couldn\'t parse that user.') except AssertionError: print('Sorry, I couldn\'t find that user.') result = None # write new file new_file = [t + '\n' for t in text_file if t[:t.index(';')].lower() != result] with open('minigolf.txt', 'w') as f: f.writelines(new_file) if __name__ == '__main__': delete_result()
EDIT: I saw that you wanted to wanted to delete by name, not line number, so changed it to resemble @danidee's method. | https://codedump.io/share/H9f1hOWqUwqM/1/delete-line-in-file | CC-MAIN-2018-22 | refinedweb | 298 | 66.03 |
> Hello, > > I'd be concerned about (s1 != s2). Depending on how efficiently this > compiles, could not branch prediction make this faster for match vs. not > match, etc?. I'd be worried about all the ways (and future ways) compilers > might help us and introduce time differences. I was avoiding suggesting new conditionals for that reason, but didn't see the one already there. Good find. > > I'd feel most comfortable with the time delay, but why not stick to complete > artithmetic? I agree. But I think you've inverted the return value (strcmp returns 0 on perfect match). > > > int i; > int acc = 0; > > for(i=0;i<MAX_LEN;i++,s1++,s2++) > { > acc |= (*s1 ^ *s2); > > if (*s1 == 0) > break; > } > > return (acc == 0); > > > Also, these strcmp functions don't properly return < or >. Just = / !=. > However, my context being so new is quite limited. > > > Darron > > > > _______________________________________________ > Grub-devel mailing list > address@hidden > > | http://lists.gnu.org/archive/html/grub-devel/2009-11/msg00129.html | CC-MAIN-2013-48 | refinedweb | 148 | 66.44 |
Variables).
Google Tag Manager provides a set of predefined variables in each Web or Mobile App container that you create. With these variables, you’re able to create the most commonly needed tags and triggers. However, you can create additional variables to suit your specific requirements.
Note: Built-in variables are a special category of variables that are pre-configured by Google Tag Manager. They replace the variables that used to be generated when creating a new container. Once a selection of built-in variables have been enabled, you can use them just like any other type of variable. Built-in variables cover many of the basic and common variables such as page URL, referrer, click ID, random numbers, or events. Learn more about built-in variables..
Variable Types for Web
1st party cookie: The value is set to the 1st party cookie with the matching name for domain that the user is currently on. In the case that a cookie with same name is deployed on multiple paths or multiple levels of domain within the same domain, the first value will be chosen. This is the same is if you had called document.cookie from within a page and chosen the first result.
Built-In Variables: These are a special set of commonly-used, pre-created, non-customizable variables that you can select to make available to Google Tag Manager. Learn more.
Constant String: The value is set to the string you provide. Since this string will always be the same, and is simply the string that you provide here, the Constant String variable type is limited in usefulness. However, if you want to set a standard company name across your site, for example, you could define it as a Constant String type variable. This would allow you to easily update the string in Google Tag Manager and see it reflected across all the tags that use this variable.
Container Version Number: When the container is in preview mode, the container version variable returns the container's preview version number. Otherwise, this variable returns the container's live version number.
Custom JavaScript: The value is set to the result of a JavaScript function. The JavaScript must take the form of an anonymous function that returns a value. For example, you could write a custom JavaScript variable called "lowerUrl" that operates on the predefined {{url}} variable:
function () {
return {{url}}.toLowerCase();
}
Data Layer Variable: The value is set to ‘value’ when the following code on your website is executed:
dataLayer.push({'Data Layer Name': 'value'})
You can specify, in Google Tag Manager, how dots ('.') are to be interpreted in the data layer variable name:
- Version 1: allow dots in key names. For example, for
dataLayer.push('a.b.c': 'value'), interpret the name of the key as "a.b.c" (i.e.
{'a.b.c': 'value'}).
- Version 2: interpret dots as nested values. For example, interpret
dataLayer.push({'a.b.c': 'value'})as three nested levels:
{a: {b: {c: 'value'}}}. This allows you to read nested values; you could set the variable name to 'a.b' and it would return the object
{c: 'value'}(according to standard JavaScript rules). Nested pushing also allows you to directly edit nested values, so executing:
dataLayer.push({'a.b.c': 'value'});
dataLayer.push({'a.b.d': 4});
on your page would result in a dataLayer that looks like
{a: {b: {c: 'value', d: 4}}}.
Debug Mode: The value is set to true if the container is being viewed in debug mode.
DOM Element: The value is set to the text of the DOM (Document Object Model) element or the value of the specified DOM element attribute..
If the optional attribute name is set, the variable's value will return the value specified from that attribute (e.g.
data-food="cupcakes"); otherwise, the variable's value will be the text within the DOM element.
HTTP Referrer: The value is set to the HTTP referrer, the previous page that the person visited. For example, if a person navigates to one of your product pages from the home page, the referrer will be the home page. An instance of this variable type is automatically created by Google Tag Manager but you can create additional instances if you would like to have expose different part(s) of the referrer URL..
Lookup Table: The value is set according to the instructions in the lookup table. The lookup table contains two columns (Table empty to illustrate how data is used later):
The Lookup Table type allows you to create a variable for which the value varies according to the value in another variable. This is useful if your website is set up in such a way that the appropriate value (for example, a conversion tracking ID) can be mapped to the URL or another aspect of the page. In this example, a variable named Conversion ID is being created. If the URL is “/thanks/buy1.html”, the value is set to “12345”; if the URL is “thanks/buy2.html”, the value is set to “34567”. There is no limit to the number of rows in the lookup table. Fields are case sensitive.
Random number: The value is set to a random number between 0 and 2147483647.
Built-in variables: These variables are populated automatically by Google Tag Manager when certain on-page events occur such as link click, element click, form submit, etc. Learn more.
URL: This type of variable allows you to parse and expose URL components. Google Tag Manager automatically creates 3 instances of this variable type (full url, hostname and path). You can create additional instances to expose different parts of the URL. The URL components you can choose from are: Protocol, Hostname, Port, Path, Query, and Fragment. The input value set for variables of this type is the url of the current page the user is on (retrieved from document.location). By adjusting the URL Source setting, it is possible to tell Google Tag Manager to use another variable as the source of the url value.
Variable Types for Mobile Apps
Application Name: The value is set to the currently running application. A predefined variable of this type is provided in mobile app containers; you don't need to define a new variable of this type. On Android, this variable is the package name. On iOS, this variable is the CFBundleName (short name of the bundle).
Application Version: The value is set to the version of the currently running application. A predefined variable of this type is provided in mobile app containers; you don't need to define a new variable of this type.
Constant String: The value is set to the string you provide.
Device Name: The value is set to the device name of the currently running application (e.g., "Samsung Android", "Android SDK built for x86"). A predefined variable of this type is provided in mobile app containers; you don't need to define a new variable of this type.
Function Call: The value is set to the return value of a call to a pre-registered function. To learn more, refer to the SDK documentation (Android or iOS).
Language: The value is set to the two letter language code representing the user-set device language. A predefined variable of this type is provided in mobile app containers; you don't need to define a new variable of this type.
Operating System Version: The value is set to the version of the operating system in which the application is installed.
Platform: The value is set to the platform of the currently running application (one of "Android" or "iOS"). A predefined variable of this type is provided in mobile app containers; you don't need to define a new variable of this type.
Random Number: The value is set to a random number between 0 and 2147483647.
Screen Resolution: The value is set to the screen resolution of the device of the currently running application. The format is "width x height", e.g., "1024x768". A predefined variable of this type is provided in mobile app containers; you don't need to define a new variable of this type.
SDK Version: The value is set to the SDK version of the operating system in which the application is installed. A predefined variable of this type is provided in mobile app containers; you don't need to define a new variable of this type.
Value Collection: This variable contains a set of key-value pairs expressed in JSON format. You use a Value Collection to set the configuration values for your application. For a race driving game app, for example, you might define an "App settings" variable of type Value Collection with the following:
{ "max-fuel": 200, "starting-fuel": 100, "fuel-burn-rate": 20 }
Your mobile app can retrieve a value in the variable by providing the key. For example:
public class MainActivity { // Add your public container ID. private static final String CONTAINER_ID = "XXX-YYY"; // Container configuration value keys, used later // for retrieving values. private static final String MAX_FUEL_KEY = "max-fuel"; private static final String INIT_FUEL_KEY = "init-fuel"; private static final String FUEL_BURN_KEY = "fuel-burn-rate"; // Rest of your onCreate code. } } /* * Method to update game configuration values using a * Google Tag Manager container. */ public void updateConfigurationFromContainer(Container container) { // Get reference to the container. Container container = mFutureContainer.get(); // Update game settings using Container // configuration value keys. maxFuel = mContainer.getDoubleValue(MAX_FUEL_KEY); startingFuel = mContainer.getDoubleValue(INIT_FUEL_KEY); burnRate = mContainer.getDoubleValue(FUEL_BURN_KEY); }
A Value Collection variable has triggers associated with it. For configuration values that apply to all instances and versions of your app, set the enabling trigger to the predefined Always. Refer to the developer documentation (Android or iOS) for details on how to use the Value Collection variable. | https://support.google.com/tagmanager/answer/6106899?hl=en&ref_topic=2574304&rd=1 | CC-MAIN-2015-48 | refinedweb | 1,628 | 55.34 |
Let's create a functioning web-based dashboard for Docker!
In this article, we are going to use a few different technologies together to build something which, after a bit more elaboration, might actually be useful! We will be creating a web-based dashboard for a Docker installation using a number of different frameworks and technologies, both front-end and server-side, enabling some administrator to monitor running containers, start and stop existing containers, and create new containers based on existing Docker images. There is a wide scope for elaboration here, of course, but I'll leave that as an exercise for you, the reader. Hopefully this article will set you off on the right foot with a good overview of the relevant technologies, enabling you to add even more value to the product!
The app
This is a quick preview of what the app looks like when it's finished. It's essentially a page that displays two lists of Docker containers; those that are currently running, and those that are stopped. It allows the user to start and stop these containers, as well as start a new container from an existing image by clicking the 'New container' button.
The code
If you want to explore the finished product as a reference (finished as far as the article is concerned!) then you can fork the code on Github.com.
Technology stack
Let's have a look at exactly what we're going to be using, and why. I'll go through the prerequisites and installation requirements in a bit.
- Node: We will use this to write our server-side code in JavaScript to run it on our machine, and serve up our website to our users.
- Docker: This uses container technology to reliably run apps and services on a machine. The app interfaces with the Docker daemon through the Docker Remote API. More on this later.
- TypeScript: This allows us to add type safety to JavaScript and allows us to use modern JavaScript syntax in older browsers.
- React: Allows us to write the front-end of our application in isolated components in an immutable, state-driven way, mixing Html with JavaScript.
- Socket.io: Provides us with a way to communicate in real-time with the server and other clients using WebSocket technology, gracefully degrading on older browsers.
Peppered amongst the main technologies mentioned above are various libraries which also provide a lot of value during development time:
- ExpressJS: Used to serve our web application.
- Webpack 2: To transpile our TypeScript assests into normal JavaScript.
- Bootstrap: To provide something decent looking - a problem I know all of us programmers endure!
There are a few more minor ones, but I will cover those as we come to them.
Prerequisites
Docker
As this is going to be a slick-looking dashboard for Docker, we need to make sure we have Docker installed (if you don't already).
Head to docker.com and download the latest version of the client for your operating system. If you've never heard of or used Docker before, don't worry about it too much, but it might be worth following through their getting started tutorial for Mac or Windows or Linux.
To make sure your Docker installation is up and running, open up a command prompt and type:
docker -v. You should see some version information repeated back to you; mine says
Docker version 1.12.5, build 7392c3b. If you can't see this or you get an error, follow through the installation docs again carefully to see if you missed anything.
Keep the command prompt open - you're going to need it!
A note about the Docker Toolbox: The article was written assuming that you have the Docker native tools installed. If you happen to have the older Docker Toolbox installed then the Docker API may not work for you straight out of the box. If you're in this situation, you may need to perform some additional steps to enable the API with Docker Toolbox.
Many thanks to reader Rick Wolff for pointing this out!
NodeJS
To write our app and serve the web interface to the user, we're going to use NodeJS. This has a number of libraries and frameworks which will make the job very easy for us.
Node, version 6.3.1 was used to build the demo app for this article, so I would urge you to use the same version or later if you can, as there are some language features that I'm using which may not be available in earlier versions of the framework.
You can grab the 6.3.1 release from their website, or simply grab the latest release from their main downloads page. You can also use something like NVM if you want to mix and match your versions for different projects, which is something I can recommend doing.
Once you have Node installed, open up your command line and make sure it's available by typing:
node -v
It should repeat the correct version number back to you. Also check that NPM is available (it should have been installed by the NodeJS installer) by typing:
npm -v
It should ideally be version 3 or greater.
TypeScript
We will need to install the TypeScript compiler for our application to work; luckily we can do this through NPM.
Now that we have NPM installed from the previous step, we can install TypeScript using the following command:
npm install -g typescript
This will download the TypeScript compiler using the node package manager and make the tools available on the command-line. To verify that your installation has worked, type:
tsc -v
Which should again echo a version number back to you (I'm using 2.0.10).
Webpack 2
Finally, install Webpack, which will allow us to package our JavaScript assets together and will effectively run our TypeScript compiler for us. Again, we can do this through NPM:
npm install -g webpack
This has installed webpack into our global package repository on our machine, giving us access to the 'webpack' tool.
Setting up the project
First of all, create a folder somewhere on your machine to house the development of your Docker dashboard, and navigate to it in your command line. We'll go through a number of steps to set this folder up for use before we start coding.
Next, initialise the NodeJS project by typing:
npm init
This will ask you a number of questions about the project, none of which are terribly important for this demo, except that the name must be all lower-case and contain no spaces.
Once that has finished, you will be left with a
package.json file in your project. This is the manifest file that describes your node project and all of its dependencies, and we'll be adding to this file shortly.
Creating the web server
Next, we'll get the basic web server up and running which will eventually serve our ReactJS app to the user.
Let's begin by installing ExpressJS, which will enable us to get this done:
npm install --save express
Express is a framework that provides us with an API for handling incoming HTTP requests, and defining their responses. You can apply a number of view engines for serving web pages back to the user, along with a whole host of middleware for serving static files, handling cookies, and much more. Alas, we're simply going to use it to serve up a single HTML file and some JavaScript assets, but at least it makes that job easy!
Next, create the file
server.js inside the root of your project, and add the code which will serve the HTML file:
let express = require('express') let path = require('path') let app = express() let server = require('http').Server(app) // Use the environment port if available, or default to 3000 let port = process.env.PORT || 3000 // Serve static files from /public app.use(express.static('public')) // Create an endpoint which just returns the index.html page app.get('/', (req, res) => res.sendFile(path.join(__dirname, 'index.html'))) // Start the server server.listen(port, () => console.log(`Server started on port ${port}`))
Note: You're going to see a lot of new ES6 syntax in this article, like
let,
const, arrow functions and a few other things. If you're not aware of modern JavaScript syntax, it's worth having a read up on some the new features!
Next, create an
index.html file in the root of the project with the following content:
<!DOCTYPE html> <html> <head> <meta charset="utf-8"> <meta name="viewport" content="width=device-width, initial-scale=1"> <title>Docker Dashboard</title> <link rel="stylesheet" href="" type="text/css"> </head> <body> <div id="app"> Docker Dashboard! </div> <script src="" integrity="sha256-BbhdlvQf/</script> <script src=""></script> </body> </html>
This simply gives us a basic template for the front page of our app - we'll be adding to this later!
Finally, let's test it out to make sure it's all working so far. In the command line, type:
node server.js
The prompt should tell you that it has managed to start the site on port 3000. Browse there now and make sure we can see our default index page. If not, check both the browser window and the console to see if Node has spat out any useful errors, and try again.
Keeping a smooth development workflow
Right now when you make changes to the site you will be forced to stop and restart the node app to see your changes to NodeJS code take effect, or re-run the webpack command whenever you make a change to your React components. We can mitigate both of these by causing them to reload themselves whenever changes are made.
To automatically reload your NodeJS server-side changes, you can use a package called nodemon. If you want to use this package from the command line, you can do
npm install -g nodemon. This will allow us to run our app in such a way that any changes to the server-side code will cause the web server to automatically restart, by using
nodemon server.js. We only want to do this on our development machines though, so we will configure our
package.json accordingly.
To handle the recompilation of your React components automatically, webpack has a 'watch' option that will cause it to re-run by itself. To do this, start webpack using
webpack --watch and notice that your JavaScript bundles will start recompiling automatically whenever you change your React components.
To have thes two things - nodemon and webpack - running together, you can either start them in two different console windows, or if you're using OSX or Linux you can run them from one console using this neat one-liner:
nodemon server.js & webpack --watch
Note This won't work on Windows systems, but luckily there is a package for that called concurrently that you can use to achieve the same affect:
npm install -g concurrently concurrently "nodemon server.js" "webpack --watch"
While you can use these tools by installing them globally, for our application we're going to install these two things as development dependencies, and adjust our
package.json file with two commands: one to start the app normally without nodemon, and a development script we can use to start both nodemon and webpack watch.
Firstly, install these two packages as development dependencies:
npm install -D nodemon concurrently
Then edit the 'scripts' node of the
package.json file to look like the following:
... "main": "index.js", "scripts": { "start": "webpack -p && node server.js", "start-dev": "./node_modules/.bin/concurrently \"nodemon server.js\" \"webpack --watch\"" }, "author": "", ...
- The
startscript (run using
npm start) will firstly compile your JavaScript assets using Webpack and then run our app using node. The
-pswitch causes Webpack to automatically optimize and minimize our scripts, ready for production
- The
start-devscript (run using
npm run start-dev) is our development mode. It starts our webserver using Nodemon and Webpack in 'watch' mode, meaning that both our server-side and client-side code will be automatically reloaded when something changes
(Thanks to @OmgImAlexis for some suggestions in this area!)
Starting some React and TypeScript
The main body of our client application is going to be constructed using React and TypeScript, which means we need to spend a little more time setting up one or two more tools. Once we set up a workflow for compiling the first component, the rest will easily follow.
Firstly, let's have a look at how we're going to structure our React components.
app/ |--- components/ | |--- app.tsx | |--- containerList.tsx | |--- dialogTrigger.tsx | |--- modal.tsx | |--- newContainerModal.tsx |--- index.tsx
They will all be housed inside an 'app' folder, with the smaller components inside a 'components' subfolder.
index.tsx is essentially an entry point into our client-side app; it binds the React components to the Html Dom.
app.tsx glues everything together - it arranges and communicates with the other components in order to present the interface to the user and allow them to interact with the application. Let's set the project up to start compiling
index.tsx
Create the 'app' folder, and then the 'index.tsx' file inside of that, with the following contents:
import * as React from 'react' import * as ReactDOM from 'react-dom' import { AppComponent } from './components/app' ReactDOM.render( <AppComponent />, document.getElementById('app') )
If you're using the excellent Visual Studio Code you'll notice that it will immediately start throwing up intellisense issues, mainly because it doesn't know what 'react', 'react-dom' and our application component is. We're going to use Webpack and TypeScript to fix that!
Setting up Webpack
Webpack will take all our .tsx files, work out their dependencies based on the imported files, run them through the TypeScript compiler and then spit out one JavaScript file that we can include on the main Html page. It does this primarily by referencing a configuration file in the root of our project, so let's create that next.
Create the file
webpack.config.js in the root of your project, with the following contents:
module.exports = { entry: "./app/index.tsx", output: { filename: "bundle.js", path: __dirname + "/public/js" }, devtool: "source-map", resolve: { extensions: [".webpack.js", ".web.js", ".ts", ".tsx", ".js"] }, module: { loaders: [ { test: /\.tsx?$/, loader: "ts-loader" } ] } };
There's quite a bit in there, so let's go through it:
- The
entrykey tells Webpack to start processing files using the
/app/index.tsxfile.
- The output key tells Webpack where to put the output files; in the
/public/jsfolder with the name
bundle.js.
- The
devtoolkey, along with the
source-map-loaderpreloader in the
modulesection, tells Webpack to generate source maps, which will come in very handy when trying to debug your JavaScript app later.
- The
resolvekey tells Webpack which extensions to pay attention to when resolving module.
- The
loaderssection tells Webpack what middleware to use when processing modules. Here we tell it that, whenever Webpack comes across a file with a .ts or .tsx extension, it should use the
ts-loadertool. This is the tool that processes a TypeScript file and turns it into regular JavaScript.
There is a lot more you can do with Webpack, including automatically splitting out common modules into a
common.js file, or including css files along with your JavaScript, but what we have here is sufficient for our requirements.
To get this to work, we still need to install the
ts-loader and
source-map-loader packages:
npm install --save-dev ts-loader source-map-loader
We also need to install the React packages that we need:
npm install --save-dev react react-dom
Next, we need install TypeScript into the project. We have already installed it globally in the first section of this article, so we can simply link it in:
npm link typescript
TypeScript itself needs a configuration file, which lives in the
tsconfig.json file in the root of the project. Create that now, with the following content:
{ "compilerOptions": { "outDir": "dist/", "sourceMap": true, "noImplicitAny": true, "module": "commonjs", "target": "es5", "jsx": "react" } }
The main parts of this configuration are the
module,
target and
jsx keys, which instruct TypeScript how to output the correct code to load modules in the right way, and also how to deal with the React JSX syntax correctly (covered later).
Let's see what state our Webpack set up is in at the moment. From the command line, simply type
webpack to start compilation.
It should give you some stats about compile times and sizes, along with a few errors:
ERROR in ./app/index.tsx (1,24): error TS2307: Cannot find module 'react'. ERROR in ./app/index.tsx (2,27): error TS2307: Cannot find module 'react-dom'. ERROR in ./app/index.tsx (3,30): error TS2307: Cannot find module './components/app'. ERROR in ./app/index.tsx (6,5): error TS2602: JSX element implicitly has type 'any' because the global type 'JSX.Element' does not exist. ERROR in ./app/index.tsx Module not found: Error: Cannot resolve 'file' or 'directory' ./components/app in /Users/stevenhobbs/Dev/personal/docker-dashboard/app @ ./app/index.tsx 4:12-39
Essentially, it still doesn't know what 'react' is, so let's fix that now!
Installing typings for React
Because we've told Webpack that we're going to handle the React and ReactDOM libraries ourselves, we need to tell TypeScript what those things are. We do that using Type Definition Files. As you can see from the Github repository, there are thousands of files, covering most of the JavaScript frameworks you've heard of. This is how we get rich typing, compile-time hints and intellisense while writing TypeScript files. Luckily, we can also install them using NPM.
To install them, use:
npm install --save-dev @types/react @types/react-dom
Now try running
webpack again. This time we get just one error, telling us that the
./components/app module is missing. Create a skeleton file for now so that we can get it compiling, and inspect the results. Create the file
app/components/app.tsx with the following content:
import * as React from 'react' export class AppComponent extends React.Component<{}, {}> { render() { return (<h1>Docker Dashboard</h1>) } }
At the moment it does nothing except print out 'Docker Dashboard' in a header tag, but it should at least compile. We'll flesh this out much more later on! For now though, you should be able to run the
webpack command again now, and have it produce no errors.
To inspect what Webpack has created for us, find the
public/js folder and open the
bundle.js file. You'll see that, while it does look rather obtuse, you should be able to recognise elements of your program in there towards the very bottom, as normal JavaScript that can run in the browser. It's also rather large, as it also includes the React libraries and it will include even more by the time we're finished!
The next thing to do is include this file in our Html page. Open
index.html and put a script tag near the bottom, underneath the Bootstrap include:
<script src=""></script> <!-- Add our bundle here --> <script src="/js/bundle.js"></script> </body>
Now, you should be at the point where you can run the site using
node server.js, browse to and view the running website. If you can see 'Docker Dashboard' written using a large header font, then you've successfully managed to get your Webpack/TypeScript/React workflow working! Congratulations!
Now let's flesh out the actual application a bit more and add some real value.
Creating the components
What we have now is a server-side application which acts as the backbone of our React app. Now that we have done all that setup and configuration, we can actually concentrate on creating the React components that will form the application's interface. Later on, we will tie the interface to the server using socket.io, but for now let's start with some React components.
To figure out what components we need, let's take another look at a screenshot of the application, this time with the individual React components highlighted:
- The DialogTrigger component displays a button which can trigger a Bootstrap modal dialog
- The ContainerItem component knows how to display a single Docker container, including some info about the container itself
- The ContainerList displays a number of ContainerItem components. There are two ContainerList components here - one for running containers, and one for stopped containers
One additional component which is not shown in that screenshot is the modal dialog for starting new containers:
To start with, let's create the component to display a single container. Create a new file in /app/components called
containerListItem.tsx, and give it the following content:
import * as React from 'react' import * as classNames from 'classnames' export interface Container { id: string name: string image: string state: string status: string } export class ContainerListItem extends React.Component<Container, {}> { // Helper method for determining whether the container is running or not isRunning() { return this.props.state === 'running' } render() { const panelClass = this.isRunning() ? 'success' : 'default' const classes = classNames('panel', `panel-${panelClass}`) const buttonText = this.isRunning() ? 'Stop' : 'Start' return ( <div className="col-sm-3"> <div className={ classes }> <div className="panel-heading">{ this.props.name }</div> <div className="panel-body"> Status: {this.props.status}<br/> Image: {this.props.image} </div> <div className="panel-footer"> <button className="btn btn-default">{buttonText}</button> </div> </div> </div> ) } }
Here we have defined a component that can render a single container. We also declare an interface that has all of the properties about a container that we'd want to display, like its name, image and current status. We define the 'props' type of this component to be a Container, which means we can get access to all the container information through
this.props.
The goal of this component is to not only display the current status of the component, but also to handle the start/stop button - this is something we'll flesh out later once we get into the socket.io goodness.
The other interesting this component can do, is slightly alter its appearance depending on whether the container is running or not. It has a green header when it's running, and a grey header when it's not. It does this by simply switching the Css class depending on the status.
We'll need to install the
classnames package for this to work, along with its TypeScript reference typings. To do that, drop into the command line once more:
npm install --save classnames
npm install --save-dev @types/classnames
Classnames is not strictly necessary, but does provide a handy API for conditionally concatenating CSS class names together, as we are doing here.
Next, let's create the
ContainerItemList component, which is in charge of displaying a whole list of these components together. Create a new file in
/app/components called
ContainerList with the following content:
import * as React from 'react' import { Container, ContainerListItem } from './containerListItem' export class ContainerListProps { containers: Container[] title?: string } export class ContainerList extends React.Component<ContainerListProps, {}> { render() { return ( <div> <h3>{this.props.title}</h3> <p>{ this.props.containers.length == 0 ? "No containers to show" : "" }</p> <div className="row"> { this.props.containers.map(c => <ContainerListItem key={c.name} {...c} />) } </div> </div> ) } }
This one is a little simpler as it doesn't do too much except display a bunch of
ComponentListItems in a list. The properties for this component include an array of
Container objects to display, and a title for the list. If the list of containers is empty, we show a short message.
Otherwise, we use
map() to convert the list of
Container types into
ContainerListItem components, using the spread operator (the
...c part) to apply the properties on
Container to the component. We also give it a key so that React can uniquely identify each container in the list. I'm using the name of the container, seeing as that will be unique in our domain (you can't create two Docker containers with the same name, running or not).
So now we have a component to render a container, and one to render a list of containers with a title, let's flesh out the App container a bit more.
Displaying some containers
Back to
app.tsx. First we need to import our new containers into the module:
import { Container, ContainerListItem } from './containerListItem' import { ContainerList } from './containerList'
Next, we'll create a couple of dummy containers just for the purpose of displaying something on screen; we'll swap this out later with real data from the Docker Remote API. Add this inside the AppComponent class, near the top:
containers: Container[] = [ { id: '1', name: 'test container', image: 'some image', state: 'running', status: 'Running' }, { id: '2', name: 'another test container', image: 'some image', state: 'stopped', status: 'Running' } ]
Now we need to create some state for this application component. The state will simply tell us which components are running, and which are stopped. We'll use this state to populate the two lists of containers respectively.
To this end, create a new class
AppState outside of the main application component to hold this state:
class AppState { containers?: Container[] stoppedContainers?: Container[] }
Now change the type of the state on
AppComponent so that TypeScript knows what properties are available on our state. Your
AppComponent declaration should now look like this:
export class AppComponent extends React.Component<{}, AppState> {
Then create a constructor inside
AppComponent to initialise our state, including giving it our mocked-up containers. To do this, we use lodash to partition our container list into two lists based on the container state. This means that we'll have to install lodash and the associated typings:
npm install --save lodash npm install --save-dev @types/lodash
And then import the lodash library at the top of the file:
import * as _ from 'lodash'
Lodash is a very handy utility library for performing all sorts of operations on lists, such as sorting, filtering - and in our case - partitioning!
Here's the constructor implementation:
constructor() { super() const partitioned = _.partition(this.containers, c => c.state == 'running') this.state = { containers: partitioned[0], stoppedContainers: partitioned[1] } }
Now in our state we should have two lists of containers - those that are running, and those that aren't.
Finally, let's replace the render method so that it takes our dummy containers and uses our components to represent them on the screen:
render() { return ( <div className="container"> <h1 className="page-header">Docker Dashboard</h1> <ContainerList title="Running" containers={this.state.containers} /> <ContainerList title="Stopped containers" containers={this.state.stoppedContainers} /> </div> ) }
At this point you should have a basic dashboard setup with some dummy containers - let's have a look:
Making things dynamic!
Let's have a look at the Docker and socket.io side of things now, and replace those dummy containers with some real data!
Firstly, install dockerode, a NodeJS library that enables us to interact with the Docker Remote API:
npm install --save dockerode
Next, install the libraries and associated typings for socket.io - we'll be using this both on the server-side and the client, as a means of communicating between the two:
npm install --save socket.io npm install --save-dev @types/socket.io @types/socket.io-client
Now, open
server.js in the root of the project and import socket.io, binding it to the Express server that we've already created:
let io = require('socket.io')(server)
We can also get a connection to the Docker Remote API at this point, through Dockerode. We need to connect to the API differently depending on whether we're on a Unix system or a Windows system, so let's house this logic in a new module called
dockerapi.js in the root of the project:
let Docker = require("dockerode"); let isWindows = process.platform === "win32"; let options = {}; if (isWindows) { options = { host: '127.0.0.1', port: 2375 } } else { options = { socketPath: '/var/run/docker.sock' } } module.exports = new Docker(options);
Now we can include this in our
server.js file and get a handle to the API:
let docker = require('./dockerapi')
We're going to provide the client with a few methods; getting a list of containers, starting a container, stopping a container, and running a new container from an exiting image. Let's start with the container list.
Firstly, we need to listen for connections. We can do this further down the
server.js script, after we start the web server on the line that begins
server.listen(..)
io.on('connection', socket => { socket.on('containers.list', () => { refreshContainers() }) })
This starts socket.io listening for connections. A connection will be made when the React app starts; at least it will be when we put the code in a bit later on!
In order to send the list of Docker containers, we listen for the 'containers.list' message being sent from the socket that has connected to the server; in other words, the client app has requested the list of containers from the server. Let's go ahead and define the
refreshContainers() method:
function refreshContainers() { docker.listContainers({ all: true}, (err, containers) => { io.emit('containers.list', containers) }) }
Whenever we call
refreshContainers(), the Docker API will be used to retrieve the list of all of the containers that exist on the current system (running or not), which will then send them all using the 'containers.list' message through socket.io. Notice though that we're sending the message through the main
io object rather than through a specific socket - this means that all of the clients currently connected will have their container lists refreshed. You will see why this becomes important later in the article.
Moving over to the main React component, we should now be able to start picking up messages through socket.io which indicate that we should display the container list. First, import the socket.io library and connect to the socket.io server:
import * as io from 'socket.io-client' let socket = io.connect()
Next, delete the mocked-up containers that we had put in before. Then change the constructor so that we react to the messages being passed to us from socket.io instead of using our mocked-up containers. We will also initialise the component state so that the containers are just empty lists; the component will populate them at some short time in the future when it has received the appropriate message. Here's what the constructor looks like now:
constructor() { super() this.state = { containers: [], stoppedContainers: [] } socket.on('containers.list', (containers: any) => { const partitioned = _.partition(containers, (c: any) => c.State == "running") this.setState({ containers: partitioned[0].map(this.mapContainer), stoppedContainers: partitioned[1].map(this.mapContainer) }) }) }
We listen for messages using
io.on() and specify the message string. When our socket receives a message with this name, our handler function will be called. In this case, we handle it and receive a list of container objects down the wire. We then partition it into running and stopped containers (just as we did before) and then we set the state appropriately. Each container from the server is mapped to our client-side Container type using a function
mapContainer(), which is shown here:
mapContainer(container:any): Container { return { id: container.Id, name: _.chain(container.Names) .map((n: string) => n.substr(1)) .join(", ") .value(), state: container.State, status: `${container.State} (${container.Status})`, image: container.Image } }
This is where we extract out properties such as the name, image, status and so on. Any other properties that you want to include on the UI in the future, you will probably read inside this function.
So now we have the ability to react to socket.io messages coming down the wire, the next thing to do is cause the server to send us the container list! We do this by sending a 'containers.list' message to the server using
socket.emit, which will send all the connections a similarly-titled message back with the container data. We can send this message from the
componentDidMount event, which is called on our Component once it has been 'mounted' to the DOM:
componentDidMount() { socket.emit('containers.list') }
Right now, you should be able to start your app and have it display a list of the running and stopped Docker containers on your machine!
Starting containers
Being able to start and stop a container is merely an extension of what we've already accomplished. Let's have a look at how we can start a container when we click the 'Start' button.
Wiring up the start button
The workflow we're going to implement looks like this:
- We are going to handle the 'click' event of the start button from inside the React component.
- Inside the click event, we're going to send a message to the socket running on the server.
- The server will receive the message and tell Docker to start the appropriate container.
- When the container starts, the server will dispatch a message to all connections with a refreshed list of containers.
Let's start with the button. Alter the button inside your
ContainerListItem component so that it handles the click event using a method called
onActionButtonClick:
<button onClick={this.onActionButtonClick.bind(this)}{buttonText}</button>
Next create, the
onActionButtonClick handler somewhere inside the same component:
onActionButtonClick() { socket.emit('container.start', { id: this.props.id }) }
Here we post the 'container.start' message to the socket along with the container id. Armed with this information, we'll be able to tell Docker which container to start. You might find that you'll get an issue here, because TypeScript doesn't know what
socket is yet. We can fix that by importing
socket.io-client and connecting to the server socket. At the top of the file, then:
import * as io from 'socket.io-client' const socket = io.connect()
Now everything should be fine. To complete the feature, let's pop over to the server side and handle the incoming message. Open
server.js and add the following somewhere inside your socket connection handler, alongside where you handle the 'containers.list' message:
socket.on('container.start', args => { const container = docker.getContainer(args.id) if (container) { container.start((err, data) => refreshContainers()) } })
Here we simply get a container from Docker using the id that we get from the client. If the container is valid, we call
start on it. Once start has completed, we call our
refreshContainers method that we already have. This will cause socket.io to send our current list of containers to all the connected clients.
Stopping containers
The functionality for stopping containers that are running is done in much the same way; we send a message through socket.io to the server with a 'containers.stop' message, the server stops the relevant container and then tells everyone to refresh their container list.
Once again, let's start on the component side of things. In the previous section, we added a handler for the 'start/stop' button which tells socket.io to send a message to start the container. Let's tweak that a bit so that we can use it for stopping containers too; we'll just send the right message or not depending on whether the container is currently running or not. So this handler now becomes:
onActionButtonClick() { const evt = this.isRunning() ? 'container.stop' : 'container.start' socket.emit(evt, { id: this.props.id }) }
Next, we'll handle the message on the server. Add a handler for this alongside the one we added in the previous section for 'container.start':
socket.on('container.stop', args => { const container = docker.getContainer(args.id) if (container) { container.stop((err, data) => refreshContainers()) } })
The code looks strikingly similar to the start code, except we stop a container instead of starting it. If you run the app now, you should be able to start and stop your containers!
Periodically refreshing container state
Before we head into the last section, now would be a good time to add a quick feature that will automatically refresh our container state. As awesome as our new Docker dashboard is, containers can be started, stopped, created and destroyed from a few different places outside of our app, such as the command line. It would be nice to reflect these changes in our app too.
A quick and easy way to achieve this is to simply read the container state every x seconds, then update our clients. We already have most of the tools to do this, so let's implement it!
Back in
server.js in the server-side app, add a quick one-liner to send an updated list of Docker containers every 2 seconds. Put this outside of the
io.on('connection', ... block:
setInterval(refreshContainers, 2000)
Now, once your app is running, dive into the command line and stop one of your containers using
docker stop <container id or name>, and you should see the container stop inside your dashboard too!
Furthermore, thanks to the power of socket.io, you should be able to open your dashboard in multiple browsers and see them all update at the same time. Go ahead and try browsing your dashboard on your mobile device too!
Starting brand new containers
In this final section, we're going to explore how we can start brand new containers from exiting Docker images. This will involve a couple of new React components, a Bootstrap Modal popup and some more interaction with socket.io and the Docker API.
First, let's create the React components. There are 3 components involved:
- A 'modal' component, which is a generic component for creating any modal dialog
- A 'new container model' component, which is based upon the generic modal component for showing the new container-specific UI, as well as handling validation
- A 'dialog trigger' component which is used to show a modal dialog component on the screen.
Creating a generic modal popup component
Let's start with the generic component, seeing as our modal for creating a new container will be based upon this one. We're making a generic component just as an exercise to show you how you can extend such a component for multiple uses. For example, later you might go on to create a dialog to accept an image name that will be pulled from the Docker hub - you could also base that modal upon this generic component.
Create a new file in the 'components' director called
modal.tsx, and begin by importing the relevant modules:
import * as React from 'react'
Next, define some properties that our modal can accept so that we can configure how it looks and works:
interface ModalProperties { id: string title: string buttonText?: string onButtonClicked?: () => boolean|undefined }
We must take an id and a title, but we can also accept some text for the button on the dialog and also a handler for the button click, so that we can define what happens when the user clicks the button. Remember that this component is designed to be used in a generic way - we don't actually know what the behaviour will be yet!
Now let's define the component itself:
export default class Modal extends React.Component<ModalProperties, {}> { // Store the HTML element id of the modal popup modalElementId: string constructor(props: ModalProperties) { super(props) this.modalElementId = `#${this.props.id}` } onPrimaryButtonClick() { // Delegate to the generic button handler defined by the inheriting component if (this.props.onButtonClicked) { if (this.props.onButtonClicked() !== false) { // Use Bootstrap's jQuery API to hide the popup $(this.modalElementId).modal('hide') } } } render() { return ( <div className="modal fade" id={ this.props.id }> <div className="modal-dialog"> <div className="modal-content"> <div className="modal-header"> <button type="button" className="close" data-×</button> <h4 className="modal-title">{ this.props.title }</h4> </div> <div className="modal-body"> { this.props.children } </div> <div className="modal-footer"> <button type="button" onClick={this.onPrimaryButtonClick.bind(this)}{ this.props.buttonText || "Ok" } </button> </div> </div> </div> </div> ) } }
The component definition itself is mostly straightforward - we just render out the appropriate Bootstrap markup for modal popups, but we pepper it with values, such as the component title. We also specify the client handler on the button as well as the button text. If the component doesn't specify what the button text should be, the default value "Ok" is used, using this line:
{ this.props.buttonText || "Ok" }
Most importantly, the component called
this.props.children for the modal body. You'll see why this important in the next section, but basically it allows us to render other components that are specified as children of this component. More on that later.
Also note the
onPrimaryButtonClick handler; when the button is clicked, it delegates control to whatever is using this component, but it also inspects the return value from that call. If false is returned, it doesn't automatically close the dialog. This is useful for later when we don't want to close the dialog in the event that our input isn't valid.
One last thing before we move on; when this component compiles, you'll probably find that TypeScript will complain that it can't find
$, which is true since we haven't imported it. To fix this, we need to simply install the typings for jQuery so that it knows how to resolve that symbol. You will also need to install the types for Twitter Bootstrap, so that it knows what the bootstrap-specific methods and properties are.
In the command line, then:
npm install --save-dev @types/jquery @types/bootstrap
Creating the 'new container' dialog
This dialog will be defined by creating a new dialog component and wrapping the content in the generic dialog component that we created in the last section, specifing some things like the title and what happens when the user clicks the button. Create a new file for the component called 'newContainerModal'.
Firstly, define our imports:
import * as React from 'react' import Modal from './modal' import * as classNames from 'classnames'
Note that we're importing our generic modal as
Modal, allowing us to make use of it in this new modal component - more on that shortly.
Now let's define some incoming properties, and some state for our new component:
interface ModalProperties { id: string, onRunImage?: (name: string) => void } interface ModalState { imageName: string isValid: boolean }
For the properties, we allow an id for the component to be set - this will make sense soon when we create our last component, the 'modal dialog trigger'. We also take a function that we can call when the name of an image to run has been entered.
For the state, we're going to record the name of the image that was entered, and also some basic form validation state using the
isValid flag.
As a reminder, this is what this modal popup is going to look like; there's just one text field and one button:
Let's fill out the component and have a look at its
render method. Also note the constructor, where can initialise the component state to something default:
export class NewContainerDialog extends React.Component<ModalProperties, ModalState> { constructor(props: ModalProperties) { super(props) this.state = { imageName: '', isValid: false } } render() { let inputClass = classNames({ "form-group": true, "has-error": !this.state.isValid }) return ( <Modal id="newContainerModal" buttonText="Run" title="Create a new container" onButtonClicked={this.runImage.bind(this)}> <form className="form-horizontal"> <div className={inputClass}> <label htmlFor="imageName" className="col-sm-3 control-label">Image name</label> <div className="col-sm-9"> <input type="text" className="form-control" onChange={this.onImageNameChange.bind(this)} </div> </div> </form> </Modal> ) } }
Hopefully now you can see how the component is constructing using the generic modal component we created earlier. In this configuration, the
Modal component acts as a higher-order component, wrapping other components inside of it, instead of our new component inheriting from it as we might have otherwise done.
The rest of the markup is fairly standard Bootstrap markup that defines a form field with a label. Three things to note, however:
- We apply a class to the
divthat wraps the form elements that is derived from our
isValidstate property; if the form isn't valid, the input box gets a nice red border, and the user can see they've done something wrong
- We specify a handler for the textbox's 'onChange' event, allowing us to handle and record what the user is typing in
- We specify a handler for the generic modal's button click - when the user clicks that button, our new component is going to handle the event and do something specific to our needs. We'll come back to this in a minute
Let's define that change handler now:
onImageNameChange(e: any) { const name = e.target.value this.setState({ imageName: name, isValid: name.length > 0 }) }
All of the form behaviour is captured here. As the user is typing into the box, we record the input value into the
imageName state property, and also determine whether or not it's valid; for now, it's good enough for the image name to have at least one character.
Next, we need to define what happens when the user clicks the button on the modal popup. This is done inside the
runImage function:
runImage() { if (this.state.isValid && this.props.onRunImage) this.props.onRunImage(this.state.imageName) return this.state.isValid }
This should be fairly straightforward - we simply say that if the state of the component is valid, and the
onRunImage handler has been defined, we call it with the name of the image that the user typed in. We also return a value which indicates to the generic modal component that it should close itself. This happens to just be the same thing is the value of the
isValid flag.
That's it for this component - let's create a trigger component so that we can open it!
Triggering the modal
This last component is going to represent the trigger - the thing the user will click on - that opens a modal popup. It's definition is actually very simple. Create a new component called 'dialogTrigger.tsx' and populate it with the following:
import * as React from 'react' export interface DialogTriggerProperties { id: string buttonText: string } export class DialogTrigger extends React.Component<DialogTriggerProperties, {}> { render() { const href = `#${this.props.id}` return ( <a className="btn btn-primary" data-toggle="modal" href={ href }>{ this.props.buttonText }</a> ) } }
For the component properties, we take the id of the modal we want to trigger, and also the text that we want to show on the button. Then inside the render function, a standard Bootstrap link is displayed with button styling and the id of the modal to open. If you're not familiar with Bootstrap, note that the actual opening of the dialog is all done with the Bootstrap JavaScript library - all we need to do is specify the
data-toggle="modal" attribute and set the href attribute to the id of the modal we want to open.
Tying it all together
Now that we have all of our modal components, we can put them all together. Head back to
app.tsx and import all the components we just created:
import { NewContainerDialog } from './newContainerModal' import { DialogTrigger } from './dialogTrigger'
There's no need to import the generic Modal component, as that will be done by the
NewContainerDialog component; we're not going to use it directly here.
Now, update the render function so that it contains our new components. For the trigger, place it under the header, and for the 'new container' dialog, it just needs to go on the page somewhere; Bootstrap will place it correctly once it has been opened:
render() { return ( <div className="container"> <h1 className="page-header">Docker Dashboard</h1> <DialogTrigger id="newContainerModal" buttonText="New container" /> <ContainerList title="Running" containers={this.state.containers} /> <ContainerList title="Stopped containers" containers={this.state.stoppedContainers} /> <NewContainerDialog id="newContainerModal" onRunImage={this.onRunImage.bind(this)} /> </div> ) }
Note how the id property of
DialogTrigger is the same as the id property of
NewContainerDialog - this is necessary in order for the trigger to understand that this is the dialog it needs to trigger.
Also note how the
onRunImage property of the dialog component is defined - let's create that now:
onRunImage(name: String) { socket.emit('image.run', { name: name }) }
It just sends the name of the image to the server inside a message called 'image.run'. We can define that now by heading over to
server.js and handling a new message alongside where we've created the others:
socket.on('image.run', args => { docker.createContainer({ Image: args.name }, (err, container) => { if (!err) container.start((err, data) => { if (err) socket.emit('image.error', { message: err }) }) else socket.emit('image.error', { message: err }) }) })
Here we call out to the Docker API and its convenient
createContainer method, passing in the image name that the user typed in. This will not pull new images from the Docker Hub - it will only start new containers from existing images that exist on the local system. However, it can certainly be done - I'll leave it as an exercise for you, the reader, to complete in your own time.
If we're able to create the container, we'll start it. Remember our timer that we created earlier? Once the container starts, that timer will pick up the new container and display it to all the clients that are connected!
Finally, if there is an error we can send an 'image.error' message back to the socket that sent the original 'image.run' message, which will be useful for the user so that they are aware that something didn't work as expected. Let's head back to the app component for the final piece of the puzzle. Inside the constructor of the
app.tsx component:
socket.on('image.error', (args: any) => { alert(args.message.json.message) })
Here we simply throw an alert if Docker encounters an error running the image. Armed with your new-found React knowledge, I'm sure you can now come up with some fancy UI to make this a lot prettier!
Wrapping up
By now you should have a useful but somewhat basic Docker dashboard, and hopefully the journey has been worth it! With all the socket.io goodness, be sure to play around with loading your app from multiple sources, like your desktop browser and mobile phone, and watch them all keep in sync!
Some things you could continue on with to make it a lot more useful, include:
- Using the Docker API to pull images instead of simply running them.
- Using the Docker API to stream the container logs to the client through Socket.io.
- Extending the container dialog form to include options for port mapping, volumes, container name and more!. | https://auth0.com/blog/docker-dashboard-with-react-typescript-socketio/ | CC-MAIN-2017-22 | refinedweb | 8,467 | 54.52 |
Important: Please read the Qt Code of Conduct -
Full screen on Android ???
Please has anyone –managed- to remove/hide the Android "StatusBar" ?
making a real “Full Screen” application.
I use this code …
@ MainWindow w;
w.Ini();
w.showFullScreen();
@
Thank you for your help
Artemis
do as follows:
@
namespace sysvar
{
const Qt::WindowModality WindowsFormShow =Qt::ApplicationModal;
const Qt::WindowType WindowsFormStyle=Qt::FramelessWindowHint;
const Qt::WindowStates WindowsFormState=
#ifdef Q_OS_WIN32
Qt::WindowNoState;
#endif
#ifdef Q_OS_WINCE
Qt::WindowFullScreen;
#endif
#ifdef Q_OS_ANDROID
Qt::WindowFullScreen;
#endif
}
form->setWindowFlags( sysvar::WindowsFormStyle );
form->setWindowState( form->windowState()|sysvar::WindowsFormState);
form->setWindowModality(sysvar::WindowsFormShow);
form->showFullScreen();
@
Works only alpha4 or higher
[Edit: Added @ tags around code -- mlong]
Use forum qt-android
Hi Flavio
I think you gave the best answer I ever see in any forum about the Android FullScreen. I will test the code at the afternoon when I have the device because now is not with me and I will do a confirmation.
millions thanks !
Hi Flavio
Unfortunately the test of the above code failed to do Full-Screen on the "FlyTouch 8" device with "Android 4" OS.
Have you tested it and is it working on your own device?
Have you any idea?
Thank you for the try.
For WinCE worked.
To Android_alpha3 worked occasionally with mistakes.
So I did not use the feature Alpha3.
I do not know if this alpha4_1 working, but I saw that the ticket had to correct fix.
Here you will find information about all the bugs and how this Qtnecessitas the correction process:
I forced to bring this thread to the life :\
I face same problem with all dialog types under Android 4.x and Qt 5.1.x as mentioned "in my thread":; So may you please help me out to find a solution.
BTW, I mentioned in my thread that this issue disappear in case hiding to the tray then showing the application.
In Qt4.8.x fullScreen not work well.
In Qt5.1.X fullScreen not work.
I'm always watching and trying to recompile the SDK to verify changes.
If I have success with this, I warn you here in this topic.
[quote]If I have success with this, I warn you here in this topic.[/quote]
Thanks; I think it's better to file a bug report about this issue specially it's critical one
I filed a bug report in this link: QTBUG-33294
[edit: updated bug report link SGaist] | https://forum.qt.io/topic/20577/full-screen-on-android | CC-MAIN-2020-40 | refinedweb | 405 | 64.41 |
Opened 6 years ago
Closed 6 years ago
Last modified 5 years ago
#19698 closed Bug (fixed)
Deleting Sites through a manager does not clear cache
Description (last modified by )
When you delete a Site instance, i.e.
Site.objects.get_current().delete(), the cache is cleared. However, if you delete all sites,
Site.objects.all().delete(), the cache is not cleared.
def test_delete_all_sites_clears_cache(self): """ When all site objects are deleted the cache should also be cleared and get_current should raise a DoesNotExist """ from django.contrib.sites.models import Site site = Site.objects.create(domain='example.com', name='test') self.assertIsInstance(Site.objects.get_current(), Site) Site.objects.all().delete() self.assertRaises(Site.DoesNotExist, Site.objects.get_current)
Attachments (4)
Change History (10)
comment:1 Changed 6 years ago by
comment:2 Changed 6 years ago by
Changed 6 years ago by
Patched to make delete() a pre_delete signal
Changed 6 years ago by
Refactored the previous patch as Site no longer needed a save() method either.
Changed 6 years ago by
Sorry, forgot to take out decorator on previous patch.
Changed 6 years ago by
comment:3 Changed 6 years ago by
Hi;
I've rebased the patch, checked and fixed some pep8 violations.
I've also moved the function to the end of the file, instead of writing in after the class Site.
The patch is ready for checking. I've also changed the version to 1.5-rc1, because it is a bug. If you don't agree, you can change it.
This is my first work in Django trac, so sorry if I did something wrong :) Thanks for your help, any comments are appreciated.
comment:4 Changed 6 years ago by
You can't mark your own patches as Ready For Check-in and version is used for when the bug was identified. Otherwise the patch looks fine! Leaving as RFC.
The
delete()method of the
Siteclass should probably be replaced by a
pre_deletesignal. | https://code.djangoproject.com/ticket/19698 | CC-MAIN-2019-26 | refinedweb | 325 | 74.69 |
Created on 2005-08-01 18:23 by liturgist, last changed 2020-09-19 19:04 by georg.brandl.
2.4.1 documentation has a list of standard encodings in
4.9.2. However, this list does not seem to match what
is returned by the runtime. Below is code to dump out
the encodings and aliases. Please tell me if anything
is incorrect.
In some cases, there are many more valid aliases than
listed in the documentation. See 'cp037' as an example.
I see that the identifiers are intended to be case
insensitive. I would prefer to see the documentation
provide the identifiers as they will appear in
encodings.aliases.aliases. The only alias containing
any upper case letters appears to be 'hp_roman8'.
$ cat encodingaliases.py
#!/usr/bin/env python
import sys
import encodings
def main():
enchash = {}
for enc in encodings.aliases.aliases.values():
enchash[enc] = []
for encalias in encodings.aliases.aliases.keys():
enchash[encodings.aliases.aliases[encalias]].append(encalias)
elist = enchash.keys()
elist.sort()
for enc in elist:
print enc, enchash[enc]
if __name__ == '__main__':
main()
sys.exit(0)
13:12 pwatson [
ruth.knightsbridge.com:/home/pwatson/src/python ] 366
$ ./encodingaliases.py
ascii ['iso_ir_6', 'ansi_x3_4_1968', 'ibm367',
'iso646_us', 'us', 'cp367', '646', 'us_ascii',
'csascii', 'ansi_x3.4_1986', 'iso_646.irv_1991',
'ansi_x3.4_1968']
base64_codec ['base_64', 'base64']
big5 ['csbig5', 'big5_tw']
big5hkscs ['hkscs', 'big5_hkscs']
bz2_codec ['bz2']
cp037 ['ebcdic_cp_wt', 'ebcdic_cp_us', 'ebcdic_cp_nl',
'037', 'ibm039', 'ibm037', 'csibm037', 'ebcdic_cp_ca']
cp1026 ['csibm1026', 'ibm1026', '1026']
cp1140 ['1140', 'ibm1140']
cp1250 ['1250', 'windows_1250']
cp1251 ['1251', 'windows_1251']
cp1252 ['windows_1252', '1252']
cp1253 ['1253', 'windows_1253']
cp1254 ['1254', 'windows_1254']
cp1255 ['1255', 'windows_1255']
cp1256 ['1256', 'windows_1256']
cp1257 ['1257', 'windows_1257']
cp1258 ['1258', 'windows_1258']
cp424 ['ebcdic_cp_he', 'ibm424', '424', 'csibm424']
cp437 ['ibm437', '437', 'cspc8codepage437']
cp500 ['csibm500', 'ibm500', '500', 'ebcdic_cp_ch',
'ebcdic_cp_be']
cp775 ['cspc775baltic', '775', 'ibm775']
cp850 ['ibm850', 'cspc850multilingual', '850']
cp852 ['ibm852', '852', 'cspcp852']
cp855 ['csibm855', 'ibm855', '855']
cp857 ['csibm857', 'ibm857', '857']
cp860 ['csibm860', 'ibm860', '860']
cp861 ['csibm861', 'cp_is', 'ibm861', '861']
cp862 ['cspc862latinhebrew', 'ibm862', '862']
cp863 ['csibm863', 'ibm863', '863']
cp864 ['csibm864', 'ibm864', '864']
cp865 ['csibm865', 'ibm865', '865']
cp866 ['csibm866', 'ibm866', '866']
cp869 ['csibm869', 'ibm869', '869', 'cp_gr']
cp932 ['mskanji', '932', 'ms932', 'ms_kanji']
cp949 ['uhc', 'ms949', '949']
cp950 ['ms950', '950']
euc_jis_2004 ['eucjis2004', 'jisx0213', 'euc_jis2004']
euc_jisx0213 ['eucjisx0213']
euc_jp ['eucjp', 'ujis', 'u_jis']
euc_kr ['ksc5601', 'korean', 'euckr', 'ksx1001',
'ks_c_5601', 'ks_c_5601_1987', 'ks_x_1001']
gb18030 ['gb18030_2000']
gb2312 ['chinese', 'euc_cn', 'csiso58gb231280',
'iso_ir_58', 'euccn', 'eucgb2312_cn', 'gb2312_1980',
'gb2312_80']
gbk ['cp936', 'ms936', '936']
hex_codec ['hex']
hp_roman8 ['csHPRoman8', 'r8', 'roman8']
hz ['hzgb', 'hz_gb_2312', 'hz_gb']
iso2022_jp ['iso2022jp', 'iso_2022_jp', 'csiso2022jp']
iso2022_jp_1 ['iso_2022_jp_1', 'iso2022jp_1']
iso2022_jp_2 ['iso_2022_jp_2', 'iso2022jp_2']
iso2022_jp_2004 ['iso_2022_jp_2004', 'iso2022jp_2004']
iso2022_jp_3 ['iso_2022_jp_3', 'iso2022jp_3']
iso2022_jp_ext ['iso2022jp_ext', 'iso_2022_jp_ext']
iso2022_kr ['iso_2022_kr', 'iso2022kr', 'csiso2022kr']
iso8859_10 ['csisolatin6', 'l6', 'iso_8859_10_1992',
'iso_ir_157', 'iso_8859_10', 'latin6']
iso8859_11 ['iso_8859_11', 'thai', 'iso_8859_11_2001']
iso8859_13 ['iso_8859_13']
iso8859_14 ['iso_celtic', 'iso_ir_199', 'l8',
'iso_8859_14_1998', 'iso_8859_14', 'latin8']
iso8859_15 ['iso_8859_15']
iso8859_16 ['iso_8859_16_2001', 'l10', 'iso_ir_226',
'latin10', 'iso_8859_16']
iso8859_2 ['l2', 'csisolatin2', 'iso_ir_101',
'iso_8859_2', 'iso_8859_2_1987', 'latin2']
iso8859_3 ['iso_8859_3_1988', 'l3', 'iso_ir_109',
'csisolatin3', 'iso_8859_3', 'latin3']
iso8859_4 ['csisolatin4', 'l4', 'iso_ir_110',
'iso_8859_4', 'iso_8859_4_1988', 'latin4']
iso8859_5 ['iso_8859_5_1988', 'iso_8859_5', 'cyrillic',
'csisolatincyrillic', 'iso_ir_144']
iso8859_6 ['iso_8859_6_1987', 'iso_ir_127',
'csisolatinarabic', 'asmo_708', 'iso_8859_6',
'ecma_114', 'arabic']
iso8859_7 ['ecma_118', 'greek8', 'iso_8859_7',
'iso_ir_126', 'elot_928', 'iso_8859_7_1987',
'csisolatingreek', 'greek']
iso8859_8 ['iso_8859_8_1988', 'iso_ir_138',
'iso_8859_8', 'csisolatinhebrew', 'hebrew']
iso8859_9 ['l5', 'iso_8859_9_1989', 'iso_8859_9',
'csisolatin5', 'latin5', 'iso_ir_148']
johab ['cp1361', 'ms1361']
koi8_r ['cskoi8r']
latin_1 ['iso8859', 'csisolatin1', 'latin', 'l1',
'iso_ir_100', 'ibm819', 'cp819', 'iso_8859_1',
'latin1', 'iso_8859_1_1987', '8859']
mac_cyrillic ['maccyrillic']
mac_greek ['macgreek']
mac_iceland ['maciceland']
mac_latin2 ['maccentraleurope', 'maclatin2']
mac_roman ['macroman']
mac_turkish ['macturkish']
mbcs ['dbcs']
ptcp154 ['cp154', 'cyrillic-asian', 'csptcp154', 'pt154']
quopri_codec ['quopri', 'quoted_printable',
'quotedprintable']
rot_13 ['rot13']
shift_jis ['s_jis', 'sjis', 'shiftjis', 'csshiftjis']
shift_jis_2004 ['shiftjis2004', 's_jis_2004', 'sjis_2004']
shift_jisx0213 ['shiftjisx0213', 'sjisx0213', 's_jisx0213']
tactis ['tis260']
tis_620 ['tis620', 'tis_620_2529_1', 'tis_620_2529_0',
'iso_ir_166', 'tis_620_0']
utf_16 ['utf16', 'u16']
utf_16_be ['utf_16be', 'unicodebigunmarked']
utf_16_le ['utf_16le', 'unicodelittleunmarked']
utf_7 ['u7', 'utf7']
utf_8 ['u8', 'utf', 'utf8_ucs4', 'utf8_ucs2', 'utf8']
uu_codec ['uu']
zlib_codec ['zlib', 'zip']
Logged In: YES
user_id=38388
Doc patches are welcome - perhaps you could enhance your
script to have the doc table generated from the available
codecs and aliases ?!
Thanks.
Logged In: YES
user_id=197677
I would very much like to produce the doc table from code.
However, I have a few questions.
It seems that encodings.aliases.aliases is a list of all
encodings and not necessarily those supported on all
machines. Ie. mbcs on UNIX or embedded systems that might
exclude some large character sets to save space. Is this
correct? If so, will it remain that way?
To find out if an encoding is supported on the current
machine, the code should handle the exception generated when
codecs.lookup() fails. Right?
To generate the table, I need to produce the "Languages"
field. This information does not seem to be available from
the Python runtime. I would much rather see this
information, including a localized version of the string,
come from the Python runtime, rather than hardcode it into
the script. Is that a possibility? Would it be a better
approach?
The non-language oriented encodings such as base_64 and
rot_13 do not seem to have anything that distinguishes them
from human languages. How can these be separated out
without hardcoding?
Likewise, the non-language encodings have an "Operand type"
field which would need to be generated. My feeling is,
again, that this should come from the Python runtime and not
be hardcoded into the doc generation script. Any suggestions?
Logged In: YES
user_id=21627
I would not like to see the documentation contain a complete
list of all aliases. The documentation points out that this
are "a few common aliases", ie. I selected aliases that
people are likely to encounter, and are encouraged to use.
I don't think it is useful to produce the table from the
code. If you want to know everything in aliases, just look
at aliases directly.
Logged In: YES
user_id=38388
Martin, I don't see any problem with putting the complete
list of aliases into the documentation.
liturgist, don't worry about hard-coding things into the
script. The extra information Martin gave in the table is
not likely going to become part of the standard lib, because
there's no a lot you can do with it programmatically.
Logged In: YES
user_id=197677
The script attached generates two HTML tables in files
specified on the command line.
usage: encodingaliases.py
<language-oriented-codecs-html-file>
<non-language-oriented-codecs-html-file>
A static list of codecs in this script is used because the
language description is not available in the python runtime.
Codecs found in the encodings.aliases.aliases list are
added to the list, but will be described as "unknown" encodings.
The "bijectiveType" was, like the language descriptions,
taken from the current (2.4.1) documentation.
It would be much better for the descriptions and "bijective"
type settings to come from the runtime. The problem is one
of maintenance. Without these available for introspection
in the runtime, a new encoding with no alias will never be
identified. When it does appear with an alias, it can only
be described as "unknown."
Logged In: YES
user_id=21627
I do see a problem with generating these tables
automatically. It suggests the reader that the aliases are
all equally relevant. However, I bet few people have ever
heard of or used, say, 'cspc850multilingual'.
As for the actual patch: Please don't generate HTML.
Instead, TeX should be generated, as this is the primary
source. Also please add a patch to the current TeX file,
updating it appropriately.
Logged In: YES
user_id=197677
For example: there appears to be a codec for iso8859-1, but
it has no alias in the encodings.aliases.aliases list and it
is not in the current documentation.
What is the relationship of iso8859_1 to latin_1? Should
iso8859_1 be considered a base codec? When should iso8859_1
be used rather than latin_1?
Logged In: YES
user_id=21627
I think the presence of iso8859_1.py is a bug, resulting
from automatic generation of these files. The file should be
deleted; iso8859-1 should be encoded through the alias to
latin-1. Thanks for pointing that out.
Logged In: YES
user_id=197677
If it does not present a problem, making latin_1 and alias
for iso8859_1 as the base codec would present the ISO
standards as a complete, orthogonal set. The alias would
mean that no existing code is broken. Right?
Would this approach present any problem? Should this
become a separate bug entry?
Logged In: YES
user_id=21627
It does present a problem: the latin-1 codec is faster than
the iso8859-1 codec, as it is a special case in C (employin
the fact that Latin-1 and Unicode share the first 256 code
points). So I think the iso8859-1 should be dropped. But, as
you guess, this is an issue independent of the documentation
issue at hand, and should be reported (and resolved) separately. | https://bugs.python.org/issue1249749 | CC-MAIN-2020-45 | refinedweb | 1,419 | 61.46 |
Chris Wilson <ch...@chris-wilson.co.uk> writes: > Quoting Scott D Phillips (2018-04-03 21:05:43) >> Rename the (un)map_gtt functions to (un)map_map (map by >> returning a map) and add new functions (un)map_tiled_memcpy that >> return a shadow buffer populated with the intel_tiled_memcpy >> functions. >> >> Tiling/detiling with the cpu will be the only way to handle Yf/Ys >> tiling, when support is added for those formats. >> >> v2: Compute extents properly in the x|y-rounded-down case (Chris Wilson) >> >> v3: Add units to parameter names of tile_extents (Nanley Chery) >> Use _mesa_align_malloc for the shadow copy (Nanley) >> Continue using gtt maps on gen4 (Nanley) >> >> v4: Use streaming_load_memcpy when detiling >> --- >> src/mesa/drivers/dri/i965/intel_mipmap_tree.c | 108 >> ++++++++++++++++++++++++-- >> 1 file changed, 100 insertions(+), 8 deletions(-) >> >> diff --git a/src/mesa/drivers/dri/i965/intel_mipmap_tree.c >> b/src/mesa/drivers/dri/i965/intel_mipmap_tree.c >> index 23cb40f3226..58ffe868d0d 100644 >> --- a/src/mesa/drivers/dri/i965/intel_mipmap_tree.c >> +++ b/src/mesa/drivers/dri/i965/intel_mipmap_tree.c
Advertising
[...] >> @@ -3093,11 +3094,93 @@ intel_miptree_map_gtt(struct brw_context *brw, >> } >> >> static void >> -intel_miptree_unmap_gtt(struct intel_mipmap_tree *mt) >> +intel_miptree_unmap_map(struct intel_mipmap_tree *mt) >> { >> intel_miptree_unmap_raw(mt); >> } >> >> +/* Compute extent parameters for use with tiled_memcpy functions. >> + * xs are in units of bytes and ys are in units of strides. */ >> +static inline void >> +tile_extents(struct intel_mipmap_tree *mt, struct intel_miptree_map *map, >> + unsigned int level, unsigned int slice, unsigned int *x1_B, >> + unsigned int *x2_B, unsigned int *y1_el, unsigned int *y2_el) >> +{ >> + unsigned int block_width, block_height; >> + unsigned int x0_el, y0_el; >> + >> + _mesa_get_format_block_size(mt->format, &block_width, &block_height); >> + >> + assert(map->x % block_width == 0); >> + assert(map->y % block_height == 0); >> + >> + intel_miptree_get_image_offset(mt, level, slice, &x0_el, &y0_el); >> + *x1_B = (map->x / block_width + x0_el) * mt->cpp; >> + *y1_el = map->y / block_height + y0_el; >> + *x2_B = (DIV_ROUND_UP(map->x + map->w, block_width) + x0_el) * mt->cpp; >> + *y2_el = DIV_ROUND_UP(map->y + map->h, block_height) + y0_el; >> +} >> + >> +static void >> +intel_miptree_map_tiled_memcpy(struct brw_context *brw, >> + struct intel_mipmap_tree *mt, >> + struct intel_miptree_map *map, >> + unsigned int level, unsigned int slice) >> +{ >> + unsigned int x1, x2, y1, y2; >> + tile_extents(mt, map, level, slice, &x1, &x2, &y1, &y2); >> + map->stride = ALIGN(_mesa_format_row_stride(mt->format, map->w), 16); >> + >> + /* The tiling and detiling functions require that the linear buffer >> + * has proper 16-byte alignment (that is, `x0` is 16-byte aligned). > > Throw in an its here, i.e. (that is, its `x0`...) Just spent a few > moments going what x0 before remembering it's the internal x0 of > tiled_to_linear(). > > We really want to move that knowledge back to intel_tiled_memcpy.c. A > single user isn't enough to justify a lot of effort though (or be sure > you get the interface right). You mean putting the code to decide the stride and alignment requirements by the detiling code, something like alloc_linear_for_tiled? >> + * Here we over-allocate the linear buffer by enough bytes to get >> + * the proper alignment. >> + */ >> + map->buffer = _mesa_align_malloc(map->stride * (y2 - y1) + (x1 & 0xf), >> 16); >> + map->ptr = (char *)map->buffer + (x1 & 0xf); >> + assert(map->buffer); >> + >> + if (!(map->mode & GL_MAP_INVALIDATE_RANGE_BIT)) { >> + char *src = intel_miptree_map_raw(brw, mt, map->mode | MAP_RAW); >> + src += mt->offset; >> + >> + const mem_copy_fn fn = >> +#if defined(USE_SSE41) >> + cpu_has_sse4_1 ? (mem_copy_fn)_mesa_streaming_load_memcpy : >> +#endif >> + memcpy; > > So always use a streaming load and bypass cache, even coming from WB. > Justifiable I believe, since there is no reason to keep it in cache as > the modification is on map->buffer not the tiled bo. > > But do we want to use this path if !USE_SSE41 and WC? Let's see if > that's excluded. Presently the logic is to always do map_tiled_memcpy for tiled surfaces, except on gen 4 where we finally could do a gtt map. You're saying we're better off doing a gtt map if we do have a wc map and don't have movntdqa? That sounds reasonable >> static void >> intel_miptree_map_blit(struct brw_context *brw, >> struct intel_mipmap_tree *mt, >> @@ -3655,8 +3738,11 @@ intel_miptree_map(struct brw_context *brw, >> (mt->surf.row_pitch % 16 == 0)) { >> intel_miptree_map_movntdqa(brw, mt, map, level, slice); >> #endif >> + } else if (mt->surf.tiling != ISL_TILING_LINEAR && >> + brw->screen->devinfo.gen > 4) { >> + intel_miptree_map_tiled_memcpy(brw, mt, map, level, slice); >> } else { >> - intel_miptree_map_gtt(brw, mt, map, level, slice); >> + intel_miptree_map_map(brw, mt, map, level, slice); > > Ah, the remaining choice is to go through a GTT map if tiled_memcpy > doesn't support it. > > So memcpy is the only option right now, that is painful. Maybe use > perf_debug if we hit map_map. > > I'm never sure how map_blit ties into this. We definitely want to use > the direct copy over the indirect blit in the majority, if not all, > cases. > > As it stands, as an improvement over map_gtt, > From: Chris Wilson <ch...@chris-wilson.co.uk> Was this meant to be R-b, or maybe it's the pointed absence of one :) _______________________________________________ mesa-dev mailing list mesa-dev@lists.freedesktop.org | https://www.mail-archive.com/mesa-dev@lists.freedesktop.org/msg190377.html | CC-MAIN-2018-17 | refinedweb | 766 | 53.21 |
- Code: Select all
import pyexpat
And getting an error about pyexpat missing. I tried various ways of linking to /usr/lib/python2.6/ but then I would get "pyexpat.so misses symbol _Py_HashSecret".
I'm not sure if this is due to a difference in python versions? 2.6.6 bundled with sublimetext, 2.6.8 on the system.
To fix this on debian
- Code: Select all
sudo apt-get deb-build python2.6
Download python-2.6.6 source and unzip. You want to install to a local prefix and configure unicode for 4 bytes, so for example
- Code: Select all
./configure --prefix=/home/daniel/Downloads/Python-2.6.6/mypy --enable-unicode=ucs4
make
make install
After that, i copied the `mypy/lib/python2.6` directory to `<sublime_root>/lib/` with that same folder name, `python2.6`.
After starting SublimeText2 and try `import pyexpat`, it works fine. I tried the SuperAnt plugin and it now works as well. Just an FYI from someone in need. | http://www.sublimetext.com/forum/viewtopic.php?f=3&t=10078&p=39971 | CC-MAIN-2015-35 | refinedweb | 166 | 62.85 |
Last Updated on August 14, 2020
Deep Learning for NLP Crash Course.
Bring Deep Learning methods to Your Text Data project in 7 Days.
We are awash with text, from books, papers, blogs, tweets, news, and increasingly text from spoken utterances.
Working with text is hard as it requires drawing upon knowledge from diverse domains such as linguistics, machine learning, statistical methods, and these days, deep learning.
Deep learning methods are starting to out-compete the classical and statistical methods on some challenging natural language processing problems with singular and simpler models.
In this crash course, you will discover how you can get started and confidently develop deep learning for natural language processing problems using Python in 7 days.
This is a big and important post. You might want to bookmark it.
Kick-start your project with my new book Deep Learning for Natural Language Processing, including step-by-step tutorials and the Python source code files for all examples.
Let’s get started.
- Update Jan/2020: Updated API for Keras 2.3 and TensorFlow 2.0.
- Update Aug/2020: Updated link to movie review dataset.
How to Get Started with Deep Learning for Natural Language Processing
Photo by Daniel R. Blume,:
- You need to know your way around basic Python, NumPy and Keras for deep learning.
You do NOT need to know:
- You do not need to be a math wiz!
- You do not need to be a deep learning expert!
- You do not need to be a linguist!
This crash course will take you from a developer that knows a little machine learning to a developer who can bring deep learning methods to your own natural language processing deep learning for natural language processing in Python:
- Lesson 01: Deep Learning and Natural Language
- Lesson 02: Cleaning Text Data
- Lesson 03: Bag-of-Words Model
- Lesson 04: Word Embedding Representation
- Lesson 05: Learned Embedding
- Lesson 06: Classifying Text
- Lesson 07: Movie Review Sentiment Analysis deep learning, natural language processing and the best-of-breed tools in Python (hint, I have all of the answers directly on this blog, use the search box).
I do provide more help in the form of links to related posts because I want you to build up some confidence and inertia.
Post your results in the comments, I’ll cheer you on!
Hang in there, don’t give up.
Note: This is just a crash course. For a lot more detail and 30 fleshed out tutorials, see my book on the topic titled “Deep Learning for Natural Language Processing“.
Need help with Deep Learning for Text Data?
Take my free 7-day email crash course now (with code).
Click to sign-up and also get a free PDF Ebook version of the course.
Start Your FREE Crash-Course Now
Lesson 01: Deep Learning and Natural Language
In this lesson, you will discover a concise definition for natural language, deep learning and the promise of deep learning for working with text data. problem of understanding text is not solved, and may never be, is primarily because language is messy. There are few rules. And yet we can easily understand each other most of the time.
Deep Learning
Deep Learning is a subfield of machine learning concerned with algorithms inspired by the structure and function of the brain called artificial neural networks.
A property of deep learning is that the performance of these type of model improves by training them with more examples by increasing their depth or representational capacity.
In addition to scalability, another often cited benefit of deep learning models is their ability to perform automatic feature extraction from raw data, also called feature learning.
Promise of Deep Learning for NLP
Deep learning methods are popular for natural language, primarily because they are delivering on their promise.
Some of the first large demonstrations of the power of deep learning were in natural language processing, specifically speech recognition. More recently in machine translation.
The 3 key promises of deep learning for natural language processing are as follows:
-.
Natural language processing is not “solved“, but deep learning is required to get you to the state-of-the-art on many challenging problems in the field.
Your Task
For this lesson you must research and list 10 impressive applications of deep learning methods in the field of natural language processing. Bonus points if you can link to a research paper that demonstrates the example.
Post your answer in the comments below. I would love to see what you discover.
More Information
- What Is Natural Language Processing?
- What is Deep Learning?
- Promise of Deep Learning for Natural Language Processing
- 7 Applications of Deep Learning for Natural Language Processing
In the next lesson, you will discover how to clean text data so that it is ready for modeling.
Lesson 02: Cleaning Text Data
In this lesson, you will discover how you can load and clean text data so that it is ready for modeling using both manually and with the NLTK Python library.
Text is Messy
You cannot go straight from raw text to fitting a machine learning or deep learning model.
You must clean your text first, which means splitting it into words and normalizing issues such as:
- Upper and lower case characters.
- Punctuation within and around words.
- Numbers such as amounts and dates.
- Spelling mistakes and regional variations.
- Unicode characters
- and much more…
Manual Tokenization
Generally, we refer to the process of turning raw text into something we can model as “tokenization”, where we are left with a list of words or “tokens”.
We can manually develop Python code to clean text, and often this is a good approach given that each text dataset must be tokenized in a unique way.
For example, the snippet of code below will load a text file, split tokens by whitespace and convert each token to lowercase.
You can imagine how this snippet could be extended to handle and normalize Unicode characters, remove punctuation and so on.
NLTK Tokenization
Many of the best practices for tokenizing raw text have been captured and made available in a Python library called the Natural Language Toolkit or NLTK for short.
You can install this library using pip by typing the following on the command line:
After it is installed, you must also install the datasets used by the library, either via a Python script:
or via a command line:
Once installed, you can use the API to tokenize text. For example, the snippet below will load and tokenize an ASCII text file.
There are many tools available in this library and you can further refine the clean tokens using your own manual methods, such as removing punctuation, removing stop words, stemming and much more.
Your Task
Your task is to locate a free classical book on the Project Gutenberg website, download the ASCII version of the book and tokenize the text and save the result to a new file. Bonus points for exploring both manual and NLTK approaches.
Post your code in the comments below. I would love to see what book you choose and how you chose to tokenize it.
More Information
In the next lesson, you will discover the bag-of-words model.
Lesson 03: Bag-of-Words Model
In this lesson, you will discover the bag of words model and how to encode text using this model so that you can train a model using the scikit-learn and Keras Python libraries.
Bag-of-Words
The bag-of-words model is a way of representing text data when modeling text with machine learning algorithms.
The approach is very simple and flexible, and can be used in a myriad of ways for extracting features from documents.
A bag-of-words is a representation of text that describes the occurrence of words within a document.
A vocabulary is chosen, where perhaps some infrequently used words are discarded. A given document of text is then represented using a vector with one position for each word in the vocabulary and a score for each known word that appears (or not) in the document.
It is called a “bag” of words, because any information about the order or structure of words in the document is discarded. The model is only concerned with whether known words occur in the document, not where in the document.
Bag-of-Words with scikit-learn
The scikit-learn Python library for machine learning provides tools for encoding documents for a bag-of-words model.
An instance of the encoder can be created, trained on a corpus of text documents and then used again and again to encode training, test, validation and any new data that needs to be encoded for your model.
There is an encoder to score words based on their count called CountVectorizer, one for using a hash function of each word to reduce the vector length called HashingVectorizer, and a one that uses a score based on word occurrence in the document and the inverse occurrence across all documents called TfidfVectorizer.
The snippet below shows how to train the TfidfVectorizer bag-of-words encoder and use it to encode multiple small text documents.
Bag-of-Words with Keras
The Keras Python library for deep learning also provides tools for encoding text using the bag-of words-model in the Tokenizer class.
As above, the encoder must be trained on source documents and then can be used to encode training data, test data and any other data in the future. The API also has the benefit of performing basic tokenization prior to encoding the words.
The snippet below demonstrates how to train and encode some small text documents using the Keras API and the ‘count’ type scoring of words.
Your Task
Your task in this lesson is to experiment with the scikit-learn and Keras methods for encoding small contrived text documents for the bag-of-words model. Bonus points if you use a small standard text dataset of documents to practice on and perform data cleaning as part of the preparation.
Post your code in the comments below. I would love to see what APIs you explore and demonstrate.
More Information
- A Gentle Introduction to the Bag-of-Words Model
- How to Prepare Text Data for Machine Learning with scikit-learn
- How to Prepare Text Data for Deep Learning with Keras
In the next lesson, you will discover word embeddings.
Lesson 04: Word Embedding Representation
In this lesson, you will discover the word embedding distributed representation and how to develop a word embedding using the Gensim Python library.
Word Embeddings
Word.
Word embedding methods learn a real-valued vector representation for a predefined fixed sized vocabulary from a corpus of text.
Train Word Embeddings
You can train a word embedding distributed representation using the Gensim Python library for topic modeling.
Gensim offers an implementation of the word2vec algorithm, developed at Google for the fast training of word embedding representations from text documents,
You can install Gensim using pip by typing the following on your command line:
The snippet below shows how to define a few contrived sentences and train a word embedding representation in Gensim.
Use Embeddings
Once trained, the embedding can be saved to file to be used as part of another model, such as the front-end of a deep learning model.
You can also plot a projection of the distributed representation of words to get an idea of how the model believes words are related. A common projection technique that you can use is the Principal Component Analysis or PCA, available in scikit-learn.
The snippet below shows how to train a word embedding model and then plot a two-dimensional projection of all words in the vocabulary.
Your Task
Your task in this lesson is to train a word embedding using Gensim on a text document, such as a book from Project Gutenberg. Bonus points if you can generate a plot of common words.
Post your code in the comments below. I would love to see what book you choose and any details of the embedding that you learn.
More Information
- What Are Word Embeddings for Text?
- How to Develop Word Embeddings in Python with Gensim
- Project Gutenberg
In the next lesson, you will discover how a word embedding can be learned as part of a deep learning model.
Lesson 05: Learned Embedding
In this lesson, you will discover how to learn a word embedding distributed representation for words as part of fitting a deep learning model. You must specify the input_dim which is the size of the vocabulary, the output_dim which is the size of the vector space of the embedding, and optionally the input_length which is the number of words in input sequences.
Or, more concretely, a vocabulary of 200 words, a distributed representation of 32 dimensions and an input length of 50 words.
Embedding with Model
The Embedding layer can be used as the front-end of a deep learning model to provide a rich distributed representation of words, and importantly this representation can be learned as part of training the deep learning model.
For example, the snippet below will define and compile a neural network with an embedding input layer and a dense output layer for a document classification problem.
When the model is trained on examples of padded documents and their associated output label both the network weights and the distributed representation will be tuned to the specific data.
It is also possible to initialize the Embedding layer with pre-trained weights, such as those prepared by Gensim and to configure the layer to not be trainable. This approach can be useful if a very large corpus of text is available to pre-train the word embedding.
Your Task
Your task in this lesson is to design a small document classification problem with 10 documents of one sentence each and associated labels of positive and negative outcomes and to train a network with word embedding on these data. Note that each sentence will need to be padded to the same maximum length prior to training the model using the Keras pad_sequences() function. Bonus points if you load a pre-trained word embedding prepared using Gensim.
Post your code in the comments below. I would love to see what sentences you contrive and the skill of your model.
More Information
- Data Preparation for Variable Length Input Sequences
- How to Use Word Embedding Layers for Deep Learning with Keras
In the next lesson, you will discover how to develop deep learning models for classifying text.
Lesson 06: Classifying Text
In this lesson, you will discover the standard deep learning model for classifying text used on problems such as sentiment analysis of text.
Document Classification
Text classification describes a general class of problems such as predicting the sentiment of tweets and movie reviews, as well as classifying email as spam or not.
It is an important area of natural language processing and a great place to get started using deep learning techniques on text data.
Deep learning methods are proving very good at text classification, achieving state-of-the-art results on a suite of standard academic benchmark problems.
Embeddings + CNN
The modus operandi for text classification involves the use of a word embedding for representing words and a Convolutional Neural Network or CNN for learning how to discriminate documents on classification problems.
The architecture is comprised of three key pieces:
- Word Embedding Model: A distributed representation of words where different words that have a similar meaning (based on their usage) also have a similar representation.
- Convolutional Model: A feature extraction model that learns to extract salient features from documents represented using a word embedding.
- Fully-Connected Model: The interpretation of extracted features in terms of a predictive output.
This type of model can be defined in the Keras Python deep learning library. The snippet below shows an example of a deep learning model for classifying text documents as one of two classes.
Your Task
Your task in this lesson is to research the use of the Embeddings + CNN combination of deep learning methods for text classification and report on examples or best practices for configuring this model, such as the number of layers, kernel size, vocabulary size and so on.
Bonus points if you can find and describe the variation that supports n-gram or multiple groups of words as input by varying the kernel size.
Post your findings in the comments below. I would love to see what you discover.
More Information
In the next lesson, you will discover how to work through a sentiment analysis prediction problem.
Lesson 07: Movie Review Sentiment Analysis Project
In this lesson, you will discover how to prepare text data, develop and evaluate a deep learning model to predict the sentiment of movie reviews.
I want you to tie together everything you have learned in this crash course and work through a real-world problem end-to-end.
Movie Review Dataset
The Movie Review Dataset is a collection of movie reviews retrieved from the imdb.com website in the early 2000s by Bo Pang and Lillian Lee. The reviews were collected and made available as part of their research on natural language processing.
You can download the dataset from here:
- Movie Review Polarity Dataset (review_polarity.tar.gz, 3MB)
From this dataset you will develop a sentiment analysis deep learning model to predict whether a given movie review is positive or negative.
Your Task
Your task in this lesson is to develop and evaluate a deep learning model on the movie review dataset:
- Download and inspect the dataset.
- Clean and tokenize the text and save the results to a new file.
- Split the clean data into train and test datasets.
- Develop an Embedding + CNN model on the training dataset.
- Evaluate the model on the test dataset.
Bonus points if you can demonstrate your model by making a prediction on a new movie review, contrived or real. Extra bonus points if you can compare your model to a neural bag-of-words model.
Post your code and model skill in the comments below. I would love to see what you can come up with. Simpler models are preferred, but also try going really deep and see what happens.
More Information
- How to Prepare Movie Review Data for Sentiment Analysis
- How to Develop a Deep Learning Bag-of-Words Model for Predicting Movie Review Sentiment
- How to Develop a Word Embedding Model for Predicting Movie Review Sentiment
The End!
(Look How Far You Have Come)
You made it. Well done!
Take a moment and look back at how far you have come.
You discovered:
- What natural language processing is and the promise and impact that deep learning is having on the field.
- How to clean and tokenize raw text data manually and use NLTK to make it ready for modeling.
- How to encode text using the bag-of-words model with the scikit-learn and Keras libraries.
- How to train a word embedding distributed representation of words using the Gensim library.
- How to learn a word embedding distributed representation as a part of fitting a deep learning model.
- How to use word embeddings with convolutional neural networks for text classification problems.
- How to work through a real-world sentiment analysis problem end-to-end using deep learning methods.?
Let me know. Leave a comment below.
Hi, thanks for the useful blog.
Here’s 10 impressive NLP applications using deep learning methods:
1. Machine translation
Learning phrase representations using RNN encoder-decoder for statistical machine translation
2. Sentiment analysis
Aspect specific sentiment analysis using hierarchical deep learning
3. Text classification
Recurrent Convolutional Neural Networks for Text Classification.
4. Named Entity Recognition
Neural architectures for named entity recognition
5. Reading comprehension
Squad: 100,000+ questions for machine comprehension of text
6. Word segmentation
Deep Learning for Chinese Word Segmentation and POS Tagging.
7. Part-of-speech tagging
End-to-end sequence labeling via bi-directional lstm-cnns-crf
8. Intent detection
Query Intent Detection using Convolutional Neural Networks
9. Spam detection
Twitter spam detection based on deep learning
10. Summarization
Abstractive sentence summarization with attentive recurrent neural networks
Very nice!
1. LyreBird >> So this is natural language processing for speech generation and synthesis. Given a string it is able to correctly synthesize a human voice to read it to include the tonal inflections. The use of tonal inflection is a major improvement in making computer generated speech far more natural sounding than the normal flat sounding speak-n’-spell. This could pass the Turing Test if the user didn’t know if was computer generated.
2. >> You can input any line of text be it quotes from the Bible or Star Wars and its able to start writing a story automatically which is both intelligible and contextually accurate.
3. YouTube auto-caption generation
4. Facebook messenger / iMessage: When you type in a pre-formatted string like a phone number or email or date it recognizes this and automatically makes suggestions like adding it to your contacts of calendar.
5. Alexa is able to describe a number of scientific concepts in great detail
6. Amazon customer service: It is extremely rare that you will need to talk to an actual human in order to handle most of your customer service requests because the voice based chat bot is able to handle your requests.
7. Google voice search
8. Google translator
9. >> Is able to generate its own music in the style of any possible artist… Bon Jovi on piano… Mozart on electric guitar and merge the styles of various artists together (musical notes only)
10. >> Develops its own musics and writes its own lyrics in the style of various artists
Nice work!
Hello sir,
Im new to NLP,MachineLearning and other AI stuffs. I know this will be helpful to me. I always used to tweet your articles. Good work sir. And i will try my level best to update you my learning, since im running a business too, i may get a lil busy :). Thanks a ton for tutorial anyway.
Thanks, hang in there!
Hi Jason…. I am new to NLP. I have a dataset, which has 5000 records/samples with targets. For every 10 records, there is one Target/class. In other words, there are 500 targets/class. Besides, each target/class is multiple words (like a simple sentence). I really seek your advise/help in building a (NLP) model using this kind of data set. Thank you.
Perhaps start with some of the simple text classification models described here:
Before I get some rest, here is one interesting article I found as an example of NLP
Perhaps the biggest reason why I want to learn it: to train a network to detect fake news.
Thnaks Danh.
Hello Jason,
Are you soon going to publish a book for NLP?
I am really looking forward to it
Yes, I hope to release it in about 2 weeks.
Hello Jason,
As always thanks for your great, inspiring and distilled tutorial.i have bought almost all your books and looking forward to dive into new NLP book.
I want do machine translation from English to a rare language with scanty written resources so that native speaker can get spoken or written contents from Internet.
kindly advice or do a tutorial on specific approach in your future blogs.
I hope to have many tutorials on NMT.
I would recommend getting as many paired examples as you can and consider augmentation methods to reuse the pairs that you do have while training the model.
Here is the code used to tokenize ‘A Little Journey’ by Ray Bradbury. Ray Bradbury is one of my top favorite classic sci-fi authors. Unfortunately, Project Gutenberg didn’t have any Orwell so I went with him. I’m using the most basic way to tokenize but I will add onto it some more. Included in the repository is the text file containing the tokenized text.
Very cool Danh, thanks for sharing!
Well done for taking action, it is one of the biggest differences between those who succeed and those that don’t!
Thank you very much Jason. This is quite concise and crisp. Would like to hear more
Thanks Ravi.
Hello everyone
Here’s 30 impressive NLP applications using deep learning methods:
It is a map with links. Download this and open this on XMind.
Awesome, thanks for sharing Gilles!
Hey Jason. Just wanted to say thanks for the great tutorial. This provided the inspiration and help to actually complete an actual, working machine learning project. Granted, my model could use some improvement. But I was actually able to walk through the above steps to create and evaluate a model and then actually use it to make predictions on an unseen piece of data that was not used to train or test. Now, I want to go and try to get it even more accurate! Thanks again and keep up the great work!
Well done Kyle!
Hi Jason,
Here’s the code that I’m trying to implement for this entire tutorial . I’ve finished till theTokenisation Task for . A Tale of Two cities by charles dickens which is also one of my favourite books :
Thanks,
Hussain
Well done!
Hi Jason,
Thank you for this wonderful tutorial, it was really helpful. I learnt a lot by following your tutorials and additional information which you have provided.
In lesson 2 – I was able to see the difference between manual and NLTK approaches. There weer lot of tokens in NLTK approach. In manual approach the symbols are considered as a part of the word preceding it. This was my analysis.
In lesson 3- I used different versions of the betty botter tounge twister and was able to clean up the data. It was easy to implement since I followed the steps you provided.
In lesson 4- I did word embedding on A Little Journey, by Ray Bradbury from project Gutenberg and I was able to plot common words. I had to zoom in on the graph to see the words since they were all bit congested.
In lesson 5 – I implemented the task similar to your code, used ten docs on my own providing the labels . I didn’t face any troubles in doing that.
In lesson 6- I was able to achieve accuracy of 85 pc! I guess i did a mistake somewhere and I am looking to fix it.
Lesson 7- I am still working on it. I am done with the clean up part. The next step would be implementing Deep learning model.
I will share my learning once I am completely done. Thanks a lot for clear and concise material. It was indeed a great pleasure in reading and implementing your tutorials.
Regards,
Arjun
Well done!
Jason, thanks for your tutorial and mini-course. As requested,
– the code is placed at;
– I chose dickens_94_tale_of_two_cities.txt
Well done!
Ten application of Deep Learning in Natural Processing include:
1. Predicting the next word
2. Language translation
3. Music generation ()
4. Text classification ()
5. Automated email response()
6. Text summarization ()
7. Speech recognition
8. Information retrieval
9. Text clustering
10. Text generation
Thanks for sharing Andrew!
Hello,
Thanks a lot for your wonderful posts.
Are there other models rather than bag of words and word embedding? I want to find a proper model for a specific application, so I was thinking that I need to explore possible models.
I want to extract a name entity (NER) (such as authors’ names in articles). This can be a subtask of information extraction. If you have any suggestion/post on that, can you please point it to me?
I’m sure there are, but these two are the most popular and successful.
Sorry, I don’t have material on NER. I hope to cover it in the future.
I did the whole thing! I often start these things and don’t finish. Thanks for the opportunity.
Thanks Josh!
10 deep learning applications in NLP:
1. Text Classification and Categorization
Recurrent Convolutional Neural Networks for Text Classification()
2. Named Entity Recognition (NER)
Neural Architectures for Named Entity Recognition()
3. Paraphrase Detection
Detecting Semantically Equivalent Questions in Online User Forums()
4. Spell Checking
Personalized Spell Checking using Neural Networks()
5. Sequence labeling/tagging
Sequence labeling is a type of pattern recognition task that involves the algorithmic assignment of a categorical label to each member of a sequence of observed values.
Semi-supervised sequence tagging with bidirectional language models()
6. Semantic Role labeling
Deep Semantic Role Labeling with Self-Attention()
7. Semantic Parsing and Question Answering
Semantic Parsing via Staged Query Graph Generation Question Answering with Knowledge Base()
8. Language Generation and Multi-document Summarization
Natural Language Generation, Paraphrasing and Summarization of User Reviews with Recurrent Neural Networks()
9. Sentiment analysis
Twitter Sentiment Analysis Using Deep Convolutional Neural Network ()
10. Automatic summarization
Text Summarization Using Unsupervised Deep Learning ()
Great work!
Hi Jason,
Great tutorial, well structured. I’m a novice and learning from your tutorials.
I downloaded the project Gutenberg text file and did all the cleaning you have taught in lesson -2, I now have a tokenized text, with stopwords, punctuation, removed.
In lesson-3 I’m trying to implement the vectorizer on the pre processed data from the above step. Stemmed is my preprocessed data
vectorizer = CountVectorizer()
vectorizer.fit(Stemmed)
print(vectorizer.vocabulary)
When I print, I’m getting None as the output.
Can we vectorize on tokens?
I tried tfidf vectorizer and I’m able to get the vocabulary and idf, but when I print vector.to arry, I’m getting all 0 in the array
file = ‘C:/Study/Machine Learning/Dataset/NLP_Data_s.txt’
text = open(file,’rt’)
words = text.read()
text.close()
lower = str.lower(words)# convert all words to lower case
tokens = word_tokenize(lower)# tokenize words
table = str.maketrans(“”,””,string.punctuation)# remove punctuation on
tokens
remove_punct = [w.translate(table) for w in tokens]# remove punctuation on
tokens
stop_words = set(stopwords.words(‘english’))
remove_stop = [word for word in remove_punct if not word in stop_words]#
removed stop words
porter = PorterStemmer()
Stemmed = [porter.stem(word) for word in remove_stop]
vectorizer = TfidfVectorizer()
vectorizer.fit(Stemmed)
print(vectorizer.get_feature_names())
print(vectorizer.vocabulary_)
print(vectorizer.idf_)
vector= vectorizer.transform(Stemmed)
print(vector.shape)
print(type(vector))
print(vector.toarray())
Perhaps check the content of your data before the model to confirm that everything is working as you expect?
Perhaps this post will help:
I’m getting values now in the array,
Earlier I tokenized the complete data( 3 paragraphs) using word_tokenize
Now, I tokenized using sent_tokenize
So I can’t create a tfid vector with tokenized words?
You can, you might just need to convert the tokens back into a string with space first.
Thanks Jason
F10-SGD: Fast Training of Elastic-net Linear Models for Text Classification and Named-entity Recognition
======================================================================
On the Density of Languages Accepted by Turing Machines and Other Machine Models
======================================================================
Speech Recognition with no speech or with noisy speech
======================================================================
Actions Generation from Captions
======================================================================
Massively Multilingual Neural Machine Translation
======================================================================
Neural Related Work Summarization with a Joint Context-driven Attention Mechanism
======================================================================
Option Comparison Network for Multiple-choice Reading Comprehension
Well done!
I am not a computer wiz. Is there an easier way to set up a Python environment?
Yes, try this step by step tutorial:
1. Text Classification and Categorization
2. Named Entity Recognition (NER)
3. Part-of-Speech Tagging
4. Semantic Parsing and Question Answering
5. Paragraph Detection
6. Language Generation and Mukti-document Summarization
7. Machine Translation
8. Speech Recognition
9. Character Recognition
10. Spell Checking
Well done!
thank you very much for this explanation can you explain to me what is the
matrix embedding
Yes, perhaps start here:
1. Text to speech
2. Speech recognition
3. Multi label Image classification ()
4. Multitask learning (a generalization of previous item) ()
5.Knowledge-Powered Deep Learning for Word Embedding ()
6.Semantic natural language processing ()
7.Character recongintion
8. Spell checking
9.Sentiment analysis
Thanks for sharing!
Thank you Jason.
I’ve learned a lot through your elaborations in each step.
Thanks!
1) “Stylometry in Computer-Assisted Translation: Experiments on the Babylonian Talmud” () is about using NLP to support expert in translation from ancient hebraic to modern languages of a book which poses hard interpretation issues, The system learns the style-related features which enables it to disambiguate words, phrases and sentences, and provide suggestion to the equipe of experts in translation, further learning from their decisions.
2) “Multi-Task Learning in Deep Neural Network for Sentiment Polarity and Irony classification” () is about multi-task sentiment analysis which integrately deals with the tasks of: polarity detection (positive, negative) and irony detection, asserting that this integrated approach enhances the sentiment analysis
3) “SuperAgent: A Customer Service Chatbot for E-commerce Websites” () is about using corpora of products descriptions and of user provided contents and questions in order to train a chatbot for effective Q&A task with potential consumers.
4) “Deep Learning for Chatbots” () is a thesis about deep learning NLP to boost chatbots performances in conversation with humans.
5) AI (Visual and NLP) for translating the American Sign Language into text and viceversa “”
6) “CSI: A Hybrid Deep Model for Fake News Detection” ( )
7) ” Deep Semantic Role Labeling: What Works and What’s Next” () DL-NLP for Text Mining Tasks
8) DL-NLP for Smart Manifacturing: ehnhanced abstractive and extractive summarisation of technical documents to support human maintenance task by augmented reality device
9) DL-NLP in the Medical field: ehnhanced abstractive and extractive summarisation of medical documentation and clinical analysis results to support the human diagnostic task
10) Ontology-synthesis by DL-NLP enhanced text mining
Well done!
For Day2 task (using the Kafka’s methamorphosis text)
1- cleaning by splitting on white spaces
’]
2 – cleaning by splitting with reg-ex
’, ‘s’, ‘happened’, ‘to’, ‘me’, ‘he’, ‘thought’, ‘It’, ‘wasn’, ‘t’, ‘a’, ‘dream’, ‘His’, ‘room’]
3- cleaning by i) splitting on white spaces ii) removing punctuation
Whats’, ‘happened’, ‘to’, ‘me’, ‘he’, ‘thought’, ‘It’, ‘wasnt’, ‘a’, ‘dream’, ‘His’, ‘room’, ‘a’, ‘proper’, ‘human’]
4 – cleaning by splitting on white spaces and setting lower-case
’]
5 – sentence-level nltk token.
6 – word-level nltk tokenization
[‘One’, ‘morning’, ‘,’, ‘when’, ‘Gregor’, ‘Samsa’, ‘woke’, ‘from’, ‘troubled’, ‘dreams’, ‘,’, ‘he’, ‘found’, ‘himself’, ‘transformed’, ‘in’, ‘his’, ‘bed’, ‘into’, ‘a’, ‘horrible’, ‘vermin’, ‘.’, ‘He’, ‘lay’, ‘on’, ‘his’, ‘armour-like’, ‘back’, ‘,’, ‘and’, ‘if’, ‘he’, ‘lifted’, ‘his’, ‘head’, ‘a’, ‘little’, ‘he’, ‘could’, ‘see’, ‘his’, ‘brown’, ‘belly’, ‘,’, ‘slightly’, ‘domed’, ‘and’, ‘divided’, ‘by’, ‘arches’, ‘into’, ‘stiff’, ‘sections’, ‘.’,.’, ‘His’, ‘many’, ‘legs’, ‘,’, ‘pitifully’, ‘thin’, ‘compared’, ‘with’, ‘the’, ‘size’, ‘of’, ‘the’, ‘rest’, ‘of’, ‘him’, ‘,’, ‘waved’, ‘about’, ‘helplessly’, ‘as’, ‘he’, ‘looked’, ‘.’, ‘
‘, ‘What’, “‘s”, ‘happened’, ‘to’]
7- nltk tokenization and removal of non alphabetic token
happened’, ‘to’, ‘me’, ‘he’, ‘thought’, ‘It’, ‘was’, ‘a’, ‘dream’, ‘His’, ‘room’, ‘a’, ‘proper’, ‘human’, ‘room’]
8 – cleaning pipeline: nltk word tokenization/lowercase/remove punctuation/remove remaining non alphabetic/filter out stop words
[‘one’, ‘morning’, ‘gregor’, ‘samsa’, ‘woke’, ‘troubled’, ‘dreams’, ‘found’, ‘transformed’, ‘bed’, ‘horrible’, ‘vermin’, ‘lay’, ‘armourlike’, ‘back’, ‘lifted’, ‘head’, ‘little’, ‘could’, ‘see’, ‘brown’, ‘belly’, ‘slightly’, ‘domed’, ‘divided’, ‘arches’, ‘stiff’, ‘sections’, ‘bedding’, ‘hardly’, ‘able’, ‘cover’, ‘seemed’, ‘ready’, ‘slide’, ‘moment’, ‘many’, ‘legs’, ‘pitifully’, ‘thin’, ‘compared’, ‘size’, ‘rest’, ‘waved’, ‘helplessly’, ‘looked’, ‘happened’, ‘thought’, ‘nt’, ‘dream’, ‘room’, ‘proper’, ‘human’, ‘room’, ‘although’, ‘little’, ‘small’, ‘lay’, ‘peacefully’, ‘four’, ‘familiar’, ‘walls’, ‘collection’, ‘textile’, ‘samples’, ‘lay’, ‘spread’, ‘table’, ‘samsa’, ‘travelling’, ‘salesman’, ‘hung’, ‘picture’, ‘recently’, ‘cut’, ‘illustrated’, ‘magazine’, ‘housed’, ‘nice’, ‘gilded’, ‘frame’, ‘showed’, ‘lady’, ‘fitted’, ‘fur’, ‘hat’, ‘fur’, ‘boa’, ‘sat’, ‘upright’, ‘raising’, ‘heavy’, ‘fur’, ‘muff’, ‘covered’, ‘whole’, ‘lower’, ‘arm’, ‘towards’, ‘viewer’]
Thanks for sharing.
Hello !!
The documents for Day3 task are:
‘In the laboratory they study an important chemical formula’,
‘Mathematician often use this formula’,
‘An important race of Formula 1 will take place tomorrow’,
‘I use to wake up early in the morning’,
‘Often things are not as they appear’
(the idea is verifying that IDF lowers the contribution of words like “formula” that, although not so common, appears (with 3 different meaning) in 3 different documents. In this case the word provide no distinguishing features for the first 3 documents)
1-Bag-of-words representation – Scikit-learn TF-IDF
{‘wake’: 27, ‘as’: 3, ‘mathematician’: 10, ‘are’: 2, ‘will’: 28, ‘race’: 16, ‘the’: 19, ‘laboratory’: 9, ‘take’: 18, ‘to’: 23, ‘use’: 26, ‘early’: 5, ‘formula’: 6, ‘this’: 22, ‘up’: 25, ‘chemical’: 4, ‘morning’: 11, ‘study’: 17, ‘tomorrow’: 24, ‘in’: 8, ‘appear’: 1, ‘not’: 12, ‘important’: 7, ‘things’: 21, ‘of’: 13, ‘place’: 15, ‘they’: 20, ‘often’: 14, ‘an’: 0}
[1.69314718 2.09861229 2.09861229 2.09861229 2.09861229 2.09861229
1.40546511]
DOC n.1
(1, 29)
[[0.31161965 0. 0. 0. 0.38624453 0.
0.25867246 0.31161965 0.31161965 0.38624453 0. 0.
0. 0. 0. 0. 0. 0.38624453
0. 0.31161965 0.31161965 0. 0. 0.
0. 0. 0. 0. 0. ]]
DOC n.2
(1, 29)
[[0. 0. 0. 0. 0. 0.
0.34582166 0. 0. 0. 0.51637397 0.
0. 0. 0.41660727 0. 0. 0.
0. 0. 0. 0. 0.51637397 0.
0. 0. 0.41660727 0. 0. ]]
DOC n.3
(1, 29)
[[0.28980239 0. 0. 0. 0. 0.
0.24056216 0.28980239 0. 0. 0. 0.
0. 0.35920259 0. 0.35920259 0.35920259 0.
0.35920259 0. 0. 0. 0. 0.
0.35920259 0. 0. 0. 0.35920259]]
DOC n.4
(1, 29)
[[0. 0. 0. 0. 0. 0.37924665
0. 0. 0.30597381 0. 0. 0.37924665
0. 0. 0. 0. 0. 0.
0. 0.30597381 0. 0. 0. 0.37924665
0. 0.37924665 0.30597381 0.37924665 0. ]]
DOC n.5
(1, 29)
[[0. 0.39835162 0.39835162 0.39835162 0. 0.
0. 0. 0. 0. 0. 0.
0.39835162 0. 0.32138758 0. 0. 0.
0. 0. 0.32138758 0.39835162 0. 0.
0. 0. 0. 0. 0. ]]
2-Bag-of-words representation – Keras-COUNT. 1. 1. 1. 1. 1. 1. 0. 0. 1. 1. 1. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0.]
[0. 1. 0. 0. 0. 0. 0. 1. 1. 0. 0. 0. 1. 1. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0.]
[0. 1. 0. 0. 0. 1. 1. 0. 0. 0. 0. 0. 0. 0. 1. 1. 1. 1. 1. 1. 1. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0.]
[0. 0. 1. 1. 0. 0. 0. 0. 1. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 1. 1. 1. 1. 1. 1. 0. 0. 0. 0. 0.]
[0. 0. 0. 0. 1. 0. 0. 1. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 1. 1. 1. 1. 1.]]
3- -Bag-of-words representation Keras TF-IDF. 0.81093022 0.98082925 0.98082925 0.98082925 0.98082925
0.98082925 0. 0. 1.25276297 1.25276297 1.25276297
0. 0. 0. 0. 0. 0.
0. 0. 0. 0. 0. 0.
0. 0. 0. 0. 0. 0.
0. 0. ]
[0. 0.81093022 0. 0. 0. 0.
0. 0.98082925 0.98082925 0. 0. 0.
1.25276297 1.25276297 0. 0. 0. 0.
0. 0. 0. 0. 0. 0.
0. 0. 0. 0. 0. 0.
0. 0. ]
[0. 0.81093022 0. 0. 0. 0.98082925
0.98082925 0. 0. 0. 0. 0.
0. 0. 1.25276297 1.25276297 1.25276297 1.25276297
1.25276297 1.25276297 1.25276297 0. 0. 0.
0. 0. 0. 0. 0. 0.
0. 0. ]
[0. 0. 0.98082925 0.98082925 0. 0.
0. 0. 0.98082925 0. 0. 0.
0. 0. 0. 0. 0. 0.
0. 0. 0. 1.25276297 1.25276297 1.25276297
1.25276297 1.25276297 1.25276297 0. 0. 0.
0. 0. ]
[0. 0. 0. 0. 0.98082925 0.
0. 0.98082925 0. 0. 0. 0.
0. 0. 0. 0. 0. 0.
0. 0. 0. 0. 0. 0.
0. 0. 0. 1.25276297 1.25276297 1.25276297
1.25276297 1.25276297]]
Bye
Nice work!
Hello everybody
for day4 task I’ve used kafka’s methamorphosis (again).
As for python code,I combinated tutorial codes to
– tokenize at sentence level the whole dataset (the book)
– tokenize and clean (as for day2) each sentence
Then I went on training the word2vec model.
This is the python code:
from gensim.models import Word2Vec
from sklearn.decomposition import PCA
from matplotlib import pyplot
from nltk.tokenize import word_tokenize
import string
from nltk.corpus import stopwords
# define training data
#clean by nltk
filename = ‘metamorphosis_clean.txt’
file = open(filename, ‘rt’)
text = file.read()
file.close()
# split into sentences
from nltk import sent_tokenize
sentences = sent_tokenize(text)
sent_list = [];
for sent in sentences:
sent_tokens = word_tokenize(sent)
sent_tokens = [w.lower() for w in sent_tokens]
table = str.maketrans(”, ”, string.punctuation)
stripped = [w.translate(table) for w in sent_tokens]
# remove remaining tokens that are not alphabetic
words = [word for word in stripped if word.isalpha()]
# filter out stop words
stop_words = set(stopwords.words(‘english’))
words = [w for w in words if not w in stop_words]
sent_list.append(words)
print(sent_list)
#train model
model = Word2Vec(sent_list, min_count=1)
# fit a 2D PCA model to the vectors
X = model[model.wv.vocab]
pca = PCA(n_components=2)
result = pca.fit_transform(X)
# create a scatter plot of the projection
pyplot.scatter(result[:, 0], result[:, 1])
words = list(model.wv.vocab)
for i, word in enumerate(words):
pyplot.annotate(word, xy=(result[i, 0], result[i, 1]))
pyplot.show()
model.wv.save_word2vec_format(‘model.txt’, binary=False);
I’ve saved the ascii models of the word2vec model in the 2 cases of using or not using stopword removal in the phase of cleaning (result are different)
I would ask you if you may help me to enlarge the scatter plot of the PCA analysis, because the default visualisation is not understandable when using a relatively large vocabulary.
Thanks and best regard
Paolo Bussotti
Thanks for sharing!
More on changing the size of plots, set the “figsize” argument in:
Test for day5
—————– TEST1 ———————————————-
Settings: vocab_length = 100 (to minimise conflict with one_hot hashing)
embedding size = 8’
[[10 3 13 12 18 24 19 4]
[ 1 36 40 43 10 30 7 0]
[49 42 10 12 35 21 34 0]
[ 1 2 35 12 5 0 0 0]
[10 23 43 10 30 43 0 0]
[ 4 43 36 44 8 1 0 0]
[ 1 19 43 6 33 0 0 0]
[20 46 4 9 25 29 0 0]
[26 40 34 22 8 25 0 0]
[21 20 26 32 25 0 0 0]]
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
embedding_49 (Embedding) (None, 8, 8) 400
_________________________________________________________________
flatten_49 (Flatten) (None, 64) 0
_________________________________________________________________
dense_49 (Dense) (None, 1) 65
=================================================================
Total params: 465
Trainable params: 465
Non-trainable params: 0
_________________________________________________________________
None
Accuracy: 100.000000
Accuracy has been obtained using the code:
model.fit(padded_docs, labels, epochs=50, verbose=0)
loss, accuracy = model.evaluate(padded_docs, labels, verbose=0)
print(‘Accuracy: %f’ % (accuracy*100))
——————-TEST2—————————————–
Using the weigths from pretrained embeddings from “metamorphosis”,
and evalating the same 10 sentences of the test above (with embedding size = 100), I obtained:’
vocab_size = 49
[[1, 11, 12, 4, 13, 14, 15, 16], [2, 17, 5, 3, 6, 7, 18], [19, 20, 1, 21, 8, 22, 23],
[2, 24, 8, 4, 25], [1, 26, 3, 6, 7, 27], [28, 3, 29, 30, 9, 31],
[2, 32, 33, 34, 35], [36, 37, 38, 39, 40, 41], [42, 5, 43, 44, 9, 10], [45, 46, 47, 48, 10]]
[[ 1 11 12 4 13 14 15 16]
[ 2 17 5 3 6 7 18 0]
[19 20 1 21 8 22 23 0]
[ 2 24 8 4 25 0 0 0]
[ 1 26 3 6 7 27 0 0]
[28 3 29 30 9 31 0 0]
[ 2 32 33 34 35 0 0 0]
[36 37 38 39 40 41 0 0]
[42 5 43 44 9 10 0 0]
[45 46 47 48 10 0 0 0]]
Loaded 2581 word vectors.
Embedding_matrix (weights for pretraining):
[[ 0.00000000e+00 0.00000000e+00 0.00000000e+00 … 0.00000000e+00
0.00000000e+00 0.00000000e+00]
[-5.84273934e-01 4.10322547e-02 5.30006588e-01 … 2.99149841e-01
6.40603900e-02 4.65089172e-01]
[-1.88604981e-01 1.39787989e-02 1.74110234e-01 … 1.01836950e-01
2.37775557e-02 1.49607286e-01]
…
[-2.63660215e-03 1.93641812e-03 -1.67637059e-04 … 4.98276250e-03
4.28335834e-03 -9.61201615e-04]
[ 0.00000000e+00 0.00000000e+00 0.00000000e+00 … 0.00000000e+00
0.00000000e+00 0.00000000e+00]
[-6.03308380e-01 4.21844646e-02 5.49575210e-01 … 3.08609307e-01
6.65459782e-02 4.79907840e-01]]
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
embedding_61 (Embedding) (None, 8, 100) 4900
_________________________________________________________________
flatten_60 (Flatten) (None, 800) 0
_________________________________________________________________
dense_60 (Dense) (None, 1) 801
=================================================================
Total params: 5,701
Trainable params: 801
Non-trainable params: 4,900
_________________________________________________________________
None
Accuracy: 89.999998
Nice work!
I used the codes in and substitute the ten sentences and the file for pretraining (i.e. the metamorphosis words embedding obtained with Gensim)
Best Regards
Paolo Bussotti
I have a question when we are using sigmoid activation function in deep learning our output comes in form of probabilities.but i need output in the form of 0 and 1 and for that I have to set a threshold value so what could be the best threshold value???
You can round the value to et a crisp class label.
Dear Jason,
I am stuck with one of the most basic tasks – tokenization. When I try to run your sample code from Lesson 2 using NLTK, I get the following (see below). Any idea what could be wrong?
Thanks for your help!
Špela
(base) [email protected] ~/Documents/python $ python tokenize.py
Traceback (most recent call last):
File “tokenize.py”, line 7, in
from nltk.tokenize import word_tokenize
File “/opt/anaconda3/lib/python3.7/site-packages/nltk/__init__.py”, line 98, in
from nltk.internals import config_java
File “/opt/anaconda3/lib/python3.7/site-packages/nltk/internals.py”, line 10, in
import subprocess
File “/opt/anaconda3/lib/python3.7/subprocess.py”, line 155, in
import threading
File “/opt/anaconda3/lib/python3.7/threading.py”, line 8, in
from traceback import format_exc as _format_exc
File “/opt/anaconda3/lib/python3.7/traceback.py”, line 5, in
import linecache
File “/opt/anaconda3/lib/python3.7/linecache.py”, line 11, in
import tokenize
File “/Users/spelavintar/Documents/python/tokenize.py”, line 7, in
from nltk.tokenize import word_tokenize
File “/opt/anaconda3/lib/python3.7/site-packages/nltk/tokenize/__init__.py”, line 65, in
from nltk.data import load
File “/opt/anaconda3/lib/python3.7/site-packages/nltk/data.py”, line 58, in
from nltk.internals import deprecated
ImportError: cannot import name ‘deprecated’ from ‘nltk.internals’ (/opt/anaconda3/lib/python3.7/site-packages/nltk/internals.py)
Sorry to hear that, it might be a version thing.
This might help:
And this:
Very helpful, I really like this article, good job and thanks for sharing such information.
Thanks!
I put a notebook of task #4 here: where I did it to Genesis. I had to clean out al the numbers, and it’s still kind of odd – most words cluster near the origin, and very high use words spread out to high X, and abs(Y) <~ X
It bothered me for a while that 'And' and 'and' wheren't that close, but okay, they're not being used in quite the same way, so that makes sense. Though maybe I should've just lower-cased everything.
Well done.
Thanks. I also put a notebook for task #5 here – it took me an embarrassingly long amount of time to understand how the tokenizer was interacting with the embedding layer – I think the code might’ve worked for at least twenty minutes before I understood -why-.
That said, although it can get 100% on the training data, I’m pretty convinced it’s just overfitting the small sample; when I try it on other sentences it’s essentially random and the model values are all in the range 0.45-0.55, which I take to mean the model has minimal confidence in it’s answer.
Well done!
I started this course recently and here’s my answer to the first task.
1. Tuning word/phrase spaces
2. Cleaning the voice recording up to make speech recognition easier
3. Dividing the input text into phrases and understand its structure
4. Building sentences from given ideas
5. Finding stylistical or syntax errors in given text
6. Assessing text quality and logic
7. Extracting data from text for future usage
8. Translating text between languages
9. Recognizing person’s accent or dialect from voice recording
10. Recognizing and changng text style
Well done!
Hi Jason,
My answer:
1. Sentence detection positive or negative
2. Sentiment analysis of person
3. Chat bot application (Alexa, Siri.)
4. Email Spam detection
5. Typing assistance
6. Google search engine voice assistance
7. Word to images conversion
8. Reading road text & convert in to action for Autonomous car
9. Multilingual Audio conversion
10. Language translation (google translate)
Well done!
Hi Jason, I was trying with just this file read as in your email & it was showing error at line#1347 for Story book ‘Midshipman Marrill’
Theni read on internet & found solution:
Old:
file = open(filename, ‘rt’)
new:
file = open(filename, encoding=”utf8″)
Working with this
Thanks for sharing.
How to share the new file with processed words ?
project gutenberg ebook midshipman merril henri harrison lewi ebook use anyon anywher unit state part world cost almost restrict whatsoev may copi give away reus term project gutenberg licens includ ebook onlin wwwgutenbergorg locat unit state check law countri locat use ebook titl midshipman merril author henri harrison lewi releas date novemb ebook languag english charact set encod produc demian katz craig kirkwood onlin distribut proofread team http wwwpgdpnet imag courtesi digit librari villanova univers http digitallibraryvillanovaedu
Bag of Words: This is good for small sentences & paragraph. For big document it will create a sparse matrix, have no importance to sequences.
The sparse matrix problem will be solved with:
TF-IDF: Handle word frequency & common words frequency in document differently(this assign it less frequency to common words).
Encoding with one_hot: Create matrix with different weightage for words & will be dense matrix, give importance to sequence. This will be helpful for search engine, language translation etc..
Thanks for sharing.
Day 1 Task
1. Text Classification and Categorization
2. Spell Checking
3. Character Recognition
4. Part-of-Speech Tagging
5. Named Entity Recognition (NER)
6. Semantic Parsing and Question Answering
7. Language Generation and Multi-document Summarization
8. Machine Translation
9. Speech Recognition
10. Paraphrase Detection
Well done!
Please find my list of applications of DL for the NLP domain below:
1. NMT (e.g., AE application in “Attention Is All You Need”, in NIPS 2017 proceedings)
2. Bunch of researches in speech recognition domain
3. -//- in text correction domain
4. -//- text generation systems (incl. dialogue generation)
5. Health diagnosis prediction based on speech (e.g. “Predicting Depression in Screening Interviews from Latent Categorization of Interview Prompts”, in ACL 2020 proceedings)
6. Defining emotions of text (e.g., “Learning and Evaluating Emotion Lexicons for 91 Languages” in ACL 2020 proceedings)
7. Text summarization (e.g., title/keywords generation)
8. Bunch of researches in text classification domain
9. NL to machine language translation (incl. NL2(p)SQL: “NL2pSQL: Generating Pseudo SQL Queries from Under-Specified Natural Language Questions” in EMNLP 2019 proceedings)
10. Bunch of researches in detecting fake news, spam etc. domain
Well done!
While doing lesson 2 realized that the Korean language is not supported by nltk library.
The konply library can be used instead, but most of the tutorials are in korean.
Thanks for the note!
Day 1:
1- Email classification
2- Sentiment analysis (coursera)
3- Generating Captions
4- Sequence-to-Sequence Prediction
5- Chatbots
6- Language translation (google translate/deepl translator )
7- Text summarization
8- ML-powered autocomplete feature( gmail autocomplete and smart reply)
9- Predictive text generator
10- Fake news detection
ell done!
1. Tokenization and Text Classification ()
2. Generating Captions for Images ()
3. Speech Recognition ()
4. Machine Translation ()
5. Question Answering ()
6. Document Summarization ()
7. Language Modeling ()
8. Chatbots ()
9. NER ()
10. Plagiarism Detection ()
Well done!
Day 1 task:
Applications of deep learning in natural language processing:
1. Sentence completion using text prediction, for example, the Gmail autocomplete feature.
2. Grammar and spelling checker, for example, the Grammarly web app.
3. Building automated chatbots
4. Speech Recognition
5. Text Classifier [Classifying amazon product reviews, movie reviews, or book reviews into positive, negative or neutral]
6. Language Translator
7. Targeted advertising [On online shopping platforms]
8. Caption Generator
9. Document summarization
10. Urgency detection
Well done!
Hi Jason:
Thank you for this tutorial !
I came back to this NLP DL crash course looking for references to end to end applications on text classification tutorials.
Do you have any other NLP text classification tutorial different to the classic one “sentiment analysis”? More precisely I am interested on Text Multi-Label Class
thanks
I don’t think so, offhand.
Any of the binary text classification examples can be adapted for multi-class directly, e.g. under “text classification” here:
OK thanks Jason
I also discover this tutorial of multi-label class :
where it is well explained how to adapt the model output layer (to the number of classes and retaining the ‘sigmoid’ as activation argument ) and the model compile method (retaining the e.g. =binary_crossentropy” for loss function).
The confusion comes from using several outputs neurons for multi-class But ‘sigmoid’ as activation and binary-crossentropy as loss when we use multi-label even those are the appropriate arguments for “binary class implementation
Ah, I see. Not sure I have addressed that specific case before.
thks
You’re welcome! | https://machinelearningmastery.com/crash-course-deep-learning-natural-language-processing/ | CC-MAIN-2021-31 | refinedweb | 8,976 | 54.73 |
In this tutorial I will show you how to create and work with Java Interfaces. As always I will demonstrate a practical example of Java interface.
What is Java Interface?
As many other Java concepts, Interfaces are derived from real-world scenarios with the main purpose to use an object by strict rules. For example, if you want to turn on the washing machine to wash your clothes you need to press the start button. This button is the interface between you and the electronics inside the washing machine. Java interfaces have same behaviour: they set strict rules on how to interact with objects. To find more about Java objects read this tutorial.
The Java interface represents a group of methods with empty bodies. Well, it is not mandatory to have a whole list of methods inside an interface – they can be 0 or more… but, no matter the number of methods, they all should be empty.
Create an Interface
Using the example with the washing machine, lets create an Interface called
WashingMachine with one method
startButtonPressed()
public interface WashingMachine { public void startButtonPressed(); }
That’s all you need to define an interface. Note the usage of the keyword
interface. The method
startButtonPressed()has no body. It just ends with ; Of course you can also use methods with return types and parameters like:
public int changeSpeed(int speed);
How to Implement an Interface
Now we will create a class that implements our interface. To continue with the example we will create a washing machine of specific make that has the start button.
public class SamsungWashingMachine implements WashingMachine { @Override public void startButtonPressed() { System.out.println("The Samsung washing machine is now running."); } }
We use the
implements keyword in the class declaration. We need to implement the startButtonPressed method (give it some functionality) or otherwise our class will not compile.
Please note, you can implement more than one interface in one class. You just need to separate the interface names with commas in the class declaration like this:
public class SamsungWashingMachine implements WashinMachine, Serializable, Comparable<WashinMachine> { ... }
Test your Interface
Now lets create a small program to test our interface and the implementation
public class Test { public static void main(String[] args) { WashinMachine samsungWashinMachine = new SamsungWashingMachine(); samsungWashinMachine.startButtonPressed(); } }
and the output of the program will be :
The Samsung washing machine is now running.
Use Interfaces to Declare Specific Object Characteristics
There is another common usage of interfaces in Java – to tell an object has specific use or characteristics.
Lets give one more real-world example. You are a survival in the woods. You find different object and put them in your backpack for later use. When you rest you go through the found objects and eat the once that are eatable.
First, lets define an interface called
FoundObject with no methods at all. Those are all the objects we found in the woods:
public interface FoundObject { }
now we define a second interface called
Eatable. We will use it just to denote if the object is eatable or not
public interface Eatable { public void eat(); }
With the following three classes we will define the objects we find in the woods – apples, raspberries and stones
public class Apple implements FoundObject, Eatable { private String name; public Apple(String name) { this.name = name; } @Override public void eat() { System.out.println("Yummy! you eat some " + this.name); } }
public class Raspberry implements FoundObject, Eatable { private String name; public Raspberry(String name) { this.name = name; } @Override public void eat() { System.out.println("Yummy! you eat some " + this.name); } }
public class Stone implements FoundObject { private String name; public Stone(String name) { this.name = name; } }
Now lets write the survival program. We will collect found objects in our backpack (array) and try to eat them
public class WoodsSurvival { public static void main(String[] args) { // create an array of type FoundObject FoundObject backpack [] = new FoundObject[3]; // create the objects we found in the woods FoundObject apple = new Apple("apple"); FoundObject stone = new Stone("stone"); FoundObject raspberry = new Raspberry("raspberry"); // add the found objects to the backpack backpack[0] = apple; backpack[1] = stone; backpack[2] = raspberry; // iterate over the found objects for (int i=0; i<backpack.length; i++) { FoundObject currentObject = backpack[i]; // check if object is eatable if (currentObject instanceof Eatable) { // cast the object to eatable and execute eat method ((Eatable) currentObject).eat(); } } } }
The output of the program is:
Yummy! you eat some apple Yummy! you eat some raspberry
The code explained
First we create the interface
FoundObject with the sole purpose to denote the objects of specific type, so we can put them in the same array. We create the
Eatable interface to mark which objects can be eaten.
When we create the three objects (apple, raspberry and stone) we put
implements FoundObject in the class declaration for all of them, and the one we can eat also implement the
Eatable interface.
In
WoodsSurvival class we first create an array of type
FoundObject. The three object we create later are also of type
FoundObject so we can put them in the same array. Follow this article to learn more about Java arrays.
When we iterate the array we check if the current object is of type
Eatable. We do this with the help of
instanceof keyford.
instanceof returns true if two objects are of the same type. In our case apples and raspberries will return true when checked with
instanceof Eatable, because both implement the
Eatable interface. To be able to execute the
eat() method we need to explicitly typecast the object to
Eatable first. We achieve this with following line of code:
((Eatable) currentObject).eat();
We can not execute the eat method of a stone object, because it is not of type
Eatable.
Disclaimer
The code example above can be written in more fashionable way using abstract classes, Collections and inheritance. Some of those are more advanced topics and are explained in next tutorials. This is a beginner tutorial that intents to explain java interfaces only.
Thanks!
Its a good start. thanks for your post… | https://javatutorial.net/java-interface-example | CC-MAIN-2019-43 | refinedweb | 1,003 | 55.13 |
the math to make it more understandable but, for the most curious of you, I'll leave the links to complete explanations/courses in the end.
In 29 mins, you'll be able to configure an algorithm that's going to recognize the written digits in python :)
🧠 What is a Neural Network?
Imagine Neural Network as an old wise wizard who knows everything and can predict your future by just looking at you.
It turns out that he manages to do so in a very non-magical way:
Before you visited him, he trained, carefully studied everything about many thousands of people who came to see him before you.
He now collects some data about what you look like (your apparent age, the website you found him at, etc).
He then compares it to the historical data he has about people that came to see him before.
Finally, he gives his best guess on what kind of person you are based on the similarities.
In very general terms, it is the way many machine learning algorithms work. They are often used to predict things based on the history of similar situations: Amazon suggesting the product you might like to buy, or Gmail suggesting to finish the sentence for you, or a self-driving car learning to drive.
📙 Part 1: Import libraries
Let's start! I have put together a class that is doing all the math behind our algorithm and I'd gladly explain how it works in another tutorial or you could go through my comments and try to figure it out yourself if you know some machine learning.
For now, create a file called
NN.py and paste this code:
import numpy as np from scipy.optimize import minimize class Neural_Network(object): def configureNN(self, inputSize, hiddenSize, outputSize, W1 = np.array([0]), W2 = np.array([0]), maxiter = 20, lambd = 0.1): #parameters self.inputSize = inputSize self.outputSize = outputSize self.hiddenSize = hiddenSize #initialize weights / random by default if(not W1.any()): self.W1 = np.random.randn( self.hiddenSize, self.inputSize + 1) # weight matrix from input to hidden layer else: self.W1 = W1 if (not W2.any()): self.W2 = np.random.randn( self.outputSize, self.hiddenSize + 1) # weight matrix from hidden to output layerself.W2 = W2 else: self.W2 = W2 # maximum number of iterations for optimization algorithm self.maxiter = maxiter # regularization penalty self.lambd = lambd def addBias(self, X): #adds a column of ones to the beginning of an array if (X.ndim == 1): return np.insert(X, 0, 1) return np.concatenate((np.ones((len(X), 1)), X), axis=1) def delBias(self, X): #deletes a column from the beginning of an array if (X.ndim == 1): return np.delete(X, 0) return np.delete(X, 0, 1) def unroll(self, X1, X2): #unrolls two matrices into one vector return np.concatenate((X1.reshape(X1.size), X2.reshape(X2.size))) def sigmoid(self, s): # activation function return 1 / (1 + np.exp(-s)) def sigmoidPrime(self, s): #derivative of sigmoid return s * (1 - s) def forward(self, X): #forward propagation through our network X = self.addBias(X) self.z = np.dot( X, self.W1.T) # dot product of X (input) and first set of 3x2 weights self.z2 = self.sigmoid(self.z) # activation function self.z2 = self.addBias(self.z2) self.z3 = np.dot( self.z2, self.W2.T) # dot product of hidden layer (z2) and second set of 3x1 weights o = self.sigmoid(self.z3) # final activation function return o def backward(self, X, y, o): # backward propgate through the network self.o_delta = o - y # error in output self.z2_error = self.o_delta.dot( self.W2 ) # z2 error: how much our hidden layer weights contributed to output error self.z2_delta = np.multiply(self.z2_error, self.sigmoidPrime( self.z2)) # applying derivative of sigmoid to z2 error self.z2_delta = self.delBias(self.z2_delta) self.W1_delta += np.dot( np.array([self.z2_delta]).T, np.array([self.addBias(X)])) # adjusting first set (input --> hidden) weights self.W2_delta += np.dot( np.array([self.o_delta]).T, np.array([self.z2])) # adjusting second set (hidden --> output) weights def cost(self, nn_params, X, y): #computing how well the function does. Less = better self.W1_delta = 0 self.W2_delta = 0 m = len(X) o = self.forward(X) J = -1/m * sum(sum(y * np.log(o) + (1 - y) * np.log(1 - o))); #cost function reg = (sum(sum(np.power(self.delBias(self.W1), 2))) + sum( sum(np.power(self.delBias(self.W2), 2)))) * (self.lambd/(2*m)); #regularization: more precise J = J + reg; for i in range(m): o = self.forward(X[i]) self.backward(X[i], y[i], o) self.W1_delta = (1/m) * self.W1_delta + (self.lambd/m) * np.concatenate( (np.zeros((len(self.W1),1)), self.delBias(self.W1)), axis=1) self.W2_delta = (1/m) * self.W2_delta + (self.lambd/m) * np.concatenate( (np.zeros((len(self.W2),1)), self.delBias(self.W2)), axis=1) grad = self.unroll(self.W1_delta, self.W2_delta) return J, grad def train(self, X, y): # using optimization algorithm to find best fit W1, W2 nn_params = self.unroll(self.W1, self.W2) results = minimize(self.cost, x0=nn_params, args=(X, y), options={'disp': True, 'maxiter':self.maxiter}, method="L-BFGS-B", jac=True) self.W1 = np.reshape(results["x"][:self.hiddenSize * (self.inputSize + 1)], (self.hiddenSize, self.inputSize + 1)) self.W2 = np.reshape(results["x"][self.hiddenSize * (self.inputSize + 1):], (self.outputSize, self.hiddenSize + 1)) def saveWeights(self): #sio.savemat('myWeights.mat', mdict={'W1': self.W1, 'W2' : self.W2}) np.savetxt('data/TrainedW1.in', self.W1, delimiter=',') np.savetxt('data/TrainedW2.in', self.W2, delimiter=',') def predict(self, X): o = self.forward(X) i = np.argmax(o) o = o * 0 o[i] = 1 return o def predictClass(self, X): #printing out the number of the class, starting from 1 print("Predicted class out of", self.outputSize,"classes based on trained weights: ") print("Input: \n" + str(X)) print("Class number: " + str(np.argmax( np.round(self.forward(X)) ) + 1)) def accuracy(self, X, y): #printing out the accuracy p = 0 m = len(X) for i in range(m): if (np.all(self.predict(X[i]) == y[i])): p += 1 print('Training Set Accuracy: {:.2f}%'.format(p * 100 / m))
📊 Part 2: Understanding Data
Cool! Now, much like the wizard who had to study all the other people who visited him before you, we need some data to study too. Before using any optimization algorithms, all the data scientists first try to understand the data they want to analyze.
Download files
X.in (stores info about what people looked like - question) and
y.in(stores info about what kind of people they were - answer) from here and put them into folder
data in your repl.
- X: We are given 5,000 examples of 20x20 pixel pictures of handwritten digits from 0 to 9 (classes 1-10). Each picture's numerical representation is a single vector, which together with all the other examples forms an array
X.
- Y: We also have an array
y. Each column represents a corresponding example (one picture) from
X.
yhas 10 rows for classes 1-10 and the value of only the correct class' row is one, the rest is zeros. It looks similar to this:
[0, 0, 0, 0, 0, 0, 0, 0, 0, 1] # represents digit 0 (class 10) [1, 0, 0, 0, 0, 0, 0, 0, 0, 0] # represents digit 1 (class 1) ...... [1, 0, 0, 0, 0, 0, 0, 0, 1, 0] # represents digit 9 (class 9)
Now, let's plot it!
In the end, I'd want a function
displayData(displaySize, data, selected, title), where
displaySize- the numer of images shown in any one column or row of the figure,
data- our X array,
selected- an index (if displaying only one image) or vector of indices (if displaying multiple images) from X,
title- the title of the figure
Create a
plots folder to save your plots to. Also, if you use repl, create some empty file in the folder so that it doesn't disappear.
Create a
display.py file and write the following code in there. Make sure to read the comments:
import matplotlib.pyplot as plt # Displaying the data def displayData( displaySize, data, selected, title ): # setting up our plot fig=plt.figure(figsize=(8, 8)) fig.suptitle(title, fontsize=32) # configuring the number of images to display columns = displaySize rows = displaySize for i in range(columns*rows): # if we want to display multiple images, # then 'selected' is a vector. Check if it is here: if hasattr(selected, "__len__"): img = data[selected[i]] else: img = data[selected] img = img.reshape(20,20).transpose() fig.add_subplot(rows, columns, i+1) plt.imshow(img) # We could also use plt.show(), but repl # can't display it. So let's insted save it # into a file plt.savefig('plots/' + title) return None
Great, we are halfway there!
💪 Part 3: Training Neural Network
Now, after we understand what our data looks like, it's time to train on it. Let's make that wizard study!
It turns out that the results of the training process of the Neural Networks have to be stored in some values. These values are called parameters or weights of the Neural Network. If you were to start this project from scratch, your initial weights would be just some random numbers, however, it would take your computer forever to train to do such a complex task as recognizing digits. For this reason, I will provide you with the initial weights that are somewhat closer to the end result.
Download files
W1.in and
W2.in from here and put them into
data folder.
We are now ready to write code to use our Neural Network library!
Create a
train.py file and write the following code in there. Make sure to read the comments:
# This code trains the Neural Network. In the end, you end up # with best-fit parameters (weights W1 and W2) for the problem in folder 'data' # and can use them to predict in predict.py import numpy as np import display from NN import Neural_Network NN = Neural_Network() # Loading data X = np.loadtxt("data/X.in", comments="#", delimiter=",", unpack=False) y = np.loadtxt("data/y.in", comments="#", delimiter=",", unpack=False) W1 = np.loadtxt("data/W1.in", comments="#", delimiter=",", unpack=False) W2 = np.loadtxt("data/W2.in", comments="#", delimiter=",", unpack=False) # Display inputs sel = np.random.permutation(len(X)); sel = sel[0:100]; display.displayData(5, X, sel, 'TrainingData'); # Configuring settings of Neural Network: # # inputSize, hiddenSize, outputSize = number of elements # in input, hidden, and output layers # (optional) W1, W2 = random by default # (optional) maxiter = number of iterations you allow the # optimization algorithm. # By default, set to 20 # (optional) lambd = regularization penalty. By # default, set to 0.1 # NN.configureNN(400, 25, 10, W1 = W1, W2 = W2) # Training Neural Network on our data # This step takes 12 mins in Repl.it or 20 sec on your # computer NN.train(X, y) # Saving Weights in the file NN.saveWeights() # Checking the accuracy of Neural Network sel = np.random.permutation(5000)[1:1000] NN.accuracy(X[sel], y[sel])
Now, you have to run this code either from:
- Repl.it - but you would need to move code from
train.pyinto
main.py. Don't delete
train.pyjust yet. It would also take approximately 12 minutes to compute. You can watch this Crash Course video while waiting :)
- Your own computer - just run
train.py, which takes 20 sec on my laptop to compute.
If you need help installing python, watch this tutorial.
🔮 Part 4: Predicting!
By now, you are supposed to have your new weights (
TrainedW1.in,
TrainedW2.in) saved in
data folder and the accuracy of your Neural Network should be over 90%.
Let's now write a code to use the trained weights in order to predict the digits of any new image!
Create a
predict.py file and write the following code in there. Make sure to read the comments:
import numpy as np import display from NN import Neural_Network NN = Neural_Network() # Loading data X = np.loadtxt("data/X.in", comments="#", delimiter=",", unpack=False) y = np.loadtxt("data/y.in", comments="#", delimiter=",", unpack=False) trW1 = np.loadtxt("data/TrainedW1.in", comments="#", delimiter=",", unpack=False) trW2 = np.loadtxt("data/TrainedW2.in", comments="#", delimiter=",", unpack=False) # Configuring settings of Neural Network: NN.configureNN(400, 25, 10, W1 = trW1, W2 = trW2) # Predicting a class number of given input testNo = 3402; # any number between 0 and 4999 to test NN.predictClass(X[testNo]) # Display output display.displayData(1, X, testNo, 'Predicted class: ' + str(np.argmax(np.round(NN.forward(X[testNo]))) + 1) )
Change the value of
testNo to any number between 0 and 4999. In order to get a digit (class) prediction on the corresponding example from array X, run the code from:
- Repl.it - but you would need to move code from
predict.pyinto
main.py. Don't delete
predict.pyjust yet.
- Your own computer - just run
predict.py.
Yay, you are officially a data scientist! You have successfully:
Analyzed the data
Implemented the training of your Neural Network
Developed a code to predict new testing examples
🚀 Acknowledgments
Hat tip to @shamdasani whose code I used as a template for Neural Network architecture and Andrew Ng from Stanford whose data I used.
Plenty of things I told you are not completely correct because I rather tried to get you excited about the topic I am passionate about, not dump some math on you!
If you guys seem to enjoy it, please follow through with studying machine learning because it is just an amazing experience. I encourage you to take this free online course on it to learn the true way it works.
Also, it's my first post here and I'd appreciate any feedback on it to get better.
Keep me updated on your progress, ask any questions, and stay excited! ✨✨✨
Hey! I skimmed through this and this is awesome. By any chance do you have a YouTube channel where you explain everything in-depth? I love machine learning and built a very simple neural network to output a number (0 or 1) based on a given scenario with data although I'd love to get more advanced like this. This is really cool, thanks for making it!
@gforero thank you very much! I'm glad you liked it. I don't have a youtube channel, but this particular task is well explained in-depth in Andrew Ng's Machine Learning course, weeks 4 and 5. Check it out, it's free!
Haven't read it yet, but it looks pretty good. It's really helpful for me, because machine learning is very fascinating and i want to learn more about it :)
Nice Tutorial. I haven't gone through the whole thing in-depth, but I liked it. It reminds me of a CGP Grey video where he talks that same basic topic, but on a much more generalized level, so it was cool to see some of the technical aspect of it.
I do have one question though, you mention downloading the X.in, y.in, W1.in and W2.in files, but where do these files come from/are these files able to just be copied and pasted like some of the other code?
@ArtemLaptiev1 This is a great tutorial and all (upvoted!) but you do realize you can edit posts, right? This includes markdown. I also had markdown errors in my tutorial and just edited them to fix it. :)
Just a couple questions:
1. How do i know what output i am supposed to get for my input on testNo?
2. How to i rearrange this build for the AI to learn different things?
3. "This problem is unconstrained." and "Line search cannot locate an adequate point after 20 function
and gradient evaluations. Previous x, f and g restored.
Possible causes: 1 error in function or gradient evaluation;
2 rounding error dominate computation." What did i do wrong during training?
Did you normalize the data? I believe you didn't and it generally has a bad impact on the performance of the ai.
And also, you should use the function tf.keras.layers.Dropout(0.2) to generalize the ai. The risk of not doing this is that your ai stops picking up patterns and becomes overfit.
And third, you can make one in much, much fewer lines with tensorflow. | https://replit.com/talk/learn/Building-AI-Neural-Networks-for-beginners/8156 | CC-MAIN-2021-17 | refinedweb | 2,734 | 59.6 |
Note
Learning Objectives
By the end of this chapter, you will be able to:
Perform basic operations in Python
Describe the business context of the case study data and its suitability for the task
Perform data cleaning operations
Examine statistical summaries and visualize the case study data
Implement one-hot encoding on categorical variables
Most businesses possess a wealth of data on their operations and customers. Reporting on these data in the form of descriptive charts, graphs, and tables is a good way to understand the current state of the business. However, in order to provide quantitative guidance on future business strategies and operations, it is necessary to go a step further. This is where the practices of machine learning and predictive modeling become involved. In this book, we will show how to go from descriptive analyses to concrete guidance for future operations using predictive models.
To accomplish this goal, we'll introduce some of the most widely-used machine learning tools via Python and many of its packages. You will also get a sense of the practical skills necessary to execute successful projects: inquisitiveness when examining data and communication with the client. Time spent looking in detail at a dataset and critically examining whether it accurately meets its intended purpose is time well spent. You will learn several techniques for assessing data quality here.
In this chapter, after getting familiar with the basic tools for data exploration, we will discuss a few typical working scenarios for how you may receive data. Then, we will begin a thorough exploration of the case study dataset and help you learn how you can uncover possible issues, so that when you are ready for modeling, you may proceed with confidence.
In this book, we will use the Python programming language. Python is a top language for data science and is one of the fastest growing programming languages. A commonly cited reason for Python's popularity is that it is easy to learn. If you have Python experience, that's great; however, if you have experience with other languages, such as C, Matlab, or R, you shouldn't have much trouble using Python. You should be familiar with the general constructs of computer programming to get the most out of this book. Examples of such constructs are for loops and if statements that guide the control flow of a program. No matter what language you have used, you are likely familiar with these constructs, which you will also find in Python.
A key feature of Python, that is different from some other languages, is that it is zero-indexed; in other words, the first element of an ordered collection has an index of 0. Python also supports negative indexing, where index-1 refers to the last element of an ordered collection and negative indices count backwards from the end. The slice operator, :, can be used to select multiple elements of an ordered collection from within a range, starting from the beginning, or going to the end of the collection.
Here, we demonstrate how indexing and the slice operator work. To have something to index, we will create a list, which is a mutable ordered collection that can contain any type of data, including numerical and string types. "Mutable" just means the elements of the list can be changed after they are first assigned. To create the numbers for our list, which will be consecutive integers, we'll use the built-in range() Python function. The range() function technically creates an iterator that we'll convert to a list using the list() function, although you need not be concerned with that detail here. The following screenshot shows a basic list being printed on the console:
Figure 1.1: List creation and indexing
A few things to notice about Figure 1.1: the endpoint of an interval is open for both slice indexing and the range() function, while the starting point is closed. In other words, notice how when we specify the start and end of range(), endpoint 6 is not included in the result but starting point 1 is. Similarly, when indexing the list with the slice [:3], this includes all elements of the list with indices up to, but not including, 3.
We've referred to ordered collections, but Python also includes unordered collections. An important one of these is called a dictionary. A dictionary is an unordered collection of key:value pairs. Instead of looking up the values of a dictionary by integer indices, you look them up by keys, which could be numbers or strings. A dictionary can be created using curly braces {} and with the key:value pairs separated by commas. The following screenshot is an example of how we can create a dictionary with counts of fruit – examine the number of apples, then add a new type of fruit and its count:
Figure 1.2: An example dictionary
There are many other distinctive features of Python and we just want to give you a flavor here, without getting into too much detail. In fact, you will probably use packages such as pandas (pandas) and NumPy (numpy) for most of your data handling in Python. NumPy provides fast numerical computation on arrays and matrices, while pandas provides a wealth of data wrangling and exploration capabilities on tables of data called DataFrames. However, it's good to be familiar with some of the basics of Python—the language that sits at the foundation of all of this. For example, indexing works the same in NumPy and pandas as it does in Python.
One of the strengths of Python is that it is open source and has an active community of developers creating amazing tools. We will use several of these tools in this book. A potential pitfall of having open source packages from different contributors is the dependencies between various packages. For example, if you want to install pandas, it may rely on a certain version of NumPy, which you may or may not have installed. Package management systems make life easier in this respect. When you install a new package through the package management system, it will ensure that all the dependencies are met. If they aren't, you will be prompted to upgrade or install new packages as necessary.
For this book, we will use the Anaconda package management system, which you should already have installed. While we will only use Python here, it is also possible to run R with Anaconda.
Note
Environments
If you previously had Anaconda installed and were using it prior to this book, you may wish to create a new Python 3.x environment for the book. Environments are like separate installations of Python, where the set of packages you have installed can be different, as well as the version of Python. Environments are useful for developing projects that need to be deployed in different versions of Python. For more information, see.
In this exercise, you will examine the packages in your Anaconda installation and practice some basic Python control flow and data structures, including a for loop, dict, and list. This will confirm that you have completed the installation steps in the preface and show you how Python syntax and data structures may be a little different from other programming languages you may be familiar with. Perform the following steps to complete the exercise:
Note
The code file for this exercise can be found here:.
Open up a Terminal, if you're using macOS or Linux, or a Command Prompt window in Windows. Type conda list at the command line. You should observe an output similar to the following:
Figure 1.3: Selection of packages from conda list
You can see all the packages installed in your environment. Look how many packages already come with the default Anaconda installation! These include all the packages we will need for the book. However, installing new packages and upgrading existing ones is easy and straightforward with Anaconda; this is one of the main advantages of a package management system.
Type python in the Terminal to open a command-line Python interpreter. You should obtain an output similar to the following:
Figure 1.4: Command-line Python
You should also see some information about your version of Python, as well as the Python command prompt (>>>). When you type after this prompt, you are writing Python code.
Write a for loop at the command prompt to print values from 0 to 4 using the following code:
for counter in range(5): ... print(counter) ...
Once you hit Enter when you see (...) on the prompt, you should obtain this output:
Figure 1.5: Output of a for loop at the command line
Notice that in Python, the opening of the for loop is followed by a colon, and the body of the loop requires indentation. It's typical to use four spaces to indent a code block. Here, the for loop prints the values returned by the range() iterator, having repeatedly accessed them using the counter variable with the in keyword.
Note
For many more details on Python code conventions, refer to the following:.
Now, we will return to our dictionary example. The first step here is to create the dictionary.
Create a dictionary of fruits (apples, oranges, and bananas) using the following code:
example_dict = {'apples':5, 'oranges':8, 'bananas':13}
Convert the dictionary to a list using the list() function, as shown in the following snippet:
dict_to_list = list(example_dict) dict_to_list
Once you run the preceding code, you should obtain the following output:
Figure 1.6: Dictionary keys converted to a list
Notice that when this is done and we examine the contents, only the keys of the dictionary have been captured in the list. If we wanted the values, we would have had to specify that with the .values() method of the list. Also, notice that the list of dictionary keys happens to be in the same order that we wrote them in when creating the dictionary. This is not guaranteed, however, as dictionaries are unordered collection types.
One convenient thing you can do with lists is to append other lists to them with the + operator. As an example, in the next step we will combine the existing list of fruit with a list that contains just one more type of fruit, overwriting the variable containing the original list.
Use the + operator to combine the existing list of fruits with a new list containing only one fruit (pears):
dict_to_list = dict_to_list + ['pears'] dict_to_list
Figure 1.7: Appending to a list
What if we wanted to sort our list of fruit types?
Python provides a built-in sorted() function that can be used for this; it will return a sorted version of the input. In our case, this means the list of fruit types will be sorted alphabetically.
Sort the list of fruits in alphabetical order using the sorted() function, as shown in the following snippet:
sorted(dict_to_list)
Once you run the preceding code, you should see the following output:
Figure 1.8: Sorting a list
That's enough Python for now. We will show you how to execute the code for this book, so your Python knowledge should improve along the way.
Note
As you learn more and inevitably want to try new things, you will want to consult the documentation:.
Much of your time as a data scientist is likely to be spent wrangling data: figuring out how to get it, getting it, examining it, making sure it's correct and complete, and joining it with other types of data. pandas will facilitate this process for you. However, if you aspire to be a machine learning data scientist, you will need to master the art and science of predictive modeling. This means using a mathematical model, or idealized mathematical formulation, to learn the relationships within the data, in the hope of making accurate and useful predictions when new data comes in.
For this purpose, data is typically organized in a tabular structure, with features and a response variable. For example, if you want to predict the price of a house based on some characteristics about it, such as area and number of bedrooms, these attributes would be considered the features and the price of the house would be the response variable. The response variable is sometimes called the target variable or dependent variable, while the features may also be called the independent variables.
If you have a dataset of 1,000 houses including the values of these features and the prices of the houses, you can say you have 1,000 samples of labeled data, where the labels are the known values of the response variable: the prices of different houses. Most commonly, the tabular data structure is organized so that different rows are different samples, while features and the response occupy different columns, along with other metadata such as sample IDs, as shown in Figure 1.9.
Figure 1.9: Labeled data (the house prices are the known target variable)
Regression Problem
Once you have trained a model to learn the relationship between the features and response using your labeled data, you can then use it to make predictions for houses where you don't know the price, based on the information contained in the features. The goal of predictive modeling in this case is to be able to make a prediction that is close to the true value of the house. Since we are predicting a numerical value on a continuous scale, this is called a regression problem.
Classification Problem
On the other hand, if we were trying to make a qualitative prediction about the house, to answer a yes or no question such as "will this house go on sale within the next five years?" or "will the owner default on the mortgage?", we would be solving what is known as a classification problem. Here, we would hope to answer the yes or no question correctly. The following figure is a schematic illustrating how model training works, and what the outcomes of regression or classification models might be:
Figure 1.10: Schematic of model training and prediction for regression and classification
Classification and regression tasks are called supervised learning, which is a class of problems that relies on labeled data. These problems can be thought of a needing "supervision" by the known values of the target variable. By contrast, there is also unsupervised learning, which relates to more open-ended questions of trying to find some sort of structure in a dataset that does not necessarily have labels. Taking a broader view, any kind of applied math problem, including fields as varied as optimization, statistical inference, and time series modeling, may potentially be considered an appropriate responsibility for a data scientist.
Now it's time to take a first look at the data we will use in our case study. We won't do anything in this section other than ensure that we can load the data into a Jupyter Notebook correctly. Examining the data, and understanding the problem you will solve with it, will come later.
The data file is an Excel spreadsheet called default_of_credit_card_clients__courseware_version_1_13_19.xls. We recommend you first open the spreadsheet in Excel, or the spreadsheet program of your choice. Note the number of rows and columns, and look at some example values. This will help you know whether or not you have loaded it correctly in the Jupyter Notebook.
Note
The dataset can obtained from the following link:. This is a modified version of the original dataset, which has been sourced from the UCI Machine Learning Repository []. Irvine, CA: University of California, School of Information and Computer Science.
What is a Jupyter Notebook?
Jupyter Notebooks are interactive coding environments that allow for in-line text and graphics. They are great tools for data scientists to communicate and preserve their results, since both the methods (code) and the message (text and graphics) are integrated. You can think of the environment as a kind of webpage where you can write and execute code. Jupyter Notebooks can, in fact, be rendered as web pages and are done so on GitHub. Here is one of our example notebooks:. Look it over and get a sense of what you can do. An excerpt from this notebook is displayed here, showing code, graphics, and prose, known as markdown in this context:
Figure 1.11: Example of a Jupyter Notebook showing code, graphics, and markdown text
One of the first things to learn about Jupyter Notebooks is how to navigate around and make edits. There are two modes available to you. If you select a cell and press Enter, you are in edit mode and you can edit the text in that cell. If you press Esc, you are in command mode and you can navigate around the notebook.
When you are in command mode, there are many useful hotkeys you can use. The Up and Down arrows will help you select different cells and scroll through the notebook. If you press y on a selected cell in command mode, it changes it to a code cell, in which the text is interpreted as code. Pressing m changes it to a markdown cell. Shift + Enter evaluates the cell, rendering the markdown or executing the code, as the case may be.
Our first task in our first Jupyter Notebook will be to load the case study data. To do this, we will use a tool called pandas. It is probably not a stretch to say that pandas is the pre-eminent data-wrangling tool in Python.
A DataFrame is a foundational class in pandas. We'll talk more about what a class is later, but you can think of it as a template for a data structure, where a data structure is something like the lists or dictionaries we discussed earlier. However, a DataFrame is much richer in functionality than either of these. A DataFrame is similar to spreadsheets in many ways. There are rows, which are labeled by a row index, and columns, which are usually given column header-like labels that can be thought of as a column index. Index is, in fact, a data type in pandas used to store indices for a DataFrame, and columns have their own data type called Series.
You can do a lot of the same things with a DataFrame that you can do with Excel sheets, such as creating pivot tables and filtering rows. pandas also includes SQL-like functionality. You can join different DataFrames together, for example. Another advantage of DataFrames is that once your data is contained in one of them, you have the capabilities of a wealth of pandas functionality at your fingertips. The following figure is an example of a pandas DataFrame:
Figure 1.12: Example of a pandas DataFrame with an integer row index at the left and a column index of strings
The example in Figure 1.12 is in fact the data for the case study. As a first step with Jupyter and pandas, we will now see how to create a Jupyter Notebook and load data with pandas. There are several convenient functions you can use in pandas to explore your data, including .head() to see the first few rows of the DataFrame, .info() to see all columns with datatypes, .columns to return a list of column names as strings, and others we will learn about in the following exercises.
Now that you've learned about Jupyter Notebooks, the environment in which we'll write code, and pandas, the data wrangling package, let's create our first Jupyter Notebook. We'll use pandas within this notebook to load the case study data and briefly examine it. Perform the following steps to complete the exercise:
Note
For Exercises 2–5 and Activity 1, the code and the resulting output have been loaded in a Jupyter Notebook that can be found at. You can scroll to the appropriate section within the Jupyter Notebook to locate the exercise or activity of choice.
Open a Terminal (macOS or Linux) or a Command Prompt window (Windows) and type jupyter notebook.
You will be presented with the Jupyter interface in your web browser. If the browser does not open automatically, copy and paste the URL from the terminal in to your browser. In this interface, you can navigate around your directories starting from the directory you were in when you launched the notebook server.
Navigate to a convenient location where you will store the materials for this book, and create a new Python 3 notebook from the New menu, as shown here:
Figure 1.13: Jupyter home screen
Make your very first cell a markdown cell by typing m while in command mode (press Esc to enter command mode), then type a number sign, #, at the beginning of the first line, followed by a space, for a heading. Make a title for your notebook here. On the next few lines, place a description.
Here is a screenshot of an example, including other kinds of markdown such as bold, italics, and the way to write code-style text in a markdown cell:
Figure 1.14: Unrendered markdown cell
Note that it is good practice to make a title and brief description of your notebook, to identify its purpose to readers.
Press Shift + Enter to render the markdown cell.
This should also create a new cell, which will be a code cell. You can change it to a markdown cell, as you now know how to do, by pressing m, and back to a code cell by pressing y. You will know it's a code cell because of the In [ ]: next to it.
Type import pandas as pd in the new cell, as shown in the following screenshot:
Figure 1.15: Rendered markdown cell and code cell
After you execute this cell, the pandas library will be loaded into your computing environment. It's common to import libraries with "as" to create a short alias for the library. Now, we are going to use pandas to load the data file. It's in Microsoft Excel format, so we can use pd.read_excel.
Note
For more information on all the possible options for pd.read_excel, refer to the following documentation:.
Import the dataset, which is in the Excel format, as a DataFrame using the pd.read_excel() method, as shown in the following snippet:
df = pd.read_excel('../Data/default_of_credit_card_clients_courseware_version_1_21_19.xls')
Note that you need to point the Excel reader to wherever the file is located. If it's in the same directory as your notebook, you could just enter the filename. The pd.read_excel method will load the Excel file into a DataFrame, which we've called df. The power of pandas is now available to us.
Let's do some quick checks in the next few steps. First, do the numbers of rows and columns match what we know from looking at the file in Excel?
Use the .shape method to review the number of rows and columns, as shown in the following snippet:
df.shape
Once you run the cell, you will obtain the following output:
Figure 1.16: Checking the shape of a DataFrame
This should match your observations from the spreadsheet. If it doesn't, you would then need to look into the various options of pd.read_excel to see if you needed to adjust something.
With this exercise, we have successfully loaded our dataset into the Jupyter Notebook. You can also have a look at the .info() and .head() methods, which will tell you information about all the columns, and show you the first few rows of the DataFrame, respectively. Now you're up and running with your data in pandas.
As a final note, while this may already be clear, observe that if you define a variable in one code cell, it is available to you in other code cells within the notebook. The code cells within a notebook share scope as long as the kernel is running, as shown in the following screenshot:
Figure 1.17: Variable in scope between cells
Now let's imagine we are taking our first look at this data. In your work as a data scientist, there are several possible scenarios in which you may receive such a dataset. These include the following:
You created the SQL query that generated the data.
A colleague wrote a SQL query for you, with your input.
A colleague who knows about the data gave it to you, but without your input.
You are given a dataset about which little is known.
In cases 1 and 2, your input was involved in generating/extracting the data. In these scenarios, you probably understood the business problem and then either found the data you needed with the help of a data engineer, or did your own research and designed the SQL query that generated the data. Often, especially as you gain more experience in your data science role, the first step will be to meet with the business partner to understand, and refine the mathematical definition of, the business problem. Then, you would play a key role in defining what is in the dataset.
Even if you have a relatively high level of familiarity with the data, doing data exploration and looking at summary statistics of different variables is still an important first step. This step will help you select good features, or give you ideas about how you can engineer new features. However, in the third and fourth cases, where your input was not involved or you have little knowledge about the data, data exploration is even more important.
Another important initial step in the data science process is examining the data dictionary. The data dictionary, as the term implies, is a document that explains what the data owner thinks should be in the data, such as definitions of the column labels. It is the data scientist's job to go through the data carefully to make sure that these impressions match the reality of what is in the data. In cases 1 and 2, you will probably need to create the data dictionary yourself, which should be considered essential project documentation. In cases 3 and 4, you should seek out the dictionary if at all possible.
The case study data we'll use in this book is basically similar to case 3 here.
Our client is a credit card company. They have brought us a dataset that includes some demographics and recent financial data (the past six months) for a sample of 30,000 of their account holders. This data is at the credit account level; in other words, there is one row for each account (you should always clarify what the definition of a row is, in a dataset). Rows are labeled by whether in the next month after the six month historical data period, an account owner has defaulted, or in other words, failed to make the minimum payment.
Goal
Your goal is to develop a predictive model for whether an account will default next month, given demographics and historical data. Later in the book, we'll discuss the practical application of the model.
The data is already prepared, and a data dictionary is available. The dataset supplied with the book, default_of_credit_card_clients__courseware_version_1_21_19.xls, is a modified version of this dataset in the UCI Machine Learning Repository:. Have a look at that web page, which includes the data dictionary.
Note
The original dataset has been obtained from UCI Machine Learning Repository []. Irvine, CA: University of California, School of Information and Computer Science. In this book, we have modified the dataset to suit the book objectives. The modified dataset can be found here:.
Now that we've understood the business problem and have an idea of what is supposed to be in the data, we can compare these impressions to what we actually see in the data. Your job in data exploration is to not only look through the data both directly and using numerical and graphical summaries, but also to think critically about whether the data make sense and match what you have been told about them. These are helpful steps in data exploration:
How many columns are there in the data?
These may be features, response, or metadata.
How many rows (samples)?
What kind of features are there? Which are categorical and which are numerical?
Categorical features have values in discrete classes such as "Yes," "No," or "maybe."Numerical features are typically on a continuous numerical scale, such as dollar amounts.
What does the data look like in these features?
To see this, you can examine the range of values in numeric features, or the frequency of different classes in categorical features, for example.
Is there any missing data?
We have already answered questions 1 and 2 in the previous section; there are 30,000 rows and 25 columns. As we start to explore the rest of these questions in the following exercise, pandas will be our go-to tool. We begin by verifying basic data integrity in the next exercise.
In this exercise, we will perform a basic check on whether our dataset contains what we expect and verify whether there are the correct number of samples.
The data are supposed to have observations for 30,000 credit accounts. While there are 30,000 rows, we should also check whether there are 30,000 unique account IDs. It's possible that, if the SQL query used to generate the data was run on an unfamiliar schema, values that are supposed to be unique are in fact not unique.
To examine this, we can check if the number of unique account IDs is the same as the number of rows. Perform the following steps to complete the exercise:
Note
The code and the resulting output graphics for this exercise have been loaded in a Jupyter Notebook that can be found here:.
Examine the column names by running the following command in the cell:
df.columns
The .columns method of the DataFrame is employed to examine all the column names. You will obtain the following output once you run the cell:
Figure 1.18: Columns of the dataset
As can be observed, all column names are listed in the output. The account ID column is referenced as ID. The remaining columns appear to be our features, with the last column being the response variable. Let's quickly review the dataset information that was given to us by the client:
LIMIT_BAL: Amount of the credit provided (in New Taiwanese (NT) dollar) including individual consumer credit and the family (supplementary) credit.
SEX: Gender (1 = male; 2 = female).
Note
We will not be using the gender data to decide credit-worthiness owing to ethical considerations.
EDUCATION: Education (1 = graduate school; 2 = university; 3 = high school; 4 = others).
MARRIAGE: Marital status (1 = married; 2 = single; 3 = others).
AGE: Age (year).
PAY_1–Pay_6: A record of past payments. Past monthly payments, recorded from April to September, are stored in these columns.
PAY_1 represents the repayment status in September; PAY_2 = repayment status in August; and so on up to PAY_6, which represents the repayment status in April.
The measurement scale for the repayment status is as follows: -1 = pay duly; 1 = payment delay for one month; 2 = payment delay for two months; and so on up to 8 = payment delay for eight months; 9 = payment delay for nine months and above.
BILL_AMT1–BILL_AMT6: Bill statement amount (in NT dollar).
BILL_AMT1 represents the bill statement amount in September; BILL_AMT2 represents the bill statement amount in August; and so on up to BILL_AMT7, which represents the bill statement amount in April.
PAY_AMT1–PAY_AMT6: Amount of previous payment (NT dollar). PAY_AMT1 represents the amount paid in September; PAY_AMT2 represents the amount paid in August; and so on up to PAY_AMT6, which represents the amount paid in April.
Let's now use the .head() method in the next step to observe the first few rows of data.
Type the following command in the subsequent cell and run it using Shift + Enter:
df.head()
You will observe the following output:
Figure 1.19: .head() of a DataFrame
The ID column seems like it contains unique identifiers. Now, to verify if they are in fact unique throughout the whole dataset, we can count the number of unique values using the .nunique() method on the Series (aka column) ID. We first select the column using square brackets.
Select the target column (ID) and count unique values using the following command:
df['ID'].nunique()
You will see in the following output that the number of unique entries is 29,687:
Figure 1.20: Finding a data quality issue
Run the following command to obtain the number of rows in the dataset:
df.shape
As can be observed in the following output, the total number of rows in the dataset is 30,000:
Figure 1.21: Dimensions of the dataset
We see here that the number of unique IDs is less than the number of rows. This implies that the ID is not a unique identifier for the rows of the data, as we thought. So we know that there is some duplication of IDs. But how much? Is one ID duplicated many times? How many IDs are duplicated?
We can use the .value_counts() method on the ID series to start to answer these questions. This is similar to a group by/count procedure in SQL. It will list the unique IDs and how often they occur. We will perform this operation in the next step and store the value counts in a variable id_counts.
Store the value counts in a variable defined as id_counts and then display the stored values using the .head() method, as shown:
id_counts = df['ID'].value_counts() id_counts.head()
You will obtain the following output:
Figure 1.22: Getting value counts of the account IDs
Note that .head() returns the first five rows by default. You can specify the number of items to be displayed by passing the required number in the parentheses, ().
Display the number of grouped duplicated entries by running another value count:
id_counts.value_counts()
You will obtain the following output:
Figure 1.23: Getting value counts of the account IDs
In the preceding output and from the initial value count, we can see that most IDs occur exactly once, as expected. However, 313 IDs occur twice. So, no ID occurs more than twice. Armed with this information, we are ready to begin taking a closer look at this data quality issue and fixing it. We will be creating Boolean masks to further clean the data.
To help clean the case study data, we introduce the concept of a logical mask, also known as a Boolean mask. A logical mask is a way to filter an array, or series, by some condition. For example, we can use the "is equal to" operator in Python, ==, to find all locations of an array that contain a certain value. Other comparisons, such as "greater than" (>), "less than" (<), "greater than or equal to" (>=), and "less than or equal to" (<=), can be used similarly. The output of such a comparison is an array or series of True/False values, also known as Boolean values. Each element of the output corresponds to an element of the input, is True if the condition is met, and is False otherwise. To illustrate how this works, we will use synthetic data. Synthetic data is data that is created to explore or illustrate a concept. First, we are going to import the NumPy package, which has many capabilities for generating random numbers, and give it the alias np:
import numpy as np
Now we use what's called a seed for the random number generator. If you set the seed, you will get the same results from the random number generator across runs. Otherwise this is not guaranteed. This can be a helpful option if you use random numbers in some way in your work and want to have consistent results every time you run a notebook:
np.random.seed(seed=24)
Next, we generate 100 random integers, chosen from between 1 and 5 (inclusive). For this we can use numpy.random.randint, with the appropriate arguments.
random_integers = np.random.randint(low=1,high=5,size=100)
Let's look at the first five elements of this array, with random_integers[:5]. The output should appear as follows:
Figure 1.24: Random integers
Suppose we wanted to know the locations of all elements of random_integers equal to 3. We could create a Boolean mask to do this.
is_equal_to_3 = random_integers == 3
From examining the first 5 elements, we know the first element is equal to 3, but none of the rest are. So in our Boolean mask, we expect True in the first position and False in the next 4 positions. Is this the case?
is_equal_to_3[:5]
The preceding code should give this output:
Figure 1.25: Boolean mask for the random integers
This is what we expected. This shows the creation of a Boolean mask. But what else can we do with them? Suppose we wanted to know how many elements were equal to 3. To know this, you can take the sum of a Boolean mask, which interprets True as 1 and False as 0:
sum(is_equal_to_3)
This should give us the following output:
Figure 1.26: Sum of the Boolean mask
This makes sense, as with a random, equally likely choice of 5 possible values, we would expect each value to appear about 20% of the time. In addition to seeing how many values in the array meet the Boolean condition, we can also use the Boolean mask to select the elements of the original array that meet that condition. Boolean masks can be used directly to index arrays, as shown here:
random_integers[is_equal_to_3]
This outputs the elements of random_integers meeting the Boolean condition we specified. In this case, the 22 elements equal to 3:
Figure 1.27: Using the Boolean mask to index an array
Now you know the basics of Boolean arrays, which are useful in many situations. In particular, you can use the .loc method of DataFrames to index the rows of the DataFrames by a Boolean mask, and the columns by label. Let's continue exploring the case study data with these skills.
In this exercise, with our knowledge of Boolean arrays, we will examine some of the duplicate IDs we discovered. In Exercise 3, we learned that no ID appears more than twice. We can use this learning to locate the duplicate IDs and examine them. Then we take action to remove rows of dubious quality from the dataset. Perform the following steps to complete the exercise:
Note
The code and the output graphics for this exercise have been loaded in a Jupyter Notebook that can be found here:.
Continuing where we left off in Exercise 3, we want the indices of the id_counts series, where the count is 2, to locate the duplicates. We assign the indices of the duplicated IDs to a variable called dupe_mask and display the first 5 duplicated IDs using the following commands:
dupe_mask = id_counts == 2 dupe_mask[0:5]
You will obtain the following output:
Figure 1.28: A Boolean mask to locate duplicate IDs
Here, dupe_mask is the logical mask that we have created for storing the Boolean values.
Note that in the preceding output, we are displaying only the first five entries using dupe_mask to illustrate to contents of this array. As always, you can edit the indices in the square brackets ([]) to change the number of entries displayed.
Our next step is to use this logical mask to select the IDs that are duplicated. The IDs themselves are contained as the index of the id_count series. We can access the index in order to use our logical mask for selection purposes.
Access the index of id_count and display the first five rows as context using the following command:
id_counts.index[0:5]
With this, you will obtain the following output:
Figure 1.29: Duplicated IDs
Select and store the duplicated IDs in a new variable called dupe_ids using the following command:
dupe_ids = id_counts.index[dupe_mask]
Convert dupe_ids to a list and then obtain the length of the list using the following commands:
dupe_ids = list(dupe_ids) len(dupe_ids)
You should obtain the following output:
Figure 1.30: Output displaying the list length
We changed the dupe_ids variable to a list, as we will need it in this form for future steps. The list has a length of 313, as can be seen in the preceding output, which matches our knowledge of the number of duplicate IDs from the value count.
We verify the data in dupe_ids by displaying the first five entries using the following command:
dupe_ids[0:5]
We obtain the following output:
Figure 1.31: Making a list of duplicate IDs
We can observe from the preceding output that the list contains the required entries of duplicate IDs. We're now in a position to examine the data for the IDs in our list of duplicates. In particular, we'd like to look at the values of the features, to see what, if anything, might be different between these duplicate entries. We will use the .isin and .loc methods for this purpose.
Using the first three IDs on our list of dupes, dupe_ids[0:3], we will plan to first find the rows containing these IDs. If we pass this list of IDs to the .isin method of the ID series, this will create another logical mask we can use on the larger DataFrame to display the rows that have these IDs. The .isin method is nested in a .loc statement indexing the DataFrame in order to select the location of all rows containing "True" in the Boolean mask. The second argument of the .loc indexing statement is :, which implies that all columns will be selected. By performing the following steps, we are essentially filtering the DataFrame in order to view all the columns for the first three duplicate IDs.
Run the following command in your Notebook to execute the plan we formulated in the previous step:
df.loc[df['ID'].isin(dupe_ids[0:3]),:].head(10)
Figure 1.32: Examining the data for duplicate IDs
What we observe here is that each duplicate ID appears to have one row with what seems like valid data, and one row of entirely zeros. Take a moment and think to yourself what you would do with this knowledge.
After some reflection, it should be clear that you ought to delete the rows with all zeros. Perhaps these arose through a faulty join condition in the SQL query that generated the data? Regardless, a row of all zeros is definitely invalid data as it makes no sense for someone to have an age of 0, a credit limit of 0, and so on.
One approach to deal with this issue would be to find rows that have all zeros, except for the first column, which has the IDs. These would be invalid data in any case, and it may be that if we get rid of all of these, we would also solve our problem of duplicate IDs. We can find the entries of the DataFrame that are equal to zero by creating a Boolean matrix that is the same size as the whole DataFrame, based on the "is equal to zero" condition.
Create a Boolean matrix of the same size as the entire DataFrame using ==, as shown:
df_zero_mask = df == 0
In the next steps, we'll use df_zero_mask, which is another DataFrame containing Boolean values. The goal will be to create a Boolean series, feature_zero_mask, that identifies every row where all the elements starting from the second column (the features and response, but not the IDs) are 0. To do so, we first need to index df_zero_mask using the integer indexing (.iloc) method. In this method, we pass (:) to examine all rows and (1:) to examine all columns starting with the second one (index 1). Finally, we will apply the all() method along the column axis (axis=1), which will return True if and only if every column in that row is True. This is a lot to think about, but it's pretty simple to code, as will be observed in the following step.
Create the Boolean series feature_zero_mask, as shown in the following:
feature_zero_mask = df_zero_mask.iloc[:,1:].all(axis=1)
Calculate the sum of the Boolean series using the following command:
sum(feature_zero_mask)
You should obtain the following output:
Figure 1.33: The number of rows with all zeros except for the ID
The preceding output tells us that 315 rows have zeros for every column but the first one. This is greater than the number of duplicate IDs (313), so if we delete all the "zero rows," we may get rid of the duplicate ID problem.
Clean the DataFrame by eliminating the rows with all zeros, except for the ID, using the following code:
df_clean_1 = df.loc[~feature_zero_mask,:].copy()
While performing the cleaning operation in the preceding step, we return a new DataFrame called df_clean_1. Notice that here we've used the .copy() method after the .loc indexing operation to create a copy of this output, as opposed to a view on the original DataFrame. You can think of this as creating a new DataFrame, as opposed to referencing the original one. Within the .loc method, we used the logical not operator, ~, to select all the rows that don't have zeros for all the features and response, and : to select all columns. These are the valid data we wish to keep. After doing this, we now want to know if the number of remaining rows is equal to the number of unique IDs.
Verify the number of rows and columns in df_clean_1 by running the following code:
df_clean_1.shape
You will obtain the following output:
Figure 1.34: Dimensions of the cleaned DataFrame
Obtain the number of unique IDs by running the following code:
df_clean_1['ID'].nunique()
Figure 1.35: Number of unique IDs in the cleaned DataFrame
From the preceding output, we can see that we have successfully eliminated duplicates, as the number of unique IDs is equal to the number of rows. Now take a breath and pat yourself on the back. That was a whirlwind introduction to quite a few pandas techniques for indexing and characterizing data. Now that we've filtered out the duplicate IDs, we're in a position to start looking at the actual data itself: the features, and eventually, the response. We'll walk you through this process.
Thus far, we have identified a data quality issue related to the metadata: we had been told that every sample from our dataset corresponded to a unique account ID, but found that this was not the case. We were able to use logical indexing and pandas to correct this issue. This was a fundamental data quality issue, having to do simply with what samples were present, based on the metadata. Aside from this, we are not really interested in the metadata column of account IDs: for the time being these will not help us develop a predictive model for credit default.
Now, we are ready to start examining the values of the features and response, the data we will use to develop our predictive model. Perform the following steps to complete this exercise:
Note
The code and the resulting output for this exercise have been loaded in a Jupyter Notebook that can be found here:.
-2 means the account started that month with a zero balance, and never used any credit
-1 means the account had a balance that was paid in full
0 means that at least the minimum payment was made, but the entire balance wasn't paid (that is, a positive balance was carried to the next month)
We thank our business partner since this answers our questions, for now. Maintaining a good line of communication and working relationship with the business partner is important, as you can see here, and may determine the success or failure of a project.
Obtain the data type of the columns in the data by using the .info() method as shown:
df_clean_1.info()
You should see the following output:
Figure 1.36: Getting column metadata
We can see in Figure 1.34 that there are 25 columns. Each row has 29,685 non-null values, according to this summary, which is the number of rows in the DataFrame. This would indicate that there is no missing data, in the sense that each cell contains some value. However, if there is a fill value to represent missing data, that would not be evident here.
We also see that most columns say int64 next to them, indicating they are an integer data type, that is, numbers such as ..., -2, -1, 0, 1, 2,... . The exceptions are ID and PAY_1. We are already familiar with ID; this contains strings, which are account IDs. What about PAY_1? According to the values in the data dictionary, we'd expect this to contain integers, like all the other features. Let's take a closer look at this column.
Use the.head(n) pandas method to view the top n rows of the PAY_1 series:
df_clean_1['PAY_1'].head(5)
You should obtain the following output:
Figure 1.37: Examine a few columns' contents
The integers on the left of the output are the index, which are simply consecutive integers starting with 0. The data from the PAY_1 column is shown on the left. This is supposed to be the payment status of the most recent month's bill, using values –1, 1, 2, 3, and so on. However, we can see that there are values of 0 here, which are not documented in the data dictionary. According to the data dictionary, "The measurement scale for the repayment status is: -1 = pay duly; 1 = payment delay for one month; 2 = payment delay for two months; . . .; 8 = payment delay for eight months; 9 = payment delay for nine months and above" (). Let's take a closer look, using the value counts of this column.
Obtain the value counts for the PAY_1 column by using .value_counts() method:
df_clean1['PAY_1'].value_counts()
You should see the following output:
Figure 1.38: Value counts of the PAY_1 column
The preceding output reveals the presence of two undocumented values: 0 and –2, as well as the reason this column was imported by pandas as an object data type, instead of int64 as we would expect for integer data. There is a 'Not available' string present in this column, symbolizing missing data. Later on in the book, we'll come back to this when we consider how to deal with missing data. For now, we'll remove rows of the dataset, for which this feature has a missing value.
Use a logical mask with the != operator (which means "does not equal" in Python) to find all the rows that don't have missing data for the PAY_1 feature:
valid_pay_1_mask = df_clean_1['PAY_1'] != 'Not available' valid_pay_1_mask[0:5]
By running the preceding code, you will obtain the following output:
Figure 1.39: Creating a Boolean mask
Check how many rows have no missing data by calculating the sum of the mask:
sum(valid_pay_1_mask)
You will obtain the following output:
Figure 1.40: Sum of the Boolean mask for non-missing data
We see that 26,664 rows do not have the value 'Not available' in the PAY_1 column. We saw from the value count that 3,021 rows do have this value, and 29,685 – 3,021 = 26,664, so this checks out.
Clean the data by eliminating the rows with the missing values of PAY_1 as shown:
df_clean_2 = df_clean_1.loc[valid_pay_1_mask,:].copy()
Obtain the shape of the cleaned data using the following command:
df_clean_2.shape
You will obtain the following output:
Figure 1.41: Shape of the cleaned data
After removing these rows, we check that the resulting DataFrame has the expected shape. You can also check for yourself whether the value counts indicate the desired values have been removed like this: df_clean_2['PAY_1'].value_counts().
Lastly, so this column's data type can be consistent with the others, we will cast it from the generic object type to int64 like all the other features, using the .astype method. Then we select a couple columns, including PAY_1, to examine the data types and make sure it worked.
Run the following command to convert the data type for PAY_1 from object to int64 and show the column metadata for PAY_1 and PAY_2:
df_clean_2['PAY_1'] = df_clean_2['PAY_1'].astype('int64') df_clean_2[['PAY_1', 'PAY_2']].info()
Figure 1.42: Check the data type of the cleaned column
Congratulations, you have completed your second data cleaning operation! However, if you recall, during this process we also noticed the undocumented values of –2 and 0 in PAY_1. Now, let's imagine we got back in touch with our business partner and learned the following information:
So far, we remedied two data quality issues just by asking basic questions or by looking at the .info() summary. Let's now take a look at the first few columns. Before we get to the historical bill payments, we have the credit limits of the accounts of LIMIT_BAL, and the demographic features SEX, EDUCATION, MARRIAGE, and AGE. Our business partner has reached out to us, to let us know that gender should not be used to predict credit-worthiness, as this is unethical by their standards. So we keep this in mind for future reference. Now we explore the rest of these columns, making any corrections that are necessary.
In order to further explore the data, we will use histograms. Histograms are a good way to visualize data that is on a continuous scale, such as currency amounts and ages. A histogram groups similar values into bins, and shows the number of data points in these bins as a bar graph.
To plot histograms, we will start to get familiar with the graphical capabilities of pandas. pandas relies on another library called Matplotlib to create graphics, so we'll also set some options using matplotlib. Using these tools, we'll also learn how to get quick statistical summaries of data in pandas.
In this exercise, we start our exploration of data with the credit limit and age features. We will visualize them and get summary statistics to check that the data contained in these features is sensible. Then we will look at the education and marriage categorical features to see if the values there make sense, and correct them as necessary. LIMIT_BAL and AGE are numerical features, meaning they are measured on a continuous scale. Consequently, we'll use histograms to visualize them. Perform the following steps to complete the exercise:
Note
The code and the resulting output for this exercise have been loaded in a Jupyter Notebook that can found here:.
Import matplotlib and set up some plotting options with this code snippet:
import matplotlib.pyplot as plt #import plotting package #render plotting automatically %matplotlib inline import matplotlib as mpl #additional plotting functionality mpl.rcParams['figure.dpi'] = 400 #high resolution figures
This imports matplotlib and uses .rcParams to set the resolution (dpi = dots per inch) for a nice crisp image; you may not want to worry about this last part unless you are preparing things for presentation, as it could make the images quite large in your notebook.
Run df_clean_2[['LIMIT_BAL', 'AGE']].hist() and you should see the following histograms:
Figure 1.43: Histograms of the credit limit and age data
This is nice visual snapshot of these features. We can get a quick, approximate look at all of the data in this way. In order to see statistics such as mean and median (that is, the 50th percentile), there is another helpful pandas function.
Generate a tabular report of summary statistics using the following command:
df_clean_2[['LIMIT_BAL', 'AGE']].describe()
You should see the following output:
Figure 1.44: Statistical summaries of credit limit and age data
Based on the histograms and the convenient statistics computed by .describe(), which include a count of non-nulls, the mean and standard deviation, minimum, maximum, and quartiles, we can make a few judgements.
LIMIT_BAL, the credit limit, seems to make sense. The credit limits have a minimum of 10,000. This dataset is from Taiwan; the exact unit of currency (NT dollar) may not be familiar, but intuitively, a credit limit should be above zero. You are encouraged to look up the conversion to your local currency and consider these credit limits. For example, 1 US dollar is about 30 NT dollars.
The AGE feature also looks reasonably distributed, with no one under the age of 21 having a credit account.
For the categorical features, a look at the value counts is useful, since there are relatively few unique values.
Obtain the value counts for the EDUCATION feature using the following code:
df_clean_2['EDUCATION'].value_counts()
You should see this output:
Figure 1.45: Value counts of the EDUCATION feature
Here, we see undocumented education levels 0, 5, and 6, as the data dictionary describes only "Education (1 = graduate school; 2 = university; 3 = high school; 4 = others)". Our business partner tells us they don't know about the others. Since they are not very prevalent, we will lump them in with the "others" category, which seems appropriate, with our client's blessing, of course.
Run this code to combine the undocumented levels of the EDUCATION feature into the level for "others" and then examine the results:
df_clean_2['EDUCATION'].replace(to_replace=[0, 5, 6], value=4, inplace=True) df_clean_2['EDUCATION'].value_counts()
The pandas .replace method makes doing the replacements described in the preceding step pretty quick. Once you run the code, you should see this output:
Figure 1.46: Cleaning the EDUCATION feature
Note that here we make this change in place (inplace=True). This means that, instead of returning a new DataFrame, this operation will make the change on the existing DataFrame.
Obtain the value counts for the MARRIAGE feature using the following code:
df_clean_2['MARRIAGE'].value_counts()
You should obtain the following output:
Figure 1.47: Value counts of raw MARRIAGE feature
The issue here is similar to that encountered for the EDUCATION feature; there is a value, 0, which is not documented in the data dictionary: "1 = married; 2 = single; 3 = others". So we'll lump it in with "others".
Change the values of 0 in the MARRIAGE feature to 3 and examine the result with this code:
df_clean_2['MARRIAGE'].replace(to_replace=0, value=3, inplace=True) df_clean_2['MARRIAGE'].value_counts()
The output should be:
Figure 1.48: Value counts of cleaned MARRIAGE feature
We've now accomplished a lot of exploration and cleaning of the data. We will do some more advanced visualization and exploration of the financial history features, that come after this in the DataFrame, later.
Machine learning algorithms only work with numbers. If your data contains text features, for example, these would require transformation to numbers in some way. We learned above that the data for our case study is, in fact, entirely numerical. However, it's worth thinking about how it got to be that way. In particular, consider the EDUCATION feature.
This is an example of what is called a categorical feature: you can imagine that as raw data, this column consisted of the text labels 'graduate school', 'university', 'high school', and 'others'. These are called the levels of the categorical feature; here, there are four levels. It is only through a mapping, which has already been chosen for us, that these data exist as the numbers 1, 2, 3, and 4 in our dataset. This particular assignment of categories to numbers creates what is known as an ordinal feature, since the levels are mapped to numbers in order. As a data scientist, at a minimum you need to be aware of such mappings, if you are not choosing them yourself.
What are the implications of this mapping?
It makes some sense that the education levels are ranked, with 1 corresponding to the highest level of education in our dataset, 2 to the next highest, 3 to the next, and 4 presumably including the lowest levels. However, when you use this encoding as a numerical feature in a machine learning model, it will be treated just like any other numerical feature. For some models, this effect may not be desired.
What if a model seeks to find a straight-line relationship between the features and response?
Whether or not this works in practice depends on the actual relationship between different levels of education and the outcome we are trying to predict.
Here, we examine two hypothetical cases of ordinal categorical variables, each with 10 levels. The levels measure the self-reported satisfaction levels from customers visiting a website. The average number of minutes spent on the website for customers reporting each level is plotted on the y-axis. We've also plotted the line of best fit in each case to illustrate how a linear model would deal with these data, as shown in the following figure:
Figure 1.49: Ordinal features may or may not work well in a linear model
We can see that if an algorithm assumes a linear (straight-line) relationship between features and response, this may or may not work well depending on the actual relationship between this feature and the response variable. Notice that in the preceding example, we are modeling a regression problem: the response variable takes on a continuous range of numbers. However, some classification algorithms such as logistic regression also assume a linear effect of the features. We will discuss this in greater detail later when we get into modeling the data for our case study.
Roughly speaking, for a binary classification model, you can look at the different levels of a categorical feature in terms of the average values of the response variable, which represent the "rates" of the positive class (i.e., the samples where the response variable = 1) for each level. This can give you an idea of whether an ordinal encoding will work well with a linear model. Assuming you've imported the same packages in your Jupyter notebook as in the previous sections, you can quickly look at this using a groupby/agg and bar plot in pandas:
f_clean_2.groupby('EDUCATION').agg({'default payment next month':'mean'}).plot.bar(legend=False) plt.ylabel('Default rate') plt.xlabel('Education level: ordinal encoding')
Once you run the code, you should obtain the following output:
Figure 1.50: Default rate within education levels
Similar to example 2 in Figure 1.49, it looks like a straight-line fit would probably not be the best description of the data here. In case a feature has a non-linear effect like this, it may be better to use a more complex algorithm such as a decision tree or random forest. Or, if a simpler and more interpretable linear model such as logistic regression is desired, we could avoid an ordinal encoding and use a different way of encoding categorical variables. A popular way of doing this is called one-hot encoding (OHE).
OHE is a way to transform a categorical feature, which may consist of text labels in the raw data, into a numerical feature that can be used in mathematical models.
Let's learn about this in an exercise. And if you are wondering why a logistic regression is more interpretable and a random forest is more complex, we will be learning about these concepts in detail during the rest of the book.
Note
Categorical variables in different machine learning packages
Some machine learning packages, for instance, certain R packages or newer versions of the Spark platform for big data, will handle categorical variables without assuming they are ordinal. Always be sure to carefully read the documentation to learn what the model will assume about the features, and how to specify whether a variable is categorical, if that option is available.
In this exercise, we will "reverse engineer" the EDUCATION feature in the dataset to obtain the text labels that represent the different education levels, then show how to use pandas to create an OHE.
First, let's consider our EDUCATION feature, before it was encoded as an ordinal. From the data dictionary, we know that 1 = graduate school, 2 = university, 3 = high school, 4 = others. We would like to recreate a column that has these strings, instead of numbers. Perform the following steps to complete the exercise:
Note
The code and the resulting output for this exercise have been loaded in a Jupyter Notebook that can found here:.
Create an empty column for the categorical labels called EDUCATION_CAT. Using the following command:
df_clean_2['EDUCATION_CAT'] = 'none'
Examine the first few rows of the DataFrame for the EDUCATION and EDUCATION_CAT columns:
df_clean_2[['EDUCATION', 'EDUCATION_CAT']].head(10)
The output should appear as follows:
Figure 1.51: Selecting columns and viewing the first 10 rows
We need to populate this new column with the appropriate strings. pandas provides a convenient functionality for mapping values of a Series on to new values. This function is in fact called .map and relies on a dictionary to establish the correspondence between the old values and the new values. Our goal here is to map the numbers in EDUCATION on to the strings they represent. For example, where the EDUCATION column equals the number 1, we'll assign the 'graduate school' string to the EDUCATION_CAT column, and so on for the other education levels.
Create a dictionary that describes the mapping for education categories using the following code:
cat_mapping = { 1: "graduate school", 2: "university", 3: "high school", 4: "others" }
Apply the mapping to the original EDUCATION column using .map and assign the result to the new EDUCATION_CAT column:
df_clean_2['EDUCATION_CAT'] = df_clean_2['EDUCATION'].map(cat_mapping) df_clean_2[['EDUCATION', 'EDUCATION_CAT']].head(10)
After running those lines, you should see the following output:
Figure 1.52: Examining the string values corresponding to the ordinal encoding of EDUCATION
Excellent! Note that we could have skipped Step 1, where we assigned the new column with 'none', and gone straight to Steps 3 and 4 to create the new column. However, sometimes it's useful to create a new column initialized with a single value, so it's worth knowing how to do that.
Now we are ready to one-hot encode. We can do this by passing a Series of a DataFrame to the pandas get_dummies() function. The function got this name because one-hot encoded columns are also referred to as dummy variables. The result will be a new DataFrame, with as many columns as there are levels of the categorical variable.
Run this code to create a one-hot encoded DataFrame of the EDUCATION_CAT column. Examine the first 10 rows:
edu_ohe = pd.get_dummies(df_clean_2['EDUCATION_CAT']) edu_ohe.head(10)
This should produce the following output:
Figure 1.53: Data frame of one-hot encoding
You can now see why this is called "one-hot encoding": across all these columns, any particular row will have a 1 in exactly 1 column, and 0s in the rest. For a given row, the column with the 1 should match up to the level of the original categorical variable. To check this, we need to concatenate this new DataFrame with the original one and examine the results side by side. We will use the pandas concat function, to which we pass the list of DataFrames we wish to concatenate, and the axis=1 keyword saying to concatenate them horizontally; that is, along the column axis. This basically means we are combining these two DataFrames "side by side", which we know we can do because we just created this new DataFrame from the original one: we know it will have the same number of rows, which will be in the same order as the original DataFrame.
Concatenate the one-hot encoded DataFrame to the original DataFrame as follows:
df_with_ohe = pd.concat([df_clean_2, edu_ohe], axis=1) df_with_ohe[['EDUCATION_CAT', 'graduate school', 'high school', 'university', 'others']].head(10)
You should see this output:
Figure 1.54: Checking the one-hot encoded columns
Alright, looks like this has worked as intended. OHE is another way to encode categorical features that avoids the implied numerical structure of an ordinal encoding. However, notice what has happened here: we have taken a single column, EDUCATION, and exploded it out into as many columns as there were levels in the feature. In this case, since there are only four levels, this is not such a big deal. However, if your categorical variable had a very large number of levels, you may want to consider an alternate strategy, such as grouping some levels together into single categories.
This is a good time to save the DataFrame we've created here, which encapsulates our efforts at cleaning the data and adding an OHE column.
Choose a filename, and write the latest DataFrame to a CSV (comma-separated value) file like this: df_with_ohe.to_csv('../Data/Chapter_1_cleaned_data.csv', index=False), where we don't include the index, as this is not necessary and can create extra columns when we load it later.
We are ready to explore the rest of the features in the case study dataset. We will first practice loading a DataFrame from the CSV file we saved at the end of the last section. This can be done using the following snippet:
df = pd.read_csv('../Data/Chapter_1_cleaned_data.csv')
Note that if you are continuing to write code in the same notebook, this overwrites the value held by the df variable previously, which was the DataFrame of raw data. We encourage you to check the .head(), .columns, and .shape of the DataFrame. These are good things to check whenever loading a DataFrame. We don't do this here for the sake of space, but it's done in the companion notebook.
The remaining features to be examined are the financial history features. They fall naturally into three groups: the status of the monthly payments for the last six months, and the billed and paid amounts for the same period. First, let's look at the payment statuses. It is convenient to break these out as a list so we can study them together. You can do this using the following code:
pay_feats = ['PAY_1', 'PAY_2', 'PAY_3', 'PAY_4', 'PAY_5', 'PAY_6']
We can use the .describe method on these six Series to examine summary statistics:
df[pay_feats].describe()
This should produce the following output:
Figure 1.55: Summary statistics of payment status features
Here, we observe that the range of values is the same for all of these features: -2, -1, 0, ... 8. It appears that the value of 9, described in the data dictionary as "payment delay for nine months and above", is never observed.
We have already clarified the meaning of all of these levels, some of which were not in the original data dictionary. Now let's look again at the value_counts() of PAY_1, now sorted by the values we are counting, which are the index of this Series:
df[pay_feats[0]].value_counts().sort_index()
This should produce the following output:
Figure 1.56: Value counts of the payment status for the previous month
Compared to the positive integer values, most of the values are either -2, -1, or 0, which correspond to an account that was in good standing last month: not used, paid in full, or made at least the minimum payment.
Notice that, because of the definition of the other values of this variable (1 = payment delay for one month; 2 = payment delay for two months, and so on), this feature is sort of a hybrid of categorical and numerical features. Why should no credit usage correspond to a value of -2, while a value of 2 means a 2-month late payment, and so forth? We should acknowledge that the numerical coding of payment statuses -2, -1, and 0 constitute a decision made by the creator of the dataset on how to encode certain categorical features, which were then lumped in with a feature that is truly numerical: the number of months of payment delay (values of 1 and larger). Later on, we will consider the potential effects of this way of doing things on the predictive capability of this feature.
For now, we will simply continue to explore the data. This dataset is small enough, with 18 of these financial features and a handful of others, that we can afford to individually examine every feature. If the dataset had thousands of features, we would likely forgo this and instead explore dimensionality reduction techniques, which are ways to condense the information in a large number of features down to a smaller number of derived features, or, alternatively, methods of feature selection, which can be used to isolate the important features from a candidate field of many. We will demonstrate and explain some feature selection techniques later. But on this dataset, it's feasible to visualize every feature. As we know from the last chapter, a histogram is a good way to get a quick visual interpretation of the same kind of information we would get from tables of value counts. You can try this on the most recent month's payment status features with df[pay_feats[0]].hist(), to produce this:
Figure 1.57: Histogram of PAY_1 using default arguments
Now we're going to take an in-depth look at how this graphic is produced and consider whether it is as informative as it should be. A key point about the graphical functionality of pandas is that pandas plotting actually calls matplotlib under the hood. Notice that the last available argument to the pandas .hist() method is **kwds, which the documentation indicates are matplotlib keyword arguments.
Note
For more information, refer to the following:.
Looking at the matplotlib documentation for matplotlib.pyplot.hist shows additional arguments you can use with the pandas .hist() method, such as the type of histogram to plot (see for more details). In general, to get more details about plotting functionality, it's important to be aware of matplotlib, and in some scenarios, you will want to use matplotlib directly, instead of pandas, to have more control over the appearance of plots.
You should be aware that aware that pandas uses matplotlib, which in turn uses NumPy. When plotting histograms with matplotlib, the numerical calculation for the values that make up the histogram is actually carried out by the NumPy .histogram function. This is a key example of code reuse, or "not reinventing the wheel". If a standard functionality, such as plotting a histogram, already has a good implementation in Python, there is no reason to create it anew. And the if mathematical operation to create the histogram data for the plot is already implemented, this should be leveraged as well. This shows the interconnectedness of the Python ecosystem.
We'll now address a couple of key issues that arise when calculating and plotting histograms.
Number of bins
Histograms work by grouping together values into what are called bins. The number of bins is the number of vertical bars that make up the discrete histogram plot we see. If there are a large number of unique values on a continuous scale, such as the histogram of ages we viewed earlier, histogram plotting works relatively well "out of the box", with default arguments. However, when the number of unique values is close to the number of bins, the results may be a little misleading. The default number of bins is 10, while in the PAY_1 feature, there are 11 unique values. In cases like this, it's better to manually set the number of histogram bins to the number of unique values.
In our current example, since there are very few values in the higher bins of PAY_1, the plot may not look much different. But in general, this is important to keep in mind when plotting histograms.
Bin edges
The locations of the edges of the bins determine how the values get grouped in the histogram. Instead of indicating the number of bins to the plotting function, you could alternatively supply a list or array of numbers for the bins keyword argument. This input would be interpreted as the bin edge locations on the x axis. The way values are grouped into bins in matplotlib, using the edge locations, is important to understand. All bins, except the last one, group together values as low as the left edge, and up to but not including values as high as the right edge. In other words, the left edge is closed but the right edge is open for these bins. However, the last bin includes both edges; it has a closed left and right edge. This is of more practical importance when you are binning a relatively small number of unique values that may land on the bin edges.
For control over plot appearance, it's usually better to specify the bin edge locations. We'll create an array of 12 numbers, which will result in 11 bins, each one centered around one of the unique values of PAY_1:
pay_1_bins = np.array(range(-2,10)) - 0.5 pay_1_bins
The output shows the bin edge locations:
Figure 1.58: Specifying histogram bin edges
As a final point of style, it is important to always label your plots so that they are interpretable. We haven't yet done this manually, because in some cases, pandas does it automatically, and in other cases, we simply left the plots unlabeled. From now on, we will follow best practice and label all plots. We use the xlabel and ylabel functions in matplotlib to add axis labels to this plot. The code is as follows:
df[pay_feats[0]].hist(bins=pay_1_bins) plt.xlabel('PAY_1') plt.ylabel('Number of accounts')
The output should look like this:
Figure 1.59: A better histogram of PAY_1
While it's tempting, and often sufficient, to just call plotting functions with the default arguments, one of your jobs as a data scientist is to create accurate and representative data visualizations. To do that, sometimes you need to dig into the details of plotting code, as we've done here.
What have we learned from this data visualization?
Since we already looked at the value counts, this confirms for us that most accounts are in good standing (values -2, -1, and 0). For those that aren't, it's more common for the "months late" to be a smaller number. This makes sense; likely, most people are paying off their balances before too long. Otherwise, their account may be closed or sold to a collection agency. Examining the distribution of your features and making sure it seems reasonable is a good thing to confirm with your client, as the quality of these data underlie the predictive modeling you seek to do.
Now that we've established some good plotting style for histograms, let's use pandas to plot multiple histograms together, and visualize the payment status features for each of the last six months. We can pass our list of column names pay_feats to access multiple columns to plot with the .hist() method, specifying the bin edges we've already determined, and indicating we'd like a 2 by 3 grid of plots. First, we set the font size small enough to fit between these subplots. Here is the code for this:
mpl.rcParams['font.size'] = 4 df[pay_feats].hist(bins=pay_1_bins, layout=(2,3))
The plot titles have been created automatically for us based on the column names. The y axes are understood to be counts. The resulting visualizations are as follows:
Figure 1.60: Grid of histogram subplots
We've already seen the first of these, and it makes sense. What about the rest of them? Remember the definitions of the positive integer values of these features, and what each feature means. For example, PAY_2 is the repayment status in August, PAY_3 is the repayment status in July, and the others go further back in time. A value of 1 means payment delay for 1 month, while a value of 2 means payment delay for 2 months, and so forth.
Did you notice that something doesn't seem right? Consider the values between July (PAY_3) and August (PAY_2). In July, there are very few accounts that had a 1-month payment delay; this bar is not really visible in the histogram. However, in August, there are suddenly thousands of accounts with a 2-month payment delay. This does not make sense: the number of accounts with a 2-month delay in a given month should be less than or equal to the number of accounts with a 1-month delay in the previous month. Let's take a closer look at accounts with a 2-month delay in August and see what the payment status was in July. We can do this with the following code, using a Boolean mask and .loc, as shown in the following snippet:
df.loc[df['PAY_2']==2, ['PAY_2', 'PAY_3']].head()
The output of this should appear as follows:
Figure 1.61: Payment status in July (PAY_3) of accounts with a 2-month payment delay in August (PAY_2)
From Figure 1.61, it's clear that accounts with a 2-month delay in August have nonsensical values for the July payment status. The only way to progress to a 2-month delay should be from a 1-month delay the previous month, yet none of these accounts indicate that.
When you see something like this in the data, you need to either check the logic in the query used to create the dataset or contact the person who gave you the dataset. After double-checking these results, for example using .value_counts() to view the numbers directly, we contact our client to inquire about this issue.
The client lets us know that they had been having problems with pulling the most recent month of data, leading to faulty reporting for accounts that had a 1-month delay in payment. In September, they had mostly fixed these problems (although not entirely; that is why there were missing values in the PAY_1 feature, as we found). So, in our dataset, the value of 1 is underreported in all months except for September (the PAY_1 feature). In theory, the client could create a query to look back into their database and determine the correct values for PAY_2, PAY_3, and so on up to PAY_6. However, for practical reasons, they won't be able to complete this retrospective analysis in time for us to receive it and include it in our analysis.
Because of this, only the most recent month of our payment status data is correct. This means that, of all the payment status features, only PAY_1 is representative of future data, those that will be used to make predictions with the model we develop. This is a key point: a predictive model relies on getting the same kind of data to make predictions that it was trained on. This means we can use PAY_1 as a feature in our model, but not PAY_2 or the other payment status features from previous months.
This episode shows the importance of a thorough examination of data quality. Only by carefully combing through the data did we discover this issue. It would have been nice if the client had told us up front that they were having reporting issues over the last few months, when our dataset was collected, and that the reporting procedure was not consistent during that time period. However, ultimately it is our responsibility to build a credible model, so we need to be sure we believe the data is correct, by making this kind of careful examination. We explain to the client that we can't use the older features since they are not representative of the future data the model will be scored on (that is, make predictions on future months), and politely ask them to let us know of any further data issues they are aware of. There are none at this time.
In this activity, you will examine the remaining financial features in a similar way to how we examined PAY_1, PAY_2, PAY_3, and so on. In order to better visualize some of these data, we'll use a mathematical function that should be familiar: the logarithm. You'll use pandas to apply, which serves to apply any function to an entire column or DataFrame in the process. Once you complete the activity, you should have the following set of histograms of logarithmic transformations of non-zero payments:
Figure 1.62: Expected set of histograms
Perform the following steps to complete the activity:
Note
The code and the resulting output graphics for this exercise have been loaded into a Jupyter Notebook that can be found here:.
Create lists of feature names for the remaining financial features.
Use .describe() to examine statistical summaries of the bill amount features. Reflect on what you see. Does it make sense?
Visualize the bill amount features using a 2 by 3 grid of histogram plots.
Hint: You can use 20 bins for this visualization.
Obtain the .describe() summary of the payment amount features. Does it make sense?
Plot a histogram of the bill payment features similar to the bill amount features, but also apply some rotation to the x-axis labels with the xrot keyword argument so that they don't overlap. In any plotting function, you can include the xrot=<angle> keyword argument to rotate x-axis labels by a given angle in degrees. Consider the results.
Use a Boolean mask to see how many of the payment amount data are exactly equal to 0. Does this make sense given the histogram in the previous step?
Ignoring the payments of 0 using the mask you created in the previous step, use pandas .apply() and NumPy's np.log10() to plot histograms of logarithmic transformations of the non-zero payments. Consider the results.
Hint: You can use .apply() to apply any function, including log10, to all the elements of a DataFrame or a column using the following syntax: .apply(<function_name>).
This was the first chapter in our book, Data Science Projects with Python. Here, we made extensive use of pandas to load and explore the case study data. We learned how to check for basic consistency and correctness by using a combination of statistical summaries and visualizations. We answered such questions as "Are the unique account IDs truly unique?", "Is there any missing data that has been given a fill value?", and "Do the values of the features make sense given their definition?"
You may notice that we spent nearly all of this chapter identifying and correcting issues with our dataset. This is often the most time consuming stage of a data science project. While it is not always the most exciting part of the job, it gives you the raw materials necessary to build exciting models and insights. These will be the subjects of most of the rest of this book.
Mastery of software tools and mathematical concepts is what allows you execute data science projects, at a technical level. However, managing your relationships with clients, who are relying on your services to generate insights from their data, is just as important to a successful project. You must make as much use as you can of your business partner's understanding of the data. They are likely going to be more familiar with it than you, unless you are already a subject matter expert on the data for the project you are completing. However, even in that case, your first step should be a thorough and critical review of the data you are using.
In our data exploration, we discovered an issue that could have undermined our project: the data we had received was not internally consistent. Most of the months of the payment status features were plagued by a data reporting issue, included nonsensical values, and were not representative of the most recent month of data, or the data that would be available to the model going forward. We only uncovered this issue by taking a careful look at all of the features. While this is not always possible in different projects, especially when there is a very large number of features, you should always take the time to spot check as many features as you can. If you can't examine every feature, it's useful to check a few of every category of feature (if the features fall into categories, such as financial or demographic).
When discussing data issues like this with your client, make sure you are respectful and professional. The client may simply have forgotten about the issue when presenting you with the data. Or, they may have known about it but assumed it wouldn't affect your analysis for some reason. In any case, you are doing them an essential service by bringing it to their attention and explaining why it would be a problem to use flawed data to build a model. You should back up your claims with results if possible, showing that using the incorrect data either leads to decreased, or unchanged, model performance. Or, alternatively, you could explain that if only a different kind of data would be available in the future, compared to what's available now for training a model, the model built now will not be useful. Be as specific as you can, presenting the kinds of graphs and tables that we used to discover the data issue here.
In the next chapter, we will examine the response variable for our case study problem, which completes the initial data exploration. Then we will start to get some hands-on experience with machine learning models and learn how we can decide whether a model is useful or not. | https://www.packtpub.com/product/data-science-projects-with-python/9781838551025 | CC-MAIN-2021-39 | refinedweb | 15,042 | 61.56 |
NAME
wctomb - convert a wide character to a multibyte sequence
SYNOPSIS
#include <stdlib.h> int wctomb(char *s, wchar_t wc);
DESCRIPTION only known, only known to this function, to the initial state, and returns non-zero if the encoding has nontrivial shift state, or zero if the encoding is stateless.
RETURN VALUE-zero if the encoding has nontrivial shift state, or zero if the encoding is stateless.
CONFORMING TO
C99.
NOTES
The behavior of wctomb() depends on the LC_CTYPE category of the current locale. This function is not multi-thread safe. The function wcrtomb(3) provides a better interface to the same functionality.
SEE ALSO
MB_CUR_MAX(3), wcrtomb(3), wcstombs(3)
COLOPHON
This page is part of release 3.01 of the Linux man-pages project. A description of the project, and information about reporting bugs, can be found at. | http://manpages.ubuntu.com/manpages/intrepid/man3/wctomb.3.html | CC-MAIN-2015-40 | refinedweb | 140 | 58.38 |
Message-ID: <978468361.6585.1413793327016.JavaMail.haus-conf@codehaus02.managed.contegix.com> Subject: Exported From Confluence MIME-Version: 1.0 Content-Type: multipart/related; boundary="----=_Part_6584_303775040.1413793327016" ------=_Part_6584_303775040.1413793327016 Content-Type: text/html; charset=UTF-8 Content-Transfer-Encoding: quoted-printable Content-Location:
This tutorial assumes you have basic computing knowledge and are comfort= able with the command line interface of your operating system. If you're fa= miliar with programming in general, you might just want to browse the code = snippets and ignore the rest of the article in general.=20
*Things required:
*Boo.
*A .NET ru= ntime - either Microsoft's .NET Runtime on = Windows, or Mono on Linux.
*Your favorite text editor.<= br /> *Being comfortable with the command-line, for now.
*Free time.<= /p>=20
Before we even get started, its time to show you the obligatory "He= llo, world!" application. Every language tutorial has them, and Boo is= no exception! Everybody's gotta start somewhere.=20
Crack open Your Favorite Text Editor.
On the very first line, type= ,
print "Hello, world!"=20
Save the file as hello.boo, and make a note of the directory its been sa=
ved in.
Go to the command prompt and find the directory you installed= Boo into, and go to the bin subdirectory. /boo/bin, if you will!
Now, type,=20
booi path_to_directory_where_hello_boo_is\hello.boo=20
You'll see "Hello, world!" printed on screen. Welcome to Boo.<= /p>=20
Print is an expression in Boo that is used to feed output to a device ca=
lled "Standard Output."
Ignoring the concept of "Stand= ard Output" completely, what it means (for now) is that the "prin= t" expression will just show text in the console, like you just saw wi= th the hello world program.
What if we want to get really specific - what if we want to print out so= meone else's name instead of just saying hello to the entire freaking plane= t, eh? We could, of course,=20
print "Hello, Bob!" print "Hello, Sally!" ...=20
but everytime we wanted to say hello to someone new, we would be in quit= e a quandary!=20
But, never fear, for your hero is here, and he will now show you how to = read input from the user - don't worry, its really easy and you don't have = to worry about it. Crack open hello.boo and replace its contents with,= =20
import System name =3D Console.ReadLine() print "Hello, $name"=20
and save the file.=20
We'll break it down line-by-line in a second, but for now, run "boo= i" just like you did before, and stare in awe: nothing's happening! Al= l there is is a blinking cursor! Type your name and press enter. I typed &q= uot;Bongo," because I'm a freak.=20
Hello, Bongo=20
Neat, eh, but what happened?=20
import System=20
System is a namespace - its like a box with lots of delicious snacktreat=
s in it, or, if you're on a diet, like a box full of slightly stale protein=
bars. The "Console" class is one of these delicious treats, just=
waiting to be plucked from the box. We could have accessed the "Conso=
le" class by using "System.Console," but we didn't - why?
Using the "import" keyword is a way of saying, "dump all= the contents of the System namespace into my file so I don't have to keep = typing the namespace, 'System,' before everything." Why would you do t= his? Because you're lazy, that's why.
name =3D Console.ReadLine()=20
Here you are doing two things - you are calling a member of the "Co= nsole" class, called "ReadLine()", and storing a value it re= turns into "name." ReadLine() is a method that waits for the user= to type something and press enter, and returns a string o= f characters. This string goes into the "name" object. Thus, were= the user - an upstanding citizen such as yourself - to type "Bongo,&q= uot; then "name" would now have the contents "Bongo" af= ter the user pressed the enter key.=20
print "Hello, $name"=20
This is the easiest part of the program - its called "String Interpolation" The curly b= race symbols essentially mean, 'embed this object inside of this string,' s= o when you write "$name" you are really saying, "replace $na= me with the contents of name." Since we typed in "Bongo" ear= lier and stored that in the name variable, instead of seeing "Hello, $= name" printed on the screen we will instead see, "Hello, Bongo.&q= uot; Take special note: using $<object> actually calls a special memb= er that every object has, called, 'ToString()' - this member returns a stri= ng that represents a formatted description of the object. Not all classes i= mplement their own custom ToString() member, so you might see something str= ange like 'System.DateTime' instead of the actual date and time.=20
Exercises:
*Create a program that reads in the user's name and pri= nts outs something like, "Your name is $name. Hello, $name!" exce= pt that $name is replaced with the user's name.
*Create a program tha=
import System name =3D Console.ReadLine() print "Hello, $name"=20
program if you are feeling lost.=20
*Tip: There are many more classes available in the System namespace - go= to Microsoft .NET Class guide and chec= k out the namespaces available - there are tons of classes inside! Remember= to use "import" or else you'll be typing System.Console" al= l year long. | http://docs.codehaus.org/exportword?pageId=15298 | CC-MAIN-2014-42 | refinedweb | 927 | 74.59 |
1. What is the managed and unmanaged code in .net?
Answer: The .NET Framework provides a run-time environment called the Common Language Runtime, which (Mulesoft Training Online .
2. What are the memory-mapped files?
Answer: Memory-mapped files are used to map the content of a file to the logical address of an application. It makes you able to run multiple processes on the same machine to share data with each other. To obtain a memory-mapped file object, you can use the method MemoryMappedFile.CreateFromFiles( ). It represents a persistent memory-mapped file from a file on disk.
3. Explain GridView control in ASP.NET?
Answer: The GridView control displays the values of a data source in a table. Each column represents a field, while each row represents a record. The GridView control supports the following features:
Binding to data source controls, such as SqlDataSource.
Built-in.
Creating a GridView
4. What is the difference between ASP.NET Web API and WCF?
Answer: The ASP. NET Web API is a framework that uses the HTTP services and makes it easy to provide a response to the client request. The response depends on the request of the clients. The Web API builds the HTTP services and handles the request using the HTTP protocols. The request may be GET, POST, DELETE, PUT. We can also say that ASP. NET Web API:
Is an HTTP service.
Is designed for reaching the broad range of clients.
Uses the HTTP application.
We use ASP. NET Web API for creating the RESTful (Representational State Transfer) services.
The following are some important points of the ASP. NET Web API:
The ASP. NET Web API supports the MVC application features that are controller, media formatters, routing etcetera.
It is a platform for creating the REST services.
It is a framework for creating HTTP services.
Responses can be formatted by the APIs MediaTypeFormatter into the JavaScript Object Notation (JSON) and Extensible Markup Language (XML) formats. (E learning Portal)
5. What are the defining traits of an object-oriented language?
Answer:
The defining traits of an object-oriented language are:
a) Inheritance
b) Abstraction
c) Encapsulation
d) Polymorphism
Inheritance: The main class or the root class is called as a Base Class. Any class which is expected to have ALL properties of the base class along with its own is called a Derived class. The process of deriving such a class is Derived class.
Abstraction: Abstraction is creating models or classes of some broad concept. Abstraction can be achieved through Inheritance or even Composition.
Encapsulation: Encapsulation is a collection of functions of a class and object. The “Food” class is an encapsulated form. It is achieved by specifying which class can use which members (private, public, protected) of an object.
Polymorphism: Polymorphism means existing in different forms. Inheritance is an example of Polymorphism. A base class exists in different forms as derived classes. Operator overloading is an example of a Polymorphism in which an operator can be applied in different situations.
6. What is a CLS (Common Language Specification)?
Answer: CLS is a specification that defines the rules to support language integration. This is done in such a way, that programs are written in any language (.NET compliant) can communicate with one another. This also can take full advantage of inheritance, polymorphism, exceptions, and other features. This is a subset of the CTS, which all .NET languages are expected to support.
7. How can we apply themes in ASP.NET application?
Answer:: ( tableau training videos )
Page Theme
A Page theme contains the control skins, style sheets, graphic files, and other resources inside the subfolder of the App_Theme folder in the Solution Explorer window..
8. Which method do you use to enforce garbage collection in .NET?
Answer: The System.GC.Collect() method.
9. What are the different types of indexes in .Net?
Answer:
There are two types of indexes in .Net:
Clustered index and non-clustered index
10. How can you identify that the page is posted back?
Answer:
There is a property, named as “IsPostBack” property. You can check it to know that the page is post backed or not. ( data science training online )
11. What is the full form of ADO?
Answer:
The full form of ADO is ActiveX Data Object.
12. What is Click Once?
Answer: ClickOnce is a new deployment technology that allows you to create and publish self-updating applications that can be installed and run with minimal user interaction.
13. What is Ajax in ASP.NET?
Answer: Ajax stands for Asynchronous (Selenium Training )JavaScript and XML; in other words, Ajax is the combination of various technologies such as JavaScript, CSS, XHTML, DOM, etc.back.
Ajax is platform-independent; in other words, AJAX is a cross-platform technology that can be used on any Operating System since it is based on XML & JavaScript. It also supports open source implementation of other technology. It partially renders the page to the server instead of complete page postback. We use AJAX for developing faster, better and more interactive web applications. AJAX uses an HTTP request between the webserver & browser. ( devops training videos )
With AJAX, when a user clicks a button, you can use JavaScript and DHTML to (Sap Fico Online Training )immediately update the UI, and spawn an asynchronous request to the server to fetch results.
When the response is generated, you can then use JavaScript and CSS to update your UI accordingly without refreshing the entire page. While this is happening, the form on the user’s screen doesn’t flash, blink, disappear, or stall.
The power of AJAX lies in its ability to communicate with the server asynchronously, using an XMLHttpRequest object without requiring a browser refresh.
Ajax essentially puts JavaScript technology and the XMLHttpRequest object between your Web form and the server.
14. What is the global assembly cache (GAC)?
Answer: GAC is a machine-wide cache of assemblies that allows .NET applications to share libraries. GAC solves some of the problems associated with DLL’s (DLL Hell).
15. What is the use of Error Provider Control in .NET?
Answer:. ( tableau training online )
16. What is the PostBack property in ASP.NET?
Answer: webserver.
ASP.NET also adds two additional hidden input fields that are used to pass information back to the server. This information consists of the ID of the control that raised the event and any additional information if needed. These fields will empty initially as shown below,
The following actions will be taken place when a user changes a control that has the AutoPostBack property set to true:
On the client-side, the JavaScript _doPostBack function is invoked, and the page is resubmitted to the server.
ASP.NET re-creates the Page object using the .aspx file.
ASP.NET retrieves state information from the hidden view state field and updates the controls accordingly.
The Page. The load event is fired.
The appropriate change event is fired for the control. (If more than one control has been changed, the order of change events is undetermined.)
The Page.PreRender event fires and the page is rendered (transformed from a set of objects to an HTML page).
Finally, Page. Unload event is fired.
The new page is sent to the client.
17. Explain Cookie-less Session in ASP.NET?
Answer: By default, a session uses a cookie in the background. To enable a cookie-less session, we need to change some configuration in the Web. Config file. Follow these steps:
Open Web.Config file.
Add a tag under the tag.
Add an attribute “cookieless” in the tag and set its value to “AutoDetect” like below:
The possible values for the “cookieless” attribute are:
AutoDetect: Session uses background cookie if cookies are enabled. If cookies are disabled, then the URL is used to store session information.
UseCookie: Session always uses background cookie. This is the default.
UseDeviceProfile: Session uses background cookie if the browser supports cookies else URL is used.
user: Session always uses URL.
“regenerateExpiredSessionId” is used to ensure that if a cookieless URL is expired a new URL is created with a new session. And if the same cookieless URL is being used by multiple users at the same time, they all get a new regenerated session URL. ( python training online )
18. How is it possible for .NET to support many languages?
Answer: The .NET language code is compiled to Microsoft Intermediate Language (MSIL). The generated code is called managed code. This managed code is run in a .NET environment. So after compilation, the language is not a barrier and the code can call or use the function of another language also.
19. What is Themes in ASP.NET?
Answer: A theme decides the look and feel of the website. It is a collection of files that define the looks of a page. It can include skin files, CSS files & images.
We define themes in a special App_Themes folder. Inside this folder is one or more subfolders named Theme1, Theme2, etc. that define the actual themes. The theme property is applied late in the page’s life cycle, effectively overriding any customization you may have for individual controls on your page.
How to apply themes
There are 3 different options to apply themes to our website:
Setting the theme at the page level: the Theme attribute is added to the page directive of the page.
Setting the theme at the site level: to set the theme for the entire website you can set the theme in the web.config of the website. Open the web.config file and locate the element and add the theme attribute to it:
Setting the theme programmatically at runtime: here the theme is set at runtime through coding. It should be applied earlier in the page’s life cycle ie. Page_PreInit event should be handled for setting the theme. The better option is to apply this to the Base page class of the site as every page in the site inherits from this class.
Page.Theme = Theme1;
Uses of Themes
Since themes can contain CSS files, images, and skins, you can change colors, fonts, positioning, and images simply by applying the desired themes.
You can have as many themes as you want and you can switch between them by setting a single attribute in the (Azure Training )web.config file or an individual apex page. Also, you can switch between themes programmatically.
Setting the themes programmatically, you are offering your users a quick and easy way to change the page to their likings.
Themes allow you to improve the usability of your site by giving users with vision problems the option to select a high contrast theme with large font size. ( data science training online )
20. What are the Navigations technique in ASP.NET?
Answer: Navigation can cause data loss if it not properly handled. We do have many techniques to transfer data from one page to another but every technique has its own importance and benefits.
We will discuss the (Servicenow Training )following techniques in this article.
- Response.Redirect
- Server.Transfer
- Server.Execute
- Cross page posting
21. What is master page in ASP.NET?
Answer: The extension of MasterPage is ‘.master’. MasterPage cannot be directly accessed from the client because it just acts as a template for the other Content Pages. In a MasterPage, we can have content either inside ContentPlaceHolder or outside it. Only content inside the ContentPlaceHolder can be customized in the Content Page. We can have multiple masters in one web application. A MasterPage can have another MasterPage as Master to it. The MasterPageFile property of a web form can be set dynamically and it should be done either in or before the Page_PreInit event of the WebForm. Page.MasterPageFile = “MasterPage.master”. The dynamically set Master Page must have the ContentPlaceHolder whose content has been customized in the WebForm.
A master page is defined using the following code:
<%@ master language=”C#” %>
Adding a MasterPage to the Project
Add a new MasterPage file (MainMaster.master) to the Web Application.
Change the Id of ContentPlaceHolder into “cphHead” and the Id “ContentPlaceHolder1” to “cphFirst”.Add one more ContentPlaceHolder (cphSecond) to Master page.To the master page add some header, footer and some default content for both the content place holders.
22. What is tracing in .NET?
Answer:.
23. What are the data controls available in ASP.NET?
Answer: The Controls having DataSource Property are called Data Controls in ASP.NET. ASP.NET allows the powerful feature of data binding, you can bind any server control to simple properties, collections, expressions and/or methods. When you use data binding, you have more flexibility when you use data from a database or other means. Data Bind controls are container controls. Controls -> Child Control
Data Binding is binding controls to data from databases. With data binding, we can bind a control to a particular column in a table from the database or we can bind the whole table to the data grid. (svr technologies)
Data binding provides a data binding tags and regular code insertion tags <% and %> becomes apparent when the expression is evaluated. Expressions within the data binding tags are evaluated only when the DataBind method in the Page objects or Web control is called.
Data Bind Control can display data in the connected and disconnected model.
Following are data-bind controls in ASP.NET:
- Repeater Control
- DataGrid Control
- DataList Control
- GridView Control
- DetailsView
- FormView
- DropDownList
- ListBox
- RadioButtonList
- CheckBoxList
- BulletList etc.
24. What is WebParts in ASP.NET?
Answer:. ( python online training )
Component of Web Parts:
The web parts consist of different components like:
- Web Part Manager
- Web Part Zone
- catalog Part
- catalog Zone
- Connections Zone
- Editor Part
- Editor Zone
- Web Part Zone
Web Part Zone can contain one or more Web Part controls.
This provides the layout for the Controls it contains. A single ASPX page can contain one or more Web Part Zones.
A Web Part Control can be any of the controls in the toolbox or even the customized user controls.
25. What is the meaning of Immutable?
Answer: Immutable means once you create a thing, you cannot modify it.
For example: If you want to give new value to old value then it will discard the old value and create a new instance in memory to hold the new value.
26. What are the various objects in Data set?
Answer: The DataSet class exists in the System. Data namespace.
The Classes contained in the DataSet class are:
a) DataTable
b) DataColumn
c) DataRow
d) Constraint
e) DataRelation
27. What are the advantages of using session?
Answer: The advantages of using session are:
A session stores user states and data all over the application.
It is very easy to implement and we can store any kind of object.
It can store every user data separately.
The session is secure and transparent from the user because the session object is stored on the server.( oracle apex training online )
28. What are the disadvantages of using session?
Answer: The disadvantages of using session are:
Performance overhead occurs in case of a large number of users because session data is stored in server memory.
Overhead involved in serializing and De-Serializing session Data. Because In case of StateServer and SQLServer session mode we need to serialize the object before store.
29. What is Data Cache in ASP.NET and how to use?
Answer: Data Cache is used to store frequently used data in the cache memory. It’s much efficient to retrieve data from the data cache instead of the database or other sources. We need to use the System.Web.Caching namespace. The scope of the data cache is within the application domain unlike “session”. Every user is able to access this object.
When client request to the server, server execute the stored procedure or function or select statements on the Sql Server database then it returns the response to the browser. If we run again the same process will happen on (Sql Server Training )the webserver with sql server.
30. How to create data cache?
Answer: Cache [“Employee”] = “DataSet Name”
We can create data caching use Cache Keyword. It’s located in the System.Web.Caching namespace. It’s just like assigning value to the variable.
31. How to remove a Data Cache?
Answer: We can remove the Data Cache manually.
//We need to specify the cache name
Cache.Remove(String key);
32. Enterprise Library in ASP.NET?
Answer: Enterprise Library: It is a collection of application blocks and core infrastructure. Enterprise library is the reusable software component designed for assisting the software developers.
We use the Enterprise Library when we want to build application blocks intended for the use of developers who create a complex enterprise-level application.
Enterprise Library Application Blocks
Security Application Block
Security Application Block provides developers to incorporate security functionality in the application. This application can use various blocks such as authenticating and authorizing users against the database.
Exception Handling Application Block
This block provides the developers to create consistency for processing the error that occurs throughout the layers of Enterprise Application.
Cryptography Application Block
Cryptography application blocks provide developers to add encryption and hashing functionality in the applications.
Caching Application Block
Caching Application Block allows developers to incorporate a local cache in the applications.
33.. ( apex training )
34. What is a base class and derived class?
Answer:.
35. What is the state management in ASP.NET?
Answer: State management is a technique that is used to manage a state of an object on different request. It is very important to manage state in any web application. There are two types of state management systems in ASP.NET.
Client-side state management
Server-side state management
36. How do you check whether a DataReader is closed or opened?
Answer: There is a property named “IsClosed” property is used to check whether a DataReader is closed or opened. This property returns a true value if a Data Reader is closed, otherwise, a false value is returned.
37. Which adapter should be used to get the data from an Access database?
Answer: OleDbDataAdapter is used to get the data from an Access database.
Introduction to ASP.NET
38. What are the different validators in ASP.NET?
Answer: ASP.NET validation controls define an important role in validating the user input data. Whenever the user gives the input, it must always be validated before sending it across to various layers of an application. If we get the user input with validation, then chances are that we are sending the wrong data. So, validation is a good idea to do whenever we are taking input from the user. ( hadoop training videos )
39. What are the basic requirements for connection pooling?
Answer: The following two requirements must be fulfilled for connection pooling:
There must be multiple processes to share the same connection describing the same parameters and security settings.
The connection string must be identical.
RegularExpressionValidator Control
CustomFieldValidator Control
ValidationSummary
40.).
41. Which are the new features added in .NET framework 4.0?
Answer:
A list
42. What are the disadvantages of cookies?
Answer:
The main disadvantages of cookies are:
A cookie can store only string value.
Cookies are browser dependent.
Cookies are not secure.
Cookies can store only a small amount of data.
43. What is IL?
Answer: IL stands for Intermediate Language. It is also known as MSIL (Microsoft Intermediate Language) or CIL (Common Intermediate Language).
All .NET source codes are first compiled to IL. Then, IL is converted to machine code at the point where the software is installed, or at run-time by a Just-In-Time (JIT) compiler.
44. What is View State?
Answer:
Features of View State
These are the main features of view state:
Retains the value of the Control after post-back without using a session.
Stores the value of Pages and Control Properties defined in the page.
Creates a custom View State Provider that lets you store View State Information in a SQL Server Database or in another data store.
Advantages of View State
Easy to Implement.
No server resources are required: The View State is contained in a structure within the page load.
Enhanced security features: It can be encoded and compressed or Unicode implementation.
45. What is the difference between trace and debug?
Answer:
Debug class is used to debug builds while Trace is used for both debug and release builds. ( data science online training )
46. What are the different Session state management options available in ASP.NET?
Answer: State Management in ASP.NET
A new instance of the Web page class is created each time the page is posted to the server.
In traditional Web programming, all information that is associated with the page, along with the controls on the page, would be lost with each roundtrip.
The Microsoft ASP.NET framework includes several options to help you preserve data on both a per-page basis and an application-wide basis.
These options can be broadly divided into the following two categories:
Client-Side State Management Options
Server-Side State Management Options
Client-Side State Management
Client-based options involve storing information either on the page or on the client computer.
Some client-based state management options are:
Hidden fields
View state
Cookies
Query strings
Server-Side State Management
There are situations where you need to store the state information on the server-side.
Server-side state management enables you to manage application-related and session-related information on the server.
ASP.NET provides the following options to manage state at the server-side:
Application state
Session state
State Management
oth, debug and release builds. ( hadoop training )
47. What are the implementation and interface inheritance?
Answer:.
48. How do you prevent a class from being inherited?
Answer: In VB.NET you use the NotInheritable modifier to prevent programmers from using the class as a base class. In C#, use the sealed keyword.
49. Explain Different Types of Constructors in C#?
Answer:
There are four different types of constructors you can write in a class –
1. Default Constructor
2. Parameterized Constructor
3. Copy Constructor
4. Static Constructor
50. What are design patterns?
Answer: Design patterns are common solutions to common design problems. ( data science training )
51. What is a connection pool?
Answer: A connection pool is a ‘collection of connections’ which are shared between the clients requesting one. Once the connection is closed, it returns back to the pool. This allows the connections to be reused.
52. What is business logic?
Answer: A connection pool is a ‘collection of connections’ which are shared between the clients requesting one. Once the connection is closed, it returns back to the pool. This allows the connections to be reused.
Note: Browse latest Dot Net Interview Questions and C# Tutorial Videos. Here you can check .net Training details and C# Training Videos for self learning. Contact +91 988 502 2027 for more information. | https://svrtechnologies.com/2018-latest-dot-net-interview-questions-and-answers-pdf/ | CC-MAIN-2020-29 | refinedweb | 3,822 | 58.99 |
fun main(args: Array<String>) { val ch = 'c' val st = Character.toString(ch) // Alternatively // st = String.valueOf(ch); println("The string is: $st") }
When you run the program, the output will be:
The string is: c
In the above program, we have a character stored in the variable ch. We use the
Character class's
toString() method to convert character to the string st.
Alternatively, we can also use
String's
valueOf() method for conversion. However, both internally are the same.
If you have a char array instead of just a char, we can easily convert it to String using String methods as follows:
fun main(args: Array<String>) { val ch = charArrayOf('a', 'e', 'i', 'o', 'u') val st = String(ch) val st2 = String(ch) println(st) println(st2) }
When you run the program, the output will be:
aeiou aeiou
In the above program, we have a char array ch containing vowels. We use
String's
valueOf() method again to convert the character array to
String.
We can also use the
String constructor which takes character array ch as parameter for conversion.
We can also convert a string to char array (but not char) using String's method toCharArray().
import java.util.Arrays fun main(args: Array<String>) { val st = "This is great" val chars = st.toCharArray() println(Arrays.toString(chars)) }
When you run the program, the output will be:
[T, h, i, s, , i, s, , g, r, e, a, t]
In the above program, we've a string stored in the variable st. We use
String's
toCharArray() method to convert the string to an array of characters stored in chars.
We then, use
Arrays's
toString() method to print the elements of chars in an array like form.
Here's the equivalent Java code: Java program to convert char to string and vice-versa | https://cdn.programiz.com/kotlin-programming/examples/char-string | CC-MAIN-2019-47 | refinedweb | 306 | 69.82 |
WxWidgets Compared To Other Toolkits
From WxWiki
Some general notes:
- wxWidgets not only works for C++, but also has bindings for python, perl, java, lua, lisp, erlang, and system provided widgets..
- wxWidgets got bad publicity for large use of macros, especially event tables. It is however to be noted that there are usually non-macro alternatives, and wxWidgets 2.9 (3.0) especially made a lot of efforts to provide alternatives to the old macro-based techniques, using more modern techniques like templates instead. Therefore while you will likely find a lot of old code containing macros, do not let that convince you that this is all wxWidgets can offer
There is also a slashdot thread about cross-platform gui toolkits. Though it is old and, well, it is slashdot :), there are probably some useful insights there.
Contents. Qt does not use system provided widgets, but emulates it with themes. extends the C++ language with what is called the MOC to provide additional features like signal-slots. An advantage is the nicety of signal-slot event processing; a disadvantage is that this invades your build system and makes use of non-standard language features. wxWidgets does not extend the C++ language and as such might be less intrusive to the build system or less surprising to developers expecting standard C++.
- has been a bunch of work done with the goal of a Qt-based port of wxWidgets (see the wxWidgets SVN wxQt branch), so wxWidgets applications aren't required to use GTK-Qt (which hasn't been known to work too well) to build applications that look and feel native for KDE users.
FLTK
- FLTK website:
- wxWidgets has a more mature OO design.
- FLTK is more light-weight, whereas wxWidgets is more full-featured (wxWidgets supports networking, printing, etc. while FLTK has limited or no support for these things). See wxWidgets Feature List () vs. FLTK Feature List ().
- FLTK actually has more elaborate, different widget types. Just compare what you can do in FLUID to wxDesigner or DialogEdit. I ported a FLTK app to wxWidgets and had a hard time emulating the buttons.
- FLTK's modified LGPL license is more restricting than wxWidgets license, although it does provide exceptions for static linking.
- Light IDE called FLUID for GUI design ().
FOX
- FOX website:
- FOX is more light-weight, whereas wxWidgets is more full-featured.
- wxWidgets has a more complete API, while FOX focuses mainly on GUI features.
- FOX draws its own widgets on each platform, instead of using the native widgets (like Qt), whereas wxWidgets offers true native ports for all the supported platforms. FOX may be faster because of this, but the provided look-and-feel may not be well integrated into the target platform (e.g. the Windows XP theme is not currently available).
- FOX lacks printing and I18N support for asian language (it's using UTF-8 internally).
- Standard Windows dialog boxes are not supported in FOX, but a portable similar feature is available.
Java GUI toolkits
- Java is a programming language that can be combined with different GUI toolkits, such as
- wxWidgets is compiled to machine code. So is SWT, Swing and Qt. However, the rest of the applications code can be, for wxWidgets, compiled to machine code, whereas in Java applications, it will be interpreted code. However, SWT also has C++ binding today.
- There are mixed claims about performance (speed) of Java applications. Java applications usually use more memory than C/C++ applications.
- Users of Java-based applications must have a JVM installed. In recent years, this has become less of an issue as more computers are being installed with a JVM. However, users that have an older JVM may suffer from performance/security problems.
- wx4j is a wxWidgets implementation for Java. wx4j is a wxWidgets binding for Java which allows Java programmers to keep their Java language but still have the speed of a C++ written program.
- wxWidgets can be used by a large number of programming languages, and integrated easily. Java GUI toolkits can be used only by programming languages in the JVM (such as Java, JRuby, Jython, JavaScript, BeanShell).
- Java programs can be deployed easily via Webstart, allowing users to try out applications (see for instance).
The following claims need to be made more specific: Which Java GUI Toolkit is discussed here?
- In order to be cross-platform, Java generally targets the least common denominator. Features that are only available or relevant on one platform but not others are left out of the Java APIs. Examples include manipulating the Windows taskbar, the Mac OS menu bar, Unix file attributes, etc.
- A corollary of the statement above: in a wxWidgets program, you can always write some platform-specific code where necessary, inside of an ifdef that makes it only compile on that one platform. In Java, there's no equivalent of an ifdef, and you have to jump through hoops to access external APIs. Also, wxWidgets emulates certain features that are not avalible on different platforms (such as MDI and tree controls).
SDL
-
- SDL (Simple DirectMedia Layer) is a multimedia C library more suited for when you're writing games and such, and want custom-everything and no convenient general-purpose UI elements. It is made of a lot of C structures starting with SDL_.
- It is possible to combine using wxWidgets and SDL:
- Under the LGPL version 2.
- Allows only a single window to be opened.
- Very nice OpenGL integration (or libs that build on OpenGL e.g. OpenSceneGraph, CEGUI)
SFML
-
- SFML (Simple and Fast Multimedia Library) is a multimedia C++ library more suited for when you're writing games and such and want custom-everything and no convenient general-purpose UI elements
- It covers a lot of things like : audio, network or threads ...
- It is possible to combine using wxWidgets, Qt, X11, etc.
Allegro
-
- Much like SDL, Allegro is a cross-platform c library (with quite a bit of assembly used in the backend) for programming games.
- Almost as old as wxWidgets (circa 1993).
- Giftware license (essentially public domain).
- Requires gcc and an assembler to build.
- Development has been stuck in the same version for years, there are a lack of core developers (original developer is no longer on the team), and there are some internal disputes which may lead to a fork.
- Very basic GUI functionality - only supports one window with only bare bones operations supported - you can't move the window, etc.
- "Controls" are sort of supported in allegro also through functions with (many) variable-length arguments, and are owner drawn much like QT (but don't look as good by default). They can be customized via a relatively easy API (and there are a few sub libraries that already have somewhat fair-looking versions).
- Drawing routines are much faster than wx, and there is a opengl layer (allegrogl -) that makes drawing with opengl even easier than it is to begin with.
- Non-GUI routines (input, etc.) are lower-level and generally faster than wxWidgets' native implementations.
- Can be used with wxWidgets without too much trouble - since allegro has some platform specific functions to get the window handle, you can create a wxWidgets window from the window handle and do what you want from that from that point on. While wxWindows uses a wxApp to handle platform-specific main/WinMain stuff, Allegro requres you put END_OF_MAIN() after your main function - getting the two to work together is somewhat of a task, but not a very large one.
GTK+
-
- GTK+, originally the Gimp toolkit, is a LGPL C-language GUI library for Unix systems.
- It has been ported to Windows, VMS, and other systems (MacOS X currently possible through Apple's X11.app, native version in development; both are painful to build and especially to package), )
- It's built on top of glib, a general-purpose library (similar in some ways to the C++ STL -- it provides a few data structures, functions to help memory management, etc).
- It looks and behaves exactly the same on all platforms (unless themes are used). On Windows, it has the ability to get the native appearance with the Wimp theme, which uses UxTheme.
- Does not use system provided widgets on Windows, but emulates it with themes.
Kylix
- Kylix hasn't been much of a success for Borland/Inprise, so it's doubtful how long it will be continued to be supported.
- Kylix is based on Qt, see above :)
- Fewer platforms are supported by Kylix
- The IDE, being based on no less than 3 toolkits, is rather unprofessional.
Lazarus
- Lazarus is a cross-platform and open source RAD IDE, and a library to write GUI software
- Lazarus is mostly compatible with Borland Delphi and the same code can be compiled with both
- Lazarus has data aware components for easy local and client server database applications development
- It only supports variety of (Object) Pascal dialects for language
- Working in a similar way to wxWidgets, it has support for many underlying widgetsets: gtk1, gtk2, win32api, qt, carbon and winCEapi. Cocoa and an owner drawn widgetset exist(fpgui), but are less progressed.
- The underlying Free Pascal Compiler supports most OSes and architectures currently in use
- Currently it supports fewer platforms than wxWidgets
- The combined Lazarus/FPC project contains nearly a complete deeply integrated toolchain in one project. RAD/IDE, compiler, libraries, XML bindings, database connectivity,
- Lazarus is slowly getting supported by the Borland/Embarcadero Delphi component market, providing complex and high quality commercial widgets.
Ultimate++
- Ultimate++ only supports Windows and Linux, not MacOS
- The comparision on is a small example that doesn't show how the toolkit scales to bigger applications.
Notus
- See:
- wxWidgets actually exists ;)
- notus is likely to make a lot more use of standard library and modern C++ concepts, such as iterators, templates, namespaces, etc (whereas wxWidgets reimplements or works around many of these things in non-standard ways); and it's also likely to follow the design principles of Boost (which you could consider either a good or bad thing), and work well with the rest of the Boost library. Of course, since it doesn't yet exist(*still* in alpha stage), whether this is true in practice remains to be seen.
MFC
- MFC is only available for free for Windows
- A macintosh version was available with Visual C++ Crossplatform Edition (~$800 at last check) but has not been supported by the compiler since version 4.1.
- There are also UNIX variants such as MainWin, which are extremely expensive, require runtime licenses, and are reported to have problematic support
- While the source for both wxWidgets and MFC is available, EULAs are not a concern with wxWidgets.
- MFC has a smaller executable size than wx (generally irrelevant with a decent compiler).
- MFC has greater range of good quality commercial components.
- Some say event tables (wxWidgets) are 'better' than message maps (MFC).
- wx's class hierarchy is more intuitive, while MFC tends to be more consistent among top-level class names.
- wx provides a far greater abundance of convenience classes, while MFC provides more windows-specific classes.
- .NET isn't an issue - MFC won't be ported to .NET. On the other hand, wx already has .NET wrappers in alpha stage!
- MFC has a broader range of components available, especially data-bound controls.
- Some things are easier with wxWidgets, such as certain types of windows (always on top, etc.), while other things are easier with MFC, such as detachable toolbars.
- Probably the strongest point to use MFC is MSVC, the IDE, itself.
- For info on class names and other points, see WxWidgets For MFC Programmers
XUL Framework (Mozilla)
- See:
- JavaScript, XUL and CSS are all needed to program in Mozilla -- XUL describes the structure (like HTML in Web documents, JavaScript the behaviour, CSS for styling); wxWidgets does all of this in C++.
- Accessing XUL with C++ (XPCOM) is very difficult; C++ in wxWidgets is easier.
Tk
- Tk is a GUI toolkit designed for the Tcl scripting language and is best used with this language. See: for more information.
- There are bindings for other languages like Python, Perl, Ruby and C++.
- The core of Tk has few widgets, but several extensions are available. For example: BWidgets. There are extensions written in C or pure Tcl.
- Before 8.5 version Tk looked outdated. Now it has been solved on Windows and Mac OS X with the tile extension and it has been added to Tk 8.5 core as ttk. It has now native look on those platforms. On Linux though it still has the old look. This is work in progress, since they're creating themes for GTK and Qt. You can still use other themes to make it look better though.
- Tk is the default GUI toolkit for Python. The binding is called Tkinter. See:
- Tk has a powerful canvas widget that allows you to draw anything and even create custom widgets.
- Tk has a powerful event system.
- Tk is used by several programmers without the need for a GUI designer since the API is simple and GUI code usually is shorter. Several GUI designers exist though. A window with a "Hello world!" on it is a one liner: pack [ttk::label -text "Hello world!"]
- A complete GUI program developed in Tcl/Tk can be wrapped in a single binary file (that's about 1 mb) called Starpack and deployed to all major platforms. See for more information.
- Tk has a very liberal BSD license that allows commercial software development.
Why you should use Tk? If you want a free, mature, stable, cross-platform GUI toolkit and you're using a scripting language.
Why you shouldn't use Tk? If you plan on using C++ or require a bigger default widget set it's better to use WxWidgets.
VCF
- See: or
- Clean OO design
- Mature on Windows, some support for MacOSX and Linux
- BSD licensing
WideStudio
- See:
- WideStudio uses its own widgets
- WideStudio installation comes in a bundle with MinGW and gcc (not optional)
- WideStudio comes with an IDE/Designer
- There is an IDE/Designer plugin project (Native Application Builder) for Eclipse (see:)
- WideStudio does not have keyboard-navigation through controls integrated
- WideStudio container classes do not allow referencing by name (myWindow("labelCaption")->Test)
- WideStudio libraries are less than 10MB total (2008-01-25) and distribution bundles can be <4MB for small applications
Why You Shouldn't Use wxWidgets
- Lack of commercial GUI components for making nice GUI grids, charts, etc. Look at wxCode though.
- No support for themes (apart from using the themes of the underlying toolkit) unless you use wxUniversal or wxSkin
- wxX11 is sub-par compared to other toolkits and unstable. You should use the wxGTK port instead, which builds upon GTK instead of directly onto X11. wxX11 is mostly intended for embedded devices that don't have GTK.
- wxWidgets tries to support a very expansive feature set, and as a result, some lesser-used components of the toolkit are not as stable/reliable as commonly used components are. As with any open source toolkit, thorough testing is the best solution here.
- wxWidgets does not provide binaries for any system. You have to compile wxWidgets yourself. wxpack provides wxWidgets binaries for Windows, but you have to download an entire multi-hundred megabyte Development Kit to get them. However you can download the C++ IDE plus WxWidget by downloading WxDev which is relatively smaller. WxDev is supported for both C and C++.
- The use of native widgets makes it more likely that the same code will behave differently from platform to platform, and also makes it more likely that there will be platform-specific bugs. | https://wiki.wxwidgets.org/index.php?title=WxWidgets_Compared_To_Other_Toolkits&oldid=8780 | CC-MAIN-2016-30 | refinedweb | 2,599 | 61.87 |
Next: Primitive types, Previous: Basic concepts, Up: About CNI
The only global names in Java are class names, and packages. A package can contain.
Here is how you could express this:
(// Declare the class(es), possibly in a header file: namespace java { namespace lang { class Object; class String; ... } } class java::lang::String : public java::lang::Object { ... };
The
gcjh tool automatically generates the necessary namespace
declarations.
Always using the fully-qualified name of a java class can be
tiresomely verbose. Using the full qualified name also ties the code
to a single package making code changes necessary should the class
move from one package to another..
In Java:
import package-name.class-name;
allows the program text to refer to class-name as a shorthand for
the fully qualified name: package-name
.class-name.
To achieve the same effect C++, you have to do this:
using package-name::class-name;
Java can also cause imports on demand, like this:
import package-name.*;
Doing this allows any class from the package package-name to be referred to only by its class-name within the program text.
The same effect can be achieved in C++ like this:
using namespace package-name; | http://gcc.gnu.org/onlinedocs/gcc-4.1.2/gcj/Packages.html | crawl-001 | refinedweb | 198 | 56.35 |
So I'm still fairly new at unity but I know my way around, I just started making a game manager script with DoNotDestroyOnLoad and made it check when you enter a new scene if there is a game manager and if not it makes the script from the first scene move on to the next. The problem I'm having is when I move to the next scene it doesn't spawn the player in the level where it is suppose to. It puts you outside just falling and you can see the box that the level is inside but you cant do anything because it just keeps you falling. So how do I change where my fps controller will load into a new scene and on the path in the level where it needs to be. My game manager is super basic as I was just following a video explaining it. here are some pictures of what I am talking about, where you are suppose to spawn, falling outside of the level, and my small script. I posted on reddit and got the scene manager part but it doesnt change where you load into the level at all its like I didnt put the code in.
using System.Collections;
using System.Collections.Generic;
using UnityEngine;
using UnityEngine.SceneManagement;
public class GameControl : MonoBehaviour {
public static GameControl control;
public float health;
public float exp;
// Use this for initialization
void Awake () {
if (SceneManager.GetActiveScene().name.Equals ("Sacral")) {
transform.position = new Vector3(-27, 38, -18);
}
if (control == null) {
DontDestroyOnLoad (gameObject);
control = this;
}
else if (control != this)
{
Destroy .
Lots Of Scenes When Loading A Level...Help :(
1
Answer
Async Level loading issue.
0
Answers
How to load a score form a saved game vs. the level just completed
0
Answers
How to switch to next level after completeing the task in current level
6
Answers
How to check if RealtimeGI loading finished?
0
Answers | https://answers.unity.com/questions/1488997/spawning-into-a-new-scene-with-game-manager.html | CC-MAIN-2019-43 | refinedweb | 321 | 68.81 |
Many beginner and intermediate programmers are often faced with the problem of spaghetti
switch...case code blocks when programming with the Windows API in C/C++. When you add a lot of messages to catch in your window procedure, looking where is your e.g.,
WM_COMMAND or
WM_CHAR, the block of code becomes a real nightmare.
The problem of the thousand lines window procedure can be solved with a header file that is shipped since the days of the C/C++ 7.0 compiler and the Windows Software Development Kit for Windows 3.1. That header is <windowsx.h> and contains a lot of useful macros. According to Microsoft, the facilities of this header file can be resumed in the following groups:
STRICTmacro.
Since Message Cracker Wizard is designed to aid with the message crackers, I will skip the other useful macros the header file makes available. If you are interested in a brief description of what you can do with the WINDOWSX.H file, you can look at the MS Knowledge Base Article #83456.
Well, let's introduce the advantages of the message crackers and, of course, why the tool offered here can be useful to work with them in your code.
When you are programming with the Win32 SDK, you process window and dialog messages with a window procedure, commonly named
WndProc. It is very common in Windows C programming that the window procedure catches every window message using a
switch keyword and a bunch of
case labels for every message we want to catch.
Suppose that we want to process
WM_COMMAND,
WM_KEYUP,
WM_CLOSE and
WM_DESTROY for our main window. We could write a window procedure with a logic like this:
LRESULT CALLBACK MainWndProc (HWND hwnd, UINT msg, WPARAM wParam, LPARAM lParam) { switch(msg) { case WM_COMMAND: // ... break; case WM_KEYUP: // ... break; case WM_CLOSE: // ... break; case WM_DESTROY: //... break; default: return DefWindowProc(hwnd, msg, wParam, lParam); } }
This is the most used manner since Windows 1.0 days to process window messages, and surely, it works. But the problem is when you begin to add more and more complex features to your program, such as MDI, OLE, common controls, etc., and you get a thousand-lines window procedure. You begin to jump with PageDn and PageUp keys looking for a message you want to modify.
This is the first advantage of using message crackers: they convert that case label spaghetti in easy to maintain handling functions, like MFC.
And the second advantage is the proper parameter format you use in your handling functions. Instead of doing those
switch(LOWORD(wparam)), you can simply use
switch(id) because the message function that you provide passes
id as one of the "cracked" parameters, which equals to
LOWORD(wparam).
The message handling macro
HANDLE_MSG is defined in windowsx.h, as follows:
#define HANDLE_MSG(hwnd, message, fn) \ case (message) : return HANDLE_##message((hwnd), (wParam), (lParam), (fn)). To better explain, the following code in the above window procedure converted to message crackers:
LRESULT CALLBACK MainWndProc (HWND hwnd, UINT msg, WPARAM wParam, LPARAM lParam) { switch(msg) { HANDLE_MSG (hwnd, WM_COMMAND, OnCommand); HANDLE_MSG (hwnd, WM_KEYUP, OnKeyup); HANDLE_MSG (hwnd, WM_CLOSE, OnClose); HANDLE_MSG (hwnd, WM_DESTROY, OnDestroy); default: return DefWindowProc(hwnd, msg, wParam, lParam); } }
Wow! This is a better, compact and easily manageable window procedure. Now you would want to define your message processing functions (
OnKeyUp, OnClose, and
OnDestroy). This is a real advantage, as you can jump to your message processing function with the Visual Studio IDE:
The problem is that you must search in the definitions of WINDOWSX.H header and look for the parameters of the matching message processing function every time you add a message handler, because you can't use any parameters you want: the format of the handling function is explicit. Doing this repeated searching in the header file can become a tedious task and can lead to errors. The Message Cracker Wizard Tool comes to the rescue: it allows you to paste the correct function parameters for every message handler you want. And if you're writing from scratch, it can also write a template window or dialog procedure to begin with the window messages you will process.
Another useful feature in windowsx.h header is the possibility of message forwarding. This is used for "unpacking" the message processing function parameters into suitable
WPARAM and
LPARAM values to call another function that expects parameters such as
PostMessage,
SendMessage,
CallWindowProc, etc.
Suppose that we want to use
SendMessage to send a
WM_COMMAND message to a parent window that "simulates" the double-clicking of a control named
IDC_USERCTL by sending a notification code of
BN_DBLCLK. You would normally use:
SendMessage (hwndParent, WM_COMMAND, MAKEWPARAM(IDC_USERCTL, BN_DBLCLK), (LPARAM)GetDlgItem(hwnd, ID_USERCTL));
This is a rather complex syntax: the
SendMessage expects a
WPARAM parameter where the low-order word is the control ID and the high-order word is the notification code; and
LPARAM parameter is the handle to the control which we get here with the
GetDlgItem API.
The above code can be converted to WINDOWSX.H message forwarding macros,
FORWARD_WM_xxxxx. For each message, the forwarding macros use the same "packed" parameters as the message handling functions that Message Cracker Wizard creates, plus the function you want to call and pass the unpacked
LPARAM/
WPARAMs. For example, Message Cracker Wizard will generate the following function prototype for a
WM_COMMAND message and a
myWnd window ID:
void myWnd_OnCommand (HWND hwnd, int id, HWND hwndCtl, UINT codeNotify)
Well, those cracked parameters are the same to be used by the forwarding macro -- so, as you may expect, the confusing
SendMessage call we showed above can be reduced to:
FORWARD_WM_COMMAND (hwndParent, IDC_USERCTL, GetDlgItem(hwnd, ID_USERCTL), BN_DBLCLK, SendMessage);
That's easy and works with all Message Cracker Wizard supported messages.
When you fire up the Message Cracker Wizard, its interface appears like the following:
The Wizard offers you all the messages handled by WINDOWSX.H in the top-left list box where you can click one or multiple messages. The Window ID edit box allows you to specify an identifier for the window you are specifying the message. Common IDs are
MainWnd,
About (for about dialogs), etc. This will appear in both the message handling function, in the
HANDLE_MSG macro, and in the name of the window/dialog procedure if you want to create one from scratch. The "Make Window Procedure" check box allows you to do that: create from scratch a window or dialog procedure with all the selected message cracker macros ready. Using this approach when beginning a Windows API project, you can cleanly write and organize your code, and of course, avoid mistakes. The two edit boxes at the bottom of the window will contain the generated code for the cracking macros and the functions to handle those messages (prototypes only). Note that the window procedure template code won't appear here when you check "Make Window Procedure": it will appear when you paste the code to your C++ editor only by clicking "Copy Macro".
To quickly tour the features of the Message Cracker Wizard Tool, let's do it by example. Remember that you must include the <windowsx.h> header with your project using the
#include <windowsx.h> directive in your .C / .CPP file.
Let's begin. Suppose you've already written your
WinMain basic code: you've successfully filled the
WNDCLASS structure, registered the window, and wrote a functioning message loop. Now you need a window procedure for your main window.
Open the Message Cracker Wizard. We need to select messages for our window, because MCW needs it to create our main window procedure from scratch. As you may know, it is very common for Windows programs to handle the
WM_CLOSE,
WM_DESTROY and
WM_CREATE messages, so let's build the window procedure with message crackers for those messages. After that, we'll also build the body of the message processing functions for that window procedure.
WM_CLOSE,
WM_DESTROY and
WM_CREATE in the list box. As this window will be the main window of our program, go the Window ID and type main. This is a window ID that identifies your window/dialog and is put as suffix in cracking macros and processing functions. Of course, you'll want to maintain it consistent for all the message handling of a particular window. Look at the bottom edit boxes. They show the
HANDLE_MSG cracker macro and the related prototypes of the message processing functions.
But wait... we said that we want a ready window procedure. So click on 'Make Window Procedure' check box, and be sure that Window radio button is selected. Now we are ready. Keep in mind that Dialog works just like this, but modifies the procedure to be a dialog-type procedure.
First, we need the window procedure on our source code. Press on the 'Copy Macro' button (or press Ctrl-M), minimize the Wizard (or keep it at hand, since it remains top-most), go to your IDE and paste from the clipboard (Ctrl-V) in the place you want your window procedure. Voil�! You will get code like this:
// // main Window Procedure // LRESULT CALLBACK main_WndProc (HWND hwnd, UINT msg, WPARAM wParam, LPARAM lParam) { switch(msg) { HANDLE_MSG (hwnd, WM_CLOSE, main_OnClose); HANDLE_MSG (hwnd, WM_CREATE, main_OnCreate); HANDLE_MSG (hwnd, WM_DESTROY, main_OnDestroy); //// TODO: Add window message crackers here... default: return DefWindowProc (hwnd, msg, wParam, lParam); } }
That's the window procedure with the three message cracking macros ready to work! And also, with a TODO comment to remember that you must add new message cracker macros there. Remember to unselect 'Make Window Procedure' checkbox when you want to add a
HANDLE_MSG macro to the window procedure.
But the code above does nothing, because we need the functions that process those three messages we want. Simply return to the Message Cracker Wizard tool and now click on 'Copy Function' button. Switch to your source code, locate your cursor where you want the functions bodies to be inserted, and paste with Ctrl+V or Edit/Paste menu. The wizard automatically creates the functions with the main Window ID and the correct parameters expected by the WINDOWSX.H header macros:
// // Process WM_CLOSE message for window/dialog: main // void main_OnClose(HWND hwnd) { // TODO: Add your message processing code here... } // // Process WM_CREATE message for window/dialog: main // BOOL main_OnCreate(HWND hwnd, LPCREATESTRUCT lpCreateStruct) { // TODO: Add your message processing code here... } // // Process WM_DESTROY message for window/dialog: main // void main_OnDestroy(HWND hwnd) { // TODO: Add your message processing code here... }
The Wizard also automatically creates a heading comment and a TODO line to remind you to add code. Now you can add your message handling and processing logic easily and write complex window procedures. You can remove the comments if you want using the two checkboxes in the main window.
There are a few more features present in the program, which are rather intuitive.
This was a suggestion by some users of the program and it was implemented. Click on "Filters.." button (or press Ctrl+L) and you get the following dialog box. There you can select which messages appear listed on the listbox, classified on the type (this classification criteria was taken from Microsoft Spy++ utility).
Note that a present issue in v2.0 when using message filtering dialog is that the list box is filled up again when you click OK, so the previous selection is lost (this does not mean that your previous selected messages that appear on the target code window will disappear).
You may want to reduce the window size of the Message Cracker Wizard. This is possible by disabling "Show Target Code" option in the View menu (or by pressing Ctrl+F11). The main window will appear without the target code area:
Another feature that can be useful for low resolution displays or cluttered desktops is the window transparency feature. Click on the View menu, Window Transparency menu and select a transparency percentage (Solid is 100% opaque and 75% is 25% opaque). This feature is only available for Windows 2000/XP and Server 2003 users. On 9X OSes, only "Solid" option is available.
The Exclude Comments feature allows the code generator to exclude comments, either heading or "TODO" style commenting. Just select or unselect the checkboxes on the main window.
Finally, the Stay On Top feature is pretty self-descriptive.
The following features may appear in the following releases:
I hope this little tool to be of interest to any Windows SDK programmer and of course, to be a potential method to write cleaner Win32 API programs. I'm open to suggestions to improve the tool. If you find this program useful, mail me, because I will be very happy to listen to any good comment.
Thanks for all support!! You know who you are!
As always, check my home page where I mention the updates to this program.
WM_COPYDATAand
WM_HOTKEYmessages.
WM_CTLCOLORxxxxmessage support.
General
News
Question
Answer
Joke
Rant
Admin | http://www.codeproject.com/KB/winsdk/msgcrackwizard.aspx | crawl-002 | refinedweb | 2,145 | 61.67 |
I was recently working on creating a custom configuration section (deriving from ConfigurationSection) to provide some configuration metadata to an app I’m working on. In creating a ConfigurationElement, I wanted to validate that one of the attributes of the element was a valid value for the application. While looking at the example here I noticed the use of various validation attributes applied to validate the format of configuration values.
The CallbackValidatorAttribute and CallbackValidator seemed like they would fit the bill for doing custom validation. Unfortunately, the documentation on CallbackValidatorAttribute is severely lacking. Here’s what I found out using Reflector:
I’ve created a small sample demonstrating the use of the CallbackValidatorAttribute. You can get it here. The sample validator itself isn’t very useful, but hopefully it illustrates the concept well enough.
I've been trying to get NCover to run successfully on Vista Ultimate x64 on and off for the last month. We're still using one of the free versions of NCover available from here, so there is no built-in x64 support. I came across this post today that mentioned using corflags to set the 32BIT flag in both NCover and your testing application (the post is describes using NUnit, but this worked for me with MBUnit as well).
Those changes got me past the
Profiled process terminated. Profiler connection not established.
error, but I was then confronted with a new wrinkle:
Index was outside the bounds of the array.
This error would appear at the end of the profiled MBUnit run. I poked around some and found some old posts on the NCover forums talking about this error and how it didn't occur in NCover 1.5.5. So I downloaded 1.5.5 (I had been using 1.5.8) and now all is good - coverage runs successfully!
We're not doing that much with NCover at this point, so I don't think we're giving up anything going with 1.5.5. Maybe someday we'll be able to purchase a current license that supports x64 out of the box.
There is a discussion occurring on the ASP.NET forums about passing data to Master Pages using ASP.NET MVC. I couldn't figure out how to post code in the forums, so this post contains an example of the solution I am currently using.
I defined a "container" class that contains the data for the specific view and the data needed by the master page. It looks like this:
public class ViewDataContainer<T> { public ITopMenuData TopMenu { get; set; } public ILeftMenuData LeftMenu { get; set; } public T Data { get; set; } }
As you can see, this class is generic on the type of view data that it contains.
Next, I created a custom Controller-derived class that looks like this:
public class ControllerBase<T> : Controller { protected override void RenderView(string viewName, string masterName, object viewData) { ViewDataContainer<T> container = new ViewDataContainer<T> { TopMenu = new TopMenuData(), LeftMenu = new LeftMenuData(), Data = (T)viewData }; base.RenderView(viewName, masterName, container); } }
This class takes the view data that's passed in by the controller and wraps it up in an instance of ViewDataContainer<T> that is then passed along. As to where to get the TopMenuData and LeftMenuData, I'm still working on that :).
In this incarnation, views need to derive from ViewPage<ViewDataContainer<T>> (specifying T) and then access their view data via the ViewData.Data property. Today I was playing around with a ViewPage<T> derived class that would expose the typed view data directly. Maybe I'll post that soon as well.
I've been playing with the National Digitial Forecast Database (NDFD) provided by the National Weather Service. You can retrieve a forecast via a SOAP web service by providing your latitiude and longitude and a few other details. The web service worked fine when I called it from Python, but I was getting an error when calling it from .NET (using the generated proxy classes in VS 2003 and VS 2005). It turns out that the service needs HTTP version 1.0, not 1.1 (the default in .NET). Unfortunately, the generated proxy class doesn't expose the ProtocolVersion property of the underlying HttpWebRequest object. So, I added support for this property.
It's a little easier with VS 2005 because of the support for partial classes:
public partial class;
}
}
I just added a new file to my solution and added a partial class definition (ndfd is the proxy class generated for the web service) that adds a ProtocolVersion property. Then in the GetWebRequest method, I set the protocol version of the WebRequest.
For VS 2003, I just derived a new class from the generated proxy class and added my property and override there:
public class ndfd2 :;
}
}
Then I can just use the new class in my code. Both of these approaches avoid doing anything to the actual generated class, so the Web Reference can be updated without losing any of the customizations.
Well, after a relatively short CTP, SQL Server 2005 SP1 has been released. This should be pretty cool, since the CTP solved the problems I was having with Import/Export.
[Found via David Hayden]
This is actually pretty basic, but I had a hard time with it this morning, so I thought I'd share how to do this.
I created a host and host instance on our BizTalk server specifically to host the receive locations we are running. Creating the host and instance was no problem, but I had a hard time changing the receive locations to use the new host. In the BizTalk Administration Console as well as in BizTalk Explorer, there is a place to choose the Receive Handler - the problem was that this dropdown contained only the original host, not the new one.
It turns out that the host is associated with the receive adapter, not each instance. So the solution is to go to the Adapters node in the BizTalk Administration Console, choose the Adapter, choose Receive Locations, and then specify the host. This changes it for all receive locations using that host. It's actually documented here, so I'm not sure why I had so much trouble finding the answer.
BackupDiskFile::RequestDurableMedia: failure on backup device '\\storage\Backups\db1\db1.bak'. Operating system error 64(The specified network name is no longer available.).
I struggled for a while to find a solution and finally found this thread on. Unfortunately that seems to have disappeared, so this is a summary of what that thread recommended (and what worked for me):
This corrected the problem for me (on two machines I had to bump the value to 900) and seems to have no ill effects on the rest of the server.
We have a web service that is exposed on the internet but is only used by our client application. The client uses SSL to connect to the server and we are using WS-Security to provide authorization. Even so, I wanted to prevent someone from viewing the interface of the service by going to the default WSDL generated by ASP.Net ().
It turns out that this is actually pretty easy to do. I found documentation on it related to Visual Studio Team System Application Designer, but that page mentions that it makes a change to a service's web.config file. The answer is to remove the "Documentation" protocol from the <webServices>, <protocols> section of the config file. I chose to do it by using a <remove> directive in the service's web.config file, but you could also do it in the machine.config file to affect the whole server. The nice thing about this solution is that I can leave it enabled on our development server so that Visual Studio can auto-generate the client proxy, but it won't be exposed at all on the production site.
Here's my change:
<webServices>
<protocols>
<remove name="Documentation" />
</protocols>
</webServices>
I struggled with this a while back and finally came up with a solution that works. There are a few things that I would do differently now (I'll try to highlight them below), but I've been running this for a few months now and it produces correct output so I'm happy.
My previous experience with NAnt involved using the <solution> task which works very well. Unfortunately, the <solution> task doesn't support the Compact Framework. The solution I came up with is as follows: write a list of source files to compile to a file, convert any resources from .resx format to binary resources, write a list of resource files to include to a file, and finally call the command line compiler passing in these lists of files and any other options. I'll do a quick description of each step below.
Step 1 - Get a list of source files
This is the NAnt target I use to get the list of files (this is one of those things I'd do differently now - I'd use the /recurse option to the compiler).
<target name="listFiles">
<delete file="sourceList.txt" if="${file::exists('sourceList.txt')}" />
<foreach item="File" property="fName">
<in>
<items>
<include name="*.vb" />
<include name="${subDir}/**/*.vb" />
</items>
</in>
<do>
<echo message=""${fName}"" append="true" file="sourceList.txt"/>
</do>
</foreach>
</target>
This deletes the file if it already exists, then uses the <foreach> task to loop through the files with a "vb" extension in the specified directory and sub-directory (contained in the subDir property) and <echo> their names to the "sourceList.txt" file. Pretty straightforward.
Step 2 - Convert Resources
The resources step is something that completely slipped my mind at the beginning. I was originally only building a class library (with no embedded resources) so skipping this step wasn't an issue. It became a problem when I added a WinForms app to the solution and built it without including the resources (that didn't run so well since many control properties are stored in the resource files). Here's the target for building the resources.
<target name="makeResources">
<include name="*.resx" />
<include name="${subDir}/**/*.resx" />
<property name="resName" value="${path::get-file-name-without-extension(fName)}.resources" />
<exec program="C:\NET\CFDLL\CFResGen" commandline="${fName} ${path::combine(subDir, resName)}" />
This target uses the <foreach> task again, this time to go through all .resx files. This time, instead of writing the file names to a file, the task uses the <exec> task to execute the CFResgen tool, which converts .resx format files to binary .resources files. I get the filename by stripping the extension off of the .resx file and replacing it with .resources.
Step 3 - Get a list of resource files to include
This is very similar to Step 1, but here's the target for completeness' sake.
<target name="listResourceFiles">
<delete file="sourceResource.txt" if="${file::exists('sourceResource.txt')}" />
<include name="*.resources" />
<include name="*.png" />
<include name="${subDir}/**/*.resources" />
<include name="${subDir}/**/*.png" />
<property name="resName" value="${path::get-file-name(fName)}" />
<echo message="/resource:"${fName}","Company.Project.${subDir}.${resName}"" append="true" file="sourceResource.txt"/>
You can see that I'm also getting all files with a .png extension (I had one image embedded as a resource). The other difference is that when writing the information to the file, I'm writing it as "/resource:fileName, fully qualified resource name". This is the format expected by the compiler and matches the way resources are named when compiling in Visual Studio.
Step 4 - Call the compiler
The last step is to actually call the compiler, passing in the list of source files, the list of resources, plus any other options you may need. This is another step I'd do differently - I'd use the <vbc> task instead.
<target name="compileProj">
<mkdir dir="build\Debug\${subDir}" unless="${directory::exists('build\Debug\' + property::get-value('subDir'))}" />
<property name="cmdLine" value="@${subDir}.rsp @sourceList.txt" />
<property name="cmdLine" value="${cmdLine + ' @sourceResource.txt'}" if="${file::exists('sourceResource.txt')}" />
<exec program="C:\WINNT\Microsoft.NET\Framework\v1.1.4322\vbc.exe" commandline="${cmdLine}" output="compileData_${subDir}.txt"/>
Here, I first make a directory to put the compiled file in. Then, I build up the command line including an options file, the source file list, and the resource file list (if it exists). Lastly, I use the <exec> task again to execute the compiler. The options file looks like the following:
/warnaserror
/t:winexe
/netcf
/sdkpath:c:\net\cfdll
/out:build\Debug\programName\programName.exe
/r:c:\net\cfdll\system.dll
/r:c:\net\cfdll\system.data.dll
/r:c:\net\cfdll\system.xml.dll
/r:c:\net\cfdll\system.drawing.dll
/r:c:\net\cfdll\system.windows.forms.dll
/r:c:\net\cfdll\system.windows.forms.datagrid.dll
/r:build\Debug\DataAccess\DataAccess.dll
/r:build\Debug\CustomControls\CustomControls.dll
/debug+
/optionstrict+
/imports:Microsoft.VisualBasic
/imports:System
/imports:System.Collections
/imports:System.Configuration
/imports:System.Data
/imports:System.Diagnostics
/imports:System.Drawing
/imports:System.Windows.Forms
/rootnamespace:Company.Project
You can see what each of these options do on MSDN. Basically, they pass in the equivalent settings that Visual Studio uses when it compiles a project.
Wrapping Up
One final target I'll show here is the one that is actually called by NAnt. It sets the "subDir" property and then calls the other tasks shown above.
<target name="compileMultiple">
<property name="subDir" value="CustomControls" />
<call target="listFiles" />
<call target="getResources" />
<call target="compileProj"/>
<property name="subDir" value="DataAccess" />
<property name="subDir" value="programName" />
If anyone has any questions (or wants to see the complete files), feel free to use the Contact link on this blog.
Now that the release of VS2005 and the .NET Framework 2.0 is imminent, I'm able to justify more time at work for reviewing its impact. This list of breaking changes for 2.0 is definitely useful in that regard.
I think one of the most important changes is the fact that unhandled exceptions will now always end a process (search for “Unhandled exceptions will always be fatal to a process”on this page). The 1.1 behavior was that unhandled exceptions on threads other than the main thread would not end a process. I ran into this twice while working on Windows Services that used background threads. I know I should have been catching the exceptions, but it was still design/debug time and that error handling wasn't present yet. The symptom I experienced was that the service would run for a while, but then stop polling a message queue for any more messages. The problem was that the polling was done in response to the Elapsed event of a non auto-reset timer - when the processing was complete, the timer was re-enabled. When an exception was raised in the processing (which took place on a background thread as a result of the timer Elapsed event) the processing stopped before getting to the re-enable code. So essentially the service was dead, since it was no longer polling the message queue. This change will prevent this kind of bug from creeping in.
[Via James Manning]
Geoff Appleby posts a link to a Microsoft support page that can tell you what DLL came from what package. Very cool.
Where'd that DLL come from?
This is a bookmark for myself as much as anything else, but here is Raymond Lewallen's script for generating a text based dependency report for a SQL Server database.
Sql dependency report from query analyzer
One thing I forgot to mention in yesterday's post: it is possible to have SQLXML do the conversion from Base64 to binary using an updategram. When using an updategram, it is possible to specify a mapping schema, which lets you use attribute names in the updategram that are different than the column names in the table. A mapping schema also lets you specify data type conversion, one of which is Base64 to binary. Here's a brief example:
Mapping Schema
<xsd:schema xmlns:xsd=""
xmlns:
<xsd:element
<xsd:complexType>
<xsd:attribute
<xsd:attribute
</xsd:complexType>
</xsd:element>
</xsd:schema>
Updategram
<ROOT xmlns:
<updg:sync
<updg:before />
<updg:after>
<binaryTest someParameter="1" attachmentData="dGhpcyBpcyBhIHRleHQgZmlsZQ==" />
</updg:after>
</updg:sync>
</ROOT>
If this updategram is executed as a template query in a virtual directory configured for SQLXML access, a record will be inserted in the “binaryTest” table with the Base64 encoded data converted to binary.
I wasn't able to use this solution for my problem because I couldn't figure out how to get the updategram generated by BizTalk to use a mapping schema.
I recently had the need to insert some binary data that was encoded as Base64 into a SQL Server image column. Since I was using BizTalk, I was hoping it would be relatively straight forward - I was wrong. There's a bit of information here on how I came to this problem and some BizTalk stuff, so if you just want the solution, jump to here.
The Background
I was using BizTalk to pull a record from one SQL Server, do some processing on the data, and then insert the processed record into another SQL Server. The catch was that I needed to pull along some binary data and insert it into the second SQL Server. The data was stored in an image column in each SQL Server DB.
The Problem
Things started out easily enough. I was using a SQL Receive adapter to get the data from the source database. I added the BINARY BASE64 option to the FOR XML clause I was using and got the binary data as Base64. I then used the Add Generated Items wizard to generate the schema for the SQL insert (I was using a stored procedure that had a parameter typed as image). I was happy to see that the generated schema listed the data type of the attribute corresponding to the stored procedure parameter as “xs:base64binary”. This led me to believe that BizTalk (or the SQL Send Adapter, or SQLXML, or something) would convert the data back to binary as part of the insert. Not so. When the orchestration ran, I just got an error back from SQL Server:
The adapter "SQL" raised an error message.
Details "HRESULT="0x80040e07" Description="Operand type clash: ntext is incompatible with image"
So obviously nothing was converting the Base64 data back to binary.
The Solution
I posted a question to the newsgroups, but didn't really receive a good answer. The solution I ended up with was to slightly modify a SQL Server UDF I found on the newsgroups. The content of the modified UDF is below:
CREATE FUNCTION base64toBin (@bin64raw varchar(8000))
RETURNS varbinary(8000)
AS
BEGIN
declare @out varbinary(6000)
declare @i int
declare @length int
declare @bin64char char(1)
declare @bin64rawval tinyint
declare @bin64phase tinyint
declare @bin64nibble1 tinyint
declare @bin64nibble2 tinyint
declare @bin64nibble3 tinyint
SELECT @bin64phase = 0
SELECT @i = 1
SELECT @length = len(@bin64raw)
if right(@bin64raw, 1) <> '='
set @length = @length + 1
WHILE @i < @length
BEGIN
SELECT @bin64char = substring(@bin64raw,@i,1)
BEGIN
IF ASCII(@bin64char) BETWEEN 65 AND 90
SELECT @bin64rawval = ASCII(@bin64char)-65
ELSE
IF @bin64char LIKE '[a-z]'
SELECT @bin64rawval = ASCII(@bin64char)-71
ELSE
IF @bin64char LIKE '[0-9]'
SELECT @bin64rawval = ASCII(@bin64char)+4
ELSE
IF @bin64char = '+'
SELECT @bin64rawval = ASCII(@bin64char)+19
ELSE
IF @bin64char = '/'
SELECT @bin64rawval = ASCII(@bin64char)+16
ELSE
BEGIN
SELECT @bin64rawval = 0
SELECT @i = @length-1
END
END
IF @bin64phase = 0
BEGIN
SELECT @bin64nibble1 = (@bin64rawval - @bin64rawval%4)/4
SELECT @bin64nibble2 = @bin64rawval%4
SELECT @bin64nibble3 = 0
END
ELSE
IF @bin64phase =1
BEGIN
SELECT @bin64nibble2 = (@bin64nibble2*4) + (@bin64rawval - @bin64rawval%16)/16
SELECT @bin64nibble3 = @bin64rawval%16
IF @i<5
SELECT @out= convert (binary(1),((16*@bin64nibble1) + @bin64nibble2))
ELSE
SELECT @out= @out + convert (binary(1),((16*@bin64nibble1) + @bin64nibble2))
END
ELSE
IF @bin64phase =2
BEGIN
SELECT @bin64nibble1 = @bin64nibble3
SELECT @bin64nibble2 = (@bin64rawval - @bin64rawval%4)/4
SELECT @bin64nibble3 = @bin64rawval%4
SELECT @out=@out+ convert (binary(1),((16*@bin64nibble1) + @bin64nibble2))
END
ELSE
IF @bin64phase =3
BEGIN
SELECT @bin64nibble1 = (@bin64nibble3*4) + (@bin64rawval - @bin64rawval%16)/16
SELECT @bin64nibble2 = @bin64rawval%16
SELECT @out=@out+ convert (binary(1),((16*@bin64nibble1) + @bin64nibble2))
END
SELECT @bin64phase = (@bin64phase + 1)%4
SELECT @i = @i + 1
END
RETURN(@out)
END
The one change I made was to add the section that begins “if right(@bin64raw...”. I added this because my Base64 data was more than 8000 characters long and I needed to be able to call this UDF more than once with “chunks” of the Base64 data. Without the change, the UDF will drop the last character of the Base64 data on “chunks” in the middle. Below is a sample that calls this UDF with chunks of 2400 characters (because of the way Base64 encoding works, the “chunk” size has to be divisible by 4).
CREATE PROCEDURE testConvert
@someParameter int,
@attachmentData text
AS
/*** Table schema used for test
CREATE TABLE testData(someValue int, attachmentData image, CONSTRAINT PK_testData primary key nonclustered (someValue))
***/
-- insert NULL (0x0) into the image field so that the TEXTPTR function will work
insert testData(someValue, attachmentData)
values(@someParameter, 0x0)
declare @pointer varbinary(16)
select @pointer = TEXTPTR(attachmentData) from testData where someValue = @someParameter
declare @buff varchar(2400)
declare @offset int, @imgOffset int
set @offset = 1
set @imgOffset = 0
while @offset <= datalength(@attachmentData)
begin
select @buff = substring(@attachmentData, @offset, 2400)
declare @img varbinary(8000)
select @img = dbo.base64toBin(@buff)
UPDATETEXT testData.attachmentData @pointer @imgOffset NULL @img
set @imgOffset = @imgOffset + datalength(@img)
set @offset = @offset + 2400
end
So that's it. This has been working in production now for over a month, so I'm pretty happy. Hopefully this can help someone else. | http://geekswithblogs.net/scarpenter/Default.aspx | crawl-002 | refinedweb | 3,603 | 52.19 |
The last piece of the puzzle when it comes to password policies is the account lockout . Also this is another area where a tighter policy doesn't necessarily lead to improved security. A lot of companies go for 3 incorrect attempts, and this does lead to a lot of lockouts on Monday mornings and consequently a lot of false positives in the security departments monitoring for attacks on authentication systems.
Again with this section of policy it's important to understand what threat is being mitigated. Account lockout is designed to mitigate online password guessing attacks.
Now given that if you have a password length of say 8 characters with a requirement for upper and lower case characters you've got a theoretical maximum of 52^8 combinations (reality will be obviously be somewhat less than that).
So realistically the risk of an online password guessing attack succeeding is pretty low, if the attacker doesn't have an insight into likely passwords that a user will choose.
So what's a good number for password lockout. Well Microsoft recommends 50, but I've got to say that I think around 10 is good for most circumstances. My reasoning here is that if someone can't remember their password after 10 goes, they've almost definitely forgotten it.
Of course the next part of this is what to do when you've locked the account out. Well if you've gone for a reasonably high setting on the incorrect password attempts, then I'd say in most cases lock the account permanently until the user has gone through the password reset procedure (establishing a good password reset procedure is a topic for another day!).
One problem with account lockout that's raised is the potential for Denial-Of-Service. If the username is predictable to an attacker they can iterate over the namespace and lockout all the accounts.
The mitigation for this depends on the architecture of the systems. If the system is inside a corporation then monitoring should track the attack and incident response should hopefully be able to track down the attacker.
However for Internet facing systems that's obviously not possible, so some other mitigation may be needed.
Two potential options for this are
- IP based lockout. Instead of locking the account for all users, only lock it for the source IP of the attack. This isn't perfect protection but may well work for many attackers
- Use multiple secrets. For example have a password and a secondary secret. Then only increment the incorrect login count if the attacker is getting the username and one of the secrets correct. | https://raesene.github.io/blog/2007/10/06/risk_assessed_p_2/ | CC-MAIN-2020-34 | refinedweb | 441 | 51.89 |
tds alternatives and similar packages
Based on the "ORM and Datamapping" category.
Alternatively, view tds alternatives based on common mentions on social networks and blogs.
ecto10.0 9.2 tds VS ectoA toolkit for data mapping and language integrated query.
eredis9.7 0.0 tds VS eredisErlang Redis client
postgrex9.6 6.8 tds VS postgrexPostgreSQL driver for Elixir
redix9.5 6.2 tds VS redixFast, pipelined, resilient Redis driver for Elixir. 🛍
eventstore9.4 7.6 tds VS eventstoreEvent store using PostgreSQL for persistence
ecto_enum9.2 0.1 tds VS ecto_enumEcto extension to support enums in models
mongodb9.2 6.7 tds VS mongodbMongoDB driver for Elixir
amnesia9.2 0.0 tds VS amnesiaMnesia wrapper for Elixir.
memento9.1 4.1 tds VS mementoSimple + Powerful interface to the Mnesia Distributed Database 💾
mongodb_ecto9.0 0.0 tds VS mongodb_ectoMongoDB adapter for Ecto
mysql9.0 3.5 tds VS mysqlMySQL/OTP – MySQL and MariaDB client for Erlang/OTP
rethinkdb9.0 0.0 tds VS rethinkdbRethinkdb client in pure elixir (JSON protocol)
moebius9.0 0.0 tds VS moebiusA functional query tool for Elixir
paper_trail8.9 7.3 tds VS paper_trailTrack and record all the changes in your database with Ecto. Revert back to anytime in history.
arc_ecto8.8 0.0 tds VS arc_ectoAn integration with Arc and Ecto.
exredis8.7 0.0 tds VS exredisRedis commands for Elixir
mariaex8.6 0.0 tds VS mariaexPure Elixir database driver for MariaDB / MySQL
triplex8.4 2.7 tds VS triplexDatabase multitenancy for Elixir applications!
ExAudit8.4 4.1 tds VS ExAuditEcto auditing library that transparently tracks changes and can revert them.
ecto_mnesia8.3 0.0 tds VS ecto_mnesiaEcto adapter for Mnesia Erlang term database.
riak8.2 0.0 tds VS riakA Riak client written in Elixir.
shards8.2 1.3 tds VS shardsPartitioned ETS tables for Erlang and Elixir
xandra8.2 0.0 tds VS xandraFast, simple, and robust Cassandra driver for Elixir.
Bolt.Sips8.1 1.3 tds VS Bolt.SipsNeo4j driver for Elixir
timex_ecto7.9 0.0 tds VS timex_ectoAn adapter for using Timex DateTimes with Ecto
atlas7.8 0.0 tds VS atlasObject Relational Mapper for Elixir
kst7.8 8.0 tds VS kst💿 KVS: Abstract Chain Database
instream7.7 8.6 tds VS instreamInfluxDB driver for Elixir
ecto_psql_extras7.6 4.8 tds VS ecto_psql_extrasEcto PostgreSQL database performance insights. Locks, index usage, buffer cache hit ratios, vacuum stats and more.
esqlite7.6 1.2 L3 tds VS esqliteErlang NIF for sqlite
arbor7.6 0.0 tds VS arborEcto elixir adjacency list and tree traversal. Supports Ecto versions 2 and 3.
inquisitor7.5 0.0 tds VS inquisitorComposable query builder for Ecto
extreme7.5 0.0 tds VS extremeElixir Adapter for EventStore
ecto_fixtures7.4 0.0 tds VS ecto_fixturesFixtures for Elixir apps
sqlitex7.3 3.4 tds VS sqlitexAn Elixir wrapper around esqlite. Allows access to sqlite3 databases.
kalecto7.3 0.0 tds VS kalectoAdapter for the Calendar library in Ecto
mongo7.2 0.0 tds VS mongoMongoDB driver for Elixir
mongodb_driver7.1 5.9 tds VS mongodb_driverMongoDB driver for Elixir
boltun7.0 0.0 tds VS boltunTransforms notifications from the Postgres LISTEN/NOTIFY mechanism into callback execution
tds_ecto7.0 0.0 tds VS tds_ectoTDS Adapter for Ecto
redo7.0 0.0 tds VS redopipelined erlang redis client
sqlite_ecto6.9 0.0 tds VS sqlite_ectoSQLite3 adapter for Ecto
gremlex6.9 0.0 tds VS gremlexElixir Client for Gremlin (Apache TinkerPop™)
couchdb_connector6.9 0.0 tds VS couchdb_connectorA couchdb connector for Elixir
craterl6.8 0.0 tds VS craterlErlang client for crate.
neo4j_sips6.8 0.0 tds VS neo4j_sipsElixir driver for the Neo4j graph database server
sql_dust6.8 2.3 tds VS sql_dustEasy. Simple. Powerful. Generate (complex) SQL queries using magical Elixir SQL dust.
github_ecto6.7 0.0 tds VS github_ectoEcto adapter for GitHub API
ecto_cassandra6.6 0.0 tds VS ecto_cassandraCassandra Ecto Adapter
triton6.5 0.0 tds VS tritona Cassandra ORM for Elixir
* Code Quality Rankings and insights are calculated and provided by Lumnify.
They vary from L1 to L5 with "L5" being the highest.
Do you think we are missing an alternative of tds or a related project?
Popular Comparisons
README
Tds - MSSQL Driver for Elixir
MSSQL / TDS Database driver for Elixir.
NOTE:
Since TDS version 2.0,
tds_ecto package is deprecated, this version supports
ecto_sql since version 3.3.4.
Please check out the issues for a more complete overview. This branch should not be considered stable or ready for production yet.
For stable versions always use hex.pm as source for your mix.exs!!!
Usage
Add
:tds as a dependency in your
mix.exs file.
def deps do [{:tds, "~> 2.0"}] end
As of TDS version
>= 1.2, tds can support windows codepages other than
windows-1252 (latin1).
If you need such support you will need to include additional dependency
{:tds_encoding, "~> 1.0"}
and configure
:tds app to use
Tds.Encoding module like this:
import Mix.Config config :tds, :text_encoder, Tds.Encoding
Note that
:tds_encoding requires Rust compiler installed in order to compile nif.
In previous versions only SQL_Latin1_General was suported (codepage
windows-1252).
Please follow instructions at rust website
to install rust.
When you are done, run
mix deps.get in your shell to fetch and compile Tds.
Start an interactive Elixir shell with
iex -S mix.
iex> {:ok, pid} = Tds.start_link([hostname: "localhost", username: "test_user", password: "test_password", database: "test_db", port: 4000]) {:ok, #PID<0.69.0>} iex> Tds.query!(pid, "SELECT 'Some Awesome Text' AS MyColumn", []) %Tds.Result{columns: ["MyColumn"], rows: [{"Some Awesome Text"}], num_rows: 1}} iex> Tds.query!(pid, "INSERT INTO MyTable (MyColumn) VALUES (@my_value)", ...> [%Tds.Parameter{name: "@my_value", value: "My Actual Value"}]) %Tds.Result{columns: nil, rows: nil, num_rows: 1}}
Features
- Automatic decoding and encoding of Elixir values to and from MSSQL's binary format
- Support of TDS Versions 7.3, 7.4
Configuration
Example configuration
import Mix.Config config :your_app, :tds_conn, hostname: "localhost", username: "test_user", password: "test_password", database: "test_db", port: 1433
Then using
Application.get_env(:your_app, :tds_conn) use this as first parameter in
Tds.start_link/1 function.
There is additional parameter that can be used in configuration and can improve query execution in SQL Server. If you find out that your queries suffer from "density estimation" as described here
you can try switching how tds executes queries as below:
import Mix.Config config :your_app, :tds_conn, hostname: "localhost", username: "test_user", password: "test_password", database: "test_db", port: 1433, execution_mode: :executesql
This will skip calling
sp_prepare and query will be executed using
sp_executesql instead.
Please note that only one execution mode can be set at a time, and SQL Server will probably
use single execution plan (since it is NOT estimated by checking data density!).
Connecting to SQL Server Instances
Tds supports SQL Server instances by passing
instance: "instancename" to the connection options.
Since v1.0.16, additional connection parameters are:
:set_language- check stored procedure output
exec sp_helplanguagename column value should be used here
:set_datefirst- number in range
1..7
:set_dateformat- atom, one of
:mdy | :dmy | :ymd | :ydm | :myd | :dym
:set_deadlock_priority- atom, one of
:low | :high | :normal | -10..10
:set_lock_timeout- number in milliseconds > 0
:set_remote_proc_transactions- atom, one of
:on | :off
:set_implicit_transactions- atom, one of
:on | :off
:set_transaction_isolation_level- atom, one of
:read_uncommitted | :read_committed | :repeatable_read | :snapshot | :serializable
:set_allow_snapshot_isolation- atom, one of
:on | :off
:set_read_committed_snapshot- atom, one of
:on | :off
Set this option to enable snapshot isolation on the database level. Requires connecting with a user with appropriate rights. More info here.
Data representation
Currently unsupported: User-Defined Types, XML
Dates and Times
Tds can work with dates and times in either a tuple format or as Elixir calendar types. Calendar types can be enabled in the config with
config :tds, opts: [use_elixir_calendar_types: true].
Tuple forms:
- Date:
{yr, mth, day}
- Time:
{hr, min, sec}or
{hr, min, sec, fractional_seconds}
- DateTime:
{date, time}
- DateTimeOffset:
{utc_date, utc_time, offset_mins}
In SQL Server, the
fractional_seconds of a
time,
datetime2 or
datetimeoffset(n) column can have a precision of 0-7, where the
microsecond field of a
%Time{} or
%DateTime{} struct can have a precision of 0-6.
Note that the DateTimeOffset tuple expects the date and time in UTC and the offset in minutes. For example,
{{2020, 4, 5}, {5, 30, 59}, 600} is equal to
'2020-04-05 15:30:59+10:00'.
UUIDs
MSSQL stores UUIDs in mixed-endian format, and these mixed-endian UUIDs are returned in Tds.Result.
To convert a mixed-endian UUID binary to a big-endian string, use Tds.Types.UUID.load/1
To convert a big-endian UUID string to a mixed-endian binary, use Tds.Types.UUID.dump/1
Contributing
Clone and compile Tds with:
git clone cd tds mix deps.get
You can test the library with
mix test. Use
mix credo for linting and
mix dialyzer for static code analysis. Dialyzer will take a while when you
use it for the first time.
Development SQL Server Setup
The tests require an SQL Server database to be available on localhost. If you are not using Windows OS you can start sql server instance using Docker. Official SQL Server Docker image can be found here.
If you do not have specific requirements on how you would like to start sql server in docker, you can use script for this repo.
$ ./docker-mssql.sh
If you prefer to install SQL Server directly on your computer, you can find installation instructions here:
Make sure your SQL Server accepts the credentials defined in
config/test.exs.
You also will need to have the sqlcmd command line tools installed. Setup instructions can be found here:
Special Thanks
Thanks to ericmj, this driver takes a lot of inspiration from postgrex.
Also thanks to everyone in the Elixir Google group and on the Elixir IRCds README section above are relevant to that project's source code only. | https://elixir.libhunt.com/tds-alternatives | CC-MAIN-2021-43 | refinedweb | 1,626 | 50.94 |
Good Morning,
I have some Optix 5/6 code that employs textures in a shader via CUDA tex3D call. The following is a snippet of older code I am trying to convert:
rtTextureSampler<float4, 3> myTexture;
RT_PROGRAM void closestHit()
{
// get data for T (float3)
float4 s1;
s1 = tex3D(myTexture,T.x,T.y,T.z);
}
So far I have converted to OptiX 7 as in following where the first part is in a separate header file called system.h
struct Params { float4 *myTexture; };
Second part is file called myOptixTexture.cu:
#include <optix.h>
#include “system.h”
extern "C" __constant__ Params spm;
extern "C" __global__ void __closesthit__ch() {
// get data for T (float3)
float4 s1;
s1 = tex3D(spm.myTexture,T.x,T.y,T.z);
}
The above
s1 = tex3D(spm.myTexture,T.x,T.y,T.z); complains with the following:
no instance of overloaded function “tex3D” matches the argument list
argument types are: (float4 *, float, float, float)
Do I need to define the
spm.myTexture differently? Is there a more standard way to approach retrieving texture(s) in OptiX 7?
Thanks for any help or hints and I apologize for the code formatting issues I have with posting to this forum - I can’t seem to get multiple line code to fall in the ‘prefomatted text option’. | https://forums.developer.nvidia.com/t/tex3d-optix-7/160756 | CC-MAIN-2021-04 | refinedweb | 216 | 54.52 |
Odoo Help
This community is for beginners and experts willing to share their Odoo knowledge. It's not a forum to discuss ideas, but a knowledge base of questions and their answers.
how to get log of reports being printed ?
i am trying to override the report.report_sxw.create() method to get the ids,context to know which report is being printed, but i am not being able to do so.
this is my code :
from openerp.report import report_sxw
class log_mod_1(report_sxw.report_sxw):
def __init__(self,name, table, rml=False, parser=report_sxw.rml_parse, header='external', store=False):
super(log_mod_1,self).__init__(name, table, rml=False, parser=report_sxw.rml_parse, header='external', store=False)
def create(self, cr, uid, ids, data, context):
print("context {} \n uid {} \n new".format(context,uid))
return super(log_mod_1,self).create(cr,uid,ids,data it helps , but i want to automate it so tht even if a new module is installed the log of printed reports is maintained , and one doesn't have to edit the 'save as attachment prefix' everytime | https://www.odoo.com/forum/help-1/question/how-to-get-log-of-reports-being-printed-53439 | CC-MAIN-2016-44 | refinedweb | 176 | 52.05 |
Chapter 7 kicks off with a fairly painless reintroduction into functions. We computer harmonic mean for a number pair. There are probably a few ways to do this, but they should all be very similar. Take in two numbers, store them if you like as I did, check if they are zero’s, and if not compute the harmonic mean by way of a function. Check out my source below:
1. Write a program that repeatedly asks the user to enter pairs of numbers until at least one
of the pair is 0. For each pair, the program should use a function to calculate the har-
monic mean of the numbers. The function should return the answer to main(), which
should report the result. The harmonic mean of the numbers is the inverse of the aver-
age of the inverses and can be calculated as follows:
harmonic mean = 2.0 × x × y / (x + y)
#include <iostream> using namespace std; float Hmean(int x, int y); int main() { int myArray[2]; cout << "Please enter pairs of numbers: " << endl; cin >> myArray[0] >> myArray[1]; if(myArray[0] == 0 || myArray[1] == 0) { cout << "You have entered a pair matching zero" << endl; cout << "Exiting..." << endl; } else { cout << "The Harmonic mean of " << myArray[0] << " and " << myArray[1] << " is " << Hmean(myArray[0],myArray[1]); } return 0; } float Hmean(int x, int y) { float calc = 2.0 * x * y / (x + y); return calc; } | https://rundata.wordpress.com/2012/12/30/c-primer-plus-chapter-7-exercise-1/ | CC-MAIN-2016-22 | refinedweb | 236 | 78.28 |
I've just installed the release version of 5.1 and am getting a showstopper which is going to cause me to remove and revert back to 5.0.
Steps to reproduce:
1) Rename a folder
2) Open up one of the files in the folder to rename the namespace
3) Alt+Enter on the namespace and choose to move to namespace to match new folder structure. Something happens and it looks like the file and any referencing files are checked out but when I look at the Souce Control Explorer the scc status icon suggests that it is not checkout out. The namespace is NOT changed to match the new folder structure and I cannot edit the code using the keyboard, it has become read-only?
4) If I click the refresh button on the Source Control Explorer then the file icon changes to show that it is checked out.
5) The code becomes editable and the namespace change functionality works again.
Another possible side-effect I noticed is that the new namespace is not visible from other referencing projects.
I've just installed the release version of 5.1 and am getting a showstopper which is going to cause me to remove and revert back to 5.0.
Hello,
Thank you for the feedback. Could you please try to reproduce this with the latest nightly build and a simple ClassLib project?
I'm going to repro this on my side.
Thanks in advance!
Kirill Falk
JetBrains, Inc
"Develop with pleasure!" | https://resharper-support.jetbrains.com/hc/en-us/community/posts/206684115-Resharper-5-1-scc-refresh-bug-with-TFS-VS2010-?sort_by=votes | CC-MAIN-2020-34 | refinedweb | 252 | 72.05 |
After paving the ground, we are ready to start building on it.
Our first Java program will print a simple hello message to the user. In what follows, we are going to learn together how to write a Java program, compile it, and execute it on a Linux box.
Have fun!!
* * * * * *
Writing our First Program on a Linux Machine
1.Open a terminal on your Linux machine, and start editing a new file: Hello.java
vim Hello.java
2. In the vi editor, type the following code:
public class Hello { public static void main(String [] args) { System.out.println("Hello, I am a Java programmer"); } }
3. Save and exit the file.
4. Now, compile the file using the javac command:
javac Hello.java
5. If you list the files in the current directory, you should find a new file created called Hello.class. This file is the resulting bytecode class file (we know from the last article that Java compiler converts Java code into bytecode).
6. Now, execute the program:
This is how to write, compile, and execute a Java program on Linux. Now, we need to get this code explained.
How Things Work?!
Trivial program it is, no doubt. But, who said we couldn’t come up with some useful knowledge out of it?!
The program starts with the class declaration:
public class Hello
• This defines a class named Hello. The word public is called an access modifier. Access modifiers control access to classes, methods, and attributes. public tells that the Hello class is accessible from anywhere.
• Class names are usually started with an uppercase letter (although it could start with any letter or underscore).
• The file name (Hello.java) must be exactly the same as the public class defined in it. An important rule to always remember.
• A Java program file can contain more than one class definistion, but only one could be defined as public (the one the file name will be named after).
• Code inside class definition is enclosed within curly braces { }.
public static void main(String [] args)
• This defines a method named main, and specifies it as public.
• static in the method declaration makes the main() method available for use without instantiating (creating object of) the Hello class. This is necessary when defining the main() method, as it is the first code to be executed in a Java program.
• void is the return value expected from the main() method. Void means nothing, so specifying void as the method return means the method won’t return a value.
String [] args
• This is an array of strings that contains the list of command-line arguments passed to the program on execution (if any).
• As with classes, code inside a method is also enclosed within curly braces { }
System.out.println("Hello, I am a Java programmer");
• This is the body of the main() method.
• It prints its arguments to the standard out.
• Java statements are terminated with semi-colons.
• Strings of text need to be delimited by double quotes “ “
– All of these conclusions out of this simple program?!!
Yes, I told you, but you didn’t believe me!!
Using Comments
As a best practice, you should document your work, and your Java programs are not exception. Documenting your code makes it easier for others (and even for you) to understand your code if there is a need to modify it in the future. Comments are embedded within the Java code. The Java compiler completely ignores those comments when compiling the source code.
Java supported two types of comments:
Single-line comments: A single-line comment comments out everything appearing after it and up to the end of the line.
Syntax
//A single line comment. It will be ignored by the compiler
Example
public class Hello //The class declaration
Multi-line comments: This type of comments can extend to more than one line. When the compiler sees the /* sequence, it considers everything after it as comment. This continues until it encounters the closing sequence */
Syntax
/* A multi-line comment. This will be ignored by The Java compiler */
Example
Let’s re-write our first program with some comments added.
/* This is our first program With some comments added to it */ public class Hello // Starting the Class definition { public static void main(String [] args) { // This prints a hello message to the standard out System.out.println("Hello, I am a Java programmer"); } } // End of the program
Compressing your code
The above program could be re-written in the following compressed form, and still compile and execute successfully giving the same result:
public class Hello { public static void main(String [] args){ System.out.println("Hello, I am a Java programmer");}}
The reason is that white spaces and empty lines are also ignored by the Java compiler. Although this compact version comes in only three lines (instead of seven) that appear to be saving lines and keeping your code short, it is less readable. That is why I don’t like it, and never recommend it.
* * * * * *
Conclusions
• In this article, we have written our first program. Despite its simplicity, we have come out with important conclusions out of this program.
• The name of the file containing the Java program must exactly match the name of the public class defined in it.
• Class names usually start with uppercase letters (but may not).
• The main() method is the first code to be executed in a Java program.
• Specifying void as the return of a method means the method will not return any value.
• Java statements are terminated with semi-colons.
• Java supports two types of comments: single-line comments, and multi-line comments. A single-line comment uses the // sequence to comment anything that appears to its right until the end of the line. A multi-line comment uses the sequences /* and */ to delimit a comment that can span one or more lines.
• Empty lines and white spaces are ignored by the Java compiler.
That was part two of our long series on Java programming. Stay here!! Part three is on the way.
We won’t be late!! | https://blog.eduonix.com/java-programming-2/writing-our-first-program-in-java-programming-language/ | CC-MAIN-2022-33 | refinedweb | 1,014 | 65.42 |
A student asked me how to create a Vocabulary Enhancer (VocEnhancer) application using C#. A VocEnhancer application is nothing but an application that selects words randomly from a specified multiple options, so in consideration of those requirements and to help the student I have decided to write this article.
Before developing the application let us understand the conditions of VocEnhancer application.
Case
XYZ Inc. is a software development company that specializes in creating educational games for children. After doing a survey of the current market, the marketing team found that there is a great demand for software that helps children to improve their English vocabulary. Therefore, the marketing team proposed that the company should develop a new game that helps sharpen English vocabulary of children. One of the game designers in the company has suggested a game called VocEnhancer.
Rules of VocEnhancer
Design Specifications
- It is a single-player game.
- There are three levels in the game, Beginner, Average, and Expert. The players would choose from these levels depending on their vocabulary.
- Depending upon the level, a few characters from an English word appear on the screen. The total number of characters in the word, the number of chances to guess the word, the marks scored by the player, and the seconds elapsed also appear on the screen.
- The marks, number of chances to guess the word, and the time elapsed is to be updated on the screen.
- The player must guess each word within the given chances. If a player is unable to guess any five words in the given number of chances, the game should be over.
The design of the game should be as per the following specifications:
Now
create the two text files for storing words, one is for entire words
and the other is for the same guess words with underscore. I have
created the two files in my system at the locations:
In the same way you can create files for Level1 and Level2.
The words of these text files will look such as follows.
Now right-click on VocEnhancer Solution Explorer and add the following class files.
Now the VocEnhancer application Solution Explorer will look such as follows.
Now in the preceding class file we have written all the logic required to create the VocEnhancer application.
We are now done with the code, now run the console application in any of the following ways:
After running the VocEnhancer application the following Level selection console screen will be as:
Summary
I hope this application is useful for students. If you have any suggestion related to this article or if you want any additional requirements in the above application then please contact me.
- When the game starts, a menu should be displayed with the following options:
- Level 2: Average.
- Level 3: Expert .
- The player can choose any level. As the player selects the level, details of the highest score for that level should be displayed on the screen and then the game should start.
- Each level has a different sets of words.
- A few characters of a word are displayed on the screen each time. These characters are chosen randomly and are placed at the position where they occur in the word. The missing characters are indicated by blanks or underlines.
- The number of characters displayed on the screen depends upon the length of the word to be guessed. If the word is long then the characters displayed should be more and vice versa.
- The sequence of the words displayed should be random.
- The number of chances to guess each word should be five more than the number of missing characters in the word.
- The player gets one mark for guessing the correct word.
Now by using the design specification above let us start creating the VocEnhancer application using a C# console application using Visual Studio 2010 as in the following:
- Open Visual Studio from Start -> All programs -> Microsoft Visual Studio.
- Then go to to "File" -> "New" -> "Project..." then select "Visual C#" -> "Windows" -> "Console application".
- After that specify the name such as VocEnhancer or whatever name you wish and the location of the project and click on the "OK" button. The new project is created.
- "C:\\Users\\vithal\\Desktop\\Level1words.txt";
- "C:\\Users\\vithal\\Desktop\\guess.txt";
Note:
- There is a need to create separate files for guessing words you can dynamically create them using C# string functions but to make it understandable for the student I have created separate files.
Note:
- Keep the guess words sequence the same as the answer words file.
- hint_Words_utility.cs: for reading hint or guess words from text files.
- CountUtility.cs: for counting the marks, chances and time.
Now open the "hint_Words_utility class" file and write the following code:
using System; using System.IO; namespace VohEnhancer { public class hint_Words_utility { public static int ReadWordsFromHintFile(string[] Hintwords) { //geting file path from system desktop location which i have created string guessfilename = "C:\\Users\\vithal\\Desktop\\guess.txt"; if (File.Exists(guessfilename) == false) return -1; //reading the text file words StreamReader guessRead = new StreamReader(guessfilename); int cguessount = 0; for (int i = 0; i < 50; i++) { if (guessRead.EndOfStream == true) break; Hintwords[cguessount++] = guessRead.ReadLine(); } guessRead.Close(); return cguessount; } public static string GetHints() { string[] Hintwords = new string[50]; int count = ReadWordsFromHintFile(Hintwords); Random r = new Random();//for getting the random words from text files int guessX = (int)(r.Next(count)); String secretWord = Hintwords[guessX]; return secretWord; //returning the random words } } }
Now open the "CountUtility.cs" file where we have created some entities, in other words varibales, to do a count then write the following code in it.
namespace VohEnhancer { public static class CountUtility { private static int _SetChances = 5, _Setmarks=0; public static int Setmarks { get { return _Setmarks; } set { _Setmarks = value; } } public static int SetChances { get { return _SetChances; } set { _SetChances = value; } } } }
Now open the "program.cs" file with the main functionality of the VocEnhancer application with the following code in it.
using System; using System.Collections.Generic; using System.Linq; using System.Text; using System.IO; using System.Timers; namespace VohEnhancer { public class Program { public static string NewLine = Environment.NewLine; public static string Space = "\t"+"\t"+"\t"+"\t"; public static int Chances,Marks; static int ReadWordsFromFile(string[] words) { //getting path of Level1 words from the following location string filename = "C:\\Users\\vithal\\Desktop\\Level1words.txt"; if (File.Exists(filename) == false) return -1; //reading the words from file StreamReader s = new StreamReader(filename); int count = 0; for (int i = 0; i < 50; i++) { if (s.EndOfStream == true) break; words[count++] = s.ReadLine(); } s.Close(); return count; } static void Main(string[] args) { GameMode(); } private static void Level1Mode() { //level1 game mode logic or as same you can do it for Level2 and Level3 Chances = CountUtility.SetChances; Marks = CountUtility.Setmarks; ; Console.WriteLine("Chances:" + " " + Chances + "\t" + "\t" + " Marks:" + " " + Marks+"\t"+"\t"+"\t"+"Time Elapsed:"); Console.WriteLine(NewLine + Space + "Welcome to VocEnhancer" + NewLine + NewLine + NewLine); if (Chances == 0) { Console.Clear(); Console.WriteLine("Number of Chances Over,Please Press P to Play again or Press Q to Quit"); string Option = Console.ReadLine(); if (Option == "P" || Option == "p") { Console.Clear(); GameMode(); } } string[] words = new string[50]; int count = ReadWordsFromFile(words); Random r = new Random(); string hint = hint_Words_utility.GetHints(); int guessX = (int)(r.Next(count)); //getting guess words String secretWord = words[guessX]; //getting the length of guess word int numChars = secretWord.Length; Console.Write(Space + hint + NewLine + NewLine + NewLine + NewLine + NewLine + NewLine); Console.Write("\t" + "Total Charcters:" + " " + numChars + NewLine); Console.WriteLine(); bool bGuessedCorrectly = false; Console.WriteLine("\t" + "Write flip for Next word: " + NewLine + NewLine + "\t" + "Guess:"); while (true & Chances > 0) { //for changing word by wrtting flip string choice = Console.ReadLine(); if (choice == "flip") { Console.Clear(); Level1Mode(); } if (choice == secretWord) { bGuessedCorrectly = true; break; } else { bGuessedCorrectly = false; break; } } if (bGuessedCorrectly == false) { Chances--; //decreasing chances on every wrong attempt CountUtility.SetChances = Chances; Console.WriteLine(" your Guess Is Not correct"+NewLine); Console.WriteLine("Please Enter to stay Playing"); } else { Marks++; //counting marks for each correct answer CountUtility.Setmarks = Marks; Console.WriteLine(NewLine + "Congrats!" + NewLine); Console.WriteLine("Please Enter to stay Playing...."); } Console.ReadLine(); Console.Clear(); Level1Mode(); } public static void GameMode() { Console.WriteLine(Space + "Welcome to VocEnhancer" + NewLine + NewLine + NewLine); Console.WriteLine("Please Select Game Mode" + NewLine); Console.WriteLine("\t" + "1) Press 1 for Level 1- Beginner" + NewLine); Console.WriteLine("\t" + "2) Press 2 for Level 2-Average" + NewLine); Console.WriteLine("\t" + "3) Press 3 for Level 3-Expert" + NewLine); int Mode = int.Parse(Console.ReadLine()); //for selecting the modes switch (Mode) { case 1: Console.Clear(); // Level1 Mode Level1Mode(); break; case 2: Console.Clear(); // Level2 Mode,please write separate function as same Level1 with different file Level1Mode(); break; case 3: Console.Clear(); // Level3 Mode,please write separate function as same Level1 with different file Level1Mode(); break; default: Console.WriteLine("Please Select Game Mode between 1 to 3"); Console.ReadLine(); Console.Clear(); GameMode(); break; } } //end of class } }
Note:
- I have used the same function for all three modes, please use the same logic as level one but provide a different word file path for each function, I hope you will do that since you are smart enough for that.
- Press F5
- Directly clicking on the .exe file from the bin folder of the VocEnhancer application.
- Using any option you wish to and that you know
Now select any mode from the above, I will select Level1 Mode, so I need to press 1 from my keyboard. The following console screen will be displayed with options and words to be guessed.
Now, if you want to change the word then write "flip" and press Enter, it will show the next word as:
Now enter an incorrect guess, it will show the following message:
And the chances will be reduced by one and after four incorrect attempts:
If you attempt five incorrect guesses then the VocEnhancer application will exit from mode and show the following message.
Now type "P" and press Enter because we want to play again. It will start again from the start of the application, now after guessing the correct option, it will show the following screen:
From the preceding screen it's clear that our application is working and after each correct answer, the marks are increased by one. I hope you have done it.
Note:
Request and suggestion for students
- For detailed code please download the Zip file.
- Download Sample .
- Don't forget to create Text Files for words.
- Please don't use the code as is, use the example above as a guide and try improve it so in the future and for your career it's useful for you.
- Don't buy any project from an outsider, do it as your own skill, it will definitely be useful for getting a job quickly and easily.
- Invest your time and money in technical skill improvement instead of investing money for buying projects and if possible choose Microsoft technology for your career.
- For free guidance for any college project, you can contact with me or refer to the C# Corner site but don't buy any project in your final year.
I hope this application is useful for students. If you have any suggestion related to this article or if you want any additional requirements in the above application then please contact me. | http://www.compilemode.com/2015/05/vocenhancer-Game-using-C-Sharp.html | CC-MAIN-2019-26 | refinedweb | 1,866 | 56.05 |
The story:
Pushing for more quality and stability we integrate google test into our existing projects or extend test coverage. One of such cases was the creation of tests to document and verify a bugfix. They called a single function and checked the fields of the returned cv::Scalar.
TEST(ScalarTest, SingleValue) { ... cv::Scalar actual = target.compute(); ASSERT_DOUBLE_EQ(90, actual[0]); ASSERT_DOUBLE_EQ(0, actual[1]); ASSERT_DOUBLE_EQ(0, actual[2]); ASSERT_DOUBLE_EQ(0, actual[3]); }
Because this was the first test using OpenCV, the CMakeLists.txt also had to be modified:
target_link_libraries( ... ${OpenCV_LIBS} ... )
Unfortunately, the test didn’t run through: it ended either with a core dump or a segmentation fault. The analysis of the called function showed that it used no pointers and all variables were referenced while still in scope. What did gdb say to the segmentation fault?
(gdb) bt #0 0x00007ffff426bd25 in raise () from /lib64/libc.so.6 #1 0x00007ffff426d1a8 in abort () from /lib64/libc.so.6 #2 0x00007ffff42a9fbb in __libc_message () from /lib64/libc.so.6 #3 0x00007ffff42afb56 in malloc_printerr () from /lib64/libc.so.6 #4 0x00007ffff54d5135 in void std::_Destroy_aux<false>::__destroy<testing::internal::String*>(testing::internal::String*, testing::internal::String*) () from /usr/lib64/libopencv_ts.so.2.4 #5 0x00007ffff54d5168 in std::vector<testing::internal::String, std::allocator<testing::internal::String> >::~vector() () from /usr/lib64/libopencv_ts.so.2.4 #6 0x00007ffff426ec4f in __cxa_finalize () from /lib64/libc.so.6 #7 0x00007ffff54a6a33 in ?? () from /usr/lib64/libopencv_ts.so.2.4 #8 0x00007fffffffe110 in ?? () #9 0x00007ffff7de9ddf in _dl_fini () from /lib64/ld-linux-x86-64.so.2 Backtrace stopped: frame did not save the PC
Apparently my test had problems at the end of the test, at the time of object destruction. So I started to eliminate every statement until the problem vanished or no statements were left. The result:
#include "gtest/gtest.h" TEST(DemoTest, FailsBadly) { ASSERT_EQ(1, 0); }
And it still crashed! So the code under test wasn’t the culprit. Another change introduced previously was the addition of OpenCV libs to the linker call. An incompatibility between OpenCV and google test? A quick search spitted out posts from users experiencing the same problems, eventually leading to the entry in OpenCVs bug tracker: or. The opencv_ts library which appeared in the stack trace, exports symbols that conflict with google test version we link against. Since we didn’t need opencv_ts library, the solution was to clean up our linker dependencies:
Before:
find_package(OpenCV)
/usr/bin/c++ CMakeFiles/demo_tests.dir/DemoTests.cpp.o -o demo_tests -rdynamic ../gtest-1.7.0/libgtest_main.a ../gtest-1.7.0/libgtest.a -lpthread
After:
find_package(OpenCV REQUIRED core highgui)
/usr/bin/c++ CMakeFiles/demo_tests.dir/DemoTests.cpp.o -o demo_tests -rdynamic ../gtest-1.7.0/libgtest_main.a -lopencv_highgui -lopencv_core ../gtest-1.7.0/libgtest.a -lpthread
Lessons learned:
Know what you really want to depend on and explicitly name it. Ignorance or trust in build tools’ black magic is a recipe for blog posts. | https://schneide.blog/author/vasilikvockin/ | CC-MAIN-2022-27 | refinedweb | 497 | 52.05 |
On Mon, 2004-05-17 at 12:42, SAiello at Jentoo.com wrote:
> Connection Pools seem like a daunting undertaking, it could just be an
> illusion. Until I understand the basic definition of a Connection Pool, in a
> programming way. It will seem like a complicate and mysterious thing.
Connection pools in Apache are particularly difficult because it is
rather difficult to safely share open connections across multiple
processes. In short, this is a complicated and mysterious thing, and I
don't recommend trying it. :)
> Currently, when ever I require IMAP information, a connection needs to be
> established, user authentication, information request, close IMAP connection,
> parse the IMAP information into useable lists/distionaries, display web page.
> In the beginning I really didn't like having to open and close the IMAP
> connection, seemed like such a waste. So I tried to store the imaplib
> function into a session var, that was bad. ImapLib can't be pickled due to it
> using __slots__. I wrote the writer of imapLib, and he said he wasn't even
> aware it used __slots__, and that he may look at it at some point. So to me
> my next option was to look at writing my own IMAP library, how hard can it
> be.. Results, I am not even close to handling tcp sockets. At this point I
> was flustered, and came to conclusion to design my app as best I can, and
> after it is working, go back and try to work on the bits I feel can be done
> better.
You would be unable to pickle ImapLib anyway - pickling takes the state
of an object and serialises it to a data stream, with unpickling being
the reverse. However, in the case of a TCP connection to an IMAP
server, there is state held by both the OS and the IMAP server about the
established connection which cannot be serialised.
(Imagine: you write a pickled ImapLib object to disk, three days later
when you try to unpickle it, how does the remote IMAP server know that
it's an unpickled object from before?)
One way to solve the problem would be to extend the ImapLib object to
store enough information to reestablish the connection, but this
wouldn't help here - you'd still be doing the expensive connection
establishment when you unpickled the object!
Rather, I would look for places you make the imaplib calls, and see if
you could first check for cached data in your session before doing the
call:
def fetch_index(sess, user, password):
if sess.has_key('cache') and sess.cache.has_key('index'):
if sess.cache['index']['expires'] > time.time():
return sess.cache['index']['data']
index = imaplib_fetch_index(user, password)
if not sess.has_key('cache'):
sess['cache'] = {}
sess['cache']['index'] = {}
sess['cache']['index']['expires'] = time.time() + 300
sess['cache']['index']['data'] = index
return index
What this does is check if your session has a cache, and if that cache
contains an entry for the imap index, and that the cached entry has not
expired (it only lasts 5 minutes in this case). If it does, the cached
version is returned.
Otherwise, the old call to fetch the index from imaplib is executed, the
cache is created if it didn't exist, and an entry for the imap index is
added.
> That was my thinking to to use the session storage option. It seemes the
> easiest way to do it. But I am always wary when doing something the easy way.
I like doing things the easy way. If it's easy to understand now, it'll
be easy to understand six months later when you need to do maintenance.
If it's easy to write, it should be easy to spot bugs. And if it's easy
to do, it'll take you less time than a more complex alternative.
> I do not use refresh to cause the error, but page links (i.e. like the Next
> button for the next set of messages). clicking over and over on the A link
>From the server end, it's still a bunch of requests coming in hot on
each others' heels. I used refresh so I didn't have to futz around
making a form that submitted to a mod_python module; I just threw in an
index.py and a PythonHandler index.
> will cause it. It isn't the browser, I have tried Konqueror, firefox, and IE,
> they all will get my error page. Below are my system specs. I think Apache is
> forking, because I am not using threads as a compile option, so that is
> forked right ? I have to read up on which is better, I tried apache with
> threads once to see if that was the issue.. still did the same thing.
"Which process model is better" is a tricky question, and probably
outside of the immediate scope of this list - and definitely outside the
scope of my experience. :)
> Server Specs:
> Gentoo distribution of GNU/Linux, kernel 2.6.4
> Apache 2.0.49 with berkdb, gdbm, & ldap compiled in.
> mod_python 3.1.3
I'm running 2.0.47 with 3.1.3, but it doesn't sound like this is a
problem with your versions. I can't shed any light on why your sessions
are apparently not locked; so I don't know if I can help you further. :(
--
bje | https://modpython.org/pipermail/mod_python/2004-May/015618.html | CC-MAIN-2022-21 | refinedweb | 894 | 72.46 |
I have the following error when deploying my war...
Caused by: java.lang.VerifyError: JVMVRFY013 class loading constraint violated; class=com/sun/xml/bind/DatatypeConverterImpl, method=parseQName(Ljava/lang/String;Ljavax/xml/namespace/NamespaceContext;)Ljavax/xml/namespace/QName;, pc=0 com.sun.xml.bind.v2.runtime.JAXBContextImpl.<init>(JAXBContextImpl.java:219)
I've read multiple posts indicating this is probably due to a .jar conflict. My question is - how do I go about figuring out how to resolve that? Do I rip out the jar? Replace the jar? Something else?
I have the module and the classloading strategies set to PARENT LAST. So that means it should be pulling MY .jars first, right? This same war deploys in Jboss and so is it possible that Jboss is including a .jar that is helping out this error while WebSphere does not include this jar out of the box? Is it looking for additional jars in my .war that aren't there?
How do I start figuring out the problem? Any advice?
Topic
aehrensberger 270001VXWB
1 Post
Pinned topic java.lang.VerifyError DatatypeConverterImpl
2009-09-22T20:45:01Z |
Updated on 2010-03-09T15:19:07Z at 2010-03-09T15:19:07Z by dustmachine
- SystemAdmin 110000D4XK11979 Posts
Re: java.lang.VerifyError DatatypeConverterImpl2010-02-23T16:34:36Z
This is the accepted answer. This is the accepted answer.Drop the trace on lady4j or another tool and it will find the cross-reference for you. Or you can rip the JAR by hand of course... XD.
Regards.
- dustmachine 2700000ASR2 Posts
Re: java.lang.VerifyError DatatypeConverterImpl2010-03-09T00:22:26Z
This is the accepted answer. This is the accepted answer.It's most likely related to jaxb and Java 6 (websphere 7, right?). See if you're packaging a jaxb jar with your war. Another thing to check is if you're compiling against a Java 5 jaxb jar from somewhere (geronimo?) and then deploying the war without it (so then websphere would use the jaxb bundled with JDK 6).
Good luck.
- dustmachine 2700000ASR2 Posts
Re: java.lang.VerifyError DatatypeConverterImpl2010-03-09T15:19:07Z
This is the accepted answer. This is the accepted answer.One more followup -- you asked "How do I start figuring out the problem?" -- the method I use on a "class loading constraint violation" is to:
1. look for the types in the error message (such as "DatatypeConverterImpl")
2. check how many classes with the same name are on my classpath (CTRL-SHIFT-T in Eclipse, "Open Type...") and which jars they are coming from
3. check how many of those jars I'm including in the war/ear I'm deploying.
The worst is if/when I'm deploying jars that contain duplicate implementations of classes that are already included by JDK 1.6.0. (for example, some stuff in jaxb-api.2.1.jar) In theory they should all be the same, but why take the risk. | https://www.ibm.com/developerworks/community/forums/html/topic?id=77777777-0000-0000-0000-000014297813&ps=25 | CC-MAIN-2016-30 | refinedweb | 486 | 61.22 |
23 August 2012 09:49 [Source: ICIS news]
SINGAPORE (ICIS)--?xml:namespace>
However, the actual date and prices are not settled yet, the source said.
The company has been conducting trial runs and operating the plant at 30% of capacity since 15 August, the source added.
The plant is the first phase of a 1m cbm/day project at the same site that involves an investment of about yuan (CNY) 95.4m ($15m), the source said.
Feedstock gas is mainly supplied by PetroChina Changqing Oilfield, according to the source.
LNG supply will rise and competition among domestic producers will intensify in the region. However, it will have minimal impact on the Chinese market in the near term.
Hengtong New Energy Development is largely engaged in the investment, construction and management of natural gas liquefaction and utilisation facilities.
( | http://www.icis.com/Articles/2012/08/23/9589291/chinas-hengtong-new-energy-development-to-supply-lng-in-late.html | CC-MAIN-2014-23 | refinedweb | 137 | 56.55 |
0
i am trying to do a program but i have two error and i don't know what they mean. can someone please help me.
this is the program
#include "stdafx.h" #include <iostream> using namespace std; int S1(int x, int& y); int S2(int& m, int n); int main() { int a = 2, b = 2, c =2; cout << "In main, " << endl << "a = " << a << endl << "b = " << b << endl<<endl; a = S1(b, c); cout << "In main, " << endl << "a = " << a << endl << "b = " << b << endl; cout << endl << endl; return(0); } int S1(int& x, int& y) { int t = S2(x,y); x = y; y = t; cout << "In S1, " << endl << "t = " << t << endl << "x = " << x << endl << "y = " << y <<endl<< endl; return(t * 3); } int S2(int& m, int n) { int r=6; n=m; m++; cout << "In S2, " << endl << "m = " << m << endl << "n = " << n << endl << "r = " << r << endl<< endl; return(n + 1); }
and these are the error messages
hwassinment error LNK2019: unresolved external symbol "int __cdecl S1
(ihwassinment fatal error LNK1120:
(int,int &)" (?S1@@YAHHAAH@Z) referenced in function _main
hwassinment fatal error LNK1120: 1 unresolved externals | https://www.daniweb.com/programming/software-development/threads/15286/need-help-with-this-error | CC-MAIN-2016-44 | refinedweb | 185 | 71.41 |
Hello, I just started doing an online tutorial to learn some java and I'm stumped on solving this Hailstorm Problem. The problem is an old math phenomenon where no matter what number you start with you will end at the number 1. If the number is even you divide it by two and if its odd you compute 3n + 1 (n being the number) and you continue to do it until you reach 1. So here was my attempt at solving it:
import acm.program.*; public class Hailstone extends ConsoleProgram { public void run() { int n = readInt ("Enter a number: "); int turns = 1; while (n != 1) { if ((n % 2) == 0){ numEven(n); } else { numOdd(n); } turns++; } println ("The process took " + turns + " to reach 1."); } // numEven computes the answer if n is even. private int numEven (int n) { int j = n/2; println (n + " is even so I take half: " + j); return j; } // numOdd computes the answer if n is odd. private int numOdd (int n) { int k = (int)(n * 3) + 1; println (n + " is odd, so I make 3n + 1: "+ k); return k; } }
The problem being that its looping indefinitely after it solves the first even or odd computation and doesn't put the returned value back into the original n for the second loop. Let me know if you think of anything to solve this. Thanks. | http://www.javaprogrammingforums.com/algorithms-recursion/425-looping-problem-hailstrom-program.html | CC-MAIN-2014-35 | refinedweb | 227 | 74.53 |
2. Installation Procedures
Oracle Java ME Embedded Client SDK and Eclipse Projects
Writing Your First Application
This chapter details the steps to configure Eclipse to use the Oracle Java ME Embedded Client SDK as a Java platform, and presents a sample project and a sample application to run in your configured environment.
Note - The SDK supports application development and the emulation. Debugging and Profiling are not supported.
This section describes how to add the Oracle Java ME Embedded Client SDK as a Platform in Eclipse and how to create a new project in Eclipse.
Launch Eclipse and follow these steps:
The Preferences Window opens.
The central panel displays the JREs on this machine.
The window displays the JRE Type options.
The window displays the JRE definition panel.
The Installed JREs panel lists Oracle Java Micro Edition Embedded Client 1.0 as an installed JRE.
Now you are ready to develop applications for the Oracle Java ME Embedded Client.
After completing the steps documented in the previous task, Eclipse has the JRE platform required to build and run Oracle Java ME Embedded Client applications.
The Create a Java Project window is displayed.
The new project HelloWorld is created along with separate directories for sources and class files, src and bin respectively.
The New Java Class window opens.
public static void main(String[] args)
The new Hello.java file in the src folder is an empty template and must be modified.
package hello; public class Hello { /** * @param args */ public static void main(String[] args) { // TODO Auto-generated method stub System.out.println("Hello World"); } }
The Hello World message prints in the console. | http://docs.oracle.com/javame/config/cdc/cdc-opt-impl/ojmeec/1.0/install/html/z400000a1006487.html | CC-MAIN-2015-22 | refinedweb | 270 | 57.47 |
8.5. Sequence points
Associated with, but distinct from, the problems of real-time programming are sequence points. These are the Standard's attempt to define when certain sorts of optimization may and may not be permitted to be in effect. For example, look at this program:
#include <stdio.h> #include <stdlib.h> int i_var; void func(void); main(){ while(i_var != 10000){ func(); i_var++; } exit(EXIT_SUCCESS); } void func(void){ printf("in func, i_var is %d\n", i_var); }Example 8.6
The compiler might want to optimize the loop so that
i_var
can be stored in a machine register for speed. However, the function
needs to have access to the correct value of
i_var so that
it can print the right value. This means that the register must be stored
back into
i_var at each function call (at least). When and
where these conditions must occur are described by the Standard. At each
sequence point, the side effects of all previous expressions will be
completed. This is why you cannot rely on expressions such as:
a[i] = i++;
because there is no sequence point specified for the assignment,
increment or index operators, you don't know when the effect of the
increment on
i occurs.
The sequence points laid down in the Standard are the following:
- The point of calling a function, after evaluating its arguments.
- The end of the first operand of the
&&operator.
- The end of the first operand of the
||operator.
- The end of the first operand of the
?:conditional operator.
- The end of the each operand of the comma operator.
- Completing the evaluation of a full expression. They are the following:
- Evaluating the initializer of an
autoobject.
- The expression in an ‘ordinary’ statement—an expression followed by semicolon.
- The controlling expressions in
do,
while,
if,
switchor
forstatements.
- The other two expressions in a for statement.
- The expression in a
returnstatement. | http://publications.gbdirect.co.uk/c_book/chapter8/sequence_points.html | CC-MAIN-2014-41 | refinedweb | 313 | 58.28 |
Create a quiz with React
We’re going to create a multiple choice quiz with React - without setting up any build configuration. This is now possible thanks to the Create React App project, which was created by the team at Facebook. Check out the demo here to see the quiz in action. Starting a new React project usually involves a lot of overhead that can be time consuming for anyone and straight up daunting to beginners. With Create React App you get a modern workflow with Webpack, Babel (for ES6 syntax support), ESLint and more all configured for you. This allows you to jump into writing your code straight away.
Initial Setup
To get started make sure you have Node 4 or later installed on your machine. Then to create your app, from the command line, run the following command in your preferred directory:
npx create-react-app react-quiz cd react-quiz
Feel free to name your app whatever you like, I’ve named it react-quiz here. This will create a new directory named react-quiz inside the current directory, generate the initial project structure and install the dependencies. Your app directory will now look something like this:
react-quiz/ README.md index.html favicon.ico node_modules/ package.json src/ App.css App.js index.css index.js logo.svg
Once installation is complete we can run the app with the following command:
npm start
You can now view it in a browser at. Feel free to take a moment to familiarise yourself with the current code. The page will reload automatically if you make any changes. You will also see any build errors and lint warnings in the console. And just like that we have a nice modern development environment setup! Now we can start creating the quiz.
What we’re building
We all know how a quiz works, there are a list of questions, and each question has a few different options that map to the possible outcomes. The data that we’ll be working with today will determine which video game console company the user is a bigger fan of; Nintendo, Sony or Microsoft. Our quiz has five questions, with three options to choose from per question. However the quiz we’re creating will work with any amount of questions/answer options.
We’ll be thinking in the react way when building this app, which involves creating small components to build up our app. Let’s start by defining what these components are:
- Question
- Question count
- Answer options
- Result
These components will be composed together through a container component to build our quiz.
Creating the first component
First off, we’ll install the prop-types library for React, by running the following command in our project’s root directory:
npm install prop-types
Then create a new directory named
components, and inside that create a new file named
Question.js. This is where we’ll start writing our first React component. Add the following code:
You may be wondering why we’re not using the class syntax for this component. Since this is a stateless presentation component, we don’t need to use a class to create the component. In fact it’s best practice not to, as it allows you to eliminate a lot of boilerplate code this way.
There’s a popular pattern in React that divides your components into two categories; presentational and container components. The most basic description of this pattern is that container components should be concerned with how things work, and presentational components should define how things look. Check out this article for a more detailed explanation.
This very simple component is just displaying the question. The question’s content is being passed in via props from a container component. The
propTypes (short for property types) in React are used to assist developers, they define the type of prop and what props are required. React will warn you when there is an invalid
propType.
Let’s add this component to our main container component. First we need to import the component, open
App.js and add this import statement just below the others:
import Question from './components/Question';
Then add the component to the
App component’s render function. Here is what the JSX should now look like:
Note that we’re just passing in a string to the
content prop for demonstration purposes, this will be changed later on. If you view the app in the browser the question should now be displayed.
Creating the other presentational components
Next we’ll create the question count component. In the
components folder, create a new file named
QuestionCount.js and add the following:
This is very similar to the previous component we created. It will receive two props,
counter and
total from a container component.
The next component will display the answer options. Create a file named
AnswerOption.js in the
components folder and add the following:
This component consists of a list item with a radio button and label. There is one new concept introduced here on line 11; the
checked property is a comparison statement. This value will be a boolean (true or false) based on whether the answer selected is equal to the answer option type.
Bringing the components together
We will now bring these components together within the
Quiz component. Create a new file named
Quiz.js in the components directory. And paste in the following import statements:
import React from 'react'; import PropTypes from 'prop-types'; import Question from '../components/Question'; import QuestionCount from '../components/QuestionCount'; import AnswerOption from '../components/AnswerOption';
Here we are importing the components that we just created. Now let’s define the
Quiz component:
We’re building the quiz with the components we previously created, and passing them the required props. You’ll notice that we’re passing in props that have been passed down to the
Quiz component. So the
Quiz component is also a presentational component. That’s because we want to try and keep all of the code concerned with the display of components separate from the functionality.
To make this code work we need to define the
renderAnswerOptions function that is being used to create each of the
AnswerOptions. Paste in this code just above the return statement:
Don’t worry too much about the properties for now, they’ll be defined in the main container component (
App.js). Essentially, this will render an
AnswerOption component for each of the answer options defined in our state.
Add some style
Create React App configures Webpack for us so that we can define separate CSS files for each module. It will then bundle all of our CSS into one file upon saving. We won’t be diving too much into styling. So for this tutorial I’ve just put all of the styles into one CSS file. Grab the CSS from the Github repository here, and replace the current contents of
index.css with it.
Adding functionality
Before creating the quiz functionality we need to define the app’s state. Inside
App.js, we define our initial state in the App class’s constructor function. This is the idiomatic way of declaring initial state when using ES6. In
App.js, place the following code at the top of the
App class:
State should contain data that a component’s event handlers may change to trigger a UI update. The above code is all of the state required for the quiz. Now we need to actually grab some data to populate our state. You can grab the demo question data here. Create a new folder named
api, then create a new file named
quizQuestions.js and paste the demo data contents into that file. Then import that file into
App.js:
import quizQuestions from './api/quizQuestions';
Next, we’ll populate our app’s state using the
componentDidMount life cycle event React provides us. Place this code directly below our constructor function:
The
componentDidMount life cycle event is invoked immediately after a component is mounted (inserted into the tree). When you call
setState within this method as we are above on line 4,
render() will see the updated state and it will be executed only once despite the state change.
As you may have noticed we’ve also used a function named
shuffleArray on line 2, this will randomise the order of the answer options - just to spice things up a bit. But we’re yet to define that function, so let’s do that now directly below the
componentDidMount function:
As the name suggests this function will shuffle an array. I won’t dive into how it’s doing so, as that’s outside the scope of this tutorial, but here’s a link to the source if you’re interested.
Now let’s define the render function for the
App.js component:
One thing to note here is that we actually need to hard bind our event handlers in the
render function. For performance reasons the best place to do this is in the constructor. Add this line to the bottom of our constructor function:
this.handleAnswerSelected = this.handleAnswerSelected.bind(this);
Updating state without mutating it
Now we’re going to create the functionality for selecting an answer. Add the follwing function directly above the
render() function:
This function is currently performing two tasks; setting the answer and then setting the next question. Each task has been extracted into it’s own function to help keep the code clean and readable. We now need to define each of these functions, we’ll start with the
setUserAnswer function, add the following code directly above the
handleAnswerSelected function:
Okay let’s talk about what’s going on here. We’re setting the answer based on the user’s selection, which is the first instance of changing state based on user actions. The value being passed in as the answer parameter on line 1, is the value of the selected answer. Which in our case will be either Nintendo, Microsoft or Sony.
On line 2 we’re calling
setState with a function rather than an object. This is so we can access the previous state, which will be passed into the function as the first parameter.
setState is the primary method used to trigger UI updates from event handlers and server request callbacks. In React we should treat state as if it is unable to be changed (immutable). This is why on line 3 we’re creating a new object. This object has the original properties of
this.state.answersCount (through the use of the spread syntax) merged with the new
answerCount value. We have now updated the state without mutating it directly.
Next we need to define the
setNextQuestion function. As the name suggests, this will update our state to display the next question. Add this code below the
updatedAnswersCount function:
Here we increment the
counter and
questionId state, by first creating the variables, then assigning them via
setState. We’re also updating the
question and
answerOption state based on the counter variable. We now have a somewhat functional app! When you select an answer it should update the state accordingly and display the next question.
Calculating the result
Firstly, we need to update the
handleAnswerSelected function. In the else statement that we previously created but left empty, include the following code:
setTimeout(() => this.setResults(this.getResults()), 300);
Here we’re calling
setResults after 300ms. The delay is simply a UX decision made so that the user has a moment to see the visual feedback indicating that their selection has been made. We’re passing the results in the form of another function
getResults. Let’s define that now:
This function calculates which answer type (Sony, Microsoft or Nintendo in our case) has the highest number - aka the quiz result. This is a fairly ES6 heavy function, but I find it much more verbose than the ES5 equivalent. On line 3,
answersCountKeys is utilising
Object.keys to return an array of strings that represent all the properties of an object. In this case it will return:
['nintendo', 'microsoft', 'sony']
Then on line 4,
answersCountValues is mapping over this array to return an array of the values. Then we can get the highest number of that array with
Math.max.apply, this is assigned to the
maxAnswerCount variable on line 5. Then finally on line 7, we calculate which key has a value equal to the
maxAnswerCount using the filter method and return it.
Now we need to create the
setResults function. Include the following code directly below the
getResults function:
This function receives the result from
getResults which is an array, and checks to see if that array has one value. If so we assign that value via
setState. If the array has more or less than one value that means there is no conclusive answer. So we set the result as
Undetermined.
Displaying the result
Finally we need to display the result. Create a new file in the
components directory named
Result.js and add the following code:
This is a presentation component that will display the result. Next we have to update the
render function in
App.js. Replace the
<Quiz/> component in the
render function with the following:
{this.state.result ? this.renderResult() : this.renderQuiz()}
Here we’re using the JavaScript ternary operater, which is a shorthand if statement, to determine whether the quiz or the result should be displayed. If
state.result has a value then it will display the result.
Finally we need to create these two functions we just added. Add the following code directly above the render function:
We now have a fully functional quiz! If you want to deploy your app, you can run the
npm run build command to generate an optimized build for production. Your app will now be minified and ready to be deployed!
Bonus: Adding animation
Let’s add some subtle animation to make the user experience feel a bit nicer. Firstly we need to install the React animation component with the following command (note we’re using v1 of the library, v2 has changed quite a bit):
npm install react-transition-group@1.x
This provides us with an easy way to perform CSS transitions and animations with React components. If you’ve ever worked with animations in Angular this will feel familiar to you, as it’s inspired by the excellent ng-animate library. We’re simply going to be adding a fade-in and fade-out effect to our questions.
Navigate to the
Quiz.js component and add the following import statement below the others:
import { CSSTransitionGroup } from 'react-transition-group';
CSSTransitionGroup is a simple element that wraps all of the components you are interested in animating. We’re going to be animating the entire quiz component. To do that, update the render function with the following code:
Here we’ve wrapped the quiz element in a
CSSTransitionGroup element. Child elements of
CSSTransitionGroup must be provided with a unique
key attribute. This is how React determines which children have entered, left, or stayed. We’ve defined the key as
props.questionId on line 11, as that value will be different for each question.
There are quite a few properties on the
CSSTransitionGroup element here, I’ll go through what each one’s purpose is. The
component prop is specifying what HTML element this will be rendered as. The
transitionName prop is specifying the name of the CSS classes that will be added to the element. In our case they will be
fade-enter and
fade-enter-active when the element is being rendered, and
fade-leave and
fade-leave-active when they are being removed. The
transitionEnterTimeout and
transitionLeaveTimeout are specifying the animation durations. This also needs to be specified in the CSS. You’ll find that the required CSS is already included in the
index.css file we previously got from Github. This is what it looks like:
This CSS is just changing the opacity values, and specifying the transition duration and type. The
transitionEnterTimeout has been specified as 800ms to cater for the 300ms delay we’re adding to the
.fade-enter CSS transition. The
transitionAppear prop is specifying that we want the component to be animated on initial mount. And
transitionAppearTimeout specifies the duration of that animation. The CSS for that is similar to the other animations:
The last thing we need to change is the
render function of the
Result.js component. Replace the
render function with the following:
This will ensure that our results component is also animated in. And with that, our quiz animation is complete!
Completed Demo
You can find the complete source code for this quiz on Github. I hope this tutorial was helpful, I went through a lot of concepts pretty quickly, so if you have any questions feel free to hit me up on Twitter. | https://mitchgavan.com/react-quiz/ | CC-MAIN-2019-30 | refinedweb | 2,829 | 64.2 |
Subsets and Splits
No community queries yet
The top public SQL queries from the community will appear here once available.