It’s all there in the repo now. fastai.text is the new processing framework, or just use torchtext. Lesson 4 takes you through a complete example (IMDb).
FitLaM: What I've been working on recently
I have a question about how to set up the embedding matrix for the fine tuned task. Since the embedding matrix needs to be based on the same vocabulary (I believe) as the data used to train the network, how does one deal with new words in the data set used for fine tuning?
@Jeremy, awesome paper!
Lesson 4 code looks very similar to the algorithm described in the paper, but some steps (e.g. pre-training on the Wikitext-103 dataset) were not used in the lesson. The paper mentions things like gradual unfreezing of the language model layer by layer and warm-up reverse annealing. Some of these tricks were not used in lesson 4 (or I couldn't find them in the code).
Do you plan to share the code that can be used to reproduce results described in the paper?
Is it the code in text.py file?
text.py and nlp.py have some code duplication. Which one should be used as a primary now?
Yup fastai.text is text.py. I’m hoping to replace fastai.nlp with fastai.text by the time we teach part 2
And to have a walkthrough of all the tricks…
Hey @jeremy ,
I want to fine-tune a neural translation model (seq2seq) using a pretrained language model.
But the vocabularies of the two datasets are not as similar as I would like.
Should I train both models using a combination of the vocabularies (adding both vocabs together),
or train a character-level model?
What are your suggestions upon encountering these issues (dealing with different vocab sizes) while fine-tuning NLP models?
Hi @jeremy,
Do you have any example of Concat pooling from the FitLaM paper? Is it available in the videos?
This is the relevant code:
class PoolingLinearClassifier(nn.Module):
    def __init__(self, layers, drops):
        super().__init__()
        self.layers = nn.ModuleList([
            LinearBlock(layers[i], layers[i + 1], drops[i]) for i in range(len(layers) - 1)])

    def pool(self, x, bs, is_max):
        f = F.adaptive_max_pool1d if is_max else F.adaptive_avg_pool1d
        return f(x.permute(1,2,0), (1,)).view(bs,-1)

    def forward(self, input):
        raw_outputs, outputs = input
        output = outputs[-1]
        sl,bs,_ = output.size()
        avgpool = self.pool(output, bs, False)
        mxpool = self.pool(output, bs, True)
        x = torch.cat([output[-1], mxpool, avgpool], 1)
        for l in self.layers:
            l_x = l(x)
            x = F.relu(l_x)
        return l_x, raw_outputs, outputs
It lives in lm_rnn.py.
The idea is very elegant. Say you have an RNN with bptt of 10. At each step a hidden state will be generated with the last one being the final, the output. Each hidden state is a vector of length
n. We take the output of shape (1, n), take the avg across all ten hidden states for items with the same index and obtain another vector of shape (1, n), do a similar operation for max across the indexes. As a result we have 3 vectors of shape (1, n). All we do then is we concatenate them together to get a vector of shape (1, 3n).
This is my best understanding but it might be wrong - I haven’t gotten around to experimenting with the model yet.
Looking forward to trying fastai.text! My experience with torchtext has been slightly bitter due to its overall sequential tokenization. That makes it slow and memory inefficient. To work around it I had to play a few tricks. Maybe I'll post a thread on that for comments sometime.
Me too! Hence fastai.text
Questions on torchtext and padding as a regularizer
Looking at the code, it looks like the outputs are the outputs of the RNN and not the hidden states of the RNN,
not like in the paper.
Relevant code from the fastai lm_rnn.py (the RNN_Encoder forward method):
def forward(self, input):
    """ Invoked during the forward propagation of the RNN_Encoder module.
    Args:
        input (Tensor): input of shape (sentence length x batch_size)
    Returns:
        raw_outputs (tuple(list (Tensor), list(Tensor)): list of tensors evaluated from each RNN layer without using dropouth,
        list of tensors evaluated from each RNN layer using dropouth,
    """
    sl,bs = input.size()
    if bs!=self.bs:
        self.bs=bs
        self.reset()
    emb = self.encoder_with_dropout(input, dropout=self.dropoute if self.training else 0)
    emb = self.dropouti(emb)
    raw_output = emb
    new_hidden,raw_outputs,outputs = [],[],[]
    for l, (rnn,drop) in enumerate(zip(self.rnns, self.dropouths)):
        current_input = raw_output
        with warnings.catch_warnings():
            warnings.simplefilter("ignore")
            raw_output, new_h = rnn(raw_output, self.hidden[l])
        new_hidden.append(new_h)
        raw_outputs.append(raw_output)
        if l != self.nlayers - 1: raw_output = drop(raw_output)
        outputs.append(raw_output)
    self.hidden = repackage_var(new_hidden)
    return raw_outputs, outputs
Am I correct, or am I missing something?
I am not sure, but I suspect the issue here might be that the naming gets overloaded. The RNN produces some output for each time step. We can treat it as a black box that just gives us the output vector. Inside the black box many things might happen (including it having multiple layers) and it might be producing some activations that might be referred to as its ‘hidden state’.
I was referring to the ‘hidden state’ on a more macro level, as in the hidden state of the entire model being what is produced at each time step by the Encoder. At each time step we get some vector of length n and we can stack them together to get something of the shape (<num_time_steps>, n). The pooling layer then tries to figure out what to do with this information. The simplest approach would be to just grab the last RNN output and call it a day. But this is problematic because some information that might be useful would escape us, and also gradient propagation and remembering information from many time steps back is not ideal in any of the RNN architectures (LSTM / GRU, etc.). So here we are doing something quite smart - we are grabbing the last hidden state and also taking the max and mean of each activation in n across time steps.
wow thanks for the very quick response!
The SAM tag dictionary class that stores all optional SAM fields. More...
#include <seqan3/io/sam_file/sam_tag_dictionary.hpp>
The SAM tag dictionary class that stores all optional SAM fields.
A SAM tag consists of two letters, initialized via the string literal ""_tag, which delegates to its unique id (type uint16_t). Example: uint16_t tag_id = "NM"_tag;
The purpose of those tags is to fill or query the seqan3::sam_tag_dictionary for a specific key (tag_id) and retrieve the corresponding value.
Note that a SAM tag is always associated with a specific type. In the SAM format, the type is indicated in the second argument of the TAG:TYPE:VALUE field. For example, "NM:i:3" specifies the NM tag of an integer type with value 3. In seqan3, the types for known SAM tags are pre-defined by a type trait called seqan3::sam_tag_type. You can access the type via seqan3::sam_tag_type_t<"NM"_tag>,
which is the short cut for typename seqan3::sam_tag_type<"NM"_tag>::type.
The following types are allowed by the SAM specifications:
For an integer or numeric array (type ‘B’), the second letter can be one of ‘cCsSiIf’, corresponding to type T = int8_t, uint8_t, int16_t, uint16_t, int32_t, uint32_t and float, respectively.
The dictionary can be accessed via the functions seqan3::sam_tag_dictionary::get() and seqan3::sam_tag_dictionary::set(). In each case, the SAM tag you wish to query must be given as a template argument to these functions.
Example:
Unknown Tag Example:
As mentioned before, you can either overload the type trait seqan3::sam_tag_type for the tag "XZ" or learn more about std::variant.
Uses std::map::at() for access and throws when the key is unknown.
When we code a program, we need to store certain values for later use. Such values need to be stored in memory locations. Even though each memory location has its own address, it is easier to identify it by a name than by its address. Hence we use variables - named memory locations to store these values. These variables can be used to get values from the user, in various calculations, or for displaying results or messages. But we cannot store all types of data in all variables. If we define the type of data that each variable can store, it makes for more systematic programming in C. That means it ensures systematic usage of the variables in the program and avoids confusion and mishandling of data.
Similarly, the C language revolves around functions. Even though functions are meant for performing certain tasks, they have result values which need to be returned to the calling function. This also needs a memory location, which is named with the function name. But a function cannot return just any kind of value. Like variables, if we predefine the type of data that it returns, it makes the program more logical.
All this is done by using datatypes in C. Datatypes define variables and functions along with the range of data stored, the type of data stored, and how many bytes of memory are occupied. Variables are declared with their respective datatypes at the beginning of the program/function, before they are used. These datatypes are reserved keywords in C, like int, float, double, char, etc.
A variable is declared using its datatype as below :
datatype variable_name;

int intNum1;        // variable with integer datatype, defines the variable
float flNum = 3.14; // variable with real number, defines and initializes the variable
char chOption;      // chOption is of character type
When we declare a variable like above inside any function, it defines the variable. If we give the initial value to the variable while declaring them, then it both defines and initializes the variable. We can even declare, define and initialize the variables at different steps too. The keyword ‘extern’ is used to declare the variable in this case and it allows defining those variables anywhere in the program – that means in any of the function in the program.
#include <stdio.h>

extern float marks1, marks2; // declare float variables

void main()
{
    float marks1, marks2; // define the same float variables, since they are actually used here
    marks1 = 67.5; // initialize the variable
    marks2 = 88;

    printf("Marks in Subject 1 is: %f\n", marks1); // display the variable value
    printf("Marks in Subject 2 is: %f\n", marks2);
}
There are different types of datatypes.
Primitive / Basic/ Fundamental Datatype
It contains very basic types of datatypes used to define the variables and functions. This datatype is basically used to declare numbers, and characters.
Character Datatypes
This datatype is used to declare character variables. It can hold only character values, and each character variable can hold only one character at a time. This is because this datatype occupies only one byte of memory, which means it can store values from -128 to 127. It can hold a signed or an unsigned character value.
char chOption;          // chOption is of character type
unsigned char chOption; // chOption is of character type, but unsigned
Integer Datatypes
This datatype declares the variable as an integer. It tells the compiler that a variable declared as an integer can contain only whole numbers; it cannot hold any fractional numbers. The value can be either positive or negative. It occupies 2 bytes (on older systems) or 4 bytes of memory. That means it can store values from -2^31 to 2^31 - 1 when the size of int is 4 bytes, i.e., the values -2^31, -2^31 + 1, -2^31 + 2, ..., -3, -2, -1, 0, 1, 2, 3, ..., 2^31 - 2, 2^31 - 1.
It is declared as: int intNum1; // variable with integer datatype
An integer datatype can be signed or unsigned. Signed datatypes are normally referred to simply as int. For unsigned datatypes, the 'unsigned' keyword is prepended to int. An unsigned integer is also 2 bytes or 4 bytes in size depending on the system, but an unsigned 4-byte int holds values from 0 to 2^32 - 1.
int intNum1; // this is a signed integer variable- can be positive or negative
unsigned int intNum2; // this is unsigned integer variable – can contain only positive values
An integer datatype can belong to any of 3 storage sizes - short int, int, and long int. All of these can be signed or unsigned. The short int type is used to declare a smaller range of numbers and occupies only 2 bytes of space. The int type uses 4 bytes of space and hence can hold a somewhat bigger range of values. The long int type is used to store an even bigger range of values.
Floating-Point Datatypes
These datatypes are used to store real numbers as well as exponential numbers. A float occupies 4 bytes of memory and can store values from 3.4e-38 to 3.4e+38. If we need to store an even larger range of floating-point numbers, we can use double, which occupies 8 bytes of memory, or long double, which has 10 bytes of memory. Float and double variables are almost the same except for their sizes and precision. A float variable is 4 bytes and has only about 6 digits of precision / decimal places, whereas a double is 8 bytes and has about 14 digits of precision.
float flAvg;
double dbl_fraction_number;
long double lgdbl_fractNum;
Void Datatype
This datatype does not contain any value. It is mainly used to declare functions that do not return any data values, or to indicate that function does not accept any arguments or to hold the address for a pointer variable. Its use on variable is very rare.
When a function that takes no arguments or returns no value needs to be declared, we use the void datatype. It indicates to the compiler that the function does not accept or return any value.
void fnDisplayName();
void fnGetAddress();
int fn_FindSum(void);
When we use pointers, one may not be sure about the datatype of the pointed-to data at the time of declaration. But the memory location for those pointers needs to be reserved before the program uses them. In such cases we declare the pointer as void and allocate memory. Later in the code we typecast the pointer to the required datatype. (For more details, refer to the topic pointers.)
void *ptr;
ptr = &intVar1;
void *ptr;
ptr = malloc (sizeof(int) * 10);
Non-primitive/ Derived/ Structured Datatype
Derived datatypes are datatypes that are built from primitive datatypes. These datatypes declare a variable that contains a set of similar or different datatype values bound under one name. Hence these types of datatypes are called derived datatypes. There are mainly 4 types of derived datatypes.
Arrays
An array is a named variable that contains a set of values of the same datatype. That means, using a single variable name, we can store multiple values. This is made possible by the use of indexes on the variable name. These variables can be of any primitive type.
For example,
int intNumbers [10]; // it stores 10 different integer values in intNumbers variable
unsigned int intVar [10]; // it stores 10 different unsigned integer values
float flReal [5]; // it stores 5 different real values in flReal variable
char chNames [20]; //it holds 20 different characters
Each value in these arrays is accessed by using indexes. For example, the 5th element in the array intNumbers is accessed as intNumbers[4]. Here the index starts from zero; hence the 5th element is referred to by index 4.
The size of array is equal to size of its datatype multiplied number of elements in it. In above example,
Size of intNumbers = sizeof(int) * 10 = 4 * 10 = 40 bytes.
Size of intVar = sizeof(unsigned int) * 10 = 4 * 10 = 40 bytes.
Size of flReal = sizeof(float) * 5 = 4 * 5 = 20 bytes.
Size of chNames = sizeof(char) * 20 = 1 * 20 = 20 bytes.
Structures
Structures are used to hold a set of similar or dissimilar variables in it. It is useful when we want to store the related information under one name.
For example, student details of a particular student can be stored in structure called student like below :
struct Student {
    int intStdId;
    char chrName[15];
    char chrAddress[25];
    int Age;
    float flAvgMarks;
    char chrGrade;
};
Here we can note that structure student has different types of variables. All these variables are related to student, and are combined into one common variable name called Student. Unlike arrays, here we can address each elements of the structure by its individual names. It can even have primitive type of variables, or derived variables – arrays, structures, unions and even pointers within it.
Here size of structure is sum of size of individual elements. In Student structure above,
Size of structure Student = sizeof(intStdId) + sizeof(chrName) + sizeof(chrAddress) + sizeof(Age) + sizeof(flAvgMarks) + sizeof(chrGrade)

= sizeof(int) + (15 * sizeof(char)) + (25 * sizeof(char)) + sizeof(int) + sizeof(float) + sizeof(char)

= 4 bytes + (15 * 1 byte) + (25 * 1 byte) + 4 bytes + 4 bytes + 1 byte

= 33 bytes.
Union
This is another datatype in C, which is similar to a structure. It is declared and accessed in the same way as a structure, but the keyword union is used to declare it.
union Student {
    int intStdId;
    char chrName[15];
    char chrAddress[25];
    int Age;
    float flAvgMarks;
    char chrGrade;
};
The main difference between a structure and a union is in its memory allocation. In a structure, the total memory allocated is the sum of the memory allocated for its individual elements. In a union, it is the memory size of the element that requires the most memory. In the Student union above, its size is the size of chrAddress, as that member is the largest.
Pointers
Pointers are special variables used to store the address of another variable. By using pointers, the program can access the memory allocated to another variable. This is an advantage when accessing arrays, passing and returning multiple values to and from functions, handling strings, and handling different data structures like stacks, linked lists, binary trees, B+ trees, etc. A pointer is declared in the same way as any other primitive variable, but a '*' is added before the variable name to indicate that it is a pointer. The compiler will then understand that it is a pointer and that it needs to be treated differently from other variables.
int *intPtr;
float *flPtr;
int *intArrPtr[10];
char *chrName;
char *chrMonthPtr[12];
Data structures
Data structures like stacks, queues, linked lists, etc. are special types of variables which use one or more primitive datatypes. Usually these are created using structure datatypes, but they expand and shrink as data is added and removed. Hence these are also considered another kind of derived datatype.
User-Defined Datatype
Sometimes declaring variables using an existing primitive or derived datatype does not give a meaningful name or convey the purpose of the variable, which can be confusing. Sometimes the user/developer is not actually interested in the real datatype, but rather in its meaning or purpose, and would like to create the same category of variables again and again.
For example, suppose we want variables to store the marks of students. Marks can be floating-point numbers. Using our primitive datatypes, we would declare the variables as below:
float flMarks1, flMarks2;
It indicates to the compiler that they are variables of type float. Since we have followed the naming convention, by seeing the variable name we can understand that it contains marks and is of float type. But imagine that we are not interested in its type. In addition, we would like marks variables to be float throughout the program - in all the functions. If the program has multiple functions, there is a possibility that marks variables are declared with different datatypes in different functions. This may create bugs while assigning values to or returning values from functions. Hence, if we create our own datatype - marks - for creating different marks variables, then all functions and variables will be in sync.
That means we rename the datatype float as marks. This is done by using typedef in C.
typedef float marks; // redefines float as marks
Now marks can be used to declare any variable as float. But to maintain the purpose of such declaration, all the marks variables are now declared as marks.
marks sub1_marks, sub2_marks;
Look at the example program below to understand how the datatype works across functions. The marks type is defined outside the main function so that it can be used in all functions; marks thus acts as a global datatype for the program. The float type is no longer used anywhere in the program to declare a marks variable.
#include <stdio.h>

typedef float marks; // redefines float as marks

void fnTotal(marks m1, marks m2)
{
    marks total_marks;
    total_marks = m1 + m2;
    printf("Total Marks is: %f\n", total_marks);
}

void main()
{
    marks sub1_marks, sub2_marks;
    sub1_marks = 67.5;
    sub2_marks = 88;
    printf("Marks in Subject 1 is: %f\n", sub1_marks);
    printf("Marks in Subject 2 is: %f\n", sub2_marks);
    fnTotal(sub1_marks, sub2_marks); // calling the function
}
Enumerated Datatypes
Apart from C defined datatypes, C gives the flexibility for the user / developer to define their own datatypes. In the traditional way of declaring a variable, when we declare variable as int, float, array etc we can store only those type of data in those variables. When we declare structure or union, though it allows different types of data within it, it does not allow the flexibility to the users to have their own set of data/ values.
Suppose we have to have a datatype to define months in a year. We can declare a string array of size 12. But it does not tell what values it can have. Either we have to enter 12 months as input or we need to hard code the values for each index.
char *chrMonths[12] = {"January", "February", "March", ..., "December"};
OR
char *chrMonths[12];
chrMonths[0] = "January";
chrMonths[1] = "February";
chrMonths[2] = "March";
...
chrMonths[11] = "December";
Here we would need to define a pointer array of character type or a two-dimensional character array. Instead of making it so complex with arrays, pointers, and the character type, if we could define the same thing like any other datatype, it would be easier for anyone to understand. Hence C provides another datatype, called the enumerated datatype - enum. It can be considered a user-defined datatype too. It is declared and defined as shown below:
enum enum_datatype { value1, value2, value3, valueN };
Here enum_datatype is an enumerated datatype name, and it can have the values value1, value2, ... valueN. Now we can use enum_datatype to declare other variables, which can take only those values that are defined for enum_datatype.
enum enum_datatype ed1, ed2, ed3;
For example, consider below enumerated datatype enumMonths.
enum enumMonths{January, February, March, .., December };
enum enumMonths monthJan, monthFeb, monthMar, monthDec;
monthJan = January;
monthFeb = February;
monthDec = December;
Here enumMonths is used to define the months in a year. When we define an enumerated datatype, we define its values too. Later we can create variables of the new datatype enumMonths, such as monthJan, monthFeb, monthMar, monthDec, etc. These variables can have any one of the values listed when the datatype was created. We can note that we have not assigned January, February, etc. to the variables using quotes. Values are assigned directly from the enumerated list as if they were also variables. What actually happens is that the compiler treats the predefined January, February, March, etc. as named integer constants - indexes into the enumerated list 0, 1, ... 11. When we declare a variable as enumMonths, it can hold any of the values from the predefined list, and the stored value is the index of the chosen name.
#include <stdio.h>

void main()
{
    enum enumMonths { January, February, March, December }; // defining the enumerated datatype

    enum enumMonths monthJan, monthFeb, monthMar, monthDec; // declaring variables of type enumMonths

    // Assigning the values to the variables
    monthJan = January;
    monthFeb = February;
    monthDec = December;

    // Displaying the values
    printf("Value of monthJan is %d\n", monthJan);
    printf("Value of monthFeb is %d\n", monthFeb);
    printf("Value of monthDec is %d\n\n", monthDec);
    printf("Value of February is %d\n", February);
    printf("Value of December is %d\n", December);
}
Here we can notice that it displays the index values rather than displaying January, February, etc. This way of declaring a datatype is useful when we know the number and values of the data in advance.
Microsoft Enterprise Services
For information on Enterprise Services, see
Credits
Greg Molnar - USMCS MidWest
Keith Olinger - USMCS MidWest
David Trulli - Program Manager, Microsoft Enterprise Customer Solutions
Markus Vilcinskas - Program Manager, Microsoft Enterprise Services
Introduction
Windows 2000 Component Overview
Description of the Windows 2000 Startup and Logon Process
User Logon
Conclusion
Appendix A: Test Environment
Appendix B: TCP/IP Ports Used in the Authentication Process
The client startup and logon process is the process the Microsoft Windows operating system uses to validate a computer or user in the Windows networking environment. Developing an understanding of the client startup and user logon process is fundamental to understanding Windows 2000 networking. This white paper will provide the reader with detailed information on this process, including:
How clients connect to the network with Windows 2000 Dynamic Host Configuration Protocol (DHCP), Automatic Private Internet Protocol (IP) addressing, and static addressing.
How Windows 2000 clients use the Dynamic Domain Naming System (DDNS) support in Windows 2000 to locate domain controllers and other servers in the infrastructure needed during startup and logon. In addition we will show how Windows 2000 clients register their names in DDNS.
How the Lightweight Directory Access Protocol (LDAP) is used during startup and logon to search the Microsoft Active Directory for required information.
How the Kerberos security protocol is used for authentication.
How MS Remote Procedure Calls (MSRPC) are used.
How Server Message Block (SMB) is used to transfer group policy information and other data during the startup and logon process.
In addition to discussing the Windows 2000 core components used by the startup and logon process, the paper shows what happens and how much network traffic is generated during each part of the process. The discussion begins with an overview of the Windows 2000 components involved in the startup and logon process. We will then examine the Client Startup process and discuss the User logon process.
Throughout the discussion sample information from network monitor traces will be used to illustrate what is happening at that particular point. We have also made an effort to provide references whenever possible to external sources of information where additional information can be found. The most common reference materials cited include:
Internet Engineering Task Force (IETF) Requests for Comments (RFCs)
Microsoft Windows 2000 Resource Kits
Microsoft Support Knowledge Base articles
Microsoft Notes from the Field books
Various Web sites
Reading and understanding this white paper will allow systems architects and administrators to better engineer and support Windows 2000 networks. It should help network designers determine where to place key components to ensure reliable startup and logon in a Windows 2000 network. Support professionals will be able to use this paper to resolve problems by comparing the baseline information provided here to their environments.
The target groups for this discussion are systems administrators and network architects who are planning, implementing, or managing Windows 2000 networks. It is expected that this group will have an understanding of the following topics:
Microsoft Windows NT or 2000 networking concepts
Basic knowledge of the TCP/IP protocol
Some exposure to examining network traces
The Windows 2000 Resource Kit, Microsoft TechNet, and the Notes from the Field series offer more detailed discussions of core Windows 2000 services we will discuss as part of the client startup and logon process. It would be worthwhile to have access to these resources as supplementary resources while reading this paper.
In order to understand the Windows 2000 client startup and logon process, a discussion of the new or updated protocols and services that play a role in this process is needed. This section provides a brief overview of each of the following protocols and services involved:
Dynamic Host Configuration Protocol (DHCP)
Automatic Private IP Addressing
Domain Naming System (DNS)
Kerberos
Lightweight Directory Access Protocol (LDAP)
Server Message Block (SMB)
Microsoft Remote Procedure Call (MSRPC)
Time Service
More in-depth information on each protocol or service can be found using the references provides in each section.
The original objective of the Dynamic Host Configuration Protocol was to provide each DHCP client with a valid Transmission Control Protocol/Internet Protocol (TCP/IP) configuration.
The process in general consists of eight messages:
DHCPDiscover. A DHCP client uses this message in order to detect at least one DHCP server.
DHCPOffer. Each DHCP server that receives the request from a client checks its scopes for a valid configuration set and offers this to the DHCP client.
DHCPRequest. The DHCP client requests the first offer it receives from the DHCP server.
DHCPAcknowledge. The selected DHCP server uses this message in order to confirm the lease with the DHCP client.
DHCPNack. The DHCP server uses this message in order to inform a client that the requested TCP/IP configuration is invalid.
DHCPDecline. The DHCP client uses this message in order to inform the server that an offered TCP/IP configuration is invalid.
DHCPRelease. The DHCP client uses this message to inform the server that an assigned configuration is no longer in use by the client.
DHCPInform. This is a new message defined in Request for Comments (RFC) 2131. If a client has already obtained an Internet Protocol (IP) address (for example, manual configuration), it may use this message to retrieve additional configuration parameters that are related to the IP address from a DHCP server.
The role of the DHCP server was extended with the availability of Dynamic DNS. The DHCP server can now be used for the dynamic registration of the client's IP address and hostname. In the default configuration, the DHCP server registers the IP address of the client with the DNS server as a Pointer Resource Record (PTR RR).
For more information about DHCP, see:
RFC 1541
RFC 2131
Dynamic Host Configuration Protocol, TCP/IP Core Networking Guide, Windows 2000 Server Resource Kit
Automatic Private IP Addressing
Windows 2000 implements Automatic Private IP Addressing (APIPA), which will provide an IP address to a DHCP client even if there is no DHCP server available. APIPA is designed for computers on single-subnet networks that do not include a DHCP server. APIPA automatically assigns an IP address from its reserved range, 169.254.0.1 through 169.254.255.254. What this means is that when a client fails to communicate with a local DHCP server at startup to renew its lease, it will use an APIPA-assigned address until it can communicate with a DHCP server. This is different behavior from Windows NT 4.0, where the client would continue to use a lease that had not expired even if it could no longer contact the DHCP server.
APIPA can be disabled by using the Registry Editor to create the following registry key:
HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\Tcpip\Parameters\Interfaces\<adapter name>
where <adapter name> is the name of the DHCP-configured adapter on which you want to disable APIPA.
This change requires the computer to be restarted to take effect. It is documented in article 244268.
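Whether an address falls in the APIPA range can be checked mechanically. The following is a minimal Python sketch (not part of Windows) using the standard ipaddress module:

```python
import ipaddress

# The reserved autoconfiguration (link-local) block described above.
APIPA_NET = ipaddress.ip_network("169.254.0.0/16")

def is_apipa(addr: str) -> bool:
    """Return True if addr lies in the 169.254.0.0/16 autoconfiguration range."""
    return ipaddress.ip_address(addr) in APIPA_NET

print(is_apipa("169.254.10.7"))   # self-assigned: no DHCP server was reached
print(is_apipa("10.0.0.100"))     # a normal DHCP- or statically-assigned address
```

Seeing a 169.254.x.y address on a client is therefore a quick indicator that the DHCP exchange failed at startup.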
DNS
The primary mechanism for service location and name resolution in Windows 2000 is the Domain Name System (DNS). Windows 2000 includes a DNS service that is tightly integrated with the operating system, providing integration with the Active Directory and support for dynamic updates, but any BIND 8.2.2-compliant DNS server can be used with Windows 2000. DNS support in Windows 2000 is intended as a standards-based replacement for the NetBIOS-based Windows Internet Naming Service (WINS), the previous service that provided dynamic name registration for Windows clients. Both services allow dynamic updating of system names into their databases, but WINS is a flat name space and does not scale as well as DNS. By moving to DNS, Windows 2000 not only conforms to Internet standards but also provides a hierarchical naming system that scales to meet the demands of large networks.
The Windows 2000 startup and logon process uses DNS to locate services such as LDAP and Kerberos, to retrieve the address of at least one domain controller, and to register the computer's host name and IP address in the DNS zone database.
The Windows 2000 DNS system and its requirements are covered in great detail in the Windows 2000 Resource Kit and the Notes from the Field book Building Enterprise Active Directory Services.
For more information, see:
Windows 2000 Resource Kit: Name Resolution in Active Directory, Distributed Systems Guide
Windows 2000 Resource Kit: Windows 2000 DNS, TCP/IP Core Networking Guide
RFC: 1034, 1035
Kerberos
Kerberos V5 is the default authentication protocol in Windows 2000. Kerberos originated as part of Project Athena at MIT in the late 1980s, and version 5 is described in the IETF RFC 1510.
Kerberos V5 is an authentication protocol. It allows mutual authentication between computers. In other words, it allows computers to identify each other. This is of course, the basis of all security systems. Unless a server is absolutely sure you are who you say you are, how can that server reliably control access to its resources? Once the server has positive identification of who you are, it can then make the determination about whether you are authorized to access the resource.
Kerberos, per se, does not authorize the user to access any resource, although the Microsoft implementation of Kerberos V5 does allow secure delivery of user credentials.
There are six primary Kerberos messages. The six messages are really three types of actions, each of which has a request from a client and a response from the Key Distribution Center (KDC). The first action occurs when the client types in a password. The Kerberos client on the workstation sends an "AS" request to the Authentication Service on the KDC asking the Authentication Service to return a ticket to validate the user is who they say they are. The Authentication Service verifies the client's credentials and sends back a Ticket Granting Ticket (TGT).
The second action is when the client requests access to a service or a resource by sending a TGS request to the Ticket Granting Service (TGS) on the KDC. The Ticket Granting Service returns an individual ticket for the requested service that the client can submit to whatever server holds the service or resource the clients wants.
The third action is when the Kerberos client actually submits the service ticket to the server and requests access to the service or resource. These are the AP messages. In Microsoft's implementation, the access security identifiers (SIDs) are contained in the PAC that is part of the ticket sent to the server. This third action need not have a response by the server unless the client has specifically asked for mutual authentication. If the client has marked this exchange for mutual authentication, the server returns a message to the client that includes an authenticator timestamp. For a typical domain logon, all three of these actions occur before the user is allowed access to the workstation.
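The three exchanges above can be sketched as a toy model. This is a minimal Python sketch for illustration only; the function and field names are invented, and real Kerberos messages are ASN.1 structures with cryptographic protection:

```python
# Toy model of the three Kerberos exchange types: AS, TGS, and AP.
# All names here are illustrative; nothing below is a real Kerberos API.

def as_exchange(kdc, user, password):
    """AS exchange: prove identity to the KDC, receive a Ticket Granting Ticket."""
    if kdc["users"].get(user) != password:
        raise PermissionError("pre-authentication failed")
    return {"type": "TGT", "user": user}

def tgs_exchange(kdc, tgt, service):
    """TGS exchange: present the TGT, receive a ticket for one service."""
    assert tgt["type"] == "TGT"
    return {"type": "service-ticket", "user": tgt["user"], "service": service}

def ap_exchange(server, ticket):
    """AP exchange: present the service ticket to the target server."""
    return ticket["service"] == server

kdc = {"users": {"alice": "s3cret"}}
tgt = as_exchange(kdc, "alice", "s3cret")          # happens at password entry
ticket = tgs_exchange(kdc, tgt, "cifs/fileserver")  # happens at resource access
print(ap_exchange("cifs/fileserver", ticket))       # server validates the ticket
```

Note the shape of the flow: one password check yields a TGT, and the TGT is then traded for per-service tickets without resending the password.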
For more information about Kerberos on Windows 2000, see the Windows 2000 Server Resource Kit Distributed Systems Guide.
LDAP
Lightweight Directory Access Protocol (LDAP) is a directory access protocol designed to allow clients to query, create, update, and delete information stored in a directory. It was initially used as a front end to X.500 directories but can also be used with stand-alone directory servers. Windows 2000 supports both LDAP version 2 and version 3.
LDAP Process
The general model adopted by LDAP is of clients performing protocol operations against servers. A client transmits a request describing the operation to be performed to a server. The server is then responsible for performing the necessary operations in the directory. Upon completion of the operations, the server returns a response containing any results or errors to the requesting client.
The LDAP information model is based on the entry, which contains information about some object (for example, a person). Entries are composed of attributes, which have a type and one or more values. Each attribute has a syntax that determines what values are allowed in the attribute and how those values behave during directory operations. Examples of attribute syntaxes are for IA5 (ASCII) strings, URLs, and public keys.
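The entry/attribute model above maps naturally onto a dictionary of lists. A minimal Python sketch (the entry contents are invented for illustration):

```python
# An LDAP entry is a set of attributes; each attribute has a type and
# one or more values. A dictionary of value lists models this directly.
entry = {
    "dn": ["cn=Alice,dc=example,dc=local"],  # distinguished name
    "cn": ["Alice"],
    "mail": ["alice@example.local", "a.smith@example.local"],  # multi-valued
}

def get_values(entry, attr_type):
    """Return all values of an attribute, or an empty list if absent."""
    return entry.get(attr_type, [])

print(get_values(entry, "mail"))       # both values of a multi-valued attribute
print(get_values(entry, "telephone"))  # [] -- attribute not present on this entry
```

In a real directory each attribute type additionally carries a syntax (IA5 string, URL, public key, and so on) that constrains the values; the sketch omits that.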
LDAP Features
Windows 2000 supports the LDAPv3 protocol as defined in RFC 2251. Key aspects of the protocol are:
All protocol elements of LDAPv2 (RFC 1777) are supported.
The protocol is carried directly over TCP or other transport, bypassing much of the session/presentation overhead of X.500 DAP.
Most protocol data elements can be encoded as ordinary strings (for example, distinguished names).
Referrals to other servers may be returned (described in the next section).
Simple Authentication and Security Layer (SASL) mechanisms may be used with LDAP to provide association security services.
Attribute values and distinguished names have been internationalized through the use of the International Standards Organization (ISO) 10646 character set.
The protocol can be extended to support new operations, and controls may be used to extend existing operations.
LDAP Referral
RootDSE
The rootDSE represents the top of the logical namespace and therefore the top of the LDAP search tree. The attributes of the rootDSE identify both the directory partitions (the domain, schema, and configuration directory partitions) that are specific to one domain controller and the forest root domain directory partition.
The rootDSE publishes information about the LDAP server, including what LDAP version it supports, supported SASL mechanisms, and supported controls, as well as the distinguished name for its subschema subentry.
Clients connect to the rootDSE at the start of an LDAP operation to read this information.
LDAP over TCP
LDAP message PDUs (Protocol Data Units) are mapped directly onto the TCP byte stream. RFC 2251 recommends that server implementations provide a protocol listener on the assigned port, 389. The Active Directory uses port 389 as the default port for domain controller communications. In addition, the Active Directory supports port 636 for LDAP Secure Sockets Layer (SSL) communications. A Windows 2000 domain controller that is a Global Catalog (GC) server will listen on port 3268 for LDAP communications and port 3269 for LDAP SSL communications.
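The port assignments above can be summarized in a small lookup. This is a minimal Python sketch of the mapping only; it does not open any connections:

```python
# Default Active Directory LDAP ports, as listed above.
LDAP_PORTS = {
    ("ldap", False): 389,   # plain LDAP to a domain controller
    ("ldap", True): 636,    # LDAP over SSL
    ("gc", False): 3268,    # Global Catalog
    ("gc", True): 3269,     # Global Catalog over SSL
}

def ldap_port(role="ldap", ssl=False):
    """Pick the port a client should use for the given role and SSL setting."""
    return LDAP_PORTS[(role, ssl)]

print(ldap_port())                # ordinary domain controller traffic
print(ldap_port("gc", ssl=True))  # Global Catalog search over SSL
```

The distinction matters in practice: a query on 389/636 sees only the local domain partition, while 3268/3269 searches the Global Catalog across the forest.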
LDAP During the Startup and Logon Process
LDAP is used extensively during the Windows 2000 startup and logon process. The client uses LDAP during the domain controller locator process to get the domain controller it will use. LDAP is also used to find the applicable group policy objects for the computer or user. Finally, LDAP is used to locate the appropriate certificates for the client during certificate auto enrollment.
For more information, see:
Windows 2000 Resource Kit
Windows 2000 Active Directory Technical Reference
RFCs 1777, 1778, 1779, 1959, 1960, 1823
RFCs 2251-2256
SMB
The Server Message Block protocol is the resource-sharing protocol used by MS-Net, LAN Manager, and Windows Networking. In addition, there are SMB solutions for OS/2, Netware, VMS, and Unix from vendors such as AT&T, HP, SCO, and, via Samba, over 33 others. The SMB protocol is used in a client-server environment to access files, printers, mail slots, named pipes, and application programming interfaces (APIs). It was jointly developed by Microsoft, IBM, and Intel in the mid-1980s. As illustrated by the following chart, SMB will run over multiple network protocols:
In SMB communication, a client connects to the server by negotiating a dialect. Once the client has established a connection, it can then send commands (SMBs) to the server that allow the client to access resources.
The SMB commands can be generally categorized into four parts:
Session Control
File Commands
Print Commands
Message Commands
SMB security has evolved as the platforms that use it have evolved. The base SMB protocol model defines two levels of security:
Share level. Protection is applied at the share level on a server. Each share can have a password, and a client needs only that password to access all files under that share. This was the first security model that SMB had and is the only security model available in the Core and CorePlus protocols. Windows for Workgroups vserver.exe implements share level security by default, as does Windows 95.
User Level. Protection is applied to individual files in each share and is based on user access rights. Each user (client) must log in to the server and be authenticated by the server. When it is authenticated, the client is given a user ID, which it must present on all subsequent accesses to the server. This model has been available since LAN Manager 1.0.
The SMB protocol has gone through many revisions over time. The most current version of SMB implemented in Windows 2000 is the Common Internet File System (CIFS), which is a slight variant of the NT LM 0.12 version used previously. The next section goes into the details of this modern implementation.
Windows 2000 supports SMB via the Common Internet File System (CIFS) protocol, which is currently a draft Internet standard. CIFS was introduced in Service Pack 3 for Windows NT 4.0 and is the native file sharing protocol for Windows 2000. CIFS is a variant of the NT LM 0.12 protocol.
Windows 2000 SMB/CIFS Protocol Implementation
CIFS defines a series of commands used to pass information between networked computers. The redirector packages requests meant for remote computers in a CIFS structure. CIFS can be sent over a network to remote devices. The redirector also uses CIFS to make requests to the protocol stack of the local computer. CIFS messages can be broadly classified as follows:
Connection establishment messages consist of commands that start and end a redirector connection to a shared resource at the server.
Namespace and File Manipulation messages are used by the redirector to gain access to files at the server and to read and write them.
Printer messages are used by the redirector to send data to a print queue at a server and to get status information about the print queue.
Miscellaneous messages are used by the redirector to write to mail slots and named pipes.
In Windows 2000, CIFS supports distributed replicated virtual volumes (such as Distributed File System [DFS]), file and record locking, file change notification, read-ahead and write-behind operations. CIFS communications are established via standard SMB session and name resolution mechanisms.
The SMB/CIFS Process in Windows 2000
When there is a request to open a shared file, the I/O Manager calls the redirector, which in turn chooses the appropriate transport protocol. For NetBIOS requests, NetBIOS is encapsulated in the IP protocol and transported over the network to the appropriate server. The request is passed up to the server, which sends data back to satisfy the request. It is possible to disable either or both of these services in the registry.
SMB Utilization During the Startup and Logon Process
The Windows 2000 client startup and logon process uses SMB to load Group Policy objects applicable to that workstation or user. The basic SMB operation that is observed during the startup and logon process is SMB dialect negotiation. This is an exchange between the client and the server to determine which SMB dialect they will be able to use. SMB will also be used to make a DFS referral for the share that is being accessed. The client loading Group Policy objects will create the majority of SMB traffic during the startup and logon process.
Windows 2000 TCP/IP Protocols and Services Technical Reference
MSRPC
The simplest way to describe an RPC is a request by one computer to use the processing resources on another. The RPC protocol permits one process to request the execution of instructions by another process located on another computer in a network.
The RPC process consists of:
Client application. Requests the remote execution.
Client stub. Translates calls into/from standard network data representation (NDR) format.
Client RPC Runtime Library. Converts NDR into network messages.
Network Transport. Handles the network communications.
Server RPC Runtime Library. Converts network messages into NDR.
Server stub. Translates calls into/from standard network data representation (NDR) format.
Server application. Executes the requested instructions.
The RPC procedures are uniquely identified by an interface number (UUID), an operation number (opnum), and a version number. The interface number identifies a group of related remote procedures. An example for an interface is net logon, which has the UUID 12345678-1234-ABCD-EF00-01234567CFFB.
An example for an RPC call is:
MSRPC: c/o RPC Bind: UUID 12345678-1234-ABCD-EF00-01234567CFFB call 0x1
assoc grp 0x0 xmit 0x16D0 recv 0x16D0
MSRPC: Version = 5 (0x5)
MSRPC: Version (Minor) = 0 (0x0)
MSRPC: Packet Type = Bind
+ MSRPC: Flags 1 = 3 (0x3)
MSRPC: Packed Data Representation
MSRPC: Fragment Length = 72 (0x48)
MSRPC: Authentication Length = 0 (0x0)
MSRPC: Call Identifier = 1 (0x1)
MSRPC: Max Trans Frag Size = 5840 (0x16D0)
MSRPC: Max Recv Frag Size = 5840 (0x16D0)
MSRPC: Assoc Group Identifier = 0 (0x0)
+ MSRPC: Presentation Context List
RPCs are independent of the low-level transport protocol. Microsoft RPC (MSRPC) can be layered on top of several different transport protocols, such as TCP/IP, Internetwork Packet Exchange/Sequenced Packet Exchange (IPX/SPX), or NetBIOS Enhanced User Interface (NetBEUI).
Most of the RPC interfaces use dynamic ports for network communication. In this case it is necessary to involve a specific interface, called the End Point Mapper. The End Point Mapper always listens on port 135 for TCP/IP, and the End Point Mapper's UUID is E1AF8308-5D1F-11C9-91A4-08002B14A0FA.
The client has to bind to an interface first before it can call its procedures. If the bind process was successful, it can send a request to the End Point Mapper, in which it includes the UUID of the target interface. The End Point Mapper sends back the port number the client can use for the communication.
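The resolution step can be sketched as a toy registry. This minimal Python sketch is illustrative only; the dynamic port number 1026 is invented, and only the Netlogon UUID and port 135 come from the text above:

```python
# Toy End Point Mapper: interfaces register their dynamic ports under
# their UUID; the mapper itself always listens on port 135.
EPM_PORT = 135
NETLOGON_UUID = "12345678-1234-ABCD-EF00-01234567CFFB"

endpoint_map = {}  # UUID -> dynamic port, filled when a service starts

def register(uuid, port):
    """A service announces which dynamic port it is listening on."""
    endpoint_map[uuid] = port

def resolve(uuid):
    """What a client asks the mapper on port 135: which port serves this UUID?"""
    return endpoint_map[uuid]

register(NETLOGON_UUID, 1026)   # 1026 is an illustrative dynamic port
print(resolve(NETLOGON_UUID))   # the client now binds to this port directly
```

The two-step pattern (well-known port 135 first, then the resolved dynamic port) is why firewalls between clients and domain controllers must allow more than just port 135.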
The following table shows the communication sequence for this process.

Frame  Source  Destination  Protocol  Description
1      Client  Server       MSRPC     c/o RPC Bind: UUID E1AF8308-5D1F-11C9-91A4-08002B14A0FA
2      Server  Client       MSRPC     c/o RPC Bind Ack: call 0x1 assoc grp 0xC85D xmit 0x16D0 recv
3      Client  Server       MSRPC     c/o RPC Request: call 0x1 opnum 0x3 context 0x0 hint 0x84
4      Server  Client       MSRPC     c/o RPC Response: call 0x1 context 0x0 hint 0x80 cancels 0x0
It is also possible to encapsulate MSRPCs into SMB. In this case, the client and server are using a handle to a previously opened file in order to exchange data.
The Windows 2000 startup and logon process uses the Netlogon and the Directory Replication Service (DRS) interface. The Netlogon interface is used to establish the secure communications channel between the client and a domain controller in a domain. The Directory Replication Service is primarily used for communication between Domain Controllers and Global Catalog servers. It does, however, provide an interface used during the logon process. The DRS provides a method to convert names into a format that is useable by LDAP.
For more information, see:
"Active Directory Client Network Traffic," Notes from the Field: Building Enterprise Active Directory Services
"Analyzing Exchange RPC Traffic Over TCP/IP [159298]" on TechNet
Time Service
Windows 2000 includes the W32Time (Windows Time) time service that provides a mechanism for synchronizing system time between Windows 2000 clients in a domain. Time synchronization occurs during the computer startup process.
The following hierarchy is used by systems in the domain to perform time synchronization:
All client desktops nominate as their in-bound time partner their authenticating domain controller.
All member servers follow the same process as client desktops.
All domain controllers in a domain nominate the primary domain controller (PDC) Operations Masters as their in-bound time partner.
PDC Operations Masters follow the hierarchy of domains in the selection of their in-bound time partner.
Note: Operations Masters are described in Chapter 7 of the Distributed Systems Guide in the Windows 2000 Server Resource Kit.
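The partner-selection rules above can be sketched as a simple dispatch. This is a minimal Python sketch; the role names and parameters are invented labels for the four cases, not W32Time configuration values:

```python
# Sketch of the in-bound time partner rules listed above.
def inbound_time_partner(role, authenticating_dc=None, domain_pdc=None,
                         parent_domain_pdc=None):
    if role in ("client", "member-server"):
        # Desktops and member servers sync from their authenticating DC.
        return authenticating_dc
    if role == "domain-controller":
        # Ordinary DCs sync from the PDC Operations Master of their domain.
        return domain_pdc
    if role == "pdc":
        # A PDC follows the domain hierarchy; the forest-root PDC
        # has no in-bound partner unless one is configured manually.
        return parent_domain_pdc
    raise ValueError(f"unknown role: {role}")

print(inbound_time_partner("client", authenticating_dc="DC1"))
print(inbound_time_partner("domain-controller", domain_pdc="PDC1"))
```

The effect is a tree rooted at the forest-root PDC, which is why that machine is the natural place to configure an external authoritative time source.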
See article 216734 for a description of the time synchronization process and how to set up an authoritative time source.
RFC: 1769, 2030
Windows 2000 Domain Authentication Methods
Windows 2000 supports two different authentication methods for domain logons: Kerberos and NTLM. The use of Kerberos as the authentication protocol for Windows 2000 changes the default Windows authentication protocol from NTLM to a protocol that is based on Internet Standards.
The default authentication protocol in Windows 2000 is based on MIT's Kerberos version 5, outlined in RFC 1510. Using Kerberos for authentication fundamentally changes the way security information is exchanged between clients, servers, and domain controllers in the Windows network.
With Kerberos, the client requests what is called a session ticket from the KDC. The session ticket contains information that the server can use to validate that the client has been authenticated. The client then sends these tickets to authenticate with the resource it is trying to use. Kerberos only carries authentication information from the client to the resource it is trying to access; it does not provide authorization to access resources. It simply supplies the client's identity to the system, which then makes the authorization decision.
To provide legacy support, Windows 2000 continues to provide support for NTLM authentication. Windows 2000 will use NTLM for authentication when not in a Windows 2000 domain, as a member of a NT 4 domain or workgroup, or when there is a need to access older applications that require NTLM for authentication.
This discussion will focus on the Kerberos authentication protocol and how it is used during the startup and logon process. The startup and logon process using the NTLM protocol is unchanged from Windows NT 4.0 and is covered in detail in Notes from the Field series book Building Enterprise Active Directory Services.
Windows 2000 Resource Kit: Authentication; Distributed Systems Guide
When you think of logon in the Windows environment, you typically think of the process of a user pressing ctrl-alt-delete and entering his or her username and password (credentials) to gain access to a system. These credentials will give users access either to resources on the computer they are working on or if the system is part of a network, these credentials give access to resources on the network such as applications, files, or printers that the user has been authorized to access.
The process that allows users to access resources does not start when the user logs on to the system, but begins well before that when the system is started. In a Windows 2000 domain environment, the computer needs to establish itself as a valid member of a domain before users will be able to logon to that system and access other resources on the network.
The objective of the following sections is to describe the steps that are involved in the Windows 2000 startup and logon process. The NetBIOS over TCP/IP functionality is disabled on all computers to set the focus on network traffic that occurs in an environment that has only Windows 2000 computers.
However, it is possible to configure each computer to be compatible with Windows NT, Windows 95, and Windows 98 clients. A detailed description of the startup and logon traffic that is associated with Windows NT-based clients can be found in the "Active Directory Client Network Traffic" chapter of the Notes from the Field series book Building Enterprise Active Directory Services. The following graphic illustrates the configurations.
A computer that is a member of a Windows 2000 domain goes through a startup process that connects it to the domain it is a member of. This startup process allows services on the computer to interact with the network and, more importantly, is required for users to log on interactively. This process flow shows the computer startup process.
Connecting to a Network
The computer startup process begins when the computer connects to the network and loads the TCP/IP protocol. TCP/IP can be configured to use either static or dynamic configuration. Dynamic configuration means using the DHCP, which is a well-documented technology that is a core component of the Windows Server operating system. Static addressing means that TCP/IP configuration information has been manually configured on the computer. Typically, static addresses are used for resources that do not change very often, such as routers or servers. In the examples in this paper, the only systems that use static addresses are the servers.
The DHCP process generates the following frames on the network when the client connects to the network. The sequence of "Discover," "Offer," "Request," and "ACK" in the first four frames is the DHCP process in action. These four frames generate 1,342 bytes (about 335 bytes per DHCP frame) of network traffic, but this will vary depending upon the number of DHCP options specified. The Reverse ARPs (RARP) in frames 5 through 8 are performed by the client to ensure the address is not in use by another computer on the network. Each RARP frame creates about 60 bytes of network traffic, or around 240 bytes for the address check sequence.
Frame  Protocol  Description
1      DHCP      Discover (*BROADCAST)
2      DHCP      Offer
3      DHCP      Request
4      DHCP      ACK
5-7    ARP_RARP  ARP: Request, Target IP: 10.0.0.100
8      ARP_RARP  ARP: Reply, Target IP: 10.0.0.100
It is important to note that if a client already possesses a lease, then it will simply renew the lease with the DHCP server when it restarts. The renewal includes only the Request and Acknowledgement packets shown in the first two frames below. The client still performs the RARP process to ensure its address is not in use.
Domain Controller Detection
A client computer needs to get the latest configuration status during each startup phase. Therefore, it has to locate at least one controller in its domain.
In a Windows 2000 domain, each controller is also an LDAP server. In order to retrieve a list of available controllers, the client can query the DNS for SRV resource records with the name
_ldap._tcp.dc._msdcs.DnsDomainName
The following frames show an example of this.
DNS
0x1:Std Qry for _ldap._tcp.dc._msdcs.main.local. of type Srv Loc on class INET addr.
0x1:Std Qry Resp. for _ldap._tcp.dc._msdcs.main.local. of type Srv Loc on class INET addr
The Windows 2000 domain is an administrative boundary, which is independent from the structure of a given network. The computer in a given environment can be grouped into sites. A site in Windows 2000 is defined as a set of IP subnets connected by fast, reliable connectivity. As a rule of thumb, networks with LAN speed or better are considered fast networks for the purposes of defining a site. A domain can span multiple sites and multiple domains can cover a site.
The locator service of a client attempts to find the closest site during the startup process and stores the name of the site in the registry. If a client knows which site it belongs to, it can send a query for controllers in its site to the DNS server. The format of such a query is:
_ldap._tcp.Default-First-Site-Name._sites.dc._msdcs.DnsDomainName
0x1:Std Qry for _ldap._tcp.Default-First-Site-Name._sites.dc._msdcs.dcclab.local. of type Srv Loc on class INET addr.
0x1:Std Qry Resp. for _ldap._tcp.Default-First-Site-Name._sites.dc._msdcs.dcclab.local. of type Srv Loc on class INET addr.
The DNS query above shows the client looking for the LDAP Service in the "Default-First-Site-Name." Default-First-Site-Name is the default name given to a Windows 2000 domain site when it is created.
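The two query-name patterns shown above can be built mechanically. A minimal Python sketch (the function name is invented; the domain and site names match the examples in the text):

```python
# Build the DNS SRV names a Windows 2000 client queries to locate
# domain controllers, per the patterns shown above.
def dc_locator_srv(domain, site=None):
    """Forest-wide query when site is None; site-scoped query otherwise."""
    if site is None:
        return f"_ldap._tcp.dc._msdcs.{domain}"
    return f"_ldap._tcp.{site}._sites.dc._msdcs.{domain}"

print(dc_locator_srv("main.local"))
print(dc_locator_srv("dcclab.local", "Default-First-Site-Name"))
```

Once the client has learned and cached its site name, it prefers the site-scoped form, falling back to the forest-wide form when it has no site information.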
The domain controllers registered for a site can be viewed with the Microsoft Management Console DNS snap-in. The following diagram shows a snapshot of the snap-in. The expanded view shows the domain controllers that have registered their Kerberos and LDAP service for a given site.
If it is possible for the DNS server to locate the requested information, it sends back a list of all known domain controllers in the site.
DNS: Answer section: _ldap._tcp.Site2._sites.dc._msdcs.dcclab.local. of type Srv Loc
on class INET addr.(2 records present)
DNS: Resource Record: dcclab22.dcclab.local. of type Host Addr on class INET addr.
DNS: Resource Record: dcclab21.dcclab.local. of type Host Addr on class INET addr.
The client randomly picks one controller for the additional communication process, and it does not distinguish between local and remote subnets because it considers each member of its site to be a computer that is reasonably close to the client.
As already mentioned, it is possible to influence the controller selection by means of the site concept. After retrieving a domain controller, the client tries to determine whether that controller is the closest one by issuing LDAP queries.
Frame  Source  Destination  Protocol  Description
1      Client  Server       LDAP      ProtocolOp: SearchRequest (3)
2      Server  Client       LDAP      ProtocolOp: SearchResponse (4)
In the query, the client requires a match for attributes such as its:
DNS domain name
Host name
Domain globally unique identifier (GUID)
Domain security identifier (SID)
If the controller does have exactly this information in its Active Directory database, it passes back information about itself such as:
DomainControllerName
DomainControllerAddress
DomainControllerAddressType
DomainGUID
DomainName
DNSForestName
DCSiteName
ClientSiteName
The most important information for the client is the site name. The hex dump of the response from the server will contain only one site name if the client is a member of the controller's site:
00000: 00 A0 C9 F1 A0 00 00 01 02 33 BF E7 08 00 45 00 . ....3..E.
00010: 00 D4 E9 90 00 00 80 11 00 00 0A 00 00 16 0A 00 ._...........
00020: 00 18 01 85 04 04 00 C0 B6 57 30 84 00 00 00 9C ......[para]W0...
00030: 02 01 02 64 84 00 00 00 93 04 00 30 84 00 00 00 ...d..."..0...
00040: 8B 30 84 00 00 00 85 04 08 6E 65 74 6C 6F 67 6F 0.....netlogo
00050: 6E 31 84 00 00 00 75 04 73 17 00 00 00 FD 01 00 n1...u 09 44 LAB..DCCLAB22..D
000A0: 43 43 4C 41 42 32 34 24 00 17 44 65 66 61 75 6C CCLAB24$..Defaul
000B0: 74 2D 46 69 72 73 74 2D 53 69 74 65 2D 4E 61 6D t-First-Site-Nam
000C0: 65 00 C0 50 05 00 00 00 FF FF FF FF 30 84 00 00 e.P....0..
000D0: 00 10 02 01 02 65 84 00 00 00 07 0A 01 00 04 00 .....e.........
000E0: 04 00 ..
If the client is communicating with a controller that is not in the client's site, the controller will also pass back the name of the client's proper site:
00000: 00 20 78 E0 AA 2B 00 20 78 01 80 69 08 00 45 00 . x+. x.i..E.
00010: 00 C9 FD A8 00 00 7F 11 28 64 0A 00 00 16 0B 00 ....(d......
00020: 00 02 01 85 04 03 00 B5 C8 55 30 84 00 00 00 91 ......U0...'
00030: 02 01 01 64 84 00 00 00 88 04 00 30 84 00 00 00 ...d.....0...
00040: 80 30 84 00 00 00 7A 04 08 6E 65 74 6C 6F 67 6F 0...z..netlogo
00050: 6E 31 84 00 00 00 6A 04 68 17 00 00 00 7D 01 00 n1...j 0B 44 LAB..DCCLAB22..D
000A0: 43 43 52 4F 55 54 45 52 32 24 00 05 53 69 74 65 CCROUTER2$..Site
000B0: 32 00 05 53 69 74 65 31 00 05 00 00 00 FF FF FF 2..Site1.....
000C0: FF 30 84 00 00 00 10 02 01 01 65 84 00 00 00 07 0.......e....
000D0: 0A 01 00 04 00 04 00 .......
In this case, the client sends another query to the DNS server asking for the list of controllers in that site. The following table shows an example of this. The client is looking for a domain controller in Site2 and switches to Site1 after the LDAP response.
0x1:Std Qry for _ldap._tcp.Site2._sites.dc._msdcs.dcclab.local.
0x1:Std Qry Resp. for _ldap._tcp.Site2._sites.dc._msdcs.dcclab.local
DNS 0x2:Std Qry for _ldap._tcp.Site1._sites.dc._msdcs.dcclab.local.
0x2:Std Qry Resp. for _ldap._tcp.Site1._sites.dc._msdcs.dcclab.local
It is not necessary to have a domain controller in each site. Each domain controller checks all sites in the forest and the replication costs. A domain controller registers itself in any site that does not have a domain controller for its domain and for which its own site has the lowest-cost connection. This process is also known as automatic site coverage. What this means is that clients will use the domain controller that can be reached at the lowest cost.
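The coverage rule can be sketched as a cost comparison. This is a minimal Python sketch under simplified assumptions (made-up site names and link costs; real coverage also considers priorities and ties):

```python
# Sketch of automatic site coverage: a DC registers itself in every site
# that has no DC of its domain and whose cheapest link leads to the
# DC's own site.
def sites_to_cover(dc_site, site_costs, sites_with_dc):
    """site_costs maps each DC-less site to {neighbor_site: link_cost}."""
    covered = []
    for site, costs in site_costs.items():
        if site in sites_with_dc:
            continue
        cheapest = min(costs, key=costs.get)  # lowest-cost connected site
        if cheapest == dc_site:
            covered.append(site)
    return covered

# Site2 has no DC; its cheapest link goes to Site1, where our DC lives,
# so a DC in Site1 registers its SRV records for Site2 as well.
costs = {"Site2": {"Site1": 100, "Site3": 200}}
print(sites_to_cover("Site1", costs, sites_with_dc={"Site1", "Site3"}))
```

The practical consequence matches the text: clients in a DC-less site are steered, via DNS, to the domain controller that is cheapest to reach.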
The default location process of the closest domain controller consists of 10 network packets and creates around 2,000 bytes of traffic.
ICMP Echo: From 10.0.0.24 To 10.0.0.22
ICMP Echo Reply: To 10.0.0.24 From 10.0.0.22
0x1:Std Qry for _ldap._tcp.Default-First-Site-Name._sites.dc._msdcs.dcclab.local.
0x2:Std Qry for _ldap._tcp.Default-First-Site-Name._sites.dc._msdcs.dcclab.local.
0x1:Std Qry Resp. for _ldap._tcp.Default-First-Site-Name._sites.dc._msdcs.dcclab.
0x2:Std Qry Resp. for _ldap._tcp.Default-First-Site-Name._sites.dc._msdcs.dcclab.
More details about the domain locator process can be found in the chapter "Name Resolution in Active Directory" of the Windows 2000 Resource Kit.
Establishing a Secure Channel with the Domain Controller
When the client determines its site and domain controller, it can then create a secure channel with that domain controller. A secure channel is a connection between a domain member and a domain controller established to retrieve domain specific information, to update computer-specific information in the Active Directory, such as the computer password, and to validate the domain membership.
The process starts with a negotiation of the SMB dialect both parties will use. The SMB protocol has undergone many revisions and extensions since its release in 1984. This means the client and the server may not necessarily be using the same SMB dialect. In this case, both sides are Windows 2000, which uses the dialect NTLM 0.12. This dialect allows exchanges to use Unicode. Prior to this, exchanges were made in ASCII. The benefit of Unicode strings is they can include file names, resource names, and user names.
The client proceeds by connecting to the Netlogon interface of the target. The End Point Mapper must be involved in this process in order to connect the client with the correct port on the server. Finally, the client sends three calls (NetrServerReqChallenge, NetrServerAuthenticate3, NetrLogonGetdomainInfo) to the interface. This process produces approximately 4,600 bytes of traffic.
SMB
C negotiate, Dialect = NT LM 0.12
R negotiate, Dialect # = 5
c/o RPC Bind: UUID E1AF8308-5D1F-11C9-91A
c/o RPC Bind Ack: call 0x1 assoc grp 0xD52C
c/o RPC Request: call 0x1 opnum 0x3 contex
c/o RPC Response: call 0x1 context
c/o RPC Bind: UUID 12345678-1234-ABCD-EF0
c/o RPC Bind Ack: call 0x1 assoc grp 0x1C04B
R_LOGON
RPC Client call logon:NetrServerReqChallenge(..)
RPC Server response logon:NetrServerReqChallenge()
Error: Bad Opcode (Function does not exist)
c/o RPC Bind Ack: call 0x3 assoc grp 0x1C04B
Note: The current version of Netmon cannot resolve the calls NetrServerAuthenticate3 and NetrLogonGetdomainInfo correctly. In the previous table, these calls are shown as errors with a Bad Opcode.
Important: When an environment is mixed after an upgrade of a Windows NT 4 domain to Windows 2000, it is important to be aware of the following situation. When the only available domain controller for a Windows 2000 client to authenticate with is a Windows NT 4.0 backup domain controller, it will be unable to establish a secure channel. This is by design to increase security. Windows 2000 clients know what type of domain they belong to and will not downgrade their authentication method when setting up a secure channel. The following trace sequence shows this scenario. It appears to be identical to the Windows 2000 client in a Windows NT 4.0 domain. The main difference appears after this point: because the client was unable to set up a secure channel, domain authentication fails.
Kerberos Authentication and Session Creation
After the secure channel has been established, the client will retrieve all necessary tickets to establish an IPC$ session with the controller. Because all Windows 2000 domain controllers are Kerberos Key Distribution Centers (KDCs), the client tries to detect the closest KDC in the same way it already did for the LDAP services.
The first step the client has to perform is the authentication in the form of an AS ticket exchange. If this is successfully finished, it requests tickets for the controller (computer name$) and the Kerberos service (krbtgt) that is running on the controller. The packet exchange produces approximately 8 kilobytes (KB). The actual size in a given environment depends on the number of Global and Universal groups that the client is a member of. Each additional group will add about 28 bytes to the total.
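The scaling described here is simple enough to sketch. The function below is purely illustrative, using the roughly 8 KB baseline and roughly 28 bytes per additional group quoted in the text:

```python
def kerberos_logon_bytes(extra_groups, baseline=8 * 1024, per_group=28):
    """Rough size of the Kerberos ticket exchange for a client that is a
    member of `extra_groups` additional Global/Universal groups
    (figures taken from the text above)."""
    return baseline + extra_groups * per_group

print(kerberos_logon_bytes(0))    # 8192
print(kerberos_logon_bytes(100))  # 10992
```

So even a client in 100 extra groups adds less than 3 KB to the exchange.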
DNS 0x3:Std Qry for _kerberos._tcp.Default-First-Site- Name._sites.dc._msdcs.DCCLAB.LOCAL. of type Srv Loc on class INET
DNS 0x3:Std Qry Resp. for _kerberos._tcp.Default-First-Site-Name._sites.dc._msdcs.DCCLAB.LOCAL. of type Srv Loc on class
LDAP ProtocolOp: SearchRequest (3)
LDAP ProtocolOp: SearchResponse (4)
Kerberos KRB_AS_REQ (request for
TGT)
Kerberos KRB_AS_REP
Kerberos KRB_TGS_REQ (request for
DC$)
Kerberos KRB_TGS_REP
Kerberos KRB_TGS_REQ (request for
Kerberos Service)
All the tickets a client has obtained can be viewed with the Kerbtray utility from the Microsoft Resource Kit.
Finally, the client can connect to the IPC$ share of the controller, which produces around 3,600 bytes of traffic.
SMB C session setup & X
SMB R session setup & X
SMB C tree connect & X, Share = \\DCCLAB22.DCCLAB.LOCAL\IPC$
SMB R tree connect & X, Type = IPC$
DFS Referral
The client then makes a Distributed File System (DFS) referral. A DFS referral is the DFS client-server protocol to get DFS-specific information that exists on the server to the client. It occurs whenever necessary. The general referral process is started when the client sends an SMB packet, indicating it is a DFS referral. The server passes this request to the DFS driver to complete the request. Subsequently, any access to network shares could result in a DFS referral request, if the client does not already have information about that share. Windows 2000 is a DFS version 5.0 client. This version allows caching of referrals to a DFS root or link for a (administrator configurable) specific length of time.
When the client starts up, a number of DFS referral requests are made from the client to one of the domain controllers within the client computer's domain that responds to the request. This process is required so that the client will be ready to handle any requests to domain-based DFS shares.
The first two requests serve to initialize the DFS client. The first is used to learn the names of all trusted Windows 2000 domains that could be accessed by the client. The second is used to obtain a list of domain controllers in the domain, ordered with the local site first. The names returned contain both the NetBIOS and DNS names of the domains.
The third request is made to obtain the actual server path to connect to a sysvol share.
The DFS referral creates a minimum of 394 bytes of traffic. The actual amount of traffic generated will depend on the number of trusted domains and DCs in the local domain returned in the reply.
By default the DFS referral to learn domain configuration is repeated every 15 minutes.
C transact2 NT Get DFS Referral
R transact2 NT Get DFS Referral (response to frame 105)
The client then pings and makes an LDAP request to get the domain controller again.
ICMP
Echo: From 10.00.00.100 To 10.00.00.22
Echo Reply: To 10.00.00.100 From 10.00.00.22
UDP
Src Port: Unknown, (1041); Dst Port: Unknown (389); Length = 209 (0xD1)
Src Port: Unknown, (389); Dst Port: Unknown (1041); Length = 188 (0xBC)
Name Translation
Each object in the Active Directory has a name. There are different formats for names available, such as the user principal names, distinguished names, and the earlier "domain\user" names from Windows NT. It is not necessary for a name to be a string. In general, everything that uniquely identifies an object can be considered as a name. Depending on the service that needs a name as parameter, it might be necessary to convert a given name from one format into another.
This is the objective of an API called DsCrackNames, which is used to map names from one format to another. Details about this call can be obtained from the Microsoft Developers Network (MSDN). Before a client can call this function, it has to bind a handle to the directory service with DSBind and if the operation is done it has to unbind from the directory with DSUnbind.
The following frames show the network traffic that comes along with the translation process. The entire process produces approximately 6,600 bytes of traffic.
c/o RPC Bind: UUID E1AF8308-5D1F-11C9-91A4-08002B14A0FA call 0
c/o RPC Bind Ack: call 0x1 assoc grp 0xD52D xmit 0x16D0 recv 0x1
c/o RPC Bind: UUID E3514235-4B06-11D1-AB04-00C04FC2DCD2 call 0
c/o RPC Bind Ack: call 0x1 assoc grp 0x1C04C xmit 0x16D0 recv 0x
c/o RPC Alt-Cont: UUID E3514235-4B06-11D1-AB04-00C04FC2DCD2 call 0
c/o RPC Alt-Cont Rsp: call 0x1 assoc grp 0x1C04C xmit 0x16D0 recv 0x
c/o RPC Request: call 0x1 opnum 0x0 context 0x0 hint 0x38
c/o RPC Response: call 0x1 context 0x0 hint 0x3C cancels 0x0
c/o RPC Request: call 0x2 opnum 0xC context 0x0 hint 0x6E
c/o RPC Response: call 0x2 context 0x0 hint 0xB4 cancels 0x0
c/o RPC Request: call 0x3 opnum 0xC context 0x0 hint 0x6E
c/o RPC Response: call 0x3 context 0x0 hint 0xAC cancels 0x0
c/o RPC Request: call 0x4 opnum 0x1 context 0x0 hint 0x14
c/o RPC Response: call 0x4 context 0x0 hint 0x18 cancels 0x0
LDAP RootDSE
The client then requests information from the LDAP RootDSE. The RootDSE is a standard attribute defined in the LDAP 3.0 specification. The RootDSE contains information about the directory server, including its capabilities and configuration. The search response will contain a standard set of information as defined in RFC 2251. One of the items returned with this response is the supported Simple Authentication and Security Layer (SASL) mechanism. In this case it returns GSS-SPNEGO.
KRB_TGS_REQ
KRB_TGS_REP
ProtocolOp: BindRequest (0)
ProtocolOp: BindResponse (1)
Load Group Policy
Next, the computer loads applicable Group Policy objects. The client then completes an RPC call to convert its name to a distinguished name and performs an LDAP lookup for policy information that applies to this particular computer and then loads that information using Server Message Block (SMB).
Policy Search
The following frames show the client performing a binding operation to the LDAP directory. LDAP queries require the client to bind to the Directory Service before making a search for information. At this stage of the client logon process, the client is binding to the Active Directory to make a search for Group Policies that apply to the client. This sequence also shows the client making an LDAP request to determine what Group Policies apply. Each bind operation creates about 1,675 bytes of traffic. The policy search creates about 3,527 bytes of traffic in this case.
TCP
.AP..., len: 173, seq: 978423034-978423207, ack:3068556899, win:17069, src: 1048 dst: 389
AP..., len: 294, seq:3068556899-3068557193, ack: 978423207, win:16081, src: 389 dst: 1048
....S., len: 0, seq: 978497639-978497639, ack: 0, win:16384, src: 1050 dst: 389
.A..S., len: 0, seq:3068641675-3068641675, ack: 978497640, win:17520, src: 389 dst: 1050
.A...., len: 0, seq: 978497640-978497640, ack:3068641676, win:17520, src: 1050 dst: 389
.AP..., len: 129, seq: 978498933-978499062, ack:3068642008, win:17188, src: 1050 dst: 389
.AP..., len: 171, seq:3068642008-3068642179, ack: 978499062, win:16098, src: 389 dst: 1050
.AP..., len: 203, seq: 978499062-978499265, ack:3068642179, win:17017, src: 1050 dst: 389
.AP..., len: 201, seq:3068642179-3068642380, ack: 978499265, win:17520, src: 389 dst: 1050
.AP..., len: 467, seq: 978423207-978423674, ack:3068557193, win:16775, src: 1048 dst: 389
.AP..., len: 1273, seq:3068557193-3068558466, ack: 978423674, win:17520, src: 389 dst: 1048
After the client determines what policies are applicable, it makes a second DFS referral. This will occur ahead of most attempts by a client in Windows 2000 to connect to a share point.
Policy Load Using SMB
This part completes with the client connecting to the SYSVOL (a standard share point on a domain controller) on the domain controller and downloading its policies, generating 1,018 bytes of traffic.
This is a very simple configuration, so this number will grow with more sophisticated Group Policy implementations.
C tree connect & X, Share = \\DCCLAB22.MAIN.LOCAL\SYSVOL
R tree connect & X, Type = _
C NT create & X, File = \main.local\Policies\{31B2F340-016D-11D2-945F-00C04FB984F9}\gpt.ini
R NT create & X, FID = 0x4000
C read & X, FID = 0x4000, Read 0x1a at 0x00000000
R read & X, Read 0x1a
Client Certificate AutoEnrollment
Each time Group Policy objects are applied, the client completes the autoenrollment event. The autoenrollment event does the following:
Checks the status of the computer's certificates, and if they are not OK, the client autoenrolls itself
Downloads the enterprise's certification authority (CA) certificates from the Active Directory enterprise root store (= PKI trust anchors)
Downloads certificates of CAs capable of issuing smart card certificates from Active Directory
The client makes a request to get the LDAP RootDSE (see frames above) information and then uses LDAP to complete autoenrollment.
DNS 0x3:Std Qry for _ldap._tcp.Site2._sites.dc._msdcs
DNS 0x3:Std Qry Resp. Auth. NS is dcclab.local.
DNS 0x4:Std Qry for _ldap._tcp.dc._msdcs.dcclab22.dcc
DNS 0x4:Std Qry Resp. Auth. NS is dcclab.local. of
ProtocolOp: SearchResponse (simple) (5)
ProtocolOp: UnbindRequest (2)
Closer examination of frame 3 in the previous sequence reveals that this is a request for configuration information about the public key services in the domain. The following example provides a more detailed view of one of the frames.
LDAP: ProtocolOp: SearchRequest (3)
LDAP: ProtocolOp = SearchRequest
LDAP: Base Object = CN=Public Key Services,CN=Services,CN=Configuration,DC=dcclab,DC
LDAP: Scope = Single Level
LDAP: Deref Aliases = Never Deref Aliases
LDAP: Size Limit = No Limit
LDAP: Time Limit = 0x00002710
LDAP: Attrs Only = 0 (0x0)
LDAP: Filter Type = Equality Match
LDAP: Attribute Type = cn
LDAP: Attribute Value = NTAuthCertificates
LDAP: Attribute Value = cACertificate
Time Synchronization
Next, the client updates its time with its authenticating domain controller. The following set of frames shows the time synchronization process. Notice that this occurs on port 123. This sequence creates 220 bytes of network traffic.
NTP
Src Port: Unknown, (1051); Dst Port: Network Time Protocol (123); Length = 76 (0x4C)
Src Port: Network Time Protocol, (123); Dst Port: Unknown (1051); Length = 76 (0x4C)
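The exchange is a single 48-byte SNTP request and reply on UDP port 123 (76 bytes per frame including headers, as the trace shows). A minimal, offline sketch of the packet layout — the helper names are ours, not from any Windows component:

```python
import struct

NTP_TO_UNIX = 2208988800  # seconds between the 1900 NTP epoch and the 1970 Unix epoch

def build_sntp_request():
    # First byte 0x1B = Leap Indicator 0, Version 3, Mode 3 (client); rest zeroed.
    return b"\x1b" + 47 * b"\x00"

def transmit_time(reply):
    # The seconds field of the server's transmit timestamp sits at
    # offset 40 of the 48-byte reply.
    seconds_since_1900 = struct.unpack("!I", reply[40:44])[0]
    return seconds_since_1900 - NTP_TO_UNIX

request = build_sntp_request()
print(len(request), hex(request[0]))  # 48 0x1b
```

Sending that request to a time server on UDP port 123 and parsing the reply with `transmit_time` is all an SNTP client fundamentally does.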
Dynamic Domain Name System Update
The last part of the startup process is for the client to perform its name update in the DNS database. The Windows 2000 dynamic DNS update process is based on RFC 2136.
The client first determines whether the DNS server has the authority for the client's zone. This sequence creates 225 bytes of traffic.
0x1:Std Qry for dcclab24.main.local. of type SOA on class INET addr.
0x1:Std Qry Resp. Auth. NS is main.local. of type SOA on class INET addr. : Name does not exist
Next, the client makes the dynamic update of its name in the DNS server. This creates about 1,800 bytes of traffic if the client has to update both (A RR and PTR RR).
DNS 0x4:Dyn Upd PRE records to dcclab24.dcclab.local.
DNS 0x4:Dyn Upd Resp. PRE records to dcclab24.dcclab
DNS 0x5:Std Qry for 0.0.10.in-addr.arpa. of type SOA
0x5:Std Qry Resp. for 0.0.10.in-addr.arpa. of type SOA
0x6:Dyn Upd PRE/UPD records to 24.0.0.10.in-addr.arpa
0x6:Dyn Upd Resp. PRE/UPD records to 24.0.0.10.in-addr.arpa
The actual size of the packets for the dynamic update depends on many conditions. First of all, it depends on whether the client was already registered. Next, it depends on whether there is a conflict with an already registered entry. Another aspect of the client traffic is whether DHCP is in use. The default configuration in this scenario is that the client updates its A RR, whereas the DHCP server is responsible for the PTR RR.
Last but not least, the traffic also depends on the configuration of the DNS server and on how the secure dynamic update behavior is configured.
Note: If DDNS is not being used in the environment, you should consider turning off the client's ability to make dynamic updates. Turning this off will save the network traffic generated by the unneeded attempts to make the update by the client in an environment that does not support it. This feature is turned off in the advanced TCP/IP properties for the particular network connection. The location is shown in the following illustration:
Completion
The computer startup sequence is completed with the client breaking down its open connections to the domain controller in a sequence similar to the one illustrated in the following table. This creates 473 bytes of traffic.
C tree disconnect
R tree disconnect
C logoff & X
R logoff & X
This traffic sequence is useful to help determine the break between the computer startup and a user logon in a network trace. The system is now available for use. The system display will show the Ctrl-Alt-Delete screen.
After the system startup is complete, the Ctrl-Alt-Delete screen appears on the console. A user will be able to make a domain logon from this system. Pressing Ctrl-Alt-Delete and entering a valid set of domain user credentials (user name and password) initiates the interactive client logon process that ends with the Windows NT shell being loaded and the user being able to interactively use the systems. It is important to note that the user logon process is essentially an abbreviated version of the computer startup process using a subset of the processes described previously. The following diagram shows this process.
Logon Flow
User Identification
Windows 2000 provides three ways for a user to enter account information at logon. The first method is to use the Security Accounts Manager (SAM) account name and select the domain. This is the default method for logon. The second method is to use the fully qualified name, which would appear as <user>@<domain-org-company-com>. The third alternative is to use the User Principal Name (UPN), which is described in the Windows 2000 Resource Kit as follows:
UPN names are resolved to a user and domain by performing a domain controller lookup of the Global Catalog (GC). UPN logon is only supported when the domain is in Native Mode.
Kerberos Authentication for User Logon
The user logon operation generates a Kerberos authentication request to get its session ticket. The logon process then requests a session key for a service from the KDC via the Ticket Granting Service.
KRB_AS_REQ
KRB_AS_REP
The logon process requests the following tickets during the user logon process:
Kerberos: Server Name = <clientname>$
Kerberos: Server Name = <dc name>$
Kerberos: Server Name = krbtgt.<dns domain name>
Kerberos: Server Name = ldap.<dc name>.<dns domain name>
As mentioned earlier, the size of the Kerberos packets depends on the number of groups a user is a member of.
Group Policy Load
The process of retrieving Group Policy information is the same as previously described in the computer startup section. The client uses the "DS API DSCrackNames" to perform a name translation and retrieves the information about the policies to load via LDAP.
The client then establishes an SMB connection to the controller and downloads the necessary policy files.
C session setup & X
R session setup & X
R NT create & X
C read & X, FID = 0x8005, Read 0x1a at 0x00000000
This sequence shows the client connecting to the system volume and loading the user's policy. Information on the traffic generated by the loading of group policy information can be found in Chapter 5 of the Microsoft Press book Building Enterprise Active Directory Services, in the Notes from the Field series.
The client then closes its connection to the domain controller. This happens on Port 445. It is represented in the trace like the following table.
SMB C tree disconnect
SMB R tree disconnect
SMB C logoff & X
SMB R logoff & X
The user has now logged on. The following table shows a summary of the network traffic for a user logon.
Frames
Bytes Claimed
3070
114
96
NBT
1164
86
6019
3872
3226
14229
Note: Please keep in mind that the traffic in the previous table represents a user who is just a member of the default groups and that no specific Group Policies were set.
We have now provided a detailed examination of the Windows 2000 computer startup and user logon process. As stated in the introduction, understanding this process will assist with both infrastructure design and systems administration in Windows 2000 networks.
This document covers a great deal of material and should probably be read multiple times to fully understand and kept handy as a reference.
To help cement your understanding of the concepts described in the paper, I would suggest setting up a test environment and using network monitor to make some traces of the process. Use this document as a reference when examining the traces to understand what is happening at each point.
Questions and comments can be directed to gmolnar@microsoft.com
The following system configurations were used to validate the Windows 2000 startup and logon process.
Environment
Windows 2000
1 Windows 2000 domain controller in Domain Main.Local running DNS, DHCP, WINS
1 Windows 2000 Professional Desktop
1 Windows 2000 Professional Notebook
NT 4
1 NT 4 SP6a domain controller running DHCP, WINS
1 Windows 2000 Professional Desktop
Mixed
1 upgraded Windows 2000 domain controller running DNS, DHCP, WINS
1 NT 4.0 SP6a BDC
Forest
1 Windows 2000 Server Domain Controller in Domain Corp.Main.Local running DNS, DHCP
2 Windows 2000 Servers configured as RRAS routers using a Null Modem Cable to simulate a slow link
1 Windows 2000 Domain Controller in Domain Field.Main.Local
1 Windows 2000 Domain Controller in Domain Field.Main.Local running DNS (Secondary), DHCP
1 Windows 2000 Professional Desktop in Domain Corp.Main.Local
1 Windows 2000 Professional Desktop in Domain Field.Main.Local
All computers (except where noted) were networked using a 10/100 Ethernet hub. The clients used various Ethernet cards. All clients were connected to the network at 100 Mbps.
All network traces were made using Network Monitor v5.00.646. A parser for Kerberos traffic was added to Network Monitor.
All network tests and configurations discussed in this document use the TCP/IP transport protocol. TCP/IP is the default network protocol for Windows 2000 and many of the services that logon and startup use require TCP/IP.
The following table is a comprehensive list of ports used by Windows.
Port
TCP/UDP
Function Description
20
FTP
21
23
Telnet
25
IIS SMTP
31
Netmeeting
42
WINS Replication
52
53
DNS Name Resolution, SQL TCP lookup
DNS, SQL TCP lookup
67
DHCP Lease (BOOTP)
68
DHCP Lease
80
IIS HTTP
88
110
POP3
119
NNTP
135
Location Service
RPC, RPC EP Mapper, SQL RPC session mapper, WINS Manager, DHCP Manager, MS DTC
137
NetBIOS Name Service
SQL RPC Lookup, Logon Sequence, NT 4.0 Trusts, NT 4.0 Secure Channel, Pass Through Validation, Browsing, Printing, SQL Named Pipes lookup
WINS Registration
138
NetBIOS Datagram Service, Logon Sequence, NT 4.0 Trusts, NT 4.0 Directory Replication, NT 4.0 Secure Channel, Pass Through Validation, NetLogon, Browsing, Printing
139
NetBIOS Session Service, NBT, SMB, File Sharing, Printing, SQL Named Pipes session, Logon Sequence, NT 4.0 Trusts, NT 4.0 Directory Replication, NT 4.0 Secure Channel, Pass Through Validation, NT 4.0 Administration Tools (Server Manager, User Manager, Event Viewer, Registry Editor, Diagnostics, Performance Monitor, DNS Administration)
161
SNMP
162
SNMP Trap
215
389
443
HTTP SSL
445
SMB or CIFS
464
Kerberos kpasswd
500
IPSEC isakmp IKE
531
IRC
560
Content Replication Service Site Server
636
LDAP over SSL
731
Dynamic
888
Login and Environment Passing
Directory Replication
1109
POP with Kerberos
1433
SQL TCP session
1645
RADIUS Authentication
1646
RADIUS Accounting
1723
PPTP Control Channel (IP Protocol 47 GRE)
1755
Netshow
1812
1813
1863
MSN Messenger
2053
Kerberos de-multiplexor
2105
Kerberos encrypted rlogin
3268
Global Catalog LDAP
3269
Global Catalog LDAP over SSL
3389
RDP
Terminal Services
8000
CyberCash (credit gateway)
8001
CyberCash (admin)
8002
CyberCash (coin gateway)
10140-10179
DCOM port range
For all the ports on Windows NT, look on your local computer:
%SystemRoot%\system32\drivers\etc\services
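The services file maps service names to port/protocol pairs in a simple whitespace-separated format. A short sketch of reading that format — the parser is ours, and the sample lines mirror real entries from such a file:

```python
SAMPLE = """\
# name    port/protocol
ftp       21/tcp
domain    53/tcp
domain    53/udp
kerberos  88/tcp
"""

def parse_services(text):
    """Parse services-file lines into a {(name, protocol): port} mapping."""
    table = {}
    for line in text.splitlines():
        line = line.split("#", 1)[0].strip()  # drop comments and blank lines
        if not line:
            continue
        name, portproto = line.split()[:2]
        port, proto = portproto.split("/")
        table[(name, proto)] = int(port)
    return table

table = parse_services(SAMPLE)
print(table[("domain", "udp")])  # 53
```

The same lookup is what the standard resolver function `socket.getservbyname` performs against the system copy of this file.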
The following table lists common ports.
Authentication
Authentication services verify the identity of a user or device requesting access to a resource.
SERVICE
TYPE
AFS/Kerberos authentication service
TCP Port 7004 - afs3-kaserver
UDP Port 7004 - afs3-kaserver
Authentication Service
TCP Port 113 - ident
UDP Port 113 - ident
Certificate Distribution Center
TCP Port 223 - cdc
UDP Port 223 - cdc
Funk Software Inc.
TCP Port 1505 - funkproxy
UDP Port 1505 - funkproxy
Login Host Protocol (TACACS)
TCP Port 49 - bbn-login
UDP Port 49 - bbn-login
TACACS-Database Service
TCP Port 65 - tacacs-ds
UDP Port 65 - tacacs-ds
Directory Service/Name Resolution
Directory Services provide name resolution and lookup capabilities, allowing users or devices to locate resources on the network by human readable or well-known names.
AppleTalk Name Binding
TCP Port 202 - at-nbp
UDP Port 202 - at-nbp
Directory Location Service
TCP Port 197 - dls
UDP Port 197 - dls
Directory Location Service Monitor
TCP Port 198 - dls-mon
UDP Port 198 - dls-mon
Lightweight Directory Access Protocol
TCP Port 389 - ldap
UDP Port 389 - ldap
Microsoft-DS
TCP Port 445 - microsoft-ds
UDP Port 445 - microsoft-ds
Microsoft's Windows Internet Name Service
TCP Port 1512 - wins
UDP Port 1512 - wins
NETBIOS Name Service
TCP Port 137 - netbios-ns
UDP Port 137 - netbios-ns
NIC Host Name Server
TCP Port 101 - hostnames
UDP Port 101 - hostnames
Prospero Directory Service non-priv
TCP Port 1525 - prospero-np
UDP Port 1525 - prospero-np
Domain Name Server
TCP Port 53 - domain
UDP Port 53 - domain
Host Name Server
TCP Port 42 - nameserver
UDP Port 42 - nameserver
HOSTS2 Name Server
TCP Port 81 - hosts2-ns
UDP Port 81 - hosts2-ns
streettalk
TCP Port 566 - streettalk
UDP Port 566 - streettalk
Encryption
TCP Port 750 - kerberos-sec
TCP Port 751 - kerberos_master
TCP Port 88 - kerberos
UDP Port 750 - kerberos-sec
UDP Port 751 - kerberos_master
UDP Port 88 - kerberos
kerberos administration
TCP Port 749 - kerberos-adm
UDP Port 749 - kerberos-adm
Kerberos Key Distribution Center
Windows NT Service - Kerberos Key Distribution Center
kerberos-master
TCP Port 751 - kerberos-master
Remote Access/VPN
Remote Access & VPN services allow users or devices to access remote networks as though they had local connections to that network. This is different from Remote Control Software where users actually assume control of a host on a remote network.
any private dial out service
TCP Port 75 -
UDP Port 75 -
Apple Remote Access Protocol
TCP Port 3454 - mira
IPSEC driver
Windows NT Service - IPSEC driver
pptp
TCP Port 1723 - PPTP
Routing and Remote Access
Windows NT Service - Routing and Remote Access
Shiva
TCP Port 1502 - shivadiscovery
UDP Port 1502 - shivadiscovery
TIA/EIA/IS-99 modem server
TCP Port 380 - is99s
UDP Port 380 - is99s
Routing
Routing protocols allow for the transmission of information between networks. TCP/IP is omitted from this list as it is assumed to be running on all hosts on the network. Protocols other than TCP/IP are important to note as they may indicate extranet support for different types of client operating systems and/or network configurations.
AppleTalk Protocol
Windows NT Service - AppleTalk Protocol
AppleTalk Routing Maintenance
TCP Port 201 - at-rtmp
UDP Port 201 - at-rtmp
Appletalk Update-Based Routing Pro.
TCP Port 387 - aurp
UDP Port 387 - aurp
AppleTalk Zone Information
TCP Port 206 - at-zis
UDP Port 206 - at-zis
Border Gateway Protocol
TCP Port 179 - bgp
UDP Port 179 - bgp
IPX
TCP Port 213 - ipx
UDP Port 213 - ipx
Local routing process (on site)
UDP Port 520 - router | http://technet.microsoft.com/en-us/library/bb742590.aspx | crawl-002 | refinedweb | 10,846 | 51.89 |
With signals we mean the "software interrupts" that you also send from the shell using (the built-in function) kill pid. They are used by the user to interrupt or quit a program. The kernel uses them for reporting hardware errors and delivers them when a process requests so.
Besides that they are useful when all you need is the possibility of sending a couple of predefined messages. They can not be used for passing streams of data.
There is a limit to the number of signals; they are numbered from 0 to 31 and most have already been defined by the OS in the signal.h header file. Also, on the shell prompt kill -l can be entered for a list.
With the function sigaction() you can set up a so-called signal handler. The function has the following syntax:
int sigaction (int signum, const struct sigaction *act, struct sigaction *oldact);
Explanation of the parameters:
int signum
Of course the signal you want to catch. See signal.h for the list.
const struct sigaction *act
Used to tell which function should be started, with which flags, if the signal signum comes in; can be NULL together with oldact, so you can test whether the system supports the signal.
struct sigaction *oldact
You can leave this one NULL, but if you pass it, the previously installed action is saved in the struct pointed to by oldact.
The struct sigaction is defined as:
struct sigaction {
    void (*sa_handler)(int);
    void (*sa_sigaction)(int, siginfo_t *, void *);
    sigset_t sa_mask;
    int sa_flags;
    void (*sa_restorer)(void);
};
The members of this struct:
void (*sa_handler)(int);
This is a pointer to a function where you handle the signal. The function must have one parameter, the signal number.
Instead of a pointer to a function, you can pass SIG_DFL, then the default action is carried out; for example, for SIGHUP (hangup) the default action causes the process to quit. If this parameter is SIG_IGN, then the signal is ignored.
You can pass NULL for this parameter, but then you need to fill in the second.
void (*sa_sigaction)(int, siginfo_t *, void *);
This is the alternative for the first parameter. Pass a pointer to a function where you handle the signal, but besides of the signal number as the only parameter, you will receive a filled-in siginfo_t struct, which contains more information. The third parameter will be filled with a pointer to type ucontext_t (cast to void *), which contains more information on the current process -- if you need to know more about this parameter, check the libc manual.
You can pass NULL for this parameter, but then you need to fill in the first.
sigset_t sa_mask;
This mask is used to block other signals, besides the one which will be handled. Fill in 0 for no other blocking.
int sa_flags;
POSIX.1 defines only SA_NOCLDSTOP, which will result in not sending SIGCHLD when a child process is stopped. But there are lots of other flags which are OS specific; see your manual pages. Fill in 0 for unchanged behaviour.
void (*sa_restorer)(void);
This element is obsolete (it is not specified by POSIX) and should not be used.
This is a very simple program, just setting up a handler, raising the signal and then handling it ourselves.
#include <stdio.h>
#include <signal.h>
#include <unistd.h>
#include <sys/types.h>

#define MYSIG SIGUSR1

void setup_handler (void);
void my_handler (int signum);

int main (void)
{
    setup_handler();
    printf("sigaction setup complete, waiting for signal... \n");
    raise(MYSIG);
    return 0;
}

void setup_handler (void)
{
    /* this struct has to be set up with parameters to call sigaction() */
    struct sigaction new_action;
    sigset_t block_mask;

    /* sigemptyset makes block_mask an empty collection of flags */
    sigemptyset (&block_mask);
    /* function my_handler should be started when MYSIG is received */
    new_action.sa_handler = my_handler;
    /*
     * with block_mask representing an empty collection of flags, this
     * assignment will make our handler not block any signals while running
     */
    new_action.sa_mask = block_mask;
    /* don't pass any additional flags */
    new_action.sa_flags = 0;
    /* now that all the parameters have been taken care of, set it up */
    sigaction (MYSIG, &new_action, NULL);
}

void my_handler (int signum)
{
    printf("You INTERRUPTED me! Signal %d received\n", signum);
}
The above example can be adapted, shortened and now with a handler which gets more information. The changes are marked.
#include <stdio.h>
#include <signal.h>
#include <unistd.h>
#include <sys/types.h>

#define MYSIG SIGUSR1

void setup_handler (void);
void my_handler (int signum, siginfo_t *, void *); /* Changed */

int main (void)
{
    setup_handler();
    printf("sigaction setup complete, waiting for signal... \n");
    raise(MYSIG);
    return 0;
}

void setup_handler (void)
{
    struct sigaction new_action;
    sigset_t block_mask;

    sigemptyset (&block_mask);
    new_action.sa_sigaction = my_handler; /* Changed */
    new_action.sa_mask = block_mask;
    new_action.sa_flags = SA_SIGINFO; /* Changed */
    sigaction (MYSIG, &new_action, NULL);
}

void my_handler (int signum, siginfo_t *siginfo, void *context) /* Changed */
{
    printf("You INTERRUPTED me! Signal %d received\n", siginfo->si_signo); /* Changed */
}
Often you want to temporarily block signals, for instance when you're reading something from a device. To test this, put the following code in the main() function we discussed, instead of the line where we use raise().
sigset_t block_sigs;

sigemptyset(&block_sigs);
sigaddset(&block_sigs, MYSIG);
sigprocmask(SIG_BLOCK, &block_sigs, NULL);
raise(MYSIG);
sleep(5);
printf("sleep finished, unblocking\n");
sigprocmask(SIG_UNBLOCK, &block_sigs, NULL);
printf("exiting\n");
Although we do the raise() before the sleep(), you will see that running the program will show a sleep before the raise is handled -- we blocked it for the time of the sleep.
From version 2.4, it's possible on Linux to receive a signal whenever a directory is modified. The following example code does this:
#define _GNU_SOURCE   /* needed for F_SETSIG, F_NOTIFY and the DN_* flags */
#include <fcntl.h>
#include <signal.h>
#include <stdlib.h>

struct sigaction new_action;
int data_dir;

// Create file descriptor
data_dir = open("/tmp/newdata", O_RDONLY);
if (data_dir < 0) {
    exit(1);
}

// Handler for notification. We assume there's a function defined that's
// called "new_data_handler()".
new_action.sa_handler = new_data_handler;
sigemptyset(&new_action.sa_mask);
new_action.sa_flags = 0;
sigaction(SIGRTMIN, &new_action, NULL);

// Configure the notification signal with F_SETSIG. Passing SIGRTMIN will
// cause it to queue events.
fcntl(data_dir, F_SETSIG, SIGRTMIN);
// Now tell which events should be signalled.
fcntl(data_dir, F_NOTIFY, DN_CREATE|DN_DELETE|DN_MULTISHOT);
You can check the man page of fcntl() to see what the flags do exactly
It's also possible to use SRV4 signal() calls. This method is not POSIX compliant and it's also unreliable (check out The Linux Programmer's Guide for more information). So in terms of reliability and portability, the POSIX definition is superior, but there are still many applications around which use the signal() functionality. | https://www.vankuik.nl/Signals | CC-MAIN-2021-04 | refinedweb | 1,033 | 55.44 |
My goal is to multiply all positive numbers in a list by 3. I think I've almost got the solution but I'm not sure what is causing it not to work. It currently just returns back the original numbers and does not multiply any numbers by 3.
def triple_positives(xs):
List = []
product = 3
for i in range (len(List)):
if List[i] >= 0:
product *= i
List.append(xs[i])
return xs
In your code, there are multiple issues. Below is the updated version of it with fixes:
>>> def triple_positives(xs): ... List = [] ... product = 3 ... for i in range (len(xs)): ... if xs[i] >= 0: ... List.append(xs[i]*3) ... else: ... List.append(xs[i]) ... return List ... >>> my_list = [1, 2, -1, 5, -2] >>> triple_positives(my_list) [3, 6, -1, 15, -2]
Alternatively, you may achieve this using List Comprehension as:
>>> [item * 3 if item > 0 else item for item in my_list] [3, 6, -1, 15, -2] | https://codedump.io/share/CSt52khJh5j8/1/need-help-multiplying-positive-numbers-in-a-list-by-3-and-leaving-other-numbers-unmodified | CC-MAIN-2017-04 | refinedweb | 155 | 73.07 |
.
July 26th, 2008 at 8:02 pm
I’m trying to do some tests in different environments these days,
so it’s a quite useful tip to me,
thanks a lot.
July 29th, 2008 at 9:04 am
put your test db settings into a file called test_settings.py
in test_settings.py add this line to the top
from settings.py import *
#database settings/other test settings.
then do this
python manage.py test –settings=test_settings
September 16th, 2008 at 11:22 pm
Excellent concept! I really like norm’s approach here also. I think this is relevant, whether you have special environment considerations or not. Thanks! | http://www.stopfinder.com/blog/2008/07/26/flexible-test-database-engine-selection-in-django/ | crawl-002 | refinedweb | 107 | 77.64 |
goCart.jsgoCart.js
A complete Shopify Ajax cart solution written in vanilla JS. This plugin includes Ajax cart drawer, Ajax mini cart, add to cart modal, and error modal.
Plugin by Bornfight front-end team.
🎮 Demo
- All products
- Product with one variant
- Product with multiple variants
💪 Features
- Cart drawer (with left or right direction)
- Mini cart (cart flying under cart button)
- Success modal when product was added to cart with CTA buttons to continue shopping (optional)
- Error modal (when Ajax request fails)
- Control of products inside opened cart (remove, add or subtract quantity)
- See all product information inside opened cart (image, title, variant)
- Written in vanilla JS (no Jquery needed)
- Barebones (only minimal styles are included)
🔨 Getting Started
Compiled code can be found in the
build directory. The
src directory contains development code.
1. Install plugin1. Install plugin
npm i @bornfight/gocart
2. Import goCart.js to your theme JS2. Import goCart.js to your theme JS
import GoCart from '@bornfight/gocart';
Or if you are not using any module bundler you can import goCart.js manually. Add
index.js file from
build folder (you can also rename it) to your theme
assets folder.
Then just include the goCart.js inside
theme.liquid file, best right before closing of
</body> tag.
{{ 'gocart.js' | asset_url | script_tag }}
3. Import CSS/SCSS styles3. Import CSS/SCSS styles
Take the CSS file from
build folder and include it in your Shopify theme.
If you are using SCSS you can find the SCSS file inside
src folder:
src/lib/scss/go-cart.scss.
You can also simply include it from
node_modules like this:
@import "~@bornfight/gocart/src/lib/scss/go-cart";
4. Include
go-cart.liquid file as section
Take the
go-cart.liquid file from
src/lib/ and put it in
sections folder of your Shopify theme.
This file contains all elements that make goCart.js
To make goCart.js elements visible (drawer, modals) you need to include it in your
theme.liquid file.
theme.liquid file is the main file of your theme, so if a section is included inside of it will be visible on all pages.
To do so insert this code:
{%- section 'go-cart' -%}
inside
theme.liquid file, best right before closing of
</body> tag.
5. Edit product.liquid section5. Edit product.liquid section
Inside your product template (if you use section inside product template then open product section) find the form to add product to cart.
This will probably look similar to this:
<form action="/cart/add" method="post" enctype="multipart/form-data">......
You will need to add identifier to that form which goCart.js uses to add products to cart. Replace this code with this (just adding the ID):
<form action="/cart/add" method="post" enctype="multipart/form-data" id="add-to-cart-{{ product.handle }}-{{ collection.handle }}-{{ section.id }}">
Inside that same form, find 'Add to cart' button, which user presses when he wants to add product to cart.
This will probably look similar to this:
<button type="submit" name="add" data-add-to-cart {%- unless current_variant.available -%}disabled="disabled" {%- endunless -%}> <span data-add-to-cart-text> {%- if current_variant.available -%} {{- 'products.product.add_to_cart' | t -}} {%- else -%} {{- 'products.product.sold_out' | t -}} {%- endif -%} </span> </button>
You will need to add class to that button that goCart.js uses to prevent standard behavior and to add to cart with Ajax (just adding the class).
<button type="submit" name="add" data-add-to-cart class="js-go-cart-add-to-cart" {%- unless current_variant.available -%}disabled="disabled"{%- endunless -%}> <span data-add-to-cart-text> {%- if current_variant.available -%} {{- 'products.product.add_to_cart' | t -}} {%- else -%} {{- 'products.product.sold_out' | t -}} {%- endif -%} </span> </button>
6. Replace your cart button with goCart button6. Replace your cart button with goCart button
Take the
go-cart-button.liquid file from
src/lib/ and put it in
snippets folder of your Shopify theme.
This file contains goCart elements that make cart button with number of items inside cart.
Locate the file that contains your cart button. Usually this will be inside
header.liquid section.
Inside your file just include goCart button as a snippet instead your old cart button like this:
{%- include "go-cart-button" -%}
There is no need to have two cart buttons so you can completely remove your old cart button.
7. Init the plugin7. Init the plugin
const goCart = new GoCart();
or for manual installations, init inside
theme.liquid file, right after including the goCart.js script from assets.
<script> var goCart = new GoCart(); </script>
You should have something like this:
{{ 'gocart.js' | asset_url | script_tag }} <script> var goCart = new GoCart(); </script>
✈️ Options
{ cartMode: 'drawer', //drawer or mini-cart drawerDirection: 'right', //cart drawer from left or right displayModal: false, //display success modal when adding product to cart moneyFormat: '${{amount}}', //template for money format when displaying money }
💰 Currency options
Price is converted to money with Shopify's
theme-currency script. You can check it out here:
Default currency is Dollar ($). If your shop uses different currency you can change the output of money inside goCart.js with
moneyFormat option.
Options accepts template for Shopify's
theme-currency script.
{ moneyFormat: '${{amount}}' }
This will print
$50.00.
Changing the template you can change how your money is displayed, so:
{ moneyFormat: '€{{amount}}' }
will print
€50.00, and
{ moneyFormat: '{{amount}} HRK' }
will print
50.00 HRK.
❓ Drawer and mini cart modes
goCart.js has two cart modes - drawer and mini cart. Drawer is mode where cart comes from left or right outside of the visible viewport. Mini cart mode flies under the cart button (cart icon) in header. Both are very popular these days and you can change the cart layout with goCart.js within seconds. If you are using Drawer mode and you are keen on performance, you can even remove the mini cart liquid code from
go-cart-button.liquid. Liquid code for the mini cart mode is between the mini cart commented code inside that file:
<!--go cart mini cart--> ... <!--end go cart mini cart-->
Or if you are using only mini cart and you are keen on performance you can remove the cart drawer liquid code from
go-cart.liquid. Liquid code for the cart drawer mode is between the cart drawer commented code inside that file:
<!--go cart drawer--> ... <!--end go cart drawer-->
🌐 Browser Compatibility
goCart.js works in all modern browsers, IE11 and above is supported.
✅ License
MIT License | https://preview.npmjs.com/package/@brave-agency/gocart | CC-MAIN-2021-21 | refinedweb | 1,059 | 58.79 |
Predicate<T> Delegate
Represents the method that defines a set of criteria and determines whether the specified object meets those criteria.
Assembly: mscorlib (in mscorlib.dll)
Type Parameters
- in.
Parameters
- obj
- Type: T
The object to compare against the criteria defined within the method represented by this delegate.
Return ValueType: System.Boolean
true if obj meets the criteria defined within the method represented by this delegate; otherwise, false.
The following code example uses a Predicate<T> delegate with the Array.Find<T> method to search an array of Point structures. The method the delegate represents, ProductGT10, returns true if the product of the X and Y fields is greater than 100,000. The Find<T> method calls the delegate for each element of the array, stopping at the first point that meets the test condition.
using System; using System.Drawing; public class Example { public static void Main() { // Create an array of five Point structures. Point[] points = { new Point(100, 200), new Point(150, 250), new Point(250, 375), new Point(275, 395), new Point(295, 450) }; // To find the first Point structure for which X times Y // is greater than 100000, pass the array and a delegate // that represents the ProductGT10 method to the static // Find method of the Array class. Point first = Array.Find(points, ProductGT10); // Note that you do not need to create the delegate // explicitly, or to specify the type parameter of the // generic method, because the C# compiler has enough // context to determine that information for you. // Display the first structure found. Console.WriteLine("Found: X = {0}, Y = {1}", first.X, first.Y); } // This method implements the test condition for the Find // method. private static bool ProductGT10(Point p) { if (p.X * p.Y > 100000) { return true; } else { return false; } } } /* This code example produces the following output: Found: X = 275, Y =. | http://msdn.microsoft.com/en-US/library/bfcke1bz(v=vs.100).aspx | CC-MAIN-2014-42 | refinedweb | 305 | 65.62 |
Jakub Maly
- Total activity 40
- Last activity
- Member since
- Following 0 users
- Followed by 0 users
- Votes 0
- Subscriptions 15
Jakub Maly created a post,
Extending Go To Everything navigationHi Jetbrains, I would like to extend the functionality of Go To Everything window (to also search some specific config files used in our solution). Is this somehow possible (or where should I start...
Jakub Maly created a post,
ReSharper performanceYes, this will be another post about poor performance of R#.I've been using R# since version 4. At that time, our work solution comprised of slightly above 100 projects and it was not possible to u...
Jakub Maly created a post,
Solution with unloaded projectsWhen I unload some projects in my solution, the types and namespaces from these are shown as red and errors in the rest of my projects (Cannot resolve symbol 'xxx'). Why is it so? It is extremely a... | https://resharper-support.jetbrains.com/hc/en-us/profiles/1379933932-Jakub-Maly | CC-MAIN-2019-22 | refinedweb | 153 | 60.75 |
Up to [DragonFly] / src / sys / netgraph / bpf
Request diff between arbitrary revisions
Keyword substitution: kv
Default branch: MAIN
Catch up a bit with FreeBSD netgraph by replacing *LEN constants with *SIZ constants which already account space for trailing '\0's. Submitted-by: "Nuno Antunes" <nuno.antunes@gmail.com> Obtained from: FreeBSD (sorta).
gcc2 doesn't like ary[] inside structures. Add __ARRAY_ZERO to sys/cdefs.h to take care of the differences between GCC2 and GCC3.
C99 specify ary[] instead of ary[0] in structure. Obtained-from: FreeBSD-5 Submitted-by: YONETANI Tomokazu <qhwt+dragonfly-submit@les.ath.cx>
Add the DragonFly cvs id and perform general cleanups on cvs/rcs/sccs ids. Most ids have been removed from !lint sections and moved into comment sections.
import from FreeBSD RELENG_4 1.2.4.3 | http://www.dragonflybsd.org/cvsweb/src/sys/netgraph/bpf/ng_bpf.h?f=h | CC-MAIN-2015-06 | refinedweb | 133 | 53.58 |
In M1 there's no transitive dependencies, thus your users will have to
define each dependency one by one.
But to improve the conversion between m1 and m2 poms for the repository, if
you deploy VFS with m1 you can add the following setting :
Arnaud
On 7/27/06, Mario Ivankovits <mario@ops.co.at> wrote:
>
> Hi Jörg!
> >.
> Not only licensing troubles, also the thing with snapshot/not-released
> dependencies.
>
> bz2 and tar hurts if they are not at least easily pluggable, sure, I can
> copy compress (its not that big) to VFSs codebase (to a different
> namespace), then, only smb and webdav is missing.
> Its an option, but I like the snapshot jar way more.
>
> > As marked out in the other thread, marking dependencies as optional is
> perfectly valid.
> >
> Uhm ... this is not possible with maven 1, is it? Could you give me a
> hint please.
>
>
> Ciao,
> Mario
>
>
> ---------------------------------------------------------------------
> To unsubscribe, e-mail: commons-dev-unsubscribe@jakarta.apache.org
> For additional commands, e-mail: commons-dev-help@jakarta.apache.org
>
> | http://mail-archives.apache.org/mod_mbox/commons-dev/200607.mbox/%3C262c6c680607270656q5b8bc834i60901c3eca4caa12@mail.gmail.com%3E | CC-MAIN-2014-10 | refinedweb | 170 | 66.13 |
Announcement - "No Closures" prototype.
Here is a method declared to be castable as a Runnable
import net.java.dev.rapt.anon.As;
class MyClass {
@As(Runnable.class)
void slowStuff() {
... // whatever
}
}
In order to cast that method to a Runnable, you do this
new Thread(Runnables.slowStuff(this)).start();
Thats right, you get the Runnable by calling a static method on the Runnables class, and pass the object that owns the method.
So how does this work?.
Example).
What's Hot and What's Not.
I don't think this is as good as any of the closures proposals, except in one respect, you don't have to wait for JDK 7 to use it.
- Printer-friendly version
- brucechapman's blog
- 4153 reads
by hlovatt - 2008-03-17 20:41
by rdjackson - 2008-03-20 18:37Thanks for the link to the Netbeans wiki on how to get this setup so that things just work. I can't over express How much I really like this Idea and solution to the "No Closures" needed approach.
by rdjackson - 2008-03-19 19:30When
by rdjackson - 2008-05-21 18:45Any updates on when you will get around to releasing the code for this? I'm still very interested in taking a look at it.
by tobega - 2008-03-11 03:42Cool!
by aberrant - 2008-03-10 09:25From
by opinali - 2008-03-10 08:26 :)
by mcnepp - 2008-03-10 08:03Hi
by brokenshard - 2008-03-11 22:02I agree with fabriziogiudici, just the extra 's' may cause some problems...
by stefan_schulz - 2008-03-08 05:50I.
by zero - 2008-03-08 02:42really cool, let's hope the news spread! looking forward for the next feature release :-)
by fabriziogiudici - 2008-03-08 01:58 | https://weblogs.java.net/blog/brucechapman/archive/2008/03/anouncement_no.html | CC-MAIN-2014-10 | refinedweb | 293 | 75 |
urlmatch 0.0.2
Python library for matching URLs.
Use urlmatch to verify that URLs conform to certain patterns. The library and match patterns are based heavily on the Google Chrome Extension match patterns.
Usage
from urlmatch import urlmatch match_pattern = 'http://*.example.com/*' urlmatch(match_pattern, '') # True urlmatch(match_pattern, '') # True urlmatch(match_pattern, '') # False urlmatch(match_pattern, '') # False
Options
There are a few options that affect how the match patterns work.
- path_re1uired (default is True) - a bool which dictates whether the match pattern must have path
- fuzzy_scheme (default is False) - a bool which dictates whether the scheme should be matched “fuzzily.” if this is true, then any valid scheme (*, http, https) will match both http and https
- ‘http_auth_allowed’ (default is True) - bool which dictates whether URLs with HTTP Authentication in the URL should be allowed or not
Match pattern syntax
The basic match pattern syntax is simple:
<url-pattern> := <scheme>://<host><path> <scheme> := '*' | 'http' | 'https' <host> := '*' | '*.' <any char except '/' and '*'>+ <path> := '/' <any chars>
Examples
- http://*/* - matches any URL that uses the http scheme
- https://*/* - matches any URL that uses the https scheme
- http://*/test* - matches any URL that uses the http scheme and has a path that starts with test
- *://test.com/* - matches any url with the domain test.com
- http://*.test.com - matches test.com and any subdomain of test.com
- - matches the exact URL
Bugs
If you find an issue, let me know in the issues section!
Contributing
From the Rubinius about it.
How To Contribute
- Clone: git@github.com:jessepollak/urlmatch/urlmatch.git
Otherwise, you can continue to hack away in your own fork.
- Downloads (All Versions):
- 5 downloads in the last day
- 69 downloads in the last week
- 226 downloads in the last month
- Author: Jesse Pollak
- Categories
- Package Index Owner: jessepollak
- DOAP record: urlmatch-0.0.2.xml | https://pypi.python.org/pypi/urlmatch | CC-MAIN-2015-14 | refinedweb | 300 | 62.68 |
20 February 2013 21:57 [Source: ICIS news]
HOUSTON (ICIS)--A Brazilian federal judge on Wednesday dropped criminal charges against ?xml:namespace>
A Brazilian federal prosecutor had filed criminal charges and two civil lawsuits totalling $22bn (€17bn) against the defendants over alleged lack of planning and environmental management.
The spill and seepage was estimated at more than 3,000 bbl.
“Chevron Brasil Upstream Frade is pleased by the court's decision,” said James Craig, Chevron’s media advisor for Africa and
Transocean was equally pleased with the judge’s decision.
"We welcome this news that the court recognised with respect to the Frade event of November 2011 that Transocean crews did exactly what they were trained to do, acting responsibly, appropriately and quickly while always maintaining safety as their top priority," Transocean spokesman Guy Cantwell said.
Chevron is awaiting approval from Brazilian regulator ANP to restart operations at the Frade field, the company said on Wednesday.
In December, a Brazilian court dismissed an injunction against Chevron that barred the company and Transocean from operating in the country. The injunction had been issued in response to the November 2011 spill.
Reports surfaced in December that Brazilian authorities and Chevron were discussing settling the civil suits for about $144m, but no further developments have been | http://www.icis.com/Articles/2013/02/20/9642999/brazil-judge-drops-criminal-charges-against-chevron.html | CC-MAIN-2015-22 | refinedweb | 213 | 56.69 |
Migrating a library to .NET Core
Microsoft released .NET Core 1.0 and I want to port some of my .NET libraries to .NET Core. This article describes how to migrate the existing library TinyCsvParser to .NET Core and shows the changes necessary to make it work.
Visual Studio 2015 RC3, .NET Core Tools
The best way to start with .NET Core in Windows is to use Visual Studio and the additional .NET Core 1.0 Tools:
I had the problem, that the intallation of the .NET Core 1.0 for Visual Studio failed due to an errorneous Visual Studio 2015 version check.
You can skip the version check and finish the installation by running the Setup with the
SKIP_VSU_CHECK parameter:
DotNetCore.1.0.0-VS2015Tools.Preview2.exe SKIP_VSU_CHECK=1
The .xproj / .project.json Situation
One of the key goals of .NET Core was to build a platform, which allows to develop applications with Windows, Mac and Linux. MSBuild wasn't Open Source at the time and so Microsoft has developed a new build system. The current build system is based on project.json files.
So the first step for migrating to the current .NET Core version is to migrate the .csproj projects into the new format. I have decided to have projects for both .csproj and project.json, so you can still use Visual Studio 2013 and your existing MSBuild environment to build the library.
Please keep in mind, that Microsoft has recently decided to step back from project.json and intends to move back to a .csproj / MSBuild build system (see the Microsoft announcement on Changes to Project.json). I really, really hope, that some of the very nice features of project.json survive the change.
global.json
The global.json file specifies what folders the build system should search, when resolving dependencies.
TinyCsvParser currently consists of two projects named
TinyCsvParser, which is the library and
TinyCsvParser.Test,
which is is the test project. In the global.json file you can either specify the projects explicitly or let the .NET Core
tooling try to resolve them automatically.
The global.json for TinyCsvParser looks like this:
{ "projects": [ "TinyCsvParser", "TinyCsvParser.Test" ] }
project.json
The projects in .NET Core are specified in a project.json file. The project.json file specifies how the project should be built, it includes the dependencies and specifies the frameworks it works with.
Migrating the TinyCsvParser Project
project.json
In the
frameworks section of the file the target .NET frameworks are specified. I want my library to target .NET 4.5 and .NET Core 1.0.
See the .NET Platform Standard for most recent informations on the .NET platform and informations like Target Framework monikers.
- .NET Documentation: .NET Standard Library
- Andrew Lock: Understanding .NET Core, NETStandard, .NET Standard applications and ASP.NET Core
.NET Core is intended to be a very modular platform, so we also need to include the dependencies necessary to build the project. The packages will be resolved from NuGet.
In the
scripts section, we can define various pre-build and post-build events. In the
postCompile event I have instructed
the build system to pack a NuGet Packages and store it in the current Configuration folder (e.g. Debug / Release). This is a great
feature and makes it very simple to distribute the library.
{ "version": "1.5.0", "title": "TinyCsvParser", "description": "An easy to use and high-performance library for CSV parsing.", "copyright": "Copyright 2016 Philipp Wagner", "authors": [ "Philipp Wagner" ], "packOptions": { "owners": [ "Philipp Wagner" ], "authors": [ "Philipp Wagner" ], "tags": [ "csv", "csv parser" ], "requireLicenseAcceptance": false, "projectUrl": "", "summary": "An easy to use and high-performance library for CSV parsing.", "licenseUrl": "", "repository": { "type": "git", "url": "git://github.com/bytefish/TinyCsvParser" } }, "frameworks": { "net45": {}, "netstandard1.3": { "dependencies": { "System.Runtime": "4.1.0", "System.Runtime.Extensions": "4.1.0", "System.Runtime.InteropServices": "4.1.0", "System.Runtime.InteropServices.RuntimeInformation": "4.0.0", "System.Globalization": "4.0.11", "System.Linq": "4.1.0", "System.Linq.Expressions": "4.1.0", "System.Linq.Parallel": "4.0.1", "System.Text.RegularExpressions": "4.1.0", "System.IO.FileSystem": "4.0.1", "System.Reflection": "4.1.0", "System.Reflection.Extensions": "4.0.1", "System.Reflection.TypeExtensions": "4.1.0" } } }, "scripts": { "postcompile": [ "dotnet pack --no-build --configuration %compile:Configuration%" ] } }
Conditional Compilation for the Reflection API
There were slight changes to the Reflection API in recent .NET Standard Framework versions. An additional call to the
GetTypeInfo method is necessary to access the property informations of a type. I want the library to have a single
code base, so I have used a preprocessor directive to allow conditional compilation.
I target the .NET Standard 1.3 framework with the library, so the
#if directive looks like this:
public static bool IsEnum(Type type) { #if NETSTANDARD1_3 return typeof(Enum).GetTypeInfo().IsAssignableFrom(type.GetTypeInfo()); #else return typeof(Enum).IsAssignableFrom(type); #endif }
Migrating the TinyCsvParser.Test Project
project.json
The NUnit Folks have done an amazing job and provide full support for .NET Core 1.0. In the project.json for the project
we simply need to set the
testRunner to
nunit and include the necessary NUnit dependencies. We reference the
TinyCsvParser as a project dependency.
{ "version": "0.0.0", "testRunner": "nunit", "dependencies": { "dotnet-test-nunit": "3.4.0-beta-1", "NUnit": "3.4.0", "TinyCsvParser": { "target": "project" } }, "frameworks": { "netcoreapp1.0": { "imports": [ "netcoreapp1.0", "portable-net45+win8" ], "dependencies": { "Microsoft.NETCore.App": { "version": "1.0.0", "type": "platform" } }, "buildOptions": { "define": [ "NETCOREAPP" ] } } } }
Conditional Compilation
One of the Unit Tests used the
AppDomain to obtain the current working directory, and write a file to disk. The
AppDomain is
not available in .NET Core for various valid reasons, but it can be replaced with the
AppContext. So for the .NET Core build
the
buildOptions have been used to define a preprocessor symbol for conditional compilation.
The
NETCOREAPP symbol can now be used in the unit test to obtain the current base directory either from the
AppContext or the
AppDomain, depending on the target framework.
#if NETCOREAPP var basePath = AppContext.BaseDirectory; #else var basePath = AppDomain.CurrentDomain.BaseDirectory; #endif
Conclusion
And that's it!
The library is now built for .NET 45 and .NET Core, and the NuGet package is automatically created in a post-build event. Only minimal changes had to be made to build the library against the .NET Core framework. All unit tests went green on first run, and even the NUnit Test Runner in Visual Studio worked without problems.
So migrating an existing project to .NET Core was really easy. Microsoft is currently making hard changes to the .NET ecosystem. And to me it's natural, that a lot of things are still in flux. It was fun to work the project.json based build system and the .NET Core integration in Visual Studio is great. | https://www.bytefish.de/blog/migrating_a_library_to_dotnetcore.html | CC-MAIN-2020-40 | refinedweb | 1,123 | 53.47 |
52563/plot-a-pie-chart-in-python-in-matplotlib
I am trying to plot a pie-chart of the number of models released by every manufacturer, recorded in the data provided. Also, mention the name of the manufacture with the largest releases. Need help on this. I am not able to do it.
car = pd.read_csv("Cars2015.csv")
plt.pie(y, labels=label, autopct="%1.1f%%", shadow=True, startangle=140)
this my code but it gives a different picture
Not sure which dataset you are using. But here's a sample code for your reference:
Please find the below code to solve this problem statement.
import matplotlib.pyplot as plt
import pandas as pd
df = pd.read_csv('C:/Users/ed110374/Desktop/faltu stuff/Cars2015.csv',delimiter=',')
#drop the column which are not require.
df = df.drop(['Model','Type','LowPrice','HighPrice','Drive','CityMPG','HwyMPG','FuelCap','Length'], axis=1)
df = df.drop(['Width','Wheelbase','Height','UTurn','Weight','Acc030','Acc060'],axis=1)
df = df.drop(['QtrMile','PageNum','Size'],axis=1)
#based on make count the number of models
df['count'] = df.groupby('Make')['Make'].transform('count')
df.drop_duplicates('Make',inplace=True)
df.sort_values(by='count',ascending=False,inplace=True)
#Based on the count obtain we will plot our graph.
plt.figure(figsize=(50,8))
ax1 = plt.subplot(121, aspect='equal')
df.plot(kind='pie', y = 'count', ax=ax1, autopct='%1.1f%%',startangle=90, shadow=False, labels=df['Make'], legend = False, fontsize=10)
plt.show()
Hi , Please help
i have created one chart using panda, below code
import matplotlib.pyplot as plt
import pandas as pd
df = pd.read_csv('C:/Users/aku396/test3.csv',delimiter=',')
#drop the column which are not require.
df = df.drop(['Operational Tier 2'],axis=1)
#based on make count the number of models
df['count'] = df.groupby('ID')['ID'].transform('count')
#Based on the count obtain we will plot our graph.
plt.figure(figsize=(50,8))
ax1 = plt.subplot(121, aspect='equal')
df.plot(kind='pie', y = 'count', ax=ax1, autopct='%1.0f%%',startangle=90, shadow=False, labels=df['Operational Tier 1'], legend = False, fontsize=10)
plt.show()
You probably want to use the matrix ...READ MORE
following is the syntax for a boxplot ...READ MORE
You don't have to use two charts. ...READ MORE
yes, you can use "os.rename" for that. ...READ MORE
You could scale your data to the ...READ MORE
Try this, it should work well.
import matplotlib.pyplot ...READ MORE
Many times you want to create a ...READ MORE
This should work well:
import numpy as np
import ...READ MORE
In this case, you can use the ...READ MORE
The in-built variables and functions are defined ...READ MORE
OR
At least 1 upper-case and 1 lower-case letter
Minimum 8 characters and Maximum 50 characters
Already have an account? Sign in. | https://www.edureka.co/community/52563/plot-a-pie-chart-in-python-in-matplotlib | CC-MAIN-2021-43 | refinedweb | 470 | 54.18 |
As of Kubernetes 1.3, DNS is a built-in service launched automatically using the addon manager cluster add-on. For a regular service, the SRV record resolves to the port number and the CNAME:
`my-svc.my-namespace.svc.cluster.local`.
For a headless service, this resolves to multiple answers, one for each pod
that is backing the service, and contains the port number and a CNAME of the pod
of the form
auto-generated-name.my-svc.my-namespace.svc.cluster.local.
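The naming scheme in this section can be sketched as a small helper. This is an illustrative sketch, not a Kubernetes API: the default cluster domain `cluster.local` and the SRV name shape `_<port-name>._<protocol>.<service-name>...` are assumed from the standard kube-dns schema.

```python
# Illustrative sketch of the kube-dns naming scheme described above.
# Assumes the default cluster domain "cluster.local".

def service_dns_name(service, namespace, cluster_domain="cluster.local"):
    """A-record name for a Service."""
    return f"{service}.{namespace}.svc.{cluster_domain}"

def srv_record_name(port_name, protocol, service, namespace,
                    cluster_domain="cluster.local"):
    """SRV-record name for a named port on a Service."""
    return (f"_{port_name}._{protocol}."
            f"{service_dns_name(service, namespace, cluster_domain)}")

print(service_dns_name("my-svc", "my-namespace"))
# my-svc.my-namespace.svc.cluster.local
print(srv_record_name("my-port-name", "tcp", "my-svc", "my-namespace"))
# _my-port-name._tcp.my-svc.my-namespace.svc.cluster.local
```

For a headless service, the same SRV name instead resolves to one answer per backing pod, as described above.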
Previous versions of kube-dns made names of the form
`my-svc.my-namespace.cluster.local` (the 'svc' level was added later). This
is no longer supported.
With v1.2, users can specify a Pod annotation,
pod.beta.kubernetes.io/hostname, to specify what the Pod’s hostname should be.
The annotation value, if specified, takes precedence over the Pod's name as the hostname of the Pod.
For example, given a Pod with annotation
pod.beta.kubernetes.io/hostname: my-pod-name, the Pod will have its hostname set to “my-pod-name”.
With v1.3, the PodSpec has a
hostname field, which can be used to specify the Pod’s hostname. This field value takes precedence over the
pod.beta.kubernetes.io/hostname annotation value.
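The precedence between the PodSpec field, the annotation, and the Pod's name can be sketched as follows. This is an illustrative helper, not Kubernetes code; only the annotation key is taken from the text above.

```python
# Sketch of the hostname precedence rule described above:
# v1.3 PodSpec `hostname` field > v1.2 annotation > the Pod's name.

def effective_hostname(pod_name, annotations, spec_hostname=None):
    if spec_hostname:
        return spec_hostname
    return annotations.get("pod.beta.kubernetes.io/hostname", pod_name)

print(effective_hostname("my-pod", {}))
# my-pod
print(effective_hostname("my-pod",
                         {"pod.beta.kubernetes.io/hostname": "my-pod-name"}))
# my-pod-name
print(effective_hostname("my-pod",
                         {"pod.beta.kubernetes.io/hostname": "x"},
                         spec_hostname="busybox-1"))
# busybox-1
```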
v1.2 introduces a beta feature where the user can specify a Pod annotation,
pod.beta.kubernetes.io/subdomain, to specify the Pod’s subdomain.
The final domain will be "<hostname>.<subdomain>.<namespace>.svc.cluster.local".
With v1.3, the PodSpec has a
subdomain field, which can be used to specify the Pod’s subdomain. This field value takes precedence over the
pod.beta.kubernetes.io/subdomain annotation value.
Example:
```yaml
apiVersion: v1
kind: Pod
metadata:
  name: busybox
  namespace: default
spec:
  hostname: busybox-1
  subdomain: default
```

Given a Pod with the hostname set to "foo" and the subdomain set to "bar", and a headless Service named "bar" in the same namespace, the pod will see its own FQDN as "foo.bar.my-namespace.svc.cluster.local". DNS serves an A record at that name, pointing to the Pod's IP.
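The FQDN rule from the example above can be expressed as a one-line helper (an illustrative sketch; `cluster.local` is assumed as the cluster domain):

```python
# Sketch of the pod FQDN rule described above: hostname + subdomain
# (matching a headless Service name) + namespace + cluster domain.

def pod_fqdn(hostname, subdomain, namespace, cluster_domain="cluster.local"):
    return f"{hostname}.{subdomain}.{namespace}.svc.{cluster_domain}"

print(pod_fqdn("foo", "bar", "my-namespace"))
# foo.bar.my-namespace.svc.cluster.local
print(pod_fqdn("busybox-1", "default", "default"))
# busybox-1.default.default.svc.cluster.local
```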
With v1.2, the Endpoints object also has a new annotation
endpoints.beta.kubernetes.io/hostnames-map. Its value is the json representation of map[string(IP)][endpoints.HostRecord], for example: ‘{“10.245.1.6”:{HostName: “my-webserver”}}’.
If the Endpoints are for a headless service, an A record is created with the format `<hostname>.<service-name>.<namespace>.svc.cluster.local`.
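As a sketch of how such an annotation could be consumed, the following parses the map and derives the A-record names. The annotation value is written here as strict JSON (the example above renders it informally), and the service and namespace names are made up for illustration.

```python
import json

# Sketch: derive headless-service A records from the (now deprecated)
# endpoints.beta.kubernetes.io/hostnames-map annotation value.

annotation = '{"10.245.1.6": {"HostName": "my-webserver"}}'

def headless_a_records(annotation_json, service, namespace,
                       cluster_domain="cluster.local"):
    records = {}
    for ip, rec in json.loads(annotation_json).items():
        name = f"{rec['HostName']}.{service}.{namespace}.svc.{cluster_domain}"
        records[name] = ip
    return records

print(headless_a_records(annotation, "my-svc", "default"))
# {'my-webserver.my-svc.default.svc.cluster.local': '10.245.1.6'}
```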
With v1.3, the Endpoints object can specify the
hostname for any endpoint, along with its IP. The hostname field takes precedence over the hostname value
that might have been specified via the
endpoints.beta.kubernetes.io/hostnames-map annotation.
With v1.3, the following annotations are deprecated:
pod.beta.kubernetes.io/hostname,
pod.beta.kubernetes.io/subdomain,
endpoints.beta.kubernetes.io/hostnames-map
Create a file named busybox.yaml with the following contents:
apiVersion: v1
kind: Pod
metadata:
  name: busybox
  namespace: default
spec:
  containers:
  - image: busybox
    command:
    - sleep
    - "3600"
    imagePullPolicy: IfNotPresent
    name: busybox
  restartPolicy: Always
Then create a pod using this file:
kubectl create -f busybox.yaml
You can get its status with:
kubectl get pods busybox
You should see:
NAME      READY     STATUS    RESTARTS   AGE
busybox   1/1       Running   0          <some-time>
Once that pod is running, you can exec nslookup in that environment:
kubectl exec busybox -- nslookup kubernetes.default
You should see something like:
Server:    10.0.0.10
Address 1: 10.0.0.10

Name:      kubernetes.default
Address 1: 10.0.0.1
If you see that, DNS is working correctly.
If the nslookup command fails, check the following:
Take a look inside the resolv.conf file. (See “Inheriting DNS from the node” and “Known issues” below for more information)
cat /etc/resolv.conf
Verify that the search path and name server are set up like the following (note that the search path may vary for different cloud providers):

search default.svc.cluster.local svc.cluster.local cluster.local
nameserver 10.0.0.10

Errors such as the following indicate a problem with the kube-dns add-on or an associated Service:

$ kubectl exec busybox -- nslookup kubernetes.default
Server: 10.0.0.10
Address 1: 10.0.0.10
nslookup: can't resolve 'kubernetes.default'

Check the DNS pod's logs for each container:

kubectl logs --namespace=kube-system $(kubectl get pods --namespace=kube-system -l k8s-app=kube-dns -o name) -c kubedns
kubectl logs --namespace=kube-system $(kubectl get pods --namespace=kube-system -l k8s-app=kube-dns -o name) -c dnsmasq
kubectl logs --namespace=kube-system $(kubectl get pods --namespace=kube-system -l k8s-app=kube-dns -o name) -c healthz

Look for any suspicious entries; log lines are prefixed with W (warning), E (error), or F (fatal).
Verify that the DNS service is up by using the kubectl get service command:

kubectl get svc --namespace=kube-system

You should see:
NAME       CLUSTER-IP   EXTERNAL-IP   PORT(S)         AGE
...
kube-dns   10.0.0.10    <none>        53/UDP,53/TCP   1h
...
If you have created the service, or if it should have been created by default but does not appear, see the debugging services page for more information.
You can verify that dns endpoints are exposed by using the
kubectl get endpoints command.
kubectl get ep kube-dns --namespace=kube-system
You should see something like:
The running Kubernetes DNS pod holds 3 containers: kubedns, dnsmasq, and a health check called healthz. The kubedns process watches the Kubernetes master for changes in Services and Endpoints, and maintains in-memory lookup structures to serve DNS requests. The dnsmasq container adds DNS caching to improve performance. The healthz container provides a single health-check endpoint while performing dual health checks (for dnsmasq and kubedns).
The DNS pod is exposed as a Kubernetes Service with a static IP. Once assigned, the kubelet passes the DNS server to each container using the --cluster-dns=10.0.0.10 flag.
DNS names also need domains. The local domain is configurable in the kubelet using
the flag
--cluster-domain=<default local domain>
The Kubernetes cluster DNS server (based off the SkyDNS library) supports forward lookups (A records), service lookups (SRV records) and reverse IP address lookups (PTR records).
When running a pod, kubelet will prepend the cluster DNS server and search paths to the node’s own DNS settings. If the node is able to resolve DNS names specific to the larger environment, pods should be able to, also. See “Known issues” below for a caveat.
If you don’t want this, or if you want a different DNS config for pods, you can
use the kubelet’s
--resolv-conf flag. Setting it to “” means that pods will
not inherit DNS. Setting it to a valid file path means that kubelet will use
this file instead of
/etc/resolv.conf for DNS inheritance..
Classes and Instances
- Unlike C++, classes in Python are objects in their own right, even without instances. They are just self-contained namespaces.
- Therefore, as long as we have a reference to a class, we can set or change its attributes anytime we want.
Defining Classes
The following statement makes a class with no attributes attached, and in fact, it's an empty namespace object:
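The code sample did not survive extraction; here is a minimal sketch of the empty class being described (the attribute name is illustrative):

```python
class Student:
    pass

# The class object exists even with no instances;
# we can attach attributes to it from outside:
Student.name = "Jack"
print(Student.name)    # -> Jack
```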
- The name of this class is Student, and it doesn't inherit from any other class. Class names are usually capitalized, but this is only a convention, not a requirement.
- Everything in a class is indented, just like the code within a function, if statement, for loop, or any other block of code. The first line not indented is outside the class.
- In the code, pass is the no-operation statement; it is used here because we are stubbing out the Student class. The pass statement in Python is like an empty set of curly braces {} in Java or C.
- Then, we attached attributes to the class by assigning a name to it outside of the class. In this case, the class is basically an object with field names attached to it.
Note that this is working even though there are no instances of the class yet.
The __init__() method
- Many classes are inherited from other classes, but the one in this example is not.
In Python, objects are created in two steps:
- __new__(): constructs the object
- __init__(): initializes the object
However, it's very rare to actually need to implement __new__() because Python constructs our objects for us. So, in most cases, we usually only implement the special method __init__().
- Let's create a class that stores a string and a number:
- When a def appears inside a class, it is usually known as a method.
- It automatically receives a special first argument, self, that provides a handle back to the instance to be processed. Methods with two underscores at the start and end of names are special methods.
- The __init__() method is called immediately after an instance of the class is created. It would be tempting to call this the constructor of the class.
- It's really tempting, because it looks like a C++ constructor, and by convention, the __init__() method is the first method defined for the class.
- It appears to be acting like a constructor because it's the first piece of code executed in a newly created instance of the class.
- However, it's not like a constructor, because the object has already been constructed by the time the __init__() method is called, and we already have a valid reference to the new instance of the class.
- The first parameter of the __init__() method, self, is equivalent to this in C++. Though we do not have to pass it, since Python will do it for us, we must put self as the first parameter of nonstatic methods. The self is always explicit in Python to make attribute access more obvious.
- The self is always a reference to the current instance of the class. Though this argument fills the role of the reserved word this in C++ or Java, self is not a reserved word in Python, merely a naming convention. Nonetheless, please don't call it anything but self; this is a very strong convention.
When a method assigns to a self attribute, it creates an attribute in an instance because self refers to the instance being processed.
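Putting the pieces above together, the class might look like this (a sketch; the attribute names are illustrative):

```python
class Student:
    """A class that stores a string and a number."""

    def __init__(self, name, number):
        # self is the instance being initialized; these
        # assignments create attributes in its namespace.
        self.name = name
        self.number = number
```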
Instantiating classes
- To instantiate a class, simply call the class as if it were a function, passing the arguments that the __init__() method requires.
- The return value will be the newly created object. In Python, there is no explicit new operator like there is in C++ or Java.
- So, we simply call a class as if it were a function to create a new instance of the class:
- We are creating an instance of the Student class and assigning the newly created instance to the variable s.
- We are passing one parameter, args, which will end up as the argument in Student's __init__() method.
- s is now an instance of the Student class. Every class instance has a built-in attribute, __class__, which is the object's class.
- Java programmers may be familiar with the Class class, which contains methods like getName() and getSuperclass() to get metadata information about an object. In Python, this kind of metadata is available through attributes, but the idea is the same.
- We can access the instance's docstring just as with a function or a module. All instances of a class share the same docstring.
We can use the Student class defined above as following:
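A hedged sketch of the usage being described — calling the class creates an instance, and __class__ and __doc__ give us metadata:

```python
class Student:
    """This class stores a name and a number."""

    def __init__(self, name, number):
        self.name = name
        self.number = number

# No explicit 'new' operator: calling the class creates an instance.
s = Student("Alice", 7)
print(s.__class__)             # e.g. <class '__main__.Student'>
print(s.__class__.__name__)    # -> Student
print(s.__doc__)               # -> This class stores a name and a number.
```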
Unlike C++, the attributes of Python object are public, we can access them using the dot(.) operator:
We can also assign a new value to the attribute:
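Attribute access and assignment can be sketched as follows (names are illustrative):

```python
class Student:
    def __init__(self, name, number):
        self.name = name
        self.number = number

s = Student("Alice", 7)
print(s.name)      # -> Alice   (attributes are public)
s.name = "Bob"     # assigning a new value to the attribute
print(s.name)      # -> Bob
```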
How about the object destruction?
- Python has automatic garbage collection. Actually, when an object is about to be garbage-collected, its __del__() method is called, with self as its only argument. But we rarely use this method.
Instance variables
Let's look at another example:
What is self.id?
- It's an instance variable. It is completely separate from id, which was passed into the __init__() method as an argument. self.id is global to the instance.
- That means that we can access it from other methods. Instance variables are specific to one instance of a class.
- For example, if we create two Student instances with different id values, they will each remember their own values.
Then, let's make two instances:
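The example can be sketched like this (the parameter name id follows the discussion above, even though it shadows the built-in):

```python
class Student:
    def __init__(self, id):
        self.id = id    # instance variable, separate from the 'id' argument name

s1 = Student(100)
s2 = Student(200)
print(s1.id)    # -> 100
print(s2.id)    # -> 200  (each instance remembers its own value)
```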
- Here, we generated instance objects. These objects are just namespaces that have access to their classes' attributes.
- The two instances have links back to the class from which they were created. If we use an instance with the name of an attribute of class object, Python retrieves the name from the class.
- Note that neither s1 nor s2 has a setData() attribute of its own. So, Python follows the link from instance to class to find the attribute.
- In the setData() function inside Student, the value passed in is assigned to self.data. Within a method, self automatically refers to the instance being processed (s1 or s2). Thus, the assignment store values in the instances' namespaces, not the class's.
- When we call the class's display() method to print self.data, we see the self.data differs in each instance. But the name display() itself is the same in s1 and s2:
- Note that we stored different object types in the data member in each instance. In Python, there are no declarations for instance attributes (members).
- They come into existence when they are assigned values. The attribute named data does not even exist in memory until it is assigned within the setData() method.
example A
Then, we generate instance objects:
- The instance objects are just namespaces that have access to their classes' attributes. Actually, at this point, we have three objects: a class and two instances.
- Note that neither a nor a2 has a setData attribute of its own. However, the value passed into the setData is assigned to self.data. Within a method, self automatically refers to the instance being processed (a or a2).
- So, the assignments store values in the instances' namespaces, not the class's. Methods must go through the self argument to get the instance to be processed. We can see it from the output:
As we expected, we stored a value for each instance object even though we used the same method, display. The self made all the difference! It refers to instances.
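Example A can be reconstructed along these lines (the class name and the values are illustrative):

```python
class MyClassA:
    def setData(self, value):
        # self refers to the instance being processed (a or a2),
        # so the value is stored in that instance's namespace
        self.data = value

    def display(self):
        print(self.data)

a = MyClassA()
a2 = MyClassA()
a.setData("King Arthur")    # same as MyClassA.setData(a, "King Arthur")
a2.setData(3.14159)         # a different object type in each instance
a.display()                 # -> King Arthur
a2.display()                # -> 3.14159
```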
example B
Superclass is listed in parenthesis in a class header as we see the example below.
MyClassB redefines the display method of its superclass: it replaces the display attribute while still inheriting the setData method from MyClassA, as we see below:
But an instance of MyClassA still uses the display method previously defined in MyClassA.
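Example B can be sketched as follows (the printed format is illustrative):

```python
class MyClassA:
    def setData(self, value):
        self.data = value

    def display(self):
        print(self.data)

class MyClassB(MyClassA):        # the superclass is listed in parentheses
    def display(self):           # redefines (overrides) display
        print('Current data = "%s"' % self.data)

b = MyClassB()
b.setData(42)    # setData is still inherited from MyClassA
b.display()      # -> Current data = "42"

a = MyClassA()
a.setData(42)
a.display()      # -> 42  (instances of MyClassA keep the original display)
```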
Methods
Here is an example of a class Rectangle with a member function returning its area.
- Note that this version is using direct attribute access for the width and height.
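A minimal sketch of the Rectangle class with direct attribute access:

```python
class Rectangle:
    def __init__(self, width, height):
        # direct attribute access: width and height are plain attributes
        self.width = width
        self.height = height

    def area(self):
        return self.width * self.height

r = Rectangle(3, 4)
print(r.area())    # -> 12
```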
We could have used the following implementing setter and getter methods:
Object attributes are where we store our information, and in most cases the following syntax is enough:
However, there are cases when more flexibility is required. For example, to validate the setter and getter methods, we may need to change the whole code like this:
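A hedged sketch of the setter/getter version, with validation added to illustrate why the change is invasive (method names are illustrative):

```python
class Rectangle:
    def __init__(self, width, height):
        self.set_width(width)
        self.set_height(height)

    def get_width(self):
        return self._width

    def set_width(self, width):
        if width <= 0:
            raise ValueError("width must be positive")
        self._width = width

    def get_height(self):
        return self._height

    def set_height(self, height):
        if height <= 0:
            raise ValueError("height must be positive")
        self._height = height

    def area(self):
        return self._width * self._height
```

The cost is that every access site must now be written as r.get_width() instead of r.width, which is exactly the flexibility problem that properties solve.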
Properties
- The solution for the issue of flexibility is to allow us to run code automatically on attribute access, if needed.
- The properties allow us to route a specific attribute access (attribute's get and set operations) to functions or methods we provide, enabling us to insert code to be run automatically.
- A property is created by assigning the result of a built-in function to a class attribute:
We pass
- fget: a function for intercepting attribute fetches
- fset: a function for assignments
- fdel: a function for attribute deletions
- doc: receives a documentation string for the attribute
If we go back to the earlier code, and add property(), then the code looks like this:
We can use the class as below:
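A sketch of the property-based version that traces attribute accesses (the printed messages are illustrative):

```python
class Rectangle:
    """Rectangle whose width access is routed through property functions."""

    def __init__(self, width, height):
        self._width = width
        self.height = height

    def get_width(self):            # fget: intercepts attribute fetches
        print("fetching width")
        return self._width

    def set_width(self, value):     # fset: intercepts assignments
        print("setting width")
        self._width = value

    def del_width(self):            # fdel: intercepts deletions
        del self._width

    # property(fget, fset, fdel, doc)
    width = property(get_width, set_width, del_width, "the rectangle's width")

r = Rectangle(3, 4)
print(r.width)    # runs get_width: prints "fetching width", then 3
r.width = 10      # runs set_width: prints "setting width"
```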
The example above simply traces attribute accesses. However, properties usually do compute the value of an attribute dynamically when fetched, as the following example illustrates:
- The class defines an attribute V that is accessed as though it were static data. However, it really runs code to compute its value when fetched.
- When the code runs, the value is stored in the instance as state information, but whenever we retrieve it via the managed attribute, its value is automatically squared.
Again, note that the fetch computes the square of the instance's data.
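The computed-attribute example can be sketched as follows (class and attribute names are illustrative):

```python
class Squared:
    def __init__(self, value):
        self._value = value         # state stored in the instance

    def get_v(self):
        return self._value ** 2     # computed on every fetch

    V = property(get_v, None, None, "the squared value")

s = Squared(4)
print(s.V)    # -> 16  (the fetch computes the square)
s._value = 5
print(s.V)    # -> 25
```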
Operator overloading: __cmp__() (Python 2.x; removed in 3.0)
- By implementing __cmp__() method, all of the comparison operators(<, ==, !=, >, etc.) will work.
So, let's add the following to our Rectangle class:
- Note that we used the built-in cmp() function to implement __cmp__. The cmp() function returns -1 if the first argument is less than the second, 0 if they are equal, and 1 if the first argument is greater than the second.
- For Python 3.0, we get TypeError: unorderable types. So, we need to use specific methods since the __cmp__() and cmp() built-in functions are removed in Python 3.0.
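In Python 3, the same effect is achieved with the rich-comparison methods. Here is a hedged sketch that compares rectangles by area, using functools.total_ordering to fill in the remaining operators:

```python
from functools import total_ordering

@total_ordering
class Rectangle:
    def __init__(self, width, height):
        self.width = width
        self.height = height

    def area(self):
        return self.width * self.height

    def __eq__(self, other):
        return self.area() == other.area()

    def __lt__(self, other):
        return self.area() < other.area()
    # total_ordering derives <=, >, and >= from __eq__ and __lt__

print(Rectangle(2, 3) < Rectangle(3, 3))     # -> True   (6 < 9)
print(Rectangle(2, 3) == Rectangle(3, 2))    # -> True   (6 == 6)
```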
Operator overloading: __str__
The __str__ is the 2nd most commonly used operator overloading in Python after __init__. The __str__ is run automatically whenever an instance is converted to its print string.
- Let's use the previous example:
If we print the instance, it displays the object as a whole as shown below.
- It displays the object's class name, and its address in memory which is basically useless except as a unique identifier.
- So, let's add the __str__ method:
- The code above extends our class to give a custom display that lists attributes when our class's instances are displayed as a whole, instead of relying on the less useful display. Note that we're doing string % formatting to build the display string in __str__.
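The __str__ addition can be sketched as (the display format is illustrative):

```python
class Rectangle:
    def __init__(self, width, height):
        self.width = width
        self.height = height

    def __str__(self):
        # string % formatting builds the display string
        return "Rectangle(width=%s, height=%s)" % (self.width, self.height)

r = Rectangle(3, 4)
print(r)    # -> Rectangle(width=3, height=4)
```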
__str__ vs __repr__
The difference between __str__ and __repr__ are not that obvious.
- When we use print, Python will look for an __str__ method in our class. If it finds one, it will call it. If it does not, it will look for a __repr__ method and call it. If it cannot find one, it will create an internal representation of our object.
- We do not get much information from print(x), or from just echoing the object x. That's why we customize the class by using __str__.
But not when we use print(myObjects). Note that the instances are inside a list:
So, we need to define __repr__:
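A sketch defining both methods, showing which one is used when (display formats are illustrative):

```python
class Rectangle:
    def __init__(self, width, height):
        self.width = width
        self.height = height

    def __str__(self):
        # used by print() and str()
        return "Rectangle(%s x %s)" % (self.width, self.height)

    def __repr__(self):
        # used by echoes and for objects nested in containers
        return "Rectangle(width=%r, height=%r)" % (self.width, self.height)

r = Rectangle(3, 4)
print(r)      # __str__:  Rectangle(3 x 4)
print([r])    # __repr__: [Rectangle(width=3, height=4)]
```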
In his book, Learning Python, Mark Lutz summarizes as follows:
- Implement __str__ for print and str(), and __repr__ for interactive echoes and nested appearances, so that each audience gets an appropriate display.
sys.argv
- sys.argv is the list of arguments passed to the Python program.
- The first argument, sys.argv[0], is actually the name of the program.
It exists so that we can change our program's behavior depending on how it was invoked. For example:
- Whether sys.argv[0] is a full pathname or not is operating-system dependent.
- If the command was executed using the -c command line option to the interpreter, sys.argv[0] is set to the string '-c'.
- If no script name was passed to the Python interpreter, sys.argv[0] is the empty string.
sys.argv[1] is thus the first argument we actually pass to the program.
If we run it with or without any argument, we get the output below:
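The script being described might look like this (a sketch; the printed labels are illustrative):

```python
import sys

def main():
    print("program name:", sys.argv[0])
    if len(sys.argv) > 1:
        print("first argument:", sys.argv[1])
    else:
        print("no arguments given")

if __name__ == "__main__":
    main()
```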
getopt.getopt(args, options[, long_options])
- The getopt.getopt() function parses command-line options and their arguments. In the short-option string, an option letter that requires an argument is followed by a colon (:).
- For long_options, an option name that requires an argument is followed by an equal sign (=). For example, we can run with the long options:
With short options:
Note that the long option does not have to be fully matched:
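The getopt example did not survive extraction; here is a hedged sketch (the file name and option names are illustrative):

```python
import getopt

# Short options: 'f' requires an argument (note the ':'), 'v' does not.
# Long options: 'file' requires an argument (note the '='), 'verbose' does not.
argv = ['-f', 'notes.txt', '--verbose', 'rest']
opts, remainder = getopt.getopt(argv, 'f:v', ['file=', 'verbose'])
print(opts)         # -> [('-f', 'notes.txt'), ('--verbose', '')]
print(remainder)    # -> ['rest']

# A long option only needs an unambiguous prefix:
opts2, _ = getopt.getopt(['--verb'], '', ['verbose'])
print(opts2)        # -> [('--verbose', '')]
```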
__call__
- Python runs a __call__ method when an instance is called as a function form. This is more useful when we do some interfacing work with APIs expecting functions. On top of that, we can also retain state info as we see in the later examples of this section.
- It is believed to be the third most commonly used operator overloading method, behind the __init__ and the __str__ and __repr__ according to Mark Lutz (Learning Python).
- Here is an example with a __call__ method in a class. The __call__ method simply prints out the arguments it takes via keyword args.
As another example with a little bit more mixed arguments:
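The __call__ examples can be sketched as follows (the class name is illustrative):

```python
class Prototype:
    def __call__(self, *args, **kwargs):
        # runs when the instance itself is called like a function
        print("positional:", args, "keyword:", kwargs)
        return args, kwargs

p = Prototype()
result = p(1, 2, x=3)    # -> positional: (1, 2) keyword: {'x': 3}
```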
While the __call__ method allows us to use class instances to emulate functions, as we saw in the above examples, there is another use of the __call__ method: we can retain state info:
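A sketch of a stateful callable (the class name and values are illustrative):

```python
class Accumulator:
    """A callable object that retains state between calls."""

    def __init__(self):
        self.total = 0

    def __call__(self, value):
        self.total += value    # state survives across calls
        return self.total

acc = Accumulator()
print(acc(10))     # -> 10
print(acc(5))      # -> 15
print(acc.total)   # -> 15
```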
Sure, stopping invalid mime types, excessive filesizes, and blank files could decrease the amount of spam by a lot, but it isn't enough to stop those who really know how to goof things up.
In this tutorial I will only discuss functions which are predefined in php's standard library for us; however, those are definitely not the only checks you're capable of executing. Firstly, do you remember the move_uploaded_file() and/or copy() functions? Well, how can you ensure that the file that's really being copied is the one that was selected by the file upload field?
If a user inputs something that has special meaning to the server, for instance, /../index.html as the file name, it could overwrite necessary elements to your website. PHP has a predefined function which helps us minimize this problem fairly well, it's called is_uploaded_file() and it takes one argument which is a string and is the file name you want to check for.
If you are running a version of php that is older than 4.0.3 then you may need to create or redefine the function to use yourself, because it is pretty useful. Here is a sample version which could be used for php versions less than 4.0.3.
function is_uploaded_file($filename) {
    if (!$tmp_file = get_cfg_var('upload_tmp_dir')) {
        $tmp_file = dirname(tempnam('', ''));
    }
    $tmp_file .= '/' . basename($filename);
    /* User might have trailing slash in php.ini... */
    return (ereg_replace('/+', '/', $tmp_file) == $filename);
}

# Here is an example of the self-defined function in action, it's slightly different
if (is_uploaded_file($HTTP_POST_FILES['userfile'])) {
    copy($HTTP_POST_FILES['userfile'], "/place/to/put/uploaded/file");
} else {
    echo "Possible file upload attack: filename '$HTTP_POST_FILES[userfile]'.";
}
Next, we can check file extensions as well. We can check file extensions to make sure the user didn't simply spoof the mimetype. Now, of course there are ways of spoofing both the mimetype and file extension, but I'm lucky enough to not have seen much of that in my day.
We could use a simple regular expression in order to check for file extensions we want to allow. File extensions can be found in the name element of the $_FILES superglobal. Let's say we wanted to allow users to upload pictures for a photo album, but we only wanted to allow .gif, .jpeg, and .png extensions. Below is an example of the code:
# Other file upload code above this
$allowed_filetypes = array('gif', 'jpeg', 'png', 'jpg');
# You should only have to edit the line above
$preg_filetypes = join('|', $allowed_filetypes);

if (!preg_match('#.*?\.(' . $preg_filetypes . ')$#si', $_FILES['data']['name'])) {
    # Invalid file extension
    die('Invalid file extension. Only the following are allowed: ' . join(', ', $allowed_filetypes));
}

$match = false;
foreach ($allowed_filetypes as $type) {
    if ($_FILES['data']['type'] == 'image/' . $type) {
        $match = true;
    }
} // End foreach

if ($match !== true) {
    die('Invalid mimetype for your file.');
}
Note, if you are using the example for upload.php that was found in the other tutorial linked at the top of the page, you should delete lines 16-19 and place this code there instead.
With those steps, you can help secure your PHP file uploads, but those aren't the only precautions you can take to ensure safety for your server, website, and other users.
This post has been edited by JackOfAllTrades: 29 October 2010 - 01:07 PM
Reason for edit: Added code tags
Contents
- Using git for team packages
- Policies
- Procedures
- Creating new repositories
- Creating a new package
- Checking out an existing package
- Ensure up-to-date
- Building
- Uploading
- Tagging
- New upstream release
- Patching
- Cherry picking upstream commits
- Sponsoring, mentoring, reviewing
- Pull requests
- Converting git-dpm to gbp pq
- Packages no longer maintained by DPT
- More information and resources
Using git for team packages
There are many advantages to team-based package development, and the DPT manages all of its packages in git.
It is a team requirement that all packages be managed using the same version control system.
Policies
Git Repository: team packages are hosted on salsa.debian.org under the python-team/packages namespace, and patches are managed with gbp pq.
Git Branch Names
As a team we are strongly recommending you use DEP-14 style branch names, specifically:
debian/master - The Debianized upstream source directory. IOW, this contains upstream source and a debian/ packaging directory. It is a source-full, patches-unapplied checkout.
pristine-tar - Contains the standard pristine-tar deltas.
upstream - The un-Debianized upstream source. This is what you get when you unpack the upstream tarball. This could also be upstream/<version> if required.
If you use other branch names, please have a good reason (not just personal preference, remember - take one for the team!), and you MUST document the differences in debian/README.source. The default branch names will help the repo interoperate better with other tools.
gbp pq creates a patch branch when you want to directly edit upstream source files for export to quilt patches, however this is for local use only and should not be pushed to remote servers.
Git Tag Names
There seem to be several tag styles in common use. However, we recommend using '/' as a separator, yielding tags like debian/0.2.0-1.
Procedures

Creating new repositories
To set up a new repository visit and follow the 'new project' link (the link will not be accessible unless you have joined the team). Enter the source package name as the project name and leave the visibility set to public.
Use these Vcs-* fields in the header of your debian/control file:
Vcs-Git: https://salsa.debian.org/python-team/packages/<srcpkgname>.git
Vcs-Browser: https://salsa.debian.org/python-team/packages/<srcpkgname>
Replace <srcpkgname> with your source package name.
You can also create the repository using the salsa tool:
salsa --group python-team/packages create_repo <srcpkgname>
Creating a new package
First, initialize the project on salsa.debian.org to set up the bare repo. Then, in an unpacked copy of the upstream source:

$ git init
$ git checkout -b upstream
$ git add .
$ git commit -m "import srcpkgname_1.0.orig.tar.gz"
$ git tag -s upstream/1.0
$ pristine-tar commit ../srcpkgname_1.0.orig.tar.gz upstream
$ git checkout -b debian/master
You'll now be left on the debian/master branch, i.e. the packaging branch. You'll also have upstream (raw tarball contents) and pristine-tar branch.
Fill in the rest of your debian/ directory, then do this:
$ git add debian/*
$ debcommit    # assuming you have a d/changelog entry, otherwise git commit
Checking out an existing package
Packages can be checked out individually, or managed as a group, with mr. It is configured to not checkout all the team packages by default, but easily manage the ones that are checkout out, collectively.
$ sudo apt install mr
$ git clone git@salsa.debian.org:python-team/tools/python-modules
$ cd python-modules
$ echo $PWD/.mrconfig >> ~/.mrtrust
$ ./checkout PACKAGENAME    # For each package you care about
if your username differs between your local machine and Salsa, then you can add this snippet to ~/.ssh/config:
Host salsa.debian.org
    User <your-salsa-username>
While the Vcs-* fields have been updated in the repository, new uploads with those fields still need to be done. In the meantime, you can:
$ gbp clone git@salsa.debian.org:python-team/packages/<srcpkgname>.git

Alternatively, add the following to your git configuration:

[url "git@salsa.debian.org:python-team/packages/"]
    insteadof = dpt:
which allows you to do this:
$ gbp clone dpt:srcpkgname.git
make sure you cd into the source directory.
Ensure up-to-date
Before doing anything on an existing repository, always always always ensure you have pulled in any changes:
$ gbp pull --redo-pq
If you always remember to do this, you reduce the risk of causing accidental git conflicts.
Note that the --redo-pq option will discard any local changes to your local patch queue, and replace with the patches from debian/patches.
Building
Now you can build your package using whatever tools you like, including debuild, dpkg-buildpackage, and git-buildpackage.
Uploading
Push your repo to salsa.debian.org:
$ git push --set-upstream git@salsa.debian.org:python-team/packages/<srcpkgname>.git : --tags
At this point, I like to cd to a different directory and do a gbp clone git@salsa.debian.org:python-team/packages/<srcpkgname>.git and then continue working from the <srcpkgname> directory. To prove that all the branches got pushed correctly, in this fresh clone, checkout the debian/master, pristine-tar, and upstream branches.
Tagging
Once you've built and uploaded your package, you should tag the release.
$ gbp buildpackage --git-tag-only
$ git push --tags
Alternately, if you want to sign your tags:
$ gbp buildpackage --git-tag-only --git-sign-tags
$ git push --tags
New upstream release
You should import the patch queue first. For this to work correctly, debian/gbp.conf should already have the correct value debian-branch, otherwise gbp import-orig will merge to the wrong branch.
Using a debian/watch file (recommended):
$ gbp pq import
$ git checkout debian/master
$ gbp import-orig --pristine-tar --uscan
Note "gbp pq import" will switch to the wrong branch, and if not fixed then import-orig can do the wrong thing.
Using a tarball file:
$ gbp pq import
$ gbp import-orig --pristine-tar .
Important notes:
Don't forget to update debian/changelog!
Rebase the patches:
$ gbp pq rebase
$ gbp pq export
You will need to push the debian/master branch and also the upstream and pristine-tar branches.
$ git push origin : --tags
Patching
Patching (i.e. adding quilt patches) is easy. You import the patches to a git patch queue, edit the patch queue, then export back again.
In general, to patch your package do this:
$ gbp pq import
(edit the patches)
$ gbp pq export
$ vim debian/changelog
$ git add debian/changelog
$ git add debian/patches/*
$ git commit

Cherry picking upstream commits

$ gbp pq import
$ git cherry-pick any_upstream_commit
$ gbp pq export
$ vim debian/changelog
$ git add debian/changelog
$ git add debian/patches/*
$ git commit
Sponsoring, mentoring, reviewing
TBD
Pull requests
TBD
Converting git-dpm to gbp pq
Previously the DPT used git-dpm as the workflow. This section documents how to convert from git-dpm to gbp pq.
- Ensure all branches up to date.
- Unapply all patches
Delete debian/.git-dpm config file.
Set debian-branch value to debian/master in debian/gbp.conf
Commit to debian/master branch (use git branch -m master debian/master to rename the branch).
Update .git/config branch debian/master to refer to merge = refs/heads/debian/master.
- Refresh the patches.
On Salsa, set the default branch to the new debian/master branch.
Sample debian/gbp.conf file:
[DEFAULT] debian-branch=debian/master
Sample steps to do the above (upstream branch will not be automatically checkout out locally):
gbp pull
git read-tree --reset -u upstream
git reset -- debian
git checkout debian
git rm debian/.git-dpm
cat <<EOF > debian/gbp.conf
[DEFAULT]
debian-branch=debian/master
EOF
git add debian/gbp.conf
git commit -m "Convert from git-dpm to patches unapplied format"
git branch -m master debian/master
git push origin -u debian/master
To refresh the patches:
gbp pq import
gbp pq export
dch -m "Refresh patches after git-dpm to gbp pq conversion"
git add debian/patches/
git add debian/changelog
debcommit
To make the new branch the default and delete the old branch:
go to
- expand "General settings"
- select the new default branch
go to
- expand "Protected branches"
- fix those settings
go to
- delete the old master branch
Packages no longer maintained by DPT
Packages repositories can be archived on Salsa by owners of the Python team namespace. If you want an obsolete repositories to be archived you will need to contact one of the owners and ask them to do it.
Q: What if I don't want to use gbp pq?
Q: Should we honor pull requests and allow for mirrors on GitHub or GitLab?
A: We will still require all team packages to be available on Salsa, but for now please do not submit merge requests on gitlab.com.
in reply to Re^4: Regarding STDOUT and indirect object notation for print, in thread: Regarding STDOUT and indirect object notation for print
Well that's a major exaggeration. The cost of loading IO::Handle would be quite small if it was rewritten in C (100 bytes?). In fact, there already exist modules whose methods are built right into Perl: utf8, version and UNIVERSAL, for example.
It does seem odd to me that they'd include the implicit blessing without populating the namespace. I suppose it's a nod to the roll-your-own option.
Revision as of 09:57, 14 July 2021
Description
The
Draft Mirror command creates mirrored copies, Part Mirror objects, from selected objects. A Part Mirror object is parametric, it will update if its source object changes.
The command can be used on 2D objects created with the Draft Workbench or Sketcher Workbench, but also on many 3D objects such as those created with the Part Workbench, PartDesign Workbench or Arch Workbench.
Mirroring an object
Usage
See also: Draft Snap and Draft Constrain.
- Optionally select one or more objects.
- There are several ways to invoke the command:
- Press the
Draft Mirror button.
- Select the Modification →
Mirror option from the menu.
- Use the keyboard shortcut: M then I.
- If you have not yet selected an object: select an object in the 3D view.
- The Mirror task panel opens. See Options for more information.
- Pick the first point of the mirror plane in the 3D view, or type coordinates and press the Enter point button.
- Pick the second point of the mirror plane in the 3D view, or type coordinates and press the Enter point button.
- The mirror plane is defined by the selected points and the normal of the Draft working plane.
Options
The single character keyboard shortcuts mentioned here can be changed. See Draft Preferences.
- To manually enter coordinates enter the X, Y and Z component, and press Enter after each. Or you can press the Enter point button when you have the desired values. It is advisable to move the pointer out of the 3D view before entering coordinates.
- Press R or click the Relative checkbox to toggle relative mode. If relative mode is on, the coordinates of the second point are relative to the first point, else they are relative to the coordinate system origin.
- The Modify subelements checkbox has no purpose for this command.
- Press Esc or the Close button to abort the command.
Notes
- Mirrored copies of Draft Lines, Draft Wires, Draft Arcs and Draft Circles can be turned into independent editable Draft objects by using Draft Downgrade and then Draft Upgrade.
- The Part SimpleCopy command can be used to create a copy of a mirrored object that is not linked to its source object.
Preferences
See also: Preferences Editor and Draft Preferences.
- To change the number of decimals used for the input of coordinates: Edit → Preferences... → General → Units → Units settings → Number of decimals.
Properties
See also: Property editor.
A Part Mirror object is derived from a Part Feature object and inherits all its properties. It also has the following additional properties:
Data
Base
- DataSource (Link): specifies the object that is mirrored.
Plane
- DataBase (Vector): specifies the base point of the mirror plane.
- DataNormal (Vector): specifies the normal direction of the mirror plane.
Scripting
See also: Autogenerated API documentation and FreeCAD Scripting Basics.
To mirror objects use the mirror method of the Draft module.
mirrored_list = mirror(objlist, p1, p2)
- objlist contains the objects to be mirrored. It is either a single object or a list of objects.
- p1 is the first point of the mirror plane.
- p2 is the second point of the mirror plane.
- If the Draft working plane is available the alignment of the mirror plane is determined by its normal, else the view direction of the camera in the active 3D view is used. If the graphical interface is not available the Z axis is used.
- mirrored_list is returned with the new Part::Mirroring objects. It is either a single object or a list of objects, depending on objlist.
Example:
```python
import FreeCAD as App
import Draft

doc = App.newDocument()

place = App.Placement(App.Vector(1000, 0, 0), App.Rotation())
polygon1 = Draft.make_polygon(3, 750)
polygon2 = Draft.make_polygon(5, 750, placement=place)

p1 = App.Vector(2000, -1000, 0)
p2 = App.Vector(2000, 1000, 0)
line1 = Draft.make_line(p1, p2)
mirrored1 = Draft.mirror(polygon1, p1, p2)

line2 = Draft.make_line(-p1, -p2)
mirrored2 = Draft.mirror([polygon1, polygon2], -p1, -p2)

doc.recompute()
```
Opened 4 years ago
Closed 4 years ago
#20181 closed Cleanup/optimization (duplicate)
auto generate SECRET_KEY file?
Description
I wonder if you would accept a patch that auto-generates a secret key if it's not set in the settings, or alternatively a new option "SECRET_KEY_AUTOGENERATE = {False,True}". It would simply check for the file "secret.txt" and load or create that using the same mechanism as what startproject is using (or used to use).
If this would be an accepted change/feature, I'm happy to work on it and submit a branch.
Change History (4)
comment:1 Changed 4 years ago by

Essentially it would be just something like:

```python
_secrets_file = os.path.join(os.path.dirname(__file__), "secret.txt")
if not os.path.exists(_secrets_file) or os.path.getsize(_secrets_file) == 0:
    ...
with open(_secrets_file) as f:
    ...
del _secrets_file
```

comment:2 Changed 4 years ago by

This idea was suggested on the django-developers mailing list a while ago (). From the discussion in that thread, it seems that the preferred method is to use multiple settings files where you store general settings in a settings_global.py file and then keep settings.py out of version control and have it do something like

```python
from settings_global import *

SECRET_KEY = '.....'
```

I personally think that's a better approach since you're going to be overriding many settings for your various environments (dev/stage/test/prod) and not just the secret key. But I'll leave this open for others to weigh in.

Best,

Matt
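For context, the reporter's proposal can be sketched as a self-contained helper. The function name, key length and character set below are illustrative assumptions; Django itself does not ship this function.

```python
import os
import random
import string
import tempfile

def get_or_create_secret_key(path):
    """Load a secret key from `path`, generating and saving one on first run.

    Hypothetical helper sketching the ticket's proposal; the key format is
    an arbitrary choice, not Django's.
    """
    # Reuse an existing key if the file is present and non-empty...
    if os.path.exists(path) and os.path.getsize(path) > 0:
        with open(path) as f:
            return f.read().strip()
    # ...otherwise generate one and persist it for the next startup.
    chars = string.ascii_letters + string.digits + '!@#$%^&*(-_=+)'
    key = ''.join(random.SystemRandom().choice(chars) for _ in range(50))
    with open(path, 'w') as f:
        f.write(key)
    return key

# settings.py could then do something like:
SECRET_KEY = get_or_create_secret_key(
    os.path.join(tempfile.mkdtemp(), 'secret.txt'))
```

Calling it twice with the same path returns the same key, which is the whole point of persisting it to a file.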
How do I proceed with paragraphs outside the <translate> tag, which cannot appear in the translate tool? I'm having trouble with the Development/Tutorials page.
Juliano Assis
Can you be more specific?
Do you mean "Calligra Plugin Tutorials" part? Or is it "Using the KDE PIM Libraries".
If so, it potentially can be translated with the corresponding page (and corrected link). On the other hand, we have some problems with our translation wiki plugin and this can be implemented after manual intervention only.
Just report the problematic pages to me. I will mark them for translation, then we call User:Ognarb and he will spin the database to make the strings translatable.
Have a look in the /Development/Tutorials section:
- Setting Up - Before you start writing and building KDE software, you'll need to prepare your tools first.
- The Calligra section was inserted in a different way.
and Notes,
- Note The tutorials below apply to an older version of Qt only. While there are stable Python bindings available for Qt 5, bindings for KDE Frameworks 5 are still under development.
- Note The tutorials below apply to an older version of Qt only. There are currently no Ruby bindings for Qt 5 and KDE Frameworks 5 available.
/Development
everything below:
- A collection of step-by-step guides and...
Thank you.
Juliano Assis
Thank you for the prompt reply, I will use this post for other pages with the same problem in the future.
Juliano Assis
The translation is visible now, but I'm having problems saving: "This namespace is reserved for content page translations. The page you are trying to edit does not seem to correspond any page marked for translation."
I suppose that the numbers in the !--T:XXX tags are already in use.
Juliano Assis
Hi. That's me again.
Could you mark the following pages for translation:
/KDE_Frameworks
/Development/Tools
/Development/Tutorials/Setting_Up
/Development/Tutorials/Using_KXmlGuiWindow
Juliano Assis
Four more:
Development/Tutorials/Using_Actions
Development/Tutorials/Saving_and_loading
Development/Tutorials/CommandLineArguments
Development/Tutorials/Common_Programming_Mistakes
Thank you
Juliano Assis
Hello, sorry for the delay. I didn't get a notification. Please contact me per email or ping me in matrix/irc if there is something urgent: carl at carlschwan dot eu or Carl Schwan on Matrix
The translation database should have been updated automatically, so this is quite strange. I will investigate. In the meantime I updated it manually. Can someone confirm that this fixed the bug encountered?
Cheers
Carl
Sorry, but it is not fixed. When I try to translate the new messages I see "This namespace is reserved for content page translations. The page you are trying to edit does not seem to correspond any page marked for translation."
Example:
Would you care to explain your latest blocking and deletions? The reasons you stated might be a bold lie by yourself.
Hey could you come to #kde-www on IRC please?
ping ochurlaud, we need to coordinate...
Sorry, I do not have my Arch + KDE at hand and the web client seems not to be responding. Is techbase under heavy change?
Subject: Re: [boost] [container][memory] Request to move allocator_arg_t and uses_allocator to a memory related file.
From: Ion Gaztañaga (igaztanaga_at_[hidden])
Date: 2012-03-31 06:04:31
On 31/03/2012 11:03, Vicente J. Botet Escriba wrote:
> I see that Boost.Container has already defined in namespace
> boost::container and in file boost/container/scoped_allocator.hpp the
> following
>
> struct allocator_arg_t { };
> constexpr allocator_arg_t allocator_arg = allocator_arg_t();
>
> template <class T, class Alloc> struct uses_allocator;
There is no constexpr in Boost.Container code. Current trunk code
includes an experimental implementation of scoped_allocator, for C++03
and C++11 compilers.
> I would like to shared these to all the Boost libraries. I don't think
> it is a good idea to make Boost.Thread depend on Boost.Container just
> for this. Could these declarations be included in a specific file? These
> declaration are included in <memory> in the C++11. What would be the
> best library to contain these declarations?
I don't think Boost.thread should depend on Boost.Container, that's why
I've added those classes to boost::container namespace, avoiding any
boost:: namespace pollution. The idea is also to define them as std::
typedefs so that a user with a C++11 conforming standard library does
not need to add boost::container::xxx overloads. And user C++03 code
using boost::xxx types is standard conforming when compiled with C++11
compilers.
> What about a new boost/memory.hpp file?
> What about moving them to the boost or boost::memory namespace?
I think it's a good idea. But we should just typedef them to std:: if
the standard library provides those types. I think GCC 4.7 libstdc++ and
libc++ already support scoped allocators.
Best,
Ion
Boost list run by bdawes at acm.org, gregod at cs.rpi.edu, cpdaniel at pacbell.net, john at johnmaddock.co.uk | https://lists.boost.org/Archives/boost/2012/03/191829.php | CC-MAIN-2022-21 | refinedweb | 310 | 53.37 |
ImagePy Basic Tutorial
Introduction: ImagePy is image processing software developed in Python, supporting bmp, rgb, png and other commonly used image formats. It can handle grayscale and multi-channel (color) images, and supports image stack (sequence) operations. It supports a variety of selection operations (point, line, surface, multi-line, multi-face, hollow polygon). It can carry out a variety of commonly used mathematical operations, common filter operations, image measurements, and pixel statistics. It can carry out DEM surface reconstruction and three-dimensional reconstruction of image sequences. The framework is built around Python: image data is represented by numpy arrays, so it can easily access scikit-image, OpenCV, ITK, Mayavi and other mature third-party image processing libraries.
Download and install
works on windows, linux, mac, under python2.7 and python3.4+
```shell
# Now ImagePy is on PyPI
pip install imagepy

# Or install with conda
conda install imagepy

# Then start ImagePy like this
python -m imagepy
```
some trouble
- ImagePy is a UI framework based on wxPython, which cannot be installed with pip on Linux; you need to download the wheel (.whl) matching your Linux system.
- On Linux and Mac, there may be a permission-denied problem, because ImagePy writes some config information, so please start it with sudo. If you install with pip, please add the --user parameter like this: pip install --user imagepy
- If you install ImagePy in an Anaconda virtual environment, you may get an error at startup like this: This program needs access to the screen. Please run with a Framework build of python, and only when you are logged in on the main display. If so, please start with pythonw -m imagepy.
Main Interface
The main interface consists of four parts, from top to bottom: the title bar, menu bar, toolbar, and status bar.
Here are a few examples to illustrate what ImagePy can do.
First example: Mathematical operations, filter operations.
Selection Introduction:
Selection refers to processing the image only in specific identified areas of the image. ImagePy supports single-point, multi-point, single-line, multi-line, rectangular, circular, arbitrary-polygon and free-curve selections. Selections can be superimposed using the Shift key and hollowed out using the Ctrl key. In addition, all selection objects support expansion, shrinking, convex hull and other geometric operations.
Geometric Transformation: ImagePy supports geometric transformations. It can carry out rotation, translation and other conventional matrix transformations. What’s more, these rotations are interactive and support selection.
Second example: An example of a cell count
Look up table introduction:
Index color is also called false color. The essence of it is to map the gray color to a predefined spectrum. The index color does not increase the amount of information in the image, but does enhance the visual contrast.
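As a minimal pure-Python sketch of what "applying a look-up table" means (the red-to-blue ramp below is an arbitrary example spectrum, not one of ImagePy's built-in tables):

```python
# Build a 256-entry false-colour lookup table: gray value -> (R, G, B).
# This particular ramp (red rises, blue falls) is an arbitrary illustration.
lut = [(v, 0, 255 - v) for v in range(256)]

gray_row = [0, 128, 255]                  # one row of a grayscale image
false_color = [lut[v] for v in gray_row]  # applying a LUT is just indexing
print(false_color)                        # [(0, 0, 255), (128, 0, 127), (255, 0, 0)]
```

Note that the mapping is purely cosmetic: every gray value still maps to exactly one colour, so no information is added.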
Here, for cells under a microscope, we process the image and compute statistics.
- Open the original image and apply a Gaussian blur to suppress noise.
- In order to highlight the cells, perform an unsharp-mask (USM) treatment with a large radius.
- After processing the picture, it is easy to use the threshold function to binarize it.
- Label the binary image, marking connected regions.
- Calculate the centroid of each connected region.
- Calculate the area occupied by each cell.
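The labelling and measurement steps above can be sketched in plain Python (ImagePy itself does this with numpy/scipy; the grid below is a toy, already-binarised image):

```python
from collections import deque

# Toy binarised "micrograph": 1 = cell pixel, 0 = background.
grid = [
    [0, 1, 1, 0, 0, 0],
    [0, 1, 1, 0, 0, 0],
    [0, 0, 0, 0, 1, 1],
    [0, 0, 0, 0, 1, 1],
]

def label_cells(grid):
    """4-connected component labelling; returns (area, centroid) per cell."""
    h, w = len(grid), len(grid[0])
    seen = [[False] * w for _ in range(h)]
    cells = []
    for y in range(h):
        for x in range(w):
            if grid[y][x] and not seen[y][x]:
                # Flood-fill one connected region with a BFS.
                queue, pixels = deque([(y, x)]), []
                seen[y][x] = True
                while queue:
                    py, px = queue.popleft()
                    pixels.append((py, px))
                    for ny, nx in ((py - 1, px), (py + 1, px),
                                   (py, px - 1), (py, px + 1)):
                        if (0 <= ny < h and 0 <= nx < w
                                and grid[ny][nx] and not seen[ny][nx]):
                            seen[ny][nx] = True
                            queue.append((ny, nx))
                area = len(pixels)
                centroid = (sum(p[0] for p in pixels) / area,
                            sum(p[1] for p in pixels) / area)
                cells.append((area, centroid))
    return cells

print(len(label_cells(grid)))  # 2 cells found
```

Each entry pairs a cell's pixel count (its area) with its centroid, which is exactly the per-cell statistic the tutorial computes.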
Third example: Image matching
Use the Surf feature matching algorithm implemented in OpenCV.
- Both images are covered with points, the SURF feature points, and correct matches are shown in yellow.
- A log of the operations is also output: the feature points of the two images, the number of correct matches, and the rotation matrix between the two images.
- When a point is clicked with the mouse, it turns red together with the corresponding match point in the other picture.
Fourth example: Dem Reconstruction
Use the Mayavi library to perform three-dimensional reconstruction and visualization.
DEM means digital elevation model: the brightness of the image represents elevation. From DEM data you can calculate height and slope, draw contours, and perform surface reconstruction.
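As a tiny illustration of the slope calculation (not ImagePy/Mayavi code), the gradient of brightness values can be taken with central differences, assuming unit cell spacing:

```python
# A 3x3 elevation grid (brightness = height); compute the slope at the
# centre cell with central differences, assuming unit cell spacing.
dem = [
    [10, 10, 10],
    [12, 13, 14],
    [14, 16, 18],
]

dz_dx = (dem[1][2] - dem[1][0]) / 2.0  # east-west gradient
dz_dy = (dem[2][1] - dem[0][1]) / 2.0  # north-south gradient
slope = (dz_dx ** 2 + dz_dy ** 2) ** 0.5
print(dz_dx, dz_dy, slope)  # 1.0 3.0 and sqrt(10) ~ 3.162
```

Contours then fall out of the same data by tracing cells of equal height.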
Fifth example: CT data 3D reconstruction
The following image represents dental MicroCT data. The data were filtered, segmented and three-dimensional reconstructed, as well as visually manipulated.
The figure above shows tooth CT data. After importing the image sequence, you can view the three views and then perform its three-dimensional reconstruction.
Image Stack: ImagePy supports image stack processing, which has the following two characteristics:
- Images in the image stack have the same format and the same size.
- Operations act on each image in the stack when processed.
Plugins and Macros:
In ImagePy, each functional component is a plug-in (all menus and tools). The implementation of each function, in essence, obtains a group of parameters through interaction and then applies them to the current image. We can view the plug-ins' organizational structure in the Plugin Tree View, find plug-ins quickly in the Plugin List View, record macros with the Macros Recorder, and batch-process series of related functions when needed to improve work efficiency.
From the two views above, you can get a global view of all the plug-ins, viewing their related information, introductions, and source code. You can quickly find commands, and run a command directly by double-clicking.
Macro Recording of Cell Count Example:
We open the Plugins -> Macros -> Macros Recorder plug-in, and then re-operate the cell counting process...
After each step, Macros Recorder will add a log. When all is completed, you can get the following log:
In these logs, each line essentially records "plug-in name > {parameters}". Click "Run > Run Macros (F5)" to perform each recorded action in turn. You can also use the mouse to select one or more lines and click "Run > Run Line (F6)" to execute the selected lines. In addition, macros have the following characteristics.
- You can save a macro as a file with the .mc suffix, and run a macro file via Plugins -> Macros -> Run Macros.
- Put the macro file in the menus directory of the project or any of its subdirectories. On the next start, the macro will be loaded as a menu item whose title is the file name. In fact, some project functions are chained together by macros.
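The "plug-in name > {parameters}" format described above is simple enough to sketch a hypothetical parser for; the plug-in names and parameters below are made up for illustration, and this is not ImagePy's actual macro engine:

```python
import json

# Hypothetical macro: each line is "PluginName > {json parameters}".
macro = [
    'Gaussian > {"sigma": 2}',
    'Threshold > {"low": 128, "high": 255}',
]

def parse_macro(lines):
    """Split each line at '>' into a (plugin name, parameter dict) pair."""
    steps = []
    for line in lines:
        name, _, params = line.partition('>')
        steps.append((name.strip(), json.loads(params)))
    return steps

print(parse_macro(macro))
```

A runner would then look up each plugin by name and invoke it with its recorded parameters, which is essentially what "Run Macros" does.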
Extend a filter:
The examples above only list some of the functionality of ImagePy. However, ImagePy is not only an image processing program, but a highly extensible framework. Any numpy-based processing function can be easily incorporated. For example, to make a Gaussian blur filter, we only need:
```python
# -*- coding: utf-8 -*-
import scipy.ndimage as nimg
from imagepy.core.engine import Filter

class Gaussian(Filter):
    title = 'Gaussian'
    note = ['all', 'auto_msk', 'auto_snap', 'preview']

    # parameter
    para = {'sigma': 2}
    view = [(float, (0, 30), 1, 'sigma', 'sigma', 'pix')]

    # process
    def run(self, ips, snap, img, para=None):
        nimg.gaussian_filter(snap, para['sigma'], output=img)
```
Create a Filter:
- Import the related class libraries. They can be third-party or implemented in C.
- Create a class that subclasses Filter.
- title is the name of the plug-in and will also serve as the title of the menu item.
- note indicates how the plug-in works and the associated preprocessing and post-processing work, including which types of images can be processed, whether selections are supported, and so on.
- para holds the parameters the kernel function needs.
- view corresponds to para; it tells the framework how the various parameters are edited interactively (the framework will generate your interactive interface automatically).
- run is the core function; where possible, write the result of the processing into img (to save memory), or return it (the framework will assign it to img for you).
- Place the file in any subdirectory of menus in the project, and it will be loaded as a menu item at startup.
What has the framework helped us do?
The framework enables complex tasks in a uniform way. Simply put, you do not need to determine for yourself whether the image type is legitimate; you do not need to maintain your own image cache to support undo; you do not need to implement selection support yourself; you do not need to monitor the interface yourself to achieve real-time preview. You do not need to write any interface code: once you have defined the required parameters and the type and range of values for each parameter, the interface is generated for you. When a color image is encountered, each channel of the image is processed in turn; when an image stack is encountered, each frame is traversed automatically. You are free to focus on either the task of image analysis or creating new plugins for the framework.
Extend a Tool:
Another scenario is to interact on the canvas through the mouse, like the selection operations mentioned above. Here is an example of a brush:
```python
from imagepy.core.draw import paint
from imagepy.core.engine import Tool
import wx

class Plugin(Tool):
    title = 'Pencil'
    para = {'width': 1}
    view = [(int, (0, 30), 0, 'width', 'width', 'pix')]

    def __init__(self):
        self.sta = 0
        self.paint = paint.Paint()
        self.cursor = wx.CURSOR_CROSS

    def mouse_down(self, ips, x, y, btn, **key):
        self.sta = 1
        self.paint.set_curpt(x, y)
        ips.snapshot()

    def mouse_up(self, ips, x, y, btn, **key):
        self.sta = 0

    def mouse_move(self, ips, x, y, btn, **key):
        if self.sta == 0:
            return
        self.paint.lineto(ips.img, x, y, self.para['width'])
        ips.update = True

    def mouse_wheel(self, ips, x, y, d, **key):
        pass
```
Create Tool:
- Inherit from Tool in the engine.
- Specify the title, which will be the tool name and the status bar message.
- Add the methods mouse_down, mouse_up, mouse_move and mouse_wheel as needed.
- If the tool requires parameters (for example, pen width), use a dictionary assigned to para. Similarly, view specifies its interactive interface. When the tool is double-clicked, a dialog box will pop up in accordance with the specified interface.
- Store the file in the tools sub-folder, together with a generated 16×16 thumbnail icon saved as a gif file with the same name as the tool.
Create your DIY remote for Philips Hue with Raspberry Pi
You know those Philips Hue lights, which are amazing and really cool gadgets for your house? Well, thanks to Philips it is also easy to create your very own DIY remote control for them. Yes, of course you can get a remote for these lights in the shop as well, but why would you if you can program one yourself?!
To do this I’ve used my Raspberry Pi 2 model B, and from my Arduino Starter Kit I’ve used a breadboard, a remote control and an IR (infrared) receiver. I’ve named my remote control “SpecialForMP3”, since this is the only text I could find on it. But you can use any infrared remote control you want, so if you have some useless remote controls lying around, just give them a purpose again.
First, we need to setup the Raspberry Pi to be able to receive infrared signals sent by the remote control. Furthermore, we have to make sure the Raspberry Pi can not only receive the signals, but is also capable of deciphering them. In other words, we want to make sure the Raspberry Pi can “hear” the remote control and is also capable of understanding what is said; they need to speak the same language.
The first step towards achieving this goal, is connecting the IR receiver to the Raspberry Pi. After we’ve done this we can start with setting up the LIRC (Linux Infrared Remote Control) package on the Raspberry Pi. LIRC is a package that allows you to decode and send infrared signals of many (but not all) commonly used remote controls. To setup both the IR receiver and LIRC, you can follow the steps as described here. You can find the resulting LIRC configuration file on GitHub, together with the rest of the code needed to finish this project.
When finished, my setup looks like this:
As a side note, I had some trouble using mode2 -d /dev/lirc0: after pushing a button on my remote, instead of seeing something like the example I got the message "Partial read 8 bytes" and then it just stopped. Changing the driver from devinput to default in lirc_options.conf fixed this issue.
When you find yourself having no permission to change files, look at the chmod 777 option; this gives everybody read-write-execute permissions on the specified files and/or folders.
Now we’re ready for installing Python packages that make it possible to use LIRC and connect to the Philips Hue Bridge which controls the lights:
We get started by connecting to the Philips Hue Bridge through a Python script; this can easily be done by using its IP address. You can find out the correct IP address in multiple ways; an easy one is using the (official) Philips Hue app:
- Go to the settings menu in the Philips Hue app and go to My Bridge, click on Network settings and switch off the DHCP toggle; the IP address of the bridge will show.
The next thing we need to do is connect to the Philips Hue Bridge and determine the names of the available lights and light groups.
```python
# import the necessary packages
from phue import Bridge

# identify the bridge
b = Bridge('192.168.1.128')

# connect to the bridge (first press button on bridge)
# this only has to be done the first time you setup the connection
b.connect()

# get the names of all the lights
print(b.get_light_objects('name'))

# get the names of the light groups
print(b.get_group())
```
In order to change the colour of a light, we need to know the XY code of this colour. Since we’re more familiar with using RGB codes for colour, we can use this function to convert RGB to XY.
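The linked conversion function is not reproduced in the post; a common sRGB-to-CIE-xy conversion for Hue lights looks roughly like the sketch below. The gamma curve is standard sRGB, and the matrix values are the widely circulated wide-gamut ones, an assumption rather than something taken from this post:

```python
def rgb_to_xy(red, green, blue):
    """Convert 0-255 RGB to CIE 1931 xy coordinates, as Hue lights expect."""
    # Normalise to 0..1 and undo the sRGB gamma curve.
    def to_linear(c):
        c /= 255.0
        return ((c + 0.055) / 1.055) ** 2.4 if c > 0.04045 else c / 12.92

    r, g, b = to_linear(red), to_linear(green), to_linear(blue)
    # Wide-gamut RGB -> XYZ matrix commonly used for Hue lights.
    X = r * 0.664511 + g * 0.154324 + b * 0.162028
    Y = r * 0.283881 + g * 0.668433 + b * 0.047685
    Z = r * 0.000088 + g * 0.072310 + b * 0.986039
    s = X + Y + Z
    if s == 0:
        return (0.0, 0.0)
    return (X / s, Y / s)

print(rgb_to_xy(255, 0, 0))  # roughly (0.70, 0.30), the red corner of the gamut
```

The resulting (x, y) pair is what gets sent to the bridge as the light's 'xy' parameter.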
Finally, we can connect to the LIRC and create the loop in which we will change the colour of the lights. In my case I have the following lights available: Zithoek, Zithoek bloom, Raamkant, Midden, Keukenkant, Slaapkamer. And I have the following light groups available: Zithoek, Eetkamer, Slaapkamer. You can find the complete script to use the remote to control the Philips Hue Lights on GitLab as well, this script connects to the Philips Hue Bridge and to LIRC and has ten different scenes for the lights, but you can of course adapt this to your own needs and wishes.
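The full script lives in the repository; its core can be sketched as a dispatch from LIRC button names to Hue scenes. The button names, group names, colours and the 'huepy' program name below are illustrative assumptions, and the main loop relies on the python-lirc (lirc.init/lirc.nextcode) and phue (Bridge.set_group) APIs:

```python
# Hypothetical scene table: remote button name -> (light group, xy colour).
SCENES = {
    'KEY_1': ('Zithoek',    (0.675, 0.322)),
    'KEY_2': ('Eetkamer',   (0.409, 0.518)),
    'KEY_3': ('Slaapkamer', (0.167, 0.040)),
}

def handle_button(button, set_group_xy):
    """Apply the scene for `button` via the given setter; False if unmapped."""
    if button not in SCENES:
        return False
    group, xy = SCENES[button]
    set_group_xy(group, xy)
    return True

def main():
    # Needs a real bridge and a configured LIRC receiver; sketch only.
    import lirc
    from phue import Bridge
    bridge = Bridge('192.168.1.128')
    lirc.init('huepy', blocking=True)   # 'huepy' must match your lircrc
    while True:
        for button in lirc.nextcode():  # blocks until a key press arrives
            handle_button(button,
                          lambda g, xy: bridge.set_group(g, 'xy', list(xy)))

# On the Raspberry Pi you would call main() here.
```

Separating the lookup from the hardware loop keeps the scene table easy to extend to all ten scenes.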
Good luck and enjoy! | https://www.theanalyticslab.nl/create-your-diy-remote-for-philips-hue-with-raspberry-pi/ | CC-MAIN-2022-27 | refinedweb | 805 | 72.29 |
Does Python have an ordered set?
OrderedSet([1, 2, 3])
This is a MutableSet, so the signature for .union doesn't match that of set, but since it includes __or__ something similar can easily be added:

```python
def union(*sets):
    union = OrderedSet()
    union.union(*sets)
    return union

def union(self, *sets):
    for set in sets:
        self |= set
```
The answer is no, but you can use collections.OrderedDict from the Python standard library with just keys (and values as None) for the same purpose.
Update: As of Python 3.7 (and CPython 3.6), standard dict is guaranteed to preserve order and is more performant than OrderedDict. (For backward compatibility and especially readability, however, you may wish to continue using OrderedDict.)
Here's an example of how to use dict as an ordered set to filter out duplicate items while preserving order, thereby emulating an ordered set. Use the dict class method fromkeys() to create a dict, then simply ask for the keys() back.

```python
keywords = ['foo', 'bar', 'bar', 'foo', 'baz', 'foo']
list(dict.fromkeys(keywords))
# ['foo', 'bar', 'baz']
```
An ordered set is functionally a special case of an ordered dictionary.
The keys of a dictionary are unique. Thus, if one disregards the values in an ordered dictionary (e.g. by assigning them None), then one has essentially an ordered set.
As of Python 3.1 and 2.7 there is collections.OrderedDict. The following is an example implementation of an OrderedSet. (Note that only a few methods need to be defined or overridden: collections.OrderedDict and collections.MutableSet do the heavy lifting.)
```python
import collections

class OrderedSet(collections.OrderedDict, collections.MutableSet):

    def update(self, *args, **kwargs):
        if kwargs:
            raise TypeError("update() takes no keyword arguments")
        for s in args:
            for e in s:
                self.add(e)

    def add(self, elem):
        self[elem] = None

    def discard(self, elem):
        self.pop(elem, None)

    def __le__(self, other):
        return all(e in other for e in self)

    def __lt__(self, other):
        return self <= other and self != other

    def __ge__(self, other):
        return all(e in self for e in other)

    def __gt__(self, other):
        return self >= other and self != other

    def __repr__(self):
        return 'OrderedSet([%s])' % (', '.join(map(repr, self.keys())))

    def __str__(self):
        return '{%s}' % (', '.join(map(repr, self.keys())))

    difference = property(lambda self: self.__sub__)
    difference_update = property(lambda self: self.__isub__)
    intersection = property(lambda self: self.__and__)
    intersection_update = property(lambda self: self.__iand__)
    issubset = property(lambda self: self.__le__)
    issuperset = property(lambda self: self.__ge__)
    symmetric_difference = property(lambda self: self.__xor__)
    symmetric_difference_update = property(lambda self: self.__ixor__)
    union = property(lambda self: self.__or__)
```
Henshin FAQ (revision as of 15:03, 12 September 2012)
Contents
- 1 How can I execute transformations?
- 2 After installing an update of Henshin, I cannot open my transformation anymore!
- 3 Does Henshin support automatic tracing for exogenous transformations?
- 4 How can I use dynamic EMF with Henshin?
- 5 I get NPEs when I try to execute my transformation
- 6 How can I define a higher-order (HO) transformation in Henshin?
- 7 Why is my rule with a String constant not matched even though the constant is correct?
- 8 What sort of automatic analysis is supported?
How can I execute transformations?
Use the Henshin Interpreter to execute transformations (available via a wizard or programmatically).
After installing an update of Henshin, I cannot open my transformation anymore!
Migrate your old Henshin files using the Henshin Migration Wizard.
Does Henshin support automatic tracing for exogenous transformations?
Henshin follows a rewrite approach. Tracing is not built into the transformation language but can be easily realized using a generic Henshin Trace Model.
How can I use dynamic EMF with Henshin?
Define your EMF models as usual. Make sure you set the namespace URIs and prefixes for all packages. To use the packages in Henshin, right-click in the Henshin editor and select Import Package -- From Workspace and choose the packages from your Ecore files. To create a dynamic instance model, right-click on an Ecore file and select Henshin -- Create Dynamic Instance. This will create an XMI file which you can then edit in the Sample Reflective Model Editor of EMF.
If you don't want to use dynamic EMF, import the packages from the registry and use the runtime version (not the development version).
I get NPEs when I try to execute my transformation
Make sure all node, edge and attribute types are correctly set. Also, it is important that you reference the same version of the metamodels in your Henshin files as in your instance models. For example, if you have a UML instance model created with the UML editors, you must make sure that you use the same UML metamodel in your transformation. In this case, you would have to import the UML metamodel from the registry and use the runtime version. See also the entries before and after this one.
How can I define a higher-order (HO) transformation in Henshin?
Import the Henshin and the Ecore metamodel into your transformation (import from the registry and use the runtime versions). Check out the examples repository on our website for more details.
Why is my rule with a String constant not matched even though the constant is correct?
The attribute in the Henshin rule must look like this: "CONSTANT" where CONSTANT is the string to be matched. Make sure you use double quotes here. In the instance model, the object's attribute must have the value CONSTANT -- in the EMF editor you must not use any quotes here!
What sort of automatic analysis is supported?
You can use the Henshin Statespace Explorer to generate a state space for a transformation system, check structural invariants and do model checking. | http://wiki.eclipse.org/index.php?title=Henshin_FAQ&diff=315360&oldid=240053 | CC-MAIN-2015-48 | refinedweb | 521 | 65.01 |
Created on 2003-04-25 19:57 by mkc, last changed 2009-02-20 01:54 by ajaksu2. This issue is now closed.
Under Tru64 (version 5.1a, and probably others), when a
Python script is structured like so:
#!/something/bin/python
import sys
print sys.prefix
the calculation of prefix in getpath.c won't work
correctly. The reason is that argv[0] will be 'python'
rather than '/something/bin/python' (as it would be
under, say, Linux).
The code happens to work correctly in some simple
scenarios, but fails in a pretty mysterious way if one
has a different python in one's PATH than the one in
the #! line. So, for example, if my PATH=/usr/bin and
there is a /usr/bin/python, then if I run the script
above, sys.prefix will be set to '/usr' instead of
'/something' and the wrong module path will be set up, etc.
It would be much better to simply obey the compiled
PREFIX instead, at least in the case where argv[0] has
no slashes.
Logged In: YES
user_id=21627
Is there any way for a binary to find out its location on
disk, on Tru64?
Logged In: YES
user_id=555
I hunted around in the man pages and googled and I couldn't
find much. There is a proc ioctl PIOCOPENM (see)
which will give you an fd to the executable, but I don't
know of any way to map that back to a filename.
We'd need confirmation, preferably with assistance from a Tru64 guru.
I'll go down to the cemetery and see if I can dig one up. :-)
All of our Tru64 machines have been powered-down for over three years
now, so as far as I'm concerned you can mark this one as no longer relevant.
Mike,
Thanks for the feedback (and laugh!). I'll close this one soon, unless
someone requests to keep open. | http://bugs.python.org/issue727732 | crawl-003 | refinedweb | 324 | 81.93 |
Advertise with Us!
We have a variety of advertising options which would give your courses an instant visibility to a very large set of developers, designers and data scientists.View Plans
Python Interview Questions (Frequently Asked)
Irrespective of the source you pick a list of best programming languages to learn in 2019 from, one name that will always find its place there is Python.
So, the answer is yes, if you are asking whether a lucrative career is possible by dedicating yourself to the interpreted, high-level, general-purpose programming language i.e. learning Python.
Python Interview Questions
Once you’ve had enough understanding of the various concepts of Python, it’s time to give a shot at some interviews. To increase your chances of clearing them, here is a list of 20 Python interview questions that you must know answers to:
Question:.
Question: Draw a comparison between the range and xrange in Python.
Answer: In terms of functionality, both range and xrange are identical. Both allow for generating a list of integers. The main difference between the two is that while range returns a Python list object, xrange returns an xrange object.
Xrange is not able to generate.
Question: Explain Inheritance and its various types in Python?
Answer:. There are 4 forms of inheritance supported by Python:
- Single Inheritance – A single derived class acquires from on single superclass.
- Multi-Level Inheritance – At least 2 different derived classes acquire from two distinct base classes.
- Hierarchical Inheritance – A number of child classes acquire from one superclass
- Multiple Inheritance – A derived class acquires from several superclasses.
Question: Explain how is it possible to Get the Google cache age of any URL or webpage using Python.
Answer: In order to Get the Google cache age of any URL or webpage using Python, following URL format is used:
Simply replace URLGOESHERE with the web address of the website or webpage whose cache you need to retrieve and see in Python.
Question::
DATABASES = { 'default': { 'ENGINE' : 'django.db.backends.sqlite3', 'NAME' : os.path.join(BASE_DIR, 'db.sqlite3'), } }
If you need to use a database server other than the SQLite, such as MS SQL, MySQL, and PostgreSQL, then you need to use the database’s administration tools to create a brand new database for your Django project.
You have to modify the following keys in the DATABASE ‘default’ item to make the new database work with the Django project:
- ENGINE – For example, when working with a MySQL database replace ‘django.db.backends.sqlite3’ with ‘django.db.backends.mysql’
- NAME – Whether using SQLite or some other database management system, the database is typically a file on the system. The NAME should contain the full path to the file, including the name of that particular file.
NOTE: - Settings like Host, Password, and User needs to be added when not choosing SQLite as the database.
Question: How will you differentiate between deep copy and shallow copy?
Answer: We use, deep copy makes execution of a program slower. This is due to the fact that it makes some copies for each object that is called.
Question:.
People Also Read:
Question: Observe the following code: = A6 = [[i,i*i] for i in A1] print(A0,A1,A2,A3,A4,A5,A6)
Write down the output of the code.
Answer:
A0 = {'a': 1, 'c': 3, 'b': 2, 'e': 5, 'd': 4} # the order may vary
A1 = range(0, 10)
A2 = []
A3 = [1, 2, 3, 4, 5]
A4 = [1, 2, 3, 4, 5]
A5 =
A6 = [[0, 0], [1, 1], [2, 4], [3, 9], [4, 16], [5, 25], [6, 36], [7, 49], [8, 64], [9, 81]]
Question: Python has something called the dictionary. Explain using an example.
Answer: A dictionary in Python programming language is an unordered collection of data values such as a map. Dictionary holds key:value pair. It helps in defining a one-to-one relationship between keys and values. Indexed by keys, a typical dictionary contains a pair of keys and corresponding values.
Let us take an example with three keys, namely Website, Language, and Offering. Their corresponding values are hackr.io, Python, and Tutorials. The code for the example will be:
dict={‘Website’:‘hackr.io’,‘Language’:‘Python’:‘Offering’:‘Tutorials’} print dict[Website] #Prints hackr.io print dict[Language] #Prints Python print dict[Offering] #Prints Tutorials
Question: Python supports negative indexes. What are they and why are they used?
Answer: The sequences in Python are indexed. It consists of the positive and negative numbers. Positive numbers use 0 as the first index, 1 as the second index, and so on. Hence, any index for a positive number n is n-1.
Unlike positive numbers, index numbering for the negative numbers start from -1 and it represents the last index in the sequence. Likewise, -2 represents the penultimate index. These are known as negative indexes. Negative indexes are used for:
- Removing any new-line spaces from the string, thus allowing the string to except the last character, represented as S[:-1]
- Showing the index to representing the string in the correct order
Question: Suppose you need to collect and print data from IMDb top 250 Movies page. Write a program in Python for doing so. (NOTE: - You can limit the displayed information for 3 fields; namely movie name, release year, and rating.)
Answer:
from bs4 import BeautifulSoup import requests import sys url = '' response = requests.get(url) soup = BeautifulSoup(response.text) tr = soup.findChildren("tr") tr = iter(tr) next(tr) for movie in tr: title = movie.find('td', {'class': 'titleColumn'} ).find('a').contents[0] year = movie.find('td', {'class': 'titleColumn'} ).find('span', {'class': 'secondaryInfo'}).contents[0] rating = movie.find('td', {'class': 'ratingColumn imdbRating'} ).find('strong').contents[0] row = title + ' - ' + year + ' ' + ' ' + rating print(row) Question: Take a look at the following code: try: if '1' != 1: raise "someError" else: print("someError has not occured") except "someError": print ("someError has occured")
What will be the output?
Answer: The output of the program will be “invalid code.” This is because a new exception class must inherit from a BaseException.
Question: What do you understand by monkey patching in Python?
Answer: The dynamic modifications made to a class or module at runtime are termed as monkey patching in Python. Consider the following code snippet:
# m.py class MyClass: def f(self): print "f()"
We can monkey-patch the program something like this:
import m def monkey_f(self): print "monkey_f()" m.MyClass.f = monkey_f obj = m.MyClass() obj.f()
Output for the program will be monkey_f().
The examples demonstrate changes made in the behavior of f() in MyClass using the function we defined i.e. monkey_f() outside of the module m.
Question:.
Question: What is Flask and what are the benefits of using it?
Answer: Flask is a web microframework for Python with Jinja2 and Werkzeug as its dependencies. As such, it has some notable advantages:
- Flask has little to no dependencies on external libraries
- Because there is a little external dependency to update and fewer security bugs, the web microframework is lightweight to use.
- Features an inbuilt development server and a fast debugger.
Question: What is the map() function used for in Python?
Answer: The map() function applies a given function to each item of an iterable. It then returns a list of the results. The value returned from the map() function can then be passed on to functions to the likes of the list() and set().
Typically, the given function is the first argument and the iterable is available as the second argument to a map() function. Several tables are given if the function takes in more than one arguments.
Question: What is Pickling and Unpickling in Python?
Answer: The Pickle module in Python allows accepting any object and then converting it into a string representation. It then dumps the same into a file by means of the dump function. This process is known as pickling.
The reverse process of pickling is known as unpickling i.e. retrieving original Python objects from a stored string representation.
Question: Whenever Python exits, all the memory isn’t deallocated. Why is it so?
Answer: Upon exiting, Python’s built-in effective cleanup mechanism comes into play and try to deallocate or destroy every other object.
However, Python modules that are having circular references to other objects or the objects that are referenced from the global namespaces aren’t always deallocated or destroyed.
This is because it is not possible to deallocate those portions of the memory that are reserved by the C library.
Question: Write a program in Python for getting indices of N maximum values in a NumPy array.
Answer:
import numpy as np arr = np.array([1, 3, 2, 4, 5]) print(arr.argsort()[-3:][::-1]) Output: [4 3 1]
Question: Write code to show randomizing the items of a list in place in Python along with the output.
Answer:
from random import shuffle x = ['hackr.io', 'Is', 'The', 'Best', 'For', 'Learning', 'Python'] shuffle(x) print(x) Output: ['For', 'Python', 'Learning', 'Is', 'Best', 'The', 'hackr.io']
That’s All!
So, that sums up the list of 20 Python interview questions. Learning never gets easier, you need to get better. So, here are some top choices for Python books to double-check you Python preparation.
All the best for your interview!
People are also Reading:
What are the built in modules in Python?
What is Python API?
What is difference between framework and library?
What is the difference between package and library?
What is the difference between module and package in Python?
What is negative index?
What are generators?
How is Python interpreted?
How do I run a Python script in Windows?
Why lambda forms does not have statements in Python? | https://hackr.io/blog/python-interview-questions | CC-MAIN-2019-51 | refinedweb | 1,613 | 66.74 |
Copyright © 2013 W3C® (MIT, ERCIM, Keio, Beihang), to publish this specification as an informative Working Group Note. It was previously published as a Working Group Draft. In addition, this publication updates the references that have changed since the previous publication (diff).
This Note includes material that was published previously for early feedback in the document titled "XML Signature Transform Simplification: Requirements and Design", see.:SignedInfo
This is requirements and design options for XML Security 2.0, including Canonical XML 2.0 and XML Signature 2.0.
The Reference processing model and associated transforms currently defined by XML Signature [XMLDSIG-CORE] are very general and open-ended. This dispute resolution...
The a mechanism that allows a subset of a non XML resource to be signed.
Besides the explicit design principles and requirements in [XML-CANONICAL-REQ], [XML-CANONICAL-REQ] [XML-C14N] (and its cousin, Exclusive Canonicalization [XML-EXC-C14N]) of a 2.0 version.
XML Canonicalization is used in some other specs e.g. DSS to do some cleanup of the XML. This is not required of a 2.0 version. [XMLDS. Only regular attributes can be excluded, not attributes that are namespace declarations or in the xml namespace.
Optionally to this set, reinclude some subtrees (of element nodes). (Note: this is not supported in Canonical XML 2.0, in order to support goals related to simplicity.)
This data model avoids namespace nodes completely. It only deals with namespace declarations. It also prohibits attribute nodes without parent element nodes. Another simplification with this model is if an element node is present, all its namespace declarations and all its child text nodes have to be present. that workflow. signer. as well. Prefix names being significant is yet another source of issues. Schema aware canonicalization is another possibility, but this may have issues related to requiring a schema.
The current Transform chain model usually require schema changes,.
The XML Signature Best Practices document [XMLDSIG-BESTPRACTICES] wrapping attacks [MCINTOSH-WRAP].
Problems with
RetrievalMethod
RetrievalMethod can lead to infinite loops. Also transforms in retrieval method can lead to many attacks, and these cannot be solved by changing the order of operations.
These security risks need to be addressed in the new specification. the (which has been done for XPath 1.0, see [XMLDS.. | http://www.w3.org/2008/xmlsec/Drafts/xmlsec-reqs2/2013-04-11-NOTE/Overview.html | CC-MAIN-2015-27 | refinedweb | 378 | 51.34 |
[Android] Building a Menu for your Android (V1.0 R1) App
25 November, 2008 33 Comments
In Building a simple menu in Android, a menu was built using the MenuBuilder. Since the release of version 1.0 (release 1), MenuBuilder has been removed. In this tutorial, we’ll set up a custom menu that gets displayed when the menu button is pressed.
To add the menu for your application, your activity needs to override the onCreateOptionsMenu method. The method will be given the instance of the menu to popuplate. Our overridden method will add the items to the instance that is given. To control what happens when an item on the menu is selected, override the onOptionsItemSelected method. Here is the first part of the code that sets up the activity and adds a couple of items to our menu:
1 package com.android.menu;
2
3 import android.app.Activity;
4 import android.app.AlertDialog;
5 import android.os.Bundle;
6 import android.view.Menu;
7 import android.view.MenuInflater;
8 import android.view.MenuItem;
9 import android.view.SubMenu;
10
11 public class MenuDemo extends Activity
12 {
13 /**
14 * Called when the activity is first created.
15 */
16 @Override
17 public void onCreate(Bundle savedInstanceState)
18 {
19 super.onCreate(savedInstanceState);
20 setContentView(R.layout.main);
21 }
22
23 /**
24 * {@inheritDoc}
25 */
26 @Override
27 public boolean onCreateOptionsMenu(Menu menu)
28 {
29 super.onCreateOptionsMenu(menu);
30
31 MenuItem item = menu.add("Painting");
32 item.setIcon(R.drawable.paint);
33
34 item = menu.add("Photos");
35 item.setIcon(R.drawable.photos);
36
The call to to the setIcon method attachs an image to the menu item. In the example above, the images have been placed in the appropriate resources directory and is being referenced from there.
The items added so far are top level items. Now let’s say we want to attach a submenu. This is as easy as calling the addSubMenu method.
37 SubMenu subScience = menu.addSubMenu(
38 R.string.scienceMenuName);
39 subScience.setIcon(R.drawable.science);
40
To add items to our submenu, we can create the menu items programatically (like we did with the top level menu). However, just to show you another way of setting up the menu, we’ll use load up a menu specified by a menu XML file here instead. You can use the Android Menu Editor to edit the XML if you want (in Eclipse, create an XML file and then right click on the file Open With -> Android Menu Editor) or you can use the text editor to manually create the XML. Below is the contents of the XML file that I used:
1 <menu xmlns:
2 <item android:
3 <item android:
4 <item android:
5 </menu>
6
It should be noted that we could have also created our top level menu the same way! The loading of the menu from an XML file is done through the use of a MenuInflater.
41 // Now, inflate our submenu.
42 MenuInflater inflater = new MenuInflater(this);
43 inflater.inflate(R.menu.menu, subScience);
44
Of course, once the menu has been loaded from an XML file, we can still add items to the submenu.
45 // Programatically, add one item to the submenu.
46 subScience.add("Meteorology");
47
To make the menu displayable when the menu button is pressed, the method needs to return true.
48 // Return true so that the menu gets displayed.
49 return true;
50 }
51
With the menu setup, we now override the onOptionsItemSelected method to handle menu selections. The selected menu item is given back as the method parameter. In this example, the an alert dialog simply pops up, showing what item was selected.
52 /**
53 * {@inheritDoc}
54 */
55 public boolean onOptionsItemSelected(MenuItem item)
56 {
57
58 if (item.hasSubMenu() == false)
59 {
60 // For this demo, lets just give back what
61 // we found.
62 AlertDialog.Builder dialogBuilder = new
63 AlertDialog.Builder(this);
64
65 dialogBuilder.setMessage(" You selected " +
66 item.getTitle());
67
68 dialogBuilder.setCancelable(true);
69 dialogBuilder.create().show();
70 }
71
72 // Consume the selection event.
73 return true;
74 }
75 }
76
Pingback: Building a simple menu in Android « Kah - The Developer
Excellent little tutorial, mate. Android really has a neat SDK!
Pingback: [Android] Defining a Context Menu for your View « Kah - The Developer
Hi,
I tried to create an Menu bar in Android Project but in code I am facing some queries like “How do design the class and xml file for this”. Please help for this.
Thanks a lot…
Regards,
Ravisankar S
+91 988 477 9432
I don’t really understand what you are asking or what you are after. Once you have figured out how you want your menu to look, you should be able to easily translate it to code or the XML file. Also, note that the code in this little tutorial may also be out of date, as they have release later versions of the SDK.
Pingback: KamoBagula.com » Blog Archive » How to build menus in Android
i want to open up options menu when a custom button is clicked … is it possible? if so, please help me out
thanks in advance
Hi,
I just wrote and posted an example for how you can do this. Please see this post.
Hi..
How to create the permenent menus??.. I want the menu stays visible to user..I don’t want him to press the menu key each time..
Is it possible to develop such menu??? if yes..I would appriciate your help
I can’t really figure out how to do this either from the current API. I’m not even sure why you would want to keep the option menu opened permenantly. If you really do need to keep something like the options menu open “permenently”, you could probably use a panel with buttons that stays open. However, if you really do feel you need to keep it open, I guess you try asking over at StackOverflow.
Thank you very much for your prompt reply.
Basically in my application there are lot of activities and each actvities may have (different or same menu)options user can select.
There may be some activities which may not require menu at all.
So I just don’t want user to keep guessing and press the menu key to look for options he may have from current activity.
I like your suggestion of having panel with multiple buttons.
So for example my three activities require same menu (panel with buttons)…
in that case, can we have something like master detail activity.
Master Activity(Contains my menu) stays visible during all
my Child Activities(Contains other UI stuff) executing.
Fundamentally i know at a time only one activity should run, but is there any other way you think of achiving this scenario.
Please let me know if, i am not able to make you understand.
Thanks
To me, it should be fairly obvious which items or “options” should (and should NOT) be available in an activity’s options menu. So it should be fairly intuitive and easy for the user to “guess” what items are in the options menu based on what is on the screen. Basically, I don’t even think that you should try to keep it open in any of the activities!
Besides, menus in mose other application will normally close if an item is selected or something outside the menu is pressed. Changing the menu to stay open, even after an item is selected, is something that could be unintuitive (since it isn’t the usual behaviour) and just take up precious screen space.
If you really need a menu that stays open, then I think should consider using a toolbar or menubar instead. This could easily be done using a “panel with buttons”. The only difference is that toolbars and menubars don’t need to be “opened” by pressing the MENU button.
Thanks again….
Actually we already have the iphone version of the app, which i am going to build. And if you know iphone
do not have menu key. So we do have Menubars on top of the app. And we are planning to have
identical design.
Anyways, your inputs are very helpful and we will probably go for panel with buttons.
My application crashes when started…..!
///////////main.xml////////////////////
/////////////////end of file//////////////////
/////////////////strings.xml////////////////
Hello World, TestCalc!
Simple Calculator
////////////////end of file//////////////////
///////////////AndroidPeaseed Manifest//////////////////
/////////////////////end of file/////////////////
please help me….!
xml not displaying….?
I suspect XML is not displaying because of the tags. Try putting
[
sourcecode language="xml"
]at top of your xml and
[/
sourcecode
]at the bottom. I used this same tag to format your code above. If you are working in Eclipse you would probably see an exception in the LogCat view (open the DDMS perspective or go through Window -> Show View -> Other… -> Android -> LogCat). If there is an exception and a stack trace, it would be useful to know.
Tip:
I noticed that you are following the example that I wrote a very long time ago, where the dialog is created each time it needs to be shown (when a certain item has been selected). Having to create the same dialog each time is rather inefficient. The only thing that changes is text that displays what has been selected. I think the application would be faster if showDialog(int) was used instead. If you take this approach, you will need to override onCreateDialog instead. The method onOptionsItemSelected then just needs to update the dialog and call showDialog(int).
//////////////////AndroidPeaseed Manifest//////////////
”
”
///////////////////end of file/////////////////
I am having a list of names in my resources and when I tap and hold any items on that list I need to change its font style using menu help me on this issue. please send me some sample code since I am new to this technology
I’m not aware of a way of changing the font (or font style) in menus when tapping or holding any items. Maybe try asking on Stackoverflow.
Hi Kahgoh,
Newly i created database in android its working fine, but i need to synchronize all the contacts from mobile to database, for that i need to create menu options,,,, so could u plz suggest me..?
Regards
Munirasu
Hi,
I don’t understand what you are asking for. Could you provide information on what you are trying to achieve and also what have you tried? Did you just want to create a menu item for synchronizing the contacts database?
Icon is not coming….
Hi,
You will need to add your own icons to the project first. You should place them in one of subdirectories under res. The subdirectories under res (drawable-hdpi, drawable-mdpi and drawable-ldpi) allow you to provide different versions of the icon, depending on the screen resolution of the device (see documentation on Supporting Multiple Screens). Then, in your code, you can reference these resources to use them as the icons.
Hi there, i have a problem in the following line:
inflater.inflate(R.menu.menu, subScience);
eclipse does not find R.menu. Please can you tell me what is my prob? bdw i am a new to android development. thanks
Hi,
I assume that you have created the project as an Android project in Eclipse and that you have also installed the Android plugin for Eclipse (if Build Automatically is set). The R class is generated and kept up to date for you by the Android plugins in Eclipse. First, you’ll need to make sure that you have defined the menu in a file called “menu.xml“, under the res/menu folder of your project. You might need to refresh your project to generate the R class or go to the Project menu and select Build Project. In the later versions, it is generated in the gen folder.
Is there a way to pass data back to the previous screen?
So, if the menu option displays an edit screen, then when that screen is closed, how do you pass the edited information back to the previous screen?
You could try using a ContentProvider. The screen that edits the data then updates the data through the content provider and the activity that had the menu also uses the data from the ContentProvider.
On a side note, I’ll be going on holidays for a few weeks. So if you have any more questions, I may not respond for a while and it may be faster to ask on StackOverflow.
I need to add a menu with a single option to add a new button to my app( as an option ). Can you provide me the code for that?
Hi,
I already discussed how to create the menu here, so I assume you just need to know how to add the button dynamically. Adding the button is as simple as creating the button, setting its layout parameters and adding it to the layout. For example:
Can you post a screenshot of what your menu looks like?
Thanks.
Screenshots ? | https://kahdev.wordpress.com/2008/11/25/building-a-menu-for-your-android-v10-r1-app/ | CC-MAIN-2019-13 | refinedweb | 2,174 | 73.78 |
Previous Chapter: Namespaces
Next Chapter: File Management
Next Chapter: File Management
Global and Local Variables
Global and local Variables in Functions
Let's have a look at the following function:
def f(): print s s = "I hate spam" f()The variable s is defined as the string "I hate spam", before we call the function f(). The only statement in f() is the "print s" statement. As there is no local s, the value from the global s will be used. So the output will be the string "I hate spam". The question is, what will happen, if we change the value of s inside of the function f()? Will it affect the global s as well? We test it in the following piece of code:
def f(): s = "Me too." print s s = "I hate spam." f() print sThe output looks like this:
Me too. I hate spam.What if we combine the first example with the second one, i.e. first access s and then assigning a value to it? It will throw an error, as we can see in the following example:
def f(): print s s = "Me too." print s s = "I hate spam." f() print sIf we execute the previous script, we get the error message:
UnboundLocalError: local variable 's' referenced before assignmentPython :
def f(): global s print s s = "That's clear." print s s = "Python is great!" f() print sNow there is no ambiguity. The output is as follows:
Python is great! That's clear. That's clear.Local variables of functions can't be accessed from outside, when the function call has finished:
def f(): s = "I am globally not known" print s f() print sIf you start this script, you get an output with the following error message:
I am globally not known Traceback (most recent call last): File "global_local3.py", line 6, in <module> print s NameError: name 's' is not definedThe following example shows a deliberate combination of local and global variables and function parameters:
def foo(x, y): global a a = 42 x,y = y,x b = 33 b = 17 c = 100 print a,b,x,y a,b,x,y = 1,15,3,4 foo(17,4) print a,b,x,yThe output of the script above looks like this:
42 17 4 17 42 15 3 4
Previous Chapter: Namespaces
Next Chapter: File Management
Next Chapter: File Management | http://python-course.eu/global_vs_local_variables.php | CC-MAIN-2017-09 | refinedweb | 402 | 77.77 |
08 June 2010 20:06 [Source: ICIS news]
By Nigel Davis
LONDON (ICIS news)--Following the second half 2009 recovery and a particularly strong start to the year it is hardly surprising that concern has grown about just where demand goes next.
Macroeconomic uncertainties cloud the outlook. Sovereign debt issues have spooked financial and commodity markets worldwide.
The just released mid-year outlook from European trade federation Cefic is couched in particularly cautious terms.
“The overall economic recovery in ?xml:namespace>
Cefic and the sector’s economists expect growth in chemicals production of 9.5% in the 27 member states of the EU in 2010 but output growth of only 2% in 2011.
The nearer-term figure is greater than that forecast in November. Output of the basic, commodity chemicals has rebounded and grown faster and longer than expected.
But, as Cefic notes, the output of all segments “remains well below previous levels”.
All-in-all, EU chemicals production, excluding pharmaceuticals, is still only at about the level seen at the end of 2005, before the extended upturn took hold. The chart here shows Cefic’s current projections for chemicals segment growth this year and next.
Growing back towards prior stronger levels of output will take time. Oxford Economics, in a regular global chemicals update at the start of this month, said it expected 2010 output growth of 10.75% in chemicals and man-made fibres in the 15 EU member states it monitors.
Following this peak it says non-pharma chemicals growth could average under 2% in these 15 countries until 2020.
Pharmaceuticals output has been weak so far this year and helped drag down the consultancy’s overall chemicals output forecast to 7.5% for 2010. EU 15 chemicals output could average 3.25% over the next three years and slip below 2.5% as 2020 approaches, it says.
EU non-pharma chemicals output so far this year has been driven by continued expansion in basic chemicals and by paints, Oxford Economics says.
West European chemicals output was up 2.5% quarter-to quarter in the first quarter of 2010 lifted by growth in Germany and despite weakness in pharmaceuticals.
Globally chemicals output growth has been strong, building away from the slump with growth overall of 12% between the first quarter of 2009 and the first quarter of 2010,
They suggest too that prices in the developed economies will grow below the rate of inflation over their forecast period out to 2020.
Clearly hopes are pinned on China to drive chemicals growth over that period although Oxford Economics suggests that expansion of the country’s chemicals output is likely to moderate over the medium term.
It expects
“Despite a likely moderation of Chinese growth over the medium term, the emerging economies will still account for 55% of world output by 2020,” its analysts | http://www.icis.com/Articles/2010/06/08/9366163/insight-commodities-will-lift-output-growth-in-2010-but-11-looks-weak.html | CC-MAIN-2014-42 | refinedweb | 475 | 54.42 |
An adventure in simple web automation. Check out this article from William Koehrson on controlling the Web with Python, happy learning!
Solution: Use Python to automatically submit completed assignments! Ideally, I would be able to save an assignment, type a few keys, and have my work uploaded in a matter of seconds. At first this sounded too good to be true, but then I discovered selenium, a tool which can be used with Python to navigate the web for you.
Anytime we find ourselves repeating tedious actions on the web with the same sequence of steps, this is a great chance to write a program to automate the process for us. With selenium and Python, we just need to write a script once, which we can then run as many times as we want, saving ourselves from repeating monotonous tasks (and, in my case, eliminating the chance of submitting an assignment in the wrong place)!
Here, I’ll walk through the solution I developed to automatically (and correctly) submit my assignments. Along the way, we’ll cover the basics of using Python and selenium to programmatically control the web. While this program does work (I’m using it every day!), it’s pretty custom, so you won’t be able to copy and paste the code for your application. Nonetheless, the general techniques here can be applied to a limitless number of situations. (If you want to see the complete code, it’s available on GitHub).
Approach
Before we can get to the fun part of automating the web, we need to figure out the general structure of our solution. Jumping right into programming without a plan is a great way to waste many hours in frustration. I want to write a program to submit completed course assignments to the correct location on Canvas (my university’s “learning management system”). Starting with the basics, I need a way to tell the program the name of the assignment to submit and the class. I went with a simple approach and created a folder to hold completed assignments with child folders for each class. In the child folders, I place the completed document named for the particular assignment. The program can figure out the name of the class from the folder, and the name of the assignment by the document title.
Here’s an example where the name of the class is EECS491 and the assignment is “Assignment 3 — Inference in Larger Graphical Models”.
File structure (left) and Complete Assignment (right)
The first part of the program is a loop to go through the folders to find the assignment and class, which we store in a Python tuple:
# os for file management import os # Build tuple of (class, file) to turn in submission_dir = 'completed_assignments' dir_list = list(os.listdir(submission_dir)) for directory in dir_list: file_list = list(os.listdir(os.path.join(submission_dir, directory))) if len(file_list) != 0: file_tup = (directory, file_list[0]) print(file_tup) ('EECS491', 'Assignment 3 - Inference in Larger Graphical Models.txt')
This takes care of file management and the program now knows the program and the assignment to turn in. The next step is to use selenium to navigate to the correct webpage and upload the assignment.
Web Control with Selenium
To get started with selenium, we import the library and create a web driver, which is a browser that is controlled by our program. In this case, I’ll use Chrome as my browser and send the driver to the Canvas website where I submit assignments.
import selenium # Using Chrome to access web driver = webdriver.Chrome() # Open the website driver.get('')
When we open the Canvas webpage, we are greeted with our first obstacle, a login box! To get past this, we will need to fill in an id and a password and click the login button.
Imagine the web driver as a person who has never seen a web page before: we need to tell it exactly where to click, what to type, and which buttons to press. There are a number of ways to tell our web driver what elements to find, all of which use selectors. A selector is a unique identifier for an element on a webpage. To find the selector for a specific element, say the CWRU ID box above, we need to inspect the webpage. In Chrome, this is done by pressing “ctrl + shift + i” or right clicking on any element and selecting “Inspect”. This brings up the Chrome developer tools, an extremely useful application which shows the HTML underlying any webpage.
To find a selector for the “CWRU ID” box, I right clicked in the box, hit “Inspect” and saw the following in developer tools. The highlighted line corresponds to the id box element (this line is called an HTML tag).
HTML in Chrome developer tools for the webpage
This HTML might look overwhelming, but we can ignore the majority of the information and focus on the 'id = "username"' and 'name="username"' parts. (these are known as attributes of the HTML tag).
To select the id box with our web driver, we can use either the 'id' or 'name' attribute we found in the developer tools. Web drivers in selenium have many different methods for selecting elements on a webpage and there are often multiple ways to select the exact same item:
# Select the id box id_box = driver.find_element_by_name('username') # Equivalent Outcome! id_box = driver.find_element_by_id('username')
Our program now has access to the 'id_box' and we can interact with it in various ways, such as typing in keys, or clicking (if we have selected a button).
# Send id information id_box.send_keys('my_username')
We carry out the same process for the password box and login button, selecting each based on what we see in the Chrome developer tools. Then, we send information to the elements or click on them as needed.
# Find password box pass_box = driver.find_element_by_name('password') # Send password pass_box.send_keys('my_password') # Find login button login_button = driver.find_element_by_name('submit') # Click login login_button.click()
Once we are logged in, we are greeted by this slightly intimidating dashboard:
We again need to guide the program through the webpage by specifying exactly the elements to click on and the information to enter. In this case, I tell the program to select courses from the menu on the left, and then the class corresponding to the assignment I need to turn in:
# Find and click on list of courses courses_button = driver.find_element_by_id('global_nav_courses_link') courses_button.click() # Get the name of the folder folder = file_tup[0] # Class to select depends on folder if folder == 'EECS491': class_select = driver.find_element_by_link_text('Artificial Intelligence: Probabilistic Graphical Models (100/10039)') elif folder == 'EECS531': class_select = driver.find_element_by_link_text('Computer Vision (100/10040)') # Click on the specific class class_select.click()
The program finds the correct class using the name of the folder we stored in the first step. In this case, I use the selection method 'find_element_by_link_text' to find the specific class. The “link text” for an element is just another selector we can find by inspecting the page. :
Inspecting the page to find the selector for a specific class
This workflow may seem a little tedious, but remember, we only have to do it once when we write our program! After that, we can hit run as many times as we want and the program will navigate through all these pages for us.
We use the same ‘inspect page — select element — interact with element’ process to get through a couple more screens. Finally, we reach the assignment submission page:
At this point, I could see the finish line, but initially this screen perplexed me. I could click on the “Choose File” box pretty easily, but how was I supposed to select the actual file I need to upload? The answer turns out to be incredibly simple! We locate the 'Choose File' box using a selector, and use the 'send_keys' method to pass the exact path of the file (called 'file_location' in the code below) to the box:
# Choose File button choose_file = driver.find_element_by_name('attachments[0][uploaded_data]') # Complete path of the file file_location = os.path.join(submission_dir, folder, file_name) # Send the file location to the button choose_file.send_keys(file_location)
That’s it! By sending the exact path of the file to the button, we can skip the whole process of navigating through folders to find the right file. After sending the location, we are rewarded with the following screen showing that our file is uploaded and ready for submission.
Now, we select the “Submit Assignment” button, click, and our assignment is turned in!
# Locate submit button and click submit_assignment = driver.find_element_by_id('submit_file_button') submit_assignent.click()
Cleaning Up
File management is always a critical step and I want to make sure I don’t re-submit or lose old assignments. I decided the best solution was to store a single file to be submitted in the 'completed_assignments' folder at any one time and move files to a 'submitted_assignments' folder once they had been turned in. The final bit of code uses the os module to move the completed assignment by renaming it with the desired location:
# Location of files after submission submitted_file_location = os.path.join(submitted_dir, submitted_file_name) # Rename essentially copies and pastes files os.rename(file_location, submitted_file_location)
All of the proceeding code gets wrapped up in a single script, which I can run from the command line. To limit opportunities for mistakes, I only submit one assignment at a time, which isn’t a big deal given that it only takes about 5 seconds to run the program!
Here’s what it looks like when I start the program:
The program provides me with a chance to make sure this is the correct assignment before uploading. After the program has completed, I get the following output:
While the program is running, I can watch Python go to work for me:
Conclusions
The technique of automating the web with Python works great for many tasks, both general and in my field of data science. For example, we could use selenium to automatically download new data files every day (assuming the website doesn’t have an API). While it might seem like a lot of work to write the script initially, the benefit comes from the fact that we can have the computer repeat this sequence as many times as want in exactly the same manner. The program will never lose focus and wander off to Twitter. It will faithfully carry out the same exact series of steps with perfect consistency (which works great until the website changes).
I should mention you do want to be careful before you automate critical tasks. This example is relatively low-risk as I can always go back and re-submit assignments and I usually double-check the program’s handiwork. Websites change, and if you don’t change the program in response you might end up with a script that does something completely different than what you originally intended!
In terms of paying off, this program saves me about 30 seconds for every assignment and took 2 hours to write. So, if I use it to turn in 240 assignments, then I come out ahead on time! However, the payoff of this program is in designing a cool solution to a problem and learning a lot in the process. While my time might have been more effectively spent working on assignments rather than figuring out how to automatically turn them in, I thoroughly enjoyed this challenge. There are few things as satisfying as solving problems, and Python turns out to be a pretty good tool for doing exactly that.'
This article was written by William Koehrsen and posted originally on Medium. | https://www.signifytechnology.com/blog/2019/10/controlling-the-web-with-python-by-william-koehrsen | CC-MAIN-2021-04 | refinedweb | 1,947 | 60.35 |
Here's a list of the most noteworthy things in the RAP 3.0 release which is available for download since June 24, 2015.
The RAP 3.0 web client will support the following browsers:
Dropping support for older browsers allowed us to remove thousands of lines of
JavaScript code and to take more advantage of modern HTML5/CSS3 features.
RAP also used to enable the so-called quirksmode in HTML in order to avoid glitches in old
IE versions. With the end of support for antique browsers, RAP now uses the HTML5
<!DOCTYPE html> declaration that enables standard mode in all browsers.
RAP may continue to work in some of the older browser, but not in Internet Explorer 8. For Windows Vista, IE9 can be manually installed. For Windows 7, IE10 is installed with Service Pack 1 and IE11 can be installed manually. Windows 8 has IE10 as it's default browser, which is upgraded to IE11 with Windows 8.1. If you still target Windows XP, you have to use either Firefox or Google Chrome.
As many other Eclipse projects, we decided to raise the minimal execution environment to Java 7. Since Jetty 9 and even some parts of Equinox now depend on Java 7, it became almost impossible to run and test RAP with Java 5. Moving to Java 7 makes our life easier and allows us to make use of some new features. With modern JREs being available even for embedded platforms, so we think that we won't exclude anyone by this move.
The Bundle-RequiredExecutionEnvironment (BREE) of all bundles has been updated to JavaSE-1.7.
Deprecated API has been removed in RAP 3.0. We provide migration guide to help you find the proper replacements.
In RAP 2.x, you could access URL parameters by calling
RWT.getRequest().getParameter(name); // OLD
The problem with this approach was that it only worked in the first request.
Moreover,
RWT.getRequest() returns the actual XHR request, not the request that
returned the HTML page. That's why using this method in application code is not recommended.
Now there is a client service named
StartupParameters that provides access to those
startup parameters. Since this service interface is also implemented by
AbstractEntryPoint,
you can access the methods
getParameter(),
getParameterNames(),
and
getParameterValues() directly in your entrypoint:
public class MyEntryPoint extends AbstractEntryPoint { @Override protected void createContents(Composite parent) { String foo = getParameter("foo"); ...
When you test your UI components, you have to simulate the environment that RAP UI code normally runs in (including the UI thread, a UISession, ApplicationContext etc.) To do so, we provide the bundle org.eclipse.rap.rwt.testfixture. However, we always claimed that the classes therein were not part the public RAP API.
From now on, we provide public API for unit tests in the form of a JUnit Rule.
Instead of calling
Fixture.setUp() and
Fixture.tearDown(),
you now only need to include the following line in your test cases.
This will simulate a new test context for every test method.
There's also no need to fake the PROCESS_ACTION phase anymore.
@Rule public TestContext context = new TestContext();
The class
Fixture and all its companions have been moved to the internal package
org.eclipse.rap.rwt.testfixture.internal.
The RAP port of the Nebula Grid (including GridViewer) has been moved from the RAP Incubator to the RAP repository. It supports a subset of the API from the Grid found in the Nebula Release, now also including setAutoHeight. The Nebula Grid also works with RAP specific features such as RWT.MARKUP_ENABLED and Row Templates.
The Grid is included in the RAP target platform and can be used simply by importing the org.eclipse.nebula.widgets.grid package, making it single-sourcing capable. The Nebula Grid ports for RAP 2.x versions will remain in the Incubator.
The file dialog and file upload components have been moved from the RAP Incubator to the RAP repository. FileDialog supports SWT single-sourcing and can upload multiple files with a clean user interface. It now also support file drag and drop.
The FileDialog is located in the org.eclipse.rap.filedialog bundle in the RAP target platform. Older versions of the dialog and upload components for RAP 2.x versions will remain in the Incubator.
The method
Control.setParent() is now fully implemented and will move a control
from one parent to another. To indicate that the re-parenting was successful, the method will
return
true. We implemented re-parenting in order to better support the E4 UI.
Be aware you must not try to replace the parent of a control that is attached to another widget
using a
setControl() method (e.g. a CoolItem, an ExpandItem, or a ScrolledComposite).
Those cases are neither supported by SWT nor by RAP.
TabItem, PUSH Button and PUSH ToolItem now support badges.
Those badges can be set using a data key:
tabItem.setData( RWT.BADGE, "23" );
In TabItem and Button, the badge is overlaying the widget's border and part of the padding/margin areas. The top padding + border + margin need provide enough space to display the badge, or it will be cut off. For ToolItem, the badge is placed within the padding area, overlaying part of the content if necessary.
Several clipping methods have been implemented:
GC.setClipping( Path )
GC.setClipping( Rectangle )
GC.setClipping( int, int, int, int )
The implementation utilizes the native HTML canvas clipping capabilities. Once a region is clipped, all future drawing operations will be limited to the clipped region.
The Button widget now supports RWT.MARKUP_ENABLED, allowing you to use an HTML subset in it's text. Also, Tree and Table now fully support RWT.TOOLTIP_MARKUP_ENABLED. This was previously not the case if the tooltip text was provided by a ColumnViewer.
In SWT, the text of a ToolItem is placed below the icon by default. If the parent ToolBar is
created with the
SWT.RIGHT flag, the text is to the right of the icon.
Until now RAP behaved as though
SWT.RIGHT was always set on the ToolBar, so it was
not possible to have the text below the icon. Now RAP behaves as SWT does. If your
application has a ToolBar without
SWT.RIGHT, text and icon will no longer be
side by side, but on top of each other.
It's now possible to create disabled and grayed images at runtime, using the SWT constants
IMAGE_DISABLE and
IMAGE_GRAY.
Image disabled = new Image( display, srcImage, SWT.IMAGE_DISABLE ); Image grayed = new Image( display, srcImage, SWT.IMAGE_GRAY );
Until now every edge of a widget in RAP had to have the same border width, style and color. Now every widget that supports the border shorthand property also supports the four properties border-left, border-right, border-bottom and border-top. This enables a number of new design choices, like visually merging neighbouring widgets.
In this case a RowLayout was used with spacing set to 0 and the following custom variants:
Button[PUSH].left { border-radius: 2px 0px 0px 2px; border-right: none; } Button[PUSH].middle { border-radius: 0px; border-left: none; border-right: none; } Button[PUSH].right { border-radius: 0px 2px 2px 0px; border-left: none; }
The Control.getBorderWidth() method will from now on return the biggest width of the widgets four edges.
The focus frame (represented in the theming by Button-FocusIndicator, Combo-FocusIndicator and FileUpload-FocusIndicator) will no longer be visible if the widget is focused by the mouse. Like in MS Windows, it will only be visible when focused using the keyboard.
In the default theme the Scroll Bars now have a more modern look and feel. They are invisible until they are "activated" by the user, which is when the indicators fade-in to a semi-transparent state.
This is achieved with a number of new theming options. The up/down and left/right buttons of scroll bars can now be hidden by setting the background-image property of ScrollBar-UpButton and ScrollBar-DownButton to none.The opacity property let's you make the entire Scroll Bar (semi) transparent, with the content below visible. In addition, the new active state is used to indicate that the scrollable area is hovered with the mouse, or that the user is scrolling using the keyboard.
If you don't like the new look and feel, don't worry. The business theme makes no changes to the Scroll Bars, and by making a theme-contribution you can easily adjust the opacity for the active and non-active states to be different values (e.g. "1" for both) and/or change the Scroll Bar background from "transparent" back to a solid color.
The TabItem has several new theming properties:
Since border now also allows styling the different edges of the widget, the following properties have been removed:
There is also a new state on TabItem that the item is a child of TabFolder with the SWT.BOTTOM style flag:
:first and
:last states have been added for ToolItem CSS element. This
allows first and last visible ToolItem to have distinct stylings, like different borders. The
ToolBar itself now also supports the
[VERTICAL] and
[FLAT] selectors.
Exploiting some of the new theming features, the ToolBar has a new default look combining all items to a single visual element. Single items are still distinguishable when hovered with the mouse. The previous look is also still available, simply by setting the SWT.FLAT style flag. This flag is used by default in workbench applications, so these will still look the same. The business theme is also not affected by this change.
The Scale widget theming has been enhanced with the following CSS properties and states:
background-imageproperty for Scale and Scale-Thumb
HORIZONTAL/
VERTICALselectors for Scale and Scale-Thumb
hoverstate for Scale-Thumb
To adjust the look of badges (see above), the Widget-Badge element can be used. It currently supports the properties font, color, background-color, border and border-radius.
text-overflow
Table and Tree items (and columns as well) now support the property text-overflow. When set to ellipsis, texts that don't fit into their cell will end with “…” instead of being cut off. The default RAP theme already has this setting. For custom themes, the default value is clip.
The Combo-List now has an even state, allowing the Combo drop-down to have alternating background colors.
The ProgressBar has been partially rewritten to use CSS3 properties instead of vector graphics. As a result, the indicator is now layouted more precisely when the bar is nearly empty or nearly full, but as a side-effect the ProgressBar-Indicator theming no longer supports the border property. It was not used by any of the themes included in RAP, so it won't change how the bar looks by default.
The RWT Launcher now allows you to run an RWT application from its ApplicationConfiguration. You can do this by specifying the ApplicationConfiguration class in the launcher main tab, from the file/project context menu or directly from the editor.
This list shows all bugs that have been fixed for this release.
To assist you with the migration from RAP 2.x (or 1.x) to 3.0, we provide a migration guide. | http://www.eclipse.org/rap/noteworthy/3.0/ | CC-MAIN-2016-50 | refinedweb | 1,870 | 57.67 |
Rails is a web development framework, where model, view and controller are important aspects of your application. Controllers, just like models and viewers, need to be tested with Ruby communities favorite tool, RSpec.
Controllers in Rails accept HTTP requests as their input and deliver back and HTTP response as an output.
Organizing tests
Describe and context blocks are crucial for keeping tests organized into a clean hierarchy, based on a controller’s actions and the context we’re testing. Betterspecs.org provides the basics about writing your tests, it will help you make your tests much more expressive.
The purpose of ‘describe’ is to wrap a set of tests against one functionality while ‘context’ is to wrap a set of tests against one functionality under the same state. Describe vs. Context in RSpec by Ming Liu.
You want to create a context for each meaningful input and wrap it into a describe block.
We will express each HTTP session in different describe blocks for:
stories_controller_spec.rb.
describe "Stories" do
describe "GET stories#index" do
context "when the user is an admin" do
it "should list titles of all stories"
end
context "when the user is not an admin" do
it "should list titles of users own stories" do
end
When you want to control the authorization access you can create a new context for each user role. In the same way, you can manage the authentication access, by creating a new context for logged in and logged out users.
context "when the user is logged in" do
it "should render stories#index"
end
context "when the user is logged out" do
it "should redirect to the login page"
end
end
By default, RSpec-Rails configuration disables rendering of templates for controller specs. You can enable it by adding
render_views:
- Globally, by adding it to
RSpec.configureblock in
rails_helper.rbfile
- Per individual group
describe "GET stories#show" do
it "should render stories#show template" do
end
end
describe "GET stories#new" do
it "should render stories#new template" do
end
end
It is very common to check if you are using valid or invalid attributes before saving them to the database.
describe "POST stories#create" do context "with valid attributes" do it "should save the new story in the database" it "should redirect to the stories#index page" end context "with invalid attributes" do it "should not save the new story in the database" it "should render stories#new template" end end end
How to get your data ready?
We use factories to get the data ready for our controller specs. The way factories work can be improved with a FactoryBot gem.
With the following factory we will generate multiple stories by using a
sequence of different titles and contents:
FactoryBot.define do
factory :story do
user
sequence(:title) { |n| "Title#{n}" }
sequence(:content) { |n| "Content#{n}" }
end
end
Let’s test this out!
The time has come to create our own controller tests. The tests are written using RSpec and Capybara. We will cover
stories_controller.rb with tests for each of these methods:
#index
First, we want to take a look at our controller
stories_controller.rb. The index action authorizes access to stories depending if the current user is an admin:
def index
@stories = Story.view_premissions(current_user).
end
And in model
story.rb we check if the current user is an admin:
def self.view_premissions(current_user)
current_user.role.admin? ? Story.all : current_user.stories
end
With the info we just gathered, we can create the following GET stories#index test:
describe "GET stories#index" do
context "when the user is an admin" do
it "should list titles of all stories" do
admin = create(:admin)
stories = create_list(:story, 10, user: admin)
login_as(admin, scope: :user)
visit stories_path
stories.each do |story|
page.should have_content(story.title)
end
end
end
context "when the user is not an admin" do
it "should list titles of users own stories" do
user = create(:user)
stories = create_list(:story, 10, user: user)
login_as(user, scope: :user)
visit stories_path
stories.each do |story|
page.should have_content(story.title)
end
end
end
end
As you can see, we created two different contexts for each user role (admin and not admin). The admin user will be able to see all the story titles, on the other hand, standard users can only see their own.
Using options
create(:user) and
create_list(:story, 10, user: user) you can create users and ten different stories for that user. The newly created user will login
login_as(user, scope: :user) and visit the
stories_path page, where he can see all the story titles depending on his current role
page.should have_content(story.title).
Another great way to create new users is using let or before blocks, those are two different ways to write DRY tests.
#show
You can write the #show method tests in a similar way. The only difference is that you want to access the page that shows the story you want to read.
describe "GET stories#show" do
it "should render stories#show template" do
user = create(:user)
story = create(:story, user: user)
login_as(user, scope: :user)
visit story_path(story.id)
page.should have_content(story.title)
page.should have_content(story.content)
end
end
Once again we want to create the user
create(:user) and a story
create(:story, user: user). The created user will log in and visit the page that contains the story based on the story.id
visit story_path(story.id).
#new and #create
Unlike the others, this method creates a new story. Let’s check out the following action in
stories_controller.rb
# GET stories#new
def new
@story = Story.new
end
# POST stories#create
def create
@story = Story.new(story_params)
if @story.save redirect_to story_path(@story), success: "Story is successfully created."
else
render action: :new, error: "Error while creating new story"
end
end
private
def story_params
params.require(:story).permit(:title, :content)
end
The
new action renders a stories#new template, it is a form that you fill out before creating a new story using the
create action. On successful creation, the story will be saved in the database.
describe "POST stories#create" do
it "should create a new story" do
user = create(:user)
login_as(user, scope: :user)
visit new_stories_path
fill_in "story_title", with: "Ruby on Rails"
fill_in "story_content", with: "Text about Ruby on Rails"
expect { click_button "Save" }.to change(Story, :count).by(1)
end
end
This time a created and logged in user will visit the page where it can create a new story
visit new_stories_path. The next step is to fill up the form with title and content
fill_in "...", with: "...". Once we click on the save button
click_button "Save", the number of total stories will increase by one
change(Story, :count).by(1), meaning that the story was successfully created.
Everyone wants to be able to update their stories. This can be easily done in the following way:
def update
if @story.update(story_params)
flash[:success] = "Story #{@story.title} is successfully updated."
redirect_to story_path(@story)
else
flash[:error] = "Error while updating story"
redirect_to story_path(@story)
end
end
private
def story_params
params.require(:story).permit(:title, :content)
end
When a new story is created we will be able to update it, by visiting the stories edit page.
describe "PUT stories#update" do
it "should update an existing story" do
user = create(:user)
login_as(user, scope: :user)
story = create(:story)
visit edit_story_path(story)
fill_in "story_title", with: "React"
fill_in "story_content", with: "Text about React"
click_button "Save"
expect(story.reload.title).to eq "React"
expect(story.content).to eq "Text about React"
end
end
Just like in the previous methods, a newly created logged in user will create a story and visit the edit story page
edit_story_path(story). Once we update the title and content of the story it is expected to change as we asked
expect(story.reload.title).to eq "React".
#delete
At last, we want to be able to delete the stories we disliked.
def destroy
authorize @story
if @story.destroy
flash[:success] = "Story #{@story.title} removed successfully"
redirect_to stories_path
else
flash[:error] = "Error while removing story!"
redirect_to story_path(@story)
end
end
You want to make it sure that only the admin and owner of the story can delete it, by installing
gem 'pundit'.
class StoryPolicy < ApplicationPolicy
def destroy?
@user.role.admin?
end
end
Let’s test this out as well.
describe "DELETE stories#destroy" do
it "should delete a story" do
user = create(:admin)
story = create(:story, user: user)
login_as(user, scope: :user)
visit story_path(story.id)
page.should have_link("Delete")
expect { click_link "Delete" }.to change(Story, :count).by(-1)
end
end
The test is written in a similar way to stories#create, with a major difference. Instead of creating the story, we delete it and such reduce the overall count by one
change(Story, :count).by(-1).
Once again we reached the end! But there are many more articles waiting for you, subscribe now!
Originally published at kolosek.com on February 22, 2018. | https://hackernoon.com/rspec-rails-controller-test-158f142e11ce | CC-MAIN-2020-10 | refinedweb | 1,495 | 55.24 |
Hey,
Recently been introduced to threads & came across a question involving the usage of semaphores. I'm just a bit confused as to how to use them and was wondering if anybody could shed a bit of guidance for me. I'll post the question to begin with:
Create a program using 2 seperate threads, each printing out a message when it reaches a particular part in its run method. Add a Java semaphore to the program and use it to ensure that whenever you run the program the point A is always reached before point B i.e. every time you run the program it behaves as follows:
$ java Points
Point A reached
Point B reached
Create and initialise your semaphore in the main method and pass it to each thread.
I'm unsure how & where to use this semaphore and also what parameter i'm supposed to give it.
My code so far is:
Code Java:
import java.util.concurrent.Semaphore; class PointA extends Thread { public PointA() { } public void run() { try { sleep((int) (Math.random() * 100)); } catch (InterruptedException e) { } System.out.println("Point A reached"); } } class PointB extends Thread { public PointB() { } public void run() { try { sleep((int) (Math.random() * 100)); } catch (InterruptedException e) { } System.out.println("Point B reached"); } } public class Points { public static void main(String[] args) throws InterruptedException { PointA a = new PointA(); PointB b = new PointB(); Semaphore sem = new Semaphore(0); b.start(); a.start(); b.join(); a.join(); } }
Thanks in advance for any help. | http://www.javaprogrammingforums.com/%20threads/14441-help-synchronisation-semaphore-printingthethread.html | CC-MAIN-2017-30 | refinedweb | 249 | 62.38 |
How Big Data Justifies Mining Your Social Data
timothy posted more than 3 years ago | from the anywhere-it-wants dept.
(2)
JaydenT (2012002) | more than 3 years ago | (#35448316)
Sign away your rights (2)
betterunixthanunix (980855) | more than 3 years ago | (#35450048)
Re:Sign away your rights (1)
cheros (223479) | more than 3 years ago | (#35450734)
Oh yes - see chapter 11 of the Google Terms of Service. 11.1 seems OK, but take the time to dissect what 11.2 actually says..
Re:Sign away your rights (1)
rednip (186217) | more than 3 years ago | (#35451820), I believe), judges generally never enforce or throw out with good regularity the worst of the clauses when they are put to a test legally. Still, I feel a little dirty and used when I click 'accept'. So you need to keep an eye out for re-billers and watch what you install from or make available online. It's the best that you'll likely ever be able to do.
Re:Sign away your rights (1)
betterunixthanunix (980855) | more than 3 years ago | (#35452068) much in return (it turns out that you can get internships without signing up for such websites).
Personally, I resolve this dilemma (that is, "do you read every word?") by just not installing proprietary software (it is easy to read and understand the 4~ free software licenses that cover the software on my system) and by not signing up for every trendy website that comes along. Slashdot (if we can even consider it "trendy") is a rare exception to that rule.
script to detect subtle changes (1)
KWTm (808824) | more than 3 years ago | (#35452318)
On this subject, I wanted to mention my quick script to check for subtle changes in text that you see often, such as Terms & Conditions that pop up every time you use a frequently-used Web service. It will alert you to small changes that you might not otherwise notice because you habitually click on the "I Agree To These Terms And Conditions" button without going through all the text each time. Simply select the text and copy to the clipboard, and then run the script.
It's in my journal entry [slashdot.org] . I use it several times a month.
Just a note that the script was done quite a while ago and is rather poorly written. It works with KDE 3 (I've since upgraded to KDE 4). Sometime "Real Soon Now" I'll get around to:
- replacing the clipboard with "xclip" which seems to work for both the KDE 4 and GNOME clipboards
- making variable names not ALLCAPS
- rewriting the option detection so it doesn't have to use grep just to detect cmd-line options
- replacing backticks with $( )
- etc. etc. etc.
-----BEGIN PGP SIGNATURE-----
Version: GnuPG v1.4.10 (GNU/Linux)
iEYEABECAAYFAk16Li4ACgkQLnc9OVO/yZ5mVQCgnj+JeGJfZKfTMO/0mgm+dctH
8Y0Anj/lxmXnnnGgtJJyRAn7LT+BZLOe
=1LN6
-----END PGP SIGNATURE-----
This is the GPG signature of the text with all spaces, tabs, newlines and other non-printable characters removed.
To check validity of this signature, put the plain text into PlainTextFile, and the signature (including beginning and end lines) into SignatureFile, and use this command:
tr -cd [:graph:] < PlainTextFile | gpg --verify SignatureFile -
click-through TOS (4, Insightful)
mug funky (910186) | more than 3 years ago | (#35448324)
i think it's time click-through "I Agree" ten mile pages for new accounts get a test in court. people "sign" away too much, and not many people read those "agreements".
Re:click-through TOS (-1)
Anonymous Coward | more than 3 years ago | (#35448514)
Why the double quotes? Is an agreement any less valid if its done on a web form? Ten pages may be a lot to read but at least there's pretty strong stipulations about what you are or aren't agreeing to. Surely people should be boycotting sites that have policies for selling or mining data rather than complaining that sites give you more terms than you can be bothered to read.
Re:click-through TOS (3, Interesting)
Anonymous Coward | more than 3 years ago | (#35448592)
> Why the double quotes? Is an agreement any less valid if its done on a web form?
You didn't sign it, "your kid" or "your neighbor" must have clicked Agree, it wasn't you who agreed to anything. Easy.
Re:click-through TOS (1)
Anonymous Coward | more than 3 years ago | (#35448684)
Disclosed principle/undisclosed agent cases have been done before
Re:click-through TOS (4, Interesting)
Em Adespoton (792954) | more than 3 years ago | (#35449050):click-through TOS (1)
russotto (537200) | more than 3 years ago | (#35449284)
All this is true. However, it doesn't matter. The courts will just handwave it away and rule in the other guy's favor, because they want this crap to be valid.
Re:click-through TOS (3, Interesting)
DarwinSurvivor (1752106) | more than 3 years ago | (#35449056)
Re:click-through TOS (1)
Americium (1343605) | more than 3 years ago | (#35450102), imagine actually banning something like 5-10 US states from watching porn online, i mean come on, that's just insanity.
So please, be sure you know what this means before you start wishing for it.... this is one of the laws that makes the internet as free as it is. If you don't wanna give away your privacy, don't post geotagged pictures on sites known for selling to advertisers, maybe post them on your own site. Allowing stupidity to dictate what the law should be is dagerous....... basically some guy/girl knowingly posts something to the internet and invites other people to look it at, and then gets upset because a couple other people saw it..... wtf! These are the same schmucks complaining that watch every leaked celebrity video, which are the real invasions of privacy.
Re:click-through TOS (1)
DarwinSurvivor (1752106) | more than 3 years ago | (#35469098)
Re:click-through TOS (4, Interesting)
MickLinux (579158) | more than 3 years ago | (#35449216):click-through TOS (0)
Anonymous Coward | more than 3 years ago | (#35449450)
The reality is that in this market they will just find someone else to be their huckleberry. Nobody is irreplaceable, especially in this market.
Re:click-through TOS (0)
Anonymous Coward | more than 3 years ago | (#35452032)
Good for you. It's shit to be unemployed but it's even worse to have no principles.
If more people would act like this the jerks of the world would be where they should be. Left all alone, penniless and crying because nobody wants to play with them.
Re:click-through TOS (1)
Chapter80 (926879) | more than 3 years ago | (#35449814)-clicks.
Then a court challenge would be VERY interesting! "I didn't agree to it! I didn't even SEE the agreement!"
....Filing patent now...
News? (4, Insightful)
cosm (1072588) | more than 3 years ago | (#35448350)
Which led him to ask 'What gives Twitter, Facebook, et al. the right to mine that data?' It turns out, users do when they sign up for social networking services, even if they don't realize that
End of discussion. Pointless article is pointless.
Re:News? (2)
eepok (545733) | more than 3 years ago | (#35448414) article is talking about.
Re:News? (0)
Anonymous Coward | more than 3 years ago | (#35450444)
I have no sympathy for those who sign-up for a service without reading the ToS / Privacy Policy and complain about data mining and other things that was clearly written in it. My site, even though I'm posting as anonymous (too lazy to login) says in bold writing (something on the lines of) "Please read our Terms of Service and Privacy Policy and be sure to read the fine print. Some of your information is being collected for demographic purposes, learn more about it by reading further". Since then, there's been a few less people registering per day but that's okay because nobody has once complained and when our members were polled they said they were very happy about how direct and legit we were. We weren't trying to con anyone and we were truthful the whole time. Because of this and other things, we have a loyal user base. It's a comforting feeling really.
Re:News? (1)
Relayman (1068986) | more than 3 years ago | (#35448476)
Re:News? (1)
Anonymous Coward | more than 3 years ago | (#35448788)
Re:News? (1)
scottbomb (1290580) | more than 3 years ago | (#35449162)? (0)
Anonymous Coward | more than 3 years ago | (#35450012)
You were never in court for a divorce, bankruptcy, traffic ticket etc?
There are many court records available online and if not, you can always get what you need directly from the local courthouse.
Still fighting the Liberal Left?
Re:News? (0)
Anonymous Coward | more than 3 years ago | (#35449984)
You cite as your favorite show, although it isn't licensed to your country, ditto for the DVDs.
So you must be one of those criminal pirate downloaders.
Re:News? (4, Insightful)
martin-boundary (547041) | more than 3 years ago | (#35448516)
Re:News? (-1, Troll)
sortius_nod (1080919) | more than 3 years ago | (#35448536)
Ignorance is no excuse in the eyes of the law.
Not knowing that murder is illegal will not get you off. Not knowing that speeding is illegal won't get you off a fine. Not knowing that slander can get you sued won't get you off.
Your comment is as redundant as the article.
Re:News? (3, Insightful)
martin-boundary (547041) | more than 3 years ago | (#35448616)
Re:News? (5, Insightful)
Anonymous Coward | more than 3 years ago | (#35448688):News? (1)
Anonymous Coward | more than 3 years ago | (#35449004) throwing up your arms and saying "to damned bad for unlucky people or ones who aren't as 'smart' as me!".
Re:News? (4, Interesting)
_Sprocket_ (42527) | more than 3 years ago | (#35448706):News? (1)
martin-boundary (547041) | more than 3 years ago | (#35448738)
Re:News? (3, Insightful)
raddan (519638) | more than 3 years ago | (#35449150)
Like it or not, Facebook provides a service. There ain't no such thing as a free lunch. If you don't like the exchange, don't participate.
Re:News? (2)
JustNilt (984644) | more than 3 years ago | (#35449386)
it's probably on Wikipedia.
Re:News? (1)
Goldberg's Pants (139800) | more than 3 years ago | (#35448518):News? (1)
the_Bionic_lemming (446569) | more than 3 years ago | (#35448878):News? (1)
coofercat (719737) | more than 3 years ago | (#35451884) over most of Europe.
So, just because I clicked "okay, yes, I agree and swear to abide by it, and waive my legal rights in all cases" doesn't mean I have actually done so, should we ever end up in court. This article at worst a "water is wet" sort of article, but at best, it highlights that in some places in the world, there are better legal systems and "terms of engagement" than others. I'll leave it to the reader to decide which ones I'm talking about in particular here.
Fine print. (4, Funny)
Lord_of_the_nerf (895604) | more than 3 years ago | (#35448352)
There's also a clause about 'organ harvesting' that I seem to have missed.
Think about that next time you tweet about your kitty cat spilling milk on your keyboard.
Re:Fine print. (1)
Relayman (1068986) | more than 3 years ago | (#35448508)
Re:Fine print. (1)
psithurism (1642461) | more than 3 years ago | (#35449032). (1)
John Hasler (414242) | more than 3 years ago | (#35448374)
There is none. Nor should there be.
Re:"Ownership of information" is quite clear. (2)
cosm (1072588) | more than 3 years ago | (#35448406)
Re:"Ownership of information" is quite clear. (1)
Relayman (1068986) | more than 3 years ago | (#35448524)
Re:"Ownership of information" is quite clear. (1)
shawb (16347) | more than 3 years ago | (#35448892)
Re:"Ownership of information" is quite clear. (0)
Anonymous Coward | more than 3 years ago | (#35449346)
It's not "write your congressmen". It's not "write your senator". It's not "write your *". It's "write to your *". Or, "write your * a letter". "Write your *" makes no sense at all.
Re:"Ownership of information" is quite clear. (1)
icebraining (1313345) | more than 3 years ago | (#35449878)
The "a letter", or better, "a message", is implied.
Re:"Ownership of information" is quite clear. (0)
Anonymous Coward | more than 3 years ago | (#35452722)
Write a letter to your senator and shove it up your ass,
Re:"Ownership of information" is quite clear. (1)
Anonymous Coward | more than 3 years ago | (#35448538)
Kindly give me your credit card # and Social Security Number. You don't own them - give them to me now.
Privacy and ownership are more closely connected than you realize.
Re:"Ownership of information" is quite clear. (1)
kilfarsnar (561956) | more than 3 years ago | (#35453276)
Re:"Ownership of information" is quite clear. (1)
clydemaxwell (935315) | more than 3 years ago | (#35475188):"Ownership of information" is quite clear. (1)
olden (772043) | more than 3 years ago | (#35448582)
..:"Ownership of information" is quite clear. (2)
Relayman (1068986) | more than 3 years ago | (#35448692)
There's also safety in numbers. If 15 million credit card numbers are stolen, what are the chances that mine will be used?
Re:"Ownership of information" is quite clear. (1)
tunapez (1161697) | more than 3 years ago | (#35449030)
.. (2)
countertrolling (1585477) | more than 3 years ago | (#35448388) (1)
alostpacket (1972110) | more than 3 years ago | (#35448442)
Re:For Immediate Release (0)
Threni (635302) | more than 3 years ago | (#35448570)
LOL! One of the 40 best new apps of the week by some shitty website (Android Police) for the first week in March 2011! Big time! Way to spam Slashdot, dude!
Re:For Immediate Release (0)
Anonymous Coward | more than 3 years ago | (#35448668)
Re:For Immediate Release (1)
alostpacket (1972110) | more than 3 years ago | (#35448898)
Re:For Immediate Release (1)
Threni (635302) | more than 3 years ago | (#35451416)
I took the piss out of that site more than anything. I wish you all the best with your application sales.
Re:For Immediate Release (1)
countertrolling (1585477) | more than 3 years ago | (#35448702)
:) Excellent.. Thank you for understanding.. But you need a journal entry with a faux editorial. Detail details...
Re:For Immediate Release (1)
alostpacket (1972110) | more than 3 years ago | (#35448812) LinkedIn is one thing but...
Re:For Immediate Release (1)
countertrolling (1585477) | more than 3 years ago | (#35448918) best thing you can do is simply zero out your debts as quickly as possible, and plant a nice garden...
Obvious statement is obvious (1)
billcopc (196330) | more than 3 years ago | (#35448462)
Summary: when you click "I Agree", you're agreeing to let the site do whatever with whatever for whatever reason.
Ya, duh. What is this, 1994 ?
Re:Obvious statement is obvious (1)
cosm (1072588) | more than 3 years ago | (#35448504):Obvious statement is obvious (1)
mywhitewolf (1923488) | more than 3 years ago | (#35448850)
Re:Obvious statement is obvious (1)
mywhitewolf (1923488) | more than 3 years ago | (#35448934)
using that logic:".
Re:Obvious statement is obvious (1)
Leebert (1694) | more than 3 years ago | (#354496 clicked it.
One could NOT make any such argument about headers that are universally not reviewed by a human at the time of the HTTP transaction.
Re:Obvious statement is obvious (1)
VortexCortex (1117377) | more than 3 years ago | (#35449874) agreement. To make this a bookmarklet, paste it as the URL of a bookmark, then click that bookmark once per session to activate the agreement cookie.
Of course, I doubt a court would find this legal.
The problem is that these agreements need not be legal. When faced with a costly legal battle, a settlement will likely be preferable than even having to pay your lawyers and taking the chance of losing. Point being: They have a bigger warchest than you; Ergo, your agreement is worthless while theirs is very formidable.
No Free Lunch (4, Insightful)
fermion (181285) | more than 3 years ago | (#35448480):No Free Lunch (1)
Anonymous Coward | more than 3 years ago | (#35448798): more spending means bigger government, bigger government means more power. If you do not understand that, look at the unsustainable Ponzi scheme that is Social Security and look at the sudden end of the political careers of anyone who tries to reform it.
Since when are moral principles like "it is generally wrong to steal" self-righteous? Also most truly moral people would make a huge distinction between a man who steals some food because he is starving versus a career criminal who robs a bank or something. The first is likely a decent person who was driven to steal out of desperation, the other is evil and willingly embraces theft.
The purpose of US public schooling and public schools in most countries is to create a docile, passive workforce that is smart enough to do their jobs but not the kind of truly intelligent, self-sufficient, self-learning abstract thinking philosophers who think critically and seriously question their leaders. The founders of the US public school system (late 1800s) were quite open about this being their goal. They were terrified of discontent among the poor. They were inspired by the Indian caste system and the way taught servitude was used to allow 2% to rule the other 98%. They were also inspired by the Prussian school system which was a tool of management of the social order, of early indoctrination, a form of social technology.
That is why so many US schools switched from using phonics to teach children to read and went instead to this whole-word bullshit. Whole-word is appropriate for something like Mandarin Chinese. A language like English is phonetic precisely so that one does not need to memorize gigantic sets of words in order to be able to read and pronounce them. The evidence is abundant -- children taught to read with phonics are better readers and able to read material at a higher grade level than children taught with whole-word methods.
So why the press to use the whole-word method? Simple. It aligns with the public school's purpose of instilling passive dependency in the children. With phonics, a child could figure out a word on his own. With whole-word methods, he must ask teacher what it is and wait for it to be explained. Ever wonder why so many people, like those who call tech support lines, will endure so much hold-time and effort to get some handholding even when the answer is simple to understand, readily available, and within their power to find? It is because they have learned the dependency lesson.
It is horrible. It exists because most people are oblivious to it and just want their momentary indulgence, their shiny. Most people do not understand database technology and have no idea the power you can attain by centralizing and cross-referencing data that is collected from multiple sources. Perhaps Google and other advertisers are relatively benign. The real trouble is that governments are increasingly eager to access these stockpiles of personal data and governments have many ways to abuse it.
I am not sure if regulation is such a good idea. I do not trust the assholes in power to create regulation that actually protects us from anything. They are just as likely to want their own piece of the pie, in the name of our safety of course.
What I find horrible is not that companies try to do this. What I find horrible is that everyone who warned about these practices and explained them to the public was ignored, called paranoid, and generally disregarded. The sheeple zeroed in on "with our store card you SAVE THIS MUCH MONEY WOOHOO!" and refused to think about how that was possible. In other words
.. the same thing they are doing now with regard to internet privacy.
Re:No Free Lunch (2)
seifried (12921) | more than 3 years ago | (#35449314)
Re:No Free Lunch (1)
Gutboy (587531) | more than 3 years ago | (#35449470)
Re:No Free Lunch (0)
Anonymous Coward | more than 3 years ago | (#35449908)
You actually give your real name, address, and phone number when you're signing up for bonus/rewards cards at Krogers, CVS, et al... I always use a fake phone number (one I use consistently so I can remember it, if needed)... I generally change either my first or last name when signing up... And for the address I'll either do something like "1234 Main St. Springfield, IL" or at least change my real address by a few house numbers... And I've never been turned down or questioned (or even seen a raised eyebrow) when getting a card... And half the time I don't even present the card with I check out at a merchant... They may ask for my phone number so I give them the fake one I've memorized... And if for some reason that doesn't work I plead ignorance and they always just use "the store account"... I always get the discount!!!
People seem to forget the important part of it... (1)
Anonymous Coward | more than 3 years ago | (#35448550) one.
Re:People seem to forget the important part of it. (2)
martin-boundary (547041) | more than 3 years ago | (#35448714)
Technically, not true in Canada (5, Informative)
WillAffleckUW (858324) | more than 3 years ago | (#35448572).
Re:Technically, not true in Canada (0)
Anonymous Coward | more than 3 years ago | (#35450078)
But does the Canadian constitution apply? If you physically travel to another country, you come under the jurisdiction of that country's laws. If you interact with a web-based service whose servers are all physically located in the United States, you've exported your data to a foreign country. IANAL, but I suspect that once data about you is physically located in a foreign country, it is subject to that country's laws, not Canada's.
Re:Technically, not true in Canada (0)
Anonymous Coward | more than 3 years ago | (#35452400)
Just because a lawyer tells you something, doesn't make it true - my family is full of lawyers
Oh, stop. Everything your siblings tell you is true. Everything.
Now lick the flagpole. What are you, chicken?
Re:Technically, not true in Canada (0)
Anonymous Coward | more than 3 years ago | (#35454838)
Just because a lawyer tells you something, doesn't make it true - my family is full of lawyers
Oh, stop. Everything your siblings tell you is true. Everything.
Now lick the flagpole. What are you, chicken?
funny story, I actually remember my brother and I convincing our little brother to do that.
Hang on... (1)
dakameleon (1126377) | more than 3 years ago | (#35448614):Hang on... (1)
pwizard2 (920421) | more than 3 years ago | (#35448864)
Re:Hang on... (1)
vlueboy (1799360) | more than 3 years ago | (#35448880)ursar and you can be sure it's having the desired effect (of allowing you get educated instead of dropping out due to coming short on forewarned dues). We clearly adapt because we know that even money we don't see is hard at work behind the scenes.
The hard difference is that jobs *always* put money in your bank account. It's hard for ordinary people to conceive the idea of crowdsourcing as business models (and on the employee's side, daily bread bringers), let alone the idea of returning to the ancient system of tribal trades where nobody saw a single gold coin, yet all could receive goods beyond *your* ability to *design* in exchange for other goods beyond "their" ability to *acquire for free.*
So we as a pan-society give up Big Data in exchange for Big Social. Whether this trading system and any other are 100% fair is always the million dollar argument. It's queued up awaiting the mathematical answer to another age-old dilemma: the "Apples-to-Oranges conversion rate."
;)
Re:Hang on... (1)
raddan (519638) | more than 3 years ago | (#35449164)
If they said to you, "In order to use our service, you need to upload your bank statements", then, well, different story. But that's not what's happening here.
Re:Hang on... (1)
dakameleon (1126377) | more than 3 years ago | (#35450164) still expect continued ownership. Adobe doesn't claim ownership of every file modified with Photoshop, so why should that be different for an online "app"?
And then there's the case of putting your birthdate on Facebook. Partially, Facebook justifies it as complying with the need to check the user is over 13 - you put it in so your friends can be notified of your birthday, and in turn you can be notified of theirs. However, there's little doubt Facebook is shopping this, as well as your gender data, for all it's worth to advertisers - demographics and targeted messaging is key to even 50 year old advertising models. I'd consider my birthdate private information on par with a bank account number - after all, it is so often used to secure the bank accounts. So where's the line?
REALLY concerned about privacy? DROP OUT... (5, Insightful)
A Man Called Da-da (2013464) | more than 3 years ago | (#35448790)
Nobody is concerned about privacy (3, Insightful)
betterunixthanunix (980855) | more than 3 years ago | (#35450076)
"So what?"
That is an exchange that I had with someone when I was an undergrad. People do not actually care if companies are mining their private lives, they just want to use Facebook and Twitter and not have to think about anything.
Re:REALLY concerned about privacy? DROP OUT... (1)
houghi (78078) | more than 3 years ago | (#35450460) government that lets a company convicted of being a monopoly go free.
Obligatory (3, Funny)
517714 (762276) | more than 3 years ago | (#35449290)
Norwegian Newspaper ran a story on Facebook today (0)
Anonymous Coward | more than 3 years ago | (#35449306)
Translated Quote made by Facebook's European Policy Director to the Norwegian Data Inspectorate:
"–It was either a misinterpretation, or it was poorly worded on our part. Now the conditions are changed so that there is no doubt: It is you who owns the data, you only give us access to them. "
Google translate of the article in question:
it's OK... (2)
bball99 (232214) | more than 3 years ago | (#35449308)
to any on-line entity, i'm an 80-year-old Afghan blind and handicapped woman living in zipcode 20593 - it's my on-line 'presence'
:-)
p.s. the end of second st. SW is a vermin-filled pus hole
Re:it's OK... (0)
Anonymous Coward | more than 3 years ago | (#35452770)
Um.... no, you're not. You're a young white male, who is interested in trucks. You drive a Toyota.
You are not overweight -- indeed, you obsess over your body image. You would LIKE to go to the gym more often, but don't. You have a set of free-weights at home.
You are computer literate, and careful not to divulge too much information on-line.
No free lunch (0)
Anonymous Coward | more than 3 years ago | (#35449310)
Those corporations aren't evil, they're providing a valuable service for which you pay nothing. To pay their bills and make a profit, they chose to harvest data which you consent to give up, instead of charging you a monthly fee.
I don't like that deal, so I don't use Facebook. Others have a right to make that deal.
anyone who gives away free information (0)
Anonymous Coward | more than 3 years ago | (#35449670)
gets what they deserve.
Still wondering (1)
tpstigers (1075021) | more than 3 years ago | (#35449980)
What stupidity is this? (1)
SuperKendall (25149) | more than 3 years ago | (#35450284).
The worst part is they don't DO anything with it! (0)
Anonymous Coward | more than 3 years ago | (#35450352)
As I understand it, these ad pushers, ahem "Marketing Research Firms," NEED this data to effectively target their message, or something. But they fail, epically, consistently, and oftentimes, amusingly. Despite all this data to crunch into their algorithms, they just crank out stuff at random that is totally irrelevant to me or my interests. Even Netflix, who I allow and expect to use the information I give them for the purpose of coming up with recommendations, offers me something interesting only 15%-25% of the time. If Netflix cannot do that, what hope does Big Data have of reading my mind with data that is probably more incorrect than accurate?
What's silly is that I could eliminate half of Netflix's wrong suggestions if I could give them some simple parameters of what I am not interested in. I think Big Data needs to stop trying to read the consumer tea leaves from surreptitiously collected data relying on legally tenuous sneak-wrap and click-wrap, and try ASKING us and listen to what we tell them.
Is this surprising? (1)
Phopojijo (1603961) | more than 3 years ago | (#35450378)
when does the "I Agree" terminate? (0)
Anonymous Coward | more than 3 years ago | (#35452616)
So say I installed Twitter a few months ago, said "I Agree" to all the necessary questions to install the software. When does that agreement end? I certainly never ever ever had to "Disagree" or "Agree Not to Collect Infor on me" anytime I ever uninstalled any software. But it's a web service. Does it say, when I uninstall the twitter app from my phone, is agreement null and void? Because that is the only software I used to access twitter.
I assume that once you go online and possibly close your account, that will cancel the agreement, but does it? and what about online services that don't really delete your account (facebook) and keep all of your info? Does the "Agreement" still say they can collect info? or if I am able to delete my account, have I given up the right to that data that was collected during my stay?
I'm going to stop thinking now... | http://beta.slashdot.org/story/148768 | CC-MAIN-2014-42 | refinedweb | 5,006 | 70.63 |
Append an IPv6 hop-by-hop or destination option to an ancillary data object
#include <netinet/in.h> int inet6_option_append(struct cmsghdr *cmsg, const u_int8_t *typep, int multx, int plusy);
The option type must have a value from 2 to 255, inclusive. (0 and 1 are reserved for the Pad1 and PadN options, respectively.)
The option data length must be between 0 and 255, inclusive, and is the length of the option data that follows.
The inet6_option_append() function appends a hop-by-hop option or a destination option to an ancillary data object that has been initialized by inet6_option_init().
See also: | http://www.qnx.com/developers/docs/6.6.0_anm11_wf10/com.qnx.doc.neutrino.lib_ref/topic/i/inet6_option_append.html | CC-MAIN-2018-09 | refinedweb | 101 | 52.39 |
Hi,
I drove a lot of I2C tests and having issues between Jetson nano and STM32 blackpill as slave
I am using the command ‘i2cdetect -r -y 1’ with pins 3 & 5 (also tried bus 0 with pins 27 28 with exactly the same results)
my I2C lines have 4.7K pull up resistors.
A/ Jetson Nano & Servo controller as slave : detected
B/ STM32 black pill as master & STM32 black pill as slave: detected
C/ Jetson Nano & ESP32 as slave : detected
D/ ESP 32 as master & STM32 as slave : detected
E/ ESP 32 or STM32 black pill as master & servo controller : detected
F/ Jetson Nano & STM32 black pill as slave : not detected + freeze issue + errors:
[ 1814.353314] tegra-i2c 7000c400.i2c: pio timed out addr: 0x4 tlen:12 rlen:4
[ 1814.361039] tegra-i2c 7000c400.i2c: — register dump for debugging ----
[ 1814.369008] tegra-i2c 7000c400.i2c: I2C_CNFG - 0x22c00
[ 1814.374261] tegra-i2c 7000c400.i2c: I2C_PACKET_TRANSFER_STATUS - 0x10001
[ 1814.381029] tegra-i2c 7000c400.i2c: I2C_FIFO_CONTROL - 0xe0
[ 1814.387048] tegra-i2c 7000c400.i2c: I2C_FIFO_STATUS - 0x800080
[ 1814.392970] tegra-i2c 7000c400.i2c: I2C_INT_MASK - 0x7d
[ 1814.398288] tegra-i2c 7000c400.i2c: I2C_INT_STATUS - 0x2
[ 1814.403643] tegra-i2c 7000c400.i2c: i2c transfer timed out addr: 0x4
I also tried 2 different black pill clones & for each model 2 units.
Any help is welcomed.
Here are the Arduino programs used as slave and master:
Slave:
#include <Wire.h> #define SLAVE_ADR 0x4 void receiveEvent (int) ; void setup() { Serial.begin(9600) ; Wire.begin(SLAVE_ADR) ; Wire.onReceive(receiveEvent) ; } void loop() { } void receiveEvent (int howMany) { (void)howMany; while (Wire.available()){ Serial.println((char)Wire.read()); } } Master: #include <Wire.h> //include Wire.h library void setup() { Wire.begin(); // Wire communication begin Serial.begin(9600); // The baudrate of Serial monitor is set in 9600 while (!Serial); // Waiting for Serial Monitor Serial.println("\nI2C Scanner"); } void loop() { byte error, address; //variable for error and I2C the next I2C scan } | https://forums.developer.nvidia.com/t/i2c-issue-timeout-error-messages/187257 | CC-MAIN-2021-39 | refinedweb | 315 | 71.51 |
Using the Spark context in a class constructor can cause serialization issues. Move the logic and variables into a member method to avoid some of these problems. There are many reasons why you can get this nasty SparkException: Task not serializable. StackOverflow is full of answers, but this one was not so obvious. At least not for me.
I had simple Spark application which created direct stream to Kafka, did some filtering and then saved results to Cassandra. When I ran it, I got the exception saying that the filtering task cannot be serialized. Check the code and try to tell me what’s wrong with it:
```scala
import akka.actor._

class MyActor(ssc: StreamingContext) extends Actor {

  // Create direct stream to Kafka
  val kafkaStream = KafkaUtils.createDirectStream[String, String, StringDecoder, StringDecoder](ssc, ...)

  // Save raw data to Cassandra
  kafkaStream.saveToCassandra("cassandraKeyspace", "cassandraTableRaw")

  // Get some data from another Cassandra table
  val someTable = ssc.sparkContext.cassandraTable[SomeTable]("cassandraKeyspace", "someTable")

  // Filter and save data to Cassandra
  kafkaStream
    .filter { message =>
      // Whatever logic can be here, the point is that "someTable" is used
      someTable.filter(_.message == message).count > 42
    }
    .saveToCassandra(cassandraKeyspace, cassandraTableAggNewVisitors)

  def receive = Actor.emptyBehavior
}
```
Ok. Do you see that someTable variable inside the filter function? That's the cause of the problem. It is an RDD, which is, of course, by definition serializable. At first I thought that the concrete implementation was for some reason not serializable, but that was also the wrong way of thinking.
To whom does the variable belong? I looked at it as a "local" variable inside the class constructor. But it's not. The someTable variable is a public member of the MyActor class! It belongs to the class, which is not serializable. (Side note: we don't want Akka actors to be serializable because it doesn't make sense to send actors over the wire.)
That explains everything. Spark needs to serialize the whole closure and the actor instance is a part of it. Let’s just put the whole logic inside a method. That makes all variables method-local causing that the actor doesn’t have to be serialized anymore.
```scala
import akka.actor._

class MyActor(ssc: StreamingContext) extends Actor {

  def init(): Unit = {
    // Create direct stream to Kafka
    // ... the same code as before, only inside this method
    val kafkaStream = ...
    ...
  }

  init()

  def receive = Actor.emptyBehavior
}
```
How simple. You’re welcome. | http://buransky.com/spark/fighting-notserializableexception-in-apache-spark/ | CC-MAIN-2020-10 | refinedweb | 389 | 60.11 |
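The same capture rule can be reproduced with plain java.io serialization, which Spark's default closure serializer builds on. This is an illustrative sketch, not Spark code: the Holder class and its field are invented for the example.

```java
import java.io.*;

public class CaptureDemo {
    // Plays the role of the actor: NOT serializable.
    static class Holder {
        String member = "field"; // like someTable: an instance member

        Runnable badClosure() {
            // Referencing a member captures `this`, so the whole Holder
            // would have to be serialized along with the lambda.
            return (Runnable & Serializable) () -> System.out.println(member);
        }

        Runnable goodClosure() {
            String local = member; // copy into a method-local first
            // Now the lambda captures only a serializable String.
            return (Runnable & Serializable) () -> System.out.println(local);
        }
    }

    // Returns true if the object survives a round through ObjectOutputStream.
    static boolean serializes(Object o) {
        try (ObjectOutputStream out =
                 new ObjectOutputStream(new ByteArrayOutputStream())) {
            out.writeObject(o);
            return true;
        } catch (NotSerializableException e) {
            return false;
        } catch (IOException e) {
            throw new UncheckedIOException(e);
        }
    }

    public static void main(String[] args) {
        Holder h = new Holder();
        System.out.println(serializes(h.badClosure()));  // false
        System.out.println(serializes(h.goodClosure())); // true
    }
}
```

badClosure fails because the lambda drags the non-serializable Holder along; goodClosure works because only the method-local copy is captured, which is exactly the fix shown above.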
Sometimes software development is inventing new stuff.
But often, it's just putting together the stuff you already have.
Today's puzzle is one of the latter type of problem.
Given a window handle, can you determine (1) whether it is
an Explorer window, and if so (2) what folder it is viewing, and
(3) what item is currently focused?
This is not an inherently difficult task.
You just have to put together lots of small pieces.
Start with
the ShellWindows object
which represents all the open shell windows.
You can enumerate through them all with
the Item property.
(This is rather clumsy from C++ because the ShellWindows object
was designed for use by a scripting language like JScript or Visual Basic.)
IShellWindows *psw;
if (SUCCEEDED(CoCreateInstance(CLSID_ShellWindows, NULL, CLSCTX_ALL,
                               IID_IShellWindows, (void**)&psw))) {
  VARIANT v;
  V_VT(&v) = VT_I4;
  IDispatch *pdisp;
  BOOL fFound = FALSE;
  for (V_I4(&v) = 0; !fFound && psw->Item(v, &pdisp) == S_OK;
       V_I4(&v)++) {
    ...
    pdisp->Release();
  }
  psw->Release();
}
From each item, we can ask it for its window handle and see if it's the one
we want.
IWebBrowserApp *pwba;
if (SUCCEEDED(pdisp->QueryInterface(IID_IWebBrowserApp, (void**)&pwba))) {
  HWND hwndWBA;
  if (SUCCEEDED(pwba->get_HWND((LONG_PTR*)&hwndWBA)) &&
      hwndWBA == hwndFind) {
    fFound = TRUE;
    ...
  }
  pwba->Release();
}
Okay, now that we have found the folder via its IWebBrowserApp,
we need to get to the top shell browser. This is done by
querying for the SID_STopLevelBrowser service and asking for
the IShellBrowser interface.
IServiceProvider *psp;
if (SUCCEEDED(pwba->QueryInterface(IID_IServiceProvider, (void**)&psp))) {
  IShellBrowser *psb;
  if (SUCCEEDED(psp->QueryService(SID_STopLevelBrowser,
                                  IID_IShellBrowser, (void**)&psb))) {
    ...
    psb->Release();
  }
  psp->Release();
}
From the IShellBrowser, we can ask for the current shell view
via
the QueryActiveShellView method.
IShellView *psv;
if (SUCCEEDED(psb->QueryActiveShellView(&psv))) {
...
psv->Release();
}
Of course, what we really want is
the IFolderView interface,
which is the automation object that contains all the real goodies.
IFolderView *pfv;
if (SUCCEEDED(psv->QueryInterface(IID_IFolderView,
(void**)&pfv))) {
...
pfv->Release();
}
Okay, now we're golden. What do you want to get from the view?
How about the location of the IShellFolder being viewed.
To do that, we need to use
IPersistFolder2::GetCurFolder.
The GetFolder method will give us access to the shell folder,
from which we ask for IPersistFolder2.
(Most of the time you want the IShellFolder interface,
since that's where most of the cool stuff hangs out.)
IPersistFolder2 *ppf2;
if (SUCCEEDED(pfv->GetFolder(IID_IPersistFolder2,
(void**)&ppf2))) {
LPITEMIDLIST pidlFolder;
if (SUCCEEDED(ppf2->GetCurFolder(&pidlFolder))) {
...
CoTaskMemFree(pidlFolder);
}
ppf2->Release();
}
Let's convert that pidl into a path, for display purposes.
if (!SHGetPathFromIDList(pidlFolder, g_szPath)) {
lstrcpyn(g_szPath, TEXT("<not a directory>"), MAX_PATH);
}
...
What else can we do with what we've got? Oh right, let's see what the
currently-focused object is.
int iFocus;
if (SUCCEEDED(pfv->GetFocusedItem(&iFocus))) {
...
}
Let's display the name of the focused item.
To do that we need the item's pidl and the IShellFolder.
(See, I told you the IShellFolder is where the cool stuff is.)
The item comes from
the Item method (surprisingly enough).
LPITEMIDLIST pidlItem;
if (SUCCEEDED(pfv->Item(iFocus, &pidlItem))) {
...
CoTaskMemFree(pidlItem);
}
(If we had wanted a list of selected items we could have used
the Items method, passing SVGIO_SELECTION.)
After we get the item's pidl, we also need the IShellFolder:
IShellFolder *psf;
if (SUCCEEDED(ppf2->QueryInterface(IID_IShellFolder,
(void**)&psf))) {
...
psf->Release();
}
Then we put the two together to get the item's display name,
with the help of
the GetDisplayNameOf method.
STRRET str;
if (SUCCEEDED(psf->GetDisplayNameOf(pidlItem,
SHGDN_INFOLDER,
&str))) {
...
}
We can use the helper function
StrRetToBuf to convert the kooky
STRRET structure into
a boring string buffer.
(The history of the kooky STRRET structure will have to wait for
another day.)
StrRetToBuf(&str, pidlItem, g_szItem, MAX_PATH);
Okay, let's put this all together.
It looks rather ugly because I put everything into one huge
function instead of breaking them out into subfunctions.
In "real life" I would have broken things up into little helper
functions to make things more manageable.
Start with
the
scratch program and add this new function:
#include <shlobj.h>
#include <exdisp.h>
TCHAR g_szPath[MAX_PATH];
TCHAR g_szItem[MAX_PATH];
void CALLBACK RecalcText(HWND hwnd, UINT, UINT_PTR, DWORD)
{
  HWND hwndFind = GetForegroundWindow();
  g_szPath[0] = TEXT('\0');
  g_szItem[0] = TEXT('\0');
  IShellWindows *psw;
  if (SUCCEEDED(CoCreateInstance(CLSID_ShellWindows, NULL, CLSCTX_ALL,
                                 IID_IShellWindows, (void**)&psw))) {
    VARIANT v;
    V_VT(&v) = VT_I4;
    IDispatch *pdisp;
    BOOL fFound = FALSE;
    for (V_I4(&v) = 0; !fFound && psw->Item(v, &pdisp) == S_OK;
         V_I4(&v)++) {
      IWebBrowserApp *pwba;
      if (SUCCEEDED(pdisp->QueryInterface(IID_IWebBrowserApp,
                                          (void**)&pwba))) {
        HWND hwndWBA;
        if (SUCCEEDED(pwba->get_HWND((LONG_PTR*)&hwndWBA)) &&
            hwndWBA == hwndFind) {
          fFound = TRUE;
          IServiceProvider *psp;
          if (SUCCEEDED(pwba->QueryInterface(IID_IServiceProvider,
                                             (void**)&psp))) {
            IShellBrowser *psb;
            if (SUCCEEDED(psp->QueryService(SID_STopLevelBrowser,
                                            IID_IShellBrowser,
                                            (void**)&psb))) {
              IShellView *psv;
              if (SUCCEEDED(psb->QueryActiveShellView(&psv))) {
                IFolderView *pfv;
                if (SUCCEEDED(psv->QueryInterface(IID_IFolderView,
                                                  (void**)&pfv))) {
                  IPersistFolder2 *ppf2;
                  if (SUCCEEDED(pfv->GetFolder(IID_IPersistFolder2,
                                               (void**)&ppf2))) {
                    LPITEMIDLIST pidlFolder;
                    if (SUCCEEDED(ppf2->GetCurFolder(&pidlFolder))) {
                      if (!SHGetPathFromIDList(pidlFolder, g_szPath)) {
                        lstrcpyn(g_szPath, TEXT("<not a directory>"),
                                 MAX_PATH);
                      }
                      int iFocus;
                      if (SUCCEEDED(pfv->GetFocusedItem(&iFocus))) {
                        LPITEMIDLIST pidlItem;
                        if (SUCCEEDED(pfv->Item(iFocus, &pidlItem))) {
                          IShellFolder *psf;
                          if (SUCCEEDED(ppf2->QueryInterface(IID_IShellFolder,
                                                             (void**)&psf))) {
                            STRRET str;
                            if (SUCCEEDED(psf->GetDisplayNameOf(pidlItem,
                                                                SHGDN_INFOLDER,
                                                                &str))) {
                              StrRetToBuf(&str, pidlItem, g_szItem, MAX_PATH);
                            }
                            psf->Release();
                          }
                          CoTaskMemFree(pidlItem);
                        }
                      }
                      CoTaskMemFree(pidlFolder);
                    }
                    ppf2->Release();
                  }
                  pfv->Release();
                }
                psv->Release();
              }
              psb->Release();
            }
            psp->Release();
          }
        }
        pwba->Release();
      }
      pdisp->Release();
    }
    psw->Release();
  }
  InvalidateRect(hwnd, NULL, TRUE);
}
Now all we have to do is call this function periodically
and print the results.
BOOL
OnCreate(HWND hwnd, LPCREATESTRUCT lpcs)
{
SetTimer(hwnd, 1, 1000, RecalcText);
return TRUE;
}
void
PaintContent(HWND hwnd, PAINTSTRUCT *pps)
{
TextOut(pps->hdc, 0, 0, g_szPath, lstrlen(g_szPath));
TextOut(pps->hdc, 0, 20, g_szItem, lstrlen(g_szItem));
}
We're ready to roll. Run this program and set it to the side.
Then launch an Explorer window and watch the program track the folder
you're in and what item you have focused.
Okay, so I hope I made my point:
Often, the pieces you need are already there; you just have to
figure out how to put them together. Notice that each of the
pieces is in itself not very big. You just had to recognize
that they could be put together in an interesting way.
Exercise: Change this program so it takes the folder and
switches it to details view.
[Raymond is currently on vacation; this message was pre-recorded.]
I would like to find out why autocomplete stops working in some cases. I encounter this issue on both 1.5 and 2.0 EAP.
By example: when writing something like AAA.BBB.method the methods from BBB are not displayed.
AAA is a package and also a variable name (an instance of AAA class from AAA package).
BBB is a module imported inside AAA.
I am fully aware that a variable should not be named after a module name, but this is not my code, and I would like to know whether this is the real cause of the issues I encounter and whether there are alternatives. The bad part is that PyDev seems to have no problems with this, and most of the team is using PyDev.
import AAA
AAA = AAA.getNew()
AAA.BBB.method() # autocomplete works for BBB but not for method()
"""
# Inside AAA.py:
from BBB import BBB
"""
Hello Sorin,
Unfortunately the only way at the moment to diagnose why autocompletion doesn't
work is to step through PyCharm code in the debugger, which is something
that only JetBrains developers can do.
You're welcome to submit a YouTrack issue with the details, and we'll investigate
the problem.
--
Dmitry Jemerov
Development Lead
JetBrains, Inc.
"Develop with Pleasure!" | https://intellij-support.jetbrains.com/hc/en-us/community/posts/206590675-How-to-find-out-why-auto-complete-doesn-t-work-in-some-cases-?sort_by=created_at | CC-MAIN-2019-26 | refinedweb | 211 | 74.9 |
Invokables¶
For people who have been using the library since the early days you are familiar with the need to use the
.get() method to invoke a method chain:
```ts
// an example of get
const lists = await sp.web.lists.get();
```
Starting with v2 this is no longer required; you can invoke the object directly to execute the default action for that class - typically a get.
const lists = await sp.web.lists();
This has two main benefits for people using the library: you can write less code, and we now have a way to model default actions for objects that might do something other than a get. The way we designed the library prior to v2 hid the post, put, delete operations as protected methods attached to the Queryable classes. Without diving into why we did this, having a rethink seemed appropriate for v2. Based on that, the entire queryable chain is now invokable as well for any of the operations.
Other Operations (post, put, delete)¶
```ts
import { sp, spPost } from "@pnp/sp";
import "@pnp/sp/webs";

// do a post to a web - just an example doesn't do anything fancy
spPost(sp.web);
```
Things get a little more interesting in that you can now do posts (or any of the operations) to any of the urls defined by a fluent chain. Meaning you can easily implement methods that are not yet part of the library. For this example I have made up a method called "MagicFieldCreationMethod" that doesn't exist. Imagine it was just added to the SharePoint API and we do not yet have support for it. You can now write code like so:
```ts
import { sp, spPost, SharePointQueryable } from "@pnp/sp";
import "@pnp/sp/webs";
import "@pnp/sp/fields/web";

// call our made up example method
spPost(SharePointQueryable(sp.web.fields, "MagicFieldCreationMethod"), {
  body: JSON.stringify({
    // ... this would be the post body
  }),
});
```
When working in a multilingual environment, it is important to know where to translate all the customer-specific texts in all languages in scope. Here we give an overview of all the techniques used for translations in Employee Self-Service (most of them also apply to other SAP areas).
Launchpad texts
The first place where customer-specific text is written will be on the home page of ESS, which as of ERP6.05 uses the “Launchpad” framework (LPD_CUST). Translating texts in the launchpad is a tedious task. First, you need to generate a key for your text. This occurs when you put your launchpad in a transport request (Launchpad > Transport in the menu when your launchpad is opened).
When the key is generated, you can look it up by running report APB_LPD_SHOW_TEXT_KEYS. When you have the key of the text you want to translate, go to the main transaction for translations, namely SE63. From there, you can use the shortcut and type TX in the transaction bar. Then, enter the key you have in the “object name” field. Select a source and target language and click on Edit.
In newer version of transaction LPD_CUST you will find a menu entry ‘Launchpad -> Textkey / Translation’. From there you will be lead to SE63 to do the translation.
Smartform and workflow texts
Smartform and workflow texts can be translated most of the time using SE63 too. For smartforms, use shortcut SSF in the transaction bar, while for workflow, you can use TLGS to translate the workitem title and WFLW for the workitem description. The object name will be the smartform name for smartforms, and for workflow, concatenate PDTS, the task ID, the wildcard (*) and 06 to find the object name of the workitem title. For workitem description, replace 06 in the previous concatenation by 120.
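As a quick illustration of the concatenation rule above (the helper class and the sample task ID are made up for this post), the SE63 object name for a workitem text can be built like this:

```java
public class WorkflowTextKey {
    // Build the SE63 object name for a workitem text, per the rule above:
    // "PDTS" + task ID + "*" + "06" for the workitem title,
    // or "120" for the workitem description.
    static String objectName(String taskId, boolean title) {
        return "PDTS" + taskId + "*" + (title ? "06" : "120");
    }
}
```

For a hypothetical task 90000001, this yields PDTS90000001*06 for the title and PDTS90000001*120 for the description.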
OTR texts
Most ESS applications currently run on Web Dynpro ABAP, which uses OTR texts for most small pieces of texts displayed on the screens. If you need to change a standard OTR text into some other text, you can do so in transaction SOTR_EDIT. Enter the OTR alias in the corresponding input field. Then, in the menu, open “Edit > Context > Create” to create a new context for your customer. Select the country and industry. In the next screen, you can type your customer text, which will not affect the standard text as this version is created in your customer context.
Then, you only need to tell SAP to first look at the customer context version of the OTR texts. You can do this by implementing BAdI BTFR_CONTEXT, as explained in SAP note 579365.
When creating your own customer-specific OTR texts (e.g. for a customer-specific development), you will sometimes notice a change in a text will not be reflected directly in your application. This can be due to buffering. To clear the OTR buffer, you can enter /$otr in the transaction bar.
Customizing
In some applications like in Leave Request for instance, you will need to translate customizing entries like types of leave from table T554S. To do so, just open the maintenance view of the table, select the entries to translate and click ‘Go to > Translation’ in the menu.
Feeder class
As ESS uses often the FPM framework for Web Dynpro ABAP, some texts are defined in the FPM feeder classes, mostly in the GET_DEFINITION method. Sometimes, the text is even derived from the feeder class text elements, like the title of the Personal Profile detail screen (e.g. Edit address) which is written in the text elements of class CL_HRESS_PER_OVERVIEW. Those texts can be translated directly using the menu item ‘Go to > Translation’.
POWL texts
In MSS, many POWLs are used. The text of categories or queries in POWL is defined in the customizing in transaction POWL_COCKPIT. However, changing the text will only be reflected after you delete the queries buffered using report POWL_D01.
Transporting translations
When you create translations using transaction SE63, your translations are not put in a transport request. This must be done explicitly, in transaction SLXT. First, select the target language (in most recent versions of this report, you can select several languages or all existing languages). Then, enter a description for your transport request. Finally, filter the translations to be selected using e.g. dates, time spans, object types, users, etc. or a combination of those criteria.
The output of this report is that a transport request is created, containing the translations. An example for translation of smartforms texts can be consulted here.
Hi Julien, Nice document. The information on changing standard OTR is useful.
Regards
Sagar
Glad you found it useful Sagar! I’ve expanded the section on OTR if you want. Can be helpful too (OTR buffer).
Hi Julien,
Kindly let me know when transporting launchpad from DEV to QAS ,do we need to perform initially the above mentioend steps,the major problem I got is text is getting display correctly same as in DEV portal for the services whereas in QAS PORTAL it is missing , I tried on applying some notes, but still not working. Done some text changes in QAS ,but i dont think so that is correct ,becoz as we transport any object request from DEV to QAS we shud get the changes updated in QAS also.but when I transport launchpad its not reflecting in QAS, I have done all necessary settings in SE80 also,kindly suggest the solution,if posbile i will attach the video.
Regards,
JWALA,
ESS MSS.
Hi Jwala,
May I suggest you start a discussion thread for this? Have you transported your translations using transaction SLXT as mentioned here?
Regards,
Julien
Hi julien ,
Yas.I have!! becoz many people dont know in ehp6 while trasporting launchpad the same will not reflect in QAS server and Julein can u tell me when we create launchpad what exact namespace we need to give there , as seen in some documents and threads ,it is /OCUST/ —-(-Number range ) and in some for Namespace we need to give the package name which we use like our package name here is ZHR we use , so here I Have given ZHR for Namespace …is this correct or any other we need to give ..kindly suggest….if possible kindly follow me on my profile as I am doing some R & D on Portal 7.03 as well as ESS MSS deeply.
Can you tell me what we need to do in that scrren plz share some info
Regards,
Jwala,
ESS MSS.
Hi Julien,
It’s nice document, made my life easier while changing already defined OTR along with translation & transport.
Thanks a lot for sharing.
Regards
Arjun
Hi Julien,
Thanks for your documentation. I have used SOTR_EDIT to maintain standard text in package PA0C_HAP_DOCUMENT_WD_UI in EHP4 and 5. Now that we are upgrading to EHP6, I get a message that each of the objects were created in German and the text is grey’ed out. I’m trying to edit the text for “Value Description” for example, when Iocate the text line, click on it and I get “Text Created in language D”.
Have you experienced this ? I tried to edit the same field in German but did not see my changes in the application.
thanks,
Chris Thomas
chris.thomas@duke.edu | https://blogs.sap.com/2013/05/23/translating-texts-in-employee-self-service/ | CC-MAIN-2019-30 | refinedweb | 1,215 | 71.85 |
Survey period: 1 Apr 2019 to 8 Apr 2019
Assuming you're given no other information, in which direction would you point a budding new developer?
After college, I never used that language again.
delete this;
```nim
import tables, strutils, algorithm

proc main() =
  var
    count = 0
    anagrams = initTable[string, seq[string]]()
  for word in "unixdict.txt".lines():
    var key = word
    key.sort(cmp[char])
    anagrams.mgetOrPut(key, newSeq[string]()).add(word)
    count = max(count, anagrams[key].len)
  for _, v in anagrams:
    if v.len == count:
      v.join(" ").echo

main()
```
In its current invocation, it can do anything the other languages can do and, in some cases, even more. Yes, it is wordy - but for someone learning, words begin and end make more sense than { and }. The different uses for parenthesis, braces and curly braces is often confusing. The syntax for the for command is somewhat cryptic.
It is my opinion that it is better to start with simple and clear, then add the shortcuts later.
We tell our students that, using programs, they can solve problems. Their biggest problems are homework, lab exercises and the like. Of course, they can do the calculations on their TI-85 or using a spreadsheet, like Excel. For freshman and sophomore physics and chemistry labs, this is perfectly fine. Being able to write their own program to process, print and graph their data adds a "cool factor" to the lab report.
Slacker007 wrote: No more ugly than C# code,
person.Hair.Color = Color.Brown;
person.SetHairColor(Color.Brown);
— Disclaimer: I know this is pretty basic stuff but many, many programmers are doing it still wrong —
As a Java programmer you know how to implement equals and that hashCode has to be implemented as well. You use your favorite IDE to generate the necessary code, use common wisdom to help you code it by hand or use annotations. But there is a fourth way: introducing EqualsBuilder (not the apache commons one which has some drawbacks over this one) which implements the general rules for equals and hashCode:
```java
import java.lang.reflect.Array;

public class EqualsBuilder {

    public static interface IComparable {
        public Object[] getValuesToCompare();
    }

    private EqualsBuilder() {
        super();
    }

    public static int getHashCode(IComparable one) {
        if (null == one) {
            return 0;
        }
        final int prime = 31;
        int result = 1;
        for (Object o : one.getValuesToCompare()) {
            result = prime * result + EqualsBuilder.calculateHashCode(o);
        }
        return result;
    }

    private static int calculateHashCode(Object o) {
        if (null == o) {
            return 0;
        }
        return o.hashCode();
    }

    public static boolean isEqual(IComparable one, Object two) {
        if (null == one || null == two) {
            return false;
        }
        if (one.getClass() != two.getClass()) {
            return false;
        }
        return compareTwoArrays(one.getValuesToCompare(),
                                ((IComparable) two).getValuesToCompare());
    }

    private static boolean compareTwoArrays(Object arrayOne, Object arrayTwo) {
        if (Array.getLength(arrayOne) != Array.getLength(arrayTwo)) {
            return false;
        }
        for (int i = 0; i < Array.getLength(arrayOne); i++) {
            if (!EqualsBuilder.areEqual(Array.get(arrayOne, i),
                                        Array.get(arrayTwo, i))) {
                return false;
            }
        }
        return true;
    }

    private static boolean areEqual(Object objectOne, Object objectTwo) {
        if (null == objectOne) {
            return null == objectTwo;
        }
        if (null == objectTwo) {
            return false;
        }
        if (objectOne.getClass().isArray() && objectTwo.getClass().isArray()) {
            return compareTwoArrays(objectOne, objectTwo);
        }
        return objectOne.equals(objectTwo);
    }
}
```
The interface IComparable ensures that equals and hashCode are based on the same instance variables.
To use it, your class needs to implement the interface and call the appropriate methods from EqualsBuilder:
```java
public class MyClass implements IComparable {

    private int count;
    private String name;

    public Object[] getValuesToCompare() {
        return new Object[] { Integer.valueOf(count), name };
    }

    @Override
    public int hashCode() {
        return EqualsBuilder.getHashCode(this);
    }

    @Override
    public boolean equals(Object obj) {
        return EqualsBuilder.isEqual(this, obj);
    }
}
```
Update: If you want to use isEqual directly, one test should be added to the start:

```java
if (one == two) {
    return true;
}
```
Update 2: Thanks to a hint by Alex I fixed a bug in areEqual: when an array (especially a primitive one) is passed than the equals would return a wrong result.
Update 3: The newly added compareTwoArrays method had a bug: it resulted in true if arrayTwo is bigger than arrayOne but starts the same. Thanks to Thierry for pointing that out.
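For comparison, the same field-coupling idea can be had with the JDK's own Arrays helpers. This is a minimal sketch with a made-up Point class, not part of the EqualsBuilder above: one private method lists the significant fields, and both equals and hashCode are derived from it, so they can never get out of sync.

```java
import java.util.Arrays;

class Point {
    private final int x, y;

    Point(int x, int y) { this.x = x; this.y = y; }

    // The single source of truth for "which fields matter".
    private Object[] valuesToCompare() {
        return new Object[] { x, y };
    }

    @Override
    public int hashCode() {
        return Arrays.hashCode(valuesToCompare());
    }

    @Override
    public boolean equals(Object other) {
        if (this == other) return true;
        if (other == null || getClass() != other.getClass()) return false;
        return Arrays.equals(valuesToCompare(), ((Point) other).valuesToCompare());
    }
}
```

With this shape, adding a field to valuesToCompare updates equals and hashCode in one step, which is the same guarantee IComparable gives the builder.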
Interesting sample code – I just hope that people don’t use it as boiler plate code for everything since it’ll choke horribly with class cast exceptions as soon as the equal object is a non-comparable object!
Also, you’ve got what looks like a bug in your public “isEqual” method. If both are null (not going to happen from where you call it, but since it is public then it could be called from anywhere) then you get “false”. Surely the two nulls are the same since they’re both uninitialised objects? JUnit’s assertEquals() certainly thinks they are.
Hi,
you say this has advantages over the one from apache commons. What are those?
regards,
Wim
“I know this is pretty basic stuff but many, many programmers are doing it still wrong”. With all due respect, yourself included!
Here are some of the immediate problems I see with your code:
Creating an object array for each call to hashCode() and TWO object arrays for each call to equals(), along with possible boxing of primitives, is a sure-fire way to poor performance and excessive garbage collection whenever using these objects in a Collection or tight loop.
A ClassCastException whenever equals() is called with an object that doesn’t implement IComparable violates the equals() spec.
Calling equals() but passing in a different class that still implements IComparable (including even a subclass of the class being compared against) leads to a possible ArrayIndexOutOfBounds exception.
Those problems aside, I can’t see the benefit anyway. You still have to either type in the IComparable implementation or get your IDE to autogenerate it for each class, at which point you might as well let the IDE generate an optimal equals() and hashcode() instead that doesn’t suffer from any of the above bugs or design flaws.
Can you elaborate on what’s the drawback of Apache’s EqualsBuilder over this one?
Pingback: Blog harvest, December 2009 « Schneide Blog
Hi,
You may need to consider testing for arrays as part of the areEquals method:
if (objectOne.getClass().isArray() && objectTwo.getClass().isArray()) {
if (objectOne.getClass().isPrimitive() && objectTwo.getClass().isPrimitive()) {
// lots of if/elseif for primitive conversions
return Arrays.equals((…)objectOne,(…)objectTwo)
} else {
return Arrays.deepEquals((Object[])objectOne,(Object[])objectTwo);
}
}
Its a nice idea, but seems a lot of code to do an equals test. I think you’re right when you wrote:
“..use common wisdom to help you code it by hand..”
No better way 🙂
Yes, indeed, this was a bug. Updated the code with the fix. Thanks!
@IBBoard, @Chris: Regarding the ClassCastExceptions: in line 36 there is a test that the class of object one is the same as object two and since object one is an IComparable object two is one also. So casting object two is safe.
@Kevin, @Wim: The major benefit over other approaches like the Apache Commons EqualsBuilder or the handcoding one is the coupling of the fields that hashCode and equals are testing. So it is assured that hashCode and equals are using the same fields to test against.
There is a small error at the isEqual method:
33 if (null == one || null == two) {
34 return false;
35 }
Calling isEqual(null, null) will result in a false!
Just check the identity before to improve performance and correct the error:
if (one == two) return true;
@Nyarla: Added. Thanks
The method private static boolean compareTwoArrays is wrong, you should start by comparing there length if arrayTwo is bigger than arrayOne but starts the same they will be equals.
Yes, you are right. Corrected. Thanks
Before you initiate a “docker pull”
In addition to the challenges that are inherent to isolating containers, Docker brings with it an entirely new attack surface in the form of its automated fetching and installation mechanism, "docker pull". It may be counter-intuitive, but "docker pull" both fetches and unpacks a container image in one step. There is no verification step and, surprisingly, malformed packages can compromise a system even if the container itself is never run. Many of the CVEs issued against Docker have been related to packaging that can lead to install-time compromise and/or issues with the Docker registry.
One (now resolved) way such malicious issues could compromise a system was by a simple path traversal during the unpack step. By simply using a tarball's capacity to unpack to paths such as "../../../", malicious images were able to overwrite any part of the host file system they desired.
Thus, one of the most important ways you can protect yourself when using Docker images is to make sure you only use content from a source you trust and to separate the download and unpack/install steps. The easiest way to do this is simply not to use the "docker pull" command. Instead, download your Docker images over a secure channel from a trusted source and then use the "docker load" command. Most image providers also serve images directly over a secure, or at least verifiable, connection. For example, Red Hat provides an SSL-accessible "Container Images" repository. Fedora also provides Docker images with each release.
While Fedora does not provide SSL with all mirrors, it does provide a signed checksum of the Docker image that can be used to verify it before you use “docker load”.
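The "verify, then load" step can be automated in any language. As a hedged sketch (the class and method names here are invented for illustration; this is not a Red Hat or Docker API), the digest comparison at its core looks like this:

```java
import java.security.MessageDigest;
import java.security.NoSuchAlgorithmException;

public class VerifyImage {
    // Hex-encode the SHA-256 digest of a byte buffer.
    static String sha256Hex(byte[] data) {
        try {
            MessageDigest md = MessageDigest.getInstance("SHA-256");
            StringBuilder sb = new StringBuilder();
            for (byte b : md.digest(data)) {
                sb.append(String.format("%02x", b));
            }
            return sb.toString();
        } catch (NoSuchAlgorithmException e) {
            throw new IllegalStateException(e); // SHA-256 is always available
        }
    }

    // Only hand the archive to `docker load -i` when the digest of the
    // downloaded bytes matches the published checksum.
    static boolean safeToLoad(byte[] archiveBytes, String publishedSha256) {
        return sha256Hex(archiveBytes).equalsIgnoreCase(publishedSha256);
    }
}
```

In practice you would read the downloaded image archive into archiveBytes, compare it against the checksum fetched over a verifiable channel, and abort before ever running "docker load" on a mismatch.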
Since "docker pull" automatically unpacks images and this unpacking process itself is often compromised, it is possible that typos can lead to system compromises (e.g. a malicious "rel" image downloaded and unpacked when you intended "rhel"). This typo problem can also occur in Dockerfiles. One way to protect yourself is to prevent accidental access to index.docker.io at the firewall level or by adding the following /etc/hosts entry:
127.0.0.1 index.docker.io
This will cause such mistakes to time out instead of potentially downloading unwanted images. You can still use "docker pull" for private repositories by explicitly providing the registry:
docker pull registry.somewhere.com/image
And you can use a similar syntax in Dockerfiles:
from registry.somewhere.com/image
Providing a wider ecosystem of trusted images is exactly why Red Hat began its certification program for container applications. Docker is an amazing technology, but it is neither a security nor interoperability panacea. Images still need to come from sources that certify their security, level-of-support, and compatibility.

Posted: 2014-12-18T14:30:57+00:00
Container Security: Isolation Heaven or Dependency Hell
Docker is the public face of Linux containers and two of Linux’s unsung heroes: control groups (cgroups) and namespaces. Like virtualization, containers are appealing because they help solve two of the oldest problems to plague developers: “dependency hell” and “environmental hell.”
Closely related, dependency and environmental hell can best be thought of as the chief cause of "works for me" situations. Dependency hell simply describes the complexity inherent in a modern application's tangled graph of external libraries and programs it needs to function. Environmental hell is the name for the operating system portion of that same problem (i.e. what wrinkles, in particular which bash implementation, that quick script you wrote unknowingly relies on).
Namespaces provide the solution in much the same way as virtual memory simplified writing code on a multi-tenant machine: by providing the illusion that an application suite has the computer all to itself. In other words, "via isolation". When a process or process group is isolated via these new namespace features, we say they are "contained." In this way, virtualization and containers are conceptually related, but containers isolate in a completely different way, and conflating the two is just the first of a series of misconceptions that must be cleared up in order to understand how to use containers as securely as possible. Here are a few of the ways that "containers do not contain":
- Containers all share the same kernel. If a contained application is hijacked with a privilege escalation vulnerability, all running containers *and* the host are compromised. Similarly, it isn’t possible for two containers to use different versions of the same kernel module.
- Several resources are *not* namespaced. Examples include the normal ulimit system still being needed to control resources such as file handles. The kernel keyring is another example of a resource that is not namespaced. Many beginning users of containers find it counter-intuitive that socket handles can be exhausted or that Kerberos credentials are shared between containers when they believe they have exclusive system access. A badly behaving process in one container could use up all the file handles on a system and starve the other containers. Diagnosing the shared resource usage is not feasible from within the container.
- By default, containers inherit many system-level kernel capabilities. While Docker has many useful options for restricting kernel capabilities, you need a deeper understanding of an application’s needs to run it inside containers than you would if running it in a VM. The containers and the application within them will be dependent on the capabilities of the kernel on which they reside.
- Containers are not “write once, run anywhere”. Since they use the host kernel, applications must be compatible with said kernel. Just because many applications don’t depend on particular kernel features doesn’t mean that no applications do.
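The file-handle point above is easy to demonstrate: rlimits, not namespaces, bound this resource, and any process can hit the wall. A minimal sketch (the limit value 64 is arbitrary, and the limit is restored afterwards):

```python
import os
import resource
import tempfile

# Lower this process's soft file-handle limit, open files until the
# kernel refuses, then restore the limit. A misbehaving containerized
# process does the same thing to the shared host-wide pool.
soft, hard = resource.getrlimit(resource.RLIMIT_NOFILE)
resource.setrlimit(resource.RLIMIT_NOFILE, (64, hard))

fds, failed = [], False
try:
    for _ in range(128):
        fds.append(os.open(tempfile.gettempdir(), os.O_RDONLY))
except OSError:
    failed = True  # EMFILE: "too many open files"
finally:
    for fd in fds:
        os.close(fd)
    resource.setrlimit(resource.RLIMIT_NOFILE, (soft, hard))  # restore
```

This is why options like Docker's `--ulimit` matter: the kernel accounting is shared, so limits must be set explicitly per container.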
For these and other reasons, Docker images should be designed and used with consideration for the host system on which they run. By only consuming images from trusted sources, you reduce the risk of deploying compromised containerized applications. Docker images should be considered as powerful as RPMs and should only be installed from sources you trust. You wouldn't expect your system to remain secure if you randomly installed untrusted RPMs, nor should you if you "docker pull" random Docker images.
In the future we will discuss the topic of untrusted images.

Posted: 2014-12-17T14:30:37+00:00
Analysis of the CVE-2013-6435 Flaw in RPM
RPM offers considerable advantages over the traditional open-source software installation methodology of building from source via tarballs, especially when it comes to software distribution and management. This has led other Linux distributions to adopt RPM, either as the default package management system or as an alternative to their own defaults.
Like any big, widely used software, over time several features are added to it and also several security flaws are found. On several occasions Red Hat has found and fixed security issues with RPM.
Florian Weimer of Red Hat Product Security discovered an interesting flaw in RPM, which was assigned CVE-2013-6435. Firstly, let’s take a brief look at the structure of an RPM file. It consists of two main parts: the RPM header and the payload. The payload is a compressed CPIO archive of binary files that are installed by the RPM utility. The RPM header, among other things, contains a cryptographic checksum of all the installed files in the CPIO archive. The header also contains a provision for a cryptographic signature. The signature works by performing a mathematical function on the header and archive section of the file. The mathematical function can be an encryption process, such as PGP (Pretty Good Privacy), or a message digest in the MD5 format.
If the RPM is signed, one can use the corresponding public key to verify the integrity and even the authenticity of the package. However, RPM only checked the header and not the payload during the installation.
When an RPM is installed, it writes the contents of the package to its target directory and then verifies its checksum against the value in the header. If the checksum does not match, that means something is wrong with the package (possibly someone has tampered with it) and the file is removed. At this point RPM refuses to install that particular package.
Though this may seem like the correct way to handle things, it has a bad consequence. Let’s assume RPM installs a file in the /etc/cron.d directory and then verifies its checksum. This offers a small race-window, in which crond can run before the checksum is found to be incorrect and the file is removed. There are several ways to prolong this window as well. So in the end we achieve arbitrary code execution as root, even though the system administrator assumes that the RPM package was never installed.
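The race window can be simulated in a few lines. The sketch below mimics RPM's old behavior (write to the final location, *then* verify), with a hypothetical `observer` standing in for crond or any other process that might look at the file during the window:

```python
import hashlib
import os
import tempfile

def unsafe_install(path, payload, expected_sha256, observer):
    """Write-then-verify, like the vulnerable RPM behavior."""
    with open(path, "wb") as f:
        f.write(payload)
    observer(path)  # race window: another process can see the file here
    if hashlib.sha256(payload).hexdigest() != expected_sha256:
        os.remove(path)  # verification failed, remove the tampered file
        return False
    return True

seen = []
target = os.path.join(tempfile.mkdtemp(), "job.cron")
ok = unsafe_install(target, b"evil payload",
                    hashlib.sha256(b"good payload").hexdigest(),
                    lambda p: seen.append(os.path.exists(p)))
# The tampered file was visible during the window even though the
# install "failed" and the file was ultimately removed.
```

Even though the install is rejected, `seen` records that the file briefly existed at its final path, which is exactly the window the attack exploits.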
The approach Red Hat used to solve the problem is:
- Require the size in the header to match with the size of the file in the payload. This prevents anyone from tampering with the payload, because the header is cryptographically verified. (This fix is already present in the upstream version of RPM)
- Set restrictive permissions while a file is being unpacked from an RPM package. This allows only root to access those files. Also, several programs, including cron, check permission sanity before running such files.
Another approach to mitigate this issue is the use of the O_TMPFILE flag. Linux kernel 3.11 and above introduced this flag, which can be passed to open(2), to simplify the creation of secure temporary files. Files opened with the O_TMPFILE flag are created, but they are not visible in the file system. As soon as they are closed, they are deleted. There are two uses for these files: race-free temporary files and creation of initially unreachable files. These unreachable files can be written to or changed just like regular files. RPM could use this approach to create a temporary, unreachable file, run a checksum on it, and either delete it or atomically link it into place, without being vulnerable to the attack described above. However, as mentioned above, this feature is only available in Linux kernel 3.11 and above; it was added to glibc 2.19 and is slowly making its way into GNU/Linux distributions.
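A sketch of the "unreachable file" idea, assuming a Linux host: prefer O_TMPFILE where the kernel and filesystem support it, and fall back to the older create-then-unlink trick, which achieves the same invisibility for this demonstration (though not the same race-freedom at creation time):

```python
import hashlib
import os
import tempfile

def open_unreachable(dirpath):
    """Open a file that has no name in the filesystem."""
    if hasattr(os, "O_TMPFILE"):
        try:
            return os.open(dirpath, os.O_TMPFILE | os.O_RDWR, 0o600)
        except OSError:
            pass  # the filesystem may not support O_TMPFILE
    path = os.path.join(dirpath, ".staging")
    fd = os.open(path, os.O_CREAT | os.O_RDWR, 0o600)
    os.unlink(path)  # the fd stays valid; the name is gone
    return fd

d = tempfile.mkdtemp()
fd = open_unreachable(d)
os.write(fd, b"payload bytes")
os.lseek(fd, 0, os.SEEK_SET)
digest = hashlib.sha256(os.read(fd, 1 << 16)).hexdigest()
visible = os.listdir(d)  # nothing appears in the directory
os.close(fd)
```

The checksum can be verified on the anonymous file before it is ever linked into place, so no other process can observe an unverified copy.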
The risk mentioned above is greatly reduced if the following precautions are followed:
- Always check the signatures of RPM packages before installing them. Red Hat RPMs are signed with cryptographic keys provided by Red Hat. When installing RPMs from Red Hat or Fedora repositories, Yum will automatically validate RPM packages via the respective public keys, unless explicitly told not to (via the "nogpgcheck" option and configuration directive).
- Package downloads via Red Hat software repositories are protected via TLS/SSL so it is extremely difficult to tamper with them in transit. Fedora uses a whole-file hash chain rooted in a hash downloaded over TLS/SSL from a Fedora-run central server.
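The whole-file hash check described for Fedora's repositories reduces to hashing the downloaded file in chunks and comparing against a digest obtained over a trusted channel. A minimal sketch (function name and chunk size are illustrative):

```python
import hashlib
import os
import tempfile

def verify_download(path, expected_sha256):
    """Hash the whole file in chunks and compare against a trusted
    digest (in Fedora's case, a digest ultimately rooted in metadata
    fetched over TLS/SSL)."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest() == expected_sha256

fd, path = tempfile.mkstemp()
os.write(fd, b"package-bytes")
os.close(fd)
good = hashlib.sha256(b"package-bytes").hexdigest()
```

A mismatch means the file was corrupted or tampered with in transit and must not be installed.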
The above issue (CVE-2013-6435) has been fixed along with another issue (CVE-2014-8118), which is a potentially exploitable crash in the CPIO parser.
Red Hat customers should update to the latest versions of RPM via the associated security advisories.

Posted: 2014-12-10T14:30:50+00:00
Disabling SSLv3 on the client and server
Recently, some Internet search engines announced that they would prefer websites secured with encryption over those that are not. There are, of course, other reasons why securing your website with encryption is beneficial: it protects authentication credentials, mitigates the use of cookies as a means of tracking and allowing access, provides privacy for your users, and authenticates your server, thus protecting the information you are trying to convey to your users. And while setting up and using encryption on a webserver can be trivial, doing it properly might take a few additional minutes.
Red Hat strives to ship sane defaults that allow both security and availability. Depending on your clients a more stringent or lax configuration may be desirable. Red Hat Support provides both written documentation as well as a friendly person that can help make sense of it all. Inevitably, it is the responsibility of the system owner to secure the systems they host.
Good cryptographic protocols
Protocols are the basis for all cryptography and provide the instructions for implementing ciphers and using certificates. In the asymmetric, or public key, encryption world the protocols all descend from the Secure Sockets Layer, or SSL, protocol. SSL has come a long way since its initial release in 1995. Development has moved relatively quickly, and the latest version, Transport Layer Security version 1.2 (TLS 1.2), is now the standard that all new software should support.
Unfortunately some of the software found on the Internet still supports or even requires older versions of the SSL protocol. These older protocols are showing their age and are starting to fail. The most recent example is the POODLE vulnerability which showed how weak SSL 3.0 really is.
In response to the weakened protocol Red Hat has provided advice to disable SSL 3.0 from its products, and help its customers implement the best available cryptography. This is seen in products from Apache httpd to Mozilla Firefox. Because SSL 3.0 is quickly approaching its twentieth birthday it’s probably best to move on to newer and better options.
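In application code, disabling the old protocols amounts to setting a floor on the negotiated version. A hedged sketch using Python's `ssl` module (the moral equivalent of the `SSLProtocol all -SSLv3`-style advice for servers; requires a reasonably modern OpenSSL):

```python
import ssl

# Build a server-side TLS context that refuses SSL 3.0 and everything
# below TLS 1.2. Clients that cannot speak TLS 1.2 will fail to
# connect rather than silently negotiate a weak protocol.
ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
ctx.minimum_version = ssl.TLSVersion.TLSv1_2
```

The same `minimum_version` setting works for client contexts, which protects users even when a server is lax.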
Of course the protocol can’t fix everything if you’re using bad ciphers.
Good cryptographic ciphers
Cryptographic ciphers are just as important to protect your information. Weak ciphers, like RC4, are still used on the Internet today even though better and more efficient ciphers are available. Unfortunately the recommendations change frequently: what was suggested just a few months ago may no longer be a good choice today. As more work goes into researching the available ciphers, weaknesses are discovered.
Fortunately there are resources available to help you stay up to date. Mozilla provides recommended cipher choices that are updated regularly. Broken down into three categories, system owners can determine which configuration best meets their needs.
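Applying such a recommendation usually means passing an OpenSSL cipher string to your TLS library. A sketch in the spirit of the "modern" profile, restricted to forward-secret AEAD suites (the exact cipher string below is illustrative, not a canonical Mozilla recommendation):

```python
import ssl

ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
# Forward-secret key exchange (ECDHE) with AEAD ciphers only; weak
# legacy ciphers such as RC4 are excluded by construction.
ctx.set_ciphers("ECDHE+AESGCM:ECDHE+CHACHA20")
names = [c["name"] for c in ctx.get_ciphers()]
```

`get_ciphers()` lets you audit exactly which suites the context will offer, which is a useful sanity check after any configuration change.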
Of course the cipher can't fix everything if your certificates are not secure.
Certificates
Certificates are what authenticate your server to your users. If an attacker can spoof your certificate they can intercept all traffic going between your server and users. It’s important to protect your keys and certificates once they have been generated. Using a hardware security module (HSM) to store your certificates is a great idea. Using a reputable certificate authority is equally important.
Clients
Most clients that support SSL/TLS encryption automatically try to negotiate the latest version. We found with the POODLE attack that HTTP clients, such as Firefox, could be downgraded to a weak protocol like SSL 3.0. Because of this, many server owners disabled SSL 3.0 to prevent the downgrade attack from affecting their users. Mozilla has disabled SSL 3.0 by default in the latest version of Firefox (although it can be re-enabled for legacy support). Now users are protected even when server owners are lax in their security (although they are still at the mercy of the server's cipher and protocol choices).
Much of the work has already been done behind the scenes, in the development of the software that serves up websites as well as the software that consumes the data from those servers. The final step is for system owners to implement the technology that is available. While a healthy understanding of cryptography and public key infrastructure is good, it is not necessary in order to properly implement good cryptographic solutions. What is important is protecting your data and that of your users. Trust is built during every interaction, and your website is usually a large part of that interaction.

Posted: 2014-12-03T14:30:23+00:00
Enterprise Linux 6.5 to 6.6 risk report
Red Hat Enterprise Linux 6.6 was released the 14th of October, 2014, eleven months since the release of 6.5 in November 2013. So let's use this opportunity to take a quick look back over the vulnerabilities and security updates made in that time, specifically for Red Hat Enterprise Linux 6 Server.
Red Hat Enterprise Linux 6 is in its fourth year since release, and will receive security updates until November 30th 2020.
Errata count
The chart below illustrates the total number of security updates issued for Red Hat Enterprise Linux 6 Server if you had installed 6.5, up to and including the 6.6 release, broken down by severity. It’s split into two columns, one for the packages you’d get if you did a default install, and the other if you installed every single package.
During installation there actually isn’t an option to install every package, you’d have to manually select them all, and it’s not a likely scenario. For a given installation, the number of package updates and vulnerabilities that affected you will depend on exactly what you selected during installation and which packages you have subsequently installed or removed.
For a default install, from release of 6.5 up to and including 6.6, we shipped 47 advisories to address 219 vulnerabilities. 2 advisories were rated critical, 25 were important, and the remaining 20 were moderate and low.
Or, for all packages, from release of 6.5 up to and including 6.6, we shipped 116 advisories to address 399 vulnerabilities. 13 advisories were rated critical, 53 were important, and the remaining 50 were moderate and low.

The 13 critical advisories addressed 42 critical vulnerabilities across six different projects:
- An update to php: RHSA-2013:1813 (December 2013).
- An update to Java (OpenJDK):
- RHSA-2014:0026 (January 2014). Multiple improper permission check issues were discovered in the Serviceability, Security, CORBA, JAAS, JAXP, and Networking components in OpenJDK. An untrusted Java application or applet could use these flaws to bypass certain Java sandbox restrictions.
- RHSA-2014:0406 (April 2014).
- RHSA-2014:0889 (July 2014). It was discovered that the Hotspot component in OpenJDK did not properly verify bytecode from the class files. An untrusted Java application or applet could possibly use these flaws to bypass Java sandbox restrictions.
- An update to ruby: RHSA-2013:1764 (November 2013).
- An update to nss and nspr RHSA-2014:0917 (July 2014). A race condition was found in the way NSS verified certain certificates. A remote attacker could use this flaw to crash an application using NSS or, possibly, execute arbitrary code with the privileges of the user running that application.
- An update to bash (Shellshock): RHSA-2014:1293 (September 2014).
- An update to Firefox:
- RHSA-2013:1812 (December 2013). Several flaws were found in the processing of malformed web content. A web page containing malicious content could cause Firefox to terminate unexpectedly or, potentially, execute arbitrary code with the privileges of the user running Firefox.
- RHSA-2014:0132 (February 2014). Several flaws were found in the processing of malformed web content. A web page containing malicious content could cause Firefox to crash or, potentially, execute arbitrary code with the privileges of the user running Firefox.
- RHSA-2014:0310 (March 2014). Several flaws were found in the processing of malformed web content. A web page containing malicious content could cause Firefox to crash or, potentially, execute arbitrary code with the privileges of the user running Firefox.
- RHSA-2014:0448 (April 2014). Several flaws were found in the processing of malformed web content. A web page containing malicious content could cause Firefox to crash or, potentially, execute arbitrary code with the privileges of the user running Firefox.
- RHSA-2014:0741 (June 2014). Several flaws were found in the processing of malformed web content. A web page containing malicious content could cause Firefox to crash or, potentially, execute arbitrary code with the privileges of the user running Firefox.
- RHSA-2014:0919 (July 2014). Several flaws were found in the processing of malformed web content. A web page containing malicious content could cause Firefox to crash or, potentially, execute arbitrary code with the privileges of the user running Firefox.
- RHSA-2014:1144 (September 2014). Several flaws were found in the processing of malformed web content. A web page containing malicious content could cause Firefox to crash or, potentially, execute arbitrary code with the privileges of the user running Firefox.
- RHSA-2014:1635 (October 2014). Several flaws were found in the processing of malformed web content. A web page containing malicious content could cause Firefox to crash or, potentially, execute arbitrary code with the privileges of the user running Firefox.
A flaw was found in the Alarm API, which allows applications to schedule actions to be run in the future. A malicious web application could use this flaw to bypass cross-origin restrictions.
97% of updates to correct the 42 critical vulnerabilities were available via Red Hat Network either the same day or the next calendar day after the issues were public.

This data gives a feel for the risk of running Red Hat Enterprise Linux 6 Server, but isn't really useful for comparisons with other major versions, distributions, or operating systems — for example, a default install of Red Hat Enterprise Linux 6 Server does not include Firefox, but Red Hat Enterprise Linux 5 Server does. You can use our public security measurement data and tools, and run your own custom metrics for any given Red Hat product, package set, timescales, and severity range of interest.
See also: 6.5, 6.4, 6.3, 6.2, and 6.1 risk reports.

Posted: 2014-11-12T14:30:28+00:00
Can SSL 3.0 be fixed? An analysis of the POODLE attack.
SSL and TLS are cryptographic protocols which allow users to securely communicate over the Internet. Their development history is no different from that of other Internet standards: security flaws were found in older versions, and other improvements were required as technology progressed (for example, elliptic curve cryptography, or ECC), which led to the creation of newer versions of the protocol.
It is easier to write newer standards, and maybe even implement them in code, than to adapt existing ones while maintaining backward compatibility. The widespread use of SSL/TLS to secure traffic on the Internet makes a uniform update difficult. This is especially true for hardware and embedded devices such as routers and consumer electronics which may receive infrequent updates from their vendors.
The fact that legacy systems and protocols need to be supported, even though more secure options are available, has lead to the inclusion of a version negotiation mechanism in SSL/TLS protocols. This mechanism allows a client and a server to communicate even if the highest SSL/TLS version they support is not identical. The client indicates the highest version it supports in its ClientHello handshake message, then the server picks the highest version supported by both the client and the server, then communicates this version back to the client in its ServerHello handshake message. The SSL/TLS protocols implement protections to prevent a man-in-the-middle (MITM) attacker from being able to tamper with handshake messages that force the use of a protocol version lower than the highest version supported by both the client and the server.
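The in-band negotiation is simple to model: the client announces the highest version it speaks, and the server picks the highest version both sides support. A sketch with versions encoded as (major, minor) wire values — SSL 3.0 is (3, 0), TLS 1.0 is (3, 1), TLS 1.2 is (3, 3):

```python
def negotiate(client_max, server_supported):
    """Server side of the Hello exchange: pick the highest protocol
    version both ends support, or fail the handshake."""
    common = [v for v in server_supported if v <= client_max]
    if not common:
        raise ValueError("handshake_failure: no common protocol version")
    return max(common)

# A modern client lands on TLS 1.2; an old client gets TLS 1.0.
modern = negotiate((3, 3), {(3, 0), (3, 1), (3, 2), (3, 3)})
legacy = negotiate((3, 1), {(3, 0), (3, 1), (3, 2), (3, 3)})
```

The protocol-level protections exist precisely so that an attacker cannot tamper with `client_max` in transit; the browser fallback dance described next bypasses this mechanism entirely.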
Most popular browsers implement a different, out-of-band mechanism for fallback to earlier protocol versions. Some SSL/TLS implementations do not correctly handle cases when a connecting client supports a newer TLS protocol version than supported by the server, or when certain TLS extensions are used. Instead of negotiating the highest TLS version supported by the server, the connection attempt may fail. As a workaround, the web browser may attempt to re-connect with certain protocol versions disabled. For example, the browser may initially connect claiming TLS 1.2 as the highest supported version, and subsequently reconnect claiming only TLS 1.1, TLS 1.0, or eventually SSL 3.0 as the highest supported version until the connection attempt succeeds. This can trivially allow a MITM attacker to cause a protocol downgrade and make the client/server use SSL 3.0. This fallback behavior is not seen in non-HTTPS clients.
The issue related to the POODLE flaw is an attack against the "authenticate-then-encrypt" constructions used by block ciphers in their cipher block chaining (CBC) mode, as used in SSL and TLS. Using SSL 3.0, at most 256 connections are required to reliably decrypt one byte of ciphertext. Known flaws already affect RC4 and other non-block ciphers, and their use is discouraged.
Several cryptographic library vendors have issued patches which introduce the TLS Fallback Signaling Cipher Suite Value (TLS_FALLBACK_SCSV) support to their libraries. This is essentially a fallback-signaling mechanism in which clients indicate to the server that they can speak a newer SSL/TLS version than the one they are proposing. If TLS_FALLBACK_SCSV was included in the ClientHello and the highest protocol version supported by the server is higher than the version indicated by the client, the server aborts the connection, because this means the client is trying to fall back to an older version even though it can speak a newer one.
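The server-side check is small enough to sketch directly. The SCSV is an ordinary entry in the ClientHello's cipher-suite list with the value 0x5600 (per RFC 7507); versions are again (major, minor) wire values, and the function name is illustrative:

```python
TLS_FALLBACK_SCSV = 0x5600  # cipher-suite value defined by RFC 7507

def accept_hello(client_version, cipher_suites, server_max):
    """A falling-back client advertises the SCSV; a server that speaks
    something higher must refuse, since a genuine first attempt from
    that client would have offered the higher version."""
    if TLS_FALLBACK_SCSV in cipher_suites and client_version < server_max:
        raise ConnectionError("inappropriate_fallback")
    return min(client_version, server_max)

# An honest SSL 3.0-only client (no SCSV) still connects:
ok = accept_hello((3, 0), set(), server_max=(3, 3))
# A MITM-forced retry at SSL 3.0 carries the SCSV and is rejected:
try:
    accept_hello((3, 0), {TLS_FALLBACK_SCSV}, server_max=(3, 3))
    downgrade_blocked = False
except ConnectionError:
    downgrade_blocked = True
```

Note the asymmetry: the SCSV only helps when the client's retry logic sets it, which is why browser support matters as much as the server patch.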
Before applying this fix, there are several things that need to be understood:
- As discussed before, only web browsers perform an out-of-band protocol fallback. Not all web browsers currently support TLS_FALLBACK_SCSV in their released versions. Even if the patch is applied on the server, the connection may still be unsafe if the browser is able to negotiate SSL 3.0.
- Clients which do not implement out-of-protocol TLS version downgrades (generally anything which does not speak HTTPS) do not need to be changed. Adding TLS_FALLBACK_SCSV is unnecessary (and even impossible) if there is no downgrade logic in the client application.
- Thunderbird shares a lot of its code with the Firefox web browser, including the connection setup code for IMAPS and SMTPS. This means that Thunderbird will perform an insecure protocol downgrade, just like Firefox. However, the plaintext recovery attack described in the POODLE paper does not apply to IMAPS or SMTPS, and the web browser in Thunderbird has Javascript disabled, and is usually not used to access sites which require authentication, so the impact on Thunderbird is very limited.
- The TLS/SSL server needs to be patched to support the SCSV extension – though, as opposed to the client, the server does not have to be rebuilt with source changes applied. Just installing an upgraded TLS library is sufficient. Due to the current lack of browser support, this server-side change does not have any positive security impact as of this writing. It only prepares for a future where a significant share of browsers implement TLS_FALLBACK_SCSV.
- If both the server and the client are patched and one of them only supports SSL 3.0, SSL 3.0 will be used directly, which results in a connection with reduced security (compared to currently recommended practices). However, the alternative is a total connection failure or, in some situations, an unencrypted connection which does nothing to protect from an MITM attack. SSL 3.0 is still better than an unencrypted connection.
- As a stop-gap measure against attacks based on SSL 3.0, disabling support for this aging protocol can be performed on the server and the client. Advice on disabling SSL 3.0 in various Red Hat products and components is available on the Knowledge Base.
Information about (the lack of) ongoing attacks may help with a decision. Protocol downgrades are not covert attacks, in particular in this case. It is possible to log the SSL/TLS protocol versions negotiated with clients and compare these versions with expected version numbers (as derived from user profiles or the HTTP User-Agent header). Even after a forced downgrade to SSL 3.0, HTTPS protects against tampering. The plaintext recovery attack described in the POODLE paper (Bodo Möller, Thai Duong, Krzysztof Kotowicz, This POODLE Bites: Exploiting The SSL 3.0 Fallback, September 2014) can be detected by the server, and even just the number of requests it generates could be noticeable.
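The version-logging idea reduces to comparing each negotiated version with a floor expected for that client. A hypothetical sketch (the User-Agent strings, version tuples, and the default floor are all illustrative):

```python
def suspicious_downgrades(connection_log, expected_floor):
    """Flag connections negotiated below the version we expect for
    that client, e.g. a floor derived from user profiles or the HTTP
    User-Agent header. Versions are (major, minor) wire values."""
    return [(ua, v) for ua, v in connection_log
            if v < expected_floor.get(ua, (3, 0))]

# A modern browser should reach TLS 1.2; seeing it at SSL 3.0 is a
# strong hint that a downgrade attack is in progress.
expected = {"Firefox/33": (3, 3)}
log = [("Firefox/33", (3, 3)),
       ("Firefox/33", (3, 0)),
       ("curl/7.19", (3, 1))]
flagged = suspicious_downgrades(log, expected)
```

Such monitoring does not prevent a downgrade, but it makes the attack visible, which the surrounding text argues is often enough to act on.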
Red Hat has done additional research regarding the downgrade attack in question. We have not found any clients that can be forcibly downgraded by an attacker other than clients that speak HTTPS. Due to this fact, disabling SSL 3.0 on services which are not used by HTTPS clients does not affect the level of security offered. A client that supports a higher protocol version and cannot be downgraded is not at issue as it will always use the higher protocol version.
SSL 3.0 cannot be repaired at this point because what constitutes the SSL 3.0 protocol is set in stone by its specification. However, starting in 1999, successor protocols to SSL 3.0 were developed: TLS 1.0, 1.1, and 1.2 (currently the most recent version). Because of the built-in protocol upgrade mechanisms, these successor protocols will be used whenever possible. In this sense, SSL 3.0 has indeed been fixed: an update to SSL 3.0 should be seen as being TLS 1.0, 1.1, and 1.2. Implementing TLS_FALLBACK_SCSV handling in servers makes sure that attackers cannot circumvent the fixes in later protocol versions.

Posted: 2014-10-20T14:27:34+00:00
A brief history
Transport Layer Security (TLS) and its predecessor, Secure Sockets Layer (SSL), are cryptographic protocols designed to provide communication security over networks. The SSL protocol was originally developed by Netscape. Customers should review the recommendations and test changes before making them live in production systems. As always, Red Hat Support is available to answer any questions you may have.

Posted: 2014-10-15T14:44:40+00:00
The Source of Vulnerabilities: how Red Hat finds out about vulnerabilities
So let’s take a look at some example views of recent data: every vulnerability fixed in every Red Hat product in the 12 months up to 30th August 2014 (a total of 1012 vulnerabilities).
Firstly, a chart giving the breakdown of how we first found out about each issue:
- CERT: Issues reported to us from a national cert like CERT/CC or CPNI, generally in advance of public disclosure
- Individual: Issues reported to Red Hat Product Security directly by a customer or researcher, generally in advance of public disclosure
- Red Hat: Issues found by Red Hat employees
- Relationship: Issues reported to us by upstream projects, generally in advance of public disclosure
- Peer vendors: Issues reported to us by other OS distributions, through relationships or a shared private forum
- Internet: For issues not disclosed in advance we monitor a number of mailing lists and security web pages of upstream projects
- CVE: If we’ve not found out about an issue any other way, we can catch it from the list of public assigned CVE names from Mitre
Next, a breakdown of whether we knew about the issue in advance. For the purposes of our reports, we count learning of an issue the same day it was made public as not knowing in advance, even though we might have had a few hours notice:
There are a few interesting observations from this data:

- Red Hat employees find a lot of vulnerabilities. We don't just sit back and wait for others to find flaws for us to fix; we actively look for issues ourselves, and these are found by engineering and quality assurance as well as our security teams. 17% of all the issues we fixed in the year were found by Red Hat employees. The issues we find are shared back in advance, where possible, with upstream and other peer vendors (generally via the 'distros' shared private forum).
- Relationships matter. When you are fixing vulnerabilities in third-party software, having a relationship with the upstream makes a big difference. But it's really important to note that this should never be a one-way street: if an upstream is willing to give Red Hat information about flaws in advance, then we need to be willing to add value to that notification by sanity-checking the draft advisory, checking the patches, and feeding back the results of our quality testing. A recent good example of this is the OpenSSL CCS Injection flaw; our relationship with OpenSSL gave us advance notice of the issue, and we found a mistake in the advisory as well as a mistake in the patch which would otherwise have forced OpenSSL to issue a secondary fix after release. Only two of the dozens of companies prenotified about those OpenSSL issues actually added value back to OpenSSL.
- Red Hat can influence the way this metric looks; without a dedicated security team, a vendor could just watch what another vendor does and copy them, or rely on public feeds such as the list of assigned CVE names from Mitre. We can make the choice to invest in finding more issues and building upstream relationships.

Posted: 2014-10-08T13:30:48+00:00
Bash specially-crafted environment variables code injection attack
Update 2014-09-30 19:30 UTC
Questions have arisen around whether Red Hat products are vulnerable to CVE-2014-6277 and CVE-2014-6278.

Posted: 2014-09-24T14:00:08+00:00
Enterprise Linux 5.10 to 5.11 risk report
Red Hat Enterprise Linux 5.11 was released this month (September 2014), eleven months since the release of 5.10 in October 2013. So, as usual, let’s use this opportunity to take a look back over the vulnerabilities and security updates made in that time, specifically for Red Hat Enterprise Linux 5 Server.
Red Hat Enterprise Linux 5 is in Production 3 phase, being over seven years since general availability in March 2007, and will receive security updates until March 31st 2017.
Errata count
The chart below illustrates the total number of security updates issued for Red Hat Enterprise Linux 5 Server if you had installed 5.10, up to and including the 5.11 release, broken down by severity. It’s split into two columns, one for the packages you’d get if you did a default install, and the other if you installed every single package.
Note that during installation there actually isn’t an option to install every package, you’d have to manually select them all, and it’s not a likely scenario. For a given installation, the number of package updates and vulnerabilities that affected your systems will depend on exactly what you selected during installation and which packages you have subsequently installed or removed.
For a default install, from release of 5.10 up to and including 5.11, we shipped 41 advisories to address 129 vulnerabilities. 8 advisories were rated critical, 11 were important, and the remaining 22 were moderate and low.
For all packages, from release of 5.10 up to and including 5.11, we shipped 82 advisories to address 298 vulnerabilities. 12 advisories were rated critical, 29 were important, and the remaining 41 were moderate and low.

The 12 critical advisories addressed 33 critical vulnerabilities across just three different projects:
- An update to NSS/NSPR: RHSA-2014:0916 (July 2014). A race condition was found in the way NSS verified certain certificates, which could lead to arbitrary code execution with the privileges of the user running the application.
- Updates to PHP, PHP53: RHSA-2013:1813, RHSA-2013:1814 (December 2013). A flaw in the parsing of X.509 certificates could allow scripts using the affected function to potentially execute arbitrary code.
- An update to PHP: RHSA-2014:0311 (March 2014). A flaw in the conversion of strings to numbers could allow scripts using the affected function to potentially execute arbitrary code.
- Updates to Firefox, RHSA-2013:1268 (September 2013), RHSA-2013:1476 (October 2013), RHSA-2013:1812 (December 2013), RHSA-2014:0132 (February 2014), RHSA-2014:0310 (March 2014), RHSA-2014:0448 (Apr 2014), RHSA-2014:0741 (June 2014), RHSA-2014:0919 (July 2014) where a malicious web site could potentially run arbitrary code as the user running Firefox.
Updates to correct 32 of the 33 critical vulnerabilities were available via Red Hat Network either the same day or the next calendar day after the issues were public.
Overall, for Red Hat Enterprise Linux 5 since release until 5.11, 98% of critical vulnerabilities have had an update available to address them available from the Red Hat Network either the same day or the next calendar day after the issue was public.
Other significant vulnerabilities
Although not in the definition of critical severity, also of interest are other remote flaws and local privilege escalation flaws:
- A flaw in glibc, CVE-2014-5119, fixed by RHSA-2014:1110 (August 2014). A local user could use this flaw to escalate their privileges. A public exploit is available which targets the polkit application on 32-bit systems although polkit is not shipped in Red Hat Enterprise Linux 5. It may be possible to create an exploit for Red Hat Enterprise Linux 5 by targeting a different application.
- Two flaws in squid, CVE-2014-4115, and CVE-2014-3609, fixed by RHSA-2014:1148 (September 2014). A remote attacker could cause Squid to crash.
- A flaw in procmail, CVE-2014-3618, fixed by RHSA-2014:1172 (September 2014). A remote attacker could send an email with specially crafted headers that, when processed by formail, could cause procmail to crash or, possibly, execute arbitrary code as the user running formail.
- A flaw in Apache Struts, CVE-2014-0114, fixed by RHSA-2014:0474 (April 2014). A remote attacker could use this flaw to manipulate the ClassLoader used by an application server running Struts 1, potentially leading to arbitrary code execution under some conditions.
- A flaw where yum-updatesd did not properly perform RPM signature checks, CVE-2014-0022, fixed by RHSA-2014:1004 (Jan 2014). Where yum-updatesd was configured to automatically install updates, a remote attacker could use this flaw to install a malicious update on the target system using an unsigned RPM or an RPM signed with an untrusted key.
- A flaw in the kernel floppy driver, CVE-2014-1737, fixed by RHSA-2014:0740 (June 2014). A local user who has write access to /dev/fdX on a system with a floppy drive could use this flaw to escalate their privileges. A public exploit is available for this issue. Note that access to /dev/fdX is by default restricted to members of the floppy group.
- A flaw in libXfont, CVE-2013-6462, fixed by RHSA-2014:0018 (Jan 2014). A local user could potentially use this flaw to escalate their privileges to root.
- A flaw in xorg-x11-server, CVE-2013-6424, fixed by RHSA-2013:1868 (Dec 2013). An authorized client could potentially use this flaw to escalate their privileges to root.
- A flaw in the kernel QETH network device driver, CVE-2013-6381, fixed by RHSA-2014:0285 (March 2014). A local, unprivileged user could potentially use this flaw to escalate their privileges. Note this device is only found on s390x architecture systems.
Note that Red Hat Enterprise Linux 5 was not affected by the OpenSSL issue, CVE-2014-0160, “Heartbleed”.
See also:
5.10, 5.9, 5.8, 5.7, 5.6, 5.5, 5.4, 5.3, 5.2, and 5.1 risk reports.

Posted: 2014-09-18T13:30:49+00:00
Cascading Style Sheets, also referred to as CSS, is a simple design language intended to simplify the process of making web pages presentable.
CSS handles the look and feel part of a web page. Using CSS, you can control the color of the text, style of fonts, spacing between paragraphs, size of columns and layout. Apart from these, you can also control the background images or colors that are used, layout designs, variations in display for different devices and screen sizes as well as a variety of other effects.
JavaFX allows you to use CSS to enhance the look and feel of your applications. The package javafx.css contains the classes that are used to apply CSS to JavaFX applications.
A CSS style sheet comprises style rules that are interpreted by the runtime and then applied to the corresponding elements in your document.
A style rule is made of three parts, which are −
Selector − A selector is an HTML tag at which a style will be applied. This could be any tag like <h1> or <table>, etc.
Property − A property is a type of attribute of the HTML tag. In simpler terms, all the HTML attributes are converted into CSS properties. They could be color, border, etc.
Value − Values are assigned to properties. For example, a color property can have value either red or #F1F1F1, etc.
The syntax of a CSS style rule is as follows −
selector { property: value }
The default style sheet used by JavaFX is modena.css. It is found in the JavaFX runtime jar.
You can add your own style sheet to a scene in JavaFX as follows −
Scene scene = new Scene(new Group(), 500, 400);
scene.getStylesheets().add("path/stylesheet.css");
You can also add in-line styles using the setStyle() method. These styles consist of only key-value pairs and they are applicable to the nodes on which they are set. Following is a sample of setting an inline style on a button −

button1.setStyle("-fx-background-color: red; -fx-text-fill: white;");
Assume that we have developed a JavaFX application which displays a form with a text field, a password field, and two buttons. By default, this form looks as shown in the following screenshot −
The following program is an example which demonstrates how to add styles to the above application in JavaFX.
Save this code in a file with the name CssExample.java
import javafx.application.Application;
import static javafx.application.Application.launch;
import javafx.geometry.Insets;
import javafx.geometry.Pos;
import javafx.scene.Scene;
import javafx.scene.control.Button;
import javafx.scene.control.PasswordField;
import javafx.scene.layout.GridPane;
import javafx.scene.text.Text;
import javafx.scene.control.TextField;
import javafx.stage.Stage;

public class CssExample extends Application {
   @Override
   public void start(Stage stage) {
      //Creating label email
      Text text1 = new Text("Email");

      //Creating label password
      Text text2 = new Text("Password");

      //Creating Text Field for email
      TextField textField1 = new TextField();

      //Creating Password Field for password
      PasswordField textField2 = new PasswordField();

      //Creating Buttons
      Button button1 = new Button("Submit");
      Button button2 = new Button("Clear");

      //Creating a Grid Pane
      GridPane gridPane = new GridPane();

      //Setting size for the pane
      gridPane.setMinSize(400, 200);

      //Setting the padding
      gridPane.setPadding(new Insets(10, 10, 10, 10));

      //Setting the vertical and horizontal gaps between the columns
      gridPane.setVgap(5);
      gridPane.setHgap(5);

      //Setting the Grid alignment
      gridPane.setAlignment(Pos.CENTER);

      //Arranging all the nodes in the grid
      gridPane.add(text1, 0, 0);
      gridPane.add(textField1, 1, 0);
      gridPane.add(text2, 0, 1);
      gridPane.add(textField2, 1, 1);
      gridPane.add(button1, 0, 2);
      gridPane.add(button2, 1, 2);

      //Styling nodes
      button1.setStyle("-fx-background-color: darkslateblue; -fx-text-fill: white;");
      button2.setStyle("-fx-background-color: darkslateblue; -fx-text-fill: white;");
      text1.setStyle("-fx-font: normal bold 20px 'serif' ");
      text2.setStyle("-fx-font: normal bold 20px 'serif' ");
      gridPane.setStyle("-fx-background-color: BEIGE;");

      //Creating a scene object
      Scene scene = new Scene(gridPane);

      //Setting title to the Stage
      stage.setTitle("CSS Example");

      //Adding scene to the stage
      stage.setScene(scene);

      //Displaying the contents of the stage
      stage.show();
   }

   public static void main(String args[]) {
      launch(args);
   }
}
Compile and execute the saved Java file from the command prompt using the following commands.
javac CssExample.java
java CssExample
On executing, the above program generates a JavaFX window as shown below. | https://www.tutorialspoint.com/javafx/javafx_css.htm | CC-MAIN-2019-47 | refinedweb | 713 | 51.75 |
A nonlinear BVP
Posted March 08, 2013 at 09:19 AM | categories: pde
Adapted from Example 8.7 in _Numerical Methods in Engineering with Python_ by Jaan Kiusalaas.
We want to solve \(y''(x) = -3 y(x) y'(x)\) with \(y(0) = 0\) and \(y(2) = 1\) using a finite difference method. We discretize the region and approximate the derivatives as:
\(y''(x) \approx \frac{y_{i-1} - 2 y_i + y_{i+1}}{h^2} \)
\(y'(x) \approx \frac{y_{i+1} - y_{i-1}}{2 h} \)
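These central-difference approximations are easy to sanity-check numerically (a quick aside, not in the original post):

```python
# quick numerical sanity check of the central-difference formulas above
import numpy as np

h = 1e-3
f = np.sin        # test function with known derivatives
x0 = 0.7

yp = (f(x0 + h) - f(x0 - h)) / (2 * h)             # approximates cos(x0)
ypp = (f(x0 - h) - 2 * f(x0) + f(x0 + h)) / h**2   # approximates -sin(x0)
print(abs(yp - np.cos(x0)), abs(ypp + np.sin(x0)))  # both errors are tiny, O(h^2)
```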
We define a function \(y''(x) = F(x, y, y')\). At each node in our discretized region, we will have an equation that looks like \(y''(x) - F(x, y, y') = 0\), which will be nonlinear in the unknown solution \(y\). The set of equations to solve is:

\begin{eqnarray}
y_0 - \alpha &=& 0 \\
\frac{y_{i-1} - 2 y_i + y_{i+1}}{h^2} + 3 y_i \frac{y_{i+1} - y_{i-1}}{2 h} &=& 0 \\
y_L - \beta &=& 0
\end{eqnarray}
Since we use a nonlinear solver, we will have to provide an initial guess to the solution. We will in this case assume a line. In other cases, a bad initial guess may lead to no solution.
import numpy as np
from scipy.optimize import fsolve
import matplotlib.pyplot as plt

x1 = 0.0
x2 = 2.0
alpha = 0.0
beta = 1.0
N = 11
X = np.linspace(x1, x2, N)
h = (x2 - x1) / (N - 1)

def Ypp(x, y, yprime):
    '''define y'' = -3*y*y' '''
    return -3.0 * y * yprime

def residuals(y):
    '''When we have the right values of y, this function will be zero.'''
    res = np.zeros(y.shape)
    res[0] = y[0] - alpha
    for i in range(1, N - 1):
        x = X[i]
        YPP = (y[i - 1] - 2 * y[i] + y[i + 1]) / h**2
        YP = (y[i + 1] - y[i - 1]) / (2 * h)
        res[i] = YPP - Ypp(x, y[i], YP)
    res[-1] = y[-1] - beta
    return res

# we need an initial guess
init = alpha + (beta - alpha) / (x2 - x1) * X

Y = fsolve(residuals, init)

plt.plot(X, Y)
plt.savefig('images/bvp-nonlinear-1.png')
That code looks useful, so I put it in the pycse module in the function BVP_nl. Here is an example usage. We have to create two functions, one for the differential equation, and one for the initial guess.
from pycse import BVP_nl
import matplotlib.pyplot as plt

def Ypp(x, y, yprime):
    '''define y'' = -3*y*y' '''
    return -3.0 * y * yprime

def init(x):
    return alpha + (beta - alpha) / (x2 - x1) * x

x1 = 0.0
x2 = 2.0
alpha = 0.0
beta = 1.0
N = 11

x, y = BVP_nl(Ypp, x1, x2, alpha, beta, init, N)

plt.plot(x, y)
plt.savefig('images/bvp-nonlinear-2.png')
The results are the same.
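As a quick cross-check (not part of the original post), newer versions of SciPy ship a built-in collocation solver, scipy.integrate.solve_bvp, which can solve the same problem directly:

```python
# cross-check with SciPy's built-in BVP solver: y'' = -3*y*y', y(0) = 0, y(2) = 1
import numpy as np
from scipy.integrate import solve_bvp

def rhs(x, Y):
    # rewrite as a first-order system: Y[0] = y, Y[1] = y'
    return np.vstack([Y[1], -3.0 * Y[0] * Y[1]])

def bc(Ya, Yb):
    # boundary residuals: y(0) - 0 and y(2) - 1
    return np.array([Ya[0], Yb[0] - 1.0])

x = np.linspace(0.0, 2.0, 11)
Y0 = np.vstack([x / 2.0, np.full_like(x, 0.5)])  # same linear initial guess
sol = solve_bvp(rhs, bc, x, Y0)
print(sol.status)  # 0 indicates convergence
```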
Copyright (C) 2013 by John Kitchin. See the License for information about copying. | https://kitchingroup.cheme.cmu.edu/blog/2013/03/08/A-nonlinear-BVP/ | CC-MAIN-2021-31 | refinedweb | 484 | 76.62 |
> The drawback is that this change increases the length of the repr.
I would argue that it is a reasonable trade-off given the increase in ease of understanding.
I know that this is a weak argument, but, keywords are not without precedent. Consider the comically more verbose example:
import time
time.gmtime(1121871596)
# time.struct_time(tm_year=2005, tm_mon=7, tm_mday=20, tm_hour=14, tm_min=59, tm_sec=56, tm_wday=2, tm_yday=201, tm_isdst=0)
> datetime.datetime has more arguments, and its repr doesn't use keywords.
I think that guessing the meaning of values is much harder when it comes to timedelta.
> Users of datetime.timedelta know what arguments mean. If they don't know they always can look in the documentation or builtin help.
I created the issue after ... a friend ... spent an embarrassing amount of time debugging because he thought that the third argument represented milliseconds and not microseconds. <_<
I could, of course, tell him:
> In the face of ambiguity, resist the temptation to guess.
But he could retort:
> Explicit is better than implicit.
and
> Readability counts.
I think he has a point. | https://bugs.python.org/msg293215 | CC-MAIN-2020-40 | refinedweb | 186 | 69.28 |
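To make the ambiguity concrete, a minimal illustration (the positional order is days, seconds, microseconds):

```python
from datetime import timedelta

d = timedelta(0, 0, 1)   # positional arguments are (days, seconds, microseconds)
print(d)                 # 0:00:00.000001 (one microsecond, not one millisecond)
print(d == timedelta(microseconds=1))  # True
```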
The series ending in a2fe35bcdf3d383aaa5ad915b80630c8895d703d fails to compile the check target:
-------------------------------
Author: Jose Fonseca <jfonseca@vmware.com>
scons: Support Clang on Windows.
- Introduce 'gcc_compat' env flag, for all compilers that define __GNUC__,
(which includes Clang when it's not emulating MSVC.)
- Clang doesn't support whole program optimization
- Disable enumerator value warnings (not sure why Clang warns about them,
as my understanding is that MSVC promotes enums to unsigned ints
automatically.)
This is not enough to build with Clang + AddressSanitizer though. More
follow up changes will be required for that.
Reviewed-by: Roland Scheidegger <sroland@vmware.com>
Reviewed-by: Brian Paul <brianp@vmware.com>
-------------------------------
In file included from ../../../include/c99_compat.h:28:0,
from ../../../src/util/macros.h:29,
from ../../../src/glx/glxclient.h:55,
from fake_glx_screen.h:24,
from fake_glx_screen.cpp:23:
../../../include/no_extern_c.h:47:1: error: template with C linkage
template<class T> class _IncludeInsideExternCNotPortable;
^
First compilation error occurs in c068610a7df370af94fd6177598a35c4425a75f9.
The problem is the following lines in src/glx/tests/fake_glx_screen.h:
extern "C" {
#include "glxclient.h"
};
Including headers like that inside extern "C" should be avoided. My change just happened to trigger it.
I'll try to fix it.
Should be fixed with 52c7443932bd38d6708fcab2a8dfcc7ed3d678f2.
05 November 2009 12:58 [Source: ICIS news]
LONDON (ICIS news)--Artenius’ polyethylene terephthalate (PET) plant at El Prat de Llobregat, Spain, could be restarted by the end of the year to accommodate demand, a source from the Spanish company said on Thursday.
“There is a possibility that El Prat could restart by the end of December. It seems the company is financially much healthier...We are sold out and need to prepare for the new season ahead,” the source said.
The 150,000 tonne/year unit was shut down in September because of poor demand.
There was still no final agreement for the future of Artenius’ parent company, La Seda de Barcelona.
In October, La Seda was given a further four-week reprieve from banks as it tried to resolve issues with its finances.
The company’s facility at
La Seda has put up for sale four of the company’s other plants, which are located in
The European PET market was looking firm in November, despite this being the end of the bottling season, sources agreed.
“Now, in the off season, all our customers are purchasing our material,” a reseller noted.
Producers were seeking increases of up to €50/tonne ($75/tonne) this month because of higher feedstock costs and more expensive imports, they said.
October prices for European material spread from €850-920/tonne FD (free delivered)
($1 = €0.67)
How to use RTI Connext DDS to Communicate Across Docker Containers Using the Host Driver
When using the “host” Docker driver, all network interfaces of the host machine are accessible from the Docker container. Furthermore, there is no networking namespace separation between the Docker containers and the host machine.
To run a container in host mode, run the following command:
$ docker run --network="host" -ti <Docker image> <Program>
Possible scenarios
Communication within a host machine (Docker container to Docker container, or Docker container to host machine):
The configuration of the Docker container can affect the ability of RTI Connext DDS to communicate using the Docker "host" driver if shared memory is enabled. This is due to how the GUID that identifies every DDS Entity is generated.
By default, RTI Connext DDS generates the GUID as follows (for more information, see section “8.5.9.4 Controlling How the GUID is Set” of the RTI Connext DDS User’s Manual):
- 32 bits for the Host ID (rtps_host_id). The Host ID is currently based upon the IP address by default (although this might change in future versions). This will allow you to determine the node having the GUID problem.
- 16 low bits for the Application ID (rtps_app_id):
  - If the originating node is a Windows system, the relevant Process ID will be the value of GetCurrentProcessId().
  - On a VxWorks system, it will be the Task ID of a specific task.
  - On a Linux system, it will be the Process ID that you can see as the output of the command ps -ef.
- 8 bits for an Internal Counter. This counter allows an application to create multiple DomainParticipants in the same domain.
- 8 bits containing a Constant Value (0x01).
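To see why this matters, here is a rough sketch of how those fields could combine into a colliding GUID (field widths taken from the list above; this is purely illustrative, not RTI's actual implementation):

```python
# Illustrative packing of the GUID fields described above:
# 32-bit host id | 16-bit app id | 8-bit internal counter | 8-bit constant 0x01
def make_guid(host_id, app_id, counter):
    assert host_id < 2**32 and app_id < 2**16 and counter < 2**8
    return (host_id << 32) | (app_id << 16) | (counter << 8) | 0x01

# Two containers in host-network mode share the host's IP (same host id).
# If their processes also happen to receive the same PID, the GUIDs collide,
# and Connext assumes both participants can reach each other over shared memory.
a = make_guid(0xC0A80001, 1234, 0)   # container 1
b = make_guid(0xC0A80001, 1234, 0)   # container 2
print(hex(a), a == b)
```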
As we mentioned before, the host Docker driver mode could affect this generation process as follows:
- All the containers in host driver mode will have the same IP address; therefore, they will have the same Host ID.
- Due to the way process IDs are generated, two applications in two containers could have the same Process ID; therefore, they could have the same Application ID.
In this case, RTI Connext DDS will interpret that both applications are in the same machine and will try to communicate using shared memory. This is an optimization done by our Shared Memory transport. To resolve this, there are three possible solutions:
- Disabling the Shared Memory transport, as explained in Section "8.5.7 TRANSPORT_BUILTIN QosPolicy (DDS Extension)" of the RTI Connext DDS Core Libraries User’s Manual.
- Configuring your RTI Connext DDS applications and Docker containers to enable communication using shared memory, as explained in this Knowledge Base article.
- Changing how the GUID is set, as explained in Section "8.5.9.4 Controlling How the GUID is Set" of the RTI Connext DDS Core Libraries User’s Manual.
Communication across machines (Docker to Docker, or Docker to a remote host)
In this case, you should get out-of-the-box communication. That is, you should be able to communicate with RTI Connext DDS applications running on other machines (whether they are running on Docker containers or not).
This happens because a Docker container behaves just like any other regular process, despite having some resource isolation. With the host driver, the container will use the same network stack that any other process would use. | https://community.rti.com/kb/how-use-rti-connext-dds-communicate-across-docker-containers-using-host-driver | CC-MAIN-2021-17 | refinedweb | 554 | 51.89 |
I'm trying to print a function which uses several parameters from numpy arrays and lists, but I keep getting the error "'numpy.float64' object is not iterable". I've looked at several questions on the forum about this topic and tried different answers, but none seem to work (or I might be doing something wrong; I'm still a beginner at python). It all comes down to the same thing: I'm stuck and I hope you guys can help. I'm using python 2.7; this is the code:
EDIT: Included the error message and changed the print to "print(T, (obj(T),))"
from __future__ import division
import numpy as np
import random
K = np.array([1, 1, 1, 1, 1, 1, 1, 1, 1,])
x = len(K)
#Production rates and demand rates of products setup costs and holding costs (p, d, c, h)
p = np.array([193, 247, 231, 189, 159])
d = np.array([16, 16, 21, 19, 23])
#c1 = np.array([random.random() for _ in range(x)]) use these values as test values for c
c = [0.752, 0.768, 0.263, 0.152, 0.994, 0.449, 0.431, 0.154, 0.772]
h = [0.10*c[i]/240 for i in range(x)]
n = len(p)
t = [10.76, 74.61, 47.54, 29.40, 45.00, 90.48, 17.09, 85.19, 35.33]
def obj(T):
    for i in range(n):
        for q in range(x):
            for k in range(x):
                return ((1. / T) * c[q] + sum((.5*h[k]*(p[i]-d[i])*(p[i]/d[i])*(t[k])**2)))
for T in range(200, 900):
    print(T, (obj(T),))
runfile('C:/Users/Jasper/Anaconda2/Shirodkar.py', wdir='C:/Users/Jasper/Anaconda2')
Traceback (most recent call last):
File "<ipython-input-1-0cfdc6b9fe69>", line 1, in <module>
runfile('C:/Users/Jasper/Anaconda2/Shirodkar.py', wdir='C:/Users/Jasper/Anaconda2')
File "C:\Users\Jasper\Anaconda2\lib\site-packages\spyderlib\widgets\externalshell\sitecustomize.py", line 714, in runfile
execfile(filename, namespace)
File "C:\Users\Jasper\Anaconda2\lib\site-packages\spyderlib\widgets\externalshell\sitecustomize.py", line 74, in execfile
exec(compile(scripttext, filename, 'exec'), glob, loc)
File "C:/Users/Jasper/Anaconda2/Shirodkar.py", line 24, in <module>
print(T, (obj(T),))
File "C:/Users/Jasper/Anaconda2/Shirodkar.py", line 21, in obj
return ((1. / T) * c[q] + sum((.5*h[k]*(p[i]-d[i])*(p[i]/d[i])*(t[k])**2)))
TypeError: 'numpy.float64' object is not iterable
I suspect the problem is here:
sum((.5*h[k]*(p[i]-d[i])* (p[i]/d[i])*(t[k])**2))
The end result of that expression is a float, isn't it? What is the sum() for? | https://codedump.io/share/rYWB0mAfCCFs/1/numpyfloat64-is-not-iterable | CC-MAIN-2018-09 | refinedweb | 456 | 59.3 |
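If the goal was to sum the holding-cost term over k, one possible fix is to hand sum() an iterable, such as a generator expression, instead of a single float. A sketch (the default i and q indices here are my assumption, not from the original post):

```python
# One possible fix (a sketch): give sum() an iterable, not a single float.
import numpy as np

p = np.array([193, 247, 231, 189, 159])
d = np.array([16, 16, 21, 19, 23])
c = [0.752, 0.768, 0.263, 0.152, 0.994, 0.449, 0.431, 0.154, 0.772]
x = len(c)
h = [0.10 * c[i] / 240 for i in range(x)]
t = [10.76, 74.61, 47.54, 29.40, 45.00, 90.48, 17.09, 85.19, 35.33]

def obj(T, i=0, q=0):
    # sum() now receives a generator expression, so the TypeError disappears
    holding = sum(0.5 * h[k] * (p[i] - d[i]) * (p[i] / d[i]) * t[k] ** 2
                  for k in range(x))
    return (1.0 / T) * c[q] + holding

for T in range(200, 203):
    print(T, obj(T))   # no "'numpy.float64' object is not iterable" here
```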
When we started out, we needed a way to calculate distances between geographic (WGS-84) coordinates.
For a while we got by using a solution that relied on having a variant of Lambert conformal conic projection coordinates - it was sufficiently exact if not perfect, and our maps used the same projection, so it worked - although there was the added burden of transforming our stored (WGS-84) coordinates to Lambert every time we needed calculations done. A couple of years ago, however, we switched to Google Maps API and so we really had no use for Lambert - and increased load and precision demands made using the current solution a worse and worse choice.
Enter Chris Veness. Or rather, enter his implementation of the Vincenty inverse formula (pdf). Even though the math is beyond me, porting the Javascript implementation to Python was straightforward, and some testing showed that the result was both faster and had better precision than the previous solution.
Fast-forward to a few months ago: suddenly the performance was starting to look like something that could become a problem. We have many reasons for doing distance calculations, and while the batch jobs were not a problem, any amount of time that can be shaved off user-initiated actions is welcome.
So, I thought to myself, I've ported it once, how hard can it be to do it again? After all, when raw speed becomes the issue, the Python programmer reaches for C. Porting it was once again straightforward, mapping the Python function
def distance(x1, y1, x2, y2): ...
into
const double distance(const double x1, const double y1, const double x2, const double y2) { ...
The resulting C code is almost identical to the Python (and Javascript) implementations but runs about 6 times faster than the Python implementation. Allowing batch submission of calculations instead of calling once for every calculation, eliminating some FFI overhead, would increase the speed further.
$ python2.7 -m tests.test_distance
Time elapsed for 100000 calculations in
Python: 1952.70
C: 300.46
Factor: 6.50
Wrapping the C and calling it was simple enough using ctypes, and I've added fallback to the Python implementation if the C shared library cannot be found; a small __init__.py in the package hooks up the correct version:
from .distance import distance as _py_distance

try:
    from ctypes import cdll, c_double
    dll = cdll.LoadLibrary('cDistance.so')
    dll.distance.restype = c_double
    dll.distance.argtypes = [c_double, c_double, c_double, c_double]
    distance = dll.distance
    _c_distance = dll.distance
except OSError:
    # Fall back to Python implementation
    distance = _py_distance
Of course, this depends on the C code being compiled into cDistance.so and that file being available for linking; it also keeps the .so hardcoded, so a Windows DLL won't work. I really did intend to clean it up more before making it open source, but since I've been meaning to start open sourcing some of our tools for years now and never really found the time, I thought it would be better to throw it out there and postpone making it pretty instead. I hope someone can find some use in this, and I'll try to get it cleaned up and packaged Real Soon Now.
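As an aside, the restype/argtypes wiring can be exercised against any shared library on the system. Here is a self-contained sketch using the C math library instead of cDistance.so (it assumes a platform where ctypes can locate libm):

```python
# Self-contained sketch of the same ctypes pattern, pointed at the C math
# library instead of cDistance.so (assumes ctypes can locate libm).
import math
from ctypes import CDLL, c_double
from ctypes.util import find_library

libm = CDLL(find_library('m'))              # e.g. libm.so.6 on Linux
libm.hypot.restype = c_double               # declare the C return type...
libm.hypot.argtypes = [c_double, c_double]  # ...and the argument types

# ctypes marshals the Python floats to C doubles and back
print(libm.hypot(3.0, 4.0))  # 5.0
```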
Extreme Programming Installed 259
The Scoop
Last year's Extreme Programming Explained was a manifesto of sorts. Wouldn't it be nice if customers, management, and programmers could work together to produce good software on schedule and under budget? If planning, peer review, testing, and design are good, why not do them all the time? It even put forth the radical notion that customers should set business value while programmers create -- and revise -- technical schedules.
Yet another 'silver bullet' Fred Brooks debunked years ago? The authors of Extreme Programming Installed disagree. The book breaks XP into workable chunks, hanging flesh on the bones of Kent Beck's manifesto. It explains each element of XP in turn, based on the authors' personal and collective experiences.
For example, the Iteration Planning chapter describes planning meetings. The customer presents stories, the developers break the stories into tasks, and individual programmers estimate and sign up for tasks. Each element has further detail on best practices and potential traps. Finally, the chapter describes an average meeting.
What's to Like?
As with other titles in the series, the text is clear and easy to read. The short chapters have no fluff, saying only what's needed. Concise explanations and a gentle, conversational tone add up to a book that can be finished in an afternoon.
This book is the most practical of the series so far. Drawing on personal experiences and data gleaned from early adopters, the authors distill XP practices into their purest and most essential forms. Anecdotes from programmers in the trenches line the pages. Though everyone practices the processes slightly differently, a clear picture begins to emerge.
Though listed in the table of contents as "bonus tracks," the last 11 chapters may prove the most valuable. Each track addresses a common concern or criticism of XP, from "Who do you blame when something goes wrong?" to "How do you write unit tests for a GUI?" and "You can't possibly make accurate estimates." This won't satisfy all the nay-sayers, but it adds a healthy dose of reality.
What's to Consider?
The testing and refactoring sections, needing the most explanation, have a strong Smalltalk bias. While these chapters have strong supporting text, a decent programmer unfamiliar with the language will have to invest extra time to understand the examples fully. This is the most detailed portion of the book, and may be the hardest to read.
While some readers may like the open-ended nature of the presented techniques, others, familiar with more formal development processes, will want authoritative proclamations. XP actually installed, argue the authors, depends on the nature of the task and the team. The controversial axiom of embracing change by continually performing a certain few practices while discarding the rest, will raise some blood pressures. Clearly, this is not for the faint of heart.
Developers and managers interested in the whys of XP would do well to read Extreme Programming Explained instead. Though the authors present a brief business case for the process, most of the text assumes the reader has already decided to install it. Customers receive more text (a few chapters), though there's clearly room for an expanded treatment of their roles and responsibilities.
The SummaryExtreme Programming Installed will not silence the critics, but it makes great progress in showing how XP can work, in the right places. Beyond that, it demonstrates the flexibility of the approach, with numerous real-world examples. This book deserves a place next to Beck's manifesto, showing off XP as it's actually practiced.
Table of Contents
- Extreme Programming
- The Circle of Life
- On-Site Customer
- User Stories
- Acceptance Tests
- Story Estimation
- Small Releases
- Customer Defines Release
- Iteration Planning
- Quick Design Session
- Programming
- Pair Programming
- Unit Tests
- Test First, by Intention
- Releasing Changes
- Do or Do Not
- Experience Improves Estimates
- Resources, Scope, Quality, Time
- Steering
- Steering the Iteration
- Steering the Release
- Handling Defects
- Conclusion
Bonus Tracks
- We'll Try
- How to Estimate Anything
- Infrastructure
- It's Chet's Fault
- Balancing Hopes and Fears
- Testing Improves Code
- XPer Tries Java
- A Java Perspective
- A True Story
- Estimates and Promises
- Everything That Could Possibly Break
You can Purchase this book at ThinkGeek.
Review this . . . (Score:1)
Good Grief Another Load of BS (Score:1)
Back in the 80's it was 4GL this and 4GL that.
Then 90's we went through 'Booch is a god' phase.
Now we have a resurrection of XP.
Let me explain what works:
1. Hire developers who think coherently.
2. Let them get on with it.
3. Quit reading books, if you don't know what you're supposed to be doing then you haven't worked your way up through the ranks and should quit developing now.
Re:Quit reading books, eh?
Oh, there definitely are people here with a much stronger grasp of the business domain than myself. Or certain technologies. You're quite right that I have to be careful in judging what people know, and usually I assume people with experience are quite competent. What I am bringing up in this conversation is general "nagging thoughts" in the back of my head that come up time and time again when I see wierd gaps in people's knowledge that shouldn't be there given the experience they have.
For instance, I have people that have worked with a particular technology for years, but they have no idea how it works. Very often, people "programmed to an API", and went home. They never leveraged their experience to gain a deeper understanding of what they were using. Perhaps this "coasting" mentality is lack of talent, or lack of enthusiasm. I don't know.
Of course, there are the 1 out of 5 people that do leverage their experience and deep understanding of what they're using. But it's still sad that it's 1 out of 5 people.
I think we'll have to agree to disagree. I don't see XP proponents insisting on adhering to minor details. XP is a very tailorable / customizable process and I don't think any two teams do it the same way, or follow all of the practices.
I also think that my point about "coordinating attitudes & feelings" is less about coersion and more about understanding that most project failures are due to emotions, attitudes, and feelings. Fear, especially.
Most of the time we implement a process out of fear -- the develoeprs fear that we won't make the date, fear that we'll be asked to work overtime -- the customers fear they'll be lied to, or won't be able to make decisions about priorities.
XP says "acknowledge that fear up front", and aim the process towards cooling down those fears.
I've already tried to point out that a methodology which depends on a uniform higher-than-average level of skill/motivation among developers can be considered fragile. By definition, most teams are not composed of such developers.
Of course. It's still open debate whether XP requires higher-than-average developers, but generally I don't believe it does. One of the major goals of XP is to get average developers to work productively, with a senior developer or two coaching & guiding them. It does require developers with relatively high discipline, however, which is a potential fragility factor.
At OOPSLA 2000 in Minneapolis this past October, some of the people that were involved with XP's inception (Beck & Fowler, in particular) feel that XP *will not succeed* in most cases, mainly because of the cultural demands it places on the IT organization and the responsibility it demands of the customer.
The general agreement at this session was that "that's okay", because those that *do* use XP enjoy it, and are arguably turning out quality software with high productivity, and having fun while they do it. And that should be all that matters.
However, with business looking for further ways of improving efficiency, there's bound to be a tidal wave of opportunists looking to sell XP the "next great thing". XP probably isn't it, though it definitely is an extraordinarily effective process under particular circumstances.
Re:Quit reading books, eh? (Score:2)
I agree with you. Older programmers are, generally, very smart.
Young people generally do think they're smarter than they actually are.
Here, however, is my point:
- Speaking as a "young'un" I feel I have lots to learn from older/wiser programmers. Problem is that they're a rarity these days -- most people aren't much older, and certainly not much wiser. Do I say this out of arrogance? No, I say this out of pure observation. The few gurus I've had the pleasure of working with have been the exception, not the rule. Whether you really believe I'm fooling myself on this observation will have to be up to you.
- Most people that "worked through the ranks" aren't really much older, they're actually usually within their late 20's or early 30's. After that they move into management. And I don't believe that it's an industry trend to recognize this is a bad thing. Developers know, but management generally doesn't know (you hear it on Slashdot a lot, for instence). The reason programmers move on to management is because arguably, good managers are in *shorter* supply than good programmers. So management promotes whoever shows potential. At least, from my observation.
Now, a bit of an explanation of my position: I mentor and train people in C++ and Java, OO design, and transactional systems design, and I also help architect financial trading systems (where "architect" for me means "a developer with more influence".. I don't draw fancy bubbles and lines on a paper and call that architecture.)
yet I'm very often 5+ years younger than the people I'm teaching or working with. Most people don't believe me when I tell them my age. What I see at my various consulting engagements *FRUSTRATES* the heck out of me -- very few older programmers, very few senior programmers or architects, and generally very few people with a solid understanding of software engineering, good implementation techniques, or design techniques.
I REALLY should not be the one to teach these people these things, but it's what I do.
- Young people are usually very arrogant & think they know everything. I know I could fall into this trap. That's why I make a habit of starting every morning by telling myself: "You're young. You don't know shit." It's also why I devour books, and why I use every scrap of experience I get to my advantage.
But my day-to-day experience seems to tell me that I'm not "crap". I shouldn't know more than people older than me, people with better degrees, etc. but usually they're the ones asking ME for the answers to basic day-to-day questions.
And this is not just about the newest tips & techniques -- this is about day-to-day, how do I code this better, how do I test or debug this better?
- I think in my case, experience has been invaluable to me, but being a voracious reader has helped speed along my progress.
Finally, about XP:
- Everyone has a right to healthy skepticism. I know my history -- I've seen trends come and go: CASE tools, Object-Oriented Operating systems, AI, etc. I've also seen trends arrive that HAVE made a lasting impact: relational databases, GUIs, PCs, modems, and object orientation.
I've also seen the wave of methodologies that came out in the 90's that killed many forests but didn't help as many projects.
I think the reason I like XP is that it's different. It's the anti-methodology. It's not about ceremony or documentation, it's about coordinating people's attitudes, feelings, and fears into a framework that allows people to create quality software.
One of my frustrations with skeptics is when one paints XP as "the same as all the other failed methodologies"; that completely ignores that this one really *IS* different. Healthy skepticism is good; ignorant skepticism is not.
Some of the critiques from Tom Gilb, Craig Larman, etc., respected industry experts, are definite food for thought. But slashdot arguments that effectively say "don't read books!", or "it's bunk, just hire talented developers!" are just hogwash. If we didn't read books, we couldn't evolve our knowledge much. If we COULD hire talented developers, we wouldn't have a problem, would we? While there are lots of skilled developers, there is a dearth of talent.
So how do you maximize what little talent there is? You use a process to align people's strengths and minimize their weaknesses. You explicitly state your fears, you measure your progress constantly, and you continually deliver value at whatever speed you're working at. That's what XP is about.
Of course there are limitations and drawbacks - XP does not have any evidence of scaling over 12 developers -- it's rather new, it requires a highly disciplined team, it needs the guiding hand of a "coach" or "architect", it needs the customer to speak "with one voice", etc.
Generally, when I advocate XP, I explain both sides. Especially in my consulting engagements. And it has worked so far -- I've gotten good results in my pilot attempts at using the process in various international banks that I've worked with.
Re:It worked well for a hobby project. (Score:2)
Although (from memory, I haven't read XP in a while) the supposed way to go is not to be looking ahead to the next problem (that's what refactoring is for), but to concentrate on your one use case and tests. This is one of the parts that didn't sit right with me about XP. I liked the idea of most of it, although as I was reading I was thinking how it might apply to projects I'd been involved in, and wasn't so sure.
Also, since no-one else has mentioned it so far, the main source for information and discussion of XP (and an interesting read in its own right) is the C2 Wiki [c2.com].
Re:My GF did this (Score:2)
I think the XP books, and "refactoring" make it pretty clear that refactoring without unit testing is really just a form of "cowboy coding", it's not really "refactoring" at all.
There are some parts of XP that stand alone (for example, unit testing), and some parts which in isolation compromise quality and need to be tempered by the discipline of unit testing (such as "refactoring")
In conclusion, I'd bet that there are some parts of XP that you could practice and get immediate benefits (like an emphasis on testing), and obviously these should be done first.
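The "refactoring needs unit tests to back it up" point can be sketched concretely. This is a hand-rolled, self-contained example (no JUnit dependency); `PriceCalculator` and its pricing rules are entirely hypothetical, made up for illustration:

```java
// A minimal sketch of tests that pin down behavior BEFORE a refactor,
// so any later restructuring that breaks them is caught immediately.
// PriceCalculator is a hypothetical class, not from any real project.

class PriceCalculator {
    // Original, slightly clumsy implementation that someone might refactor.
    static int totalCents(int unitCents, int qty, int discountPct) {
        int total = unitCents * qty;
        return total - (total * discountPct) / 100;
    }
}

public class PriceCalculatorTest {
    static void check(boolean cond, String msg) {
        if (!cond) throw new AssertionError(msg);
    }

    public static void main(String[] args) {
        // These assertions document the contract; a refactor must keep them green.
        check(PriceCalculator.totalCents(100, 3, 0) == 300, "no discount");
        check(PriceCalculator.totalCents(100, 3, 10) == 270, "10% off");
        check(PriceCalculator.totalCents(0, 5, 50) == 0, "free items");
        System.out.println("all tests pass");
    }
}
```

With tests like these in place, restructuring `totalCents` stays "refactoring"; without them, it's the "cowboy coding" the comment warns about.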
Re:reading (Score:2)
Re:Finally an alternative to Giant Computer Books (Score:2)
It also has the side effect that by the time the other two XP books I have on order arrive, I'll have five XP books, and I'll have spent over $150 on them.
The total amount of material covered by the books would fit into one single $50 book. So perhaps I'm feeling a little aggrieved about the approach Kent has chosen, and maybe a little cynical about his reasons why.
Yes, I know, he's only written two of the five, and one of those with Martin Fowler. But maybe what he and the other people advocating XP have done to improve my own skillset is worth rewarding by buying their books. I think so.
~Cederic
Re:no luck here (Score:2)
As a simple example, I like vim with syntax highlighting. My boss (who is also one of the programmers) hates it and can't stand to see code look like that. I run my monitors at 1280x1024 or higher, depending on what machine I'm using, and most everyone in my office likes 800x600 tops, on a 17" monitor, and so all I hear is "I can't see your code" and stuff like that.
Those are things that standards can certainly enforce, at the expense of my freedom to work how I see fit. But yeah, as far as actual comments, you're supposed to follow standards.
Re:My GF did this (Score:2)
It sounds like they weren't really doing extreme programming then. They may have been following some of the principles of it, and calling it "XP".
Things never worked once? The unit tests should be telling you that. And after refactoring, the unit tests should still pass.
I'd suggest a good reading of _XP Explained_ and possibly _XP Installed_, compared and contrasted with the practices at your GF's job, may shed some light on what was actually going on.
Re:Sounds Interesting, but ... (Score:2)
Threading is evil and should be avoided. Sometimes, the evil that is threading is necessary, or the most logical way to accomplish something. Usually it's overused for things (like handling asynchronous behavior (i.e. network/socket communication)) that there are better solutions for.
My biggest complaint about Java right now is that it has an I/O model that requires the use of multiple threads to deal with more than one socket at a time, for instance.
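The thread-per-socket cost being complained about can be sketched without a network: with blocking I/O, every stream needs a dedicated reader thread. Here piped streams stand in for sockets so the sketch actually runs; it's illustrative only, not a recommendation:

```java
import java.io.*;

// Sketch of the thread-per-connection model: each blocking stream gets
// its own thread. PipedInput/OutputStream pairs substitute for sockets.
public class ThreadPerStream {

    // Spawn one thread that drains one blocking stream into a buffer.
    static Thread handle(InputStream in, StringBuffer sink) {
        Thread t = new Thread(() -> {
            try (BufferedReader r = new BufferedReader(new InputStreamReader(in))) {
                String line;
                while ((line = r.readLine()) != null)
                    sink.append(line);
            } catch (IOException e) {
                throw new RuntimeException(e);
            }
        });
        t.start();
        return t;
    }

    static String runDemo() throws Exception {
        PipedOutputStream out1 = new PipedOutputStream();
        PipedOutputStream out2 = new PipedOutputStream();
        StringBuffer sink1 = new StringBuffer();
        StringBuffer sink2 = new StringBuffer();

        // One dedicated thread per "connection" -- the overhead at issue.
        Thread t1 = handle(new PipedInputStream(out1), sink1);
        Thread t2 = handle(new PipedInputStream(out2), sink2);

        out1.write("hello\n".getBytes()); out1.close();  // close = EOF for the reader
        out2.write("world\n".getBytes()); out2.close();
        t1.join(); t2.join();
        return sink1 + " " + sink2;
    }

    public static void main(String[] args) throws Exception {
        System.out.println(runDemo());  // prints: hello world
    }
}
```

Two "connections" already cost two threads; N connections cost N threads, which is exactly why a select/poll-style readiness model is preferable for handling many sockets.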
Re:Sounds Interesting, but ... (Score:2)
A very central tenet of XP is testing. It's probably the thing that does the most to hold XP together.
Threads introduce non-deterministic interactions between different parts of a program. This can make adequate testing a pain.
Wednesday on FOX.... (Score:2)
LISTEN to Bill Gates tell a customer to upgrade to the next version of MS Office!!!
WATCH as Linus Torvalds and colleagues code around yet another Intel processor bug!!!
SEE Outlook Express proclaiming its love for its user, and melting mail servers worldwide!!!
--
Tried over-the-shoulder (Score:2)
A co-developer of mine here at the office has the eXtreme Programming book, and we've tried the over-the-shoulder method several times now. It's great, it gives each person a chance to add to the code, and it cuts down on the need for a code-review, but does not eliminate the need. I think it really helps us to not be "in the dark" with each other's code too.
I haven't tried doing this with any of the other developers yet, but I'm sure the results would be just as good. I also believe it took a small chunk out of our overall coding time.
I'm interested in hearing whether others have tried any other methods (I'm not very familiar with the book myself).
Pair Programming (Score:2)
chris
Re:Methodology of the day (Score:2)
I find that most CS students are particularly ignorant about methodology. They're usually quite naive about such things as working in a team and making realistic plans, and tend to overestimate their own capabilities.
Now pair programming is already practiced in many CS schools and universities anyway. Add a course about testing and let different pairs team up to build something larger, and you have a nice XP training course.
Because of the short iterations, it is possible for teachers to keep track of the students rather than having to wait for whatever is thrown over the fence at the end of the semester.
Re: New Age Programming B.S. (Score:2)
Well, I doubt the extra couple years of experience gives you the argument, but I don't want to get into a pissing contest either.
Listen, I'm sorry you've got an axe to grind, but I don't think you're arguing the points. I completely agree with the statements you make in this paragraph, but I still claim the management techniques are very similar; you're arguing about tools and degrees of safety. "Amount" of effort, compliance, testing, etc. is different than "style". I have seen XP's style used in everything from embedded avionics (flying in an F-14) to applets; they vary wildly in their actual implementation and goals to meet, as they should. XP is not "sloppy", as you seem to imply; it's a technology of management that can be used in many circumstances, including highly reliable embedded systems.
Anyway, reply to this if you'd like to carry on a serious discussion.
Re:My GF did this (Score:2)
Sigh. More rudeness and condescension for its own sake. If you had looked at any of the other posts I linked to, or at my published writings elsewhere (clue: I wrote this [linuxprogramming.com]) you would know that I am in fact a very strong advocate of testing at all levels, including but not limited to unit testing. I will not allow you to misrepresent my argument as being in any way opposed to unit testing. Someone said "you weren't doing XP" and I said "we don't know that" without any reference at any point to which part of XP was omitted. I wasn't referring at all to unit testing when I spoke of XP's fragility or inapplicability. I was thinking more of XP's admitted limitations as linked from one of my previous posts.
Indeed they do not, but several people here seem to be using just that argument as an excuse for a failure when XP was applied.
Take your strawman and shove it. I don't need to come up with a better methodology. It's not necessary for anybody to come up with a better methodology before we can consider XP's limitations or slashdotters' attempts to ram it down people's throats, but in fact quite a few smart people have managed it (if we define better as "more robust and/or applicable") over the last few decades of software engineering study. Yes, Virginia, there was software engineering before XP. Most elements of XP predate XP itself by decades. Good programming methodologies vary quite a bit, but one thing they have in common is that their authors admit there's no magic bullet. If only people here would believe them.
As I pointed out in a previous post, which you should have read before responding, XP is great but it's no panacea. Great things can still be overhyped, and right now - for all its good points - XP is being overhyped. What I seek to do is not debunk XP entirely, but just to bring the expectations down to a realistic level.
Re:My GF did this (Score:2)
What, another post from "NT Christ" full of flame, with not one whit about methodologies? What a surprise! And I expected so much better from you.
FYI, the article may have been originally written for another (related) thread, but was explicitly referenced in this thread, right here [slashdot.org] in the great-grandparent of your own first attempt at flaming. If it's too difficult for you to click on "User #xxx Info" and find it, surely you could look at the head of the thread to which you're responding. No, guess not. Not even that smart, are you? It may have been hard for you to find, but that doesn't make it an obscure reference.
Re:My GF did this (Score:2)
It's bad to lie. It's just plain dumb to lie so obviously. Anyone reading your post is just a click away from its parent, which anyone can see is at least as on-topic and informative as anything you've "contributed". If my reply fell short of my own usual standards for content, it's because I was replying to something that itself lacked substantive content - in particular to your earlier misrepresentation of my opinion. I notice you've dropped that particular theme since it was revealed as mere fabrication.
Do you have anything worthwhile to contribute to this conversation, or is it all like this? I'm usually glad to engage in discussion about programming methodologies, but this...this, I'm tired of after only two or three posts. Even as exercises in flaming for its own sake, your posts are pathetic.
Re:My GF did this (Score:2)
Agreed. Thank you, rblum.
Re:My GF did this (Score:2)
I have never dismissed XP out of hand. In this entire thread, and in the previous thread, I have been quite explicit about that. I believe XP is a good thing, just not as good (not as robust, not as broadly applicable) as some would have us believe. I've said it time and time again, so please stop lying.
Re:My GF did this (Score:2)
If you misunderstand/misrepresent what someone's saying, and then the person disagrees with the distorted version you present, that's not a contradiction. My opinion and my expression of it have remained consistent throughout this thread, and only your understanding of them has changed.
Re:My GF did this (Score:2)
I don't generally accuse people who merely misunderstand my posts of lying. Yet again, evidence of that is abundant, accessible, and contrary to your portrayal. With this and other comments, you in particular have shown a propensity for misrepresentation and strawman construction that is hard to explain as innocent misunderstanding. I call 'em as I see 'em, so if you want me to stop calling you a liar then stop lying. Stop misrepresenting opponents' positions, stop claiming references are obscure when they're right in front of your nose, stop claiming contradictions where there are none, stop claiming that other people aren't addressing the issues when they are, etc. etc. etc.
As ye sow so shall ye reap, and you have some nasty rhetorical habits that fully explain how you have been treated. Fix them and people will start listening to what you have to say. I'm sure you have much of value to say on this and other subjects, but with your current writing style it's just too much of a drag to separate the rare wheat from the abundant chaff. Grow up yourself - and that's meant quite sincerely. You're not doing yourself any favors by acting this way.
Re:Save Some Money Folks (Score:2)
If I may jump in to this little digression...that's a far from prime example. The AC was correct; there was no ad hominem. An ad hominem is an attack against the credentials of a person making an argument. The term does not apply to any arbitrary adverse portrayal of a participant, let alone of third parties. The construction you present - not an accurate paraphrase of the original, but we've come to expect that of you - may be illogical and unconvincing, but it's not an ad hominem. Your use of the term was, simply and bluntly, incorrect.
I don't know whether you believe in "magic bullets" in software, but you certainly seem to think you can find one in this debate. Ain't gonna happen, kid. The only way you're going to get your point across is to take the slow road - stay on topic, stay honest, and convince people by introducing new facts connected with logic. "Rhetoric bombs" such as random insertion of Latin phrases won't do it for you.
Re:Save Some Money Folks (Score:2)
There you go again. You have yourself commented favorably on some of my earlier posts, and yet now you claim that I "seem incapable" of posting anything but meta-argument. How can anyone have faith in the statements of someone who would so casually contradict themselves to insult an opponent?
And technical correctness doesn't matter? You used the term, you should be willing to accept correction on that use. Instead, you rejected the first correction, and have been singularly ungracious in accepting the second. Is that how you react when someone corrects a minor error in your code or your specs? To bring this back on topic, all of the chief XP advocates agree that having the proper attitude is critical to success deploying XP. How can you champion XP by so profligately displaying an attitude that would prevent it from working? That, along with everything else, undermines belief in the sincerity of your enthusiasm for XP. You seem much more enthusiastic about "winning" at all costs than about standing up for any kind of ideal or standard or methodology.
I'm sure we're boring and/or annoying everyone else with this, though. There are better ways and places to discuss the deficiencies in your manner of communication than in a public forum like this. Feel free to send me email - my address is easy to find - because I don't intend to respond to your flamebait here any more.
Re:Save Some Money Folks (Score:2)
I don't see why not. Yes, there's a lot of crap that gets posted here, but I have had very satisfying conversations here. I've made valuable professional contacts with others in my specialty. I've met people who have offered me money to express my views from a better kind of soapbox, I've met people who've offered me jobs, etc. That kind of thing doesn't happen often, but it does happen. Now that I'm less distracted by your flamage, I'm actually working in another window on a more serious post that I hope will rekindle a little bit of thoughtful discussion here. Or maybe not. It's a small investment, really, one that I don't mind making as a form of recreation, and every once in a while it does pay off.
Do you know about despair.com? One of their slogans is The common element in all of your dysfunctional relationships is you. Cute, huh? I believe I also quoted As ye sow, so shall ye reap to you earlier. In all seriousness, I'd like to suggest that if you never have rewarding interactions here like I do, then your attitude that this is "just a place to flame" might be the reason. If you'd do something besides flame once in a while, you might be surprised at the reaction you get.
Re:My GF did this (Score:2)
The AC might have been reading in non-threaded mode and seen the posts juxtaposed despite their being in different subthreads. It might also have been one of the people who follows my posts closely. There seem to be quite a few; sometimes they turn out to be people I know, but more often it's a mystery. Every time I get involved in a discussion a couple of "regulars" seem to pop up - sometimes on the opposite side, invariably well informed wrt my posting history. I guess being a prominent devil's advocate gets some people's attention. OTOH, it might just have been someone who saw a good flame war going on and tracked it for a while before jumping in. There are all sorts of possibilities.
In any case, you can rest assured that I don't need to resort to such trickery. I'm not exactly afraid to express controversial opinions right out in the open, or to flame people - as you have seen. Why would I bother posting as an AC? If you ask me, the idea sounds just a tad paranoid. Relax; I'm not that motivated to "get" you.
Re:Sounds Interesting, but ... (Score:2)
Unfortunately, writing unit tests for programs with asynchronous behavior is likely to be even more difficult than writing unit tests for programs with threads, so the objection re: XP is still valid.
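One hedged answer to the difficulty raised above: an asynchronous unit test can be made deterministic by blocking on a latch until the callback fires, instead of sleeping and hoping. Everything here is a made-up stand-in (`computeAsync` is not a real API), just an illustration of the pattern:

```java
// Testing asynchronous code deterministically with a CountDownLatch:
// the test thread waits for the callback rather than guessing a delay.

import java.util.concurrent.CountDownLatch;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicInteger;
import java.util.function.IntConsumer;

public class AsyncLatchTest {
    // Toy async worker: computes on a background thread, then calls back.
    static void computeAsync(int input, IntConsumer callback) {
        new Thread(() -> callback.accept(input * 2)).start();
    }

    // Run one async call and wait deterministically for its result.
    static int runOnce(int input) throws InterruptedException {
        CountDownLatch done = new CountDownLatch(1);
        AtomicInteger result = new AtomicInteger();
        computeAsync(input, v -> { result.set(v); done.countDown(); });
        // Bounded wait: fail loudly if the callback never arrives.
        if (!done.await(5, TimeUnit.SECONDS))
            throw new AssertionError("callback never ran");
        return result.get();
    }

    public static void main(String[] args) throws InterruptedException {
        if (runOnce(21) != 42)
            throw new AssertionError("wrong result");
        System.out.println("async test passed");
    }
}
```

This doesn't make concurrency testing easy -- it only makes one interaction at a time repeatable -- so the broader objection about non-deterministic thread interactions still stands.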
Re:Quit reading books, eh? (Score:2)
Too true, too true. I've been somewhat lucky that way, I guess. Part of it is that I'm a kernel hacker, and the growth in my specialty hasn't been an explosive as in yours. It's a real problem, when such growth occurs, to find enough mentors and examples and even experienced journeymen.
This is very likely to be a cultural difference between your specialty and mine, though I agree that good managers are probably the rarest beasts in the whole software-industry menagerie.
Yes. Eventually, skepticism segues into argument by false analogy. Short of that, though, the skeptics' concerns need to be addressed in a forthright manner if one hopes to win them over.
Absolutely, in the case of the former. Mostly, for the latter, because there is a grain of truth to that. Another poster pointed out the dangers of "cherry-picking" your case studies, rejecting any failures from the sample for specious reasons. I've already tried to point out that a methodology which depends on a uniform higher-than-average level of skill/motivation among developers can be considered fragile. By definition, most teams are not composed of such developers.
Exactly. For some teams XP might maximize the talent available. For other teams - I would say most teams - it won't, for reasons I've given elsewhere, and it may even be harmful.
Re:My GF did this (Score:2)
This isn't about my GF. I try not to talk about my GF in public because it really annoys my wife.
;-)
Re:My GF did this (Score:2)
Yes, that is probably true. However, I still worry about the fragility of a methodology that fails to provide benefits - or that actually makes things worse - if it's not followed precisely in every exacting detail (in short, religiously) by highly skilled people. When people are saying that A sucks, and A+B sucks too, and A+B+C sucks, but A+B+C+D will somehow turn out to be really cool, I think it's normal to be just a little bit skeptical. It's like continuing to buy a declining stock because "it has to rebound soon". Most often it's an exercise in denial, not synergy or honest evaluation of results. Maybe this is a false alarm, but don't expect me to adjust the sensitivity on my BS detector because it has served me well at its current setting.
It worked well for a hobby project. (Score:2)
Then we would get together in the evening and code (one watching over the shoulder of the other). I still can't believe how much we accomplished in something like 12 days. And it was a total boon to me, having someone say "if you do it that way, you'll mess up later because..."
Re:sounds like an old technique (Score:2)
Kaa's Law: In any sufficiently large group of people most are idiots.
Hunter's Corollary: All corporations are, by default, sufficiently large groups of people.
IMO, the problem has never been getting smart people to deliver good code. The real problem is getting the "less than gifted" to produce good (or adequate) code.
It is a rare organization that employs only guru-level employees. What about the remaining organizations? Are they doomed to failure?
To this, I point to Richard Gabriel's well-written essay "Worse is Better" (available at: this location [jwz.org]).
Cheers,
Slak
Re:New Age Programming B.S. (Score:2)
Re:Just say no (Score:2)
Yeah, god forbid that somebody might notice that you're not perfect. And god forbid that you should be confronted with the fact that your colleagues aren't either. Then those errors might get corrected right away, rather than getting incorporated into the final product.
I agree that pair programming takes some getting used to; at first, it feels pretty hard. Eventually, you get used to it. Either way, it produces better code. If you're mainly interested in putting out good work, you'll adjust. And in the meantime, don't forget to take breaks; you don't have to spend 8 hours in a row manacled to somebody.
Yes and no (Score:2)
One of the important elements of XP is indeed continuous unit testing, integrated into the process. Another is continuous customer involvement. A third is short release cycles. All of these drastically improve feedback, which is a vital underpinning to many of the other techniques.
But another vital component is a good team, one where there is mutual respect and a strong sense of shared goals. In the situation above, people were allegedly wrecking one another's code; this either suggests incompetence or internal political struggles that get expressed in the code. Either way, it's a massive management failure to let rogue workers run unchecked.
XP should absolutely not be used in a group of people that is not working as a team. XP is a process that requires people with a sense of community. 90% of the programmers I know have this. 99% of the good programmers I know have this. But if a manager tries to impose XP, she must do it only with people who can play well with others.
Re:He has a point (Score:2)
This is one of those things that is true in theory but false in practice.
Were all participants in a public discussion sane and reasonable people, you might be right. Had I infinite time and infinite patience, you would certainly be right. But none of these things are true.
In practice, if someone does have an axe to grind, they won't be convinced by any rational argument about the topic, as their motivation for continuing the argument has nothing to do with getting at the truth. Try arguing with a PR flack or a lawyer sometime; it is their job to argue a point until the end of time, twisting and dodging to avoid even getting in the vicinity of the truth.
I dunno about you, but I'm mortal. I have a limited amount of time to spend on this planet, and I have stuff to do. I'm glad to discuss thing with people who are, like me, willing to learn and change their viewpoints. But I have pretty much given up arguing with crazy people; I've only got 40 or 50 years left, and I have a lot to do.
In practice, you adapt (Score:2)
Doing things in group always requires a little compromise, so people who are unable to deal with that shouldn't be doing XP.
Re:Yes and no (Score:2)
Re:The good, the bad, and the ugly (Score:2)
I could see how minimalist design without pair-programming and without constant code review might be bad, because these are all essentially checks against errors being made by one programmer. If we are doing pair programming, then maybe code reviews aren't necessary. But, if you decide to do "minimalist/no design", but not pair programming, I can see how that would be trouble. The important thing is to recognize what problem each aspect is meant to solve, and make sure you have a process in place meant to deal with that problem.
Re:My GF did this (Score:2)
If the tests are automated, binary (pass or fail -- tertium non datur, no third option) and of sufficient scope, it takes real talent to keep a project in a constantly broken state.
Applying Occam's Razor, there ain't no believable way this project was doing proper regression testing.
Re:sounds like an old technique (Score:2)
It is simply naive to think that any group of programmers put together by a company can be made up entirely of talented programmers. All programming methodologies have to deal with this. But XP is likely the best suited to deal with the shortcomings of some programmers in a group, through pair programming.
When we did XP, the biggest problem was one guy who didn't like pair programming and none of us bothered to force him (or even try to force him).
Re:damn it (Score:2)
That's cheaper than any (and for many other books too) and significantly less commercial. I mean thinkgeek is just a sad way of targeting people who buy odd things. They browse around and find other people's products (like that PC window kit) and sell them at a markup.
-Daniel
Re:My GF did this (Score:2)
Well, you might not like it, but there it is. Your GF must have known it from the start (they did actually read up on XP first, didn't they?). According to Beck in "XP Explained", although parts of XP can be used in isolation, it really works properly when you do it all -- it is "greater than the sum of the parts" (chapter 23).
Chapter 25 ("When you shouldn't try XP") is also a must-read.
best wishes,
Mike.
Re:Nothing to see here... Go back to your lives... (Score:2)
Also, XP doesn't claim to be a strict process: it's a bunch of ideas about how you can implement a process. Most of the ideas are ones that seem like common sense when they are explained, but which are ignored by most engineering projects. Draw your own conclusions about whether or not it's a good idea to bundle them together and recommend people try them.
-- Andrem
I am Jack's completely dubious reaction. (Score:2)
You talkin' to me?!?
Care to enlighten me? (Score:2)
Re:Just say no (Score:2)
Do the Dew (Score:2)
Re:Tried over-the-shoulder (Score:2)
My (USD) $0.02 (Score:2)
I'll go out on a limb and say XP can only succeed under the following conditions:
1. Tightly Focused team, with no distractions from other priorities. (I typically work on 2-3 projects at a time, each at various states. My scheduling precludes any 'shared programming' time.)
2. High average team skill level. One of the touted advantages of XP is that it's supposed to raise the average skill level of the team. I'll argue the other way: the lowest-skilled team member will drag the team and the project down.
3. Well targeted goals. I think this should be a prereq for all projects, but then again I'd like space travel to start for civilians too...
XP is an interesting idea, but let's just focus on the core skills (not just programming, but project skills too), and not snowboard our way into the trees before we can stand upright.
Thoughts on "successful" coding... (Score:2)
Re:Logic (Score:2)
We're working on a very GUI-intensive project. What didn't work at all was JUnit testing. Their tips on how to write tests for a GUI just don't work. We got Silk Test to do our regression testing, but that doesn't run very often and is a pain in the ass. We've gotten to the point where someone just has to do a daily test by hand to see what's broken, unfortunately.
Selective pair programming, OTOH, is awesome. We don't do it for everything, but for the tougher bits we throw two people on it. We crank out better code in much less time.
Other techniques have worked or not worked to varying extents. It's worth mentioning we have a larger team than the book is probably assuming. It's also worth mentioning that where we've deviated from Extreme without trying it out first, we're usually wrong.
Overall, XP isn't a good book because it tells you exactly what to do; it's a good book because it tells you what to TRY, and you keep what works. The process improvements that result from the techniques that work are generally more than worth the time put into trying them out.
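One common way around the GUI-testing pain described above (a sketch under assumptions, not the poster's actual fix) is to pull the decision logic out of the widgets into a plain class that ordinary unit tests can drive, leaving only thin event wiring for manual or Silk-style testing. `OrderForm` and its rules here are hypothetical:

```java
// Hypothetical example: the validation logic behind a "Submit" button,
// extracted from the GUI so it can be unit tested without any GUI
// driver. The widget code would just call OrderForm.validate().

class OrderForm {
    // Returns an error message, or null if the form may be submitted.
    static String validate(String customer, int quantity) {
        if (customer == null || customer.trim().isEmpty())
            return "customer name is required";
        if (quantity < 1)
            return "quantity must be at least 1";
        return null;
    }
}

public class OrderFormTest {
    public static void main(String[] args) {
        if (OrderForm.validate("", 3) == null)
            throw new AssertionError("blank customer should fail");
        if (OrderForm.validate("Acme", 0) == null)
            throw new AssertionError("zero quantity should fail");
        if (OrderForm.validate("Acme", 3) != null)
            throw new AssertionError("valid input should pass");
        System.out.println("all form-logic tests pass");
    }
}
```

The thinner the untestable GUI layer, the less there is for the daily hand-testing pass to cover.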
Re:Extreme programming? (Score:2)
Re:Attitude (Score:2)
After deciding to back off physical agression for my programming, I channelled my XP agression into non-physical approaches; soon after, one of our directors resigned after "experiencing" a code review I coordinated - he said something about 'the breadth and extent of my profanity offended his religious beliefs in a very deep way'; which is funny, because I'm sure he was agnostic before the meeting... On the other hand, marketing gave me an award for the meeting, calling me "an inspiration for glibness" and thanked me for the (sizeable) additions to their dictionaries.
Things looked good until I made our largest client break down in tears while reviewing case studies over the phone. Meanwhile, the BOFH, [iinet.net.au] the only person with the will and ability to Pair Program with me, is moving on since he feels that my new attitude is infringing on his "turf". They keep "random" drug testing me, under the assumption that my XP energy has an illicit chemical component, but I'm clean.
With my XP Energy &tm, my productivity, and thus worth to the company, has skyrocketed; meanwhile, so has my company's liability. Things look tense. Could you recommend my next course of action??
Re:My GF did this (Score:2)
However, pretty much every day, someone stupid would destroy her code under the guise of XP's constant state of refactoring.
Do you mean they broke its design, its style, or its functionality?
If you're referring to the design, then it is possible that your girlfriend is from the "old skool" of OOP -- overdesign everything and make everything conform to a particular arbitrary set of rules.
I don't mean that to sound insulting -- I used to be the same way. My coworkers changing my code was a constant annoyance because they would stomp all over my little flower garden. In retrospect, however, many of the changes that were made were good -- very simple, very clean. I was determined that good code should be uber general.
If you are referring to the coding styles and such -- that means that the team wasn't doing XP, because XP involves a coding standard. When I first saw that having a coding standard was one of the XP practices [extremeprogramming.org], I thought it odd and overly specific. But it is necessary to fit in with the XP practice of "collective ownership" of all code.
If they broke the functionality... well, then why wasn't there a unit test written? The avid unit testing of XP is one of the most visibly enjoyable aspects of the methodology. Traditionally, writing regression tests is a burden.
Beck and others have noticed how much better/faster/more confidently code can be developed if the unit tests are written before the units are implemented. You then implement until all the tests pass, and then you're done.
However, even if you're doing test-first programming like that, breakage will occasionally occur. But when things do break, you have to try to figure out (and write!) a test that could've ensured that the breakage wouldn't have happened. Eventually, (or so I've heard), you learn how to test well enough that you don't have this problem nearly ever.
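The test-first loop described above can be sketched in a few lines. This is a minimal hypothetical example (the `word_count` unit and its test are made up for illustration): the test is written first, then just enough implementation is added to make it pass.

```python
# A minimal sketch of test-first programming (hypothetical example).
# Step 1: write the test for a unit that does not exist yet. It pins
# down the expected behaviour, including the edge cases.
def test_word_count():
    assert word_count("extreme programming explained") == 3
    assert word_count("") == 0
    assert word_count("  pair   programming ") == 2

# Step 2: implement just enough for the test to pass.
def word_count(text):
    """Count the whitespace-separated words in text."""
    return len(text.split())

# Step 3: run the test; when it passes, the unit is done.
test_word_count()
```

In a real project you would hang these tests on a framework such as JUnit (mentioned elsewhere in the thread) or Python's unittest, so the whole suite can run after every change.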
Remember what ol' Kent Beck says about XP: "80% of the benefit comes from following the last 20% of the practices". In other words, if you are doing nine out of the twelve practices, you're probably only seeing 20% of the benefit. As soon as you plug in those last three is as soon as you'll see the biggest benefit of the methodology.
Of course, my diagnosis could've been completely wrong here -- I'm just taking some guesses.
- Dr. Foo Barson
We need something (Score:2)
I am sure we have all experienced the horror stories of pointy haired managers in the real world. Maybe one of these days, Stephen King will even do a story on it. Those needing examples can inspect this site [rinkworks.com], and also check out this column [byte.com]. Although there are many other examples easy to find around the net.
Sadly, the thing that worries me is that it takes more than a haircut to change a pointy haired manager.
The art of managing your managers is an arcane art indeed.
XP works... (Score:2)
That said, XP is not for every programming environment. Layers of management will hinder the iterative process; programmers must be comfortable with "refactoring" their design. We have three developers for our vertical market toolkit, and we can work closely with QA and our customers.
XP assumes that you receive useful feedback from your users and QA. Perhaps our greatest struggle has been to get NEGATIVE feedback from our customers. We bring them down for a visit, spend a couple of days showing them what we have, and what we get are lots of suggestions for additional features, but few comments regarding the overall design and organization of the components. I hope this means we got it right in the first place...
:)
--
Scott Robert Ladd
Master of Complexity
Destroyer of Order and Chaos
Re:Tried over-the-shoulder (Score:2)
My programmers have tried it and noticed two phenomena:
Many of us multitask when we're working alone. Code a little, read slashdot a little, read email a little, code a little... My programmers find that when they pair program they don't switch to browsing the web or reading email when their partner is sitting there waiting to make progress on their task. It keeps them focused. Since it's active for both participants, however, they don't get fatigued by staying on one task for a long time.
Re:One problem (Score:2)
A lot of management types haven't got a clue when it comes to computers and programming. I've had one virtually freaking out at me because I refused to give an estimate on how long to solve a problem. Ya know, the kind where the 'Where's it going wrong' part could take between minutes and days to find.
Management likes nice, clear deadlines; they also like squeezing as much work out of programmers as possible - and unrealistic deadlines are one such way they do that.
Re:Methodology of the day (Score:2)
Software must compile with no errors before shipping.
All code that causes an error should be fixed before trying to compile again
A compiler must be used to compile the code
Any code which generates incorrect results must be fixed before shipping
All new code introduced into a project must meet these tough compiling guidelines before it's allowed into a shipping product.
Yes, these may seem radical to you, but remember: all new methods seem this way at the start. Trust me, use my methods and you will make great code.
Yours,
Bob
Re:Just say no (Score:2)
I have no problem with someone viewing my code. But as I write it? Over my literal shoulder? It's hard enough to think with phones ringing, loud conversations outside my cube and tech support questions every 10 minutes--I don't also need someone sitting behind me humming and clipping his nails.
You guys are all on a hair-trigger with the anti-machoism. I wasn't saying I didn't want anyone to see my code--I was saying I don't need company in an already small cubicle.
--
MailOne [openone.com]
Re:Just say no (Score:2)
And since we all know vitamin B is good, we'll take megadoses. Oops, megadoses of B are poisonous, we are now dead. More is not always better.
"Maybe when you grow up a bit you'll understand something about working with other people."
"Working with" other people is no problem. Enduring every little typo and thinko (not to mention spending hours at a time with a random coworker) is a totally different beast.
--
MailOne [openone.com]
Re:Good Grief Another Load of BS (Score:2)
Stroustrup,
Bentley,
etc...
Somebody PLEASE mod that up...
More on topic: It's apparent that some people don't see the value in formal methodologies. Not that you can blame them; some of the more popular OO methodologies, for example, are too cumbersome for any but the largest companies that can afford to subsidize the related management overhead.
OTOH, it strikes me as sheer laziness to dismiss all methodologies out-of-hand. I'll have to express doubt that the original poster in this thread has worked on a non-trivial project involving more than two coders - it's possible, but it seems unlikely. Even the most coherent thinkers, when working in teams, need to have some guidelines to keep the code readable, the object design coherent, and the work on track toward well-defined business goals.
Of course, that doesn't mean you have to Get Religion and follow Booch or XP or pure structured programming, but it does mean implementing and enforcing things like coding standards, code reviews, estimates, and some measurement of the progress against the goals.
OK,
- B
--
Re:He has a point (Score:2)
My personal motivations, real or imagined, are completely irrelevant to the debate. In fact, the link you provided has a perfect analogy to this situation. In that, a priest's anti-abortion arguments are dismissed out of hand because of his religious beliefs. When you attack a person's motivation for making an argument rather than the substance of his argument, it is an ad hominem attack.
You could make this whole debate thing more challenging by not providing links that disprove your arguments.
Re:He has a point (Score:2)
You do not understand the purpose of public debate. It is not to convince your opponent that he is wrong. It is to convince the observers that your opponent is wrong. Ad hominem attacks sway only weak-minded people.
But I have pretty much given up arguing with crazy people
And that, class, was an ad hominem attack.
Re: New Age Programming B.S. (Score:2)
That's what is called an "ad hominem" attack and I have better things to do than get involved in something that is already turning far too personal.
Thank you for responding to my original post and have a good day.
Re:New Age Programming B.S. (Score:2)
The problem with books like these is that I cannot tell at whom they are aimed. Someone who is an experienced software manager is unlikely to need an all-encompassing book like XP. It is more likely that they will need focused books on individual subjects like configuration management, needs analysis, and so forth. Nor are they written for the neophyte software manager, as they contain far too much for a novice to digest.
There's nothing in your reasoning that precludes it from generalising to all fields, and allowing us to reach the absurd conclusion that no one needs to read.
My reasoning is that people need to read books appropriate to their experience and responsibilities.
I can understand the folly of touting a book or methodology as an instant cure-all, but in the high tech industry, dismissing new ideas because you think that you already "know it all" is hardly a recipe for success.
I agree. What I have found to be fairly useless in my reading are books that attempt to reinvent entire software development methodologies rather than hone skills in focused areas.
Peace.
Re: New Age Programming B.S. (Score:2)
On what do you base this claim?
21 years of professional, successful embedded systems development and project management experience. On what do you base your claim that I am wrong?
Most embedded systems people I know would go about the two pretty much the same way. Of course, the functional requirements will be radically different, but the process will be very similar.
Please warn me if you ever get involved in avionics or any other form of embedded systems development where the safety of people is involved. The level of peer review, design review, testing, etc. is orders of magnitude different when working with embedded systems that can affect people's safety. Tools that you might trust for developing a programmable home thermostat would never pass muster for a heart monitor.
XP Explained suggests that if your manager is clueless, you simply adopt the XP methodology without telling him. You are a professional software programmer, aren't you? Do what you need to do to get the job done right.
So, without the approval of management, everyone in the software group should start arranging meetings with the customer to discuss requirements and implementation? This isn't Oz. The software methodology is normally set from above. Besides, I am not about to go out on a limb to implement a utopian methodology that I do not believe in.
If you managers have any understanding of the software development process, they will probably already have a development model in place that is appropriate for your project(s), customer(s), organization, and budget.
This may come as a shock to you, but that's really not the case.
My two decades plus of experience leads me to believe that you are simply wrong. Get some more years of experience under your belt before making blanket statements like that one.
One problem (Score:2)
"Wouldn't it be nice if customers, management, and programmers could work together to produce good software on schedule and under budget? "
This implies that you should constantly bring in projects under-budget. You know what happens when you do that? The budget for your next project gets cut. If you manage to come in under budget again, then it gets cut further. Also, management starts thinking that all of your estimates are off, and starts factoring this into their budget decisions. Just let things cost what they cost, for dog's sake! (This is all from actual work experience.) Ok, the anal-retentive rant is over... resume previous discussion.
You want corn? I give you corn.
Re:Extreme programming? (Score:2)
It just depends on the toughness of the words of your era.
I'm surprised they used "Extreme" though. I thought that died the day I saw a children's science site with at least 30 different topics, each one headed by "Extreme", as in "Extreme Animals", "Extreme Geometry", and "Extreme Lemon Battery".
Good for some situations (Score:2)
XP's placebo effect (Score:2)
no luck here (Score:3)
Pair programming is lame, IMHO. 2 people will tend to either wander away from the programming topic, as sitting and watching programming happen can never be as involving as actually programming. Also, 2 or more people tend to bicker over editor styles and code quirks (comment format and such) that get overlooked during a code review.
Refactoring is a good idea when used sparingly, I think. Everyone complains about cruft but rewriting things to make it go away is seen as wasteful. YMMV but we've refactored a few things and had it work for the better.
Still, I think the majority of the XP "movement" is an effort to change the status quo by being, well, extreme, like asking your mom if you can stay out till 3am if you only want to stay out till midnight.
Re:Quit reading books, eh? (Score:3)
That's a myth. Sure, there are a lot of people who think they're smarter than anyone who went before (or anyone else, period) but most of those people are wrong. What do you think, that somehow people from past generations were uniformly stupider than those in the current generation? Do you believe that you just happened to be lucky enough to be born during a period of rapid evolution for the human species?
No, 'fraid not. Older programmers are not, statistically speaking, stupider than younger ones. They may have learned programming when the state of the art was far behind where it is today, but their capacity for learning is no less. For every old dog I've known who couldn't or wouldn't learn new tricks, I've met at least two young pups who think they know everything there is to know after two years (or less) doing easy types of programming. The difference is that experience always counts for something. It may not count as much as a solid grounding in the latest tools and techniques, but everyone picks up some sort of useful tricks in a couple of decades of real work - in any field, not just programming.
The point, as so many other posters have made, is that it's at least as foolish for you to dismiss the concerns of the old hands as it would be for them to dismiss XP, without examining the essentials. They have good reasons to be skeptical. Anyone who has been in this business has seen people talk about all sorts of methodologies exactly the same way folks here are talking about XP now, and most of the talk has turned out to be just hot air. If you want to convince those people of how great XP is you'll have to address their concerns head-on...and you do need to convince them. Those skeptics are going to be in decision-making positions for a good many years yet, as you try to deploy XP, and if their concerns are not addressed it will be impossible to develop the spirit that is necessary for XP to work. As long as you keep making flippant remarks about learning capacity, neither they nor people your own age will want to get involved with a methodology that requires them to work closely with you.
BTW, since it does seem slightly relevant, I'm 35. This would seem to put me right between the "old dogs" and the "young pups".
I think there's a growing recognition in the industry that trying to turn good engineers into managers is a dumb idea. What you end up doing is depriving yourself of a good engineer's technical contributions, and getting yourself a medium-to-lousy manager in return (because the optimal temperaments for top-level developers and for managers are almost opposite). This recognition is a good thing.
This also brings me to another point I've been pondering. Our love for egalitarian solutions seems to have become an actual phobia about all forms of hierarchy. Just look at how client/server has given way to P2P. Much of the appeal of XP seems to be that the pair programming approach replaces a more traditional approach in which senior engineers look over the shoulders of junior ones to keep them out of the weeds. Just say those words a couple of times...senior, junior...in today's intellectual climate even the words themselves seem slightly taboo. People love the idea of doing away with such distinctions, even though they've proven very useful. I have to wonder whether this is in part because, with the explosion of the Internet and e-everything, so many shops just don't have any senior engineers and they have to find methodologies that work without them. To be fair, I also wonder if some of the resistance to methodologies like XP is rooted in senior engineers who thoroughly enjoy being "on top" in the traditional model and worry about losing that. I wonder, and I'm sure there's a grain of truth there, but I don't think it's as much of a factor as distrust of "magic bullet" salesmen. I still think that, in an organization that does have the benefit of experienced technical leadership, pairwise programming might be a costly solution to a minor or nonexistent problem.
Again, you may have an answer to this, but the objection - in your colleagues' minds, not necessarily in mine - must be overcome if you want XP to be adopted. You can't just say "read the book" either. You - not "you" specifically but "you" referring to all the XP proponents out there - need to learn how to be effective XP advocates in your own right, and in general the way people have been going about it on this thread so far is definitely not going to be very effective in the workplace. The first thing an effective advocate has to do is learn and accept and admit the limitations and drawbacks of the thing they're advocating, so that they don't immediately lose credibility with people who are accustomed to questioning things and looking for weaknesses. After all, questioning things and looking for weakness are traits we encourage in developers.
XP works great for us (Score:3)
The planning meetings, stories and tasks keep everyone on track. The pair programming makes the code better and teaches everyone about the code.
It is working out far better than our previous development methodology, which I will call "Extreme Failure".
Extreme programming? (Score:3)
This bugged me, too (Score:3)
Yeah, this bugged me, too.
The way they talk about it is in pretty absolute terms. Don't look ahead, just focus on the immediate concern, etc, etc. It sounds ridiculous, and when you take it literally, it is ridiculous.
But after a little while, I began to see the problem they were trying to address. It's very easy to say things like "well since we need to display one graph, we should build a whole graphing framework, as we'll probably need to do more graphs eventually." And then you go off and spend a month building a (very nice, very solid) graphing framework for your one graph. But it was fun and interesting and the Right Way To Do Graphing, so you feel good about your work. And once they ask for more graphs, you'll be golden.
But then it turns out that nobody ever asks for more graphs, and so your nice framework never gets used. Worse, people don't use the program much because it's missing features that they need, features that you could have added in that month. So the project gets cancelled and your code goes to waste.
So their point is not "don't do work that you need 15 minutes from now" but more like "don't do work that you won't need for a month." You just figure out what you need to get this version finished and code that. That doesn't mean writing shoddy code to get it out the door, of course; you write good, solid stuff, but nothing more than you need. When the next revision comes, then code that then.
If you haven't experienced refactoring, this still seems ridiculous. But if you already have a lot of unit tests and functional tests, you become confident that you can improve your design incrementally and still have everything work. Which means that painting yourself into a corner is not the problem that it was before.
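The safety net this describes can be shown in a small sketch. Everything here is hypothetical (the `total_price` functions and the cart data are invented for illustration): the regression test stays fixed while the implementation is rewritten underneath it, which is what makes the refactoring safe.

```python
# Hypothetical example: a regression test lets us refactor with confidence.

def total_price_v1(items):
    # Original, clumsy version: manual loop with an index.
    total = 0.0
    for i in range(len(items)):
        total = total + items[i]["price"] * items[i]["qty"]
    return total

def total_price_v2(items):
    # Refactored version: same behaviour, simpler expression.
    return sum(item["price"] * item["qty"] for item in items)

def regression_test(fn):
    # The test pins down the behaviour; it does not care which
    # implementation is behind it.
    cart = [{"price": 2.0, "qty": 3}, {"price": 1.5, "qty": 2}]
    assert fn(cart) == 9.0
    assert fn([]) == 0.0

# Both versions pass the same test, so the redesign is safe to ship.
regression_test(total_price_v1)
regression_test(total_price_v2)
```

With a suite like this in place, incremental design changes stop being a leap of faith: if the tests still pass, you haven't painted yourself into a corner.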
Personally, I am still getting used to this. But so far it has worked pretty well.
Re:My (USD) $0.02 (Score:3)
That's not true in my experience. It does require programmers to have a semi-sane understanding of how good they are, though. And it requires that everybody have a fair bit of team spirit. Novices must be smart enough to know that they should not be doing major redesign without talking it over with the team first. (And really, the same goes for the experts.)
1. Tightly Focused team, with no distractions from other priorities. (I typically work on 2-3 projects at a time, each at various states. My scheduling precludes any 'shared programming' time.)
This isn't necessarily true, but XP does require a lot of communication between developers on a project. Pair programming and a shared project room are good ways to do this. Solo developers communicating by email is a very bad way to do it; if you're in that condition, XP is a bad choice. So if you have to work on multiple projects in an XP context, you have to rigidly schedule things so that, e.g., all the work on Project A happens in the mornings, and all work on Project B happens in the afternoons.
I'll argue the other way: The lowest skilled team member will drag the team and the project down.
It depends on what you mean by lowest skilled. If you mean "fresh out of school with little experience", then you try to make sure that the novice gets paired up with senior developers for a while. At first, they'll have little to contribute (e.g., "You missed a semicolon") but quickly they become more productive.
If, on the other hand, you mean "never likely to be a good programmer", then yes, they are bad for the project and should be fired. But even in this case, pair programming, good unit tests, shared code ownership, and strong version control limit the damage one rogue idiot can do much more than on a project that doesn't use these techniques.
3. Well targeted goals.
I agree that this is good to have, but it's much less of a problem on an XP project than, say, a traditional Waterfall lifecycle project. A frequent release schedule (XP recommends every 1-3 weeks) is the key. Everybody gets together and sets the goals for the first version. In a couple of weeks, everybody gets together and says "Oh! Now we see that we should really be going Y instead of X" and so you code the next revision that way. When goals are poorly understood, Evolutionary Delivery lifecycles are great for reducing risk.
Re:New Age Programming B.S. (Score:3)
Beck's book is aimed at all members of a development team, for the simple fact that XP is a team-oriented approach. It covers a broad range of areas because he feels that the value comes not in the individual pieces but in the way they reinforce one another. E.g., refactoring without good unit tests is dangerous. Common code ownership is very risky without the good communication promoted by pair programming. Limited planning is very risky without short release cycles. And so on.
Someone who is an experienced software manager is unlikely to need an all-encompassing book like XP.
There are two problems with this statement. One is the implied suggestion that an overwhelming majority of software managers are so experienced that they know the basics solidly. The second is the assumption that there is some all-encompassing framework so well established that all good managers share the same view of software development.
The truth is that neither is true. I'm not sure where you work, but I've been doing development for 15 years, and I'd say that good project managers, well versed in all important aspects of developing software are the exception, not the rule.
And as to the notion that the foundations of our discipline are so well-settled and obvious that a book with a broad view of the process is useless, I can only laugh. As a developer and the son of a developer, I can assure you that this isn't true. The goals, resources, methods, tools, and methodologies have all changed relentlessly, and that won't stop until Moore's law gives out.
If you've got the twenty-plus years of experience that you claim, you know that almost everything thought of as "fundamental" thirty years ago is different now, and you need only pick up a few CS textbooks to verify that. Even today, a large percentage of software development projects fail utterly, suggesting that, as a profession, we don't really have our acts together.
So bravo for books that are broad in scope! XP may or may not suit your projects (and indeed, I would be surprised to see it working well in an embedded-systems context), but at least Beck took a broad look at the process and challenged some sacred cows. I don't buy all of what he's selling, but I found it valuable to read his book.
He has a point (Score:3)
Actually, that's not an ad hominem attack. An ad hominem attack [nizkor.org] is one where you attack the arguer on the basis of something both personal and irrelevant. For example, calling you ugly and smelly would be an ad hominem attack. The claim that you have an axe to grind, however, may be personal but it's sure not irrelevant.
And really, the guy has a point. Your first post in this thread [slashdot.org] is titled "New Age Programming B.S."; it's a full-tilt screed against a book you obviously haven't read. Why would you do that?
Is it because you know the author to be a charlatan and a cad? Is it because, although you didn't read the book on XP, you did try its techniques faithfully and found them not to work? Is it because you are the author of a study on XP that shows it to be a fraud? If so, you never mentioned it.
So when somebody posts a rant about a book that he hasn't read and doesn't back his frothing up with some other justification, it is reasonable to presume that you indeed do have an axe to grind.
Indeed, I find it really weird that of all the negative posts in this thread, almost none of them are from people who have actually tried the XP methods on a project. It seems like there's a lot of axe-grinding going on, although that's nothing new for slashdot.
Re:My GF did this (Score:3)
Sure, he did make that assumption backwards from what you said - but I fail to see how "things were constantly being broken" and "they adhered to unit testing" are compatible.
Not that I know shit. Shrug.
--
Re:Just say no (Score:3)
If your partner is humming and clipping his nails, he's being a lousy partner. The 'watching' half of the pair is not just sitting there passively; he should be actively participating, keeping track of what has to be done next to keep the rest of the system consistent with the changes you're making now, etc.
You mention trouble with distractions in the office. This is actually one of the values of pair programming that isn't immediately obvious: It's actually a great focuser and shield against distractions. Coworkers who want to pull you into a discussion about last night's must-see-TV are far less likely to try to do so when you're working with a partner. And you're far more likely to be disciplined about ignoring these things, yourself. It's also helped me that I treat pair programming sessions every bit as formally as I do meetings: My partner and I set a time, and come up with an agenda in advance of the actual session.
You guys are all on a hair-trigger with the anti-machoism.
I don't think it qualifies as being in hair-trigger mode to respond strongly to someone who proclaims that "the only programmer I'll allow to watch over my shoulder is a dead programmer". If you want calm, measured responses, you should probably speak in a calm, measured manner yourself.
Survivor programming (Score:3)
Re:Extreme programming? (Score:3)
LOTS ON WIKI (Score:3)
There is *A LOT* of discussion on Extreme Programming over at wiki... [c2.com]
If you aren't already familiar with wiki - do so. Comments posted there generally have *A LOT* more relevance than most of the whiny, dumb-ass trolls you see all too often around here!
It can sometimes be difficult to navigate, and sometimes the concepts flow as smoothly as a pile of boulders - but it's definitely something to check out if you haven't already...
Seems like a marketing move to me. (Score:3)
I've read the XP Explained book by Kent Beck, and I can really advise everybody to get it. That one was a fast read; in fact it reads so fast that you stop thinking about the concepts that are presented, because they seem so natural to do (in a perfect world, of course). Reading the tiny review of its successor, this book seems like a poor sequel to the original. I'll try to fetch it from my local library when it becomes available, because those anecdotes are what really stick in your mind when it comes to remembering the important stuff, but I'm not sure if it can serve me. I'm all for the practical nature of things, and XP Explained's theoretical principles were even boring at times, so revisiting the same deal in a practical context might be interesting to read, but maybe not to buy.
Besides, I think that to really succeed in working with XP, you have to reinvent it yourself in your own situation with your own needs and peculiarities. This book can surely give you some bright ideas, but it's not something you should depend on.
Sounds Interesting, but ... (Score:3)
Re:One problem (Score:3)
You want corn? I give you corn.
My GF did this (Score:3)
Basically, everyone walked over everyone else's code, things never once worked properly. Without any sense of forward thought, basically the project went to a standstill.
The situation wasn't helped by the fact that one or two incompetent programmers on the team destroy the whole project. My GF was one of the few intelligent coders on the team, largely the reason that any forward progress was made (as vague as progress was). However, pretty much every day, someone stupid would destroy her code under the guise of XP's constant state of refactoring.
It caused numerous conflicts between the programmers, and a few people quit, including my GF.
Re:New Age Programming B.S. (Score:4)
It's so convenient to paint things in black and white. Either management are incurably ignorant, hence nothing will save them, or they are divine oracles, and they already know everything. The ignorant are too stupid to learn, the wise already know everything, therefore learning new things is pointless.
There's nothing in your reasoning that precludes it from generalising to all fields, and allowing us to reach the absurd conclusion that no one needs to read.
I can understand the folly of touting a book or methodology as an instant cure-all, but in the high tech industry, dismissing new ideas because you think that you already "know it all" is hardly a recipe for success.
Re:Methodology of the day (Score:4)
The first big unique thing about XP is its rejection of the "exponential cost curve" taught in most software engineering classes. Instead of trying to anticipate future requirements and design accordingly, XP advocates keeping the code simple, flexible, and solidly regression-testable, so that the cost of changing code is always cheap. In my experience, this has proven a really good idea.
The other unique thing about XP is that it gives structural support to the so-called "good habits" of software engineering, rather than relying on exceptional programmers to (hopefully) implement them. Over-the-shoulder coding is a wonderful technique that I at least hadn't tried before (your coding buddy doesn't even have to be a particularly strong coder, and it still works very well). The continuous test cycles and writing tests before code are not new, but the discipline of implementing them is most welcome. Short iterations with a focus on business value are practically unheard of -- instead, the focus is usually on getting a complete spec before moving to a rigid (and thus usually brittle) implementation, etc.
Of course XP isn't unique in one sense: some of the best programmers I've known already use these techniques in their own work. But, much as How to Win Friends and Influence People doesn't tell you anything you didn't already know, it's useful to have it all in one place.
Take a closer look at XP if you haven't; what your post points out to me is that, like other fad processes, people will probably rush to implement XP, miss the point, and denounce it as a fad instead of looking for the core contributions to the art.
Clearly, you didn't read it (Score:4)
Beck makes pretty clear that the XP model is not for everybody. He even has a chapter titled When You Shouldn't Try XP.
He also makes many of the same points you make; especially that understanding customer needs is crucial. He goes further to say that the only way to understand customer needs is to involve them (or, as you say, their representatives) closely in the development process, so that you have lots of feedback.
Because XP has a lot in common with the Evolutionary Delivery lifecycle, it also substantially reduces budget-related risk. Since you are regularly producing high-quality working versions that have successively more features, you always have something to deliver.
So I agree that 'silver bullet' solutions are bad but I disagree that XP is such a solution. It's just a collection of development techniques that are, for the most part, entirely uncontroversial. The only new thing is the emphasis on combining them in a way that they reinforce one another.
I agree that if you have an evil (or willfully clueless) manager, no book will help. But there are a lot of people who manage software projects (or who manage people who manage projects) that are ignorant but willing to learn, especially if it means the difference between success and failure. And for those people, sticking a few books under their noses will help a great deal. For these managers, I give away copies of Rapid Development [fatbrain.com] or The Software Project Survival Guide [fatbrain.com] depending on how technical they are. And if your project is suited to an XP approach, then a book explaining the business case for using XP will help them understand why they should back you, rather than meddling with things they don't get.
Attitude (Score:4)
You don't code until you need to code it. (No, that doesn't mean you don't do a proper design.) You refactor code and design as soon as you run into delays or problems. This way you avoid cruft that builds up. You are under constant code review (when programming in pairs). You get exposed to the whole system and not just one small section. Interestingly, many of the XP practices go against what traditional SE teaches. - has a good overview of the processes and atmosphere that XP creates. Check out the XP Practices in particular.
I've tried implementing some of these practices myself in personal programming. I can tell you it takes more energy but so far I like what I'm seeing from my progress. I'd like to see how effective pair programming can be as well. I've done a bit of this at work but only at weekly code reviews.
Now I don't think it's the silver bullet or holy grail but I'll try anything to hasten the development process while lowering defect count. I don't try to do everything that XP advocates.
Anyways, if there's one thing you should probably get from XP it's an aggressive mentality when programming.
sounds like an old technique (Score:4)
New Age Programming B.S. (Score:4)
The conclusion: If you work for clueless managers, sticking books (like the one reviewed here) under their noses is not going to fix the problem. If your managers have any understanding of the software development process, they will probably already have a development model in place that is appropriate for your project(s), customer(s), organization, and budget.
Re:My GF did this (Score:5)
What a convenient and obnoxious rationalization. This idea that if you're not doing all of XP you can expect failure is offensive enough. Besides the Inquisitorial insistence on total adherence to every minor tenet of The One True Faith, if a methodology is so fragile that the tiniest deviation leads to failure then that methodology is worthless in the real world.
What's worse than that, though, is the way that, without knowing anything about what this fellow's GF's company did, you reason backward from the fact that they failed to the conclusion that they must not have been doing XP "properly". That's just nauseating.
FYI, we just had a fairly detailed discussion of XP here last week, in the Making Software Suck Less [slashdot.org] thread. Of particular interest might be my XP is no panacea [slashdot.org] post, which contains a lot that would be directly on-topic in this thread (but I prefer linking to copying).
The good, the bad, and the ugly (Score:5)
Logic (Score:5)
1) No method is foolproof.
2) Every method has advantages.
3) Every method has disadvantages.
4) Good talent and a reasonable method work.
5) Bad talent or a poor method will not work.
I used some pair programming with some junior programmers to rapidly develop a database interface. We had code reviews and detailed coding standards, including coded examples. All code had to have a testing module that tests all aspects of the interface.
It worked well, and the quality of the code was good. We ran a little late, but that was time spent getting the programmers up to speed with the method. I think the code quality would have been only average if the programmers had been working alone. The bug level using this method quickly dropped to zero before this sub-system was integrated with the front end. I found that refreshing, compared to traditional methods.
I don't think XP will work for prima-donna or insecure people. It is just too exposed for those types. You have to be willing to let other people understand your code and style. You also have to accept they may have some constructive criticism. For example the junior programmers made some darn good suggestions on the example code I had created.
I will continue to dabble in XP as so far it has shown good results.
If it works for you, use it, if not find a method that does work.
Any change will be met with resistance from those who feel threatened by it. If you feel threatened by something try to dig deep and understand it and why you feel threatened. Don't just discount it as a silly sound-bite.
Recently I got major help from a tutor, but I forgot some of the stuff he said, and (unlike next time, tomorrow) I didn't have a recorder on me at the time, so I can't go back and refer to what he was saying -_- .
I'd just like explanations so I can understand this. See, even though my tutor graciously commented to the side what each line of code does, I still don't think I know why each statement is there and how it works.
This is the nearly done code.
import java.util.Scanner; // brings in library to read keyboard input

public class Lab1 {

    /**
     * @param args the command line arguments
     */
    public static void main(String[] args) {
        Scanner usernum = new Scanner(System.in); // create object for keyboard input
        System.out.print("enter number of groups "); // ask user for a number
        int userInput = usernum.nextInt(); // takes keyboard input and puts it into variable userInput
        System.out.print("ASCII 1-127\n"); // print title

        for (int groups = 0; groups <= 127; groups = groups + 1) { // loop over the values 0 through 127
            String dec = Integer.toString(groups);    // convert to dec
            String hex = Integer.toHexString(groups); // convert dec to hex
            char ascii = (char) groups;               // convert dec to ascii
            System.out.print(dec + " " + hex + " " + ascii + " "); // print on screen
            if ((groups % userInput) == 0) { // break the line after every userInput groups
                System.out.print("\n");
            }
        }
        System.out.print("\n");
    }
}
This is the nearly done code. What questions are left? There is a step that says "each group of small columns will be 8 chars wide and properly aligned for values less than 6". What does that mean when you've got something like this?
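Reading that step as plain formatting arithmetic may help: each group (decimal, hex, character) is padded out to a fixed total of 8 characters, so the columns line up no matter how wide the decimal value is. Here is the idea sketched in Python for brevity (in Java, String.format with width specifiers does the same job); the exact field widths are my own guess at what the assignment intends:

```python
def ascii_group(n):
    # One group: decimal left-aligned in a 3-char field, hex in a 3-char
    # field, then the character and a trailing space.
    # 3 + 3 + 1 + 1 = 8 characters in every group, so columns stay aligned.
    return f"{n:<3d}{n:<3x}{chr(n)} "

# Five aligned groups on one row (A through E):
print("".join(ascii_group(n) for n in range(65, 70)))
```

The key point is that every group has the same total width, so wherever a line break lands, the columns below line up with the columns above.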
If you've found this article useful, consider buying me a cup of coffee.
This article was originally published on my blog.

An API, or Application Programming Interface, is a way for a program to communicate with other programs or services.
A great metaphor I came across (don't remember the original source) was thinking of an API as a waiter at a restaurant.
So you, the customer, talk to a waiter to get your food. The waiter then talks to the chef who actually makes your food. The chef gives the food to the waiter. Finally, the waiter then gives the food to you.
Unpacking this metaphor, the user (customer) sends a request to an API (waiter). The API (waiter) then talks to the service you want to reach, usually a database (chef). That database (chef) then sends data (food) back to the API (waiter). And finally, the API (waiter) then gives you back data (food).
Great! Now that we know what an API is and have a general understanding of how it works, we can now move on to using one.
Acquiring the Jokes
To actually get the jokes, we'll have to use the Official Joke API. This API is extremely easy to use, you just use this link and every time you call it, it will return a random joke in JSON format. Simple!
Creating a Discord Application
To create a Discord bot and invite it into your own Discord server, you can follow these steps here.
You can name it whatever you'd like but I'll just name it "Jokes-Bot."
Also, under the Settings tab to the left, go to "OAuth2" > "Scopes" and tick the bot option. Then in the same page, go to "Bot Permissions" and tick the Administrator box.
Now copy the link from the "Scopes" section and paste it onto your browser to add the bot to your Discord server.
Cool! Now just take a note of the Discord bot token in the "Bot" tab because we'll be needing it later.
Downloading the Discord.py Library
We'll be using the Discord.py library to create our Discord bot.
To install this library onto your machine you can follow these steps, or simply run the following command on the command line.
python3 -m pip install -U discord.py
If you don't already have Python and pip installed you can go here to install them.
Installing the Requests Module
Finally we need to download and install one more thing. To actually send requests and receive data from an API using Python we'll need to download and install the python module Requests.
Simply type in the following command onto the command line to install this module.
pip3 install requests
Now it's finally the time you've all been waiting for, the code.
Testing the Bot
To test that our bot actually works, copy and paste the following code into your favorite text editor.
To learn more about what this code does go here.
Now insert your Discord bot token into the last line where it says "your token here." Save your python file as "discord_joke_bot.py" and then input the following command onto the terminal to run and test your bot.
python3 discord_joke_bot.py
If all went well you should see something like the following:
Now we can type the $hello command into our Discord server, and the bot should respond back with "Hello!"
Cool so the bot works.
Also, whenever we make changes to the bot, you'll have to restart the python script by running it again.
Making API Requests With Python
Now let's take a look at how to actually make API requests with Python and the Requests module.
The following code is what we'll be using to make requests to the Joke API.
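The snippet itself is missing from this copy. Below is a reconstruction that matches the line-by-line walkthrough that follows (line 1 is the import, line 3 the URL, lines 6 and 13 the two functions). The endpoint URL is the Official Joke API's random-joke route, which I'm assuming is the link mentioned earlier; treat the names and layout as a sketch rather than the author's exact file:

```python
import requests

url = "https://official-joke-api.appspot.com/random_joke"


def check_valid_status_code(request):
    # A 200 status code means the call succeeded.
    if request.status_code == 200:
        return request.json()
    return False


def get_joke(url):
    request = requests.get(url)
    data = check_valid_status_code(request)
    return data
```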
Let me explain what's going on here.
On line 1 we import the Requests module.
Line 3 is the URL we'll be using and I just defined it in the global scope.
Now on line 6, I define a function named "check_valid_status_code" that will check if the status code from the API is successful or not.
If the status code is equal to 200, meaning it's a successful call, it will return back the request in JSON format. If it's unsuccessful, it will just return false.
Most of the time, whenever you make an API call, the API will return a status code. These status codes tell you whether an API call was successful or not. To learn more about status codes check out this article.
Now on line 13 I define another function named "get_joke." This function, as implied by its name will get a joke.
First it makes a GET request to the URL we pass it. Then it will call "check_valid_status_code" and store whatever it returns into a variable named "data." Finally it returns the variable "data."
Adding the $joke Command
We can finally move on to actually adding in our "$joke" command.
First let's import "joke_api.py" into the "discord_joke_bot.py" file with the following line:
import joke_api
Now, in the file named "discord_joke_bot.py," we can simply replace the if statement containing the "$hello" parameter on line 16 with the following if statement.
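The replacement if statement isn't shown in this copy. Reconstructed from the walkthrough that follows, it looks like the commented fragment below; since the fragment only runs inside the on_message handler, the success/failure logic is also pulled out into a small helper (a name of my own invention) so it can be exercised directly:

```python
# Reconstruction of the fragment, which lives inside the on_message handler:
#
#     if message.content.startswith('$joke'):
#         joke = joke_api.get_joke(joke_api.url)
#
#         if joke == False:
#             await message.channel.send("Error, try again!")
#         else:
#             await message.channel.send(joke['setup'] + '\n' + joke['punchline'])


def build_joke_reply(joke):
    # joke is False when the API call failed, otherwise a dict with
    # 'id', 'type', 'setup' and 'punchline' keys.
    if joke == False:
        return "Error, try again!"
    return joke['setup'] + '\n' + joke['punchline']
```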
Let me explain what's going on here.
So first on line 1, we check if the message we type into Discord starts with the "$joke" string.
Then on line 2 we call "get_joke" function from the "joke_api.py" file and save it into a variable named "joke."
The variable "joke" stores a dictionary with the following keys: id, type, setup, and punchline.
The only dictionary keys we are interested in are "setup" and "punchline."
Now on line 4 we check if the variable "joke" is "False." If so we just return an error message.
Else, if it's not false we return the joke setup with "joke['setup']" adding a newline character, '\n' and then adding the joke punchline with "joke['punchline']."
Now if we test this in our Discord server with our bot we should receive a joke from our bot.
Great! It works.
Complete Code
In total we should have two files. Namely the "discord_joke_bot.py" file and the "joke_api.py" file.
The following is the "discord_joke_bot.py" file.
The following is the "joke_api.py" file.
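This listing is likewise absent from this copy; the sketch below is consistent with the earlier walkthrough of "joke_api.py" (endpoint and names as described there, so treat it as a reconstruction rather than the author's exact file):

```python
import requests

url = "https://official-joke-api.appspot.com/random_joke"


def check_valid_status_code(request):
    # A 200 status code means the call succeeded.
    if request.status_code == 200:
        return request.json()
    return False


def get_joke(url):
    request = requests.get(url)
    data = check_valid_status_code(request)
    return data
```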
Congratulations, you just created your first Discord bot using Python!
You can also find the complete code over at my Github.
Resources
Here is a list of resources that may be helpful:
Discussion (2)
Nice article and it is easy to follow the instruction. thanks for the knowledge~
It's great that Python also has the capability to send requests. Nice article!
Solution 1
def feel_the_power(initial_number, power):
    result = 1
    power_count = 0
    while power_count < power:
        result *= initial_number
        power_count += 1
    return result
The Power of While Loops
Use a while loop to complete the function feel_the_power so that it takes initial_number and multiplies it by itself power number of times.
Examples:
>>> feel_the_power(3, 0)
1
>>> feel_the_power(3, 1)
3
>>> feel_the_power(3, 2)
9
>>> feel_the_power(3, 3)
27
Hint: You'll need additional variables to keep track of both the result and how many times you've multiplied it by itself.
Hint 2: Remember, with multiplication, you start your result at 1 instead of 0 because if you multiply by 0 you get 0.
Hint 3: NO INFINITE LOOPS!
Test Cases
test while loop power zero - Run Test
def test_while_loop_power_zero():
    assert feel_the_power(3, 0) == 1
test while loop power one - Run Test
def test_while_loop_power_one():
    assert feel_the_power(3, 1) == 3
Verify PCH works if variant dir has spaces in its name
"""
-import sys
import time
import TestSCons
test = TestSCons.TestSCons(match = TestSCons.match_re)
-if sys.platform != 'win32':
- msg = "Skipping Visual C/C++ test on non-Windows platform '%s'\n" % sys.platform
- test.skip_test(msg)
-
-import SCons.Tool.MSCommon as msc
-if not msc.msvc_exists():
- msg = "No MSVC toolchain found...skipping test\n"
+test.skip_if_not_msvc()
test.write('Main.cpp', """\
#include "Precompiled.h" | https://bitbucket.org/russel/scons/diff/test/MSVC/pch-spaces-subdir.py?diff2=c2bf08acde38&at=default | CC-MAIN-2015-48 | refinedweb | 102 | 64.57 |
Occasionally I make apps that create bitmaps and save them. To do so you need to use an encoder to turn the bitmapdata bits into a byte array that can be saved in some image format. AS3 has a PNGEncoder class, but the main problem with it is that it’s pretty slow. I’m saving a 4000×4000 bitmap and it takes sometimes well over 30 seconds, during which time, the app completely freezes up.
Some time last year I was looking around to see if anyone had created an asynchronous version, i.e. one where you could tell it to encode your bitmap and it would do a little bit each frame and tell you when it was done. I wasn’t able to find one. At the time, I took a quick look at the idea of converting the PNGEncoder to do this, but never followed through. Yesterday I started an app that really needed this functionality, and I took another look at it.
Basically the encoder writes some header stuff into a byte array, then loops through from y = 0 to y = height in an outer loop, and from x = 0 to x = width in an inner loop, where it deals with each pixel, writing it to the byte array. Finally, it sets a few more bits and ends off.
What I did was extract the inner loop into its own method, writeRow. And the stuff after the loop into a method called completeWrite. This required making a lot of local variables into class variables. Finally, I converted the outer loop into an onEnterFrame function that listens to the ENTER_FRAME event of a Sprite that I create for no other purpose than to have an ENTER_FRAME event. It’s pretty ugly, I know, but it seems the enter frame got much better performance than a timer. With a timer, whatever your delay is will be inserted between each loop, whereas the enter frame will run ASAP. You could make a really small delay, like 1 millisecond, but that seems like it’s open to some bad side effects. I felt more comfortable with the magic sprite.
Then I found that rather than doing just a single row on each frame, I got better results if I did a chunk of rows. I’m getting pretty good results at 20 rows at a time for a 4000×4000 bitmap, but I didn’t do any kind of benchmarking or testing. This could (should) probably be exposed as a settable parameter.
Anyway, each time it encodes a chunk of rows, it dispatches a progress event, and when it’s done, it dispatches a complete event. I also made a progress property that is just the current row divided by the total height. And of course a png property that lets you get at the byte array when it’s complete.
Originally, I tried extending the original PNGEncoder class and changing the parts I needed to. But everything in there is private, and I needed it to extend EventDispatcher to be able to dispatch events. So it’s a pure copy, paste, and change job.
////////////////////////////////////////////////////////////////////////////////
//
//  ADOBE SYSTEMS INCORPORATED
//  Copyright 2007 Adobe Systems Incorporated
//  All Rights Reserved.
//
//  NOTICE: Adobe permits you to use, modify, and distribute this file
//  in accordance with the terms of the license agreement accompanying it.
//
////////////////////////////////////////////////////////////////////////////////

package mx.graphics.codec
{

import flash.display.BitmapData;
import flash.display.Sprite;
import flash.events.Event;
import flash.events.EventDispatcher;
import flash.events.ProgressEvent;
import flash.utils.ByteArray;

/**
 *  The PNGEncoderAsync class converts raw bitmap images into encoded
 *  images using Portable Network Graphics (PNG) lossless compression,
 *  a chunk of rows at a time, dispatching PROGRESS events along the way
 *  and a COMPLETE event when the byte array is ready.
 *
 *  For the PNG specification, see http://www.w3.org/TR/PNG/.
 */
public class PNGEncoderAsync extends EventDispatcher
{
    // include "../../core/Version.as";

    //--------------------------------------------------------------------------
    //
    //  Class constants
    //
    //--------------------------------------------------------------------------

    /**
     *  @private
     *  The MIME type for a PNG image.
     */
    private static const CONTENT_TYPE:String = "image/png";

    //--------------------------------------------------------------------------
    //
    //  Constructor
    //
    //--------------------------------------------------------------------------

    public function PNGEncoderAsync()
    {
        super();
        initializeCRCTable();
    }

    //--------------------------------------------------------------------------
    //
    //  Variables
    //
    //--------------------------------------------------------------------------

    /**
     *  @private
     *  Used for computing the cyclic redundancy checksum
     *  at the end of each chunk.
     */
    private var crcTable:Array;

    private var IDAT:ByteArray;
    private var sourceBitmapData:BitmapData;
    private var sourceByteArray:ByteArray;
    private var transparent:Boolean;
    private var width:int;
    private var height:int;
    private var y:int;
    private var _png:ByteArray;
    private var sprite:Sprite;

    //--------------------------------------------------------------------------
    //
    //  Properties
    //
    //--------------------------------------------------------------------------

    //----------------------------------
    //  contentType
    //----------------------------------

    /**
     *  The MIME type for the PNG encoded image.
     *  The value is <code>"image/png"</code>.
     */
    public function get contentType():String
    {
        return CONTENT_TYPE;
    }

    //--------------------------------------------------------------------------
    //
    //  Methods
    //
    //--------------------------------------------------------------------------

    /**
     *  Converts the pixels of a BitmapData object
     *  to a PNG-encoded ByteArray object.
     *  Listen for Event.COMPLETE, then read the <code>png</code>
     *  property to get the result.
     *
     *  @param bitmapData The input BitmapData object.
     */
    public function encode(bitmapData:BitmapData):void
    {
        internalEncode(bitmapData, bitmapData.width, bitmapData.height,
                       bitmapData.transparent);
    }

    /**
     *  Converts a ByteArray object containing raw pixels
     *  in 32-bit ARGB (Alpha, Red, Green, Blue) format
     *  to a PNG-encoded ByteArray object.
     *  The original ByteArray is left unchanged.
     *  The ByteArray should contain <code>4 * width * height</code> bytes:
     *  each pixel is represented by 4 bytes, in the order ARGB, with the
     *  first four bytes representing the top-left pixel of the image and
     *  each row following the previous one without any padding.
     *
     *  @param byteArray The input ByteArray object containing raw pixels.
     *  @param width The width of the input image, in pixels.
     *  @param height The height of the input image, in pixels.
     *  @param transparent If <code>false</code>, alpha channel information
     *  is ignored, but you still must represent each pixel
     *  as four bytes in ARGB format.
     */
    public function encodeByteArray(byteArray:ByteArray, width:int,
                                    height:int, transparent:Boolean = true):void
    {
        internalEncode(byteArray, width, height, transparent);
    }

    /**
     *  @private
     */
    private function initializeCRCTable():void
    {
        crcTable = [];

        for (var n:uint = 0; n < 256; n++)
        {
            var c:uint = n;
            for (var k:uint = 0; k < 8; k++)
            {
                if (c & 1)
                    c = uint(uint(0xedb88320) ^ uint(c >>> 1));
                else
                    c = uint(c >>> 1);
            }
            crcTable[n] = c;
        }
    }

    /**
     *  @private
     */
    private function internalEncode(source:Object, width:int, height:int,
                                    transparent:Boolean = true):void
    {
        // The source is either a BitmapData or a ByteArray.
        sourceBitmapData = source as BitmapData;
        sourceByteArray = source as ByteArray;
        this.transparent = transparent;
        this.width = width;
        this.height = height;

        if (sourceByteArray)
            sourceByteArray.position = 0;

        // Create output byte array and write the PNG signature.
        _png = new ByteArray();
        _png.writeUnsignedInt(0x89504E47);
        _png.writeUnsignedInt(0x0D0A1A0A);

        // Build IHDR chunk.
        var IHDR:ByteArray = new ByteArray();
        IHDR.writeInt(width);
        IHDR.writeInt(height);
        IHDR.writeByte(8); // bit depth per channel
        IHDR.writeByte(6); // color type: RGBA
        IHDR.writeByte(0); // compression method
        IHDR.writeByte(0); // filter method
        IHDR.writeByte(0); // interlace method
        writeChunk(_png, 0x49484452, IHDR);

        // Build the IDAT chunk a few rows per frame, driven by a
        // throwaway Sprite's ENTER_FRAME event.
        IDAT = new ByteArray();
        y = 0;
        sprite = new Sprite();
        sprite.addEventListener(Event.ENTER_FRAME, onEnterFrame);
    }

    protected function onEnterFrame(event:Event):void
    {
        for (var i:int = 0; i < 20; i++)
        {
            writeRow();
            y++;
            if (y >= height)
            {
                sprite.removeEventListener(Event.ENTER_FRAME, onEnterFrame);
                completeWrite();
                return; // done; without this the loop would write past the last row
            }
        }
    }

    private function completeWrite():void
    {
        IDAT.compress();
        writeChunk(_png, 0x49444154, IDAT);

        // Build IEND chunk.
        writeChunk(_png, 0x49454E44, null);

        // The PNG is ready.
        _png.position = 0;
        dispatchEvent(new Event(Event.COMPLETE));
    }

    private function writeRow():void
    {
        IDAT.writeByte(0); // no filter
        var x:int;
        var pixel:uint;

        if (!transparent)
        {
            for (x = 0; x < width; x++)
            {
                if (sourceBitmapData)
                    pixel = sourceBitmapData.getPixel(x, y);
                else
                    pixel = sourceByteArray.readUnsignedInt();

                IDAT.writeUnsignedInt(uint(((pixel & 0xFFFFFF) << 8) | 0xFF));
            }
        }
        else
        {
            for (x = 0; x < width; x++)
            {
                if (sourceBitmapData)
                    pixel = sourceBitmapData.getPixel32(x, y);
                else
                    pixel = sourceByteArray.readUnsignedInt();

                IDAT.writeUnsignedInt(uint(((pixel & 0xFFFFFF) << 8) |
                                           (pixel >>> 24)));
            }
        }

        dispatchEvent(new ProgressEvent(ProgressEvent.PROGRESS));
    }

    /**
     *  @private
     */
    private function writeChunk(png:ByteArray, type:uint, data:ByteArray):void
    {
        // Write length of data.
        var len:uint = 0;
        if (data)
            len = data.length;
        png.writeUnsignedInt(len);

        // Write chunk type.
        var typePos:uint = png.position;
        png.writeUnsignedInt(type);

        // Write data.
        if (data)
            png.writeBytes(data);

        // Write CRC of chunk type and data.
        var crcPos:uint = png.position;
        png.position = typePos;
        var crc:uint = 0xFFFFFFFF;
        for (var i:uint = typePos; i < crcPos; i++)
        {
            crc = uint(crcTable[(crc ^ png.readUnsignedByte()) & uint(0xFF)] ^
                       uint(crc >>> 8));
        }
        crc = uint(crc ^ uint(0xFFFFFFFF));
        png.position = crcPos;
        png.writeUnsignedInt(crc);
    }

    public function get png():ByteArray
    {
        return _png;
    }

    public function get progress():Number
    {
        return y / height;
    }
}

}
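As a sanity check on the writeChunk method's checksum loop: the table-driven calculation is the standard PNG CRC-32, which Python exposes directly as zlib.crc32. The well-known CRC of the empty IEND chunk makes a quick cross-check:

```python
import zlib

# A PNG chunk's CRC covers the 4-byte chunk type plus its data.
# IEND has no data, so the CRC is computed over just b"IEND".
crc = zlib.crc32(b"IEND") & 0xFFFFFFFF
print(hex(crc))  # 0xae426082, the trailer bytes at the end of every PNG file
```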
I’m not putting this out here as any kind of proof of my brilliance, as it’s not very pretty code at all. I did the bare minimum refactoring to get the thing to work in my project. But it does work, and works damn well. Well enough to call it a done for the project I need it for. I mentioned it on twitter and found out that as opposed to the last time I checked, a few other people have created similar classes. A few I am aware of now:
And apparently the hype framework has one built in too, that you might be able to steal.
But, there can never be enough free code out there, right? If this helps anyone, or they can use it as a launching point to create something better, great. If not, well, I wasted a few minutes posting this, so be it. 🙂
Grand stuff. My hack was a last-minute kind of thing, so.. Odd that I didn’t think of just creating an internal Sprite for the enterframe event like you have. I suppose mentally I’ve just divorced things that aren’t on the display list from the events associated with it.
I recently tried this with JPEGEncoder. One caveat is that if you're uploading the file, you'll have to deal with Adobe's new "user interaction" requirements for file uploads.
Nice work. Sounds and looks similar to the Async jpeg encoder I wrote last year.
@ line 175 shouldn’t it say “return internalEncode…”?
i am going to try this out, thanks, keith, for posting, so i want to make sure that line is right.
I got a lot of weird problems until i included a break in the enterframe loop. You do not want the loop to continue after the end is reached. Might be worth an update of the originall code?
if(y>= height)
{
sprite.removeEventListener(Event.ENTER_FRAME, onEnterFrame);
completeWrite();
break;
}
I’m not a huge fan of chunking data this way, there seems to be too huge an overhead per frame for my liking. If it takes 30 seconds to encode a .png straight, how long does it take to do it in chunks?
There is also a knack in getting the right amount of data processed per frame. You say you get pretty good results at 20 rows, but that may not be the same for everyone. It will differ from machine to machine.
Another method I experimented with was LocalConnection and connecting to a second swf which has the sole purpose of processing your data and returning it to the main swf. This would create a real thread for the processing. It has a couple of caveats though, there is the 40k limit on LocalConnection parameters (but you can always packet the data), and a second swf may not be available to you.
I agree it’s not the ideal solution, but it’s simple and works, makes the ui responsive and gives you feedback. If it takes a few seconds longer to encode, I think that’s a good tradeoff. You could easily extend this to make the number of rows per frame settable, or even do it intelligently with a timer so it would work the same in all machines.
I don’t see the local connection concept as any better a solution, for the reasons you mentioned and then some.
This would make a perfect use case if Adobe is looking at any kind of concurrency API.
No, I didn’t find the LocalConnection all that good, for one-off tasks at least. If you can get over the setup and 40k limit it did perform well at running constant background tasks as it runs completely independently. I suppose it’s a bit like message passing concurrency in languages like Erlang.
I messed around with dynamic chunk sizes to keep the frame rate stable across machines, but the overhead became too much. The time increased ten-fold. Basic chunks like you’ve implemented work the best.
I spoke to a few of the guys from Adobe at FotB last year and their response was that they’re reluctant to introduce any sort of concurrency API for a couple of reasons, the complexity it would add to the language, and that if they did something they would have to get it right first time as there would be no going back. The only good news is that it’s still a topic amongst the Adobe team.
For reference, Alex Harui from Adobe started a discussion about this a couple of years ago:
I wouldn’t be at all surprised if we saw the beginnings of something like this in the coming year.
Great class. I used it in my project.
But for some reason it launches COMPLETE event 18 times. So I had to run removeListener right from COMPLETE handler.
What's with this line popping up in the source code?
src=”” alt=”8)” class=”wp-smiley”>
I was very excited to find this but I simply can’t get it to work. It seems to encode the bitmapdata, launches the complete event (18 times like the above comment) but then the returned bytearray is not as expected. But it does return “something”…
Does anyone have any usage samples?
I solved my issue. Originally, I had modified the lines that had the code “p-includes/images/smilies/icon_cool.gif” alt=”8)”… because this seemed like a copy/paste error. But I didn’t modify the lines correctly. So I changed the two lines to:
1 Old:
IDAT.writeUnsignedInt(uint(((pixel & 0xFFFFFF) << | 0xFF));
Change to:
IDAT.writeUnsignedInt(uint(((pixel & 0xFFFFFF) << 8) | 0xFF));
2 Old:
IDAT.writeUnsignedInt(uint(((pixel & 0xFFFFFF) << |
(pixel >>> 24)));
Change to:
IDAT.writeUnsignedInt(uint(((pixel & 0xFFFFFF) <>> 24)));
This solved my problem and the now the ENcoder works great. Big thanks to the author.
For anyone else keeping score, that smiley-face above should be replaced with the numeral 8 followed by a close-paren. But oh well.
Also, please add the statement “break’;” or “return;'” after “completeWrite()”.
Otherwise, if you have a height that is not a multiple of 20 (due to the hard-coded “20” in your “for” loop), the loop will keep iterating, resulting in an end of file error.
Thx.
“8)”
One last comment on this generally incomplete inquiry:
On a large PNG (say, 5MB), these two synchronous operations both take a long time to complete:
IDAT.compress();
and
writeChunk(_png, 0x49444154, IDAT);
Each of these statements can take over a second, locking up FlashPlayer, and partially negating the benefits of the extra effort involved in setting up the asynchronous encoding performed before that point.
Does anyone has a sample code for using the new class ?
I use: import mx.graphics.codec.PNGEncoderAsync; var abc:PNGEncoderAsync = new PNGEncoderAsync();
Flash throws error 1046 and 1180. | http://www.bit-101.com/blog/?p=2581 | CC-MAIN-2017-17 | refinedweb | 2,528 | 66.23 |
This used to work just fine (as in, 2 weeks ago or so):
import os
import sublime
import sublime_plugin
class BlameCommand(sublime_plugin.TextCommand):
def run(self, edit):
if len(self.view.file_name()) > 0:
folder_name, file_name = os.path.split(self.view.file_name())
begin_line, begin_column = self.view.rowcol(self.view.sel()[0].begin())
end_line, end_column = self.view.rowcol(self.view.sel()[0].end())
begin_line = str(begin_line)
end_line = str(end_line)
lines = begin_line + ',' + end_line
self.view.window().run_command('exec', {'cmd': 'git', 'blame', '-L', lines, file_name], 'working_dir': folder_name})
sublime.status_message("git blame -L " + lines + " " + file_name)
def is_enabled(self):
return self.view.file_name() and len(self.view.file_name()) > 0
...but now it does nothing. No error message or anything. Did something change in the API that deprecates anything above? Could someone who uses git try this out and see if it works for them? I had this for user key bindings:
{ "keys": "super+shift+b"], "command": "blame" }
How odd. "super+shift+b" does nothing, but if I change it to "super+alt+b", it works. I don't see any other plugin or default command that uses super+shift+b... I wonder why that doesn't work? | https://forum.sublimetext.com/t/my-blame-plugin-used-to-work-but-now-it-doesnt/2144 | CC-MAIN-2016-44 | refinedweb | 191 | 54.49 |
my ps
about
instructor-led
on-demand!
screencasts
blogs
technical staff
Don Box's Spoutlet
»
LINQ Panel: Whadya wanna Know?
LINQ Panel: Whadya wanna Know?)
I'm moderating a panel tomorrow at PDC on the LINQ project.
I have no shortage of ideas for topics/questions, but I'd love to hear from the rest of the world what y'all want to hear folks who've been creating and using this stuff.
Please use the comments on this blog, as my email is pretty sporadic while here at PDC. I'm mainly looking for fodder, so expect me to mangle your ideas to fit the event.
Posted
Sep 15 2005, 03:12 PM
by
don-box
Mark Mehelis
wrote
re: LINQ Panel: Whadya wanna Know?
on 09-15-2005 10:23 AM
Don I have some fears of LINQ - and so does my lead dev...
Ramblings--- intentionally going awry...
1. I need to create a data-storage mechanism for my app.
I need to store data to the hard drive (say via the
file system). I can then create objects that read and
write to the file-system to persist data (i.e., objects
like CustomerCarRental.saveAgreement(), etc).
2. The file-system I use needs to be faster, so I need to
create index files and create some background processes
that use n-tree (binary, 2-3 trees, etc) methods to
keep the data stored correctly for performance.
3. This makes it difficult when I insert or update data
in this scheme. So, I would *really* like an engine
that handles this, so I done keep duplicating this over
and over.
4. It would be nice if the interface into this engine was
standard. I do not want to touch ISAM (Indexed
Sequential Access Method) files directly, so it would
be great if there was an interprettive script-parsing
layer that fit on top of the database.
5. Now that I interface with the database via scripts, it
would *really* be nice if there was some standardization
of the syntax of the scripts. Maybe a Standard Query
Language would be nice (it would be great if we set up
an American National Standards Institute that would make
the language standard for all of us trying to create
database engines. That way, when I realize that my engine
sucks, I can use my buddies, knowing that my interface
to the database will be the same through the same scripts.
6. Hey, the idea of using scripts is really cool, since now
I can dynamically generate the scripts (like I do HTML)
to create adhoc-browse windows. Users can filter based
on, say, "LIKE" and "IN" and other cool filter criteria.
Now I can understand what is going on in my buddy's app
when he executes "select CustomerName, CustomerAddress,
RentalDate from Agreements where CustomerName like
'%Smith%'". This is cool since I can also change the
SQL a bit to get better performance, too (i.e., joining
with major-filters bottom-right and going to top-left
for the more in-depth filtering, or whatever).
7. I like this interpretter-pattern thing for interfacing
with the database. Now, I can go to any application and
understand what is going on, since I know the scripting
language. I can look into my buddy's CustomerCarRental
object to see that the "save()" method is executing a
script like "insert into Agreements ( CustomerName,
CustomerAddress, RentalDate, ReturnDate, Price ) values
( 'Joe Smith', '123 J St. Napa, CA 94553', '10/01/2005',
'10/15/2005', '$225.00' )".
8. Man-oh-man. This is great. The *last* thing I need is
for someone to de-standardize this great thing we have
going. I don't to learn some one-off way of interacting
with the database (ie. "x=TL.locateUpdatableRec('somecriteria');
x.setCustomerName('J Smith'); x.setCustomerAddress( 'xxx' );
etc, etc, x.commitData();" vs the insert from (7)). Man
it would also suck since I couldn't tweek the SQL for
performance, either (see (6)). I would probably have to
use a lesser-functional means to get *larger* sets of
data, then "interrogate" the data sets to filter down
to what I want... or something.
9. I sure hope that if someone *did* decide to build a mapped-
object layer, that they wouldn't require a code-generator
to "create" code that gets shoved into by code-base. It
would only be able to be modified through the external
tool used to created it. It would such to generate, tweek &
modify-to-suit, change the database, re-generate and lose
my changes (or maybe we would need to add a level of
complexity to the development scheme--- "always cut-and-
save changes before re-genning the db objects"... or
something).
10. I sure hope that if someone *did* decide to build a mapped-
object layer, that they wouldn't try to get fancy and
cache-stuff from the database. That would suck if an
external process or application changed the database without
the db-object-layer (ORM) knowing about it. It would also suck
since my application-servers would need to be clustered,
and any disperate applications of processes would now need
to talk through a proprietary interface to *my* application
instead of talking SQL directly with the database. Now I
would need to create my own listener-scheme (i.e.,
web-services, RMI calls, etc) instead putting that burden
on the database (i.e., EJB.RemoteCustomerCarRental.blahBlah(),
etc).
11. But it would not suck if I was the guy setting up this
entire architecture. I'd weasle-in, become the sole entry
point to the database, then up my maintenance rates. Who
the hell else could be brought in and up-to-speed on all of
my proprietary makings? I would *own* my customers! I'd be
a monopoly and millionare by attrition! Ya, baby!
Chris Woodruff
wrote
re: LINQ Panel: Whadya wanna Know?
on 09-15-2005 10:40 AM
Don --
I have followed Comega for a while and really was excited about the language and if it could be brought into the C# superset. I love that LINQ will allow the developer to use relation and XML data easily in the .NET universe. Maybe this question is not the best asked here but I will still ask it. What about all the cool other language constructs that Comega had? Will any of these be added to .NET (or have they and I just missed them):
1. Stream types
2. Anonymous structs ("tuple types")
3. Choice types
4. Content classes
Thanks and have a good time with LINQ. I have been waiting for this for 10 years. Cannot wait until I can use it in production development.
Keith J. Farmer
wrote
re: LINQ Panel: Whadya wanna Know?
on 09-15-2005 12:15 PM
Don:
I've been playing with the bits, and I'm pretty excited about this (I just wish it were RTM here, now). The following would be of interest to people on my end:
1 -- Extending DLinq to other databases. Oracle for example, any RDBMS in general, any tabular store (csv files on the file system, for example) for bonus points. DLinq over Web Services would be, well, amazing.
2 -- GLinq, for Linq over graphs in general.
3 -- Choice types would be of *incredible* help for a project we have. It would solve some very difficult problems we're having in designing a framework. If you want, email me and I can explain.
3 -- Streams: do we need them now that we have yield?
Mark Miller
wrote
re: LINQ Panel: Whadya wanna Know?
on 09-15-2005 12:17 PM
* What is a projection for release? It seems to be targetted at Orcas, but since we have some bits in our hands now, will we see incremental production releases against whidbey?
* What is the plan for working with other DB providers like Oracle? Things like handling the type disconnect between DBs like Oracle and .NET (which is less than elegant at the moment). Those situations will be magnified by pulling those into the language.
* There doesn't seem to be a complete story on interop with Indigo. For instance, how will this technology change Indigo? I can definitely see dealing with services as query-able datasources.
* Large, "read-only" style apps (data visualization, etc) seem to need a voice in this design. For instance, they don't need the overhead of change tracking, identity management and some of the other things that the DataContext seems to be forcing on the model, we just need large streams of data. Also, relationships in such scenarios are not defined by an object model or hierarchy, how can these be described more concisely in the query (where's the JOIN statement).
Sean Chase
wrote
re: LINQ Panel: Whadya wanna Know?
on 09-15-2005 1:14 PM
lambda lambda lambda for all of use LISP-Losers. :-)
Marcus Widerberg
wrote
re: LINQ Panel: Whadya wanna Know?
on 09-15-2005 1:55 PM
1. There has been no talk on polyphonic c#, is the plan for it to be included in C# 3.0 / undecided / or no?
2. Suggestion: I would like to know a little more about the implementation (current + ideas) of the different features. Ex: How are extension methods implemented? That would explain how are extension method calls are dispatched, which I would like to know. Ie, is there a performance penalty for that (and the other features).
WWs Blog
wrote
The LINQ Project
on 09-15-2005 3:11 PM
Microsoft has a really interesting project, codename LINQ, that will extend the .NET Framework. It's...
Scott
wrote
re: LINQ Panel: Whadya wanna Know?
on 09-15-2005 3:54 PM
I'll second the call for more info on the Lambda expressions. How do we use them? Can the be created for our custom types? Generic types?
Barry Gervin's Software Architecture Perspectives
wrote
More on LINQ, XLinq and perspectives.
on 09-15-2005 4:06 PM
William Stacey
wrote
re: LINQ Panel: Whadya wanna Know?
on 09-15-2005 6:13 PM
The Provider model needs to simple and also rich with a .Net template. Naturally, many Linq providers come to mind quickly:
- AD (Add, change, delete user/groups, etc)
- Exchange (ACID mail and mailboxes)
- Local Hardware (enum)
- NTFS/FAT (ACID)
- File properties. (file size, attributes, etc)
- Process/Threads (including kill)
- Perf Mon
- Instrament the app.
- Eventlog and app eventlog (Add/ch/del)
- Shares/Active users
- RSS
- Registry
Really almost all apps should export a Linq provider so that you can at least do management and export data as needed. The "var" thing is causing some strange looks on the c# groups. Looks great however for the day I have been looking at it. Thanks MS.
--William Stacey [MVP]
William Stacey
wrote
re: LINQ Panel: Whadya wanna Know?
on 09-15-2005 6:22 PM
- Basically, the Dlinq stuff is an object wrapper which outputs a Sql string. Do you ever see a time where this could be done more directly?
- Are direct .Net types in the database ever going to happen to eliminate lossy behavior and type translation pain.
Eric A.
wrote
re: LINQ Panel: Whadya wanna Know?
on 09-15-2005 11:22 PM
@Mark Mehelis: Interestingly, VB9.0 implements a more "SQL" approach to the querying process:
[shamelessly taken from MSDN]
Class Country
Public Property Name As String
Public Property Area As Float
Public Property Population As Integer
End Class
Dim Countries = _
{ new Country{ _
.Name = "Palau", .Area = 458, .Population = 16952 }, _
new Country{ _
.Name = "Monaco", .Area = 1.9, .Population = 31719 }, _
new Country{ _
.Name = "Belize", .Area = 22960, .Population = 219296 }, _
new Country{ _
.Name = "Madagascar", .Area = 587040, .Population = 13670507 } _
}
Dim SmallCountries = Select Country _
From Country In Countries _
Where Country.Population < 1000000
For Each Country As Country In SmallCountries
Console.WriteLine(Country.Name)
[/shamelessly taken from MSDN]
As seems apparent, the data retrieval of SmallCountries is much more similar to SQL as it is written today.
What odd folks over there in Redmond.
Regards,
Eric
Eric A.
wrote
re: LINQ Panel: Whadya wanna Know?
on 09-15-2005 11:23 PM
Sorry, I should have included the source of the shameless theft:
Regards,
Eric A.
Richard Reukema
wrote
re: LINQ Panel: Whadya wanna Know?
on 09-16-2005 8:09 AM
I haven't seen anything in regard to stored procs? I personnelly don't issue selects directly against tables from my code - there are no examples of taking the data from a stored proc, and getting it into a collection. Did I miss something?
Mark Mehelis
wrote
re: LINQ Panel: Whadya wanna Know?
on 09-16-2005 8:16 AM
Thanks Eric... that is good to see something that is more SQL like.
I just don't like the idea of taking my data access and sprinkling it amongst the code. I see a major headache in trying to maintain any application that is not trivial in size.
How can I "tune" the data access via this code that is generated?
What happens to my data access when I add a column, subtract a column, modify a column? Currently in SQL as long as I don't "touch" a column from the SQL I can make changes at the Database and not break other applications which touch the database.
If I have this data accessed in memory what happens when a trigger fires and makes a change to the database or another application I co-exist with changes the data underneith me. My in memory data is now dirty... how is this handled?
If for some reason I need to take my application to say Oracle from SQL server or the other way around... is the SQL that is generated genaric enough to not blow up?
- Mark
Bryant Likes
wrote
re: LINQ Panel: Whadya wanna Know?
on 09-16-2005 9:27 AM
1) If we drink the linq coolaid, what should we start doing today to make moving to linq easier in the future?
2) What happens to linq objects that get passed between app domains? What about returning linq generated objects from a web service?
Chris Woodruff
wrote
re: LINQ Panel: Whadya wanna Know?
on 09-16-2005 10:05 AM
I agree with Richard about stored procedures. I almost never use inline SQL for data retrieval. Will LINQ have the ability to interrogate my stored procedure return dataset and allow me to work with said dataset? That is what I also need.
Bryant Likes
wrote
re: LINQ Panel: Whadya wanna Know?
on 09-16-2005 10:25 AM
3) Can the schemas in XLink be generated the same way they are in DLink by using .NET classes?
Mark Mehelis
wrote
re: LINQ Panel: Whadya wanna Know?
on 09-16-2005 10:33 AM
The syntax between the different languages within .NET should be the same IMHO. My reasoning would be that say I have a LINQ query that I can't get to work they way I want and I go to my neighbor and ask for help... sorry dude you write VB.NET I write C# ... I don't understand your specific dialect of LINQ.
So if nothing else atleast standardize within MS on LINQ and how the language syntax works. It would suck to have to throw out an application because my VB/C# coder quit and I don't have anyone else to fill his/her shoes immediately and I don't have another coder in that particular language.
YanivG
wrote
re: LINQ Panel: Whadya wanna Know?
on 09-20-2005 12:43 AM
Check out the discussion on LINQ syntax at Eran's blog at.
christophep@avanade.com
wrote
re: LINQ Panel: Whadya wanna Know?
on 09-21-2005 2:07 AM
it should be better to increase the speed of the collections namespace and incorporate into it things like ones we have in STL (C++) rather than finding experimental stuff. In real world projetcs (not PPT ones), we used stored proc and I a guy have the idea to use a ISAM provider or a custom tree algorythm, the management is not possible anymore. We have left the isam area, we have powerfull structures, iterators and algorythms, so... C# does not make me smile.
In fact, finding and entry in a 5 row struct it not very cool & smart.
Sorry Don.
christophep@avanade.com
wrote
re: LINQ Panel: Whadya wanna Know?
on 09-21-2005 2:09 AM
erratum: C# 3.0 does not make me smile.
but C# 2.0 is excellent.
Marc Brooks
wrote
re: LINQ Panel: Whadya wanna Know?
on 09-23-2005 3:52 PM
Can we campaign to get 'var' change to something less overloaded with "variant" and "variable typed" overtones? How about "let". Here's the link for LadyBug
xlink and linq
wrote
xlink and linq
on 07-07-2008 5:18 AM
Pingback from xlink and linq | http://www.pluralsight.com/community/blogs/dbox/archive/2005/09/15/14832.aspx | crawl-002 | refinedweb | 2,839 | 73.58 |
One of the greatest performance benefits can be achieved with caching of data, the fetching of which always involves some form of overhead. For XML files, this is the file system, and for databases, it is the connection and physical extraction of the data. Two types of data are generally displayed in Web pages: data that changes often, and data that doesn't change often. If the data doesn't change often, there is no point in going through the expensive operation of connecting to the database to get the data and using valuable database resources. A better solution would be to cache the data, thus saving the time and resources of the database server.
The problem with caching data, or even caching entire ASP.NET pages that have database-driven data on them, is what to do if the data changes. In fact, how do you even know if the data has changed? ASP.NET 2.0 provides features that allow its built-in cache to be invalidated when data changes so that the page is regenerated. This brings the best of both worlds: fresh data, but cached for increased performance.
The features of cache invalidation depend upon the database server, and both SQL Server 2000 and 2005 support this, although with different features.
SQL Server 2005 supports notifications via a service broker, a feature that allows it to notify client applications when data has changed. This can be combined with ASP.NET's cache so that pages can be notified when the data they rely upon changes. When the data changes, SQL Server notifies ASP.NET, the page is evicted from the cache, and the next request will see the page regenerated with the fresh data.
Cache invalidation works with both SQL Server 2005 and the Express editions, but with the Express edition it will only work if user instancing is not used. That means that you cannot have the User Instance=true keywords in your connection string, and that you must have an attached database.
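For example, a connection string that satisfies both conditions might look like the following sketch (the instance name .\SQLEXPRESS is an assumption; the key point is the absence of User Instance=true and of AttachDbFilename):

```xml
<connectionStrings>
  <!-- Northwind is attached to the server; no user instancing -->
  <add name="NorthwindConnectString"
       connectionString="Data Source=.\SQLEXPRESS;Initial Catalog=Northwind;Integrated Security=True"
       providerName="System.Data.SqlClient" />
</connectionStrings>
```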
In operation, SQL Server cache invalidation is seamless, but it does require some initial setup.
The setup required depends upon how you connect to SQL Server and whether the user is a database owner (and hence has administrative rights in the database). Whatever permissions the user has, there is a one-time setup, involving ensuring that the database is at the correct version number and that the service broker endpoint is created.
For new databases, the version number will be correct, but for old databases that you have attached, it may not be. You can check this by issuing the sp_helpdb command in a new query, which will return a list of all databases and associated details, as shown in Figure. Here you can see the list of databases; one of the columns is compatibility_level, which must be 90 for the service broker to work.
You can upgrade the compatibility level of a database by executing the following simple command:
exec sp_dbcmptlevel 'Northwind', '90'
You simply supply the database name and the level to upgrade to. When the version is correct, you can create the broker endpoint, using the script shown in Listing 6.10.
USE master
GO
CREATE ENDPOINT BrokerEndpoint
STATE = STARTED
AS TCP ( LISTENER_PORT = 4037 )
FOR SERVICE_BROKER ( AUTHENTICATION = WINDOWS )
GO
ALTER DATABASE Northwind SET ENABLE_BROKER;
GO
This script is available as CreateAndEnableServiceBroker.sql in the databases directory of the downloadable samples.
If you are connecting to SQL Server as a trusted user, such as using integrated security, and that user has administrative rights, then this is all you require for the configuration. If you're not an administrative user, whether using integrated security or not, then you need to grant the database user additional permissions.
For non-administrative users, the setup is also a one-time affair, but it is necessary to grant permissions so that the user can create the required objects. For this, you should run the script detailed in Listing 6.11.
-- Create the sql_dependency_subscriber role in SQL Server
EXEC sp_addrole 'sql_dependency_subscriber'

-- Permissions needed for users to use the Start method
GRANT CREATE PROCEDURE to startUser
GRANT CREATE QUEUE to startUser
GRANT CREATE SERVICE to startUser
GRANT REFERENCES on CONTRACT::
  [http://schemas.microsoft.com/SQL/Notifications/PostQueryNotification]
  to startUser
GRANT VIEW DEFINITION TO startUser

-- Permissions needed for users to execute the query
GRANT SELECT to executeUser
GRANT SUBSCRIBE QUERY NOTIFICATIONS TO executeUser
GRANT RECEIVE ON QueryNotificationErrorsQueue TO executeUser
GRANT REFERENCES on CONTRACT::
  [http://schemas.microsoft.com/SQL/Notifications/PostQueryNotification]
  to executeUser

EXEC sp_addrolemember 'sql_dependency_subscriber', 'executeUser'
This script is available as EnableServiceBrokerNonAdmin.sql in the databases directory of the downloadable samples.
Three sections appear in Listing 6.11. The first simply creates a new role for the subscriber of notifications. The second creates the permissions for the user to execute the Start method; this is something we'll be covering soon. The third section creates permissions for the user executing the database query. The startUser and executeUser can be the same user and can be a Windows user account (such as ASPNET) or an explicit SQL Server user account.
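The Start method referred to here is SqlDependency.Start, which must be called once per connection string before any notification-based caching is used. A minimal sketch follows; placing the call in Global.asax is an assumption, and the connection string name comes from the listings in this chapter:

```csharp
// Global.asax.cs -- hedged sketch of starting the notification listener
using System;
using System.Configuration;
using System.Data.SqlClient;

public class Global : System.Web.HttpApplication
{
    protected void Application_Start(object sender, EventArgs e)
    {
        string connString = ConfigurationManager
            .ConnectionStrings["NorthwindConnectString"].ConnectionString;

        // Opens the listener that receives query notifications from SQL Server
        SqlDependency.Start(connString);
    }

    protected void Application_End(object sender, EventArgs e)
    {
        // Tear the listener down when the application shuts down
        SqlDependency.Stop(ConfigurationManager
            .ConnectionStrings["NorthwindConnectString"].ConnectionString);
    }
}
```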
If the user doesn't have correct permissions, then you may see an error such as:
Cannot find the contract
'http://schemas.microsoft.com/SQL/Notifications/PostQueryNotification',
because it does not exist or you do not have permission.

Invalid object name
'SqlQueryNotificationService-d1963e55-3e62-4d54-b9ca-b4c02c9e6291'.
Once the database and permissions have been configured, you can start using SQL notifications, but one important point to note is that the syntax used for the query must follow certain conditions. The first is that you cannot use * to represent all columnscolumns must be explicitly named. The second is that the table name must be qualified with its owner. For example:
SELECT ProductID, ProductName FROM dbo.Products
If you think you have everything configured correctly, but your pages don't seem to be evicted from the cache when you change the data, then you need to check the query as well as the database compatibility version (see Figure). Once permissions are correct, you will not see any exceptions regarding cached pages dependent upon SQL data, because failures happen silently.
Using the SQL Server 2005 cache invalidation is extremely simple, because you use the same features as you use for standard page caching, but this time you add the SqlDependency attribute:
<%@ OutputCache Duration="30" VaryByParam="none"
SqlDependency="CommandNotification" %>
The page would now be output-cached, but a dependency would be created on any data commands within the page. Using CommandNotification means that the page is cached until notified by SQL server. For example, consider Listing 6.12, which has output caching enabled, based on SQL commands. The data source and grid controls contain no additions to take care of the caching, and were there more data controls with different queries, then a change to either data source would result in the page being evicted from the cache.
<%@ Page Language="C#" ... %>
<%@ OutputCache Duration="30" VaryByParam="none"
SqlDependency="CommandNotification" %>
<html>
<form runat="server">
<h1><%=DateTime.Now %></h1>
<asp:SqlDataSource ID="SqlDataSource1" runat="server"
ConnectionString="<%$ConnectionStrings:NorthwindConnectString%>"
SelectCommand="SELECT [ProductID], [ProductName], [UnitsInStock],
[UnitsOnOrder] FROM [dbo].[Products]">
</asp:SqlDataSource>
<asp:GridView ID="GridView1" runat="server"
    DataSourceID="SqlDataSource1" AutoGenerateColumns="False">
  <Columns>
    <asp:BoundField DataField="ProductID" />
    <asp:BoundField DataField="ProductName" />
    <asp:BoundField DataField="UnitsInStock" />
    <asp:BoundField DataField="UnitsOnOrder" />
  </Columns>
</asp:GridView>
</form>
</html>
This scenario can easily be tested by calling the page and clicking Refresh. The date should remain the same. But if you modify a row in the Products table and then click Refresh, the page will be updated with a new date. Because this query selects all rows, any change to the underlying data will result in the cache being invalidated. However, if the query had a WHERE clause, invalidation would only take place if the changed data was part of the set of rows returned by the query; changes to rows not part of the query have no effect upon the cache.
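To illustrate the WHERE clause behavior, a hedged variant of the earlier query follows; the particular filter condition is an assumption, chosen only to show a restricted result set:

```sql
-- Only rows in this result set participate in the notification;
-- updating a product whose UnitsInStock is 0 leaves the cached page alone.
SELECT ProductID, ProductName, UnitsInStock, UnitsOnOrder
FROM dbo.Products
WHERE UnitsInStock > 0
```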
If you wish to cache only the data on a page, you have two options. You can wrap the data (data source and grid) up in a user control and use fragment caching, or you can add the caching dependency to the data source control directly and remove it from the page, as shown in Listing 6.13.
<asp:SqlDataSource ID="SqlDataSource1" runat="server"
EnableCaching="true" SqlCacheDependency="CommandNotification"
CacheDuration="30"
ConnectionString="<%$ ConnectionStrings:NorthwindConnectString %>"
SelectCommand="SELECT [ProductID], [ProductName], [UnitsInStock],
[UnitsOnOrder] FROM [dbo].[Products]">
</asp:SqlDataSource>
In effect, this is similar to fragment caching for any controls on the page that are bound to the data source.
If you wish to use a business or data layer to abstract your data access code, caching can still be used, and there are two ways to achieve this. The first is to use output caching and have the page dependent upon the data, and the second is to only cache the data. For the first option, you use the same method as previously shown, adding the OutputCache directive to the page with the SqlCacheDependency attribute set to CommandNotification. An ObjectDataSource control can be used to fetch the data from the data layer, as shown in Listing 6.14.
<%@ Page Language="C#" ... %>
<%@ OutputCache Duration="30" VaryByParam="none"
SqlDependency="CommandNotification" %>
<html>
<form runat="server">
<h1><%=DateTime.Now %></h1>
<asp:ObjectDataSource ID="ObjectDataSource1" runat="server"
    TypeName="ProductsDataLayer" SelectMethod="Read2">
</asp:ObjectDataSource>
<asp:GridView ID="GridView1" runat="server"
    DataSourceID="ObjectDataSource1" />
</form>
</html>
The data layer simply fetches the data, as shown in Listing 6.15.
public static class ProductsDataLayer
{
public static DataTable Read2()
{
using (SqlConnection conn = new
SqlConnection(ConfigurationManager.ConnectionStrings[
"NorthwindConnectString"].ConnectionString))
{
conn.Open();
SqlCommand cmd = new SqlCommand("usp_GetProductsOrdered",
conn);
cmd.CommandType = CommandType.StoredProcedure;
DataTable tbl = new DataTable();
tbl.Load(cmd.ExecuteReader(CommandBehavior.CloseConnection));
return tbl;
}
}
}
The query can be a SQL statement or a stored procedure, as long as the actual SQL statement follows the rules for query notificationsexplicit column names and two-part table names. In addition, you should not use SET NOCOUNT ON in a stored procedure or the rowset will not be cacheable.
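A notification-friendly version of the usp_GetProductsOrdered procedure called in Listing 6.15 might therefore look like this; only the procedure name comes from the listing, and the body is an assumption that follows the rules above:

```sql
CREATE PROCEDURE dbo.usp_GetProductsOrdered
AS
  -- No SET NOCOUNT ON, an explicit column list, and a two-part table name
  SELECT ProductID, ProductName, UnitsInStock, UnitsOnOrder
  FROM dbo.Products
  WHERE UnitsOnOrder > 0
GO
```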
If you do not wish to place cache details within the page, it can be done programmatically by way of the SqlCacheDependency class and a method on the Response object. For example, consider Listing 6.16, which returns a DataTable, perhaps as a function within a page. Here the SqlCacheDependency object is created explicitly, with the SqlCommand object passed in as a parameter. This creates a dependency based upon the command. The dependency is then added to the list of dependencies of the ASP.NET cache using the AddCacheDependency method.
As well as adding items to the cache, the API also exposes other features. For example:
Response.Cache.SetExpires(DateTime.Now.AddSeconds(30));
Response.Cache.SetCacheability(HttpCacheability.Public);
public DataTable Read()
{
    using (SqlConnection conn = new
        SqlConnection(ConfigurationManager.ConnectionStrings[
            "NorthwindConnectString"].ConnectionString))
    {
        conn.Open();
        SqlCommand cmd = new SqlCommand("usp_GetProductsOrdered", conn);
        cmd.CommandType = CommandType.StoredProcedure;
        // the dependency must be created before the command is executed
        SqlCacheDependency dependency = new SqlCacheDependency(cmd);
        DataTable tbl = new DataTable();
        tbl.Load(cmd.ExecuteReader(CommandBehavior.CloseConnection));
        Response.AddCacheDependency(dependency);
        return tbl;
    }
}
If this code is in a class in the App_Code directory, the Response can be accessed with the HttpContext object:
HttpContext.Current.Response.AddCacheDependency(dependency);
This is not something you'd want to explicitly do in business or data layers though, because it ties the method to the interface, which could reduce reuse of this code for other scenarios.
Another method of caching is to cache only the data, leaving the page uncached. This works in a similar way to fragment caching, or adding the cache details to the data source control.
Listing 6.17 shows a standard pattern for caching using the Cache object of ASP.NET; the Cache object is the API into the underlying caching mechanism, so you can manipulate it directly as well as through page and control attributes.
In the ReadCached method, the first action is to check the Cache for an item; the cache provides a simple dictionary approach, so items can be accessed by a key value, Products in this case. If the item doesn't exist in the cache, the command is executed to fetch the data; note that a SqlCacheDependency is explicitly created (and it has to be created before the command is executed). Once the data has been fetched, it is placed into the cache using the Insert method; the first parameter is the key, the second is the data being stored, and the third is the dependency. Once stored in the cache, the data is returned. The final code line will only execute if the item is already in the cache, so the Get method is used to fetch the item using its key value. The item is returned from the cache as an Object and thus has to be cast to its original data type, a DataTable.
public static class ProductsDataLayer
{
public static DataTable ReadCached()
{
if (HttpContext.Current.Cache["Products"] == null)
{
    using (SqlConnection conn = new
        SqlConnection(ConfigurationManager.ConnectionStrings[
            "NorthwindConnectString"].ConnectionString))
    {
        conn.Open();
        SqlCommand cmd = new SqlCommand("usp_GetProductsOrdered", conn);
        cmd.CommandType = CommandType.StoredProcedure;
        // create the dependency before the command is executed
        SqlCacheDependency dependency = new SqlCacheDependency(cmd);
        DataTable tbl = new DataTable();
        tbl.Load(cmd.ExecuteReader(CommandBehavior.CloseConnection));
HttpContext.Current.Cache.Insert("Products",
tbl, dependency);
return tbl;
}
}
return (DataTable)HttpContext.Current.Cache.Get("Products");
}
}
This code doesn't affect the page caching but uses the same mechanism. If the data changes, the cache receives notification from SQL Server, and the item is evicted from the cache.
Caching using SQL Server 7 and 2000 uses many of the same constructs as for SQL Server 2005, but works in a different way. The first thing to note is that SQL Server 2000 does not use notifications, which means that caching is polling-based. The database isn't continuously polled, so there is no huge overhead. It works like this:
You have to explicitly enable caching on a database and table level.
A new table is created that has one entry for each table upon which cache dependencies exist. There is only one row per enabled table, so the number of rows in this table will never exceed the number of tables in the database.
Triggers are added to tables enabled for caching, so that data changes result in an update to the notifications table.
A background thread in ASP.NET polls the change notifications table for changes. If a row in the change notifications table has changed, then the page dependent upon this table is evicted from the cache.
The second point to note is that with SQL Server 2000, cache invalidation is based upon any changes to the entire table. So even changes to rows that are not part of the result set you are displaying will affect the page cache.
To enable SQL Server 2000 for cache invalidation, you need to run a command line tool, aspnet_regsql, stored in the framework directory (\WINDOWS\Microsoft.NET\Framework\v2.0.50727). This tool has several uses, including adding application services such as membership and personalization to databases, and there are a number of command-line switches. The options for cache invalidation are shown in Figure.
Flag
Description
?
Displays a help listing of the various flags supported by the tool.
S
Names the SQL Server to connect to. This can be either the computer name or the IP address.
U
Names the user to connect as when using SQL Server Authentication (e.g., the SQL Server administrator account, sa).
P
Used in conjunction with the U flag to specify the user's password.
E
Connects to the SQL Server when using Windows Authentication and the current user has administrator capabilities on the database. The U and P flags are not needed when using E.
t
Specifies the table to apply necessary changes for SQL Server cache invalidation to.
d
Specifies the database to apply changes for SQL Server cache invalidation to.
ed
Enables a database for SQL cache dependency. This requires the d option.
dd
Disables a database for SQL cache dependency. This requires the d option.
et
Enables a table for SQL cache dependency. This requires the t option.
dt
Disables a table for SQL cache dependency. This requires the t option.
lt
Lists all tables enabled for SQL cache dependency.
Before a database table can participate in SQL cache invalidation, both the database and table must be enabled. To enable a database on a machine, use the following command:
aspnet_regsqlcache.exe -U [user] -P [password] -ed -d [database]
If you have a separate database server and don't have ASP.NET 2.0 installed, then you can enable the database on any server and simply move the database files to the database server.
Figure shows an example of enabling a SQL Server running on the local machine. The E flag is used for Windows authentication. The ed flag is used to enable the database, and the database is specified with the d flag. This creates a new table named AspNet_SqlCacheTablesForChangeNotification.
This new table contains the columns shown in Figure.
Column
tableName
Stores the name of all tables in the current database capable of participating in change notifications.
notificationCreated
Sets the timestamp indicating when the table was enabled for notifications.
changeId
Sets the numeric change ID incremented when a table is changed.
Now that the database is enabled for change notifications, you need to enlist tables that you wish to watch for changes.
After you enable the database for change notifications, you need to enlist selected tables for change notifications, and for this you use the et and t flags:
aspnet_regsqlcache.exe -U [user] -P [password] -et -t [table] -d
[database]
For example, if you want to enable the Products tables in the Northwind database, you execute aspnet_regsql as shown in Figure.
This creates a trigger Products_AspNet_SqlCacheNotification_Trigger on the Products table and also adds an entry into the AspNet_SqlCache TablesForChangeNotification table for the Products table. Whenever data within the Products table is updated, inserted, or deleted, the trigger causes the changeId value stored in the AspNet_SqlCacheTablesForChangeNotification table to be incremented.
When you use SQL Server 2000 cache invalidation, ASP.NET polls the database for changes. The information about the polling is defined in web.config, in the caching section, as shown in Listing 6.18.
<caching>
<sqlCacheDependency enabled="true" pollTime="10000">
<databases>
<add name="Northwind" connectionStringName="Northwind2000"
pollTime="5000" />
</databases>
</sqlCacheDependency>
</caching>
The SqlCacheDependency section contains two attributes: enabled, to turn the feature on or off, and pollTime, which is the time in milliseconds between polls of the database. The pollTime defaults to 5000. The databases sections details the databases upon which polling will take place, and follows the standard provider pattern of having add and remove elements. For add, the name is the key and doesn't have to correspond to the database being polled, although obviously a similar name makes sense. The connectionStringName identifies the connection string from the connectionStrings section, and pollTime specifies the polling time for this particular entry, overriding the pollTime set on the sqlCacheDependency element.
The use of SQL Server 2000 for cache invalidation is similar to that for SQL Server 2005; the attributes and usage of controls is the same, but the dependency differs. For SQL Server 2000, instead of CommandNotification, you use the key name from the configuration and the table name, separated by a colon (:). For example:
<% OutputCache Duration="30" VaryByparam="note"
SqlDependency="Northwind:Products" %>
In use, the page works exactly the same as for SQL Server 2005 notifications; upon first request, the page will be cached and will not be evicted from the cache until data has changed. Of course, the eviction doesn't happen immediately after the data changes but only after the poll time has elapsed.
The replacement of CommandNotification with the cache key and table applies to the API as well, when you create the SqlCacheDependency:
SqlCacheDependency dependency = new
SqlCacheDependency("Northwind", "Products");
Here the first parameter is the key into the databases section of the caching configuration, and the second parameter is the table name.
On the first poll, the list of notification-enabled tables is returned from the database. This list of tables is used to construct a cache entry for each table returned. Any dependencies requested through SqlCacheDependency are then made on this hidden cache entry. Thus, multiple SqlCacheDependency instances can be made for the same table, all dependent on one entry in the cache. When the table cache entry changes, it invalidates all dependent cache items.
The following is an example session (which assumes that the Northwind database and Products table are already configured for change notifications).
The user creates the page default.aspx and instructs the page to output to the cache and be dependent on the Northwind database's Products table.
The page is requested.
SqlCacheDependency is created and polling begins.
An entry in the cache is created for the Products table (e.g., Products_Table) by ASP.NET. This entry stores the changeId value returned from the database.
The output-cached page is made dependent on the Products_Table cache entry.
The page is output cached and subsequent requests draw the page from the cache.
A sales manager updates the Products table for a new Web site special sale.
The Northwind Products table changes and the changeId for this table is updated in the AspNet_SqlCacheTablesForChangeNotification table.
The next poll by ASP.NET gets the new changeId value for the Products table.
The Products_Table cache key is updated with the new changeId value, causing all dependent cache keys to be invalidated, including the default.aspx page.
The next request to the ASP.NET application causes the page to re-execute (because it is no longer in the output cache) and get added | http://codeidol.com/community/dotnet/data-caching/17262/ | CC-MAIN-2017-17 | refinedweb | 3,471 | 53.51 |
?) Thanks! There is one catch: I followed Graham's idea of generalizing the signature (which is orthogonal to my fix). As per earlier mails, the signature of the current notFollowedBy notFollowedBy :: Show tok => GenParser tok st tok -> GenParser tok st () notFollowedBy p = try (do{ c <- p; unexpected (show [c]) } <|> return () ) is rather curious--<TV lawyer>as if the author knew about the problem, and required a parser returning tok so no-one would notice</TV lawyer>. Actually, maybe the reason is error reporting. Graham's "show a" might be confusing (who knows whether a is anything like a representation of the input?). It seems you'd like to grab the Expect message out of p, but I'm not sure if this is possible. Also, in the tok case, there is a small change in the error message (c vs [c]). Even if these are not fixible, they seem to me minor problems compared with the benefit of a more general type. So I'm happy with my version as written. I'm not set up to check it in, so you should probably do it. Don't forget that there is a copy of the code in the documentation. Andrew | http://www.haskell.org/pipermail/haskell/2004-February/013631.html | CC-MAIN-2014-41 | refinedweb | 201 | 69.11 |
hi,
Does Turbo c++ compiler conforms with ANSI standards. Becaues I am using Turbo C++ 3.0 compiler. But it doesn't support the type "bool" and namespaces.
hi,
Does Turbo c++ compiler conforms with ANSI standards. Becaues I am using Turbo C++ 3.0 compiler. But it doesn't support the type "bool" and namespaces.
You probably have an older version that doesn't support the new ISO standard.
-Prelude
My best code is written with the delete key.
A valid question would be, "does any compiler fully support the standard?", I've yet to find one.
Wave upon wave of demented avengers march cheerfully out of obscurity unto the dream.
Well... in fact most do. You can include parameters that will make them compile to fully ANSI/ISO. Like the '-ansi -pedantic' for gcc I learned on this forum some time ago. (Btw, I didn't take the '-pedantic' pun lightly)
But, unfortunately, when you do that you are in for a nasty surprise. No matter how tidy your code is, the fact is that the libs that come with those compilers aren't fully compliant. I had the standard lib issuing errors all over when I tried the above on mingw
So, instead of having a fully portable language like it could be described (and it was intended) we have yet another spaghetti one simply because library designers aren't as "pedantic" as they should be.
Regards,
Mario Figueiredo
Using Borland C++ Builder 5
Read the Tao of Programming
This advise was brought to you by the Comitee for a Service Packless World | https://cboard.cprogramming.com/cplusplus-programming/18645-cplusplus-standards.html | CC-MAIN-2017-39 | refinedweb | 266 | 73.98 |
[OwnCloud] Losing contacts after sync with DAVdroid 0.8.1+
- AutoImport-pejakm last edited by rfc2822
Both 0.8.1 and 0.8.0 versions affected. I sync to ownCloud server (v8.1, yesterday upgraded from v8.0) installed on my computer. OwnCloud correctly displays 469 contacts, DAVdroid synchronizes them all (I monitor number of contacts during synchronization), but as soon as it finishes, number of contacts comes down to 37. I tried several times, contacts are always the same. I even uninstalled Viber and WhatsApp, thinking they could mess with contacts. One thing that I noticed, is that vCard version (DAVdroid settings) is 4.0, I’m not sure what was it before upgrade to 0.8.1. On my 4.4.4 rom the vCard version was always 3.0.
Currently I’m running Lollipop (Euphoria OS). Logcat extracted, I can upload if required.
- devvv4ever last edited by
seems that this could be a new issue with owncloud 8.1. if you downgrade to your previous version of oc does it work?
Reverting to ownCloud 8.0 would be too complicated, as I need to remove everything, install previous version and start over. I’ll try to figure out the difference between raw vcard file of contacts which remained in DAVdroid and those which didn’t.
Okay, It was smoother than I thought.
I reverted ownCloud back to v8.0, started from clean, also started from a clean profile in DAVdroid. This time in options there is vCard v3.0. Appart from that I had to start synchronization twice (first time only ~350 out 470 contacts was synchronized), everything works. Just left to see whether it will survive moving DAVdroid to /system and reboot.
- devvv4ever last edited by
alright, thanks for the update! yes, I thought of a problem with owncloud because with testing internally we had problems too and the owncloud releases have been always a bit unstable recently (at least with their caldav/carddav part) :]
hopefully they’ll fix it soon!
I have made a bug report upstream:.
- AutoImport-magenbrot last edited by
I have the same problem with OC 8.1 and DAVdroid 0.8.1 since upgrading owncloud. I have tried various times with various options (and also with different devices).
If I sync my contacts with “CardDAV-Sync free” from the play store every contact gets transferred (just without the groups). So maybe DAVdroid needs some changes too?
Maybe this is related to the cookie support introduced with DAVdroid/0.8.1?
Unfortunately, I couldn’t test with OwnCloud/8.1 because after the upgrade from 8.0, the OwnCloud Contact/Calendar apps were removed and I can’t install them anymore.
- AutoImport-aslmx last edited by
I have also seen a lot of Sync Issues in the notification drawer. I had not yet connected this to the OC 8.1 update, but now it looks like this is the reason. I’m to lazy to downgrade and upgrade again.
I achieved the cleanest sync by deleting both databases (phone + oc) and then imported the multi-vcf exported from OC on the phone into the newly created empty Davdroid account.
Took some time until the contacts were uploaded, but then it was at least almost okay. (some addresses seem to have been messed…).
Will try to reduce my contact editing until either side fixes this…
- AutoImport-mherzberg last edited by
After some debugging I suspect the bug lies within this line in method
deleteAllExceptRemoteNamesin class LocalCollection:319:
sqlFileNames.add(DatabaseUtils.sqlEscapeString(res.getName()));
When I checked my contact database on my device before
deleteAllExceptRemoteNamesgets executed, all my owncloud contacts were present and their source ids contained ‘@foo.bar.de.vcf’. However the
sqlEscapeStringmethod escapes the @ to %40, so (almost) no contacts matches the sql query anymore and get deleted. I replaced the line with
sqlFileNames.add(DatabaseUtils.sqlEscapeString(res.getName()).replace("%40", "@"));, and my contacts seemed to stay on my device.
- AutoImport-mhzawadi last edited by
Hi, I get the same issue with both 0.8.0 and 0.8.1 to ownCloud 8.1.
I did notice that the old ownCloud contacts didnt have the vcard 4.0 option ticked.
nginx/1.8.0
PHP 5.4.41-0+deb7u1 (fpm-fcgi) with XCache v2.0.0
ownCloud 8.1.0 with ‘memcache.local’ => ‘\OC\Memcache\XCache’
DavDroid/0.8.1 (0.8.0 also) on HTC M8 android 5.0.
Then this is DAVdroid bug, after all.
- AutoImport-mokkin last edited by
In my case the the sync of several addressbooks from Owncloud 8.1 with DavDroid 0.8.1 works fine, but the calendar sync doesn’t work. The account setup works and it says “synced” but nothing is displayed.
- AutoImport-julienval last edited would be happy to test your change if you can supply a precompiled apk
I can reproduce the issue on both of my phones
- AutoImport-flocke last edited by
I would be available for testing as well. Just upload a apk somewhere and give me the link.
- AutoImport-mherzberg last edited by
Here is the link to my test build: davdroid-debug.apk
Please note that this is just a test build from me related to this issue and should not be used for daily usage. I do not take any responsibility for any harm caused by using this apk.
This is the git diff from the code I used:
diff --git a/app/src/main/java/at/bitfire/davdroid/resource/LocalAddressBook.java b/app/src/main/java/at/bitfire/davdroid/resource/LocalAddressBook.java index 65e2891..3479ac9 100644 --- a/app/src/main/java/at/bitfire/davdroid/resource/LocalAddressBook.java +++ b/app/src/main/java/at/bitfire/davdroid/resource/LocalAddressBook.java @@ -139,7 +139,7 @@ public class LocalAddressBook extends LocalCollection<Contact> { if (remoteResources.length != 0) { List<String> sqlFileNames = new LinkedList<>(); for (Resource res : remoteResources) - sqlFileNames.add(DatabaseUtils.sqlEscapeString(res.getName())); + sqlFileNames.add(DatabaseUtils.sqlEscapeString(res.getName()).replace("%40", "@")); where = entryColumnRemoteName() + " NOT IN (" + StringUtils.join(sqlFileNames, ",") + ")"; } else where = entryColumnRemoteName() + " IS NOT NULL"; can’t reproduce that.
Log.i(TAG, "SQLite escaped: " + DatabaseUtils.sqlEscapeString("a@b.com"));
gives
a@b.comhere.
However the sqlEscapeString method escapes the @ to %40
Can you back this up? I don’t think
@should be escaped in SQL strings, and
sqlEscapeStringdoesn’t behave like that here.
When trying to reproduce this bug, I ran into. Is sharing enabled in your installation? If not, can you try to enable it?
Can you please
- delete or rename your OwnCloud log (
data/owncloud.log)
- do a DAVdroid sync
- access Contacts module in the Web Interface
- post the contents of your
owncloud.log
- AutoImport-protist last edited by
I can confirm that mherzberg’s apk fixes this for me. Thank you!
Previously, I was getting error messages in the notification menu every hour or so. Unfortunately, I forget what they said precisely.
I attempted uninstalling DAVdroid, then installing version 0.7.7. My phone would only import three contacts (out of 150). Oddly enough, I think these were the only contacts that I had added via the phone/DAVdroid. The others had been added/imported via OwnCloud’s web interface a while ago.
I also attempted installing DAVdroid 0.8.0 and 0.8.1. Neither fixed this issue. Only when installing mherzberg fix did it work. Thanks again!
- AutoImport-protist last edited by
Ahhh… after a while I’ve noticed mherzberg’s apk is buggy too. Every time DAVdroid syncs, it creates an additional copy of each contact, except the three contacts that were the only ones synced in the official versions of DAVdroid (as per my previous post). Hence, after three syncs, I have one copy of these original three in my Android address book as expected, but three copies of every other contact! | https://forums.bitfire.at/topic/689/owncloud-losing-contacts-after-sync-with-davdroid-0-8-1 | CC-MAIN-2019-26 | refinedweb | 1,298 | 68.97 |
Charting Data at the Bottom of the World
I have an odd job: I'm the only programmer for about 500 miles. I look after the experiments on a remote Antarctic research station, along with the data they produce. As well as the scientific stuff knocking about, we have between 20 and 70 people, most of them keen on the weather, either because we can't work if it's windy or because we can enjoy a spot of kite skiing if it's just windy enough. Everyone here wants to know what's going on outside.
Luckily we have a few climate science experiments running, including a weather station. For a few years now, data from the weather station has been available on people’s computers through a Perl Tk application and some slightly baroque shuttling of records between three different data servers and the network the office computers run on. All is well and good, and we leave it well alone, as it’s worked well. Recently, a new experiment installed on the station provides an up-to-the-minute profile of wind speeds over the first 30 meters of the air. It’s there to support research into interactions between snow and air in Antarctica, but it’s also crucial information if you want to head out and whiz about behind a kite.
The data from this mast goes to a remote machine that allows users to VNC in to check its health, and logs the data to a binary format of its own making. People around the station have taken to logging in to this machine before heading out, which is probably not the best way to keep the data rolling in without interruption. Rather than forbidding access to this useful source of local data, we decided to upgrade our weather display system to include the major parameters recorded by the mast.
Alas, while fairly nice to use, Tk is a bit fiddly and not exactly my cup of tea. Adding new displays to an existing application can be time-consuming, as you must re-learn the relations among each widget, pane, and button. On top of that programming burden, even if we could find every copy of the application scattered around our network, we'd have to update them all every time we added another source of data. We settled instead on a complete rewrite as a CGI script and some automatically generated graphs. A fancier man than me might call that a three-tier application, but then, he'd probably be selling you something at the same time.
Mountains of Data
Before you can see what the weather is doing (beyond looking out of the window), you need to get at the raw numbers somehow. Ours are provided by state-of-the-art scientific instruments in state-of-the-art data formats; that is to say, partly as lines of ASCII data in columns, and partly as fixed-length records in a binary file. No matter, though. Perl and some friends from CPAN make fast work of building meaning from tumbled piles of data.
Before doing anything, I set up a couple of objects to hold some data values. Each set of observations has a class corresponding to the experiment that generated it. The classes also contain
read_file factory methods that read a file and produce a list of observations. To make things as quick (to write) as possible, I used
Class::Accessor to autogenerate
get and
set methods for my objects:
# Current weather data
package Z::Weather;
use base qw(Class::Accessor);
Z::Weather->mk_accessors( qw(time temp pressure wind dir) );
This automatically creates a new() method for Z::Weather. Call it as:
my $weather = Z::Weather->new({time     => $time,
                               temp     => $temp,
                               pressure => $pres,
                               wind     => $wind,
                               dir      => $dir});
It also generates get and set accessors for each field:
# set
$weather->temp(30);

# get
my $temp = $weather->temp();
(The "codename" used when shipping items to our station is Z, so I've used that as my little local namespace, too.)
From our mast, we have a number of observations taken at different heights, so I wanted a slightly more complicated representation, using a class to represent the mast and another to represent each level on the mast.
package Z::Mast;
use base qw(Class::Accessor);
Z::Mast->mk_accessors(qw(time values));

package Z::Mast::Level;
use base qw(Class::Accessor);
Z::Mast::Level->mk_accessors(qw(wind dir level));
Remember that Z::Mast::values will set and get a reference to an array of Z::Mast::Level objects. If I wanted to enforce that, I could override the methods provided by Class::Accessor, but that would create work that I can get away without doing for this simple case.
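If I did want that check, it would take only a few extra lines. Here's a sketch of what overriding the generated accessor might look like; the validation logic is my own addition, not anything Class::Accessor requires:

```perl
package Z::Mast;
use Carp qw(croak);

# Override the generated accessor: validate on set, fall through on get.
# Class::Accessor's own get() and set() do the actual storage.
sub values {
    my $self = shift;
    if (@_) {
        my $ref = shift;
        croak "values() expects an array ref of Z::Mast::Level objects"
            unless ref $ref eq 'ARRAY'
               and !grep { !UNIVERSAL::isa($_, 'Z::Mast::Level') } @$ref;
        return $self->set('values', $ref);
    }
    return $self->get('values');
}
```

Callers keep using `$mast->values(...)` exactly as before; only bad data now dies early instead of surfacing as confusion later.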
Now that I know what the data will look like in Perl, I can wrench it from the covetous hands of our data loggers and turn it into something I can use.
First, I decided to deal with the plain ASCII file. This contains single lines, with the time of observation first, then white-space-separated values for temperature, pressure, wind speed, direction, and a few others that I don’t care about.
Z::Weather needs to use a couple of modules and add a couple of methods:
use IO::All;

sub from_file {
    my $class = shift;
    my $io    = io(shift);
    my @recs  = ();
    while (my $line = $io->readline()) {
        chomp($line);
        push @recs, $class->_line($line);
    }
    return @recs;
}
I expect to call this as:
my @weather_records = Z::Weather->from_file("weather.data");
Using the IO::All module to access the files both makes it very easy to read the file and also allows calling code to supply an IO::All object of its own, or to call this method with a filehandle already opened to the data source. This will make it easy to obtain data from some other source; for instance, if the experiment changes to provide a socket from which to read the current values.
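For instance, io() will happily wrap a reference to an in-memory string, which makes the parser trivial to exercise without touching the disk. A sketch (the sample line is invented, and Z::Weather is assumed to be loadable as above):

```perl
use IO::All;
use Z::Weather;   # the package defined earlier

# Feed from_file() a scalar reference; io() turns it into a readable
# in-memory "file", so no temporary files are needed for testing.
my $fake = "2006 02 06 01 25 -10.4 983.2 23.5 260.1\n";
my @recs = Z::Weather->from_file(\$fake);
print $recs[0]->wind, "\n";   # 23.5
```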
Parsing the data is the responsibility of another method, _line(), which expects lines like:
2006 02 06 01 25 -10.4 983.2 23.5 260.1

use DateTime;

sub _line {
    my ($class, $line) = @_;
    my @vals = split /\s+/, $line;

    # extract time fields and turn them into a DateTime object
    my ($y, $m, $d, $h, $min) =
        $line =~ /^(\d{4}) (\d\d) (\d\d) (\d\d) (\d\d)/;
    my $t = DateTime->new(year => $y, month => $m, day => $d,
                          hour => $h, minute => $min);

    # return a new Z::Weather record, using the magic new() method
    return $class->new({time     => $t,
                        temp     => $vals[5],
                        pressure => $vals[6],
                        wind     => $vals[7],
                        dir      => $vals[8],
                       });
}
split and Perl's magic make sense of the data points, and the DateTime module takes care of the details of when the record was produced. I find it much easier to turn any time-related value into a DateTime object at the soonest possible moment, so that the rest of my code can expect DateTime objects. It becomes easier to reuse in other projects. If you find yourself writing code to handle leap years every other day, then make using DateTime your number one new habit.
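A quick illustration of why, independent of the weather code:

```perl
use DateTime;

# Date arithmetic that is easy to get wrong by hand: leap years,
# month lengths, and so on are all DateTime's problem, not mine.
my $dt = DateTime->new(year => 2008, month => 2, day => 28);
print $dt->add(days => 1)->ymd, "\n";   # 2008-02-29 (2008 is a leap year)
print $dt->add(days => 1)->ymd, "\n";   # 2008-03-01

# The objects also compare sensibly, thanks to operator overloading:
print "in the past\n" if $dt < DateTime->now;
```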
I deal with the mast data in a similar way, except that this format is fixed-length binary records. The time of the recording is stored in the first four bytes as the number of seconds into an arbitrary epoch. I correct this into Unix time when creating its DateTime object. Values are stored as two-byte, network-endian unsigned shorts holding hundredths of the recorded values. unpack() comes to my aid here.
sub from_file {
    my $class = shift;
    my $io    = io(shift);
    my ($rec, @recs);
    while ($io->read($rec, 62) == 62) {
        push @recs, $class->_record($rec);
    }
    return @recs;
}

# map height of reading (in meters) to its offset in the binary record
our %heights = qw(1 24  2 28  4 32  8 36  15 40  30 44);

use constant MAST_EPOCH => 2082844800;

sub _record {
    my ($class, $rec) = @_;

    # extract the time as a 4-byte network-order integer, and correct the epoch
    my $crazy_time = unpack("N", $rec);
    my $time = DateTime->from_epoch(epoch => $crazy_time - MAST_EPOCH);

    # then a (speed, dir) pair of 2-byte values at each level's offset;
    # divide by 100 to undo the storage in hundredths
    my @vals;
    foreach my $height (sort { $a <=> $b } keys %heights) {
        my ($speed, $dir) = unpack("nn", substr($rec, $heights{$height}));
        push @vals, Z::Mast::Level->new({wind  => $speed / 100,
                                         dir   => $dir / 100,
                                         level => $height});
    }
    return $class->new({time => $time, values => \@vals});
}
Again, I can call this using any one of the types supported by IO::All. Again, I wield DateTime to my advantage to turn a time stored in an unusual epoch quickly into an object which anything or anyone else can understand. There are a few magic numbers here, but that's what you end up with when you deal with other people's crazy file formats. The key thing is to record magic numbers in one place, to allow other people to change them if they need to, both in your code and from their own code (hence the our variable), and finally, to let values pass from undocumented darkness into visible, named objects as soon as possible.
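With a binary format like this, it also pays to check your unpacking against a record you have built yourself with pack(). This little round-trip uses only core Perl; the values are invented:

```perl
# Build a fake 62-byte mast record: a big-endian timestamp, then a
# (speed, dir) pair of unsigned shorts in hundredths at offset 24,
# which is where the 1 m level lives.
my $rec = "\0" x 62;
substr($rec, 0, 4)  = pack("N", 1234567890);    # seconds in the mast's epoch
substr($rec, 24, 4) = pack("nn", 2350, 26010);  # 23.50 kts, 260.10 degrees

# Decode it the same way the record parser does.
my $time          = unpack("N", $rec);
my ($speed, $dir) = unpack("nn", substr($rec, 24));
printf "%d %.2f %.2f\n", $time, $speed / 100, $dir / 100;
# prints: 1234567890 23.50 260.10
```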
Displaying Data
I now have hold of the weather data and have forced it into a form that I can follow. Now I get to show it to someone else. I did this in two different ways: as raw data through a web page and as a pre-generated chart embedded in the page.
In each case, the code has to read in files to obtain the necessary data:
my @weather_records = Z::Weather->from_file('weather.data.dat');
Then it needs to produce the web page:
use Template;

my $template = Template->new();

print "Content-type: text/html\n\n";
$template->process(\*DATA, {
    now     => $weather_records[-1],
    records => \@weather_records,
}) || die "Could not process template: " . $template->error() . "\n";
This isn't really all that interesting. In fact, it looks almost like this does nothing at all. I've pulled in the Template module, told it to build and output a template defined after the __END__ of the script, and given it two template variables to play with. The template looks something like:
__END__
<html><head><title>Weather</title></head>
<body>
<h2>Latest weather data at [% now.time %]</h2>
<p>T: [% now.temp %] °C
   P: [% now.pressure %] kPa
   W: [% now.wind %] kts
   D: [% now.dir %] °</p>
<p><img src="/weather_chart.png"><br>
   <img src="/mast_chart.png"></p>
<table>
<tr><th> Time </th><th> Temp </th><th> Wind </th></tr>
[% FOREACH rec IN records %]
  <tr>
    <td>[% rec.time %]</td>
    <td>[% rec.temp %]</td>
    <td>[% rec.wind %]</td>
  </tr>
[% END %]
</table>
</body></html>
The template uses the syntax of the Template Toolkit, a general-purpose templating framework. It's useful because it allows the separation of display and formatting of data from the code that generates it. There's no Perl code in the template, and no HTML will appear in any of my Perl code. While the output generated now is ugly and basic, it will be easy to make it flashy later, once I have the program working, without having to change anything in the program itself to do so. As I've prepared our data carefully as objects with sensible methods, I can just hand a bunch of these over to the template and let it suck out whatever it wants to show.
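If you have not met the Template Toolkit before, the whole cycle fits in a few lines. This fragment is independent of the weather code, with made-up variable names:

```perl
use Template;

my $tt  = Template->new();
my $tpl = "Hello [% name %], it is [% temp %] degrees outside.\n";

# process() takes a template (here a reference to a string), a hash of
# template variables, and somewhere to send the output.
$tt->process(\$tpl, { name => 'Alice', temp => -10.4 }, \my $out)
    or die $tt->error();
print $out;   # Hello Alice, it is -10.4 degrees outside.
```

Swap the string reference for a filename or a filehandle (as the CGI script does with \*DATA) and the calling code doesn't change.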
Pretty Pictures
Producing the charts is, again, a simple business (by now, the theme of this article should be emerging). Gone are the days when you’d have to scratch your head figuring out how to draw lines and plot points; gone even are the days when you have to bang your head hard against the confused API of some long-forgotten module. Taking the mast values as an example, I first need to read in the data:
my @mast_values = Z::Mast->from_file('mast.data.dat');
Because old weather is old news, I throw away any values older than three hours, using DateTime and DateTime::Duration methods in a grep:
use DateTime;
use DateTime::Duration;

my $now = DateTime->now();
my $age = DateTime::Duration->new(hours => 3);
@mast_values = grep { $_->time + $age > $now } @mast_values;
This is so, so much easier than fiddling around with epochs and 3*3600 all over the place. If you find yourself writing 3600 anywhere in your code, you should be using DateTime::Duration instead. Next, I feed the data points into the Chart::Lines module, part of the Chart distribution. I use this in three phases. First, I create a new Chart and specify how large the resulting graphic should be:
use Chart::Lines;
my $chart = Chart::Lines->new($x_size, $y_size);
Then I set up a few options to tweak how the chart will display:
$chart->set(
    legend          => 'none',
    xy_plot         => 'true',
    grey_background => 0,
    y_label         => 'Wind kts',
    x_label         => 'Minutes ago',
    colors => {
        y_label    => [0xff, 0xee, 0xee],
        text       => [0xff, 0xee, 0xff],
        dataset0   => [0xff, 0,    0   ],
        dataset1   => [0,    0xff, 0xff],
        dataset2   => [0,    0,    0xff],
        background => [0x55, 0x00, 0x55],
    },
);
These are mostly self-explanatory; the Chart documentation covers them in detail. I set xy_plot to true so that the module will use the first dataset as the x values and all of the other datasets as the y values for a line. I set a bunch of rather bright colors, to keep my avid customers cheerful, and set the text used to label the chart.
my @labels = map {($now->epoch - $_->time->epoch) / 60} @mast_values;
Finally, I used a series of map expressions to extract x and y values from the data. One turns the DateTime times into a number of minutes ago; these values are the x values. The y values are the appropriate parameters extracted from the nested Z::Mast and Z::Mast::Level objects. The rest of the code provides the data to the plotting method of the chart, directing it to write out a .png file (Figure 1).
$chart->png("mast.png", [
    \@labels,
    [map { $_->values()->[0]->wind } @mast_values],
    [map { $_->values()->[1]->wind } @mast_values],
    [map { $_->values()->[2]->wind } @mast_values],
]);
Figure 1. The resulting chart
All I need now is a working HTTP server and a
crontab entry or two to run the graphic generation programs. It is possible to use the
Chart modules to generate CGI output directly using the
Chart::cgi method, but I found that this was too slow once lots of different clients accessed the weather data at the same time. It was a simple task to instead switch to a
crontab-based approach for the graphs, with a CGI script still providing real-time access to the current values.
Conclusions
The
Chart family of modules provides more than just an
x-
y plot. Pie, bar, Pareto, and mountain charts, amongst others, are available through the same API as I discussed in this article. They are just as easy to whip into being to satisfy even the most demanding of data consumers.
The Template Toolkit is used mainly for more complicated websites and content management systems, but it will simplify the production of simple sites and pages, allowing you to concentrate on the detail of the problem by separating data and its presentation. Even though a problem is simple and allows a fast solution, you can reach your goal faster still by pulling in big tools to do little jobs.
As for the
DateTime module, I simply wouldn’t go anywhere without it. These days, I find myself automatically typing
use DateTime; along with
warnings and
strict at the head of every Perl program I write.
Class::Accessors makes the creation of data representation objects faster than typing in a C struct, provides some level of documentation about what the data you’re dealing with, and allows for reuse. You could just stick everything into layers of nested hashes and arrays, but this is a certain path to eventual confusion.
Class::Accessors will keep you sane and save your fingers at the same time.
IO::All should be a part of your day-to-day toolkit; the new idioms it provides will soon see you using it everywhere, even in one-liners.
One of the many joys of programming lies in the satisfaction we receive when we make someone’s life that little bit better. Perl makes it easy, fast, and fun for us to tread that path. Perl’s greatest strength, the rock upon which its greatness is founded, is the speed with which we can take a problem, or a cool idea, and structure our half-formed thoughts into a soundly built solution.
Download the example code for this article. | https://www.perl.com/pub/2006/05/04/charting-data.html/ | CC-MAIN-2022-40 | refinedweb | 2,756 | 56.18 |
1) Which of the following statements are true regarding the Support packages?
A. Support package replaces an object affected by an error, unlike manual corrections support packages are not recognized as modifications during upgrade and are overwritten.
B. Support packages are to be applied in the correct order by creating a patch queue and required confirmation at each step. SAP support package manager (SPAM) is the sap tool which has to be used for applying support packages.
C. Support packages will alter the user developed programs wherever necessary, which are belonging to any user created namespaces.
D. During the support package application, modification adjustment may be required using SPDD and SPAU. Modifications request appears for every sap standard object which is changed from its original (ex for applying an OSS note. ) Modification adjustment performed once can be transported to other systems in the landscape.
E. Support packages delivered by SAP contain tools and standard advance corrections. A support package eliminates an error in an SAP system
Answer: A, B, D
2) In a system landscape of multiple SAP Instances, how does SAP application take care of data consistency and scalability? (Choose all that apply.)
A. Presentation servers can log on to any application or to a group of application servers, which can be controlled by login load balancing.
B. Message server takes care of controlling the updates, being executed by the individual application servers and well as by the central instance.
C. Several application servers can be installed and each of them configured to create buffers. Application servers communicate with the Central Instance (CI), where CI controls the integrity of the application and each of the apps, takes its share of the load.
D. Message server co-ordinates the updates, which are performed on the central instance.
Answer: A and C
3) What is the role of SAP lock mechanism, how does it help in maintaining the consistency of data in a logical unit of work (LOW). (Choose all that apply.)
A. There exists a lock server on every sap application server apart from the central instance server.
B. The purpose of lock mechanism is to ensure that same data is not modified by two users in the database simultaneously.
C. Lock entries should never be deleted, even if they are several days.
D. Locks are generated automatically according to the definition of the object in the ABAP dictionary. Locks entries in the enqueue table are usually set and deleted automatically when user programs access a data object and release it again.
Answer: B and D
4) Error messages such as 'System panic', 'Not enough core', 'No space left on device', which refer to memory bottlenecks or more precisely a swap space bottleneck are observed in the system log. What could be the causes? (Choose all that apply.)
A. There is not enough free swap space available in the operating system, the swap space in the operating system is consumed, this could also happen due to non-sap application running on the same system.
B. The amount of physical RAM should be twice the amount of the swap space
C. The problem could be with SAP processes, or external processes, the operating system could not allocate any more heap memory, this lead to the stopping of the work processes.
D. The highest value set in the sap profile parameter that limits the swap space usage has been exceeded.
Ans: A, C and D
5) The Internet communication manager as an integral part of the SAP WAS, and it serves which of the following functions? (Choose all that apply.)
A. ICM processes the incoming request using “threads” the threads communicate with the work processes to complete a task.
B. ICM ensures communication between the SAP WAS and the outside world using HTTP, HTTPS and SMTP protocols.
C. ICM generates the Internet server cache to optimize performance when redundant requests.
D. ICM acts like a firewall and ensures that the communication is encrypted and signed with a digital certificate.
Ans: B
6) An asynchronous update is usually used in an SAP Logical unit work (SAP – LUW). How does this process help the users in relevance to updating? (Choose all that apply.)
A. Asynchronous update allows multiple users to update the same record and consistency is verified after the completion of transactions.
B. Asynchronous update uses in an SAP-LUW allows the system to temporarily collect changes made by users and then at the end of the dialog phase, make the necessary changes to the database in a separate update work process.
C. Asynchronous update does not affect the way the database performs an update, the database remains consistent at all times.
Ans: B
7) Snapshots of sm50 show several work processes in PRIV mode frequently. Which of the following this does NOT indicate?
A. Extended memory or the limit for the extended memory should be increased.
B. A few users might be executing queries which high amount of resources, the user context data should be analyzed for resource intensive queries.
C. Transactions ST02 and ST06 should be analyzed for buffer swaps.
D. The swap space at the OS level is not sufficient for the current user load.
E.The work processes currently in PRIV mode are stopped and waiting for resources.
Ans: B
8) What role does SAP memory management play to avoid bottlenecks in the system? (Choose all that apply.)
A. A user request is not processed in just one work process, the user switches in and out of work processes during a transaction, making an optimal usage of work processes.
B. The dispatcher prevents the users from using excessive memory by locking the users after a certain memory limit.
C. SAP memory provides buffer sharing, user context data is not always stored in a local memory, instead it resides in extended memory after using initial roll memory. Extended memory is a configurable amount of common memory area for all the user context data. This feature provides efficient memory usage.
D. When the context of a work process changes, the user context is not copied, instead it is assigned to alternating work processes by mapping operations. Less data is copied and mapping is not work-intensive. This improves response times and is resource efficient.
Ans: A, C and D
9) An error occurred in the Oracle database, which makes the database unusable, ex. A user deletes a table from the database. How would you correct such an error? (Choose all that apply.)
A. Restoration of the database can be a solution, provided it is followed by a point in time recovery, however, there can be a data loss, any changes which were made to the object from the break down moment would be lost.
B. Lost data and the table can be recovered by creating the table and data can be loaded into the table using programs.
C. In general, you cannot use the Oracle Export/Import tools to recover a lost SAP object. The reason for this is that the SAP database tables are often shared system-wide. A user cannot import the ATAB (central control table) to recover an individual SAP table, for example, as this would risk overwriting the work of other users.
D. An object from the ABAP Dictionary or the ABAP Repository is involved. The ABAP/4 Dictionary and the correction system both perform version backups of these objects within the SAP System. If you can carry on working with that version of the object (ideally, the object has not been changed recently), then you can restore it.
E. You had exported the table earlier using the SAP utility R3trans, thus backing it up. You can use this copy to restore the condition of the table at the time of the export. (Take into account possible database inconsistencies).
Ans: A and C
10) An error occurred in the Oracle database after which the Oracle instance stopped responding. This failure can happen due to CPU or memory overload or due to hardware failure also. The following points are true in such a situation. (Choose all that apply.)
A. Oracle database will need an instant recovery after a restart, this recovery will be performed automatically by the system recovery (SMON) if the file system is intact.
B. Only Transactional data of the SAP application involving an update will be lost, other transactional data will be retrieved from the SAP extended memory.
C. All transaction data involved in the SAP transactions which is not saved by the user will be lost.
D. Database administrator intervention may be required for the recovery if the file system is not intact i.e. if control file, data files or redo log files are not available.
Ans: A, B and D
11) The transaction for update management (sm13) displays a number of records with status “error”. What could be the probable reasons? (Choose all that apply.)
A. The system is too busy with a batch loading process and hence could not perform any updates.
B. Many users are in the middle of executing transactions and did not press the save button yet.
C. By selecting the error record in the update management, the relevant error information recorded in the update header can be displayed. Isolated local problems are usually errors in update function modules or in the associated programs.
D. There could be a problem affecting the entire database, the update problem would be resolved when the system error is eliminated.
Ans: C and D
12) How can we judge whether the Oracle database performance is good and if there is any scope for improvement? (Choose all that apply.)
A. Transaction ST03 gives a snapshot and historical information of the performance data, the database time is displayed in the overall response time, the database time should not exceed more than 40% of the total response time.
B. A snapshot of the OS also indicates the free CPU time, paging and memory, a resource crunch in the OS will automatically slow down the database performance. An idle time of 30% for the CPU is an indication of resource availability.
C. Check whether the Oracle shared processes are running at the OS level
D. Disk access times can be found in transaction ST04, this gives response time for the data files information of the access times for queries, if response time is more than 10ms, reorganization or distribution of data files is recommended.
Ans: A, B and D
13) Which of the following are recommendations for a login load balancing? (choose all that apply.)
A. In a landscape with several application server instances, specific servers should be assigned to specific application workgroup.
B. It is not practical to assign separate application servers if the users are not using many application components, common logon groups are recommended.
C. Always have more than one application server in a logon group so that users are directed to the available server in case one is down.
D. If there are two application servers assign each server to a separate logon group to optimize the resources.
Ans: C and D
14) How does SAP Web application server function as a Web server and a Web client? (Choose all that apply.)
A. SAP WAS can create a HTTP request in an ABAP program in its Client Role.
B. SAP WAS can accept HTTP, HTTPS and SMTP protocols from a Web Client.
C. A configured J2EE server alone acts a web Server.
D. The in-built HTTP application server, with or without J2EE server can be a Web server and also a Web Client.
Ans) B
15) Which of the following is NOT one of the security features provided in the SAP WAS?
A. SAP trust manager supports the use of public-key technology to establish trust infrastructure. The trust manager performs the PSE and certificate maintenance functions such as generating key pairs, creating certificate requests to be signed by a Certification Authority (CA), and maintaining the list of trusted CAs that the server accepts.
B. SAP WAS supports the Secure Sockets Layer (SSL) protocol.
C. Information about system resources and system services (system ID, application configuration, printer configuration) requires kerberos authentication.
D. User authentication is provided using either logon tickets or X.509 client certificates.
16) What are the advantages of using a SAP Web dispatcher? (Choose all that apply.)
A. SAP web dispatcher can be a one point access for HTTP(S) providing load balancing, by dispatching the request to the server with the greatest capacity.
B. SAP web dispatcher decides whether the incoming HTTP request be forwarded to ABAP or a J2EE server, it determines the group of servers that could execute the request. Load is then balanced within this group.
C. Web dispatcher can also be used to start and stop the sap web server.
D. SAP Web dispatcher is useful for load balancing in the Web environment. In the classic SAP system, load balancing is done by the message server
17) Which one of the following step cannot lead to a clue if problems occur during startup of SAP application on Windows platform?
A. Check the startup log located in the work directory, Check the trace files of the individual SAP processes, dev_ms for developer trace of message server, dev_disp for developer trace of dispatcher, dev_w (m number of work process) for developer trace of work processes.
B. Event Viewer to access the Administrative Tools (Common) Select Programs Application. You are shown aWindows event log. In the menu bar, select Log list of errors, warnings, and information generated by the application software. You can display detailed information by clicking on a particular log.
C. Check if Windows operating system password has been changed for the SIDadm user.
D. Check that the SAP service (SAP_, e.g. SAPC11_00) was started.
18) Some users experience slow response time and others experience good response time on a regular day when accessing a SAP application. What could be the possible reasons? (Choose all that apply.)
A. A database lock created automatically by the system in a users transaction, can lock the update for a particular object, other users required to use the same transaction have to wait until the lock is released.
B. Database may not be available at this point of time.
C. Dialog processes have been loaded with long running dialog steps, a few users have occupied many dialog work processes. And the remaining dialog work processes have to attend a large number of users, who face long response time.
D. One or more work processes must have been in PRIV mode, check sm66 for the work process status.
19) Which of the following statements are true regarding the startup procedure of an SAP system on Windows OS? (Choose all that apply.)
A. R/3 processes read the instance profile for the parameter values and start accordingly.
B. Sapstart triggered from the SAP MMC console program starts the message server, dispatcher, collector and the sender. On a dialog instance, message server is not started.
C. R/3 processes read the appropriate parameters from a C source in the R/3 kernel, these values are replaced from the values in the default profile and the instance profile subsequently.
D. Work processes are created according to the information provided in the profiles. And the work processes get connected to the database.
20) What is zero administration memory management, in reference to Windows operating system, how effective is it? (choose all that apply.)
A. Zero administration memory management works by dynamically managed extended memory, providing a nearly unlimited memory resource.
B. Since most of the user context data is located in the extended memory instead of local memory, memory utilization is very efficient.
C. Unlimited memory is allocated to individual work processes, ensuring the completeness of the user request.
D. Under zero administration memory management, extended memory extends itself as user requirement increases, however, the max limit for this extension is the Windows page file. | https://www.stechies.com/support-package-eliminates-error-sap-system/ | CC-MAIN-2020-29 | refinedweb | 2,663 | 54.12 |
CUDAfy.NET is a powerful open source library for non-graphics programming on NVIDIA Graphics Processing Units (GPU) from Microsoft .NET. Recently support for AMD GPUs and x86 CPUs has been added. This brings the CUDA programming model to devices beyond those from NVIDIA. You can get a better background on CUDAfy in the following CodeProject articles:
Without duplicating too much from these earlier articles, let's ask briefly why we should consider using a GPU for non-graphics tasks? The reason is that given the right task, there is a massive amount of horsepower available in GPUs that is outside games, video processing and CAD packages relatively unused. Initially in specific markets such as seismology, finance, physics, biology, we saw increasing use of GPUs to accelerate the extremely compute intensive tasks. However because programming of GPUs is still a different approach to conventional programming, the benefit had to be significant enough to make the effort worthwhile. NVIDIA's CUDA platform has really simplified development. Arguments that it is still too complicated miss the point that massively parallel programming actually needs a different way of looking at things. How you split an algorithm over 1000s of cores must and should be different. Do not get fooled by Intel's claims for their Xeon Phi. Just because each core is roughly speaking x86, it does not mean you can just run your app there and get improvements. Also be wary of joining the OpenCL camp too quickly. Yes it does in theory allow targeting of AMD GPUs, Intel CPUs, NVIDIA GPUs and even FPGAs, but anyone who has compared CUDA runtime with OpenCL quickly sees how complex and verbose OpenCL can be.
The CUDA programming model remains the finest means of handling massively parallel architectures. Tools such as CUDAfy have lowered the hurdle for .NET developers by exposing the programming interface in a clean and easy manner, arguably cleaner than CUDA itself. There remains the one issue with CUDA - it only supports NVIDIA GPUs. Until now. CUDAfy.NET has been updated to allow use of the CUDA model with AMD GPUs and x86 CPUs. Programming of some of Bittware's FPGA boards should also be possible. How is this done? Under the hood CUDAfy can make use of OpenCL. This requires installing the free NVIDIA, AMD and/or Intel OpenCL SDKs. You as someone with CUDA experience do not need to learn much new to now target all these other processors.
This article builds upon the earlier High Performance Queries: GPU vs. PLINQ vs. LINQ and ports this to also support OpenCL devices and adds benchmarking so you can easily compare performance. Ready for a LINQ vs PLINQ vs CUDA vs AMD GPU vs Intel CPU shoot-out? CUDA and OpenCL.
We will be using CUDAfy.NET for programming the GPU. The CUDAfy library is included in the downloads, however if you want to use CUDA or OpenCL, you will need to download and install:
Ironically getting CUDA working is the toughest option since it also requires Visual Studio. Without Visual Studio, you can target NVIDIA GPUs by using OpenCL which is part of the CUDA SDK. query. These are defined as TrackPoint and TrackQuery. It would be convenient if the host and GPU code can be shared where possible:
TrackPoint
TrackQuery(float lon, float.
struct
CudafyIgnore
Time
DateTime
[CudafyIgnore]
public DateTime Time
{
get { return new DateTime(TimeStamp); }
set { TimeStamp = value.Ticks; }
}
Now compared to CUDA, OpenCL is rather primitive and basic. Constructors and methods cannot be placed in structs. Because of this, we've needed to change the code used in the previous article. Nothing more complicated than copy-paste and minor edit, but this is a limitation of OpenCL and CUDAfy cannot (currently) shield you from this. This also answers the question of why keep supporting CUDA if NVIDIA GPUs also support OpenCL - because much more is possible with CUDA.
Now another change that sharp eyed readers will have noticed is that here we are using float instead of double. This is because unlike in CUDA if a device does not support double, then the code will not compile. It does not silently revert to single. Since double is only supported on higher end AMD GPUs, we'll live without it for now.
float
double
double
The method that will do the work in testing whether a GPS point is within radius meters of a target point is defined as:
public double DistanceTo(TrackPoint A, TrackPoint B)
{
float dDistance = Single.MinValue;
float dLat1InRad = A.Latitude * CONSTS.PI2;
float dLong1InRad = A.Longitude * CONSTS.PI2;
float dLat2InRad = B.Latitude * CONSTS.PI2;
float dLong2InRad = B.Longitude * CONSTS.PI2;
float dLongitude = dLong2InRad - dLong1InRad;
float dLatitude = dLat2InRad - dLat1InRad;
// Intermediate result a.
float a = GMath.Pow(Math.Sin(dLatitude / 2.0F), 2.0F) +
GMath.Cos(dLat1InRad) * GMath.Cos(dLat2InRad) *
GMath.Pow(GMath.Sin(dLongitude / 2.0F), 2.0F);
// Intermediate result c (great circle distance in Radians).
float c = 2.0F * GMath.Atan2(GMath.Sqrt(a), GMath.Sqrt(1.0F - a));
// Distance
dDistance = CONSTS.EARTHRADIUS * c;
return dDistance * 1000.0F;
} because it's the great circle. Calculating the distance requires a fair bit of floating point math processing.
255
AsParallel
private IEnumerable<TrackPoint> GetPoints(TrackPoint[] targets,
float,
float radius, long targetStartTime, long targetEndTime)
{
float minDist = Single];
float d = TrackQuery.DistanceTo(tp, target);
if (d <= radius && d < minDist)
{
minDist = d;
index = (byte)i;
}
}
}
return index;
}
Let's now look at the code that will communicate with the GPU. This is implemented in the TrackQuery class. The constructor takes care of Cudafying or loading from a previously serialized module. For CUDA,. For OpenCL, it is roughly the same - we generate OpenCL code and store this code into the module, but we do not compile. Compilation takes place when loading the module. Here we wish to Cudafy two types: TrackQuery and TrackPoint.
public class TrackQuery
{
public TrackQuery(GPGPU gpu)
{
var props = gpu.GetDeviceProperties();
var name = props.PlatformName + props.Name;
// Does an already serialized valid cudafy module xml file exist. If so
// no need to re-generate it.
var mod = CudafyModule.TryDeserialize(name);
// The gpu can be either a CudaGPU or an OpenCLDevice.
// Realize that an NVIDIA GPU can be both!
// And an Intel CPU can show up as both an AMD and an
// Intel OpenCL device if both OpenCL SDKs are
// installed.
CudafyTranslator.Language = (gpu is CudaGPU) ? eLanguage.Cuda : eLanguage.OpenCL;
if (mod == null || !mod.TryVerifyChecksums())
{
// Convert the .NET code within these two types
// into either CUDA C or OpenCL C. If CUDA C then
// also compile into PTX.
mod = CudafyTranslator.Cudafy(typeof(TrackPoint), typeof(TrackQuery));
// Store to file for re-use
mod.Serialize(name);
}
try
{
// Load the module on to the device. If OpenCL then compile the source.
gpu.LoadModule(mod);
}
catch (Exception ex)
{
System.Diagnostics.Debug.WriteLine(mod.CompilerOutput);
throw;
}
_gpu = gpu;
} 160,000,000 bytes (two System.Single.Single. OpenCL and CUDA are very similar in this regard so adding OpenCL support to CUDAfy was relatively straightforward.. Constant memory is handled differently on OpenCL, but CUDAfy shields you from this and standard CUDA practice can be used.. OpenCL specifies the blocks and threads differently from CUDA but again this is transparent to the CUDAfy developer. struct
public IEnumerable<TrackPoint> SelectPoints(TrackPoint[] targets,
float. OpenCL uses a rather different means of accessing the IDs, one that is arguably cleaner than CUDA. With CUDAfy, you can have your cake and eat it because either the OpenCL or CUDA means can be used for both OpenCL and CUDA targets. For this example, we will make use of yet another form of GPU memory: shared. This memory is shared between all the threads within one block. Again OpenCL and CUDA are similar here., float];
float minDist = Single.
float d = DistanceTo(tp, time is in milliseconds and lower is better. The results are.
The difference between CUDA and OpenCL on the same NVIDIA device is minimal in this case. However in some algorithms the difference is much more pronounced. What is interesting is that OpenCL on the first generation Intel Core i5 CPU is about twice as good as PLinq. For socket 1155 Ivy Bridge we can expect better results since the Intel OpenCL SDK can make use of the integrated GPUs. As we move towards Haswell, Intel OpenCL could be very promising especially for low power, small footprint solutions.
The above benchmark was run on a mighty Intel i7 980X machine. I left out the LINQ test since it would make the other times virtually invisible. On-board is an NVIDIA GTX660Ti and an AMD 6570. Obviously things are a bit quicker than with the laptop. The most interesting result is actually that running on the Intel i7 CPU the AMD OpenCL SDK (yellow) wipes the floor with the Intel OpenCL SDK (grey)! Both are of course still left standing when measured against the GPUs. Even the relatively cheap AMD 6570 makes a good effort.
The flexibility of OpenCL to target GPU, CPU or even FPGA is enticing, but having said this, if you know that you will be using an NVIDIA device, there are still many compelling reasons to stick with CUDA. CUDA remains a much more pleasurable development experience, the language and tools are far more refined and the availability of a wide range of libraries that are interoperable with CUDA is a deal clincher. Think of FFT, BLAS and Random number generation as just the beginning of the CUDA offerings. Many of these libraries have been made accessible from CUDAfy, but only for CUDA targets.
This project makes use of the splash screen code featured in the Code Project article How to do Application Initialization while showing a SplashScreen.
The Cudafy.NET SDK includes two large example projects featuring amongst others ray tracing, ripple effects, and fractals. Many of the examples are fully supported on both CUDA and OpenCL.. If using the LGPL version, we ask you to please consider making a donation to Harmony through Education. This small charity is helping handicapped children in developing countries. Read more on the charity page of the Cudafy website.
This article, along with any associated source code and files, is licensed under The GNU Lesser General Public License (LGPLv3)
float num2 = 3,402823E+38f;
General News Suggestion Question Bug Answer Joke Rant Admin
Use Ctrl+Left/Right to switch messages, Ctrl+Up/Down to switch threads, Ctrl+Shift+Left/Right to switch pages. | http://www.codeproject.com/script/Articles/View.aspx?aid=572583 | CC-MAIN-2014-35 | refinedweb | 1,734 | 57.37 |
getdns reference
getdns contexts
This section describes the getdns Context object, as well as its methods and attributes.
class getdns.Context([set_from_os])
Creates a context, an opaque object which describes the environment within which a DNS query executes. This includes namespaces, root servers, resolution types, and so on. These are accessed programmatically through the attributes described below.
Context() takes one optional constructor argument.
set_from_os is an integer and may take the value 0 or 1. If 1, which most developers will want, getdns will populate the context with default values for the platform on which it's running.
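A minimal configuration sketch tying several of the attributes below together. The constant and attribute names are the ones documented on this page; the try/except guard keeps the snippet importable on systems where the getdns bindings are not installed:

```python
# Hedged sketch: build a context seeded from the OS and configure it as a
# stub resolver that prefers TLS. Requires the getdns Python bindings.
try:
    import getdns

    ctx = getdns.Context(set_from_os=1)           # start from platform defaults
    ctx.resolution_type = getdns.RESOLUTION_STUB  # stub, not recursive
    ctx.dns_transport_list = [getdns.TRANSPORT_TLS, getdns.TRANSPORT_TCP]
except ImportError:
    ctx = None  # bindings unavailable; treat this as a sketch only
```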
The Context class has the following public read/write attributes:
append_name
Specifies whether to append a suffix to the query string before the API starts resolving a name. Its value must be one of
getdns.APPEND_NAME_ALWAYS,
getdns.APPEND_NAME_ONLY_TO_SINGLE_LABEL_AFTER_FAILURE,
getdns.APPEND_NAME_ONLY_TO_MULTIPLE_LABEL_NAME_AFTER_FAILURE, or
getdns.APPEND_NAME_NEVER. This controls whether or not to append the suffix given by
suffix.
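The interaction between append_name and suffix can be modeled in plain Python. The helper below is purely illustrative (getdns applies the policy internally, and the *_AFTER_FAILURE variants additionally condition on the number of labels in the name):

```python
def candidate_names(name, suffixes, policy):
    """Hypothetical model of the order in which query names are tried."""
    appended = [name + "." + s.rstrip(".") for s in suffixes]
    if policy == "never":            # APPEND_NAME_NEVER
        return [name]
    if policy == "always":           # APPEND_NAME_ALWAYS
        return appended
    # *_AFTER_FAILURE policies: try the name as given first, then fall
    # back to the suffixed forms if resolution fails.
    return [name] + appended

print(candidate_names("myhost", ["example.com."], "after_failure"))
# -> ['myhost', 'myhost.example.com']
```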
dns_root_servers
The value of dns_root_servers is a list of dictionaries containing addresses to be used for looking up top-level domains. Each dict in the list contains two key-value pairs:
- address_data: a string representation of an IPv4 or IPv6 address
- address_type: either the string “IPv4” or “IPv6”
For example, the addresses list could look like
>>> addrs = [ { 'address_data': '2001:7b8:206:1::4:53', 'address_type': 'IPv6' }, ... { 'address_data': '65.22.9.1', 'address_type': 'IPv4' } ] >>> mycontext.dns_root_servers = addrs
dns_transport_list
An ordered list of transport options to be used for DNS lookups, ordered by preference (first choice as list element 0, second as list element 1, and so on). The possible values are
getdns.TRANSPORT_UDP,
getdns.TRANSPORT_TCP, and
getdns.TRANSPORT_TLS.
dnssec_allowed_skew
Its value is the number of seconds of skew that is allowed in either direction when checking an RRSIG’s Expiration and Inception fields. The default is 0.
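The skew setting amounts to widening the RRSIG validity window by the given number of seconds on each side. A plain-Python sketch of the comparison (illustrative only, not part of the getdns API):

```python
def rrsig_times_valid(inception, expiration, now, allowed_skew=0):
    """True if `now` falls within the RRSIG validity window, widened by
    allowed_skew seconds on each side (all values are epoch seconds)."""
    return (inception - allowed_skew) <= now <= (expiration + allowed_skew)

# A signature that expired 30 seconds ago still validates with 60s skew:
print(rrsig_times_valid(1000, 2000, now=2030, allowed_skew=60))  # True
```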
dnssec_trust_anchors
Its value is a list of DNSSEC trust anchors, expressed as RDATAs from DNSKEY resource records.
edns_client_subnet_private
May be set to 0 or 1. When 1, it requests that upstream resolvers not reveal the query's originating network.
edns_maximum_udp_payload_size
Its value must be an integer between 512 and 65535, inclusive. The default is 512.
follow_redirects
Specifies whether or not DNS queries follow redirects. The value must be one of
getdns.REDIRECTS_FOLLOW for normal following of redirects through CNAME and DNAME; or
getdns.REDIRECTS_DO_NOT_FOLLOW to cause any lookups that would have gone through CNAME and DNAME to return the CNAME or DNAME, not the eventual target.
implementation_string
A string describing the implementation of the underlying getdns library, retrieved from libgetdns. Currently ““
limit_outstanding_queries¶
Specifies limit (an integer value) on the number of outstanding DNS queries. The API will block itself from sending more queries if it is about to exceed this value, and instead keep those queries in an internal queue. The a value of 0 indicates that the number of outstanding DNS queries is unlimited.
namespaces¶
The namespaces attribute takes an ordered list of namespaces that will be queried. (Important: this context setting is ignored for the getdns.general() function; it is used for the other functions.) The allowed values are
getdns.NAMESPACE_DNS,
getdns.NAMESPACE_LOCALNAMES,
getdns.NAMESPACE_NETBIOS,
getdns.NAMESPACE_MDNS, and
getdns.NAMESPACE_NIS. When a normal lookup is done, the API does the lookups in the order given and stops when it gets the first result; a different method with the same result would be to run the queries in parallel and return when it gets the first result. Because lookups might be done over different mechanisms because of the different namespaces, there can be information leakage that is similar to that seen with POSIX getaddrinfo(). The default is determined by the OS.
resolution_type¶
Specifies whether DNS queries are performed with nonrecursive lookups or as a stub resolver. The value is either
getdns.RESOLUTION_RECURSINGor
getdns.RESOLUTION_STUB.
If an implementation of this API is only able to act as a recursive resolver, setting resolution_type to
getdns.RESOLUTION_STUBwill throw an exception.
suffix¶
Its value is a list of strings to be appended based on
append_name. The list elements must follow the rules in RFC 4343#section-2.1
tls_authentication¶
The mechanism to be used for authenticating the TLS server when using a TLS transport. May be
getdns.AUTHENTICATION_REQUIREDor
getdns.AUTHENTICATION_NONE. (getdns.AUTHENTICATION_HOSTNAME remains as an alias for getdns.AUTHENTICATION_REQUIRED but is deprecated and will be removed in a future release)
tls_query_padding_blocksize¶
Optional padding blocksize for queries when using TLS. Used to increase the difficulty for observers to guess traffic content.
upstream_recursive_servers¶
A list of dicts defining where a stub resolver will send queries. Each dict in the list contains at least two names: address_type (either “IPv4” or “IPv6”) and address_data (whose value is a string representation of an IP address). It might also contain “port” to specify which port to use to contact these DNS servers; the default is 53. If the stub and a recursive resolver both support TSIG (RFC 2845), the upstream_list entry can also contain tsig_algorithm (a string) that is the name of the TSIG hash algorithm, and tsig_secret (a base64 string) that is the TSIG key.
There is also now support for pinning an upstream’s certificate’s public keys with pinsets (when using TLS for transport). Add an element to the upstream_recursive_server list entry, called ‘tls_pubkey_pinset’, which is a list of public key pins. (See the example code in our examples directory).
The
Contextclass includes public methods to execute a DNS query, as well as a method to return the entire set of context attributes as a Python dictionary.
Contextmethods are described below:
general(name, request_type[, extensions][, userarg][, transaction_id][, callback])¶
Context.general()is used for looking up any type of DNS record. The keyword arguments are:
name: a string containing the query term.
request_type: a DNS RR type as a getdns constant (listed here)
extensions: optional. A dictionary containing attribute/value pairs, as described below
userarg: optional. A string containing arbitrary user data; this is opaque to getdns
transaction_id: optional. An integer.
callback: optional. This is a function name. If it is present the query will be performed asynchronously (described below).
address(name[, extensions][, userarg][, transaction_id][, callback])¶
There are two critical differences between
Context.address()and
Context.general()beyond the missing request_type argument:
- In
Context.address(), the name argument can only take a host name.
Context.address()always uses all of namespaces from the context (to better emulate getaddrinfo()), while
Context.general()only uses the DNS namespace.
hostname(name[, extensions][, userarg][, transaction_id][, callback])¶
The address is given as a dictionary. The dictionary must have two names:
address_type: must be a string matching either “IPv4” or “IPv6”
address_data: a string representation of an IPv4 or IPv6 IP address
service(name[, extensions][, userarg][, transaction_id][, callback])¶
namemust be a domain name for an SRV lookup. The call returns the relevant SRV information for the name
get_api_information()¶
Retrieves context information. The information is returned as a Python dictionary with the following keys:
version_string
implementation_string
resolution_type
all_context
all_contextis a dictionary containing the following keys:
append_name
dns_transport
dnssec_allowed_skew
edns_do_bit
edns_extended_rcode
edns_version
follow_redirects
limit_outstanding_queries
namespaces
suffix
timeout
tls_authentication
upstream_recursive_servers
The
getdns module has the following read-only attribute:
Extensions¶
Extensions are Python dictionaries, with the keys being the names of the
extensions. The definition of each extension describes the values that
may be assigned to that extension. For most extensions it is a Boolean,
and since the default value is “False” it will most often take the value
getdns.EXTENSION_TRUE.
The extensions currently supported by
getdns are:
-
dnssec_return_status
-
dnssec_return_only_secure
-
dnssec_return_validation_chain
-
return_both_v4_and_v6
-
add_opt_parameters
-
add_warning_for_bad_dns
-
specify_class
-
return_call_reporting
Extensions for DNSSEC¶
If an application wants the API to do DNSSEC validation for a request, it must set one or more DNSSEC-related extensions. Note that the default is for none of these extensions to be set and the API will not perform DNSSEC validation. Note that getting DNSSEC results can take longer in a few circumstances.
To return the DNSSEC status for each DNS record in the
replies_tree list, use the
dnssec_return_status
extension. Set the extension’s value to
getdns.EXTENSION_TRUE to cause the returned status to have
the name
dnssec_status added to the other names in
the record’s dictionary (“header”, “question”, and so on). The
potential values for that name are
getdns.DNSSEC_SECURE,
getdns.DNSSEC_BOGUS,
getdns.DNSSEC_INDETERMINATE, and
getdns.DNSSEC_INSECURE.
If instead of returning the status, you want to only see
secure results, use the
dnssec_return_only_secure
extension. The extension’s value. Set the extension’s value to
getdns.EXTENSION_TRUE to cause a set of additional
DNSSEC-related records needed for validation to be returned
in the
response object...
Returning both IPv4 and IPv6 responses¶
Many applications want to get both IPv4 and IPv6 addresses
in a single call so that the results can be processed
together. The
address()
method is able to do this automatically. If you are
using the
general() method,
you can enable this with the
return_both_v4_and_v6
extension. The extension’s value must be set to
getdns.EXTENSION_TRUE to cause the results to be the lookup
of either A or AAAA records to include any A and AAAA
records for the queried name (otherwise, the extension does
nothing). These results are expected to be usable with Happy
Eyeballs systems that will find the best socket for an
application.
Setting up OPT resource records¶
For lookups that need an OPT resource record in the
Additional Data section, use the
add_opt_parameters
extension. The extension’s value (a dict) contains the
parameters; these are described in more detail in
RFC 2671. They are:
-
maximum_udp_payload_size: an integer between 512 and 65535 inclusive. If not specified it defaults to the value in the getdns context.
-
extended_rcode: an integer between 0 and 255 inclusive. If not specified it defaults to the value in the getdns context.
-
version: an integer betwen 0 and 255 inclusive. If not specified it defaults to 0.
-
do_bit: must be either 0 or 1. If not specified it defaults to the value in the getdns context.
-
options: a list containing dictionaries for each option to be specified. Each dictionary contains two keys:
option_code(an integer) and
option_data(in the form appropriate for that option code).
client_subnet.py program in our example directory
shows how to pack and send an OPT record.
Getting Warnings for Responses that Violate the DNS Standard¶
To receive a warning if a particular response violates some
parts of the DNS standard, use the
add_warning_for_bad_dns
extension. The extension’s value is set to
getdns.EXTENSION_TRUE to cause each reply in the
replies_tree to contain an additional name,
bad_dns (a
list). The list is zero or more values that indicate types of
bad DNS found in that reply. The list of values is:
A DNS query type that does not allow a target to be a CNAME pointed to a CNAME
One or more labels in a returned domain name is all-numeric; this is not legal for a hostname
A DNS query for a type other than CNAME returned a CNAME response
Using other class types¶
The vast majority of DNS requests are made with the Internet
(IN) class. To make a request in a different DNS class, use,
the
specify_class extension. The extension’s value (an int)
contains the class number. Few applications will ever use
this extension.
Extensions relating to the API¶
An application might want to see debugging information for
queries, such as the length of time it takes for each query
to return to the API. Use the
return_call_reporting
extension. The extension’s value is set to
getdns.EXTENSION_TRUE to add the name
call_reporting (a
list) to the top level of the
response object. Each member
of the list is a dict that represents one call made for the
call to the API. Each member has the following names:
-
query_nameis the name that was sent
-
query_typeis the type that was queried for
-
query_tois the address to which the query was sent
-
start_timeis the time the query started in milliseconds since the epoch, represented as an integer
-
end_timeis the time the query was received in milliseconds since the epoch, represented as an integer
-
entire_replyis the entire response received
-
dnssec_resultis the DNSSEC status, or
getdns.DNSSEC_NOT_PERFORMEDif DNSSEC validation was not performed
Asynchronous queries¶
The getdns Python bindings support asynchronous queries, in which a query returns immediately and a callback function is invoked when the response data are returned. The query method interfaces are fundamentally the same, with a few differences:
- The query returns a transaction id. That transaction id may be used to cancel future callbacks
- The query invocation includes the name of a callback function. For example, if you’d like to call the function “my_callback” when the query returns, an address lookup could look like>>> c = getdns.Context() >>> tid = c.address('', callback=my_callback)
- We’ve introduced a new
Contextmethod, called
run. When your program is ready to check to see whether or not the query has returned, invoke the run() method on your context. Note that we use the libevent asynchronous event library and an event_base is associated with a context. So, if you have multiple outstanding events associated with a particular context,
runwill invoke all of those that are waiting and ready.
- In previous releases the callback argument took the form of a literal string, but as of this release you may pass in the name of any Python runnable, without quotes. The newer form is preferred.
The callback script takes four arguments:
type,
result,
userarg, and
transaction_id. The ``type
argument contains the callback type, which may have one of
the following values:
-
getdns.CALLBACK_COMPLETE: The query was successful and the results are contained in the
resultargument
-
getdns.CALLBACK_CANCEL: The callback was cancelled before the results were processed
-
getdns.CALLBACK_TIMEOUT: The query timed out before the results were processed
-
getdns.CALLBACK_ERROR: An unspecified error occurred
The
result argument contains a result object, with the
query response
The
userarg argument contains the optional user argument
that was passed to the query at the time it was invoked.
The
transaction_id argument contains the transaction_id
associated with a particular query; this is the same
transaction id that was returned when the query was invoked.
This is an example callback function:
def cbk(type, result, userarg, tid): if type == getdns.CALLBACK_COMPLETE: status = result.status if status == getdns.RESPSTATUS_GOOD: for addr in result.just_address_answers: addr_type = addr['address_type'] addr_data = addr['address_data'] print '{0}: {1} {2}'.format(userarg, addr_type, addr_data) elif status == getdns.RESPSTATUS_NO_SECURE_ANSWERS: print "{0}: No DNSSEC secured responses found".format(hostname) else: print "{0}: getdns.address() returned error: {1}".format(hostname, status) elif type == getdns.CALLBACK_CANCEL: print 'Callback cancelled' elif type == getdns.CALLBACK_TIMEOUT: print 'Query timed out' else: print 'Unknown error' | https://getdns.readthedocs.io/en/latest/functions.html | CC-MAIN-2022-05 | refinedweb | 2,413 | 56.25 |
How to Index Anything
Up to now, we've talked only about indexing HTML, XML and text files. Here's a more-advanced example: indexing PDF documents from the Linux Documentation Project.
For SWISH-E to index arbitrary files, PDF or otherwise, we must convert the files to text, ideally resembling HTML or XML, and arrange to have SWISH-E index the results.
We could index the PDF files by converting each to a corresponding file on disk and then index those, but instead we'll use this opportunity to introduce a more flexible way to index data: SWISH-E's programmatic access method (Figure 2).
To index the PDF files, start by creating a SWISH-E configuration file, calling it howto-pdf.conf and endowing it with the following contents:
# howto-pdf.conf IndexDir ./howto-pdf-prog.pl # prog file to hand us XML docs IndexFile ./howto-pdf.index # Index to create. UseStemming yes MetaNames swishtitle swishdocpath
Here, the IndexDir directive specifies what SWISH-E calls an external program that will return data about what is to be indexed, instead of a directory containing all the files. The UseStemming yes directive requests SWISH-E to stem words to their root forms before indexing and searching. Without stemming, searching for the word “runs” on a document containing the word “running” will not match. With stemming, SWISH-E recognizes that “runs” and “running” both have the same root, or stem word, and finds the document relevant.
Last in our configuration file, but certainly not least, is the MetaNames directive. This line adds a special ability to our index—the ability to search on only the titles or filenames of the files.
Now, let's write the external program to return information about the PDF files we're indexing. Conveniently, the SWISH-E source ships with an example module, pdf2xml.pm, which uses the xpdf package to convert PDF to XML, prefixed with appropriate headers for SWISH-E. We use this module, copied to ~/indices, in our external program howto-pdf-prog.pl:
#!/usr/bin/perl -w use pdf2xml; my @files = `find ../HOWTO-pdfs/ -name '*.pdf' -print`; for (@files) { chomp(); my $xml_record_ref = pdf2xml($_); # this is one XML file with a SWISH-E header print $$xml_record_ref; }
Equipped with the SWISH-E configuration file and the external program above, let's build the index:
% swish-e -c howto-pdf.conf -S prog
The -S prog option tells SWISH-E to consider the IndexDir specified as a program that returns information about the data to be indexed. If you forget to include -S prog when using an external program with SWISH-E, you'll be indexing the external program itself, not the documents it describes.
When the PDF index is built, we can perform searches:
% swish-e -f howto-pdf.index -m 2 -w boot disk
We should get results similar to:
1000 ../HOWTO-pdfs/Bootdisk-HOWTO.pdf "Bootdisk-HOWTO.pdf" 127194 983 ../HOWTO-pdfs/Large-Disk-HOWTO.pdf "Large-Disk-HOWTO.pdf" 85280
The MetaNames directive also lets us search on the titles and paths of the PDF files:
% swish-e -f howto-pdf.index -w swishtitle=apache % swish-e -f howto-pdf.index -w swishdocpath=linux
All corresponding combinations of searches are supported. For example:
% swish-e -f howto-pdf.index -w '(larry and wall) OR (swishdocpath=linux OR swishtitle=kernel)'
The quoting above is necessary to protect the parentheses from interpretation by the shell.
For our final example, we show how to make a useful and powerful index of man pages and how to use the SWISH::API Perl module to write a searching client for the index. Again, first write the configuration file:
# sman-index.conf IndexFile ./sman.index # Index to create. IndexDir ./sman-index-prog.pl IndexComments no # don't index text in comments UseStemming yes MetaNames swishtitle desc sec PropertyNames desc sec
We've described most of these directives already, but we're defining some new MetaNames and introducing something called PropertyNames.
In a nutshell, MetaNames are what SWISH-E actually searches on. The default MetaName is swishdefault, and that's what is searched on when no MetaName is specified in a query. PropertyNames are fields that can be returned describing hits.
SWISH-E results normally are returned with several Auto Properties including swishtitle, swishdesc, swishrank and swishdocpath. The MetaNames directive in our configuration specifies that we want to be able to search independently not only on each whole document, but also on only the title, the description or the section. The PropertyNames line specifies that we want the sec and desc properties, the man page's section and short description, to be returned separately with each hit.
The work of converting the man pages to XML and wrapping it in headers for SWISH-E is performed in Listing 1 (sman-index-prog.pl).
Listing 1. sman-index-prog.pl converts man pages to XML for indexing.
#!/usr/bin/perl -w use strict; use File::Find; my ($cnt, @files) = (0, get_man_files()); warn scalar @files, " man pages to index...\n"; for my $f (@files) { warn "processing $cnt\n" unless ++$cnt % 20; my ($hashref) = parse_man($f); my $xml = make_xml($hashref); my $size = length $xml; # NOTE: Fails if UTF print "Path-Name: $f\n", "Document-Type: XML*\n", "Content-Length: $size\n\n", $xml; } sub get_man_files { # get english manfiles my @files; chomp(my $man_path = $ENV{MANPATH} || `manpath` || '/usr/share/man'); find( sub { my $n = $File::Find::name; push @files, $n if -f $n && $n =~ m!man/man.*\.! }, split /:/, $man_path ); return @files; } sub make_xml { # output xml version of hash my ($metas) = @_; # escapes vals as side-effect my $xml = join ("\n", map { "<$_>" . escape($metas->{$_}) . "</$_>" } keys %$metas); my $pre = qq{<?xml version="1.0"?>\n}; return qq{$pre<all>$xml</all>\n}; } sub escape { # modifies scalar you pass! return "" unless defined($_[0]); s/&/&/g, s/</</g, s/>/>/g for $_[0]; return $_[0]; } sub parse_man { # this is the bulk my ($file) = @_; my ($manpage, $cur_content) = ('', ''); my ($cur_section,%h) = qw(NOSECTION); open FH, "man $file | col -b |" or die "Failed to run man: $!"; my ($line1, $lineM) = (scalar(<FH>) || "", ""); while ( <FH> ) { # parse manpage into sections $line1 = $_ if $line1 =~ /^\s*$/; $manpage .= $lineM = $_ unless /^\s*$/; if (s/^(\w(\s|\w)+)// || s/^\s*(NAME)/$1/i){ chomp( my $sec = $1 ); # section title $h{$cur_section} .= $cur_content; $cur_content = ""; $cur_section = $sec; # new section name } $cur_content .= $_ unless /^\s*$/; } $h{$cur_section} .= $cur_content; # examine NAME, HEADer, FOOTer, (and # maybe the filename too). 
close(FH) or die "Failed close on pipe to man"; @h{qw(A_AHEAD A_BFOOT)} = ($line1, $lineM); my ($mn, $ms, $md) = ("","","",""); # NAME mn, DESCRIPTION md, & SECTION ms for(sort keys(%h)) { # A_AHEAD & A_BFOOT first my ($k, $v) = ($_, $h{$_}); # copy key&val if (/^A_(AHEAD|BFOOT)$/) { #get sec or cmd # look for the 'section' in ()'s if ($v =~ /\(([^)]+)\)\s*$/) {$ms||= $1;} } elsif($k =~ s/^\s*(NOSECTION|NAME)\s*//) { my $namestr = $v || $k; # 'cmd - a desc' if ($namestr =~ /(\S.*)\s+--?\s*(.*)/) { $mn ||= $1 || ""; $md ||= $2 || ""; } else { # that regex could fail. $md ||= $namestr || $v; } } } if (!$ms && $file =~ m!/man/man([^/]*)/!) { $ms = $1; # get sec from path if not found } ($mn = $file) =~ s!(^.*/)|(\.gz$)!! unless $mn; my %metas; @metas{qw(swishtitle sec desc page)} = ($mn, $ms, $md, $manpage); return ( \%metas ); # return ref to 5-key hash. }
The first for loop in Listing 1 is the main loop of the program. It looks at each man page, parses it as needed, converts it to XML and wraps it in the appropriate headers for SWISH-E:
get_man_file() uses File::Find to traverse the man directories to find man page source files.
make_xml() and escape() together create XML from the hashref returned by parse_man().
parse_man() performs the nitty-gritty work of getting the relevant fields from the man page source.
Now that we've explained it, let's use it:
% swish-e -c sman-index.conf -S prog
When that's done, you can test the index as before, using swish-e's -w option.
As our final example, we discuss a Perl script that uses SWISH::API to use the index we just built to provide an improved version of the UNIX standby apropos. The code is included in Listing 2 (sman). Here's a brief rundown: lines 1-14 set things up and parse command-line options, lines 15-23 issue the query and do cursory error handling and lines 24-39 present the search results using Properties returned through the SWISH::API.
Listing 2. sman is a command-line utility to search man pages.
#!/usr/bin/perl -w use strict; use Getopt::Long qw(GetOptions); use SWISH::API; my ($max,$rankshow,$fileshow,$cnt) = (20,0,0,0); my $index = "./sman.index"; GetOptions( "max=i" => \$max, "index=s" => \$index, "rank" => \$rankshow, "file" => \$fileshow, ); my $query = join(" ", @ARGV); my $handle = SWISH::API->new($index); my $results = $handle->Query( $query ); if ( $results->Hits() <= 0 ) { warn "No Results for '$query'.\n"; } if ( my $error = $handle->Error( ) ) { warn "Error: ", $handle->ErrorString(), "\n"; } while ( ($cnt++ < $max) && (my $res = $results->NextResult)) { printf "%4d ", $res->Property( "swishrank" ) if $rankshow; my $title = $res->Property( "swishtitle" ); if (my $cmd = $res->Property( "cmd" )) { $title .= " [$cmd]"; } printf "%-25s (%s) %-30s", $title, $res->Property( "sec" ), $res->Property( "desc" ); printf " %s", $res->Property( "swishdocpath" ) if $fileshow; print "\n"; }
The Perl client is that simple. Let's use ours to issue searches on our man pages such as:
% ./sman -m 1 boot disk
We should get back:
bootparam (7) Introduction to boot time para...
But we now also can do searches like:
% ./sman sec=3 perl
to limit searches to section 3. The sman program also accepts the command-line option --max=# to specify the maximum number of hits returned, --file to show the source file of the man page and --rank to show each hit's rank for the given query:
% ./sman --max=1 --file --rank boot
This returns:
1000 lilo.conf (5) configuration file for lilo /usr/man/man5/lilo.conf.5
Notice the rank as the first column and the source file as the last one.
An enhanced version of the sman package will be available at joshr.com/src | http://www.linuxjournal.com/article/6652?page=0,2&quicktabs_1=1 | CC-MAIN-2016-22 | refinedweb | 1,681 | 61.97 |
Hi there,
matplotlib.text.Text._get_layout(self, renderer) caches
its return value in the dictionary matplotlib.text.Text.cached.
Since it is never emptied, it causes problems when one
creates many figures.
Below, t0.png is ok but t1.png has the vertical tick label 0 in the
wrong place.
import matplotlib
matplotlib.use('Agg')
from matplotlib.matlab import *
y = [[100, 250], [10, 25]]
for i in range(2):
figure(1)
bar([0,1], y[i])
savefig('t%d.png' % i)
# Uncomment to fix
#matplotlib.text.Text.cached = {}
close('all')
--
Patrik
Here's a proposed patch for the caching problem in text.py.
(I'm sending it before someone actually adds
matplotlib.text.Text.cached = {}
to, e.g., matlab.close(). Oh, the horror :-)
--
Patrik
>>>>> "Patrik" == Patrik Simons <patrik.simons@...> writes:
Patrik> Here's a proposed patch for the caching problem in
Patrik> text.py. (I'm sending it before someone actually adds
Patrik> matplotlib.text.Text.cached = {} to, e.g.,
Patrik> matlab.close(). Oh, the horror :-)
What happened when you put it there?
It's not clear to me where to me where that misplaced zero is coming
from. Since the two figures are identical in size, I would think the
cached location of '0' from the first iteration of the loop would be
suitable for the second iteration. Do you understand how this is
failing?
The main reason for the cache was for efficiency in animated plots.
Eg, if you are just updating the data in a plot and then redrawing,
you don't want to do all the number crunching for text layout. With
rotated text and matrix operations to get the layout right, this can
get expensive.
I read over your patch. I wonder if a simpler and cleaner solution
might just be to move the cached into the __init__ method. Ie, make
it instance specific. This would still provide the cache efficiency
for animated plots, but should fix the problem you encountered.
It might also be less mind-bending than the solution you posted, at
least at this hour of the morning :-)
But I *would* like to understand how the current situation is
failing. I note that it does not occur if you replace figure(1) with
figure(i+1).
JDH | http://sourceforge.net/p/matplotlib/mailman/matplotlib-devel/thread/m2r7ouqz4m.fsf@mother.paradise.lost/ | CC-MAIN-2014-41 | refinedweb | 375 | 68.47 |
React: Communicating With Children
How do custom React Components communicate with their children?
A Simple Case
Given two components,
A and
B, where
A renders
arbitrary children and
B renders a
display prop.
import React, { Component } from "react";class A extends Component {render() {return <div>{this.props.children}</div>;}}class B extends Component {render() {return <span>{this.props.display}</span>;}}
They can be rendered with React DOM as such:
ReactDOM.render(<A><B display='thing-one'/><B display='thing-two'/></A>,document.body)
which yields a simple
thing-onething-two result.
Controlling Props
So far, nothing special. Let’s say we want A to control the
display property of all
Bs or wrap every child in an
element with a specific CSS class. We can simply alter the
A component to map over any children, replacing the
display prop using
cloneElement.
class A extends Component {render() {const { children } = this.props;return (<div>{children &&React.Children.map(children, (child, i) =>React.cloneElement(child, {display: `thing-${i}`}))}</div>);}}
Note that the same render:
ReactDOM.render(<A><B display="thing-one" /><B display="thing-two" /></A>,document.body);
returns a new result
thing-0thing-1. This is because we
have successfully overridden the
display prop of all
children rendered by
A.
Handlers and State
Let’s say that every time the user clicks on
B, we want to
update the state of
A with a counter. We can simply add
some inital state to
A and pass in an additional handler
prop which is defined on
A. We use fat-arrow autobinding
shorthand syntax so that
this int he
onChildClick
handler refers to
A’s
this. Then we make sure that
B
can accept an
onClick handler.
class A extends Component {state = {counter: 0};onChildClick = e => {this.setState({counter: this.state.counter + 1});};render() {const { children } = this.props;return (<div><h1>{this.state.counter}</h1>{children &&React.Children.map(children, (child, i) =>React.cloneElement(child, {display: `thing-${i}`,onClick: this.onChildClick}))}</div>);}}class B extends Component {render() {return (<span onClick={this.props.onClick}>{this.props.display}</span>);}}
Now, rendering the same way as before, we get a new output:
0thing-0thing-1
and every time
thing-0 or
thing-1 is clicked, the
counter in
A is updated. | https://www.christopherbiscardi.com/post/react-communicating-with-children/ | CC-MAIN-2019-47 | refinedweb | 375 | 52.05 |
Configuring the JSON Serializer9:49 with James Churchill
Let's see how we can configure the JSON serializer in order to resolve the issue that we encountered when serializing our data to JSON.

git checkout 3.3 -b configuring-the-json-serializer
Configuring Json.NET to Ignore Model Properties
Fortunately, the Json.NET framework provides a convenient way to tell the serializer to ignore a property. To get started with this change, we need to add the Json.NET framework—using the NuGet package manager—to the shared class library.
Go ahead and stop the app if it's running. Then use the NuGet package manager to add the Json.NET framework (v6.0.4) to the shared class library. We're installing an older version of the Json.NET framework, because that's the version that was installed when we installed the Web API framework.
After the package has finished installing, open the Activity.cs file located in the "Models" folder.
And add the JsonIgnore attribute to the
Entries property.
[JsonIgnore] public IList<Entry> Entries { get; set; }
Now, let's retest by pressing F5 to start debugging. Using Postman again, make a GET request against the "/api/entries" endpoint. In Visual Studio, let's remove any breakpoints and press F5 to continue execution.
And back in Postman, we can see a much slimmed down version of our JSON entries data!
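With the `Entries` property ignored, each `Activity` nested inside an entry now serializes only its scalar properties, so the reference loop never starts. The trimmed response might look something like this — the property names follow the model classes shown above, but the values (and any properties not shown in this course's snippets) are illustrative:

```json
[
  {
    "Id": 1,
    "Date": "2015-07-04T00:00:00",
    "Activity": { "Id": 2, "Name": "Running" },
    "Duration": 30.0,
    "Intensity": 1
  }
]
```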
Reducing Response Data Payloads Using DTOs
Our current response data payload contains more data than it needs to. As we saw above, we can decorate the Activity
Entries property with a
JsonIgnore attribute in order to instruct the JSON serializer to not serialize that property.
We could also leverage data transfer objects—or DTOs—to refine or shape the response data. If we had an EntryListDto class that looked like this:
public class EntryListDto { public int Id { get; set; } public DateTime Date { get; set; } public string ActivityName { get; set; } public decimal Duration { get; set; } public string IntensityName { get; set; } }
We could use like this in our EntriesController's
Get method:
public IHttpActionResult Get() { var entries = _entriesRepository.GetList(); var entryDtos = entries.Select(e => new EntryListDto() { Id = e.Id, Date = e.Date, ActivityName = e.Activity.Name, Duration = e.Duration, IntensityName = e.Intensity.ToString() }).ToList(); return Ok(entryDtos); }
We'll also see an example later in this section of using DTOs to prevent users from under-posting data.
Additional Learning
- Json.NET ContractResolver
- Entity Framework Resources
- 0:00
When making a GET request against the entries endpoint,
- 0:03
we encountered an issue with serializing our data to JSON.
- 0:06
Let's take a closer look to why this error is occurring.
- 0:10
In the shared class library,
- 0:13
let's open the Entry.cs file in the models folder.
- 0:18
When an entry object is serialized,
- 0:20
each entry class property is written to the response stream.
- 0:31
One of those properties is the activity navigation property.
- 0:45
If we review the link query,
- 0:47
that is being executed when we call the entries repository's GetList method,
- 0:52
we can see that the activity navigation property is being included.
- 0:57
That means that when the serializer encounters the activity property, it'll
- 1:02
hold a reference to an activity object, which the serializer will serialize.
- 1:15
When the activity object is serialized,
- 1:17
each activity class property is written to the response string.
- 1:21
Including the entries navigation collection property.
- 1:27
At this point you might wonder,
- 1:29
will the entries navigation collection property contain any items?
- 1:35
In the link query, we can see that we aren't explicitly loading the activity
- 1:39
entries navigation collection property.
- 1:41
But unfortunately, at least in this situation, it doesn't matter.
- 1:46
EF's automatic relationship fix-up will ensure that the entry's property
- 1:50
will contain items.
- 1:52
Since each entry object that an activity could be associated with,
- 1:57
will already be loaded into the database context.
- 2:00
So, the serializer will enumerate the entries collection,
- 2:04
serializing each entry object.
- 2:07
Each entry object's activity property will be serialized, so
- 2:11
each activity object will also be serialized
- 2:14
which in turn will cause the entries property to be serialized, and so on.
- 2:20
This is known as a reference loop.
- 2:22
Jason.net with just the JASON framework, that web APIs JASON media type form matter
- 2:27
utilizes, is intelligent enough to detect when a reference loop occurs, and
- 2:32
will by default throw an exception.
- 2:35
Fortunately, Jason.Net provides an option for configuring reference loop handling,
- 2:39
that will prevent the exception from being thrown.
- 2:48
In our Web API configuration file, we can configure the JsonSerializerSettings.
- 2:55
First, let's get a reference to the JsonSerializerSettings object.
- 3:01
>> Let's name our variable JsonSerializerSettings.
- 3:08
Then, to get a reference to the serializer settings object, we reference
- 3:13
config.Formatters.JsonFormatter.Serialize- rSettings.
- 3:22
Then we can set the reference loop handling property
- 3:26
value to the reference loop handling ignore enumeration value.
- 3:30
This configures Jason.Net to simply ignore any reference loops that it encounters.
- 3:36
Before we retest our API, let's make one more change to our serializer settings.
- 3:43
Json.Net, by default, will preserve the casing of our property names.
- 3:50
Our C Sharp model properties are named using pascal case,
- 3:54
meaning that the first letter is upper cased.
- 3:58
So, all of our property names in the response JSON data
- 4:01
will also use pascal case.
- 4:04
In JavaScript it's a common convention when naming properties to use camel case.
- 4:09
Meaning that the first letter is lower cased.
- 4:12
As a result, when our client app displays data from our API, they'll look for
- 4:16
property names that start with a lower case letter, not an upper case letter.
- 4:21
JavaScript is case sensitive.
- 4:24
So even this small difference in naming, is significant.
- 4:27
So much so, that if we dont camel case our property names,
- 4:32
our client app won't be able to display data from our API.
- 4:37
Luckily, we can configure Json.Net to camel case our property names.
- 4:41
To do that we can set
- 4:45
the jsonSerializerSettings.ContractResolver
- 4:54
property, to an instance of
- 4:59
the CamelCasePropertyNamesContractResolver.
- 5:08
Be sure to add a using directive to the Newtonsoft.Jason.Serialization name space.
- 5:18
The ContractResolver property, provides a way to customize how
- 5:22
the Jason.Serializer serializes, and de-serializes .Net objects to JSON.
- 5:29
The CamelCasePropertyNamesContractResolver class,
- 5:33
inherits from the default ContractResolver class.
- 5:38
And overrides the base class,
- 5:40
so that the JSON property names are written in CamelCase.
- 5:44
With this configuration change in place, let's retest our API.
- 5:58
Using Postman again, make a get request against the API entries endpoint.
- 6:07
Here we are in our constructor for our API controller.
- 6:10
Let's remove our breakpoints, and press F5 to continue execution.
- 6:17
And we didn't get an exception this time, and in Postman,
- 6:21
we can see our entry data formated as JSON.
- 6:27
Let's take a closer look at our JASON data,
- 6:29
by copying the response data to the clipboard.
- 6:37
And pasting it into a new text file in visual studio.
- 6:59
Notice that Json.Net is still serializing the activity objects entries navigation
- 7:03
collection property.
- 7:05
But, each entry object within that collection,
- 7:08
doesn't include its activity property.
- 7:12
This keeps the reference loop from occurring.
- 7:16
While we want each entries activity property to be serialized ,we don't need
- 7:21
the activities entries property to be serialized, the client doesn't use it.
- 7:26
Including it, doesn't break anything in a client, but
- 7:29
it makes the overall size of the response payload bigger than it needs to be.
- 7:35
Fortunately, the Jason.Net framework provides a convenient way,
- 7:38
to tell the serializer to ignore a property.
- 7:41
By decorating that property with a JASON ignore attribute.
- 7:45
For more information on how to do that, see the teacher's notes.
- 7:49
Now, lets test using the fitness for our client map.
- 7:59
This red banner here,
- 8:00
is telling us that we're still using the client side in memory data.
- 8:04
So, we need to update the client apps configuration,
- 8:08
use the API To do that,
- 8:13
open the root of the project, the client-app-config.json.file.
- 8:19
And change the use in memory data app setting to false.
- 8:24
Save the file, switch back to the browser, and reload the page.
- 8:30
And here's our data displayed in the Fitness Frog client app.
- 8:37
It's important to review the data that you send to your API's clients.
- 8:41
Ease of use, performance, and security, are all important considerations.
- 8:47
By sending only the data that clients actually need.
- 8:51
Will help make your API easier to use,
- 8:53
while also reducing the overall payload of your HTTP responses.
- 8:58
It's also important to not accidentally leak any sensitive information.
- 9:03
There are a number of techniques, that you can use to refine your HTTP responses.
- 9:08
For instance you can decorate your model class properties,
- 9:11
with JSON ignore attributes.
- 9:13
Which will cause Json.Net to ignore those properties
- 9:16
during the serialization process.
- 9:19
You could also leverage data transfer objects, or DTO's.
- 9:24
DTOs are classes like entities are model classes, but are used specifically for
- 9:29
the transfer of data between the API, and the client.
- 9:33
Having a set of classes for
- 9:34
your data transfer objects separate from your model classes, gives you
- 9:39
complete control over the properties, that are included in your HTTP responses.
- 9:44
For more information about either of these techniques, see the teacher's notes. | https://teamtreehouse.com/library/configuring-the-json-serializer | CC-MAIN-2019-43 | refinedweb | 1,792 | 57.87 |
I am using the Blue Pelican Java textbook and am stuck on the project for Lesson 19. It asks to:
Modify the code below to print two side-by-side columns. The first column should be in ascending order (like the code below will print), and the second column should be in descending order. The output should be:
Ascend Descend
Agnes Thomas
Alfred Mary
Alvin Lee
Bernard Herman
Bill Ezra
Ezra Bill
Herman Bernard
Lee Alvin
Mary Alfred
Thomas Agnes
import java.util.*;
public class Tester
{
public static void main(String args[])
{
String ss[] = {"Bill", "Mary", "Lee", "Agnes", "Alfred", "Thomas", "Alvin", "Bernard", "Ezra", "Herman"};
Arrays.sort(ss);
for(String varSs: ss)
System.out.println(varSs);
}
}
You could try:
Arrays.sort(ss, Collections.reverseOrder());
import java.util.Collections which you've already imported includes the reverseOrder method which will sort in descending order. | https://codedump.io/share/oUcjobPwYwAR/1/lesson-19-project---two-orders-for-price-of-one---sorting-arrays | CC-MAIN-2018-13 | refinedweb | 142 | 65.73 |
tuning in, you can click on my mugshot or this link to get to the first two posts of the ongoing “Dojo Goodness” series that I’m writing to promote my upcoming book, Dojo: The Definitive Guide, which is available on Amazon as well as the O’Reilly catalog.
One of the first things you should know is that some of the most commonly used animation functions are packaged right into Base while others that offer more frill are available via Core’s
fx module. A way to help keep track of where everything is at is to look at the namespace–everything in Base is located in the base level
dojo namespace, while the Core add-ons are available via the
dojo.fx namespace. For this episode, we’ll be using the
fadeIn and
fadeOut functions as well as the uber-flexible
animateProperty function that are all included with Base. We’ll tap into
dojo.fx to get
chain and
combine, which can be used to chain together and combine animations, respectively.
You can use Dojo’s official API to get the full scoop on any of the specific functions mentioned, but for brevity, we’ll stick to the most basic usage here. For the fading functions, that amounts to a couple of one-liners to fade in and fade out a node. As the upcoming snippet suggests, Dojo’s animation API generally returns an object that you have to manually start via its
play() method. In addition to getting some other finely grained control, this makes it that much simpler to chain and combine animations (as you’ll see in a moment.) Finally, some code:
//fade some nodes! dojo.fadeOut({node : 'someNodeId'}).play(); //then sometime later... dojo.fadeIn({node : 'someNodeId'}).play();
The
animateProperty function is pretty intuitive as well and is used to animate arbitrary CSS properties. For the upcoming example, we’ll be using it to animate the height and width of a square on the screen to create an implode and explode effect like so:
//create an implode animation var implode = dojo.animateProperty({ node : 'someNodeId', properties : { height : {end : 0}, width : {end : 0} } }); //create an explode animation (assumes width and height are both 0 when it is run) var explode = dojo.animateProperty({ node : 'someNodeId', properties : { height : {end : 300}, //fill in whatever height/width you need... width : {end : 300} } });
But wait–that’s not all. Simple animations are cool and all, but we can also spice it up by chaining and combining them; as you’re about to see, it’s as easy as it should be. Here’s the chaining portion, which entails nothing more than passing in a series of animation objects to the
dojo.fx.chain function:
//chain together two fade animations dojo.fx.chain([ dojo.fadeOut({node : 'someNodeId'}), dojo.fadeIn({node : 'someNodeId'}) ]).play();
Combining works the very same way. If you wanted to combine a
fadeOut and the custom
implode function, you could do it like so:
//combine two animations dojo.fx.combine([ dojo.fadeOut({node : 'someNodeId'}), implode //the same one we created a few moments ago ]).play();
That’s it for the animation part of this episode, but to give you a way to manage all of this mayhem, let’s use some
Button dijits, which make it dirt simple to execute JavaScript code. In this particular example, we’ll use
dojo/method
SCRIPT tags to intercept the
Button dijit’s
onClick extension point and kick of an animation all from markup! The general pattern works thusly:
<div dojoType="dijit.form.Button">Do Something! <script type="dojo/method" event="onClick" args="evt"> //you can inspect the W3C standardized event object, evt, or do something else like... someAnimation.play(); //other sweet JavaScript code goes here... </script> </div>
So, without further ado, here’s the grand finale that puts all of this in perspective. This code creates a blue square in the center of your screen (relative to the viewport) and provides you with three buttons to control some animations. You should be able to copy and paste it directly into a local page and it should just work.
<html> <head> <title>Animation Station!</title> <style type="text/css"> @import ""; @import ""; </style> <script type="text/javascript" src="" djConfig="parseOnLoad:true"> </script> <script type="text/javascript"> dojo.require("dojo.fx"); //for dojo.fx.chain and dojo.fx.combine dojo.require("dojo.parser"); dojo.require("dijit.form.Button"); dojo.addOnLoad(function() { //get the dimensions of the current viewport var viewport = dijit.getViewport(); var viewport_height = viewport.h; var viewport_width = viewport.w; //center a square in the viewport var square_dimension = 300; var e = document.createElement("div"); e. <div dojoType="dijit.form.Button">Fade In/Out <script type="dojo/method" event="onClick" args="evt"> fadeAnim.play(); </script> </div> <div dojoType="dijit.form.Button">Implode/Explode <script type="dojo/method" event="onClick" args="evt"> implExplAnim.play(); </script> </div> <div dojoType="dijit.form.Button">Both! <script type="dojo/method" event="onClick" args="evt"> fadeImplExplAnim.play(); </script> </div> </body> </html>
As a final note, the soon to be released Dojo version 1.1 (currently in Beta 3) contains a couple of enhancements that further simplifies some of the version 1.0 syntax and saves you some typing. If you’re browsing the online API, don’t let those enhancements catch you off guard since the code on this page is using the latest version 1.0.x bug fix release.
Until Dojo: The Definitive Guide comes out, be sure to check back here each week for more Dojo goodness.
Animations are nice, but sometimes you just want to hide or show an element, with no bells or whistles such as fades and whatnot. I've studied the tutorials, Dojo book online, and API docs for hours, and yet this one simple thing -- something built into competitors such as prototype -- I cannot find in dojo. This is where the project fails me -- the simple things are hard, which is not the way it should be. (Don't get me wrong -- I like dojo, and am using it for most of my ajax needs, but this part gets me every time.)
@Matthew: If I understand you correctly, then you'd perhaps want to just set a node's CSS "visibility" property to "hidden"? If that's all you want to do, then dojo.style(someNode, "visibility", "hidden") would do the trick. If that's not what you're looking to do, let me know. What you're trying to do *should* be a quick one-liner like that...
@ptwobrussel: basically, yes. But it's non-intuitive -- prototype has hide() and show() which are simple and tell you what they're doing. (Internally, hide() sets the display property to 'none', so in dojo you could do similarly, and indeed I've done so.) Dojo has great utilities and functionality -- don't get me wrong. The grids and charts are worth their weight in gold. But the simple things people do everyday -- autocompletion, hiding and showing nodes, etc. -- do not have simple, well-documented solutions. That's the one area where I'd like to see dojo improve.
@Matthew: Well, just so you know, we are *very* serious about improving documentation--hence, my upcoming book and the much improved API docs online to name two. The dojocampus.org site is also starting to see some really good growth. But as you hint, it won't all happen overnight, and your desire for better documentation is a very fair one.
One other important thing is to consider that somewhat differentiates Dojo is that it doesn't attempt to create an entire artificial layer on top of DHTML. Rather, it tries to plug holes where they need to be and leaves the rest as is. I can't say this authoritatively, but I would suspect that was the pragmatism used with not having something like a dojo.show and dojo.hide function i.e. the rationale might have been to leave it to dojo.style since we're essentially talking about CSS styling and dojo.style already does that quite effectively.
Anyway, just thought that backdrop might be useful. Thanks for taking the time to post something! I hope you'll keep reading
Now the trick is getting this to work in firefox...I tried and it worked in IE8 beta1, but not in firefox 2.0.0.14
oops! I take my last comment back, adblockplus was just blocking out the css sheet, my bad! | http://www.oreillynet.com/onlamp/blog/2008/03/dojo_goodness_part_3_animation_1.html | crawl-001 | refinedweb | 1,407 | 64.81 |
//**************************************
// Name: Salary Range Checker in C++
// Description:In this article I would like to share a sample program that will demonstrate how to use if - else if statement in C++. I also added exception handling capabilities that will only accept numbers as input values in our program using limits library in C++. I hope you will find my work useful. I am using Dev C++ in developing this program..
// By: Jake R. Pomperada
//**************************************
// salary.cpp
// written by Mr. Jake R. Pomperada, MAED-IT
// July 29, 2018 Sunday
#include <iostream>
#include<limits>
using namespace std;
int main()
{
int salary=0;
char reply;
do {
system("cls");
cout << "\n\n";
cout <<"\tSalary Range Checker in C++";
cout <<"\n\n";
cout << "\tCreated By Mr. Jake R. Pomperada, MAED-IT";
cout << "\n\n";
cout << "\tWhat is your salary : ";
while(!(cin >> salary)){
cin.clear();
cin.ignore(numeric_limits<streamsize>::max(), '\n');
cout << "\tInvalid input. Try again: ";
cout << "\n\n";
cout << "\tWhat is your salary : ";
}
if (salary <=0) {
cout <<"\n\n";
cout <<"\tYour salary ranges is very small. Try Again";
}
else if (salary >= 1 && salary <= 999) {
cout <<"\n\n";
cout <<"\tYour salary ranges from PHP 1 to PHP 999.";
}
else if (salary >= 1000 && salary <= 5999) {
cout <<"\n\n";
cout <<"\tYour salary ranges from PHP 1,000 to PHP 5,9999.";
}
else if (salary >= 6000 && salary <= 10999) {
cout <<"\n\n";
cout <<"\tYour salary ranges from PHP 6,000 to PHP 10,999.";
}
else if (salary >= 11000 && salary <= 15999) {
cout <<"\n\n";
cout <<"\tYour salary ranges from PHP 11,000 to PHP 15999.";
}
else if (salary >= 16000 && salary <= 20000) {
cout <<"\n\n";
cout <<"\tYour salary ranges from PHP 16,000 to PHP 20,000.";
}
else
{
cout <<"\n\n";
cout <<"\tYour is above P20,000 and beyond.";
}
cout <<"\n\n";
cout << "\tDo you want to continue (Y/N)? : ";
cin >> reply;
} while (reply=='Y' || reply=='y');
cout << "\n\n";
cout << "\tThank you for using this software";. | http://www.planet-source-code.com/vb/scripts/ShowCode.asp?txtCodeId=13986&lngWId=3 | CC-MAIN-2019-09 | refinedweb | 314 | 82.04 |
Futures in
Rust and Haskell
Bhargav Voleti and Matthew Wraith
The Languages
Rust
- Systems programming language from Mozilla
- Focus on memory safety and performance
- Ripgrep, servo, aws firecracker, and many backend services.
Rust Syntax
use std::fmt::Debug; trait HasArea { fn area(&self) -> f64; } impl HasArea for Rectangle { fn area(&self) -> f64 { self.length * self.height } } #[derive(Debug)] struct Rectangle { length: f64, height: f64 } // The generic `T` must implement `Debug`. Regardless // of the type, this will work properly. fn print_debug<T: Debug>(t: &T) { println!("{:?}", t); } // `T` must implement `HasArea`. Any function which meets // the bound can access `HasArea`'s function `area`. fn area<T: HasArea>(t: &T) -> f64 { t.area() } fn main() { let rectangle = Rectangle { length: 3.0, height: 4.0 }; print_debug(&rectangle); println!("Area: {}", area(&rectangle)); }
Rust Ownership
- Provides memory safety
- Each value can only have one binding at a time.
- Once binding goes out of scope, value is dropped.
- At any given time, you can have either one mutable reference or any number of immutable references.
- References must always be valid.
Haskell
- Statically typed, lazy, purely functional programming language designed by committee in the early 90s
- Many compilers that implement the Haskell standard, but GHC is the most popular
- Focuses on correctness and expressivity but still fast
-
Haskell Syntax
class HasArea a where area :: a -> Double instance HasArea Rectangle where area r = length r * height r data Rectangle = Rectangle { length :: Double , height :: Double } deriving Show scaleRectangle :: Double -> Rectangle -> Rectangle scaleRectangle s (Rectangle l h) = Rectangle (s * l) (s * h) debugArea :: (Show a, HasArea a) => a -> IO () debugArea a = do print a putStrLn ("Area: " <> show (area a)) main :: IO () main = do let rectangle = Rectangle { length = 3.0, height = 4.0 } debugArea rectangle
-- "Mappable" class Functor f where fmap :: (a -> b) -> f a -> f b instance Functor [] where fmap :: (a -> b) -> [a] -> [b] instance Functor (Map k) where fmap :: (a -> b) -> Map k a -> Map k b instance Functor IO where fmap :: (a -> b) -> IO a -> IO b -- Mapping +1 over a list fmap (+ 1) [1,2,3] == [2,3,4] readFile :: FilePath -> IO String -- Read the file and append !!! to the result fmap (<> "!!!") (readFile file) :: IO String
Functor, aka Mappable
-- "Mappable" class Functor f where fmap :: (a -> b) -> f a -> f b -- "AndThenable" class Functor f => Monad f where return :: a -> f a (>>=) :: f a -> (a -> f b) -> f b instance Monad Maybe where return :: a -> Maybe a (>>=) :: Maybe a -> (a -> Maybe b) -> Maybe b instance Monad [] where return :: a -> [a] (>>=) :: [a] -> (a -> [b]) -> [b] instance Monad IO return :: a -> IO a (>>=) :: IO a -> (a -> IO b) -> IO b
What the hell are monads?
readFile :: FilePath -> IO String print :: Show a => a -> IO () -- >>= for IO (>>=) :: IO a -> (a -> IO b) -> IO b -- Read a file then do another IO action main :: IO () main = readFile file >>= print -- Bind to a name, concat results [1,2,3] >>= \x -> [x + x, x * x] == [2,1,4,4,6,9] [0,1,2,3] >>= \x -> [2 * x, 2 * x + 1] == [0,1,2,3,4,5,6,7] -- Works just as well for Maybe (aka, Option) lookup :: k -> Map k v -> Maybe v lookup "x" m >>= \x -> return (x + x) -- Read the file, append !!!, bind to "contents", -- print contents appended to itself. -- If the file reads "hello" the output to stdout will be "hello!!! hello!!!" fmap (<> "!!!") (readFile file) >>= \contents -> print (contents <> " " <> contents)
The M-word
fn friends_in_common( name1: Name, name2: Name, people: Map<Name, Set<Friends>>, ) -> Option<Set<Friends>> { match people.lookup(name1) { Some(friendSet1) => match people.lookup(name2) { Some(friendSet2) => friendSet1.intersection(friendSet2), None => None, } None => None, } }
>>= == and_then
lookup :: k -> Map k v -> Maybe v friendsInCommon :: Name -> Name -> Map Name (Set Friends) -> Maybe (Set Friends) friendsInCommon name1 name2 people = case lookup name1 people of Just friendSet1 -> case lookup name2 people of Just friendSet2 -> Just (intersection friendSet1 friendSet2) Nothing -> Nothing Nothing -> Nothing
fn friends_in_common( name1: Name, name2: Name, people: Map<Name, Set<Friends>>, ) -> Option<Set<Friends>> { people.lookup(name1).and_then(|friendSet1| { people.lookup(name2).and_then(|friendSet2| { friendSet1.intersection(friendSet2)) }) }) }
>>= == and_then
lookup :: k -> Map k v -> Maybe v friendsInCommon :: Name -> Name -> Map Name (Set Friends) -> Maybe (Set Friends) friendsInCommon name1 name2 people = do friendSet1 <- lookup name1 people friendSet2 <- lookup name2 people return (intersection friendSet1 friendSet2) friendsInCommon name1 name2 people = lookup name1 people >>= (\friendSet1 -> lookup name2 people >>= (\friendSet2 -> Just (intersection friendSet1 friendSet2)))
-- Read a file then do another IO action main :: IO () main = do contents <- readFile file print contents -- Bind to a name, concat results do x <- [1,2,3] [x + x, x * x] -- Works just as well for Maybe (aka, Option) do x <- lookup "x" m return (x + x) -- If the file reads "hello" the output to stdout will be "hello!!!hello!!!" do contents <- fmap (<> "!!!") (readFile file) print (contents <> contents) return contents
The M-word
The Idea
What's is a future?
A value which represents the completion of an async computation.
- Database queries
- An RPC invocation
- A timeout
- An external API request.
- Offload long running CPU-intensive task
- Performing I/O operations
How do they work
- The future simply represents the async computation.
- The computation itself is generally performed by an event loop.
- Cooperative multitasking vs Preemptive multitasking.
- DO NOT BLOCK.
Why would you want to use them
- High performance evented systems.
- Threaded model has lots of overhead when running lots of small tasks.
- GUIs
The Architecture
Rust
Rust
Execution model
You create a future
let response = client.get("");
You set it up to use the value after it has been computed.
let response_is_ok = response .and_then(|resp| { println!("Status: {}", resp.status()); Ok(()) });
You actually run it to perform the computation
tokio::run(response_is_ok);
Haskell
- Async is a futures library, and io-streams is a streaming library.
- Focus on a small and simple API with lots of expressiveness and obviously correct constructions.
- Can call wait or poll on Asyncs.
- io-streams have InputStreams, for reading from, and OutputStreams for writing to.
Execution model
do response <- async (get client "") responseIsOkay <- wait (fmap (\resp -> isOk (status resp)) response) putStrLn ("Got: " <> show responseIsOkay)
async creates an Async data type that can be polled or waited upon.
STM (briefly)
- Like database transactions for memory
- Atomic operations on memory
- Log state before transaction, rollback if needed
- Previous STM talk:
The Differences
- Haskell uses a heavy run-time system
- Haskell has lightweight threads that map to OS threads.
- Futures in Rust have a Stream type. IO-streams is a separate library in Haskell.
- >>= is exactly the same as and_then in rust. But >>= works for sequencing any IO.
The libraries
THE TYPES
pub enum Async<T> { Ready(T), NotReady, } trait Future { /// The type of the value returned when the future completes. type Item; /// The type representing errors that occurred while /// processing the computation. type Error; /// The function that will be repeatedly called to see if the future /// has completed or not. The `Async` enum can either be `Ready` or /// `NotReady` and indicates whether the future is ready to produce /// a value or not. fn poll(&mut self) -> Result<Async<Self::Item>, Self::Error>; }
Rust
Defined in the Futures library
Rust
WORKING WITH FUTURES
/// Change the type of the result of the future. fn map<F, U>(self, f: F) -> Map<Self, F> where F: FnOnce(Self::Item) -> U, Self: Sized /// This will call the passed in closure with the result of /// the future, only if it was successful. fn and_then<F, B>(self, f: F) -> AndThen<Self, B, F> where F: FnOnce(Self::Item) -> B, B: IntoFuture<Error = Self::Error>, Self: Sized, /// Waits for either one of two futures to complete. fn select<B>(self, other: B) -> Select<Self, B::Future> where B: IntoFuture<Item = Self::Item, Error = Self::Error>, Self: Sized, /// And many more:
Provides a lot of adapters similar to iter().
Rust
STREAMS OF VALUES
- Two parts:
- Stream
- Think of it as an iterator of futures.
- Represents 3 states:
Ok(Some(value)) -> got value
- Ok(None) -> Stream closed
- Err(err) -> some error occurred
- Sink
- As the name suggests, this future simply writes to underlying IO
- Provides back pressure via `start_send`
Rust
Tokio
From the Tokio docs, Tokio is a:
- A multi threaded, work-stealing based task scheduler.
- A reactor backed by the operating system's event queue (epoll, kqueue, IOCP, etc...).
- Asynchronous TCP and UDP sockets.
- Asynchronous filesystem operations.
- Timer API for scheduling work in the future.
Rust
Tokio
- The thing that runs your futures.
- Provides types for working with sockets and I/O.
/// Creates a new executor and spawns the future on it and /// runs it to completion. pub fn run<F>(future: F) where F: Future<Item = (), Error = ()> + Send + 'static, /// Spawns the given future on the executor. pub fn spawn<F>(f: F) -> Spawn where F: Future<Item = (), Error = ()> + 'static + Send, /// There is a separate API for futures which don't have to be `Send`
Async API
data Async a instance Functor Async class Functor f where fmap :: (a -> b) -> f a -> f b -- fmap :: (a -> b) -> Async a -> Async b -- (>>=) :: IO a -> (a -> IO b) -> IO b async :: IO a -> IO (Async a) wait :: Async a -> IO a
Async API
data Async a instance Functor Async async :: IO a -> IO (Async a) wait :: Async a -> IO a poll :: Async a -> IO (Maybe (Either SomeException a)) waitEither :: Async a -> Async b -> IO (Either a b) waitBoth :: Async a -> Async b -> IO (a, b) waitAny :: [Async a] -> IO (Async a, a) mapConcurrently :: Traversable t => (a -> IO b) -> t a -> IO (t b)
Async Implementation
data Async a = Async { asyncThreadId :: ThreadId , asyncWait :: STM (Either SomeException a) } -- atomically :: STM a -> IO a -- run 'asyncWait', ie. read from TMVar, and either throw or return wait :: Async a -> IO a wait a = atomically (asyncWait a >>= either throw return) -- Do your action on another thread and put the result in TMVar, -- waiting on an async is just reading from TMVar async :: IO a -> IO (Async a) async action = do var <- atomically newEmptyTMVar t <- forkFinally action (\r -> atomically (putTMVar var r)) return (Async t (readTMVar var))
Async Implementation
data Async a = Async { asyncThreadId :: ThreadId , asyncWait :: STM (Either SomeException a) } -- atomically :: STM a -> IO a -- If the first action completes without retrying then it forms -- the result of the orElse. Otherwise, if the first action retries, -- then the second action is tried in its place. If both fail, repeat. -- orElse :: STM a -> STM a -> STM a poll :: Async a -> IO (Maybe (Either SomeException a)) poll a = atomically ((fmap Just (asyncWait a)) `orElse` return Nothing) waitEither :: Async a -> Async b -> IO (Either a b) waitEither left right = atomically ( (fmap Left (waitSTM left)) `orElse` (fmap Right (waitSTM right)) )
Haskell
io-streams
data InputStream a data OutputStream a -- Read from InputStreams read :: InputStream a -> IO (Maybe a) -- Write to OutputStreams, Nothing is end of stream write :: Maybe a -> OutputStream a -> IO () -- Take InputStream and pipe to OutputStream connect :: InputStream a -> OutputStream a -> IO () -- Merge all input streams concurrentMerge :: [InputStream a] -> IO (InputStream a) -- Ends of concurrent queue makeChanPipe :: IO (InputStream a, OutputStream a)
The Code!
Rust
Simple IO
TO THE CODE!
Rust
Simple IO - Redux
use tokio::io::{copy, stdout}; use futures::Future; fn main() { let task = tokio::fs::File::open("/Users/bigb/vimwiki/index.md") .and_then(|file| { // do something with the file ... copy(file, stdout()) }) .and_then(|(n, _, _)| { println!("Printed {} bytes", n); Ok(()) }) .map_err(|e| { // handle errors eprintln!("IO error: {:?}", e); }); tokio::run(task); }
Haskell
Simple IO
-- Print all of the file, then traverse the contents again to -- get the number of bytes simpleFileDump :: FilePath -> IO () simpleFileDump file = do contents <- openFile file print contents putStrLn ("Printed " <> show (length contents) <> " bytes") -- Count all the bytes that are consumed by a stream countInput :: InputStream ByteString -> IO (InputStream ByteString, IO Int64) -- Open a file as a stream and close it when done withFileAsInput :: FilePath -> (InputStream ByteString -> IO a) -> IO a -- Stream byte by byte into stdout, counting streamingFileDump :: FilePath -> IO () streamingFileDump file = do numBytes <- withFileAsInput file (\fileStream -> do (countedStream, getBytes) <- countBytes fileStream connect countedStream stdout getBytes) putStrLn ("Printed " <> show numBytes <> " bytes")
Rust
World's dumbest HTTP client
Haskell
World's dumbest HTTP client
getUrl :: URL -> IO Response status :: Response -> StatusCode mapConcurrently :: Traversable t => (a -> IO b) -> t a -> IO (t b) -- when t is a list: mapConcurrently :: (a -> IO b) -> [a] -> IO [b] -- Concurrently get the responses from various urls -- and then print the status codes mapConcurrently getUrl urls >>= traverse (print . status)
A Chat server!
BACK TO THE CODE!
The End
Questions?
careers@bitnomial.com
Async IO in Rust and Haskell
By wraithm | https://slides.com/wraithm/async-io-in-rust-and-haskell/ | CC-MAIN-2019-51 | refinedweb | 2,065 | 57.1 |
import "go.chromium.org/chromiumos/infra/go/internal/osutils"
Verify that a directory exists at the given relative path.
func FindInPathParents( pathToFind string, startPath string, endPath string, testFunc func(string) bool) string
Look for a relative path, ascending through parent directories.
Args:
pathToFind: The relative path to look for.
startPath: The path to start the search from. If |startPath| is a directory, it will be included in the directories that are searched.
endPath: The path to stop searching.
testFunc: The function to use to verify the relative path.
Verify that the given relative path exists.
Package osutils imports 2 packages. Updated 2019-11-12.
Revision history for Search::Elasticsearch

5.01    2016-10-19
        Doc fixes

5.00    2016-10-19

2.00    2015-10-28
        The default client is now '2_0::Direct', for use with Elasticsearch 2.x.
        Specify client '1_0::Direct' if using with Elasticsearch 1.x.

        Breaking:
        * The field parameter to indices.get_field_mapping() has been renamed to fields

        New features:
        * Added fields param to Bulk helper
        * The name parameter to indices.get_template() can accept multiple options
        * Added indices.forcemerge() and deprecated indices.optimize()
        * The index parameter to indices.open() and indices.close() is required
        * Added allow_no_indices, expand_wildcards, and ignore_unavailable params to indices.flush_synced()
        * Added the timeout param to cluster.stats(), nodes.hot_threads(), nodes.stats(), and nodes.info()
        * cluster.health() can accept multiple indices
        * Added cat.repositories() and cat.snapshots()
        * Added detect_noop param to update()
        * search_shards() accepts multi values for index/type
        * delete_template() requires an id
        * Add fork protection to Scroll and Async::Scroll

        Bug fix:
        * Added missing debug QS param

1.99    2015-08-26
        This release provides support for Elasticsearch 2.0.0-beta1 and above,
        but the default client is still '1_0::Direct' and will remain so until
        version 2.00 is released.

        New features:
        * Added default_qs_params, which will be added to every request
        * Added max_time to the Bulk helper, to flush after a max elapsed time
        * Added filter_path parameter to all methods which return JSON
        * Added indices.flush_synced()
        * Added render_search_template()
        * Added cat.nodeattrs()
        * Added human flag to indices.get and indices.get_settings
        * Added rewrite flag to indices.validate_query
        * Added rewrite flag to indices.analyze
        * Added fields param to bulk()
        * Added update_all_types to indices.create and indices.put_mapping
        * Added request_cache to indices.put_warmer and indices.stats
        * Added request to indices.clear_cache
        * Added RequestTimeout exception for server-side timeouts
        * Updated Plugin::Watcher with 1.0 API

        Removed:
        * Removed id and id_cache from indices.clear_cache
        * Removed filter and filter_cache from indices.clear_cache
        * Removed ignore_conflict from indices.put_mapping

        Bugfixes:
        * Fixed error handling in Hijk
        * Fixed live test to non-existent IP address

1.20    2015-05-17
        Deprecated:
        * Search::Elasticsearch::Client::Direct in favour of Search::Elasticsearch::Client::1_0::Direct

        New features:
        * Added support for structured JSON exceptions in Elasticsearch 2.0
        * Added support for plugins
        * Added Search::Elasticsearch::Client::2_0::Direct for the upcoming Elasticsearch 2.0 with these changes:
          * removed delete_by_query()
          * removed termvector()
          * removed indices.delete_mapping()
          * removed nodes.shutdown()
          * removed indices.status()
          * added terminate_after param to search()
          * added dfs param to termvectors()
          * removed filter_keys param from indices.clear_cache()
          * removed full param from indices.flush()
          * removed force param from indices.optimize()
          * removed replication param from all CRUD methods
          * removed mlt() method

        Bug fix:
        * The bulk buffer was being cleared on a NoNodes exception

        Added class:

        Added methods:
        * field_stats()

        Added params:
        * allow_no_indices, expand_wildcards, ignore_unavailable to cluster.state()
        * fielddata_fields to search()
        * master_timeout to indices.get_template() and indices.exists_template()
        * detect_noop to update()
        * only_ancient_segments to upgrade()
        * analyze_wildcards, analyzer, default_operator, df, lenient, lowercase_expanded_terms, and q to count(), search_exists() and indices.validate_query()

        Removed methods:
        * benchmark.* - never released in Elasticsearch

        Also:
        * arrays of enum query string params are now flattened as CSV
        * enum expand_wildcards also accepts: none, all
        * Search::Elasticsearch is no longer a Moo class
        * Updated elasticsearch.org URLs to use elastic.co instead
        * the request body is retained in exceptions
        * upgraded Hijk to 0.20

1.19    2015-01-15
        Added method:
        * cat.segments()

        Added exceptions:
        * Unauthorized - for invalid user creds
        * SSL - for invalid SSL certs

        Renamed exception:
        * ClusterBlock -> Forbidden

        Also:
        * Simplified SSL support for HTTP::Tiny, LWP and improved instructions
        * Added optional tests for https/authz/authen

1.17    2014-12-29
        Bug fix:
        * handle_args were not being passed to all backends, meaning that (eg) cookies could not be used

        Dependency bump:
        * Log::Any 1.02 broke bwc - fixed to work with new version

        Added params:
        * op_type, version, version_type to indices.put_template
        * version, version_type to indices.delete_template
        * version, version_type to termvectors
        * master_timeout, timeout to cluster.put_settings
        * ignore_idle_threads to nodes.hot_threads
        * terminate_after to search

        Deprecated:
        * termvector in favour of termvectors (but old method still works for now)

1.16    2014-11-15
        Added dependency on Pod::Simple, which was causing installation on perl 5.8 to fail

        Added params:
        * percolate_preference and percolate_routing to percolate()

        Bug fix:
        * the index param is now required for indices.delete()

1.15    2014-11-05
        Enhancements:
        * All backends (except Hijk) now default to not verifying SSL identities, but accept ssl_options to allow backend-specific configuration
        * Improved Mojo exceptions

        Bug fix:
        * is_https() didn't work

        Changed:
        * index param to put_alias() is now required

        Added methods:
        * index.get()
        * search_exists()
        * indices.upgrade()
        * indices.get_upgrade()
        * snapshot.verify_repository()

        Added parameters:
        * query_cache to search(), clear_cache(), stats()
        * wait_if_ongoing to flush()
        * script_id and scripted_upsert to update()
        * version and version_type to put_script(), get_script(), delete_script(), put_template(), get_template(), and delete_template()
        * op_type to put_script() and put_template()
        * metric to cluster_reroute()
        * realtime to termvector() and mtermvector()
        * dfs to termvector()

        Removed parameters:
        * filter_metadata from cluster_reroute()
        * search_query_hint from mlt()

        Bumped versions:
        * JSON::XS 2.26
        * Package::Stash 0.34
        * Log::Any 0.15

1.14    2014-07-24
        Added support for indexed scripts and indexed templates.

1.13    2014-06-13
        Breaking change:
        The Scroll helper used to pass the scroll ID to scroll() and clear_scroll()
        in the query string by default, with the scroll_in_body parameter to change
        the behaviour. This was causing frequent errors with long scroll IDs, so the
        new default behaviour is to pass the scroll ID in the body, with the
        scroll_in_qs parameter to change that behaviour.

        All Search::Elasticsearch HTTP backends are now fork safe.

        * Added track_scores param to search()
        * Added create param to indices.put_template()
        * Removed index_templates param from cluster.state()
        * Removed indices_boost param from search()
        * Added percolate_format param to percolate()
        * Added cat.fielddata()

1.12    2014-05-09
        * Fixed bug when trying to reindex from a subref
        * Added search_shards()
        * Added char_filters to indices.analyze()
        * Removed index_templates from cluster.state()
        * Added conf to TestServer for passing arbitrary config

1.11    2014-04-23
        Switched default Serializer::JSON to use JSON::MaybeXS, and added
        Serializer backends for Cpanel::JSON::XS, JSON::XS and JSON::PP

        Added scroll_in_body flag for Scroll helper

        Added support for:
        * search_template()
        * snapshot->status()
        * indices->recovery()
        * benchmark()
        * list_benchmarks()
        * abort_benchmark()

1.10    2014-03-05
        Moved all modules to Search::Elasticsearch namespace. See

1.05    2014-03-05
        Deprecated the Elasticsearch namespace in favour of Search::Elasticsearch. See

        Improved the Bulk->reindex() API. Now accepts a remote $es object.
        Improved documentation.
        Added Hijk backend.

1.04    2014-02-27
        Changed the default Cxn to HTTPTiny v0.043. Now provides persistent
        connections and is a lot faster than LWP.

        Changed ES::Scroll to pass the scroll_id in the URL instead of the body.
        Better support for older versions and servers behind caching proxies.

1.03    2014-02-12
        Fixed node sniffing to work across 0.90 and 1.0

1.02    2014-02-11
        Fixed bug in Elasticsearch::Scroll::next when called in scalar context

1.01    2014-02-09
        Fixed plugin loader to work with latest version of Module::Runtime
        which complains about undefined versions

1.00    2014-02-07
        API updated to be compatible with v1.x branch of Elasticsearch.

        BACKWARDS COMPATIBILITY:
        To use this client with versions of Elasticsearch before 1.x, specify
        the client version as:

            $es = Elasticsearch->new( client => '0_90::Direct' );

0.76    2013-12-02
        Added support for send_get_body_as GET/POST/source
        Added timeout to bulk API

0.75    2013-10-24
        Fixed the sniff regex to accommodate hostnames when present

0.74    2013-10-03
        Fixed a timeout bug in LWP with persistent connections and bad params
        when using https

0.73    2013-10-02
        Added Elasticsearch::Cxn::LWP
        Added Elasticsearch::TestServer
        Die with explanation if a user on a case-insensitive file system loads
        this module instead of ElasticSearch

0.72    2013-09-29
        Added Elasticsearch::Bulk and Elasticsearch::Scroll
        Changed `https` to `use_https` for compatibility with elasticsearch-py
        Numerous fixes for different Perl versions, and Moo 1.003 now required

0.71    2013-09-24
        Fixed dist.ini to list dependencies correctly

0.70    2013-09-24
        Bumped version numbers because CPAN clashes with ElasticSearch.pm

0.04    2013-09-23
        First release
a side-by-side reference sheet
sheet one: grammar and invocation | variables and expressions | arithmetic and logic | strings | regexes | dates and time | arrays | dictionaries | functions | execution control | exceptions | concurrency
sheet two: streams | files | file formats | directories | processes and environment | option parsing | libraries and namespaces | objects | inheritance and polymorphism | reflection | net and web | unit tests | debugging and profiling | java interop
Streams
standard file handles
The names for standard input, standard output, and standard error.
read line from stdin
How to read a line from standard input.
The illustrated function reads the standard input stream until an end-of-line marker is found or the end of the stream is encountered. Only in the former case will the returned string be terminated by an end-of-line marker.
php:
fgets takes an optional second parameter to specify the maximum line length. If the length limit is encountered before a newline, the string returned will not be newline terminated.
ruby:
gets takes an optional parameter to specify the maximum line length. If the length limit is encountered before a newline, the string returned will not be newline terminated.
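A Python sketch of the two behaviors: sys.stdin.readline keeps the trailing newline, while input() strips it. To keep the example self-contained, standard input is replaced with an in-memory stream rather than a real terminal.

```python
import io
import sys

# substitute an in-memory stream for standard input so the
# example can run without a terminal
sys.stdin = io.StringIO("first line\nsecond line\n")

line = sys.stdin.readline()  # keeps the trailing newline
print(repr(line))

line2 = input()              # input() strips the newline
print(repr(line2))
```

At end of stream, readline returns an empty string while input() raises EOFError.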
end-of-file behavior
What happens when attempting to read a line and the seek point is after the last newline or at the end of the file.
chomp
Remove a newline, carriage return, or carriage return newline pair from the end of a line if there is one.
php:
chop removes all trailing whitespace. It is an alias for rtrim.
python:
Python strings are immutable. rstrip returns a modified copy of the string. rstrip('\r\n') is not identical to chomp because it removes all contiguous carriage returns and newlines at the end of the string.
ruby:
chomp! modifies the string in place. chomp returns a modified copy.
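A sketch of the difference in Python: a chomp-like helper (written here for illustration; the name chomp is not part of the standard library) removes at most one line terminator, whereas rstrip('\r\n') removes every trailing carriage return and newline.

```python
def chomp(s):
    # remove a single trailing "\r\n", "\n", or "\r", like Ruby's chomp
    if s.endswith("\r\n"):
        return s[:-2]
    if s.endswith("\n") or s.endswith("\r"):
        return s[:-1]
    return s

print(chomp("lorem\r\n"))          # one terminator removed
print(chomp("lorem\n\n"))          # only the last newline removed
print("lorem\n\n".rstrip("\r\n"))  # rstrip removes all trailing newlines
```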
write line to stdout
How to write a line to standard out. The line will be terminated by an operating system appropriate end of line marker.
python:
print appends a newline to the output. To suppress this behavior, put a trailing comma after the last argument. If given multiple arguments, print joins them with spaces.
In Python 2 print parses as a keyword and parentheses are not required:
print "Hello, World!"
ruby:
puts appends a newline to the output. print does not.
write formatted string to stdout
How to format variables and write them to standard out.
The function printf from the C standard library is a familiar example. It has a notation for format strings which uses percent signs %. Many other languages provide an implementation of printf.
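In Python, printf-style formatting is available through the % operator on strings, and str.format offers an equivalent newer notation:

```python
count = 3
ratio = 0.75

# %-style formatting, analogous to C's printf format strings
s = "count: %d ratio: %.2f" % (count, ratio)
print(s)

# str.format is the newer equivalent
s2 = "count: {} ratio: {:.2f}".format(count, ratio)
print(s2)
```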
open file for reading
How to open a file for reading.
ruby:
When File.open is given a block, the file is closed when the block terminates.
open file for writing
How to open a file for writing. If the file exists its contents will be overwritten.
set file handle encoding
How to open a file and specify the character encoding.
python:
The encoding of a file handle can be changed after it is opened:
f.encoding = 'UTF-8'
If the encoding is set to UTF-8, but the file contains byte sequences that are not a possible UTF-8 encoding, Python will raise a UnicodeDecodeError.
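A Python 3 sketch of opening files with an explicit encoding; the temporary path used here is only for the example:

```python
import os
import tempfile

# write a UTF-8 encoded file, then read it back with an explicit encoding
path = os.path.join(tempfile.mkdtemp(), "utf8.txt")
with open(path, "w", encoding="utf-8") as f:
    f.write("naïve café\n")

with open(path, "r", encoding="utf-8") as f:
    text = f.read()
print(text)
```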
open file for appending
How to open a file with the seek point at the end of the file. If the file exists its contents will be preserved.
close file
How to close a file.
close file implicitly
How to have a file closed when a block is exited.
python:
File handles are closed when the variable holding them is garbage collected, but there is no guarantee when or if a variable will be garbage collected.
ruby:
File handles are closed when the variable holding them is garbage collected, but there is no guarantee when or if a variable will be garbage collected.
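In Python the with statement closes the file deterministically when the block exits, even on an exception; this sketch uses a temporary directory so it is self-contained:

```python
import os
import tempfile

path = os.path.join(tempfile.mkdtemp(), "demo.txt")
with open(path, "w") as f:
    f.write("hello\n")
# f is closed here, whether or not the block raised an exception
print(f.closed)
```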
i/o error
How I/O errors are treated.
encoding error
read line
How to read up to the next newline in a file.
iterate over file by line
How to iterate over a file line by line.
read file into array of string
How to put the lines of a file into an array of strings.
read file into string
How to put the contents of a file into a single string.
write string
How to write a string to a file handle.
write line
How to write a line to a file handle. An operating system appropriate end-of-line marker is appended to the output.
**php:
Newlines in strings are translated to the operating system appropriate line terminator unless the file handle was opened with a mode string that contained 'b'.
python:
When file handles are opened with the mode strings 'r', 'w', or 'a', the file handle is in text mode. In text mode the operating system line terminator is translated to '\n' when reading and '\n' is translated back to the operating system line terminator when writing. The standard file handles sys.stdin, sys.stdout, and sys.stderr are opened in text mode.
When file handles are opened with the mode strings 'rb', 'rw', or 'ra', the file handle is in binary mode and line terminator translation is not performed. The operating system line terminator is available in os.linesep.
flush file handle
How to flush a file handle that has been written to.
end-of-file test
How to test whether the seek point of a file handle is at the end of the file.
file handle position
How to get or set the file handle seek point.
The seek point is where the next read on the file handle will begin. The seek point is measured in bytes starting from zero.
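A Python sketch of tell and seek; the file handle is opened in binary mode so positions are byte offsets:

```python
import os
import tempfile

path = os.path.join(tempfile.mkdtemp(), "seek.txt")
with open(path, "wb") as f:
    f.write(b"0123456789")

with open(path, "rb") as f:
    print(f.tell())    # 0: the seek point starts at the beginning
    f.seek(4)          # move the seek point to byte 4
    data = f.read(3)
    print(data)        # the three bytes starting at offset 4
    pos = f.tell()     # 7: reading advances the seek point
```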
open temporary file
How to get a file handle to a file that will be removed automatically sometime between when the file handle is closed and the interpreter exits.
The file is guaranteed not to have existed before it was opened.
The file handle is opened for both reading and writing so that the information written to the file can be recovered by seeking to the beginning of the file and reading from the file handle.
On POSIX operating systems it is possible to unlink a file after opening it. The file is removed from the directory but continues to exist as long as the file handle is open. This guarantees that no other process will be able to read or modify the file contents.
php:
Here is how to create a temporary file with a name:
$path = tempnam(sys_get_temp_dir(), "");
$f = fopen($path, "w+");
python:
To unlink a temporary file on open, used TemporaryFile instead of NamedTemporaryFile:
import tempfile
f = tempfile.TemporaryFile()
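A sketch of the round trip with NamedTemporaryFile: the handle is opened for reading and writing, so data written to it can be recovered by seeking back to the start; with the default delete=True the file is unlinked when the handle is closed.

```python
import os
import tempfile

f = tempfile.NamedTemporaryFile(mode="w+")  # removed on close by default
f.write("scratch data\n")
f.seek(0)                  # rewind so the data can be read back
contents = f.read()
name = f.name              # the path while the file exists
f.close()                  # the file is deleted here
print(contents)
```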
in memory file
How to create a file descriptor which writes to an in-memory buffer.
python:
StringIO also supports the standard methods for reading input. To use them the client must first seek to the beginning of the in-memory file:
f = StringIO()
f.write('lorem ipsum\n')
f.seek(0)
f.read()
Files
file exists test, file regular test
How to test whether a file exists; how to test whether a file is a regular file (i.e. not a directory, special device, or named pipe).
file size
How to get the file size in bytes.
is file readable, writable, executable
How to test whether a file is readable, writable, or executable.
python:
The flags can be or'ed to test for multiple permissions:
os.access('/etc/hosts', os.R_OK | os.W_OK | os.X_OK)
set file permissions
How to set the permissions on the file.
For Perl, Python, and Ruby, the mode argument is in the same format as the one used with the Unix chmod command. It uses bitmasking to get the various permissions which is why it is normally an octal literal.
The mode argument should not be provided as a string such as "0755". Python and Ruby will raise an exception if a string is provided. Perl will convert "0755" to 755 and not 0755 which is equal to 493 in decimal.
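A Python sketch, using an octal literal as described above; the temporary file is created only for the example:

```python
import os
import stat
import tempfile

path = os.path.join(tempfile.mkdtemp(), "script.sh")
open(path, "w").close()        # create an empty file

os.chmod(path, 0o755)          # octal literal, as with the Unix chmod command
mode = stat.S_IMODE(os.stat(path).st_mode)
print(oct(mode))
```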
last modification time
How to get the last modification time of a file.
For a regular file, the last modification time is the most recent time that the contents were altered.
For a directory, the last modification time is the most recent time that a file in the directory was added, removed, or renamed.
copy file, remove file, rename file
How to copy a file; how to remove a file; how to rename a file.
create symlink, symlink test, readlink
How to create a symlink; how to test whether a file is a symlink; how to get the target of a symlink.
generate unused file name
How to generate an unused file name. The file is created to avoid a race condition with another process looking for an unused file name.
The file is not implicitly deleted.
File Formats
parse csv
How to parse a CSV file and iterate through the rows.
generate csv
How to generate a CSV file from an array of tuples.
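A Python sketch covering both directions with the standard csv module; an in-memory buffer stands in for a file so the example is self-contained. Note that the reader returns every field as a string.

```python
import csv
import io

# generate CSV into an in-memory buffer
buf = io.StringIO()
writer = csv.writer(buf)
writer.writerow(["name", "qty"])
writer.writerow(["apple", 3])

# rewind and parse it back; all fields come back as strings
buf.seek(0)
rows = [row for row in csv.reader(buf)]
print(rows)
```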
parse json
How to decode a string of JSON.
JSON data consists of objects, arrays, and JSON values. Objects are dictionaries in which the keys are strings and the values are JSON values. Arrays contain JSON values. JSON values can be objects, arrays, strings, numbers, true, false, or null.
A JSON string is JSON data encoded using the corresponding literal notation used by JavaScript source code.
JSON strings are sequences of Unicode characters. The following backslash escape sequences are supported:
\" \\ \/ \b \f \n \r \t \uhhhh.
generate json
How to encode data as a JSON string.
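A Python sketch of both operations with the standard json module; by default non-ASCII characters are emitted as \uhhhh escape sequences:

```python
import json

data = {"name": "café", "values": [1, 2, None], "ok": True}

s = json.dumps(data)      # encode to a JSON string
decoded = json.loads(s)   # decode back to dicts, lists, and scalars
print(s)
```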
parse yaml
How to parse a string of YAML.
YAML is sometimes used to serialize objects. Deserializing such YAML results in the constructor of the object being executed. The YAML decoding techniques illustrated here are "safe" in that they will not execute code, however.
generate yaml
How to generate a string of YAML.
parse xml
How to parse XML and extract nodes using XPath.
ruby:
Another way of handling an XPath expression which matches multiple nodes:
XPath.each(doc, "/a/b/c") do |node|
  puts node.text
end
generate xml
How to build an XML document.
An XML document can be constructed by concatenating strings, but the techniques illustrated here guarantee the result to be well-formed XML.
parse html
How to parse an HTML document.
Directories
working directory
How to get and set the working directory.
build pathname
How to construct a pathname without hard coding the system file separator.
dirname and basename
How to extract the directory portion of a pathname; how to extract the non-directory portion of a pathname.
absolute pathname
How to get the get the absolute pathname for a pathname. If the pathname is relative the working directory will be appended.
In the examples provided, if /foo/bar is the working directory and .. is the relative path, then the return value is /foo
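A Python sketch of these pathname operations with os.path, using a hypothetical path for illustration:

```python
import os.path

path = "/foo/bar/baz.txt"

print(os.path.dirname(path))    # the directory portion
print(os.path.basename(path))   # the non-directory portion

# build a pathname without hard coding the file separator
joined = os.path.join("/foo", "bar", "baz.txt")
print(joined)
```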
iterate over directory by file
How to iterate through the files in a directory.
In PHP, Perl, and Ruby, the files representing the directory itself . and the parent directory .. are returned.
php:
The code in the example will stop if a filename which evaluates as FALSE is encountered. One such filename is "0". A safer way to iterate through the directory is:
if ($dir = opendir("/etc")) {
    while (FALSE !== ($file = readdir($dir))) {
        echo "$file\n";
    }
    closedir($dir);
}
python:
file() is the file handle constructor. file can be used as a local variable name but doing so hides the constructor. It can still be invoked by the synonym open(), however.
os.listdir() does not return the special files . and .. which represent the directory itself and the parent directory.
glob paths
How to iterate over files using a glob pattern.
Glob patterns employ these special characters:

* matches zero or more characters, excluding the path separator
? matches exactly one character, excluding the path separator
[…] matches any one of the enclosed characters
Use glob patterns instead of simple directory iteration when
- dot files, including the directory itself (.) and the parent directory (..), should be skipped
- a subset of the files in a directory, where the subset can be specified with a glob pattern, is desired
- files from multiple directories, where the directories can be specified with a glob pattern, are desired
- the full pathnames of the files is desired
php:
glob takes a second argument for flags. The flag GLOB_BRACE enables brace notation.
python:
glob.glob returns a list. glob.iglob accepts the same arguments and returns an iterator.
ruby:
Ruby globs support brace notation.
A brace expression matches any of the comma separated strings inside the braces.
Dir.glob("/{bin,etc,usr}/*").each do |path|
  puts path
end
make directory
How to create a directory.
If needed, the examples will create more than one directory.
No error will result if a directory at the pathname already exists. An exception will be raised if the pathname is occupied by a regular file, however.
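A Python sketch: os.makedirs creates intermediate directories as needed, and exist_ok=True (Python 3.2+) suppresses the error when the directory already exists. The base directory here is a throwaway created for the example.

```python
import os
import tempfile

base = tempfile.mkdtemp()
path = os.path.join(base, "a", "b", "c")

os.makedirs(path)                 # creates a, a/b, and a/b/c
os.makedirs(path, exist_ok=True)  # no error if it already exists
made = os.path.isdir(path)
print(made)
```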
recursive copy
How to perform a recursive copy. If the source is a directory, then the directory and all its contents will be copied.
remove empty directory
How to remove an empty directory. The operation will fail if the directory is not empty.
remove directory and contents
How to remove a directory and all its contents.
directory test
How to determine if a pathname is a directory.
generate unused directory
How to generate an unused directory. The directory is created to avoid a race condition with another process looking for an unused directory.
The directory is not implicitly deleted.
ruby:
When Dir.mktmpdir is provided with a block the directory is deleted after the block finishes executing:
require 'tmpdir'
require 'fileutils'

Dir.mktmpdir("/tmp/foo") do |path|
  puts path
  FileUtils.cp("/etc/hosts", "#{path}/hosts")
end
system temporary file directory
The name of the system provided directory for temporary files.
On Linux the directory is often /tmp, and the operating system is often configured to delete the contents of /tmp at boot.
Processes and Environment
command line arguments
How to access arguments provided at the command line when the script was run; how to get the name of the script.
environment variable
How to get and set an environment variable. If an environment variable is set the new value is inherited by child processes.
php:
putenv returns a boolean indicating success. The command can fail because when PHP is running in safe mode only some environment variables are writable.
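A Python sketch using os.environ, which behaves like a dictionary; the variable names here are invented for the example:

```python
import os

os.environ["MY_SETTING"] = "42"   # inherited by child processes
value = os.environ.get("MY_SETTING")

# get returns None for an unset variable instead of raising KeyError
missing = os.environ.get("NO_SUCH_VAR_HOPEFULLY")
print(value)
```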
get pid, parent pid
How to get the process id of the interpreter process; how to get the id of the parent process.
ruby:
The process pid is also available in the global variable $$.
user id and name
How to get the user id of the interpreter process; how to get the username associated with the user id.
When writing a setuid application on Unix, there is a distinction between the real user id and the effective user id. The code examples return the real user id.
The process may be able to determine the username by inspecting environment variables. A POSIX system is required to set the environment variable LOGNAME at login. Unix systems often set USER at login, and Windows systems set %USERNAME%. There is nothing to prevent the user from altering any of these environment variables after login. The methods illustrated in the examples are thus more secure.
python:
How to get the effective user id:
os.geteuid()
ruby:
How to get the effective user id:
Process.euid
exit
python:
It is possible to register code to be executed upon exit:
import atexit
atexit.register(print, "goodbye")
It is possible to terminate a script without executing registered exit code by calling os._exit.
ruby:
It is possible to register code to be executed upon exit:
at_exit { puts "goodbye" }
The script can be terminated without executing registered exit code by calling exit!.
set signal handler
How to register a signal handling function.
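A POSIX-only Python sketch: register a handler with the signal module, then send the signal to the current process to trigger it. SIGUSR1 is not available on Windows.

```python
import os
import signal

received = []

def handler(signum, frame):
    # record that the signal arrived
    received.append(signum)

signal.signal(signal.SIGUSR1, handler)
os.kill(os.getpid(), signal.SIGUSR1)  # deliver the signal to ourselves
print(received)
```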
external command
How to execute an external command.
shell-escaped external command
How to prevent shell injection.
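A Python sketch of two defenses: pass the command as an argument list so no shell is involved, or, when a shell string is unavoidable, quote each argument with shlex.quote (Python 3.3+). The hostile filename is invented for the example; subprocess.run with capture_output requires Python 3.7+ and echo assumes a POSIX system.

```python
import shlex
import subprocess

filename = "oops; rm -rf ~.txt"  # hostile input

# passing a list bypasses the shell entirely, so no quoting is needed
out = subprocess.run(["echo", filename], capture_output=True, text=True)
print(out.stdout)

# when building a shell string, quote each argument
cmd = "ls -l " + shlex.quote(filename)
print(cmd)
```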
command substitution
How to invoke an external command and read its output into a variable.
The use of backticks for this operation goes back to the Bourne shell (1977).
python:
A more concise solution is:
file = os.popen('ls -l /tmp').read()
os.popen was marked as deprecated in Python 2.6 but it is still available in Python 2.7 and Python 3.2.
ruby:
%x can be used with any delimiter. If the opening delimiter is (, [, or {, the closing delimiter must be ), ], or }.
Option Parsing
command line options
How to process command line options.
We describe the style used by getopt_long from the C standard library. The characteristics of this style are:
- Options can be short or long. Short options are a single character preceded by a hyphen. Long options are a word preceded by two hyphens.
- A double hyphen by itself can be used to terminate option processing. Arguments after the double hyphen are treated as positional arguments and can start with a hyphen.
- Options can be declared to be with or without argument. Options without argument are used to set a boolean value to true.
- Short options without argument can share a hyphen.
- Long options can be separated from their argument by a space or an equals sign (=). Short options can be separated from their argument by nothing, a space, or an equals sign (=).
The option processing function should identify the positional arguments. These are the command line arguments which are not options, option arguments, or the double hyphen used to terminate option processing. getopt_long permits options to occur after positional arguments.
python:
The type of an argument can be specified using the named parameter type:
parser.add_argument('--count', '-c', dest='count', type=int)
parser.add_argument('--ratio', '-r', dest='ratio', type=float)
If the argument cannot be converted to the type, the script prints out a usage statement and exits with a non-zero value.
The default value is None, but this can be changed using the named parameter default:
parser.add_argument('--file', '-f', dest='file', default='tmpfile')
parser.add_argument('--count', '-c', dest='count', type=int, default=1)
parser.add_argument('--ratio', '-r', dest='ratio', type=float, default=0.5)
Libraries and Namespaces
Terminology used in this sheet:
- library: code in its own file that can be included, loaded, or linked by client code.
- client: code which calls code in a separate file.
- top-level file or top-level script: the file containing the code in the program which executes first.
- load: to add definitions in a file to the text of a running process.
- namespace: a set of names that can be imported as a unit.
- import: to add definitions defined elsewhere to a scope.
- unqualified import: to add definitions to a scope using the same identifiers as where they are defined.
- qualified import: to add definitions to a scope. The identifiers in the scope are derived from the original identifiers in a formulaic manner. Usually the name of the namespace is added as a prefix.
- label: one of the parts of a qualified identifier.
- alias import: to add a definition to a scope under an identifier which is specified in the import statement.
- package: one or more libraries that can be installed by a package manager.
load library
Execute the specified file. Normally this is used on a file which only contains declarations at the top level.
php:
include_once behaves like require_once except that it is not fatal if an error is encountered executing the library.
load library in subdirectory
How to load a library in a subdirectory of the library path.
hot patch
How to reload a library. Altered definitions in the library will replace previous versions of the definition.
php:
Also include.
load error
How errors which are encountered while loading libraries are handled.
main routine in library
How to put code in a library which executes only when the file is run as a top-level script.
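In Python this is done by testing the module-level __name__ variable, which is set to '__main__' only in the top-level script. A minimal sketch of such a library file:

```python
# a library file: the guarded code runs only when this file is
# executed as the top-level script, not when it is imported
def add(x, y):
    return x + y

if __name__ == "__main__":
    # simple demonstration when run directly
    print(add(2, 3))
```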
library path
The library path is a list of directory paths which are searched when loading libraries.
library path environment variable
How to augment the library path by setting an environment variable before invoking the interpreter.
library path command line option
How to augment the library path by providing a command line option when invoking the interpreter.
simple global identifiers
multiple label identifiers
label separator
The punctuation used to separate the labels in the full name of a subnamespace.
root namespace definition
namespace declaration
How to declare a section of code as belonging to a namespace.
subnamespace declaration
How to declare a section of code as belonging to a subnamespace.
import namespace
import subnamespace
import all definitions in namespace
How to import all the definitions in a namespace.
import definitions
How to import specific definitions from a namespace.
list installed packages, install a package
How to show the installed 3rd party packages, and how to install a new 3rd party package.
python
Two ways to list the installed modules and the modules in the standard library:
$ pydoc modules
$ python
>>> help('modules')
Most 3rd party Python code is packaged using distutils, which is in the Python standard library. The code is placed in a directory with a setup.py file. The code is installed by running the Python interpreter on setup.py:

$ sudo python setup.py install
package specification format
The format of the file used to specify a package.
python:
How to create a Python package using distutils. Suppose that the file foo.py contains the following code:
def add(x, y):
    return x + y
In the same directory as foo.py create setup.py with the following contents:
#!/usr/bin/env python
from distutils.core import setup

setup(name='foo',
      version='1.0',
      py_modules=['foo'],
      )
Create a tarball of the directory for distribution:
$ tar cf foo-1.0.tar foo
$ gzip foo-1.0.tar
To install a tar, perform the following:
$ tar xf foo-1.0.tar.gz
$ cd foo
$ sudo python setup.py install
If you want people to be able to install the package with pip, upload the tarball to the Python Package Index.
ruby:
For an example of how to create a gem, create a directory called foo. Inside it create a file called lib/foo.rb which contains:
def add(x, y)
  x + y
end
Then create a file called foo.gemspec containing:
spec = Gem::Specification.new do |s|
  s.name = 'foo'
  s.authors = 'Joe Foo'
  s.version = '1.0'
  s.summary = 'a gem'
  s.files = Dir['lib/*.rb']
end
To create the gem, run this command:
$ gem build foo.gemspec
A file called foo-1.0.gem is created. To install foo.rb run this command:
$ gem install foo-1.0.gem
Objects
An object is a set of functions called methods which have shared access to the object's instance variables. An object's methods and instance variables are collectively called its members. If a member of an object can be accessed or invoked by code which is not in a member of the object, it is public. Otherwise it is private.
A class is a set of objects which have the same method definitions. The objects in the set are instances of the class. Functions defined in the class namespace which are not object methods are called class methods. A class method which returns instances of the class is called a factory method. If there is class method which is responsible for creating all instances, it is called a constructor. The existence of a constructor does not preclude the existence of other factory methods since they can invoke the constructor and return its return value.
A class may contain class variables. These are global variables defined in the namespace of the class.
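The vocabulary above maps directly onto a few lines of Python; the Point class below is an invented example, not something from the reference sheet:

```python
class Point:
    dimensions = 2                  # class variable, shared by all instances

    def __init__(self, x, y):       # constructor
        self.x = x                  # instance variables
        self.y = y

    def norm(self):                 # public method
        return (self.x ** 2 + self.y ** 2) ** 0.5

    @classmethod
    def origin(cls):                # factory method which invokes the constructor
        return cls(0, 0)

p = Point.origin()
```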
A method which returns the value of an instance variable is called a getter. A method which sets the value of an instance variable is called a setter. Getters and setters seem pointless at first blush, as one could make the underlying instance variable public instead. In practice getters and setters make code more maintainable. Consistent use of getters and setters conforms with the Uniform Access Principle and makes the API presented by an object to its clients simpler.
Perl instance variables are private, so Perl enforces a good practice at the cost of requiring boilerplate code for defining getters and setters.
Python instance variables are public. Although this permits concise class definitions, a maintainer of a Python class may find it difficult to replace an instance variable with a derived value when clients are accessing the instance variable directly. With an old-style Python class, the maintainer can't make the change without breaking the client code. With a new-style class the maintainer can replace an instance variable with a getter and setter and mark them with the @property decorator.
Ruby, like Perl, has private instance variables. It has the directives attr_reader, attr_writer, and attr_accessor for defining getters and setters. Ruby classes are objects and in particular they are instances of the Module class. The directives attr_reader, attr_writer, and attr_accessor are instance methods defined in the Module class which execute when the class block executes.
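A sketch of the maintenance path described above, using Python's @property; the Circle class and its validation rule are invented for illustration:

```python
class Circle:
    def __init__(self, radius):
        self._radius = radius

    @property
    def radius(self):               # getter; clients still write c.radius
        return self._radius

    @radius.setter
    def radius(self, value):        # setter; logic added without breaking clients
        if value < 0:
            raise ValueError("radius must be non-negative")
        self._radius = value

c = Circle(2.0)
c.radius = 3.0                      # looks like plain attribute access
```

Client code that read and wrote the plain instance variable keeps working unchanged.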
define class
php:
Properties (i.e. instance variables) must be declared public, protected, or private. Methods can optionally be declared public, protected, or private. Methods without a visibility modifier are public.
python:
As of Python 2.2, classes are of two types: new-style classes and old-style classes. The class type is determined by the type of class(es) the class inherits from. If no superclasses are specified, then the class is old-style. As of Python 3.0, all classes are new-style.
New-style classes have these features which old-style classes don't:
- universal base class called object.
- descriptors and properties. Also the __getattribute__ method for intercepting all attribute access.
- change in how the diamond problem is handled. If a class inherits from multiple parents which in turn inherit from a common grandparent, then when checking for an attribute or method, all parents will be checked before the grandparent.
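The diamond behavior in the last bullet can be checked directly in Python 3, where all classes are new-style:

```python
class A:            # shared grandparent
    def who(self): return "A"

class B(A):         # parent, overrides who
    def who(self): return "B"

class C(A):         # parent, overrides who
    def who(self): return "C"

class D(B, C):      # diamond: both parents are checked before A
    pass

print([cls.__name__ for cls in D.mro()])   # ['D', 'B', 'C', 'A', 'object']
```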
create object
How to create an object.
get and set attribute
How to get and set an attribute.
python:
Defining explicit setters and getters in Python is considered poor style. Extra logic can be achieved without disrupting the clients of the class by creating a property:
def getValue(self):
    print("getValue called")
    return self.__dict__['value']

def setValue(self, v):
    print("setValue called")
    self.__dict__['value'] = v

value = property(fget=getValue, fset=setValue)
instance variable visibility
How instance variable access works.
define method
How to define a method.
invoke method
How to invoke a method.
destructor
How to define a destructor.
python:
A Python destructor is not guaranteed to be called when all references to an object go out of scope, but apparently this is how the CPython implementation works.
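A small sketch of that CPython behavior; the immediate call to __del__ is a CPython implementation detail, not a language guarantee:

```python
released = []

class Resource:
    def __del__(self):              # destructor; timing is implementation-defined
        released.append("closed")

r = Resource()
del r                               # in CPython the refcount drops to zero here
# released == ["closed"] under CPython; other implementations may defer the call
```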
ruby:
Ruby lacks a destructor. It is possible to register a block to be executed before the memory for an object is released by the garbage collector. A ruby interpreter may exit without releasing memory for objects that have gone out of scope and in this case the finalizer will not get called. Furthermore, if the finalizer block holds on to a reference to the object, it will prevent the garbage collector from freeing the object.
method missing
How to handle when a caller invokes an undefined method.
php:
Define the method __callStatic to handle calls to undefined class methods.
python:
__getattr__ is invoked when an attribute (instance variable or method) is missing. By contrast, __getattribute__, which is only available on new-style classes, is always invoked, and can be used to intercept access to attributes that exist. __setattr__ and __delattr__ are invoked on every attempt to set or delete an attribute, whether or not it exists. The del statement is used to delete an attribute.
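A minimal Python 3 illustration of the __getattr__ fallback described above (the Proxy class is invented):

```python
class Proxy:
    def __getattr__(self, name):    # called only when normal lookup fails
        return "missing:" + name

p = Proxy()
p.x = 1
print(p.x)          # attribute exists, so __getattr__ is not consulted
print(p.y)          # lookup failed, so __getattr__ handles it
del p.x             # del removes the attribute; p.x now falls back again
```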
ruby:
Define the method self.method_missing to handle calls to undefined class methods.
define class method
invoke class method
How to invoke a class method.
define class variable
get and set class variable
method alias
How to create an alias for a method.
ruby:
Ruby provides the keyword alias and the method alias_method in the class Module. Inside a class body they behave identically. When called from inside a method alias has no effect but alias_method works as expected. Hence some recommend always using alias_method.
Inheritance and Polymorphism
A subclass is a class whose objects contain all of the methods from another class called the superclass. Objects in the subclass should in principle be usable anywhere objects in the superclass can be used. The subclass may have extra methods which are not found in the superclass. Moreover it may replace method definitions in the superclass with its own definitions provided the signature remains the same. This is called overriding.
It is sometimes useful to define a superclass which is never instantiated. Such a class is called an abstract class. An abstract class is a way to share code between two or more subclasses or to define the API that two or more subclasses should implement.
inheritance
How to use inheritance.
mixin
operator overloading
How to define the behavior of the binary operators.
Reflection
object id
How to get an identifier for an object or a value.
inspect type
php:
The PHP manual says that the strings returned by gettype are subject to change and advises using the following predicates instead:
is_null is_bool is_numeric is_int is_float is_string is_array is_object is_resource
All possible return values of gettype are listed in the PHP manual.
basic types
inspect class
How to get the class of an object.
javascript:
inspect class hierarchy
has method?
python:
hasattr(o,'reverse') will return True if there is an instance variable named 'reverse'.
message passing
javascript:
The following works in Firefox:
var o = {}
o.__noSuchMethod__ = function(name) { alert('you called ' + name) }
o.whoopsie()
eval
How to interpret a string as code and return its value.
php:
The value of the string is the value of the return statement that terminates execution. If execution falls off the end of the string without encountering a return statement, the eval evaluates as NULL.
python:
The argument of eval must be an expression or a SyntaxError is raised. The Python version of the mini-REPL is thus considerably less powerful than the versions for the other languages. It cannot define a function or even create a variable via assignment.
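A short demonstration of the limitation described above:

```python
print(eval("1 + 2"))        # expressions are fine

try:
    eval("x = 1")           # assignment is a statement, not an expression
except SyntaxError:
    print("SyntaxError")
```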
list object methods
list object attributes
python:
dir(o) returns methods and instance variables.
pretty print
How to display the contents of a data structure for debugging purposes.
source line number and file name
How to get the current line number and file name of the source code.
command line documentation
How to get documentation from the command line.
ruby:
Searching for Math.atan2 will return either class method or instance method documentation. If there is documentation for both one can be specific with the following notation:
$ ri Math::atan2
$ ri Math#atan2
Net and Web
get local hostname, dns lookup, reverse dns lookup
How to get the hostname and the ip address of the local machine without connecting to a socket.
The operating system should provide a method for determining the hostname. Linux provides the uname system call.
A DNS lookup can be performed to determine the IP address for the local machine. This may fail if the DNS server is unaware of the local machine or if the DNS server has incorrect information about the local host.
A reverse DNS lookup can be performed to find the hostname associated with an IP address. This may fail for the same reasons a forward DNS lookup might fail.
http get
How to make an HTTP GET request and read the response into a string.
http post
serve working directory
A command line invocation to start a single-process web server which serves the working directory at a given port (8000 in the example below).
$ sudo cpan -i IO::All
$ perl -MIO::All -e 'io(":8000")->fork->accept->(sub { $_[0] < io(-x $1 ? "./$1 |" : $1) if /^GET \/(.*) / })'
absolute url
How to construct an absolute URL from a base URL and a relative URL as documented in RFC 1808.
When constructing the absolute URL, the rightmost path component of the base URL is removed unless it ends with a slash /. The query string and fragment of the base URL are always removed.
If the relative URL starts with a slash / then the entire path of the base URL is removed.
If the relative URL starts with one or more occurrences of ../ then one or more path components are removed from the base URL.
The base URL and the relative URL will be joined by a single slash / in the absolute URL.
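The RFC 1808 rules above can be exercised with Python's urllib.parse.urljoin (the example URLs are invented):

```python
from urllib.parse import urljoin

base = "http://example.com/a/b?q=1#frag"
print(urljoin(base, "c"))     # rightmost component, query, and fragment dropped
print(urljoin(base, "/c"))    # leading slash: entire base path dropped
print(urljoin("http://example.com/a/b/", "../c"))  # ../ removes one component
```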
php:
Here is a PHP function which computes absolute urls.
parse url
How to extract the protocol, host, port, path, query string, and fragment from a URL. How to extract the parameters from the query string.
python:
urlparse can also be used to parse FTP URLs:
up = urlparse.urlparse(';type=binary')
up.username   # 'foo'
up.password   # 'bar'
up.params     # 'type=binary'
ruby:
How to parse an FTP URL:
up = URI(';type=binary')
up.user       # "foo"
up.password   # "bar"
up.typecode   # "binary"
url encode/decode
How to URL encode and URL unencode a string.
URL encoding, also called percent encoding, is described in RFC 3986. It replaces all characters except for the letters, digits, and a few punctuation marks with a percent sign followed by their two digit hex encoding. The characters which are not escaped are:
A-Z a-z 0-9 - _ . ~
URL encoding can be used to encode UTF-8, in which case each byte of a UTF-8 character is encoded separately.
When form data is sent from a browser to a server via an HTTP GET or an HTTP POST, the data is percent encoded but spaces are replaced by plus signs + instead of %20. The MIME type for form data is application/x-www-form-urlencoded.
python:
In Python 3 the functions quote_plus, unquote_plus, quote, and unquote moved from urllib to urllib.parse.
urllib.quote replaces a space character with %20.
urllib.unquote does not replace + with a space character.
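A Python 3 sketch of the two encoding conventions, using the urllib.parse functions (these are the Python 3 names of the functions mentioned above):

```python
from urllib.parse import quote, quote_plus, unquote, unquote_plus

print(quote("a b&c"))        # space becomes %20, & becomes %26
print(quote_plus("a b&c"))   # form encoding: space becomes +
print(unquote("a%20b+c"))    # + is left alone
print(unquote_plus("a+b"))   # + becomes a space
```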
html escape
How to escape special characters in HTML character data; how to escape special characters in HTML attribute values; how to unescape HTML entities.
In character data, such as what occurs in between a start and end tag, the characters <, >, and & must be replaced by &lt;, &gt;, and &amp;.
Attribute values in HTML tags must be quoted if they contain a space or any of the characters "'`=<>. Attribute values can be double quoted or single quoted. Double quotes and single quotes can be escaped by using the HTML entities &quot; and &#39;. It is not necessary to escape the characters <, >, and & inside quoted attribute values.
php:
By default, htmlspecialchars escapes double quotes " but not single quotes '.
The flag ENT_QUOTES causes single quotes ' to be escaped as well; the flag ENT_NOQUOTES leaves both kinds of quotes unescaped.
base64 encode/decode
How to encode binary data in ASCII using the Base64 encoding scheme.
A popular Base64 encoding is the one defined by RFC 2045 for MIME. Every 3 bytes of input is mapped to 4 of these characters: [A-Za-z0-9/+].
If the input does not consist of a multiple of three bytes, then the output is padded with one or two equals signs: =.
Whitespace can be inserted freely into Base64 output; this is necessary to support transmission by email. When converting Base64 back to binary, whitespace is ignored.
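A Python 3 sketch of the padding and whitespace rules, using the standard base64 module:

```python
import base64

print(base64.b64encode(b"Ma"))          # 2 bytes in -> one '=' pad
print(base64.b64encode(b"M"))           # 1 byte in  -> two '=' pads
print(base64.b64decode(b"TW\nE="))      # embedded whitespace is ignored
```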
Unit Tests
test class
How to define a test class and make a truth assertion.
The argument of a truth assertion is typically an expression. It is a good practice to include a failure message as a second argument which prints out variables in the expression.
run tests; run test method
How to run all the tests in a test class; how to run a single test from the test class.
equality assertion
How to test for equality.
python:
Note that assertEquals does not print the values of its first two arguments when the assertion fails. A third argument can be used to provide a more informative failure message.
approximate assertion
How to assert that two floating point numbers are approximately equal.
regex assertion
How to test that a string matches a regex.
exception assertion
How to test whether an exception is raised.
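The assertion styles above can be sketched with Python 3's unittest module (the test names and values are invented; assertRegex was called assertRegexpMatches in older Pythons):

```python
import unittest

class TestAssertions(unittest.TestCase):
    def test_equality(self):
        x = 2 + 3
        self.assertEqual(x, 5, "x=%r" % x)          # message shown on failure

    def test_approximate(self):
        self.assertAlmostEqual(0.1 + 0.2, 0.3, places=7)

    def test_regex(self):
        self.assertRegex("hello world", r"wor.d")

    def test_exception(self):
        with self.assertRaises(ZeroDivisionError):
            1 / 0

suite = unittest.defaultTestLoader.loadTestsFromTestCase(TestAssertions)
result = unittest.TestResult()
suite.run(result)
print(result.wasSuccessful())
```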
mock method
How to create a mock method.
A mock method is used when calling the real method from a unit test would be undesirable. The method that is mocked is not in the code that is being tested, but rather a library which is used by that code. Mock methods can raise exceptions if the test fails to invoke them or if the wrong arguments are provided.
python:
assert_called_once_with takes the same number of arguments as the method being mocked.
If the mock method was called multiple times, the method assert_called_with can be used in place of assert_called_once_with to make an assertion about the arguments that were used in the most recent call.
A mock method which raises an exception:
foo = Foo()
foo.run = mock.Mock(side_effect=KeyError('foo'))
with self.assertRaises(KeyError):
    foo.run(13)
foo.run.assert_called_with(13)
ruby:
The with method takes the same number of arguments as the method being mocked.
Other methods are available for use in the chain which defines the assertion. The once method can be replaced by never or twice. If there is uncertainty about how often the method will be called one can use at_least_once, at_least(m), at_most_once, at_most(n) to set lower or upper bounds. times(m..n) takes a range to set both the lower and upper bound.
A mock method which raises an exception:
foo = mock()
foo.expects(:run).
  raises(exception = RuntimeError, message = 'bam!').
  with(13).
  once
assert_raises(RuntimeError) do
  foo.run(13)
end
There is also a method called yields which can be used in the chain which defines the assertion. It makes the mock method yield to a block. It takes as arguments the arguments it passes to the block.
setup
How to define a setup method which gets called before every test.
teardown
How to define a cleanup method which gets called after every test.
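A Python 3 sketch of setup and teardown with unittest; note that each test method gets a fresh fixture:

```python
import unittest

class TestWithFixture(unittest.TestCase):
    def setUp(self):                 # runs before every test method
        self.items = [1, 2]

    def tearDown(self):              # runs after every test method
        self.items = None

    def test_append(self):
        self.items.append(3)
        self.assertEqual(self.items, [1, 2, 3])

    def test_pop(self):              # fresh fixture, unaffected by test_append
        self.assertEqual(self.items.pop(), 2)

suite = unittest.defaultTestLoader.loadTestsFromTestCase(TestWithFixture)
result = unittest.TestResult()
suite.run(result)
print(result.wasSuccessful())
```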
Debugging and Profiling
check syntax
How to check the syntax of code without executing it.
flags for stronger and strongest warnings
Flags to increase the warnings issued by the interpreter.
python:
The -t flag warns about inconsistent use of tabs in the source code. The -3 flag is a Python 2.X option which warns about syntax which is no longer valid in Python 3.X.
lint
A lint tool.
source cleanup
A tool which detects or removes semantically insignificant variation in the source code.
run debugger
How to run a script under the debugger.
debugger commands
A selection of commands available when running the debugger. The gdb commands are provided for comparison.
benchmark code
How to run a snippet of code repeatedly and get the user, system, and total wall clock time.
profile code
How to run the interpreter on a script and get the number of calls and total execution time for each function or method.
Java Interoperation
version
Version of the scripting language JVM implementation used in this reference sheet.
repl
Command line name of the repl.
interpreter
Command line name of the interpreter.
compiler
Command line name of the tool which compiles source to java byte code.
prologue
Code necessary to make java code accessible.
new
How to create a java object.
method
How to invoke a java method.
import
How to import names into the current namespace.
import non-bundled java library
How to import a non-bundled Java library
shadowing avoidance
How to import Java names which are the same as native names.
convert native array to java array
How to convert a native array to a Java array.
are java classes subclassable?
Can a Java class be subclassed?
are java classes open?
Can a Java class be monkey patched?
CS50x Substitution
From problem set 2
This time we're preparing to code a substitution cipher. Instead of getting a number for the key, we'll be getting a string. A 26-character-long string to be more exact, where each character replaces the letter at the same index in the alphabet.
“A key, for example, might be the string
NQXPOMAFTRHLZGECYJIUWSKDVB. This 26-character key means that
A (the first letter of the alphabet) should be converted into
N (the first character of the key),
B (the second letter of the alphabet) should be converted into
Q (the second character of the key), and so forth.”
$ ./substitution YTNSHKVEFXRBAUQZCLWDMIPGJO
plaintext: HELLO
ciphertext: EHBBQ
Acquiring and validating the key should follow basically the same steps we saw in Caesar, with some tests needing a little more thinking through. Encrypting should be as easy or easier.
Some instructions before proceeding: we should only modify alphabetical characters, and while the case of the key doesn't matter, the text must keep the case it was given. Any time you use return to terminate on an error, it must return 1. Remember, the key must be 26 characters long, alphabetical only, with no repetitions.
// Validate the key
// Check argc for 2 and argv[1] for len and non-alpha chars or repetitions
// Ask user for plain text and store it
// Replace each char in text with the char with same index in key. Keep case and non-alpha
Seems simple enough; the core work here is making the code check, check and check every little thing. After some thinking, the base code ended up like this:
// Ciphers a text using substitution cipher
#include <stdio.h>
#include <cs50.h>
#include <ctype.h>
#include <string.h>

// Prototype functions
void subs(char *, char *);

int main(int argc, char *argv[])
{
//validate key here
string text = get_string("Enter Text: ");
subs(text, argv[1]);
}

void subs(char *t, char *k)
{
}
As I did in Caesar, I’m checking for argc and argv inside main function. I included ctype and string along with the other usual libraries. We have the proto for our only custom function and the get plaintext part is pretty much done. So we proceed to validate the key.
int main(int argc, char *argv[])
{
//validate key here
if (argc != 2)
{
printf("Usage: ./substitution key\n");
return 1;
}
if (strlen(argv[1]) != 26)
{
printf("Key must contain 26 characters.\n");
return 1;
}
for (int i = 0; i < strlen(argv[1]); i++)
{
if (isalpha(argv[1][i]) == 0)
{
printf("Key must only contain alphabetic characters. \n");
return 1;
}
for (int r = i + 1; r < strlen(argv[1]); r++)
{
if (toupper(argv[1][r]) == toupper(argv[1][i]))
{
printf("Key must only contain one of each alphabetic character. \n");
return 1;
}
}
}
First the code checks that argc is correct with 2 parameters (./substitution key); if not, we explain the error and exit. Then we check the key length; again, if it's not 26 we explain and exit. Next we need to check that every character in the key is alphabetic, so a loop iterates through the string, testing each character with isalpha. The instructions also warned us about repetition; to check that we need a nested loop. With the nested loop we can keep the value from the first loop and compare it against every other character, starting at value + 1 inside the nested loop. Since our key is not case sensitive, we need to make sure both letters are compared in the same case. I chose upper for no particular reason; it's easier to call toupper on both letters than to discover which case each value is using and copy it to the other. To end our key checks we return an error if we find a duplicated character, and exit.
From here we can test if our code is checking keys correctly. Grab a example from problem set and try to make it fail.
~/pset2/substitution/ $ ./substitution VcHpRzGjNtLsKfBdQwAxEuYmOi
Enter Text:
~/pset2/substitution/ $ ./substitution VcHpRzGjNtLsKfBdQwAxEuYmOiL
Key must contain 26 characters.
~/pset2/substitution/ $ ./substitution VcHpRzGjNtLsKfBdQwAxEuYmO5
Key must only contain alphabetic characters.
~/pset2/substitution/ $ ./substitution VcHpRzGjNtLsKfBdQwAxEuYmOv
Key must only contain one of each alphabetic character.
~/pset2/substitution/ $ ./substitution VcHpRzGjNtLsKfBdQwAxEuYmOi o
Usage: ./substitution key
Now that we are sure our key validation is working correctly, we can proceed to encrypting the text. Strings t and k stand for text and key respectively.
void subs(char *t, char *k)
{
printf("ciphertext: ");
for (int i = 0; i < strlen(t); i++)
{
if (isalpha(t[i]) != 0)
{
if (isupper(t[i]) != 0)
{
int index = t[i] - 'A';
printf("%c", toupper(k[index]));
}
else
{
int index = t[i] - 'a';
printf("%c", tolower(k[index]));
}
}
else
{
printf("%c", t[i]);
}
}
printf("\n");
}
The function begins by printing "ciphertext: " with no line break. Then a loop runs from 0 to the length of the text. It first checks whether the character at t[i] is alphabetic, then whether it's upper or lowercase. Inside the ifs checking the case, we use a formula very much like Caesar's to get the alphabetical index of t[i]. Then it's just a matter of printing the character at that same index, but from the key string instead of the text string. We do it for each character of the text, one at a time. On the other hand, if it's not alphabetical, we just print it as it is. To end the function and the program, we print a new line.
Staff tests and full code with comments below.
Results generated by style50 v2.7.4
Looks good!
Results for cs50/problems/2020/x/substitution generated by check50 v3.1.2
:) substitution.c exists
:) substitution.c compiles
:) encrypts "A" as "Z" using ZYXWVUTSRQPONMLKJIHGFEDCBA as key
:) encrypts "a" as "z" using ZYXWVUTSRQPONMLKJIHGFEDCBA as key
:) encrypts "ABC" as "NJQ" using NJQSUYBRXMOPFTHZVAWCGILKED as key
:) encrypts "XyZ" as "KeD" using NJQSUYBRXMOPFTHZVAWCGILKED as key
:) encrypts all alphabetic characters using DWUSXNPQKEGCZFJBTLYROHIAVM as key
:) handles lack of key
:) handles invalid key length
:) handles invalid characters in key
:) handles duplicate characters in key
:) handles multiple duplicate characters in key | https://guilherme-pirani.medium.com/cs50x-substitution-fcb34689e499 | CC-MAIN-2022-40 | refinedweb | 1,012 | 55.24 |
Retrieve robot name
Is there a way to retrieve robot name from the urdf in c++? Or any other way to get it? Thanks
Hi @muttidrhummer!
I have created a video that answers exactly this question.
Summarizing, this can be accomplished by loading a URDF file, parsing it and calling the getName() method of the C++ Urdf model.
To see how to load and parse a URDF file, you can follow the Parse a urdf file tutorial.
After you create your parser.cpp, before the return 0; statement you should add

ROS_INFO("The robot name is: %s", model.getName().c_str());

to print the name of the robot from the URDF file, as we can see in the code below:
#include <urdf/model.h>
#include "ros/ros.h"

int main(int argc, char** argv){
  ros::init(argc, argv, "my_parser");
  if (argc != 2){
    ROS_ERROR("Need a urdf file as argument");
    return -1;
  }
  std::string urdf_file = argv[1];
  urdf::Model model;
  if (!model.initFile(urdf_file)){
    ROS_ERROR("Failed to parse urdf file");
    return -1;
  }
  ROS_INFO("Successfully parsed urdf file");
  ROS_INFO("The robot name is: %s", model.getName().c_str());
  return 0;
}
After compiling the code and generating the executable called parser following the Parse a urdf file tutorial, you can call it with the path of the URDF file as an argument.
Something like:
rosrun your_package_name parser /path/to/your/file.urdf
If you still have any doubt, just check the video.
Thanks, works like a charm.
Perfect
You're Welcome.
Asked: 2017-10-03 11:34:58 -0500
Seen: 655 times
Last updated: Oct 16 '17
any reason you need the name? There's got to be an easier way, but you could read the robot description from the parameter server then parse the xml string. If you know you're going to need the name why not load the name into the parameter server in your launch file? | https://answers.ros.org/question/272187/retrieve-robot-name/?answer=273168 | CC-MAIN-2022-21 | refinedweb | 408 | 64.81 |
Basically, I made a small calculator. It does the PEMDAS and factorials. Part of my code is here:
#include "stdafx.h"
#include <iostream>
#include <stdio.h>
#include <math.h>
using namespace std;

short MENU (void);
void raiz (void);
void suma (void);
etc.....
int main ()
{
    short OPC;
    OPC=MENU ();
    do
    {
        switch (OPC)
        {
            case 1: raiz(); break;
            case 2: suma(); break;
            etc.....
            case 8: break;
            default: cout<<"\n Seleccion Erronea. \n ";
                     OPC=MENU();
        }
    } while ((OPC>8) || (OPC<1));
    cout<<"\n\n\t\t ";
    system ("pause");
    return 0;
}
After that come the modules themselves (short MENU, void suma, etc.), which do each operation. However, I want the program to return to the menu after I select suma (addition) and finish with it, and to keep going until I select case 8 (Quit).
Perl module - part of CPAN distribution Inline-Python 0.20.
Inline::Python - Write Perl subs and classes in Python.
print "9 + 16 = ", add(9, 16), "\n";
print "9 - 16 = ", subtract(9, 16), "\n";
use Inline Python => <<'END_OF_PYTHON_CODE';
def add(x,y):
    return x + y

def subtract(x,y):
    return x - y
END_OF_PYTHON_CODE
Inline::Python
One of the coolest new features in this release of Inline::Python is that
you can now access Perl from within your embedded Python code. This allows
all sorts of interesting interaction between the languages.
This document describes Inline::Python, the Perl package which gives you
access to a Python interpreter. For lack of a better place to keep it, it
also gives you instructions on how to use perlmodule, the Python package
which gives you access to the Perl interpreter.
perlmodule
Using Inline::Python will seem very similar to using another Inline
language, thanks to Inline's consistent look and feel.
This section will explain the different ways to use Inline::Python.
For more details on Inline, see 'perldoc Inline'.
Maybe you have a whole library written in Python that only needs one entry
point. You'll want to import that function. It's as easy as this:
doit();
from mylibrary import doit
Inline::Python actually binds to every function in Python's "global" namespace
(those of you in the know, know that namespace is called '__main__'). So if
you had another function there, you'd get that too.
If you've written a library in Python, you'll make it object-oriented.
That's just something Python folks do. So you'll probably want to import a
class, not a function. That's just as easy:
my $obj = new Myclass;
from mylibrary import myclass as Myclass
What if you have a class that wasn't imported? Can you deal with instances
of that class properly?
Of course you can! Check this out:
use Inline Python => <<END;
def Foo():
    class Bar:
        def __init__(self):
            print "new Bar()"
        def tank(self):
            return 10
    return Bar()
my $o = Foo();
$o->tank();
In this example, Bar isn't imported because it isn't a global -- it's hidden
inside the function Foo(). But Foo() is imported into Perl, and it returns an
instance of the Bar class. What happens then?
Bar
Whenever Inline::Python needs to return an instance of a class to Perl, it
generates an instance of Inline::Python::Object, the base class for all
Inline::Python objects. This base class knows how to do all the things you
need: calling methods, in this case.
perl
The perl package exposes Perl packages and subs. It uses the same code as
Inline::Python to automatically translate parameters and return values as
needed. Packages and subs are represented as PerlPkg and PerlSub,
respectively.
PerlPkg
PerlSub(source code)
Unlike Python, Perl has no exec() -- the eval() function always returns the
result of the code it evaluated. eval() takes exactly one argument, the
perl source code, and returns the result of the evaluation.
require(module name)
use(module name)
Use require() instead of import. In Python, you'd say this:
import:
Python's __getattr__() function allows the package to dynamically return
something to satisfy the request. For instance, you can get at the subs
in a perl package by using dir() (which is the same as getattr(perl,
'__methods__')..
For example, accessing perl.f returns the Perl sub named f.
When Inline::Python imports a class or function, it creates subs in Perl
which delegate the action to some C functions I've written, which know how
to call Python functions and methods.
class Foo:
    def __init__(self):
        print "new Foo object being created"
        self.data = {}
    def get_data(self): return self.data
    def set_data(self,dat):
        self.data = dat
Inline::Python actually generates this code and eval()s it:
package main::Foo;
@main::Foo::ISA = qw(Inline::Python::Object);
sub new {
splice @_, 1, 0, "__main__", "Foo";
return &Inline::Python::py_new_object;
}
sub set_data {
splice @_, 1, 0, "set_data";
return &Inline::Python::py_call_method;
}
sub get_data {
splice @_, 1, 0, "get_data";
return &Inline::Python::py_call_method;
}
sub __init__ {
splice @_, 1, 0, "__init__";
return &Inline::Python::py_call_method;
}
More about those py_* functions, and how to generate this snippet of code
yourself, in the next section.
py_*
my $o = Inline::Python::Object->new('__main__', 'MyClass');
$o->put("candy", "yummy");
die "Ooops" unless $o->get("candy") eq 'yummy';
Inline::Python provides a full suite of exportable functions you can use to
manipulate Python objects and functions "directly".
def Foo():
return 42
py_bind_class("main::Foo", "__main__", "Foo", "set_data", "get_data");
my $o = new Foo;
This call to py_bind_class() will generate binding code like the snippet shown earlier and eval() it. Calls to methods that were not bound fall through the inheritance tree to the AUTOLOAD method. I recommend binding to
the functions you know about, especially if you're the one writing the code.
If it's auto-generated, use py_study_package(), described below.
Here are some things to watch out for:
Note that the namespace imported into Perl is NOT recursively
traversed. Only Python globals are imported into Perl --
subclasses, subfunctions, and other modules are not imported.
Example:
import mymodule
class A:
class B: pass
The namespace imported into perl is ONLY that related to A. Nothing
related to mymodule or B is imported, unless some Python code
explictly copies variables from the mymodule namespace into the global
namespace before Perl binds to it.
A
mymodule
B
Neil Watkiss <NEILW@cpan.org>
Brian Ingerson <INGY@cpan.org> is the author of Inline, Inline::C and
Inline::CPR. He was responsible for much encouragement and many
suggestions throughout the development of Inline::Python.
All Rights Reserved. This module is free software. It may be used,
redistributed and/or modified under the same terms as Perl itself.
Error: Cannot open source file "StdAfx.h" (Intellisense)
I am trying a test port of a VC++ project (ATL/WTL) from VC2005 to the 2010 release candidate.
On opening some of my source files in the code editor I noticed a large number of IntelliSense red error "squiggles", the first of which is "Error: Cannot open source file "StdAfx.h"". The rest of them seem to be due to missing pre-compiled header symbols, as would be expected. However, despite this the project builds and runs without any problem, so the compiler can find the precompiled header, but IntelliSense cannot!
I discovered that the files affected were all in a subdirectory off the main source directory. I tried replacing #include "stdafx.h" with #include "..\stdafx.h". This fixed the IntelliSense problem, but now the file will not compile:
Error 6 error C1010: unexpected end of file while looking for precompiled header. Did you forget to add '#include "StdAfx.h"' to your source?
This was after a number of C4627 warnings, including one for "..\stdafx.h" (skipped when looking for precompiled header use). So it seems that IntelliSense and the compiler use different search rules when looking for header files, and that the compiler will only recognise the pre-compiled header file if it is specified exactly, without a path.
I did find a workaround for this bug: have two include lines as follows:

#include "stdafx.h"
#include "..\stdafx.h"
Now both Intellisense and compiler work, with just the one Intellisense error squiggle under the first #include.
Please let me know if there is a setting to fix this properly, although even if there is, there is still a bug, but it would then be in the project migration process.

Saturday, May 01, 2010 10:41 AM
Question
All replies
- It looks like an IntelliSense bug. Another, quicker fix is to add dummy StdAfx.h files in all source code sub-directories that reference the main stdafx.h. Specifically, the dummy stdafx.h will only have one line, such as:

#include "..\StdAfx.h"

These dummy files seem to keep IntelliSense happy, and the compiler ignores them...

Thursday, May 27, 2010 7:28 AM
The squiggle parser doesn't use the same pre-compiler header (PCH) mechanism as the compiler (cl.exe). The squiggle parser needs to find the actual header files. It relies on the include search path to do so. In this case, the squiggle parser fails to find stdafx.h since the .cpp file is in a different directory from stdafx.h.
You can fix this problem by adding "$(ProjectDir)" (or wherever the stdafx.h is) to list of directories under Project->Properties->Configuration Propertes->C/C++->General->Additional Include Directories.
Also take a look at: To approach IntelliSense failure in C++ projects/bullet 2.
Tuesday, June 08, 2010 10:12 PM
- Proposed as answer by Martha Wieczorek MSFT Thursday, September 02, 2010 11:26 PM
- Just had the same problem... I closed the project and reopened it and the problem disappeared!!!???

Saturday, October 16, 2010 9:12 AM
As application developers, we're always challenged with new problems to solve for our customers and stakeholders. Whether you're creating desktop, enterprise, or SOA applications, you will often face the need to persist data to some sort of datastore. Although there are other options, most developers choose to use a relational database to persist data, as well as to store application state.
SQL is a great language for selecting, inserting, and updating data in the database, but it's not well suited for handling things when a particular part of your application changes state, and you need to do something about it. What do I mean by that? Let's take a look at some common, everyday examples of state changes that should be familiar to everyone.
I'm referring to any changes in state within an application, such as:
So, in a typical three tier application, as shown in Figure 1 below, most developers will move all the logic to the 2nd tier (note that for a desktop app, the 1st and 2nd tiers are combined) to monitor state changes, and to act accordingly.
So, for Java developers, this means that you will need to spin up a thread that will periodically poll the database to determine if any changes in state occur. As with any polling application, the question immediately arises, 'How often should I poll?' If you poll the database too frequently, then you're consuming precious resources (such as CPU cycles, heap space, and a db connection) that could be used by the rest of the application. However, if you don't poll your database often enough, then you may have unwanted side-effects in your application because it's not responsive enough. For instance, if you're using a database to maintain the status of employment for employees in a corporation, then polling the database every 24 hours will result in the possibility of a terminated employee having access to corporate resources for up to an entire day.
Therefore, the purpose of this article is to show developers how to completely eliminate the necessity for polling databases for state changes. I want to show you how to create Java triggers in the Oracle database that handle state changes by themselves.
Wait, There's a JVM Inside the Oracle Database?
Yes, there's a JVM inside the Oracle database. Yes, it's been there for years in fact, since the days of Oracle 8i. Yes, it's available for application developers to use in their own applications -- and yes, you can achieve a performance improvement by utilizing it. How much of an improvement? Some tests have shown that JDBC operations utilizing the internal OracleJVM compared to an external JVM can increase performance by 600%, which is quite impressive. Figure 2, located below shows a modified 3-tier architecture using the OracleJVM.
Once you think about it, it's easy to see how the OracleJVM is able to boast such incredible performance numbers:
Now if you're an enterprise developer, does this mean that you can completely dispose of your application server and deploy all your code in the OracleJVM? Absolutely not. You're still going to need (at least) one machine that is capable of handling a large number of HTTP requests from your web clients. This machine should have a pool of connections to the database, and should be able to reuse those connections to handle various operations on behalf of the end-users. However, this does mean that if you have any internal tasks that are contained within the database itself, then it's a likely candidate to be encapsulated within the database's JVM such as internal state changes! The following table maps OracleJVM versions with Java SE compatibility.
Now before we go through the steps involved in creating a pure-Java trigger in the Oracle database, we're going to take on a simpler task. Let's see how to create and load a simple Java class in the database and execute a method from the command line. Listing 1, shown below, is the source code for firstclass.java.sql.

firstclass.java.sql
create or replace java source named FirstClass as
public class FirstClass{
    public static String greeting(String name){
        return "Hello " + name + "!";
    }
}
/

create or replace function firstclass_greeting (name varchar2) return varchar as
language java name 'FirstClass.greeting(java.lang.String) return java.lang.String';
/
As you can see, the first create statement encloses the pure-Java source code for a class named FirstClass (the first create statement is terminated by the "/"). When this statement is executed, your Java class will be loaded in the OracleJVM. The second create statement associates the FirstClass.greeting() method with a callable function name of firstclass_greeting(). The second create statement is not necessary to load, compile, or initialize your Java class, but it provides a PL/SQL wrapper around your Java class so that entities in the SQL engine can call it. One more thing to notice about Listing 1: our PL/SQL wrapper creates a function because we're returning something (in this case, a String) after execution. Later on, you should notice that I'll create a procedure when I want to wrap a Java void method.

Now when you load firstclass.java.sql into SQLPlus and execute it, you should see the following results, as shown in Listing 2.
firstclass.java.sql

SQL> @firstclass.java.sql

Java created.

Function created.
So now we have our class loaded in the OracleJVM, and we also have a function that maps to our class's static method. So let's call that function and see what happens:
The firstclass_greeting() function:

SQL> select firstclass_greeting('Bruce') from dual;

FIRSTCLASS_GREETING('BRUCE')
--------------------------------------------------------------------------------
Hello Bruce!
Great, so at this point, we have successfully tested that we can load Java classes in the OracleJVM, create a PL/SQL wrapper around Java methods, and call the wrapper from within the database. Now, let's write a pure Java trigger that actually solves a real business problem.
So let's examine a classic use case that should be familiar to most people: book sales via an online bookstore. One of the problems that any retailer faces (whether online or brick & mortar) is ensuring that items for sale are always kept in stock. Figure 3 depicts a simple data model for a bookstore's database.
In order to create the tables required for this example, as well as to insert some sample data in the database, just execute the SQL script contained in Listing 4.
drop table books
/
drop table publisher_supply_orders
/
create table books(
    book_id number primary key,
    publisher_id number,
    page_count number,
    author_name varchar2(50),
    book_title varchar2(50),
    description varchar2(500),
    status varchar2(10),
    inventory_qty number
)
/
insert into books values(100, 200, 234, 'Bruce Hopkins', 'Bluetooth for Java', 'great book', 'IN STOCK', 10);
insert into books values(101, 200, 401, 'Sam Jones', 'Living on the East Coast', 'worth every penny', 'IN STOCK', 50);
insert into books values(102, 250, 278, 'Max Jason', 'The South of France', 'a best-seller', 'IN STOCK', 20);
create table publisher_supply_orders(
    book_id number,
    publisher_id number,
    order_quantity number
)
/
So, as you can imagine, the business logic for our internal trigger is very simple. Whenever the books table is updated, check to see if the books.inventory_qty field is less than a particular threshold. If books.inventory_qty is less than the threshold (which I set to be 5 books), then reorder the book by inserting a new row in the publisher_supply_orders table. At a later time, a batch process can read all the entries in publisher_supply_orders and place the actual orders with the publishers all at one time. This way, you can aggregate the orders with the individual publishers, which is a lot more efficient compared to submitting the orders directly to the publishers for each title one at a time. The code in Listing 5, below, creates the Java class, the PL/SQL wrapper, as well as the trigger that the database will use to call your Java class's static method.
ReorderTrigger.java.sql
create or replace java source named "ReorderTrigger" as
import java.sql.*;
import oracle.jdbc.driver.*;

public class ReorderTrigger {

    public static int REORDER_THRESHOLD = 5;
    public static int REORDER_QTY = 25;

    public static void reorderBooks(Integer bookID, Integer publisherID, Integer inventoryQty) {
        if (inventoryQty.intValue() < REORDER_THRESHOLD) {
            try {
                Connection conn = DriverManager.getConnection("jdbc:default:connection:");
                PreparedStatement prep = conn.prepareStatement("insert into publisher_supply_orders values(?,?,?)");
                prep.setInt(1, bookID);
                prep.setInt(2, publisherID);
                prep.setInt(3, REORDER_QTY);
                prep.executeUpdate();
                prep.close();
                conn.close();
            } catch (Exception e) {
                // errors are silently swallowed here; production code should log them
            }
        }
    }
}
/

create or replace procedure procedure_reorderbooks(bookID number, publisherID number, inventoryQty number) as
language java name 'ReorderTrigger.reorderBooks(java.lang.Integer, java.lang.Integer, java.lang.Integer)';
/

create or replace trigger trigger_reorderbooks
before update on books
for each row
begin
    procedure_reorderbooks(:new.book_id, :new.publisher_id, :new.inventory_qty);
end;
/
As you can see in Listing 5, when your Java code executes from within the OracleJVM you still use all the JDBC classes and paradigms that you would use if your code executed outside of the database. You should notice that the connection String that allows you to utilize the internal JDBC driver in the OracleJVM is "jdbc:default:connection:". Additionally, you should notice that we're creating a procedure as our PL/SQL wrapper for our Java method, since we're not returning a result from the method call. After you execute the ReorderTrigger.java.sql file, you should see the results shown in Listing 6, below.
ReorderTrigger.java.sql

SQL> @ReorderTrigger.java.sql

Java created.

Procedure created.

Trigger created.
So now that we have all the components in place for the trigger to execute, let's update the books table so that the inventory_qty is less than the threshold of 5 books. Figure 4 shows one of the rows of sample data in the books table being modified to fall under the threshold, and Figure 5 shows the new row that was inserted in the publisher_supply_orders table automatically by our trigger.

The publisher_supply_orders Table Shows a New Row from the Trigger
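The figures themselves aren't reproduced here, but an update along the following lines (the specific values are illustrative) would push a title under the threshold; with the trigger in place, a corresponding reorder row should then appear:

```sql
-- Drop the on-hand quantity of book 100 below the threshold of 5
update books set inventory_qty = 3 where book_id = 100;

-- With the trigger firing, a row like (100, 200, 25) should be queued
select * from publisher_supply_orders;
```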
Whew! We've covered a lot of material today. The Oracle database is a powerful development platform that not only allows developers to store and query relational data, but it also includes a JVM, which enables developers to create powerful server-side applications. Trust me, we've only scratched the surface here... Stay tuned for more. | http://www.oracle.com/technetwork/articles/javase/index-138215.html | CC-MAIN-2016-07 | refinedweb | 1,737 | 51.68 |
Created on 2010-11-08 17:38 by Valentin.Lorentz, last changed 2014-04-19 08:30 by orsenthil. This issue is now closed.
Hello,
I had this traceback:
Traceback (most recent call last):
...
File "/home/progval/workspace/Supybot/Supybot-plugins/Packages/plugin.py", line 123, in extractData
file = tarfile.open(fileobj=file_)
File "/usr/lib/python2.6/tarfile.py", line 1651, in open
saved_pos = fileobj.tell()
AttributeError: addinfourl instance has no attribute 'tell'
The repr(file_) is : <addinfourl at 47496224 whose fp = <socket._fileobject object at 0x2c933d0>> (the file has been downloaded with urllib).
I use Python 2.6.6 (from Debian Sid repo)
Best regards,
ProgVal
With a socket file you cannot rely on autodetection of the format. I suggest to try
tarfile.open(fileobj=file_, mode='r:')
or
tarfile.open(fileobj=file_, mode='r:gz')
Thanks for your answer.
Sorry, no change...
I'm sure the traceback changed then?
mode='r:' uses a different code path.
Here is the new traceback:
Traceback (most recent call last):
...
File "/home/progval/workspace/Supybot/Supybot-plugins/Packages/plugin.py", line 123, in extractData
file = tarfile.open(fileobj=file_, mode='r:')
File "/usr/lib/python2.6/tarfile.py", line 1671, in open
return func(name, filemode, fileobj, **kwargs)
File "/usr/lib/python2.6/tarfile.py", line 1698, in taropen
return cls(name, mode, fileobj, **kwargs)
File "/usr/lib/python2.6/tarfile.py", line 1563, in __init__
self.offset = self.fileobj.tell()
AttributeError: addinfourl instance has no attribute 'tell'
2.6 only gets security fixes. Can you reproduce the bug with current versions?
I also have the bug with Python 2.7.1rc1 (from Debian 'Experimental' repo)
Thanks. Do you want to work on a patch? We need to had a test to reproduce the error and then fix the code.
Same result on all tested versions (2.5 -> 2.7) :
------------------------------------------
import urllib2
import tarfile
tarfile.open(fileobj=open('/home/progval/Downloads/GoodFrench.tar'), mode='r:') # Works
tarfile.open(fileobj=urllib2.urlopen(urllib2.Request('')), mode='r:') # Fails
------------------------------------------
I don't understand, because /home/progval/Downloads/GoodFrench.tar is the same file...
I don't know if I have the skills to do the patch, I use Python for less than 9 months...
2.5 and 2.6 don’t get bug fixes anymore, only security fixes. If you want to try to fix the code, there are some guidelines at
This code :
import urllib2
print urllib2.urlopen('').read(500)
Prints :
data/0000755000175000017500000000000011466030066011677 5ustar progvalprogval
If the problem comes from networking libraries, I really don't have the skills :s!)
I don't think this will be solved. File-like objects (in this case IO wrappers for the socket) may have different capabilities and tarfile is just expecting too much.
My patch for #15002 relieved the situation somewhat by providing tell() but the IO stream just isn't seekable. I think you'll have to download to a temporary file first to give tarfile all the capabilities it needs.
I guess this should be rejected.
Not being an expert on tar at all, but I tried getting anything working without tell() and seek() and couldn't.
The code reads as if it's supposed to support some tar formats that do not require seeking, but that would be rather hard to predict on a file-by-file basis, I guess.
I agree. Having tell() on a file descriptor of a URL request is not going to be of help. You can easily write to a local file and use all the local file features, if things like .tell() are desired.
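Along the lines of that last suggestion, one way to hand tarfile a fully capable file object is to buffer the download into a seekable object first (a sketch; for large archives a temporary file would be a better buffer than memory):

```python
import io
import tarfile
from urllib.request import urlopen  # urllib2.urlopen on Python 2


def open_remote_tar(url):
    # Read the whole response into memory; the resulting BytesIO
    # supports tell()/seek(), so tarfile's format autodetection works.
    data = urlopen(url).read()
    return tarfile.open(fileobj=io.BytesIO(data), mode="r:*")
```

With this wrapper, tarfile.open never touches the socket object directly, so the missing tell() no longer comes up.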
Advanced Debugging with Pry
This tutorial helps broaden your basic knowledge of debugging by providing you with a different approach and tool for debugging in Ruby.
Before we begin, I would like us to establish some common ground. A basic understanding of the following is required:
Introduction
Pry was and is still being developed by John Mair (project founder, aka banisterfiend), Conrad Irwin, and Ryan Fitzgerald. It is also aided by a healthy number of contributors.
Simply put, Pry is IRB on steroids. With Pry, you get all the features of IRB, plus some more. Besides being able to intercept sessions and perform REPL-like actions in your terminal, you are also able to use your command shell integrations.
Pry can do quite a number of things; however, for the sake of this tutorial, we will focus on using pry to debug.
To set up pry on your machine, run the gem install pry command. This will fetch pry from rubygems.org.
Basic Usage
As mentioned above, you can use pry to intercept your session and evaluate each method call during the interception.

Here we have a class, Greet, that greets people in several languages, as shown below:
class Greet
  def self.english(username)
    "Hello #{username}"
  end

  def self.spanish(username)
    "Holla #{username}"
  end

  def self.deutch(username)
    "Hallo #{usernme}"
  end

  def self.aragonese(username)
    "ola #{username}"
  end
end
You probably noticed that we have a typo in the Deutch method.
If we try to get the Deutch greeting by calling the method using Greet.deutch("John Doe"), we'll get an "undefined local variable or method 'usernme'" error.
If we put pry here, we would be able to evaluate each method individually.
In order to use pry, we must put it into the file we're working with and "bind" it at the specific point we want to intercept.

In this case, we will have:
require 'pry'

class Greet
  def self.english(username)
    "Hello #{username}"
  end

  def self.spanish(username)
    "Holla #{username}"
  end

  def self.deutch(username)
    binding.pry
    "Hallo #{usernme}"
  end

  def self.aragonese(username)
    "ola #{username}"
  end
end
When we run the file again using the Deutch greeting, Greet.deutch("John Doe"), a pry session is initiated. We will be able to access all the methods and arguments in the class.

From: /Users/Ruby/tutorial.rb @ line 15 Greet.deutch:

    14: def self.deutch(username)
 => 15:   binding.pry
    16:   "Hallo #{usernme}"
    17: end

# Checking the argument being passed to the method
[1] pry(Greet)> username
=> "John Doe"

# Making method calls in pry
[2] pry(Greet)> aragonese(username)
=> ola John Doe

# Running individual blocks of code
# In an attempt to debug our error, we will run the contents of the deutch method
[3] pry(Greet)> "Hallo #{usernme}"
NameError: undefined local variable or method `usernme' for Greet:Class

# If we look closely, we'll find that the argument name and the one in our
# code block don't match. We simply fix this error by adding the missing
# character in our code block:
[4] pry(Greet)> "Hallo #{username}"
=> Hallo John Doe
The applications of this are not limited to method calls or accessing variables alone. To exit your pry session, you can enter the command exit or press Ctrl + D.
Doing More with Pry
What if we could inspect all the local variables in our code snippet within our pry session?
From: /Users/Ruby/tutorial.rb @ line 15 Greet.deutch:

    14: def self.deutch(name)
 => 15:   binding.pry
    16: end

# Let's see the methods and local variables available.
# We do this by running the 'ls' command:
[1] pry(Greet)> ls
Greet.methods: aragonese  deutch  english  spanish
locals: _  __  _dir_  _ex_  _file_  _in_  _out_  _pry_  name

# Then we run the 'cd' command against our name variable:
[2] pry(Greet)> cd name

# And we run the 'ls' command to see all the methods for our name variable,
# whose data type is a string:
[3] pry("John Doe"):1> ls
Comparable#methods: <  <=  >  >=  between?
String#methods: % [] capitalize! chr downcase encode gsub intern next! rindex
  setbyte squeeze! sum to_str unicode_normalized? * []= casecmp clear downcase!
  encode! gsub! length oct rjust shellescape start_with? swapcase to_sym unpack
  + ascii_only? center codepoints dump encoding hash lines ord rpartition
  shellsplit strip swapcase! tr upcase << b chars concat each_byte end_with?
  hex ljust partition rstrip size strip! to_c tr! upcase! <=> bytes chomp count
  each_char eql? include? lstrip prepend rstrip! slice sub to_f tr_s upto ==
  bytesize chomp! crypt each_codepoint force_encoding index lstrip! replace
  scan slice! sub! to_i tr_s! valid_encoding? === byteslice chop delete
  each_line freeze insert match reverse scrub split succ to_r unicode_normalize
  =~ capitalize chop! delete! empty? getbyte inspect next reverse! scrub!
  squeeze succ! to_s unicode_normalize!
self.methods: __pry__
locals: _  __  _dir_  _ex_  _file_  _in_  _out_  _pry_
It's a lot, right? With the above, you do not necessarily have to know all of Ruby's string methods by heart. However, I recommend looking into each one of the methods to know when and when not to use them. Ruby Doc is very useful in this regard. Kudos to you if you already know all these methods by heart! I honestly wouldn't mind buying you coffee if you're in the neighbourhood!
You can use the approach above to look into the internals of both your File and Exception classes.
To do that, we must modify our pry session for this edge case:
From: /Users/Ruby/tutorial.rb @ line 15 Greet.deutch:

    14: def self.deutch(name)
 => 15:   binding.pry
    16: end

# We will trigger an exception by attempting to divide a string by an integer
[1] pry(Greet)> name/2
NoMethodError: undefined method `/' for "John Doe":String
from (pry):1:in `deutch'

# This exception is stored in the _ex_ local variable we saw above.
# We can access the exception message at any point in time by looking
# into the variable:
[2] pry(Greet)> _ex_.message
=> "undefined method `/' for \"John Doe\":String"

# Then we can also run a backtrace using the _ex_ variable:
[3] pry(Greet)> _ex_.backtrace
["(pry):1:in `deutch'",
 "/Users/Admin/.rbenv/versions/2.2.3/lib/ruby/gems/2.2.0/gems/pry-0.10.4/lib/pry/pry_instance.rb:355:in `eval'",
 "/Users/Admin/.rbenv/versions/2.2.3/lib/ruby/gems/2.2.0/gems/pry-0.10.4/lib/pry/pry_instance.rb:355:in `evaluate_ruby'",
 "/Users/Admin/.rbenv/versions/2.2.3/lib/ruby/gems/2.2.0/gems/pry-0.10.4/lib/pry/pry_instance.rb:323:in `handle_line'",
 ....
The above is extremely useful when you're trying to debug complex projects and scripts. You can also check the last exception message by using the wtf command in pry. The backtrace will help you seek out the origin of your bug.
Using pry in Loops
Imagine a simple method that takes an array argument and outputs a hash with the names of students and a serial number assigned to each student.

Such a method will look like the one we have below:
def student_serial(i)
  new_hash = {}
  i.each_with_index do |value, index|
    new_hash["#{value}"] = index
  end
  new_hash
end
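As a quick sanity check of the method above (the student names here are made up), a small driver shows the serial numbers it assigns:

```ruby
def student_serial(students)
  new_hash = {}
  students.each_with_index do |value, index|
    new_hash["#{value}"] = index
  end
  new_hash
end

# Each student name is mapped to its zero-based position in the array.
puts student_serial(["Ada", "Grace", "Linus"]).inspect
```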
Using a conditional, you can specify at what point you want to intercept your code and initiate a pry session.
def student_serial(i)
  new_hash = {}
  i.each_with_index do |value, index|
    # we will add a conditional pry to start a session when our value is nil
    binding.pry if value.nil?
    new_hash["#{value}"] = index
  end
  new_hash
end
We only covered some of the basic things pry can do — imagine how much more you can accomplish when you use Pry with Rails.
Yes, pry is awesome, but it doesn't just end here! In the second part of this series, we will combine pry with steering and pry with rails. | https://www.codementor.io/amodutemitope838/advanced-debugging-with-pry-4h331kv87 | CC-MAIN-2017-47 | refinedweb | 1,286 | 65.93 |
On Sat, Aug 21, 2004 at 05:48:49PM +0200, Johannes Berg wrote:

> Tom, and others, have repeatedly said that the namespace is just 2-
> dimensional and the ordering isn't really too special. [Some of that is
> probably to quiet people wanting to change things, but anyway]

Still, I think it would obviously confuse people if the order they see
everyday in their names was not the same as that presented in a browser.

> Thats pretty cool though really cluttered. If this was displayed with
> the other view (version -> branch) then it'd look like
>
>   moin
>     1.1
>       cvs       base-0 .. patch-47
>       features  base-0
>       fixes     base-0 .. patch-15
>
> which looks much cleaner than
>
>   moin
>     cvs
>       1.1  base-0 .. patch-47
>     features
>       1.1  base-0
>     fixes
>       1.1  base-0 .. patch-15

Sure it `looks clean', and if all your branches have the same basically
vestigial version number, it makes little difference. _But_ if the
archive actually _uses_ version numbers, it will suddenly start to look
a lot less clean, and different versions of the same category/branch
will be separated in a most inconvenient manner. IOW, your example
above is sort of a straw-man argument.

I'd suggest that a much better strategy to avoid unnecessary clutter in
the output format would be to make sure that `vestigial' versions don't
consume any unnecessary space. One way to do this might be to put the
first version entry on the same line as the branch name -- then in the
common case where there's only one version, you'd get only one line per
branch, but in the case where multiple versions are used, they would be
grouped together just like abrowse output or whatever. Here's your
example above presented using this format (plus another version added
to illustrate the multiple-version case):

  moin
    cvs       1.1  base-0 .. patch-47
    features  1.1  base-0
              1.2  base-0 .. patch-5
    fixes     1.1  base-0 .. patch-15

-Miles

--
"1971 pickup truck; will trade for guns"
Local domain server access issues
I moved our DHCP from a Windows Small Business Server 2008 machine to our SonicWall firewall. When I try to join a computer to the local domain on the server, I get an error: "That domain couldn't be found. Check the domain name and try again." What are potential troubleshooting steps I can take? Networking is not my strong suit.
See also questions close to this topic
- How can I use the blocking (net.Conn) Read() function?

What I want is to read data on every loop turn to refresh the game data. In the current state, my loop does not loop and stops at the first recovery() call that contains the Read() call.
- C++ Uploading file to ftp server
I am not totally new to coding but new to Server stuff with c++. I found this code on StackOverflow (C++ Uploading file to ftp)
#include <wininet.h>
#include <iostream>  // needed for std::cout; missing in the original snippet
#pragma comment(lib, "Wininet")

int upload() {
    // Initialize the WinINet library and get a root handle.
    HINTERNET hInternet = InternetOpen(NULL, INTERNET_OPEN_TYPE_DIRECT, NULL, NULL, 0);
    // Connect to the FTP server with the given host, user and password.
    HINTERNET hFtpSession = InternetConnect(hInternet, "HOST", INTERNET_DEFAULT_FTP_PORT,
                                            "USER", "PASS", INTERNET_SERVICE_FTP,
                                            INTERNET_FLAG_PASSIVE, 0);
    // Upload the local file C:/test.txt to /test.txt on the server.
    FtpPutFile(hFtpSession, "C:/test.txt", "/test.txt", FTP_TRANSFER_TYPE_BINARY, 0);
    std::cout << "File Uploaded." << std::endl;
    // Release both handles.
    InternetCloseHandle(hFtpSession);
    InternetCloseHandle(hInternet);
    return 0;
}

int main() {
    upload();
    return 0;
}
for everyone who wonders where it is, its an answer.
So now my problem: I don't understand what this code does!! Can please somebody explain this to me plus the comment below this answer. I know that this is much in demand but can please someone explain this to me and the things this op got wrong so I don't make those mistakes (I know I formatted this question pretty shitty I am open for suggestions and feedback)
Thank you for any answers :)
- Azure Hybrid Benefit for Windows Server 2016 using Windows Server 2019 licences
Is it possible to use Windows Server 2019 Datacenter licences to enable Azure Hybrid Benefit for Windows Server 2016 Datacenter virtual machines?
How many licences are needed? Is it simply 1 per VM core?
Thanks
- Architecture of a UI with multiple separate components
Problem
I have site A written in React.
I want to render smaller React components into site A whenever a user navigates to a certain page within site A.
Each smaller React component lives within its own repository, as does site A.
Question
How can I dynamically load these components into site A when site A is in production?
What kind of workflow can I set up for developing the smaller React components locally within site A?
I was thinking of using Web Components, but I don't want to have the components already deployed somewhere and just load them from the server. Preferably there would be a solution where I can set up something in my pipeline to point to the repositories where the smaller components exist and package those along with the site A code into one bundle whenever any component is built.
That also brings up the other problem of loading the same dependencies multiple times (like React, React DOM) due to different projects being packaged.
Other ideas are possibly using npm modules, iframes, etc.
- DNS CNAME from www to root still shows www
Yesterday I bought a new domain for my website. I created a CNAME DNS entry from "" to "mywebsite.com". So people who visit the website with www should be redirected to the root domain. But if I enter "" in the browser and the website loads correctly but still shows "" in the bar. I already used the dig command in the terminal and it shows the correct cname and then the correct resolution from "mywebsite.com" to the IP address. I am pretty new with DNS and don't know what I am doing wrong.
Apache Virtual Host Config:
<VirtualHost *:80>
    ServerName
    Redirect 301 /
</VirtualHost>
- TXT entries from gmail overrides my gitlab pages TXT
I'm trying to verify a gitlab pages domain
So, in my DNS provider, I add a TXT record like:
_gitlab-pages-verification-code.mysite.fr TXT gitlab-pages-verification-code=08206beaab9ad1079993f245f1419a22
but I already have
@ 3600 IN TXT "v=spf1 mx include:_spf.google.com ?all"
that seems to override all my TXT entries.
When I do
dig +short txt mysite.fr
I will not see the TXT entry as long as I don't delete the google entry.
How should I do that? I also read that I can't delete google entry because it will periodically check it.
Any ideas ?
- How can I setup docker compose to use my docker-container DNS?
I have a semi-interesting problem: I'd like to have docker containers address each other by FQDN.
The business I work at has a "microservices" architecture and I've just revamped our docker-compose instrumentation so that we can route to apps.
One problem is that they have no concept of a message bus, and apps talk directly to each other by FQDN.
I've just managed to get the IP of the dnsmasq-based internal DNS using:
ping dnsmasq -c 1 | egrep -o "\(\d{1,3}\.\d{1,3}\.\d{1,3}\.\d{1,3}\)" | egrep -o "\d{1,3}\.\d{1,3}\.\d{1,3}\.\d{1,3}"
I'd like to know if there is a better way to get the IP of that machine to set as a nameserver. I'm thinking that as long as our base Docker images use Alpine or some derivative, they should accept an amendment to resolv.conf.
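As a side note — a sketch, not part of the original question — the IP extraction is easier to keep portable in Python, where `\d` is actually supported (GNU egrep generally wants `[0-9]`). Parsing a sample ping header line:

```python
import re

# Sample first line of `ping dnsmasq -c 1` output (hypothetical IP).
sample = "PING dnsmasq (172.18.0.2): 56 data bytes"

# One pass: capture the dotted quad inside the parentheses.
match = re.search(r"\((\d{1,3}(?:\.\d{1,3}){3})\)", sample)
nameserver_ip = match.group(1) if match else None
print(nameserver_ip)  # → 172.18.0.2
```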
29 May 2009 17:37 [Source: ICIS news]
LONDON (ICIS news)--Oil previously kept on floating storage is being unloaded as the ICE futures Brent forward curve has become less steep, a trader said on Friday.
“The market structure has changed and it is no longer possible to store oil on floating storage and sell it at a profit in the future,” a physical oil trader said.
“Market participants will be looking at unloading their previously stored cargoes,” the trader went on to say.
During the past few months, a combination of low freight rates and low demand resulted in a steep contango, whereby the front end of the futures curve is much lower than the outer months.
This opened the economics for traders to hold on to the physical oil by storing it on tankers and making a profit by selling the barrels in the future at a higher price by using the futures markets.
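A back-of-the-envelope sketch (all cost figures hypothetical) of why a flattening curve unwinds the trade: the cash-and-carry only pays when the calendar spread exceeds the monthly cost of freight and financing.

```python
def storage_carry(spread_per_bbl, monthly_cost_per_bbl):
    """Per-barrel P&L of buying prompt crude, storing it for a month,
    and selling it forward: positive only when contango beats carry costs."""
    return spread_per_bbl - monthly_cost_per_bbl

# Assume a hypothetical carry cost of $0.80/bbl (freight + financing):
print(round(storage_carry(1.00, 0.80), 2))  # steep contango: trade pays
print(round(storage_carry(0.70, 0.80), 2))  # flattened curve: trade loses
```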
The spread between the first and second month contract on the ICE Brent forward curve has narrowed from more than $1/bbl to around $0.70/bbl.
On Friday, prices reached new six-month highs on the back of a weakening US dollar, sharp drops in the US oil stock figures on Thursday, OPEC’s decision to leave output unchanged and gains in the equity markets reflecting optimism over a possible global economic recovery.
By 16:15 GMT, July Brent crude was trading at $65.41/bbl, a gain of $1.02/bbl from the Thursday close of $64.39/bbl.
At the same time, July NYMEX light sweet crude futures were trading around $66.12/bbl, up $1.04/bbl.
Hi it’s me again,
So I am at a point where I am trying to look at the status flags on each of 8 Tic controllers. I am using the serial code example shown on your website.
I am using another function that calls that function, reads the bytearray output, converts it to a string and writes it to an HTML-formatted text file:
def get_all_status_flags(self):
    b = self.get_variables(0x01, 8)
    lstb = list(b)
    error_flag = str(lstb)
    return error_flag
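Not from the original post — a sketch of how such byte values might be decoded, assuming the bit assignments of the Tic "Misc flags 1" variable from Pololu's documentation (energized, position uncertain, forward limit, reverse limit, homing); double-check the bit table in the Tic user's guide before relying on it:

```python
# Assumed bit layout of the Tic "Misc flags 1" variable (offset 0x01).
MISC_FLAGS = {
    0: "energized",
    1: "position uncertain",
    2: "forward limit active",
    3: "reverse limit active",
    4: "homing active",
}

def decode_flags(byte):
    """Return the names of the flag bits set in one status byte."""
    return [name for bit, name in MISC_FLAGS.items() if (byte >> bit) & 1]

# A reading of 9 (binary 0b1001) would mean energized + reverse limit hit:
print(decode_flags(9))  # → ['energized', 'reverse limit active']
```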
The function to write the errorStatus file:
def errorStatus(self):
    dataFile = '/var/www/html/Pantex/errorStatus.xml'
    flag = tic.get_position_uncertain_flag()
    if axis == 1:
        f = open(dataFile, 'w')
        f.write("" + "\n")
        f.write(" " + "\n")
        f.write(" axis" + str(axis) + "\n")
        f.write(" " + flag + "\n")
        f.write(" " + "\n")
        f.flush()
        f.close()
Each Tic is called sequentially axis1, axis2, axis3…axis8 so I can see the flag values for each Tic controller while they are running. I “expected” something like [1,0,0,1,0,0,0,0], but I am getting rather different results.
In this particular run, all the axes were set with reverse limit switches (wired NC); axes 1, 2, 3, 4 had limit switches connected, while on axes 5, 6, 7, 8 the limit switches are not connected, so it would appear as if the home limit switch was activated.
So I believe the first bit (0) reads 1 when running, except that a 9 must mean a limit switch has been hit; bit 3 is showing 128 on all, which (I am guessing) means the limit switch is wired for reverse, not forward.
Anyway, is there a table that shows how to interpret the values? I’ve not seen one in the Tic user’s guide. If I understand the method, I can see it being a powerful debug/error and status tool.
Thanks,
John | https://forum.pololu.com/t/tic-controller-0x01-flag-values-question/21212 | CC-MAIN-2022-21 | refinedweb | 304 | 73.47 |
NAME
offsetof - offset of a structure member
SYNOPSIS
#include <stddef.h>
size_t offsetof(type, member);
DESCRIPTION
offsetof() returns the offset of the given member within the given type, in units of bytes.
CONFORMING TO
POSIX.1-2001, POSIX.1-2008, C89, C99.
EXAMPLES
#include <stddef.h>
#include <stdio.h>
#include <stdlib.h>

int
main(void)
{
    struct s {
        int i;
        char c;
        double d;
        char a[];
    };

    /* Output is compiler dependent */

    printf("offsets: i=%zu; c=%zu; d=%zu a=%zu\n",
            offsetof(struct s, i), offsetof(struct s, c),
            offsetof(struct s, d), offsetof(struct s, a));
    printf("sizeof(struct s)=%zu\n", sizeof(struct s));

    exit(EXIT_SUCCESS);
}
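As an aside (not part of the man page), the same member offsets can be inspected from Python with ctypes; the concrete numbers below assume a typical ABI where int is 4 bytes and double is 8-byte aligned:

```python
import ctypes

class S(ctypes.Structure):
    # Mirrors: struct s { int i; char c; double d; };
    _fields_ = [
        ("i", ctypes.c_int),
        ("c", ctypes.c_char),
        ("d", ctypes.c_double),
    ]

# ctypes exposes each field's offset, analogous to offsetof(struct s, member).
print(S.i.offset, S.c.offset, S.d.offset)  # typically: 0 4 8
print(ctypes.sizeof(S))                    # typically: 16
```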
COLOPHON¶
This page is part of release 5.10 of the Linux man-pages project. A description of the project, information about reporting bugs, and the latest version of this page can be found at https://www.kernel.org/doc/man-pages/.