Opened 10 years ago
Closed 10 years ago
Last modified 7 years ago
#2838 closed defect (wontfix)
ifequal and ifnotequal give unexpected results for True/False comparison
Description
The ifequal and ifnotequal tags behave in an unexpected manner when comparing a variable against True or False: the comparison never evaluates to true. I think this happens because django.template.defaulttags.IfEqualNode tries to resolve True/False using django.template.resolve_variable, gets an exception, and so uses None for the comparison.
I know it's better to write if or if not tags in templates, but this kind of mistake could trip up beginners. One possible fix is to change resolve_variable so it knows about True and False, another would be to simply document this and recommend if/if not instead.
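To illustrate, here is a simplified sketch of the behavior described above. This is illustrative only, not actual Django source; resolve_variable and ifequal below are my stand-ins for the real internals:

```python
# Illustrative sketch only -- NOT actual Django source. The point is that a
# resolver which only understands quoted strings and context lookups maps the
# bare token "True" to None, so the ifequal comparison can never succeed.
def resolve_variable(token, context):
    if len(token) >= 2 and token[0] == '"' and token[-1] == '"':
        return token[1:-1]      # a quoted string literal
    try:
        return context[token]   # a context variable lookup
    except KeyError:
        return None             # unresolvable token (e.g. "True") -> None

def ifequal(left, right, context):
    return resolve_variable(left, context) == resolve_variable(right, context)

context = {"flag": True}
print(ifequal("flag", "True", context))    # False, even though flag is True
print(ifequal("flag", '"True"', context))  # also False: True != "True" (a str)
```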
An example template with the problem:
<html>
<head><title>ifequal bug</title></head>
<body>
<h1>ifequal bug</h1>
<p>{% for i in stuff %} {{i}}{% ifequal forloop.last True %}.{% endifequal %} {% endfor %}</p>
<p>{% for i in stuff %} {{i}}{% if forloop.last %}.{% endif %} {% endfor %}</p>
<h1>ifnotequal</h1>
<p>{% for i in stuff %} {% ifnotequal forloop.first False %},{% endifnotequal %}{{i}} {% endfor %}</p>
<p>{% for i in stuff %} {% if not forloop.first %},{% endif %}{{i}} {% endfor %}</p>
</body>
</html>
and the output:
ifequal bug
1 2 3 4
1 2 3 4.
ifnotequal
,1 ,2 ,3 ,4
1 ,2 ,3 ,4
Change History (3)
comment:1 Changed 10 years ago by mtredinnick
- Resolution set to wontfix
- Status changed from new to closed
comment:2 Changed 10 years ago by mtredinnick
No, we don't want to do this (it was in very briefly and reverted in [3680]). Template tags know about two types of things: context variables (words without quotes) and strings (words in double quotes). Your wish is that it treats certain other words as reserved words, which just overloads things further.
See this thread on django-developers for an explanation of why we do things the current way.
comment:3 Changed 7 years ago by sgornick
Shouldn't an attempt to do this throw an exception then instead of executing it and providing an unexpected result?
Dhananjay Kumar (2)
Sandeep Singh Shekhawat(2)
Vulpes (2)
Nimit Joshi(2)
Abhimanyu K Vatsa(2)
Rajesh VS(2)
Pankaj Kumar Choudhary(1)
Akhil Mittal(1)
Vijay Prativadi(1)
Puran Mehra(1)
Shamim Uddin(1)
Saillesh Pawar(1)
Ehsan Sajjad(1)
Rahul Prajapat(1)
Bruno Leonardo Michels(1)
Ehtesham Mehmood(1)
Satendra Singh Bhati(1)
Hemant Srivastava(1)
Debadatta Mishra(1)
Amit Choudhary(1)
Sandeep Sharma(1)
Ashish Shukla(1)
Dhanushka Athukorala(1)
Sharad Gupta(1)
Lajapathy Arun(1)
Jaish Mathews(1)
Praveen Moosad(1)
bobdain (1)
Ran Kornfeld(1)
Jasper vd(1)
Jibin Pan(1)
Mike Gold(1)
Jigar Desai(1)
Chetan V Nadgouda(1)
How To Work With Enums In ASP.NET MVC
Jan 06, 2016.
In this article you will learn how to work with Enums in ASP.NET MVC.
Understanding Enums Constants in C#
May 30, 2015.
This article explains enum constants in C#.
Diving Into OOP (Day 6) : Understanding Enums in C# (A Practical Approach)
Aug 15, 2014.
This article of the series “Diving into OOP” will explain the enum datatype in C#.
Creating a DropDownList For Enums in ASP.Net MVC
Oct 02, 2013.
This article explains how to populate a DropDownList by enum type in ASP.NET MVC.
Select Data With Enums Via EDF Framework 5.0
Dec 15, 2012.
Today, in this article let’s play around with one of the interesting and most useful concepts in Entity Data Model Framework 5.0.
Creating Generic Enums using C#
Oct 11, 2011.
An enum variable can then be set to any one of these constants or (in the case of ‘Flags’ enums) to a meaningful combination of them.
Enums in C#
Jun 11, 2009.
This article demonstrates how to use enumerations in C#.
Quick Reference To TypeScript
Sep 07, 2016.
In this article, you will learn about TypeScript.
Learn Tiny Bit Of C# In 7 Days - Day 4
Jan 31, 2016.
In this article you will learn about Delegates, Enums, Attributes, Generics in C#. This is part 4 of the series.
Bind Enum To DropDownList In ASP.NET MVC
Dec 18, 2015.
In this article you will learn how to bind Enum to DropDownList in ASP.NET MVC.
Enum in C#
Aug 16, 2015.
This article explains enumerations of the C# language.
Using Reflection to Get Enum Description and Value
May 31, 2015.
In this article you will see how to handle enum values with descriptions.
Getting Started With Enum Support in MVC 5 View
Jan 22, 2014.
This article describes how to upgrade Visual Studio and work with Enum support in MVC 5 View.
Enum in Switch Case in Java
Oct 11, 2013.
In this article you will learn about the enum and switch case in Java and also how enum can be used in switch case.
Programmatically Binding DataSource to ComboBox in Multiple Ways
Oct 05, 2013.
In this article, we will learn how to bind ComboBox DataSource to Array of objects, List of Objects, DataTable, DataSet, DataView and Enumeration Values.
Usage of Class and Enum Inside an Interface
Aug 17, 2013.
This small article provides an outline of the usage of classes and enums inside an interface that seem to be unusual to a developer.
Enum Operations For Name Value Attributes
Jun 10, 2013.
We'll see in this article how to manipulate the values, names and attributes using Reflection at run time.
Introduction To Enum In Java
Jun 03, 2013.
In this article we discuss Enum as a new feature of Java.
Use and Implementation of the Enum Type
Mar 31, 2013.
Here we will see the use and implementation of the Enum type.
Understanding Delegates Predicates and Lambda
Feb 22, 2013.
To have a clear understanding of Predicates, you must have a good understanding of delegates.
Enumeration In C#
Jan 02, 2013.
In this article I explain how to use enum, create an enum and get values from an enum with their enumeration list.
Enum in TypeScript
Dec 07, 2012.
In this article I am going to explain how to use enum in TypeScript.
Enum Support (EF Designer) in Entity Framework 5
Sep 27, 2012.
Entity Framework 5 brings number of improvements and Enum Support in EF Designer or Code First is one of them. In this post you will learn it by creating a simple console application then will add EF Designer and will sketch the Model on designer surface.
Enum Support (Code First) in Entity Framework 5
Sep 26, 2012.
Entity Framework 5 brings number of improvements and Enum Support in Code First is one of them.
How to Play With Enum in C#
Mar 17, 2012.
In this article you will see the use of Enum variables in C#.
Enumeration in DataContract of WCF
Nov 01, 2010.
By default Enums are serializable. If we define Enum at service side and use it in Data Contract, it is exposed at the client side.
.Net 4.0 Code Level Enhancements
May 10, 2010.
I am publishing here some features that are mainly meant to give developers a quick start.
Convert String to Enum
Feb 06, 2010.
How to convert a string value to an enumeration.
Types in C#
Nov 13, 2009.
In this article I will explain about data types in C#.
Binding an Enum to a ComboBox
Oct 06, 2009.
The following code snippet shows how to bind an enumeration to a ComboBox in WPF or Windows Forms using C#.
Setting Enum's Through Reflection
Sep 25, 2006.
This article show to solve the problem of how to set an enum type in a dynamically loaded DLL.
Six Java features C# developers will kill for...
Jul 06, 2006.
Not everything in the .NET Framework is perfect, and Microsoft still has more improvements to implement. This time we will look at six features available to Java developers but unfortunately absent from C#.
Convert an Enum to a String
Oct 24, 2005.
This article shows how to convert an enum value to a string value.
How do I Convert a String to an Enum Value?
Sep 10, 2005.
In this How do I, you will learn how to convert a string to an enum value in C#.
Enhanced XP Button Control
Dec 12, 2003.
The enhanced XP style button is very easy to use and it supports rectangle, circle or ellipse shapes with images and different colors. This control also inherits most of the properties from Forms.Button.
Music Editing Program in C# and .NET
Jan 28, 2003.
This program will create music from a file of letter-coded notes. It will also print and print preview the music.
Creating Exploded Pie Chart Having Click Through Functionality in C#
Dec 26, 2001.
In this article I would like to show you code that would create exploded pie chart and implementing click through functionality to that chart.
Working with Namespaces in C#
Nov 07, 2001.
In C#, namespaces are used to logically arrange classes, structs, interfaces, enums and delegates. The namespaces in C# can be nested. That means one namespace can contain other namespaces also.
Enumerators in C#
Oct 25, 2001.
An enumeration (enum) is a special form of value type, which inherits from System.Enum and supplies alternate names for the values of an underlying primitive type.
PaintBrush in C#
Jan 10, 2001.
The article is the paintbrush application, which demonstrates the different aspects of C# language and certain namespaces. The concepts like EventHandling and class designs are also present.
What's New in dotTrace
This page guides you through notable updates in recent dotTrace releases. Highlights include support for Visual Studio 2017 and simplified profiling of async code.
The downside of asynchronous code is that it's extremely difficult to profile and analyze its performance.
To learn more, see Analyzing performance of asynchronous .NET code with dotTrace.
Do you remember the "performance forecasting" feature in the Performance Viewer?
Now, you can do the same thing in the Timeline Viewer. Simply exclude a particular method from the Call Tree, and dotTrace will recalculate the entire snapshot as if there is no such method.
When examining a list of top methods in Methods and Subsystems, it may be helpful to quickly view backtraces (an inverted call tree) of a particular method to identify its origin. Now, you can do this right in Methods and Subsystems without switching to the Call Tree. Note that some details differ from the Performance Viewer, e.g. the way the methods' time is calculated in Methods and Subsystems and how system calls are folded.
When navigating a call tree, it's always been tough to understand how you ended up at a particular function. Not anymore with dotTrace 2017.2: the Call Tree view shows all your transitions in the left gutter.
The command-line profiler finally supports the Timeline profiling type.
It's also worth noting that dotTrace command-line tools are now available as a NuGet package.
dotTrace 2017.1, along with other products of the ReSharper Ultimate family, can now be installed into Visual Studio 2017.
You can now attach the profiler to running applications using drag and drop. Simply drop a special icon onto the application window that you want to profile.
In 2016.3, the Timeline Viewer gets one of the Performance Viewer's greatest features: Subsystems.
The mechanics of Subsystems are quite simple: in most cases, each subsystem just groups calls made within a certain namespace or assembly. It is extremely useful when you need to quickly evaluate how time in a particular call subtree is distributed among various components: user and system code, WPF, LINQ, collections, strings, and more.
Subsystems are very flexible. If you use third-party frameworks in your solution, simply add the corresponding subsystems to dotTrace. Just a quick glance at the call's Subsystems will allow you to understand how much time this call spends in a particular framework.
dotTrace 2016.3 is able to collect data about memory allocations made to the native heap.
The Native Memory Allocation event filter allows you to see what methods are making the allocations and analyze all issues related to the native memory: potential memory leaks, issues with unmanaged components used by your managed code, and so on. | http://www.jetbrains.com/profiler/whatsnew/index.html?rss | CC-MAIN-2018-13 | refinedweb | 471 | 55.44 |
How can I determine whether a string represents an integer or not?
AIM: MarderIII
Well, you could use atoi (ASCII to integer) or the like (atof, atol, etc.) and determine if the result is a number. atoi returns 0 if the string was not a number. Possibly your string IS the number 0, so just for added safety, in this example we check to make sure that both the value is non-zero and the string does not start with '0'.
Code:
int num;
char * string = "12345";

num = atoi( string );
if (num == 0 && string[0] != '0')
    printf("Not a Number!");
else
    printf("Number is: %i", num);
... Come to think of it, alternately you could parse your string for valid integers, either ignoring invalid characters (alphabet, etc.) or simply flagging the string as non-numeric if one is found, i.e.
Code:
bool IsValidNumber(char * string)
{
    size_t len = strlen( string );   // hoisted out of the loop
    for (size_t i = 0; i < len; i++)
    {
        // ASCII value of '0' = 48, '9' = 57. So if the value is outside the
        // numeric range then fail.
        // Checking for a negative sign "-" could be added: ASCII value 45.
        if (string[i] < 48 || string[i] > 57)
            return false;
    }
    return true;
}
Or to be more thorough you could check that all of the string is valid in every conceivable way :-)
Code:
#include <iostream>
#include <cstdlib>
#include <string>
#include <limits>
#include <cerrno>

using namespace std;

bool is_number(string& src)
{
    const char *start = src.c_str();
    errno = 0;
    char *end;
    long string_value = strtol(start, &end, 10);

    if ( errno == ERANGE ||                                   // Did we set off errno?
         string_value > numeric_limits<int>::max() ||         // Is it too big?
         string_value < numeric_limits<int>::min() ||         // Too small?
         static_cast<unsigned>(end - start) != src.length() ) // Non-numerics?
    {
        return false;
    }
    return true;
}

int main()
{
    string s = "12345";
    if (is_number(s))
    {
        cout << "It's a number!" << endl;
    }
}
*Cela*
Im teaching myself c++ and this board is a LOT of help. I looked at the code offered and managed to get the below to work for Borland C++ 5.02. I think this is what I want.
Code:
#include <iostream>
#include <cstdlib>
#include <string>

using namespace std;

int main()
{
    int num;
    char string[256];

    cin.getline(string, 256, '\n');

    num = atoi( string );
    if (num == 0 && string[0] != '0')
    {
        cout << "Not a Number!";
    }
    else
    {
        cout << "Number is: " << num;
    }
    return 0;
}
AIM: MarderIII
>>I think this is what I want.
Close enough, if you're teaching yourself C++ then rock solid validation might be more than you want :-)
*Cela*
Rock-solid validation is what drives me to learn more!
AIM: MarderIII
Cela's example is accurate, inclusive, and safe... though a little excessive.
>>Rock-solid validation is what drives me to learn more!
Bravo. It's always nice to see people who want to learn. I'll offer some related information.
Something to look for when processing something like this is making sure that your char * string is valid. The line "... && string[0] != '0'" will explode if string is not a valid pointer. So checks like:
if (string != NULL) ... //proceed with code
could prove useful. In your particular case that would be useless as string is not a pointer, but something like that would be a smart idea using the IsValidNumber(...) function example.
Or u can use the standard lib functions -
#include <locale>
isdigit(); // This function determines whether a character in a string is a numeric digit: '3' is numeric, 'L' is not
I think this used to be in the <cctype> header, as I remember reading that those checking functions were there, but MSDN says it's in the <locale> header. Go figure?
My Avatar says: "Stay in School"
Rocco is the Boy!
"SHUT YOUR LIPS..."
>>Celas example is accurate, inclusive, and safe... though a little excessive.
There's nothing wrong with being excessive in the right situations, like nuclear reactor or space station controllers :-) But you're right, it's a little much for most normal use
*Cela* | http://cboard.cprogramming.com/cplusplus-programming/33859-how-can-i-determine-whether-string-represents-integer-not.html | CC-MAIN-2015-48 | refinedweb | 688 | 73.17 |
Introduction
Hello SAP Community,
As you may know, with the release of Fiori Elements Floorplans (List Report, Analytical List Report, Overview Page) most of the requirements raised by Business Consultants are being solved by using them.
But there are still some cases where the floorplan doesn't fit your needs, and we need to use freestyle applications. The distinguishing feature of this kind of app is that it is built without the standard templates, by manually adding the desired controls to the Fiori application.
If this is your case, it is a good option to use smart controls to build them, given the reduced amount of code needed to show the information.
Also, as these controls are used by a large number of standard apps, they are constantly being updated, so we can be sure that our apps will get all the new functionality released for these controls.
When we refer to smart controls, we are talking about SAPUI5 controls such as the SmartFilterBar, the SmartTable and the SmartChart.
In my experience, these are the most common ones used to solve customer needs.
In case you are curious about how to implement a Smart FilterBar and extend it, check the following link.
For extending Smart Tables there is already information about how to extend it: Link1, Link2, Link3 and Link4.
But regarding Charts, I was not able to find much information about how to extend them, so I am going to detail next how to do it.
Typical Use Cases
In today's world, most information is consumed through images and videos, so we can assume our users expect us to deliver solutions following this trend. One way to achieve that is by using Smart Charts. If you want to see what Smart Charts look like, you can find several examples in the official documentation.
When working with Smart Charts, there are some cases where the standard properties, methods or events of the control may not satisfy your needs. In these cases, we have the possibility to extend the functionality of the standard control.
Typical requirements where we may need to extend the Smart Chart control arise when the customer wants to execute certain actions on clicking a particular bar of the chart: filtering, navigation, sending an email, downloading an Excel file, showing a popup, and so on.
The way to resolve this is by using the component sap.chart.Chart, inside the definition of the Smart Chart as we are going to see next.
Coding
I am going to assume you have already placed the Smart Chart in your application and it’s working fine.
As a brief introduction: the only thing you have to define when using a Smart Chart is the EntitySet of the OData service from which it will read its data, plus the preferences you want to apply to the chart. Then you add the annotation file with the dimensions and measures the chart should display. Take a look at this blog if you are interested in how to do that.
 </smartchart:SmartChart>">
<smartchart:SmartChart id="smartChart" entitySet="YourEntitySet">
</smartchart:SmartChart>
In order to extend the Smart Chart with the sap.chart.Chart component, you should add first the namespace to the view.
To do this, go to your View tag and add the xmlns:chart=”sap.chart” tag.
<mvc:View xmlns:mvc="sap.ui.core.mvc" xmlns:smartchart="sap.ui.comp.smartchart" xmlns:chart="sap.chart" ...>
After we have added the namespace to the View tag, we have to indicate the Smart Chart that we are going to extend the sap.chart.Chart component used by it for rendering the graphic.
To be able to extend the Smart Chart, we have to add component sap.chart.Chart as we have mentioned previously.
 <chart:Chart">
<smartchart:SmartChart id="smartChart" entitySet="YourEntitySet">
    <chart:Chart selectData="onSelect" deselectData="onDeselect" renderComplete="onRenderComplete">
    </chart:Chart>
</smartchart:SmartChart>
As you can see, in this case we have configured three different event handlers to manage the events selectData, deselectData and renderComplete of the sap.chart.Chart component.
Keep in mind that these events are not available in the standard Smart Chart control; we were able to introduce them by extending the control with the inner chart.
Finally, we have to add the three methods indicated in the xml view in the controller logic, and code the logic needed to execute the tasks we want.
Event onSelect:
//Method executed when clicking on a bar of the graphic
onSelect: function (oEvent) {
    //Get SmartFilterBar
    var oGlobalFilter = this.getView().byId("yourId--smartFilterBar");
    var filterValue = oEvent.mParameters.data[0].data.<yourProperty>;
    var oDefaultFilter = {
        "yourProperty": filterValue
    };
    //Set SmartFilterBar initial values
    oGlobalFilter.setFilterData(oDefaultFilter);
    oGlobalFilter.fireSearch();
}
Event onDeselect:
//Method executed when deselecting a bar of your graphic
onDeselect: function (oEvent) {
    //Your logic
}
Event onRenderComplete:
//Method executed after graphic is completely rendered
onRenderComplete: function (oEvent) {
    //<your logic>
}
As you have seen, it's very easy to use and extend this smart control. In return, the output and functionality you get are quite amazing.
Well, I hope the content of this blog is useful for you. Don't hesitate to comment and share your own experiences.
Best regards,
Emanuel | https://blogs.sap.com/2020/08/26/how-to-extend-smart-controls-smartchart-onselect-event/ | CC-MAIN-2021-25 | refinedweb | 846 | 58.82 |
When working with distributed systems so many of the problems we're trying to solve boil down to managing state across multiple nodes. Whether it's replicating databases, mirroring message queues or achieving consensus in a blockchain network, the crux of the problem is keeping a distributed state machine in sync.
Replication is not even the hardest problem. When working with a distributed system we have to pick a trade-off: consistency or availability (see CAP theorem). To illustrate this better I think it's helpful to start with what could go wrong:
- Fail-stop failure
- Byzantine failure
Fail Stop
Fail-stop failures happen when a node in a cluster stops responding. This could be caused by a crash, a network split or simply network delays (it's impossible to differentiate a network delay from a network failure). These types of failures can generally cause two types of problems: systems not available to respond to requests or corrupted data.
Systems that try to be highly available (AP) will generally accept requests on both sides of a network split, and when that split is healed, you run into the problem of reconciling the different views of the latest data on each side. Depending on how that reconciliation is done, you can end up with corrupted or lost data.
Systems that try to be highly consistent (CP) will only accept requests to an elected leader. This means that clients can potentially get an error back if they send a request on the wrong side of a network split. Raft is purely a CP system. If you have 5 nodes and lose 2, the system will be available and consistent. Lose 3 nodes, and you are not highly available anymore because the remaining two nodes are in a stalemate trying to pick a leader.
Byzantine failure
This second class of failure is a lot more interesting. If previously we were concerned with data corruption or availability, in this case we ask ourselves: what if we have a malicious client that purposely sends us bad data that we replicate across the cluster?
If we do entertain the idea of trust-less distributed consensus, things start to extend from pure technical solutions into things like game theory: if we cannot stop someone we do not trust from acting maliciously, can we align our interests to ensure they would never do that in the first place?
We could impose fines on those node operators that cheat the system. An escrow service could lock deposit funds for everyone involved and penalize a bad actor. Proof of stake is one example of a consensus mechanism that locks a participant's funds and penalizes them if they act maliciously. Proof of work mechanisms make it as difficult to break the network as it is to profit by using it correctly.
One can look at blockchains beyond the hype, from a distributed-systems perspective, and see that they're really just Byzantine Fault-Tolerant Replicating State Machines. If in a classic distributed state machine we have a log of state changes from which we can reconstruct the state, in a blockchain the log uses cryptography to ensure past state changes cannot be modified.
Raft
Let's return to the subject of my write-up: Raft. In keeping true to the mission statement of this algorithm, I will attempt to explain it in layman terms in this next section. It is truly remarkable that you can digest a distributed consensus algorithm so easily. You may be wondering: why would I care about Raft, who uses that anyway?
Raft use cases generally fall into two categories: data replication or distributed coordination / metadata replication. Here are some examples:
- etcd, Consul or ZooKeeper are examples of metadata replication. They don't replicated data itself but store things like configuration settings or service discovery. etcd and Consul use Raft while Zookeeper uses Zab.
- MongoDB uses Raft for leader election but not data replication - what I called distributed coordination
- InfluxDB uses Raft to replicated meta-data about its nodes
- RabbitMQ uses Raft to replicate data in Quorum Queues
There's even a blockchain project that allows you to use Raft for establishing consensus. Although not using Raft, Tendermint is another very interesting project worth checking if if you're into distributed consensus and blockchains.
Raft Nodes
I will preface this by sharing an absolutely amazing visualization of Raft that I used while learning this. If you are more of a visual learner, I highly recommend checking this Raft explanation out!
Nodes in a Raft cluster all start as Followers. The other possible states they can be in are Candidates or Leaders. Each node is initialized with a random election timeout (150-300ms) and the first node to reach that timeout will promote itself to Candidate.
Note that in order to prevent a stalemate, the election timeout has to be higher than the amount of time it takes for nodes to send messages to each other. The life cycle of a Raft cluster alternates between elections and normal operation.
Elections happen when a Leader becomes unresponsive and does not send heart beats to all the nodes in the cluster. In order to keep all nodes in sync about the current cycle, we store and replicate a term on each node. A term is a monotonically increasing number stored on each node. Initially all nodes start at term 0. After the first election is successful, all nodes will bump the term to 1 and so on.
The term also helps when a network split occurs: if leader elections happen on both sides of the split but only one completes successfully and therefore bumps the term, then when the split heals, the Leader with the lower term steps down.
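As a toy sketch (my own illustrative code, not a full Raft implementation), the step-down rule looks like this:

```python
# Toy sketch -- not a full Raft implementation. It only shows the rule that
# any message carrying a higher term forces a stale Leader to step down.
class Node:
    def __init__(self):
        self.term = 0
        self.role = "follower"

    def on_message(self, sender_term):
        if sender_term > self.term:
            self.term = sender_term
            self.role = "follower"   # adopt the newer term and step down

stale_leader = Node()
stale_leader.role, stale_leader.term = "leader", 1   # elected before the split

# After the split heals, a heartbeat arrives from the Leader elected in term 2:
stale_leader.on_message(sender_term=2)
print(stale_leader.role, stale_leader.term)   # follower 2
```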
Leader Election
After the Candidate node times out, it votes for itself, bumps up the term and sends Vote Requests to all other nodes. If they have not voted yet in this term, they send their votes back. As soon as the Candidate node reaches a majority vote, it promotes itself to the Leader and starts sending Append Entries messages as heartbeats (the interval is configured by the heartbeat timeout). These Append Entries serve a dual purpose: after the Leader is elected, they are used to replicate data from the Leader to the rest of the nodes as well as heart beats.
We could run into a scenario where two nodes reach the election timeout at the same time and both send Vote Requests. In this scenario, neither Candidate receives a majority; the election timeout is once again randomized and the process repeats until only one Candidate self-promotes.
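The tie-breaking depends on drawing a fresh random timeout each round. A minimal illustrative sketch using the 150-300 ms range mentioned above:

```python
import random

# Minimal sketch of the randomized election timeout. A fresh draw on every
# reset makes a repeated tie between two Candidates increasingly unlikely.
def new_election_timeout(low_ms=150, high_ms=300):
    return random.uniform(low_ms, high_ms)

random.seed(42)
print([new_election_timeout() for _ in range(3)])  # three distinct values in [150, 300]
```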
Log Replication
Raft is an example of an asymmetric consensus protocol, meaning all requests are handled by the Leader. Some implementations accept requests on any node and redirect them to the Leader. This does have the potential to bottleneck the Leader. Generally, you won't find Raft clusters with more than 5 nodes in the wild. For example, YugabyteDB uses one Raft cluster to coordinate shard metadata, but for the actual data replication they employ a Raft cluster per shard.
A client sends a change to the Leader and the Leader appends that change to its log (uncommitted). The Leader then sends this change in an Append Entries message on the next heartbeat. Once a majority of the nodes respond that they have also committed the change to their local logs, the Leader commits that change and responds to the client. It then sends another Append Entries message to let the nodes know that they should also commit that change.
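The majority rule itself is tiny. An illustrative sketch (here ack_count counts the Leader's own append plus follower acknowledgements):

```python
# Toy sketch of the commit rule: an entry is committed once the Leader plus a
# majority of the cluster have appended it.
def majority(cluster_size):
    return cluster_size // 2 + 1

def can_commit(ack_count, cluster_size):
    return ack_count >= majority(cluster_size)

print(majority(5))        # 3
print(can_commit(2, 5))   # False -> entry stays uncommitted
print(can_commit(3, 5))   # True  -> Leader commits and replies to the client
```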
Consistency during Network Partitions
Let's assume we have a 5 node cluster and a network partition separates that cluster into two groups of 3 and 2 nodes.
The top partition does not have a Leader, therefore after the heartbeat timeout is reached, one of the nodes will become a Candidate. A Leader can be elected in this case and the election term will be bumped.
With two Leaders across partitions we can potentially have two clients making requests to each Leader. The top partition will successfully commit the change because it will achieve majority, but the bottom will not: the entry will stay uncommitted on both the Leader and the Follower.
Once the partition is healed, the bottom Leader will receive a heartbeat and see a higher election term. It will step down and roll back its uncommitted changes to match the new Leader. This gets the whole cluster back to a consistent replicated state.
You can see why an AP system could end up with bad data. If a client sends a request to the bottom split and we acknowledge that request as successful, when the split heals we discard those changes leading to data loss.
Fun with Elixir
What initially prompted my research into Raft was Chris Keathley's Elixir implementation. Although not complete, it is a great reference implementation in a very suitable language. OTP makes it very easy to send RPC requests between nodes, having Dynamic Supervisors restart the nodes is super handy and GenServers make it easy to work with both timeouts and asyncrounous message passing. On top of that, the library comes with a GenStateMachine you can use to manage the state. You can use the macro provided or roll your own module. Data is stored on disk using RocksDB.
The following is the init callback from lib/raft/server.ex. It initializes each node, restores the term and configuration from the on-disk log if one exists, and arms the election timer via reset_timeout.
def init({:follower, name, config}) do
  Logger.info(fmt(%{me: name}, :follower, "Starting Raft state machine"))

  %{term: current_term} = Log.get_metadata(name)
  configuration = Log.get_configuration(name)

  state = %{@initial_state |
    me: name,
    state_machine: config.state_machine,
    state_machine_state: config.state_machine.init(name),
    config: config,
    current_term: current_term,
    configuration: configuration
  }

  Logger.info(fmt(state, :follower, "State has been restored"), [server: name])

  {:ok, :follower, reset_timeout(state)}
end
The server module is the heart of the implementation and is neatly organized into the Follower and Leader callbacks.
Peers are started as Dynamically Supervised children. This ensures that if one of them goes down, the children will get restarted. Also, if the Raft.Server.Supervisor crashes, it will restart the children once it comes back up.
def start_peer(name, config) do
  DynamicSupervisor.start_child(__MODULE__, {PeerSupervisor, {name, config}})
end
RPC calls between nodes are also supervised:
def send_msg(rpc) do
  Task.Supervisor.start_child(Raft.RPC.Supervisor, fn -> do_send(rpc) end)
end

defp do_send(%{from: from, to: to} = rpc) do
  to
  |> Server.to_server(to)
  |> GenStateMachine.call(rpc)
  |> case do
    %AppendEntriesResp{} = resp -> GenStateMachine.cast(from, resp)
    %RequestVoteResp{} = resp -> GenStateMachine.cast(from, resp)
    error -> error
  end
end
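For completeness, here is roughly what driving the library from the outside looks like: a three-peer cluster running the example Stack state machine. The function names (Raft.start_peer/2, Raft.set_configuration/2, Raft.write/2, Raft.read/2) follow the library's README at the time of writing; treat them and the operation tuples as illustrative rather than a stable API.

```elixir
# Start three peers, each running the example Stack state machine.
{:ok, _} = Raft.start_peer(Raft.StateMachine.Stack, name: :s1)
{:ok, _} = Raft.start_peer(Raft.StateMachine.Stack, name: :s2)
{:ok, _} = Raft.start_peer(Raft.StateMachine.Stack, name: :s3)

# Tell the peers about each other; they hold an election and pick a leader.
Raft.set_configuration(:s1, [:s1, :s2, :s3])

# Writes and reads go through the leader and are replicated to a quorum.
# The exact operation format depends on the state machine module.
{:ok, _} = Raft.write(:s1, {:enqueue, 5})
```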
What's the use case for an Elixir Raft implementation?
Since I am just speculating here, it is advisable to take all of this with a grain of salt.
Here is a useful quote from Erlang and OTP in Action to kick off this discussion:
A cluster of Erlang nodes can consist of two or more nodes. As a practical limit, you may have a couple of dozen, but probably not hundreds of nodes. This is because the cluster is a fully connected network, and the communication overhead for keeping machines in touch with each other increases quadratically with the number of nodes.
This is good to keep in mind. One use case for Raft I could think of is replacing the global process registry with a Raft cluster, to distribute meta information about name -> node@pid mappings. This allows you to run as many nodes as you want without joining them all into a single Erlang cluster or relying on an external system for coordination. I have heard of some doing this in the wild with ZooKeeper. You will also run into Raft if you decide to go the Kubernetes way: the etcd distributed key-value store establishes consensus using Raft and is used to store all data in Kubernetes (its configuration data, its state, and its metadata).
I am trying to create a robot with one DC motor for movement and one servo motor for steering using an Arduino Uno. The servo is simply connected to pin 9, and the DC motor is connected to pin 10 through a MOSFET (see attached diagram). I was hoping that when I type an "L" or an "R" in the serial monitor the servo moves to a certain position, and when I type a number between 0 and 255, the DC motor rotates at that speed. So far I have the code to make the servo turn according to numbers entered into the serial monitor, and the same for the DC motor. How would I make the servo respond to letters, and how would I make them both work from the same code?
Here is the code for the DC Motor:
int motorControlPin = 10;

void setup() {
  pinMode(motorControlPin, OUTPUT);
  Serial.begin(9600);
  Serial.println("Enter a Value between 0-255 to set the speed of the motor");
}

void loop() {
  if (Serial.available()) {
    int speed = Serial.parseInt();
    analogWrite(motorControlPin, speed);
  }
}
Here is the code for the Servo:
#include <Servo.h>

Servo servo1;
long num;

void setup() {
  servo1.attach(9);
  Serial.begin(9600);
  Serial.print("Enter Position = ");
}

void loop() {
  while (Serial.available() > 0) {
    num = Serial.parseInt();
    Serial.print(num);
    Serial.println(" degree");
    Serial.print("Enter Position = ");
  }
  servo1.write(num);
  delay(15);
}
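One way to merge the two sketches (an untested sketch, not a verified answer; the steering angles 45 and 135 are placeholders): peek at the next incoming byte without consuming it; if it is 'L' or 'R', steer the servo; if it is a digit, let parseInt() read the whole number for the motor; otherwise discard it.

```cpp
#include <Servo.h>

Servo servo1;
const int motorControlPin = 10;

void setup() {
  servo1.attach(9);
  pinMode(motorControlPin, OUTPUT);
  Serial.begin(9600);
  Serial.println("L/R to steer, 0-255 to set motor speed");
}

void loop() {
  if (Serial.available() > 0) {
    char c = Serial.peek();               // look at the next byte without consuming it
    if (c == 'L' || c == 'R') {
      Serial.read();                      // consume the letter
      servo1.write(c == 'L' ? 45 : 135);  // placeholder steering angles
    } else if (c >= '0' && c <= '9') {
      int speed = Serial.parseInt();      // reads the whole number
      analogWrite(motorControlPin, constrain(speed, 0, 255));
    } else {
      Serial.read();                      // discard anything else (e.g. newlines)
    }
  }
}
```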
Introduction to NumPy (Part I)
In this article, we will learn the basics of NumPy. It is open-source software that has many contributors.

NumPy is so fundamental that it's hard to imagine the scientific Python ecosystem without it.

But why do you need NumPy, and in what ways is it helpful to us?

NumPy is easy to use and effective at speeding up your code, mainly through vectorization. Vectorization in itself is awesome because you can write array arithmetic as concisely as scalar arithmetic, while the loops run in optimized native code.
Installing NumPy
First, you need to know which Python version you have. To check the installed Python version, run one of the following commands:
If you have Python 2 then use
python -V
If you have Python 3 then use
python3 -V
You will see the version of Python present in your system.
Windows users can run the following commands in their command prompt.
Install PIP before installing NumPy
Python 2
pip install numpy
Python 3
pip3 install numpy
Linux users can follow the same pip commands, or install NumPy through their distribution's package manager.
Importing NumPy
In Python, packages/libraries like NumPy, matplotlib, scipy, etc. are extension modules. Therefore, whenever you want to use an attribute of the NumPy library, you have to import the module first. For the sake of convenience, NumPy is conventionally imported under the alias "np". This alias is widely popular in the data science world.
import numpy as np
Arrays in NumPy
>>> import numpy as np
>>> arr = np.array([1,2,3,4,5,6,7,7,8,9])
>>> print(arr)
[1 2 3 4 5 6 7 7 8 9]
>>> arr
array([1, 2, 3, 4, 5, 6, 7, 7, 8, 9])
>>> arr.dtype
dtype('int32')
>>> print("No. of dimensions: ", arr.ndim)
No. of dimensions:  1
>>> print("Shape of array: ", arr.shape)
Shape of array:  (10,)
>>> print("Size of array: ", arr.size)
Size of array:  10

>>> a = np.array(1,2,3,4,5)   # INCORRECT WAY
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
ValueError: only 2 non-keyword arguments accepted
Array creation: There are several ways of creating an array in Python. For example, you can create an array from a regular Python list or tuple using the array function. The type of the resulting array is deduced from the type of the elements in the sequences.

NumPy also has a function named arange, which creates a one-dimensional array. To make the result multi-dimensional, chain its output with the reshape function.
>>> import numpy as np
>>> array = np.arange(10)   # one-dimensional array
>>> array
array([0, 1, 2, 3, 4, 5, 6, 7, 8, 9])

# multi-dimensional array
>>> array = np.arange(10).reshape(2,5)
>>> array
array([[0, 1, 2, 3, 4],
       [5, 6, 7, 8, 9]])
# This creates 10 integers and then reshapes the array into 2 rows and 5 columns.
NumPy has many other functions like zeros and ones to quickly create and populate an array.

You can use the zeros function to create an array filled with zeros. The parameters to the function represent the number of rows and columns (or its dimensions).
>>>np.zeros((4,5))
array([[0., 0., 0., 0., 0.],
[0., 0., 0., 0., 0.],
[0., 0., 0., 0., 0.],
[0., 0., 0., 0., 0.]])
You can use the ones function to create an array filled with ones.

>>> np.ones((4,5))
array([[1., 1., 1., 1., 1.],
       [1., 1., 1., 1., 1.],
       [1., 1., 1., 1., 1.],
       [1., 1., 1., 1., 1.]])
The empty function creates an array without initializing its entries. Its initial content is random and depends on the state of the memory.
>>> np.empty((4,5))
array([[6.23042070e-307, 4.67296746e-307, 1.69121096e-306,
3.11522054e-307, 1.42413555e-306],
[1.78019082e-306, 1.37959740e-306, 6.23057349e-307,
1.02360935e-306, 1.69120416e-306],
[1.78022342e-306, 6.23058028e-307, 1.06811422e-306,
1.33508761e-307, 1.78022342e-306],
[1.05700345e-307, 1.11261977e-306, 1.69113762e-306,
1.33511562e-306, 2.18565567e-312]])
The full function creates an array of a given shape filled with the given value.

The eye function lets you create an n x n matrix with 1s on the diagonal and 0s everywhere else.
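For example (note that full takes any shape, not just a square one):

```python
import numpy as np

# full: a 2x3 array where every element is 7
a = np.full((2, 3), 7)
print(a)

# eye: a 3x3 identity matrix (1s on the diagonal, 0s elsewhere)
i = np.eye(3)
print(i)
```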
The linspace function returns evenly spaced numbers over a specified interval.
>>> np.linspace(0,5,10)
array([0. , 0.55555556, 1.11111111, 1.66666667, 2.22222222,
2.77777778, 3.33333333, 3.88888889, 4.44444444, 5. ])
You can also use special library functions to create arrays. For example, to create an array filled with random values between 0 and 1, use the random function. This is particularly useful for problems where you need a random state to get started.

To learn more, see the official NumPy documentation.
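For instance, the following fills a 2x2 array with values drawn uniformly from [0, 1):

```python
import numpy as np

r = np.random.random((2, 2))   # 2x2 array of floats in [0, 1)
print(r)
```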
The numpy.ravel() function returns a contiguous flattened array (a 1-D array containing all the input array's elements, with the same type). A copy is made only if needed.

SYNTAX: numpy.ravel(array, order='C')

The ndarray.flatten() method returns a copy of the array collapsed into one dimension.

SYNTAX: ndarray.flatten(order='C')
Here I got something interesting. I used both ravel() and flatten() on the same array. The output of both functions was the same list. So what's the actual difference between the two functions?
import numpy as np
y = np.array(((1,2,3),(4,5,6),(7,8,9)))

print(y.flatten())
# [1 2 3 4 5 6 7 8 9]

print(y.ravel())
# [1 2 3 4 5 6 7 8 9]
- flatten always returns a copy.
- ravel returns a view of the original array whenever possible. This isn't visible in the printed output, but if you modify the array returned by ravel, it may modify the entries in the original array. If you modify the entries in an array returned from flatten, this will never happen. ravel will often be faster since no memory is copied, but you have to be more careful about modifying the array it returns.
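The difference becomes visible when you write through the returned array:

```python
import numpy as np

y = np.array([[1, 2, 3], [4, 5, 6]])

v = y.ravel()      # usually a view onto y's data
v[0] = 99
print(y[0, 0])     # 99 -- the original array saw the change

y2 = np.array([[1, 2, 3], [4, 5, 6]])
c = y2.flatten()   # always an independent copy
c[0] = 99
print(y2[0, 0])    # still 1 -- the original is untouched
```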
Indexing and slicing arrays
Indexing is used to obtain individual elements from an array, but it can also be used to obtain entire rows, columns, or planes from multi-dimensional arrays.
>>> import numpy as np
>>> arr = np.array([0,1,2,3,4,5,6,7,8,9])
>>> arr[9]   # output: 9
>>> arr.reshape(2,5)
array([[0, 1, 2, 3, 4],
       [5, 6, 7, 8, 9]])

Note: reshape returns a new array; arr itself stays one-dimensional until you reassign it.

>>> arr = arr.reshape(2,5)   # this makes the two-dimensional shape permanent
>>> arr[1,4]   # output: 9
In a 2-D array,
- the first index selects the row
- the second index selects the column
In a 3-D array,
- The first index, i, selects the matrix
- The second index, j, selects the row
- The third index, k, selects the column
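The three indices can be checked on a small example:

```python
import numpy as np

cube = np.arange(24).reshape(2, 3, 4)   # 2 matrices, 3 rows, 4 columns
print(cube[1, 2, 3])                    # second matrix, third row, fourth column -> 23
```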
Slicing:

>>> arr = np.array([0,1,2,3,4,5,6,7,8,9])
>>> arr[1:8]
array([1, 2, 3, 4, 5, 6, 7])
>>> arr = arr.reshape(2,5)
>>> arr[1:, 2:4]
array([[7, 8]])
Linear Algebra
The Linear Algebra module of NumPy offers various methods to apply linear algebra on NumPy arrays. Some of them are listed below:
- rank, determinant, trace, etc. of an array.
- eigenvalues of matrices
- matrix and vector products (dot, inner, outer, etc. product), matrix exponentiation
- solve linear or tensor equations and much more!
NumPy Matrix
>>> A = np.array([[ 1, 2 ,3], [ 4, 5 ,6]])
>>> B = np.array([7,8,9])
>>> A
array([[1, 2, 3],
[4, 5, 6]])
>>> B
array([7, 8, 9])
>>> A.shape
(2, 3)
>>> B.shape
(3,)
>>> A.T
array([[1, 4],
[2, 5],
[3, 6]])
>>>
>>> B.T
array([7, 8, 9])
>>>
>>> A.dot(B)
array([ 50, 122])
>>>
>>> np.dot(A,B)
array([ 50, 122])
Ax = b : numpy.linalg
Now we want to solve Ax = b:
>>> import numpy as np
>>> from numpy.linalg import solve
>>> A = np.array([[1,2],[3,4]])
>>> A
array([[1, 2],
[3, 4]])
>>> b = np.array([10, 20])
>>> b
array([10, 20])
>>> x = solve(A,b)
>>> x
array([ 0., 5.])
>>> a = np.array([[3,-9],[2,5]])
>>> np.linalg.det(a)
33.000000000000014
Eigenvalues and vectors: eig returns a tuple of two arrays: the first contains the eigenvalues and the second is a matrix whose columns are the corresponding eigenvectors.
>>>from numpy.linalg import eig
>>> arr=np.array([[1,2],[3,4]])
>>> arr
array([[1, 2],
[3, 4]])
>>> eig(arr)
(array([-0.37228132, 5.37228132]), array([[-0.82456484, -0.41597356],
[ 0.56576746, -0.90937671]]))
Python is just an ocean of modules. Get to know them from its official documentation. That’s all in this article.
Thank you :)
References | https://sweta-akb15.medium.com/introduction-to-numpy-2c31a4aa1da6?source=---------4---------------------------- | CC-MAIN-2021-31 | refinedweb | 1,464 | 66.94 |
Free "1000 Java Tips" eBook is here! It is huge collection of big and small Java
programming articles and tips. Please take your copy here.
Take your copy of free "Java Technology Screensaver"!.
JavaFAQ Home » Java Notes by Fred Swartz
A javax.swing.Timer object calls an action listener at regular intervals or only once. For example, it can be used to show frames of an animation many times per second, repaint a clock every second, or check a server every hour.

Java 2 added another class by the same name, java.util.Timer. To prevent ambiguous references between javax.swing.Timer and java.util.Timer, always write the fully qualified name.
To create a timer:

import java.awt.event.*; // for ActionListener and ActionEvent

javax.swing.Timer yourTimer =
    new javax.swing.Timer(int milliseconds, ActionListener doIt);

For example:

javax.swing.Timer t = new javax.swing.Timer(1000, new ActionListener() {
    public void actionPerformed(ActionEvent e) {
        p.repaint();
    }
});
To start the timer, call its start method:

t.start();

To stop the timer, call its stop method:

t.stop();
See the discussion below for when it's important to stop a timer in an applet.
Starting and stopping a timer is very simple. For example, just add the following lines to your applet (assuming t is a Timer):

public void start() { t.start(); }
public void stop() { t.stop(); }
Other useful methods include:

t.setRepeats(boolean flag);
t.setInitialDelay(int initialDelay);
if (t.isRunning()) ...
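Putting the pieces together, here is a small self-contained example. Instead of repainting a panel it just counts ticks and stops itself, so it can run without a window (a sketch; the 100 ms interval and three-tick count are arbitrary):

```java
import java.awt.event.ActionEvent;
import java.awt.event.ActionListener;
import java.util.concurrent.CountDownLatch;

public class TimerDemo {
    public static void main(String[] args) throws InterruptedException {
        // Counts down once per timer tick; main waits for three ticks.
        final CountDownLatch threeTicks = new CountDownLatch(3);

        // Fire the action listener every 100 milliseconds.
        final javax.swing.Timer t = new javax.swing.Timer(100, new ActionListener() {
            public void actionPerformed(ActionEvent e) {
                threeTicks.countDown();
            }
        });

        t.start();
        threeTicks.await();   // block until the listener has run three times
        t.stop();
        System.out.println("timer ticked 3 times and was stopped");
    }
}
```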
This document contains answers to frequently asked questions about Oracle's SQLJ drivers. Note that these address specific technical questions only and are used to document solutions to frequent customer questions as well as any known problems. Please consult the SQLJ Developer's Guide and Reference for full background information.
If you have any questions or feedback on this document, please e-mail helpsqlj_us@oracle.com or sqljsup_us@oracle.com.
Last updated: 27 April 2001
Contents
Part A. Troubleshooting
  1. Troubleshooting Checklist
  2. Problems Translating and Compiling SQLJ Programs
     2.1 Errors When Starting the Translator
  3.1 Error Messages Encountered During Deployment or Runtime

4. SQLJ (and JDBC) Basics
  4.1 SQLJ Resources
  - Is there a reason to write PL/SQL stored procedures instead of Java stored procedures?
  - Is there a translator for PL/SQL Stored Procedures/Packages into Java?
  - Does SQLJ implement the ANSI SQLJ specification?
  - Does SQLJ implement the ISO SQLJ specification?
  - What software components are required to translate a SQLJ application? To run a SQLJ application?
  - Can I use the Oracle SQLJ translator to translate an application to run against a non-Oracle SQLJ runtime?
  - Is SQLJ really database neutral?
  7.1 Basic Functionality
The items in this section provide information regarding common problems and errors encountered by SQLJ users.
1. Troubleshooting Checklist
This list should help you to systematically identify a problem you are encountering and point you to where specific issues are being addressed in this FAQ.
SQLJ Error Report

Environment Description
  Platform/OS Version:
  Database version:
  PATH value:
  CLASSPATH value:
  SQLJ_OPTIONS value:
  JDK version (java -version):
  JDBC version:
  SQLJ version (sqlj -version-long):

Problem Description
  Problem Area:
    _ unable to start translator
    _ translation problem - _ offline or _ online
    _ runtime problem
    _ deployment problem
  Error Message:
  Error Description:
2. Problems Translating and Compiling SQLJ Programs
The following problems might be encountered when trying to invoke the translator, or during translation itself. The first and second group of questions address specific error messages; the third group addresses somewhat more general questions.

In either group, some problems may be indicated as being general configuration issues, not specific to your SQLJ code. In these cases, or for general information about SQLJ installation and configuration, see the "Getting Started" chapter of the SQLJ Developer's Guide and Reference.

In brief, before concerning yourself with particular code issues in your application, you should be sure that you can already run the Java compiler, connect to the database using JDBC, and invoke the translator.
2.1 Errors When Starting the Translator
"Error in sqlj stub:
invalid argument"
(This is likely a general configuration issue, not something
specific to your code.)
The SQLJ translator issues: "Error
in sqlj stub: invalid argument" when it is launched. This may happen
with SQLJ 8.1.6 on a Windows platform. The problem occurs when the SQLJ
wrapper executable sqlj.exe calls
java <java_arguments> sqlj.tools.Sqlj <sqlj_arguments>
The problem may have to do with the size of the CLASSPATH.
Also note that SQLJ does not currently support a CLASSPATH
containing one or more spaces.
You can have SQLJ display (but not run) the full Java
command line above by issuing
sqlj -n <options>
You can try to resolve this issue with one or more of the following:
If you are using the Java Development Kit from Sun, you
may be experiencing problems from conflicts with previously installed Java
environments. The easiest way to avoid such problems is to customize your
CLASSPATH setting to include only what
you need for the JDK, the JDBC driver, and SQLJ. Essentially, you should
start with a clean slate.
You can accomplish this as follows:
This may also be caused by PATH
or CLASSPATH problems. See the answer to
the previous question.
"ExceptionInInitializerError:
NullPointerException"
If you see the following stack trace:
If you are using the command-line version of SQLJ, download
and install SQLJ 8.1.6 or later.
The error "Exception in thread main java.lang.NoClassDefFoundError:
sqlj/tools/Sqlj" indicates that the SQLJ translator class files cannot
be found. Most likely, your CLASSPATH is
missing the SQLJ translator.zip file. This
file can usually be found in [Oracle Home]/sqlj/lib/.
Also remember to add one of the SQLJ runtime zip files to the CLASSPATH;
this is required in SQLJ version 8.1.7 and later (see "Error:
SQLJ runtime library is missing").
"NoClassDefFoundError:
sun/io/CharToByteConverter"
Running sqlj results in
the following error message.
"Error: SQLJ
runtime library is missing"
This indicates that your CLASSPATH
contains the translator.zip file, but is
missing the runtime library. The SQLJ library files can usually be found
in [Oracle Home]/sqlj/lib/.
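As a sketch, a CLASSPATH that satisfies this requirement might look as follows on a UNIX system (the Oracle Home path is hypothetical; substitute your own installation):

```shell
# Hypothetical Oracle Home; adjust for your installation.
ORACLE_HOME=/u01/app/oracle/product/8.1.7
# SQLJ 8.1.7 and later need a runtime library alongside translator.zip.
export CLASSPATH=.:$ORACLE_HOME/sqlj/lib/translator.zip:$ORACLE_HOME/sqlj/lib/runtime.zip
```

After this, `sqlj -version-long` should start without the missing-runtime error.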
This is likely a result of the SQLJ command line being
too long when invoking the SQLJ translator on Windows. You can do the following
to reduce the size of the command line.
This message may be encountered with SQLJ 8.1.6 on Windows.
The problem is that the SQLJ wrapper executable (sqlj.exe)
is unable to determine the location of the Oracle installation from the
system registry and exits.
SQLJ translator hangs
and/or does not show any error messages from the Java compiler (in JDK
1.3)
When using SQLJ 8.1.7 or earlier under JDK 1.3 you may
experience hangs, or you will notice that Java compilation errors do not
show up anymore. (If you experience hangs and are not using JDK 1.3 or
later, please see SQLJ hangs during translation)
The problem is that with JDK 1.3 the javac
compiler sends error messages to standard error instead of standard output.
However, the SQLJ translator tries to capture messages from standard error.
Since this issue affects Oracle SQLJ versions 8.1.7 and earlier, you may
want to upgrade to SQLJ version 9.0.1 or later. Otherwise you may be able
to use the following workaround.
"Cannot load
JDBC driver class oracle.jdbc.driver.OracleDriver"
When SQLJ translation results in the following warning:
Cannot
load JDBC driver class oracle.jdbc.driver.OracleDriver.
You need to have the JDBC classes111.zip
(for JDK 1.1.x) or classes12.zip (for JDK
1.2 or later) in your CLASSPATH environment
variable. The following command line should print out the methods on the
OracleDriver class.
javap oracle.jdbc.driver.OracleDriver
Subsequently, the following should print out all sorts
of version information on SQLJ, JDBC and the JDK you are using.
sqlj -version-long
If this shows the JDBC driver version as 0.0, then the
Oracle JDBC driver is not yet in the CLASSPATH.
"Return type
XXXX ... is not a visible Java type"
You may be trying to return an iterator instance from
a stored procedure, or as a column of a result set, and this iterator type
is declared as follows:
"Missing equal
This may be caused by using the pre-ANSI style of connection
contexts, using parentheses instead of brackets. Instead of:
(This is a general configuration issue.)
The SQLJUTL package should
be installed automatically in the SYS schema.
It is used for semantics-checking of Oracle stored procedures and functions
that have been defined to have default arguments.
If the translator indicates that it cannot find SQLJUTL,
use the following SQL command (from SQL*Plus, for example) to see if it
exists:
"Error: unable to load
... java.lang.NoClassDefFoundError"
This may indicate that the translator cannot find your
compiled .class files during customization,
and is particularly likely if you are using a Java package structure and
are not using the SQLJ -d option during
translation.
As a first step, rerun the translator with the -status
flag enabled. If this error occurs after the [Customizing]
line in the status output, this indicates a problem in finding your .class
files during customization.
You can avoid this problem by using the SQLJ -d
option when you translate. This option allows you to specify a root directory,
and all profiles and .class files are placed
underneath that root directory according to the package structure. Also,
note that this directory must be in your CLASSPATH.
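A sketch of such an invocation (source file and root directory names are hypothetical):

```shell
# Translate with -d so profiles and .class files are placed under
# ./classes according to the package structure.
sqlj -d ./classes com/acme/app/MyQueries.sqlj
# The -d root itself must then be on the CLASSPATH at runtime:
export CLASSPATH=./classes:$CLASSPATH
```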
"Unable to convert
...Xxx.ser to a class file"
This occurs if you are using the SQLJ -ser2class
option (to convert .ser profiles to .class
files) and the translator cannot find the .ser
files to convert. This problem is similar to the NoClassDefFoundError
discussed in the previous question, and can also be resolved by using the
SQLJ -d option during translation.
My code declares
a block-level iterator class, and an instance of this class is later created
and used in the same block. SQLJ appears to translate my code without difficulty,
but javac gives the following error: "Error: Class Xxx not found in type
declaration"
This problem, where Xxx
is the iterator class name, results from a known bug in the javac
compiler, version 1.1.x. Here is an example of a method that would result
in this error:
"Error in Java
compilation: CreateProcess: javac"
If the following error occurs during translation.
2.3
Additional Translation Issues
SQLJ hangs during translation.
If you are using JDK 1.3 then please see "JDK
1.3: SQLJ translator hangs and/or does not show any error messages from
the Java compiler". If the SQLJ translator appears to hang during translation,
interrupt the translation and add the -status
flag to diagnose the problem:
Note: If you are using SQLJ 8.1.5, 8.0.5, or 7.3.4
on NT, you should always use the -passes
option. Otherwise you will experience this hang.
SQLJ translates
but does not produce any .class files
If you run the SQLJ translator, but Java compilation appears
to fail silently (you can verify with the -status
command line option whether compilation was started) you can try one or
more of the following.
If no previously discussed diagnosis fits your problem,
you may want to try the following.
If you have a genuine out-of-memory condition during SQLJ
translation (as opposed to the issue described in the preceding question),
you can increase your Java VM's memory allocation by passing additional
flags to the SQLJ translator.
For example, you can set a heap size of 128 MBytes for
the JDK 1.1.x Java VM with the -mx128M
flag. By prefixing this flag with -J, you
can use it with the SQLJ translator as follows:
sqlj -J-mx128M ...
Note that you must either pass this flag on the command
line, or place it in the SQLJ_OPTIONS environment
variable.
Also note that the -mx
flag has changed to -Xmx in JDK 1.2.
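Putting the preceding points together, the invocations would look like this (application file name hypothetical):

```shell
# JDK 1.1.x: pass the VM's -mx flag through with the -J prefix.
sqlj -J-mx128M MyApp.sqlj
# JDK 1.2 and later: the VM flag is -Xmx instead.
sqlj -J-Xmx128M MyApp.sqlj
# Alternatively, place the flag in the SQLJ_OPTIONS environment variable:
export SQLJ_OPTIONS="-J-Xmx128M"
```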
Is there
a way to speed up translation of a .sqlj file with online checking enabled?
If you enable the translator cache option (-cache=true),
then SQLJ remembers the result of online checking and stores it in the
file SQLChecker.cache. This removes the
need to connect to the database for every #sql
statement in your program. Note, however, that only those SQL statements
that do not result in error or warning messages are being cached. Whenever
SQLChecker.cache has become too large,
or you want to start with a clean cache, you can just delete file SQLChecker.cache
before running SQLJ.
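For example (connect string and file name hypothetical):

```shell
# Online-checking results are cached in SQLChecker.cache.
sqlj -cache=true -user=scott/tiger MyApp.sqlj
# Delete the cache file to start over with a clean cache:
rm SQLChecker.cache
```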
Why are database
errors reported as warnings? When I use the Oracle online checker, why
do I get one error and one additional warning from the database?
Errors reported by the database are passed on as warnings
by SQLJ. This is because the database reporting may not be fully accurate,
resulting in spurious errors. The signature of stored functions and procedures
is analyzed directly from the SQLJ client. But these function and procedure
invocations are also passed to the database for explicit analysis. This
results in duplicate reporting of certain errors or warnings concerning
stored procedures and functions.
"Type class_name
of host item #nn is not permitted in JDBC"
What does the following warning message mean?
Why was
the profile-key class not generated?
I used the SQLJ translator to translate a couple of .sqlj
files. Some of them have profile-key classes associated but some don't.
Can you tell me why? Do missing profile-key classes hurt?
A profile-keys class (and associated .ser
files) will only be generated if you have an actual SQLJ statement, that
is:
In SQLJ version 9.0.1 and later, you can specify the command
line option -codegen=oracle. This results
in the direct generation of Oracle JDBC code. In this case, neither .ser
files nor a profile-keys class is generated for any of your SQLJ statements.
3.
Problems Deploying and Running SQLJ Applications and Applets
For general information about deploying and running SQLJ
programs, see "Deploying SQLJ Programs" and "Running SQLJ Programs" later
in this FAQ.
3.1
Error Messages Encountered During Deployment or Runtime
"SQLException:
No suitable driver"
If you see the exception trace:
"SQLException:
unable to load connect properties file: connect.properties"
When trying to establish a SQLJ connection using the code
The message: "The network adapter could not establish
the connection." means one of two things:
If a listener is up and running as specified, but there
is no database with the
specified Oracle SID, you will see the message:
If the username and/or password is not valid, you get:
If you see the exception:
You may see this error if you have deployed SQLJ code
to the server-side Java VM and are subsequently trying to run it. The error
occurs because the SQLJ .ser profile files
have not been deployed with the rest of your application, or they may not
have been deployed into the same package as the original SQLJ classes that
they reference. Please refer to the explanation of the error "ClassNotFoundException:
xxx.yyy_SJProfile0 for class xxx.yyy_SJProfileKeys" directly below.
"ClassNotFoundException:
xxx.yyy_SJProfile0 for class xxx.yyy_SJProfileKeys"
If you see an exception such as:
Setting a root directory. You may want to use the
SQLJ -d <rootdir> flag
to ensure that all files required by your project are deposited under a
particular directory hierarchy before you run jar
or loadjava.
Applets and converting .ser to .class. There is
an additional consideration for applets. Older browser versions that are
based on JDK 1.0.2, such as Netscape Navigator 3.0 and Microsoft Internet
Explorer 3.0, did not have support for loading a serialized object from
a resource file associated with the applet. Additionally, the Navigator
browser does not permit resources to be loaded from files with a ".ser"
extension. These limitations also result in the ClassNotFoundException
when you try to run the applet. As a work-around, specify the SQLJ translator
-ser2class option when you translate the
code. This instructs SQLJ to convert profiles into Java class format in
.class files. If you translate MyApplet.sqlj
this way, for example, the profile would be in MyApplet_SJProfile0.class
instead of MyApplet_SJProfile0.ser. Then
you would have to ensure that the profile .class
file is available in the SQLJ runtime environment, as with a profile .ser
file.
Deploying code to the server, such as Java Stored Procedures
and Enterprise Java Beans. Essentially, you have three choices in uploading
your code with loadjava: (1) you can upload
source code and have the server compile your code, (2) you can upload .class
and .ser files, and (3) you can convert .ser
files to .class files and upload these together with the other .ser
files. If you use method (2) or (3) and omit one or more .ser and -respectively-
converted .class files, you will encounter the ClassNotFoundException
when your program is run.
Notes:
My SQLJ program precompiles and compiles successfully,
resulting in a .class file which I load without error using loadjava. I
then create a procedure to run the stored procedure. When I run the procedure,
I get the error:
"SQLNullException:
cannot fetch null into primitive data type"
You cannot assign a null
value to a Java primitive type (such as int
or float), but you can work around this
by using a Java object type (one of the java.lang
classes). For example, replace a declaration of the form:
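A minimal sketch of that replacement (SQLJ syntax, so it requires the SQLJ translator; the table and column names are made up):

```java
// Fetching a nullable column into a primitive raises SQLNullException:
//   int salary;
//   #sql { SELECT sal INTO :salary FROM emp WHERE empno = :id };
// A wrapper class lets SQL NULL arrive as a Java null instead:
Integer salary;
#sql { SELECT sal INTO :salary FROM emp WHERE empno = :id };
if (salary == null) {
    // handle the SQL NULL case explicitly
}
```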
An SQLJ statement that calls a stored function or procedure
which returns a REF CURSOR issues the exception: "SQLException: Invalid
column type".
This can be caused by one of the following:
This error may occur when selecting an SQL object / a
VARRAY / a nested table. The following are possible reasons for this message
I am using Oracle 8.1.6 and got the following exception
at the code generated by the SQLJ translator for a deployed EJB on Sun's
J2EE server:
"java.lang.ClassCastException:
weblogic.jdbc20.rmi.SerialCallableStatement", and "weblogic.jdbc20.rmi.SerialConnection
... failed"
When accessing VARRAY parameters in Oracle8i Stored Procedures
from a Weblogic server I experience the following exceptions:
Note that with the Oracle 9i release, Oracle JDBC support
will be fully defined through oracle.jdbc.OracleXxxx
interfaces. Any JDBC driver that supports these interfaces will also
be supported by SQLJ with full Oracle-specific functionality. In all other
cases, the JDBC driver needs to be treated as a generic driver (translation
setting -profile=false)
and no support for Oracle-specific types is available.
"java.lang.NoClassDefFoundError:
oracle/jdbc/OraclePreparedStatement"
When using JDK 1.2 and classes12.zip
the statement oracle.sqlj.runtime.Oracle.close()
results in the following exception:
If you encounter the error message:
When using JDK 1.2.1 it is possible you may see a NullPointerException
due to overzealous garbage collection on part of the JavaVM. Specifically,
we have seen positional iterator objects being nulled out before they could
be closed, even though they were clearly in scope. You can eliminate this
problem by switching to JDK 1.2.2.
ORA-01000: maximum
open cursors exceeded
"I suspect that statement.close() leaves DB Cursors open.
... After doing a lot of database fetches in JDBC, I eventually get the
following error:"
"We are using Oracle's Java implementation of connection
pooling (OracleConnectionCacheImpl) in
a Java client program which makes calls to Java stored procedures in the
database. The problem we are seeing is that cursors that are implicitly
defined and spawned from #sql { ...
} statements in the Java code -such as from SELECT INTO statements-
are opened in the stored procedures but never closed. Information about
what cursors are opened comes from the V$SQL_TEXT_WITH_NEWLINES type views."
(a) #sql { SELECT col1, col2 INTO :x, :y FROM TAB WHERE ... };
(b) #sql { BEGIN SELECT col1, col2 INTO :OUT x, :OUT y FROM TAB WHERE ...; END; };
When calling a stored procedure that returns a REF CURSOR
multiple times from the same session, I receive the following error. I
am already using the close() method on
the Java result set, but that does not appear to affect the database cursors.
How can I avoid this error?
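One common remedy, sketched below with a hypothetical stored function: the result set's close() releases only the result set itself, while the database cursor belongs to the CallableStatement, so the statement must be closed as well.

```java
// Hypothetical call; assumes an Oracle JDBC connection `conn` and the
// classes12.zip/classes111.zip driver classes on the CLASSPATH.
CallableStatement cs = conn.prepareCall("{ ? = call pkg.get_emps }");
cs.registerOutParameter(1, oracle.jdbc.driver.OracleTypes.CURSOR);
cs.execute();
ResultSet rs = (ResultSet) cs.getObject(1);
while (rs.next()) {
    // process rows ...
}
rs.close();
cs.close();  // releases the REF CURSOR's cursor in the database session
```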
Performance
of Java and SQLJ in Stored Procedures
If your code consists mostly of SQL, then PL/SQL will
generally be more performant since it implements the SQL datatypes directly.
Specifically, conversions between SQL and Java representations tend to
be costly. On the other hand, if you are doing lots of computations and
logic processing, then Java will show better performance. In general, you
may want to use Java or PL/SQL, depending on what you are more comfortable
with. However, with Java you do get the advantage of wider portability
- you can run the same code in the server, the middle tier, or the client,
and even with different vendors.
When using SQLJ in the server, you may want to consider
the following tips to ensure good performance:
Is there any way to mimic the user defined and Oracle
pre-defined PL/SQL exceptions in SQLJ?
Unfortunately, there is not. When the Java code in the
Java Stored Procedure throws an exception, this is rendered as "ORA-29532:
Java call terminated by uncaught Java exception: ..." at the SQL level.
"ORA-29531
- no method ... in class ..."
"When trying to call a Java Stored Procedure that was
uploaded with loadjava the following message occurs:
"I deployed a stateless, container-managed Enterprise
Java bean into Oracle 8.1.7. The bean contains a method with a SQLJ insert
statement. Even though that appears to work, no commit is triggered by
the Bean Container when the method call is finished."
On Solaris, this should have worked. On NT you have to
explicitly lookup the datasource and do a ds.getConnection()
in order to enlist the datasource with the Container. If you use the default
kprb (server-JDBC) connection, you have
to explicitly set the default-enlist tag
to true in the XML deployment descriptor. The following is an example of
using <default-enlist>:
<oracle-descriptor>
  <mappings>
    <ejb-mapping>
      <ejb-name>TestEJB</ejb-name>
      <jndi-name>test/TestEJB</jndi-name>
    </ejb-mapping>
    <transaction-manager>
      <default-enlist>true</default-enlist>
    </transaction-manager>
  </mappings>
</oracle-descriptor>
"java.security.AccessControlException:
the Permission (java.net.SocketPermission) has not been granted"
"When executing code to create an explicit database connection
from inside the server as follows:
CALL dbms_java.grant_permission('username', 'java.net.SocketPermission', '*', 'connect,resolve');
Other SocketPermission actions that you may want to grant are accept
and listen.
3.3
Using REF Cursors
Error:
class cannot be constructed as an iterator: classname
"This error is encountered when trying to return a REF
CURSOR as an OUT parameter of a PL/SQL block. What SQLJ and Java types
are permitted for such arguments?"
"Could you show some sample code of a PL/SQL Stored Procedure
that can return a JDBC ResultSet or a SQLJ iterator to the client?"
In Oracle release 8.1.7 and earlier you cannot return
a result set back from a Java stored procedure, though you can do so from
a PL/SQL stored procedure. Oracle release 9.0.1 and later permit you to
return a result set from a Java Stored Procedure. You have to open the
result set by executing a SQL statement. (There is an Oracle-specific API
to enable this server-side functionality - please refer to the JDBC
Developer's Guide and Reference.) You can optionally retrieve rows
from it in the server-side Java code. Finally, you pass the result set
with the remaining rows on it from the Java stored procedure as a REF CURSOR
OUT parameter or return.
REF CURSORs
returned from Stored Procedures lose scrollability
"An Oracle Stored Procedure returns a REF CURSOR to client
Java code as a java.sql.ResultSet. The
problem is that the scrollability of the ResultSet
is lost - it becomes a TYPE_FORWARD_ONLY
ResultSet."
Result set scrollability is not supported on result sets
returned from a stored procedure or function.
If you look into how scrollability is implemented in the
Oracle JDBC driver, you will understand why this cannot be done. Oracle
SQL does not support scrollable cursors natively. Thus the behavior is
emulated by selecting ROWID's together with the rows specified in your
query (in other words: your query is modified at JDBC runtime). While this
is possible for top-level queries, there is no way in which the JDBC runtime
can modify the original query executed inside of a stored procedure and
returned as a REF CURSOR.
3.4
Additional Deployment and Runtime Issues
How can I speed up
execution of my SQLJ application?
Tips on improving performance with SQLJ.
I'm having
problems with retrieval of CHAR fields in SELECT statements.
If your table has a CHAR column, such as TITLE CHAR(120),
then beware of SQL blank padding behavior.
Instead of selecting only the exact string, as follows:
#sql iter = { SELECT NAME, TITLE FROM Tab WHERE TITLE = :("Dawn") };
you will have to use wildcard search parameters, as in
the following:
#sql iter = { SELECT NAME, TITLE FROM Tab WHERE TITLE LIKE :("Dawn%") };
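The blank-padding behavior can be illustrated in plain Java without a database; the simulated value below stands in for what a CHAR(120) fetch returns:

```java
public class CharPadding {
    // A CHAR(120) column value comes back blank-padded to the declared width.
    static String simulatedChar120(String s) {
        return String.format("%-120s", s);
    }

    // Exact equality fails against the unpadded search string ...
    static boolean exactMatch(String columnValue, String needle) {
        return columnValue.equals(needle);
    }

    // ... which is why a wildcard such as LIKE 'Dawn%' (or trimming the
    // trailing blanks, as done here) is needed.
    static boolean trimmedMatch(String columnValue, String needle) {
        return columnValue.replaceAll(" +$", "").equals(needle);
    }

    public static void main(String[] args) {
        String title = simulatedChar120("Dawn");
        System.out.println(exactMatch(title, "Dawn"));   // false
        System.out.println(trimmedMatch(title, "Dawn")); // true
    }
}
```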
Character
comparison works with "" but not with NULL.
"The following query works from SQL*Plus:
In general, if you check for a NULL value you should not
use a bind variable, but rather employ the SQL syntax "IS
NULL". Thus instead of using
Using "WHERE
column IN (value_list)" with a value_list of unknown size
My SQL statement has the following form.
SELECT * FROM TAB WHERE col IN (value1, value2, ...)
The list value1, value2, ... is a list of host variables.
However, I do not know ahead of time how many variables will be in the
list. How can I write this in my SQLJ program?
You have several alternatives:
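One of those alternatives, since SQLJ statements must be textually static, is to drop down to dynamic JDBC and generate the placeholder list at runtime. The sketch below (table and column names taken from the question) builds the SQL text:

```java
public class InListQuery {
    // Build "SELECT * FROM TAB WHERE col IN (?, ?, ...)" with one JDBC
    // placeholder per value; bind each value afterwards with
    // PreparedStatement.setXxx(position, value).
    static String buildQuery(int valueCount) {
        StringBuilder sql = new StringBuilder("SELECT * FROM TAB WHERE col IN (");
        for (int i = 0; i < valueCount; i++) {
            sql.append(i == 0 ? "?" : ", ?");
        }
        return sql.append(")").toString();
    }

    public static void main(String[] args) {
        System.out.println(buildQuery(3)); // SELECT * FROM TAB WHERE col IN (?, ?, ?)
    }
}
```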
When I try to update a record using SQLJ, the data remains
unchanged. What is going on?
A couple of notes on this.
Is it possible to use host variables to substitute values
into a SQLJ DDL Statement, such as an "ALTER SEQUENCE" statement where
the sequence increment is reset?
No. Oracle's SQL engine does not let you use bind variables
in DDL statements. In this case you have to revert to JDBC. For example,
if you are using the SQLJ default context, you could say the following:
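A sketch of that JDBC fallback (the sequence name is hypothetical; this assumes the SQLJ runtime is on the CLASSPATH):

```java
// Obtain the underlying JDBC connection from the SQLJ default context,
// then splice the value into the DDL text, since it cannot be bound.
java.sql.Connection conn =
    sqlj.runtime.ref.DefaultContext.getDefaultContext().getConnection();
java.sql.Statement stmt = conn.createStatement();
int increment = 10;
stmt.execute("ALTER SEQUENCE my_seq INCREMENT BY " + increment);
stmt.close();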
In SQLJ version 8.1.7 or later you can use connections
obtained from a JDBC ConnectionCache, or
connection classes that wrap or delegate to Oracle JDBC connections, or
JDBC connections obtained from a DataSource.
Assuming you obtained jdbcConnection this
way, you can now do the following:
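For example (connection, table, and host variable hypothetical; SQLJ syntax, so this goes through the translator):

```java
// Wrap the JDBC connection in a SQLJ connection context ...
DefaultContext ctx = new DefaultContext(jdbcConnection);
// ... and name it explicitly in subsequent SQLJ statements:
#sql [ctx] { UPDATE emp SET sal = sal * 1.1 WHERE deptno = :dept };
```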
JDeveloper 3.0 was released prior to SQLJ release 8.1.6, so it uses
release 8.1.5, which does not support JDK 1.2. To run the application
under JDK 1.2, you will have to download the SQLJ 8.1.6 SDK runtime patch
from the Oracle Technology Network (OTN) Web site (technet.oracle.com).
Oracle SQLJ 8.1.6 and later works with JDK 1.2.
I am running
against an Oracle 8.0 database and my SQL object updates are not working.
Even though the JDBC 8.1.x drivers are backward compatible
to 8.0.x databases, they require an 8.1.x database to support SQL object
features. (JDBC 8.0.x drivers do not support SQL objects.)
How can I use SQLJ
in the middle-tier (or EJB, or XXX) environment Y?
This is easiest if the middle tier directly supports SQLJ.
Although -in principle- SQLJ can be used in any environment that provides
JDBC connectivity, a few issues need to be addressed in practice.
How can I use SQLJ with
Oracle's BC4J?
"I am using BC4J (Oracle's Business Components For Java)
in my application, and would like to write SQLJ code for performing some
work in the database. Is it possible to do that?"
The following trick lets you obtain a real JDBC connection
and create a SQLJ connection context:
import java.sql.Connection;
import sqlj.runtime.ref.DefaultContext;
...
Connection conn = getDBTransaction()
    .createCallableStatement("select 1 from dual", 1)
    .getConnection();
DefaultContext ctx = new DefaultContext(conn);
#sql [ctx] ... { ... };
How can I use SQLJ
with Apache JServ?
"Could anyone help me with Apache JServ? How do I install
JDBC and SQLJ with Apache?"
Add the following lines to your jserv.properties
for all jars, classes, and zips you want to use:
wrapper.classpath=<path>
If you use SQLJ, then do not include the SQLJ classes in the servlet
repository. Instead, make them a part of the 'main' servlet and also include
them in wrapper.classpath.
Part B. General
Questions
Answers here are only intended to give you a general grasp
or idea; they do not go into detail. For further information about any
of these topics, please refer to the SQLJ Developer's Guide and Reference.
4. SQLJ
(and JDBC) Basics
4.1
SQLJ Resources
Are there SQLJ books and
other resources that would help a newcomer?
Extensive documentation on SQLJ is available either from
the Oracle Technology Network (OTN) Web site (technet.oracle.com: click
on "Technologies", then Java -> SQLJ & JDBC) or from the Oracle SQLJ
distribution:
In addition, you can reference the following:
Is there an Oracle
mailto:helpsqlj_us@oracle.com
or mailto:sqljsup_us@oracle.com.
You can also send suggestions for modifications to this
FAQ list.
Where can I find
the SQLJ source code (known as the "reference implementation") that Oracle
has made available freely to the public and other database vendors?
Where can I download
the SQLJ translator for a given platform?
SQLJ is available from the Oracle Technology Network (OTN)
Web site:
Click on the "Software" button, and then "Select a Utility
or Driver" -> "SQLJ Translator". We provide downloads for Solaris and Windows
NT. However, these versions only differ in the line termination characters
used in textual files. If you are using a UNIX platform, you can adapt
the sqlj/bin/sqlj shell script for your
platform. For Windows platforms, we provide the sqlj\bin\sqlj.exe
wrapper executable. However, this executable is only verified for Windows
NT. It may not work on other Windows versions.
Why do I find information
mentioning JSQL? I thought it was called SQLJ.
I found some Web pages that talk about JSQL, not SQLJ.
I thought that the standard for embedding SQL in Java is called SQLJ.
You are right: it is called SQLJ. In the beginning it was
called JSQL. However, the name JSQL had been trademarked by Caribou Lake
Software for their JDBC driver product, which is not related in any way
to SQLJ. The old information that you saw was still using the initial name
before it was renamed to SQLJ.
Can SQLJ be faster
than JDBC?
There are some papers which observe that SQLJ code can
be faster than JDBC. How can this be, if SQLJ is nothing more than a
layer on top of JDBC?
To the extent that SQLJ is a layer on top of JDBC it cannot
be faster than JDBC. (Incidentally, this is the current situation of Oracle
SQLJ.) However, to the extent in which SQLJ can exploit the fact that it
represents static SQL statements, rather than dynamic ones, it can become
faster than JDBC, for example through precompilation of SQL code, or predetermination
of SQL types. In these cases the SQLJ runtime needs to be vendor-specific.
A tuned SQLJ runtime can short-circuit several functions that the JDBC
runtime always would have to perform dynamically at runtime, such as registration
and checking of types, processing of JDBC escape sequences, other special
SQL processing (for example to support scrollable result sets).
4.2 SQLJ
Overview
Is SQLJ Y2K-compliant?
Yes, both the SQLJ translator and the SQLJ runtime are
Y2K-compliant, presuming your Java environment is Y2K-compliant.
What are the pros and
cons of choosing Java and SQLJ over C/C++?
Pros:
Stored procedures are pieces of code executed in the database
as part of your database session. In Oracle Databases, stored procedures
are usually written either in PL/SQL (a proprietary Oracle language) or
in Java. If you write your stored procedure in Java you use JDBC or SQLJ
to access the database. You can write essentially the same Java code for
accessing the database from the client, from the middle tier, or from the
server-side JavaVM.
Is there
a reason to write PL/SQL stored procedures instead of Java stored procedures?
Using Java does have some additional cost.
Is there a
translator for PL/SQL Stored Procedures/Packages into Java?
Note that Oracle is committed to continuing support and
development of PL/SQL - there is no reason to convert PL/SQL into Java
for future compatibility. Also, PL/SQL and Java are fully interoperable:
you can call PL/SQL stored procedures from Java using JDBC or SQLJ and
vice versa.
There are third-party products (for example from Quintessence
Systems) that provide for an automated migration from
PL/SQL to Java. At this point, however, there is no tool that translates
directly into SQLJ.
Does SQLJ implement the
ANSI SQLJ specification?
Yes, although some minor features, while recognized by
Oracle SQLJ, may not be supported by the Oracle JDBC drivers or Oracle
database (such as the sensitivity, holdability,
and returnability constants you can set
in a with clause, and positioned UPDATE/DELETE/INSERT
operations -see also "Does Oracle SQLJ support
the WHERE CURRENT OF construct?").
Does SQLJ implement the
ISO SQLJ specification?
Yes, Oracle SQLJ 8.1.7 and later supports the ISO specification.
Note that the SQLJ ISO standard requires full support for JDK 1.2 or later.
Thus, in order to be fully ISO compliant you would need to use JDK 1.2
or later and translate as well as run your SQLJ program with
runtime12.zip (or runtime12ee.zip).
Additional features, while recognized by Oracle SQLJ, are not supported
by the Oracle JDBC drivers or Oracle database: the path
and transformGroup attributes on connection
contexts, as well as type map property entries of the kinds JAVA_OBJECT
and DISTINCT are not supported presently
by Oracle. Furthermore, some minor SQLJ features that were already part
of the ANSI standard, while recognized by Oracle SQLJ, are not supported
by the Oracle JDBC drivers or Oracle database. These features include the
sensitivity, holdability,
and returnability constants you can set
in a with clause, and positioned UPDATE/DELETE/INSERT
operations -see also "Does Oracle SQLJ support
the WHERE CURRENT OF construct?").
What software components
are required to translate a SQLJ application? to run a SQLJ application?
To translate your application you need the following:
IMPORTANT NOTE: In SQLJ 8.1.6 and earlier there
was only one runtime.zip library. Furthermore,
this library was also contained in translator.zip.
Starting with 8.1.7, the runtime classes have been removed from the translator
library and an appropriate runtime library must now be provided separately
on the CLASSPATH.
Can I use the
Oracle SQLJ translator to translate an application to run against a non-Oracle
SQLJ runtime?
Yes, if you do not use Oracle-specific features in your
code and do not use the default Oracle customizer (for example, if you
set -profile=false for translation).
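The translation setting mentioned above can be sketched as a single invocation (file name hypothetical):

```shell
# Translate without the Oracle customizer so the generated code can run
# against a generic (non-Oracle) SQLJ runtime.
sqlj -profile=false MyPortableApp.sqlj
```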
Is SQLJ really
database neutral?
Yes. The SQLJ translator makes minimal assumptions about
the SQL dialect. We assume, for example, that you can have the following:
#sql positer = { ... SQL operation ... };
if the SQL statement begins with SELECT,
but not if it begins with INSERT.
The SQL in such constructs is simply passed to the JDBC
driver. If your JDBC driver and database understand the SQL dialect you
embed, SQLJ doesn't complain. When semantic analysis is done on a SELECT
statement in a #sql construct, SQLJ does
not assume that your SELECT statement
syntax is SQL92.
Nothing about the SQLJ language is JDBC-specific. It
can in principle be implemented with interfaces other than JDBC. The Oracle
implementation of SQLJ happens to be based on JDBC.
4.3 JDBC
Drivers
What should I
know about the Oracle JDBC drivers?
Oracle SQLJ is based on the ANSI-standard SQLJ Reference
Implementation. If you do not use Oracle-specific features and do not use
the Oracle customizer (for example, if you set -profile=false
for translation), then you do not have to use an Oracle JDBC driver. With
SQLJ version 9.0.1 and later specify runtime-nonoracle.zip
in your CLASSPATH.
4.4
SQLJ, JDBC, and JDK Versions
What are the different
versions of Oracle SQLJ and how do I get them?
How do the Oracle
SQLJ 8.0.5 and 7.3.4 versions differ from the 8.1.5 versions?
They don't. These versions differ in name and packaging
only. Since SQLJ 8.1.5 ships with the Oracle 8.1.5 database, we provide
the SQLJ 8.0.5/7.3.4 versions for free download so that programmers can
use SQLJ in conjunction with 8.0.x or 7.3.4 JDBC drivers against an 8.0.x
or 7.3.x database.
The main discrepancies to keep in mind are restrictions
(and some changes) when using the 8.0.x and 7.3.x JDBC drivers:
Oracle SQLJ does not support JDK 1.0.2.
Do I need a different
set of SQLJ class files depending on whether I'm running under JDK 1.1.x
or JDK 1.2 or later?
There is only a single translator.zip
file for all JDK and JDBC versions. Additionally, if you are using SQLJ
8.1.7 or later, you need to select one of several runtime libraries:
5. SQLJ
Language and Programming
In the sample programs,
I see the following import clauses at the beginning of the source code.
What are these packages? Do I need to use the same import statements in
my own SQLJ files?
import java.sql.*;
import sqlj.runtime.*;
import sqlj.runtime.ref.*;
import oracle.sqlj.runtime.*;
These packages belong to the standard JDBC API (java.sql.*)
and the SQLJ runtime API (sqlj.runtime.*).
It's often simplest to import the entire packages; however, you may choose
instead to import only those classes that your application will use directly.
Discussion of connection contexts can get a little deep,
but you can think of a connection context as a framework for a set of connections
for SQL operations that use a particular set of database resources. This
mechanism allows you to implement more robust semantics-checking during
translation.
In SQLJ, each database connection uses its own instance
of a connection context class; connections that use the same kinds of
database resources can share a single connection context class. SQLJ provides
one connection context class--sqlj.runtime.ref.DefaultContext--and
you can declare additional connection context classes as needed.
Each connection context uses its own class, so has its
own "type". Such "strong typing" is one of the key advantages of SQLJ,
allowing for rigorous semantics-checking during translation.
The fact is, however, that many (even most) applications
need only one connection context and so can get by using only the DefaultContext
class without declaring any additional classes.
(In case you do want to use multiple connection context
classes, see How do I create iterator classes and connection context classes?
later in this FAQ.)
What's an iterator?
An iterator is SQLJ's version of a result set, but a strongly
typed version--column types and (optionally) column names are specified.
You declare an iterator class for each kind of iterator
you want to use (where "kind of iterator" refers to iterators with a given
set of columns). As with connection contexts, this strong typing is a key
advantage of SQLJ. There are two categories of iterators--"named iterators",
where you specify column names as well as column types, and "positional
iterators", where you specify only column types (which SQLJ references
by position instead of by name).
How do I create iterator
classes and connection context classes?
SQLJ provides syntax for declaring (i.e., creating) iterator
classes and connection context classes.
Here is an example of a named iterator class declaration
(with a String column named ename
and a double column named sal):
#sql public iterator NamedIterClass
(String ename, double sal);
Here is an example of a positional iterator class declaration:
#sql public iterator PosIterClass
(String, double);
And here is an example of a connection context class declaration:
#sql public context MyContextClass;
You can declare an iterator or connection context class wherever it would
be legal to define a class of any kind. When the SQLJ translator encounters
an iterator declaration or connection context declaration, it inserts a
class definition into the .java output
file.
Note: If you declare an iterator class or connection
context class at the class level or nested-class level, it might be advisable
to declare it public static as opposed
to simply public.
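As a sketch of how such a named iterator is used once declared (illustrative SQLJ, which must be run through the SQLJ translator; the emp table with ename and sal columns is assumed for the example):

```
NamedIterClass iter;
// Populate the iterator from a query whose columns match the declaration
#sql iter = { SELECT ename, sal FROM emp };
while (iter.next()) {
    // The translator generates accessor methods from the declared column names
    System.out.println(iter.ename() + " earns " + iter.sal());
}
iter.close();
```

The generated accessor methods (ename(), sal()) are what makes the iterator strongly typed: a mismatch between the query's columns and the declaration is caught during translation rather than at runtime.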
What's an
execution "context"?
Execution contexts provide a means of exerting control
and checking status of your SQL operations. Each execution context instance
is an instance of the standard sqlj.runtime.ExecutionContext
class. There is an implicit execution context instance with each connection
context instance, but you can explicitly create and use execution context
instances as well.
Examples of execution context class methods include:
(See the next question regarding how to specify which
connection context instance and/or execution context instance to use for
a given statement.)
How do I specify
which connection context instance and/or execution context instance a SQLJ
statement should use?
Suppose you have instantiated a connection context instance
connctxt and an execution context instance
execctxt. Consider the following examples.
#sql [connctxt] { ...SQL operation...};
The preceding specifies that connctxt
should be used for this statement. The default execution context instance
will be used.
#sql [execctxt] { ...SQL operation...};
The preceding specifies that execctxt
should be used for this statement. The default connection context instance
will be used.
#sql [connctxt, execctxt] { ...SQL
operation...};
The preceding specifies that connctxt
and execctxt should both be used (the connection
context instance must precede the execution context instance when specifying
both).
What exactly does
the fetch size correspond to?
Does anyone know what exactly the fetch size corresponds
to (the number of rows returned)?
Yes, it is a hint as to how many rows are supposed to
be returned in a single round trip when you read through a result set.
The default is 10 rows. If the result set contains certain kinds of columns
(namely LONG or LONG RAW), then the JDBC driver always implicitly uses a fetch
size of 1.
Where is the
"Oracle" class and what does it provide?
The Oracle class is in
the oracle.sqlj.runtime package for release
8.1.5 and higher and provides convenient static methods to create and close
instances of the standard sqlj.runtime.ref.DefaultContext
class, used for database connections.
Use Oracle.connect() to
create a DefaultContext instance and establish
it as your default connection.
Use Oracle.getConnection()
to simply create a DefaultContext instance.
Use Oracle.close() (available
in release 8.1.6 and higher) to close your default connection.
For earlier versions of Oracle SQLJ, you will have to
use equivalent functionality (constructors and methods) of the standard
sqlj.runtime.ref.DefaultContext class.
See the SQLJ Developer's Guide and Reference for more information.
Can I use SQLJ
to write multithreaded applications?
The short answer is Yes. The long answer involves connection
context instances and execution context instances.
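A hedged sketch of the multithreading pattern (illustrative SQLJ): when threads share one connection context instance, each thread should use its own ExecutionContext instance so that statement status and control do not interfere across threads:

```
// One ExecutionContext per thread (variable names are illustrative)
ExecutionContext execCtx = new ExecutionContext();
int dept = 10;
#sql [execCtx] { UPDATE emp SET sal = sal * 1.1 WHERE deptno = :dept };
int rows = execCtx.getUpdateCount();  // status queries go through this thread's context
```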
Given that SQLJ supports
only static SQL, can I intermix SQLJ and JDBC statements in my source code
so that I can use dynamic SQL in the same application?
Yes--you can have JDBC statements in your SQLJ source
code. Furthermore, features are built into SQLJ to allow convenient interoperability
between SQLJ iterators and JDBC result sets and between SQLJ connections
and JDBC connections.
If you are using SQLJ version 9.0.1 or later, you get
the best of both worlds: you can embed dynamic SQL code into SQLJ statements
using the syntax ":{ java String expression
}" - see sqlj/demo/DynamicDemo.sqlj
in your SQLJ distribution for more details. The remainder of this answer
discusses how to use the general SQLJ-JDBC interoperability.
To create a JDBC result set rs
from a SQLJ iterator iter, use the getResultSet()
method of your iterator instance:
ResultSet rs = iter.getResultSet();
To create a SQLJ iterator iter
from a JDBC result set rs, use a SQLJ cast
statement:
#sql iter = { CAST : rs };
To create a JDBC connection instance conn
from a SQLJ connection context instance ctx
(inheriting the same underlying connection to the database), use the getConnection()
method of your connection context instance:
Connection conn = ctx.getConnection();
To create a SQLJ connection context instance ctx
from a JDBC connection instance conn (again
inheriting the same underlying connection to the database), use the connection
context class constructor that takes a JDBC connection instance as input:
DefaultContext ctx = new DefaultContext(conn);
(This example uses DefaultContext,
the default connection context class provided with SQLJ.)
Note: Another way to use dynamic SQL in an Oracle
SQLJ application is through PL/SQL, as discussed under Can SQLJ interact
with PL/SQL in my source code? later in this FAQ.
6.
Oracle-Specific Features
Do I need
to do anything special to use Oracle-specific extensions?
No--just do a standard Oracle SQLJ installation, use an
Oracle JDBC driver at translation and runtime (as is typical), and use
the default settings of the Oracle SQLJ translator.
The Oracle semantics-checkers (in oracle.sqlj.checker)
and Oracle customizer (in oracle.sqlj.runtime.util)
are included with a standard installation. The
SQLJUTL package, required for online checking of stored procedures
in an Oracle database, is also included with a standard installation.
By default, when you run the translator it will use the
oracle.sqlj.checker.OracleChecker semantics-checker
front end, which in turn will run an Oracle semantics checker that is appropriate
for your offline/online settings, JDBC driver, and database version.
Also by default, the translator will run the Oracle customizer
so that your application can use Oracle-specific features at runtime.
What Oracle-specific
features does Oracle SQLJ support?
Oracle SQLJ supports the following Oracle-specific types
as host variables and iterator columns:
In addition to the preceding type extensions, Oracle SQLJ
offers the following extended functionality:
"Can you briefly summarize how to use the CustomDatum
feature? I am entirely new to this."
CustomDatum classes are
used (most often) to treat SQL Object type instances as instances of a
Java class. From Java you read and write instances of the CustomDatum
implementation, and these get translated in the database to SQL objects.
The easiest is to generate your CustomDatum
Java classes with the JPublisher tool. Consider the following example.
Note: In Oracle 9.0.1 and later CustomDatum
has been superseded by ORAData, though
the former is still supported. The functionality of ORAData
is nearly identical to CustomDatum.
Or, go to the full documentation set at
and look at the JDBC, SQLJ, and JPublisher manuals.
How can I pass
objects between my SQLJ program and Oracle Stored Procedures?
You have two basic choices:
Can SQLJ interact with PL/SQL in my source code?
Yes, and in fact this is another way (in addition to using
JDBC code) to employ dynamic SQL in an Oracle SQLJ application. (Of course
using PL/SQL is not a feature of standard SQLJ; your application would
not be portable to other platforms.)
Within your SQLJ statements, you can use PL/SQL
anonymous blocks and call PL/SQL stored procedures and stored functions,
as in the following examples: Anonymous block:
#sql {
DECLARE
n NUMBER;
BEGIN
n := 1;
WHILE n <= 100 LOOP
INSERT INTO emp (empno) VALUES(2000 + n);
n := n + 1;
END LOOP;
END
};
Stored procedure call (returns the maximum deadline as
an output parameter into an output host expression):
#sql { CALL MAX_DEADLINE(:out
maxDeadline) };
Stored function call (returns the maximum deadline as
a function return into a result expression):
#sql maxDeadline = { VALUES(GET_MAX_DEADLINE)
};
Does Oracle SQLJ
support PL/SQL BOOLEAN, RECORD, and TABLE types as input and output parameters?
Oracle SQLJ will support these types as soon as the Oracle
JDBC drivers do, but it is unclear when or if JDBC will support them. In
the meantime, however, there are workarounds - you can create wrapper procedures
that handle the data as types supported by JDBC.
For example, say that in your SQLJ application you want
to call a stored procedure that takes a PL/SQL BOOLEAN
input parameter. You can create a stored procedure that takes a character
or number from JDBC and passes it to the original stored procedure as a
BOOLEAN. If the original procedure is PROC,
for example, you can create a wrapper procedure MY_PROC
that takes a 0 and converts it to FALSE
or takes a 1 and converts it to TRUE, and
then calls PROC.
Does Oracle SQLJ
support the WHERE CURRENT OF construct?
Not as of release 9.0.1, though possibly in the future.
As a workaround, you can explicitly select the ROWID column into your iterator,
and then use WHERE ROWID=xxx in place of
WHERE CURRENT OF in the UPDATE
statement. This workaround would go along the following lines.
Standard SQLJ code:
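A sketch of that ROWID-based workaround (illustrative SQLJ; the emp table and the 10% raise are hypothetical):

```
// Select the ROWID alongside the data you want to update
#sql iterator ByRowid (double sal, oracle.sql.ROWID rowid);
ByRowid iter;
#sql iter = { SELECT sal, rowid FROM emp };
while (iter.next()) {
    // WHERE rowid = ... stands in for WHERE CURRENT OF
    #sql { UPDATE emp SET sal = :(iter.sal() * 1.1)
           WHERE rowid = :(iter.rowid()) };
}
iter.close();
```

Note that, unlike a true WHERE CURRENT OF, this updates rows by their stored ROWID, so it assumes the rows are not deleted between the SELECT and the UPDATE.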
Does Oracle SQLJ
support DML returning?
Not currently. As a workaround, use a PL/SQL block, for
example as follows:
#sql {
BEGIN
UPDATE ...
RETURNING x, y, z INTO :OUT x, :OUT y, :OUT z;
END
};
Can I use
the SQL object features with a JDBC 8.1.x driver against an 8.0.x database?
No. Even though the 8.1.x JDBC drivers are backward compatible
to 8.0.x databases, they require an 8.1.x database to support SQL object
features. In a configuration using an 8.1.x driver and 8.0.x database,
you might be able to compile a SQLJ application that uses SQL objects,
but it will not work at runtime.
Can I use SQLJ
to return PL/SQL arrays from stored procedures?
Although SQL VARRAYs have been supported by SQLJ and JDBC,
PL/SQL index tables are not supported by SQLJ or by JDBC at this time.
You would have to use a PL/SQL stored procedure wrapper to convert a PL/SQL
index table to or from a VARRAY or nested table to access the argument
from Java.
For example, if you are trying to call procedure proc01,
defined as follows:
package pack01 is
  type rec01 is record (n1 number, d1 date);
  procedure proc01 (r rec01);
  ...
end;
you can create a wrapper method as follows:
package pack01_wrapper is
  procedure proc01_wrapper (n1 number, d1 date);
  ...
end;
package body pack01_wrapper is
  procedure proc01_wrapper (n1 number, d1 date) is
    r pack01.rec01;
  begin
    r.n1 := n1;
    r.d1 := d1;
    pack01.proc01(r);
  end;
  ...
end;
With Oracle 8i, new object types and new collection
types (VARRAYs and nested tables) are available in JDBC. Thus, your wrapper
package could use an object type with the same attributes as the record,
rather than "exploding" the record into individual components as shown
here.
Does Oracle SQLJ
support REF CURSORS?
Oracle SQLJ supports REF CURSORS along the same lines
that Oracle's JDBC drivers do.
Specifically, a REF CURSOR that is a SELECT column, a
function return, or an OUT parameter of a procedure can be materialized
in SQLJ as a JDBC ResultSet or a SQLJ iterator
instance. In Oracle 9.0.1 and later JDBC stored procedures permit passing
a JDBC ResultSet out as a REF CURSOR argument
(with an Oracle-specific API to enable this). However, at this point SQLJ
stored procedures do not permit passing SQLJ iterators in this way.
7.
Translation (and Compilation and Customization) Process
7.1
Basic Functionality
How do I
run the SQLJ translator?
The Oracle SQLJ installation includes a front-end utility
that automatically runs the Oracle SQLJ translator, your Java compiler,
and the Oracle profile customizer.
For example, on Solaris you run the UNIX command-line
utility sqlj:
sqlj Foo.sqlj
What
are the fundamental steps in translating a SQLJ application? What is input
and output during the translation process?
There are three basic steps: translation, compilation,
and customization (optional). When you run the SQLJ front-end utility,
by default all three steps are executed automatically.
In the translation step, the SQLJ translator
processes, checks, and translates your .sqlj
source file. It checks it for rudimentary semantics errors and optionally
connects to a target database to verify your SQL instructions against the
actual resources in the specified database schema. The translator produces
the following:
Note: if you use Oracle SQLJ 9.0.1 or later you
can specify -codegen=oracle. In this case, the SQLJ translator generates
Oracle JDBC code directly, and skips the generation and customization of
.ser files.
Do I have to customize
my application? Where do I get the customizer utility?
You only need to customize your application if you are
using features that are specific to a particular database or JDBC driver,
such as vendor-specific datatypes or performance enhancements. If you are
using only standard features, then you can disable the command-line -profile
flag to skip the customization step.
Vendors provide vendor-specific customizers. The
Oracle customizer is provided with the Oracle SQLJ product and by default
is executed automatically when you run the front-end SQLJ utility.
For basic use of
SQLJ, how much do I need to know about profiles (.ser files)?
Unless you plan to get fancy, you only need to be aware
of their existence and naming conventions. A profile contains all of the
information about your SQL statements--commands, input parameters, and
output parameters.
If your source file is Foo.sqlj,
the profile will be in Foo_SJProfile0.ser.
Any additional profiles would be in Foo_SJProfile1.ser,
Foo_SJProfile2.ser, and so on.
You will have more than one profile for an application
if you use more than one connection context class, but that is an advanced
topic. Most applications use only one connection context class and therefore
have only one profile. (Connection contexts are explained under What's
a connection "context"? earlier in this FAQ.)
Can I compile
my SQLJ program without customization? What do I need to know to create
an application that can be deployed to multiple platforms?
SQLJ is interesting because of the possibility of
interoperability as well as vendor customizations. According to the
Oracle SQLJ programmer's guide, Oracle customization is performed by default.
This means that the Oracle customization classes and an Oracle JDBC driver
are needed on any platform where you deploy a customized .ser
file.
-profile=false - turns
off customization
-default-customizer=classname
- sets the default customizer. If classname is empty, this in fact turns
off customization.
Online checking uses a connection to a specified database
to check your SQL operations against database resources.
Both kinds of checking do the following: 1) analyze the
types of Java expressions in your SQLJ executable statements; 2) categorize
your embedded SQL operations (based on SQL keywords such as SELECT and
INSERT).
In addition, online checking (but not offline checking)
does the following: 3) analyzes your embedded SQL operations and checks
their syntax against the database; 4) checks the Java types in your SQLJ
executable statements against SQL types of corresponding database columns
and stored procedure and function parameters.
I know
I can enable online checking by giving the option -user=scott. How can
I disable online checking?
Specify an empty user name: -user=
Similarly, to enable online checking for a particular
connection context: -user@Ctx=scott
And to disable online checking for a particular connection
context: -user@Ctx=
7.2
Java Configurations
Can I use Java
compilers other than javac (the standard compiler included with the Sun
JDKs)?
Yes. The SQLJ translator defaults to the standard javac
compiler, but lets you specify an alternative compiler through the command-line
-compiler-executable option. Any compiler
you use, however, must behave as follows:
Is it possible to use
the JDK 1.0.2 version of the javac compiler to compile .java files generated
by the SQLJ translator?
No. You need a Java compiler from JDK 1.1.x or 1.2.
7.3 National
Language Support
Does the Oracle SQLJ
translator provide NLS support?
Yes, SQLJ provides NLS support in the following areas:
With SQLJ release 9.0.1 and later additional support is
provided for SQL NATIONAL LANGUAGE CHARACTER SET unicode columns through
additional SQLJ-specific types: oracle.sql.NCHAR
/ NCLOB / NString,
etc.
Can I specify
the encoding for SQLJ to use?
Yes. You can use the command-line -encoding
option to specify the NLS encoding that the SQLJ translator will apply
to .sqlj and .java
files being input and .java files that
it outputs. The default is whatever is in your file.encoding
system property.
The translator also passes the -encoding
value to the Java compiler it will use, unless you have instructed it not
to do so by disabling -compiler-encoding-flag.
8.
Development Environments
Can I develop and
debug SQLJ programs with Oracle JDeveloper?
Yes, JDeveloper fully incorporates Oracle SQLJ.
Developing: Creating SQLJ code in JDeveloper is
no different than creating Java code. SQLJ files (.sqlj)
can be included in your projects as well as Java files (.java).
Compiling: When you compile a SQLJ source file
(identified by the .sqlj file extension),
the Oracle SQLJ translator and Oracle profile customizer are automatically
invoked. Prior to compiling, you can use JDeveloper to set some of the
SQLJ command-line translation options.
Debugging: SQLJ statements can be debugged in-line
as your application executes, as with any Java statements. Reported line
numbers map back to the original SQLJ source code (as opposed to the Java
code that the translator generates).
Note: JDeveloper versions 3.0 and 2.0 were packaged
with Oracle SQLJ version 8.1.5. While JDeveloper 3.0 supports JDK 1.2,
Oracle SQLJ 8.1.5 does not. If you want to develop a JDK 1.2 application
that uses SQLJ in JDeveloper 3.0, you need to download and apply the SQLJ
8.1.6 SDK patch release from the Oracle Technology Network (OTN) Web site
(technet.oracle.com). SQLJ versions provided with JDeveloper 3.1 and higher
do not have this limitation.
Can I develop and
debug SQLJ programs with other IDEs such as VisualAge for Java, Visual
Café, or Visual J++?
SQLJ is fully integrated with Oracle JDeveloper. In addition,
VisualAge For Java supports an IBM-specific version of SQLJ in the IDE.
The SQLJ standard incorporates a set of interfaces and
APIs designed to ensure that IDE vendors can easily integrate SQLJ development
and debugging capabilities.
Almost all of the current IDEs provide hooks to integrate
preprocessors. You may already be able to incorporate SQLJ into your development
environment. Consult your IDE documentation for information.
Compiling sqlj
via ant (jakarta)
"How do you configure jakarta ant to compile .sqlj
programs?"
Note that the source for the sqlj wrapper executable sqlj.exe
is available on the SQLJ downloads at.
This executable is the NT equivalent of the Unix sqlj
shell script: it expands wildcards, performs some transformations on
the command-line options, and looks up some environment
variable settings. Essentially, it boils down to calling:
java sqlj.tools.Sqlj <options and files>
The following are some snippets from an ant build.xml.
This omits many details and the sqlj target
does not work, since SQLJ expects a list of files rather than being able
to accept wildcards.
<target name="sqlj">
<java classname="${sqlj.main}"
args="-props=${sqlj.propfile}
gen/*"
fork="yes" failonerror="yes">
<classpath>
<pathelement location="translator.zip"/>
<pathelement location="runtime12.zip"/>
<pathelement location="classes12.zip"/>
</classpath>
</java>
</target>
Can SQLJ be used
in Java applets?
Yes. The SQLJ runtime environment consists of a thin layer
of pure Java code together with the JDBC driver being used. Oracle offers
the 100% Java JDBC Thin driver, which is roughly 150K compressed and can
be downloaded into a client browser along with the applet. In your applet
code, specify the Oracle JDBC Thin driver for your database connections.
See the SQLJ Developer's Guide and Reference for more information.
Can a SQLJ applet connect to a database on a host
other than the one from which the applet was downloaded?
No, this is prevented by Java applet security. However,
you could use the Oracle Net8 Connection Manager product to work around
this limitation. Alternatively, you may be able to use browser-specific
APIs to obtain the required privileges.
For end-users,
what browsers will work with SQLJ applets?
Netscape Communicator 4.x and Microsoft Internet Explorer
4.x include JDK 1.1.x and are known to work. Communicator 5.x and Internet
Explorer 5.x include JDK 1.2 and are presumed to work.
Netscape Navigator 3.x and Microsoft Internet Explorer
3.x use JDK 1.0.2. To use these browser versions, you must use a plug-in
or some other means of employing JDK 1.1.x or above.
Can SQLJ be
used in middle-tier Java applications?
Yes, middle-tier SQLJ applications can be executed in
any JDK 1.1.x-compliant or 1.2-compliant Java Web server or Java application
server, including the Oracle Application Server. In a middle-tier environment,
SQLJ applications can use either the JDBC OCI driver or the JDBC Thin driver.
Can SQLJ be
used inside servlets?
Yes. Write it as any servlet, but with the .sqlj
file name extension for your source file. Then run the SQLJ translator
as usual.
Can SQLJ
be used inside JavaServer Pages?
No, as of OC4J 10.1.3.1, this is no longer supported.
Can SQLJ applications
work across firewalls?
Yes--there are no firewall limitations specific to SQLJ.
Because the runtime environment for a SQLJ application consists of a thin
layer of pure Java code on top of a JDBC driver, a SQLJ application will
work with any firewall with which the chosen JDBC driver will work.
The Oracle JDBC OCI driver and Thin driver can work in
either an intranet or extranet setting. In an extranet deployment, the
drivers can be used with firewalls which have been SQL*Net certified.
The following firewalls have been certified with SQL*Net:
Can I
use operating system authentication to connect from SQLJ?
You can log on using external credentials with the OCI
driver by passing in nulls as the username
and password.
10.
Running SQLJ Programs
What debugging
support does SQLJ offer?
At the simple end of the spectrum are the translator -linemap
option and the server-side debug option.
The -linemap option instructs
the SQLJ translator to map line numbers in the SQLJ source code to line
numbers in the generated .class file (the
.class file produced by the compiler during
compilation of the .java file that was
generated by the translator). This way, you can trace runtime errors to
lines of code in your SQLJ source file.
If you are using the SQLJ translator that is embedded
in the server, there is no -linemap option
but the same functionality is implemented automatically. You can also use
the server-side debug option, which is
similar in nature to the -g option of the
javac compiler.
At the more complex end of the spectrum is a special profile
customizer known as the "auditor installer"--sqlj.runtime.profile.util.AuditorInstaller.
This customizer inserts debugging statements--called auditors--into profiles
that you specify on the SQLJ command line. The debugging statements will
execute during runtime as you execute the application, displaying a trace
of method calls and values returned. (Some of the questions under "Basic
Functionality" earlier in this FAQ touch on profiles and customizers.)
The -P-debug command-line
option will instruct SQLJ to run the auditor installer.
SQLJ debugging support is also built into Oracle JDeveloper.
(See Can I develop and debug SQLJ programs with Oracle JDeveloper? earlier
in this FAQ.)
If I translate
an application with one version of the Oracle SQLJ translator, will I be
able to run the application against a future version of the Oracle SQLJ
runtime?
Yes, Oracle will maintain this sort of backward compatibility.
For example, an application that you translate with the 8.1.5 version of
the translator will run against the 8.1.6 (or the 9.0.1) version of the
SQLJ runtime.
Naturally, you must also consider Java version compatibilities.
For example, you may not be able to run a particular JDK 1.2 application
in a JDK 1.1 environment. | http://www.oracle.com/technology/tech/java/sqlj_jdbc/htdocs/faq.html | crawl-002 | refinedweb | 10,522 | 57.37 |
A new version of Kubernetes is out, so here we are with another Kubernetes article. With Kubernetes 1.5, kubeadm is still in alpha, and it is not recommended for production use as it still does not support load balancers. We are going to install the well-known online sock shop as a demo, and we will use a NodePort to expose the service.
Installing Kubernetes 1.5 on all nodes
Let's add the Kubernetes repository for CentOS:
cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=
enabled=1
gpgcheck=0
repo_gpgcheck=0
EOF
After adding the repo, we need to turn off SELinux because it does not play very well with Kubernetes. To turn it off temporarily, type
setenforce 0
To make it persist after reboot, use nano to edit SElinux config file like this:
nano /etc/selinux/config
and make sure the SELINUX line is set to permissive or disabled:
SELINUX=disabled
Save the file and we can continue to installing the required packages.
yum install docker kubelet kubeadm kubectl kubernetes-cni
To enable docker auto start at boot, run this command:
systemctl enable docker
And to start it now, run the following.
systemctl start docker
Next, let's do the same for kubelet:
systemctl enable kubelet
systemctl start kubelet
Setting up the cluster
The first thing we need to do is decide on the master of our new cluster. If all nodes are set up as shown above, we next run the following command on the designated master node.
kubeadm init
Note that you cannot run this command twice; you will need to tear down the cluster before running it a second time. The output will be similar to this:
[kubeadm] WARNING: kubeadm is in alpha, please do not use it for production clusters.
[preflight] Running pre-flight checks
[preflight] WARNING: firewalld is active, please ensure ports [6443 9898 10250] are open or your cluster may not function correctly
[init] Using Kubernetes version: v1.5.1
[tokens] Generated token: "9a6b48.b4011ffeeb237381"
105.821991 seconds
[apiclient] Waiting for at least one node to register and become ready
[apiclient] First node is ready after 4.505809 seconds
[apiclient] Creating a test deployment
[apiclient] Test deployment succeeded
[token-discovery] Created the kube-discovery deployment, waiting for it to become ready
[token-discovery] kube-discovery is ready after 68.003359 seconds
[addons] Created essential addon: kube-proxy
[addons] Created essential addon: kube-dns
Your Kubernetes master has initialized successfully!
You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
You can now join any number of machines by running the following on each node:
kubeadm join --token=9a6b48.b4011ffeeb237381 45.55.128.42
Installing pod network and adding nodes to a cluster
In the above part, we initialized the cluster master, and the last line of the output gave us a command with a token that we will use to add nodes. But before we do that, we need to install a pod network.
kubectl apply -f
There are lots of ways to set up a pod network, but the above one is perhaps the simplest. It uses the Container Network Interface, or CNI, which is a proposed standard for networking containers on Linux.
Next, we can add nodes to the cluster by running this command on all the nodes:
kubeadm join --token=bb6fc2.be0345f5b02a32a0 45.55.128.42
The token is sanitized so that you cannot add nodes to my cluster. Next, let's enable pods to run on the master and not only on the worker nodes.
kubectl taint nodes --all dedicated-
After this, we can check the nodes to see if all of them are online.
kubectl get nodes
Installing microservices example
There is a simple microservices example that we will use to test our cluster. It is an online shop for socks.
First, we will add the sock-shop namespace:
kubectl create namespace sock-shop
And then we create the service
kubectl apply -n sock-shop -f ""
After this, we need to wait some time for the containers to be created, and then we can try to visit the new site. In order to visit it, we must know its address. Let's examine the service:
kubectl describe svc front-end -n sock-shop
It will give you output similar to this
Name: front-end
Namespace: sock-shop
Labels: name=front-end
Selector: name=front-end
Type: NodePort
IP: 10.104.11.202
Port: <unset> 80/TCP
NodePort: <unset> 31500/TCP
Endpoints: 10.32.0.4:8079
Session Affinity: None
No events.
The bold line is highlighted by me because we need the port number that the service is using. We need to combine the port number with the address of one of our nodes, and we will get to the site.
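As a sketch of that step, the service URL can be assembled from a node address and the NodePort (the IP and port below are the example values from this article; substitute your own):

```shell
# Combine a node address with the NodePort reported by `kubectl describe svc`.
# These are the example values from the article above, not real endpoints.
NODE_IP=45.55.128.42
NODE_PORT=31500

URL="http://${NODE_IP}:${NODE_PORT}"
echo "$URL"
# You would then open this URL in a browser, or probe it with:
#   curl -I "$URL"
```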
Conclusion
So we have successfully set up a Kubernetes 1.5 cluster with kubeadm on CentOS 7. In our case it is a three-node cluster, but kubeadm enables you to easily scale the cluster by adding new nodes. Be sure to keep your token private, because with the token and a public IP anyone can add nodes to your cluster. With that we end this article; thank you for reading and have a nice day.
What is Recursive Function?
A recursive function in Python repeatedly calls itself until the computation reaches the desired base value, using divide-and-conquer logic. One obvious disadvantage of using a recursive function in a Python program is that if the recursion is not controlled, it can consume a large portion of system memory. To avoid this problem, an incremental conditional loop can be used in place of a recursive function.
Recursive Function in Python
The concept of recursion remains the same in Python. The function calls itself to break the problem down into smaller problems. The simplest example of recursion we could think of is finding the factorial of a number.
Let’s say we need to find the factorial of number 5 => 5! (Our problem)
To find 5! the problem can be broken into smaller ones 5! => 5 x 4!
So, to get 5! We need to find 4! and multiply it with 5.
Let’s keep on dividing the problem
5! = 4! x 5
4! = 3! x 4
3! = 3 x 2!
2! = 2 x 1!
1! = 1
When it reaches the smallest chunk, i.e., the factorial of 1, we can return the result and let the stacked calls unwind.
Let's take a pseudo-code example:
Algorithm for factorial
Let us see the algorithm for factorial:
function get_factorial( n ):
    if n < 2:
        return 1
    else:
        return n * get_factorial(n - 1)
Function calls
Suppose we are finding a factorial of 5.
get_factorial(5) = 5 * get_factorial(4)  -> returns 120  # 1st call
get_factorial(4) = 4 * get_factorial(3)  -> returns 24   # 2nd call
get_factorial(3) = 3 * get_factorial(2)  -> returns 6    # 3rd call
get_factorial(2) = 2 * get_factorial(1)  -> returns 2    # 4th call
get_factorial(1)                         -> returns 1    # 5th call
The end result, 120, is returned where we started the execution of the function. Our recursive function stops when the number is reduced enough that the base-case result can be obtained.
- The first call, which asks for the factorial of 5, hits the recursive condition: the call is added to the stack and another call is made after reducing the number to 4.
- This recursion will keep on calling and breaking the problem into smaller chunks until it reaches the base condition.
- The base condition here is when the number is 1.
- Every recursive function has its own recursive condition and a base condition.
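The stack-and-unwind behavior described in the list above can be made visible by instrumenting the factorial with its recursion depth; this is our own sketch (the depth parameter and function name are additions, not part of the original example):

```python
def get_factorial_traced(n, depth=0):
    """Factorial with a depth parameter (our addition) to show each call."""
    indent = "  " * depth
    print(indent + "get_factorial(%d)" % n)
    if n < 2:                                   # base condition
        result = 1
    else:                                       # recursive condition
        result = n * get_factorial_traced(n - 1, depth + 1)
    print(indent + "returns %d" % result)
    return result

print(get_factorial_traced(5))  # the last line printed is 120
```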
Pros and Cons of Python Recursive Function
- Recursion performs no calculations until it reaches the base condition.
- While reaching the base condition, you may run out of memory.
- In a large problem, where there can be a million steps (a million recursive calls), the program might end up with a memory error or a segmentation fault.
- 1000000! = 1000000 * 999999! = ?
- Recursion is different from iteration; it doesn't scale up the way an iterative method does.
- Different languages have different optimizations for recursion.
- In many languages, the iterative method would perform better than recursion.
- Every language has some restrictions over the depth of recursion which you might face when solving large problems.
- Sometimes it's hard to understand complex problems with recursion, whereas they are pretty simple with iteration.
Some Pros
- In some cases, recursion is a convenient and faster approach.
- Very much useful in the traversal of the tree and binary search.
Python Code – Recursion vs Iteration
We understood what recursion is and how it works in Python. As we know, languages differ in how they implement recursion for memory and computational optimizations, and there can be cases where iteration is faster than recursion.
Here we compare the recursion and iteration methods to see how Python performs in both cases.
1. Recursion Code for Factorial
def get_recursive_factorial(n):
if n < 0:
return -1
elif n < 2: #base condition
return 1
else:
return n * get_recursive_factorial(n -1) #recursion condition
2. Factorial problem using iteration (looping)
def get_iterative_factorial(n):
if n < 0:
return -1
else:
fact = 1
for i in range( 1, n+1 ):
fact *= i
return fact
3. Printing Results
print(get_recursive_factorial(6))
print(get_iterative_factorial(6))
Output:
As we can see, both give the same output since we have written the same logic. So far we can't see any difference in execution.
Let’s add some time code to get more information about the execution of recursion and iteration in Python.
We will import the “time” library and check what time recursion and iteration take to return the result.
4. Code with time calculation
import time
def get_recursive_factorial(n):
if n < 0:
return -1
elif n < 2:
return 1
else:
return n * get_recursive_factorial(n-1)
def get_iterative_factorial(n):
if n < 0 :
return -1
else:
fact = 1
for i in range(1, n+1):
fact *= i
return fact
start_time = time.time()
get_recursive_factorial(100)
print("Recursion--- %s seconds ---" % (time.time() - start_time))
start_time = time.time()
get_iterative_factorial(100)
print("Iteration--- %s seconds ---" % (time.time() - start_time))
We will do repeated executions with different values for the factorial and see the results. The results below could vary from machine to machine. We used a MacBook Pro with 16 GB RAM and an i7.
We are using Python 3.7 for execution
Case 1:- Factorial of 6:
Case 2: Factorial of 50:
Case 3: Factorial of 100:
Case 4: Factorial of 500:
Case 5: Factorial of 1000:
We have analyzed both methods on problems of different sizes. Both performed similarly except in case 4.
In case 5 we got an error while doing it with recursion.
Python has a restriction on the maximum depth you can reach with recursion, but the same problem could be solved with iteration. This limit guards against overflow: Python is not optimized for tail recursion, and uncontrolled recursion would otherwise cause a stack overflow.
The “sys.getrecursionlimit()” function tells you the current limit for recursion. The recursion limit can be changed, but doing so is not recommended; it could be dangerous.
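A minimal sketch of inspecting and adjusting the limit (the default value varies by interpreter build, so treat the numbers as illustrative):

```python
import sys

# Read the current recursion depth limit (commonly 1000 by default,
# though the exact value depends on the interpreter build).
original = sys.getrecursionlimit()
print(original)

# The limit can be raised, but this is risky: set it too high and deep
# recursion can crash the interpreter with a real stack overflow.
sys.setrecursionlimit(original + 500)
print(sys.getrecursionlimit())

# Restore the original limit.
sys.setrecursionlimit(original)
```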
Conclusion – Python Recursive Function
- Recursion is a handy solution for some problems like tree traversal and other problems.
- Python is not a functional programming language, and we can see that its recursion is not as optimized as iteration.
- We should use iteration in our algorithms, as it is more optimized in Python and gives better speed.
Recommended Articles
This is a guide to Recursive Function in Python. Here we discuss What is Recursive Function, Recursive function in Python, Algorithm for factorial, etc. You can also go through our other suggested articles to learn more– | https://www.educba.com/recursive-function-in-python/ | CC-MAIN-2020-29 | refinedweb | 1,092 | 55.84 |
Local .NET Development With Docker
Learn how to work with Docker and JetBrains Rider in our development environment.
At this point in the tutorial, you may be formulating ideas and thoughts around containerization, what it is, and if it is for you. This section will start using Docker with several types of .NET applications and see what the experience is like for an average .NET developer.
For the following tutorials, we'll be using JetBrains Rider and the .NET 5 SDK. We'll also need Docker Desktop installed to run our containers.
Docker Hello, World
What tutorial would be complete without a "Hello, World" application? Starting from JetBrains Rider's new solution dialog, we can select Console Application from the templates. In the configuration window on the right, we need to pick the Linux option from the Docker Support selection.
Once our solution is created and loaded, we'll see three files in our new console project: Program.cs, Dockerfile, and .dockerignore. Before we look at the Dockerfile, let's change the output of our console application.
using System;

Console.WriteLine("Hello Docker!");
Now, let's open our Dockerfile and see what's happening line by line. The default Docker definition that comes with the .NET 5 templates does something familiar in the Docker ecosystem. It utilizes two images to create a final image. Our application's final image will be much smaller because it will only be using the .NET runtime rather than the entire SDK.
FROM mcr.microsoft.com/dotnet/runtime:5.0 AS base
WORKDIR /app

FROM mcr.microsoft.com/dotnet/sdk:5.0 AS build
WORKDIR /src
COPY ["HelloDocker/HelloDocker.csproj", "HelloDocker/"]
RUN dotnet restore "HelloDocker/HelloDocker.csproj"
COPY . .
WORKDIR "/src/HelloDocker"
RUN dotnet build "HelloDocker.csproj" -c Release -o /app/build

FROM build AS publish
RUN dotnet publish "HelloDocker.csproj" -c Release -o /app/publish

FROM base AS final
WORKDIR /app
COPY --from=publish /app/publish .
ENTRYPOINT ["dotnet", "HelloDocker.dll"]
The first line of the Dockerfile denotes our parent image. In this case, we are using the dotnet/runtime image for .NET 5. The second line in our Docker definition file sets our working directory to /app. In this case, we are creating the destination folder for the final build of our application.
FROM mcr.microsoft.com/dotnet/runtime:5.0 AS base
WORKDIR /app
The following section does several operations, including using a different base image of dotnet/sdk, creating a src directory, copying the project file along with the source code of our project, and then building our project into the final output directory of /app/build.
FROM mcr.microsoft.com/dotnet/sdk:5.0 AS build
WORKDIR /src
COPY ["HelloDocker/HelloDocker.csproj", "HelloDocker/"]
RUN dotnet restore "HelloDocker/HelloDocker.csproj"
COPY . .
WORKDIR "/src/HelloDocker"
RUN dotnet build "HelloDocker.csproj" -c Release -o /app/build
Note the use of the AS keyword. We're giving build steps intermediate image names so we can reference them throughout the containerization process. The next few lines will use our previous build image to publish our project, assuming it built successfully.
FROM build AS publish
RUN dotnet publish "HelloDocker.csproj" -c Release -o /app/publish
The final section will reuse our base image and copy our build and publish steps into our /app directory. It will also set the entry point for our container, which will run our .NET application.
FROM base AS final
WORKDIR /app
COPY --from=publish /app/publish .
ENTRYPOINT ["dotnet", "HelloDocker.dll"]
Great! Now, let's run this application inside a container. We'll go through two ways: Docker CLI and using JetBrains Rider's Docker integration.
Having stepped through the Dockerfile, a common question newcomers to Docker might ask is:

Why is the Dockerfile definition running dotnet restore with just the project file, and only later is it running dotnet build?
The reason lies in how Docker stores images. Each step in the Dockerfile is a separate read-only layer, and the Docker engine will only replace layers when there are changes. Updating code typically happens more often than adding or updating package references, and separating the two ensures that a full package restore is only executed when that layer changes. This management of layers allows Docker to reuse layers and speed up image builds.
Docker CLI
We need to open a terminal window at the root of the solution, where we'll run the following command to build our Dockerfile into an image. In our case, our project name is HelloDocker, but adjust the command according to your project name. We also have a few optional build flags like --rm, which cleans up any intermediary images, and -t, which gives our new image the name hello-docker, helping us find it easily.
Docker persists images on our machine long after a successful build. Persistence increases future build performance, but comes at the cost of disk space. The --rm flag is useful in short-term experiments, where we might be exploring different build configurations.
docker build -f HelloDocker/Dockerfile -t hello-docker --rm .
The . at the end tells the Docker CLI what context it should upload to our container image build. If we were to forget to add the path, our Docker image build would fail because Docker will not find our project files.
Running the command, we see the following output in our terminal.
➜ docker build -f HelloDocker/Dockerfile -t hello-docker --rm .
[+] Building 4.7s (18/18) FINISHED
 => [internal] load build definition from Dockerfile                                  0.0s
 => => transferring dockerfile: 37B                                                   0.0s
 => [internal] load .dockerignore                                                     0.0s
 => => transferring context: 2B                                                       0.0s
 => [internal] load metadata for mcr.microsoft.com/dotnet/sdk:5.0                     0.0s
 => [internal] load metadata for mcr.microsoft.com/dotnet/runtime:5.0                 0.0s
 => [build 1/7] FROM mcr.microsoft.com/dotnet/sdk:5.0                                 0.0s
 => [internal] load build context                                                     0.0s
 => => transferring context: 6.61kB                                                   0.0s
 => [base 1/2] FROM mcr.microsoft.com/dotnet/runtime:5.0                              0.0s
 => CACHED [build 2/7] WORKDIR /src                                                   0.0s
 => CACHED [build 3/7] COPY [HelloDocker/HelloDocker.csproj, HelloDocker/]            0.0s
 => CACHED [build 4/7] RUN dotnet restore "HelloDocker/HelloDocker.csproj"            0.0s
 => [build 5/7] COPY . .                                                              0.0s
 => [build 6/7] WORKDIR /src/HelloDocker                                              0.0s
 => [build 7/7] RUN dotnet build "HelloDocker.csproj" -c Release -o /app/build        2.7s
 => [publish 1/1] RUN dotnet publish "HelloDocker.csproj" -c Release -o /app/publish  1.8s
 => CACHED [base 2/2] WORKDIR /app                                                    0.0s
 => CACHED [final 1/2] WORKDIR /app                                                   0.0s
 => CACHED [final 2/2] COPY --from=publish /app/publish .                             0.0s
 => exporting to image                                                                0.0s
 => => exporting layers                                                               0.0s
 => => writing image sha256:b8ff862ff829ece58c3ac884c5bdc895795347caeefd12e7597ce8f2e9ac3912 0.0s
 => => naming to docker.io/library/hello-docker                                       0.0s
Now, let's run our image in a new container using the Docker CLI command run.
docker run hello-docker
Running the command will give us the output from our first running Docker-hosted application.
➜ docker run hello-docker
Hello Docker!
JetBrains Rider Docker Integration
Congratulations, we did it! Now, let's look at the easier way to build and run Docker containers using JetBrains Rider.
JetBrains Rider comes bundled with Docker integration, giving developers who prefer a GUI experience all the tools necessary to define, build, and deploy images. With the Dockerfile in our editor, we'll see two green chevrons in the file's top-left.
Let's set some of the command-line flags we had during our CLI experience. We need to click the chevrons and select the Edit HelloDocker/Dockerfile option.
From the Edit Run Configuration dialog, we'll set the Image tag to hello-docker and add the build option --rm for this straightforward example. If we don't see the build options, we can click Modify Options and enable the text box.
Once we've applied our changes, we can run them either from the dialog or from the editor window using the chevrons. We'll see our image along with the container in the Services tool window.
We'll talk more about the Services tool window in the following sections of this tutorial.
There we have it! We've created an image from our Dockerfile definition using the Docker CLI and JetBrains Rider's Docker integration. Developers should be familiar with the CLI, but there's no beating the convenience of clicking a few buttons.
I have recently completed the Orion Constellations path project for the “Visualize Data with Python” skillpath and I encourage any feedback on the project.
Project: Visualizing the Orion Constellation
In this project we will be visualizing the Orion constellation in 2D and 3D using the Matplotlib function .scatter().
The goal of the project is to understand spatial perspective. Once we visualize Orion in both 2D and 3D, we will be able to see the difference in the constellation shape humans see from earth versus the actual position of the stars that make up this constellation.
1. Set-Up
We will add %matplotlib notebook in the cell below. This statement will allow us to rotate our visualization in this Jupyter notebook.

We will be importing matplotlib.pyplot as usual.

In order to see our 3D visualization, we also need to add this new line after we import Matplotlib:
from mpl_toolkits.mplot3d import Axes3D
%matplotlib notebook

from matplotlib import pyplot as plt
from mpl_toolkits.mplot3d import Axes3D
2. Get familiar with real data
The x, y, and z lists below are composed of the x, y, z coordinates for each star in the collection of stars that make up the Orion constellation, as documented in a paper by Nottingham Trent University on "The Orion constellation as an installation" found here.
#]
3. Create a 2D Visualization
Before we visualize the stars in 3D, let’s get a sense of what they look like in 2D.
We first create a figure for the 2D plot and save it to a variable named fig_2d.

Then we add a subplot with .add_subplot() as the single subplot, with 1,1,1.

We use the scatter function to visualize our x and y coordinates.
We also set the background color to “black” and the marker on the datapoints to “star” so that they imitate the stars in the black sky.
We finally give a title and render our visualization.
The 2D visualization does not look like the Orion constellation we see in the night sky. There is a curve to the sky and this is a flat visualization, but we will visualize it in 3D in the next step to get a better sense of the actual star positions.
fig_2d = plt.figure()
ax = fig_2d.add_subplot(1, 1, 1)
plt.scatter(x, y, color='yellow', marker='*')
plt.title('2D Visualization of the Orion Constellation')
plt.xlabel('Orion x Coordinates')
plt.ylabel('Orion y Coordinates')
ax.set_facecolor('xkcd:black')
plt.show()
4. Create a 3D Visualization
We first create a figure for the 3D plot and save it to a variable named fig_3d.

Since this will be a 3D projection, we want to tell Matplotlib this will be a 3D plot. To add a 3D projection, we must include the projection argument, like this: projection="3d"

Then we add our subplot with .add_subplot() as the single subplot 1,1,1 and specify our projection as 3d: fig_3d.add_subplot(1,1,1,projection="3d")
Since this visualization will be in 3D, we will need our third dimension: in this case, our z coordinate.

We then create a new variable constellation3d and call the scatter function with our x, y and z coordinates.
We also set the background color to “black” and the marker on the datapoints to “star” so that they imitate the stars in the black sky.
We finally give a title and render our visualization.
fig_3d = plt.figure()
constellation3d = fig_3d.add_subplot(1, 1, 1, projection="3d")
constellation3d.scatter(x, y, z, color='yellow', marker='*', s=50)
plt.title('3D Visualization of the Orion Constellation')
constellation3d.set_xlabel('Orion x Coordinates')
constellation3d.set_ylabel('Orion y Coordinates')
constellation3d.set_zlabel('Orion z Coordinates')
plt.gca().patch.set_facecolor('white')
constellation3d.w_xaxis.set_pane_color((0, 0, 0, 1.0))
constellation3d.w_yaxis.set_pane_color((0, 0, 0, 1.0))
constellation3d.w_zaxis.set_pane_color((0, 0, 0, 1.0))
plt.show()
Any feedback is most welcome. Thanks a lot! | https://discuss.codecademy.com/t/visualize-data-with-python-orion-constellation-project/516053 | CC-MAIN-2020-45 | refinedweb | 653 | 50.12 |
I know how to use both for loops and if statements on separate lines, such as:
>>> a = [2,3,4,5,6,7,8,9,0]
>>> xyz = [0,12,4,6,242,7,9]
>>> for x in xyz:
...     if x in a:
...         print(x)
0
4
6
7
9
And I know I can use a list comprehension to combine these when the statements are simple, such as:
print([x for x in xyz if x in a])
But what I can’t find is a good example anywhere (to copy and learn from) demonstrating a complex set of commands (not just “print x”) that occur following a combination of a for loop and some if statements. Something that I would expect looks like:
for x in xyz if x not in a: print(x...)
Is this just not the way python is supposed to work?
You can use generator expressions like this:
gen = (x for x in xyz if x not in a)

for x in gen:
    print(x)
As per The Zen of Python (if you are wondering whether your code is “Pythonic”, that’s the place to go):
- Beautiful is better than ugly.
- Explicit is better than implicit.
- Simple is better than complex.
- Flat is better than nested.
- Readability counts.
The Pythonic way of getting the sorted intersection of two sets is:
>>> sorted(set(a).intersection(xyz))
[0, 4, 6, 7, 9]
Or those elements that are in xyz but not in a:
>>> sorted(set(xyz).difference(a))
[12, 242]
But for a more complicated loop you may want to flatten it by iterating over a well-named generator expression and/or calling out to a well-named function. Trying to fit everything on one line is rarely “Pythonic”.
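As an illustration of that advice, the flattened loop from the question could be expressed with a well-named predicate and generator expression (the names is_known and known_values are ours):

```python
a = [2, 3, 4, 5, 6, 7, 8, 9, 0]
xyz = [0, 12, 4, 6, 242, 7, 9]

def is_known(value):
    """A well-named predicate instead of an inline condition."""
    return value in a

# A well-named generator expression keeps the loop header flat and readable.
known_values = (x for x in xyz if is_known(x))

result = list(known_values)
print(result)  # → [0, 4, 6, 7, 9]
```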
Update following additional comments on your question and the accepted answer
I'm not sure what you are trying to do with enumerate, but if a is a dictionary, you probably want to use the keys, like this:
>>> a = {
...     2: 'Turtle Doves',
...     3: 'French Hens',
...     4: 'Colly Birds',
...     5: 'Gold Rings',
...     6: 'Geese-a-Laying',
...     7: 'Swans-a-Swimming',
...     8: 'Maids-a-Milking',
...     9: 'Ladies Dancing',
...     0: 'Camel Books',
... }
>>> xyz = [0, 12, 4, 6, 242, 7, 9]
>>> known_things = sorted(set(a.keys()).intersection(xyz))
>>> unknown_things = sorted(set(xyz).difference(a.keys()))
>>> for thing in known_things:
...     print('I know about', a[thing])
...
I know about Camel Books
I know about Colly Birds
I know about Geese-a-Laying
I know about Swans-a-Swimming
I know about Ladies Dancing
>>> print('...but...')
...but...
>>> for thing in unknown_things:
...     print("I don't know what happened on the {0}th day of Christmas".format(thing))
...
I don't know what happened on the 12th day of Christmas
I don't know what happened on the 242th day of Christmas
I personally think this is the prettiest version:
a = [2,3,4,5,6,7,8,9,0]
xyz = [0,12,4,6,242,7,9]

for x in filter(lambda w: w in a, xyz):
    print(x)
Edit
If you are very keen on avoiding the use of lambda, you can use partial function application together with the operator module (which provides functions for most operators).
from operator import contains
from functools import partial

print(list(filter(partial(contains, a), xyz)))
>>> a = [2,3,4,5,6,7,8,9,0]
>>> xyz = [0,12,4,6,242,7,9]
>>> set(a) & set(xyz)
set([0, 9, 4, 6, 7])
I would probably use:
for x in xyz:
    if x not in a:
        print(x)
        # ...
You can use generators too, if generator expressions become too involved or complex:
def gen():
    for x in xyz:
        if x in a:
            yield x

for x in gen():
    print(x)
Use intersection or intersection_update.

intersection:
intersection :
a = [2,3,4,5,6,7,8,9,0]
xyz = [0,12,4,6,242,7,9]
ans = sorted(set(a).intersection(set(xyz)))
intersection_update:
a = [2,3,4,5,6,7,8,9,0]
xyz = [0,12,4,6,242,7,9]
b = set(a)
b.intersection_update(xyz)
then b is your answer.
Data Mode (Default)
The RN4870 introduces a private Generic Attribute Protocol (GATT) service named “Transparent UART”. This service simplifies serial data transfers over Bluetooth® Low Energy (BLE) devices. When the RN4870 is in Data mode and it is connected to another BLE device, the RN4870 acts as a data pipe. This means that any serial data sent into the RN4870 UART will be transferred to the connected peer device via Transparent UART Bluetooth service. When data is received from the peer device over the air via Transparent UART connection, this data outputs directly to the UART as well. Data mode is the default mode of the RN4870.
Procedure for Connecting Two Modules via Transparent UART
Module #1 SS,C0 // Command to enable Device Information Profile and // Transparent UART services R,1 // Reboot. Needed for all Set commands to take effect Module #2 SS,C0 R,1 F // Starts scanning for peripheral devices and also // returns the MAC address along with other relevant data C,0,<MAC Address> // Command to attempt a connection with the desired remote device
Command Mode
In a typical use case, a host MCU uses ASCII commands over UART to control and exchange data with the RN4870 BLE module. The RN4870 could be set into Command mode for configuration and/or control operations. In Command mode, all UART data are treated as ASCII commands sent into the module's UART interface. The ASCII commands can control functions such as connection setup/teardown, accessing GATT characteristics, changing configuration settings and reading status.
To enter Command mode from Data mode, type $$$ into a terminal emulator with the following settings:
Once the RN4870 enters Command mode, you will see a CMD> prompt sent to the UART to indicate the start of the Command mode session.
The list of commands available on the RN4870 can be classified into the following groups:
- Action commands - used start a process or a functionality or display information.
- Set/Get commands - used to configure or read the configuration of the various functions of the module.
- List commands - list critical information in multiple lines.
- Service definition - used to define services and their characteristics.
- Characteristic access - used to access and write the server/client characteristics.
- Script control - used for the scripting process of the module.
All commands must end with a carriage return ('\r'). The RN4870 will send responses to all commands sent by the host microcontroller. In most cases, Set or Action commands sent by the host microcontroller get the response AOK; however, there are many command-specific responses. By default, when the RN4870 is ready to receive the next command, the command prompt CMD> is sent to UART. It is strongly recommended that the host microcontroller wait until responses are received for commands sent before sending the next command.
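Since the host should wait for a response before issuing the next command, a small helper can collect bytes until the prompt appears. Below is a hedged Python sketch; the read_byte callable stands in for real serial I/O (here simulated with an in-memory response), and the function name is ours:

```python
def read_until_prompt(read_byte, prompt=b"CMD> ", max_bytes=256):
    """Collect bytes from the RN4870 UART until the command prompt
    (or end of input) is seen, then return everything received."""
    received = b""
    for _ in range(max_bytes):
        byte = read_byte()
        if not byte:               # nothing more to read
            break
        received += byte
        if received.endswith(prompt):
            break
    return received

# Simulated module response to a Set command: AOK followed by the prompt.
fake_uart = iter([bytes([b]) for b in b"AOK\r\nCMD> "])
response = read_until_prompt(lambda: next(fake_uart, b""))
print(response)  # b'AOK\r\nCMD> '
```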
If you wish to return to Data mode, enter --- at the command prompt; you will then see an END message indicating that the command session has been terminated.
Script Mode
The script capability enables the RN4870 module to run relatively simple operations without a host MCU. Script mode enables the user to write ASCII based script into the RN4870's Non-Volatile Memory and execute the application logic automatically through the script. A script consists of ASCII commands that do not need to be compiled or processed, it remains in the RN4870's NVM and does not alter the core firmware in any way. RN4870 scripting is event driven. All event scripts start with an event label followed by one or more logic operations or ASCII commands. If an event label is defined, once that event is triggered, control is passed over to the script engine. The script engine starts executing the commands that are listed below the event label until the end of the script or until encountering another event label. There are 12 currently defined events:
More details can be found on Chapter 3 of the RN4870/71 Bluetooth Low-Energy Module User's Guide.
Remote Command Mode
The RN4870 has the capability of Remote Command mode over a UART Transparent connection. This feature enables the user to execute commands on a connected peer device. The command is sent to the connected remote device, executed at the remote device, and the result is sent back to the local device.
Remote Command mode provides a method to enable stand-alone implementation without a host MCU for the remote device. A local device can use Remote Command Mode to get access to the remote device (module), as well as access and control to all its analog or digital I/O ports. All application logics can be performed locally without the remote device's interference. Therefore, there is no required programming or application logic to run on the remote device, making the remote device extremely easy to implement at a low cost.
Before using Remote Command mode, the UART Transparent service should be enabled using the SS command, (specifically SS,C0 to support device info and UART Transparent services).
There are certain conditions that must be met in order to use Remote Command mode, they include:
- Both local and remote devices support UART Transparent feature.
- The two devices are already connected and secured.
To enter Remote Command mode, use the command !,1.
Upon receiving the request to start the Remote command session, the RN4870 will accept the request if the following conditions are met:
- The BLE link between devices is secured
- Both local and remote devices must share the same pin code (SP command)
If the above conditions are not met, BLE link will be disconnected immediately. Once you are in Remote Command mode, the command prompt CMD> will be changed to RMT>.
To exit Remote Command mode, the local device needs to get back to Command mode by typing $, followed by command !,0. | http://microchipdeveloper.com/ble:rn4870-operating-modes | CC-MAIN-2017-09 | refinedweb | 963 | 50.67 |
Lab 4: Python Lists, Data Abstraction, Trees
Due by 11:59pm on Thursday, July 8.
Starter Files
Download lab
- branches: a list of trees directly under the tree's root
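For reference, the tree abstraction this lab relies on usually looks like the sketch below (the graded definitions ship in the lab's starter files; this is only a plausible reconstruction):

```python
def tree(label, branches=[]):
    """Construct a tree from a label and a list of branches."""
    for branch in branches:
        assert is_tree(branch), 'branches must be trees'
    return [label] + list(branches)

def label(t):
    """Return the label at the root of the tree."""
    return t[0]

def branches(t):
    """Return the list of trees directly under the tree's root."""
    return t[1:]

def is_tree(t):
    """Return True if t is structurally a tree."""
    if type(t) != list or len(t) < 1:
        return False
    return all(is_tree(b) for b in branches(t))

def is_leaf(t):
    """Return True if the tree has no branches."""
    return not branches(t)
```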
List Comprehensions
Q1: Couple
Relevant Topics: List comprehensions

def couple(s, t):
    """Return a list of two-element lists in which the i-th element is [s[i], t[i]].

    >>> a = [1, 2, 3]
    >>> b = [4, 5, 6]
    >>> couple(a, b)
    [[1, 4], [2, 5], [3, 6]]
    >>> c = ['c', 6]
    >>> d = ['s', '1']
    >>> couple(c, d)
    [['c', 's'], [6, '1']]
    """
    assert len(s) == len(t)
    "*** YOUR CODE HERE ***"
Use Ok to test your code:
python3 ok -q couple

Q2: Distance
Relevant Topics: Data Abstraction

def distance(city_a, city_b):
    """
    >>> city_a = make_city('city_a', 0, 1)
    >>> city_b = make_city('city_b', 0, 2)
    >>> distance(city_a, city_b)
    1.0
    >>> city_c = make_city('city_c', 6.5, 12)
    >>> city_d = make_city('city_d', 2.5, 15)
    >>> distance(city_c, city_d)
    5.0
    """
    "*** YOUR CODE HERE ***"
Use Ok to test your code:
python3 ok -q distance
Q3: Closer city
Relevant Topics: Data Abstraction

Hint: How can you use your distance function to find the distance between the given location and each of the given cities?
def closer_city(lat, lon, city_a, city_b):
    """Returns the name of either city_a or city_b, whichever is closest
    to coordinate (lat, lon). If the two cities are the same distance away
    from the coordinate, consider city_b to be the closer city.

    >>>
    """
    "*** YOUR CODE HERE ***"
Use Ok to test your code:
python3 ok -q closer_city
Q4: Don't violate the abstraction barrier!

Use Ok to test your code:

python3 ok -q check_city_abstraction

The check_city_abstraction function exists only for the doctest, which swaps out the underlying implementation of the city abstraction used by check_city.
Trees
Q5: Finding Berries!
The squirrels on campus need your help! There are a lot of trees on campus and
the squirrels would like to know which ones contain berries. Define the function
berry_finder, which takes in a tree and returns
True if the tree contains a
node with the value
'berry' and
False otherwise.
Hint: Considering using a for loop to iterate through each of the branches recursively!
def berry_finder(t):
    """Returns True if t contains a node with the value 'berry'
    and False otherwise.

    >>> scrat = tree('berry')
    >>> berry_finder(scrat)
    True
    >>> sproul = tree('roots', [tree('branch1', [tree('leaf'), tree('berry')]), tree('branch2')])
    >>> berry_finder(sproul)
    True
    >>> numbers = tree(1, [tree(2), tree(3, [tree(4), tree(5)]), tree(6, [tree(7)])])
    >>> berry_finder(numbers)
    False
    >>> t = tree(1, [tree('berry',[tree('not berry')])])
    >>> berry_finder(t)
    True
    """
    "*** YOUR CODE HERE ***"
Use Ok to test your code:
python3 ok -q berry_finder
Q6: Sprout leaves
Define a function
sprout_leaves that takes in a tree,
t, and a list of
leaves,
leaves. It produces a new tree that is identical to
t, but where each
old leaf node has new branches, one for each leaf in
leaves.
For example, say we have the tree
t = tree(1, [tree(2), tree(3, [tree(4)])]):
1 / \ 2 3 | 4
If we call
sprout_leaves(t, [5, 6]), the result is the following tree:
1 / \ 2 3 / \ | 5 6 4 / \ 5 6
def sprout_leaves(t, leaves):
    """Sprout new leaves containing the data in leaves at each leaf in
    the original tree t and return the resulting tree.
    """
    "*** YOUR CODE HERE ***"
Use Ok to test your code:
python3 ok -q sprout_leaves
Q7: Don't violate the abstraction barrier!
Note: this question has no code-writing component. If you implemented berry_finder and sprout_leaves correctly, the tests for berry_finder and sprout_leaves will pass even if you violated the abstraction barrier. To check whether or not you did so, run the following command:
Use Ok to test your code:
python3 ok -q check_abstraction
The check_abstraction function exists only for the doctest: it swaps out the implementation of the tree ADT, so any code that builds a tree with a raw list object or indexes into a tree directly, rather than using the appropriate constructor or selector, will break.
Make sure that your functions pass the tests with both the first and the second implementations of the Tree ADT and that you understand why they should work for both before moving on.
Submit
Make sure to submit this assignment by running:
python3 ok --submit
Optional Questions
Q8: Coordinates
Implement a function coords that takes a function fn, a sequence seq, and a lower and upper bound on the output of the function. coords then returns a list of coordinate pairs (lists) such that:
- Each (x, y) pair is represented as [x, fn(x)]
- The x-coordinates are elements in the sequence
- The result contains only pairs whose y-coordinate is within the upper and lower bounds (inclusive)
See the doctest for examples.
Note: your answer can only be one line long. You should make use of list comprehensions!
def coords(fn, seq, lower, upper):
    """
    >>> seq = [-4, -2, 0, 1, 3]
    >>> fn = lambda x: x**2
    >>> coords(fn, seq, 1, 9)
    [[-2, 4], [1, 1], [3, 9]]
    """
    "*** YOUR CODE HERE ***"
    return ______
Use Ok to test your code:
python3 ok -q coords
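One way to meet the one-line requirement is a comprehension with a filtering condition. Note that this sketch calls fn(x) twice per element, which is exactly the kind of wasted computation worth reflecting on.

```python
def coords(fn, seq, lower, upper):
    """List of [x, fn(x)] pairs whose y-value lies within [lower, upper]."""
    # fn(x) is evaluated twice per element: once to filter, once to build.
    return [[x, fn(x)] for x in seq if lower <= fn(x) <= upper]
```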
Reflect: What are the drawbacks to the one-line answer, in terms of using computer resources?
Q9: Riffle Shuffle
A common way of shuffling cards is known as the riffle shuffle. The shuffle produces a new configuration of cards in which the top card is followed by the middle card, then by the second card, then the card after the middle, and so forth.
Write a list comprehension that riffle shuffles a sequence of items. You can assume the sequence contains an even number of items.
Hint: To write this as a single comprehension, you may find the expression k%2, which evaluates to 0 on even numbers and 1 on odd numbers, to be useful. Consider how you can use the 0 or 1 returned by k%2 to alternately access the beginning and the middle of the list.
_______
Use Ok to test your code:
python3 ok -q riffle
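One possible single-comprehension riffle, using k % 2 to alternate between the two halves and k // 2 as the offset into whichever half is selected. This is just one valid indexing scheme, not the only one.

```python
def riffle(deck):
    """Interleave the first half of deck with the second half."""
    n = len(deck) // 2
    # Even k picks from the front half, odd k from the back half;
    # k // 2 is the position within that half.
    return [deck[k // 2 + (k % 2) * n] for k in range(len(deck))]
```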
Q10: Fun Question!
Q11: build_successors_table
        "*** YOUR CODE HERE ***"
        prev = word
    return table
Use Ok to test your code:
python3 ok -q build_successors_table
class Solution {
private:
    /// This mask is used to check the highest bit in an int number.
    int mask = 1 << 31;

    /// *code* is an int number that encodes the path to the leaf.
    /// *counter* is the height of the full binary tree (upper n - 1 levels).
    bool check(TreeNode* root, int code, int counter) {
        code <<= (32 - counter);
        while (counter > 0) {
            /// When the highest bit is 1, move to the right. Otherwise move to the left.
            if (code & mask)
                root = root->right;
            else
                root = root->left;
            code <<= 1;
            --counter;
        }
        return root != nullptr;
    }

public:
    int countNodes(TreeNode* root) {
        int counter = 0;
        TreeNode *left = root, *right = root;
        while (left && right) {
            left = left->left;
            right = right->right;
            ++counter;
        }
        if (counter == 0) return 0;
        if (!left) return (pow(2, counter) - 1);
        /// leftcode encodes the path to the left-most leaf (all 0s).
        /// rightcode encodes the path to the right-most (possible) leaf (all 1s).
        int leftcode = 0, rightcode = 0;
        for (int i = 0; i < counter; ++i) {
            rightcode <<= 1;
            rightcode += 1;
        }
        while (leftcode != rightcode && leftcode != rightcode - 1) {
            /// Get path to the middle leaf node.
            int mid = leftcode + (rightcode - leftcode) / 2;
            if (check(root, mid, counter))
                leftcode = mid;
            else
                rightcode = mid - 1;
        }
        if (check(root, rightcode, counter))
            return pow(2, counter) + rightcode;
        else
            return pow(2, counter) + leftcode;
    }
};
Yes, this method has limitation - the height of this tree must be smaller than 32 (or we can use long to extend this limitation).
Time complexity of this method is O(log(n) * log(n)):
- The binary search needs O(log(n)) probes to find the last non-null leaf.
- Each probe walks O(height) = O(log(n)) steps down the tree.
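The same binary-search idea can be sketched in Python without the 32-bit encoding trick. The Node class and names below are my own, not LeetCode's definitions.

```python
class Node:
    def __init__(self, left=None, right=None):
        self.left, self.right = left, right

def count_nodes(root):
    """Count the nodes of a complete binary tree in O(log n * log n)."""
    def height(n):
        # In a complete tree the left spine always has maximal length.
        h = 0
        while n:
            n, h = n.left, h + 1
        return h

    h = height(root)
    if h == 0:
        return 0
    if h == 1:
        return 1

    def leaf_exists(idx):
        # Read the bits of idx from most to least significant to walk
        # from the root down to the candidate leaf on the last level.
        n, bit = root, 1 << (h - 2)
        while bit and n:
            n = n.right if idx & bit else n.left
            bit >>= 1
        return n is not None

    lo, hi = 0, (1 << (h - 1)) - 1
    while lo < hi:
        # Binary search for the index of the last existing leaf.
        mid = (lo + hi + 1) // 2
        if leaf_exists(mid):
            lo = mid
        else:
            hi = mid - 1
    # 2^(h-1) - 1 nodes fill the upper levels, plus leaves 0..lo.
    return (1 << (h - 1)) + lo
```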
Django Djumpstart: Build a To-do List in 30 Minutes
Being a web developer isn’t always exciting. There are lots of tedious things you end up doing over and over again: writing code to talk to a database, writing code to handle page templating, writing an administrative interface … Sooner or later you start to wish someone would just come along and wrap up all that repetitive code in nice, integrated, reusable packages, right? Well, today’s your lucky day, because someone finally has.
Say hello to Django. In this article, I’ll be walking through the process of creating a simple application — a to-do list — with Django; this tutorial will only cover a small portion of what Django can do for you, but it’ll be a good start and (hopefully) enough to whet your appetite for more.
An Integrated Web Framework
In a nutshell, Django is a rapid web development framework. Like a number of other frameworks that have been making news recently (for example, Ruby on Rails), Django is designed to take care of tedious and repetitive tasks for you, freeing you up to write interesting code again. However, unlike most of the other frameworks, Django goes a few steps further and tries to provide generic building blocks that you can stick together to accomplish common tasks (like building administrative interfaces or RSS feeds). Everyone who works to develop Django also uses the framework, so anything it can do to make our jobs easier is a candidate for inclusion.
Django started life at the Lawrence Journal-World, a newspaper which serves a small town in northeastern Kansas, growing from the need to develop full-featured web applications to meet newsroom deadlines. The Journal-World released Django under an open-source license in July 2005, after it had been under development, and in use, at the paper for a couple of years.
Django is written in Python, a modern, object-oriented programming language; that means that the applications you write with Django will be in Python, too.
Python: the Five-minute Tour
If you’re used to languages like PHP or Perl, Python might look a little strange to you, but once you get to know it, it’ll be like a breath of fresh air.
There are two big things you’ll notice when you start using Python for the first time.
First, there aren’t any curly brackets marking blocks of code; you’ll more than likely indent your code inside functions, for loops, if statements and such regardless of which language you use, so Python relies on this indentation to tell it where those blocks of code begin and end.
Second, the core Python language is deliberately kept small and lightweight, with functions that might be built into the core of other languages (for example: regular expressions) instead supplied in "modules" that you can import into your programs. Python comes with a solid set of standard modules to cover a programmer’s most common needs.
If you’re new to Python, it’d probably be a good idea to read through the official tutorial to get a feel for the basics of the language. The official documentation for Python also includes a complete listing of all the standard modules and explanations of how to use them, so browse through that list to see what Python can do out-of-the-box.
Getting Django
Since it’s written in Python, Django requires you to have Python installed before you can use it. If you’re on Mac OS X or Linux, you probably already have Python installed, but if you’re on Windows or if (for some strange reason) Python wasn’t preinstalled on your computer, you can download it from python.org. As I write this article, the latest version of Python is 2.4.3, but version 2.5 should be released any day now; there shouldn’t be any incompatibilities between the two versions, but if you’re going to install Python, it’s probably safest to stick with version 2.4.3 until any bugs in 2.5 have been ironed out. The only version restriction that Django imposes is a requirement that you use Python 2.3 or higher.
Once you have Python, the official Django install documentation will step you through the process of downloading and installing Django. As part of this process, you’ll need to download and install a Python database adapter module for the database you’ll be using. I’ll be using MySQL in this article; you can download the adapter (a module called
mysqldb) directly from SourceForge (at the time of publication, this package will not work with Python 2.5; you’ll have to use Python 2.4.3 until the SourceForge project is updated).
Windows users should just grab the .exe version, Mac users can grab a pre-built Mac installer package from the PythonMac site, and Linux users should be able to get a pre-built package from their Linux distributor, or manually build the module using the .tar.gz download from SourceForge.
If you’d prefer to use a different database, the Django installation documentation has links to download the adapter modules for PostgreSQL and SQLite, which are the two other databases Django officially supports (support for Oracle and Microsoft SQL Server is in development, but at this point is still experimental).
Once you’ve got Python installed, you’ll need to get set up Django itself. The official Django documentation provides good instructions for Linux and Mac users, but Windows users will have to adapt the directions slightly:
In order to open the Django tarball, use a program such as WinZip instead of tar.
You'll need to add C:\Python24 to the PATH environment variable. You can do this either through the My Computer Properties dialog, or by entering the following at the command line:
SET PATH=%PATH%;C:\Python24
Of course, this command will only affect the current command line, so you'll have to retype it every time you work with Django. Your best bet is to set the PATH variable once and for all in My Computer Properties.
There’s no equivalent to
sudo in Windows. To set up Django, enter the following commands:
cd path-to-django
setup.py install
After setting up Django, add the django/bin directory to the PATH environment variable. Again, you can do this once through the My Computer Properties dialog, or each time you work with Django by entering the following at the command line:
SET PATH=%PATH%;path-to-django\django\bin
Diving In
Let’s explore Django by writing a simple application. "Getting things done" is a popular mantra these days, so we’ll build an easy tool to help with that: a to-do list manager.
Now that you’ve got Django installed, simply open up a command line, navigate to the directory in which you want to keep your code, and type this command:
django-admin.py startproject gtd
That will start a new Django "project" for you; Django draws a distinction between an "application," which usually provides a specific set of features, and a "project," which is usually a collection of different applications working together on the same web site.
Running the startproject command will automatically create a new directory with the name gtd, and place a few files inside it: a blank file called __init__.py, which tells Python that the directory is a Python module; a Python script called manage.py, which contains some utilities for working with your project; a settings file called settings.py; and a URL configuration file called urls.py.
At this point, you can test that everything was set up properly by typing this command (run this from a command line, inside the "gtd" directory):
manage.py runserver
Django includes a lightweight web server for testing purposes, so you don't have to set up Apache just to work on your project. The command manage.py runserver will start it up. By default, the built-in server runs on port 8000, so you should be able to type http://localhost:8000/ into your browser and see a nice page telling you that Django is working.
To stop the built-in server, press Ctrl+Break on Windows, or Ctrl+C on Mac OS X or Linux.
Now that we know Django is set up properly, we can start working on our to-do list application. Type this command:
manage.py startapp todo
This will create a directory called todo, and automatically drop in a few files for you: __init__.py, again to tell Python that the directory is a Python module; and two files for application code: models.py and views.py.
Writing Models
One of the more tedious parts of web development is laying out all the database tables you’ll need, figuring out which types of columns you’ll want, and working out how to get data into and out of them. Django solves these problems by letting you define "models." Django’s models are just Python classes that inherit from Django’s own base Model class, and they let you specify all the attributes of a particular type of object in code. The Model class knows how to translate its properties into values for storage in the database, so most of the time you don’t have to think about that — you just interact with the objects as you would in any other object-oriented language.
For this application, we’ll need two models: one representing a list, and one representing an item in a list. In database terms, these models will end up being two tables: one for lists, and one for the items in those lists. Each of the list items will have a foreign key that specifies the list to which it belongs.
Let’s start with the model for the list. Open up the
models.py file that Django created for you, and below the line that says "Create your models here," add this code:
class List(models.Model):
title = models.CharField(maxlength=250, unique=True)
def __str__(self):
return self.title
class Meta:
ordering = ['title']
class Admin:
pass
In a moment, when we tell Django to create our database tables, the above will translate into a table called list, with two columns:
- An integer primary key column called id (Django generates this automatically for you; you don't have to specify it explicitly).
- A 250-character-wide VARCHAR column called title. Additionally, a UNIQUE constraint will be created on this column, ensuring that we can't create two to-do lists with the same title.
You’ll notice there’s also a method in the class called
__str__. This method is just like
toString in Java or .NET — whenever Python needs to show a string representation of an object, it calls that object’s
__str__ method. The one we’ve defined here will return the to-do list’s title, which is probably the most useful way to represent it as a string.
The class Meta part allows us to set options that will tell Django how we want the model to behave. We can set a lot of options here, but for now we'll just let Django know we want our lists to be ordered by their titles. When Django queries the database for to-do lists, it will order them by the title column.
The class Admin bit allows us to set options for Django's automatic administrative interface, which we'll see later. The pass keyword tells Django to just use its defaults.
Now let’s write the model for the items in the to-do lists. It looks like this:
import datetime
PRIORITY_CHOICES = (
(1, 'Low'),
(2, 'Normal'),
(3, 'High'),
)
class Item(models.Model):
title = models.CharField(maxlength=250)
created_date = models.DateTimeField(default=datetime.datetime.now)
priority = models.IntegerField(choices=PRIORITY_CHOICES, default=2)
completed = models.BooleanField(default=False)
todo_list = models.ForeignKey(List)
def __str__(self):
return self.title
class Meta:
ordering = ['-priority', 'title']
class Admin:
pass
This model is a little more complicated, but should be easy enough to understand. There are a couple of neat tricks we’re using here that deserve a quick mention, though.
The item’s priority will be stored in the database as an integer, but using the
choices argument and passing it the
PRIORITY_CHOICES list tells Django to only allow the values we’ve specified in
PRIORITY_CHOICES. The
PRIORITY_CHOICES list also lets us specify human-readable names that correspond to each value, and Django will take advantage of those for displaying HTML forms.
created_date will be a
DATETIME column in the database, and
datetime.datetime.now is a standard Python function which, as its name implies, returns the current date and time. To use this function, we need to include the line import datetime before the model’s definition.
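One subtlety worth noting: the model passes the function object datetime.datetime.now (no parentheses), not its result, so the timestamp is computed each time a row is created rather than once at import time. A plain-Python sketch of the difference follows; make_item is a hypothetical stand-in for what Django does with a callable default.

```python
import datetime

# Passing the function object defers the call until it is actually needed:
default = datetime.datetime.now      # a callable, evaluated per-save
frozen = datetime.datetime.now()     # a fixed value, evaluated once, right now

def make_item(created=None):
    # Hypothetical stand-in: call the default only when no value is given.
    return {'created': default() if created is None else created}
```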
We’ve specified that list items should be ordered by two columns:
priority and
title. The
- in front of
priority tells Django to use descending order for the priority column, so Django will include
ORDER BY priority DESC title ASC in its queries whenever it deals with list items.
Now that our models are completed, it’s time to get Django to create database tables based on them. In Django parlance, this is called installing the models.
Installing the Models
The first step in installing our models is to tell Django which database we're using, and that we want the models we just created to be installed. To do this, open up the settings.py file in your project directory, and change these settings.
DATABASE_ENGINE should be changed to whatever type of database you're going to use. As I mentioned earlier, I'm using MySQL as I write this, so I'll change the setting like so:
DATABASE_ENGINE = "mysql"
DATABASE_NAME should be changed to the name of the actual database you're using:
DATABASE_NAME = "djangotest"
Make sure that this database exists! If you choose to use SQLite, Django can automatically create a database file for you, but that’s not possible with MySQL and PostgreSQL.
DATABASE_USER and DATABASE_PASSWORD should be changed to the username and password of a user who has full access to the database. For example:
DATABASE_USER = 'django'
DATABASE_PASSWORD = 'swordfish'
If you’re using SQLite, these settings don’t need to be filled in, as SQLite doesn’t have a user/password system.
If your MySQL database is hosted on a separate machine, you'll have to set DATABASE_HOST. If MySQL is running on the same server, you can leave this empty.
If MySQL is not set up to listen to its default port, you'll need to set DATABASE_PORT to MySQL's port number.
Down toward the bottom of the settings file is a list called INSTALLED_APPS, which lists all the applications you're using. By default, several of the applications bundled with Django will be listed here. Add gtd.todo to the list like so:
INSTALLED_APPS = (
'django.contrib.auth',
'django.contrib.contenttypes',
'django.contrib.sessions',
'django.contrib.sites',
'gtd.todo',
)
Once these settings are changed and saved, type this command in the gtd directory:
manage.py syncdb
You’ll see some output scroll past as Django sets up the database tables. It will also prompt you to create a "superuser"; Django’s authentication system is installed by default, and creating a superuser account at this point means you’ll be able to log in to Django’s automatic administrative interface when we set that up. Go ahead and create a superuser now.
Automatic Administration
At this point, we could write our own code to interact with the models we’ve set up, but Django provides a free, built-in administrative application that lets us start playing with the data immediately. To use it, you only need to do a couple of things:
- In the settings.py file, add django.contrib.admin to the list INSTALLED_APPS.
- In the project's urls.py file, locate the line that says "uncomment this for admin", then remove the # from the start of the following line of Python code to uncomment it.
Run manage.py syncdb again, and the administrative interface will be installed.
Now start up the testing server again (by running manage.py runserver) and load http://localhost:8000/admin/ in your browser, which should show you a login screen. Log in with the username and password you specified for your superuser, and you'll find yourself in Django's admin interface.
The main page of the admin system shows a list of installed models, classified by the application of which they’re part. If you click on one of the models, you’ll see a list of objects for that model. From this page, you can also change existing objects or add new ones.
Let’s create a to-do list. On the main admin page, click the "Add" link next to "Lists". Fill in any value you like for the list’s title, then save it.
Go back to the main admin page, click the "Add" link next to "Items," and fill in the details for your list’s first item. Each item has to be related to a to-do list, and Django will automatically create a drop-down menu that shows all the to-do lists that have been created so far.
This is a pretty nice interface to have — especially considering how little work was involved in setting it up — but this is just the default admin interface that Django provides. There are a ton of options (all covered in Django’s official documentation) you that can tweak in order to have the admin interface behave the way you want, and you never have to "rebuild" or re-generate any files to use them — the admin interface is generated on the fly, and changes you make to the configuration can take effect immediately.
If you’d like to learn more about customizing this admin interface, check out the official documentation — this documentation includes details of how you can enable a very nice edit-in-place feature, which you could use to edit many list items in a single page.
Delving into Views
Now that we have a nice little admin interface, let’s talk about views. As nice as the admin interface is, you’re probably always going to need at least a couple of additional pages to get your data to appear exactly as you want it to. Views are the functions that generate these pages in your application.
For example, one thing that would be nice to have in this application is a page that shows all of our to-do lists, along with the percentage of items in those lists that have been completed. It would be sort of a "status report" that we could check in on every once in a while. So let’s write a view that gives us this status report.
Django views are, for the most part, just ordinary Python functions. The URL configuration file (urls.py) decides which URL goes to which view; Django then calls the correct view function, passing it the incoming HTTP request as an argument. Here's the code for our "status report" view; it should go into the views.py file in the todo directory:
from django.shortcuts import render_to_response
from gtd.todo.models import List
def status_report(request):
todo_listing = []
for todo_list in List.objects.all():
todo_dict = {}
todo_dict['list_object'] = todo_list
todo_dict['item_count'] = todo_list.item_set.count()
todo_dict['items_complete'] = todo_list.item_set.filter(completed=True).count()
todo_dict['percent_complete'] = int(float(todo_dict['items_complete']) / todo_dict['item_count'] * 100)
todo_listing.append(todo_dict)
return render_to_response('status_report.html', { 'todo_listing': todo_listing })
As Python functions go, this one’s pretty simple, but it does show off a few of the nice things that Django can do:
- List.objects.all, as you might guess, is a method that returns all of our to-do lists so that we can loop through them. Django will figure out the correct SQL and execute it for you automatically.
- Each to-do list has an item_set property, which represents the list's items. We can use the item_set.all method to get all of the items in the list, or we could use the item_set.filter method to get only a certain subset of the items in the list. We could also use List.objects.filter to get only the to-do lists that match a certain set of criteria.
- The function render_to_response handles the business of returning an actual web page. It takes the name of a template to use (more on that in a moment), and a dictionary ("dictionary" is Python's name for an associative array) of variables and values to which the template should have access, and takes care of rendering the template and sending an HTTP response.
The actual logic involved here isn’t very complex; we’re building a list called
todo_listing, and each item in it will be a dictionary that contains information about one of the to-do lists. The only really complex part of that is figuring out the percentage of items completed. You’ll notice that it does a little bit of typecasting. That’s needed because, by default, Python does "integer division" when both of the numbers are integers — integer division always returns an integer. But we want a decimal number that we can convert into a percentage, so I’ve explicitly coerced the number of completed items to a floating-point number.
Writing the Template
We’ve told the view to use a template called
status_report.html, but we haven’t created that yet. Luckily, creating templates for Django is incredibly easy. In the todo directory, create a new directory called
templates, and in it, create the file
status_report.html. Here’s the code for the template:
<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Strict//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-strict.dtd">
<html xmlns="http://www.w3.org/1999/xhtml">
<head>
<meta http-equiv="Content-Type" content="text/html; charset=utf-8" />
<title>To-do List Status Report</title>
</head>
<body>
<h1>To-do list status report</h1>
{% for list_dict in todo_listing %}
<h2>{{ list_dict.list_object.title }}</h2>
<ul>
<li>Number of items: {{ list_dict.item_count }}</li>
<li>Number completed: {{ list_dict.items_complete }} ({{ list_dict.percent_complete }}%)</li>
</ul>
{% endfor %}
</body>
</html>
For the most part, Django templates look like HTML with just a couple extra things mixed in. There are "template tags," which let you perform some rudimentary logic in the template, and there are variables, which will automatically be filled with values passed in to render_to_response. Tags start with {% and end with %}, while variables start with {{ and end with }}.
In this particular template, we're using two tags: {% for list_dict in todo_listing %} and {% endfor %}. These tags tell the Django template system that we want to loop through each item in this list and do something with it. When we're done with the code in the loop, we use the {% endfor %} tag to say so. Within the loop, we retrieve the values we set in the view in order to display the to-do list's title, the number of items in the list, and so on.
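To make the substitution idea concrete, here is a toy renderer for the {{ variable }} syntax only. This is emphatically not Django's template engine (which also handles dotted lookups, tags, and filters); it is just an illustration of variables being filled in from a context dictionary.

```python
import re

def render(template, context):
    """Toy stand-in for Django variable substitution: replaces {{ name }}
    with str(context[name]). Unknown names become empty strings."""
    def sub(match):
        return str(context.get(match.group(1), ''))
    return re.sub(r'\{\{\s*(\w+)\s*\}\}', sub, template)
```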
Making it Work
Now that we have our view and template, we just have to give Django a couple pieces of information and it'll all work! First, we need to tell Django where we're storing the templates for our application; this is controlled by the TEMPLATE_DIRS setting in the settings.py file. Just go in and add a line with the path to the location at which you put the "status_report.html" template. In my case, I added this:
'/Users/jbennett/django-projects/gtd/todo/templates',
It’s important to put the comma on the end of this line.
Once that’s done, we just need to set up a URL for our view, which we do in the
urls.py file. Immediately below the line you un-commented earlier for the admin interface’s URLs, add this line:
(r'^report/$', 'gtd.todo.views.status_report'),
Again, the comma is important.
Now, start up the testing server (manage.py runserver), and visit http://localhost:8000/report/. You should see a page listing each of your to-do lists, with the item counts and completion percentage for each.
Django’s URL configuration is pretty simple; each line in the
urls.py file has at least two things in it:
- a regular expression that specifies the URL or URLs to match
- the view function to use on URLs matching that regular expression, or a call to
include, which can pull in other lists of URLs (the admin interface, for example, has its own
urls.pyfile, and just uses include to tell Django to use that file for any URL that starts with
admin)
Where to Go from Here
So far we’ve written around fifty or sixty lines of code, and we’ve got the beginnings of a pretty nice little to-do application:
- We have database tables set up to store the to-do lists and their items.
- We have a nice administrative interface for creating and managing the lists.
- We have a quick "status report" page that tells us how we’re progressing on each list’s items.
That’s not bad at all, but it barely scratches the surface of what Django can do; there’s a ton of features rolled into Django already, and more are under development. Here are some of the highlights:
- a full-featured database API
- a built-in authentication and permissions system for user accounts
- "Generic views," which save you from having to write code for common things like date-based content archives
- a built-in cache system to help you squeeze every possible ounce of performance out of Django
- an internationalization system to make it easy to translate your application’s interface into other languages
- easy, automatic generation of RSS feeds and Google sitemaps
- easy serialization of data to XML or JSON, for easy use with AJAX
- plus a whole lot more
If you’d like to learn more about Django, swing by the official site, peruse the documentation (which includes a tutorial that covers a lot of useful pieces of Django), and feel free to ask questions on the Django-users mailing list or in our IRC channel (#django on irc.freenode.net).
- Roderick Mackenzie | http://www.sitepoint.com/build-to-do-list-30-minutes/ | CC-MAIN-2015-06 | refinedweb | 4,356 | 60.55 |
CC'ing pkg-octave-devel since this impacts Octave packaging as well.

Bradley M. Froehle wrote:
> The fix for this bug provided in 1.8.9-1~exp2 has caused me a good deal of
> headache today.
>
> For me, the issue is triggered by some C++ code containing:
>
>   #include <hdf5.h>  // sets OMPI_SKIP_MPICXX 1
>   #include <mpi.h>   // MPI C++ namespace now is NOT available
>
> Note that even trying to unset OMPI_SKIP_MPICXX before including mpi.h
> won't work, because OMPI_MPI_H will still be defined and the mpi.h header
> won't be processed again.
If you switch the order so mpi.h is included first does that fix it? I'm not dismissing, just making sure I understand the problem.

> I respectfully request that this patch be backed out until a better
> solution can be worked out. Totally disabling the MPI C++ bindings when
> including HDF5 is not an acceptable side effect.
>
> I've looked into this bug a bit today and I'd suggest that instead the
> `mkoctfile-mpi.diff` patch in src:octave (from bug #598227) be modified to
> be something more like:
>
> -: ${XTRA_CXXFLAGS=%OCTAVE_CONF_XTRA_CXXFLAGS%}
> +: ${XTRA_CXXFLAGS=-I/usr/include/mpi -DOMPI_SKIP_MPICXX=1 -DMPICH_SKIP_MPICXX=1 %OCTAVE_CONF_XTRA_CXXFLAGS%}
>
> That would contain the bug fix to Octave (which is the only place where the
> bug seems to have surfaced).

Reverting this patch and moving the fix into Octave would be one acceptable solution that should have the same effect as far as Octave is concerned. IMHO this is still an HDF5 bug, see below.

> Normally this is not an issue --- a developer would use mpicc or mpicxx to
> do the compilation and linking and this would automatically ensure that
> the correct mpi libraries are used. Octave is broken because it is using
> g++ and hacking in the MPI include directory without following it up with
> the necessary link flags.

Octave is not broken, it is simply using HDF5 in a C++ source file and does not care about or use MPI. However we do want to support co-installation for users that do want both Octave and MPI. Octave shouldn't have to care which flavor of HDF5 is installed. Consider these simple examples:

$ cat hdf5test.c
#include <hdf5.h>   // C source file follows
main() {}

$ cat hdf5test.cc
#include <hdf5.h>   // C++ source file follows
main() {}

$ gcc -o hdf5test hdf5test.c -lhdf5
$ g++ -o hdf5test hdf5test.cc -lhdf5

Works if libhdf5-7 and libhdf5-dev are installed. If HDF5 were providing a consistent interface this would also work with libhdf5-openmpi-7 and libhdf5-openmpi-dev installed.
As it stands now, however, I need to compile with (assuming the patch is reverted)

$ gcc -I/usr/include/mpi -o hdf5test hdf5test.c -lhdf5
$ g++ -I/usr/include/mpi -DOMPI_SKIP_MPICXX -o hdf5test hdf5test.cc -lhdf5

or

$ g++ -I/usr/include/mpi -o hdf5test hdf5test.cc -lhdf5 -lmpi++ -lmpi

Not ideal and could certainly affect users other than Octave.

-- mike

_______________________________________________
Pkg-grass-devel mailing list
Pkg-grass-devel@lists.alioth.debian.org
Right, I'm back again to pick your brains (which is normally JavaPF's job lol ;))
Basically I've got some code below which removes all the whitespace from a text file.
Originally it read through the whole file line by line, but this can take forever as some of the files I'm dealing with can be quite large.
What I want the program to do is just remove the whitespace from the XML tag headers and not the whole file. For instance, the tags appear as < t a g >< / t a g > instead of the normal way <tag></tag>, which is a problem when I'm trying to use DOMParse to get the XML from the file.
As you can see from the code, I added in the line
Code :
if (strLine.contains("< M D R - D V D >"))
This was just to see if the program would pick up that particular tag, which it did; however, my file has loads of different tags.
My question is: is there any way of modifying the code to make the program pull all the tag names with a single line of code, without having to enter every single tag name?
I have a few other questions, but I'll get this one out of the way first.
Code :
import java.util.regex.*;
import java.io.*;

public class regularexpressions {

    public static void main(String[] args) throws IOException {
        BufferedReader bf = new BufferedReader(new InputStreamReader(System.in));
        System.out.print("Enter file name: ");
        String filename = bf.readLine();
        File file = new File(filename);
        if (!filename.endsWith(".txt")) {
            System.out.println("Usage: This is not a text file!");
            System.exit(0);
        } else if (!file.exists()) {
            System.out.println("File not found!");
            System.exit(0);
        }
        FileInputStream fstream = new FileInputStream(filename);
        DataInputStream in = new DataInputStream(fstream);
        BufferedReader br = new BufferedReader(new InputStreamReader(in));
        Pattern p;
        Matcher m;
        String afterReplace = "";
        String strLine;
        String inputText = "";
        while ((strLine = br.readLine()) != null) {
            if (strLine.contains("< M D R - D V D >")) {
                System.out.println(strLine);
                inputText = strLine;
                p = Pattern.compile("\\s+");
                m = p.matcher(inputText);
                System.out.println(afterReplace);
                afterReplace = afterReplace + m.replaceAll("") + "\r\n";
            }
        }
        FileWriter fstream1 = new FileWriter(filename);
        BufferedWriter out = new BufferedWriter(fstream1);
        out.write(afterReplace);
        in.close();
        out.close();
    }
}
Thanks
John
Escaped character     Description
ordinary characters   Characters other than . $ ^ { [ ( | ) * + ? \ match themselves.
\a                    Matches a bell (alarm) \u0007.
\b                    Matches a backspace \u0008 if in a [] character class; otherwise, see the note following this table.
\t                    Matches a tab \u0009.
\r                    Matches a carriage return \u000D. Note that \r is not equivalent to the newline character, \n.
\cC                   Matches an ASCII control character; for example, \cC is control-C.
\u0020                Matches a Unicode character using hexadecimal representation (exactly four digits).
\                     When followed by a character that is not recognized as an escaped character, matches that character. For example, \* is the same as \x2A.
The escaped character \b is a special case. In a regular expression, \b denotes a word boundary (between \w and \W characters) except within a [] character class, where \b refers to the backspace character. In a replacement pattern, \b always denotes a backspace.
string c = "hello \t world"; // hello world
string d = @"hello \t world"; // hello \t world
Drop the at (@) from your code and you should be fine.
using System.Text.RegularExpressions;

class test {
    static void Main() {
        System.Console.WriteLine(Regex.Replace("abcDEFghi", "(DEF)", "\t"));
    }
}
Dropping the @ would be fine if I wanted a tab character embedded in the string. However, this documentation suggests that the '\' character followed by the 't' character inside a replacement pattern has special meaning, but it doesn't treat it that way.
MASTERTAG DEVELOPER GUIDE
TABLE OF CONTENTS

1 Introduction
  1.1 What is the zanox MasterTag?
  1.2 What is the zanox page type?
2 Create a MasterTag application in the zanox Application Store
  2.1 Basic application data
  2.2 Application settings
    2.2.1 What are application settings?
    2.2.2 Naming conventions for application settings
    2.2.3 Create an application setting for your MasterTag application
  2.3 "Hello World" sample application
3 Develop an application
  3.1 Create a basic code template
    3.1.1 Coding requirements
    3.1.2 Local variables
    3.1.3 Page type settings
  3.2 Applications and external data
    3.2.1 Data retrieval methods
    3.2.2 Reference of automatically transferred data
  3.3 zanox helper functions
    3.3.1 How do I use helper functions?
    3.3.2 Reference of zanox helper functions
  3.4 Things you shouldn't do in your application code
    3.4.1 Requirement 1: Do not use external JavaScript libraries
    3.4.2 Requirement 2: Do not use global JavaScript variables and/or functions
4 Test an application
  4.1 Create an empty test page
  4.2 Simulate application settings in your test environment
5 Release an application
  5.1 Optimise the application source code
  5.2 Application approval
  5.3 MasterTag approval checklist
6 Tips for advanced users
  6.1 Develop an application without generating a test code
    6.1.1 Speed up application development
    6.1.2 Create an appropriate test page

ZANOX.de AG MASTERTAG DEVELOPER GUIDE LAST UPDATED 06/2012
1 Introduction

1.1 What is the zanox MasterTag?

In the following sections of this document you will learn how you can develop applications for the zanox MasterTag and provide them to zanox advertisers.

1.2 What is the zanox page type?

Each of the 7 script containers can only be used on specific pages within the advertiser website. These pages are called zanox page types. The following page types exist:

Page type   Description
home        This page type refers to the homepage of the advertiser website, shop, etc.
product     This page type refers to all pages where the user can find details on a selected product.
search      This page type refers to a page where the user can perform product searches and search results are listed.
category    This page type refers to all pages which list product categories.
basket      This page type refers to the page where the shopping basket is displayed to the user prior to purchase.
checkout    This page type refers to the sales confirmation page which is displayed to the user after purchase.
generic     This page type refers to any other pages on the advertiser's website.

If you develop a MasterTag application, it must be able to work on all of the above pages. The zanox page type is passed to the application as a setting at runtime. The first step you should take as a developer is to think about which page types your application must support. For instance, if an application has to retrieve data on the product page (e.g. the ID of the displayed product) and on the checkout page (e.g. check the product the user bought against the product the user viewed on the product page), it has to support the two page types product and checkout.
2 Create a MasterTag application in the zanox Application Store

2.1 Basic application data

Developers create and distribute MasterTag applications via the zanox Application Store. Note that you need to register with zanox as a publisher to be able to log in to the zanox Application Store. To create a new MasterTag application in the zanox Application Store, proceed as follows:

Step 1: Go to the zanox Application Store at
Step 2: Click on the button Connect with zanox and enter your login credentials (publisher account login name and password).
Step 3: Click on the tab Developers in the upper menu bar. The developer area of the zanox Application Store opens. The tab Applications for Sales shows a list of all your applications you may want to make publicly available in the zanox Application Store. If you have not yet started developing, the list will be empty.

Figure 1: Your applications in the zanox Application Store

Step 4: To create a new application, click on the button New application. A new menu bar opens underneath the list of applications.
Step 5: Click on the tab General and enter the following basic application data:
Field                 Description
Application           Name of your application
Application type      For MasterTag applications select the application type Widget
Description           Short description of your application
Detailed description  More detailed description of your application
Format                Size of your application. Select Free Format if the application does not display data (for example a banner) on a webpage.
Widget width          Application width in pixels. Only required if Free Format was selected. Select 0 for applications that do not display anything on a webpage.
Widget height         Application height in pixels. See above.
Programming language  Programming language of your application (usually JavaScript)
Search tags           Enter the string MasterTag followed by a comma-separated list of zanox page types your application supports, for example: MasterTag,home,product,search,category,basket,checkout,generic
Video URL             URL with a tutorial video for your application
Documentation         URL to external documentation of your application
Website URL           URL to your company's website
Version               Version of your application
Figure 2: Basic information about your application

Step 6: Optionally click on the tab Application T & C to add some terms and conditions. These terms and conditions will be displayed to all users that get your application from the zanox Application Store.

Figure 3: Terms and conditions of your application
Step 7: Finally, define the user group for your application. MasterTag applications are always developed for advertisers, so click on the tab Categories and select Applications for Advertisers.

Figure 4: Application target group

Step 8: Click the button Save to save your basic application data. The new application and its basic application data will be saved in your developer area of the zanox Application Store.

2.2 Application settings

2.2.1 What are application settings?

Application settings are a kind of configuration parameter you may want to use for your MasterTag application. The values of the application settings are passed to your application at runtime and can be either static or computed dynamically by the zanox loader script (zanox.js script). Application settings are used by your MasterTag application to retrieve information from different page types of the advertiser's website. Imagine a scenario where your application needs to retrieve the following information:

- On the product page: ID of the displayed product
- On the checkout page: Total amount of money the user spent on a sale

In order to retrieve this information you will have to create two application settings. The next sections will show you how to create those settings.

2.2.2 Naming conventions for application settings

If you create a new application setting, you have to follow a special naming convention. This is necessary so that advertisers can understand which application settings retrieve which information from which page types. Setting names should look like this:

<prefix>_<name>

Use camelCase for application setting names (e.g. mySuperSpecialSetting). The prefix must be one of the following strings (which match the names of the zanox page types, see section 1.2 What is the zanox page type?):
Prefix     Page the setting retrieves information from         Example
home       Homepage of the advertiser's website                home_productId
product    Product page of the advertiser's website            product_productId
search     Search results page of the advertiser's website     search_searchString
category   Category page of the advertiser's website           category_categoryName
basket     Basket page of the advertiser's website             basket_products
checkout   Checkout page of the advertiser's website           checkout_totalAmount

2.2.3 Create an application setting for your MasterTag application

To create a new application setting for your MasterTag application, proceed as follows:

Step 1: Click on the MasterTag application from your list of applications for which you want to create a new application setting.
Step 2: Click on the tab Widget Code, then click on the button New setting. A dialogue window opens.

Figure 5: Create an application setting for your MasterTag application

Step 3: Enter the name and the default value for the new application setting into the dialogue window.

Name is the setting key which you use in your application source code to retrieve the setting value. For instance, a valid name could be "product_productId". For more information on naming conventions see section 2.2.2 Naming conventions for application settings.

Value is the default value which is passed to your application if no other value was specified for this setting during the configuration of a specific MasterTag.

You can also define your setting as mandatory. Advertisers must provide the values for mandatory settings in their live environments in order for the MasterTag application to function properly.

Figure 6: Create an application setting
Step 4: Click the button Create to create the new application setting. The application setting is displayed in your list of settings available for the MasterTag application.

Figure 7: Your application settings

2.3 "Hello World" sample application

Now you can start writing your application code. This is the source code of a very simple hello world sample application:

<script type="text/javascript">
zanox.setCallback("INSERT YOUR APPLICATION ID HERE", function(data) {
    var applicationId = data.app.id;
    var message = "Hello World! I'm application " + applicationId;
    alert(message);
});
</script>

After loading, the sample application displays a message which contains the application ID. In order for the sample application to work you need to perform the following steps:

Step 1: Replace the string "INSERT YOUR APPLICATION ID HERE" with your actual application ID. You can find your application ID on the tab zanox Keys (see section 3.1 Create a basic code template).
Step 2: Click on the tab Widget Code and copy and paste the code into the appropriate field in the zanox Application Store.
Step 3: Click the button Save to save the application code.

Figure 8: Sample application code
3 Develop an application

3.1 Create a basic code template

The most simple valid zanox MasterTag application would look like this:

<script type="text/javascript">
zanox.setCallback("INSERT YOUR APPLICATION ID HERE", function(data) {
    // enter your code here
});
</script>

"INSERT YOUR APPLICATION ID HERE" has to be replaced with your actual zanox application ID. You can find your application ID on the tab zanox Keys.

Figure 9: Access your application ID

3.1.1 Coding requirements

As an application developer you are free to do what you want in your source code. However, you have to consider the following coding requirement: you have to call the JavaScript function "zanox.setCallback" once in your source code and pass the following parameters:

- Your zanox application ID
- The JavaScript callback function which will act as an entry point for your code.

The JavaScript callback function will automatically be called when a HTML page with a MasterTag and your associated MasterTag application has been loaded. The callback function must accept one parameter (called "data" in the example above). The value of this parameter will be a JavaScript object at runtime which contains some metadata about your application as well as all the settings you have defined for your application with their actual values. You can find a full reference of all data passed to this function in section 3.2.2 Reference of automatically transferred data.
3.1.2 Local variables

In your application code you often access the settings. To simplify this step you can create a local variable as shown below:

<script type="text/javascript">
zanox.setCallback("INSERT HERE YOUR APPLICATION ID", function(data) {
    var settings = data.app.settings; // save a shortcut
    alert(settings["my_setting_key"]); // just for test
});
</script>

3.1.3 Page type settings

For a MasterTag application to function properly, your application code must be able to identify the page type it is currently running on (see section 1.2 What is the zanox page type? for more information). Your application must support a setting called "pagetype" which will be used for that purpose. Below you will find an example of how to use the pagetype setting:

<script type="text/javascript">
zanox.setCallback("INSERT HERE YOUR APPLICATION ID", function(data) {
    var settings = data.app.settings;
    var pagetype = settings["pagetype"]; // get pagetype
    if (pagetype == null) return;        // just to be sure
    switch (pagetype) {
        case "basket":
            // app runs on basket page, do the following...
            break;
        case "category":
            // app runs on category page, do the following...
            break;
        case "checkout":
            // app runs on checkout page, do the following...
            break;
        case "generic":
            // app runs on any other page, do the following...
            break;
        case "home":
            // app runs on home page, do the following...
            break;
        case "product":
            // app runs on product page, do the following...
            break;
        case "search":
            // app runs on search page, do the following...
            break;
    }
});
</script>

If you want to use CSS to style your application or use external JavaScript files, just add HTML <style> or <script src="..."> tags to your source code.

3.2 Applications and external data

3.2.1 Data retrieval methods

If your application needs external data to function properly, you can use the following methods to retrieve the required data:

- Access the browser DOM directly and parse the HTML document (only works if your application does not run inside an iframe).
- Use the provided metadata that is automatically passed to your application.
- Use the provided settings that are automatically passed to your application.
3.2.2 Reference of automatically transferred data

The following JavaScript object is automatically passed to the application's callback function you specify in the zanox.setCallback function call:

var data = {
    "mediaslot" : {
        "id" : "MASTERTAG_ID",
        "height" : 0,
        "width" : 0
    },
    "adspace" : {
        "id" : 123                   // id of ad space associated to mastertag
    },
    "app" : {
        "id" : "APPID",              // id of your app
        "height" : 500,              // height of your app (same as in Application Store)
        "width" : 500,               // width of your app (same as in Application Store)
        "container" : {
            "id" : "mycontainer"     // container id of your app (see below)
        },
        "settings" : {               // "list" of all defined settings (keys + values)
            "key1" : "value1",
            "key2" : "value2"
        },
        "connectids" : {
            "developer" : "DEVELOPERCONNECTID", // connect ID of app developer
            "publisher" : "PUBLISHERCONNECTID"  // connect ID customer account
        },
        "rendermode" : "iframe"      // app is placed in an "iframe" or "inline" in page
    }
};

The application container ID has a special meaning. It is a string which contains the ID of a HTML element that surrounds your application inside the browser DOM. Usually, this would be a <div> element. To add content to the webpage it runs in, you need to modify the browser DOM (insert or manipulate some DOM nodes) accordingly. But first, you have to find the proper location where to insert your content. To add content (e.g. a banner) at the exact same position as the MasterTag code, you have to use the provided container ID to find the corresponding DOM node. Just call the JavaScript function document.getElementById("PROVIDED_CONTAINER_ID") to identify the DOM node. This works even if there is more than one MasterTag on the same page with the same associated application. In this case, your application's callback function is called multiple times for each MasterTag and each time provides a different container ID. Find some examples below:

To access the application container <div> use:

var containerNode = document.getElementById( data.app.container.id );

To get the ID of the ad space which is associated to the MasterTag your application was assigned to use:

var adspaceId = data.adspace.id;

To access a specific setting value use:

var pagetype = data.app.settings["pagetype"];

To iterate through all available settings use:

for (var key in data.app.settings) {
    var value = data.app.settings[key];
    alert(key + ":" + value);
}

3.3 zanox helper functions

3.3.1 How do I use helper functions?

You can access a zanox helper function via the global zanox JavaScript object. It's available in any page that includes the loader script (zanox.js script) regardless of whether the application is loaded inside an iframe or not. See the next section for a complete reference of all available functions.
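The container lookup described above can be wrapped in a small helper. The sketch below is not from the guide; the function name renderBanner and the banner markup are made up, and it only illustrates the pattern of resolving data.app.container.id to a DOM node before inserting content:

```javascript
// Hypothetical helper: looks up the app container by the ID the loader
// provides (data.app.container.id) and appends a banner node to it.
function renderBanner(containerId, html) {
    var containerNode = document.getElementById(containerId);
    if (!containerNode) {
        return false; // container not on this page, fail quietly
    }
    var banner = document.createElement("div");
    banner.innerHTML = html;
    containerNode.appendChild(banner);
    return true;
}
```

Inside the callback you would call renderBanner(data.app.container.id, "..."); returning false instead of throwing keeps the application from disturbing the host page when the container is missing.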
3.3.2 Reference of zanox helper functions

Function: version()
Description: Use this function to retrieve the current version of the zanox.js script.
Parameters: None
Returns: A string with the version number.
Example:
alert( zanox.version() );

Function: load( url, onComplete )
Description: This function loads a JavaScript file from the provided URL, executes its content and calls the provided onComplete JavaScript function when finished. This function is the most useful of all helper functions. You can use it in your source code to load external JavaScript files at runtime as well as to make zanox API REST calls (which could also be treated as external JavaScript files).
Parameters:
  url - String that contains the URL to load an external JavaScript file
  onComplete - JavaScript function that is called after executing the file content
Returns: Nothing
Example:
zanox.load( "" );

Function: loadAll( urls, onComplete )
Description: This function loads all JavaScript files from the provided URLs, executes their content and calls the provided onComplete JavaScript function when finished.
Parameters:
  urls - String array of URLs to load external JavaScript files
  onComplete - JavaScript function that is called after executing the file content
Returns: Nothing
Example:
var myscripts = [ "", "" ];
zanox.loadAll( myscripts );

Function: setCallback( appId, callback )
Description: This function must be called once in your application. It tells the zanox loader script (zanox.js script) to call the provided callback function after the application with the given application ID has been loaded. The provided callback function is used as an entry point into your application code.
Parameters:
  appId - Application ID from the zanox Application Store
  callback - JavaScript function that is called after the app has been loaded
Returns: Nothing
Example:
zanox.setCallback("APP_ID", function(data) { /* */ } );

Function: setInnerHTML( node, content )
Description: Use this function to manipulate the content of the provided DOM node. Basically, the function sets its innerHTML value to the passed content. The function takes care of included script tags. If the content contains a <script src="..."> tag, the referenced external JavaScript file is loaded and executed. If the content contains inline scripts (via <script> tags), they are executed too.
Parameters:
  node - DOM node whose content you want to modify
  content - New content (a HTML string) for the provided node
Returns: Nothing
Example:
zanox.setInnerHTML( document.body, "<div>Hello World!</div>" );

3.4 Things you shouldn't do in your application code

If you have configured your application to run directly inside the parent page (the Load in iframe check box is unchecked in the zanox Application Store), you should take special care while coding your application because it might run on many different websites in different environments. Your application must not influence the existing functionality of a website. Hence, it is strongly recommended to pay attention to the coding requirements below.

Please note: If your application runs in an iframe instead, you can freely design the application code as the application will be isolated from the website it runs in.

3.4.1 Requirement 1: Do not use external JavaScript libraries

Avoid using external JavaScript libraries like jQuery, Prototype, etc. Often, those libraries do not work together or unpredictably influence each other if they are used in parallel on the same HTML page. JavaScript errors might occur in your application or even in the parent page. As a developer you never know which other JavaScript frameworks are used by the webpage your application runs in. Hence, avoid the use of JavaScript frameworks where possible and use only plain JavaScript functionality instead.

3.4.2 Requirement 2: Do not use global JavaScript variables and/or functions

Avoid using global JavaScript variables and/or functions. Using global JavaScript variables and/or functions in your source code may lead to unpredictable behaviour of your application, as you never know if the HTML page which hosts your application also uses the same variables or functions. If both share the same data, everything might go well, but in the worst case, one or both of them will not function properly anymore.
Therefore only use local variables or functions. Use the JavaScript "var" statement every time you declare a variable. If you leave it out, the variable becomes global and might cause problems later. In terms of functions, we recommend creating one global JavaScript object which acts as a container for all your functions (sometimes also called a namespace object). Assign a unique name to the global JavaScript object to avoid conflicts. If you use the application code template defined earlier (see section 3.1 Create a basic code template), declare your functions with the "function" keyword. These functions will be scoped to the anonymous callback function we pass to the "zanox.setCallback" method.
Find an example below:

<script type="text/javascript">
zanox.setCallback("INSERT HERE YOUR APPLICATION ID", function(data) {
    var myLocalVar = "Hello Local";   // local variable
    myGlobalVar = "Hello World";      // global, because var is missing!

    // this function is NOT global, so it's ok
    function sayHello() {
        alert( myLocalVar );
    }

    sayHello();
});
</script>
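The namespace-object recommendation above can be sketched as follows; the name myMasterTagApp is made up (any sufficiently unique name works), and the helper functions are purely illustrative:

```javascript
// One global container object instead of many free global functions/variables.
// "myMasterTagApp" is a hypothetical, hopefully unique namespace name.
var myMasterTagApp = myMasterTagApp || {};

// Helpers live as properties of the namespace object,
// so only one name is added to the global scope.
myMasterTagApp.formatPrice = function (amount) {
    return amount.toFixed(2) + " EUR";
};

myMasterTagApp.greet = function (appId) {
    return "Hello World! I'm application " + appId;
};
```

Because only the single name myMasterTagApp reaches the global scope, collisions with variables or functions of the host page are far less likely.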
4 Test an application

4.1 Create an empty test page

If you want to test your application, you have to generate a test code first. To generate the test code proceed as follows:

Step 1: Log in to the zanox Application Store and click on the MasterTag application from your list of applications for which you want to generate a test code.
Step 2: Click on the tab Widget Code, then click the button Get the code.

Figure 10: Access the application code
A dialogue window prompts you to select one of your ad spaces for which you want to generate the code.

Figure 11: Generate the application code

Step 3: Click the button Generate code to pull the application code and place the generated code snippet on a HTML page.
Step 4: We recommend creating an empty HTML page for testing as shown below:

<html>
<head>
</head>
<body>
<!-- put test code from Application Store here (XXXXXXXX is your application ID) -->
<div class="zx_XXXXXXXXXXXXXX zx_mediaslot">
<script type="text/javascript">
window._zx = window._zx || [];
window._zx.push({"id":"XXXXXXXXXXXXXX"});
(function(d) {
    var s = d.createElement("script");
    s.async = true;
    s.src = (d.location.protocol == "https:" ? "https:" : "http:") + "...js";
    var a = d.getElementsByTagName("script")[0];
    a.parentNode.insertBefore(s, a);
}(document));
</script>
</div>
</body>
</html>

Step 5: Save the HTML page anywhere you want (e.g. on your local hard drive) and open it in your browser. The application is now loaded from the zanox servers, executed and displayed in the browser after a short moment.

4.2 Simulate application settings in your test environment

To test if your application settings work properly and if the code handles different value combinations, make some small changes to the generated code from the zanox Application Store. This will help you simulate how settings are passed. This line must be edited:

window._zx.push({"id":"XXXXXXXXXXXXXX"});

The push function takes one JavaScript object as a parameter (written in JSON notation). This object usually contains only one property "id", whose value is the ID of the MasterTag to be loaded.

// same line as above, only formatted differently for better understanding
window._zx.push({
    "id" : "XXXXXXXXXXXXXX"
});

To simulate the passing of settings, add a new property to that object, called "settings". Now modify the code (added code marked in red):

window._zx.push({
    "id" : "XXXXXXXXXXXXXX",
    "settings" : {
    }
});

The value of the new "settings" property is itself also a JavaScript object. To simulate the passing of a new setting to your application, you have to add it to the "settings" object. For instance, if you want to pass the setting "my_setting_1" with the value "myvalue_1", do the following:
window._zx.push({
    "id" : "XXXXXXXXXXXXXX",
    "settings" : {
        "my_setting_1" : "myvalue_1"
    }
});

You can also pass a comma-separated list of settings:

window._zx.push({
    "id" : "XXXXXXXXXXXXXX",
    "settings" : {
        "my_setting_1" : "myvalue_1",
        "my_setting_2" : "myvalue_2",
        "my_setting_3" : "myvalue_3"
    }
});

Please keep in mind the following limitation: the above testing method will only work for settings which have been previously defined by you in the zanox Application Store (see section 2.2.3 Create an application setting for your MasterTag application). If you now click refresh in your browser, the setting values are passed to the application's callback function as described above and your application should change its behaviour.
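How the loader combines the default values you defined in the Application Store with the simulated values above is not spelled out in the guide; the sketch below merely illustrates the precedence described in this section (a simulated value overrides the default), using made-up setting names:

```javascript
// Illustrative only: simulated values override Application Store defaults.
function mergeSettings(defaults, simulated) {
    var merged = {};
    var key;
    for (key in defaults) {
        merged[key] = defaults[key];      // start from the defaults
    }
    for (key in simulated) {
        merged[key] = simulated[key];     // simulated value wins
    }
    return merged;
}

var defaults = { "pagetype": "generic", "my_setting_1": "default_1" };
var simulated = { "my_setting_1": "myvalue_1" };
var settings = mergeSettings(defaults, simulated);
```

After the merge, settings["my_setting_1"] carries the simulated value while untouched settings such as "pagetype" keep their defaults.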
5 Release an application

5.1 Optimise the application source code

After you have finished developing your application and you are sure it works as it should, you can release it to the public. But before you do that, you should minimise the size of your application source code by:

- Minifying the JavaScript code
- Removing all comments from the HTML, JavaScript and CSS code
- Removing all other unused code

Minifying the JavaScript code is the most important step to optimise your source code. Minifying is done by removing comments and whitespace (spaces, tabs etc.) from the code and, depending on the algorithm used, by compressing the source code afterwards. There are some tools available which may help you minify your source code, for example YUI or Dojo ShrinkSafe. If you prefer a simple website where you can paste your code, push a button and get your minified code back, use for example:

After optimising your application's source code, just update it in the zanox Application Store and you're ready for the final step.

5.2 Application approval

The very last step to release your application to the public is to request approval by zanox. Only approved applications are available to users and clients in the zanox Application Store. To request approval proceed as follows:

Step 1: Make sure that your application complies with the requirements set out in the MasterTag approval checklist.
Step 2: Go to your developer area in the zanox Application Store and click on the tab Applications for Sale.
Step 3: Select the application you want to release from the list of your available applications and click on it.

Figure 12: Select your application for release
Step 4: Scroll down to the bottom of the page and click the button Save & submit.

Figure 13: Save and submit your application
A confirmation dialogue opens.

Figure 14: Confirm application submission

Step 5: Click the button Submit to submit your application to the approval queue.

The application is now reviewed and tested by zanox according to our code quality requirements. Once zanox approves your application you will receive an email and the application will be publicly available in the zanox Application Store.
5.3 MasterTag approval checklist

Before sending a MasterTag application for approval, please make sure you have checked the following:

1. Is there an application description?
2. Is the application type set to "widget"?
3. Is the application set to "Application for Advertisers"?
4. Have you provided a logo?
5. Are there search tags in the format "MasterTag, product, basket, checkout"?
6. Have all "document.write" commands in the JavaScript widget code been eliminated?
7. Has the widget code been embedded once in a test page without causing any JavaScript errors?
8. Does the widget code begin with the tag <script type="text/javascript">?
9. Does the widget code command zanox.getcallback(app_id) use the application's correct APP_ID?
10. Are the application settings formatted correctly according to the naming conventions in section 2.2.2?
6 Tips for advanced users

6.1 Develop an application without generating a test code

Speed up application development

Most of the time, developing software is an iterative process. You change your source code and immediately afterwards want to see how the software's behaviour has changed. This implies that you have to do the following steps over and over again:

1. Modify your application's source code
2. Open the zanox Application Store
3. Copy the source code into the appropriate field in the developer area
4. Update
5. Open the browser
6. Refresh and check the new behaviour

One way to optimise the development process is to reduce the steps required to update your application's source code (steps 2, 3, 4). These steps are needed because your test page contains a MasterTag test code that only works in conjunction with the zanox backend. Wouldn't it be nice if you could just change your code, refresh your browser and skip the update steps in the zanox Application Store? To do so, you have to create a special HTML test page which allows you to develop your application without the MasterTag test code. Instead, you do an integration test after you have finished developing (see the next section for a detailed explanation).

Advantages:

- You only have to generate a code in the zanox Application Store once, when you do an integration test of your application.
- You do not have to update the application's source code in the zanox Application Store every time you want to check your source code's behaviour. Instead, you just refresh the browser.
- You can easily debug your application, for instance with Firefox. Debugging can get very complicated if you use a MasterTag test code while developing.
6.1.2 Create an appropriate test page

Let's assume you have an empty HTML page like this:

<html>
<head>
</head>
<body>
</body>
</html>

To prepare this page for testing proceed as follows:

Step 1: Place your application's source code on the page. If your source code is located in an external JavaScript file, use a <script src="..." /> tag. If you want to copy your code into the test page, use a normal <script> tag.

Step 2: If your application uses some of the loader script functionality (e.g. the JavaScript function zanox.load), include the loader script in your test page:

<script src=""></script>

Step 3: Include an HTML element in your test page which acts as a container for your application (see section Reference of automatically transferred data). You can use the following sample code:

<div id="testcontainer"></div>

Step 4: Add some JavaScript code that simulates parts of the internal functionality of the zanox loader script:

var data = {
    "mediaslot" : {
        "id" : "SOME_MASTERTAG_ID",
        "height" : 0,
        "width" : 0
    },
    "adspace" : {
        "id" : 123
    },
    "app" : {
        "id" : "MY_APP_ID",
        "height" : 0,
        "width" : 0,
        "container" : {
            "id" : "testcontainer"
        },
        "settings" : {
            // add your settings here
        },
        "connectids" : {
            "developer" : "DEVELOPER_CONNECT_ID",
            "publisher" : "PUBLISHER_CONNECT_ID"
        },
        "rendermode" : "inline"
    }
};

var zanox = zanox || {};
zanox.setcallback = function( appid, callback ) {
    callback(data);
};

The structure of the data object is exactly the same as described in section Reference of automatically transferred data. There is only one difference: you are creating this object manually right now. That means you have full control over what data is passed to your application. The most important lines of code in the sample above are:

var zanox = zanox || {};
zanox.setcallback = function( appid, callback ) {
    callback(data);
};

As mentioned above, you have to call the zanox.setcallback function at least once in your application source code. The code above overwrites this function and immediately calls your application's init function (callback function), passing the data object you have created manually (basically, it simulates what the zanox loader script is doing internally). Hence, you do not need a MasterTag test code any more when developing a MasterTag application.
ZANOX.de AG
Stralauer Allee
Berlin, Deutschland
blog.zanox.com
wiki.zanox.com
I am trying to use nltk to do some word processing, but there is a warning. I found that if a word contains a non-ASCII character (for example "Nations" followed by a curly quote), the program throws a warning. I wonder if there is any way to stop the program when the warning is raised. Thank you
warning:

UnicodeWarning: Unicode equal comparison failed to convert both arguments to Unicode - interpreting them as being unequal
  if word[0].lower() not in stopwords.words():
Best answer
A warning is a non-fatal error. Something is wrong, but the program can continue.
Warnings can be handled with the standard library module warnings, or from the command line by passing the flag -Werror. Programmatically:
import warnings

with warnings.catch_warnings():
    warnings.simplefilter('error')
    function_raising_warning()
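To see the pattern end to end, here is a self-contained sketch. The function `risky` is my own stand-in for any code (such as the stopword comparison above) that emits a warning; it is not from the original question:

```python
import warnings

def risky(x):
    # Stand-in for code that emits a warning (e.g. the Unicode comparison).
    if x < 0:
        warnings.warn("negative input", UserWarning)
    return abs(x)

def strict_risky(x):
    # Inside this block, any warning is raised as an exception,
    # so the program stops right where the warning occurred.
    with warnings.catch_warnings():
        warnings.simplefilter("error")
        return risky(x)
```

To escalate only the category from the question rather than every warning, pass it explicitly: warnings.simplefilter("error", UnicodeWarning).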
In this post:
- Has the WMI Problem with Dates and Daylight Saving Time Been Resolved?
- How Can I Change the Color of Error Messages in the Windows PowerShell Console?
- How Can I Read a Text File and Update 2,000 User Accounts?
- Can I Use a WildCard Character for File Names with the FileSystemObject?
- How Can I Trim Strings That Are Returned from WMI Using Windows PowerShell?
Has the WMI Problem with Dates and Daylight Saving Time Been Resolved?
Hey, Scripting Guy! Back in the spring, I ran into a problem using the Win32_NTLogEvent WMI class in a script. The problem was the change to daylight saving time (DST) back in March in the United States. After a couple of weeks, the problem corrected itself (when the original DST time was reached), but it was a major pain. Has the situation with incorrect dates being reported via WMI been fixed? I also noticed it seemed to affect the Win32_Process WMI class as well.
-- PG
Hello PG,
Scripting Guy Ed Wilson here. I am completely familiar with the problem you describe, as I received several e-mails to scripter@microsoft.com describing the issue with dates and times in the Win32_NTLogEvent WMI class. I contacted the WMI team, and they tell me the problem has been resolved. The hotfix ID is KB 970413. Make sure you get the hotfix installed on all of your servers you will be querying via script.
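For context on why a DST change can skew these queries: WMI reports timestamps as CIM_DATETIME strings such as 20090321020000.000000-300, where the trailing signed number is the offset from UTC in minutes, and it is that offset component the bug threw off. A minimal parser sketch (the function name and sample values are mine, not from the hotfix documentation):

```python
from datetime import datetime, timedelta

def cim_to_utc(value):
    # CIM_DATETIME layout: yyyymmddHHMMSS.mmmmmm(+|-)UUU
    # chars 0-13 : local date/time
    # chars 15-20: microseconds
    # chars 21+  : signed offset from UTC in minutes
    local = datetime.strptime(value[:14], "%Y%m%d%H%M%S")
    offset_minutes = int(value[21:])
    return local - timedelta(minutes=offset_minutes)
```

For example, an event stamped 20090321020000.000000-300 (2:00 a.m. local, five hours behind UTC) resolves to 07:00 UTC; an offset that is wrong by an hour shifts every reported event time by an hour.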
How Can I Change the Color of Error Messages in the Windows PowerShell Console?
Hey Scripting Guy! The lurid red on black error messages produced by Windows PowerShell are almost unreadable. How can I control the background and text colors of the error messages? I tried using the window properties, but they do not have an effect on this situation. Thanks.
-- DM
Hello DM,
To change the color of the error messages in Windows PowerShell, you need to set a new value for $Host.PrivateData.ErrorForegroundColor. If you query the privatedata property directly, you will see the current settings for the error, warning, debug, and other messages:
PS C:\> $Host.PrivateData
If you wish to change the value, you assign a new color directly to the value you wish to change. This is seen here:
PS C:> $Host.PrivateData.ErrorForegroundColor = "cyan"
Please note that the change will only work for that particular Windows PowerShell session, so you may want to add the changes to your Windows PowerShell profile.
To determine the colors that are available, you can simply supply a bogus value for the systemColor. When this happens, Windows PowerShell will generate an error that contains the allowed enumeration values. This technique is seen here:
PS C:> $Host.PrivateData.ErrorForegroundColor = "fusa"
Exception setting "ErrorForegroundColor": "Cannot convert value "fusa""."
At line:1 char:19
+ $Host.PrivateData. <<<< ErrorForegroundColor = "fusa"
+ CategoryInfo : InvalidOperation: (:) [], RuntimeException
+ FullyQualifiedErrorId : PropertyAssignmentException
PS C:>
How Can I Read a Text File and Update 2,000 User Accounts?
Hey, Scripting Guy! We were looking at updating the account expiry date for about 2,000 users; however, we really didn’t want to do this manually. We have a csv/text file of the users and their expiry dates that needs to be set. We have been able to find the VBScript for updating a single user, but this still means manual input for each user. Any help would be greatly appreciated. Here is the script we have:
strExpireDate = "<Date>"
strUserDN = "<UserDN>"
set objUser = GetObject("LDAP://" & strUserDN)
objUser.AccountExpirationDate = strExpireDate
objUser.SetInfo
WScript.Echo "Set user " & strUserDN & " to expire on " & strExpireDate
' These two lines would disable account expiration for the user
' objUser.Put "accountExpires", 0
' objUser.SetInfo
-- CG
Hello CG,
You will need to read the CSV file first. To do this, you can use the FileSystemObject, and the ReadLine method. After you have read the line, you will need to use the split function to break up your line. Store the results in the strUserDN variable. A sample script is seen in the TechNet Script Center Gallery.
You could also use a script similar to this one on the TechNet Script Center Gallery.
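The read-and-split loop described above is the same in any language. Here is a sketch of the parsing half in Python; the file layout (one quoted distinguished name and one expiry date per line) and the function name are assumptions of mine, not from the original question:

```python
import csv

def load_expirations(path):
    # Each row: "distinguishedName",expireDate
    # DNs themselves contain commas, so they must be quoted in the
    # file; csv.reader handles the quoting for us.
    pairs = []
    with open(path, newline="") as f:
        for row in csv.reader(f):
            if not row:
                continue
            pairs.append((row[0].strip(), row[1].strip()))
    return pairs
```

Each (user_dn, expire_date) pair would then feed whatever directory call sets AccountExpirationDate, exactly as the single-user VBScript above does for one account.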
Can I Use a WildCard Character for File Names with the FileSystemObject?
Hey, Scripting Guy! Please look at the following script, noting the last line:
'Test Station File Cleaner
'4/17/09 S Barton

strcomputer = "."
on error resume next

filename1 = "c:\tunnel\error.log"
Set fso = CreateObject("Scripting.FileSystemObject")
set filesize = fso.GetFile(filename1)
'wscript.echo filesize.size
If filesize.size >= 1000000 then
    fso.deleteFile "c:\tunnel\error.log"
end if

filename2 = "c:\tunnel\data.log"
set filesize = fso.GetFile(filename2)
'wscript.echo filesize.size
If filesize.size >= 1000000 then
    fso.deleteFile "c:\tunnel\data.log"
end if

FSO.DeleteFile("C:\Error*")
I have been under the impression that the "*" wildcard character will not work in VBScript. Yet in this case, it does. Attached are two samples of the error files it removes. It does not work without the "*" or with the .log extension. This script is on our test stations and is run daily by Task Scheduler. Have I had the wrong impression?
-- SB
Hello SB,
According to MSDN, the DeleteFile method will accept wildcard characters in the last portion of the file path. Therefore, the command seen here will work:
FSO.DeleteFile("C:\Error*")
However, when you attempt to use a wildcard character for the first portion of the file name, it does not work. This is seen here:
FSO.DeleteFile("C:\Error*.log")
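As a side note in a different language: Python's standard glob module accepts a wildcard anywhere within the final path component, so the pattern-with-an-extension case has no such restriction there. This is an illustrative sketch of mine, not part of the original answer:

```python
import glob
import os

def delete_matching(pattern):
    # Delete every regular file matching the glob pattern;
    # return the paths that were removed.
    removed = []
    for path in glob.glob(pattern):
        if os.path.isfile(path):
            os.remove(path)
            removed.append(path)
    return removed
```

For instance, delete_matching(r"C:\Error*.log") would remove Error1.log, Error2.log and so on, while leaving non-matching files in place.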
How Can I Trim Strings That Are Returned from WMI Using Windows PowerShell?
Hey, Scripting Guy! I was looking at one of the Windows Powershell Tips of the Week, but I am having an issue trying to duplicate one of the tips. Perhaps you can shed some light on why I can't reproduce your results. Go easy on me; I am a total newbie with Windows Powershell. Here is the tip I am talking about:_")
Here is the code I tried, but it is not working for me:
$computer = "LocalHost"
$namespace = "root\CIMV2"
$BIOSVer = Get-WmiObject -class Win32_BIOS | Select SMBIOSBIOSVERSION
$Model = Get-WmiObject win32_computersystem | Select Model
$GX280 = "08"
$GX270 = "07"
$GX260 = "09"
$BIOS = $BIOSVer.TrimStart("@{SMBIOSBIOSVersion=A")
$Dell = $Model.TrimStart("@{Model=OptiPlex ")
IF ($bios -eq $GX280)
{
write-host "GX280"
}
elseif ($bios -eq $GX270)
{
Write-host "2"
}
elseif ($bios -eq $GX260)
{
Write-host "3"
}
The error I am getting is seen here:
-- MH
MQ Postcard
IBM WebSphere MQ (previously known as “MQSeries”) ships with a “postcard” app, amqpcard.exe, a simple demo that can be used to verify the installation of MQ. The postcard app is supplied without source code. At some point, IBM started shipping Java classes for MQ, and they also delivered a Java postcard app, including source code, to illustrate how you would connect to MQ from Java. The Java postcard app inter-communicates with the binary postcard app. I have produced a postcard app in C# that interoperates with both. It’s available in source form, within a Visual Studio project.
MQ is .NET-enabled
In October 2003, IBM shipped the MQ Classes for .NET as part of CSD05 for MQSeries v5.3 on Windows. CSD 05 (and later) for MQ v5.3 on Windows now includes a .NET library, packaged as a single assembly, amqmdnet.dll. IBM calls this the “MQ Classes for .NET”. The library exposes an set of classes in the IBM.WMQ namespace, things like MQMessage, MQQueue, MQQueueManager, and so on, that let any .NET app directly interact with MQ. The MQ installation on Windows can be a “server” or “client” installation – in MQ-speak, that means: with or without a local reliable store for messages. There is even documentation for this class library.
What can a .NET app do with MQSeries?
What can a .NET app do with MQ? Just about anything a Java app can do, or a VB6 app using the ActiveX control for MQ, or a C-app, using the MQ API. Put and send messages, inquire as to MQ QM properties (is it syncpoint enabled?, what version of MQ is the QM ?), do blocking or non-blocking calls, use multiple threads, participate in transactions, and so on. The postcard app I mentioned above, shows some of the above.
What about PCF?
Within the “MQ Classes for .NET”, there is also a partial implementation of the PCF stuff. If you know MQ, you know what this means. PCF is the Programmable Control Facility, I think, and it is basically a way to send administrative commands to MQ Queue Managers, using the MQ infrastructure itself. PCF messages are specially formed at the sending end, and specially handled at the receiving end. IBM started creating a set of objects that aid in “doing PCF” with .NET, but as I said, it is a partial implementation, and is not documented as far as I am aware. PCF is documented, but the implementation of PCF within this class library is not.
The sample app I deliver here does not do PCF or exercise any of that PCF code.
Some Further Background on MQ
In IBM-speak, CSD stands for “Cumulative Service distribution”, which means “fixpack”. It’s confusing because for some IBM software products, like DB2 UDB, IBM uses the actual term “fixpack”. But for others, like MQ, IBM uses “CSD”. Whatever.
Whether called CSD or fixpack, obviously there is a different series of fixes for each product. The latest CSD for MQv5.3 on Windows is CSD08, released in October 2004. I think IBM recommends that you stay current with the CSDs. You don’t need to upgrade your entire MQ network at the same time. You can upgrade your Windows server to CSD08 without touching your os390 system or AS400. Get the latest MQ CSDs here.
As with some other vendors, some fixpacks from IBM include new function. CSD05 for MQ v5.3 was one of those that shipped new function, specifically, the MQ Classes for .NET.
The History around the .NET support in MQ
A long, long time ago, I think it might have been spring or summer 2002, an MQ guru and all-around good guy named Neil Kolban quietly published a class library written in C# for interacting with MQSeries. This was, as I understand it, basically a conversion of the Java class library into C#, using the Java Language Conversion Assistant that Microsoft had released. The .NET Class library for MQ was one of those really cool things, but it was totally unsupported. Kind of a skunkworks project. An illustration. Apparently there was sufficient interest in getting “real” support for .NET in MQ that IBM made a formal effort, this time headed by the MQ lab in Hursley, to build the thing. They started with Kolban’s work and made it real. This eventually was delivered as supportpac ma7p. A minor digression: Supportpac is a funny term. A supportpac is a package of complementary software — sometimes utilities, sometimes just class libraries — shipped by IBM. I have only heard of them related to MQSeries, so I don’t think it is an IBM-general concept. Anyway, it’s a funny term because in contrast to the name, “supportpacs” are generally NOT supported by IBM.
ma7p was released in February 2003, I believe. Again there was interest from the developer community, and so IBM “promoted” the .NET classes for MQ into a bona fide part of the product, complete with full support. This was initially released as part of CSD05 for WMQ v5.3, in October 2003.
Listen, if you google around on kolban.com, you can find the original MQSeries.NET package. Don’t download that. That’s the thing that is not supported. It’s still available but you really should go to the supported IBM library – get it through CSD08.
Finally, a Disclaimer
I am not an employee of the IBM Company. I work for Microsoft. Anything I say about IBM or its products should be considered in that light. For authoritative information on MQ, go to IBM.
-Dino
Awhile back, I was building a .NET application which needed to do two-phase commit between MS SQL 2000
Hi, I am trying to run this application to connect to our MQ server (which is version 5.1). So far all I have been able to get is the error code MQRC_Q_MGR_NOT_AVAILABLE.
I have installed the MQ client 5.3 for windows on my windows XP machine (remote client)
I am using .NET 2.0
I am just wondering if there is a reason I am getting this error. I have tried to telnet into the MQ server and was able to do so. I have verified the channel name and the queue manager name and they are correct as well. Please let me know if you have any ideas. Could it be that I am not able to connect to the queue manager because I am using MQ client version 5.3 and the server is 5.1?
Adnan, I don’t know why you aren’t connecting to MQSeries. There’s a myriad of reasons you will get that RC from MQ. But someone knows. I would suggest that you post your query at the MQSeries forums.
Also, it seems to me that v5.1 is a very old server – and you probably need to upgrade.
Hi!
Tried to download the source code for the c# postcard app with no success. Is it still available??
The source is still available, yes. The site was undergoing a migration that day. It’s up now.
Need to have details of MQ Series connectivity and using MQ Series from Dotnet | https://blogs.msdn.microsoft.com/dotnetinterop/2004/11/08/net-and-mqseries/ | CC-MAIN-2017-22 | refinedweb | 1,213 | 76.62 |
#include <pstring.h>
An important feature of the string class, which is not present in other container classes, is that when the string contents are changed, that is, resized or elements set, the string is "dereferenced" and a duplicate is made of its contents. That is, this instance of the array is disconnected from all other references to the string data, if any, and a new string array is created for the contents. For example, consider the following:

  PString s1 = "String"; // New array allocated and set to "String"
  PString s2 = s1;       // s2 has a pointer to the same array as s1; reference count is 2 for both
  s1[0] = 's';           // Breaks the reference into different strings

At the end, s1 is "string" and s2 is "String", each with a reference count of 1.
The functions that will "break" a reference are SetSize(), SetMinSize(), GetPointer(), SetAt() and operator[].
Note that the array is a null ('\0') terminated string, as in C strings. Thus the memory allocated and the length of the string may be different values.
Also note that the PString is inherently an 8 bit string. The character set is not defined for most operations and it may be any 8 bit character set. However when conversions are being made to or from 2 byte formats then the PString is assumed to be the UTF-8 format. The 2 byte format is nominally UCS-2 (aka BMP string) and while it is not exactly the same as UNICODE they are compatible enough for them to be treated the same for most real world usage.
Definition at line 376 of file pstring.h. | http://pwlib.sourcearchive.com/documentation/1.10.2/classPString.html | CC-MAIN-2018-05 | refinedweb | 265 | 55.98 |
Dear Team,
We want to run multiple instances on the same server. Please suggest the required configuration changes for this operation. Now we are using the community version of aerospike.
Thanks Sumit Thakur
I want to get more throughput by using single hardware machine. It can be reduce deployment cost in production. Please suggest if you have better idea.
First, Aerospike isn’t Redis, which runs on a single core. The asd daemon will expand to use all the cores on your box automatically. It’s designed to use the hardware resources efficiently. Theoretically you can run multiple instances on a physical machine using something like Docker or virtual machines, but be aware that each layer of virtualization adds overhead that takes away from your machine’s real capacity. Bare metal deployments are going to use the resources of that machine better.
Second, a single physical machine is a single point of failure. It doesn’t matter if you run 3 instances of Aerospike on that single machine - if the power supply gives out, or a drive fails, or the NIC malfunctions all 3 will be affected. Aerospike is a distributed database, and it allows you to very easily form a cluster over multiple machines. You should deploy over multiple machines (typically 3 or more) to be more resilient.
Thanks, Ronen
I run multiple instance per server in our environment. To do it properly requires a few configuration changes in both Aerospike configs and network configs (especially if you wish to use the auto-pin NUMA or CPU options). I would never run it in docker on bare metal – not when it is capable of running multiple instances natively.
As was pointed out though, you’ll want to run multiple servers to avoid any potential downtime or data loss. I run 3 servers, 6 total instances (2 per) using NUMA auto pinning. This has served us extremely well for the many months I’ve had the cluster in service.
As @Kargh pointed out, the only time it makes any sense to run multiple instances on a single physical machine is to take advantage of pinning to the same NUMA node. This must be used in conjunction with rack awareness, so it’s a bit difficult to set up. You do get a very big performance boost, especially for in-memory namespaces on multi-core systems.
The What’s New in Aerospike 3.12 blog post goes over auto-pinning to CPU and NUMA.
I run 2 per for two reasons. First, the servers I’m using are extremely powerful and it doesn’t make sense to let all the CPU sit idle. Second is because of NUMA. With 2 CPUs and 2 NICs, I can pretty much segregate the two instances. The performance boost is pretty huge but it does take some non-standard configuration to make it work. | https://discuss.aerospike.com/t/multiple-instances-on-the-same-server/5079 | CC-MAIN-2018-30 | refinedweb | 512 | 65.12 |
This file documents the revision history for App::Cinema.

1.00  01/04/2010
      - initial revision, generated by Catalyst
1.02  01/05/2010
      - move to database: done! only change configure part and the rest of
        application is completely unchanged.
1.05  01/11/2010
      - fix bug: delete user - authorization-role
1.10  01/12/2010
      - upload project to web hosting
      - move hard-coded db setting to yml
      - use date format => HTTP::Date::time2iso(time)
      - add a record many-to-many => /item/add
      - new gui
1.11  01/15/2010
      - add image URL to item creation screen
      - add "Latest Movies" to menu
      - add event features to log user's action
      - add search feature by textfield to the menu
1.12  01/19/2010
      - namespace is changed to App::Cinema from MyApp
      - add comments to each module
      - modify /usr/add.tt2: users can choose their role either as user or admin
1.13  01/20/2010
      - add POD to User.pm, Item.pm, Menu.pm, About.pm, Login.pm, Logout.pm, News.pm
      - add 'News' features
1.14  01/25/2010
      - add search by item, news, event, user
      - move error message to yml
      - Moose can work well with my app now.
      - handle duplicated PK exception when adding a user
      - add Moose class - App::Cinema::Event
1.15  01/29/2010
      - add captcha
      - switch db to mysql
1.16  01/31/2010
      - add validate to name field
      - modify Role based access control
      - add description of Role based access control
1.17  02/02/2010
      - send email to user when check out movie
      - sysadmin can edit all users' roles
      - every account can edit its own roles
      - user can send email to ask for upgrading to vip
      - sysadmin can only upgrade account to vipuser but can't edit other info
      - fixed bug: delete user also delete its 'has many' data
      - admin can add and delete movie created by yourself
      - everyone can only search his/her own event except sysadmin
      - keep search conditions after the result displays
1.18
      - sysadmin can activate/deactivate user account instead of deleting it
      - change font-family to Sans
SharePoint Online Enterprises App Model Management Policies and Process
Applies to: Office 365 Dedicated Plans
Topic Last Modified: 2014-01-13
This document describes the policies and process that govern how subscribers to Microsoft® SharePoint® Online (SPO) for Enterprises Dedicated Plans deploy App Model-based custom solutions to the SharePoint Online environment. Custom solutions include solutions and products that are developed by third parties and code developed in-house by customers. You can deploy SharePoint solutions to any Web application in the SharePoint Online Dedicated environment (Portal, Team, Personal Sites, or Partner Sites*).
This document is for informational purposes only. It represents the features and SKUs that were available at the time of publication but it does not officially represent and may not fully match the current in-market service, nor does it guarantee future availability of any SKUs or features. For the most current information, see the SharePoint Online Dedicated on TechNet.
This document and process do not apply to the following methods of customizing a SharePoint Online environment:
End-user solution. A customization that is performed through a Web-based mechanism (for example, applying style sheets, XSLT and out-of-the-box Web Parts, and other SharePoint Designer declarative solutions).
Sandboxed solution (partially trusted code). A customization that is created by developers and that typically contains .NET–based code and dependent files that can be deployed to SharePoint site collections by the customer’s site collection administrators. SharePoint sandbox solutions that are based on server code are deprecated in SharePoint 2013 in favor of developing apps for SharePoint. You can still install sandboxed solutions to site collections on SharePoint 2013. The declarative sandbox solution is still one of the supported solution development options and it will continue to be supported.
The goal of the policies and process in this document is to ensure that App Model custom solutions deployed to the SharePoint Online environment can be operated, managed, secured, and scaled following established best practices.
SharePoint 2013 introduces a new app development model, the Cloud App Model (CAM), which reduces the need for code running on the SharePoint Server. The key benefits are two-fold: greater app deployment flexibility and backend server/service stability.
Starting in the SharePoint 2013 release, the SharePoint Online Dedicated platform supports the App Model as its default custom solution building method.
What does the App Model offer to SharePoint Online Dedicated customers?
Self service agility − Customers can deploy code based on their own project schedule with no dependency on SharePoint Online Dedicated support schedule.
Custom defined software deployment cycle and process − Since there is no server-side code running on the SharePoint Online Dedicated Farm, the customer has full control of the solution development and deployment processes. No SharePoint Online review is required.
Lower custom upgrade support cost − Separation of the platform update from the customer solution upgrade lowers the total cost of ownership for custom solution maintenance and support.
Empower developers with common Web development skills − The App Model intends to reduce the developer ramp-up time by offering a familiar Web development experience, thereby reducing dependency on knowledge of core SharePoint concepts when writing extension code.
This section presents an overview of the steps that are generally involved in the development and deployment process for the App Model solutions.
Before you start to build customer Apps, there are some governance decisions you have to make. Each decision point is supported by the SharePoint Online Dedicated platform either as part of the farm build, by using the change request (CR) process, or by using self-service configuration options, such as granting the App Catalog Site Collection Permissions. See Appendix B for a detailed App Related CR list.
For more information, see Plan for Apps for SharePoint 2013.
The following table lists roles and responsibilities involved in the design, development, deployment, management, and use of apps.
All the above roles and responsibilities are performed by Customer IT teams and site users. Essentially, the architecture gives control back to the customer when building customization or exposing rich data using the SharePoint online farm.
App configuration information breaks down into two parts: Basic and advanced.
Basic configuration (applies to all customers) − See SPOD 13.1 Build Guide. The following sections in the Build Guide are directly relevant to app configuration:
Section 2.3. Network and DNS Configuration
Use *.001dspoapp.com for the SharePoint Apps namespace (sample app domain).
Section 11.14. Create Service Applications
Use New-SPSubscriptionSettingsServiceApplication for the config subscription service.
Section 11.14.1. Configure the App Management Service
Please follow all steps in this section.
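As a hedged sketch of the service application steps referenced above (section 11.14 of the Build Guide), the configuration is typically performed with the standard SharePoint 2013 cmdlets shown below. The managed account, application pool, and database names are illustrative placeholders, not SharePoint Online Dedicated values; the app domain matches the sample domain from section 2.3.

```powershell
# Sketch only: standard SharePoint 2013 cmdlets for the subscription settings
# and App Management service applications. Names and accounts are placeholders.
$account = Get-SPManagedAccount "CONTOSO\sp-services"
$appPool = New-SPServiceApplicationPool -Name "SettingsServiceAppPool" -Account $account

# Subscription settings service application and proxy
$appSubSvc = New-SPSubscriptionSettingsServiceApplication -ApplicationPool $appPool `
    -Name "SettingsServiceApp" -DatabaseName "SettingsServiceDB"
New-SPSubscriptionSettingsServiceApplicationProxy -ServiceApplication $appSubSvc

# App Management service application and proxy
$appMgmtSvc = New-SPAppManagementServiceApplication -ApplicationPool $appPool `
    -Name "AppServiceApp" -DatabaseName "AppServiceDB"
New-SPAppManagementServiceApplicationProxy -ServiceApplication $appMgmtSvc

# App domain and app URL prefix (sample app domain from section 2.3)
Set-SPAppDomain "001dspoapp.com"
Set-SPAppSiteSubscriptionName -Name "app" -Confirm:$false
```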
Advanced Configuration (Provided on an as-needed basis for customers that require a provider-hosted app) – See SPOD 13.1 App Support Policy and Procedure Documentation.
SharePoint Online Dedicated supports provider-hosted apps running on Azure, on premises, or on third-party hosting facilities. An Office 365 SharePoint Multi-tenancy site is used to authorize provider-hosted apps on the SharePoint Online Dedicated site. Please follow the link below to configure your on-premise farm so that you may receive configuration guidance. Since this is not a configuration unique to SharePoint Online Dedicated, there is a link to the TechNet article that you need for detailed configuration steps.
Step 1 -- Plan the hosting environment
For a provider-hosted app to work on SharePoint Online Dedicated, the app hosting server(s) have to be able to talk to ACS in order to perform a token exchange. Customers may need additional configurations to enable this scenario (Web proxy settings, and so on).
The specific configuration steps can be found at: How to: Use an Office 365 SharePoint site to authorize provider-hosted apps on an on-premises SharePoint site.
The SharePoint Online Dedicated platform facilitates two broad approaches to hosting your apps for SharePoint: SharePoint-hosted and Provider-hosted. For the Provider-Hosted model, the hosted location can either be on premise Web servers, third-party hosted servers or Window Azure servers. These are not exclusive categories: An app for SharePoint can have both SharePoint-hosted and remotely hosted components. Each approach has key features you should consider when deciding how to host your apps.
The table below is a high level summary of hosting options. For detailed descriptions of each option and design considerations, please see the following articles.
Hosting Options Comparison
How does SharePoint Online Dedicated support the Provider-hosted Model?
A SharePoint Online Dedicated farm trusts the ACS instance associated with a customer’s own subscription. ACS then acts as a common authentication broker between the SharePoint Online Dedicated farm and the app and as the online security token service (STS). ACS generates context tokens when the app requests access to a SharePoint resource.
For information regarding setting up the ACS trust broker, see How to use an Office 365 SharePoint site to authorize provider-hosted apps on an on-premises SharePoint site.
For a provider-hosted app to work on SharePoint Online Dedicated, the app hosting server(s) have to be able to talk to ACS in order to perform token exchange. A customer may need additional configurations to enable this scenario (for example, Web proxy settings, and so on).
The related change request for configuring the provider-hosted model is SPOD-13-204.
Description: Configure support for the provider hosted apps using ACS as the trust broker. This configuration applies to both customer-created apps and SharePoint App Store apps.
If you decide that your app is using the Provider Hosted model and you are hosting it within your own data center Web servers or some other application hosting infrastructure under your control (for example, Windows Azure), please submit the change request for the SharePoint Online Dedicated team to configure provider hosted app support. This is a one-time only configuration that supports all your provider hosted apps.
Support for the different types of apps is the same across SharePoint Online Multi-Tenant, Dedicated, and on-premises deployments. For detailed documentation, please review Accessing App from UI.
For more information please see the following:
Optimize App Experiences for the quick start guide.
Host Web, App Web and SharePoint Components in SharePoint Appsfor technical details on app options.
Doing things the App Way for some scenario based App Design Options.
App Types Sample Code
*Declarative workflow can only run within the app Web.
The app authentication in SharePoint 2013 is separate from user authentication and is not used as a sign-in authentication protocol by SharePoint users. App authentication uses the Open Authorization (OAuth) 2.0 protocol and does not add to the set of user authentication or sign-on protocols, such as WS-Federation. App authentication and OAuth do not appear in the list of identity providers.
The authorization policy determines whether the app accesses SharePoint resources based on the current user's context, the app's security context, or a combination of the two. SharePoint Online Dedicated supports the user-only, app-only, and user+app policies. See App Authentication Policy for supported authorization policy details and usage guidance.
Sample code: making app-only policy calls in your app.
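As an illustrative sketch only, an app-only call from a provider-hosted app is typically made through the TokenHelper class that the Visual Studio 2012 app for SharePoint project template generates. The site URL below is a hypothetical placeholder; note also that the app must request app-only permission (AllowAppOnlyPolicy="true" in its AppManifest.xml permission request) for such calls to be authorized.

```csharp
// Sketch: requesting an app-only access token and using it with the client
// object model. TokenHelper is generated by the Visual Studio 2012 template.
Uri siteUri = new Uri("https://portal.contoso.com/sites/target"); // placeholder URL

string realm = TokenHelper.GetRealmFromTargetUrl(siteUri);
string appOnlyToken = TokenHelper.GetAppOnlyAccessToken(
    TokenHelper.SharePointPrincipal, siteUri.Authority, realm).AccessToken;

// The returned ClientContext executes calls under the app's identity only;
// no user context is attached to the requests.
using (ClientContext clientContext =
    TokenHelper.GetClientContextWithAccessToken(siteUri.ToString(), appOnlyToken))
{
    Web web = clientContext.Web;
    clientContext.Load(web, w => w.Title);
    clientContext.ExecuteQuery();
}
```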
An app for SharePoint has its own identity and is associated with a security principal, called an app principal. Like users and groups, an app principal has certain permissions and rights. The app principal has full control rights to the app Web so it only needs to request permissions to SharePoint resources in the host Web or other locations outside the app Web.
For detailed app permission consideration and design guidance, please see App Permissions.
SharePoint App Permission Request Scope URIs and Descriptions
For more information, please see the following:
Plan for App Authentication
Perform basic CRUD operation in an App
App Scoped BCS external content types
Create social features in app
Use Search Rest API in an app
Beyond the app’s own permission and policy, App Catalog Admin can also control who or what site can install the apps. See the package section later in this document for more details.
For building an on premise SharePoint Online Dedicated farm for development or integrated testing purpose, please refer to the 13.1 build guide for SharePoint Online Dedicated farm build instructions. In addition to the build guide, if you want to set up the provider hosted app support, you need to follow the set up ACS trust guidance to create the SharePoint Online Dedicated compatible environment. In addition, you can follow the article on setting up App Development Support for configuration guidance.
SharePoint Online Dedicated offers a Pre-Production Environment (PPE) that customer IT Pros can use to perform the integration test on a SharePoint Online Dedicated farm and to verify that all configuration settings are working. When deploying an app to a SharePoint Online Dedicated environment, there may be additional network connectivity and DNS configuration required. The PPE environment is intended to help you identify those required configuration and validate settings before deploying to Production.
Unlike full-trust solution development, it is not a requirement to first deploy an app to SharePoint Online Dedicated PPE before production. Developers may choose to develop directly in the SharePoint Online Dedicated production farm by using a developer site collection, which fully isolates an app from all other site collections. For more information on this alternative model, see the Developing apps for SharePoint on a remote system topic. Ultimately, organizations should choose the developer model and SDLC that best fit their business.
One of the new features of the App Model is that you can actually develop App Model solutions on the client machines without having the full SharePoint farm installed on the developer box. This offers great portability, developer agility and simplifies your development environment support cost. If you work with a remote installation, you need to install the client object model redistributable from SharePoint Client Components on the target installation.
Developer Tooling Support
Before you start developing apps, it is recommended that you go through these articles to get familiar with designing SharePoint apps. They cover a wealth of topics such as the app manifest, data access, user identity interaction, adding SharePoint features to your apps, localization, and so on. In particular, if you are an app architect, it is recommended that you think about the app tiered design options (UI, business logic, and data layer allocation).
App development for SharePoint Online Dedicated follows the same principles as app development for SharePoint 2013. There are various MSDN and TechNet articles addressing API selection, data access considerations, and so on. Notice that SharePoint Online Dedicated does not support customer SQL databases hosted within the SharePoint Online Dedicated farm infrastructure. SharePoint Online Dedicated does not support the autohosted mode; therefore, Azure Blob storage is an option for storage.
Apps can have dependencies. Please see Registering app dependencies in SharePoint 2013 for some general guidance. Be aware that not all services are supported on SharePoint Online Dedicated platform. Please refer to the Service Description for supported dependency services.
In SharePoint 2013 there is a new site template called the Developer Site template, available under the Collaboration site template type. This is the only target site collection to which apps can be deployed directly from Visual Studio 2012 without first being installed in the App Catalog site.
Before you begin the development process, refer to the following.
This guide focuses on deploying apps for the App Catalog. Please see Publish apps for Office and SharePoint for App Store App development for additional details.
For a provider-hosted app to be able to interact with SharePoint 2013 by using OAuth, the app must first have an app identity. To deploy to a SharePoint Online Dedicated farm, developers must get an app identity for their app by registering it. When you register your app, SharePoint generates the client ID and client secret and associates them with the app display name and the app domain. Optionally, you can also associate a redirect URI with the app when you register it. After you've registered your app, it has an app identity and is associated with a security principal.
During the development process, Visual Studio 2012 and SharePoint development tools in Visual Studio 2012 can create a temporary app identity for use during the app development. But in order to deploy an App to an integrated environment (customer farm or SharePoint Online Dedicated farm), register Apps by using the appregnew.aspx Page. For detailed technical information, please refer to Guidelines for registering apps for SharePoint 2013.
Before you build your provider-hosted app, you have to generate an app ID and app secret. To generate and create these values, go to http://[SharePointServerName]/_layouts/15/appregnew.aspx.
Because SharePoint Online Dedicated has configured trust with Azure ACS, the same app ID and secret work across the SharePoint Online Dedicated production, PPE, and DR farms. Developers only need to register once from one of the farms and include the ID in the package, and the same package can be deployed across the PPE, production, and DR farms. If you choose to develop and test SharePoint apps in an on-premises environment first, you need to do some repackaging work to incorporate the SharePoint Online Dedicated app ID and secret key before moving the app to SharePoint Online Dedicated.
Screenshot for AppRegNew page:
If you choose to publish your provider-hosted app from the Visual Studio 2012 UI by using the Visual Studio publish wizard, Visual Studio prompts you for a client ID and client secret during the publishing process and it puts the information in the correct place for you.
In the Publish dialog box, choose Publish. The resulting app package file (which has the Windows Azure Web Site package inside) has an .APP extension and is saved in the app.publish subfolder of either the bin\Debug or bin\Release folder of your Microsoft Visual Studio 2012 project.
For detailed description of Visual Studio Features supporting SharePoint development, please see What's new in SharePoint development tools in Visual Studio 2012
As an alternative to using the publishing wizard, you can also modify the configuration and manifest files yourself, based on the IDs generated from the appregnew.aspx page. You must acquire these IDs from the target environment. The IDs for SharePoint Online Dedicated PPE, production, and DR are all the same.
The following is an example of how the ClientId value is used in the AppManifest.xml file:
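For illustration, in a provider-hosted app the client ID is carried in the AppPrincipal element of AppManifest.xml. The app name, product ID, start page URL, and GUID below are hypothetical placeholders; use the values generated for your app on the appregnew.aspx page.

```xml
<?xml version="1.0" encoding="utf-8" ?>
<App xmlns="http://schemas.microsoft.com/sharepoint/2012/app/manifest"
     Name="ContosoSampleApp"
     ProductID="{11111111-0000-0000-0000-000000000001}"
     Version="1.0.0.0"
     SharePointMinVersion="15.0.0.0">
  <Properties>
    <Title>Contoso Sample App</Title>
    <StartPage>https://contosoapp.example.com/Pages/Default.aspx?{StandardTokens}</StartPage>
  </Properties>
  <AppPrincipal>
    <!-- ClientId is the app ID generated on the appregnew.aspx page -->
    <RemoteWebApplication ClientId="11111111-2222-3333-4444-555555555555" />
  </AppPrincipal>
</App>
```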
In the Web.config file in your Visual Studio project, enter the app ID value as the ClientId value.
The following is an example of how the values are used in the Web.config file of a Web application:
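For illustration, the remote Web application typically carries the same client ID, plus the client secret from appregnew.aspx, in the appSettings section of Web.config, where the TokenHelper class reads them. The GUID and secret below are placeholders.

```xml
<configuration>
  <appSettings>
    <!-- Values generated on the appregnew.aspx page; placeholders shown here -->
    <add key="ClientId" value="11111111-2222-3333-4444-555555555555" />
    <add key="ClientSecret" value="your-client-secret-here" />
  </appSettings>
</configuration>
```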
You can use Visual Studio to deploy and debug apps on your own farm for testing purpose. If you use Visual Studio to deploy to other site templates, you encounter the following error during deployment: Install app for SharePoint: Sideloading of apps is not enabled on this site.
For deploying to the SharePoint Online Dedicated PPE/PROD environment, please use the publishing wizard or your own integrated build procedure to generate the .app package, then upload the .app package, which contains the farm-generated ID and secret, to the App Catalog site for deployment. Once the app package is uploaded, it is available for installation in the target farm. SharePoint Online Dedicated farms (PPE, PROD, and DR) share the same app ID and client secret for the same app.
Log in to the Microsoft SharePoint Online Farm (PPE or PROD).
Navigate to the corporate catalog site pre-created for your Web application.
On the App Catalog page, choose the upload link, browse to your app for SharePoint package on the Add a document form and choose OK. An item property form opens.
Fill out the form as needed and choose Save. The app for SharePoint is saved in the catalog.
Navigate to any Website in the Web application and choose Site Contents to open the Site Contents page.
Choose Add an App, and on the Add an App page, find the app. If there are too many to scroll through, you can enter any part of the app title (that you entered for Title in the AppManifest.xml file) into the search box.
When you find the app, choose the Details link beneath it, and then on the app details page that opens, choose Add It.
You are prompted to grant it permissions. Choose Trust It in the dialog box.
The Site Contents page opens, and the app is listed. For a short time a message below the title indicates that it is being added. When this message disappears, you can choose the app title to launch the app. (You may need to refresh the page before the message disappears.) In the continuing example of this article, because the Start Page was set to the URL of the remote Web application's Default.aspx page, that page opens.
In order to deploy the app packages, the IT admin must have proper permission to the App Catalog site (/sites/appcatalog) as a member of the site Owners or Designers group for the App Catalog. The catalog site admin can choose to grant additional permissions for delegated management tasks.
An app for SharePoint has an app scope. The two possible app scopes are Web scope and tenant scope. In a SharePoint Online Dedicated farm, the tenant in this context is the Web application that hosts the catalog site. The difference is not an intrinsic property of the app, and you do not decide what the scope of your app is. The decision is made by IT administrators, and it is determined by how the app is installed. After an app is uploaded to the app for SharePoint App Catalog of a tenancy, it is immediately available to be installed on Websites within the tenancy on a Website-by-Website basis. Apps that are installed this way have Web scope. Customer IT administrators have another option: after installing an app at the catalog site, they can batch-install it by using the deploy button to deploy it to various filtered target sites.
If an app that includes an app web is batch-installed, only one app web is created and shared by all of the host web sites on which the app is installed. The app web is located in the site collection of the corporate App Catalog.
A batch installed app can be removed by uninstalling it from the catalog site. For batch deployed apps, each target site collection user can see the app. When a user clicks the app launch link, they are routed to the App Catalog site, which is the host Web for the shared app.
Remember, uninstalling apps for SharePoint must be clean: everything installed by the app must be uninstalled, with no artifacts left behind.
Note that even though an app is deployed as a tenant scoped app, if site owners can see the app, they can still install the app within their own site. In that case, you see two app links within the site: one is called shared, and the other is the site collection hosted instance.
There are also some additional limitations on what kind of apps can be batch-installed. For details, please see Tenancies and deployment scopes for apps for SharePoint .
App Catalogs are created for each Web application (for example, http://[Web application name]/sites/appcatalog). App Catalog sites can only be accessed by customer IT admins who have a Web application policy that grants explicit Full Control rights to all site collections in a Web application. IT admins who have rights may use the standard site permissions UI to grant additional access to users. Users must be given read permission in order to browse the App Catalog from their site collections. You can use permissions to control which users can see the newly deployed apps and install them on their own site. See Configure App Catalog Site for details.
Once apps are available in the App Catalog, end users can search for them in their own site collections. See Manage the App Catalog in SharePoint 2013 for details on how to add apps to and remove apps from the App Catalog.
App Catalog admins can also perform app store purchase request management tasks within the catalog site. In addition to request management, the license management delegation is also managed within the site. See App Request management section under the App Catalog Site management article for details.
From the App Catalog site, the admin can manage store app license assignment. See Monitor and manage app licenses in SharePoint Server 2013 for details regarding app license management.
App Catalog site permissions and features
On SharePoint Online Dedicated, there is an App Catalog for each Web application (team, portal, mysite); apps are therefore targeted at each Web application. You can assign separate owners to manage each Web application’s corporate App Catalog site. As the catalog administrator, the user can manage app purchase requests, license allocation and license delegation, purchase store apps, or deploy internal apps so they are available for each site collection owner to install within their own site collection. The catalog admin can also deploy a tenant scoped app, meaning the App Catalog admin can choose to deploy the app at a centralized location and make it accessible to many site collections. The App Catalog admin can also upgrade apps by deploying a new version; all site collection owners can then deploy the latest version within their sites. App Catalog owners can remove apps from the available app list by uninstalling the app. No new instance can be deployed after an app is removed from the App Catalog.
Once the app is deployed at the site collection, the owner of the site collection can monitor the usage and any errors associated with the app. The usage data is collected on a daily basis.
By default, a SharePoint Online Dedicated farm is configured to support app store purchase. If your organization wants to turn it off, you can request this using the CR process.
Related CR: SPOD-13-202 App Store Settings.
Description: This request prevents site collection admins from installing a SharePoint App Store app on their own sites. By default, site collection admins can install apps directly. You can file this CR so that all app store installations must go through the app request management process: your App Catalog admin approves app purchase requests before an end user can install the app. Customers can build a custom workflow to handle the app request approval process.
Please use CR SPOD-13-202 App Store Settings to require users to get App Catalog admin approval before they can install apps directly from the store.
An update to an app is deployed in app for SharePoint packages in the same way that the first version of the app is deployed. The app for SharePoint update process ensures that the app's data is preserved if the update fails for any reason. For an update, use the same product ID in the app manifest that you used for the original version; the version number in the app manifest should be greater than the version number of the original app or the most recent update. There are additional considerations when planning upgrades, such as data migration and compatibility. For complete guidance, please see App for SharePoint update process.
Please see Add troubleshooting instrumentation to an app for SharePoint for more information on using the UI for troubleshooting and instrumentation support. IIS/ASP.NET-level tracing can only be configured at the app hosting location (customer app servers, Azure). SharePoint Online Dedicated has its own monitoring in place, which cannot be customized. For customer troubleshooting tips, please review the troubleshooting section.
This section describes the policies established by Microsoft for custom solution deployments to the SharePoint Online service.
SPOD App Model Solution Policy Deployment Paths
For app development, customers are not required to build an on-premises farm following our build guide. Customers can instead use Developer Sites in production to create and test apps before deploying to the production App Catalogs.
App Catalog sites are created as part of the standard SharePoint 2013 SPO-D farm build. Customers can create their own developer site collections with the Developer Site template. Note that for upgrade customers, 2013 site creation must be enabled to access this new SharePoint 2013 site template.
Follow the SharePoint 2013 published software boundaries
Recommended Guidelines for Web Apps
The severity levels assigned to incidents caused by non-functional apps in Production are defined in the Support Service Description. As the apps are sequestered to a separate Web application that is hosted locally or remotely, the impact to the SharePoint farm is limited. If an app impacts the stability and service uptime levels of the SharePoint Online farm, Microsoft might remove the app from the catalog to contain the impact.
If an app is having production issues, customer IT admins can remove the app from the catalog to prevent it from being further deployed. The customer is responsible for the deployment and removal of the app.
If the app has already been deployed to site collections, each site collection admin can monitor the app instance usage, view error information, and uninstall the app.
With the new App Model, the server code and the core solution logic run either on the client side or hosted on an application server. SharePoint Online Dedicated continues to monitor the health of the farm to ensure you have a healthy farm to support your solution. The customer needs to monitor the health, performance, and capacity of the application servers hosting the app. The specific monitoring tool and process varies depending on the Web application implementation. For more information about Microsoft Web application stacks, see the following:
For reasons of confidentiality, it is the policy of Microsoft not to provide ULS, IIS, application, security, and system logs to customers. If you have an issue affecting your deployment environment, the resolution of which may require examination of log contents, SharePoint Online provides assistance when you do the following:
Submit an SR for the issue.
Provide detailed steps so that Microsoft can reproduce the issue.
Microsoft then makes use of the appropriate log information to resolve the issue.
We support SharePoint App Store Solutions. For details, please see Hosting options for apps for SharePoint on how to deploy App Store Apps.
This content cannot be displayed in a frame error − This error is caused by cookie sharing across different IE zones.
Multiple login prompts when using the app − To prevent this issue, please add the following App Management URL domains to your local intranet zones:
*.ppexxxdspoapp.com to local intranet sites
*.xxxdspodapp.com to local intranet sites
Where xxx is a unique number assigned to each customer. See the IT requirements doc.
SAML claim authentication − SharePoint hosted apps are not supported on Web applications that use SAML claim authentication.
The specified application identifier [cbd25585-947e-4b9d-8686-ed695101ed9a] is invalid or does not exist − Resolve this issue by creating the ClientID and ClientSecret using the _layouts/15/appregnew.aspx page in your target farm. You need to register your app on the target SharePoint farm where you intend to deploy your app. You can then use the generated ClientID and ClientSecret when packaging your app in your development environment. The SPOD PPE, PROD, and DE farms share the same ACS instance, so the same ClientID/secret can be used in the same package across all three farms.
App removal − When you remove an app, you are not only uninstalling the app itself but also removing the entire app web and the content it contains.
App permission request scope − See App Permissions for details.
Aggregated app monitoring view is not supported − Site collection admins must go into each site collection to view app instance usage. There is no central admin-level view for registering an app to monitor, or for viewing the total instances installed and their aggregated usage.
Apps for Office may not start if you disable protected mode in the Internet zone in IE.
Batch removal of apps from multiple site collections − PowerShell is used to support batch operations for on-premises farms. Currently, SPO does not offer this feature. For more information, see Remove app for SharePoint instances from a SharePoint 2013 site.
SharePoint 2013 Best Practices
App upgrades must have incremented versions to enable existing users the ability to download the new version. Versions are in this format: 1.0.0.0.
If you do not increment the version in the new upgrade XML file within the app package, no one is able to download the new app to their site.
For an app hosted on SharePoint, authentication and authorization are not an issue, because all your app resources (pages/lists) are stored in the app-Web scope and SharePoint Online takes care of them. However, for provider-hosted and auto-hosted apps, the app resources are stored either on the developer-hosted server or on Antares (Azure). You need to protect them yourself and deal with situations where access is needed to host Web resources. Authentication and authorization considerations are therefore important when using these two hosting methods.
Apps are a new concept for SharePoint 2013, empowering end users to add new functionality to their sites while still ensuring security and reliability for the SharePoint site itself. Creating a good app requires not just making awesome functionality (although that’s obviously important), but also ensuring that the app looks right and fits seamlessly into the site where it’s installed. SharePoint Online Dedicated provides several ways to make your UX follow the SharePoint Online Dedicated style:
Use this procedure to add Chrome controls to your page.
Add the following reference to the JavaScript:
Add the following div section in your page content:
Set the options in your JavaScript.
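The code snippets for these steps were not included here; a minimal sketch might look like the following. The placeholder id and the option values are illustrative assumptions — check the chrome control documentation for your farm before relying on them:

```html
<!-- 1. Reference the chrome control script from the host's layouts folder -->
<script type="text/javascript" src="/_layouts/15/SP.UI.Controls.js"></script>

<!-- 2. The div the control renders into (id is an assumption) -->
<div id="chrome_ctrl_placeholder"></div>

<!-- 3. Set the options and render the control -->
<script type="text/javascript">
    var options = {
        appIconUrl: "AppIcon.png",      // illustrative values
        appTitle: "My App",
        appHelpPageUrl: "Help.html"
    };
    var nav = new SP.UI.Controls.Navigation("chrome_ctrl_placeholder", options);
    nav.setVisible(true);
</script>
```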
Set the host site URL dynamically in your code, because you do not know where the app will be installed during development.
For CSS, you need to reference defaultcss.ashx in your page. Find the host Web URL and use it when you generate the HTML. Remember that this does NOT guarantee the page updates when the host Web’s theme changes, as the CSS is cached by the browser.
Logging and tracing are very important for every app during troubleshooting. SharePoint Online provides an API for logging custom errors: developers can use the client-side object model API to report errors in their code to the administrator.
This API can be used with all of the hosting methods. The site collection admin can use the monitor callout to check runtime errors, as shown below:
The App Callout
For the CSOM API, you need to use OAuth to communicate back to the SharePoint site to make the API call. To do this, use the following steps:
Get the refresh token from the context token
Use the refresh token to get the access token
Use the access token to get the client context
Use the client context to call the Logging API
You have to update an app for SharePoint if you add functionality, fix a bug, or make a security update. An update to an app is deployed in app for SharePoint packages in the same way that the first version of the app is deployed. The app for SharePoint update process ensures that the app's data is preserved if the update fails for any reason. Please review App for SharePoint update process for the basic concept of the app upgrade.
For SharePoint-hosted apps, if you have defined module files in the app Web, you need to add the ReplaceContent attribute to replace the contents of existing files during upgrade, like below:
Also add an Upgrade Actions section in your feature.xml specifying which ElementManifests to run on upgrade, like below:
Also, if you have a Client Web Part defined in the app Web, keep the feature ID the same as in the previous version to make sure the part is not deleted after the upgrade.
If the app is provider-hosted, you provide the update logic for all the non-SharePoint components of the app. This logic can go in the Post Update Web service or in first run after update logic of the app itself.
Troubleshooting App Model development in SharePoint Online Dedicated environments can be broadly divided into three distinct categories: basic App Model troubleshooting, HTTP troubleshooting, and CSOM troubleshooting. Each category requires knowledge of Web development principles, techniques, and tools, alongside a firm understanding of the SharePoint App Model development process (see Build Apps for SharePoint).
App Catalogs are used by the SharePoint App Model as a repository for available apps that can be deployed throughout the environment. You can perform basic troubleshooting of the App Catalog to identify and resolve a number of potential issues, such as the following:
Are you seeing inconsistent behaviors between Web applications? Each Web application uses a different App Catalog, which may lead to different apps being available at different URLs. If you are seeing inconsistent app availability across Web applications, consider uploading the required apps to the App Catalog defined for each Web application.
If some users cannot see specific apps, they may not have access to the App Catalog site collection for the Web application, or item-level security may be preventing them from accessing specific items in the App Catalog. Verify that the users attempting to add the app have access to the App Catalog site collection via the SharePoint permissions check.
If you have chosen to allow apps to be added to your SharePoint environment from the SharePoint store, you may need to manage the licenses associated with these apps. Typically, one or more license managers are assigned to handle these activities. It is important that you maintain adequate licenses for each app to support your usage patterns.
From time to time it may be necessary to monitor app usage, runtime errors, or upgrade errors. As a site collection owner, you can see, from the site apps usage page, the total number of app instances and any errors they encountered during installation, at runtime, or while upgrading. From the app error list you can determine whether to remove the app because it produces too many errors or is not working as expected.
Occasionally, it may be necessary to use tools outside of SharePoint to inspect, detect and correct errors in apps or the supporting technologies required to operate apps successfully. These tools can help inspect the flow of HTTP requests and responses from the client machine between either the SharePoint Online Dedicated SharePoint host Web and app Web or the SharePoint Online Dedicated SharePoint host Web and provider hosted app Web. One such example of these tools is the Internet Explorer Developer Tools (see Discovering Windows Internet Explorer Developer Tools ).
When troubleshooting apps, network captures performed via the Internet Explorer Developer Tools can provide a great deal of information about app behavior and cross-Web communications. To start a network capture, navigate to the Network tab within the developer tools and click Start Capture. Now access your app and observe the HTTP requests and responses made during app execution; stop the capture once your app has loaded and executed. Once a capture has been gathered, verify that the HTTP requests and responses are as expected, paying particular attention to the HTTP status codes. With the exception of initial authentication handshaking, a 401 status code warrants further investigation. Status codes in the 500s indicate an app or Web problem and should be investigated.
A significant number of App Model developments utilize the SharePoint Client Side Object Model (CSOM), and in turn a high proportion of the CSOM used by apps is JavaScript. Knowing how to troubleshoot JavaScript code is therefore a valuable skill.
By using the Internet Explorer Developer Tools, app developers can inspect, interact and debug JavaScript scripts and functions in real time.
Apps for SharePoint and Office TechNet Blogs
SharePoint Mobile Offerings
SharePoint 2013 Branding Samples
Using VS2012 to develop and production debugging for SharePoint 2013
How to: Use an Office 365 SharePoint site to authorize provider-hosted apps on an on-premises SharePoint site
Configure app authentication in SharePoint Server 2013
Reimagine SharePoint Development
sprintf() prototype
int sprintf( char* buffer, const char* format, ... );
The sprintf() function writes the string pointed to by format to buffer. The string format may contain format specifiers starting with %, which are replaced by the values of the variables passed to sprintf() as additional arguments.
It is defined in the <cstdio> header file.
sprintf() Parameters
- buffer: Pointer to the character buffer that the result is written to.
- format: Pointer to a null-terminated string that specifies how to interpret the additional arguments.
- ...: Additional arguments whose values replace the format specifiers in format.
sprintf() Return value
- On success, the sprintf() function returns the number of characters written to buffer, excluding the terminating null character.
- On failure, it returns a negative value.
Example: How sprintf() function works
#include <cstdio>
#include <iostream>
using namespace std;

int main() {
    char buffer[100];
    int retVal;
    char name[] = "Max";
    int age = 23;

    retVal = sprintf(buffer, "Hi, I am %s and I am %d years old", name, age);
    cout << buffer << endl;
    cout << "Number of characters written = " << retVal << endl;

    return 0;
}
When you run the program, the output will be:
Hi, I am Max and I am 23 years old
Number of characters written = 34
mul

paddle.fluid.layers.mul(x, y, x_num_col_dims=1, y_num_col_dims=1, name=None) [source]
Mul Operator. This operator is used to perform matrix multiplication for inputs $x$ and $y$. The equation is:

\[Out = x * y\]
Both the input $x$ and $y$ can carry the LoD (Level of Details) information, or not. But the output only shares the LoD information with input $x$.
- Parameters
x (Variable) – The first input Tensor/LoDTensor of mul_op.
y (Variable) – The second input Tensor/LoDTensor of mul_op.
x_num_col_dims (int, optional) – The mul_op can take tensors with more than two dimensions as its inputs. If the input $x$ is a tensor with more than two dimensions, $x$ will be flattened into a two-dimensional matrix first. The flattening rule is: the first num_col_dims will be flattened to form the first dimension of the final matrix (the height of the matrix), and the rest rank(x) - num_col_dims dimensions are flattened to form the second dimension of the final matrix (the width of the matrix). As a result, height of the flattened matrix is equal to the product of $x$’s first x_num_col_dims dimensions’ sizes, and width of the flattened matrix is equal to the product of $x$’s last rank(x) - num_col_dims dimensions’ size. For example, suppose $x$ is a 6-dimensional tensor with the shape [2, 3, 4, 5, 6], and x_num_col_dims = 3. Thus, the flattened matrix will have a shape [2 x 3 x 4, 5 x 6] = [24, 30]. Default is 1.
y_num_col_dims (int, optional) – The mul_op can take tensors with more than two dimensions as its inputs. If the input $y$ is a tensor with more than two dimensions, $y$ will be flattened into a two-dimensional matrix first. The attribute y_num_col_dims determines how $y$ is flattened. See comments of x_num_col_dims for more details. Default is 1.
name (str, optional) – Name of the output. Normally there is no need for user to set this property. For more information, please refer to Name. Default is None.
- Returns
The output Tensor/LoDTensor of mul op.
- Return type
Variable(Tensor/LoDTensor)
Examples
import paddle.fluid as fluid

dataX = fluid.layers.data(name="dataX", append_batch_size=False, shape=[2, 5], dtype="float32")
dataY = fluid.layers.data(name="dataY", append_batch_size=False, shape=[5, 3], dtype="float32")
output = fluid.layers.mul(dataX, dataY, x_num_col_dims=1, y_num_col_dims=1)
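The flattening rule described above can be sketched in plain Python. This is an illustrative model of the shape arithmetic only — the function name and the error handling are my own, not part of Paddle's API:

```python
from functools import reduce
from operator import mul as _mul


def _prod(dims):
    """Product of a list of dimension sizes (empty product is 1)."""
    return reduce(_mul, dims, 1)


def mul_output_matrix_shape(x_shape, y_shape, x_num_col_dims=1, y_num_col_dims=1):
    """Shape of the 2-D matrix produced by mul's internal flattening.

    x is flattened to [prod(first x_num_col_dims dims), prod(rest)],
    y is flattened to [prod(first y_num_col_dims dims), prod(rest)],
    and the flattened matrices are multiplied.
    """
    x_flat = [_prod(x_shape[:x_num_col_dims]), _prod(x_shape[x_num_col_dims:])]
    y_flat = [_prod(y_shape[:y_num_col_dims]), _prod(y_shape[y_num_col_dims:])]
    if x_flat[1] != y_flat[0]:
        raise ValueError("inner dimensions do not match: %r vs %r" % (x_flat, y_flat))
    return [x_flat[0], y_flat[1]]


# The documentation's example: [2, 3, 4, 5, 6] with x_num_col_dims=3
# flattens to [24, 30]; against a [5, 6, 7] y with y_num_col_dims=2
# (flattened to [30, 7]) the matrix product has shape [24, 7].
print(mul_output_matrix_shape([2, 3, 4, 5, 6], [5, 6, 7], 3, 2))  # [24, 7]
```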
A Data Scientist happened upon a load of stuff – junk, at first glance – and wondered, as was his wont, “what can I get out of this?”
Conceded: as an opening line this is less suited to a tech blog than an old-fashioned yarn. In an (arguably) funny way, this isn’t far from the truth. My answer: get some holism down your neck. Make it into a modest, non-production Hadoop cluster and enjoy a large amount of fault-tolerant storage, faster processing of large files than you’d get on a single high-spec machine, the safety of not having placed all your data-eggs in one basket, and an interesting challenge. Squeeze the final, and not inconsiderable, bit of business value out of it.
To explain, when I say “stuff”, what I mean is 6 reasonable but no longer DC-standard rack servers, and more discarded dev desktops than you can shake a duster at. News of the former reached my Data Scientist colleague and I by way of a last call before they went into the skip; I found the latter buried in the boiler room when looking for bonus cabling. As a northerner with a correspondingly traumatic upbringing, instinct won out and, being unable to see it thrown away, I requested to use the hardware.
I’m not gonna lie. They were literally dumped at my feet, “unsupported”. Fortunately, the same qualities of character that refused to see the computers go to waste saw me through the backbreaking physical labour of racking and cabling them up. Having installed Ubuntu Server 13 on each of the boxes, I had soon pinged my desktop upstairs successfully and could flee the freezing server room to administrate from upstairs. Things picked up from here, generally speaking.
The hurdle immediately ahead was the formality of installing and correctly configuring Hadoop on all of the boxes, and this, you may be glad to know, brings me to the point of this blog post. Those making their first tentative steps into the world of Hadoop may be interested to know how exactly this was achieved, and indeed, I defy anyone to point me towards a comprehensive Hadoop-from-scratch quick start which leaves you with a working version of a recent release of Hadoop. Were it not for the fact that Hadoop 2.x has significant configuration differences to Hadoop 1.x, Michael Noll’s excellently put-together page would be ideal. It’s still a superb pointer in itself and was valuable to me during my first youthful fumblings with Hadoop 18 months ago. The inclusion of important lines of bash neatly quashes the sorts of ambiguity that may arise from instructions like “move the file to HDFS” which you sometimes find.
In any case, motivated by the keenness to see cool technology adopted as easily and widely as possible, I propose in this to briefly explain the configuration steps necessary to get me into a state of reverse cartography. (Acknowledged irony: there will probably be a time when someone reads this and it’s out of date. Apologies in advance.) Having set up a single node, it’s actually more of a hassle to backtrack over your configuration to add more nodes than to just go straight to a multi-node cluster. Here’s how to do the latter.
Setting the Scene
The Hadoop architecture can be summarised in saying that it elegantly facilitates doing two things in a distributed manner: storing files, and processing files. The two poles of the Hadoop world which respectively deal with these are known as the DFS (distributed file system) layer, and the MapReduce layer. Each layer knows about the other, but can, and indeed must, be configured, administrated and used fairly independently across a cluster. There’s an interesting history to both of these computing paradigms, and many papers written by the likes of Google and Facebook describing their inner workings. A quick YouTube search yields some equally illuminating talks. My personal favourites on the two layers are this for HDFS and this for MapReduce.
Typically a cluster of n computers (nodes) will have 1 master node, which coordinates the distribution of storage and processing, and (n-1) slave nodes which do the actual stuff. The modules (daemons) in Hadoop 2.x which control all of this are the NameNode (HDFS master) and DataNodes (HDFS slaves) on the storage side, and the ResourceManager (YARN master) and NodeManagers (YARN slaves) on the processing side.
Obligatory diagram:
Based on your current cluster setup, Hadoop makes a bunch of intelligent decisions about where to put files, and which machines to do certain bits of processing on, motivated by maximising redundancy and fault tolerance by clever replication choices, minimising network overhead, optimally leveraging each bit of hardware, and so on. The way that the architecture makes these decisions in such a way that you, the Hadoop developer, don’t have to worry about them, is where the real beauty and power of Hadoop lies. We’ll see later in this blog how, whilst HDFS and MapReduce are breathtakingly complex and scalable under the bonnet, leveraging their power is no more difficult than performing normal straightforward file system operations and file processing in Linux.
So. At this stage, all you have is a collection of virginal, disparate machines that can see each other on the network, but beyond that share no particular sense of togetherness. Each must undergo the same setup procedure before it’s ready to pull its weight in the cluster. In a production environment, this would be achieved by means of an automated deployment script, so that nodes could be added easily and arbitrarily, but that is both overkill and an unnecessary complication here. Good old-fashioned Bash elbow grease will see us through.
Having said that, one expedient whose virtues I will extol is a little gem of software called SuperPutty, which will send the same command from any single Windows PC to all the Linux boxes simultaneously, in so doing greatly reducing repetitiveness and cutting out chances for human error:
Using SuperPutty to send commands en-masse is only the same as doing the same thing on each box in sequence.
Connect to all the boxes and make sure you’re at the same bash prompt on all of them. SuperPutty will let you store connection authentication details to save you even more time in swiftly connecting to every machine in your cluster. (Disclaimer: if you do store passwords, anyone with Linux knowledge who finds your unattended, unlocked PC could connect to your cluster and perform wild-rogue Hadoop operations on your data. Think carefully.)
Masters and Slaves
One of your computers will be the master node, and the rest slaves. The master’s disks are the only ones that need to have an appropriate RAID configuration, since Hadoop itself handles replication in a better way in HDFS: choose JBOD for the slaves. If one of your machines stands above the rest in terms of RAM and/or processing power, choose this as the master.
Since Hadoop juggles data around amongst nodes like there’s no tomorrow, there are a few networking prerequisites to sort, to make sure it can do this unimpeded and all nodes can communicate freely with each other.
Hosts
Working with IPs is a lot like teaching cats to read: it quickly becomes tedious. The file /etc/hosts enables you to specify names for IP addresses, then you can just use the names. Every node needs to know about every other node. You’ll want your hosts file on each of the boxes to look something like this so you can refer to (eg) slave 11 without having to know (or calculate!) slave 11’s IP:
123.1.1.25 master
123.1.1.26 slave001
123.1.1.27 slave002
123.1.1.28 slave003
123.1.1.29 slave004
... etc
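If you have many slaves, these entries can be generated rather than typed by hand. A throwaway sketch — the 123.1.1.x addresses match the illustration above and are assumptions about your own network:

```shell
# Emit /etc/hosts entries for one master and N slaves on a contiguous IP range.
gen_hosts() {
    base=25                                  # master at 123.1.1.25 (assumed)
    n_slaves=$1
    echo "123.1.1.$base master"
    i=1
    while [ "$i" -le "$n_slaves" ]; do
        printf '123.1.1.%d slave%03d\n' $((base + i)) "$i"
        i=$((i + 1))
    done
}

gen_hosts 4
```

Pipe the output through `sudo tee -a /etc/hosts` on each box (via SuperPutty) to apply it everywhere at once.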
It’s also a good idea to disable IPv6 on the Hadoop boxes to avoid potential confusion regarding localhost addresses… Fire every box the below commands to append the necessary lines to /etc/sysctl.conf…
sean@node:~$ echo "#disable ipv6" | sudo tee -a /etc/sysctl.conf
sean@node:~$ echo "net.ipv6.conf.all.disable_ipv6 = 1" | sudo tee -a /etc/sysctl.conf
sean@node:~$ echo "net.ipv6.conf.default.disable_ipv6 = 1" | sudo tee -a /etc/sysctl.conf
sean@node:~$ echo "net.ipv6.conf.lo.disable_ipv6 = 1" | sudo tee -a /etc/sysctl.conf
The machines need to be rebooted for the changes to come into effect…
sean@node:~$ sudo shutdown -r now
Once they come back up, run the following to check whether IPv6 has indeed been disabled. A value of 1 would indicate that all is well.
sean@node:~$ cat /proc/sys/net/ipv6/conf/all/disable_ipv6
Setting up the Hadoop User
For uniformity across your cluster, you’ll want to have a dedicated Hadoop user with which to connect and do work…
sean@node:~$ sudo addgroup hadoop
sean@node:~$ sudo adduser --ingroup hadoop hduser
sean@node:~$ sudo adduser hduser sudo
We’ll now switch users and work as the new Hadoop user…
sean@node:~$ su - hduser
hduser@node:~$
SSH Promiscuity
Communication between nodes take place by way of the secure shell (SSH) protocol. The idea is to enable every box to passwordlessly use an SSH connection to itself, and then copy those authentication details to every other box in the cluster, so that any given box is on familiar terms with any other and Hadoop is unshackled to work its magic!
Firstly, send every box the instruction to make a passwordless SSH key to itself for hduser:
hduser@node:~$ ssh-keygen -t rsa -P ""
Bash will prompt you for a location in which to store this newly-created key. Just press enter for default:
Generating public/private rsa key pair.
Enter file in which to save the key (/home/hduser/.ssh/id_rsa):
Created directory '/home/hduser/.ssh'.
Your public key has been saved in /home/hduser/.ssh/id_rsa.pub
The key fingerprint is: 9b:82...................:0e:d2 hduser@ubuntu
The key's randomart image is: [weird ascii image]
Copy this new key into the local list of authorised keys:
hduser@node:~$ cat $HOME/.ssh/id_rsa.pub >> $HOME/.ssh/authorized_keys
The final step in enabling local SSH is to connect – this will save the fingerprint of the host to the list of familiar hosts.
hduser@node:~$ ssh hduser@localhost
The authenticity of host 'localhost (::1)' can't be established.
RSA key fingerprint is d7:87...............:36:26
Are you sure you want to continue connecting? yes
Warning: permanently added 'localhost' (RSA) to the list of known hosts.
Now, to allow all the boxes to enjoy the same level of familiarity with each other, fire them all this command:
hduser@node:~$ ssh-copy-id -i $HOME/.ssh/id_rsa.pub hduser@master
This will make every box send its SSH key to the master node. Unfortunately, you have to repeat this to tell every box to send its key to every node…
hduser@node:~$ ssh-copy-id -i $HOME/.ssh/id_rsa.pub hduser@slave001
hduser@node:~$ ssh-copy-id -i $HOME/.ssh/id_rsa.pub hduser@slave002
hduser@node:~$ ssh-copy-id -i $HOME/.ssh/id_rsa.pub hduser@slave003
etc...
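Those repeated invocations can be scripted rather than typed. A small sketch, assuming the hostnames really are master plus zero-padded slave001, slave002, and so on; the loop only echoes each command as a dry run, so drop the echo to actually execute them:

```shell
# Dry run: print the ssh-copy-id command for every node in the cluster.
# Adjust the "3" to however many slaves you have; drop "echo" to execute.
for host in master $(seq -f "slave%03g" 1 3); do
  echo ssh-copy-id -i "$HOME/.ssh/id_rsa.pub" "hduser@$host"
done
```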
Finally, and this is also a bit tedious, via SuperPutty make every box SSH to each box in turn and check that all’s well. Ie, send them all:
hduser@node:~$ ssh master
…check that they all have…
hduser@node:~$ ssh slave001
… check that they all have… etc.
This is a one-time thing; after any box has connected to any other one time, the link between them remains.
Java
The next prerequisite to sort is a Java environment, as the Hadoop core is written in Java (although you can harness the power of MapReduce in any language you please, as we shall see). If you’re fortunate, your machines will have internet access, in which case fire the following command to them all using SuperPutty:
hduser@node:~$ sudo apt-get install openjdk-6-jre
If, like mine, however, your machines were considered ticking chemical time bombs by infrastructure and hence weren’t granted internet access, what you’ll want to do is download a JDK to a computer that does have internet access and can also see your Hadoop boxes on the network, and fire the files over from there. So, on your internet-connected box:
32 bit version:
hduser@node:~$ wget --no-cookies --no-check-certificate --header "Cookie: gpw_e24="
64 bit version:
hduser@node:~$ wget --no-cookies --no-check-certificate --header "Cookie: gpw_e24="
Now then! Each of your Hadoop nodes wants to connect to this box and pull over the Java files. Find its IP by typing ifconfig, and then fire this command to all of your Hadoop nodes:
hduser@node:~$ scp user@internetbox:/locationoffile/rightarchitecturefile.bin $HOME
Be careful to get the edition matching the machine, be it 32bit or 64bit.
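One way to check which edition a given box needs before copying the file over; a sketch assuming only 32-bit x86 and x86_64 machines are in play:

```shell
# Print which Java download this machine needs, based on its architecture.
case "$(uname -m)" in
  x86_64) echo "grab the 64-bit file" ;;
  i*86)   echo "grab the 32-bit file" ;;
  *)      echo "unexpected architecture: $(uname -m)" ;;
esac
```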
Now execute the following on the Hadoop machines to install Java…
32 bit machines:
hduser@node:~$ chmod u+x jre-6u34-linux-i586.bin
hduser@node:~$ ./jre-6u34-linux-i586.bin
hduser@node:~$ sudo mkdir -p /usr/lib/jvm
hduser@node:~$ sudo mv jre1.6.0_34 /usr/lib/jvm/
hduser@node:~$ sudo update-alternatives --install "/usr/bin/java" "java" "/usr/lib/jvm/jre1.6.0_34/bin/java" 1
hduser@node:~$ sudo update-alternatives --install "/usr/lib/mozilla/plugins/libjavaplugin.so" "mozilla-javaplugin.so" "/usr/lib/jvm/jre1.6.0_34/lib/i386/libnpjp2.so" 1
hduser@node:~$ sudo update-alternatives --install "/usr/bin/javaws" "javaws" "/usr/lib/jvm/jre1.6.0_34/bin/javaws" 1
hduser@node:~$ sudo update-alternatives --config java
hduser@node:~$ sudo update-alternatives --config javac
hduser@node:~$ export JAVA_HOME=/usr/lib/jvm/jre1.6.0_34/
64 bit machines:
hduser@node:~$ chmod u+x jdk-6u45-linux-x64.bin
hduser@node:~$ ./jdk-6u45-linux-x64.bin
hduser@node:~$ sudo mv jdk1.6.0_45 /opt
hduser@node:~$ sudo update-alternatives --install "/usr/bin/java" "java" "/opt/jdk1.6.0_45/bin/java" 1
hduser@node:~$ sudo update-alternatives --install "/usr/bin/javac" "javac" "/opt/jdk1.6.0_45/bin/javac" 1
hduser@node:~$ sudo update-alternatives --install "/usr/lib/mozilla/plugins/libjavaplugin.so" "mozilla-javaplugin.so" "/opt/jdk1.6.0_45/jre/lib/amd64/libnpjp2.so" 1
hduser@node:~$ sudo update-alternatives --install "/usr/bin/javaws" "javaws" "/opt/jdk1.6.0_45/bin/javaws" 1
hduser@node:~$ sudo update-alternatives --config java
hduser@node:~$ sudo update-alternatives --config javac
hduser@node:~$ export JAVA_HOME=/opt/jdk1.6.0_45/
Finally, test by firing this at all machines:
hduser@node:~$ java -version
You should see something like this:
hduser@node:~$ java -version
java version "1.6.0_45"
Java(TM) SE Runtime Environment (build 1.6.0_45-b06)
Java HotSpot(TM) 64-Bit Server VM (build 20.45-b01, mixed mode)
Installing Hadoop
Download Hadoop 2.2.0 into the directory /usr/local from the best possible source:
hduser@node:~$ cd /usr/local
hduser@node:~$ wget
If your boxes don’t have internet connectivity, use the same workaround we used above to circuitously get Java.
Unzip, tidy up and make appropriate ownership changes:
hduser@node:~$ sudo tar xzf hadoop-2.2.0.tar.gz
hduser@node:~$ sudo mv hadoop-2.2.0 hadoop
hduser@node:~$ sudo chown -R hduser:hadoop hadoop
Finally, append the appropriate environment variable settings and aliases to the bash configuration file:
hduser@node:~$ echo "" | sudo tee -a $HOME/.bashrc
hduser@node:~$ echo "export HADOOP_HOME=/usr/local/hadoop" | sudo tee -a $HOME/.bashrc
hduser@node:~$ echo "" | sudo tee -a $HOME/.bashrc
#32 bit version:
hduser@node:~$ echo "export JAVA_HOME=/usr/lib/jvm/jre1.6.0_34" | sudo tee -a $HOME/.bashrc
#64 bit version:
hduser@node:~$ echo "export JAVA_HOME=/opt/jdk1.6.0_45" | sudo tee -a $HOME/.bashrc
hduser@node:~$ echo "" | sudo tee -a $HOME/.bashrc
hduser@node:~$ echo "unalias fs &> /dev/null" | sudo tee -a $HOME/.bashrc
hduser@node:~$ echo 'alias fs="hadoop fs"' | sudo tee -a $HOME/.bashrc
hduser@node:~$ echo "unalias hls &> /dev/null" | sudo tee -a $HOME/.bashrc
hduser@node:~$ echo 'alias hls="fs -ls"' | sudo tee -a $HOME/.bashrc
hduser@node:~$ echo "" | sudo tee -a $HOME/.bashrc
hduser@node:~$ echo 'export PATH=$PATH:$HADOOP_HOME/bin' | sudo tee -a $HOME/.bashrc
There are a few changes that must be made to the configuration files in /usr/local/hadoop/etc/hadoop which inform the HDFS and MapReduce layers. Editing these on every machine at once via SuperPutty requires skill, especially when, having made the changes, you realise that you can’t send an “escape” character to every machine at once. There’s a solution involving mapping other, sendable, characters to the escape key, but that’s “out of scope” here 😉 Here’s what the files should look like.
core-site.xml
It needs to look like this on all machines, master and slave alike:
[code language="xml"]
<configuration>
<property>
<name>fs.default.name</name>
<value>hdfs://master:9000</value>
</property>
<property>
<name>hadoop.tmp.dir</name>
<value>/usr/local/hadoop/tmp</value>
</property>
</configuration>[/code]
hadoop-env.sh
There’s only one change that needs to be made to this mofo; locate the line which specifies JAVA_HOME (helpfully commented with “the Java implementation to use”). Assuming a Java setup like that described above, this should read
32 bit machines:
export JAVA_HOME=/usr/lib/jvm/jre1.6.0_34/
64 bit machines:
export JAVA_HOME=/opt/jdk1.6.0_45/
hdfs-site.xml
This specifies the replication level of file blocks. Note that your physical storage size will be divided by this number to give the storage you’ll have in HDFS.
[code language="xml"]
<configuration>
<property>
<name>dfs.replication</name>
<value>3</value>
</property>
</configuration>[/code]
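As a back-of-envelope check of the replication note above, with purely hypothetical numbers (4 nodes with 1000 GB of raw disk each, and the replication factor of 3 set here):

```shell
# Usable HDFS capacity is roughly raw capacity divided by dfs.replication.
RAW_GB=$((4 * 1000))   # 4 nodes x 1000 GB
REPLICATION=3
echo "$((RAW_GB / REPLICATION)) GB usable"   # integer division: 1333 GB usable
```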
Additionally, it’s necessary to create a local directory on each box for Hadoop to use:
hduser@node:~$ sudo mkdir -p /app/hadoop/tmp
hduser@node:~$ sudo chown hduser:hadoop /app/hadoop/tmp
mapred-site.xml
Which MapReduce implementation to use. At the moment we’re on YARN (“Yet Another Resource Negotiator”).
[code language="xml"]
<configuration>
<property>
<name>mapreduce.framework.name</name>
<value>yarn</value>
</property>
</configuration>[/code]
yarn-site.xml
Controls the actual MapReduce configuration. Without further ado, this is what you want:
[code language="xml"]
<configuration>
<property>
<name>yarn.resourcemanager.resource-tracker.address</name>
<value>master:8025</value>
</property>
<property>
<name>yarn.resourcemanager.scheduler.address</name>
<value>master:8030</value>
</property>
<property>
<name>yarn.resourcemanager.address</name>
<value>master:8040</value>
</property>
</configuration>[/code]
Slaves
In short, the master needs to consider itself and every other node a slave. Each slave needs to consider itself, and itself only, a slave. The entirety of your slaves file ought to look like this:
Master:
master
slave001
slave002
slave003
etc
Slave xyz:
slavexyz
Formatting the Filesystem
Much like manually deleting your data, formatting a HDFS filesystem containing data will delete any data you might have in it, so don’t do that if you don’t want to delete your data. Warnings notwithstanding, execute the following on the master node to format the HDFS namespace:
hduser@master:~$ cd /usr/local/hadoop
hduser@master:~$ bin/hadoop namenode -format
Bringing up the Cluster
This is the moment that the band strikes up. If you’re not already there, switch to the Hadoop directory…
hduser@master:~$ cd /usr/local/hadoop
Fire this shizz to start the DFS layer:
hduser@master:/usr/local/hadoop$ sbin/start-dfs.sh
You should see this kind of thing:
WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
starting namenodes on [master]
master: starting namenode, logging to /usr/local/hadoop/logs/hadoop-hduser-namenode-master.out
slave001: starting datanode, logging to /usr/local/hadoop/logs/hadoop-hduser-datanode-slave001.out
slave002: starting datanode, logging to /usr/local/hadoop/logs/hadoop-hduser-datanode-slave002.out
slave003: starting datanode, logging to /usr/local/hadoop/logs/hadoop-hduser-datanode-slave003.out
...etc
Now start the MapReduce layer:
hduser@master:/usr/local/hadoop$ sbin/start-yarn.sh
Expect to be greeted by something like this:
starting yarn daemons
starting resourcemanager, logging to /usr/local/hadoop/logs/yarn-hduser-resourcemanager-master.out
slave001: starting nodemanager, logging to /usr/local/hadoop/logs/yarn-hduser-nodemanager-slave001.out
slave002: starting nodemanager, logging to /usr/local/hadoop/logs/yarn-hduser-nodemanager-slave002.out
slave003: starting nodemanager, logging to /usr/local/hadoop/logs/yarn-hduser-nodemanager-slave003.out
...
Also start the job history server…
hduser@master:/usr/local/hadoop$ sbin/mr-jobhistory-daemon.sh start historyserver
Surveying One’s Empire
By this stage your Hadoop cluster is humming like a dynamo. There are several web interfaces which provide a tangible window into the specification of the cluster as a whole…
For the DFS layer, have a look at.
And for a breakdown of the exact condition of each node in your DFS layer,
And for the MapReduce layer, look at,
The First Distributed MapReduce
MapReduce is nothing more than a certain way to phrase a script to process a file, which is friendly to distributed computing. There’s a mapper, and a reducer. The “mapper” must be able to process any arbitrary fragment of the file (eg, count the number of occurrences of something within that fragment), independently and obliviously of the contents of the rest of the file. This is why it’s so scalable. The “reducer” aggregates the outputs of the mappers to give the final result (eg, sum up the occurrences of something reported by each of the mappers to give the total number of occurrences). Again, the way that you only have to write the mapper and reducer, and Hadoop handles the rest (deploying a copy of the mapper to every worker node, “shuffling” the mapper outputs for the reducer, re-allocating failed maps etc), is why Hadoop is well good. Indeed, a well-maintained cluster is much like American dance/rap duo LMFAO: every day it’s shuffling.
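The map/shuffle/reduce idea can be mimicked locally with ordinary shell pipes. This toy sketch is not Hadoop, just an analogy: the tr stage plays the mapper (emit one word per line), sort plays the shuffle (group identical keys), and uniq -c plays the reducer (count per key):

```shell
# "map": one word per line; "shuffle": sort identical words together;
# "reduce": count occurrences per word.
printf 'Example text file\nContains example text\n' | tr -s ' ' '\n' | sort | uniq -c
# "text" comes out with count 2, everything else with count 1 --
# the same tallies the real wordcount job produces later in the post.
```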
Later in this blog we’ll address how to write MapReduces; for now let’s just perform one and let the cluster stretch its legs for the first time.
Make a cheeky text file (example.txt):
Example text file
Contains example text
Make a directory in HDFS, lob the new file in there, and check that it’s there:
hduser@master:/usr/local/hadoop$ bin/hadoop fs -mkdir /test
hduser@master:/usr/local/hadoop$ bin/hadoop fs -put example.txt /test
hduser@master:/usr/local/hadoop$ bin/hadoop fs -ls /test
WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
Found 1 items
-rw-r--r-- 3 hduser supergroup 50 2013-12-23 09:09 /test/example.txt
As you can see, the Hadoop file system commands are very similar to the normal Linux ones. Now run the example MapReduce:
hduser@master:/usr/local/hadoop$ bin/hadoop jar share/hadoop/mapreduce/hadoop-mapreduce-examples-2.2.0.jar wordcount /test /testout
Hadoop will immediately inform you that a load of things are now deprecated – ignore these warnings, it seems that deprecation is the final stage in creating new Hadoop modules – and then more interestingly keep you posted on the progress of the job…
INFO mapreduce.JobSubmitter: Submitting tokens for job: job_1387739551023_0001
INFO impl.YarnClientImpl: Submitted application application_1387739551023_0001 to ResourceManager at master/1.1.1.1:8040
INFO mapreduce.Job: The url to track the job:
INFO mapreduce.Job: Running job: job_1387739551023_0001
INFO mapreduce.Job: Job job_1387739551023_0001 running in uber mode : false
INFO mapreduce.Job: map 0% reduce 0%
INFO mapreduce.Job: map 100% reduce 0%
INFO mapreduce.Job: map 100% reduce 100%
INFO mapreduce.Job: Job job_1387739551023_0001 completed successfully
INFO mapreduce.Job: Counters: 43
File System Counters
FILE: Number of bytes read=173
FILE: Number of bytes written=158211
FILE: Number of read operations=0
FILE: Number of large read operations=0
FILE: Number of write operations=0
HDFS: Number of bytes read=202
HDFS: Number of bytes written=123
HDFS: Number of read operations=6
HDFS: Number of large read operations=0
HDFS: Number of write operations=2
Job Counters
Launched map tasks=1
Launched reduce tasks=1
Rack-local map tasks=1
Total time spent by all maps in occupied slots (ms)=7683
Total time spent by all reduces in occupied slots (ms)=11281
Map-Reduce Framework
Map input records=2
Map output records=11
Map output bytes=145
Map output materialized bytes=173
Input split bytes=101
Combine input records=11
Combine output records=11
Reduce input groups=11
Reduce shuffle bytes=173
Reduce input records=11
Reduce output records=11
Spilled Records=22
Shuffled Maps =1
Failed Shuffles=0
Merged Map outputs=1
GC time elapsed (ms)=127
CPU time spent (ms)=2570
Physical memory (bytes) snapshot=291241984
Virtual memory (bytes) snapshot=1030144000
Total committed heap usage (bytes)=181075968
Shuffle Errors
BAD_ID=0
CONNECTION=0
IO_ERROR=0
WRONG_LENGTH=0
WRONG_MAP=0
WRONG_REDUCE=0
File Input Format Counters
Bytes Read=101
File Output Format Counters
Bytes Written=123
hduser@master:/usr/local/hadoop$
GLORY. We can examine the output thus:
hduser@master:/usr/local/hadoop$ bin/hadoop fs -ls /testout
WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
Found 2 items
-rw-r--r-- 3 hduser supergroup 50 2013-12-23 09:10 /testout/_SUCCESS
-rw-r--r-- 3 hduser supergroup 50 2013-12-23 09:10 /testout/part-r-00000
hduser@master:/usr/local/hadoop$ bin/hadoop fs -cat /testout/part-r-00000
WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
Contains 1
Example 1
example 1
file 1
text 2
Business value, delivered. If you want to retrieve the output file from HDFS back to your local filesystem, run
hduser@master:/usr/local/hadoop$ bin/hadoop fs -get /testout
hduser@master:/usr/local/hadoop$ ls | grep testout
testout
And there it is! Now that your Hadoop cluster is essentially a self-aware beacon of supercomputing, stay tuned for further posts on using Hadoop to do interesting/lucrative things! 🙂 | http://dogdogfish.com/author/sdurx/ | CC-MAIN-2018-17 | refinedweb | 4,312 | 53.81 |
-- TODO: Perhaps should introduce a class for the "splittable range" concept.
data InclusiveRange = InclusiveRange Int Int

-- | Computes a binary map/reduce over a finite range.
parMapReduceRangeThresh
   :: NFData a
      => Int                 -- ^ threshold
      -> InclusiveRange      -- ^ range over which to calculate
      -> (Int -> Par a)      -- ^ compute one result
      -> (a -> a -> Par a)   -- ^ combine two results (associative)
      -> a                   -- ^ initial result
      -> Par a
parMapReduceRangeThresh threshold (InclusiveRange min max) fn binop init
 = loop min max
 where
  loop min max
    | max - min <= threshold =
        let mapred a b = do x <- fn b
                            result <- a `binop` x
                            return result
        in foldM mapred init [min..max]
    | otherwise = do
        let mid = min + ((max - min) `quot` 2)
        rght <- spawn $ loop (mid+1) max
        l <- loop min mid
        r <- get rght
        l `binop` r

-- How many tasks per process should we aim for.  Higher numbers
-- improve load balance but put more pressure on the scheduler.
auto_partition_factor :: Int
auto_partition_factor = 4

-- | "Auto-partitioning" version of 'parMapReduceRangeThresh' that chooses the
-- threshold based on the size of the range and the number of processors.
parMapReduceRange :: NFData a => InclusiveRange -> (Int -> Par a) -> (a -> a -> Par a) -> a -> Par a
parMapReduceRange (InclusiveRange start end) fn binop init
 = loop (length segs) segs
 where
  segs = splitInclusiveRange (auto_partition_factor * numCapabilities) (start,end)
  loop 1 [(st,en)] =
    let mapred a b = do x <- fn b
                        result <- a `binop` x
                        return result
    in foldM mapred init [st..en]
  loop n segs =
    let half = n `quot` 2
        (left,right) = splitAt half segs
    in do l  <- spawn$ loop half left
          r  <- loop (n-half) right
          l' <- get l
          l' `binop` r

-- TODO: A version that works for any splittable input domain.  In this case
-- the "threshold" is a predicate on inputs.
-- parMapReduceRangeGeneric :: (inp -> Bool) -> (inp -> Maybe (inp,inp)) -> inp ->

-- Experimental:

-- | Parallel for-loop over an inclusive range.
parFor :: InclusiveRange -> (Int -> Par ()) -> Par ()
parFor (InclusiveRange start end) body =
 do let run (x,y) = for_ x (y+1) body
        range_segments = splitInclusiveRange (4*numCapabilities) (start,end)
    vars <- M.forM range_segments (\ pr -> spawn_ (run pr))
    M.mapM_ get vars
    return ()

splitInclusiveRange :: Int -> (Int, Int) -> [(Int, Int)]
splitInclusiveRange pieces (start,end) =
  map largepiece [0..remain-1] ++ map smallpiece [remain..pieces-1]
 where
  len = end - start + 1 -- inclusive [start,end]
  (portion, remain) = len `quotRem` pieces
  largepiece i =
      let offset = start + (i * (portion + 1))
      in (offset, offset + portion)
  smallpiece i =
      let offset = start + (i * portion) + remain
      in (offset, offset + portion - 1)

-- My own forM for numeric ranges (not requiring deforestation optimizations).
-- Inclusive start, exclusive end.
{-# INLINE for_ #-}
for_ :: Monad m => Int -> Int -> (Int -> m ()) -> m ()
for_ start end _fn | start > end = error "for_: start is greater than end"
for_ start end fn = loop start
 where
  loop !i | i == end  = return ()
          | otherwise = do fn i; loop (i+1)
Implementing Date Ranges in Elixir
by Implementing the Enumerable Protocol
TLDR; Click here to cut the crap!
The Background
Here in The Plant, we’re required to write our work-log every day. Basically, describing what you’ve done today and creating an issue on Github, and then your team lead gives it a score. I’m too lazy for that and I didn’t record anything for the past July… So now I have to create over 20 issues, it’s tedious and may take me half an hour.
“What if it can be automated?”
I'm sure 👉you've had the same moment if you're a developer. Developers' time is so precious that we can't afford to spend 30 minutes repeating ourselves, but we would happily spend hours automating things! Even if it's a one-time job, or it shouldn't be automated at all!
I didn't skip my work-log just because I'm super lazy. I recently shifted my role from tech lead (of a team of 3–5 people) to product manager (of QOR: the SDK for E-Commerce & CMS, written in Go). I was busy reading, learning, arguing, planning… And suffering! I felt unproductive, and most of the time I didn't know what to do. So these were the obstacles that kept me from writing my work-log.
So I really need to get out of it. I’m too far away from my comfort zone, I guess I can retreat back for a while?🙈🙉🙊
Interacting with the Github API
Yes, I've got enough good excuses to get my hands dirty. I'd like to write some code to help me quickly create 20 Github issues, that's it! I'm sure I could do it with Ruby pretty quickly. But that's boring. I've always been interested in Elixir, but I've never had a chance to use it in production. Now I have 20 issues to create, so maybe I'll try out its fancy concurrency features. To avoid hitting the request threshold, maybe I should set up a pool. To resend failed requests, maybe there should be a supervisor, so I can play with the famous OTP framework. This all seems very interesting.
Interacting with the Github API is supposed to be easy. Usually, using my email and password to exchange an access token, so that I can use the API freely. But I’m using 2-factor authentication, that could be a problem. Luckily, Github has sophisticated docs. I managed to use the basic authentication with an X-GitHub-OTP header to create an access token. With this access token, my program can create an issue on Github on behalf of me.
I use curl to make a POST request to get the access token, now I need to make HTTP request in Elixir. httpoison is a good option. A pitfall here. The Github API uses JSON format. Naturally, I would use a Map for the parameters. But httpoison doesn’t automatically convert a Map into a JSON object. So I need to use poison for help.
headers = %{"Authorization" => "token your_access_token"}
body = %{“title”: “Hello world!”} |> Poison.encode!
HTTPoison.post!("https://...", body, headers)
The format of an issue title is:
[score]/[hour]/[date]/[content]
Now I need some dates, from July 1 to 31, except the weekends. It’s intuitive in Ruby.
d1 = Date.new 2016, 7, 1
d2 = Date.new 2016, 7, 31
(d1 .. d2).each { |d| puts d unless d.sunday? or d.saturday? }
But how do I do that in Elixir? Elixir just added Date, DateTime, NaiveDateTime in version 1.3, which is great, but it doesn’t support ranges.
Well, I don’t need a DateRange to do the iteration. There are 31 days in July, so I can do:
def july do
  1..31
  |> Enum.map(fn i ->
    {:ok, d} = Date.new 2016, 7, i
    d
  end)
  |> Enum.filter(fn d -> !weekend?(d) end)
end

def weekend? date do
  weekday = date |> Date.to_erl |> :calendar.day_of_the_week
  weekday > 5
end
See that :calendar? It's an Erlang library. You can use Erlang in Elixir seamlessly! (That said, Elixir's Date doesn't have a convenient way to tell whether a date is a weekday or not…)
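As a quick sanity check of the convention that helper relies on, :calendar.day_of_the_week/1 returns 1 for Monday through 7 for Sunday:

```elixir
# July 4, 2016 was a Monday; July 31, 2016 was a Sunday.
:calendar.day_of_the_week({2016, 7, 4})   # => 1
:calendar.day_of_the_week({2016, 7, 31})  # => 7
```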
Okay! Now I can create my Github issues.
Implementing ExclusiveDateRange and DateRange
Shall I make concurrent requests now? No, it doesn't interest me now, as Elixir doesn't support date ranges, which bothers me! Using date ranges is common; there must be handy libs out there. But I don't care, I want to implement my own, even a naive one.
Elixir does support Range, but just for integers, e.g. (1..10).
A Range is represented internally as a struct.
A Range implements the Enumerable protocol, which means all of the functions in the Enum module are available
Great, I have the basic idea about how to implement the date range:
* It should be a struct(../2 feels like magic to me)
* It should implement the Enumerable protocol
The struct is fairly simple:
defmodule DateRange do
  defstruct first: nil, last: nil
end
d1 = ~D[2016-07-01]
d2 = ~D[2016-07-31]
%DateRange{first: d1, last: d2}
# => %DateRange{first: ~D[2016-07-01], last: ~D[2016-07-31]}
It simply holds the first and last element of a range.
Now I want to be able to do this:
# iterate through a date range, and print the dates
date_range |> Enum.each(fn date -> date |> inspect |> IO.puts end)
That is to implement the Enumerable protocol for the DateRange module. What I need to do is implement the following three functions:
count(enumerable) # Retrieves the enumerable’s size
member?(enumerable, element) # Checks if an element exists within the enumerable
reduce(enumerable, acc, fun) # Reduces the enumerable into an element
count/1 and member?/2 are straightforward. For reduce/3, you can think of a date range being transformed into a list of dates. Confusing? Keep reading, you'll get a better understanding later.
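For intuition, here is that reduce/3 contract driven by hand on a plain list (lists already implement Enumerable): the accumulator travels tagged as {:cont, acc}, and the finished result comes back tagged as {:done, acc}:

```elixir
# Summing a list through the Enumerable protocol directly.
Enumerable.reduce([1, 2, 3], {:cont, 0}, fn x, acc -> {:cont, acc + x} end)
# => {:done, 6}
```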
Now I have to think about the next date(how to iterate through a date range). I need to be very careful about which month has 30 days and which month has 31 days. And the special February.
But there’s an easier way. Date.new/3 returns a tuple of {:ok, date} or {:error, :invalid_date}. So I can simply increase the day field, and if I get an error, then increase the month field, if I get another error, then increase the year field.
def next %Date{year: y, month: m, day: d} do
  case Date.new(y, m, d + 1) do
    {:ok, nd} -> nd
    {:error, _} ->
      case Date.new(y, m + 1, 1) do
        {:ok, nd} -> nd
        {:error, _} ->
          {:ok, nd} = Date.new(y + 1, 1, 1)
          nd
      end
  end
end
Here’s the complete implementation. Read the gist if you need syntax highlighting.
defmodule ExclusiveDateRange do
  defstruct first: nil, last: nil

  def new(first, last) do
    %ExclusiveDateRange{first: first, last: last}
  end

  defimpl Enumerable, for: ExclusiveDateRange do
    def reduce(_dr, {:halt, acc}, _fun) do
      {:halted, acc}
    end

    def reduce(dr, {:suspend, acc}, fun) do
      {:suspended, acc, &reduce(dr, &1, fun)}
    end

    def reduce(%ExclusiveDateRange{first: d, last: d}, {:cont, acc}, _fun) do # when first == last, ends the recursion
      {:done, acc}
    end

    def reduce(%ExclusiveDateRange{first: d1, last: d2}, {:cont, acc}, fun) do
      reduce(%ExclusiveDateRange{first: next(d1), last: d2}, fun.(d1, acc), fun)
    end

    defp next %Date{year: y, month: m, day: d} do
      case Date.new(y, m, d + 1) do
        {:ok, nd} -> nd
        {:error, :invalid_date} ->
          case Date.new(y, m + 1, 1) do
            {:ok, nd} -> nd
            {:error, :invalid_date} ->
              {:ok, nd} = Date.new(y + 1, 1, 1)
              nd
          end
      end
    end

    def member?(%ExclusiveDateRange{first: d1, last: d2}, value) do
      days1 = date_to_gregorian_days d1
      days2 = date_to_gregorian_days(d2) - 1
      days3 = date_to_gregorian_days value
      {:ok, Enum.member?(days1..days2, days3)}
    end

    def count(%ExclusiveDateRange{first: d1, last: d2}) do
      days1 = date_to_gregorian_days d1
      days2 = date_to_gregorian_days d2
      {:ok, days2 - days1}
    end

    defp date_to_gregorian_days date do
      date |> Date.to_erl |> :calendar.date_to_gregorian_days
    end
  end
end
The problems with the above implementation are:
* it’s exclusive
* it doesn’t support reversed ranges
Take a look at the next/1 function again, and now imagine it had to return the previous date instead; it would be much more complex.
What if dates can be represented as integers?
Then a date range would actually be an integer range, then we can reuse Elixir’s Range. Isn’t it great?
:calendar.date_to_gregorian_days/1 converts a Date into an integer; add 1 to that integer and the date advances by one day. You can also decrease it. Now we don't have to worry about manipulating year, month, and day.
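For example, the month rollover comes for free with this representation:

```elixir
# Crossing the July/August boundary by plain integer arithmetic.
days = :calendar.date_to_gregorian_days({2016, 7, 31})
:calendar.gregorian_days_to_date(days + 1)
# => {2016, 8, 1}
```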
Read the gist if you need syntax highlighting.
defmodule DateRange do
  defstruct first: nil, last: nil, first_gregorian_days: nil, last_gregorian_days: nil

  defmodule Operator do
    def first <~> last do
      DateRange.new first, last
    end
  end

  defmodule Helpers do
    def date_to_gregorian_days date do
      date |> Date.to_erl |> :calendar.date_to_gregorian_days
    end

    def gregorian_days_to_date int do
      {:ok, date} = int |> :calendar.gregorian_days_to_date |> Date.from_erl
      date
    end
  end

  def new(first, last) do
    d1 = Helpers.date_to_gregorian_days first
    d2 = Helpers.date_to_gregorian_days last
    %DateRange{first: first, last: last, first_gregorian_days: d1, last_gregorian_days: d2}
  end

  defimpl Enumerable, for: DateRange do
    def reduce(%DateRange{first_gregorian_days: d1, last_gregorian_days: d2}, acc, fun) do
      reduce(d1, d2, acc, fun, d1 <= d2)
    end

    defp reduce(_x, _y, {:halt, acc}, _fun, _up) do
      {:halted, acc}
    end

    defp reduce(x, y, {:suspend, acc}, fun, up) do
      {:suspended, acc, &reduce(x, y, &1, fun, up)}
    end

    # normal ranges
    defp reduce(x, y, {:cont, acc}, fun, true) when x <= y do
      d = Helpers.gregorian_days_to_date x # NOTE it yields a date, not an integer
      reduce(x + 1, y, fun.(d, acc), fun, true)
    end

    # reversed ranges
    defp reduce(x, y, {:cont, acc}, fun, false) when x >= y do
      d = Helpers.gregorian_days_to_date x # NOTE it yields a date, not an integer
      reduce(x - 1, y, fun.(d, acc), fun, false)
    end

    defp reduce(_, _, {:cont, acc}, _fun, _up) do
      {:done, acc}
    end

    def member?(%DateRange{first_gregorian_days: d1, last_gregorian_days: d2}, value) do
      val = Helpers.date_to_gregorian_days value
      {:ok, Enum.member?(d1..d2, val)}
    end

    def count(%DateRange{first_gregorian_days: d1, last_gregorian_days: d2}) do
      if d1 <= d2 do
        {:ok, d2 - d1 + 1}
      else
        {:ok, d1 - d2 + 1}
      end
    end
  end
end
The reduce/3 and reduce/5 are completely borrowed from Range. In reduce/5, an integer is converted to a Date, because when enumerating a DateRange we naturally expect a Date, not an integer.
See the DateRange.Operator; I also implemented a <~> operator, which is similar to the ../2 macro that lets you write (1..10). If you import the DateRange.Operator module, you can write ~D[2016-07-01] <~> ~D[2016-07-31].
Just like you can have a reversed integer range, e.g. (10..1), you can have a reversed DateRange, like ~D[2016-07-31] <~> ~D[2016-07-01].
Note: the code in this post is not well-tested, use at your own risk.
A Date Comparison Bug
While playing with the Date, I found a bug.
iex(1)> d1 = ~D[2016-08-01]
~D[2016-08-01]
iex(2)> d2 = ~D[2016-08-02]
~D[2016-08-02]
iex(3)> d3 = ~D[2016-07-02]
~D[2016-07-02]
iex(4)> d1 < d2
true
iex(5)> d3 < d1
false
Jose prefers to say Ecto.Dates are not comparable, though. So I guess it’s not a bug, and I didn’t create an issue for the Elixir team.
Summary
There are a few things I learned:
* Maybe the best way to relax is to do something I’m good at or interested in.
* Protocols provide a good way to achieve extensibility.
* Read the docs.
* Don’t be too lazy.😜
Please hit the ♥ below if you found this post useful, so that others can read it. | https://medium.com/@natetsai/implementing-date-ranges-in-elixir-384460f3b7fa | CC-MAIN-2018-43 | refinedweb | 1,973 | 66.23 |
JNA brings native code to JRuby
Until now, supporting POSIX calls was quite difficult with JRuby. Using equivalent Java APIs is one way to go, but even if a Java equivalent of the functionality exists, it might not have the right semantics. And if the Java platform lacks the functionality, there remain only workarounds like launching command line programs to do the tasks.
Charles Nutter of the JRuby team reports the native code and POSIX on his blog:
I know, I know. It's got native code in it, and that's bad. It's using JNI, and that's bad. But damn, for the life of me I couldn't see enough negatives to using Java Native Access that would outweigh the incredible positives. For example, the new POSIX interface I just committed.
import com.sun.jna.Library;
public interface POSIX extends Library {
public int chmod(String filename, int mode);
public int chown(String filename, int owner, int group);
}
For the given example, this code using JNA is all that's needed to load and access the library:
import com.sun.jna.Native;
POSIX posix = (POSIX)Native.loadLibrary("c", POSIX.class);
This loads the standard C library, and gives access to the chmod (for changing file access permissions) and chown (for changing the owner of a file) functions. Adding further functions to the POSIX interface from the code sample would allow access to ever more functions of the C stdlib. After all, Native.loadLibrary simply tries to map the names from the Java interface methods to C functions in the library and make them accessible.
JNA still uses JNI under the covers to access libffi which does all the magic. Using JNI brings along problems, for instance it's possible to run afoul of security managers that might not allow it or get into trouble with JEE containers.
Obviously, whenever native libraries are shipped, they need to fit the platform. The currently available JNA release ships with libraries compiled for Win32, Linux 32 and 64 bit x86 versions, Solaris SPARC and x86, FreeBSD, and Darwin (MacOS X) for PPC and x86, which covers quite a lot of the available spectrum.
Easy access to native libraries from JRuby is useful, but JNA opens another possibility: support for Ruby native extensions. These are shared libraries that are loaded in a Ruby process and can access the internals of the Ruby interpreter. Native extensions are widely used. For instance, the popular rcov tool, used to determine the test coverage of a piece of code, makes use of the Ruby API to see which code is actually executed in a test run.
Support for this is not as easy as the examples above, as it requires a full implementation of the Ruby C Language API that extensions use to interact with the Ruby runtime. This is a two way affair: native code can call this API, but the Ruby runtime also invokes callbacks for certain events. For more details on native extensions, see the Extending Ruby chapter of the online version of Programming Ruby.
Is the lack of POSIX functionality in JRuby a problem for you? What native extensions do you miss in JRuby?
MODERN MULTITHREADING
Implementing, Testing, and Debugging Multithreaded Java and C++/Pthreads/Win32 Programs
RICHARD H. CARVER
KUO-CHUNG TAI
A JOHN WILEY & SONS, INC., PUBLICATION
Copyright:
Carver, Richard H., 1960–
Modern multithreading: implementing, testing, and debugging multithreaded Java and
C++/Pthreads/Win32 programs / by Richard H. Carver and Kuo-Chung Tai.
p. cm.
Includes bibliographical references and index.
ISBN-13 978-0-471-72504-6 (paper)
ISBN-10 0-471-72504-8 (paper)
1. Parallel programming (Computer science) 2. Threads (Computer programs) I. Tai,
Kuo-Chung. II. Title.
QA76.642.C38 2006
005.1′1–dc22
2005045775
Printed in the United States of America.
CONTENTS
Preface xi
1 Introduction to Concurrent Programming 1
1.1 Processes and Threads: An Operating System’s View, 1
1.2 Advantages of Multithreading, 3
1.3 Threads in Java, 4
1.4 Threads in Win32, 6
1.5 Pthreads, 9
1.6 C++ Thread Class, 14
1.6.1 C++ Class Thread for Win32, 14
1.6.2 C++ Class Thread for Pthreads, 19
1.7 Thread Communication, 19
1.7.1 Nondeterministic Execution Behavior, 23
1.7.2 Atomic Actions, 25
1.8 Testing and Debugging Multithreaded Programs, 29
1.8.1 Problems and Issues, 30
1.8.2 Class TDThread for Testing and Debugging, 34
1.8.3 Tracing and Replaying Executions with Class Template
sharedVariable<>, 37
1.9 Thread Synchronization, 38
Further Reading, 38
References, 39
Exercises, 41
2 The Critical Section Problem 46
2.1 Software Solutions to the Two-Thread Critical Section
Problem, 47
2.1.1 Incorrect Solution 1, 48
2.1.2 Incorrect Solution 2, 49
2.1.3 Incorrect Solution 3, 50
2.1.4 Peterson’s Algorithm, 52
2.1.5 Using the volatile Modifier, 53
2.2 Ticket-Based Solutions to the n-Thread Critical Section
Problem, 54
2.2.1 Ticket Algorithm, 54
2.2.2 Bakery Algorithm, 56
2.3 Hardware Solutions to the n-Thread Critical Section
Problem, 58
2.3.1 Partial Solution, 59
2.3.2 Complete Solution, 59
2.3.3 Note on Busy-Waiting, 60
2.4 Deadlock, Livelock, and Starvation, 62
2.4.1 Deadlock, 62
2.4.2 Livelock, 62
2.4.3 Starvation, 63
2.5 Tracing and Replay for Shared Variables, 64
2.5.1 ReadWrite-Sequences, 65
2.5.2 Alternative Definition of ReadWrite-Sequences, 67
2.5.3 Tracing and Replaying ReadWrite-Sequences, 68
2.5.4 Class Template sharedVariable<>, 70
2.5.5 Putting It All Together, 71
2.5.6 Note on Shared Memory Consistency, 74
Further Reading, 77
References, 78
Exercises, 79
3 Semaphores and Locks 84
3.1 Counting Semaphores, 84
3.2 Using Semaphores, 86
3.2.1 Resource Allocation, 86
3.2.2 More Semaphore Patterns, 87
3.3 Binary Semaphores and Locks, 90
3.4 Implementing Semaphores, 92
3.4.1 Implementing P() and V(), 92
3.4.2 VP() Operation, 94
3.5 Semaphore-Based Solutions to Concurrent Programming
Problems, 96
3.5.1 Event Ordering, 96
3.5.2 Bounded Buffer, 96
3.5.3 Dining Philosophers, 98
3.5.4 Readers and Writers, 101
3.5.5 Simulating Counting Semaphores, 108
3.6 Semaphores and Locks in Java, 111
3.6.1 Class countingSemaphore, 111
3.6.2 Class mutexLock, 113
3.6.3 Class Semaphore, 115
3.6.4 Class ReentrantLock, 116
3.6.5 Example: Java Bounded Buffer, 116
3.7 Semaphores and Locks in Win32, 119
3.7.1 CRITICAL SECTION, 119
3.7.2 Mutex, 122
3.7.3 Semaphore, 124
3.7.4 Events, 132
3.7.5 Other Synchronization Functions, 134
3.7.6 Example: C++/Win32 Bounded Buffer, 134
3.8 Semaphores and Locks in Pthreads, 134
3.8.1 Mutex, 136
3.8.2 Semaphore, 137
3.9 Another Note on Shared Memory Consistency, 141
3.10 Tracing, Testing, and Replay for Semaphores and Locks, 143
3.10.1 Nondeterministic Testing with the Lockset
Algorithm, 143
3.10.2 Simple SYN-Sequences for Semaphores and
Locks, 146
3.10.3 Tracing and Replaying Simple PV-Sequences and
LockUnlock-Sequences, 150
3.10.4 Deadlock Detection, 154
3.10.5 Reachability Testing for Semaphores and Locks, 157
3.10.6 Putting It All Together, 160
Further Reading, 163
References, 164
Exercises, 166
4 Monitors 177
4.1 Definition of Monitors, 178
4.1.1 Mutual Exclusion, 178
4.1.2 Condition Variables and SC Signaling, 178
4.2 Monitor-Based Solutions to Concurrent Programming
Problems, 182
4.2.1 Simulating Counting Semaphores, 182
4.2.2 Simulating Binary Semaphores, 183
4.2.3 Dining Philosophers, 183
4.2.4 Readers and Writers, 187
4.3 Monitors in Java, 187
4.3.1 Better countingSemaphore, 190
4.3.2 notify vs. notifyAll, 191
4.3.3 Simulating Multiple Condition Variables, 194
4.4 Monitors in Pthreads, 194
4.4.1 Pthreads Condition Variables, 196
4.4.2 Condition Variables in J2SE 5.0, 196
4.5 Signaling Disciplines, 199
4.5.1 Signal-and-Urgent-Wait, 199
4.5.2 Signal-and-Exit, 202
4.5.3 Urgent-Signal-and-Continue, 204
4.5.4 Comparing SU and SC Signals, 204
4.6 Using Semaphores to Implement Monitors, 206
4.6.1 SC Signaling, 206
4.6.2 SU Signaling, 207
4.7 Monitor Toolbox for Java, 209
4.7.1 Toolbox for SC Signaling in Java, 210
4.7.2 Toolbox for SU Signaling in Java, 210
4.8 Monitor Toolbox for Win32/C++/Pthreads, 211
4.8.1 Toolbox for SC Signaling in C++/Win32/Pthreads, 213
4.8.2 Toolbox for SU Signaling in C++/Win32/Pthreads, 213
4.9 Nested Monitor Calls, 213
4.10 Tracing and Replay for Monitors, 217
4.10.1 Simple M-Sequences, 217
4.10.2 Tracing and Replaying Simple M-Sequences, 219
4.10.3 Other Approaches to Program Replay, 220
4.11 Testing Monitor-Based Programs, 222
4.11.1 M-Sequences, 222
4.11.2 Determining the Feasibility of an M-Sequence, 227
4.11.3 Determining the Feasibility of a
Communication-Sequence, 233
4.11.4 Reachability Testing for Monitors, 233
4.11.5 Putting It All Together, 235
Further Reading, 243
References, 243
Exercises, 245
5 Message Passing 258
5.1 Channel Objects, 258
5.1.1 Channel Objects in Java, 259
5.1.2 Channel Objects in C++/Win32, 263
5.2 Rendezvous, 266
5.3 Selective Wait, 272
5.4 Message-Based Solutions to Concurrent Programming
Problems, 275
5.4.1 Readers and Writers, 275
5.4.2 Resource Allocation, 278
5.4.3 Simulating Counting Semaphores, 281
5.5 Tracing, Testing, and Replay for Message-Passing
Programs, 281
5.5.1 SR-Sequences, 282
5.5.2 Simple SR-Sequences, 288
5.5.3 Determining the Feasibility of an SR-Sequence, 290
5.5.4 Deterministic Testing, 296
5.5.5 Reachability Testing for Message-Passing
Programs, 297
5.5.6 Putting It All Together, 299
Further Reading, 304
References, 304
Exercises, 304
6 Message Passing in Distributed Programs 312
6.1 TCP Sockets, 312
6.1.1 Channel Reliability, 313
6.1.2 TCP Sockets in Java, 314
6.2 Java TCP Channel Classes, 317
6.2.1 Classes TCPSender and TCPMailbox, 318
6.2.2 Classes TCPSynchronousSender and
TCPSynchronousMailbox, 326
6.2.3 Class TCPSelectableSynchronousMailbox, 328
6.3 Timestamps and Event Ordering, 329
6.3.1 Event-Ordering Problems, 330
6.3.2 Local Real-Time Clocks, 331
6.3.3 Global Real-Time Clocks, 332
6.3.4 Causality, 332
6.3.5 Integer Timestamps, 334
6.3.6 Vector Timestamps, 335
6.3.7 Timestamps for Programs Using Messages and Shared
Variables, 339
6.4 Message-Based Solutions to Distributed Programming
Problems, 341
6.4.1 Distributed Mutual Exclusion, 341
6.4.2 Distributed Readers and Writers, 346
6.4.3 Alternating Bit Protocol, 348
6.5 Testing and Debugging Distributed Programs, 353
6.5.1 Object-Based Sequences, 353
6.5.2 Simple Sequences, 362
6.5.3 Tracing, Testing, and Replaying CARC-Sequences and
CSC-Sequences, 362
6.5.4 Putting It All Together, 369
6.5.5 Other Approaches to Replaying Distributed
Programs, 371
Further Reading, 374
References, 375
Exercises, 376
7 Testing and Debugging Concurrent Programs 381
7.1 Synchronization Sequences of Concurrent Programs, 383
7.1.1 Complete Events vs. Simple Events, 383
7.1.2 Total Ordering vs. Partial Ordering, 386
7.2 Paths of Concurrent Programs, 388
7.2.1 Defining a Path, 388
7.2.2 Path-Based Testing and Coverage Criteria, 391
7.3 Definitions of Correctness and Faults for Concurrent
Programs, 395
7.3.1 Defining Correctness for Concurrent Programs, 395
7.3.2 Failures and Faults in Concurrent Programs, 397
7.3.3 Deadlock, Livelock, and Starvation, 400
7.4 Approaches to Testing Concurrent Programs, 408
7.4.1 Nondeterministic Testing, 409
7.4.2 Deterministic Testing, 410
7.4.3 Combinations of Deterministic and Nondeterministic
Testing, 414
7.5 Reachability Testing, 419
7.5.1 Reachability Testing Process, 420
7.5.2 SYN-Sequences for Reachability Testing, 424
7.5.3 Race Analysis of SYN-Sequences, 429
7.5.4 Timestamp Assignment, 433
7.5.5 Computing Race Variants, 439
7.5.6 Reachability Testing Algorithm, 441
7.5.7 Research Directions, 447
Further Reading, 449
References, 449
Exercises, 452
Index 457
PREFACE
This is a textbook on multithreaded programming. The objective of this book
is to teach students about languages and libraries for multithreaded program-
ming, to help students develop problem-solving and programming skills, and to
describe and demonstrate various testing and debugging techniques that have been
developed for multithreaded programs over the past 20 years. It covers threads,
semaphores, locks, monitors, message passing, and the relevant parts of Java,
the POSIX Pthreads library, and the Windows Win32 Application Programming
Interface (API).
The book is unique in that it provides in-depth coverage on testing and debug-
ging multithreaded programs, a topic that typically receives little attention. The
title Modern Multithreading reflects the fact that there are effective and relatively
new testing and debugging techniques for multithreaded programs. The material
in this book was developed in concurrent programming courses that the authors
have taught for 20 years. This material includes results from the authors’ research
in concurrent programming, emphasizing tools and techniques that are of practi-
cal use. A class library has been implemented to provide working examples of
all the material that is covered.
Classroom Use
In our experience, students have a hard time learning to write concurrent pro-
grams. If they manage to get their programs to run, they usually encounter
deadlocks and other intermittent failures, and soon discover how difficult it is to
reproduce the failures and locate the cause of the problem. Essentially, they have
no way to check the correctness of their programs, which interferes with learn-
ing. Instructors face the same problem when grading multithreaded programs. It
is tedious, time consuming, and often impossible to assess student programs by
hand. The class libraries that we have developed, and the testing techniques they
support, can be used to assess student programs. When we assign programming
problems in our courses, we also provide test cases that the students must use
to assess the correctness of their programs. This is very helpful for the students
and the instructors.
This book is designed for upper-level undergraduates and graduate students
in computer science. It can be used as a main text in a concurrent programming
course or could be used as a supplementary text for an operating systems course or
a software engineering course. Since the text emphasizes practical material, pro-
vides working code, and addresses testing and debugging problems that receive
little or no attention in many other books, we believe that it will also be helpful
to programmers in industry.
The text assumes that students have the following background:
• Programming experience as typically gained in CS 1 and CS 2 courses.
• Knowledge of elementary data structures as learned in a CS 2 course.
• An understanding of Java fundamentals. Students should be familiar with object-oriented programming in Java, but no "advanced" knowledge is necessary.
• An understanding of C++ fundamentals. We use only the basic object-oriented programming features of C++.
• A prior course on operating systems is helpful but not required.
We have made an effort to minimize the differences between our Java and C++
programs. We use object-oriented features that are common to both languages,
and the class library has been implemented in both languages. Although we don’t
illustrate every example in both Java and C++, the differences are very minor
and it is easy to translate program examples from one language to the other.
Content
The book has seven chapters. Chapter 1 defines operating systems terms such
as process, thread, and context switch. It then shows how to create threads, first
in Java and then in C++ using both the POSIX Pthreads library and the Win32
API. A C++ Thread class is provided to hide the details of thread creation
in Pthreads/Win32. C++ programs that use the Thread class look remarkably
similar to multithreaded Java programs. Fundamental concepts, such as atomicity
and nondeterminism, are described using simple program examples. Chapter 1
ends by listing the issues and problems that arise when testing and debugging
multithreaded programs. To illustrate the interesting things to come, we present
a simple multithreaded C++ program that is capable of tracing and replaying its
own executions.
Chapter 2 introduces concurrent programming by describing various solutions
to the critical section problem. This problem is easy to understand but hard
to solve. The advantage of focusing on this problem is that it can be solved
without introducing complicated new programming constructs. Students gain a
quick appreciation for the programming skills that they need to acquire. Chapter 2
also demonstrates how to trace and replay Peterson’s solution to the critical
section problem, which offers a straightforward introduction to several testing and
debugging issues. The synchronization library implements the various techniques
that are described.
Chapters 3, 4, and 5 cover semaphores, monitors and message passing, respec-
tively. Each chapter describes one of these constructs and shows how to use
it to solve programming problems. Semaphore and Lock classes for Java and
C++/Win32/Pthreads are presented in Chapter 3. Chapter 4 presents monitor
classes for Java and C++/Win32/Pthreads. Chapter 5 presents mailbox classes
with send/receive methods and a selective wait statement. These chapters also
cover the built-in support that Win32 and Pthreads provide for these constructs,
as well as the support provided by J2SE 5.0 (Java 2 Platform, Standard Edi-
tion 5.0). Each chapter addresses a particular testing or debugging problem
and shows how to solve it. The synchronization library implements the test-
ing and debugging techniques so that students can apply them to their own
programs.
Chapter 6 covers message passing in a distributed environment. It presents
several Java mailbox classes that hide the details of TCP message passing and
shows how to solve several distributed programming problems in Java. It also
shows how to test and debug programs in a distributed environment (e.g., accu-
rately tracing program executions by using vector timestamps). This chapter by
no means provides complete coverage of distributed programming. Rather, it is
meant to introduce students to the difficulty of distributed programming and to
show them that the testing and debugging techniques presented in earlier chapters
can be extended to work in a distributed environment. The synchronization library
implements the various techniques.
Chapter 7 covers concepts that are fundamental to testing and debugging
concurrent programs. It defines important terms, presents several test coverage
criteria for concurrent programs, and describes the various approaches to test-
ing concurrent programs. This chapter organizes and summarizes the testing and
debugging material that is presented in depth in Chapters 2 to 6. This organiza-
tion provides two paths through the text. Instructors can cover the testing and
debugging material in the last sections of Chapters 2 to 6 as they go through those
chapters, or they can cover those sections when they cover Chapter 7. Chapter
7 also discusses reachability testing, which offers a bridge between testing and
verification, and is implemented in the synchronization library.
Each chapter has exercises at the end. Some of the exercises explore the con-
cepts covered in the chapter, whereas others require a program to be written.
In our courses we cover all the chapters and give six homework assignments,
two in-class exams, and a project. We usually supplement the text with readings
on model checking, process algebra, specification languages, and other research
topics.
Online Resources
The home page for this book is located at ∼rcarver/ModernMultithreading.
This Web site contains the source code for all the listings in the text and for the
synchronization libraries. It also contains startup files and test cases for some of
the exercises. Solutions to the exercises are available for instructors, as is a copy
of our lecture notes. There will also be an errata page.
Acknowledgments
The suggestions we received from the anonymous reviewers were very help-
ful. The National Science Foundation supported our research through grants
CCR-8907807, CCR-9320992, CCR-9309043, and CCR-9804112. We thank our
research assistants and the students in our courses at North Carolina State and
George Mason University for helping us solve many interesting problems. We
also thank Professor Jeff Lei at the University of Texas at Arlington for using
early versions of this book in his courses.
My friend, colleague, and coauthor Professor K. C. Tai passed away before
we could complete this book. K.C. was an outstanding teacher, a world-class
researcher in the areas of software engineering, concurrent systems, programming
languages, and compiler construction, and an impeccable and highly respected
professional. If the reader finds this book helpful, it is a tribute to K.C.’s many
contributions. Certainly, K.C. would have fixed the faults that I failed to find.
RICHARD H. CARVER
Fairfax, Virginia
July 2005
rcarver@cs.gmu.edu
1 INTRODUCTION TO CONCURRENT PROGRAMMING
A concurrent program contains two or more threads that execute concurrently
and work together to perform some task. In this chapter we begin with an oper-
ating system’s view of a concurrent program. The operating system manages
the program’s use of hardware and software resources and allows the program’s
threads to share the central processing units (CPUs). We then learn how to define
and create threads in Java and also in C++ using the Windows Win32 API
and the POSIX Pthreads library. Java provides a Thread class, so multithreaded
Java programs are object-oriented. Win32 and Pthreads provide a set of function
calls for creating and manipulating threads. We wrap a C++ Thread class around
these functions so that we can write C++/Win32 and C++/Pthreads multithreaded
programs that have the same object-oriented structure as Java programs.
All concurrent programs exhibit unpredictable behavior. This creates new chal-
lenges for programmers, especially those learning to write concurrent programs.
In this chapter we learn the reason for this unpredictable behavior and examine
the problems it causes during testing and debugging.
1.1 PROCESSES AND THREADS: AN OPERATING SYSTEM’S VIEW
When a program is executed, the operating system creates a process containing
the code and data of the program and manages the process until the program
terminates. User processes are created for user programs, and system processes
are created for system programs. A user process has its own logical address space,
separate from the space of other user processes and separate from the space (called
the kernel space) of the system processes. This means that two processes may
reference the same logical address, but this address will be mapped to different
physical memory locations. Thus, processes do not share memory unless they
make special arrangements with the operating system to do so.
Multiprocessing operating systems enable several programs to execute simul-
taneously. The operating system is responsible for allocating the computer’s
resources among competing processes. These shared resources include memory,
peripheral devices such as printers, and the CPU(s). The goal of a multiprocess-
ing operating system is to have some process executing at all times in order to
maximize CPU utilization.
Within a process, program execution entails initializing and maintaining a
great deal of information [Anderson et al. 1989]. For instance:
ž The process state (e.g., ready, running, waiting, or stopped)
ž The program counter, which contains the address of the next instruction to
be executed for this process
ž Saved CPU register values
ž Memory management information (page tables and swap files), file descrip-
tors, and outstanding input/output (I/O) requests
The volume of this per-process information makes it expensive to create and
manage processes.
A thread is a unit of control within a process. When a thread runs, it executes
a function in the program. The process associated with a running program starts
with one running thread, called the main thread, which executes the “main”
function of the program. In a multithreaded program, the main thread creates
other threads, which execute other functions. These other threads can create even
more threads, and so on. Threads are created using constructs provided by the
programming language or the functions provided by an application programming
interface (API).
Each thread has its own stack of activation records and its own copy of
the CPU registers, including the stack pointer and the program counter, which
together describe the state of the thread’s execution. However, the threads in a
multithreaded process share the data, code, resources, and address space of their
process. The per-process state information listed above is also shared by the
threads in the program, which greatly reduces the overhead involved in creating
and managing threads. In Win32 a program can create multiple processes or
multiple threads. Since thread creation in Win32 has lower overhead, we focus
on single-process multithreaded Win32 programs.
The operating system must decide how to allocate the CPUs among the pro-
cesses and threads in the system. In some systems, the operating system selects a
process to run and the process selected chooses which of its threads will execute.
Alternatively, the threads are scheduled directly by the operating system. At any
given moment, multiple processes, each containing one or more threads, may be
executing. However, some threads may not be ready for execution. For example,
some threads may be waiting for an I/O request to complete. The scheduling
policy determines which of the ready threads is selected for execution.
In general, each ready thread receives a time slice (called a quantum) of
the CPU. If a thread decides to wait for something, it relinquishes the CPU
voluntarily. Otherwise, when a hardware timer determines that a running thread’s
quantum has completed, an interrupt occurs and the thread is preempted to allow
another ready thread to run. If there are multiple CPUs, multiple threads can
execute at the same time. On a computer with a single CPU, threads have the
appearance of executing simultaneously, although they actually take turns running
and they may not receive equal time. Hence, some threads may appear to run at
a faster rate than others.
The scheduling policy may also consider a thread’s priority and the type of
processing that the thread performs, giving some threads preference over others.
We assume that the scheduling policy is fair, which means that every ready thread
eventually gets a chance to execute. A concurrent program’s correctness should
not depend on its threads being scheduled in a certain order.
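As a small illustration of this last point, the following sketch (our own example, not from the text) starts two threads whose work may be scheduled in either order. A correct program must accept any fair interleaving:

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;

public class ScheduleDemo extends Thread {
    // A shared, thread-safe log of messages (illustrative only).
    static final List<String> log =
        Collections.synchronizedList(new ArrayList<String>());

    private final String msg;
    public ScheduleDemo(String msg) { this.msg = msg; }

    public void run() { log.add(msg); }

    public static void main(String[] args) throws InterruptedException {
        Thread a = new ScheduleDemo("A ran");
        Thread b = new ScheduleDemo("B ran");
        a.start(); b.start();    // either thread may be scheduled first
        a.join();  b.join();     // wait for both before inspecting the log
        System.out.println(log); // order varies from run to run
    }
}
```

Both messages always appear, but their order depends on the scheduler: repeated runs may produce [A ran, B ran] or [B ran, A ran], and neither outcome is wrong.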
Switching the CPU from one process or thread to another, known as a context
switch, requires saving the state of the old process or thread and loading the state
of the new one. Since there may be several hundred context switches per second,
context switches can potentially add significant overhead to an execution.
1.2 ADVANTAGES OF MULTITHREADING
Multithreading allows a process to overlap I/O and computation. One thread can
execute while another thread is waiting for an I/O operation to complete. Multithreading
can speed up performance through parallelism. A program that makes full
use of two processors may run in close to half the time. However, this level of
speedup usually cannot be obtained, due to the communication overhead required
for coordinating the threads (see Exercise 1.11).
Multithreading has some advantages over multiple processes. Threads require
less overhead to manage than processes, and intraprocess thread communication
is less expensive than interprocess communication. Multiprocess concurrent pro-
grams do have one advantage: Each process can execute on a different machine
(in which case, each process is often a multithreaded program). This type of
concurrent program is called a distributed program. Examples of distributed pro-
grams are file servers (e.g., NFS), file transfer clients and servers (e.g., FTP),
remote log-in clients and servers (e.g., Telnet), groupware programs, and Web
browsers and servers. The main disadvantage of concurrent programs is that they
are extremely difficult to develop. Concurrent programs often contain bugs that
are notoriously difficult to find and fix. Once we have examined several concur-
rent programs, we’ll take a closer look at the special problems that arise when
we test and debug them.
1.3 THREADS IN JAVA
A Java program has a main thread that executes the main() function. In addi-
tion, several system threads are started automatically whenever a Java program
is executed. Thus, every Java program is a concurrent program, although the
programmer may not be aware that multiple threads are running. Java provides
a Thread class for defining user threads. One way to define a thread is to define
a class that extends (i.e., inherits from) the Thread class. Class simpleThread in
Listing 1.1 extends class Thread. Method run() contains the code that will be exe-
cuted when a simpleThread is started. The default run() method inherited from
class Thread is empty, so a new run() method must be defined in simpleThread
in order for the thread to do something useful.
The main thread creates simpleThreads named thread1 and thread2 and
starts them. (These threads continue to run after the main thread completes its
statements.) Threads thread1 and thread2 each display a simple message and
terminate. The integer IDs passed as arguments to the simpleThread constructor
are used to distinguish between the two instances of simpleThread.
A second way to define a user thread in Java is to use the Runnable interface.
Class simpleRunnable in Listing 1.2 implements the Runnable interface, which
means that simpleRunnable must provide an implementation of method run().
The main method creates a Runnable instance r of class simpleRunnable, passes
r as an argument to the Thread class constructor for thread3, and starts thread3.
Using a Runnable object to define the run() method offers one advantage
over extending class Thread. Since class simpleRunnable implements interface
Runnable, it is not required to extend class Thread, which means that
class simpleThread extends Thread {
public simpleThread(int ID) {myID = ID;}
public void run() {System.out.println("Thread " + myID + " is running.");}
private int myID;
}
public class javaConcurrentProgram {
public static void main(String[] args) {
simpleThread thread1 = new simpleThread(1);
simpleThread thread2 = new simpleThread(2);
thread1.start(); thread2.start(); // causes the run() methods to execute
}
}
Listing 1.1 Simple concurrent Java program.
class simpleRunnable implements Runnable {
public simpleRunnable(int ID) {myID = ID;}
public void run() {System.out.println("Thread " + myID + " is running.");}
private int myID;
}
public class javaConcurrentProgram2 {
public static void main(String[] args) {
Runnable r = new simpleRunnable(3);
Thread thread3 = new Thread(r); // thread3 executes r's run() method
thread3.start();
}
}
Listing 1.2 Java’s Runnable interface.
simpleRunnable could, if desired, extend some other class. This is important
since a Java class cannot extend more than one other class. (A Java class can
implement one or more interfaces but can extend only one class.)
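To make this concrete, here is a minimal sketch (the class names Account and Depositor are invented for illustration): a class that already extends another class cannot also extend Thread, but it can implement Runnable and still be executed by a thread:

```java
class Account {                 // an existing superclass
    protected int balance = 0;
}

// Depositor already extends Account, so it cannot extend Thread;
// implementing Runnable lets a thread execute its run() method anyway.
class Depositor extends Account implements Runnable {
    public void run() { balance += 100; }
}

public class RunnableDemo {
    public static void main(String[] args) throws InterruptedException {
        Depositor d = new Depositor();
        Thread t = new Thread(d);   // t executes d's run() method
        t.start();
        t.join();                   // wait for the deposit to finish
        System.out.println("balance = " + d.balance);  // prints balance = 100
    }
}
```

The join() call ensures that the main thread reads balance only after the deposit has completed.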
The details about how Java threads are scheduled vary from system to system.
Java threads can be assigned a priority, which affects how threads are selected
for execution. Using method setPriority(), a thread T can be assigned a priority in
a range from Thread.MIN_PRIORITY (usually, 1) to Thread.MAX_PRIORITY
(usually, 10):
T.setPriority(6);
Higher-priority threads get preference over lower-priority threads, but it is diffi-
cult to make more specific scheduling guarantees based only on thread priorities.
We will not be assigning priorities to the threads in this book, which means
that user threads will always have the same priority. However, even if all the
threads have the same priority, a thread may not be certain to get a chance to
run. Consider a thread that is executing the following infinite loop:
while (true) { ; }
This loop contains no I/O statements or any other statements that require the
thread to release the CPU voluntarily. In this case the operating system must
preempt the thread to allow other threads to run. Java does not guarantee that the
underlying thread scheduling policy is preemptive. Thus, once a thread begins
executing this loop, there is no guarantee that any other threads will execute. To
be safe, we can add a sleep statement to this loop:
while (true) {
try {Thread.sleep(100);} // delay thread for 100 milliseconds
// (i.e., 0.1 second)
catch (InterruptedException e) {} // InterruptedException must be caught
// when sleep() is called
}
Executing the sleep() statement will force a context switch, giving the other
threads a chance to run. In this book we assume that the underlying thread
scheduling policy is preemptive, so that sleep() statements are not necessary to
ensure fair scheduling. However, since sleep() statements have a dramatic effect
on execution, we will see later that they can be very useful during testing.
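For instance, in the following sketch (our own hypothetical example), a polling thread sleeps between checks of a shared flag. Each sleep() releases the CPU, so the main thread is guaranteed a chance to run and set the flag even under a nonpreemptive scheduler:

```java
public class PollingDemo {
    // volatile ensures the poller sees the main thread's update
    static volatile boolean done = false;

    public static void main(String[] args) throws InterruptedException {
        Thread poller = new Thread(new Runnable() {
            public void run() {
                while (!done) {                 // poll the shared flag
                    try { Thread.sleep(100); }  // yield the CPU between checks
                    catch (InterruptedException e) {}
                }
                System.out.println("done observed");
            }
        });
        poller.start();
        Thread.sleep(300);  // simulate some work in the main thread
        done = true;        // the poller's next check will see this
        poller.join();      // terminates shortly after done is set
    }
}
```

Without the sleep() inside the loop, the poller would busy-wait; with it, the program also shows how sleep() calls stretch and reshape an execution, which is what makes them useful for exposing timing-dependent bugs during testing.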
1.4 THREADS IN Win32
Multithreaded programs in Windows use the functions in the Win32 API. Threads
are created by calling function CreateThread() or function _beginthreadex(). If
a program needs to use the multithreaded C run-time library, it should use
_beginthreadex() to create threads; otherwise, it can use CreateThread(). Whether
a program needs to use the multithreaded C run-time library depends on which
of the library functions it calls. Some of the functions in the single-threaded run-
time library may not work properly in a multithreaded program. This includes
functions malloc() and free() (or new and delete in C++), any of the functions
in stdio.h or io.h, and functions such as asctime(), strtok(), and rand(). For the
sake of simplicity and safety, we use only _beginthreadex() in this book. (Since
the parameters for _beginthreadex() and CreateThread() are almost identical, we
will essentially be learning how to use both functions.) Details about choosing
between the single- and multithreaded C run-time libraries can be found
in [Beveridge and Wiener 1997].
Function _beginthreadex() takes six parameters and returns a pointer, called a
handle, to the newly created thread. This handle must be saved so that it can be
passed to other Win32 functions that manipulate threads:
unsigned long _beginthreadex(
void* security, // security attribute
unsigned stackSize, // size of the thread’s stack
unsigned ( __stdcall *funcStart ) (void *), // starting address of the function
// to run
void* argList, // arguments to be passed to the
// thread
unsigned initFlags, // initial state of the thread: running
// or suspended
unsigned* threadAddr // thread ID
);
The parameters of function _beginthreadex() are as follows:
• security: a security attribute, which in our programs is always the default
value NULL.
• stackSize: the size, in bytes, of the new thread’s stack. We will use the
default value 0, which specifies that the stack size defaults to the stack size
of the main thread.
• funcStart: the (address of a) function that the thread will execute. (This
function plays the same role as the run() method in Java.)
• argList: an argument to be passed to the thread. This is either a 32-bit value
or a 32-bit pointer to a data structure. The Win32 type for void* is LPVOID.
• initFlags: a value that is either 0 or CREATE_SUSPENDED. The value 0
specifies that the thread should begin execution immediately upon creation.
The value CREATE_SUSPENDED specifies that the thread is suspended
immediately after it is created and will not run until the Win32 function
ResumeThread(HANDLE hThread) is called on it.
• threadAddr: the address of a memory location that will receive an identifier
assigned to the thread by Win32.
If _beginthreadex() is successful, it returns a valid thread handle, which must
be cast to the Win32 type HANDLE to be used in other functions. It returns 0 if
it fails.
The program in Listing 1.3 is a C++/Win32 version of the simple Java pro-
gram in Listing 1.1. Array threadArray stores the handles for the two threads
created in main(). Each thread executes the code in function simpleThread(),
which displays the ID assigned by the user and returns the ID. Thread IDs are
integers that the user supplies as the fourth argument on the call to function
_beginthreadex(). Function _beginthreadex() forwards the IDs as arguments to
thread function simpleThread() when the threads are created.
Threads created in main() will not continue to run after the main thread exits.
Thus, the main thread must wait for both of the threads it created to complete
before it exits the main() function. (This behavior is opposite that of Java’s
main() method.) It does this by calling function WaitForMultipleObjects(). The
second argument to WaitForMultipleObjects() is the array that holds the thread
handles, and the first argument is the size of this array. The third argument
TRUE indicates that the function will wait for all of the threads to complete.
If FALSE were used instead, the function would wait until any one of the
threads completed. The fourth argument is a timeout duration in milliseconds.
The value INFINITE means that there is no time limit on how long WaitForMul-
tipleObjects() should wait for the threads to complete. When both threads have
completed, function GetExitCodeThread() is used to capture the return values of
the threads.
8 INTRODUCTION TO CONCURRENT PROGRAMMING
#include <iostream>
#include <windows.h>
#include <process.h> // needed for function _beginthreadex()
void PrintError(LPTSTR lpszFunction, LPSTR fileName, int lineNumber) {
   DWORD errorCode = GetLastError();
   // ... (the body of PrintError is garbled in this excerpt; it formats an
   // error message for errorCode, displays it together with lpszFunction,
   // fileName, and lineNumber, and frees the message buffer) ...
   exit(errorCode);
}
unsigned WINAPI simpleThread (LPVOID myID) {
// myID receives the 4th argument of _beginthreadex().
// Note: ‘‘WINAPI’’ refers to the ‘‘__stdcall’’ calling convention
// API functions, and ‘‘LPVOID’’ is a Win32 data type defined as void*
std::cout << "Thread " << (unsigned) myID << " is running" << std::endl;
return (unsigned) myID;
}
int main() {
const int numThreads = 2;
HANDLE threadArray[numThreads]; // array of thread handles
unsigned threadID; // returned by _beginthreadex(), but not used
DWORD rc; // return code; (DWORD is defined in WIN32 as unsigned long)
// Create two threads and store their handles in array threadArray
threadArray[0] = (HANDLE) _beginthreadex(NULL, 0, simpleThread,
(LPVOID) 1U, 0, &threadID);
if (!threadArray[0])
PrintError("_beginthreadex failed at ",__FILE__,__LINE__);
threadArray[1] = (HANDLE) _beginthreadex(NULL, 0, simpleThread,
(LPVOID) 2U, 0, &threadID);
if (!threadArray[1])
PrintError("_beginthreadex failed at ",__FILE__,__LINE__);
rc = WaitForMultipleObjects(numThreads,threadArray,TRUE,INFINITE);
//wait for the threads to finish
Listing 1.3 Simple concurrent program using C++/Win32.
if (!(rc >= WAIT_OBJECT_0 && rc < WAIT_OBJECT_0+numThreads))
PrintError("WaitForMultipleObjects failed at ",__FILE__,__LINE__);
DWORD result1, result2; // these variables will receive the return values
rc = GetExitCodeThread(threadArray[0],&result1);
if (!rc) PrintError("GetExitCodeThread failed at ",__FILE__,__LINE__);
rc = GetExitCodeThread(threadArray[1],&result2);
if (!rc) PrintError("GetExitCodeThread failed at ",__FILE__,__LINE__);
std::cout << "thread1: " << result1 << " thread2: " << result2 << std::endl;
rc = CloseHandle(threadArray[0]); // release reference to thread when finished
if (!rc) PrintError("CloseHandle failed at ",__FILE__,__LINE__);
rc = CloseHandle(threadArray[1]);
if (!rc) PrintError("CloseHandle failed at ",__FILE__,__LINE__);
return 0;
}
Listing 1.3 (continued )
Every Win32 process has at least one thread, which we have been referring
to as the main thread. Processes can be assigned to a priority class (e.g., High or
Low), and the threads within a process can be assigned a priority that is higher or
lower than their parent process. The Windows operating system uses preemptive,
priority-based scheduling. Threads are scheduled based on their priority levels,
giving preference to higher-priority threads. Since we will not be using thread
priorities in our Win32 programs, we will assume that the operating system will
give a time slice to each program thread, in round-robin fashion. (The threads in
a Win32 program will be competing for the CPU with threads in other programs
and with system threads, and these other threads may have higher priorities.)
1.5 PTHREADS
A POSIX thread is created by calling function pthread_create():
int pthread_create(
   pthread_t* thread,            // thread ID
   const pthread_attr_t* attr,   // thread attributes
   void* (*start)(void*),        // starting address of the function to run
   void* arg                     // an argument to be passed to the thread
);
The parameters for pthread_create() are as follows:
• thread: the address of a memory location that will receive an identifier
assigned to the thread if creation is successful. A thread can get its own
identifier by calling pthread_self(). Two identifiers can be compared using
pthread_equal(ID1,ID2).
• attr: the address of a variable of type pthread_attr_t, which can be used to
specify certain attributes of the created thread.
• start: the (address of the) function that the thread will execute. (This func-
tion plays the same role as the run() method in Java.)
• arg: an argument to be passed to the thread.
If pthread_create() is successful, it returns 0; otherwise, it returns an error
code from the <errno.h> header file. The other Pthreads functions follow the
same error-handling scheme.
The program in Listing 1.4 is a C++/Pthreads version of the C++/Win32 pro-
gram in Listing 1.3. A Pthreads program must include the standard header file
<pthread.h> for the Pthreads library. Array threadArray stores the Pthreads
IDs for the two threads created in main(). Thread IDs are of type pthread_t.
Each thread executes the code in function simpleThread(), which displays the
IDs assigned by the user. The IDs are integers that are supplied as the fourth
argument on the call to function pthread_create(). Function pthread_create()
forwards the IDs as arguments to thread function simpleThread() when the
threads are created.
Threads have attributes that can be set when they are created. These attributes
include the size of a thread’s stack, its priority, and the policy for schedul-
ing threads. In most cases the default attributes are sufficient. Attributes are set
by declaring and initializing an attributes object. Each attribute in the attributes
object has a pair of functions for reading (get) and writing (set) its
value.
In Listing 1.4, the attribute object threadAttribute is initialized by calling
pthread_attr_init(). The scheduling scope attribute is set to
PTHREAD_SCOPE_SYSTEM by calling pthread_attr_setscope(). This attribute
indicates that we want the threads to be scheduled directly by the operating
system. The default value for this attribute is PTHREAD_SCOPE_PROCESS,
which indicates that only the process, not the threads, will be visible to the
operating system. When the operating system schedules the process, the
scheduling routines in the Pthreads library will choose which thread to run. The
address of threadAttribute is passed as the second argument on the call to
pthread_create().
As in Win32, the main thread must wait for the two threads it created to
complete before it exits the main() function. It does this by calling function
pthread_join() twice. The first argument to pthread_join() is the thread ID of
the thread to wait on. The second argument is the address of a variable that will
receive the return value of the thread. In our program, neither thread returns a
value, so we use NULL for the second argument. (The value NULL can also be
used if there is a return value that we wish to ignore.)
#include <iostream>
#include <pthread.h>
#include <errno.h>
#include <string.h> // for strerror(), used by PrintError()
void PrintError(char* msg, int status, char* fileName, int lineNumber) {
std::cout << msg << ' ' << fileName << ":" << lineNumber
<< "- " << strerror(status) << std::endl;
}
void* simpleThread (void* myID) { // myID is the fourth argument of
// pthread_create ()
std::cout << "Thread " << (long) myID << " is running" << std::endl;
return NULL; // implicit call to pthread_exit(NULL);
}
int main() {
pthread_t threadArray[2]; // array of thread IDs
int status; // error code
pthread_attr_t threadAttribute; // thread attribute
status = pthread_attr_init(&threadAttribute); // initialize attribute object
if (status != 0) { PrintError("pthread_attr_init failed at", status, __FILE__,
__LINE__); exit(status);}
// set the scheduling scope attribute
status = pthread_attr_setscope(&threadAttribute,
PTHREAD_SCOPE_SYSTEM);
if (status != 0) { PrintError("pthread_attr_setscope failed at", status, __FILE__,
__LINE__); exit(status);}
// Create two threads and store their IDs in array threadArray
status = pthread_create(&threadArray[0], &threadAttribute, simpleThread,
(void*) 1L);
if (status != 0) { PrintError("pthread_create failed at", status, __FILE__,
__LINE__); exit(status);}
status = pthread_create(&threadArray[1], &threadAttribute, simpleThread,
(void*) 2L);
if (status != 0) { PrintError("pthread_create failed at", status, __FILE__,
__LINE__); exit(status);}
status = pthread_attr_destroy(&threadAttribute); // destroy the attribute object
if (status != 0) { PrintError("pthread_attr_destroy failed at", status, __FILE__,
__LINE__); exit(status);}
status = pthread_join(threadArray[0],NULL); // wait for threads to finish
if (status != 0) { PrintError("pthread_join failed at", status, __FILE__,
__LINE__); exit(status);}
status = pthread_join(threadArray[1],NULL);
if (status != 0) { PrintError("pthread_join failed at", status, __FILE__,
__LINE__); exit(status);}
}
Listing 1.4 Simple concurrent program using C++/Pthreads.
Suppose that instead of returning NULL, thread function simpleThread()
returned the value of parameter myID:
void* simpleThread (void* myID) { // myID was the fourth argument on the call
// to pthread_create ()
std::cout << "Thread " << (long) myID << " is running" << std::endl;
return myID; // implicit call to pthread_exit(myID);
}
We can use function pthread_join() to capture the values returned by the
threads:
long result1, result2; // these variables will receive the return values
status = pthread_join(threadArray[0],(void**) &result1);
if (status != 0) { /* ... */ }
status = pthread_join(threadArray[1],(void**) &result2);
if (status != 0) { /* ... */ }
std::cout << "thread1: " << (long) result1 << " thread2: " << (long) result2
          << std::endl;
A thread usually terminates by returning from its thread function. What hap-
pens after that depends on whether the thread has been detached. Threads that are
terminated but not detached retain system resources that have been allocated to
them. This means that the return values for undetached threads are still available
and can be accessed by calling pthread_join(). Detaching a thread allows the
system to reclaim the resources allocated to that thread. But a detached thread
cannot be joined.
You can detach a thread at any time by calling function pthread_detach(). For
example, the main thread can detach the first thread by calling
status = pthread_detach(threadArray[0]);
Or the first thread can detach itself:
status = pthread_detach(pthread_self());
Calling pthread_join(threadArray[0],NULL) also detaches the first thread. We
will typically use pthread_join() to detach threads, which will make our Pthreads
programs look very similar to Win32 programs.
If you create a thread that definitely will not be joined, you can use an attribute
object to ensure that when the thread is created, it is already detached. The
code for creating threads in a detached state is shown below. Attribute detach-
state is set to PTHREAD_CREATE_DETACHED. The other possible value for
this attribute is PTHREAD_CREATE_JOINABLE. (By default, threads are sup-
posed to be created joinable. To ensure that your threads are joinable, you may
want to use an attribute object and set the detachstate attribute explicitly to
PTHREAD_CREATE_JOINABLE.)
int main() {
pthread_t threadArray[2]; // array of thread IDs
int status; // error code
pthread_attr_t threadAttribute; // thread attribute
status = pthread_attr_init(&threadAttribute); // initialize the attribute object
if (status != 0) { /* ... */ }
// set the detachstate attribute to detached
status = pthread_attr_setdetachstate(&threadAttribute,
PTHREAD_CREATE_DETACHED);
if (status != 0) { /* ... */ }
// create two threads in the detached state
status = pthread_create(&threadArray[0], &threadAttribute, simpleThread,
(void*) 1L);
if (status != 0) { /* ... */ }
status = pthread_create(&threadArray[1], &threadAttribute, simpleThread,
(void*) 2L);
if (status != 0) { /* ... */ }
// destroy the attribute object when it is no longer needed
status = pthread_attr_destroy(&threadAttribute);
if (status != 0) { /* ... */ }
// allow all threads to complete
pthread_exit(NULL);
}
When the threads terminate, their resources are reclaimed by the system. This
also means that the threads cannot be joined. Later we will learn about several
synchronization constructs that can be used to simulate a join operation. We
can use one of these constructs to create threads in a detached state but still be
notified when they have completed their tasks.
Since the threads are created in a detached state, the main thread cannot call
pthread_join() to wait for them to complete. But we still need to ensure that the
threads have a chance to complete before the program (i.e., the process) exits.
We do this by having the main thread call pthread_exit() at the end of the main()
function. This allows the main thread to terminate but ensures that the program
does not terminate until the last thread has terminated. The resulting program
behaves similarly to the Java versions in Listings 1.1 and 1.2 in that the threads
created in main() continue to run after the main thread completes.
1.6 C++ THREAD CLASS
Details about Win32 and POSIX threads can be encapsulated in a C++ Thread
class. Class Thread hides some of the complexity of using the Win32 and POSIX
thread functions and allows us to write multithreaded C++ programs that have
an object-oriented structure that is almost identical to that of Java programs.
Using a Thread class will also make it easier for us to provide some of the
basic services that are needed for testing and debugging multithreaded programs.
The implementation of these services can be hidden inside the Thread class,
enabling developers to use these services without knowing any details about
their implementation.
1.6.1 C++ Class Thread for Win32
Listing 1.5 shows C++ classes Runnable and Thread for Win32. Since Win32
thread functions can return a value, we allow method run() to return a value.
The return value can be retrieved by using a Pthreads-style call to method join().
Class Runnable simulates Java’s Runnable interface. Similar to the way that we
created threads in Java, we can write a C++ class that provides a run() method
and inherits from Runnable; create an instance of this class; pass a pointer to that
instance as an argument to the Thread class constructor; and call start() on the
Thread object. Alternatively, we can write C++ classes that inherit directly from
class Thread and create instances of these classes on the heap or on the stack.
(Java Thread objects, like other Java objects, are never created on the stack.)
Class Thread’s join() method returns the value that was returned by run().
Java has a built-in join() operation. There was no need to call method join() in
the Java programs in Listings 1.1 and 1.2 since threads created in a Java main()
method continue to run after the main thread completes. Method join() is useful
in Java when one thread needs to make sure that other threads have completed
before, say, accessing their results. As we mentioned earlier, Java’s run() method
cannot return a value (but see Exercise 1.4), so results must be obtained some
other way.
The program in Listing 1.6 illustrates the use of C++ classes Thread and
Runnable. It is designed to look like the Java programs in Listings 1.1 and 1.2.
[Note that a C-style cast (int) x can be written in C++ as reinterpret_cast<int>(x),
which is used for converting between unrelated types (as in void* and int).]
class Runnable {
public:
virtual void* run() = 0;
virtual~Runnable() = 0;
};
Runnable::~Runnable() {} // function body required for pure virtual destructors
class Thread {
public:
Thread(std::auto_ptr<Runnable> runnable_);
Thread();
virtual~Thread();
void start(); // starts a suspended thread
void* join(); // wait for thread to complete
private:
HANDLE hThread;
unsigned winThreadID; // Win32 thread ID
std::auto_ptr<Runnable> runnable;
Thread(const Thread&);
const Thread& operator=(const Thread&);
void setCompleted(); // called when run() completes
void* result; // stores value returned by run()
virtual void* run() { return 0; }
static unsigned WINAPI startThreadRunnable(LPVOID pVoid);
static unsigned WINAPI startThread(LPVOID pVoid);
void PrintError(LPTSTR lpszFunction, LPSTR fileName, int lineNumber);
};
Thread::Thread(std::auto_ptr<Runnable> runnable_) : runnable(runnable_) {
if (runnable.get() == NULL)
PrintError("Thread(std::auto_ptr<Runnable> runnable_) failed at ",
__FILE__,__LINE__);
hThread = (HANDLE)_beginthreadex(NULL,0,Thread::startThreadRunnable,
(LPVOID)this, CREATE_SUSPENDED, &winThreadID );
if (!hThread) PrintError("_beginthreadex failed at ",__FILE__,__LINE__);
}
Thread::Thread(): runnable(NULL) {
hThread = (HANDLE)_beginthreadex(NULL,0,Thread::startThread,
(LPVOID)this, CREATE_SUSPENDED, &winThreadID );
if (!hThread) PrintError("_beginthreadex failed at ",__FILE__,__LINE__);
}
unsigned WINAPI Thread::startThreadRunnable(LPVOID pVoid){
Thread* runnableThread = static_cast<Thread*> (pVoid);
assert(runnableThread);
runnableThread->result = runnableThread->runnable->run();
runnableThread->setCompleted();
return 0;
}
unsigned WINAPI Thread::startThread(LPVOID pVoid){
Thread* aThread = static_cast<Thread*> (pVoid);
assert(aThread);
aThread->result = aThread->run();
aThread->setCompleted();
return 0;
}
Listing 1.5 C++/Win32 classes Runnable and Thread.
Thread::~Thread() {
if (winThreadID != GetCurrentThreadId()) {
DWORD rc = CloseHandle(hThread);
if (!rc) PrintError("CloseHandle failed at ",__FILE__,__LINE__);
}
// note that the runnable object (if any) is automatically deleted by auto_ptr.
}
void Thread::start() {
assert(hThread != NULL);
DWORD rc = ResumeThread(hThread);
// thread was created in suspended state so this starts it running
if (!rc) PrintError("ResumeThread failed at ",__FILE__,__LINE__);
}
void* Thread::join() {
/* a thread calling T.join() waits until thread T completes; see Section 3.7.4.*/
return result; // return the void* value that was returned by method run()
}
void Thread::setCompleted() {
/* notify any threads that are waiting in join(); see Section 3.7.4. */
}
void Thread::PrintError(LPTSTR lpszFunction,LPSTR fileName, int lineNumber)
{ /* see Listing 1.3 */}
Listing 1.5 (continued )
The implementation of class Thread is not simple. When a C++ Thread object
is created, the corresponding Thread constructor calls function _beginthreadex()
with the following arguments:
• NULL. This is the default value for security attributes.
• 0. This is the default value for stack size.
• The third argument is either Thread::startThread() or Thread::startThread-
Runnable(). Method startThread() is the startup method for threads created
by inheriting from class Thread. Method startThreadRunnable() is the startup
method for threads created from Runnable objects.
// (the beginning of Listing 1.6, including the definitions of classes
// simpleRunnable and simpleThread, is missing from this excerpt)
int main() {
std::auto_ptr<Runnable> r(new simpleRunnable(1));
std::auto_ptr<Thread> thread1(new Thread(r));
thread1->start();
std::auto_ptr<simpleThread> thread2(new simpleThread(2));
thread2->start();
simpleThread thread3(3);
thread3.start();
// thread1 and thread2 are created on the heap; thread3 is created on the stack
// wait for the threads to finish
int result1 = reinterpret_cast<int>(thread1->join());
int result2 = reinterpret_cast<int>(thread2->join());
int result3 = reinterpret_cast<int>(thread3.join());
std::cout << result1 << ' ' << result2 << ' ' << result3 << std::endl;
return 0;
// the destructors for thread1 and thread2 will automatically delete the
// pointed-at thread objects
}
Listing 1.6 Using C++ classes Runnable and Thread .
• (LPVOID) this. The fourth argument is a pointer to the Thread object being
constructed; the startup method casts it back to Thread* so that it can call
run() (or the run() method of the Runnable object).
• CREATE_SUSPENDED. A Win32 thread is created to execute the startup
method, but this thread is created in suspended mode, so the startup method
does not begin executing until method start() is called on the thread.
Since the Win32 thread is created in suspended mode, the thread is not actu-
ally started until method Thread::start() is called. Method Thread::start() calls
Win32 function ResumeThread(), which allows the thread to be scheduled and
the startup method to begin execution. The startup method is either startThread()
or startThreadRunnable(), depending on which Thread constructor was used to
create the Thread object.
Method startThread() casts its void* pointer parameter to Thread* and then
calls the run() method of its Thread* parameter. When the run() method returns,
startThread() calls setCompleted() to set the thread’s status to completed and to
notify any threads waiting in join() that the thread has completed. The return
value of the run() method is saved so that it can be retrieved in method join().
Static method startThreadRunnable() performs similar steps when threads are
created from Runnable objects. Method startThreadRunnable() calls the run()
method of the Runnable object held by its Thread* parameter and then calls
setCompleted().
In Listing 1.6 we use auto_ptr<> objects to manage the destruction of two
of the threads and the Runnable object r. When auto_ptr<> objects thread1 and
thread2 are destroyed automatically at the end of the program, their destructors
will invoke delete automatically on the pointers with which they were initialized.
This is true whether the main function exits normally or by means of an
exception. Passing an auto_ptr<Runnable> object to the Thread class
constructor passes ownership of the Runnable object from the main thread to the
child thread. The auto_ptr<Runnable> member in the child thread’s Thread
object then owns the Runnable object and deletes it automatically when the
Thread object is destroyed. In general, if one thread passes an object to another
thread, it must be clear which thread owns the object and will clean up when the
object is no longer needed. This ownership issue is raised again in Chapter 5,
where threads communicate by passing message objects instead of accessing
global variables.
Note that startup functions startThreadRunnable() and startThread() are static
member functions. To understand why they are static, recall that function
_beginthreadex() expects to receive the address of a startup function that has a
single (void*) parameter. A nonstatic member function that declares a single
parameter actually has two parameters. This is because each nonstatic member
function has, in addition to its declared parameters, a hidden parameter that
corresponds to the this pointer. (If you execute myObject.foo(x), the value of the
this pointer in method foo() is the address of myObject.) Thus, if the startup
function is a nonstatic member function, the hidden parameter gets in the way
and the call to the startup function fails. Static member functions do not have
hidden parameters.
1.6.2 C++ Class Thread for Pthreads
Listing 1.7 shows C++ classes Runnable and Thread for Pthreads. The interfaces
for these classes are nearly identical to the Win32 versions. The only difference
is that the Thread class constructor has a parameter indicating whether or not the
thread is to be created in a detached state. The default is undetached. The program
in Listing 1.6 can be executed as a Pthreads program without making changes.
The main difference in the implementation of the Pthreads Thread class is that
threads are created in the start() method instead of the Thread class constructor.
This is because threads cannot be created in the suspended state and then later
resumed. Thus, we create and start a thread in one step. Note also that calls to
method join() are simply passed through to function pthread_join().
1.7 THREAD COMMUNICATION
The concurrent programs we have seen so far are not very interesting because
the threads they contain do not work together. For threads to work together,
they must communicate. One way for threads to communicate is by accessing
shared memory. Threads in the same program can reference global variables or
call methods on a shared object, subject to the naming and scope rules of the
programming language. Threads in different processes can access the same kernel
objects by calling kernel routines. It is the programmer’s responsibility to define
the shared variables and objects that are needed for communication. Forming the
necessary connections between the threads and the kernel objects is handled by
the compiler and the linker.
Threads can also communicate by sending and receiving messages across
communication channels. A channel may be implemented as an object in shared
memory, in which case message passing is just a particular style of shared mem-
ory communication. A channel might also connect threads in different programs,
possibly running on different machines that do not share memory. Forming net-
work connections between programs on different machines requires help from
the operating system and brings up distributed programming issues such as how
to name and reference channels that span multiple programs, how to resolve pro-
gram references to objects that exist on different machines, and the reliability of
passing messages across a network. Message passing is discussed in Chapters 5
and 6. In this chapter we use simple shared variable communication to introduce
the subtleties of concurrent programming.
class Runnable {
public:
virtual void* run() = 0;
virtual~Runnable() = 0;
};
// a function body is required for pure virtual destructors
Runnable::~Runnable() {}
class Thread {
public:
Thread(auto_ptr<Runnable> runnable_, bool isDetached = false);
Thread(bool isDetached = false);
virtual~Thread();
void start();
void* join();
private:
pthread_t PthreadThreadID; // thread ID
bool detached; // true if thread was created in a detached state; false otherwise
pthread_attr_t threadAttribute;
auto_ptr<Runnable> runnable;
Thread(const Thread&);
const Thread& operator= (const Thread&);
void setCompleted();
void* result; // stores return value of run()
virtual void* run() { return NULL; } // overridden by classes that inherit from Thread
static void* startThreadRunnable(void* pVoid);
static void* startThread(void* pVoid);
void PrintError(char* msg, int status, char* fileName, int lineNumber);
};
Thread::Thread(auto_ptr<Runnable> runnable_, bool isDetached) :
runnable(runnable_),detached(isDetached){
if (runnable.get() == NULL) {
   std::cout << "Thread::Thread(auto_ptr<Runnable> runnable_, bool isDetached)"
             << " failed at " << __FILE__ << ":" << __LINE__
             << " - runnable is NULL" << std::endl;
   exit(-1);
}
}
Thread::Thread(bool isDetached) : runnable(NULL), detached(isDetached){ }
void* Thread::startThreadRunnable(void* pVoid){
// thread start function when a Runnable is involved
Thread* runnableThread = static_cast<Thread*> (pVoid);
assert(runnableThread);
Listing 1.7 C++/Pthreads classes Runnable and Thread .
runnableThread->result = runnableThread->runnable->run();
runnableThread->setCompleted();
return runnableThread->result;
}
void* Thread::startThread(void* pVoid) {
// thread start function when no Runnable is involved
Thread* aThread = static_cast<Thread*> (pVoid);
assert(aThread);
aThread->result = aThread->run();
aThread->setCompleted();
return aThread->result;
}
Thread::~Thread() { }
void Thread::start() {
int status = pthread_attr_init(&threadAttribute); // initialize attribute object
if (status != 0) { PrintError("pthread_attr_init failed at", status, __FILE__,
__LINE__); exit(status);}
status = pthread_attr_setscope(&threadAttribute,
PTHREAD_SCOPE_SYSTEM);
if (status != 0) { PrintError("pthread_attr_setscope failed at",
status, __FILE__, __LINE__); exit(status);}
if (!detached) {
if (runnable.get() == NULL) {
int status = pthread_create(&PthreadThreadID,&threadAttribute,
Thread::startThread,(void*) this);
if (status != 0) { PrintError("pthread_create failed at",
status, __FILE__, __LINE__); exit(status);}
}
else {
int status = pthread_create(&PthreadThreadID,&threadAttribute,
Thread::startThreadRunnable, (void*)this);
if (status != 0) {PrintError("pthread_create failed at",
status, __FILE__, __LINE__); exit(status);}
}
}
else {
// set the detachstate attribute to detached
status = pthread_attr_setdetachstate(&threadAttribute,
PTHREAD_CREATE_DETACHED);
if (status != 0){
PrintError("pthread_attr_setdetachstate failed at",
status,__FILE__,__LINE__);exit(status);
}
Listing 1.7 (continued )
if (runnable.get() == NULL) {
status = pthread_create(&PthreadThreadID,&threadAttribute,
Thread::startThread, (void*) this);
if (status != 0) {PrintError("pthread_create failed at",
status, __FILE__, __LINE__);exit(status);}
}
else {
status = pthread_create(&PthreadThreadID,&threadAttribute,
Thread::startThreadRunnable, (void*) this);
if (status != 0) {PrintError("pthread_create failed at",
status, __FILE__, __LINE__); exit(status);}
}
}
status = pthread_attr_destroy(&threadAttribute);
if (status != 0) { PrintError("pthread_attr_destroy failed at",
status, __FILE__, __LINE__); exit(status);}
}
void* Thread::join() {
int status = pthread_join(PthreadThreadID,NULL);
// result was already saved by thread start functions
if (status != 0) { PrintError("pthread_join failed at",
status, __FILE__, __LINE__); exit(status);}
return result;
}
void Thread::setCompleted() {/* completion was handled by pthread_join() */}
void Thread::PrintError(char* msg, int status, char* fileName, int lineNumber)
{/*see Listing 1.4 */}
Listing 1.7 (continued )
Listing 1.8 shows a C++ program in which the main thread creates two com-
municatingThreads. Each communicatingThread increments the global shared
variable s 10 million times. The main thread uses join() to wait for the com-
municatingThreads to complete, then displays the final value of s. When you
execute the program in Listing 1.8, you might expect the final value of s to be
20 million. However, this may not always be what happens. For example, we
executed this program 50 times. In 49 of the executions, the value 20000000
was displayed, but the value displayed for one of the executions was 19215861.
This example illustrates two important facts of life for concurrent programmers.
The first is that the execution of a concurrent program is nondeterministic: Two
executions of the same program with the same input can produce different
results. This is true even for correct concurrent programs, so nondeterministic
behavior should not be equated with incorrect behavior. The second fact is that
subtle programming errors involving shared variables can produce unexpected
results.
int s=0; // shared variable s
class communicatingThread: public Thread {
public:
communicatingThread (int ID) : myID(ID) {}
virtual void* run();
private:
int myID;
};
void* communicatingThread::run() {
    std::cout << "Thread " << myID << " is running" << std::endl;
    for (int i=0; i<10000000; i++)
        s = s + 1;  // increment s; the expected final value of s is 20000000
    return 0;
}
Listing 1.8 Shared variable communication.
1.7.1 Nondeterministic Execution Behavior
The following two examples illustrate nondeterministic behavior. In Example 1
each thread executes a single statement, but the order in which the three state-
ments are executed is unpredictable:
Example 1 Assume that integer x is initially 0.
Thread1 Thread2 Thread3
(1) x = 1; (2) x = 2; (3) y = x;
The final value of y is unpredictable, but it is expected to be either 0, 1, or 2.
Following are some of the possible interleavings of these three statements:
(3), (1), (2) ⇒ final value of y is 0
(2), (1), (3) ⇒ final value of y is 1
(1), (2), (3) ⇒ final value of y is 2
We do not expect y to have the final value 3, which might happen if the assign-
ment statements in Thread1 and Thread2 are executed at about the same time
and x is assigned some of the bits in the binary representation of 1 and some
of the bits in the binary representation of 2. The memory hardware guarantees
that this cannot happen by ensuring that read and write operations on integer
variables do not overlap. (Below, such operations are called atomic operations.)
In the next example, Thread3 will loop forever if and only if the value of x
is 1 whenever the condition (x == 2) is evaluated.
Example 2 Assume that x is initially 2.
Thread1 Thread2 Thread3
while (true) { while (true) { while (true) {
(1) x = 1; (2) x = 2; (3) if (x == 2) exit(0);
} } }
Thread3 will never terminate if statements (1), (2), and (3) are interleaved as
follows: (2), (1), (3), (2), (1), (3), (2), (1), (3), . . . . This interleaving is probably
not likely to happen, but if it did, it would not be completely unexpected.
In general, nondeterministic execution behavior is caused by one or more of
the following:
ž The unpredictable rate of progress of threads executing on a single processor
(due to context switches between the threads)
ž The unpredictable rate of progress of threads executing on different proces-
sors (due to differences in processor speeds)
ž The use of nondeterministic programming constructs, which make unpre-
dictable selections between two or more possible actions (we look at exam-
ples of this in Chapters 5 and 6)
Nondeterministic results do not necessarily indicate the presence of an error.
Threads are frequently used to model real-world objects, and the real world is
nondeterministic. Furthermore, it can be difficult and unnatural to model non-
deterministic behavior with a deterministic program, but this is sometimes done
to avoid dealing with nondeterministic executions. Some parallel programs are
expected to be deterministic [Emrath et al. 1992], but these types of programs
do not appear in this book.
Nondeterminism adds flexibility to a design. As an example, consider two
robots that are working on an assembly line. Robot 1 produces parts that Robot
2 assembles into some sort of component. To compensate for differences in the
rates at which the two robots work, we can place a buffer between the robots.
Robot 1 produces parts and deposits them into the buffer, and Robot 2 withdraws
the parts and assembles them into finished components. This adds some flexibility
to the assembly line, but the order in which parts are deposited and withdrawn by
the robots may be nondeterministic. If the actions of the robots are controlled by
software, using one thread to control each robot and one thread to manage the
buffer, the behavior of the threads will also be nondeterministic.
Nondeterminism and concurrency are related concepts. Consider two nonover-
lapping events A and B that execute concurrently. The fact that A and B are
concurrent means that they can occur in either order. Thus, their concurrency can
be modeled as a nondeterministic choice between two interleavings of events:
(A followed by B) or (B followed by A). This interleaving model of concur-
rency is used by certain techniques for building and verifying models of program
behavior. But notice that the possible number of interleavings explodes as the
number of concurrent events increases, which makes even small programs hard
to manage. We quantify this explosion later.
Nondeterminism is an inherent property of concurrent programs. The burden
of dealing with nondeterminism falls on the programmer, who must ensure that
threads are correctly synchronized without imposing unnecessary constraints that
only reduce the level of concurrency. In our assembly line example, the buffer
allows the two robots to work concurrently and somewhat independently. How-
ever, the buffer has a limited capacity, so the robots and the buffer must be
properly synchronized. Namely, it is the programmer’s responsibility to ensure
that Robot 1 is not allowed to deposit a part when the buffer is completely full.
Similarly, Robot 2 must be delayed when the buffer becomes empty. Finally,
to avoid collisions, the robots should not be allowed to access the buffer at the
same time. Thread synchronization is a major concern of the rest of this book.
Nondeterministic executions create major problems during testing and debug-
ging. Developers rely on repeated, deterministic executions to find and fix pro-
gramming errors. When we observe a failure in a single-threaded program, we
fully expect to be able to reproduce the failure so that we can locate and fix the
bug that caused it. If a failure is not reproducible, that tells us something about
the problem since only certain types of bugs (e.g., uninitialized variables) cause
nondeterministic failures in sequential programs. Concurrent programs, on the
other hand, are inherently nondeterministic. Coping successfully with nondeter-
minism during testing and debugging is essential for concurrent programmers.
After we look at some common programming errors, we examine the types of
problems that nondeterministic executions cause during testing and debugging.
1.7.2 Atomic Actions
One common source of bugs in concurrent programs is the failure to implement
atomic actions correctly. An atomic action acts on the state of a program. The
program’s state contains a value for each variable defined in the program and
other implicit variables, such as the program counter. An atomic action transforms
the state of the program, and the state transformation is indivisible. For example,
suppose that in the initial state of a program the variable x has the value 0. Then
after executing an atomic assignment statement that assigns 1 to x, the program
will be in a new state in which x has the value 1.
The requirement for state transformations to be indivisible does not necessarily
mean that context switches cannot occur in the middle of an atomic action. A state
transformation performed during an atomic action is indivisible if other threads
can see the program’s state as it appears before the action or after the action, but
not some intermediate state while the action is occurring. Thus, we can allow a
context switch to occur while one thread is performing an atomic action, even if
that action involves many variables and possibly multiple assignment statements,
as long as we don’t allow the other threads to see or interfere with the action
while it is in progress. As we will see, this means that we may have to block the
other threads until the action is finished.
The execution of a concurrent program results in a sequence of atomic actions
for each thread. Since the state transformation caused by an atomic action is
indivisible, executing a set of atomic actions concurrently is equivalent to exe-
cuting them in some sequential order. A particular execution can be characterized
as a (nondeterministic) interleaving of the atomic actions performed by the
threads. (The relationship between concurrency and nondeterminism was dis-
cussed above.) This interleaving determines the result of the execution.
Individual machine instructions such as load, add, subtract, and store are
typically executed atomically; this is guaranteed by the memory hardware. In
Java, an assignment of 32 bits or less is guaranteed to be implemented atomically,
so an assignment statement such as x = 1 for a variable x of type int is an atomic
action. In general, however, the execution of an assignment statement may not
be atomic.
Nonatomic Arithmetic Expressions and Assignment Statements When we
write a concurrent program, we might assume that the execution of a single
arithmetic expression or assignment statement is an atomic action. However, an
arithmetic expression or assignment statement is compiled into several machine
instructions, and an interleaving of the machine instructions from two or more
expressions or assignment statements may produce unexpected results.
Example 3 Assume that y and z are initially 0.
Thread1 Thread2
x = y + z; y = 1;
z = 2;
If we (incorrectly) assume that execution of each assignment statement is an
atomic action, the expected final value of x computed by Thread1 is 0, 1,
or 3, representing the sums 0 + 0, 1 + 0, and 1 + 2, respectively. However,
the machine instructions for Thread1 and Thread2 will look something like
the following:
Thread1 Thread2
(1) load r1, y (4) assign y, 1
(2) add r1, z (5) assign z, 2
(3) store r1, x
Below are some of the possible interleavings of these machine instructions. The
character * indicates an unexpected result:
(1), (2), (3), (4), (5) ⇒ x is 0
(4), (1), (2), (3), (5) ⇒ x is 1
(1), (4), (5), (2), (3) ⇒ x is 2 *
(4), (5), (1), (2), (3) ⇒ x is 3
Example 4 Assume that the initial value of x is 0.
Thread1 Thread2
x = x + 1; x = 2;
Again we are assuming incorrectly that the execution of each assignment state-
ment is an atomic action, so the expected final value of x is 2 or 3. The machine
instructions for Thread1 and Thread2 are
Thread1 Thread2
(1) load r1, x (4) assign x, 2
(2) add r1, 1
(3) store r1, x
Following are some of the possible interleavings of these machine instructions.
Once again, the character “*” indicates an unexpected result.
(1), (2), (3), (4) ⇒ x is 2
(4), (1), (2), (3) ⇒ x is 3
(1), (2), (4), (3) ⇒ x is 1 *
As these examples illustrate, since machine instructions are atomic actions, it is
the interleaving of the machine instructions, not the interleaving of the statements,
that determines the result. If there are n threads (Thread1, Thread2, . . . , Threadn)
such that Thread i executes mi atomic actions, the number of possible interleav-
ings of the atomic actions is
(m1 + m2 + ... + mn)!
---------------------
(m1! * m2! * ... * mn!)
This formula shows mathematically what concurrent programmers are quick to
learn—there is no such thing as a simple concurrent program.
Andrews [2000] defined a condition called at-most-once under which expres-
sion evaluations and assignments will appear to be atomic. (Our definition of
at-most-once is slightly stronger than Andrews’s; see Exercise 1.10.) A criti-
cal reference in an expression is a reference to a variable that is changed by
another thread.
ž An assignment statement x = e satisfies the at-most-once property if either:
(1) e contains at most one critical reference and x is neither read nor written
by another thread, or
(2) e contains no critical references, in which case x may be read or written
by other threads.
ž An expression that is not in an assignment satisfies at-most-once if it con-
tains no more than one critical reference.
This condition is called at-most-once because there can be at most one shared
variable, and the shared variable can be referenced at most one time. Assignment
statements that satisfy at-most-once appear to execute atomically even though
they are not atomic. That is, we would get the same results from executing these
assignment statements even if we were to somehow prevent the interleaving
of their machine instructions so that the assignment statements were forced to
execute atomically. Next, we’ll use several examples to illustrate at-most-once.
Consider Example 3 again. The expression in the assignment statement in
Thread1 makes a critical reference to y and a critical reference to z, so the
assignment statement in Thread1 does not satisfy the at-most-once property.
The expressions in the assignment statements in Thread2 contain no critical
references, so the assignment statements in Thread2 satisfy at-most-once.
Example 5 Assume that x and y are initially 0.
Thread1 Thread2
x = y + 1; y = 1;
Both assignment statements satisfy the at-most-once condition. The expression in
Thread1’s assignment statement references y (one critical reference), but x is not
referenced by Thread2, and the expression in Thread2’s assignment statement
has no critical references. The final value of x is nondeterministic and is either
1 or 2, as expected.
Nonatomic Groups of Statements Another type of undesirable nondeterminism
in a concurrent program is caused by interleaving groups of statements, even
though each statement may be atomic. In the following example, methods deposit
and withdraw are used to manage a linked list implementation of a buffer. One
thread calls method deposit while another calls method withdraw. As shown
below, interleaving the statements in methods deposit and withdraw may produce
unexpected results.
Example 6 Variable first points to the first Node in the list. Assume that the
list is not empty.
class Node {
public:
    valueType value;
    Node* next;
};
Node* first; // first points to the first Node in the list;
void deposit(valueType value) {
Node* p = new Node; // (1)
p->value = value; // (2)
p->next = first; // (3)
first = p; // (4) insert the new Node at the front of the list
}
valueType withdraw() {
valueType value = first->value; // (5) withdraw the first value in the list
first = first->next; // (6) remove the first Node from the list
return value; // (7) return the withdrawn value
}
If two threads try to deposit and withdraw a value at the same time, the
following interleaving of statements is possible:
valueType value = first->value; // (5) in withdraw
Node* p = new Node(); // (1) in deposit
p->value = value; // (2) in deposit
p->next = first; // (3) in deposit
first = p; // (4) in deposit
first = first->next; // (6) in withdraw
return value; // (7) in withdraw
At the end of this sequence, the withdrawn item is still pointed to by first and
the deposited item has been lost. To fix this problem, each of methods deposit
and withdraw must be implemented as an atomic action. Later we will see how
to make a statement or a group of statements execute atomically.
1.8 TESTING AND DEBUGGING MULTITHREADED PROGRAMS
Looking back at Listing 1.8, we now know why the program sometimes failed
to produce the expected final value 20000000 for s. The interleaving of machine
instructions for thread1 and thread2 sometimes increased the value of s by one
even though two increments were performed. Failures like this one, which do not
occur during every execution, create extra problems during testing and debugging.
ž The purpose of testing is to find program failures.
The term failure is used when a program produces unintended results.
ž A failure is an observed departure of the external result of software operation
from software requirements or user expectations [IEEE90].
Failures can be caused by hardware or software faults or by user errors.
ž A software fault (or defect, or bug) is a defective, missing, or extra instruc-
tion or a set of related instructions that is the cause of one or more actual
or potential failures [IEEE88].
Software faults are the result of programming errors. For example, an error in
writing an if-else statement may result in a fault that will cause an execution to
take a wrong branch. If the execution produces the wrong result, it is said to fail.
(The execution may produce a correct result even though it takes a wrong branch.)
ž Debugging is the process of locating and correcting faults.
The conventional approach to testing and debugging a program is as follows:
1. Select a set of test inputs.
2. Execute the program once with each input and compare the test results
with the intended results.
3. If a test input finds a failure, execute the program again with the same
input in order to collect debugging information and find the fault that
caused the failure.
4. After the fault has been located and corrected, execute the program again
with each of the test inputs to verify that the fault has been corrected and
that, in doing so, no new faults have been introduced. This type of testing,
called regression testing, is also needed after the program has been modified
during the maintenance phase.
This cyclical process of testing, followed by debugging, followed by more
testing, is commonly applied to sequential programs. Unfortunately, this process
breaks down when it is applied to concurrent programs.
1.8.1 Problems and Issues
Let CP be a concurrent program. Multiple executions of CP with the same input
may produce different results. This nondeterministic execution behavior creates
the following problems during the testing and debugging cycle of CP:
Problem 1 When testing CP with input X, a single execution is insufficient
to determine the correctness of CP with X. Even if CP with input X has been
executed successfully many times, it is possible that a future execution of CP
with X will produce an incorrect result.
Problem 2 When debugging a failed execution of CP with input X, there is no
guarantee that this execution will be repeated by executing CP with X.
Problem 3 After CP has been modified to correct a fault detected during a
failed execution of CP with input X, one or more successful executions of CP
with X during regression testing do not imply that the detected fault has been
corrected or that no new faults have been introduced.
A major objective of this book is to show that these problems can be solved
and that the familiar testing and debugging cycle can be applied effectively to
concurrent programs. There are many issues that must be dealt with to solve
these problems.
Program Replay Programmers rely on debugging techniques that assume pro-
gram failures can be reproduced. This assumption of reproducible testing does
not hold for concurrent programs. Repeating an execution of a concurrent pro-
gram is called program replay. A major issue is how to replay executions of
concurrent programs and how to build libraries and tools that support replay.
Program Tracing Before an execution can be replayed it must be traced. But
what exactly does it mean to replay an execution? Consider a sequential C++
program that executes in a multiprocessing environment. If the program is exe-
cuted twice and the inputs and outputs are the same for both executions, are
these executions identical? In a multiprocessing environment, context switches
will occur during program execution. Furthermore, for two different program
executions, the points at which the context switches occur are not likely to be
the same. Is this difference important? We assume that context switches are trans-
parent to the execution, so although the executions may not be identical, they
appear to be “equivalent”. Now consider a concurrent program. Are the context
switches among the threads in a program important? Must we somehow trace the
points at which the context switches occur and then repeat these switch points
during replay?
The questions that need to be answered for tracing are: What should be
replayed, and how do we capture the necessary execution information in a trace?
The rest of this book answers these questions for various types of programs. In
general, an execution trace will contain information about the sequence of actions
performed by each thread. An execution trace should contain sufficient informa-
tion to replay the execution or to perform some other task, but attention must
also be paid to the space and time overhead for capturing and storing the trace.
In addition, an observability problem occurs in distributed systems, where it is
difficult to observe accurately the order in which actions on different computers
occur during an execution.
Sequence Feasibility A sequence of actions that is allowed by a program is said
to be a feasible sequence. Program replay always involves repeating a feasible
sequence of actions. This is because the sequence to be replayed was traced during
an actual execution and thus is known to be allowed by the program. Testing, on
the other hand, involves determining whether or not a given sequence is feasible
or infeasible. “Good” sequences are expected to be feasible, and “bad” sequences
are expected to be infeasible.
Selecting effective test sequences is a difficult problem. Perhaps the sim-
plest technique for selecting test sequences is to execute the program under test
repeatedly and allow sequences to be exercised nondeterministically. If enough
executions are performed, one or more failures may be observed. This type of
testing, called nondeterministic testing, is easy to carry out, but it can be very
inefficient. It is possible that some program behaviors are exercised many times,
whereas others are never exercised at all. Also, nondeterministic testing cannot
show that bad sequences are infeasible.
An alternative approach is called deterministic testing, which attempts to force
selected sequences to be exercised. We expect that good sequences can be forced
to occur, whereas bad sequences cannot. This approach allows a program to
be tested with carefully selected test sequences and can be used to supple-
ment the sequences exercised during nondeterministic testing. However, choosing
sequences that are effective for detecting faults is difficult to do. A test coverage
criterion can be used to guide the selection of tests and to determine when to
stop testing.
The information and the technique used to determine the feasibility of a
sequence are different from those used to replay a sequence. In general, dif-
ferent types of execution traces can be defined and used for different purposes.
Other purposes include visualizing an execution and checking the validity of
an execution.
Sequence Validity A sequence of actions captured in a trace is definitely feasible,
but the sequence may or may not have been intended to be feasible. We assume
that each program has a specification that describes the intended behavior of the
program. Sequences allowed by the specification (i.e., good sequences) are called
valid sequences; other sequences are called invalid sequences. A major issue is
how to check the validity of a sequence captured in a trace. A goal of testing is
to find valid sequences that are infeasible and invalid sequences that are feasible;
such sequences are evidence of a program failure. In the absence of a complete
program specification, a set of valid and invalid sequences can serve as a partial
specification.
Probe Effect Modifying a concurrent program to capture a trace of its execu-
tion may interfere with the normal execution of the program [LeDoux and Parker
1985; Gait 1986]. Thus, the program will behave differently after the trace rou-
tines have been added. In the worst case, some failures that would have been
observed without adding the trace routines will no longer be observed. If the
trace routines are eventually removed, additional time should be spent testing the
unmodified program to find these failures. In other cases, an incorrect program
may stop failing when trace routines are inserted for debugging.
The probe effect is not always negative. There may be failures that cannot
be observed without perturbing the executions. Programs may fail when running
under abnormal conditions, so checking some abnormal executions is not a bad
idea. (We point out that it may be difficult to define “normal” and “abnormal”
executions.) If we can create random interference during the executions, we may
be able to achieve better test coverage. If a program fails during an early part of
the life cycle, when we have tool support for tracing and replaying executions
and determining the cause of the failure, we are better off than if it fails later.
One way to address the probe effect is to make sure that every feasible
sequence is exercised at least once during testing. A major issue then is how
to identify and exercise all of the feasible sequences of a program. One way of
doing this, called reachability testing, is described in later chapters. However,
since the number of sequences required for exhaustive testing may be huge, it
might take too much time to exercise all the sequences or too much memory to
enumerate them. In that case we can select a subset of sequences that we hope will
be effective for detecting faults and use these sequences for deterministic testing.
The probe effect is different from the observability problem mentioned above
[Fidge 1996]. The observability problem is concerned with the difficulty of trac-
ing a given execution accurately, whereas the probe effect is concerned with the
ability to perform a given execution at all. Both of these problems are different
from the replay problem, which deals with repeating an execution that has already
been observed.
Real Time The probe effect is a major issue for real-time concurrent programs.
The correctness of a real-time program depends not only on its logical behav-
ior but also on the time at which its results are produced [Tsai et al. 1996]. A
real-time program may have computation deadlines that will be missed if trace
functions are added to the program. Instead, tracing can be performed by using
special hardware to remove the probe effect or by trying to account for or min-
imize the probe effect. Real-time programs may also receive sensor inputs that
must be captured for replay. Some of the techniques we cover in this book are
helpful for testing and debugging the logical behavior of real-time systems, but
we will not address the special issues associated with timing correctness.
Tools Solutions to these testing and debugging problems must be supported
by tools. Debugging tools that are integrated with compilers and operating sys-
tems can accomplish more than tools built from libraries of source code, such
as the libraries presented in this book. Access to the underlying virtual machine,
compiler, operating system, or run-time system maximizes the ability of a tool to
observe and control program executions. On the other hand, this type of low-level
access limits the portability of the tool. Also, there is a difference between the
system-level events one deals with in an operating system (e.g., interrupts and
context switches) and the high-level abstract events that programmers think of
when specifying and writing their programs. For example, knowing the number
of read and write operations that each thread executes during its time quantum
may provide sufficient information for replay, but it will not help with under-
standing an execution or determining its validity. Different levels of abstraction
are appropriate for different activities.
Life-Cycle Issues Typically, a concurrent program, like a sequential program, is
subjected to two types of testing during its life cycle. During black-box testing,
which often occurs during system and user-acceptance testing, access to the
program’s implementation is not allowed. Thus, only the specification of the
program can be used for test generation, and only the result (including the output
and termination condition) of each execution can be collected. During white-box
testing, access to the implementation is allowed. Thus, any desired information
about each execution can be collected, and the implementation can be analyzed
and used for generating tests. White-box testing gives the programmer unlimited
ability to observe and control executions, but it is usually not appropriate during
system and acceptance testing. At these later stages of the life cycle, it is often
the case that a program’s source code cannot be accessed, or it is simply too
large and complex to be of practical use. Addressing the problems caused by
nondeterminism requires tools that can observe and control executions, but testing
techniques that can be used during both white- and black-box testing may be the
most useful.
The testing and debugging issues noted above are addressed in this book. We
now give a small example of how we handle some of them. We’ve modified the
C++ program in Listing 1.8 so that it can trace and replay its own executions. The
new program is shown in Listing 1.9. Classes TDThread and sharedVariable<>
provide functions that are needed for execution tracing and replay. Below we
provide a brief description of these classes and preview some of the testing and
debugging techniques that are described in detail in later chapters.
1.8.2 Class TDThread for Testing and Debugging
Threads in Listing 1.9 are created by inheriting from C++ class TDThread instead
of class Thread. Class TDThread provides the same interface as our C++ Thread
class, plus several additional functions that are used internally during tracing and
replay. (We will be using a Java version of TDThread in our Java programs.) The
main purpose of class TDThread is to generate, automatically and unobtrusively,
an integer identifier (ID) and a name for each thread. Thread IDs are recorded
in execution traces. The name for thread T is based on the name of its parent
sharedVariable<int> s(0); // shared variable s
class communicatingThread: public TDThread {
public:
communicatingThread(int ID) : myID(ID) {}
virtual void* run();
private:
int myID;
};
void* communicatingThread::run() {
    std::cout << "Thread " << myID << " is running" << std::endl;
    for (int i=0; i<2; i++)
        s = s + 1;  // increment s two times; the expected final value of s is 4
    return 0;
}
Listing 1.9 Using classes TDThread and sharedVariable<>.
thread and the order in which T and any of its sibling threads are constructed
by their parent thread. For example, suppose that we have the following main()
function, which creates two creatorThreads:
int main() {
TDThread* T1 = new creatorThread;
TDThread* T2 = new creatorThread;
...
}
Suppose also that the creatorThread ::run() method creates two nestedThreads:
void run() {
TDThread* TA = new nestedThread;
TDThread* TB = new nestedThread;
...
}
36 INTRODUCTION TO CONCURRENT PROGRAMMIN | https://ar.b-ok.org/book/3675140/1990b6 | CC-MAIN-2020-05 | refinedweb | 15,153 | 55.74 |
Removing the Dust from the Loader’s Unexplored Auditing API
In the Linux security world, there are a couple of known and dangerous mechanisms that can be abused for malicious purposes. One of those is a feature of the dynamic linker/loader (ld.so): by setting an environment variable named LD_PRELOAD to an arbitrary shared object path, all dynamic executables in this environment will load the shared library first during their initialization process. This library is loaded before all other libraries the process needs to load in order to run.
Threat actors use this trick both to intercept library calls (for process hiding and rootkit implementation in user space), and to inject code in general. You can read more about it here.
As the
man page of ld.so states:
But is this trick really the first to load before everything? Can we load before it?
In SentinelOne’s Linux research team, we found a technique that can load shared libraries even before LD_PRELOAD.
The Loader’s Auditing API
We discovered that the loader exports an API that provides the ability to debug the loading process. It can be done by setting another powerful environment variable, LD_AUDIT. If this environment variable is present, the linker first loads the shared library from the assigned path, and then calls specific functions from it. The functions will be called on loader actions like object search, new library loading, and more. It is documented in the man page ‘‘rtld-audit’.
This API is mostly unknown in the Linux community, so in this article, we are going to give a brief overview of key functions of the API, while showing how we can maneuver it to do much more than simply auditing: we will utilize it defensively against LD_PRELOAD attacks. Afterward, we will reveal how it can be used offensively as well.
We begin by setting both LD_PRELOAD and LD_AUDIT to verify that the auditing library will be loaded first.
// preloadlib.so #include <stdio.h> __attribute__((constructor)) static void init(void) { printf("I'm loaded from LD_PRELOADn"); }
// auditlib.so #include <stdio.h> __attribute__((constructor)) static void init(void) { printf("I'm loaded from LD_AUDITn"); } unsigned int la_version(unsigned int version) { return version; }
Voila! Our auditing library is loaded and executes code before the preloaded one. Let’s explore what we can do using the auditing API.
*Note that in order to use the auditing API, we are required to export the function
la_version. This function is invoked first in order to report the loader’s version and verify that the auditing library supports this version. We can use this function to perform our initialization instead of the constructor function, since using constructor functions in libraries isn’t recommended if there is an alternative.
Disabling Preloading
The auditing API allows you to not only get information during the executable’s loading stage but also to change its behavior. An example of that involves the function
la_objsearch (from rtld-audit).
As the
man page states, if we export a function named
la_objsearch with the signature above from our library, it will be called each time the loader has a library to load. Furthermore, if we return NULL instead of the library name, we will stop the library search, and the library will not be loaded at all.
In case a loading procedure we are auditing is about to preload an unwanted library, we can recognize it before it happens by retrieving the LD_PRELOAD environment variable. If this is the case, we can block it by returning NULL.
const char* preloaded; unsigned int la_version(unsigned int version) { preloaded = getenv("LD_PRELOAD"); return version; } char * la_objsearch(const char *name, uintptr_t *cookie, unsigned int flag) { if (NULL != preloaded && strcmp(name, preloaded) == 0) { fprintf(stderr, "Disabling the loading of a 'preload' library: %sn", name); return NULL; } return (char *)name; }
Let’s try our new preload-disabling library to prevent a real threat: we will take libprocesshider, an open-source rootkit intended to hide a process in the system using the preload technique.
Note that there are other methods to invoke the preloading mechanism. For example (as mentioned in the rootkit repo), writing the library name to the file in the path “/etc/ld.so.preload”. For the sake of clarity, we will only address the environment variable LD_PRELOAD, even though the example can be tweaked to work with “/etc/ld.so.preload” as well.
We will run a python script with the name
evil_script.py, the same process name to hide as the example in the repo. ‘
Let’s verify that the rootkit works, and
evil_script.py is hidden.
Now, what happens if we invoke it with our LD_AUDIT library as well?
The script isn’t hidden anymore, and the rootkit is disabled! Note that the loader printed an error: its origin is in the function
do_preload from the loader’s code – the function that handles preloaded libraries only.
After successfully disabling the library, we now want to investigate the rootkit and find out which function it intercepts in order to hide the process. In case the rootkit isn’t known, we can reverse-engineer it or attach a debugger to the library in a process context, but the audit API gives us a much easier option.
From the docs, we see there are two interesting API functions for this purpose,
la_objopen and
la_symbind64:
After the loader finds and opens a library, and when the library is about to load, the function
la_objopen in our auditing library will be invoked.
The struct link_map holds the library name; hence, we can know if the library about to be loaded is the one specified in LD_PRELOAD.
At this point, the library is loaded into the process’s address space. Whenever an imported function is going to be called for the first time, the process will ask the loader to resolve the function’s symbol to its address.
We can monitor symbol-binding in the opened library if we return the flags “LA_FLG_BINDTO” and “LA_FLG_BINDFROM” in
la_objopen. After returning these flags, each symbol binding inside the opened library will invoke the following API function:
Note: In case it’s a 32-bit platform, the invoked API function will be named
la_symbind32.
Using everything we’ve learned, we can now write the following code:
- Capture the
la_objopencall for the preloaded library, store its cookie (a unique identifier value for the library), and mark it for auditing to receive
la_symbind64calls with the functions it intercepts.
- In the bind API call, we will verify the cookie of the referrer to make sure it’s the current library, and report the name of the intercepted function.
uintptr_t * preloaded_cookie = NULL; unsigned int la_objopen(struct link_map *map, Lmid_t lmid, uintptr_t *cookie){ if (NULL != preloaded && strcmp(map->l_name, preloaded) == 0) { fprintf(stderr, "A 'preload' library is about to load: %s. Following its function bindingn", map->l_name); preloaded_cookie = cookie; return LA_FLG_BINDTO | LA_FLG_BINDFROM; } return 0; } uintptr_t la_symbind64(Elf64_Sym *sym, unsigned int ndx, uintptr_t *refcook, uintptr_t *defcook, unsigned int *flags, const char *symname) { if (refcook == preloaded_cookie) { fprintf(stderr, "Function '%s' is interceptedn", symname); } return sym->st_value; }
Let’s compile and see what
libprocesshider really does:
Alright! The only intercepted function is
readdir. This can be verified in the rootkit source code. This is the
libc function which
ps calls on “/proc”, the directory where Linux stores information about processes. Each time
readdir is called on “/proc”, the rootkit finds
evil_script.py’s PID and removes it from the directory listing that
readdir returns. This way, any dynamic binary that tries to retrieve the process list using
libc, won’t see the hidden process.
Code Injection and Interception
We had fun controlling and disabling the preloaded library, but can we use LD_AUDIT for other purposes as well?
As we have seen before, the LD_AUDIT technique can also be used to run arbitrary code in another process’s context (code injection) by simply writing the desired code in
la_version.
But what about intercepting library function calls the same way LD_PRELOAD does? Can we use what we learned about the auditing API and use LD_AUDIT to load a rootkit?
As we have seen in previous audit functions, the API function
la_symbind64 does not only provide information but can also modify the program behavior:
Therefore, we can override any desired function with our own in the resolving stage!
As we learned before, to write a process-hiding rootkit, we need to hook the
readdir function symbol with our own function.
Let’s try and adapt
libprocesshider to work with LD_AUDIT. We will add the following code to
processhider.c:
unsigned int la_version(unsigned int version) { return version; } unsigned int la_objopen(struct link_map *map, Lmid_t lmid, uintptr_t *cookie){ return LA_FLG_BINDTO | LA_FLG_BINDFROM; } uintptr_t la_symbind64(Elf64_Sym *sym, unsigned int ndx, uintptr_t *refcook, uintptr_t *defcook, unsigned int *flags, const char *symname) { if (strcmp(symname, "readdir") == 0) { fprintf(stderr, "'readdir' is called, intercepting.n"); // readdir is the tampered function declared in processhider.c return readdir; } return sym->st_value; }
And to the final moment:
That’s it,
evil_script.py is nowhere to be seen! We adapted a rootkit that uses a known technique to our newly explored LD_AUDIT technique!
Conclusion
The dynamic loader’s auditing API is a very powerful tool. It can be easily used to control the loaded libraries and modify the behavior of the process it is attached to.
In this post, we explored the auditing API, a powerful API that hasn’t been discussed before. We first showed its superiority to LD_PRELOAD, which derives from the fact it is loaded first. Then we showed its capabilities that allowed us to easily monitor and disable preloading. Finally, we showed that it can replace LD_PRELOAD.
The main problem with using the LD_PRELOAD trick for malicious actors is that it is already widely known – exposed malwares use it, and it even has a MITRE technique. System administrators, SOC, and IR teams know to look for it, and some even disable it preventatively system-wide.
That is why adversaries can benefit from the fact that LD_AUDIT is unfamiliar for those purposes.
Note that other than LD_AUDIT, there are other dangerous and unexplored vectors like other LD_* variables and DT_RPATH which can also be abused. It is clear that the loader has a much higher destructive potential than we would expect from a process loader.
During the writing of the article, while looking for references, we found that the “disable preloading” part had been introduced by ForensicITGuy. He also created a functional library called libpreloadvaccine which can be integrated into Linux environments.
Further Reading
Aside from the links provided in the article, the following may also be of interest.
- The loader’s source code is inside glibc’s repository:
- More on process hiding and LD_PRELOAD:
- Thorough explanation on symbol resolving:
- More on the loader’s internals:
The most relevant files are rtld.c and all dl-*.c | https://www.sentinelone.com/labs/leveraging-ld_audit-to-beat-the-traditional-linux-library-preloading-technique/ | CC-MAIN-2022-33 | refinedweb | 1,811 | 61.36 |
Today we’re excited to announce the public availability of PostSharp 3.1 Preview. You can download the new Visual Studio tooling from our website, our update your packages using NuGet Package Manager. In the latter case, make sure to enable the “Include Prereleases” option.
PostSharp 3.1 includes the following features:
PostSharp 3.1 is a free upgrade for all customers with a valid support subscription. The license key of PostSharp 3.0 will work. advise had no chance to get ever fired.
PostSharp 3.1 gets much smarter. OnMethodBoundaryAspect now understands that is being applied to a state machine, and works as you would expect. To enable the new behavior, you need to set the OnMethodBoundaryAspect.ApplyToStateMachine property to true. It is false by default for backward compatibility.
But there is more: the aspects define two advises: OnYield and OnResume. For the sake of backward compatibility, we could not add them to the IOnMethodBoundaryAspect interface, so we defined a new interface IOnStateMachineBoundaryAspect with these two methods.
More blogging about this later.
To discover more on your own, try to apply the following aspect to an async method or an iterator:
[Serializable]
public class MyAspect : OnMethodBoundaryAspect, IOnStateMachineBoundaryAspect
{
public static StringBuilder Trace = new StringBuilder();
public override void OnEntry(MethodExecutionArgs args)
{
Console.WriteLine("OnEntry");
}
public void OnResume(MethodExecutionArgs args)
{
Console.WriteLine("OnResume");
}
public void OnYield(MethodExecutionArgs args)
{
Console.WriteLine("OnYield");
}
public override void OnSuccess(MethodExecutionArgs args)
{
Console.WriteLine("OnSuccess");
}
public override void OnExit(MethodExecutionArgs args)
{
Console.WriteLine("OnExit");
}
}
Read more about this feature.
The first time you compile a project using a specific build of PostSharp, you will be proposed to install itself into GAC and create native images.
Doing will decrease build time of a fraction of a second for each project. A potentially substantial gain if you have a lot of projects. You can uninstall these images at any time from PostSharp options in Visual Studio.
Under the cover, PostSharp will just install itself in GAC using gacutil and will generate native images using ngen. The feature does not affect build servers or cloud builds, so only build time on developers’ workstations will be improved.
Keep in mind that your end-users will not have PostSharp installed in GAC, so you still need to distribute PostSharp.dll..
I know, it seems like an obvious feature. But it was actually quite complex to implement. PostSharp works as MSIL level and the file that’s supposed to MSIL back to source – the PDB file – won’t tell you where a type, field, or abstract method is implemented. Indeed, it only contains “sequence points”, mapping instructions to lines of code. That is, only the inside of method bodies are mapped to source code. So, we had to rely on a source code parser to make the missing like ourselves.
This feature is available for the C# language only.
Note that this feature is not yet fully stable. There are still many situations where locations cannot be resolved.
We made it easier to add a policy to a whole solution. Say you want to add logging to the whole solution. Previously, you had to add the aspect to every project of the solution. Now, you can right-click on the solution and just add it to the solution.
This is not just a UI tweak. To make this scenario possible, we had to do significant work on our configuration subsystem:
Thanks to this improvements, you can add not only our own ready-made aspects to the whole solution, but also your own aspects.
The configuration subsystem is currently largely undocumented and there was not a good use case for it until PostSharp 3.0. I’ll blog further about this feature and we’ll update the documentation.
Read more about support for solution-level policies.
PostSharp Diagnostics Pattern Library was good at adding a lot of logging, but the log quickly became unreadable because the output was not indented. We fixed that. If the logging back-end supports indentation, we’ll call its Indent or Unindent method. Otherwise, we’ll create indentation using spaces.
Our implementation is thread-safe and still does not require you to add a reference to any PostSharp library at runtime (the reference is required at build time and will be removed).
Note that while working on PostSharp 3.1, we still added some features to PostSharp 3.0. The most important ones are support for Windows 8.1 and Visual Studio 2013. Keeping in pace with changes of development environments is a challenge in itself, and we’re glad this we handled it smoothly, without forcing customers to wait for a new minor version.
Please report any issue on our support forum. We’d love to hear about your feedback.
Happy PostSharping – faster than ever.
-gael >>!
Happy PostSharping!
-Igal | http://www.postsharp.net/blog/category/Announcement?page=2 | CC-MAIN-2018-17 | refinedweb | 801 | 58.58 |
Tim Ellison wrote:
> Mikhail Loenko wrote:
>> Seems like I've managed to reproduce the problem:
>>
>> I've refactored the code using Eclipse: all that is under
>> com.openintel.drl.security
>> I've moved under
>> org.apache.harmony
>
> Can you make that org.apache.harmony.security -- then we keep the module
> name in there. When we build into a single rt.jar it will avoid any
> unfortunate name collisions with implementations in other modules.
That's where I'm putting it. (You aren't reading my checkins, I assume :)
I'm just doing
s/com.openintel.drl/org.apache.harmony/
And was thinking that we might want to put "classlib" after o.a.h to
keep the o.a.h namespace a bit more open....
geir | http://mail-archives.apache.org/mod_mbox/harmony-dev/200601.mbox/%3C43CE2699.6080301@pobox.com%3E | CC-MAIN-2018-30 | refinedweb | 125 | 62.34 |
Chic Event is simple object-oriented event system for JavaScript
Chic Event is simple object-oriented event system for JavaScript. It's built on top of Chic.
Current Version: 0.0.1
Automated Build Status:
Node Support: 0.6, 0.8
Browser Support: Untested, coming soon
You can use Chic Event on the server side with Node.js and npm:
$ npm install chic-event
On the client side, you can either install Chic Event through Component:
$ component install rowanmanning/chic-event
or by simply including
chic-event.js in your page:
In Node.js or using Component, you can include Chic Event in your script by using require:
var chicEvent = require'chic-event';var Event = chicEventEvent;var EventEmitter = chicEventEventEmitter;
If you're just including with a
<script>,
Event and
EventEmitter are available in the
chic.event namespace:
var Event = chiceventEvent;var EventEmitter = chiceventEventEmitter;
The rest of the examples assume you've got the
Event and
EventEmitter variables already.
Chic Event's classes are build with Chic. Read the documentation to familiarise yourself with the API.
The
EventEmitter class is best used by extending it. Doing so will give your new classes the ability to handle and emit events.
EventEmitter has an
extend static method; you can create your own class like this:
var Animal = EventEmitterextend;var fluffy = ;
Once you have a class, you'll be able to use the following methods:
Bind a handler to an event type. This accepts two arguments:
type: (string) The type of event to bind to.
handler: (function) What to call when this type of event is emitted.
fluffyon'eat' ;
Emit an event. This accepts two arguments:
type: (string) The type of event to emit.
event: (Event|mixed) The event object to pass to all handlers, or data with which to construct an event object.
The following examples are equivalent:
var event = food: 'steak';fluffyemit'eat' event;
fluffyemit'eat' food: 'steak';
Each of the handlers bound to the given event type will be called with the
Event object as their first argument:
fluffyon'eat'if eventdatafood !== 'steak'return 'Fluffy sicked up his ' + eventdatafood;;fluffyemit'eat' food: 'carrots'; // Fluffy sicked up his carrots
Remove a specific handler from an event type. This accepts two arguments:
type: (string) The type of event to remove the handler from.
handler: (function) The handler to remove.
fluffyon'eat' onEat; // bind a handlerfluffyoff'eat' onEat; // then remove it
Remove all handlers from an event type. This accepts one argument:
type: (string) The type of event to remove the handlers from.
fluffyon'eat' ; // bind handlersfluffyon'eat' ;fluffyoff'eat'; // then remove them
Remove all handlers from all event types. Call with no arguments.
fluffyon'eat' ; // bind handlersfluffyon'poop' ;fluffyoff; // then remove them
The
Event class is used to hold event data and allow handlers to stop events, preventing further handlers from executing.
This property contains the type of the event. It defaults to
null and is set when passed into
EventEmitter.emit:
var event = ;fluffyemit'eat' event;eventtype; // eat
This property contains the
EventEmitter instance which triggered the event. It defaults to
null and is set when passed into
EventEmitter.emit:
var event = ;fluffyemit'eat' event;eventtarget === fluffy; // true
This property contains the arbitrary event data which can be passed in during construction:
var event = food: 'steak';eventdatafood; // steak
If
EventEmitter.emit is called with anything other than an
Event instance, then a new
Event is created with the data passed into the constructor:
fluffyon'eat'eventdatafood; // steak;fluffyemit'eat' food: 'steak';
Event handlers are executed in the order they are added. This method allows a handler to prevent execution of any other handlers later in the stack:
fluffyon'eat'eventstop;;fluffyon'eat'// never called;fluffyemit'eat';
This method returns whether the
Event instance has been stopped with
Event.stop:
var event = ;eventstopped; // falseeventstop;eventstopped; // true
To develop Chic Event, you'll need to clone the repo and install dependencies:
$ npm install
No code will be accepted unless all tests are passing and there are no lint errors. Commands are outlined below:
Run JSHint with the correct config against the code-base:
$ npm run-script lint
Run unit tests on the command line in a Node environment:
$ npm test
Chic Event is licensed under the MIT license. | https://www.npmjs.com/package/chic-event | CC-MAIN-2015-35 | refinedweb | 704 | 62.38 |
ECL Grid
Use a grid to give your design a basic underlying structure. By placing your design elements on the invisible lines of a grid, you will create a more structured and coherent composition.
The ECL Grid uses Bootstrap v4's grid. Designs are based on the 12-column Bootstrap grid system.
We use 5 breakpoints, of which 3 of them are native to Bootstrap.
Bootstrap includes a responsive, mobile first fluid grid system that appropriately scales up to 12 columns as the device or viewport size increases.
It includes predefined classes for easy layout options, as well as powerful mixins for generating more semantic layouts.
Refer to
Bootstrap documentation for
understanding the general concepts. When using the grid of ECL, use
.ecl-
namespace in front of Bootstrap's classes in order to avoid collisions. | https://www.npmtrends.com/@ecl/ec-layout-grid | CC-MAIN-2021-39 | refinedweb | 135 | 58.08 |
Everyone knows the ToolTip control (the programmer ones!!), which has been included in the Common Controls since version 4.0 in Win95 (that's what I saw first; I never caught Win3.11). Since then, the control itself has been modified and enhanced in many ways, and after a while, WinXP came to life and the Common Controls version 6.0 saw the light with the Balloon ToolTip control included, which is the subject of this article.
Neither creating a Balloon ToolTip nor implementing an IExtenderProvider is a new subject, not even a hard one, but combining the functionality of both techniques deserves some attention, and that's what this article discusses: how to combine the good of both, in a way that's as simple as using the native .NET ToolTip control.
While searching around, I found a great article about the Balloon ToolTip control (actually, about the ToolTip control in all its shapes and uses). This article (which can be found at CodeProject too) was a great reference to me and a good, live example of using the Balloon ToolTip control, but the control described suffered from its complexity (not a real complexity, but it had to be operated programmatically), something that pushed me to investigate the IExtenderProvider interface and to accomplish this work.
Reading this article will give you a general understanding of the basic ideas discussed here, but to have a complete understanding of every concept, you must have some knowledge of the Win32 API calls and their uses. You don't have to be a professional to get it right, but you shouldn't be a fresh guy either (you shouldn't have to jump to the nearest programming book to figure out what the Win32 LPTSTR is, or how to allocate some bytes in the unmanaged heap). I know that was the first .NET promise - the ability to avoid using the Win32 APIs - but if you didn't know, the APIs are not dead yet, and will not die soon, so you must get yourself comfortable using them in your own code.
After this heavy theoretical talk, we can start getting our hands dirty.
The idea of the IExtenderProvider interface is to provide a service to another control: to give the functionality you wrote not to your own class, but to other classes. A problem you solve in your code is offered to other classes to use, rather than being used directly by your own class as all ordinary classes do. In other words, you solve other components' problems in their own territory.
This interface provides a single method, CanExtend, which expects an object to be passed to it and returns a boolean as an answer. The object passed to this method represents the selected control at design time, and the bool result specifies whether the class that implements this method should provide its services to that object. That's the whole story, simplified.
For almost any class that implements the IExtenderProvider interface, the CanExtend method should return true for every control it should extend, except the provider itself and any other control that it would be illogical to support. In our case, every control is more than welcome to be served, except for this control itself and the Form control.
public bool CanExtend(object extendee)
{
    if(extendee is Control && !(extendee is BalloonToolTip)
        && !(extendee is Form))
    {
        return true;
    }
    return false;
}
Based on the result of this call, the VS designer (or any other third-party designer used) decides whether or not to provide the specified service to the control. When you select a control, the designer calls this method at design time, passing the selected control to it as a parameter, and if it gets true back, the provided functionality appears in the selected control's property page (and in the code too) as a new property, exactly as if it were implemented in the original control's code.
The ToolTip control is somewhat confusing when reading about it in the MSDN, and to get it right, you should distinguish between two concepts: the ToolTip control itself and the tools it supports. The first one, the ToolTip control, is the parent window which draws the text and behaves as it is ordered to; the tool is the logical structure that contains the text to be drawn and controls how to display the tip. For this reason, a single ToolTip control can have as many tools as you want, one for each control you choose.
That's it. You create the ToolTip control which is independent from its associated tools, and has its own general properties (its width, color and period of appearance). Then you create as many tools as you need, and you add these tools to the ToolTip control.
As with any other control, by supplying the CreateWindowEx API function with the correct parameters, a ToolTip control can be created, and a handle to it will be returned. For information regarding parameter usage and meaning, you may refer to MSDN for a complete description of each of them, but for this article's sake, it's as follows:
IntPtr toolwindow;
toolwindow = CreateWindowEx(0, "tooltips_class32", string.Empty,
             WS_POPUP | TTS_BALLOON | TTS_NOPREFIX | TTS_ALWAYSTIP,
             CW_USEDEFAULT, CW_USEDEFAULT, CW_USEDEFAULT,
             CW_USEDEFAULT, IntPtr.Zero, 0, 0, 0);
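This call assumes that CreateWindowEx (and the SendMessage function used later) have been imported into managed code. The article never shows these declarations, so the following is only a sketch: the class name, the exact signatures, and the simplification of the last three parameters to plain ints are my assumptions, while the constant values come from the Windows SDK headers (CommCtrl.h and WinUser.h):

```csharp
using System;
using System.Runtime.InteropServices;

internal class NativeMethods
{
    // hypothetical import; the last three parameters are declared as int
    // only so the literal zeros in the snippet above compile as-is
    [DllImport("user32.dll", CharSet = CharSet.Auto)]
    internal static extern IntPtr CreateWindowEx(
        int exStyle, string className, string windowName, int style,
        int x, int y, int width, int height,
        IntPtr parent, int menu, int instance, int param);

    [DllImport("user32.dll", CharSet = CharSet.Auto)]
    internal static extern IntPtr SendMessage(
        IntPtr hWnd, int msg, int wParam, IntPtr lParam);

    // constants used above, with values taken from the SDK headers
    internal const int WS_POPUP      = unchecked((int)0x80000000);
    internal const int TTS_ALWAYSTIP = 0x01;
    internal const int TTS_NOPREFIX  = 0x02;
    internal const int TTS_BALLOON   = 0x40;
    internal const int CW_USEDEFAULT = unchecked((int)0x80000000);
}
```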
The most important of these constants is TTS_BALLOON, which tells the function to create a ToolTip control that has a cartoon-style balloon. The second parameter specifies the class from which to create the window (in this case, the ToolTip window class), and for the parent parameter, we pass a zero pointer to indicate that this window has no parent (it's a popup window). Finally, the return value is a handle that represents our ToolTip control for the lifetime of the class.
For now, we have a ToolTip control ready to be used, but how do we communicate with it? From the .NET perspective, this ToolTip control is the core of our component, but from the Win32 perspective, it's just a window, and to manipulate it, we have to follow the Win32 message-based communication rules through the SendMessage API function.
The ToolTip control defines a number of messages to be sent to it as commands that specify its appearance and its behavior. These messages are listed in MSDN, and our control doesn't use all of them, just the ones that are useful for our implementation.
TTM_ACTIVATE
TTM_ADDTOOL
TTM_DELTOOL
TTM_SETTITLE
TTM_SETTIPBKCOLOR
TTM_SETTIPTEXTCOLOR
TTM_SETDELAYTIME
TTM_UPDATETIPTEXT
TTM_SETTOOLINFO
TTM_GETTOOLINFO
This summarized list is by no means a full reference to the messages that the ToolTip control uses. For a detailed description of these and any unlisted messages, refer to the MSDN documentation. Most of the messages have meaningful names and don't need any further explanation.
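To give a feel for how these messages drive the control, here is a short, hedged sketch of configuring the balloon's colors and delay. It assumes a SendMessage overload taking (IntPtr, int, int, IntPtr); the message values come from CommCtrl.h, and ColorTranslator.ToWin32 converts a .NET Color into the COLORREF value the control expects:

```csharp
const int WM_USER             = 0x0400;
const int TTM_ACTIVATE        = WM_USER + 1;
const int TTM_SETDELAYTIME    = WM_USER + 3;
const int TTM_SETTIPBKCOLOR   = WM_USER + 19;
const int TTM_SETTIPTEXTCOLOR = WM_USER + 20;
const int TTDT_INITIAL        = 3;   // the delay before the tip first appears

// paint the balloon light yellow with black text,
// and wait half a second before showing it
SendMessage(toolwindow, TTM_SETTIPBKCOLOR,
            ColorTranslator.ToWin32(Color.LightYellow), IntPtr.Zero);
SendMessage(toolwindow, TTM_SETTIPTEXTCOLOR,
            ColorTranslator.ToWin32(Color.Black), IntPtr.Zero);
SendMessage(toolwindow, TTM_SETDELAYTIME, TTDT_INITIAL, (IntPtr)500);
SendMessage(toolwindow, TTM_ACTIVATE, 1, IntPtr.Zero);
```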
The TTM_ADDTOOL message is our key to adding a tool to the ToolTip control, and as I said before, the ToolTip control is separate from its associated tools. The ToolTip control does the housekeeping work and displays the balloon window with the specified settings, while each of its associated tools is a TOOLINFO structure that holds the information needed to display one balloon tip.
Take a look at the TOOLINFO structure here:
typedef struct tagTOOLINFO
{
    UINT      cbSize;
    UINT      uFlags;
    HWND      hwnd;
    UINT_PTR  uId;
    RECT      rect;
    HINSTANCE hinst;
    LPTSTR    lpszText;
#if (_WIN32_IE >= 0x0300)
    LPARAM    lParam;
#endif
} TOOLINFO;
This structure represents a tool contained in a ToolTip control. The cbSize member must specify the size of this structure in bytes, the uFlags member controls the display of the ToolTip, hwnd is the handle to the control (the control in your form design) that contains this tool, rect holds the ClientRectangle of the control that contains this tool, and lpszText is the buffer that contains the string to be displayed in the ToolTip balloon window.
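The C# snippets that follow use a managed counterpart of this structure, declared as toolinfo, which the article never shows. The sketch below is one possible declaration; the field names (size, flag, parent, text) were chosen only to match the later code, and the layout mirrors the native definition above:

```csharp
[StructLayout(LayoutKind.Sequential)]
internal struct RECT
{
    public int left, top, right, bottom;
}

[StructLayout(LayoutKind.Sequential, CharSet = CharSet.Auto)]
internal struct toolinfo
{
    public int    size;    // cbSize   - size of the structure, in bytes
    public int    flag;    // uFlags   - TTF_* display flags
    public IntPtr parent;  // hwnd     - window that contains the tool
    public IntPtr id;      // uId      - application-defined tool identifier
    public RECT   rect;    // rect     - bounding rectangle of the tool
    public IntPtr hinst;   // hinst    - instance handle (unused here)
    [MarshalAs(UnmanagedType.LPTStr)]
    public string text;    // lpszText - text to display in the balloon
    public IntPtr lparam;  // lParam   - application-defined value
}
```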
If you have to deal with unmanaged code like these Win32 API calls, let me introduce you to a good friend, the System.Runtime.InteropServices.Marshal class. For any travel outside the managed world, this tough guy is your best friend. There are handy static functions in the Marshal class that will solve almost all your problems with unmanaged code, one of which shows how to convert a managed version of the TOOLINFO structure into a memory pointer to be passed to the SendMessage API function. This function talks the unmanaged language and cannot accept your managed TOOLINFO object, so you have to go a little bit low-level to work around this situation, and here is how:
// first, we have to construct a managed toolinfo structure
toolinfo tf = new toolinfo();
// now specify the size of this structure in the size member
tf.size = Marshal.SizeOf(typeof(toolinfo));
// specify how the balloon window should be displayed
tf.flag = TTF_SUBCLASS | TTF_TRANSPARENT;
// associate the structure with the target control (the one on your form)
tf.parent = targetcontrol.Handle;
// specify the text you want to be displayed
tf.text = "ToolTip text to be displayed";
// we need a temporary pointer to hold our unmanaged copy of the toolinfo object
IntPtr tempptr;
// now allocate enough memory to hold this structure in the unmanaged heap
tempptr = Marshal.AllocHGlobal(tf.size);
// copy the content of our filled managed
// toolinfo object to the newly allocated memory space
Marshal.StructureToPtr(tf, tempptr, false);
// now send a message to our ToolTip control that we have a tool to be added,
// and remember that our ToolTip is just a handle returned by CreateWindowEx
SendMessage(toolwindow, TTM_ADDTOOL, 0, tempptr);
// now our tool has been added to the ToolTip control,
// and we have to clean up what we did
Marshal.FreeHGlobal(tempptr);
For those who started scratching their heads (asking what the hell was that?) if any, I'll just say that explaining about how to work with unmanaged code is beyond the scope of this article. Anything that seems to you to appear either in the Marshal class or in the ToolTip constants and messages, is explained shortly in the comment, and if you don't get it yet, you have to refer to the MSDN (if you still cannot get it, then either I'm a bad writer or you are a bad reader).
As an answer, not yet. There is a little detail that's not been discussed yet. The IExtender interface provides an extended property to other classes, but our class that implements the IExtender must be marked with the ProviderProperty attribute. This attribute along with the IExtender accomplishes this task. This attribute constructor accept two parameters, a string specifying the name of the property your class will provide to other components, and the type of the receiver of this extended property.
ProviderProperty
[ProvideProperty(“BalloonText”, typeof(Control))]
public class BalloonToolTip : System.ComponentModel.Component,
IExtenderProvider
{
...
}
After adding this attribute, and when we add our control to the designer, every supported control on the form will have a new property added to its properties, named as the text specified in the ProvideProperty attribute (BalloonText).
ProvideProperty
BalloonText
Now we have done marking our class with its provided property name and receiver type, but this is still not enough from a code perspective. In our code, there must be a Get/Set pair of functions with exactly the same name as our extended property. To put it another way: we specified “BalloonText” as the name of our new property and we have to supply the “GetBalloonText” and the “SetBalloonText” functions as well.
GetBalloonText
SetBalloonText
The Get function is a function that returns a string (surprised? !!) which is the string associated with the control passed to it as a parameter. This parameter must be the same type as the receiver type in the ProvideProperty attribute.
public string GetBalloonText(Control parent)
{
... // return the string associated with the tool
// that have its parent equal to the passed control.
}
And the Set function is a void function (surprised too? !!) and expects to have two parameters passed to it, the control that you want to add the property value to it, and the string value to be added.
void
public void SetBalloonText(Control parent, string value)
{
... // Add the passed string to a new tool
// with its parent equal to the passed control.
}
These functions do not appear as ordinary functions in the code, but as properties in the control they have extended, so don't get confused (after all, that's what the IExtender should do).
The last thing to say is that it's your choice how to implement these functions. You may have a Hashtable to hold the values, getting it right from the tool the control contains, having an array with somehow the correct index to store these values, or you might even write a whole new class to do this job, but it's your choice, and I chose the Hashtable option.
Hashtable
The hash table is a place to store a collection of key/value pairs, and in our case, we already have these key/value pairs. It's our control/property pair. For each control, we have a unique string as its BalloonText value, so I used a hash table to store these facts. The control is the key and its associated “BallonText” is the value.
BallonText
For the Get function, there is nothing more to say, but for the Set function there is:
public void SetBalloonText(Control parent, string value)
{
if(value == null)
value = string.Empty;
if(value == string.Empty)
// the user delete our property value,
// so he don’t want our service
{
hashtable.Remove(parent);
// remove our pair from the collection
... // create a toolinfo object, assign its parent
// member the handle of the passed control
// and send a message with TTM_DELTOOL
}
else
// the user has assigned our property a value,
// so he want our service
{
if(hashtable.Contains(parent))
// check whether we have added the control before or not
{
// if we did, then the user has just modified
// the property text, so update
// the hashtable and the tool
hashtable[parent] = value;
...
// create a toolinfo object, assign its parent member
// the handle of the passed control
// and send a message with TTM_UPDATETIPTEXT
}
else
{
// the user assign a new value to the property,
// so add it to the hashtable and add a new tool
hashtable.Add(parent, value);
...
// create a toolinfo object fill its
// values and send a message with TTM_ADDTOL
}
}
}
Congratulations, mission accomplished successfully!
There are a couple of properties I have added to the control but I didn't mention any of it here, they are too simple to be explained. I tried to comment any interesting or important point in the code example.
If you take a look at the code example, you may wonder why I would add a new EventHandler for the Resize event of the target control?
EventHandler
Resize
Simply, because if you omit this, the BalloonToolTip will not recognize any new size of your control, and will not be activated on all the control client rectangles outside its original one that it was added to with the designer. For that reason, any change in the control size must be updated with the Win32 ToolTip tool associated with that resized control, and where is it better to do that than in the Resize event?.
BalloonToolTip
You may also note that I don't add any EventHandler to the MouseHover of the target controls. That's because I used the TTF_SUBCALSS flag when I created the TOOLINFO structure, and if you refer to the MSDN, you'll find that this flag enforces the Win32 ToolTip control itself to add the EventHandler and perform any required housekeeping, for free!!.
MouseHover
TTF_SUBCALSS
I hope that this article. | http://www.codeproject.com/Articles/12322/Balloon-ToolTip-Control?fid=242526&df=90&mpp=10&sort=Position&spc=None&select=3596598&tid=2583831 | CC-MAIN-2015-32 | refinedweb | 2,624 | 54.66 |
There are few issues regarding XML validation that cause as many headaches as validation of business rules (constraints on relations between element and attribute content in an XML document). This hack helps relieve that headache.
Even after the release of the new, grammar-based schema languages XML Schema and RELAX NG, it remains difficult to express restrictions on relations between the contents of various elements and attributes. This hack introduces a method that makes it possible to validate these kinds of rules by combining two XML Schema languages, RELAX NG () and Schematron ().
W3C XML Schema () lacks much support for co-occurrence constraints, and RELAX NG supports them only to the extent that the presence or absence of a particular element or attribute value changes the validation rules. On the other hand, Schematron provides good support for these types of constraints. Schematron is a rule-based language that uses path expressions instead of grammars to define what is allowed in an XML document. This means that instead of creating a grammar for an XML document, a Schematron schema makes assertions applied to a specific context within the document. If the assertion fails, a diagnostic message that is supplied by the author of the schema is displayed.
One drawback of Schematron is that, although the definition of detailed rules is easy, it can often be a bit cumbersome to define structure. A better language for defining structure is RELAX NG, so the combination of the two is perfect to create a very powerful validation mechanism.
As an example, here is a simple mathematical calculation modeled in XML (add.xml):
<addition result="3"> <number>1</number> <number>2</number> </addition>
This example shows a simple addition between two numbers, each modeled with a number element, and the result of the addition specified in the result attribute of the surrounding addition element.
A RELAX NG schema (in XML syntax) to validate this little document is very easy to write and can, for example, look like add.rng in Example 5-15.
<grammar xmlns="" datatypeLibrary=""> <start> <element name="addition"> <ref name="number"/> <ref name="number"/> <attribute name="result"> <data type="decimal"/> </attribute> </element> </start> <define name="number"> <element name="number"> <data type="decimal"/> </element> </define> </grammar>
The schema defines the structure for the document as well as specifying the correct datatype for the number element and the result attribute. The problem is that the previous schema will also validate the following instance, which is structurally correct but mathematically incorrect (badadd.xml):
<addition result="5"> <number>1</number> <number>2</number> </addition>
In RELAX NG, there is no way to specify that the value of the result attribute should equal the sum of the values in the two number elements except by faking it using value elements. By "faking it" I mean that during RELAX NG validation the actual addition does not take place, just the checking of values against a schema. Schematron, on the other hand, is very good at specifying these kinds of relationships. Before explaining how to embed Schematron rules in the RELAX NG schema, let's backtrack and briefly look at how Schematron works.
As mentioned earlier, Schematron uses path expressions to make assertions applied to a specific context within the instance document. Each assertion specifies a test condition that evaluates to either true or false. If the condition evaluates to false then a specific message, specified by the schema author, is given as a validation message. In order to implement the Schematron path expressions, XPath is used with various extensions provided by XSLT. This is very good in terms of validation purposes because it means that the only thing needed for validation with Schematron is an XSLT processor.
In order to define the context and the assertions, a basic Schematron schema consists of three layers: patterns, rules, and assertions. In its simple form, the pattern works as a grouping mechanism for the rules and provides a pattern name that is displayed together with the assertion message if the assertion fails. The rule specifies the context for the assertions, and the assertion itself specifies the test condition that should be evaluated. In XML terms, the pattern is defined using a pattern element, rules are defined using rule elements as children of the pattern element, and assertions are defined using assert elements as children of the rule element.
A Schematron rule for validation of the addition constraint above could look something like this (add.sch):
<sch:schema xmlns: <sch:pattern <sch:rule <sch:assertThe addition result is not correct.</sch:assert> </sch:rule> </sch:pattern> </sch:schema>
The rule has a context attribute that specifies the addition element to be the context for the assertion. The assertion has a test attribute that specifies the condition that should be evaluated. In this case, the condition is to validate that the value of the result attribute has the same value as the sum of the values in the two number elements. If this Schematron rule were applied to the erroneous XML instance badadd.xml, a validation message similar to this would be displayed:
From pattern "Validate calculation result": Assertion fails: "The addition result is not correct." at /addition[1] <addition result="2">...</>
So, now we have one RELAX NG schema to validate the structure and one Schematron rule to validate the calculation constraint, and the only thing left is to combine them by embedding the Schematron rule in the RELAX NG schema (dropping the sch:schema document element). This is made possible because a RELAX NG processor will ignore all elements that are not declared in the RELAX NG namespace. The combined schema will then look like this (addsch.rng):
<?xml version="1.0" encoding="UTF-8"?> <grammar xmlns="" datatypeLibrary="" xmlns: <start> <element name="addition"> <sch:pattern <sch:rule <sch:assertThe addition result is not correct.</sch:assert> </sch:rule> </sch:pattern> <ref name="number"/> <ref name="number"/> <attribute name="result"> <data type="decimal"/> </attribute> </element> </start> <define name="number"> <element name="number"> <data type="decimal"/> </element> </define> </grammar>
The exact location of the embedded Schematron rule does not matter?it can be placed anywhere in the RELAX NG schema. A good location for the embedded rule is within the definition of the element that is the context for the Schematron rule (shown emphasized in the combined schema). The finished RELAX NG schema with embedded Schematron rules is ready for validation, and the only thing left is an explanation of the validation process.
You can use the Topologi Schematron Validator () to validate add.xml against addsch.rng (Figure 5-9). This validator not only validates against Schematron schemas, but also XML Schema, DTDs, and RELAX NG with embedded Schematron schemas. After downloading and installing the application, open it and then select the working directory for both the XML document and schema. Select the XML document add.xml and the schema addsch.rng, and then click Run. Results are displayed in a dialog box. Try it with badadd.xml to see the difference in results.
Without a validator like Topologi to validate the embedded Schematron rules, you can extract them from the RELAX NG schema and validate them separately using normal Schematron validation.
Luckily, this is very easy to do with an XSLT stylesheet called RNG2Schtrn.xsl (), which will merge all embedded Schematron rules and create a separate Schematron schema. This stylesheet is already in the working directory where you unzipped the file archive.
Apply this stylesheet to the RELAX NG schema with the embedded stylesheet with Xalan C++ [Hack #32] :
xalan -o newadd.sch addsch.rng RNG2Schtrn.xsl
When successful, this transformation will produce this result (newadd.sch):
<?xml version="1.0" encoding="UTF-8" standalone="yes"?> <sch:schema xmlns: <sch:pattern <sch:rule <sch:assertThe addition result is not correct.</sch:assert> </sch:rule> </sch:pattern> <sch:diagnostics/> </sch:schema>
Then you can use Topologi to validate add.xml against newadd.sch, or you can use Jing to do it (Version 20030619 of the JAR?not jing.exe?offers provisional support of Schematron):
java -jar jing.jar newadd.sch add.xml
Figure 5-10 describes the process of extracting a Schematron schema that is embedded in a RELAX NG schema (XML syntax), and then processing the RELAX NG and Schematron schemas separately.
"An Introduction to Schematron," by Eddie Robertsson, on XML.com:
"Combining RELAX NG and Schematron," by Eddie Robertsson, on XML.com:
?Eddie Robertsson | https://etutorials.org/XML/xml+hacks/Chapter+5.+Defining+XML+Vocabularies+with+Schema+Languages/Hack+77+Use+RELAX+NG+and+Schematron+Together+to+Validate+Business+Rules/ | CC-MAIN-2021-31 | refinedweb | 1,401 | 53.1 |
Mike Acton is a proponent of Data-Oriented Design (DoD). DoD wants you to think about the problem down to the fundamental level (cachelines, memory, scope) and to think of programs in terms of DATA, not CODE. Code merely transforms the data; the data is what's important.
Code should be designed with caches and architectures in mind. There is no such thing as "generic software"--all software has a subset of target platforms. Nobody is writing code for Google datafarms AND Arduinos AND Apple IIs. There is SOME subset of target platforms, and within that there are assumptions that can then proliferate through the code.
Optimization is NOT an after-thought. "Premature optimization is the root of all evil" is a statement that's been abused to apply to places that it never was designed for. Fast programs should be designed with understanding of the data from the start, not as an after-thought. That doesn't mean you start with assembly. You START by understanding the fundamental problem down to the bare metal.
Everyone is taught to use classes wrong. Everyone is taught "Want a tree? Make a tree class." But what happens when you want a big tree? Inherit a big tree! Okay, now you want an invisible tree! Sorry, you need multiple inheritance now. What happens when you want a big, invisible, red tree? Enjoy your class explosions.
Classes are data structures. Data structures should be bundled with relevant data that the CPU is going to use. Whether a programmer can visualize it as a real-world object doesn't matter. The programmer isn't the one running the code.
Programmers are taught to treat every object as a SINGLE class, which ends up being horrific for computers. The second you have an object with an .execute() method, you've screwed yourself. Why? Because you've built in the assumption that every object is completely unique, sitting by itself, and there's no possibility to run batches of objects together.
Another thing people do is make insanely generic "one class does all" data structures which are impossible to optimize (for a compiler or human). (See last link below.)
DoD teaches you to think about (and understand!) the most common use case, and optimize for that first. Per the OGRE notes below: how often is a single (geometric) plane used and sent around? Aren't they almost always sent around in arrays of planes? Then tailor the code for arrays of planes!
Here's an amazing talk on DoD which downright pissed off the CppCon folks because they love their abstractions:
A newer talk/interview:
"Adding something to the problem [like unnecessary abstraction], never makes the problem simpler."
"One of the difficult problems in programming... is naming. Because by naming something, you're introducing a problem space--your [unnecessary] mental model into the problem space."
"Trying to apply a 'real world' model to what you've called a 'garbage problem' is not better. [...] You picked a model that's better for your game based on testing, 3 months of hardwork, having a very clear input and output. The output is Tommy smiles. The answer is the stuff you came up with. It's not garbage. It's the right answer. It works. It doesn't matter that there's no name for it. It doesn't matter that it's not 'real'."
"Any game has key metrics whether or not you notice them. If you're movement speed is 10 meters/S, that defines how much assets you can physically stream from your hardware. If you design for 10, and then move up to 20 m/s, you've got to redo everything [art assets, level designs, code, etc] so you can fit in that thinner budget now. And the opposite problem, you go from 10 to 5, and now you're NOT using all of the graphics budget you could have and your game doesn't look competitive." (He also goes on to suggest that critical factors / metrics should be "locked down" where possible to prevent the entire pipeline from spending millions adapting to the new constraint. And understanding "the cost of changing my mind.")
"Then you're optimizing at the end, and any time I hear it suggested, I want to slap somebody."
"If you're going to write for an architecture [x86, x64, ARM], then read the associated CPU's manuals. Period. Otherwise, you're just guessing." [paraphrased]
-------
Check out his complete roasting of the OGRE library which the devs then took to heart and improved their code big time.
[edit]
Also, here's a good introduction to Data-oriented Design (by someone else):
Here's a reddit post from a guy who said he benefited from the talks.
Of course, in typical Reddit fashion, there's always a top comment "explaining" that he's full of crap. I think I'll take the words of a proven technical director of a AAA studio over some whiny Redditor.
[edit] OH, I FORGOT Here's one of the best PDF slides on the subject. Goes into LOTS of details yet stays to the point.
Apparently with the Sony Cell processor you HAVE to use DoD or it'll run like piss so Sony put lots of investment into education. I just wish the video of the talk was online.
-----
sig: “Programs should be written for people to read, and only incidentally for machines to execute.” (Structure and Interpretation of Computer Programs)
Optimization is NOT an after-thought.
Thank you for expressing this. I feel the same way. I also feel sometimes like I am in the minority, so it is nice to hear a big-time programmer support this idea. The most jaw-dropping part of the talk was during Q&A, when he, as the keynote speaker at a C++ convention, told an auditorium full of C++ programmers that he would rather use C99.
A few years ago, my code was completely different than it is now. I basically abstracted myself into a non-existent corner. I have used straight C ever since... Live and learn, I guess.
A few other ideas I enjoyed:
Helping the compiler as much as possible by preconditioning loops.
"Print it out and zip it."
Do not model code after the real world; model it after the actual data... Is this not self-evident, though? Speaking from personal experience, people may tend to do the opposite out of pure pleasure. (Writing "bad" abstract code can be fun.)
All I want is C with namespaces and templates. Since I can't have it, I stick to C++ without classes. Losing named initialization in structs is a blow, but an acceptable one.
I disagree with Acton on templates. He opposes them because they kill compile times and recommends the C preprocessor as an alternative. Templates do hurt compile times, but you can actually debug them, as opposed to the preprocessor and they offer some increased flexibility.
The PS3 and XBox 360 both pretty much required data oriented design. They used in-order execution processors, so they weren't nearly as fault tolerant toward shit code.
I think D actually fixes a lot of the problems he hates about C++, and is still suitable for system-level code. The biggest problem with D is really the lack of ecosystem (people and their projects!), which requires plenty of C-and-C++-to-D wrappers--at which point your data loses all of D's features when it leaves your code. That can be "not a problem" for some library calls, but it could be a serialization-esque problem for others.
D's module system has much faster compile times than C++. So much so that the old men on the C++ Committee are considering adding them to C++... a decade late.
And D's templates are WAY more intuitive and powerful, and can have good error messages. Not perfect error messages, but a thousand times better than the STL wall-of-text crap.
I'm working on a (internal) game framework in D right now.
[edit] Oh, one more thing. The most common issue people take with D is the garbage collector. There's nothing in the actual language that requires the use of GC. RAII is just fine. Pointers, malloc, and realloc (<-!!) are all just fine. The standard lib could be replaced with a non-GC standard lib (just like many AAA-studios do when they rip out STL and roll their own). The only reason it doesn't exist is that the community is too small to warrant "re-inventing the wheel" just to appease people that, likely, don't give a crap about D anyway.
Yea, I'm also a fan of Mike Acton - follow him on twitter. I really like his theory and approach in general, which encourages no abstraction, and tries to get you to think about how the computer itself works, like a physical machine.
There's nothing in the actual language that requires the use of GC.
"For in much wisdom is much grief: and he that increases knowledge increases sorrow."-Ecclesiastes 1:18[SiegeLord's Abode][Codes]:[DAllegro5]:[RustAllegro]
He opposes them because they kill compile times and recommends the C preprocessor as an alternative.
He recommends dynamic generation of code, using a language with strong text manipulation capabilities. I use Python for this exact purpose. Acton's presentation mentions Perl (though does not explicitly mention how Perl is used).
Then again, perhaps I misunderstood.
I admire Mark for understanding both ends of the spectrum. I saw his code for one of the SpeedHacks. If I remember correctly, it had healthy levels of abstraction--and speaking from personal experience, I find casting real-world light on data-world problems helps me better understand them... so these aren't commandments. Just rules of thumb.
Edit: I've finally thought of a question that I would have asked during the Q&A session. I would have liked to hear more about Acton's aversion to Boolean variables (as opposed to bit fields)... Heck, I'd like to hear from anyone. If taken to the extreme, Booleans have no place in platform/data-centric development... so do you never use Booleans?
It should be stressed that just because I'm a fan of the guy doesn't mean I do everything he says on every project. I find his alternative style a breathe of fresh air, and apply the techniques where applicable.
In my current project, my objects are so bloody complex and abstract (where scripts do the actual input) that so far, I'm just biting the bullet and keeping them treated individually so I can keep my head wrapped around them. Later on, I may try to switch things into arrays of data. (Though I am using static arrays / pools, and D's template system allows me to build and adjust the size of these plain arrays very easily.)
On the other hand, I'm working on a message system and a "two phase" object execution system so that all objects can be executed completely (or mostly) independent of each other, and then their "effects" are queued up and applied after all scripts have run. So all objects can be split across as many CPU cores as you have, and the only things that have to be synced are what I call "bridges" between "domains" (a distinct set of objects and memory for a core). That's been pretty fun to design. Since my world can change and object densities can move around, I'm still looking into my options for dynamically splitting the world into domains a la BSP/quadtrees, or something more context-aware.
Again, to be noted, I'm still learning the language. I'm not saying it's perfect. I'm saying the biggest problem is simply the lack of more users. And the reason for the lack of users? It's simple. It doesn't have a big company backing it (Rust/Mozilla. Go/Google. C#/Microsoft.) It isn't taught in most schools because it's a system language. It's not "cool" because it's not a new, crappy, hipster web language... because it's a system language. C and C++ aren't "cool" anymore but have plenty of pre-existing systems.
D is too humble of a language to catch most hipsters. It doesn't say on the webpage that "Everyone else sucks, and using our new Javascript framework is the greatest thing since the Macintosh." But hipsters are dying. So give it some time.
Here's an actual project, that stripped out all GC:
(As well as the linking article on Stack Overflow about GC-less D.)
Up to 300% improvement by replacing the GC and standard lib. That's surely a lot. But it's not orders of magnitude either, which is when I would start making a Hard Decision (TM). 300% in 2012 is about equivalent to me starting a project today with a modern CPU.
(Another guy did a GC-less Phobos stdlib for games back in ... 2005.)
That was way back in 2012, and things have improved. (e.g. IIRC, LDC didn't even exist then.) He didn't say it was "impossible" or even complain at all in that article. (I'm sure it was a lot of work, but so is programming in general.) When I ran some memory-intensive tests on raw data a year ago, I had about a 200% slowdown over C++ with the same code--with zero compiler tweaks or D-specific workarounds.
Yeah, 200% is a lot. But for an indie developer, the time and effort to develop is also a critical factor--not just speed of execution. There's TONS of games that have come out that were written in C# or even freaking JAVA (Minecraft?!). And yet, Minecraft is one of the most popular games of all time. (Ran like piss on my Netbook when it first came out. But my netbook is in parts in a box, and Minecraft is still selling copies.)
I'm building my game in D as an real-world, full project-sized, experiment. And I'll have much more to say about building a game in D when it's done.
So my point with D is not that it's perfect. It's that it's viable. And yes, GC-less D is possible if you're willing to put the effort in and restrict your feature-set a little.
But that's a HUGE DIFFERENCE from trying to run say, C#, without a garbage collector, which is actually impossible.
If taken to the extreme, Booleans have no place in platform/data-centric development... so do you never use Booleans?
While I can't answer what you said, what blew me away in his OGRE slides (see my first post) is that he said, "Why can't a float be used as a boolean?" (instead of casting a rapidly used function from float back to bool before returning the condition value.)
That blew my mind. Floats CAN, and that takes out an unnecessary conversion. Oh sure, we're "taught" in college and by fellow programmers to FEAR floating point numbers. "Never compare a float!!! (except less than or greater than!)" and "floats aren't accurate!". Yet I ran a test (a Google search would also confirm it) showing that an IEEE single-precision float will actually store ANY INTEGER up to 16,777,216 completely accurately. Nobody bothered to tell me that, yet it's a fact that can be relied on.
So sometimes, yeah, it'd be faster to leave a float as a float. But we're always taught to needlessly encapsulate things into black boxes, out of fear, as if "nobody could possibly understand these things in real situations, so we should just restrict ourselves to a subset of use cases."
And what's so refreshing about Mike Acton is he says the opposite. "Understand the architecture or you're not an engineer."
Which IS something taught in real engineering departments, like my Mechanical Engineering degree. We were never told, "Hey, physics is hard. So let's just ignore it and design things that are bigger, fatter, and more expensive than they need to be." (Enjoy getting fired!) But somehow, that mentality is pervasive in the programming community. Just like (taking it back to my first post) how Mike Acton wants to slap someone anytime they want to abuse the "Premature Optimization is the root of all evil." quote.
[edit 2]
WAIT, I remember ONE thing related to your boolean question. Bools represent DECISIONS. An object with one bool could actually be two separate object cases. Two bools, four.
So often, instead of using bools, they'll SPLIT into separate arrays and/or SORT within the same single array. Hence the "print the result and zip it." You want your BRANCH decision (for a one-bool situation) to occur ONCE. All bool=false objects at, say, the top, then a sudden switch to all the bool=true objects at the bottom. Or vice-versa. The point is that all the like objects are together, so the branch predictor isn't thrashed as three objects come in =true with their code path, then one object comes in =false using a different code path, and back and forth, each flip killing the branch prediction and destroying the pipeline.
Split data where possible and treat them as independent arrays.
Sort data in the other situation, so that branches only occur at the change-over point.
I've seen that mentioned in a bunch of game programmer conferences... not just Mike Acton.
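To make the split-versus-branch idea concrete, here's a small sketch in Python (the names and numbers are mine, not from any talk): instead of a bool per object tested inside the hot loop, the objects live in two homogeneous lists and each list runs branch-free:

```python
# Branchy version: one array of (value, alive) pairs, one test per element.
def update_branchy(objects):
    out = []
    for value, alive in objects:
        if alive:               # taken/not-taken flips per object
            out.append(value * 2)
        else:
            out.append(value - 1)
    return out

# Split version: two homogeneous arrays, no per-element test at all.
def update_split(alive_values, dead_values):
    return [v * 2 for v in alive_values], [v - 1 for v in dead_values]

objects = [(10, True), (20, False), (30, True)]
alive = [v for v, a in objects if a]
dead = [v for v, a in objects if not a]
assert update_branchy(objects) == [20, 19, 60]
assert update_split(alive, dead) == ([20, 60], [19])
```

In Python the win is only conceptual, but in a compiled language the split loops become straight-line code the branch predictor never has to guess at.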
Now, I'm still not sure the best way to address objects that are FLIPPING state dynamically. You could "sort" it, but then, you're still branching in that sort. However, if the sort takes the "branch hit" once, and then the rest of your code references those objects multiple times (where they'd hit those branches many times) that may work... also, you could do a subset sorting algorithm with only "reduces branching" but not "eliminates" yet still improves the overall performance.
That is, even with a program that only reads a dataset ONCE per frame, you might be able to SORT some subset of that dataset to reduce the entropy a little bit and have the additional sorting cost of that reduction be less than the branching cost of an unsorted data set. So you "clean up" your data set a little bit each time.
Then again, that would still require the main program to HAVE those if(bool) statements, where as the splitting method removes them altogether. So like I said, I haven't figured out the best solution yet. But if they were all in the same array, reducing the entropy a little bit each frame, may work great when your dataset only increases entropy a little bit each frame. (That is, some objects change bool state, but not ALL. So where they WERE sorted, they're now "mostly sorted" and you're constantly reducing that state by a little bit with your sort algorithm.)
So often, instead of using bools, they'll SPLIT into separate arrays and/or SORT in the same single array.
Ahh, that makes sense.
Hey Siege, about Closures, I've not really considered how they're handled in memory. However, I just randomly stumbled on a comment that references GC and closures:
As a fun fact, if you modify druntime's allocator to be malloc(), you can use free(delegate.ptr) to manually manage closures, though this takes a lot of care to know if it actually should be freed or not.
Interesting... not sure if it answers your question/concerns about GC-free closures.
What are you typically using closures for?
Here is a collection of nice resources regarding data-oriented design:
--@fjordaroo | https://www.allegro.cc/forums/thread/616761/1028604
Reads log file lines that have not been read.
Project description

-n N, --every-n=N
    Update the offset file every N'th time we read a line (as opposed to only when we reach the end of the file).
--no-copytruncate
    Don't support copytruncate-style log rotation. Instead, if the log file shrinks, print a warning.
--read-from-end
    Read log file from the end if offset file is missing. Useful for large files.
--log-pattern
    Custom log rotation glob pattern. Use %s to represent the original filename. You may use this multiple times to provide multiple patterns.
--full_lines
    Only log when line ends in a newline `\n` (default: False)
--version
    Print version and exit.
In your code:
import sys
from pygtail import Pygtail

for line in Pygtail("some.log"):
    sys.stdout.write(line)
Contributing
Pull requests are very much welcome, but I will not merge your changes if you don’t include a test. Run tests with python setup.py test.
Project details
Release history
Download files
Download the file for your platform.
Source Distribution
pygtail-0.12.0.tar.gz (12.5 kB)
Built Distribution
pygtail-0.12.0-py3-none-any.whl (13.6 kB)

Source: https://pypi.org/project/pygtail/
I also wanted all users to have a Gourmious account, so I didn’t allow users to log in using their Facebook credentials if they didn’t have a Gourmious account. I decided to support the following 3 scenarios:
– user has a Django account and he logs in with Facebook. I associate his Facebook account to his Django account. This needs to be done only once.
– user does not have a Django account and tries to login using Facebook. I ask him first to create a Django account and I associate both accounts.
– user logs in using his Django credentials.
Facebook OAuth is easier than the old Facebook Connect. I was so happy to migrate to this new scheme.
I assume that you already have a server running your Django application and a Facebook app.
We are going to use the Django app django_facebook_oauth to make our life easier. Make sure you have the Python simplejson package installed.
Facebook Platform uses the OAuth 2.0 protocol for authentication and authorization. Read this first before continuing.
Facebook authentication
Clone the Django Facebook oauth app source code:
git clone
Copy the folder django_facebook_oauth to your Django project apps folder and rename it to facebook.
We will assume that your apps folder is called apps.
Django settings.py
We need to make the following changes:
- Add ‘apps.facebook.backend.FacebookBackend’ to AUTHENTICATION_BACKENDS.
- Add ‘apps.facebook’ to INSTALLED_APPS
- Add your Facebook API app ID: API_ID = xxxx.
- Add your Facebook secret key: APP_SECRET = xxxx.
Django urls.py
Add the following to urlpatterns:
(r'', include('apps.facebook.urls')),
Database changes
You need to add the following table so you can associate Django user IDs with Facebook user IDs and store the Facebook session token. You can use the syncdb feature or create the table manually.
CREATE TABLE `facebook_facebookuser` (
  `id` int(11) NOT NULL auto_increment,
  `user_id` int(11) NOT NULL,
  `facebook_id` varchar(150) NOT NULL,
  `access_token` varchar(150) default NULL,
  PRIMARY KEY (`id`),
  UNIQUE KEY `user_id` (`user_id`),
  UNIQUE KEY `facebook_id` (`facebook_id`)
) ENGINE=MyISAM DEFAULT CHARSET=utf8
Facebook authentication flow
1- user clicks on the Facebook login button
2- Django Facebook app authenticate view is called
3- the Facebook Open Graph authorize URL is called with the application ID, redirect uri = /authenticate, and scope = 'publish_stream' to ask for permission to post to the user's wall
4- the Django Facebook app authenticate view is called with a parameter called code.
5- authentication using the Django Facebook app backend by passing the code back to Facebook along with the app secret.
6- Facebook returns a session token to be used for actions like posting messages to the user’s wall.
7- if the user is already in the facebook_facebookuser table, log in and redirect to the home page. If the user is not in the table, ask him first to log in using your app credentials so you can associate his Django app user ID with his Facebook user ID.
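As a rough sketch of step 3, the authorize URL can be built like this (a hypothetical helper using only Python 3's standard library; the endpoint and parameter names follow the OAuth 2.0 flow described above, and the app ID and redirect URI are placeholders):

```python
from urllib.parse import urlencode

GRAPH_AUTHORIZE = "https://graph.facebook.com/oauth/authorize"

def authorize_url(app_id, redirect_uri, scope="publish_stream"):
    # Step 3: send the user here; Facebook calls back to redirect_uri
    # with a ?code=... parameter (step 4).
    params = urlencode({
        "client_id": app_id,
        "redirect_uri": redirect_uri,
        "scope": scope,
    })
    return GRAPH_AUTHORIZE + "?" + params

url = authorize_url("123456", "http://example.com/authenticate")
assert "client_id=123456" in url
assert "scope=publish_stream" in url
```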
Django Facebook app views.py
I modified views.py to use my login form to associate Django user id with Facebook ID instead of using the register form included in the Django Facebook app.
When the token is returned and the user does not exist in the facebook table, I add a parameter to the request session to indicate that the user needs to login first so I can join the accounts.
if user != None:
    login(request, user)
    return HttpResponseRedirect('/')
else:
    # lines added
    request.session['ask_to_login_facebook'] = '1'
    return HttpResponseRedirect('/')
    #return HttpResponseRedirect('/register/')
I check for this session parameter in my template to popup a login form.
When the user enters his Django app credentials, I add an entry to the Facebook table before logging in the user. This operation just needs to be done once. After that, I know how to match a user Facebook id to the Django app user id so the user can directly login using the Facebook connect button.
This the code in my Django app views.py
# check credentials
user = authenticate(username=username, password=password)
if user is not None:
    if user.is_active:
        if 'fb_id' in request.session:
            fb_user = FacebookUser(user = user,
                                   facebook_id = request.session['fb_id'],
                                   access_token = request.session['fb_token'])
            fb_user.save()
            del request.session['fb_id']
            del request.session['fb_token']
        login(request, user)
        status = 0
    else:
        status = 1
else:
    status = 1
    msg = _('Invalid password')
Django Facebook app backend.py
I modified this file to also add the token to the request session so I can use it when I add an entry to the facebook user table in views.py.
try:
    fb_user = FacebookUser.objects.get(facebook_id = str(profile["id"]))
except FacebookUser.DoesNotExist:
    request.session['fb_id'] = profile['id'] # line added
    request.session['fb_token'] = access_token
    return None
fb_user.access_token = access_token
fb_user.save()
Django app template
This is what I added to my Django app login form to support the Facebook login button.
<form id="form_authenticate" action="/authenticate" method="POST">
    <p>blablabla</a></p>
    <input id="form_authenticate_button" type="image" src="link_to_facebook_button" onClick="javascript:$('#form_authenticate').submit();">
</form>
I also added a ‘Post to Facebook’ check box to my ‘Post dish review’ form so the user can decide what gets posted to his Facebook wall.
I used to post messages using the Javascript SDK but I now do it on the server side using the Facebook python SDK.
Add the file facebook.py () to the Facebook app folder and rename it to facebook_sdk.py.
We call put_wall_post() to post a message to the user’s wall.
Here is the code I am using in my Django app views.py to format the parameters before calling put_wall_post() .
def post_to_facebook(request):
    try:
        fb_user = FacebookUser.objects.get(user = request.user)
        # GraphAPI is the main class from facebook_sdk.py
        graph = GraphAPI(fb_user.access_token)
        attachment = {}
        message = 'test message'
        caption = 'test caption'
        attachment['caption'] = caption
        attachment['name'] = 'test name'
        attachment['link'] = 'link_to_picture'
        attachment['description'] = 'test description'
        graph.put_wall_post(message, attachment)
        return 0
    except:
        logging.debug('Facebook post failed')
That’s it for now.
If you enjoyed this article, check out my web app Gourmious to discover and share your favorite restaurant dishes. It would be cool if you could add some of your favorite restaurant dishes.
Hi,
I have a couple of questions:
1) Does the refer to or?
2) Does the fb_token found in # check credentials refer to the oauth access token ?
That’s all for now!
Link | October 2nd, 2010 at 10:55 pm
1) it refers to the package:
2) Yes.
Link | October 2nd, 2010 at 11:53 pm
I’m new to Django and could use some debugging pointers. Should I be able to get this to work if my server is localhost?
I’m seeing the following error:

({
  "error": {
    "type": "OAuthException",
    "message": "Missing redirect_uri parameter."
  }
});
Thanks
Link | December 10th, 2010 at 11:06 am
@patrick
It should work.
What is redirect_uri set to in authenticate_view() in the Facebook Django library? What is your Facebook application URL set to?
Link | December 16th, 2010 at 5:50 pm
Hi, I have to do what you did with Gourmious, but I’m not sure about something: every time I log into the site, is it going to ask me to connect to Facebook, or just the first time, and then assume that I publish on Facebook too?
Link | June 23rd, 2011 at 10:50 am
@Miguel: The first step is to associate the facebook account to your web site account. I took this approach as I wanted every user to have an account on my side. You will have to look at the advantages and disadvantages of that versus only using their Facebook account. When this is done, the user just needs to click on the ‘Connect with Facebook’ button each time he wants to login using his facebook account. An access token will be received by the Django backend. You can use this access token to post on their wall.
Link | July 9th, 2011 at 3:58 pm
Hi Laurent,
Is the first code you have written, for login, in the Facebook app’s view, or have you created your own view?
Link | December 28th, 2011 at 9:59 am
@codepoet The first code snippet is part of the function authenticate_view of the Facebook app view.
Link | January 2nd, 2012 at 3:17 pm
Hi Laurent
Thanks for your response. As per the details you have given, I have cloned the application, put it in my project, and renamed it to facebook. When I check this facebook app’s view, I only find two methods, ‘login’ and ‘callback’. Is ‘authenticate_view’ a new function you have written?
Link | January 4th, 2012 at 8:11 am
Just as a hint. If you’re using pip & virtualenv (which you should use), you can just “pip install git+git:…..”. It would be easier than “git clone and then copy etc.”
Link | January 30th, 2012 at 3:33 am

Source: http://www.laurentluce.com/posts/facebook-connect-with-django/
When I first encountered F# (and Functional Programming, or FP) my initial
reaction was two-fold: so what, and why? I decided to try and answer those
questions by writing a few less-than-trivial components in F#. What struck
me first was that I could tell that my code still looked imperative - lots of
mutable variables, "for" loops instead of recursion, etc. I thought, this
is interesting, I have to actually learn how to think differently with FP, which
led to deeper questions - why do I have to think differently, how do I think
differently, and finally, is thinking more in FP terms actually an improvement
in how I architect applications?
These were more interesting questions to me than the syntactical differences between F# and C#. How to think in FP terms was something that I also found lacking - most of the resources on FP dive right into how great FP is at processing lists, which isn't teaching me anything about why it's better and how to think like an FP programmer. Also, I was intrigued because it is rare that I encounter
something (at least in technological circles) that requires me to re-evaluate my
concepts of programming at a fundamental level. I do not subscribe to the
dogmatic "FP is better" evangelizing that goes on (even as I myself evangelize
my own ideas!), but I figured that there was probably something of value about
FP that could improve my software architecture abilities if brought in
balance with practices that I already consider to be fundamental techniques
for anything but trivial applets.
Therefore, in this article, I will explore what it is about one's thinking
that needs to shift and the balanced benefits of having another way of
thinking about application development that can be added to the imperative / object-oriented
tool chest, and more to the point, why FP facilitates a different way of thinking. To some, this might be obvious, but it was
far from obvious for me, having been mired in OO architecture and working with
imperative languages for the last 30
years. Nor will I advocate that FP is fundamentally better - it is
different, and when a problem requires something different, FP is another tool to consider.
While I will provide examples of OO paradigms supported by F#, I will take the
position that this is "bad" FP, and look instead at how, by thinking
differently, we can avoid OO constructs and live more in the world of pure FP
thinking. All of the examples here are
implemented in VS2010. I have not found any code incompatibility issues
with VS2008 and the F# plugin, nor with VS2011.
For those that don't want to read the whole article, feel free to skip to the
summary for the "leading thoughts" that illustrate the difference between OOP /
imperative thinking and FP thinking.
Also, the reader is expected to have some familiarity already with F#,
particularly with the way class types are defined and the "match" statement.
The first thing that I decided I needed to figure out was out how to put my
toe into the water, and that meant learning how to call F# functions from C#,
with which I'm already intimately familiar. Also, I'd read a
lot about FP and mutability and words I never encountered before like currying,
closures, continuations, and monads. Figuring out how to code in a
language that is naturally immutable took me down
the first "think differently" rabbit hole with regards to variables, state, and
list management. Furthermore, I discovered that anything having to do with lists is
typically handled in FP with recursion rather than looping. In imperative
languages, recursion always raises the specter of stack overflows (too many
recursive calls) and performance issues (exactly how many levels of function
calls do I need to return from?), so this was an immediate red flag for me that
needed to be closely looked at to see if this made any sense to actually use FP
for anything practical, meaning, the processing of sometimes millions of items
in a collection. Lastly, when I finally got my head wrapped around FP
sufficiently to do something somewhat practical and actually applicable to FP, I
encountered some significant gotcha's with another feature of FP, type
inference.
Probably 95% of my code base is in C# at this point, so it seemed natural to
figure out how to call F# function from C#. This would at least provide a
foundation that I am intimately familiar with, from which to springboard into
F#.
From a blank solution, create a C# console project and an F# project, and
reference the F# project in the C# project's references:
In the default "Module1.fs" file that VS2010 creates, replace the module name
and create a simple function two add two parameters:
module FSModule
let Add x y = x + y
In the C# program, call the function and write the result to the console:
using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
namespace WhyFP
{
class Program
{
static void Main(string[] args)
{
int ret = FSModule.Add(1, 2);
Console.WriteLine(ret);
}
}
}
Yay, it worked:
If you typed in this example instead of copy-pasting from the article, you will discover
that FSModule is not highlighted, and because Intellisense doesn't know about
the FSModule module, the Add method isn't known, so Intellisense fails you
again. So, this is the first thing one encounters - in order for
Intellisense to work, the F# library must be compiled first! This behavior
is different from what happens when working in other .NET OO languages.
No code is required to define a module. If a codefile does not contain a
leading namespace or module declaration, F# code will implicitly place the code
in a module, where the name of the module is the same as the file name with the
first letter capitalized. To access code in another module, simply use .
notation: moduleName.member. Notice that this notation is similar to the syntax
used to access static members -- this is not a coincidence. F# modules are
compiled as classes which only contain static members, values, and type
definitions.1
Being used to namespaces, and knowing the F# supported namespaces, I wanted
to understand the difference between namespaces and modules. This code
results in the error "Namespaces cannot contain values.":
namespace FSModuleNS
let Add x y = x + y
Namespaces are used only for hierarchical categorization of modules,
classes, and other namespaces1. Therefore, the F# code has to
look like this:
namespace FSModuleNS
module FSModule =
let Add x y = x + y
and the C# code would be modified to call the Add function this way:
int ret = FSModuleNS.FSModule.Add(1, 2);
This might seem trivial, but it's a slight shift from how we go about defining things in C# and is worth noting.
I rapidly discovered two other "problems" with F# code, both involving
forward references. These may seem trivial but it is illustrative of the
many "WTF?" experiences one can have learning a new language and development
environment.
Add another module, we'll call it "Xyz.fs" and the function:
module Xyz
let MagicNumber = 5
then reference this function in the Add function:
let Add x y = x + y + Xyz.MagicNumber
This will result in the compiler error "The module or namespace 'Xyz' is not defined." This is because F# does not permit forward references: files must be listed in dependency order in the project. The project files:
must be rearranged as follows:
While I'm on the subject, notice that the module definition in Xyz.fs is not followed by an '='. This is a top level module declaration. This one:
module FSModule =
is a local module declaration. Basically, a top level module
declaration applies to the entire contents of a file, whereas local module
declarations let you partition the file into different modules. Here is
the salient point though with regards to the implementation: A module "...is
implemented as a common language runtime (CLR) class that has only static
members."2
One uses the "open" keyword (similar to "using") to avoid having to qualify
the module name. For example:
namespace FSModuleNS
open Xyz
module FSModule =
let Add x y = x + y + MagicNumber
This keyword applies to F# modules as well as .NET namespaces.
This is a somewhat contrived example:
type Customer =
{
FirstName : Name;
LastName : Name;
}
type Name = string
but is illustrative of the fact that F# does not support forward references.
The above code will result in the error "The type 'Name' is not defined.
To fix this, we need to use the "and" keyword:
type Customer =
{
FirstName : Name;
LastName : Name;
}
and
Name = string
Everything about OO / imperative languages is mutable. I certainly
don't usually go about making fields in C# readonly - at best a
property might have a protected or private setter (or no setter at all), but
which still allows the implementing
class to modify the underlying field value. Conversely, everything
about FP is about immutability - you have to explicitly use the mutable
keyword to make something modifiable. This is the first real mental
stumbling block to thinking in functional programming terms, because it begs the
question, how do you actually do anything in FP then?
FP Thinking, #1: In functional programming, we must embrace the idea
and its implications that, once something is initialized, it cannot be changed.
Let's look at this a
little closer. Let's consider a simple traffic light class (please ignore all the casting
around the enum usage, it's bad design, but I wanted a short and simple example):
public class TrafficLight
{
public enum State { Red, Green, Yellow };
public State LightState { get; protected set; }
protected int[] stateDurations = { 10, 5, 1 };
protected DateTime transitionTime;
public TrafficLight()
{
LightState = State.Red;
InitializeDuration();
}
public bool CheckState()
{
bool lightChanged = false;
if (DateTime.Now >= transitionTime)
{
LightState = (State)(((int)LightState + 1) % 3);
InitializeDuration();
lightChanged = true;
}
return lightChanged;
}
protected void InitializeDuration()
{
int durationAtState = stateDurations[(int)LightState];
transitionTime = DateTime.Now.AddSeconds(durationAtState);
}
}
You can verify that this code works (10 seconds of red, 5 seconds of green, 1
second of yellow, etc) by writing a simple looping test case:
TrafficLight tl = new TrafficLight();
for (int i = 0; i < 20; i++)
{
tl.CheckState();
Console.WriteLine(tl.LightState);
System.Threading.Thread.Sleep(1000);
}
The "problem", if you will, with OOP is that classes do not just provide a
wrapper for methods (thus addressing issues such as modularity) but they also
manage state.
A class, in OOP, once instantiated, is a little package of state and methods
(aka functions) that you can call to "compute" something based on the current
state, along with functions that will alter the current state. Often you
have methods that do both - perform a computation that also alters state, known
as a "side-effect" either on itself or another object.
In the heyday of imperative languages like C, Pascal, and BASIC, state was
effectively a globally accessible thing. OOP "fixed" the scope, visibility
and global state management issues by introducing classes, however, state
remained associated with the class instance. A few years ago I wrote an
article
What's Wrong With Objects? and I would have to say at this point that the
answer is much simpler: objects are an entanglement of:
What this means is that an object is a mutable, complicated entity that is hard to test
and reuse. As a result, all sorts of additional technologies have arisen
to support these complex little creatures - unit test engines, design patterns,
code generators, ORM's, etc. Immutability (and the lack of side-effects)
is something that FP'ers like to point out as being an advantage to FP.
Immutability is a fundamental "think differently" principle of FP:
FP Thinking, #2: With FP, there are no side-effects because a state
change is represented by a new instance. So, stop thinking in terms of
changing state in an existing instance, and start thinking in terms of "this
state change means I need a new instance."
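The same discipline can be sketched outside F# too; in Python (illustration only), a frozen dataclass makes "a state change means a new instance" the only option:

```python
from dataclasses import dataclass, replace

@dataclass(frozen=True)
class Light:
    color: str

light = Light("Red")
# A "state change" builds a new value; the original instance is untouched.
changed = replace(light, color="Green")
assert light.color == "Red"
assert changed.color == "Green"

# Direct mutation is rejected outright:
try:
    light.color = "Yellow"
except AttributeError:
    pass
else:
    raise AssertionError("frozen dataclass should not be mutable")
```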
Of course, you can write programs in F# that introduce side-effects,
which introduces a corollary:
FP Thinking, #3: Even if the language
supports mutability, this should be avoided at all cost (except when interfacing
to a language like C/C++/C# that requires mutability to do anything useful) as
mutability breaks many of the advantages of FP.
For example, the whole class could be poorly written in F#:
module FSTrafficLight
type LightStateEnum =
| Red
| Green
| Yellow
type TrafficLight() as this =
let mutable myLightState = Red
let mutable transitionTime = System.DateTime.Now
do this.InitializeDuration
member this.LightState
with get() = myLightState
and set(value) = myLightState <- value
member this.GetDuration =
match this.LightState with
| Red -> 10.0
| Green -> 5.0
| Yellow -> 1.0
member this.NextState =
match this.LightState with
| Red -> Green
| Green -> Yellow
| Yellow -> Red
member this.CheckState() =
let now = System.DateTime.Now
match now with
| now when now >= transitionTime ->
myLightState <- this.NextState
this.InitializeDuration
true
| _ ->
false
member this.InitializeDuration =
let durationAtState = this.GetDuration
transitionTime <- System.DateTime.Now.AddSeconds(durationAtState)
But this is a poorly written F# class because of the mutable fields. Here's the test code in C#:
static void FSharpTrafficLight()
{
FSTrafficLight.TrafficLight tl = new FSTrafficLight.TrafficLight();
for (int i = 0; i < 20; i++)
{
tl.CheckState();
Console.WriteLine((TrafficLight.State)tl.LightState.Tag);
System.Threading.Thread.Sleep(1000);
}
}
(As an aside, notice how the F# discriminated union LightStateEnum has a Tag property to
get the enum value, which is mapped to a C# enum.)
Whenever you see a construct like this:
myLightState <- this.NextState
you are mutating the state of an existing instance. Stop and realize
that this is breaking FP.
How then would the above class be written in a more appropriate FP style?
First, here's the code:
module FSBetterTrafficLight
type LightStateEnum =
| Red
| Green
| Yellow
type TrafficLight(state, nextEventTime : System.DateTime) =
let myLightState = state
let transitionTime = nextEventTime
new() =
let initialState = LightStateEnum.Red
let nextEventTime = TrafficLight.GetEventTime initialState
TrafficLight(initialState, nextEventTime)
member this.LightState
with get() = myLightState
static member GetDuration state =
match state with
| Red -> 10.0
| Green -> 5.0
| Yellow -> 1.0
static member NextState state =
match state with
| Red -> Green
| Green -> Yellow
| Yellow -> Red
static member GetEventTime state =
let now = System.DateTime.Now
let duration = TrafficLight.GetDuration state
now.AddSeconds(duration)
member this.CheckState() =
let now = System.DateTime.Now
match now with
| now when now >= transitionTime ->
let nextState = TrafficLight.NextState myLightState
let nextEventTime = TrafficLight.GetEventTime nextState
TrafficLight(nextState, nextEventTime)
| _ -> this
and here's how we use it:
static void FSharpBetterTrafficLight()
{
FSBetterTrafficLight.TrafficLight tl = new FSBetterTrafficLight.TrafficLight();
for (int i = 0; i < 20; i++)
{
tl = tl.CheckState();
Console.WriteLine((TrafficLight.State)tl.LightState.Tag);
System.Threading.Thread.Sleep(1000);
}
}
Notice this line in the C# code that calls our F# code:
tl = tl.CheckState();
Every time we call CheckState, we are either assigning a new instance (when the state of the traffic light has changed) or are continuing to use the same instance (if the state does not change.) The astute reader will say "wait a minute, all you've done is push the mutability back onto the caller!" This is correct, but keep in mind that we are calling the F# function from a language (C#) that is by nature mutable. Later we will see how to call the CheckState function from within F# (inherently immutable) without using any mutable fields.
For the record, I am not a proponent of using classes in F#. They
support working with the .NET framework and imperative languages, and so far, I
haven't encountered anything I can do with classes that I can't do in F# without
classes, using functions and features such as partial functions (more on this
later.) If you think of a class (even an immutable class) in terms of
typical OOD, you will immediately be thinking about inheritance and
polymorphism.
FP Thinking #4: Stop thinking about objects. Stop thinking about
inheritance and polymorphism. Separate out the fields that the class
encapsulates into an FP representation. Separate out the methods that the
class encapsulates into discrete static functions. Learn out how to use
partial functions to leverage the concept of inheritance. Stop using
polymorphism altogether by naming your functions better - polymorphism is
actually nothing more than a band-aid for weak thinking.
In order to eliminate side-effects, the constructor of an immutable class in
F# must fully initialize its fields. In fact:
FP Thinking #5: Learn how to think about structures (OO classes, FP
records) as fully initialized and immutable entities. Figure out how to
think in terms of fully initialized implementations. Learn to think purely
in terms of initialization and computation. Replace "assignment" with "new
instance."
If a field is mutable, it is "assignable", which violates our "no mutable fields" rule, as we are changing the state of the current instance. In the above code, I provide a default constructor that fully initializes the state of the class by calling the parameterized constructor. The parameterized constructor simply initializes the fields to the values passed in by the parameters.
Notice that the following methods are static: GetDuration, NextState, and GetEventTime.
Notice that these all take a state parameter. Here we have autonomous
member methods that don't care about the state of the class, they are, well,
autonomous! These become ridiculously simple yet powerful (because
seriously, what can go wrong???) methods that are completely decoupled from the
class instance and its state. One might argue that the power of a class is
that you don't have to pass state parameters to the methods in the class.
This is true, however, it also couples the methods to the class state, which
then brings us back to the issue of side-effects (one of several issues).
By passing everything to a function that it needs to perform its computation,
you have instead an autonomous, easily testable function. The parameters
of a function fully describe everything that that function requires to perform
its computation - there is no guesswork as to what "internal", stateful,
mutable, fields that function is also relying upon.
FP Thinking #7: An autonomous function is autonomous because its
parameters describe everything it needs to perform its computation. Stop
entangling your functions with a mix of parameters and stateful fields.
Instead, create functions that have no dependency on anything other than what is
being passed to them.
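The static GetDuration and GetEventTime members above follow exactly this rule. The same shape, sketched in Python for illustration (with one deliberate difference from the F# version: the clock is passed in as a parameter instead of being read inside the function):

```python
import datetime

DURATIONS = {"Red": 10.0, "Green": 5.0, "Yellow": 1.0}

def get_duration(state):
    # Mirrors the static GetDuration: the state arrives as a parameter.
    return DURATIONS[state]

def get_event_time(state, now):
    # Mirrors GetEventTime, except the current time is also passed in,
    # which makes the function fully deterministic and trivially testable.
    return now + datetime.timedelta(seconds=get_duration(state))

t0 = datetime.datetime(2020, 1, 1)
assert get_event_time("Red", t0) == t0 + datetime.timedelta(seconds=10)
assert get_event_time("Yellow", t0) == t0 + datetime.timedelta(seconds=1)
```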
It takes a while to get used to, but the idea is simple: if you have a
computation to perform, pass in all the information that the computation
requires rather than relying on the state of fields in the class itself.
Among other things, this means that your computations are truly re-usable: the
class simply becomes a wrapper for the computation - you don't even need to
instantiate the class because the computations are static methods. Yes,
there are arguments that can be made against this, but remember a couple of things.
FP Thinking #8: Eliminate methods that change state by
changing field values of the current instance. If you need to change the
state of an object, it instead becomes a new instance representing that new
state.
In the above code, the member method CheckState does not change
the state of the class (it can't because nothing is mutable), instead it creates
a new instance with the new state. The method either returns itself if
it's not yet time to transition to another state, or it returns a new instance.
You can see how this affects the code that calls the class - it now constantly
re-assigns the instance as it loops through 20 iterations.
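The same pattern can be sketched in Python with a frozen dataclass standing in for the immutable F# record (my own illustrative names): the check function either returns the instance unchanged or returns a brand-new instance representing the new state.

```python
from dataclasses import dataclass, replace

@dataclass(frozen=True)          # frozen: assignment to fields raises an error
class Light:
    state: str
    ticks_left: int

def check_state(light: Light) -> Light:
    nxt = {"Red": "Green", "Green": "Yellow", "Yellow": "Red"}
    if light.ticks_left == 0:
        return Light(nxt[light.state], 3)   # a new instance IS the new state
    # replace() builds a new instance; the original is never mutated
    return replace(light, ticks_left=light.ticks_left - 1)

light = Light("Red", 1)
light = check_state(light)   # caller re-binds, as in the article's loop
light = check_state(light)
assert light.state == "Green"
```

As in the F# version, the caller re-binds the name to each new instance rather than mutating fields in place.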
Using better FP practices in F#, we have achieved a few things that solve the
problems mentioned earlier with classes:
However, we can still do better, by entirely eliminating the class, which
after all, is nothing more than a convenient container at this point. This
is illustrated next.
FP Thinking #9: Manage your state with parameters and return values
(stack-based state management) rather
than heap-based management.
Rethinking how state is managed, and particularly, thinking about state as
something that is stack-based rather than heap-based, is a fundamentally
different way of thinking about programming. This is antithetical to much
of the foundation of imperative programming - in early languages (BASIC, Pascal,
Fortran, assembly, etc) fields were assigned to physical memory locations,
either explicitly, as in early microprocessor programming, or implicitly by the
compiler. In languages that support a memory pool (C, C++, etc), the
concept of a "heap" came into being to support the dynamic creation and
destruction of structures. As OO programmers, we are
very entrenched in thinking about objects, which are heap-based. They
float out there somewhere in memory, and in the days of C/C++, are easily
stepped on and required explicit lifetime management. While .NET creates a
safer, more secure, and implicit lifetime management environment (along with the
potential pitfalls of automatic memory management and garbage collection), all it really
is doing is hiding the problem. With FP, the problem goes entirely away
when you start thinking in terms of stack-based state. Let's look at a
classless (haha) implementation of the traffic light:
module ClasslessTrafficLight
type LightStateEnum =
| Red
| Green
| Yellow
let GetDuration state =
match state with
| Red -> 10.0
| Green -> 5.0
| Yellow -> 1.0
let NextState state =
match state with
| Red -> Green
| Green -> Yellow
| Yellow -> Red
let GetEventTime state (now : System.DateTime) =
let duration = GetDuration state
now.AddSeconds(duration)
let CheckState(currentState, transitionTime) =
let now = System.DateTime.Now
match now with
| now when now >= transitionTime ->
let nextState = NextState currentState
let nextEventTime = GetEventTime nextState now
(nextState, nextEventTime)
| _ -> (currentState, transitionTime)
let InitialState = LightStateEnum.Red
let InitialEvent = (InitialState, GetEventTime InitialState System.DateTime.Now)
and its usage:
static void FSClasslessTrafficLight()
{
var state = ClasslessTrafficLight.InitialEvent;
for (int i = 0; i < 20; i++)
{
state = ClasslessTrafficLight.CheckState(state.Item1, state.Item2);
Console.WriteLine((TrafficLight.State)state.Item1.Tag);
System.Threading.Thread.Sleep(1000);
}
}
OK, if you observe carefully, you will notice that I have cheated - I've
eliminated the explicit class but I'm actually still managing the state in a
class - in this case, a Tuple<>. If you inspect the
type "state", which I've conveniently hidden behind a "var", you will observe that
it is of the type:
Tuple<ClasslessTrafficLight.LightStateEnum, DateTime>
The important point here is not that I've merely replaced a specific class
with a more general purpose container, but rather that I've completely decoupled
the container managing the state (a Tuple in this case) from the computations
that change the state. Furthermore, the state change is managed by a new
Tuple rather than changing the values of fields in the Tuple. However, as
with the earlier example, the caller (C#) is still managing the state in a
mutable field. We still have to eliminate this, and we will, soon!
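The same decoupling can be sketched in Python (names and time values are mine, with the durations taken from the article's example): the state lives in a plain tuple, and a free function maps one state tuple to the next.

```python
NEXT = {"Red": "Green", "Green": "Yellow", "Yellow": "Red"}
DURATION = {"Red": 10.0, "Green": 5.0, "Yellow": 1.0}

def check_state(state, now):
    color, transition_time = state          # unpack the tuple "container"
    if now >= transition_time:
        nxt = NEXT[color]
        return (nxt, now + DURATION[nxt])   # a fresh tuple, not a mutation
    return state                            # not time yet: same tuple back

s = ("Red", 10.0)
s = check_state(s, 10.0)      # transition time reached: move to Green
assert s == ("Green", 15.0)
s = check_state(s, 12.0)      # not yet time: the same state comes back
assert s == ("Green", 15.0)
```

The container (a tuple) knows nothing about traffic lights, and the function knows nothing about how the caller stores the state.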
In the traffic light example, we can construct a simple class inheritance
model in C# to construct two kinds of traffic lights -
a 3 light version and a 2 light version:
public abstract class TrafficLightBase
{
public enum State { Red, Green, Yellow };
public State LightState { get; protected set; }
protected DateTime transitionTime;
public TrafficLightBase()
{
LightState = State.Red;
InitializeDuration();
}
public abstract bool CheckState();
protected abstract void InitializeDuration();
}
public class ThreeColorTrafficLight : TrafficLightBase
{
public override bool CheckState()
{
bool lightChanged = false;
if (DateTime.Now >= transitionTime)
{
switch (LightState)
{
case State.Red:
LightState = State.Green;
break;
case State.Green:
LightState = State.Yellow;
break;
case State.Yellow:
LightState = State.Red;
break;
}
InitializeDuration();
lightChanged = true;
}
return lightChanged;
}
protected override void InitializeDuration()
{
int durationAtState = 0;
switch (LightState)
{
case State.Red:
durationAtState = 10;
break;
case State.Green:
durationAtState = 5;
break;
case State.Yellow:
durationAtState = 1;
break;
}
transitionTime = DateTime.Now.AddSeconds(durationAtState);
}
}
public class TwoColorTrafficLight : TrafficLightBase
{
public override bool CheckState()
{
bool lightChanged = false;
if (DateTime.Now >= transitionTime)
{
switch (LightState)
{
case State.Red:
LightState = State.Green;
break;
case State.Green:
LightState = State.Red;
break;
}
InitializeDuration();
lightChanged = true;
}
return lightChanged;
}
protected override void InitializeDuration()
{
int durationAtState = 0;
switch (LightState)
{
case State.Red:
durationAtState = 3;
break;
case State.Green:
durationAtState = 3;
break;
}
transitionTime = DateTime.Now.AddSeconds(durationAtState);
}
}
In the C# code, we have two abstract methods, overridden in each subclass, for obtaining the next state and the
state duration, depending on whether the traffic light has three colors (red,
green, yellow) or just two colors (red and green).
In the F# code, we first create some functions that are similar to our
derived classes override methods:
Similar to TwoColorTrafficLight.InitializeDuration:
let GetDuration2Color state =
match state with
| Red -> 3.0
| Green -> 3.0
Similar to ThreeColorTrafficLight.InitializeDuration:
let GetDuration3Color state =
match state with
| Red -> 10.0
| Green -> 5.0
| Yellow -> 1.0
Similar to TwoColorTrafficLight.CheckState:
let NextState2Color state =
match state with
| Red -> Green
| Green -> Red
Similar to ThreeColorTrafficLight.CheckState:
let NextState3Color state =
match state with
| Red -> Green
| Green -> Yellow
| Yellow -> Red
Next, we modify the CheckState function to take two additional parameters,
which are themselves functions. The idea is, we will pass in the
appropriate function for determining the next state and the state duration,
given the type of traffic light we want to simulate:
let CheckState fncNextState fncEventTime (currentState, transitionTime) =
let now = System.DateTime.Now
match now with
| now when now >= transitionTime ->
let nextState = fncNextState currentState
let nextEventTime = GetEventTime fncEventTime nextState now
(nextState, nextEventTime)
| _ -> (currentState, transitionTime)
Now, here comes the fun (pun intended) part. We create a couple
partial functions that provide these two parameters (notice that they are the
first two parameters of the CheckState function):
let CheckState2Color = CheckState NextState2Color GetDuration2Color
let CheckState3Color = CheckState NextState3Color GetDuration3Color
Instead of inheritance with virtual methods, we are taking advantage of a feature
of FP called partial application. This concept (along with its cousins,
currying and closure) is discussed next, particularly with regard to how FP changes the way we think about OO inheritance.
It turns out that there's a lot of confusion regarding the terms "Partial
Application" and "Currying". These concepts are fundamental to thinking in
FP. For example, inheritance is a fundamental concept in OOP. One
feature of inheritance is that it hides from the programmer the concrete type that
is implementing a behavior. This concrete type can be thought of as a
"state instance" of an object graph. Depending on which subclass is
instantiated, we can affect the behavior of the application. In FP, we can
do the same thing, but with partial application of functions and by passing
functions as parameters. This
requires a different way of thinking about inheritance.
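As a language-neutral illustration of the same trick, here is a Python sketch using functools.partial (the names are mine): the behavior-selecting argument is fixed up front, leaving a function that only needs the state, just like CheckState2Color and CheckState3Color.

```python
from functools import partial

def next_state(transitions, color):
    """Generic 'virtual method': behavior is supplied as data."""
    return transitions[color]

# Fix the first argument, producing functions of smaller arity -
# the FP stand-in for instantiating a particular subclass.
next_2color = partial(next_state, {"Red": "Green", "Green": "Red"})
next_3color = partial(next_state,
                      {"Red": "Green", "Green": "Yellow", "Yellow": "Red"})

assert next_2color("Green") == "Red"
assert next_3color("Green") == "Yellow"
```

Where OO would choose behavior by instantiating a subclass, here we choose behavior by choosing which arguments to fix.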
Basically, what you are doing with FP is explicitly implementing the concept
of inheritance through the use of what in the OO world would be considered to
be function pointers. This is similar to the Action<T, ...> and Func<T, ..., TResult> delegate types in the .NET library.
"[P]artial application ... refers to the process of fixing a number of
arguments to a function, producing another function of smaller arity."4
Arity is simply a fancy word for "the number of arguments or operands that the
function takes."6 In the F# code, depending on the "state
instance" that we want (a 2-color or 3-color traffic light), we create the partial
functions CheckState2Color and CheckState3Color shown above.
What this means is that CheckState2Color and CheckState3Color are providing
the first two parameters, and all that the caller needs to do is provide the
last parameter, which is the tuple (currentState, transitionTime).
Then, depending on the traffic light type that we want to "instantiate", we
return the desired CheckState partial function, which is similar to
the factory pattern that we're used to in OOP:
let CheckTrafficLightState trafficLightType =
match trafficLightType with
| Color2 -> CheckState2Color
| Color3 -> CheckState3Color
We then provide the rest of the parameters in the ProcessTrafficLight
function:
let ProcessTrafficLight(trafficLightType, currentState, transitionTime) = CheckTrafficLightState trafficLightType currentState transitionTime
In all my other examples, we have been calling the F# code from C#. An
example of calling the above F# function from C# would look like this:
for (int i = 0; i < 20; i++)
{
state = ClasslessTrafficLight.ProcessTrafficLight(trafficLightType, state.Item1, state.Item2);
Console.WriteLine((TrafficLight.State)state.Item1.Tag);
System.Threading.Thread.Sleep(1000);
}
This is not ideal, as the CheckTrafficLightState function is called
all the time. Instead, let's move the loop that is in the C# code into the
F# code, and we can see a cleaner implementation:
let StartTrafficLight trafficLightType iterations =
let checkLightState = CheckTrafficLightState trafficLightType
let startingState = InitialEvent trafficLightType
RunTrafficLight checkLightState startingState iterations
Here we have a simple function which first assigns the partial function
CheckState2Color or CheckState3Color to checkLightState. We then call the
function RunTrafficLight to run the traffic light for the number of specified
iterations. We'll look at RunTrafficLight in a bit. But
first, a word about currying.
"[C]urrying is the technique of transforming a function that takes multiple
arguments (or an n-tuple of arguments) in such a way that it can be called as a
chain of functions each with a single argument (partial application.)"5
This is the correct definition of currying and you can see that currying
is a special form of partial application in which the curried function takes
only one parameter. In the partial application examples above, these
qualify as curried functions because the curried function has only the tuple as
the remaining parameter. There is a lot of confusion around currying, as
evidenced by this statement: "When a function is curried, it returns another function whose signature
contains the remaining arguments."3 This is not completely
accurate. It is currying if and only if the returned function takes only one
parameter. It is considered "partial application" if the returned function
takes more than one parameter.
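The distinction is easy to demonstrate in Python (an illustrative sketch of my own, not from the article): currying rewrites a three-argument function as a chain of one-argument functions, while partial application merely fixes some arguments.

```python
from functools import partial

def add3(a, b, c):
    return a + b + c

def curried(a):
    # each function in the chain takes exactly one argument
    return lambda b: (lambda c: a + b + c)

assert curried(1)(2)(3) == 6      # currying: a chain of 1-ary calls
add_1_2 = partial(add3, 1, 2)     # partial application: arity 3 -> arity 1
assert add_1_2(3) == 6
```

Both produce the same final result, but only the first is currying in the strict sense.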
In the above code, I did not present the loop that runs the traffic light for
the specified number of iterations. Here is the F# code:
let rec RunTrafficLight checkLightState currentState n =
match n with
| 0 -> ()
| _ ->
let nextState = checkLightState currentState
printfn "%s" (ParseLightState (fst(nextState)))
System.Threading.Thread.Sleep(1000)
RunTrafficLight checkLightState nextState (n-1)
Here we finally see how the traffic light is run (in F#) without the use of a
mutable field to handle the return of the checkLightState function.
We are instead taking advantage of FP in two ways:
FP Thinking #11: Recursion is the way you iteratively (confusing, isn't
it?) work with state changes and the new instances of entities created by a
state change. The salient point in thinking about recursion is to identify
the entity (entities) whose state will change, and to pass those entities as
parameters to the function, recursively. In this way, the state changes
can be expressed as new instances passed to the function, rather than using
mutable fields of the same instance.
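FP Thinking #11 can be sketched in Python, too (my own minimal example; note that Python, unlike the F# compiler, does not optimize tail calls, so this is purely illustrative): the changing state is a parameter of the recursive call, never a mutable variable.

```python
def run(state, n, trace=()):
    """Run the light n steps; state changes flow through the parameters."""
    if n == 0:
        return trace
    nxt = {"Red": "Green", "Green": "Yellow", "Yellow": "Red"}[state]
    return run(nxt, n - 1, trace + (nxt,))  # new state passed, not assigned

assert run("Red", 3) == ("Green", "Yellow", "Red")
```

There is no `state = ...` re-assignment anywhere; each recursive call simply receives the next state.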
While F# supports "for" loops, if we were to write the code
imperatively in
F#, it would end up looking like this:
let ForLoopTrafficLights trafficLightType iterations =
let checkLightState = CheckTrafficLightState trafficLightType
let mutable currentState = InitialEvent trafficLightType
for i in 1..iterations do
currentState <- checkLightState currentState
printfn "%s" (ParseLightState (fst(currentState)))
System.Threading.Thread.Sleep(1000)
Notice that this requires a mutable field to maintain the traffic light state.
We want to eliminate mutable fields, so instead, we use a recursive call.
By using a recursive call, nextState is parameterized and we use
the stack (theoretically) as a means of managing changing state.
Now, the important thing to realize is that the compiler will translate your
recursive call into an iterative loop (there's lots of discussions about things
like "tail recursion" that you can Google.) Observe the decompiled F# recursive code
(using DotPeek), in
C#:
public static void RunTrafficLight<a>(FSharpFunc<Tuple<ClasslessTrafficLight.LightStateEnum, a>,
Tuple<ClasslessTrafficLight.LightStateEnum, a>> checkLightState,
ClasslessTrafficLight.LightStateEnum currentState_0,
a currentState_1,
int n)
{
while (true)
{
Tuple<ClasslessTrafficLight.LightStateEnum, a> func = new Tuple<ClasslessTrafficLight.LightStateEnum, a>(currentState_0, currentState_1);
switch (n)
{
case 0:
goto label_1;
default:
Tuple<ClasslessTrafficLight.LightStateEnum, a> tuple1 = checkLightState.Invoke(func);
ExtraTopLevelOperators.PrintFormatLine<FSharpFunc<string, Unit>>((PrintfFormat<FSharpFunc<string, Unit>, TextWriter, Unit, Unit>)
new PrintfFormat<FSharpFunc<string, Unit>, TextWriter, Unit, Unit, string>("%s"))
.Invoke(ClasslessTrafficLight.ParseLightState(Operators.Fst<ClasslessTrafficLight.LightStateEnum, a>(tuple1)));
Thread.Sleep(1000);
FSharpFunc<Tuple<ClasslessTrafficLight.LightStateEnum, a>, Tuple<ClasslessTrafficLight.LightStateEnum, a>> fsharpFunc = checkLightState;
Tuple<ClasslessTrafficLight.LightStateEnum, a> tuple2 = tuple1;
ClasslessTrafficLight.LightStateEnum lightStateEnum = tuple2.Item1;
a a = tuple2.Item2;
--n;
currentState_1 = a;
currentState_0 = lightStateEnum;
checkLightState = fsharpFunc;
continue;
}
}
label_1:;
}
Note that the recursive call is implemented as an infinite loop which
case 0 breaks out of. Also note how the parameters, which would normally
be pushed onto the stack for each recursive call, are handled by the parameters
currentState_0 and currentState_1. It is interesting to note that the IL
code creates mutable variables, but then again, one would expect this as
microprocessors have very imperative-based instruction sets, mutable registers,
and of course work with mutable memory. We certainly do
not want to implement a recursive call as true recursion in the IL or assembly
language--imagine the amount of memory needed and the performance degradation of
pushing values onto the stack if we were to recursively process a list of 10
million items!
Lists illustrate another way to think about iteration using recursion.
One of the common examples one almost immediately encounters when learning about
F# (or any FP language) deals with lists. Certainly, what I was left with
was wondering: why would I want to work with lists recursively rather than
iteratively? The answer to this is fairly straightforward.
Consider how we might want to initialize a list of traffic lights in C#:
static List<TrafficLightBase> InitializeList()
{
List<TrafficLightBase> lights = new List<TrafficLightBase>();
for (int i = 0; i < 20; i++)
{
lights.Add(new ThreeColorTrafficLight());
}
return lights;
}
And how we iterate through the list:
static void IterateLights(List<TrafficLightBase> lights)
{
foreach (TrafficLightBase tl in lights)
{
tl.CheckState();
}
}
This looks very clean from an imperative perspective, but from an FP
perspective, the thinking needs to be different.
Granted, except for the first issue (mutable lists), this reads like I'm
trying to invent problems. Even the mutable list isn't really an issue as
long as we're not manipulating it from multiple threads. However, what
I'm illustrating here is not that one way is better than the other, but that,
when you are working with lists in FP, you have to literally think differently.
Let's look at the above code implemented in F#. The first thing we
change is that we're not instantiating a class - rather, we're creating a list
of states, which is just one thing that a class provides us:
let rec InitializeTrafficLights trafficLightType iterations lights =
match iterations with
| 0 -> lights
| _ -> InitializeTrafficLights trafficLightType (iterations-1) (InitialEvent trafficLightType :: lights)
This creates a list of "iterations" items using a recursive function call,
and is typical of list operations, a match statement describes what is done when
all the iterations are processed (the "0" case) as compared to when there are
some iterations left (the "_", or "any", case). To call this function in
C#, we would write:
var list = ClasslessTrafficLight.InitializeTrafficLights(
ClasslessTrafficLight.TrafficLightType.Color2,
20,
FSharpList<Tuple<ClasslessTrafficLight.LightStateEnum, DateTime>>.Empty);
Notice that we need to pass in an empty list. Also note that the above
F# code is only one option for initializing a list. Another option is
(similar to the C# code):
let InitializeTrafficLights2 trafficLightType iterations = [for i in 1 .. iterations -> InitialEvent trafficLightType]
Both examples illustrate initializing an immutable list. In the first
example, we recursively create a new list by prepending a "work item" to the
original list. In the second F# example, we initialize the list with a
"for" loop. Because the more common operation on lists involves working
with the head and tail of the list, let's look at the generated code for the
recursive function:
public static FSharpList<Tuple<ClasslessTrafficLight.LightStateEnum, DateTime>> InitializeTrafficLights(
ClasslessTrafficLight.TrafficLightType trafficLightType,
int iterations,
FSharpList<Tuple<ClasslessTrafficLight.LightStateEnum, DateTime>> lights)
{
while (true)
{
switch (iterations)
{
case 0:
goto label_1;
default:
ClasslessTrafficLight.TrafficLightType trafficLightType1 = trafficLightType;
int num = iterations - 1;
lights = FSharpList<Tuple<ClasslessTrafficLight.LightStateEnum, DateTime>>.Cons(ClasslessTrafficLight.InitialEvent(trafficLightType), lights);
iterations = num;
trafficLightType = trafficLightType1;
continue;
}
}
label_1:
return lights;
}
We note a couple of things about this method: the prepend is compiled to a call
to FSharpList.Cons, and the recursion has again been converted into a loop.
F# lists are immutable linked lists. "The important thing to understand
is that cons executes in constant, O(1), time. To join an element to an
immutable linked list all you need to do is put that value in a list node and
set its 'next list node' to be the first element in the existing list.
Everything 'after' the new node is none the wiser."7 This
"trick" ensures that the original list has not changed; we are simply creating a
new list that consists of the new item node prepended (its "next node" points to
the first item of the old list) to the existing list. If we were to append the item to an existing list, we would need to copy the
original list and link the last node to the new item, requiring O(n) operations.
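The O(1) cons "trick" can be shown in a few lines of Python (a sketch of my own, modeling each list node as a (head, rest) pair): prepending builds exactly one new node whose tail IS the old list, so the old list is shared, not copied.

```python
def cons(head, rest):
    """One new node; 'rest' is the entire existing list, shared as-is."""
    return (head, rest)

empty = None
xs = cons(1, cons(2, cons(3, empty)))   # the list 1 -> 2 -> 3
ys = cons(0, xs)                        # O(1): one node, tail shared with xs

assert ys[0] == 0
assert ys[1] is xs                      # structural sharing: xs is unchanged
```

Appending, by contrast, would force copying every node of `xs` so that its last node could point somewhere new - the O(n) case described above.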
Microsoft provides this example8 as a true recursive call:
let rec sum list =
match list with
| [] -> 0
| head :: tail -> head + sum tail
and indeed, when we use DotPeek to inspect this code, we see that it is truly
recursive:
public static int sum(FSharpList<int> list)
{
FSharpList<int> fsharpList1 = list;
if (fsharpList1.get_TailOrNull() == null)
return 0;
FSharpList<int> fsharpList2 = fsharpList1;
FSharpList<int> tailOrNull = fsharpList2.get_TailOrNull();
return fsharpList2.get_HeadOrDefault() + ClasslessTrafficLight.sum(tailOrNull);
}
To avoid this true recursion (and the stack growth that comes with it), we use a
technique called "tail recursion" by employing an accumulator:
let rec sum2 list acc =
match list with
| [] -> acc
| head :: tail -> sum2 tail (head + acc)
and we see that indeed, the recursion has now been converted to iteration in
the IL:
public static int sum2(FSharpList<int> list, int acc)
{
while (true)
{
FSharpList<int> fsharpList1 = list;
if (fsharpList1.get_TailOrNull() != null)
{
FSharpList<int> fsharpList2 = fsharpList1;
FSharpList<int> tailOrNull = fsharpList2.get_TailOrNull();
int headOrDefault = fsharpList2.get_HeadOrDefault();
FSharpList<int> fsharpList3 = tailOrNull;
acc = headOrDefault + acc;
list = fsharpList3;
}
else
break;
}
return acc;
}
Notice, though, the parentheses in this line of the F# code:
| head :: tail -> sum2 tail (head + acc)
The parentheses around (head + acc) tell the compiler to evaluate the
addition first, so that it is this computed value that is passed as the second
parameter to sum2. If we omit the parentheses, we are back to a true recursive
function!
public static int sum3(FSharpList<int> list, int acc)
{
FSharpList<int> fsharpList1 = list;
if (fsharpList1.get_TailOrNull() == null)
return acc;
FSharpList<int> fsharpList2 = fsharpList1;
return ClasslessTrafficLight.sum3(fsharpList2.get_TailOrNull(), fsharpList2.get_HeadOrDefault()) + acc;
}
Why is this? Essentially, from the compiler's perspective, without the
parentheses, the call to sum3 is made with the list's tail and head as its
two arguments and, when sum3 returns, the accumulator value is added to the
return value of sum3. Note the parentheses in the call:
return ClasslessTrafficLight.sum3(fsharpList2.get_TailOrNull(), fsharpList2.get_HeadOrDefault()) + acc;
"acc" is added to the return of sum3 rather than the sum being
passed in to sum3. Therefore, it is not enough to say "we are
using an accumulator" - you must use the accumulator correctly!
Another way to look at this is illustrated by explicitly creating an Add
function:
let Add a b = a + b
let rec sum4 list acc =
match list with
| [] -> acc
| head :: tail -> sum4 tail Add head acc
This code will not compile. It generates the error: "Type mismatch.
Expecting a 'a -> 'b -> 'c but given a 'c. The resulting type would be infinite
when unifying ''a' and ''b -> 'c -> 'a'." This is a very confusing error
message, but we can basically reduce it to: "sum4 tail Add" doesn't have the same
signature as the definition. If we write the code like this:
let rec sum4 list acc =
match list with
| [] -> acc
| head :: tail -> sum4 tail (Add head acc)
explicitly making the result of the computation "Add head acc" the
second parameter, then everything works great.
Conversely, consider this code:
let rec Accumulate list f acc =
match list with
| [] -> acc
| head :: tail -> Accumulate tail f (f head acc)
let Adder = Accumulate [1;2;3] Add 0
Here, we explicitly pass in the accumulator function (Add in this case) and
we can see how the accumulator function is passed in as an argument to the
Accumulate function and is itself used in tail recursion form to perform the
accumulation operation.
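The Accumulate function is essentially a hand-rolled fold. The same shape can be sketched in Python (my own names; Python's built-in spelling of this idea is functools.reduce), with the accumulator function passed in as an argument just as Add is above.

```python
from functools import reduce

def accumulate(items, f, acc):
    """Hand-rolled fold: the loop is what the compiler makes of the tail recursion."""
    for head in items:
        acc = f(head, acc)   # feed each result into the next step
    return acc

add = lambda a, b: a + b
assert accumulate([1, 2, 3], add, 0) == 6
# The library fold does the same thing:
assert reduce(lambda acc, n: acc + n, [1, 2, 3], 0) == 6
```

Passing the accumulator function in, rather than hard-coding it, is what makes the fold reusable for any accumulation operation.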
Basic list iteration, without using all the complicated recursive functions
described above, can be achieved rather simply using the "iter" method of the F#
List class, but it's easy to do it the wrong way. First, let's look at
this code, which doesn't compile:
let AddMe =
let acc = 0
List.iter(fun x -> acc <- acc + x) [1;2;3]
acc
The astute reader will figure out that the reason it doesn't compile is that
"acc" is not mutable. So let's fix that. This code, however, also
does not compile:
let AddMe =
let mutable acc = 0
List.iter(fun x -> acc <- acc + x) [1;2;3]
acc
The compiler gives us a very interesting error:
"The mutable variable 'acc' is used in an invalid way. Mutable variables
cannot be captured by closures. Consider eliminating this use of mutation or
using a heap-allocated mutable reference cell via 'ref' and '!'."
Well yes, we already know how to avoid mutation by using recursion. But
what is this thing "closure"? According to Microsoft9:
"Closures are local functions that are generated by certain F# expressions,
such as lambda expressions, sequence expressions, computation expressions, and
curried functions that use partially applied arguments. The closures generated
by these expressions are stored for later evaluation. This process is not
compatible with mutable variables. Therefore, if you need mutable state in such
an expression, you have to use reference cells."
which means that we have to write the code like this:
let AddMe =
let acc = ref 0
List.iter(fun x -> acc := !acc + x) [1;2;3]
acc
and if we run this in F# Interactive, we get the desired result:
val AddMe : int ref = {contents = 6;}
FP Thinking #15: The List.iter method should not be used for
accumulator operations. This is different from our thinking in C#,
especially with regards to extension methods on lists which provide nice ways of
iterating through a list and performing some sort of an accumulation operation.
If you're not already familiar with LINQ in .NET OO languages like C#, you
should become so now, because many of
the operations in LINQ are similar to those provided in F# (and FP languages in general),
and those operations are more appropriate than simple list iteration.
What I mean by the above leading thought is: poke around and look for a
better solution, and you will discover the List.fold method (in LINQ, it's the
Aggregate method). For example, in C#, you can write this without any
problem:
static int ListTest()
{
int acc=0;
new List<int>() { 1, 2, 3 }.ForEach(t => acc = acc + t);
return acc;
}
But in FP, instead of doing the nasty iteration with
reference cells shown earlier, we can write an elegant:
let AddMe2 = List.fold(fun acc i -> acc + i) 0 [1;2;3]
and in C#, it could be written as:
static int ListAggregate()
{
return new List<int>() { 1, 2, 3 }.Aggregate((acc, n) => acc + n);
}
Of course, one would normally just use the Sum extension method.
Notice how the F# List.fold provides an accumulator for us! Here we have a
nice function that takes the function to apply, the initial accumulator value (0 in our case), and the
list; it iterates over the list, applying the function to each item and
feeding the result into the computation for the next iteration.
One of the nice things about FP is also something that will bite you: type
inference. Consider this code, in which we define two record types:
type Rec1 =
{
Count : int;
}
type Rec2 =
{
Count : int;
Sum : int;
}
let Foo = {Count = 1}
This code generates the error "No assignment given for field 'Sum' of type
'ClasslessTrafficLight.Rec2'." Wait, are you telling me that the
compiler is too stupid to figure out that, based on the fact that I'm only
initializing Count, I want an instance of Rec1? Yes, that is exactly what
I am telling you. And this can lead to some rather confusing and difficult
to understand errors with functional programming. In fact, you can create
code that will compile and run just fine but will give you the wrong type!
Consider this code:
type Rec1 =
{
Count : int;
}
type Rec2 =
{
Count : int;
}
let Foo = {Count = 1}
If we throw this into FSI (F# Interactive), we get:
val Foo : Rec2 = {Count = 1;}
Foo evaluates as a type Rec2! This may not be what we want, and we
certainly do not get a compiler error indicating that Rec1 and Rec2 are exactly
the same type! To resolve this, you would have to explicitly define Foo's
type:
let Foo : Rec1 = {Count = 1}
or, give unique names to the record fields:
type Rec1 =
{
CountA : int;
}
type Rec2 =
{
CountB : int;
}
let Rec1 = {CountA = 1}
FP Thinking #16: When record types have overlapping field names, the type
inference engine will resolve a record expression to the most recently declared
record type whose fields match. It will
make your life a lot easier if you give fields unique names to help the type
inference engine. This also makes the code more readable for a person,
because they can easily figure out which record type is being initialized or
used, even when it should be obvious.
John Hughes has written an excellent paper on
Why
Functional Programming Matters. In the introduction, he makes a very
important statement:
"The special characteristics and advantages of functional programming are
often summed up more or less as follows. Functional programs contain no
assignment statements, so variables, once given a value, never change. More
generally, functional programs contain no side-effects at all. A function call
can have no effect other than to compute its result. This eliminates a major
source of bugs, and also makes the order of execution irrelevant — since no
side-effect can change an expression’s value, it can be evaluated at any time.
This relieves the programmer of the burden of prescribing the flow of control.
Since expressions can be evaluated at any time, one can freely replace variables
by
their values and vice versa — that is, programs are “referentially transparent”.
This freedom helps make functional programs more tractable mathematically than
their conventional counterparts.
Such a catalogue of “advantages” is all very well, but one must not be
surprised if outsiders don’t take it too seriously. It says a lot about what
functional programming isn’t (it has no assignment, no side effects, no flow of
control) but not much about what it is. The functional programmer sounds rather
like a medieval monk, denying himself the pleasures of life in the hope that it
will make him virtuous. To those more interested in material benefits, these
“advantages” are totally unconvincing."
Obviously, because F# supports mutable variables, it contains assignment
statements, so side-effects are just as easily accomplished with F# as they are
with imperative / OO languages. Hence my strong recommendation that you
should avoid mutable entities when programming in F#. Also, flow control
is important, especially when dealing with user interactions. It
actually becomes a problem in FP when you have an I/O stream that you want to
present in a particular order. Realistically, "the burden of flow of
control" is something that we have to deal with whenever the program interfaces
with the outside world. However, we should also "loosen" our thinking with
regards to control flow. For example, in an imperative language, we will
typically get all the data into some sort of container and then hand off that
container to a method that processes it. In FP thinking, we might pass to
the processing function the function to load the data. However, this
probably causes some other unintentional side-effects of the application itself:
what if the data needs to be processed by two different algorithms? We
certainly don't want to load the data twice - this may be very inefficient!
Hence, and quite realistically, the author points out that "outsiders don't take
[those advantages] too seriously."
The paper takes the position that instead of these usually touted benefits,
the importance of functional programming is that it provides improved modularity
and two completely new ways of gluing together programs that enhances
modularity, and that this enhanced modularity is the main benefit of functional
programming. These two types of glue can be described as:
I don't particularly buy into this reasoning very much, as it reminds me of
the reasons touted for why object oriented programming is better: reusability.
Except for general purpose operations, I don't find that classes in the an OO
paradigm are particularly re-usable, and in my (albeit limited) experience with
F#, most of the functions that I write solve very domain-specific problems and
are not particularly re-usable either. Still, John Hughes' paper is worth
reading.
Slava Akhmechet, in his excellent blog entry
Functional Programming For
The Rest Of Us, writes:
."
Eugene Wallingford, in his also excellent paper
Functional Programming Patterns and Their Role in Instruction, writes:
"Functional programming is a powerful style for writing programs, yet many
students never fully appreciate it. After programming in an imperative style,
where state and state changes are central, the idioms of the functional style
can feel uncomfortable. Further, many functional programming ideas involve
abstractions beyond what other styles allow, and often students do not fully
understand the reasons for using them. ...
Software patterns began as an industry phenomenon, an attempt to document bits
of working knowledge that go beyond what developers learned in their academic
study. They have become a standard educational device both in industry and
in object-oriented (OO) design and programming instruction at universities.
Given their utility in industry and in OO instruction, patterns offer a
promising approach to help students and faculty learn to write functional
programs. Such patterns will document the common techniques and program
structures used by functional programmers, and pattern languages will document
the process of using functional programming patterns in the construction of
larger programs—ideally, complete programs that solve problems of real
interest."
I would concur with this, and hopefully in this article have also illustrated
that FP patterns do exist and that they are considerably different than OO
patterns. There is some overlap (for example, using partial functions
instead of inheritance in a factory method) and there are new patterns to be
discovered in FP.
Another good read on FP patterns in
Brian's Inside F#.
Tomas Petricek writes
here:
"...there are some aspects of functional programming that make testing of
functional programs a lot easier.
I would have to add that immutability significantly reduces the complexity of
unit testing. Furthermore, by teasing apart a typical OO class into its
constituents - state, computations, and state change methods - unit testing FP
code is also considerably easier. And if unit testing is simplified, then
the likelihood of bugs is also reduced, in my opinion.
In my experience, programming in F#:
Action<>
Func<>
Most of these "benefits" can be fairly easily accomplished in OO /
imperative
code as well. In fact, my experiences with FP have resulted in my being a
better OO programmer, and my learning "FP thinking" has improved my software
architecture abilities as well. Overall however, FP is, in my opinion, a
very useful tool that solves certain architecture / programming problems better
than OO paradigms, and other problems it does not do so well.
Functional programming requires a different way of thinking. What I
have attempted to explore here are some fundamental concepts regarding core
concepts of FP: immutability, recursion / iteration, and list operations, and
how they require a fundamental different way of thinking in order to become a
successful FP programmer. There's a lot more that can be said on this
topic - I feel in many ways that I have just skimmed the surface. I have
also probably made a number of errors that I hope experienced FP'ers will point
out for the benefit of all!
FP Thinking, #1: In functional programming, we must embrace the idea and
its implications that, once something is initialized, it cannot be changed.
FP Thinking, #2: With FP, there are no side-effects because a state change is
represented by a new instance. So, stop thinking in terms of changing
state in an existing instance, and start thinking in terms of "this state change
means I need a new instance."
FP Thinking #5: Learn how to think about structures (OO classes, FP
records) as fully initialized and immutable entities. Figure out how to
think in terms of fully initialized implementations. Learn to think purely
in terms of initialization and computation. Replace "assignment" with "new
instance.".
FP Thinking #15: The List.iter method should not be used for accumulator
operations. This is different from our thinking in C#, especially with
regards to extension methods, which provide nice ways of iterating through a
list and performing some sort of a accumulation operation. If you're not
already familiar with LINQ, you should do so now, because many of the operations
in LINQ are available in F# (and FP languages in general) that are more
appropriate to use than simple list iteration.
1 -
2 -
3 -
4 -
5-
6 -
7 -
8 -
9 -
DotPeek, a free .NET decompiler:
This article, along with any associated source code and files, is licensed under The Code Project Open License (CPOL)
Amir Mohammad Nasrollahi wrote:Excellent tips and explanation!
Fernando A. Gomez F. wrote:Much more understandable than Syme's books
Chris Woodward wrote:The only thing I'd add is more about monads
Sacha Barber wrote:Going to have to learn F# I thinks. Seems it has its place
einy wrote:Have you tried Nemerle?
Paulo Zemek wrote:As you've shown, when we do recursive calls to avoid a mutable variable, the compiler ends-up doing a kind of jump/loop and, in fact, it does mutate a variable.
Paulo Zemek wrote:It has really reused the variables?
Paulo Zemek wrote:Personally, I like the idea of immutable instances for many scenarios, but local variables are not one of them.
*pre-emptive celebratory nipple tassle jiggle* - Sean Ewington
"Mind bleach! Send me mind bleach!" - Nagy Vilmos
Pete O'Hanlon wrote:his opinion was that functional programming was a niche that would never really go anywhere because it was just so difficult to write full applications with it
Nemanja Trifunovic wrote:it would be better to use word "imperative" instead of "procedural" for the opposite of functional.
General News Suggestion Question Bug Answer Joke Praise Rant Admin
Use Ctrl+Left/Right to switch messages, Ctrl+Up/Down to switch threads, Ctrl+Shift+Left/Right to switch pages. | http://www.codeproject.com/Articles/462767/How-to-Think-Like-a-Functional-Programmer | CC-MAIN-2016-26 | refinedweb | 9,136 | 52.19 |
WMI Isn't Working!
Troubleshooting Problems with WMI Scripts and the WMI Service
Important
Before you begin repairing suspected problems with the WMI service it is strongly recommended that you run the new WMI Diagnosis Utility. You should always run this utility before making any changes to the WMI service.
WMI is a resilient piece of software. For example, suppose the WMI service is stopped, yet you still try running a WMI script. What do you think will happen? Will the script fail? Will the computer lock up? Will the very fabric of space and time be ripped apart? Not exactly. Instead, the WMI service will probably just restart itself and run your script. Most likely you will never even know that the service had been stopped.
Is there a lesson in that for script writers? You bet there is: if you are having problems with a WMI script, don’t jump to conclusions and assume that WMI is broken. If you have a script that doesn’t work it’s usually due to a problem with the script and not due to a problem with WMI. Admittedly, you could encounter a catastrophic failure of WMI itself, a failure that requires you to re-register all the WMI components or to rebuild the WMI repository. However, it’s far more likely that any problems you are experiencing are the result of doing something like typing in an incorrect namespace or trying to connect to a remote computer where you do not have local Administrator rights.
This document (developed in conjunction with the WMI team at Microsoft) is designed to help you troubleshoot problems with WMI scripts and the WMI service. Although the focus here is on scripting, the same troubleshooting information can be applied to other WMI consumers, such as Systems Management Server (SMS). Scenarios – and the error codes they produce – will often be the same regardless of whether you encounter problems using a script, the WMIC command line, a compiled application (such as SMS) that calls WMI, etc.
The topics in this document are listed roughly in order; for example, the last thing you should do is rebuild the entire Repository. However, it’s important to note that problems with WMI scripts and/or the WMI service can’t always be resolved on a step-by-step basis. Because of that, we encourage you to read the entire document and familiarize yourself with both the things that can go wrong with WMI scripts as well with steps you can take to try and correct those problems.
We also encourage you to pay close attention to any error messages you receive when trying to run your script; those error messages are described in the WMI SDK and can often tip you off as to why the script isn’t working. In fact, if you know which error message is being generated by your script or if your problem seems to fit one of the more common scenarios, then you can jump directly to the appropriate topic listed below (although we still recommend you read the entire document).
On This Page
- My script doesn’t return any data
- I can’t connect to a remote computer
- I’m getting an 0x8004100E (“Invalid Namespace”) error
- I’m getting an 0x80041010 (“Invalid Class”) error
- I’m getting an error (0x800A01B6) that says a property or method is not supported
- I’ve verified that the namespace, class, and property names are correct, yet my script still doesn’t work
- I’m getting an 0x80041013 (“Provider not found”) or an 0x80041014 (“Component failed to initialize”) error
- I’m getting an error regarding provider registration
- I have a script that I know is valid, the WMI service is running, and I’ve re-registered all the .dlls, yet my script still doesn’t work
- I’ve rebuilt the WMI Repository and my script still doesn’t work
My script doesn’t return any data
To begin with, don’t panic: after all, it’s possible for a WMI script to work perfectly at yet not return any data. How? Well, consider the following script, which returns the name of each tape drive installed on a computer:
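The script itself did not survive in this copy of the article; a minimal sketch of what it would look like, in the standard WMI VBScript pattern used throughout this document, is:

```vbscript
' Connect to the WMI service in the root\cimv2 namespace on the local computer
strComputer = "."
Set objWMIService = GetObject("winmgmts:\\" & strComputer & "\root\cimv2")

' Retrieve every instance of the Win32_TapeDrive class
Set colItems = objWMIService.ExecQuery("Select * From Win32_TapeDrive")

' Echo the name of each tape drive found
For Each objItem in colItems
    Wscript.Echo objItem.Name
Next
```

Run it with Cscript.exe from a command prompt; on a machine with no tape drives it completes without printing anything at all.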
Suppose you don’t have any tape drives installed on the computer. In that case the script will return no information whatsoever. That’s to be expected: after all, the script can’t return the names of the tape drives installed on the computer if there aren’t any tape drives installed on the computer.
One way to check for this situation – the script is working correctly, but there is no data for it to return – is to retrieve the value of the Count property. Each collection returned by a WMI query includes a Count property that tells you the number of items in that collection; this is true even if there are no items in the collection (in that case, the Count will be reported as 0). For example, here’s a script that reports the number of tape drives installed on a computer (that is, the number of Win32_TapeDrive instances):
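That script is also missing from this copy; a sketch of what it would look like, echoing only the Count property of the returned collection, is:

```vbscript
strComputer = "."
Set objWMIService = GetObject("winmgmts:\\" & strComputer & "\root\cimv2")
Set colItems = objWMIService.ExecQuery("Select * From Win32_TapeDrive")

' Count reports the number of items in the collection, even when that number is 0
Wscript.Echo colItems.Count
```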
If the Count comes back as 0 then your script is probably working fine. If the Count is greater than 0 (meaning you have at least one tape drive installed), then you have a problem. In that case, you might have written a WQL query so specific that nothing meets the criteria. For example, this script returns information about all the services named Alerter that are currently running:
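The query described would look something like this (a reconstruction; the original code block is elided):

```vbscript
strComputer = "."
Set objWMIService = GetObject("winmgmts:\\" & strComputer & "\root\cimv2")

' Both conditions must be met: the service is named Alerter AND its state is Running
Set colServices = objWMIService.ExecQuery _
    ("Select * From Win32_Service Where Name = 'Alerter' And State = 'Running'")

For Each objService in colServices
    Wscript.Echo objService.Name & ": " & objService.State
Next
```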
Needless to say, if the Alerter service on a computer is stopped then this script will not return any data; that’s because the service must meet both criteria: it must have the Name Alerter and a State of Running.
If you are having difficulty getting a script to return data, you might begin by simplifying your WQL query. For example, this script removes the Where clause and simply reports back the Name and State for each service installed on a computer:
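A sketch of that simplified script, with the Where clause removed:

```vbscript
strComputer = "."
Set objWMIService = GetObject("winmgmts:\\" & strComputer & "\root\cimv2")

' No Where clause: return every installed service, whatever its name or state
Set colServices = objWMIService.ExecQuery("Select * From Win32_Service")

For Each objService in colServices
    Wscript.Echo objService.Name & ": " & objService.State
Next
```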
By running this script, you can check the name and state of the Alerter service and then, if necessary, adjust your Where clause and try again. Because all Windows computers (well, except for Windows 98 computers) must have at least some services installed the preceding script should always return data of some kind. If it does not, then you have a more serious problem and should continue reading.
I can’t connect to a remote computer
Oftentimes you will have a script that works great on your local computer; however, the minute you point that script towards a remote machine you get an error message similar to this: “The remote server machine does not exist or is unavailable”.
If you get a message like this one it is possible that the WMI service is experiencing problems on either your local computer or on the remote computer. However, that is usually not the case. Instead, problems connecting to a remote computer typically revolve around one of the following scenarios:
The remote computer is not online. Sometimes the simplest explanation is the best: if a computer is offline you won’t be able to connect to it using WMI (or using much of anything, for that matter). If you are getting a “Remote server machine does not exist or is unavailable” error the first thing you should do is verify that the computer is actually online; you can do this by trying to ping the remote machine, or by trying to connect to it using a command-line or GUI tool. Difficulties with the network tend to be more common than difficulties with the WMI service; because of that, you should investigate problems with the network before investigating problems with WMI.
You do not have local Administrator rights on the remote computer. As a regular user (that is, a non-Administrator) you have limited ability to run WMI scripts on the local computer; typically you will be able to retrieve information using WMI but will not be able to use WMI to change settings. This is not the case when connecting to a remote computer: in order to use WMI remotely, you must have local Administrator rights on the remote machine. If you do not, access to that computer will be denied.
Ok, so maybe that isn’t entirely true: it’s possible for you to be granted access to a WMI namespace even if you aren’t a local Administrator. (This is rarely done, but it’s possible.) You can check namespace security by right-clicking the My Computer icon and selecting Manage. In Computer Management, expand Services and Applications, right-click WMI Control and then click Properties. Information about namespace security can be found on the Security tab in the WMI Control Properties dialog box.
You can find additional information about setting WMI security in this Knowledge Base article.
A firewall is blocking access to the remote computer. WMI uses the DCOM (Distributed COM) and RPC (Remote Procedure Call) protocols to traverse the network. By default, many firewalls block DCOM and RPC traffic; if your firewall is blocking these protocols then your script will fail. For example, the Windows Firewall found in Microsoft Windows XP Service Pack 2 is configured to automatically block all unsolicited network traffic, including DCOM and WMI: in its default configuration, the Windows Firewall will reject an incoming WMI request and give you a “Remote server machine does not exist or is unavailable” error.
If you are sure that a computer is online and you know that you have local administrator rights on that computer, then problems getting past a firewall often explain why your script is failing. We can’t tell you how to configure your firewall to permit DCOM and RPC traffic; that obviously depends on the type of firewall you have. However, if you suspect that the Windows Firewall is to blame you can find information about managing and configuring the firewall settings in the article I Married Bigfoot! Oh, and Windows Service Pack 2 Made My Computer Disappear. The WMI SDK also has additional information about connecting to the WMI service through the Windows firewall.
The version of WMI on your local computer is not compatible with the version of WMI on the remote computer. Unfortunately, not all versions of WMI are created equal. If you are running Windows XP or Windows Server 2003 (and assuming you have the appropriate Administrator rights and have your firewalls configured properly), you should be able to connect to any remote computer. This is not necessarily the case with older versions of Windows. For example, suppose you have Windows 2000 with Service Pack 1 installed. With this setup you will not be able to connect to a remote computer running Windows Server 2003. Instead, you must have Service Pack 2 (or a later service pack) installed in order to connect to remote machines running Windows Server 2003.
For more information about connecting to machines running different versions of Windows, see the WMI SDK topic Connecting Between Different Operating Systems.
If you are having problems connecting to a remote computer one of the first things you should do is determine whether the problem lies with the script or with making the connection. To do that, go to the remote computer and try running the script locally (that is, start the script directly from that remote machine). If the script works your problem likely lies in making a connection and could be due to a firewall or DCOM setting. If the script does not work, that suggests a problem with the WMI service on the remote computer. In that case, try to get the script to run locally before attempting another remote connection.
You can find a fairly extensive discussion of WMI and remote machines in the WMI SDK topic Connecting to WMI on a Remote Computer.
I’m getting an 0x8004100E (“Invalid Namespace”) error
Typically invalid namespace errors are due to one of two problems. For one, you might simply have misspelled the namespace name. For example, this script tries to connect to the root\cim2v namespace; because there is no such namespace, the script fails with an 0x8004100E error:
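The failing script is not reproduced in this copy; a sketch of it would be:

```vbscript
strComputer = "."
' root\cim2v is a misspelling of root\cimv2, so GetObject fails with 0x8004100E
Set objWMIService = GetObject("winmgmts:\\" & strComputer & "\root\cim2v")
```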
In this case, changing root\cim2v to the correct name – root\cimv2 – fixes the problem.
Alternatively, you might be trying to connect to a namespace that, while valid on some computers, does not exist on other computers. For example, the root\RSOP namespace can be found on Windows XP and Windows Server 2003, but not on Windows 2000; that means this script will fail on a machine running Windows 2000:
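A sketch of such a script (the original code block is elided here):

```vbscript
strComputer = "."
' root\RSOP exists on Windows XP and Windows Server 2003 but not on Windows 2000,
' so this connection attempt fails with 0x8004100E on a Windows 2000 machine
Set objWMIService = GetObject("winmgmts:\\" & strComputer & "\root\RSOP")
```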
Likewise, certain applications create their own namespaces, custom namespaces that are not part of the default installation of the operating system. If you are running the SMS client on a computer then that computer will have the root\SMSDm namespace; if a computer does not have SMS installed, however, it will not have that particular namespace. Differences in the operating system and differences in installed applications and hardware often explain why a script runs fine on Computer A yet fails with an Invalid Namespace error on Computer B.
If you receive an Invalid Namespace error you should rule out these two possibilities (a misspelled namespace name or a namespace that doesn’t exist on a particular computer) before jumping to more drastic conclusions. One easy way to verify both the existence and the correct spelling of a namespace is to use Scriptomatic 2.0. Although the Scriptomatic utility was designed as a way to help you write simple WMI scripts, the tool can also be used to diagnose problems with the WMI service.
After downloading and installing the Scriptomatic, start the utility and wait for the WMI namespace information to load. At that point click the drop-down list labeled WMI Namespace:
This drop-down list shows all of the WMI namespaces found on the computer, making it easy for you to verify that: 1) the namespace actually exists on the machine, and 2) the namespace name has been spelled correctly.
But what if the namespace does exist, and what if you did spell the name correctly? In that case, you might try this script, which attempts to bind to the root\default namespace:
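A sketch of that test script:

```vbscript
strComputer = "."
' If this GetObject call completes without error, the WMI service itself
' is accepting connections to root\default
Set objWMIService = GetObject("winmgmts:\\" & strComputer & "\root\default")
Wscript.Echo "Successfully connected to root\default."
```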
If this script succeeds, that suggests that the WMI service is working correctly; in that case, it might be just one particular namespace that is experiencing problems. If so, you might need to recompile the MOF file for the namespace in question; for more information on recompiling MOF files see I’m getting an error regarding provider registration. If the connection to root\default fails, that suggests more serious problems. In that case, try stopping and restarting the WMI service (see I’ve verified that the namespace, class, and property names are correct, yet my script still doesn’t work) and then proceed from there.
You can also use the following command from the command prompt to try connecting to the root\default namespace:
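The exact command is elided from this copy; judging from the interactive-mode output described below, it was most likely a WMIC invocation that specifies only the namespace, which drops WMIC into its interactive prompt:

```bat
wmic /namespace:\\root\default
```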
If you receive the message “Interactive mode set” then you were able to connect to the namespace. If you receive an “Invalid namespace” message that means the connection failed.
I’m getting an 0x80041010 (“Invalid Class”) error
An error message of 0x80041010 means that you are trying to reference a WMI class that does not exist. This error typically occurs when:
You misspell the name of a class. For example, you try to connect to a class named Win32_Services (with an s on the end) when the actual class name is Win32_Service (without an s on the end).
You reference the wrong namespace. Oftentimes script writers connect to the root\cimv2 namespace and then try to access the StdRegProv class. Unfortunately, StdRegProv actually resides in the root\default namespace.
You try to access a class that is not supported by a particular operating system. For example, the SystemRestore class (found in the root\default namespace) is supported only on Windows XP. If you try to access that class on, say, a computer running Windows 2000 you will probably get an “Invalid Class” error.
Note. Instead of an error 0x80041010, you might get error 0x80041002 (“Object could not be found”) or error 0x80041006 (“Insufficient memory”) when trying to connect to a nonexistent class.
If you are getting an error regarding an invalid class, then once again it’s Scriptomatic to the rescue. After downloading and installing Scriptomatic, start the utility and wait for the WMI namespace information to load. When it does, click the drop-down labeled WMI Namespace and select the appropriate namespace. (For example, if your class is ostensibly in the root\wmi namespace then select root\wmi.) After the class information loads, click the WMI Class drop-down and verify whether or not the class exists and, if it does, whether or not you have spelled the name correctly:
If you do not see the class listed, then it either does not exist or is actually housed in a different namespace. You can select alternate WMI namespaces from the drop-down list as a way to browse through the entire WMI Repository and see if the class can be found.
If the class exists in the specified namespace (and if you spelled both the namespace and class names correctly) you might be experiencing more serious problems with the WMI service. In that case, you might try this script, which attempts to bind to the Win32_Process class:
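A sketch of that test script, binding directly to the class definition rather than querying for instances:

```vbscript
strComputer = "."
' The "namespace:class" moniker syntax binds to the Win32_Process class itself
Set objClass = GetObject("winmgmts:\\" & strComputer & "\root\cimv2:Win32_Process")
Wscript.Echo "Successfully bound to Win32_Process."
```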
If this script succeeds, that suggests that the WMI service is working correctly; in that case, it might be that just this one particular class is experiencing problems. If so, you might need to recompile the MOF file for the class in question. For more information on recompiling MOF files see I’m getting an error regarding provider registration. If the connection to the Win32_Process class fails, that suggests more serious problems. In that case, try stopping and restarting the WMI service (see I’ve verified that the namespace, class, and property names are correct, yet my script still doesn’t work) and then proceed from there.
You can also run the following command from the command prompt to try connecting to the Win32_Process class:
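The command itself is elided in this copy; a WMIC command that produces the described listing of running processes is:

```bat
wmic process list brief
```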
If the connection succeeds a list of all the processes currently running on the computer will be displayed in the command window. If the connection fails you will get an “Invalid class” error.
I’m getting an error (0x800A01B6) that says a property or method is not supported
On occasion you might run a script and receive an error message similar to this:
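Neither the error dialog nor the offending script survived extraction here; judging from the surrounding text, the failing script was something like this:

```vbscript
strComputer = "."
Set objWMIService = GetObject("winmgmts:\\" & strComputer & "\root\cimv2")
Set colProcesses = objWMIService.ExecQuery("Select * From Win32_Process")
For Each objProcess in colProcesses
    Wscript.Echo objProcess.Namex  ' wrong: the property is Name, not Namex
Next
```

In this reconstruction the faulty Echo statement happens to fall on line 5, matching the line number reported in the error dialog discussed below.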
Generally speaking this error crops up in one of two instances. In many cases you have simply typed in an incorrect property or method name; in the example shown above, we are trying to return information about a non-existent property named Namex rather than a property named Name. Use the information that appears in the dialog box when troubleshooting this problem: the dialog box will typically tell you the line of code where the error occurred (here, line 5) as well as the invalid property or method name (Namex). You can then use either the Scriptomatic or Wbemtest.exe to verify the actual property name.
You might also encounter this problem when working with computers running different versions of Windows. For example, on Windows XP and Windows Server 2003 the Win32_UserAccount class includes a property named LocalAccount. The Win32_UserAccount class is also found on Windows 2000; however, the version of WMI installed on Windows 2000 does not include the LocalAccount property. If you are working with different versions of Windows, use the Scriptomatic to compare the properties found on one computer with the properties found on the other computer.
Note. Suppose the two versions of WMI are different. Is there any way to upgrade Windows 2000 so that it has the same classes, properties, and methods found on Windows XP? Unfortunately, no: there is no upgrade path for WMI.
If the property name is correct but your script still fails, see the next section of this document: I’ve verified that the namespace, class, and property names are correct, yet my script still doesn’t work.
You can also type the following command from the command prompt in order to get an XML listing of all the properties and methods for a WMI class:
I’ve verified that the namespace, class, and property names are correct, yet my script still doesn’t work
Typically the WMI service (winmgmt) is always running: the service starts at the same time the computer starts and does not stop until the computer shuts down. The service can be stopped; however, it is designed to automatically restart any time you connect to a WMI namespace using a tool (such as Wbemtest.exe) or a WMI script.
Although unlikely, it is possible for the service to stop and then fail to restart itself when you run a script; alternatively, if you are having problems with your scripts you might be able to solve those problems by explicitly stopping and restarting the service. You should always try stopping and restarting the service before taking more drastic measures, such as rebuilding the WMI Repository.
Restarting the WMI Service
You can restart the WMI service by typing the following command from the command prompt:
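The command block is missing from this copy; restarting the service from the command prompt amounts to a stop followed by a start:

```bat
net stop winmgmt
net start winmgmt
```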
If the service fails to restart or if this fails to solve your problem, try the steps listed in the following section.
Stopping and Starting the WMI Service
If you are experiencing problems with the WMI service you might need to manually stop and restart the service. Before doing so you should enable WMI’s verbose logging option. This provides additional information in the WMI error logs that might be useful in diagnosing the problem. To enable verbose logging using the WMI control, do the following:
Open the Computer Management MMC snap-in and expand Services and Applications.
Right-click WMI Control and click Properties.
In the WMI Control Properties dialog box, on the Logging tab, select Verbose (includes extra information for Microsoft troubleshooting) and then click OK.
Alternatively, you can modify the following registry values:
Set HKEY_LOCAL_MACHINE\Software\Microsoft\WBEM\CIMOM\Logging to 2.
Set HKEY_LOCAL_MACHINE\Software\Microsoft\WBEM\CIMOM\Logging File Max Size to 4000000.
After enabling verbose logging try stopping the WMI service by typing the following at the command prompt:
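That command (quoted later in the Important note below) is:

```bat
net stop winmgmt
```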
If the net stop command fails you can force the service to stop by typing this:
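The forced-stop command (also quoted in the Important note below) is:

```bat
winmgmt /kill
```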
Important. If you are running Windows XP or Windows Server 2003 the WMI service runs inside a process named Svchost; this process contains other services as well as WMI. Because of that, you should not try to stop Svchost; if you succeed, you’ll stop all the other services running in that process as well. Instead, use net stop winmgmt or winmgmt /kill in order to stop just the WMI service.
If the service does not restart try rebooting the computer to see if that corrects the problem. If it does not, then continue reading.
I’m getting an 0x80041013 (“Provider not found”) or an 0x80041014 (“Component failed to initialize”) error
To be honest, errors such as this are a bit more difficult to resolve. Error number 0x80041013 means that “COM cannot locate a provider referenced in the schema;” error number 0x80041014 means that a component (such as a WMI provider) “failed to initialize for internal reasons.” If you receive one of these two errors, you will likely need to re-register all the .DLL and .EXE files associated with WMI.
Note. If you are experiencing problems with a single class or namespace you might be able to fix the problem by re-registering only the .DLL files associated with that class or namespace. The one drawback to this approach: it’s not always obvious which .DLL files are associated with a particular namespace.
To begin with, you must re-register all the .DLL files found in the %windir%\System32\Wbem folder. Open a command window and use the cd command to change to the %windir%\System32\Wbem directory. From there, type the following at the command prompt and then press ENTER:
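The command itself is elided in this copy; a one-line FOR loop that silently re-registers every .DLL in the current directory looks like this:

```bat
for %i in (*.dll) do RegSvr32 -s %i
```

(The -s switch suppresses the confirmation dialog for each file. If you put this line in a batch file rather than typing it interactively, double the percent signs: %%i.)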
After the .DLLs have been re-registered you will then need to re-register any .EXE files found in the Wbem folder, except for Mofcomp.exe and Wmic.exe. For example, to re-register the executable file Scrcons.exe type the following from the command prompt:
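The command is elided here; /RegServer is the standard self-registration switch for COM server executables, so the command would be:

```bat
scrcons.exe /RegServer
```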
After re-registering the .EXE files try your script again. If it fails with error number 0x80041011, 0x80041012, or 0x80041085, then see the section of this document titled I’m getting an error regarding provider registration.
I’m getting an error regarding provider registration
Even after re-registering your WMI components (see I’m getting an 0x80041013 (“Provider not found”) or an 0x80041014 (“Component failed to initialize”) error) you might not be able to run your script; in particular, you might get error 0x80041011, 0x80041012, or 0x80041085, all of which indicate a problem with provider registration.
If you receive one of these errors you will need to recompile one or more of your .MOF files. A .MOF (Managed Object Format) file contains WMI class and provider definitions; compiling it with Mofcomp.exe adds (or restores) those definitions in the WMI Repository.
Before you begin recompiling .MOF files note two things. First, most .MOF files can be found in the folder %windir%\System32\Wbem. However, there can be exceptions. You might want to search your hard disk for all .MOF files before beginning.
Second, there are a few applications that automatically populate the Repository during installation; these applications do not use .MOF files. If one of these applications is responsible for the problem with the WMI service (an admittedly difficult problem to diagnose) the only way to correct the class definitions in the Repository is by reinstalling the application.
The easiest way to recompile your .MOF and .MFL files (a .MOF file will often have a corresponding .MFL file which contains localized descriptions of classes, properties, and methods) is to open a command window and use the cd command to switch to the %windir%\System32\Wbem folder. From there type the following:
If you use a script or batch file to recompile .MOF files or if you manually call Mofcomp.exe make sure that you compile a .MOF file before you compile the corresponding .MFL file. In other words, compile items in this order:
To be honest, it’s faster and easier to rebuild the WMI Repository than it is to recompile all the MOF and MFL files; for information on rebuilding the Repository, see the section of this document titled I have a script that I know is valid, the WMI service is running, and I’ve re-registered all the .dlls, yet my script still doesn’t work. Recompiling .MOF and .MFL files is useful if you have reason to believe that only a portion of the Repository is corrupted; in that case, you can try recompiling just the affected MOF and MFL files without having to rebuild the entire Repository.
Of course, that leads to two questions. First, why not just rebuild the Repository? You can, but there are some risks involved in wiping out and replacing the Repository. For example, the WMI Repository is primarily a storehouse for meta-information about WMI itself. However, some static class data can be (and often is) stored in the Repository as well. If you rebuild the Repository all that data will be lost. (One such example: any permanent event consumers registered with the WMI service.) Admittedly, this will not cause problems for most people, but it is a possibility.
That leads us to question two: if you decide to rebuild just a portion of the Repository how do you know which MOF and MFL files to recompile? The easiest way to do that is to search the .MOF files (typically found in the folder %windir%\System32\Wbem) for the name of the class you are having difficulty with. Class names will be spelled out in the appropriate .MOF file because those files are responsible for adding information about classes to the WMI Repository.
I have a script that I know is valid, the WMI service is running, and I’ve re-registered all the .dlls, yet my script still doesn’t work
The WMI Repository (%windir%\System32\Wbem\Repository) is the database that stores meta-information and definitions for WMI classes; in some cases, the Repository also stores static class data as well. If the Repository becomes corrupted then the WMI service will not be able to function correctly. If you have tried everything else up to this point – from verifying the namespace to recompiling individual .MOF files – you might have a corrupted Repository. Fortunately, you might be able to repair the Repository; the steps you need to take to do so depend on the version of Windows you are running.
Important. Before you rebuild the Repository make sure you have tried the other remedies in this document; rebuilding the Repository should only be done as a last resort. If you are experiencing problems with the WMI service on a remote computer try your script or application locally on that machine before rebuilding the Repository; as noted earlier, network connectivity is far more likely to be the source of problems than the WMI service itself. Also, use the WMI Control to verify that permissions have been correctly set on the WMI namespaces before you resort to rebuilding the Repository.
Windows Server 2003, Service Pack 1
If you are experiencing problems on a computer running Windows Server 2003, Service Pack 1, you should first run a consistency check by typing the following from the command prompt:
Important. The parameter CheckWMISetup is case-sensitive: it must be typed in exactly as shown. If you type in something else, say, checkwmisetup, the consistency check will fail with the message “Missing entry: checkwmisetup”.
After issuing this command, check the WMI Setup log (%windir%\System32\Wbem\Logs\Setup.log). If you see entries similar to this, then the consistency check has failed:
Note. If there are no entries for the date on which you ran Wbemupgd that means that no inconsistencies were found.
If an inconsistency is discovered, that is indicative of a corrupt Repository. You can then try to repair the Repository by typing the following command from the command prompt (again, the parameter RepairWMISetup is case-sensitive):
If the repair command succeeds, you should see an entry similar to this in the Setup log:
If the repair fails or if your script still does not work then you will need to contact Microsoft Product Support Services.
Microsoft Windows XP, Service Pack 2
If you are running Windows XP, Service Pack 2 you can use a single command to detect and repair a corrupted WMI Repository. To do so, type the following from the command prompt (note that the parameter UpgradeRepository is case-sensitive and must be typed exactly as shown):
After running UpgradeRepository you can verify the results by looking at the Setup log. If inconsistencies are detected and if the operating system was able to rebuild the Repository you should see information in Setup.log similar to this:
(Wed Oct 12 13:46:36 2005): ===========================================================================
(Wed Oct 12 13:46:36 2005): Beginning WBEM Service Pack Installation
(Wed Oct 12 13:46:36 2005): Current build of wbemupgd.dll is 5.1.2600.2180 (xpsp_sp2_rtm.040803-2158)
(Wed Oct 12 13:46:36 2005): Current build of wbemcore.dll is 5.1.2600.2180 (xpsp_sp2_rtm.040803-2158)
(Wed Oct 12 13:46:52 2005): Inconsistent repository detected; it will be recreated
- -
(Wed Oct 12 13:47:33 2005): Wbemupgd.dll Service Security upgrade succeeded (XP SP update).
(Wed Oct 12 13:47:33 2005): WBEM Service Pack Installation completed.
(Wed Oct 12 13:47:33 2005): ===========================================================================
Note. There will probably be other entries in the log as well, but you should specifically look for the ones shown above.
If the repair fails or if your script still does not work then you will need to contact Microsoft Product Support Services.
Other Versions of Windows
Only Windows Server 2003, Service Pack 1 and Windows XP, Service Pack 2 include built-in commands for rebuilding the WMI Repository. On other versions of Windows you can force a rebuild by renaming the Repository folder: the operating system will no longer be able to find the WMI Repository and, as a result, will automatically rebuild the Repository the next time it needs to access WMI information. To do this:
- Stop the WMI service (net stop winmgmt).
- Rename the folder %windir%\System32\Wbem\Repository (for example, to Repository_bad).
- Restart the WMI service (net start winmgmt) and try your script again.
If your script still fails, try manually rebuilding the repository using the steps described in the section of this document titled I’m getting an error regarding provider registration.
Suppose this fixes the problem?
If you have rebuilt the Repository and your script now works, you might consider the following. Stop the WMI service and rename the current Repository folder (for example, Repository_good), then make Repository_bad the Repository once more (by renaming it back to Repository). Restart the WMI service and see if the script works. If it does not, then simply make Repository_good the WMI Repository once more.
Why would you do something like this? Well, as we noted, there are some risks inherent in rebuilding the Repository. For example, you might have applications that only update the Repository during installation; these applications do not have or use MOF files. If you rebuild the Repository then WMI data for those applications will be lost, at least until you re-install the programs. Likewise, you might have permanent event consumers or other types of data that are stored in the Repository; when the Repository is rebuilt that data will be lost. If you can use the old Repository you will save yourself from data loss like this.
I’ve rebuilt the WMI Repository and my script still doesn’t work
What if you’ve tried all the steps in this document and WMI still does not work? In that case, you will need to contact Microsoft Product Support Services (PSS). Please make sure you know:
- The operating system on which the problem occurred.
- The latest service pack installed on the computer.
- Whether or not the operating system is the client or server version (e.g., Windows 2000 Professional vs. Windows 2000 Server).
- The type of hard drive (IDE or SCSI) on which the operating system is installed.
For more information, go to. | https://technet.microsoft.com/en-us/library/ff406382.aspx | CC-MAIN-2017-04 | refinedweb | 5,706 | 58.42 |
The problem:
Lira is a little girl from Bytenicut, a small and cozy village located in the country of Byteland.
As the village is located on a somewhat hidden and isolated area, little Lira is a bit lonely and she needs to invent new games that she can play for herself.
However, Lira is also very clever, so, she already invented a new game.
She has many stones with her, which she will arrange on the ground in groups of three, each group forming a triangle-like shape; then she will select two triangles, the one with the smallest area and the one with the largest area, as the most beautiful ones.
While it’s easy for Lira to “estimate” the areas of the triangles by their relative sizes, it’s harder for her to actually calculate these areas.
But, it turns out, that Lira is also friends with YOU, an exceptional Mathematics student, and she knew that you would know exactly how to do such verification.
Lira also numbered the triangles from 1 to N, and now she wants to know the indices of the triangles with the smallest and largest area respectively.
It is now up to you, to help Lira and calculate the areas of the triangles and output their numbers.
Input
The first line of the input file contains an integer, N, denoting the number of triangles on the given input file.
Then N lines follow, each line containing six space-separated integers, denoting the coordinates x1, y1, x2, y2, x3, y3
Output
You should output two space separated integers, the indexes of the triangles with the smallest and largest area, respectively.
If there are multiple triangles with the same area, then the last index should be printed.
Constraints
2 ≤ N ≤ 100
-1000 ≤ xi, yi ≤ 1000
Example
Input:
2
0 0 0 100 100 0
1 1 1 5 5 1
Output:
2 1
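As a sanity check on the sample above (an illustration added here, not part of the original problem statement), the two areas can be computed with the shoelace formula, which is also the formula the C solution below uses:

```python
# Shoelace formula: area = |x1*(y2-y3) + x2*(y3-y1) + x3*(y1-y2)| / 2
def area(x1, y1, x2, y2, x3, y3):
    return abs(x1*(y2 - y3) + x2*(y3 - y1) + x3*(y1 - y2)) / 2

print(area(0, 0, 0, 100, 100, 0))  # 5000.0 -> the larger triangle (index 1)
print(area(1, 1, 1, 5, 5, 1))      # 8.0    -> the smaller triangle (index 2)
```

Triangle 2 has the smaller area (8) and triangle 1 the larger (5000), hence the expected output "2 1".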
My Solution
#include <stdio.h>

float findArea(int arr[6]){
    float value;
    value = (arr[0]*(arr[3]-arr[5])) + (arr[2] * (arr[5] - arr[1])) + (arr[4]*(arr[1] - arr[3]));
    value /= 2;
    if(value<0)
        value *= -1;
    return value;
}

int main(){
    int N;
    int i,j;
    int tri[100][6];
    int smallId, largeId;
    float smallValue,largeValue;
    float temp;

    scanf("%d",&N);
    for(i=0;i<N;i++){
        for(j=0;j<6;j++)
            scanf("%d",&tri[i][j]);
    }

    smallId = 0;
    largeId = 0;
    temp = findArea(tri[0]);
    // printf("first area=%f\n",temp);
    smallValue = largeValue = temp;

    for (i=1;i<N;i++){
        temp = findArea(tri[i]);
        // printf("area %d = %f\n",i,temp);
        if(temp<=smallValue){
            smallId = i;
            smallValue = temp;
        }
        if(temp>=largeValue){
            largeId = i;
            largeValue = temp;
        }
    }

    printf("%d %d",smallId+1,largeId+1);
    return 0;
}
Hey Daniel! This is a problem from an ongoing contest and you shouldn’t be posting solutions before it gets over. I am sure this will be helpful but not now. Please hide this post and make it visible after a few days when the contest is over. If the admins at codechef get to know about this, they might ban you.
My bad. I removed the solution already. | https://www.programminglogic.com/codechef-find-area-of-triangles/ | CC-MAIN-2019-13 | refinedweb | 521 | 54.26 |
On Thu, Sep 30, 2010 at 12:11:00PM +0200, Stefano Sabatini wrote:
> > @@ -964,6 +964,13 @@
> >      * - decoding: Set by libavcodec\
> >      */\
> >     void *hwaccel_picture_private;\
> > +\
> > +    /**\
> > +     * number of audio samples (per channel) described by this frame\
> > +     * - encoding: Set by user.\
> > +     * - decoding: Set by libavcodec.\
> > +     */\
> > +    int nb_samples;\
> >
> >  #define FF_QSCALE_TYPE_MPEG1 0
> > @@ -3478,8 +3485,11 @@
> >                            const uint8_t *buf, int buf_size);
> >  #endif
> >
> > +#if LIBAVCODEC_VERSION_MAJOR < 53
>
> Define a symbol FF_API_AVCODEC_DECODE_AUDIO3 and use it like it's done
> for the other FF_API_* symbols, this will help to test regressions.

Instead, just merge this #if block with the one just above, which is
using the new FF_API_AUDIO_OLD define. The same applies to utils.c.

Aurel
#include <ntwserver/ntw.h>The first task is to include the ntw header file. This tutorial assumes you're on a Unix type machine and have installed the ntw server as a library, which also installs the header files.
/** Callback Function.
 * We can attach this function to a widget
 * so that something happens after an event.
 */
void finished(ntw_event_data *evdata, ntwWidget *userdata){
    printf("Window closed\n");
    ntw_main_quit();
}

This is a callback function. A callback function is run in response to a user interface event, like "button clicked" or "mouse moved." They are always void functions. We'll learn more about callbacks later. For now, this one ends execution of the program.
/** Main. */
int main (int argc, char **argv){
    /** Defining the widgets.
     * We'll use a window, button, and label to start off.
     */
    ntwWidget *window;
    ntwWidget *button;
    ntwWidget *label;
    /** ntw_init must be called before we actually
     * do anything. This sets up the ntw server to listen on
     * our port. It also does some widget preparation.
     */
    ntw_init(argc, argv);

We have to call ntw_init before doing anything else. This sets up communications between the client and our helloworld server. The code below won't be executed until a client connects.
    /** Now we'll create a window.
     * The first two arguments are the width and height
     * of the window in pixels, and the last is the title.
     */
    window = ntw_window_new(150, 50, "Hello World");

    /** Creating the button.
     * Like the window, the button is a container that
     * can hold one other widget.
     */
    button = ntw_button_new();

    /** Now that we have a button, we can put it
     * inside the window.
     */
    ntw_container_add(window, button);

    /** Creating the label.
     * We don't just want a blank button.
     * The label widget is a string of text that we
     * can put in the button.
     */
    label = ntw_label_new("Click to Quit");

    /** Now that we have a label, we can put it
     * inside the button, the same way that we
     * put the button in the window.
     */
    ntw_container_add(button, label);

    /** Now let's handle some events by attaching callback functions to widgets.
     * This first will call finished when the button is clicked.
     */
    ntw_add_callback(button, BUTTON_CLICKED, (func_ptr) &finished, NULL, ASYNC);

    /** And these will call finished if the user closes the window. */
    ntw_add_callback(window, DESTROY_EVENT, (func_ptr) &finished, NULL, ASYNC);
    ntw_add_callback(window, DELETE_EVENT, (func_ptr) &finished, NULL, ASYNC);

    /** By default, top level windows are not visible.
     * This is so that we don't see the widgets appear
     * one by one as they're added to the window.
     * Now that everything's ready, we can make the window
     * visible.
     */
    ntw_widget_show(window);

    /** ntw_main -- the main event loop
     * This is what handles events, and we have to call it after
     * everything else is prepared.
     */
    ntw_main();

    return 0;
}
gcc -I/usr/local/include/ -L/usr/local/lib -lntwserver -o helloworld helloworld.cYou should now have a helloworld executable, which you can run like this:
./helloworld
The ntw server will start. In another terminal, connect with an ntw client to see the window.
os_log(3) BSD Library Functions Manual os_log(3)
NAME
os_log, os_log_info, os_log_debug, os_log_error, os_log_fault -- log a message scoped by the current activity (if present)
SYNOPSIS
#include <os/log.h>

void os_log(os_log_t log, const char *format, ...);

void os_log_info(os_log_t log, const char *format, ...);

void os_log_debug(os_log_t log, const char *format, ...);

void os_log_error(os_log_t log, const char *format, ...);

void os_log_fault(os_log_t log, const char *format, ...);
DESCRIPTION
The unified logging system provides a single, efficient, high performance set of APIs for capturing log messages. The system can be configured using the log(1) command-line tool and through the use of custom logging configuration profiles. Log messages are viewed using the Console app in /Applications/Utilities/ and the log(1) command-line tool. Logging and activity tracing are integrated to make problem diagnosis easier. If activity tracing is used while logging, related messages are automatically correlated.

The unified logging system considers dynamic strings and complex dynamic objects to be private, and does not collect them automatically. To ensure the privacy of users, it is recommended that log messages consist strictly of static strings and numbers, which are collected automatically by the system. In situations where it is necessary to capture a dynamic string, and it would not compromise user privacy, you may explicitly declare the string public by using the public keyword in the log format string. For example, %{public}s. Log arguments can also be specified as private by using the private keyword in the log format string. For example, %{private}d.

To format a log message, use a printf(3) format string. You may also use the "%@" format specifier for use with Obj-C/CF/Swift objects, and %.*P which can be used to decode arbitrary binary data. The logging system also supports custom decoding of values by denoting value types inline in the format %{value_type}d.
The built-in value type decoders are:

     Value type       Custom specifier           Example output
     BOOL             %{BOOL}d                   YES
     bool             %{bool}d                   true
     darwin.errno     %{darwin.errno}d           [32: Broken pipe]
     darwin.mode      %{darwin.mode}d            drwxr-xr-x
     darwin.signal    %{darwin.signal}d          [sigsegv: Segmentation Fault]
     time_t           %{time_t}d                 2016-01-12 19:41:37
     timeval          %{timeval}.*P              2016-01-12 19:41:37.774236
     timespec         %{timespec}.*P             2016-01-12 19:41:37.2382382823
     bitrate          %{bitrate}d                123 kbps
     iec-bitrate      %{iec-bitrate}d            118 Kibps
     uuid_t           %{uuid_t}.16P              10742E39-0657-41F8-AB99-878C5EC2DCAA
     sockaddr         %{network:sockaddr}.*P     fe80::f:86ff:fee9:5c16
     in_addr          %{network:in_addr}d        127.0.0.1
     in6_addr         %{network:in6_addr}.16P    fe80::f:86ff:fee9:5c16

Use os_log and its variants to log messages to the system datastore based on rules defined by the os_log_t object, see os_log_create(3). Generally, use the OS_LOG_DEFAULT constant to perform logging using the system defined behavior. Create a custom log object when you want to tag messages with a specific subsystem and category for the purpose of filtering, or to customize the logging behavior of your subsystem with a profile for debugging purposes.

os_log is a "default" type of log message that is always captured in memory or on disk by the system. Limit use to messages that would help diagnose a failure, crash, etc. for production installations.

os_log_info is an "info" type of log message used for additional information. Depending on configuration these messages may be enabled and only captured in memory. They can be optionally configured to persist to disk using a profile or via tools.

os_log_debug is a "debug" type of log message that is only recorded when it is specifically requested by tools or configured as such. Debug messages should be used for development use, i.e., additional information that is typically only useful during code development.

os_log_error is an "error" type of log message that is related to the local process or framework. If a specific os_activity_id_t is present on the thread, that activity will be captured with all messages available and saved as a snapshot of the error event.

os_log_fault is a "fault" type of log message that may involve system services or multiple processes. If a specific os_activity_id_t is present on a thread, that activity will be captured with all messages available from all involved processes and saved as a snapshot of the fault event.
EXAMPLES
Example use of log messages.

#include <os/log.h>
#include <pwd.h>
#include <errno.h>

int main(int argc, const char * argv[])
{
    uid_t uid;

    os_log(OS_LOG_DEFAULT, "Standard log message.");
    os_log_info(OS_LOG_DEFAULT, "Additional info for troubleshooting.");
    os_log_debug(OS_LOG_DEFAULT, "Debug level messages.");

    struct passwd *pwd = getpwuid(uid);
    if (pwd == NULL) {
        os_log_error(OS_LOG_DEFAULT, "failed to lookup user %d", uid);
        return ENOENT;
    }
}
SEE ALSO
log(1), os_log_create(3), os_activity_initiate(3)

Darwin                           June 2, 2016                           Darwin
Mac OS X 10.12.6 - Generated Sun Oct 29 14:45:55 CDT 2017 | http://www.manpagez.com/man/3/os_log/ | CC-MAIN-2018-39 | refinedweb | 791 | 50.23 |
Google Tasks Tray
Nov 23, 2010 By Shawn Powers
Shawn Powers shows us how to use the program "alltray" to put applications up in the system tray, even if they're not designed to do so. Shawn demonstrates with Google Tasks and Prism, but you can use it for whatever program you want.
Made it to Lifehacker
Shawn, you made it to lifehacker! Congrats!...
Of all the things I've lost, I miss my mind the most!
awesome tip
great tip. Thanks.
tech tips return
whoa! good thing that your back with the tech tips! :)
For Apps users..
For Google Apps users, the URL for Tasks is
miss
Hi!
Shawn Powers, you are new in Linux Journal? That's very good! lool
I miss you and your tech tips of the day...! lool
It's good to see you back! lol
Bye
Maybe another example,
Maybe another, non-Prism example would have been better, for avoiding the complexity of messing up parameters for xulrunner and/or alltray...
But interesting nevertheless, I've never heard of this program.
You can do something similar
You can do something similar with python, webkit and gtk (or another gui widget tookit).
import gtk
import webkit
import gobject
def destroy(widget, data=None):
    gtk.main_quit()
gobject.threads_init()
window = gtk.Window()
scroller = gtk.ScrolledWindow()
browser = webkit.WebView()
window.set_title("My WebApp") #sets the window title
window.resize(800, 600) #sets the window size
window.add(scroller)
scroller.add(browser)
browser.open("") #sets the url of the webapp
window.connect("destroy", destroy)
window.show_all()
gtk.main()
This should work with the default flavor of Ubuntu. If you are using something else, you just need to make sure python, webkit, and gtk are installed.
Sorry line 6
Sorry line 6 'gtk.main_quit()' should have been indented.
awesome
dude this is cool how it worked. but could you help if there's an easy way to set the icon that shows up in the taskbar? Cuz that'd be awesome. If not don't worry about it...thanks for the tip!
It looks like there is a way
It looks like there is a way to specify the icon alltray uses with the -i option.
This worked for me.
alltray python /path/to/script.py -i /path/to/icon.png -st -na
Loved the tip
Also loved your bumper sticker "Driver carries no cash"
Loved the tip
and the bumper sticker | http://www.linuxjournal.com/video/google-tasks-tray?quicktabs_1=1 | CC-MAIN-2017-09 | refinedweb | 401 | 78.35 |
Introduction to Strictfp in Java
The primary function of strictfp in Java is to produce the same result on every platform when floating-point operations are involved. Back when Java was being developed by James Gosling and his team, one of the main goals of the language was platform independence: compiling, interpreting, and executing the same code on different machines, and making sure the result is the same and not altered by the processor.
When floating-point calculations run on different platforms, the results can vary because each CPU handles floating-point precision differently. Strictfp ensures that you get exactly the same floating-point output on all platforms. Following IEEE 754, the standard for floating-point arithmetic, strictfp was introduced in JVM version 1.2.
Standard precision varies across CPUs: 32-bit precision differs from 64-bit, and so on for x86 machines. Using strictfp makes sure that when the same code is executed on different platforms, the output is not altered by differing precision, and this works fine even on platforms with greater precision. Understand strictfp as FP Strict, meaning Floating Point Strict.
Syntax in Strictfp
The strictfp is simple and starts with the keyword strictfp. Below are the syntaxes for class, an interface and a method:
1. Class
strictfp class sample1{
// code here will me implicitly strictfp.
}
2. An Interface
strictfp interface sample2 {
// methods here will be implicitly strictfp
}
3. Method
class India {
strictfp void population()
}
In the above samples, we have simply added the keyword strictfp for the class and interface. The above examples demonstrate how strictfp the keyword must be used and now below is an example of where not to use strictfp keyword.
class sample1{
strictfp float a;
}
Here, applying the strictfp keyword to a variable won't work. It can only be applied to a class, an interface, or a method.
Example to Implement in Strictfp
Below are the examples to implement in Strictfp:
Example #1
We will now demonstrate the working of the strictfp keyword by implementing the same in a code example. Below is the program for the same:
Code:
public class Test {
public strictfp double add()
{
double number1 = 10e+10;
double number2 = 6e+08;
return (number1+number2);
}
public static strictfp void main(String[] args)
{
Test example = new Test();
System.out.println(example.add());
}
}
Output:
Explanation to the above code: We start with the definition of class Test, then create a method with a double return type and mark it with the strictfp keyword. In add(), we create two variables named number1 and number2 of type double, and in the next line the method returns the addition of these two variables. Next we have our main method, which is also declared strictfp, so the code inside it inherits the strictfp properties. We create a new object of our class, and in the end the output print statement simply prints the added value of the two variables.
Example #2
Let us now demonstrate the strictfp keyword in a different program. Here we won't do any calculations; instead we will assign a value to a float variable and print it, with the strictfp keyword in use.
Code:
public class new_strictfp {
float f = 9.381f;
strictfp public void displayValue(){
System.out.println(f);
}
public static void main(String[] args) {
new_strictfp Demo = new new_strictfp ();
Demo.displayValue();
}
}
Output:
Explanation to the above code: This is another example where we have implemented the strictfp keyword. We start with a public class and create a float variable with a value. Then we create a method to print the float value, and in our main method we initialize an object of our class. After the object, we have an output statement which calls the earlier created method. Here, the method is declared strictfp, meaning the floating-point value assigned will be printed exactly as it is.
Rules to Remember
Like any other keyword in any programming language, strictfp in Java has its uses and rules and to achieve the intended results, rules specified must be followed. Now let’s understand some rules before we implement the keyword.
- The strictfp keyword cannot be implemented with constructors, it works fine with classes, interfaces, and methods.
- When strictfp is declared with an interface or a class, it will implicitly implement strictfp for all the methods and nested types within.
- strictfp can be implemented with abstract class or interface but not with abstract methods.
- Along with constructors, strictfp cannot be used with variables, refer to the sample code mentioned earlier.
Following the above-mentioned rules will ensure the proper use of strictfp and the possible difference between floating-point precision giving different results will be avoided.
Advantages of Strictfp in Java
Below are the advantages that come along with the keyword. Advantages of using strictfp are various from a developer’s perspective, few are listed below:
- Accuracy with Floating Point Result, on various machines.
- strictfp harnesses the precision and speed for floating-point operations.
- It is completely unbiased for 32-bit and 64-bit and clearly focuses on better results.
- Lastly, using this keyword as no disadvantage, no increase in time compressibility or execution speed.
- It is important to understand, without the use of strictfp keyword and its function, the floating-point output can be manipulated according to the target machine’s CPU precision point by the JVM and JIT compiler.
- This is just another better way of ensuring the basics of Java Programming Language, which is “Write Once Run Anywhere” and not getting different or wrong results everywhere except the source machine.
Conclusion – Strictfp in Java
In this article we understood what the strictfp keyword in Java is, then we explored its uses and where not to use this keyword. It was introduced in JVM 1.2 following the IEEE 754 floating-point standard. We demonstrated the strictfp keyword with syntaxes and later implemented them in examples. It is a modifier that ensures the same floating-point results on every platform.
Recommended Articles
This is a guide to Strictfp in Java. Here we discuss Syntax to Strictfp, the example to implement with rules to remember and advantages. You can also go through our other related articles to learn more – | https://www.educba.com/strictfp-in-java/?source=leftnav | CC-MAIN-2020-34 | refinedweb | 1,029 | 53.1 |
cc [ flag ... ] file ... -lsocket -lnsl [ library ... ]
#include <sys/types.h>
#include <sys/socket.h>
#include <netinet/in.h>
#include <arpa/inet.h>

The inet_ntop() function converts a numeric address into a string
suitable for presentation. The af
argument specifies the family of the address. The cp argument points to
a buffer where the routine will store the resulting
string. The size argument specifies the size
of this buffer. The application must specify a non-NULL cp argument.

The inet_addr() and inet_network()
routines interpret character strings representing numbers expressed in the
IPv4 standard `.' notation, returning numbers suitable for use as Internet
addresses and Internet network numbers, respectively.
There are three conventional forms for representing IPv6 addresses
as strings:
1080:0:0:0:8:800:200C:417A
Note that it is not necessary to write the leading zeros in an individual
field. However, there must be at least one numeral in every field, except
as described below.
1080::8:800:200C:417A
::FFFF:129.144.52.38
::129.144.52.38
::FFFF:d.d.d
::FFFF:d.d
::d.d.d
::d.d
::FFFF:d
::255.255.0.d
Values specified using `.' notation take one of
the following forms:
d.d.d.d
d.d.d
d.d
d

All numbers supplied as parts in `.' notation may be decimal, octal, or
hexadecimal: a leading 0x or 0X implies hexadecimal; otherwise, a leading
0 implies octal; otherwise,
the number is interpreted as decimal.
For IPv4 addresses, inet_pton() only accepts a
string in the standard IPv4 dotted-decimal form:
d.d.d.d. | http://www.shrubbery.net/solaris9ab/SUNWaman/hman3socket/inet.3socket.html | CC-MAIN-2014-42 | refinedweb | 205 | 53.07 |
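Although the routines above are C functions, the same conversions are exposed under the same names by Python's socket module, which makes it easy to experiment with the address forms described in this page (this example is an illustration added here, not part of the manual page):

```python
import socket

# inet_pton: presentation form -> packed binary address
packed4 = socket.inet_pton(socket.AF_INET, "129.144.52.38")
print(len(packed4))   # 4 bytes for an IPv4 address

# inet_ntop: packed binary address -> presentation form
print(socket.inet_ntop(socket.AF_INET, packed4))   # 129.144.52.38

# The same pair handles IPv6, including the compressed `::' form
packed6 = socket.inet_pton(socket.AF_INET6, "1080:0:0:0:8:800:200C:417A")
print(len(packed6))   # 16 bytes for an IPv6 address
print(socket.inet_ntop(socket.AF_INET6, packed6))  # the zero run comes back compressed with ::

# As noted above, inet_pton() only accepts the d.d.d.d form for IPv4
try:
    socket.inet_pton(socket.AF_INET, "0x7f.1")
except OSError as err:
    print("rejected:", err)
```

The octal/hexadecimal part forms accepted by inet_addr() are deliberately rejected by inet_pton(), exactly as the manual page states.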
Guide to Linear Regressions in Python
This article outlines the essentials of defining a linear regression in Python using NumPy.
A linear regression is one of the simplest and oldest, and still incredibly important, types of model available for predictive and inferential data analysis. With origins dating to the late 19th century, it’s become well tested, and indeed widely used. In fact, the concepts used to create linear regressions are essential to the foundations of machine learning, including advanced methods like deep learning and neural networks.
Put simply, a linear regression is a “best fit line” used to approximate a trend in a graph. In predictive applications, it’s used to provide an estimate, or “best guess” when a direct measurement is not taken or cannot be made.
What a great concept! Find the best fit line to a given set of data points, and just like that estimates can be made. In order to do this, parameters must be given to create this “best fit”.
As is the case for the great majority of linear regressions, the parameter used is known as the “least squares” method. In other words, “squares” are defined and methods are determined to minimize them (find the “least” of the “squares”).
What exactly is getting “squared”? These are the distances from the data points to the regression line. This is a way to quantify the error in the estimate. In this way, a line can be found that has the least amount of error, and can be concluded to be the best estimate based on these parameters.
For regressions, two types of data are required: dependent and independent variables used in tandem. For instance, time is independent, as it marches forward at a (relatively) constant rate. Distance, on the other hand, can be a dependent variable: when measuring the distance traveled, the distance may change relative to time, but time will march tenaciously forward independent of the distance traveled.
As always, begin with essential imports, in this case NumPy for manipulating data, and Matplotlib for visualizations:
import numpy as np
import matplotlib.pyplot as plt
Add the data of interest. In this case, I manufacture some:
# create independent and dependent variables
# independent
X = np.array([1,2,3,4,5,6,7,8,9,10], dtype=np.float64)
# dependent
Y = np.array([6,6,8,10,10,12,12,15,13,16], dtype=np.float64)
Just looking at the arrays, there seems to be a nice correlation; however, it's wise to visualize your data with the tools at your disposal. Make a Matplotlib plot to get an idea of existing trends:
plt.scatter(X, Y)
plt.show()

(Figure: a scatter plot of X versus Y.)
Indeed there is a strong positive correlation. The data points can be used to calculate the slope, m, of the linear regression using the least-squares formula:

m = (mean(X)*mean(Y) - mean(X*Y)) / (mean(X)^2 - mean(X*X))
Define a function to calculate the slope based on input arrays:
def calc_slope(X, Y):
    # calculate the slope using the formula above
    m = (((np.mean(X)*np.mean(Y)) - np.mean(X*Y)) /
         ((np.mean(X)**2) - np.mean(X*X)))
    return m
Traditionally, lines are defined in slope-intercept form; perhaps y = mx+b rings a bell. In this case, the x is given by the values in the X array, and m can be calculated with the function defined above, so what remains is to find b, the y-intercept of the regression.
def best_fit(X, Y):
    # find the intercept by solving the slope-intercept equation
    m = calc_slope(X, Y)
    b = np.mean(Y) - m*np.mean(X)
    return m, b
With these pieces, a regression function may be defined. This will take the slope and intercept from the best_fit function and find the values of the regression line for the given data:
def reg_line(m, b, X):
    return [(m*x) + b for x in X]
Now, simply add the regression to the data!
m, b = best_fit(X, Y)
regression_line = reg_line(m, b, X)

plt.scatter(X, Y, color='#003F72', label="Input Data")
plt.plot(X, regression_line, color='r', label="Regression Line")
plt.legend()
plt.show()

(Figure: the input data with the fitted regression line overlaid.)
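As a sanity check (not part of the original walkthrough), NumPy's built-in polyfit should recover the same least-squares line as the hand-written formulas:

```python
import numpy as np

X = np.array([1, 2, 3, 4, 5, 6, 7, 8, 9, 10], dtype=np.float64)
Y = np.array([6, 6, 8, 10, 10, 12, 12, 15, 13, 16], dtype=np.float64)

# slope and intercept via the formulas above
m = (((np.mean(X)*np.mean(Y)) - np.mean(X*Y)) /
     ((np.mean(X)**2) - np.mean(X*X)))
b = np.mean(Y) - m*np.mean(X)

# NumPy's degree-1 polynomial fit minimizes the same squared error
m_np, b_np = np.polyfit(X, Y, 1)
print(abs(m - m_np) < 1e-9, abs(b - b_np) < 1e-9)  # True True
```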
| https://samjdedes.medium.com/guide-to-linear-regressions-in-python-bba7f68fb6af?source=post_internal_links---------0---------------------------- | CC-MAIN-2022-40 | refinedweb | 671 | 63.09 |
Hi,
I have the RGB & HSL values of some colors. I want to use the RGB or HSL values to color the text on my screen.
And I need to change the background to white.
Please help me. I am doing this to associate some letters with colors and some other stuff that would be too complicated to explain.
Thx.
By the way, the code below is the closest I got to what I want, but I don't have all the colors I want.
Code:
#define BLACK 0
#define BLUE 1
#define GREEN 2
#define CYAN 3
#define RED 4
#define MAGENTA 5
#define BROWN 6
#define LIGHTGREY 7
#define DARKGREY 8
#define LIGHTBLUE 9
#define LIGHTGREEN 10
#define LIGHTCYAN 11
#define LIGHTRED 12
#define LIGHTMAGENTA 13
#define YELLOW 14
#define WHITE 15
#define BLINK 128

#include <windows.h>
#include <iostream>
using namespace std;

int main()
{
    srand((unsigned) time(NULL));
    cout << "Hello World!" << endl;
    cout << "Welcome to C++ Programming" << endl;
    int k;
    do{
        //SetConsoleTextAttribute(GetStdHandle(STD_OUTPUT_HANDLE), 2);
        //replace the 0 with a number for the color you want
        // 0xFF0000  TextBackground(1)
        SetConsoleTextAttribute(GetStdHandle(STD_OUTPUT_HANDLE), 3 );
        cin >> k;
        cout << "\n";
        int n = rand() % 26;
        char c = (char)(n+65);
        n = rand() % 26;
        char d = (char)(n+65);
        cout << c << d << "\n";
    } while(k == 0);
    return 0;
}
On Tue, 6 Mar 2012, Ingo Molnar wrote:

> > All percpu_xxx() functions get removed.
>
> Fair enough - then please see my namespace comments for the
> other patch, before we start spreading these APIs to hundreds of
> places ...

It's already almost everywhere... This removes some of the last holdouts.

> Also, IMHO the lack of debugging API is troubling as well.

For that there would need to be some basic understanding as to what is
going on here. These are fundamentally atomic-style operations, and I keep
hearing that people want to check if they are in a preempt section etc.
I do not see how that is possible for the this_cpu_ops.

However, checks could be done for the __this_cpu_ops (which rely on the
processor not changing during their execution for their fallback
implementation, so preempt should be off) by simply adding a check to the
basic macros (add to __this_cpu_ptr, for example). The problem is then
that there needs to be an audit first to see if there are cases where we
run into trouble with such an approach. There are situations in which the
use of __this_cpu ops is legitimate for other reasons, if no concurrent
per-cpu access can occur or we cannot be moved to a different processor
(because the process is pinned to a processor, f.e.).

> Thomas found a couple of really, really hairy this_cpu related
> bugs.

Well, yeah: if one uses per-cpu operations and then allows execution to
occur on a different processor, then one will not operate on the same
entity. The same issue would have occurred if they had used the old
percpu operations.
libpgm (3) - Linux Man Pages
libpgm: libnetpbm functions to read and write PGM image files
NAME
libpgm - libnetpbm functions to read and write PGM image files
SYNOPSIS
#include <netpbm/pgm.h>
void pgm_init( int *argcP, char *argv[] );
gray ** pgm_allocarray( int cols, int rows );
gray * pgm_allocrow( int cols );

void pgm_freearray( gray **grays, int rows );
void pgm_freerow( gray *grayrow);
void pgm_readpgminit( FILE *fp, int *colsP, int *rowsP, gray *maxvalP, int *formatP );
void pgm_readpgmrow( FILE *fp, gray *grayrow, int cols, gray maxval, int format );
gray ** pgm_readpgm( FILE *fp, int *colsP, int *rowsP, gray *maxvalP );
void pgm_writepgminit( FILE * fp , int cols, int rows, gray maxval, int forceplain );
void pgm_writepgmrow( FILE *fp, gray *grayrow, int cols, gray maxval, int forceplain );
void pgm_writepgm( FILE *fp, gray **grays, int cols, int rows, gray maxval, int forceplain );
void pgm_nextimage( FILE *file, int * const eofP);
void pgm_check( FILE * file, const enum pm_check_type check_type, const int format, const int cols, const int rows, const int maxval, enum pm_check_code * const retval);
typedef ... gray;
#define PGM_MAXMAXVAL ...
#define PGM_OVERALLMAXVAL ...
#define PGM_FORMAT ...
#define RPGM_FORMAT ...
#define PGM_TYPE PGM_FORMAT
#define PGM_FORMAT_TYPE(format) ...
DESCRIPTION
These library functions are part of Netpbm(1).
TYPES AND CONSTANTS
Each gray should contain only the values between 0 and PGM_OVERALLMAXVAL.
PGM_OVERALLMAXVAL is the maximum value of a maxval in a PGM file. PGM_MAXMAXVAL is the maximum value of a maxval in a PGM file that is compatible with the PGM format as it existed before April 2000. It is also the maximum value of a maxval that results in the minimum possible raster size for a particular image. I.e., an image with a maxval higher than PGM_MAXMAXVAL cannot be read or generated by old PGM processing programs and requires more file space.
PGM_FORMAT is the format code for a Plain PGM format image file. RPGM_FORMAT is the format code for a Raw PGM format image file. PGM_TYPE is the format type code for the PGM formats. PGM_FORMAT_TYPE is a macro that generates code to compute the format type code of a PBM or PGM format from the format code which is its argument.
INITIALIZATION
pgm_init() is obsolete (at least since Netpbm 9.25 (March 2002)). Use pm_proginit() instead.
pgm_init() is identical to pm_proginit.
MEMORY MANAGEMENT
pgm_allocarray() allocates an array of grays.
pgm_allocrow() allocates a row of the given number of grays.
pgm_freearray() frees the array allocated with pgm_allocarray() containing the given number of rows.
pgm_freerow() frees a row of grays allocated with pgm_allocrow().
READING FILES
If a function in this section is called on a PBM format file, it translates the PBM file into a PGM file on the fly and functions as if it were called on the equivalent PGM file. The format value returned by pgm_readpgminit() is, however, not translated. It represents the actual format of the PBM file.
pgm_readpgminit() reads the header of a PGM file, returning all the information from the header and leaving the file positioned just after the header.
pgm_readpgmrow() reads a row of grays into the grayrow array. format, cols, and maxval are the values returned by pgm_readpgminit().
pgm_readpgm() reads an entire PGM image into memory, returning the allocated array as its return value and returning the information from the header as rows, cols, and maxval. This function combines pgm_readpgminit(), pgm_allocarray(), and pgm_readpgmrow().
pgm_readpgminit() and pgm_readpgm abort the program with a message to Standard Error if the PGM image header is not syntactically valid, including if it contains a number too large to be processed using the system's normal data structures (to wit, a number that won't fit in a C 'int').
WRITING FILES
pgm_writepgminit() writes the header for a PGM file and leaves it positioned just after the header.
forceplain is a logical value that tells pgm_writepgminit() to write a header for a plain PGM format file, as opposed to a raw PGM format file.
pgm_writepgmrow() writes the row grayrow to a PGM file. For meaningful results, cols, maxval, and forceplain must be the same as was used with pgm_writepgminit().
pgm_writepgm() write the header and all data for a PGM image. This function combines pgm_writepgminit() and pgm_writepgmrow().
MISCELLANEOUS
pgm_nextimage() positions a PGM input file to the next image in it (so that a subsequent pgm_readpgminit() reads its header).
pgm_nextimage() is analogous to pbm_nextimage(), but works on PGM and PBM files.
pgm_check() checks for the common file integrity error where the file is the wrong size to contain all the image data.
pgm_check() is analogous to pbm_check(), but works on PGM and PBM files.
SEE ALSO
libpbm(1), libppm(1), libpnm(1)
Templates are rendered by the view in order to present data to your users. Typically, you’ll be rendering an HTML page, but you may use templates for generating other things.
Ferris uses the Jinja2 template engine and adds a few helper functions and filters, some layouts, and a theme system.
Jinja2 is a complex template engine with many features, many of which are used by Ferris. It’s highly recommended to read through Jinja2’s documentation.
Templates are stored in /app/templates/[controller]/[prefix_][action].html. So if you had a controller named Bears and an action named picnic, you would create a file named /app/templates/bears/picnic.html. If you have a controller named Pages, an action named view with a prefix of mobile, you would create the template at /app/template/pages/mobile_view.html.
It can sometimes be difficult to visualize how all of these pieces work together. Consider the template to be the starting point; this is the first file the template engine examines when rendering. Let’s take a Posts controller that’s rendering the view action. The template will be templates/posts/view.html. Let’s assume the template looks like this:
{% extends "layouts/simple.html" %}

{% block header %}
    {{post.title}}
{% endblock %}

{% block content %}
    {{post.content}}
{% endblock %}
And the layout that it extends (simple.html) looks like this:
<html>
<body>
    <header>
        {% block header %}{% endblock %}
    </header>
    <section>
        {% block content %}{% endblock %}
    </section>
</body>
</html>
When this template is rendered the blocks are placed into the corresponding blocks in the layout. Here’s an image that visualizes that:
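The same block substitution can be tried outside Ferris with plain Jinja2 (a standalone sketch, not Ferris code; the DictLoader and inline strings stand in for the template files above):

```python
from jinja2 import Environment, DictLoader

# in-memory stand-ins for posts/view.html and layouts/simple.html
templates = {
    "layouts/simple.html":
        "<h1>{% block header %}{% endblock %}</h1>"
        "<p>{% block content %}{% endblock %}</p>",
    "posts/view.html":
        '{% extends "layouts/simple.html" %}'
        "{% block header %}{{ post.title }}{% endblock %}"
        "{% block content %}{{ post.content }}{% endblock %}",
}

env = Environment(loader=DictLoader(templates))
html = env.get_template("posts/view.html").render(
    post={"title": "My Post", "content": "Hello!"})
print(html)  # <h1>My Post</h1><p>Hello!</p>
```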
Templates can just be simple static html:
<div> Hello, world! </div>
However, you probably want to provide some sort of data:
<div> Hello, {{name}}</div>
Again see the Jinja2 documenation for more information on the syntax.
Usually, a template will inherit from a layout and provide one or more blocks with content:
{% extends "layouts/default.html" %}

{% block layout_content %}
    <div> Hello! </div>
{% endblock %}
Also, a template can inherit from any other valid template (which in turn may inherit from another template or layout):
{% extends "posts/list.html" %}

{% block post_title %}
    <h1 class='super-large'>{{post.title}}</h1>
{% endblock %}
Data is provided in a controller via the dictionary self.context which is an alias to self.meta.view.context:
self.context['species'] = "Raxacoricofallapatorian"
This data can now be accessed by name in the template:
This is a {{species}}!
Of course, more complex data can be provided such as a Model instance:
self.context['species'] = Species.find_by_planet('Gallifrey')
Properties on that object can be accessed in the template:
The primary species of the planet {{species.planet}} is {{species.name}}.
Layouts reside inside of app/templates/layouts and serve as the base templates for regular templates to inherit from.
For example, here’s a layout named large_text.html:
<h1>{% block content %}{% endblock %}</h1>
Here’s a template that inherits from it:
{% extends "layouts/large_text.html" %}

{% block content %}Yeah, Big Text!{% endblock %}
For more info on template inheritence, see the jinja2 docs on inheritance.
Ferris provides two standard layouts. You can use overloading as described below to customize these layouts.
Located at layouts/default.html, the default layout provides a simple Twitter Bootstrap layout and a few blocks.
Text inside of the <title> tag.
Markup inside of the <head> tag, useful for adding meta tags and such.
Everything inside the body tag, wraps layout_content, layout_before_content, and layout_after_content
Content inside of the <div class='container'> tag, the common spot for overriding and placing content.
Content before the <div class='container'> tag
Content after the <div class='container'> tag
Content right before the closing body tag. Useful for adding scripts and stylesheets.
Located at layouts/admin.html, the admin layout is used by Scaffolding and provides a layout with a side bar and navigation bar.
It contains all of the same blocks as the default layout, plus:
Contains the side navigation.
Contains layout_header, layout_sidebar, and layout_content
Contains the breadcrumb and sits between layout_nav_bar and the two columns.
Contains the action pallete (or any other content to be placed in the sidebar).
Elements are typically located in app/templates/elements or for very specific elements app/templates/[handler]/elements. There’s nothing special about element other than just the organization and the idea.
Take this element post-item.html for example:
<h1>{{post.name}}</h1>
<div>{{post.content}}</div>
And this template that uses the element:
<section>
    {% for post in posts %}
        {% include "elements/post-item.html" with context %}
    {% endfor %}
</section>
The code from the element gets placed in the template where it’s included and has access to the context. This is somewhat similar to the idea of blocks in layouts.
It sometimes helps to visualize this:
Macros are usually located in app/templates/macros/[name].html, although some items choose app/templates/[name]/macros.html. The first format is used when the macro is general purpose, while the second is used when macros are restricted to one set of templates.
Macros are files that contain a collection of jinja2 macros.
Ferris provides a handful of built-in macros to help in creating templates. Each of these are documented in their associated module.
Templates can be overloaded. Overloading is different from inheritance – inheritance involves creating a new template that re-uses blocks from another template while overloading completely replaces the original template. This is very useful for customizing templates that are built into ferris.
For example, you’re probably going to create your own layouts. Ferris includes a default layout at layouts/default.html and it’s likely that during prototyping of your templates already inherit from this. Eventually you’ll be ready to put your own look and feel. You can copy ferris/templates/layouts/default.html into app/templates/layouts/default.html. The file in app folder will override the one in ferris and all templates that use {% extends 'layouts/default.html'} will now use the one in app. You can now customize it as needed.
You can use this to customize everything in Ferris’ templates - from layouts to macros to scaffolding.
Templates are resolved by name in the following order:
- First, the theme is taken into account. This whole resolution order is first applied to the theme folder if the theme is set.
- app/templates
- plugins/[plugin]/templates
- ferris/templates
- If using scaffolding, Ferris will check scaffolding/[action].html. This can be overwritten. Simply copy the scaffolding folder into app/templates and see Scaffolding for more details.
For example if you render posts/view.html Ferris checks for that template in each of those folders from top to bottom and uses the first one it finds. Notice that app takes precedence over everything else; this means you can override any template from Ferris and any plugins inside of your app.
Also note that you can use prefixed paths to explicitly access un-overloaded templates. For example if I wanted the layout that’s in ferris and not in app I can use ferris/layouts/default.app. The following prefixes are available:
- ferris - maps to /ferris/templates
- app - maps to /app/templates
- [plugin]/ - maps to /plugins/[plugin]/templates
Themes are a collection of templates, elements, and macros that can override those in the root or default theme. Themes are located in app/templates/themes/[name]. Their directory structure mirrors that of the root app/templates folder.
For example, if you have a root template structure like this:
* posts
    * list.html
    * view.html
* elements
    * post.html
And you created a new theme called mobile under app/templates/themes/mobile with the following directory structure:
* posts
    * view.html
* elements
    * post.html
If you switch the theme to mobile, then the template engine will use the posts/view.html and elements/post.html templates from the mobile folder.
However, because we did not specify a posts/list.html in the mobile theme, Ferris will use the posts/list.html in the root theme. In short, if a theme isn’t provided a template it will fall back and use the root theme’s template.
You can set the theme from a controller using self.meta.view.theme:
def startup(self):
    self.meta.view.theme = 'mobile'
Ferris adds a few useful items to the template context.
Note
Although Sphinx displays these functions as belonging to the ‘template’ module, do not use that name when calling the function. Call the function using it’s name only.
Uses format_value() to translate a value into a formatted string.
Maps to time_util.localize() to localize a datetime object.
Uses ferris.json_util to serialize an object to JSON. Can also be used as a filter.
Allows you to inflect words:
{{inflector.titleize('a_string')}}
{{inflector.underscore('SomeThing')}}
{{inflector.pluralize('plate')}}
{{inflector.singularize('plates')}}
Maps to google.appengine.ext.ndb.
The ferris object provides:
The name of the current route.
The current application’s hostname. This is from google.appengine.api.identity.get_default_version_hostname().
Ferris’ version string.
Maps to google.appengine.api.users.
Checks if the current user is an administrator as defined in the Google App Engine console. Maps directly to google.appengine.api.users.is_current_user_admin.
The view’s current theme.
All of the Settings configured for the application.
Returns true if the given plugin is registered.
List of active plugins.
When rendering from a controller the this object is available and provides:
The name of the current controller.
The current route - exposes Controller.route
This is also available via the top-level alias route.
The route’s prefix
The route’s action
Maps to Controller.uri to generate urls.
This is also available via the top-level alias uri.
Maps to Controller.uri_exists to check the existance of routes.
This is also available via the top-level alias uri_exists.
Maps to Controller.on_uri to check that the user is on the given route.
This is also available via the top-level alias on_uri.
Exposes the webapp2.Request object.
This is also available via the top-level alias request.
The current user.
This is also available via the top-level alias user.
Encodes a ndb key into an urlsafe string.
Decodes a urlsafe string into an ndb key.
Can be used to transform objects into strings.
Formatters are provided for datetime, date, ndb.Key, and ndb.Model classes. By default the date objects are localized and nicely formatted. The ndb formatters call the __unicode__ or __str__ method for their associated entities.
You can register additional formatters or overload the existing ones. Simply include something similar to the following in one of the bootstrap scripts (routes.py or listeners.py):
def format_foo(foo):
    return "%s: %s" % (foo.name, foo.value)

from ferris.core import template
template.formatters[Foo] = format_foo
Similar to controller events templates may also emit events by way of views. See Views for more information.
To trigger an event from a template use this.events like so:
{{this.events.my_event()}}
The default layouts that come with ferris offer the following built-in events:
Fires just before the closing body tag.
Fires just before the closing head tag.
Fires just before the view’s content.
Fires just after the view’s content.
Sometimes it’s useful to manually render a template outside of a controller or view context.
Renders the template given by name with the given context (variables). Uses the global context.
Note that none of the context provided by the controller (this) will be available.
Example:
from ferris.core.template import render_template

context = {"test": "One"}
result = render_template("test/test_template.html", context=context)
print result
I begin with a description of packet generation tools. Custom packet
generators, like hping and
nemesis, will allow you to create custom packets
to test protocols. Load generators, like MGEN,
will let you flood your network with packets to see how your network
responds to the additional traffic. We conclude with a brief
discussion of network emulators and simulators.
Many of the tools described in this chapter and the next are not
tools that you will need often, if ever. But should the need arise,
you will want to know about them. Some of these tools are described
quite briefly. My goal is to familiarize you with the tools rather
than to provide a detailed introduction. Unless you have a specific
need for one of these tools, you'll probably want to just skim
these chapters initially. Should the need arise, you'll know
the appropriate tool exists and can turn to the references for more
information.
First, to test software configuration and
protocols, it may be necessary to control the content of individual
fields within packets. For example, customized packets can be
essential to test whether a firewall is performing correctly. They
can also be used to investigate problems with specific protocols or
to collect information such as path MTU. They are wonderful learning
tools, but using them can be a lot of work and will require a very
detailed knowledge of the relevant protocols.
The second reason for generating
packets is to test performance. For this purpose, you typically
generate a large number of packets to see how your network or devices
on the network respond to the increased load. We have already done
some of this. In Chapter 4, "Path Characteristics", we looked at tools
that generated streams of packets to analyze link and path
performance. Basically, any network benchmark will have a packet
generator as a component. Typically, however, you won't have
much control over this component. The tools described here give you
much greater control over the number, size, and spacing of packets.
Unlike custom packet generators, load generators typically
won't provide much control over the contents of the packets.
These two uses are best thought of as extremes on a continuum rather
than mutually exclusive categories. Some programs lie somewhere
between these two extremes, providing a moderate degree of control
over packet contents and the functionality to generate multiple
packets. There is no one ideal tool, so you may want to become
familiar with several, depending on your needs.
Two programs,
hping and nemesis, are
briefly described here. A number of additional tools are cited at the
end of this section in case these utilities don't provide the
exact functionality you want or aren't easily ported to your
system. Of the two, hping is probably the better
known, but nemesis has features that recommend
it. Neither is perfect.
Generally, once you have the idea of how to use one of these tools,
learning another is simply a matter of identifying the options of
interest. Most custom packet generators have a reasonable set of
defaults that you can start with. Depending on what you want to do,
you select the appropriate options to change just what is
necessary -- ideally as little as possible.
Custom packet tools have a mixed reputation. They are extremely
powerful tools and, as such, can be abused. And some of their authors
seem to take great pride in this potential. These are definitely
tools that you should use with care. For some purposes, such as
testing firewalls, they can be indispensable. Just make sure it is
your firewall, and not someone else's, that you are testing.
When run with the default parameters, it
looks a lot like ping and is useful for checking
connectivity:
lnx1# hping 205.153.63.30
eth0 default routing interface selected (according to /proc)
HPING 205.153.63.30 (eth0 205.153.63.30): NO FLAGS are set, 40 headers + 0 data
bytes
46 bytes from 205.153.63.30: flags=RA seq=0 ttl=126 id=786 win=0 rtt=4.4 ms
46 bytes from 205.153.63.30: flags=RA seq=1 ttl=126 id=1554 win=0 rtt=4.5 ms
46 bytes from 205.153.63.30: flags=RA seq=2 ttl=126 id=2066 win=0 rtt=4.6 ms
46 bytes from 205.153.63.30: flags=RA seq=3 ttl=126 id=2578 win=0 rtt=5.5 ms
46 bytes from 205.153.63.30: flags=RA seq=4 ttl=126 id=3090 win=0 rtt=4.5 ms
--- 205.153.63.30 hping statistic ---
5 packets tramitted, 5 packets received, 0% packet loss
round-trip min/avg/max = 4.4/4.7/5.5 ms
When using ICMP, this is what one of the replies from the output
looks like:
46 bytes from 205.153.63.30: icmp_seq=0 ttl=126 id=53524 rtt=2.2 ms
If you want more information, you can use
-V for verbose mode. Here is what a reply looks
like with this option:
46 bytes from 172.16.2.236: flags=RA seq=0 ttl=63 id=12961 win=0 rtt=1.0 ms
tos = 0 len = 40
seq = 0 ack = 108515096
sum = a5bc urp = 0
Other options that control the general
behavior of hping include
-c to set the number of packets to send,
-i to set the time between packets,
-n for numeric output (no name resolution), and
-q for quiet output (just summary lines when
done).
Another group of options allows you to
control the contents of the packet header. For example, the
-a option can be used to specify an arbitrary
source address for a packet. Here is an example:
lnx1# hping2 -a 205.153.63.30 172.16.2.236
eth0 default routing interface selected (according to /proc)
HPING 172.16.2.236 (eth0 172.16.2.236): NO FLAGS are set, 40 headers + 0 data
bytes
--- 172.16.2.236 hping statistic ---
4 packets tramitted, 0 packets received, 100% packet loss
round-trip min/avg/max = 0.0/0.0/0.0 ms
Spoofing source
addresses can be useful when testing router and firewall setup, but
you should do this in a controlled environment. All routers should be
configured to drop any packets with invalid source addresses. That
is, if a packet claims to have a source that is not on the local
network or that is not from a device for which the local network
should be forwarding a packet, then the source address is illegal and
the packet should be dropped. By creating packets with illegal source
addresses, you can test your routers to be sure they are, in fact,
dropping these packets. Of course, you need to use a tool like
ethereal or tcpdump to see
what is getting through and what is blocked.[36]
[36]If this is all you are testing, you may
prefer to use a specialized tool like
egressor.
bsd2# hping -2 -p 53 -E data.dns -d 31 205.153.63.30
Be warned, constructing a usable data file is nontrivial. Here is a
crude C program that will construct the data needed for this DNS
example:
#include <stdio.h>

int main(void)
{
    FILE *fp;

    fp = fopen("data.dns", "w");
    fprintf(fp, "%c%c%c%c", 0x00, 0x01, 0x01, 0x00);
    fprintf(fp, "%c%c%c%c", 0x00, 0x01, 0x00, 0x00);
    fprintf(fp, "%c%c%c%c", 0x00, 0x00, 0x00, 0x00);
    fprintf(fp, "%c%s", 0x03, "www");
    fprintf(fp, "%c%s", 0x05, "cisco");
    fprintf(fp, "%c%s%c", 0x03, "com", 0x00);
    fprintf(fp, "%c%c%c%c", 0x00, 0x01, 0x00, 0x01);
    fclose(fp);
    return 0;
}
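For comparison, the same 31-byte payload can be built in Python (an illustrative alternative, not from the original text; the field layout follows the standard DNS header format):

```python
import struct

def dns_query(name, qtype=1, qclass=1, qid=1):
    # header: id, flags (recursion desired = 0x0100), QDCOUNT=1, other counts = 0
    header = struct.pack(">HHHHHH", qid, 0x0100, 1, 0, 0, 0)
    # QNAME: length-prefixed labels, terminated by a zero byte
    qname = b"".join(
        bytes([len(label)]) + label.encode() for label in name.split(".")
    ) + b"\x00"
    return header + qname + struct.pack(">HH", qtype, qclass)

payload = dns_query("www.cisco.com")
print(len(payload))   # 31, matching the -d 31 on the hping command line
```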
Finally, hping can
also be put in dump mode so that the contents of the reply packets
are displayed in hex:
bsd2# hping -c 1 -j 172.16.2.230
HPING 172.16.2.230 (ep0 172.16.2.230): NO FLAGS are set, 40 headers + 0 data
bytes
46 bytes from 172.16.2.230: flags=RA seq=0 ttl=128 id=60017 win=0 rtt=2.1 ms
0060 9706 2222 0060 088f 5f0e 0800 4500
0028 ea71 0000 8006 f26b ac10 02e6 ac10
02ec 0000 0a88 0000 0000 1f41 a761 5014
0000 80b3 0000 0000 0000 0000
--- 172.16.2.230 hping statistic ---
1 packets transmitted, 1 packets received, 0% packet loss
round-trip min/avg/max = 2.1/2.1/2.1 ms
Here is an example that sends a TCP packet:
bsd2# nemesis-tcp -v -D 205.153.63.30 -S 205.153.60.236
TCP Packet Injection -=- The NEMESIS Project 1.1
(c) 1999, 2000 obecian <obecian@celerity.bartoli.org>
205.153.63.30
[IP] 205.153.60.236 > [Ports] 42069 > 23
[Flags]
[TCP Urgent Pointer] 2048
[Window Size] 512
[IP ID] 0
[IP TTL] 254
[IP TOS] 0x18
[IP Frag] 0x4000
[IP Options]
Wrote 40 bytes
TCP Packet Injected
Here is an example setting the SYN and ACK flags and the destination
port:
bsd2# nemesis-tcp -S 172.16.2.236 -D 205.153.63.30 -fS -fA -y 22
The other programs in the nemesis suite work
pretty much the same way. Here is an example for sending an ICMP ECHO
REQUEST:
bsd2# nemesis-icmp -v -S 172.16.2.236 -D 205.153.63.30 -i 8
ICMP Packet Injection -=- The NEMESIS Project 1.1
(c) 1999, 2000 obecian <obecian@celerity.bartoli.org>
[IP] 172.16.2.236 > 205.153.63.30
[Type] ECHO REQUEST
[Sequence number] 0
[IP ID] 0
[IP TTL] 254
[IP TOS] 0x18
[IP Frag] 0x4000
Wrote 48 bytes
ICMP Packet Injected
The -P option can
be used to read the data for the packet from a file. For example,
here is the syntax to send a DNS query.
bsd2# nemesis-dns -v -S 172.16.2.236 -D 205.153.63.30 -q 1 -P data.dns
DNS Packet Injection -=- The NEMESIS Project 1.1
(c) 1999, 2000 obecian <obecian@celerity.bartoli.org>
[IP] 172.16.2.236 > 205.153.63.30
[Ports] 42069 > 53
[# Questions] 1
[# Answer RRs] 0
[# Authority RRs] 0
[# Additional RRs] 0
[IP ID] 420
[IP TTL] 254
[IP TOS] 0x18
[IP Frag] 0x4000
[IP Options]
00 01 01 00 00 01 00 00 00 00 00 00 03 77 77 .............ww
77 05 63 69 73 63 6F 03 63 6F 6D 00 00 01 00 w.cisco.com....
01 .
Wrote 40 bytes
DNS Packet Injected
Another packet construction tool is ipsend, which comes with the
IP Filter package. Here is an example:
bsd2# ipsend -v -i ep0 -g 172.16.2.1 -d 205.153.63.30
Device: ep0
Source: 172.16.2.236
Dest: 205.153.63.30
Gateway: 172.16.2.1
mtu: 1500
Yet
another program worth considering is sock.
sock is described in the first volume of Richard
W. Stevens' TCP/IP Illustrated and is freely
downloadable. While sock doesn't give the
range of control some of these other programs give, it is a nice
pedagogical tool for learning about TCP/IP. Beware, there are other
totally unrelated programs called sock.
Finally, some sniffers and
analyzers support the capture and retransmission of packets. Look at
the documentation for the sniffer you are using, particularly if it
is a commercial product. If you decide to use this feature, proceed
with care. Retransmission of traffic, if used indiscriminately, can
create some severe problems.
socket and netcat
While they don't fit cleanly into
this or the next category, netcat (or
nc) and Juergen Nickelsen's
socket are worth mentioning. (The
netcat documentation identifies only the author
as Hobbit.) Both are programs that can be used to establish a
connection between two machines. They are useful for debugging,
moving files, and exploring and learning about TCP/IP. Both can be
used from scripts.
You'll need to start one copy as a server (in listen mode) on
one computer:
bsd1# nc -l -p 2000
Then start another as a client on a second computer:
bsd2# nc 172.16.2.231 2000
Here is the equivalent command for socket as a
server:
bsd1# socket -s 2000
Here is the equivalent command for a client:
bsd2# socket 172.16.2.231 2000
In all examples 2000 is an arbitrarily selected
port number.
Here is a simple example using nc to copy a file
from one system to another. The server is opened with output
redirected to a file:
bsd1# nc -l -p 2000 > tmp
Then the file is piped to the client:
bsd2# cat README | nc 172.16.2.231 2000
^C punt!
Finally, nc is terminated with a Ctrl-C. The
contents of README on bsd1
have been copied to the file tmp on
bsd2. These programs can be cleaner than
telnet in some testing situations since, unlike
telnet, they don't attempt any session
negotiations when started. Play with them, and you are sure to find a
number of other uses.
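Under the hood, all nc and socket do is open a TCP connection and shuttle bytes between the two ends. A minimal Python sketch of the same file-copy idea, run entirely over the loopback interface (the port is chosen by the OS and the payload is arbitrary), makes this concrete:

```python
import socket
import threading

# A stripped-down stand-in for "nc -l -p 2000 > tmp" and its client,
# run over loopback so the example is self-contained.

def serve_once(server_sock, received):
    conn, _ = server_sock.accept()
    with conn:
        while True:
            chunk = conn.recv(4096)
            if not chunk:          # peer closed: end of "file"
                break
            received.append(chunk)

server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind(("127.0.0.1", 0))      # port 0: let the OS pick a free port
server.listen(1)
port = server.getsockname()[1]

received = []
t = threading.Thread(target=serve_once, args=(server, received))
t.start()

# The client side, equivalent in spirit to: cat README | nc 127.0.0.1 <port>
client = socket.create_connection(("127.0.0.1", port))
client.sendall(b"contents of README\n")
client.close()

t.join()
server.close()
print(b"".join(received))  # b'contents of README\n'
```

Like nc, the sketch performs no session negotiation at all; the connection carries exactly the bytes you hand it.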
Almost any application can be used to
generate traffic. A few tools, such as ping and
ttcp, are particularly easy to use for this
purpose. For example, by starting multiple ping
sessions in the background, by varying the period between packets
with the -i option, and by varying the packet
sizes with the -s option, you can easily
generate a wide range of traffic loads. Unfortunately, this
won't generate the type of traffic you may need for some types
of tests. Two tools, spray and
mgen, are described here. The better known of
these is probably spray. (It was introduced in
Chapter 4, "Path Characteristics".) It is also frequently included with
systems so you may already have a copy. mgen is
one of the most versatile.
Here is an example of spray using default values:
bsd2# spray sol1
sending 1162 packets of lnth 86 to 172.16.2.233 ...
in 0.12 seconds elapsed time
191 packets (16.44%) dropped
Sent: 9581 packets/sec, 804.7K bytes/sec
Rcvd: 8006 packets/sec, 672.4K bytes/sec
You
should not be alarmed that packets are being dropped. The idea is to
send packets as fast as possible so that the interface will be
stressed and packets will be lost. spray is most
useful in comparing the performance of two machines. For example, you
might want to see if your server can keep up with your clients. To
test this, you'll want to use spray to
send packets from the client to the server. If the number of packets
dropped is about the same, the machines are fairly evenly matched. If
a client is able to overwhelm a server, then you may have a potential
problem.
In the previous example, spray was run on
bsd2, flooding sol1. Here
are the results of running spray on
sol1, flooding bsd2:
sol1# spray bsd2
sending 1162 packets of length 86 to 172.16.2.236 ...
610 packets (52.496%) dropped by 172.16.2.236
36 packets/sec, 3144 bytes/sec
Unfortunately, while spray can alert you to a
problem, it is unable to differentiate among the various reasons why
a packet was lost -- collision, slow interface, lack of buffer
space, and so on. The obvious things to look at are the speed of the
computer and its interfaces.
The second traffic generation tool is
mgen. It can be run in command-line mode or by
using the -g option in graphical mode. At its
simplest, it can be used with command-line options to generate
traffic. Here is a simple example:
bsd2# mgen -i ep0 -b 205.153.63.30:2000 -r 10 -s 64 -d 5
MGEN: Version 3.1a3
MGEN: Loading event queue ...
MGEN: Seeding random number generator ...
MGEN: Beginning packet generation ...
(Hit <CTRL-C> to stop)Trying to set IP_TOS = 0x0
MGEN: Packets Tx'd : 50
MGEN: Transmission period: 5.018 seconds.
MGEN: Ave Tx pkt rate : 9.964 pps.
MGEN: Interface Stats : ep0
Frames Tx'd : 55
Tx Errors : 0
Collisions : 0
MGEN: Done.
Other options for
mgen include setting the interface
(-i), the destination address and port
(-b), the packet rate (-r),
the packet size (-s), and the duration of the
flow in seconds (-d). There are a number of
other options described in the documentation, such as the type of
service and TTL fields.
The real strength of
mgen comes when you use it with a script. Here
is a very simple example of a script called demo:
START NOW
00001 1 ON 205.153.63.30:5000 PERIODIC 5 64
05000 1 MOD 205.153.63.30:5000 POISSON 20 64
15000 1 OFF
Here is an example of the invocation of mgen
with a script:
bsd2# mgen -i ep0 demo
MGEN: Version 3.1a3
MGEN: Loading event queue ...
MGEN: Seeding random number generator ...
MGEN: Beginning packet generation ...
MGEN: Packets Tx'd : 226
MGEN: Transmission period: 15.047 seconds.
MGEN: Ave Tx pkt rate : 15.019 pps.
MGEN: Interface Stats : ep0
Frames Tx'd : 234
Tx Errors : 0
Collisions : 0
MGEN: Done.
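Reading the script is easier once you notice that the first field appears to be a start time in milliseconds — the 15000 ... OFF line matches the 15.047-second transmission period reported above — followed by a flow number, the event, the destination, the pattern (PERIODIC or POISSON), the rate in packets per second, and the packet size in bytes. (As a sanity check, 5 packets/second for 5 seconds plus roughly 20 packets/second for 10 seconds is about 225 packets, consistent with the 226 reported.) A PERIODIC flow can be approximated with a short Python sketch; the loopback address and small counts here are chosen only so the example is self-contained:

```python
import socket
import time

def periodic_flow(dest, rate_pps, size, count):
    """Send `count` UDP packets of `size` bytes at roughly `rate_pps`
    packets per second -- a rough stand-in for an mgen PERIODIC flow."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    payload = b"\x00" * size
    for _ in range(count):
        sock.sendto(payload, dest)
        time.sleep(1.0 / rate_pps)
    sock.close()

# Demo against a loopback receiver so no traffic leaves the machine.
recv = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
recv.bind(("127.0.0.1", 0))        # port 0: let the OS pick a free port
recv.settimeout(5)
periodic_flow(recv.getsockname(), rate_pps=50, size=64, count=5)
got = [recv.recvfrom(2048)[0] for _ in range(5)]
recv.close()
print(len(got), len(got[0]))  # 5 64
```

This is only an approximation: a real mgen flow also stamps sequence numbers and timestamps into the payload so that a receiver such as drec can compute loss and delay.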
For many purposes,
mgen is the only tool from the MGEN tool set
that you will need. But for some purposes, you will need more.
drec is a receiver program that can log received
data. mgen and drec can be
used with RSVP (with ISI's rsvpd). You
will recall that with RSVP, the client must establish the session.
drec has this capability. Like
mgen, drec has an optional
graphical interface. In addition to mgen and
drec, the MGEN tool set includes a number of
additional utilities that can be used to analyze the data collected
by drec.
One last note on load
generators -- software load generators assume that the systems
they run on are fast enough to generate enough traffic to adequately
load the system being tested. In some circumstances, this will not be
true. For some applications, dedicated hardware load generators must
be used. | https://docstore.mik.ua/orelly/networking_2ndEd/tshoot/ch09_01.htm | CC-MAIN-2019-51 | refinedweb | 3,047 | 74.69 |
Class yii\base\Model
Model is the base class for data models.
Model implements the following commonly used features:
- attribute declaration: by default, every public class member is considered as a model attribute
- attribute labels: each attribute may be associated with a label for display purpose
- massive attribute assignment
- scenario-based validation
Model also raises the following events when performing data validation:
- EVENT_BEFORE_VALIDATE: an event raised at the beginning of validate()
- EVENT_AFTER_VALIDATE: an event raised at the end of validate()
You may directly use Model to store model data, or extend it with customization.
For more details and usage information on Model, see the guide article on models.
Public Properties
Public Methods
Protected Methods
Events
Constants
Property Details
Attribute values (name => value).
public void setAttributes ( $values, $safeOnly = true )
Errors for all attributes or the specified attribute. Empty array is returned if no error. Note that when returning errors for all attributes, the result is a two-dimensional array, like the following:
[
    'username' => [
        'Username is required.',
        'Username must contain only word characters.',
    ],
    'email' => [
        'Email address is invalid.',
    ]
]
The first errors. The array keys are the attribute names, and the array values are the corresponding error messages. An empty array will be returned if there is no error.
An iterator for traversing the items in the list.
The scenario that this model is in. Defaults to SCENARIO_DEFAULT.
public void setScenario ( $value )
All the validators declared in the model.
Method Details
Returns the attribute names that are subject to validation in the current scenario.
Adds a new error to the specified attribute.
Adds a list of errors.
This method is invoked after validation ends.
The default implementation raises an
afterValidate event.
You may override this method to do postprocessing after validation.
Make sure the parent implementation is invoked so that the event can be raised.
Returns the attribute hints.
Attribute hints are mainly used for display purpose. For example, given an attribute
isPublic, we can declare a hint
Whether the post should be visible for not logged in users,
which provides user-friendly description of the attribute meaning and can be displayed to end users.
Unlike labels, hints will not be generated if their explicit declaration is omitted.
Note, in order to inherit hints defined in the parent class, a child class needs to
merge the parent hints with child hints using functions such as
array_merge().
See also generateAttributeLabel().
Returns the list of attribute names.
By default, this method returns all public non-static properties of the class. You may override this method to change the default behavior.
This method is invoked before validation starts.
The default implementation raises a
beforeValidate event.
You may override this method to do preliminary checks before validation.
Make sure the parent implementation is invoked so that the event can be raised.
Removes errors for all attributes or a single attribute.
Creates validator objects based on the validation rules specified in rules().
Unlike getValidators(), each time this method is called, a new list of validators will be returned.; }, ];
In this method, you may also want to return different lists of fields based on some context information. For example, depending on $scenario or the privilege of the current application user, you may return different sets of visible fields or filter out some fields.
The default implementation of this method returns attributes() indexed by the same attribute names.
Generates a user-friendly attribute label based on the given attribute name.
This is done by replacing underscores, dashes and dots with blanks and changing the first letter of each word to upper case. For example, 'department_name' or 'DepartmentName' will generate 'Department Name'.
Returns the text hint for the specified attribute.
See also attributeHints().
Returns the text label for the specified attribute.
See also:
Returns attribute values.
Returns the errors for all attributes or a single attribute.
See also:
Returns the first error of the specified attribute.
See also:
Returns the first error of every attribute in the model.
See also:
Returns an iterator for traversing the attributes in the model.
This method is required by the interface IteratorAggregate.
Returns the scenario that this model is used in.
Scenario affects how validation is performed and which attributes can be massively assigned.
Returns all the validators declared in rules().
This method differs from getActiveValidators() in that the latter only returns the validators applicable to the current $scenario.
Because this method returns an ArrayObject object, you may manipulate it by inserting or removing validators (useful in model behaviors). For example,
$model->validators[] = $newValidator;
Returns a value indicating whether there is any validation error.
Returns a value indicating whether the attribute is active in the current scenario.
See also activeAttributes().
Returns a value indicating whether the attribute is required.
This is determined by checking if the attribute is associated with a required validation rule in the current $scenario.
Note that when the validator has a conditional validation applied using
$when this method will return
false regardless of the
when condition because it may be called
before the model is loaded with data.
Returns a value indicating whether the attribute is safe for massive assignments.
See also safeAttributes().
Populates the model with input data.
This method provides a convenient shortcut for:
if (isset($_POST['FormName'])) {
    $model->attributes = $_POST['FormName'];
    if ($model->save()) {
        // handle success
    }
}
which, with
load() can be written as:
if ($model->load($_POST) && $model->save()) {
    // handle success
}
load() gets the
'FormName' from the model's formName() method (which you may override), unless the
$formName parameter is given. If the form name is empty,
load() populates the model with the whole of
$data,
instead of
$data['FormName'].
Note, that the data being populated is subject to the safety check by setAttributes().
Populates a set of models with the data from end user.
This method is mainly used to collect tabular data input.
The data to be loaded for each model is
$data[formName][index], where
formName
refers to the value of formName(), and
index the index of the model in the
$models array.
If formName() is empty,
$data[index] will be used to populate each model.
The data being populated to each model is subject to the safety check by setAttributes().
Returns whether there is an element at the specified offset.
This method is required by the SPL interface ArrayAccess.
It is implicitly called when you use something like
isset($model[$offset]).
Returns the element at the specified offset.
This method is required by the SPL interface ArrayAccess.
It is implicitly called when you use something like
$value = $model[$offset];.
Sets the element at the specified offset.
This method is required by the SPL interface ArrayAccess.
It is implicitly called when you use something like
$model[$offset] = $item;.
Sets the element value at the specified offset to null.
This method is required by the SPL interface ArrayAccess.
It is implicitly called when you use something like
unset($model[$offset]).
This method is invoked when an unsafe attribute is being massively assigned.
The default implementation will log a warning message if YII_DEBUG is on. It does nothing otherwise.
See also scenarios().
Returns the attribute names that are safe to be massively assigned in the current scenario.
Returns a list of scenarios and the corresponding active attributes.
An active attribute is one that is subject to validation in the current scenario. The returned array should be in the following format:
[
    'scenario1' => ['attribute11', 'attribute12', ...],
    'scenario2' => ['attribute21', 'attribute22', ...],
    ...
]
By default, an active attribute is considered safe and can be massively assigned.
If an attribute should NOT be massively assigned (thus considered unsafe),
please prefix the attribute with an exclamation character (e.g.
'!rank').
The default implementation of this method will return all scenarios found in the rules() declaration. A special scenario named SCENARIO_DEFAULT will contain all attributes found in the rules(). Each scenario will be associated with the attributes that are being validated by the validation rules that apply to the scenario.
Sets the attribute values in a massive way.
See also:
Sets the scenario for the model.
Note that this method does not check if the scenario exists or not. The method validate() will perform this check.
Performs the data validation.
This method executes the validation rules applicable to the current $scenario. The following criteria are used to determine whether a rule is currently applicable:
- the rule must be associated with the attributes relevant to the current scenario;
- the rules must be effective for the current scenario.
This method will call beforeValidate() and afterValidate() before and after the actual validation, respectively. If beforeValidate() returns false, the validation will be cancelled and afterValidate() will not be called.
Errors found during the validation can be retrieved via getErrors(), getFirstErrors() and getFirstError().
Validates multiple models.
This method will validate every model. The models being validated may be of the same or different types.
Event Details
An event raised at the end of validate()
An event raised at the beginning of validate(). You may set yii\base\ModelEvent::$isValid to be false to stop the validation. | http://www.yiiframework.com/doc-2.0/yii-base-model.html | CC-MAIN-2017-30 | refinedweb | 1,503 | 50.33 |
Reading and Writing CSV Files in Python using CSV Module & Pandas
What is a CSV file?
A CSV file is a type of plain text file that uses specific structuring to arrange tabular data. CSV is a common format for data interchange because it is compact, simple, and general. Many online services allow their users to export tabular data from the website into a CSV file. CSV files open in Excel, and nearly all databases provide a tool to import from a CSV file. The standard format is defined by rows and columns of data. Each row is terminated by a newline to begin the next row, and within each row, the columns are separated by a comma.
In this tutorial, you will learn:
- What is a CSV file?
- CSV Sample File.
- Python CSV Module
- CSV Module Functions
- Reading CSV Files
- Reading as a Dictionary
- Writing to CSV Files
- Reading CSV Files with Pandas
- Writing to CSV Files with Pandas
CSV Sample File.
Data in the form of tables is also called CSV (comma separated values) - literally "comma-separated values." This is a text format intended for the presentation of tabular data. Each line of the file is one line of the table. The values of individual columns are separated by a separator symbol - a comma (,), a semicolon (;) or another symbol. CSV can be easily read and processed by Python.
Consider the following table
Table Data
You can represent this table in csv as below.
CSV Data
Programming language, Designed by, Appeared, Extension
Python, Guido van Rossum, 1991, .py
Java, James Gosling, 1995, .java
C++, Bjarne Stroustrup,1983,.cpp
As you can see each row is a new line, and each column is separated with a comma. This is an example of how a CSV file looks like.
Python CSV Module
Python provides a CSV module to handle CSV files. To read/write data, you need to loop through rows of the CSV. You need to use the split method to get data from specified columns.
CSV Module Functions
In CSV module documentation you can find following functions:
- csv.field_size_limit – return maximum field size
- csv.get_dialect – get the dialect which is associated with the name
- csv.list_dialects – show all registered dialects
- csv.reader – read data from a csv file
- csv.register_dialect - associate dialect with name
- csv.writer – write data to a csv file
- csv.unregister_dialect - delete the dialect associated with the name from the dialect registry
- csv.QUOTE_ALL - Quote everything, regardless of type.
- csv.QUOTE_MINIMAL - Quote fields with special characters
- csv.QUOTE_NONNUMERIC - Quote all fields that aren't numbers
- csv.QUOTE_NONE – Don't quote anything in output
In this tutorial, we are going to focus only on the reader and writer functions which allow you to edit, modify, and manipulate the data in a CSV file.
How to Read a CSV File
To read a CSV file, you create a reader object with csv.reader(). Let's take a look at this example, and we will find out that working with a csv file isn't so hard.
#import necessary modules
import csv

with open('X:\\data.csv', 'rt') as f:
    data = csv.reader(f)
    for row in data:
        print(row)
When you execute the program above, the output will be:
['Programming language', ' Designed by', ' Appeared', ' Extension'] ['Python', ' Guido van Rossum', ' 1991', ' .py'] ['Java', ' James Gosling', ' 1995', ' .java'] ['C++', ' Bjarne Stroustrup', '1983', '.cpp']
How to Read a CSV as a Dictionary
You can also use DictReader to read CSV files. The results are interpreted as dictionaries where the keys come from the header row and the values from each subsequent row.
Consider the following code
#import necessary modules import csv reader = csv.DictReader(open("file2.csv")) for raw in reader: print(raw)
The result of this code is:
OrderedDict([('Programming language', 'Python'), ('Designed by', 'Guido van Rossum'), (' Appeared', ' 1991'), (' Extension', ' .py')]) OrderedDict([('Programming language', 'Java'), ('Designed by', 'James Gosling'), (' Appeared', ' 1995'), (' Extension', ' .java')]) OrderedDict([('Programming language', 'C++'), ('Designed by', ' Bjarne Stroustrup'), (' Appeared', ' 1985'), (' Extension', ' .cpp')])
This way of reading data from a CSV file is easier than the earlier method. However, it still isn't the best way to read data.
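One wrinkle visible in the output above is the leading space in values such as ' 1991': it comes from the space after each comma in the sample file. The csv module's skipinitialspace parameter strips it; the sketch below uses an in-memory sample via io.StringIO so no real file path is assumed:

```python
import csv
import io

# In-memory stand-in for the data.csv sample used earlier.
sample = (
    "Programming language, Designed by, Appeared, Extension\n"
    "Python, Guido van Rossum, 1991, .py\n"
    "Java, James Gosling, 1995, .java\n"
)

# skipinitialspace=True drops the whitespace that follows each delimiter,
# in the header row as well as in the data rows.
rows = list(csv.DictReader(io.StringIO(sample), skipinitialspace=True))
print(rows[0]["Appeared"])  # 1991 -- no leading space
```

The same parameter works with csv.reader() and with plain lists of rows.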
How to Write a CSV File
When you have a set of data that you would like to store in a CSV file, you have to use the writer() function. To iterate the data over the rows (lines), you have to use the writerow() function.
Consider the following example. We write data into a file "writeData.csv" where the delimiter is a comma.
#import necessary modules
import csv

with open('X:\\writeData.csv', mode='w', newline='') as file:
    writer = csv.writer(file, delimiter=',', quotechar='"', quoting=csv.QUOTE_MINIMAL)
    # way to write to csv file
    writer.writerow(['Programming language', 'Designed by', 'Appeared', 'Extension'])
    writer.writerow(['Python', 'Guido van Rossum', '1991', '.py'])
    writer.writerow(['Java', 'James Gosling', '1995', '.java'])
    writer.writerow(['C++', 'Bjarne Stroustrup', '1985', '.cpp'])
The result in the csv file is:
Programming language,Designed by,Appeared,Extension
Python,Guido van Rossum,1991,.py
Java,James Gosling,1995,.java
C++,Bjarne Stroustrup,1985,.cpp
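When each row is naturally a dictionary rather than a list, csv.DictWriter is often more convenient than writer(). The sketch below writes to an in-memory io.StringIO buffer instead of a real drive path, so it runs anywhere:

```python
import csv
import io

# io.StringIO stands in for an open file, so no drive path is assumed.
buffer = io.StringIO()
fields = ['Programming language', 'Designed by', 'Appeared', 'Extension']

writer = csv.DictWriter(buffer, fieldnames=fields)
writer.writeheader()                      # emits the header row from fieldnames
writer.writerow({'Programming language': 'Python',
                 'Designed by': 'Guido van Rossum',
                 'Appeared': '1991',
                 'Extension': '.py'})

print(buffer.getvalue())
```

To write to a real file, replace the buffer with open(path, 'w', newline=''); the newline='' argument prevents blank lines between rows on Windows.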
Reading CSV Files with Pandas
Pandas is an open-source library that allows you to perform data manipulation in Python. Pandas provides an easy way to create, manipulate, and delete data.
You must install the pandas library with the command pip install pandas. On Windows, you will execute this command in the Command Prompt, while on Linux, in the terminal.
Reading the CSV into a pandas DataFrame is very quick and easy:
#import necessary modules import pandas result = pandas.read_csv('X:\data.csv') print(result)
Result:
  Programming language        Designed by  Appeared Extension
0               Python   Guido van Rossum      1991       .py
1                 Java      James Gosling      1995     .java
2                  C++  Bjarne Stroustrup      1983      .cpp
It is a very useful library. In just three lines of code you get the same result as earlier. Pandas knows that the first line of the CSV contains column names, and it will use them automatically.
Writing to CSV Files with Pandas
Writing to a CSV file with Pandas is as easy as reading, as you can see for yourself. First you must create a DataFrame based on the following code.
from pandas import DataFrame

C = {'Programming language': ['Python', 'Java', 'C++'],
     'Designed by': ['Guido van Rossum', 'James Gosling', 'Bjarne Stroustrup'],
     'Appeared': ['1991', '1995', '1985'],
     'Extension': ['.py', '.java', '.cpp'],
    }

df = DataFrame(C, columns=['Programming language', 'Designed by', 'Appeared', 'Extension'])
# here you have to write the path where the result file will be stored
export_csv = df.to_csv(r'X:\pandaresult.csv', index=None, header=True)
print(df)
Here is the output
  Programming language        Designed by  Appeared Extension
0               Python   Guido van Rossum      1991       .py
1                 Java      James Gosling      1995     .java
2                  C++  Bjarne Stroustrup      1985      .cpp
And CSV file is created at the specified location.
Conclusion
So, now you know how to use the csv module to read and write data in the CSV format. CSV files are widely used in software applications because they are easy to read and manage, and their small size makes them relatively fast to process and transmit.
The csv module provides various functions and classes which allow you to read and write easily. You can look at the official Python documentation and find some more interesting tips and modules. CSV is a great way to save, view, and send data, and it isn't as hard to learn as it seems at the beginning. With a little practice, you'll master it.
Pandas is a great alternative to read CSV files.
Also, there are other ways to parse text files with libraries like ANTLR, PLY, and PlyPlus. They can all handle heavy-duty parsing, and if simple String manipulation doesn't work, there are regular expressions which you can use. | https://girishgodage.in/blog/pythonReadWriteCSV | CC-MAIN-2022-40 | refinedweb | 1,258 | 67.65 |
Hi all,
I’m going through ex43 in LPTHW and was wondering: why do we need an engine at all?
Mainly because I’m not able to completely comprehend the engine code.
I tried to do it without the engine and it works…
from sys import exit
from random import randint
from textwrap import dedent


class Scene(object):

    def enter(self):
        print("This scene is not yet configured.")
        print("Subclass it and implement enter().")
        exit(1)


class Death():

    def enter(self):
        quips = [
            "Taunt1: You died. You kinda suck at this.",
            "Taunt2: Your mom would be proud... if she were smarter.",
            "Taunt3: hdh"
        ]
        print(quips[randint(0, len(quips) - 1)])
        exit(1)


class CentralCorridor(Scene):

    def enter(self):
        print(dedent("""
            The Gothons of Planet Percal #25 have invaded your ship and
            destroyed your entire crew. You are the last surviving member
            and your last mission is to get the neutron destruct bomb from
            the Weapons Armory, put it in the bridge, and blow the ship up
            after getting into an escape pod.
            """))

        # (the prompt and the first branch were garbled in the pasted code;
        # reconstructed minimally from the surrounding elif/else branches)
        action = input("> ")

        if action == "shoot":
            print(dedent("""
                You're dead
                """))
            return Death.enter(self)
        elif action == "dodge":
            print(dedent("""
                you dodge, you die
                """))
            return Death.enter(self)
        elif action == 'tell a joke':
            print(dedent("""
                Joke: Good one! jump through the weapon armory door.
                """))
            return LaserWeaponArmory.enter(self)
        else:
            print("Does not Compute")
            return CentralCorridor.enter(self)


class LaserWeaponArmory(Scene):

    def enter(self):
        print(dedent("""
            You do a dive and get in. There's a keypad lock on the box and
            you need the code to get the bomb out. If you get the code
            wrong ten times then the lock closes forever and you can't get
            the bomb. The code is 3 digits.
            """))

        code = f"{randint(1,9)}{randint(1,9)}{randint(1,9)}"
        guess = input("[keypad]>")
        guesses = 0

        while (guess != code and int(guess) != 999) and guesses < 10:
            print("BZZZZZEDDD!")
            guesses += 1
            guess = input("[keypad]>")

        if guess == code or int(guess) == 999:
            print(dedent("""
                You guessed the right code, you advance to the bridge
                """))
            return TheBridge.enter(self)
        else:
            print(dedent("""
                the lock buzzes one last time and you die
                """))
            return Death.enter(self)


class TheBridge(Scene):

    def enter(self):
        print(dedent("""
            On the bridge there are 5 more gothons
            """))

        action = input("What do you do > ")

        if action == "throw the bomb":
            print(dedent("""
                You failed your mission
                """))
            return Death.enter(self)
        elif action == "slowly place the bomb":
            print(dedent("""
                You get into stealth mode and place the bomb safely!!
                """))
            return EscapePod.enter(self)
        else:
            print("Does not Compute")
            return TheBridge.enter(self)


class EscapePod(Scene):

    def enter(self):
        print(dedent("""
            There are 5 pods.... Which one do you take??
            """))

        good_pod = randint(1, 5)
        guess = input("[pod # ]> ")

        # compare numbers to numbers (the pasted version compared a string
        # to an int, which can never be equal)
        if int(guess) != good_pod and int(guess) != 9:
            print(dedent("""
                wrong pod..... you die
                """))
            return Death.enter(self)
        else:
            print(dedent("""
                You chose the right pod. You won!!!
                """))
            return Finished.enter(self)


class Finished(Scene):

    def enter(self):
        print("You won!!!!. Good job!!.")


game = CentralCorridor()
game.enter()
WEBVTT
00:00:00.725 --> 00:00:03.892
(bright techno music)
00:00:18.622 --> 00:00:20.875 line:15%
Alright, so I'm excited for the fourth year in a row,
00:00:20.875 --> 00:00:23.391 line:15%
we've got Bjarne here to talk to us about the keynote
00:00:23.391 --> 00:00:25.600 line:15%
that you did here at CppCon.
00:00:25.600 --> 00:00:27.430 line:15%
What is the topic of this year's talk?
00:00:27.430 --> 00:00:31.776 line:15%
Well, this year's talk was teaching and learning
00:00:31.776 --> 00:00:35.876 line:15%
C++, that's basically modern C++,
00:00:35.876 --> 00:00:38.609 line:15%
and I had to find something that I hadn't talked about
00:00:38.609 --> 00:00:42.776 line:15%
the other years, and something I knew something about also,
00:00:44.294 --> 00:00:46.866
and something that was important for the community.
00:00:46.866 --> 00:00:51.057
And it seems that I hit something that people like,
00:00:51.057 --> 00:00:53.172
and people are interested in.
00:00:53.172 --> 00:00:57.339
The comments afterwards, both on email and face to face
00:00:58.929 --> 00:01:01.150
have been quite positive.
00:01:01.150 --> 00:01:04.349
I was a bit nervous, I mean, this is a bunch of developers,
00:01:04.349 --> 00:01:06.442
geeks and such...
Yes!
00:01:06.442 --> 00:01:08.503
And you talk to them about education,
00:01:08.503 --> 00:01:10.544
it means a lot of them are just happy to be
00:01:10.544 --> 00:01:14.669
out of university, or wherever, but, no, it seemed to work.
00:01:14.669 --> 00:01:16.669
Yeah, one of the things I really enjoyed about the talk,
00:01:16.669 --> 00:01:18.155
which has already actually happened at the time
00:01:18.155 --> 00:01:20.118
when we are filming this, is that, like,
00:01:20.118 --> 00:01:21.550
there was this sort of call to action,
00:01:21.550 --> 00:01:23.697
and this reminder that all of us in the audience,
00:01:23.697 --> 00:01:25.074
sort of are teachers, right?
00:01:25.074 --> 00:01:25.907
Oh yes.
00:01:25.907 --> 00:01:27.396
Regardless of if we see ourselves as if that's
00:01:27.396 --> 00:01:29.526
a part of our identity as developers.
00:01:29.526 --> 00:01:31.013
That's right. I mean, we're developers,
00:01:31.013 --> 00:01:33.482
we have junior colleagues, we have senior colleagues
00:01:33.482 --> 00:01:36.435
that teach us, that's why it's teaching and learning.
00:01:36.435 --> 00:01:39.693
We're mentors, we have mentors, we learn new systems,
00:01:39.693 --> 00:01:42.008
we teach our systems to others.
00:01:42.008 --> 00:01:44.431
It's deep, deep, in our culture.
00:01:44.431 --> 00:01:48.598
Any well-operating development organization has endless,
00:01:49.838 --> 00:01:53.838
sort of, information flows among the developers.
00:01:54.694 --> 00:01:55.527
Right.
00:01:55.527 --> 00:01:58.245
And that was what I was thinking about.
00:01:58.245 --> 00:01:59.821
And sort of like you've got this role that you play
00:01:59.821 --> 00:02:01.433
inside of your development organization,
00:02:01.433 --> 00:02:02.798
then you have the same role that you kind of play
00:02:02.798 --> 00:02:04.234
in the community these days, right?
00:02:04.234 --> 00:02:06.071
In the community too, and you know,
00:02:06.071 --> 00:02:08.970
there's also a lot of people here who really are
00:02:08.970 --> 00:02:12.507
full time teachers, or part time teachers like me.
00:02:12.507 --> 00:02:13.706
Yes.
00:02:13.706 --> 00:02:17.873
And, then there are trainers, that means different
00:02:18.789 --> 00:02:22.460
from universities and those people will question about
00:02:22.460 --> 00:02:26.627
how they could help with their high school volunteering.
00:02:27.492 --> 00:02:30.159
Yeah.
It's pretty universal.
00:02:31.347 --> 00:02:33.126
Actually, let me introduce you really quick,
00:02:33.126 --> 00:02:36.670
this here, for those who don't know, is Gabriel Dos Reis,
00:02:36.670 --> 00:02:40.046
also on the Microsoft C++ team.
00:02:40.046 --> 00:02:43.078
Yeah, I've worked with Bjarne for a long time.
00:02:43.078 --> 00:02:47.000
Yeah. You can always recognize Gaby by the hat.
00:02:47.000 --> 00:02:48.295
(laughs)
Yes.
00:02:48.295 --> 00:02:50.756
We worked together in Texas for a while.
00:02:50.756 --> 00:02:52.793
(laughs)
00:02:52.793 --> 00:02:56.194
Yeah, so, you mentioned earlier, modern C++,
00:02:56.194 --> 00:02:59.160
and what is it, and why is it important?
00:02:59.160 --> 00:03:03.328
Oh yes, one of the themes that ran through the talk
00:03:03.328 --> 00:03:07.495
was we have to teach modern C++, and, so, what is that?
00:03:08.483 --> 00:03:13.297
And I define it as the best use, the most effective use,
00:03:13.297 --> 00:03:16.297
of whatever is the current standard.
00:03:17.542 --> 00:03:22.126
So, it's different today than it was three years ago,
00:03:22.126 --> 00:03:24.603
it will be different three years from now.
00:03:24.603 --> 00:03:27.142
So, it's not one of these things, like,
00:03:27.142 --> 00:03:29.440
there was a book about modern mathematics, I think,
00:03:29.440 --> 00:03:33.143
in '22, and people are still calling it modern mathematics.
00:03:33.143 --> 00:03:34.270
(laughs)
00:03:34.270 --> 00:03:36.132
When I was a student in the '60s...
00:03:36.132 --> 00:03:37.391
You know, I could be offended, right?
00:03:37.391 --> 00:03:38.302
(laughs)
00:03:38.302 --> 00:03:40.351
Well, he was a Frenchman...
00:03:40.351 --> 00:03:42.434
(laughs)
00:03:43.352 --> 00:03:47.099
But the point is, that it is the current,
00:03:47.099 --> 00:03:49.541
but, anyways, it's a good word.
00:03:49.541 --> 00:03:53.168
So, one of my points is, that there's no such thing as
00:03:53.168 --> 00:03:57.191
value neutral teaching, you're always teaching values.
00:03:57.191 --> 00:03:59.240
You're teaching what's good, what's bad,
00:03:59.240 --> 00:04:01.628
what should be done, what should be avoided;
00:04:01.628 --> 00:04:04.472
that's a decision, and a lot of people don't make
00:04:04.472 --> 00:04:08.639
that decision, they just take what we've "always done"...
00:04:10.374 --> 00:04:11.207
Yes.
00:04:11.207 --> 00:04:14.448
Which is one of the most horrible phrases there is,
00:04:14.448 --> 00:04:18.615
or "this is not the way we do it here". Oh I hate that.
00:04:20.276 --> 00:04:24.443
I definitely can say the C++ we started with in '95
00:04:25.473 --> 00:04:28.162
is very different from the one we're using today.
00:04:28.162 --> 00:04:29.710
Yeah, we've learned!
00:04:29.710 --> 00:04:33.523
But at the time when I started, Alex Stepanov, STL,
00:04:33.523 --> 00:04:37.690
just blew my mind, I was an aspiring mathematician
00:04:38.721 --> 00:04:41.766
at the time when I saw this thing coming out,
00:04:41.766 --> 00:04:45.933
I printed, and studied, it was nice, just beautiful.
00:04:47.248 --> 00:04:51.115
But then, it's not exactly the same C++ today,
00:04:51.115 --> 00:04:55.282
and so, how do you get to know what you used to do
00:04:56.282 --> 00:04:58.331
is bad, and then it becomes...
00:04:58.331 --> 00:05:00.480
It used to be good, and becomes bad.
00:05:00.480 --> 00:05:04.219
Yeah, it's...good and bad are the wrong words,
00:05:04.219 --> 00:05:08.007
you're very right about that, it's really more like,
00:05:08.007 --> 00:05:12.007
better and a bit dated.
(laughs)
00:05:12.981 --> 00:05:15.815
And, a lot of people are still stuck
00:05:15.815 --> 00:05:19.982
in the '80s and '90s, C-like stuff, or Java-like stuff.
00:05:23.808 --> 00:05:28.208
Everything is a new class you have to allocate.
00:05:28.208 --> 00:05:30.888
We're beyond that, yeah, news all over the place,
00:05:30.888 --> 00:05:34.848
error codes, all the ways you can forget to test,
00:05:34.848 --> 00:05:38.098
as opposed to exceptions, and, lots of,
00:05:39.526 --> 00:05:42.828
sort of, habits that people have developed,
00:05:42.828 --> 00:05:46.448
that they don't even remember why they have done them.
00:05:46.448 --> 00:05:49.398
I mean, this is particularly bad when you have professors
00:05:49.398 --> 00:05:51.677
doing it in universities, so they produce hundreds
00:05:51.677 --> 00:05:55.248
of students with the same misconceptions every year.
00:05:55.248 --> 00:05:56.648
(laughs)
Yes.
00:05:56.648 --> 00:05:58.277
We need to get to the professors,
00:05:58.277 --> 00:06:00.758
we need to get to the courses, that's how,
00:06:00.758 --> 00:06:02.808
actually, to go back two years,
00:06:02.808 --> 00:06:07.188
that's why we launched the C++ Core Guidelines
00:06:07.188 --> 00:06:10.076
because that is an attempt to answer the question,
00:06:10.076 --> 00:06:12.268
"What is good C++ now?".
00:06:12.268 --> 00:06:13.788
So, one of the things that has
00:06:13.788 --> 00:06:16.988
really come through for me in the last year is, you know,
00:06:16.988 --> 00:06:19.688
I think our penetration of people knowing about that
00:06:19.688 --> 00:06:22.488
is not yet quite where it needs to be.
00:06:22.488 --> 00:06:24.138
You say that is the core guidelines?
00:06:24.138 --> 00:06:25.298
The Cpp Core Guidelines.
00:06:25.298 --> 00:06:27.718
So I wanna make sure for the folks watching this,
00:06:27.718 --> 00:06:29.558
actually, just give a real quick capsule summary
00:06:29.558 --> 00:06:31.948
about the purpose of that, and what it is.
00:06:31.948 --> 00:06:36.115
Okay, this came from some of us in different places,
00:06:37.408 --> 00:06:40.146
different continents, discovering that people wanted
00:06:40.146 --> 00:06:43.398
to know what modern C++ was.
00:06:43.398 --> 00:06:45.848
And there was a wide disagreement about what it was.
00:06:45.848 --> 00:06:46.768
(laughs)
00:06:46.768 --> 00:06:49.876
So, I started to write some guidelines
00:06:49.876 --> 00:06:52.948
to be used by Morgan Stanley, and I found that my friends
00:06:52.948 --> 00:06:56.238
at Microsoft were writing some guidelines for how
00:06:56.238 --> 00:06:58.838
it should be done in Microsoft, and some people at CERN
00:06:58.838 --> 00:07:02.308
were writing what it should be like at CERN.
00:07:02.308 --> 00:07:04.038
And, there was more places than that,
00:07:04.038 --> 00:07:06.418
but that shows the breadth of the problem,
00:07:06.418 --> 00:07:08.788
and then we got together and wrote one of them.
00:07:08.788 --> 00:07:13.471
And this project is going on, we have a set of rules,
00:07:13.471 --> 00:07:18.261
lots of them, some philosophical and high-level,
00:07:18.261 --> 00:07:20.997
to give intellectual framework around it,
00:07:20.997 --> 00:07:25.164
some detailed so that they are checkable, verifiable.
00:07:28.446 --> 00:07:32.613
And, the point being that, that is defining modern C++
00:07:36.812 --> 00:07:40.158
as we are doing it, it's not the whole standard,
00:07:40.158 --> 00:07:43.982
it is what we know works, we look at bug reports
00:07:43.982 --> 00:07:47.696
and bug listings to see what mistakes people do,
00:07:47.696 --> 00:07:50.688
and then we scratch our head vigorously until we find
00:07:50.688 --> 00:07:54.226
a rule that eliminates the problem.
00:07:54.226 --> 00:07:57.149
Right.
At least for most people.
00:07:57.149 --> 00:07:59.393
And, since it's not a language, but a guideline,
00:07:59.393 --> 00:08:02.095
people can choose which part of the rules
00:08:02.095 --> 00:08:05.142
they will live with, but for instance,
00:08:05.142 --> 00:08:08.515
we should initialize all variables.
00:08:08.515 --> 00:08:09.581
Yes.
00:08:09.581 --> 00:08:13.973
It's a very useful rule, it gets a lot of people out
00:08:13.973 --> 00:08:16.550
of trouble, even people who think they're too smart
00:08:16.550 --> 00:08:17.796
to get into trouble.
00:08:17.796 --> 00:08:18.645
(laughs)
00:08:18.645 --> 00:08:20.436
I have seen that in real life. (laughs)
00:08:20.436 --> 00:08:22.416
They are the ones that actually cause most problems,
00:08:22.416 --> 00:08:24.474
and so, it's very simple, and it's verifiable.
00:08:24.474 --> 00:08:27.588
So, there's a project, there's actually several projects,
00:08:27.588 --> 00:08:31.171
to write verifiers of these rules, so that,
00:08:32.353 --> 00:08:34.984
instead of trying to learn all the rules,
00:08:34.984 --> 00:08:38.058
my recommendation is to learn the intellectual framework,
00:08:38.058 --> 00:08:41.964
the philosophical rules, and then give your program
00:08:41.964 --> 00:08:46.133
to the checker, and it will tell you which rules you forgot.
00:08:46.133 --> 00:08:47.454
Yeah.
Right.
00:08:47.454 --> 00:08:50.692
Yeah, I think of your tools, which have checkers...
00:08:50.692 --> 00:08:52.994
(whispers) Our, our!
Oh.
00:08:52.994 --> 00:08:54.548
(laughs)
00:08:54.548 --> 00:08:56.856
Yes, see, now we're checkers!
00:08:56.856 --> 00:08:57.754
(laughs)
00:08:57.754 --> 00:09:01.169
Of the guidelines, principally worked on by Neil MacIntosh.
00:09:01.169 --> 00:09:03.989
Yes, Neil is working hard on that, I know.
00:09:03.989 --> 00:09:06.712
So, I've been supplying test cases...
00:09:06.712 --> 00:09:08.295
Yes, yes, much appreciated.
00:09:08.295 --> 00:09:09.314
(laughs)
00:09:09.314 --> 00:09:11.069
So, you said something earlier that
00:09:11.069 --> 00:09:13.076
I want to get back to, which is that,
00:09:13.076 --> 00:09:16.578
the guidelines, I notice, cannot be a standard.
00:09:16.578 --> 00:09:17.958
No, they shouldn't be.
00:09:17.958 --> 00:09:19.960
So, that sometimes gets lost.
00:09:19.960 --> 00:09:20.793
Yeah.
00:09:20.793 --> 00:09:22.739
Some, like me, who made the mistake of learning
00:09:22.739 --> 00:09:26.322
C++ from the...from the standard,
00:09:27.182 --> 00:09:30.562
know that, that's not a way you want to learn
00:09:30.562 --> 00:09:33.462
to write good C++, you need people,
00:09:33.462 --> 00:09:38.102
or, you know, books, or guidelines to tell you
00:09:38.102 --> 00:09:39.832
how to write good C++.
00:09:39.832 --> 00:09:43.483
One way of seeing that is that stability,
00:09:43.483 --> 00:09:47.316
compatibility over decades, is a feature.
00:09:47.316 --> 00:09:49.704
When we improve the language, we don't want to
00:09:49.704 --> 00:09:52.132
break the old code.
Right.
00:09:52.132 --> 00:09:55.120
Now, when we want to move the style up,
00:09:55.120 --> 00:09:57.192
to get better performance, better reliability,
00:09:57.192 --> 00:10:00.252
better maintenance, we do want to break code.
00:10:00.252 --> 00:10:03.392
We want to take the old code, and get rid of it!
00:10:03.392 --> 00:10:06.642
And that's just one thing that I do work with
00:10:06.642 --> 00:10:09.059
a bit for real in my day job.
00:10:10.442 --> 00:10:14.272
And also, I think a lot about it, and so,
00:10:14.272 --> 00:10:18.272
the standard is very concerned with the ultimate
00:10:20.272 --> 00:10:24.439
generality and the ultimate compatibility of C++.
00:10:25.283 --> 00:10:29.911
Whereas, the guidelines are concerned with the best
00:10:29.911 --> 00:10:34.348
and simplest way of getting things done in C++,
00:10:34.348 --> 00:10:38.397
and how to get rid of the old, sort of,
00:10:38.397 --> 00:10:41.730
dark holes, where people put their bugs.
00:10:42.812 --> 00:10:44.584
It's a different thing.
00:10:44.584 --> 00:10:47.317
Okay, so I wanted to go further, so,
00:10:47.317 --> 00:10:49.516
you have education, education is how you,
00:10:49.516 --> 00:10:52.305
of course we have the current generation, they go first,
00:10:52.305 --> 00:10:55.795
but also, the next generation, how you build.
00:10:55.795 --> 00:11:00.138
We have some impacts on how we practice programming,
00:11:00.138 --> 00:11:02.497
because programming has become so entrenched,
00:11:02.497 --> 00:11:06.664
and I actually got news that some folks in Europe,
00:11:09.629 --> 00:11:12.773
like the Institution of Engineering and Technology,
00:11:12.773 --> 00:11:14.615
gave you the Faraday Award.
00:11:14.615 --> 00:11:15.996
Oh yes.
00:11:15.996 --> 00:11:17.926
Would you like to say something about that?
00:11:17.926 --> 00:11:18.759
Sure.
00:11:18.759 --> 00:11:21.250
And explain to the audience what it means?
00:11:21.250 --> 00:11:24.004
I read that it is awarded based on the impact
00:11:24.004 --> 00:11:28.934
on society and industry, the history of computing,
00:11:28.934 --> 00:11:30.809
and nurturing C++.
00:11:30.809 --> 00:11:33.603
Yeah, they're using some large words, so...
00:11:33.603 --> 00:11:34.996
(laughs)
00:11:34.996 --> 00:11:38.843
The IET is the oldest, possibly largest,
00:11:38.843 --> 00:11:42.768
engineering professional organization on Earth.
00:11:42.768 --> 00:11:45.639
At least, so they say, and I have no reason
00:11:45.639 --> 00:11:48.926
to disbelieve them.
(laughs)
00:11:48.926 --> 00:11:51.926
And, it's not just computer science,
00:11:52.857 --> 00:11:56.083
it's not just...it's all engineering.
00:11:56.083 --> 00:11:58.068
In the broad idea? Awesome.
00:11:58.068 --> 00:12:00.931
In the broad. And so broad that they have people
00:12:00.931 --> 00:12:04.800
like Rutherford and JJ Thomson who got the award
00:12:04.800 --> 00:12:07.110
for things like, discovering the electron.
00:12:07.110 --> 00:12:09.140
Yeah, that's...that's humbling!
00:12:09.140 --> 00:12:11.020
(laughs)
00:12:11.020 --> 00:12:14.280
Electrons are quite important for our industry!
00:12:14.280 --> 00:12:16.330
So, I looked at this list,
00:12:16.330 --> 00:12:19.450
I'd never heard of the Faraday Medal,
00:12:19.450 --> 00:12:22.580
and I had heard about some of the ancestor
00:12:22.580 --> 00:12:25.740
organizations to the IET, but I looked at this,
00:12:25.740 --> 00:12:28.410
and I looked at the list of people who had gotten it,
00:12:28.410 --> 00:12:30.460
and this was downright intimidating!
00:12:30.460 --> 00:12:31.710
(laughs)
00:12:31.710 --> 00:12:33.545
You don't mess with JJ Thomson!
00:12:33.545 --> 00:12:34.556
(laughs)
00:12:34.556 --> 00:12:36.043
And for people who don't know physicists,
00:12:36.043 --> 00:12:39.691
we can list some of the physicists, Knuth,
00:12:39.691 --> 00:12:43.858
Maurice Wilkes, what's his name with the quicksort?
00:12:47.814 --> 00:12:48.647
Hoare?
00:12:48.647 --> 00:12:53.304
Hoare, Tony Hoare, and a guy I talked a lot with
00:12:53.304 --> 00:12:57.471
when I was a student in Cambridge, not David Wheeler,
00:12:59.024 --> 00:13:02.107
the other one, I'm so bad with names!
00:13:03.469 --> 00:13:04.841
I wasn't there!
00:13:04.841 --> 00:13:08.241
We'll look it up, we'll put it in there, it'll be great!
00:13:08.241 --> 00:13:09.074
(laughs)
00:13:09.074 --> 00:13:11.827
There's only four computer scientists who've gotten it,
00:13:11.827 --> 00:13:14.906
so, the rest are engineers and physicists.
00:13:14.906 --> 00:13:18.071
But, I think it's a great acknowledgement
00:13:18.071 --> 00:13:22.693
of the contribution of C++ and the C++ community,
00:13:22.693 --> 00:13:24.635
to the world at large.
00:13:24.635 --> 00:13:27.316
You know, I always go on about the applications,
00:13:27.316 --> 00:13:29.379
and how the applications are interesting,
00:13:29.379 --> 00:13:32.377
and Mars Rovers, and Higgs Boson,
00:13:32.377 --> 00:13:35.556
and fancy sports cars, or whatever it is.
00:13:35.556 --> 00:13:37.313
(laughs)
00:13:37.313 --> 00:13:41.063
So you have now, guidelines for developers,
00:13:42.152 --> 00:13:44.058
that's the investment of your people.
00:13:44.058 --> 00:13:46.370
Yeah.
And they need tools.
00:13:46.370 --> 00:13:48.622
And you built this language tool,
00:13:48.622 --> 00:13:50.996
and what would be your advice,
00:13:50.996 --> 00:13:54.471
in terms of education, for aspiring language designers,
00:13:54.471 --> 00:13:57.708
and people who would like to follow your footsteps?
00:13:57.708 --> 00:14:01.875
Because, you know, we still need to look ahead and plan.
00:14:03.947 --> 00:14:07.548
It's, a question I get asked in many, many ways, often.
00:14:07.548 --> 00:14:09.289
Sorry! (laughs)
00:14:09.289 --> 00:14:12.956
And, my first response to, "how do I use",
00:14:14.411 --> 00:14:17.371
"what should I do to write a programming language",
00:14:17.371 --> 00:14:21.121
my first answer is, "WHY?!" Or even, "Don't."
00:14:22.695 --> 00:14:23.871
(laughs)
00:14:23.871 --> 00:14:25.332
But you know that won't work.
00:14:25.332 --> 00:14:26.254
(laughs)
00:14:26.254 --> 00:14:29.124
I know it won't work...the point is,
00:14:29.124 --> 00:14:32.457
it's what so many new computer scientists,
00:14:32.457 --> 00:14:34.365
hackers, game developers, and such,
00:14:34.365 --> 00:14:38.532
really want to do, and the success rate is minuscule.
00:14:39.985 --> 00:14:42.771
So, it is, for most people trying to build
00:14:42.771 --> 00:14:45.274
a programming language, it's sort of the failure
00:14:45.274 --> 00:14:49.072
of their career, and maybe they learn from the failure,
00:14:49.072 --> 00:14:53.337
and that's a good thing, but, thinking that you built
00:14:53.337 --> 00:14:57.126
the next great programming language to be used by,
00:14:57.126 --> 00:14:59.498
sort of, everybody on Earth,
00:14:59.498 --> 00:15:02.415
is setting yourself up for failure.
00:15:03.389 --> 00:15:06.747
And I don't think anybody with a reasonably-sized ego
00:15:06.747 --> 00:15:10.176
would try that because, I mean, it's just a failure.
00:15:10.176 --> 00:15:13.189
So, my excuse, by the way, is I didn't mean to do it.
00:15:13.189 --> 00:15:14.841
(laughs)
00:15:14.841 --> 00:15:17.127
I just needed a tool, and I built a tool.
00:15:17.127 --> 00:15:19.306
So you failed miserably! (laughs)
00:15:19.306 --> 00:15:20.893
As in so many other things!
00:15:20.893 --> 00:15:24.972
But, once I had a tool, and my friends depended on it,
00:15:24.972 --> 00:15:26.781
I had to improve it because otherwise,
00:15:26.781 --> 00:15:29.700
I'd leave my friends in a lurch, I can't do that.
00:15:29.700 --> 00:15:33.806
But, the next thing is, what is the problem to be solved?
00:15:33.806 --> 00:15:37.639
A language that is designed to solve a problem
00:15:39.046 --> 00:15:42.782
that needs solving and does it well. Succeeds.
00:15:42.782 --> 00:15:46.300
A language that is a 'me too' language,
00:15:46.300 --> 00:15:50.061
typically doesn't succeed unless it has a huge organization
00:15:50.061 --> 00:15:53.860
spending hundreds of millions of dollars getting there.
00:15:53.860 --> 00:15:56.239
And the average person who will want to write their own
00:15:56.239 --> 00:15:58.168
programming language, doesn't have that.
00:15:58.168 --> 00:15:59.402
Right.
00:15:59.402 --> 00:16:02.336
So, basically, be sure you solve a problem.
00:16:02.336 --> 00:16:05.645
And, this is not original, Dennis Ritchie was the one
00:16:05.645 --> 00:16:08.300
that said, well, there's languages that are meant
00:16:08.300 --> 00:16:12.581
to solve a problem, and there's languages that are meant
00:16:12.581 --> 00:16:15.993
to make a point.
(laughs)
00:16:15.993 --> 00:16:19.512
I could point out that Niklaus Wirth was in the audience
00:16:19.512 --> 00:16:24.308
and had been saying something not very polite before.
00:16:24.308 --> 00:16:26.391
Okay, so, basically for
00:16:28.232 --> 00:16:31.891
aspiring language designers, don't.
00:16:31.891 --> 00:16:34.311
Get into a different profession.
00:16:34.311 --> 00:16:38.491
First level, don't. Second level, what's the problem?
00:16:38.491 --> 00:16:41.221
And you know, we have computer science,
00:16:41.221 --> 00:16:43.941
we have theory, we have practice, we have tools,
00:16:43.941 --> 00:16:46.631
that once you've gotten to the fact that
00:16:46.631 --> 00:16:49.551
what is the problem, and how are the first orders
00:16:49.551 --> 00:16:53.361
of ideas of how to solve it, we can solve it.
00:16:53.361 --> 00:16:57.528
Okay, yeah, so we have guidelines, we have your checkers,
00:16:59.361 --> 00:17:02.267
what else do you believe would help
00:17:02.267 --> 00:17:05.100
with the education of programmers?
00:17:08.941 --> 00:17:13.530
We need time and patience to teach the professors.
00:17:13.530 --> 00:17:15.312
Teach the professors.
00:17:15.312 --> 00:17:17.401
Teach the professors, you know, by definition,
00:17:17.401 --> 00:17:18.606
a professor knows everything.
00:17:18.606 --> 00:17:20.318
Yes, that's why I was surprised.
00:17:20.318 --> 00:17:21.655
And you have to break through this.
00:17:21.655 --> 00:17:24.959
But, this is not that simple, it's not that they're lazy.
00:17:24.959 --> 00:17:27.217
Partly, they have never practiced.
00:17:27.217 --> 00:17:30.416
Their previous job was, student.
00:17:30.416 --> 00:17:34.123
So, their view of what is important can be warped.
00:17:34.123 --> 00:17:37.651
Secondly, they are under pressure from deans and such,
00:17:37.651 --> 00:17:40.525
to not waste time on learning things, because,
00:17:40.525 --> 00:17:43.635
by definition, they're professors, and they know everything.
00:17:43.635 --> 00:17:46.461
And so, they don't get a time to experiment,
00:17:46.461 --> 00:17:50.375
and to go and talk to industry and such.
00:17:50.375 --> 00:17:52.487
Yes, there's sabbaticals for that,
00:17:52.487 --> 00:17:55.279
but quite often, that's not where it's focused.
00:17:55.279 --> 00:17:57.412
Because, you know, if you're a professor,
00:17:57.412 --> 00:18:00.509
you are on top of the heap in your local environment,
00:18:00.509 --> 00:18:03.033
if you go in and try to learn something brand new,
00:18:03.033 --> 00:18:05.734
you're the novice, that's unpleasant.
00:18:05.734 --> 00:18:06.721
Yes, uncomfortable.
00:18:06.721 --> 00:18:08.708
Uncomfortable for somebody who's used to having
00:18:08.708 --> 00:18:11.742
his joke always funny.
(laughs)
00:18:11.742 --> 00:18:14.095
And so, we need to teach that, and also,
00:18:14.095 --> 00:18:16.972
we have to make it easier to deploy things.
00:18:16.972 --> 00:18:19.456
We teach far too much, 'how do you write a for loop',
00:18:19.456 --> 00:18:21.453
and 'how do you initialize a variable'.
00:18:21.453 --> 00:18:22.289
(laughs)
00:18:22.289 --> 00:18:27.058
We should say, 'how do we animate something on a screen'.
00:18:27.058 --> 00:18:29.878
And to do that, it's not that we don't have libraries
00:18:29.878 --> 00:18:33.634
that can do it, systems that can do it, we have too many.
00:18:33.634 --> 00:18:36.191
Because we can't choose.
00:18:36.191 --> 00:18:40.358
We have no way of packaging it, shipping it, installing it.
00:18:42.116 --> 00:18:46.371
As I said in my talk, I want to be able to say,
00:18:46.371 --> 00:18:49.704
"Get xyz library, install, xyz library".
00:18:52.254 --> 00:18:54.518
That's about as verbose as it should be.
00:18:54.518 --> 00:18:55.456
(laughs)
00:18:55.456 --> 00:18:58.849
I separate the two because networking separated from
00:18:58.849 --> 00:19:03.016
what I can do locally, then finally, I want modules,
00:19:05.462 --> 00:19:07.828
as a language improvement, so that I can say,
00:19:07.828 --> 00:19:11.904
import "xyz graphics" or whatever is the module,
00:19:11.904 --> 00:19:14.458
that should be it, those are the three stages.
00:19:14.458 --> 00:19:16.324
And now you're using it, yep.
00:19:16.324 --> 00:19:19.212
Get it down from the web, get it installed,
00:19:19.212 --> 00:19:21.502
so that I can use it in my code,
00:19:21.502 --> 00:19:24.102
import it, using modules.
00:19:24.102 --> 00:19:26.817
Gaby is doing the main work on modules just now,
00:19:26.817 --> 00:19:28.734
so, this is good stuff.
00:19:31.358 --> 00:19:34.460
And, that way, people can start using things
00:19:34.460 --> 00:19:38.627
that are way above the level of the individual language.
00:19:40.935 --> 00:19:45.128
We don't want physicists to go around fiddling.
00:19:45.128 --> 00:19:46.401
(laughs)
00:19:46.401 --> 00:19:48.752
Single-dimension arrays to do
00:19:48.752 --> 00:19:51.150
their n-dimensional computations.
00:19:51.150 --> 00:19:54.330
We want to say, "Get the n-dimensional array,
00:19:54.330 --> 00:19:57.163
install the n-dimensional array."
00:19:58.720 --> 00:20:02.220
A of 9, 10, 11, 13, boom, V, do the stuff.
00:20:05.033 --> 00:20:06.391
(laughs)
00:20:06.391 --> 00:20:09.189
That's what we need to do, and that way,
00:20:09.189 --> 00:20:11.032
we can actually teach better.
00:20:11.032 --> 00:20:15.335
We can teach not the 'how to find a bug in horrible code',
00:20:15.335 --> 00:20:19.195
but how to use reasonable abstractions,
00:20:19.195 --> 00:20:23.112
reasonable and relevant to a particular person.
00:20:23.967 --> 00:20:27.466
The other set of uses I'm very interested in is,
00:20:27.466 --> 00:20:30.162
the sort of "casual users", the people that are
00:20:30.162 --> 00:20:33.393
professionals in other fields,
00:20:33.393 --> 00:20:38.369
and they should not be left with the low level facilities
00:20:38.369 --> 00:20:42.119
that are hardly good enough for computer specialists.
00:20:42.119 --> 00:20:44.973
I want to segue into something I was meaning to ask,
00:20:44.973 --> 00:20:48.014
that is, the language can always be improved,
00:20:48.014 --> 00:20:50.740
especially if it's only a language thing,
00:20:50.740 --> 00:20:53.167
you can always find things to add,
00:20:53.167 --> 00:20:56.805
but what, in your view...I'm not asking you
00:20:56.805 --> 00:21:00.565
to make a prediction about what will be in C++
00:21:00.565 --> 00:21:04.129
20 something, but what in your view,
00:21:04.129 --> 00:21:07.314
should the committee, the standards committee,
00:21:07.314 --> 00:21:10.923
consider to improve and facilitate the lives
00:21:10.923 --> 00:21:12.763
of people using the language?
00:21:12.763 --> 00:21:17.012
First of all, they should consider the larger community.
00:21:17.012 --> 00:21:20.395
The language is too expert-friendly because it's designed
00:21:20.395 --> 00:21:24.395
by experts, to a very large extent, for experts.
00:21:24.395 --> 00:21:27.719
That's wrong. It should be done by experts
00:21:27.719 --> 00:21:32.589
for a huge, diverse community, some of whom are experts,
00:21:32.589 --> 00:21:36.442
and most of them aren't. At least not in language terms.
00:21:36.442 --> 00:21:41.120
So, I think we are doing too many little things,
00:21:41.120 --> 00:21:44.000
and not enough big things.
00:21:44.000 --> 00:21:47.754
I want concepts, of course, I want modules,
00:21:47.754 --> 00:21:51.138
I want simple static reflection.
00:21:51.138 --> 00:21:54.805
Simply to be able to loop over a data structure,
00:21:55.704 --> 00:21:59.058
and do little things, like generate object maps,
00:21:59.058 --> 00:22:03.050
or serializers and such, and not much more than that.
00:22:03.050 --> 00:22:04.440
Okay, mmhmmm.
00:22:04.440 --> 00:22:07.353
Otherwise, people will devise their own dialect,
00:22:07.353 --> 00:22:09.574
so that's not good, but I think those three
00:22:09.574 --> 00:22:12.152
are probably the most important language features
00:22:12.152 --> 00:22:14.471
for zillions of things.
00:22:14.471 --> 00:22:16.680
Then I want better concurrency support,
00:22:16.680 --> 00:22:19.667
I want the .then for futures.
00:22:19.667 --> 00:22:22.848
We are getting the coroutine things, right?
00:22:22.848 --> 00:22:25.345
We are getting the networking stuff,
00:22:25.345 --> 00:22:28.746
but I want better parallel algorithms
00:22:28.746 --> 00:22:30.264
and things like that.
00:22:30.264 --> 00:22:32.808
Yeah, we have some support for SIMD,
00:22:32.808 --> 00:22:33.778
and other algorithms...
00:22:33.778 --> 00:22:35.011
SIMD, yes.
00:22:35.011 --> 00:22:37.556
Or, do you think we need more?
00:22:37.556 --> 00:22:41.793
I don't know, I want SIMD, but how I want SIMD,
00:22:41.793 --> 00:22:43.404
do I want the SIMD vector,
00:22:43.404 --> 00:22:47.070
or do I want SIMD enabled algorithms?
00:22:47.070 --> 00:22:51.449
I think it would be nicer if we could get the SIMD
00:22:51.449 --> 00:22:55.305
enabled algorithms because I don't really want to fiddle
00:22:55.305 --> 00:22:58.638
with SIMD because...but, it's important.
00:22:59.786 --> 00:23:02.291
This is where performance goes like that,
00:23:02.291 --> 00:23:05.283
rather than like that, so we have to do it,
00:23:05.283 --> 00:23:07.132
it is just a question of when.
00:23:07.132 --> 00:23:10.166
But, I thought I was covering when I said parallel
00:23:10.166 --> 00:23:14.333
algorithms, but we need to lift our coding away from
00:23:15.854 --> 00:23:18.702
threads and locks, which is the worst way of writing
00:23:18.702 --> 00:23:23.405
concurrent systems, and that's what we do 95% of the time.
00:23:23.405 --> 00:23:26.106
And, that's wrong, it's a bug sink.
00:23:26.106 --> 00:23:29.944
I've had lots of success recently by simplifying things,
00:23:29.944 --> 00:23:34.587
making it higher level, and then finding it runs faster.
00:23:34.587 --> 00:23:36.453
Yes, I have done that myself.
00:23:36.453 --> 00:23:38.748
People fiddling with little details,
00:23:38.748 --> 00:23:42.483
they just don't know how much time they waste, right?
00:23:42.483 --> 00:23:46.185
A nice, clean, high level algorithm, concurrent, parallel,
00:23:46.185 --> 00:23:48.935
serial, I don't know...but clean.
00:23:50.664 --> 00:23:54.638
With not too many ifs, and thens, and elses, and buts,
00:23:54.638 --> 00:23:59.576
and whatever; and the optimizer goes... (slurp)
00:23:59.576 --> 00:24:01.495
(laughs)
00:24:01.495 --> 00:24:04.155
And out come some very interesting code.
00:24:04.155 --> 00:24:08.322
I was just looking at the talk about constexpr functions,
00:24:10.140 --> 00:24:13.106
another piece that we did a few years ago,
00:24:13.106 --> 00:24:16.189
but, how we use Compiler Explorer,
00:24:19.868 --> 00:24:22.237
so we can actually see the effect of code.
00:24:22.237 --> 00:24:26.259
And, I think that might help in the future,
00:24:26.259 --> 00:24:29.165
so people actually don't write ugly code
00:24:29.165 --> 00:24:31.165
and assume it is faster.
00:24:33.189 --> 00:24:35.982
Now they can write the code and see what it does.
00:24:35.982 --> 00:24:38.848
Yeah, you can see the zero abstraction penalty,
00:24:38.848 --> 00:24:41.013
as opposed to just imagining it.
00:24:41.013 --> 00:24:41.846
Yes.
00:24:41.846 --> 00:24:43.867
And because we have so many experts in the community,
00:24:43.867 --> 00:24:45.471
I think the fact that people can very quickly go in
00:24:45.471 --> 00:24:47.411
with Compiler Explorer and see the effect
00:24:47.411 --> 00:24:49.850
of their abstractions, I would have been skeptical,
00:24:49.850 --> 00:24:52.237
like, who's gonna look at the algorithm? Come on, right?
00:24:52.237 --> 00:24:53.070
(laughs)
00:24:53.070 --> 00:24:54.609
But actually, no, I mean if you're in there,
00:24:54.609 --> 00:24:56.161
and you're an expert, and you're looking
00:24:56.161 --> 00:24:58.481
to write that best library, it can be very powerful.
00:24:58.481 --> 00:25:00.498
That feedback loop.
00:25:00.498 --> 00:25:03.962
And yeah, it helps with people being convinced that
00:25:03.962 --> 00:25:06.350
you can actually have a program at a suitably
00:25:06.350 --> 00:25:11.187
high level of abstraction, yet it's efficient.
00:25:11.187 --> 00:25:13.265
They just have to remember to turn on the optimizer.
00:25:13.265 --> 00:25:14.367
(laughs)
00:25:14.367 --> 00:25:15.534
Yes, you have to check that!
00:25:15.534 --> 00:25:19.701
Which the very, very opinionated novices sometimes forget.
00:25:21.500 --> 00:25:22.978
Yes, that's right, folks.
00:25:22.978 --> 00:25:25.783
That was, I guess, the other main takeaway from the talk,
00:25:25.783 --> 00:25:27.827
dash O2! (laughs)
00:25:27.827 --> 00:25:29.494
-O2, I mean...
00:25:30.927 --> 00:25:34.594
Yeah, so, to wrap up, so going back again,
00:25:36.467 --> 00:25:40.696
to the C++ of '95, when I started, and C++17,
00:25:40.696 --> 00:25:44.863
which we just got, C++ seems to have come from
00:25:48.676 --> 00:25:51.451
some bad reputation, it's like, 'oh you're going to copy,
00:25:51.451 --> 00:25:53.464
so don't copy people', right?
00:25:53.464 --> 00:25:54.925
Yeah.
00:25:54.925 --> 00:25:56.782
And then we got move semantics and a bunch of,
00:25:56.782 --> 00:26:00.949
you know, competition, which removes a lot of stuff.
00:26:02.940 --> 00:26:06.523
How do you see the future of C++?
00:26:07.511 --> 00:26:10.400
Now I'm asking you to take a crystal ball, and...
00:26:10.400 --> 00:26:14.567
Okay, I mean, my explicit aim with the core guidelines
00:26:15.867 --> 00:26:19.034
is completely and guaranteed type-safe
00:26:21.169 --> 00:26:25.336
and resource-safe C++, and basically by default,
00:26:26.722 --> 00:26:29.146
don't fiddle around with pointers,
00:26:29.146 --> 00:26:32.282
have all dangling pointers detected,
00:26:32.282 --> 00:26:36.449
there should be a mode where all range errors are caught.
00:26:38.754 --> 00:26:42.329
That is optional because it may cost you about 5%
00:26:42.329 --> 00:26:46.496
if you do it, but certainly during debug and such,
00:26:47.980 --> 00:26:51.575
and for that, we don't need that many more language
00:26:51.575 --> 00:26:55.575
features. I want concepts so that I can write
00:26:57.054 --> 00:26:59.670
generic code that is reasonable.
00:26:59.670 --> 00:27:01.033
We have concepts.
00:27:01.033 --> 00:27:03.221
We have concepts, but...
00:27:03.221 --> 00:27:05.034
Not exactly what we want, exactly what we wanted.
00:27:05.034 --> 00:27:06.948
Not exactly what I want, it's not as simple as
00:27:06.948 --> 00:27:09.582
what I want, I would like to see simpler code.
00:27:09.582 --> 00:27:14.352
Simpler code has fewer bugs in it, that's simple as that.
00:27:14.352 --> 00:27:17.213
We're also going to go for the contract stuff
00:27:17.213 --> 00:27:20.058
to put run time assertions in place
00:27:20.058 --> 00:27:22.225
if we mess up our designs.
00:27:25.901 --> 00:27:29.185
And so, the type and resource-safe thing
00:27:29.185 --> 00:27:32.018
is really the core of my thinking,
00:27:33.198 --> 00:27:36.493
and then simplification, make simple things simple.
00:27:36.493 --> 00:27:39.230
I mean, we have been doing this again and again,
00:27:39.230 --> 00:27:43.397
compile time programming simplifies a lot of messy stuff.
00:27:44.361 --> 00:27:47.499
Sometimes it's computed on the back of an envelope, and then constants go in,
00:27:47.499 --> 00:27:49.769
and five years later, it's the wrong constant.
00:27:49.769 --> 00:27:50.748
(laughs)
00:27:50.748 --> 00:27:52.845
Now it can be in the code directly stating
00:27:52.845 --> 00:27:57.207
what it's supposed to do, we have simplified the loops,
00:27:57.207 --> 00:27:59.831
the parallel algorithms, we'll simplify some
00:27:59.831 --> 00:28:04.255
of the major uses of stuff, and save us from writing code.
00:28:04.255 --> 00:28:05.674
So, that's the second part.
00:28:05.674 --> 00:28:09.100
So, I say, type and resource safety and simplicity.
00:28:09.100 --> 00:28:11.877
Okay, great. So, I think the call to action here is,
00:28:11.877 --> 00:28:13.769
of course, by all means, check out the keynote,
00:28:13.769 --> 00:28:15.290
which I'm guessing will be up by now,
00:28:15.290 --> 00:28:17.285
we'll have a link below, and then I guess,
00:28:17.285 --> 00:28:19.371
let's go out and take those lessons
00:28:19.371 --> 00:28:21.444
and go and educate your coworkers.
00:28:21.444 --> 00:28:24.242
Yeah, and if you find something in the guidelines
00:28:24.242 --> 00:28:28.842
that looks wrong or is missing, you know where to find us.
00:28:28.842 --> 00:28:31.426
It's on the Github, everybody can see it,
00:28:31.426 --> 00:28:34.824
and our email addresses are known too.
00:28:34.824 --> 00:28:36.704
File an issue, and, okay, thank you very much!
00:28:36.704 --> 00:28:38.007 line:15%
Thank you.
Thank you.
00:28:38.007 --> 00:28:41.174 line:15%
(bright techno music) | https://channel9.msdn.com/Shows/C9-GoingNative/Bjarne-Stroustrup-Interview-at-cppcon-2017/captions?f=webvtt&l=en | CC-MAIN-2018-34 | refinedweb | 8,779 | 87.31 |
retitle 312078 ITA: ht -- Viewer/editor/analyser (mostly) for executables
owner 312078 !
thanks

Hi!

* Alexander Schmehl <tolimar@debian.org> [060329 20:51]:
> * Florian Ernst <florian@uni-hd.de> [060329 20:00]:
> > The following packages are up for adoption, if you want them
> > please just take them; [..]
> > ht - Viewer/editor/analyser (mostly) for executables
> And I use that from time to time, so I'm willing to take it; but after
> reading #312078 I'm unsure if I'm knowledgeable enough for that myself.

Since I haven't heard from you yet (and I'm about to leave again for a couple of days to a no-net area), and since Luk Claes volunteered to be Co-Maintainer, we are taking it. Upload will follow when I return (which should be Sunday evening).

Frank, I would welcome feedback from you on what the remaining issues you mention in your RFA are. I didn't see any bug reports or new upstream releases (and upstream doesn't seem to release a new release every hour...)

Yours sincerely,
Alexander
--
Attachment:
signature.asc
Description: Digital signature | https://lists.debian.org/debian-devel/2006/03/msg01235.html | CC-MAIN-2015-48 | refinedweb | 181 | 65.22 |
Regular readers of this column might notice
that I've spent a lot of time on WSDL. There's a reason for this:
after XML and SOAP, I firmly believe that service description is
the most important component of designing, building, and deploying
heterogeneous web services.
There are a couple of reasons why this is true. First, it is a
precise description of what the bits on the wire will be. Without
that, it is exceedingly difficult for distributed applications
to talk to each other. It's still possible, and great things
can be made — SAMBA is a tour de force of reverse engineering
— but that's not really how you want your customers, partners,
or colleagues to be spending their development time.
Second, if you have the luxury of time and resources — not
to mention the managerial foresight — to be able to first design
your network interfaces before developing the code behind them,
then the service description will be the first thing you will
write. The phrase "contract-first" seems to be increasingly
bandied about as the official jargon to use here.
Even if you're wrapping a legacy application, you'll need a
service description that other web services applications can
use. Of course, there are many data-binding tools available that are,
for example, capable of turning a Java class into an XML Schema
or a Windows-compatible wizard with a "generate WSDL" button.
But even then, you'll have to give those descriptions to others,
the tools may have bugs (surprising, I know, but it's been known
to happen), or you'll need to hand-tweak the generated files
because a particular customer "just wants the namespace or URL
changed." At that point, you don't want to have to burn
the midnight oil with a copy of the WSDL and Schema specs in hand
and the generated WSDL file on your screen.
Instead, following a few simple style rules can make your
service description — the combination of Schema and WSDL —
pretty boilerplate. The results should be pretty WS-I compliant,
follow a service-oriented architecture, and be all-around
buzzword compliant.
First, you have to decide on a couple of names. You need a namespace URI for your definitions, a URL where the service will live, and a name for the service itself.
Note that the first two can be the same. I really like the convention
of ending a namespace with a pound sign, because items within the
namespace can use the URL fragment notation.
Even if you don't leverage that, and just put an HTML file at the URL
with an appropriately-named anchor, I just think it's a neat hack.
I suppose that, politically, the URI's should really be International
Resource Identifiers (IRIs), but SOAP 1.1 and WSDL 1.0 aren't defined
for IRIs. To accommodate international names, the IRI RFC
defines how to map IRIs into URIs, and you should do that.
In the fragments below, I'll use as the URI and
sample as the service name.
Let's lay out the skeleton WSDL file. I use DTD entities as a
simple "#define", which lets me re-use the same skeleton.
It's also very convenient. I always use xmlns:tns as a namespace prefix for
the targetNamespace of whatever file I'm writing.
<?xml version="1.0" encoding="utf-8"?>
<!DOCTYPE definitions [
<!ENTITY BASE ''>
<!ENTITY COMPONENT 'service'>
<!ENTITY SERVICE 'service'>
<!ENTITY SERVICEURl ''>
]>
<definitions name="&SERVICE;"
xmlns="http://schemas.xmlsoap.org/wsdl/"
xmlns:soapbind="http://schemas.xmlsoap.org/wsdl/soap/"
xmlns:tns="&BASE;"
targetNamespace="&BASE;">
<documentation>
This code is in the public domain.
</documentation>
<!-- Obviously, the value for @location is wrong! -->
<import namespace="&BASE;"
location=''/>
<!--
Message definitions will go here.
-->
<portType name="&COMPONENT;-porttype">
<!--
Operation definitions will go here.
-->
</portType>
<binding name="&COMPONENT;-binding" type="tns:&COMPONENT;-porttype">
<soapbind:binding style="document" transport="http://schemas.xmlsoap.org/soap/http"/>
<!--
Bound operations will go here.
-->
</binding>
<service name="&SERVICE;">
<port name="&COMPONENT;-port" binding="tns:&COMPONENT;-binding">
<soapbind:address location="&SERVICEURl;"/>
</port>
</service>
</definitions>
Within WSDL, each part of the file is in a separate namespace,
just like the same tag names within a C struct can
exist within different structures. If you want, you can remove all
the -binding, etc., suffixes from the outline above.
I tried that once and it made things too hard for me to follow;
a little redundancy is sometimes both good and useful.
Another thing that's very useful about this outline is that you can
follow the template for several different schemas, ports, and
bindings, specifying a single service to implement them.
If you do this, put the schema, message, portType,
and binding elements each in their own file. Your
top-level service description should just import
each separate component, and then only needs to have multiple
port entries within the service element.
This seems like a very straightforward way to get re-use.
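If you split things up that way, the top-level file reduces to imports plus a single service element with one port per binding. A sketch — the namespace, file name, and URLs here are all hypothetical, invented only to show the shape:

```xml
<definitions name="sample"
    xmlns="http://schemas.xmlsoap.org/wsdl/"
    xmlns:soapbind="http://schemas.xmlsoap.org/wsdl/soap/"
    xmlns:tns="urn:example:sample">
  <import namespace="urn:example:sample" location="component-binding.wsdl"/>
  <service name="sample">
    <port name="component-port" binding="tns:component-binding">
      <soapbind:address location="http://localhost/sample"/>
    </port>
    <!-- one port element per imported binding -->
  </service>
</definitions>
```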
Now it's time to define the operations.
We'll only define one here; we'll take the nearby
operation from Norm Walsh's
where in the
world service. There's very little thinking involved; it's all
boilerplate.
We'll work from the bottom (or end of the WSDL file) up.
First, we'll be exchanging literal XML, so we define the bound
operation:
<operation name="nearby-operation">
<soapbind:operation soapAction=""/>
<input name="nearby-oprequest">
<soapbind:body use="literal"/>
</input>
<output name="nearby-opresponse">
<soapbind:body use="literal"/>
</output>
</operation>
Note that we're defining a very simple request/response operation
that exchanges messages whose names are pretty formulaic.
It turns out that the messages themselves are pretty formulaic, too.
The following fragment goes in the portType element:
<operation name="nearby-operation">
<input name="nearby-oprequest" message="tns:nearby-request"/>
<output name="nearby-opresponse" message="tns:nearby-response"/>
</operation>
Now we need to define the actual messages.
For the first time, we need to think about the data.
We want to avoid using a type system and instead concentrate on
what the data looks like. This means using the element name,
rather than type, to describe the message.
While it admittedly looks a bit too RPC-ish, I often use the following
structure for defining messages:
<message name="nearby-request">
<part name="body" element="tns:nearby"/>
</message>
<message name="nearby-response">
<part name="body" element="tns:nearby-response"/>
</message>
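Because the parts reference elements rather than types, the request on the wire is simply the nearby element sitting inside a SOAP Body. A sketch — the child elements and namespace are invented for illustration, not taken from the real where-in-the-world schema:

```xml
<soap:Envelope xmlns:soap="http://schemas.xmlsoap.org/soap/envelope/">
  <soap:Body>
    <nearby xmlns="urn:example:sample">
      <latitude>40.7</latitude>
      <longitude>-74.0</longitude>
    </nearby>
  </soap:Body>
</soap:Envelope>
```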
This has a couple of ramifications.
First, you could write your schema in RelaxNG and use
Trang
to convert it to XSD.
Second, if the same element appears in multiple operations, you'll
have to define wrapper elements that are operation-specific; this
is admittedly kind of bogus.
Third, your service is more likely to be future-proof, since the core
definition of the nearby element might be expanded,
and your code should still work.
(That depends on the semantics of the expansion, of course.)
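For completeness, the imported schema is where those wrapper elements get declared. A sketch along these lines — the content models and namespace are again invented for illustration:

```xml
<xs:schema xmlns:xs="http://www.w3.org/2001/XMLSchema"
    targetNamespace="urn:example:sample"
    elementFormDefault="qualified">
  <xs:element name="nearby">
    <xs:complexType>
      <xs:sequence>
        <xs:element name="latitude" type="xs:double"/>
        <xs:element name="longitude" type="xs:double"/>
      </xs:sequence>
    </xs:complexType>
  </xs:element>
  <xs:element name="nearby-response" type="xs:string"/>
</xs:schema>
```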
It turns out that for the common case — exchanging SOAP messages
in a request/response exchange pattern — WSDL 1.0 is both flexible
and verbose. It's probably quite feasible to define your own
service language and then use XSLT to generate the WSDL.
looooong time ago:
On Tue, Feb 19, 2002 at 11:17:04AM -0800, Chuck Esterbrook wrote:
-> On Tuesday 19 February 2002 10:37 am, Titus Brown wrote:
-> > Incidentally, for the PyWX out-of-process adapter I wouldn't have to
-> > change a single piece of code, were CGIAdapter a bit more flexible
-> > about accepting in/out arguments (rather than assuming
-> > sys.stdin/sys.stdout). ?Would you mind if I went in and added this
-> > flexibility (i.e. does it seem like a bad thing to have)?
->
-> Sounds fine to me.
Well, I finally got around to it ;).
Attached is a context diff against the current CVS that allows creators
of CGIAdapter objects to override sys.stdin, sys.stdout, sys.stderr, and
os.environ. This is necessary for threaded servers, because the
'sys' namespace (like all module namespaces) is per-process rather than
per-thread; hence, 'sys.stdin' cannot easily be over-ridden to point to
a connection-specific input handle when there are multiple connections
per process.
If this is confusing to anyone, all you really need to know is that
the following two calls to the CGIAdapter constructor are equivalent:
CGIAdapter(webKitDir)
CGIAdapter(webKitDir, sys.stdin, sys.stdout, sys.stderr, os.environ)
so everything should work as before, for those who don't change their
code.
Let me know if you have any questions...
This patch is necessary for the following Webware handler to work
in PyWX under threaded mode:
---
import sys, os, PyWX_buffer, ns_setup
from UserDict import UserDict
from WebKit.CGIAdapter import CGIAdapter
WebKit_dir = '/u/t/software/Webware-0.7/WebKit'
WebKit_base_url = '/WebKit.cgi'
def handle(conn):
environ = UserDict() # necessary for .data member.
environ.update(os.environ)
ns_setup.create_cgi_environ(conn, environ)
if not environ['CONTENT_LENGTH']:
environ['CONTENT_LENGTH'] = 0
environ['SCRIPT_NAME'] = WebKit_base_url
environ['PATH_INFO'] = conn.request.url[len(environ['SCRIPT_NAME']):]
buff = PyWX_buffer.GetBuff()
p = CGIAdapter(WebKit_dir, buff, buff, sys.stderr, environ)
p.run()
---
I'll check in examples etc. to the PyWX code base.
cheers,
--titus | https://sourceforge.net/p/pywx/mailman/pywx-discuss/thread/20030110184657.GA2557@caltech.edu/ | CC-MAIN-2018-09 | refinedweb | 327 | 50.43 |
Please have a look at the following OSS note: 414920
SAP writes:
Symptom
When you select a standard variant via Settings -> Layout -> Select, the following error message appears
“Template not found in the BDS – Layout:
Template Guid:….
If you then go to Settings -> Layout -> Save, you get the NOTHING_FOUND short dump.
Additional key words
ABAP List Viewer, SAP List Viewer,
Display mode, layout administration, default, default layout variant, default variant, access variant, access layout
CL_ALV_BDS====================CP
Message number 0K 412
Cause and prerequisites
The GUID stored in the layout (=Template) is no longer available in the Business Document Service (BDS).
Solution
Implement the following source code corrections according to the correction instructions. You can then save the variant under another name, or use the same name again if the appropriate authorization is available.
This is a workaround. The definitive solution is the CLEAN_VARIANT method contained in Basis Support Package 21 for 46C. However, the OK 036 error occurs there with SAP standard variants ("The name of layout XX is in the SAP namespace"). This error is eliminated with note #418527.
The source code corrections are in the OSS note. Please ask your Basis specialist to implement this for you if you don't have access to OSS.
Hope this helps.
Regards
GR-SAKURA Special Project: Sketch on Web Compiler
Overview
This project introduces how to sketch on the web compiler for GR-SAKURA. Both Windows and Mac are available.
Preparation
You will need a GR-SAKURA board and a USB cable (Mini-B).
1. Create a New Project
Use the button as shown here to create your new project. Those logging in for the first time should skip this step.
2. Select a Template
Select an appropriate template, such as GR-SAKURA_Sketch_Vxxx.zip, where xxx indicates the version. Since there are many templates, entering "sakura" in the filter makes it easier to find. After selecting the template, give the project an appropriate name and click "Create Project".
3. Displaying Your Sketch
Clicking "Create Project" brings you to the Web Compiler IDE window. Click "gr_sketch.cpp" file under the Explore menu. The source code of the file will appear in the right window. This is the sample code that will drive the LED on GR-SAKURA.
4. Build the Code
Since the code shown on the screen is a working sample, you don't need to debug it. To build the code, click the "Execute Build" icon on the right navigation menu shown in the image.
5. Download the Binary
After a successful build, you will find "sakura_sketch.bin" under the .CPP file shown in the image. Move the mouse over the bin file and right-click - this will bring up the pop-up menu. Then, click the "Download" menu.
6. Connect GR-SAKURA
Connect GR-SAKURA to a PC with a USB cable. Then, push the Reset button on GR-SAKURA. GR-SAKURA is displayed as a USB storage.
7. Flashing (Copy Program to GR-SAKURA)
Copy and paste the sakura_sketch.bin to the storage. Now, the LED on GR-SAKURA should be flashing.
Sample Code for Making the LED Flash Like a Firefly
Copy and paste the sample program below to your sketch on the web compiler.
#include <arduino.h>

void setup(){
}

void loop(){
    for(int i = 0; i < 256; i++){
        analogWrite(PIN_LED0, i);
        delay(5);
    }
    for(int i = 0; i < 256; i++){
        analogWrite(PIN_LED0, 255 - i);
        delay(5);
    }
}
I hate to play the annoying dull person who doesn't have a clue what he is talking about. But, unfortunately, that's me as far as linux is concerned...
I'm doing some file manipulations in python that use the zipfile module. The version of python currently installed is 2.4.3, but the zipfile module utilizes 'with' statements, which I think came out in version 2.5 (I'm also going to need to use the tarfile module; although I haven't tested it, I imagine I'll run into the same problem).
My current plan of action is to figure out how to upgrade the Python installation. But, knowing absolutely nothing about linux and very little about Python, I don't even know where to start. I looked at some similar posts, which mentioned installing it in another directory. I didn't install it in the first place though. I don't even know how to install stuff in linux...
Any help would be greatly appreciated! Also, if there's a better way than trying to update python, I'm totally open to suggestions. Just remember: my linux intelligence is about equivalent to that of a four-year-old. Thanks!
sudo apt-get install python3.2
The easiest way to upgrade Python on Linux is to have someone else do it for you; your distribution often has many people who package upgrades specifically for that distribution. Before you roll up your sleeves and get into it, see if they have already done the work.
Basically, this means learning what type of package manager your distribution uses, and then seeing if the next release is available. This technique has its weaknesses, though.
For my distribution, rpm is the package manager, and yum is the command line front-end. To check if an update exists for a package with yum, I type
yum check-update <package name>
and to update something I type
yum update <package name>
Whatever your outcome is with respect to your Python module, you should learn a bit about your package manager and how to use it to avoid the rush of needing to learn it at a critical juncture.
In the event that you do not find the right version packaged for you, sometimes a search on Google will yield a person who packaged the right version for your distribution, even though your distribution hasn't done so yet. If so, and if you trust the other person to not be doing something malicious, then you can typically install their package.
If no prepackaged version exists, then you are stuck and need to read the "how to install" pages of the particular item, and for Python, it closely reflects Maxime's "the hard way", which might sound intimidating, but after you do it a few times, you realize it isn't really that hard.
Keep in mind that software installed outside of the package manager's knowledge probably will never be known to the package manager, so future use of the package manager with respect to that software will probably need to be handled specially, to prevent the package manager from doing something that makes sense if your version of Python isn't installed.
just work with zip commands directly, instead of using the old zip module:
import subprocess
subprocess.call(['unzip', '<path to your file>'])
or alternatively, use subprocess.Popen.
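Alternatively, if the goal is just to avoid `with`, the zipfile calls themselves work fine on 2.4 — only the context-manager sugar is missing. A sketch of the old-style pattern (the helper name and archive path are made up for illustration):

```python
import zipfile

def list_zip(path):
    # Python 2.4 has no "with" statement (and ZipFile only became a
    # context manager in later releases anyway), so open and close
    # explicitly, with try/finally guaranteeing the close on errors.
    zf = zipfile.ZipFile(path, 'r')
    try:
        return zf.namelist()
    finally:
        zf.close()
```

The same try/finally shape applies to `tarfile.open`.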
After submitting my answer, I would like to add the following. Guessing from the Python version, you are using Red Hat 5 or CentOS 5.
Here is how it is looking on my Red Hat 5.8 at work:
oz@server ~ $ /usr/bin/python
Python 2.4.3 (#1, Dec 22 2011, 12:12:01)
[GCC 4.1.2 20080704 (Red Hat 4.1.2-50)] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>> f = open('/etc/redhat-release')
>>> f.readlines()
['Red Hat Enterprise Linux Server release 5.8 (Tikanga)\n']
Most other Ubuntu versions and Debian have a newer Python version.
Anyway, upgrading Python should work like this:
THIS is important, otherwise you will erase your system Python!!! You don't want to do that, because many system components depend on Python.
./configure --prefix=/opt/python2.7 --enable-shared
Now, compile the software itself:
make
Then install it:
make altinstall
Don't miss the --prefix=/opt
./configure --prefix=/opt/python2.7
Finally, you need to set your PATH variable to include the directory where Python was installed.
export PATH=$PATH:/opt/python2.7/bin/
To make that permanent, edit your .bashrc, and add the above line!
Good luck and welcome to Linux :-)
Not sure why there's so much resistance to just building Python from source, because it's really quite simple:
$ tar jxf Python-2.7.3.tar.bz2 # unpack the source tarball
$ cd Python-2.7.3 # cd into the source directory
$ ./configure --prefix=$HOME # configure for your system
$ make # build everything
$ make install # install (in $HOME/bin, $HOME/share, etc )
If you are nervous about doing the install step, first do make DESTDIR=/tmp/foo install and install everything in /tmp/foo. Inspect that directory tree, then rm -rf /tmp/foo (Unless you had important files in /tmp/foo before you started the procedure, of course!)
If you get errors, then there will be some work to do, but chances are very good that the build will work without errors.
Once you've done this, you should be able to run $HOME/bin/python
Unlocking ES2015 features with Webpack and Babel
This post is part of a series of ES2015 posts. We'll be covering new JavaScript functionality every week for the coming two months.
After being in the working draft state for a long time, the ES2015 (formerly known as ECMAScript 6 or ES6 shorthand) specification has reached a definitive state a while ago. For a long time now, BabelJS, a Javascript transpiler, formerly known as 6to5, has been available for developers that would already like to use ES2015 features in their projects.
In this blog post I will show you how you can integrate Webpack, a Javascript module builder/loader, with Babel to automate the transpiling of ES2015 code to ES5. Besides that I'll also explain you how to automatically generate source maps to ease development and debugging.
Webpack
Introduction
Webpack is a Javascript module builder and module loader. With Webpack you can pack a variety of different modules (AMD, CommonJS, ES2015, ...) with their dependencies into static file bundles. Webpack provides you with loaders which essentially allow you to pre-process your source files before requiring or loading them. If you are familiar with tools like Grunt or Gulp you can think of loaders as tasks to be executed before bundling sources. To make your life even easier, Webpack also comes with a development server with file watch support and browser reloading.
Installation
In order to use Webpack all you need is npm, the Node Package Manager, available by downloading either Node or io.js. Once you've got npm up and running all you need to do to setup Webpack globally is install it using npm:
npm install -g webpack
Alternatively, you can include it just in the projects of your preference using the following command:
npm install --save-dev webpack
Babel
Introduction
With Babel, a Javascript transpiler, you can write your code using ES2015 (and even some ES7 features) and convert it to ES5 so that well-known browsers will be able to interpret it. On the Babel website you can find a list of supported features and how you can use these in your project today. For the React developers among us, Babel also comes with JSX support out of the box.
Alternatively, there is the Google Traceur compiler which essentially solves the same problem as Babel. There are multiple Webpack loaders available for Traceur of which traceur-loader seems to be the most popular one.
Installation
Assuming you already have npm installed, installing Babel is as easy as running:
npm install --save-dev babel-loader
This command will add babel-loader to your project's package.json. Run the following command if you prefer installing it globally:
npm install -g babel-loader
Project structure
webpack-babel-integration-example/
  src/
    DateTime.js
    Greeting.js
    main.js
  index.html
  package.json
  webpack.config.js
Webpack's configuration can be found in the root directory of the project, named webpack.config.js. The ES6 Javascript sources that I wish to transpile to ES5 will be located under the src/ folder.
Webpack configuration
The required Webpack configuration file is very straightforward and covers a few aspects:
- my main source entry
- the output path and bundle name
- the development tools that I would like to use
- a list of module loaders that I would like to apply to my source
var path = require('path');

module.exports = {
  entry: './src/main.js',
  output: {
    path: path.join(__dirname, 'build'),
    filename: 'bundle.js'
  },
  devtool: 'inline-source-map',
  module: {
    loaders: [
      {
        test: path.join(__dirname, 'src'),
        loader: 'babel-loader'
      }
    ]
  }
};
The snippet above shows you that my source entry is set to src/main.js, the output is set to create a build/bundle.js, I would like Webpack to generate inline source maps and I would like to run the babel-loader for all files located in src/.
ES6 sources
A simple ES6 class
Greeting.js contains a simple class with only the
toString method implemented to return a String that will greet the user:
class Greeting {
  toString() {
    return 'Hello visitor';
  }
}

export default Greeting
Using packages in your ES2015 code
Often enough, you rely on a bunch of different packages that you include in your project using npm. In my example, I'll use the popular date time library called Moment.js. In this example, I'll use Moment.js to display the current date and time to the user.
Run the following command to install Moment.js as a local dependency in your project:
npm install --save-dev moment
I have created the DateTime.js class which again only implements the
toString method to return the current date and time in the default date format.
import moment from 'moment';

class DateTime {
  toString() {
    return 'The current date time is: ' + moment().format();
  }
}

export default DateTime
After importing the package using the import statement you can use it anywhere within the source file.
Your main entry
In the Webpack configuration I specified a src/main.js file to be my source entry. In this file I simply import both classes that I created, I target different DOM elements and output the
toString implementations from both classes into these DOM objects.
import Greeting from './Greeting.js';
import DateTime from './DateTime.js';

var h1 = document.querySelector('h1');
h1.textContent = new Greeting();

var h2 = document.querySelector('h2');
h2.textContent = new DateTime();
HTML
After setting up my ES2015 sources that will display the greeting in an h1 tag and the current date time in an h2 tag, it is time to set up my index.html. Being a straightforward HTML file, the only thing that is really important is that you point the script tag to the transpiled bundle file, in this example build/bundle.js.
<!DOCTYPE html>
<html lang="en">
<head>
    <meta charset="UTF-8">
    <title>Webpack and Babel integration example</title>
</head>
<body>
    <h1></h1>
    <h2></h2>
    <script src="build/bundle.js"></script>
</body>
</html>
Running the application
In this example project, running my application is as simple as opening the index.html in your favorite browser. However, before doing this you will need to instruct Webpack to actually run the loaders and thus transpile your sources into the build/bundle.js required by the index.html.
You can run Webpack in watch mode, meaning that it will monitor your source files for changes and automatically run the module loaders defined in your configuration. Execute the following command to run in watch mode:
webpack --watch
If you are using my example project from Github (link at the bottom), you can also use the following script which I've set up in the package.json:
npm run watch
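That script is just an alias for the watch invocation; in package.json the relevant part might look like this (a sketch — only the scripts section shown):

```json
{
  "scripts": {
    "watch": "webpack --watch"
  }
}
```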
Easier debugging using source maps
Debugging transpiled ES5 is a huge pain which will make you want to go back to writing ES5 without thinking. To ease development and debugging of ES2015 I can rely on source maps generated by Webpack. While running Webpack (normal or in watch mode) with the devtool property set to inline-source-map you can view the ES2015 source files and actually place breakpoints in them using your browser's development tools.
Running the example project with a breakpoint inside the DateTime.js
toString method using the Chrome developer tools.
Conclusion
As you've just seen, setting up everything you need to get started with ES2015 is extremely easy. Webpack is a great utility that will allow you to easily set up your complete front-end build pipeline and seamlessly integrates with Babel to include code transpiling into the build pipeline. With the help of source maps even debugging becomes easy again.
Sample project
The entire sample project as introduced above can be found on Github.
Thank you very much for this wonderful guide! Worked perfectly for me.
Great tutorial. Thanks. I've been learning webpack and babel. However, with all kinds of other tools to fiddle with (gulp, browserify, angular, react, typescript etc.) I've had some trouble figuring out the minimal set of babel-XXX npm packages I needed to get a simple es6 file working. Your tutorial gave me exactly what I needed. However, now that babel has been upgraded from v5 to v6 your tutorial and your github site need some slight updates. Specifically, this project requires `npm install --save-dev babel-core babel-preset-es2015` (in addition to `babel-loader`) and requires a `.babelrc` file that contains `{"presets": ["es2015"]}`. BTW, for your own info, note that I found your tutorial because it's listed on the official webpack site on its list of tutorials (). | http://blog.xebia.com/unlocking-es2015-features-with-webpack-and-babel/ | CC-MAIN-2017-13 | refinedweb | 1,422 | 53.31 |
Just add it to the existing modex. -Nathan
> On Jul 22, 2019, at 12:20 PM, Adrian Reber via users > <users@lists.open-mpi.org> are always welcome. What would be great is a nice big warning >>>>> that CMA support is disabled because the processes are on different >>>>> namespaces. Ideally all MPI processes should be on the same namespace to >>>>> ensure the best performance. >>>>> >>>>> -Nathan >>>>> >>>>>> On Jul 21, 2019, at 2:53 PM, Adrian Reber via users >>>>>> <users@lists.open-mpi.org> wrote: >>>>>> >>>>>> For completeness I am mentioning my results also here. >>>>>> >>>>>> To be able to mount file systems in the container it can only work if >>>>>> user namespaces are used and even if the user IDs are all the same (in >>>>>> each container and on the host), to be able to ptrace the kernel also >>>>>> checks if the processes are in the same user namespace (in addition to >>>>>> being owned by the same user). This check - same user namespace - fails >>>>>> and so process_vm_readv() and process_vm_writev() will also fail. >>>>>> >>>>>> So Open MPI's checks are currently not enough to detect if 'cma' can be >>>>>> used. Checking for the same user namespace would also be necessary. >>>>>> >>>>>> Is this a use case important enough to accept a patch for it? >>>>>> >>>>>> Adrian >>>>>> >>>>>>> On Fri, Jul 12, 2019 at 03:42:15PM +0200, Adrian Reber via users wrote: >>>>>>> Gilles, >>>>>>> >>>>>>> thanks again. Adding '--mca btl_vader_single_copy_mechanism none' helps >>>>>>> indeed. >>>>>>> >>>>>>> The default seems to be 'cma' and that seems to use process_vm_readv() >>>>>>> and process_vm_writev(). That seems to require CAP_SYS_PTRACE, but >>>>>>> telling Podman to give the process CAP_SYS_PTRACE with >>>>>>> '--cap-add=SYS_PTRACE' >>>>>>> does not seem to be enough. Not sure yet if this related to the fact >>>>>>> that Podman is running rootless. I will continue to investigate, but now >>>>>>> I know where to look. Thanks! 
>>>>>>> >>>>>>> Adrian >>>>>>> >>>>>>>> On Fri, Jul 12, 2019 at 06:48:59PM +0900, Gilles Gouaillardet via >>>>>>>> users wrote: >>>>>>>> Adrian, >>>>>>>> >>>>>>>> Can you try >>>>>>>> mpirun --mca btl_vader_copy_mechanism none ... >>>>>>>> >>>>>>>> Please double check the MCA parameter name, I am AFK >>>>>>>> >>>>>>>> IIRC, the default copy mechanism used by vader directly accesses the >>>>>>>> remote process address space, and this requires some permission >>>>>>>> (ptrace?) that might be dropped by podman. >>>>>>>> >>>>>>>> Note Open MPI might not detect both MPI tasks run on the same node >>>>>>>> because of podman. >>>>>>>> If you use UCX, then btl/vader is not used at all (pml/ucx is used >>>>>>>> instead) >>>>>>>> >>>>>>>> >>>>>>>> Cheers, >>>>>>>> >>>>>>>> Gilles >>>>>>>> >>>>>>>> Sent from my iPod >>>>>>>> >>>>>>>>> On Jul 12, 2019, at 18:33, Adrian Reber via users >>>>>>>>> <users@lists.open-mpi.org> wrote: >>>>>>>>> >>>>>>>>> So upstream Podman was really fast and merged a PR which makes my >>>>>>>>> wrapper unnecessary: >>>>>>>>> >>>>>>>>> Add support for --env-host : >>>>>>>>> >>>>>>>>> >>>>>>>>> As commented in the PR I can now start mpirun with Podman without a >>>>>>>>> wrapper: >>>>>>>>> >>>>>>>>> $ mpirun --hostfile ~/hosts --mca orte_tmpdir_base /tmp/podman-mpirun >>>>>>>>> podman run --env-host --security-opt label=disable -v >>>>>>>>> /tmp/podman-mpirun:/tmp/podman-mpirun --userns=keep-id --net=host >>>>>>>>> mpi-test /home/mpi/ring >>>>>>>>> Rank 0 has cleared MPI_Init >>>>>>>>> Rank 1 has cleared MPI_Init >>>>>>>>> Rank 0 has completed ring >>>>>>>>> Rank 0 has completed MPI_Barrier >>>>>>>>> Rank 1 has completed ring >>>>>>>>> Rank 1 has completed MPI_Barrier >>>>>>>>> >>>>>>>>> This is example was using TCP and on an InfiniBand based system I have >>>>>>>>> to map the InfiniBand devices into the container. 
>>>>>>>>> >>>>>>>>> $ mpirun --mca btl ^openib --hostfile ~/hosts --mca orte_tmpdir_base >>>>>>>>> /tmp/podman-mpirun podman run --env-host -v >>>>>>>>> /tmp/podman-mpirun:/tmp/podman-mpirun --security-opt label=disable >>>>>>>>> --userns=keep-id --device /dev/infiniband/uverbs0 --device >>>>>>>>> /dev/infiniband/umad0 --device /dev/infiniband/rdma_cm --net=host >>>>>>>>> mpi-test /home/mpi/ring >>>>>>>>> Rank 0 has cleared MPI_Init >>>>>>>>> Rank 1 has cleared MPI_Init >>>>>>>>> Rank 0 has completed ring >>>>>>>>> Rank 0 has completed MPI_Barrier >>>>>>>>> Rank 1 has completed ring >>>>>>>>> Rank 1 has completed MPI_Barrier >>>>>>>>> >>>>>>>>> This is all running without root and only using Podman's rootless >>>>>>>>> support. >>>>>>>>> >>>>>>>>> Running multiple processes on one system, however, still gives me an >>>>>>>>> error. If I disable vader I guess that Open MPI is using TCP for >>>>>>>>> localhost communication and that works. But with vader it fails. >>>>>>>>> >>>>>>>>> The first error message I get is a segfault: >>>>>>>>> >>>>>>>>> [test1:00001] *** Process received signal *** >>>>>>>>> [test1:00001] Signal: Segmentation fault (11) >>>>>>>>> [test1:00001] Signal code: Address not mapped (1) >>>>>>>>> [test1:00001] Failing at address: 0x7fb7b1552010 >>>>>>>>> [test1:00001] [ 0] /lib64/libpthread.so.0(+0x12d80)[0x7f6299456d80] >>>>>>>>> [test1:00001] [ 1] >>>>>>>>> /usr/lib64/openmpi/lib/openmpi/mca_btl_vader.so(mca_btl_vader_send+0x3db)[0x7f628b33ab0b] >>>>>>>>> [test1:00001] [ 2] >>>>>>>>> /usr/lib64/openmpi/lib/openmpi/mca_pml_ob1.so(mca_pml_ob1_send_request_start_rdma+0x1fb)[0x7f62901d24bb] >>>>>>>>> [test1:00001] [ 3] >>>>>>>>> /usr/lib64/openmpi/lib/openmpi/mca_pml_ob1.so(mca_pml_ob1_send+0xfd6)[0x7f62901be086] >>>>>>>>> [test1:00001] [ 4] >>>>>>>>> /usr/lib64/openmpi/lib/libmpi.so.40(PMPI_Send+0x1bd)[0x7f62996f862d] >>>>>>>>> [test1:00001] [ 5] /home/mpi/ring[0x400b76] >>>>>>>>> [test1:00001] [ 6] >>>>>>>>> 
/lib64/libc.so.6(__libc_start_main+0xf3)[0x7f62990a3813] >>>>>>>>> [test1:00001] [ 7] /home/mpi/ring[0x4008be] >>>>>>>>> [test1:00001] *** End of error message *** >>>>>>>>> >>>>>>>>> Guessing that vader uses shared memory this is expected to fail, with >>>>>>>>> all the namespace isolations in place. Maybe not with a segfault, but >>>>>>>>> each container has its own shared memory. So next step was to use the >>>>>>>>> host's ipc and pid namespace and mount /dev/shm: >>>>>>>>> >>>>>>>>> '-v /dev/shm:/dev/shm --ipc=host --pid=host' >>>>>>>>> >>>>>>>>> Which does not segfault, but still does not look correct: >>>>>>>>> >>>>>>>>> Rank 0 has cleared MPI_Init >>>>>>>>> Rank 1 has cleared MPI_Init >>>>>>>>> Rank 2 has cleared MPI_Init >>>>>>>>> >>>>>>>>> Rank 0 has completed ring >>>>>>>>> Rank 2 has completed ring >>>>>>>>> Rank 0 has completed MPI_Barrier >>>>>>>>> Rank 1 has completed ring >>>>>>>>> Rank 2 has completed MPI_Barrier >>>>>>>>> Rank 1 has completed MPI_Barrier >>>>>>>>> >>>>>>>>> This is using the Open MPI ring.c example with SIZE increased from 20 >>>>>>>>> to 20000. >>>>>>>>> >>>>>>>>> Any recommendations what vader needs to communicate correctly? >>>>>>>>> >>>>>>>>> Adrian >>>>>>>>> >>>>>>>>>> On Thu, Jul 11, 2019 at 12:07:35PM +0200, Adrian Reber via users >>>>>>>>>> wrote: >>>>>>>>>> Gilles, >>>>>>>>>> >>>>>>>>>> thanks for pointing out the environment variables. I quickly created >>>>>>>>>> a >>>>>>>>>> wrapper which tells Podman to re-export all OMPI_ and PMIX_ variables >>>>>>>>>> (grep "\(PMIX\|OMPI\)"). Now it works: >>>>>>>>>> >>>>>>>>>> $ mpirun --hostfile ~/hosts ./wrapper -v /tmp:/tmp --userns=keep-id >>>>>>>>>> --net=host mpi-test /home/mpi/hello >>>>>>>>>> >>>>>>>>>> Hello, world (2 procs total) >>>>>>>>>> --> Process # 0 of 2 is alive. ->test1 >>>>>>>>>> --> Process # 1 of 2 is alive. 
->test2 >>>>>>>>>> >>>>>>>>>> I need to tell Podman to mount /tmp from the host into the >>>>>>>>>> container, as >>>>>>>>>> I am running rootless I also need to tell Podman to use the same >>>>>>>>>> user ID >>>>>>>>>> in the container as outside (so that the Open MPI files in /tmp) can >>>>>>>>>> be >>>>>>>>>> shared and I am also running without a network namespace. >>>>>>>>>> >>>>>>>>>> So this is now with the full Podman provided isolation except the >>>>>>>>>> network namespace. Thanks for you help! >>>>>>>>>> >>>>>>>>>> Adrian >>>>>>>>>> >>>>>>>>>>> On Thu, Jul 11, 2019 at 04:47:21PM +0900, Gilles Gouaillardet via >>>>>>>>>>> users wrote: >>>>>>>>>>> Adrian, >>>>>>>>>>> >>>>>>>>>>> >>>>>>>>>>> the MPI application relies on some environment variables (they >>>>>>>>>>> typically >>>>>>>>>>> start with OMPI_ and PMIX_). >>>>>>>>>>> >>>>>>>>>>> The MPI application internally uses a PMIx client that must be able >>>>>>>>>>> to >>>>>>>>>>> contact a PMIx server >>>>>>>>>>> >>>>>>>>>>> (that is included in mpirun and the orted daemon(s) spawned on the >>>>>>>>>>> remote >>>>>>>>>>> hosts). >>>>>>>>>>> >>>>>>>>>>> located on the same host. >>>>>>>>>>> >>>>>>>>>>> >>>>>>>>>>> If podman provides some isolation between the app inside the >>>>>>>>>>> container (e.g. >>>>>>>>>>> /home/mpi/hello) >>>>>>>>>>> >>>>>>>>>>> and the outside world (e.g. mpirun/orted), that won't be an easy >>>>>>>>>>> ride. >>>>>>>>>>> >>>>>>>>>>> >>>>>>>>>>> Cheers, >>>>>>>>>>> >>>>>>>>>>> >>>>>>>>>>> Gilles >>>>>>>>>>> >>>>>>>>>>> >>>>>>>>>>>> On 7/11/2019 4:35 PM, Adrian Reber via users wrote: >>>>>>>>>>>> I did a quick test to see if I can use Podman in combination with >>>>>>>>>>>> Open >>>>>>>>>>>> MPI: >>>>>>>>>>>> >>>>>>>>>>>> [test@test1 ~]$ mpirun --hostfile ~/hosts podman run >>>>>>>>>>>> quay.io/adrianreber/mpi-test /home/mpi/hello >>>>>>>>>>>> >>>>>>>>>>>> Hello, world (1 procs total) >>>>>>>>>>>> --> Process # 0 of 1 is alive. 
->789b8fb622ef >>>>>>>>>>>> >>>>>>>>>>>> Hello, world (1 procs total) >>>>>>>>>>>> --> Process # 0 of 1 is alive. ->749eb4e1c01a >>>>>>>>>>>> >>>>>>>>>>>> The test program (hello) is taken from >>>>>>>>>>>> >>>>>>>>>>>> >>>>>>>>>>>> >>>>>>>>>>>> The problem with this is that each process thinks it is process 0 >>>>>>>>>>>> of 1 >>>>>>>>>>>> instead of >>>>>>>>>>>> >>>>>>>>>>>> Hello, world (2 procs total) >>>>>>>>>>>> --> Process # 1 of 2 is alive. ->test1 >>>>>>>>>>>> --> Process # 0 of 2 is alive. ->test2 >>>>>>>>>>>> >>>>>>>>>>>> My questions is how is the rank determined? What resources do I >>>>>>>>>>>> need to have >>>>>>>>>>>> in my container to correctly determine the rank. >>>>>>>>>>>> >>>>>>>>>>>> This is Podman 1.4.2 and Open MPI 4.0.1. >>>>>>>>>>>> >>>>>>>>>>>> Adrian >>>>>>>>>>>> _______________________________________________ >>>>>>>>>>>> >>>>>> >>>> Adrian >>>> >>> _______________________________________________ >>> users mailing list >>> users@lists.open-mpi.org >>> >> >> >> _______________________________________________ >> users mailing list >> users@lists.open-mpi.org >> > _______________________________________________ > users mailing list > users@lists.open-mpi.org > _______________________________________________ users mailing list users@lists.open-mpi.org | https://www.mail-archive.com/users@lists.open-mpi.org/msg33354.html | CC-MAIN-2019-35 | refinedweb | 1,365 | 61.87 |
XML, like many other structured data formats, was not designed to be human-friendly. That's why many network engineers lose interest in YANG as soon as the conversation gets to the XML part. JSON is a much more human-readable alternative; however, very few devices support RESTCONF, and the ones that do may have buggy implementations. At the same time, a lot of network engineers have happily embraced Ansible, which extensively uses YAML. That's why I've decided to write a Python module that would program network devices using YANG and NETCONF according to configuration data described in a YAML format.
In the previous post I introduced a new open-source tool called YDK, designed to create API bindings for YANG models and interact with network devices using NETCONF or RESTCONF protocols. I also mentioned that I would still prefer to use pyangbind along with other open-source tools to achieve the same functionality. Now, two weeks later, I must admit I have been converted. Initially, I was planning to write a simple REST API client to interact with the RESTCONF interface of IOS XE, create an API binding with pyangbind, use it to produce the JSON output, convert it to XML and send it to the device, similar to what I've described in my netconf and restconf posts. However, I've realised that YDK can already do all of what I need with just a few function calls. All I have left to do is create a wrapper module to consume the YAML data and use it to automatically populate YDK bindings.
This post will be mostly about the internal structure of this wrapper module I call
ydk_yaml.py, which will serve as a base library for the YANG Ansible module I will describe in my next post. This post will be very programming-oriented: I'll start with a quick overview of some of the programming concepts used by the module and then move on to the details of its implementation. Those who are not interested in technical details can jump straight to the examples section at the end of this post for a quick demonstration of how it works.
Recursion
One of the main tasks of
ydk_yaml.py module is to be able to parse a YAML data structure. This data structure, when loaded into Python, is stored as a collection of Python objects like dictionaries, lists and primitive data types like strings, integers and booleans. One key property of YAML data structures is that they can be represented as trees, and parsing trees is a very well-known programming problem.
After having completed this programming course I fell in love with functional programming and recursion. Every problem I see, I try to solve with a recursive function. Recursion is interesting in that recursive functions are often difficult to understand but relatively easy to write. Any recursive function will consist of a number of
if/then/else conditional statements. The first one (or few)
if statements are called the base of the recursion - this is where the recursion stops and a value is returned to the calling function. The remaining few
if statements implement the recursion by calling the same function with a reduced input. You can find a much better explanation of recursive functions here. For now, let's consider the problem of parsing the following tree-like data structure:
{
  'parent': {
    'child_1': {
      'leaf_1': 'value_1'
    },
    'child_2': 'value_2'
  }
}
Recursive function to parse this data structure written in a pseudo-language will look something like this:
def recursion(input_key, input_value):
    if input_value is String:
        return process(input_value)
    elif input_value is Dictionary:
        for key, value in input_value.keys_and_values():
            return recursion(key, value)
The beauty of recursive functions is that they are capable of parsing data structures of arbitrary complexity. That means that if we had 1000 randomly nested child elements in the parent data structure, they could all have been parsed by the same 6-line function.
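Translated into real, runnable Python, the same walk looks like this. The names walk and leaves are my own stand-ins for the pseudo-code's recursion and process:

```python
# Runnable version of the recursive tree walk; collecting leaves
# stands in for the pseudo-code's process() step.
def walk(key, value, leaves):
    if isinstance(value, str):        # base of the recursion: a leaf value
        leaves.append((key, value))
    elif isinstance(value, dict):     # recursive case: descend into the children
        for k, v in value.items():
            walk(k, v, leaves)

tree = {
    'parent': {
        'child_1': {'leaf_1': 'value_1'},
        'child_2': 'value_2',
    }
}

leaves = []
for k, v in tree.items():
    walk(k, v, leaves)
print(leaves)  # [('leaf_1', 'value_1'), ('child_2', 'value_2')]
```

However deeply the dictionaries are nested, the same six lines of logic handle them.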
Introspection
Introspection refers to the ability of Python to examine objects at runtime. It can be useful when dealing with objects of arbitrary structure, e.g. a YAML document. Introspection is used whenever there is a need for a function to behave differently based on the runtime data. In the above pseudo-language example, the two conditional statements are examples of introspection.
type(obj) which returns the type of an object or
isinstance(obj, type) which checks if the object is an instance or a descendant of a particular type. This is how we can re-write the above two conditional statements using real Python:
if isinstance(input_value, str):
    print('input value is a string')
elif isinstance(input_value, dict):
    print('input value is a dictionary')
Metaprogramming
Another programming concept used in my Python module is metaprogramming. Metaprogramming, in general, refers to the ability of programs to write themselves. This is what compilers normally do when they read a program written in a higher-level language and translate it to a lower-level language, like assembler. What I've used in my module is the simplest version of metaprogramming - dynamic getting and setting of object attributes. For example, this is how we would configure BGP using the YDK Python binding, as described in my previous post:
bgp.id = 100
n = bgp.Neighbor()
n.id = '2.2.2.2'
n.remote_as = 65100
bgp.neighbor.append(n)
The same code could be re-written using the
getattr and
setattr method calls:
setattr(bgp, 'id', 100)
n = getattr(bgp, 'Neighbor')()
setattr(n, 'id', '2.2.2.2')
setattr(n, 'remote_as', 65100)
getattr(bgp, 'neighbor').append(n)
This is also very useful when working with arbitrary data structures and objects. In my case the goal was to write a module that would be completely independent of the structure of a particular YANG model, which means that I can not know the structure of the Python binding generated by YDK. However, I can “guess” the name of the attributes if I assume that my YAML document is structured exactly like the YANG model. This simple assumption allows me to implement YAML mapping for all possible YANG models with just a single function.
YANG mapping to YAML
As I’ve mentioned in my previous post, YANG is simply a way to define the structure of an XML document. At the same time, it is known that YANG-based XML can be mapped to JSON as described in this RFC. Since YAML is a superset of JSON, it’s easy to come up with a similar XML-to-YAML mapping convention. The following table contains the mapping between some of the most common YAML and YANG data structures and types:
Using this table, it’s easy to map the YANG data model to a YAML document. Let me demonstrate it on IOS XE’s native OSPF data model. First, I’ve generated a tree representation of an OSPF data model using pyang:
pyang -f tree --tree-path "/native/router/ospf" ~/ydk-gen/gen-api/.cache/models/[email protected]/ned.yang -o ospf.tree
YANG instantiating function
At the heart of the
ydk_yaml module is a single recursive function that traverses the input YAML data structure and uses it to instantiate the YDK-generated Python binding. Here is a simple, abridged version of the function that demonstrates the main logic.
def instantiate(binding, model_key, model_value):
    # Base of the recursion: a YANG Leaf maps to a primitive value
    if any(isinstance(model_value, x) for x in [str, bool, int]):
        setattr(binding, model_key, model_value)
    # A YANG List: instantiate each element and append it to the binding
    elif isinstance(model_value, list):
        for el in model_value:
            getattr(binding, model_key).append(instantiate(binding, model_key, el))
    # A YANG Container: create the class instance, populate it recursively
    # and attach it to the parent binding
    elif isinstance(model_value, dict):
        container_instance = getattr(binding, model_key)()
        for k, v in model_value.iteritems():
            instantiate(container_instance, k, v)
        setattr(binding, model_key, container_instance)
Most of it should already make sense based on what I've covered above. The first conditional statement is the base of the recursion and performs the action of setting the value of a YANG Leaf element. The second conditional statement takes care of a YANG List by traversing all its elements, instantiating them recursively and appending the results to the YDK binding. The last
elif statement creates a class instance for a YANG container, recursively populates its values and saves the final result inside a YDK binding.
The full version of this function covers a few extra corner cases and can be found here.
The YDK module wrapper
The final step is to write a wrapper class that would consume the YDK model binding along with the YAML data, and both instantiate and push the configuration down to the network device.
class YdkModel:
    def __init__(self, model, data):
        self.model = model
        self.data = data
        from ydk.models.cisco_ios_xe.ned import Native
        self.binding = Native()
        for k, v in self.data.iteritems():
            instantiate(self.binding, k, v)

    def action(self, crud_action, device):
        from ydk.services import CRUDService
        from ydk.providers import NetconfServiceProvider
        provider = NetconfServiceProvider(address=device['hostname'],
                                          port=device['port'],
                                          username=device['username'],
                                          password=device['password'],
                                          protocol='ssh')
        crud = CRUDService()
        crud_instance = getattr(crud, crud_action)
        crud_instance(provider, self.binding)
        provider.close()
        return
The structure of this class is pretty simple. The constructor instantiates a YDK native data model and calls the recursive instantiation function to populate the binding. The action method implements standard CRUD actions using the YDK’s NETCONF provider. The full version of this Python module can be found here.
Configuration examples
In my Github repo, I've included a few examples of how to configure Interface, OSPF and BGP settings of an IOS XE device. A helper Python script
1_send_yaml.py accepts the YANG model name and the name of the YAML configuration file as the input. It then instantiates the
YdkModel class and calls the
create action to push the configuration to the device. Let’s assume that we have the following YAML configuration data saved in a
bgp.yaml file:
---
router:
  bgp:
  - id: 100
    bgp:
      router_id: 1.1.1.1
      fast_external_fallover: null
      update_delay: 15
    neighbor:
    - id: 2.2.2.2
      remote_as: 200
    - id: 3.3.3.3
      remote_as: 300
    redistribute:
      connected: {}
To push this BGP configuration to the device all what I need to do is run the following command:
./1_send_yaml.py bgp bgp.yaml
The resulting configuration on the IOS XE device would look like this:
router bgp 100
 bgp router-id 1.1.1.1
 bgp log-neighbor-changes
 bgp update-delay 15
 redistribute connected
 neighbor 2.2.2.2 remote-as 200
 neighbor 3.3.3.3 remote-as 300
To see more examples, follow this link to my Github repo.
Boo has built-in support for regular expression literals.
1 Comment
Rui A. Rebelo
Hope this is useful for someone.
Suppose you want all the substrings in a text which match a certain regular expression. I use 2 different solutions:
import System.Text.RegularExpressions
TheSea="Fisherman has gone fishing. How many fishes will he catch?"
def GrabAll( SearchString as string, re as Regex):
    m=re.Match( SearchString)
    while m.Success:
        yield m.Groups
        m=m.NextMatch()
// the simplest & easiest way (without any help function)
for m as Match in @/[Ff]ish\w+/.Matches( TheSea):
    print m.Value
// a more powerful technique, using named groups
Net=Regex("(?<Fishy>fish)(?<rest>\\w+)", RegexOptions.IgnoreCase)
for tag in GrabAll( TheSea, Net):
    print tag["Fishy"].Value, tag["rest"].Value
/* Output:
Fisherman
fishing
fishes
Fish erman
fish ing
fish es
*/ | http://docs.codehaus.org/display/BOO/Regular+Expressions?focusedCommentId=27063 | CC-MAIN-2015-22 | refinedweb | 133 | 55.4 |
Now Available: SpringSource Tool Suite 2.3.1
The latest version of SpringSource Tool Suite (STS) is now available. STS is the best Eclipse-powered development environment for building Spring, Groovy and Grails powered enterprise applications. The new version (2.3.1) is now available for download and uses the latest Eclipse 3.5.2 base. Other features include
- The latest version of tc Server Developer Edition with improvements to the integration with the Spring Insight console.
- Namespace Handlers can now be loaded from the projects classpath which allows the use of namespaces that are not already known to STS
- Significant performance and memory consumption improvements.
Download | Install Instructions | ChangeLog | New & Noteworthy | JIRA
Now is a great time to start working with STS and please use the community forum to give your feedback and ask questions. | http://spring.io/blog/2010/03/11/now-available-springsource-tool-suite-2-3-1 | CC-MAIN-2017-43 | refinedweb | 141 | 54.42 |
Azure Service Bus
Whether an application or service runs in the cloud or on premises, it often needs to interact with other applications or services. To provide a broadly useful way to do this, Microsoft Azure offers Service Bus. This article takes a look at this technology, describing what it is and why you might want to use it.
Service Bus fundamentals
Different situations call for different styles of communication, and so Service Bus provides three different options: queues, topics, and relays. Figure 1 shows how this looks.
Figure 1: Service Bus lets applications communicate through queues, topics, and relays, all created within a namespace.
When you create a queue, topic, or relay, you give it a name. Combined with whatever you called your namespace, this name creates a unique identifier for the object. Applications can provide this name to Service Bus, then use that queue, topic, or relay to communicate with one another.
To use any of these objects in the relay scenario, Windows applications can use Windows Communication Foundation (WCF); for queues and topics, applications can rely on Service Bus messaging libraries or its REST interface. Either way, Service Bus is a communication mechanism in the cloud that's accessible from pretty much anywhere. How you use it depends on what your applications need to do.
Queues
Suppose you decide to connect two applications using a Service Bus queue. Figure 2 illustrates this situation.
Figure 2: Service Bus queues provide one-way asynchronous queuing.
The process is simple: A sender sends a message to a Service Bus queue, and a receiver picks up that message at some later time. A queue can have just a single receiver, as Figure 2 shows. Or, multiple applications can read from the same queue. In the latter situation, each message is read by just one receiver. For a multi-cast service, you should use a topic instead.
Each message has two parts: a set of properties, each a key/value pair, and a binary message body. How they're used depends on what an application is trying to do. For example, an application sending a message about a recent sale might include the properties Seller="Ava" and Amount=10000. The message body might contain a scanned image of the sale's signed contract or, if there isn't one, just remain empty.
A receiver can read a message from a Service Bus queue in two different ways. The first option, called ReceiveAndDelete, removes a message from the queue and immediately deletes it. This is simple, but if the receiver crashes before it finishes processing the message, the message will be lost. Because it's been removed from the queue, no other receiver can access it.
The second option, PeekLock, is meant to help with this problem. Like ReceiveAndDelete, a PeekLock read removes a message from the queue. It doesn't delete the message, however. Instead, it locks the message, making it invisible to other receivers, then waits for one of three events:
- If the receiver processes the message successfully, it calls Complete, and the queue deletes the message.
- If the receiver decides that it can't process the message successfully, it calls Abandon. The queue then removes the lock from the message and makes it available to other receivers.
- If the receiver calls neither of these within a configurable period of time (by default, 60 seconds), the queue assumes the receiver has failed. In this case, it behaves as if the receiver had called Abandon, making the message available to other receivers.
Notice what can happen here: the same message might be delivered twice, perhaps to two different receivers. Applications using Service Bus queues must be prepared for this. To make duplicate detection easier, each message has a unique MessageID property that by default stays the same no matter how many times the message is read from a queue.
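The lock-and-settle life cycle is easier to see in code. The sketch below is a toy in-memory model of the semantics just described — it is not the Service Bus API, the class and method names are mine, and the 60-second default is carried over purely for illustration:

```python
import time
import uuid

class ToyQueue:
    """In-memory model of PeekLock semantics; NOT the real Service Bus API."""
    def __init__(self, lock_seconds=60):
        self.lock_seconds = lock_seconds
        self.messages = []   # pending messages: (message_id, properties, body)
        self.locked = {}     # message_id -> (lock expiry time, message)

    def send(self, properties, body=b''):
        # Each message gets a unique MessageID that survives redelivery
        self.messages.append((str(uuid.uuid4()), properties, body))

    def peek_lock(self):
        self._expire_locks()
        if not self.messages:
            return None
        msg = self.messages.pop(0)   # hidden from other receivers while locked
        self.locked[msg[0]] = (time.time() + self.lock_seconds, msg)
        return msg

    def complete(self, message_id):
        del self.locked[message_id]  # processed successfully: delete for good

    def abandon(self, message_id):
        _, msg = self.locked.pop(message_id)
        self.messages.insert(0, msg)  # visible to receivers again

    def _expire_locks(self):
        now = time.time()
        for mid, (expiry, _) in list(self.locked.items()):
            if now >= expiry:         # receiver presumed dead: act as Abandon
                self.abandon(mid)

q = ToyQueue()
q.send({'Seller': 'Ava', 'Amount': 10000})
m = q.peek_lock()
q.abandon(m[0])        # give up: the message becomes visible again
m = q.peek_lock()      # the same message, with the same MessageID
q.complete(m[0])       # done: the message is gone
print(q.peek_lock())   # None
```

The same message can therefore be handed out more than once, which is why receivers are expected to deduplicate on MessageID.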
Queues are useful in quite a few situations. They enable applications to communicate even when both aren't running at the same time, something that's especially handy with batch and mobile applications. A queue with multiple receivers also provides automatic load balancing, since sent messages are spread across these receivers.
Topics
Useful as they are, queues aren't always the right solution. Sometimes, Service Bus topics are better. Figure 3 illustrates this idea.
Figure 3: Based on the filter a subscribing application specifies, it can receive some or all of the messages sent to a Service Bus topic.
A topic is similar in many ways to a queue. Senders submit messages to a topic in the same way that they submit messages to a queue, and those messages look the same as with queues. The big difference is that topics enable each receiving application to create its own subscription by defining a filter. A subscriber will then see only the messages that match that filter. For example, Figure 3 shows a sender and a topic with three subscribers, each with its own filter:
- Subscriber 1 receives only messages that contain the property Seller="Ava".
- Subscriber 2 receives messages that contain the property Seller="Ruby" and/or contain an Amount property whose value is greater than 100,000. Perhaps Ruby is the sales manager, who wants to see her own sales along with every large sale, whoever the seller is.
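A subscription filter is essentially a predicate over a message's properties. The sketch below is a toy model of that idea in plain Python — real Service Bus subscriptions express their filters in a SQL-like syntax rather than as lambdas, and all names here are mine:

```python
# Toy model of a topic: each subscription pairs a filter predicate with an inbox.
def make_topic():
    subscriptions = {}

    def subscribe(name, predicate):
        subscriptions[name] = (predicate, [])

    def publish(properties):
        # Unlike a queue, every subscription whose filter matches gets the message
        for predicate, inbox in subscriptions.values():
            if predicate(properties):
                inbox.append(properties)

    return subscribe, publish, subscriptions

subscribe, publish, subs = make_topic()
subscribe('sub1', lambda p: p.get('Seller') == 'Ava')
subscribe('sub2', lambda p: p.get('Seller') == 'Ruby' or p.get('Amount', 0) > 100000)

publish({'Seller': 'Ava', 'Amount': 10000})    # matches sub1 only
publish({'Seller': 'Ruby', 'Amount': 500})     # matches sub2 only
publish({'Seller': 'Max', 'Amount': 200000})   # matches sub2 (large sale)

print(len(subs['sub1'][1]), len(subs['sub2'][1]))  # 1 2
```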
As with queues, subscribers to a topic can read messages using either ReceiveAndDelete or PeekLock. Unlike queues, however, a single message sent to a topic can be received by multiple subscriptions. This approach, commonly called publish and subscribe (or pub/sub), is useful whenever multiple applications are interested in the same messages. By defining the right filter, each subscriber can tap into just the part of the message stream that it needs to see.
Relays
Both queues and topics provide one-way asynchronous communication through a broker. Traffic flows in just one direction, and there's no direct connection between senders and receivers. But what if you don't want this? Suppose your applications need to both send and receive messages, or perhaps you want a direct link between them and you don't need a broker to store messages. To address scenarios such as this, Service Bus provides relays, as Figure 4 shows.
Figure 4: Service Bus relay provides synchronous, two-way communication between applications.
The obvious question to ask about relays is this: why would I use one? Even if I don't need queues, why make applications communicate via a cloud service rather than just interact directly? The answer is that talking directly can be harder than you might think.
Suppose you want to connect two on-premises applications, both running inside corporate datacenters. Each of these applications sits behind a firewall, and each datacenter probably uses network address translation (NAT). The firewall blocks incoming data on all but a few ports, and NAT implies that the machine each application is running on doesn't have a fixed IP address that you can reach directly from outside the datacenter. Without some extra help, connecting these applications over the public internet is problematic.
A Service Bus relay can help. To communicate bi-directionally through a relay, each application establishes an outbound TCP connection with Service Bus, then keeps it open. All communication between the two applications travels over these connections. Because each connection was established from inside the datacenter, the firewall allows incoming traffic to each application without opening new ports. This approach also gets around the NAT problem, because each application has a consistent endpoint in the cloud throughout the communication. By exchanging data through the relay, the applications can avoid the problems that would otherwise make communication difficult.
To use Service Bus relays, applications rely on the Windows Communication Foundation (WCF). Service Bus provides WCF bindings that make it straightforward for Windows applications to interact via relays. Applications that already use WCF can typically just specify one of these bindings, then talk to each other through a relay. Unlike queues and topics, however, using relays from non-Windows applications, while possible, requires some programming effort; no standard libraries are provided.
Unlike queues and topics, applications don't explicitly create relays. Instead, when an application that wishes to receive messages establishes a TCP connection with Service Bus, a relay is created automatically. When the connection is dropped, the relay is deleted. To enable an application to find the relay created by a specific listener, Service Bus provides a registry that enables applications to locate a specific relay by name.
Relays are the right solution when you need direct communication between applications. For example, consider an airline reservation system running in an on-premises datacenter that must be accessed from check-in kiosks, mobile devices, and other computers. Applications running on all of these systems could rely on Service Bus relays in the cloud to communicate, wherever they might be running.
Summary
Connecting applications has always been part of building complete solutions, and the range of scenarios that require applications and services to communicate with each other is set to increase as more applications and devices are connected to the Internet. By providing cloud-based technologies for achieving this through queues, topics, and relays, Service Bus aims to make this essential function easier to implement and more broadly available.
Next steps
Now that you've learned the fundamentals of Azure Service Bus, follow these links to learn more.
- How to use Service Bus queues
- How to use Service Bus topics
- How to use Service Bus relay
- Service Bus samples | https://azure.microsoft.com/en-us/documentation/articles/service-bus-fundamentals-hybrid-solutions/ | CC-MAIN-2016-40 | refinedweb | 1,527 | 54.63 |
Public Service Announcement: You may have noticed that evaluating members via C#'s 'base' keyword in the debugger still calls the derived members. (The 'base' keyword lets you access base-class member implementations from within a derived class; it is very useful when the members are polymorphic, since calling the member directly would go through virtual dispatch and invoke the derived implementation.)
Here's an example:
using System;

class Program
{
    static void Main(string[] args)
    {
        DerivedClass d = new DerivedClass();
        d.Test();
    }
}

class BaseClass
{
    public virtual string Thing()
    {
        return "base";
    }
}

class DerivedClass : BaseClass
{
    public override string Thing()
    {
        return "derived";
    }

    public void Test()
    {
        // C#'s 'base' keyword lets us explicitly call the base class method instead
        // of using polymorphism to call the derived method.
        string b = base.Thing(); // This will be "base"

        // This will print "base, derived"
        Console.WriteLine("{0},{1}", b, this.Thing());

        // However, if we evaluate "base.Thing()" in the debugger's watch window,
        // it will still use polymorphism and incorrectly evaluate to "derived".
    }
}
So if you evaluate "base.Thing()" in the debugger's immediate / watch windows, it yields "derived" instead of "base".
Why?
The problem is actually in the underlying CLR debugging services. To evaluate "base.Thing()", the debugger needs to do a func-eval. However, ICorDebug (ICD) does all func-evals with virtual dispatch.
On one hand, debugger authors really really don't want to be doing the virtual dispatch themselves. That would involve crawling all over the type / interface hierarchy trying to figure out which method to call. And most of the time, you want virtual dispatch, so that seemed an intelligent default. For example, this is the correct behavior for "this.Thing()", or inspecting any other variables. In fact, the only place I've seen this break in practice is when explicitly inspecting polymorphic members via the 'base' keyword.
FWIW, I think the ideal thing to do would be to have a switch on ICorDebugEval that lets you switch between virtual dispatch and non-virtual dispatch.
Whose bug is this?
Clearly, there's a bug to the end-user, but there's an interesting philosophical discussion here about which component is at fault. The CLR's or Visual Studio's? And if in VS, which component? The language-specific expression evaluator? The debug engine?
So you could argue the bug could be:
1) ICorDebug should have had better documentation to clearly call out that func-eval is virtual dispatch (the latest version of the idl file does; but I haven't checked the other docs, such as earlier idl files)
2) ICorDebug really should have had the functionality to begin with because it should ensure that it provides the necessary features for the overall system to work. (I personally wouldn't call this a "bug" because although it's missing functionality, the functionality it does have is correct; perhaps a "design shortcoming")
3) The debugger (Visual Studio) should not even attempt to evaluate virtuals on the 'base' keyword because it can't be implemented properly. So VS should just fail an attempt to evaluate "base.Thing()". For this case, you then have to decide which component within VS would be responsible.
IMHO, first of all, it's a QA bug.
#1 and #3 are steps toward the same user-visible mitigation: inform the user that the functionality is missing.
The lack of expression evaluator reach is almost always considered a bug by a user, so transforming it into a proper error message simply makes it a less severe 'bug' from the user's perspective.
Given that the syntax and naming of features differ from language to language, the expression evaluators must provide user-visible error messages that reference those features.
Do you think the lack of support for this in ICorDebug would be considered a defect by Andreas Zeller's definitions? If not, is this not a bug? Are Zeller's definitions lacking for some practical cases?
Steve - where are Zeller's definitions?
Sorry about that. They are in his book "Why Programs Fail". Fortunately Chapter 1 is online.
The distinction is defect, infection, failure:
I'd claim this is a failure from the user's point of view. But is there an infection or defect?
I think you could make an argument to classify it a lot of different ways. I think Zeller's definitions are hard to apply here.
I wouldn't say incomplete features are bugs.
The closest he gets is alluding to it here: "In a modular program, a failure may happen because of incompatible interfaces of two modules."
I got a question about MDbg's func-eval syntax, which brings up a nice point about function resolution:...
Method calls using the C# 'base' keyword get compiled to an IL 'call' instruction, rather than the 'callvirt' instruction used for ordinary virtual dispatch.
Other Alias
leg
SYNOPSIS
peg [-hvV] [-ooutput] [filename ...]
leg [-hvV] [-ooutput] [filename ...]
DESCRIPTION
peg and leg are tools for generating recursive-descent parsers: programs that perform pattern matching on text. They process a Parsing Expression Grammar (PEG) [Ford 2004] to produce a program that recognises legal sentences of that grammar. peg processes PEGs written using the original syntax described by Ford; leg processes PEGs written using slightly different syntax and conventions that are intended to make it an attractive replacement for parsers built with lex(1) and yacc(1). Unlike lex and yacc, peg and leg support unlimited backtracking, provide ordered choice as a means for disambiguation, and can combine scanning (lexical analysis) and parsing (syntactic analysis) into a single activity.
peg reads the specified filenames, or standard input if no filenames are given, for a grammar describing the parser to generate. peg then generates a C source file that defines a function yyparse(). This C source file can be included in, or compiled and then linked with, a client program. Each time the client program calls yyparse() the parser consumes input text according to the parsing rules, starting from the first rule in the grammar. yyparse() returns non-zero if the input could be parsed according to the grammar; it returns zero if the input could not be parsed.
The prefix 'yy' or 'YY' is prepended to all externally-visible symbols in the generated parser. This is intended to reduce the risk of namespace pollution in client programs. (The choice of 'yy' is historical; see lex(1) and yacc(1), for example.)
OPTIONS
peg and leg provide the following options:
- -h
- prints a summary of available options and then exits.
- -ooutput
- writes the generated parser to the file output instead of the standard output.
- -v
- writes verbose information to standard error while working.
- -V
- writes version information to standard error then exits.
A SIMPLE EXAMPLE
The following peg input specifies a grammar with a single rule (called 'start') that is satisfied when the input contains the string "username".
start <- "username"(The quotation marks are not part of the matched text; they serve to indicate a literal string to be matched.) In other words, yyparse() in the generated C source will return non-zero only if the next eight characters read from the input spell the word "username". If the input contains anything else, yyparse() returns zero and no input will have been consumed. (Subsequent calls to yyparse() will also return zero, since the parser is effectively blocked looking for the string "username".) To ensure progress we can add an alternative clause to the 'start' rule that will match any single character if "username" is not found.
start <- "username" / .yyparse() now always returns non-zero (except at the very end of the input). To do something useful we can add actions to the rules. These actions are performed after a complete match is found (starting from the first rule) and are chosen according to the 'path' taken through the grammar to match the input. (Linguists would call this path a 'phrase marker'.)
start <- "username" { printf("%s\n", getlogin()); } / < . > { putchar(yytext[0]); }The first line instructs the parser to print the user's login name whenever it sees "username" in the input. If that match fails, the second line tells the parser to echo the next character on the input the standard output. Our parser is now performing useful work: it will copy the input to the output, replacing all occurrences of "username" with the user's account name.
Note the angle brackets ('<' and '>') that were added to the second alternative. These have no effect on the meaning of the rule, but serve to delimit the text made available to the following action in the variable yytext.
If the above grammar is placed in the file username.peg, running the command
peg -o username.c username.peg

will save the corresponding parser in the file username.c. To create a complete program this parser could be included by a C program as follows.
#include <stdio.h>      /* printf(), putchar() */
#include <unistd.h>     /* getlogin() */
#include "username.c"   /* yyparse() */

int main()
{
  while (yyparse())     /* repeat until EOF */
    ;
  return 0;
}
PEG GRAMMARS
A grammar consists of a set of named rules.
name <- pattern

The pattern contains one or more of the following elements.
- name
- The element stands for the entire pattern in the rule with the given name.
- "characters"
- A character or string enclosed in double quotes is matched literally. The ANSI C escape sequences are recognised within the characters.
- 'characters'
- A character or string enclosed in single quotes is matched literally, as above.
- [characters]
- A set of characters enclosed in square brackets matches any single character from the set, with escape characters recognised as above. If the set begins with an uparrow (^) then the set is negated (the element matches any character not in the set). Any pair of characters separated with a dash (-) represents the range of characters from the first to the second, inclusive. A single alphabetic character or underscore is matched by the following set.
[a-zA-Z_]

Similarly, the following matches any single non-digit character.
[^0-9]
- .
- A dot matches any character. Note that the only time this fails is at the end of file, where there is no character to match.
- ( pattern )
- Parentheses are used for grouping (modifying the precedence of the operators described below).
- { action }
- Curly braces surround actions. The action is arbitrary C source code to be executed at the end of matching. Any braces within the action must be properly nested. Any input text that was matched before the action and delimited by angle brackets (see below) is made available within the action as the contents of the character array yytext. The length of (number of characters in) yytext is available in the variable yyleng. (These variable names are historical; see lex(1).)
- <
- An opening angle bracket always matches (consuming no input) and causes the parser to begin accumulating matched text. This text will be made available to actions in the variable yytext.
- >
- A closing angle bracket always matches (consuming no input) and causes the parser to stop accumulating text for yytext.
The above elements can be made optional and/or repeatable with the following suffixes:
- element ?
- The element is optional. If present on the input, it is consumed and the match succeeds. If not present on the input, no text is consumed and the match succeeds anyway.
- element +
- The element is repeatable. If present on the input, one or more occurrences of element are consumed and the match succeeds. If no occurrences of element are present on the input, the match fails.
- element *
- The element is optional and repeatable. If present on the input, one or more occurrences of element are consumed and the match succeeds. If no occurrences of element are present on the input, the match succeeds anyway.
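As a small illustration of these suffixes (a made-up rule, not part of any grammar in this manual), an optionally signed integer combines both:

```
integer <- [-+]? [0-9]+
```

The sign may be absent ('?') while at least one digit must be present ('+').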
The above elements and suffixes can be converted into predicates (that match arbitrary input text and subsequently succeed or fail without consuming that input) with the following prefixes:
- & element
- The predicate succeeds only if element can be matched. Input text scanned while matching element is not consumed from the input and remains available for subsequent matching.
- ! element
- The predicate succeeds only if element cannot be matched. Input text scanned while matching element is not consumed from the input and remains available for subsequent matching. A popular idiom is
!.
which matches only at the end of the input, where there is no character left for '.' to match.
A special form of the '&' predicate is provided:
- &{ expression }
- In this predicate the simple C expression (not statement) is evaluated immediately when the parser reaches the predicate. If the expression yields non-zero (true) the 'match' succeeds and the parser continues with the next element in the pattern. If the expression yields zero (false) the 'match' fails and the parser backs up to look for an alternative parse of the input.
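For instance (a made-up rule; 'hexEnabled' stands for a C variable that the client program would have to declare), a literal could be accepted only when a run-time flag is set:

```
hex <- &{ hexEnabled } '0x' [0-9a-fA-F]+
```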
Several elements (with or without prefixes and suffixes) can be combined into a sequence by writing them one after the other. The entire sequence matches only if each individual element within it matches, from left to right.
Sequences can be separated into disjoint alternatives by the alternation operator '/'.
- sequence-1 / sequence-2 / ... / sequence-N
- Each sequence is tried in turn until one of them matches, at which time matching for the overall pattern succeeds. If none of the sequences matches then the match of the overall pattern fails.
Finally, the pound sign (#) introduces a comment (discarded) that continues until the end of the line.
To summarise the above, the parser tries to match the input text against a pattern containing literals, names (representing other rules), and various operators (written as prefixes, suffixes, juxtaposition for sequencing, and an infix alternation operator) that modify how the elements within the pattern are matched. Matches are made from left to right, 'descending' into named sub-rules as they are encountered. If the matching process fails, the parser 'back tracks' ('rewinding' the input appropriately in the process) to find the nearest alternative 'path' through the grammar. In other words the parser performs a depth-first, left-to-right search for the first successfully-matching path through the rules. If found, the actions along the successful path are executed (in the order they were encountered).
Note that predicates are evaluated immediately during the search for a successful match, since they contribute to the success or failure of the search. Actions, however, are evaluated only after a successful match has been found.
PEG GRAMMAR FOR PEG GRAMMARS
The grammar for peg grammars is shown below. This will both illustrate and formalise the above description.
Grammar         <- Spacing Definition+ EndOfFile
Definition      <- Identifier LEFTARROW Expression
Expression      <- Sequence ( SLASH Sequence )*
Sequence        <- Prefix*
Prefix          <- AND Action
                 / ( AND | NOT )? Suffix
Suffix          <- Primary ( QUERY / STAR / PLUS )?
Primary         <- Identifier !LEFTARROW
                 / OPEN Expression CLOSE
                 / Literal / Class / DOT / Action / BEGIN / END
Identifier      <- < IdentStart IdentCont* > Spacing
IdentStart      <- [a-zA-Z_]
IdentCont       <- IdentStart / [0-9]
Literal         <- ['] < ( !['] Char )* > ['] Spacing
                 / ["] < ( !["] Char )* > ["] Spacing
Class           <- '[' < ( !']' Range )* > ']' Spacing
Range           <- Char '-' Char / Char
Char            <- '\\' [abefnrtv'"\[\]\\]
                 / '\\' [0-3][0-7][0-7]
                 / '\\' [0-7][0-7]?
                 / '\\' '-'
                 / !'\\' .
LEFTARROW       <- '<-' Spacing
SLASH           <- '/' Spacing
AND             <- '&' Spacing
NOT             <- '!' Spacing
QUERY           <- '?' Spacing
STAR            <- '*' Spacing
PLUS            <- '+' Spacing
OPEN            <- '(' Spacing
CLOSE           <- ')' Spacing
DOT             <- '.' Spacing
Spacing         <- ( Space / Comment )*
Comment         <- '#' ( !EndOfLine . )* EndOfLine
Space           <- ' ' / '\t' / EndOfLine
EndOfLine       <- '\r\n' / '\n' / '\r'
EndOfFile       <- !.
Action          <- '{' < [^}]* > '}' Spacing
BEGIN           <- '<' Spacing
END             <- '>' Spacing
LEG GRAMMARS
leg is a variant of peg that adds some features of lex(1) and yacc(1). It differs from peg in the following ways.
- %{ text... %}
- A declaration section can appear anywhere that a rule definition is expected. The text between the delimiters '%{' and '%}' is copied verbatim to the generated C parser code before the code that implements the parser itself.
- name = pattern
- The 'assignment' operator replaces the left arrow operator '<-'.
- rule-name
- Hyphens can appear as letters in the names of rules. Each hyphen is converted into an underscore in the generated C source code. A single hyphen '-' is a legal rule name.
-       = [ \t\n\r]*
number  = [0-9]+                  -
name    = [a-zA-Z_][a-zA-Z_0-9]*  -
l-paren = '(' -
r-paren = ')' -

This example shows how ignored whitespace can be obvious when reading the grammar and yet unobtrusive when placed liberally at the end of every rule associated with a lexical element.
- seq-1 | seq-2
- The alternation operator is vertical bar '|' rather than forward slash '/'. The peg rule
name <- sequence-1 / sequence-2 / sequence-3

is therefore written
name = sequence-1 | sequence-2 | sequence-3 ;

in leg (with the final semicolon being optional, as described next).
- exp ~ { action }
- A postfix operator ~{ action } can be placed after any expression and will behave like a normal action (arbitrary C code) except that it is invoked only when exp fails. It binds less tightly than any other operator except alternation and sequencing, and is intended to make error handling and recovery code easier to write. Note that yytext and yyleng are not available inside these actions, but the pointer variable yy is available to give the code access to any user-defined members of the parser state (see "CUSTOMISING THE PARSER" below). Note also that exp is always a single expression; to invoke an error action for any failure within a sequence, parentheses must be used to group the sequence into a single expression.
rule = e1 e2 e3 ~{ error("e[12] ok; e3 has failed"); }
     | ...

rule = (e1 e2 e3) ~{ error("one of e[123] has failed"); }
     | ...
- pattern ;
- A semicolon punctuator can optionally terminate a pattern.
- %% text...
- A double percent '%%' terminates the rules (and declarations) section of the grammar. All text following '%%' is copied verbatim to the generated C parser code after the parser implementation code.
- $$ = value
- A sub-rule can return a semantic value from an action by assigning it to the pseudo-variable '$$'. All semantic values must have the same type (which defaults to 'int'). This type can be changed by defining YYSTYPE in a declaration section.
- identifier:name
- The semantic value returned (by assigning to '$$') from the sub-rule name is associated with the identifier and can be referred to in subsequent actions.
The desk calculator example below illustrates the use of '$$' and ':'.
LEG EXAMPLE: A DESK CALCULATOR
The extensions in leg described above allow useful parsers and evaluators (including declarations, grammar rules, and supporting C functions such as 'main') to be kept within a single source file. To illustrate this we show a simple desk calculator supporting the four common arithmetic operators and named variables. The intermediate results of arithmetic evaluation will be accumulated on an implicit stack by returning them as semantic values from sub-rules.
%{
#include <stdio.h>     /* printf() */
#include <stdlib.h>    /* atoi() */
int vars[26];
%}

Stmt    = - e:Expr EOL                  { printf("%d\n", e); }
        | ( !EOL . )* EOL               { printf("error\n"); }
Expr    = i:ID ASSIGN s:Sum             { $$ = vars[i] = s; }
        | s:Sum                         { $$ = s; }
Sum     = l:Product ( PLUS  r:Product   { l += r; }
                    | MINUS r:Product   { l -= r; }
                    )*                  { $$ = l; }
Product = l:Value ( TIMES  r:Value      { l *= r; }
                  | DIVIDE r:Value      { l /= r; }
                  )*                    { $$ = l; }
Value   = i:NUMBER                      { $$ = atoi(yytext); }
        | i:ID !ASSIGN                  { $$ = vars[i]; }
        | OPEN i:Expr CLOSE             { $$ = i; }
NUMBER  = < [0-9]+ > -                  { $$ = atoi(yytext); }
ID      = < [a-z] > -                   { $$ = yytext[0] - 'a'; }
ASSIGN  = '=' -
PLUS    = '+' -
MINUS   = '-' -
TIMES   = '*' -
DIVIDE  = '/' -
OPEN    = '(' -
CLOSE   = ')' -
-       = [ \t]*
EOL     = '\n' | '\r\n' | '\r' | ';'

%%

int main()
{
  while (yyparse())
    ;
  return 0;
}
LEG GRAMMAR FOR LEG GRAMMARS
The grammar for leg grammars is shown below. This will both illustrate and formalise the above description.
grammar =       - ( declaration | definition )+ trailer? end-of-file
...
TILDE =         '~' -
RPERCENT =      '%}' -
- =             ( space | comment )*
space =         ' ' | '\t' | end-of-line
comment =       '#' ( !end-of-line . )* end-of-line
end-of-line =   '\r\n' | '\n' | '\r'
end-of-file =   !.
CUSTOMISING THE PARSER
The following symbols can be redefined in declaration sections to modify the generated parser code.
- YYSTYPE
- The semantic value type. The pseudo-variable '$$' and the identifiers 'bound' to rule results with the colon operator ':' should all be considered as being declared to have this type. The default value is 'int'.
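For example (a sketch), a grammar whose semantic values are floating-point numbers could begin with a declaration section such as:

```
%{
#define YYSTYPE double   /* '$$' and ':' bindings are now doubles */
%}
```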
- YYPARSE
- The name of the main entry point to the parser. The default value is 'yyparse'.
- YYPARSEFROM
- The name of an alternative entry point to the parser. This function expects one argument: the function corresponding to the rule from which the search for a match should begin. The default is 'yyparsefrom'. Note that yyparse() is defined as
int yyparse() { return yyparsefrom(yy_foo); }

where 'foo' is the name of the first rule in the grammar.
- YY_INPUT(buf, result, max_size)
- This macro is invoked by the parser to obtain more input text. buf points to an area of memory that can hold at most max_size characters. The macro should copy input text to buf and then assign the integer variable result to indicate the number of characters copied. If no more input is available, the macro should assign 0 to result. By default, the YY_INPUT macro is defined as follows.
#define YY_INPUT(buf, result, max_size)          \
{                                                \
  int yyc= getchar();                            \
  result= (EOF == yyc) ? 0 : (*(buf)= yyc, 1);   \
}

Note that if YY_CTX_LOCAL is defined (see below) then an additional first argument, containing the parser context, is passed to YY_INPUT.
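A common reason to redefine YY_INPUT is to parse from a string in memory rather than from the standard input. A sketch (the names 'theText' and 'theIndex' are invented here for illustration, not part of peg/leg):

```
%{
static const char *theText;   /* input string, set by the client */
static int         theIndex;  /* read position within theText */

#define YY_INPUT(buf, result, max_size)                      \
{                                                            \
  int yyc= theText[theIndex];                                \
  result= ('\0' == yyc) ? 0 : (theIndex++, *(buf)= yyc, 1);  \
}
%}
```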
- YY_DEBUG
- If this symbol is defined then additional code will be included in the parser that prints vast quantities of arcane information to the standard error while the parser is running.
- YY_BEGIN
- This macro is invoked to mark the start of input text that will be made available in actions as 'yytext'. This corresponds to occurrences of '<' in the grammar. These are converted into predicates that are expected to succeed. The default definition
#define YY_BEGIN (yybegin= yypos, 1)

therefore saves the current input position and returns 1 ('true') as the result of the predicate.
- YY_END
- This macro corresponds to '>' in the grammar. Again, it is a predicate so the default definition saves the input position before 'succeeding'.
#define YY_END (yyend= yypos, 1)
- YY_PARSE(T)
- This macro declares the parser entry points (yyparse and yyparsefrom) to be of type T. The default definition
#define YY_PARSE(T) T

leaves yyparse() and yyparsefrom() with global visibility. If they should not be externally visible in other source files, this macro can be redefined to declare them 'static'.
#define YY_PARSE(T) static T
- YY_CTX_LOCAL
- If this symbol is defined during compilation of a generated parser then global parser state will be kept in a structure of type 'yycontext' which can be declared as a local variable. This allows multiple instances of parsers to coexist and to be thread-safe. The parsing function yyparse() will be declared to expect a first argument of type 'yycontext *', an instance of the structure holding the global state for the parser. This instance must be allocated and initialised to zero by the client. A trivial but complete example is as follows.
#include <stdio.h>
#include <string.h>    /* memset() */

#define YY_CTX_LOCAL
#include "the-generated-parser.peg.c"

int main()
{
  yycontext ctx;
  memset(&ctx, 0, sizeof(yycontext));
  while (yyparse(&ctx))
    ;
  return 0;
}

Note that if this symbol is undefined then the compiled parser will statically allocate its global state and will be neither reentrant nor thread-safe. Note also that the parser yycontext structure is initialised automatically the first time yyparse() is called; this structure must therefore be properly initialised to zero before the first call to yyparse().
- YY_CTX_MEMBERS
- If YY_CTX_LOCAL is defined (see above) then the macro YY_CTX_MEMBERS can be defined to expand to any additional member field declarations that the client would like included in the declaration of the 'yycontext' structure type. These additional members are otherwise ignored by the generated parser. The instance of 'yycontext' associated with the currently-active parser is available within actions as the pointer variable yy.
- YY_BUFFER_SIZE
- The initial size of the text buffer, in bytes. The default is 1024 and the buffer size is doubled whenever required to meet demand during parsing. An application that typically parses much longer strings could increase this to avoid unnecessary buffer reallocation.
- YY_STACK_SIZE
- The initial size of the variable and action stacks. The default is 128, which is doubled whenever required to meet demand during parsing. Applications that have deep call stacks with many local variables, or that perform many actions after a single successful match, could increase this to avoid unnecessary buffer reallocation.
- YY_MALLOC(YY, SIZE)
- The memory allocator for all parser-related storage. The parameters are the current yycontext structure and the number of bytes to allocate. The default definition is: malloc(SIZE)
- YY_REALLOC(YY, PTR, SIZE)
- The memory reallocator for dynamically-grown storage (such as text buffers and variable stacks). The parameters are the current yycontext structure, the previously-allocated storage, and the number of bytes to which that storage should be grown. The default definition is: realloc(PTR, SIZE)
- YY_FREE(YY, PTR)
- The memory deallocator. The parameters are the current yycontext structure and the storage to deallocate. The default definition is: free(PTR)
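Taken together, these three macros let an application route all parser storage through its own allocator; for example (a sketch; 'yyBytesRequested' is a name invented here), to keep a running count of the bytes requested:

```
%{
#include <stdlib.h>
static size_t yyBytesRequested;

#define YY_MALLOC(YY, SIZE)        (yyBytesRequested += (SIZE), malloc(SIZE))
#define YY_REALLOC(YY, PTR, SIZE)  (yyBytesRequested += (SIZE), realloc(PTR, SIZE))
#define YY_FREE(YY, PTR)           free(PTR)
%}
```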
- YYRELEASE
- The name of the function that releases all resources held by a yycontext structure. The default value is 'yyrelease'.
The following variables can be referred to within actions.
- char *yybuf
- This variable points to the parser's input buffer used to store input text that has not yet been matched.
- int yypos
- This is the offset (in yybuf) of the next character to be matched and consumed.
- char *yytext
- The most recent matched text delimited by '<' and '>' is stored in this variable.
- int yyleng
- This variable indicates the number of characters in 'yytext'.
- yycontext *yy
- This variable points to the instance of 'yycontext' associated with the currently-active parser.
Programs that wish to release all the resources associated with a parser can use the following function.
- yyrelease(yycontext *yy)
- Returns all parser-allocated storage associated with yy to the system. The storage will be reallocated on the next call to yyparse().
Note that the storage for the yycontext structure itself is never allocated or reclaimed implicitly. The application must allocate these structures in automatic storage, or use calloc() and free() to manage them explicitly. The example in the following section demonstrates one approach to resource management.
LEG EXAMPLE: EXTENDING THE PARSER'S CONTEXT
The yy variable passed to actions contains the state of the parser plus any additional fields defined by YY_CTX_MEMBERS. These fields can be used to store application-specific information that is global to a particular call of yyparse(). A trivial but complete leg example follows in which the yycontext structure is extended with a count of the number of newline characters seen in the input so far (the grammar otherwise consumes and ignores the entire input). The caller of yyparse() uses the count to print the number of lines of input that were read.
%{
#define YY_CTX_LOCAL 1
#define YY_CTX_MEMBERS \
  int count;
%}

Char = ('\n' | '\r\n' | '\r')  { yy->count++; }
     | .

%%

#include <stdio.h>
#include <string.h>

int main()
{
  /* create a local parser context in automatic storage */
  yycontext yy;

  /* the context *must* be initialised to zero before first use */
  memset(&yy, 0, sizeof(yy));

  while (yyparse(&yy))
    ;
  printf("%d newlines\n", yy.count);

  /* release all resources associated with the context */
  yyrelease(&yy);
  return 0;
}
DIAGNOSTICS
peg and leg warn about the following conditions while converting a grammar into a parser.
- syntax error
- The input grammar was malformed in some way. The error message will include the text about to be matched (often backed up a huge amount from the actual location of the error) and the line number of the most recently considered character (which is often the real location of the problem).
- rule 'foo' used but not defined
- The grammar referred to a rule named 'foo' but no definition for it was given. Attempting to use the generated parser will likely result in errors from the linker due to undefined symbols associated with the missing rule.
- rule 'foo' defined but not used
- The grammar defined a rule named 'foo' and then ignored it. The code associated with the rule is included in the generated parser which will in all other respects be healthy.
- possible infinite left recursion in rule 'foo'
- There exists at least one path through the grammar that leads from the rule 'foo' back to (a recursive invocation of) the same rule without consuming any input.
Left recursion, especially that found in standards documents, is often 'direct' and implies trivial repetition.
# (6.7.6)
direct-abstract-declarator =
    LPAREN abstract-declarator RPAREN
|   direct-abstract-declarator? LBRACKET assign-expr? RBRACKET
|   direct-abstract-declarator? LBRACKET STAR RBRACKET
|   direct-abstract-declarator? LPAREN param-type-list? RPAREN

The recursion can easily be eliminated by converting the parts of the pattern following the recursion into a repeatable suffix.
# (6.7.6)
direct-abstract-declarator =
    direct-abstract-declarator-head? direct-abstract-declarator-tail*
direct-abstract-declarator-head =
    LPAREN abstract-declarator RPAREN
direct-abstract-declarator-tail =
    LBRACKET assign-expr? RBRACKET
|   LBRACKET STAR RBRACKET
|   LPAREN param-type-list? RPAREN
CAVEATS
A parser that accepts empty input will always succeed. Consider the following example, not atypical of a first attempt to write a PEG-based parser:
Program = Expression*
Expression = "whatever"

%%

int main()
{
  while (yyparse())
    puts("success!");
  return 0;
}

This program loops forever, no matter what (if any) input is provided on stdin. Many fixes are possible, the easiest being to insist that the parser always consumes some non-empty input. Changing the first line to
Program = Expression+

accomplishes this. If the parser is expected to consume the entire input, then explicitly requiring the end-of-file is also highly recommended:
Program = Expression+ !.

This works because the parser will only fail to match ("!" predicate) any character at all ("." expression) when it attempts to read beyond the end of the input.
BUGSYou have to type 'man peg' to read the manual page for leg(1).
The 'yy' and 'YY' prefixes cannot be changed.
Left recursion is detected in the input grammar but is not handled correctly in the generated parser.
Diagnostics for errors in the input grammar are obscure and not particularly helpful.
The operators ! and ~ should really be named the other way around.
Several commonly-used lex(1) features (yywrap(), yyin, etc.) are completely absent.
The generated parser does not contain '#line' directives to direct C compiler errors back to the grammar description when appropriate.
AUTHORpeg, leg and this manual page were written by Ian Piumarta (first-name at last-name dot com) while investigating the viability of regular and parsing-expression grammars for efficiently extracting type and signature information from C header files.
Please send bug reports and suggestions for improvements to the author at the above address. | https://manpages.org/peg | CC-MAIN-2022-21 | refinedweb | 4,172 | 55.03 |
Poetry Generation Using Tensorflow, Keras, and LSTM
What is RNN
Recurrent Neural Networks are the first of its kind State of the Art algorithms that can Memorize/remember previous inputs in memory, When a huge set of Sequential data is given to it. Recurrent Neural Networks are the first of its kind State of the Art algorithms that can Memorize/remember previous inputs in memory, When a huge set of Sequential data is given to it.
>NN’s
Different types of Recurrent Neural Networks.
- Image Classification
- Sequence output (e.g. image captioning takes an image and outputs a sentence of words).
- Sequence input (e.g. sentiment analysis where a given sentence is classified as expressing positive or negative sentiment).
- Sequence input and sequence output (e.g. Machine Translation: an RNN reads a sentence in English and then outputs a sentence in French).
- Synced sequence input and output (e.g. video classification where we wish to label each frame of the video)
The Problem of RNN’s or Long-Term Dependencies
- Vanishing Gradient
- Exploding Gradient
Vanishing Gradient
If the partial derivation of Error is less than 1, then when it get multiplied with the Learning rate which is also very less. then Multiplying learning rate with partial derivation of Error wont be a big change when compared with previous iteration.
Exploding Gradient
We speak of Exploding Gradients when the algorithm assigns a stupidly high importance to the weights, without much reason..
LSTMs are explicitly designed to avoid the long-term dependency problem. Remembering information for long periods of time is practically their default behavior, not something they struggle to learn!
Sequence Generation Scheme
Let’s Code
import tensorflow as tf import string import requests import pandas as pd
response = requests.get('')
response.text
'Looking for some education\nMade my way into the night\nAll that bullshit conversation\nBaby, can\'t you read the signs? I won\'t bore you with the details, baby\nI don\'t even wanna waste your time\nLet\'s just say that maybe\nYou could help me ease my mind\nI ain\'t Mr. Right But if you\'re looking for fast love\nIf that\'s love in your eyes\nIt\'s more than enough\nHad some bad love\nSo fast love is all that I\'ve got on my mind Ooh,
data = response.text.splitlines() len(data)
2400
len(" ".join(data))
91330
Build LSTM Model and Prepare X and y
token = Tokenizer() token.fit_on_texts(data)
# token.word_counts
help(token)
token.word_index
{'i': 1, 'you': 2, 'the': 3, 'me': 4, 'to': 5, ...}
encoded_text = token.texts_to_sequences(data) encoded_text
[[254, 21, 219, 725], [117, 8, 80, 153, 3, 133], [14, 10, 726, 727], ...]
x = ['i love you'] token.texts_to_sequences(x)
[[1, 11, 2]]
vocab_size = len(token.word_counts) + 1
Prepare Training Data
datalist = [] for d in encoded_text: if len(d)>1: for i in range(2, len(d)): datalist.append(d[:i]) print(d[:i])
Padding
max_length = 20 sequences = pad_sequences(datalist, maxlen=max_length, padding='pre') sequences
array([[ 0, 0, 0, ..., 0, 254, 21], [ 0, 0, 0, ..., 254, 21, 219], [ 0, 0, 0, ..., 0, 117, 8], ..., [ 0, 0, 0, ..., 17, 198, 17], [ 0, 0, 0, ..., 198, 17, 198], [ 0, 0, 0, ..., 17, 198, 6]], dtype=int32)
X = sequences[:, :-1] y = sequences[:, -1]
y = to_categorical(y, num_classes=vocab_size) seq_length = X.shape[1]
LSTM Model Training, 19, 50) 69800 _________________________________________________________________ lstm (LSTM) (None, 19, 100) 60400 _________________________________________________________________ lstm_1 (LSTM) (None, 100) 80400 _________________________________________________________________ dense (Dense) (None, 100) 10100 _________________________________________________________________ dense_1 (Dense) (None, 1396) 140996 ================================================================= Total params: 361,696 Trainable params: 361,696 Non-trainable params: 0 _________________________________________________________________
model.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['accuracy'])
model.fit(X, y, batch_size=32, epochs=50)
Epoch 49/50 445/445 [==============================] - 3s 6ms/step - loss: 0.5386 - accuracy: 0.8388 Epoch 50/50 445/445 [==============================] - 3s 6ms/step - loss: 0.5385 - accuracy: 0.8371
Poetry Generation
poetry_length = 10 def generate_poetry(seed_text, n_lines): for i in range(n_lines): text = [] for _ in range(poetry_length): encoded = token.texts_to_sequences([seed_text]) encoded = pad_sequences(encoded, maxlen=seq_length, padding='pre') y_pred = np.argmax(model.predict(encoded), axis=-1) predicted_word = "" for word, index in token.word_index.items(): if index == y_pred: predicted_word = word break seed_text = seed_text + ' ' + predicted_word text.append(predicted_word) seed_text = text[-1] text = ' '.join(text) print(text)
seed_text = 'i love you' generate_poetry(seed_text, 5)
is no and i want to do is wash your name i set fire to the beat tears are gonna understand last night she let the sky fall when it was just like a song i was so scared to make us grow from the arms of your love to
Watch Full Course Here: | https://kgptalkie.com/poetry-generation-using-tensorflow-keras-and-lstm/ | CC-MAIN-2021-17 | refinedweb | 765 | 57.27 |
React.
If you don’t have any experience with React Native check out 3 Steps to Build Your First Mobile App with React Native. It explains the basics and how to set up required software to get started.
Table of contents
What We Will Be Building
We’re going to build an app that has a tab bar with three tabs at the top. And each tab has some content. That’s how it’s going to look like.
For your reference, the final code for the app we’re building can be found in this GitHub repo.
What Do You Need To Know
Here is a list of all different things we’re going to use to build the app. It includes various React Native core components, other features, such as state or props, and various JavaScript methods or expressions, such as .map() or object destructuring.
- View. View is a fundamental container component for building app’s UI. It can be nested inside other views and have children of any type. Read more
- Text. Text is a component for displaying text. Read more
- StyleSheet. With React Native you can style your components with CSS-like styles. Style names and values usually match CSS styles, except names are written like
backgroundColorinstead of like
background-color. Read more
- Custom Components. You’re not limited to using React Native core components. You can create you own. Read more
- Props. Props are used to customize React Native components, both core, and your own components. You can use the same component in different parts of your app with different props to customize how it would look or behave. Read more
- State. Components in React Native use state for data that changes over time. Most of your components should not use state, but take some data from props and render it. However, sometimes you need to respond to user input. Read more
- TouchableOpacity. A wrapper component that allows views to respond to touches. Read more
- Javascript .map() Iterator. The
map()method creates a new array with the results of calling a provided function on every element in this array. Read more
- JavaScript ES6 Object Destructuring. We’ll use this ES6 feature to pull fields from objects passed as function parameter. Read more
- JavaScript ES6 Arrow Functions. An arrow function expression (also known as fat arrow function) has a shorter syntax compared to function expressions and lexically binds the this value. Arrow functions are always anonymous. For example a shorter version of
function times2(value) { return value * 2; }, using fat arrow function would be
value => value * 2. Read more
You can always get back to this list as you go if you find yourself having a hard time understanding how some of these things work.
Initialize New Project
Let’s start off by creating a new app. Open Terminal App and run these commands to initialize a new project and run it in an emulator.
react-native init Tabs; cd Tabs; react-native run-ios;
Enable Hot Reloading
Once your app is up and running, press ⌘D and select Enable Hot Reloading. This will save you some time having to reload the app manually every time you make a change.
Index Files
We’re going to re-use the same code for both, iOS and Android, so we don’t need two different index files. We’re going to create a file called
app.js to store our code and import that file in
index.ios.js and
index.android.js files.
Open
index.ios.js file and scrap all of the React Native boilerplate code to start from scratch. Do the same for
index.android.js. And add to both of them the following code.
import { AppRegistry } from 'react-native'; import App from './app'; AppRegistry.registerComponent('Tabs', () => App);
This code imports
App component from
app.js file and registers it as main app container. If you took at look at the emulator at this point, you would see an error screen. That’s because
app.js doesn’t exist yet, and therefore can’t be imported. So, let’s fix it.
App Component
Let’s start off by creating just a single screen with no tabs first. Create a new file and call it
app.js.
Import Components
First, import all component we’re going to use.
import React, { Component } from 'react'; import { StyleSheet, // CSS-like styles Text, // Renders text View // Container component } from 'react-native';
Define App Class
Next, define
App component class, which has required
render() method that returns a
View with a header and some text.
export default class App extends Component { render() { return ( <View style={styles.container}> <View style={styles.content}> <Text style={styles.header}> Welcome to React Native </Text> <Text style={styles.text}> The best technology to build cross platform mobile apps with </Text> </View> </View> ); } }
Add Styles
And lastly, add styles to change text size, color, background color, center it all on the screen, and stuff like that.
const styles = StyleSheet.create({ // App container container: { flex: 1, // Take up all screen backgroundColor: '#E91E63', // Background color }, // Tab content container content: { flex: 1, // Take up all available space justifyContent: 'center', // Center vertically alignItems: 'center', // Center horizontally backgroundColor: '#C2185B', // Darker background for content area }, // Content header header: { margin: 10, // Add margin color: '#FFFFFF', // White color fontFamily: 'Avenir', // Change font family fontSize: 26, // Bigger font size }, // Content text text: { marginHorizontal: 20, // Add horizontal margin color: 'rgba(255, 255, 255, 0.75)', // Semi-transparent text textAlign: 'center', // Center fontFamily: 'Avenir', fontSize: 18, }, });
Bring up the emulator window and see what we’ve got so far. Looks pretty good so far, but there is no tabs yet. Let’s deal with this next.
Adding Tabs
Fist, add the import statement for
Tabs component that we’re about to create.
import Tabs from './tabs';
And change
render() method to have multiple screens for different tabs, and wrap them into
Tabs component.
render() { return ( <View style={styles.container}> <Tabs> {/* First tab */} <View title="WELCOME" style={styles.content}> <Text style={styles.header}> Welcome to React Native </Text> <Text style={styles.text}> The best technology to build cross platform mobile apps with </Text> </View> {/* Second tab */} <View title="NATIVE" style={styles.content}> <Text style={styles.header}> Truly Native </Text> <Text style={styles.text}> Components you define will end up rendering as native platform widgets </Text> </View> {/* Third tab */} <View title="EASY" style={styles.content}> <Text style={styles.header}> Ease of Learning </Text> <Text style={styles.text}> It’s much easier to read and write comparing to native platform’s code </Text> </View> </Tabs> </View> ); }
We’ve just added two more Views,
title prop to each of three, and wrapped them into
Tabs component. The
title prop is not a prop supported by
View component itself. We’re going to read this prop value in
Tabs component and use to display a tab title on a tab bar.
Tabs Component
And finally, let’s create
Tabs component. As you noticed it’s going to receive multiple
View components with
title prop for tab title, and tab content inside it.
Create a new file and call it
tabs.js.
Import Components
First, import all component we’re going to use.
import React, { Component } from 'react'; import { StyleSheet, // CSS-like styles Text, // Renders text TouchableOpacity, // Pressable container View // Container component } from 'react-native';
Define Tabs Class
Next, define
Tabs component class.
export default class Tabs extends Component { // Initialize State state = { // First tab is active by default activeTab: 0 } // Pull children out of props passed from App component render({ children } = this.props) { return ( <View style={styles.container}> {/* Tabs row */} <View style={styles.tabsContainer}> {/* Pull props out of children, and pull title out of props */} {children.map(({ props: { title } }, index) => <TouchableOpacity style={[ // Default style for every tab styles.tabContainer, // Merge default style with styles.tabContainerActive for active tab index === this.state.activeTab ? styles.tabContainerActive : [] ]} // Change active tab onPress={() => this.setState({ activeTab: index }) } // Required key prop for components generated returned by map iterator key={index} > <Text style={styles.tabText}> {title} </Text> </TouchableOpacity> )} </View> {/* Content */} <View style={styles.contentContainer}> {children[this.state.activeTab]} </View> </View> ); } }
It has state with
activeTab value, which is
0 by default, what means that first tab is active when you launch the app. You might be wondering why is it
0 and not
1. That’s because javascript array element indexes start with
0, so the first element always has index
0, the second one has index
1, and so on.
In
render() it loops through
children prop, which is an array of those three
Views that we wrapped into
Tabs component in
app.js, and returns tappable
TouchableOpacity component for each tab. When it’s tapped it updates
activeTab in state.
And then it renders active tab’s content from
children array, using
this.state.activeTab index.
Add Styles
And lastly, add styles to style how tabs look.
const styles = StyleSheet.create({ // Component container container: { flex: 1, // Take up all available space }, // Tabs row container tabsContainer: { flexDirection: 'row', // Arrange tabs in a row paddingTop: 30, // Top padding }, // Individual tab container tabContainer: { flex: 1, // Take up equal amount of space for each tab paddingVertical: 15, // Vertical padding borderBottomWidth: 3, // Add thick border at the bottom borderBottomColor: 'transparent', // Transparent border for inactive tabs }, // Active tab container tabContainerActive: { borderBottomColor: '#FFFFFF', // White bottom border for active tabs }, // Tab text tabText: { color: '#FFFFFF', fontFamily: 'Avenir', fontWeight: 'bold', textAlign: 'center', }, // Content container contentContainer: { flex: 1 // Take up all available space } });
And There It Is
Bring up the emulator window and play around with the app. Try tapping tabs and see how the app works.
Takeaways
You’ve just built a component that you can re-use in your apps for both, iOS and Android platforms. This way your apps will look consistent on both platforms, and you won’t have to worry about building and maintaining two different tab components for each platform. | https://rationalappdev.com/universal-tab-bar-in-react-native/ | CC-MAIN-2020-40 | refinedweb | 1,647 | 66.23 |
Django was originally developed smack in the middle of the United States (literally;. Since many developers have at best a fuzzy understanding of these terms, we’ll define them briefly.
Internationalization refers to the process of designing programs for the potential use of any locale. This includes marking text (like UI elements and error messages) for future translation, abstracting the display of dates and times so that different local standards may be observed, providing support for differing time zones, and generally making sure that the code contains no assumptions about the location if its users. You’ll often see “internationalization” abbreviated I18N (the number over 40 different localization files. If you’re not a native English speaker, there’s a good chance that Django is already is translated into your primary language.
The same internationalization framework used for these localizations is available for you to use in your own code and templates.
In a nutshell,. (i.e., as a built-in language); def my_view(request): output = gettext("Welcome to my site.") return HttpResponse(output)
Most developers prefer to use _(), as it’s shorter., make-messages.py, won’t be able to find these strings. More on make-messages later.)
The strings you pass to _() or gettext() can take placeholders, specified with Python’s standard named-string interpolation syntax, for). If you use positional interpolation, translations won’t be able to reorder placeholder text.
Use the function django.utils.translation.gettext_noop() to mark a string as a translation string without actually translating it at that moment. Strings thus marked aren’t translated until the last possible moment.
Use this approach if you have constant strings that should be stored in the original language — such as strings in a database — but should be translated at the last possible point in time, such as when the string is presented to the user.
Use the function django.utils.translation.gettext_lazy() to translate strings lazily — when the value is accessed rather than when the gettext_lazy() function is called.
For example, to mark a fields’s help_text attribute as translatable, (otherwise they won’t be translated correctly on a per-user basis). And it’s a good idea to add translations for the field names and table names, too. This means writing explicit verbose_name and verbose_name_plural options in the Meta class: messages that have different singular and plural forms, for marks a string for translations:
<title>{% trans "This is the title." %}</title>
If you only want to mark a value for translation, but translate it later, use the noop option:
<title>{% trans "value" noop %}</title>
It’s not possible to use template variables in {% trans %} — only constant strings, in single or double quotes, are allowed. If your translations require variables (placeholders), use {% blocktrans %}, for example:
{% blocktrans %}This %}, for example:
{% blocktrans count list|length as counter %} There is only one {{ name }} object. {% plural %} There are {{ counter }} {{ name }} objects. {% endblocktrans %}
Internally, all block and inline translations use the appropriate gettext/ngettext call.
When you use RequestContext (see Chapter 10), your templates have access to three translation-specific variables:
You can also load these values using template tags:
{% load i18n %} {% get_current_language as LANGUAGE_CODE %} {% get_available_languages as LANGUAGES %} {% get_current_language_bidi as LANGUAGE_BIDI %}
Translation hooks are also available within any template block tag that accepts constant strings. In those cases, just use _() syntax to specify a translation string, for example:
{% some_special_tag _("Page not found") value|yesno:_("yes,no") %}
In this case, both the tag and the filter will see the already-translated string (i.e., the string is translated before being passed to the tag handler functions), so they don’t need to be aware of translations.
Once you’ve tagged your strings for later translation, you need to write (or obtain) the language translations themselves. In this section we explain how that works., bin/make-messages.py, that automates the creation and maintenance. Take a look at thelanguage codes in the django/conf/locale/ directory to see which languages are currently supported.
The script should be run from one of three places:
The script runs over the entire tree it is run on). The first time you run it on your tree you’ll need to create the locale directory.. application contains a translation string for the text "Welcome to my site.", like so:
_("Welcome to my site.")
then make-messages.py will have created a .po file containing the following snippet — a message:
#: path/to/python/module.py:23 msgid "Welcome to my site." msgstr ""
A quick explanation is in order:
Long messages are a special case. The first string directly after!
For example, here’s a multiline translation (taken from the Spanish localization that ships with Django):
msgid "" "There's been an error. It's been reported to the site administrators via e-" "mail and should be fixed shortly. Thanks for your patience." msgstr "" "Ha ocurrido un error. Se ha informado a los administradores del sitio " "mediante correo electrónico y debería arreglarse en breve. Gracias por su " "paciencia."
Note the trailing spaces..
Once you’ve prepared your translations — or, if you just want to use the translations that are included with Django — you’ll just need to activate translation for your application.
Behind the scenes, Django has a very flexible model of deciding which language should be used — installation-wide, for a particular user, or both.
To set an installation-wide language preference, set LANGUAGE_CODE in your settings file. Django uses this language as the default translation — the final attempt if no other translator finds a translation.
If all you want to do is run Django with your native language, and a language file is available for your language, simply set LANGUAGE_CODE.
If you want to let each individual user specify the.middleware.common.CommonMiddleware', 'django.contrib.sessions.middleware.SessionMiddleware', 'django.middleware.locale.LocaleMiddleware' )
LocaleMiddleware tries to determine the user’s language preference by following this algorithm:
In each of these places, the language preference is expected to be in the standard language format, as a string. For example, Brazilian Portuguese is pt-br. If a base language is available but the sub-language your LANGUAGES setting to a list of languages, for example:
LANGUAGES = ( ('de', _('German')), ('en', _('English')), )
This example restricts languages that are available for automatic selection to German and English (and any sub-language, like de-ch or en-us).
If you define a custom LANGUAGES, it’s OK to mark the languages as translation strings — but use a “dummy” gettext() function, not the one in django.utils.translation. You should never import django.utils.translation from within your settings file, because that module itself depends on the settings, and that would cause a circular import.
The solution is to use a “dummy” gettext() function. Here’s a sample settings file:
_ = lambda s: s LANGUAGES = ( ('de', _('German')), ('en', _('English')), )
With this arrangement, make-messages.py will still find and mark these strings for translation, but the translation won’t happen at runtime, so you’ll have to remember to wrap the languages in the real get, and maybe the validator messages, too.
Technical message IDs are easily recognized; they’re all uppercase. You don’t translate the message ID as with other messages; rather, request object. Feel free to read this value in your view code. Here’s a simple example:
def hello_world(request, count): if request.LANGUAGE_CODE == 'de-at': return HttpResponse("You prefer to read Austrian German.") else: return HttpResponse("You prefer to read another language.")
Note that, with static (i.e. without middleware) translation, the language is in settings.LANGUAGE_CODE, while with dynamic (middleware) translation, it’s in request.LANGUAGE_CODE. applications and put all translations into one big project message file. The choice is yours.
Note
If you’re using manually configured settings,:
To create message files, you use the same make-messages.py tool as with the Django message files. You only need to be in the right place — in the directory where either the conf/locale (in case of the source tree) or the locale/ (in case of application messages or project messages) directory is located. And you use the same compile-messages.py to produce the binary django.mo files that are used by gettext. application-specific translations. But using application-specific translations and project translations could produce weird problems with make-messages. make-messages, and friends from within JavaScript.
The main solution to these problems is the javascript_catalog view, which generates’re depending upon plus signs (+) in the URL. This is especially useful if your pages use code from different applications, site preceding code could have been written as follows: (e.g., in conjunction with ngettext to produce proper pluralization). creates or updates the translation catalog for JavaScript for German. After updating translation catalogs, just run compile-messages.py the same way as you do with normal Django translation catalogs.
If you know gettext, you might note these special things in the way Django does translation:
This chapter mostly concludes our coverage of Django’s features. You should now know enough to start producing your own Django sites.
However, writing the code is only the first step in deploying a successful Web site. The next two chapters cover the things you’ll need to know if you want your site to survive in the real world. Chapter 19 discuses how you can secure your sites and your users from malicious attackers, and Chapter 20 details how to deploy a Django application onto one or many servers.. | https://book.huihoo.com/django/en/1.0/chapter18/index.html | CC-MAIN-2019-09 | refinedweb | 1,586 | 55.84 |
Created on 2013-11-22 16:04 by brett.cannon, last changed 2019-07-15 18:52 by brett.cannon. This issue is now closed.
E.g. test_namespace_pkgs should be under test_importlib and so should test_namespace_pkgs. test_import can conceivably stay out if it's updated to only contain syntactic tests for the import statement.
This is so that it's easier to run import-related tests when making changes to importlib (otherwise one has to run the whole test suite or memorize the name of every test suite related to import/importlib).
+1
At least test_namespace_pkgs is already moved, see
Should then this issue be closed?
I see that there are a few other tests that have import on its name and are outside either test_import or test_importlib:
- test_pkgimport.py
- test_threaded_import.py
- test_zipimport.py
- test_zipimport_support.py
- threaded_import_hangers.py
Should those be actually moved to test_importlib?
Anything zipimport-related should stay separate as zipimport is not a part of importlib. The other three will have to be looked at to see what exactly they are testing to know whether they relate to importlib and the implementation of import or not (my guess is yes).
Decided to start working on this issue, seems like a solid starting point. After submitting a minor doc contribution, I wanted to also attempt to contribute to some of the easier enhancement requests.
From my current assessment, it appears that "test_pkgimport", "test_threaded_import", and "threaded_import_hangers" all would appropriately be categorized in test_importlib. Pkgimport attempts to build the package, perform basic filesystem tests, and ensure that run time errors are correctly functioning. "threaded_import" attempts to simultaneously important multiple modules by assigning each one to different threads. The purpose of "threaded_import_hangers" is specifically dependent on "threaded_import", so those two should be grouped together.
While going through test_pkgimport I noticed that it uses the method "random.choose" on lines 17 and 63. However, the standard seems to "random.choice". I could not find any specific mention of random.choose on the python docs, so this appears to be a deprecated method. As far as I'm aware "random.choice" follows current standard and is used far more frequently within the CPython repository. Would it be appropriate to change the "choose" method to "choice"?
Additionally, as far as naming convention goes, should the name of "test_pkgimport" instead be "test_pkg_import"? This is entirely semantics, but it makes for better consistency with the other file names and is a bit more readable. Also, would it be best to perform all of these changes within separate pull requests, or are they minor enough to be combined into one?
> Would it be appropriate to change the "choose" method to "choice"?
Yep, just make sure the tests still pass before and after the change. :)
> should the name of "test_pkgimport" instead be "test_pkg_import"?
Since things are moving it's fine to rename the file.
> would it be best to perform all of these changes within separate pull requests, or are they minor enough to be combined into one?
Separate are easier to review.
> Yep, just make sure the tests still pass before and after the change. :)
Sounds good, the first thing I had done before proposing the change was testing it in an IDE and using some logging to ensure that random.choose and random.choice were providing the same functionality. So hopefully the PR tests will pass as well.
> Separate are easier to review.
Alright, I'll do each of the file moves in individual PRs and then a separate one for changing random.choice to random.choose.
Thanks!
Created a PR for moving and renaming "test_pkimport". An initial approval was made, now it is awaiting core review ().
New changeset 80097e089ba22a42d804e65fbbcf35e5e49eed00 by Miss Islington (bot) (Kyle Stanley) in branch 'master':
bpo-19696: Moved "test_pkgimport.py" to dir "test_importlib" (GH-14303)
Thanks for the PR, aeros167! BTW, if you want to open a new issue and modernize the tests to use importlib directly that would be great!
> Thanks for the PR, aeros167! BTW, if you want to open a new issue and modernize the tests to use importlib directly that would be great!
Sounds good, I'll definitely look into doing that after finishing up this issue. Was waiting the previous PR to be merged before replacing the deprecated method "random.choose" with "random.choice" in "test_pkg_import.py" and then also moving the other two into test_importlib.
Thank you very much for the timely feedback. This has been a great experience for learning to contribute to larger open source projects. It's been an aspiration of mine to give back to Python, as it was my first serious programming language 5 years ago. This may be an incredibly minor contribution in the grand scheme of things, but my goal is for it to be the first of many (:
Created a new PR replacing the deprecated method "random.choose" with "random.choice" in "test_pkg_import.py" ().
Created a new PR for moving the last two files "test_threaded_import.py" and "threaded_import_hangers.py" to the directory "test_importlib". ()
There were some issues with automatically merging the changes, was the conflict because I attempted to move two files in a single PR? The previous one (which was just moving a single file) did not have any issues with merge conflicts.
This should be the final PR required for this issue, afterwards I'll look into creating a new issue for modernizing the tests.
New changeset 56ec4f1fdedd5b38deb06d94d51dd1a540262e90 by Miss Islington (bot) (Kyle Stanley) in branch 'master':
bpo-19696: Replace deprecated method in "test_import_pkg.py" (GH-14466)
In order to avoid the merge conflicts, I'm going to move test_threaded_imports.py and threaded_import_hangers.py in separate PRs. Here's the PR for moving test_threaded_imports.py.
Typo in previous comment: "test_threaded_imports.py" should be "test_threaded_import.py".
Opened a new PR for moving "threaded_import_hangers" to "test_importlib/" (PR-14642). This should be the final file in "tests/" to move into "test_importlib/", so the issue can be closed if this PR is merged. I'll create a new issue for modernizing the tests once this issue is closed.
Typo: The PR in the above comment should be 14655, not 14642.
New changeset a65c977552507dd19d2cc073fc91ae22cc66bbff by Miss Islington (bot) (Kyle Stanley) in branch 'master':
bpo-19696: Move threaded_import_hangers (GH-14655)
The latest PR-14655 moved the last file, "threaded_import_hangers.py" into the "test_importlib" directory. As far as I can tell, there's nothing else which needs to be moved there, so the issue can be closed. | https://bugs.python.org/issue19696 | CC-MAIN-2021-04 | refinedweb | 1,074 | 67.04 |
Declaration of the ns3::int64x64_t type using a native int128_t type. More...
#include "ns3/core-config.h"
#include <stdint.h>
#include <cmath>
Go to the source code of this file.
Declaration of the ns3::int64x64_t type using a native int128_t type.
Definition in file int64x64-128.h.
Floating point value of HP_MASK_LO + 1.
We really want:
but we can't call functions in const definitions.
We could make this a static and initialize in int64x64-128.cc or int64x64.cc, but this requires handling static initialization order when most of the implementation is inline. Instead, we resort to this define.
Definition at line 66 of file int64x64-128.h.
Referenced by ns3::int64x64::test::Int64x64HiLoTestCase::DoRun(), ns3::int64x64_t::GetDouble(), and ns3::int64x64_t::int64x64_t().
Definition at line 23 of file int64x64-128.h.
Definition at line 30 of file int64x64-128.h.
Definition at line 29 of file int64x64-128.h. | https://www.nsnam.org/doxygen/int64x64-128_8h.html | CC-MAIN-2021-21 | refinedweb | 150 | 55.3 |
.
You’re about to see that launching a Spring Batch job is quite simple thanks to the Spring Batch launcher API. But, how you end up launching your batch jobs depends on many parameters, so we provide you with basic concepts and some guidelines. By the end of this article, you’ll know where to look to set up a launching environment for your jobs. If you are interested in learning more tutorials on spring, please read spring tutorials.
also read:
Introducing the Spring Batch launch API
The heart of the Spring Batch launcher API is the JobLauncher interface. Here is a shortened version of this interface (we removed the exceptions for brevity):
public interface JobLauncher { public JobExecution run(Job job, JobParameters jobParameters) throws (…); }
The JobLauncher itself and the Job you pass to the run method are Spring beans. The call site typically builds the JobParameters argument on the fly. The following snippet shows how to use the job launcher to start a job execution with two parameters:
JobLauncher jobLauncher = context.getBean(JobLauncher.class); Job job = context.getBean(Job.class); jobLauncher.run( job, new JobParametersBuilder() .addString("inputFile", "file:./products.txt") .addDate("date", new Date()) .toJobParameters() );
Note the use a JobParametersBuilder to create a JobParameters instance. The JobParametersBuilder class provides a fluent-style API to construct job parameters. A job parameter consists of a key and a value. Spring Batch supports four types for job parameters: string, long, double, and date.
Job parameters and job instance Remember that job parameters define the instance of a job and that a job instance can have one or more corresponding executions. You can view an execution as an attempt to run a batch process.
Spring Batch provides an implementation of JobLauncher, whose only mandatory dependency is a job repository. The following snippet shows how to declare a job launcher with a persistent job repository:
<batch:job-repository <bean id="jobLauncher" class="org.springframework. [CA] batch.core.launch.support.SimpleJobLauncher"> <property name="jobRepository" ref="jobRepository" /> </bean>
That’s it; you know everything about the Spring Batch launcher API! Ok, not everything, we did not describe the JobExecution object returned by the run method. As you can guess, this object represents the execution coming out of the run method. The JobExecution interface provides the API to query the status of an execution: if it’s running, if it has finished or if it has failed. Because batch processes are often quite long to execute, Spring Batch offers both synchronous and asynchronous ways to launch jobs.
Synchronous vs. asynchronous launches
By default, the JobLauncher run method is synchronous: the caller waits until the job execution ends (successfully or not.) Figure 1 illustrates a synchronous launch.
Synchronous launching is good in some cases: if you write a Java main program that a system scheduler like cron launches periodically, you want to exit the program only when the execution ends. But, imagine that an HTTP request triggers the launching of a job. Writing a web controller that uses the job launcher to start Spring Batch jobs on HTTP requests is a handy way to integrate with external triggering systems. What happens if the launch is synchronous? The batch process will execute in the calling thread, monopolizing web container resources. Submit many batch processes in this way and they will use up all the threads of the web container, making it unable to process any other requests.
The solution is to make the job launcher asynchronous. Figure 2 shows how launching behaves when the job launcher is asynchronous.
To make the job launcher asynchronous, just provide it with an appropriate TaskExecutor, as shown in the following snippet:
<task:executor <bean id="jobLauncher" class="org.springframework. [CA] batch.core.launch.support.SimpleJobLauncher"> <property name="jobRepository" ref="jobRepository" /> <property name="taskExecutor" ref="executor" /> </bean>
In this example, we use a task executor with a thread pool of size 10. The executor will reuse threads from its pool to launch job executions asynchronously. Note the use of the executor XML element from the task namespace. This is a shortcut provided in Spring 3.0, but you can also define a task executor like any other bean (using an implementation like ThreadPoolTaskExecutor).
It is now time to guide you through the launching solutions.
Overview of launching solutions
We’ll now discuss many solutions to launch your Spring Batch jobs. There’s little chance you’ll use them all in one project. Many factors can lead you to choose a specific launching solution: launching frequency, number of jobs to launch, nature of the triggering event, type of job, duration of the job execution, and so on. Let’s explore some cases and present some guidelines.
Launching from the command line
A straightforward way to launch a Spring Batch job is to use the command line, which spawns a new Java Virtual Machine process for the execution, as figure 3 illustrates.
The triggering event can be a system scheduler like cron or even a human operator that knows when to launch the job. You’ll see that you can write your own Java launcher program but also that Spring Batch provides a generic command line launcher that you can reuse.
Embedding Spring Batch and a scheduler in a container
Spawning a JVM process for each execution can be costly, especially if it opens new connections to a database or creates object-relational mapping contexts. Such initializations are resource intensive and you probably don’t want the associated costs if your jobs run every minute. Another option is to embed Spring Batch into a container such that your Spring Batch environment is ready to run at any time and there is no need to set up Spring Batch for each job execution. You can also choose to embed a Java-based scheduler to start your jobs. Figure 4 illustrates this solution.
A web container is a popular way to embed a Spring Batch environment. Remember that Spring Batch runs everywhere the Spring Framework runs.
Embedding Spring Batch and triggering jobs by an external event
You can also have a mix of solutions: use cron because it is a popular solution in your company and embed Spring Batch in a web application because it avoids costly recurring initializations. The challenge here is to give cron access to the Spring Batch environment. Figure 5 illustrates this deployment.
The list of launching solutions we cover here is by no means exhaustive. The Spring Batch launcher API is simple to use, so you can imagine building other types of solutions, for example: event-driven with JMS or remote with JMX.
also read:
Summary
Launching Spring Batch jobs is easy. We covered some of the many scenarios, which are the most common scenarios you’ll meet in batch systems. With Spring Batch, you can stick to the popular cron + command line scenario using either your own Java program or Spring Batch’s generic command line runner. You can also choose to embed Spring Batch in a web application combined with a Java scheduler. | http://javabeat.net/launching-a-spring-batch-job/ | CC-MAIN-2017-04 | refinedweb | 1,171 | 53.71 |
Created attachment 8647464 [details] gdb bt Seen on current trunk on aries with debug gecko and monkey testing.
Can you get a backtrace for the main thread too?
Created attachment 8647468 [details] main thread
Hm thats interesting. 2 PA processes: root@aries:/ # b2g-ps APPLICATION SEC USER PID PPID VSIZE RSS WCHAN PC NAME b2g 0 root 5131 1 340116 131532 ffffffff b6ea72d4 t /system/b2g/b2g (Nuwa) 0 root 5138 5131 116028 26012 ffffffff b6ea6a60 S /system/b2g/b2g Homescreen 2 u0_a5468 5468 5138 148500 51608 ffffffff b6ea6894 S /system/b2g/b2g OperatorVariant 2 u0_a5498 5498 5138 131180 39240 ffffffff b6ea6894 S /system/b2g/b2g Built-in Keyboa 2 u0_a5645 5645 5138 139600 44812 ffffffff b6ea6894 S /system/b2g/b2g Usage 2 u0_a6313 6313 5138 133480 42136 ffffffff b6ea6894 S /system/b2g/b2g Find My Device 2 u0_a7362 7362 5138 130904 40420 ffffffff b6ea6894 S /system/b2g/b2g Browser 2 u0_a7709 7709 5138 136116 43668 ffffffff b6ea6894 S /system/b2g/b2g (Preallocated a 0 u0_a8226 8226 5138 119248 15760 ffffffff b6ea6a60 S /system/b2g/b2g (Preallocated a 0 u0_a8248 8248 5138 119248 15588 ffffffff b6ea6a60 S /system/b2g/b2g root@aries:/ #
Probably not related but I see a bunch of those messages in logcat: W/Nuwa ( 5703): Threads remaining at exit: W/Nuwa ( 5703): Chrome_ChildThr (origNativeThreadID=5225 recreatedNativeThreadID=5704) W/Nuwa ( 5703): ImageBridgeChil (origNativeThreadID=5276 recreatedNativeThreadID=5720) W/Nuwa ( 5703): BufferMgrChild (origNativeThreadID=5277 recreatedNativeThreadID=5721) W/Nuwa ( 5703): total: 3 outstanding threads. Please fix them so they're destroyed before this point! W/Nuwa ( 5703): note: sThreadCount=3, sThreadFreezeCount=18
One more find in logcat from when we hit the assertion: F/MOZ_Assert( 5131): Assertion failure: get() (dereferencing a UniquePtr containing nullptr), at ../../dist/include/mozilla/UniquePtr.h:286
Created attachment 8647998 [details] [diff] [review] Handle the failures of dispatching a runnable in the NuwaParent protocol actor.
(In reply to Cervantes Yu [:cyu] [:cervantes] from comment #10) > It turns out that using namespace base; in NuwaParent.cpp pollutes CrashReporterParent.cpp and the make windows compiler complain ambiguous usage of vsnprintf() with base::vsnprintf(). Hopefully this should build on Windows.
Created attachment 8649222 [details] [diff] [review] Handle the failures of dispatching a runnable in the NuwaParent protocol actor.
Created attachment 8649822 [details] [diff] [review] Handle the failures of dispatching a runnable in the NuwaParent protocol actor. Update: fix the build failure on windows.
Are you sure that we're actually failing to dispatch to the main thread? That should only fail very very late in shutdown.
From attachment 8647468 [details], the main thread is still running. I don't think it should ever fail dispatching to the main thread. On a closer look I found a race when 2 consecutive NuwaParent::RecvAddNewProcess() messages comes in, but it still doesn't explain the crash in MOZ_ALWAYS_TRUE().
Created attachment 8655399 [details] [diff] [review] A debug instrumentation to trigger a race condition This simulates that when 2 fork requests (one async, one sync) are on the fly. There is a race to trigger a crash in using mNewProcessFds. I am not sure if this is the same crash, but this needs to be fixed.
So ... should I actually review this patch?
(In reply to Kyle Huey [:khuey] (khuey@mozilla.com) from comment #17) > So ... should I actually review this patch? No. I'll focus on fixing the race first and request review if there is proof that the crash is in dispatching the runnable.
Nuwa is gone after bug 1284674. | https://bugzilla.mozilla.org/show_bug.cgi?id=1194180 | CC-MAIN-2017-43 | refinedweb | 577 | 61.06 |
i am not a good programmer and so i need help. im not asking for you to write the program but need help with it !
i have some coding but cant figure out how to put the rest together !
ok so this is what i have so far. im using netbeansIDE 6.9.1
math game !
i have to create a program that will ask the user for how many questions they want from 1-10.
and also if they want it to be addition, subtraction or multiplication, or a combination of them !
now i have to generate two random numbers from 1-10 ... which i did and use only those two numbers to do the addition, sub and multi !
if they pick 1 for addition then add the numbers an so on !
if they picked 10 for the first question ( how many questions they want) then i have to do the math game 10 times .. i cant figure that out !
now when the user puts in their answer i have to look at it then give the correct answer.
if the answer is correct accept it as correct if wrong then wrong.
at the end give the user the correct answer for the problems !
after all is done ... give the user the percent they got right and percent they got wrong. also if possible give which ones they got wrong and the answer for it !
package work; import java.util.Random; import java.util.Scanner; public class Project { public static void main(String[] args) { System.out.println("Welcome To The Mathematical Game."); byte answer; byte answer2; int num1 = 0, num2 = 0, correctAnswer, userAnswer; Scanner userInput = new Scanner(System.in); System.out.print("Please enter the number of questions you want" + " from 1 to 10: "); do { answer = userInput.nextByte(); if(answer > 10 || answer < 1) { System.out.print("Sorry, that number did not work. " + "Please enter a number from 1 to 10: "); } else { Scanner userInput2 = new Scanner(System.in); System.out.print(" 1 for addition, 2 for subtraction, 3 for multiplication " + "(Please pick a number accompanying the type): "); do { answer2 = userInput2.nextByte(); if(answer2 < 1 || answer2 > 3) { System.out.print("Please enter a valid number: " + "1 for addition, 2 for subtraction, 3 for multiplication: "); } else { Random rnd = new Random(); // generate two random numbers num1 = (int)((rnd.nextFloat() * 10 + 1)); num2 = (int)((rnd.nextFloat() * 10 + 1)); // give the user the two numbers System.out.println("The two numbers are: " + num1 + " and " + num2 + "); // ask the user to do the math System.out.println("If you picked 1 for addition then add" + " the two numbers above, if you picked 2 " + "for subtraction then subtract the second number " + "from the first, and if you picked 3 for " + "multiplication then multiply the numbers."); // the user inputs their answer here Scanner userInput3 = new Scanner(System.in); System.out.print("Please enter the answer: "); userAnswer = userInput3.nextInt(); if(answer2 == 1) { correctAnswer = num1 + num2; } else if(answer2 == 2) { correctAnswer = num2 - num1; } else { correctAnswer = num1 * num2; } } }while(answer2 > 0 || answer2 < 4); } }while(answer < 1 || answer > 10); } } | https://www.daniweb.com/programming/software-development/threads/350319/java-help | CC-MAIN-2022-21 | refinedweb | 506 | 67.76 |
One way to obfuscate code is clever use of arcane programming language syntax. Hackers are able to write completely unrecognizable code by exploiting dark corners of programming techniques and languages. Some of these attempts are quite impressive.
But it’s also possible to write clean source code that is nevertheless obfuscated. For example, it’s not at all obvious what the following Python code computes.
def f(x): return 4*x*(1-x) def g(x): return x**3.5 * (1-x)**2.5 sum = 0 x = 1/7.0 N = 1000000 for _ in range(N): x = f(x) sum += g(x) print(sum/N)
The key to unraveling this mystery is a post I wrote a few days ago that shows the x‘s in this code cover the unit interval like random samples from a beta(0.5, 0.5) distribution. That means the code is finding the expected value of g(X) where X has a beta(0.5, 0.5) distribution. The following computes this expected value.
The output of the program matches 1/60π to six decimal places.
Related post: What does this code do? | https://www.johndcook.com/blog/2017/10/03/clean-obfuscated-code/ | CC-MAIN-2019-18 | refinedweb | 191 | 66.13 |
Hello, here goes multipath-tools-0.2.6 It's a really big feature update. I hope I'll get feedback on the new features : 1) system-disk-on-SAN corner case treated with private namespace and ramfs for callback programs. Next release I'll move the mknod to the ramfs too by binding the ramfs to /tmp so scsi_id's mknod will have the safety net too. See prepare_namespace() in main.c for review. 2) multipath config tool now get path id from callback proggys. /bin/scsi_id by default. The tools now rely heavily on callbacks, so I'd like to have insights about how to treat the out-of-memory case with regard to these execv(). I'm also pondering ripping off the get_evpd_wiid() fallback : does someone care ? Finally note you *need* to update your config file. Not because I want you to ackowledge to huge work put into reformatting and commenting, but simply because the synthax has changed. Changelog for this release : * [multipathd] implement the system-disk-on-SAN safety net * [multipathd] add exit_daemon() wrapper function * [multipathd] mlockall() all daemon threads * [multipath] fix a bug in the mp_iopolicy_handler that kept the iopolicy per LUN override from working * [multipath] display the tur bit value in print_path as requested by SUN * try to open /$udev/reverse/$major:$minor before falling back to mknod * add "udev_dir" to the defaults block in the config file * merge "daemon" & "device_maps" config blocks into a new "defaults" block * [multipath] properly comment the config file * [multipath] generalize the dbg() macro usage Makefile now has a DEBUG flag * [multipath] move to callout based WWID fetching. Default to scsi_id callout. I merged execute_program from udev for that (so credit goes to GregKH) * [multipath] get rid of "devnodes in /dev" assumption ie move to "maj:min" device mapper target synthax Downloads and docs at regards, cvaroqui | https://www.redhat.com/archives/dm-devel/2004-July/msg00061.html | CC-MAIN-2015-48 | refinedweb | 308 | 52.29 |
Hey All,
I recently started Learning c++. I have purchased Jumping into c++ (and c++ for Dummies)and have a couple of Questions.
1. Are there answers for the Practice Problems anywhere from Jumping into C++
2. Also Having an issue with one of the Problems: The program runs but doesn't give me the result. Any thoughts?
Thanks for any helpThanks for any helpCode:#include <iostream> #include <cstdio> #include <cstdlib> using namespace std; int Addition(int Num1, int Num2) { int Ans = Num1 + Num2; return Ans; } int Subtraction(int Num1, int Num2) { int Ans = Num1 - Num2; return Ans; } int Multiplication(int Num1, int Num2) { int Ans = Num1 * Num2; return Ans; } int Division(int Num1, int Num2) { int Ans = Num1 / Num2; return Ans; } int main() { int nAns; //For Loop Version of Program int nNum1; int nNum2; char cOperand; cout << "Please enter the First Number: "; cin >> nNum1; cout << "\nPlease enter the Second Number: "; cin >> nNum2; cout << "\nPlease enter your Operator as + - / * %: "; cin >> cOperand; //cout << "The Answer is: " << ans << "\n"; switch (cOperand) { case '+': nAns = Addition(nNum1, nNum2); return nAns; break; case '-': nAns = Subtraction(nNum1, nNum2); return nAns; break; case '*': case 'x': case 'X': nAns = Multiplication(nNum1, nNum2); return nAns; break; case '/': nAns = Division(nNum1, nNum2); return nAns; break; default: cout << "Invalid Operator!"; break; } cin.ignore(); cout << "The Answer is: " << nAns; }
Edit: It actually looks to be getting the correct answer as when I run the program in Code:Blocks it exits and says: Process returned 10 (0xA). Its just not giving me the output of the answer. | http://cboard.cprogramming.com/cplusplus-programming/148098-newbie-help.html | CC-MAIN-2014-15 | refinedweb | 254 | 53.48 |
Important: Please read the Qt Code of Conduct -
How can i set <language conformance flag> like in Vs2017 ( /permissive/no)
I am trying to link Pytorch c++ frontend with Qt .
With Vs2017 its works fine when setting language conformance flag to "no".
How can i set this flag in Qt .pro file ?
Thank you in advance.
QMAKE_CXXFLAGS += /permissive/no
Tank you for your quick reply.
I just tried it and got warning [:-1: warning: D9002 : ignoring unknown option '/permissive/no']
I am intersted in setting this flag .
Perhaps it should be
/permissive-
@sierdzio seems like it's enabled by default in MSVC2017 i don't know how to disable it in Qmake file .
This might work:
QMAKE_CXXFLAGS -= /permissive-
@sierdzio I tried it ( no warnings ) but that doesn't seem to solve the problem. Maybe i am missing other points.
Until now the only workaround i found is (using Qt inside VS2017) :
After Using "Qt Visual Studio Tools" and setting the project in VS 2017 with "language conformance NO"
I had to do :
#undef slots
#include <torch/torch.h>
#define slots Q_SLOTS
(Still not working with Qt creator/Qmake)
- SGaist Lifetime Qt Champion last edited by
Hi,
Try adding:
CONFIG += no_keywords
to your .pro file.
This will disable the Qt specific keywords like signal and slot. You have to then use Q_SLOT and friends in your code. | https://forum.qt.io/topic/105901/how-can-i-set-language-conformance-flag-like-in-vs2017-permissive-no/5 | CC-MAIN-2020-24 | refinedweb | 226 | 71.44 |
=head1 NAME

Object::InsideOut - Comprehensive inside-out object support module

=head1 VERSION

This document describes Object::InsideOut version 4.05

=head1 DESCRIPTION

This module provides comprehensive support for implementing classes using
the inside-out object model.

Object::InsideOut implements inside-out objects as anonymous scalar
references that are blessed into a class, with the scalar containing the ID
for the object (usually a sequence number).  The scalar reference is set as
B<read-only> to prevent I<accidental> modifications to the ID.

Object data (i.e., fields) are stored within the class's package in either
arrays indexed by the object's ID, or hashes keyed to the object's ID.

The virtues of the inside-out object model over the I<blessed hash> object
model have been extolled in detail elsewhere.  See the informational links
under L</"SEE ALSO">.  Briefly, inside-out objects offer the following
advantages over I<blessed hash> objects:

=over

=item * Encapsulation

Object data is enclosed within the class's code and is accessible only
through the class-defined interface.

=item * Field Name Collision Avoidance

Inheritance using I<blessed hash> classes can lead to conflicts if any
classes use the same name for a field (i.e., hash key).  Inside-out objects
are immune to this problem because object data is stored inside each class's
package, and not in the object itself.

=item * Compile-time Name Checking

A common error with I<blessed hash> classes is the misspelling of field
names:

    $obj->{'coment'} = 'Say what?';   # Should be 'comment' not 'coment'

As there is no compile-time checking on hash keys, such errors do not
usually manifest themselves until runtime.

With inside-out objects, I<text> hash keys are not used for accessing field
data.  Field names and the data index (i.e., C<$$self>) are checked by the
Perl compiler such that any typos are easily caught using S<C<perl -c>>.
    $coment[$$self] = $value;     # Causes a compile-time error

        # or with hash-based fields

    $comment{$$self} = $value;    # Also causes a compile-time error

=back

Object::InsideOut offers all the capabilities of other inside-out object
modules with the following additional key advantages:

=over

=item * Speed

When using arrays to store object data, Object::InsideOut objects are as
much as 40% faster than I<blessed hash> objects for fetching and setting
data, and even with hashes they are still several percent faster than
I<blessed hash> objects.

=item * Threads

Object::InsideOut is thread safe, and thoroughly supports sharing objects
between threads using L<threads::shared>.

=item * Flexibility

Allows control over object ID specification, accessor naming, parameter
name matching, and much more.

=item * Runtime Support

Supports classes that may be loaded at runtime (i.e., using
S<C<eval { require ...; };>>).  This makes it usable from within
L<mod_perl>, as well.  Also supports additions to class hierarchies, and
dynamic creation of object fields during runtime.

=item * Exception Objects

Object::InsideOut uses L<Exception::Class> for handling errors in an
OO-compatible manner.

=item * Object Serialization

Object::InsideOut has built-in support for object dumping and reloading
that can be accomplished in either an automated fashion or through the use
of class-supplied subroutines.  Serialization using L<Storable> is also
supported.

=item * Foreign Class Inheritance

Object::InsideOut allows classes to inherit from foreign (i.e.,
non-Object::InsideOut) classes, thus allowing you to sub-class other Perl
classes, and access their methods from your own objects.

=item * Introspection

Obtain constructor parameters and method metadata for Object::InsideOut
classes.

=back

=head1 CLASSES

To use this module, each of your classes will start with
S<C<use Object::InsideOut;>>.  Child classes inherit from parent classes by
supplying the parent class name with the C<use> declaration.  In this
respect, Object::InsideOut acts as a replacement for the C<base> pragma:  It
loads the parent module(s), calls their C<-E<gt>import()> methods, and sets
up the sub-class's C<@ISA> array.
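For example, a parent class and a child class might be declared as follows
(a minimal sketch; the class and field names are purely illustrative):

    package My::Parent; {
        use Object::InsideOut;

        my @data :Field :Arg(data);
    }

    package My::Child; {
        # Inherit from My::Parent; Object::InsideOut loads the
        # parent module and sets up this class's @ISA array
        use Object::InsideOut qw(My::Parent);
    }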
Therefore, you should not S<C<use base ...>> yourself, nor try to set up
C<@ISA> arrays.  Further, you should not use a class's C<@ISA> array to
determine a class's hierarchy:  See L</"INTROSPECTION"> for details on how
to do this.

If a parent class takes parameters (e.g., symbols to be exported via
L<Exporter|/"Usage With C<Exporter>">), enclose them in an array ref
(mandatory) following the name of the parent class:

    package My::Project; {
        use Object::InsideOut
            'My::Class'      => [ 'param1', 'param2' ],
            'Another::Class' => [ 'param' ];
        ...
    }

=head1 OBJECTS

=head2 Object Creation

Objects are created using the C<-E<gt>new()> method, which is exported to
each class by Object::InsideOut.  Usually, object fields are initially
populated with data as part of the object creation process by passing
parameters to the C<-E<gt>new()> method.  Parameters are passed in as
combinations of S<C<key =E<gt> value>> pairs and/or hash refs.
Additionally, parameters can be segregated in hash refs keyed to specific
classes:

    my $obj = My::Class->new(
        'foo' => 'bar',
        'My::Class'     => { 'param' => 'value' },
        'Parent::Class' => { 'data'  => 'info'  },
    );

The initialization code for both classes will get
S<C<'foo' =E<gt> 'bar'>>, C<My::Class> will also get
S<C<'param' =E<gt> 'value'>>, and C<Parent::Class> will also get
S<C<'data' =E<gt> 'info'>>.  In this scheme, class-specific parameters will
override general parameters specified at a higher level:

    my $obj = My::Class->new(
        'default' => 'bar',
        'Parent::Class' => { 'default' => 'baz' },
    );

C<My::Class> will get S<C<'default' =E<gt> 'bar'>>, and C<Parent::Class>
will get S<C<'default' =E<gt> 'baz'>>.

Calling C<-E<gt>new()> on an object works, too, and operates the same as
calling C<-E<gt>new()> for the class of the object (i.e.,
C<$obj-E<gt>new()> is the same as C<ref($obj)-E<gt>new()>).

How the parameters passed to the C<-E<gt>new()> method are used to
initialize the object is discussed later under L</"OBJECT INITIALIZATION">.

NOTE: You cannot create objects from Object::InsideOut itself:

    # This is an error
    # my $obj = Object::InsideOut->new();

In this way, Object::InsideOut is not an object class, but functions more
like a pragma.

=head2 Object IDs

As stated above, Object::InsideOut objects are blessed scalar references,
with the scalar containing the object's ID:  C<$$self>.  Normally, this is
only needed when accessing the object's field data:

    my @my_field :Field;

    sub my_method
    {
        my $self = shift;
        ...
        my $data = $my_field[$$self];
        ...
    }

At all other times, and especially in application code, the object should
be treated as an I<opaque> entity.

=head1 ATTRIBUTES

Much of the power of Object::InsideOut comes from the use of I<attributes>:
I<Tags> on variables and subroutines that the L<attributes> module sends to
Object::InsideOut at compile time.  Object::InsideOut then makes use of the
information in these tags to handle such operations as object construction,
automatic accessor generation, and so on.  (Note:  The use of attributes is
not the same thing as L<source filtering|Filter::Simple>.)

Because Perl reserves all-lowercase attribute names for its own use (see
the L<attributes> module), an attribute's name should not be all lowercase.

=head1 FIELDS

=head2 Field Declarations

Object data fields consist of arrays within a class's package into which
data are stored using the object's ID as the array index.  An array is
declared as being an object field by following its declaration with the
C<:Field> attribute:

    my @info :Field;

Object data fields may also be hashes:

    my %data :Field;

However, as array access is as much as 40% faster than hash access, you
should stick to using arrays.  See L</"HASH ONLY CLASSES"> for more
information on when hashes may be required.

=head2 Getting Data

In class code, data can be fetched directly from an object's field array
(hash) using the object's ID:

    $data = $field[$$self];
        # or
    $data = $field{$$self};

=head2 Setting Data

Analogous to the above, data can be put directly into an object's field
array (hash) using the object's ID:

    $field[$$self] = $data;
        # or
    $field{$$self} = $data;

However, in threaded applications that use data sharing (i.e., use
C<threads::shared>), the above will not work when the object is shared
between threads and the data being stored is either an array, hash or
scalar reference (this includes other objects).  This is because the
C<$data> must first be converted into shared data before it can be put into
the field.
Therefore, Object::InsideOut automatically exports a method called
C<-E<gt>set()> to each class.  This method should be used in class code to
put data into object fields whenever there is the possibility that the
class code may be used in an application that uses L<threads::shared>
(i.e., to make your class code B<thread-safe>).  The C<-E<gt>set()> method
handles all details of converting the data to a shared form, and storing it
in the field.

The C<-E<gt>set()> method requires two arguments:  A reference to the
object field array/hash, and the data (as a scalar) to be put in it:

    my @my_field :Field;

    sub store_data
    {
        my ($self, $data) = @_;
        ...
        $self->set(\@my_field, $data);
    }

To be clear, the C<-E<gt>set()> method is used inside class code, not
application code.  Use it inside any object methods that set data in object
field arrays/hashes.

In the event of a method naming conflict, the C<-E<gt>set()> method can be
called using its fully-qualified name:

    $self->Object::InsideOut::set(\@field, $data);

=head1 OBJECT INITIALIZATION

As stated in L</"Object Creation">, object fields are initially populated
with data as part of the object creation process by passing
S<C<key =E<gt> value>> parameters to the C<-E<gt>new()> method.  These
parameters can be processed automatically into object fields, or can be
passed to a class-specific object initialization subroutine.

=head2 Field-Specific Parameters

When an object creation parameter corresponds directly to an object field,
you can specify for Object::InsideOut to automatically place the parameter
into the field by adding the C<:Arg> attribute to the field declaration:

    my @foo :Field :Arg(foo);

For the above, the following would result in C<$val> being placed in
C<My::Class>'s C<@foo> field during object creation:

    my $obj = My::Class->new('foo' => $val);

=head2 Object Initialization Subroutines

Many times, object initialization parameters do not correspond directly to
object fields, or they may require special handling.
For these, parameter processing is accomplished through a combination of an C<:InitArgs> labeled hash, and an C<:Init> labeled subroutine.

The C<:InitArgs> labeled hash specifies the parameters to be extracted from the argument list supplied to the C<-E<gt>new()> method. Those parameters (and only those parameters) which match the keys in the C<:InitArgs> hash are then packaged together into a single hash ref. The newly created object and this parameter hash ref are then sent to the C<:Init> subroutine for processing.

Here is an example of a class with an I<automatically handled> field and an I<:Init> handled field (reconstructed from the discussion that follows):

 package My::Class; {
     use Object::InsideOut;

     # Automatically handled field
     my @my_data :Field :Arg(MY_DATA);

     # ':Init' handled field
     my @my_field :Field;

     my %init_args :InitArgs = (
         'MY_PARAM' => '',
     );

     sub _init :Init
     {
         my ($self, $args) = @_;

         if (exists($args->{'MY_PARAM'})) {
             $self->set(\@my_field, $args->{'MY_PARAM'});
         }
     }
 }

 package main;

 my $obj = My::Class->new('MY_DATA'  => $dat,
                          'MY_PARAM' => $parm);

The above results in C<$dat> being placed in the object's C<@my_data> field because the C<MY_DATA> key is specified in the C<:Arg> attribute for that field. Then, C<_init> is invoked with arguments consisting of the object (i.e., C<$self>) and a hash ref consisting only of S<C<{ 'MY_PARAM' =E<gt> $parm }>> because the key C<MY_PARAM> is specified in the C<:InitArgs> hash. C<_init> checks that the parameter C<MY_PARAM> exists in the hash ref, and then (since it does exist) adds C<$parm> to the object's C<@my_field> field.

=over

=item Setting Data

Data processed by the C<:Init> subroutine may be placed directly into the class's field arrays (hashes) using the object's ID (i.e., C<$$self>):

 $my_field[$$self] = $args->{'MY_PARAM'};

However, as shown in the example above, it is strongly recommended that you use the L<-E<gt>set()|/"Setting Data"> method:

 $self->set(\@my_field, $args->{'MY_PARAM'});

which handles converting the data to a shared format when needed for applications using L<threads::shared>.

=item All Parameters

The C<:InitArgs> hash and the C<:Arg> attribute on fields act as filters that constrain which initialization parameters are and are not sent to the C<:Init> subroutine.
If, however, a class does not have an C<:InitArgs> hash B<and> does not use the C<:Arg> attribute on any of its fields, then its C<:Init> subroutine (if it exists, of course) will get all the initialization parameters supplied to the C<-E<gt>new()> method.

=back

=head2 Mandatory Parameters

Field-specific parameters may be declared mandatory as follows:

 my @data :Field :Arg('Name' => 'data', 'Mandatory' => 1);

If a mandatory parameter is missing from the argument list to C<-E<gt>new()>, an error is generated.

For C<:Init> handled parameters, use:

 my %init_args :InitArgs = (
     'data' => {
         'Mandatory' => 1,
     },
 );

C<Mandatory> may be abbreviated to C<Mand>, and C<Required> or C<Req> are synonymous.

=head2 Default Values

For optional parameters, defaults can be specified for field-specific parameters using either of these syntaxes:

 my @data :Field :Arg('Name' => 'data', 'Default' => 'foo');
 my @info :Field :Arg(info) :Default('bar');

If an optional parameter with a specified default is missing from the argument list to C<-E<gt>new()>, then the default is assigned to the field when the object is created (before the C<:Init> subroutine, if any, is called).

The format for C<:Init> handled parameters is:

 my %init_args :InitArgs = (
     'data' => {
         'Default' => 'foo',
     },
 );

In this case, if the parameter is missing from the argument list to C<-E<gt>new()>, then the parameter key is paired with the default value and added to the C<:Init> argument hash ref (e.g., S<C<{ 'data' =E<gt> 'foo' }>>).

Fields can also be assigned a default value even if not associated with an initialization parameter:

 my @hash  :Field :Default({});
 my @tuple :Field :Default([1, 'bar']);

Note that when using C<:Default>, the value must be properly structured Perl code (e.g., strings must be quoted as illustrated above).

C<Default> and C<:Default> may be abbreviated to C<Def> and C<:Def> respectively.
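Putting the above together, a class might combine a mandatory parameter with a defaulted one as follows (the class, field, and parameter names here are invented for illustration):

 package My::Widget; {
     use Object::InsideOut;

     my @name :Field :Arg('Name' => 'name', 'Mand' => 1) :Get(name);
     my @size :Field :Arg('Name' => 'size', 'Def'  => 42) :Get(size);
 }

 package main;

 # 'size' is omitted, so it falls back to its default of 42;
 # omitting 'name' instead would generate an error
 my $w = My::Widget->new('name' => 'thingy');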
=head3 Generated Default Values

It is also possible to I<generate> default values on a per object basis by using code in the C<:Default> directive.

 my @IQ :Field :Default(50 + rand 100);
 my @ID :Field :Default(our $next; ++$next);

The above, for example, will initialize the C<IQ> attribute of each new object to a different random number, while its C<ID> attribute will be initialized with a sequential integer.

The code in a C<:Default> specifier can also refer to the object being initialized, either as C<$_[0]> or as C<$self>. For example:

 my @unique_ID :Field :Default($self->gen_unique_ID);

To use generated defaults with the C<Default> tag inside an C<:Arg> directive, you will need to wrap the code in a C<sub { }>, and C<$_[0]> (but not C<$self>) can be used to access the object being initialized:

 my @baz :Field :Arg(Name => 'baz', Default => sub { $_[0]->biz });

System functions similarly need to be wrapped in C<sub { }>:

 my @rand :Field :Type(numeric) :Arg(Name => 'Rand', Default => sub { rand });

Subroutines can be accessed using a code reference:

 my @data :Field :Arg(Name => 'Data', Default => \&gen_default);

On the other hand, the above can also be simplified by using the C<:Default> directive instead:

 my @baz  :Field :Arg(baz)  :Default($self->biz);
 my @rand :Field :Arg(Rand) :Default(rand) :Type(numeric);
 my @data :Field :Arg(Data) :Default(gen_default);

Using generated defaults in the C<:InitArgs> hash requires the use of the same types of syntax as with the C<Default> tag in an C<:Arg> directive:

 my %init_args :InitArgs = (
     'Baz' => {
         'Default' => sub { $_[0]->biz },
     },
     'Rand' => {
         'Default' => sub { rand },
     },
     'Data' => {
         'Default' => \&gen_default,
     },
 );

=head3 Sequential Defaults

In the previous section, one of the examples is not as safe or as convenient as it should be:

 my @ID :Field :Default(our $next; ++$next);

The problem is the shared variable (C<$next>) that's needed to track the allocation of sequence values. To handle this, Object::InsideOut provides the C<:SequenceFrom> directive (which can be abbreviated to C<:SeqFrom>):

 my @ID :Field :SeqFrom(1);

The supplied value starts the sequence. If that value is a number or a string, each subsequent value is generated via C<$previous_value++>. If it is an object, it is generated by calling C<< $obj->next() >> (or by calling C<$obj++> if the object doesn't have a C<next()> method). In all other respects, the C<:SequenceFrom> directive is just like a C<:Default>. For example, it can be used in conjunction with the C<:Arg> directive as follows:

 my @ID :Field :Arg(ID) :SeqFrom(1);

However, not as a tag inside the C<:Arg> directive:

 my @ID :Field :Arg('Name' => 'ID', 'SeqFrom' => 1)   # WRONG

For the C<:InitArgs> hash, you will need to I<roll your own> sequential defaults if required:

 use feature 'state';

 my %init_args :InitArgs = (
     'Counter' => {
         'Default' => sub { state $next; ++$next }
     },
 );

=head2 Parameter Name Matching

Rather than having to rely on exact matches to parameter keys in the C<-E<gt>new()> method call, you can specify a regular expression to be used to match them. For a field-specific parameter, add a C<Regexp> parameter to the C<:Arg> attribute:

 my @param :Field :Arg('Name' => 'Param', 'Regexp' => qr/^PARA?M$/i);

In this case, a match on the regular expression results in C<$data> being placed in C<My::Class>'s C<@param> field during object creation:

 my $obj = My::Class->new('Parm' => $data);

For C<:Init> handled parameters, you would similarly use:

 my %init_args :InitArgs = (
     'Param' => {
         'Regex' => qr/^PARA?M$/i,
     },
 );

In this case, the match results in S<C<{ 'Param' =E<gt> $data }>> being sent to the C<:Init> subroutine as the argument hash. Note that the C<:InitArgs> hash key is substituted for the original argument key. This eliminates the need for any parameter key pattern matching within the C<:Init> subroutine.

C<Regexp> may be abbreviated to C<Regex> or C<Re>.

=head2 Object Pre-initialization

Occasionally, a child class may need to send a parameter to a parent class as part of object initialization. This can be accomplished by supplying a C<:PreInit> labeled subroutine. Any such subroutines found in the object's class hierarchy are each called with two arguments: the newly created object (i.e., C<$self>), and a hash ref of all the parameters from the C<-E<gt>new()> method call, including any additional parameters added by other C<:PreInit> subroutines.

 sub pre_init :PreInit
 {
     my ($self, $args) = @_;
     ...
 }

The parameter hash ref will not be exactly as supplied to C<-E<gt>new()>, but will be I<merged> into a single hash ref (see L</"ARGUMENT MERGING">) before being passed to each C<:PreInit> subroutine.
The C<:PreInit> subroutine may then add, modify or even remove any parameters from the hash ref as needed for its purposes. After all the C<:PreInit> subroutines have been executed, object initialization will then proceed using the resulting parameter hash.

The C<:PreInit> subroutine should not try to set data in its own class's fields or in other classes' fields (e.g., using I<set> methods), as such changes will be overwritten during the initialization phase which follows pre-initialization. The C<:PreInit> subroutine is only intended for modifying initialization parameters prior to initialization.

=head2 Initialization Sequence

When C<-E<gt>new()> is called, object initialization proceeds through the following steps:

=over

=item 1.

The scalar reference for the object is created, populated with an L</"Object ID">, and blessed into the appropriate class.

=item 2.

L<:PreInit|/"Object Pre-initialization"> subroutines are called in order from the bottom of the class hierarchy upward (i.e., child classes first).

=item 3.

From the top of the class hierarchy downward (i.e., parent classes first), L</"Default Values"> are assigned to fields. (These may be overwritten by subsequent steps below.)

=item 4.

From the top of the class hierarchy downward, parameters to the C<-E<gt>new()> method are processed for C<:Arg> field attributes and entries in the C<:InitArgs> hash:

=over

=item a.

L</"Parameter Preprocessing"> is performed.

=item b.

Checks for L</"Mandatory Parameters"> are made.

=item c.

L</"Default Values"> specified in the C<:InitArgs> hash are added for subsequent processing by the C<:Init> subroutine.

=item d.

L<Type checking|/"TYPE CHECKING"> is performed.

=item e.

L</"Field-Specific Parameters"> are assigned to fields.

=back

=item 5.

From the top of the class hierarchy downward, L<:Init|/"Object Initialization Subroutines"> subroutines are called with parameters specified in the C<:InitArgs> hash.

=item 6.

Checks are made for any parameters to C<-E<gt>new()> that were not handled in the above. (See next section.)
=back

=head2 Unhandled Parameters

It is an error to include any parameters to the C<-E<gt>new()> method that are not handled by at least one of the classes in the object's class hierarchy: Such unhandled parameters cause an exception to be thrown during object creation.

The exception to this is when a class does not have an C<:InitArgs> hash B<and> does not use the C<:Arg> attribute on any of its fields B<and> uses an L<:Init|/"Object Initialization Subroutines"> subroutine for processing parameters. In such a case, it is not possible for Object::InsideOut to determine which if any of the parameters are not handled by the C<:Init> subroutine.

If you add the following construct to the start of your application:

 BEGIN {
     no warnings 'once';
     $OIO::Args::Unhandled::WARN_ONLY = 1;
 }

then unhandled parameters will only generate warnings rather than causing exceptions to be thrown.

=head2 Modifying C<:InitArgs>

For performance purposes, Object::InsideOut I<normalizes> each class's C<:InitArgs> hash by creating keys in the form of C<'_X'> for the various options it handles (e.g., C<'_R'> for C<'Regexp'>). If a class has the unusual requirement to modify its C<:InitArgs> hash during runtime, then it must renormalize the hash after making such changes by invoking C<Object::InsideOut::normalize()> on it so that Object::InsideOut will pick up the changes:

 Object::InsideOut::normalize(\%init_args);

=head1 ACCESSOR GENERATION

=head2 Basic Accessors

A I<get> accessor is very basic: It just returns the value of an object's field:

 my @data :Field;

 sub fetch_data
 {
     my $self = shift;
     return ($data[$$self]);
 }

and you would use it as follows:

 my $data = $obj->fetch_data();

To have Object::InsideOut generate such a I<get> accessor for you, add a C<:Get> attribute to the field declaration, specifying the name for the accessor in parentheses:

 my @data :Field :Get(fetch_data);

Similarly, a I<set> accessor puts data in an object's field. The I<set> accessors generated by Object::InsideOut check that they are called with at least one argument. They are specified using the C<:Set> attribute:

 my @data :Field :Set(store_data);

Some programmers use the convention of naming I<get> and I<set> accessors using I<get_> and I<set_> prefixes.
Such I<standard> accessors can be generated using the C<:Standard> attribute (which may be abbreviated to C<:Std>):

 my @data :Field :Std(data);

which is equivalent to:

 my @data :Field :Get(get_data) :Set(set_data);

Other programmers prefer to use a single I<combination> accessor that performs both functions: When called with no arguments, it I<gets>, and when called with an argument, it I<sets>. Object::InsideOut will generate such accessors with the C<:Accessor> attribute. (This can be abbreviated to C<:Acc>, or you can use C<:Get_Set> or C<:Combined> or C<:Combo> or even C<:Mutator>.) For example:

 my @data :Field :Acc(data);

The generated accessor would be used in this manner:

 $obj->data($val);          # Puts data into the object's field
 my $data = $obj->data();   # Fetches the object's field data

=head2 I<Set> Accessor Return Value

For any of the automatically generated methods that perform I<set> operations, the default for the method's return value is the value being set (i.e., the I<new> value). You can specify the I<set> accessor's return value using the C<Return> attribute parameter (which may be abbreviated to C<Ret>). For example, to explicitly specify the default behavior use:

 my @data :Field :Set('Name' => 'store_data', 'Return' => 'New');

You can specify that the accessor should return the I<old> (previous) value (or C<undef> if unset):

 my @data :Field :Acc('Name' => 'data', 'Ret' => 'Old');

You may use C<Previous>, C<Prev> or C<Prior> as synonyms for C<Old>.

Finally, you can specify that the accessor should return the object itself:

 my @data :Field :Std('Name' => 'data', 'Ret' => 'Object');

C<Object> may be abbreviated to C<Obj>, and is also synonymous with C<Self>.
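For illustration, here are the three return-value behaviors side by side (the field and accessor names are invented):

 my @a :Field :Acc('Name' => 'a', 'Ret' => 'New');
 my @b :Field :Acc('Name' => 'b', 'Ret' => 'Old');
 my @c :Field :Acc('Name' => 'c', 'Ret' => 'Obj');

 ...

 my $new = $obj->a(5);   # Returns the new value: 5
 my $old = $obj->b(6);   # Returns the previous value of 'b' (undef if unset)
 $obj->c(7)->a(8);       # Returns $obj itself, so another method can be chained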
=head2 Method Chaining

An obvious case where method chaining can be used is when a field is used to store an object: A method for the stored object can be chained to the I<get> accessor call that retrieves that object:

 $obj->get_stored_object()->stored_object_method()

Chaining can be done off of I<set> accessors based on their return value (see above). In this example with a I<set> accessor that returns the I<new> value:

 $obj->set_stored_object($stored_obj)->stored_object_method()

the I<set_stored_object()> call stores the new object, returning it as well, and then the I<stored_object_method()> call is invoked via the stored/returned object. The same would work for I<set> accessors that return the I<old> value, too, but in that case the chained method is invoked via the previously stored (and now returned) object.

If the L<Want> module (version 0.12 or later) is available, then Object::InsideOut also tries to do I<the right thing> with method chaining for I<set> accessors that don't store/return objects. In this case, the object used to invoke the I<set> accessor will also be used to invoke the chained method (just as though the I<set> accessor were declared with S<C<'Return' =E<gt> 'Object'>>):

 $obj->set_data('data')->do_something();

To make use of this feature, just add C<use Want;> to the beginning of your application.

Note, however, that this special handling does not apply to I<get> accessors, nor to I<combination> accessors invoked without an argument (i.e., when used as a I<get> accessor). These must return objects in order for method chaining to succeed.

=head2 :lvalue Accessors

As documented in L<perlsub/"Lvalue subroutines">, an C<:lvalue> subroutine returns a modifiable value. This modifiable value can then, for example, be used on the left-hand side (hence C<LVALUE>) of an assignment statement, or a substitution regular expression.
For Perl 5.8.0 and later, Object::InsideOut supports the generation of C<:lvalue> accessors such that their use in an C<LVALUE> context will set the value of the object's field. Just add C<'lvalue' =E<gt> 1> to the I<set> accessor's attribute. (C<'lvalue'> may be abbreviated to C<'lv'>.) Additionally, C<:Lvalue> (or its abbreviation C<:lv>) may be used for a combined I<get/set> C<:lvalue> accessor.

The generation of C<:lvalue> accessors requires the installation of the L<Want> module (version 0.12 or later) from CPAN. See particularly the section L<Want/"Lvalue subroutines:"> for more information.

C<:lvalue> accessors also work like regular I<set> accessors in being able to accept arguments, return values, and so on:

 my @pri :Field :Lvalue('Name' => 'priority', 'Return' => 'Old');
 ...
 my $old_pri = $obj->priority(10);

C<:lvalue> accessors can be used in L<method chains|/"Method Chaining">.

B<Caveats>: While still classified as I<experimental>, Perl's support for C<:lvalue> subroutines has been around since 5.6.0, and a good number of CPAN modules make use of them.

By definition, because C<:lvalue> accessors return the I<location> of a field, they break encapsulation. As a result, some OO advocates eschew the use of C<:lvalue> accessors.

C<:lvalue> accessors are slower than corresponding I<non-lvalue> accessors. This is due to the fact that more code is needed to handle all the diverse ways in which C<:lvalue> accessors may be used. (I've done my best to optimize the generated code.)
For example, here's the code that is generated for a simple combined accessor:

 *Foo::foo = sub {
     return ($$field[${$_[0]}]) if (@_ == 1);
     $$field[${$_[0]}] = $_[1];
 };

The corresponding code for an C<:lvalue> combined accessor is considerably longer and more complex, which accounts for the difference in speed.

=head1 ALL-IN-ONE

Parameter naming and accessor generation may be combined:

 my @data :Field :All(data);

This is I<syntactic shorthand> for:

 my @data :Field :Arg(data) :Acc(data);

If you want the accessor to be C<:lvalue>, use:

 my @data :Field :LV_All(data);

If I<standard> accessors are desired, use:

 my @data :Field :Std_All(data);

Attribute parameters affecting the I<set> accessor may also be used. For example, if you want I<standard> accessors with an C<:lvalue> I<set> accessor:

 my @data :Field :Std_All('Name' => 'data', 'Lvalue' => 1);

If you want a combined accessor that returns the I<old> value on I<set> operations:

 my @data :Field :All('Name' => 'data', 'Ret' => 'Old');

And so on.

If you need to add attribute parameters that affect the C<:Arg> portion (e.g., C<Default>, C<Mandatory>, etc.), then you cannot use C<:All>. Fall back to using the separate attributes. For example:

 my @data :Field
          :Arg('Name' => 'data', 'Mand' => 1)
          :Acc('Name' => 'data', 'Ret' => 'Old');

=head1 READONLY FIELDS

If you want to declare a I<read-only> field (i.e., one that can be initialized and retrieved, but which doesn't have a I<set> accessor):

 my @data :Field :Arg(data) :Get(data);

there is a I<syntactic shorthand> for that, too:

 my @data :Field :ReadOnly(data);

or just:

 my @data :Field :RO(data);

If a I<standard> I<get> accessor is desired, use:

 my @data :Field :Std_RO(data);

For obvious reasons, attribute parameters affecting the I<set> accessor cannot be used with read-only fields, nor can C<:ReadOnly> be combined with C<:LValue>. As with C<:All>, if you need to add attribute parameters that affect the C<:Arg> portion, then you cannot use the C<:RO> shorthand: Fall back to using the separate attributes in such cases.
For example:

 my @data :Field
          :Arg('Name' => 'data', 'Mand' => 1)
          :Get('Name' => 'data');

=head1 DELEGATORS

In addition to autogenerating accessors for a given field, you can also autogenerate I<delegators> to that field. A delegator is an accessor that forwards its call to one of the object's fields. For example, if your I<Car> object has an C<@engine> field, then you might need to send all acceleration requests to the I<Engine> object stored in that field. Likewise, all braking requests may need to be forwarded to the I<Car>'s field that stores the I<Brakes> object.

You could write such forwarding methods by hand, but if I<Car> needs to forward other method calls to its I<Engine> or I<Brakes>, this quickly becomes tedious, repetitive, and error-prone. So, instead, you can just tell Object::InsideOut that a particular method should be automatically forwarded to a particular field, by specifying a C<:Handles> attribute on that field:

 my @engine :Field :Get(engine) :Handles(accelerate);

Sometimes the method to be delegated does not have the same name as the method provided by the object stored in the field. If, for example, the I<Brake> class provides an C<engage()> method, rather than a C<brake()> method, then you'd need C<Car::brake()> to be implemented as:

 sub brake
 {
     my ($self, $foot_pressure) = @_;
     $self->brakes->engage($foot_pressure);
 }

You can achieve that using the C<:Handles> attribute, like so:

 my @brakes :Field :Get(brakes) :Handles(brake-->engage);

The long arrow version still creates a delegator method C<brake()>, but makes that method delegate to your I<Brakes> object by calling its C<engage()> method instead.

If you are delegating a large number of methods to a particular field, listing them all in C<:Handles> attributes becomes tedious, and if the interface of (for example) the C<Computer::Onboard> class changes, you have to change those C<:Handles> declarations, too. Sometimes, all you really want to say is: "This field should handle anything it I<can> handle". To do that, you write:

 my @onboard_computer :Field
                      :Get(comp)
                      :Type(Computer::Onboard)
                      :Handles(Computer::Onboard);

That is, if a C<:Handles> directive is given a name that includes a C<::>, it treats that name as a class name, rather than a method name.
Then it checks that class's metadata (see L</"INTROSPECTION">), retrieves a list of all the method names from the class, and uses that as the list of method names to delegate.

Unlike an explicit C<:Handles( method_name )>, a C<:Handles( Class::Name )> is tolerant of name collisions. If any method of C<Class::Name> has the same name as another method or delegator that has already been installed in the current class, then C<:Handles> just silently ignores that particular method, and doesn't try to replace the existing one. In other words, a C<:Handles(Class::Name)> won't install a delegator to a method in C<Class::Name> if that method is already being handled somewhere else by the current class.

For classes that don't have a C<::> in their name (e.g., C<DateTime> and C<POE>), just append a C<::> to the class name:

 my @init_time :Field
               :Get(init_time)
               :Type( DateTime )
               :Default( DateTime->now() )
               :Handles( DateTime:: );

Note that, when using the class-based version of C<:Handles>, the class in question must already have been loaded so that the list of its methods can be retrieved.

C<Handles> may be abbreviated to C<Handle> or C<Hand>.

NOTES: Failure to add the appropriate object to the delegation field will lead to errors such as: B<Can't call method "bar" on an undefined value>. Typos in C<:Handles> attribute declarations will lead to errors such as: B<Can't locate object method "bat" via package "Foo">. Adding an object of the wrong class to the delegation field will lead to the same error, but can be avoided by adding a C<:Type> attribute for the appropriate class.

=head1 PERMISSIONS

=head2 Restricted and Private Accessors

By default, L<automatically generated accessors|/"ACCESSOR GENERATION"> can be called at any time. In other words, their access permission is I<public>.

If desired, accessors can be made I<restricted> - in which case they can only be called from within the class and any child classes in the hierarchy that are derived from it - or I<private> - in which case they can only be called from within the class itself. This is done by adding a C<Permission> parameter to the accessor attribute; for example:

 my @data :Field :Acc('Name' => 'data', 'Perm' => 'Restricted');

For a I<standard> pair of I<get_/set_> accessors, the permission setting is applied to both accessors.
If different permissions are required on the two accessors, then you'll have to use separate C<:Get> and C<:Set> attributes on the field; for example:

 my @data :Field
          :Get(data)
          :Set('Name' => 'set_data', 'Perm' => 'Private');

C<Permission> may be abbreviated to C<Perm>; C<Private> may be abbreviated to C<Priv>; and C<Restricted> may be abbreviated to C<Restrict>.

=head2 Restricted and Private Methods

In the same vein as described above, access to methods can be narrowed by use of C<:Restricted> and C<:Private> attributes.

 sub foo :Restricted
 {
     my $self = shift;
     ...
 }

Without either of these attributes, most methods have I<public> access. If desired, you may explicitly label them with the C<:Public> attribute.

=head2 Exemptions

It is also possible to specify classes that are exempt from a method's I<Restricted> or I<Private> access permissions (i.e., such classes are allowed to call the method even though they are outside the class hierarchy).

=head2 Hidden Methods

For subroutines marked with the following attributes (most of which are discussed later in this document):

=over

=item :ID

=item :PreInit

=item :Init

=item :Replicate

=item :Destroy

=item :Automethod

=item :Dumper

=item :Pumper

=item :MOD_*_ATTRS

=item :FETCH_*_ATTRS

=back

Object::InsideOut normally renders them uncallable (hidden) to class and application code (as they should normally only be needed by Object::InsideOut itself). If needed, this behavior can be overridden by adding the C<Public>, C<Restricted> or C<Private> attribute parameters:

 sub _init :Init(private)    # Callable from within this class
 {
     my ($self, $args) = @_;
     ...
 }

=head2 Restricted and Private Classes

Permission for object creation on a class can be narrowed by adding a C<:Restricted> or C<:Private> flag to its S<C<use Object::InsideOut ...>> declaration. This basically adds C<:Restricted/:Private> permissions on the C<-E<gt>new()> method for that class. Exemptions are also supported.

 package Foo; {
     use Object::InsideOut;
     ...
 }

 package Bar; {
     use Object::InsideOut 'Foo', ':Restricted(Ping, Pong)';
     ...
 }

In the above, class C<Bar> inherits from class C<Foo>, and its constructor is restricted to itself, classes that inherit from C<Bar>, and the classes C<Ping> and C<Pong>.
As constructors are inherited, any class that inherits from C<Bar> would also be a restricted class. To overcome this, any child class would need to add its own permission declaration:

 package Baz; {
     use Object::InsideOut qw/Bar :Private(My::Class)/;
     ...
 }

Here, class C<Baz> inherits from class C<Bar>, and its constructor is restricted to itself (i.e., private) and class C<My::Class>.

Inheriting from a C<:Private> class is permitted, but objects cannot be created for that class unless it has a permission declaration of its own:

 package Zork; {
     use Object::InsideOut qw/:Public Baz/;
     ...
 }

Here, class C<Zork> inherits from class C<Baz>, and its constructor has unrestricted access. (In general, don't use the C<:Public> declaration for a class except to overcome constructor permissions inherited from parent classes.)

=head1 TYPE CHECKING

Object::InsideOut can be directed to add type-checking code to the I<set/combined> accessors it generates, and to perform type checking on object initialization parameters.

=head2 Field Type Checking

Type checking for a field can be specified by adding the C<:Type> attribute to the field declaration:

 my @count :Field :Type(numeric);
 my @objs  :Field :Type(list(My::Class));

The C<:Type> attribute results in type checking code being added to I<set/combined> accessors generated by Object::InsideOut, and will perform type checking on object initialization parameters processed by the C<:Arg> attribute.

Available types are:

=over

=item 'scalar'

Permits anything that is not a reference.

=item 'numeric'

Can also be specified as C<Num> or C<Number>. This uses L<Scalar::Util::looks_like_number()|Scalar::Util/"looks_like_number EXPR"> to test the input value.

=item 'list' or 'array'

This type permits an accessor to accept multiple values (which are then placed in an array ref) or a single array ref.

=item 'list(_subtype_)' or 'array(_subtype_)'

As above, but each element of the list is also checked against the specified subtype:

=over

=item 'scalar'

Same as for the basic type above.

=item 'numeric'

Same as for the basic type above.

=item A class name

Same as for the basic type below.

=item A reference type

Any reference type (in all caps) as returned by L<ref()|perlfunc/"ref EXPR">.
=back

=item 'ARRAY_ref'

=item 'ARRAY_ref(_subtype_)'

This specifies that only a single array reference is permitted. Can also be specified as C<ARRAYref>. When specified, the contents of the array ref are checked against the specified subtype as per the above.

=item 'HASH'

This type permits an accessor to accept multiple S<C<key =E<gt> value>> pairs (which are then placed in a hash ref) or a single hash ref. For object initialization parameters, only a single hash ref is permitted.

=item 'HASH_ref'

This specifies that only a single hash reference is permitted. Can also be specified as C<HASHref>.

=item 'SCALAR_ref'

This type permits an accessor to accept a single scalar reference. Can also be specified as C<SCALARref>.

=item A class name

This permits only an object of the specified class, or one of its sub-classes (i.e., type checking is done using C<-E<gt>isa()>). For example, C<My::Class>. The class name C<UNIVERSAL> permits any object. The class name C<Object::InsideOut> permits any object generated by an Object::InsideOut class.

=item Other reference type

This permits only a reference of the specified type (as returned by L<ref()|perlfunc/"ref EXPR">). The type must be specified in all caps. For example, C<CODE>.

=back

The type may also be given as a subroutine reference that performs custom type checking: The subroutine is called with the value to be checked as its argument, and should return I<true> if the value is acceptable, and I<false> otherwise.

=head2 Type Checking on C<:Init> Parameters

For object initialization parameters that are sent to the C<:Init> subroutine during object initialization, the parameter's type can be specified in the C<:InitArgs> hash for that parameter using the same types as specified in the previous section. For example:

 my %init_args :InitArgs = (
     'COUNT' => {
         'Type' => 'numeric',
     },
     'OBJS' => {
         'Type' => 'list(My::Class)',
     },
 );

One exception involves custom type checking: If referenced in an C<:InitArgs> hash, the type checking subroutine cannot be made C<:Private>:

 package My::Class; {
     use Object::InsideOut;

     sub check_type    # Cannot be :Private
     {
         ...
     }

     my %init_args :InitArgs = (
         'ARG' => {
             'Type' => \&check_type,
         },
     );
     ...
 }

Also, as shown, it doesn't have to be a fully-qualified name.

=head1 CUMULATIVE METHODS

Normally, methods with the same name in different classes of a class hierarchy override one another. Object::InsideOut also supports I<cumulative methods>: identically named methods that are invoked in each class of the hierarchy, with their results accumulated and returned to the caller. By default, cumulative methods are tagged with the C<:Cumulative> attribute (or S<C<:Cumulative(top down)>>), and propagate from the I<top down> through the class hierarchy (i.e., from the parent classes down through the child classes). If tagged with S<C<:Cumulative(bottom up)>>, they will propagate from the object's class upward through the parent classes.

=head1 CHAINED METHODS

In addition to C<:Cumulative>, Object::InsideOut provides a way of creating methods that are chained together so that their return values are passed as input arguments to other similarly named methods in the same class hierarchy. In this way, the chained methods act as though they were I<piped> together.

For example, imagine you had a method called C<format_name()> in each class of a hierarchy, with each class's version performing one step of cleaning up a name, taking its input from C<@_> and returning the modified result. Adding the C<:Chained> attribute to each class's C<format_name()> method causes each method's return value to be passed on to the next one in the hierarchy. Calling C<format_name> in C<Customer> would cause leading and trailing whitespace to be removed, then the name to be properly cased, and finally whitespace to be compressed to a single space. The resulting C<$name> would be returned to the caller:

 my ($name) = $obj->format_name($name_raw);

Unlike C<:Cumulative> methods, C<:Chained> methods B<always> return an array - even if there is only one value returned. Therefore, C<:Chained> methods should always be called in an array context, as illustrated above.

The default direction is to chain methods from the parent classes at the top of the class hierarchy down through the child classes. You may use the attribute S<C<:Chained(top down)>> to make this more explicit. If you label the method with the S<C<:Chained(bottom up)>> attribute, then the chained methods are called starting with the object's class and working upward through the parent classes in the class hierarchy, similar to how S<C<:Cumulative(bottom up)>> works.
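As a minimal sketch of a cumulative method (the class and method names here are invented), each class in the hierarchy contributes its own result:

 package Base; {
     use Object::InsideOut;

     sub parts :Cumulative { return ('base'); }
 }

 package Derived; {
     use Object::InsideOut 'Base';

     sub parts :Cumulative { return ('derived'); }
 }

 package main;

 # With the default top-down propagation, the parent class's
 # result comes first: ('base', 'derived')
 my @parts = Derived->new()->parts();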
=head1 ARGUMENT MERGING

As mentioned under L</"Object Creation">, the C<-E<gt>new()> method can take parameters that are passed in as combinations of S<C<key =E<gt> value>> pairs and/or hash refs:

 my $obj = My::Class->new(
     'param_X' => 'value_X',
     'param_Y' => 'value_Y',
     {
         'param_A' => 'value_A',
         'param_B' => 'value_B',
     },
     {
         'param_Q' => 'value_Q',
     },
 );

The parameters are I<merged> into a single hash ref before they are processed. Adding the C<:MergeArgs> attribute to your methods gives them a similar capability. Your method will then get two arguments: The object and a single hash ref of the I<merged> arguments. For example:

 package Foo; {
     use Object::InsideOut;
     ...
     sub my_method :MergeArgs
     {
         my ($self, $args) = @_;

         my $param = $args->{'param'};
         my $data  = $args->{'data'};
         my $flag  = $args->{'flag'};
         ...
     }
 }

 package main;

 my $obj = Foo->new(...);

 $obj->my_method( { 'data' => 42,
                    'flag' => 'true' },
                  'param' => 'foo' );

=head1 ARGUMENT VALIDATION

A number of users have asked about argument validation for methods. For this, I recommend using L<Params::Validate>:

 package Foo; {
     use Object::InsideOut;
     use Params::Validate ':all';

     sub foo
     {
         my $self = shift;
         my %args = validate(@_, { bar => 1 });
         my $bar = $args{bar};
         ...
     }
 }

Using L<Attribute::Params::Validate>, attributes are used for argument validation specifications:

 package Foo; {
     use Object::InsideOut;
     use Attribute::Params::Validate;

     sub foo :method :Validate(bar => 1)
     {
         my $self = shift;
         my %args = @_;
         my $bar = $args{bar};
         ...
     }
 }

Note that in the above, Perl's C<:method> attribute (in all lowercase) is needed.

There is some incompatibility between Attribute::Params::Validate and some of Object::InsideOut's attributes. Namely, you cannot use C<:Validate> with C<:Private>, C<:Restricted>, C<:Cumulative>, C<:Chained> or C<:MergeArgs>. In these cases, use the C<validate()> function from L<Params::Validate> instead.
=head1 AUTOMETHODS

There are significant issues related to Perl's C<AUTOLOAD> mechanism that cause it to be ill-suited for use in a class hierarchy. Therefore, Object::InsideOut implements its own C<:Automethod> mechanism to overcome these problems.

Classes requiring C<AUTOLOAD>-type capabilities must provide a subroutine labeled with the C<:Automethod> attribute. The C<:Automethod> subroutine will be called with the object and the arguments in the original method call (the same as for C<AUTOLOAD>). The C<:Automethod> subroutine should return either a subroutine reference that implements the requested method's functionality, or else just end with C<return;> to indicate that it doesn't know how to handle the request.

Using its own C<AUTOLOAD> subroutine (which is exported to every class), Object::InsideOut walks through the class tree, calling each C<:Automethod> subroutine, as needed, to fulfill an unimplemented method call.

The name of the method being called is passed as C<$_> instead of C<$AUTOLOAD>, and is I<not> prefixed with the class name. If the C<:Automethod> subroutine also needs to access the C<$_> from the caller's scope, it is available as C<$CALLER::_>.

Automethods can also be made to act as L</"CUMULATIVE METHODS"> or L</"CHAINED METHODS">. In these cases, the C<:Automethod> subroutine should return two values: The subroutine ref to handle the method call, and a string designating the type of method. The designator has the same form as the attributes used to designate C<:Cumulative> and C<:Chained> methods:

 ':Cumulative'  or  ':Cumulative(top down)'
 ':Cumulative(bottom up)'

 ':Chained'  or  ':Chained(top down)'
 ':Chained(bottom up)'

The following skeletal code (reconstructed for illustration) shows how an C<:Automethod> subroutine might be structured:

 sub _automethod :Automethod
 {
     my $self = $_[0];
     my $class = ref($self) || $self;
     my $method_name = $_;

     # Determine if this class can handle the method call
     if (...) {
         return;    # Not handled here - let other classes try
     }

     # Generate a closure to handle the method call
     my $handler = sub { ... };

     ### OPTIONAL ###
     # Install the handler so the Automethod
     # isn't called again for this method
     no strict 'refs';
     *{$class.'::'.$method_name} = $handler;
     ################

     return ($handler);
 }

The I<OPTIONAL> code above for installing the generated handler as a method should not be used with C<:Cumulative> or C<:Chained> automethods.
=head1 OBJECT SERIALIZATION

=head2 Basic Serialization

=over

=item my $array_ref = $obj->dump();

=item my $string = $obj->dump(1);

Object::InsideOut exports a method called C<-E<gt>dump()> to each class that returns either a I<Perl> or a string representation of the object that invokes the method.

The I<Perl> representation is returned when C<-E<gt>dump()> is called without any arguments. It consists of an array ref containing the name of the object's class and a hash ref of the object's data, keyed by class name, with S<C<key =E<gt> value>> pairs for the object's fields. For example:

    [
      'My::Class::Sub',
      {
        'My::Class' => {
            'data' => 'value'
        },
        'My::Class::Sub' => {
            'life' => 42
        }
      }
    ]

The name for an object field (I<data> and I<life> in the example above) can be specified by adding the C<:Name> attribute to the field:

    my @life :Field :Name(life);

If the C<:Name> attribute is not used, then the name for a field will be either the name associated with an C<:All> or C<:Arg> attribute, its I<get> method name, its I<set> method name, or, failing all that, a string of the form C<ARRAY(0x...)> or C<HASH(0x...)>.

When called with a I<true> argument, C<-E<gt>dump()> returns a string version of the I<Perl> representation using L<Data::Dumper>.

In the event of a method naming conflict, the C<-E<gt>dump()> method can be called using its fully-qualified name:

    my $dump = $obj->Object::InsideOut::dump();

=item my $obj = Object::InsideOut->pump($data);

C<Object::InsideOut-E<gt>pump()> takes the output from the C<-E<gt>dump()> method, and returns an object that is created using that data. If C<$data> is the array ref returned by using C<$obj-E<gt>dump()>, then the data is inserted directly into the corresponding fields for each class in the object's class hierarchy. If C<$data> is the string returned by using C<$obj-E<gt>dump(1)>, then it is C<eval>ed to turn it into an array ref, and then processed as above.

B<Caveats>: If any of an object's fields are dumped to field name keys of the form C<ARRAY(0x...)> or C<HASH(0x...)> (see above), then the data will not be reloadable using C<Object::InsideOut-E<gt>pump()>.
To overcome this problem, the class developer must either add C<:Name> attributes to the C<:Field> declarations (see above), or provide a C<:Dumper>/C<:Pumper> pair of subroutines as described below. Dynamically altering a class (e.g., using L<-E<gt>create_field()|/"DYNAMIC FIELD CREATION">) after objects have been dumped will result in C<undef> fields when pumped back in regardless of whether or not the added fields have defaults. Modifying the output from C<-E<gt>dump()>, and then feeding it into C<Object::InsideOut-E<gt>pump()> will work, but is not specifically supported. If you know what you're doing, fine, but you're on your own. =item C<:Dumper> Subroutine Attribute If a class requires special processing to dump its data, then it can provide a subroutine labeled with the C<:Dumper> attribute. This subroutine will be sent the object that is being dumped. It may then return any type of scalar the developer deems appropriate. Usually, this would be a hash ref containing S<C<key =E<gt> value>> pairs for the object's fields. For example: my @data :Field; sub _dump :Dumper { my $obj = $_[0]; my %field_data; $field_data{'data'} = $data[$$obj]; return (\%field_data); } Just be sure not to call your C<:Dumper> subroutine C<dump> as that is the name of the dump method exported by Object::InsideOut as explained above. =item C<:Pumper> Subroutine Attribute If a class supplies a C<:Dumper> subroutine, it will most likely need to provide a complementary C<:Pumper> labeled subroutine that will be used as part of creating an object from dumped data using C<Object::InsideOut-E<gt>pump()>. The subroutine will be supplied the new object that is being created, and whatever scalar was returned by the C<:Dumper> subroutine. 
The corresponding C<:Pumper> for the example C<:Dumper> above would be: sub _pump :Pumper { my ($obj, $field_data) = @_; $obj->set(\@data, $field_data->{'data'}); } =back =head2 Storable Object::InsideOut also supports object serialization using the L<Storable> module. There are two methods for specifying that a class can be serialized using L<Storable>. The first method involves adding L<Storable> to the Object::InsideOut declaration in your package: package My::Class; { use Object::InsideOut qw(Storable); ... } and adding S<C<use Storable;>> in your application. Then you can use the C<-E<gt>store()> and C<-E<gt>freeze()> methods to serialize your objects, and the C<retrieve()> and C<thaw()> subroutines to de-serialize them. package main; use Storable; use My::Class; my $obj = My::Class->new(...); $obj->store('/tmp/object.dat'); ... my $obj2 = retrieve('/tmp/object.dat'); The other method of specifying L<Storable> serialization involves setting a S<C<::storable>> variable inside a C<BEGIN> block for the class prior to its use: package main; use Storable; BEGIN { $My::Class::storable = 1; } use My::Class; NOTE: The I<caveats> discussed above for the C<-E<gt>pump()> method are also applicable when using the Storable module. =head1 OBJECT COERCION Object::InsideOut provides support for various forms of object coercion through the L<overload> mechanism. For instance, if you want an object to be usable directly in a string, you would supply a subroutine in your class labeled with the: =over =item :Stringify =item :Numerify =item :Boolify =item :Arrayify =item :Hashify =item :Globify =item :Codify =back Coercing an object to a scalar (C<:Scalarify>) is B<not> supported as C<$$obj> is the ID of the object and cannot be overridden. 
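A minimal sketch of the coercion mechanism described above (the method name C<as_string> and the field C<name> are illustrative; any subroutine labeled with the attribute will do):

    package My::Class; {
        use Object::InsideOut;

        my @name :Field :Arg(name);

        # Called whenever an object is used in string context
        sub as_string :Stringify
        {
            my $self = shift;
            return ("My::Class object '" . $name[$$self] . "'");
        }
    }

    package main;

    my $obj = My::Class->new('name' => 'demo');
    print("$obj\n");   # Interpolation invokes the :Stringify handler

The other coercion attributes (C<:Numerify>, C<:Boolify>, etc.) follow the same pattern: label a subroutine, and it is invoked when the object is used in the corresponding context.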
=head1 CLONING =head2 Object Cloning Copies of objects can be created using the C<-E<gt>clone()> method which is exported by Object::InsideOut to each class: my $obj2 = $obj->clone(); When called without arguments, C<-E<gt>clone()> creates a I<shallow> copy of the object, meaning that any complex data structures (i.e., array, hash or scalar refs) stored in the object will be shared with its clone. Calling C<-E<gt>clone()> with a I<true> argument: my $obj2 = $obj->clone(1); creates a I<deep> copy of the object such that internally held array, hash or scalar refs are I<replicated> and stored in the newly created clone. I<Deep> cloning can also be controlled at the field level, and is covered in the next section. Note that cloning does not clone internally held objects. For example, if C<$foo> contains a reference to C<$bar>, a clone of C<$foo> will also contain a reference to C<$bar>; not a clone of C<$bar>. If such behavior is needed, it must be provided using a L<:Replicate|/"Object Replication"> subroutine. =head2 Field Cloning Object cloning can be controlled at the field level such that specified fields are I<deeply> copied when C<-E<gt>clone()> is called without any arguments. This is done by adding the C<:Deep> attribute to the field: my @data :Field :Deep; =head1 WEAK FIELDS Frequently, it is useful to store L<weaken|Scalar::Util/"weaken REF">ed references to data or objects in a field. Such a field can be declared as C<:Weak> so that data (i.e., references) set via Object::InsideOut generated accessors, parameter processing using C<:Arg>, the C<-E<gt>set()> method, etc., will automatically be L<weaken|Scalar::Util/"weaken REF">ed after being stored in the field array/hash. 
    my @data :Field :Weak;

NOTE: If data in a I<weak> field is set directly (i.e., the C<-E<gt>set()> method is not used), then L<weaken()|Scalar::Util/"weaken REF"> must be invoked on the stored reference afterwards:

    $self->set(\@field, $data);
    Scalar::Util::weaken($field[$$self]);

(This is another reason why the C<-E<gt>set()> method is recommended for setting field data within class code.)

=head1 DYNAMIC FIELD CREATION

Normally, object fields are declared as part of the class code. However, some classes may need the capability to create object fields I<on-the-fly>, for example, as part of an C<:Automethod> subroutine. This can be done using the C<-E<gt>create_field()> class method. Its first argument is a string containing the declaration for the field, prefixed with C<@> or C<%> to declare an array field or hash field, respectively. The remaining string arguments should be attributes declaring accessors and the like. The L<:hash_only|/"HASH ONLY CLASSES"> flag, if used by any class in the hierarchy, is honored (i.e., attempts to create array-based fields in such classes will fail). Here's an example:

    My::Class->create_field('@data', ":Acc(data)");

=head1 RUNTIME INHERITANCE

The class method C<-E<gt>add_class()> provides the capability to dynamically add classes to a class hierarchy at runtime. For example, suppose you had a simple I<state> class:

    package Trait::State; {
        use Object::InsideOut;

        my %state :Field :Set(state);
    }

This could be added to another class at runtime using:

    My::Class->add_class('Trait::State');

This permits, for example, application code to dynamically modify a class without having it create an actual sub-class.

=head1 PREPROCESSING

=head2 Parameter Preprocessing

You can specify a subroutine for a C<-E<gt>new()> parameter that will be called on that parameter prior to object initialization. The subroutine is specified with a C<Preprocess> specifier, either inside the C<:Arg> attribute or in the C<:InitArgs> hash:

    my @data :Field
             :Arg('Name' => 'DATA', 'Preprocess' => \&My::Class::preproc);

    my %init_args :InitArgs = (
        'PARAM' => {
            'Preprocess' => \&My::Class::preproc,
        },
    );

    sub preproc
    {
        my ($class, $param, $spec, $obj, $value) = @_;
        ...
        return ($value);
    }

When specified inside the C<:Arg> attribute, the subroutine name must be fully-qualified, as illustrated. Further, if not referenced in the C<:InitArgs> hash, the preprocessing subroutine can be given the C<:Private> attribute.

As the above illustrates, the parameter preprocessing subroutine is sent five arguments:

=over

=item * The name of the class associated with the parameter

This would be C<My::Class> in the example above.

=item * The name of the parameter

Either C<DATA> or C<PARAM> in the example above.
=item * A hash ref of the parameter's specifiers

This is either a hash ref containing the C<:Arg> attribute parameters, or the hash ref paired to the parameter's key in the C<:InitArgs> hash.

=item * The object being initialized

=item * The parameter's value

This is the value assigned to the parameter in the C<-E<gt>new()> method's argument list. If the parameter was not provided to C<-E<gt>new()>, then C<undef> will be sent.

=back

The return value of the preprocessing subroutine will then be assigned to the parameter.

Be careful about what types of data the preprocessing subroutine tries to make use of that are I<external> to the supplied arguments. For instance, because the order of parameter processing is not specified, the preprocessing subroutine cannot rely on whether or not some other parameter is set. Such processing would need to be done in the C<:Init> subroutine. It can, however, make use of object data set by classes I<higher up> in the class hierarchy. (That is why the object is provided as one of the arguments.)

Possible uses for parameter preprocessing include:

=over

=item * Overriding the supplied value (or even deleting it by returning C<undef>)

=item * Providing a dynamically-determined default value

=back

I<Preprocess> may be abbreviated to I<Preproc> or I<Pre>.

=head2 I<Set> Accessor Preprocessing

You can specify a code ref (either in the form of an anonymous subroutine, or a fully-qualified subroutine name) for a I<set> accessor that will be called on the accessor's arguments before they are stored in the field. The preprocessing subroutine is sent the following arguments:

=over

=item * The object used to invoke the accessor

=item * A reference to the field associated with the accessor

=item * The argument(s) sent to the accessor

There will always be at least one argument.

=back

Usually, the preprocessing subroutine would return just a single value. For fields declared as type C<List>, multiple values may be returned.

Following preprocessing, the I<set> accessor will operate on whatever value(s) are returned by the preprocessing subroutine.
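The following sketch assumes the C<Preprocess> specifier is accepted inside an accessor attribute in the same form as for parameter preprocessing; the field and accessor names are illustrative:

    my @data :Field
             :Acc('Name' => 'data',
                  'Preprocess' => sub {
                      my ($obj, $field_ref, @args) = @_;
                      # Normalize the incoming value before it is stored
                      return (uc($args[0]));
                  });

With the above, C<$obj-E<gt>data('foo')> would store C<'FOO'> in the field, since the I<set> accessor operates on whatever the preprocessing subroutine returns.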
=head1 SPECIAL PROCESSING

=head2 Object ID

By default, the ID of an object is derived from a sequence counter for the object's class hierarchy. This should suffice for nearly all cases of class development. If there is a special need for the module code to control the object ID (see L<Math::Random::MT::Auto> as an example), then a subroutine labelled with the C<:ID> attribute can be specified:

    sub _id :ID
    {
        my $class = $_[0];

        # Generate/determine a unique object ID
        ...

        return ($id);
    }

The ID returned by your subroutine can be any kind of I<regular> scalar (e.g., a string or a number). However, if the ID is something other than a low-valued integer, then you will have to architect B<all> your classes using hashes for the object fields. See L</"HASH ONLY CLASSES"> for details.

Within any class hierarchy, only one class may specify an C<:ID> subroutine.

=head2 Object Replication

Object replication occurs explicitly when the C<-E<gt>clone()> method is called on an object, and implicitly when threads are created in a threaded application. If a class requires special processing during replication, then it must provide a subroutine labeled with the C<:Replicate> attribute. This subroutine will be sent three arguments: The parent object, the newly created clone, and a flag:

    sub _replicate :Replicate
    {
        my ($parent, $clone, $flag) = @_;

        # Special object replication processing
        ...
    }

When invoked as part of thread creation, C<$flag> will be set to C<'CLONE'>, and the C<$parent> object is just a non-blessed anonymous scalar reference that contains the ID for the object in the parent thread.

When invoked via the C<-E<gt>clone()> method, C<$flag> will be either an empty string which denotes that a I<shallow> copy is being produced for the clone, or C<$flag> will be set to C<'deep'> indicating a I<deep> copy is being produced.

The C<:Replicate> subroutine only needs to deal with the special replication processing needed by the object: Object::InsideOut will handle all the other details.

=head2 Object Destruction

Object::InsideOut exports a C<DESTROY> method to each class that deletes an object's data from the object field arrays (hashes). If a class requires additional destruction processing (e.g., closing filehandles), then it must provide a subroutine labeled with the C<:Destroy> attribute.
This subroutine will be sent the object that is being destroyed:

    sub _destroy :Destroy
    {
        my $obj = $_[0];

        # Special object destruction processing
        ...
    }

The C<:Destroy> subroutine only needs to deal with the special destruction processing: The C<DESTROY> method will handle all the other details of object destruction.

=head1 FOREIGN CLASS INHERITANCE

Object::InsideOut supports inheritance from foreign (i.e., non-Object::InsideOut) classes. One way of declaring such inheritance is to add the foreign class to a class's S<C<use Object::InsideOut ...>> declaration:

    package My::Class; {
        use Object::InsideOut qw(Foreign::Class);
        ...
    }

Suppose, for example, that C<Foreign::Class> has a class method called C<foo>. With the above, you can access that method using C<My::Class-E<gt>foo()> instead.

Multiple foreign inheritance is supported, as well:

    package My::Class; {
        use Object::InsideOut qw(Foreign::Class Other::Foreign::Class);
        ...
    }

=over

=item $self->inherit($obj, ...);

To use object methods from foreign classes, an object must I<inherit> from an object of that class. This would normally be done inside a class's C<:Init> subroutine:

    package My::Class; {
        use Object::InsideOut qw(Foreign::Class);

        sub init :Init
        {
            my ($self, $args) = @_;

            my $foreign_obj = Foreign::Class->new(...);
            $self->inherit($foreign_obj);
        }
    }

Thus, with the above, if C<Foreign::Class> has an object method called C<bar>, you can call that method from your own objects:

    package main;

    my $obj = My::Class->new();
    $obj->bar();

Object::InsideOut's C<AUTOLOAD> subroutine handles the dispatching of the C<-E<gt>bar()> method call using the internally held inherited object (in this case, C<$foreign_obj>).

Multiple inheritance is supported, as well: You can call the C<-E<gt>inherit()> method multiple times, or make just one call with all the objects to be inherited from.

C<-E<gt>inherit()> is a restricted method. In other words, you cannot use it on an object outside of code belonging to the object's class tree (e.g., you can't call it from application code).
In the event of a method naming conflict, the C<-E<gt>inherit()> method can be called using its fully-qualified name: $self->Object::InsideOut::inherit($obj); =item my @objs = $self->heritage(); =item my $obj = $self->heritage($class); =item my @objs = $self->heritage($class1, $class2, ...); Your class code can retrieve any inherited objects using the C<-E<gt>heritage()> method. When called without any arguments, it returns a list of any objects that were stored by the calling class using the calling object. In other words, if class C<My::Class> uses object C<$obj> to store foreign objects C<$fobj1> and C<$fobj2>, then later on in class C<My::Class>, C<$obj-E<gt>heritage()> will return C<$fobj1> and C<$fobj2>. C<-E<gt>heritage()> can also be called with one or more class name arguments. In this case, only objects of the specified class(es) are returned. In the event of a method naming conflict, the C<-E<gt>heritage()> method can be called using its fully-qualified name: my @objs = $self->Object::InsideOut::heritage(); =item $self->disinherit($class [, ...]) =item $self->disinherit($obj [, ...]) The C<-E<gt>disinherit()> method disassociates (i.e., deletes) the inheritance of foreign object(s) from an object. The foreign objects may be specified by class, or using the actual inherited object (retrieved via C<-E<gt C<-E<gt>disinherit()> method can be called using its fully-qualified name: $self->Object::InsideOut::disinherit($obj [, ...]) =back B I<blessed hash> objects), is needed outside the class, then you'll need to write your own accessors for that. B<LIMITATION>: You cannot use fully-qualified method names to access foreign methods (when encapsulated foreign objects are involved). Thus, the following will not work: my $obj = My::Class->new(); $obj->Foreign::Class::bar(); Normally, you shouldn't ever need to do the above: C<$obj-E<gt>bar()> would suffice. 
The only time this may be an issue is when the I<native> class I<overrides> an inherited foreign class's method (e.g., C<My::Class> has its own C<-E<gt>bar()> method). Such overridden methods are not directly callable. If such overriding is intentional, then this should not be an issue: No one should be writing code that tries to by-pass the override. However, if the overriding is accidentally, then either the I<native> method should be renamed, or the I<native> class should provide a wrapper method so that the functionality of the overridden method is made available under a different name. =head2 C<use base> and Fully-qualified Method Names The foreign inheritance methodology handled by the above is predicated on non-Object::InsideOut classes that generate their own objects and expect their object methods to be invoked via those objects. There are exceptions to this rule: =over =item C<$obj-E<gt>inherit($foreign)> is not used.) In this case, you can either: a. Declare the foreign class using the standard method (i.e., S<C<use Object::InsideOut qw(Foreign::Class);>>), and invoke its methods using their full path (e.g., C<$obj-E<gt>Foreign::Class::method();>); or b. You can use the L<base> pragma so that you don't have to use the full path for foreign methods. package My::Class; { use Object::InsideOut; use base 'Foreign::Class'; ... } The former scheme is faster. =item 2. Foreign class methods that expect to be invoked via the inheriting class. As with the above, you can either invoke the class methods using their full path (e.g., C<My::Class-E<gt>Foreign::Class::method();>), or you can S<C<use base>> so that you don't have to use the full path. Again, using the full path is faster. L<Class::Singleton> is an example of this type of class. =item 3. Class methods that don't care how they are invoked (i.e., they don't make reference to the invoking class). 
In this case, you can either use S<C<use Object::InsideOut qw(Foreign::Class);>> for consistency, or use S<C<use base qw(Foreign::Class);>> if (slightly) better performance is needed. =back If you're not familiar with the inner workings of the foreign class such that you don't know if or which of the above exceptions applies, then the formulaic approach would be to first use the documented method for foreign inheritance (i.e., S<C<use Object::InsideOut qw(Foreign::Class);>>). If that works, then I strongly recommend that you just use that approach unless you have a good reason not to. If it doesn't work, then try S<C<use base>>. =head1 INTROSPECTION For Perl 5.8.0 and later, Object::InsideOut provides an introspection API that allow you to obtain metadata on a class's hierarchy, constructor parameters, and methods. =over =item my $meta = My::Class->meta(); =item my $meta = $obj->meta(); The C<-E<gt>meta()> method, which is exported by Object::InsideOut to each class, returns an L<Object::InsideOut::Metadata> object which can then be I(); =item My::Class->isa(); =item $obj->isa(); When called in an array context, calling C<-E<gt. =item My::Class->can(); =item $obj->can(); When called in an array context, calling C<-E<gt. =back See L<Object::InsideOut::Metadata> for more details. =head1 THREAD SUPPORT For Perl 5.8.1 and later, Object::InsideOut fully supports L<threads> (i.e., is thread safe), and supports the sharing of Object::InsideOut objects between threads using L<threads::shared>. To use Object::InsideOut in a threaded application, you must put S<C<use threads;>> at the beginning of the application. (The use of S<C<require threads;>> after the program is running is not supported.) If object sharing is to be utilized, then S<C<use threads::shared;>> should follow. If you just S<C<use threads;>>, then objects from one thread will be copied and made available in a child thread. 
The addition of S<C<use threads::shared;>> in and of itself does not alter the behavior of Object::InsideOut objects. The default behavior is to I<not> share objects between threads (i.e., they act the same as with S<C<use threads;>> alone). To enable the sharing of objects between threads, you must specify which classes will be involved with thread object sharing. There are two methods for doing this. The first involves setting a C<::shared> variable (inside a C<BEGIN> block) for the class prior to its use: use threads; use threads::shared; BEGIN { $My::Class::shared = 1; } use My::Class; The other method is for a class to add a C<:SHARED> flag to its S<C" =head1 HASH ONLY CLASSES For performance considerations, it is recommended that arrays be used for class fields whenever possible. The only time when hash-bases fields are required is when a class must provide its own L<object ID|/ I<hash only> requirement can be enforced by adding the C<:HASH_ONLY> flag to a class's S<C<use Object::InsideOut ...>> declaration: package My::Class; { use Object::InsideOut ':hash_only'; ... } This will cause Object::Inside to check every class in any class hierarchy that includes such flagged classes to make sure their fields are hashes and not arrays. It will also fail any L<-E<gt>create_field()|/"DYNAMIC FIELD CREATION"> call that tries to create an array-based field in any such class. =head1 SECURITY In the default case where Object::InsideOut provides object IDs that are sequential integers, it is possible to hack together a I C<:SECURE> flag to a class's S<C<use Object::InsideOut ...>> declaration: package My::Class; { use Object::InsideOut ':SECURE'; ... } This places the module C<Object::InsideOut::Secure> in the class hierarchy. Object::InsideOut::Secure provides an L<:ID subroutine|/"Object ID"> that generates random integers for object IDs, thus preventing other code from being able to create fake objects by I<guessing> at IDs. 
Using C<:SECURE> mode requires L<Math::Random::MT::Auto> (v5.04 or later). Because the object IDs used with C<:SECURE> mode are large random values, the L<:HASH_ONLY|/"HASH ONLY CLASSES"> flag is forced on all the classes in the hierarchy. For efficiency, it is recommended that the C<:SECURE> flag be added to the topmost class(es) in a hierarchy. =head1 ATTRIBUTE HANDLERS Object::InsideOut uses I<attribute 'modify' handlers> as described in L<attributes/"Package-specific Attribute Handling">, and provides a mechanism for adding attribute handlers to your own classes. Instead of naming your attribute handler as C<MODIFY_*_ATTRIBUTES>, name it something else and then label it with the C<:MODIFY_*_ATTRIBUTES> attribute (or C<:MOD_*_ATTRS> for short). Your handler should work just as described in L<attributes/"Package-specific Attribute Handling"> I<upward> through the class hierarchy (i.e., I<bottom up>). This provides child classes with the capability to I I<attribute 'fetch' handlers>, follow the same procedures: Label the subroutine with the C<:FETCH_*_ATTRIBUTES> attribute (or C<:FETCH_*_ATTRS> for short). Contrary to the documentation in L<attributes/"Package-specific Attribute Handling">, I<attribute 'fetch' handlers> receive B<two> arguments: The relevant package name, and a reference to a variable or subroutine for which package-defined attributes are desired. Attribute handlers are normal rendered L<hidden|/"Hidden Methods">. =head1 SPECIAL USAGE =head2 Usage With C<Exporter> It is possible to use L C<BEGIN> block is needed to ensure that the L<Exporter> symbol arrays (in this case C<@EXPORT_OK>) get populated properly. =head2 Usage With C<require> and C<mod_perl> Object::InsideOut usage under L<mod_perl> and with runtime-loaded classes is supported automatically; no special coding is required. B<Caveat>: Runtime loading of classes should be performed before any objects are created within any of the classes in their hierarchies. 
If Object::InsideOut cannot create a hierarchy because of previously created objects (even if all those objects have been destroyed), a runtime error will be generated.

=head2 Singleton Classes

A singleton class is a case where you would provide your own C<-E<gt>new()> method that in turn calls Object::InsideOut's C<-E<gt>new()> method:

    package My::Class; {
        use Object::InsideOut;

        my $singleton;

        sub new {
            my $thing = shift;
            if (! $singleton) {
                $singleton = $thing->Object::InsideOut::new(@_);
            }
            return ($singleton);
        }
    }

=head1 DIAGNOSTICS

Object::InsideOut uses C<Exception::Class> for reporting errors. The base error class for this module is C<OIO>.

=over

=item Invalid ARRAY/HASH attribute

This error indicates you forgot C<use Object::InsideOut;> in your class's code.

=back

Object::InsideOut installs a C<__DIE__> handler (see L<perlfunc/"die LIST"> and L<perlfunc/"eval BLOCK">) to catch any errant exceptions from class-specific code, namely, C<:Init>, C<:Replicate>, C<:Destroy>, etc. subroutines. When using C<eval> blocks inside these subroutines, you should localize C<$SIG{'__DIE__'}> to keep Object::InsideOut's C<__DIE__> handler from interfering with exceptions generated inside the C<eval> blocks. For example:

    sub _init :Init
    {
        ...
        eval {
            local $SIG{'__DIE__'};
            ...
        };
        if ($@) {
            # Handle caught exception
        }
        ...
    }

Class code may also use the C<die> function as a method of flow control for leaving an C<eval> block. If other code you call fails to localize C<$SIG{'__DIE__'}>, you can work around this deficiency with your own C<eval> block:

    eval {
        local $SIG{'__DIE__'};   # Suppress any existing __DIE__ handler
        Some::Module::func();    # Call function that fails to localize
    };
    if ($@) {
        # Handle caught exception
    }

In addition, you should file a bug report against the offending module along with a patch that adds the missing S<C<local $SIG{'__DIE__'};>> statement.
=head1 BUGS AND LIMITATIONS If you receive an error similar to this: ERROR: Attempt to DESTROY object ID 1 of class Foo twice the cause may be that some module used by your application is doing C<require threads> somewhere in the background. L<DBI> is one such module. The workaround is to add C C<return;>. The equality operator (e.g., C<if ($obj1 == $obj2) { ...>) is overloaded for C<:SHARED> classes when L<threads::shared> is loaded. The L<overload> subroutine compares object classes and IDs because references to the same thread shared object may have different refaddrs. You cannot overload an object to a scalar context (i.e., can't C<:SCALARIFY>). You cannot use two instances of the same class with mixed thread object sharing in same application. Cannot use attributes on I<subroutine stubs> (i.e., forward declaration without later definition) with I<set> accessor accepts scalars, then you can store any inside-out object type in it. If its C<Type> is set to C<HASH>, then it can store any I L<threads::shared> version 1.39 and earlier, if storing shared objects inside other shared objects, you should use C<delete()> to remove them from internal fields (e.g., C<delete($field[$$self]);>) when necessary so that the objects' destructor gets called. Upgrading to version 1.40 or later alleviates most of this issue except during global destruction. See L<threads::shared|/"BUGS AND LIMITATIONS"> for more. With Perl 5.8.8 and earlier, there are bugs associated with L<threads::shared> that may prevent you from storing objects inside of shared objects, or using foreign inheritance with shared objects. With Perl 5.8.9 (and later) together with L<threads::shared> 1.15 (and later), you can store shared objects inside of other shared objects, and you can use foreign inheritance with shared objects (provided the foreign class supports shared objects as well). 
Due to internal complexities, the following actions are not supported in code that uses L<threads::shared> while there are any threads active: =over =item * Runtime loading of Object::InsideOut classes =item * Using L<-E<gt>add_class()|/"RUNTIME INHERITANCE"> =back C<:Automethod> subroutine. For Perl 5.8.0 there is no workaround: This bug causes Perl to core dump. For Perl 5.6.0 through 5.6.2, the workaround is to create a ref to the required variable inside the L<threads> and L<threads::shared> from CPAN, especially if you encounter other problems associated with threads. For Perl 5.8.4 and 5.8.5, the L</"Storable"> feature does not work due to a Perl bug. Use Object::InsideOut v1.33 if needed. Due to bugs in the Perl interpreter, using the introspection API (i.e. C<-E<gt>meta()>, etc.) requires Perl 5.8.0 or later. The version of L<Want> that is available via PPM for ActivePerl is defective, and causes failures when using C<:lvalue> accessors. Remove it, and then download and install the L<Want> module using CPAN. L<Devel::StackTrace> (used by L<Exception::Class>) makes use of the I<DB> namespace. As a consequence, Object::InsideOut thinks that S<C<package DB>> is already loaded. Therefore, if you create a class called I<DB> that is sub-classed by other packages, you may need to C<require> it as follows: package DB::Sub; { require DB; use Object::InsideOut qw(DB); ... } View existing bug reports at, and submit any new bugs, problems, patches, etc. to: L<> =head1 REQUIREMENTS =over =item Perl 5.6.0 or later =item L<Exception::Class> v1.22 or later =item L<Scalar::Util> v1.10 or later It is possible to install a I<pure perl> version of Scalar::Util, however, it will be missing the L<weaken()|Scalar::Util/"weaken REF"> function which is needed by Object::InsideOut. You'll need to upgrade your version of Scalar::Util to one that supports its C<XS> code. =item L<Test::More> v0.50 or later Needed for testing during installation. =item L<Want> v0.12 or later Optional. 
Provides support for L</":lvalue Accessors">.

=item L<Math::Random::MT::Auto> v5.04 or later

Optional. Provides support for L<:SECURE mode|/"SECURITY">.

=back

To cover all of the above requirements and more, it is recommended that you install L<Bundle::Object::InsideOut> using CPAN:

    perl -MCPAN -e 'install Bundle::Object::InsideOut'

This will install the latest versions of all the required and optional modules needed for full support of all of the features provided by Object::InsideOut.

=head1 SEE ALSO

L<Object::InsideOut> on MetaCPAN: L<>

Code repository: L<>

Inside-out Object Model: L<>, L<>, L<>, L<>, Chapters 15 and 16 of I<Perl Best Practices> by Damian Conway

L<Object::InsideOut::Metadata>

L<Storable>, L<Exception::Class>, L<Want>, L<Math::Random::MT::Auto>, L<attributes>, L<overload>

Sample code in the I<examples> directory of this distribution on CPAN.

=head1 ACKNOWLEDGEMENTS

Abigail S<E<lt>perl AT abigail DOT nlE<gt>> for inside-out objects in general.

Damian Conway S<E<lt>dconway AT cpan DOT orgE<gt>> for L<Class::Std>, and for delegator methods.

David A. Golden S<E<lt>dagolden AT cpan DOT orgE<gt>> for thread handling for inside-out objects.

Dan Kubb S<E<lt>dan.kubb-cpan AT autopilotmarketing DOT comE<gt>> for C<:Chained> methods.

=head1 AUTHOR

Jerry D. Hedden, S<E<lt>jdhedden AT cpan DOT orgE<gt>>

=head1 COPYRIGHT AND LICENSE

Copyright 2005 - 2012 Jerry D. Hedden. All rights reserved.

This program is free software; you can redistribute it and/or modify it under the same terms as Perl itself.

=head1 TRANSLATIONS

A Japanese translation of this documentation by TSUJII, Naofumi S<E<lt>tsun DOT nt AT gmail DOT comE<gt>> is available at L<>.

=cut
The Changelog – Episode #389
Securing the web with Let's Encrypt
featuring Josh Aas
Transcript
Securing the internet is obviously a big deal. Back in May of 2013 you thought so as well, and you started the Internet Security Research Group… But since then, Let’s Encrypt has issued (in all caps) one billion certificates, and that’s a big deal. That was an announcement you made February 27th. Talk about that moment for you. What was that like, to pin that post?
I still pretty clearly remember sitting around, debating with our staff whether we'd issued one million certs in the first year or not. When we started, we had no idea what the scope of this would turn into. A billion is a big number, and it's amazing to get here. So many ideas never turn into what you wish they would, and it's pretty exciting that this team has built something that turned into what we wanted it to be, which is serving so many websites around the internet. We're getting close to around 200 million now, and that's fantastic.
It's often interesting because you can take for granted what's right there in front of you today… So I kind of look at it like, new developers coming into the scene in the last two years - let's say since the inception of Let's Encrypt - and just the idea that it's there and it's fairly easy now to request a free certificate… We're in a day now where I suppose the security of the internet is much more important as we all become more dependent on it, and it's more prevalent in our everyday lives, especially in a day where right now we're using a Zoom chat, so we're assuming that this is encrypted… I'm not sure we're seeing anything that's concerning…
Well, we’re recording it ourselves, so maybe it’s not too private…
Right. The point is that not all the Zoom calls are ones you want to put onto the internet.
Yeah.
Obviously, security is a pretty interesting thing, but we came from a day where SSL certificates were very difficult to get, generally expensive, and just the process was very cumbersome. Now it’s a fairly easy process. People take that for granted.
Yeah. On the one hand, we want to be in a position where people can take us for granted, because we want the service to just happen for people. Ideally, you could just set up web servers and not even know that you have an SSL certificate, let alone where or how you got it. We’re all about automation and removing humans from the loop, so people have to do less.
[00:04:22.12] We’d love to get to a world where every time you spin up a server, it’s just got the certs you need in the background, it installs them correctly, and everything just work; people don’t need to know about us.
On the flipside, we do want people to know about us, because we’re a non-profit, and we need people to know about the great work that we do, and help fund our work. So yeah, every day we go out and try to put ourselves in the position where people can take us for granted, but… It does have some negative consequences.
Kind of a catch-22… What are your plans around that?
We continue doing what we’ve been doing - providing great service, building things that people can rely on, and make the internet more secure. On the technical side, it just happens… But we’ve got great communications people, and they go out and talk about what we do, talk to potential funders, and so far it’s working out great.
I always think about Wikipedia and their opportunity to throw Jimmy Wales’ note up there once a year, usually in December, and say “Hey, we’re a valuable resource. Send us your money.” And I was thinking like “What’s the Let’s Encrypt equivalent of that?” It’s probably a bad idea… You’ve got 200 million sites, but you don’t exactly wanna be injecting anything into that experience. Like you said, you wanna be as seamless as possible. So do you beat the drum mostly on the blog, or are there campaigns? What do you guys do to let people know what you’re up to?
We certainly don’t wanna go out there and inject messages into systems that people depend on. [laughter]
That’s a very bad idea.
Yeah… I didn’t think so.
We've got a social media presence, a blog, we also have a lot of contacts and people who understand what we do and support us, and we meet with them all the time. We go to companies, explain what we're doing, "We're here for you." We talk to open source projects… Anywhere that we think understanding what we do can help. It's just a lot of work to go out there and do.
Sometimes the media picks up good things, like a billion certificates. That’s really helpful, to get press coverage. Unfortunately, sometimes people find out about us when there’s a problem, like when you have a service that so many people depend on. When things go wrong, that’s when people notice. That’s when they stop taking you for granted. That’s not obviously something that we strive for. We’d prefer that that never happened. But software is complicated, systems are complicated. Every system has problems once in a while.
Some people learn about us when we have problems, and that sort of doesn’t inject the message into their life… Which hopefully is like “Look, this is something I depend on”, and when there’s an issue, you realize how much you depend on it. And the best way to help out is to support us.
What are some of the bigger problems you’ve had to mitigate over the years?
It mostly just comes down to normal software bugs. We have a software stack called Boulder, that is sort of the core of the certificate authority. It’s written in Go; it’s non-trivial in size, although it’s mostly been developed by an average of three engineers over five years.
So it’s fairly complex, and it’s non-trivial in size, and like all software, every once in a while there’s a bug there… And those bugs can cause either stability, or compliance, or security issues. We haven’t had too many security issues, but we have had some stability and compliance issues. When a bug pops up, usually we’re very quick to fix it, but when you’re serving 200 million websites…
[00:08:03.05] …any little thing becomes sort of a big thing. But I'm really proud of our track record here. We have a great track record for stability and reliability in security. And when incidents do come up, I'm particularly proud of how well we deal with them. We typically fix issues in a couple of hours max, then we go out and talk about it with public reports, and detailed public information, and lots of transparency very quickly. At most within a few hours. We really work hard to follow up and make sure that that type of problem doesn't happen again; we use those things as a learning experience.
Well, given the opportunity, and potentially even by design, the ability for people to take Let’s Encrypt for granted – for the uninitiated, how do you describe Let’s Encrypt? What’s Let’s Encrypt today?
You mean how do I describe it to people who don’t really understand what SSL certificates are?
No, let’s take it from a developer’s point of view. Somebody who kind of gets it, but doesn’t – they’ve heard of Let’s Encrypt, but they don’t know all the bits and bobs; they don’t know all the details of what Let’s Encrypt is. What do you do?
Yeah, so if you’re a developer and you wanna set up an HTTPS site, you’re gonna need a certificate. Normally, without Let’s Encrypt in the world, you’d have to go find some place to buy a certificate and decide what kind of certificate you want, decide how much you wanna pay… You’d probably have to create some sort of certificate sign in request, or fill out some form containing a bunch of details about what exactly you want in your certificate… It can be a pretty time-consuming, costly and complicated process, and it’s frankly just really confusing. I think it’s the biggest reason that people didn’t deploy HTTPS prior to the existence of Let’s Encrypt.
So Let's Encrypt really just tries to do away with all that. We try to make things as simple as possible. We have an API, you just submit a request using some API client software; you don't need to write it yourself or know how it works. You just download some software for whatever platform you wanna use. That software knows how to talk to Let's Encrypt. You just tell it what domains you want certificates for. Typically, the software will just go out, complete the challenges for those domains, do what it needs to do to get the cert, and then sometimes it'll even install it for you.
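The hands-off flow described here can be sketched as a tiny state machine. This is an illustrative simplification, not the real ACME v2 protocol (which involves JWS-signed requests against the CA's newOrder and finalize endpoints); every name below is hypothetical:

```python
# Toy sketch of the automated issuance flow described above.
# Everything here is illustrative; a real ACME client signs its
# requests and talks to the CA's API over HTTPS.
from dataclasses import dataclass

@dataclass
class Order:
    domains: list
    status: str = "pending"
    certificate: str = ""

def complete_http_challenge(domain: str) -> bool:
    # A real client provisions a token at
    # http://<domain>/.well-known/acme-challenge/<token>
    # and the CA fetches it to prove you control the domain.
    return True  # assume the token was served successfully

def issue(domains: list) -> Order:
    order = Order(list(domains))
    if all(complete_http_challenge(d) for d in order.domains):
        order.status = "valid"
        order.certificate = "-----BEGIN CERTIFICATE-----..."
    return order

order = issue(["example.com", "www.example.com"])
print(order.status)  # valid
```

The point of the shape is that no human appears anywhere in the loop: the client proves control, the CA issues, and the cert lands on disk.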
All this doesn’t require you to know anything about how certificates work, or how you get them, or what’s in them. It doesn’t require any payment. One of the most important things about not requiring payment is that’s not necessarily just about the amount of money involved. It’s not free just because free equals zero dollars. It’s free because if you’re sitting in some big company and you wanna set up a site really quickly, and you wanna use best practices and deploy HTTPS, if you’ve gotta go to accounting and get a credit card and get approval to spend money, and set up recurring charges, and things like that…
It’s not gonna happen.
…even if you’re charging 10 cents a month, it’s a pretty big friction for you. You’re just gonna say “I’m not gonna go through this whole silly process. I’m just gonna set up the site without HTTPS.” And now you’ve got another insecure site on the web.
So not charging money is about not creating friction in the process, and not requiring humans to be involved any more than necessary. So yeah, Let’s Encrypt is a really automated, easy to use, free way to get a certificate.
We first covered Let's Encrypt on the Changelog back in March of 2017, episode 243. We had Jacob Hoffman-Andrews from the EFF on the show. Probably a fun episode to go back and listen to in light of the success you'll have had… Because that was very much near or shortly after the kick-off. So from there to a billion in under three years is pretty amazing. I think free is a huge aspect to that, but I'm just curious, from your perspective, what did y'all get right, in addition to making it free, which is a big aspect, of course? …to get the spread on. You spread like crazy, which is amazing. What did you do right to get here?
[00:12:20.05] First of all, let me recommend going back to that episode to anybody who wants to. Jacob is our lead software engineer, and he is brilliant. You’ll never regret listening to him.
Yeah, he was a great guest.
Like most things in cryptography adoption, it’s all about ease of use. You can come up with the most brilliant security or cryptography mechanism you want, but if it’s not easy enough to use, it’s not gonna see deployment. Not at scale anyway. So from the beginning, it’s been all about ease of use. For us, that really means automation and really just making it so people don’t really have to do anything.
In order to automate, you just wanna have computers do the work. On the Let's Encrypt side we've got a bunch of computers that do the work. It's a little more complicated than that, but you know, computers do most of the work… And then on the client side, for people requesting certificates, we needed client software that works for everybody. And people use a lot of different stacks out there. Some people are on Linux, some people are on Windows, BSDs, and people are using Apache, NGINX… Whatever.
There’s a lot of different deployment environments out there, and there’s no way that we could write client software for all of those environments ourselves. It’s just not possible. But we came up with a really well-documented and standardized protocol, and then our community - it’s just amazing - has gone out and written hundreds of clients that work with this protocol. So now, no matter what your application stack is or your software stack, there’s almost certainly a Let’s Encrypt client out there for you to use, and all you need to do is just find that client, install it, and it will do most of the work for you. That’s something we really couldn’t have pulled off without our community.
Did you bootstrap any of that? Did you say “Well, we’ll do the Apache integration, or the NGINX integration and get the ball rolling”? Or was there immediate community support post-announcement, and maybe the publishing of (is it a spec?) the actual way it works.
Yeah, the protocol is called ACME, and it's an IETF specification now. Before that, we just published the spec for it. We did originally create a client that we developed not for very long, because we realized that us putting resources into one client ourselves just does not cover enough use cases. There's so much out there… So we really need to focus on the community-building clients and not us doing it.
And for that client, which has now been renamed to Certbot, the Electronic Frontier Foundation (EFF) - they were volunteering most of the work to make that client anyway. It wasn’t the Let’s Encrypt server-side engineers doing it. So we decided pretty early on that it makes more sense to just turn that client over to EFF and make it an EFF project, since they were doing most of the work anyway. And we wanted to focus on supporting this strong community instead of building clients ourselves.
So we did sort of try to bootstrap it by building this client early on, and it served its purpose well… But it’s at a much better home at EFF, and it’s really grown into an amazing client over there.
When we go back to the inception of things - I don’t wanna go too far back, but just enough to understand the crux of the problem. Obviously, an unsecure web is an issue, but what was the biggest thing that stood out to you, that made sense to move forward with the Internet Research Group, and that being the foundation behind Let’s Encrypt? What was the biggest problem happening, that was sort of like “This has gotta stop.”
[00:16:06.17] Yeah… There’s a few different people involved in starting ISRG, and I think they all have their own personal motivations for why they wanted to get into this… I wanna make clear this might not necessarily be true for all of them. But for me, at the time I was running the networking group at Mozilla, the group that does all the networking code in Firefox. And one of the most frustrating things about running that team is there was nothing you can do on the browser-side to make sites use HTTPS. So if the site doesn’t use HTTPS, you’re just stuck doing completely not secure networking, and there was no amount of code you can write, there’s no clever code you can write; you’re just stuck.
So if you’re sitting there, in charge of the networking stack for a major browser, it’s very frustrating that there’s nothing you can do about this. You can’t improve the situation. So we started thinking about “What’s the problem here? Why are people not using HTTPS?” And the biggest problem seemed to be that getting and managing certificates was too complicated, or too costly, or too time-consuming. For whatever reason, people didn’t wanna do that. Everything else is pretty easy. If you wanna turn on TLS in Apache or NGINX, it’s a pretty easy config flag. The software is all there. Easy to turn it on. You just can’t do it without a cert.
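For reference, the "easy config flag" side of this looks roughly like the NGINX fragment below. The certificate paths shown are where Certbot commonly installs Let's Encrypt certs, but they're an assumption; adjust them for your own layout:

```nginx
server {
    listen 443 ssl;
    server_name example.com;

    # Typical Certbot install locations (assumed; check your own setup).
    ssl_certificate     /etc/letsencrypt/live/example.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/example.com/privkey.pem;
}
```

As Josh says, the hard part was never this config block; it was getting the two files it points at.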
This really came to a head when I was participating in some of the discussions in the IETF about standardizing HTTP/2. That’s kind of a mouthful, so I’m just gonna call it H2 from here on out.
We do as well.
One of the big discussions in the standardization process for H2 was “Should H2 require HTTPS? Should it require TLS?” And for me, that felt like a no-brainer. Like, yeah, obviously H2 should require TLS. At that point I think it was like 2011 or 2012 when this conversation was happening, and it seemed like we know that not using TLS is a huge problem. It’s 2011. Why in the world would we ever create a new protocol that’s not secure by default? That seemed just crazy.
And there was a lot of pushback on that idea. Part of the pushback came from proxy providers; basically, people whose software and jobs depend on intercepting web traffic. So that’s sort of the expected pushback. But other than that, it just seemed like a no-brainer to me. But there was another objection, which was that if you make H2 require HTTPS, you are effectively gonna make it pay-to-play, and you’re gonna make it much harder to deploy, because you won’t be able to deploy H2 without buying certificates… And you won’t be able to deploy H2 without going through the certificate obtaining and managing processes. So the idea was that requiring TLS would make H2 deployment a handicap, basically, and pay-to-play.
That frankly felt pretty reasonable, and it felt to me like “If I’m gonna continue with this position that it should require TLS, and that it’s crazy to not to, then I need to be willing to deal with the criticism here. I need to have some answer to this.” At the time, I didn’t really have an answer. They were right - one of the great things about the web is that a lot of things don’t require you to pay. And if we required TLS for H2, you would effectively be putting a financial tax on anybody who wanted to deploy H2.
[00:20:05.17] And likely, people wouldn’t play.
Right.
So what’s the point of even creating the tech, or the new protocol, the new spec, unless people would use it… Except for the ones who can obviously pay.
Yeah, you hamper its adoption, and you also hamper the people who can’t afford to adopt it, right?
Yeah… I mean, the big companies that do it - you know, Google, Facebook, whatever…
For sure.
…that’s a lot of traffic on the web. So it still has a benefit, but I don’t think we should be designing major protocols like H2 primarily just for the biggest players out there. As centralized as the web has become in many ways, I think it’s still important to pursue some of the ideals of accessibility and making sure everyone has the ability to participate on the web.
So I thought those criticisms of requiring TLS were legitimate, and if I was gonna run around telling everybody that they have to use TLS all the time, then we needed to deal with that problem. So I went back to the team I was working on then, and we talked about a lot of different possible solutions, but it was hard to find any solution that was gonna solve the problem, and solve it in a reasonable amount of time. There were ideas where it’s like “Well, if we do this, then maybe in 10-20 years the situation is different and we can do this”, but that’s way too far. We should have done this 10-20 years ago, not be planning to do it 10-20 years in the future.
So the only solution we came up with that we were pretty sure would work, and that might work in a big way in 5 years or less, was that there just had to be a new certificate authority, that was public benefit, really easy to use, doesn’t cost money, and available all over the globe. Available everywhere, to everyone. Without that, we just didn’t see how we were gonna get out of this.
To be honest, I don’t think anybody involved in those discussions was thinking like “Man, I’m excited to spend the next 5 years of my life building a CA from scratch, and dropping all these other things that I wanted to do in my career, and build a CA.” It wasn’t the most attractive project. But it felt like this is what’s gotta happen. If we don’t do this, the web is gonna be not secure for a long time.
So we did it. We went out and started a new CA. At the time, I knew nothing about how to build or run a CA, so it was a lot of learning from me, and I think everybody involved. I don’t think anybody – we had some advisors who had some experience, but I don’t think anybody actually building the CA had ever built one before. So it was a big undertaking, but that’s why we did it.
And here we are, five years later, and… When we started, I think 39% of all page loads - so not websites, but 39% of the time you loaded any particular web page, it would be encrypted. And that’s mainly because of big websites like Google and Facebook and other big properties… And everything else wasn’t. Here we are, five years later, and in the United States I think we’re approaching 92%. Globally we’re over 80% now. And those have a great trendline up. So five years later we’ve encrypted most of it. We’ve got some more work to do, but we got there.
I’ve gotta imagine starting a CA from scratch is an undertaking… You’d mentioned that you had some advisors obviously giving some advice (that’s what advisors do), but the majority of everyone involved in co-founding the Internet Security Research Group had no clue how to do some of these things… So how did you get a clue? How did you do this? What’s involved in building a certificate authority?
Well, we got some advice from people, like you said, and sort of laid out some of the basics of how it works. There is a document out there called The Baseline Requirements, which is a document built by the CAs in the browsers combined, sort of come together in a forum called CA/Browser Forum. They created a document of all the rules and requirements that all CAs are required to abide by. And you can figure out a lot about how a CA is gonna have to work based on what those rules are… So we read those very carefully.
We hired some auditors who audit us every year, to make sure that we’re compliant with those, and some other rules. Our auditors helped us figure out some stuff… But yeah, mostly we just consumed all this information and started drawing out plans for how things work, and then we had iterated on them until it all seemed like it would work. Then we went out and bought the hardware, signed the agreements… Just got to work. Some things had to be iterated on a couple of times, but… Pretty much how you figure out anything else you don’t know.
Yeah. If I grabbed the right document, it’s 68 pages. I could imagine that’s quite a read, for one… Two, who gives the authority for a CA? Who do you get the authorization from to move forward?
Yeah, so you can start a CA and you can do whatever you want as a CA. The question is who trusts you? So if you start a CA and you do whatever you want, pretty much nobody’s gonna trust you, so it doesn’t matter that you’re really running a CA. If you’re running some sort of private CA, you need your clients to trust you. If you’re running a public CA like Let’s Encrypt, basically what that means is that the general public trusts you. What that comes down to is the browsers trusting you.
So if you wanna start a CA, you need at least all the major browsers to trust you. Today, that would be Google, Mozilla, Microsoft, Apple. Those are the big ones. If any one of them doesn’t trust you, then this whole thing falls apart. You can’t have a website that works in three of those browsers but not on iPhones, or something.
[00:27:51.20] So when you talk about becoming a publicly-trusted CA, what you’re really talking about is getting those four browser makers to trust you… And they all run what are called root programs inside their organizations. And those root programs decide who they trust, and then follow-up on compliance from everybody they’ve already decided to trust.
So when you start a CA, you need to build your systems, then you need to get them audited against the current auditing guidelines for CAs. Then you take that audit report and you include it with an application to each one of those root programs. So you’re submitting at least four applications to four different root programs, and they all have their different ways of applying.
Some of them are relatively simple emails or bugs to file, and some of them are longer applications. But you apply to all four of them, and then you wait to get accepted. That can take anywhere from three months to three years to get accepted. Then once you’re accepted, you need to wait for them to actually put that trust into the browsers.
So for something like Chrome or Safari, what that means is you’re waiting for them to ship a software update that includes your root of trust in it. And until that happens, until a user gets that update, their device still doesn’t trust you.
In the case of Microsoft, it’s more dynamic. They don’t do it through software updates necessarily. If they see a cert they don’t understand, they will query a server and check. So trust in the Microsoft ecosystem can be done pretty quickly. Once you’re in, you’re in. The real problem is stuff like Android in certain parts of the world. People have old Android devices that they don’t get updates anymore, and they never get rid of them, and in some cases they’re still manufacturing Android for devices… And those things are never gonna get an update. So if you want to get those devices to trust you, you’re really just talking about waiting for them to leave the ecosystem.
So the point of this is - between the time that you apply for trust and get approved, and then all the devices in the world actually trust you, you’re talking about a period of 6-10 years.
Wow. So commitment is required, I suppose…
Yeah…
…without saying so much. You think about some people’s plans for new ventures, whether they’re small ideas or big ideas; any sort of itch that’s scratched. So we talk to a lot of people who scratch itches around here and do something about it, and you have to think about your commitment level to said mission. If you have a horizon of say a year or two years, and you’re building a CA, maybe you need to stretch that quite significantly to 5-6, or maybe even further.
Yeah.
What was your horizon for this? Were you like 10-20 years that you kind of knew all this beforehand, or was it sort of learned as you go? Because you mentioned a lot of this was learning as you go.
So what I’ve just described is the basic process of getting trusted from scratch. And that does require a big commitment. It requires quite a bit of money to get set up to a point where you can pass audits and even apply, and then every year while you’re waiting for all this stuff to happen for 6-10 years you have to stay compliant, get audited every year. So you’re talking millions of dollars and 6-10 years before you can even be a publicly-trusted CA in any meaningful sense. That’s the basic process.
There is a way to make a shortcut, which is how Let’s Encrypt was able to start without waiting 6-10 years first. So we did go through the process that I’ve just described, building up our own root of trust from scratch… But the world right now does not really rely on that yet, because it has not been long enough.
[00:31:52.23] And somewhere around mid-to-late next year we’re gonna switch over to our own root that’s trusted from scratch. But from our inception through now, we have what’s called a cross-signature… Which means we knew we didn’t wanna wait 6-10 years to start offering Let’s Encrypt services, so we’ve found another CA that understood what we were trying to do and was willing to help. They had a root of trust that was already trusted.
What essentially happens is we create a contract, an agreement between us, and then their root of trust essentially lends its credibility to us. So they issued a certificate that our root is trusted by their root, and their root is trusted by the browsers, basically. So that’s called a cross-signature. We acquired that before we did much of anything… Because without that agreement in place, there’s really no CA.
So one of the first things we did was get that agreement, because without that agreement in place, there’s no point in buying hardware and doing anything else… Because –
It’s a long journey.
…we’re not gonna sit around and do nothing for 6-10 years. So that was a really critical agreement. We got that in place with a company called IdenTrust, who’s been a great partner for a while now. So we’re trusted through IdenTrust today, and mid-to-late next year we will stop using that cross-signature and just be trusted on our own.
That’s a big deal.
Yeah.
Will it be transparent, that trust, I suppose? …since it’s still you, it’s still trust, it’s still the same browser…
…unless you’re running an old Android device.
That’s right.
Yeah… You know, their root is really widely trusted, and that’s been fantastic. And it’s trusted all the way back into early Windows XP, and all the Android devices. The problem with that root is, you know, the longer a root is around, the more trusted it is, but eventually it expires. And that root expires next year.
I don’t remember the exact lifetime on it, but I think that’s a 20-year-old root, at least, if not more than 20 years. So the advantage of an old root is that it has really widely-trusted status, but the disadvantage is that it’s gonna expire soon. So we will be switching from an old root that’s very well trusted, to a younger root that is also very well trusted, but admittedly not quite far back as IdenTrust. But if you’re still running Windows XP on the public internet, you might have bigger problems than your certificate. Probably the same goes for really old versions of Android.
You’d mentioned the dollars involved for those 6-10 years, and from just groking past blog posts from you or others at Let’s Encrypt, it’s primarily a people cost, so staff. Is there a lot of cost aside from that when it comes to the fast-track that you took, or the non-fast track? Or the cost is primarily people?
Certainly the cost for us to run the CA today is primarily people. Roughly speaking, paying staff is about two-thirds of what we spend every year. There’s startup costs, and then there’s ongoing costs. For startup costs - usually, those cross-signing agreements do cost money, and that’s a non-trivial amount of money… So when another company agrees to trust a new CA, they are responsible for that trust. If the new CA that they’re trusting messes up, it’s on them. So in exchange for the liability they’re taking on by trusting, in our case, just some guy from Mozilla that walks into the office and says “I’m gonna quit my job and start a new CA, and it’s gonna do all these amazing things. You should trust–”
Hypothetically, right?
[00:35:53.23] Yeah… “You should trust us, and put your business and your reputation on the line” - it’s not an easy ask. So these cross-signing agreements - there’s a lot of liability involved, and for that reason they end up being non-trivial amounts of money. So that’s a big startup cost for anybody. Then going forward from that, that’s probably the biggest startup-specific cost, aside from maybe initial capital; you’re gonna need to buy servers, and HSMs, and things like that.
Ongoing, we have to buy a certain amount of hardware every year. We do use some cloud services, we use some external services, but mostly, aside from people, it’s about the data centers and what’s in them. Publicly-trusted CAs are not allowed to operate in the cloud, so we can’t run our CA systems on AWS or GCP or Azure or something like that. We have our own hardware in special, secure rooms inside data centers, and they’re not even normal data centers; they’re special, walled-off rooms in data centers, with a bunch of extra biometric access, and stuff like that. That stuff is a non-trivial expense, and you’ve got all this hardware that goes inside, you’re gonna have a lot of redundancy… So we pay for that stuff, and that’s where a lot of the rest of our budget goes.
Yeah. In a world where Let's Encrypt is ubiquitous, which is what we're getting to - like you'd mentioned, 2.5 years ago 100 million certificates issued; a month or two ago, a billion. It's quite a massive growth. In a world where Let's Encrypt is ubiquitous, what's the point of other CAs? I'm just thinking, why would anybody not use free?
There’s a lot that other CAs do that we don’t do. For example, we offer one specific type of certificate. You can’t change very much about it. We think shorter certificate lifetimes are better, so we don’t let anybody create a certificate that’s longer than 90 days in lifetime. And we also don’t offer human support. So there are a bunch of reasons to choose other CAs. For one thing, we don’t sign this sort of normal contracts; some people have procurement requirements, or they want certain things in contracts from their vendors, and we don’t do that. We don’t provide support. You can’t pay us for 24/7 phone support. If you don’t like the type of certificate we offer and you want something else, or you want it configured in some special way, we don’t do that…
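Those 90-day lifetimes only work because renewal is automated; a client simply renews well before expiry. A minimal sketch of the scheduling arithmetic, where the 30-days-before-expiry threshold is an assumed client default (it matches Certbot's usual behavior), not a Let's Encrypt rule:

```python
from datetime import date, timedelta

LIFETIME = timedelta(days=90)      # Let's Encrypt's fixed certificate lifetime
RENEW_BEFORE = timedelta(days=30)  # assumed client default: renew with 30 days left

def renewal_due(issued: date, today: date) -> bool:
    expires = issued + LIFETIME
    return today >= expires - RENEW_BEFORE

# Issued Jan 1, 2020 -> expires Mar 31; the renewal window opens Mar 1.
print(renewal_due(date(2020, 1, 1), date(2020, 2, 15)))  # False
print(renewal_due(date(2020, 1, 1), date(2020, 3, 5)))   # True
```

A cron job or systemd timer running a check like this daily is all it takes to keep a short-lived cert perpetually valid.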
So we’re sort of one size for everybody, and if that doesn’t work for you, then there’s luckily a lot of commercial CAs you can go to, and they’ll be happy to help you, I’m sure.
So you’re saying Let’s Encrypt is for everybody, but not for everybody.
Yeah, it’s a pretty basic option. I think it’s basic for two reasons. First of all, we’re trying to be a really efficient organization, so… As complicated as running a CA is, we try to scope that complexity and limit it as much as we can. We don’t wanna offer a ton of choices to people, because complexity just leads to more bugs, more cost, and things like that.
Another thing we’re really focused on is best practices. We tend to do whatever we think is the best practice. An example of that is the cryptographic algorithms that we offer, or the certificate lifetimes that we offer… Or you know, you can only get it through certain validation methods the way that we do it, because we think those are the only secure ones, or something like that.
But other people have other opinions. So between trying to be efficient and trying to focus on best practices, we offer a pretty limited service, that I think works for a lot of people, but not everybody.
And the web is huge. We may be serving 200 million sites right now, but there’s a lot more than 200 million sites right now, and they should all be using HTTPS. So there needs to be a robust ecosystem of CAs in the world, so that people who need something else besides what Let’s Encrypt provides have a place to go.
[00:40:05.13] Given the requirements to even be a trusted CA, it doesn’t seem like there are just handfuls of people listening to the show saying “I’m gonna drop what I’m doing today and become a CA.” It’s just such a long road. I almost think you have to really be invested, I suppose, in securing… And I guess that’s a whole different kind of problem, a different kind of business, given the amount of effort it takes to get there.
To start a business today, you have to kind of get to product-market fit: create a product that people want and will buy. In this case, you can do that as a CA, but still not be able to sell it, because it takes you so long to become a trusted resource; the trust is such a big deal. It’s an interesting business to create.
Yeah, creating a CA is not a decision to take lightly, but the way that most people create CAs now is not to wait for that process to play out.
They do the cross, like you’ve done.
A cross-signature… Or you just go acquire another CA. You just buy another CA.
There are at least a hundred existing publicly-trusted CAs; I’m not sure exactly how big the list is. They buy and sell each other all the time. Sometimes they go out of business. And if you wanna start a CA, you’re gonna start a serious business. One way is to go spend X millions of dollars on the cross-sign, or you can spend X millions of dollars just to go buy a CA.
Most of these CAs you’ve never heard of. They’re very small. It’s not under-the-couch money, but if you’re starting a trucking company and you need to go buy ten million dollars’ worth of trucks - that’s something people do all the time. You can probably go buy a CA for something in that order of money, so in some ways it’s not really that different from starting any other business, I would imagine. I only have experience with Let’s Encrypt and the cross-sign; I’ve never actually bought another CA… But I don’t think that the startup requirements are really that different, just because most people don’t wait for the from-scratch process to play out.
You mentioned the other certificate types. I’m just curious on your thoughts on extended validation certificates and the idea of wrapping up identity into encryption; establishing a secure connection. And then you also have this extended validation, so you know at least the CA trusts that the person who owns the certificate is who they claim to be. What are your thoughts on that style? I know you don’t offer it, but is it worthwhile?
I don’t know if I can say whether that’s worthwhile for any particular case - you know, some people have specific needs, regulatory needs, or whatever… So I don’t know if I can say whether any individual in general should do it. But I do have some thoughts… First of all, trying to include the identity of a legal organization in a cert does not affect encryption at all. You can’t tie the two together. You can put them in the same cert next to each other, but with EV certs and OV certs, which contain this legal identity information, the encryption is no different than a DV cert. It’s the same.
The only theoretical value to that is if you display the identity to the user, and then let the user make a decision based on the identity they see. The problem there is – well, there’s a bunch of them. First of all, browsers are increasingly not showing that information to users, so there’s no point in having it in the cert if the browsers aren’t gonna show it to them. And the reason the browsers aren’t showing it as much anymore is that most research shows that people either don’t look at it at all, or don’t understand it when they do see it.
[00:44:02.07] So you’re not gonna build a secure system that relies on the average user on the internet looking at information and making informed decisions. That’s just not how security works. If that’s your plan, it’s not gonna result in generally increased security for anybody. It only works when it happens automatically, and doesn’t require people to look into it individually.
There’s a bank out there called USAA, and if you look at an EV cert from them, it says “United Something Automobile Something”. The spelled-out name of the business is very long, and nobody knows what USAA stands for. They just know the bank as USAA. So when you see that kind of information pop up in an EV cert, how can you possibly expect anybody to make a reasonable decision about it?
So I don’t think that it’s very useful to put identity information in a certificate that’s used on the general internet… And the research backs that up, and browsers tend to agree and are dropping that from the UI.
Yeah, that’s true.
I don’t have any stake in this - we don’t issue it - so in some ways I don’t really care, but it seems not very helpful, and probably it’s not going to have a strong future on the internet.
No, I was just curious your thoughts on it.
I think at one point it was interesting because it was different. Not all certificates gave you that option. So I can recall a day whenever you went to GitHub.com, and it would say the name separately, twice almost; someone’s double-branding, even. You know, heavy on the brand side.
With a big, green background, and then it was like very official…
Yeah, it seemed official. It seemed cool, it seemed secure. So I would think - which I don’t know all the research behind it, but from a UI perspective it’s probably cumbersome because it’s redundant… But it looked different than someone who didn’t pay anything for their certificate, or didn’t buy a certificate and then offered that… And it was unique. So you would see it happen on people who would wanna pay for it, I suppose.
Yeah… The problem there is that once you see it, it might possibly seem like a good thing to you… Although, again, the research shows that people don’t really react to it in a useful way. The problem is you don’t notice the absence of it.
Right.
If GitHub just didn’t have that, you wouldn’t say “Hey, it doesn’t have that thing. I’m gonna leave.”
Yeah. I may hop onto GitHub today and it doesn’t have it today. I don’t care.
Right. So it just turns out not to be very useful. And also, there’s a lot of issues with how it’s validated. In domain validation, where you’re just proving control of a server before you issue a cert, there are pretty clear and strong ways to do that validation. When people do identity validation, it basically involves phone calls, and faxing around document copies of your articles of incorporation, and copies of driver’s licenses, and stuff like that. It’s easier to mess with.
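The domain-validation mechanics Josh contrasts with EV here are fully mechanical. As a rough sketch (not Let’s Encrypt’s actual code; the key values below are illustrative), the ACME HTTP-01 challenge from RFC 8555 boils down to computing a key authorization and serving it over plain HTTP:

```python
import base64
import hashlib
import json

def b64url(data: bytes) -> str:
    # ACME (RFC 8555) uses unpadded base64url encoding throughout
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()

def key_authorization(token: str, jwk: dict) -> str:
    # JWK thumbprint (RFC 7638): SHA-256 over the key's required members,
    # serialized with sorted keys and no whitespace (RSA keys shown here)
    canonical = json.dumps(
        {k: jwk[k] for k in ("e", "kty", "n")},
        sort_keys=True,
        separators=(",", ":"),
    )
    thumbprint = b64url(hashlib.sha256(canonical.encode()).digest())
    # The client serves this string at
    # http://<domain>/.well-known/acme-challenge/<token>;
    # the CA fetches it to confirm control of the domain before issuing.
    return f"{token}.{thumbprint}"

# Illustrative values only, not a real account key
jwk = {"kty": "RSA", "n": "0vx7agoebGcQ", "e": "AQAB"}
print(key_authorization("evaGxfADs6pSRb2LAv9IZf17Dt3juxGJ-PCt92wr-oA", jwk))
```

Because every step is a deterministic computation plus an HTTP fetch, the whole validation can run with no human in the loop, which is the contrast with phone-and-fax identity vetting.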
Pretty famously, recently someone registered Stripe Inc. in some state that’s not where the normal Stripe payment company is… And since registration of businesses is by state, they had an EV cert that said Stripe Inc. Obviously, that’s not what you’d expect, but it’s not a bad cert. They legitimately did own Stripe Inc. in North Carolina, or wherever it was that they did.
Right. It’s like a namespace conflict, but it wasn’t inappropriate. The CA that issued that certificate could have gone out to their business and gotten their articles of incorporation, and all that stuff, in the state that they’re in. So it’s completely valid.
[00:47:57.04] Yeah. There’s ultimately nothing wrong with that cert; they sort of arbitrarily revoked the cert, because they say “Well, we just don’t like that cert.” It brings a lot of arbitrariness into it. And that is a cool party trick, and it demonstrates some problems with EV certs, but the real issue is that nobody seems to care what’s in the cert anyway, so it doesn’t matter if you – you know, nobody really looks at it or makes security decisions on the basis of that stuff anyway, so it doesn’t matter. Namespace conflicts are sort of a second-order issue.
Right. I think it’s interesting that for a time it was almost like a status symbol amongst technology companies to have that. It was like “We’ve arrived”, or “We have enough money to buy the more expensive–”, whatever it is. And really, like you said, the browsers made that not a thing when the vendors started to move it out of the way, saying “Yeah, let’s just go ahead… No one looks at it, except for nerds…” Most people don’t even look at the address bar. They don’t even know it exists, which is why the number one thing people google is “Google” or “Facebook”. They google Facebook to go to facebook.com, when they could just type it into their address bars, because people don’t…
Yeah, they’re missing four characters…
…look at the address bar, let alone “Is the background green? Does it have the thing?” You know… So really the browser vendors made that not a thing. I mean… That’s super-interesting. They’ve kind of like – because that was an advantage of one certificate over another; it’s kind of an upsell. Isn’t it always an upsell? “Hey, get the Extended Validation cert.” And it’s like, just the movements of the web, and the decision-making of the browser vendors, basically quashed the value there, because the value was really only in the status symbol, like you said.
Yeah. I mean, “the advantage” is not really an advantage, because it doesn’t actually mean anything. It just takes up a bunch of UI space.
It’s kind of fascinating.
Yeah… Again, if your plan for security is to show average users some information and then expect them to make a really good decision based on that information, that is not ever gonna work. It doesn’t work for EV, it doesn’t work for anything else.
So Josh, we’ve been talking about Let’s Encrypt’s success over the five(ish) years you’ve been doing this. A lot has changed since the beginning, a lot has changed since 2017 when we had Jacob on the show saying “Let’s encrypt the web” - mostly extreme amounts of adoption. You have some stats in your billion-certificates blog post: in June of 2017 approximately 59% of page loads used HTTPS globally, 64% in the U.S., and today 81% of page loads use HTTPS globally. I think you mentioned that earlier in the conversation. And we’re at 91% in the United States. I wanted to reiterate that… That’s a massive number. 91% in the U.S. So you guys have played a large role in that. And I’m curious, because the web has changed alongside you, and the trends are changing, and security is more important, and all these things… So I’m curious, how much do you feel you’ve been pushing this up the hill, and how much do you feel that maybe you’ve been riding a wave in the last couple of years?
It doesn’t feel like pushing it up a hill so much. I think there was a lot of demand. I think developers understand that using HTTPS is a good thing; they understand that without it you’re not secure. I don’t think it’s hard to convince most of them to do it. I think they’re ready to do it if they have a reasonable option for doing it… And by reasonable I mean very easy to use.
[00:52:25.00] So we’ve put our service out there, and it’s not that hard to convince people to use Let’s Encrypt. We don’t really market or engage in too many activities around really trying to convince people to use Let’s Encrypt. Most of our efforts revolve around trying to get people to give back for using Let’s Encrypt, and keep stuff going. But yeah, it definitely doesn’t feel like pushing something uphill. It feels like people wanna do the right thing, they just need the tools. And now they have them.
I think the developer mindset - it’s my own personal opinion and experience - has changed, probably from “You should encrypt anything that’s important…”, I’m talking like 3-5 years ago that was kind of the ethos… Anything important - if you’re signing in, obviously if you’re making e-commerce transactions - those things should all be encrypted. Taking passwords etc. But that’s pretty much what needs to be encrypted. And I think nowadays, generally speaking, the ethos is “Just encrypt it all.”
All things.
Encrypt all the things.
Yeah. The thing that gets me about the first argument, that only important things should be encrypted, is that people need to remember that when data is not encrypted, not only can it be read by other people, but it can be modified. So any unencrypted traffic can have stuff injected into it. And it doesn’t matter whether that traffic is important or not. So if you’re on your banking website in one tab, and you think “Oh, that’s important. That needs to be encrypted”, and you’re over in another tab looking at memes, or something, and you think “These things are not important. These are just some mass-media GIFs flying around the internet. Why does that need to be encrypted?”, the problem is that unencrypted traffic can be modified. So you can have malware or some kind of exploit loaded into the traffic for that tab, that exploits your computer and now does stuff with your banking info, because they owned your browser through the unencrypted traffic in the meme tab.
Don’t go changing our memes.
It is really not a good idea to try to draw distinctions between what is important and what is not, because it’s all exploitable in the same ways, and that line just never gets drawn in the right place.
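The injection argument above rests on integrity, not just secrecy. Here is a toy illustration of the principle (real TLS uses AEAD ciphers with keys negotiated in the handshake, not a bare HMAC with a hardcoded key like this): with plaintext HTTP, a middlebox can rewrite the meme tab’s traffic silently, while any integrity check makes the same rewrite detectable.

```python
import hashlib
import hmac

# Toy stand-in for TLS integrity protection; the key would really come
# from the TLS handshake, not a constant.
KEY = b"shared-secret"

def protect(msg: bytes) -> bytes:
    return hmac.new(KEY, msg, hashlib.sha256).digest()

def verify(msg: bytes, tag: bytes) -> bool:
    return hmac.compare_digest(tag, hmac.new(KEY, msg, hashlib.sha256).digest())

page = b"<img src='meme.gif'>"
tag = protect(page)

# Over plaintext HTTP, a middlebox can rewrite this freely and the
# browser has no way to notice. With an integrity check, the same
# rewrite fails verification and the connection fails closed.
tampered = page.replace(b"meme.gif", b"exploit.js")
print(verify(page, tag), verify(tampered, tag))  # True False
```

That is why “unimportant” traffic still needs encryption: the protection is against modification as much as against reading.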
Yeah. Which makes celebrating a billion certificates all that more important, because you started out at the bottom and now you’re here, to use a rap song very wisely…
And that’s the thing…
You used that rap song very wisely.
…I mean, in 2017 a hundred million, and now you’re at a billion, five years later - that’s a big deal, and that means that so much more traffic and so many more people are not having advantage taken of them. Or even the opportunity for advantage to be taken of them, because they’re secure. And in the days prior to this, it cost money to enter; I think that was just a barrier to entry to using SSL, for the Jerods and the me’s of the world before, who said “Hey, only important traffic needs to be encrypted.” And now it’s like “Well, everything. For those reasons.” And that’s a big deal.
You mentioned that you haven’t done much to get there, so – I mean, going from zero to a hundred million, to a billion, no marketing, not much involved, just mostly community work, when you have to account for how you got here, how did you get here? What are the things you did to do that, specifically?
[00:55:58.10] Well, like I said, there was a lot of pent up demand, and we gave people the tools, and we made them easy to use. That’s really the gist of it. And then some people start doing the right thing, you get the numbers high enough, and then the mindset of the world switches from “HTTPS is an optional thing that you can have if you wanna spend time doing that” to “HTTPS is the standard thing that you need to do all the time, and if you don’t do it, you have a problem.”
One of the biggest accomplishments for us and for everybody – you know, it’s not just Let’s Encrypt; we’re not the only reason the web is where it is today. There’s lots of different people working on different great projects around the world that have helped promote HTTPS… But one of the big accomplishments of that community is that HTTPS is considered the standard today.
You set up a website. If you expect people to visit it, you need to have HTTPS. That’s a huge mindset change. [unintelligible 00:56:56.02] ways in which the internet has changed similarly in the past for other technologies. It is hard for me to imagine – or, not imagine… I don’t know everything about the history of the internet, but I don’t remember any other thing fundamentally changing how almost all the traffic flows across the internet in less than five years. I can’t think of another watershed change to how the internet functionally works that played out that quickly. I’m a huge fan of IPv6, but that transition has been dragging out for a long time…
It’s the slowest transition of all time.
You know… And when we started Let’s Encrypt, we were thinking “This cannot be an IPv6 trendline. It can’t be that way. We’ve gotta make sure this happens much faster than that.”
There’s a lot of other improvements to make to the internet, too. I hope this serves as an example of “If we wanna make a change, we can do it.” There’s a bunch of other stuff we should fix. It is possible to change major parts of how the internet works in big ways, in a few years, under the right circumstances, with the right plan.
What you’re sharing - it reminds me of this idea that I haven’t quite verbalized yet, but it’s this cog mentality. If you’ve ever heard of Seth Godin - he wrote a book called Linchpin. The idea is to be a linchpin. I think, in many ways, as individuals, we try to be really important… And that kind of goes against the idea of cog mentality, which means that you’re just a very sharp, very specific, very purposeful thing, as part of a much bigger, much grander whole machine. So if it weren’t for the you’s of the world, Josh, doing Let’s Encrypt and all the effort here, then the browsers wouldn’t be able to do their thing, and then the site developers wouldn’t be able to do their thing… So all these things are sort of in concert. A system. So this idea of a cog mentality really rings true here.
Yeah. Well, we’re happy to do what we do. But like I said, there’s a lot of people who play a part in this. Running CA servers and providing the APIs is an important part of this, but we wouldn’t be anywhere near where we are today if there weren’t hundreds of people out there writing ACME protocol clients that work with Let’s Encrypt, so people can just download software [unintelligible 00:59:17.06] it works.
The browsers have done a great job incentivizing moves to HTTPS by limiting new technology to HTTPS connections, and some UI work, things like that. So it’s been the browsers, it’s been the open source community, Let’s Encrypt… Lots of people involved. Even within Let’s Encrypt, there’s so many people involved in it. There’s the engineers that work on it, our sponsors, our funders… That’s huge. We don’t go anywhere unless somebody decides to write a pretty big check. And people who make those decisions, to write those checks - I feel like they often don’t get enough credit, because it’s not fun and open source-y, but that’s a big deal.
So the fact that there are people out there and companies that understand what we’re trying to do and they’re willing to write those checks, and stand up and really make the internet better - that’s where it starts.
[01:00:16.04] Are there any standout organizations that have been supporting you, either in big ways, or for a long time, that you’d like to give a shoutout to? Because like you said, they don’t get much credit. Maybe a logo on a web page somewhere. But do you have any major supporters? Like “We wouldn’t be here if it weren’t for this company, or this organization.”
Yeah, we’ve got over 70 corporate sponsors, so I’m definitely not gonna be able to list them all here… But our platinum sponsors are our biggest supporters; they write the biggest checks, and they’ve been fantastic. Companies don’t make the decisions, people inside those companies make the decisions, and I’m so glad that those people understand what we’re doing and get it done.
Our platinum sponsors right now are Mozilla, Cisco, Electronic Frontier Foundation, OVH - a company that I think not a ton of people in the U.S. have heard of, but they’re a great, huge cloud provider in Europe, and they have just been fantastic since very early on. And Google, specifically the Google Chrome team.
One thing that’s amazing is what you’ve done with that money. In that same post, you talk about how you’re serving 4x the websites that you were back then. And of course, now here you are at 200 million, so even more websites… And your budget hasn’t increased 4x, or anywhere near that. You went from a $2.61 million annual budget in 2017 to $3.35 million now, and from 11 staff members to 13 staff members. So only adding two staff in the course of three years, while 4x-ing the websites you serve - those are pretty good numbers.
Yeah, internally we are obsessed with efficiency. Like I said, it’s a really big deal for people to entrust us with millions of dollars over a year. There’s a lot of good in the world that money can do. So when that money comes to us, it’s our obligation to make sure that we use it wisely, and do the most good we can do with it. That means delivering the best service, to the most people that we can. We take that responsibility to be good shepherds of that money really seriously.
So whenever we talk about a new service, or like a new feature, or some way in which we’re gonna expand our service, we have a whole bunch of things that we think about to make sure that we’re being efficient. One of the most important things is “Does it require any people to be involved with anything, anywhere on the chain?”
One of the reasons we don’t offer phone support is if we did, we would have to fill a skyscraper with people sitting by the phone. So when we think about delivering a feature, that feature cannot require support. We need to make sure that it’s so easy to use and so easy to document and so easy to automate that the people consuming the feature should not need support. Even the people who are the least technical. It should just happen. If they do need support, it should be as simple as reading a very easy to find bit of documentation.
So ease of use is, again, hugely important for efficiency. If it’s easy to use, then people don’t need to talk to you as much. If even some very small part of 1% of the people using Let’s Encrypt needed to actually talk to us on the phone about something, that would just be overwhelming. We can’t do that. So everything has to be very easy to use.
Internally, we think a lot about how much data we store. We’re basically allergic to data. We only really hold on to what we really need to hold on to, for either compliance purposes or to debug our own systems.
[01:03:56.22] But aside from that, we don’t wanna have more sensitive information than we need, we don’t wanna be sitting on piles of information where we have to pay for storage servers and things like that. We tend to just do what we need to do and not hold on to tons of data. When we need to use an external service, we often find partners who are willing to provide the service free of charge to us, essentially as sponsors or donors.
We’re just very concerned about efficiencies, so we’re only 13 people today. We have a lot of specialized systems, but it’s probably – I don’t know what people imagine we run when they think about what is Let’s Encrypt’s actual hardware, but it’s like about three racks full of hardware. It’s not a ton of hardware. It’s all very carefully maintained in some ways, but you can fit – you know, modern servers are crazy powerful. You can fit a lot of stuff in there, you don’t need a lot of physical space.
We’ve got a couple different data centers, maybe three racks of hardware between them, and that’s triple-redundant. In theory, if we needed to, we could just run the CA out of one rack of hardware, and that would serve all 200 million sites pretty easily.
So if you automate everything and get computers to do all the work for you, you can be pretty efficient. It still requires – I think this year we’re gonna spend a little under four million dollars, but that’s really not that much money. I’m fairly confident that there are Fortune 500 companies out there that spend more than four million dollars on their internal PKI systems.
Right. So you mentioned earlier that you weren’t sure you wanted to even spend the next five years of your life doing this, doing a CA, but you felt like somebody had to do it, and you were well-positioned and willing to. Here we are, you’ve done over a billion certificates, 200 million sites served, all these big numbers… And I think more importantly, those global trends, which from the very beginning you all have said you wanted to encrypt the whole web. The global trends I think are probably more important to you than Let’s Encrypt’s footprint on that. 81% globally, 91% in the U.S. Do you feel like Let’s Encrypt has accomplished its mission? Is there still a lot left to do?
Well, in the U.S. there’s still 9% of page loads that are not encrypted. Globally it’s still 19%.
And I think if you’re an engineer, you probably understand what I mean when I say something like “90% of the work is involved in finishing the last 10%.” The 10% or 20% around the world that haven’t moved to HTTPS yet are almost by definition the ones that are hardest to reach. They either don’t know, or they don’t have the tools, or they have some reason why they haven’t switched, and the people for whom it was easy to switch have mostly already done it.
So I think that last 10% is gonna be pretty – it’s not gonna be as easy as the 10% before that. And also, this service needs to continue going. I don’t know how long Let’s Encrypt is gonna need to be around, but it’s quite possible that it’ll be around 10, 20, 30 years. I have no idea. But it’s not like once you’ve encrypted a site, your work is done. You’ve gotta continue to issue new certificates on a regular basis.
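The renewal treadmill Josh describes is why ACME clients automate the decision of when to re-issue. A minimal sketch, assuming the common client default of renewing with 30 days left on a 90-day certificate (the function name and dates here are illustrative, not Let’s Encrypt’s API):

```python
from datetime import datetime, timedelta

def renewal_due(not_after: datetime, now: datetime,
                window_days: int = 30) -> bool:
    # Renew once fewer than `window_days` remain before expiry.
    # 30 days is a common client default for 90-day certificates,
    # leaving plenty of time to retry if an issuance attempt fails.
    return not_after - now <= timedelta(days=window_days)

issued = datetime(2020, 3, 1)
not_after = issued + timedelta(days=90)  # 90-day lifetime -> 2020-05-30

print(renewal_due(not_after, datetime(2020, 4, 15)))  # 45 days left: False
print(renewal_due(not_after, datetime(2020, 5, 10)))  # 20 days left: True
```

A check like this typically runs from a timer (cron or a systemd timer) a couple of times a day, so no human ever has to remember a renewal date.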
So we need to be around for that, and in order to be around for that, we need to stay on the top of our game in terms of compliance and security. At the end of the day, people need to trust us. That’s what it really comes down to. We’re never done because trust is never done. If at any point the world loses confidence in us, we can either lose trust technically, where browsers don’t trust us on a cryptographic level; if our donors don’t trust us, we don’t get the money we need to continue… So the job is certainly not done. We need to maintain high standards and stay trusted for a long time now.
I wasn’t exactly thrilled about the idea of spending a huge chunk of my life building a CA and running it, but it turned out to be really great. It puts me in contact with so many people that are really passionate about making the web a better place, and that’s something I am very happy with. I love working with our board members, and our partners, and our community. I can have enthusiasm on tap anytime I want it, just calling people up and talking about what’s happening with us. So it’s turned out to be great, and our staff are wonderful, so as a job, I really couldn’t ask for more.
[01:08:25.02] Yeah, the benefits of a job often outweigh the job itself. Sometimes you don’t really care for the job itself, or the mission… Not so much the mission-mission, but the fact that you get to interface with so many people who care - it’s gotta be uplifting for you.
In light of what Jerod asked you, I’m sure that this will be a little easier for you to answer - or maybe not - but what’s on the horizon for you? Big picture. You mentioned 10, 30 years down the road, so you’ve gotta have some sort of idea… Give us a snapshot, maybe 1-2 years in the future. What’s something nearest on the horizon not many people know about, that is something you can share today?
Well, we’re gonna keep grinding away and finish encrypting the rest of the web; we’ve already talked about that. There’s so many more things that need to be worked on… I don’t know that – you know, Let’s Encrypt’s mission is pretty well scoped. We issue certificates, and our goal is to get to 100% encryption, and to stay trusted while we do that. In some ways, that’s a very narrow scope, and it’s part of why we have done well, I think. But in doing this work, we’ve realized there are a bunch of other issues on the web that need to be solved. A couple of the ones that are top of my mind – there’s a protocol called Border Gateway Protocol (BGP), and that is the protocol that’s used to decide how and where traffic gets routed around the internet.
So if you’re gonna send a packet from Seattle to Philadelphia, what exact route is it gonna take to get there? That’s all determined by BGP. That protocol is not secure. It’s very vulnerable, and I think the only reason it hasn’t been exploited more is that it’s not very well known. People don’t know about it, attackers don’t know about it. They’ve also had easier targets, but…
For as many security problems as we have, we’ve done a pretty good job working on them, and I think a useful way to think about the next 10-20 years of security is that I think we’re gonna keep pushing attackers down the stack. You improve application layer security, and then maybe the next step down the stack is the transport layer, or something like that. HTTPS, you encrypt that, and then the attacker has gotta move on from there. And right now, I think the next layer down that has not been exploited to its full potential yet is BGP. I think of that as the soft underbelly of the internet. Attackers are gonna take notice of it and they are gonna get better at it, and they can cause massive outages by doing that, they can reroute traffic wherever they want…
So I’m concerned about BGP, and that has some pretty direct impact on Let’s Encrypt, in that certain types of BGP exploits can be used to mess with certificate issuance processes. That’s true of any CA, it’s not specific to Let’s Encrypt; it’s just part of our risk profile as an industry. But it’s a hard thing to secure.
So I’m very interested in what we can do about BGP security going forward. That’s gonna require a lot of the big companies that operate the major pathways on the internet to change how they do things. So that’s one thing I’m interested in, and we do some work around that at Let’s Encrypt to mitigate the problem right now, and also try to invest a little bit in the long-term solutions there.
Interesting.
[01:11:54.27] Another thing that both affects us and that I’m personally pretty passionate about is memory safety. In the same way that it seems crazy to us now that you would start a major website and not use HTTPS - we know so much about the risk of that and it just seems crazy to do that now - I think we’re also gonna come to a point where we feel like it is crazy how people say stuff like “Well, I’m running a bank and I need to do some reverse-proxying, so I’m gonna spin up an instance of NGINX or Apache and do my reverse load balancing.” Because what you’re really saying there is “Why don’t I just stick several million lines of C code on the edge of my network? And that’ll probably be fine.” That code is not safe. [unintelligible 01:13:25.11]
So we’re trying to remove that kind of code from inside Let’s Encrypt, because it’s a huge liability, and I’m also looking into ways that we can try to move the needle on this problem in software in general.
What’s your first step, do you think? What are some of your early insights on moving that needle?
The first step is don’t write any new code in C and C++, or any other memory-unsafe language. That should just be a given. If you can tolerate a garbage collector, if that’s fine, then you have a ton of options - Java, Go, whatever. If you want a memory-safe language that doesn’t use a garbage collector, go use Rust. You have that option now.
It seems like the next step would be having viable replacements for a lot of the software that’s already out there.
Yeah. The next step is we need to rewrite all the software that we already wrote in C and C++, and replace it. And when I tell people that, the most common reaction is like “You can’t possibly expect us to rewrite the world. That’s so unreasonable. You’re not a realistic person when you say that.” And you know, I really strongly object to that reaction. We’re in a world full of talented people who care, and we can absolutely accomplish that if we want to.
If your goal is to rewrite a major web server or a major proxy server, or a major library or whatever, in Rust - let’s just do it. Yeah, it’ll take five years, it’ll introduce some logic bugs along the way that will get fixed, but in the end, this software is gonna be around for a very long time. And we need to eliminate that massive class of bugs, because vulnerability scanning, and audits, and static analysis, pentesting - that stuff doesn’t even begin to deal with the problem. It’s a good thing to do if you’re stuck with C and C++, but it’s absolutely not gonna eliminate the bugs. That’s just not gonna go away until you rewrite it.
[01:16:05.22] What we’re doing right now, where we just spin up giant piles of C and C++ without thinking about it is – we should not be doing that. We can’t be doing that 10-20 years from now if we wanna try to have a more secure world than we have now. So I think we need to think bigger. We just need to think like “Yeah, let’s rewrite the world.” Rewriting a big web server is a big project, but I’m sure there are teams at any number of companies that could accomplish it on their own without help, if they just decide to do it. Yeah, it’ll be five years, but whatever; five years from now, you put in some effort, and now you’ve got a much more secure software system.
So I’d like to just see some more ambition and some more optimistic thinking about this stuff. I think it’s really important. I don’t wanna be suffering from buffer overflows in everyday software that sits on the network edge 10-20 years from now.
My guess, Josh, is that you’re well-positioned to encourage, considering what you’ve done in the last five years… Going from zero to a billion certificates issued is a big deal; you’ve found a way to create a CA in a world where it’s very difficult. Obviously, there’s protocols by – was it the cross-signature? Is that what it’s called?
Yeah, cross-signature.
Cross-signature. Just that alone, that was a smart play, and you’ve been able to do so much… So I think you’ve probably piqued our interest, and as well many listeners listening to this show by saying so. We need you out there petitioning for this, and encouraging those out there that can do this to take on this mission to do so, and not look at the five or ten years that it might take to do it. We see such blowback when we don’t consider the large-scale costs over many years. If this software isn’t gonna go anywhere in the next 10, 20 or 30 years, then we’re gonna rely on it. And just like securing the web is more important than it has ever been, having secure software that doesn’t have memory issues or unsafe memory where you can do these things, it seems so clear to me.
Yeah. It’s gotta happen.
Josh, thank you so much for your mission. Thank you so much for Let’s Encrypt and the work you all have done; to you and the team. I know you’re not a lone soldier in this mission, but the many behind you enabling this. But without you and many others doing this, we would have 51% less internet secured, so thank you very much for that. We appreciate your mission, and we appreciate you. Thank you, Josh.
Thank you so much.
Our transcripts are open source on GitHub. Improvements are welcome. 💚 | https://changelog.com/podcast/389 | CC-MAIN-2020-24 | refinedweb | 13,392 | 77.77 |
Mutex
A mutex (short for mutual exclusion) is a synchronization object, a variant of semaphore with k=1. A mutex is said to be seized by a task when the task decreases k; it is released when the task restores k. Mutexes are typically used to protect a shared resource from concurrent access: a task seizes (or acquires) the mutex, accesses the resource, and then releases the mutex.
A mutex is a low-level synchronization primitive prone to deadlocking. A deadlock can occur with just two tasks and two mutexes, if each task attempts to acquire both mutexes but in the opposite order. Whether the deadlock is actually entered usually depends on a race condition, which leads to sporadic hang-ups that are very difficult to track down.
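The seize-access-release discipline described above can be sketched in a few lines of Python. This is an illustrative sketch only (the names are arbitrary), not one of the per-language task solutions below:

```python
import threading

counter = 0                      # the shared resource
mutex = threading.Lock()         # a semaphore with k=1, i.e. a mutex

def worker():
    global counter
    for _ in range(100_000):
        mutex.acquire()          # seize the mutex
        counter += 1             # access the shared resource
        mutex.release()          # release the mutex

threads = [threading.Thread(target=worker) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(counter)                   # 400000 -- no increments were lost
```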
Contents
- 1 Variants of mutexes
- 2 Deadlock prevention
- 3 Sample implementations / APIs
- 3.1 Ada
- 3.2 BBC BASIC
- 3.3 C
- 3.4 C++
- 3.5 D
- 3.6 E
- 3.7 Erlang
- 3.8 Go
- 3.9 Haskell
- 3.10 Icon and Unicon
- 3.11 Java
- 3.12 Logtalk
- 3.13 Nim
- 3.14 Objective-C
- 3.15 Objeck
- 3.16 OCaml
- 3.17 Oforth
- 3.18 Oz
- 3.19 Perl
- 3.20 Perl 6
- 3.21 PicoLisp
- 3.22 PureBasic
- 3.23 Python
- 3.24 Racket
- 3.25 Ruby
- 3.26 Tcl
- 3.27 zkl
Variants of mutexes[edit]
Global and local mutexes[edit]
Usually the OS provides various implementations of mutexes corresponding to the variants of tasks available in the OS. For example, system-wide mutexes can be used by processes, while local mutexes can be used only by the threads of a single process. This distinction is maintained because, depending on the hardware, seizing a global mutex might be a thousand times slower than seizing a local one.
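Python shows the same local/global split: threading.Lock is local to the threads of one process, while multiprocessing.Lock is an OS-backed lock that can be shared with child processes (it is not, however, a named system-wide mutex). A sketch:

```python
import threading
import multiprocessing

local_mutex = threading.Lock()          # usable only by threads of this process
global_mutex = multiprocessing.Lock()   # OS-backed; can be inherited by child processes

# Both expose the same seize/release interface:
for m in (local_mutex, global_mutex):
    m.acquire()
    # ... critical section ...
    m.release()
```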
Reentrant mutex[edit]
A reentrant mutex can be seized by the same task multiple times. Each seizing of the mutex must be matched by a release before another task can seize it.
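Python's threading.RLock is a reentrant mutex of exactly this kind: the owning thread may seize it repeatedly, and each seize must be matched by a release before any other thread can take it. A sketch:

```python
import threading

rlock = threading.RLock()        # reentrant: the owning thread may seize it again

def outer():
    with rlock:                  # first seize
        return inner()           # calls into code that seizes the same mutex

def inner():
    with rlock:                  # second seize by the same thread: fine with an
        return "ok"              # RLock, but a plain Lock would deadlock here

print(outer())                   # ok
```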
Read write mutex[edit]
A read-write mutex can be seized at two levels: for read and for write. The mutex can be seized for read by any number of tasks; only one task may seize it for write. Read-write mutexes are usually used to protect resources which can be accessed in mutable and immutable ways. Immutable (read) access is granted concurrently to many tasks because they do not change the resource state. Read-write mutexes can be reentrant, global or local. Further, promotion operations may be provided: a task that has seized the mutex for write releases it while keeping it seized for read. Note that the reverse operation is potentially deadlocking and requires some additional access policy control.
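Python's standard library has no read-write mutex, but the two-level behaviour described above can be sketched with two plain locks. This minimal version admits any number of readers or one writer, and (as a known limitation) lets a steady stream of readers starve writers:

```python
import threading

class ReadWriteMutex:
    """Minimal read-write mutex sketch: many readers or one writer."""
    def __init__(self):
        self._readers = 0
        self._readers_lock = threading.Lock()   # protects the reader count
        self._write_lock = threading.Lock()     # held while anyone writes

    def acquire_read(self):
        with self._readers_lock:
            self._readers += 1
            if self._readers == 1:              # first reader locks out writers
                self._write_lock.acquire()

    def release_read(self):
        with self._readers_lock:
            self._readers -= 1
            if self._readers == 0:              # last reader lets writers in
                self._write_lock.release()

    def acquire_write(self):
        self._write_lock.acquire()

    def release_write(self):
        self._write_lock.release()

rw = ReadWriteMutex()
rw.acquire_read(); rw.acquire_read()    # two readers may hold it at once
rw.release_read(); rw.release_read()
rw.acquire_write()                      # exclusive access
rw.release_write()
```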
Deadlock prevention[edit]
A simple technique for deadlock prevention is to always seize mutexes in some fixed order. This is discussed in depth in the Dining philosophers problem.
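The fixed-order technique can be sketched in Python: the two tasks ask for the two mutexes in opposite orders, but a helper always seizes them in one agreed global order, so the circular wait required for deadlock cannot form. Names here are illustrative:

```python
import threading

lock_a = threading.Lock()
lock_b = threading.Lock()
ORDER = {id(lock_a): 0, id(lock_b): 1}       # the agreed global acquisition order

def acquire_in_order(*locks):
    # Seize mutexes in the global order, whatever order the caller listed them.
    for lock in sorted(locks, key=lambda l: ORDER[id(l)]):
        lock.acquire()

def release_all(*locks):
    for lock in locks:
        lock.release()

def task1():
    acquire_in_order(lock_a, lock_b)
    release_all(lock_a, lock_b)

def task2():
    acquire_in_order(lock_b, lock_a)         # opposite order, same acquisition sequence
    release_all(lock_a, lock_b)

t1 = threading.Thread(target=task1)
t2 = threading.Thread(target=task2)
t1.start(); t2.start()
t1.join(); t2.join()                         # completes; no circular wait is possible
```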
Sample implementations / APIs[edit]
Ada[edit]
Ada provides higher-level concurrency primitives, which are complete in the sense that they also allow implementations of the lower-level ones, like mutexes. Here is an implementation of a plain non-reentrant mutex based on protected objects.
The mutex interface:
protected type Mutex is
entry Seize;
procedure Release;
private
Owned : Boolean := False;
end Mutex;
The implementation:
protected body Mutex is
entry Seize when not Owned is
begin
Owned := True;
end Seize;
procedure Release is
begin
Owned := False;
end Release;
end Mutex;
Here the entry Seize has a queue of the tasks waiting for the mutex. The entry's barrier is closed when Owned is true. So any task calling to the entry will be queued. When the barrier is open the first task from the queue executes the entry and Owned becomes true closing the barrier again. The procedure Release simply sets Owned to false. Both Seize and Release are protected actions whose execution causes reevaluation of all barriers, in this case one of Seize.
Use:
declare
M : Mutex;
begin
M.Seize; -- Wait infinitely for the mutex to be free
... -- Critical code
M.Release; -- Release the mutex
...
select
M.Seize; -- Wait no longer than 0.5s
or delay 0.5;
raise Timed_Out;
end select;
... -- Critical code
M.Release; -- Release the mutex
end;
It is also possible to implement mutex as a monitor task.
BBC BASIC[edit]
REM Create mutex:
SYS "CreateMutex", 0, 0, 0 TO hMutex%
REM Wait to acquire mutex:
REPEAT
SYS "WaitForSingleObject", hMutex%, 1 TO res%
UNTIL res% = 0
REM Release mutex:
SYS "ReleaseMutex", hMutex%
REM Free mutex:
SYS "CloseHandle", hMutex%
C[edit]
Win32[edit]
To create a mutex operating system "object":
HANDLE hMutex = CreateMutex(NULL, FALSE, NULL);
To lock the mutex:
WaitForSingleObject(hMutex, INFINITE);
To unlock the mutex
ReleaseMutex(hMutex);
When the program is finished with the mutex:
CloseHandle(hMutex);
POSIX[edit]
Creating a mutex:
#include <pthread.h>
pthread_mutex_t mutex = PTHREAD_MUTEX_INITIALIZER;
Or:
pthread_mutex_t mutex;
pthread_mutex_init(&mutex, NULL);
Locking:
int error = pthread_mutex_lock(&mutex);
Unlocking:
int error = pthread_mutex_unlock(&mutex);
Trying to lock (but do not wait if it can't)
int error = pthread_mutex_trylock(&mutex);
C++[edit]
Win32[edit]
POSIX[edit]
C++11[edit]
See the C++11 reference for mutex-related functionality in the standard library.
D[edit]
class Synced
{
public:
synchronized int func (int input)
{
num += input;
return num;
}
private:
static int num = 0;
}
Keep in mind that synchronized used as above works on a per-class-instance basis.
The following example tries to illustrate the problem:
import tango.core.Thread, tango.io.Stdout, tango.util.log.Trace;
class Synced {
public synchronized int func (int input) {
Trace.formatln("in {} at func enter: {}", input, foo);
// stupid loop to consume some time
int arg;
for (int i = 0; i < 1000*input; ++i) {
for (int j = 0; j < 10_000; ++j) arg += j;
}
foo += input;
Trace.formatln("in {} at func exit: {}", input, foo);
return arg;
}
private static int foo;
}
void main(char[][] args) {
SimpleThread[] ht;
Stdout.print( "Starting application..." ).newline;
for (int i=0; i < 3; i++) {
Stdout.print( "Starting thread for: " )(i).newline;
ht ~= new SimpleThread(i+1);
ht[i].start();
}
// wait for all threads
foreach( s; ht )
s.join();
}
class SimpleThread : Thread
{
private int d_id;
this (int id) {
super (&run);
d_id = id;
}
void run() {
auto tested = new Synced;
Trace.formatln ("in run() {}", d_id);
tested.func(d_id);
}
}
Every created thread creates its own Synced object, and because the monitor created by the synchronized statement is per object instance, each thread can enter the func() method concurrently.
To resolve this, func() could either be made static (static member functions are synchronized on a per-class basis), or a synchronized block could be used, as here:
class Synced {
public int func (int input) {
synchronized(Synced.classinfo) {
// ...
foo += input;
// ...
}
return arg;
}
private static int foo;
}
E[edit]
E's approach to concurrency is to never block, in favor of message passing/event queues/callbacks. Therefore, it is unidiomatic to use a mutex at all, and incorrect, or rather unsafe, to use a mutex which blocks the calling thread. That said, here is a mutex written in E.
def makeMutex() {
# The mutex is available (released) if available is resolved, otherwise it
# has been seized/locked. The specific value of available is irrelevant.
var available := null
# The interface to the mutex is a function, taking a function (action)
# to be executed.
def mutex(action) {
# By assigning available to our promise here, the mutex remains
# unavailable to the /next/ caller until /this/ action has gotten
# its turn /and/ resolved its returned value.
available := Ref.whenResolved(available, fn _ { action <- () })
}
return mutex
}
This implementation of a mutex is designed to have a very short implementation as well as usage in E. The mutex object is a function which takes a function action to be executed once the mutex is available. The mutex is unavailable until the return value of action resolves. This interface has been chosen over lock and unlock operations to reduce the hazard of unbalanced lock/unlock pairs, and because it naturally fits into E code.
Usage example:
Creating the mutex:
? def mutex := makeMutex()
# value: <mutex>
Creating the shared resource:
? var value := 0
# value: 0
Manipulating the shared resource non-atomically so as to show a problem:
? for _ in 0..1 {
> when (def v := (&value) <- get()) -> {
> (&value) <- put(v + 1)
> }
> }
? value
# value: 1
The value has been incremented twice, but non-atomically, and so is 1 rather
than the intended 2.
? value := 0
# value: 0
This time, we use the mutex to protect the action.
? for _ in 0..1 {
> mutex(fn {
> when (def v := (&value) <- get()) -> {
> (&value) <- put(v + 1)
> }
> })
> }
? value
# value: 2
when blocks and
Ref.whenResolved return a promise for the result of the deferred action, so the mutex here waits for the gratuitously complicated increment to complete before becoming available for the next action.
Erlang[edit]
Erlang has no built-in mutexes, so here is a simple hand-built one that lets each of 3 slowly printing processes finish printing before the next one starts.
-module( mutex ).
-export( [task/0] ).
task() ->
Mutex = erlang:spawn( fun() -> loop() end ),
[erlang:spawn(fun() -> random:seed( X, 0, 0 ), print(Mutex, X, 3) end) || X <- lists:seq(1, 3)].
loop() ->
receive
{acquire, Pid} ->
Pid ! {access, erlang:self()},
receive
{release, Pid} -> loop()
end
end.
mutex_acquire( Pid ) ->
Pid ! {acquire, erlang:self()},
receive
{access, Pid} -> ok
end.
mutex_release( Pid ) -> Pid ! {release, erlang:self()}.
print( _Mutex, _N, 0 ) -> ok;
print( Mutex, N, M ) ->
timer:sleep( random:uniform(100) ),
mutex_acquire( Mutex ),
io:fwrite( "Print ~p: ", [N] ),
[print_slow(X) || X <- lists:seq(1, 3)],
io:nl(),
mutex_release( Mutex ),
print( Mutex, N, M - 1 ).
print_slow( X ) ->
io:fwrite( " ~p", [X] ),
timer:sleep( 100 ).
- Output:
27> mutex:task(). Print 2: 1 2 3 Print 1: 1 2 3 Print 3: 1 2 3 Print 2: 1 2 3 Print 1: 1 2 3 Print 3: 1 2 3 Print 2: 1 2 3 Print 1: 1 2 3 Print 3: 1 2 3
Go[edit]
sync.Mutex[edit]
Go has mutexes, and here is an example use of a mutex, somewhat following the example of E. This code defines a slow incrementer, that reads a variable, then a significant amount of time later, writes an incremented value back to the variable. Two incrementers are started concurrently. Without the mutex, one would overwrite the other and the result would be 1. Using a mutex, as shown here, one waits for the other and the result is 2.
package main
import (
"fmt"
"sync"
"time"
)
var value int
var m sync.Mutex
var wg sync.WaitGroup
func slowInc() {
m.Lock()
v := value
time.Sleep(1e8)
value = v+1
m.Unlock()
wg.Done()
}
func main() {
wg.Add(2)
go slowInc()
go slowInc()
wg.Wait()
fmt.Println(value)
}
- Output:
2
A read-write mutex is provided by the sync.RWMutex type. For a code example using an RWMutex, see Atomic updates#RWMutex.
Channels[edit]
If a mutex is exactly what you need, sync.Mutex is there. As soon as things start getting complicated though, Go channels offer a much clearer alternative. As a gateway from mutexes to channels, here is the above program implemented with channels:
package main
import (
"fmt"
"time"
)
var value int
func slowInc(ch, done chan bool) {
// channel receive, used here to implement mutex lock.
// it will block until a value is available on the channel
<-ch
// same as above
v := value
time.Sleep(1e8)
value = v + 1
// channel send, equivalent to mutex unlock.
// makes a value available on channel
ch <- true
// channels can be used to signal completion too
done <- true
}
func main() {
ch := make(chan bool, 1) // ch used as a mutex
done := make(chan bool) // another channel used to signal completion
go slowInc(ch, done)
go slowInc(ch, done)
// a freshly created sync.Mutex starts out unlocked, but a freshly created
// channel is empty, which for us represents "locked." sending a value on
// the channel puts the value up for grabs, thus representing "unlocked."
ch <- true
<-done
<-done
fmt.Println(value)
}
The value passed on the channel is not accessed here, just as the internal state of a mutex is not accessed. Rather, it is only the effect of the value being available that is important. (Of course if you wanted to send something meaningful on the channel, a reference to the shared resource would be a good start...)
Haskell[edit]
Haskell has a slight variation on the mutex, namely the MVar. MVars, unlike mutexes, are containers. However, they are similar enough that MVar () is essentially a mutex. An MVar can be in two states: empty or full; it stores a value only when full. There are 4 main ways to deal with MVars:
takeMVar :: MVar a -> IO a
putMVar :: MVar a -> a -> IO ()
tryTakeMVar :: MVar a -> IO (Maybe a)
tryPutMVar :: MVar a -> a -> IO Bool
takeMVar will attempt to fetch a value from the MVar, and will block while the MVar is empty. After using this, the MVar will be left empty. putMVar will attempt to put a value in a MVar, and will block while there already is a value in the MVar. This will leave the MVar full. The last two functions are non-blocking versions of takeMVar and putMVar, returning Nothing and False, respectively, if their blocking counterpart would have blocked.
For more information see the documentation.
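For readers more at home in Python, the MVar semantics above can be mimicked with a queue.Queue of capacity 1: put blocks when the slot is full, get blocks when it is empty, and the *_nowait variants mirror tryPutMVar/tryTakeMVar. This is an analogy only, not Haskell's API:

```python
from queue import Queue, Empty, Full

mvar = Queue(maxsize=1)           # a one-slot container: empty or full

mvar.put(42)                      # putMVar: would block if already full
try:
    mvar.put_nowait(43)           # tryPutMVar on a full MVar: does not block
except Full:
    print("full")                 # full
value = mvar.get()                # takeMVar: blocks if empty, leaves it empty
print(value)                      # 42
try:
    mvar.get_nowait()             # tryTakeMVar on an empty MVar
except Empty:
    print("empty")                # empty
```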
Icon and Unicon[edit]
The following code uses features exclusive to Unicon.
x := mutex() # create and return a mutex handle for sharing between threads needing to synchronize with each other
lock(x) # lock mutex x
trylock(x) # non-blocking lock, succeeds only if no other thread is already in the critical region
unlock(x) # unlock mutex x
Java[edit]
Java 5 added a Semaphore class which can act as a mutex (as stated above, a mutex is "a variant of semaphore with k=1").
import java.util.concurrent.Semaphore;
public class VolatileClass{
public Semaphore mutex = new Semaphore(1); //also a "fair" boolean may be passed which,
//when true, queues requests for the lock
public void needsToBeSynched(){
//...
}
//delegate methods could be added for acquiring and releasing the mutex
}
Using the mutex:
public class TestVolatileClass {
public static void main(String[] args) throws Exception {
VolatileClass vc = new VolatileClass();
vc.mutex.acquire(); //will wait automatically if another class has the mutex
//can be interrupted similarly to a Thread
//use acquireUninterruptibly() to avoid that
vc.needsToBeSynched();
vc.mutex.release();
}
}
Java also has the synchronized keyword, which allows almost any object to be used to enforce mutual exclusion.
public class Main {
static Object mutex = new Object();
static int i = 0;
public void addAndPrint()
{
System.out.print("" + i + " + 1 = ");
i++;
System.out.println("" + i);
}
public void subAndPrint()
{
System.out.print("" + i + " - 1 = ");
i--;
System.out.println("" + i);
}
public static void main(String[] args){
final Main m = new Main();
new Thread() {
public void run()
{
while (true) { synchronized(m.mutex) { m.addAndPrint(); } }
}
}.start();
new Thread() {
public void run()
{
while (true) { synchronized(m.mutex) { m.subAndPrint(); } }
}
}.start();
}
}
The "synchronized" keyword actually is a form of monitor, which was a later-proposed solution to the same problems that mutexes and semaphores were designed to solve. More about synchronization may be found on Sun's website - , and more about monitors may be found in any decent operating systems textbook.
Logtalk[edit]
Logtalk provides a synchronized/0 directive for synchronizing all object (or category) predicates using the same implicit mutex, and a synchronized/1 directive for synchronizing a set of predicates using the same implicit mutex. A usage example of the synchronized/1 directive follows (inspired by the Erlang example). It works when using SWI-Prolog, XSB, or YAP as the backend compiler.
:- object(slow_print).
:- threaded.
:- public(start/0).
:- private([slow_print_abc/0, slow_print_123/0]).
:- synchronized([slow_print_abc/0, slow_print_123/0]).
start :-
% launch two threads, running never ending goals
threaded((
repeat_abc,
repeat_123
)).
repeat_abc :-
repeat, slow_print_abc, fail.
repeat_123 :-
repeat, slow_print_123, fail.
slow_print_abc :-
write(a), thread_sleep(0.2),
write(b), thread_sleep(0.2),
write(c), nl.
slow_print_123 :-
write(1), thread_sleep(0.2),
write(2), thread_sleep(0.2),
write(3), nl.
:- end_object.
- Output:
?- slow_print::start. abc 123 abc 123 abc 123 abc 123 abc ...
Nim[edit]
For mutexes (called locks in Nim) thread support is required,
so compile using
nim --threads:on c mutex
Creating a mutex:
import locks
var mutex: Lock
initLock mutex
Locking:
acquire mutex
Unlocking:
release mutex
Trying to lock (but do not wait if it can't):
let success = tryAcquire mutex
Objective-C[edit]
NSLock *m = [[NSLock alloc] init];
[m lock]; // locks in blocking mode
if ([m tryLock]) { // acquire a lock -- does not block if not acquired
// lock acquired
} else {
// already locked, does not block
}
[m unlock];
Reentrant mutex is provided by the NSRecursiveLock class.
Objective-C also has @synchronized() blocks, like Java.
Objeck[edit]
Objeck provides a simple way to lock a section of code. Please refer to the programmer's guide for additional information.
m := ThreadMutex->New("lock a");
# section locked
critical(m) {
...
}
# section unlocked
OCaml[edit]
OCaml provides a built-in Mutex module.
It is very simple, there are four functions:
let m = Mutex.create() in
Mutex.lock m; (* locks in blocking mode *)
if (Mutex.try_lock m)
then ... (* did the lock *)
else ... (* already locked, do not block *)
Mutex.unlock m;
Oforth[edit]
Oforth has no mutex. A mutex can be simulated using a channel initialized with one object. A task can receive the object from the channel (seizing the mutex) and send it back to the channel when the job is done. If the channel is empty, a task will wait until an object is available in the channel.
import: parallel
: job(mut)
mut receive drop
"I get the mutex !" .
2000 sleep
"Now I release the mutex" println
1 mut send drop ;
: mymutex
| mut |
Channel new dup send(1) drop ->mut
10 #[ #[ mut job ] & ] times ;
Oz[edit]
Oz has "locks" which are local, reentrant mutexes.
Creating a mutex:
declare L = {Lock.new}
The only way to acquire a mutex is to use the lock syntax. This ensures that releasing a lock can never be forgotten. Even if an exception occurs, the lock will be released.
lock L then
{System.show exclusive}
end
To make it easier to work with objects, classes can be marked with the property locking. Instances of such classes have their own internal lock and can use a variant of the lock syntax:
class Test
prop locking
meth test
lock
{Show exclusive}
end
end
end
Perl[edit]
Code demonstrating shared resources and simple locking. Resource1 and Resource2 represent some limited resources that must be exclusively used and released by each thread. Each thread reports how many of each is available; if it goes below zero, something is wrong. Try commenting out either of the "lock $lock*" lines to see what happens without locking.
use Thread qw'async';
use threads::shared;
my ($lock1, $lock2, $resource1, $resource2) :shared = (0) x 4;
sub use_resource {
{ # curly provides lexical scope, exiting which causes lock to release
lock $lock1;
$resource1 --; # acquire resource
sleep(int rand 3); # artificial delay to simulate real work
$resource1 ++; # release resource
print "In thread ", threads->tid(), ": ";
print "Resource1 is $resource1\n";
}
{
lock $lock2;
$resource2 --;
sleep(int rand 3);
$resource2 ++;
print "In thread ", threads->tid(), ": ";
print "Resource2 is $resource2\n";
}
}
# create 9 threads and clean up each after they are done.
for ( map async{ use_resource }, 1 .. 9) {
$_->join
}
Perl 6[edit]
my $lock = Lock.new;
$lock.protect: { your-ad-here() }
Locks are reentrant. You may explicitly lock and unlock them, but the syntax above guarantees the lock will be unlocked on scope exit, even if by thrown exception or other exotic control flow. That being said, direct use of locks is discouraged in Perl 6 in favor of promises, channels, and supplies, which offer better composable semantics.
PicoLisp[edit]
PicoLisp uses several mechanisms of interprocess communication, mainly within the same process family (children of the same parent process), for database synchronization (e.g. 'lock', 'sync' or 'tell').
For a simple synchronization of unrelated PicoLisp processes the 'acquire' / 'release' function pair can be used.
PureBasic[edit]
PureBasic has the following Mutex functions;
MyMutex=CreateMutex()
Result = TryLockMutex(MyMutex)
LockMutex(MyMutex)
UnlockMutex(MyMutex)
FreeMutex(MyMutex)
Example
Declare ThreadedTask(*MyArgument)
Define Mutex
If OpenConsole()
Define thread1, thread2, thread3
Mutex = CreateMutex()
thread1 = CreateThread(@ThreadedTask(), 1): Delay(5)
thread2 = CreateThread(@ThreadedTask(), 2): Delay(5)
thread3 = CreateThread(@ThreadedTask(), 3)
WaitThread(thread1)
WaitThread(thread2)
WaitThread(thread3)
PrintN(#CRLF$+"Press ENTER to exit"): Input()
FreeMutex(Mutex)
CloseConsole()
EndIf
Procedure ThreadedTask(*MyArgument)
Shared Mutex
Protected a, b
For a = 1 To 3
LockMutex(Mutex)
; Without Lock-/UnLockMutex() here the output from the parallel threads would be all mixed.
; Reading/Writing to shared memory resources is a common use for mutexes in PureBasic
PrintN("Thread "+Str(*MyArgument)+": Print 3 numbers in a row:")
For b = 1 To 3
Delay(75)
PrintN("Thread "+Str(*MyArgument)+" : "+Str(b))
Next
UnlockMutex(Mutex)
Next
EndProcedure
Python[edit]
Demonstrating semaphores. Note that a semaphore can be considered a generalization of a mutex: while a mutex allows exclusive access to a single thread, a semaphore grants access to up to a fixed number of threads.
import threading
from time import sleep
# res: max number of resources. If changed to 1, it functions
# identically to a mutex/lock object
res = 2
sema = threading.Semaphore(res)
class res_thread(threading.Thread):
    def run(self):
        global res
        n = self.name
        for i in range(1, 4):
            # acquire a resource if available and work hard
            # for 2 seconds; if all res are occupied, block
            # and wait
            sema.acquire()
            res = res - 1
            print(n, "+ res count", res)
            sleep(2)
            # after done with the resource, return it to the pool
            res = res + 1
            print(n, "- res count", res)
            sema.release()

# create 4 threads, each acquiring a resource and working
for i in range(1, 5):
    t = res_thread()
    t.start()
Racket[edit]
Racket has semaphores which can be used as mutexes in the usual way. With other language features this can be used to implement new features -- for example, here is how we would implement a protected-by-a-mutex function:
(define foo
(let ([sema (make-semaphore 1)])
(lambda (x)
(dynamic-wind (λ() (semaphore-wait sema))
(λ() (... do something ...))
(λ() (semaphore-post sema))))))
and it is now easy to turn this into a macro for definitions of such functions:
(define-syntax-rule (define/atomic (name arg ...) E ...)
(define name
(let ([sema (make-semaphore 1)])
(lambda (arg ...)
(dynamic-wind (λ() (semaphore-wait sema))
(λ() E ...)
(λ() (semaphore-post sema)))))))
;; this does the same as the above now:
(define/atomic (foo x)
(... do something ...))
But more than just linguistic features, Racket has many additional synchronization tools in its VM. Some notable examples: OS semaphore for use with OS threads, green threads, lightweight OS threads, and heavyweight OS threads, synchronization channels, thread mailboxes, CML-style event handling, generic synchronizeable event objects, non-blocking IO, etc, etc.
Ruby[edit]
Ruby's standard library includes a mutex_m module that can be mixed into a class.
require 'mutex_m'
class SomethingWithMutex
include Mutex_m
...
end
Individual objects can be extended with the module too
an_object = Object.new
an_object.extend(Mutex_m)
An object with mutex powers can then:
# acquire a lock -- block execution until it becomes free
an_object.mu_lock
# acquire a lock -- return immediately even if not acquired
got_lock = an_object.mu_try_lock
# have a lock?
if an_object.mu_locked? then ...
# release the lock
an_object.mu_unlock
# wrap a lock around a block of code -- block execution until it becomes free
an_object.mu_synchronize do
  # ... critical section ...
end
Tcl[edit]
Tcl's mutexes have four functions.
package require Thread
# How to create a mutex
set m [thread::mutex create]
# This will block if the lock is already held unless the mutex is made recursive
thread::mutex lock $m
# Now locked...
thread::mutex unlock $m
# Unlocked again
# Dispose of the mutex
thread::mutex destroy $m
There are also read-write mutexes available.
set rw [thread::rwmutex create]
# Get and drop a reader lock
thread::rwmutex rlock $rw
thread::rwmutex unlock $rw
# Get and drop a writer lock
thread::rwmutex wlock $rw
thread::rwmutex unlock $rw
thread::rwmutex destroy $rw
zkl[edit]
zkl has two mutex objects: Lock (a mutex) and WriteLock (a mutex that allows multiple readers but only one writer). The critical keyword fences code to ensure the lock is released when the code is done.
var lock=Atomic.Lock(); lock.acquire(); doSomething(); lock.release();
critical(lock){ doSomething(); }
var lock=Atomic.WriteLock();
lock.acquireForReading(); doSomeReading(); lock.readerRelease();
critical(lock,acquireForReading,readerRelease){ ... }
lock.acquireForWriting(); write(); lock.writerRelease(); | http://rosettacode.org/wiki/Mutex | CC-MAIN-2017-13 | refinedweb | 4,028 | 56.96 |
java.lang.Object
  weblogic.xml.xpath.StreamXPath
public final class StreamXPath
Represents an xpath for evaluation against an XMLInputStream. Instances of this class are installed in an XPathStreamFactory, which in turn can create XMLInputStreams that perform xpath matching.
An instance of StreamXPath is a handle to a parsed representation of a particular xpath expression - parsing occurs when the StreamXPath is constructed. A single StreamXPath instance can be installed any number of times in any number of factories, so that the expense of parsing a given xpath expression is paid only once.
The sequential nature of stream-based XML processing makes it impractical for StreamXPath to fully support the XPath specification. The StreamXPath constructor will throw an XPathUnsupportedException if the given XPath contains any of the unsupported features or constructs described below:
The root expression of a StreamXPath is always a location path which describes the node-set to be observed. It may be absolute or relative; in fact, it makes no difference, since the evaluation always occurs relative to the first node (Element event) encountered on the stream (which is assumed to be the document root element).
All of the XPath 1.0 core functions are supported with the exception of the following: last, size, id, lang, and count.
The string() function is not fully supported; it cannot convert a nodeset containing element nodes to a string. This is because the string value of an element depends on its descendant elements, which will not be processed until later in the stream. This limitation also applies to implicit uses of 'string()' as described in the spec - any case where the string-value of a nodeset must be determined.
The following axes are supported in the location path which is the root expression of the xpath: self, child, descendant, descendant-or-self, following, following-sibling, attribute and namespace.
The root location path may contain predicates, and those predicate expressions may in turn contain location paths. Location paths appearing anywhere inside a predicate must be relative (i.e. no leading slash) and may only use the following axes: self, ancestor, ancestor-or-self, attribute, and namespace.
XPath models an XML document as a tree of nodes. For purposes of streaming evaluation, a node is actually described by two XMLEvents - a START_ELEMENT and an END_ELEMENT. Whenever a START_ELEMENT is encountered that matches the stream, its corresponding END_ELEMENT will eventually be observed as well.
START_DOCUMENT and END_DOCUMENT are just treated as special cases of START_ELEMENT and END_ELEMENT. It is expected that the first event on the stream will be a START_DOCUMENT event, and the last event will be END_DOCUMENT, although this is not strictly enforced. START_DOCUMENT (along with its corresponding END_DOCUMENT event) is mapped to the 'Root Node' of the XPath data model described in section 5.1 of the XPath specification. Thus, an observer on the XPath '.' will always see the START_DOCUMENT and END_DOCUMENT events.
Note: when both a START_ELEMENT and its attribute and/or namespace declarations match, the element notification is sent out first, followed by the attributes, and finally the namespaces.
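The pairing of start and end events described above can be illustrated with the standard javax.xml.stream (StAX) API, which uses an equivalent START_ELEMENT/END_ELEMENT event model. This is a sketch for illustration only; it uses the JDK's StAX classes, not WebLogic's XMLEvent types:

```java
import java.io.StringReader;
import javax.xml.stream.XMLInputFactory;
import javax.xml.stream.XMLStreamConstants;
import javax.xml.stream.XMLStreamException;
import javax.xml.stream.XMLStreamReader;

public class StartEndPairing {
    // Count START_ELEMENT and END_ELEMENT events on a stream. In a
    // well-formed document they always balance, which is what allows
    // a streaming matcher to pair each start with its eventual end.
    public static int[] countStartEnd(String xml) {
        try {
            XMLStreamReader r = XMLInputFactory.newInstance()
                    .createXMLStreamReader(new StringReader(xml));
            int starts = 0, ends = 0;
            while (r.hasNext()) {
                int event = r.next();
                if (event == XMLStreamConstants.START_ELEMENT) starts++;
                else if (event == XMLStreamConstants.END_ELEMENT) ends++;
            }
            return new int[] { starts, ends };
        } catch (XMLStreamException e) {
            throw new RuntimeException(e);
        }
    }

    public static void main(String[] args) {
        // Even the self-closing <a/> produces a START_ELEMENT
        // followed immediately by an END_ELEMENT.
        int[] c = countStartEnd("<root><a/><b>text</b></root>");
        System.out.println(c[0] + " starts, " + c[1] + " ends"); // 3 starts, 3 ends
    }
}
```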
public StreamXPath(String xpath) throws XPathException
Constructs a new representation of the given xpath. The xpath provided in the string must be either a location path or an expression which evaluates to a boolean value. If the former, then the given location path simply becomes the 'root location path' as described in the 'XPath Support' section above.
In the latter case, where the given 'xpath' is a boolean
expression, it is implicitly converted into a location path that
is the equivalent of the following:
(//.[xpath] | //attribute::*[xpath] | //namespace::*[xpath])
which is to say, a node n (including attribute and namespace
nodes) is considered to be a match if and only if 'xpath'
evaluates to true in an xpath context whose context node is n.
This idiom is included primarily to facilitate the use of
StreamXPath in processing XML Signatures.
Note that all of the restrictions described above under 'XPath Support' will apply to such a synthesized location path. The boolean expression appears as a predicate in the location path, and so all of the restrictions described under 'Predicates' apply. In particular, note that only the acceptable predicate axes may be used within the boolean expression: self, ancestor, ancestor-or-self, attribute, and namespace.
Parameters:
xpath - The string form of the xpath to be evaluated.
Throws:
IllegalArgumentException - if xpath is null.
XPathException - if xpath is not a valid xpath or contains xpath constructs which are not supported.
public void setVariableBindings(Map variableBindings)
Parameters:
variableBindings - Provides a mapping for resolving variable names which may appear in the XPath. Values in the map which are instances of java.lang.String, java.lang.Boolean, or java.lang.Number will be used as values of the corresponding XPath types. Values which are instances of java.util.Collection are assumed to be node-lists. Values of other types are not currently recognized. Passing null has the effect of removing all bindings.
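A sketch of a bindings map that uses only the documented value types; the variable names here are hypothetical, chosen purely for illustration:

```java
import java.util.Arrays;
import java.util.Collection;
import java.util.HashMap;
import java.util.Map;

public class XPathBindings {
    // Build a variable-binding map using the documented shapes:
    // String -> xpath string, Boolean -> xpath boolean,
    // Number -> xpath number, Collection -> node-list.
    public static Map<String, Object> sampleBindings() {
        Map<String, Object> bindings = new HashMap<>();
        bindings.put("title", "Chapter One");          // bound as an xpath string
        bindings.put("strict", Boolean.TRUE);          // bound as an xpath boolean
        bindings.put("threshold", Integer.valueOf(3)); // bound as an xpath number
        bindings.put("nodes", Arrays.asList());        // bound as a node-list
        return bindings;
    }
}
```

The map would then be handed to the xpath with setVariableBindings(sampleBindings()), and setVariableBindings(null) would remove all bindings, as documented above.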
public String toString()
Overrides:
toString in class Object
public boolean equals(Object o)
Overrides:
equals in class Object
public static void main(String[] args) | http://docs.oracle.com/cd/E12839_01/apirefs.1111/e13941/weblogic/xml/xpath/StreamXPath.html | CC-MAIN-2013-20 | refinedweb | 851 | 54.02 |
Hey Guys,
We have a problem with the POI API.
We want to check whether the external spreadsheet to which a formula refers is integrated properly. For this we call the method HSSFName.getSheetName().
During this call, we get an ArrayIndexOutOfBoundsException. The reason is that in this case the Name object (concretely HSSFName) does not refer to any formula, so accordingly there are no Ptgs. That's why
"Ptg ptg = field_13_name_definition.getTokens()[0];" throws an ArrayIndexOutOfBoundsException here.
Our suggestion would be as follows:
The method org.apache.poi.hssf.record.NameRecord.getExternSheetNumber() should be rewritten:
public int getExternSheetNumber(){
    Ptg[] ptgs = field_13_name_definition.getTokens();
    if (ptgs.length < 1) {
        return 0;
    }
    Ptg ptg = ptgs[0];
    if (ptg.getClass() == Area3DPtg.class){
        return ((Area3DPtg) ptg).getExternSheetIndex();
    }
    if (ptg.getClass() == Ref3DPtg.class){
        return ((Ref3DPtg) ptg).getExternSheetIndex();
    }
    return 0;
}
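The essential pattern of the proposed fix is to guard against an empty token array before indexing into it. It can be sketched in isolation; the Ptg classes below are simplified hypothetical stand-ins, not POI's real token classes:

```java
public class ExternSheetGuard {
    // Minimal stand-ins for POI's formula token (Ptg) hierarchy,
    // here only so the guard logic can run on its own.
    static class Ptg {}
    static class Area3DPtg extends Ptg {
        private final int externSheetIndex;
        Area3DPtg(int index) { this.externSheetIndex = index; }
        int getExternSheetIndex() { return externSheetIndex; }
    }

    // Mirrors the suggested fix: return a default of 0 when the name
    // has no formula tokens, instead of indexing an empty array and
    // throwing ArrayIndexOutOfBoundsException.
    public static int getExternSheetNumber(Ptg[] ptgs) {
        if (ptgs.length < 1) {
            return 0;
        }
        Ptg ptg = ptgs[0];
        if (ptg.getClass() == Area3DPtg.class) {
            return ((Area3DPtg) ptg).getExternSheetIndex();
        }
        return 0;
    }

    public static void main(String[] args) {
        System.out.println(getExternSheetNumber(new Ptg[0]));                     // 0, no exception
        System.out.println(getExternSheetNumber(new Ptg[] { new Area3DPtg(2) })); // 2
    }
}
```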
Any chance you could retest with 3.12 final?
The problem still occurs in version 3.12 Final.
Can you provide a sample file and sample code which show the exception on HSSFName.getSheetName()?
Nevermind, the following is sufficient to reproduce the problem:
NameRecord record = new NameRecord();
assertEquals(0, record.getExternSheetNumber());
this is fixed via r1686689 now.
Is the fix also available in Version 3.11?
How can I get the current POI version with the fix?
We usually don't backport features, only for CVEs.
So you need to grab a nightly until we push the next beta or final release.
See [1] or [2] for download infos
[1]
[2] | https://bz.apache.org/bugzilla/show_bug.cgi?id=57923 | CC-MAIN-2020-10 | refinedweb | 244 | 53.78 |
I am new to ATLAS and LAPACK. I really like what I see so far in both ATLAS and LAPACK. The problem is that somehow a symmetric matrix cannot be diagonalized. Also, the info parameter is acting strangely.
I first compute a symmetric matrix S = Y^t Y with the ATLAS cblas_dsyrk and store it in the lower half of S, which is DxD. I then use LAPACK's dsyevd, which I declare as shown in the code below to conform to the Fortran parameter passing, since I will be using the Fortran library, not clapack.
I compile with:
gcc -c file.c -I<path to atlas/include/cblas.h>
gcc -o file file.o -L<path to atlas/lib> -llapack -lf77blas -lcblas -latlas -lgfortran
Note: Using atlas-3.8.4 and lapack-3.4.0. I built the LAPACK library by building the incomplete ATLAS liblapack.a, then the LAPACK version, and combining the libraries with good old 'ar'. This was also suggested as the way to do things in the ATLAS user guide.
I check the info parameter and it is some very large positive number. Since it is not zero, that means that a submatrix of the tridiagonal form was not able to be factored. This is a problem, since all real finite-dimensional symmetric matrices can be diagonalized.
Then I set info=0 beforehand and it returns info=0, at which point I find that several eigenvalues are negative, another falsehood.
I am not sure where the problem is coming from. I suspect that it may have something to do with interfacing C with Fortran, since I am new to this. I find it strange that info does not seem to be modified by dsyevd.
#include <cblas.h>
/* define dsyevd */
void dsyevd_ (char *jobz, \
char * uplo, \
long * n, \
double * A, \
long * lda, \
double * W, \
double * work, \
long * lwork, \
long * iwork, \
long * liwork, \
long * info);
/* define/malloc D, work, iwork, D_S, Y, S */
cblas_dsyrk (CblasColMajor, \
CblasLower, \
D, \
1, \
alpha, \
Y, \
D, \
0., \
S, \
D);
/*long info; uninitialized*/
long info = 0; /*initialized to zero*/
long query = -1; /* lwork/liwork = -1 requests a workspace-size query: dsyevd only fills work[0]/iwork[0] and returns, it does not compute eigenvalues */
char jobz = 'V';
char uplo = 'L';
dsyevd_ (&jobz, \
&uplo, \
&D, \
S, \
&D, \
D_S, \
work, \
&query, \
iwork, \
&query, \
&info);
/*find D_S[0] = -0.000000000000004323 < 0 no matter what info is returned as*/ | http://icl.cs.utk.edu/lapack-forum/viewtopic.php?f=5&t=3058 | CC-MAIN-2017-17 | refinedweb | 387 | 73.68 |
Build a Reusable Datagrid to Make Life Easier
Are you still toiling with classic ASP for your customers' Websites? Are you writing 20- or 30-page admin sections for customers who just want to edit their News and Frequently Asked Questions pages? Well, put down your sticks because I'm about to hand you a golden Zippo!
If you freelance or run a small design company, then you can't financially justify continuing to use Classic ASP. It was just about 8 months ago that I can recall writing 8 or 10 pages just to accomplish an ADD/EDIT/DELETE on a database table. Now I can do it in just one completely reusable page. I'd heard lots and lots about .NET, but I'd been rolling along great with ASP for so long, and whenever I looked at it, the change seemed too daunting. I can't remember what made me break down and get into .NET, but as soon as I saw what could be accomplished I knew that I'd made the right decision. After going through several books, I have built administration portions for 6 client sites, and each has been more functional and easier to use than the one before it.
What I am about to teach you is the ASP.NET datagrid. I’ll assume that you have a basic understanding of ASP.NET and VB.NET.
The DataGrid
With some initial work you can put together an ADD/EDIT/DELETE datagrid that will make your life a whole lot easier. Just imagine, every time your client wants to edit a simple database table like a calendar, “What’s New” section, or Frequently Asked Questions, all you have to do is make a copy of the last datagrid you made and spend 20 minutes making changes. It’s my favorite part of ASP.NET, just because it’s such a time saver. Now you can see what it will do for you.
I think that the best way to demonstrate this is going to be to let you see the evolution of the datagrid that I’m currently using. It started out with just EDIT/DELETE and now it takes care of all of my simple, yet time consuming tasks.
Right out of the box, the datagrid does almost everything you require. Here’s what you need to get started.
Sub Page_Load(sender as object, e as eventargs)
    If Not Page.IsPostBack Then
        BindDataGrid()
    End If
End Sub

Sub BindDataGrid()
    Dim objConn as New OleDbConnection("Provider=Microsoft.Jet.OLEDB.4.0; " & _
        "Data Source=c:\inetpub\sites\site.com\www\database\site.mdb")
    objConn.Open()
    Dim ds as Dataset = New DataSet()
    Dim objAdapter as New OleDbDataAdapter("SELECT * FROM News ORDER BY NewsDate DESC;", objConn)
    objAdapter.Fill(ds, "News")
    EditNews.DataSource = ds.Tables("News").DefaultView
    EditNews.DataBind()
    objConn.Close()
End Sub
In this example I wrote a BindDataGrid() function that will pull the data from the database and write it into the datagrid. You may also notice that I'm using OleDb. You can substitute any type of connection you want; just make sure to import the proper namespaces. I also wrote the Page_Load function, which simply runs the BindDataGrid() function the first time the page loads.
Defining the Datagrid
Next up I’ll show you the beginnings of the datagrid. This is the part where you define what information will be in which column, and decide on the look and feel of the interface.
<form runat="server">
  <asp:datagrid
    <columns>
      <asp:boundcolumn
      <asp:templatecolumn
        <ItemTemplate>
          <asp:label
        </ItemTemplate>
        <EditItemTemplate>
          <asp:textbox
        </EditItemTemplate>
      </asp:templatecolumn>
      <asp:templatecolumn
        <ItemTemplate>
          <asp:label
        </ItemTemplate>
        <EditItemTemplate>
          <asp:textbox
        </EditItemTemplate>
      </asp:templatecolumn>
      <asp:templatecolumn
        <ItemTemplate>
          <asp:label
        </ItemTemplate>
        <EditItemTemplate>
          <asp:textbox
        </EditItemTemplate>
      </asp:templatecolumn>
      <asp:templatecolumn>
        <ItemTemplate>
          <asp:button
        </ItemTemplate>
        <EditItemTemplate>
          <asp:button
          <asp:button
          <asp:button
        </EditItemTemplate>
      </asp:templatecolumn>
    </columns>
  </asp:datagrid>
</form>
The Attributes of the Code
Now this looks like a lot of HTML, but the thing to remember is that a good chunk of this will never change. The part that does change is in fact really easy to manipulate. Let's start with the <asp:datagrid> tag and look at some of its attributes. First off, you must enclose the tag inside a <form> tag with runat="server", otherwise none of your updating and deleting features will work. Next you have the colors, padding, and borders. You can figure those out. The main thing that we want to hit on is the functionality attributes.
The DataKeyField is something that will have to be changed for every table you use. It carries the unique identifier of each of your records, and will allow you to update and delete these records. The next big things are the "On Commands". If you look at the example you can see an OnUpdateCommand, OnCancelCommand, OnEditCommand, and an OnDeleteCommand. These tell the datagrid which function to run when someone presses that button. These functions will be the next example. Before that, however, I did want to mention the column formatting and let you in on how it works.
There is a template for each column, even the one that contains the buttons. Each template column contains an <ItemTemplate> and an <EditItemTemplate>. The <ItemTemplate> will contain the dynamic content from the database in a read-only form. In the <EditItemTemplate> you must write everything to be displayed in <asp:textbox>es. This will be what people see whenever they press the edit button on a certain row. You'll use Container.DataItem("FieldName") to display your dynamic content on both sides: read-only and edit. Now let's see what these functions look like.
Sub DoItemEdit(objSource as Object, objArgs As DataGridCommandEventArgs)
    EditNews.EditItemIndex = objArgs.Item.ItemIndex
    BindDataGrid()
End Sub

Sub DoItemCancel(objSource as Object, objArgs As DataGridCommandEventArgs)
    EditNews.EditItemIndex = -1
    BindDataGrid()
End Sub

Sub DoItemUpdate(objSource as Object, objArgs As DataGridCommandEventArgs)
    Dim strTitle, strBody, strWrittenBy as String
    Dim intID as String
    strTitle = Ctype(objArgs.Item.Cells(1).Controls(1), Textbox).Text
    strBody = Ctype(objArgs.Item.Cells(2).Controls(1), Textbox).Text
    strWrittenBy = Ctype(objArgs.Item.Cells(3).Controls(1), TextBox).Text
    'strDate =
    intID = EditNews.DataKeys(objArgs.Item.ItemIndex)
    Dim strSQL as String
    strSQL = "UPDATE News SET NewsTitle='" & strTitle & _
        "', NewsBody='" & strBody & _
        "', NewsWrittenBy='" & strWrittenBy & _
        "' WHERE NewsID=" & intID & ";"
    ExecuteSQLStatement(strSQL)
    EditNews.EditItemIndex = -1
    BindDataGrid()
End Sub

Sub DoItemDelete(objSource as Object, objArgs as DataGridCommandEventArgs)
    Dim intID as String
    Dim strSQL as String
    intID = EditNews.DataKeys(objArgs.Item.ItemIndex)
    strSQL = "DELETE FROM News WHERE NewsID=" & intID & ";"
    ExecuteSQLStatement(strSQL)
    EditNews.EditItemIndex = -1
    BindDataGrid()
End Sub

Sub ExecuteSQLStatement(strSQL as String)
    Dim objConn as New OleDbConnection("Provider=Microsoft.Jet.OLEDB.4.0; " & _
        "Data Source=c:\inetpub\sites\site.com\www\database\site.mdb")
    objConn.Open()
    Dim objCommand as New OleDbCommand(strSQL, objConn)
    objCommand.ExecuteNonQuery()
    objConn.Close()
End Sub
DoItemEdit and DoItemCancel are, as I am sure you can tell by looking at them, the easiest functions. After all, they just change where the EditItemIndex is pointed. The EditItemIndex is a number value that tells you which item is in edit mode. If you set the EditItemIndex to -1 then none of the items are in edit mode. As you can see in the example, DoItemEdit takes the ItemIndex and sets the EditItemIndex equal to it, and all that DoItemCancel ever does is set the EditItemIndex equal to -1.

Now, for the hardest thing in this whole process (which isn't really that hard). The DoItemUpdate function gave me headaches when I was first starting. I have come to understand that it is very hard to refer to an object that is inside the datagrid. This is accomplished by referring to its cell and control number. Here's how it works. objArgs refers to the selected row, Cells refers to the number of the cell counted from left to right starting at 0, and Controls refers to the number of the control, be it a Textbox, Calendar, or any kind of control, inside that cell. Then, using the Ctype function, the values of these are converted into strings and written into a SQL UPDATE statement. Then I wrote the ExecuteSQLStatement function to take the string and run it against the database.

Next is the DoItemDelete function, which is the last of our base functions. All that is required here is for you to use the same DataKeys call from the update function, write a SQL DELETE statement, and send it to the ExecuteSQLStatement function.

The only thing that I have left for you to figure out is how to accomplish Add; after all, if I told you everything, you wouldn't learn anything! I will point you in the right direction, though. You must write a function that adds a blank row to the database, and then makes the EditItemIndex point at that row.
Now, I know that what we just did might seem like a lot of work. That's because it is a lot, but the greatest thing about it is that it is a wonderful base for everything that you will ever need to do as far as editing in table or a view. My experience is that the time it takes to write this initially is more than compensated for by the hours you save every time you have to write an administration section for a Website. If you think about it, all that needs to change are the connection strings, SQL statements, and template columns. I can whip out a datagrid in 20 minutes that will edit a simple table, and be finished hours under the quoted time. I think this is one tool that a successful Web programmer cannot do without! | https://www.sitepoint.com/datagrid-life-easier/ | CC-MAIN-2019-13 | refinedweb | 1,647 | 64.51 |