text stringlengths 454 608k | url stringlengths 17 896 | dump stringclasses 91
values | source stringclasses 1
value | word_count int64 101 114k | flesch_reading_ease float64 50 104 |
|---|---|---|---|---|---|
hi have a script with 2 function like this :
public class first: MonoBehaviour {
static public void one(){
int numb = 10;
// something
}
static public void two(){
int helth = 20;
// something
}
}
.. and i have another script like this : .. public class secound: MonoBehaviour {
Update(){
//first.one(); // normal use .. but...
}
}
.... i want to set name of function in inspector like "tow" or "one" then that function execute in Update() in secound class ... and i want after i set the name of function... the variables inside that function showing in inspector .. so.. how do i do this ?
Answer by fafase
·
Jun 25, 2014 at 02:51 PM
The way to do this would be to have a Dictionary of string and method/delegate. Below is an example you need to make your own, but the idea is there.
public class First:MonoBehaviour{
public string method; // Set in inspector asthe name of the method
Dictionary<string, Action> dict = new Dictionary<string, Action>();
void Start(){
dict.Add("one",new Action(One);
dict.Add("two",new Action(Two);
if(dict.Contains(method)) // Make sure the string is in the dictionary
dict[method](); // call the method
}
}
Best would be to use a enum with the same principle:
public enum MethodName{
One, Two
}
public class First:MonoBehaviour{
public MethodName method; // Set in inspector as the name of the method
Dictionary<MethodName, Action> dict = new Dictionary<MethodName, Action>();
void Start(){
dict.Add(MethodName.One,new Action(One);
dict.Add(MethodName.Two,new Action(Two);
dict[method](); // call the method
}
}
This way, no chance that the string is wrongly written.
thanks for the answer .. but this two codes not working... how do i use it ? .... and i want when i set name of function the variables of that function showing in inspector .. i must use classes or functions for that ? ... i want do this image is a inspector:
Then you also need an editor script that based on the choice of the enum, it shows the corresponding variable. There are examples on the internet
Dictionary is a good idea but i want to get parameters of a function in another script then i can show in inspector with a editor script... in below example how can i get name of settext function in another script and then get parameters of that function ? ... i want to recognize functions and parameters of each function in another script... how do i do this ?
using UnityEngine;
using System.Collections;
public class first: MonoBehaviour
{
public void settext(string astring)
{
guiText.text = ast.
Custom editor script for folders?
1
Answer
Handles not displaying
1
Answer
Reset or Reinstantiate Custom Editors Target(s)
1
Answer
Prefabs aren't saving with Undo.RecordObject
4
Answers
Custom Editor for EdgeCollider2D
0
Answers | https://answers.unity.com/questions/734839/how-to-display-variable-of-function-in-inspector-1.html | CC-MAIN-2019-35 | refinedweb | 451 | 66.44 |
Yesterday, Google launched the beta version of its Cloud Source Repositories. Claiming to provide its users with a better search experience, Google Cloud Source Repositories is a Git-based source code repository built on Google Cloud.
The Cloud Source Repositories introduce a powerful code search feature, which uses the document index and retrieval methods similar to Google Search.
These Cloud source repositories can pose a major comeback for Google after Google Code, began shutting down in 2015. This could be a very strategic move for Google, as many coders have been looking for an alternative to GitHub, after its acquisition by Microsoft.
How does Google code search work?
Code search in Cloud Source Repositories optimizes the indexing, algorithms, and result types for searching code. On submitting a query, the query is sent to a root machine and sharded to hundreds of secondary machines. The machines look for matches by file names, classes, functions and other symbols, and matches the context and namespace of the symbols.
A single query can search across thousands of different repositories. Cloud Source Repositories also has a semantic understanding of the code. For Java, JavaScript, Go, C++, Python, TypeScript and Proto files, the tools will also return information on whether the match is a class, method, enum or field.
Solution to common code search challenges
#1 To execute searches across all the code at ones’ company
If a company has repositories storing different versions of the code, executing searches across all the code is exhaustive and ttime-consuming While using Cloud Source Repositories, the default branches of all repositories are always indexed and up-to-date. Hence, searching across all the code is faster.
#2 To search for code that performs a common operation
Cloud Source Repositories enables users to perform quick searches. Users can also save time by discovering and using the existing solution while avoiding bugs in their code.
#3 If a developer cannot remember the right way to use a common code component
Developers can enter a query and search across all of their company’s code for examples of how the common piece of code has been used successfully by other developers.
#4 Issues with production application
If a developer encounters a specific error message to the server logs that reads ‘User ID 123 not found in PaymentDatabase’, they can perform a regular expression search for ‘User ID .* not found in PaymentDatabase’ and instantly find the location in the code where this error was triggered.
All repositories that are either mirrored or added to Cloud Source Repositories can be searched in a single query. Cloud Source Repositories has a limited free tier that supports projects up to 50GB with a maximum of 5 users.
You can read more about Cloud Source Repositories in the official documentation.
Read Next
Google announces Flutter Release Preview 2 with extended support for Cupertino themed controls and more!
Google to allegedly launch a new Smart home device
Google’s prototype Chinese search engine ‘Dragonfly’ reportedly links searches to phone numbers | https://hub.packtpub.com/google-announces-the-beta-version-of-cloud-source-repositories/ | CC-MAIN-2019-26 | refinedweb | 503 | 50.46 |
#include "llvm/IR/PassManager.h"
Go to the source code of this file.
This header provides classes for managing per-loop analyses. These are typically used as part of a loop pass pipeline over the loop nests of a function.
Loop analyses are allowed to make some simplifying assumptions: 1) Loops are, where possible, in simplified form. 2) Loops are always in LCSSA form. 3) A collection of analysis results are available:
The primary mechanism to provide these invariants is the loop pass manager, but they can also be manually provided in order to reason about a loop from outside of a dedicated pass manager.
Definition in file LoopAnalysisManager.h. | https://www.llvm.org/doxygen/LoopAnalysisManager_8h.html | CC-MAIN-2022-27 | refinedweb | 110 | 56.25 |
I
Bad:
1: Main(null);
2:
3: }
No empty line after an opening curly
Bad:
1: class Program
2: {
3:
4: static void Main(string[] args)
One empty line between same level type declarations
1: namespace Animals
2: {
3: class Animal
4: {
5: }
6:
7: class Giraffe : Animal
8: {
9: }
10: }
One empty line between members of a type
1: class Animal
2: {
3: public Animal()
4: {
5: }
6:
7: public void Eat(object food)
8: {
9: }
10:
11: public string Name { get; set; }
12: }
Whereas it’s OK to group single-line members:
1: class Customer
2: {
3: public string Name { get; set; }
4: public int Age { get; set; }
5: public string EMail { get; set; }
6:
7: public void Notify(string message)
8: {
9: }
10: }):
1: class Customer
2: {
3: #region Public properties
4:
5: public string Name { get; set; }
6: public int Age { get; set; }
7: public string EMail { get; set; }
8:
9: #endregion
10: don’t remember seeing any explicit guidelines on whitespace formatting for C# programs"
Well, Stylecop is trying to enforce these rules. I hope that more and more people will start adopting these.
>Usually #regions contain type members or whole types, less often parts of a method body.
I'd say that a region wrapping a part of a method body should never, ever be encouraged.
I do have to ask about the #regions. Do you actually write code like that? When I first saw #regions I thought they sounded like a great tool for organising code. I tried it for a while and found I really didn't like it. I hated not knowing whether I wasn't seeing just a bit of code or a whole lot. I got into the habit of toggling them all open every time I opened a file. But it was too late. Everybody else started doing it and I couldn't persuade them to stop. Now I think they should really just be reserved for files with sections that need to be processed by automated tools.
Apart from that diversion, I think your guidelines for vertical whitespace fit with what I'd do naturally.
Yes, I use regions in my code.
Not on the method level though - only to group type members. As I usually don't have more than one type per file, I don't have to group types.
Also, if I have a large type, I usually split it into parts using partial classes and name them like: Drawing.cs, Drawing.Serialization.cs, Drawing.Coordinates.cs, Drawing.Painting.cs, etc.
I don't mind seeing a line or two at the end or start of a class, but since those lines are useless you might as well leave them out.
Like you I place a line between classes, between methods and between properties, except for single-line properties. I do not usually insert lines between fields.
But unlike you I always allow myself to insert an extra blank line if that either helps to seperate members that belong to a different logical part of a class, or if it increases the readability by making the code less of a wall of text.
For example, in a Server class I might seperate networking-related fields like a Socket and some networking settings from fields like threads and syncronisation objects.
I also often insert two blank lines (instead of the usual single line) between a block of methods and a block of properties, so that it's easier to see where the methods end and the properties start. The same goes for inserting two lines between public methods and private methods, so that I can more easily see where the implementation details start.
I'd rather have a few blank lines too many than a few too little.
Thank you for submitting this cool story - Trackback from DotNetShoutout
What about a whitespaces within a methods? 🙂
In languages where whitespace does not matter, I honestly don't believe it should be saved with the file back to source control. The editor should be able to display code using fairly basic rules to suit you.
Hey Kirill,
Do you have any ideas if/when Visual Studio will start shipping StyleCop compliant [template] code? I can understand why the BCL isn't necessarily style-compliant but I believe it's critical to legitimize C# coding standards.
The strongest push back I have from some people on my team regarding style compliance is "Visual Studio doesn't do it that way".
Cheers,
Navid
With tools like Resharper and their functionality like Cleanup Code, I'd ask: why bother defining guidelines like this?
@Peter:
Well, firstly, someone has to *write* tools like Resharper 😉 I'm an IDE developer, I have to think about these things.
Besides, it was fun to observe these things and ask other people about their experiences.
Finally, a lot of people don't use tools like Resharper. And a great deal of them format their code without any sort of consistency. This blog post hopes to raise awareness about the issue: either use tools like Resharper or format your code yourself.
You'll be surprised how much poorly formatted code is out there. I've noticed that by default people who just begin to learn how to program don't pay any attention to this, which results in poorly formatted programs.
Navid: if you want, feel free to send me a list of where Visual Studio violates these guidelines and I will log all of them as bugs to be fixed.
Also, feel free to log bugs yourself at, especially after we release Visual Studio 2010 Beta 1.
Betty: agreed. Unfortunately the tools aren't there yet. I personally believe that the source code should be stored in a database in pre-parsed state, not text files.
I don't agree on the empty lines after #region and before #endregion. If you leave them out, #region/#endregion are more closely related to the things they are grouping. Compare this to a method definition, you wouldn't write:
void SomeMethod()
{
var x = 10;
}
Max, your approach is valid too. My guidelines are only enforced within my team 🙂
If your team chooses your way - great. Just be consistent.
The thing to remember is that there is a fine line between "standards" and aesthetically pleasing code. In the same way that one person can enjoy looking at something nice, another person may not think the target quite so appealing. Beauty is in the Eye of the Beholder (a great game btw 😀 ). | https://blogs.msdn.microsoft.com/kirillosenkov/2009/03/12/kirills-whitespace-guidelines-for-c/ | CC-MAIN-2017-30 | refinedweb | 1,092 | 67.89 |
Thinking About ETLs
My primary focus for the last year or so has been writing ETLs at work. It is an interesting problem because on some level it feels extremely easy, while in reality, it is a problem that is very difficult to abstract.
Queries
The essence of an ETL, beyond the obvious “extract, transform, load”, is the query. In the case of a database, the query is typically the SELECT statement, but it usually is more than that. It often includes the format of the results. You might need to chunk the data using multiple queries. There might be columns you skip or columns you create.
In non-database ETLs, it still ends up being very similar to query. You often still need to find boundaries for what you are extracting. For example, if you had a bunch of date stamped log files, doing a find /var/logs -name 2014*.log.gz could still be considered a query.
A query is important because ETLs are inherently fragile. ETLs are required because the standard interface to some data is not available due to some constraints. By bypassing standard, and more importantly supported, interfaces, you are on your own when it comes to ensuring the ETL runs. The database dump you are running might timeout. The machine you are reading files from may reboot. The REST API node you are hitting gets a new version and restarts. There are always good reasons for your ETL process to fail. The query makes it possible to go back and try things again, limiting them to the specific subset of data you are missing.
Transforms
ETLs often are considered part of some analytics pipeline. The goal of an ETL is typically to take some data from some system and transform it to a format that can be loaded into another system for analysis. A better principle is to consider storing the intermediaries such that transformation is focused on a specific generalized format, rather than a specific system such as a database.
This is much harder than it sounds.
The key to providing generic access to data is a standard schema for the data. The “shape” of the data needs to be described in a fashion that is actionable by the transformation process that loads the data into the analytics system.
The schema is more than a type system. Some data is heavy with metadata while other data is extremely consistent. The schema should provide notation for both extremes.
The schema also should provide hints on how to convert the data. The most important aspect of the schema is to communicate to the loading system how to transform and / or import the data. One system might happily accept a string with 2014-02-15 as a date if you specify it is a date, while others may need something more explicit. The schema should communicate that the data is date string with a specific format that the loading system can use accordingly.
The schema can be difficult to create. Metadata might need a suite of queries to other systems in order to fill in the data. There might need to be calculations that have to happen that the querying system doesn’t support. In these cases you are not just transforming the data, but processing it.
I admit I just made an arbitrary distinction and definition of “processing”, so let me explain.
Processing Data
In a transformation you take the data you have and change it. If I have a URL, I might transform it into JSON that looks like {‘url’: $URL}. Processing, on the other hand, uses the data to create new data. For example, if I have a RESTful resource, I might crawl it to create a single view of some tree of objects. The important difference is that we are creating new information by using other resources not found in the original query data.
The processing of data can be expensive. You might have to make many requests for every row of output in a database table. The calculations, while small, might be on a huge dataset. Whatever the processing that needs happen in order to get your data to a generically usable state, it is a difficult problem to abstract over a wide breadth of data.
While there is no silver bullet to processing data, there are tactics that can be used to process data reliably and reasonably fast. The key to abstracting processing is defining the unit of work.
A Unit of Work
“Unit of Work” is probably a loaded term, so once again, I’ll define what I mean here.
When processing data in an ETL, the Unit of Work is the combination of:
- an atomic record
- an atomic algorithm
- the ability to run the implementation
If all this sounds very map/reducey it is because it is! The difference is that in an ETL you don’t have the same reliability you’d have with something like Hadoop. There is no magical distributed file system that has your data ready to go on a cluster designed to run code explicitly written to support your map/reduce platform.
The key difference with processing data in ETLs vs. some system like Hadoop is the implementation and execution of the algorithm. The implementation includes:
- some command to run on the atomic record
- the information necessary to setup an environment for that script to run
- an automated to input the atomic record to the command
- a guarantee of reliable execution (or failure)
If we look at a system like Hadoop, and this applies to most map/reduce platforms that I’ve seen, there is an explicit step that takes data from some system and adds it to the HDFS store. There is another step that installs code, specifically written for Hadoop, onto the cluster. This code could be using Hadoop streaming or actual Java, but in either case, the installation is done via some deployment.
In other words, there is an unsaid step that Extracts data from some system, Transforms it for Hadoop and Loads it into HDFS. The processing in this case is getting the data from whatever the source system is into the analytics system, therefore, the requirements are slightly different.
We start off with a command. The command is simply an executable script like you would see in Hadoop streaming. No real difference here. Each line passed to the command contains the atomic record as usual.
Before we can run that command, we need to have an environment configured. In Hadoop, you’ve configured your cluster and deployed your code to the nodes. In an ETL system, due to the fragility and simpler processing requirements (no one should write a SQL-like system on top of an ETL framework), we want to set up an environment every time the command runs. By setting up this environment every time the command runs you allow a clear path for development of your ETL steps. Making the environment creation part of the development process it means that you ensure the deployment is tested along side the actual command(s) your ETL uses.
Once we have the command and an environment to run it in we need a way to get our atomic record to the command for actual processing. In Hadoop streaming, we use everyone’s favorite file handle, stdin. In an ETL system, while the command may still use stdin, the way the data enters the ETL system doesn’t necessarily have a distributed file system to use. Data might be downloaded from S3, some RESTful service, and / or some queue system. It important that you have a clear automated way to get data to an ETL processing node.
Finally, this processing must be reliable. ETLs are low priority. An ETL should not lock your production database for an hour in order to dump the data. Instead ETLs must quietly grab the data in a way that doesn’t add contention to the running systems. After all, you are extracting the data because a query on the production server will bog it down when it needs to be serving real time requests. An ETL system needs to reliably stop and start as necessary to get the data necessary and avoid adding more contention to an already resource intensive service.
Loading data from an ETL system requires analyzing the schema in order to construct the understanding between the analytics system and the data. In order to make this as flexible as possible, it is important that the schema use the source of data to add as much metadata as possible. If the data pulls from a Postgres table, the schema should idealling include most of the schema information. If that data must be loaded into some other RDBMS, you have all you need to safely read the data into the system.
Development and Maintenance
ETLs are always going to be changing. New analytics systems will be used and new source of data will be created. As the source system constraints change so do the constraints of an ETL system, again, with the ETL system being the lowest priority.
Since we can rely on ETLs changing and breaking, it is critical to raise awareness of maintenance within the system.
The key to creating a maintainable system is to build up from small tools. The reason being is that as you create small abstractions at a low level, you can reuse these easily. The trade off is that in the short term, more code is needed to accomplish common tasks. Over time, you find patterns specific to your organizations requirements that allow repetitive tasks to be abstracted into tools.
The converse to building up an ETL system based on small tools is to use a pre-built execution system. Unfortunately, pre-built ETL systems have been generalized for common tasks. As we’ve said earlier, ETLs are often changing and require more attention than a typical distributed system. The result is that using a pre-built ETL environment often means creating ETLs that allow the pre-built ETL system to do its work!
Testing
Our goal for our ETLs is to make them extremely easy to test. There are many facets to testing ETLs such as unit testing within an actual package. The testing that is most critical for development and maintenance is simply being able to quickly run and test a single step of an ETL.
For example, lets say we have an ETL that dumps a table, reformats some rows and creates a 10GB gzipped CSV file. I only mention the size here as it implies that it takes too long to run over the entire set of data every time while testing. The file will then be uploaded to S3 and notify a central data warehouse system. Here are some steps that the ETL might perform:
- Dumping the table
- Create a schema
- Processing the rows
- Gzipping the output
- Uploading the data
- Update the warehouse
Each of these steps should be runnable:
- locally on a fake or testing datbase
- locally, using a production database
- remotely using a production database and testing system (test bucket and test warehouse)
- remotely using the production database and production systems
By “runnable”, I mean that an ETL developer can run a command with a specific config and watch the output for issues.
These steps are all pretty basic, but the goal with an ETL system is to abstract the pieces that can be used across all ETLs in a way that is optimal for your system. For example, if your system is consistently streaming, your ETL framework might allow you to chain file handles together. For example
$ dump table | process rows | gzip | upload
Another option might be that each step produces a file that is used by the next step.
Both tactics are valid and can be optimized for over time to help distill ETLs to the minimal, changing requirements. In the above example, the database table dump could be abstracted to take the schema and some database settings to dump any table in your databases. The gzip, upload and data warehouse interactions can be broken out into a library and/or command line apps. Each of these optimizations are simple enough to be included in an ETL development framework without forcing a user to jump through a ton of hoops when a new data store needs to be considered.
An ETL Framework
Making it easy to develop ETLs means a framework. We want to create a Ruby on Rails for writing ETLs that makes it easy enough to get the easy stuff done and powerful enough to do deal with the corner cases. The framework revolves around the schema and the APIs to the different systems and libraries that provide language specific APIs.
At some level the framework needs to allow the introduction of other languages. My only suggestion here is that other languages are abstracted through a command line layer. The ETL framework can eventually call a command that could be written in whatever language the developer wants to use. ETLs are typically used to export data for to a system that is reasonably technical. Someone using this data most likely has some knowledge of some language such as R, Julia or maybe JavaScript. It is these technically savvy data wranglers we want to empower with the ETL framework in order to allow them to solve small ETL issues themselves and provide reliability where the system can be flaky.
Open Questions
The system I’ve described is what I’m working on. While I’m confident the design goals are reasonable, the implementation is going to be difficult. Specifically, the task of generically supporting many languages is challenging because each language has its own ecosystem and environment. Python is an easy language for this task b/c it is trivial to connect to a Ubuntu host have a good deal of the ecosystem in place. Other languages, such as R, probably require some coordination with the cluster provisioning system to make sure base requirements are available. That said, it is unclear if other languages provide small environments like virtualenvs do. Obviously typical scripting languages like Ruby and JavaScript have support for an application local environment, but I’m doubtful R or Julia would have the same facilities.
Another option would be to use a formal build / deployment pattern where a container is built. This answers many of the platform questions, but it brings up other questions such as how to make this available in the ETL Framework. It is ideal if an ETL author can simply call a command to test. If the author needs to build a container locally then I suspect that might be too large a requirement as each platform is going to be different. Obviously, we could introduce a build host to handle the build steps, but that makes it much harder for someone to feel confident the script they wrote will run in production.
The challenge is because our hope is to empower semi-technical ETL authors. If we compare this goal to people who can write HTML/CSS vs. programmers, it clarifies the requirements. A user learning to write HTML/CSS only has to open the file in a web browser to test it. If the page looks correct, they can be confident when they deploy it will work. The goal with the ETL framework and APIs is that the system can provide a similar work flow and ease of use.
Wrapping Up
I’ve written a LOT of ETL code over the past year. Much of what I propose above reflects my experiences. It also reflects the server environment in which these ETLs run as well as the organizational environment. ETLs are low priority code, by nature, that can be used to build first class products. Systems that require a lot of sysadmin time, server resources or have too specific an API may still be helpful moving data around, but they will fall short as systems evolve. My goal has been to create a system that evolves with the data in the organization and empowers a large number of users to distribute the task of developing and maintaining ETLs.
D Processing
It can be really hard to work with data programmatically. There is some moment when working with a large dataset where you realize you need to process the data in parallel. As a programmer, this sounds like it could be a fun problem, and in many cases it is fun to get all your cores working hard crunching data.
The problem is parallel processing never is purely a matter of distributing your work across CPUs. The hard part ends up being getting the data organized before sending it to your workers and doing something with the results. Tools like Hadoop boast processing terabytes of data, but it’s a little misleading because there is most likely a ton of code on either end of that processing.
The input and output code (I/O) can also have big impact on the processing itself. The input often needs to consider what the atomic unit is as well as what the “chunk” of data needs to be. For example, if you have 10 million tiny messages to process, you probably want to chunk up the million messages into 5000 messages when sending it to your worker nodes, yet the workers will need to know it is getting a chunk of messages vs. 1 message. Similarly, for some applications the message:chunk ratio needs to be tweaked.
In hadoop this sort of detail can be dealt with via HDFS, but hadoop is not trivial to set up. Not to mention if you have a bunch of data that doesn’t live in HDFS. The same goes for the output. When you are done, where does it go?
The point being is that “data” always tends towards spcificity. You can’t abstract away data. Data always ends up being physical at its core. Even if the processing happens in parallel, the I/O will always be a challenging constraint.
View Source
I watched a bit of this fireside chat with Steve Jobs. It was pretty interesting to hear Steve discuss the availability of the network and how it changes the way we can work. Specifically, he mentioned that because of NFS (presumably in the BSD family of unices), he could share his home directory on every computer he works on without ever having to think about back ups or syncing his work.
What occurred to me was how much of the software we use is taken for granted. Back in the day and educational license for Unix was around $1800! I can only imagine the difficulties becoming a software developer back then when all the interesting tools like databases or servers were prohibitively expensive!
It reminds me of when I first started learning about HTML and web development. I could always view the source to see what was happening. It became an essential part of how I saw the web and programming in general. The value of software was not only in its function, but in its transparency. The ability to read the source and support myself as a user allowed me the opportunity to understand why the software was so valuable.
When I think about how difficult it must have been to become a hacker back in the early days of personal computing, it is no wonder that free software and open source became so prevalent. These early pioneers had to learn the systems without reading the source! Learning meant reading through incomplete, poorly written manuals. When the manual was wrong or out of date, I can only imagine the hair pulling that must have occurred. The natural solution to this problem was to make the source available.
The process of programming is still very new and very detailed, while still being extremely generic. We are fortunate as programmers that the computing landscape was not able to enclose software development within proprietary walls like so many other technical fields. I’m very thankful I can view the source!
Property Pattern
I’ve found myself doing this quite a bit lately and thought it might be helpful to others.
Often times when I’m writing some code I want to access something as an attribute, even though it comes from some service or database. For example, say we want to download a bunch of files form some service and store them on our file system for processing.
Here is what we’d like the processing code to look like:
def process_files(self): for fn in self.downloaded_files: self.filter_and_store(fn)
We don’t really care what the filter_and_store method does. What we do care about is downloaded_files attribute.
Lets step back and see what the calling code might look like:
processor = MyServiceProcessor(conn) processor.process_files()
Again, this is pretty simple, but now we have a problem. When do we actually download the files and store them on the filesystem. One option would be to do something like this in our process_files method.
def process_files(self): self.downloaded_files = self.download_files() for fn in self.downloaded_files: self.filter_and_store(fn)
While it may not seem like a big deal, we just created a side effect. The downloaded_files attribute is getting set in the process_files method. There is a good chance the downloaded_files attribute is something you’d want to reuse. This creates an odd coupling between the process_files method and the downloaded_files method.
Another option would be to do something like this in the constructor:
def __init__(self, conn): self.downloaded_files = self.download_files()
Obviously, this is a bad idea. Anytime you instantiate the object it will seemingly try to reach out across some network and download a bunch of files. We can do better!
Here are some goals:
- keep the API simple by using a simple attribute, downloaded_files
- don’t download anything until it is required
- only download the files once per-object
- allow injecting downloaded values for tests
The way I’ve been solving this recently has been to use the following property pattern:
class MyServiceProcessor(object): def __init__(self, conn): self.conn = conn self._downloaded_files = None @property def downloaded_files(self): if not self._downloaded_files: self._downloaded_files = [] tmpdir = tempfile.mkdtemp() for obj in self.conn.resources(): self._downloaded_files.append(obj.download(tmpdir)) return self._downloaded_files def process_files(self): result = [] for fn in self.downloaded_files: result.append(self.filter_and_store(fn)) return result
Say we wanted to test our process_files method. It becomes much easier.
def setup(self): self.test_files = os.listdir(os.path.join(HERE, 'service_files')) self.conn = Mock() self.processor = MyServiceProcessor(self.conn) def test_process_files(self): # Just set the property variable to inject the values. self.processor._downloaded_files = self.test_files assert len(self.processor.process_files()) == len(self.test_files)
As you can see it was realy easy to inject our stub files. We know that we don’t perform any downloads until we have to. We also know that the downloads are only performed once.
Here is another variation I’ve used that doesn’t required setting up a _downloaded_files.
@property def downloaded_files(self): if not hasattr(self, '_downloaded_files'): ... return self._downloaded_files
Generally, I prefer the explicit _downloaded_files attribute in the constructor as it allows more granularity when setting a default value. You can set it as an empty list for example, which helps to communicate that the property will need to return a list.
Similarly, you can set the value to None and ensure that when the attribute is accessed, the value may become an empty list. This small differentiation helps to make the API easier to use. An empty list is still iterable while still being “falsey”.
This technique is nothing technically interesting. What I hope someone takes from this is how you can use this technique to write clearer code and encapsulate your implementation, while exposing a clear API between your objects. Just because you don’t publish a library, keeping your internal object APIs simple and communicative helps make your code easier to reason about.
One caveat is that this method can add a lot of small property methods to your classes. There is nothing wrong with this, but it might give a reader of your code the impression the classes are complex. One method to combat this is to use mixins.
class MyWorkerMixinProperties(object): def __init__(self, conn): self.conn = conn self._categories = None self._foo_resources = None sef._names = None @property def categories(self): if not self._categories: self._categories = self.conn.categories() return self._categories @property def foo_resources(self): if not self._foo_resources: self._foo_resources = self.conn.resources(name='foo') return self._foo_resources @property def names(self): if not self._names: self._names = [r.meta()['name'] for r in self.resources] class MyWorker(MyWorkerMixinProperties): def __init__(self, conn): MyWorkerMixinProperties.__init__(self, conn) def run(self): for resource in self.foo_resources: if resource.category in self.categories: self.put('/api/foos', { 'real_name': self.names[resource.name_id], 'values': self.process_values(resource.values), })
This is a somewhat contrived example, but the point being is that we’ve taken all our service based data and made it accessible via normal attributes. Each service request is encapsulated in a function, while our primary worker class has a reasonably straightforward implementation of some algorithm.
The big win here is clarity. You can write an algorithm by describing what it should do. You can then test the algorithm easily by injecting the values you know should produce the expected results. Furthermore, you’ve decoupled the algorithm from the I/O code, which is typically where you’ll see a good deal of repetition in the case of RESTful services or optimization when talking to databases. Lastly, it becomes trivial to inject values for testing.
Again, this isn’t rocket science. It is a really simple technique that can help make your code much clearer. I’ve found it really useful and I hope you do too!
Iterative Code Cycle
TDD prescribes a simple process for working on code.
- Write a failing test
- Write some code to get the test to pass
- Refactor
- Repeat
If we consider this cycle more generically, we see a typical cycle every modern software developer must use when writing code.
- Write some code
- Run the code
- Fix any problems
- Repeat
In this generic cycle you might use a REPL, a stand alone script, a debugger, etc. to quickly iterate on the code.
Personally, I’ve found that I do use a test for this iteration because it is integrated into my editor. The benefit of using my test suite is that I often have a repeatable test when I’m done that proves (to some level of confidence) the code works as I expect it to. It may not be entirely correct, but at least it codifies that I think it should work. When it does break, I can take a more TDD-like approach and fix the test, which makes it fail, and then fix the actual bug.
The essence then of any developer’s work is to make this cycle as quick as possible, no matter what tool you use to run and re-run your code. The process should be fluid and help get you in the flow when programming. If you do use tests for this process, it may be a helpful design tool. For example, if you are writing a client library for some service, you write an idealistic API you’d like to have without letting the implementation drive the design.
TDD has been on my mind recently as I’ve written a lot of code recently and have questioned whether or not my testing patterns have truly been helpful. It has been helpful in fixing bugs and provides a quick coding cycle. I’d argue the code has been improved, but at the same time, I do wonder if by making things testable I’ve introduced more abstractions than necessary. I’ve had to look back on some code that used these patterns and getting up to speed was somewhat difficult. At the same time, anytime you read code you need to put in effort in order to understand what is happening. Often times I’ll assume if code doesn’t immediately convey exactly what is happening it is terrible code. The reality is code is complex and takes effort to understand. It should be judged based on how reasonable it is fix once it is understood. In this way, I believe my test based coding cycle has proven itself to be valuable.
Obviously, the next person to look at the code will disagree, but hopefully once they understand what is going on, it won’t be too bad.
TDD
I watched DHH’s keynote at Railsconf 2014. A large part of his talk discusses the misassociation of TDD on metrics and making code “testable” rather than stepping back an focusing on clarity, as an author would when writing.
If you’ve ever tried to do true TDD, you might have a similar feeling that you’re doing it wrong. I know I have. Yet, I’ve also seen the benefit of iterating on code via writing tests. The faster the code / test cycle, the easier it is to experiment and write the code. Similarly, I’ve noticed more bugs show up in code that is not as well covered by tests. It might not be clear how DHH’s perspective then fits in with the benefits of testing and facets of TDD.
What I’ve found is that readability and clarity in code often comes by way of being testable. Tests and making code testable can go along way in finding the clarity that DHH describes. It can become clear very quickly that your class API is actually really difficult to use by writing a test. You can easily spot odd dependencies in a class by the number of mocks you are required to deal with in your tests. Sometimes I find it easier to write a quick test rather than spin up a repl to run and rerun code.
The point being is that TDD can be a helpful tool to write clear code. As DHH points out, it is not a singular path to a well thought out design. Unfortunately, just as people take TDD too literally, people will feel that any sort of granular testing is a waste of time. The irony here is that DHH says very clearly that we, as software writers, need to practice. Writing tests and re-writing tests are a great way to become a better writer. Just because the ideals presented in TDD might be a bit too extreme, the mechanism of a fast test suite and the goal for 100% coverage are still valuable in that they force you to think about and practice writing code.
The process of thinking about code is what is truly critical in almost all software development exercises. Writing tests first is just another way to slow you down and force you to think about your problem before hacking out some code. Some developers can avoid tests, most likely because they are really good about thinking about code before writing it. These people can likely iterate on ideas and concepts in their head before turning to the editor for the actual implementation. The rest of us can use the opportunity of writing tests, taking notes, and even drawing a diagram as tools to force us to think about our system before hacking some ugly code together.
Concurrency Transitions
Glyph, the creator of Twisted wrote an interesting article discussing the intrinsic flaws of using threads. The essential idea is that unless you know explicitly when you are switching contexts, it is extremely difficult to effectively reason about concurrency in code.
I agree that this is one way to handle concurrency. Glyph also provides a clear perspective into the underlying constraints of concurrent programming. The biggest constraint is that you need a way to guarantee a set of statements happens atomically. He suggests an event driven paradigm as how best to do this. In a typical async system, the work is built up using small procedures that run atomically, yielding back control to the main loop as they finish. The reason the async model works so well is because you eliminate all CPU based concurrency and allow work to happen while waiting for I/O.
There are other valid ways to achieve as similar effect. The key in all these methods, async included, is to know when you transition from atomic sequential operations to potentially concurrent, and often parallel, operations.
A great example of this mindset is found in functional programming, and specifically, in monads. A monad is essentially a guarantee that some set of operations will happen atomically. In a functional language, functions are considered “pure” meaning they don’t introduce any “side effects”, or more specifically, they do not change any state. Monads allow functional languages a way to interact with the outside world by providing a logical interface that the underlying system can use to do any necessary work to make the operation safe. Clojure, for example, uses a Software Transactional Memory system to safely apply changes to state. Another approach might be to use locking and mutexes. No matter the methodology, the goal is to provide a safe way to change state by allowing the developer an explicit way to identify portions of code that change external state.
Here is a classic example in Python of where mutable state can cause problems.
In Python, and the vast majority of languages, it is assumed that a function can act on a variable of a larger scope. This is possible thanks to mutable data structures. In the example above, calling the function multiple time doesn’t re-initialize argument to an empty list. It is a mutable data structure that exists as state. When the function is called that state changes and that change of state is considered a “side effect” in functional programming. This sort of issue is even more difficult in threaded programming because your state can cross threads in addition to lexical boundaries.
If we generalize the purpose of monads and Clojure’s reference types, we can establish that concurrent systems need to be able to manage the transitions between pure functionality (no state manipulation) and operations that effect state.
One methodology that I have found to be effective managing this transition is to use queues. More generally, this might be called message passing, but I don’t believe message passing guarantees the system understands when state changes. In the case of a queue, you have an obvious entrance and exit point for the transition between purity and side effects to take place.
The way to implement this sort of system is to consider each consumer of a queue as a different process. By considering consumers / producers as processes we ensure there is a clear boundary between them that protects shared memory, and more generally shared state. The queue then acts as bridge to cross the “physical” border. The queue also provides the control over the transition between pure functionality and side effects.
To relate this back to Glyph’s async perspective, when state is pushed onto the queue it is similar to yielding to the reactor in an async system. When state is popped off the queue into a process, it can be acted upon without worry of causing side effects that could effect other operations.
Glyph brought up the scenario where a function might yield multiple times in order to pass back control to the managing reactor. This becomes less necessary in the queue and process system I describe because there is no chance of a context switch interrupting an operation or slowing down the reactor. In a typical async framework, the job of the reactor is to order each bit of work. The work still happens in series. Therefore, if one operation takes a long time, it stops all other work from happening, assuming that work is not doing I/O. The queue and process system doesn’t have this same limitation as it is able to yield control to the queue at the correct logical point in the algorithm. Also, in terms of Python, the GIL is mitigated by using processes. The result is that you can program in a sequential manner for your algorithms, while still tackle problems concurrently.
Like anything, this queue and process model is not a panacea. If your data is large, you often need to pass around references to the data and where it can be retrieved. If that resource is not something that tries to handle concurrent connections, the file system for example, you still may run into concurrency issue accessing some resource. It also can be difficult to reason about failures in a queue based system. How full is too full? You can limit the queue size, but that might cause blocking issues that may be unreasonable.
There is no silver bullet, but if you understand the significance of transitions between pure functionality and side effects, you have a good chance of producing a reasonable system no matter what concurrency model you use. | http://ionrock.org/blog/2005/4 | CC-MAIN-2014-52 | refinedweb | 6,228 | 62.48 |
Problem Statement and Proposed Solution:
Anyone India, this is propagating a garbage crisis, which is the cause of a number of environmental problems and public health issues. India’s rapid economic growth has resulted in a substantial increase in solid waste generation in urban centres. Urban areas in India alone generate more than 100,000 metric tonnes of solid waste per day, which is higher than many countries’ total daily waste generation.
Traditionally, municipalities operate on weekly routes to pick up trash and recyclables on designated days, regardless of whether the containers are full or not. This increases further pollution due to the increase fuel consumption.
There are currently over 31,000 public Wi-Fi hotspots installed in India, according to industry estimates, and the number is expected to grow beyond 202,000 by 2018. India suffers from both inefficient waste infrastructure and increasing rates of solid waste generation per capita, due in part to the country’s service sector driven economic growth. This presents a case where both issues of service quality and waste quantity need to be handled together. This is a unique situation that developing Asian countries like India are being confronted with and as such its solutions must also be unique and smart. So why not use those public Wi-Fi hotspots for better waste management based on Internet of Things (IoT). Another national level initiative aimed at improved waste management is the Smart Cities mission under which 100 cities will be provided with significant funding to improve civic services infrastructure.
{By the way as I was making this documentation I got to knew that: In 2014, the Bihar government launched a free WiFi zone between NIT-Patna on Ashok Rajpath to Danapur. This 20-km stretch of free WiFi is the world’s longest corridor of free WiFi connectivity.}
My project aims to optimize waste collection and ultimately reduce fuel consumption. This project will be useful where Wi-Fi hotspots are readily available like hotels and shopping malls, restaurants, coffee shops, and retail outlet, college/institution premises, airports, railway stations, etc. This will help to route optimization which will reduce the number of trips of the garbage trucks hence less traffic will be generated and less pollutant emissions will be released into the air. Also it will help to keep those places free of overflowing garbage containers and this will have a positive impact on tourism.
The project consists of a Wi-Fi based sensors installed in the container lid. Each device will have a unique ID so as to know to which area the container belongs. The sensor measures the container filling level using ultrasonic technology, temperature using temperature sensor and periodically transmits all captured information to the Artik. The device can “talk” to the waste management company and thus can tell them whether the container is at full capacity, when it needs to be emptied, what temperature the container is at, and more, allowing the sanitation specialists to work more efficiently and cut unnecessary costs. Additionally, the sensors can help the company forecast when a dumpster will be full, allowing them to plan ahead future routes.
Block Diagram:
Setting Up Artik Cloud (To Receive Data from Esp8266):
We will create Device Type for our project and then create device based on that device type.
Setting Up Rules on Artik Cloud (To send mail):
We will create rules for our project in Artik cloud in which if data arrives from Esp8266 we will mail all the sensor data to our mail ID. Also another rule will be created in which if the garbage level is higher than a particular value it will a send a mail with threshold crossed subject.
Project Images:-
Working Video:-
Changes to be done in the Arduino Code provided below in the Code section {Eco Smart Trash Can Code v2.0}:
First of all you need Device ID and Device Token so that Esp8266 can publish data successfully to Artik cloud using MQTT.
1) After login (Artik Cloud) click on Devices as shown in the image below:
Then click on the gear icon as shown in the image below:
After clicking on the Gear icon you will get the Device ID and Device Token, copy it you will need those in your Arduino code {See the Eco Smart Trash Can Code v2.0 in the code section}.
In the code section first of all enter your SSID and Password:
/********************************************WiFi Access************************************************************** Enter the SSID and PASSWORD of your Wi-Fi Router *********************************************************************************************************************/ const char* _SSID = "Your_Router's_SSID"; //Wi-Fi SSID const char* _PASSWORD = "Your_Router's_PASSWORD"; // Wi-Fi Password
Then enter the Device ID and Device TOKEN of your device (you copied in earlier step)
/********************************************Artik cloud Aceess******************************************************* MQTT - Artik Cloud Server params Requires Device ID and Device Token which are created when you make your device in Artik Cloud!! *********************************************************************************************************************/ char Artik_Cloud_Server[] = "api.artik.cloud"; // Server int Artik_Cloud_Port = 8883; // MQTT Port char Client_Name[] = "ARTIK-IoT"; // Any Name char Device_ID[] = "822a647ea36b4fa39f1b6c41f0606c52"; // DEVICE ID char Device_TOKEN[] = "8b2e3dd50b594d01a42a12a25513551a"; // DEVICE TOKEN char MQTT_Publish[] = "/v1.1/messages/822a647ea36b4fa39f1b6c41f0606c52"; // (/v1.1/messages/"DEVICE ID")
"For Full Code Explanation see the comments in the Eco Smart Trash Can Code v2.0 below (I think i have explained most of the things)."
Eco Smart Trash Can Code v2.0 is just for understanding how to send data to Artik cloud using NodeMCU/ESP8266 and MQTT.
Further Code Improvements:-
To improve our product features we need to make certain changes in our code and we will save it with a different name:- "Eco Smart Trash Can Code v2.1".
To recognise which garbage bin is placed where we need to give our ECO a unique ID so that just by looking the number we can know the where the bin is placed. Instead of creating a unique ID ourself why not use the ESP8266 chip ID which is a unique 32-bit integer.
To get ESP8266 chip ID you have to use following function:
ESP.getChipId();
We will update our code so that now it will sent not only the sensor data (GarbageLevel and Temperature) but also the unique ID.
Also we need to read the battery level, so for better precision we are going to use ADC ADS1115 - a higher-precision ADC, which provides 16-bit precision over I2C. The chip can be configured as 4 single-ended input channels, or two differential channels. The address can be changed to one of four options so you can have up to 4 ADS1115's connected on a single 2-wire I2C bus for 16 single ended inputs. And a plus point it has Wide Supply Range: 2.0V to 5.5V.
For Reading ADC over I2C using ADS1115, we will use Adafruit_ADS1015.h library. We need to include first of Wire.h library as we are using I2C communication.
#include <Wire.h> #include <Adafruit_ADS1015.h> Adafruit_ADS1115 ads; /* Use this for the 16-bit version */
Then we have to set gain of ADS1115, we have set gain (+/- 2.048V) such that it must be lower than the supply voltage (3.3V). Now the precision is 0.0625mV. Then we need to call ads.begin(). This all things are done in setup function.
ads.setGain(GAIN_TWO); // 2x gain +/- 2.048V 1 bit = 1mV 0.0625mV ads.begin();
Then declare a 16-bit integer variable to store the adc reading of analog channel 0 of ADS1115 and then convert the sampled digital value to millivolts. This things are done in loop function (but can be done in setup function as required) before transmitting the data to Artik cloud.
int16_t adc0; adc0 = ads.readADC_SingleEnded(0); float millivolts= (adc0/32767.0) * 2048;
After adding this much changes, why not think about the power as our project is finally going to work on battery it is very important that it must atleast stay active for 6 months without charging. Since For this case, we don’t want to be changing the batteries constantly or charging it constantly.
Unfortunately, the ESP8266 is still a pretty power hungry device. If you want your project to run off a battery for more than a few hours, you have two options: get a huge battery or cleverly put the chip to sleep. So we need to significantly reduce the power consumption of our ESP8266 board using the deep sleep mode of the chip. For using deep sleep mode, GPIO 16 needs to be used to wake up out of deep-sleep mode, you'll need to connect it to the RESET pin.
Also note that: The ESP8266 can't be programmed while the GPIO 16 pin is connected to RESET pin. Make sure you disconnect the two pins before trying to upload a sketch.
In deep sleep mode, the ESP8266 maintains its RTC but shuts everything else off to hit about 60-80 µA. It can pull upwards of 200mA while it’s transmitting, and I usually measure an average of about 80mA in normal operation.
The ESP.deepSleep() function accepts microseconds as its parameter (1,000,000 microseconds = 1 second). For putting ESP8266 in deep sleep mode we need to call:
// Time to sleep (in seconds): int sleepTimeS = 30; ESP.deepSleep(sleepTimeS * 1000000);
The maximum value accepted is 4,294,967,295, which is about 71 minutes. | https://www.hackster.io/AzureDragon/eco-a-smart-garbage-container-70094e | CC-MAIN-2019-43 | refinedweb | 1,547 | 59.74 |
Python handles data of various formats mainly through the two libraries, Pandas and Numpy. We have already seen the important features of these two libraries in the previous chapters. In this chapter we will see some basic examples from each of the libraries on how to operate on data.
The most important object defined in NumPy is an N-dimensional array type called ndarray. It describes the collection of items of the same type. Items in the collection can be accessed using a zero-based index. An instance of ndarray class can be constructed by different array creation routines described later in the tutorial. The basic ndarray is created using an array function in NumPy as follows −
numpy.array
Following are some examples on Numpy Data handling.
# more than one dimensions import numpy as np a = np.array([[1, 2], [3, 4]]) print a
The output is as follows −
[[1, 2] [3, 4]]
# minimum dimensions import numpy as np a = np.array([1, 2, 3,4,5], ndmin = 2) print a
The output is as follows −
[[1, 2, 3, 4, 5]]
# dtype parameter import numpy as np a = np.array([1, 2, 3], dtype = complex) print a
The output is as follows −
[ 1.+0.j, 2.+0.j, 3.+0.j]
Pandas handles data through Series,Data Frame, and Panel. We will see some examples from each of these.
Series is a one-dimensional labeled array capable of holding data of any type (integer, string, float, python objects, etc.). The axis labels are collectively called index. A pandas Series can be created using the following constructor −
pandas.Series( data, index, dtype, copy)
Here we create a series from a Numpy Array.
#import the pandas library and aliasing as pd import pandas as pd import numpy as np data = np.array(['a','b','c','d']) s = pd.Series(data) print s
Its output is as follows −
0 a 1 b 2 c 3 d dtype: object
A Data frame is a two-dimensional data structure, i.e., data is aligned in a tabular fashion in rows and columns. A pandas DataFrame can be created using the following constructor −
pandas.DataFrame( data, index, columns, dtype, copy)
Let us now create an indexed DataFrame using arrays.
import pandas as pd data = {'Name':['Tom', 'Jack', 'Steve', 'Ricky'],'Age':[28,34,29,42]} df = pd.DataFrame(data, index=['rank1','rank2','rank3','rank4']) print df
Its output is as follows −
Age Name rank1 28 Tom rank2 34 Jack rank3 29 Steve rank4 42 Ricky
A panel is a 3D container of data. The term Panel data is derived from econometrics and is partially responsible for the name pandas − pan(el)-da(ta)-s.
A Panel can be created using the following constructor −
pandas.Panel(data, items, major_axis, minor_axis, dtype, copy)
In the below example we create a panel from dict of DataFrame Objects
#creating an empty panel import pandas as pd import numpy as np data = {'Item1' : pd.DataFrame(np.random.randn(4, 3)), 'Item2' : pd.DataFrame(np.random.randn(4, 2))} p = pd.Panel(data) print p
Its output is as follows −
<class 'pandas.core.panel.Panel'> Dimensions: 2 (items) x 4 (major_axis) x 5 (minor_axis) Items axis: 0 to 1 Major_axis axis: 0 to 3 Minor_axis axis: 0 to 4 | https://www.tutorialspoint.com/python_data_science/python_data_operations.htm | CC-MAIN-2020-10 | refinedweb | 546 | 57.06 |
Trust is at the heart of online commerce. When consumers visit your website, they need to be comfortable sharing their personal or financial information with you. Transport Layer Security (TLS certificates) and Secure Sockets Layer (SSL certificates) are the industry standard in ensuring that your website is valid and that consumers can safely and securely conduct business with you.
But if you’ve spent any time online, you’ve probably seen the dreaded “connection not secure” message:
The result is lost visitors which means not only lost revenue opportunities but loss of trust, as well. Given the importance of a valid certificate you can’t risk allowing your certificate to expire.
In this article, I’ll show you how to create a Python script that:
- Validates your TSL certificates
- Checks the expiration dates
- Notifies you before they expire so that you have time to react
- Automates the process so you don’t have to think about it
Before You Start: Install Install TLS Certificate Tools With This Ready-To-Use Python Environment
To follow along with the code in this Python TLS certificate checking tutorial, you’ll need to have a recent version of Python installed, along with all the packages used in this post. The quickest way to get up and running is to install the TLS Checker runtime environment for Windows, Mac or Linux, which contains a version of Python and all the packages you’ll need.
In order to download the ready-to-use TLS Checker Python/TLS-Checker"
For Mac or Linux users, run the following to automatically download and install our CLI, the State Tool along with the COVID Simulation runtime into a virtual environment:
sh <(curl -q) --activate-default Pizza-Team/TLS-Checker
An Introduction to TLS Certificates and Changes in 2020
Before we begin, let’s talk a little bit about certificates and some of the changes that browsers implemented in September of 2020. You need a certificate to create a secure and encrypted connection between a browser and a website. By default, this certificate needs to be issued by a Certificate Authority (CA) in order to be accepted as valid by the browser. A browser validates the certificate’s authenticity by testing it against the CA’s root certificate included with the browser. More than 200 CAs have their root certificates included with and trusted by the major browsers. Some examples of these CAs include GlobalSign, DigiCert, and Symantec.
On September 1st, 2020, most of the major browsers began requiring certificates with shorter lifespans to reduce the risk that hackers and organizations could compromise them with malicious intent. The net result is that certificates, which used to be valid for eight to ten years are now only valid for as little as 397 days. If they are valid for a more extended period, they risk rejection by the browser.
This means that certificates now expire at least seven times faster than they used to. And if your site is hosted on Amazon Web Services or similar cloud provider, certificates may expire even quicker.
Managing TLS Certificates With Python In 4 Steps
Step 1 — Checking the Certificate
There are a couple of Python packages that can help you check the status of a TLS certificate for a site, or for multiple sites, including:
Both of these packages offer the ability to execute a certificate check from the command line. All a script needs to do in order to access a site’s certificate is to create a connection. The code in the script below does just that using etsy.com as an example:
from urllib.request import ssl, socket import datetime, smtplib hostname = 'etsy.com' port = '443' context = ssl.create_default_context() with socket.create_connection((hostname, port)) as sock: with context.wrap_socket(sock, server_hostname = hostname) as ssock: certificate = ssock.getpeercert()
The certificate variable contains the certificate data and includes details about the subject (host organization), the issuer (CA), and the certificate’s lifespan:
{"subject": [[["countryName", "US"]], [["stateOrProvinceName", "New York"]], [["localityName", "Brooklyn"]], [["organizationName", "Etsy Inc."]], [["commonName", "etsy.com"]]], "issuer": [[["countryName", "BE"]], [["organizationName", "GlobalSign nv-sa"]], [["commonName", "GlobalSign CloudSSL CA - SHA256 - G3"]]], "version": 3, "serialNumber": "5385A384706D468C10A74AE7", "notBefore": "Aug 6 15:15:12 2020 GMT", "notAfter": "Apr 24 20:12:20 2021 GMT", "subjectAltName": [["DNS", "etsy.com"], ["DNS", "*.etsystatic.com"], ["DNS", "api-origin.etsy.com"], ["DNS", "api.etsy.com"], ["DNS", "m.etsy.com"], ["DNS", "openapi.etsy.com"], ["DNS", ""]], "OCSP": [""], "caIssuers": [""]}
Step 2 — Validating the Expiration Date
The certificate includes two timestamps:
- notBefore – Indicates when the certificate became active
- notAfter – Indicates when the certificate expires
The notAfter is what we’ll use to determine how long we have until the certificate expires. We’ll use a couple of functions from the datetime module to parse the timestamp and compare it with now.
certExpires = datetime.datetime.strptime(certificate['notAfter'], '%b %d %H:%M:%S %Y %Z') daysToExpiration = (certExpires - datetime.datetime.now()).days
A positive number indicates the number of days until the certificate expires. We can use this to send a notification one week before it expires and another one day before it expires. We’ll create a simple email notification function next, and then call it as needed.
if daysToExpiration == 7 or daysToExpiration == 1: send_notification(daysToExpiration)
Step 3 — Creating the Notification Action
We’re going to explore some better ways of automating and sending notifications next. Still, if you’re trying this out as a proof of concept, an email notification function like the one shown below will do the trick:
def send_notification(days_to_expire): smtp_port = 587 smtp_server = "smtp.acmecorp.com" sender_email = "[email protected]" receiver_email= = "[email protected]" password = input("Type your password and press enter: ") if days_to_expire== 1: days = "1 day" else: days = str(days_to_expire) + " days" message = """\ Subject: Certificate Expiration The TLS Certificate for your site expires in {days}""" email_context = ssl.create_default_context() with smtplib.SMTP(smtp_server, smtp_port) as server: server.starttls(context = email_context) server.login(sender_email, password) server.sendmail(sender_email, receiver_email, message.format(days = days))
This function works as a proof of concept, but it isn’t secure, scalable, or fault-tolerant. Next, let’s consider how you might implement this approach in a production environment.
Step 4 — Automating the Process
To make sure you’ll be notified of upcoming certificate expiration dates in time to replace them, you’ll want to execute the above script daily. You could set up a cron job on a server to run the script daily, or if your organization uses the public cloud, you can leverage their hosted services. I’ll use Amazon Web Services (AWS) as an example of how you might implement an automated and more resilient service.
AWS Lambda is a compute service that uses a serverless model. Lambda allows you to create a cloud-based function that you can run on a schedule. AWS Lambda supports Python, so you’re already most of the way there. The AWS Lambda Developer Guide can help get you started. Within the AWS environment, you can create policies that allow interactions between services with identity and access management, removing the need for passwords and authentication. Creating a policy that will enable the Lambda function to post notifications to AWS Secure Notification Service (SNS) would be the approach I would use. The boto3 library implements the AWS SDK for python applications, and an example is available here.
SNS allows you to create a topic to send notifications. You can add email destinations, SMS messages, and even trigger other services when the topic receives a notification. SNS allows you to decouple your validation code from the notification mechanism. You can learn more about getting started with SNS here.
AWS isn’t the only Cloud Provider that offers these services. Microsoft Azure, Google Cloud, and most other providers offer serverless and notification services, which you can leverage to automate your TLS validation strategy.
Conclusion: Using Python To Manage Your TLS Certificates
Any organization that manages a SaaS service knows the pain of trying to manage multiple servers and services, each with their own certificate. If any one of those certificates expires, it can have a cascading failure effect on all the other services in the chain. And with ever-shrinking certificate expiration windows, the problem is only getting worse.
Luckily, Python provides a simple way to automatically notify all stakeholders to ensure you (and your customers) are never taken by surprise when a TLS certificate expires.
- You can view the completed sample script in this GitHub repo.
- To get started building your own TLS certificate expiration solution, sign up for a free ActiveState Platform account so you can download our TLS Checking runtime environment and get started faster.
Are you looking for more Python tools for sysadmin and itadmin tasks? The simplest way to get started is with our IT Admin Tools runtime environment for Mac and Linux, or IT Admin Tools-Win for Windows, which come with a recent version of Python and all of the common Python tools for IT. | https://sweetcode.io/how-to-prevent-tls-certificates-from-expiring-with-python/ | CC-MAIN-2021-25 | refinedweb | 1,486 | 52.09 |
QPlastiqueStyle [SOLVED]
I have to recompile an old project which is full of QPlastiqueStyle which is not included in Qt 5.0.2. What can i do ?
Old styles are no longer shipped as part of Qt 5, but they are still available in the "qtstyleplugins": repository.
how can i install them to solve the problem ?
Did you try cloning the repository (git clone), building (qmake ; make) and installing (make install)?
i am working with windows.
I don't see how that is relevant. It looks like the the style plugins should be on all platforms.
for Windows you replace that 'make' with 'nmake' :)
this can't fix the issue with qplastique, i would like to ask how can i replace this. there is not in Qt 5
or replace them... ? how can i do that ?
I don't think you need Plastique though. In most cases you can use fusion style that comes with Qt 5 instead. The biggest difference is that the style headers are not exposed as public API, but you can in most cases just replace QPlastiqueStyle references with a QProxyStyle and set it's base style to "fusion".
how can i set it's base style to fusion ?
Use the style factory:
@
QStyle *style = new QProxyStyle (QStyleFactory::create("fusion"));
@
What project is it you want to port to Qt 5? I did remember a program I ported which had QPlastique. Seeing that Qt 5 didn't support it I simply removed all instances of it. It fixed it a bit, but whether or not it completely work I can't remember.
can i post you the code to suggest me any solution ?
Sure
@class CS57Colour : public QPlastiqueStyle
{
public:
CS57Colour (int Time);
void polish (QPalette &palette);) { //TRACE("reseted the focus"); btn->state = btn->state ^ State_HasFocus; } } QPlastiqueStyle::drawControl(element, btn, painter, widget);
}
else
{
QPlastiqueStyle::drawControl(element, option, painter, widget);
}
}
private:
QPalette m_Pal; // PALETTE USED FOR THIS S-57 TYPE OF CONTROLS
};
#endif // S57COLOUR_H@
...you should highlight your code by enclosing it in @.. It's quite unreadable
done!
[quote author="bruceoutdoors" date="1368455723"]...you should highlight your code by enclosing it in @.. It's quite unreadable[/quote]
o. Haha... I'm a novice myself see, but @Jens was right. QPlastique is deprecated and replaced by fusion. Have you come across this post:
If you're asking how to fix the code itself I'm not so sure. If there's anyone know would know it would be @Jens(I'm assuming it's the same Jens that wrote "this": ... but there's an example called stylesheet you can check out.
Hey clouca, I'm just going to take a guess. Try replacing all instances of "QPlastiqueStyle" with "QStyle". If it fails then... well, it's worth a try.
Or rather replace all those QPlastiqueStyle with QProxyStyle as I suggested earlier. :)
Well clouca you heard the man. I didn't realize I missed that part of Jens post. He practically answered your question right there.
I tried these solutions both but not work.. i will send you the error shortly
Thanks. It works.. but.. :) the menus have no outline.. how can i select a style from here like fusion ? | https://forum.qt.io/topic/27178/qplastiquestyle-solved/2 | CC-MAIN-2019-18 | refinedweb | 530 | 85.18 |
Android App Development: What You Need to Know about Wearable Apps
Android wearable apps are very much like phone apps. But if things are so similar, why not just write “Follow the steps you followed for any other app” and be done with it?
The answer is, some aspects of wearable app development are different from their phone and tablet counterparts. The most obvious difference is screen size. You can’t display very much on a one-inch screen, so you have to design your app accordingly.
A wearable app typically comes in two parts — one part that runs on the wearable device, and another part that runs on the user’s phone. The phone part can make use of the larger screen size, so the phone part can contain menus, setup screens, and other features. (Imagine that! A phone has a larger screen size!)
Another limitation for wearables is the number of classes in the API. The following packages don’t work with wearables:
android.webkit
android.print
android.app.backup
android.appwidget
android.hardware.usb
Like their phone counterparts, each make and model of wearable supports its own set of features. For example, some models have built-in heart rate monitors; others don’t. You can test for the presence of a heart rate monitor with the following code:
import android.content.pm.PackageManager; … PackageManager = context.getPackageManager(); if (packageManager.hasSystemFeature (PackageManager.FEATURE_SENSOR_HEART_RATE)) { // Etc.
The PackageManager class has dozens of constants like FEATURE_SENSOR_HEART_RATE for the many features that a device may or may not have.
Another important aspect of wearable development is the device’s timeout behavior. When you wake up a phone, you see a lock screen. And when you unlock the screen, you see whatever activity was running when the phone went to sleep. But wearables are different. When you wake up a wearable, there’s no lock screen. Instead, you see either the watch face (typically, the current time) or a new notification.
One way or another, activities on wearables don’t automatically stick around the way they do on phones and tablets. So if you want something that stays on the screen, you need an always-on app.
For information about always-on apps, visit Android’s Developer site. | https://www.dummies.com/web-design-development/mobile-apps/android-app-development-what-you-need-to-know-about-wearable-apps/ | CC-MAIN-2019-22 | refinedweb | 374 | 57.27 |
patterns for filtering data
Tim Lovern
Greenhorn
Joined: Dec 11, 2002
Posts: 11
posted
Dec 11, 2002 11:50:00
0
Are there any published
patterns
for data filtering?
What I need is a way to provide a generic filtering mechanism that can be applied to Value Objects.
A user could fill in a table on a form with selection criteria for selecting sales orders, for example.
In the simplest case, this information could be used to generate a query. However, because of the complexity of our environment, the simplest case is rare.
What would happen, is that some of the criteria entered can be used to generate the query. Other aspects would need to be applied to the returned data Value Objects to filter the data. (Value Objects VO's are used to isolate the business layer from the database layer)
one thought is a filter interface with some sort of factory object supplying to needed filters to the application.
The underlying goal, is to have a generic way of doing this so that we don't have to re-invent the wheel for each application that requires the ability to do filtering of data. We also want to make sure that whatever we doesn't require the users to know the underlying structure of the data either.
Hope this made sense, and if so any ideas???
Wilfried LAURENT
Ranch Hand
Joined: Jul 13, 2001
Posts: 269
posted
Dec 12, 2002 03:45:00
0
This is a message I posted in the UML forum some times ago. Looks like a Decorator:
Let's say you want to filter a Vector of Toto objects on some criteria.
You have FilterA which returns a filtered Vector of Toto objects matching the A criteria.
You have FilterB which returns a filtered Vector of Toto objects matching the B criteria.
You have FilterC which returns a filtered Vector of Toto objects matching the C criteria.
All of these classes are inherited from a common abstract class Filter.
public abstract class Filter { public abstract Vector sort(Vector v); } public class FilterA extends Filter { Filter filter; public FilterA(Filter aFilter) {filter=aFilter; } public Vector sort(Vector v) { Vector firstSort= filter.sort(v); Vector secondSort= mylocalSort(firstSort); return secondSort; } private Vector mylocalSort(Vector v) { //Sort Vector according to some criteria.... } }
Now you can combine Filter like you want:
You want a sort according to the A and B criteria. Alright let's do it:
Filter AB = new FilterA(new FilterB()); AB.sort(myVector);
You want a sort according to the A and C criteria. Alright let's do it:
Filter AC = new FilterA(new FilterC()); AC.sort(myVector);
You want a sort according to the A, B and C criteria. Alright let's do it:
Filter ABC = new FilterA(new FilterB(new FilterC())); ABC.sort(myVector);
Now if we did it with overloading you would have had a lot of classes:
class FilterAB,
class FilterAC,
class FilterABC,
class FilterBC,
along the FilterA, FilterB, Filter C. So 4 more classes. And this is a simple example and we were quite lucky that FilterAB does the same sort as FilterBA. Do the exercise with the java.io package!
Moreover, for the client, it does not matter what kind of concrete filter he is using. All he has to know is that he has a Filter, and that he can invoke sort on it.
Hope this helps.
W.
[ December 12, 2002: Message edited by: Wilfried LAURENT ]
Tim Lovern
Greenhorn
Joined: Dec 11, 2002
Posts: 11
posted
Dec 12, 2002 10:15:00
0
Very helpful, thanks! I'll need to chew on this a while and apply it to my situation. At a glance, looks like the concept is what I need.
Consider Paul's
rocket mass heater
.
subject: patterns for filtering data
Similar Threads
How to join different tables using CMP. Attn.Kyle
JSP ViewHelper DesignPattern
Sun ePractice Examination Errors - JMS
About patterns
[mock][J2EE pattern][Business Delegate]
All times are in JavaRanch time: GMT-6 in summer, GMT-7 in winter
JForum
|
Paul Wheaton | http://www.coderanch.com/t/370217/java/java/patterns-filtering-data | CC-MAIN-2013-48 | refinedweb | 678 | 62.88 |
Odoo Help
This community is for beginners and experts willing to share their Odoo knowledge. It's not a forum to discuss ideas, but a knowledge base of questions and their answers.
Open PDF format report file name is not proper ?
Hello,
I created report it is open from wizard on open PDF file
name like : report_name.pdf i want to change the name.
Thank in Advance.
On Method of Wizard please add the name in return value.
for Example:
def _print_report(self, cr, uid, ids, data, context=None):
if context is None:
context = {}
data = self.pre_print_report(cr, uid, ids, data, context=context)
data['form'].update(self.read(cr, uid, ids, ['initial_balance', 'filter', 'page_split', 'amount_currency'])[0])
return {
'type': 'ir.actions.report.xml',
'report_name': 'product_ledger',
'datas': data,
'name': 'My REPORT NAME' #Give Your Report! | https://www.odoo.com/forum/help-1/question/open-pdf-format-report-file-name-is-not-proper-65385 | CC-MAIN-2016-50 | refinedweb | 133 | 60.72 |
I'm running an EBS-backed instance which acts as a software development team's build server (running Jenkins and host of other services). The server is running Linux (latest Ubuntu from the official AMIs).
I'd like to take regular, automated snapshots of the instance's associated EBS volume. I only need to keep one latest backup (i.e. old snapshots should be pruned), and a good frequency would be once a day.
It seems that Amazon does not provide such backup service out of the box, so you have to either go with 3rd party scripts or roll your own solution.
My question is, what is the simplest way to achieve this? I'd like a minimal amount of hassle, configuration, and external dependencies. Setting this up as some kind of timed script on the Linux box itself is, to my knowledge, a valid option.
Okay, for what it's worth, here's what I did. I hope my feeble scripts encourage people to post better solutions!
I wrote two simple bash scripts and automated them using cron. (For now I run these on a local server, as I think (?) it's not recommended to put AWS's certificates in the instances/AMIs/EBSs themselves.)
To create a new snapshot:
# ESB volume associated with the instance we want to back up:
EBS_VOL_ID=vol-xxxxyyyy
ec2-create-snapshot --region eu-west-1 -K pk.pem -C cert.pem -d "Automated backup" $EBS_VOL_ID
To prune all except latest snapshot:
EBS_VOL_ID=vol-xxxxyyyy
ec2-describe-snapshots --region eu-west-1 -K pk.pem -C cert.pem | grep "Automated backup" | grep "$EBS_VOL_ID" | awk '{ print $5 "\t" $2 }' | sort > .snapshots
latest_id=$(tail -n1 .snapshots | awk '{ print $2 }')
cat .snapshots | awk '{ print $2 }' > .snapshot_ids
for i in $(cat .snapshot_ids)
do
if [ "$i" != "$latest_id" ]
then
echo "Deleting snapshot $i"
ec2-delete-snapshot --region eu-west-1 -K pk.pem -C cert.pem $i
fi
done
(This parses appropriate snapshot information from ec2-describe-snapshots output and creates a temp file with [timestamp tab snapshot-id] entries (e.g.
2011-06-01T10:24:36+0000 snap-60507609) where the newest snapshot is on the last line.)
ec2-describe-snapshots
2011-06-01T10:24:36+0000 snap-60507609
Notes:
--region
ec2-create-snapshot
Disclaimer: This became partly an exercise in Bash/Unix programming for me, especially the prune script. I readily admit you'd most likely get a much clearer result with e.g. Python, when you need logic like "do something for all but the last item in a list". And even with Bash you could probably do this more elegantly (for instance, you don't really need temp files). So, please feel free to post other solutions!
Based on Jonik's concept, I created a python script using boto. You provide it a list of volumes to snapshot, and how many trailing snapshots to keep for each volume:
# Define the snapshots manage. We'll snapshot the specified volume ID, and only keep the X newest ones.
snapshots = [("vol-XXXXXXXX", 30), ("vol-YYYYYYYY", 180)]
import boto.ec2
auth = {"aws_access_key_id": "YOURACCESSKEY", "aws_secret_access_key": "YOURSECRETKEY"}
ec2 = boto.ec2.connect_to_region("YOURREGIONNAME", **auth)
description = "automated backup"
for volume, num_trailing in snapshots:
snaps = ec2.get_all_snapshots(filters={"volume-id": volume, "description": description})
print "%s: Creating new snapshot. %s automated snapshots currently exist." % (volume, len(snaps))
ec2.create_snapshot(volume, description)
purgeable = sorted(snaps, key=lambda x: x.start_time)[:-num_trailing]
print "Deleting snapshots for %s > %s: %s" % (volume, num_trailing, purgeable)
for snap in purgeable:
ec2.delete_snapshot(snap.id)
I set this up as Jenkins job (via the Python plugin), configured to run daily. If you are using IAM to manage credentials, note that this will require in ec2 policies: DescribeRegions, DescribeVolumes, CreateSnapshot, DeleteSnapshot, DescribeSnapshots, CreateTags (because of boto's implementation).
If you're open to external utilities, check out Skeddly.
Disclosure: I'm the CEO of Eleven41 Software, the company behind Skeddly.
I expanded on the idea of Jonik's script to allow multiple snapshots to be retained. The code is too long to fit in a comment so I'm adding a new answer. This code assumes all the right environment variables have been set up for the CLI tools. Also, this defaults to taking a snapshot of the current instance.
# Look up our instance ID using the magic URL
INSTANCE_ID=$(curl -s)
# The number of previous backups we want to keep
N=3
# get list of locally attached volumes via EC2 API:
VOLUME_LIST=$(ec2-describe-volumes | grep ${INSTANCE_ID} | awk '{ print $2 }')
DATE=$(date '+%Y-%m-%d-%H%M%S')
sync
# actually creating the snapshots
for VOLUME in $(echo $VOLUME_LIST); do
echo "Processing volume $VOLUME"
SNAPSHOT_LIST=$(ec2-describe-snapshots | grep completed | grep "Automatic snapshot" | grep $VOLUME | awk '{print $5 "\t" $2}' | sort | head "--lines=-$N" | awk '{print $2}')
ec2-create-snapshot $VOLUME -d "Automatic snapshot on $DATE"
for SNAPSHOT in $(echo $SNAPSHOT_LIST); do
ec2-delete-snapshot $SNAPSHOT
done
done
I have written a script in PHP that will automate EBS Snapshots and delete old ones. It will even email the results of the snapshots to you. You must configure the AWS PHP SDK and PHPMailer for email functionality, but both of these steps are pretty easy. Then you just run the script every night with CRON or Windows Scheduled Tasks. Detailed instructions and code can be found on my blog:
By posting your answer, you agree to the privacy policy and terms of service.
asked
2 years ago
viewed
2228 times
active
1 year ago | http://serverfault.com/questions/275245/automated-snapshots-of-ebs-backed-ec2-instance-running-ubuntu | CC-MAIN-2014-15 | refinedweb | 911 | 65.83 |
AdMob
Nav Gupta: “The beauty of this integration is that you do not have to change anything inside your ShiVa app! Simply follow the steps in Eclipse and no edits are required inside ShiVa specifically to get this up and running unless you wish to do anything custom.”
Change the screen orientation in ShiVa to match your PC/Mac (In this case, orientation 0). We will be swapping orientation MANUALLY in Eclipse. Should this pose an issue in your app (If you use portrait mode), then that is the only time you’ll be doing anything in ShiVa specifically. This tutorial handles LANDSCAPE orientation. You can easily swap it around to portrait by editing the text in the orientation step later.
Also we are setting up things in TEST MODE. You simply need to comment this out when you are planning to go live.
Video
[video_lightbox_youtube video_id=”JIWPVqKk0Ng” width=”1024″ height=”600″ auto_thumb=”1″]
Eclipse Steps
Before we begin, I am going to assume you already have Eclipse up and running.
– 1. Open Eclipse
– 2. Click File->New->Java Project
– 3. Name the project the title of your App/Game
– 4. Right click on the newly created project folder inside of Package Explorer
– 5. Click on Import
– 6. Select Archive File and Hit Next
– 7. Select Browse and find the Zip file you generated from the ShiVa UAT
– 8. Download the latest Google Admob SDK for Android
– 9. Right click on the project folder again and this time select New->Folder
– 10. Name the folder “libs” (No Quotes) and hit Finish
– 11. Unzip the SDK and from that window click and drag the GoogleAdMobAdsSdk-x.x.x.jar file to the newly created libs folder inside your project.
– 12. Right click on your project folder again, and this time select properties
– 13. Select Java Build Path and then the Libraries Tab
– 14. Click on Add Jar and select the GoogleAdMobAdsSdk-x.x.x.jar file from the libs folder
– 15. Once imported, click on the Order and Export tab. Make sure that you have checked off the GoogleAdMobAdsSdk-x.x.x.jar file and hit OK
– 16. Open the project.properties file and set the target to android-13 (android OS 3.2) and save the file. Update your Android SDK if you get an error doing this.
– 17. Open local.properties and at the very bottom of the file, place this line of code:
renderscript.opt.level=O0
– 18. Open the AndroidManifest.xml. Select the AndroidManifest.xml tab at the bottom so you can edit the code.
– 19. Go down to line 11 and change the orientation. Your line should look like this:
android:screenOrientation="landscape"
– 20. Go down to line 18 and add the following:
– 21. Go down to line 26 and add the following code (NOTE: When you are going to go live, you will be removing the READ_PHONE_STATE line as it is used only when testing the device to obtain the device ID. This ensures that you do not get banned from Admob for clicking your own ads during testing):
– 22. Now we will go to the main Java file.
– 23. Expand line 7 (Hit the + sign next to it) and add the following imports:
import com.google.ads.AdRequest; import android.telephony.TelephonyManager; import com.google.ads.AdSize; import com.google.ads.AdView;
– 24. Go down to line 125 and add the following code to the list of variables:
private AdView adView;
– 25. Open a browser window and go to and create a new android site.
– 26. Setup all the basics about your app there and then you will be given a Publisher ID which you will use in the next step.
– 27. Inside onCreate, go down to line 174 and add the following code:
// Create an ad (Note: You can choose AdSize.BANNER, AdSize.SMART_BANNER, etc. this.adView = new AdView(this, AdSize.SMART_BANNER, "PUBLISHER_ID_GOES_HERE"); AdView.LayoutParams adViewParams = new AdView.LayoutParams( AdView.LayoutParams.WRAP_CONTENT, AdView.LayoutParams.WRAP_CONTENT); //the next line is the key to putting your ad on the bottom. If you want it on //the top, you simply use ALIGN_PARENT_TOP. adViewParams.addRule(AdView.ALIGN_PARENT_BOTTOM);
– 28. Go down a little further to approx. line 186 (~194-195 approx), right after oSplashView and oViewGroup, and paste the following:
// make sure it is before the first oViewGroup.addView: oViewGroup.addView ( this.adView, adViewParams ) ;
– 29. After that, go down to line 190 (~212) (before the createAsync) add the following:
AdRequest adRequest = new AdRequest(); //Comment out the following lines when going LIVE to public adRequest.addTestDevice(AdRequest.TEST_EMULATOR); final TelephonyManager tm =(TelephonyManager)getBaseContext().getSystemService(Context.TELEPHONY_SERVICE); String deviceid = tm.getDeviceId(); adRequest.addTestDevice(deviceid); //END of Testing Area this.adView.loadAd(adRequest); this.adView.bringToFront(); oViewGroup.bringChildToFront(this.adView);
– 30. Go down to line 255 (~285) if you need to disable wakelock – simply change the setting to false instead of true.
– 31. Next, go down to line 299 (~282) and add the following line after the oViewGroup.addView:
oViewGroup.bringChildToFront( adView );
– 32. Go down to line 427 (~511) and add the following code after the IF statement, so you can see your ad again after a resume:
if ( adView != null ) { oViewGroup.bringChildToFront( adView ); }
– 33. Go down to line 492 (~573 onDestroy Function) and add the following IF statement after the o3DView IF statement:
// Destroy the AdView. if (adView != null) { adView.destroy(); }
– 34. Comment out the following line (approx. 874 (~1000)) so it looks like this:
//oThis.setRequestedOrientation ( ActivityInfo.SCREEN_ORIENTATION_LANDSCAPE ) ;
This makes sure we do not have the screen change on its own after coming back to app.
– 35. Comment out the following line (approx. 905 (~1030)) so it looks like this:
//oThis.setRequestedOrientation ( ActivityInfo.SCREEN_ORIENTATION_PORTRAIT ) ; // TODO: restore original orientation
This makes sure we do not have the screen change on its own after coming back to app.
– 36. Compile and test on your device. If successful, simply comment out the test mode lines and you should be ready to go live!
Links
Banner Ad positioning fix
Another Tutorial @ jameselsey.co.uk
Admob Ad Positioning
Another Tutorial @jmsliu.com
Fix for Android Version @stackoverflow
Other Code Help @stackoverflow
Latest Version Code @stackoverflow
Previous Version Youtube Video by Fragagames
Google AdMob Ads SDK TestMode
DeviceID Code @stackoverflow | https://shiva-engine.com/knowledgebase/admob/ | CC-MAIN-2020-50 | refinedweb | 1,035 | 59.5 |
Norman released Seam 2.1.2 yesterday and it comes with much improved support for REST processing, compared to previous 2.1.x versions. We started integrating RESTEasy - an implementation of JAX-RS (JSR 311) - with Seam almost a year ago in a first prototype. We then waited for the JAX-RS spec to be finalized and for RESTEasy to be GA, which happened a few months ago. So based on that stable foundation we were able to finish the integration with Seam.
I'm going to demonstrate some of the unique features of that integration here, how you can create a RESTful Seam application or simply add an HTTP web service interface to an existing one.
Deploying resources and providers
With JAX-RS you write a plain Java class and put @javax.ws.rs.Path("/customer") on it to make it available under the HTTP base URI path /customer. You then map methods of that class to particular sub-paths and HTTP methods with @javax.ws.rs.GET, @POST, @DELETE, and so on. These classes are called Resource classes. The default life cycle of an instance is per-HTTP-request, an instance is created for a request and destroyed when processing completes and the response has been sent.
Converting HTTP entities (the body of an HTTP request) is the job of Provider classes, annotated with @javax.ws.rs.ext.Provider and usually stateless or singleton. They transform content between HTTP and Java types, say my.Customer entity to and from XML with JAXB. Providers also are the extension point in JAX-RS for custom exception converters, etc.
RESTEasy has its own classpath scanning routine that detects all resources and providers by looking for annotations. That requires a servlet context listener configured in web.xml. You'd also have to configure a request dispatcher servlet. Finally, if you'd like to make your resource classes EJBs, for automatic transaction demarcation and persistence context handling, you'd have to list these EJBs in web.xml as well. This last feature is a RESTEasy enhancement and not part of the JAX-RS specification.
If you use Seam with RESTEasy, none of this extra work is necessary. Of course it still needs to be done but you most likely have already configured the basic Seam listener and resource servlet in web.xml - almost all Seam applications have.
You do not have to configure RESTEasy at all. Just drop in the right JAR files (see the reference docs) and your Seam application will automatically find all @Path resources and @Provider's. Your stateless EJBs still need to be listed to be found, but that can be done in Seam's components.xml or programmatically through the usual Seam APIs. All the other RESTEasy configuration options and some useful other configuration features are available as well.
So without changing any code, you get easier deployment and integrated configuration of JAX-RS artifacts in your Seam application.
Utilizing Seam components
Resources and providers can be made Seam components, with bijection, life cycle management, authorization, interception, etc. Just put an @Name on your resource class:
@Name("customerResource") @Scope(ScopeType.EVENT) // Default @Path("/customer") public class MyCustomerResource { @In CustomerDAO customerDAO; @GET @Path("/{customerId}") @Produces("text/xml") @Restrict("#{s:hasRole('admin')}") public Customer getCustomer(@PathParam("customerId") int id) { return customerDAO.find(id); } }
Naturally REST-oriented architecture assumes that clients are maintaining application state, so your resource components would be EVENT or APPLICATION scoped, or STATELESS. Although SESSION scope is available, by default a session only spans a single HTTP request and it's automatically destroyed after the HTTP request. This behavior and how to configure it if you really want to transmit a session identifier between the REST client and server and utilize server-side SESSION scope across requests is explained in more detail in the reference docs. We already have some ideas for CONVERSATION scope integration, follow this design document fore more info.
Of course your resource Seam component doesn't have to be a POJO, you can also use @Stateless and turn it into an EJB. Another advantage here is that you do not have to list that EJB in components.xml or web.xml anymore as all Seam components are automatically found and registered according to their type.
The @Restrict annotation is just a regular Seam authorization check, currently you can configure Basic or Digest authentication as you'd for any other Seam application.
CRUD framework integration
Seam has a framework for building basic CRUD database applications quickly, you probably already have seen EntityHome and EntityQuery in other Seam examples. Jozef Hartinger built an extension that allows you to create a basic CRUD application with full HTTP/REST support in minutes. You can declare it through components.xml:
<framework:entity-home <framework:entity-query <resteasy:resource-home <resteasy:resource-query
You only have to create the my.Customer entity and you are ready to read from and write to the database through HTTP.
- A GET request to /customer?start=30&show=10 will execute the resourceCustomerQuery component and return a list of all customers with pagination, starting at row 30 with 10 rows in the result.
- You can GET, PUT, and DELETE a particular customer instance by sending HTTP requests to /customer/<customerId>.
- Sending a POST request to /customer creates a new customer entity instance and persists it.
Note that the <framework:...> mappings are part of the regular Seam CRUD framework with all the usual options such as query customization. The content will be transformed by the built-in RESTEasy providers for XML and JSON, for example. The XML transformation will use any JAXB bindings on your entity class.
You do not have to use XML configuration; as you'd with the Seam CRUD superclasses ResourceHome and ResourceQuery, you can write subclasses instead and configure the mapping with annotations.
There is a reason this CRUD framework feature is not documented in the current release: We are not sure the API will stay as it is. Consider this release as our proposal and we really need feedback on it, what works and what can be improved. Jozef also wrote a full-featured RESTful application with a jQuery based client for regular webbrowsers to demonstrate the CRUD framework. Have a look at the Tasks example in the Seam distribution. You can find more demo code and tests in the Restbay example which we use for general RESTEasy integration testing and demonstration.
Feature shortlist
I've only highlighted three of the main features of Seam and RESTEasy but there is more available and more to come:
Exceptions in JAX-RS applications are mapped to HTTP responses for clients with provider classes called ExceptionMapper. That can be much more work than it should, so you can also map exceptions in Seam's pages.xml declaratively, see docs.
You can write unit tests that pass mock HTTP request and response through Seam and RESTEasy, all with local calls not TCP sockets. We use them in the integration tests and so can you to test your application. See the reference docs.
There is already talk about
MVC and REST. What this all comes down to, at least from my standpoint, is that hypertext should drive the application state through linked resources (HATEOAS, Hypertext as the engine of application state). From a technical perspective, it simply means that we need more control over how the
view is rendered, not just marshaling dumb XML documents from Customer entities with JAXB defaults. We should render XHTML representations - which of course may include JAXB-rendered XML blobs in addition to links and forms - and be able to customize them with templates.
Facelets seems like a natural fit for this and we have a prototype for sending templated XHTML responses:
@GET @Path("/customer/{id}") @ProduceMime("application/xhtml+xml") @FaceletsXhtmlResponse( template = "/some/path/to/template/#{thisCanEvenBeEL}/foo.xhtml" ) @Out(value = "currentCustomer", scope = ScopeType.EVENT) public Customer getCustomer(@PathParam("id") String id) { ... }
This is just pseudo-code, this feature is not available in the release. It wouldn't be very useful as it is, because we don't know how to transform incoming HTTP requests with XHTML payload back into a Facelet view. It's not trivial to implement either and we'll probably wait for JSF2 before we finalize this. But it shows that providing a JSF-based human client interface and a RESTful HTTP web service interface in the same application might be a natural fit with the given technologies.
Next version?
The currently available RESTEasy version is still not GA, although it is a release candidate. There are also a few open issues with the integration code that we'd like to close, and we have to finalize the CRUD framework interface. This is all expected to happen in the Seam 2.2 releases.
More elaborate additional features such as conversation integration, representation templating, or additional authentication schemes are probably reserved for Seam3 as we might want to build on the new JSF2/JCDI standards as much as possible. Follow this wiki page for updates.
P.S. This book is an excellent starting point if you are wondering what this stuff is all about.
Update: I forgot to mention one important feature that some of you might like. You can annotate your Seam component (POJO or EJB) interface and not the bean class. For EJB Seam components, you actually have to annotate the local business interface.
@Path("/customer") public interface MyCustomerResource { @GET @Path("/{customerId}") @Produces("text/xml") public Customer getCustomer(@PathParam("customerId") int id); }
@Name("customerResource") public class MyCustomerResourceBean implements MyCustomerResource { @In CustomerDAO customerDAO; @Restrict("#{s:hasRole('admin')}") public Customer getCustomer(int id) { return customerDAO.find(id); } } | http://in.relation.to/2009/06/09/rest-support-in-latest-seam-21/ | CC-MAIN-2017-47 | refinedweb | 1,611 | 54.63 |
Recently Browsing 0 members
No registered users viewing this page.
Similar Content
- By SgtIgram
Hi!
First.. sorry for my bad english
For some time i play a bit with embedded Flash-Objects and after a few changed variables in the "root" of the flash-file with
$oRP.SetVariable("/:randomVar", "foobar") i came to the problem that i need to change a variable thats inside a function/class or whatever
the wanted variable is inside a
_global.client = new clientCom(fooBar); and is defined as
var _loc1_ = this; _loc1_.randomVariable = ""; how can i change it?
i tried something like
$oRP.SetVariable("/:client.randomVariable", "foobar") but nothing =(
i hope someone can help me!
greetings
- sgtigram
- By spudw2k
Recently I was using csvde to execute some LDAP queries on a domain controller to create some reports. I noticed that when I queried the objectSID, it was returned (output) in binary format instead of the S-#-#-##-### (string) format I needed to compare with. I found there was a function I could use in the Security.au3 UDF to convert the SID Binary value to the SID String format; however, the example in the help file collected the SID binary value by using another function to lookup an AD object by name. Since I already had the SID, this "step" was erroneous to me, but I was still required to do some work to make the _Security__SidToStringSid function accept my binary values--namely creating and populating a DLLStruct before using as a parameter for the SidToSTringSid function. Below is a small illustration of what I did. It wasn't particularly complicated or difficult, but may provide some insight to folks who don't mess/work with DLLStructs much. Also, my "real" script utilized a lengthy CSV report and parsed it to replace the binary values with the SID strings. I just wanted to share this snippet.
#include <security.au3> msgbox(0,"Builtin\Users",_SIDBinaryToStr("01020000000000052000000021020000")) msgbox(0,"Builtin\Guests",_SIDBinaryToStr("01020000000000052000000022020000")) msgbox(0,"Domain Users",_SIDBinaryToStr("010500000000000515000000e2ef6c5193efdefff2b6dd4401020000")) Func _SIDBinaryToStr($hSID) Local $tSID = DllStructCreate("byte SID[256]") DllStructSetData($tSID, "SID", Binary("0x" & $hSID)) Local $sStringSID = _Security__SidToStringSid($tSID) Return $sStringSID EndFunc
- By wakillon
A little try with SDL 2 Library, SDL 2 GFX and Sid music.
As usual, no externals files needed.
Press "Esc" for quit.
SDL2_Fireworks.au3
Happy New Year 2017
- lordsocke
Hey guys I want to build my selfe a little helptool for the FIFA Webapp from EA. Where I need to get in contact with the flashplugin.
I could use pixelsearch fuctions but they are a little u know... stupid
Is there a libary for flash? Thanks everyone
- | https://www.autoitscript.com/forum/topic/186266-soon-in-the-sky/?do=showReactionsComment&comment=1337663&changed=1&reaction=all | CC-MAIN-2022-21 | refinedweb | 434 | 54.73 |
Slashdot Log In
Shirky On Umbrellas, Taxis And Distributed Systems
There's a good article from Clay Shirky talking about the similarities between umbrellas, taxis and distributed computing. And if you really want more P2P than you can shake a fork at, the folks at ORA have also released an excerpt from the upcoming Dornfest and Brickley book.
This discussion has been archived. No new comments can be posted.
Shirky On Umbrellas, Taxis and Distributed Systems | Log In/Create an Account | Top | 40 comments | Search Discussion
The Fine Print: The following comments are owned by whoever posted them. We are not responsible for them in any way.
This is the (Score:1)
It's pretty simple actually (Score:1)
Re:Where are the applications? (Score:2)
--
Where are the applications? (Score:5)
Firstly, the algorithm must be parallelizable. This means that it should be possible to split an algorithm which normally takes N time, across a number of, say P processors, and have it take less than N time, and ideally N/P time.. Recall that in most distributed computation applications connectivity will be limited to a 56k modem which is only connected to the Internet intermittently. Even if you limit users to broadband, communication bandwidth is still a problem.
Thirdly, the algorithm must be robust, if someone decides to screw things up, and hack their client to send back malicious data (as happened with Seti@home) they must not be able to invalidate the work that everyone else has done. Ideally there would be an easy way to validate the work done by each client in the system.
Now, I am not saying that there are no applications which do not conform to these criteria, for example, cracking crypto algorithms and processing information from space telescopes in search of intelligent life clearly work quite well - however neither of them can really be used to make vast amounts of money. The only other thing I can think of are genetic algorithms, but again, whether there is a revenue stream there is an important question.
Perhaps some of these distributed computation people have found a killer application for this technology, some of them certainly claim that they have, but I really wonder whether such applications will stand up to scrutiny on the grounds I outline above.
--
Re:User created metadata considered harmful (Score:1)
I'm presenting a talk at the O'Reilly p2p conference [oreilly.com] entitled "Attack Resistant Sharing of Metadata".
It's based on an idea of Raph Levien's [advogato.org], somewhat similar to the Advogato trust metric [advogato.org]. Basically, you only trust meta-data from your friends, or from people whose meta-data has been good in the past, and then to a lesser extent you trust their friends, but you dynamically adapt if someone starts distributing bad meta-data. We can't really prove that it will work, but it has some promising characteristics.
We are going to implement it on top of Mojo Nation [mojonation.net].
Regards,
Zooko
Back of the envelope math (Score:1)
For starters, the numbers were chosen to make the math come out nicely -- the average box less monitor is actually less than a grand, and the average machine is in service longer than 20,000 hours, which makes the nickel figure high.
Furthermore, if PopPower et al wanted to build a cycle farm, they'd use multi-CPU boxen, so the calculations get even more complicated. Finally, as we've seen in Cali, power requirements differ between consumer and business regimes.
So there are a lot of variables pulling the number this way and that, wiht an increasing degree of speculation, but sinece the real point I was trying to make -- your use of your PC has variable value to you between the hours you are and aren't using it, and that if you're using it, no pro-rate fee will induce you to stop -- would have been accurate even if the nickel number is low by a factor of 5, I left the back of the envelope calculation and went on.
-clay
Re:weird bit (Score:1)
I made a similar point in another thread [slashdot.org] on this topic.
-clay
Re:Where are the applications? (Score:3)
You forgot a forth and more critical criteria which all the P2P companies keep saying "pay no attention to the man behind the curtain"
Fourth, the company must not care about the data, algorithms, and results becoming public immediately. Available to any competitor or evil cracker who wants to mess with you.
Forget the other 3, you will have a nearly impossible time finding anyone willing (stupid enough) to give you money and live with #4.
Of course, some of us have known this for a very long time, commercial distributed computing was put to sleep in the 70's. But then, in the 70's VCs were smarter.
Napster (Score:2)
Re:Will companies really see so much profit? (Score:4)
I think that's precisely the problem. The things that we've found lend themselves well to distributed computing (SETI, cracking encryption) don't lend themselves as well to making money. What company wants to pay for either of the above two, let alone a lot of money?
That's not to say that P2P is already doomed though. I don't think that it's a technical problem at this point, I think it's a business problem. Someone has to figure out a problem that has two attributes: It must lend itself to being more quickly solved via distributed computing, and it must be something with such a high demand that someone is willing to pay big money.
It's very possible that P2P could take off...but I'm not holding my breath. Even if they solve the issue of "what problem is worth the money", there's still the problem of "who will let us use the cycles" and "how do we keep from getting cheated".
-Jer
What kinda math is that? (Score:4)
I don't understand how the author came to the nickel per hour number.
Sure, the cost of the machine boils down to (by his math) a nickle an hour, but that's not the same cost as the company would have to take on.
A company would have to buy the system, hire the IT personnel, cover their benefits, store them, pay for the electricity, pay for the heating/cooling, pay for maintenance, parts if they break, warranties, etc. These (and more) are little things that a home user might not even consider when determining if it's "worth it", and makes the "break even" point much higher than a nickle per hour.
I'd like to see the same breakdown done with some more accurate math.
-Jer
Re:Where are the applications? (Score:2)
Not necessarially. Depending on the cost of cycles, it may be sufficient to use a less efficient approach that is not completely scalable..
I do actuarial projections for a life insurance company. I have a set of assets (investments with future cash flows to the company) and liabilities (insurance policies with future cash flows). The liability cash flows influence what funds are available for investing (or dis-investing). Industry regulations require that I investigate the adaquacy of the type and amount of the company's assets under different interest rate environments. The regulators want to make sure that even if interest rates and/or equity values spike up or drop down dramatically, the company will not become insolvent. The tricky part is that the liability cash flows are often dependant on the interest income that they assets can generate and the interest income that assets can generate is dependant on the interest rate environment when each of the cash flows occurs.
Because of the interrelatedness of the two portfolios, there are two ways I can go about dividing up this project. I can slice by time, calculating all of the cash flows that I need at a given time to determine whether there is cash to invest or assets to sell. This is the most efficient method, but it has high communications requirements.
Or, I can project all of the liabilities over future times and get a series of liability cash flows which then imply a series of asset portfolios and interest rates and then iterate back and forth between liabilities and assets until the answers converge. (Typically on the order of 10 or so iterations and not hundreds or thousands). This is less efficient, but has lower communications requirements. If cycles are sufficiently cheap, it may pay to use a less efficient algorithm.
Thirdly, the algorithm must be robust, if someone decides to screw things up, and hack their client to send back malicious data (as happened with Seti@home) they must not be able to invalidate the work that everyone else has done.
That depends entirely on the incentives. SETI was vulnerable because there was a competition to rack up completed cells. If the incentives to participate are designed properly, the may be no incentive to hack the client.
Umbrella size (Score:1)
Re:Where are the applications? (Score:2)
The cost to buy and maintain the bandwidth needed to push the data out to distributed resources would be more than the cost of a mainframe.
Another analogy (Score:2)
There are a lot of unused cycles out there, but they are cheap and so finely dissolved that the extraction process isn't viable.
RDF ?= Robotech Defense Force (Score:2)
O you mean Resource Description Framework....
i always mix the two up..
nmarshall
The law is that which it boldly asserted and plausibly maintained..
Wrong authors (Score:1)
Will companies really see so much profit? (Score:5)
Similar to "Permutation City" (Score:1)
A good book (for other reasons as well). Unfortunately I managed to leave it in a Sydney hotel room.
Cheers,
SuperG
Umbrellas and Cabs (Score:1)
-Moondog
Re:analogy doesnt work (Score:1)
weird bit (Score:1)
MusicBrainz solves music metadata problems (Score:2)
More to P2P than cycles (Score:3)
Metadata early in the game (Score:1)
Isn't this what the cue cat people did when they embedded serial numbers into their scanners?
It allowed them to start creating a metadatabase on you!
Re:User created metadata considered harmful (Score:1)
Agreed, little motivation to donate bandwidth (Score:3)
Added to which, once we actually start paying for music downloads (its inevitable), there will be demand for reliable downloads. Hell, if I'm paying real money per song, timeouts and crappy connections are unacceptable. Once money enters into the equation, I want the media in a timely and efficient manner.
None of this matters in a future where everyone has fiber to the home, but we're at least fifteen years away from that being a reality for most citizens.
User created metadata considered harmful (Score:4)
This is why search engines that work off of metadata typically give you porn links for almost anything, and why Yahoo can't be spoofed (their surfers actually visit the site to see what its about).
Re:not enough of a return (Score:1)
not enough of a return (Score:4)
This hits the nail on the head. I'm willing to install the RC5 client on my machines for several reasons: 2. It's a project whose goals I more or less believe in. (SETI would be an even better match, but I ended up installing the dneet client first.)
3. I already installed it. Once it's been configured and set to run on my FreeBSD and linux boxen I can forget about it. More trouble to disable it or find a new distributed project, install that, configure it, and get it running on all my computers.
I think this article gets it right. The returns for me contributing my spare cycles as well as the effort to install and set up the clients is not worth whatever change they are paying. Like the article says, if they pay a nickel per processing hour, that takes roughly 2.28 years to earn a thousand dollars if my system is running the client 100% of the time at full processor speed. (I have no idea how much these systems actually pay, I'm just quoting the articles example.) The actual amount earned would actually be much less as I do various things with my system: burn CD's, play quake, write papers, etc. The long term return of pennies, or less than pennies on the hour makes me say that it's not worth it. And I suspect that without some higher incentive, like distributed.net crunching keys has been turned into a competition, most people just aren't going to take the trouble to signup for these paid distributed services. To have enough computers to make some serious money, you had to have enough money in the first place to make whatever they pay you small change.
Re:weird bit (Score:2)
-Nev
Another idea for payment of resources (Score:3)
Therefore, I propose that projects such as Popular Power, etc., abandon the idea of paying individuals a few nickles for some amount of cpu processing, but instead pay the charitable organization of the individuals choice.
For example, you could sign up your machine on, i.e, Team FSF, and for every X number of opperations your machine computes for these distributed projects, a dollar would be donated to the FSF.
Money, etc. (Score:3)
but the idea of someone paying my electric bill....
I gotta admit that I can see the potential for abuse on this one.
On the other hand, this comment tossed in at the end gives me the shivers:As a moment of paranoia sets in, I can see MS adding this element to there
I do not know what it is, but I always seem to have this moment of distrust whenever I read something involving MS.
Then again, maybe the MS marketroids read Slashdot, checking it out for this kind of thinking, in order to get new marketing ideas that they can use.
;-)
International usage ! (Score:1)
Are these guys ready to pay for foreign processing power too ?
If so, which companies ?
Re:analogy doesnt work (Score:1)
Re:Where are the applications? (Score:2)
I'm sure I'm not the only one who loves setting up scenes with a gazillion meshes and complex camera shots and ray traced textures mirroring each other into infinity. The wireframe of a wild animation is within reach of many typical desktops, it's the rendering that you'll never get --especially at high res-- without your own CPU farm.
So, my proposal to whatever company it was, was to allow artists to send in descriptions of their animation along with say a single screen shot and then CPU cycle donators could go to the site and decide which project they wanted to patronize.
The reward? --not money but a free copy of the final project.
In my mind, this is where the net can transcend conventional notions of economy. Heady stuff.
But what about the money? Well, the organizing site would have to get by on ad revenues. But since it would be an entertainment site, that might not be too bad.
Re:Will companies really see so much profit? (Score:1)
They don't even have to do that - they just need to set up the business case for a CPU cycles bidding market, and the applications will create themselves. (So, it is still a technical problem - creating the infrastructure so that arbitrary processing packages can be distributed according to the results of the bidding.)
Wired pondered this [wired.com] recently.. it could be really cool if someone can pull it off.
analogy doesnt work (Score:1)
mmmm, distributed chocolate. (Score:1)
Hmmm, I disagree (Score:1) | http://slashdot.org/articles/01/01/21/1630204.shtml | crawl-002 | refinedweb | 2,667 | 60.24 |
#include <Sequence_T.h>
#include <Sequence_T.h>
Inheritance diagram for TAO_Unbounded_Sequence< T >:
This class completes the implementation for TAO_Base_Sequence for the unbounded case.
Default constructor.
Constructor using a maximum length value.
0
Constructor using the data and memory management flag. Memory *must* be allocated using the ::allocbuf static method, since in the future this classes may use a global ACE_Allocator.
Copy constructor.
Dtor.
[virtual]
Implement the TAO_Base_Sequence methods (see Sequence.h).
Implements TAO_Base_Sequence.
Must deallocate the buffer and then set it to zero.
Some sequences (of objects and strings) require some cleanup if the sequence is shrunk. The spec requires the destructor to release the objects only from position <0> to <length-1>; so shrink and then delete could result in a memory leak.
Reimplemented from TAO_Base_Sequence.
[static]
Allocate storage for the sequence.
Free the sequence. straings,>.
Assignment operator.
operator []
Allows the buffer underlying a sequence to be replaced. The parameters to <replace> are identical in type, order, and purpose to those for the <T *data> constructor for the sequence. | http://www.dre.vanderbilt.edu/Doxygen/5.4.9/html/tao/classTAO__Unbounded__Sequence.html | CC-MAIN-2013-20 | refinedweb | 169 | 51.04 |
You can subscribe to this list here.
Showing
9
results of 9
Stefan Scholl <stesch@...> writes:
> * (md5:md5sum-sequence "foo")
>
> debugger invoked on a SIMPLE-ERROR in thread 24272:
> segmentation violation at #X97A9418
So although this isn't the friendliest way of telling you so, lying to
the compiler under (speed 3) (safety 0) is a relatively good way of
convincing the system that zero is equal to minus one, and once we've
reached that point, well, all bets are off. Essentially, given the
false ftype declamation for MD5::I, the compiler proves that the
universe is inconsistent: given that, the fact that it doesn't format
your hard drive is perhaps a bonus :-)
(Note that it _does_ tell you that the compilation failed: when
compiling md5.lisp, it throws a full WARNING when compiling MD5::I,
complaining of the type mismatch between the derived type and the
declared type; COMPILE-FILE also returns a failure value of T.)
So, this is really a feature. ;-)
Cheers,
Christophe
-- +44 1223 510 299/+44 7729 383 757
(set-pprint-dispatch 'number (lambda (s o) (declare (special b)) (format s b)))
(defvar b "~&Just another Lisp hacker~%") (pprint #36rJesusCollegeCambridge)
On Tue, Aug 31, 2004 at 05:15:05PM -0400, Zach Beane wrote:
>
> Anyway, he suggested patching line 242 of src/pcl/boot.lisp from
> #',fun-name to (fdefinition ',fun-name). I'm going to try that and
> test it and see what happens.
I changed that line, ran the tests, and everything seemed to work
ok. Here's a patch:
? tests/test-results.txt
Index: tests/clos.impure.lisp
===================================================================
RCS file: /cvsroot/sbcl/sbcl/tests/clos.impure.lisp,v
retrieving revision 1.54
diff -u -r1.54 clos.impure.lisp
--- tests/clos.impure.lisp 16 Jun 2004 21:00:24 -0000 1.54
+++ tests/clos.impure.lisp 31 Aug 2004 21:44:52 -0000
@@ -821,5 +821,14 @@
x)
(assert (= (fum 3) 3/2))
+;;; Bug reported by Zach Beane; incorrect return of (function
+;;; ',fun-name) in defgeneric
+(assert
+ (typep (funcall (compile nil
+ '(lambda () (flet ((nonsense () nil))
+ (defgeneric nonsense ())))))
+ 'generic-function))
+
+
;;;; success
(sb-ext:quit :unix-status 104)
Index: src/pcl/boot.lisp
===================================================================
RCS file: /cvsroot/sbcl/sbcl/src/pcl/boot.lisp,v
retrieving revision 1.82
diff -u -r1.82 boot.lisp
--- src/pcl/boot.lisp 1 Jul 2004 11:41:22 -0000 1.82
+++ src/pcl/boot.lisp 31 Aug 2004 21:44:52 -0000
@@ -239,7 +239,7 @@
(compile-or-load-defgeneric ',fun-name))
(load-defgeneric ',fun-name ',lambda-list ,@initargs)
,@(mapcar #'expand-method-definition methods)
- #',fun-name))))
+ (fdefinition ',fun-name)))))
(defun compile-or-load-defgeneric (fun-name)
(proclaim-as-fun-name fun-name)
I have been getting STYLE-WARNINGs in SLIME for defgeneric forms
recently, so I asked about it on #lisp. Christophe said it's a genuine
bug. Here's some sample code that he came up with that triggers the
bug:
(compile nil '(lambda () (flet ((foo () nil)) (defgeneric foo ()))))
I got it on relatively simple forms, like:
> (compile nil '(lambda () (defgeneric bar (baz))))
; in: DEFGENERIC BAR
; (DEFGENERIC BAR (BAZ))
; --> PROGN
; ==>
; #'BAR
;
; caught STYLE-WARNING:
; undefined function: BAR
;
; caught STYLE-WARNING:
; This function is undefined:
; BAR
;
; compilation unit finished
; caught 2 STYLE-WARNING conditions
Anyway, he suggested patching line 242 of src/pcl/boot.lisp from
#',fun-name to (fdefinition ',fun-name). I'm going to try that and
test it and see what happens.
Zach
Today, Stefan Scholl <stesch@...> wrote:
>
[- backtrace]
Right. This is a problem in MD5's function I, which returns negative
values, but is declaimed to return (unsigned-byte 32) - which confused
SBCL's compiler. (I'm Cc:ing Kevin Rosenberg and Pierre Mai on this)
It is fixed in sb-md5 like this:
Instead of the original form
(defun i (x y z)
(declare (type ub32 x y z)
(optimize (speed 3) (safety 0) (space 0) (debug 0)))
#+cmu
(kernel:32bit-logical-xor y (kernel:32bit-logical-orc2 x z))
#-cmu
(logxor y (logorc2 x z)))
, we have this:
(defun i (x y z)
(declare (type ub32 x y z)
(optimize (speed 3) (safety 0) (space 0) (debug 0)))
#+cmu
(kernel:32bit-logical-xor y (kernel:32bit-logical-orc2 x z))
#-cmu
(ldb (byte 32 0) (logxor y (logorc2 x z))))
(Note the LDB form). LOGORC2 with positive arguments returns a
negative number; the LDB form gives us an (unsigned-byte 32) again -
it also seems to allow modular arithmetic magick to happen, but I
can't verify this claim, as the original form compiles to bad code,
anyway (-;
Thanks to Xophe for providing the necessary information on this bug;
and thanks to you for reporting it (so that it can be fixed for good),
--
Andreas Fuchs, <asf@...>, asf@..., antifuchs
> To be clear: the preliminary nature of the Unicode support that I
> mentioned in the subject of this e-mail is extreme, as there is in
> fact no support for Unicode at all. However, it is my hope (though
> unfortunately not my expectation :-) that the version on the branch is
> no more broken than CVS HEAD; I would like that confirmed or refuted
> before proceeding.))
as per our #lisp discussion, of a unicode feature request:
request: to play nicely with the FFI, aliens. more specifically,
access to the underlying bytes of a unicode string (as a lisp (array
(unsigned-byte 32)) probably) at least for passing to foreign
funtions. a function which converts an (foreign) array of the
appropriate type to a unicode string, with minimal fuss.
rational: well, maybe one application isn't enough, but Elephant (my
CL database) wants to be able to serialize / deserialize lisp values
to C char arrays efficiently. numbers et al are pretty easy. for
strings, it prefers to use the underlying data, which i can memcpy to
buffers on the C side, since i provide a type tag. i guess any
application interested in serializing and deserializing Lisp values
efficiently will benefit -- RPC, CL-STORE, etc.
precedent: currently 8-bit strings can be passed to C functions via
the :cstring type. even better, i can pass 8-bit strings as char
arrays directly. i can pass allegro (and i'm pretty sure lispworks)
16-bit strings to foreign functions as char arrays, getting (without
conversion) the underlying unicode, e.g. a char array of length (* 2
(length string)). i can pass such arrays to functions like
(excl:native-to-string buffer
:length length
:external-format :unicode)
which seem to be pretty fast.
just wanted a tiny slice of Christophe's mindshare.
thanks for the great work, B
.
Help! 11 nested errors. SB-KERNEL:*MAXIMUM-ERROR-DEPTH* exceeded.
*
On Mon, 2004-08-30 at 16:42, David Steuber wrote:
> On Aug 30, 2004, at 6:14 PM, William Harold Newman wrote:
>
> > Oh, and by the way, thinking ahead to when sf lets me release: Does
> > anyone have any clever ideas about crypto signatures or other
> > replacements for the old MD5 signatures I used to post? Now that MD5
> > collisions have been reported, MD5 signatures aren't all that
> > reassuring any more...
>
> I hadn't heard about this. Are valid files causing MD5 collisions?
This came up in a recent crypto seminar. From what I understand, they
basically just said that it was more theoretically possible than
previously thought or random chance would have you believe. In other
words, if you're designing a new protocol, you should probably look at
something other than MD5. If you're running a protocol that uses MD5,
don't panic yet. You're still pretty safe, but less safe than we
previously thought.
> How about using SHA? Isn't that 168 bit hash rather than 128?
As far as I know, SHA-1 is still good.
--
Dave Roberts <ldave@...>
I finally got around to reinstalling Xcode 1.5 and have successfully
built SBCL 0.8.14 on Darwin with the following changes to
src/runtime/Config.ppc-darwin:
$ cvs diff src/runtime/Config.ppc-darwin
Index: src/runtime/Config.ppc-darwin
===================================================================
RCS file: /cvsroot/sbcl/sbcl/src/runtime/Config.ppc-darwin,v
retrieving revision 1.5
diff -r1.5 Config.ppc-darwin
2c2
< CFLAGS = -Dppc -g -Wall -O2 -no-cpp-precomp
---
> CFLAGS = -Dppc -g -Wall -O3 -no-cpp-precomp
7c7
< CC = gcc3
---
> CC = gcc | http://sourceforge.net/p/sbcl/mailman/sbcl-devel/?viewmonth=200408&viewday=31 | CC-MAIN-2014-23 | refinedweb | 1,373 | 54.42 |
Viewer component which moves the camera in a plane.
More...
#include <Inventor/Win/viewers/SoWinWalkViewer.h>
The paradigm for this viewer is a walk-through of an architectural model. Its primary behavior is forward, backward, and left/right turning motion while maintaining a constant "eye level". It is also possible to stop and look around at the scene. The eye level plane can be disabled, allowing the viewer to proceed in the "look at" direction, as if on an escalator. The eye level plane can also be translated up and down - similar to an elevator.
Left Mouse: Walk mode. Click down and move up and down for fowards and backwards motion. Move right and left for turning. Speed increases exponentially with the distance from the mouse-down origin.
Middle Mouse or
Ctrl + Left Mouse: Translate up, down, left, and right.
Ctrl + Middle Mouse: Tilt the camera up/down and right/left. This allows you to look around while stopped.
.
Keypad '-': Decrease viewer speed by 0.5.
Keypad '+': Increase viewer speed by 2.
SoWinFullViewer, SoWinViewer, SoWinComponent, SoWinRenderArea, SoWinPlaneViewer, SoWinExaminerViewer, SoWinFlyViewer
Constructor which specifies the viewer type.
Please refer to SoWinViewer for a description of the viewer types.
Destructor.
Get viewer speed multiplier..
Set viewer speed multiplier (default is. | https://developer.openinventor.com/refmans/latest/RefManCpp/class_so_win_walk_viewer.html | CC-MAIN-2021-39 | refinedweb | 207 | 52.46 |
Section (7) keyrings
Name
keyrings — in-kernel key management and retention facility
DESCRIPTION
The Linux key-management facility is primarily a way for various kernel components to retain or cache security data, authentication keys, encryption keys, and other data in the kernel.
System call interfaces are provided so that user-space programs can manage those objects and also use the facility for their own purposes; see add_key(2), request_key(2), and keyctl(2).
A library and some user-space utilities are provided to allow access to the facility. See keyctl(1), keyctl(3), and keyutils(7) for more information.
Keys
A key has the following attributes:
- Serial number (ID)
This is a unique integer handle by which a key is referred to in system calls. The serial number is sometimes synonymously referred as the key ID. Programmatically, key serial numbers are represented using the type
key_serial_t.
- Type
A key_zsingle_quotesz_s type defines what sort of data can be held in the key, how the proposed content of the key will be parsed, and how the payload will be used.
There are a number of general-purpose types available, plus some specialist types defined by specific kernel components.
- Description (name)
The key description is a printable string that is used as the search term for the key (in conjunction with the key type) as well as a display name. During searches, the description may be partially matched or exactly matched.
- Payload (data)
The payload is the actual content of a key. This is usually set when a key is created, but it is possible for the kernel to upcall to user space to finish the instantiation of a key if that key wasn_zsingle_quotesz_t already known to the kernel when it was requested. For further details, see request_key(2).
A key_zsingle_quotesz_s payload can be read and updated if the key type supports it and if suitable permission is granted to the caller.
- Access rights
Much as files do, each key has an owning user ID, an owning group ID, and a security label. Each key also has a set of permissions, though there are more than for a normal UNIX file, and there is an additional category—possessor—beyond the usual user, group, and other (see
Possession, below).
Note that keys are quota controlled, since they require unswappable kernel memory. The owning user ID specifies whose quota is to be debited.
- Expiration time
Each key can have an expiration time set. When that time is reached, the key is marked as being expired and accesses to it fail with the error EKEYEXPIRED. If not deleted, updated, or replaced, then, after a set amount of time, an expired key is automatically removed (garbage collected) along with all links to it, and attempts to access the key fail with the error ENOKEY.
- Reference count
Each key has a reference count. Keys are referenced by keyrings, by currently active users, and by a process_zsingle_quotesz_s credentials. When the reference count reaches zero, the key is scheduled for garbage collection.
Key types
The kernel provides several basic types of key:
keyring
_zsingle_quotesz_ _zsingle_quotesz_ _zsingle_quotesz_ Keyrings are special keys which store a set of links to other keys (including other keyrings), analogous to a directory holding links to files. The main purpose of a keyring is to prevent other keys from being garbage collected because nothing refers to them.
Keyrings with descriptions (names) that begin with a period (_zsingle_quotesz_._zsingle_quotesz_) are reserved to the implementation.
user
This is a general-purpose key type. The key is kept entirely within kernel memory. The payload may be read and updated by user-space applications.
The payload for keys of this type is a blob of arbitrary data of up to 32,767 bytes.
The description may be any valid string, though it is preferred that it start with a colon-delimited prefix representing the service to which the key is of interest (for instance
afs:mykey).
logon(since Linux 3.3)
This key type is essentially the same as
user, but it does not provide reading (i.e., the keyctl(2)
KEYCTL_READoperation), meaning that the key payload is never visible from user space. This is suitable for storing username-password pairs that should not be readable from user space.
The description of a
logonkey muststart with a non-empty colon-delimited prefix whose purpose is to identify the service to which the key belongs. (Note that this differs from keys of the
usertype, where the inclusion of a prefix is recommended but is not enforced.)
big_key(since Linux 3.13)
This key type is similar to the
userkey type, but it may hold a payload of up to 1 MiB in size. This key type is useful for purposes such as holding Kerberos ticket caches.
The payload data may be stored in a tmpfs filesystem, rather than in kernel memory, if the data size exceeds the overhead of storing the data in the filesystem. (Storing the data in a filesystem requires filesystem structures to be allocated in the kernel. The size of these structures determines the size threshold above which the tmpfs storage method is used.) Since Linux 4.8, the payload data is encrypted when stored in tmpfs, thereby preventing it from being written unencrypted into swap space.
There are more specialized key types available also, but they aren_zsingle_quotesz_t discussed here because they aren_zsingle_quotesz_t intended for normal user-space use.
Key type names that begin with a period (_zsingle_quotesz_._zsingle_quotesz_) are reserved to the implementation..
Various operations (system calls) may be applied only to keyrings:
Adding
A key may be added to a keyring by system calls that create keys. This prevents the new key from being immediately deleted when the system call key matching a particular type and description.
See keyctl_clear(3), keyctl_link(3), keyctl_search(3), and keyctl_unlink(3) for more information.
Anchoring keys
To prevent a key from being garbage collected, it must be anchored to keep its reference count elevated when it is not in active use by the kernel.
Keyrings are used to anchor other keys: each link is a reference on a key. Note that keyrings themselves are just keys and are also subject to the same anchoring requirement to prevent them being garbage collected.
The kernel makes available a number of anchor keyrings. Note that some of these keyrings will be created only when first accessed.
- Process keyrings
Process credentials themselves reference keyrings with specific semantics. These keyrings are pinned as long as the set of credentials exists, which is usually as long as the process exists.
There are three keyrings with different inheritance/sharing rules: the session-keyring(7) (inherited and shared by all child processes), the process-keyring(7) (shared by all threads in a process) and the thread-keyring(7) (specific to a particular thread).
As an alternative to using the actual keyring IDs, in calls to add_key(2), keyctl(2), and request_key(2), the special keyring values
KEY_SPEC_SESSION_KEYRING,
KEY_SPEC_PROCESS_KEYRING, and
KEY_SPEC_THREAD_KEYRINGcan be used to refer to the caller_zsingle_quotesz_s own instances of these keyrings.
- User keyrings
Each UID known to the kernel has a record that contains two keyrings: the user-keyring(7) and the user-session-keyring(7). These exist for as long as the UID record in the kernel exists.
As an alternative to using the actual keyring IDs, in calls to add_key(2), keyctl(2), and request_key(2), the special keyring values
KEY_SPEC_USER_KEYRINGand
KEY_SPEC_USER_SESSION_KEYRINGcan be used to refer to the caller_zsingle_quotesz_s own instances of these keyrings.
A link to the user keyring is placed in a new session keyring by pam_keyinit(8) when a new login session is initiated.
- Persistent keyrings
There is a persistent-keyring(7) available to each UID known to the system. It may persist beyond the life of the UID record previously mentioned, but has an expiration time set such that it is automatically cleaned up after a set time. The persistent keyring permits, for example, cron(8) scripts to use credentials that are left in the persistent keyring after the user logs out.
Note that the expiration time of the persistent keyring is reset every time the persistent key is requested.
- Special keyrings
There are special keyrings owned by the kernel that can anchor keys for special purposes. An example of this is the system keyring used for holding encryption keys for module signature verification.
These special keyrings are usually closed to direct alteration by user space.
An originally planned group keyring, for storing keys
associated with each GID known to the kernel, is not so far
implemented, is unlikely to be implemented. Nevertheless,
the constant
KEY_SPEC_GROUP_KEYRING has been defined
for this keyring.
Possession
The concept of possession is important to understanding the keyrings security model. Whether a thread possesses a key is determined by the following rules:
(1)
Any key or keyring that does not grant
searchpermission to the caller is ignored in all the following rules.
(2)
A thread possesses its session-keyring(7), process-keyring(7), and thread-keyring(7) directly because those keyrings are referred (see request_key(2)), then it also possesses the requester_zsingle_quotesz_s keyrings as in rule (1) as if it were the requester.
Note that possession is not a fundamental property of a key, but must rather be calculated each time the key is needed.
Possession is designed to allow set-user-ID programs run from, say a user_zsingle_quotesz_s shell to access the user_zsingle_quotesz_s keys. Granting permissions to the key possessor while denying them to the key owner and group allows the prevention of access to keys on the basis of UID and GID matches.
When it creates the session keyring, pam_keyinit(8) adds a link to the user-keyring(7), thus making the user keyring and anything it contains possessed by default.
Access rights
Each key has the following security-related attributes:
The owning user ID
The ID of a group that is permitted to access the key
A security label
A permissions mask
The permissions mask contains four sets of rights. The first three sets are mutually exclusive. One and only one will be in force for a particular access check. In order of descending priority, these three sets are:
user
The set specifies the rights granted if the key_zsingle_quotesz_s user ID matches the caller_zsingle_quotesz_s filesystem user ID.
group
The set specifies the rights granted if the user ID didn_zsingle_quotesz_t match and the key_zsingle_quotesz_s group ID matches the caller_zsingle_quotesz_s filesystem GID or one of the caller_zsingle_quotesz_s supplementary group IDs.
other
The set specifies the rights granted if neither the key_zsingle_quotesz_s user ID nor group ID matched.
The fourth set of rights is:
possessor
The set specifies the rights granted if a key is determined to be possessed by the caller.
The complete set of rights for a key is the union of whichever of the first three sets is applicable plus the fourth set if the key is possessed.
The set of rights that may be granted in each of the four masks is as follows:
view
The attributes of the key may be read. This includes the type, description, and access rights (excluding the security label).
read
For a key: the payload of the key may be read. For a keyring: the list of serial numbers (keys) to which the keyring has links may be read.
write
The payload of the key may be updated and the key may be revoked. For a keyring, links may be added to or removed from the keyring, and the keyring may be cleared completely (all links are removed),
search
For a key (or a keyring): the key may be found by a search. For a keyring: keys and keyrings that are linked to by the keyring may be searched.
link
Links may be created from keyrings to the key. The initial link to a key that is established when the key is created doesn_zsingle_quotesz_t require this permission.
setattr
The ownership details and security label of the key may be changed, the key_zsingle_quotesz_s expiration time may be set, and the key may be revoked.
In addition to access rights, any active Linux Security Module (LSM) may prevent access to a key if its policy so dictates. A key may be given a security label or other attribute by the LSM; this label is retrievable via keyctl_get_security(3).
See keyctl_chown(3), keyctl_describe(3), keyctl_get_security(3), keyctl_setperm(3), and selinux(8) for more information.
Searching for keys
One of the key features of the Linux key-management facility is the ability to find a key that a process is retaining. The request_key(2) system call is the primary point of access for user-space applications to find a key. (Internally, the kernel has something similar available for use by internal components that make use of keys.)
The search algorithm works as follows:
(1)
The process keyrings are searched in the following order: the thread thread-keyring(7) if it exists, the process-keyring(7) if it exists, and then either the session-keyring(7) if it exists or the user-session-keyring(7) if that exists.
(2)
If the caller was a process that was invoked by the request_key(2) upcall mechanism, then the keyrings of the original caller of request_key(2) will be searched as well.
(3)
The search of a keyring tree is in breadth-first order: no valid matching key is found, then the first noted error state is returned; otherwise, an ENOKEY error is returned.
It is also possible to search a specific keyring, in which case only steps (3) to (6) apply.
See request_key(2) and keyctl_search(3) for more information.
On-demand key creation
If a key cannot be found, request_key(2) will, if
given a
callout_info argument,
create a new key and then upcall to user space to
instantiate the key. This allows keys to be created on an
as-needed basis.
Typically, this will involve the kernel creating a new process that executes the request-key(8) program, which will then execute the appropriate handler based on its configuration.
The handler is passed a special authorization key that allows it and only it to instantiate the new key. This is also used to permit searches performed by the handler program to also search the requester_zsingle_quotesz_s keyrings.
See request_key(2), keyctl_assume_authority(3), keyctl_instantiate(3), keyctl_negate(3), keyctl_reject(3), request-key(8), and request-key.conf(5) for more information.
/proc files
The kernel provides various
/proc files that expose information about
keys or define limits on key usage.
/proc/keys(since Linux 2.6.10)
This file exposes a list of the keys for which the reading thread has
viewpermission, providing various information about each key. The thread need not possess the key for it to be visible in this file.
The only keys included in the list are those that grant
viewpermission to the reading process (regardless of whether or not it possesses them). LSM security checks are still performed, and may filter out further keys that the process is not authorized to view.
An example of the data that one might see in this file (with the columns numbered for easy reference below) is the following:
(1) (2) (3)(4) (5) (6) (7) (8) (9) 009a2028 I--Q--- 1 perm 3f010000 1000 1000 user krb_ccache:primary: 12 1806c4ba I--Q--- 1 perm 3f010000 1000 1000 keyring _pid: 2 25d3a08f I--Q--- 1 perm 1f3f0000 1000 65534 keyring _uid_ses.1000: 1 28576bd8 I--Q--- 3 perm 3f010000 1000 1000 keyring _krb: 1 2c546d21 I--Q--- 190 perm 3f030000 1000 1000 keyring _ses: 2 30a4e0be I------ 4 2d 1f030000 1000 65534 keyring _persistent.1000: 1 32100fab I--Q--- 4 perm 1f3f0000 1000 65534 keyring _uid.1000: 2 32a387ea I--Q--- 1 perm 3f010000 1000 1000 keyring _pid: 2 3ce56aea I--Q--- 5 perm 3f030000 1000 1000 keyring _ses: 1
The fields shown in each line of this file are as follows:
- ID (1)
The ID (serial number) of the key, expressed in hexadecimal.
- Flags (2)
A set of flags describing the state of the key:
I
The key has been instantiated.
R
The key has been revoked.
D
The key is dead (i.e., the key type has been unregistered). (A key may be briefly in this state during garbage collection.)
Q
The key contributes to the user_zsingle_quotesz_s quota.
U
The key is under construction via a callback to user space; see request-key(2).
N
The key is negatively instantiated.
i
The key has been invalidated.
- Usage (3)
This is a count of the number of kernel credential structures that are pinning the key (approximately: the number of threads and open file references that refer to this key).
- Timeout (4)
The amount of time until the key will expire, expressed in human-readable form (weeks, days, hours, minutes, and seconds). The string
permhere means that the key is permanent (no timeout). The string
expdmeans that the key has already expired, but has not yet been garbage collected.
- Permissions (5)
The key permissions, expressed as four hexadecimal bytes containing, from left to right, the possessor, user, group, and other permissions. Within each byte, the permission bits are as follows:
- 0x01
view
- Ox02
read
- 0x04
write
- 0x08
search
- 0x10
link
- 0x20
setattr
- UID (6)
The user ID of the key owner.
- GID (7)
The group ID of the key. The value −1 here means that the key has no group ID; this can occur in certain circumstances for keys created by the kernel.
- Type (8)
The key type (user, keyring, etc.)
- Description (9)
The key description (name). This field contains descriptive information about the key. For most key types, it has the form
name[: extra−info]
The
namesubfield is the key_zsingle_quotesz_s description (name). The optional
extra−infofield provides some further information about the key. The information that appears here depends on the key type, as follows:
userand
logon
The size in bytes of the key payload (expressed in decimal).
keyring
The number of keys linked to the keyring, or the string
emptyif there are no keys linked to the keyring.
big_key
The payload size in bytes, followed either by the string
[file], if the key payload exceeds the threshold that means that the payload is stored in a (swappable) tmpfs(5) filesystem, or otherwise the string
[buff], indicating that the key is small enough to reside in kernel memory.
For the
.request_key_authkey type (authorization key; see request_key(2)), the description field has the form shown in the following example:
key:c9a9b19 pid:28880 ci:10
The three subfields are as follows:
key
The hexadecimal ID of the key being instantiated in the requesting program.
pid
The PID of the requesting program.
ci
The length of the callout data with which the requested key should be instantiated (i.e., the length of the payload associated with the authorization key).
/proc/key-users(since Linux 2.6.10)
This file lists various information for each user ID that has at least one key on the system. An example of the data that one might see in this file is the following:
0: 10 9/9 2/1000000 22/25000000 42: 9 9/9 8/200 106/20000 1000: 11 11/11 10/200 271/20000
The fields shown in each line are as follows:
uid
The user ID.
usage
This is a kernel-internal usage count for the kernel structure used to record key users.
nkeys/
nikeys
The total number of keys owned by the user, and the number of those keys that have been instantiated.
qnkeys/
maxkeys
The number of keys owned by the user, and the maximum number of keys that the user may own.
qnbytes/
maxbytes
The number of bytes consumed in payloads of the keys owned by this user, and the upper limit on the number of bytes in key payloads for that user.
/proc/sys/kernel/keys/gc_delay(since Linux 2.6.32)
The value in this file specifies the interval, in seconds, after which revoked and expired keys will be garbage collected. The purpose of having such an interval is so that there is a window of time where user space can see an error (respectively EKEYREVOKED and EKEYEXPIRED) that indicates what happened to the key.
The default value in this file is 300 (i.e., 5 minutes).
/proc/sys/kernel/keys/persistent_keyring_expiry(since Linux 3.13)
This file defines an interval, in seconds, to which the persistent keyring_zsingle_quotesz_s expiration timer is reset each time the keyring is accessed (via keyctl_get_persistent(3) or the keyctl(2)
KEYCTL_GET_PERSISTENToperation.)
The default value in this file is 259200 (i.e., 3 days).
The following files (which are writable by privileged processes) are used to enforce quotas on the number of keys and number of bytes of data that can be stored in key payloads:
/proc/sys/kernel/keys/maxbytes(since Linux 2.6.26)
This is the maximum number of bytes of data that a nonroot user can hold in the payloads of the keys owned by the user.
The default value in this file is 20,000.
/proc/sys/kernel/keys/maxkeys(since Linux 2.6.26)
This is the maximum number of keys that a nonroot user may own.
The default value in this file is 200.
/proc/sys/kernel/keys/root_maxbytes(since Linux 2.6.26)
This is the maximum number of bytes of data that the root user (UID 0 in the root user namespace) can hold in the payloads of the keys owned by root.
The default value in this file is 25,000,000 (20,000 before Linux 3.17).
/proc/sys/kernel/keys/root_maxkeys(since Linux 2.6.26)
This is the maximum number of keys that the root user (UID 0 in the root user namespace) may own.
The default value in this file is 1,000,000 (200 before Linux 3.17).
With respect to keyrings, note that each link in a keyring consumes 4 bytes of the keyring payload.
Users
The Linux key-management facility has a number of users and usages, but is not limited to those that already exist.
In-kernel users of this facility include:
- Network filesystems - DNS
The kernel uses the upcall mechanism provided by the keys to upcall to user space to do DNS lookups and then to cache the results.
- AF_RXRPC and kAFS - Authentication
The AF_RXRPC network protocol and the in-kernel AFS filesystem use keys to store the ticket needed to do secured or encrypted traffic. These are then looked up by network operations on AF_RXRPC and filesystem operations on kAFS.
- NFS - User ID mapping
The NFS filesystem uses keys to store mappings of foreign user IDs to local user IDs.
- CIFS - Password
The CIFS filesystem uses keys to store passwords for accessing remote shares.
- Module verification
The kernel build process can be made to cryptographically sign modules. That signature is then checked when a module is loaded.
User-space(8) scripts can use them.
SEE ALSO
keyctl(1), add_key(2), keyctl(2), request_key(2), keyctl(3), keyutils(7), persistent-keyring(7), process-keyring(7), session-keyring(7), thread-keyring(7), user-keyring(7), user-session-keyring(7), pam_keyinit(8), request-key(8)
The kernel source files
Documentation/crypto/asymmetric-keys.txt
and under
Documentation/security/keys (or, before
Linux 4.13, in the file
Documentation/security/keys.txt). | https://manpages.net/detail.php?name=keyrings | CC-MAIN-2022-21 | refinedweb | 3,883 | 61.16 |
- Author:
- limodou
- Posted:
- March 2, 2007
- Language:
- Python
- Version:
- Pre .96
- middleware format json
- Score:
- 1 (after 1 ratings)
Note: This is a testing middleware. This snippets may be changed frequently later.
What's it
Sometimes I thought thow to easy the output data into another format, except html format. One way, you can use decorator, just like:
@render_template(template='xxx') def viewfunc(request,...):
And the output data of viewfunc should be pure data. And if want to output json format, you should change the decorator to:
@json_response def viewfunc(request,...):
I think it's not difficult. But if we can make it easier? Of cause, using middleware.
So you can see the code of
process_response, it'll judge the response object first, if it's an instance of HttpResponse, then directly return it. If it's not, then get the format of requst, if it's
json format, then use json_response() to render the result.
How to setup
request.format? In
process_request you and see, if the
request.REQUEST has a
format (you can setup it in settings.py with FORMAT_STRING option), then the
request.format will be set as it. If there is not a such key, then the default will be
json. So in your view code, you can just return a python variable, this middleware will automatically render this python variable into json format data and return.
For 0.2 it support xml-rpc. But it's very different from common implementation. For server url, you just need put the same url as the normal url, for example:
Notice that the format is 'xmlrpc'. A text client program is:
from xmlrpclib import ServerProxy server = ServerProxy("", verbose=True) print server.booklist({'name':'limodou'})
And the method 'booklist' of server is useless, because the url has include the really view function, so you can use any name after
server. And for parameters of the method, you should use a dict, and this dict will automatically convert into request.POST item. For above example,
{'name':'limodou'}, you can visit it via
request.POST['name'] .
For
html format, you can register a
format_processor callable object in
request object. And middleware will use this callable object if the format is
html.
Intall
Because the view function may return non-HttpResponse object, so this middleware should be installed at the end of MIDDLEWARE_CLASSES sections, so that the
process_response of this middleware can be invoked at the first time before others middlewares.
And I also think this mechanism can be extended later, for example support xml-rpc, template render later, etc, but I have not implemented them, just a thought.
Options
FORMAT_STRING used for specify the key name of format variable pair in QUERY_STRING or POST data, if you don't set it in settings.py, default is 'format'. DEFAYLT_FORMAT used for default format, if you don't set it in settings.py, default is 'json'.
Reference
Snippets 8 ajax protocol for data for json_response
Please login first before commenting. | https://djangosnippets.org/snippets/71/ | CC-MAIN-2017-04 | refinedweb | 497 | 66.44 |
Why Microsoft and Google are Cleaning Up With AJAX 443
OSS_ilation writes . Scalix is using AJAX in Scalix Web Access (SWA), a Web-delivered, e-mail application. AJAX enables advanced features like drag 'n drop, dropdown menus and faster performance capabilities, which are now making their way into Web applications, she said. These kinds of capabilities represent a significant leap in the advancement of Web apps."
Yet another "summary" lifted directly... (Score:2)
So it has DHTML/HTML and HTML? Wow, three HTMLs! Buzzwords ho!
Re:Yet another "summary" lifted directly... (Score:2)
Re:Yet another "summary" lifted directly... (Score:2, Insightful)
Not only that, but apparently, only Ajax creates Drag'n'Drop and drop down menus. You learn something new everyday! Here I thought that it was Javascript and CSS that did that.
The sad part of throwing buzzwords around is that people latch on to them and have no idea what they entail. "I want to use some ajax for my website. Do it." ??? Somebody just realized that they can use the XmlHttpRequest object and some server-side processing to save users from the need to refresh or load a whole new page. The rest
Re:Yet another "summary" lifted directly... (Score:2)
Sorry Microsoft (Score:2)
real reason why (Score:3, Insightful)
and charge "subscription fees" for it too.
Re:real reason why (Score:5, Informative)
Saying that AJAX will allow one to deliver desktop quality applications is like saying central heating will turn a mobile home into a mansion.
-matthew
Re:real reason why (Score:3, Interesting)
Re:real reason why (Score:3, Informative)
Re:real reason why (Score:3, Interesting)
Of course not. It delivers a desktop-LIKE feel to certain web applications. However, that is NOT the only point of it. Other advantages include:
#1 Immediate deployment - You can distr
Hype, Hype, Hype (Score:5, Interesting)
The truth is that the stuff we've seen in AJAX so far is nothing. I don't know about anyone else, but I've used it in regular webapps as nothing more than an interface enhancement. People don't even really notice the fact that the web pages work much smoother.
That being said, there's a massive untapped potential in this technology. I've got demos of Video Games in AJAX, as well as a full Desktop. I tried to get Google interested in the video games concept, but I'm afraid they ignored my communication.
Re:Hype, Hype, Hype (Score:2)
Re:Hype, Hype, Hype (Score:5, Insightful)
Re:Hype, Hype, Hype (Score:2)
Not everybody has JavaScript and the DOM, and it was never a crucial factor anyway. It's perfectly reasonable to write web applications that use AJAX when the user has the necessary technology available, and fall back to traditional operation when the user doesn't havethe necessary technology available. In fact, it's a legal requirement for many developers.
The big change is that big names like Google are starti
Re:Hype, Hype, Hype (Score:2)
Am I missing something? I've always thought that was part of why people didn't do this before - amount of coding needed to implement a simple app is vastly more than with something like
Re:Hype, Hype, Hype (Score:2)
Precisely. As I said in another post, the XMLHttpRequest is just icing on the cake.
I've always thought that was part of why people didn't do this before - amount of coding needed to implement a simple app is vastly more than with something like
It is true. However, the real reason
Thing is... i don't care (Score:3, Funny)
Do I care that I can get a full desktop application on the web? I don't because I already have one and free too. Video games? Nope, got'em and they're better too.
Do something I don't have. If it can get me laid all the better.
Re:Thing is... i don't care (Score:3, Insightful)
Actually, there's a huge market of "casual gamers" (a new term used to describe people who like to play web games and the like) that companies are having the hardest time reaching. One of the major obstacles in their way is the fact that these gamers are uninterested in installing Flash, Java, or any other plugin. If they don't get instant gratification, many of them simply leave. This means that all those super-APIs that companies like WildTangent and Unity
Re:Hype, Hype, Hype (Score:3, Interesting)
Funny thing is... (Score:5, Interesting)
Microsoft invented the XmlHttpRequest functionality, AND they've been using AJAX (before that's what it was called) in Outlook Web Access (OWA) for years. Nobody else in the company seemed to have caught on to it though.
Re:Funny thing is... (Score:2)
Re:Funny thing is... (Score:5, Funny)
Re:Funny thing is... (Score:5, Funny)
Re:Funny thing is... (Score:2)
Re:Funny thing is... (Score:5, Insightful)
Re:Funny thing is... (Score:2)
Re:Funny thing is... (Score:2)
Re:Funny thing is... (Score:4, Insightful)
Which comes as quite a surprise to everyone that's been doing the following since the mid 90s.
Create a frame driven page with one main frame and one tiny frame.
Whenever you want to perform an asynchronous action:
Load a page in to the small frame.
Have that page call an onload event that accesses a function in large frame.
All "AJAX" (which is just a dressing up of what was already there) does is use the request object which is just a cleaner way of what people have been doing for about ten years anyway.
There were also tricks for doing it with Java. But Microsoft had to supply an alternate mechanism because someone took Java out of the dominant web browser for a while. Can't think who might have done that though.
Re:Funny thing is... (Score:2)
But I agree with you, AJAX definately does the same thing in a much better and cleaner way.
-Rick
Re:Funny thing is... (Score:2)
Of course this makes sense as the primary purpose of exchange is to lock people into windows both on the server and client.
Re:Funny thing is... (Score:4, Informative)
Microsoft invented XMLHttpRequest because before that people were using tiny little java applets to accomplish the same thing. In fact the original version of remote scripting in IE also used a java applet. When MS decided that java was the enemy they figured a way to do it without java.
I for one see no need for AJAX, it's better to just write java applications or even applets (or thinlets).
Re:Funny thing is... (Score:2)
Re:Funny thing is... (Score:2)
Of course, we should probably not talk about this, as it pretty much destroys the typical slashdot
Microsoft pursuing it??? (Score:2, Informative)
Mod parent up (Score:2)
There's something very familiar about all this (Score:4, Interesting)
Anyway. Let's not fill this page up with 'Dupe' complaints. Macromedia are probably gonna have to re-think things (in the new Adobe environment, of course) since they were convinced that Flash would be the vehicle of choice in developing what they call Rich Web Applications. They'll now have to sell it on the basis that you can get a hell of a lot of functionality out of very few lines of Flex code.
It's gonna be interesting.
Well, duh (Score:4, Insightful)
What's next, summary teaching us what programming languages or computer is?
Bah, this is slightly annoying.
Re:Well, duh (Score:2)
Re:Well, duh (Score:2)
vs2005 (Score:2)
I've never used visual studio for web stuff, and I don't know if it can be used to do stuff like that without getting tied into asp or whatever, but it was impressive what they can do with it.
So what you're saying is... (Score:2, Insightful)
AJAX brings together some hot properties, Javascript, HTML/DHTML and HTML
So what you're trying to say is "AJAX brings together Javascript."
What my dog hears (Score:5, Funny)
"blah blah blah AJAX, blah blah blahblah AJAX!!1!. blahblahblah Google blah AJAX, blah Microsoft sux."
Re:What my dog hears (Score:3, Informative)
Wait, it's all a matter of time (Score:2, Insightful)
What i would like to see is the US goverment and other countries to force them to adopt clean, industry defined standards like the XML, HTML,CSS, AJAX and not an assimilated badly digest crappy way of doing things that breaks the WEB. They should be more humble since the WEB has given a good chance for all companies to develop and sell new products, and microsoft is no exception here, aldo they have wakeup lately to this.
The Big Question (Score:4, Insightful)
Who will be the first to try and patent something "using AJAX..."?
Incoming data (Score:5, Interesting)
This turns AJAX into more of an actual internet protocol, and I think it would really improve things.
Re:Incoming data (Score:2, Insightful)
I acknowledge that having all their users hitting gmail every 60 seconds may tax their system a bit more than they would like. I'm not sure I'm comfortable with having some sort of open port on my computer that accepts new mail messages. Isn't this the first step for a new kind of worms/viruses for our computers?
Re:Incoming data (Score:3, Insightful)
That would be nice but is unlikely to be a widespread solution. Huge numbers of ISPs do not allow incoming connections, many NAT boxes are outgoing only (there are some hacks to allow incoming connections but they aren't commonly implemented for corporate desktops), etc. IPv6 would be helpful in a move toward this kind of scenario..
But the best case right now is persistent conne
Re:Incoming data (Score:5, Informative)
Try this:
#!/usr/bin/perl
print "Content-type: text/plain\n\n";
$|=1;
(print '.'),sleep 1 while 1;
2. With XMLHttpRequest:
var req = new XMLHttpRequest();
req.multipart=1;
and the server-side part uses content-type: multipart/x-mixed-replace
Subscription-based software (Score:2)
What you want to learn, then, is RPC or CORBA or any of its variants. You may already realize this, but you've simply described a typical client-server application.
I think it would really improve things.
Maybe. Maybe not. Do you like the idea of subscription-based software? That's where AJAX inevitably leads.
What AJAX p
*rolls eyes* (Score:2)
How is this any different from Java Applets? (Score:4, Insightful)
Sad that AJAX is the only way (Score:3, Interesting)
If nothing else, if we want to download clients and run them in the browser, having them talk to a backend server for the data, why not get a more appropriate language? Java would be perfect if Sun weren't a bunch of asshats, but just because it won't ever be truly Free or cross platform is no reason to reject other candidates. Tcl/Tk has had a fully sandboxed browser plugin for a decade and it is 100% Free Software. It runs on every known platform where IE or Mozilla runs and could be ported anywhere else needed. I'm sure it isn't the only one. Or do we continue shoehorning everything into html?
AJAX: Almost Just like an Application! (Score:5, Interesting)
It is funny to watch technology reinvent itself in fast-forward.
I work for a company that did AJAX long before it was called AJAX. And now that it is the next hot thing they are moving away from it. Why? Because they already learned the lesson that everyone else is about to figure out: AJAX is a b*stard to code and maintain. It is easier to write a client-server application in a traditional language and web deploy it than to write this crazy JavaScript + XML + HTML + DHTML + CSS stuff.
Java and
Re:AJAX: Almost Just like an Application! (Score:3, Insightful)
Yes, it's still a b*tch to code and maintain a browser, and Microsoft still hasn't done it -- not really -- but a lot of
AJAX = Suckjax (Score:3, Insightful)
Sure, browsers work on every platform, and AJAX apps don't need a download, that's great. But the same thing could be done with java if everyone had a JVM, or anything else.
AJAXs means reinventing the GUI, only with a more difficult to use, hacked together API
Re:AJAX = Suckjax (Score:2)
You could do the same stuff with XUL also, if everybody ran a Mozilla based browser, or if XUL became a standard and other vendors started supporting it...
Re:AJAX = Suckjax (Score:2, Insightful)
This can be unwanted behavior in some instances, but it's nice to be able to hook people into a system without an install disk or download.
AJAX is providing that, but a desktop application is a lot nicer to work with. Plug in some remoting and you have a NICE client. Unfortunately, remoting does not seem to be the way that computing is going.
My two cents.
Re:AJAX = Suckjax (Score:5, Insightful)
See, Java *could* do this. Sure. I'll give you that. In fact, most people until recently HAD a JVM in their browser. Java applets should have taken over the world.
Why didn't they? Why is AJAX getting all the press Java should have gotten?
Me, I simply look at 2 things: gmail, and Google Maps. They both work, work well, and work better than anything else. Apparently millions of people agree with me, just look at the buzz around them. Are we all brainwashed by Google? Could these have been done as a Java applet? Maybe.
The fact is, they WEREN'T. Or if they were, no one used them. The way I see it, AJAX is the end all and be all (for now) because it WORKS. Maybe Java is just too slow (and here come a dozen posts claiming it's not). Maybe the wait time to load a JVM into memory, plus download an applet is too long. I don't know why Java hasn't been used, but it's not like no one's thought of it before.
I get the hype, myself. It means that I can sit at virtually any computer, type a URL, and BAM! Instant application. I've yet to see another technology that works this well.
Re:AJAX = Suckjax (Score:3, Insightful)
1: thanks to the sun vs ms issue developing browser applets that will run without 3rd party software required working in a horriblly old version of java and you couldn't even use the swing classes without downloading them at applet load time.
2: also a lot of java applets wouldn't work if you were browsing from behind a http proxy as they used other protocols to talk back.
3: you can't exactly call awt or swing nice to program for ;
Re:AJAX = Suckjax (Score:3, Insightful)
Google maps is such a great example. You go there, it works, and it's a great interface. It's not as nice as google earth, but I don't want a client/server map ap
Be Careful (Score:3, Informative)
Of course, you usually don't know if a page is using XMLHTTPRequest in a hidden frame unless you look really hard, so I guess the bottom line is never type anything on a web page you don't want the world to see. On the other hand, AFAIK (which doesn't mean much) this hasn't shown up in practice, so maybe it isn't that big a deal.
Re:Be Careful (Score:2)
AJAX has accessibility problems. (Score:2)
Cleaning up with Ajax (Score:2)
From the department of redundancy department: (Score:2)
Translation: Asynchronous Javascript and (x)(ht)ml bring together some hot properties: Javascript, HTML and HTML with Javascript, and HTML, according to Julie Hanna Farris.
Clean Machine? (Score:2)
Maybe the writer means "tidying up". Like hiding software problems on a backend that is maintained without the users noticing upgrades or details of failures. Maybe just putting a thin-client cross-platfor
AJAJSON (Score:2)
I prefer AJASON - that is, replace XML with JavaScript Object Notation or, serialized javasacript objects. It parses much faster and easier than XML.
I have a JSON class for PHP which lets me serialze any PHP object into JSON. I can send the JSON to the client, eval() it with javascript and viola, my PHP object is now a JavaScript object.
The only problem with it is that there isn't an object serializer built into JavaScript (that I'm aware), so sending data back to the PHP script isn't as easy. I haven't
List of websites using Ajax (Score:5, Informative)
Why Java? (Score:4, Insightful)
Netscape, LiveScript and JavaScript (sic) (Score:2, Interesting)
JavaSript is not related in any way to Java. It was a cold day in November 1995 when Bill Joy, in contract negotiations between Sun and Netscape, told them "sure, go ahead and use the name JavaScript."
Sort of funny when you think about the current protection of the Java trademark, or whatever it is.
p.s. yes I was there
Accessibility (Score:5, Informative)
So you can't use it in software that might be sold to, for example US Government customers -- no national laboratories, no NASA, etc.
UNLESS -- you write your own accessibility aids and write your own UI framework that compiles into both an AJAX version and a web accessible version.
That's a tall order. However, there is help.
You can write your web pages in HTML with XForms and let XForms handle the dynamic page aspects, and then offer up the HTML+XForms as the accessible version. (See the DHTML Accessibility Roadmap [w3.org].)
Everything that the AJAX cloud of applications does with the XMLHTTP object and updating the DOM on the fly to display choices can be done with XForms.
Then, you can use one of these mechanisms to convert the server-side XHTML+XForms file into AJAX:
If you want to serve up the XHTML+XForms directly, and not rely on any AJAX technologies, try these:
So, try them out, and see how much easier it is to write accessible code and properly separate your data and presentation layers when you use XHTML, CSS, and XForms. Then, choose a middleware solution or a browser-based solution and go forward knowing that you can meet architectural requirements without getting bogged down in JavaScript toolkits.
Re:Accessibility (Score:5, Informative)
This irritates me. This is not true. And yet moderators without a clue have pushed it up to +5, Informative. And any newbie web developers who read this are going to think that they have to choose between AJAX and accessibility. Some of them are going to choose AJAX and not bother with accessibility. If your post had been down at -1, Wrong, they might not get that impression, and would go on to write accessible AJAX web applications.
You don't have to choose. You don't have to write "UI frameworks" that you have to "compile". That's nonsense. What you do is you write the non-AJAX version, and then you add the AJAX as an optional extra. When people have Javascript turned off, they get the basic version seamlessly. Perfectly accessible, none of the complicated nonsense you claim is necessary.
Please stop propogating this myth. If you want to promote your favourite technologies, then by all means do so, but don't lie about the alternatives to make them look bad.
Section 508 Compliance (Score:3, Interesting)
I don't think there are too many screen readers our there that can handle AJAX quite yet.
Hmm.. screen reader built onto Firefox? Notices when stuff changes. I could build that. Sweet.
Re:Ditch Javascript (Score:5, Insightful)
Re:Ditch Javascript (Score:4, Insightful)
Re:Ditch Javascript (Score:2)
-Rick
Re:Ditch Javascript (Score:3, Informative)
Re:Ditch Javascript (Score:5, Insightful)
You say that we're "trying to force documents to be applications", and I agree. However, with HTML we're also trying to force applications to become documents. We need access to both layout models, because the Web contains both documents and applications. XUL provides this. For example, the XUL menus in the FireFox "chrome" are freeform, and the main part of the box layout is a container for freeform HTML, while the rest of the chrome follows a box model.
Even "document" pages usually contain some "application" elements; navigation buttons, or a search box, for example. The page should be treated as an application containing content, and not forced to hold both the framework and the content in one file, with the same layout model.
AAX??? (Score:3, Insightful)
With out Javascript AJAX doesn't work.
-Rick
Re:AAX??? (Score:4, Informative)
Re:AAX??? (Score:2, Interesting)
Re:AAX??? (Score:2)
Python can't run untrusted code. There used to be a way [python.org] of doing so, but it was shown to be fundamentally insecure and abandoned. Nobody has found it necessary enough to replace it with something more secure. A web browser called Grail [sourceforge.net] was written in Python years ago that could run Python scripts, but obviously that's insecure too as it relied on the above mentioned functionality.
Re:AAX??? (Score:3, Informative)
Existing APAX solutions use py2js so you write client code in python which is translated to javascript automagically.
See, e.g., Crackajax (or use plain py2js if you don't want a big framework).
Re:AAX??? (Score:2)
With out Javascript AJAX doesn't work.
Basically right unless you're willing to limit your client base to people with browsers that support other embedded languages (e.g Grail had embedded Python--but you couldn't do APAX with it as it lacked the AAX support).
But you can do your development in other languages and then convert to Javascript. I've known a few people who wrote simulated client-side Python and used py2js successfully. Still requires Javascript, but you can do your dev work
A better web page scripting language? (Score:5, Insightful)
I'm not sure if your comment was intended as a pointed jab at the buzzword status of AJAX or a serious suggestion that JavaScript is crappy, but I'm assuming the second.
There are some things about JavaScript that are really annoying. First, the object orientation seems very odd. It is well-rooted in the language, but it is quite annoying not to have real object namespaces (yes, you can use closures, but they're annoying and kludgy), real constructors, and that sort of stuff. It's almost as bad as Perl's hash + namespace = object idea, and worse in some ways.
What I'd like, I guess, is a language that is very similar to JavaScript, but has a real object-oriented system and better support for things like loading code dynamically. It's clear that JavaScript or some future variant of it is finally being used the right way--to make pages dynamic instead of just annoying--but right now it's very cumbersome. Loading Gmail, for example, is quite slow, because it (IIRC) downloads a huge chunk of code at the beginning. Perhaps someone (maybe me) could write a wrapper system in JavaScript that uses XmlHttpRequest to load JavaScript code on demand. But some sort of modular functionality ought to be officially added to JavaScript, before it's too late and we end up with the next "___ Wars"... this time it will be the fight between JavaScript frameworks.
Re:A better web page scripting language? (Score:3, Interesting)
JavaScript is object-oriented. You only call it "odd" because it's not the usual C++/Java/whatever object orientation you are used to, it's prototype-based like Self [wikipedia.org], not class-based. That's no less of a "real" object-oriented language.
Don't blame JavaScript for the shortcomings of GMail, it's simple to dynamically load JavaScript on demand. There's a lot of really screwed up stuff about
Re:Ditch Javascript - AVBSAX (Score:3, Interesting)
Actually, my biggest problem with Javascript was (is?) trying to understand all the little (or sometimes not-so-little) implementation differences, and write cross-browser script that didn't turn into zillions of checks:
Re:Ditch Javascript (Score:2)
Java != Javascript
Re:What is ? (Score:3, Informative)
-Rick
Re:What is ? (Score:2)
-Rick
Re:What is ? (Score:2)
""Atlas" is not merely another implementation of AJAX. Instead, "Atlas" extends the AJAX concept in two significant ways. First, the "Atlas" client script libraries dramatically simplify the tasks of creating rich UIs and remote procedures calls by providing you with true object-oriented APIs and components for Atlas development. Second, "Atlas" extends the AJAX concept by providing a rich, integrated server development platform in ASP.NET 2.0. The "Atlas" server components include ASP.N
Re:What is ? (Score:3, Informative)
Here's an overview. [asp.net]
Re:What is ? (Score:2)
I believe that it is called jscript.net
And why is it tied to JavaScript?
1. It runs client side.
2. As far as I know their isn't a
3. Why not? JavaScript is used just about everywhere.
Re:What is ? (Score:2)
Re:Where? ajaxian.com (Score:4, Informative)
Re:Where? ajaxian.com (Score:3, Interesting)
1. A couple of guys from "The Iliad".
2. The name of a bunch of cars from the early part of the 20th century.
3. A major Dutch soccer team.
4. A toilet and bath cleaner.
5. A town in Ontario.
6. A character from the movie "Flash Gordon".
7. A "web technology" whose component parts have existed for ages, but marketing people believe makes them sound smart and "cutting edge".
8. Many other things.
It is NOT, and has NEVER BEEN, a mere "window cleaner"! Good god, man!
Re:Where? (Score:2)
Re:"Google uses it, and Microsoft is pursuing it"? (Score:2, Interesting)
Indeed. This story is absolutely unbelievable revisionist history and nonsense. I've written about my feelings [yafla.com] of AJAX (in fact I was honored to see that an AC already referenced it in this thread), and this article is exactly what pisses me off about the new-to-web-apps "AJAX" converts. This messaging expert is yet another dumb-ass trying to get in on the Web 2.0 action to earn some VC funding. She even used the word "paradigm" to really put up the fla
Re:Platform independent? (Score:5, Interesting)
It's almost platform independent. The main problem which primarily afflicts Microsoft's use of AJAX, such as in Outlook Web, is the way that the "A" in AJAX is "started".
Basically to initiate an HTTP asynchronous request, the Javascript code must create a special object which encapsulates the request and communication. Althought the interface and use of this object is for the most part standard, the way in which it is initially created is not.
So if you want a platform independent AJAX app, you pretty much need a bit of code which does things the Microsoft way when the standard ways don't work. Like:
Now, Microsoft-written applications which use AJAX only try the MS ActiveX methods, and not the standard XMLHttpRequest() function. Thus, although most of the application could have worked in any browser, this simple omission by Microsoft insures it only works under IE (and locks you into their technology).
It should also be noted that AJAX is a methodology and not a strictly defined API. For instance most AJAX apps rely heavily on the DOM API, which Microsoft mostly but not entirely adheres to. So there's lots of things that can cause platform independence problems if not coded carefully. | https://developers.slashdot.org/story/05/11/10/2010245/why-microsoft-and-google-are-cleaning-up-with-ajax | CC-MAIN-2016-36 | refinedweb | 4,688 | 64.3 |
LIN.
Contents
- Pipelines and Method Chaining
- Debugging Unbound
- LINQPad: Visualizing Your Data
- Conclusion
- Footnotes
This article is for .NET developers who have not used LINQ, LINQ users who have not used LINQPad, and LINQPad users who have not used LINQPad Visualizer. (Some familiarity with LINQ is assumed though.) I take you beyond the basic concept of a LINQ query and reveal the simple techniques for creating LINQ chains, softly introducing the notion with analogous ideas in Unix and in .NET programming in general. As soon as you think about chaining, though, you have to be concerned about how to keep the “stuff” in the middle accessible, to keep it from becoming an opaque black box. I show you how to do this both in Visual Studio with a simple extension method and in LINQPad with its powerful Dump method. The accompanying code archive[1] lets you experiment with everything discussed as you read along.
Pipelines and Method Chaining
The Unix Origins: Command Pipelining
The concept of a software pipeline is not a C# innovation. In fact, it is not new at all. It is not even recent. Pipelines appeared in 1972 in Unix, following on Douglas McIlroy’s 1964 proposal. This example of a Unix pipeline (from the Wikipedia entry on pipelines) implements a simplistic, command-line spellchecker on Unix/Linux systems. This pipeline runs 7 independent applications. Each application ties its output to the input of the next application using the pipe (|) symbol. To tie the whole package together, the first application, curl, obtains its input from the web page supplied as an argument. The last application, less, feeds its output to the console where the user may view it. (The steps in between massage the data to identify, isolate, and sort individual words, then compare them to a reference dictionary.)
curl "" | \
sed 's/[^a-zA-Z ]/ /g' | \
tr 'A-Z ' 'a-z\n' | \
grep '[a-z]' | \
sort -u | \
comm -23 - /usr/share/dict/words | \
less
.NET Equivalent: Method Chaining
Fast forward and shift to the .NET environment. This next example illustrates method chaining, the code-level equivalent of application pipelining. You have almost certainly used method chaining but may not have seen the term before. Here you start with a whimsical string, swap parts, shrink it, chop it up, and finally write out its pieces. (This code is available in the StringMethodChaining project of the ChainingAndDebugging solution (VS2010) in the accompanying code archive.)
using System.Linq;
namespace StringMethodChaining
{
class Program
{
static void Main(string[] args)
{
"aardvarks AND antelopes AND hippopotami"
.Replace("antelopes", "centipedes")
.Substring("aardvarks".Length)
.ToLower()
.Trim()
.Split(new[] {"and"}, StringSplitOptions.None)
.ToList()
.ForEach(item => Console.WriteLine(item.Trim()));
Console.ReadLine();
}
}
}
The basic principle to observe from these cosmetically different examples is the same: connect building blocks together where the output type of one corresponds to the input type of the next. In the Unix/Linux case, command line applications typically use a text stream for input and generate a text stream for output. This allows you to connect any two components together. The C# case is rather more complicated on the surface because there is no “universal” input/output format. Rather you are free to define methods with arbitrary types for input and output to suit your needs. To create a pipeline, then, it is a simple matter of impedance matching[2]. Here are the methods used, explicitly showing their inputs and outputs. Note how the output of each method in the chain matches the input requirement of the next method:
Note that you could write the same code without chaining, by introducing a slew of temporary variables.
string s1 = "aardvarks AND antelopes AND hippopotami";
string s2 = s1.Replace("antelopes", "centipedes");
string s3 = s2.Substring("aardvarks".Length);
string s4 = s3.ToLower();
string s5 = s4.Trim();
string[] strings = s5.Split(new[] { "and" }, StringSplitOptions.None);
List<string> list = strings.ToList();
list.ForEach(item => Console.WriteLine(item.Trim()));
Console.ReadLine();
This code does the same thing but takes significantly more effort to comprehend. The previous code you could likely understand almost with a glance. Here, you have to stare at it for a bit. So is method chaining always better? Actually, no. If you want to debug this code and view an intermediate value, you cannot do it with pure method chaining. If you set a breakpoint and then execute, Figure 1 shows what you see when you land on the breakpoint.
Figure 1 Breakpoint on a Method Chain
The entire chain is considered an indivisible unit! That is because, well, it is. Remember that Visual Studio breaks on statements and any single method in the chain is not a statement in and of itself. From this breakpoint if you use the step over command, you advance to line 19, the Console.ReadLine. However, if you instead use the step into command, you advance to the Console.WriteLine on line 18, because though deeply embedded, Console.WriteLine is a full-fledged statement. Unfortunately, if you need to see any intermediate value other than item in that line, you have to rewrite the code to introduce separate statements with temporary variables.
The principled, high-minded designer in you is, I am sure, repulsed by such an unsatisfactory kludge. Fear not, for there is a better way. But first, take a look at method chaining as it applies to LINQ.
.NET Pipelines with LINQ
The vast majority of articles on LINQ[3] introduce it with a simple query like this:
var lowNums =
from n in numbers
where n < 5
select n;
They explain that you specify your data source with a from clause, filter it with a where clause, and project it to the desired result with a select clause. They then go on to show how to use that query typically with a foreach loop like this:
foreach (var x in lowNums)
{
Console.WriteLine(x);
}
Even articles discussing more advanced LINQ methods typically exhibit a simple example, which is fine and necessary, but almost always stop after proclaiming that you conclude your LINQ expression with a Select or GroupBy. Even the useful and venerable 101 LINQ Samples page from Microsoft shows only the simplest examples, yielding no clue about method chaining.
LINQ Queries may appear in one of two forms; the query above is written using query syntax. The next example uses method syntax (also called lambda syntax). The two forms are exactly equivalent (where they overlap), and performance is also exactly the same because, during compilation, query syntax expressions are converted to lambda syntax internally. However, the lambda syntax is richer, particularly in C#[4].
When it comes to method chaining, I prefer to use lambda syntax. This next example uses it to illustrate a real-world LINQ example. This code comes straight out of the HostSwitcher application that I built and discussed at length in my recent article, Creating Tray Applications in .NET: A Practical Guide. HostSwitcher lets you re-route entries in your hosts file with a single click on the context menu attached to the icon in the system tray. The application is heavily LINQ-centric. One key portion of code takes your hosts file (read into an array) and uses LINQ to convert it to a dictionary that is later consumed by other segments of the code to generate a context menu among other uses. The CreateMap method generates the projectDict dictionary:
{
projectDict =
hostFileData
.Select(line => new { line, m = FilteringRegex.Match(line) })
.Where(item => item.m.Success)
.Select(item => new
{
item.line,
item.m,
project = item.m.Groups[ProjectSubGroupIndex].ToString().Trim(),
serverGroup = item.m.Groups[ServerGroupSubGroupIndex].ToString().Trim()
})
.GroupBy(item => new { item.project, item.serverGroup }, item => item.line)
.GroupBy(projAndServer => projAndServer.Key.project)
.ToDictionary(
project => project.Key,
project => project.Select(item =>
new ServerGroup
{
Name = item.Key.serverGroup,
EnabledCount = item.Where(line => !IsCommented(line)).Count(),
DisabledCount = item.Where(line => IsCommented(line)).Count()
}
)
);
}
To understand method chaining with LINQ, consider the inputs and outputs of the LINQ methods in the above chain:
Observe that LINQ has a great affinity for IEnumerable<T> objects; many LINQ methods fit this footprint:
Therefore, LINQ naturally lends itself to method chaining! Kris Thompson’s blog contains a great reference of LINQ operators (LINQ Query Operators Cheatsheet), identifying the return values of each so you can see at a glance which ones lend themselves to LINQ chaining. Many of them—with IEnumerable<T> as both input and output—may be used at any position in a chain. But since all (?almost all) LINQ operators use IEnumerable<T> as input, all of them may be used at the end of the chain[5].
Breakpoints in a LINQ Chain
The final point here is that LINQ method chaining is different than normal method chaining with respect to stepping in the debugger. Though the entire chain is marked with a single breakpoint, once you reach the breakpoint you can step through a LINQ query. Figure 2 shows the scenario after having pressed the step over button a number of times. At that point, you can inspect local variables as on any breakpoint with the Immediate window, tooltips, etc. It is not that the methods are special in any sense as compared to the string methods you saw earlier. Rather, it is the method arguments that are different. A LINQ method typically takes a lambda expression, which is an anonymous function composed of expressions and statements. Thus, you may step onto these statements with the debugger as well.
Figure 2 Breakpoint on a LINQ Chain
Actually setting breakpoints on parts of the LINQ chain, however, is quirky. If you use the shortcut key (F9) pressing it once sets a breakpoint on the entire chain. Pressing it again, removes it. Repeat ad infinitum. If, instead you use the mouse to set a breakpoint by clicking on the grey channel at the left edge of the window the first click will perform the same (setting a breakpoint on the entire chain) independent of which line in the chain your mouse is adjacent to. I find, though, that if I stubbornly click in the channel adjacent to different lines within the chain I can sometimes get a breakpoint to stick.
Thus far you have seen how attempts at debugging method chains are useful to a degree, but still unsatisfactory. The next section shows you some powerful remedies.
Debugging Unbound
Simple Debugging: Injecting a NOP
The first technique to allow setting breakpoints inside a LINQ method chain is to add a nop: a statement that does nothing, but a statement is what you need! In LINQ a nop consists of a lambda expression that performs an identity transformation, but you want it to use a statement rather than an expression, i.e. this:
...rather than this:
The other crucial factor is that the statement must be accessible to the debugger, i.e. it must be on a line by itself. Then you can set a reliable breakpoint, as shown in Figure 3.
Figure 3 Adding Breakpoints with Embedded Statements
(Thanks to Eric White’s blog entry Debugging LINQ Queries for this tip.)
Advanced Debugging: Injecting a Watcher
In the previous section you learned to inject a simple inline expression. That worked because, being wrapped in a Select predicate, it still fits the classic LINQ signature:
That technique has its uses but to do anything non-trivial it is more useful to encapsulate a diagnostic routine into a separate package. To explore this avenue, consider this simple LINQ query to perform some trivial string operations. (The code in this section is available in the LinqMethodChaining project of the ChainingAndDebugging solution (VS2010) in the accompanying code archive.)
{
return words
.Select(word => word.Trim())
.Select(word => word.ToLower())
.Where(word => word.StartsWith("k"))
.OrderBy(word => word);
}
The input is this word list, which includes some different casings, some extraneous spaces, and is unordered.
{" KOOKABURRA", "Frogmouth", "kingfisher ", "loon", "merganser"};
The program to wrap around these pieces is just:
{
var newWords = ProcessWordList1(Words);
foreach (var word in newWords) { Console.WriteLine(word); }
Console.ReadLine();
}
The output from this is just these two birds: kingfisher followed by kookaburra. This example is deliberately simple but in the following discussion assume you have something more elaborate where the machinations it performs are non-obvious. To be able to examine the innards of the LINQ chain, create a new class to contain an extension method based on the Watch method in Bart De Smet's informative article LINQ to Objects – Debugging. I have enhanced his extension method to support multiple colors instead of a single color, and to show invisible characters for illustration. (I have also chosen to rename it from Watch to Dump to be consistent with subsequent portions of this article.) Here is my version:
public static class EnumerableDebugger
{
public static ConsoleColor DefaultColor = ConsoleColor.Yellow;
public static bool ShowWhiteSpace { get; set; }
public static IEnumerable<T> Dump<T>(this IEnumerable<T> input)
{
return Dump(input,
item => item != null ? item.ToString() : "(null)", DefaultColor);
}
public static IEnumerable<T> Dump<T>(
this IEnumerable<T> input,
ConsoleColor consoleColor)
{
return Dump(input,
item => item != null ? item.ToString() : "(null)", consoleColor);
}
public static IEnumerable<T> Dump<T>(
this IEnumerable<T> input,
Func<T, string> toString,
ConsoleColor consoleColor)
{
foreach (var item in input)
{
Console.ForegroundColor = consoleColor;
Console.WriteLine(
ShowWhiteSpace ? '[' + toString(item) + ']' : toString(item));
Console.ResetColor();
yield return item;
}
}
}
This extension method adds color-coded diagnostic output intermixed with your program’s normal output. More importantly, it performs an identity transformation on its input just like the previous nop technique: that is, it returns its input unchanged. Because of this, it is safe to inject this into the LINQ chain anywhere you like. Here is the method instrumented with Dump calls injected after every LINQ operation:
{
EnumerableDebugger.ShowWhiteSpace = true;
return words
.Dump()
.Select(word => word.Trim())
.Dump()
.Select(word => word.ToLower())
.Dump()
.Where(word => word.StartsWith("k"))
.Dump()
.OrderBy(word => word)
.Dump();
}
The output of the program is shown in Figure 4, left side. You can distinguish the program output from the diagnostic output in yellow but it is impossible to distinguish the multiple occurrences in yellow. By specifying non-default arguments to Dump you can enhance the output. The final version of ProcessWordList below uses the same Dump extension method but this time supplies two arguments, one to label the step and one to colorize the step. This method yields the output in Figure 4, right side.
{
EnumerableDebugger.ShowWhiteSpace = true;
return words
.Dump(w => "ORIGINAL: " + w, ConsoleColor.Yellow)
.Select(word => word.Trim())
.Dump(w => "TRIMMED: " + w, ConsoleColor.Yellow)
.Select(word => word.ToLower())
.Dump(w => "LOWERCASE: " + w, ConsoleColor.Green)
.Where(word => word.StartsWith("k"))
.Dump(w => "FILTERED to 'K': " + w, ConsoleColor.Red)
.OrderBy(word => word)
.Dump(w => "SORTED: " + w, ConsoleColor.Blue);
}
Figure 4 Output from Injecting the Dump Method
The labeled/color-coded output clearly communicates what step generates each line of output. It also reveals that LINQ really is a pipeline! Observe that the first word goes through the first 4 LINQ methods before the second word is even touched. The second word only survives the first 3 methods because it fails to make it through the filter looking for words starting with “k”. After all five words are processed by the first four steps, the remaining list—now just 2 words—is processed by the OrderBy method. OrderBy processes the whole list as a unit so it knows to wait for all the previous steps in the chain to complete. Notice that after OrderBy the data again flows in a pipeline from the final Dump call to the main program, which does a plain Console.WriteLine, because the blue Dump output is interleaved with the white standard output.
This injection technique is more powerful than the simple, inline approach given earlier. You could achieve a similar result by setting a breakpoint inside the Dump method, then manually examining values in the debugger. But this technique is particularly useful if you want to see a stream of output from a running program rather than stop at a breakpoint. It is also handy because you can compile in your injections and get diagnostic output without having to run inside Visual Studio. Also, by modifying the Dump method you can change your destination from the console to a log file for further analysis and post-processing. Finally, I encourage you to review DeSmet’s blog entry where he discusses further ways to extend the Dump / Watch method.
Dump Method in Visual Studio: Points to Take Away
- The Dump method is transparent to LINQ: its input passes through unchanged.
- You can instrument any step(s) in the LINQ chain you want to watch.
- You can observe the pipelining to debug interactions.
- You can dump simple values or complex objects because the Dump method lets you specify an arbitrary lambda expression.
- You can output derived values: for example, show not just each word but also its length—see the ProcessWordList4 method in the accompanying LinqMethodChaining project.
- Optional color coding and labeling let you clarify your output.
- To color code without labeling, use an identity lambda expression (x => x).
- To label without color coding, omit the color argument. (This uses the single DefaultColor, an exposed property.)
LINQPad: Visualizing Your Data
The techniques presented thus far give you useful and flexible capabilities for examining simple data. But when you want to examine complex objects you need the power of Joseph Albahari’s LINQPad. LINQPad is one of those rare applications that is elegant, powerful, and well-designed. As soon as you start using it you know it is “just right”. Remember back when you discovered the awesome power of Visual Studio; LINQPad is like that, too. (I have no affiliation with the application or its author. :-) LINQPad is a sandbox/IDE for .NET languages (C#, VB, SQL, F#) that lets you develop code without all the overhead of creating solutions and projects required in Visual Studio. I use it primarily for C# work though I have read some intriguing articles recently that some people use it to completely replace SQL Server Management Studio!
In the C# arena, LINQPad appears as if it converts C# from a compiled language to an interpreted language. You can just type in an expression and press Execute (F5). Change the language selector from C# Expression to C# Statements if you want to put a bit more code on the page, or to C# Program for full class support. So you can define classes if you need them but if you just want to try out a few isolated statements you can do that in an instant.
Besides the benefit of having a sandbox without the overhead, LINQPad includes powerful output visualization that is particularly useful with LINQ. (I guess that it was designed with this in mind—hence the name—but LINQPad should really be called something like .NET-Pad; it is not at all restricted to just LINQ.) The data visualization of LINQPad is outstanding, but learning how to use it takes exactly one sentence:
Append .Dump( ) or .Dump(“your title string”) to the end of something you want to examine.
That is it. Period. Honest. The remainder of this article just shows you some tips on how to gain the most leverage from that method call.
Basic LINQPad
As an introduction, I start with an illustration of two examples, borrowed from my previous article Using Three Flavors of LINQ to Populate a TreeView.
First the data:
"Albert", "George", "Harris", "David" };
Here is the most basic of LINQ queries; the output is a sorted list of elements, where each element is just an item from the names array. The result is fed to the Dump method and the output appears in Figure 5, left side:
The second example builds on this with query continuation. The output is, again, a list of elements, but here each element is a more complex structure, containing a length and a collection of zero or more names (Figure 5, right side):
orderby item
group item by item.Length into lengthGroups
orderby lengthGroups.Key descending
select lengthGroups;
query.Dump();).
Figure 5 Output from LINQPad’s Dump Method
Figure 6: Successive LINQpad Output
from a method chain
LINQPad with Method Chaining
Dumping the output of a query is certainly useful. But it becomes significantly better still if you can peek inside the LINQ chain, just as you saw earlier with the ProcessWordList2 and ProcessWordList3 methods. Recall that those used a custom Dump method in Visual Studio that was specifically designed as a pass-through method.
I have not seen it documented anywhere, but I thought that the LINQPad Dump method must surely be as well-designed as that, too! Here is the bird example shown earlier tailored for LINQPad. Paste this code fragment into a LINQPad buffer, set the language to C# Statements, and execute it.
{" KOOKABURRA", "Frogmouth", "kingfisher ", "loon", "merganser"};
Words
.Dump("ORIGINAL:")
.Select(word => word.Trim())
.Dump("TRIMMED:")
.Select(word => word.ToLower())
.Dump("LOWERCASE:")
.Where(word => word.StartsWith("k"))
.Dump("FILTERED to 'K':")
.OrderBy(word => word)
.Dump("SORTED:");
Figure 6 displays the result: a series of lists presented in a way that is instantly comprehensible. You see each step of the LINQ chain and can watch as each transformation occurs. The LINQPad Dump method is indeed transparent, returning its input unchanged to the next step in the chain!
This Dump method has a different signature than the custom Dump method presented earlier for Visual Studio use. The earlier one had two signatures: one with no arguments and one with two arguments, an IEnumerable<T> and a ConsoleColor. This one also has two signatures: one with no arguments and one with a single string. For the latter, the string is used as a title on the list block that follows.
Another difference to note is that this Dump method shows all the results from one step, then all the results from the next step, etc. The earlier Dump method showed individual results from one step intermingled with those of other steps, and let you see the actual sequence of execution. LINQPad is not changing the way the LINQ chain executes here; rather, I assume it is just collecting all the results internally and repackaging them in a clean visualization before presenting them.
Examining Complex Objects
The HostSwitcher subset.linq file in the accompanying code archive contains an excerpt of the HostSwitcher code, including the CreateMap method shown near the beginning of this article. This real-world example lets you experiment with complex objects in LINQPad. Open the file in LINQPad and execute it and you get the dump of two structures (Figure 7).
Figure 7 LINQPad Inspection of HostSwitcher’s Dictionary and Context Menu
The project dictionary displays the result of the CreateMap method that uses a complex LINQ chain to build a dictionary. Notice from the dump that it is a dictionary with string keys and IEnumerable<ServerGroup> values. The dictionary is available in a property so Dump can be directly invoked on that property, as shown in the first couple lines of the main program:
{
HostManager hostManager = new HostManager();
hostManager.CreateMap();
hostManager.projectDict.Dump("Project Dictionary");
ContextMenuStrip contextMenuStrip = new ContextMenuStrip ();
hostManager.BuildContextMenu(contextMenuStrip);
hostManager.DumpContextMenu(contextMenuStrip);
}
One of the uses of the dictionary is to create a dynamic context menu, enumerating each server group for each project. The context menu is constructed with other LINQ code so it is useful to check its structure as well. The context menu dump in Figure 7 displays the result of the BuildContextMenu method. Dump is also used here, but it is embedded in the DumpContextMenu method, which reformats the completed context menu before feeding it to Dump to get a more compact and meaningful output:
{
contextMenuStrip.Items
.Cast<ToolStripItem>()
.Select(item => new
{
Text = (item is ToolStripSeparator ? "-separator-" : item.Text),
Items = (
item is ToolStripMenuItem ?
((ToolStripMenuItem) item).DropDownItems
.Cast<ToolStripItem>()
.Select(subitem => new { subitem.Text, subitem.ToolTipText } )
: null
)
})
.Dump("Context Menu");
}
This method starts with the ToolStripItemCollection of the contextMenuStrip. Recall, however, that LINQ has great affinity for IEnumerable<T> objects. The Cast extension method converts the ToolStripItemCollection to the more palatable IEnumerable <ToolStripItem> for further processing. The Select method enumerates all the items in the context menu, with the label in the first output column and the contents in the second. The contents are generated by a nested LINQ query that extracts the label and the tooltip from each second-level menu item.
Both of these dumps show how LINQPad gives a great visualization of your output. But applying what you now know about injecting a watcher into the chain, it is a trivial matter to examine the innards of the LINQ steps in the CreateMap method. I have included five Dump method calls in the CreateMap method, but they are all commented out. Here is the same CreateMap method shown earlier, now with the Dump method calls included:
private void CreateMap()
{
projectDict =
hostFileData
.Select(line => new { line, m = FilteringRegex.Match(line) })
//.Dump()
.Where(item => item.m.Success)
//.Dump()
.Select(item => new
{
item.line,
project = item.m.Groups[ProjectSubGroupIndex].ToString().Trim(),
serverGroup = item.m.Groups[ServerGroupSubGroupIndex].ToString().Trim()
})
//.Dump()
.GroupBy(item => new { item.project, item.serverGroup }, item => item.line)
//.Dump()
.GroupBy(projAndServer => projAndServer.Key.project)
//.Dump()
.ToDictionary(
project => project.Key,
project => project.Select(item =>
new ServerGroup
{
Name = item.Key.serverGroup,
EnabledCount = item.Where(line => !IsCommented(line)).Count(),
DisabledCount = item.Where(line => IsCommented(line)).Count()
}
)
);
}
Uncomment any of those to see the intermediate data structures built on the way to coalescing into the compact dictionary result in Figure 7. Figure 8 shows the first portion of each of the five Dump calls in LINQPad. Compare each to the code above:
- After the first Select method, the data is projected into records with two fields. All inputs records are included because this point occurs before any filtering—notice the count of 26 records indicated at the top of the output.
- Here the output is filtered to include only those records with successful regular expression matches; the count is down to 16 records.
- The data is reformatted again to project into records with three fields that will be used in subsequent steps; there are still 16 records at this point.
- The first GroupBy reorganizes the data to 5 records grouped by project and server group.
- The second GroupBy then nests those groups in a parent grouping of just projects. This grouping arrangement then allows applying the ToDictionary method to get the final dictionary required.
Figure 8 LINQPad’s Inspection of HostSwitcher’s LINQ Chain
There and Back Again
As I was developing HostSwitcher’s CreateMap method in Visual Studio, I lamented that I could not see the data structures the way LINQPad could show them to me. So I copied most of my code into a new LINQPad file, added the appropriate references, and then worked on the method in LINQPad, copying it back to Visual Studio when I completed it.
Unfortunately, there is no automatic way to copy a Visual Studio project into LINQPad. I asked the author Joseph Albahari about importing a Visual Studio project into LINQPad in this StackOverflow post; while LINQPad does not do this, he is now thinking about at least adding a way to import references from a Visual Studio project. And, more immediately interesting, he pointed me to a Visual Studio add-in called LINQPad Visualizer by Robert Ivanc.
With Ivanc’s visualizer, you can get LINQPad’s Dump output inside Visual Studio! To do this, you need to set up a watch expression while you are debugging your code. But before you do that you need to install the add-in to Visual Studio. This is a two-step process. Be aware, however, that at the time of writing LINQPad Visualizer does not support Visual Studio 2010 yet, though Ivanc has assured me it is on his “to do” list.
2010.12.04 Breaking news: Just hours ago Robert Ivanc released a version that supports VS2010!
- From the link above, obtain the linqpadvisualizer.dll and, as Robert indicates in his instructions, copy the dll to the Visualizers folder of your Visual Studio instance (e.g. Documents\Visual Studio 2008\Visualizers). If the Visualizers folder does not exist, just create one.
- Copy the LINQPad executable (LINQPad.exe) into the same Visualizers folder. Also copy it to the same folder where Visual Studio’s devenv.exe executable resides (e.g. C:\Program Files (x86)\Microsoft Visual Studio 9.0\Common7\IDE). Another important point here: LINQPad is available for both .NET 3.5 and .NET 4.0. You can actually run both on your system without conflict. For LINQPad Visualizer, though, you must use the version for .NET 3.5 (which is LINQPad version 2.x ! ).
Because of the restriction of LINQPad Visualizer to Visual Studio 2008, the accompanying code archive includes a VS2008 version of the HostSwitcher solution so you can try out the code as you read on.
As shown in Figure 9, advance the debugger so that the object you are interested in is in scope (point 1)—notice the current line marked with the yellow arrow at the bottom of the code window. Next, open the watch window (point 2)—this should be one of the tabs in the group of tabs containing your errors, output, find, etc. Enter a new watch expression of this form:
Upon pressing return in the Name column of the watch window, you should see the WeakReference show up in the Value column with—and this is the important point—a dropdown indicator on the right edge of the Value field (point 3). Open that dropdown and select (probably) the only visualizer available, the Linqpad (sic) Visualizer. Upon making that selection, you should get a new pop-up window showing the output of the variable you specified in the same form as LINQPad’s Dump method would render it (point 4). My example shows the dictionary created by the CreateMap method, exactly as you saw it in Figure 7.
Figure 9 Using LINQPad’s Inspection in Visual Studio
LINQPad Visualizer definitely has value but there are a few issues to keep in mind:
- As already mentioned it does not yet provide Visual Studio 2010 support.
- If you leave the watch definition in place, the next time you debug the project and open the watch window the value field says “This expression causes side effects and will not be evaluated.” At the right edge instead of a dropdown icon you will find a refresh icon. Simply click that refresh icon to restore the dropdown.
- Most significantly, LINQPad Visualizer can only inspect objects that are marked as Serializable. (Ivanc clearly mentions this as a limitation on his web site, so kudos to him for that.) Unfortunately, I still had a bit of trouble with the dictionary example I have been using. If you look carefully in Figure 9 you will observe that the code for CreateMap is somewhat different than the code listing I originally presented for the method. To demonstrate LINQPad Visualizer I had to revert to this earlier version of the method. The more streamlined code (using the ToDictionary LINQ method) causes LINQPad Visualizer to throw an exception complaining that the new ServerGroup() construct is a non-serializable type even though it does have the [Serializable] attribute.
010.12.04 Breaking news: Ivanc just identified what caused the exception I encountered! Technically it was a user error (mine) but you need to know this vital piece of information to avoid it: The catch is that you cannot serialize things that are lazy evaluated, so by forcing evaluation (with for example a ToList() call) you convert to something that can be serialized. So my final code for CreateMap--the version with ToDictionary--may be used by adding a ToList() to the ToDictionary code segment (I have replaced a chunk of code with an ellipsis for clarity) as shown:
.ToDictionary(
project => project.Key,
project => project.Select(item => ...).ToList()
);
I point out the defects I have found not to condemn the product, but rather to help you work through them. I commend Ivanc on his efforts and look forward to improvements with this handy utility.
Conclusion
LINQ is a tremendous productivity boost when you understand its capabilities. Fortunately, it is a technology that you can learn a bit at a time and also apply a bit at a time; it does not require nor demand wholesale conversion from previous techniques. Use the Dump method presented here to prevent your LINQ chains from becoming opaque as you delve into more and more complex chains. As you are learning LINQ, LINQPad is invaluable, letting you experiment with code fragments with ease. But it is not just a tool for learning; it is great for “real-world” code development in general. When you need to work out some data flow, copy pieces over to LINQPad so you can developer and/or fine-tune it. Alternately, if it is cumbersome to find all the tendrils of the code you are working with to move it to LINQPad, bring LINQPad into Visual Studio with the handy LINQPad Visualizer, subject to the caveats mentioned. If you have not yet experienced LINQ, now is the time to give it a try!
Footnotes
[1] The code archive accompanying this article includes: a VS 2010 solution (ChainingAndDebugging) illustrating dumping in Visual Studio; a VS 2008 solution (HostSwitcher2008) illustrating the LINQPad Visualizer; and a LINQPad Queries folder with LINQPad examples.
[2] Impedance matching is a design practice in electrical engineering (used here as an analogy) whereby the output of one stage is designed to most efficiently and effectively match the input requirement of a subsequent stage in a pipeline process.
[3] LINQ comes in several main flavors—LINQ to SQL, LINQ to XML, LINQ to DataSet, and LINQ to Objects—and a whole variety of lesser known ones, too. This article focuses on LINQ to Objects but the principles apply to LINQ in general.
[4] See my earlier article Using LINQ Lambda Expressions to Design Customizable Generic Components for more on query syntax vs. method syntax.
[5] For further reference on LINQ operators, see the MSDN reference pages Enumerable Methods and Standard Query Operators Overview. | https://www.simple-talk.com/dotnet/.net-framework/linq-secrets-revealed-chaining-and-debugging/ | CC-MAIN-2015-27 | refinedweb | 5,689 | 54.32 |
0
How do you test a record to see if it is the last record? I want to append one string to the end of my script string if the record is the last one, and a continue string if it is not.
I'm showing the entire procedure below so that you guys can get a full understanding of what I'm trying to achieve.
Private Sub BuildScript() 'Build the SQL script to add the customers previously stored in the CUSTOMERS table Dim SelectString As String = "SELECT CustNum FROM Customers WHERE Location = " & LocationCode Dim ODBCCon As New OleDbConnection(Testing_ODBC_ConnectionString) Dim ODBCCommand As New OleDbCommand(SelectString, ODBCCon) ODBCCon.Open() Dim r As OleDbDataReader = ODBCCommand.ExecuteReader ScriptString = StartScript 'Start with beginning of script While r.Read AddCustomer(r(2)) 'Add customer pulled from database 'if record is the last record 'End While 'exit loop 'else ScriptString = ScriptString & ScriptContinue 'Format script to continue adding customers 'endif End While ScriptString = ScriptString & EndScript 'Append end of Script MessageBox.Show(ScriptString) End Sub | https://www.daniweb.com/programming/software-development/threads/87631/testing-for-the-last-record | CC-MAIN-2017-39 | refinedweb | 168 | 51.58 |
Language
Amaury Hernandez-Aguila
Contents
2 CX Programs Representation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 11
2.1 Everything in a Function is an Expression 12
2.2 Elements Redefinition 13
3 Data Structures . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 14
3.1 Primitive Types 14
3.2 Variables 15
3.3 Arrays 15
3.4 Slices 17
3.5 Structures 18
3.6 Scope of a Variable 19
4 Functions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 21
4.1 Lexical Scoping 22
4.2 Side Effects 22
4.3 Methods 22
5 Control Flow . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 24
5.1 jmp and goto 24
5.2 if and if/else 25
5.3 for Loop 27
5.4 return 29
6 Packages . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 31
6.1 CX Workspaces 32
7 Pointers . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 35
7.1 Memory Segments 36
10 Garbage Collector . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 48
10.1 What is Garbage Collection 48
10.2 CX’s Garbage Collector 49
11 Affordances . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 50
12 Serialization . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 55
12.1 Serialization 56
12.2 Deserialization 58
13 Genetic Programming . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 60
16 Unit Testing in CX . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 72
1. Getting Started with CX
This Chapter works as an introduction to Skycoin’s programming language: CX. In the following
Sections you will learn about the objectives and philosophy of the language and about the features
that make CX unique.
In this first Chapter you can find instructions on how to install CX, and how to write and run
your first program using the language. We will check how CX programs are internally represented
in Chapter 2, so we can understand some debugging features and the CX REPL. We’ll then review
some basic programming concepts in Chapter 3, like what is a variable and the types of data these
variables can represent, how you can group different values in arrays and slices, how you can group
different types of values in structures, and how you can change the scope of a variable in a program.
In Chapter 4 you will learn how to use functions and methods, and we’ll talk a bit about side effects.
The different control flow mechanisms that CX currently offers are covered in Chapter 5, such as if
and the for loop. The last fundamental piece is packages, which help us modularize our programs,
and they are covered in Chapter 6.
After Chapter 6, we’ll start covering more complex subjects, such as pointers in Chapter 7 and
how to use CX with OpenGL and GLFW in Chapter 8. Chapter 9 covers how CX can work both
as an interpreted and as a compiled language, and what advantages bring each mode. Chapters 10
and 11 describe CX’s garbage collector and affordances.Chapter 12 describes CX’s serialization
capabilities, and we’ll learn how we can serialize a full, running program, store it in a file, and later
deserialize it to continue its execution.
CX uses its affordance system to create a genetic programming algorithm that can be used to
create programs that create programs, and this feature is explained in Chapter 13. Talking about
creating programs that create programs, are you interested on creating your very own CX? If you are,
we’ll cover that subject in Chapter 14. Lastly, we’ll cover some advanced techniques you can use
while using the REPL to create a program in Chapter 15, and Chapter 16 teaches us how to create
unit tests to make sure everything works as intended while your programs grow larger.
1.1 What is CX? 7
1.2 Installing CX
Eventually, we’ll have a bootstrapped version of CX, and you’ll be able to compile CX using CX, but
in the meantime, you need to have a working Go installation to compile CX, as CX is implemented
in this language. Although providing instructions on how to install Go is out of the scope of this
book, we can give you some guidelines:
• At the time of writing, you can find instructions on how to install Go here:.
org/doc/install
• Make sure you get a version of Go superior to 1.8
• Correctly setting a Go environment – particularly the GOPATH variable – usually decreases
the chances of getting errors with the installation of CX
After getting your Go installation ready, you will need to install some libraries or programs,
depending on your operating system.
In the case of Linux distributions, you might need to install some OpenGL libraries, if you
haven’t done already. CX has been tested in Ubuntu, and the commands to get the required libraries
for this distribution are shown in Listing 1.1.
As there are dozens of Linux distributions, it’d be hard to give instructions on how to get the
correct libraries for each of them. Nevertheless, using your favorite search engine to find out the
names of those libraries for your distribution, and how to install them should be easy.
If you are using Windows, you might only need to install GCC. If you already installed GCC
through Cygwin, you might run into trouble, as Go apparently doesn’t get along with Cygwin. If you
haven’t installed GCC, you should install it either through tdm-gcc (.
net/) or Mingw ().
At the moment, most users of CX have installed it on MacOS systems, and in all of the cases the
installation of the language has been straightforward.
And finally, you’ll need Git installed, regardless of your operating system. If you find any
problems with the installation, we’ll be grateful if you can open an issue at CX’s GitHub repository
(), so we can improve the installation process!
After going through the hassles of installing Go and the required libraries, you should be able to
install CX by running either the cx.sh (for *nix users) or the cx.bat (for Windows users) installation
scripts, which can be found in CX’s GitHub repository (). If
you are running a *nix operating system, you can also try the command shown in Listing 1.2.
If everything went well, you should be able to see CX’s version printed in your terminal after
running cx -v.
1 p a c k a g e main
2
3 f u n c main ( ) {
4 s t r . p r i n t ( " Hello , world ! " )
5 }
Listing 1.3: "Hello, world!" Example
We can see the essential parts of a CX program in the aforementioned program. Every CX
program has to be organized in packages (you can learn more about them in Chapter 6), and,
specifically, every CX program must declare a main package. Additionally, in this main package,
you must declare a main function, which will work as your program’s entry point. The entry point of
any program is the function, subroutine or instruction that will be run first, and which will tell the
operating system how to continue with the program’s execution.
After writing the program using your favorite text editor, save it to your computer using the
name hello-world.cx. You can then run it by using either cx hello-world.cx or cx hello-world.cx -i.
After executing either instruction, you should see the text Hello, world! printed to your terminal.
1.4 Introduction to the REPL 9
In case you’re curious about the -i flag, it instructs CX to interpret the program, instead of
compiling and then running it. You can learn more about this in Chapter 9. Also, there’s actually a
third way of running your program: cx hello-world.cx -r, but we’ll learn more about it in Chapter
15, and it’s related to the next Section.
1 CX 0 . 5 . 2 s t r . p r i n t ( " Hello , world ! " )
6
7 : f u n c main { . . .
8 ∗ : step 1
9 Hello , world !
10
11 : f u n c main { . . .
12 ∗
Listing 1.4: REPL Session Example
We can see that this REPL session example is another way of creating a Hello, world! program
in CX. The first thing to explain in a REPL session is that the asterisk or multiplication sign (*) is
telling the programmer that CX is awaiting for an instruction to be entered. This is called a REPL
prompt At line 5, we decide to enter an expression: str.print("Hello, world!"). But where does
this expression go? How does CX know what is the entry point in a REPL session? To answer this
question, we need to look at line 4. This line is telling us that we’re currently inside function main,
and that any expression that we write is going to be added to that function. This means that the entry
point of a program written using the REPL is still the main function.
Now, if we want to run the program, we need to use the :step meta-command, which is explained
further in Chapter 15. At line 8 we are telling CX to advance the program by 1 instruction, which
results in executing the str.print("Hello, world!") expression and prints the message to the terminal.
Something that you might have noted is that we writing str in front of print. This is explained in
the next Section.
1 p a c k a g e main
2
3 f u n c main ( ) {
4 s t r . p r i n t ( " t y p e −s p e c i f i c f u n c t i o n " )
5 p r i n t ( " t y p e −g e n e r a l i z e d f u n c t i o n " )
6 i 3 2 . p r i n t ( i 3 2 . add ( 1 0 , 1 0 ) )
7 i32 . p r i n t (10 + 10)
8 }
Listing 1.5: Type-Specific Functions
Another kind of type-generalized functions are the infix arithmetic symbols, e.g., +, *, etc. The
parser will infer the type of its arguments and translate the arithmetic statement to an expression that
uses a type-specific function.
The objective of having a strict typing system like this is to promote safety. If the programmer
misinterprets data in a program and, for example, tries to send an i32 value to str.print(), this error
can be caught early at compile-time instead of being caught at run-time.
2. CX Programs Representation
When you create a CX program and run it with the cx command, the first thing that happens is that
the code gets parsed. Every statement, declaration and expression in your code is translated to a
series of adders, removers, selectors, getters and makers (these are covered in Chapter 14). The
trans-compiled version of a CX program is a series of these instructions that generate a structure
that holds all the necessary information for the CX runtime to execute the program. It is worth to
note that both interpreted and compiled versions of CX can read the same structure, so CX can have
some parts of its programs compiled while other parts run in an interpreted way. The programmer
can decide to compile certain functions that need to be fast, while having other functions to be
interpreted, so they can be modified interactively by the user or the affordance system, for example.
The structure that represents a CX program is generated by a parser, which reads the code that
you, the programmer, has written using your favorite text editor. This structure can be considered
as the program’s Abstract Syntax Tree (AST). CX’s REPL (introduced in Section 1.4) has a meta-
command that prints the AST of a program. This meta-command can be called by writing :dp or
:debugProgram in the REPL, and will print something similar to Listing 2.2.
If you want to try it out, you can save the program in Listing 2.1 to a file called ast-example.cx,
and load it to a REPL by executing the command cx ast-example.cx -r. Then in the REPL prompt,
just enter the meta-command :dp, and it should print the AST shown in Listing 2.2.
1 p a c k a g e main
2
3 var global i32
4
5 f u n c main ( ) {
6 var foo i32
7 foo = 5
8
9 s t r . p r i n t ( " H e l l o World ! " )
10 i32 . p r i n t (55)
11 i 3 2 . p r i n t ( i 3 2 . add ( g l o b a l , 1 0 ) )
12 Chapter 2. CX Programs Representation
12 }
Listing 2.1: Abstract Syntax Tree Example - Code
1 Program
2 0. − P a c k a g e : main
3 Globals
4 0. − G l o b a l : g l b l V a r i a b l e i 3 2
5 Functions
6 0. − F u n c t i o n : main ( ) ( )
7 0. − E x p r e s s i o n : f o o = i d e n t i t y ( 5 i 3 2 )
8 1. − E x p r e s s i o n : s t r . p r i n t ( " H e l l o World ! " s t r )
9 2. − E x p r e s s i o n : i 3 2 . p r i n t ( 5 5 i 3 2 )
10 3. − E x p r e s s i o n : l c l _ 0 = i 3 2 . add ( g l o b a l i 3 2 , 10 i 3 2 )
11 4. − E x p r e s s i o n : i 3 2 . p r i n t ( l c l _ 0 i 3 2 )
12 1. − F u n c t i o n : ∗ i n i t ( ) ( )
Listing 2.2: Abstract Syntax Tree Example
Let’s go line by line of Listing 2.2. Line 1 is first telling us that we are showing the AST of a
program. Line 2 then tells us that what follows are the contents of a package, which is named main.
We can then see that all the global variables declared in the package are going to be printed after
Line 3, which in this case is only one. Then we are presented with the last part of the package: the
functions. The first function is our main function, which is declared to not have any input parameters
nor output parameters, as seen at Line 6.
Before continuing with the analysis of the main function, let’s briefly discuss that *init function
at Line 12. This function is actually the first function to be called in a CX program. Yeah, we lied to
you, main is not the one called first. This function initializes all the global variables in your program,
and in future versions of CX you’ll be able to put other expressions you wish to run first, before your
program starts (this behavior is present in languages like Go).
Now, we can see something strange happening on main’s list of expressions: there is a function
call that we never wrote in our original CX source code (identity), and we can see a variable that we
never declared (lcl_0). The identity operator is used when we want to "make a copy" of a value to a
variable, and the variables called lcl_N, where N is an integer, are used as temporary variables that
hold intermediary calculations in nested function calls. There are other weird things that happen
when parsing a CX program, which we will see in later Chapters when dealing with programs’ ASTs,
but for now it’s enough for you to understand that there is not necessarily a one-to-one relationship
between your CX source code elements and the resulting AST. Actually, in more complex programs
the compiler will heavily modify the resulting AST in order to optimize your code. Nevertheless,
there is an important point that should be understood before continuing with the rest of the book,
and this is discussed in the next Section.
statement returning a value. In the CXGO implementation this doesn’t happen, as we try to mimic as
much as possible the behavior of Go. Nevertheless, it is important to take into account if you decide
to create your own CX implementation. You could, for example, implement a CX-based language
where the code in Listing 2.3 is valid, and it is allowed by the CX specification.
1 v a l = i f 5 > 4 t h e n 10 e l s e 20
2 print val / / This w i l l p r i n t 10!
Listing 2.3: Example of if/else Statement as an Expression
In some Chapters 5 we will see how CX transforms all the control flow mechanisms to a series
of jmps, where jmp (from the word "jump") is just an operator that takes a number of lines of code
(expressions, actually) to skip.
The reason behind this design choice is convenience: it’s easier to build a program structure
using this approach, and implementing some of the CX features, such as affordances, is a breeze if
you only have to deal with expressions. Another example is using a genetic programming algorithm
(see Chapter 13 to change a CX program’s structure: you only have to add, remove, change and
move around the same type of component: expressions.
1 p a c k a g e main
2
3 f u n c main ( ) {
4 s t r . p r i n t ( " Hello ! " )
5 }
6
7 f u n c main ( ) {
8 s t r . p r i n t ( " Bye ! " )
9 }
Listing 2.4: Example Function Redefinition
Although you could use CX as a calculator and work with literal numbers all the time, it would be a
waste of power. In order to create more robust applications, you need to work with more complex
types of data, such as arrays and structures. Nevertheless, before learning about these complex data
structures, we need to review the different primitive types of data that CX offers at the moment.
floating-point number.
3.2 Variables 15
CX at the moment provides the following primitive types: byte, bool, str, i32, i64, f32, f64. All
of the integer and floating-point number types are signed, which means that half of the possible
values that they can represent are used to represent negative numbers. For example, a byte type in
CX is able to represent any integer number from -128 to 127, for a total of 256 different values. In
the future other primitive types will be incorporated, such as i16 (16-bit integer) and ui64 (unsigned
64-bit integer).
But this doesn’t mean that you are limited to only those types. They are called primitive types
because other more complex types are derived from them. These complex types are reviewed in the
following Sections of this Chapter.
3.2 Variables
Variables have been used in code examples in previous Chapters already, but they have not been
formally introduced. As was mentioned at the beginning of this Chapter, you could create programs
where you only use literal numbers, but you’d be extremely limited on what you can create. Variables
are one of those features that are very easy to understand and use, and yet, they greatly expand your
development capabilities. You can see how you can declare variables of the different primitive types
in CX in Listing 3.1.
1 p a c k a g e main
2
3 f u n c main ( ) {
4 var optionCode byte
5 var i s A l i v e bool
6 v a r name s t r
7 v a r number i 3 2
8 v a r bigNumber i 6 4
9 var area f32
10 var e p s i l o n f64
11
12 name = " J o h n C o l e "
13 number = 14
14 }
Listing 3.1: Variable Declaration
As you can see, you can tell CX that you’re going to declare a variable by using the keyword
var, followed by the name of the variable, and finally the type that you desire that variable to have.
If you want to assign a value to that variable, you just write the name of the variable, then the equal
symbol (=) followed by the desired value.
It is interesting to note that variables are not actually needed in order to create a program,
but most – if not all – of the enterprise-level programming languages provide something similar
to the concept of variable. If you are curious about this, you can check some purely functional
programming languages like Haskell, and also learn about lambda calculus.
3.3 Arrays
If you have to create a program where you have to store three telephone numbers, you could just
create three different variables to hold each of them. But what if you had to store thousands of
16 Chapter 3. Data Structures
telephone numbers? Using variables to accomplish that task would be inefficient. The answer to this
problem is to use arrays.
Arrays are fixed length collections of elements of the same type. To store or access an element
in an array, you just need the name of the array and an index where you want to store the value to.
To declare an array you have to put square brackets before the type in a variable declaration, and
the number of elements that you want the array to hold must be inside the brackets. You can see an
example of an array of three 32-bit integers shown in Listing 5.8.
1 p a c k a g e main
2
3 f u n c main ( ) ( ) {
4 var foo [ 3 ] i32
5 f o o [ 0 ] = 10
6 f o o [ 1 ] = 20
7 f o o [ 2 ] = 30
8
9 i32 . p r i n t ( foo [ 2 ] )
10 }
Listing 3.2: Array Example
At Line 4 we can see the array declaration, at Lines 5, 6 and 7 the array gets initialized, and
finally at Line 9 we print the last element of the array, as arrays are zero-indexed in CX.
If you are curious enough (if you’re already a programmer, it doesn’t count), you could be
asking yourself: can you have arrays of arrays? The answer is: yes! You only need to put the extra
pair of brackets you need until you achieve the number of dimensions you want. An example of
multi-dimensional arrays is shown in Listing 3.3.
1 p a c k a g e main
2
3 f u n c main ( ) {
4 var foo [ 3 ] [ 3 ] i32
5
6 f o o [ 1 ] [ 2 ] = 40
7
8 i32 . p r i n t ( foo [ 1 ] [ 2 ] )
9 }
Listing 3.3: Multi-dimensional Arrays
Before continuing to slices, it’s worth mentioning the existence of len. len is a type-generalized
function that accepts an array as its first and only input argument, and returns a 32-bit integer that
represents the number of elements that that array is capable of holding. This function is especially
useful when using arrays in combination with the for loop, which will be covered in Chapter 5. An
example of len’s usage can be seen in Listing 3.4. Please note that there are type-specific versions of
len for each of the primitive types.
1 p a c k a g e main
2
3.4 Slices 17
3 f u n c main ( ) {
4 var foo [10] i32
5
6 i32 . p r i n t ( len ( foo ) )
7 }
Listing 3.4: Printing Array Length
Please be careful with the sizes you choose for your arrays. If you create an array larger than 232 ,
you’ll get an error because 232 is the maximum array size or because you could exceed the maximum
memory allocated to CX by your operating system. Also, if you are working with very large arrays,
you’ll most likely want to create a pointer to it to send the array to the heap memory. CX passes
its arrays by value to other functions, which means that if you send a very big array to a function
as its argument, you’ll be creating a copy of it to be sent, which will be a very slow and memory
consuming operation. You’ll learn more about functions in Chapter 4 and about pointers in Chapter
7.
3.4 Slices
Under the hood, slices are just arrays. This means that a slice has the same performance in read/write
operations as an array. The advantage of using slices over arrays is that slices are incremented in
capacity automatically if it ever exceeds it. However, this can also be considered a disadvantage. A
slice in CX starts with a capacity of 32 elements. If this limit is reached, CX creates a copy of that
slice, but with an increased capacity of 2x its previous limit, which is 64 in its second iteration. As
you can see, most of the time a slice will be wasting memory, and time whenever CX creates a copy
of it in order to increase its limit.
It must be noted that capacity is not the same as size or length. Capacity represents the reserved
memory space for a slice, while size represents the actual number of slots in a slice that are being
used. You can understand better the difference if you run the code in Listing 3.5. Although any slice
will start with 32 slots reserved in memory, e.g., 32 ∗ 4 bytes for a []i32 slice, this doesn’t mean that
all of those slots have an actual value in there. Capacity is a concept related to performance rather
than to practicality.
1 p a c k a g e main
2
3 f u n c main ( ) {
4 var s l i c e [ ] i32
5 s l i c e = append ( s l i c e , 1)
6 s l i c e = append ( s l i c e , 2)
7
8 i 3 2 . p r i n t ( l e n ( s l i c e ) ) / / p r i n t s 2 , n o t 32
9 }
Listing 3.5: Difference Between Capacity and Size
There are three native functions that are specifically designed to work with slices: make creates
a slice of a type and size that you specify, initializing the elements to the specified type’s nil
representation, e.g., 0 for an i32 and "" or an empty string for an str; append takes a slice and an
18 Chapter 3. Data Structures
element of the type of that slice, and puts it at the end of the slice; and lastly, copy creates a copy of
each of the elements of a slice, and puts each of the elements, in order, to the second slice until every
element has been copied to it or until the capacity of the second slice runs out.
1 p a c k a g e main
2
3 f u n c main ( ) {
4 var s l i c e 1 [ ] i32
5 var s l i c e 2 [ ] i32
6
7 s l i c e 1 = make ( " [ ] i 3 2 " , 3 2 )
8 s l i c e 1 = append ( s l i c e 1 , 1)
9
10 s l i c e 2 = make ( " [ ] i 3 2 " , 3 2 )
11
12 copy ( s l i c e 2 , s l i c e 1 )
13 }
Listing 3.6: Slice-specific Native Functions
Listing 3.6 shows the declaration of two slices of type i32 at Lines 4 and 5. The first slice then
gets initialized using the make function, which creates a slice of size 32 in this case. This means
that slice1 now has a size of 32 elements and a capacity of 32 elements too. At Line 8, we append
a 1 to slice1, which makes the slice have now a size of 33 and a capacity of 64. After initializing
slice2 at Line 10, we copy the contents of slice1 to slice2. What do you think that are the elements
of slice2 now?
As a final note, slices are always allocated in the heap in CX due to their scalability nature. It
would be disastrous to have a slice grow in the stack, as it would make programs run very slow –
CX would need to juggle with the objects in the stack, making copies and moving them to different
positions. If slices are allocated in the heap, we can delegate all of these operations to CX’s garbage
collector, and keep the stack clean. This behavior will slightly change in the future, though. If CX’s
compiler can detect that a slice is never going to grow during a function call, we can then flag that
slice to be put in the stack for better performance. For more information about CX’s heap and stack,
you can read Chapters 7 and 10.
3.5 Structures
Structures allow the programmer to create more complex types. For example, you may want to
create a type Person where you can store a name and an age. This means that we want a mix of an
i32 and a str. A structure that solves this problem is presented in Listing 3.7
1 p a c k a g e main
2
3 type Person s t r u c t {
4 name s t r
5 age i32
6 }
7
8 f u n c main ( ) {
3.6 Scope of a Variable 19
9 v a r p1 P e r s o n
10 v a r p2 P e r s o n
11
12 p1 . name = " J o h n "
13 p1 . a g e = 22
14
15 p2 = P e r s o n {
16 name : " G a b r i e l l e " ,
17 a g e : 21
18 }
19
20 s t r . p r i n t ( p1 . name )
21 i 3 2 . p r i n t ( p1 . a g e )
22
23 s t r . p r i n t ( p2 . name )
24 i 3 2 . p r i n t ( p2 . a g e )
25 }
Listing 3.7: Type Person using Structures
The syntax for declaring a new structure or type is shown at Line 3, and Lines 4 and 5 show the
structure’s fields. The fields of a structure are the components that shape the type being defined by a
structure. In order to use your new Person type, we first need to declare and initialize variables that
use this type. This can be seen at Lines 9-13. Lines 12 and 13 show that we can initialize the struct’s
fields one by one, by using a dot notation, while Lines 15-18 show a different way of initialization:
the struct literal. A struct literal is created by writing the name of the type we want to initialize,
followed by the name of the struct fields’ names and their values separated by a colon. Each of these
field-value pairs need to be separated by a comma.
Both of these initialization approaches has its advantages. The dot notation has the advantage
of versatility: you can initialize different fields at different points in a program. For example, you
can initialize one field before a loop, and another field after that loop. On the other hand, the
struct literal approach has the advantages of readability and that it can be used as a function call’s
argument directly. For example, you can send a Person struct instance to a function call this way:
PrintName(Person {name: "John"}).
1 p a c k a g e main
2
3 var global i32
4
5 func foo ( ) {
6 i32 . p r i n t ( global )
7 / / i32 . p r i n t ( local ) / / t h i s w i l l r a i s e an e r r o r i f uncommented
8 }
20 Chapter 3. Data Structures
9
10 f u n c main ( ) {
11 var l o c a l i32
12
13 l o c a l = 10
14 g l o b a l = 15
15
16 i32 . p r i n t ( global )
17 i32 . p r i n t ( l o c a l )
18 }
Listing 3.8: Usage of Local and Global Variables
If you want to create a global variable, you only have to declare it outside any function declaration.
If you want a local variable, declare it inside the function you want it to have access to. Listing 3.8
shows an example that declares a global variable that is accessed to by two functions: main and foo,
and a local variable that is only accessible by the main function.
As a last note, global variables can also be accessed by other packages that import the package
containing said variable. You’ll learn more about packages in Chapter 6.
4. Functions
Unless you are learning an esoteric programming language, chances are that that language is going
to have some sort of subroutine mechanism. A subroutine is a named group of expressions and
statements that can be executed by using only its name. This allows a programmer to avoid writing
that group of expressions and statements again and again, every time they are needed. In CX,
subroutines are called functions, because they behave similarly to how mathematical functions
behave.
In CX, a function can receive a fixed number of input parameters, and like in Go, it can return a
fixed number of output parameters. These parameters must be of a specific type, either a primitive
type or a complex type. At the moment, both input and output parameters must have a name
associated to them, but this will change in the future and anonymous output parameters will be
possible. Parameters are a very powerful feature, because they allow us to have a function behave
differently depending on what data we send to it. Listing 4.1 shows how we can create a function
that calculates the area of a circle, and another function that calculates the perimeter of a circle.
1 p a c k a g e main
2
3 var PI f32 = 3.14159
4
5 func c i r c l e A r e a ( r a d i u s f32 ) ( area f32 ) {
6 a r e a = f 3 2 . mul ( f 3 2 . mul ( r a d i u s , r a d i u s ) , P I )
7 }
8
9 func c i r c l e P e r i m e t e r ( r a d i u s f32 ) ( p e r i m e t e r f32 ) {
10 p e r i m e t e r = f 3 2 . mul ( f 3 2 . mul ( 2 . 0 , r a d i u s ) , P I )
11 }
12
13 f u n c main ( ) ( ) {
14 var area f32
15 area = circleArea (2.0)
16 f32 . p r i n t ( area )
22 Chapter 4. Functions
17 f32 . p r i n t ( c i r c l e P e r i m e t e r ( 5 . 0 ) )
18 }
Listing 4.1: Determining Area and Perimeter of a Circle using Functions
If you needed to calculate the area of 20 circles, you’d only need to call circleArea 20 times,
instead of having to write f32 .mul(f32.mul(radius , radius ), PI) 20 times (although you’d probably be
using a for loop instead; see Chapter 5).
1 p a c k a g e main
2
3 func foo ( ) {
4 i32 . p r i n t ( x )
5 }
6
7 f u n c main ( ) {
8 var x i32
9 x = 15
10 foo ( )
11 }
Listing 4.2: Lexical Scoping
If CX was dynamically scoped, the code shown in Listing 4.2 would print 15, because the call to
foo at Line 10 would capture the value of the variable x declared in main. Instead, it will raise an
error because Line 4 is trying to access a variable that has not been previously declared.
4.3 Methods
Methods are a special type of functions that can be associated to user-defined types. Although
methods are not strictly necessary, as their functionality can be replaced by normal functions,
they provide some useful advantages. The first advantage is that different methods can have the
same name as long as they are associated to different types. This can help the programmer start
thinking only about the action that needs to be performed, instead of thinking about a name for that
4.3 Methods 23
specific structure. For example, instead of having to call functions named printPlayerName() and
printRefereeName(), you can simply call the structure instance’s method name printName(). This
situation is shown in Listing 4.3.
1 p a c k a g e main
2
3 type Player s t r u c t {
4 name s t r
5 }
6
7 type Referee s t r u c t {
8 name s t r
9 }
10
11 func ( p P l a y e r ) printName ( ) {
12 s t r . print ( " Player information " )
13 s t r . p r i n t ( p . name )
14 }
15
16 func ( r Referee ) printName ( ) {
17 s t r . p r i n t ( " Referee information " )
18 s t r . p r i n t ( r . name )
19 }
20
21 f u n c main ( ) {
22 var p Player
23 p . name = " M i c h a e l "
24
25 var r Referee
26 r . name = " Edward "
27
28 p . printName ( )
29 r . printName ( )
30 }
Listing 4.3: Methods Example
Another advantage of methods is that they promote safety, as they are associated to a particular
user-defined type. If a method is not defined for a type, this error will be caught at compile-time.
5. Control Flow
A program in CX is executed from top to bottom, one expression at a time. If you want a group of
expressions to not be executed, or executed only if certain condition is true, or executed a number of
times, you’ll need control flow statements. In CX, you have access to these control flow statements:
if and if/else, for loop, goto, and return.
In the following Section you’ll review the jmp function, and you’ll see that in CX, every control
flow statement is transformed to a series of jmps.
can use goto. goto will always perform an instruction jumping if encountered, and the number of
instructions that the program will be jumped to will be determined by a label. Listing 5.1 shows an
example where goto is used to jump directly to a print expression, and Listing ?? shows its AST.
1 p a c k a g e main
2
3 f u n c main ( ) {
4 goto l a b e l
5 label1 :
6 s t r . p r i n t ( " t h i s s h o u l d n e v e r be r e a c h e d " )
7 label2 :
8 s t r . p r i n t ( " t h i s s h o u l d be p r i n t e d " )
9 }
Listing 5.1: Using goto for Control Flow
It is important to note that labels are only used by goto statements and affordances (see Chapter
11). If a label is encountered by the CX runtime, it will be ignored. Actually, if you check the AST
of the program in Listing 5.1 you will see that labels don’t appear: the parser read the labels and
transformed them to the number of expressions required by a jmp to make the CX program arrive
at that expression. In the case of the code shown in Listing 5.1, the number of instructions to be
skipped by goto label1 is +1 (it could be a negative number if it had to make a jump to an early
instruction).
1 p a c k a g e main
2
3 f u n c main ( ) {
4 if false {
5 var e r r i32
6 e r r = i32 . div (50 , 0)
7 s t r . p r i n t ( " T h i s w i l l n e v e r be p r i n t e d " )
8 }
9
10 if true {
11 s t r . p r i n t ( " This w i l l always p r i n t " )
26 Chapter 5. Control Flow
12 }
13
14 i f i32 . gt (5 , 3) {
15 s t r . p r i n t ( "5 i s g r e a t e r than 3" )
16 }
17 }
Listing 5.2: Using If for Control Flow.3: Listing 5.2’s Abstract Syntax Tree
If you want to execute certain block of code if the predicate is true, and a different block of code
if the predicate is false, you can extend the if statement to its if/else form. Listing 5.4 shows an
example of how to use if/else, and the AST for this example is shown in Listing 5.5.
1 p a c k a g e main
2
3 f u n c main ( ) {
4 var out i32
5
6 i f i32 . l t e q (50 , 5) {
7 o u t = 100
8 } else {
9 o u t = 200
10 }
11
12 i32 . p r i n t ( out )
13 }
Listing 5.4: Using If/Else for Control Flow
5.3 for Loop 27.5: Listing 5.4’s Abstract Syntax Tree
The syntax of if and if/else is similar to Go’s syntax: you don’t need to enclose the predicate
in parentheses, unlike other languages like C, and the curly braces need to start after the condition,
or the parser will complain. The reason behind this is that in order to not be required to write a
semicolon after each expression, some tweaks needed to be implemented (just like in Go). As a
consequence of these tweaks, you are required to start your curly braces after the predicate. This has
the disadvantage of losing a bit of flexibility in how you are allowed to write your code, but it’s also
an advantage because the code now looks cleaner and more standardized.
1 p a c k a g e main
2
3 f u n c main ( ) ( ) {
4 for true {
5 s t r . p r i n t ( " I n f i n i t e loop ! " )
6 }
7 }
Listing 5.6: Infinite Loop Example
1 Program
28 Chapter 5. Control Flow.7: Listing 5.6’s Abstract Syntax Tree
First of all, don’t run the code above, as it’s an infinite loop. Although it’s essential to know how
to create an infinite loop, this infinite loop is particularly useless–it only prints "Infinite loop!" to the
terminal. This example illustrates how you can use a single argument as the predicate of a for loop,
as long as it evaluates to a boolean value.
1 p a c k a g e main
2
3 f u n c main ( ) ( ) {
4 var foo [ 5 ] i32
5 f o o [ 0 ] = 10
6 f o o [ 1 ] = 20
7 f o o [ 2 ] = 30
8 f o o [ 3 ] = 40
9 f o o [ 4 ] = 50
10
11 var c i32
12 f o r c = 0 ; c < 5 ; c ++ {
13 i32 . p r i n t ( foo [ c ] )
14 }
15 }
Listing 5.8: Traditional Syntax of For Loop.9: Listing 5.8’s Abstract Syntax Tree
The second example in Listing 5.8 shows the traditional syntax of a for loop, i.e., at Line 12 we
first initialize a variable, which is usually used as the counter, then we provide a predicate expression,
and finally an expression that is usually used to increment the counter. Listing 5.9 shows its AST.
5.4 return 29
5.4 return
The last control-flow statement is return. The only purpose of return is to make a function stop its
execution as soon as it is encountered. As it was mentioned in Chapter 4, return can’t be used to
return anonymous outputs, as they are not implemented yet. This means that you can’t use return
like this: return 5, " five " ; in a function that returns an i32 and a str, in that order. The correct way is
to first assign the desired values to the named outputs, and then call return whenever you want a
function to end prematurely.
1 p a c k a g e main
2
3 func foo ( ) ( out1 i32 , out2 s t r ) {
4 out1 = 5
5 out2 = " f i v e "
6
7 return
8
9 o u t 1 = 10
10 out2 = " ten "
11 }
12
13 f u n c main ( ) {
14 v a r num i 3 2
15 var t e x t s t r
16
17 num , t e x t = f o o ( )
18 }
Listing 5.10: Usage of return
The code shown in Listing 5.10 demonstrates how return prevents the function foo from
reassigning values to the output parameters.
1 Program
2 0. − P a c k a g e : main
3 Functions
4 0. − F u n c t i o n : f o o ( ) ( o u t 1 i 3 2 , o u t 2 s t r )
5 0. − E x p r e s s i o n : o u t 1 = i d e n t i t y ( 5 i 3 2 )
6 1. − E x p r e s s i o n : o u t 2 = i d e n t i t y ( s t r )
7 2. − E x p r e s s i o n : jmp ( b o o l )
8 3. − E x p r e s s i o n : o u t 1 = i d e n t i t y ( 1 0 i 3 2 )
9 4. − E x p r e s s i o n : o u t 2 = i d e n t i t y ( s t r )
10 1. − F u n c t i o n : main ( ) ( )
11 0. − E x p r e s s i o n : num , t e x t = f o o ( )
12 1. − E x p r e s s i o n : i 3 2 . p r i n t ( num i 3 2 )
13 2. − E x p r e s s i o n : s t r . p r i n t ( t e x t s t r )
14 2. − F u n c t i o n : ∗ i n i t ( ) ( )
Listing 5.11: Usage of return
The AST shown in Listing 5.11 demonstrates how a jmp is used to skip all the remaining
expressions. The parser calculates the number of expressions that follow the return statement, and
30 Chapter 5. Control Flow
If your project grows too big, you’ll need a better way to organize your code. A solution to this
would be to separate your functions into different files, but this is not a good solution as you could
still encounter problems if you end up naming another function with the same name. To make things
worse, as it was mentioned in Chapter 2, CX won’t complain if you redefine a function or a global
variable somewhere else in your code. The solution to this problem is modularization.
Modularization is a technique where you isolate groups of declarations in your source code under
a common module name. This module name works as a "last name" for all the declarations grouped
in that module, and gives every declaration a unique "full name" across all the source code files.
Each programming language has its own way of calling these isolated units of declarations. For
example, in C# they are called namespaces and in Python they are called modules. In CX, we call
these modules packages.
In Listing 6.1 we can see a program that got organized into three different packages: foo, bar
and main. Package foo declares 3 definitions: a structure named Point, a global variable named
num, and a function named bar. Package bar imports package foo and declares a single definition:
a function named returnPoint. As you can see, importing a package is handled by the import
keyword, followed by the name of the package that you want to import. Something interesting in the
function returnPoint is that it is using definitions defined in package foo. As we can see, in order
to access something from an imported package, you first need to write that package’s name, then a
period followed by the name of the definition of interest.
1 package foo
2
3 type Point s t r u c t {
4 x i32
5 y i32
6 }
7
8 v a r num i 3 2 = 15
9
32 Chapter 6. Packages
10 func bar ( ) {
11 s t r . p r i n t ( " From f o o p a c k a g e " )
12 }
13
14 package bar
15 import " foo "
16
17 func r e t u r n P o i n t ( ) ( r e s P o i n t foo . Point ) {
18 var r e s P o i n t foo . Point
19 r e s P o i n t = f o o . P o i n t { x : 1 0 , y : 20}
20 }
21
22 p a c k a g e main
23 import " foo "
24 import " bar "
25
26 f u n c main ( ) {
27 var aPoint foo . Point
28 a P o i n t . x = 30
29 a P o i n t . y = 70
30 aPoint = bar . r e t u r n P o i n t ( )
31
32 var check i32
33 c h e c k = 10
34 i32 . p r i n t ( check )
35
36 i32 . p r i n t ( aPoint . x )
37 i32 . p r i n t ( aPoint . y )
38
39 var foo1 foo . P o i n t
40 f o o 1 . x = 20
41 f o o 1 . y = 30
42 i32 . p r i n t ( foo1 . x )
43 i32 . p r i n t ( foo1 . y )
44
45 i 3 2 . p r i n t ( f o o . num )
46 foo . bar ( )
47 i 3 2 . p r i n t ( f o o . num )
48 }
Listing 6.1: Importing Packages Example
The different packages and their definitions can be placed altogether in a single file (unlike in
other languages, where you have to use a file or a directory for a single package or module), but this
can become unpractical sooner than later, so it is advised that you use a single package per directory,
as in the programming language Go. Also, CX projects behave similarly to Go projects, where you
have to place your files in a directory in a CX workspace. CX workspaces are described in Section
6.1.
6.1 CX Workspaces
Dividing your code into different files is essential as your projects grow bigger. CX takes an approach
similar to Go for handling projects: a package in a directory can split into a number of files, but
6.1 CX Workspaces 33
you can’t use more than one package declaration in these files inside this directory. In other words,
a directory represents a package. An exception to this rule would be declaring several packages
in a single file. The purpose of this exception is to allow the programmer to test ideas quickly
without them being required to create packages in different directories and another directory for their
application (which will contain the main package).
Listings 6.2 and 6.3 show the code for two packages: math and main. The math code needs
to be in a file named whatever you want, inside a directory that you should name the same as your
package. It’s not mandatory to do so, but the consistency helps other programmers that are reading
your code.
1 p a c k a g e math
2
3 func double ( ) ( out i32 ) {
4 o u t = i 3 2 . add ( 5 , 2 )
5 }
Listing 6.2: Package to be Imported
1 p a c k a g e main
2 i m p o r t " math "
3
4 f u n c main ( ) {
5 s t r . p r i n t ( " hi " )
6 var foo i32
7 f o o = math . d o u b l e ( )
8 i32 . p r i n t ( foo )
9 }
Listing 6.3: Main Package
1 Program
2 0. − P a c k a g e : math
3 Functions
4 0. − F u n c t i o n : d o u b l e ( ) ( o u t i 3 2 )
5 0. − E x p r e s s i o n : o u t = i 3 2 . add ( 5 i 3 2 , 2 i 3 2 )
6 1. − P a c k a g e : main
7 Imports
8 0. − I m p o r t : math
9 Functions
10 0. − F u n c t i o n : main ( ) ( )
11 0. − E x p r e s s i o n : s t r . p r i n t ( " h i " s t r )
12 1. − E x p r e s s i o n : f o o = d o u b l e ( )
13 2. − E x p r e s s i o n : i 3 2 . p r i n t ( f o o i 3 2 )
14 1. − F u n c t i o n : ∗ i n i t ( ) ( )
Listing 6.4: Resulting Abstract Syntax Tree
The AST for the full program can be seen in Listing 6.4 . As you can see, each package lists
34 Chapter 6. Packages
the packages that were imported. The names of these packages are the names that were given to the
package declaration. In other words, if you name your package’s directory foo but you declare your
package in your code as bar, CX will handle all the calls to this package through the latter instead of
the former name.
But where exactly do you have to put all this code? CX, as mentioned before, follows the same
philosophy as Go: you work in workspaces. A workspace is a directory dedicated solely to manage
your projects, dependencies, executables and shared libraries. A workspace can be any directory in
your file system that contains these three directories: bin, src and pkg. bin is used to store the binary
files of your projects and/or libraries; src is used to store the source code of your projects and their
dependencies; and pkg stores object files that are used to create the executables stored in bin.
After installing CX for the first time, the installation script will create a default workspace for
you located at $HOME/cx or %USERPROFILE%\cx, depending on what operating system you are
using: unix-based systems or Windows, respectively. If you want to override this, you can set the
environment variable $CXPATH or %CXPATH% to a file system path where you want your CX
workspace to reside.
A way to get started quickly with a new CX project is to use the CX executable to create one for
you. You only have to write cx -n or cx –new and a series of questions about your new project will
be asked to you that will be used to initialize it.
Just like in Go, a project without main package or function is considered a library to be imported
by other packages or applications, while a project with a main package and function is considered
an application that is going to be calling the other projects in the src/ directory of your workspace as
libraries.
If you’re working in a single file, you can just import your packages using the name you used in
the package declaration statement, like in Listing 6.1. If you are dealing with packages from different
directories in your workspace, then you need to make sure that you write the full path to the desired
package. For example, if the package you want to import is located in $CXPATH/src/math_stuff/ stats or
%CXPATH%\src\math_stuff\stats, you’d need to import the package like this: import " math_stuff / stats ". As
you can see, you have to omit the src part because all of the libraries need to be there anyway.
7. Pointers
Programming languages that use a stack to pass values to function calls can pass the actual value
or a reference to it. Passing by value means that all the bytes that represent that data structure are
copied to the function call. In the case of a simple integer or a floating-point number, this isn’t a big
problem, because you’re copying at most 8 bytes. The real problem arises when you try to pass a
really big data structure, like an array or a string (which is basically an array). Copying all these
bytes every time a function is called creates two problems: 1) it is slow; imagine that you have to
execute a for loop that iterates N times, where N is the size of your data structure, and you have to
do this every time you call that function; and 2) you are more prone to encounter a stack overflow
error, as you are filling your stack with all these copies of your data structure.
A solution to the pass-by-value problem is to use pass-by-reference. In pass-by-reference, instead
of copying the actual value, you send the address of the value that you want to use. A reference is
just a number that represents the index where you can find the actual value in memory, and as such,
a reference only needs 4 bytes, as it’s just a normal 32-bit integer. This also means that creating
a pointer to a 32-bit integer is useless if your purpose is to increase your program’s performance
(actually, using a pointer would make your program a tiny bit slower, because it needs to dereference
the pointer).
1 p a c k a g e main
2
3 type Cell s t r u c t {
4 id i32
5 drawable i32
6 a l i v e bool
7 aliveNext bool
8 x i32
9 y i32
10 }
11
12 f u n c main ( ) {
13 var c e l l s ∗[900] Cell
36 Chapter 7. Pointers
14 c e l l s = makeCells ( )
15
16 f o r b o o l . n o t ( g l f w . S h o u l d C l o s e ( " window " ) ) {
17 draw ( c e l l s , " window " , p r o g r a m )
18 }
19 }
Listing 7.1: Pointer to a Structure Instance
The code in Listing 7.1 presents a situation where using a pointer drastically improves the
performance of a program. Line 13 shows the declaration of a variable of type pointer to an array
of structure instances. This variable is then used to hold the output of makeCells, and the for loop
draws all the cells to a window. If we weren’t using a pointer, we’d need to pass by value all the 900
cells, which sum a total of 16,200 bytes. In contrast, by using a pointer we’re only sending 4 bytes
that represent the other 16,200 bytes.
This Listing shows an excerpt of an OpenGL example present in the CX git repository ().
The example is currently located at examples/opengl/conways−game−of−life−gc.cx, but this path could
change in the future. If you try to run this example using your local CX installation, you’ll find out
that it doesn’t run, so download the full example from the CX repository.
1 p a c k a g e main
2
3 var epsilon i32
4
5 f u n c b a r ( ) (w i 3 2 ) {
6 w = 5
7 }
8
9 f u n c f o o ( num1 i 3 2 , num2 i 3 2 ) ( r e s i 3 2 ) {
10 var weight i32
11
12 weight = bar ( )
7.1 Memory Segments 37
13
14 r e s = ( num1 + num2 ) ∗ w e i g h t ∗ e p s i l o n
15 }
16
17 f u n c main ( ) {
18 epsilon = 5
19 e p s i l o n = foo (10 , 10)
20 }
Listing 7.2: Pointer to a Structu
Listing 7.2 helps us to understand the different memory segments in CX. Line 3 declares a
global variable, which will be set in the data segment. In this particular program, the data segment is
only going to be 4 bytes long, as it only needs to store one 32-bit integer. Just after compiling the
program, these 4 bytes will be set to 0s, but as soon as the program is run, the very first instruction to
be run at Line 18 is an assignment to epsilon, which will modify the data segment to hold 5 0 0 0.
As the main function does not declare any variables, the stack segment will not be used until we
call foo at Line 19. Before starting the execution of this function call, CX reserves a certain amount
of bytes for that call in the stack. This amount of bytes needs to be constant throughout a program’s
execution, i.e. CX knows how many bytes to allocate for any function call after compile time. In
this case, foo needs 16 bytes, because it has two i32 input parameters, one i32 output parameter
and one i32 local variable declaration. Before foo ends its execution, it makes a function call to
bar. This means that CX needs to keep foo’s bytes "alive," as the function call has not finished yet.
Instead, CX needs to reserve 4 more bytes for bar for its i32 output parameter. Once bar finishes its
execution, the 4 bytes reserved for it can now be discarded, and the program’s execution returns to
foo. After Line 14 finishes, foo’s execution will also finish, and the bytes reserved for it can now
be discarded. Some details about this process were not mentioned on purpose, but the general idea
should be clear now.
As you can see, the stack is always growing and shrinking, and it does this in a linear manner,
i.e. you’re never going to be discarding bytes in the middle or at the beginning, only the most
recent reserved bytes are the ones that get discarded. This behavior avoids fragmentation, which is a
problem when using the heap segment (we’ll review this topic in Chapter 10).
CX does not support multi-threading yet, but it is interesting to note that multiple stacks need
to be used for multi-threaded programs. Every time you create a new thread, a new stack must be
assigned to that thread.
1 p a c k a g e main
2
3 f u n c g r e e t i n g s ( name s t r ) ( g s t r ) {
4 g = s p r i n t f ( " G r e e t i n g s , %s " , name )
5 }
6
7 f u n c main ( ) {
8 v a r name s t r
9 name = " W i l l i a m "
10 name = g r e e t i n g s ( name )
11 s t r . p r i n t ( name )
12 }
Listing 7.3: Pointer to a Structu
38 Chapter 7. Pointers
To begin understanding the heap segment, we can have a look at Listing 7.3. This program
creates a str variable, the string "William" is assigned to it, it is sent to greetings, and its result is
re-assigned to name to later be printed to the terminal. You may be wondering what this program
has to do with the heap, as no pointers are ever declared. Well, first you need to keep in mind that
the stack needs to grow/shrink in constant "chunks" of bytes and the data segment never grows or
shrinks. Now pay attention to Lines 9 and 10. First, name is holding the value "William" and then it
will hold the value "Greetings, William." Do you see the problem here? If these strings were handled
only by the stack, we would have a variable-sized function, which is not allowed.
Strings in CX basically behave as pointers. Whenever a string needs to be allocated, it is allocated
in the heap, not in the stack or data segments. After its allocation, a pointer to this string is assigned
in the stack. This way, functions that handle strings can be fixed-sized, as pointers always have a size
of 4 bytes. Going back to the example in Listing 7.3, name is first assigned the address of the string
"William" allocated in the heap, then a new string, "Greetings, William", is allocated in the call to
greetings, and its address is returned as its output and re-assigned to name. This means that you can
allocate whatever object you need at any point in the heap, in any order and wherever you want.
1 p a c k a g e main
2
3 type Point s t r u c t {
4 x i32
5 y i32
6 }
7
8 func CreatePoints ( ) {
9 var points [5] Point
10 var p t r ∗[5] Point
11
12 var c i32
13 f o r c = 0 ; c < 5 ; c ++ {
14 p o i n t s [ c ] = P o i n t { x : c , y : c + 1}
15 }
16
17 p t r = &p o i n t s
18 }
19
20 f u n c main ( ) {
21 for true {
22 CreatePoints ()
23 }
24 }
Listing 7.4: Pointer to a Structu
But allocating anything you want and wherever you want isn’t problematic? Indeed, it is so
problematic that in some programming languages you need to personally take care of what and
when you want to allocate a new object in the heap, and even when you need to destroy that object.
These languages are said to have "manual memory management," and perhaps the most popular
language of this type is C. For example, Listing 7.4 executes an infinite loop that repeatedly calls
CreatePoints, which creates an array of 5 Point instances, and allocates them in the heap. As you
can notice, nothing else happens with the pointer to this array, CreatePoints simply allocates this
7.1 Memory Segments 39
array of Point instances, and then returns. Now, as we are doing this an indefinitely number of times,
wouldn’t this program cause a heap overflow eventually? Not really, CX’s garbage collector will
be activated each time the heap is full, and remove the objects that are no longer being used. The
resulting dead objects could be anywhere in the heap, which will cause fragmentation, but don’t
worry as the garbage collector deals with this problem too. As can be noted, the heap is the most
flexible memory segment.
The last memory segment is the code segment. This segment can be modified at will, unlike in
other programming languages. This segment holds all the program’s elements, such as functions,
expressions and structure declarations. Modifying this memory segment will be discussed in Chapter
11.
8. OpenGL and GLFW with CX
In the Skycoin team we believe that a bright future exists for blockchain technologies in video game
development. For this reason, one of the first libraries that was developed for CX was the OpenGL
library. This Chapter presents some video game examples that should help you get started with video
game development in CX. In order to use the OpenGL and GLFW libraries in your CX programs,
just import "gl" and import "glfw" after a package declaration.
The current OpenGL library does not implement all of the OpenGL functions and constants, but
it should implement everything in the future. The OpenGL version that the CX library targets is 2.1.
CX also provides a GLFW library that helps the programmer set up things like windows and
input devices. The GLFW version targeted by the CX library is 3.2.
The examples in this Chapter are not explained thoroughly, as the purpose of this book is to
explain the features of the CX programming language, not to explain how OpenGL and GLFW work. g l f w . WindowHint ( g l f w . R e s i z a b l e , g l f w . F a l s e )
12 g l f w . WindowHint ( g l f w . C o n t e x t V e r s i o n M a j o r , 2 )
13 g l f w . WindowHint ( g l f w . C o n t e x t V e r s i o n M i n o r , 1 )
14
15 g l f w . CreateWindow ( " window " , w i d t h , h e i g h t , " Window Example " )
16 g l f w . M a k e C o n t e x t C u r r e n t ( " window " )
17
18 gl . I n i t ()
19 var program i32
41
20 program = g l . CreateProgram ( )
21 g l . LinkProgram ( program )
22
23 f o r b o o l . n o t ( g l f w . S h o u l d C l o s e ( " window " ) ) {
24 g l . C l e a r ( i 3 2 . b i t o r ( g l . COLOR_BUFFER_BIT , g l . DEPTH_BUFFER_BIT ) )
25
26 g l . UseProgram ( p r o g r a m )
27
28 glfw . P o l l E v e n t s ( )
29 g l f w . S w a p B u f f e r s ( " window " )
30 }
31 }
The first step to creating a video game is to create the window where everything is going to be
displayed. Listing 8.1 shows a bare-bones example that only displays an empty window. You could
think that it’s a lot of instructions to only accomplish a simple task such as creating a window, but
it’s the OpenGL way. This example can be used as a template to start a new OpenGL project in CX.
The window has a resolution of 800x600, as defined by the global variables width and height,
AT Lines 6, respectively. The function that actually creates the window to be displayed is created at
Line 15, and it is constantly being re-drawn in the loop that begins at Line 23.
12 g l f w . CreateWindow ( " window " , w i d t h , h e i g h t , " T r i a n g l e " )
13 g l f w . M a k e C o n t e x t C u r r e n t ( " window " )
14
15 gl . I n i t ()
16
17 var program i32
18 program = g l . CreateProgram ( )
19
20 g l . LinkProgram ( program )
21
22 f o r b o o l . n o t ( g l f w . S h o u l d C l o s e ( " window " ) ) {
23 g l . C l e a r ( g l . COLOR_BUFFER_BIT )
24
25 g l . UseProgram ( p r o g r a m )
26
27 g l . MatrixMode ( g l . PROJECTION )
28 gl . LoadIdentity ()
29 g l . MatrixMode ( g l .MODELVIEW)
30
42 Chapter 8. OpenGL and GLFW with CX
31 gl . B e g i n ( g l . TRIANGLES )
32 gl . Color3f (1.0 , 0.0 , 0.0)
33 gl . V e r t e x 3 f ( − 0 . 6 , −0.4 , 0 . 0 )
34 gl . Color3f (0.0 , 1.0 , 0.0)
35 gl . V e r t e x 3 f ( 0 . 6 , −0.4 , 0 . 0 )
36 gl . Color3f (0.0 , 0.0 , 1 . 0 ) ;
37 gl . Vertex3f (0.0 , 0.6 , 0 . 0 ) ;
38 gl . End ( ) ;
39
40 glfw . P o l l E v e n t s ( )
41 g l f w . S w a p B u f f e r s ( " window " )
42 }
43 }
Listing 8.2: Drawing a Triangle to a Window
Now that we can create a window and display it, let’s draw something on it. Listing 8.2 adds
some lines of code to the code in Listing 8.1 (Lines 27 - 38). Functions gl.Color3f and gl.Vertex3f
are used to assign a color and coordinates to a vertex for a triangle, enclosed by calls to gl.Begin and
gl.End. After running the code in this Listing, you should see a window with a triangle like in the
one displayed in Figure 8.1.
1 p a c k a g e main
2
3 import " gl "
4 import " glfw "
5
6 v a r w i d t h i 3 2 = 800
7 v a r h e i g h t i 3 2 = 600
8
9 type Ball s t r u c t {
10 x f32
11 y f32
12 vx f 3 2
43
13 vy f 3 2
14 g r a v i t y f32
15 r a d i u s f32
16 }
17
18 func drawBall ( b a l l Ball ) ( ) {
19 var f u l l _ a n g l e f32
20 f u l l _ a n g l e = f 3 2 . mul ( 2 . 0 , 3 . 1 4 1 5 9 2 6 5 4 )
21 var x f32
22 var y f32
23
24 g l . B e g i n ( g l . POLYGON)
25 gl . Color3f (1.0 , 1.0 , 1.0)
26
27 var i f32
28 f o r i = 0 . 0 ; f 3 2 . l t ( i , 2 0 . 0 ) ; i = f 3 2 . add ( i , 1 . 0 ) {
29 x = f 3 2 . add ( b a l l . x , f 3 2 . mul ( b a l l . r a d i u s , f 3 2 . c o s ( f 3 2 . d i v ( f 3 2 . mul ( i , f u l l _ a n g l e ) , 2 0 . 0 ) ) ) )
30 y = f 3 2 . add ( b a l l . y , f 3 2 . mul ( b a l l . r a d i u s , f 3 2 . s i n ( f 3 2 . d i v ( f 3 2 . mul ( i , f u l l _ a n g l e ) , 2 0 . 0 ) ) ) )
31
32 gl . Vertex2f (x , y )
33 }
34
35 g l . End ( )
36 }
37
38 f u n c main ( ) ( ) {
39 glfw . I n i t ( )
40
41 g l f w . CreateWindow ( " window " , w i d t h , h e i g h t , " B o u n c i n g B a l l " )
42 g l f w . M a k e C o n t e x t C u r r e n t ( " window " )
43
44 gl . I n i t ()
45 var program i32
46 program = g l . CreateProgram ( )
47 g l . LinkProgram ( program )
48
49 var b a l l Ball
50 ball = Ball {
51 radius : 0.05 ,
52 x : 0.0 ,
53 y : 0.0 ,
54 vx : 0 . 0 1 ,
55 vy : 0 . 0 1 ,
56 g r a v i t y : 0.01}
57
58 f o r b o o l . n o t ( g l f w . S h o u l d C l o s e ( " window " ) ) {
59 g l . C l e a r ( g l . COLOR_BUFFER_BIT )
60
61 g l . UseProgram ( p r o g r a m )
62
63 g l . MatrixMode ( g l . PROJECTION )
64 gl . LoadIdentity ()
65 g l . MatrixMode ( g l .MODELVIEW)
66
67 i f f 3 2 . l t e q ( f 3 2 . add ( b a l l . y , b a l l . r a d i u s ) , −1.0) {
68 b a l l . vy = f 3 2 . a b s ( b a l l . vy )
44 Chapter 8. OpenGL and GLFW with CX
69 } else {
70 b a l l . vy = f 3 2 . s u b ( b a l l . vy , b a l l . g r a v i t y )
71 }
72
73 b a l l . x = f 3 2 . add ( b a l l . x , b a l l . vx )
74 b a l l . y = f 3 2 . add ( b a l l . y , b a l l . vy )
75
76 drawBall ( b a l l )
77
78 glfw . P o l l E v e n t s ( )
79 g l f w . S w a p B u f f e r s ( " window " )
80 }
81 }
Listing 8.3: Bouncing Ball Example
As the final example of this Chapter, Listing 8.3 presents a little more complex situation. We
use a structure that will represent a ball to be drawn on screen, declared at Line 9. In the for loop
that updates the screen (Lines 58-80) we update the coordinates (x and y) of the ball, and draw the
ball’s new position to the window. The function drawBall uses the coordinates of the ball structure
instance as a center, and uses its radius to draw a circle using polygons, which represents the ball.
After running this last example, you should see a ball that starts at the center of the screen, and
starts falling and bouncing to the right of the screen. It should display something similar to Figure
8.2.
9. Interpreted and Compiled
As has been noted in previous Chapters, CX is both an interpreted and a compiled language. But this
doesn’t only mean that you can run a program by interpreting it or compile it and then run it; CX
goes further. CX can work with both compiled and interpreted code at the same time, just like some
languages, such as Common Lisp. The reason behind this design decision is that it maximizes the
number of features CX can provide. For example, a function that is constantly being constructed by
affordances is far easier to be evaluated if it’s purely interpreted, instead of recompile the function
every time (or maybe even the whole program).
CX started as being purely interpreted, mainly because the Skycoin team was still testing some
ideas on what direction the language was going to take. As the language progressed in complexity,
and we wanted to test programs that were more expensive regarding computational resources, it was
clear that CX needed to implement optimization techniques to the code it was generating. However,
we realized that the current design had reached certain limit. The generated programs were very
flexible, as many features of the language were managed by the underlaying language: Go. This
flexibility allowed CX to implement affordances, an integrated genetic programming algorithm and
other features in a short amount of time. Nevertheless, its speed was comparable to Martz’s Ruby
(no, not Ruby, Martz’s Ruby is about 5 times slower than Ruby). As a consequence to this, CX had
to take another direction in its design, and some core optimizations were implemented.
Nowadays CX is pretty fast, even if a plethora of optimizations still need to be implemented.
At some benchmark tests CX scored a similar speed to Python, but we still need to perform more
benchmarks to get a more objective conclusion. Even if the resulting speed is actually 5 times
slower than Python, it’s far better than before and, as stated above, many optimizations can still be
implemented.
1 $ cx h e l l o −w o r l d . cx
2 Hello , world !
3 $ cx h e l l o −w o r l d . cx − i
4 Hello , world !
Listing 9.1: Interpreting and Compiling the same Program
46 Chapter 9. Interpreted and Compiled
You may be wondering what happened to the interpreted version of CX. It’s still in use and it is
faster now than before. We realized that some of the optimizations that were implemented for the
compiled version can work with the CX interpreter, and it got benefited from them. It is still slow,
but it retained all its flexibility. If you open a CX REPL, you’ll be running the CX interpreter, and if
you run a $ cx example.cx command, without the -i flag, you’ll be running the compiled version of CX
(this is shown clearer in Listing 9.1).
Having both interpreted and compiled code results in a workflow you can follow to maximize
productivity and performance. You can use the CX interpreter to test code without having to be
re-compiling your code every time, and when you’re done testing and fine-tuning your code, you can
compile for speed and better memory management.
seconds. This performance can be acceptable for some kind of programs or for testing some ideas,
but it’s definitely unacceptable for most programs in their production stage.
The CX’s compiler is not exactly a compiler in the traditional sense of the word, but it definitely
will become one soon. We call it a compiler for now because it will become one and because of the
optimizations it makes to the generated code. In a sense, that can be already considered a compiler,
as the code is not run line-by-line as in an interpreter. We are only lacking a proper way to create
executables targeted to a platform (operating system and CPU).
As stated in the previous Section, CX’s compiled code performs similarly to Python in some
tests. Python should beat CX in other benchmarks, as it’s a language that has been optimized since
1991, but at least it’s not super slow as CX’s interpreted programs.
Another feature of CX’s compiler is that it has its own garbage collector now. Go’s garbage
collector is remarkable, but it was not working well with CX. Now that CX has its own memory
segments, we can optimize very well how that memory is allocated.
In conclusion, the compiler was not necessary in terms of features, but it was definitely necessary
as performance is almost always a critical aspect of any programming language. Even interpreted
languages are often discarded or chosen because of their speed or how well they manage memory.
10. Garbage Collector
CX is a garbage collected language, unlike other languages like C, where you have to manually
manage memory allocations and deallocations, or languages like Rust that adopt other techniques
to manage memory. Manual memory management brings the advantage of efficiency in memory
deallocations, but at the expense of possible memory leaks. If you define a routine where you allocate
some objects and then you forget to properly deallocate them when they’re no longer being used, you
could end up exhausting your heap memory, and the program could crash. Another problem is that it
could not necessarily crash immediately, but after some days or weeks of use. Maybe the program
is not properly deallocating a single object every hour, so exhausting your heap memory will take
some time, but it will definitely happen if the program is meant to be run forever, such as a web
service. For this reason, programs made in C, for example, are usually used to solve problems where
efficiency is far more important than reliability, and garbage collected languages, such as Java or Go,
are usually used to write software systems meant to run for large periods of time, where reliability is
preferred over resource efficiency. Manual memory management is less important nowadays that
computing resources are cheaper than ever (although this statement can not be treated as a fact, we
can clearly see a tendency to opt for automatic memory management in the present). Additionally,
many garbage collectors are now extremely efficient and the impact on a program’s performance
could be regarded as negligible in many situations.
For the reasons stated above, we decided to make CX a garbage collected language, although
in the future you’ll be able to handle memory manually too. One of the platforms that we want to
target in the future is micro-controllers, and manual memory management is usually preferred in this
situation. But for now, all programs made in CX are garbage collected.
destroyed in a sequential manner, and they’ll always have the same relative address in the stack.
However, programs limited to fixed-sized data structures are not going to be able to solve many
situations, or not conveniently at least. For this reason, it is practical to have another segment of
memory called the heap, where variable sized data structures can be stored. Objects in the heap, in
CX, start being allocated sequentially, just like in the stack, but they can be destroyed in an arbitrary
order. This behavior leaves fragmented chunks of memory being used, and other fragments that
are no longer being used. A garbage collector’s mission is to manage these fragmented chunks of
memory.
The concept of affordance was developed by the psychologist James J. Gibson, and it was first
presented in [Gib66]. The traditional explanation of what an affordance is can be found in the
aforementioned work, and it is as follows:.
There are obviously some rules or limitations on what it can be done with affordances. For
example, you cannot add a structure field to a function declaration. A less obvious example is that
you cannot send a 32-bit integer to str.print(), as this function is expecting an argument of type
str. These limitations help reduce the number of affordances in a program, but they can still be a
lot, even in relatively small programs. The solution to handling this problem is to implement some
mechanism that allows us to get only those affordances that are useful. This mechanism is a rule
set that you can define before asking CX what affordances are available at certain point during a
program’s execution. These rules will examine the elements that can be part of an affordance, and
check if they meet some criteria. For example, in a video game we could reject any player that has hit
points lower than certain quantity, or allow a boss to appear to the screen if the player has completed
certain quests. Although these examples could have been solved using simple if/else statements,
affordances can solve more obscure and complex problems, as they have a true global scope. For
example, if you wanted to check if an object has already been discarded by CX’s garbage collector,
you can do that with affordances. Or what about if you want to have access to values of variables in
previous function calls, that’s right, you can do that with affordances.
Affordances were created with the purpose of increasing security in a program. There are certain
types of attacks where a function call can access other parts of memory. In this case, affordances add
an extra layer of security, assuring that only a limited number of elements can interact with other
elements of a program.
It is worth noting that affordances not only act at compile-time, but also at runtime. You can
create a function that is constantly evaluating what is allowed in the interaction among a program’s
elements.
As a last note before looking at the examples that follow, please bear in mind that CX’s affordance
system is still under development and many of its features could change in the future. For example,
the rule set was previously managed by an embedded Prolog interpreter, and you had to know some
Prolog in order to use it. This was obviously a very bad idea, but it allowed us to experiment with
many of the possibilities of affordances. Now the rules are created using a very simple syntax. At
the moment, affordances can only work with expressions, but most of the code to manipulate other
program elements is almost complete.
1 p a c k a g e main
2
3 f u n c main ( ) {
4 foo1 := 1
5 foo2 := 2
6 foo3 := 3
7
8 t a r g e t : = −>{
9 pkg ( main ) f n ( main ) exp ( m e s s a g e )
10 }
11
12 r u l e s : = −>{
13 allow (∗ > 1)
14 }
15
16 a f f s := a f f . query ( t a r g e t , r u l e s )
17 aff . print ( affs )
18 a f f . execute ( t a r g e t , affs , 0)
19
52 Chapter 11. Affordances
20 message :
21 i32 . p r i n t (0)
22 }
Listing 11.1: Using Affordances on an Expression
Listing 11.1 shows a basic program that uses affordances to filter among the possible values that
the expression at Line 21 can take. As this is a small program, the only possible values are those
being held by foo1, foo2 and foo3. In order to know what expression we want to target, we need
to label it first. To do this, we can simply use to-do labels, as seen at Line 20, where we label our
target expression as "message." The next step is to create a variable to hold the target expression.
To do this, we use the affordance mini programming-language, which is called by writing ->, and
we write the desired statements inside of the braces. Creating a target is done at Line 8. Targets are
constructed by going down in levels of scope: the package is specified first, then the function, and
lastly the expression. To specify the desired package, you use (pkg) followed by the name of the
package enclosed by parentheses. For functions, you use fn, followed by the name of the function
enclosed by parentheses. Lastly, to specify the expression, you use exp followed by the label given
to the expression, again, enclosed by parentheses.
Rules, as mentioned before, are used to filter the possible options. In this example, rules are
defined at Line 12, and they contain only one clause: allow anything that is greater than 1. The
asterisk in here represents the initial allowed objects to be sent to the targeted expression. As the
expression is waiting for a 32-bit integer, the asterisk will be of type i32. Think about it like how the
x in mathematics can mean any number but, in this case, it can mean any program element. If the
targeted expression can receive a structure instance as its input, we could create predicates of the
form ∗. field == something, for example.
Now that we have both the target and the rule set, we can query CX’s affordance system using
aff.query, as shown at Line 16. The results returned by aff.query can be pretty-printed to the
console by calling aff.print, as shown at Line 17. This is useful if you want the user to be involved
on what affordance to execute. For example, you could use affordances to create an entire program
just by selecting the options that you want, and aff.print would be used to let the programmer know
what affordance to execute. When you have chosen an appropriate affordance, either manually or
automatically, you can execute it by calling aff.execute, as shown at Line 18. aff.execute takes
three arguments as inputs: a target, the set of affordances, and an index representing the desired
option to execute. As you can see, you could execute the same affordance to several targets, and
execute several affordances by specifying different indexes. In the case of the above example, we
simply execute the first option, represented by index 0. After running the whole program, the number
2 should be printed to the console, as is the first element that is greater than 1.
1 p a c k a g e main
2
3 var goNorth i32 = 1
4 var goSouth i32 = 2
5 var goWest i 3 2 = 3
6 var goEast i32 = 4
7
8 f u n c map2Dto1D ( r i 3 2 , c i 3 2 , w i 3 2 ) ( i i 3 2 ) {
9 i = w ∗ r + c
10 }
53
11
12 f u n c map1Dto2D ( i i 3 2 , w i 3 2 ) ( r i 3 2 , c i 3 2 ) {
13 r = i / W
14 c = i %W
15 }
16
17 f u n c r o b o t ( row i 3 2 , c o l i 3 2 , a c t i o n i 3 2 ) ( r i 3 2 , c i 3 2 ) {
18 i f a c t i o n == 1 {
19 r = row − 1
20 c = col
21 }
22 i f a c t i o n == 2 {
23 r = row + 1
24 c = col
25 }
26 i f a c t i o n == 3 {
27 c = col − 1
28 r = row
29 }
30 i f a c t i o n == 4 {
31 c = col + 1
32 r = row
33 }
34 }
35
36 f u n c g e t R u l e s ( row i 3 2 , c o l i 3 2 , w i d t h i 3 2 , wallMap [ 2 5 ] b o o l , wormholeMap [ 2 5 ] b o o l ) ( r u l e s a f f ) {
37 r u l e s −>= a l l o w ( ∗ == ∗ )
38
39 i f wallMap [ map2Dto1D ( row − 1 , c o l , w i d t h ) ] {
40 r u l e s −>= −>{ r e j e c t ( ∗ == 1 ) }
41 }
42
43 i f wallMap [ map2Dto1D ( row + 1 , c o l , w i d t h ) ] {
44 r u l e s −>= −>{ r e j e c t ( ∗ == 2 ) }
45 }
46
47 i f wallMap [ map2Dto1D ( row , c o l + 1 , w i d t h ) ] {
48 r u l e s −>= −>{ r e j e c t ( ∗ == 3 ) }
49 }
50
51 i f wallMap [ map2Dto1D ( row , c o l − 1 , w i d t h ) ] {
52 r u l e s −>= −>{ r e j e c t ( ∗ == 3 ) }
53 }
54
55 i f wormholeMap [ map2Dto1D ( row − 1 , c o l , w i d t h ) ] {
56 r u l e s −>= −>{ a l l o w ( ∗ == 1 ) }
57 }
58
59 i f wormholeMap [ map2Dto1D ( row + 1 , c o l , w i d t h ) ] {
60 r u l e s −>= −>{ a l l o w ( ∗ == 2 ) }
61 }
62
63 i f wormholeMap [ map2Dto1D ( row , c o l + 1 , w i d t h ) ] {
64 r u l e s −>= −>{ a l l o w ( ∗ == 3 ) }
65 }
66
54 Chapter 11. Affordances
Listing 11.2 shows a much more complex example. The program is an extremely naive represen-
tation of a robot moving on a map. The map is built using two arrays, where each of the indexes
represents a room, and the indexes "surrounding" them are used as the contiguous rooms. One of the
arrays has walls, and the other one wormholes. If the robot encounters a wall on the map, it can’t
move to that direction, but if a wormhole is on the wall, it can move to the other side. In the arrays, a
true value means that a wall or a wormhole is present there, and a false means there is not. In the
example, there is no wormhole, so you can play with the values to see the different results.
12. Serialization
In CX, every program object and piece of data can be serialized at any moment, preserving any state
in which the program is. You can choose to serialize all the program or only certain parts of it, such
as structure instances or functions. These serialization features are very useful, as you can save a
program to a file or a database.
The serialization process not only involves the program structure, e.g. function declarations and
structure instances. Other parts of a CX program are also serialized, such as the call stack and the
different memory segments in CX. This means that a program can be totally or partially serialized,
and it can resume its execution later on. A program could be paused, serialized and sent over a
network to different computers to be executed in there. A common example of CX’s serialization
combined with other of CX’s features is as follows.
Imagine you want to evolve programs to predict a financial market’s price movement. You can
start evolving functions inside of a CX program using its integrated genetic programming system (see
Chapter 13). At certain points in time you can save these serializations to a database, for example,
programs which achieved a very good performance. You can then send some of these serializations
to other workstations or servers where they will initialize a separate evolutionary process. This is
something similar to taking some monkey from Earth to different planets in the Universe: wait a few
millions of years, and then check how they evolved in each of these planets (only if you believe in
the theory of evolution, though; otherwise, they will still be monkeys).
Evolutionary algorithms can often be manually manipulated (imagine aliens interfering with
the DNA of a planet’s species). A person can log in to one of the workstations or servers in this
evolutionary network, and check some of the individuals being evolved in CX. This person will
just have to pause the program using the CX REPL, and check the program’s structure using the
:dp meta-command (see Chapter 15). But maybe this person doesn’t know what can be added or
removed from the function being evolved. This is not a problem, because the function is evolving
according to a rule set established in CX’s affordance system (see Chapter 11, and you’d only need
to call the affordance system in the CX REPL and start selecting options from a menu. After being
happy with the changes, the program can be resumed again by issuing the meta-command :step 0,
so the program continues its execution (see Chapter 15).
56 Chapter 12. Serialization
12.1 Serialization
Now let’s see how you can serialize the different program elements and data in CX. }
Listing 12.1: Serialization of a Program
Listing 12.1 shows how to serialize a full program using the function serialize, which is the
one that we’re going to be using in all the subsequent examples. We can tell the function serialize
what to serialize by using the affordance operator (->) (see Chapter 11). In the case of serialization,
we’re only going to be using the affordance operator to specify a target to be serialized. In the case
of Listing 12.1, we’re leaving the target empty. This is a special case that instructs CX to serialize
everything or, in other words, the full program.
1 p a c k a g e main
2
3 f u n c main ( ) {
4 var t a r g e t a f f
5 var r e s u l t [ ] byte
6
7 t a r g e t = −>{pkg ( main ) }
8 result = serialize ( target )
9 }
Listing 12.2: Serialization of a Package
If your program only has one package, as in Listing 12.2, you could end up with a similar
serialization as in Listing 12.1, but with some differences. −>{} instructs CX to serialize everything,
including CX’s memory segments (see Chapter 10), whereas −>{pkg(main)} is only going to cause a
serialization of the program segment (the program structure).
1 p a c k a g e main
2
3 f u n c main ( ) {
4 var t a r g e t a f f
5 var r e s u l t [ ] byte
6
7 t a r g e t = −>{mem( h e a p ) }
8 result = serialize ( target )
9
10 t a r g e t = −>{mem( s t a c k ) }
11 result = serialize ( target )
12.1 Serialization 57
12
13 t a r g e t = −>{mem( d a t a ) }
14 result = serialize ( target )
15 }
Listing 12.3: Serialization of the Memory Segments
To serialize the other memory segments, you can use the affordance target mem(), and give
either heap, stack or data as its argument, as seen in Listing 12.3.
It is worth noting that if you serialize your stack, you are actually serializing all the stacks. CX,
at the time of writing, is still not a multi-threaded programming language. Nevertheless, it should
soon become one, and the data contained in all of the stacks will be serialized. Additionally, CX
manages its call stack and stack separately, but both of these segments are serialized together when
calling serialize with mem(stack).
In later versions of CX, we might introduce native functions to process information from these
serialization results, but for now, you can only deserialize this information into another instance of a
CX program (see the next Section) or process the byte slice byte by byte to do whatever you require
to do.
1 p a c k a g e main
2
3 var foo i32
4
5 func bar ( ) {
6 s t r . p r i n t ( " Hi . " )
7 }
8
9 type foobar s t r u c t {
10 foo i32
11 }
12
13 f u n c main ( ) {
14 var t a r g e t a f f
15 var r e s u l t [ ] byte
16
17 t a r g e t = −>{pkg ( main ) v a r ( f o o ) }
18 result = serialize ( target )
19
20 t a r g e t = −>{pkg ( main ) f n ( b a r ) }
21 result = serialize ( target )
22
23 t a r g e t = −>{pkg ( main ) s t r c t ( f o o b a r ) }
24 result = serialize ( target )
25 }
Listing 12.4: Serialization of Declarations
Listing 12.4 shows how to serialize declarations in packages. Note that the structure and function
serializations are serializing the code representation of these declarations, and not instances of these.
In other words, we’re serializing the Person structure, not John or Ana, which would be instances
of this structure. In the case of the function declaration, CX does not have functions as first-class
58 Chapter 12. Serialization
objects yet, so there should not be any confusion, but it’s good to notice that we’re referring to the
function declaration itself, and not an instance of this function.
1 p a c k a g e main
2
3 type Point s t r u c t {
4 x i32
5 y i32
6 }
7
8 f u n c main ( ) {
9 var t a r g e t a f f
10 var r e s u l t [ ] byte
11
12 var foo i32
13 var bar Point
14
15 foobar :
16 i32 . p r i n t ( foo )
17
18 t a r g e t = −>{pkg ( main ) f n ( main ) v a r ( f o o ) }
19 result = serialize ( target )
20
21 t a r g e t = −>{pkg ( main ) f n ( b a r ) v a r ( b a r ) }
22 result = serialize ( target )
23
24 t a r g e t = −>{pkg ( main ) f n ( b a r ) e x p r ( f o o b a r ) }
25 result = serialize ( target )
26 }
Listing 12.5: Serialization of Expressions
Lastly, you can also serialize function statements, expressions and variable declarations. As you
can see in Listing 12.5, function variables are targeted by specifying the package, the function and
the name of the variable using pkg, fn and var in the affordance operator, respectively. The case of
targeting an expression is a bit more complex, as you need to label it first (Line 15), and then use
that label to target the expression in the affordance operator.
12.2 Deserialization
After serializing program elements or data using the procedures described in the last Section, you
may now want to deserialize the resulting slices of bytes. deserialize ( result )
12.2 Deserialization 59
10 }
Listing 12.6: Deserialization of a Program
We’re not going to be deserializing all of the examples from the last Section, as it’d be pointless.
You’re always going to have a slice of bytes, and they are always going to be deserialized by the
deserialize function. Listing 12.6 shows deserialize in action, which is deserializing a slice of bytes
representing the whole program.
What deserialize does is something similar to a patch. If a declaration in the slice of bytes
already exists in the current program, it will simply redefine it; if it does not exist, it will create
it. In the case of function declarations, statements and expressions, they can be applied using the
affordance system, although this functionality has not been implemented yet.
13. Genetic Programming
After the first version of affordances was implemented in CX, it seemed natural to use it for creating
a genetic programming algorithm. Genetic programming (GP) is an evolutionary algorithm that
automatically creates programs (programs creating programs!). In theory, you could only tell a GP
algorithm a set of goals and GP will generate the program for you, so you could tell it "I want an
operating system that does this and this and this" and it could arrive to that solution. But of course in
practice this would be extremely difficult. In reality, GP is usually used to find solutions to problems
that are relatively hard for a human being, but relatively easy for a computer to solve. For example,
you can use GP to find a mathematical model that describes a financial market (such as SKY price
movements!), and it will find it in minutes or maybe seconds, depending on your hardware. However,
obtaining such model by hand would be very difficult, as it could take you days, weeks or even
months to create such a model.
GP is pretty easy to understand. Imagine that you have a set of operators, such as +, -, / and
*. Now imagine that you have a set of input variables, such as x. Lastly, imagine that you have
the plot of a curve that curiously enough resembles the curve generated by plotting f (x) = x2 + x.
When run the GP algorithm, it will start making random combinations of operators and variables,
such as x + x, x + x + x, x ∗ x ∗ x, x ∗ x + x + x, and so on. These combinations of operators then will
be evaluated to see how well they perform. For example, x ∗ x + x + x will throw a curve that is
closer to our target function than, say, x + x (as this isn’t even a curve). Those combinations that
behave well are kept, while the ones that perform poorly are destroyed, just like in natural selection
where the strong individuals are the ones that survive (and hence the name genetic programming).
Again, as in natural selection, the strong individuals are the ones to reproduce and share their genetic
material among them to create stronger individuals. In the case of GP, the genetic material represents
mathematical terms. So, for example, if we reproduce x ∗ x and x ∗ x + x + x, we can end up with these
combinations: x ∗ x + x and x ∗ x + x + x (depending on how you design your crossover operators,
i.e. how you want the individuals to be reproducing), where the former corresponds to the terms
contained in an equivalent function to our target function.
As said at the beginning of this Chapter, CX’s GP is entirely based on affordances. If you read
Chapter 11, you now know that affordances can list all the possible actions that can be performed on
61
a program’s element, such as functions. Well, we can use this functionality to list all the operators
that can be used to create expressions for the target functions (the one that we want to simulate).
Then we can also use affordances to determine what we can send to these expressions. Also, if
we want to reproduce individuals, we can use affordances to know what expressions from each
individual can be obtained, and how they can be added to their offspring. You can do everything in a
GP using affordances.
1 p a c k a g e main
2
3 func realFn ( n f64 ) ( out f64 ) {
4 out = n ∗ n + n
5 }
6
7 f u n c simFn ( n f 6 4 ) ( o u t f 6 4 ) {}
8
9 f u n c main ( ) ( o u t f 6 4 ) {
10 var numPoints i32
11 var inps [ ] f64
12 var outs [ ] f64
13
14 var c i32
15
16 f o r c = 0 ; c < n u m P o i n t s ; c ++ {
17 i n p s = a p p e n d ( i n p s , i 3 2 . f 6 4 ( c ) − 1 0 . 0D)
18 }
19
20 f o r c = 0 ; c < n u m P o i n t s ; c ++ {
21 o u t s = append ( outs , r e a l F n ( i n p s [ c ] ) )
22 }
23
24 var t a r g e t a f f
25 t a r g e t = −>{pkg ( main ) f n ( r e a l F n ) }
26
27 v a r fnBag a f f
28 fnBag = −>{ f n ( f 6 4 . add ) f n ( f 6 4 . mul ) f n ( f 6 4 . s u b ) }
29
30 e v o l v e ( t a r g e t , fnBag , i n p s , o u t s , 5 , 1 0 0 , 0 . 1D)
31
32 s t r . p r i n t ( " Testing evolved s o l u t i o n " )
33 f o r c = 0 ; c < n u m P o i n t s ; c ++ {
34 p r i n t f ( "%f \ n " , simFn ( i n p s [ c ] ) )
35 }
36 }
37 o
But enough about theory, let’s see an example in action. Listing 13.1 shows how to use CX’s
GP to find a function that curve-fits f (x) = x2 + x. This target function is defined at Line 3, and the
function that will try to simulate the curve defined by the real function, simFn, is defined at Line 7.
As you can see, simFn starts as an empty function declaration. This is because the GP is going to
fill this function with expressions.
62 Chapter 13. Genetic Programming
After defining our simFn, we now need our data. In curve-fitting algorithms you usually need
two sets of data: the inputs and the outputs of the target function. For example, if you input 1, you’ll
get a 2, if you input a 2, you’ll get a 3, etc. In this case, the inputs are constructed at Line 17, while
the outputs at Line 21. The inputs range from -10 to 10, and the outputs are obtained by evaluating
realFn with these inputs.
The next step is to set a "bag" of operators. These operators are the ones that will be used to
create the CX expressions that will be inside simFn. In previous versions of CX we used a string
to define these operators, e.g. "i32.add|i32.mul|i32.sub", but now we have integrated affordances
with the GP even more, and we specify the functions using the affordance operator, as can be seen at
Line 28. Similarly, the function to be evolved was previously defined with a string, e.g. "simFn",
but now we also use the affordance operator, as seen at Line 25.
After having defined all the data mentioned in the previous paragraphs, we only need to decide
how many expressions should our simulated function have, for how many generations should our
algorithm run, and what’s our threshold error. "What the..." you may be saying to yourself at this
point, but the cure for this is to explain these concepts in the following paragraphs.
First, CX’s GP is of a certain type called cartesian genetic (CGP) programming, which was
devised by Miller and Thomson in [MT00]. In CGP you limit the number of expressions or statements
that can be defined in a function to be evolved. This is a simple method that completely eliminates
bloat, which is a major problem in traditional GP. In traditional GP, you can end up with evolved
functions having thousands and thousands of expressions, and many of them might not even make
any sense. For example, you could have expressions such as x + x + x − x − x − x or x ∗ x / x. CGP
has been proved in several research works that limiting the number of expressions forces GP to
improve its solutions, while completely eliminating bloat, and use less computing resources as an
extra.
Next, we have the number of generations. This parameter is clearly understood once we
remember that programs are reproduced or crossed over, just like in biological evolution. The
number of generations tell the GP how many times individuals are going to reproduce among
them. The first generation will create sons and daughters (this is a book that supports gender
equality after all), the next generation will create grandsons and granddaughters, the next will create
great-grandsons and great-granddaughters, and so on.
The last parameter is a threshold error, often called epsilon. In most problems trying to be solved
by any evolutionary algorithm, it will be very hard to achieve an exact solution. However, in all of
these problems, a close-enough solution is usually a sufficient solution. For example, take a look at
Figure 13.1. We can see two plots, and maybe our target function is represented by the blue line,
while our evolved solution is the red one. Maybe this is enough, depending on where we want to use
this function. For example, maybe we want to evolve a program that manages the cruise control of a
car, and if we don’t get an exact solution, the car might go 2 miles per hour faster or slower than our
desired speed limit, and this could be a very good solution. In other cases, any error is unacceptable,
such as in determining if a patient requires an amputation or not. Now, epsilon tells the GP algorithm
how bad a solution can perform while maintaining us happy with the results. In CX, this error is
obtained by calculating the mean-squared error (MSE), which is pretty easy to understand: you only
need subtract every simulated data point to each of its real counterpart, sum all of these numbers
and average them, and then get that number’s square root. If you are wondering why provide both a
number of generations and an epsilon, the answer is that the program will stop when any of these
criteria are met. These stop criteria can be interpreted as: "I’m willing to wait for 100 generations or
until the solution achieves this performance error," and this makes perfect sense once you realize
63
1 p a c k a g e main
2
3 import (
4 . " g i t h u b . com / s k y c o i n / cx / cx "
5 )
65
6
7 f u n c main ( ) {
8 prgrm : = MakeProgram ( )
9 mainPkg : = MakePackage ( " main " )
10 mainFn : = M a k e F u n c t i o n ( " main " )
11
12 prgrm . AddPackage ( mainPkg )
13 mainPkg . A d d F u n c t i o n ( mainFn )
14
15 prgrm . RunCompiled ( )
16 }
Listing 14.1: Writing a program using CX base language
Listing 14.1 shows how you can use the CX base language to create a very basic program that
can be run using Skycoin’s CX runtime. First we need to import the CX base language package, as
shown at Line 4. This package will give us access to makers, adders, removers and other utility
functions that will help us construct a compliant CX program. Line 8 creates the minimal CX
program you could create: a null program. A null program is one that does not have any package,
functions or anything in it. If you try to run this program, CX will complain because it doesn’t have
a main package nor a main function, but this doesn’t mean it’s not a valid CX program; if you were
developing a library, neither of these two components are required.
As we are interested on running a program, not just creating a library, we create a main package
and function, at Lines 9 and 10, respectively. As you can see, the naming convention for the functions
that are going to be creating the program elements is MakeXXX, and these functions will usually
require the essential properties to be sent as input parameters, such as the name of a package.
We already have the elements created, but they have not been added to the main program structure
yet. We can do this by calling an adder method on the elements that are going to hold these new
elements. In this case, we are interested on adding a function to a package, and adding that package
to the main program structure. To do this, we’re calling the program’s method AddPackage, and
sending the created package as an argument, and we do the same with the main function by calling
the package’s method AddFunction, and sending it as an argument. These operations are seen at
Lines 12 and 13.
Finally, even though the program does nothing, we run the program by calling the program’s
method RunCompiled at Line 15. Save that code to a file, run it by executing go run example.go, and
you should see... nothing. Let’s now create a more interesting program: let’s calculate 10 + 10!
1 p a c k a g e main
2
3 import (
4 . " g i t h u b . com / s k y c o i n / cx / cx "
5 )
6
7 f u n c main ( ) {
8 prgrm : = MakeProgram ( )
9 mainPkg : = MakePackage ( " main " )
10 mainFn : = M a k e F u n c t i o n ( " main " )
11 i n i t F n : = M a k e F u n c t i o n ( SYS_INIT_FUNC )
12
13 prgrm . AddPackage ( mainPkg )
66 Chapter 14. Understanding the CX Base Language
14 mainPkg . A d d F u n c t i o n ( mainFn )
15 mainPkg . A d d F u n c t i o n ( i n i t F n )
16
17 sum : = M a k e E x p r e s s i o n ( N a t i v e s [ OP_I32_ADD ] , " " , 0 )
18
19 num : = MakeArgument ( " " , " " , 0 )
20 num . AddType ( " i 3 2 " )
21 num . O f f s e t = 0
22
23 W r i t e T o S t a c k (& prgrm . S t a c k s [ 0 ] , 0 , [ ] b y t e { 1 0 , 0 , 0 , 0 } )
24
25 sum . A d d I n p u t ( num )
26 sum . A d d I n p u t ( num )
27
28 r e s u l t : = MakeArgument ( " r e s u l t " , " " , 0 )
29 r e s u l t . AddType ( " i 3 2 " )
30 sum . AddOutput ( r e s u l t )
31
32 p r n t : = M a k e E x p r e s s i o n ( N a t i v e s [ OP_I32_PRINT ] , " " , 0 )
33 p r n t . AddInput ( r e s u l t )
34
35 mainFn . A d d E x p r e s s i o n ( sum )
36 mainFn . A d d E x p r e s s i o n ( p r n t )
37
38 prgrm . RunCompiled ( 0 )
39 }
Listing 14.2: Summing 10 + 10 using CX base language
Listing 14.2 shows how you can double a number, and then print the result using the CX base
language. It’s quite a shock right? Similarly to the last example, we need to create a program, a
main package, and a main function. In addition to that, we need to create a *init function, which,
as explained in Chapter 2, initializes some parts of a CX program such as global variables. These
components are created and added to the program at Lines 8-15. Then we continue with the
expression that will perform the sum at Line 17.
As mentioned before, we’ll be doubling a number. This is important to note, as we’re not going
to be adding, for example, a 10 and another 10; instead, we’re using the same number to double
it. This might not make sense, but let’s see how we create the inputs. First, we need to create the
argument at Line 19. The first input argument to MakeArgument is the name of the argument,
which only makes sense if we’re creating a variable or symbol. In this case we just want to hold a
reference to the number 10, so we don’t really need to name the argument. Additionally, we already
have the internal name of this argument, as we’re assigning it to the Go variable num. num will be
pointing to the offset 0, as seen at Line 19. This means that whatever is stored at the beginning of a
stack frame, that’ll be the value of our num argument, and as it is of type i32, it’ll have a size of
4, which means that it’ll read the first 4 bytes of the stack frame. Before continuing with the next
parts, the second and third arguments of MakeArgument are the file name where the argument is
declared and line number respectively, which we don’t need for this example.
Next we will write our information to the stack. To do this, we’ll use the function WriteToStack,
which first takes a stack as its argument, then the offset at which it should start writing bytes, and
lastly the sequence of bytes to write. As has been mentioned in other Chapters, CX is currently
single-threaded, but it will become multi-threaded in the future. As a consequence of this, you need
67
to send WriteToStack a reference to the stack to which you want to write to. For this example, we
are using the first stack, we’ll start writing our bytes at the index 0 of the stack, and we’ll write
10, 0, 0, 0, which corresponds to the 32-bit integer 10.
After creating the argument and writing the bytes our argument will be pointing to, we can now
add this argument to our expression. This is done at Lines 25 and 26. As you can see, we are adding
the same argument twice to the expression (we want to be efficient with our memory, after all).
If we run the program until this point, CX will complain about evaluating the expression and not
using the result, similarly to what Go would throw. Let’s now assign the result to a variable to avoid
this error. To do this, we create another argument, as seen at Line 28, and then we add this argument
as the sum expression’s output, as seen at Line 30.
Now if we run the program until this point, we’ll have CX doubling 10, and overwriting this
number with 20. The reason behind this is that the output variable result has a default offset of 0
too, so it’s pointing to the same memory address as num.
The next expression does a call to i32.print, and is defined at Line 32. After defining the
expression, we can add an input argument to it which, in this case, is the result argument, as seen at
Line 33.
It has been an exhaustive journey, but we now only need to add the expressions to our main
function, as it’s done at Lines 35 and 36, and call our program’s RunCompiled method. Finally, if
we run our file with go run example.go, we’ll see a 20 being printed to the terminal. Feel proud about
your achievement!
As a final comment, if you want to create your own CX, you’ll need to be generating all of these
commands automatically. For example, you can use a parser such as goyacc (the one used by CXGO)
to generate the program’s structure.
15. CX’s Read-Eval-Print Loop
The REPL has been used to certain extent in the preceding Chapters, but its features have not been
thoroughly discussed. This Chapter aims to explain all of the currently developed features for CX’s
REPL.
Most of the features that are presented here are related to meta-commands, which are commands
that you can enter in the REPL that affect a program, but are not actual expressions, statements or
declarations.
15.1 Selectors
Let’s first discuss selectors, which are meta-commands that allow us to navigate a program’s structure
and target elements to be affected by other meta-commands.
1 CX 0 . 5 . 7
6
7 ∗ f u n c f o o ( ) {}
8
9 ∗ : dp
10 Program
11 0. − P a c k a g e : main
12 Functions
13 0. − F u n c t i o n : main ( ) ( )
14 1. − F u n c t i o n : ∗ i n i t ( ) ( )
15 2. − F u n c t i o n : f o o ( ) ( )
16
17 ∗ : f u n c f o o {}
18
19 : func foo { . . .
15.2 Stepping 69
20 ∗ i32 . p r i n t (5 + 5)
21
22 : func foo { . . .
23 ∗ : dp
24 Program
25 0. − P a c k a g e : main
26 Functions
27 0. − F u n c t i o n : main ( ) ( )
28 1. − F u n c t i o n : ∗ i n i t ( ) ( )
29 2. − F u n c t i o n : f o o ( ) ( )
30 0. − E x p r e s s i o n : ∗ l c l _ 0 i 3 2 = add ( 5 i 3 2 , 5 i 3 2 )
31 1. − E x p r e s s i o n : p r i n t f ( s t r , ∗ l c l _ 0 i 3 2 )
Listing 15.1: REPL function selection meta-command
Listing 15.1 shows a REPL session where we start inside the function main at Line 5, and then
we exit that scope using Ctrl-D. At any moment, you can know in what scope you are in by looking
at the line above the prompt, and you can go up one level in scope by hitting Ctrl-D. If you are in the
global scope and hit Ctrl-D, you’ll leave the CX REPL, so be careful.
After exiting main, we declare a new function in the global scope: foo, at Line 7, and we check
that it was actually created by debugging the program structure using the :dp meta-command (which
stands for "debug program"), at Line 9.
To change scope, we’ll use our first selector :func. Line 17 shows the meta-command in action,
and we can see that it changed the scope to foo’s at Line 20. At that same Line, we add an expression
to foo: a call to printf, which will only print 10 to the terminal. We again check that the expression
was correctly added by calling :dp.
The other selectors are :package and :struct, which change the scope to another package or to
another struct declaration, respectively.
15.2 Stepping
Let’s continue the REPL session from Listing 15.1 in Listing 15.2.
1 : func foo { . . .
2 ∗
3
4 ∗ : f u n c main {}
5
6 : f u n c main { . . .
7 ∗ foo ( )
8
9 : f u n c main { . . .
10 ∗ : dp
11 Program
12 0. − P a c k a g e : main
13 Functions
14 0. − F u n c t i o n : main ( ) ( )
15 0. − E x p r e s s i o n : f o o ( )
16 1. − F u n c t i o n : ∗ i n i t ( ) ( )
17 2. − F u n c t i o n : f o o ( ) ( )
18 0. − E x p r e s s i o n : ∗ l c l _ 0 i 3 2 = add ( 5 i 3 2 , 5 i 3 2 )
70 Chapter 15. CX’s Read-Eval-Print Loop
19 1. − E x p r e s s i o n : i 3 2 . p r i n t ( ∗ l c l _ 0 i 3 2 )
20
21 : f u n c main { . . .
22 ∗ : step 0
23 10
24
25 : f u n c main { . . .
26 ∗ : step 1
27 i n : main , e x p r \ # : 1 , c a l l i n g : main . f o o ( )
28
29 : f u n c main { . . .
30 ∗ : step 1
31 i n : foo , e x p r \ # : 1 , c a l l i n g : add ( )
32
33 : f u n c main { . . .
34 ∗ : step 1
35 i n : foo , e x p r \ # : 2 , c a l l i n g : i 3 2 . p r i n t ( )
36 10
37
38 : f u n c main { . . .
39 ∗ : step 1
40 in : terminated
41
42 : f u n c main { . . .
43 ∗ : step 1
44 i n : main , e x p r # : 1 , c a l l i n g : main . f o o ( )
45
46 : f u n c main { . . .
47 ∗ : step 1
48 i n : foo , e x p r # : 1 , c a l l i n g : add ( )
49
50 : f u n c main { . . .
51 ∗ : step 1
52 i n : foo , e x p r # : 2 , c a l l i n g : i 3 2 . p r i n t ( )
53 10
54
55 : f u n c main { . . .
56 ∗ : s t e p −1
57
58 : f u n c main { . . .
59 ∗ : step 1
60 i n : foo , e x p r # : 2 , c a l l i n g : i 3 2 . p r i n t ( )
61 10
62
63 : f u n c main { . . .
64 ∗ : s t e p −1
65
66 : f u n c main { . . .
67 ∗ : step 1
68 i n : foo , e x p r # : 2 , c a l l i n g : i 3 2 . p r i n t ( )
69 10
We want to add a call to foo in our main function, and we do this by simply writing foo() while
15.2 Stepping 71
being in the scope of the main function. To do this, we first need to exit foo’s scope by hitting
Ctrl+D, and using the :func selector, as seen at Line 4. After this, we can add the call to foo, and we
can check our new program’s structure using the :dp meta-command.
In order to test our program, we can use CX’s stepping features. First, if we want to run all the
program until the end, we can use the :step 0 meta-command, as seen at Line 22. But sometimes
we’ll need to check a program’s execution step by step, and we can do this by giving the :step
meta-command a different argument than 0. Starting at Line 26, we can see how the REPL tells us
at what line number we are in what function call. After issuing enough :step 1 meta-commands, we
finally see that the program finalizes at Line 39, with the REPL printing the message in : terminated .
An even more interesting feature of stepping is that you can give it negative arguments. If this is
the case, CX will create a behavior similar to a for loop, where the stepped back expressions will be
executed again. An example of negative stepping starts at Line 56. You can see how we step back
and forth to keep printing the number 10 to the terminal.
16. Unit Testing in CX
As CX grew, a mechanism to test all the features of the language was needed. Sometimes adding
a new feature to CX breaks other features of the language. For example, once methods were
added to the language, bugs related to accessing structure instance fields arose. The parser was
getting confused, as it didn’t know how to differentiate between, for example, instance . field and
instance . methodCall() . We were not noticing these errors until we actually run code involving method
calls or accessing fields. The solution to this problem is to unit test each of the features of the
language every time the language gets considerably modified.
At the time of writing, CX’s unit testing library consists of a single function: assert. As in other
languages, assert’s objective is to check if two arguments are equal. In CX, this test is performed
byte by byte, so a 32-bit integer is never going to be equal to a 64-bit integer, even if they represent
the same real number, because they have different sizes.
1 p a c k a g e main
2
3 f u n c main ( ) {
4 var c o r r e c t [ ] bool
5
6 c o r r e c t = append ( c o r r e c t , a s s e r t ( i 3 2 . add ( 1 0 , 1 0 ) , 2 0 , " Add e r r o r " ) )
7 c o r r e c t = append ( c o r r e c t , a s s e r t (10 − 10 , 0 , " S u b t r a c t e r r o r " ) )
8 c o r r e c t = append ( c o r r e c t , a s s e r t ( i 3 2 . f 3 2 ( 1 0 ) , 1 0 . 0 , " P a r s e t o F32 e r r o r " ) )
9 a s s e r t (5 < 10 , true , " I32 Less than e r r o r " )
10 }
Listing 16.1 shows an example on how to use assert to test different arithmetic operations. The
first and second input arguments to assert are the ones that get compared byte by byte, while the
third argument is a custom error message that is appended to the default error message. In CX it’s
conventional to start with the expression to be tested as the first input argument, and then use the
73
second input argument as the desired result of the first input argument. The custom error message
is helpful to understand what expression raised an error, in addition to the usual file name and line
number thrown by CX.
Also, notice that assert returns a boolean argument, which indicates if the test was successful
or not. This might seem like it does not make sense, as assert will stop a program’s execution if
the test is not successful, but this behavior is there for two reasons: 1) you can count the number
of tests performed, and 2) CX will implement in the future a function, test.error, which tests if
an expression raised an error in a particular situation, while avoiding halting the program. For
example, i32 . div (0, 0) has to raise a divide by 0 error, and if it doesn’t, then this is an error. After
re-implementing this function (most likely with a different name, as the test package no longer
exists), we will be able to count how many tests return true and how many return false.
1 p a c k a g e main
2
3 f u n c main ( ) {
4 var check i32
5 c h e c k = 999
6
7 if 2 < 3 {
8 c h e c k = 333
9 }
10
11 a s s e r t ( check , 3 3 3 , " n o t e n t e r i n g I F e r r o r " )
12
13 if 3 < 2 {
14 c h e c k = 555
15 }
16
17 a s s e r t ( check , 3 3 3 , " e n t e r i n g I F e r r o r " )
18
19 if 2 < 3 {
20 c h e c k = 888
21 } else {
22 c h e c k = 444
23 }
24
25 a s s e r t ( check , 8 8 8 , " e n t e r i n g e l s e i n I F / ELSE e r r o r " )
26
27 if 3 < 2 {
28 c h e c k = 111
29 } else {
30 c h e c k = 777
31 }
32
33 a s s e r t ( check , 7 7 7 , " e n t e r i n g i f i n I F / ELSE e r r o r " )
34
35 if 3 > 0 {
36 i f 25.0 > 29.0 {
37 check = 0
38 a s s e r t ( check , 1 0 , " e n t e r i n g n e s t e d I F / ELSE 2 nd l e v e l e r r o r " )
39 } else {
40 i f 30L < 60L {
41 c h e c k = 999
74 Chapter 16. Unit Testing in CX
42 } else {
43 check = 0
44 a s s e r t ( check , 1 0 , " e n t e r i n g n e s t e d I F / ELSE 3 r d l e v e l e r r o r " )
45 }
46 }
47 } else {
48 check = 0
49 a s s e r t ( check , 1 0 , " e n t e r i n g n e s t e d I F / ELSE 1 s t l e v e l e r r o r " )
50 }
51
52 a s s e r t ( check , 9 9 9 , " e n t e r i n g n e s t e d I F / ELSE e r r o r " )
53
54 var i i32
55 f o r i = 0 ; i < 1 0 ; i = i 3 2 . add ( i , 1 ) {
56 check = i
57 }
58
59 a s s e r t ( check , 9 , "FOR l o o p e r r o r " )
60
61 f o r i = 1 ; i 3 2 . l t e q ( i , 1 0 ) ; i = i 3 2 . add ( i , 1 ) {
62 i f i 3 2 . eq ( i 3 2 . mod ( i , 2 ) , 0 0 0 0 ) {
63 check = i
64 } else {
65 check = i
66 }
67 }
68
69 a s s e r t ( check , 1 0 , "FOR−I F / ELSE l o o p e r r o r " )
70 }
Listing 16.2: Testing control flow statements
Listing 16.2 shows a more complex situation, where we are testing if the different control flow
statements of CX are behaving as intended or not. For example, in an if/else statement, if the
predicate is true, the then clause needs to be executed, not the else clause. To test this behavior, we
can create a "check" variable that is going to be changing its value, just like it can be seen at Line
8. If this if statement is successful, the check variable will change its value from 999 to 333. As
a consequence, we need to use assert to check if check’s value is now 333. If this is not the case,
we can now be sure that there’s an error with how the if statement is implemented, and we need
to correct it. Likewise, at Line 14 we check if the if statement is correctly not entering when its
predicate evaluates to false. If the if statement enters in this case, the value of check will be changed
to 555, so we need to test using assert that check’s value is still 333.
REFERENCES 75
References
[Gib66] James Jerome Gibson. “The senses considered as perceptual systems.” In: (1966) (cited
on page 50).
[LS86] Peter Lancaster and Kestutis Salkauskas. Curve and surface fitting: an introduction.
Academic press, 1986 (cited on page 46).
[MT00] Julian F Miller and Peter Thomson. “Cartesian genetic programming”. In: European
Conference on Genetic Programming. Springer. 2000, pages 121–132 (cited on page 62). | https://ru.scribd.com/document/395660281/cx-programming-langauge | CC-MAIN-2019-35 | refinedweb | 23,761 | 77.67 |
A HList derived widget that offers several add-on functions like sorting, reordering and inserting/retrieving of item text & data, suitable for perl Tk800.x (developed with Tk800.024). You can insert item-texts or item-text/-value pair/s into the DHL...MIKRA/Tk-MK-0.23 - 11 Feb 2014 11:31:34 GMT
MTDial Widget that allows the creation of circular dials that can turn indefinitely to produce arbitrary positive or negative values....WLMB/Tk-MTDial-0.001 - 26 Jul 2016 23:38:27 GMT
Tk::TFrame provides a frame but with a title which overlaps the border by half of it's height....SREZIC/Tk-GBARR-2.08 - 23 Sep 2008 19:50:04 GMT
"Tk::IFrame" defines a widget which enables multiple frames (cards) to be defined, and then stacked on top of each other. Each card has an associated tag, selecting this tag will cause the associated card to be brought to the top of the stack....SREZIC/Tk-GBARR-2.08 - 23 Sep 2008 19:50:04 GMT
A UpDown Navigation Control for Numbers List...SEN/Tk-Updown-1.0 - 15 Nov 2004 06:18:24 GMT
This is a graphic user interface to compare, up- or download local and remote directories....KNORR/Tk-Mirror-0.06 - 25 Oct 2008 14:53:52 GMT
PDURDEN/Tk-FormUI-1.07 - 29 Sep 2015 21:40:41 GMT
ILYAZ/Tk-OS2src-1.04 - 28 May 2001 18:03:00 GMT
In the context of this namespace, a Wizard is defined as a graphic user interface (GUI) that presents information, and possibly performs tasks, step-by-step via a series of different pages. Pages (or 'screens', or 'Wizard frames') may be chosen logic...LGODDARD/Tk-Wizard-2.158 - 23 Nov 2015 12:26:37 GMT
The EMatrix widget is a derived widget that provides 6 additional methods above and beyond the traditional Tk::TableMatrix widget....DJBERG/Tk-EMatrix-0.01 - 19 Jan 2001 14:14:23 GMT
This Module pops up a file selector box, with a directory entry with filter on top, a list of directories in the current directory, a list of files in the current directory, an entry for entering or modifying a file name, a read button, a write butto...ALSCH/Tk-SelFile-0.02 - 20 Nov 1995 16:33:46 GMT
SREZIC/Tk-Xcursor-0.03 - 04 Feb 2014 20:16:49 GMT
JIMI/Tk-MIMEApp-0.04 - 16 Jan 2014 09:04:05 GMT
Creates a new widget class based on Text-like widgets that can redefine the line number base (normally Text widgets start line numbers at 1), or possibly other manipulations on indexes....SREZIC/Tk-804.034 - 26 Aug 2017 15:26:56 GMT
Tk::Browser.pm creates a Perl library module browser. The browser window contains a module listing at the upper left, a symbol listing at the upper right, and a text display. If the argument to open() is a package or path name, the browser displays t...RKIES/Tk-Browser-0.82-1 - 23 Apr 2015 17:26:30 GMT | https://metacpan.org/search?p=3&q=module%3ATk | CC-MAIN-2019-22 | refinedweb | 514 | 54.02 |
> > Why do you think they _have to_ have "none"? Is it POSIXized or> > otherwise standardized? Where can I RTFM?>> I do not think they have to. They just are :-)>> fs/namespace.c:show_vfsmnt()>> ...> mangle(m, mnt->mnt_devname ? mnt->mnt_devname : "none");>>> I find this convention quite useful. It allows any program to easily> skip virtual filesystems. Using something like /dev or devfs in this> case does not add any bit of useful information but possibly adds to> confusion.Maybe you're right. It's up to maintainer to decide.Richard, do you need updated patch without "none" -> "devfs"?--vda-To unsubscribe from this list: send the line "unsubscribe linux-kernel" inthe body of a message to majordomo@vger.kernel.orgMore majordomo info at read the FAQ at | https://lkml.org/lkml/2002/1/29/378 | CC-MAIN-2022-33 | refinedweb | 126 | 62.85 |
BBC micro:bit
Speaking Clock
Introduction
Just when you think that there aren't any more clocks that you want to make with the micro:bit...
This project has a micro:bit reading the time from an RTC. A second micro:bit is attached to a speaker. Press a button on the second micro:bit and it requests the time from the first one, then tells you the time in words.
In the photograph, on the left, you can see the micro:bit that does the speaking. It is attached to a 4tronix Power:Bit which is supplying battery power. The Power:Bit duplicates the edge connector. That is fed into a Pimoroni Noise:bit, which has the speaker and a lovely design. On the right of the photograph, I have a micro:bit going into a 4tronix inline edge connector. This is another neat product that duplicates the edge connector. I've mounted the micro:bit on a Bit:2:Pi, which I am just using for power. Connected to the inline breakout is a Hobbytronics DS1338 RTC breakout. The accessories are overkill here and mainly to provide convenient power and audio out. The Noise:bit looks so good, you just have to use it.
The basic project requires one micro:bit to be connected to a speaker. The other micro:bit needs to have a connection to an RTC. I chose to use 2 micro:bits for this because the Noise:bit uses the micro:bit as a power source and because the speech is easier to make out when you can be quite close to the speaker (so, portable is best).
Programming - RTC
The micro:bit with the RTC needs to read the time and send it by radio to the other one. It listens for a radio signal (a 't' being received on channel 10) and then reads the time, makes some adjustments and sends it by radio to the micro:bit with the speaker.
from microbit import * import radio chnl = 10 radio.config(channel=chnl) radio.on() def bcd2bin(value): return (value or 0) - 6 * ((value or 0) >> 4) def bin2bcd(value): return (value or 0) + 6 * ((value or 0) // 10) def get_time(): i2c.write(0x68, b'\x00') buf = i2c.read(0x68, 7) s = bcd2bin(buf[0] & 0x7F) m = bcd2bin(buf[1]) h = bcd2bin(buf[2]) w = bcd2bin(buf[3]) dd = bcd2bin(buf[4]) mm = bcd2bin(buf[5]) yy = bcd2bin(buf[6]) + 2000 return [s,m,h,w,dd,mm,yy] def set_time(s,m,h,w,dd,mm,yy): tm = bytes([0x00] + [bin2bcd(i) for i in [s,m,h,w,dd,mm,yy-2000]]) i2c.write(0x68,tm) while True: s = radio.receive() if s is not None: if s=="t": t = get_time() hh = t[2] hh = hh % 12 if hh==0: hh=12 h = str(hh) m = str(t[1]) tstr = h + " " + m radio.send(tstr) sleep(50)
Programming - Speaking
The receiving micro:bit responds to a button press by sending the time request signal. When it receives a time signal, it works out how to say the time and then says so.
from microbit import * import radio import speech chnl = 10 radio.config(channel=chnl) radio.on() nums_dict = {0: ('Zero', 'Ten'), 1: ('One', 'Eleven'), 2: ('Two', 'Twelve', 'Twenty'), 3: ('Three', 'thirteen', 'Thirty'), 4: ('Four', 'Fourteen', 'Forty'), 5: ('Five', 'Fifteen', 'Fifty'), 6: ('Six', 'Sixteen', 'Sixty'), 7: ('Seven', 'Seventeen', 'Seventy'), 8: ('Eight', 'Eighteen', 'Eighty'), 9: ('Nine', 'Nineteen', 'Ninety')} def numwords(str_num): if len(str_num) == 1: ones = int(str_num[0]) tens = 0 else: tens = int(str_num[0]) ones = int(str_num[1]) if tens == 0 or tens == 1: return nums_dict[ones][tens] else: word = nums_dict[tens][2] if ones>0: word += ' ' + nums_dict[ones][0] return word while True: if button_a.was_pressed(): radio.send("t") s = radio.receive() if s is not None: t = s.split() h = numwords(t[0]) m = numwords(t[1]) if t[1]=="0": m = "oh clock" elif int(t[1])<10: m = "oh " + m tstr = h + " " + m speech.say(tstr) sleep(50)
Review & Next Steps
It's reasonably easy to understand the time spoken through the speaker on the Noise:bit. It's always easier to understand what the micro:bit is saying when you have a rough idea of the kinds of words you should be hearing. If you play speech from the micro:bit to someone who doesn't already know what it is going to say, they have can have a hard time decoding it. If you tell them what it has said and play the same noise back, they will hear the words quite clearly. When you are expecting to hear some numbers, you tend to make them out quite well.
Using say isn't bad, but you can get better performance by working out the phonemes and using pronounce. A future version of this clock should probably do that.
I am old enough to remember a time when people were able to cope with the time being rounded to the nearest 5 minutes. A future version of this project should probably tell me the time and date the way I would prefer.
Plugging the micro:bit directly into the Noise:bit makes the speaker into a mouth for whatever graphic shown on the matrix. The portable speaking part of the project would do better with a different power solution. | http://www.multiwingspan.co.uk/micro.php?page=speakclock | CC-MAIN-2019-09 | refinedweb | 893 | 70.43 |
Miniblog: Python Syntax Refresher
I was a bit busy with a home improvement project this weekend, so I’m taking an opportunity to live up to some standards that we should all aspire to:
- Take time to do important things that aren’t relevant to my career. There are some people, I’m told, who live to code. I am not one of those people, but it’s easy to get sucked up in work when we spend 40+ hours each week on it. Life and the people in it should come first (for me, anyway).
- Stick to my routine, even if I have to alter the standard a bit. I once heard someone say “a half-assed workout is still better than no workout.” Today’s entry isn’t going to be as long as some of my others, but I’m still going to learn something that’ll make me a better programmer.
I was trying to figure out why a Python script wasn’t returning the results I was looking for, so I started to work my way through the logic. Python is not my first (or my second) language and I regularly struggle with the syntax. I came across a batch function that I had a little trouble following:
def batch(iterable, n):
length = len(iterable)
for index in range(0, length, n):
yield iterable[index : min(index + n, length)]
Based on what I know, batch means to take a collection and group its members in a certain way. But this function is iterating through a range and not the passed collection, so how does it work?
The Parameters
Our function takes two arguments: iterable and n. I checked the script to confirm the data types of these values and learned that iterable is an array of SQL query results. Since a query returns a row of data, it’s up to us to decide how to present that in Python code. Though it would be intuitive to use a dictionary (object) with keys and values set to column names and values, this script actually passes all the values in an array. So iterable is a very long array because each row from our query result adds 10+ elements to that array. The other parameter, n, is a constant variable set to an integer, in this case 1000.
for index in range
Now that we’ve got an idea of what the arguments might look like when this function is invoked, we can take a look at the logic. The first line sets length to the number of elements in the iterable array. For the sake of illustration, we can say that the query that generated iterable returned 4 rows of results and that the table itself has 10 columns. In that case, our iterable variable would be an array of 40 elements, so length would equal 40.
Next we use the for keyword, which creates a loop where elements are referred to by the variable name directly after for (index in this case). Next comes in which tells us what we’re iterating through. I know range is going to return a series of numbers, but I forgot that it could accept a third argument. In this case, that argument is n, which we know is set to 1000.
range’s third argument is a step value, which determines the distance between numbers in the range. For example, range(0, 10) returns:
[0,1,2,3,4,5,6,7,8,9]
But if we add a step value, as in range(0, 10, 2):
[0,2,4,6,8]
In our case, step is 1000, which makes me wonder how large our iterable data normally is. If the query truly returned only 4 rows, then we wouldn’t come close to reaching the first step. Our range would simply be [0].
But that might be what we’re looking for, so let’s examine the final, and for me the most confusing, line of our code:
yield iterable[index : min(index + n, length)]
First we have yield, which is like an iterative version of return. While return will exit our function when run, yield will send a value back while continuing to iterate. Our alternative would be to collect values and then return the collection, but the advantage to yield is that we don’t have to store all the data in memory as it’s being collected.
We know that iterable is an array, so my original thought was that the square brackets were referencing a single element in the collection. But something that I forgot about Python is that a colon inside square brackets can be used as a slice. So instead of returning one element from iterable, we’re returning a slice of it, starting from index (in this case, 0) and ending with the lesser of index + n (0 + 1000) and length (40).
Batch
When I first read this function, I took an educated guess at what “batch” meant. Now it’s a bit more clear: we’re taking a potentially large group and breaking it into pieces. In my contrived example, the collection was small enough that it didn’t have to be split at all, but if it were thousands of entries long, we would have yielded several results. Batching our results likely makes them easier to process and optimizes the script, which makes us all winners.
Sources
- Python range() function, GeeksforGeeks
- When to use yield instead of return in Python?, GeeksforGeeks
- Colon in Python — Why do we use (:) in Python?, AskPython | https://mike-diaz006.medium.com/miniblog-python-syntax-refresher-8ec40b4ef94?source=user_profile---------4---------------------------- | CC-MAIN-2021-43 | refinedweb | 934 | 66.37 |
There might be many reasons you’d want to fire a single Tag multiple times in Google Tag Manager. The most common one is when you want to deploy multiples of a single tracking point on the web. Perhaps you have a roll-up account you want to send the hits to, in addition to the site-specific tracking property.
Quite a while ago, I gave a solution for this with a specific focus on Google Analytics Tags. It leveraged the hitCallback feature of the Universal Analytics library by increasing a global counter each time a Tag had fired. This solution had a number of drawbacks: being GA-specific, polluting the global namespace, and requiring a unique setup for every single Tag you wanted to fire.
Not long after this, I actually started doing the whole thing in a different way. A much more durable, sensible, robust way, and in this article I want to open up the method. In fact, it’s not just me who wants to introduce this. This article is a collaboration with Marco Wekking from Adwise. He approached me with almost exactly the same solution asking for a review. Instead of reviewing, I asked if he would like to contribute to a collaboration of sorts on this post, and he gracefully accepted. So, much of this post is from Marco’s pen, with my typical bad humour and some JavaScript shenanigans intertwined in the rhetoric.
How it works
The skilfully grafted diagram above shows how the solution works. Every single event that passes through
dataLayer is duplicated by suffixing it with
.first and
.second. This way, all the Tags you want to fire twice per event just need to be modified to use the original event name plus the two suffixes. You can even use this in a Lookup Table to fire the hits to different Google Analytics tracking properties, for example.
It’s a simple, elegant solution to a pressing problem which, if you ask me, should be handled natively by the platform. This is such a common use case for so many people – it would be immensely helpful if you could just denote multiple cycles to each Tag using the Tag template settings instead of having to resort to hacks like this.
How to do it
The solution comprises three steps:
- Create a new Custom Event Trigger
- Create the Custom HTML Tag which manages the duplication
- Create new Triggers
We’ll wrap it up with an example of how to leverage this when firing to multiple Google Analytics properties! So stay tuned.
1. Create the Custom Event Trigger
To fire the Custom HTML Tag, we’ll need a Trigger which activates on every single event. Why? Because we want to duplicate every single event! There’s just one exception – we don’t want to fire the Trigger when the actual duplication events are pushed into
dataLayer, or we might find ourselves in an infinite loop. Ontologically fascinating, but not cool when it comes to site performance.
Before you create the Trigger, make sure you have the Event Built-in Variable checked in the appropriate slot of the Variables page in your container.
This is required for the Trigger to work.
Next, create a Trigger which matches all events except for the duplicated ones. In this example, we’re using the generic
.first and
.second as the suffixes, but nothing’s stopping you from using something more descriptive, for example
.Country and
.Rollup.
To reiterate: The exception is important! If you don’t add that Fire On condition, you will run into issues, as the Trigger would fire again and again with each duplication cycle. Yes, you could use the Tag Firing Options feature of the Tag templates, but it’s better to nip bad behavior in its bud.
2. Create the Custom HTML Tag
The Custom HTML Tag is the heart and soul of this solution. It’s very simple, but it does have some peculiarities that might surprise if you’re not familiar with how GTM works.
Actually, to ensure that one of these special features works as it should, you’ll need to activate yet another Built-In Variable: Container ID. This returns, surprise surprise, the public ID of the GTM container the Tag is firing in (GTM-XXXX).
Create a new Custom HTML Tag and add the following code within:
We’re wrapping the whole thing in an IIFE as we want to avoid polluting the global namespace. Next, we’re assigning a local variable
event with whatever value is currently stored in the Built-in Event variable (i.e. the event that was pushed in the first place).
Finally, we’re doing two consecutive
dataLayer.push() commands, one for each iteration of the cycle. If you want to add more events there, be my guest. As far as we know, there is no limitation to the number of events you can push into
dataLayer this way.
The latter push includes the special
eventCallback key (read more about it here). The key holds a callback function which is invoked as soon as Tags which fire on this particular push have signalled their completion. Within this callback, we’re using the
onHtmlSuccess() feature of GTM’s interface. This is something that was exposed for public use with the Tag sequencing feature. The only thing you need to know about it is that it’s our way of telling GTM that it can now proceed with whatever was going on before this loop of
dataLayer.push() commands.
In other words, if you’re duplicating a Click / Just Links, and you’ve got “Wait for Tags” checked (meaning the Trigger will wait for all dependent Tags to fire before proceeding with the link default action), the process goes something like this:
- The link click Trigger fires this Custom HTML Tag.
- The Custom HTML Tag pushes the first duplicated event, and any Tags which use it start their execution.
- Immediately after, the Custom HTML Tag pushes the second duplicated event, and any Tags which use it start their execution.
- Once the last Tag firing on the second duplicated push signals its completion, the
eventCallbackcallback is invoked, and the Custom HTML Tag tells the link click Trigger that everything is done, allowing it to proceed with the redirect.
Now, add the Trigger you created in the previous step to this Tag, and you’re ready to duplicate. You can preview this to see what happens in your dataLayer with each event.
As you can see, each event is duplicated. There’s the Pageview, followed by its duplicates: gtm.js.first and gtm.js.second. There’s DOM Ready, followed by its duplicates: gtm.dom.first and gtm.dom.second, and so on.
(If you don’t understand what “Pageview” and “gtm.js” have to do with each other, the latter is the underlying event representation of the former. More information in my Trigger guide.)
3. Create new Triggers
The only housekeeping pain you’ll need with this solution is with the Triggers. Here’s how it should work:
- To duplicate a Trigger, you need one “base” Trigger of the event type firing without any delimiting conditions. This is the event that is duplicated.
- To fire your duplicate Tags, you’ll need to use Custom Event Triggers which use the original event name (e.g. gtm.linkClick) plus .first or .second, all wrapped in a simple regular expression.
In other words, we’re taking a step back to how GTM used to work before the auto-event triggers. We’re creating listeners using the generic event Triggers, and when these are duplicated, the Custom Event Triggers are used to delimit the Tags to fire only when specific conditions exist.
NOTE! You don’t need to create a “base” Trigger for the Page View event type. This is because “Pageview”, “DOM Ready”, and “Window Loaded” are automatically pushed into
dataLayer as the page and GTM load. In other words, they are automatically duplicated, and you just need to focus on creating the Custom Event Triggers.
Look at the illustration below for clarity:
Let’s zoom in. If you have an “Outbound Links” Trigger, which fires when the Click URL is not your own hostname, the duplicated Trigger would look something like this:
Make note of the “Event name” field. That’s what it should look like for your Triggers. With custom event names it’s easy, as you’re the one pushing them into
dataLayer in the first place. With the built-in Triggers it might be a bit more difficult, so here’s a cheat sheet for you:
Make sure you’ve got the Fire On condition on the Custom Event Trigger, as you don’t need it on the generic event Trigger. If you miss it from the Custom Event Trigger, you’ll inadvertently fire the Tag whenever any such event is detected on the page. For example, if we’d left out the Click URL Hostname condition from the example above, the Outbound Links Tag would fire whenever the Link Click event is pushed into
dataLayer.
BONUS: Use variable Tracking ID
Here’s a tip straight from Marco. To create a variable which sends a different Universal Analytics tracking ID depending on which cycle of the duplication loop is currently active, use the following Custom JavaScript Variable:
This returns the UA code “UA-XXXXXXXX-1” if the Tag is firing on the first loop of the duplication cycle, and “UA-XXXXXXXX-2” if on the second. You might want to setup some type of fallback or default return value in case neither matches.
Overview and summary
While the solution works for all kinds of tags and creates less redundancy in many setups, it has some drawbacks of its own.
First, it inflates the number of
dataLayer events significantly. A simple setup with just two events (Page View and Outbound Link Clicks) already triples the amount. Note that this doesn’t really have any impact on performance.
dataLayer is just a message bus used by Google Tag Manager’s internal data model. Whenever GTM needs to access the “Data Layer”, it’s actually just performing a lookup in its own data model, so the size of
dataLayer is inconsequential here.
Second, it requires a different and probably more difficult approach to triggers. All tags need to be linked to custom event triggers instead of the built-in Trigger types you might have become used to. If you were around during GTM’s previous version, you might be familiar with the setup, as it resembles how things were done with the old auto-event tracking setup. However, when you really chew it down, all you’re actually doing is creating one extra Trigger per event type, and moving from the built-in Trigger types to Custom Event Triggers.
Finally, trigger conditions become more critical as they can cause infinite loops or Tags firing in wrong situations. While such accidents shouldn’t cause infinite regret (assuming you test before publish) it does remain another difficulty for you to deal with.
One thing you might be concerned about is whether or not all the Variables populated with the initial base Trigger are still available when the duplicate Custom Event Triggers fire. For example, you might need the Click URL Variable, and now you’re worried that it’s not available when the duplicate Triggers fire. Don’t worry! GTM persists Variable values until they’re overwritten or there’s a page unload/refresh. So, unless you’re manually overriding the Data Layer values populated by GTM’s auto-event Triggers, you should be fine.
Well, we might have overstated the simplicity and elegance of the setup, but the idea of intercepting each event and duplicating it is far more approachable than the complex hitCallback setup used before.
I want to thank Marco for putting the approach into writing, and most of the content in this article has come from his pen, edited to suit the devil-may-care style of this blog. Any errors, factual mistakes, or radicalist propaganda cleverly hidden in the whitespace is solely the fault of me, Simo, and I take full responsibility for all the uprisings that will inevitably follow. | https://www.simoahava.com/analytics/firing-a-single-tag-multiple-times-in-gtm/ | CC-MAIN-2017-30 | refinedweb | 2,046 | 60.65 |
mq_receive - receive a message from a message queue (REALTIME)
#include <mqueue.h> ssize_t mq_receive(mqd_t mqdes, char *msg_ptr, size_t msg_len, unsigned int *msg_prio);
The mq_receive() function is used to receive is removed from the queue and copied to the buffer pointed to by the msg_ptr argument.
If the value of msg_len is greater than {SSIZE_MAX}, the result is implementation-dependent.
If the argument msg_prio is not NULL, the priority of the selected message is removed from the queue, and mq_receive() returns an error.
Upon successful completion, mq_receive() returns the length of the selected message in bytes and the message is removed from the queue. Otherwise, no message is removed from the queue, the function returns a value of -1, and sets errno to indicate the error.
The mq_receive() function will() operation was interrupted by a signal.
- [ENOSYS]
- The mq_receive() function is not supported by this implementation.
The mq_receive() function may fail if:
- [EBADMSG]
- The implementation has detected a data corruption problem with the message.
None.
None.
mq_send(), <mqueue.h>, msgctl(), msgget(), msgrcv(), msgsnd().
Derived from the POSIX Realtime Extension (1003.1b-1993/1003.1i-1995) | http://pubs.opengroup.org/onlinepubs/007908775/xsh/mq_receive.html | CC-MAIN-2015-40 | refinedweb | 185 | 55.24 |
Search
FAQs
Recent Topics
Flagged Topics
Hot Topics
Best Topics
Register / Login
Win a copy of
Soft Skills: The software developer's life manual
this week in the
Jobs Discussion
forum!
Post Reply
Bookmark Topic
Watch Topic
New Topic
Java
Java JSRs
Mobile
Certification
Databases
Caching
Books
Engineering
Languages
Frameworks
Products
This Site
Careers
Other
all forums
Forum:
Sockets and Internet Protocols
Why is my connection reset on this test program?
Colm Dickson
Ranch Hand
Posts: 89
posted 6 years ago
0
Hi all.
I'm conducting a small client/server test...it's supposed to emulate a server that can handle multiple clients that connect to the server on the local machine. Currently I'm just sending the messages but have not set up and responses to come back from the server yet. I'm just checking that the name passed to the server can be looked up in the hashmap table initialised and filled by the server.
For this
test
, I'm just sending a few messages, one after the other, rather than using any gui / command line option to post the messages.
After these messages are sent, my program causes a 'connection reset' to be thrown ,accoridng to the stack trace.
The
BufferedReader
that I have on the server waiting for these messages from the client seems to be ok as it is always reading as long as the reader is not null.
I will post the code below. Can anyone help me with what is the exact cause of this socket error please?
Thanks.
Test Server code
public class NameServer { Socket socket; ServerSocket serverSocket; HashMap<String,String> map; PrintWriter print; BufferedReader buff; public NameServer() { try{ serverSocket = new ServerSocket(3200) ; } catch(IOException e) { e.printStackTrace(); } map = new HashMap<String,String>(); map.put("Mr Y", "12345"); map.put("Mr Y", "54321"); map.put("Mr Z", "334"); } public void processClient() { System.out.println("Process client") ; Thread t1 = new Thread(new GetClient(buff, print,map)); t1.start(); } public void openConnections() throws IOException { InputStream is = socket.getInputStream(); OutputStream os = socket.getOutputStream(); buff = new BufferedReader(new InputStreamReader(is)); print = new PrintWriter(os); System.out.println("Connections opened"); } public void run () { try { System.out.println("Awaiting client connections..."); while (true) { socket = serverSocket.accept(); System.out.println("Connection made with a client"); openConnections(); processClient(); } } catch(IOException e) { System.out.println("Trouble with connection "+e); } while (true) { } } }
Now the Runnable class to be run as a thread in the server in respinse to each client connection
public class GetClient implements Runnable { BufferedReader br; PrintWriter pw; HashMap<String,String> hm; /** Creates a new instance of GetClient and passes refs */ public AuthenticateClient(BufferedReader b,PrintWriter w, HashMap<String,String> map) { System.out.println("Creating intance of GetClient"); br= b; pw = w; hm = map; } public void run() { System.out.println("Run method"); String userName = null; String passWord = null; Object ob = null; try { while ((userName = br.readLine()) != null) { //passWord = br.readLine(); //System.out.println(userName); //userName = br.readLine(); //System.out.println(userName); System.out.println("wating"); if(userName == "") { pw.println("invalid user name "); } else { ob = hm.get(userName); //System.out.println("Retrieved value "+ob); } if(!(ob == null))//(!(ob.equals(null))) { System.out.println("Hello "+userName+" welcome to the system"); } else { System.out.println("Sorry..no user with user name "+userName+" exists on this system"); } }// end of while } catch(IOException e){ e.printStackTrace(); } } }
So here is my client
public class Test { Socket s; PrintWriter pw ; OutputStream os; /** Creates a new instance of Test */ public Test() { } public static void main(String[] args) { Test t1 = new Test(); System.out.println("Client attempting to connect to server..."); try{ t1.s = new Socket("127.0.0.1",3200); } catch(IOException e) { e.printStackTrace(); } try{ t1.os = t1.s.getOutputStream(); } catch(IOException e) { e.printStackTrace(); } t1.pw = new PrintWriter(t1.os,true); t1.pw.println("Mr X"); t1.pw.println("Mr Y"); t1.pw.println("Mr Z"); t1.pw.println("Mr AAA"); } }
I have a test class for the server (below). I then start the server running and then the client
public class TestNameServer { public static void main(String[] args) { NameServer ns = new NameServer(); ns.run(); } }
What I want is for my program to remain inside the while loop simply waiting for a new client message to be sent but as it is, it reads the messages sent and then an exception is thrown. I thought it was best to post my code. Hope someone can shed some light on this please as I really want to get my head around sockets...the basics at least.
Many thanks,
Colm
Post Reply
Bookmark Topic
Watch Topic
New Topic
Similar Threads
whats wrong in my server code ?
Socket problem
Help with sockets
Server and Client communication probs
Server & Client are both waiting | http://www.coderanch.com/t/463308/sockets/java/connection-reset-test-program | CC-MAIN-2016-07 | refinedweb | 785 | 50.43 |
On 2016-10-18 00:35, Richard Guy Briggs wrote: > On 2016-10-17 18:21, Steve Grubb wrote: > > On Monday, October 17, 2016 5:19:59 PM EDT Paul Moore wrote: > > > We haven't merged any of the session ID code into the kernel so > > > changes are still possible. The logic for supporting loginuid_set > > > (UID namespace issues) don't really apply to session IDs so I think we > > > can drop the sessionid_set part of the API and just use the -1 > > > sentinel. > > > >.
Advertising <r...@redhat.com> Kernel Security Engineering, Base Operating Systems, Red Hat Remote, Ottawa, Canada Voice: +1.647.777.2635, Internal: (81) 32635 -- Linux-audit mailing list Linux-audit@redhat.com | https://www.mail-archive.com/linux-audit@redhat.com/msg08680.html | CC-MAIN-2018-05 | refinedweb | 114 | 55.78 |
Tell us what you think of the site.
Create an empty set, add an animated attribute, then delete the key(s). The set itself is deleted along with the key. Why is this happening? Is it expected? If the set contains any objects, then it not deleted.
Thanks!
from pymel.core import *
select(clear = True)testSet = sets(name = "the_set")testSet.addAttr("animated_attribute", at = "bool", keyable = True)setKeyframe(testSet.animated_attribute, t = 5, v = True)#this next line results in the set itself being deleted!!cutKey(testSet.animated_attribute, option = "keys", clear = True)
I really did not expect that actually.
I was certain there must be something wrong with the code, but it seems not. It happens no matter how many curves are connected either, it will delete those too.
Will have to look further into this.
You can avoid it by disconnecting the attribute first, then cutting the key from the curve node instead (I have already tried this whilst connected, but same results)
disconnectAttr(testSet+'_animated_attribute.output', testSet.animated_attribute)cutKey(testSet+'_animated_attribute', option = "keys", clear = True)
Lee Dunham | Character TD
ldunham.blogspot.com | http://area.autodesk.com/forum/autodesk-maya/mel/set-is-deleted-when-its-custom-keys-are-removed/ | crawl-003 | refinedweb | 182 | 52.36 |
MicroPython Tutorial XVI
Ok, lets do something different. Now LEGO has an iOS app that you can use as a remote. It is very good and lets you design your own controls. But to use it you need to be running the LEGO scratch interface. If you’re on MicroPython SD card you need something else.
But wait BEFORE you read on, be warned that the something else in this tutorial needs a Wifi connection. Something like LEGO recommended EDIMAX USB adaptor, without it this tutorial won’t work.
I should say too that is isn’t a tutorial like the others I posted to-date. This is a tutorial for those with quite a bit of experience in coding. All in all it will be around 120+ lines of code.
Here is a video showing what this does too.
It is also not a code base designed to work by itself, but in co-operation with the app remoteCode, that you can download here. OK done that, lets go.
#!/usr/bin/env pybricks-micropython
from pybricks import ev3brick as brick
from pybricks.ev3devices import Motor, UltrasonicSensor, ColorSensor, GyroSensor, InfraredSensor
from pybricks.parameters import Port,Color
from pybricks.robotics import DriveBase
from pybricks.tools import print, wait, StopWatch
from time import sleep
import threading
import sysimport random
import socket
import os# Section 01hostname = os.popen('hostname -I').read().strip().split(" ")
print("hostname address",hostname[0])
hostIPA = hostname[0]
port = random.randint(50000,50999)# Section 02left = Motor(Port.C)
right = Motor(Port.B)
# 56 is the diameter of the wheels in mm
# 114 is the distance between them in mm
robot = DriveBase(left, right, 56, 114)print("host ",hostIPA)
print("port",port)# Section 03online = Trueai = socket.getaddrinfo(hostIPA,port)
addr = ai[0][-1]backlog = 5
size = 1024
s = socket.socket()
s.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
s.bind(addr)
s.listen(backlog)# Section 0Etry:
res = s.accept()
while online:
client_s = res[0]
#client_addr = res[1]
req =client_s.recv(1024)
data=req.decode('utf-8')
print("data ",data)except AssertionError as error:
print("Closing socket",error)
client_s.close()
s.close()
We have imported all the classes we will ultimately need for this code to work to keep this tutorial manageable. Ok, we begin in Section 01 by running a query agains’t the OS class to get the IP address of our robot that we’re running on and a random port number. We need to give both these bits of information to the app so that it can communicate with the robot.
Next in Section 02 we have simply defined the two connected motors declaring them in the process in a drive pair.
In Section 03 we setup the code needed to communicate with the iOS device, and indeed try and make that connection.
In Section 0E we launch the main loop thru which the app remoteCode and our MicroPython code will talk, printing out the conversation that is taking place. Yes, 0E doesn’t follow 03, we dropped a few sections to keep things to get you going.
Download the app remoteCode here and copy and paste this to a python code file on your robot, run it and then run remoteCode on your iOS device. Enter the IP address you’re of your robot into the iOS app and the port and they should start talking to each other.
Assuming you’re connected and it all works, you should see the work “data #:connected” appear on the screen. This is the app talking to your Python app. Now go and select one of the interfaces “Keyboard, Touchpad or Motion”. You should see more text appearing, assuming you choose “Keyboard” for example, it will sa “#:begin” followed by “#:keypad”. Swipe the iPad right and it’ll say “#:end” and then return to the main menu.
Note if you iOS device screen locks during the process, you’ll need to quit the Python and re-run the process. You can quit your connection on the iOS device by shaking it.
And there you have it, the basis of our remote app. But ok, where do you go from here. Here is some more code to add to the mix. These two procedures show subroutines to interpret the #: commands you just saw returned by the app.
# Section 07def actionTrigger(data, client):
global transmit
if data[:5] == "#:end":
stopMotors()
brick.sound.beep()
peerMode = False if data[:6] == "#:peer":
peerMode = True
if data[:7] == "#:begin":
pass if data[:8] == "#:keypad":
brick.light(Color.YELLOW) if data[:10] == "#:touchpad":
brick.light(Color.RED) if data[:8] == "#:motion":
brick.light(Color.ORANGE) if data[:5] == "#:con": # connected
brick.sound.beep()if data[:5] == "#:dis": # disconnect
brick.light(Color.BLACK)
wait(2000)
brick.light(Color.GREEN)
client_s.close()
s.close() if data[:8] == "#:short":
stopMotors()
brick.sound.beep() if data[:6] == "#:long":
stopMotors()
brick.sound.beep()# Section 08def stopMotors():
print("STOP STOP STOP ")
robot.stop()
This code is reasonably self explicit. As you change to different interfaces, the colours of the robot will change in response to the different “#:” conversations the iOS code sends back to it.
We’re almost there. I am going to give you the last section which is the code bases you need for the tracker/motion interfaces and leave the keyboard one as an exercise for you to figure out. You should already have the general gist of the way it works now.
Firstly we add a method to interprete the streams of data that come back if you choose the motion or the tracker interface. Note the code here is a little more complicated since I want to ignore duplicate data packets sent by the app.
lastPitch = 0
lastRoll = 0# Section 09def joystick(pitch, roll):
global lastPitch
global lastRoll
calcPitch = int(round(pitch / 180 * 800,0)) # needs to be an integer
calcRoll = int(round(roll / 720 * 200,0)) # turns need to be slow and deliberate, this reduces roll by 4
if lastPitch != calcPitch or lastRoll != calcRoll:
print("calc",calcPitch,calcRoll)
robot.drive(calcPitch,calcRoll)
lastPitch = calcPitch
lastRoll = calcRoll
And then I add an else to the if we added early on in the main loop, so after the call to the actionTrigger routine it acts on the data packets prefixed with a “@:” lead.
elif data[:2] == "@:":
mCommands = data.split("\n")
for mCommand in mCommands:
if len(mCommand) != 0:
cords = mCommand[2:]
cord = cords.split(":")
try:
roll = float(cord[0])
pitch = float(cord[1])
joystick(pitch,roll)
except:
pass
When you’re all done, download the script and run it. Try using it with the touchpad or the motion interface and you’re be able to move you’re robot with your iOS device, under MicroPython!
Hint. If you’re in the motion interface, you need to keep the grey circle inside the red one to stop sending data, and indeed tap the circle itself to stop the robot. The touchpad is slightly easier, you just stop touching it.
I may publish a tutorial on getting the keyboard interface running in due time, but for now, its time to play :) A final final note, how do I know all this, well yes I confess I wrote the remoteCode app too. [Obviously it isn’t Python, it is in Swift]. | https://marklucking.medium.com/micropython-tutorial-xvi-1b34071f4640?source=post_internal_links---------3---------------------------- | CC-MAIN-2022-40 | refinedweb | 1,208 | 65.93 |
> november 2004
Filter by week:
1
2
3
4
5
Building a table from within a loop
Posted by Grant at 11/30/2004 9:39:05 PM
Hi, Ive got this loop within a loop and Im trying to build a table. Problem is when I exit the 2nd foreach loop (having created a new row) I cant add the new row because it isnt in scope anymore. Basically I have this custom arraylist containing a name property which has multiple attributes...
more >>
Problem with Custom Web Control Property
Posted by Dexter at 11/30/2004 1:51:32 PM
Hello all, I'm developing a custom web control that mount a webpage, but now i don't can to modify the properties of my custom web control. by exemplo: the property NumeroGuia is 0 by default, but if i try to modify to 5236, i don't can. Somebody can help-me with this custom web control? ...
more >>
Control no longer "draggable and droppable"
Posted by Scott Mitchell [MVP] at 11/29/2004 7:19:24 PM
(FYI, this question is cross-posted over the ASP.NET Forums -) I have created a custom control that, previously, had a great design time experience. I don't know what happened, but after Thanksgiving I returned to this project and...
more >>
question about my custom control
Posted by rodchar at 11/29/2004 1:19:14 PM
hey all, i was wondering, i have a custom control that inherits from the image button. i have 2 .aspx pages. all the custom control does is open a new window for help info. well, on 1 .aspx page the new window appears fine and in focus. however, on the second window the new window opens...
more >>
Extend the DataGrid
Posted by Sergio Florez M. at 11/29/2004 11:40:36 AM
I wish to extend the DataGrid control but my resulting control loses the ability to expose the <columns> and the different <style> tags in HTML mode. How do I expose them? and even better, how do I create and expose my own? -- Sergio Florez M. Medellín, Colombia. ...
more >>
Can't import .OCX assembly into Web Matrix Project.
Posted by Jason Robertson at 11/29/2004 2:33:05 AM
Hi, I am using Web Matrix Project as .NET programming environment. Can you show me detailed steps how to import the .ocx library into the Web Matrix Project. I can't seem to be able to import other assembly than .dll, .exe or ..mcl . I am pretty new to the .NET programming environment, and ma...
more >>
Icons in Custom Controls !!
Posted by Estratovarius at 11/28/2004 1:19:01 PM
How I can put an icon within a Web Custom Control ??...
more >>
how to conditionaly add a usercontrol? (ASCX)
Posted by KK at 11/27/2004 8:40:24 PM
Hi guys I have two user controls (ascx) I want to place them conditionaly in my aspx page. How to do that? For exmple, If Request["variable"] = "1" then DisplayASCXcontrol(1) Else DisplayASCXcontrol(2) End If So, according to the users choice when the page loads, it will contai...
more >>
Don't see what you're looking for? Search DevelopmentNow.com.
Attribute is not setting
Posted by Thiruppathi S at 11/26/2004 6:26:27 PM
Hi All, I had created a server control inheriting RequiredFiledValidator Class to add my custom attributes.But the controltovalidate attribute is not rendering. Plz help me. Regards, S.Thiruppathi ...
more >>
Problem with Build Custom Web Control
Posted by Dexter at 11/26/2004 1:30:17 PM
Hello all, I'm building a web custom control, but i hava a problem. In the Sub Render, i have a html code as output. All ok. I have a other class that has a function that return a image object, and i need to insert this image in my html of output, but it don't has a path. Protected Overrid...
more >>
Styleheets on my composite control
Posted by Chris at 11/25/2004 9:36:17 PM
I have built a custom control which has several controls which have had their cssclass attributes set to a class. When I applied the class in styles.css they changed. When I modify the styles they don't change. It is like they are cached. It can't be the control as the styles worked at first. Do...
more >>
Problem with web user control
Posted by oterox at 11/25/2004 9:13:11 PM
Hi! I have a user control WebUserControl1.ascx with a textbox.I want to change the text of the textbox from another page but i get the error "object reference not set to an instance of the object".The code is: protected System.Web.UI.WebControls.TextBox txtUC; public string ucTexto { set...
more >>
How to FTP via VPN to sites on different IP's ?
Posted by Jason Robertson at 11/25/2004 1:13:44 PM
Hi, using the VPN server's IP, 10.10....
more >>
User control - lacking positioning on page
Posted by ken bus at 11/24/2004 8:25:02 PM
It seems that the user control I created could not be positioned using the grid positioning like other controls on my web page. It appeared to use flow layout. Is this a problem with user controls and will I have to use a composite server control instead?...
more >>
Odd web control problem. Two instances of control A on different pages display each other's data!
Posted by ~~~ .NET Ed ~~~ at 11/24/2004 6:25:04 PM
I am currently using the ScottWatter (or ScottWater) Amazon Book Control in a website I am developing. Basically this is the outline: - It is a custom web control with designer support - Does not depend on any other library with the exception of a few Framework (System.Data,.XML,.Designer) l...
more >>
Any Good Books on Building controls
Posted by GaryB at 11/23/2004 6:02:45 PM
I just bought a book entitled "Developing Microsoft asp.net server controls and components." It shows how to build controls using C#, with code embedded in aspx pages instead of code behind, using the SDK instead of Visual Studio. I don't think anyone really develops code that way. Are th...
more >>
Problem in developing newsreader in vb.net
Posted by balu at 11/22/2004 7:11:25 PM
Hello, I am developing a Newsreader in vb.net and i am getting a problem in posting article to newserver for a particular set of newsgroups. i developin g this newsreader using nntp commands.I need vb.net code for posting a article . I getting a problem in defining a article format. Kindly ...
more >>
Detect IE update version or service pack number
Posted by henry at 11/22/2004 5:32:20 PM
Hi all, I'm building a website which provides contents that only work for some browser types and versions. I know I can get most of the client browser information using HttpBrowserCapabilities class, but I can't find a way to determine IE update version or service pack number. Any ideas? ...
more >>
"master" attribute not recognized
Posted by nidii NO[at]SPAM hotmail.com at 11/22/2004 2:17:29 AM
Hello, I'm trying to define a default master page in web.config for my web application lihe this: <appSettings/> <system.web> <!-- Set the default master page --> <pages master="Templates/MasterAdvertising.master"/> <!-...
more >>
accessing aspx object from ascx pages and vise verce?
Posted by rom at 11/22/2004 1:12:48 AM
Is there a better way of doing it, except from using the session variables? Thanks!...
more >>
Determine width for label or button
Posted by Leo at 11/21/2004 5:35:16 PM
Is there any way to determine the width needed for a web.ui.label or web.ui.button based on it's .text and .font properties? I know you can do this with windows.form controls but how could it be done for server controls. ...
more >>
Detecting design mode in custom control
Posted by Naveen Kohli at 11/21/2004 11:58:37 AM
Is it pretty safe to assume that if Context is null, then its design mode? if (null == this.Context) { return true; } Naveen Kohli ...
more >>
Debugging designer control
Posted by Serg at 11/18/2004 7:35:05 PM
Hello, in winforms, how do I debug a control (i.e. when it is being added and when its properties are being modified by the client window)? Thanks a lot...
more >>
Book on Designers?
Posted by Serg at 11/18/2004 5:59:03 PM
Hello Please recommend a good book on developing custom user controls using Designer namespace, exposing properties, events, ect... NOT a book explaining ..NET fundamentals, but rather an in-depth coverage with examples in C#. I am mostly interested in doing this for WinForms, but I didn't ...
more >>
DataGridCommand does not bubble?
Posted by Sergio Florez M. at 11/18/2004 2:59:23 PM
I'm creating a control which has a Button and a DataGrid among other things. I have overwritten the OnBubbleEvent() method to capture the Button's click event and it works fine. I also want to do this to capture a click on a ButtonColumn in the grid, but OnBubbleEvent() is never fired when I clic...
more >>
how to get form id in a composite control?
Posted by Robin Lilly' at 11/18/2004 1:21:16 PM
Hi, I have a composite control that I need to reference the form id to build some javascript that I am pushing out several places, such as: _Anchor.Attributes.Add("OnClick", "JavaScript:popUpCalendar(this, document." & _formId & ".elements['" & _TextBox.UniqueID & "'], ""m/d/yyyy"")") ...
more >>
Extending datagrid with new/insert button
Posted by Manuel Trunk at 11/17/2004 5:29:10 AM
Hi, I try to develop a server control which inherits from DataGrid and adds a new/insert button to this grid. This is no problem, but I can't get the event fired from this new button. I tried everything, implementing INamingContainer and IPostBackDataHandler, added the button to the footer...
more >>
ASP.NET 1.1: Page vs. Control events
Posted by Ole Hanson at 11/16/2004 6:19:39 PM
Hi I am having a number of User Controls (ascx) on my page (aspx). The page is in my scenario only working as a dumb "container "giving life to the Controls, as these are handling my UI-logic and communicating with each other (the controls are communicating) by subscribing to each others event...
more >>
Directing Domains
Posted by Andrew Bonney \(abweb\) at 11/16/2004 6:15:27 PM
Hi, I have never used ASP in my life so I NEED HELP!!! I have heard it is possible to create an asp script that when someone connects to a web server from the a domain (say), the asp script reads the domain/subdomain in the address bar and the locates the correct folder fo...
more >>
User Control Populate Form after Log On - newbie question
Posted by keith at 11/16/2004 11:13:07 AM
Hi, I have a user control that contains a login element, textboxes for username and password, and a button to submit, which appears in the header of each page in my asp.net application. All of the below is done with code behind. When the user correctly logs on, it sets a public property bL...
more >>
Building Nested Controls
Posted by Gaurav Vaish at 11/16/2004 10:54:46 AM
Hi, I want to have a control that can have children like: <prefix:Parent ... <Children> <prefix:Child .... </Children> </prefix:Parent> When I try to launch the page, it gives me error that: "Children" does not have a property ...
more >>
VS.NET 2003 Bug when ASP.NET control is in Design mode
Posted by ~~~ .NET Ed ~~~ at 11/14/2004 11:30:33 PM
Odd problem it is... on the design surface the web control is shown with red letters claiming that VS.NET was "Unable to create Control". But the entire assembly compiles flawlessly and even the test page displays the "damaged" controls perfectly. Looks like a VS.NET bug. ...
more >>
Saving Properties from IDE
Posted by Rob Thomson at 11/14/2004 9:10:13 AM
Hi Can anyone help I have a composite control, ie text box, 3 buttons and have exposed them to the ide, but I cannot get any of their values persisted, ie if I change the value in the ide, shut the page, and reopen it, then the property value is back to its default. Im guessing that I need ...
more >>
hardware arquitecture ASP.NET
Posted by smoncayo NO[at]SPAM gmail.com at 11/12/2004 5:19:21 PM
I have a question about hardware arquitecture for using in asp.net. I dont know what kind of minimum requirements I need for using an application client - server with ASP.NET , SQL SERVER and CRYSTAL REPORTS. Do you have some documentation? Thanks a lot!!!!...
more >>
Doubt : ASP.NET problem
Posted by ASP newbie at 11/12/2004 4:30:57 PM
I cannot run my asp.net application in w2k server. But the program works fine under w2k professional. Can anyone tell me is there any difference in the settings? Many thanks. ...
more >>
Dynamic Controls - Still!
Posted by Coleen at 11/12/2004 10:47:28 AM
Hi Jim Thanks for your help. I changed the table from an ASP.Net table to an HTML table. I am till having the problem of the control placed inside the table not triggering the on SelectedIndexChanged event. To be honest, I don't have a clue what you are talking about as to containers and pan...
more >>
What Component model to use??
Posted by GaryB at 11/12/2004 8:58:03 AM
I have an ASPX page and a vb codebehind file that takes as input a passed web datagrid as input. My page has a CrystalReportsViewer on it. My page produces a PDF report of the DataGrid that was passed. I want to package this functionality into a component so that a programmer can simply d...
more >>
Setting Control Properties with a Control Builder
Posted by Peter O'Connell at 11/11/2004 12:23:14 PM
I have a control that requires a custom ControlBuilder to parse certain child tags as controls, but I also want to use other tags to set properties. Is this possible? The default ControlBuilder, combined with the ParseChildren(true) attribute, will automatically assign child tags to corresp...
more >>
How does one make a "designer" for a structure?
Posted by ~~~ .NET Ed ~~~ at 11/10/2004 7:56:00 PM
The configuration parameters of my web control are contained in a separate class. One can access the configuration parameters via the member names. In the control class I have defined a property to access the entire configuration class, and marked it with the following attributes: [Catego...
more >>
Parser Error: Type XYZ does not have a property named 'cc3:MyItems' (complete posting) ASP.NET Web Control Error
Posted by ~~~ .NET Ed ~~~ at 11/10/2004 6:49:27 PM
I am having problems trying to get this part of the functionality working on my control and I hope somebody has a clue about how to resolve it. 1. How the relevant tags appear on the ASPX page. The @Register stuff is omitted. The control is basically a composite who also has a list of items ...
more >>
An example for creating a web custom control with full design view.
Posted by Houda Tahbaz at 11/10/2004 12:16:51 PM
Hi, I know I should use the ControlDesigner class if I want to implement design mode behavior for a composition based control. However, do you know any online examples which I could start with quickly? Thanks, Houda Tahbaz ...
more >>
Need Custom DataReader Loop in User Control: inline vs. code-behind
Posted by Jordan at 11/10/2004 9:05:20 AM
I'm trying to evaluate the benefits of designing user controls completely inline vs. using a code-behind file. I already have a user control designed that does the following: - gets QString values from URL - creates DataReader from Sproc - loops through datareader to display custom output (no...
more >>
UserControl.ClientID
Posted by Federico at 11/10/2004 7:08:02 AM
Hi everyone, the matter is about the property UserControl.ClientID. I have an ASP.NET UserControl (called DateBox) which contains a TextBox server control, and there is an aspx page that contains an instance of DateBox. The page needs to put the focus into the DateBox; to do this I use a li...
more >>
Creating dynamic Text boxes in VB .Net
Posted by Coleen at 11/8/2004 10:56:27 AM
Hi All :-) (I've posted at dotnet.languates.vb.controls, but not received any response since 11/5 - sorry for the doble-post, but I really could use some help...) I need to create some labels and text boxes dynamically, depending on the number input in a previous textbox on the page. What I ha...
more >>
Error: Webcontrol must have items of type X. SubLinks is of type Y
Posted by ~~~ .NET Ed ~~~ at 11/7/2004 5:16:42 PM
Hi, I have made a web control that has two sorts of items. The first is the Links property that has a persistence attribute of InnerProperty, the other is SubLinks with the same attribute. Both are of the same collection type. The control has its own designer class (although it doesn't render...
more >>
maintain viewstate in listboxes
Posted by DC Gringo at 11/6/2004 7:49:56 PM
I have an asp.net vb page with two web user controls on it. The first user control, uc1.ascx has a series of list boxes that are populated in series by each other with ONSELECTEDINDEXCHANGED. In the second control, uc2.ascx, lower down on the page, there is an imagebutton. When the imagebutt...
more >>
How to access values entered in User control in the main page.
Posted by vineetbatta at 11/6/2004 3:30:02 PM
Hi Guys, i have a user control which allows the user to enter Name& Address in text boxes. I use the same user control in the main page... Is there a simple way of accessing the Name & address entered in the text boxes of the user control in the main page(Page hosting the user control ...
more >>
Control and Parent
Posted by Zürcher See at 11/5/2004 4:07:25 PM
The Control class has the Parent property that is readonly. When the control is added to a controls collection of another control the Parent property refers to that control. "Who" set the Parent property? How to implement this "mechanism"? public abstract class MyControl { private MyCont...
more >>
Editcommancolumn in datagrind in asp.net
Posted by venkat_chellam NO[at]SPAM yahoo.com at 11/4/2004 7:19:12 AM
I have a question. I have datagrid in asp.net page. grid will be loaded some information on page load. One column of the grid is editcommandcolumn type with edit, update and cancel options. I don't want the edit button(linktype) to be enabled for all the rows. Bases on some information, ed...
more >>
Configuration
Posted by Paul Ledger at 11/4/2004 1:52:07 AM
I've written a control and I want to be able to pass certain information to it via a configuration file. I've created the config using the namespace + dll + config and I've placed it in the bin directory on the website. But when it comes to reading the information from the file using... str...
more >>
·
·
groups
Questions? Comments? Contact the
d
n | http://www.developmentnow.com/g/10_2004_11_0_0_0/dotnet-framework-aspnet-buildingcontrols.htm | crawl-001 | refinedweb | 3,331 | 65.83 |
Subject: Re: [boost] Boost Graph Library: why are `source()`/`target()` non-member functions?
From: Max Moroz (maxmoroz_at_[hidden])
Date: 2013-04-22 17:46:15
Jeremiah -
Thank you - this is exactly what I was missing.
Suppose I have a class DS that I want to adapt for use with Boost.Graph in
two different ways (for example, what is considered a source in one
adaptation should be a target in the other adaptation, and vice versa). How
can I create two functions source() without causing a name conflict? Don't
they both have to be defined in the boost namespace, with the same
signatures?
Also, I was wondering about an alternative approach. The interface could
require member functions, and to provide them I would put third-party
objects inside a wrapper class. The wrapper class would take the
third-party object as a constructor argument, save it as a private data
member, and expose the required interface in the form of member functions.
This seems to allow for better encapsulation than the free functions
approach. Does the free function approach have any advantages over the
wrapper approach?
Max
Boost list run by bdawes at acm.org, gregod at cs.rpi.edu, cpdaniel at pacbell.net, john at johnmaddock.co.uk | https://lists.boost.org/Archives/boost/2013/04/202750.php | CC-MAIN-2019-39 | refinedweb | 210 | 66.84 |
A file that has many names that all share the same I-node number.
A file system mounted using the mount o hard option. The hard option indicates that the retry request is continued until the server responds. The default for the mount command is hard.
An electrically wired outlet on a piece of equipment into which a plug or cable connects.
Namespace information that is similar in structure to the Unix directory tree. See namespace.
The portion of a file system allocated to a user for storing private files.
A node on the network.
Every system on the network usually has a unique hostname. Hostnames let users refer to any computer on the network by using a short, easily remembered name rather than the host's network IP address. Hostnames should be short, easy to spell, and lowercase, and they should have no more than 64 characters. The hostname command determines a system's host.
A collection of slices (hot spares) reserved for automatic substitution in case of slice failure in either a submirror or RAID5 metadevice. Hot spares are used to increase data availability.
A slice reserved to substitute automatically for a failed slice in a submirror or RAID5 metadevice. A hot spare must be a physical slice, not a metadevice.
These are devices that can be connected or disconnected while the system is running.
These devices allow for the connection and disconnection of peripherals or other components without rebooting the operating system.
High Sierra File System.
The central device through which all hosts in a twisted-pair Ethernet installation are connected. A hub shares bandwidth between all systems that are connected to it. See Switch. | http://books.gigatux.nl/mirror/solaris10examprep/0789734613/gloss01_div08.html | CC-MAIN-2018-22 | refinedweb | 278 | 58.28 |
C++ Annotated August 2021: Practical Modules, C++20 Attribute to Help with EBO, Valgrind, Intel Compiler, and CLion News
Our monthly C++ Annotated digest and its companion, the No Diagnostic Required show, released a new episode with August news!
If you).
August news
- Language news
- Learning
- Tools
- And finally, 10 years of C++ support in JetBrains tools!
Watch the August episode of No Diagnostic Required below, or keep reading for all the latest news!
Language News
D2422R0 “Remove nodiscard annotations from the standard library specification”
This is another example of a paper that has recently been covered on CppCast, it appears to be an unusual paper at first glance. That was certainly my reaction when I first heard it discussed on CppCast, and that seemed to be Rob and Jason’s reaction, too.
It’s worth digging into this a bit more to find out what’s really being said in this paper. It’s definitely a case where the devil is in the details and a little more background is insightful, so let’s dive in.
[[nodiscard]] is an attribute introduced in C++17 as a hint that compilers or other tools can choose to check whether the return value is used or explicitly cast to
void by the caller. It can be applied to a function (including member functions), in which case it refers to the return value of that function or to a type (including enums), in which case it applies to any function that returns values of that type – great for error codes or types. In C++20 it was extended to allow a message to be included that explains why the value should not be discarded.
Along with error types, functions that have no side-effects (are “pure”) also seem like good use cases. Back in May, we talked about a proposal for adding
[[nodiscard]] to the iterators library for this reason. There have been other proposals, many already accepted into the working draft for C++23, that add it to other functions, like memory allocation functions and
empty() member functions.
These all seem like great uses of
[[nodiscard]] – almost no-brainers, even (although there were some less obvious cases).
So what’s the problem? Rewind to where we said these should be considered “hints”. This is a definite case of “no diagnostic required”. On the other hand, standard library authors are entirely free to add
[[nodiscard]] anywhere they see fit, and many have done so. Microsoft’s stdlib contains over 400 such annotations, for example. Beyond that, even without
[[nodiscard]], tools (including compilers) are allowed to issue warnings when return values are discarded if they think there is a good case for doing so.
So that raises the question: should something that doesn’t specify anything be put into a specification? The only possible change is that some additional warnings may be issued. Useful warnings, for sure (assuming they’re not false positives), but at what cost?
The downsides include:
- Committee time. This is not as trivial as it sounds. The committee is already overstretched and these proposals take more time to review and process than you might think. If the goal is to get to 100% coverage on functions/types that should be annotated it may be worth the hit – at least at some point – but, as we’ll see, that may not be an appropriate goal.
- The danger of false positives and other unintended side-effects or bugs. The best way to minimize these is to learn from implementation experience. That can be put into motion now, by proposing patches to stdlib implementations to add
[[nodiscard]]in the appropriate places. Then we wait.
- A partial job, especially as coverage gets higher, raises questions about those places that have not received the annotation. Does it mean they have side effects? Is the return value unimportant?
- Verbosity. All these annotations add up, especially when added alongside const, constexpr, noexpect, and so on. There’s a can of worms here, but an important one, which we’ll come back to.
To sum up so far, D2422 is not proposing that we don’t want warnings for misuse of these functions and types. Rather it is saying that specifying them in the standard is not the best or most practical way to do so, so let’s slow down and reconsider. At the very least, let’s implement them directly in the major stdlibs, see how it shakes out, then eventually standardize the existing practice.
There may also be better ways that are being discussed within the committee, but for which there are no papers written yet. It’s too early to say what will come of these, but it seems important not to run in the opposite direction just yet. One of these ideas is: specifying (possibly in a separate document) en masse categories of functions for which
[[nodiscard]]-like behavior in the tooling may be applied (e.g. all empty() methods) rather than specifying that they each have the
[[nodiscard]] attribute. Another possibility is something like a
[[discardable]] attribute that we could start applying in cases where we know we want that ability. Then, existing compiler flags such as -Wunused become more useful due to less noise, but remain an opt-in option. There are also references to a proposal to add a
[[pure]] attribute. These are all ideas and opinions in the mix, but the underlying thought is that annotating
[[nodiscard]] everywhere is not a practical approach, and therefore spending committee time on doing so is inefficient at best and self-limiting at worst!
P2372r2 “Fixing locale handling in chrono formatters”,
P2419r0 “Clarify handling of encodings in localized formatting of chrono types”
After talking so much about one proposal (which itself is talking about the removal of proposals already adopted), let’s talk a bit about these couple of papers that, on the surface, relate to how
std::format works with
std::chrono. In fact these are really about text encoding. We’ve talked many times about
std::format and at least once about
std::chrono, but not enough about text encoding. It’s a really tricky, deep subject that is often overlooked, especially by application developers who very often like to pretend that it doesn’t exist at all! This is not helped by the standard library traditionally brushing all the handling under the locales carpet. The string classes (even
std::wstring) say nothing at all about encodings. There’s a lot of ongoing, largely unsung, work being done by SG16 (the Unicode Study Group) to bring stronger, fuller Unicode support to C++. Some of that work has been bearing fruit in the current standards, but the biggest payoffs are quite a way off yet. In the meantime, let’s at least try to remember that using UTF-8 doesn’t mean we don’t have to worry about text encodings.
P2388R1 “Minimum Contract Support: either Ignore or Check_and_abort”
Last month we looked at R0 of this paper, which at the time was called “Abort-only contract support”. It’s not just the name that has changed, though. The paper has been substantially re-organized and expanded, with most review actions addressed and wording added. There is a joke within the committee that it’s easy to encourage paper authors to explore “further work in this direction”, but in this case, I see this as a sign that things are actively moving along and this is being taken seriously. Maybe some form of Contracts support is on track for C++23, after all?
Now, back to standards that can be used today…
Empty Base Class Optimisation,
no_unique_address and
unique_ptr
If you haven’t heard about this new C++20 attribute –
[[no_unique_address]] – you definitely should check the article by Bartlomiej Filipek. He talks about Empty Base Class Optimization first, which allows saving memory on empty structs via inheritance. The idea is that if you know that your class is empty, then you can inherit from that class, and the compiler won’t enlarge your derived class.
However, C++20 brought an easier way to achieve a similar effect. A new attribute indicates that a unique address is not required for a non-static data member of a class. Interestingly, ABI might be affected as the usage of the
[[no_unique_address]] attribute changes the struct layout. This situation is discussed in the GitHub issue for Microsoft STL. It was noticed there that Clang listed this attribute as supported in v.9, but not when targeting Windows. And while Stephan T. Lavavej stated that the attribute is now supported in MSVC, it’s too late to request Clang support to take advantage of it before the C++20 ABI lockdown.
Don’t explicitly instantiate
std templates
The major outcome of the new article by Arthur O’Dwyer that C++ developers should take as a rule is in the title. With a good set of examples (from simple ones to real-life ones) he explains why it’s bad. The major idea is that implicit instantiation is lazy and can only instantiate the required parts, while explicit instantiation does so for every member. The standard library relies on this laziness a lot, so when you explicitly instantiate
std templates you get yourself into trouble.
For Clang users, there is an attribute that is probably not so widely known, which helps protect code from such explicit instantiations and related issues:
exclude_from_explicit_instantiation.
C++20 modules with GCC 11
As modules-related topics are so hot now, we’ll discuss a few recent posts dedicated to this long-awaited C++20 feature. Niall Cooling discusses the C++20 modules in his new article, focusing on two different approaches to organizing the module structure – single file modules and separate interface, and implementation files – to manage the structure more easily.
He builds a very basic Hello, World! example using these approaches and GCC 11, providing tips on constructing modules with this compiler specifically. There are some differences from Microsoft’s implementation, so check them out carefully if you have tried modules with MSVC before. Pay special attention to the lack of file name extensions and common agreements.
Niall shows how to build a simple module, export functions, namespaces, and types, import things, and build everything together. Module partitions are left out of the blog post scope and will be discussed at a later date.
Moving a project to C++ named Modules
The Microsoft team has also published a post on new module practices, which you can use as an example-based tutorial on building a named module for the existing code. The original project is published on GitHub, so you can play with it along with reading the article. Interestingly, the project is CMake-based, but to introduce modules you’ll have to switch to msbuild (which can be generated from CMake), as CMake still doesn’t have support for C++20 modules.
The newly created modules mimic the project’s include directories structure, and the modules are created from the corresponding header files. But the most interesting part is dedicated to the modularization of the 3rd part code, and this is where some non-trivial work is required. For example, static constants have to be wrapped with functions to import later. This is because an internal linkage entity can’t be exported.
All the efforts are rewarded in the end with a significant compilation time improvement!
Valgrind
After posting in the craft::cpp blog about sanitizers, Marin Peko did a dive into the Valgrind tool. You might say it’s too old, but it still can work better than sanitizers in a few use cases.. The most obvious one is catching issues in a library whose source code is inaccessible. Sanitizers require recompilation so they capitulate here immediately. While Valgrind works and provides meaningful results, suppression files help tune this result and make it even more useful. Another case is to search for memory errors with address sanitizers and at the same time detect uninitialized memory reads with memory sanitizers. That’s simply not possible with sanitizers, but no such problem exists with Valgrind.
Another interesting observation is that Valgrind can be customized. Its core part loads the software and disassembles it, and the tool plugin adds the instrumentation and assembles it back. This other part can serve different purposes: checking for memory leaks, detecting data races in multi-threaded applications, analyzing the heap memory usage, and so on. Valgrind is actually not a tool, but a family of tools based on the same core.
There are known limitations of the Valgrind approach and they are discussed in the article. For example, the execution times and memory usage are significantly larger than in the sanitizers case. Valgrind also won’t help you catch overflows in stack and global variables. This is because it only has access to the heap allocations performed by the malloc function. Before making a choice between Sanitizers and Valgrind, read through the article to learn the Valgrind basics.
Intel C/C++ compilers complete adoption of LLVM
Big news was announced by Intel – they moved their compiler to the LLVM infrastructure. Intel’s compiler might not be in the top-3 most used C++ compilers (Clang, GCC, MSVC), but it is still very popular and obviously an essential choice to get the best performance on Intel processors.
Moving to LLVM infrastructure is definitely a trend among C++ tool vendors. There are obvious reasons for that – there is a huge community caring and contributing to it, and it’s fully open-source which makes it a perfect choice for tooling. Intel got the latest C++ language standards nearly for free as a result of the migration.
Intel recommends users migrate to a new compiler as the old one will soon be moved to a legacy mode with no updates. The migration guide with many useful details is published. The Intel migration announcement also shares a set of benchmarks showing the compile time and performance benefits of the new Intel LLVM compiler.
CLion 2021.3 Roadmap
Following the CLion 2021.2 release in July, we published the vision for our next CLion release coming at the end of 2021. Our major focus is still on performance and eliminating freezes. In addition to that we’ll do our best to simplify user configuration efforts in several ways:
- Bundle the MinGW toolchain in CLion installer on Windows so you need fewer manual downloads and installations when starting with CLion on that platform.
- Add the ability to configure the toolchain environment via script (for example, if you use a script to initialize all the compiler environmental variables, including the addition of the bin and lib paths).
- Bundle Ninja and use it as the default generator for CMake projects, which is an essential default for many CMake projects nowadays.
- Finalize and release our long ongoing work on custom compilers. When it’s finished, you’ll be able to fill in configuration files (likely in the
yamlformat) for the compiler not natively supported by CLion, provide supported features, header search paths, defines, etc., and then use them in CLion to get your custom compiler “supported”.
- For Makefile project users, automatically find executables corresponding to Makefile build targets.
There are also several pain points in the debugger we plan to address, like long STL type names, “show as array” mode for pointer variables, and hex formatting for numerics. Check out the full plans in the blog post.
And finally, 10 years of C++ support in JetBrains tools!
On August 25, 2021 we celebrated 10 years of public support for C++ in JetBrains tools. It all started with AppCode. We were not expecting to come up with decent C++ support, but it turned out that it’s required for proper Objective-C++ support. So we started with macros in Objective-C++ code, STL auto-import, and some C++ completion. In later AppCode versions, we added libc++ support, correct template parsing, some C++11 features support, and Implement/Override for C++ code. But only when Google Test support landed in AppCode did it become serious enough and we started considering a standalone C++ IDE for the JetBrains family.
That’s how the idea of CLion was born. One interesting fact is that the first CLion demo was in September, 2013, by Dmitry Jemerov at JetBrains Day at FooCafé in Malmo, Sweden. Other names we considered for the IDE were CIDELight, Voidstar, Hexspeak, GottaC, and CTrait. Let us know what other facts you’d like to learn about our C++ Tools, CLion, and ReSharper C++.
About the authors | https://blog.jetbrains.com/clion/2021/09/cpp-annotated-august-2021/ | CC-MAIN-2022-21 | refinedweb | 2,769 | 60.75 |
Hi ALL,
I want to analyze my data by a python script. The data are in multiple file and they are kept in different directories. Moreover, the name of the files in those directories are same but have different extension. I want to get my output just by providing a single input filename. for example
my filename is 'test' and this will be my input
the actual filenames are suppose test.data1, test.data2, test.data3, test.data4 and among them two input files such as test1.data1 and test2.data2 are kept in test1, test2 directories. The test3 and test4 are output directories.My aim is access those directories through the python script and then access those datafile. All the four directories are present in my machine but the two output files will be generated with extension .data3 and .data4 through the script. I have started with this following script but I can't complete it. Any help will be appreciating
import re import numpy import os import glob filename =raw_input('enter the input file name: ') lines = open(input.data1, 'r').readlines() lines1 = open(input.data2, 'r').readlines() outfile1=open(input.data3, 'w') outfile2=open(input.data4, 'w')
Best Sudipta | https://www.daniweb.com/programming/software-development/threads/446496/accessing-directories-and-files-using-python | CC-MAIN-2017-17 | refinedweb | 201 | 69.18 |
User Tag List
Results 1 to 2 of 2
Hybrid View
- Join Date
- Jan 2010
- 4
- Mentioned
- 0 Post(s)
- Tagged
- 0 Thread(s)
current_user object is getting lost
Hello,
I'm working on an app that tracks oil/gas/electricity consumption. Can be seen on with login/password as guest/guest
This is all going fine. Now I decide that I also want to log water usage. I see that it's the exact same as the method for gas, i.e. a meter that logs usage in m3, so I figure I'll cut and paste as much code as possible.
I generate the scaffold and edit the existing view where I want to be able to define a new water account. This page is on and is the same page used to create a new gas account.
When I click on the link to "New Water Account" I get a error as follows
Code:
You have a nil object when you didn't expect it! The error occurred while evaluating nil.country
Code Ruby:
def edit logger.debug "Current user : #{@current_user}" @water_account = params[:id] ? @current_user.water_accounts.find_by_id(params[:id]) : WaterAccount.new @water_suppliers = @current_user.country.water_suppliers.find( :all, :order => 'name') @water_units = WaterUnit.find( :all, :order => 'name')
What I can't fathom is that on the same page for setting up new water/gas etc. accounts, the link to a new gas account works fine, and I copied the water_accounts_controller code directly from the gas_account_controller code, exactly the same. They are both accessed via the same view so I cant understand how in one case the @current_user is null and in the other case, its fine, and I can access it. BTW, the line that gives the error is the
Code Ruby:
@water_suppliers = @current_user.country.water_suppliers.find( :all, :order => 'name')
I'm stumped by the fact that , from a controller point of view, the code is exactly the same, and the code in the view which accesses both controllers is also the same but in one case the current_user object disappears.
Any help would be most appreciated.
/ Colm
- | http://www.sitepoint.com/forums/showthread.php?685676-current_user-object-is-getting-lost&mode=hybrid | CC-MAIN-2014-23 | refinedweb | 349 | 62.88 |
I have a bunch of methods that I use across the whole application, from various components. First of all, is this a good use-case for plug-ins?
I currently have a number of such plugins in my project, and sometimes I use the same function name across plugins. To clarify these are plugins that hold common functions that are used throughout the application, so I put them in plugins rather than duplicate them all over the place.
So for example, I have a
find() method in my
Customers.js plugin and a totally different
find() method in my
Products.js plugin.
Is there an elegant way to introduce a namespace such that I can do something like this in my components:
Vue.Customers.find() and
Vue.Products.find() ?
Thanks in advance. | https://forum.vuejs.org/t/custom-plugins-for-app-wide-functions/60134/2?u=lcarbonaro | CC-MAIN-2021-49 | refinedweb | 132 | 65.93 |
Agenda
See also: IRC log
<scribe> Scribe: wiecha
Leigh: no news
John: don't think Steven had any either
Nick: updated the list according to F2F and last meetings
there are a couple of resolutions w/o action items
Leigh: ok, let's assign those
<klotz>
<scribe> ACTION: Leigh Klotz to rename XForms for HTML to XForms Attributes for HTML [recorded in]
<trackbot> Created ACTION-615 - Klotz to rename XForms for HTML to XForms Attributes for HTML [on Leigh Klotz, Jr. - due 2010-04-21].
Leigh: the next one, for relaxing constraints, is already done
also for p3ptype
<klotz>
Leigh: still think we need to do something on task force for access control
does anybody implement parsing those headers?
XHR will enforce this already
would suggest we add access control back on the on-going agenda
John: let's update our understanding of the requirements here
Leigh: let's otherwise close the action on Mark
and track this as a group on the agenda
action to respond to last call for XHR...we haven't discussed this but did interact with them w/o success
<trackbot> Sorry, couldn't find user - to
so we should close this action
can close steven's action to check on f2f and recharting
and also for kenneth and json issues
forms wg charter w/john boyer is done
John: as would the item on my list
for kenneth, first item on json is done, can close
John: i've also completed my action on form parts...pls close
Leigh: steven has action to arrange for this
Nick: steven checking on 4-day meeting option
Leigh: so yes, we're meeting...either for 2 days for 4...will report to XCG
John: in looking at designing forms starting from existing schemas which make use of xsd:duration have found that xforms:duration is not supported, now say processors support all xsd datatypes except for duration and a few others
but durations are popular in BPM applications :-)
it happens that the schema processor attached to our xforms processor is fine with xsd:duration
so this is just a spec issue
no type MIP was created
given it's omitted from the spec
Leigh: there are problems with comparison operations
Nick: don't know how many days in a month so comparing two durations is hard
John: yes, but this seems outside the scope of XForms
we have subtypes with ordering, but not the composite type
those are useful additions
but doesn't imply losing xsd:duration
Nick: the spec only says xsd:duration is not recommended for ...
John: Section 5.1 say's they're
not supported in totality
... think this is mistaken...if I can support xsd:duration in my schema and in a type, hard to believe it's not valid in a type MIP
Leigh: supported as abstract datatype
John: yes, this means supported as base type to derive xforms:daytime and yearmonth durations
derived by restriction from xsd:duration
authors should be allowed to use xsd:duration even given its lack of convenience
Leigh: also for xforms:duration?
John: xsd:duration already allows empty string
Nick: do the same for duration as for the other types
John: agree
Leigh: what is the schema for schema for duration?
John: regular expression...
ours would be simpler...just the union of xsd:duration and empty string
could just do an erratum
optional for 1.1 processors...better than not having it at all
current behavior is not having any type restriction at all
Leigh: sounds good for basic type, why also the convenience type now?
John: would be forced to use a schema to define it myself in the app
Leigh: should this be a 1.2 module?
and then suggest implementing it in 1.1 processors...
Leigh: what's the cut between errata and 1.2?
John: for xsd:duration, claim it's not supported seems to be basically wrong
since it can be used directly in the type MIP
and our intention was to define the corresponding empty type for each supported xsd type
<klotz> Since XML Schema Part X does not define xs:duration as an abstract type, it is erroneous for XForms 1.1 to attempt to redefine it as such in a note.
John: then we could introduce the xforms type in the schema which is already not normative so updateable
Leigh: so the argument for 1.1
erratum is that it's incorrect as stated
... any opinion this should be delayed to 1.2?
(no response)
proposed resolution: amend the note stating xsd:duration is unsupported to allow for its use
<John_Boyer> In 5.1, XForms supports all XML schema 1.0 data types except... remove xsd:duration from that list
<John_Boyer> In the following note, The built-in datatype xsd:duration does not support a total ordering. Form authors are encouraged to used xforms:dayTimeDuration or xforms:yearMonthDuration instead
<John_Boyer> In 5.2.1, add duration to list of built-in primitive types in the xforms namespace
Resolution: xsd:duration and the xforms:duration including empty string is also supported
<scribe> ACTION: John_Boyer to write erratum supporting xsd:duration and xforms:duration [recorded in]
<trackbot> Sorry, couldn't find user - John_Boyer
Nick: have a 1.2 item related to this
<scribe> ACTION: John_Boyer to create erratum category on the forms wiki [recorded in]
<trackbot> Sorry, couldn't find user - John_Boyer
Nick: would like to get opinion on moving to xsd 1.1 part 2 data types in xforms 1.2
Leigh: is 1.1 rec?
Nick: still a WD
Leigh: think something is coming out soon
John: what is the NS for those types?
Leigh: would think it would be the same
Nick: xsd 1.1 is intended backwards compat. with 1.0
John: that's good since it would then be feasible for implementors to add those new types to schemas to provide all of xforms types
i.e. to avoid NS conflicts
Leigh: idea sounds fine depending on dates when drafts/recs published
John: also the question on the rest of xsd 1.1 not just the types
Nick: right, the types are the easy part
John: Phillip mentioned the utility of the assertion mechanism relative to xf:constraint
may cause some issues related to live validation when data are changing vs. validating one-time
Phillip: yes
John: constraints run during recalculate, schema runs during revalidate
so you'd have calculation like constraints running after the "calculate" phase is over
not saying this won't work...just not sure
Nick: just run schema processor every time
Leigh: global or tied to nodes?
need to add the general use of xsd 1.1 to the list of topics to discuss
also 0004 and 0005
Nick: still think that form parts look like custom components, and importing models is a bit different...john seems to agree
John: yes, feel that form parts is interesting but there's also XBL
those are more generic composite controls
form parts is a design-time injection of code...little closer to runtime model/@src but does two injections (model and control)
John: the important bit is defining the injection points into the UI layer
the model position isn't really functional
so external model imports using something like @src or @import is simpler than these two alternatives
Leigh: when you generate the data layer how do you avoid name conflicts?
John: programmatic renaming at design time
so @id conflict resolution is easier for us - no runtime issues
Leigh: if we used model/@src we'd depend on unique @id's
so why is include not good enough?
John: faster for us to get it in just as a design-time feature
Leigh: why does your use case suggest model/@src vs. include?
John: not sure it does
Leigh: this is my concern...that it seems just doing something very close to include is not sufficient
John: the other reason we're doing form parts is we have two injection points...more important is the UI one
w/o doing nested forms and model-model communication, form parts gives us reuse
so it we can't do both injection points model/@src probably isn't useful
John: simple case of reusable web services declarations would be useful
Leigh: include would probably handle that though
lack of parameters on straight inclusion is a problem
Leigh: one approach is to start with something simple, other option is not not "use up" that syntax and make the real solution harder
John: I'm winding up in that latter camp
Leigh: could start with model/@src with scoping rules
John: could then expand later with a portion of a document that contains UI w/updated content
Leigh: starts looking like xpointer
John: would probably focus on the UI injection and handle the model part as a side-effect
Leigh: in summary what's the take-away from your use case and model/@src?
John: like the @id scoping resolution to get closer to form parts
seems to be unavoidable part of a solution
Leigh: some form of encapsulation
John: yes
then if we could use that import element *anywhere* and if it had UI and model elements it would "do the right thing" in terms of import
Leigh: sounds like Nick's point that this sounds like a component
John: perhaps, but some of the other component technologies are a bit more complex in separating submodels
Leigh: that's a more extreme version of encapsulation
John: right...this leads to all the model-model communication issues
import for model only, the fragments just work together
Leigh: Nick, can you pick this up by email?
not sure what's the next step
Leigh: ok, let's pick it up in the next call
This is scribe.perl Revision: 1.135 of Date: 2009/03/02 03:52:20 Check for newer version at Guessing input format: RRSAgent_Text_Format (score 1.00) Found Scribe: wiecha Inferring ScribeNick: wiecha Default Present: ebruchez, Leigh_Klotz, wiecha, Nick_van_den_Bleeken, John_Boyer, +0782483aaaa Present: ebruchez Leigh_Klotz wiecha Nick_van_den_Bleeken John_Boyer +0782483aaaa WARNING: No meeting title found! You should specify the meeting title like this: <dbooth> Meeting: Weekly Baking Club Meeting Agenda: WARNING: No meeting chair found! You should specify the meeting chair like this: <dbooth> Chair: dbooth Got date from IRC log name: 14 Apr 2010 Guessing minutes URL: People with action items: john_boyer klotz leigh WARNING: Input appears to use implicit continuation lines. You may need the "-implicitContinuations" option.[End of scribe.perl diagnostic output] | http://www.w3.org/2010/04/14-forms-minutes.html | CC-MAIN-2017-30 | refinedweb | 1,758 | 52.83 |
wikiHow is a “wiki,” similar to Wikipedia, which means that many of our articles are co-written by multiple authors. To create this article, 67 people, some anonymous, worked to edit and improve it over time.
This article has been viewed 459,942 times.
Learn more...
Ever wanted to program in C++? The best way to learn is by looking at examples. Take a look at the basic C++ programming outline to learn about the structure of a C++ program, then create a simple program on your own.
Steps
- 1Get a compiler and/or IDE. Three good choices are GCC, or if your computer is running Windows, Visual Studio Express Edition or Dev-C++.
- 2Try some example programs. Copy and paste the following into a text/code editor:
A simple program is given by Bjarne Stroustrup (developer of C++) to check your compiler:
- A program for finding the sum of two numbers:
[[Image:Create a Simple Program in C++ Step 2 Version 3.jpg|center]] ; }
- A program for finding the product in multiplication problems:
[[Image:Create a Simple Program in C++ Step 3 Version 3.jpg|center]] #include <iostream> int main() { int v1, v2, range; std::cout <<"Please input two numbers:"<< std::endl; std::cin >> v1 >> v2; if (v1 <= v2) { range = v2 - v1; } else { range = v1 - v2; } std::cout << "range = " << range << std::endl; return 0; }
- A program for finding the value of exponents:
[[Image:Create a Simple Program in C++ Step 4 Version 3.jpg|center]] #include <iostream> using namespace std; int main() { int value, pow, result=1; cout << "Please enter operand:" << endl; cin >> value; #cout << "Please enter exponent:" << endl; cin >> pow; for (int cnt=0; cnt!=pow; cnt++) result*=value; cout << value << " to the power of " << pow << " is: " << result << endl; return 0; }
#include <iostream>[[Image:Create a Simple Program in C++ Step 1 Version 3.jpg|center]] #include <string> using namespace std; int main () { string s; cout << "Your Name \n"; cin >> s; cout << "Hello, " << s << '\n' ; return 0; }
- 3Save this as a .cpp file with a name that accurately reflects your program. Don't confuse there are many other extensions for C++ files, choose any of them (like *.cc, *.cxx, *.c++, *.cp) .
- Hint: It should say Save as Type: {select "All Files"}
- 4Compile it. For users of linux and gcc compiler, use Command : g++ sum.cpp. Users of Window can use any C++ compiler, such as MS Visual C++,Dev-C++ or any other preferred program.
- 5Run the program. For users of Linux and gcc compiler Command : ./a.out (a.out is an executable file produce by compiler after compilation of program.)Advertisement
Community Q&A
- QuestionAre there online solving programs?Community AnswerYes, there are many types of online solving programming tests available on the internet.
- QuestionWhat are the uses of C++?Community AnswerEverything! It can be used in everything from programming games to making webpages, software, and databases.
- QuestionI'm interested in learning C++, but I don't have any money. What can I do?Community AnswerLook for tutorials like this one online and teach yourself.
- QuestionHow would I know which data type I'm supposed to use?Community AnswerThink about what the data needs to do. For example, if you need to do something like simple counting or a loop, use int. If you need to keep track of multiple characters, use a string.
- QuestionHow can I learn programming using C++?Sebir MoranCommunity AnswerWatch some tutorials on YouTube. Or buy a book on C++ programming like Sumita Arona's Programming with C++.
- QuestionWhat are the steps to start C++ programming?Community AnswerIf you know C you can can easily learn C++. There is no much difference in C & C++.
- QuestionHow can I create a program that stores school records through C++?Soos MartonCommunity AnswerThe simplest way is to use the standard input method (cin) to get the user's input, then store it in a file using fstreams.
- QuestionHow can I learn C programming in a simple way?Community AnswerYou can watch YouTube tutorials and follow along: that goes a long way. Alternatively, you could join a program such as Treehouse or Cademy. You could also pick up a book and read about C++.
- QuestionI want to create a program that allows my customers to search all the products in my store. What do I need to know to do this?Are LodgaardCommunity AnswerYou need to learn a programming language and some type of server language.
- QuestionIs C++ a better language than C#? If so, why?Community AnswerThey’re both object oriented, and similar in many ways, but I can’t say which one is better. C# is a little easier to learn, and widely used for most business-oriented applications nowadays, but it doesn’t have the same “power” as C++.
Video
Tips
- cin.ignore() prevents the program from ending prematurely and closing the window immediately (before you have time to see it)! Press any key if you want to end the program. cin.get() functions in a similar manner.Thanks!
- Add // before all of your comments.Thanks!
- Feel free to experiment!Thanks!
- Learn programming in C++ with ISO standardsThanks!
- For more details about programming in C++ give a visit cplusplus.comThanks!
Warnings
- Your program will crash if you try to input alphabetical values to one of the "int" vars. Since no propper error trapping is done your program can't convert the values. Better read string or catch your exceptions.Thanks!
- Make sure to stay as far away from Dev-C++ as possible, because it has multiple bugs, an outdated compiler, and has not been updated since 2005.Thanks!
- Never use obsolete code.Thanks!
Things You'll Need
- A text/code editor (e.g. vim, notepad, etc).
- A compiler.
- Alternatively, an IDE contains an editor and a compiler.
- Turbo c
- Codepad online
- Notepad++ | https://www.wikihow.com/Create-a-Simple-Program-in-C%2B%2B | CC-MAIN-2020-50 | refinedweb | 968 | 66.84 |
On Friday, September 18, 2009 5:03 PM, Jaya Kumar wrote:> On Thu, Aug 6, 2009 at 11:31 AM, Ben Nizette <bn@niasdigital.com> wrote:>> On Tue, 2009-08-04 at 20:48 -0400, H Hartley Sweeten wrote:>>>>> For the record. The reason I sent this is I'm trying to work out an>>> extension to gpiolib that adds gpio_port_* access to the API. Most>>> of the gpiolib drivers already the necessary logic since the raw I/O>>> is performed on the entire 'chip'. The API just needs the extensions>>> added to request/free the port, set the direction and get/set the value.>>>>>> Is this a worthwhile addition?>>>> Plenty of people seem to think so. Personally I haven't seen a great>> use case except "'coz I can", but if you've got one I'd love to hear.>> Yes, you're right that there has been no major demand for it. There> are (luckily?) only a moderate number of devices that are using gpio> as their parallel bus interface. I've been supporting the batch-gpio> patchset below out-of-the-tree because it has come in handy with a few> e-paper display controllers and LCD 8080-IO that I've been developing> with.With the abundant number of GPIO's available on many of the ARM chips itseemed to me a "port" extension would be worthwhile. It would allow aside-band bus from the chip to access low-speed devices like a characterLCD as an example.>>>> Have you seen ? Donno what ended up>> happening to that patchset..>>>> I didn't pursue it further and have maintained it out-of-tree. I felt> that David had concerns about the API I implemented so it was unlikely> to get merged and I didn't have the motivation to implement another.> :-)Hmm.. That patchset is a lot different than what I was thinking of. Your patch allows a variable width to the number of gpio's in the "port". Butit also still gets/sets the "port" by individual bit accesses to thegpio_chip. By doing this I don't see how you could get a performanceincrease.The extension I was working on just allowed accessing the native widthof the gpio chip. Most of the gpiolib drivers read/write the bits in anative width, the individual gpio pin is masked in/out. My patch justallows access to the raw data without masking anything.Take the .get method in pca953x.c driver as an example.static int pca953x_gpio_get_value(struct gpio_chip *gc, unsigned off){ struct pca953x_chip *chip; uint16_t reg_val; int ret; chip = container_of(gc, struct pca953x_chip, gpio_chip); ret = pca953x_read_reg(chip, PCA953X_INPUT, ®_val); if (ret < 0) { /* NOTE: diagnostic already emitted; that's all we should * do unless gpio_*_value_cansleep() calls become different * from their nonsleeping siblings (and report faults). */ return 0; } return (reg_val & (1u << off)) ? 1 : 0;}The native width of the device is either 8 or 16 bits. To get a gpio valueall of the bits are read then the desired gpio is masked out.My thought was to just add the following methods to struct gpio_chip:int (*port_direction_input)(struct gpio_chip *chip);unsigned int (*port_get)(struct gpio_chip *chip);int (*port_direction_output)(struct gpio_chip *chip, unsigned int value);void (*port_set)(struct gpio_chip *chip, unsigned int value);I basically stopped working on this after Ben's comment and gettingno other feedback. If this appears useful I can look at it again.> Thanks,> jaya> > ps: I'm in Portland for the festival of linux conferences this week> and would be happy to work on this/discuss alternate APIs if it is of> interest.Regards,Hartley--To unsubscribe from this list: send the line "unsubscribe linux-kernel" inthe body of a message to majordomo@vger.kernel.orgMore majordomo info at read the FAQ at | https://lkml.org/lkml/2009/9/18/432 | CC-MAIN-2016-44 | refinedweb | 627 | 64.1 |
(Tutorial) Unleashing the power of examples in learning regex. Part 1 with example on email extraction (with python & c++ code)
Let’s start with the basics.
What’s REGEX?
Regex is the tool to find specified matches in string like: e-mails, names, cities, syntax errors, tickers etc …
To put it simply when you want to extract specified words (in our example emails) from the below text — regext is your friend!
This is an example string to show you that this email example@example.com can be easily extracted from such text.
Regex is used everywhere and in general its fundamentals are mostly the same across all programming languages.
Regex is a must-have skill for any programmer, data scientists or machine learning scientist! Once you get to know the basics and see a few examples you will be astonished how easy it is.
Let’s start with the example — finding e-mails in text:
As a programmer it is very common to get a task of finding and extracting some information from a website or a document. In this example, let’s focus on extracting e-mails.
For example:
Let’s imagine you are assigned with a very common task to extract all e-mails from a string. In other hand, to create a class that takes as input a raw string and outputs all e-mails within a string.
How to do it?
Here is the solution:
‘[\w.-]+@[\w.-]+.\w+’
OK, let’s go with a thought process.
We need to start how e-mails look like, what are potential combinations of e-mails.
We can have:
- examplenameandsurname@xxx.com
- m123445zal123@zxz.eu
- Pauline.Van-Luise@luxembourgh-eu5.com
The first thing, the most important one is WHAT’S COMMON IN ALL EMAIL ADDRESSES….
This is @ — simply all e-mails contain @. But before and after @ there may be any combination of words or special characters. So how to find them all?
Our goal is to find every combination of words before and after @. In regex to specify any word combination we need to use a special symbol — \w.
— \w is for ASCII letter(e.g. a b c), digit(e.g. 1 2 3 or underscore (e.g. _).
But in email we can also have a dot ( . ) so in regex dot is a dot — . (in fact dot represents any character).
And also we can have a dash ( — ) so in regex dash is a dash ( - )
So how to merge \w, dot (.) and dash (-) together? Simply using a square brackets [] like →
[\w.-].
We also should add the plus symbol at the end to tell regex engine that this combination of any symbols may repeat many times, hence our first part of the regex looks now:
‘[\w.-]+’
Now we add @ as a common pattern for all emails.
‘[\w.-]+@’
and now, we need to add the same pattern after @:
‘[\w.-]+@[\w.-]+’
all email addresses end with .com / .pl / .eu / .xxx, so we should also add it:
‘[\w.-]+@[\w.-]+.\w+’
let’s see a complete working python code:
import reexample_string = ‘This is example string with example@example.com mail and with example-example@example.com.’all_emails = re.findall(r’[\w.-]+@[\w.-]+.\w+’, example_string)print(all_emails)
now let’s see a complete working c++ code:
#include <iostream>
#include <regex>
#include<string.h>using namespace std;int main()
{
string example_string = “This is example e@example.com string with example@example.com mail and with exampleexample@example.com kuku@example.com. Some other text.”;regex rexp(“[\\w.-]+@[\\w.-]+.\\w+ “);
smatch m;
int i = 1;while (regex_search(example_string, m, rexp)) {
cout << “\nMatched string is “ << m.str(0) << endl
<< “and it is found at position “<< m.position(0) << endl;i++;example_string = m.suffix().str();}
cout << “Maslo “ << example_string << endl;
return 0;
}
The difference between python and c++ is an additional white space in c++ at the end of the regex sentence. why? — python as interpreter language allows for that while c++ is more explicit.
All the best! | https://maciejzalwert.medium.com/tutorial-unleashing-the-power-of-examples-in-learning-regex-fb596b045d4?source=---------9---------------------------- | CC-MAIN-2021-25 | refinedweb | 659 | 68.16 |
- Start Date: 2018-02-15
- RFC PR:
- Ember Issue:
Summary
Beginning the transition to deprecate the fallback behavior of resolving
{{foo}} by requiring the usage of
{{this.foo}} as syntax to refer to properties of the templates' backing component. This would be the default behavior in Glimmer Components.
For example, given the following component class:
import Component from '@ember/component'; export default Component.extends({ init() { super(...arguments); this.set('greeting', 'Hello'); } });
One would refer to the
greeting property as such:
<h1>{{this.greeting}}, Chad</h1>
Ember will render "Hello, Chad".
To make this deprecation tractable, we will provide a codemod for migrating templates.
Motivation
Currently, the way to access properties on a components class is
{{greeting}} from a template. This works because the component class is one of the objects we resolve against during the evaluation of the expression.
The first problem with this approach is that the
{{greeting}} syntax is ambiguous, as it could be referring to a local variable (block param), a helper with no arguments, a closed over component, or a property on the component class.
Exemplar
Consider the following example where the ambiguity can cause issues:
You have a component class that looks like the following component and template:
import Component from '@ember/component'; import computed from '@ember/computed'; export default Component.extend({ formatName: computed('firstName', 'lastName', function() { return `${this.firstName} ${this.lastName}`; }); });
<h1>Hello {{formatName}}!</h1>
Given
{ firstName: 'Chad', lastName: 'Hietala' }, Ember will render the following:
<h1>Hello Chad Hietala!</h1>
Now some time goes on and someone adds a
formatName helper at
app/helpers/fortmatName.js that looks like the following:
export default function formatName([firstName, lastName]) { return `${firstName} ${lastName}`; }
Due to the fact that helpers take precedence over property lookups, our
{{formatName}} now resolves to a helper. When the helper runs it doesn't have any arguments so our template now renders the following:
<h1>Hello !</h1>
This can be a refactoring hazard and can often lead to confusion for readers of the template. Upon encountering
{{greeting}}!); finally, you check the component's JavaScript class to look for a (computed) property.
Like RFC#0276 made argument usage explicit through the
@ prefix, the
this prefix will resolve the ambiguity and greatly improve clarity, especially in big projects with a lot of files (and uses a lot of addons).
As an aside, the ambiguity that causes confusion for human readers is also a problem for the compiler. While it is not the main goal of this proposal, resolving this ambiguity also helps the rendering system. Currently, the "runtime" template compiler has to perform a helper lookup for every
{{greeting}} in each template. It will be able to skip this resolution process and perform other optimizations (such as reusing the internal reference
object and caches) with this addition.
Furthermore, by enforcing the
this prefix, tooling like the Ember Language Server does not need to know about fallback resolution rules. This makes common features like "Go To Definition" much easier to implement since we have semantics that mean "property on class".
Transition Path
We intend this to be a very slow process as we understand it is a large change. Because of this we will be doing a phased rollout to help guide people in transtion. Below is an outline of how we plan to roll this change out.
Phase 1:
- Add template lint rule to ember-template-lint as an opt-in rule
- Document the codemod infrastructure and codemod. Make it available for early adopters
- Start updating docs to use
this.
Phase 2:
- Add the lint rule by default in the apps
.template-lintrc.js
- Complete doc migration to use
this.
Phase 3:
- Enable the lint rule by default in the
recommendedconfig
Phase 4:
- Introduce deprecation app only fallbacks
Phase 5:
- Introduce deprecation for any fallbacks
Phase 6:
- Rev major to 4.0.0
- Add assert for fallback behavior
Phase 7:
- Remove fallback functionality in 4.5, post 4.4.0 LTS
How We Teach This
{{this.foo}} is the way to access the properties on the component class. This also aligns with property access in JavaScript.
Since the
{{this.foo}} syntax has worked in Ember.Component (which is the only kind of component available today) since the 1.0 series, we are not really in a rush to migrate the community (and the guides, etc) to using the new syntax. In the meantime, this could be viewed as a tool to improve clarity in templates.
While we think writing
{{this.foo}} would be a best practice for new code going forward, the community can migrate at its own pace one component at a time. However, once the fallback functionality is eventually removed this will result in a "Helper not found" error.
Syntax Breakdown
The follow is a breakdown of the different forms and what they mean:
{{@foo}}is an argument passed to the component
{{this.foo}}is a property on the component class
{{#with this.foo as |foo|}} {{foo}} {{/with}}the
{{foo}}is a local
{{foo}}is a helper
Drawbacks
The largest downside of this proposal is that it makes templates more verbose, causing developers to type a bit more. This will also create a decent amount of deprecation noise, although we feel like tools like ember-cli-deprecation-workflow can help mitigate this.
Alternatives
This pattern of having programming model constructs to distinguish between the backing class and arguments passed to the component is not unique to Ember.
What Other Frameworks Do
React has used
this.props to talk about values passed to you and
this.state to mean data owned by the backing component class since it was released. However, this approach of creating a specific object on the component class to mean "properties available to the template", would likely be even more an invasive change and goes against the mental model that the context for the template is the class.
Vue requires enumeration of
props passed to a component, but the values in the template suffer from the ambiguity that we are trying to solve.
Angular relies heavily on the dependency injection e.g.
@Input to enumerate the bindings that were passed to the component and relies heavily on TypeScript to hide or expose values to templating layer with
public and
private fields. Like Vue, Angular does not disambiguate.
Introduce Yet Another Sigil
We could introduce another sigil to remove ambiguity. This would address the concern about verbosity, however it is now another thing we would have to teach.
Change Resolution Order
The other option is to reverse the resolution order to prefer properties over helpers. However this has the reverse problem as described in the exemplar.
Do Nothing
I personally don't think this is an option, since the goal is to provide clarity for applications as they evolve over time and to provide a more concise mental model.
Unresolved questions
TBD | https://emberjs.github.io/rfcs/0308-deprecate-property-lookup-fallback.html | CC-MAIN-2019-43 | refinedweb | 1,141 | 55.34 |
Welcome to the November 19 - 25, 2017 edition of the Office 365 Weekly Digest.
There were no updates to the Office 365 Roadmap last week, likely a result of the shortened week due to the Thanksgiving holiday in the United States.
There are no changes to the list of upcoming events, but there are plenty of opportunities for customer immersion experiences with Office 365, Windows 10, and Microsoft Teams. Also, keep in mind that there are a couple of Microsoft 365 focused events on the calendar for December.
The third post of the "Four Success Factors for Driving Microsoft 365 Adoption" series, and a Q&A Roundup from a recent Teams On Air episode focused on Skype Meeting Broadcast, highlight last week's blog posts. Also included are posts with details on the November 2017 release for Office on iPad and iPhone, resources for Microsoft Teams in Education, as well as how to use an Exchange Transport Rule to block direct delivery to @onmicrosoft.com email addresses.
A reminder about DirSync and Azure AD Sync connectivity, as well as the latest episode of Teams On Air are included in the noteworthy items from last week. Also, an excellent report, Office 365 Year in Review 2017, is now available for download and is highly recommended.
OFFICE 365 ROADMAP
There were no items added to the Office 365 Roadmap last week...
UPCOMING EVENTS
Transforming your business to meet the changing market and needs of your customers
When: Tuesday, November 28, 2017 and Wednesday, November 29, 2017, November 30, 2017, December 5, 2017 at 12pm ET & 3pm ET and Tuesday, December 12, 2017.
Webinar: Embrace modern IT with Microsoft 365-powered devices
When: Tuesday, December 5, 2017 at 1pm ET | Microsoft 365-powered devices—devices running Windows 10 and Office 365 ProPlus, managed by Enterprise Mobility + Security—are the best way to experience the benefits of Microsoft 365. By combining the latest protection from advanced threats and proactive insights with a servicing model that's predictable and updates that are easy to deploy, you can ensure that your devices are always up to date and your users are always benefiting from the latest productivity enhancements and new features. In addition, embracing Microsoft 365 enables you to make the transition to cloud-based management at your own pace. Join this webinar to: (1) See how you can embrace both user-owned and business-owned devices without sacrificing security or control, (2) Discover how you can simplify the setup process for new devices and manage devices in the cloud instead of on-premises, and (3) Learn how to roll out updates for Windows and Office in a streamlined, predictable, reliable way—without disrupting user productivity. You'll also hear how you can accelerate your journey to Microsoft 365 with the FastTrack program. Engineers and product experts will be standing by to answer your questions, so don't miss this opportunity to get the answers you need to make the move to modern IT. | Related: Windows 10 for IT Pros Webinar Series (most available on-demand) | Microsoft Mechanics: Microsoft 365 Powered Devices
Microsoft 365 Powered Devices: Ask Microsoft Anything (AMA)
When: Tuesday, December 12, 2017 at 12pm ET | We are very excited to announce a Microsoft 365 Powered Device 'Ask Microsoft Anything'! What is a Microsoft 365 powered device? It's a device running Windows 10 with Office 365 ProPlus and Enterprise Mobility + Security (EM+S). The AMA will take place on Tuesday, December 12th from 9:00 a.m. to 10:00 a.m. PT in the Microsoft 365 AMA space..
Productivity Hacks to Save Time & Simplify Workflows
When: Wednesday, December 13th & 20th.
Connecting, Organizing & Collaborating with Your Team
When: Tuesday, December 19, 2017 at 12pm ET and Thursday, December 21, 2017.
BLOG ROUNDUP
Identify business needs and prioritize Microsoft 365 scenarios
This is the third post from our blog series on the four success factors for driving Microsoft 365 adoption. In this blog post, we and departments in your organization to realize important business outcomes. This requires identifying and prioritizing scenarios – which are the different ways people and teams in your organization can use the capabilities of Microsoft 365 to achieve their goals.
Teams On Air Q&A Roundup: Behind the scenes of Skype Meeting Broadcast
Last week on Teams On Air we had Jeff Tyler, lead for digital experiences at Microsoft to showed us Behind the Scenes of Skype Meeting Broadcast at Microsoft. You can leverage Skype Meeting Broadcast to engage with employees and customers live, up to 10K attendees. During this episode, we took you behind the scenes of a highly produced event at Microsoft. In addition, Jeff and I share how we use Skype Meeting Broadcast to run this show, Teams On Air, and some best practices on how you can get started with just a laptop and internet access. We had so many questions from our Q&A, that we couldn’t address them all, so we’ve rounded up all the questions from our live Q&A and included them below. If you want to catch up on the show, you can access our full list of on demand videos on YouTube at.
Office 365 for iPad & iPhone - November 2017 release details
On November 14th, 2017, Microsoft released an updated version of Office for iPad & iPhone to Office 365 subscribers - Version 2.7 in 35 languages. Here are some of the new features included this month: 1) In all applications (Excel, PowerPoint & Word): With the addition of a digital pencil to pens and effects that are already available, you can sketch out your ideas to make them come to life and rediscover the simple pleasure of creating content with a pen. 2) In Excel, you can scroll quickly through long spreadsheets on your touch device using the new scroll handles. More information and help content on this release can be found here.
Microsoft Teams in Education: Impacting our future. This blog post is just a quick reference of resources available for you to get started in Microsoft Teams in education and its major capabilities.
Block direct delivery to @onmicrosoft.com addresses
We're all familiar with how Office 365 tenants work--when you spin up a new Office 365 tenant, you get a managed domain (tenant.onmicrosoft.com). Then, maybe you configure a hybrid environment, and now your tenant has your domain, as well as your original tenant.onmicrosoft.com domain, and a new tenant.mail.onmicrosoft.com. The two managed domains--tenant.onmicrosoft.com and tenant.mail.onmicrosoft.com both have internet-routable MX records. Since we already know that the managed domains have their own MX hosts configured, that means that you can deliver mail destined a migrated user (who might have a primary SMTP address of user@customdomain.com) using their onmicrosoft.com address (user@tenant.onmicrosoft.com). If you have certain mail flow or business requirements (such as DLP, encryption, or other content filtering that has not yet been configured in Office 365) that force your inbound email to traverse a certain path, you may find that unacceptable. To prevent email delivery to the onmicrosoft.com namespace directly, you can use the create a transport rule in your Exchange Online Tenant.
NOTEWORTHY
REMINDER: DirSync and Azure AD Sync Communication to Azure Active Directory - December 31, 2017
On December 31, 2017, Azure AD will no longer accept communications from Windows Azure Active Directory Sync ("DirSync") and Microsoft Azure Active Directory Sync ("Azure AD Sync"). The last release of DirSync was released in July 2014 and the last release of Azure AD Sync was released in May 2015. Details on upgrade paths to Azure Active Directory Connect are included at the link above.
Teams On Air: Ep. 57 - 4 ways to leverage Microsoft Teams for project and client management
Format: Video (22 minutes) | Join Praveen Maloo, Product Evangelist for Microsoft Teams, and Jason Ritchey, Vice President of Client Delivery at Valorem, a consulting firm focused on digital transformation, as they discuss Valorem's journey with Teams. We will look at real-life use cases, adoption best practices and how collaboration with Teams is driving business agility across the organization.
Microsoft IT Showcase: How Microsoft protects against identity compromise
Format: Video (5 minutes) | Identity sits at the very center of the enterprise threat detection ecosystem. Proper identity and access management is critical to protecting an organization, especially in the midst of a digital transformation. This part three of the six-part Securing our Enterprise series where Chief Information Security Officer, Bret Arsenault shares how he and his team are managing identity compromise (leg number two of the security stool).
Office 365 Year in Review 2017
This 23-page report offers a look back at all the key developments in Office 365 during 2017. Catch-up with some of the news you might have missed, as we go back during the year to see changes in Microsoft Teams, SharePoint Online, OneDrive for Business, Office 365 reporting and administration and much more. All the entries include web links with further information. In the second part of the report, we look ahead to 2018 and some of the big upcoming changes like the convergence of Skype for Business and Microsoft Teams as well as the transition from Office 365 Video to Microsoft Stream. By reading this report you’ll learn how much Office 365 has developed this year and the trajectory it’s going along with some resources and predictions.
Resetting passwords on Azure AD-joined devices is much easier with the latest Windows update
We’re announcing a feature that should make your users’ lives much easier. The new Windows 10 Fall Creators update allows users with Azure AD-joined (AADJ) devices to see a “Reset password” link on their lock screen. When they click this link, they will be brought to the same self-service password reset (SSPR) experience they see when signing in from a browser. We know that controlling helpdesk costs is a constant burden for the world’s largest IT organizations. One of the common problems we’ve seen is that, even though organizations have password management solutions, users just don’t know where to go to reset their passwords. This can be due to lack of discoverability, or simply that it’s not in a place that users frequent very often. One of the significant benefits of the Azure AD password reset experience is that it’s right in front of users when they sign in on the web. Now, it’s right in-line with what your users see when coming to work the first thing in the morning, too!
Thanks for featuring my ‘Office 365 Year in Review 2017’ TechNet download report, it’s really appreciated, Also in general thanks Thomas for your contributions you have made to the wider Office 365 community with this brilliant Office 365 weekly digest blog series and planned service changes posts. Keep up the good work! | https://blogs.technet.microsoft.com/skywriter/2017/11/27/office-365-weekly-digest-2017-47/ | CC-MAIN-2018-22 | refinedweb | 1,829 | 57.1 |
A website or a web application is a collection of more than one page. How will you access different pages of a Python website created in Flask using links just as you do in the HTML website? To understand this recall that in the first post of this series of creating Python website, we coded a file application.py having @app.route(‘/’). When the application was executed by pasting the application URL in the browser we got the message in browser. It was returned by the function associated with @app.route(‘/’). @app.route(‘/’) is a route in Python Flask website where @app represents the current web application and route is the decorator.
What is a Route?
All web development frameworks provide a uniform method for connecting pages of a web application or a website. It is done by using routes which keep a record of the URLs of the application defining the sections of the website or web application to access. All the routes are defined in the Python application file. In Python Flask website it is defined with
@(object instance of Flask application).route(‘url of the page to be linked’) def function_name(optional-arguments): Function definition Return statement
@app.route('/') def index(): return('Python flask Website')
Each route in a web application or site must be predefined. A route is always associated with a function that stores the code to process before and to access a web page. It must return a string or a template that can be rendered by the browser sent by the server in response to the URL request by a client.
Passing Argument with a Route
Routes can also be used to pass an argument to the page through function. Flask allows using the last part of the URL as a value to be passed as an argument to function associated with the route. To use it as an argument, you need to define a variable name in angular brackets (<>) as the last part in URL.
@object instance of Flask application.route(‘\
’) def function_name(variable-name): Function definition Return statement
@app.route('/<user>') def hello(user): msg="Hello "+user+"!!! Welcome to my Website" return msg
Rendering a Page with Route
Now that we have understood the purpose of routes and how they are created in Python flask Website or web application, let’s understand how to render a page using a route.
If you are creating multiple pages for your website, you have to link them together. First identify the URLs for all those pages. These URLs beginning with ‘/’ are the routes where ‘/’ represents the first page or the home page of the website.
Once you identify these URLs for your resources, add them as individual routes in the application.py or app.py file or whichever .py file you are using as your application’s start point. Then associate a function with each route.
For rendering the individual pages you need to import render_template in your application.py file. So add this line at the beginning.
from flask import Flask, render_template
In the function associated with the route add the return statement with the name of the web page you want to attach to the route.
return render_template('index.html')
Note : in a Python Flask website, render_template method will always look for the requested page file (HTML) in the templates folder. If it is not there, your application will generate an error. So, create this templates folder and save all your HTML files here. All the webpages associated with you website must be stored in templates folder under the application folder ( in our example f:\flask\myproject\templates)
Linking Pages with Defined Routes
In websites hyperlinks are used to link one page to another. This is done in Flask by using your well known anchor tag (<a href=””> link text</a>). In Flask href destination is set in context to the route defined in the application.py file.
To define the hyperlinks in Flask, Jinja template language is used for rendering pages requested by users. The parameter defined with the functions linked with routes can be passed to the requested page using Jinja template language. Such variables from application can be sent to page by by enclosing it in double curly parenthesis {{}}.
To link a page with another page the following format is used in Flask
<a href={{url_for(‘name of the route function}}> link text</a>
<a href={{url_for(‘next’)}}>NEXT</a>
Be First to Comment | https://csveda.com/python-flask-website-adding-routes-to-link-pages/ | CC-MAIN-2021-04 | refinedweb | 745 | 63.59 |
A YAML/JSON/dictionary schema validator with terse schema definitions
Project description
YSchema is a terse and simple schema format with a reference validator implementation for YAML, JSON and other dictionary based data structures.
YSchema is quite minimal (in terms of lines of code) and is continuously tested against a set of of valid and invalid example data (see the examples directory). The code works nicely for its intended purpose, but may not be the most powerful or popular, even if it does what it was intended for very well. The main assumption (at least for now) is that all keys are strings without whitespace.
YSchema is written in Python (v. 3) and validates dictionaries containing basic datatypes like strings, ints, floats, lists and nested dictionaries. The schema is also a dictionary, so both the data and the schema can be written in Python, JSON, YAML, TOML, … formats. YSchema cannot validate all possible YAML / JSON data, in fact it cannot even validate its own schema files since those use significant white space in dictionary keys to describe expected data types and whether the data is required or not.
To install the YSchema Python library along with the yschema command line program run:
python3 -m pip install -U yschema
Consider using a virtual environment or adding --user to the pip command if you do not want to install into the system’s site-packages directory. PS: You may also want to look at older and more established schema and validators such as Yamale or json-schema in case those serve your needs better.
Contents
Introduction to YSchema
A simple example schema:
# There must be a key "name" that maps to a string required name: str # There can be an integer age, but it is not required optional age: int # The optional height must be above 0 optional height: float(min_val=0)
To validate this, first load the schema above into a dictionary, then load the data to validate into another dictionary, and finally run:
import yschema # possibly loaded from json or yaml or just a plain old dict schema = my_load_schema_function() data_dict = {'name': 'Tormod'} yschema.validate(data_dict, schema_dict)
If the function does not raise yschema.ValidationError then the data is valid according to the given schema. You can also use the yschema command to validate YAML files from the command line.
A more complicated example, showing constants and nested dictionaries:
# Example of a constant that can be used in validation functions constant minimum_string_length: 5 # A sub-dictionary type Whale: # The name is a string of a given minimum length required name: str(min_len=minimum_string_length) # The length must be between 0 and 500 meters optional length: float(min_val=0, max_val=500.0) required whales: list(type=Whale)
The above schema validates data like this:
whales: - name: Unknown Whale - name: Enormous Whale length: 200.0
Note that when working with aliases and types the order of the keys in the dictionary starts to matter. Either use a Python 3.6 or later, or load your schema into an OrderedDict. YSchema contains a helper function for ordered safe loading of YAML files:
with open(schema_file_name, 'rt') as yml: schema = yschema.yaml_ordered_load(yml)
More advanced features
Built in types: the following types are implemented. Optional parameters are listed below each type:
- Any
- bool
- str
- min_len
- max_len
- equals - e.g. str(equals='Hi!') or matching one of several pissibilities with str(equals=('a', 'b', 'c'))
- prefix
- int
- min_val
- max_val
- equals - e.g. int(equals=3) or int(equals=(2, 4, 6))
- float
- min_val
- max_val
- equals - e.g. float(equals=3.2) or float(equals=(2.1, 4.4))
- list
- min_len
- max_len
- type - e.g. list(type=int) or list(type=Whale)
- one_of
- types - e.g. one_of(types=(int, str)) or one_of(types=(str(prefix='Moby'), Whale))
- any_of
- types - see one_of (any_of matches if any of the types match, one_of requires exactly one match)
Alias: you can give an alias to avoid typing the same type definition over and over again:
alias Cat: one_of(types=(HouseCat, Tiger, Lynx)) alias Cats: list(type=Cat)
Glob: you can allow undefined keys by using a glob. The following will validate OK for all documents
optional *: Any
Inherit: a sub-schema introduced by type can contain a key inherit with the name of a previously defined sub-schema to avoid repeating definitions that are shared among several types:
type MeshBase: optional move: list(type=str) optional sort_order: list(type=int) optional mesh_file: str type MeshDolfinFile: inherit: MeshBase required type: str(equals=('XML', 'XDMF', 'HDF5')) required mesh_file: str optional facet_region_file: str type MeshMeshio: inherit: MeshBase required type: str(equals='meshio') required mesh_file: str optional meshio_type: str required mesh: one_of(types=(MeshMeshio, MeshDolfinFile))
Releases
Version 1.0.2 - June 11. 2018
Improve error messages and add convinience function to safe-load YAML into an OrderedDict
Version 1.0.1 - June 7. 2018
Completed v 1.0 implementation goals. The YSchema language is powerful enough to express most of what I wanted for validating Ocellaris input files. The code base is decently tested (using the fantastic CircleCI service) and a command line tool is also included for validating YAML files from the shell.
There may not be a large number of additional releases if no more features are found to be necessary for the author’s uses. It is relatively easy to add new type validators from user code, but feel free to submit a pull request if you are finding YSchema useful and have implemented some general purpose validators. YSchema does not intend to compete with complex and more fully featured schema languages like json-schema.
YSchema is copyright Tormod Landet, 2018. YSchema is licensed under the Apache 2.0 license, a permissive free software license compatible with version 3 of the GNU GPL. See the file LICENSE for the details.
Project details
Release history Release notifications
Download files
Download the file for your platform. If you're not sure which to choose, learn more about installing packages. | https://pypi.org/project/yschema/ | CC-MAIN-2018-51 | refinedweb | 994 | 59.33 |
![if gte IE 9]><![endif]>
All,
I'm using KUIB 2 and within the app I'm building I need to dynamically remove specific Views based on a users level of access. The access level is already being handled, and I have working code that runs from the onShow method that removes the View menu items the user doesn't have access to. My issue is that when I remove them from within the onShow method of controller.public.js, the menu items are visible for a moment, then disappear, and if I try to remove them within the router-events.js file, they don't exist yet. I don't want the user to know that the other Views exist.
What is the recommended method of allowing/disallowing access to different Views?
TIA
Louis
The following approach handles only hiding ui links to certain views and protect them from direct loading (based on users roles):
All custom code for the role based support needs to be put in a custom extension module – we have such place in our app – scripts/extensions/index.js For example:
// Import your custom modules here:
import customModule from './customModule';
export default angular.module('app.extensions.module', [
customModule
]).name;
In our customModule in order to protect the view we need to handle the angular $stateChangeStart and to prevent (or navigate) if the user do not have permissions to see this view:
export default angular.module('app.customModule', [
])
.run(['$rootScope', '$state', ($rootScope, $state) => {
$rootScope.$on('$stateChangeStart', function(event, toState) {
// Here based on the info from the state we can prohibit loading some of the view. For example:
if (toState.name === 'module.default.moduleTwo.grid') {
event.preventDefault();
$state.go('unauthorized');
}
});
In order to create this unauthorized state we need to use this code:
…
.config(['$stateProvider', ($stateProvider) => {
$stateProvider
.state('unauthorized', {
url: '/unauthorized',
template: '<h1>Not Authorized</h1>'
});
Then to not load any ui elements wich points to this view when users with no permission is logger we need to use following code:
.config(['$provide', function($provide) {
$provide.decorator('uiSrefDirective', ['$delegate', function ($delegate) {
var directive = $delegate[0];
directive.compile = function() {
return function(scope, element, attrs) {
var stateName = attrs.uiSref.replace(/\(.+\)$/g, ''); // strip out the state params
var injector = element.injector();
var state = injector && injector.get('$state').get(stateName);
// Watch for null (abstract) states and warn about them rather than erroring.
if (!state) {
alert('Could not find state:', attrs.uiSref);
} else {
var Session = injector.get('Session'); // The Session is a service which checks for user permission for a view
// If the user lacks sufficient permissions, hide this state from them.
if (!Session.userHasPermission(state)) {
element.remove();
}
}
// Otherwise pass through and let uiSref handle the rest
directive.link.apply(this, arguments);
};
};
return $delegate;
}]);
Hello Louis,
I received feedback from the developers of this functionality.
They provided the following information:
We don't support that officially yet, however, level access to modules/views and even grid columns is high on our backlog and we will look into it very soon.
Currently there is an unofficial way to replace the navigation panel through the extensibility mechanism of our AngularJS apps.
You can define your own custom AngularJS module to achieve that in app/src/scripts/extensions/index.js
The code would look like the following:
var myModule = angular.module('myCustomModule', []);
myModule.run(['$state', function($state) {
var state = $state.get('module.default');
if (state && state.views && state.views['side-navigation']) {
state.views['side-navigation'].template = '<p>Customized Menu</p>';
}
}]);
export default angular.module('app.extensions.module', [
'myCustomModule'
]).name;
You can take a look at app/src/scripts/common/side-navigation/index.html to see the actual text of the side-navigation so that you keep the appropriate style.
Have in mind that this will only replace the navigation panel. You can define different templates and use the right one depending on the user access. If you know the url you will be able to navigate to that view. We are still considering the approach – whether we will need to remove the views from routing (which will have implications on what happens when you login/logout) or we will do the check on view loading showing you “you don’t have access for this view”. And of course how we will get the user access level is still unknown.
Could you please give us more details on your business case? We are interested in how you determine the user access level and how you would like to restrict based on that level.
Thank you and regards.
Edsel,
Thanks for the update. I started working on this today and am still working through the best way to implement in our app.
In our app, I have implemented rules stored in the DB. In the current code (that needs to change to your new methodology), I invoke a function to return the rules for the menus. The menu items (Views) that a user has access to is based on their role within the app. Admins have all menu items, standard Users have a very limited number of menu items, and I envision that a Support role could have a different set of menu items within this app. The idea is to control the access using the rules within the DB so we should be able to change any access level without touching the actual application.
What would be nice is to have an event that fires when each menu item is processed to determine if it should be available. The event could determine if the menu item is fully accessible, view only (on the side navigation but disabled, no link), or not on the menu at all. Along with this, either the same event or a separate event should be available to determine if the requested View is accessible. This is necessary because if a user doesn't have access to a specific View, they shouldn't be able to go directly to it via the URL.
I'll post an update when I figure out the best way to implement your workaround (within our app).
I decided to put the logic that determines which menu items a user has access to on the backend service. My custom module that generates the menu invokes a function to get the menus and then generates the HTML from the returned records and sets the state.views['side-navigation'].template to the generated HTML. This is working fine now.
Regarding the individual view validation, do you have any recommendations on the best place to add logic to validate access to each view before loading/displaying? I am currently calling my verifyPageAccess() method from the page controller constructor and my issue is that the page partially loads before being redirected. Making the call from within the router-events.js onInit function results in the same results. Ideally the user shouldn't see anything that they don't have access to.
Thanks Shelley. This worked great. I didn't end up creating my own state, for what I'm doing I'm sending the user to the home page via $state.go('default.module.application.home').
Shelley,
I changed my mind and am going to try to create my own 'Unauthorized' state. What isn't clear is where to put the new code. I have everything down to the $state.go() which works for loading the application home, I'm just not sure where to add the new custom module code. I'm guessing that I'll need a couple new files since your example has 2 separate .config() statements? | https://community.progress.com/community_groups/openedge_kendo_ui_builder/f/255/p/34581/107186 | CC-MAIN-2019-22 | refinedweb | 1,258 | 55.95 |
Jersey's latest release is 0.6ea and is available from the Jersey Downloads Page. The latest stable release is 0.5ea and is also available through the Update Center. Jersey 0.6ea will be pushed to the UC in about 10 days..
At this point, installing in GlassFish is fairly easy with a
jersey-on-glassfish.xml ANT script. It'll get even easier once Jersey is made available on the GlassFish Update Center. Any day now I hear ;)); } );
@BindingType(JSONBindingID.JSON_BINDING) public class MyService { public Book get() { return new Book(); } public static final class Book { public int id = 1; public String title = "Java"; } }
Via TheGalaxy, Daniel Rubio's recent article?" | http://blogs.sun.com/theaquarium/tags/JSON | crawl-001 | refinedweb | 112 | 69.68 |
Insert object in Treeview
The post gives a great example of using Treeview.
I tried to add an InsertUnder, but that does not work.
What am I doing wrong?
I tested it using R20 and R21.
# In CreateLayout() I added a simple button self.AddButton(1002, c4d.BFH_CENTER, name="Insert Under First") # In Command() I added: if id == 1002: # Insert Under tex = TextureObject("Inserted under first item.") #self._listView.SetDragObject(root, userdata, tex) first = self._listView.GetFirst(self._treegui, self._listView) print "First: ", first # seems ok. It returns the first object T1 #InsertObject(self, root, userdata, obj, dragtype, dragobject, insertmode, bCopy): self._listView.InsertObject(self._treegui, self._listView, first, c4d.DRAGTYPE_FILENAME_OTHER, tex, c4d.INSERT_UNDER, True) # Refresh the TreeView self._treegui.Refresh() # In ListView() I added: def InsertObject(self, root, userdata, obj, dragtype, dragobject, insertmode, bCopy): print "Insert Object obj: ", obj #seems ok, T1 print "dragobject: ", dragobject #seems ok, Inserted under first item. #self._listView.InsertObject(self._treegui, self._listView, first, c4d.DRAGTYPE_FILENAME_OTHER, tex, c4d.INSERT_UNDER, True) return True
Ok, after some more reading I now understand that I should insert the object in the listview myself.
def InsertObject(self, root, userdata, obj, dragtype, dragobject, insertmode, bCopy): self.listOfTexture.append(dragobject) return True
But now it is inserted at the end (makes sense, because I use an append()).
My question is how to insert the new object under another one?
Something like this.
I am trying to make treeview of a folder with its subfolders and files.
Basically as explained in TreeView made simple – Part 1 you have to override GetDown to make it actually return the first child.
Then GetNext will be called to retrieve the next children.
So I would say it's more about how to structure your data that matters.
But since you seem to use the previous example I extended it to also support children.
So first we will extend our TextureObject to support children.
To do so we will add a list that will store all children and also a weakref of our parent. (don't forget to import weakref).
If you don't know what is a weakref I encourage you to read weakref – Garbage-collectable references to objects.
def __init__(self, texturePath): self.texturePath = texturePath self.otherData += texturePath self.children = [] # This will store all children of the current TextureObject. self._parent = None # This will store a weakreaf (so don't forget to import weakref) to the parent.
Then we will define some convenient functions in our TextureObject:
def AddChild(self, obj): obj._parent = weakref.ref(self) self.children.append(obj) def GetChildren(self): return self.children def GetParent(self): if self._parent: return self._parent() return None
Once it's done it's time to adapt our TreeViewFunctions implementation to add support for children.
So the first thing to override is GetDown() to retrieve the first child of a given TextureObject like so.
def GetDown(self, root, userdata, obj): """ Return a child of a node, since we only want a list, we return None everytime """ children = obj.GetChildren() if children: return children[0] return None
The treeview will call GetDown to retrieve the first child, then to retrieve the next child it will call GetNext on it.
So we need to adapt GetNext and GetPref to also support children.
Typically we will use the same logic, the only difference is is the current obj have a parent, look for self in the parent list instead of the global one stored in our TreeViewFunctions implementation.
def GetNext(self, root, userdata, obj): """ Returns the next Object to display after arg:'obj' """ rValue = None # If does have a child it means it's a child. objParent = obj.GetParent() listToSearch = objParent.GetChildren() if objParent is not None else self.listOfTexture currentObjIndex = listToSearch.index(obj) nextIndex = currentObjIndex + 1 if nextIndex < len(listToSearch): rValue = listToSearch[nextIndex] return rValue def GetPred(self, root, userdata, obj): """ Returns the previous Object to display before arg:'obj' """ rValue = None # If does have a child it means it's a child. objParent = obj.GetParent() listToSearch = objParent.GetChildren() if objParent is not None else self.listOfTexture currentObjIndex = listToSearch.index(obj) predIndex = currentObjIndex - 1 if 0 <= predIndex < len(listToSearch): rValue = listToSearch[predIndex] return rValue
We have everything needed now to display everything.
However, we also need to adapt DeletePressed to support children.
DeletePressed is a global event without a passed obj.
Previously we iterated listOfTexture so we need to support children. To do so let's make an iterator that will iterate all nodes and all children.
And let use it in the DeletePressed function.
# This is a global function which accept a list and will iterate over each TextureObject # of the passed list to retrieve all TextureObject and all their children. def TextureObjectIterator(lst): for parentTex in lst: yield parentTex TextureObjectIterator(parentTex.GetChildren()) def DeletePressed(self, root, userdata): "Called when a delete event is received." # Convert the iterator to a list to be able to reverse it for tex in reversed(list(TextureObjectIterator(self.listOfTexture))): if tex.IsSelected: objParent = tex.GetParent() listToRemove = objParent.GetChildren() if objParent is not None else self.listOfTexture listToRemove.remove(tex)
Now everything is done for the TreeViewFunctions implementation.
So let's add a new button in our GeDialog CreateLayout to add a child to the selected TextureObject.
And defines its behavior when clicked in the GeDialog Command.
# In CreateLayout self.AddButton(1002, c4d.BFH_CENTER, name="Add Child to selected") # In Command if id == 1002: for parentTex in TextureObjectIterator(self._listView.listOfTexture): if not parentTex.IsSelected: continue newID = len(parentTex.GetChildren()) + 1 tex = TextureObject("T{0}.{1}".format(str(parentTex), newID)) parentTex.AddChild(tex) # Refresh the TreeView self._treegui.Refresh()
Now that we have everything we may also want to support the folding state to show/hide children in the treeview.
So let's enhance our TextureObject to support folding (very similar to selection state).
class TextureObject(object): _open = True @property def IsOpened(self): return self._open def Open(self): self._open = True def Close(self): self._open = False
Then in the TreeViewFunctions implementation we need to override IsOpened and Open.
def IsOpened(self, root, userdata, obj): """ Returns: (bool): Status If it's opened = True (folded) or closed = False. """ return obj.IsOpened() def Open(self, root, userdata, obj, onoff): """ Called when the user clicks on a folding state of an object to display/hide its children """ if onoff: obj.Open() else: obj.Close()
And here you are, find the full code in pastebin, keep in mind it's only one way to store data.
How you decide to store data is up to you.
In this case, it would make more sense to have a TextureObject as root, so this way all TextureObject will be handled in the same way and the first level is not stored in a specific one level the only list.
Maybe it will be for part 3 of TreeView Exploration! :)
Cheers,
Maxime.
Thanks, great explanation!
One small issue. Delete doesn't work because objParent' is not defined.
Traceback (most recent call last): File "scriptmanager", line 251, in DeletePressed NameError: global name 'objParent' is not defined
Here the code that, I think, solves the issue:
def DeletePressed(self, root, userdata): "Called when a delete event is received." for tex in reversed(list(TextureObjectIterator(self.listOfTexture))): if tex.IsSelected: objParent = tex.GetParent() # Added listToRemove = objParent.GetChildren() if objParent is not None else self.listOfTexture listToRemove.remove(tex) | https://plugincafe.maxon.net/topic/12123/insert-object-in-treeview | CC-MAIN-2020-50 | refinedweb | 1,229 | 51.85 |
Day 4 (Developer Day) of WWW2004 kicks off with an 8:30 A.M. morning keynote. There are many fewer people here this morning, maybe 150. I remembered the camera today so expect more low quality, amateur pictures throughout the day. About three minutes before the conference closed, Stuart Roebuck explained why none of the microphones worked on my PowerBook. I'll know for next time.
The Developer Day Plenary is Doug Cutting talking about Nutch, an open source search engine. His OpenOffice Impress slides "totally failed to open this morning" so he reconstructed them in 5 minutes. He contrasts Nutch with Lucene. Lucene is a mature, Apache project. It is a Java library for text indexing and search meant to be embedded in other applications. It is not an end-user application. JIRA, FURL, Lookout, and others use Lucene. It is not designed for site search. "Documentation is always a hard one for open source projects to get right and books to ?tend to? help here."
Nutch is a young open source project for webmasters, still not end users. It has a few part time paid developers. It is written in Java based on NekoHTML. It is a search site, but wants "to power lots of sites". They may modify the license in the future. They may use the license to force people to make the searches transparent. "Algorithms aren't really objective." It's a research project, but they're trying to reinvent what a lot of commercial companies have already invented. The goal is to increase the transparency of the web search. Why are pages ranked the way they are? Nutch is used by MozDex, Objects Search and other search engines.
Technical goals are scaling well, billions of pages, millions of servers, very noisy data, complete crawl takes weeks, thousands of searches per second, state-of-the-art search result quality. One terabyte of disk per 100 million pages. It's a distributed system. They use link analysis over the link database (various possible algorithms) and anchor text indexing (matches to search terms). 90% of the improvement is done by anchor text indexing as opposed to link analysis. They're within an order of magnitude of the state of the art and probably better than that. Calendars are tricky (infinite page count). Link analysis doesn't help on the Intranet.
In Q&A a concern is raised about the performance of using threads vs. asynchronous I/O for fetching pages. Cutting says he tried java.nio and it didn't work. They could saturate the ISPs' connections using threads. The I/O API is not a bottleneck.
Paul Ford of Harper's is showing an amusing semantic web app. However, it uses pure XML. It does not use RDF. They do have taxonomies. "It's all done by hand." At least the markup is doen by hand in vi and BBEdit. This is then slurped up in XSLT 2 (Saxon), and HTML is spit out onto the site. It was hard to get started but easy to keep rolling. RDF is not right for redundant content and conditionals. They can use XSLT to move to real RDF if they ever need to. This is in the semantic web track, but it occurs to me that if this had been presented five yhears ago we would have just called it an XML app. They do use a taxonomy they've developed, but it's all custom markup and names. They aren't even using URIs to name things as near as I can tell. The web site published HTML and RSS. The original marked up content is not published.
The MuseumFinland is trying to enable search across all 1000 or so Finnish museums.
The Simile Project is trying to provide semantic interoperability for digital library metadata. "metadata quality is a function of heterogeneity" Open questions for the semantic web: How do you deal with disagreements? How do you distinguish disgareements from mistakes?
This conference is making me think a lot about the semantic web. I'm certainly learning more about the details (RDF, OWL etc.). However, I still don't see the point. For instance what does RDF bring to the party? The basic idea of RDF is that a collection of URIs forms a vocabulary. Different organizations and people define different vocabularies, and the URIs sort out whose name, date, title, etc. property you're using at any given time. Remind you of anything? It reminds me a lot of XML + namespaces. What exactly does RDF bring to the party? OWL (if I understand it) lets you connect different vocabularies. But so does XSLT. I guess the RDF model is a little simpler. It's all just triples, that can be automatically combined with other triples, and thereby inferences can be drawn. Does this actually produce anything useful, though? I don't see the killer app. Theoretically a lot of people are talking about combining RDF and ontologies from multiple sources too find knowledge that isn't obvious from any one source. However, no one's actually publishing their RDF. They're all transforming to HTML and publishing that.
Usability of RDF is a common theme among the semanticists. They all see various GUIs being used to store and present RDF. They all want to hide the RDF from end users. It's not clear, however, if there is (or can be, or should be) a generic RDF GUI like the browser is for HTML (or even XML, with style sheets).
After an entertaining lunch featuring Q&A with Tim Berners-Lee (shown above), I decided to desert the semantic web for the final afternoon of the show. Instead I've gone off to learn about the much more practical XForms. Unlikke the semantic web, I believe XForms really can work. My main question is whether browsers will ever implement this, or if there'll be other interesting implementations.
The session begins with a tutorial from the W3C's always entertaining Stephen Pemberton. He claims 20+ implementations on the day of the release and about 30 now. Some large companies (U.S. Navy, Bristol-Myers-Squibb, Daiwa, Frauenhofer) are already using this. He's repeating a lot of his points from Wednesday. XForms separates the data being collected and the constraints on it (the model) from the user interface.
What's d16n? It's not just for forms. You can also use it for editing XML, spreadsheet like apps, output transformations, etc.
Fields are hidden by default. There's no
form element any more. I'm not sure I'm going to write any more about the syntax.
It's too hard to explain without seeing thre examples, and I can't type that fast, but it is quite clean. XForms support HTTP PUT! (and GET and POST. DELETE and WebDAV methods are not supported in 1.0 but may be added in the future.)
You can submit to XML-RPC and SOAP as well as HTTP servers.
And it works with all well-formed XML pages, not just XHTML (but not classic, malformed HTML). XForms has equivalents for all HTML form widgets, and may customize some according to data type. Plus it adds a range control and
an output control. There are toggles and switches to hide and reveal parts of the user interface. These are useful for wizards.
There are repeating fields like in FileMaker. It supports conditionals.
A single form can be submitted to multiple servers.
One thing strikes me as dangerous about XForms: they provide so much power to restrict what can be entered that server side developers are likely to not validate the input like they should, instead relying on the client to have checked the input. It would still be possible to feed unexpected input to the server by using a different server or a client that doesn't enforce the constraints.
An XForm result may replace only the form instance, not the whole page, but what then about namespace mappings, etc.? What is the URI of the combined page? This is a very tricky area.
T. V. Raman, author of XForms: XML Powered Web Forms, is talking about XForms accessibility and usability. "People get so hung up on RDF's syntax. That's not why it's there." He predicts that Mark Birbeck will implement a semantic web browser in XForms, purely by writing angle brackets (XForms code) without writing regular source code. According to Pemberton, CSS is adding a lot of properties specifically for use with XForms.
Next Mark Birbeck is doing a demo of FormsPlayer. "We've had a semantic web overload."
Pure Edge's John Boyer is speaking about security in in XForms. Maybe he'll address my question about server side validation of user input. Hmm, it seems sending the data as a full XML document rather than as a list of name=value pairs might solve this. It would allow the server to easily use standard validation talks. This talk is mostly concerned with XML digital signatures and its supporting technologies. Now on the client side, how does the client know what he or she is signing? If an XML document is transmitted, what's the guarantee that that XML document was correctly and fully displayed to the user? Is what you see, what you signed? e.g. was the color of the fine print the same as the background color of the page? It turns out they have thought of this. The entire form document, plus all style sheets, etc, are signed.
The last speaker of the afternoon (and the conference) is Mark Seaborne, who will be talking about using XForms. His slide show is in an XForm. He's using an XForm control to step through the forms! He works in insurance in the UK, an industry that's completely paper forms driven. It's important that the users not have to fill in the full form before rejections are detected. "There's a huge number of projects listed on the W3C website, but most of them aren't finished and some of them are actually dead." There are lots of different bugs and inconsistencies between different implementations, many of which have to do with CSS. IBM has announced that they're starting work on putting XForms support into Mozilla.
That's it. The show is over. Go home. I'm exhausted. I'll see you on Monday. | http://www.cafeconleche.org/oldnews/news2004May22.html | crawl-002 | refinedweb | 1,745 | 76.52 |
Hey, I have been using livepage in a project and I use gaurd and twisted.cred for different livepage resources. Everything works well, but I wanted to use athena. I run into two problems with athena. The first problem is when I have one athena.Livepage object in my realm I get errors saying I can only render the LivePage once. So, to get around that I just do something like the following in the realm class : def requestAvatar(self, avatarId, mind, *interfaces): for iface in interfaces: if iface is inevow.IResource: if avatarId is checkers.ANONYMOUS: resc = AnonPage() resc.realm = self return (inevow.IResource, resc, noLogout) else: resc = MyPage(avatarId) resc.realm = self return (inevow.IResource, resc, resc.logout) This seems to work ok. Is this something I have to do? Or can I use one class that takes avatarId as a paramater on __init__? Now for the second problem. (This problem happens after I use the above fix. ) If I use checkers.InMemoryUsernamePasswordDatabaseDontUse() everything works ok. But if I use my checker or other checkers the page does not get rendered and everything just hangs. If I run my tac file with twisted -noy I can not control-C to kill the process, I have to kill it with signal 9. I have tried a basic athena.LivePage class with nothing but html, body, and liveglue that is supposed to be rendered out and it still hangs. Everything works well with livepage.LivePage, but hangs with athena. Does anyone know why this would happen? I hope this email is not too vague. Let me know if I need more details. Thanks in advance. | http://twistedmatrix.com/pipermail/twisted-web/2006-April/002619.html | CC-MAIN-2016-07 | refinedweb | 275 | 88.02 |
1 /*2 * Lucane - a collaborative platform3 * Copyright (C) 2003.common.concepts;21 22 23 //links : service <-> group24 public class ServiceConcept extends Concept25 {26 private boolean installed;27 28 public ServiceConcept(String name, boolean installed)29 {30 super(name, "");31 this.installed = installed;32 }33 34 public boolean isInstalled()35 {36 return this.installed;37 }38 39 public void setInstalled()40 {41 this.installed = true;42 }43 44 //--45 46 public boolean equals(Object o)47 {48 if(o instanceof ServiceConcept)49 return this.name.equals(((ServiceConcept)o).name); 50 51 return false;52 }53 }
Java API By Example, From Geeks To Geeks. | Our Blog | Conditions of Use | About Us_ | | http://kickjava.com/src/org/lucane/common/concepts/ServiceConcept.java.htm | CC-MAIN-2017-47 | refinedweb | 109 | 50.84 |
import "gopkg.in/kothar/brotli-go.v0"
Package brotli contains bindings for the Brotli compression library
This is a very basic Cgo wrapper for the enc and dec directories from the Brotli sources. I've made a few minor changes to get things working with Go.
1. The default dictionary has been extracted to a separate 'shared' package to allow linking the enc and dec cgo modules if you use both. Otherwise there are duplicate symbols, as described in the dictionary.h header files.
2. The dictionary variable name for the dec package has been modified for the same reason, to avoid linker collisions.
Package brotli imports 3 packages (graph). Updated 2018-02-15. Refresh now. Tools for package owners. | https://godoc.org/gopkg.in/kothar/brotli-go.v0 | CC-MAIN-2020-50 | refinedweb | 120 | 65.83 |
The program's heap is reaching its limit, and
the program should take action to reduce the amount of
live data it has. Notes:
A variant of throw that can be used within the IO monad.
Although throwIO has a type that is an instance of the type of throw, the
two functions are subtly different:
throw e `seq` return () ===> throw e
throwIO e `seq` return () ===> return ()
catch with a similar type to catch,
except that the Prelude version only catches the IO and user
families of exceptions (as required by Haskell 98). We recommend
either hiding the Prelude version of
catch when importing
Control.Exception, or importing
Control.Exception qualified, to avoid name-clashes.
try with a similar type to try,
except that it catches only the IO and user families of exceptions
(as required by the Haskell 98 IO module).). = bracket (openFile name) hClose | http://www.haskell.org/ghc/docs/6.4/html/libraries/base/Control.Exception.html | CC-MAIN-2014-41 | refinedweb | 146 | 61.87 |
From: Mark Borgerding (mborgerding_at_[hidden])
Date: 2000-03-06 10:13:28
dave abrahams <abraham-_at_[hidden]> wrote:
original article:
> I'm starting this thread to provoken discussion of potential improved
> interfaces for formatted output (and possibly input). I hope this will
> eventually result in a boost library.
>
> I'm actually having a hard time convincing a colleague to avoid
printf (and
> related functions), and he has a point:
>
[snip]
> Of course, most of the downsides of printf are well-known:
> A. Not type-safe
> B. Easy to crash it
> C. Hard to extend (you need va_list, not-even-standard-yet
vsnprintf(), &c)
>
The main reason I avoid printf statements is because of the lack of
compile time checking. There can be some really unpleasant side effects.
Anecdote:
On a project a few years back, a really sharp C programmer made a
varargs logging function that was kinda slick. It acted about the same
as printf -- it actually fed the args into sprintf. Unfortunately, it
was a big problem when a trace message was logged that had "%n" in it.
The program puked on sprintf because it was trying to store an int into
a non-existent memory location. Boom! Access violation.
This was an *extremely* difficult bug to track down.
>
> boost::format( "%1 chickens can be found in coop number %2")
> % chicken_cnt % coop_num;
This is safer than a printf, but could still be prone to an error with
unintended formatting tags in the format string. It does not employ
compile-time checking.
>
> std::cout << x, y, z;
Well you can do
std::cout << (x, y, z);
But I don't think the results are what you would want. (It will
evaluate the expressions in left-to-right order and return z.
I played around with overloading the comma operator. The results were
pretty disasterous. Way, way too error prone. Throw a couple of
parens into the mix and the behavior is completely different from what
you intended. The worst part is the silence with which it compiles.
In this respect, it is no better than printf.
>
> P.S. Also, can't we do something about std::endl? This seems like way
too
> many characters to type just to produce a newline!
>
Put the following in your cpp file, or function.
using namespace std;
or
using std::cout;
using std::cerr;
using std::endl;
I have a suggestion that I actually implemented.
Here is the gist of an approach I came up with at my last job. The
idea was to construct a temporary with a reference to some templated
type. Various modifiers could be done on the temporary, each of which
returned a reference to self. There is then an operator << that knows
how to insert one of these formatting objects into an ostream. All the
modifiers are applied to the ostream, the object is inserted and then
all the modifiers are set back to their original state. A helper
template function can optionally be used to create the correctly
formatting_object specialization -- this just makes the usage a little
easier.
The usage is less verbose and less error prone than applying the
modifieres directly to the ostream. It is also much safer than using
printf.
for (int i=0;i<offsets.size();++i)
cout << "offsets[" << i << "] = "
<< format(offsets[i]).setf(cout.hex).fill('0').width(8)
<< endl;
In the above example, the following functions are used
template<class T> formatter<T> format(T);
// tmpl func to resolve type
formatter<T>::formatter(T);
//construct the formatter object
formatter<T>::setf(fmtflags); formatter<T>::fill(char);
formatter<T>::width(streamsize);
// set a modifier in the formatter object
// Note: this does not yet apply the modifier to the ostream
template <class T> ostream & operator << (ostream & os, formatter<T> &
t)
// apply modifiers to ostream
// os << t
// unapply modifiers
If there is sufficient intereset, I will see if I can recreate this
from memory. It was a very useful class to have. I miss the little
guy ;(
I remember it had modifier operations for flags, setf, width, precision.
Something it lacked was the callback insertion behavior that allows
cout << hex . I think this could be easily implemented with a list of
callbacks in the formatter object, but I haven't done this yet.
- Mark
Boost list run by bdawes at acm.org, gregod at cs.rpi.edu, cpdaniel at pacbell.net, john at johnmaddock.co.uk | https://lists.boost.org/Archives/boost/2000/03/2363.php | CC-MAIN-2019-22 | refinedweb | 732 | 64.91 |
Anarres::Mud::Driver - A game driver for LP Muds.
use Anarres::Mud::Driver;
This is an EXPERIMENTAL LP Mud driver written in Perl. The principle is to parse the LPC as would a traditional driver, build and typecheck a parse tree, then generate Perl source code from the parse tree. More traditional drivers would here generate and interpret bytecode.
Generated Perl code is in a package namespace generated from the LPC filename of the object to be compiled.
This program contains some interesting curiosities, such as an emulation for a full 'C' style switch statement in Perl.
The system is developer only and the interface changes quite rapidly. A good place to start is probably reading the test files. For more information, see 'SUPPORT' below.
Incomplete. Experimental. Probably broken.
Speak to Arren on the Anarres II LP mud (telnet mudlib.anarres.org 5000)
Shevek CPAN ID: SHEVEK cpan@anarres.org
Copyright (c) 2002 Shevek. All rights reserved. This program is free software; you can redistribute it and/or modify it under the same terms as Perl itself.
The full text of the license can be found in the LICENSE file included with this module.
perl(1). | http://search.cpan.org/~shevek/Anarres-Mud-Driver-0.26/lib/Driver.pm | CC-MAIN-2014-52 | refinedweb | 196 | 58.89 |
TELL.
How can I get an arduino to pull data from the internet and act as a hid at the same time, so that I can type remotely?
Ive been playing around with using the buspirate as a programmer, was able to set the fuses on a bunch of loose atmega168s i have lying around, got em to use the internal clock. I'm having a heck of a time getting the ide to play nice with it, trying to make a board entry for it but so far something keeps going wrong.
>>895338
No, it really isn't. Offline processing and importing is much easier.
>>895336
nice isolation milling ;)
>>895305
>Do you need the spreadsheet to be filled out in real-time?
yes
>>895557
Reminds me of one excel-filling gadget. It simply emulated a USB keyboard.
>>895557
you need to write (or purchase) an activeX control to do that
>>895581
don't think I'm budgeted for that...
where would I find instructions on how to write one?
>>895588.
>>895652
Any microcontroller will do that. Depending on how accurate your timings need to be, clock speed might be something to base your choice on. I'd go with whatever cheapest I'm already familar with.
>>895652
Cheapest arduino I know of is $10, pic related. If you wanted you could buy a plain microcontroller chip for $1.50 and wire it up, but that would require some programming chops and equipment costing more than $10 (so, only worth it if you're making a lot of these).
>>895652
Atmega8, buddy
>>895670
>but that would require some programming chops
The cheapest arduino clone can be a programmer
Well, maybe not the ones that come with the CH340 usb chip, they can be a pain in the ass to get working
>>895670
Are you not counting clones? Cause you can get a clone for 3 moneys.
So I am waiting on the parts, but I am experimenting for the first time with piezoelectric sensors and force sensitive resistors for my arduino. Has anyone played with these before? And if so what can I expect as a general what will happen kind of thing? Like will I be able to measure the force or is it a on/off kind of reading? And how difficult is it to use and program these things?
>>895880
The whole point of a force sensitive resistor is that it's sensitive to force. Whether the part you bought is suitable for your needs is a completely different matter. They work pretty much like a potentiometer, there's plenty of Arduino example code on the net.
>>895338
>Two empty IC sockets.
>IC Directly soldered to copper.
Por que?
What I'm trying to achieve here is to make this servo arm to turn on the lamp by flipping this switch.
I've written the code to make it so it turns at 6am everyday.
The problem is the switch is hard enough for the arm to flip it. Any other idea how to achieve this?
>>895927
Instead of controlling the switch mechanically, take out the jbox and directly control the power electrically using Relays.
WARNING: This part is going to get rough and dangerous and CAN, likely WILL kill you if you don't know what you're doing. So study everything you can on relays until you know everything there is to know about the subjects involved.
>>895927
I would attach something to the end of the servo arm. Maybe something 3D printed to fit.
Will this be mounted go the side of the wall. That is going to be really unaesthetic but it'll wake you up
anyone here who has experience with Arduino, LED and microphones. Try to create a sound reactive light but failed so far- mainly on the coding side that is
>>895921
The one that is directly soldered to the copper is a 595. I have dozens of them.
The empty sockets were for an atmega and a ds1302 which I reuse.
Also, check out this metaboard I'm making. Does it look like the work of a complete retard?
Because it is.
>>896375
Could you post the code you've got so far and a slightly more detailed description of the project?
Am I correct to assume that you basically want to make a VU meter?
>>895179
Does it really need to be an excel spreadsheet? If not you can just write to a CSV file. Excel can open those. Each line is a measurement in that case.
Otherwise there are Python libs for opening and modifying XLS files.
>>895652
.
If you only one one or a few, and can buy cheap off aliexpress:
...an Uno copy costs $2.60 with no USB cable (if you already have one) or $3.60 with a USB cable included
...a "LCD 1602 keypad shield" costs about $2.80. This has the 2x16-character LCD plus 5 buttons that you can program functions into (the sixth button is a hard-wired reset button)
depends on what kind of sensors you want to run also, and how long the wires are connecting the arduino to the sensors.... arduinos can have problems reading their own analog inputs under some circumstances, and I dont know how the generic-China 1602 shields are wired as I only have a couple from Adafruit, who builds their own.... there is external 4-channel analog/digital converter boards around for <$2 tho (search aliexpress for "PCF8591 AD/DA converter")
If you don't know much about programming, consider spending more and buying from a mainstream place (in the US, this would be places like Adafruit or Sparkfun). Their stuff costs more but they have better support for it.
The cheap China hardware usually works just fine--but a lot of the China places don't help with documentation or code samples.... And any code samples they do offer may not even work.
>real time excel
Do you niggers even google?
>>896434
>Microsoft Windows 98
>Microsoft Office/Excel 2000 to 2003
>May not work with newer software; no longer supported
I still don't understand why OP needs it in a spreadsheet; the date being recorded is just a bunch of distances with accompanying times. What cross-computations would ever be done between entries?
The simplest way would be to just have the PC print the distances into a text file with the accompanying time, one entry per line. This way you would just quickly scroll down through until you got to the time or the distance you wanted.
The other way would be to have the PC print a "text" file, but make the text file as HTML code. This way you *could* put the results into HTM: tables with row and column headings, as well as break the entire listing up into smaller intervals. It would be simple to view since clicking on it would open it in a web browser.
And either of the above styles could be easily imported into a spreadsheet, if you still needed to.
>>896398
here's the code part 1:
#include "FastLED.h"
#define NUM_LEDS 50
#define DATA_PIN 6
#define CHIPSET WS2811 // make sure you enter the correct chipset
#define COLOR_ORDER GBR // and RBG order
#define BRIGHTNESS 250
#define FRAMES_PER_SECOND 30
#define COOLING random(1,10) // controls how quickly LEDs dim
#define TWINKLING 250 // controls how many new LEDs twinkle
#define FLICKER 5 // controls how "flickery" each individual LED is
CRGB leds[NUM_LEDS];
static int beatInterval = 100; // the interval at which you want the strip to "sparkle"
long nextBeat = 0.01;
long nextTwinkle = 3000; // twinkling doesn't start until after the sanity check delay
unsigned int seeds = 1;
long loops = 0;
long deltaTimeTwinkle = 100;
long deltaTimeSparkle = 300;
boolean beatStarted = false;
static byte heat[NUM_LEDS];
//// PIN and int for mic////
const int analogPin = 0;
unsigned long startMillis= millis(); // Start of sample window
unsigned int peakToPeak = 0; // peak-to-peak level
unsigned int signalMax = 0;
unsigned int signalMin = 1024;
///////////////////////////////////////////////////////////////////////////////////////////
void setup() {
// sanity check delay - allows reprogramming if accidently blowing power w/leds
delay(3000);
FastLED.addLeds<CHIPSET, DATA_PIN, COLOR_ORDER>(leds, NUM_LEDS);
LEDS.setBrightness(BRIGHTNESS);
Serial.begin(115200);
delay(100);
Serial.flush();
while ( Serial.available() ) Serial.read(); // this helps to clear out any junk in the UART
}
// Vocablulary lesson:
// Twinkling - when individual LEDs "ignite" then slowly burn out
// Sparkling - when a whole mess of LEDs "ignite" at the same time then burn out
// Flickering - when a lit led modulates it brightness to simulate the flicker of a flame
>>896493
part 2:
void loop()
{
// Wait for something in the serial monitor before "Sparkling" the first time.
// This lets you time the sparkle to a particular beat in music.
// In practice, just type a letter into the serial monitor and press enter
// when you want the first sparkle to start.
if (loops == 0 && mx>1) {
nextBeat = millis();
}
else {
if (loops == 0 && beatStarted == false) {
nextBeat = millis();
beatStarted == true;
Sparkle();
}
else {
long deltaTimeSparkle = millis() - nextBeat;
if ( deltaTimeSparkle > 0 ) Sparkle(); // if more time than
}
}
deltaTimeTwinkle = millis() - nextTwinkle;
if ( deltaTimeTwinkle > 0 ) {
Twinkle();
}
FastLED.show(); // display this frame);
}
// This Twinkle subroutine creates a slow "twinkling" of the strip.
// It uses the same "heating" methodology as Mark Kriegman's "Fire2012"
// where pixels are "heated" and "cooled" and then the tempreature of
// each pixel is mapped to a color and brightness.
void MicReader()
>>896494
part 3:
//////////////
//serial read mic
{);
}
/////////////
void Twinkle()
{
>>896495
part4:
// Step 1. Create a randome number of seeds
random16_add_entropy( random()); //random8() isn't very random, so this mixes things up a bit
seeds = random8(10,NUM_LEDS-10);
fadeToBlackBy( leds, NUM_LEDS, 150);
int pos = beatsin16(5,0,NUM_LEDS);
leds[pos] += CHSV( 130, 100, 200);
// Step 2. "Cool" down every location on the strip a little
for( int i = 0; i < NUM_LEDS; i++) {
heat[i] = qsub8( heat[i], COOLING);
}
// Step 3. Make the seeds into heat on the string
for ( int j = 0 ; j < seeds ; j++) {
if (random16() < TWINKLING) {
//again, we have to mix things up so the same locations don't always light up
random16_add_entropy( random());
heat[random8(NUM_LEDS)] = random8(50,255);
}
}
// Step 4. Add some "flicker" to LEDs that are already lit
// Note: this is most visible in dim LEDs
for ( int k = 0 ; k < NUM_LEDS ; k++ ) {
if (heat[k] > 0 && random8() < FLICKER) {
heat[k] = qadd8(heat[k] , 10);
}
}
// Step 5. Map from heat cells to LED colors
for( int j = 0; j < NUM_LEDS; j++)
{
leds[j] = HeatColor( heat[j] );
}
nextTwinkle += 100 / FRAMES_PER_SECOND ; // assign the next time Twinkle() should happen
}
// Sparkle works very much like Twinkle, but with more LEDs lighting up at once
void Sparkle() {
// Step 1. Make a random numnber of seeds
seeds = random8(NUM_LEDS - 20 ,NUM_LEDS);
// Step 2. Increase the heat at those locations
for ( int i = 0 ; i < seeds ; i++) {
{
int pos = random8(NUM_LEDS);
random16_add_entropy( random());
heat[pos] = random8(10,255);
}
}
nextBeat += beatInterval; // assign the next time Twinkle() should happen
loops++ ;
}
//Play with this for different strip colors
CHSV HeatColor( int temperature)
{
CHSV heatcolor;
heatcolor.hue = 50;
heatcolor.saturation = 40;
heatcolor.value = temperature;
return heatcolor;
}
>>896496
the issue I have, is that the code without reading the mic is working fine. But somehow I can't bring the mic reading together with the LED -dohh. Thanks for any help or direction with that.
The idea is the following: To have a random low frequency sparkle (x-mas light) when the mic is registering sound it should create hot zones of brighter and more intense sparkle that's kind of the idea....thanks a lot
>>895179
For my University mechanical engineering undergrad capstone project, i am constructing a machine that can play ping pong. There is freedom to do lots of things, but i want to try and make it automated. We have a ridiculous budget of 250$ but i was planning on using a raspberry pi with a camera attached to an arduino controller. I'm hoping to program it to track ping pong balls and move a paddle to hit the ball.
This hasn't started yet and i have relatively novice arduino/pi programming skills. I've found guides where people have tracked ping pong balls using similar setups , and i'm pretty confident it cna be adapted to the device.
I haven't begun but i will soon. Feel free to input any advice you have.
>>896683
I should note that if this doesn't work out, i'm just going to opt for remote control
>>896683
The robotics to move the paddle will be your biggest issue, especially with only $250.
>>896696
Yeah, I'm not planning to create a swinging arm. Just something with side to side and up and down mobility, probably using stepper motors.
We're allowed to exceed budget if we have our own money, and there are multiple teams in my class that opted to go for a catch and release system and as far as I know, mine is the only team designing a strictly hitting system.
We're in academia and we wanted to have a little fun with the design so we went with what i guess people perceive is the tougher option. I personally see benefits and drawbacks of both. I know the professors will be a little more likely to provide extra funding for a more exciting project that doesn't involve a net funneling balls to some kind of launcher.
The goal is to return ping pong balls into a basket on the other side of the table. The basket location on the table will be known as well as the origin of the incoming balls (aka the angles)
Basically im hoping to program the device to track and move the paddle to the ball. The hitting motion will be provided by a quick-return reciprocating motion mechanism. I want to control paddle angle by somehow programming it to track its location vertically and horizontally on the table so the angle can be adjusted automatically based on the location of the basket, the origin location of the incoming balls, and the physical position of the paddle.
Having to return the ball to the basket is tough, especially when hitting the ball, but I'll honestly be happy if the ball gets returned to the other side of the table.
>>895880
You can do a lot with piezos but you have to do some analog interfacing, a preamp or such
>>896400
>Does it really need to be an excel spreadsheet?
not really, those are just easier to manage.
>Python libs
what now?
>>896498
>the issue I have, is that the code without reading the mic is working fine. But somehow I can't bring the mic reading together with the LED -dohh. Thanks for any help or direction with that.
I'm guessing here........
one possible thing--
I have a arduino mega reading some tmp36 temperature sensors through about 3 feet of cable, and (as far as I can see) if you want to use any the analog inputs, it is best not to use any of the other analog pins as outputs at all. You set the unused analog pins to digital-out and to a "low" state, so that they are grounded internally. The analog pins are multiplexed, and a digital input or output on any of the analog pins can mess up the analog reading you want to be accurate.
...additionally, when you change from one analog pin input to another, the first reading is often wildly inaccurate. So you take one reading, have a time delay of some sort, and then take a second reading. There is info about this on the arduino.cc forums as well as elsewhere.
...Also I set the four analog input pins apart, separated by an unused pin (set to digital out:low)... This seemed to help with the analog read interference. I spent a couple weeks trying to figure this out. No place says to do all this stuff, and I still have to throw out the wild readings (way too high) and adjust the zero-voltage offset value but I get useful results now.
TMP36's work just fine when connected directly with 3 inches of wire, I have another machine like that and it works 100% as expected.... but I needed about 4 feet of wire here, and the wire had to go through a slip joint as well. The TMP36 sensors and the arduino both have issues with analogRead() when you have longer wires involved.
...are you using a microphone breakout? all the "arduino" sound sensors I see have a little amp on there for the mic. You can't just connect the mic directly (apparently?)...
Hey guys, whats your experience on these chinese "Arduino" starter kits?
I would like to start with arduino but Im cheapskate. Is it worth to save some money on these? They are like 35 bucks. Is there any problem with IDE?
>>896845
I've bought 7 chinese arduinos (5 Uno and 2 nanos) and all of them work pretty much perfectly.
>>896845
I have a clone. It works.
>>896845
That set? Most if those components are pennies each. The servo and stepper motor are $2 each, but you'll find lots of projects that use that servo because it's cheap. The MPU 6050 is a good gyro. And a breadboard is a breadboard. For the right price it would be worth it.
>>896397
Finished my board and managed to get the USB port in it working.
>>896683
This is cool but going to be really hard. The vision component will be fairly hard to implement. Are you using OpenCv? You'll have to detect the speed of the ball, as well as predict its trajectory to anticipate the movement of the bat. Also what is going to hold/move the bat? You'll probably need some sort of large robot arm, the main challenge will be fitting this in your budget.
>>895929
>directly control the power electrically using Relays
Solid state relay. Has all the zero crossing detection and thyristor drive built in. The low voltage input is basically the input to an LED in a built-in opto-isolator.
>>896790
Have you tried reading the atmel datasheets or app notes? Confining yourself purely to the arduino world is extremely limiting. There are specific procedures necessary to get optimum results out of the ADC's (sleep mode, sampling frequency, source impedance).
>>896498
finally got it working
with an adaption of a Mark Kriegsman's code:
thanks >>896790
>>897177
as a mic I used the one from Adafruit....
Im making a laser tag program. And something to help me stop walking on my toes. the laser tag has been being difficult. Im not even sure what is wrong with it, but it wont work...
The toe walking one consists of a shoe with a sensor on the toe and a sensor on the heel. if the toe is pressed three consecutive times without the heel being pressed, it beeps a piezo and (later versions) will add to a counter that tells me how many times it has been triggered. Cant get it to work right. I have the buttons set up as interupts. there is a variable "heelCount" and "toeCount" that is incramented in the ISR's. in the main program, it continuous.ly checks for toeCount to be above 3. but I cant get the variables to incrament.
tldr: tfw when nothing you do with arduino actually works, but the code makes perfect sense and you cant find out why it wont work
>>897334
post code anon otherwise it only makes sense in your head
>>897093
Can you program it with the usb or do you need to use an ISP(another arduino)?
And if you can use the usb how did you pull it off?
I'm trying to make a IIDX controller. Currently I'm using a leonardo to translate button presses into keyboard inputs but I'd like to transfer it to a breadboard/perfboard.
What should I use in place of the leonardo if I don't trust myself to solder anything SMD?
>>897156
>Have you tried reading the atmel datasheets or app notes? Confining yourself purely to the arduino world is extremely limiting. There are specific procedures necessary to get optimum results out of the ADC's (sleep mode, sampling frequency, source impedance).
yea but it didn't help much. The main issue was that I needed the temp sensors 4 feet from the board. Also the sensors are mounted on a rotating drum and the slip joint added an unknown amount of capacitance to the lines.
Most of the 'digital' temperature sensors are I2C or onewire (that are timing-dependent) and I didn't want that either (didn't know about running it through the slip joint, and didn't want the primary task of the board being interrupted to read I2C data).
If this hadn't worked, next I would have just used another arduino mounted on the drum and wrote a wired communication scheme that was non-timed. This would be much slower and takes 5+ pins but works where many others won't. And it can be run as a secondary task in the code, since it doesn't use hardware interrupts.
>>895929
The last time I checked, 120 only gives a brief shock. Why is it brief? because it hurts and you recoil away. The odds of it killing someone are slim.
Sounds like YOU don't know what you're doing.
>>897582
i would recommend using a midi interface. you can probably find one that uses USB instead of the 5 pin connectors
Another option is to get a separate lamp, and splice a relay into the cord for the lamp. Or a box with its own cord, and a controllable socket to plug the lamp into... This avoids having to do anything to the building's wiring, since there may not be much room to work in the switch box or the wall behind it.
>>895929
>WARNING: This part is going to get rough and dangerous and CAN, likely WILL kill you if you don't know what you're doing. So study everything you can on relays until you know everything there is to know about the subjects involved.
I think the bigger risk here is.... burning your house down...
If you use a mechanical relay, the relay needs to be wired so that if the arduino/whatever controlling it shuts off or fails, the relay goes back to it's "off" state. Most of the mechanical ones are a SPDT switch, so you can wire it either way.
The Sainsmart relay boards require that the control pin be "high" to hold its relay -off-; I dunno why they thought that was better. It's more intuitive to have the LED turn on when the relay is turned on, == when the pin for it goes high. At least, IMO........ ?:>|
It is *possible* that the controlling board could fail and keep the relay turned on, but you can't plan for everything.
>>898100
>The last time I checked, 120 only gives a brief shock. Why is it brief? because it hurts and you recoil away. The odds of it killing someone are slim.
Yea, it's practically harmless. Nobody ever died from that.
>>895652
if I had one and only piece of advice, it would be to never fucking use an arduino. It WILL stab you in the back.
working on a webcam shape tracker. RPi finds the target with opencv, arduino drives servos and an actuator to aim a laser pointer at it and fire.
No real motivation other than wanting to see if I could do it.
Quick question if anyone is familiar with servos- I'm mounting a disc on a servo motor with its center axis parallel to the axis of rotation. I need a quick a dirty way to figure out how the weight on this disc (pressing down on the axis of rotation) will affect the torque. My guess is that it depends on the coefficient of friction of some internal gear, but I have no idea what sort of ballpark value to use. I'm using an all-metal gear mini servo rated for 1.5kgfcm at minimum voltage of 4.8v.
>>897482
It can be programmed through the USB connector.
I used the USBTinyBoard version of the USBAspLoader. I compiled it in Atmel Studio 7 after changing a few things (like the pins the bootloader uses for interfacing with USB), made a custom board profile in the boards.txt and uploaded it with an arduino I use as a programmer. When the PD7 pin is low and you reset it, the board will be recognized as an USBAsp programmer by the computer. It's pretty cool.
Now I made a second board just for fun and to learn how to make two atmegas talk via the RX/TX ports. I'm squeezing the PCB design so it'll be the same size as a regular arduino, or even smaller. it's currently the same size as a Mega.
>>897482
By the way, the schematic I used is the same as the Metaboard.
The firmware code came from here:
Could you make a roomba clone using an arduino?
The basis idea I have would be to modify an existing vacuum cleaner with motorized wheels and somebody manually program a route so that the robot-vacuum follows this same path every time.
>>896683
From experience, I can tell you that the RPi will struggle a bit with tracking. You will be able to track it but it will lag at the speeds present in a ping pong game.
Maybe consider an alternative in a foosball or air hockey game - you can exploit the single plane of motion to project the path of the ball and play at higher speeds.
>>899552
>>896683
Is using a small crappy embedded processor a requirement?
If not, spend $50 of your budget on a second-hand Core2 or (if you can find one at that price) Sandy-Bridge.
Like my ghetto arduinos, /diy/ ? They can even talk to each other through the serial pins.
>>895927
Mechanical leverage
>>895179
What is the easiest way to get remote notifications from an Arduino for free.
I was thinking SMS via email but I am not sure how to send an email.
Or maybe a Twitter account that DMs me.
A lot of the straight SMS tutorials I have seen look to use non free solutions.
Who /Duemilanove/ here? Stand aside Uno Rev3 newfags.
As for tricks/exploits, my advice is to abuse the stack locality of objects, this way you can save SRAM space and smash together bigger libraries.
Also, is it possible to use a Ultrasonic interferometer as some kind of radar?
>>901699
>Stand aside Uno Rev3 newfags
Arduino in general is for fags
Grow the fuck up and use PIC
>>901965
>>899929
Nevermind, I figured it out. I set up my server to send an email using PHP and just told the Arduino to call the file.
>buddy's wife got him an arduino uno
>sits around unopened for two years
>one day us three are hanging out and we start talking about a robot that can cook food
yes i know there are robots that do that already
>decide to work towards making a robot arm that can cook eggs
>agree to meet on tuesdays to make progress towards this interim goal
We have finished the project book it came with and have been playing around with other parts we've bought. We think we are at the point where we can look for a robot arm to start playing with.
First off.... is this too much of a jump, are we going to fuck something up and ruin it? We are both engineer-types and he's a career programmer so we can handle a fair amount of complexity, but still...
Secondly, if we should be ok to handle this, what am I looking for? We only have an arduino uno board and the little bread board it comes with. Can we hook up an arm capable of lifting a spatula with an egg on it to what we have or to we need to get something more capable?
Third, what specs of this arm should I be looking for? Robotshop has a bunch of different arms with wide ranges of price. I dont want to have to buy & return like 4 or 5 to get the right one but I'm totally not in the know as far as... compatibility or... god I dont even know what I dont know
>trying to avoid pic related please
>>903005
My suggestion with the Arduino.
A lot of the basic example code can be reused for larger projects.
The key thing is to consider what you need to do at its most basic level.
My example from the project I am working on. I am building a window alarm that will notify me via SMS or email if the screen is removed from the window. I bought some little wired magnetic sensors for this project.
The core, is identical to the sample "push button LED". A switch is triggered, an action is taken.
So I started with the push button LED, then replaced the "button" with my window magnet. They are both just toggle switches.
Then I replaced the LED On action with some code to get the Arduino to pull a PHP page on my server that was set up to send an email. Both are simply "an action to take".
A lot of things you do are like this. Like the robot arm.
"On Input, turn a servo".
You mentioned cooking, so you would need some delay timers, or maybe the input is a reaction to a temperature probe. The probe reaches a temperature, toggles a software switch, causes an action to happen.
Etc.
>>903009
yeah, we expect to start with general timing of things like... let skillet heat for Q time, dump liquid egg, let sit for R time, spatula motion UVW to flip egg (or stir, whatever), delay S time, spatula motion XYZ to push egg off skillet (with a plate to catch it) theres more to it than that but you get the idea.
but we quickly, from there, want to then implement things like... infrared temp sensor to know when pan is at perfect temp, and when egg itself is at correct temp, or compare temp and moisture content of egg and then move on to next step, etc
We even considered stereoscopic camera setup to determine texture (maybe not for eggs but like... I dunno a cake or brownies or something) or a color sensor to detect when pancakes hit the correct 'golden brown' color.
Essentially we want to duplicate the process a human uses to determine when a certain step is accomplished
>>903011
I am told by the internet Kinect 2 has a fairly robust computer vision kit, but you'd have to modify it and make some new rather clever software.
Which you totally might be able to do and that'd be great.
Judging cooking time is gonna be real hard. First goals would be identifying target position and coming up with a tool path to flip it, and a contingency if the egg is sticky or not flipping like it should.
>>903011
I would think the idea would be to start with
> delay(xx)
Be for an action to adding a sensor, poling it, the adding something like
> if(eggtemp>xxx)
To trigger the flip. That sort of thing. Also you would add some general trigger variables to keep the action from repeating.
Call it "eggflipped", start it at 0, put a toggle that changes it to 1 when the flip routine runs.
You would also modify your if to include a check on what the state of eggflipped is (or maybe nested if statements, I don't remember if Arduino has an if(x && y) style feature.
This way, the senor reads the temp, once the temp is over what you want, then flip the egg, toggle the variable, so next round it says "yep, temperature is good to flip but the trigger says I already flipped it".
>>903012
>kinect
lol thats exactly what we were thinking, thats excellent it can (seemingly, without researching into it yet) be worked with through a computer.
>Judging cooking time is gonna be real hard
you might be right but hopefully it'll be a matter (once we get the proper sensors in place) of telling it "when sensor hits X temp, do Y". Another example would be if we stuck a thermometer into a steak and know when it hits 145º it has hit 'medium' doneness, so remove it from heat.
>if the egg is sticky or not flipping like it should.
this is an excellent point (and further excites me to get to try this!) we hadnt thought of but this is gonna be triiiicky. This might be a good part of this project to impliment the kinect... if it can see there is still a certain amount of material stuck to the skillet, something like that?
>>903017
>mfw all that
i like it. not 100% sure on the answer to the code x & & y question but truth be told my buddy is the code pro. We just discuss the logic and he turns it into code lol.
>>903011
I smell feature creep. Stay with the basics and iterate. Get your am working correctly with some triggers like timers (with maybe ultrasonic distance judging) and you can go from there. By the time you get that working, openMV might be commercially available. If you can't wait, you can use the OV7670 or an RPI instead.
They force me to use it at school because programming individual chips is to complicated for baby engineers
>>903180
Its the future, technology advances grandpa.
>>903005
So, I called up robotshop and told them what I'm trying to do.
Even at $4000 they dont have an arm capable of moving the weight of a spatula/egg.
My mind is kinda blown desu. Am I mistaken the think that an arm is basically a handful of servos linked together by metal rods? How the shit does it cost more than 4 grand to accomplish this? A dinky little servo costs a couple of bucks. A more powerful servo just needs more robust components, its not like it needs fuckin diamond gears or something?
Am I looking at having to make my own arm??
>>904081
.... and why is the abbreviation of 'to be honest' filtered to 'desu'.
>>904081
>Am I looking at having to make my own arm??
probably, but it isn't that hard.
all you do is buy stepper motors with gear reductions on them. Each of these is used for a pivot of the arm. Base/rotate, shoulder, elbow, wrist X and wrist Y. All you would make is the "straight" metal parts of the thing, and the base....
You could use servo motors with gearboxes too, if you are rich. Outside of the RC world, servo motors cost a lot more than steppers. On aliexpress: a stepper nema23 single-axis kit costs ~$45, and the same-motor-size servo kit costs ~$150. (the motor and the controller for a servo are different than a stepper) If you want servos not-made-in-China, figure they'll cost at least 3-4 times as much.
Do note that an arm with gear-reduction motors won't move real fast. if you want a arm that is strong and can move fast, that gets a lot more expensive since the only way is to use bigger motors that don't need gear reductions, or use servos that can run the motors at much higher RPMs accurately. Either way costs $$$...
To control each stepper easily/directly, you need at least two pins for each (the step and direction pins). One enable pin can work for enabling all the motors. But that is 11 pins already, and an Uno only has 14 digital pins, only 12 if you want to use USB serial port monitoring. It might be good to move up to a Arduino Mega for this reason. It has 54 total pins, so running out of pins wont be a problem. Arduino Megas cost ~$9 on aliexpress, and the prototype shields are $3 (use one!). And you still may find a use for the Uno in this project.
I bought an arduino kit (inb4 bad value) and I'm loving it all... until I got to the 8x8 dot matrix tutorial. The circuit is a lot more complex (so I'm sure I messed up somehow) and there's no description or instructions on what the IC chips do.... you just follow a schematic and plug things in and hope it works (it's not working)
I could look up the datasheet but this is just so much more complicated than the previous tutorials, the learning curve is fucked up. For the previous ones, I could follow the code and what was happening and I understood the theory, but the pdf explains nothing for this one.
What do I do?
Could you post
without a trip
>>904081
Actual, non-toy robots currently have two markets:
Manufacturers who don't really care if something costs $4,000 if it can keep them from paying a human $20,000+ a year to do it, and researchers who generally have budgets capable of buying your house several times over.
The rest of buyers for this kind of thing are fairly niche, which itself roughly translates to "expensive". It wouldn't cost you that much to make one...assuming you already have the equipment and skill to do so. You also have to remember that hobby servos aren't really very strong at all. The biggest readily-available ones top out at like 30kg/cm (or ~400oz/in). That might be huge for a flap or wheel at the end of a tiny lever arm, but that's barely 2lb of force, maxxed-out, at the end of a 12" arm.
You want manly arms, you need bigger motors. Either geared steppers or industrial servomotors. Neither is terribly cheap. Not super expensive, necessarily, but not cheap.
>>904081
Something like this?
If one isn't strong enough, have multiple working together or something stupid like that.
>>904162
that type is still just using RC toy servos as the joints. It will be small and weak.
I would suggest using the stepper worm gear drives, they tend to have less backlash than the planetary drives and most places offer a range of reduction values, from 5:1 to 50:1 or more.
on aliexpress,,, figure ~$100 for a nema-23 and $150 for a nema-34 stepper motors. If you want servos, they would be at least $100 more each.
expensive, yes--but that's robotics for ya
>>904116
>I bought an arduino kit (inb4 bad value) and I'm loving it all... until I got to the 8x8 dot matrix tutorial.,,, (it's not working) ,,,What do I do?
Which kit is it?
>>904116
The kits really are not a bad value for starting out.
You generally get a board, a bunch of components, and most importantly, some direction.
Yeah, you could go out and buy a cheap knockoff board and all the individual components online and find some free tutorials, bit just starting, there is a lot of value to having it all together and the convenience factor.
You wouldn't want a kit for every project, but to start fresh, the kits are great.
>>904136
Youre forgetting tye assembly line choke points. Robots and people happily work together. But where a robot is needed to allign parts quickly and then the next stage takes a minute then moves on, you end up with the slowest stage in an assembly line becoming the coke point. For example, robots suck at installing consoles but excel at installing windscreens. Sure you can simply double up sections but space is also limitted.
>>904083
No clue but it's hilarious "desu" "senpai"
>>904116
Is it a shift register? Those take parallel inputs and turn them into a serial output (or vice versa).
>>904136
Does he really need a nema 34 or even 23? This thing is just flipping eggs, =< 17 and reduction gears should be enough, no?
>>895179
This is basically something I came up with while reading a book on AVRs
It's essentially a clock, it counts till 9999 seconds, then it turns the all the D pins high. Just something I made without any use in mind, I am more of a software guy, just trying out micros for fun
Also is there a good book to learn serial communication? The book I have explains it really shit so I can't understand anything
>>904608
Where does this sempai come from?
>>904656
what board is that
>>904661
Top left corner says ultra_avr.
>>904656
Cool, why don't you bring it to the Whitehouse?
>>904670
just as the syrians deliver the C4, i am headed straight to the white house :^)
Just hold it in your hands for 9999 seconds Mr Obama
>>904608
>Does he really need a nema 34 or even 23? T
at least on aliexpress--almost nobody makes gear reductions (or rotary encoders) for nema17 motors, but a lot of places offer them on nema23 motors. so nema23 is kinda the starting point for better motor setups.
Discovered that using a formula with exponents requires using the pow(xyz) syntax, otherwise it just ignores them. And the compiler doesn't catch this particular fuck up.
>>904659
F a m
>>904675
It always helps to know the language you're programming in.
>>904083
>desu
The more ornery boards saw it as a reddittism, and were getting really pissed off when people used it. Meanwhile, regular people were just using it and not trying to be le ebig bait. Thus every time that particular acronym got used, it derailed the thread into a massive argument about whether it was bait or not, and whether or not using it should be considered acceptable.
Easiest way to stop people using something? Wordfilter it.
(frankly, I like random desu appearing everywhere. It still works in context, and it makes the sentence cute and funny.)
It's not an arduino, but I have a Raspberry Pi sitting around gathering dust and I don't have any ideas what to do with it.
I don't want to do the generic "make a file server" project, that's just lame.
Do any of you have a cool idea that can be done without spending money, like contacting ghosts using python and tinfoil antennas?
>>904845
Same boat here. My model is an A+ (256MB, no Ethernet, but I have a working ethernet dongle for it). I also managed to get a 2600mAh battery pack for it. I was just going to get a couple of those LCD screens and run the Game of Life on them or do some realtime stuff, but I don't have the screens yet and all of the local RadioShacks closed down.
>>904116 here. I forgot I made this post! I stepped back from the 8x8 dot matrix display as a whole and focused on understanding how a shift register works by itself. It's surprisingly easy!
I built this circuit to test it. I originally had buttons but I don't know how to debounce them so I was getting double/triple hits. I just replaced them with outputs from my uno and everything is working really well, except I think my breadboard is faulty. The output LEDs will erratically flicker in and out and I can "fix" it by touching the breadboard and/or the register.
I want to move on to the 8x8 dot matrix display but I think my second shift register is faulty, and I really don't trust this breadboard.
>>904464
It's Sunfounder's RFID kit. It's got a ton of neat shit but the PDF with the tutorials is garbage. Figuring out how a shift register works on my own was a billion times more insightful.
>>904577
I like my kit! I've just heard from a few people that ordering the components on their own and pirating an already solid book is a better plan, so I figured typing out that stuff in my original post would stop people from shitposting.
>>904608
Yeah! It was a shift register, and they're really great now that I know how they work.
>>904464
>>904936
HILARIOUS
Turns out 2 lessons later (after having you use shift registers blindly to control the dot matrix) they then teach you how the damn shift registers actually work by having you control a 7 segment display with them.
JOKES ON YOU I LEARNED IT MYSELF YA FUCKIN TERRIBLE KIT MANUAL.
Has anyone used USBKeyboard.h? I'm having trouble getting it to... do anything?
I want to make a robot that I can control with the gyroscope in my phone. Where would I start? I have 0 experience with arduino or robotics
>>906737
With the basics. Divide you project into achievable goals that build up until you master all the necessary skills.
>>904936
There is two ways you can do this: through hardware or through code.
There are discrete circuits for denouncing buttons. Look around online for schematics.
There are chips specifically made to de-bounce digital switch inputs. One is the PDN2008 or EDE2008.
Jameco sells them for ~$5 each, they are easy to use and each chip will debounce and stabilize 8 input lines ("stabilize" means that it holds them either very-low or very-high).
The code way is you use a timer to make certain that some certain amount of time has passed between presses.
There is the "semi-official" one here that relies on comparing the current button state with the previous one--
I don't like this example because it is mixing features--the "lastButtonState" variable is really the LED state, and storing the previous button state isn't necessary.
I prefer to use a single "gatekeeper" variable instead, since that way multiple buttons can use the same debounce function and you don't need to store every button's previous state.
I could put an example on github if anyone gives a shit.
>>906785
A good overview of debouncing techniques:
There's tons of different ways to do it, especially on the software side.
Hey, this thread isn't dead! I've done so many things since I last posted!
>>906785
Thank you kindly for this post. I'd really appreciate posting it if it wasn't too much of a hassle.
How... do I use github? I want to post the code for the project I'm just about to post. (read below)
-----------------------
I've made a rudimentary gamepad to control flash games, NES emulators, etc. using an arduino uno. I couldn't initially get HID to work so I'm using my arduino sketch to output strings to the serial on event triggers, and have a Java program to monitor the serial to then make keypresses with Robot.
It lags. Slightly, but noticeably. If you make a lot of inputs in a short amount of time, it gets backed up and executes them in order, which is generally after they've been triggered.
I can't tell if this is an issue that could be fixed in my code or if I need to ditch serial and move to HID entirely. HID would probably fix it but I want to make serial work!
>>907144
If you're sending long strings, serial can be slow. It also depends on the serial transfer rate, how the serial-to-USB converter packetizes the data, and how the Java serial interface works. What's the latency like if you use eg. PuTTY to view the serial data?
Also, if your Arduino program reads a single input, converts that into a string and immediately sends it over serial using polling, it'll probably be slow.
>>906737
use one of those cheap bluetooth addons (ESP8255 IIRC)
look here
if you're really lazy, buy 1sheeld:
>>895338
Any scripting language that has OLE support will work for that though. It's a lost saner dealing with it in python ruby or even perl IMO.
>>895670
>>895684
If you have a socketted arduino board, you can use it to program your chips, then:
You can get a baggie of cheap chips then. Very cheap
>>895927
just buy a phillips hue :^)
>>895670
I bought a whole pile of Uno clones for $3 each on Aliexpress.
>>907144
>Thank you kindly for this post. I'd really appreciate posting it if it wasn't too much of a hassle.
>How... do I use github? I want to post the code for the project I'm just about to post. (read below)
not github, srry. I meant pastebin. you can just copy & paste it off the page.
I get my generic arduinos on aliexpress as well, unos are $3 and megas are $9. All I've ordered have worked perfectly. They're even a bit cheaper if you don't need the USB cords with them. the unos are the SMD chips (not socketed DIPs) but at only $3 each they're basically disposable anyway.
the only 'bad' things I've gotten off aliexpress so far are the little MOSFET boards: they use an IRF520 or IRF540 mosfet that requires 10v to switch on--that an arduino can't put out.
You can use a 'mosfet driver chip' (for various reasons) but it's easier for casuals to just be able to hook the mosfet sig line right to an arduino pin.
I cut the IRF520 mosfet off and solder on an IRLZ34, and then they work like I want (the arduino can control them directly).
>>908069
I had definitely considered aliexpress but the 4-6 weeks of shipping really turned me off. I might just fire off a mega order and forget about it, cuz I'm already experiencing the limitations of an uno. :(
>>908096
>I had definitely considered aliexpress but the 4-6 weeks of shipping really turned me off.
they tend to estimate on the high side I think.
I live in the central US and don't think anything so far has taken longer than two weeks. A lot of stuff arrives in ~10 days.
The problem is in China--it can take a package 10 days to get out of China, and then only 3-4 days to get across the ocean and then halfway across the USA.
The shipping is hit-or-miss unless you specify the shipping method; I've bought $300 parts that were left on the doorstep and $2 parts that were shipped signature-required. Package tracking inside China tends not to work--or at least, not until the item *leaves* China.
The longest wait I've had was about a month, but this was an expensive part from the manufacturer and they had warnings about the wait time for small orders.
>>908096
BangGood and DX have loads of cheap arduino stuff as well, and they tend to ship a little faster.
Usually. Not always
>>908156
>>908165
I live in Canada, which might be worse. Thanks for the info, though!
Hey, I am picking up how to use an arduino pretty quickly and am doing a really simple project to start off and was wondering about parts. For a section of this project I am making a chronograph for slow moving projectiles, like below 100 fps, and am using ir emitters and photoresistors. I have had no luck with the photo resistors part, the IR LED's light up fine but I cannot get the other half to work. Is there any other method to measure if an object has passed by a sensor for cheap?
Hey guys gonna do an aquaponics project. Whats the best way to communicate between the raspberry pi and arduino?
>>910126
I'd say the serial ports through a level converter, since Pi runs on 3v3 and arduino on 5v. Or just run arduino on 3v3 if you can.
>>910126
I would say it depends on what you need them to say.
The serial mentioned is probably good. The temperature logger I built dumps data into an SQL database and I found the easiest way was to male the Arduino call up some PHP on the other end.
Currently working on completely automating my homebrewing settup
>>895684
>CH340 usb chip, they can be a pain in the ass to get working
they are not.
google ch340 and install the drivers.
just like ftdi.
the clones with the ch340 costs almost nothing.
the nanos are so cheap i hardly ever bother setting up a standalone anymore.
>>907544
What about crystal & caps? Do these have built in osc.? Also decoupling caps are nice.
>>910420
They have and indeed, decoupling caps would be nice. You can do without them pretty often, though.
anyone know if there's a torrent of the recent humble bundle MAKE books?
>>911842
You can pay a cent for like half of them. Quit being a cheap ass.
>>911842
Oh sweet, a new one.
I get constant emails over their stupid subscription bundle but I never get emails for these Make Book bundles I will always buy. Way to go HB.
I'm looking to get some home automation done over the holidays here. I want the garage door to be controlled via RFID, as well as relay all of my outdoor and garage lighting. Hopefully controlled via a small tft touchscreen inside the garage.
>>912995
For the door, you should be able to relay off the opener you have. Attach some wires from the contact pads inside and run them to some pins on your board, throw some code on that closes the circuit on an external input.
Probably the sameish for the lights except you likely don't already have a button of some kind that you can relay off of.
I actually have an arduino trinket that I am going to use to make some neato stage light things for my band.
Im incredibly new to electronics, but i've byoc'd a couple effects pedals and have a little understanding of how electronics works. I also code for fun as well so that part is coming easy.
Basically gonna use a 4 button foot switch hooked and use a 5 din cable to send the signals to the trinket, which will change the pattern which of the 4 buttons was hit. button 1 switches to all on, button 2 is alternating between front back, 3 is fast and random etc. The trinket will be hooked up to a 4 channel relay which will send the power to the LED lights. We'll see how this pans out.
How would I get an arduino nano to control
a NTC 100k thermistor and a 12v 40w ceramic
heater cartridge. This is for a chicken incubator
so it will need to stay at a constant 37.5 degree
celsius
>>895179
What the fuck am I doing wrong here? It's just the included 'hello world.'
I changed a few pins around, following adafruit tutorials to the T. Checked all connections. Checked my shitty soldering.
>>914141
> Read output of thermistor on an analogue input pin.
> Get relay rated to switch the total load of the heater.
> Use digital output pin to switch relay.
> Code mega so that if thermistor measurement falls below preset the relay is activated, if it goes above the relay is switched off, thus controlling the heat.
The individual routines to do each part are covered in the tutorials, shouldn't be too hard to kludge it all together into a workable program.
>>914282
>mega
Oops, arduino, I was using megas before they started wrapping them in dev boards, force of habit.
>>914279
can't see shit in that picture. looks like your screen is powering on? soldering looks okay. probably has something to do with the wiring but your photo has wires covering the pin labels.
>>901699
>Also, is it possible to use a Ultrasonic interferometer as some kind of radar?
Not radar, but sonar. And maybe. Do you have more information on the interferometer in question?
I was toying with a similar idea; I wanted to try making an Arduino-based sonar/echolocation rig capable of ranging, direction determination and perhaps even doppler analysis. My first idea was to hook up a speaker and two microphones directly to the Arduino and drive/process everything on-board, only to realize that the Arduino really isn't capable of sampling fast enough to read ultrasonic waveforms directly. On top of that, although I'm quite familiar with radar/sonar concepts, I'm fairly new to Arduino and such a project would likely be too much for me anyways. So instead I went and bought an SR-04 ultrasonic rangefinder module to screw with for the time being (quite limiting compared to what I had originally hoped, but it's a start).
I also found a REAL 10.5 GHz doppler radar (not sonar) module () for fairly cheap (~$10), which apparently has a signal mixer built in so that all the interferometry happens internally and only the doppler shift itself is output (<1 kHz, easily sampled by Arduino). The catch is that the signal is very low voltage and requires a good bit of custom circuitry for amplification and filtering before it can be made readable. Additionally, there is documentation on operating it in pulse-doppler mode, but I'm unclear on how one would extract range/time-of-flight data from the module, or if it's even possible.
>>914303
Thanks, will take a better picture after in done with my 10 hour masturbation session.
what potentiometer would i need to be able to change the temperature a 12v 40w ceramic heater cartridge or how could i use an arduino to control the temperature of it ??
>>914677
The analog PINs of an Arduino can do that if you hook up a solid state relay to it.
Soldered the Trinket Pro (Weird hackaday.io version I found on amazon). I'd say my solders aren't nearly as bad as I thought they were going to be considering how little experience I have.
I'm building a digital accelerometer from an ADXL335 chip, a 128x64 graphical LCD display, the arduino UNO and some buttons I have lying arround. Will be my first arduino project. Any ideas / tips / uses for this? There is very little seismic activity in my area (germany) so using it as a seismograph would be pretty pointless
>>914929
Not bad for a first time.
A3 and A5 pins could use some touch-up though. Try not to leave open holes.
>>895179
well, since I'm done with finals, and have nothing with work until tomorrow, I'll try to finally get this running.
>>915528
and A4 has all the characteristics of a cold solder joint, based on shape and color.
so i'm making a circuit involving leds as outputs and pushbuttons as digital inputs
I want to use the pushbuttons as a counter, so if you press one 5 times it counts 5, and the other counting every 1 push as a ten. I've done really basic stuff so far but nothing involving many parts.
So my question is do I connect every part to an individual digitalinput pin, and then to ground? or am i doing something wrong. I just want to know since I don't want to end up ruining any parts or my arduino for that matter.
>>915884
pic related
>>915885
>pic
>>915885
You shouldn't need external pull-X (down/up) resistors for microswitches on the Arduino, since it has internal pullups.
See the Arduino tutorial on how to use the DigitalPins. Should give you info on the Arduino's internal pullups and how to use them.
As far as making the physical connections, what I would do is use the Arduino internal pullup configuration, then with the microswitches, connect one end directly to ground, and the other directly to the Arduino.
One trick to learn: With those microswitches, they have four legs, and it may be difficult to know which legs make a connection when the button is pressed. Follow this simple rule: Opposite corners.
For example, on the switch on the left: E20 and F18 will connect when the button is pressed, also E18 and F20. On the right, E15-F13, E13-F15.
>>915917
whats wrong with saying pic
>>915921
thanks for the lengthy reply, ill try and fix it
So I am trying to make a simple circuit with IR LEDs and a few other pieces and they output things to an led display. I am individually testing the pieces and coding but I can't get the IR code to work. It is currently set up that when the IR is disrupted an led on the side turns on. When I try and do a simple code to return a 1 if obstructed it ends up not reading anything. I am just doing a digital read and don't know if there is a better method for this or if there is a better circuit to use or anything.
>>896375
you technically don't need an arduino for this. I was able to get away with doing with an amp some relays and some christmas lights
>>895179
i explained it here >>916522
>>895336
>curved tracks
It's 2015
>>895179
I did a very basic weather station a few weeks ago. For logging I just formatted the data on the arduino for a tsv file, sent it through the serial port, and wrote a simple python script to dump what it receives into a tsv file.
Then I'd just make a copy and open in excel and save a copy as xlsx if i needed to do anything complex.
Not very slick, but simple and got the job done.
>>896683
I have updates for this project!
Things have changed. I've decided to opt for a Pixy. Pixy can process video at 50 FPS. It's pre programmed to track colors. Which is great because the ping pong balls are bright orange. It's also meant for arduino. Weeee.
However, I need to give Pixy depth perception. Well to do that I need to calibrate it with the ping pong balls. Basically to find out how many pixels a ping pong ball takes up at different distances from the camera. I will do this and develop an algorithm for pixy to see X, Y and perpendicular to the plane of view.
Using this I hope to predict the final point of the ball within the plane of contact using projectile motion analysis in real time. This step will take 4-5 frames or about 100 ms. Then the paddle will take this input prediction and use stepper motors to move in X and Y and position the paddle. Encoders may be needed but i'm hoping i size the motors right and can just use inputs to track robot location.The Paddle is a belt driven rod mounted large square and will have a 6-8 inch throw at about 4-5 ft/second which is enough to return the ball to the farthest point on the table.
If anyone is even following this. Kudos. I may start a thread later as we start building and testing. Maybe ill make youtube videos. Who knows, I'm planing on wiring up a few motors over break and calibrating the Pixy so details will follow!
TLDR
Pixy sees and thinks fast. Making a robot that will see and play ping pong (maybe poorly but somewhat capably)
>>915548
>>914929
Drop a small .... droplet of flux on A4 and touch it with the iron. The joint will look nice and shiny. If you don't have flux, go get it.
>>916771
They were just regular 45 degree tracks and then I applied Eagle's version of "fillet" on them. The real curved tracks of the past are a thing of beauty, but they're a bitch to make.
Alright, /diy/, let me tell you what I'm thinking.
The first thing I want to do is just a small project to test out the cheap ass NRF24L01 modules. I'll build a board to host the radio module. It will contain an AMS1117-3.3 regulator and a few other things. It will be connected to the main board (my home made arduinos) very much like what you see on the board on the left. Then I will use that first assembly to control a second board, but this one will in turn be connected to a board with a few relays or some thing else. The board with the relays will also transmit sensor information to the controller board. They're both based on the Atmega8 MCU, but I have a few 328's in case the flash memory isn't enough for the program.
The second thing I want to do is the following: I'll buy some DIN rail supports for PCBs. Maybe a DIN enclosure as well. I'll make an Atmega PCB where I will connect an USB to serial converter, and a few triacs. The triacs will then be used to switch big ass AC relays or contactors. The board will then be connected to an old laptop or netbook computer, and that thing will run a server, and it will allow me to control the relays/contactors over the interwebs and so on. Maybe I won't use a netbook, maybe I will use a raspberry PI but old netbooks can be had for free. I think you get it, don't you? Yes you do do, you're pretty smart. Now, since everything is mounted on DIN rails, I'll be able to sell it as a solution for controlling/automating an hydroponic installation. If add a slot for a 2.4ghz radio on the relay-controller-board, I'll also be able to use that radio to receive information from other 2.4ghz devices (that I still need to design), like temperature, lighting, humidity, water level and pH sensors. It will be so cool, guys.
>>917039
Given how ping pong balls travel (curves ect) you will probably need to calculate trajectory continuously. There is also the matter of the bounce on the table. You have to inflect your calculation
Another thing to consider: Will the ball have any blur in the frames at its travelling speed?
Still waiting for my stepper motor drivers :^(
>>917341
these are the shit, don't need any relays and switch them with low voltage.
>>917399
Depending how quick the robot itself is, a simple linear extrapolation repeated over and over with each new measurement might be good enough. Heatseeking missiles effectively do this using proportional navigation (since they can't measure range to target, only direction), and it's still more than good enough to score direct hits on maneuvering targets.
>>897334
The volatile keyword is important in C for embedded systems.
Any variable that is modified in an ISR (or otherwise external to the main body of the code) should be specified as volatile so that the compiler does not look at the rest of the code, see that no modifications are made to the variable, and then optimise it to just be a constant. The volatile keyword lets it know that YOU know it will be changed elsewhere.
I know you probably won't see this but it is was first occurs to me as the likely issue and this might help others as this is something many tutorials etc skip over.
>>918159
It can do worse than that: it can notice that none of the code that depends on the variable changing from what it's first set to actually does anything, and outright remove it. Then remove any code only it calls. Then remove any constants only that code accesses. should be required reading.
>>918159
Also, if the variable is wider than the CPU registers (eg. a 16-bit int on an 8-bit AVR), interrupts must be disabled when accessing it, to prevent the ISR from modifying the value in the middle of an operation.
>>914279
That soldering doesn't look good to me. It should form a tent on the pin, not a blob beside the pin. Did you use flux?
Take a look at >>914929
Anyone here use a water sensor?
I was thinking of making a device that detects and alerts me when basement is about to flood so I can either turn on the sump pump on or even better turn it on automatically.
Thoughts?
Pic semi related, I don't want to ride a moose in my flooded basement to get to stuff lol
>>919627
Why doesn't your sump pump already have a float built in?
>>919636
Yeah, but the pump is unplugged until we need to use it. The reason is because is we have to connect and then run the hose outside and down part of the hill. We can't leave that thing chillin it in the yard all year.
>>917039
Wouldn't it be easier if you place second camera above the table ?
>>896397
How did you get a dog on there
>>917399
What will cause curves is the spin. Projectile motion analysis can account for the bounce (coefficient of restitution) and the final location of the ball but it cannot accurately predict a balls trajectory with significant spin. In hopes to mitigate the effect of spin, the "paddle" on the robot will be much larger than a traditional paddle. Ultimately the professors will dictate the spin setting so I may need to figure out a way to account for it real time if they decide they want a lot of spin. I don't have a solution for that yet.
The trajectory may not be be able to calculated continuously because the camera will be mobile after the prediction stage. I haven't thought of a way to program it to continue prediction while it's moving. It's not impossible but I believe it will require a lot of feedback from encoders at the motors.
I have not mentally accounted for blur yet at this stage. It may be a problem but the largest velocities seen by the camera will be in the direction towards that camera so hopefully blur will be minimal in terms of pixel count variation. It may mess with calibration though on the extreme ends of the spectrum where it can blur such that the ball appears bigger and closer than it actually it. I'll need to investigate more camera specs.
>>919644
It would definitely be easier to track the ball that way for sure. This would even solve the above problem and account for curvature of the ball realtime. However, I don't believe my budget would allow a second camera and also no part of the device can extend beyond one quarter length of the table. This camera will need to be at the halfway point over the table high in the air. There are also size constraints on the device. It must initially fit in a 24"x18"x18" box. It may fold out or extend afterwards however. Under the project constraints, a second camera is not feasible.
>>895179
Had to do this exact thing for a project in school. We just used LabView to record the data and then plopped the generated csv file into excel
>>917039
What university are you at? I'm in a Bachelor of Technology program at McMaster University and I have a capstone project to do next year with the exact same budget. I'm still not sure what I'm going to do for mine. Best of luck to you, it sounds awesome!
just got a vilros Ardiuno kit.
I really don't know what to do with it.
Im building a green house next year and thought of using one of these to make an automated watering system
>>917767
>use a *relay
>dont use a relay!
>>920396
They're not actually relays, dumbass.
Just like resettable fuses don't have fuses in, and flash disks don't actually have disks in.
>>920471
>resettable fuse
circuit breaker
>flash disk
flash drive
>dont have relays in them
but they are themselves relays, they relay a signal between two circuits.
I'm helping my dad out with his orchard. It's a pretty sizable orchard with a bunch of different types of trees. I'm designing the irrigation system and will probably control it with an arduino.
>>920471
It says relay right on the side of it..
Fucking illiterates trying to act like EE's
Hey guys, I am about to get into this stuff pretty soon, but I wonder what kit should I order, is there something to look out for before purchasing ?
Also can I do some coding in C# ?
>>920880
No C#. There's some Arduino-like board that uses the .NET micro edition framework, but you'll have to do the research on that yourself. If you want to get into microcontrollers, learning C/C++ is pretty much mandatory outside of some very niche platforms.
I've been getting to grips with the HD44780 LCD. I've been generating custom characters - I made an LCD class with the functionality, including a function to create a character, that took an "address" (multiplied by 8, then combined with the CGRAM address, then two unsigned longs for the 8 bytes of the graphics. This was called 8 times at the start, to define some characters.
A strange glitch appeared, where the second character was corrupted. One line (5 bits) had been duplicated.
I played around a bit and discovered that changing the type for the address argument from byte (it's only between 0 and 7 anyway) to unsigned int moved the error along to the third character.
I then randomly changed the order of the calls (the 8 calls that define the characters), same definitions, just now executed in a different order.
The error moved to another character - it was always the second call, no matter what order they were done in. Everything I did, just seemed to make the error simply move to another character, but never eliminating it.
I finally changed the function to accept four unsigned shorts, rather than two unsigned longs, but I'm not happy with this. It's almost defeating the purpose.
It seems to be some sort of error in the function calling but I'm stumped. Any ideas?
>>921054
I should add that, before I made this function, I was defining the same characters by explicitly sending each byte to the module - and it worked fine. The error only emerged when I made a specific function for it.
>>921054
Show the code.
>>921059
I'd deleted the old code, so I just quickly rewrote it and compiled - same error. Here is the "reconstructed" offending code.
>>921067
When I changed it to four unsigned short ints, then it seemed pointless having four for loops that each iterate twice, so I just made it explicit with each byte.
>>921067
>>921070
I realise masking them with 0x1f is unnecessary but it's not the source of the problem. I actually put that in to see if it would make any difference. The things you do when you're stumped eh...
>>921067
Would this not be a great use-case for a byte array, initialised using byte literals, and indexed by, well, indexing?
What's with all the bit-bashing in code that's run precisely once? Why use two parameters to represent what's actually one parameter? Why not delegate the grunt work to the compiler?
>Everyone knows that debugging is twice as hard as writing a program in the first place. So if you're as clever as you can be when you write it, how will you ever debug it?
-- Brian Kernighan
>>921085
I don't want to pass an array. I want to do it this way and I should be able to.
>>921054
Are you absolutely, positively sure you're following the timings in the datasheet?
Because this is sounding like the computation is correct, and the interface with the peripheral is racing.
>>921080
Where's the code for setCgRamAddress?
ps. Shifting by 3 will not be faster than multiplying by 8 (the optimizer will produce identical code). Prefer clarity over useless micro-optimizations.
>>921088
>Are you absolutely, positively sure you're following the timings in the datasheet?
I did a lot of playing around with the millisecond delays, even adding some here, taking some away there, etc. Nothing changed.
>>921090
Fair enough, I do have a lot of strange habits. I did wonder if it would produce identical code actually.
>>921090
>Where's the code for setCgRamAddress?
void setCgRamAddress (byte addr)
{
writeLCD (CMD, (0x40 | addr));
}
Provided that addr < 64 (it can only be as high as 56 as 7 is the highest "address") then it should be fine right?
>>921090
>Prefer
It's weird when people use the word "prefer" in the imperative.
>>921092
It's probably better to make "disp" statically allocated. Dynamically allocated memory should be used with care in embedded systems, and on something like an Arduino there's probably not a lot of heap memory anyway.
Also consider putting the initialization data in const arrays and just passing the pointers around.
>>921090
nothing wrong with shifting when you are trying to achieve a bit shift
aren't you anon?
shifting to move address bits or something?
its the comment that is misleading.
not that i can help your issue i don't understand what you are doing, you are defining custom characters but one nibble should be the row address and one nibble should be the row data?
i'm too drunk sorry.
>>921102
>you are defining custom characters but one nibble should be the row address and one nibble should be the row data?
I'm trying to send 8 bytes over 2 32 bit integers. The first parameter sets the cursor to the custom memory bit you want , then the 8 successive writes all increment the cursor automatically.
>>921102
>nothing wrong with shifting when you are trying to achieve a bit shift
If the OP felt the need to add a comment next to the shift, explaining why it's there, the implementation is masking the intention of the code. Replacing the shift with a multiplication makes the intent clear, and removes the need for the comment.
>>921099
It's a perfectly valid use, and it's also a guideline rather than an absolute rule.
>>921107
i disagree, the comment refers to the addresses themselves being multiples of 8 rather than explaining that he is multiplying by 8.
saying multiples of 8 was pretty much the most concise way to get this across, arguably vague but if you read the datasheet its clear.
i would go so far as to say that the sole reason its there is for the authors piece of mind because of the difficulty he is having in getting it to work and not really to explain functionality at all.
criticise the comment if you want but changing this to a multiply confuses the intent rather than clarifies it.
>>921115
You're arguing just to be contrarian.
>>921087
Why do you want to do it that way?
It's not a 32-bit processor. You're needlessly making the compiler do 32-bit emulation, when it would be simpler and clearer to use the native word-length.
>>921115
They're really offsets into an array of character RAM. You're effectly writing
offset = index * sizeof(glyph)
It just so happens that sizeof(glyph) is 2^3.
You usually multiply array offsets, because most of the time they're not a nice power of two, and there's no point doing it differently when they are.
>>921087
You can, but it's inefficient. The compiler will have to generate a lot of unnecessary extra register-shuffling code. It's worth having some idea of what kind of code the compiler will generate for different constructs.
>>921126
no im arguing because i would write it as a shift and i would advise others to if asked.
>>921136
>most of the time they're not a nice power of two
its a bit offset
so yes it will always be a power of two.
my understanding is that adr is the high bits of the address, i was under the impression that the low nibble was set by the user but anon kindly clarified that the low nibble was incremented automatically.
so essentially the operation serves to move the high bits of the address to the correct offset in the address register.
if you were doing anything else, as you suggest calculating an offset, then i would agree with you but thats not the way i think of this particular operation as functioning.
>>921165
You have poor taste. Code that needs comments is code that should be improved.
>>921173
i shan't repeat myself
>>921115
>>921165
>its a bit offset
>so yes it will always be a power of two.
No, it's a sizeof() offset, so it's only coincidentally a power of two.
If the characters were nine lines tall, it would be a power of three.
>>921131
>It's not a 32-bit processor. You're needlessly making the compiler do 32-bit emulation, when it would be simpler and clearer to use the native word-length.
Ah really? Yeah I wish I knew more about the Arduino in that sense, but the "manual" that comes with it is useless for anything. There's the official site, but everything is spread around. I just want one page with all the relevant tech info.
Still though, I should be able to do it this way. This code gets called at max 8 times once (at startup), and I honestly prefer it to using an array. If the system allows unsigned longs, then I should be able to use them.
Oh, and I managed to fix the error by adding a dummy int parameter after the addr. Bit of a botch but hey.
>>921115
>i would go so far as to say that the sole reason its there is for the authors piece of mind because of the difficulty he is having in getting it to work and not really to explain functionality at all.
No it's that I'm normally quite an anal commenter. But I usually wait til the code is working before I go all-out.
>>921323
i suppose
but it would never be done that way
because thats fucking retarded
characters are 5x7 or 5x10 but mapped as 5x8 or 5x6
hmmmm i wonder why. possibly because that's how pretty much every digital (binary) system ever invented works
what a fucking rigmarole it would be to have to raise to an non binary power just get a high byte to address memory? in an embedded system? are you out of your mind?
>>921353
>5x6
5x16
While we're on the subject, why can't I see any difference between 5x8 and 5x10 modes?
>>921356
I would imagine it's because your display only has eight physical rows.
>>921353
>what a fucking rigmarole it would be to have to raise to an non binary power just get a high byte to address memory? in an embedded system? are you out of your mind?
That's why you don't use l/r shifting in the first place.
If you want to save ~40% on your storage requirements for a 5x10*, you throw out the six blank lines, and generate the the address by (<base> + <offset>*10).
Multiplication is not expensive. Even if you use a precomputed lookup table, you're still only wasting two bytes per character instead of six.
*(ON AN EMBEDDED SYSTEM!!!!1!!1)
>>921366
Remember that this is an eight-bit processor, so memory access is granular on eight-bit word boundaries, not thirty-two bit words or 4k cache-lines like on x86.
>>921366
>If you want to save ~40% on your storage requirements
but that isn't how this particular ic operates, it clearly divides memory into banks of 8 or 16. why does it do that? because the hardware to increment the address pointer is no doubt implemented on chip as a shift because its cheaper in both processing time and on chip space even when the wasted memory is accounted for.
there is no point in bringing up memory storage efficiency because it is not supported with this particular peripheral controller
if you want to do a multiply then nobody is going to stop you, all i said was that for this particular application i find it misleading to prefer a multiply syntax over a shift because of reasons of clarity. the reason being the shift better represents how the addressing is manipulated and handled by the display controller.
>>921378
How the controller arranges data internally is an implementation detail. You can't make it more clear that you're generating a multiple of eight than multiplying a value by eight.
>>921384
the entire class is an 'implementation detail' when we are talking about communication with this device.
if multiplying the value was the intention then of course, but it was not, the intention of the operation was to move the high address pointer to the correct position and allow the driver to control the low nibble.
>>921363
Ah right, that's probably it, looking at it. Don't know why I didn't think of that. Probably too busy thinking about the Arduino's call stack.
I'm making a byteOut class, which is a base class from which two other classes are derived, one to use a shift register to emit a byte, the other to use 8 Arduino pins in parallel.
I've made the output method virtual, so each derived class creates its own, specific to how the byte is output.
Thing is I keep getting this "undefined reference to vtable" on the base class.
I found a page where someone fixed it by declaring an empty method for the baseclass but A) that doesn't work on mine, it just does nothing when out () is called and B) isn't the whole point of a virtual method that you leave the implementation to the derived class?
Why does it complain that I haven't defined a method that, by definition, will never be used?
>>921870
you should implement the function in your base class then override it in the derived classes
if the base class function doesn't do anything because it wouldn't make sense to then thats fine, its called a pure virtual method or function.
but it needs to be there.
>>921965
>implement the function in your base class
not 'implement'
just define.
you know what i meant
>>917341
Weee! My plan worked on the first try! Now I only need to do something useful with the rf24 radios
>>921986
Those esp boards are a blast. I'm still stuck on changing the pwm modes though. Not sure if the toolchain/docs have improved in that area lately.
>>922010
Just got a better look, and realized that's not an esp8266. Silly me.
>>921965
But when I do that, it compiles, yes, but the whole thing malfunctions. It's to control an LCD and all the black squares appear. I've worked around it, code wise, for now. The classes barely differ code wise, so I just made 2 variants copy/paste.
>>922028
>but the whole thing malfunctions
well that's probably because you wrote it wrongly then?
post code.
>>922043
It's on the other computer. I do that tomorrow but I'll say this: it's like the functionality is being "underridden" - ie the empty method in the base class seems to overwrite the one in the derived class.
I know the display logic works, it's just when I try to object orientate it, it goes funny.
Until tomorrow...
>>922080
You have to declare the method in the base class virtual. It is not enough to do it just in the derived class (in fact, if the method is declared virtual in the base class you don't need to repeat it in the dervied class - but you should, because it makes the code easier to read.)
>>895927
No X10?
Uh for school I made a barcode Reader and Uploader, I could get the barcode information uploaded as a status to thingspeak but I never got it to go to IFTTT and print the status out on like evernote or somethign
Why are you still using Ardweeno when the Raspberry Pi Zero is 5 dollars
>>924270
They're two different things
>>924318
not really
I mean they are both just boards built around a chip that have unpopulated I/o, right?
the only difference is the raspberry pi has a 1 ghz clk and a HDMI port
>>895557
.csv format. Comma separated spreadsheets, Excel compatible.
>>924322
Arduino is for prototyping circuits. Rpi is a computer
>2016
>still using 8bit μCs
>not using based ARM Cortex-M
stay pleb, /diy/
Former Arduinoer here, wondering if there's a board with beefier specs similar to the Arduino? I guess a board suited for physical computing projects?
>>924946
raspberry pi would be the first suggestion.
then the Intel Galileo
when CHIP comes out for real, it'll be a good option as well. | http://4archive.org/board/diy/thread/895179 | CC-MAIN-2016-44 | refinedweb | 15,063 | 71.34 |
be diving head first into one of dlib’s core computer vision implementations — facial landmark detection.
I’ll be demonstrating how to use facial landmarks for:
- Face part (i.e., eyes, nose, mouth, etc.) extraction
- Facial alignment
- Blink detection
- …and much more.
But it all starts with getting dlib installed!
To learn how to install dlib with Python bindings on your system, just keep reading.
How to install dlib
Developed by Davis King, the dlib C++ library is a cross-platform package for threading, networking, numerical operations, machine learning, computer vision, and compression, placing a strong emphasis on extremely high-quality and portable code. The documentation for dlib is also quite fantastic.
From a computer vision perspective, dlib has a number of state-of-the-art implementations, including:
- Facial landmark detection
- Correlation tracking
- Deep metric learning
Over the next few weeks we’ll be exploring some of these techniques (especially facial landmark detection), so definitely take the time now to get dlib configured and installed on your system.
Step #1: Install dlib prerequisites
The dlib library only has four primary prerequisites:
- Boost: Boost is a collection of peer-reviewed (i.e., very high quality) C++ libraries that help programmers not get caught up in reinventing the wheel. Boost provides implementations for linear algebra, multithreading, basic image processing, and unit testing, just to name a few.
- Boost.Python: As the name of this library suggests, Boost.Python provides interoperability between the C++ and Python programming language.
- CMake: CMake is an open-source, cross-platform set of tools used to build, test, and package software. You might already be familiar with CMake if you have used it to compile OpenCV on your system.
- X11/XQuartx: Short for “X Window System”, X11 provides a basic framework for GUI development, common on Unix-like operating systems. The macOS/OSX version of X11 is called XQuartz.
I’ll show you how to install each of these prerequisites on your Ubuntu or macOS machine below.
Ubuntu
Installing CMake, Boost, Boost.Python, and X11 can be accomplished easily with apt-get :
I assume you already have pip (for managing, installing, and upgrading Python packages) installed on your machine, but if not, you can install pip via:
After completing these steps, continue to Step #2.
macOS
In order to install Boost, Boost.Python, and CMake on macOS, you’ll be using the Homebrew package manager. Think of Homebrew as a similar equivalent of Ubuntu’s apt-get only for macOS.
If you haven’t already installed Homebrew, you can do so by executing the following commands:
Hint: You can check if Homebrew is already installed on your machine by executing the brew command in your terminal. If you get a brew: command not found error, then Homebrew is not installed on your machine.
Now that Homebrew is installed, open up your ~/.bash_profile file (create it if it doesn’t exist):
And update your PATH variable to check for packages installed by Homebrew before checking the rest of your system:
After updating your ~/.bash_profile file, it should look similar to mine:
Figure 1: After updating your ~/.bash_profile file, yours should look similar to mine.
We now need to reload the contents of the ~/.bash_profile file via the source command:
This command only needs to be executed once. Alternatively, you can open up a new terminal window which will automatically source the ~/.bash_profile for you.
Next, let’s install Python 2.7 and Python 3:
We can then install CMake, Boost, and Boost.Python:
The --with-python3 flag ensures that Python 3 bindings for Boost.Python are compiled as well — Python 2.7 bindings are compiled by default.
Once you start the boost-python install, consider going for a nice walk as the build can take a bit of time (10-15 minutes).
As a sanity check, I would suggest validating that you have both boost and boost-python installed before proceeding:
As you can see from my terminal output, both Boost and Boost.Python have been successfully installed.
The last step is to install the XQuartz window manager so we can have access to X11. XQuartz is easy to install — just download the .dmg and run the install wizard. After installing, make sure you logout and log back in!
Fun Fact: XQuartz used to be installed by default on OSX 10.5-10.7. We now need to manually install it.
Now that we have our prerequisites installed, let’s continue to our next (optional) step.
Step #2: Access your Python virtual environment (optional)
If you have followed any of my PyImageSearch tutorials on installing OpenCV, then you are likely using Python virtual environments.
Using Python’s virtualenv and virtualenvwrapper libraries, we can create separate, independent Python environments for each project we are working on — this is considered a best practice when developing software in the Python programming language.
Note: I’ve already discussed Python virtual environments many times before on the PyImageSearch blog so I won’t spend any more time discussing them here today — if you would like to read more about them, please see any of my installing OpenCV tutorials.
If you would like to install dlib into a pre-existing Python virtual environment, use the workon command:
For example, if I wanted to access a Python virtual environment named cv , I would use the command:
Notice how my terminal window has changed — the text (cv) now appears before my prompt, indicating that I am in the cv Python virtual environment:
Figure 2: I can tell that I am in the “cv” Python virtual environment by validating that the text “(cv)” appears before my prompt.
Otherwise, I can create an entirely separate virtual environment using the mkvirtualenv command — the command below creates a Python 2.7 virtual environment named py2_dlib :
While this command will create a Python 3 virtual environment named py3_dlib :
Again, please keep in mind that using Python virtual environments are optional, but highly recommended if you are doing any type of Python development.
For readers that have followed my previous OpenCV install tutorials here on the PyImageSearch blog, please make sure you access your Python virtual environment before proceeding to Step #3 (as you’ll need to install the Python prerequisites + dlib into your virtual environment).
Step #3: Install dlib with Python bindings
The dlib library doesn’t have any real Python prerequisites, but if you plan on using dlib for any type of computer vision or image processing, I would recommend installing:
These packages can be installed via pip :
Years ago, we had to compile dlib manually from source (similar to how we install OpenCV). However, we can now use pip to install dlib as well:
This command will download the dlib package from PyPI, automatically configure it via CMake, and then compile and install it on your system.
Provided you have the CMake, Boost, Boost.Python, and X11/XQuartz installed on your system, the command should exit without error (leaving you with a successful dlib install).
I would suggest going out for a nice cup of coffee as this step can take 5-10 minutes for the compile to finish.
After coming back, you should see that dlib has been successfully installed:
Figure 3: The dlib library with Python bindings on macOS have been successfully installed.
The same goes for my Ubuntu install as well:
Figure 4: Installing dlib with Python bindings on Ubuntu.
Step #4: Test out your dlib install
To test out your dlib installation, just open up a Python shell (making sure to access your virtual environment if you used them), and try to import the dlib library:
Figure 5: Testing out my dlib + Python install on macOS and Python 3.6.
If you’ve installed dlib into the same Python virtual environment that you installed OpenCV, you can access OpenCV as well via your cv2 bindings. Here is an example on my Ubuntu machine:
Figure 6: Validating that I can import both dlib and OpenCV into the same Python shell.
Congratulations, you now have dlib installed on your system!
Summary
In today’s blog post I demonstrated how to install the dlib library with Python bindings on Ubuntu and macOS.
Next week we’ll start exploring how to use dlib; specifically, facial landmark detection.
You won’t want to miss this tutorial, so to be notified when the next post is published, be sure to enter your email address in the form below!
See you next week!
Can you install dlib on Windows? If so, how?
Please see my reply “Anoynmous” below.
Download the latest ” .whl ” file from ” ” and just do ” pip install filename.whl “
Hello Adrian,
Thanks for sharing this amazing and exciting C++ library. Kindly advice the right/stable version of Ubuntu to install.
I’ve personally tested these instructions with both Ubuntu 14.04 and 16.04. Both work.
Hi Adrian,
Will pip also install the tool for bounding box annotation?
If anyone’s interested in installing dlib the compile or pip way on Raspberry Pi, remember to increase your swap memory in “/etc/dphys-swapfile”
Cheers,
Tom
Unfortunately, no. For that you will need to compile dlib from source.
Hi
How much you recommend to increase the memory? it doesn’t work on default
(I have your built raspberry pi image)
You can increase memory on the Raspberry Pi by decreasing memory allocated to the GPU via
raspi-config.
I also had the MemoryError problem installing scipy on my Pi 3.
Using pip’s –no-cache-dir switch allowed the install to complete 0k.
It might be worth adding that to your excellent instructions.
Hey Brian — I’m actually doing an entirely separate tutorial for installing dlib on the Raspberry Pi next week (1 May 2017). It will include a few additional tips to help get dlib installed on the Pi.
You might want to look at my post today about getting dlib to install. It has taken me about 3 days to get openCV and dlib installed but I’m there now.
Is it possible to install this on a Raspberry Pi?
Yes, absolutely. Ubuntu and Raspbian are Debian-based so you can actually use the Ubuntu instructions to install dlib on your Raspberry Pi.
Adrian, I keep getting errors when I try to install scikit-image on my Raspbian. How to fix?
What is the error you are getting? Without knowing the error, it’s impossible to provide any help or insight.
Thanks a lot. Was having trouble installing. looking forward to some tutorials with dlib
Thanks Simba — and congrats on getting dlib installed.
Can we have a same installation process for windows also please.. Struggling a lot for it.
I do not have any plans to create a Windows install tutorial nor support Windows on the PyImageSearch blog. Please refer to dlib official site for Windows instructions. When it comes to learning computer vision (or most advanced science techniques), I would encourage you to use a Unix-based system.
Wow, that’s not exactly helpful. I guess now ‘I do not have any plans’, either…to spend my money on your products.
I wish I could offer Windows support, but there are honestly too many logistical issues. I offer almost 200 free tutorials here on the PyImageSearch blog. I hope you enjoy them. If you don’t want to purchase any of the teaching products here on PyImageSearch, you don’t have to — no one is asking you to or forcing you to. Enjoy the content and I hope it helps you on your computer vision journey. But please keep in mind that I am just one person and there are limitations to what I can provide. I’ve found that Unix-based environments are the most useful when building computer vision applications (the exception being if you want to build a Windows app, of course). If you have a Windows-specific question, I am not the right person to ask.
How did you find it in yourself to reply so politely to that comment?
I assume that everyone, including myself, has a bad day from time-to-time. I’m not sure what is going on in their life and what other pressures outside of computer vision (work, school, family, etc.) could be happening with them. I could be rude/mean in return but there is already enough of that in the world. I let it slide and move on with my day. If that person was being a “hater”, okay, I can’t change their mind. Otherwise, they could be having a bad day and who am I to judge 🙂
Have you concidered adding an extra HD to your system and making it dual bootable with Ubuntu nstalled aside your windows installation ?
This is what I did…
Hello, why don’t you work on a virtual machine ? I am doing so on my Mac, all the Python stuff works much better on a Linux 16.04 LTS VM on VirtualBox. Both products are free, you can get VirtualBox from.
Just install it on your Windows system, create or download () the Ubuntu VM and than follow the directions given by Adrian. Everything is described step by step, its a breeze.
You don’t know tech then. Windows != “tech”. So, no one really develops tutorials for Windows. Half broken, ill matched binaries aren’t exactly dev friendly. You not only made the wrong choice with Windows, but you are also abusive to Adrian. I personally have spoken to him on several occasions, he’s very helpful and doesn’t deserve your statements.
Hi Adrian,
Great post!
I have a nice idea for you.
Why don’t you show how to use DLib to learn the 194 Facial Landmarks of the HELEN database (Remember to augment the database by mirroring the images)?
It would be great and will create a new trained predictor for the users of DLib.
Thank You.
The dlib library ships with a facial landmark detector? Are you asking me to demonstrate how to train that detector from scratch? Keep in mind that most of the dlib functionality doesn’t have Python bindings, only C++.
Hi Adrian,
Yes, I was talking about training it for a new job.
My idea was 194 Points of the HELEN database.
Thank You.
Thanks for a great post again!
I had installed dlib a few days back. I am working in a virtual environment using Anaconda and my experience was that there were a few incompatibilities between the boost in anaconda repository and dlib from pip.
After toiling for a few days, was able to get dlib working with Anaconda. Had to setup dlib and boost from source.
So if any one is facing difficulty in working with Anaconda and dlib, I might be able to help.
Adrian, I am eagerly waiting for your future posts on dlib. Thanks!
Congrats on getting dlib installed Michael. And thank you for being willing to help Anaconda users.
Good to know. I had Anaconda and couldn’t get dlib going due to a boost problem. Finally, simply re-installed Ubuntu from scratch; after that dlib installed OK per instructions here.
Hi Michael
Can you please help me in installing dlib in anaconda environment in windows.
I am getting error after trying every option – “Could NOT find Boost”. please help.
hey man, i do have a problem installing dlib on anaconda 3 , what did you do?
when i try to install dlib on anaconda its showing some conda http error …could ypu please just me some solution.
Double-check that you are indeed connected to the internet before trying to download and install the package. Also validate that your DNS is working properly.
Thanks Adrian. I installed dlib – on Linux 16.04 – no issues at all.
Look for ward to facial landmark detection
Congrats on getting dlib installed Murthy, nice job!
Dear Adrian,
Thanks a ton man. Just struck the Ubuntu + OpenCV + Python + Dlib installation and configuration in one go. It’s just because of you.
Regards
Neeraj Kumar
Thank you for the kind words Neeraj — and congratulations on getting dlib installed.
Straightforward install due to the very clear instructions, also on Ubuntu 16.04. Thanks Adrian
Thank you Tony!
Great post!!
Very well explained!!
Thanks Yash! 🙂
I’m getting “Segmentation fault: 11” when importing dlib in python3 in my Mac =/
99.9% of the time you’ll see a segmentation fault error when you try to import dlib into a different version of Python than it was compiled against. I would suggest compiling dlib from source rather than using pip.
Hello Adrian,
I am trying to install dlib library on raspberry pi 3.
The commands in step 3 to install three packages were failed; installed them using sudo apt-get command.
In the last step “pip install dlib”, it is stuck there (It was also happened with the three packages installation commands) like:
$ pip install dlib
Collecting dlib
Using cached dlib-19.4.0.tar.gz
Building wheels for collected packages: dlib
Running setup.py bdist_wheel for dlib … /
please guide us
The install is likely not “stuck”. It can take 10 minutes to install dlib on a standard laptop/desktop. On a Raspberry Pi this number can jump to an hour. If you run
topyou’ll see that your system is busy compiling dlib.
i am working on raspberry pi too. It took about 4-5 hours to finish
Hello Adrian,
I followed this tutorial to install dlib in python. Everything was going smoothly until the time of installing dlib. When i ran the command “pip install dlib” it gives two error “Failed building wheel for dlib” and “error: cmake configuration failed!”.
What could be the reason behind this?
Thanks in advance.
It sounds like the internal CMake script could not configure dlib correctly. Did you install all the dependencies without error? You have might to resort to compiling dlib from source.
i tried over and over , while install dlib it failed at %79 building cxx…..
cmake build failed.
Which operating system are you trying to compile dlib on?
raspi 2 – raspian 4.4.50-v7+ – python 2.7.x – opencv 3 – virtualenv cv
i tried on your pre-installed package for raspi and i get same error again.
I’m writing a tutorial dedicated to installing dlib on the Raspberry Pi. It will go live at the end of this month. The tutorial will resolve any issues installing dlib on the Raspberry Pi.
Hello, Adrian Rosebrock
your blog help me a lot, thanks, and let me help others
for how to install dlib on windows
1. install python 3.5 32bit, can be download here
2.download dlib from here
select “dlib-18.17.100-cp35-none-win32.whl (md5)”
last, run pip install dlib-18.17.100-cp35-none-win32.whl
it success from my 32bit win7
Thanks for sharing!
Installing DLIB was hanging my RPi 3 (1Gb ram).
I’ve been struggling with this for a couple of days trying to play catchup with your emails. I’d love to know your config which allows it to install in 10 minutes. I’m sure it’s to do with swap size or zRAM (I haven’t tried this)
Today I increased the size of the dphys-swapfile to 1Gb and moved it (temporarily) onto a USB hard drive. The Raspian default is 100Mb on the SD card! Of course a 100Mb swap file on an SD card is going to ‘wear out’ the SD card if a lot of swapping takes place.
Anyway, whilst running the ‘pip install dlib’ command I also fired off top – ‘swap used’ peaked at 500Mb so far – no wonder it hung with the default swap setting. I noted that my Pi is a lot more responsive to user input during the install having done this (previously I had to pull the plug to get it back)
.
Yay, the install has just this second completed without error – and probably in less than 20 minutes (I didn’t time it exactly)- now to follow your blog.
Having done that I’ll now restore the original swapfile and reboot.
For others with similar hanging issues this is what I did (after a few hours of web searching).
1 find a USB powered hard drive (I had a Philips 300Gb not doiung anything) and plug it in.
2 note mount path (something like /media/pi/philips in my case)
3 sudo nano /etc/dphys-swapfile
set the following values :-
CONF_SWAPFILE=/media/pi/philips/swap (use your own values)
CONF_SWAP_SIZE=1024
save and exit
4 issue these commands
sudo /etc/init.d/dphys-swapfile stop
sudo /etc/init.d/dphys-swapfile start
5 install dlib with the ‘pip install dlib’ command
6 when completed restore your /etc/dphys-swapfile settings
7 re-run the commands at step 4
Now you can dismount the usb drive and use it for some other purpose.
Note I also made another post about using –no-cache-dir when installing scipy
When my Pi camera turns up I can get now on with facial feature recognition.
Oh, and one last recommendation. Having suffered so many hiccups on the route I took the precaution of imaging my SD card after each major stage. I use Win32DiskImager for that.
Hi Brian — I’m actually detailing an entire blog post dedicated to installing dlib on the Raspberry Pi. It’s already written and set to publish on Monday, May 1st at 10AM EST. It makes use of swap settings similar to yours. You actually don’t even need the external drive if you increase your swap file size.
I installed dlib but how can I install Sym-link with in OpenCV and with in Keras_tf. I mean I installed
1. dlib (virtual environment)
2. keras_tf (virtual environment)
3. OpenCV
It would be nice to have Sym-link with all of them.
You would need to find the location to your
cv2.sofirst. Normally it would be in
/usr/local/lib/python2.7/site-packages/cv2.so, but you’ll want to check on your own machine first.
From there, change directory to your
site-packagesdirectory of the virtual environment and then sym-link in OpenCV:
How to implement the paper “Max-Margin Object Detection” based on Structured SVM by Python, which is written by Davis King? Can you give a example in your blog?Maybe, it is a good topic for your course!
The method is already implemented inside the dlib library. I would suggest looking at the dlib source code if you’re interested in the algorithm itself.
I am using Raspberry Pi 3 and when i installed dlib it only worked on Python 3.4 and it doesn’t work on Python 2.7.
I want it to work on Python 2.7 because when i type “import cv” on Python 3.4 i get an error because there is no module named ‘cv’, while on 2.7 i can import cv.
Thank you!
This sounds like an issue with your Python virtual environments. Please refer to this blog post where I provide instructions on how to install dlib on the Raspberry Pi.
Hi Adrian,
Appreciate your blog on installing the dlib library. I do wish to ask you of an error as when I am installing the dlib it gets stuck while compiling the trainer.cpp as a result the installation of the dlib freezes.
I read on stackoverflow that we have to manually compile the cpp examples? I am running on py2.7.9 virtual env. Any suggestions would be helpful.
Which operating system/hardware are you trying to compile dlib on?
I am running it on raspberry pi 3 : Raspbian GNU/ Linux 8 (jessie).
Please use this tutorial to install dlib on your Raspberry Pi.
I installed dlib in Ubuntu 16.04 following this guide and it works perfectly. Thanks for that!
My problem is that face detection is a really slow process and I want to speed-it up by activating the support for AVX extensions but I don’t know how to modify the cmake in order to recompile it as initially I just did it through pip(pip install dlib).
Congrats on getting OpenCV installed Ampharos, nice job!
As far as face detection goes, you’ll want to compile dlib from source rather than a
pipinstall. I’ll be covering how to speedup the dlib library for face detection and facial landmarks in a future blog post.
Hi,
I perfromed all the steps but it is not mapping to cv environment like No module cv2 in the python dlib can you please suggest me
Please see the other comments in this post. You need to sym-link your
cv2.sobindings into the Python virtual environment you used for dlib.
My raspberry frezzee when installing dlib :((
Hey Damian — please use this blog post to install dlib on your Raspberry Pi.
Can I install dlib using anaconda? I am using conda environment variable but running into many issues one after another.
this:'tthePythonbindingscompilework
says to uninstall , but anaconda has many things. Is it wise to uninstall it?
I have never used Anaconda to install dlib, I am not sure. I would suggest asking on the official dlib forums.
hi Ardian,
thanks for your great tutorials. I followed this one on my macbook on Sierra, and succeeded in compiling without errors. However, when I import dlib in python3.6, I get an error: illegal instruction: 4. Could it be my processor is not up yo this?
It sounds like you compiled dlib against a different Python version than you imported it into. Double-check your Python versions and ensure you use the same Python and pip for both install and import.
Hello,
Thank you for this great work. These are the great tutorials to start with.
However, reading the comments made it clear you have less idea to install dlib on Windows.
I’ve installed cmake, boost using pip and installed boost-python using gohlke’s libraries ( ) but cannot figure out about X11.
Can you help me with this one only.
Would be a great help as a beginner.
I’m using Win 10 and Python 2.7
Hi Arpit — thanks for the comment; however, I do not support Windows here on the PyImageSearch blog. I highly recommend that you a Unix-based environment such as Linux or macOS to study computer vision.
Great tutorials Adrain! I’ve installed dlib for ubuntu 14.04 as described above but run into the following error when I try to import the module: ImportError: libpng16.so.16: cannot open shared object file: No such file or directory. The package is not on my system (dpkg -L libpng16-16, returns an error) but I can’t seem to install the package (sudo apt-get install libpng16-16, is unable to locate the package). I’m new to ubuntu but I feel like I’m missing something very basic. Thanks!
Hi Andy — I’m sorry to hear about the issues installing dlib. The issue is related to your libpng library not being found. I would search your system install for any libpng library install. You may need to install it via “apt-get”. Then re-install dlib.
Hey Adrian,
Great content and awesome course. I’m trying to install dlib to test drive CV before enrolling in your course. However, it has been more than a week trying to install dlib with no luck. I followed the instructions in this blog and it looks like everything has been installed successfully. However, when I try to import dlib I get this error:
Any help is really appreciated. I really want to enjoy your awesome course after fixing this blocker.
Hi Mohamed — this seems like it might be an Anaconda-related issue. I have not used Anaconda in awhile so I unfortunately cannot replicate the error. Can you please try on a different system that uses the instructions I’ve detailed in this post (i.e., without Anaconda)?
Secondly, you’re default Python is Python 3, but your error is due to a mismatch in versions. Again, I think this is an Anaconda issue messing up your Python PATH. If you’re using Homebrew on top of this as well it would only further make the issue as well.
For what it’s worth I offer pre-configured VMs with dlib pre-installed inside my courses, so definitely don’t let this blocker hold you back!
Thanks for your reply, Adrian!
Do you think if I use virtual environment and go through the instructions this could solve the problem?
If you were to skip using Anaconda and use virtualenv/virtualenvwrapper as I suggest in this tutorial I think it would resolve the issue you are having.
Hi Adrian,
Thanks for this amazing tutorial. I followed this tutorial for Ubuntu 16.04 and Python 3.5 without any errors. However when I try to “$ pip install dlib” I get following error –
Cmake build failed!
Here is the complete error list –
Can you please help me?
Based on your output it seems that your system killed the compile, likely due to running out of memory. How much RAM does your system have?
I’m using it on VM Ware 2012 and my system have 8 GB of RAM.
How much RAM did you allocate to your virtual machine?
Hi Adrian.
Do we need to do any extra setup for GPU support of dlib?
You need to have the cuDNN and CUDA dev kit installed. To quote Davis King, the creator and maintainer of dlib: “When you compile it will print messages saying what it’s doing. It will say something like “Using CUDA” or that it’s not using CUDA because you are missing cuDNN or the CUDA dev kit.
So, by default, it will use the GPU unless you don’t have the CUDA tooling installed. If that’s the case it will print out messages that clearly indicate that’s happening and tell you what to do fix it.”
Hi,
I am trying to install opencv and dlib on my macbook using homebrew. However, I am facing the following error with dlib and I struggle to find a solution to this. Does anyone know how to fix this? Thanks.
David
>>> import dlib
Traceback (most recent call last):
…
from .dlib import *
ImportError: dlopen(/usr/local/lib/python2.7/site-packages/dlib/dlib.so, 2): Library not loaded: @rpath/libmkl_rt.dylib
Referenced from: /usr/local/lib/python2.7/site-packages/dlib/dlib.so
Reason: image not found
I’d recommend that you install
dlibwith pip (not brew):
pip install dlib.
hi adrian,
thanks for running such an excellent blog. it is quite useful for college like students like me.
I use an ubuntu system (16.04.3) and my query is not exactly related to dlib, but another problem I am facing during its installation, which I hope you can clarify.
i have a problem of unmet dependencies when installing libboost-all-dev. hope you can you help me resolve that issue
Thanks a lot.
Hey Akash — have you installed libboost on your system yet? Make sure you run:
$ sudo apt-get install libboost-all-dev
Hi Adrian
I am Valentin
Thank you for this useful tutorial, mostly for students like me who are doing differents school or DIY projects…
I have bee able to install Open CV 3.0.0 successfully in my Raspberry Pi 3 Model B just by following the steps you gave in the other tutorial and now I’m working one the face and eyes recognition and I’m stuck in the installation of dlib: when I try to enter the command “pip install scipy” it does not download and give an error “memory error” And then when I try to enter the command with “pip –no-cache-dir install scipy” then it downloads the scipy-1.0.0.tar.gz (15.2MB) and it gets stuck at “Running setup.py install for scipy …\ ” for hours and hours, over 8 hours…
I hope you acn help me to solve this issue.
Thank you
Hey Valentin — I don’t think your Pi is “stuck”, it’s just compiling SciPy. Take a look at the output of
topto validate that the compile is running.
Hi adrian
Great tutorial!! thumbs up ..
I am having a problem in installing dlib on Mac OS x
i am getting following error:
error: cmake configuration failed!
i have installed everything correctly but still getting error!! need help!!
If CMake is reporting “configuration failed” then there was a problem configuring the build. Scroll up through the CMake output and find where the error occurred as it will give you more information.
Hi Adrian:
after so many try error
i finally install dlib success!!
environment:
ASUS laptop
vmware workstation 14 player
ubuntu 16.04
vm guest (mem 1024mb install dlib so many strange issue)
vm guest (mem 2048mb install dlib success!)
vmware workstation player vm guest create default 1024MB , i change set to 2048MB
all strange issue gone install dlib succes
Congrats on getting dlib installed, Alex! 🙂
I am getting an import error while importing dlib in some python script as -ImportError: No module named dlib. Can you help me solve this.
Hey Pranesh — I’m not sure what the exact error is in this case but I would recommend following this updated guide on installing dlib.
Hi Adrian, I tried to work with dlib on your virtual machine, and tried to install it via this post and also tried the updated post on installing dlib. But I’m still getting “No modile named dlib” when trying to import dlib. I’ll appreciate if you can help me. Thanks
Hey Mojtaba — did you install dlib into the Python virtual environment included with the VM? Additionally, I just published a brand new dlib install tutorial that may be easier to follow and get up and running.
Hi Adrian, Thanks very much. Yes I’m trying these on py3cv3 virtual environment with your VM. And I have also followed that post as well but still couldn’t import dlib and get “No module named dlib”
That is indeed quite strange. Can you confirm that “pip install dlib” finished with an error? In these situations the pip install could have errored out and then the dlib library would not be installed.
Hello. I found a link that installs Dlib (on Windows) without going through the installation of prerequisites and previous steps. Here is the link :
The installation was successful for me and I managed to do ” import dlib ” without error. But I wanted to know if this installation on Windows is correct or if I absolutely have to go through all the steps described above?
I also have another question. I use Python 2.7 and OpenCV version 2.4.13.3 and I saw on your screenshots that you use other versions of Python and OpenCV. I wanted to know if my versions allow to work later for the detection of the eyes and the face.
Thank you
It’s been a long time since I’ve used Windows so I’m not sure on the answer to your first question. Hopefully another PyImageSearch reader can chime in here.
As for detecting eyes and faces, OpenCV 2.4 will work provided you’re using Haar cascades. If you want to use pre-trained deep neural network for face detection you’ll need at least OpenCV 3.3.
Hi Adrian,
Thanks for such great tutorial.
I got error after installation of dlib.
dlib was compiled to use SSE41 instructions, but these arent available on your machine.
That sounds like a warning rather than an error. Can you still import dlib into your shell/Python scripts and execute the code?
Hi Adrian how time will taken to install Dlib on rasberry pi3
I don’t have the exact timings on me but the last time I installed dlib on a Pi 3B+ it did take a few hours.
please how install dlib in fedora server ?
Sorry, I don’t have any tutorials for Fedora.
Hi adrian, so my quuestion is about how can i install dblib on a Jupyter Notebook or a Google Collaboratory. Thanks | https://www.pyimagesearch.com/2017/03/27/how-to-install-dlib/ | CC-MAIN-2019-35 | refinedweb | 6,053 | 73.88 |
The challenge: Use machine learning to categorize RSS feeds
I was recently given the assignment to create an RSS feed categorization subsystem for a client. The goal was to read dozens or even hundreds of RSS feeds and automatically categorize their many articles into one of dozens of predefined subject areas. The content, navigation, and search functionality of the client website would be driven by the results of this daily automated feed retrieval and categorization.
The client suggested using machine learning, perhaps with Apache Mahout and Hadoop, as she had recently read articles about those technologies. Her development team and ours, however, are fluent in Ruby rather than Java™ technology. This article describes the technical journey, learning process, and ultimate implementation of a solution.
What is machine learning?
My first question was, "what exactly is machine learning?" I had heard the term and was vaguely aware that the supercomputer IBM® Watson had recently used it to defeat human competitors in a game of Jeopardy. As a shopper and social network participant, I was also aware that both Amazon.com and Facebook do an amazingly good job of recommending things (such as products and people) based on data about their shoppers. In short, machine learning lies at the intersection of IT, mathematics, and natural language. It is primarily concerned with these three topics, but the solution for the client would ultimately involve the first two:
- Classification. Assigning items to arbitrary predefined categories based on a set of training data of similar items
- Recommendation. Recommending items based on observations of similar items
- Clustering. Identifying subgroups within a population of data
The Mahout and Ruby detours
Armed with an understanding of what machine learning is, the next step was to determine how to implement it. As the client suggested, Mahout was an appropriate starting place. I downloaded the code from Apache and went about the process of learning machine learning with Mahout and its sibling, Hadoop. Unfortunately, I found that Mahout had a steep learning curve, even for an experienced Java developer, and that working sample code didn't exist. Also unfortunate was a lack of Ruby-based frameworks or gems for machine learning.
Finding Python and the NLTK
I continued to search for a solution and kept encountering "Python" in the result sets. As a Rubyist, I knew that Python was a similar object-oriented, text-based, interpreted, and dynamic programming language, though I hadn't learned the language yet. In spite of these similarities, I had neglected to learn Python over the years, seeing it as a redundant skill set. Python was in my "blind spot," as I suspect it is for many of my Rubyist peers.
Searching for books on machine learning and digging deeper into their tables of contents revealed that a high percentage of these systems use Python as their implementation language, along with a library known as the Natural Language Toolkit (NLTK). Further searching revealed that Python was more widely used than I had realized—such as in Google App Engine, YouTube, and websites built with the Django framework. It even comes preinstalled on the Mac OS X workstations I use daily! Furthermore, Python offers interesting standard libraries (for example, NumPy and SciPy) for mathematics, science, and engineering. Who knew?
I decided to pursue a Python solution after I found elegant coding examples. The following one-liner, for example, is all the code needed to read an RSS feed through HTTP and print its contents:
print feedparser.parse("")
Getting up to speed on Python
In learning a new programming language, the easy part is often learning the language itself. The harder part is learning its ecosystem—how to install it, add libraries, write code, structure the code files, execute it, debug it, and write unit tests. This section provides a brief introduction to these topics; be sure to check out Resources for links to more information.
pip
The Python Package Index (
pip) is the standard
package manager for Python. It's the program you use to add libraries to your
system. It's analogous to gem for Ruby libraries. To add the NLTK library to
your system, you enter the following command:
$ pip install nltk
To display a list of Python libraries installed on your system, run this command:
$ pip freeze
Running programs
Executing a Python program is equally simple. Given a program named
locomotive_main.py and three program arguments, you compile
and execute it with the
python program:
$ python locomotive_main.py arg1 arg2 arg3
Python uses the
if __name__ == "__main__": syntax
in Listing 1 to determine whether the file itself is
being executed from the command line or just being imported by other code. To
make a file executable, add
"__main__" detection.
Listing 1. Main detection
import sys import time import locomotive if __name__ == "__main__": start_time = time.time() if len(sys.argv) > 1: app = locomotive.app.Application() ... additional logic ...
virtualenv
Most Rubyists are familiar with the issue of system-wide libraries, or gems.
A system-wide set of libraries is generally not desirable, as one of your
projects might depend on version 1.0.0 of a given library, while another project
depends on version 1.2.7. Likewise, Java developers are aware of this same
issue with a system-wide CLASSPATH. Like the Ruby community with its
rvm tool, the Python community uses the
virtualenv tool (see Resources
for a link) to create separate execution environments, including specific
versions of Python and a set of libraries. The commands in Listing 2
show how to create a virtual environment named p1_env for your p1
project, which contains the
feedparser,
numpy,
scipy, and
nltk libraries.
Listing 2. Commands to create a virtual environment with virualenv
$ sudo pip install virtualenv $ cd ~ $ mkdir p1 $ cd p1 $ virtualenv p1_env --distribute $ source p1_env/bin/activate (p1_env)[~/p1]$ pip install feedparser (p1_env)[~/p1]$ pip install numpy (p1_env)[~/p1]$ pip install scipy (p1_env)[~/p1]$ pip install nltk (p1_env)[~/p1]$ pip freeze
You need to "source" your virtual environment activation script each time you work with your project in a shell window. Notice that the shell prompt changes after the activation script is sourced. As you create and use shell windows on your system and to easily navigate to your project directory and activate its virtual environment, you might want to add an entry like the following to your ~/.bash_profile file:
$ alias p1="cd ~/p1 ; source p1_env/bin/activate"
The code base structure
After graduating from simple single-file "Hello World" programs, Python developers need to understand how to properly structure their code base regarding directories and file names. Each of the Java and Ruby languages has its own requirements in this regard, and Python is no different. In short, Python uses the concept of packages to group related code and provide unambiguous namespaces. For the purpose of demonstration in this article, the code exists within a given project root directory, such as ~/p1. Within this directory, there exists a locomotive directory for a Python package of the same name. Listing 3 shows this directory structure.
Listing 3. Example directory structure
locomotive_main.py locomotive_tests.py locomotive/ __init__.py app.py capture.py category_associations.py classify.py news.py recommend.py rss.py locomotive_tests/ __init__.py app_test.py category_associations_test.py feed_item_test.pyc rss_item_test.py
Notice the oddly named __init__.py files. These files instruct Python to load the necessary libraries for your package as well as your specific application code files that reside in the same directory. Listing 4 shows the contents of the file locomotive/__init__.py.
Listing 4. locomotive/__init__.py
# system imports; loads installed packages import codecs import locale import sys # application imports; these load your specific *.py files import app import capture import category_associations import classify import rss import news import recommend
With the locomotive package structured as in Listing 4, the main programs in the root directory of your project can import and use it. For example, file locomotive_main.py contains the following imports:
import sys # >-- system library import time # >-- system library import locomotive # >-- custom application code library in the "locomotive" directory
Testing
The Python
unittest standard library provides a nice
solution for testing. Java developers familiar with JUnit and Rubyists familiar
with Test::Unit framework should find the Python
unittest
code in Listing 5 to be easily readable.
Listing 5. Python unittest
class AppTest(unittest.TestCase): def setUp(self): self.app = locomotive.app.Application() def tearDown(self): pass def test_development_feeds_list(self): feeds_list = self.app.development_feeds_list() self.assertTrue(len(feeds_list) == 15) self.assertTrue('feed://news.yahoo.com/rss/stock-markets' in feeds_list)
The code in Listing 5 also demonstrates a distinguishing feature of Python: All
code must be consistently indented or it won't compile successfully. The
tearDown(self) method might look a bit odd at first.
You might wonder why the test is hard-coded always to pass? Actually, it's
not. That's just how you code an empty method in Python.
Tooling
What I really needed was an integrated development environment (IDE) with
syntax highlighting, code completion, and breakpoint debugging functionality to
help me with the Python learning curve. As a user of the Eclipse IDE for Java
development, the
pyeclipse plug-in was the next
tool I looked at. It works fairly well though was sluggish at times. I eventually
invested in the PyCharm IDE, which meets all of my IDE requirements.
Armed with a basic knowledge of Python and its ecosystem, it was finally time to start implementing a machine learning solution.
Implementing categorization with Python and NLTK
Implementing the solution involved capturing simulated RSS feeds, scrubbing their
text, using a
NaiveBayesClassifier, and classifying
categories with the kNN algorithm. Each of these actions is described here.
Capturing and parsing the feeds
The project was particularly challenging, because the client had not yet defined the list of target RSS feeds. Thus, there was no "training data," either. Therefore, the feed and training data had to be simulated during initial development.
The first approach I used to obtain sample feed data was simply to fetch a list
of RSS feeds specified in a text file. Python offers a nice RSS feed parsing
library called
feedparser that abstracts the
differences between the various RSS and Atom formats. Another useful library
for simple text-based object serialization is humorously called
pickle. Both of these libraries are used in the code
in Listing 6, which captures each RSS feed as "pickled"
object files for later use. As you can see, the Python code is concise and
powerful.
Listing 6. The CaptureFeeds class
import feedparser import pickle class CaptureFeeds: def __init__(self): for (i, url) in enumerate(self.rss_feeds_list()): self.capture_as_pickled_feed(url.strip(), i) def rss_feeds_list(self): f = open('feeds_list.txt', 'r') list = f.readlines() f.close return list def capture_as_pickled_feed(self, url, feed_index): feed = feedparser.parse(url) f = open('data/feed_' + str(feed_index) + '.pkl', 'w') pickle.dump(feed, f) f.close() if __name__ == "__main__": cf = CaptureFeeds()
The next step was unexpectedly challenging. Now that I had sample feed data, it had to be categorized for use as training data. Training data is the set of data that you give to your categorization algorithm so that it can learn from it.
For example, the sample feeds I used included ESPN, the Sports Network. One of
the feed items was about Tim Tebow of the Denver Broncos football team being
traded to the New York Jets football team during the same time the Broncos
had signed Peyton Manning as their new quarterback. Another item in the feed
results was about the Boeing Company and its new jet. So, the question is, what
specific category value should be assigned to the first story? The values
tebow,
broncos,
manning,
jets,
quarterback,
trade, and
nfl are all appropriate. But only one value can be
specified in the training data as its category. Likewise, in the second story, is the
category
boeing or
jet?
The hard part is in those details. Accurate manual categorization of a large set of
training data is essential if your algorithm is to produce accurate results. The time
required to do this should not be underestimated.
It soon became apparent that I needed more data to work with, and it had to be categorized already—and accurately. Where would I find such data? Enter the Python NLTK. In addition to being an outstanding library for language text processing, it even comes with downloadable sets of sample data, or a corpus in their terminology, as well as an application programming interface to easily access this downloaded data. To install the Reuters corpus, run the commands shown below. More than 10,000 news articles will be downloaded to your ~/nltk_data/corpora/reuters/ directory. As with RSS feed items, each Reuters news article contains a title and a body, so this NLTK precategorized data is excellent for simulating RSS feeds.
$ python # enter an interactive Python shell >>> import nltk # import the nltk library >>> nltk.download() # run the NLTK Downloader, then enter 'd' Download Identifier> reuters # specify the 'reuters' corpus
Of particular interest is file ~/nltk_data/corpora/reuters/cats.txt. It contains
a list of article file names and the assigned category for each article file. The
file looks like the following, so the article in file 14828 in subdirectory test
pertains to the topic
grain.
test/14826 trade test/14828 grain
Natural language is messy
The raw input to the RSS feed categorization algorithm is, of course, text written in the English language. Raw, indeed.
English, or any natural language (that is, spoken or ordinary language) is highly
irregular and imprecise from a computer processing perspective. First, there is
the matter of case. Is the word Bronco equal to bronco?
The answer is maybe. Next, there is punctuation and whitespace to contend with.
Is bronco. equal to bronco or bronco,? Kind of.
Then, there are plurals and similar words. Are run, running, and ran
equivalent? Well, it depends. These three words have a common stem.
What if the natural language terms are embedded within a markup language like
HTML? In that case, you have to deal with text like
<strong>bronco</strong>. Finally, there
is the issue of frequently used but essentially meaningless words like a, and,
and the. These so-called stopwords just get in the way. Natural
language is messy; it needs to be cleaned it up before processing.
Fortunately, Python and NLTK enable you to clean up this mess. The
normalized_words method of class
RssItem,
in Listing 7, deals with all of these issues. Note in
particular how NLTK cleans the raw article text of the embedded HTML markup
in just one line of code! A regular expression is used to remove punctuation,
and the individual words are then split and normalized into lowercase.
Listing 7. The RssItem class
class RssItem: ... regex = re.compile('[%s]' % re.escape(string.punctuation)) ... def normalized_words(self, article_text): words = [] oneline = article_text.replace('\n', ' ') cleaned = nltk.clean_html(oneline.strip()) toks1 = cleaned.split() for t1 in toks1: translated = self.regex.sub('', t1) toks2 = translated.split() for t2 in toks2: t2s = t2.strip().lower() if self.stop_words.has_key(t2s): pass else: words.append(t2s) return words
The list of stopwords came from NLTK with this one line of code; other natural languages are supported:
nltk.corpus.stopwords.words('english')
NLTK also offers several "stemmer" classes to further normalize the words. Check out the NLTK documentation on stemming, lemmatization, sentence structure, and grammar for more information.
Classification with the Naive Bayes algorithm
The Naive Bayes algorithm is widely used and implemented in the NLTK with the
nltk.NaiveBayesClassifier class. The Bayes algorithm
classifies items per the presence or absence of features in their datasets. In
the case of the RSS feed items, each feature is a given (cleaned) word of
natural language. The algorithm is "naive," because it assumes that there is no
relationship between the features (in this case, words).
The English language, however, contains more than 250,000 words. Certainly, I
don't want to have to create an object containing 250,000 Booleans for each
RSS feed item for the purpose of passing to the algorithm. So, which words do
I use? In short, the answer is the most common words in the population of
training data that aren't stopwords. NLTK provides an outstanding class,
nltk.probability.FreqDist, which I can use to identify
these top words. In Listing 8, the
collect_all_words method returns an array of all the
words from all training articles.
This array is then passed to the
identify_top_words method
to identify the most frequent words. A useful feature of the
nltk.FreqDist class is that it's essentially a hash, but
its keys are sorted by their corresponding values, or counts. Thus,
it is easy to obtain the top 1000 words with the
[:1000]
Python syntax.
Listing 8. Using the nltk.FreqDist class
def collect_all_words(self, items): all_words = [] for item in items: for w in item.all_words: words.append(w) return all_words def identify_top_words(self, all_words): freq_dist = nltk.FreqDist(w.lower() for w in all_words) return freq_dist.keys()[:1000]
For the simulated RSS feed items with the NLTK Reuters article data, I need to identify the categories for each item. I do this by reading the ~/nltk_data/corpora/reuters/cats.txt file mentioned earlier. Reading a file with Python is as simple as this:
def read_reuters_metadata(self, cats_file): f = open(cats_file, 'r') lines = f.readlines() f.close() return lines
The next step is to get the features for each RSS feed item. The
features method of the
RssItem
class, shown below, does this. In this method, the array of
all_words in the article is first reduced to a
smaller
set object to eliminate the duplicate words.
Then, the
top_words are iterated and compared to
this set for presence or absence. A hash of 1000 Booleans is returned, keyed by
w_ followed by the word itself. Very concise, this
Python.
def features(self, top_words): word_set = set(self.all_words) features = {} for w in top_words: features["w_%s" % w] = (w in word_set) return features
Next, I collect a training set of RSS feed items and their individual features and pass them to the algorithm. The code in Listing 9 demonstrates this task. Note that the classifier is trained in exactly one line of code.
Listing 9. Training a nltk.NaiveBayesClassifier
def classify_reuters(self): ... training_set = [] for item in rss_items: features = item.features(top_words) tup = (features, item.category) # tup is a 2-element tuple featuresets.append(tup) classifier = nltk.NaiveBayesClassifier.train(training_set)
The
NaiveBayesClassifier, in the memory of the running
Python program, is now trained. Now, I simply iterate the set of RSS feed items
that need to be classified and ask the classifier to guess the category for each
item. Simple.
for item in rss_items_to_classify: features = item.features(top_words) category = classifier.classify(feat)
Becoming less naive
As stated earlier, the algorithm assumes that there is no relationship between the individual features. Thus, phrases like "machine learning" and "learning machine" or "New York Jet" and "jet to New York" are equivalent (to is a stopword). In natural language context, there is an obvious relationship between these words. So, how can I teach the algorithm to become "less naive" and recognize these word relationships?
One technique is to include the common bigrams (groups of two words)
and trigrams (groups of three words) in the feature set. It should now
come as no surprise that NLTK provides support for this in the form of the
nltk.bigrams(...) and
nltk.trigrams(...)
functions. Just as the top n-number of words were collected from the
population of training data words, the top bigrams and trigrams can similarly be
identified and used as features.
Your results will vary
Refining the data and the algorithm is something of an art. Should you further normalize the set of words, perhaps with stemming? Or include more than the top 1000 words? less? Or use a larger training data set? Add more stopwords or "stop-grams"? These are all valid questions to ask yourself. Experiment with them, and through trial and error, you will arrive at the best algorithm for your data. I found that 85 percent was a good rate of successful categorization.
Recommendation with the k-Nearest Neighbors algorithm
The client wanted to display RSS feed items within a selected category or similar categories. Now that the items had been categorized with the Naive Bayes algorithm, the first part of that requirement was satisfied. The harder part was to implement the "or similar categories" requirement. This is where machine learning recommender systems come into play. Recommender systems recommend an item based on similarity to other items. Amazon.com product recommendations and Facebook friend recommendations are good examples of this functionality.
k-Nearest Neighbors (kNN) is the most common recommendation algorithm. The idea is to provide it a set of labels (that is, categories), and a corresponding dataset for each label. The algorithm then compares the datasets to identify similar items. The dataset is composed of arrays of numeric values, often in a normalized range from 0 to 1. It can then identify the similar labels from the datasets. Unlike Naive Bayes, which produces one result, kNN can produce a ranked list of several (that is, the value of k) recommendations.
I found recommender algorithms simpler to comprehend and implement than the classification algorithms, although the code was too lengthy and mathematically complex to include here. Refer to the excellent new Manning book, Machine Learning in Action, for kNN coding examples (see Resources for a link). In the case of the RSS feed item implementation, the label values were the item categories, and the dataset was an array of values for each of the top 1000 words. Again, constructing this array is part science, part math, and part art. The values for each word in the array can be simple zero-or-one Booleans, percentages of word occurrences within the article, an exponential value of this percentage, or some other value.
Conclusion
Discovering Python, NLTK, and machine learning has been an interesting and enjoyable experience. The Python language is powerful and concise and now a core part of my developer toolkit. It is well suited to machine learning, natural language, and mathematical/scientific applications. Although not mentioned in this article, I also found it useful for charting and plotting. If Python has similarly been in your blind spot, I encourage you to take a look at it.
Resources
Learn
- Learn more about machine learning from Wikipedia.
- Read Peter Harrington's Machine Learning in Action (Manning, 2012).
- Check out Natural Language Processing with Python by Steven Bird, Ewan Klein, and Edward Loper (O'Reilly, 2009).
- Check out Implement Bayesian inference using PHP (Paul Meagher, developerWorks, March-May 2009). This three-part series discusses interesting applications designed to help you appreciate the power and potential of Bayesian inference concepts.
-
- Explore the NLTK site and building Python programs to work with human language data.
- Download pip and learn more about this tool for installing and managing Python packages.
- Learn more about
virtualenv, a tool to create isolated Python environments.
unitteststandard library, a Python language version of JUnit.
pyeclipseplug-in for Eclipse.
- Check out the PyCharm IDE for a complete set of development tools to program with Python and capabilities the Django framework.
-. | http://www.ibm.com/developerworks/linux/library/os-pythonnltk/index.html | CC-MAIN-2013-48 | refinedweb | 3,873 | 56.45 |
On Saturday, 21 July 2012 at 21:44:52 UTC, Andrei Alexandrescu
wrote:
>
> Please chime in with thoughts. Would someone want to pioneer
> this project?
>
There are some questions.
1. static info = getModuleInfo("std.algorithm");
The language does not allow you to use CTFE parameter values as
arguments to __traits/templates. Therefore, to be able to build
meta-objects at compile-time, you would have to:
static info = getModuleInfo!"std.algorithm";
2. Then, what is the preferred way of referencing compiler
objects - with strings or directly:
import std.algorithm;
static info = getModuleInfo!(std.algorithm);
?
I think using strings while we have direct access to those
objects is not an incredibly good idea. Those strings will be
mixed-in by getXxxInfo anyway:
ModuleInfo getModuleInfo(string name)()
{
...
mixin("import " ~ name ~ ";");
mixin("alias __traits(getMembers, " ~ name ~ ") members;");
foreach (m; members)
...
}
3. auto info = getModuleInfo("std.algorithm");
There is no way to mutate global data structures at compile-time,
therefore you would have to build, at run time, a data structure
aggregating all meta-objects coming from various modules.
That could be achieved with static constructors:
module std.reflection;
// application-wide module info registry
private ModuleInfo[string] moduleInfos;
// run-time module info getter
string getModuleInfo(string s)
{
return moduleInfos[s];
}
string makeModuleInfoAvailableDynamically()
{
return q{
shared static this() { shared static mi =
getModuleInfo!(__traits(thisModule));
moduleInfos[mi.name] = mi; }
};
}
Problematic because mixing-in makeModuleInfoAvailableDynamically
would mean that circular imports are no longer allowed for that
module. Remember the whining babies complaining about this very
use case a while ago?
What about changing the language so that static constructors
marked with @system are exempted from circular dependency
checking?
3. If you are against inheritance, why classes and not structs?
4. How the whole thing is intended to interact with dynamic link
libraries?.
On 7/22/12, Philippe Sigaud <philippe.sigaud@gmail.com> wrote:
> 2) Why classes, as opposed to structs?
I think either way you'd need reference semantics. For example maybe
you're doing code-generation at compile-time but you need to rename a
class name in a typeinfo returned by std.reflection before doing any
processign. With reference semantics you only have to change one class
and all other types which refer to such a class will have access to
the new name.
You could use structs as well, and actually I use structs in my
codegenerator (with a similar layout to what Andrei posted). Each
struct (e.g. Class/Function) stores Symbols, which are structs with an
ID and a Type. I can look up each symbol in a SymTable which actually
holds the structures with data. So a Symbol is like a fake pointer,
and the SymTable would be the memory. I originally planned to use
classes but some serialization frameworks didn't work with those so I
settled using structs and a bit of template mixin magic instead.
All of this std.reflection talk is quite exciting actually. If you had
AST information about your entire D library you could do some really
cool things. You could make a better documentation generator than
ddoc, or export the AST into a file for a code-completion plugin, or
create a wrapper C library which enables other languages to use your D
library.
"Kagamin" <spam@here.lot> writes:
>.
That's what you'd do in a language that doesn't have something like D's
Object.factory(). In D's case, however, you'd have the "factory method"
be the ctor.
That's all hypothetical, however, since there's no D ABI for shared
objects yet...
--
The volume of a pizza of thickness a and radius z can be described by
the following formula:
pi zz a
On Sun, Jul 22, 2012 at 4:28 PM, Andrei Alexandrescu
<SeeWebsiteForEmail@erdani.org> wrote:
> std.reflection could become the lynchpin for dynamic library use; once the
> library is loaded (with dlopen or such), the client needs to call
> getModuleInfo() (a C function that can be found with dlsym()) and then get
> access to pointers to functions necessary for doing all other work. (Note
> that my examples don't yet include pointers to executable code yet.)
I wouldn't know. I have no experience with dlsym().
>> 2) Why classes, as opposed to structs? Would inheritance/hierarchies
>> play a role there? Or reference semantics?
>> Note that between structs, classes, templates and modules there is a
>> lot of redundancy. A possibility could be to have an AggregateInfo
>> base class.
>
>
> Initially I used struct, but then I figured reference semantics are more
> natural for storing cross-entity information, as you indeed took advantage
> of in your TemplateInfo.
I realized a few minutes after posting I answered my own question:
because this is a self-referencing structure (a tree, a graph), which
are easier to code with classes than structs.
>> 4) How would that allows queries like "Here is class C, give me all
>> its available subclasses."? Hmm, wait, I get it: extract classes from
>> the module, and recursively from imported modules. From these classes,
>> extract the parent classes and so on, until the search ranged over the
>> whole inheritance tree. I guess inheritance info could be standard
>> enough for std.reflection to provide such a search.
>
>
> Something like that. Note that such a query is not particularly OO-ish,
> because getting a class' cone (totality of subclasses) works against the
> modularity that inheritance is meant for. I don't think we should make
> getting class cones particularly easy.
Right. Since people here asked this question, I thought that was a
common request in OO. I'm more a structs and mixins guy, myself.
>> 5) The compiler can emit JSON output giving a partial view of the same
>> information. As a long-term goal, I suggest these infos should be
>> compatible somehow. The JSON output should be enriched, and we should
>> ascertain that using std.json to read this kind of
>> automatically-generated information should give std.reflection infos
>> back.
>
>
> Yes, that's a great connection that Walter and I discussed a bit.
Good to know.
>> 6) Is really all the necessary info available through std.traits and
>> __traits? Imported modules and their subtilities (renamed functions,
>> etc) seem out of reach, no?
>
>
> We'll need indeed to enhance __traits with what's needed. Much of the point
> of std.reflection is to determine exactly what's there and what's needed.
> And that starts with the data structures design (the algorithmic aspects are
> minor).
+1 for enhancing __traits locally.
- having __traits(allMembers, xxx) work on simple module names, and
not only on qualified package.module names
- having a way to get imports and a way to know whether they are
static / renaming import
As for the data structures, other people's designs allured to in this
thread seem similar to your proposal.
I have two other questions:
About functions: should they be subject to reflection also, or not?
They have no fields, inner functions are totally interned, etc. All a
user need is the 'interface', right? (name, and everything that's in
the type: return type, parameters, purity, etc)
About imports, what about inner imports, now that they are authorized
in almost any scope? My gut feeling right now is that a user does not
care if class C internally import std.algorithm in one of its methods,
but I could be wrong.
>> 7) I know your examples are not complete, but don't forget aliases and
>> symbols, and module-level values. Since these can be of any type, I'm
>> not sure how they are managed in your scheme. I mean, you cannot have
>> IntInfo[], DoubleInfo[], ...
>
>
> I sort of eschewed part of that by using strings for types.
I see. They can be managed like fields in an aggregate (struct /
classes), as there are many similarities between D modules and classes
/ structs.
class ModuleInfo {
@property:
...
FieldInfo[] data; // also used in StructInfo and ClassInfo
}
class FieldInfo {
@property:
string name();
bool isStatic();
Protection protection();
string type();
}
As long as this info is available at CT, FieldInfo.type can be
mixed-in and used in code.
what I'm not sure I get in your design is why some informations are
encoded in their own structure, like Protection above (the code is
copy-pasted from yours, I'd guess Protection is an enumeration), and
then some others are encoded as strings (types). Is that because the
values Protection can take are known in advance (and finite)?
I wondered whether a design like this could be interesting?:
abstract class FieldInfo {}
class Field(T) : FieldInfo {
@property:
string name();
bool isStatic();
Protection protection();
alias T Type;
}
But that doesn't cut it, AFAICT: different fields can be stored in a
FieldInfo[] array, but the type information is not easier to get,
anyway. So forget it.
This kind of manipulation is why I got interested in fully polymorphic
trees (tuples of tuples...), able to store any value, while keeping
the type information visible. The drastic consequence is to have a
tree type depend on its entire content. But I'm coming from Static
Typing Land here, whereas this introspection stuff is more dynamic.
Anyway, back to gobal values: aliases should be there also. A simple
AliasInfo class?
> Well you're the resident crazy-stuff-during-compilation guy.
Ah! I wish.
I had this wonderful idea of having code be parsed at CT, semantically
analyzed, transformed into some machine code at CT and... , oh wait.
> Did you try your trees during compilation?
Just did. They fail :) Either segmentation fault (core dumped) or an
error telling me class literals cannot be returned from CTFE.
Hmm, this is old code. I'll have a look since in other projects, I can
obtain trees at CT.
Philippe
On 2012-07-21 23:44, Andrei Alexandrescu wrote:
> Walter and I discussed the idea below a long time (years) ago. Most
> likely it's also been discussed in this newsgroup a couple of times.
> Given the state of the compiler back then, back then it seemed like a
> super cool idea that's entirely realizable, it would just take time for
> the compiler to become as capable as needed. Nowadays we're in shape to
> tackle it.
>
> Here "it" is.
>
> Back when runtime reflection was being discussed, my response was "let's
> focus on compile-time reflection, and then we can do run-time reflection
> on demand as a library". Though this might sound sensible, I initially
> didn't have a design. Now here's what we can do.
I've been waiting for this :)
--
/Jacob Carlborg
On 2012-07-22 02:16, Kapps wrote:
> I agree with most things proposed, however I am not a fan of the idea of
> mixing in runtime reflection info. Many times, you want reflection info
> from a type that is not your own, and thus I believe reflection should
> be generated by specifying a type. More importantly, a method should
> exist for recursively generating reflection info.
I agree.
--
/Jacob Carlborg
On Sun, Jul 22, 2012 at 5:10 PM, Max Samukha <maxsamukha@gmail.com> wrote:
> The language does not allow you to use CTFE parameter values as arguments to
> __traits/templates. Therefore, to be able to build meta-objects at
> compile-time, you would have to:
>
> static info = getModuleInfo!"std.algorithm";
Maybe I don't get your comment, but AFAICT, the language does allow
you to use CTFE parameters values as arguments to templates:
template Twice(double d)
{
enum Twice = d * 2;
}
double foo(double d)
{
return d+1.0;
}
void main()
{
enum t = Twice!(foo(1.0));
pragma(msg, t);
}
On 2012-07-22 14:04, deadalnix wrote:
> I'd expect from std.reflection that it is able to reflect recursively
> from the marked starting point.
I really hope so.
--
/Jacob Carlborg
On 2012-07-22 06:48, Andrei Alexandrescu wrote:
> P.S. What this thing wit quoting a long message to make a 1-line point?
> Is that a thing?
It's the new hip thing :)
--
/Jacob Carlborg | http://forum.dlang.org/thread/juf7sk$16rl$1@digitalmars.com?page=3 | CC-MAIN-2014-35 | refinedweb | 1,995 | 57.37 |
1. What is casting?
Casting is the process of converting one data type to another in a programmng language. Casting may be implict or explict.
2. Why casting is required in a programming language?
It is just due to user only. At runtime, what the value user inputs may not be in a format required by the program. For example, if you ask the user to enter the cost of mangoes, he enters 50. 50 is an integer value and user is very correct in his concept. But your program requires it as a double value because sometimes the mangoes value can be 50.5 rupees also. Then what to do? Simply convert int to double as 50 to 50.0 and use in the subsequent code.
3. Okay sir, then what is parsing in Java?
Casting (either implicit or explict) is the process of converting one data type to another. But in all programming languages, sometimes a string value is required to convert into data type like int. In Java, String is class. Conversion of string to data type cannot be done with simple casting because string is an object and int is a data type; both are incompatible types. Conversion of string to data type requires special code (not casting) known as parsing. Conversion of string to data type is known as parsing because methods of type parseXXX() are used.
4. By the by, why parsing is required in a programming language?
It is all due to user's input only. Programmer knows very well what data type should taken to fit the value. User may give an int value to a running Java program, but the program returns it as a string. Let us see some cases.
5. Give one example on parsing where string is converted to double data type using parseDouble() method?
public class Conversions { public static void main(String args[]) { String str = "10.5"; System.out.println("10.5 in string form: " + str); // prints 10.5; Printing is no problem. 10.5 is in string form // System.out.println(str * str); // raises compilation error as arithmetic operation are possibble on strings double x = Double.parseDouble(str); System.out.println("10.5 in double form: " + x); System.out.println("Square of double: " + x*x); // prints 110.25 } }
Output screenshot on parseDouble()
parseDouble() is a static method of Double class that converts string to data type double.
Integer and Double are known as wrapper classes.
View all for 65 types of Data type Conversions | https://way2java.com/java-lang/convert-string-to-double-in-java-parse-double/ | CC-MAIN-2020-40 | refinedweb | 418 | 68.87 |
J.[3] JavaFX has support for desktop computers and web browsers on Microsoft Windows, Linux, and mac, macOS 2.x platform includes the following components: applications for browser or desktop, and using the same tools: JavaFX SDK and the JavaFX Production Suite. This concept makes it possible to share code-base and graphics assets for desktop and mobile applications. Through integration with Java ME, the JavaFX applications have access to capabilities of the underlying handset, such to enable out-of-the-box support of JavaFX on the devices by working with handset manufacturers and mobile operators to preload the JavaFX Mobile runtime on the handsets. JavaFX Mobile running on an Android was demonstrated at JavaOne 2008 and selected partnerships (incl. LG Electronics, Sony Ericsson) were announced at the JavaFX Mobile launch in February, 2009.
JavaFX Script, the scripting component of JavaFX, began life as a project by Chris Oliver called F3.[8].[9]
On December 4, 2008 Sun released JavaFX 1.0.2
JavaFX for mobile development was finally made available as part of the JavaFX 1.1 release (named Franca[9]) announced officially on February 12, 2009.
JavaFX 1.2 (named Marina[9]) was released at JavaOne on June 2, 2009. This release introduced:[10]
JavaFX 1.3 (named Soma[9]) was released on April 22, 2010. This release introduced:[11]
This version was released on August 21, 2010. This release introduced:
This version (named Presidio[9]) was released on October 10, 2011. This release introduced:.[12][13]
On April 27, 2012, Oracle released version 2.1 of JavaFX,[14] which includes the following main features:[15]
On August 14, 2012, Oracle released version 2.2 of JavaFX,[16] which includes the following main features:[17].[18]
JavaFX is now part of the JRE/JDK for Java 8 (released on March 18, 2014) and has the same numbering, i.e., JavaFX 8.[19]
JavaFX 8 adds several new features, including:[20]
JavaFX 9 features are currently centered on extracting some useful private APIs from the JavaFX code to make these APIs public:
Oracle also announced in November 2012 the open sourcing of Decora, a DSL Shader language for JavaFX allowing to generate Shaders for OpenGL and Direct3D.[25]
The following is a rather simple JavaFX-based program. It displays a window (a stage) containing a button.
stage
package javafxtuts;;
public class Javafxtuts extends Application {
@Override
public void start(final FlowPane
root.getChildren().add(button);
// Creating a scene object
final Scene scene = new Scene(root, 300, 250);
// Adding the title to the window (primaryStage)
primaryStage.setTitle("Hello World!");
primaryStage.setScene(scene);
// Show the window(primaryStage)
primaryStage.show();
}
/**
* Main function that opens the "Hello World!" window
*
* @param args the command line arguments
*/
public static void main(final String[] arguments) {
launch(arguments);
}
}
As of March 2014 JavaFX is deployed on Microsoft Windows, OS X, and Desktop Linux.[26] Oracle has an internal port of JavaFX on iOS and Android Linux.[27][28] Support for ARM is available starting with JavaFX 8[1] On February 11, 2013, Richard Bair, chief architect of the Client Java Platform at Oracle, announced that Oracle would open-source the iOS and Android implementations of its JavaFX platform in the next two months.[29][30] Starting with version 8u33 of JDK for ARM, support for JavaFX Embedded has been removed.[31] Support will continue for x86-based architectures.[32]
There are various licenses for the modules that compose the JavaFX runtime:
During development, Sun explained they will roll out their strategy for the JavaFX licensing model for JavaFX first release.[37].[38]
At JavaOne 2011, Oracle Corporation announced that JavaFX 2.0 would become open-source.[39] Since December 2011, Oracle began to open-source the JavaFX code under the GPL+linking exception.[2][40]
In December 2012, new portions of the JavaFX source code have been open-sourced by Oracle:[41]
We’re also going to open source our iOS and Android implementations over the next couple months.
Oracle has announced plans to open source the iOS and Android implementations of its JavaFX UI platform "over the next couple of months", allowing developers to use the technology to write cross-platform applications for those platforms for the first time.
Starting with JDK 8u33, JavaFX Embedded is removed from the ARM bundle and is not supported.
JavaFX continues to be provided as a fully supported part of the Oracle JDK 8 product on x86 platforms (Windows, Linux, Mac).
Sun will continue to engage the OpenJFX community as we release JavaFX products. This fall we will be rolling out our open source strategy for JavaFX technology concurrent with the release of version 1 of JavaFX Desktop
Sun is committed to open standards and open source, and specifications are coming soon(…)There are some dependencies on licensed code that cannot be open sourced. We are working towards decoupling the dependencies so that the non-proprietary portions can be open sourced. Currently the JavaFX compiler, Netbeans JavaFX plugin and Eclipse JavaFX plugin are already being developed in the open source. The scene graph is out in the open. We will put the core runtime out in the open over time..
Hey guys, Just a note to indicate that the UI controls have been open sourced into openjdk
Hello everyone, today we open-sourced another part of JavaFX. Following projects are now part of OpenJFX | http://enc.tfode.com/JavaFX | CC-MAIN-2017-26 | refinedweb | 893 | 54.12 |
Meaningless Words to Useful Phrases in Spark – word2phrase
Introduction to word2phrase
When we communicate, we often know that individual words in the correct placements can change the meaning of what we’re trying to say. Add “very” in front of an adjective and you place more emphasis on the adjective. Add “york” in after the word “new” and you get a location. Throw in “times” after that and now it’s a newspaper.
It follows that when working with data, these meanings should be known. The three separate words “new”, “york”, and “times” are very different than “New York Times” as one phrase. This is where the word2phrase algorithm comes into play.
At its core, word2phrase takes in a sentence of individual words and potentially turns bigrams (two consecutive words) into a phrase by joining the two words together with a symbol (underscore in our case). Whether or not a bigram is turned into a phrase is determined by the training set and parameters set by the user. Note that every two consecutive words are considered, so in a sentence with w1 w2 w3, bigrams would be w1w2, w2w3.
In our word2phrase implementation in Spark (and done similarly in Gensim), there are two distinct steps; a training (estimator) step and application (transform) step.
*For clarity, note that “new york” is a bigram, while “new_york” is a phrase.
Estimator Step
The training step is where we pass in a training set to the word2phrase estimator. The estimator takes this dataset and produces a model using the algorithm. The model is called the transformer, which we pass in datasets that we want to transform, i.e. sentences that with bigrams that we may want to transform to phrases.
In the training set, the dataset is an array of sentences. The algorithm will take these sentences and apply the following formula to give a score to each bigram:
score(wi, wj) = (count(wiwj) – delta) / (count(wi) * count(wj))
where wi and wj are word i and word j, and delta is discounting coefficient that can be set to prevent phrases consisting of infrequent words to be formed. So wiwj is when word j follows word i.
After the score for each bigram is calculated, those above a set threshold (this value can be changed by the user) will be transformed into phrases. The model produces by the estimator step is thus an array of bigrams; the ones that should be turned to phrases.
Transformer Step
The transform step is incredibly simple; pass in any array of sentences to your model and it will search for matching bigrams. All matching bigrams in the array you passed in will then be turned to phrases.
You can repeat these steps to produce trigrams (i.e. three words into a phrase). For example, with “I read the New York Times” may produce “I read the new_york Times” after the first run, but run it again to get “I read the new_york_times”, because in the second run “new_york” is also an individual word now.
Example
First we create our training dataset; it’s a dataframe where the occurrences “new york” and “test drive” appears frequently. (The sentences make no sense as they are randomly generated words. See below for link to full dataframe.)
You can copy/paste this into your spark shell to test it, so long as you have the word2phrase algorithm included (available as a maven package with coordinates com.reputation.spark:word2phrase:1.0.1).
Download the package, create our test dataframe:
spark-shell –packages com.reputation.spark.word2phrase.1.0.1
import org.apache.spark.ml.feature.Word2Phrase
val wordDataFrame = sqlContext.createDataFrame(Seq(
(0, “new york test drive cool york how always learn media new york .”),
(1, “online york new york learn to media cool time .”),
(2, “media play how cool times play .”),
(3, “code to to code york to loaded times media .”),
(4, “play awesome to york .”),
.
.
.
(1099, “work please ideone how awesome times .”),
(1100, “play how play awesome to new york york awesome use new york work please loaded always like .”),
(1101, “learn like I media online new york .”),
(1102, “media follow learn code code there to york times .”),
(1103, “cool use play work please york cool new york how follow .”),
(1104, “awesome how loaded media use us cool new york online code judge ideone like .”),
(1105, “judge media times time ideone new york new york time us fun .”),
(1106, “new york to time there media time fun there new like media time time .”),
(1107, “awesome to new times learn cool code play how to work please to learn to .”),
(1108, “there work please online new york how to play play judge how always work please .”),
(1109, “fun ideone to play loaded like how .”),
(1110, “fun york test drive awesome play times ideone new us media like follow .”)
)).toDF(“label”, “inputWords”)
We set the input and output column names and create the model (the estimator step, represented by the fit(wordDataFrame) function).
scala> val t = new Word2Phrase().setInputCol(“inputWords”).setOutputCol(“out”)
t: org.apache.spark.ml.feature.Word2Phrase = deltathresholdScal_f07fb0d91c1f
scala> val model = t.fit(wordDataFrame)
Here are some of the scores (Table 1) calculated by the algorithm before removing those below the threshold (note all the scores above the threshold are shown here). The default values have delta -> 100, threshold -> 0.00001, and minWords -> 0.
only showing top 10 rows
So our model produces three bigrams that will be searched for in the transform step:
test drive
work please
new york
We then use this model to transform our original dataframe sentences and view the results. Unfortunately you can’t see the entire row in the spark-shell, but in the out column it’s clear that all instances of “new york” and “test drive” have been transformed into “new_york” and “test_drive”.
scala> val bi_gram_data = model.transform(wordDataFrame)
bi_gram_data: org.apache.spark.sql.DataFrame = [label: int, inputWords: string … 1 more field]
scala> bi_gram_data.show()
only showing top 20 rows
The algorithm and test dataset (testSentences.scala) are available at this repository. | http://tech.reputation.com/tag/transformer/ | CC-MAIN-2020-50 | refinedweb | 1,009 | 73.27 |
Zope Cluster Management facilities
bethel.clustermgmt
Table of Contents
Introduction
This package
Add ‘Cluster Health Reporter’ can now be found in the ‘add’ list in the ZMI.
Configuration
The management screen for a cluster health reporter has two sections. The first is the list of nodes, and the second provides an interface for taking nodes offline.
List of Nodes
Enter the list of nodes in the cluster, one per line. This does not need to be the fqdn of the node, but each node does need a unique entry.
Offline Nodes
The list of nodes is represented here with checkboxes. A node is out of offline (out of service) if it’s box is checked. To manually change the service status of an node (putting it online, taking it offline), check or uncheck the box for that node and click “Save Offline Nodes”.
Use for Monitoring
The load balancer should be configured to query the health status object. If the zope node fails, the health status check will return a system error, or return no response at all (hang). The load balancer will then automatically take the node out of service.
Upon recovery the health status checks will succeed, and the load balancer will automatically bring the node back into service.
Load Balancer configuration (varnish)
Configuring Varnish as a load balancer, and leveraging this health reporter is easy. Let’s assume the following:
- there are two nodes in your cluster, node1.example.com:8080 and node2.example.com:8080
- the cluster health reporter is located at /health
Add a director for these two nodes in the varnish VCL file:
director zope random { { .backend = { .host = "node1.example.com"; .port = "8080"; .first_byte_timeout = 30s; } .weight = 1; } { .backend = { .host = "node2.example.com"; .port = "8080"; .first_byte_timeout = 30s; } .weight = 1; } }
A health check is called a “probe” in VCL. Adding a probe to each backend, the VCL now looks like:
director silva23 random { { .backend = { .host = "node1.example.com"; .port = "8080"; .first_byte_timeout = 30s; .probe = { .url = "/health?node=node1"; .timeout = 0.3 s; .window = 8; .threshold = 3; .initial = 3; } } .weight = 1; } { .backend = { .host = "node2.example.com"; .port = "8080"; .first_byte_timeout = 30s; .probe = { .url = "/health?node=node2"; .timeout = 0.3 s; .window = 8; .threshold = 3; .initial = 3; } } .weight = 1; } }
See the varnish configuration for more information.
Use for Deployments
Using a health status object, rather than an arbitrary web page, for the load balancers health check makes it useful for automatic service removal during system deployments.
The node can me marked as ‘out of service’ via the ZMI, or using REST. The REST approach is useful for automated deployment scripts.
Automated deployments
REST API
This object also responds to REST requests to adjust the service status. Using this method, automated deployment scripts (e.g. using fabric) can take nodes out of service before deploying updates.
Access to the REST API calls are protected using the ‘bethel.clustermgmt.rest’ permission. To access the api calls, the request needs to be authenticated as a manager, or as a user in a role granting this permission.
The REST api has two methods.
Get the status of all nodes (HTTP GET):
/path/to/health/++rest++nodestatus
Returns a json-formatted dictionary of all nodes, and their status (either online or offline), like this:
{nodeA: {status: offline}, nodeB: {status: online}}
Alter the status of one or more nodes (HTTP POST):
/path/to/health/++rest++setstatus
POST data instructs the reporter on the new status for the given nodes. Due to infrae.rest’s lack of support for accepting json payloads, the json input is passed in via a POST parameter named “change”. See the unittests for more info.
The input format is the same the the output from ++rest++nodestatus.
Use in Fabric
A simple python function can trigger a status change for a node. This in turn can be converted into a fabric task. The following is the fabric task we use at Bethel for changing the service status of a node:
env.roledefs = { 'prod': ['node1.example.com', 'node2.example.com'], 'dev': ['test-node.example.com'] } env.buildout_root = "/home/zope/silva23/buildout" def alter_service_status(newstatus): #alter the service status of a zope node, #either putting online or offline host = env['host_string'] node = host.split('.')[0] url = ''%host query = {'change': json.dumps({node: {'status': newstatus}}), 'skip-bethel-auth': 1} req = urllib2.Request(url, query) authh = "Basic " + base64.encodestring('%s:%s'%rest_creds)[:-1] req.add_header("Authorization", authh) response = urllib2.urlopen(req, urllib.urlencode(query)) back = ''.join(response.readlines()) return 'OK'
The username and password are read from a protected file when the fabfile is loaded.
This task in turn can be used as a component of a larger automated deployment task (this is the rest of of Bethel’s fabfile):
def buildout(): with prefix("export HOME=/home/zope/"): with cd(env.buildout_root): sudo("hg --debug pull -u"%env, user="zope") sudo("./bin/buildout"%env, user="zope") def restart_apache(): #using the sudo command does not work; it issues the following: # sudo -S -p 'sudo password:' /bin/bash -l -c "/etc/init.d/httpd restart" # which runs a shell executing the command in quotes. Ross was not # able to configure sudo to allow multiple httpd options with # one line, but suggested the run command instead. #sudo("/etc/init.d/httpd restart") run("sudo /etc/init.d/httpd restart") def push_buildout(apache_restart=True): if type(apache_restart) in StringTypes: apache_restart = (apache_restart == 'True') change_status = False if env.host_string in env.roledefs['prod']: change_status = True #take out of service, it takes less time to take out of service than # it does to put back into service if change_status: puts("taking offline; sleeping 20 seconds") alter_service_status('offline') sleep(20) buildout() if apache_restart: restart_apache() #TODO: test some urls, loading up the local ZODB cache before bringing # back in to service #put back into service if change_status: puts("taking online; sleeping 30 seconds") alter_service_status('online') sleep(30)
Adding fabric to your buildout is detailed here:
This fabfile is located in the buildout root. Running an automated deployment of our production environment is simple:
./bin/fab -R prod push_buildout
When using mod_wsgi to serve Zope, a restart of apache is required for change to take effect. If for any reason you’d want to push buildout but not restart apache, pass in False to the restart_apache per-task argument:
./bin/fab -R prod push_buildout:restart_apache=False
The combination of fabric and bethel.clustermgmt has decreased deployment time considerably. It is now one command run in the background, whereas before it was a 5-10 minute long repetitive rinse/repeat cycle for each node in the cluster.
bethel.clustermgmt changlog
bethel.clustermgmt 1.0 (2012-04-30)
- first release of cluster mgmt utilities
Download Files
Download the file for your platform. If you're not sure which to choose, learn more about installing packages. | https://pypi.org/project/bethel.clustermgmt/ | CC-MAIN-2017-39 | refinedweb | 1,127 | 55.34 |
Board index » perl modules
All times are UTC.*-*-*.com/
for the program itself.
-- Cheers Ron & Pen Savage.*-*-*.com/
Specifically, given this document:
<wrapper> <doc1>Contents of Document One</doc1> <doc2><b>Contents of Document Two</b></doc2> <doc3><c><chapter>Contents of 3.1</chapter><chapter>Contents of 3.2</chapter>Contents of Document Three</c></doc3> </wrapper>
xmlSplitter.pl outputs these 3 lines:
doc1 => Contents of Document One
doc2 => <b>Contents of Document Two</b>
doc3 => <c><chapter>Contents of 3.1</chapter><chapter>Contents of 3.2</chapter>Contents of Document Three</c>
Note: Date::Manip, Parse::Yapp and XML::DOM are prerequisites for XML::XQL.
Also, here are 2 warnings, copied from the source of xmlSplitter.pl:
# Warning 1: # This program - xmlSplitter.pl - was developed under MS Windows NT. # I downloaded XML-XQL-0.61.tar.gz from CPAN. Attempting to install # it produced many error messages. This is because the author has used # a Unix command, tput, at lines 522, 523, 525 and 526 in XQL.pm. # You can either ignore these errors, or patch the code before installation. # Patching requires you to convert all (4) lines which look like: # $x = `tput x` || "z"; or $x = (`tput x` . `tput y`) || "z" # into # $x = "z"; # # Warning 2: # Near the end of this program is a call to xql_xmlString. # I determined that this was needed by examining the example # program samples/xql.pl, which is in XML-XQL-0.61.tar.gz. # In samples/xql.pl, you'll find a sub transform, which calls xql_xmlString # under certain circumstances. By experimention, I have ascertained that # calling solve (see below) on _my_ data returns objects of a certain type, # and that that type provides the method xql_xmlString. Thus xql_xmlString can # be safely called (apparently) on the output from solve, without needing all # of sub transform in this program. YMMV.
-- Cheers Ron & Pen Savage
Ron> This tutorial is an example of how to use XML::XQL to split 1 XML document Ron> into several.
If I'm not mistaken, XML::XQL has been abandoned in favor of the much more standard XPath, implemented in the CPAN as XML::XPath.
Perhaps you could do a tutorial on something other than dead-end technologies. :)
-- Randal L. Schwartz - Stonehenge Consulting Services, Inc. - +1 503 777 0095
Perl/Unix/security consulting, Technical writing, Comedy, etc. etc. See PerlTraining.Stonehenge.com for onsite and open-enrollment Perl training!
I don't recall hearing Enno say he was dropping XML::XQL, although it's plain that Matt's putting in a lot more development effort on XML::XPath.
As for XQL itself, there are several implementations on various platforms, so I'd say it's still viable, at least until the W3C Query WG issues a WD, which probably won't happen until the religious war over relative URI references as namespace names gets resolved.
1. ANNOUNCE: Perl tut # 22 - Using XML::XPath - xpathSplitter[12].pl
2. Can Template::Toolkit and XML::XQL be used on Windows NT
3. ANNOUNCE: Tut # 35: An admin tool for MySQL: myadmin.pl
4. ANNOUNCE: Net::LDAPapi v1.21 Available
5. ANNOUNCE: tagged-0.21 released
6. ANNOUNCE: Parse-Yapp-0.21, Christmas Release
7. XML::XQL
8. ANN: Perl tut # 5: XML::DOM etc
9. XML::XQL
10. ANN: Perl tut # 18: doctype2xsl.pl
11. ANNOUNCE: Tutorial # 32: Showcasing Image::Magick (part 2): xml-2-image.pl
12. ANNOUNCE: Perl tut # 17 - MS Word doc to text, para by para | http://computer-programming-forum.com/52-perl-modules/dd6efdbce626182f.htm | CC-MAIN-2020-34 | refinedweb | 580 | 60.01 |
flutter_mopub 0.1.0
flutter_mopub: ^0.1.0 copied to clipboard
A new Flutter plugin that uses native platform views to show mopub rewarded video ads.
Use this package as a library
Depend on it
Run this command:
With Flutter:
$ flutter pub pub add flutter_mopub
This will add a line like this to your package's pubspec.yaml (and run an implicit
dart pub get):
dependencies: flutter_mopub: ^0.1.0
Alternatively, your editor might support
flutter pub get.
Check the docs for your editor to learn more.
Import it
Now in your Dart code, you can use:
import 'package:flutter_mopub/flutter_mopub.dart'; | https://pub.dev/packages/flutter_mopub/install | CC-MAIN-2021-17 | refinedweb | 102 | 58.28 |
For Pythonistas this could also be called "why flat is better than nested"
If you want to write clear and easy to understand software, make sure it has a single success path.
A 'single success path' means a few things. First, it means that any given function/method/procedure should have a single clear purpose. One of the ways to identify if you are doing this correctly is by trying to name that code block. If you can't do it with a simple name, you've got a smell. Being able to easily assign a clear name to a function means that it also has a single clear purpose and functions with single clear purposes are easy to understand.
The second thing it means is that the success path for a function should be clear in the flat most commands in it.
A note on flatness. A nested command is a block of code that is under a clause that visually moves the start of the code away from the left margin of the text editor (given that you are using good practices of indentation).
if/elseand
try/catchare examples of it. Flat code is the opposite of nested code, it's code that is near to the left margin of the editor.
Success path might mean different things in different parts of a codebase: sometimes it is the default behavior of a function, in others, it is the most likely thing to happen, or simply the path with no digression from the main purpose of the code. For instance, when you are writing a
divide(x,y) function that receives inputs from the user, although the purpose of the code is to do
x / y, you will need to check that
y is not
0 before doing the calculation. Checking the inputs is fundamental for the correct functioning of the code but it's not the purpose of
divide. By definition, you won't be able to have a single flat success path unless there's only one purpose for a function. One depend on the other.
Let's see this in practice, here is a function that transfers money from one user to another, returns
true if it succeeds or return
false if it does not.
def transfer_money(from_user, to_user, amount): if amount > 0: if from_user.balance >= amount: from_user.balance = from_user.balance - amount to_user.balance = to_user.balance + amount notify_success(from_user, amount) return True else: notify_insuficient_funds(from_user) return False else: return False
This is a mess. It is not possible to understand what this function does from a quick look at it. This happens because of a couple things:
if/elseclauses and nesting makes it hard to identify which is the main flow, the main thing this piece of code is trying to do.
- Unless you read everything and understand what the function does it's not possible to know what are the return values for a success or fail execution.
Let's now refactor:
def transfer_money(from_user, to_user, amount): if amount <= 0: return False if from_user.balance < amount: notify_insuficient_funds(from_user) return False from_user.balance = from_user.balance - amount to_user.balance = to_user.balance + amount notify_success(from_user, amount) return True
Notice that despite looking a lot clearer, the refactored code has the exact same cyclomatic complexity of the first. Also worth to mention that measuring cyclomatic complexity is a precise mathematical concept that may indicate your code needs refactoring, flatness on the other hand relates to the semantics of it and is therefore more of a subjective evaluation.
The main change between the first and second pieces of code we showed is that if you read it ignoring anything that is nested you will end up with the main flow of the program:
def transfer_money(from_user, to_user, amount): from_user.balance = from_user.balance - amount to_user.balance = to_user.balance + amount notify_success(from_user, amount) return True
This is the success path. When one picks a new code to read it's natural to first try to understand flat most parts of it and only then inspect nested things which are naturally expected to represent digressions from the main data flow (special cases or error handling flows). Replacing
if/elses with Guard Clauses is in general one of the best ways to expose the success path. We show how we combine guard clauses and decorators for some interesting use cases in this other blog post.
Not being able to achieve that kind of flatness is a sign that your code is doing too much and may be a good idea to separate it into multiple functions.
Thanks to Cuducos, Carriço and Anderson for reviewing this post. | http://pythonic.zoomquiet.top/data/20180121212406/index.html | CC-MAIN-2019-22 | refinedweb | 772 | 59.84 |
Remote Profileing Cloud Services with dotTrace
The Cloud Zone is brought to you in partnership with Mendix. Better understand the aPaaS landscape and how the right platform can accelerate your software delivery cadence and capacity with the Gartner 2015 Magic Quadrant for Enterprise Application Platform as a Service.
Here’s another cross-post from our JetBrains .NET blog. It’s focused around dotTrace but there are a lot of tips and tricks around Windows Azure Cloud Services in it as well, especially around working with the load balancer. Enjoy the read!
With dotTrace Performance, we can profile applications running on our local computer as well as on remote machines. The latter can be very useful when some performance problems only occur on the staging server (or even worse: only in production). And what if that remote server is a Windows Azure Cloud Service?
Note: in this post we’ll be exploring how to setup a Windows Azure Cloud Service for remote profiling using dotTrace, the “platform-as-a-service” side of Windows Azure. If you are working with regular virtual machines (“infrastructure-as-a-service”), the only thing you have to do is open up any port in the loadbalancer, redirect it to the machine’s port 9000 (dotTrace’s default) and follow the regular remote profiling workflow.
Preparing your Windows Azure Cloud Service for remote profiling
Since we don’t have system administrators at hand when working with cloud services, we have to do some of their work ourselves. The most important piece of work is making sure the load balancer in Windows Azure lets dotTrace’s traffic through to the server instance we want to profile.
We can do this by adding an InstanceInput endpoint type in the web- or worker role’s configuration:
By default, the Windows Azure load balancer uses a round-robin approach in routing traffic to role instances. In essence every request gets routed to a random instance. When profiling later on, we want to target a specific machine. And that’s what the InstanceInput endpoint allows us to do: it opens up a range of ports on the load balancer and forwards traffic to a local port. In the example above, we’re opening ports 9000-9019 in the load balancer and forward them to port 9000 on the server. If we want to connect to a specific instance, we can use a port number from this range. Port 9000 will connect to port 9000 on server instance 0. Port 9001 will connect to port 9000 on role instance 1 and so on.
When deploying, make sure to enable remote desktop for the role as well. This will allow us to connect to a specific machine and start dotTrace’s remote agent there.
That’s it. Whenever we want to start remote profiling on a specific role instance, we can now connect to the machine directly.
Starting a remote profiling session with a specific instance
And then that moment is there: we need to profile production!
First of all, we want to open a remote desktop connection to one of our role instances. In the Windows Azure management portal, we can connect to a specific instance by selecting it and clicking the Connect button. Save the file that’s being downloaded somewhere on your system: we need to change it before connecting.
The reason for saving and not immediately opening the .rdp file is that we have to copy the dotTrace Remote Agent to the machine. In order to do that we want to enable access to our local drives. Right-click the downloaded .rdp file and select Edit from the context menu. Under the Local Resources tab, check the Drives option to allow access to our local filesystem.
Save the changes and connect to the remote machine. We can now copy the dotTrace Remote Agent to the role instance by copying all files from our local dotTrace installation. The Remote Agent can be found in C:\Program Files (x86)\JetBrains\dotTrace\v5.3\Bin\Remote, but since the machine in Windows Azure has no clue about that path we have to specify \\tsclient\C\Program Files (x86)\JetBrains\dotTrace\v5.3\Bin\Remote instead.
From the copied folder, launch the RemoteAgent.exe. A console window similar to the one below will appear:
Not there yet: we did open the load balancer in Windows Azure to allow traffic to flow to our machine, but the machine’s own firewall will be blocking our incoming connection. To solve this, configure Windows Firewall to allow access on port 9000. A one-liner which can be run in a command prompt would be the following:
netsh advfirewall firewall add rule name="Profiler" dir=in action=allow protocol=TCP localport=9000
Since we’ve opened ports 9000 thru 9019 in the Windows Azure load balancer and every role instance gets their own port number from that range, we can now connect to the machine using dotTrace. We’ve connected to instance 1, which means we have to connect to port 9001 in dotTrace’s Attach to Process window. The Remote Agent URL will look likehttp://<yourservice>.cloudapp.net:PORT/RemoteAgent/AgentService.asmx.
Next, we can select the process we want to do performance tracing on. I’ve deployed a web application so I’ll be connecting to IIS’s w3wp.exe.
We can now user our application and try reproducing performance issues. Once we feel we have enough data, the Get Snapshot button will download all required data from the server for local inspection.
We can now perform our performance analysis tasks and hunt for performance issues. We can analyze the snapshot data just as if we had recorded the snapshot locally. After determining the root cause and deploying a fix, we can repeat the process to collect another snapshot and verify that you have resolved the performance problem. Note that all steps in this post should be executed again in the next profiling session: Windows Azure’s Cloud Service machines are stateless and will probably discard everything we’ve done with them so far.
Bonus tip: get the instance being profiled out of the load balancer
Since we are profiling a production application, we may get in the way of our users by collecting profiling data. Another issue we have is that our own test data and our live user’s data will show up in the performance snapshot. And if we’re running a lot of instances, not every action we do in the application will be performed by the role instance we’ve connected to because of Windows Azure’s round-robin load balancing.
Ideally we want to temporarily remove the role instance we’re profiling from the load balancer to overcome these issues.The good news is: we can do this! The only thing we have to do is add a small piece of code in our WebRole.cs or WorkerRole.cs class.
public class WebRole : RoleEntryPoint { public override bool OnStart() { // For information on handling configuration changes // see the MSDN topic at. RoleEnvironment.StatusCheck += (sender, args) => { if (File.Exists("C:\\Config\\profiling.txt")) { args.SetBusy(); } }; return base.OnStart(); } }
Essentially what we’re doing here is capturing the load balancer’s probes to see if our node is still healthy. We can choose to respond to the load balancer that our current instance is busy and should not receive any new requests. In the example code above we’re checking if the file C:\Config\profiling.txt exists. If it does, we respond the load balancer with a busy status.
When we start profiling, we can now create the C:\Config\profiling.txt file to take the instance we’re profiling out of the server pool. After about a minute, the management portal will report the instance is “Busy”.
The best thing is we can still attach to the instance-specific endpoint and attach dotTrace to this instance. Just keep in mind that using the application should now happen in the remote desktop session we opened earlier, since we no longer have the current machine available from the Internet.
Once finished, we can simply remove the C:\Config\profiling.txt file and Windows Azure will add the machine back to the server pool. Don't forget this as otherwise you'll be paying for the machine without being able to serve the application from it. Reimaging the machine will also add it to the pool }} | https://dzone.com/articles/remote-profileing-cloud?mz=62447-cloud | CC-MAIN-2015-48 | refinedweb | 1,411 | 61.87 |
Module::Optional - Breaking module dependency chains
use Bar::Dummy qw(); use Module::Optional Bar;
This module provides a way of using a module which may or may not be installed on the target machine. If the module is available it behaves as a straight use. If the module is not available, subs are repointed to their equivalents in a dummy namespace.
Suppose you are the developer of module
Foo, which uses functionality from the highly controversial module
Bar. You actually quite like
Bar, and want to reuse its functionality in your
Foo module. But, many people will refuse to install
Foo as it needs
Bar. Maybe
Bar is failing tests or is misbehaving on some platforms.
Making
Bar an optional module will allow users to run
Foo that don't have
Bar installed. For Module::Build users, this involves changing the status of the
Bar dependency from
requires to
recommends.
To use this module, you need to set up a namespace
Bar::Dummy. The recommended way of doing this is to ship lib/Bar/Dummy.pm with your module. This could be shipped as a standalone module. A dummy module for
Params::Validate is shipped with Module::Optional, as this was the original motivation for the module. If there are other common candidates for dummying, petition me, and I'll include them in the Module::Optional distribution.
Place the lines of code in the following order:
use Bar::Dummy qw(); use Module::Optional qw(Bar quux wibble wobble);
Always set up the dummy module first, but don't import anything - this is to avoid warnings about redefined subroutines if the real Bar is installed on the target machine. Module::Optional will do the importing: quux wibble and wobble from the real Bar if it exists, or from Bar::Dummy if it doesn't.
If you need a version of the module or later, this can be done thus:
use Bar::Dummy qw(); use Module::Optional qw(Bar 0.07 quux wibble wobble);
If version 0.07 or later of Bar is not available, the dummy is used.
You will probably be developing your module on a platform that does have Bar installed (I hope). However, you need to be able to tell what happens on systems without Bar. To do this, run the following (example is Unix):
MODULE_OPTIONAL_SKIP=1 make test
You also want to do this in tests for the dummy module that you are providing. (You are providing tests for this module?) This can easily be done with a begin block at the top of the test:
BEGIN { local $ENV{MODULE_OPTIONAL_SKIP} = 1; use Module::Optional qw(Params::Validate); }
You provide a namespace suffixed with ::Dummy containing subs corresponding to all the subs and method calls for the optional module. You should also provide the same exports as the module itself performs.
Adhere strictly to any prototypes in the optional module.
An example of a dummy module is Params::Validate::Dummy, provided in this distribution.
Module::Optional performs two types of redirection for the missing module. Firstly via @ISA inheritance - Foo::Bar inherits from Foo::Bar::Dummy.
Secondly, an AUTOLOAD method is added to Foo::Bar, which will catch calls to subs in this namespace.
Please report bugs to rt.cpan.org by posting to bugs-module-optional@rt.cpan.org or visiting.
Ivor Williams ivorw-mod-opt at xemaps.com
This program is free software; you can redistribute it and/or modify it under the same terms as Perl itself.
The full text of the license can be found in the LICENSE file included with this module.
Test::MockModule, Module::Pluggable, Module::Build. | http://search.cpan.org/dist/Module-Optional/lib/Module/Optional.pm | CC-MAIN-2014-15 | refinedweb | 605 | 56.66 |
import "github.com/tendermint/tendermint/version"
const ( // TMCoreSemVer is the current version of Tendermint Core. // It's the Semantic Version of the software. // Must be a string because scripts like dist.sh read this file. // XXX: Don't change the name of this variable or you will break // automation :) TMCoreSemVer = "0.34.0" // ABCISemVer is the semantic version of the ABCI library ABCISemVer = "0.17.0" ABCIVersion = ABCISemVer )
var ( // GitCommit is the current HEAD set using ldflags. GitCommit string // Version is the built softwares version. Version = TMCoreSemVer )
var ( // P2PProtocol versions all p2p behaviour and msgs. // This includes proposer selection. P2PProtocol uint64 = 8 // BlockProtocol versions all block data structures and processing. // This includes validity of blocks and state updates. BlockProtocol uint64 = 11 )
Package version is imported by 69 packages. Updated 2020-07-31. Refresh now. Tools for package owners. | https://godoc.org/github.com/tendermint/tendermint/version | CC-MAIN-2020-34 | refinedweb | 138 | 53.98 |
Serverless is a cloud-computing execution model in which the cloud provider is responsible for executing a piece of code by dynamically allocating resources to run the code when needed. With it, we can get reduced operation cost and development time. It allows us to focus on our code to provide business value to the users without worrying about building and maintaining servers. It takes a couple of steps to configure and integrate these services with our code, and AWS Amplify was built to make it easier to build serverless applications on AWS. It provides tools to create and configure services with a few commands, and library components to easily interact with those services from our code.
This article is part of a series where I show you how to build serverless applications in React and AWS Amplify. In the first post we set up our development environment, an Amplify project, and a React project. In the second post we created backend services running on different AWS services and built a React app to perform CRUD operation, thereby interacting with the backend services that was created. In this post, we will add analytics and usage tracking to the application we built in the previous posts.
Set Up Analytics Backend
In many applications, it is required to capture application usage data so the business can gain insight into how customers interact with the app. We will use Amazon Pinpoint to track usage metric for our React application. Amazon Pinpoint has the following types of event:
- Monetization events. This event type is used to report the revenue that's generated by your application and the number of items that are purchased by users.
- Session events. They track usage and indicate when and how often users open and close your app.
- Authentication events. This shows how frequently users authenticate with your application. Sign-ins, Sign-ups, and Authentication failures are tracked in this category.
- Custom events. This type of events represents non-standard events that you define by assigning a custom event type. You can add custom attributes and metrics to a custom event.
To add Pinpoint to our project, open the command to the root directory of your React project and follow the instruction below.
- Run the command
amplify add analytics.
- You'll be prompted for a resource name for this service. Enter
todosPinpointand press the Enter key.
- You should get a prompt asking if you want to allow unauthenticated users to send analytic event. Enter
nand press Enter.
The command we ran created the analytics resource and updated the authentication resource locally. We need to provision them in the cloud. Run the command
amplify push to create the service in the cloud. Once completed, it pulls the service information and updates src/aws-exports.js. If you open it, you'll find we have properties
aws_mobile_analytics_app_id and
aws_mobile_analytics_app_region. This information will be used to configure the Amplify library.
Add Analytics To The App
With the Pinpoint service created in the cloud, we now need to add code to send usage data to it. In src/App.js line 7, which reads
Amplify.configure(aws_exports); will set up the library with data from aws-export.js. Since aws-export.js contains
aws_mobile_analytics_app_id, it'll configure analytics as well as other services whose information is in it. By default, the Amplify library will track user session and authentication events. No need to add extra code. If you start the app, sign-in or sign-up users, you'll get these events data sent to the cloud. We can also record custom events. Let's record a custom event for when an item is deleted. Open src/App.js and update line 4 to import the Analytics module
import Amplify, { API, Analytics } from "aws-amplify";
Update the
delete() function to include the following code statement.
Analytics.record({ name: "delete", attributes: { id } });
This will send a
delete event each time this function is called. It could be used to track how often items get deleted. We could also track which items get viewed the most by recording an event each time we go to the Details view. Add the following code to the
loadDetailsPage() function:
Analytics.record({ name: "detailsView", attributes: { title: response.title } });
Open the app in a browser and select different items to move through details view for different items you have. Now login to AWS management console and go to the Pinpoint dashboard to view analytics report for the application.
That's A Wrap
You can integrate Amazon Pinpoint into your web apps to capture usage data to provide you with insight into how customers interact with your apps. A business can use this data to analyse customer behavior, including their engagement, demographics and purchase activity. I showed you how to create a Pinpoint service using the Amplify CLI, and then integrated it in the React application to send custom events to the Pinpoint service. There's more we can do with the analytics module in the Amplify JavaScript library, like automatic recording of page views and event. See the docs for more info on the Analytics API.
Further Reading
Also published on my blog
Posted on by:
Peter Mbanugo
Software Developer with experience building web apps and services in JS and C#. I'm passionate about building quality software, with interest area around Offline First and Software Architecture.
Discussion | https://dev.to/pmbanugo/going-serverless-with-react-and-aws-amplify-part-3-tracking-app-usage-5f3e | CC-MAIN-2020-40 | refinedweb | 897 | 55.24 |
Re: ANN: threads enabled Itcl
- From: "Wojciech Kocjan" <myfirstname@xxxxxxxxxx>
- Date: Mon, 17 Jul 2006 21:18:06 +0200
On Fri, 14 Jul 2006 08:40:47 +0200, Eckhard Lehmann <ecky-l@xxxxxx> wrote:
Hi all,
I've written a patch for Itcl that makes it possible to use Itcl
objects in multiple threads. It is possible to create an object e.g. in
the main thread (using the -ts/-threadshared option) and call methods
on it in a different thread.
Sounds really interesting! Especially when using things like AOLserver or writing other multithreaded servers.
The patch for Itcl and also one for Itk can be downloaded at, I wrapped up the sources and created
binaries for Linux and windows as well, and created a Wiki entry at.
I'm wondering about one thing, though. The main drawback should be that you need to either deepcopy the Tcl_Obj or convert to/from strings. Both seem very time consuming to me.
From what I remember with ns_cache from AOLserver, Tcl_Obj cannot be shared between threads. So if this is true, your code will probably fail when exiting Tcl or deleting the object, probably in a more complex scenario.
Also, how does it handle the following code:
namespace eval ::myns {}
A -threadshared ::myns::obj0
I would assume you may need to create the parent namespace in each other thread.
--
WK
.
- References:
- ANN: threads enabled Itcl
- From: Eckhard Lehmann
- Prev by Date: ANNOUNCE: Windows Inspection Tool Set (WiTS) 1.0
- Next by Date: Re: Compile TCL8.4.13 using VS2005?
- Previous by thread: ANN: threads enabled Itcl
- Next by thread: tk_table in Unix Environment
- Index(es): | http://coding.derkeiler.com/Archive/Tcl/comp.lang.tcl/2006-07/msg00453.html | CC-MAIN-2015-11 | refinedweb | 273 | 72.16 |
If you think that static rendering is limited to generic, public content that is the same for every user of your website, you should definitely read this article.
Segmented Rendering is a new pattern for the Jamstack that lets you personalize content statically, without any sort of client-side rendering or per-request Server-Side Rendering. There are many use cases: personalization, internationalization, theming, multi-tenancy, A/B tests…
Let’s focus on a scenario very useful for blog owners: handling paid content.
Congratulations On Your New Job
Wow, you just got promoted! You are now “Head of Performance” at Repairing Magazine, the most serious competitor to Smashing Magazine. Repairing Magazine has a very peculiar business model. The witty jokes in each article are only visible to paid users.
Why did the programmer cross the road?
I bet you’d pay to know the answer.
Your job for today is to implement this feature with the best possible performances. Let’s see how you can do that. Hint: we are going to introduce a new pattern named “Segmented Rendering.”
The Many Ways To Render A Web Page With Modern JavaScript Frameworks
Next.js popularity stems from its mastery of the “Triforce of Rendering:” the ability to combine client-side rendering, per-request server-rendering and static rendering in a single framework.
CSR, SSR, SSG… Let’s Clarify What They Are
Repairing Magazine user interface relies on a modern JavaScript library, React. Like other similar UI libraries, React provides two ways of rendering content: client-side and server-side.
Client-Side Rendering (CSR) happens in the user’s browser. In the past, we would have used jQuery to do CSR.
Server-side rendering happens on your own server, either at request-time (SSR) or at build-time (static or SSG). SSR and SSG also exist outside of the JavaScript ecosystem. Think PHP or Jekyll, for instance.
Let’s see how those patterns apply to our use case.
CSR: The Ugly Loader Problem
Client-Side Rendering (CSR) would use JavaScript in the browser to add witty jokes after the page is loaded. We can use “fetch” to get the jokes content, and then insert them in the DOM.
// server.js const wittyJoke = "Why did the programmer cross the road? There was something he wanted to C."; app.get("/api/witty-joke", (req) => { if (isPaidUser(req)) { return { wittyJoke }; } else { return { wittyJoke: null }; } }); // client.js const ClientArticle = () => { const { wittyJoke, loadingJoke } = customFetch("/api/witty-jokes"); // THIS I DON’T LIKE... if (loadingJoke) return <p>Ugly loader</p>; return ( <p> {wittyJoke ? wittyJoke : "You have to pay to see jokes. Humor is a serious business."} </p> ); };
CSR involves redundant client-side computations and a lot of ugly loaders.
It works, but is it the best approach? Your server will have to serve witty jokes for each reader. If anything makes the JavaScript code fail, the paid user won’t have their dose of fun and might get angry. If users have a slow network or a slow computer, they will see an ugly loader while their joke is being downloaded. Remember that most visitors browse via a mobile device!
This problem only gets worse as the number of API calls increases. Remember that a browser can only run a handful of requests in parallel (usually 6 per server/proxy). Server-side rendering is not subject to this limitation and will be faster when it comes to fetching data from your own internal services.
SSR Per Request: Bitten By The First Byte
Per-request Server-Side Rendering (SSR) generates the content on demand, on the server. If the user is paid, the server returns the full article directly as HTML. Otherwise, it returns the bland article without any fun in it.
// page.js: server-code async function getServerSideProps(req) { if (isPaidUser(req)) { const { wittyJoke } = getWittyJoke(); return { wittyJoke }; } else { return { wittyJoke: null }; } } // page.js: client-code const SSRArticle = ({ wittyJoke }) => { // No more loader! But... // we need to wait for "getServerSideProps" to run on every request return ( <p> {wittyJoke ? wittyJoke : "You have to pay to see jokes. Humor is a serious business."} </p> ); };
SSR removes client-side computations, but not the loading time.
We don’t rely on client-side JavaScript anymore. However, it’s not energy-efficient to render the article for each and every request. The Time To First Byte (TTFB) is also increased because we have to wait for the server to finish its work before we start seeing some content.
We’ve replaced the ugly client-side loader with an even uglier blank screen! And now we even pay for it!
The “stale-while-revalidate” cache control strategy can reduce the TTFB issue by serving a cached version of the page until it’s updated. But it won’t work out-of-the-box for personalized content, as it can cache only one version of the page per URL without taking cookies into account and cannot handle the security checks needed for serving paid content.
Static Rendering: The Key To The Rich Guest/Poor Customer Problem
At this point, you are hitting what I call the “rich guest/poor customer” problem: your premium users get the worst performance instead of getting the best.
By design, client-side rendering and per-request server-side rendering involve the most computations compared to static rendering, which happens only once at build time.
99% of the websites I know will pick either CSR or SSR and suffer from the rich guest/poor customer problem.
Deep-Dive Into Segmented Rendering
Segmented Rendering is just a smarter way to do static rendering. Once you understand that it’s all about caching renders and then getting the right cached render for each request, everything will click into place.
Static Rendering Gives The Best Performances But Is Less Flexible
Static Site Generation (SSG) generates the content at build-time. That’s the most performant approach because we render the article once for all. It is then served as pure HTML.
This explains why pre-rendering at build-time is one of the cornerstones of the Jamstack philosophy. As a newly promoted “Head of Performance,” that’s definitely what you want!
As of 2022, all Jamstack frameworks have roughly the same approach of static rendering:
- you compute a list of all possible URLs;
- you render a page for each URL.
const myWittyArticles = [ "/how-to-repair-a-smashed-magazine", "/segmented-rendering-makes-french-web-dev-famous", "/jamstack-is-so-2022-discover-haystack", ];
Result of the first step of static rendering: computing a bunch of URLs that you will prerender. For a blog, it’s usually a list of all your articles. In step 2 you simply render each article, one per URL.
This means that one URL strictly equals one version of the page. You cannot have a paid and a free version of the article on the same URL even for different users. The URL
/how-to-repair-a-smashed-magazine will deliver the same HTML content to everyone, without any personalization option. It’s not possible to take request cookies into account.
Segmented Rendering can go a step further and render different variations for the same URL. Let’s learn how.
Decoupling URL And Page Variation
The most naive solution to allow personalized content is to add a new route parameter to the URL, for instance, “with-jokes” versus “bland.”
const premiumUrl = "/with-jokes/how-to-repair-a-smashed-magazine"; const freeUrl = "/bland/how-to-repair-a-smashed-magazine";
An implementation with Next.js will look roughly like this:
async function getStaticPaths() { return [ // for paid users "/with-jokes/how-to-repair-a-smashed-magazine", // for free users "/bland/how-to-repair-a-smashed-magazine", ]; } async function getStaticProps(routeParam) { if (routeParam === "with-jokes") { const { wittyJoke } = getWittyJoke(); return { wittyJoke }; } else if (routeParam === "bland") { return { wittyJoke: null }; } }
The first function computes 2 URLs for the same article, a fun one and a bland one. The second function gets the joke, but only for the paid version.
Great, you have 2 versions of your articles. We can start seeing the “Segments” in “Segmented Rendering” — paid users versus free users, with one rendered version for each segment.
But now, you have a new problem: how to redirect users to the right page? Easy: redirect users to the right page, literally! With a server and all!
It may sound weird at first that you need a web server to achieve efficient static rendering. But trust me on this: the only way to achieve the best performances for a static website is by doing some server optimization.
A Note On “Static” Hosts
If you come from the Jamstack ecosystem, you may be in love with static hosting. What’s a better feeling than pushing a few files and getting your website up and running on GitHub Pages? Or hosting a full-fledged application directly on a Content Delivery Network (CDN)?
Yet “static hosting” doesn’t mean that there is no server. It means that you cannot control the server. There is still a server in charge of pointing each URL to the right static file.
Static hosting should be seen as a limited but cheap and performant option to host a personal website or a company landing page. If you want to go beyond that, you will need to take control over the server, at least to handle things such as redirection based on the request cookies or headers.
No need to call a backend expert though. We don’t need any kind of fancy computation. A very basic redirection server that can check if the user is paid will do.
Great news: modern hosts such as Vercel or Netlify implements Edge Handlers, which are exactly what we need here. Next.js implements those Edge Handlers as “middlewares,” so you can code them in JavaScript.
The “Edge” means that the computations happen as close as possible to the end-user, as opposed to having a few big centralized servers. You can see them as the outer walls of your core infrastructure. They are great for personalization, which is often related to the actual geographical location of the user.
Easy redirection with Next.js middlewares
Next.js middlewares are dead fast and dead simple to code. Contrary to cloud proxies such as AWS Gateway or open source tools such as Nginx, middlewares are written in JavaScript, using Web standards, namely the fetch API.
In the “Segmented Rendering” architecture, middlewares are simply in charge of pointing each user request to the right version of the page:
import { NextResponse } from "next/server"; import type { NextRequest } from "next/server"; async function middleware(req: NextRequest) { // isPaidFromReq can read the cookies, get the authentication token, // and verify if the user is indeed a paid member or not const isPaid = await isPaidFromReq(req); const routeParam = isPaid ? "with-jokes" : "bland"; return NextResponse.redirect(
/${routeParam}/how-to-repair-a-smashed-magazine); }
A middleware that implements Segmented Rendering for paid and free users.
Well, that’s it. Your first day as a “Head of Performance” is over. You have everything you need to achieve the best possible performances for your weird business model!
Of course, you can apply this pattern to many other use cases: internationalized content, A/B tests, light/dark mode, personalization… Each variation of your page makes up a new “Segment:” French users, people who prefer the dark theme, or paid users.
Cherry On The Top: URL Rewrites
But hey, you are the “Head of Performance,” not the “Average of Performance”! You want your web app to be perfect, not just good! Your website is certainly very fast on all metrics, but now your article URLs look like this:
/bucket-A/fr/light/with-jokes/3-tips-to-make-an-url-shorter
That’s not really good-looking… Segmented Rendering is great, but the end-user doesn’t have to be aware of its own “segments.” The punishment for good work is more work, so let’s add a final touch: instead of using URL redirects, use URL rewrites. They are exactly the same thing, except that you won’t see parameters in the URL.
// A rewrite won’t change the URL seen // by the end user => they won’t see the "routeParam" return NextResponse.rewrite(
/${routeParam}/how-to-repair-a-smashed-magazine);
The URL
/how-to-make-an-url-shorter, without any route parameter, will now display the right version of the page depending on the user’s cookies. The route parameter still “exists” in your app, but the end-user cannot see it, and the URL stays clean. Perfect.
Summary
To implement Segmented Rendering:
- Define your “segments” for a page.
Example: paid users versus free users, users from company A versus users from company B or C, etc.
- Render as many static variations of a page you need, with one URL per segment.
Example:
/with-jokes/my-article,
/bland/my-article. Each variation matches a segment, for instance, paid or free users.
- Setup a very small redirection server, that checks the HTTP request content and redirects the user to the right variation, depending on their segment.
Example: paid users are redirected to
/with-jokes/my-article. We can tell if a user is paid or not by checking their request cookies.
What’s Next? Even More Performance!
Now you can have as many variations of the same page as you want. You solved your issue with paid users elegantly. Better, you implemented a new pattern, Segmented Rendering, that brings personalization to the Jamstack without sacrificing performances.
Final question: what happens if you have a lot of possible combinations? Like 5 parameters with 10 values each? You cannot render an infinite number of pages at build-time — that would take too long. And maybe you don’t actually have any paid users in France that picked the light theme and belong to bucket B for A/B testing. Some variations are not even worth rendering.
Hopefully, modern frontend frameworks got you covered. You can use an intermediate pattern such as Incremental Static Regeneration from Next or Deferred Static Generation from Gatsby, to render variations only on demand.
Website personalization is a hot topic, sadly adversarial to performance and energy consumption. Segmented Rendering resolves this conflict elegantly and lets you statically render any content, be it public or personalized for each user segment.
More Resources on This Topic
- “Let’s Bring The Jamstack To SaaS: Introducing Rainbow Rendering,” Eric Burel
My first article describing the generic, theoretical architecture for Segmented Rendering (aka “Rainbow Rendering”)
- “Treat Your Users Right With Http Cache And Segmented Rendering,” Eric Burel
My second article showing an implementation with Next.js middlewares
- “Render Anything Statically With Next.js And The Megaparam,” Eric Burel
My third article, Segmented Rendering with just the HTTP cache (if you use Remix, you’ll love it
- “Incremental Static Generation For Multiple Rendering Paths,” Eric Burel
The GitHub ticket on Next.js that started it all.
- “Theoretical Foundations For Server-side Rendering And Static-rendering,” Eric Burel
My draft research paper that describes the mathematics behind SSR. It proves that Segmented Rendering achieves an optimal number of rendering in any situation. You cannot top that!
- “High Performance Personalization With Next.js Middleware,” Raymond Cheng
An excellent use case and demonstration with Plasmic.
- “A/B Testing With Next.js Middleware,” Raymond Cheng
Also from Plasmic, Segmented Rendering is being applied to A/B tests.
- “Use Eleventy Edge To Deliver Dynamic Web Sites On The Edge,” Eleventy Edge Blog
Eleventy Edge, a deep integration between 11ty and Netlify Edge Handlers to achieve personalization.
- “Vercel/Platforms,” Steven Tey
Vercel Platforms, an example of segmented rendering for multi-tenancy.
- “Avoid Waterfalls Of Queries In Remix Loaders,” Sergio Xalambrí
How to properly parallelize your data fetching requests (on the example of Remix).
Further Reading on Smashing Magazine
- “Jamstack Rendering Patterns: The Evolution,” Ekene Eze
- “State Management In Next.js,” Átila Fassina
- “Jamstack CMS: The Past, The Present and The Future,” Mike Neumegen
- “A Complete Guide To Incremental Static Regeneration (ISR) With Next.js,” Lee Robinson | https://pickdigit.com/a-new-pattern-for-the-jamstack-segmented-rendering-smashing-magazine/ | CC-MAIN-2022-33 | refinedweb | 2,669 | 55.84 |
Changes the file associated with an existing file pointer
#include <stdio.h>
FILE *freopen ( const char * restrict name , const char * restrict mode ,
FILE * restrict fp );
The freopen( ) function closes the file associated with the FILE pointer argument and opens the file with the specified name, associating it with the same FILE pointer as the file just closed. That FILE pointer is the function's return value. If an error occurs, freopen( ) returns a null pointer, and the FILE pointer passed to the function is closed.
The new access mode is specified by the second character string argument, in the same way described under fopen( ).
The most common use of freopen( ) is to redirect the standard I/O streams stdin, stdout, and stderr.
time_t sec;
char fname[ ] = "test.dat";
if ( freopen( fname, "w", stdout ) == NULL )
fprintf( stderr, "Unable to redirect stdout.\n" );
else
{
time(&sec);
printf( "%.24s: This file opened as stdout.\n", ctime(&sec) );
}
fopen( ), fclose( ), fflush( ), setbuf( ) | http://books.gigatux.nl/mirror/cinanutshell/0596006977/cinanut-CHP-17-93.html | CC-MAIN-2018-43 | refinedweb | 158 | 74.59 |
v1.2
<TOMCAT_HOME>/common/lib/directory.
activation.jarv1.0.1
<TOMCAT_HOME>/common/lib/directory.
xerces.jarv1.4.2
<TOMCAT_HOME>/common/lib/directory..
Once you have moved the
soap.war file into
<TOMCAT_HOME>/webapps/directory, you need to make sure that each of the previously listed .jar files are in the
<TOMCAT_HOME>/common/lib/ directory, excluding the
soap.jar file.(must be a JavaBean).
typeattribute defines the implementation type of the SOAP service. We defined our service as a Java service.
scope
scopeattribute defines the lifetime of the SOAP service. The possible values are
page,
scope,
session, and
application. These scope values map one-to-one with the scope values defined by the JSP specification.
methods
methodsattribute defines the names of the methods that can be invoked on this service object. This list should be a space-separated list of method names.
The final element of the deployment descriptor that we'll look at here is the
java element. This element contains a single attribute,
class, which names the fully qualified class of the named service.function allows you to deploy a new service to a SOAP server.
undeployfunction removes an existing SOAP service from a SOAP server.
listfunction lists all deployed SOAP services..
As we cover the following commands, you should note that each command references a servlet named
rpcrouter. This servlet is at the core of all SOAP actions. It performs all service management and execution..
Now that we have a service defined and deployed, let's write a client that will execute one of the service's methods. The Apache SOAP Project provides a client-side API that makes it extremely simple to create SOAP clients. An example client, which we will use to execute the
subtract method of our service, can be found in Example 3.
Example 3. A SOAP Client
package onjava;
import java.io.*;
import java.net.*;
import java.util.*;
import org.apache.soap.*;
import org.apache.soap.rpc.*;
public class CalcClient {
public static void main(String[] args) throws Exception {
URL url = new URL ("");
Integer p1 = new Integer(args[0]);
Integer p2 = new Integer(args[1]);
// Build the call.);
// make the call: note that the action URI is empty because the
// XML-SOAP rpc router does not need this. This may change in the
// future.
Response resp = call.invoke(url, "" );
// Check the response.());
}
}
}
This client follows a simple process that is common to most SOAP RPC clients. It first creates a URL referencing the
rpcrouter (which we noted earlier) on the HTTP server localhost. This is done in the following code snippet:
URL url = new URL ("");
The next step performed by the client application is to parse the arguments from the command line. These values will be passed to the SOAP service in a subsequent method. The values created will be integers.
After the client has parsed to command-line arguments, it creates an instance of an
org.apache.soap.rpc.RPCMessage.Call. The
Call object is the main interface used when executing a SOAP RPC invocation.
To use the
Call object, we first tell it which service we want to use. We do this by calling the
setTargetObjectURI, passing it the name of the service that we want to execute. We then set the name of the service method we want to execute using the
setMethodName() method, with the name of the method we want to execute. The next step is to set the encoding style used in the RPC call. The final step is to add the parameters that are expected when executing the named method. This is done by creating a
Vector of
Parameter objects and adding them to the
Call object using the
setParams() method. All of these steps are completed using the following code snippet:);
The next step performed by the client application is to call the service method that we are interested in. This is done using
invoke() with the URL we created earlier. Here is the snippet of code calling the
invoke() method:
Response resp = call.invoke(url, "" );
You will notice the return value of the
invoke() method is a
Response object. The
Response object returns two very important items -- an error code and the value returned from the executed service method. You check for an error by calling the
generatedFault() method. This method will return true if there was an error returned; then you can check the
getFault() method. If
generatedFault() returns false, you can then get the value returned in the
Response object, using the
Response.getReturnValue() method. The following code snippet shows how you should process the response of an
invoke():()); }
That is all there is to it. To test your client and service, compile the client and execute it using the following command line:
java onjava.CalcClient 98 90
Note:At this point, you should have the
CalcService deployed and Tomcat should be running.
In this article we discussed the Apache SOAP Project. We described each of the steps involved when integrating SOAP into the Tomcat container. We also created a sample SOAP application demonstrating how each of the SOAP components works together.
As for the next topic we examine, I am leaving it up to you. From this point, we can go in many different directions. Some of the topics that we can discuss include:
Please let me know where you think we should go next, or if you have other related topics that you would like to see covered. You can reach me at jgoodwill@virtuas.com Please include "onjava" in the subject line.
James Goodwill is the co-Founder of Virtuas Solutions, LLC, a Colorado-based software consultancy.
Read more Using Tomcat columns.
Return to ONJava.com. | http://www.onlamp.com/lpt/a/1497 | CC-MAIN-2015-11 | refinedweb | 946 | 65.73 |
Khaindar,
I'm in the same spot as you. I'd like to help but unsure of the best place
to get started. Also, I haven't seen much activity the last few months on
this mailing list. Can anyone update as to the current status of 4.2 and/or
3.6? It doesn't look like there's been a commit to the github project in 6
months.
If any work has started on the 4.2 port, you can give me a namespace or a
few classes to port and I can work on porting them on a fork of
lucene.neton github. I assume that the workflow would be that someone
would create a
4.2 branch in github, we would then create pull requests from our forks for
that branch? Or would we work from trunk? Not sure since there was talk of
starting a port of 4.2 from scratch.
I have spent the most time understanding and extending the code of
Lucene.Net.Documents, Lucene.Net.Analysis, and Lucene.Net.Store, so I could
work on any of those areas once someone has gotten the ball rolling on a
new port (if that's what's happening).
Paul Irwin
pirwin@feature23.com
On Thu, Jun 6, 2013 at 4:35 AM, khaindar black <blackkhaindar@gmail.com>wrote:
> hiho alltogether,
>
> i m quite new to lucene.net (using it for about 2 months), but i d like to
> offer you my support, also the whole opensource stuff is quite new to me,
> so here is a short intro about myself
>
> @blackkhaindar:
> - developing/managing .net projects for a couple of years
> - total lucene.net rookie (only have used basic searching stuff, 2-3
> analizers, index synchronization, document/field basics)
> - interested in sharing knowledge
> - interested in gaining knowledge
> - trying out good ideas
> - ...just ask to find out...
>
> some questions:
> - how is this whole stuff organized, worktime, items, bugs etc...
> (communication)
> - where can i see the current progress in porting the java lib to .net?
> (more than the released stuff @ apache.org/projects../lucene.net....)
> - what has to be done in first instance from my side, to support you guys?
> - ...plenty more of them...
>
>
>
> greetz,
> blackkhaindar
> | http://mail-archives.eu.apache.org/mod_mbox/lucenenet-dev/201306.mbox/%3CCAGm6pShe=6hb60G8SLdURO7FVMO07k_GUWgURq33_jjVcyadVw@mail.gmail.com%3E | CC-MAIN-2021-43 | refinedweb | 369 | 76.32 |
Sakai Plugin
Dependency:
compile "org.grails.plugins:sakai:0.4.2"
Summary
Installation
grails install-plugin sakai
Description
IntroductionThis plugin adds facilities for running Grails apps inside Sakai, an open source collaboration system for educational institutions. See
Unlike typical Java webapps, Sakai tools share a lot of third-party dependencies with each other, all in Tomcat's
/shared/lib.Because Sakai and Grails will share a single copy of Spring and Hibernate jars, their versions must agree. Sakai was patched to use Spring 2.5.6.SEC01 on July 3, 2009. Sakai's older releases, and trunk code before that date won't work with this plugin. Likewise, you cannot (yet) use Grails 1.1.1 with Sakai until Sakai is patched to an updated version of Hibernate. Sakai still uses Hibernate 3.2.5.GA.To summarize: use the sakai plugin with the trunk of Sakai and Grails 1.1.
Configuration OptionsThe sakai plugin adds three configuration settings which determine a little metadata about the tool. These settings should be added to Config.groovy.
The
grails.sakai.toolmode = true grails.sakai.tool.name = "Grade Calc" grails.sakai.tool.desc = "This tool calculates your overall grade for the term."
toolmodeproperty informs the plugin upon building a war file if you intend to run this application as a Sakai tool (meaning it will be accessed within Sakai under the
/portaladdress). If you set
toolmodeto
falseit means you intend to access the application at its direct URL below the root.
War File for SakaiThe plugin adds a new script to grails,
sakai-war.As the name implies, it creates a special war file suitable for dropping into a Sakai installation. The Sakai war file will have certain jar files removed from
WEB-INFto avoid collisions, it adds a tool registration file to provide Sakai with tool metadata (name and description), and it modifies
web.xmlwith a listener, a context loader, and a servlet.
Sakai TagsSakai tools run inside a portal-like environment. All access to tools is relative to the
/portalURL and the specific placement of the tool. Since links inside the tool have to take this portal context into account, there is a
<s:link>tag to be used in place of the
<g:link>tag which will add the necessary context to the link as the tool runs inside the portal. TODO: All of built-in grails tags should eventually have a Sakai version to enable correct links for things like
remoteLinketc. Even better would be if the sakai plugin can actually override the default grails behavior, rather than necessitate entirely different tags, but I haven't learned how to do this yet.The sakai plugin tag library also has an onload() convenience method to add the correct JavaScript calls to make the tool load properly in the portal (sizing the iframe, for instance). Here's how you would use that in your gsp:
<body onload="s.onload()">
Sakai Stub ContextOne of the central goals of the sakai plugin for Grails is to enable developing Sakai code outside of Sakai as much as possible. Sakai is large, startup time is long, and the feedback loop during development is slow. To facilitate developing Sakai capabilities outside of Sakai, the sakai plugin loads a complete Sakai application context, which means all the services like
UserDirectoryServiceand
SiteServiceare available to call. The implementations are stubs only, so you won't get real, meaningful data from these "offline" services, but they will permit you to code as if the services were available. When you deploy your code to Sakai, of course, all of your calls will be directed to the real service.To access the Sakai context in your controllers, services, and taglibs, simply add
def sakaiwithin the class. This is simply the standard Grails way of doing dependency injection. All of the Sakai services are then accessible by their names. Here's an example from a controller:
If you need a little more lifelike behavior from any particular Sakai service, you can create your own implementation that overrides the stub. For example, say I want the display name of the current user to come back as "Professor Plum." I can write an implementation to that effect (very succinctly I might add, thanks to Groovy):
class HelloController def sakai def greetings = { render "Hello, ${sakai.userDirectoryService.currentUser.displayName}!" }
I would put that under
import org.sakaiproject.user.api.*class UserServiceFactory { static user = [ getDisplayName: { return "Professor Plum" } ] as User static UserDirectoryService getInstance() { return [ getCurrentUser: { return user } ] as UserDirectoryService }}
src/groovyand call it UserServiceFactory.groovy. Then I modify resources.groovy under
grails-app/conf/springto specify my implementation of the service. Note that I specifically only want my implementation when I'm running in the development environment:
import grails.util.*beans = { switch(GrailsUtil.environment) { case "development": userDirectoryService(UserServiceFactory) { bean -> bean.factoryMethod = "getInstance" } break } } | http://www.grails.org/plugin/sakai | CC-MAIN-2017-30 | refinedweb | 805 | 55.64 |
I'm using Aurelia with TypeScript. I'm trying to bind an object to the view and I want the view to update when I change a property of my object. I know that Aurelia only watches the object and some array manipulations (push, splice etc). I also know that there are some binding helpers such as @computedFrom and using the BindingEngine but I still can't find the best approach when using value converters.
In my example I have class in TypeScript, e.g. "class Car". Then I bind multiple car objects to the view, e.g. ${car1}, ${car2} etc. I add a value converter to present the car, e.g. ${car1 | carPresenter}. This displays the information like this: "A blue car with full tank and three passengers". If I change a car property, e.g. "car1.passengers++" then I want the ${car1 | carPresenter} to update.
Maybe a value converter is the wrong approach? Please advise on better methods if that's the case. I want to present complex objects by showing some of it's properties but not necessarily all of them. And those presentations should update when the underlying properties change.
I have created a gist with a simplified example that illustrates the problem:
There is an additional binding decorator you can leverage:
@observable [related docs].
More info: Working With Aurelia @observable (Dwayne's blog is an extremely useful resource for learning Aurelia).
Gist demo:
In this demo, the
Car object has its own class defined, where all necessary properties have an
@observable decorator. Value converter has been replaced by
description getter method.
Car class
import { observable } from 'aurelia-framework'; export class Car { @observable color; @observable gasLevel; @observable passengers; constructor(data) { // ... } get description() { return `A ${this.color} car with ${this.gasLevel} tank and ${this.passengers} passengers`; } } | https://codedump.io/share/G3orrDlLzM5v/1/update-aurelia-binding-when-properties-change | CC-MAIN-2017-34 | refinedweb | 299 | 58.99 |
Hey there, everyone.
Here is the question I had to write code for:
Include the following code: char test[15] = {‘T’, ‘h’, ‘i’,’s’, ‘_’, ’i’, ’s’, ‘_’, ’a’, ‘_’, ’t’, ’e’, ’s’, ’t’}; for (int row = 0; row < 15; row++) { cout << test[row]; }
Add the code within the for loop to display only the characters t, e, and s using if/else or case statements.
Here is what I turned in:
#include <iostream> using namespace std; void main() { char test[15] = {'T', 'h', 'i','s', '_', 'i', 's', '_', 'a', '_', 't', 'e', 's', 't'}; int row; cout << "Characters 10-12 are:"<<endl; for (char row = 0; row <15; row++) { if(row > 9) if (row < 13) { cout << test[row]; } else; } cout << "\n" <<endl; }
I thought this was good. But it was handed back to me to redo. My teacher said he wants to see me do this using the exact character letter. I guess meaning by doing 't' 'e' 's'. I am confused as to how to re-write the code this way. Any help is appreciated. Thanks! | https://www.daniweb.com/programming/software-development/threads/155137/code-w-in-for-loop-to-display-only-certain-characters | CC-MAIN-2017-43 | refinedweb | 179 | 84 |
Change Color of selected segmentcontrol
Hey guys,
I’am quite new to pythonista and i already love it. Everything is well documented and there is a thread for everything you need in this Forum. Except for this special requirement. For my own App í´m trying to figure out how i can set the color of the selected segment.
During my search i found this topic and i was able to get this to work in my code. But unfortunately its only the font type, color and size. But if we can access theese attributes wouldnt we be able to change the color of the selection as well.?
I tried with the attribute nsforegroundcolorattributename, but without succes. And i have to admit, I have no experience with object C.
So has anyone a clue how i can get this to work? I would really appreciate it :)
Best wishes
@Killerburns read this topic
Hi cvp,
Thanks for your reply. I have already seen this thread, and as i mentioned was able to change color of the font as you describe it in the the thread you have posted. If its about the „get back ios 12 appearance“ i really have no idea how to use that :( I‘am really a beginner.
I thought about to build a own segmented control with buttons and ui animate if this is too complicated to get it work with the build in segmented control.
@Killerburns Try this, almost nothing in Objective C 😅
import ui from objc_util import * v = ui.View() v.frame = (0,0,400,200) v.background_color = 'white' d = 64 items = ['aaa','bbb'] sc = ui.SegmentedControl() sc.frame = (10,10,d*len(items),d) sc.segments = items def sc_action(sender): idx = sender.selected_index o = ObjCInstance(sender).segmentedControl() #print(dir(o)) idx = o.selectedSegmentIndex() for i in range(len(items)): if i == idx: with ui.ImageContext(d,d) as ctx: path = ui.Path.rounded_rect(0,0,d,d,5) ui.set_color('red') path.fill() s = 12 ui.draw_string(items[idx], rect=(0,(d-s)/2) sc.action = sc_action v.add_subview(sc) v.present('sheet')
Thanks
import') ```
- Gagnon_265
This post is deleted!last edited by
- myagkayaanna
This post is deleted!last edited by
This post is deleted!last edited by | https://forum.omz-software.com/topic/6678/change-color-of-selected-segmentcontrol/? | CC-MAIN-2021-49 | refinedweb | 374 | 69.18 |
Type: Posts; User: divined
Hello everybody
I`ve created a WebForms applications for learning purposes. I`ve placed two TextBoxes on my WebForm1 using the Visual Studio toolbox. The code is shown below :
<asp:TextBox...
Hello everybody
I`m trying to add DirectInput support to a little game I`m writing. I`ve run up against this wall though. When I compile my code I get the warning:
1>C:\Program...
ok. thx. I understand that this has to with protecting the structure - class instance from accidental modification.
Hello everybody
I`m been reading some c++ programming books lately and I`ve run across functions declarations like the one shown below :
CQ::CQ(const st &st)
{
}
thx, they do seem interesting!!!
Hello everybody
I`ve been writing this simple game for some time now and I`d like to have a rudimentary form of protection for it. Some sort of activation. The game has been written in c++, using...
Hello everybody
I`ve written this code under Visual C#.
namespace MySQL_ListBox1
{
partial class ADONetForm1 : Form
{
thanks for the link. A rather focal point for anything related to MySQL Connector.
Fine. But the compiler responds with an error when I use the C# code :
MySql.Data.MySqlClient.MySqlConnection conn;
string myConnectionString;
myConnectionString =...
Hello everybody
I want to connect to a MySQL 5.0 database under C# using the MySQL Connector Net 1.08 drivers. I know that I need to use the MySQLConnector object. But that object is not...
ok. I got the idea. But what collection do I need to use in order for Visual C# to be able to see the MySQL object?
Hello everybody
I`m using MySQL 4.1 and have installed the MySQL Connector net drivers. Now I wish to use a DataAdapter object to connect to my database. This is the connection string I used when...
ok. thanks. It`s just because I`m familiar with MySQL and am just starting to learn ASP.NET. I wouldn`t like to commit myself to learning ASP.NET if it couldn`t interoperate with MySQL.
Hello everybody
Is it possibly to use Visual C# 2005, ASP.NET and MySQL 5 to create web applications? Or am I stuck with M$ SQL Server??
Hello everybody
I`ve got this problem. THe code is shown below :
student.h
#pragma once
#include <iostream>
using namespace std;
ok. I tested it out with the _getch() and it seems to be working. Maybe it was something with some other include files I had in the project. I`ll have to check it again tomorrow at work and get back...
Hello everybody
I`ve tried compiling some old code I`d written using an old Borland C++ compiler under Microsoft Visual C++ 6.0 sp6. That code uses the getch() console function heavily. I`ve...
thanks guys! I got the general idea. I`d practically forgotten about the assignment operator and thought there would be two objects left, thus wasting memory space.
Hello everybody
This is kind of a newbie question but I didn`t find a specific answer in the FAQ. When I overload the addition operator I always create a new unnamed object which is returned to...
Does that mean that I always need to use the using namespace std?
Hello everybody
I want to migrate some old code I`d written to Visual Studio C++ .Net 2005. The application is console based and relies heavily on the cout function. Unfortunately, the Visual...
ok. thanks. I`m gonna this out and get back to you.
Hello everybody
I`m trying to implement the IEnumerable<String> interface for a class. This class has a string collection that I`d like to loop through using the foreach statement. This is the...
thanks! Very clarifying!!!
Hello everybody
I`ve got a question on the Dispose method. Can it be called explicitly or should it always be called under the protective hood of a using statement clause?
Also, what is the... | http://forums.codeguru.com/search.php?s=0e1ff41c1191312d24a19741893c082a&searchid=5793161 | CC-MAIN-2014-52 | refinedweb | 666 | 68.47 |
This document is also available in these non-normative formats: Single XHTML file, PostScript version, PDF version, ZIP archive, and Gzip'd TAR archive.
Copyright ©2006 may also contain an xsi:schemaLocation attribute that associates this namespace with
the XML Schema at the URI..
On introducing a fallback attribute PR #7723
State: Open
Resolution: None
User: None
Notes:
[XHTML 2] Conforming documents and meta properties PR #7777
State: Open
Resolution: None
User: None
Notes:
Fw: [XHTML 2] Section 3.1.1 PR #7791
State: Open
Resolution: None
User: None
Notes:
State: Open
Resolution: None
User: None
Notes:
Change XHTML 2.0 namespace to PR #7808
State: Open
Resolution: None
User: None
Notes:
Namespace versioning problem in XHTML 2 PR #7818
State: Open
Resolution: None
User: None
Notes:
Referance XML 1.1 and example with XML 1.1 declaration PR #7819
State: Open
Resolution: None
User: None, q, (parsed):
Fw: [XHTML 2] Section 5.5 quality values. PR #7799
State: Open
Resolution: None
User: None
Notes:
Fw: [XHTML 2] Section 5.5 intersection of mime-types PR #7800
State: Open
Resolution: None
User: None
Notes:, | http://www.w3.org/TR/xhtml2/xhtml2.html | crawl-001 | refinedweb | 184 | 62.38 |
As you probably know from our most recent blog post, XAP is now open source. In this post, we’ll cover the basics of how to get started with XAP 12 by exploring the hello world example that ships with XAP open source.
If you already know your way around XAP from previous releases, you’ll find most of this familiar, but we recommend you have a quick look through since a few things have been updated.
So whether you’re a brand new user or you’ve been with us for years, read on and we’ll show you how to set up your IDE and create a XAP project, how to create/connect to a data grid, and how to store and retrieve objects from that data grid.
1. How to Get XAP Open Source
You can get XAP Open Source in three different ways:
- Download the binary package from
- Get the source code from github.com/xap and build it yourself
- Use docker (Coming Soon)
2. Importing the Project
A “hello world” example is located at XAP_HOME/examples/hello-world. Both the example and XAP itself are Maven-friendly, so you can use any IDE which supports Maven (Eclipse, IntelliJ IDEA, etc.) and import the hello world Maven project to get started. While you’re there, take a peek in the pom.xml file. If you do, you’ll see the following:
<repositories>
<repository>
<id>org.openspaces</id>
<url></url>
</repository>
</repositories>
<dependencies>
<dependency>
<groupId>org.gigaspaces</groupId>
<artifactId>xap-openspaces</artifactId>
<version>12.0.0</version>
</dependency>
</dependencies>
All XAP Maven artifacts are published under org.gigaspaces groupId. However, you only need one artifact to get started: xap-openspaces (since we’re using Maven, its dependencies are retrieved implicitly). In addition, since XAP Maven artifacts are currently not published to the central Maven repository, you also need to configure the repository which hosts them. In future releases, we’ll publish to the central repo so that step will no longer be necessary.
(Note: If you can’t use Maven for some reason, no worries—simply add all the jars in XAP_HOME/lib/required to your project’s classpath.)
3. The Message Class
The data grid can store entries of various forms, the most common of which is POJO (Plain ol’ Java Object), which is what the hello world example uses. Take a look at the Message class:
public class Message { private String msg; public Message() { } public Message(String msg) { this.msg = msg; } @SpaceId(autoGenerate = false) public String getMsg() { return msg; } public void setMsg(String msg) { this.msg = msg; } }
As you can see, it has a single String field with a standard getter and setter. Of course, you may add additional properties with different types, but the minimum requirements are:
- A default constructor (required for instantiating the objects when retrieving them from the data grid)
- Annotating one of the properties with @SpaceId, which the space uses as a unique identifier for objects of this class (like a primary key in databases)
Finally, there are additional space annotations to control indexing, routing, and other features, which you can explore once you get going on your own.
4. Interacting with the Data Grid
The code is located at the HelloWorld class. It’s pretty straightforward:
- Set up a data grid connection
- Write (store) objects to the grid
- Read (retrieve) objects from the grid
Creating or Finding the Data Grid
The main class for interacting with the data grid is called GigaSpace. This is an interface which abstracts the details of the space, so interacting with an embedded space, a remote space, or a partitioned cluster of spaces are all done via the same API. In this program we use command line arguments to determine which space to connect to.
Creating an embedded (in-process) space is done via the following code:
GigaSpace space = new GigaSpaceConfigurer(new EmbeddedSpaceConfigurer(spaceName)).gigaSpace();
Connecting to a remote space (hosted in a different process) is done via:
GigaSpace space = new GigaSpaceConfigurer(new SpaceProxyConfigurer(spaceName)).gigaSpace();
Storing Entries in the Data Grid
As you can see from the code, simply use the space.write() method to write entries in the space. For example, the following code writes two Message entries to the space:
space.write(new Message(“Hello”));
space.write(new Message(“World!”));
Retrieving Entries from the Data Grid
Next, we want to read all “Message” entries and print them.
The data grid supports three kinds of queries:
- Query by ID
- Template matching
- SQL Query
In this example, we’ll show template matching, which basically means the query is an object of the same class you want to match, and only properties which are non-null are matched against the entries in the space. So the template new Message() effectively matches all Message entries.
All we need to do now is use that in conjunction with the space.readMultiple() method:
Message[] results = space.readMultiple(new Message());
System.out.println(“read – ” + Arrays.toString(results));
5. Running the Example
Now let’s see how to run the example with different data grids.
Running with Embedded Data Grid
To run the example with an embedded data grid, run it with the following arguments:
-name myDataGrid -mode embedded
Output
Created embedded data-grid: myDataGrid write – ‘Hello’ write – ‘World!’ read – [‘Hello’, ‘World!’]
Running with Remote Data Grid
To start an independent, standalone data grid instance, use the space-instance script located in the product’s bin folder (XAP_HOME/bin/space-instance.sh or .bat).
For example:
./space-instance.sh -name myDataGrid
Then, run the example with the following arguments:
-name myDataGrid -mode remote
Output
Connected to remote data-grid: myDataGrid write – ‘Hello’ write – ‘World!’ read – [‘Hello’, ‘World!’]
Running with a Partitioned Data Grid
A XAP data grid can be partitioned across multiple instances. The data grid proxy routes entries using a routing ID (If not set, the SpaceId is used for routing, which is what happens in our case).
The space-instance script can also be used to form a cluster of spaces. For example, to start a partitioned cluster with two nodes, run the following:
Partition #1:
./space-instance.sh -name myDataGrid -cluster schema=partitioned total_members=2 id=1
Partition #2:
./space-instance.sh -name myDataGrid -cluster schema=partitioned total_members=2 id=2
Note that we’re running two instances with a different ID. If you run the same command without the ID argument, it would look like this:
./space-instance.sh -name myDataGrid -cluster schema=partitioned total_members=2
Then both partitions would be instantiated within the same process. This is sometimes useful for dev environments, but in production you’d probably want a process-per-instance.
Next, run the example with the following arguments:
-name myDataGrid -mode remote
Output
Connected to remote data-grid: myDataGrid write – ‘Hello’ write – ‘World!’ read – [‘Hello’, ‘World!’]
Running with a Partitioned Highly Available Data Grid
A XAP data grid supports the notion of hot backups, so that if an instance fails, its designated backup automatically kicks in and takes over, making the failover process transparent to the user.
To start a partitioned cluster with backups, change the total_members argument to specify the number of backups (e.g. 2,1), and start backup instances with backup_id=1. For example:
Partition #1:
./space-instance.sh -name myDataGrid -cluster schema=partitioned total_members=2,1 id=1
./space-instance.sh -name myDataGrid -cluster schema=partitioned total_members=2,1 id=1 backup_id=1
Partition #2:
./space-instance.sh -name myDataGrid -cluster schema=partitioned total_members=2,1 id=2
./space-instance.sh -name myDataGrid -cluster schema=partitioned total_members=2,1 id=2 backup_id=1
Next, run the example with the following arguments:
-name myDataGrid -mode remote
Output
Connected to remote data-grid: myDataGrid write – ‘Hello’ write – ‘World!’ read – [‘Hello’, ‘World!’]
Summary
In this blog post we’ve learned:
- How to set up a XAP project with or without Maven
- How to create an embedded space or connect to a remote space
- How to store and retrieve entries from the space
- How to start various types of data grids using the space-instance utility
But this is only the beginning! | https://www.gigaspaces.com/blog/get-started-xap-open-source-5-easy-steps/ | CC-MAIN-2020-45 | refinedweb | 1,350 | 52.7 |
, Feb 22, 2012 at 4:02 AM, Vincent Torri wrote:
> On Tue, Feb 21, 2012 at 5:02 AM, Me Myself and I wrote:
>
>>.
And no matter how many times you ask it you get the same answer.
--
Earnie
--
On Tue, Feb 21, 2012 at 5:02 AM, Me Myself and I
<stargate7thsymbol@...> wrote:
>
> As I understand it, the word "toolchain"
> is the word referring to the gcc tools, msys and all
> that usually runs with mingw.
no. It's just the compiler, linker, eventually a debugger, + some
files (like the win32 headers and import libraries.
MSYS is not part of the toolchain. MSYS is a console with some tools
ported to Windows that can run in that console : a shell, ls, find,
etc... It also allows you to run autotools (autoconf, etc...)
>.
Vincent Torri
Thanks for the link. The issue you see is caused by
register-clobbering of r10/r11 etc. Those are callee saved-registers.
So you need to save them too.
By adding following lines (without the +):
GSYM_PREFIX`'ecm_redc3:
push %rbp # Push registers
push %rbx
push %rsi
push %rdi
+ push %r10
+ push %r11
....
End:
addq $32, %rsp
+ pop %r11
+ pop %r10
pop %rdi
pop %rsi
pop %rbx
pop %rbp
ret
...
End2:
addq $32, %rsp
+ pop %r11
+ pop %r10
pop %rdi
pop %rsi
pop %rbx
pop %rbp
ret
Your problem should be gone. Please see as reference about registers
and their save-state either msdn x64 ABI documentation, or see the x64
calling-convention page on our Wiki.
Regards,
Kai
Hello everyone,
I have created a much smaller test case that, I believe, shows the problem I am
running into. I have it down to three files, which are in the included zip
file. The files are:
config.m4
redc.asm
test_mulredc.c
There are just a few command lines to compile the program and see the error.
The commands I used to compile the exe are:
$ m4 -I./ -DOPERATION_redc `test -f redc.asm || echo './'`redc.asm >redc.s
$ libtool --mode=compile gcc -O2 -pedantic -m64 -g -std=gnu99 -mtune=core2
-march=core2 -c -o redc.lo redc.s
$ gcc -c test_mulredc.c
$ gcc test_mulredc.o redc.o -o test.exe -lgmp
Then, when you run the program you will (probably) see the following:
-----------------------------------------------------------------
D:\test_problem>test
i = 1
mulredc = 3d1
mpn_mul_n/ecm_redc3 = 1
Assertion failed: mpn_cmp(z,tmp2, N) == 0, file test_mulredc.c, line 76
This application has requested the Runtime to terminate it in an unusual way.
Please contact the application's support team for more information.
-----------------------------------------------------------------
The "mulredc" line and the "mpn_mul_n/ecm_redc3" should both print out 3d1.
However, something goes wrong with the ecm_redc3 route.
I would be interested to see if anyone can run this on WinXP x64 and/or Win7
x64, and whether you get the crash like I do above or if it runs to completion.
Can anyone please help me troubleshoot/debug this problem? At this point I
don't know if there is a problem in MingW64 or if the redc.asm code is incorrect
for Win64 systems. It is probably the redc code, but I'm not sure how/why. I'd
really appreciate any time anyone can give to this issue.
-David C.
On 2/20/2012 8:46 AM, David Cleaver wrote:
> Hello,
>
> The file that I think may be causing problems does not contain inline assembler,
> it is a straight assembly file (redc.asm). I do not see any inline assembly in
> gmp-ecm.
>
> One strange thing I see at the top of that file is the following:
> # void ecm_redc3(mp_limb_t * z, const mp_limb_t * x, size_t n, mp_limb_t m)
> # %rdi %rsi %rdx %rcx
> # MinGW w64 %rcx %rdx %r8 %r9
> # save them in %r8 %r9 %r10 %r11
>
> Then there are a few push and pop statements in the file. Maybe the wrong
> registers are being push-ed and pop-ed for the way mingw64 works? Hmmm, but
> then why would some printf statements make it work?
>
> I'm not very good at tracking down some of these problems. I wish I could
> create a small test case to reproduce the problem, but I'm not sure that this
> file is the problem. Actually, I'll try to create a test case with this file
> later today to see what happens. But in the mean time, can someone else try the
> steps I have outlined below and let me know if you run into the same problem?
> Thanks for your time.
>
> -David C.
>
> On 2/19/2012 11:48 PM, Kai Tietz wrote:
>> Hi,
>>
>> such failures a commonly caused by inline-assembler not treating
>> memory-ref proper. I assume that inline assembler doesn't specify
>> explicit memory-clobber/modify. Newer gcc versions are here more
>> strict then older versions.
>>
>> Kai
>>
>> 2012/2/20 David Cleaver:
>>> Hello everyone,
>>>
>>> I have run into a very strange problem and I am not sure what might be causing
>>> it. I am compiling the latest svn of gmp-ecm (right now it is 1746) and
>>> depending on whether I insert a few extra print statements or not, one certain
>>> test in gmp-ecm will either run to completion or will seg_fault. Can someone
>>> here help me test whether this might be a problem with MingW64, Win XP x64, MPIR
>>> 2.5.0, or maybe something else? I have run into the problem with two different
>>> sezero builds (20101003 and 20111101) as my MingW64 compiler. I am compiling
>>> inside an MSYS shell (1.0.16). I am on a Windows XP x64 computer.
>>>
>>> Steps to reproduce the problem I am seeing:
>>> 0) Use MPIR as your GMP library and MingW64 as your compiler
>>> 1) Download latest gmp-ecm svn (currently 1746)
>>> 2) run 'autoreconf -i'
>>> 3) I configure with the following options:
>>> ./configure CC=gcc CFLAGS="-O2 -pedantic -m64 -std=gnu99 -mtune=core2
>>> -march=core2" --enable-asm-redc --build=x86_64-w64-mingw32 --disable-assert
>>> 4) run 'make'
>>>
>>> At this point we can just run the test that fails for me, it is (sorry for any
>>> wrap around):
>>> echo
>>> 449590253344339769860648131841615148645295989319968106906219761704350259884936939123964073775456979170209297434164627098624602597663490109944575251386017
>>> | ecm -sigma 63844855 -go 172969 61843 20658299
>>>
>>> The test should produce a 27 digit factor of the 153 digit input. However, in
>>> my setup it seg_faults, actually just a silent crash with no screen output.
>>>
>>> Now, I have been using some printf's (actually outputf in gmp-ecm code) to try
>>> to track down where the crash was happening. I have found that if I put three
>>> outputf statements in one part of the code, the program runs successfully and
>>> produces the factorization. Can anyone else here reproduce this problem? I'm
>>> curious to see if this problem happens for anyone else on WinXP x64, or maybe on
>>> Win7 x64.
>>>
>>> If anyone can reproduce the crash, can you see if the following change fixes the
>>> problem in your environment? The diff just applies to one file, mpmod.c.
>>> $ diff -u ../svn/mpmod.c mpmod.c
>>> --- ../svn/mpmod.c 2012-02-12 19:59:32 -0600
>>> +++ mpmod.c 2012-02-19 14:48:36 -0600
>>> @@ -719,8 +719,11 @@
>>> #endif /* otherwise go through to the next available mode */
>>> case MPMOD_MUL_REDC3: /* mpn_mul_n + ecm_redc3 */
>>> #if defined(HAVE_ASM_REDC3)
>>> +outputf(OUTPUT_VERBOSE, "tmp0=0x%I64x tmp1=0x%I64x tmp2=0x%I64x\n", tmp[0],
>>> tmp[1], tmp[2]);
>>> mpn_sqr (tmp, s1p, nn);
>>> +outputf(OUTPUT_VERBOSE, "tmp0=0x%I64x tmp1=0x%I64x tmp2=0x%I64x\n", tmp[0],
>>> tmp[1], tmp[2]);
>>> ecm_redc3 (tmp, np, nn, invm[0]);
>>> +outputf(OUTPUT_VERBOSE, "tmp0=0x%I64x tmp1=0x%I64x tmp2=0x%I64x\n", tmp[0],
>>> tmp[1], tmp[2]);
>>> cy = mpn_add_n (rp, tmp + nn, tmp, nn);
>>> if (cy != 0)
>>> mpn_sub_n (rp, rp, np, nn); /* a borrow should always occur here */
>>>
>>> At this point, if you run 'make clean' and 'make' again, then run the test case
>>> from up above, it should produce a factor very quickly.
>>>
>>> For some strange reason, all three of the 'outputf' statements above are
>>> necessary to make gmp-ecm run to completion. If any of those 'outputf'
>>> statements are left out, then the program will silently crash.
>>>
>>> My guess is that there is some assembly code in gmp-ecm (the ecm_redc3 function,
>>> found in ecm_svn_1746/x86_64/redc.asm) that MingW64 might not be processing
>>> correctly. This is just a guess based on the fact that the crash happens with
>>> both MPIR and GMP (it only happens to GMP if it goes through the same code path
>>> as MPIR, ask me for the change if you would like to see it with GMP).
>>>
>>> Is anyone willing to help me track this problem down? Is there any more
>>> information that anyone needs to look into this problem? Thanks for your time
>>> and I'll talk with you later.
>>>
>>> -David C.
>
> ------------------------------------------------------------------------------
> Try before you buy = See our experts in action!
> The most comprehensive online learning library for Microsoft developers
> is just $99.99! Visual Studio, SharePoint, SQL - plus HTML5, CSS3, MVC3,
> Metro Style Apps, more. Free future releases when you subscribe now!
>
> _______________________________________________
> Mingw-w64-public mailing list
> Mingw-w64-public@...
>
>
I agree to receive quotes, newsletters and other information from sourceforge.net and its partners regarding IT services and products. I understand that I can withdraw my consent at any time. Please refer to our Privacy Policy or Contact Us for more details | https://sourceforge.net/p/mingw-w64/mailman/mingw-w64-public/?viewmonth=201202&viewday=22 | CC-MAIN-2017-17 | refinedweb | 1,540 | 71.95 |
In this tutorial, we are going to see some best practices for structuring the Python programs. Let's see one by one
Using a tab for indentation in the code make code more readable instead of using random spaces for multiple functions and methods. You can set the number of spaces for a tab in any code editors' settings.
# example def sample(random): # statement 1 # statement 2 # ... return random
Writing more than 79 characters in a line is not recommended Python. Avoid this by breaking line into multiple lines using the escape character (). See the example below.
# example def evaluate(a, b, c, d): return (2 ** (a + b) / (c // d) ** d + a - d * b) \ - (3 ** (a + b) / (c // d) ** d + a - d * b)
If you have to check multiple conditions in if statement, then it will be more than 79 characters. Use any one of the following methods.
if ( a + b > c + d and c + d > e + f and f + g > a + b ): print('Hello') if a + b > c + d and \ c + d > e + f and \ f + g > a + b: print('Hello')
Use docstring in the functions and classes. We can use the triple quotes for the docstrings. some examples below.
def sample(): """This is a function""" """ This is a function """ class Smaple: """This is a class""" """ This is a class """
If you have any doubts in the tutorial, mention them in the comment section. | https://www.tutorialspoint.com/structuring-python-programs | CC-MAIN-2022-05 | refinedweb | 235 | 70.43 |
In this article, we will see how to create REST API using ASP.NET Core Web API and new JSON features in Azure SQL Database and SQL Server 2016.
Azure SQL Database and SQL Server 2016 provide built-in JSON support that enable you to easily get data from database formatted as JSON, or take JSON and load it into table. This is a good choice for web services that return database data in JSON format, or accept JSON text as parameter and insert JSON into database. With new built-in JSON support in Azure SQL Database, transformation between tables and JSON text becomes extremely easy.
In this article, we will see how easily we can build ASP.NET Core Web API using Azure SQL Database and new JSON functionalities.
This REST service is built using new ASP.NET Core framework. In the latest release of ASP.NET, you can create ASP.NET Core web applications using .NET Framework or .NET Core. Use .NET Core, if you need minimal environment, that can be used on any platform.
.NET Core is still under development and we might have some breaking change before RTM, so here I'm using .NET Framework. However, the core principles apply both on ASP.NET Core applications created using .NET Framework and .NET Core.
At the time of writing this article, I have used ASP.NET Core RC1 and then upgraded to RC2 that had some small breaking changes that required me to rewrite the app. Therefore, I chose to use .NET Framework instead of RC2 Core, because it was more stable at the time of writing this article. In the meantime, .NET Core v1 was released and it should be more stable.
However, the core principles apply both on ASP.NET Core applications created using .NET Framework and .NET Core. Since I'm using JSON functionalities in Azure SQL Database, my app is very lightweight so I can add some minor changes in the app to fit new framework and still the main logic of JSON/SQL transformation would be unchanged. You can easily rewrite this app in node.js or any other framework if you want.
End Update :)
I will not use some complex database structure. We are going to build REST service on a simple Todo table that has four columns: Title, Description, Completed, and TargetDate.
Todo
Title
Description
Completed
TargetDate
Our REST Web API service will have the following HTTP methods:
GET
PUT
PATCH
DELETE
When you create Web API, initially you are getting only Controllers folder where you need to put the code for your services. Web API doesn't force you to add models views or any other architectural concept.
However, the first thing that people do is to add Model folder with all necessary domain classes used to map tables to C# objects and to define the schema of JSON that will be generated when these objects are serialized. In some case, they are adding the entire EntityFramework model to access data.
EntityFramework
In a lot of cases, these models are used just as plain data transfer objects (DTO) that are just used as a schema input for other frameworks that read data from SQL database or serialize JSON responses. Sometimes, these classes don't even have some significant domain model characteristics or relationships between classes.
JSON functions in Azure SQL Database enable you to keep your service lightweight and "model-less". If the only purpose for creating model is a "template" for serialization from database to JSON, you don't need to do this if you don't need it. SQL/JSON functions may handle all conversion between JSON and table data.
This way, JSON functions enable you to easily expose your database data to web clients without additional layers of transformation. In this example, I will create REST Service that just passes JSON between SQL database and web clients.
To run this sample, we would need a database with Todo table and a Web application. In this section, how you can setup project and database is described.
First, you need to create new database in Azure SQL Database or SQL Server 2016 and execute the following script that creates and populates Todo table.
DROP TABLE IF EXISTS Todo
GO
CREATE TABLE Todo (
Id int IDENTITY PRIMARY KEY,
Title nvarchar(30) NOT NULL,
Description nvarchar(4000),
Completed bit,
TargetDate datetime2
)
GO
INSERT INTO Todo (Title, Description, Completed, TargetDate)
VALUES
('Install SQL Server 2016','Install RTM version of SQL Server 2016', 0, '2016-06-01'),
('Get new samples','Go to github and download new samples', 0, '2016-06-02'),
('Try new samples','Install new Management Studio to try samples', 0, '2016-06-02')
This SQL script will create simple Todo table and populate it with three rows.
As an alternative, you import bacpac file using SQL Server Management Studio/Import Data-tier application that will restore database and populate table.
One additional thing that you need to do is to set compatibility level to value 130 if it is not already set:
130
ALTER DATABASE TodoDb SET COMPATIBILITY_LEVEL = 130
Latest compatibility level enables OPENJSON function that we will use in these samples. When you finish this step, you will have prepared database with one Todo table.
OPENJSON
I’m using Visual Studio 2015 Community Edition to create Web API REST service. You can download the sample from this article or create a new project and choose ASP.NET Core Web Application (.NET Framework):
Then, you can choose Web API type of the project and optionally check host in the cloud check box if you want to host it in Azure.
Now you have ASP.NET Core Web API project, so we can create a new REST Service.
We would need some classes that read data from database. In this article, I will not use Entity Framework or something similar. Since Azure SQL Database will format and parse my JSON data, I can use any simple ADO.NET library that can execute plain SqlCommand.
SqlCommand
In this article, I’m using small lightweight data access library that wraps basic data access functions. Library is called CLR-Belgrade-SqlClient, which can be downloaded from GitHub -. This is a small, lightweight, data access library that just wraps basic ADO.NET classes and methods.
Belgrade SQL Client library follows something like CQRS pattern where commands and query classes are separated. In this library, we have two main classes:
QueryPipe
FOR JSON
Response.Body
INSERT
UPDATE
Nice thing with this library is that it is completely async. Under the hood, it uses async methods of ADO.NET classes, which might improve scalability of your code.
async
In order to download this library, you can install Belgrade.Sql.Client using Package Manager in Visual Studio, or type the following command into your Package Manager console:
Belgrade.Sql.Client
Install-Package Belgrade.Sql.Client
If this does not work for you, you can download the source code from github and compile it into your project:.
Nuget package is compiled under .NET Framework 4.6, but the source code is generic and it can be compiled under any framework (e.g., .NET Core).
Note that this library is not a prerequisite to use JSON in new SQL Server/Azure SQL Database. It is just a helper library that helps me to write easier code, but you can use any other library that can execute standard SQL commands.
Azure SQL Database provides the following functionalities that handle JSON:
style="width: 600px; height: 303px" data-src="" class="lazyload" data-sizes="auto" data->
Main functionalities that we see on this figure are:
If you would like to know more about JSON in Azure SQL Database, I would recommend the following article - Friday the 13th - JSON is coming to SQL Server - CodeProject.
Now, we will see how to implement basic CRUD operation in Azure SQL Database using these functionalities.
If you want to select data from table and read them as JSON, you just need to add FOR JSON PATH clause at the end of the SQL SELECT query:
FOR JSON PATH
select * from Todo
FOR JSON PATH
As a result, instead of table, you would get something like the following JSON.
[
{"Id":1,"Title":"Install SQL Server 2016",
"Description":"Install RTM version of SQL Server 2016","Completed":false,
"TargetDate":"2016-06-01T00:00:00"},
{"Id":2,"Title":"Check what's new",
"Description":"Go to MSDN to see what is new in SQL Server 2016",
"Completed":false,"TargetDate":"2016-06-01T00:00:00"},
{"Id":3,"Title":"Get new samples",
"Description":"Go to github and download new samples",
"Completed":false,"TargetDate":"2016-06-01T00:00:00"},
{"Id":4,"Title":"Try new samples",
"Description":"Install new Management studio to try samples",
"Completed":false,"TargetDate":"2016-06-02T00:00:00"}
]
If you execute this query from your REST Web API, you can simply stream this result to your client because this is probably the output that client expects.
Azure SQL Database enables you to select single row (by specifying id of the row) and return it as a single JSON object:
from Todo
where Id = 3
FOR JSON PATH, WITHOUT_ARRAY_WRAPPER
If you add WITHOUT_ARRAY_WRAPPER option, Azure SQL Database will remove [ and ] that surround JSON result, and return single JSON object that you can return to client – something like:
WITHOUT_ARRAY_WRAPPER
[
]
{"Id":3,"Title":"Get new samples","Description":"Go to github and download new samples",
"Completed":false,"TargetDate":"2016-06-01T00:00:00"}
Like in the previous case, you can return this text directly to the client via REST service.
OPENJSON function parses JSON text that you send to database and transforms it into table structure. Then you can simply insert parsed results into table:
table
Set @todo = '{"Title":"Get new samples",
"Description":"Go to github and download new samples",
"Completed":false,"TargetDate":"2016-06-01T00:00:00"}'
insert into Todo
from OPENJSON(@todo)
WITH( Title nvarchar(30), Description nvarchar(4000), Completed bit, TargetDate datetime2)
OPENJSON will parse JSON text in @todo variable. In WITH clause, you can define what keys from JSON text you want to read and these keys will be returned as columns. Then, you just need to select results from OPENJSON and insert them into Todo table.
@todo
WITH
OPENJSON function, which parses JSON text, can be used to update existing rows:
set @todo = '{"Title":"Get new samples",
"Description":"Go to github and download new samples",
"Completed":false,"TargetDate":"2016-06-01T00:00:00"}'
update Todo
set Title = json.Title, Description = json.Description,
Completed = json.completed, TargetDate = json.TargetDate
from OPENJSON( @todo )
WITH( Title nvarchar(30), Description nvarchar(4000),
Completed bit, TargetDate datetime2) AS json
where Id = @id
OPENJSON will parse JSON text in @todo variable and like in the previous example, you can define in WITH clause what keys from JSON text you want to read. Instead of INSERT, we need to UPDATE row in Todo table with results from OPENJSON.
In order to delete row, you don't need JSON, since rows are deleted by id:
id
DELETE Todo WHERE Id = 3
Now we know how our SQL queries would look, so we just need to add C# code that uses these queries and we will have REST service.
Ok, now we have database, project, and we know how to use JSON in Azure SQL Database, so we can create REST service that access Todo table.
First, add new controller using New / Controller and call it TodoController. TodoController must have a reference to some classes/services that can execute SQL queries and return JSON. Since I'm using CLR-Belgrade-SqlClient, I would need references to Command and Query objects that will execute SQL commands:
TodoController
Command
Query
public class TodoController : Controller
{
private readonly IQueryPipe SqlPipe;
private readonly ICommand SqlCommand;
public TodoController(ICommand sqlCommand, IQueryPipe sqlPipe)
{
this.SqlCommand = sqlCommand;
this.SqlPipe = sqlPipe;
}
}
In this sample project, I’m using simple built-in ASP.NET 5 dependency injection with constructor injection. In ASP.NET 5, you have new Startup.cs class where you can add all services that will be used by controllers and other components in your applications. Services are added in ConfigureServices method in Startup class:
ConfigureServices
Startup
public void ConfigureServices(IServiceCollection services)
{
const string ConnString = "Server=db.database.windows.net;Database=TodoDb;
User Id=usr;Password=pwd";
services.AddTransient<IQueryPipe>( _=> new QueryPipe(new SqlConnection(ConnString)));
services.AddTransient<ICommand>( _=> new Command(new SqlConnection(ConnString)));
// Add framework services.
services.AddMvc();
}
Here, I’m adding transient services with interfaces, IQueryPipe and ICommand, that are initialized using lambda expression in argument. You just need to set your server, database, and user name in connection string.
This is not a mandatory approach, you can use any other dependency injection framework (ninject or autofac) or initialize these objects using any other method. You can even directly initialize Pipe/Command objects:
IQueryPipe
ICommand
Pipe
IQueryPipe SqlPipe = new QueryPipe(new SqlConnection( "Connection string ")));
ICommand SqlCommand = new Command(new SqlConnection("Connection string ")));
Now we can start implementing CRUD methods of REST Web API service. QueryPipe has one method called Stream that streams results of SQL Query into Output stream. It has two parameters:
Stream
string
OutputStream
Example of usage of SqlPipe class is shown in the following code:
SqlPipe
SqlPipe.Stream("SELECT * FROM sys.tables FOR JSON PATH", Response.Body);
SqlPipe will execute SQL query that has FOR JSON clause and directly stream results into output stream, which is Response.Body in our case because we are returning results to client in the body of response.
Now, we can add methods in TodoController that implement basic CRUD operations.
First, we will add one method that will be called when user calls GET API/Todo URL. This URL will return all objects in Todo table:
// GET api/Todo
[HttpGet]
public async Task Get()
{
await SqlPipe.Stream("select * from Todo FOR JSON PATH", Response.Body, "[]");
}
First note that this is async method (async Task) that will stream results of SQL query into Response.Body. Since Belgrade.SqlClient is async library, you can call Stream method with await keyword.
Some web client that calls /api/Todo will see results of SQL queries formatted as JSON. If you call Todo Get method using, you will get something like:
async Task
Belgrade.SqlClient
await
Todo Get
style="width: 640px; height: 205px" alt="Image 3" data-src="/KB/aspnet/1106622/GETall.png" class="lazyload" data-sizes="auto" data->
As you can see, we need one line of code to return data from your in this REST service. The third parameter in Stream method defines what should be returned if there is no returned data – in our case, empty array.
Now when we have list of all Todo items, we need one method that returns Todo by id:
// GET api/Todo/5
[HttpGet("{id}")]
public async Task Get(int id)
{
var cmd = new SqlCommand("select * from Todo where Id = @id FOR JSON PATH,
WITHOUT_ARRAY_WRAPPER");
cmd.Parameters.AddWithValue("id", id);
await SqlPipe.Stream(cmd, Response.Body, "{}");
}
This is also an async method that will stream results of SQL query into Response.Body. Client that calls /api/Todo/1 will see result of SQL query with FOR JSON clause:
/api/Todo/1
style="width: 640px; height: 206px" alt="Image 4" data-src="/KB/aspnet/1106622/GET1.png" class="lazyload" data-sizes="auto" data->
The third parameter defines what should be returned if there is no returned data – in this case, empty object. Example of response is shown in the following figure:
Ok, now we have implemented the required GET methods, so we will proceed with methods that update data.
In order to add new Todo item, I need a method that reacts on POST request:
[HttpPost]
public async Task Post()
{
string todo = new StreamReader(Request.Body).ReadToEnd();
var cmd = new SqlCommand(
@"insert into Todo
from OPENJSON(@todo)
WITH( Title nvarchar(30), Description nvarchar(4000), Completed bit, TargetDate datetime2)");
cmd.Parameters.AddWithValue("todo", todo);
await SqlCommand.ExecuteNonQuery(cmd);
}
You can notice that I just copied OPENJSON query from the previous section, and wrapped it into C# code. This is async method that will read JSON from request body, define SqlCommand and provide input JSON as parameter. JSON will be parsed in OPENJSON command and inserted into table.
request
If you open some tool that can send Http requests to server like Chrome Poster, you might get the following result:
style="width: 640px; height: 385px" alt="Image 5" data-src="/KB/aspnet/1106622/Poster.png" class="lazyload" data-sizes="auto" data->
Now we need to implement PUT method that updates values in the row specified with id. You can add something like this:
// PUT api/Todo/5
[HttpPut("{id}")]
public async Task Put(int id)
{
string todo = new StreamReader(Request.Body).ReadToEnd();
var cmd = new SqlCommand(
@"update Todo
set Title = json.Title,
Description = json.Description,
Completed = json.completed,
TargetDate = json);
}
This is async method will read JSON from request body, define SqlCommand and provide input JSON and id as parameters. JSON will be parsed in OPENJSON command and row with the specified id will be updated.
Many REST services support both PUT and PATCH methods. PATCH method is similar to PUT, but PUT will overwrite everything and put null values if some fields in the input JSON are missing, while PATCH will update only those fields that are provided in JSON. Code for PATCH might look like the following code:
null
// PATCH api/Todo
[HttpPatch]
public async Task Patch(int id)
{
string todo = new StreamReader(Request.Body).ReadToEnd();
var cmd = new SqlCommand(
@"
update Todo
set Title = ISNULL(json.Title, Title),
Description = ISNULL(json.Description, Description),
Completed = ISNULL(json.Completed, Completed),
TargetDate = ISNULL(json.TargetDate,);
}
You might notice that PATCH is very similar to PUT. Both methods use the similar code and update row in the table by id. The key difference is in ISNULL (json.COLUMN, COLUMN) part.
PUT code will update all cells in the row. If some key:value is not provided in JSON, it will insert NULL value because OPENJSON returns NULL if some key that is specified in WITH clause cannot be found.
ISNULL
json.COLUMN, COLUMN
key:value
NULL
However, this code will check if value in JSON NULL, and if it is not NULL, this value will be written in the column. If value is NULL, then existing column will be written and cell will not be changed. With this simple logic, you can send just a single filed that should be updated, and the others will not be changed.
Finaly, we need a DELETE action that deletes row by id. DELETE action does not require Azure SQL JSON functions that are available in Azure SQL Database, so we need just a simple code:
// DELETE api/Todo/5
[HttpDelete("{id}")]
public async Task Delete(int id)
{
var cmd = new SqlCommand(@"delete Todo where Id = @id");
cmd.Parameters.AddWithValue("id", id);
await SqlCommand.ExecuteNonQuery(cmd);
}
This method will just get provided id from request, and delete row in Todo table by Id. Now we have complete REST Service with a few lines of code for each method.
Id
With JSON support in Azure SQL Database, it is extremely easy to create REST Web service that accepts or returns JSON. In this article, you might see that every REST method is just a few lines of code. You don’t even need something like ORM, class model, etc.
If you need to quickly create small micro-services that expose few tables from your database, this might be a good solution for you. With a little effort, you can even generate code for controller.
In this article, I have placed data access logic in the body of controller because I want a simple example. In practice, you would move this code in a separate data access or repository class and just call it from .
In this code, I have placed SQL queries in C# code. My recommendation would be to create stored procedures for all these queries and just call stored procedures from code. Procedures will be simple and have one or two parameters (id and/or JSON text):
CREATE PROCEDURE dbo.InsertTodo(@TodoJson NVARCHAR(MAX))
AS BEGIN
insert into Todo
from OPENJSON(@todo)
WITH( Title nvarchar(30), Description nvarchar(4000),
Completed bit, TargetDate datetime2)
END
With stored procedures, you will have faster queries and simpler data access logic because you will just put stored procedure name in your C# SqlCommand.
Finally, in this project, connection string is placed inline in Startup.cs code, but you should move it to some configuration file.
This article, along with any associated source code and files, is licensed under The Code Project Open License (CPOL)
Response.Headers.Add("Content-Type", "application/json;charset=utf-16");
[HttpGet]
public async Task Get()
{
// Output stream where results will be written.
await SqlPipe.Stream("select * from Todo FOR JSON PATH", Response.Body, "[]");
}
Response.Body Error
General News Suggestion Question Bug Answer Joke Praise Rant Admin
Use Ctrl+Left/Right to switch messages, Ctrl+Up/Down to switch threads, Ctrl+Shift+Left/Right to switch pages. | https://codeproject.freetls.fastly.net/Articles/1106622/Building-REST-services-with-ASP-NET-Core-Web-API-a?msg=5271044#xx5271044xx | CC-MAIN-2021-49 | refinedweb | 3,504 | 62.58 |
By Alvin Alexander. Last updated: April 18, 2019
Java String "alphanumeric" tip: How to remove non-alphanumeric characters from a Java String.
Here's a sample Java program that shows how you can remove all characters from a Java String other than the alphanumeric characters (i.e., a-Z and 0-9). As you can see, this example program creates a String with all sorts of different characters in it, then uses the
replaceAll method to strip all the characters out of the String other than the patterns
a-zA-Z0-9. The code essentially deletes every other character it finds in the string, leaving only the alphanumeric characters.
package foo; public class StringTest { public static void main(String[] args) { String s = "yo-dude: like, ... []{}this is a string"; s = s.replaceAll("[^a-zA-Z0-9]", ""); System.out.println(s); } } | https://alvinalexander.com/blog/post/java/remove-non-alphanumeric-characters-java-string/ | CC-MAIN-2020-16 | refinedweb | 140 | 55.95 |
In this second swing tutorial we're going to start to add some components to our JFrame window we created in the first tutorial.
The first thing we're going to do is create another class that extends JFrame, we will use this class to add our components and define things like size and our default close operation. Create a class in the same place as the first one and call it something like "MainFrame".
Next we need to make our class extend the JFrame class, to do that we need to add "extends JFrame" to our class, we also need to import JFrame. Your class should now look like this:
import javax.swing.JFrame; public class MainFrame extends JFrame { }
Lets give our class a constructor:
public MainFrame() { }
Lets start to modify our JFrame by using our class. To set the title you need to call the super class constructor by adding this:
super("Hello World");
The String inside the double quotes is the title of our new JFrame window.
Our new class is looking good so far so lets actually tell our application to use it! Go to the first class you created and replace the JFrame line for this:
new MainFrame();
Then copy all the "frame.????" lines into the "MainFrame" class under the "super" line, and remove all the "frame." references, the "MainFrame" class should look like this:
import javax.swing.JFrame; public class MainFrame extends JFrame { public MainFrame() { super("Hello World"); setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE); setVisible(true); setSize(600, 500); } }
Now our application runs as before but with its own class, lets add some more stuff to our "MainFrame" class.
First lets add a Layout Manager. A layout manager's job is to decide where to put all the components we add, like buttons, text fields, etc., and it will reposition and resize everything when we resize the window.
For this example we are going to use a Border Layout, this is a very basic layout manager that lets us choose what position to put our components (Example: Centre).
To use the border layout add this line in your "MainFrame" class:
setLayout(new BorderLayout());
Next import Border Layout by pressing CTRL + SHIFT + O (CMD + SHIFT + O on mac).
Let's add a Text Area to our window, a text area is a box that allows large amounts of text input.
In the "MainFrame" class, before the constructor, add a new instance of a JTextArea, ill call my instance "textArea":
private JTextArea textArea;
Don't forget to import JTextArea.
Lets also add a Button to our window, I don't think I need to explain what that is! Add a new instance of JButton under our JTextArea one, ill call mine btn:
private JButton btn;
Import JButton too.
To initialize these components add this inside our constructor, the String parameter inside JButton() is the text that is displayed on top of the button:
textArea = new JTextArea(); btn = new JButton("Click Me!");
Now we have successfully created our 2 new components, lets add them to our window by adding this code:
add(textArea, BorderLayout.CENTER); add(btn, BorderLayout.SOUTH);
You could also use BorderLayout.NORTH, EAST, WEST and many more!
If you run this now we have a working application and you can type and click the button, they don't do anything yet though. In the next tutorial ill show you how to make it do stuff, but for now here's the complete code for this tutorial:
Main.class
import javax.swing.SwingUtilities; public class Main { public static void main(String[] args) { SwingUtilities.invokeLater(new Runnable() { @Override public void run() { new MainFrame(); } }); } }
MainFrame.class
import java.awt.BorderLayout;); } } | https://caveofprogramming.com/guest-posts/tutorial-2-layout-textareas-and-buttons.html | CC-MAIN-2018-09 | refinedweb | 610 | 70.63 |
Copyright © 2009 W3C® (MIT, ERCIM, Keio), All Rights Reserved. W3C liability, trademark and document use rules apply.
The mobileOK scheme allows content providers to promote their content as being suitable for use on very basic mobile devices. This document provides an overview of the scheme and references the documentation that composes of the W3C mobileOK Scheme. It follows a period of evolution during which the Working Group considered defining two levels of mobileOK conformance, each with its own set of tests. mobileOK is presented here as a simplified and unified scheme in which the relationship with the Best Practices document, the Basic Tests and the Checker is made explicit. Changes since last publication in November 2008 are minor. A complete list of changes is available. The Working Group does not expect further versions of this document.OK is designed to improve the Web experience for users of mobile devices by rewarding content providers that adhere to good practice when delivering content to them.
mobileOK says nothing about what may be delivered to non-mobile devices; furthermore, mobileOK does not imply endorsement or suitability of content. For example, it must not be assumed that mobileOK content is of higher informational value, is more reliable, more trustworthy, is or is not appropriate for children etc.
mobileOK Basic Tests 1.0 [mobileOK] specifies a number of tests that HTTP responses must pass when a URI is requested with a specific set of HTTP headers in the request. The tests are designed to be machine processable and to provide confidence that content will display well on very basic mobile devices.
mobileOK Basic Tests 1.0 is itself based on Mobile Web Best Practices 1.0 [BP], which provides a set of sixty guidelines for making content work well across a wide variety of mobile devices.
The HTTP Request headers used in mobileOK Basic Tests 1.0 identify a hypothetical user agent called the "Default Delivery Context" (DDC). The values of the key properties of the DDC (screen width, formats supported and other basic characteristics) are set at the minimum possible, while still supporting a Web experience.
The DDC is thus not a target to aspire to, it merely sets a base line below which content providers do not need to provide their content. It is Best Practice (see Best Practice [CAPABILITIES]) for content providers, as well as targetting DDC level devices, also to provide experiences for more advanced mobile devices that have capabilities not supported by the DDC.
A software package called the mobileOK Checker [CHECK], has been developed by the Best Practices Working Group to provide automated checking of conformance. The package is in Java, and is open source. It is available under a W3C License.
W3C has created a Web interface as part of the W3C Validator, which uses this package. Other Web based checkers, by dotMobi (see ready.mobi) and CTIC (see TAWDIS) have also been created that adhere to the mobileOK Basic Tests 1.0 [mobileOK].
Content Providers may wish to identify that their content is mobileOK conformant. This means that it can be requested so that the response conforms to mobileOK Basic Tests 1.0 [mobileOK] and hence will provide at least a functional user experience on mobile devices. A claim may only be made of a URI that when dereferenced in the manner described in [mobileOK] yields a response that passes all the tests contained in mobileOK Basic Tests 1.0. Such a claim says nothing about other experiences that may be provided at the same URI, when dereferenced in a different way (e.g. with different User-Agent and Accept HTTP headers).
W3C provides a mobileOK icon that represents a claim that
the content on which the icon is found is mobileOK conformant as described above.
The icon is most appropriately used on desktop representations of a resource for which a mobileOK representation is also available. In such a situation it acts as a signal to a desktop user that the content or service they are using is also available on a mobile device. Display of the mobileOK icon is usually inappropriate on a mobile device since whether the content is usable on their device or not will be fully apparent without it.
When displaying a mobileOK icon, the image should be served from the same server as the resource, not from the W3C site. Note that the image is provided in PNG format which is a further reason why it is not suitable for use on mobileOK representations of pages, though it may be used on other representations.
The icon is issued under W3C copyright and may only be used in accordance with the W3C mobileOK license [LICENSE], the key feature being that it may only be used in representations of resources that, when dereferenced in accordance with the mobileOK Basic Tests 1.0, pass those tests.
To enhance discoverability of mobileOK content, providers may wish to identify their material as being mobileOK using POWDER (see Claiming mobileOK Conformance Using POWDER). Content should then be linked to a claim as described in 2.2.3 Linking Resources to Claims.
The Protocol for Web Description Resources [POWDER] provides a means through which a claim of mobileOK conformance may be made about many resources at once, such as all those available from a Web site. Importantly, POWDER also provides a means of identifying the person, organization or entity that made the claim. These two features make POWDER's Description Resources an ideal transport mechanism for mobileOK conformance claims (mobileOK was a key use case for POWDER).
In the following (fictitious) example, on 25th June 2008 (line 5), the organization described at (line 4) claimed that all the resources available from example.com (lines 9-11)
were mobileOK (line 13). This makes use of a one-class RDF vocabulary with namespace and class name
Conformant.
1 <?xml version="1.0"?> 2 <powder xmlns=""> 3 <attribution> 4 <issuedby src="" /> 5 <issued>2008-06-25T00:00:00</issued> 6 <supportedby src="" /> 7 </attribution> 8 <dr> 9 <iriset> 10 <includehosts>example.com</includehosts> 11 </iriset> 12 <descriptorset> 13 <typeof src="" /> 14 <displaytext>The example.com webiste conforms to mobileOK</displaytext> 15 <displayicon src="" /> 16 </descriptorset> 17 </dr> 18 </powder> (line 4) should lead to an RDF resource that describes the
entity (either the
foaf:Agent or
dcterms:Agent) that provided the Description
Resource. It is open to that organization to provide authentication methods to support its claim of
mobileOK conformance. Note also in line 6 that POWDER's
supportedby element has been used
to refer to, the implication being that the content of the described Web site has
been tested using that checker. Lines 14 and 15 provide textual and graphical data that user agents may
display to end users.
linkElement
All mobileOK resources are HTML. In the following example a powder document is linked using the
link element (line 3). The value of the
rel attribute, "describedby" is namespaced by the
profile attribute of the
head element (line 2) in versions of HTML that support it.
1 <html xmlns=""> 2 <head profile=""> 3 <link rel="describedby" href="powder.xml" type="text/powder+xml"/> 4 <title>Welcome to example.com </title> 5 </head> 6 <body> 7 <p>Today's content is ....</p> 8 </body> 9 </html>
linkHeader
In many application environments it can also be appropriate to use HTTP Link [HTTP Link] headers. The following header is semantically equivalent to the HTML link header above.
Link: <powder.xml>; rel="describedby" type="text/powder+xml";
Other machine readable means of making a claim of mobileOK conformance are available. For example the following RDF triple asserts that the URI is mobileOK conformant:
<> rdf:type <>
Other forms of expressing a claim may become available in the future.
The editors would like to thank members of the BPWG for contributions of various kinds. | http://www.w3.org/TR/2009/NOTE-mobileOK-20090625/ | CC-MAIN-2015-06 | refinedweb | 1,307 | 53 |
Let’s take a intro to how this works in a real project by using the most popular module bundler today, webpack.
All of these tools run on Javascript. In ECMAScript 2015+, you can separate your code into multiple files and
import these files into your application when needed to use their functionality. For example, when building a React application you always write
import React from 'react' at the top of each JS file.
This functionality isn’t built into browsers by default, so modern code bundlers were built to bring this capability in a couple forms: by asynchronously loading javascript as it’s needed, or by combining all the javascript into a single file that’s loaded via a
<script> tag.
Without module bundlers, you would have to manually combine files or load Javascript into HTML with countless
<script> tags, which works, but has several key disadvantages:
<script>tags means more calls to the server, worsening performance. | https://morioh.com/p/6e2a2c5ca7a6 | CC-MAIN-2021-17 | refinedweb | 158 | 54.05 |
#include <FXTable.h>
#include <FXTable.h>
Inheritance diagram for FX::FXTable:
See also:
NULL
0
DEFAULT_MARGIN
Construct a new table.
The table is initially empty, and reports a default size based on the scroll areas's scrollbar placement policy.
[virtual]
Return default width.
Reimplemented from FX::FXScrollArea.
Return default height.
Computes content width.
Computes content height.
Create the server-side resources.
Reimplemented from FX::FXComposite.
Detach the server-side resources.
Perform layout.
Mark this window's layout as dirty.
Reimplemented from FX::FXWindow.
Table widget can receive focus.
Move the focus to this window.
Remove the focus from this window.
Notification that focus moved to new child.
[inline]
Return column header control.
Return row header control.
Change visible rows.
return number of visible rows
Change visible columns.
Return number of visible columns.
Return TRUE if table is editable.
TRUE
Set editable flag.
Show or hide horizontal grid.
Is horizontal grid shown.
Show or hide vertical grid.
Is vertical grid shown.
Get number of rows.
Get number of columns.
Change top cell margin.
Return top cell margin.
Change bottom cell margin.
Return bottom cell margin.
Change left cell margin.
Return left cell margin.
Change right cell margin.
Return right cell margin..
Cancel input mode.
The input control is immediately deleted and the cell will retain its old value. You can also cancel input mode by sending the table an ID_CANCEL_INPUT message.
FALSE.
Determine column containing x.
Returns -1 if x left of first column, and ncols if x right of last column; otherwise, returns column in table containing x.
Determine row containing y.
Returns -1 if y above first row, and nrows if y below last row; otherwise, returns row in table containing y.
Return the item at the given index.
Replace the item with a [possibly subclassed] item.
Set the table size to nr rows and nc columns; all existing items will be removed.
1
Insert new row.
Insert new column.
Remove rows of cells.
Remove column of cells.
Clear single cell.
Clear all cells in the given range.
Remove all items from table.
Scroll to make cell at r,c fully visible.
Return TRUE if item partially visible.
LAYOUT_FIX_HEIGHT
Change column header height mode to fixed or variable.
In variable height mode, the column header will size to fit the contents in it. In fixed mode, the size is explicitly set using setColumnHeaderHeight().
Return column header height mode.
LAYOUT_FIX_WIDTH
Change row header width mode to fixed or variable.
In variable width mode, the row header will size to fit the contents in it. In fixed mode, the size is explicitly set using setRowHeaderWidth().
Return row header width mode.
Change column header height.
Return column header height.
Change row header width.
Return row header width.
Get X coordinate of column.
Get Y coordinate of row.
Change column width.
Get column width.
Change row height.
Get row height.
Change default column width.
Get default column width.
Change default row height.
Get default row height.
Return minimum row height.
Return minimum column width.
Fit row heights to contents.
Fit column widths to contents.
Change column header.
Return text of column header at index.
Change row header.
Return text of row header at index.
Modify cell text.
Return cell text.
Modify cell icon, deleting the old icon if it was owned.
Return cell icon.
Modify cell user-data.
'\t'
'\n'
Extract cells from given range as text.
Overlay text over given cell range.
Return TRUE if its a spanning cell.
Repaint cells between grid lines sr,er and grid lines sc,ec.
Repaint cell at r,c.
Enable item.
Disable item.
Is item enabled..
Return item justification..
Return relative icon and text position.
Change item borders style.
Borders on each side of the item can be turned controlled individually using FXTableItem::LBORDER, FXTableItem::RBORDER, FXTableItem::TBORDER and FXTableItem::BBORDER.
Return item border style.
Change item background stipple style.
return item background stipple style
Change current item.
Get row number of current item.
Get column number of current item.
Is item current.
Change anchor item.
Get row number of anchor item.
Get column number of anchor item.
Get selection start row; returns -1 if no selection.
Get selection start column; returns -1 if no selection.
Get selection end row; returns -1 if no selection.
Get selection end column; returns -1 if no selection.
Is cell selected.
Is row of cells selected.
Is column selected.
Is anything selected.
Select a row.
Select a column.
Select range.
Extend selection.
Kill selection.
Change font.
Return current font.
Obtain colors of various parts.
Change colors of various parts.
Change cell background color for even/odd rows/columns.
Obtain cell background color for even/odd rows/columns.
Change cell border width.
Return cell border width.
Change table style.
Return table style.
Change help text.
Serialize.
Restore window from stream. | http://fox-toolkit.org/ref14/classFX_1_1FXTable.html | CC-MAIN-2021-17 | refinedweb | 800 | 65.49 |
I'm attempting to automate some ID3 tagging with Mutagen, but whenever I attempt to insert unicode characters I have them replaced by question marks.
Smallest test code that results in this error is as follows
from mutagen.id3 import ID3, TALB
audio = ID3()
audio['TALB'] = TALB(encoding=3, text=u'testtest')
audio.save('test.mp3', v1=2)
When run, test.mp3's album tag shows up as
test??test in both my file manager and music player. If I manually enter unicode tags via the file manager the unicode characters display normally without issue.
Things I have already tried in order to fix this problem:
ustring prefix
audio.add(TALB(encoding=3, text=u'testtest')))
I'm using the
v1=2 argument for the
save function as leaving it out results in around half the files not having their tags written (and unicode still being outputted as question marks), and other values refuse to write ID3 tags for any files.
I'm using Windows 10 64bit. My Python environments are Anaconda3 (Python3.4) and Python2.7, both result in the same problem with same code.
So I think your main problem is that your way of testing if the tags are correct has some problems. Let me explain.
For me, this code works:
from mutagen.id3 import ID3, TALB audio = ID3() audio['TALB'] = TALB(encoding=3, text=u'testtest') audio.save("test.mp3",v1=0)
Checking the file in a text editor shows the tags correctly written in unicode.
So why can't you see the tags? Likely because mutagen defaults to writing ID3v2.4 tags which neither Windows File Explorer nor any of the standard Windows media players will read. However, when you have added the
v1=2 argument you have forced mutagen to also write ID3v1 tags. These are readable by File Explorer but unfortunately do not support Unicode. That is why you are seeing the question marks instead. So it us useful, when you want to use Unicode, to add
v1=0 (as I have done) to prevent any ID3v1 tags being written and distracting from the main issue of getting the ID3v2 tags working.
So now move to ID3v2.3 instead of ID3v2.4 and see if that helps:
from mutagen.id3 import ID3, TALB audio = ID3() audio.update_to_v23() audio['TALB'] = TALB(encoding=3, text=u'testtest') audio.save("test.mp3",v1=0,v2_version=3)
Finally, the best way to see what tags are really in the file is to use a dedicated tag editor which comprehensively follows the spec, like Mp3tag. This helps to find out if the problem is how you are writing the tags, or how your player is reading them. | http://m.dlxedu.com/m/askdetail/3/807e12f11d1d72356f674270e2edaedb.html | CC-MAIN-2018-30 | refinedweb | 449 | 67.45 |
We have previously discussed connecting to MQSeries from .NET applications.
That post dealt specifically with connecting to MQSeries to perform queue operations: Puts and Gets. MQSeries also exposes an administrative interface. What are the possibilities for connecting from a .NET app into the MQ Administrative function? At first glance, there seems to be a number of viable options for administratively querying MQSeries from .NET:
1. PCF
2. Using System.DirectoryServices to connect with the MQ ADSI provider
3. COM interop with IBM's MQ ADSI COM library
4. COM interop with IBM's MQAI
But each of these obvious options falls short in one way or another.
PCF stands for, I think, Programmable Command Facility - basically it is a mechanism for sending administrative commands directly over the MQ transport. The use of PCF from .NET involves explicitly formatting messages and placing them on command queues. MQ reads the messages and responds. Using PCF from .NET is possible and has been demonstrated elsewhere. It works, but the programming model is not facile.
The use of System.DirectoryServices namespace is problematic. Apparently the MQ ADSI provider does not fully implement all of the required interfaces, and so using System.DirectoryServices on the MQ provider generates exceptions, timeouts, and etc. This option appears to be not very popular. I couldn't find anyone doing it, and I could find no help on IBM's website.
The MQ-ADSI COM library shipped with MQSeries, found at <MQDIR>\bin\amqmadsi.dll , is intriguing, but there is effectively no documentation on this library from IBM, and no examples to go on, that I could find.
MQAI is a separate option, a facade over top of MQAX and PCF. Again, not facile.
This example uses a fifth option: interaction with MQ's ADSI provider via COM interop with the Active Directory Services library. Don't be confused by the name - I am not saying that MQ stores its administrative data in Active Directory; only that MQ Administrative data can be accessed via the programmatic interface known as ADSI. The most well known subsystem that exposes an ADSI interface is, of course, Active Directory itself. But MQ is another system that does so. In this usage, We are not utilizing the amqmadsi.dll directly; instead, we use the ActiveDs COM library, and specify a binding string (really a COM moniker) like IBMMQSeries://MQHost/hostname to get to MQ. This connects to the MQ ADSI provider.
For the programming model, rather than explicitly putting and getting messages as with PCF, MQ objects are accessed through COM monikers. You can query the queue managers on a machine, the local queues managed by those QMs, the various attributes on those queues, and so on.
Screen shot:
This example is packaged as a Visual Studio solution, but it can also be compiled using just the .NET SDK. Download it here. It queries the properties of MQ objects like Queues, Channels, Queue Managers, and so on. Theoretically, you should also be able to use the MQ ADSI provider to write those properties, but this code does not try that. If anyone cares to take this code and extend it to also update MQ, I'd like to hear about it.
To get this to work, you need to reference the Active Directory Services COM library. To do this in Visual Studio .NET, you need to right click on the project in the Solution Explorer, Add Reference..., select the COM tab, and then select the "Active DS Type Library". This has already been done on the projects included here. Using the .NET SDK, you need to run tlbimp.exe on c:\windows\system32\activeds.tlb to generate the interop assembly, then reference the assembly when compiling this file, eg,
tlbimp c:\windows\system32\activeds.tlb
csc /debug+ /t:exe /r:ActiveDs.dll /out:MQ-via-ADSI.exe MQ-via-ADSI.cs
References:
ADSI Reference from Microsoft:
Programming MQ using the COM interface (from IBM):
-Dino | https://blogs.msdn.microsoft.com/dotnetinterop/2004/11/08/net-and-mq-admin-via-adsi/ | CC-MAIN-2017-47 | refinedweb | 661 | 58.79 |
point in projective 3D space More...
#include <vgl/vgl_point_3d.h>
#include <vgl/vgl_fwd.h>
#include <vcl_iosfwd.h>
#include <vcl_cassert.h>
Go to the source code of this file.
point in projective 3D space
Modifications Peter Vanroose - 4 July 2001 - Added geometric interface like vgl_point_3d Peter Vanroose - 2 July 2001 - Added constructor from 3 planes Peter Vanroose - 1 July 2001 - Renamed data to x_ y_ z_ w_ and inlined constructors Peter Vanroose - 27 June 2001 - Implemented operator== Peter Vanroose - 15 July 2002 - Added coplanar() Guillaume Mersch- 10 Feb 2009 - bug fix in coplanar()
Definition in file vgl_homg_point_3d.h.
Definition at line 317 of file vgl_homg_point_3d.h.
Return the point at the centre of gravity of two given points.
Identical to midpoint(p1,p2). Invalid when both points are at infinity. If only one point is at infinity, that point is returned. inline
Definition at line 294 of file vgl_homg_point_3d.h.
Return the point at the centre of gravity of a set of given points.
There are no rounding errors when Type is e.g. int, if all w() are 1.
Definition at line 307 of file vgl_homg_point_3d.h.
Are three points collinear, i.e., do they lie on a common line?.
Definition at line 38 of file vgl_homg_point_3d.txx.
Return true iff the 4 points are coplanar, i.e., they belong to a common plane.
Definition at line 161 of file vgl_homg.
Return true iff the point is at infinity (an ideal point).
The method checks whether |w| <= tol * max(|x|,|y|,|z|)
Definition at line 156 of file vgl_homg_point_3d.h.
Return the point at a given ratio wrt two other points.
By default, the mid point (ratio=0.5) is returned. Note that the third argument is Type, not double, so the midpoint of e.g. two vgl_homg_point_3d<int> is not a valid concept. But the reflection point of p2 wrt p1 is: in that case f=-1.
Definition at line 282 of file vgl_homg_point_3d.h.
Adding a vector to a point gives a new point at the end of that vector.
If the point is at infinity, nothing happens. Note that vector + point is not defined! It's always point + vector.
Definition at line 198 of file vgl_homg_point_3d.h.
Adding a vector to a point gives the point at the end of that vector.
If the point is at infinity, nothing happens.
Definition at line 210 of file vgl_homg_point_3d.h.
The difference of two points is the vector from second to first point.
This function is only valid if the points are not at infinity.
Definition at line 184 of file vgl_homg_point_3d.h.
Subtracting a vector from a point is the same as adding the inverse vector.
Definition at line 219 of file vgl_homg_point_3d.h.
Subtracting a vector from a point is the same as adding the inverse vector.
Definition at line 226 of file vgl_homg_point_3d.h. 270 of file vgl_homg_point_3d.h. | http://public.kitware.com/vxl/doc/release/core/vgl/html/vgl__homg__point__3d_8h.html | crawl-003 | refinedweb | 483 | 78.65 |
if if if if do something endif endif endif endifIt often develops when a programmer applies the "OneReturnPerFunction rule" blindly and in poor taste, or when mixing both conditions and loops. The best fix is to RefactorLowHangingFruit: simply splitting the entire function in half can lead to two big, ugly functions with lots of arguments and no concept. Instead, refactor the components of the code: often the conditions can be inverted and used as GuardClauses, removing them from the structure; ExtractMethod can convert the contents of loops into separate functions. Interestingly, Dr. LanceMiller? of IBM contrasted this kind of structure with English, where you usually start with a verb and qualify it, e.g. "heat water until boiling". I guess that's an argument for the Perl syntax...? -- PaulMorrison In CodeComplete, SteveMcConnell notes that comprehension decreases beyond three levels of nested 'if' blocks, according to 1986 study by NoamChomsky and GeraldWeinberg. McConnell? refers to the AntiPattern as "Dangerously Deep Nesting". If the arrow is an AntiPattern, consider a whole mountain range:
if get_resource if get_resource if get_resource if get_resource do something free_resource else error_path endif free_resource else error_path endif free_resource else error_path endif free_resource else error_path endifThis style puts much error recovery and clean-up code very far from the thing that spawned the error. HoursOfFun! A good fix: Just make the code completely exception safe. This would ensure nobody cares who cleans up what or how. Ahh, yes. "Just make the code completely exception safe," quotha. Like, how? This is a particularly tricky problem in embedded systems, where the resource being acquired is usually a piece of hardware with distinct setup/setdown conditions. Hmm. How about: (with-resource-foo (foo ...) ... ) ? I guess that is cheating. :) It's a good fix to make one of those objects exception-safe at a time. RefactorDaintily would clean much of the function up, leaving the rest more obvious. It is possible to implement resource-safe exception handling cleanly and elegantly if you have a language like CeePlusPlus which supports the ResourceAcquisitionIsInitialization technique (and this method is by no means exclusive for dealing with such things but just happens to be my personal favourite). However, suggesting that you "just" make the code exception-driven, while correct, is not particularly pragmatic (JustIsaDangerousWord), as often such a refactor has a cascade effect as you shift the error handling around (and usually further up) the call stack. In my experience, ArrowAntiPattern code is symptomatic of code written without attention being paid to proper ModularProgramming techniques, and hence was probably written by a BadProgrammer (an opinion which would strengthen for me proportionally to the scale of the problem: ThreeStrikesAndYouRefactor). Often, a possible solution is to use PolyMorphism to decouple the semantic from the control flow - i.e. you can partition all the bits inside all those if-else blocks into small functions that are called appropriately (polymorphically), while exceptions take care of all those error paths.
callback_list = [] if ( get_resource_A(&callback_list,...) && get_resource_B(&callback_list,...) && ... ) { success_flag = do_work(); } for closure in callback: eval(closure);The true syntax is not too bad, but I've simplified it here to avoid obscuring the main point. Our more complex systems provide a structure that has both "success" and "failure" callback lists, to ensure that error reporting and error recovery are handled correctly, and are reasonably close to the place that caused them.
bool open_some_object(Object *object) { assert(object); if(open_resource_a(&object->a)) { if(open_resource_b(&object->b)) { if(open_resource_c(&object->c)) { return true; } close_resource_b(&object->b); } close_resource_a(&object->a); } return false; }Which looks similar to the anti-pattern, with the exception of the 'return true' in the middle. If the function succeeds, it returns as soon as it's succeeded. If anything fails, the function closes anything which was opened, and then returns false. I've always thought this was quite an elegant way of initializing objects, but maybe I've mislead myself? Is this the anti-pattern, or does it just happen to look like it? Is there a better way of opening objects which are composed of sub-objects?
use_items = [] list.each do |item| if !item.nil? then if item.category == 'foo' then use_items << item end end end...can be rewritten as...
non_nil_items = list.filter {|i| !i.nil?} use_items = non_nil_items.filter {|i| i.category == 'foo'}This example is somewhat contrived, because the code is in Ruby, but Ruby has short circuiting for conditionals, so we didn't have to nest the 2 "if" statements inside the loop in the first place. It's hard to think of pathological starting points in Ruby, though :)
good = true; if (good) { if (! get_resource_1()) { error_handling... close_resource_1; // if applicable (perhaps call it "clean up") good = false; } } if (good) { if (! get_resource_2()) { error_handling... close_resource_2; good = false; } } if (good) { if (! get_resource_3()) { error_handling... close_resource_3; good = false; } } ... if (good) { regular_process... close_resource_1; close_resource_2; close_resource_3; } ...It is easier to insert a new handling level since it is "linear" in nature. It seems a case where HOF may be handy, if you want to take on the issues of moving up the meta scale. This can be improved if nested inside a method that returns after each failed acquisition, so that adding new attempts are wholly linear, and the flag checks vanish Would the calls be nested? I thought the idea was to remove nestedness. I can't see how the above code releases resource_1 if the acquisition of resource_2 fails. Does each "close_resource_n" call "close_resource_m" for every m<=n? Isn't this potentially huge code duplication? I generally assumed diverse resources. For example, resource 1 may be a file, resource 2 a database, resource 3 a pipe, etc. There may be conceptual duplication, but not necessarily implementation duplication. Of course, if they are similar, then please do factor the commonality to a function/method/class/module.
function open_resources { failed_resource = 0; // For each resource if (! get_resource_N()) { close_resources(N); failed_resource = N; } return failed_resource; } function close_resources(int N) { for (rn=1; rn<=N; rn++) { close_resource(rn); } }PS: Gracefully handling failures when closing resources is left to the user. Per above, this may not work well for diverse resources.
class Resources(object): "Manages resource sequence acquisition and release" def __init__(resources, *required): resources.required = list(required) def acquire(resources): resources._acquired = [] for Resource in resources.required: resources._acquired.append(Resource()) class Failures(Exception): def __init__(exception, message, resources): super(Failures, exception).__init__(message) exception.resources = resources def __repr__(exception): reasons = [] for resource, reason in exception.resources: reasons.append(resource+": "+reason) return exception.message + '\n'.join(reasons) def release(resources): "This can be retried upon failure" failures = set() # try to close them all even if some fail to close while resources._acquired: resource = resources._acquired[0] try: resource.release() resources._acquired.pop(0) # there must be one catch Exception, exception: failures.add((resource, exception)) if failures: raise Resources.Failures("Couldn't close the following:", failures) def use(resources, do): try: # note this is not catching exceptions, only ensuring cleanup resources.acquire() do(*resources._acquired) finally: # this always runs resources.release()-- MikeAmy? See also: AvoidExceptionsWheneverPossible
bool function(...) { FOO f = NULL; BAR b = NULL; bool res = false; f = allocate_foo(); if(!f) goto cleanup; b = allocate_bar(); if(!b) goto cleanup; res = ... // do something with f & b cleanup: if(b) release_bar(b); if(f) release_foo(f); return res; }Note that this code is probably only applicable to C (and maybe to some extent to C++), most other languages can achieve similar or better results using automatic cleanup. In the presence of exceptions, this is even a must, as any function in between might throw already. For languages with lazy cleanup (e.g. Java and Python), using a try-finally clause for deterministic cleanup might be necessary. Newer Python (since 2.5, I think) also offers the 'with' statement which achieves similar things with a more friendly interface. -- UlrichEckhardt? What about:
if (! a = allocate_a(...)) goto cleanup; if (! b = allocate_b(...)) goto cleanup; if (! c = allocate_c(...)) goto cleanup; if (! d = allocate_d(...)) goto cleanup; do_something_with(a, b, c, d, etc); ...[Where the remaining code is something like the following, yes?]
return; cleanup: if (a) free(a); if (b) free(b); if (c) free(c); if (d) free(d); | http://c2.com/cgi/wiki?ArrowAntiPattern | CC-MAIN-2014-10 | refinedweb | 1,348 | 57.06 |
Subsets and Splits
No community queries yet
The top public SQL queries from the community will appear here once available.