text
stringlengths
454
608k
url
stringlengths
17
896
dump
stringclasses
91 values
source
stringclasses
1 value
word_count
int64
101
114k
flesch_reading_ease
float64
50
104
UnOfficial PyPi API Wrapper Project description PipPKG API PipPKG is a GUI for Pip that I have been working on in order to make managing your pip packages easier. While the GUI isn't complete I have completed the API wrapper for PyPi.org I will be using in it. I decided to open source it and release it on pip so that you guys could use it in your projects as well. You do not need any API keys to use this. Read on for more documentation. Getting Started Installing with VirtualEnv and Pip cd project-name virtualenv env source env/bin/activate on Unix or source .\env\bin\activate on Windows pip3 install pippkg-api requests Installing without VirtualEnv w/ Pip pip3 install pippkg-api requests --user cd project-name Installing with VirtualEnv w/ Setup.py cd project-name virtualenv env source env/bin/activate on Unix or source .\env\bin\activate on Windows git clone cd PipPKG-API/ python3 setup.py install Installing without VirtualEnv w/ Setup.py git clone cd PipPKG-API/ python3 setup.py install PipPKG API - Packages The packages module in PipPKG API is used for gaining general information about the most recent version of the package. With this module you can grab basically any general info about the package in question. Below is documentation on how to use the module. Getting Started Import PipPKG API Packages like this: from pippkg-api import packages To get the info for the package define pkginfo (or any variable) like so: pkginfo = packages.package('name-of-pip-package') You must include the above two lines in order to use both the Packages and Releases module in PipPKG API. In the rest of this documentation pkginfo will refer to the variable above. package('name of package') - Returns Dictionary of JSON Response The package() module is the function that grabs and stores all the information about the queried package in a dictionary. The rest of the functions then read from this dictionary to return a value. getAuthor(pkginfo) - Returns String The getAuthor() function does exactly what it sounds like. It returns the Author of the package. Usage: author = packages.getAuthor() getLongDesc(pkginfo) - Returns String The getLongDesc() function gets the main description of the package. This is the description you will see when visiting the PyPi page for said module. Usage: longDescription = packages.getLongDesc(pkginfo) getLicense(pkginfo) - Returns String The getLicense() function gets the license of the queried package and returns it. Usage: licenseType = packages.getLicense(pkginfo) getSummary(pkginfo) - Returns String The getSummary() function returns the short summary of the package. Like the one you would see when quering with pip. Usage: summary = packages.getSummary(pkginfo) getReqs(pkginfo) - Returns List The getReqs() function returns a list of requirements for the said project. Usage: requirements = packages.getReqs(pkginfo) >> ['requests', 'colorama'] requirements[0] >> 'requests' getHomePage(pkginfo) - Returns String The getHomePage() function returns the URL for the home page of the pip package. Usage: homepage = packages.getHomePage(pkginfo) >> getClassifiers(pkginfo) - Returns List The getClassifiers() function returns a list of all classifiers of said package. Usage: classifiers = packages.getClassifiers(pkginfo) This Documentation is Not Complete! If you would like to find more functions look above in the source code. Most functions are pretty self explainatory. Project details Release history Release notifications | RSS feed Download files Download the file for your platform. If you're not sure which to choose, learn more about installing packages.
https://pypi.org/project/pippkgapi/
CC-MAIN-2021-21
refinedweb
570
50.43
If sense to give the user another set of credentials. Instead, you can validate users by checking the permissions existing Active Directory accounts. The source code to check a user’s credentials in Active Directory using C# or Visual Basic is actually fairly minimal. This works with both ASP.NET and with Windows Forms (or WPF for that matter) if you’re building a desktop application. Here’s how to do it: (1) Reference the appropriate library You’ll need to make use of the System.DirectoryServices library that comes with Visual Studio. You can add this to your ASP.NET code-behind page or your C# class for your Windows forms like this. using System.DirectoryServices; (2) Create An Authentication Function. Here’s a basic function that will check a user’s permissions on a given domain. Essentially, it will try to create an Active Directory entry using the provided credentials, and it can successfully create a valid entry, we know that the user is authenticated. Otherwise, it’ll return false. public bool AuthenticateActiveDirectory(string Domain, string UserName, string Password) { try { DirectoryEntry entry = new DirectoryEntry(“LDAP://” + Domain, UserName, Password); object nativeObject = entry.NativeObject; return true; } catch (DirectoryServicesCOMException) { return false; } } That’s really all there is to it. Microsoft has an extensive aritcle on MSDN that covers active directory authentication in .NET that you might want to check out as well. This is awesome and will come in handy. I’m usually lazy and just use IIS to handle the AD authentication side. Now that you’ve been able to authenticate to AD, how hard would it be to create a function that allows a user to change their password? It’s actually not that difficult at all. You just create a DirectoryEntry object and use the .Invoke method to change the password. Here’s a link to some example code: Just what I needed. Thanks for the tutorial. That’s very good function and work very welll. Thanks to put a usful article. tanks again This doesn’t work correctly. If you do something like: DirectoryEntry entry = new DirectoryEntry(“LDAP://” + Domain, “someone”, String.Empty); object nativeObject = entry.NativeObject; It does not fail even though I’m sure the user “someone” with no password does not exist.
http://adventuresindevelopment.com/2009/06/02/how-to-authenticate-a-user-in-active-directory-using-aspnet/
CC-MAIN-2017-30
refinedweb
377
51.55
In this article we will see how to use the namespace in C++ code. Consider a situation, when we have two persons with the same name, Zara, in the same class. Whenever we need to differentiate them definitely we would have to use some additional information along with their name, like either the area, if they live in different area or their mother’s or father’s. A namespace definition begins with the keyword namespace followed by the namespace name as follows – namespace namespace_name { // code declarations } To call the namespace-enabled version of either function or variable, prepend (::) the namespace name as follows – name::code; // code could be variable or function. ; } Inside first_space Inside second_space
https://www.tutorialspoint.com/how-to-use-namespaces-in-cplusplus
CC-MAIN-2021-39
refinedweb
115
57.4
can I chain an object into a global namespace ? Discussion in 'Javascript' started by chichkov@gmail.com,63 - William F. Robertson, Jr. - Jul 29, 2003 Reaching into the default namespace when using another namespace.Jason Heyes, Nov 19, 2004, in forum: C++ - Replies: - 1 - Views: - 448 - Woebegone - Nov 19, 2004 How to import whole namespace into global symbol table? (newbie)Scott Simpson, Apr 28, 2006, in forum: Python - Replies: - 2 - Views: - 370 - Peter Otten - Apr 28, 2006 Can define a class for namespace A in some other namespace B?Peng Yu, Sep 14, 2008, in forum: C++ - Replies: - 0 - Views: - 638 - Peng Yu - Sep 14, 2008 conditional import into global namespacemk, Mar 2, 2010, in forum: Python - Replies: - 0 - Views: - 352 - mk - Mar 2, 2010
http://www.thecodingforums.com/threads/can-i-chain-an-object-into-a-global-namespace.927398/
CC-MAIN-2014-35
refinedweb
125
71.14
Beyond Interactive: Notebook Innovation at Netflix By Michelle Ufford, M Pacer, Matthew Seal, and Kyle Kelley Notebooks have rapidly grown in popularity among data scientists to become the de facto standard for quick prototyping and exploratory analysis. At Netflix, we’re pushing the boundaries even further, reimagining what a notebook can be, who can use it, and what they can do with it. And we’re making big investments to help make this vision a reality. In this post, we’ll share our motivations and why we find Jupyter notebooks so compelling. We’ll also introduce components of our notebook infrastructure and explore some of the novel ways we’re using notebooks at Netflix. If you’re short on time, we suggest jumping down to the Use Cases section. Motivations Data powers Netflix. It permeates our thoughts, informs our decisions, and challenges our assumptions. It fuels experimentation and innovation at unprecedented scale. Data helps us discover fantastic content and deliver personalized experiences for our 130 million members around the world. Making this possible is no small feat; it requires extensive engineering and infrastructure support.. To support these use cases at such scale, we’ve built an industry-leading Data Platform which is flexible, powerful, and complex (by necessity). We’ve also built a rich ecosystem of complementary tools and services, such as Genie, a federated job execution service, and Metacat, a federated metastore. These tools simplify the complexity, making it possible to support a broader set of users across the company. To help our users scale, we want to make these tasks as effortless as possible. To help our platform scale, we want to minimize the number of tools we need to support. But how? No single tool could span all of these tasks; what’s more, a single task often requires multiple tools. When we add another layer of abstraction, however, a common pattern emerges across tools and languages: run code, explore data, present results. As it happens, an open source project was designed to do precisely that: Project Jupyter. Jupyter Notebooks the 2017 ACM Software Systems Award — a prestigious honor it shares with Java, Unix, and the Web. To understand why the Jupyter notebook is so compelling for us, consider the core functionality it provides: - a messaging protocol for introspecting and executing code which is language agnostic - an editable file format for describing and capturing code, code output, and markdown notes - a web-based UI for interactively writing and running code as well as visualizing outputs The Jupyter protocol provides a standard messaging API to communicate with kernels that act as computational engines. The protocol enables a composable architecture that separates where content is written (the UI) and where code is executed (the kernel). By isolating the runtime from the interface, notebooks can span multiple languages while maintaining flexibility in how the execution environment is configured. If a kernel exists for a language that knows how to communicate using the Jupyter protocol, notebooks can run code by sending messages back and forth with that kernel. Backing all this is a file format that stores both code and results together. This means results can be accessed later without needing to rerun the code. In addition, the notebook stores rich prose to give context to what’s happening within the notebook. This makes it an ideal format for communicating business context, documenting assumptions, annotating code, describing conclusions, and more. Use Cases Of our many use cases, the most common ways we’re using notebooks today are: data access, notebook templates, and scheduling notebooks. Data Access Netflix our entire platform. Today, notebooks are the most popular tool for working with data at Netflix. Notebook Templates As we expanded platform support for notebooks, we began to introduce new capabilities to meet new use cases. From this work emerged parameterized notebooks. A parameterized notebook is exactly what it sounds like: a notebook which allows you to specify parameters in your code and accept input values at runtime. This provides an excellent mechanism for users to define notebooks as reusable templates. Our users have found a surprising number of uses for these templates. Some of the most common ones are: - Scheduling Notebooks One of the more novel ways we’re leveraging notebooks is as a unifying layer for scheduling workflows. Since each notebook can run against an arbitrary kernel, we can support any execution environment a user has defined. And because notebooks describe a linear flow of execution, broken up by cells, we can map failure to particular cells. This allows users to describe a short narrative of execution and visualizations that we can accurately report against when running at a later point in time. This paradigm means we can use notebooks for interactive work and smoothly move to scheduling that work to run recurrently. For users, this is very convenient. Many users construct an entire workflow in a notebook, only to have to copy/paste it into separate files for scheduling when they’re ready to deploy it. By treating notebooks as a logical workflow, we can easily schedule it the same as any other workflow. We can schedule other types of work through notebooks, too. When a Spark or Presto job executes from the scheduler, the source code is injected into a newly-created notebook and executed. That notebook then becomes an immutable historical record, containing all related artifacts — including source code, parameters, runtime config, execution logs, error messages, and so on. When troubleshooting failures, this offers a quick entry point for investigation, as all relevant information is colocated and the notebook can be launched for interactive debugging. Notebook Infrastructure Supporting these use cases at Netflix scale requires extensive supporting infrastructure. Let’s briefly introduce some of the projects we’ll be talking about. nteract is a next-gen React-based UI for Jupyter notebooks. It provides a simple, intuitive interface and offers several improvements over the classic Jupyter UI, such as inline cell toolbars, drag and droppable cells, and a built-in data explorer. Papermill is a library for parameterizing, executing, and analyzing Jupyter notebooks. With it, you can spawn multiple notebooks with different parameter sets and execute them concurrently. Papermill can also help collect and summarize metrics from a collection of notebooks. Commuter is a lightweight, vertically-scalable service for viewing and sharing notebooks. It provides a Jupyter-compatible version of the contents API and makes it trivial to read notebooks stored locally or on Amazon S3. It also offers a directory explorer for finding and sharing notebooks. Titus is a container management platform that provides scalable and reliable container execution and cloud-native integration with Amazon AWS. Titus was built internally at Netflix and is used in production to power Netflix streaming, recommendation, and content systems. We explore this architecture in our follow-up blog post, Scheduling Notebooks at Netflix. For the purposes of this post, we’ll just introduce three of its fundamental components: storage, compute, and interface. Storage The Netflix Data Platform relies on Amazon S3 and EFS for cloud storage, which notebooks treat as virtual filesystems. This means each user has a home directory on EFS, which contains a personal workspace for notebooks. This workspace is where we store any notebook created or uploaded by a user. This is also where all reading and writing activity occurs when a user launches a notebook interactively. We rely on a combination of [workspace + filename] to form the notebook’s namespace, e.g. /efs/users/kylek/notebooks/MySparkJob.ipynb. We use this namespace for viewing, sharing, and scheduling notebooks. This convention prevents collisions and makes it easy to identify both the user and the location of the notebook in the EFS volume. We can rely on the workspace path to abstract away the complexity of cloud-based storage from users. For example, only the filename of a notebook is displayed in directory listings, e.g. MySparkJob.ipynb. This same file is accessible at ~/notebooks/MySparkJob.ipynb from a terminal. When the user schedules a notebook, the scheduler copies the user’s notebook from EFS to a common directory on S3. The notebook on S3 becomes the source of truth for the scheduler, or source notebook. Each time the scheduler runs a notebook, it instantiates a new notebook from the source notebook. This new notebook is what actually executes and becomes an immutable record of that execution, containing the code, output, and logs from each cell. We refer to this as the output notebook. Collaboration is fundamental to how we work at Netflix. It came as no surprise then when users started sharing notebook URLs. As this practice grew, we ran into frequent problems with accidental overwrites caused by multiple people concurrently accessing the same notebook . Our users wanted a way to share their active notebook in a read-only state. This led to the creation of Commuter. Behind the scenes, Commuter surfaces the Jupyter APIs for /files and /api/contents to list directories, view file contents, and access file metadata. This means users can safely view notebooks without affecting production jobs or live-running notebooks. Compute Managing compute resources is one of the most challenging parts of working with data. This is especially true at Netflix, where we employ a highly-scalable containerized architecture on AWS. All jobs on the Data Platform run on containers — including queries, pipelines, and notebooks. Naturally, we wanted to abstract away as much of this complexity as possible. A container is provisioned when a user launches a notebook server. We provide reasonable defaults for container resources, which works for ~87.3% of execution patterns. When that’s not enough, users can request more resources using a simple interface. We also provide a unified execution environment with a prepared container image. The image has common libraries and an array of default kernels preinstalled. Not everything in the image is static — our kernels pull the most recent versions of Spark and the latest cluster configurations for our platform. This reduces the friction and setup time for new notebooks and generally keeps us to a single execution environment. Under the hood we’re managing the orchestration and environments with Titus, our Docker container management service. We further wrap that service by managing the user’s particular server configuration and image. The image also includes user security groups and roles, as well as common environment variables for identity within included libraries. This means our users can spend less time on infrastructure and more time on data. Interface Earlier we described our vision for notebooks to become the tool of choice for working with data. But this presents an interesting challenge: how can a single interface support all users? We don’t fully know the answer yet, but we have some ideas. We know we want to lean into simplicity. This means an intuitive UI with a minimalistic aesthetic, and it also requires a thoughtful UX that makes it easy to do the hard things. This philosophy aligns well with the goals of nteract, a React-based frontend for Jupyter notebooks. It emphasizes simplicity and composability as core design principles, which makes it an ideal building block for the work we want to do. One of the most frequent complaints we heard from users is the lack of native data visualization across language boundaries, especially for non-Python languages. nteract’s Data Explorer is a good example of how we can make the hard things simpler by providing a language-agnostic way to explore data quickly. You can see Data Explorer in action in this sample notebook on MyBinder. (please note: it may take a minute to load) We’re also introducing native support for parametrization, which makes it easier to schedule notebooks and create reusable templates. Although notebooks are already offering a lot of value at Netflix, we’ve just begun. We know we need to make investments in both the frontend and backend to improve the overall notebook experience. Our work over the next 12 months is focused on improving reliability, visibility, and collaboration. Context is paramount for users, which is why we’re increasing visibility into cluster status, kernel state, job history, and more. We’re also working on automatic version control, native in-app scheduling, better support for visualizing Spark DataFrames, and greater stability for our Scala kernel. We’ll go into more detail on this work in a future blog post. Open Source Projects Netflix has long been a proponent of open source. We value the energy, open standards, and exchange of ideas that emerge from open source collaborations. Many of the applications we developed for the Netflix Data Platform have already been open sourced through Netflix OSS. We are also intentional about not creating one-off solutions or succumbing to “Not Invented Here” mentality. Whenever possible, we leverage and contribute to existing open source projects, such as Spark, Jupyter, and pandas. The infrastructure we’ve described relies heavily on the Project Jupyter ecosystem, but there are some places where we diverge. Most notably, we have chosen nteract as the notebook UI for Netflix. We made this decision for many reasons, including alignment with our technology stack and design philosophies. As we push the limits of what a notebook can do, we will likely create new tools, libraries, and services. These projects will also be open sourced as part of the nteract ecosystem. We recognize that what makes sense for Netflix does not necessarily make sense for everyone. We have designed these projects with modularity in mind. This makes it possible to pick and choose only the components that make sense for your environment, e.g. Papermill, without requiring a commitment to the entire ecosystem. What’s Next As a platform team, our responsibility is to enable Netflixers to do amazing things with data. Notebooks are already having a dramatic impact at Netflix. With the significant investments we’re making in this space, we’re excited to see this impact grow. If you’d like to be a part of it, check out our job openings. Phew! Thanks for sticking with us through this long post. We’ve just scratched the surface of what we’re doing with notebooks. This post is part one in a series on notebooks at Netflix we’ll be releasing over the coming weeks. You can follow us on Medium for more from Netflix and check out the currently released articles below: - Part I: Notebook Innovation (this post) - Part II: Scheduling Notebooks We’re thrilled to sponsor this year’s JupyterCon. If you’re attending, check out one of the 5 talks by our engineers, or swing by our booth to talk about Jupyter, nteract, or data with us. - 8/22 1:30 PM — How to Build on top of Jupyter’s Protocols, Kyle Kelley - 8/23 1:50 PM — Scheduled Notebooks: Manageable and traceable code execution, Matthew Seal - 8/23 2:40 PM — Notebooks @ Netflix: From Analytics to Engineering, Michelle Ufford, Kyle Kelley - 8/23 5:00 PM — Making beautiful objects with Jupyter, M Pacer - 8/24 2:40 PM — Jupyter’s configuration system, M Pacer et. al. - 8/25 9AM — 5PM JupyterCon Community Sprint Day There are more ways to learn from Netflix Data and we’re happy to share: - @NetflixData on Twitter - Netflix Data talks on YouTube - Netflix Research website You can also stay up to date with nteract via their mailing list and blog!
https://medium.com/netflix-techblog/notebook-innovation-591ee3221233?utm_source=wanqu.co&utm_campaign=Wanqu+Daily&utm_medium=website
CC-MAIN-2019-30
refinedweb
2,569
53.81
Organizer of the contact between SwTextNodes and grammar checker. More... #include <IGrammarContact.hxx> Organizer of the contact between SwTextNodes and grammar checker. Definition at line 29 of file IGrammarContact.hxx. Definition at line 57 of file IGrammarContact.hxx. finishGrammarCheck() has to be called if a grammar checking has been completed for a text node. If this text node has not been hidden by the current proxy list it will be repainted. Otherwise the proxy list replaces the old list and the repaint will be triggered by a timer Implemented in SwGrammarContact. Referenced by finishGrammarCheck(). getGrammarCheck checks if the given text node is blocked by the current cursor if not, the normal markup list is returned if blocked, it will return a markup list "proxy" Implemented in SwGrammarContact. Referenced by SwXTextMarkup::commitMultiTextMarkup(), SwXTextMarkup::commitStringMarkup(), and lcl_SetWrong(). Update cursor position reacts to a change of the current input cursor As long as the cursor in inside a paragraph, the grammar checking does not show new grammar faults. When the cursor leaves the paragraph, these faults are shown. Implemented in SwGrammarContact. Referenced by SwCursorShell::UpdateCursorPos().
https://docs.libreoffice.org/sw/html/classIGrammarContact.html
CC-MAIN-2019-35
refinedweb
182
50.23
KiokuX::Model::Role::Annotations - A role for adding annotations to objects in a KiokuDB database.); This role provides a mechanism to annotate objects with other objects. Add annotations for an object. The first form requires the annotation objects to do the role KiokuX::Model::Role::Annotations::Annotation. The second form has no restrictions on the annotation objects, but requires the key object to be specified explicitly. Remove the specified annotations. Returns true if the object has been annotated. Returns a list of all annotations for the object. The role is actually parameterizable. Defaults to annotations. This string is prepended to the annotated object's ID and used as the key for the annotation set for that object. Dfeaults to the value of namespace. Used to provide the names of all the methods (the string annotations in the above methods would be replaced by the value of this). Defaults to object_to_id (see KiokuDB). The function to map from an object to an ID string, can be a code reference or a string for a method name to be invoked on the model object. The default implementation concatenates namespace, a colon and id_callback to provide the key of the set. If the key object is actually a string, the string is used as is. Can be overridden with a method name to be invoked on the model, or a code reference.
http://search.cpan.org/dist/KiokuX-Model-Role-Annotations/lib/KiokuX/Model/Role/Annotations.pm
CC-MAIN-2016-40
refinedweb
229
58.99
Guys i have been working with Java and each day it's a new error or a new challenge. What I'm trying to here is to move files from a specific "C:\\MSGRTS\\IN\\" folder to another folder "C:\\MSGRTS\\OUT\\" Firstly i have tested moving a single file from one folder to another and it worked pretty well. Here is the code: import java.io.File; public class Test { public static void main(String[] args) { /** Source file to move **/ File f1 = new File("c:\\MSGRTS\\IN\\RTSE200600040324030MZM.txt"); /** New location **/ f1.renameTo(new File("c:\\MSGRTS\\OUT\\RTSE200600040324030MZM.txt")); } } Now what i want to do is to move multiple files (more than one file) to a specific folder and i don't know how. NB: Both folders(IN and OUT) are located in the same machine. Can somebody help me out please? Regards, hellboy83
http://forums.devshed.com/java-help-9/copying-multiple-files-folder-564926.html
CC-MAIN-2015-27
refinedweb
146
72.76
!! 37 thoughts on “Thread synchronization, object level locking and class level locking” ===================================How to lock Object=================================== class A { synchronized void test1() { for(int i=0;i<100;i++) { System.out.println(i); } } synchronized void test2() { for(int i=200;i<300;i++) { System.out.println(i); } } } class B extends Thread { A a1; B(A a1) { this.a1=a1; } public void run() { a1.test1(); } } class C extends Thread { A a1; C(A a1) { this.a1=a1; } public void run() { a1.test2(); } } public class Manager1 { public static void main(String[] args) { A a1=new A(); //A a2=new A(); B b1=new B(a1); C c1=new C(a1); //Here is chance to currpt the data because same references b1.start(); //If not same ref then no chance to currpt data System.out.println("============"); c1.start(); } } I have confusion in Thread,What is different b/w Class level lock and Object level lock??? Thanks.. Mind blowing Lokesh.. Nice article Lokesh Hi Lokesh, I have always seen class level locking in my coding but never have i come across a requirement where i require an object level lock.Can you please tell me any real life scenario where we can use Object Level locking? Hi Sir, Really nice article, but i have doubt over one point can u pls elaborate it with example… 7. It’s possible that both static synchronized and non static synchronized method can run simultaneously or concurrently because they lock on different object. Hi, I would like to elaborate on you 7th point. As far as static synchronized methods are concerned, we are not taking any object into consideration as all static methods are class methods and can only be invoked by class variables. So, we can only have class lock on static methods. Now, for non static synchronized methods, we are talking about object only as these methods invoke on object. So, here we can have class locks as well as objects locks. So, two methods one is static and other is non static can take lock on object and class. In this way both the methods can run simultaneously. Hope this thing is clear to you. Thanks Hi Lokesh, Thanks for the article on synchronization. I liked it ..But I have query in my mind that . you said Synchronization method will allow lock on OBJECT level and by adding static synchronization it add the functionality to allow lock on CLASS level.. But in the below code i have created 2 OBJECTS of same class but still i am not getting the desired effect of synchronization block .. but when i applied static to it hen it was working.. Please explain.. Thanks in advance… I executed the above program. Below are the outputs: When method is static: 0 1 2 3 4 5 0 1 2 3 4 5 When method is non-static: 0 0 1 1 2 2 3 3 4 4 5 5 When method is static: class level lock i.e. both instances of Thread3 are locked by first obj and then obj2. They execute the method in sequence. When method is non-static: Both instances execute the method independently in their instances. In fact, in non-static mode they do not share any shared resource/method. So if you drop the synchronized keyword itself, it doesn’t make any difference. synchronized keyword should be used to protect some shared resource which will be accesses simultaneously by multiple threads. Here testSync() method does not do anything like that. hii lokesh sir.. i have a general doubt as. can we lock object except synchronization concept.. i mean am creating object like , Test t=new Test(); now i want to lock t object with out using sync concept.. is it possible You can use Lock implementations of thank u sir.. i have already referred these docs.. but i didnt get anything.. that is y am asking u So basically you want a simple tutorial on how to use these lock objects, right? I will post one for you soon. thanks in advance What changes should i make so that the output when the method is non static is equal to the output using static method. I don’t want to use static keyword. I just want to know how can we achieve this for non static methods. And one more thing. Can you please explain what object should we keep in synchronized public void testsync(‘object’) between the brackets. How do we decide what to keep here. I saw on most websites that thy keep a variable of the class here, even if that variable is not used anywhere in the method. Thanks in advance Hi Lokesh, Thanks for you articles, they are great!! I would like to mention one thing though that your explanation on Class Level Lock is bit misleading: .” Static code doesn’t belong to any instance, so it has no relation with instances at all. Yes out of 100 threads contending only one thread having managed to acquire the class level lock will be able to execute the static block of code. This should be enough, as code block belongs to instances (non-static methods) can still be executed in different thread at the same time even if they are guarded with Object level lock. Thanks, Kamal Excellent tutorial.. Thank you... Note:- In comment box, please put your code inside [java] ... [/java] OR [xml] ... [/xml] tags otherwise it may not appear as intended.
http://howtodoinjava.com/2013/03/08/thread-synchronization-object-level-locking-and-class-level-locking/
CC-MAIN-2015-11
refinedweb
910
74.29
IV problems Java -> C#843811 May 16, 2006 7:48 PM I have a problem encrypting data in Java to match C# results. Its a RC2 encryption using CBC mode. my problem comes with the IV parameter, cause they (C#) use a 16 byte IV. Ive tried to apply it to my Java code with no luck, using RC2ParameterSpec which only takes the first 8 bytes so the results dont match, and the IvParameterSpec but get an exception when called upon saying it must be 8 bytes long. All documentation ive seen points to 8 byte IVs, so my question is Is it possible to make it work with a 16 byte IV?, if so, how? This content has been marked as final. Show 9 replies 1. Re: IV problems Java -> C#843811 May 16, 2006 8:04 PM (in response to 843811)RC2ParameterSpec has a constructor that allows you to specify which bytes to use for the IV. Have you tried this? (its the 3rd constructor, link isnt posting right)(int,%20byte[],%20int) You'd still have to make sure that you're using the same IV as the C# side. 2. Re: IV problems Java -> C#843811 May 16, 2006 8:27 PM (in response to 843811)I am using the same IV as the one in the C# code. I also have tried the RC2ParameterSpec giving the offset, but get the same problem since it only takes 8 bytes from the IV, just starts from the given offset instead of the beggining of the IV and takes the next 7 bytes. 3. Re: IV problems Java -> C#843811 May 16, 2006 8:36 PM (in response to 843811)ouch, you're right. sorry about that. I replied too fast... I guess the next question is: have you looked at the Bouncy Castle? I'll admit I havent done anything with RC2, but BC tends to have a lot more options than the JCE. 4. Re: IV problems Java -> C#843811 May 16, 2006 8:45 PM (in response to 843811)RC2 is an 8-byte block cipher, so a 16 byte IV doesn't make any sense. I don't think the C# routines are really using a 16 byte IV. 5. Re: IV problems Java -> C#843811 May 16, 2006 9:26 PM (in response to 843811)well maybe this will help here is the C# code : Private bytIV() As Byte = {123, 231, 20, 6, 70, 45, 24, 67, 8, 65, 32, 45, 154, 222, 144, 43} Private Const sKey = "Key" Public Function EncryptText(ByVal sValue As String) As String Dim bytValue() As Byte Dim bytKey() As Byte Dim bytEncoded() As Byte Dim iLen As Integer, iRemaining As Integer Dim objMS As New MemoryStream() Dim objCrypt As CryptoStream Dim objRM As RC2CryptoServiceProvider Try bytValue = Encoding.UTF8.GetBytes(sValue.ToCharArray) iLen = Len(sKey) bytKey = Encoding.ASCII.GetBytes(sKey.ToCharArray) objRM = New RC2CryptoServiceProvider() objCrypt = New CryptoStream(objMS, objRM.CreateEncryptor(bytKey, bytIV), CryptoStreamMode.Write) objCrypt.Write(bytValue, 0, bytValue.Length) objCrypt.FlushFinalBlock() bytEncoded = objMS.ToArray objMS.Close() objCrypt.Close() Return Convert.ToBase64String(bytEncoded) Catch err As System.Exception Throw New Exception("error " & System.Reflection.MethodInfo.GetCurrentMethod.Name & " function:" & err.Message) Finally If Not objMS Is Nothing Then objMS.Close() If Not objCrypt Is Nothing Then objCrypt.Close() objRM = Nothing objMS = Nothing objCrypt = Nothing End Try and here is my adaptation in Java public class RC2 { public static String execute(String data) { String instr = data; String rutenc; String key = "Key"; byte[] encriptado; byte[] aEncriptar; byte[] mikey = key.getBytes(); byte[] iv = {(byte) 123, (byte) 231, (byte) 20, (byte) 6, (byte) 70, (byte) 45, (byte) 24, (byte) 67, (byte) 8, (byte) 65, (byte) 32, (byte) 45, (byte) 154, (byte) 222, (byte) 144, (byte) 43}; try { SecretKeySpec key = new SecretKeySpec(mikey, "RC2"); RC2ParameterSpec rc2Spec = new RC2ParameterSpec(128, iv); Cipher cipher = Cipher.getInstance("RC2/CBC/PKCS5Padding"); cipher.init(Cipher.ENCRYPT_MODE, key, rc2Spec); aEncriptar = instr.getBytes("UTF8"); encriptado = cipher.doFinal(aEncriptar); rutenc = new BASE64Encoder().encodeBuffer(encriptado); return(rutenc); } catch (Exception e) { System.out.println("Error : " + e); return(null); } } } 6. Re: IV problems Java -> C#843811 May 16, 2006 9:43 PM (in response to 843811)krim- That is not C#, its VB.NET. But the libraries are all the same, so its just a matter of syntax. I dont have a VB.NET compiler or the time to translate your code right now, but I did throw a couple lines together in C# that attempts to set a 16-byte IV for RC2. It throws an error. Unhandled Exception: System.Security.Cryptography.CryptographicException: Specified initialization vector (IV) does not match the block size for this algorithm. at System.Security.Cryptography.SymmetricAlgorithm.set_IV(Byte[] value) at RC2_IV.MyMainClass.Main() Also, checking the length of the IV byte array returns 8. I)); } } } use an 8-byte IV. Or try your java code only using the first 8 bytes of the IV. Maybe the VB side's RC2 Provider class is just taking the first 8 bytes of the array... 7. Re: IV problems Java -> C#843811 May 17, 2006 9:06 AM (in response to 843811)Thanks for the help cdelikat, you where right, ive managed to recreate the VB.net class and run some tests, it was taking the first 8 bytes of the array. Then i just replaced RC2ParameterSpec rc2Spec = new RC2ParameterSpec(128, iv) for IvParameterSpec ivSpec = new IvParameterSpec(iv,0,8) in my java code and it worked like a charm. 8. Re: IV problems Java -> C#979215 Dec 11, 2012 10:58 AM (in response to 843811.. 9. Re: IV problems Java -> C#Kayaman Dec 11, 2012 11:19 AM (in response to 979215)..
https://community.oracle.com/message/6400540
CC-MAIN-2016-40
refinedweb
941
64.91
This tutorial assumes you have basic familiarity with React, Apollo, and Neo4j While planning my most recent side project, I decided to play with a feature that I've always wanted to mess with on the front end, drag and drop functionality. It didn't take long to find out that there are a number of highly regarded drag and drop libraries for React but, after reading docs and reviews I decided that React-beautiful-dnd was going to fit my use case. In addition it came boxed up with a very nice free tutorial course which you can find here. None of the code pertaining to the drag and drop functionality is mine, I adapted it from the tutorial, my only contribution being that I created it with hooks vs. class components. You'll need to complete their tutorial before you start this one Lets get started! After you've completed the drag and drop tutorial from Egghead, to start here all you need to do is pick up the starter GRANDstack project, clone it and get it spun up in your preferred IDE. After you've got the project up and running we'll need to add these types to your schema.graphl file: type Task { id: ID! content: String! column: Column @relation(name: "BELONGS_TO", direction: "OUT") } type Column { id: ID! title: String! tasks: [Task] @relation(name: "BELONGS_TO", direction: "IN") table: Table @relation(name: "BELONGS_TO", direction: "OUT") taskIds: [ID] } type Table { id: ID! title: String! columns: [Column] @relation(name: "BELONGS_TO", direction: "IN") columnOrder: [ID] } When our data is added our graph will look something like this. Lets go ahead and add data to our graph, open the Neo4j desktop, copy and paste this Cypher code: CREATE(t1:Table {id: "t1", title: "Test Table", columnOrder: []}), (c1:Column {id: "c1", title: "New Test Column", taskIds: []}), (c2:Column {id: "c2", title: "New Test Column 2", taskIds: []}), (c3:Column {id: "c3", title: "New Test Column 3", taskIds: []}), (tk1:Task {id: "tk1", content: "Task 1"}), (tk2:Task {id: "tk2", content: "Task 2"}), (tk3:Task {id: "tk3", content: "Task 3"}) with t1, c1, c2, c3, tk1, tk2, tk3 CREATE (t1)<-[:BELONGS_TO]-(c1) CREATE (t1)<-[:BELONGS_TO]-(c2) CREATE (t1)<-[:BELONGS_TO]-(c3) CREATE (c1)<-[:BELONGS_TO]-(tk1) CREATE (c1)<-[:BELONGS_TO]-(tk2) CREATE (c1)<-[:BELONGS_TO]-(tk3) This will create the graph structure we're after. Next, run these two cypher commands: match(t:Table) match(c:Column) with t, collect(c.id) as ids set t.columnOrder = ids and match(c:Column {id: "c1"}) match(t:Task) with c, collect(t.id) as ids set c.taskIds = ids This sets up the initial ids and ensure that our columns start out correctly. With that done we'll be able to get started. Here's a link to GitHub repository for the completed project. You'll be picking up at the point where you've got multiple columns and are able to swap the order of tasks and also swap them between columns. Up until this point, there's been no back end for the project so any changes that you've made will be undone when you refresh the browser or navigate away. Additionally, we're getting our application state from an object that's been created vs. calling API and that's what we'll add and fix next. If you haven't cloned the repo and have instead been following along with the Egghead.io tutorial adding Apollo to our project is going to be easy. Simply install it with yarn or npm whichever your preferred method for me, it's yarn: yarn add @apollo/client In previous versions of Apollo you'd need to install quite a few other packages but in V3 they all come bundled together. After we've installed Apollo we need to create a new client in the root of our application: index.js import React from 'react'; import ReactDOM from 'react-dom'; import './index.css'; import '@atlaskit/css-reset'; import App from './App'; import {ApolloClient, ApolloProvider, InMemoryCache} from "@apollo/client"; const client = new ApolloClient({ uri: process.env.REACT_APP_GRAPHQL_URI || '', cache: new InMemoryCache(), }) ReactDOM.render( <React.StrictMode> <ApolloProvider client={client}> <App /> </ApolloProvider> </React.StrictMode>, document.getElementById('root') ); And that's all we need to get up and running with Apollo Client, make sure you've changed the appropriate environment variables or pointed the client at the correct locally running GraphQL API. With that done we're able to go ahead and start querying our Neo4j instance and making the application update and maintain our data in real time. In our App.js file we're going to add a GraphQL query and some mutations that will allow us to capture our application's state. First we'll need to import our needed tools from @apollo/client: import { gql, useMutation, useQuery } from "@apollo/client"; Then we can create our query, for brevity I'm including this in the App.js file but as the size of your application grows you might consider breaking queries and mutations out into their own files. First, we'll want to get our table or page and it's associated columns and tasks from our Neo4j instance. In this case, I'm calling the table by name: const GET_TABLE = gql` query GetTables($title: String){ Table(title: $title){ id title columnOrder columns{ id title taskIds tasks{ id content } } } } ` This query allows us to get the specific table we're after. It pulls the columns out and tasks along with it. In order to use the query we need to add it to our component: const {loading, error, data} = useQuery(GET_TABLE, {variables: 'Test Table'}); This allows us to add directly query our Neo4j instance and get that data we need but first we'll need to make some changes to the application as a whole and manipulate the data returned to fit our current structure. Data Object From Egghead tutorial At the current state of the application you should be using this initialData object to set your state. However now that we're going to be pulling data in via our API well need to change it from this: const initialData = { tasks: { 'task-1': {id: 'task-1', content: 'Take out the garbage'}, 'task-2': {id: 'task-2', content: 'Watch my favorite show'}, 'task-3': {id: 'task-3', content: 'Charge my phone'}, 'task-4': {id: 'task-4', content: 'Cook dinner'}, }, columns: { 'column-1': { id: 'column-1', title: 'To do', taskIds: ['task-1', 'task-2', 'task-3', 'task-4'], }, 'column-2': { id: 'column-2', title: 'In Progress', taskIds: [], }, 'column-3': { id: 'column-3', title: 'Done', taskIds: [], } }, columnOrder: ['column-1', 'column-2', 'column-3'], }; to this: const initialData = { tasks: { }, columns: { }, columnOrder: [] } This gives us the structure of the data we expect before the application is actually able to load it, keeping us from getting rendering and null errors. To ensure that we're getting our data correctly from the API and not encountering async errors we're going to add useEffect and make use of Apollo's loading, and error states. useEffect(() => { if (data) { setTable(data) } }, [data]) if (loading) { return <div>...Loading</div> } if (error) { console.warn(error) } These actions take place before the component has rendered allowing data to be fetched and more importantly for our fetched data to be reshaped into the form our application is expecting. We do this in our setTable function, which is called in useEffect once it's verified that we have data. const setTable = (data) => { const {Table} = data; const tasks = {}; const columns = {}; const columnOrder = Table[0].columnOrder; // Pull all tasks out into their own object Table[0].columns.forEach((col) => { col.tasks.forEach((task) => { tasks[task.id] = {id: task.id, content: task.content} }) }); // Pull out all columns and their associated task ids Table[0].columns.forEach((col) => { columns[col.id] = {id: col.id, title: col.title, taskIds: col.taskIds} }) const table = { tasks, columns, columnOrder } setState(table) } This step is important because our data returned from our GraphQL API is in the shape we requested in it from out GET_TABLE query, and needs to be reshaped in order to properly fit our application. As it is, this gives us a basic frame work to start saving the state changes of our data in our data base. Saving Column Order The first thing we're going to add to the application is the ability for the application to save changes in the order of tasks on a particular column. To do this, we'll add a mutation to update the state of the column, this Mutation is automatically created for us by the GRANDstack's augmented schema functionality. In application we need to send the mutation with all of the info that the column has and in this case we're interested in returning the column ID. const COL_UPDATE = gql` mutation UpdateColumn($id: ID!, $title: String, $taskIds: [ID]){ UpdateColumn(id: $id, title: $title, taskIds: $taskIds){ id } } ` We'll then add the useMutation hook to our application: const [colUpdate] = useMutation(COL_UPDATE) I've omitted the optional error and data properties and I'll be handling this in a very simple way in our onDragEnd function. Where there's a column update we'll add the update function, pardon the wall of text that follows: const onDragEnd = (result) => { const {destination, source, draggableId} = result; if (!destination) { return; } if ( destination.droppableId === source && destination.index === source.index ) { return; } const start = state.columns[source.droppableId]; const finish = state.columns[destination.droppableId] if (start === finish) { const newTaskIds = [...start.taskIds] newTaskIds.splice(source.index, 1); newTaskIds.splice(destination.index, 0, draggableId); const newColumn = { ...start, taskIds: newTaskIds }; const newState = { ...state, columns: { ...state.columns, [newColumn.id]: newColumn } }; setState(newState); colUpdate({ variables: { ...newColumn } }) .catch(error => console.log(error)) return; } You'll see that after the new column state is updated, we do the same with our UpdateColumn Mutation changing the order of the taskIds array and preserving the order of the tasks. At this point, our application will be saving the order of the tasks no matter what column they're moved to but it will also be duplicating tasks because we're not removing them from their old columns. Also because this data is stored in a GraphDB we've got swap the relationships as well. Meaning that when the task moves from one column we have to sever the relationship with that column and create a new [:BELONGS_TO] relationship with the new column. We accomplish this with another set of auto-generated mutations: const REMOVE_TASK = gql` mutation RemoveTaskColumn($from: _TaskInput!, $to: _ColumnInput!){ RemoveTaskColumn(from: $from, to: $to){ to { id } } } ` const ADD_TASK = gql` mutation AddTaskColumn($from: _TaskInput!, $to: _ColumnInput!){ AddTaskColumn(from: $from, to: $to){ to { id } } } ` These mutations allow us to remove the relationship between a task and a column and then also create a new relationship between the same task and a new column. We bring these useMutation hooks in as: const [addTask] = useMutation(ADD_TASK); const [removeTask] = useMutation(REMOVE_TASK); and add them into our onDragEnd function along with our UpdateColumn mutation to capture all the changes occurring when we swap a task between columns. colUpdate({ variables: { ...newStart } }) .then((data) => { const {data: {UpdateColumn: {id}}} = data; removeTask({ variables: { from: {id: taskId}, to: {id} } }) .catch(error => console.log(error)) }) .catch(error => console.log(error)) colUpdate({ variables: { ...newFinish } }) .then((data) => { const {data: {UpdateColumn: {id}}} = data; addTask({ variables: { from: {id: taskId}, to: {id} } }) .catch(error => console.log(error)) }) .catch(error => console.log(error)) The promise chaining is a little ugly but it works and now our tasks properly change relationships when moved. In our original graph we had: And now we're able to see our changes if you move "Task 1" to "Test Column 2" you'll get this result from your graph: And finally move "Task 3" to "Test Column 3" and you'll end up with: And now we've got drag and drop functionality enabled in our GRANDstack application. You can see that it's a little more complicated than it might be with a SQL data base because you have to work about the relationships but luckily the auto-generated mutations and Apollo make it super easy to work with. So go forth and drag and drop all the things! Discussion (0)
https://dev.to/muddybootscode/drag-and-drop-with-the-grandstack-2j22
CC-MAIN-2022-05
refinedweb
2,025
50.87
The. <set>- which is a convenient shorthand for animate, which is useful for assigning animation values to non-numeric attributes and properties, such as the visibility property. <animateMotion>- which moves an element along a motion path. <animateColor>- which modifies the color value of particular attributes or properties over time. Note that the <animateColor>element has been deprecated in favor of simply using the animate element to target properties that can take color values. Even though it's still present in the SVG 1.1 specification, it is clearly noted that it has been deprecated; and it has been completely removed from the SVG 2 specification. In addition to the animation elements defined in the SMIL spec, SVG includes extensions compatible with the SMIL animations spec; these extensions include attributes that extend the functionality of the <animateMotion> element and additional animation elements. The SVG extensions include: <animateTransform>- allows you to animate one of SVG's transformation attributes over time, such as the transformattribute. path(attribute) - allows any feature from SVG's path data syntax to be specified in a path attribute to the animateMotionelement (SMIL Animation only allows a subset of SVG's path data syntax within a path attribute). We'll talk more about animateMotionin an upcoming section. <mpath>- used in conjunction with the animateMotionelement to reference a motion path that is to be used as a path for the motion. The mpath element is included inside the animateMotionelement, before the closing tag. keypoints(attribute) - used as an attribute for animateMotionto provide precise control of the velocity of motion path animations. rotate(attribute) - used as an attribute for animateMotionto control whether an object is automatically rotated so that its x-axis points in the same direction (or opposite direction) as the directional tangent vector of the motion path. This attribute is the key to making motion along a path work as you'd expect. More about this in the animateMotionsection. SVG animations can be similar to CSS animations and transitions via by their nature. Keyframes are created, things move, colors change, etc. However, they can do somethings that CSS animations can't do, which we'll cover. Why Use SVG Animations? SVGs can be styled and animated with CSS (slides). Basically, any transformation or transition animation that can be applied to an HTML element can also be applied to an SVG element. But there are some SVG properties that cannot be animated through CSS that can through SVG. An SVG path, for example, comes with a set of data (a d="" attribute) that defines that path's shape. This data can be modified and animated through SMIL, but not CSS. This is because SVG elements are described by a set of attributes known as SVG presentation attributes. Some of these attributes can be set, modified, and animated using CSS, and others can't. So, many animations and effects can simply not be achieved using CSS at this time. The CSS SVG animation gaps can be filled by using either JavaScript or the declarative SVG animations derived from SMIL. If you prefer using JavaScript, I recommend using snap.svg by Dmitry Baranovsky, which is described as being "the jQuery of SVG". Here's a collection of examples of that. Or if you prefer a more declarative animation approach, you can use the SVG animation elements as we'll cover in this guide! Another advantage to SMIL over JS animations is that JS animations don't work when the SVG is embedded as an img or used as a background-image in CSS. SMIL animations do work in both cases (or should, browser support pending). That's a big advantage, in my opinion. You may find yourself choosing SMIL over other options because of that. This article is a guide to help you get started using SMIL today. Browser Support and Fallbacks Browser support for SMIL animations is pretty decent. They work in all browsers except in Internet Explorer and Opera Mini. For a thorough overview of browser support, you can refer to the compatibility table on Can I Use. If you need to provide a fallback for SMIL animations, you can test for browser support on-the-fly using Modernizr. If SMIL is not supported, you can then provide some kind of fallback (JavaScript animations, an alternate experience, etc). Specifying the target of the animation with xlink:href No matter which of the four animation elements you choose, you need to specify the target of the animation defined by that element. In order to specify a target, you can use the xlink:href attribute. The attribute takes a URI reference to the element which is the target of this animation and which therefore will be modified over time. The target element must be part of the current SVG document fragment. <rect id="cool_shape" ... /> <animate xlink:href="#cool_shape" ... /> If you've come across SVG animation elements before, you've probably seen them nested inside the element that they're supposed to animate. This is possible as well as per the spec: If the xlink:hrefattribute is not provided, then the target element will be the immediate parent element of the current animation element. <rect id="cool_shape" ... > <animate ... /> </rect> So if you want to "encapsulate" the animation into the element it applies to, you can do that. And if you want to keep the animations separate somewhere else in the document, you can do that too, and specify the target of each animation using xlink:href. Both ways work just fine. Specifying the target property of the animation with attributeName and attributeType All animation elements also share another attribute: attributeName. The attributeName attribute is used to specify the name of the attribute that you're animating. For example, if you want to animate the position of the center of a <circle> on the x-axis, you do that by specifying cx as the value for the attributeName attribute. attributeName takes only one value, not a list of values, so, you can only animate one attribute at a time. If you want to animate more than one attribute, you need to define more than one animation for the element. This is something that I wish were different, and that I think CSS has an advantage over SMIL for. But then again, because of the values possible for other animation attributes (which we'll cover next), it only makes sense to define only one attribute name at a time, otherwise the other attribute values can become too complex to work with. When you specify the attribute name, you can add an XMLNS (short for XML namespace) prefix to indicate the namespace of the attribute. The namespace can also be specified using the attributeType attribute. For example, some attributes are part of the CSS namespace (which means that the attribute can be found as a CSS property as well) and others are XML-only. A table showing these attributes can be found here. The attributes in the table are not all of the SVG attributes. They are only the ones that can be set using CSS. Some of them are already available as CSS properties. If the value for attributeType is not explicitly set or is set to auto, the browser must first search through the list of CSS properties for a matching property name, and if none is found, search the default XML namespace for the element. For example, the following snippet animates the opacity of an SVG rectangle. Since the opacity attribute is also available as a CSS property, the attributeType is set to the CSS namespace: <rect> <animate attributeType="CSS" attributeName="opacity" from="1" to="0" dur="5s" repeatCount="indefinite" /> </rect> We'll go over the other animation attributes in the upcoming examples below. Except where otherwise noted, all of the animation attributes are common to all of the animation elements. Animating an element's attribute from one value to another over a duration of time, and specifying the end state: from, by, to, dur and fill Let's start by moving a circle from one position to another. We're going to do that by changing the value of its cx attribute (which specifies the x-position of its center). We're going to use the <animate> element to do that. This element is used to animate one attribute at a time. Attributes that take numerical values and colors are usually animated with <animate>. For a list of attributes that can be animated, refer to this table. In order to change a value to another over a period of time, the from, to, and dur attributes are used. In addition to these, you will also want to specify when the animation should start with the begin attribute. <circle id="my-circle" r="30" cx="50" cy="50" fill="orange" /> <animate xlink: In the above example, we've defined a circle, and then called an animation on that circle. The center of the circle moves from the initial position at 50 units, to 450 units along the x-axis. The begin value is set to click. This means that the circle will move when it is clicked. You can set this value to a time value as well. For example, begin="0s" will start the animation as soon as the page is loaded. You can delay an animation by setting a positive time value. For example, begin="2s" starts the animation two seconds after load. What's even more interesting about begin is that you can define values like click + 1s to start an animation one second after the element is clicked! What's more, you can use other values that allow you to sync animations without having calculate the duration and delays of other animations. More about this later. The dur attribute is similar to the animation-duration equivalent in CSS. The from and to attributes are similar to the from and to keyframes in an animation's @keyframe block in CSS: @keyframes moveCircle { from { /* start value */ } to { /* end value */ } } The fill attribute (which is rather unfortunately named the same as the fill attribute which defines the fill color of an element) is similar to the animation-fill-mode property, which specifies whether or not the element should return to its initial state after the animation is over. The values in SVG are similar to those in CSS, except use different names: freeze: The animation effect is defined to freeze the effect value at the last value of the active duration. The animation effect is "frozen" for the remainder of the document duration (or until the animation is restarted). remove: The animation effect is removed (no longer applied) when the active duration of the animation is over. After the active end of the animation, the animation no longer affects the target (unless the animation is restarted). Try changing the values in the live demo to see how the animation is affected: Open this live demo on CodePen. The by attribute is used to specify a relative offset for the animation. As the name suggests, you can use it to specify the amount by which you want the animation to progress. The effect of by is almost only visible when you're progressing over the animation duration in discrete steps, similar to the way it works with the CSS steps() function. The SVG equivalent to the CSS steps() function is calcMode="discrete". We'll get to the calcMode attribute later in the article. Another case where the effect of by is more obvious is when you only specify the to attribute. An example of that would be if you use it with the set element which we will also cover later in the article. And last but not least, by also comes in useful when you're working with additive and accumulative animations. We will go over that later in the article. Restarting Animations with restart It may be useful to prevent an animation from being restarted while it is active. To do that, SVG offers the restart attribute. You can set this attribute to one of three possible values: always: The animation can be restarted any time. This is the default value. whenNotActive: The animation can only be restarted when it is not active (i.e. after the active end). Attempts to restart the animation during its active duration are ignored. never: The element cannot be restarted for the remainder of the current simple duration of the parent time container. (In the case of SVG, since the parent time container is the SVG document fragment, then the animation cannot be restarted for the remainder of the document duration.) Naming animations and synchronizing them Suppose we want to animate the position and the color of the circle, such that the change in color happens at the end of the moving animation. We can do that by setting the begin value of the color-changing animation to be equal to the duration of the moving animation; this is how we would normally do it in CSS. SMIL, however, has a nice event-handling feature. We mentioned before that the begin attribute accepts values like click + 5s. This value is called an "event value", and is in this case made up of an event reference followed by a "clock value". The interesting part here is the naming of the second part: the "clock value". Why is it not simply a "time value"? Well the answer is that you can literally use a clock value like "10min" or "01:33" which is equivalent to "1 minute and 33 seconds", or even "02:30:03" (two hours, 30 minutes, and 3 seconds). At the time of this writing, clock values are not fully implemented in any browser. So, if we were to go back to the previous demo and use click + 01:30, if a browser started supporting it, the animation would fire 1 minute and 30 seconds after the circle is clicked. Another kind of value it can accept is the ID of another animation followed by an event reference. If you have two (or more) animations (whether they are applied to the same element or not!) and you want to synchronize them so that one of them starts relative to the other, you can do that without having to know the duration of the other animation. For example, in the next demo, the blue rectangle starts moving 1 second after the circle animation starts. This is done by giving each animation an ID, and then using that ID with the begin event as shown in the following code: <circle id="orange-circle" r="30" cx="50" cy="50" fill="orange" /> <rect id="blue-rectangle" width="50" height="50" x="25" y="200" fill="#0099cc"></rect> <animate xlink: <animate xlink: The begin="circ-anim.begin + 1s" is the part that tells the browser to start the rectangle's animation 1 second after the beginning of the circle's. You can check the live demo out: Open this live demo on CodePen. You can also start the rectangle animation after the circle animation ends using the end event: <animate xlink: You could even start it before the end of the circle's animation: <animate xlink: Repeating Animations with repeatCount If you want to run an animation more than once, you can do that using the repeatCount attribute. You can specify the number of times you want it to repeat, or use the indefinite keyword to have it repeat endlessly. So, if we were to repeat the circle's animation for two times, the code would look like so: <animate xlink: You can check the live demo out here. In the demo, I've set the repeat count to be 2 on the circle, and indefinite on the square. Open this live demo on CodePen. Notice how the animation restarts from the initial from value instead of the value it reached at the end of the animation. Unfortunately, SMIL does not include a way to go back and forth between the start and end values like CSS animations allow us to do. In CSS, the animation-direction property specifies whether or not an animation should play in reverse on some or all cycles or iterations. animation-direction: alternate value means that. In SMIL to do that you would have to use JavaScript to explicitly change the values of the from and to attributes. Jon McPartland of the Big Bite Creative wrote a post a while back explaining how he did this for a menu icon animation that he worked on. Another workaround is to specify the end value as the middle value and then have the end value be the same as the initial value. For example, you can set the animation to start from a value, and end at the same value as well with to, except that you specify what you would have set to be a final value, as an intermediate value between from and to. In CSS we would do that using something like this: @keyframes example { from, to { left: 0; } 50% { left: 300px; } } The equivalent in SMIL is to use the values attribute, which we will explain shortly. That said, the above workaround may or may not work for you depending on the kind of animation you're after, and whether or not you are chaining animations, repeating them, or doing additive animations. Here's a nice, simple infinite animation using some delayed begin times by Miles Elam: See the Pen Hexagon Ripple by Miles Elam (@mileselam) on CodePen. Restricting repetition time with repeatDur Setting an element to repeat indefinitely may get annoying or not user-friendly if the animation resumes for a long time. So, it may be a good idea to restrict the repetition time to a certain period of time, and stop the repetition after some time relative to the beginning of the document. This is known as presentation time. The presentation time indicates the position in the timeline relative to the document begin of a given document fragment. It is specified using the repeatDur attribute. Its syntax is similar to that of a clock value, but instead of being relative to another animation event or an interaction event, it's relative to the beginning of the document. For example, the following snippet will stop the repetition of the animation 1 minute and 30 seconds after the document begin: <animate xlink: And here is the live demo: Synchronizing animations based on number of repetitions Now let's go back a step to the synchronizing between two animations topic. Indeed, in SMIL, you can synchronize animations so that one animation starts based on the number of repetitions of another. For example, you can start an animation after the nth-repetition of another, plus or minus an amount of time you may want to add. The following example starts the rectangle's animation at the second repetition of the circle's animation: <animate xlink: The following is a live demo where the rectangle's animation starts 2 seconds after the second repetition of the circle's animation. Open this live demo on CodePen. And here is an example David Eisenberg put together for SVG Essentials, 2nd Edition. Controlling animation keyframe values: keyTimes and values In CSS, we can specify the values that we want our animated property to take in a certain frame during the animation. For example, if you're animating the left offset of an element, instead of animating it from, say, 0 to 300 directly, you can animate it so that it takes certain values during certain frames like this: @keyframes example { 0% { left: 0; } 50% { left: 320px; } 80% { left: 270px; } 100% { left: 300px; } } The 0%, 20%, 80%, and 100% are the frames of the animation, and the values in each frame's block are the values for each frame. The effect described above is one of an element bouncing off a wall, then back to the final position. In SMIL, you can control the values per frame in a similar way, but the syntax is quite different. To specify the keyframes, you use the keyTimes attribute. And then to specify the value of the animated property for each frame, you use the values attributes. The naming conventions in SMIL are quite convenient. If we were to go back to our moving circle, and use values similar to the ones in the CSS keyframes above, the code will look like the following: <animate xlink: So what did we do there? The first thing to notice here is that the keyframe times and intermediate values are specified as lists. The keyTimes attribute is a semicolon-separated list of time values used to control the pacing of the animation. Each time in the list corresponds to a value in the values attribute list, and defines when the value is used in the animation function. Each time value in the keyTimes list is specified as a floating point value between 0 and 1 (inclusive), representing a proportional offset into the simple duration of the animation element. So the keytimes are similar to those in CSS, except that, instead of specifying them as percentages, you specify them as a fraction. The following is the live demo for the above code. Click on the circle to start the animation. Open this live demo on CodePen. Note that if a list of values is used, the animation will apply the values in order over the course of the animation. If a list of values is specified, any from, to and by attribute values are ignored. At this point, it is also worth mentioning that you can use values attribute without the keyTimes attribute - the values are automatically spread out evenly through the time (for every calcMode value other than paced(See next section). Controlling animation pace with custom easing: calcMode, and keySplines I'm going to go for a CSS-SMIL comparison again because the SMIL syntax and concepts will be much simpler to understand if you're already familiar with CSS animations. In CSS, you can choose to change the default uniform animation pace and specify a custom easing function that controls the animation, using the animation-timing-function property. The timing function can be one of a few predefined keywords, or a cubic bezier function. The latter can be created using a tool such as this tool by Lea Verou. In SMIL, the animation pace is specified using the calcMode attribute. The default animation pace is linear for all animation elements except animateMotion (we'll get to it later in the article). In addition to the linear value, you can set the value to: discrete, paced, or spline. discretespecifies that the animation function will jump from one value to the next without any interpolation. This is similar to the steps()function in CSS. pacedis similar to linear, except that it will ignore any intermediate progress times defined by keyTimes.. The following is a live demo courtesy of Amelia Bellamy-Royds, that shows the difference between the three calcModevalues mentioned so far: See the Pen SVG/SMIL calcMode comparison by Amelia Bellamy-Royds (@AmeliaBR) on CodePen. - The fourth value accepted by calcModeis spline. It interpolates from one value in the valueslist to the next according to a time function defined by a cubic bezier spline. The points of the spline are defined in the keyTimesattribute, and the control points for each interval are defined in the keySplinesattribute. You've probably noticed the new attribute in the last sentence: the keySplines attribute. So, what does the keySplines attribute do? Again, to the CSS equivalents. In CSS, you can specify the animation pace inside every keyframe, instead of specifying one animation pace for the entire animation. This gives you better control over how each keyframe animation should proceed. An example using this feature is creating a bouncing ball effect. The keyframes for that may look like this: @keyframes bounce { 0% { top: 0; animation-timing-function: ease-in; } 15% { top: 200px; animation-timing-function: ease-out; } 30% { top: 70px; animation-timing-function: ease-in; } 45% { top: 200px; animation-timing-function: ease-out; } 60% { top: 120px; animation-timing-function: ease-in; } 75% { top: 200px; animation-timing-function: ease-out; } 90% { top: 170px; animation-timing-function: ease-in; } 100% { top: 200px; animation-timing-function: ease-out; } } Instead of keyword easing functions, we could have used the corresponding cubic-bezier functions: ease-in= cubic-bezier(0.47, 0, 0.745, 0.715) ease-out= cubic-bezier(0.39, 0.575, 0.565, 1) Let's start by specifying the key times and list of values for our orange circle to undergo the same bouncing effect: <animate xlink: The animation will be begin on click, and will freeze once it reaches the end value. Next, in order to specify the pace of each keyframe, we're going to add the keySplines attribute. The keySplines attribute takes values must all be in the range 0 to 1, and the attribute is ignored unless the calcMode is set to spline. Instead of taking cubic-bezier functions as values, keySplines takes the coordinates of the two control points that are used to draw the curve. The control points can be seen in the following screenshot taken from Lea's tool. The screenshot also shows the coordinates of each point, each colored with the same color as the point itself. For the keySplines attribute, it is these values that we are going to use to define the pace of the keyframe animations. set of control points than there are keyTimes. If we go back to the bouncing ball example, the control point coordinates for the ease-in and ease-out functions are shown in the following images: So, to translate that into the SVG animation element, we get the following code: <animate xlink: Here's the live demo: Open this live demo on CodePen. If you only want to specify an overall easing function for the entire animation without any intermediate values, you would still have to specify the keyframes using the keyTimes attribute, but you would only specify the start and ending keyframes, namely 0; 1, and no intermediate values. Additive & Accumulative Animations: additive and accumulate Sometimes, it's useful to define an animation that starts from where the previous animation ended; or one that uses the accumulative sum of the previous animations as a value to proceed by. For that, SVG has two conveniently named attributes: additive and accumulate. Suppose you have an element whose width you want to "grow", or a line whose length you want to increase, or an element that you want to move step by step from one position to the other, over separate steps. This feature is particularly useful for repeated animations. Just like any other animation, you're going to specify from and to values. However, when you set additive to sum, each of their values is going to be relative to the original value of the animated attribute. So, back to our circle. For our circle, the initial position of cx is 50. When you set from="0" to="100", the zero if actually the original 50, and the 100 is actually 50 + 100; in other words, it's practically kind of like " from="50" to="150"". By doing that, we get the following result: Open this live demo on CodePen. This is all the additive attribute does. It just specifies whether the from and to values should be relative to the current value or not. The attribute only takes one of two values: sum and replace. The latter is the default value, and it basically means that the from and to values are going to replace the current/original values, which may end up causing a weird jump before the animation starts. (Try replacing sum with replace in the above example for a better comparison.) However, what if we want the values to be added such that the second repetition starts off from the ending value of the previous one? This is where the accumulate attribute comes in. The accumulate attribute controls whether or not the animation is cumulative. The default value is none, which means that, when the animation is repeated for example, it's going to start back from the beginning. You can, however, set it to sum, which specifies that each repeat iteration after the first builds upon the last value of the previous iteration. So, if we were to go back to the previous animation and specify accumulate="sum", we'd get the following prefferable result: Open this live demo on CodePen. Note that the accumulate attribute is ignored if the target attribute value does not support addition, or if the animation element does not repeat. It will also be ignored if the animation function is specified with only the to attribute. Specifying an animation's end time with end In addition to specifying when an animation begins, you can also specify when it ends, using the end attribute. For example, you can set an animation to repeat indefinitely, and then have it stop when another element starts animating. The end attribute takes values similar to those that the begin value takes. You can specify absolute or relative time values/offsets, repeat values, event values, etc. For example, in the following demo, the orange circle moves slowly over a period of 30 seconds to the other side of the canvas. The green circle will also animate, but only when it's clicked. The orange circle's animation will end when the green circle's animation starts. Click on the green circle to see the orange one stop: Open this live demo on CodePen. The same kind of animation synchronization can be achieved for two animations applied to the same element, of course. For example, suppose we set the color of the circle to animate indefinitely changing from one value to another. Then, when the element is clicked, it moves to the other side. We'll set it now so that the color animation stops as soon as the element is clicked and the moving animation is fired. Open this live demo on CodePen. Defining animation intervals using multiple begin and end values Indeed, both the begin and end attributes accept a list of semi-colon-separated values. Each value in the begin attribute will correspond to a value in the end attribute, thus forming active and inactive animation intervals. You can think of this as being similar to a moving car, where the car's tires are active and then inactive for periods of time, depending on whether or not the car is moving. You can even create the animated car effect by applying to animations to the car: one that translates the car or moves it along a path that is also an additive and accumulative animation, and another animation that rotates the car's tires in intervals that would be synchronized with the translation. An example specifying multiple beginning and ending times (i.e. intervals) is the following demo, where the rectangle is rotated based on the defined intervals, changing from active to inactive accordingly. (Rerun the demo if you miss the animation.) Open this live demo on CodePen. Note that in the above example I've used the <animateTransform> element to rotate the rectangle about its center. We'll talk about this element in more detail in an upcoming section below. Also note that, even if you set the repeatCount to indefinite, it will be overridden by the end values and will not repeat indefinitely. Restricting the active duration of an element using min and max Just like you can restrict the repetition time of an animation, you can even restrict the active duration of an animation. The min and max attributes specify the minimum and maximum value of the active duration, respectively. They provide us with a way to control the lower and upper bound of the element active duration. Both attributes take a clock value as a value. For min, that specifies the length of the minimum value of the active duration, measured in element active time. Value must be greater than or equal to 0, which is the default value and does not constrain the active duration at all. For max, the clock value specifies the length of the maximum value of the active duration, measured in element active time. Value must also be greater than 0. The default value for max is indefinite. This does not constrain the active duration at all. If both min and max attributes are specified then the max value must be greater than or equal to the min value. If this requirement is not fulfilled then both attributes are ignored. But what defines the active duration of an element? We mentioned the repeat duration before, in addition to the "simple duration", which is the duration of the animation without any repetition (specified using dur), so how do all of these work together? Which overrides what? and then what about the end attribute which would override and simply end the animation? The way it happens is that the browser will first compute the active duration based on the dur, repeatCount, repeatDur, and end values. Then, it runs the computed duration against the specified min and max values. If the result is within the bounds, this first computed duration value is correct and will not be changed. Otherwise two situations may occur: - If the first computed duration is greater than the maxvalue, the active duration of the element is defined to be equal to the maxvalue. - If the first computed duration is less than the minvalue, the active duration of the element becomes equal to the minvalue and the behavior of the element is as follows: - If the repeating duration (or the simple duration if the element doesn't repeat) of the element is greater than minthen the element is played normally for the ( minconstrained) active duration. - Otherwise the element is played normally for its repeating duration (or simple duration if the element does not repeat) and then is frozen or not shown depending on the value of the fillattribute. That leaves us with how the browser actually computes the active duration. For sake of brevity, I'm not going to get into the details here. But there is a very comprehensive table in the specification that shows the different combinations of the dur, repeatCount, repeatDur, and end attributes, and then shows what the active duration will be based on each combination. You can check the table out and read more about this in this section of the specification. Lastly, if an element is defined to begin before its parent (e.g. with a simple negative offset value), the minimum duration is measured from the calculated begin time not the observed begin. This means that the min value may have no observed effect. <animate> example: morphing paths One of the attributes that can be animated in SMIL (but not in CSS) is the d attribute (short for data) of an SVG <path>. The d attribute contains the data which defines the outline of the shape that you're drawing. The path data consists of a set of commands and coordinates that tell the browser where and how to draw points, arcs, and lines that make up the final path. Animating this attribute allows us to morph SVG paths and create shape tweening effects. But, in order to be able to morph shapes, the start, end, and any intermediate path shapes need to have the exact same number of vertices/points, and they need to appear in the same order. If the number of vertices doesn't match, the animation wouldn't work. The reason for this is that the shape changing actually happens by moving the vertices, and interpolating their positions, so if one vertex is missing or does not match, the paths won't be interpolated anymore. To animate an SVG path, you specify the attributeName to be d, and then set the from and to values that specify the start and end shapes, and you can use the values attribute to specify any intermediate values you want the shape to go through in between. For the sake of brevity, I won't get into the details of how to do this here. Instead, you can read this excellent article by Noah Blon, in which he explains how he created a shape-tweening kind-of-loading animation using <animate>. The live demo for Noah's article is this: Open this live demo on CodePen. And here's another morphing example by Felix Hornoiu: See the Pen SVG Countdown by Felix Hornoiu (@felixhornoiu) on CodePen. You can even morph the values of a path being used as a clipping mask! An example of that by Heather Buchel: See the Pen Loading Animation with Morphing SVG! by Heather Buchel (@hbuchel) on CodePen. Animating along arbitrary paths: The <animateMotion> Element The <animateMotion> element is my favorite SMIL animation element. You can use it to move an element along a path. You specify the motion path using one of two ways which we're going to cover next, and then to set the element up so that is moves along that path. The <animateMotion> element accepts the same attributes mentioned earlier, plus three more: keyPoints, rotate, and path. Also, there is one difference regarding the calcMode attribute, where the default value is paced for <animateMotion>, not linear. Specifying the motion path using the path attribute The path attribute is used to specify the motion path. It is expressed in the same format and interpreted the same way as the d attribute on the path element. The effect of a motion path animation is to add a supplemental transformation matrix onto the current transformation matrix for the referenced object which causes a translation along the x- and y-axes of the current user coordinate system by the computed X and Y values computed over time. In other words, the path specified is calculated relative to the element's current position, by using the path data to transform the element onto the path position. For our circle, we're going to animate it along a path that looks like the following: The code required for the circle to move along this path is: <animateMotion xlink: There is one thing I want to focus on here: the coordinates in the path data. The path starts by moving (M) to the point with coordinates (0, 0), before it starts to draw a curve (c) to another point. It is important to note that the (0, 0) point is actually the position of the circle, no matter where it is - NOT the top left corner of the coordinate system. As we mentioned above, the coordinates in the path attribute are relative to the current position of the element! The result of the above code is: Open this live demo on CodePen. If you were to specify the path starting from a point other than (0, 0), the circle would abruptly jump by the amount specified in the beginning point. For example, suppose you draw a path in Illustrator and then export that path data to use as a motion path (that's what I did the first time I did this); the exported path may look something like this: <path fill="none" stroke="#000000" stroke- The starting point of the path in this case is (100.4, 102.2). If we were to use this data as the motion path, the circle will jump by ~100 units to the right and ~102 units downwards, and then start the motion along the path relative to the new position. So, make sure to keep this in mind when you prepare the motion path for your animation. If used, attributes from, by, to and values specify a shape on the current canvas which represents the motion path. Specifying the motion path using the <mpath> element There is also another way you can specify a motion path. Instead of using the relative path attribute, you can reference an external path using the <mpath> element. The <mpath>, a child of the <animateMotion> element, would then reference the external path using the xlink:href attribute. <animateMotion xlink: <mpath xlink: </animateMotion> The motion path <path> can be defined anywhere in the document; it can even be literally just defined inside a <defs> element and not rendered on the canvas at all. In the next example, the path is rendered because, in most cases, you may want to show the path that the element is moving along. Note that, according to the specification:property or any animations on that attribute due to animateTransformelements on the target element. Again, the position of the circle is "multiplied" or "transformed" by the coordinates in the path data. In the next example, we have a path in the middle of the canvas. The circle is positioned at the beginning of the path. Yet, when the motion path is applied, the circle does not start its motion from its current position. See the demo for a better explanation. Click on the circle to animate it. Open this live demo on CodePen. See how the circle does follow the same shape of the path, but over a different position? This is due to the fact that the circle's position is transformed by the values of the path data. One way around this is to start with the circle being positioned at (0, 0), so that when the path data is used to transform it, it will start and proceed as expected. Another way is to apply a transformation that "resets" the coordinates of the circle so that they compute to zero before the path is applied. The following is a modified version of the above demo, using a closed path and repeating the motion animation indefinitely. Open this live demo on CodePen. Override Rules for <animateMotion> Since there are more than one way to do the same thing for animateMotion, it only makes sense to have override rules to specify which values override others. The override rules for animateMotion are as follows: - Regarding the definition of the motion path, the mpathelement overrides the the pathattribute, which overrides values, which overrides from, byand to. - Regarding determining the points which correspond to the keyTimesattributes, the keyPointsattribute overrides path, which overrides values, which overrides from, byand to. Setting an element's orientation along a motion path with rotate In our previous example, the element we were animating along the path happened to be a circle. But what if we're animating an element that has a certain orientation like, say for example, a car icon? The car icon in the following example is designed by Freepik. In this example, I've replaced the circle with a group with an ID of "car", which contains the element making up the group. Then, in order to avoid the problem with the motion along the path mentioned above, I've applied a transformation to the car to that translates it by a specific amount, so that the initial position ends up at (0, 0). The values inside the transformations are actually the coordinates of the point where the first path of the car starts drawing (right after the move command M). The car then starts moving along the motion path. But... this is how the motion looks like: Open this live demo on CodePen. The car's orientation is fixed, and does not change to match that of the motion path. In order to change that, we're going to use the rotate attribute. The rotate attribute takes one of three values: auto: Indicates that the object is rotated over time by the angle of the direction (i.e., directional tangent vector) of the motion path. auto-reverse: Indicates that the object is rotated over time by the angle of the direction (i.e., directional tangent vector) of the motion path plus 180 degrees. - a number: Indicates that the target element has a constant rotation transformation applied to it, where the rotation angle is the specified number of degrees. To fix the orientation of the car in the above example, we'll start with setting the rotation value to auto. We'll end up with the following result: Open this live demo on CodePen. If you want the car to move outside the path, the auto-reverse value fixes that. Open this live demo on CodePen. This looks better, but we still have one problem: the car looks like it's moving backwards along the path! In order to change that, we'd need to flip the car along its y-axis. This can be done by scaling it by a factor of "-1" along that axis. So, if we apply the transformation to the g with the car ID, the car will move forward as expected. The scaling transformation is just going to be chained with the previous translation we applied earlier. <g id="car" transform="scale (-1, 1) translate(-234.4, -182.8)"> And the final demo looks like this: Open this live demo on CodePen. Controlling the animation distance along the motion path with keyPoints. keyPoints takes a semicolon-separated list of floating point values between 0 and 1 and indicates how far along the motion path the object shall move at the moment in time specified by corresponding keyTimes value. Distance calculations are determined by the browser's algorithms. Each progress value in the list corresponds to a value in the keyTimes attribute list. If a list of keyPoints is specified, there must be exactly as many values in the keyPoints list as in the keyTimes list. One important thing to note here is to set the calcMode value to linear for keyPoints to work. It also looks like it should logically work with paced animation, if your key points move back and forth, but it doesn't. The following is an example by Amelia Bellamy-Royds (whose Codepen profile you should totally check out) that uses keyPoints to mimic the behavior is starting a motion along a path from a pre-defined offset, because we currently don't have that ability by default in SMIL. See the Pen Motion along a closed path, arbitrary start point by Amelia Bellamy-Royds (@AmeliaBR) on CodePen. Moving text along an arbitrary path Moving text along an arbitrary path is different from moving other SVG elements along paths. To animate text, you're going to have to use the <animate> element, not the <animateMotion> element. First, let's start by positioning the text along a path. This can be done by nesting a <textPath> element inside the <text> element. The text that is going to be positioned along a path will be defined inside the <textPath> element, not as a child of the <text> element. The textPath is then going to reference the actual path that we want to use, just like we did in the previous examples. The referenced path can also be either rendered on the canvas, or defined inside a <defs>. Check the code in the following demo out. Open this live demo on CodePen. To animate the text along that path, we're going to use the <animate> element to animate the startOffset attribute. The startOffset represents the offset of the text on the path. 0% is the beginning of the path; 100% represents the end of it. So if, for example, the offset is set to 50%, the text will start halfway through the path. I think you can see where we're going from here. By animating the startOffset, we're going to create the effect of text moving along the path. Check the code in the following demo out. Open this live demo on CodePen. Animating transformations: The <animateTransform> Element The <animateTransform> element animates a transformation attribute on a target element, thereby allowing animations to control translation, scaling, rotation and/or skewing. It takes the same attributes mentioned for the <animate> element, plus an additional attribute: type. The type attribute is used to specify the type of the transformation that's being animated. It takes one of five values: translate, scale, rotate, skewX, and skewY. The from, by and to attributes take a value expressed using the same syntax that is available for the given transformation type: + For a type="translate", each individual value is expressed as <tx> [,<ty>]. + For a type="scale", each individual value is expressed as <sx> [,<sy>]. + For a type="rotate", each individual value is expressed as <rotate-angle> [<cx> <cy>]. + For a type="skewX" and type="skewY", each individual value is expressed as <skew-angle>. If you're not familiar with the syntax for the SVG transform attribute functions, and for the sake of brevity of this article, and because the syntax details and how it works is outside the scope of this article, I recommend you read the article I've written about this a while back: Understanding SVG Coordinate Systems and Transformations (Part 2): The transform Attribute, before you move on with this guide. Back to a previous demo, where we rotated the pink rectangle using the <animateTransform> element. The code for the rotation looks like the following: <rect id="deepPink-rectangle" width="50" height="50" x="50" y="50" fill="deepPink" /> <animateTransform xlink: The from and to attributes specify the angle of rotation (start and end) and the center of rotation. In both, the center of rotation remains the same, of course. If you don't specify the center, it will be the top left corner of the SVG canvas. The live demo for the above code is the following: Open this live demo on CodePen. Here's another fun example with a single animateTransform by Gabriel: See the Pen Orbit by Gabriel (@guerreiro) on CodePen. Animating a single transformation is simple, however, things can get really messy and complicated when multiple transformations are included, especially because one animateTransform can override another, so instead of adding and chaining effects, you may end up with the complete opposite. That, in addition to the way SVG coordinate systems and transformations actually work (refer to the article mentioned earlier on the topic). The examples are vast, and outside the scope of this article. For transforming SVGs, I recommend using CSS transforms. Implementations are working on making the latter work perfectly with SVG, so you may never have to use SMIL for animating transformations in SVG at all. The <set> Element The set element provides a simple means of setting the value of an attribute for a specified duration. It supports all attribute types, including those that cannot reasonably be interpolated, such as string and boolean values. The set element is non-additive. The additive and accumulate attributes are not allowed, and will be ignored if specified. Since <set> is used to set an element to a specific value at and during a specific time, it does not accept all of the attributes mentioned for the previous animation elements. For example, it does not have a from or by attribute, because the value that changes does not change progressively over the period of time. For set, you can specify the element you're targeting, the attribute name and type, the to value, and the animation timing can be controlled with: begin, dur, end, min, max, restart, repeatCount, repeatDur, and fill. The following is an example that sets the color of the rotating rectangle to blue when it is clicked. The color remains blue for a duration of 3 seconds, and then turns back to the original color. Every time the rectangle is clicked, the set animation is fired, and the color is changed for three seconds. Open this live demo on CodePen. Elements, attributes and properties that can be animated Not all SVG attributes can be animated, and not all of those that can be animated, can be animated using all the animation elements. For a complete list of all animatable attributes, and a table showing which of these can be animated by which elements, please refer to this section of the SVG Animation specification. Final Words SMIL has a lot of potential, and I barely scratched the surface and only touched on the basics and technicalities of how they work in SVG. A lot of very impressive effects can be created, especially ones involving morphing and transforming shapes. The sky's the limit. Go crazy! and don't forget to share what you make with the community; we'd love to see what you've been up to. Thank you for reading! This article has been updated based on this discussion in the comments below. Thanks for your input, Amelia. =) Wow, I had no idea that SVG has path interpolation built in! That’s super exciting. Thanks for the compendium of great info, Sara! Great article! I <3 SVG Thanks! Perhaps Flash has finally found his purpose: SVG/SMIL Animations generator :-) You mean, with flash it is possible to generate SVG animations? Wow! SVG has always been “Yeah, you can create some shapes with it…” to me. Since exploring the SVG Tuts by Jakob Jenkov () that definetly has changed. This article comes on my reading list as well! that’s pretty cool! I like “single animateTransform by Gabriel” wowo nice artificial svg nice.how to learn svg which website I remember my those “Macromedia Flash” days :p Great overview. A few corrections, which can hopefully be integrated to make this a lasting reference: You’re mixing up two different concepts here (although the specs aren’t much better). CSS isn’t an XML namespace, and you can’t use attributeNameto specify that it is the CSS property you’re animating (although that’s the default if you don’t specify attributeType). By “XML namespace”, the specs are talking about, for example, the xlink in xlink:href. I’m not sure that there is anything specific about discrete mode, although it certainly makes the differences more obvious. More generally, byworks just the same as specifying both fromand to. Things are only different if you specify toalone, and only if you’re doing additive animations (either an accumulating repeated animation or multiple additive animations of the same property); the first two forms are interpretted as cumulative offsets, but tois interpretted as a final value, regardless of the start value. Javascript seems overkill. It’s easy to do with the values syntax: values="0;100;0"creates an alternating animation between 0 and 100. Just remember that duris now the duration of a full loop, back and forth, of the animation. I’d still prefer an easier way to alternate repetitions (if your values are complex path data commands, it’s a pain to have to repeat everything), but it can be done. It’s a little buggy, but it works. Here’s an example David Eisenberg put together for SVG Essentials, 2nd Edition. Worth mentioning that you can use valueswithout keyTimes. The values are automatically spread out evenly through the time (for every calc mode except paced). pacedignores keyTimes, but it doesn’t override values.. This isn’t true; any of the end triggers will stop the animation, regardless of how it began. E.g. if you included clickin the list of end triggers, a click will stop the current animation (but not prevent future ones). I tend to think about this in the opposite direction — the path is fixed, but it is the (0,0) point of the circle’s coordinate system that moves along it, not necessarily the circle itself. But either way works. To be more specific, they each specify coordinates in x,yfashion (or a semi-colon separated list of coordinate pairs for values). Motion is in straight lines from one coordinate to the next. keyPointsworks just fine, but you need to set calcMode="linear". It should logically work with paced animation, if your key points move back and forth, but it doesn’t. See for example: That is a really sneaky example, using two different path elements, one in front and one behind the moving element, to simulate an animating z-index (which SVG doesn’t currently have)! Of course, you managed to discover a few things I hadn’t noticed about the specs. I hadn’t paid attention the fact that you are supposed to be able to use clock values as begin/end time offsets, so I hadn’t worried about the fact it didn’t work (they work fine as stand-alone values). I also had overlooked the fact that the object id associated with an event-based begin/end value was optional. And although I knew about minand max, I don’t think I ever used them. And I didn’t realize that keyTimes( or calcMode="linear"in general) on an “ with a path but without keyPointswould use the path’s vertex points instead. @Amelia Thank you for your comment. I don’t see myself saying any of those two sentences in the article. :o I never said CSS is an XML namespace, and I said that you use attributeTypeto specify the namespace, not attributeName. Thumbs up. This can sure be added to the article to show another use case for by. Although, as I mentioned in the article, by‘s effect does show more clearly when you’re using discrete pacing, so there was nothing technically wrong in there. I don’t disagree here, but this doesn’t change the fact that SMIL is lacking in this area. That said, it is also a note that would be nice to be added to the article. Thank you. That’s odd. I just tried a demo using repeat()which I had tried when I wrote the article and it didn’t work before, but now it does! I’ll add an example shortly (and link to the example you showed as well). Will be added, too. This is an error on my side, apologies. The spec says it only overrides the keyTimes. My bad. :) Will fix as well. It is true. Yes, the endtrigger will stop the animation, and this also applies to other use cases like when u have multiple repetitions for example, the endvalue can override other active duration times as well, but I didn’t get into specific use cases and examples for brevity. The sentence in the article is still technically true. True, I missed the fact that it works with calcMode="linear". Thanks for the heads up. FWIW, it’s not supposed to be a how-to demo; just a demo to show what’s possible. Also note that SVG does have a z-indexproperty but it’s not yet implemented in any browser. It’s part of SVG2. Thank you for all your notes. Will update article with relevant notes accordingly. Cheers! Thanks Sara, To re-emphasize, I wasn’t being nit-picky just to be critical, but because I think this is a great resource which should be as clear as possible. On using multiple values for beginand end, the point I was trying to make was that they aren’t matched lists of start and end times, the way that keyTimesvalues are matched with keyPointsor values. Your example has them neatly matched, but they don’t have to be. You could reverse the order of times in one of the lists, and the animation would work the same. Or you could have many begin times, but only one over-riding end time or event. On Gabriel’s example, I was not being critical when I said it was sneaky. If internet comments could include the tone of voice, that sentence would have been said in the same manner as “Woah, awesome!” Which is to say, I had to look at the code very carefully to figure out how it worked. And I, too, am eagerly awaiting z-index in SVG2. @Chris Coyier I posted a really long comment summarizing some errors/ambiguous statements in the article, but it isn’t showing up. If it’s just in spam quarantine because it was so excessively long, no problem. If not, send me a note and I’ll repost and/or send to you directly. @Amelia BR Thank you for your comment (this one and the one that wasn’t posted). There is a section in the article that’s missing, the original document didn’t save that section when I wrote it—for some reason—so that may be causing the ambiguity? Not sure. I’ll add that section, and please feel free to send the list directly to me as well @ contact@sarasoueidan.com To the experimentation table, this was amazingly insightful. Wow, that was an epic article, thanks for all the demos and explanations, very well done. Cheers Woow… Amazing article… I love SVG… But here a question: how to work with collisions? i.e. if I develop a game using ‘moving along path’ example as base, how could i know when the car get some obstacle in its way? In Javascript, you can always access the current animated value of any SVG attribute, within the animValsub-property of the relevant property of the element object. E.g., in this pen I animate cxof a circle, and access the current value as {element}.cx.animVal.value(in SVG user units, aka px) or {element}.cx.animVal.valueAsString(with the original units, in this case percentages): For motion along a path, it gets a little more complicated. The animation gets applied using the SVG transformattribute. The corresponding property is {element}.transform, which is of type SVGAnimatedTransformList. You can explore the DOM objects, methods, and properties defined at that link to discover the possibilities for using the SVG DOM to work with transformations. Wow… @AmeliaBR: many thanks for the efforts on explanation and the great, and simple-to-understand, example… I’m going to give a try on the DOM approach this weekend… @Amelia No worries, thank you for your constructive comments and notes. I’m happy to add your notes and correct my errors any time. :) Cheers! (P.S. Yes, written “tones” can be misunderstood, that’s why I tend to use many smiley faces in my comments.) ((P.S#2 Your SVG knowledge is impressive! I’ve once shared your article about using CSS variables to style the contents of a “ element. Good stuff! :)) Wath about path?! How can I draw a path in SVG?! .. from A ……………………… to B ) Wow! Just new to SVG and just found this. Worthy…. Wow this briefly viewed examples are exciting. I’m looking forward to read this whole article as soon as possible. Great! Very interesting article. Bit I love do something like this with canvas ans JS. First, thank a lot, Sara, for this really epic quide to SVG animations! Still, I’d like to shed some more light on the behavior of repeat(n)function. I noticed that in David’s example you added to the article it worked (in Chrome 38+), but unfortunately I couldn’t make it work in your live example in the article. So I played with it a bit and found out that it works only if both following conditions are true: 1. The IDof animation being referenced contains no hyphen ( -); 2. n in the repeat(n)of the referencing animation is less that repeatCountof the animation being referenced. If any of these is false, repeat(n)stops to work. The first seems especially odd to me since -character is valid for XML IDs, AFAIK. Do the parsing rules of beginand endattributes imply some extra constraints on it, probably because -can also be interpreted as minus for a clock value? Or is it just a Chrome bug? For the latter condition, my intuitive explanation is that repeat(n)event doesn’t fire after the whole animation has ended (i.e. if an animation has 2 repeats, after the second repeat the endevent fires first and prevents repeat(2)event from firing). Unfortunately, I couldn’t find neither proof nor disproof for my hypothesis in the spec. Is my guess wrong? @SelenIT Thank you for your notes! I did come across the first point with the dash. I don’t remember if I asked someone about it and I don’t believe I filed a bug report because I’m not sure if it is indeed a bug or not; thank you for pointing it out here. The second point is indeed interesting and makes perfect sense considering the number of repetitions should be less than repeatCount— why reference a third repetition when the animation repeats only twice, right? And it also makes sense the way you put it: I think an endevent could be the reason behind that as well. The way I think of it is that each repetition is “registered” as a begin. I don’t make sense here but I can’t think of another way to put it. Thank you again for your valuable notes. :) Cheers! This was an amazing tutorial Sara. Within 30 minutes, I was up and running with my very own animation that I want to use on the website. If you’re still around, I was wondering something that may be simple. I have a shape tween animation. It works back and forth in the browser with the animate property. Is there a way to do this purely with CSS or with javascript. I want to use this as a state change animation. I have one shape that is a loading image, and then I want to tween it to a play button when ready. Would love some guidance. Thanks! Sara, thank you for such a great SMIL tutorial with great examples! However I have to put my two cents. When I built an initial version of SVG Circus, I found some cross-browser issues in ‘move-on-path’ animations with calcMode=splineand keyTimes/keySplineshaving arrays of values. That makes it difficult to make a non-linear looped animation where an easing is applied to five loops, for example. For that reason I decided to stop using SMIL, wrote a custom SVG animation engine in javascript (RAF-based) and moved SVG Circus to it. I open-sourced it yesterday, so you are welcomed to experiment with it :) can you please explain again with more examples about keyTimes and keySplines and how are they related
https://css-tricks.com/guide-svg-animations-smil/?utm_source=CSS-Weekly&utm_campaign=Issue-132&utm_medium=web
CC-MAIN-2018-47
refinedweb
11,096
61.06
Opened 11 years ago Last modified 4 years ago #1171 new bug GHC doesn't respect the imprecise exceptions semantics Description Yhc bug 122 appears to be a GHC bug: , discussed here: To reproduce: Download and install Yhc two separate times, once doing "scons" to build, once doing "scons type=release". Follow the steps outlined in the above bug report with the standard version, and it gives the correct behaviour. Try again with the release version and it gives the wrong behaviour. GHC 6.4.2 works fine. The difference between the two is that type=release builds with -O, normal doesn't. Changing the code slightly, by introducing "obvious" assert statements in Package.hs makes the bug go away. Creating reduced tests cases didn't get very far... This has been replicated on Mac and Windows with GHC 6.6. If you have any further questions about the code then feel free to email me or the Yhc list. Marking as severity=major because silently doing something different is about as bad as they come. Thanks to Thorkil for doing the detective work to track this down as far as GHC. Attachments (1) Change History (20) comment:1 Changed 11 years ago by Changed 11 years ago by comment:2 Changed 11 years ago by With a recently built ghc HEAD, I cannot reproduce the problem: Thorkil-Naurs-Computer:~/tn/GHC/trac/1171/work/1171Material thorkilnaur$ ghc --version The Glorious Glasgow Haskell Compilation System, version 6.7.20070223 Thorkil-Naurs-Computer:~/tn/GHC/trac/1171/work/1171Material thorkilnaur$ sh ghc.sh [1 of 2] Compiling Package ( Package.hs, Package.o ) [2 of 2] Compiling Main ( T3.hs, T3.o ) Linking T3 ... T3: 2007-Feb-25 01.57 T3: Error: File not found [1 of 2] Compiling Package ( Package.hs, Package.o ) [2 of 2] Compiling Main ( T3.hs, T3.o ) Linking T3 ... T3: 2007-Feb-25 01.57 T3: Error: File not found Thorkil-Naurs-Computer:~/tn/GHC/trac/1171/work/1171Material thorkilnaur$ As we can observe, the same (correct) output is produced, both in the -O and the non -O case for this ghc. comment:3 Changed 11 years ago by wrong-code bugs are a high priority. comment:4 Changed 11 years ago by Great testcase, thanks Thorkil. The bug also happens on Linux/amd64 with 6.6, but the 6.6 branch looks OK. I'll leave the bug open for now as a reminder to me to put the testcase in the testsuite. Thanks Ian comment:5 Changed 11 years ago by test added as cg059 in HEAD and 6.6 branches. Still leaving the bug report open for now as we may wish to chase down the actual problem so we can confirm it really has been fixed, rather than just no longer being tickled in this case. comment:6 Changed 11 years ago by Not a bug!!!! Let's look at the code: case (local,res) of ([x], _) -> return () (_, [x]) -> return () ([], []) -> raiseError $ ErrorFileNone (as, bs) -> if as++bs == [] then error "Empty as++bs" else raiseError $ ErrorFileMany file The last two branches are both _|_. If the value of an expression is definitely _|_, then any _|_ will do: in this case you get the "Empty as++bs" error instead of the one you were expecting. To fix the program you need to use ioError instead of error. The way this works is like this: after the first two alternatives have been eliminated, we're left with case (local,res) of ([], []) -> raiseError $ ErrorFileNone (as, bs) -> if as++bs == [] then error "Empty as++bs" else raiseError $ ErrorFileMany file which is semantically equivalent to _|_, and GHC can detect that. However, we don't gratuitously replace it with undefined, of course; in fact GHC compiles this expression: let z = if as++bs == [] then error "Empty as++bs" else raiseError $ ErrorFileMany file in z `seq` case (local,res) of ([], []) -> raiseError $ ErrorFileNone (as, bs) -> z It's safe to evaluate z eagerly, because the whole expression is known to be _|_. So there you go. I don't know whether it's possible to modify the simplifier to get a more expected result here; I suspect not without sacrificing some performance. comment:7 Changed 11 years ago by Wow, you really claim that if the program calls 'error' at all, then _any_ call to error will do!? If my program has an ArithmeticOverflow, then it is OK to report DivideByZero instead? The semantics of H'98 clearly say that pattern-matching is left-to-right, top-to-bottom, so case [] of [] -> error "OK" _ -> error "broken" really _must_ give the error "OK", not the error "broken". See H'98, Section 3.17.3, Figure 3.1, case (b). comment:8 Changed 11 years ago by Such behavior would probably be excusable, indeed desirable in a Haskell 98 compiler. But GHC is not just a Haskell 98 compiler - it also claims to follow the document "A Semantics for Imprecise Exceptions". And that document, if I recall correctly, does not specify a single _|_ but rather allows expressions to evaluate to a set of expressions, and changing the set of expressions returned is not a semantically correct compiler. Does yhc use Control.Exception.catch? If so, I would call this behavior Incorrect. comment:9 Changed 11 years ago by Indeed our paper is good background reading for this. Consider f :: Bool -> Int -> Int f True x = error "urk" f False x = x We'd probably agree that f is strict. And hence we can use call-by-value. But look at the consequences: g x = f x (error "flop") Since f is strict, we'll use call-by-value, and hence g will (always) craxh with "error: flop". But is that right? After all, if we simply inline f we get g x = case x of True -> error "urk" False -> error "flop" which is very similar to your program. (In your case GHC has made the reverse transformation, lifting one of your case branches out as a shared expression.) So by doing strictness analysis, GHC is increasing the set of exceptions (see the paper) in the denotation of the program. That's a bit confusing, I grant. It's easily stopped, too, by stopping GHC treating error like bottom; but that would make many functions less strict. It'd be interesting to measure the performance impact of this. So I'm re-opening the bug because I think it's a legitimate and interesting question what the "right" behaviour should be. I'd like to add a flag to GHC to give the more conservative behaviour. It's the first time this has happened; interesting! Simon comment:10 Changed 11 years ago by For reference, Yhc has now been patched to do |unsafePerformIO $ putStrLn "msg" >> exit|, which should fix this behaviour. Perhaps the problem is that there are two different uses of error, one is as an internal assertion - typically introduced by an incomplete pattern. The other is the programmer explicitly stating that the program should stop now and give the user a message. The Safe [1] library provides an "abort" function to handle the latter, partly so automated checkers can tell the difference between good and bad calls to error. [1] comment:11 Changed 11 years ago by Yes, I though we were sticking to the imprecise exception semantics, but in fact we're straying outside a bit. I've modified the ticket subject. In response to Neil: why use unsafePerformIO rather than IO exceptions here? I think you're asking for more trouble... comment:12 Changed 11 years ago by The below is from e-mail and IRC conversations that happened in parallel with this bug report. I've editted things slightly to make them flow better, but hopefully haven't changed any important meanings. I said: In I think you claim in section 4.3, in the rationale for the Bad case, that if we have f x = case x of True -> error "A" False -> error "B" then the call f (error "X") is allowed to raise B, as this permits transformations like f' x = let b = error "B" in b `seq` case x of True -> error "A" False -> b which in turn is justified because the case expression is strict in error "B" (as well as every other expression). However, the Ok case tells me that if I call f True then I can get the errors raised by the True -> error "A" branch only. Thus it must raise A. But with the above transformation f' raises B. I also think that this behaviour is very confusing to users; it makes sense that error "A" + error "B" needs to evaluate both values, so throwing either exception is reasonable, but in f True it is "obvious" that A is raised and B is not! Traditionally, we would say that e is strict in x if x = _|_ => e = _|_ However, with the set-based imprecise exceptions, in which we distinguish between different bottoms, it seems to me that a better definition would be that e is strict in x if x = Bad xs => e = Bad ys and xs \subseteq ys Thus, for example, a case can throw an exception if either the scrutinee can or /all/ the branches can, i.e. in the Bad case in 4.3 we take the big intersection rather than big union. So we wouldn't be allowed to pull error "B" out of the above case, but we would still be able to translate case x of True -> y False -> y into y `seq` case x of True -> y False -> y I am also unconvinced by a non-terminating program being allowed to throw anything it likes. It seems much nicer to permit it only to either not terminate or to throw a special exception, here-on written N. I haven't written a denotational semantics or anything, so perhaps this would all unravel if I tried, but here are some example definitions followed by what exceptions I think various expressions ought to be able to throw; are there any obvious nasty corners I have left unexplored?: f x = case x of True -> error "A" False -> error "B" g x = case x of True -> error "C" False -> error "C" h () () = () i = i j = error "D" + j ----- f True A f (error "E") E g True C g (error "F") C or F h (error "G") (error "H") G or H i N or non-termination j D, N or non-termination I also haven't looked into the performance implications, although personally I'd prefer a semantics that is more intuitive along with a few more bang patterns sprinkled around. Simon PJ replied: I think you are basically right here. Another way to say it is this. In 4.5 we claim that GHC's transformations can reduce the set of possible exceptions from a term, but not increase it. But the (current) strictness analysis transformation increases it. I agree that is undesirable. As I say on the Trac, we could change this at the cost of making fewer functions strict. I can tell anyone how to do this, if you want. It would be good to measure the performance impact of doing so. Simon M has a memory that there is some problem, possibly related to monotonicity, with only allowing non-terminating programs to either not terminate or throw a special non-termination error, rather than allowing them to behave like any bottom they wish as the imprecise exceptions paper allows them to. However, he can't remember what the problem actually is; if anyone can then it would be good to have it documented. Regarding Simon PJ 's earlier comment It's easily stopped, too, by stopping GHC treating error like bottom; but that would make many functions less strict. It'd be interesting to measure the performance impact of this. Simon M replied: It would be a shame if error wasn't treated as _|_, because that would lose the opportunity to transform Case error "foo" of alts ===> error "foo" wouldn't it? This is only dead code elimination of course. and Simon PJ followed up with: You are right that the strictness of an Id would have to distinguish: - normal - diverges (bottom) - crashes (calls error) Currently it does not. Changing the strictness analyser to make 'error' look like 'normal' rather than 'diverges' would indeed have the effect of making GHC not realise that (error "x") was a case scrutinee that could be simplified. So yes, there is a bit more to testing the effect of making 'error' less strict than I was suggesting. Quite doable, but more than a moment's work. Regarding my strictness test x = Bad xs => e = Bad ys and xs \subseteq ys Simon M suggested that an implementation might: invent a magic exception X only used by the strictness analyser, and treat this as x = Bad X => e = Bad ys and X `elem` ys All other exceptions (and _|_) can be mapped to the same thing; all that matters is whether x's exception was raised or not. I imagine you'd need to fiddle around with the strictness analyser's domain. The sort of case where Simon PJ is worried we will lose performance is f x [] = error "Can't happen" f x (y:ys) = ... strict in x ... where we would no longer be allowed to claim to be strict in x, as f (error "X") [] should only throw "Can't happen". Debatably it's better to tell the compiler explicitly that you want it to be strict in x with f !x [] = error "Can't happen" f !x (y:ys) = ... strict in x ... anyway, rather than relying on the strictness analyser to figure it out. Finally, I am not entirely convinced by the semantics of imprecise exceptions as given in the paper; they tend allow too many exceptions to be thrown, possibly under the assumption no compiler would actually transform a program in such a way that the unexpected exceptions actually would be thrown. For example (error "A") (error "B") is permitted to throw B, even though error "B" would never be evaluated under non-strict evaluation. Similarly, case error "A" of True -> error "B" False -> 'x' is allowed to throw B. Incidentally, I do think it is reasonable for case error "A" of True -> error "B" False -> error "B" to throw B. comment:13 Changed 10 years ago by Stefan O'Rear happened to write in haskell-cafe a performance reason for allowing non-termination to throw any exception: "When you see an expression of the form: f a you generally want to evaluate a before applying; but if a is _|_, this will only give the correct result if f a = _|_. Merely 'guaranteed to evaluate' misses out on some common cases, for instance ifac: ifac 0 a = a ifac n a = ifac (n - 1) (a * n) ifac is guaranteed to either evaluate a, or go into an infinite loop - so it can be found strict, and unboxed. Whereas 'ifac -1 (error "moo")' is an infinite loop, so using a definition based on evaluation misses this case." comment:14 Changed 9 years ago by comment:15 Changed 9 years ago by comment:16 follow-up: 17 Changed 5 years ago by This reads like a misunderstanding of the semantics described in the paper; close as wontfix? comment:17 Changed 5 years ago by This reads like a misunderstanding of the semantics described in the paper; close as wontfix? There really is a mismatch between the semantics in the paper and what GHC implements, so I think it's good to keep the ticket open. Simon knows the full details, but as I understand it we think that the semantics should take into account the reordering that the strictness analyser does (ie. GHC is right, the semantics is wrong). comment:18 Changed 5 years ago by Here are some more notes about this subject, culled from an exchange betwen Simon M and Simon PJ. Consider this: f :: Bool -> (Int,Int) -> Int f x y = case x of True -> error "urk" False -> case y of (a,b) -> a+b Can we pass y unboxed? Yes, we do so, because we regard bottom (the error branch) as hyperstrict in everything. So we do a w/w split thus f x y = case y of (a,b) -> case a of I# a1 -> case b of I# b1 -> fw x a1 b1 That means, of course, that (f True (error "a", 3)) would throw (error "a") not (error "urk"). The paper doesn't allow this, but I believe that it's crucial for strictness analysis to work well. However, if the function unconditionally diverges, it seems stupid to unbox: f :: (Int,Int) -> a f x = error "urk" Here it seems fruitless to do the same w/w/ split as I gave above, even though the semantics justifies it equally. So pragmatically we do NOT do w/w for a hyper-strict demand. Another variant: f :: (Int,Int) -> a f (x,y) = error (show x) main = ...(f (3, error "no no"))... Similar to the previous example, the (error "no no") is not even used, so it would be very odd indeed to evaluate it before the call to f. See Note [Unpacking arguments with product and polymorphic demands] in stranal/WwLib. A stand-alone file (1171Material.tar.gz) that illustrates this problem on a PPC Mac OS X 10.4 is attached. It has been produced by reducing the Yhc code involved in the original report. Here is a session that uses this file: In this session, T3+Package is compiled and executed twice, first with -O, then without -O. And, as can be observed, the results are different. In the wrong case - the one where -O is used - the erroneous result apparently comes about by selecting the wrong alternative in the case in Package.hs: As can be seen from the output produced, in spite of as+bs being == [], the last alternative is selected when the code is compiled with -O.
https://ghc.haskell.org/trac/ghc/ticket/1171
CC-MAIN-2018-13
refinedweb
3,041
68.5
Being new to programming my question might seem a little basic one, what I want to is to print all the days mentioned in the enum using a loop or otherwise. I have used a console application for the same. Tips to improve basics of C# language coding capabilities along with the answer will be much appreciated. using System; namespace _28_11_2016_enum { class Program { static void Main(string[] args) { weekdays wd = weekdays.mon; for (int i = 0; i < 7; i++) { int a = (int)wd; a = a + i; wd = (wd)a;// faulty code. Console.WriteLine(wd); } Console.Read(); } enum weekdays : int { mon, tue, wed, thur, fri, sat, sun } } You don't have to loop - Enum.GetNames returns the names, and string.Join concat them together into a single string: // mon, tue, wed, thur, fri, sat, sun Console.Write(string.Join(", ", Enum.GetNames(typeof(weekdays)))); in case you want int values: // 0, 1, 2, 3, 4, 5, 6 Console.Write(string.Join(", ", Enum.GetValues(typeof(weekdays)).Cast<int>())); Edit: if you insist on loop I suggest foreach one: // mon == 0 ... sun == 6 foreach (var item in Enum.GetValues(typeof(weekdays))) { Console.WriteLine($"{item} == {(int) item}"); } In case of for loop // do not use magic numbers - 0..7 but actual values weekdays.mon..weekdays.sun for (weekdays item = weekdays.mon; item <= weekdays.sun; ++item) { Console.WriteLine($"{item} == {(int) item}"); } However, in real world applications, please, use standard DayOfWeek enum. Edit 2: your own code improved: static void Main(string[] args) { for (int i = 0; i < 7; i++) { // do not use magic numbers: what does, say, 5 stand for? // we want weekdays, not int to be printed out weekdays wd = (weekdays) i; Console.WriteLine(wd); } Console.Read(); }
https://codedump.io/share/WeGSDfrzeswT/1/print-weekdays-from-enum
CC-MAIN-2016-50
refinedweb
282
75.5
Odoo Help Odoo is the world's easiest all-in-one management software. It includes hundreds of business apps: CRM | e-Commerce | Accounting | Inventory | PoS | Project management | MRP | etc. HI, You can use computed fields: because its is computed on-the-fly. Define a Date field with compute. Please try this script: date = fields.Date(compute='_compute_date', string='Date') def _compute_date(self): for record in self: end = datetime.date.today() record.date = end Hope this helps. About This Community Odoo Training Center Access to our E-learning platform and experience all Odoo Apps through learning videos, exercises and Quizz.Test it now
https://www.odoo.com/forum/help-1/question/how-to-add-dynamically-changing-date-on-our-custom-form-on-top-102724
CC-MAIN-2017-13
refinedweb
102
53.27
14 April 2011 18:38 [Source: ICIS news] HOUSTON (ICIS)--Polystyrene (PS) prices were slowly rising in ?xml:namespace> Prices have been frozen in PS material from the main producer, Petrobras Energia, has not had any increases in April, following government guidelines. However, all imported material that entered the country to cover the shortages produced by nearly a month-long strike at the Zarate production plant was being sold at higher prices. Domestic PS production was catching up with shortages in the market, but supply remained vastly inadequate for the domestic needs. The 66,000 tonne/year PS plant, idled for nearly a month by labour unrest, was restarted on 25 March and has been running at full capacity to restore supply and inventories. Imports from the US Gulf and PS prices in Argentina were in the range of $2,054-2,174/tonne DEL (delivered) (€1,417-1,500/tonne) for crystal grade, while high-impact PS (HIPS) prices were at $2,174-2,318/tonne, based on ICIS data. Prices for small-volume buyers were likely to be even higher. ($1 = €0
http://www.icis.com/Articles/2011/04/14/9452740/ps-prices-inch-up-in-argentina-despite-price-controls.html
CC-MAIN-2014-52
refinedweb
184
56.18
I worked more last night on my ASP.NET MVC project. Second time sitting down to work on it. I spent the evening dealing with ViewData, trying to find the best way to wrap everything the UI needs into a nice ViewData object. The night didn't end well. As I mentioned, I'm implementing a wizard. The user will select one or more products, and then they will be prompted with additional steps for each product selected. I wanted the View to be able to do something like this: 1: public class StaplerReview : ViewPage<ProductWizard<Stapler>> 2: { 3: } 1: <h2>Details about your selected stapler</h2> 2: <ul> 3: <li>Staple Size: <%# ViewData.ActiveStep.Product.StapleSize %></li> 4: <li>Number of Pages: <%# ViewData.ActiveStep.Product.NumberPages %></li> 5: </ul> 6: <h4> 7: You are on step <%# ViewData.ActiveStepIndex + 1 %> 8: of <%# ViewData.StepCount %> 9: </h4> But man, this starting getting really, really messy. It was hard to follow through the code and I couldn't quite get it to work the way I wanted. The Model ended up knowing too much about how it was going to be presented. Jeffrey Palermo to the rescue! A strange twist of fate led me to subscribe to Jeffrey's blog tonight and read up. Just 16 hours ago I was giving up on what I was trying to do with ViewData, and I had no ideas of where to go next. And now Jeffrey has shown me the light. Here's what Jeffrey laid out on the table: There are some challenges with ViewData as it stands now: A key is required for every object, both in the controller and view. ViewData.Add("conference", conference); A cast is required to pull out object by key. (ScheduledConference)ViewData["conference"] The ViewPage<T> solution discards the valuable flexibility of the objectbag being passed to the view. <%=ViewData.DaysUntilStart.ToString() %> where ViewData is of type ScheduledConference Facts (or my strong opinions): Repeated keys from controller and view increase chance for typos and runtime errors. Casting every extraction of an object in the view is annoying. Strong-typing ViewPage only works for trivial scenarios. - For instance, suppose once logged in, every view will need the currently logged in user. Perhaps the user name is displayed at the top right of the screen in the layout (master page). Since the layout shares the viewdata with the page, we immediately have the need for a flexible container that supports multiple objects. A strongly typed ViewPage<T> won't work without an elaborate hierarchy of presentation object that are themselves flexible object containers able to support everything needed. Once you get there, you are almost back to the initial dictionary. There are some challenges with ViewData as it stands now: Facts (or my strong opinions): His SmartBag allows any number of objects to be exposed to ViewData instead of just one. While I will likely implement my own flavor of this, the concept is the exactly solution I needed. Now, my ViewData can be strongly typed, but it can provide multiple objects for me, and they don't have to know about each other. Here's what I'm thinking will be in my bag: The View will then be capable of doing this: 1: // ViewBase inherits from ViewPage<CurrentContext> 2: public class StaplerReview : ViewBase 3: { 4: } 3: <li>Staple Size: <%# ViewData.Get<Stapler>().StapleSize %></li> 4: <li>Number of Pages: <%# ViewData.Get<Stapler>().NumberPages %></li> 7: You are on step <%# ViewData.Get<Wizard>().ActiveStepIndex + 1 %> 8: of <%# ViewData.Get<Wizard>().StepCount %> I feel much better about this implementation. Thursday, January 24, 2008 10:06 PM Glad to be of
http://jeffhandley.com/archive/2008/01/24/viewdata-seems-too-constraining.aspx
CC-MAIN-2017-17
refinedweb
611
64.91
Latest Active Directory Interview Questions and Answers Active Directory Interview Questions - 1) Explain what is Active Directory? - 2) What is the port no of LDAP ? - 3) What is KCC? - 4) What is SYSVOL Folder? - 5) Explain the difference between Enterprise and Domain Admin groups in Active Directory? - 6) What are application partitions? When do I use them ? - 7) What are sites? For what they are used? - 8) What is Forest? How to check tombstone lifetime value in your Forest ? - 9) What is LDAP? - 10) Please Explain Active Directory Schema? - 11) Explain domain controller in AD? - 12) List the ports used by Active Directory? - 13) Where Active Directory database held and how would you create a backup of the database? - 14) What is Domain Tree? - 15) What is RODC ? - 16) What is Subnet? - 17) How to configure Universal Group Membership Caching in AD? - 18) What Export-VM command do? - 19) Explain namespace? - 20) What are Schemas? - 21) What are Flat Namespaces? - 22) What are Hierarchical Namespaces? - 23) List different types of containers in AD? - 24) List the components of an Active Directory structure? - 25) What is Multiple-Master Replication? Below are the list of Best Active Directory Interview Questions and Answers Active directory just as the name suggests is a directory service. This directory service acts as a shared platform of information for organizing, managing, locating and administering the daily items and the network sources. This is developed by Microsoft solely for supporting the Windows operating systems. The active directory is found in the processes and services section of the windows server. A number of services associated with identity and as well as are based on a directory now come under one roof of active directory. LDAP uses CLDAP as its transport protocol. The Default Port no. LDAP is on Http is 389 and 636 over SSL. KCC is an acronym for Knowledge Consistency Checker. In Active directory, KCC component is responsible for generation replication topology between domain controllers. Sysvol folder/directory refers to a location on the Windows Operating System (OS) where it stores the server's copy of public data and files for the domain. Sysvol is also known as SYSFOL. RODC can be abbreviated as a read-only domain controller. RODC can be explained as a controller of the domain that has partitions of Active Directory Domain Services. But they only possess read-only partitions. RODC is readily available in the Windows server operating system version of the year 2008 and its further greater versions. It has mainly been designed to be used in branch offices that are not able to support their own domain controllers. The subnet, popularly known as subnetwork can be understood as one of the logical subdivisions of the IP network. Now subnetting is the name given to procedures in which one single network is divided into two or more subnetworks. Now the system that is connected to a subnet is recognized or referred to with an identical and most important bit-group. This lies in the IP address of the respective system. Flat Namespaces can be used to find which are those libraries and executables other than predefined libraries and executables offer all symbols like functions and external variables. The libraries when loaded might depend on a symbol and that is why it can look in the Flat Namespace. After all the symbols are found, the library adds its own symbols in its list. The amount of possible collisions is one of the biggest advantages of this. The duty of dealing with the collision is given to the Operating System. Gpupdate /force command is a policy of Windows to refresh or update your group policies by using a manual method. Although the archive Directory of our PC does it by unknown sometimes you may need to do force updates of group policies. In certain situation, you can use > gpupdate /force No matter if there are no changes in the group policies of the computer, this command will forcibly tell windows to the app for an update of GP settings. This not only forces the background refresh but it will also force the foreground refresh of the group policies. If in case you only wanted to refresh your policies then use > gpupdate
https://www.onlineinterviewquestions.com/active-directory-interview-questions/
CC-MAIN-2020-10
refinedweb
708
59.4
UAH Global Temperature Update for October 2016: +0.41 deg. C by Roy W. Spencer, Ph. D. Oct): : NOTE: About an hour after publication, the story was edited to correct a typo from 2017 to 2016 on the data table (h/t to Marcus) 122 thoughts on “UAH Global temperature update – down slightly for October” Ties with 2015 as the warmest October on record in UAH v6. If you use Micro$oft Excel to plot your blue scatterplots, they can give you an actual band limited continuous function, which might actually conform to the Nyquist criterion. I use it all the time and they use some sort of cubic spline fit to the actual data set points. Otherwise, the discontinuous plots convey nothing and predict nothing for the future. Trend line “predictions/projections” evoke some questions that all have yes/no answers. 1/ Will the next data point obtained be the same as the current most recent point. ? 2/ Will it be higher …. ? 3/ Will it be lower …. ? 4/ Will it be the same as ANY previous data point in the set ? 5/ Will it be higher …. ? new record maximum. 6/ Will it be lower …. ? new record minimum. NONE of those questions can be answered from these graphs. G 6/ Marcus, I’m curious why you left the last 12 months off your graph. Where did you find the Oct 2016 numbers for UAH V6 ?? They’re stated in the table on Roy Spencer’s post. Why does GISS make 1997-8 so much cooler than 2016? What do you mean by “GISS” ? If you are referring to their land and sea index “LOTI” they have rigged the SST data by “correcting” daytime to data using night time data. “If you are referring to their land and sea index “LOTI” they have rigged the SST data by “correcting” daytime to data using night time data.” Obviously you havent looked at the data. Differences between Satillite and Surface are NOT PRONOUNCED if you compare Ocean only That is why, as will soon be shown, the satillite records of SST VINDICATE KARL. too funny… wait for it… CO2 cannot warm the oceans. OOPS. Mosh just tried to sell another LEMON. !! Mosher, isn’t atmospheric water vapor the best satellite proxy for SST? Btw, I saw a glimpse of the new HADISST2 dataset, still unofficial, that seemingly corroborates ERSST v4 during the satellite era: Andy, that’s true for radiation. However CO2 warms molecules around it by absorbing radiation, then transfering that energy as rotational/vibrational energy. IE heat. That is, it gains energy then transfers that energy to other molecules by bouncing into them. It doesn’t matter if that other molecule is O2, N2 or H2O. 1. They measure different things. 2. They measure different things in different ways. there is ZERO expectation that the two metrics will be the same or even comparable Mosh’ says: Mosh’ says: too funny …. They don’t “make it cooler”. Surface temperature behaves different than troposphere temperature. It is not a GISS problem: other surface measurements behave similar. Nonetheless, even UAH6 shows for the Globe warmer temperatures for 2015/16 than for 1997/98 as well. Why don’t you wonder about that? But it does not in the Tropics: It is rather puzzling. Despite falling ENSO numbers all year, the last 4 months have been at least 0.05 above the June anomaly. A precision of hundredths of a degree is beyong the accuracy of most thermometers used to measure these temps so your comment should read “…have been at least 0.00 above the June anomaly”. When you average the daily values of thousands of measurements to build a monthly mean, you need such precision. Averaging lots of readings together, when the readings come from separate instruments in widely different places, does not increase the accuracy of the data. You are correct that it does not increase the accuracy of the data, but it does increase the accuracy of the estimator of the population mean which you are measuring. The standard error decreases as the sample size increases. This is basic statistics which you seem to be unaware of. . . That only applies when you are measuring the same thing the same way, eg in a factory. With surface temperatures you most certainly ARE NOT. “beyong the accuracy of most thermometers “ These are not thermometer measures. Not that I think they are more accurate than those. It rather puzzles you I guess. Probably because you live with the supposition that a global averaging of troposphere temperatures and ENSO indices are by definition in correlation (your guest post about “UAH and ENSO” is the best example of the mistake). Never mind, need new glasses…D’oh… La Nina appears to be forming again and should impact the last two months but 2016 will still likely be the warmest year in the short record. The UAH figures for November/December would need to average below 0.23 for 2016 ‘not’ to set a new record. It’s possible but seems unlikely. “in the short record.” Thank you very much for that statement, it always amazes me that we are attempting to extrapolate, or even base decisions on the spending of billions (I daresay trillions) of dollars on such an insignificant data set. Let’s have some fun with the past 38 years of the data we have …… Life has existed on the planet between 3.5 to 4.1 billion years – I will use 3.8 billion years for ease of the math and the assumption that, “where there is life, there is climate.” Two ways to visualize this, in the time dimension and in the space (okay, area) dimension. My basis for time is a calendar year (averaged for leap years, therefore rounded to 365.25 days), and my basis for area is the US football field. If we were to use one calendar year for the length of time that “climate” has existed on this planet, then the past 38 years fits into the final 0.3 seconds of December 31. Put another way, you are between “1” and “Happy New Year” when accurate temperature data began being recorded, and a whole lot closer to “Happy” than to “1”. If we were to use a football field as the basis for the size of “climate” on earth, then you only need to look at a tiny fraction of space in the end zone, actually a square with sides of 0.288″, as the area occupied by accurate recorded temperatures. Therefore, let’s use this “short record” as the basis for the hysteria and paranoia surrounding climate change, and let’s also state, very inaccurately I might add, that CO2 is both a greenhouse gas (which it is not, only in confined laboratory conditions and only via specific wavelengths does it actually absorb energy, and this is called a resonance frequency, and all molecules have it, fortunately the sun emits radiation over an incredibly large bandwidth, thereby eliminating the “greenhouse” effect of basically all molecules – and there is also this thing called the second law of thermodynamics), and that one part in 2,500 (i.e. 400 ppm) can somehow miraculously be responsible for warming the other 2,499 parts, and let’s use this to base regional, state, federal and UN policies on. More pragmatism. At the end of the Ordovician period, approximately 450 million years ago, there was a significant glacial event (a.k.a. Ice Age), yet the CO2 concentration was 4,000 ppm, or TEN TIMES current. How does CO2 drive temperature? At the end of the Jurassic period, approximately 150 million years ago, there was yet another significant glacial event with a CO2 concentration of 2,000 ppm, or FIVE TIMES current. How does CO2 drive temperature? Please do tell. A very select few are becoming incredibly wealthy at the expense of a vast majority of very poor, and as always, money doesn’t talk, it screams. Note we are now at “beta5” for Version 6, and the paper describing the methodology has just been accepted for publication. @roy I have asked a few times now how you fixed the 0 K and 285 K calibration points? and with the continued updates in the no. of versions I get increasingly skeptical of your results, because it indicates that you don’t really know. My results are indicating that we are losing heat. From the top latitudes down… and more so in the SH but even looking at the whole earth if you know how…. there is no man made global warming and it is actually cooling. live with it…. more cold coming up right ahead. Why would anyone expect a sine wave fit ? The bottom ends of your blue (presumed actual data) graph shows NO HINT of a change in curvature consistent with a sine wave. Try fitting your blue curve to the form: y = exp(-1/x^2) G @george well all my data are showing that the half cycle is 43 years: in fact I made a mistake with the Alaskan data of assuming a total cycle of 88 years when in fact it was exactly 86.5 years. Why would think that Minimum temperatures collected in South Africa could tell you ANYTHING WHATSOEVER about the system as a whole. And also,, UAH doesnt Estimate Minima or Maxima. Apples and Oranges. @ steven mosher minima are important AGW is claiming a GH effect due to more CO2 GH effect is like increasing minima similar to observation of an increase in minima when it is overcast in winter [here] there is no increase in minima [average, global] ergo there is no AGW My take from your comment: Why would think that CO2 concentrations collected in Hawaii could tell you ANYTHING WHATSOEVER about the system as a whole. Why would you think that temperatures based on measurements from barely over half the land surface have any bearing to reality? And so many of those thermometer, are pretty much guaranteed to be JUNK. Either at airports, heavily affected by urban growth etc etc You can prove me wrong by showing us pictures of the pristine temperature stations that provide data to the 6 places marked by circles Then you can show us where the data for all the grey area comes from. Come on Mosh.. step up ! Just for fun, I picked up one of the temperature series that GHCN refuses to use in Central Africa. See the huge warming trend… NOT ! Andy, closer to 20% of the land surface. Next to nothing for sea surface. Polynomial fits have no predictive power and should never, ever be used to show the fit outside of the fitted data. Roy used to include them on his UAH plots (with the disclaimer “for amusement purposes only”), but dropped them years ago. If you carry your parabola out another 10 years things start looking pretty silly! BTW, you used the same .jpg file for both images. @Ric sorry there was an unintended double copy and paste of the southern Africa result. Here is the global result: quadratics are fine to approach half the sine wave (half Gleissberg= 43.25 yrs) in K / time squared as long as you have 4 points [ 4 regressions showing K or C /annum from each weather station, averaged] to define the function showing acceleration/deceleration. in the case of my estimate for the warming/cooling of earth, the sampling procedure of weather stations is important 1) must be random, daily data for at least 45 years 2) equal weather stations NH and SH 3) all weather stations must balance to zero latitude [ longitude does not matter as long as we look at K/annum] 4)70/30 @ sea/inland Not so difficult for a class of first year stats students to check my results with a sample of 50 weather stations? Enjoy! Dr. Spencer, the anomaly is +0.41C – with what margin of error? I was taught to never provide a measurement without a margin of error. Sorry to speculate, but Isn’t the convention in physics that the last digit is uncertain? So an implied +/- 0.005 perhaps? 1. Its not a measurement, its an estimate. 2. The biggest uncertainty is the structural uncertainty which has only been estimated for RSS. Here is the official documentation.. but not for the current version.. needs updating .” Steven, does your first statement that it is an estimate not a measurement mean that there is no need to state a confidence interval or error estimate or some other measure of uncertainty? Does your second statement about sources of error mean that you believe there are better ways to measure what the satellites do? It is somewhat unusual to add, in this context, such information to single values. The margin of error, the confidence interval are rather in use for series, and denote how many single measurements deviate from a linear estimate among the series. This margin therefore depends on the number of measurements. Thus when considering all monthly values published by Roy Spencer since december 1978, you obtain a linear estimate of 0.122 ± 0.009 °C per decade. But for the period: january 2011 till now, you obtain 0.900 ± 0.105 °C / decade. The margin of error is here bigger than the value itself, what is simply absurd. Not so B. If you measure something for any scientific or engineering purposes you need to state the precision. For example you may measure the length of the same object as 12cm, 12.3cm or 12.2784cm depending on your requirements. In those cases the precision claimed is implicit in the last digit. The same must be said of an estimate. You are right FG as far as most engineering measurements are concerned. On a lathe you often enough need a precision of e.g. ± 0.05 mm: I experienced this decades ago. Not mentioning such a precision makes any discussion about the product simply redundant. But for single temperature measurements, I never saw it; as I wrote, It is known to me only in the context of time series for which you execute e.g. a linear regression. last graph [for the drop in global minima] should have been showing that we are cooling Curious, what would the trend need to look like in 2017 for “the Pause” to return? what Pause? Sigh, so much willful ignorance. The pause which so many climate experts have been standing on their heads around trying to explain. Or should I say “mask” by “adjusting the data”? The one that was interrupted by the TRANSIENT of the EL Nino. You do know what transient means, don’t you. Its what you will be when you leave BEST.. “what Pause” This pause. This pause: Want more? Here. Yes, ^THAT^ Pause. Acceptance leads to healing… Steven Mosher said, November 1, 2016 at 12:09 pm: This pause: “Curious, what would the trend need to look like in 2017 for “the Pause” to return?” There is simple arithmetic you can do. UAH did show a near zero trend from about May 1997 to Feb 2016, and the intercept was about 0.25°C. As long as the mean since Feb 2016 is greater than 0.25, the trend since 1997 will be positive, else not. This is an approx good for a year or two at least. The average since Feb is now way higher, and continued monthly values of around 0.4 are adding to the task. The trend won’t stop rising until months are below 0.25 °C, and no pause until a sequence of months have been below 0.25 by as much cumulatively as 2016 has been above. That’s very unlikely. It’s called La Nina, I suspect you have heard about them, even if you want to pretend otherwise. Why is that “very unlikely”? Why unlikely? I looked up details. From Jul 1997 to Jan 2016, there was zero trend, and the average was 0.14C (I had V5.6 above). From Feb to Sep, the average was 0.55. For pause return, the average since Feb 2016 has to be 0.14. So what would it take? Either: 1. Eight months averaging -0.26C. But such months individually are very rare in the record. 2. 24 months averaging 0C. That did happen once, around 2008. But it didn’t start from temps of 0.4C. And yet, UAH seems to be running along at about 0.4. Each extra month which is 0.26 above the .14 level requires another balancing month 0.26 below, or -0.12C. Nick Stokes said, November 2, 2016 at 2:17 pm: Thanks. Yes, I understand that for you this whole thing is a purely statistical exercise. As it is for Monckton. I, however, fundamentally object to this approach. It tends to neglect the natural processes behind the data. Because what happens then when you have a towering El Niño spike towards the end of your data segment? You end up seeing the trees and not the forest. The purely statistically generated linear trend line will have a hard time coming back down. Even if the data itself were to drop back to the previous mean level. To me, all that is needed is for global temps to revert to a mean level fluctuating around +0.13 for a plateau to pick up, disregarding the ENSO noise: So for you, the data needs to come down to a level much lower then the level it was at before the late El Niño spike in order for the “Pause” to continue. For me, it only needs to come back down to where it was before … If you assume that the temperatures from now until the end of 2017 are the same as they were from November 1998 until December 1999, after 1998’s el Nino, (anomalies averaging just below zero), then the Pause will still not return (except for a new one stretching back to 2012). This longing for and belief in the return of the Pause reminds me of those South Sea Islanders who believe that one day Prince Philip will return. Richard B, Skeptics take things as they are, not how they wish they were. Therefore, there is no “longing” for a pause. It either happened or it didn’t. Rather, it is the refusal by some folks to admit that global warming paused for many years. For example, Steve M refuses to admit it happened. Maybe you should ask him why, since you don’t have that problem. Yes you’re quite right. My comment was somewhat tongue-in-cheek, with a glimmer of reality! This compares very favorably with the NCEP CFSR/CFSv2 numbers calculated by Dr. Ryan Maue of WeatherBell, where the temps have been bobbing up and down across the +.4C line since June. I wish they would put this graph on the public site. Every once and a while Bastardi and Maue put it on Twitter, so it is quasi-public. To me this is the best calculation of terrestrial-based calculations out there since it feeds the daily runs of the weather models. If you take out the El Nino spike it looks like the sustained rise since 2011 is holding. Having the last few months coming in very similar it looks like the downside of the peak is done. It may not get much lower. “Based upon this chart, it would require strong cooling for the next two months to avoid 2016 being a new record-warm year (since the satellite record began in 1979) in the UAH dataset.” There you go… even the satellite data. You say that as if you actually believed it was something meaningful. As all of us said back in 2014, nothing matters until after the El Nino is fully out of the picture. Of course you are an expert at cherry picking only the data that supports your point. (PS: We will be sure to bookmark this post so that we can taunt you with it in another year or so when the pause resumes.) …Ummm, it is called an “El Nino” ….What did you expect, a new Ice Age ? True, but it has halved from +0.84C in February. How does CO2 do that, I wonder? In the mean time Chigaco will be having one hell of a winter… too funny. Its like a member of BLM pointing at one bad cop and trying to draw a conclusion from cherry picked data.. who knew you learned your logic from Liberals must say that I am not really that much interested in your opinions would like to hear from roy spencer how he manages to keep stable probes against the current scorching sun and how the system can effectively be calibrated HenryP, instead of constantly expecting detailed personal tuition from Dr Spencer., maybe you should just do the leg work and start reading the published documentation. It is pretty hard going but it’s no one else’s job to do it for you and spoon feed you an easily digestible summary. Yep. And probably the rest of us in the northern 2-3 tiers of states. I’m grateful for the mild October and not getting my fingers frozen while working in the yard for a change. 99% of the time, there’s a cold dry wind out of the north. We actually even had an Indian Summer this year; haven’t had one of those in a long while. (that will be politically incorrect) not being native of your country, do explain to me what is understood by an Indian summer? Sure :) Wikipedia describes it better than I can – .” HenryP, ‘Indian Summer’ is an unusually warm period that occurs across much of the U.S., close to the autumnal equinox. It generally occurs in the last half of September, and lasts for a week or so. (From memory. You can also do a search…) thanks. The Indians lost it – perhaps that is a sign that summer will lose it against winter? as [natural] climate change sets in, – it is getting slightly cooler – it appears that the seasons shift a bit up – on our year calendar. when you see that you are still shoving snow in late spring you know that the big winter is coming…….apparently in some places in England there have been years during the LIA that there never was a summer…. 2016 would barely exceed 1998, by a fraction, yet CO2 concentrations have soared these past 20 years, and blown right past the so-called “tipping point”. So you smugly rest your case on statistically insignificant warming, obviously independent of CO2? Must be nice to be able to close your eyes, cover your ears, and chant “Naw-naw-naw-naw!”. @brians356 November 1, 2016 at 3:36 pm: It’s a prime requisite for the job. : why is temperature not responding to CO2’s sudden surge with a sudden surge of its own?” Because temperature is doing exactly what theory predicts. There is no instantaneous response from increased c02. lag. look it up. only three letter Searches are usually better with more than three letters. BTW, there’s also a couple months lag between SSTs, as in El Nino, and air temps, as in what UAH measures, it’s worth keeping that in mind when SST anomalys are changing rapidly. I think what you mean to say is CO2 is “lagging” ocean water temperatures. Look it up. I looked it up and I saw a variety of models all showing rapid warming, and below a line showing observed warming. The models almost all showed projected warming far higher than observed. Is that what they are supposed to do? A twenty year lag? How convenient. Fifty years ago, or a hundred years ago, if I had access to temperature reconstructions of past million years, or just the past 10,000 years, I could have predicted the current warming following the Little Ice Age, and with better accuracy in terms of rate and magnitude than the current climate models. Both the models and I would have the benefit of hind-casting, and it’s remarkable that their current theory is incapable of explaining past climate changes. It’s Al Gore-ish, claiming that CO2 increases (why? and how?) before warming. Look it up, Steven, it’s only four letters: fail. “There is NO response from increased C02. Just leave out the “instantaneous”… or just keep fooling yourself. . Mosh’s strong point is apparently not physics. The effects of absorbing radiation are indeed instantaneous but also cumulative. figure 7 from Spencer and Baswell 2011 compares the lag shown by both models and observations One of many things that models get fundamentally wrong, indicating that whatever theory is being used is being misapplied or is wrong. The negative side of that graph corresponds to radiation driving temp change ; the lag is about 12 months in the real world. Depending on the nature and magnitude of the feedbacks which we DO NOT understand and are mainly guesses, there will be a longer term settling towards a new dynamic equilibrium. SM wrote: Because temperature is doing exactly what theory predicts. No wonder he won’t admit the ‘Pause’ happened! Too funny… I wouldn’t call doing nothing for 14 years and then suddenly coming out of hibernation coincident to a massive El Nino…a “lag”, Don’t forget everyone that as temps have been rising since the LIA that any year that does NOT make a new record should make news . A new high does not mean it’s anything to do with humans – there have been new highs every few moths for 400 years . It’s only in “Hoaxworld ” a “record ” temp is big news . Mosher writes Far from it. Theory predicts strong positive feedback as represented in the GCMs. Reality has the warming happening at a much lower rate. This is looking more and more like a step increase following the ENSO event. Theory doesn’t predict that either. If there are years of pause following a step increase the the “C” part of CAGW all but vanishes. Here is my take: The “Quiet Sun” is having an unexpected warming effect, as the traditional winds are changed by the revolutionary lack of energy. Simply stated, the lack of winds over certain parts of the ocean leads to a lack of up-welling, which means less cold water is brought up from the deeps. This makes SST warmer, which makes air temperatures warmer. This in turn throws things out of balance, especially as the “Quiet Sun” makes the background colder even as SST makes things warmer. In order to regain balance, the planetary flow is more loopy and meridional, which brings a rush of warmer air to the Pole (where heat is squandered to outer space) and colder air is transported to lower latitudes (which may eventually chill SST). This is an ongoing process, and your bet is as good as mine what the next phase will be. I expect things to be more out-of-balance than in-balance for some time. In the end we likely will have a better idea of what the weather maps for the Dalton Minimum looked like. Caleb-san: Solar cycles have been collapsing since 1996. Since 1996, global temp trends have been ostensibly flat (excluding the late 2014~early 2016 El Nino spike): Your hypothesis doesn’t match the empirical data.. We’re in the early stages of a La Nina cycle. By mid-2018, a 22-yr flat global temp trend should reappear as the current La Niana cycle offsets the 2014~16 El Nino spike, despite 30%+ of all manmade CO2 emissions since 1750 being made over just the last 22 years… From 2019, the AMO will enter its 30-yr cool cycle and the PDO already entered its 30-yr cool cycle in 2008… The weakest solar cycle since 1790 starts around 2022.. In just 5~7 years from now, a global cooling trend will likely appear from 1996, which will finally end the CAGW scam. We’ll see soon enough. If the “quiet sun” were having an unexpected warming effect, I would think winds should be picking up as convection would likewise increase. My guess is that reaction would be secondary, after the warming effect. First the warming has to occur, before the winds pick up. The primary effect would be less wind, and thus less up-welling, and thus less cold water brought to the surface. I’m just guessing. The La Nina showed less up-welling than expected, and I took it from there. It’s the day/night cycle (warming and cooling) that drive the winds/convection currents, so the response to hotter temps would be instantaneous. Freeman Dyson says “A field of corn growing in full sunlight in the middle of the day uses up all the carbon dioxide within a meter of the ground in about five minutes. If the air were not constantly stirred by convection currents and winds, the corn would stop growing.” I agree with Caleb, the Earth is currently in a mixing phase due to the Sun’s impact on upper atmosphere lag to then settling down. The next 2 months look fairly chilled as things dry down. It is not unusual for there to be a little bounce back for a few months on the downside of an El Nino. In fact, this occurs in almost all El Ninos. . Secondly, in this last El Nino, Temps went up very fast and then came down very fast, dropping initially faster than they usually do. We are right on schedule right now for how temperatures should be going down after the 2016-17 Super El Nino. My troposphere model is basically bang-on this month and we should continue heading down for at least 6 months yet. Nice job, sounds really good. Please allow for a remark. You speak about a “Super El Niño”for the 2015/16 event. Maybe it has been one of these, but compared with the 1982/83 and 1997/98 editions (using MEI) it looks like the little brother. Maybe this meaning is influenced by the UAH6.0 time series in its Globe variant. Looking at the Tropics stripe gives another impression, better fitting to MEI. Moreover, 1982/83 and 1997/98 began by far more peaky than did 2015/16, which in fact had a weak start by end of 2013 already, before restarting in 2014. Sometimes I get a bit bored by these endless claims about surface data measurements being inaccurate, flawed, if not even… manipulated! Let me show you a little example falsifying these claims, using as source temperature measurements in Australia since 1979, performed by: – the 570 GHCN surface stations (data from the unadjusted record) – the UAH TLT temperatures measured by satellites (zonal and 2.5° gridded record). No doubt: what GHCN stations measure is quite a bit warmer than what is by UAH. Here the trends for 1979-2016, in °C / decade: – GHCN: 0.273 ± 0.028 – UAH: 0.157 ± 0.026 No wonder: GHCN unadjusted itself is quite a bit warmer than is GISS land-only. But a look at the 60 month running means should convince that the (red) surface and (yellow) troposphere measurements are much nearer than so often pretended. And the difference beween the two running means would be small if the data was originating from the GISS land-only dataset (I don’t have it in the PC). Pour enfoncer le clou, to drive home the point, I made an additional experiment, by collecting all the 2.5° grid cells encompassing one or more of the australian GHCN stations, and computed for these cells a time series out of UAH’s gridded data (the green plot). The extreme small difference between UAH’s complete view over Australia and the grid cell selection encompassing GHCN stations imho is a good measure for the representativity of these stations for the whole country. One could do the same for CONUS… these endless claims about surface data measurements being inaccurate, flawed, if not even… manipulated! @bindidon the problem is that these figures from your ‘official data’ don’t tie up with my own figures. According to my own figures, earth cooled by at least 0.1-0.2K since 2000. Where did it go? Maybe if you did your own measurements you would come to the same conclusion? that involves doing some work…. ??? remember, on any subject of science, you cannot have a vote. You only need one man to get it right. So your “own figures” are more worth than the work of thousands of people? You are simply ridiculous, HenryP. Please please don’t answer to my comments anymore. no, you cannot pick and chose commenters. that’s not how it works here, I am sure. btw why don’t you show us what results you have actually measured, for yourself, instead of relying on those of others. Right. None? That shows us you are either here because of wanting commenters to follow your beliefs / hidden agenda or you are lazy. Are you trying to argue the “unadjusted” data is good? That is the point we are making. It is the “adjusted” Australia temp trend which adds more than 1.0C to the unadjusted trend that is the problem. Sorry Bill Illis, I can perfectly understand your claim, but… You clearly can see here – what it means to homogenize GHCN unadjusted into valuable data, without outliers etc etc – that this complex process doesn’t add any °C to the unadjusted data: the contrary is the case. The trends for 1880-2016 in °C / decade: – GHCN unadjusted: 0.229 ± 0.006 – GISS land-only: 0.097 ± 0.001 – GISS land & ocean: 0.070 ± 0.001 Fell free to download all the data and to compare your processing with mine. Unfortunately I lack GISS data restricted to specific countries. But Australia is, like is CONUS, a very stable land wrt temperatures: thus you shouldn’t expect GISS to show there anything higher than it does for the whole Globe. One day I’ll download the complete GISS netcdf stuff together with the FORTRAN routines needed to expand that into text files! Many hours ago I sent a comment in reply to another comment: and oops?! It was not published. That the moderation of a site rejects a comment with an explanation of why it was refused I of course can understand: it’s the moderation’s job. But that a comment simply disappears I can’t understand. Hi Bindidon, Comments ocassionally disappear. It’s happened to me several times. It’s probably a WordPress glitch, not the fault of this site. WUWT doesn’t delete comments due to their scientific point of view. If it did, it would quickly become apparent from the large volume of complaints that would generate. Unfortunately, WordPress doesn’t offer a preview function, so the best thing you can do is save your comment until you see it published. You can even save your comment as an email draft. It’s good insurance. You can delete the draft after it’s been posted. D. B. Stealey! You are missed around WUWT. Don’t be such a stranger! Hope all is well. NCEP CFSR 35 year running record had it .378C so close to this. Also had this as warmest year on its record Running daily model initializations available here Polar vortex is very weak. Index El Niño gradually declining. The sun without visible spots … whilst the sun is hotting up earth is cooling down anyone who knows why, gets an A plus from me… @greg referring to your request to do my own research on UAH calibration you have to be kidding me/ no idea where to start. I have my own results, telling me what is happening. my results show UAH is out, it should be about 0.1-0.2K lower on average, over the period measured from 2000. you are all free not to believe my results but it is my right to report the matter –? How to treat “trendy” if during the 2016/2017 winter temperatures in the northern hemisphere will fall to the level of the seventies? Snow cover reach to Central Europe. Interesting to note the snow is further south than usual not only in Eurasia, but even in Canada, which I think has been slightly warmer than normal. Apparently the RSS data doesn’t get its own post, but I see that although it is also heading for a record hot year to surpass 1998, its October anomaly (covering 70 S to 82.5 N) is 0.23 deg colder than September’s. This is also the coldest month since July last year. Whom to believe? @Richard do your own measurements – even if you begin only today in your own backyard – {I think Anthony has a DIY kit?} and trust only yourself. winter is coming… On la Nina: reposting from Dr Spencer’s blog. +/-0.5 C threshold, but in NINO3 region, which can be quite different to combined NINO3.4. You can get an idea of the spread of international models here:
https://wattsupwiththat.com/2016/11/01/uah-global-temperature-update-down-slightly-for-october/
CC-MAIN-2017-43
refinedweb
6,244
73.17
Agenda See also: IRC log Date: 10 Sep 2009 <scribe> Meeting: 152 <scribe> Scribe: Norm <scribe> ScribeNick: Norm <PGrosso> yes, ht, fixed -> Accepted. -> Accepted. Mohamed gives regrets. Norm: Schedule for the first week of November in Santa Clara. -> -> Norm: Is anyone else planning to attend? Alex: I'm planning to be there, it's local for me. Mohamed: I'm on the fence. Paul: I can't make it. Vojtech: I'm still unsure. It depends on the status of my membership. Henry: Try real hard, we'd like to have you. Norm: Indeed. Vojtech: I've made good progress on getting membership restarted. Now waiting on the final step. -> Henry: I think there's been no feedback on this revised version. Norm: Is the 4 Aug version I just posted the revised version, or the original? <alexmilowski> 6th august ... Alex: The two sentences "given a pipeline library document..." and "given a top-level pipeline document..." ... I believe you mean the visited set. ... where you say "singleton set" Henry: What I understand Alex to be saying is "Given a pipeline ... it is an error if ... against the background of a visited set being a singleton set containing DU." Alex: Right. However you want to phrase that. Norm: Ok, I think this would be fine, though I'm not sure I like having teh defn of bag-merger in a footnote. <scribe> ACTION: Henry to make one more pass over the prose and insert it into the spec as a revised App G. [recorded in] -> Norm: I don't feel strongly, but I think it should preserve the parameters as p:http-request does. Alex: I agree. Mohamed: They're incorrect in UTF-8, but they aren't incorrect in Unicode. Alex: Is there a valid mapping from the code points to Unicode? General agreement that there is. Vojtech: My question was about encodings in general, not that specific one. Norm: I heard some agreement that we preserve the values. Proposal: The charset value (and other parameters) are preserved. Accepted. Resolved, see above. Vojtech: What happens if you specify an xpath context in p:when but you don't specify a binding? ... In our implementation, the p:when is using the default readable port, not the binding from p:xpath-context from p:choose. Norm: I think this is an edge case that we didn't think of, so we just need to say what the answer is. ... Why did we allow the binding inside xpath-context to be optional? Vojtech: I think we might have done it to preserve the default. Alex: So this one uses the default context? Norm: I think there are two possible interpretations, an empty p:xpath-context either goes back to the default readable port of the p:choose or it goes back to the default on p:choose. ... Or we make it illegal by requiring a binding inside p:xpath-context. Vojtech: Right now the spec says it works just like p:input, so it would get connected to the default readable port. Norm: Making an empty xpath-context go back to the choose would be redundant. ... So I think that boils down to two reasonable intepretations: the default readable port or we make it an error. ... For the 1 in 999,000 case when someone might use this, I guess that would be ok. Mohamed: I think it's a bad idea, when a user uses xpath-context in the choose, then I think we should make the user be explicit in any p:when where they want a different binding. ... I think we should forbid having an empty p:xpath-context. Vojtech: I think I agree with Mohamed on this one. Norm: Ok by me. Proposal: Make it an error to leave the p:xpath-context empty. Accepted. Norm: I'll change the content model so that it's required, we don't need a new error code. Norm: The JSON RFC doesn't define an XML encoding, it just defines JSON Alex: The c:query step is for XQuery, not random queries. Vojtech: Perhaps he meant that if the content-type on p:data was applicatin/json then it would be turned into XML. Mohamed: I don't think the purpose of this spec is to convert all tree-like structures into XML. Henry: I think this is an area where it's perfectly reasonable for implementors to compete. When we were doing the markup pipeline, it ended up being the case that it was appropriate to add a command line switch to upconvert STDIN from some format (like SGML) to XML. Norm: Yes, and I think, I'd have to go back and read carefully, that an impl could recognize application/json and turn it into XML. ... Nope, I was wrong. Alex: There's nothing in this message that seems to imply we're supposed to translate JSON into XML. We've already got ways to represent JSON in a pipeline, using c:data. Some discussion. Proposal: Reply that you already can include JSON as text using c:data. If you want conversion to XML, you'd need an extension step for that. Accepted. Some additional discussion of Henry's use case. <alexmilowski> dropped call... :( <alexmilowski> -i json -o /dev/null Some discussion. General agreement that making the error dynamic rather than static would be very painful for implementors. Alex: Changing the definition of a fundamental step is a bad idea Norm: I think the rules we have are fine, the consequence of the rules is that for some changes, we'll introduce a new namespace or change the step name. Alex: So that just means he has to rearrange the choose, right? Norm: Right. Alex: So the end result would be just a slightly different pipeline. Proposal: Reject making the error dynamic, point out that the constraints are on future versions of steps with the same names, not future functionality. Accepted. Vojtech: I think there are two more questions. What happens if the schema changes so that some elements can contain new elements that weren't supported in V1. Norm: Oh, so we add a p:xyz child of steps. Vojtech: Not just steps, but also in p:serialization, for example. ... or in p:document we add a new child. Norm: I guess we could say that those are ignored. I have some reservations, but I can't articulate them. Vojtech: I can imagine cases where this could cause problems. What if we wanted to add a new kind of instruction like p:choose or p:try. If you ignore it then the pipeline might not make any sense anymore. Norm: Right so if we add p:map-reduce ignoring it would be all you could do but it wouldn't be the right thing. Mohamed: I think it has to fail. Norm: If we add new language elements then you can't write backwards compatible pipelines that use them. Vojtech: If you introduce a new builtin step then you could wrap it in a choose and use step available. ... No, that won't work because you have to know the signature. Mohamed: The problem we have is that we have to compute a new dependency graph. Adding new builtin constructs just makes it not backwards compatible. s/compabiel/compatible/ scribe: I don't find it too restrictive, because when we provide a new instruction perhaps we can provide a wrapper step for it. Proposal: No, we're not going to ignore unknown elements. Accepted. Norm: And the last one is covered by the fact taht you're not allowed to declare steps in the p: namespace unless the URI begins with the right prefix. None heard. This is scribe.perl Revision: 1.135 of Date: 2009/03/02 03:52:20 Check for newer version at Guessing input format: RRSAgent_Text_Format (score 1.00) Succeeded: s/p:when/p:xpath-context/ Succeeded: s/port or/port of the p:choose or/ Succeeded: s/compabiel/compatible/ FAILED: s/compabiel/compatible/ Found Scribe: Norm Inferring ScribeNick: Norm Found ScribeNick: Norm Present: Norm Paul Mohamed Alex Vojtech Henry Agenda: Found Date: 10 Sep 2009 Guessing minutes URL: People with action items: henry[End of scribe.perl diagnostic output]
http://www.w3.org/2009/09/10-xproc-minutes.html
CC-MAIN-2016-40
refinedweb
1,376
74.79
AWS IoT Device Support for LoPy Hi, Will LoPy supports AWS IoT integration using MQTT. AWS IoT SDK contains Paho MQTT Python client library. currently umqtt does not support this, Then what is equivalent for Paho MQTT to achieve AWS integration. Thanks & Regards, Satish Kumar. R. - Armen Edvard Banned last edited by This post is deleted! - Gurpreet Singh Banned last edited by This post is deleted! @jomifo I have made a mini guide for this issue: - ubiq_01 Pybytes Beta last edited by Hello Daniel, dear all, I used your code and could do interaction with a custom mosquitto MQTT broker + Let's encrypt certificate (so using TLS on port 8883 and server authentication) from my WiPy. Still using the older firmware from Dec 16th which works fine with TLS + certificates. Many thanks for the great work! Ralf @gertjanvanhethof no, we are still busy trying to solve it. It's going to have to wait for the next release. - gertjanvanhethof Pybytes Beta last edited by This post is deleted! @jomifo we are also experiencing that error and it is caused by the changes made to integrate the Bluetooth stack. We are still investigating the exact root cause. Apologies for the the delay. @daniel - I tried this in the latest release (1.1.0.b1) and am now getting a different OSError. Just wondering if you were able to address this in the latest release and verify your AWS IoT demo works so I know if it is something on my end. Thanks. >>> from simple import MQTTClient >>>>>>>>>>> >>> import os >>> os.uname() (sysname='LoPy', nodename='LoPy', release='1.1.0.b1', version='v1.8.6-274-g9a2018f on 2016-12-29', machine='LoPy with ESP32') >>> @jomifo I can confirm that the issue is there, we also get the exception with OSError 0. We'll investigate it and provide and solution on tomorrow's release. @jomifo I'll be performing some tests this morning and get I'll get back with an update shortly. Cheers, Daniel @daniel Could you please confirm your AWS IoT demo still works for you? I am using the latest firmware and when trying to connect to the MQTT broker using the REPL, I get OSError 0- >>> connection.connect(ssl=True, certfile='/flash/cert/lopy-gateway.cert.pem', keyfile='/flash/cert/lopy-gateway.private.key', ca_certs='/flash/cert/root-CA.crt') Traceback (most recent call last): File "<stdin>", line 1, in <module> File "simple.py", line 79, in connect OSError: 0 >>> >>> >>> import os >>> os.uname() (sysname='LoPy', nodename='LoPy', release='1.0.1.b1', version='v1.8.6-265-g5722083 on 2016-12-22', machine='LoPy with ESP32') Hi @daniel, I am also having issues with your aws demo. When I break out of the loop and test the connection manually I get OSError -32512 DEVICE_ID = "lopy-gateway" HOST = "[redacted].iot.us-east-2.amazonaws.com" TOPIC_DOWNLOAD = "Download" TOPIC_UPLOAD = "Upload" from simple import MQTTClient I have tested from my desktop using the Python AWSIoTPythonSDK module from AWS with the same credentials successfully. I could send my code if that would help. Thanks for any insight. @RSK, I would need to know the OSError number that you are getting. What you could also do, is to email me all your source files, the certificates, and give me temporal access to your AWS account so I can test it fully on my end. BTW, are you using static IP configuration on the LoPy/WiPy 2.0? There are known issues around this that we are still trying to resolve with Espressif. If you are, please give your code another try using DHCP instead... Thanks. Cheers, Daniel Could someone help in this reg? Thanks & Regards, RSK. @daniel Facing issue in connecting LoPy with AWS IoT. Was thinking that problem might be with the cert files generated from AWS IoT. Ensured that they are perfectly fine and working fine with MQTT.fx tool, running the tool in windows 7 32-bit machine. Downloaded from Published myTopic/1 from AWS and Subscribed the same in MQTT.fx tool, and they works fine. This helps me to ensure that cert files are fine. The same cert files are moved to /flash/cert/*, and this time it passes through ssl.wrap_socket() successfully. After passing ssl.wrap_socket() the exception occurs in next statement self.sock.connect(self.addr). To confirm this added the connect function in try exception block to get the error type. and it gives OSError. So this shows that error is with socket.connect(). Again to cross check used the IP 35.160.47.159 in MQTT.fx tool to connect and it worked fine with the tool. Please help to resolve the issue. Thanks & Regards, Satish Kumar. R.
https://forum.pycom.io/topic/352/aws-iot-device-support-for-lopy
CC-MAIN-2022-33
refinedweb
784
76.11
SYNOPSIS #include <sys/fanotify.h> int fanotify_mark(int fanotify_fd, unsigned int flags, uint64_t mask, int dirfd, const char *pathname); DESCRIPTIONFor an overview of the fanotify API, see fanotify(7). fanotify_mark(2): - mount or all non-mount marks from the fanotify group. If flags contains FAN_MARK_MOUNT, all marks for mounts are removed from the group. Otherwise, all marks for directories and files are removed. No flag other than FAN_MARK_MOUNT_PERM - Create an event when a permission to open a file or directory is requested. An fanotify file descriptor created with FAN_CLASS_PRE_CONTENT or FAN_CLASS_CONTENT is required. -. VERSIONSfanotify_mark() was introduced in version 2.6.36 of the Linux kernel and enabled in version 2.6.37. CONFORMING TOThis system call is Linux-specific. BUGST(2) is called with FAN_MARK_FLUSH, flags is not checked for invalid values. COLOPHONThis page is part of release 4.06 of the Linux man-pages project. A description of the project, information about reporting bugs, and the latest version of this page, can be found at.
http://manpages.org/fanotify_mark/2
CC-MAIN-2019-26
refinedweb
166
51.95
In shell script I am checking whether this packages are installed or not, if not installed then install it. So withing shell script: import nltk echo nltk.__version__ import which nltk import nltk is Python syntax, and as such won't work in a shell script. To test the version of nltk and scikit_learn, you can write a Python script and run it. Such a script may look like import nltk import sklearn print('The nltk version is {}.'.format(nltk.__version__)) print('The scikit-learn version is {}.'.format(sklearn.__version__)) # The nltk version is 3.0.0. # The scikit-learn version is 0.15.2. Note that not all Python packages are guaranteed to have a __version__ attribute, so for some others it may fail, but for nltk and scikit-learn at least it will work.
https://codedump.io/share/trqm6WVMb8z2/1/how-to-check-which-version-of-nltk-scikit-learn-installed
CC-MAIN-2018-13
refinedweb
136
83.25
CouchDB From HaskellWiki Revision as of 10:05, 31 July 2010 Here the code for storing a note and retrieving it with the doc id: ( {-# LANGUAGE DeriveDataTypeable , ScopedTypeVariables #-} module Notes1 where import Database.CouchDB (getDoc, newDoc, runCouchDB', db, Rev(..), Doc) import Data.Data (Data, Typeable) import Text.JSON import Text.JSON.Pretty (pp_value) import Text.JSON.Pretty (render) import Text.JSON.Generic (toJSON, fromJSON) type Strings = [String] -- basic data Note = Note {title, text :: String, tags :: Strings} deriving (Eq, Ord, Show, Read , Typeable, Data) -- not yet necessary copied from henry laxon ppJSON = putStrLn . render . pp_value justDoc :: (Data a) => Maybe (Doc, Rev, JSValue) -> a justDoc (Just (d,r,x)) = stripResult (fromJSON x) where stripResult (Ok z) = z stripResult (Error s) = error $ "JSON error " ++ s justDoc Nothing = error "No such Document" -------------------------------- mynotes = db "firstnotes1" n0 = Note "a59" "a1 text vv 45" ["tag1"] n1 = Note "a56" "a1 text vv 45" ["tag1"] n2 = Note "a56" "updated a1 text vv 45" ["tag1"] n1j = toJSON n1 -- convNote2js n1 runNotes1 = do (doc1, rev1) <- runCouchDB' $ newDoc mynotes n1j putStrLn $ "stored note" ++ show doc1 ++ " revision " ++ show rev1 Just (_,_,jvalue) <- runCouchDB' $ getDoc mynotes doc1 ppJSON jvalue jstuff <- runCouchDB' $ getDoc mynotes doc1 let d = justDoc jstuff :: Note putStrLn $ "found " ++ show d return () -- the output is: --stored noteaa45700981408039346f9c8c73f8701f revision 1-7fa1d1116e6ae0c1ee8d4ce89a701fdf --{"_id": "aa45700981408039346f9c8c73f8701f", -- "_rev": "1-7fa1d1116e6ae0c1ee8d4ce89a701fdf", "title": "a56", -- "text": "a1 text vv 45", "tags": ["tag1"]} --found Note {title = "a56", text = "a1 text vv 45", tags = ["tag1"]} ).
https://wiki.haskell.org/index.php?title=CouchDB&diff=36390&oldid=36389
CC-MAIN-2015-35
refinedweb
235
55.27
Created on 2017-05-06 05:03 by terry.reedy, last changed 2019-01-17 22:28 by terry.reedy. Initial plan. 1a. Change AboutDialog to mimic query.Query with respect to _utest and suppression of dialog display and waiting. 1b. Create AboutDialog instance. 2a. Change textview.TextViewer as with AboutDialog. Also change textview functions and AboutDialog methods to pass on _utest. 2b. Simulate keyclicks on buttons and test that root gains appropriate child. Then destroy it. 3. At some point, remove dead code and change now incorrect encoding comment. Separate issue: add simulated button click tests to test_textview and other popup dialogs. Directly calling commands does not test that buttons invoke the right command. 4a. Change uglyMidcap widget names and CamelCase function names to pep8_conformant names. Simplify: byline, email, docs, pyver, tkver, idlever, py_license, py_copyright, py_credits, readme, idle_news, idle_credits. 4b. Change CamelCase function names to pep8_conformant names. ShowIDLEAbout => show_readme, 5. Add docstrings For show_xyz functions, "Command for xyz button"? even though not verb? #30303 implemented 2a, improve textview and tests. Cheryl: this should be much easier than colorizer. Let me know if you want to take a stab at at least 4 and 5. Follow-up issues. A. Move code and tests into help.py and test_help.py. B. Improve content: 1) #25224; 2) remove or update credits; 3) reconsider each item. C. Improve appearance: 1) ttk widgets; 2) Redo entire look. This looks like fun! :-) I'll let you know if I have any questions. Terry, Is there interest in changing the 'from tkinter import *'? I take 'fun' to mean you will try something. And yes, I have thought about the following for all IDLE modules. 6. Replace "from tkinter import *". It was mainly intended for interactive use. In production IDLE, use either a. from tkinter import Tk, Frame, Label, Button, <constants> b. import tkinter as tk 6a requires more typing in the import statement, but as a replacement, leaves the rest of the code alone. It documents what widgets will be used in the module. It allows individual classes to be mocked for a particular test or group of tests. I know I have done this for the tkinter TypeVar classes. 6b has a short import but requires more typing in the module body, especially when tkinter constants are present, as they are here. When exploring or writing short, one-off programs, I usually use 'as tk'. But for IDLE, I may have converted at least one file to 'as tk', but I am now using 'import Tk, ...' and prefer it. My current thoughts about Tkinter constants versus string literals, such as BOTH versus 'both': The two forms by themselves are about equally easy to type. The CAPS form is shorter but louder, whereas to me they are minor items that should not be loud. The constants seem designed to work with 'import *'; one can then potentially use name completion for the longer names. After 'as tk', I think "tk.Both" is worse than "'both'". For new code with explicit imports, adding constants to the imports is extra work. For existing code with many constants, such as help_about, adding constants to the imports is less work than converting. So I am inclined to leave them as are until such time as we might do a search-replace throughout idlelib. When there are a few constants, I have put them after the classes. For this module, I would like to try a separate import rather than use '\' line continuation. from tkinter import TOP, BOTTOM, SIDE, SUNKEN, EW, ... --- In one file with just two constants, I initially converted to the quoted literal, but another core dev objected on the basis that the constants are checked when the function is compiled instead of when called, and that this is somehow safer. For when issue comes up again, I am recording my current replies here. 1. The same logic would suggest that we should write, for instance, "open(filename, mode=W, encoding=ASCII, errors=STRICT)" (after appropriate imports) instead of the current "open(filename, mode='w', encoding='ascii', errors='strict')". 2. If running a file to do a test compile also does a test call of create-widget functions, then the objection does not apply. For every IDLE file that creates tk windows, there is an htest, initiated by running the file, that creates an instance of the window. The first purpose is to insure that widget creation calls are legitimate. The unit tests will eventually do the same, as in 1b above. (The second purpose, not duplicated by unittests, is to see if the result 'looks good'.) Due to the merged of #30303, text_view now have _utest attribute for unittest, upload the unittest of help_about dialog in PR 1697 Louie, at first glance, this appears to implement the remaining changes in 1 and 2. A possible problem is that I expect the needed 'self.' additions will conflict with the name changes on the same lines that Cheryl said she would do, about 10 hours ago. Cheryl, if you have not yet started your patch, hold off for now, until I can properly review Louie's patch. Oh,? Cheryl: yes, I may work on 4a/4b and msg294004 on Monday, or do you start at these tasks? I am about to go to bed, way late. I would like to review and apply all the non-test changes, 4,5,6 in one PR, and add 3 since I know what I meant. There can be multiple commits in one PR though. Louie, from #30422 and a tracker search, you can see that there is lots to do, for more that 1 or even 2 people. I think 4,5,6 are a good way for Cheryl to start with IDLE. Other files need similar changes that do not need your tkinter skills. If you are looking for a useful challenge, there are 12 open IDLE debugger issues + some enhancements debugger ideas I have not bothered to post yet. If that does not interest you, I can try to suggest something else after I get up again. I expect to be away from my computer on my Monday. New changeset 054e09147aaa6f61aca6cd40c7bf7ce6dc54a04b by terryjreedy (mlouielu) in branch 'master': bpo-30290: IDLE: Add more tests for help_about dialog (#1697) PR merged. I will worry about where in the test to call AboutDialog.Ok, which has the last uncovered line - and any other details -- later, perhaps when I merge this file into help.py. Terry: I see, I'll take some debugger stuff to try first. Cheryl: If you got any problem on 4/5/6, feel free to ask me on IRC or here! I've made a pull request for 4, 5, and 6. I didn't remove the tk constants as it seemed you'd like to do all the files in one project for that. Also, in query.Query, I noticed that all the widgets were created first and then they were all added to the grid last. I didn't move any existing lines for this, but can do that if it's preferred. It also seemed like helper functions might make it more readable (create_py_section, create_idle_section), but maybe that's too much churn compared to what this will end up looking like. At first glance, 1714 looks great. I hope to merge it tomorrow. On the issues you raised: 1. I want to leave existing constants alone for now. 2. There are two 'create widgets' styles: create, place, create, place, ...; and create, create, ..., place, place, ... . Both have their logic, both are used in IDLE. The first makes sure that everything created gets placed. The second makes it easier to arrange things in relation to each other and to rearrange. I naively started with the first, but it seems that most experts advocate the second. 3. If the dialog were left alone, three helper functions could be a sensible refactoring. However, a alternate design could have links with standard blue tagging instead of buttons. For example, the current About Firefox box has these 8 links: 'What's new', 'Mozilla', 'a global community', 'Make a donation', 'get involved', 'Licensing Information, 'End-User Rights', and 'Privacy Policy'. This would make code much shorter and the result would not have the issues above. Thanks Terry. I'll leave those other items alone for now. It makes sense what you said about the 'create widget' styles. Looking at it for the first time, I kind of liked the second version (in query) because I could more immediately understand what was happening, plus it seemed to lend itself to modularizing. For example, have one function that has all 'create' and one that has all the 'place'. help_about was a small dialog and there were already a lot of lines of code. But, to your last point, blue tagging seems like it would make it all easier. For the next step, do you have a plan in mind for what you'd like me to do next? I know you mentioned ttk for help_about, but didn't know if you wanted me to tackle that. Or should I PEP8 and docstring other modules such as textview.py and configdialog.py? One question I forgot to ask before -- the `Toplevel.__init__(self, parent)` line, can this be changed to super().__init__(parent)? New changeset 5a346d5dbc1f0f70eca706a8ba19f7645bf17837 by terryjreedy (csabella) in branch 'master': bpo-30290: IDLE: Refactor help_about to PEP8 names (#1714) While reviewing, I decided to draw a widget tree. I believe it would have been easier if packs and grids were grouped. I want to continue with 'IDLE: modernize help-about' and ditto for textview. The help_about issue can begin with ttk conversion: change imports and remove an invalid option or two. The textview issue can begin with the still missing docstrings. I want to refactor both the way I discussed in the roadmap issue: make separate Toplevel and Frame subclasses. The Toplevel subclass instance creates a Frame subclass instance which becomes an attribute of the toplevel. Help.py already has this structure. Thanks Terry. I've submitted a PR for the textview docstrings and PEP8 names and I'll work on the ttk conversion and the refactoring. pr 1839 is now attached to new issue #30495. This issue is closed except for trivial bugs in the two merged PRs and backports. A follow-up to my brief remarks about revamping About IDLE, in msg294298: idle-dev is subscription-required public forum for discussion of idle design issues. It has been dormant for a year and a half, but I would like to revive it to get broader user input. By coincidence, the first new post (held since April because the author is not subscribed) is a suggestion for AboutIDLE. In my second response, I listed 9 possible or definite changes. I would like that to be where we plan a future 'Modernize About IDLE' issue. New changeset 12cbd87ac0bb826d653040044c6b526dcdb6f6d1 by terryjreedy in branch '3.6': [3.6] bpo-30290: IDLE - pep8 names and tests for help-about (#2070) Postscript: this test that the retrieved text has at least two lines caught a bug in the new Windows Store python distribution. self.assertEqual(printer._Printer__lines[1], dialog._current_textview.textView.get('2.0', '2.end')) The license file was missing and the backup default had only one line. #35683, msg333891
https://bugs.python.org/issue30290
CC-MAIN-2020-16
refinedweb
1,892
74.39
Practical .NET Peter Vogel introduces a new column on application development in the real world, and begins by advocating for Language Integrated Query. Welcome to Practical .NET, a new column offering how-to insight and advice for developers working with the flagship Microsoft programming framework. You may be familiar with my Practical ASP.NET column, which I've been writing weekly (more or less) on the Visual Studio Magazine Web site since May 2008. Now my focus expands from ASP.NET to explore a variety of .NET technologies. Think diversity. What won't change is my commitment to "practical" programming. I'll focus on the tasks developers do right now, every day, in delivering business applications to their users. There will be times when I cover a really cool technology that's worth an early look. But by and large this column will focus on the tools and techniques that developers use to build functional applications. One of the technologies that developers would be well advised to adopt is Language Integrated Query (LINQ). Not only does LINQ reduce the amount of code you have to write, it also gives you better performance and positions you for other technologies (like Parallel LINQ, or PLINQ, parallel processing). Despite the advantages of LINQ, in my experience as a consultant and an instructor I've found that most of the developers I meet aren't using it. I'll look at how to get started with LINQ and the key technology you'll need to fully exploit the technology. I also look into why so many developers aren't using this compelling technology here. In this column I assume that you've created an Entity Framework model based on the Northwind database and now want to retrieve the entities that correspond to your tables. I then walk through using LINQ to do that and introduce a key technology for exploiting LINQ. LINQ Basics LINQ is easy to get started with, assuming you've ever written an SQL statement or a For...Each loop. While SQL lets you select rows from a table, LINQ lets you select objects from a collection. With Entity Framework (and LINQ to Entities), the two technologies overlap because the objects in the collection you're querying represent the rows in the table you want to retrieve. Not only can you leverage your SQL knowledge in LINQ, but you can also take advantage of what you know about For...Each loops. In a For...Each loop, you're used to creating a range variable like the cust variable in this example: '...a bunch of ADO.NET code to retrieve rows 'into a collection of objects called custs For Each cust As Customer In custs If you counted up all the lines involved in converting the rows into objects, it would probably be several dozen lines of code -- impossible for the compiler to optimize. In LINQ/Entity Framework the equivalent statement also uses a range variable but, other than that, it looks much like a SQL statement: '...code to instantiate an Entity Framework 'ObjectContext called Northwind Dim res = From cust In Northwind.Customers Select cust Ignoring the cust range variable, the major difference between this LINQ statement and an equivalent SQL statement is that first, I don't need an asterisk (*) in the Select clause to retrieve all the properties (I just retrieve the whole object), and second, the From clause appears first. The From clause comes first because it allows you to specify the collection that your range variable is retrieving objects from. This gives you IntelliSense support for your range variable as you type in the rest of your LINQ statement. One of the benefits of LINQ is that it's only one statement (plus the Entity Framework code, of course) compared to the For...Each loop's multiple statements. That compression makes it considerably easier for the compiler to optimize your code. Another benefit is that LINQ to Entities will generate all the appropriate SQL for you. It will also take care of generating all the ADO.NET activity, and manage opening and closing the connections. There are a lot of benefits to turning that work over to LINQ. LINQ queries are also .NET friendly -- I can use the output of a LINQ query to set the DataSource on a grid and let the user update the query's results: Me.grdCust.DataSource = From cust In Northwind.Customers Select cust SQL Operations In SQL, the input and the output for a query is a table, so I can use the output from one SQL query as the input to another. I can also use the output from one LINQ query as the input to another LINQ query. That way, instead of having to write one big complicated query (that I probably won't understand), I can write several simpler queries (that I will understand). This example, for instance, finds all the customers without a PostalCode and, if none are found, searches that collection based on the Customer's city: Dim res = From cust In Northwind.Customers Where cust.PostalCode IsNot Nothing Select cust If res.Count > 0 Then Dim res2 = From cust In res Where cust.City = "Berlin" Select cust End If Doing other "typical SQL" operations, like sorting and filtering, lets you continue to leverage your SQL experience: Dim resBC = From cust In Northwind.Customers Where cust.Region = "BC" Order By cust.CustomerID Select cust As I noted earlier, part of the problem developers have with LINQ is that it has two different syntaxes. One is this "SQL-like" syntax that I prefer. The other is based around methods and lambda expressions. The method-based equivalent to my previous example would look like this: Dim resMethod = Northwind.Customers. Where(Function(cust) cust.Region = "BC"). Select(Function(cust) cust). OrderBy(Function(cust) cust.CustomerID) My friends who know more about the inner workings of the .NET compilers tell me that the method-based syntax is the "real" syntax -- the SQL-like syntax is just "syntactical sugar." It's because of this underlying method-like syntax that you need to include an imports statement in your code for the System.LINQ namespace that these methods are part of. Personally, I'm willing to take the hit on the compile time required to convert my pseudo-SQL to methods so that I can stick with a syntax I recognize. For instance, joining two collections looks enough like SQL to make me happy. The Join and On clauses look like SQL, except with range variables where I'd normally have table names: Dim resjoin = From cust In Northwind.Customers Join ord In Northwind.Orders On cust.CustomerID Equals ord.CustomerID Select cust The biggest annoyance is that I can't use "=" in the On clause (which is what my fingers want to type). I have to use "Equals." Not the Way It Seems I think another problem that developers have in embracing LINQ is that the syntax looks so very, very inefficient. It looks like every row in the table is converted into an object in a collection and then processed in a For...Each loop, with the rows you don't want being discarded. That's not what's happening. Instead, the compiler and LINQ to Entities looks at your LINQ statement and generates the appropriate SQL statement. I've looked at the SQL statements that are generated: They're good. By which I mean, they look like what I would've written. Except that I didn't have to. Developers also, I think, get uncomfortable about the use of implicit declarations. I have to admit that I sometimes wonder what data type is being returned by this LINQ statement: Dim resjoin = From cust In Northwind.Customers Select cust The range variable (cust) represents items from the Customers collection, so the result is probably going to be a collection of Customer objects. I'm comfortable with letting the compiler figure it out. I'm just interested in what methods and properties appear in the IntelliSense list when I type "res." That answers the only question I'm really interested in: What can I do with this collection? If you hover your mouse over the res variable, the tooltip will tell you what data type you're getting. It usually turns out to be something like System.LINQ.IQueryable(of someObject). The "I" prefix in IQueryable indicates that the declaration is an interface. So, apparently, the compiler isn't even sure what's coming back from the LINQ expression: All the compiler seems to know is that the return value will be some class that implements the IQueryable interface. If the compiler doesn't know, why should I? If not explicitly knowing the declaration bothered me, I could type it in: Dim res As System.Linq.IQueryable(of northwndModel.Customer) = But why bother? What value am I adding to the process if I do? Either I type in the same declaration that the compiler has figured out or I type in a different (and wrong) one. If the only "value" that I can add to a process is getting it wrong, I'm just as happy to skip it. Quite frankly, not having to work out the data type is one less piece of trivia for me to have to worry about. In addition, small changes to my LINQ statement result in significant changes to what's returned. For instance, I might decide to return just the CompanyID instead of returning the whole Customer object, giving this LINQ statement: Dim resCustId = From cust In Northwind.Customers Select cust.CustomerID My variable res is now going to be some collection of strings. If I'd declared the data type on res when the query retrieved the whole object, I'd now have to go back and change it. Life is too short. The extreme example is when you don't even specify the object that's being returned. If, for instance, I only want a few of the properties on an entity, I'll create an anonymous object in my Select clause. With an anonymous object, I use the New keyword but don't specify a class name -- the compiler will make one up. I just have to list off the properties I want on the object and what values to set them to. You even get to take advantage of your knowledge of the With keyword (though you have to use some annoying braces). This code causes the compiler to generate a class with two properties called Id and Contact, and sets those properties to values from the object being retrieved: Dim resPart = From cust In Northwind.Customers Select New With { .Id = cust.CustomerID, .Contact = cust.ContactName} It's a very Zen approach to programming: just let go and let the universe -- or the compiler -- take care of the trivial details (like what your class is called). All you need is the IntelliSense list so you know what properties you can access. Don't Execute Something else that bothers developers about LINQ (and that I personally like) is the deferred execution. While I can put a LINQ statement into my code, it's not going to execute at the place where I put it. At run time, the connection to the database won't be opened, the SQL statement won't be sent to the database, and the rows won't be retrieved until I manipulate the LINQ query's result. In this case, that means that the code won't touch the database until I get to the For...Each loop: Dim resDefered = From cust In Northwind.Customers Select cust '...126 lines of code... For Each c In res I used to spend a lot of time ensuring that I never opened a connection that I didn't use -- and that I closed that connection as soon as I possibly could. This deferred execution of LINQ means I'm now guaranteed that I won't open a connection until I need the objects being returned. I count on LINQ and Entity Framework to ensure that the connection is closed as soon as possible. If I've used the output of one LINQ expression as the input to another LINQ expression, it's only when the output of the second expression is used that the SQL is generated. So, I may have written this: But it won't be until I touch the res2 collection that my SQL will be generated. At that point the compiler will look at both LINQ expressions and, probably, generate a single SQL statement that combines both. So I get the benefit of having only one query issued to the database, but I also get the clarity in my code that comes from breaking down the problem into simpler steps. Of course, if I'm returning data from a method, returning an "unresolved" LINQ query is probably a bad idea. In addition, if I'm returning some data, I probably want to define the data type that I'm returning. While I'm OK with the compiler specifying my collection's data types and leaving me in ignorance, developers using my methods probably want something more definite. And, if my method is part of a Web service, I also want to return something that will convert well into an XML format. I can force my LINQ query to retrieve all of its data and nail down the data type by using one of the To* methods on the LINQ result to convert my results to a List or an array. It's not unusual for my methods to look like this: Public Function GetCustomerByID(ByVal CustId As String) As List(Of northwndModel.Customer) Dim nw As New northwndModel.northwndEntities Dim res = From cust In nw.Customers Where cust.CustomerID = CustId Select cust Return res.ToList End Function Thinking Architecturally This method (and ones like it) creates a problem with architecturally rigid developers. Does this method go in the data layer of your architecture? After all, it's accessing data, and because you're no longer writing methods filled with ADO.NET code, this is as close to your database engine as you'll get. Or does this method belong in your business layer because it's performing a business function: returning the customer object that corresponds to the customer Id? Should I call the method from my presentation layer or should I call the method from an intervening business layer that my presentation layer calls? LINQ and Entity Framework, to a certain extent, blur the boundary between the business and data layers. But the underlying driver is the "separation of concerns." Each class and each method does one thing; each method is (relatively) simple and easy to understand; and applications are assembled out of simple components that come together to do complex things. The criteria that are set by the separation of concerns are what matter and this method, LINQ and Entity Framework, meets all of those criteria. Getting Rid of the For...Each Loop Another issue that developers have with LINQ is that it seems like you do everything twice: You write the LINQ statement to retrieve your rows and then you write a For...Each loop to process each item you've retrieved. With ADO.NET, you can just retrieve each row and process it as you retrieve it. This reflects how developers are missing the connection between LINQ and another important technology: Extension methods. Extension methods let you skip that For...Each loop after your LINQ statement. Extension methods are methods that aren't attached to a specific object. Instead, you specify the kind of object the methods should be attached to and .NET takes care of listing the method in those objects' IntelliSense lists. This allows you to define a method that, for instance, will attach itself to any IQueryable collection -- such as the result of any LINQ query. For example, let's say that you want to convert the results of your LINQ query into a comma-delimited string (I don't think there's an existing .NET extension method that would do this). You want the CustomerId and CompanyName properties in each row, separated by a comma, and with each row terminated by a carriage return. You could write a For...Each loop to run after your LINQ query: Imports System.Runtime.CompilerServices ... Dim res = From cust In nw.Customers Where cust.CustomerID = CustId Select cust Dim csv As StringBuilder = New StringBuilder For Each cust In res csv.Append(cust.CustomerID & "," & cust.CompanyName & vbCr) Or you could put that code in a method in a Module and decorate that code with the Extension attribute: Module PHVExtensions <System.Runtime.CompilerServices.Extension()> Public Function ToCsv (ByVal custs As IQueryable(Of northwndModel.Customer)) As String Dim csv As StringBuilder = New StringBuilder For Each cust In custs csv.Append(cust.CustomerID & "," & cust.CompanyName & vbCr) Return csv.ToString() End Function End Module The first parameter in the method establishes what kind of object this ToCsv method will attach itself to. In this case, it will attach itself to any collection of Customer objects that implements the IQueryable interface. Or, to put it another way, it will attach itself to the output of any LINQ query that works with Customer objects. The collection this method is attached to will be passed into the method where it can be manipulated. You can now eliminate the For...Each loop that follows your LINQ query and use your new extension method: Dim res = From cust In res Where cust.City = "Berlin" Select cust Dim csv As String = res.ToCsv Or, because there's always at least four different ways to do something with LINQ, just attach your extension method to your LINQ expression (after all, your expression is really just a set of methods anyway): Dim csv As String csv = (From cust In res Where cust.City = "Berlin" Select cust).ToCsv Letting Go I think another reason developers avoid LINQ is because it implies a certain amount of "de-skilling." I spent years developing expertise in SQL (and I have the three-foot shelf of books to prove it), and now I may never need it again. Ditto for ADO.NET: I know all sorts of ways to optimize ADO.NET, and I may never need it again. LINQ seems to take care of it all. But that's not a bad thing. When I started programming, the course that I took had me learn machine code and then assembler (and this was long enough ago that I was typing my code into a punch-card machine). What can I say: I'm very old. That's another set of skills I just don't need any more. But there are lots of opportunity to develop skills in LINQ to Entities -- for instance, optimizing LINQ by leveraging or disabling lazy loading (that's the Include function). Understanding all of those built-in extension methods and how you can leverage them to get what you want is another area where you can develop expertise. It's time to develop some new lore. Printable Format I agree to this site's Privacy Policy.
https://visualstudiomagazine.com/articles/2011/04/01/pcnet_using-linq.aspx
CC-MAIN-2019-35
refinedweb
3,211
63.49
aspnet01 This is a microblog system written in asp.net, using ms sql server 2008.pkutalk v0.1 releasedMicroblog PKU SQLServer Manager All Code "Nơi lưu trữ toà n bộ code của Tôi"Ngôn ngữ: C++,C#,VB.Net,JavaNgà y bắt đầu: 01/01/2011Ngà y kết thúc (dự kiến): 01/06/2013CShape Windows Dojo Json Rest Asp.NetBridge between Dojo's JsonRestStore and Entity Data Models via ASP.NET MVC2Version 0.01Features: CRUD Get by Id Nested sorting. Understands sort parameters like sort(+Name,-Age) Navigation properties as lazy references Defaults: Absolute references for navigation properties "store" is the RouteData.Values key that contains the name of ObjectSet Conventions: Optimism. Client must be right. No error checking. If a field or property is missing in Put or Post, it means you don'AJAX Controller CRUD Database Dojo JSON MVC REST DotNetNuke is an open source web application framework ideal for creating, deploying and managing interactive web, intranet and extranet sites. It is very well supported, just take a look at snowcovered.com. Unfortunately, VB and C# programmers don't cooperate as best they should, and take a mutually exclusive choice between the languages. This is unfortunate because DotNetNuke is a very well developed framework for ASP.NET that a lot of C# programmers do not want to look at. In an attempt to opaspx ContentManagementSystem DNN DotNetNuke Website All source of 3rgb.com. 2010.03 New Version of 3rgb.com.Powered on web.py with SQLAlchemy,Jinja2. 2009.01 Project named "N3C",developed in ASP.NET(c#),within NHibernate/SQLite/XML/XSLT 2008.12 & earlier The old version was developed with ASP(VBscript).3rgb.com ajax asp ckfinder fckeditor jinja2 NHibernate sqlalchemy SQLite web.py xml xslt What is Kannon?Kannon is an attempt to create a full-stack open-source lightweight web framework for .NET. Without ASP.NET. public class HelloWorld : KannonApplication { public HelloWorld() { Install<SeoMiddleware>(); Install<ConditionalGetMiddleware>(); Dispatch("/", () => "Hello, world! You're on Kannon now."); } }Why Kannon?Being a professional .NET/ASP.NET developer, I came to understand that I spend way too much time doing work without actually doing it. ASP.NET is to blame here. More speciframework web Working with trackbacks and pingbacks in .NET Send Trackbackvar trackback = new Trackback();var target_url = new Uri("target_url");var parameters = new LinkbackSendParameters { Title = "title", Excerpt = "excerpt", Url = new Uri("source_url"), BlogName = "blog_name"};var result = trackback.Send(target_url, parameters);Receive Trackbackvar target_url = new Uri("target_url");var trackback = new Trackback();var result = trackback.Receive(Request, target_url);trackback.SendResponse(Response);Send PiLinkback MVC Pingback Trackback Sharp Content Portal is an open source content management portal written in C# and based on the popular DotNetNuke portal framework. The goal of this project it to offer the same great features as the Visual Basic version of the DotNetNuke framework while expanding on the security roles and content management features. Project Updates/Comments 01.30.2009: I have not been able to work on this project for sometime, so I have to declare future enhancements/development dead. Previous comments have b.NET2.0 CMS ContentManagement MicrosoftSQL WebPortal The ProjectThe SCORE OS Project is a collaborative effort to develop an open source fantasy sports management system (FSMS). PrototypesThe first prototype is Pre Pro Sports (PPS) Fantasy College Football League Manager, Version 5.0, which is run on a WISA (Windows, IIS, SQLServer, ASP.NET) software stack. The PPS software is exclusively licensed. The second prototype is the Pre Pro Hoops South (PPHS) College Basketball League Manager, Version 2.0, which runs on a WIAAAJAX Apache Drupal Eclipse FantasySports HTML Linux MySQL XML OpenCampfire.NetOpenCampfire.Net is a .Net implementation of OpenSocial written in C#. We will utilize the Shindig client-side libraries, which implement the OpenSocial API, and mirror the server-side development as much as possible. My hope is to eventually merge this project into the main Shindig tree. Status(2008-08-25) This project will be on hold for the next few months, should be able to resume in January, 2009. ExternalResources(2008-07-01 10:30) I am adding some links for other materialsc-sharp Gadgets OpenSocial Shindig Widgets Open source products are scattered around the web. Please provide information about the open source projects you own / you use. Add Projects. Tag Cloud >>
http://www.findbestopensource.com/product/aspnet01
CC-MAIN-2016-40
refinedweb
722
51.14
NAME | SYNOPSIS | DESCRIPTION | RETURN VALUES | ERRORS | USAGE | ATTRIBUTES | SEE ALSO #include <ftw.h>int ftw(const char *path, int (*fn) (const char *, const struct stat *, int), int depth);:., whose possible values are: Physical walk, does not follow symbolic links. Otherwise, nftw() follows links but will not walk down any path that crosses itself. The walk will not cross a mount point. All subdirectories are visited before the directory itself. The walk changes to each directory before reading it.: The object is a file. The object is a directory. The object is a directory and subdirectories have been visited. The object is a symbolic link. The object is a symbolic link that points to a non-existent file. The object is a directory that cannot be read. The user-defined function fn will not be called for any of its descendants. The stat() function failed on the object because of lack of appropriate permission. The stat buffer passed to fn is undefined. The stat function failed for a reason other than lack of appropriate permission. EACCES is considered an error and nftw() returns -1. length of the path exceeds PATH_MAX, or a path name component is longer than NAME_MAX. A component of path does not name an existing file or path is an empty string. A component of path is not a directory. The ftw() function will fail if: Search permission is denied for any component of path or read permission is denied for path. Too many symbolic links were encountered. The nftw() function will fail if: Search permission is denied for any component of path or read permission is denied for path, or fn() returns -1 and does not reset errno. The ftw() and nftw() functions may fail if: Pathname resolution of a symbolic link produced an intermediate result whose length exceeds PATH_MAX. The ftw() function may fail if: The value of the ndirs argument is invalid. The nftw() function may fail if: Too many symbolic links were encountered in resolving path. There are OPEN_MAX file descriptors currently open in the calling process. Too many files are currently open in the system. In addition, if the function pointed to by fn encounters system errors, errno may be set accordingly. Because ftw() is recursive, it can terminate with a memory fault when applied to very deep file structures. The ftw() function uses malloc(3C) to allocate dynamic storage during its operation. If ftw() is forcibly terminated, such as by longjmp(3C) being executed by fn or an interrupt routine, ftw() will not have a chance to free that storage, so it remains), malloc(3C), attributes(5), lf64(5) NAME | SYNOPSIS | DESCRIPTION | RETURN VALUES | ERRORS | USAGE | ATTRIBUTES | SEE ALSO
http://docs.oracle.com/cd/E19683-01/816-0213/6m6ne380f/index.html
CC-MAIN-2016-07
refinedweb
447
66.54
Having written my little blowhole command tool to investigate scheduling of Time Machine backups, I decided that it might be more generally useful. For example, there is no command in AppleScript to write anything into Sierra’s new logs, and as a command blowhole can be called by anything which can run a command line – even Eastgate’s Tinderbox and Storyspace! I therefore wanted a tool which could be invoked with up to two arguments/options: none at all just to write a default message (e.g. run periodically by launchd), one to determine the log level, and another to pass a message which would be written in the log entry. This required parsing the arguments, so my Swift 3 code provides a simple example of how to do that. If you just want to TL;DR and grab the tool, the latest release is available from Downloads above. Writing entries into the new log system is not as free and easy at it might seem. Much of the content of those entries is determined by static strings, so finding a way of passing information from the arguments to the log message has been the toughest problem to solve. If you pass a normal string variable to be written out, the log system automatically censors it to render that content as <private>, which defeats the purpose. According to the presentation given at WWDC 2016 on the new log system, you should be able to override this frustrating behaviour by formatting the string with a {public} override, as you’ll see that illustrated in my source code below. However, I have been messing around for a long time trying to get this to work, and can only conclude that, at present, there is a bug in the Swift 3 support for this, or it has yet to be properly implemented. So all you can do for now with blowhole is to pass an integer, which is better than nothing. A further frustration with Swift 3 is trying to catch errors which occur when converting a string to an integer, using theInteger = Int(string) Again, having tried various approaches to catching errors in that, I have for the moment given up and let the tool flunk out with an error code 4. It would appear that most of the methods of catching an error rely on Int() throwing one, which it doesn’t – it just errors out of the whole tool if you try to convert a string containing inappropriate characters. As with my earlier version, the code to write out to the new log is encapsulated within a public class, which has three different implementations of the method writeLogEntry() to support the three different ways of calling this tool. The third shown below, with a string being passed, shows the code which should result in an arbitrary string being written, but which consistently results in that being rendered as <private>. Below the Blowhole class is an enumeration to help parse options supplied in the first argument, and a short function to split the letter out from the option supplied when calling the command. These were stolen from Jean-Pierre Distler’s Panagram tutorial example, and seem to work nicely. The main code initialises an instance of the class, and gets on with parsing any arguments supplied. CommandLine.arguments[0] is invariably the command itself, which we can safely ignore. The other items in the array contain the supplied parameters in order. Here we test the number of arguments supplied, given in CommandLine.argc, then parse CommandLine.arguments[1] and act on the option given. This could of course be factored out of the main code, which would be worthwhile in a more substantial tool. Although the parsing could go up into the Blowhole class, that would not be appropriate, and it would be better in a separate parser. I will incorporate a short summary of these in my ongoing articles containing Swift snippets.
https://eclecticlight.co/2017/02/13/blowhole-advanced-writing-a-command-tool-in-swift-3-and-more/
CC-MAIN-2018-34
refinedweb
662
55.58
Create a neural network¶ Now let’s look how to create neural networks in Gluon. In addition the NDArray package ( nd) that we just covered, we now will also import the neural network nn package from gluon. [2]: from mxnet import nd from mxnet.gluon import nn Create your neural network’s first layer¶ Let’s start with a dense layer with 2 output units. [31]: layer = nn.Dense(2) layer Then initialize its weights with the default initialization method, which draws random values uniformly from \([-0.7, 0.7]\). [32]: layer.initialize() Then we do a forward pass with random data. We create a \((3,4)\) shape random input x and feed into the layer to compute the output. [34]: x = nd.random.uniform(-1,1,(3,4)) layer(x) As can be seen, the layer’s input limit of 2 produced a \((3,2)\) shape output from our \((3,4)\) input. Note that we didn’t specify the input size of layer before (though we can specify it with the argument in_units=4 here), the system will automatically infer it during the first time we feed in data, create and initialize the weights. So we can access the weight after the first forward pass: [35]: layer.weight.data() Chain layers into a neural network¶ Let’s first consider a simple case that a neural network is a chain of layers. During the forward pass, we run layers sequentially one-by-one. The following code implements a famous network called LeNet through nn.Sequential. [ ]: net = nn.Sequential() # Add a sequence of layers. net.add(# Similar to Dense, it is not necessary to specify the input channels # by the argument `in_channels`, which will be automatically inferred # in the first forward pass. Also, we apply a relu activation on the # output. In addition, we can use a tuple to specify a non-square # kernel size, such as `kernel_size=(2,4)` nn.Conv2D(channels=6, kernel_size=5, activation='relu'), # One can also use a tuple to specify non-symmetric pool and stride sizes nn.MaxPool2D(pool_size=2, strides=2), nn.Conv2D(channels=16, kernel_size=3, activation='relu'), nn.MaxPool2D(pool_size=2, strides=2), # The dense layer will automatically reshape the 4-D output of last # max pooling layer into the 2-D shape: (x.shape[0], x.size/x.shape[0]) nn.Dense(120, activation="relu"), nn.Dense(84, activation="relu"), nn.Dense(10)) net The usage of nn.Sequential is similar to nn.Dense. In fact, both of them are subclasses of nn.Block. The following codes show how to initialize the weights and run the forward pass. [ ]: net.initialize() # Input shape is (batch_size, color_channels, height, width) x = nd.random.uniform(shape=(4,1,28,28)) y = net(x) y.shape We can use [] to index a particular layer. For example, the following accesses the 1st layer’s weight and 6th layer’s bias. [ ]: (net[0].weight.data().shape, net[5].bias.data().shape) Create a neural network flexibly¶ In nn.Sequential, MXNet will automatically construct the forward function that sequentially executes added layers. Now let’s introduce another way to construct a network with a flexible forward function. To do it, we create a subclass of nn.Block and implement two methods: __init__create the layers forwarddefine the forward function. [6]: class MixMLP(nn.Block): def __init__(self, **kwargs): # Run `nn.Block`'s init method super(MixMLP, self).__init__(**kwargs) self.blk = nn.Sequential() self.blk.add(nn.Dense(3, activation='relu'), nn.Dense(4, activation='relu')) self.dense = nn.Dense(5) def forward(self, x): y = nd.relu(self.blk(x)) print(y) return self.dense(y) net = MixMLP() net In the sequential chaining approach, we can only add instances with nn.Block as the base class and then run them in a forward pass. In this example, we used nd.relu to apply relu activation. So this approach provides a more flexible way to define the forward function. The usage of net is similar as before. [ ]: net.initialize() x = nd.random.uniform(shape=(2,2)) net(x) Finally, let’s access a particular layer’s weight [8]: net.blk[1].weight.data()
https://mxnet.apache.org/versions/1.6/api/python/docs/tutorials/getting-started/crash-course/2-nn.html
CC-MAIN-2020-34
refinedweb
693
52.36
Python Module Not Found - Ryan DeVine last edited by So I am new to NPP and I am trying to use it as my main code editor for Python. However, I am having trouble importing numpy and seeing its auto-completion. Below is the code that I have already. print(‘Hello World’) import numpy as np np. After the np., I would like to see suggestions for different numpy methods. When I try to run the code there is an error on the import numpy as np line. The error is shown below.. Original error was: DLL load failed: The specified module could not be found. Does anybody have any suggestions? Thank you! - Eko palypse last edited by the first question would be if you have numpy installed? The second, how do you run the script? Via run menu? NppExec? PythonScript?? And autocompletion, most likely doesn’t work as expected as there is no such thing like LSP client available yet. - Alan Kilborn last edited by @Ryan-DeVine said: After the np., I would like to see suggestions for different numpy methods. Unfortunately, N++ doesn’t work that way with it. When I try to run the code there is an error on the import numpy as np line Well, that’s not Notepad++ related so this isn’t the best place to ask it. - PeterJones last edited by PeterJones Regarding the auto-completion: yeah, that would be awesome. Can you imagine how difficult the task would be of implementing such auto-completion for all the programming languages that Notepad++ natively supports (see the Language menu to see how many there are). From my vague knowledge of LSP (I’ve run across the term a few times, but never used an LSP-enabled system), I think that’s one of the problems that LSP addresses, but I’m not sure. For a specific use case (for example, just Python), someone might be able to code up a plugin or automation script (*) to do it, using something like, "check the current filetype: if it’s not python, exit the script; otherwise, parse this file and all the imported modules looking for functions and methods that it currently has access to; once that list is built, edit the AUTOCOMPLETE table to include those functions/methods; repeat ad infinitum. But that’s a pretty big task, too. *: there’s even a PythonScript plugin, where a python2.7 dll gets loaded into Notepad++, and gives that specific instance of python access to the internals of Notepad++. That would be pretty meta, making a python auto-complete gizmo that’s written in python, running inside Notepad++, which you could use while editing your other PythonScript scripts from inside of Notepad++. 🤯 - Eko palypse last edited by Eko palypse From my vague knowledge of LSP (I’ve run across the term a few times, but never used an LSP-enabled system), I think that’s one of the problems that LSP addresses, but I’m not sure. exactly, the goal is to have the “language” responsible to provide such information instead of reinventing the wheel by every editor. LSP defines just the protocol how a communication of a client and a server has to work, the editor implements the client part whereas some other instance provides the server part.
https://community.notepad-plus-plus.org/topic/16878/python-module-not-found/4
CC-MAIN-2019-51
refinedweb
552
69.92
NEW: Learning electronics? Ask your questions on the new Electronics Questions & Answers site hosted by CircuitLab. Microcontroller Programming » AVR Programming without bootloader I'm looking for a programmer for AVR chips which will allow me to program without the need for a bootloader. Minimum requirements:- Simplicity - ZIF socket or similar, USB connectivity and possibly powered by USB as well, Compatible with as many different AVR chips as possible, Compatible with AVR Dude or AVR Studio,(Probably other requirements that elude me at this moment) What programmer would you suggest? I wouldn't get a "programmer that is prebuilt with a ZIF Socket". I would just use one of the many AVR ISP Programmers and then build a small board with a ZIF Socket that you attach the AVR ISP Programmer to to program your chip. (ISP, stands for "In System Programmer". Atmel has created a hardware/software solution that I believe ALL AVR Microcontrollers support quite literally by definition.) I use the Atmel ATAVRISP2. You can get it from numerous sources. (Mouser.com, Digitkey.com, etc...) There are other ISP Programmers that work as well. (In fact there are posts here on how to make your own!) For Atmel Programmer details look here: AVRISP For the AVR ISP to work you need a Microcontroller that is in a hardware environment it will run in. (Hence the need for a little board to be made!) For that I use: ATmegaXX8 I built the first little board for this mostly identical to the Nerdkits Circuit. On the top right is the standard header for the AVR ISP. NOTE: as well that the crystal I use is in a Female Header so that I can change that when I want. Here's a couple of resources for you - Nerdkit bootloader installation is about putting the bootloader on new chips but has some good info on ISP programmers. Rick is a great resource on ISP programmers. Nerdkit ISP Programmer will show you how to make an ISP programmer from your Nerdkit. I've never bought a programmer but have made several variations of the ISP programmer using my Nerdkit and most recently a version on the Arduino Nano. Thanks for all the replies. I was hoping for an 'all-in-one' solution similar to the STK500 so I didn't have to have multiple boards for multiple chips. As I can see, most of the chips I'll be using will be 'Tinys', but occassionally there'll be the need for a 'Mega'. Does anyone have any experience / reviews on the STK500 or similar? Any isp programmer can program all the ATmegas. As well as the ATtiny's. "As well as the ATtiny's." Except for a very few such as the ATtiny10. That series uses a different protocol. I don't see a lot of those in use though. Which tinys were you wanting to program, TuffLux? I had purchased the Atmel STK500 and another AVR Programming Solution similar to what envision. (Ever so long ago now.) Both were Serially Connected solutions. I was VERY disappointed in both. (If I remember correctly for the STK500 there were onboard jumpers that were not clearly marked, at least to me, that had to be changed for different chips that was the root of my disappointment...) There still are some Professional Programmers that do what you are asking, but most decent ones are quite expensive. How many different AVR Chips do you think you will need to use? I have 1 board I use to program, and a backup board, and every project I have has the ISP Connector on at least the first project board. (Adding an ISP Connector is simply adding 1 6-pin dual-inline header to the working project board.) For my main board I use to program the AVR's I also have some adapter boards I made so that I can just plug-in bigger and smaller AVR's into the same board to program. Peoples needs, and things that make them happy are different. I don't really espouse for a one-size-fits-all theory in anything. For me though using the Atmel ISP Programming Interface with a designated target board or my current project board is what I prefer, no matter what ISP Programmer I am using. If you are looking at other programmers I would be very cautious of getting a STK600 (Atmel's latest replacement to the STK500). You can "potentially" program everything from Atmel but they tend to only last 3 months. I have close to a thousand dollars invested in my STK600 and it is dead. I really like my Atmel Dragon it also does high voltage parrallel programing which I use often. Ralph Currently I'm looking at programming the Tiny4's. They have just enough pins for what I want and are very power efficient. The only problem I have is that I'll have to program them using HVSP. So that eliminates ISP programmers. I do see the Tiny5 has an ADC which could become useful, if only I could find a supplier of those... That is interesting, those little mcu's were not even on my radar. Just for fun I'll have to get a few of the Tiny10's so I can modify the isp programmer to program them too. Noter :- Would be interesting to see. I'm pretty sure you can't use an ISP programmer to switch all four pins to I/O though. I think you need to use a HV programmer. I would just use the isp programmer as a base because it has all the logic to play with avrdude already in it. The task is to change the interface wiring and command set to be compatible with the Tiny4-5-8-10's and then call it a HVSP programmer. I made a HV programmer for the ATmega44-88-168-328's that I developed in the same fashion so I'm pretty confident it will work. I've already added tiny10's to my pending mouser order and will probably place it next week. This Blog was where I got my 1st exposure to those little critters. I haven't really played with them though, just knew they were around. They might be great for simple 1 or two channel PWM or ADC work of some sort. Rick Got my ATtiny10's this week and they are really tiny! It was difficult to get it soldered onto the adapter because it was almost impossible to hold down and solder at the same time. Just looking at it made it move! So now it's on to the fun part, making a TPI programmer and then getting the tiny10 to blink a few leds. I'm using the same nerdkit setup as the AVR_ISP programmer but not messing with the bootloader this time around, just loading direct with my ISP programmer. Then I can test the TPI programmer using the nerdkit usb cable connected directly to the ATmega328 on the breadboard. Those little buggers are cute, but I don't know if I'd ever have a use for them. I can see what you mean about breathing on them making them move! What are your intended uses for them?? I don't have a specific use for them yet beyond just learning. I can see where they might be useful for pwm control of brightness and contrast on a LCD or something like that. But they have no built-in eeprom so keeping a setting can't be done without external help. Next time I order from Mouser I think I'll get a couple of these tiny 128 byte eeproms to try out - Just looking at it made it move! Just looking at it made it move! I use a drop of Super Glue to hold them while soldering. The heat of soldering will sometimes break the bond but it usually holds long enough to get everything soldered. Thanks Ralph. I'll give that a try next time. Paul Here's a little program that will fit on the ATtiny10. Doesn't do much, just randomly blinks a LED to sort of simulate a candle. Main thing now is just to get something that fits because the ATtiny10 only has 1k byte of flash. // LED-Candle.c // Single LED Candle Simulation /* Use the following compile/link command to produce the 686> ISR(TIMER0_COMPA_vect, ISR_BLOCK) { PORTC &= ~(1<<PC2); // turn off } ISR(TIMER0_OVF_vect, ISR_BLOCK) { PORTC |= (1<<PC2); // turn on } /**********************************************************************/ int main() { // set pin for output DDRC |= (1<<PC2); // start the timer TIMSK0 |= _BV(OCIE0A) | _BV(TOV0); TCCR0B = _BV(CS00); // enable interrupts sei(); while(1){ OCR0A = (rand()&127)+128; _delay_ms(100); } } Got it down to 640 bytes by switching to timer2 and PB3 so that fast PWM does all the work. // LED-Candle.c // Single LED Candle Simulation /* Use the following compile/link command to produce the 640> /**********************************************************************/ int main() { // set pin for output DDRB |= (1<<PB3); // start the timer TCCR2A = _BV(COM2A1) | _BV(WGM21) | _BV(WGM20); TCCR2B = _BV(CS20); while(1){ OCR2A = (rand()&127)+128; _delay_ms(100); } } With only 1K of flash, I would have thought assembly would have been the way to go. Good job getting the C to compile down that small. Is it the rand function eating up the bulk of the space? Do I remember correctly that the underscore before a function call forces a jump to a single occurrence of the code in the binary? Anyway, pretty cool to see you have the little micro running. Of course, I never doubted you would. Rick Hi Rick, I don't actually have it on the ATtiny10 yet. Wanted to get it working on the Nerdkit first so I know the code is good when I do get it loaded onto the ATtiny. Sorry to mislead. Yep, the rand() is the bulk of the code. Without it the size is 166 bytes so the rand() is 474 of the 640 bytes used. Assembly is probably the best way to go but I am very rusty on assemblers and also have not written a program for the AVR in assembly yet so the easy out for me is to write it in C. The tiny chip has only 32 bytes of SRAM and probably that is the real limiting factor for doing anything very complicated anyway. I am not familiar with using an underscore before a function call and I couldn't find anything on it in the documentation. Must have been another language/platform. Got the new TPI programmer working and now have my candle emulator running on the ATtiny10. Couldn't use the lib version of rand() but fortunately found suitable code on the web that only uses about 1/2 the flash anyway. This version of the test program occupies only 238 bytes of flash. // LED-Candle.c // Dual LED Candle Simulator // output PWM on OC?A // timer0 on ATtiny - OC0A, OC0B - PB0, PB1 #include <avr/interrupt.h> #include <util/delay.h> /**********************************************************************/ int main() { // bump clock to 8MHz CCP = 0xD8; CLKPSR = 0; DDRB |= _BV(PB0); // set OC0A output DDRB |= _BV(PB1); // set OC0B output // start the timer OCR0A = 3; OCR0B = 3; TCCR0A = _BV(COM0A1) | _BV(COM0B1) | _BV(WGM00); TCCR0B = _BV(CS00) | _BV(WGM02); // ref #define RAND_MAX_32 ((1UL << 31) - 1) uint32_t rseed = 0; while(1){ rseed = ((rseed * 214013UL + 2531011UL) & RAND_MAX_32) >> 16; OCR0B = OCR0A; OCR0A = (rseed%128)|128|3; _delay_ms(100); } } Perfect time of year to build a fake candle. Justify the hobby to the wife as "I'm designing Christmas decorations... " Good going on the programmer modifications. I'm going to have to mess with that one day. I've often thought the ATMEGA32U4 (Like the TEENSY) based boards would make a good programmer since they have built in USB support, but have never tried it. Ahh... a project for another day. Thanks for the update, My wife likes the fake candle. It makes a great night light too. I taped a piece of decorative paper together to make a cylinder and just set it over the leds. Looks almost real. It looks brighter in the photo than it really is. I had the same thought on using the built-in USB and bought a couple of 32U4s a several weeks ago with a Mouser order. Then I laid out a TQFP-44 adapter and ordered a few from OshPark and now have one put together ready to go. Updated lcd code to use different pins and it is working in a little test program so now I can get busy on the USB. I think I'll start with the LUFA libraries and see how that goes. I got my tpi programmer working on the ATmega32U4 using the LUFA library. But it's a big chip with 44 pins so I switched over to the AT90USB162 and it's definitely a better fit for the TPI programmer. I found a SOT-23 socket on eBay that will work well for the ATtiny10 so now all I have to do is design a PCB to get it off the breadboard and make it a permanent fixture in my bag of tricks. Maybe in a month it will be completely done. Looks real good, what's the little 8 pin IC in between the AT90USB162 and the ATTINY10? It's a max662 and puts out 12v to enable HV programming on the tiny10. It's all powered from the USB 5v so needed something to get 12v. The thick red and black wires go to my multi-meter. Please log in to post a reply.
http://www.nerdkits.com/forum/thread/2518/
CC-MAIN-2020-05
refinedweb
2,277
71.85
- Code: Select all class Tween(object): def __init__(self, start=0, end=1, duration=10): self.startpoint = start self.endpoint = end self.duration = duration self.current = 0 @classmethod def start(cls, point): tw = cls(point) return tw def end(self, point): self.endpoint = point return self def over(self, duration): self.duration = duration return self def tick(self, amount): self.current += amount percent_complete = self.current / self.duration if percent_complete >= 1.0: end = self.endpoint self.endpoint = None return end return ((self.endpoint - self.startpoint) * percent_complete) + self.startpoint if __name__ == '__main__': t = Tween.start(0).end(10).over(1000) ticking = True while ticking: tick = t.tick(10) ticking = tick if tick: print(tick) Tweening is used in animation to smoothly move something from one place to another. Here's a quick little script to help with that. A better example to help understand would be to think of an element moving across a webpage. The "start" param would be the element's original x-coordinate, and the "end" is what you want that x-coordinate to be when the animation is complete. "over", then, is the length of time it should take to get from start to end, probably in milliseconds. You then repeatedly tick the tweener, letting it know how much time has passed since the last time you ticked it, and it lets you know what the new value should be. Following our webpage example, you'd set the element's x-coordinate to whatever the return value of tick() was. The result, with a good 'over' and 'tick' setting, would appear to be an element that smoothly moves itself from one place to another. In python, this sort of thing could very easily be used with a graphical package (such as pygame) to create animations. When I get some free time, I may do an example of it in pygame to demonstrate.
http://python-forum.org/viewtopic.php?f=11&t=20964&sid=8bd1c88add7a0047d6182ac8b6df3c3a
CC-MAIN-2017-30
refinedweb
314
68.36
SOAP::SOM - provides access to the values contained in SOAP Response Objects from the SOAP::SOM class aren't generally instantiated directly by an application. Rather, they are handed back by the deserialization of a message. In other words, developers will almost never do this: $som = SOAP::SOM->new; SOAP::SOM objects are returned by a SOAP::Lite call in a client context. For example: my $client = SOAP::Lite ->readable(1) ->uri($NS) ->proxy($HOST) $som = $client->someMethod(); $som = SOAP::SOM->new($message_as_xml); As said, the need to actually create an object of this class should be very rare. However, if the need arises, the syntax must be followed. The single argument to new must be a valid XML document the parser will understand as a SOAP response. The following group of methods provide general data retrieval from the SOAP::SOM object. The model for this is an abbreviated form of XPath. Following this group are methods that are geared towards specific retrieval of commonly requested elements. $som->match('/Envelope/Body/[1]'); This method sets the internal pointers within the data structure so that the retrieval methods that follow will have access to the desired data. In the example path, the match is being made against the method entity, which is the first child tag of the body in a SOAP response. The enumeration of container children starts at 1 in this syntax, not 0. The returned value is dependent on the context of the call. If the call is made in a boolean context (such as if ($som->match($path))), the return value is a boolean indicating whether the requested path matched at all. Otherwise, an object reference is returned. The returned object is also a SOAP::SOM instance but is smaller, containing the subset of the document tree matched by the expression. $res = $som->valueof('[1]'); When the SOAP::SOM object has matched a path internally with the match method, this method allows retrieval of the data within any of the matched nodes. The data comes back as native Perl data, not a class instance (see dataof). In a scalar context, this method returns just the first element from a matched node set. In an array context, all elements are returned. Assuming that the earlier call happens after the earlier call to match, it retrieves the result entity from the method response that is contained in $som, as this is the first child element in a method-response tag. $resobj = $som->dataof('[1]'); Performs the same operation as the earlier valueof method, except that the data is left in its SOAP::Data form, rather than being deserialized. This allows full access to all the attributes that were serialized along with the data, such as namespace and encoding. $resobj = $som->headerof('[1]'); Acts much like dataof, except that it returns an object of the SOAP::Header class (covered later in this chapter), rather than SOAP::Data. This is the preferred interface for manipulating the header entities in a message. $ns = $som->namespaceof('[1]'); Retrieves the namespace URI that governs the requested node. Note that namespaces are inherited, so this method will return the relevant value, even if it derives from a parent or other ancestor node. The following methods provide more direct access to the message envelope. All these methods return some form of a Perl value, most often a hash reference, when called. Context is also relevant: in a scalar context only the first matching node is returned, while in an array context, all matching nodes are. When called as a static method or as a regular function (such as SOAP::SOM::envelope), any of the following methods returns the XPath string that is used with the match method to retrieve the data. $root = $som->root; Returns the value of the root element as a hash reference. It behaves exactly as $som-valueof('/')> does. $envelope = $som->envelope; Retrieves the "Envelope" element of the message, returning it and its data as a hash reference. Keys in the hash will be Header and Body (plus any optional elements that may be present in a SOAP 1.1 envelope), whose values will be the serialized header and body, respectively. $header = $som->header; Retrieves the header portion of the envelope as a hash reference. All data within it will have been deserialized. If the attributes of the header are desired, the static form of the method can be combined with match to fetch the header as a SOAP::Data object: $header = $som->match(SOAP::SOM::header)->dataof; @hdrs = $som->headers; Retrieves the node set of values with deserialized headers from within the Header container. This is different from the earlier header method in that it returns the whole header as a single structure, and this returns the child elements as an array. In other words, the following expressions yield the same data structure: $header = ($som->headers)[0]; $header = $som->valueof(SOAP::SOM::header.'/[1]'); $body = $som->body; Retrieves the message body as a hash reference. The entity tags act as keys, with their deserialized content providing the values. if ($som->fault) { die $som->fault->faultstring } Acts both as a boolean test whether a fault occurred, and as a way to retrieve the Fault entity itself from the message body as a hash reference. If the message contains a fault, the next four methods (faultcode, faultstring, faultactor, and faultdetail) may be used to retrieve the respective parts of the fault (which are also available on the hash reference as keys). If fault in a boolean context is true, the result, paramsin, paramsout, and method methods all return undef. $code = $som->faultcode; Returns the faultcode element of the fault if there is a fault; undef otherwise. $string = $som->faultstring; Returns the faultstring element of the fault if there is a fault; undef otherwise. $actor = $som->faultactor; Returns the faultactor element of the fault, if there is a fault and if the actor was specified within it. The faultactor element is optional in the serialization of a fault, so it may not always be present. This element is usually a string. $detail = $som->faultdetail; Returns the content of the detail element of the fault, if there is a fault and if the detail element was provided. Note that the name of the element isn't the same as the method, due to the possibility for confusion had the method been called simply, detail. As with the faultactor element, this isn't always a required component of a fault, so it isn't guaranteed to be present. The specification for the detail portion of a fault calls for it to contain a series of element tags, so the application may expect a hash reference as a return value when detail information is available (and undef otherwise). $method = $som->method Retrieves the "method" element of the message, as a hash reference. This includes all input parameters when called on a request message or all result/output parameters when called on a response message. If there is a fault present in the message, it returns undef. $value = $som->result; Returns the value that is the result of a SOAP response. The value will be already deserialized into a native Perl datatype. @list = $som->paramsin; Retrieves the parameters being passed in on a SOAP request. If called in a scalar context, the first parameter is returned. When called in a list context, the full list of all parameters is returned. Each parameter is a hash reference, following the established structure for such return values. @list = $som->paramsout; Returns the output parameters from a SOAP response. These are the named parameters that are returned in addition to the explicit response entity itself. It shares the same scalar/list context behavior as the paramsin method. @list = $som->paramsall; Returns all parameters from a SOAP response, including the result entity itself, as one array. Return an array of MIME::Entity's if the current payload contains attachments, or returns undefined if payload is not MIME multipart. Returns true if payload is MIME multipart, false otherwise. Suppose for the following SOAP Envelope: <Envelope> <Body> <fooResponse> <bar>abcd</bar> </fooResponse> </Body> </Envelope> And suppose you wanted to access the value of the bar element, then use the following code: my $soap = SOAP::Lite ->uri($SOME_NS) ->proxy($SOME_HOST); my $som = $soap->foo(); print $som->valueof('//fooResponse/bar'); Suppose the following SOAP Envelope: <Envelope> <Body> <c2fResponse> <convertedTemp test="foo">98.6</convertedTemp> </c2fResponse> </Body> </Envelope> Then to print the attribute 'test' use the following code: print "The attribute is: " . $som->dataof('//c2fResponse/convertedTemp')->attr->{'test'}; Suppose for the following SOAP Envelope: <Envelope> <Body> <catalog> <product> <title>Programming Web Service with Perl</title> <price>$29.95</price> </product> <product> <title>Perl Cookbook</title> <price>$49.95</price> </product> </catalog> </Body> </Envelope> If the SOAP Envelope returned contained an array, use the following code to iterate over the array: for my $t ($som->valueof('//catalog/product')) { print $t->{title} . " - " . $t->{price} . "\n"; } A SOAP::SOM object is returned by a SOAP::Lite client regardless of whether the call succeeded or not. Therefore, a SOAP Client is responsible for determining if the returned value is a fault or not. To do so, use the fault() method which returns 1 if the SOAP::SOM object is a fault and 0 otherwise. my $som = $client->someMethod(@parameters); if ($som->fault) { print $som->faultdetail; } else { # do something } The most efficient way To parse and to extract data out of an array containing another array encoded in a SOAP::SOM object is the following: $xml = <<END_XML; <foo> <person> <foo>123</foo> <foo>456</foo> </person> <person> <foo>789</foo> <foo>012</foo> </person> </foo> END_XML my $som = SOAP::Deserializer->deserialize($xml); my $i = 0; foreach my $a ($som->dataof("//person/*")) { $i++; my $j = 0; foreach my $b ($som->dataof("//person/[$i]/*")) { $j++; # do something } } SOAP::Data,)
http://search.cpan.org/~phred/SOAP-Lite-1.02/lib/SOAP/SOM.pod
CC-MAIN-2015-27
refinedweb
1,643
52.09
We have connected Impala to SAP BusinessObjects, using the latest ODBC drivers available for Red Hat. We are running in to errors when we try to retrieve more than a very small amount of data. We can get perhaps up to about 50Kb either by applying LIMIT=<small number> or being very restrictive in the where clause. For example, on an Id column which starts at 1 and counts upwards we can get just over 8000 rows. Folks using non-Business Objects methods to access the data have no problem. This has been raised with SAP but I'd be interested to see if this rings any bells with anyone. I can't begin to explain why, but we switched our database account to one with more access...and hey presto it all just works. Hi Nick, I have not seen this issue myself but quite curious about it. What kind of errors are you seeing? The latest Imala ODBC driver available from the website is 2.5.29. I this the version you are using? Thanks, Holman Hi, thanks for your reply. Yes 2.5.29. The error is "Your session timed out. Close the Java interface and log on again" This is slightly misleading as if you limit the number of rows to 8191 or less it will run successfully. So an 8k row limit? Wondering if it relates to the UTF-8 / UTF-16 / UTF-32 settings in cloudera_impalaodbc.ini and odbc.ini....we just tested changes but they were sabotaged by some empty tables. Hello Is there any reason to not use JDBC ? I could connect from SAP BO to Impala with 32 Bit Impala JDBC drivers Thanks Hi, the reason we aren't using JDBC.....is it doesn't work either! It works for us locally, but when we try server-side the error is "Fail to create an instance of Job : com/cloudera/impala/jdbc41/Driver : Unsupported major.minor version 51.0" We understand this to mean work with Java 7 not 6....so our admin has made the appropriate change but it made no difference. So, ODBC because we have got further than we have with JDBC! Hello, What type of configuration needs to be made connect to impala using JDBC from IDT/ BO server. Your help is greatly appreciated. We are stuck at this point. Thanks Sandhya I asked around, and somebody knew somebody who asked Simba. They said: "There shouldn't be a limitation on the number of rows imposed by the driver, although there is a statement attribute for limiting the number of rows to return (SQL_ATTR_MAX_ROWS). If there are further issues, can you please tell the customer to enable logging through ODBC Administrator and send a zip file of the captured driver logs?" I appreciate you asking and passing the information on. I will look to see how we can set SQL_ATTR_MAX_ROWS attribute and enable logging. If anything is output I'll see if it can be attached here. Thanks again. We can't set SQL_ATTR_MAX_ROWS via Business Objects. We've now generated the log file - I don't see a way to attach it here. The only error is as follows: Oct 30 11:06:42 INFO 392800576 Connection::SQLSetConnectAttr: Attribute: Unknown Attribute (11003) Oct 30 11:06:42 INFO 392800576 ConnectionAttributes::SetAttribute: Invalid attribute: 11003 Oct 30 11:06:42 ERROR 392800576 Connection::SQLSetConnectAttr: [Cloudera][ODBC] (10210) Attribute identifier invalid or not supported: 11003 Oct 30 11:06:42 INFO 392800576 Statement::SQLColAttributeW: FieldIdentifier: SQL_DESC_TYPE_NAME (14) Oct 30 11:06:42 INFO 392800576 Statement::SQLColAttributeW: FieldIdentifier: SQL_DESC_UNSIGNED (8) Oct 30 11:06:42 INFO 392800576 Statement::SQLSetStmtAttrW: Attribute: SQL_ATTR_ROW_ARRAY_SIZE (27) However, this exact same error is thrown regardless of whether or not the problem occurs. It isn't clear - to me at least - what this invalid attribute actually is. I can post the full 300 lines from the log file but assume no one want to wade through that. We used loglevel=5 so all we miss are TRACE statements. I can't begin to explain why, but we switched our database account to one with more access...and hey presto it all just works.
https://community.cloudera.com/t5/Support-Questions/ODBC-Access-from-SAP-BusinessObjects/m-p/51876
CC-MAIN-2019-35
refinedweb
698
65.12
memmove, memmove_s From cppreference.com 1) Copies countcharacters from the object pointed to by srcto the object pointed to by dest. Both objects are interpreted as arrays of unsigned char. The objects may overlap: copying takes place as if the characters were copied to a temporary character array and then the characters were copied from the array to dest. The behavior is undefined if access occurs beyond the end of the dest array. The behavior is undefined if either destor srcis a null pointer. 2) Same as (1), except when detecting the following errors at runtime, it zeroes out the entire destination range [dest, dest+destsz) (if both destand destszare valid) and calls the currently installed constraint handler function: destor srcis a null pointer destszor countis greater than RSIZE_MAX countis greater than destsz(buffer overflow would occur) The behavior is undefined if the size of the character array pointed to by dest< count<= destsz; in other words, an erroneous value of destszdoes not expose the impending buffer overflow. - As all bounds-checked functions, memmove and non-zero value on error. Also on error, if destis not a null pointer and destszis valid, writes destszzero bytes in to the destination array. [edit] Notes memmove may be used to set the effective type of an object obtained by an allocation function. memcpy when there is no overlap at all. Where strict aliasing prohibits examining the same memory as values of two different types, memmove may be used to convert the values. [edit] Example Run this code #define __STDC_WANT_LIB_EXT1__ 1 #include <stdio.h> #include <stdint.h> #include <inttypes.h> #include <string.h> #include <stdlib.h> int main(void) { char str[] = "1234567890"; puts(str); memmove(str+4, str+3, 3); // copy from [4,5,6] to [5,6,7] puts(str); // setting effective type of allocated memory to be int int *p = malloc(3*sizeof(int)); // allocated memory has no effective type int arr[3] = {1,2,3}; memmove(p,arr,3*sizeof(int)); // allocated memory now has an effective type // reinterpreting data double d = 0.1; // int64_t n = *(int64_t*)(&d); // strict aliasing violation int64_t n; memmove(&n, &d, sizeof d); // OK printf("%a is %" PRIx64 " as an int64_t\n", d, n); #ifdef __STDC_LIB_EXT1__ set_constraint_handler_s(ignore_handler_s); char src[] = "aaaaaaaaaa"; char dst[] = "xyxyxyxyxy"; int r = memmove_s(dst,sizeof dst,src,5); printf("dst = \"%s\", r = %d\n", dst,r); r = memmove_s(dst,5,src,10); // count is greater than destsz printf("dst = \""); for(size_t ndx=0; ndx<sizeof dst; ++ndx) { char c = dst[ndx]; c ? printf("%c", c) : printf("\\0"); } printf("\", r = %d\n", r); #endif } Possible output: 1234567890 1234456890 0x1.999999999999ap-4 is 3fb999999999999a as an int64_t dst = "aaaaayxyxy", r = 0 dst = "\0\0\0\0\0yxyxy", r = 22
http://en.cppreference.com/w/c/string/byte/memmove
CC-MAIN-2016-07
refinedweb
455
52.49
Enable tensor numerics checking in an eager/graph unified fashion. View aliases Compat aliases for migration See Migration guide for more details. tf.compat.v1.debugging.enable_check_numerics tf.debugging.enable_check_numerics( stack_height_limit=30, path_length_limit=50 ) The numerics checking mechanism will cause any TensorFlow eager execution or graph execution to error out as soon as an op's output tensor contains infinity or NaN. This method is idempotent. Calling it multiple times has the same effect as calling it once. This method takes effect only on the thread in which it is called. When a op's float-type output tensor contains any Infinity or NaN, an tf.errors.InvalidArgumentError will be thrown, with an error message that reveals the following information: - The type of the op that generated the tensor with bad numerics. - Data type (dtype) of the tensor. - Shape of the tensor (to the extent known at the time of eager execution or graph construction). - Name of the containing graph (if available). - (Graph mode only): The stack trace of the intra-graph op's creation, with a stack-height limit and a path-length limit for visual clarity. The stack frames that belong to the user's code (as opposed to tensorflow's internal code) are highlighted with a text arrow ("->"). - (Eager mode only): How many of the offending tensor's elements are Infinityand NaN, respectively. Once enabled, the check-numerics mechanism can be disabled by using tf.debugging.disable_check_numerics(). Example usage: - Catching infinity during the execution of a tf.functiongraph: import tensorflow as tf tf.debugging.enable_check_numerics() @tf.function def square_log_x_plus_1(x): v = tf.math.log(x + 1) return tf.math.square(v) x = -1.0 # When the following line runs, a function graph will be compiled # from the Python function `log_x_plus_1()`. Due to the # `enable_check_numerics()` call above, the graph will contain # numerics checking ops that will run during the function graph's # execution. The function call generates an -infinity when the Log # (logarithm) op operates on the output tensor of the Add op. # The program errors out at this line, printing an error message. y = log_x_plus_1(x) z = -y - Catching NaN during eager execution: import numpy as np import tensorflow as tf tf.debugging.enable_check_numerics() x = np.array([[0.0, -1.0], [4.0, 3.0]]) # The following line executes the Sqrt op eagerly. Due to the negative # element in the input array, a NaN is generated. Due to the # `enable_check_numerics()` call above, the program errors immediately # at this line, printing an error message. y = tf.math.sqrt(x) z = tf.matmul(y, y)
https://www.tensorflow.org/versions/r2.1/api_docs/python/tf/debugging/enable_check_numerics
CC-MAIN-2021-21
refinedweb
426
50.33
An options page allows a plug-in developer to add controls to the ReSharper Options dialog. This is typically used to let the user specify various plug-in settings. The plug-in writer can add an unlimited amount of option pages to the dialog, and the dialogs can be nested in any of the options groups. Here is a screenshot of a custom options page in action: Let us now discuss the way in which option pages are defined. Making an options page Making an options page is surprisingly easy. You begin by defining a class which will house the options page. This class should be made to implement IOptionsPage, and should be decorated with the OptionsPage attribute. The OptionsPage attribute requires the plug-in author to provide the following parameters: - The page ID. This ID can be specified as a constant field inside the class. The page ID is a string which uniquely identifies this particular options page. - The name of the page. This is the text that will appear in the left-hand tree on the Options page as well as in the title and navigation elements. - The image. This refers to the glyph that appears next to the item in the tree and is specified as a type (e.g., typeof(OptionsPageThemedIcons.SamplePage)). See 4.05 Icons (R7) for more information. In addition, you may specify the following optional parameters: ParentIdlets you define the ID of the section or element which serves as this page's parent. If you want the parent to be one of the known ReSharper items, look inside the JetBrains.UI.Options.OptionsPagesnamespace for the corresponding pages, and then use their Pidas this parameter. For example, for an options page to appear in the Environment section, you specify the ParentIdof EnvironmentPage.Pid. Sequencelets you define the location of the item you are inserting in relation to the other items. Items are placed in order, so the higher this value, the further this page will be in the list of items. Of course, to accurately position the item, you need to know the Sequencevalue of its siblings. Luckily, this information is available in the metadata. Having specified the attributes, your class will appear as follows: Injecting dependencies Options pages are created by the Component Model, which means you can inject dependencies via your constructor parameters. Your constructor should take at least the following two parameters: Lifetime, which controls the lifetime of this page. OptionsSettingsSmartContext, the settings context that you can use to bind UI elements. Both of these values need to be injected because they are required for binding particular settings to UI elements. If you are inheriting from AOptionsPage, you will also need to inject IUIApplication to pass into the base class constructor. In addition to these values, you may inject any other available component into the service. Note that if you implement IOptionsPage on a user control, you should ensure that the generated default constructor is replaced with the constructor you wish the component model to inject dependencies for. Defining the UI You can define the UI for your options page using either Windows Forms or WPF. Whichever option you choose, all you have to do to actually present the UI is to initialize it and assign it to your option page's Control variable. Note: whichever UI framework you choose, your application must reference the WPF assemblies. The compiler will warn you about this if you start using the EitherControl type without adding appropriate references. To create an options page using Windows Forms, simply create a UserControl and assign it to the Control property. Please note that since multiple inheritance is impossible, the only way to keep the options page class and the UserControl class one and the same is as follows: - Inherit from UserControland implement the IOptionsPageinterface. - Decorate the control with the OptionsPageattribute as described above. - Implement the read-only property Control, returning the value of this. To create an options page using WPF, simply define your UI in terms of WPF elements and then assign the Control property accordingly. You can specify any WPF control, e.g., Grid, as the page control. Needless to say, it is entirely possible to use the WindowsFormsHost class to host Windows Forms controls on a WPF options page. The mechanism which binds the controls to the settings works for both WPF and Windows Forms. (Of course, if you implement IOptionsPage manually, you can simply assign properties manually without using bindings at all.) Working with Settings The OptionsSettingsSmartContext class that we inject has several SetBinding() methods that let us tie together settings and controls. These bind methods have two generic arguments - the name of the settings class, and the type of property that is being saved. In the case of WPF, you would specify: - The property that is being assigned on exit. Defined as a lambda expression (e.g., x => x.Name). - The name of the control that the property is being read from. - The dependency property that is being read. For example, here is how one would bind a WPF text box for a username to a corresponding setting: The situation with WinForms is a bit more trickly - there are no dependency properties to be used, so we use the WinFormsProperty helper class. This helper class has a single method, Create(), that creates an object of type IProperty<T> (where T is the property type). To create the property, it requires the following parameters: - The Lifetimeof the calling component. This should be obvious, since the 'proxy property' should only live as long as it is needed. This does, of course, imply that you must inject the Lifetimeinto the constructor. - The class to take data from. In actual fact, though in the case of WinForms you'll probably provide the corresponding control, this doesn't have to be a control per se - it can be practically any object. After all, the WinFormsPropertyclass does not use any WinForms-specific code. - A lambda expression indicating which property of the aforementioned class that is to be used. Thus, the call to bind a WinForms-based password box to a setting becomes as follows:
https://confluence.jetbrains.com/pages/diffpagesbyversion.action?pageId=50503778&selectedPageVersions=5&selectedPageVersions=4
CC-MAIN-2021-17
refinedweb
1,026
54.22
Linear discriminant analysis is a classification algorithm commonly used in data science. In this post, we will learn how to use LDA with Python. The steps we will for this are as follows. - Data preparation - Model training and evaluation Data Preparation We will be using the bioChemists dataset which comes from the pydataset module. We want to predict whether someone is married or single based on academic output and prestige. Below is some initial code. import pandas as pd from pydataset import data import matplotlib.pyplot as plt from sklearn.discriminant_analysis import LinearDiscriminantAnalysis as LDA from sklearn.model_selection import train_test_split from sklearn.metrics import classification_report Now we will load our data and take a quick look at it using the .head() function. There are two variables that contain text so we need to convert these two dummy variables for our analysis the code is below with the output. Here is what we did. - We created the dummy variable by using the .get_dummies() function. - We saved the output in an object called dummy - We then combine the dummy and df dataset with the .concat() function - We repeat this process for the second variable The output shows that we have our original variables and the dummy variables. However, we do not need all of this information. Therefore, we will create a dataset that has the X variables we will use and a separate dataset that will have our y values. Below is the code. X=df[['Men','kid5','phd','ment','art']] y=df['Married'] The X dataset has our five independent variables and the y dataset has our dependent variable which is married or not. We can not split our data into a train and test set. The code is below. X_train,X_test,y_train,y_test=train_test_split(X,y,test_size=.3,random_state=0) The data was split 70% for training and 30% for testing. We made a train and test set for the independent and dependent variables which meant we made 4 sets altogether. We can now proceed to model development and testing Model Training and Testing Below is the code to run our LDA model. We will use the .fit() function for this. clf=LDA() clf.fit(X_train,y_train) We will now use this model to predict using the .predict function y_pred=clf.predict(X_test) Now for the results, we will use the classification_report function to get all of the metrics associated with a confusion matrix. The interpretation of this information is described in another place. For our purposes, we have an accuracy of 71% for our prediction. Below is a visual of our model using the ROC curve. Here is what we did - We had to calculate the roc_curve for the model this is explained in detail here - Next, we plotted our own curve and compared to a baseline curve which is the dotted lines. A ROC curve of 0.67 is considered fair by many. Our classification model is not that great but there are worst models out there. Conclusion This post went through an example of developing and evaluating a linear discriminant model. To do this you need to prepare the data, train the model, and evaluate.
https://educationalresearchtechniques.com/2018/10/26/linear-discriminant-analysis-in-python/
CC-MAIN-2019-35
refinedweb
528
66.94
This discussion is for expanding your array of maneuvers needed to code quickly and efficiently. yo... You can find more practice examples set @ C++ Programming Example Program with Output yes it will. I previously used to code in JAVA. But in IOITC You will face LOT of problems with it. So shift to C++. Mainly because java is slow.. so even o(n) solutions... Nope;) I am solving the first problem of the IARCS archive but I am getting Submission_State_Fatal.What should I do to solve it? Use existing discussions for all problems. 38 out of 44 problems have hints available. ICO Online Judge is available here. For a really long variable writing auto maybe acceptable; but it isn't a good practice to return auto from some complicated function; in which case the person making the code might not underst... Pointers are powerful but it makes the code look ugly , and for the same reason I don't write using namespace std ; in my code, dont make your code ugly , if you want you could try brainfuck , :P . Though once C++14 is made ready nobody ever needs a pointer anymore and can use auto for every weird thing. That's good and bad. Note:): Is greedy algorithm required for zco ? No... DFS and BFS are only required for INOI Here you go buddy Zonal Informatics Olympiad (ZIO) 25 out of 99 problems posted so far. Can they be submitted by now ? They can't be submitted Vatsal. ICO organizers haven't made the submission website available to the public yet. Where can the problems be submitted, which judge? I can't see the problems on IARCS atleast. You can find all the relevant details at the following link It's optional so you can skip it... i'm not able to type my codechef username in the registration form.my username is sir_zaid02 and it seems that they are not allowing underscores.
https://www.commonlounge.com/community/e448c00bce994d329e411139a57f9ce9/a9716e8ed7574de5a63ed64981d29fc0
CC-MAIN-2019-47
refinedweb
324
76.22
Download presentation Presentation is loading. Please wait. Published byElisabeth Hargreaves Modified over 4 years ago 1 4 If-Statements © 2010 David A Watt, University of Glasgow Accelerated Programming 2 Part I: Python Programming 2 4-2 If-statements: basic form An if-statement enables a program to choose between alternative courses of action. The basic form of if-statement is: if expression : body 1 else: body 0 a sequence of statements To execute this if-statement: 1. Evaluate expression to either True or False. 2. If the result was True, execute body 1. 3. If the result was False, execute body 0. 3 4-3 Example: maximum of two numbers If max were not a built-in function, we could define it ourselves: def max (x, y): # Return the greater of the numbers x and y. if x > y: return x else: return y 4 4-4 If-statements: form without else Sometimes one of the alternatives of an if- statement is to do nothing. We can omit the else part: if expression : body To execute this if-statement: 1. Evaluate expression to either True or False. 2. If the result was True, execute body 1. 3. If the result was False, do nothing. 5 4-5 Example: sorting two numbers The following program takes two numbers in x and y, and sorts them so that the smaller number is in x and the larger in y : x = input('Enter a number: ') y = input('Enter another number: ') if x > y: # Swap the numbers in x and y … z = x x = y y = x print 'Sorted numbers:', x, y 6 4-6 If-statements: extended form (1) Sometimes an if-statement has three (or more) alternatives. We can use an extended form of if-statement: if expression 1 : body 1 elif expression 2 : body 2 else: body 0 may be replicated as often as you need may be omitted 7 4-7 If-statements: extended form (2) This is equivalent to: if expression 1 : body 1 else: if expression 2 : body 2 else: body 0 8 4-8 Example: roots of quadratic equation (1) Consider the general quadratic equation: ax 2 + bx + c = 0 Its roots are given by: ( ‒ b ± √(b 2 – 4ac)) / (2a) But note that: –there are two real roots if b 2 – 4ac > 0 –there is only one real root if b 2 – 4ac = 0 –there are no real roots if b 2 – 4ac < 0. 9 4-9 Example: roots of quadratic equation (2) Function: def roots (a, b, c): # Return the real root(s) of the quadratic equation # a x 2 + b x + c = 0, or ( ) if it has no real roots. d = b**2 – 4*a*c if d > 0: s = square_root(d) r1 = (-b+s)/(2*a) r2 = (-b-s)/(2*a) return (r1, r2) elif d == 0: return –b/(2*a) else: return () 10 4-10 Example: roots of quadratic equation (3) Function calls: roots(1.0, 2.0, 1.0) returns ‒ 1.0 roots(1.0, 3.0, 1.0) returns ( ‒ 0.382, ‒ 2.618) roots(1.0, 1.0, 1.0) returns ( ) 11 4-11 Example: roots of quadratic equation (4) Tracing the function call roots(1.0, 2.0, 1.0) : abc Enter the function: 1.02.01.0 Execute “ d = b**2 - … ”: 1.02.01.0 d 0.0 Test “ d > 0 ”: yields False 1.02.01.00.0 Test “ d == 0 ”: yields True 1.02.01.00.0 Execute “ return –b/(2*a) ”: returns ‒ 1.0 1.02.01.00.0 12 4-12 Testing non-boolean values Uniquely, Python allows a value of any type to be tested in an if-statement (or while-statement). TypeValue treated as False Values treated as True integer0non-zero integers floating-point0.0non-zero numbers string‘’ (empty string)non-empty strings listempty listnon-empty lists dictionaryempty dictionarynon-empty dict’s 13 4-13 Example: testing strings Here is part of a simple command-line program. It accepts commands “duck” and “dive”, and rejects an invalid command. com = raw_input('Command? ') if com == 'duck': duck() elif com == 'dive': dive() elif com: print '- invalid command!' This assumes that functions duck and dive actually execute the respective commands. alternatively, elif com != '': 14 4-14 Conditional expressions A conditional expression evaluates its result in one of two alternative ways, depending on a boolean. It has the form: expression 1 if expression 0 else expression 2 To evaluate this conditional expression: 1. Evaluate expression 0 to either True or False. 2. If the result was True, evaluate expression 1. 3. If the result was False, evaluate expression 2. 15 4-15 Example: conditional expression (1) Here are shorter implementations of the abs and max functions: def abs (x): # Return the absolute value of the number x. return (x if x >= 0 else -x) def max (x, y): # Return the greater of the numbers x and y. return (x if x > y else y) It is good practice to parenthesise this form of expression. 16 4-16 Boolean operators revisited Consider an expression of the form “A and B”. –A is evaluated first. If it yields False, the overall result must be False, so B is not evaluated at all. –Thus “A and B” is equivalent to “B if A else False ”. Similarly, consider an expression of the form “A or B”. –A is evaluated first. If it yields True, the overall result must be True, so B is not evaluated at all. –Thus “A or B” is equivalent to “ True if A else B”. 17 4-17 Short-circuit evaluation The boolean operators “ and ” and “ or ” each uses short-circuit evaluation. The right operand is evaluated only if it can affect the overall result. No other Python operator uses short-circuit evaluation. The left and right operands are both evaluated. 18 4-18 Example: short-circuit evaluation (1) This function tests whether a student is qualified to graduate with an ordinary degree: def qualified (credits, grade_points): # Return True if the student has at least 360 credits # and a grade-point average of at least 9.0. return credits >= 360 and grade_points/credits >= 9.0 If the student has 0 credits, this function correctly returns False. Short-circuit evaluation is essential here, otherwise the function would fail (division by 0). 19 4-19 Example: short-circuit evaluation (2) This function uses a conditional expression to give the same result: def qualified (credits, grade_points): # Return True if the student has at least 360 credits # and a grade-point average of at least 9.0. return (grade_points/credits >= 9.0 \ if credits >= 360 \ else False) Similar presentations © 2018 SlidePlayer.com Inc.
http://slideplayer.com/slide/2395989/
CC-MAIN-2018-47
refinedweb
1,110
62.88
An Xserve running Mac OS X Server is ideal for all the many jobs that hosting a small business network requires. It can manage networking and users for a local area network of dozens of nodes; connect them all to the Internet via Network Address Translation (NAT); provide firewall security for them; and host file and print services for every node on the network. It can provide incoming and outgoing email service for the company domain, including spam filtering and mailing-list management. The company Web site can live on the Xserve, along with FTP service. It can provide a secure Virtual Private Network (VPN) connection to another remote LAN; accept incoming dial-up connections; automate backups of network data; and much more. This is the first in a suite of articles on using Xserve for a small business local area network (LAN); here, we take a high-level look at the capabilities of the Xserve and Mac OS X Server, in the context of a hypothetical LAN. Over time, additional articles will be added to fill in the details on using Xserve on a small business LAN for these functions. The Xserve can manage a network of several dozen or more workstations — desktops and laptops—running Mac OS X, Mac OS 9, Windows 98, Windows 2000, and Windows XP, among other operating systems. The easiest setup is for nodes to be connected in a basic star topology, with the nodes all hooked (via CAT5 Ethernet cable) to a central switch, which also connects to the Xserve. There is no real reason to interconnect the workstations to each other. The Xserve acts as a DHCP server. This function can be turned on, configured, and monitored using the Server Settings utility. Whenever a machine boots up and connects to the network, the Xserve assigns it a "lease" — a temporary IP address in a specified range. Other IP addresses in the namespace can be set aside for other purposes, and never assigned. The client workstations must be configured to receive these assignments. In OS X's Network Preferences, "Get IP Address Using DHCP" should be selected. On the Windows machines, in the Network control panel, the TCP/IP Properties settings need to show "Obtain an IP address automatically". In Mac OS 9, the TCP/IP Control Panel should be set to "Configure Using DHCP Server". Details of DHCP configuration are given in the "DHCP Service" chapter of the Mac OS X Server Administrator's Guide. In order to provide Internet connectivity to the workstation nodes, Network Address Translation can be provided by running natd on the Xserve. When a node sends packets to the outside world, natd translates the source IP address, so that the packets appear to be coming from the Xserve itself, rather than the internal machines. When a response is returned, the translation is reversed, and the packets are routed to the originating machine. The process is completely invisible to the two machines on either end of the connection. In order to implement this, the Xserve must have (at least) two network cards: one card that plugs into the switch connecting it to the rest of the LAN, and the other card connected to the external Internet connection. The external connection can be a DSL line, a cable modem connection, or a faster leased line. For certain types of connections — Point-to-Point Protocol over Ethernet, for example—special client software may be required to run on the Xserve to manage the connection. The Virtual Private Networking capabilities of Mac OS X Server can be used to connect two remote LANs to each other securely. It is designed for situations in which a private line would be ideal, but geography and cost make that impossible, so sensitive data must be transmitted across public infrastructure. Unsecured transmission across a standard Internet connection is not adequate. Mac OS X Server's built-in IPSec functionality can create a permanent Virtual Private Network (VPN) connection. IPSec encrypts both the content and the header of every packet exchanged between the two networks, and additionally verifies each packet's authenticity: that it came from where it purports to have come from. This adds an additional layer of security against attacks and snooping. The topology thus created is flexible and expandable. Workstations can be added and removed with very little configuration required, since addresses on the network are assigned automatically. Thanks to Workgroup Manager, users are not tied to a particular workstation—they can access their files and personalized desktop from anywhere on the network. When specialized needs are encountered—for example, if a dedicated machine of some type is needed for some particular function, like running a server on a non-Apple platform; or using a separate machine as a high-traffic Web server—such machines can be added to the network simply and easily, with no need to reconfigure the basic topology. Additionally, wireless networking can be set up by running an AirPort base station in Ethernet bridging mode, so that the Xserve provides the DHCP information to the clients, rather than the base station's built-in DHCP server. The most crucial part of running a network comes after everything's set up: maintenance. This includes monitoring all the systems involved, running backups, keeping on top of security and software updates, and more. Mac OS X Server and the Xserve simplify these tasks in numerous ways. As any network administrator knows, running regular comprehensive backups is a crucial component of good administration practices. Xserve provides numerous options for backing up data. Automated scripts, written in AppleScript or Bourne shell, can run a timed backup every day (or more frequently if desired), dumping valuable data to tape, hard drive, RAID storage, or other media. Third-party solutions are available as well, such as the BRU system. Because backing up is a resource-hungry process, it's best to schedule it at a time when server load is low. Differential backup methods, which back up only the data that has changed since the last backup, are less comprehensive but quicker than full backups. Another advantage of differential backups is that the more often they are done, the less time each one takes (because there is less new data to back up). The type of backup procedure chosen should depend on patterns of data access. Every service that runs on a server outputs logging data, either by default or as an option. The system directory /var/log is the standard location for log files: /var/log [X:/var/log] paul# ls alias.log mail.log.0.gz cups mail.log.1.gz daily.out mail.log.2.gz diskspacemonitor.log mail.log.3.gz mail.log.4.gz monthly.out netinfo.log netinfo.log.0.gz netinfo.log.1.gz netinfo.log.2.gz httpd netinfo.log.3.gz hwmond.log netinfo.log.4.gz hwmond.log.0.gz ppp hwmond.log.1.gz samba hwmond.log.2.gz secure.log hwmond.log.3.gz servermgrd hwmond.log.4.gz statistics icnotifications.log system.log lastlog system.log.0.gz lookupd.log system.log.1.gz lookupd.log.0.gz system.log.2.gz lookupd.log.1.gz system.log.3.gz lookupd.log.2.gz system.log.4.gz lookupd.log.3.gz system.log.5.gz lookupd.log.4.gz system.log.6.gz lpr.log system.log.7.gz lpr.log.0.gz webobjects.log lpr.log.1.gz webobjects.log.1 lpr.log.2.gz webperfcache lpr.log.3.gz weekly.out lpr.log.4.gz wtmp mail.log wtmp.0.gz The Web server outputs its log to /var/log/httpd/access_log; the mail server to /var/log/mail.log; and so forth. The files in the listing that end in ".gz" are compressed archives of older logs, created by the log rolling scripts discussed below. /var/log/system.log contains logging information for a lot of miscellaneous processes, such as cron, ipfw, and SSH, as well as general system information, such as boot-time messages, Ethernet connection status, and much more. These logs are a vital diagnostic and monitoring tool. They can be opened as text files, or viewed in real-time from the command line with the tail command: /var/log/httpd/access_log /var/log/mail.log /var/log/system.log tail -f /var/log/system.log This provides a scrolling display of new log entries as they appear. Tail is very useful for monitoring critical processes on the server as they occur. There are also some third-party GUI log-analysis tools available which show trends and patterns, although these tools are primarily designed for Web access logs. Webalizer is a popular free choice. Mac OS X Server has three built-in scripts in /etc/periodic that "roll" log files—that is, when files reach a specified size, the relevant script compresses the file and archives it, and starts logging data to a new file, in order to keep logifle size manageable. The size thresholds can be set by modifying the configuration file: /etc/periodic /etc/diskspacemonitor/daily.server.conf When running a server or servers that are exposed to the Internet, maintaining tight security is vital. There are numerous resources on the Web explaining the basics of good security. The Mac OS X Security page points you to documentation on further aspects of OS X security. There are a number of excellent security features built in to OS X Server. The rules-based firewall software, ipfw, controls the machine's network ports, to filter incoming and outgoing connections. It can be configured from Server Settings, from third-party GUI tools such as BrickHouse, or, for more complex setups, from the command line. Virtual Private Networking, as detailed above, is possible with the built-in IPSec software. For secure Web communications, such as credit card transactions, and for other procedures that require encryption of data—secure email, telnet, and other services—Mac OS X Server provides support for the Secure Sockets Layer protocol, or SSL. Using an Xserve to administer a LAN provides access to a host of useful administrative tools and techniques. And while it hosts a LAN, the Xserve can simultaneously fulfill many other server functions necessary to a small business. Workgroup Manager makes it easy to create and delete users, set their access permissions and characteristics, configure groups for resource sharing, and manage users' personal settings, independent of where they log in. To create the default configuration for new users, including the contents of their home directories, Preferences settings, and more, a template can be created in the Macintosh Manager tool. Create and save a template in the Imported Users list under the Users tab. The default template is stored in the directory /System/Library/User Template. User data, such as aliases, group membership, passwords, and so forth must be managed using the NetInfo Manager (PDF) application. Niload provides command-line access to this tool. The ways in which OS X Server makes client management easy are discussed extensively in the "Client Management" sections of the Mac OS X Administrator's Guide. Using a constellation of protocols to ensure compatibility with multiple client operating systems, Xserve can facilitate the sharing of files around the local network, as well as offer file services to external clients. OS X Server includes server software for Apple Filing Protocol (AFP), to share with Macintosh clients; Samba/SMB (Server Message Block) shares for Windows clients; Network File System (NFS) for Unix compatibility; and FTP (File Transfer Protocol) for general use. These file services can be configured and controlled from the Server Settings tool. The Xserve can also provide print services for networked workstations. With one or more printers attached to the network, any of the workstations can use them. The print services software also allows for fine-grained control over print quotas, authorization, queuing, and other management tasks. Mac OS X Server can also function as a database server. It comes with MySQL, and is compatible with numerous other database platforms, such as PostgreSQL, as introduced in the article PostgreSQL on Mac OS X. An Xserve running a database server can provide data to all sorts of applications that require a database backend, including dynamically driven Web sites, customer relationship management tools, and much more. These applications can run either on the Xserve or on separate machines. Apple's Open Directory is a standards-based LDAP directory access and server architecture for hosting and integrating with LDAP directories (which implement RFC 2307). It is intercompatible with the LDAP standard and with Microsoft's Active Directory, and is ideal for providing directory services for an Xserve-based network. It can be used to set up an Open Directory Password Server, which provides authentication for users on the network, by validating passwords and enforcing policies. It controls access to network resources—files and directories, printers, mountable media, preferences, group permissions, and more—not just for Mac OS X clients, but for OS 8 and 9 clients as well, and also Windows and Unix clients. Detailed discussion of how to use Open Directory services is available in the Mac OS X Server Administrator's Guide. OS X has the Apache Web server built in. This server can be easily configured from Server Settings, and is robust enough to host large dynamic Web sites. The article Optimizing an Xserve for Web Hosting discusses how to optimize an Xserve for hosting multiple Web sites. In conjunction with PHP and/or Perl, both of which are also pre-installed, complex Web applications can be created for any need; with a MySQL database backend, this is a powerful, and completely open source, solution. To provide email access within the network domain, the Xserve's built-in mail server can host, store, and route mail for the network, with an address for each user on the system, and internal LAN mail routed separately from external communication. It allows for IMAP and POP access, which can be secured with SSL as explained above; spam filtering; and the use of alternate mail transfer agents, such as Postfix, Sendmail, and Qmail. Whenever you want to manage an Xserve remotely, consider using Apple Remote Desktop. Besides using the application to remotely manage your Xserve, you can also use Apple Remote Desktop to configure and administer Macintosh clients that are connected to your Xserve. An Xserve can be managed with the Mac OS X Server management applications such as Server Admin and Server Monitor. But you can also use Apple Remote Desktop as a virtual KVM to remotely take control of an Xserve; you can even remotely restart or shutdown your Xserve. Another task for Apple Remote Desktop is for copying configuration files or installing software packages (such as Mac OS X system updates or security updates) remotely. But perhaps the greatest power of using Apple Remote Desktop is when you need to update more than one Xserve—whether you have one Xserve or several, with a few clicks you can copy and install software to all of them. You can also use Apple Remote Desktop to remotely manage the Macintosh clients connected to your Xserve—use it for software distribution, desktop support and hardware and software profiling. Posted: 2003-09-30 Get information on Apple products. Visit the Apple Store online or at retail locations. 1-800-MY-APPLE
http://developer.apple.com/server/overviewsbxserve.html
crawl-002
refinedweb
2,568
51.48
The question I’m about to ask is probably not PyTorch-specific, but I encountered it in context of PyTorch DataLoader. How do you properly add random perturbations when data is loaded and augmented by several processes? Let me show on a simple example that this is not a trivial question. I have two files: - augmentations.py: import numpy as np import os class RandomAugmentation: def __call__(self, obj): perturbation = np.random.randint(10) print(os.getpid(), perturbation) return obj - main.py import numpy as np from time import sleep from torch.utils.data import Dataset, DataLoader from torchvision.datasets import ImageFolder from torchvision.transforms import Compose, ToTensor, Resize from augmentations import RandomAugmentation PATH = ... transform = Compose([ RandomAugmentation(), Resize((16, 16)), ToTensor() ]) ds = ImageFolder(PATH, transform=transform) dl = DataLoader(ds, batch_size=2, num_workers=3) for epoch_nr in range(2): for batch in dl: break sleep(1) print('-' * 80) In the main.py we generate batches of data like we would in a regular image recognition task. RandomAugmentation prints output of RNG that would be used for data augmentation in a real task. On my machine the output from running main.py was: 20909 6 20909 8 20908 6 20910 6 20908 8 20910 8 20909 7 20909 6 20908 7 20908 6 20910 7 20908 0 20910 6 20908 1 -------------------------------------------------------------------------------- 20952 6 20953 6 20952 8 20952 7 20953 8 20952 6 20954 6 20952 0 20952 1 20953 7 20954 8 20953 6 20954 7 20954 6 -------------------------------------------------------------------------------- Clearly, i-th perturbation added by one worker is exactly the same as i-th perturbation added by any other worker (in this example 6, 8, 7, 6, and so on). What’s even worse, in the next epoch data loader is kind of reset, and generates exactly the same (perturbated) data as before, what completely ruins the entire idea of data augmentation. For the time being I designed such a workaround: augmentations.py import numpy as np from time import time import os RNG_PID = None RNG = None def get_rng(): global RNG global RNG_PID if os.getpid() != RNG_PID: RNG = np.random.RandomState([int(time()), os.getpid()]) RNG_PID = os.getpid() print("Initialize RNG", int(time()), os.getpid()) return RNG class RandomAugmentation: def __call__(self, obj): rng = get_rng() perturbation = rng.randint(10) print(os.getpid(), perturbation) return obj However, it doesn’t seem like a very elegant solution to me. It would be great to hear how you deal with this matter
https://discuss.pytorch.org/t/dataloader-workers-generate-the-same-random-augmentations/28830
CC-MAIN-2022-21
refinedweb
407
56.15
Line plots are a nice way to express relationship between two variables. For ex. price v/s quality of a product. Wikipedia says that a line graph is: A line chart or line plot or line graph or curve chart is a type of chart which displays information as a series of data points called ‘markers’ connected by straight line segments. These are also used to express trend in data over intervals of time. plot(x, y) x and y are arrays. import matplotlib.pyplot as plt from matplotlib import style style.use(‘fivethirtyeight’) x = [1, 2, 3, 4, 5, 6, 7] y = [8… I love doing programming and data science.
https://himanshubhatt885.medium.com/?source=post_internal_links---------1----------------------------
CC-MAIN-2021-31
refinedweb
111
81.53
Java SE Quiz yourself: Happens-before thread synchronization in Java with CyclicBarrier The CyclicBarrier class provides timing synchronization among threads, while also ensuring that data written by those threads prior to the synchronization is visible among those threads. by Simon Roberts and Mikalai Zaikin September 13, 2021 | Download a PDF of this article More quiz questions available here Given the following CBTest class import static java.lang.System.out; public class CBTest { private List<Integer> results = Collections.synchronizedList(new ArrayList<>()); class Calculator extends Thread { CyclicBarrier cb; int param; Calculator(CyclicBarrier cb, int param) { this.cb = cb; this.param = param; } public void run() { try { results.add(param * param); cb.await(); } catch (Exception e) { } } } void doCalculation() { // add your code here } public static void main(String[] args) { new CBTest().doCalculation(); } } Which code fragment, when added to the doCalculation method independently, will make the code reliably print 13 to the console? Choose one. A. CyclicBarrier cb = new CyclicBarrier(2, () -> { out.print(results.stream().mapToInt(v -> v.intValue()).sum()); }); new Calculator(cb, 2).start(); new Calculator(cb, 3).start(); B. CyclicBarrier cb = new CyclicBarrier(2); out.print(results.stream().mapToInt(v -> v.intValue()).sum()); new Calculator(cb, 2).start(); new Calculator(cb, 3).start(); C. CyclicBarrier cb = new CyclicBarrier(3); new Calculator(cb, 2).start(); new Calculator(cb, 3).start(); cb.await(); out.print(results.stream().mapToInt(v -> v.intValue()).sum()); D. CyclicBarrier cb = new CyclicBarrier(2); new Calculator(cb, 2).start(); new Calculator(cb, 3).start(); out.print(results.stream().mapToInt(v -> v.intValue()).sum()); Answer. The CyclicBarrier class is a feature of the java.util.concurrent package, and it provides timing synchronization among threads while also ensuring that data written by those threads prior to the synchronization is visible among those threads (this is the so-called “happens-before” relationship). These problems might otherwise be addressed using the synchronized, wait, and notify mechanisms, but those are generally considered low-level mechanisms that are harder to use correctly. If multiple threads are cooperating on a task, at least two problems must commonly be addressed. The data written by one thread must be read correctly by another thread when the data is needed. The other thread must have an efficient means of knowing when the necessary data has been prepared and is ready to be read. The CyclicBarrier addresses these problems by providing timing synchronization using a barrier point and a barrier action. The operation of the CyclicBarrier might be likened to a group of colleagues at a conference preparing to go to a presentation together. They get up in the morning and go about their routines individually, getting ready for the presentation and their day. When they’re ready, they go to the lobby of the hotel they’re staying in and wait for the others. When all the colleagues are in the lobby, they all leave at once to walk over to the conference room. Similarly, a CyclicBarrier is constructed with a count of “parties” as an argument. In the analogy, this represents the number of colleagues who plan to go to the presentation. In the real system, this is the number of threads that need to synchronize their activities. When a thread is ready, it calls the await() method on the CyclicBarrier (in the analogy, this is arriving in the lobby). At this point, one of two behaviors occurs. Suppose the CyclicBarrier was constructed with a “parties” count of 3. The first and second threads that call await() will be blocked, meaning their execution is suspended, using no CPU time, until some other occurrence causes the blocking to end. When the third thread calls await(), the blocking of the two threads that called await() before is ended, and all three threads are permitted to continue execution. (This is the second behavior mentioned earlier.) After the CyclicBarrier thread-execution block is ended, data written by any of the threads prior to calling await() will be visible (unless it is perhaps subsequently altered, which can confuse the issue) by all the threads that called await() on this blocking cycle of this CyclicBarrier. This is the happens-before relationship, and it addresses the visibility problem. After the threads are released, the CyclicBarrier can be reused for another synchronizing operation—that’s why the class has cyclic in the name. Note that some of the other synchronization tools in the java.util.concurrent API cannot be reused in this way. The CyclicBarrier provides two constructors. Both require the number of parties (threads) they are to control, but the second also introduces the barrier action. public CyclicBarrier(int parties, Runnable barrierAction) The barrier action defines an action that is executed when the barrier is tripped, that is, when the last thread enters the barrier. This barrier action will be able to see the data written by the awaiting threads, and any data written by the barrier action will be visible to the threads after they resume. The API documentation states, “Memory consistency effects: Actions in a thread prior to calling await() happen-before actions that are part of the barrier action, which in turn happen-before actions following a successful return from the corresponding await() in other threads.” Each of the quiz options creates a CyclicBarrier and passes it to a thread (created from the Calculator class). Each thread—or each calculator, if you prefer—performs a calculation and then adds the result of that calculation to a thread-safe List that’s shared between the two threads. (Note that the ArrayList itself isn’t thread-safe, but the Collections.synchronizedList method creates a thread-safe wrapper around it.) After adding the result to the List, the calculator thread calls the await() method on the CyclicBarrier. Subsequently, the intention is to pick up the data items that have been added to the List and print the sum. For this to work correctly, the summing operation must see all the data written by the calculators—and that must not occur until after the calculated values have been written. The code should achieve this using the CyclicBarrier. In option B, the attempt to calculate the sum and print the result precedes the construction and start of the two calculator threads. As a result, option B is incorrect. Option B might occasionally print the right answer; the situation is what’s called a race condition. Although unlikely, it’s not impossible that the JVM might happen to schedule its threads in a way that the calculations are completed before the summing and printing starts. It’s also possible that the data written in this situation might become visible to the thread that performs the summing and printing. However, such circumstances are unlikely at best and certainly not reliable. With option B, the output is most likely to be 0 (because the list is empty), but the values 4, 9, and 13 are all possible. There’s no way to predict what the results will be. Option D is a variation of option B with swapped lines. Although this looks like the calculations might be executed before the summing and printing operations, the same race-condition uncertainty exists. So, although this option is more likely than option B to print 13 on any given run, for the same reasons, all the values are possible. Therefore, option D is incorrect. Option C is almost correct but not quite. The CyclicBarrier is created with a parties count of 3. Three calls to await() are made, so the main thread would not proceed until the two calculations are complete. The timing would be correct, and the visibility issue would be correctly addressed such that the last line of the option—the line that computes and prints the sum—would work reliably if it were not for one remaining problem with the implementation shown. Blocking behaviors in the Java APIs are generally interruptible, and if they are interrupted, they break out of their blocked state and throw an InterruptedException, which is a checked exception. Because neither the code of option C nor the body of the doCalculation method into which the code is inserted includes code to address this exception, the code for option C fails to compile. In fact, the await() method throws another checked exception, BrokenBarrierException, and this is also unhandled. However, while you might not know about the BrokenBarrierException, you should know about the InterruptedException because it’s a fundamental and pervasive feature of Java’s thread-management model. Because these checked exceptions are unhandled and the code does not compile, option C is incorrect. In option A, the CyclicBarrier is created with a parties count of 2 (which will be the two calculator threads) and a barrier action. The barrier action is the lambda expression that aggregates results from working parties. The two Calculator threads invoke await() after they have written the result of their calculations to the list. This behavior ensures that both writes have occurred before—and the data is visible to—the barrier action. Then, the barrier action is invoked to perform the summing and printing. As a result, it’s guaranteed that both 4 and 9 are in the list before the summing and printing and that they are visible to that operation. Consequently, the output must be 13, and you know that option A is correct. Conclusion. The correct answer is option A. Related quizzes Quiz yourself: Rules about throwing checked exceptions in Java Quiz yourself: Using core functional interfaces Quiz yourself: Create worker threads using Runnable and Callable
https://blogs.oracle.com/javamagazine/java-cyclicbarrier-thread-synchronization
CC-MAIN-2021-39
refinedweb
1,583
54.22
A Path to EnlightenmentA Path to Enlightenment XML-DEV has been busy recently with a number of long running threads drawing in some interesting postings. The underlying theme has been general orientation on how best to understand and come to terms with particular technologies, ranging from Schemas to Web Services. This week the XML-Deviant walks the XML-DEV path to enlightenment to see where it leads. The first steps on the path are familiar territory: deflating unnecessary hype. This discussion began in response to Edd Dumbill's recent Taglines column. Of particular interest was the accusation that marketing fanfare is overhyping "web services". The response was mixed; while most agreed that the hype was selling more than could be reasonably delivered, others remained adamant that technologies such as SOAP brought real value, Michael Brennan among them.. Brennan demonstrates what seems to be a common perception: web services are simply a formulation, albeit using a new set of technologies, of what many having been doing for years. Michael Champion expounded this view, suggesting that SOAP over HTTP is simply an alternative to URLs from Hell. I personally (obligatory disclaimer ...) suspect that SOAP over HTTP will find its niche mainly as a cleaner, more standardized way of doing what people have been doing with HTTP parameters and CGI scripts "forever". I've sweated over the production and parsing of enough URLs from Hell that I grok the SOAP / UDDI / WSDL vision of doing this in a more orderly manner. Whether that provides a solid foundation for Yet Another Paradigm is another matter entirely. It hardly seems like a new paradigm for application development, does it? In a later posting, Champion also explored answers to the question, what are web services good for? Another part of the discussion was some clarification of what SOAP actually is. At various times it has been compared to CORBA and similar distributed object systems; you can also find similar comparisons between XML-RPC and CORBA. However, the comparison isn't justified. Joshua Allen provided a clear appraisal of how SOAP fits into the distributed object framework. You could probably consider SOAP and CORBA as complementary.. Henrik Frystyk Nielsen's explanation was much pithier and to the point. I would just as a reminder like to point out that SOAP doesn't aspire to be a distributed object system. It is a nothing more than a wire protocol. And, as Michael Brennan explained, the Web Services Description Language (WSDL) completes the picture by providing functionality that COM and EJB developers have long been using.. After only a brief journey down our path, we've learned an interesting lesson: web services, as embodied by SOAP and WSDL, don't offer any functionality that developers haven't already had for some time. But SOAP-WSDL achieves things in a way that is potentially more open and cross-platform. While these are certainly laudable goals, it doesn't seem like there's as much to the web service revolution as many (would have us) believe. Moving on, it seems that articles like Don Smith's Understanding W3C Schema Complex Types' are helping users come to terms with W3C XML Schemas; other comments on XML-DEV suggest that many are making progress on their migration away from DTDs. This seems particularly true for less ambitious uses, as Len Bullard related. I think XML Schemas are too hard because we aren't really sure what they are supposed to do... For me they are easy because... most of what I want to represent can be done in DTDs. Still, I find myself creating restricted simple types for reuse to pick up the extra power of regexes and that is a step beyond DTDs. Peter Piatko reported similar experiences.. However as Joe English warned, no system is an island and others may be far more ambitious. ... Slightly further down the road, we learned that W3C XML Schemas are about more than just validation. This is no great surprise, but it's useful to see it clearly spelled out. Interestingly Michael Brennan observed that this may be the cause of some of the frustration directed at the XML Schema specification. I [...] accommodate the weaknesses of the schema language. In our case, this means changing our structures such that an element's content model is not identified by an attribute value. RELAX NG can accommodate this, but XSD cannot. This is an intriguing observation as it implies the conclusion that, if all you need is a schema language slightly more sophisticated than DTDs, RELAX NG may be the appropriate choice. RELAX is solely about validation and is, therefore, likely to be a better fit for that particular use case. Further, if you need only simple datatyping then a mix of RELAX and Schematron may be enough. Rick Jelliffe demonstrated this week that Schematron can be used for typechecking.). Arguments about types seem to fill the landscape this week, as runoff from the schema and namespace debate summarized in last week's XML-Deviant continues. (None too surprising, perhaps, given that those topics have probably consumed more XML-DEV threads than any others). It seems that there are different viewpoints about where and how, or even whether, type information is associated with XML markup. The core issues relate to whether one is working with simple well-formed markup or with systems that involve validation and strong typing. Tim Bray has been at the center of this discussion, arguing at length that properly labeled markup is the key to maximum flexibility. Q1: Why would you use XML? A1: One of the important reasons is so that.. Shirking mentions of "type" as the key to enlightenment, Bray also presented a worldview which he argued was consistent with all interpretations: XML + Namespaces is a means to associate labels with data structures. One is free to build any kind of architecture on this, including one that is strongly influenced by type information. While this may show good architectural design by not limiting decisions in other layers, several people, including Paul Prescod, were concerned that the plurality of different architectures isn't being explored; rather, in fact typing is leading the way. For better or for worse, the emerging XML architecture DOES elevate schemas, validation and ty[p[ype] system. So flaws in that system will eventually become material to all XML users. Some future applications may not deal with element labels (or ulabels) at all. They will deal with t[ype] names. Publication of the XQuery 1.0 and XPath 2.0 Functions and Operators Version 1.0 Working Draft clearly demonstrates Prescod's point. These discussions hint at further splits in the community. Is there a potential fork on our road? Some of us are interested only in well-formed documents, while others want to mix in a constraints (validation) mechanism. Both groups seem to need much less than is currently being designed. And still others desire strong typing and object-oriented features; these are the ones who seem currently be most well-served by the latest W3C deliverables. But their satisfaction may be a detriment to those seeking a simpler existence. This wouldn't be the first time that a fork in the road has been highlighted. In fact in another thread this week the same suggestion, simplification through refactoring, has been made several times. Pertinent to the previous discussion, Alexander Nakhimovsky suggested refactoring W3C XML Schemas. ...it would be a good thing, IMO, if XML Schema were *re-factored* into the validating part (such as RELAX NG) and a "complex-type-relations" part, for use in specialized applications. Following a discussion concerning the use of XPointer within XInclude, which showed that a streaming processing model is not possible without some kind of subsetting (ideally of XPath), Sean McGrath made a plea to the W3C to explore further subsetting of their output.. Some argued that this isn't as easy as it might seem. Henry Thompson wondered how one identifies the users who matter: There is a fundamental question buried within your request: who are the users whose voices matter, and how do we identify them? Remember Lot pleading for Sodom and Gomorrah? How many people really using feature F of XML 1.0 + Namespaces + ... does it take in order to render it safe from pruning? Michael Champion agreed with Thompson, believing that a close shave at Reverend Occam's Barbershop, while overdue, would be a hard fought battle:. Yet these aren't so much arguments against attempting the task as they are an acknowledgment that no subset is likely to please everyone. If the same viewpoint had won out several years ago, then we wouldn't have XML in front of us now. And this subset hasn't stopped useful work being done. Another point worth making is that many argue for refactoring rather than subsetting; the former involves retaining functionality while improving the architecture. So there are other means to the same end. Smaller, refactored specifications will also lend themselves well to being put together in different ways and with alternate piece in key positions. This seems a good way to achieve greater satisfaction. So, at the end of this week's jaunt, we've come full circle. For much of the last year the same obstacles have been in our way. One wonders how long this will continue before new ground is struck? XML.com Copyright © 1998-2006 O'Reilly Media, Inc.
http://www.xml.com/lpt/a/836
CC-MAIN-2013-48
refinedweb
1,577
62.88
Although I've written on this subject before, it occured to me that since we now have .NET Framework 4.0, the Task Parallel library gives us newer, more efficient ways to "skin a cat", so I've put together this demo that has two separate services - one that does it via the Threadpool (the "old" way) using a ManualResetEvent to block until completion, and the second, using the TaskFactory class from .NET 4.0. The client is simply a web page with a label and two buttons. One button calls the Threadpool Service method, and the other button calls into the TaskFactory service with its method. The results are displayed in the label control. To keep this usable, I've only used four urls - but you could do 100 if you wanted to. Let's have a look at the first service (ThreadPool): using System; using System.Collections.Generic; using System.Linq; using System.Net; using System.Runtime.Serialization; using System.ServiceModel; using System.Text; using System.Threading; using System.Threading.Tasks; namespace MultiRequest { public class StateThing { public string Url { get; set; } public int Counter { get; set; } public StateThing (string url, int counter) { this.Url = url; this.Counter = counter; } } [ServiceContract] public class Service1 { private ManualResetEvent mre = new ManualResetEvent(false); private string[] _contents; private int counter = 0; [OperationContract] public string[] GetRequests( string[] urls) { _contents = new string[urls.Length ]; for (int i = 0; i < urls.Length; i++) { var state = new StateThing( urls[i],i); ThreadPool.QueueUserWorkItem(DoWork, state); } // block thread until last one is processed mre.WaitOne(); return _contents; } private void DoWork(object state) { var st = (StateThing) state; string url = st.Url; int ctr = st.Counter; var wc = new WebClient(); string stuff = wc.DownloadString(url); _contents[counter] = stuff; wc.Dispose(); if (counter == _contents.Length-1) mre.Set(); // OK! Let 'er go! counter++; } } } You can see above, I've created a small class, "StateThing" to hold state. This can include anything you want, it is passed as the second parameter to the ThreadPool QueueUserWorkItem method. So our WebMethod receives a string array of urls, iterates over each creating an instance of the state object, and queues each workitem onto the thread pool. Now - we need to prevent the GetRequests method from returning until all the urls have been retrieved. One easy way to do this is by using the ManualResetEvent class. When you call WaitOne on the ManualResetEvent, all further processing on the main thread is halted until something calls the ManualResetEvent's Set method. We only need to track our items with some sort of counter, as can be seen in the DoWork method. The Threadpool has a set number of threads by default (this can be set with SetMaxThreads) and if all are in use it will queue requests until one becomes available. This is an efficient was to process multiple operations in parallel. When you create a Task or Task(Of TResult) object to perform some task asynchronously, by default the task is scheduled to run on a thread pool thread. Now let's have a look at the TaskFactory alternative: using System; using System.Collections.Generic; using System.Linq; using System.Net; using System.Runtime.Serialization; using System.ServiceModel; using System.Text; using System.Threading.Tasks; namespace MultiRequest { [ServiceContract] public class Service2 { private string[] _contents; [OperationContract] public string[] GetRequestsWithTasks(string[] urls) { var tasks = new Task[urls.Length]; _contents = new string[urls.Length]; int ctr = 0; for (int i = 0; i < urls.Length; i++) { var state = new StateThing(urls[i], i); var task = Task.Factory.StartNew(() => DoTask(state), TaskCreationOptions.LongRunning); tasks[ctr] = task; ctr++; } Task.WaitAll(tasks); return _contents; } private void DoTask(object state) { var wc = new WebClient(); var st = (StateThing)state; var ctr = st.Counter; string stuff = wc.DownloadString(st.Url); _contents[ctr] = stuff; } } } The approach is similar to the first, but instead we call var task = Task.Factory.StartNew(() => DoTask(state), TaskCreationOptions.LongRunning); and add each task to an array of Task objects. We then call Task.WaitAll(tasks) which pretty much does the same thing as the ManualResetEvent in the first example. You can download the complete Visual Studio 2010 Solution here. Remember: anytime you need to do a lot of "something" at the same time, Parallel is your friend.
http://www.nullskull.com/a/1701/aspnet-executing-multiple-httprequests-in-a-wcf-service-via-threadpool-and-task-factory.aspx
CC-MAIN-2014-15
refinedweb
704
60.72
i've added both rc's to the MVS leg. testall/testlock passes. >From the following zOS C/C++ run time manual snippet, looks like this should be enough. 3.563 pthread_cond_timedwait() -- Wait on a Condition Variable | If unsuccessful, *pthread_cond_timedwait*() returns -1 and sets errno to one | of the following values: | EAGAIN For a private condition variable, the time specified by abstime | has passed. | EINVAL Can be one of the following error conditions: - | The value specified by cond is not valid. - | The value specified by mutex is not valid. - | The value specified by abstime (tv_sec) is not valid. - | The value specified by abstime (tv_nsec) is not valid. - | Different mutexes were specified for concurrent operations | on the same condition variable. - | The mutex is not owned by the current thread. | ETIMEDOUT For a shared condition variable, the time specified by abstime | has passed. Index: locks/unix/thread_cond.c =================================================================== --- locks/unix/thread_cond.c (revision 579232) +++ locks/unix/thread_cond.c (working copy) @@ -92,7 +92,11 @@ rv = errno; } #endif +#ifdef __MVS__ + if (ETIMEDOUT == rv || EAGAIN == rv) { +#else if (ETIMEDOUT == rv) { +#endif /* __MVS__ */ return APR_TIMEUP; } return rv; On 10/12/07, William A. Rowe, Jr. <wrowe@rowe-clan.net> wrote: > >; > >
http://mail-archives.apache.org/mod_mbox/apr-dev/200710.mbox/%3C3ce0569d0710160758k7d900b4fye7d271c49aeb38a2@mail.gmail.com%3E
CC-MAIN-2014-23
refinedweb
195
56.76
Hey everyone, Heres my problem. I have 2 forms. Form 1 contains a menu strip and data grid view Form 2 Contains a Button and a bunch of other controls to help create the file. How I'm doing this Private Sub BtnRecord_Click(sender As Object, e As EventArgs) Handles BtnRecord.Click Dim Bookz As String Title = TxtTitle.Text Author = TxtAuthor.Text Stock = TxtStock.Text Price = TxtPrice.Text Genre = RadButFiction.Text Genre2 = RadButNonFiction.Text If TxtAuthor.Text = "" Or TxtPrice.Text = "" Or TxtTitle.Text = "" Or TxtStock.Text = "" Then MessageBox.Show("Please fill in all required text boxes, they are marked with a * .") ElseIf RadButFiction.Checked = False And RadButNonFiction.Checked = False Then MessageBox.Show("Please select a category.") ElseIf RadButFiction.Checked = True Then Bookz = (Title & "," & Author & "," & Stock & "," & Price & "," & Genre) Else Bookz = (Title & "," & Author & "," & Stock & "," & Price & "," & Genre2) End If S = IO.File.AppendText("Booklistz.txt") S.WriteLine(Bookz) S.Close() TxtTitle.Clear() TxtAuthor.Clear() TxtStock.Clear() TxtPrice.Clear() RadButFiction.Checked = False RadButNonFiction.Checked = False So I use a streamwriter which I dim as string in the public class to make it easier. This code is working fine. My problem is in this code, not so much a problem as much as I don't know how to go about doing it. I use an openfiledialog to import the textfile which was created above into the datagridview. I ran into several problems with this code, so if anyone could please guide me into how I would go about doing this. I would be greatly appreciative. Dim textfile As String Try OpenFileDialog1.ShowDialog() textfile = OpenFileDialog1.FileName DGVBooks.DataSource = IO.File.ReadAllLines(textfile) Catch ex As Exception End Try Novice at this I'm trying to import the data so it looks like the attached capture below ![bd1401fb65480c8bd92bea109464a0eb] EDIT: I'm using VB.Net
https://www.daniweb.com/programming/software-development/threads/476143/open-txt-file-in-dgv-with-openfiledialog
CC-MAIN-2017-43
refinedweb
298
63.15
On 12 Dec 2012, at 16:31, Gary Martin <gary.martin@wandisco.com> wrote: > On 12/12/12 15:18, Jure Zitnik wrote: >> Hi, >> >> On 12/12/12 2:30 PM, Gary Martin wrote: >>>> o Database schema changes >>>> >>>> 1. Can we be exact with altered table list? >> >> BEP-0003 updated with the list, currently SQLs targeted at enum, component, milestone, version and wiki tables are translated. >> >>>> 2. Something that we might forgot: What about 3rd party plugin tables that >>>> reference multiproductized Trac tables? >>>> Will probably need to proclaim these incompatible when more than one >>>> product is in effect? >>> >>> Good point. To keep track of records from tables from third party plugins, this approach doesn't quite work. I would have thought that we would be better off using a separate table to keep track of the resources that belong to a product. Is this another area that has not been updated based on discussions? >> >> The current SQL translator implementation would show 3rd party plugins a view of translated tables that would only include resources from the currently selected product scope. If the plugin makes a reference to a resource by it's name everything should work fine as the reference would be consistent each time when in that specific product environment (as the plugin would always get the same view of the database). >> >> Things start breaking if there's a resource with the same name in multiple products, unless the translator is changed to return names with product namespace being prefixed to the actual resource name for example. The plugins would get version name 'BH:1.0' instead of '1.0' for example. Still, this doesn't solve the problem entirely as the plugin (that's not aware of products) would end up (in it's own tables) with references to different resources from different products and maybe that's not exactly what's expected to happen... >> >> Keeping track of resource belonging to a product using a separate resource mapping table also unfortunately doesn't solve the issue. We'd need to change the schema anyway as in the current database model, all tables have 'name' column as their key. We could of course reference the same resource from different products using the separate mapping table but we'd be referencing the same record and changing the name of that record would change the resource in all products which is, at least imo, not what we want. > Ah yes, I forgot about my ideas for that. For the purposes of unique keys I was thinking of including some kind of prefix as part of the name - not necessarily the product namespace as we could consider it better to leave this as a constant with a means to link the prefix to the namespace. > > The problem with just adding fields to each model is not so much a problem from the point of view of 3rd party plugins accessing those models that are modified in such a way, but rather with those resource tables that are added by the third party plugins. These would have to be modified to add the product to their tables too. Is the suggestion that we do that modification for externally defined resources or only provide the ability for specific plugins? > >> >>>> >>>> o Administration commands >>>> >>>> This implies that we will have two different modes of operation (multi >>>> product and single product) which will have to be chosen at >>>> installation? Is this necessary? I can imagine users will start with single >>>> product because it will be perceived simpler but will have additional work >>>> to do when the need for second product arises. >>> >>> I struggle to see a reason to support a deploy-multiproduct admin command at the moment, partly because I don't see a reason to go beyond providing a basic product path based namespace resolution. More on that below.. >> >> I think that we only have one mode of operation and that is multi product. Installation should initially setup a 'default' product and give the users the ability to add products as needed. > > I keep playing with the idea of whether the null product could be the default. The only problem I see with that after a small amount of consideration is making sure it is possible to access it from within other products. It is probably just as well to create a product called 'main' or 'default' or similar to simplify things. > > However, I don't think that is strictly what the section in BEP-0003 is talking about. It seems to be more of a question of whether there is anything different about deployment scripts for bh multiproduct enabled installations and, perhaps, whether we provide admin commands to help add specific products as deployable entities. I think that Peter is right to suggest that there doesn't seem to be a strict reason to do this when it might encourage extra work for adding more products. > > I could be misinterpreting everyone of course! I believe the installation should always be 'multi product enabled'. It's the safer option (i.e. not causing problems later if additional products are required) and multi product is one of the core features of Bloodhound. If a user is really adverse to the possibility of adding additional products he can enforce this via permissions or just install plain Trac. From a UI perspective the initial product could be considered as a 'null' product until a name is given by the user, but all that is really a separate discussion as far as I can tell. > >> >> >>>> o Product resources namespaces >>>> >>>> I remember there have been some debates on this a few weeks ago and it >>>> seems this sections does not reflect them or any consensus if there was one. >>> >>> There may well not have been consensus so it is probably worth someone attempting to review those discussions to see if there was much we could agree on. However, I seem to remember that it ended with the suggestion that webserver configuration should be able to sort out most url schemes so we probably shouldn't worry about direct support. >> >> +1 >> >>> Using a path based approach should be good enough for now and I would suggest dropping others for consideration as a later enhancement - such a later enhancement could just be a bit of documentation of course if that is considered enough. >>> >>>> o Per product repository >>>> >>>> Maybe many-to-many? There are all sorts of things out there in the wild. >>>> But probably not common enough to warren phase 1. >>> >>> Per product repositories does make good sense but it may be a good simplification to avoid worrying about it too early. We should be able to work with a set of globally defined repositories so we should make sure that works first. >> +1 >> >> Cheers, >> Jure >
http://mail-archives.apache.org/mod_mbox/incubator-bloodhound-dev/201212.mbox/%3C642B4ED3-5628-48BD-A2E4-BAC75FD5B40E@wandisco.com%3E
CC-MAIN-2015-32
refinedweb
1,133
58.92
The objective of this post is to explain how to return specific HTTP codes on a MicroPython Picoweb app. The tests were performed using a DFRobot’s ESP-WROOM-32 device integrated in a ESP32 FireBeetle board. Introduction The objective of this post is to explain how to return specific HTTP codes on a MicroPython Picoweb app. As we will see below, we will use a function called http_error. Nonetheless, we can use it to return other types of HTTP codes, such as the ones in the range of 2xx, which are success codes [1]. You can check here a list of the available HTTP response codes.. Although we are going to use a specific function to return the status code, the start_response function that we have been using on previous tutorials to return the first part of the HTTP response to the client also has an optional status parameter to return a specific HTTP code. By default, it has the value 200, which corresponds to OK [1]. The code First of all, we will import the picoweb module, needed for setting up our HTTP server, and the network module, needed to establish the connection of the ESP32 to a WiFi network. import picoweb import network Next we will take care of connecting to the WiFi network. This piece of code is generic and was explained in more detail on this previous post. You just need to change the values of the ssid and password variables to the credentials of your WiFi network and then the ESP32 should be able to connect to it. Note that at the end of the procedure we will store the IP assigned to the ESP32 in a variable, in order for later pass it to the app instance run method. ssid = "yourNetworkName" password = "yourPassword" station = network.WLAN(network.STA_IF) station.active(True) station.connect(ssid, password) while station.isconnected() == False: pass ip = station.ifconfig() Once finished the connection to the WiFi network, we will take care of creating our app instance and declaring the routes where it will listen. In our case, we will just use one route to test the returning of an HTTP internal server error. We will listen on the “/internalerror” endpoint. app = picoweb.WebApp(__name__) @app.route("/internalerror") def internalError(req, resp): ##Handling function code To keep things simple, our routing handling function will immediately return the HTTP error. Naturally, in a real application scenario, we would most likely have some logic associated before returning an error on a specific endpoint. To return the error code, we simply call the http_error function of the picoweb module, passing as input the client stream writer object and the error code, in string format. The internal server error corresponds to 500 [1]. You can test other status codes. Since under the hood this function uses the awrite method of the stream writer, we need to use the yield from keywords. yield from picoweb.http_error(resp, "500") Finally, we simply need to call the run method on our app instance to start the server. The final source code can be seen below and already includes this call, with the binding to the IP assigned to the ESP32. import picoweb import network ssid = "yourNetworkName" password = "yourPassword" station = network.WLAN(network.STA_IF) station.active(True) station.connect(ssid, password) while station.isconnected() == False: pass ip = station.ifconfig() app = picoweb.WebApp(__name__) @app.route("/internalerror") def internalError(req, resp): yield from picoweb.http_error(resp, "500") app.run(debug=True, host =ip[0]) Testing the code To test the code, simply upload the previous script to your ESP32 and run it. Upon executing, a message should be printed to the console, indicating the root path where the server is listening. You just need to copy that URL to a web browser and append the internalerror word, which corresponds to the route where our app is listening. Upon executing, you should get an output similar to figure 1. Note that I’ve opened Chrome’s developer tools to check the request in more detail. Figure 1 – Internal server error HTTP code returned. As mentioned in the introductory section, despite the function name being http_error, we can use it to return non-error codes, such as the 201, which corresponds to a new resource being created [1]. You can confirm this at figure 2, where this HTTP code is being returned. Note that I’ve simply changed the status code passed to the http_error from 500 to 201 in the previous code, which is why the route is still called “internalerror”. Figure 2 – Created HTTP code. Related Posts - ESP32 Picoweb: Serving JSON content - ESP32 MicroPython: Serving HTML from the file system in Picoweb - ESP32 MicroPython: Changing the HTTP response content-type of Picoweb route - ESP32 MicroPython: HTTP Webserver with Picoweb References [1] Pingback: ESP32 Picoweb: Obtaining the HTTP Method of the request | techtutorialsx
https://techtutorialsx.com/2017/09/26/esp32-picoweb-changing-the-returned-http-code/
CC-MAIN-2017-43
refinedweb
813
54.12
clippy 1.0.0 Clippy — Access system clipboard in Dart (Server & Browser) A library to access the clipboard (copy/paste) for server and browser Install # Add clippy to dependencies/dev_dependencies in in your pubspec.yaml Usage # Server # In the server Clippy supports writing and reading from the clipboard. It uses system tools for this: - On linux uses xsel(Install if needed) - On Mac uses pbcopy/ pbpaste - On windows it embeds a copy/paste tool win-clipboard import 'package:clippy/server.dart' as clippy; main() async { // Write to clipboard await clippy.write(''); // Read from clipboard final clipboard = await clippy.read(); } See example/server Browser # In the browser Clippy supports writing and listening to paste events. import 'package:clippy/browser.dart' as clippy; main() async { // Write a string to clipboard await clippy.write(''); // Write text from an element to clipboard await clippy.write(element); // Write current selection to clipboard await clippy.write(); // Listen to paste event clippy.onPaste.listen((text) => print('OnPaste: $text')); } See example/web Changelog # 0.2.0 # copynow returns a Future<bool> - Better error handling 0.1.0 # - Initial version, works on Linux, Osx, Windows and Browser Use this package as a library 1. Depend on it Add this to your package's pubspec.yaml file: dependencies: clippy: ^1.0.0 2. Install it You can install packages from the command line: with pub: $ pub get Alternatively, your editor might support pub get. Check the docs for your editor to learn more. 3. Import it Now in your Dart code, you can use: import 'package:clippy/browser.dart'; import 'package:clippy/server.dart'; We analyzed this package on Jan 17, 2020, and provided a score, details, and suggestions below. Analysis was completed with status completed using: - Dart: 2.7.0 - pana: 0.13.4 Health issues and suggestions Document public APIs. (-1 points) 7 out of 7 API elements have no dartdoc comment.Providing good documentation for libraries, classes, functions, and other API elements improves code readability and helps developers find and use your API. Fix lib/server.dart. (-1.49 points) Analysis of lib/server.dart reported 3 hints: line 9 col 12: Unnecessary new keyword. line 11 col 12: Unnecessary new keyword. line 13 col 12: Unnecessary new keyword. Fix lib/browser.dart. (-1 points) Analysis of lib/browser.dart reported 2 hints: line 5 col 19: Unnecessary new keyword. line 28 col 20: Unnecessary new keyword. Format lib/src/server.dart. Run dartfmt to format lib/src/server.dart. Maintenance issues and suggestions No valid SDK. (-20 points) The analysis could not detect a valid SDK that can use this package. Package is getting outdated. (-43.84 points) The package was last published 75 weeks ago. The package description is too short. (-9 points) Add more detail to the description field of pubspec.yaml. Use 60 to 180 characters to describe the package, what it does, and its target use case. Maintain an example. None of the files in the package's example/ directory matches known example patterns. Common filename patterns include main.dart, example.dart, and clippy.dart. Packages with multiple examples should provide example/README.md. For more information see the pub package layout conventions.
https://pub.dev/packages/clippy
CC-MAIN-2020-05
refinedweb
531
52.87
The .NET framework and GDI+ introduce a new paradigm for drawing in Windows. It drastically simplifies your code, while still providing you with powerful access to the underlying Win32 routines through a rich set of classes. Most developers will find all they need in the classes provided by the framework, and advanced users can still use the old GDI via P/Invoke. And, possibly the best of all, you can use the same code for drawing in any type of application; for example, WinForms, ASP.NET, or web services. So, without further ado ... System.DrawingNamespace All classes involved in drawing are collected into the System.Drawing namespace. It is in turn divided into a few additional namespaces: System.Drawing.Design for design-time image handling (in, for example, Visual Studio.NET), System.Drawing.Drawing2D for advanced two-dimensional drawing, System.Drawing.Imaging for bitmap and metafile handling, System.Drawing.Printing for drawing to a printer, and System.Drawing.Text for advanced font support. In this article, we will try out some of the basic techniques for drawing to a bitmap image. We will create a console application written in C# that, based on a few parameters, draws an ellipse using different sizes and colors. In this sample, we use it from a command line, but it should be simple and straightforward to reuse this code in, for example, a web application. Note: to get this code to compile, you need to add a reference to System.Drawing.dll. To start off, we create a class called EllipseDrawer. This class could be part of a library of shape-drawing classes. The class has a single method, Draw: public Image Draw(int width, int height, int strokeWidth, Color strokeColor, Color fillColor) { // create the bitmap we will draw to Image image = new Bitmap(width + strokeWidth, height + strokeWidth); // calculate the half of the stroke width for use later float halfStrokeWidth = strokeWidth / 2F; // create a rectangle that bounds the ellipse we want to draw RectangleF ellipseBound = new RectangleF( halfStrokeWidth, halfStrokeWidth, width, height); // create a Graphics object from the bitmap using(Graphics graphics = Graphics.FromImage(image)) { // create a solid color brush using(Brush fillBrush = new SolidBrush(fillColor)) { // fill the ellipse specified by the // rectangle calculated above graphics.FillEllipse(fillBrush, ellipseBound); } // create a pen using(Pen pen = new Pen(strokeColor, strokeWidth)) { // draw the stroke of the ellipse specified by // the rectangle calculated above graphics.DrawEllipse(pen, ellipseBound); } } return image; } Let's dissect this method in detail and begin with its signature. public Image Draw(int width, int height, int strokeWidth, Color strokeColor, Color fillColor) The width and height parameters specify the size of the ellipse that will be drawn. strokeWidth and strokeColor set the width and color for the outline of the shape, and fillColor determines the color to use to fill the interior of the ellipse. A Color is defined by four values: red, green, blue, and an alpha value that sets the opacity. The framework also provides you with a large set of predefined colors (you might recognize them from, for example, HTML or SVG). Also notice that we return an Image. Image is an abstract base class for all types of images. In the .NET framework, we have two types: bitmaps and metafiles. A bitmap is built up by a grid of pixels, while a metafile is a sequence of drawing instructions (a vector image). In this case, we use a bitmap, but by returning an Image, we make our method a bit more generic, in case we would like to change it and instead return a metafile in a future version. Now let's move on to our actual code. Image image = new Bitmap(width + strokeWidth, height + strokeWidth); This creates our image, more specifically a bitmap image. The values we use in the constructor specify the size of the bitmap, measured in pixels. The reason why we're adding the stroke width to our width and height probably deserves some explaining. When GDI+ draws our stroke, it puts half of the stroke's thickness on the outside of the ellipse we specify, and half on the inside. The figure below shows how it's done. The methods for drawing ellipses use a rectangle to define the position and size. So, to ensure that the entire shape fits in our image, we need to add the stroke width to both the width and height of the ellipse. And, to position the ellipse so the entire shape is visible, we need to calculate half of the stroke width and use that for our starting coordinates. float halfStrokeWidth = strokeWidth / 2F; We then create a rectangle that bounds the ellipse. RectangleF ellipseBound = new RectangleF( halfStrokeWidth, halfStrokeWidth, width, height); Now we're getting into the real magic. using(Graphics graphics = Graphics.FromImage(image)) Graphics is the class around which the entire process of drawing revolves. Graphics is the drawing surface for any kind of drawing, whether on a WinForm control, a printer, or a bitmap. One of the great things about this design is that once you have your code for drawing, you can easily switch to drawing on a different target; for example, to add printing to your graphics application. The Graphics class doesn't have any public constructor. Instead, you can create an instance using one of the static methods that starts with " From." In this case, we use FromImage. When using an instance of the Graphics class, it is very important to always release the resources it uses after we're done with it. A good thing is that Graphics implements the IDisposable interface, which means we can use the using statement to automatically dispose of the object when it's no longer needed. The same goes for other classes in the System.Drawing namespace, as we will see below. Next, we want to draw the actual ellipse. The Graphics class has two sets of methods, starting with either " Fill" or " Draw." In our case, we're interested in FillEllipse and DrawEllipse. FillEllipse will do just that, fill the area defined by the ellipse with a color (or a more advanced pattern). In the same way, DrawEllipse will draw the stroke of the ellipse. GDI+ uses a "painter's model," which means that each thing you draw will simply be put on top of what's already on the drawing surface. This is a simple and easy to understand model, but it requires us to do some extra thinking about the order in which we do our drawing. As we saw in the figure above, half of the stroke will be on the inside of the ellipse, so to make sure that our fill doesn't cover half of the stroke, we must paint the fill first. Filling is done, just like in non-digital painting, by using a brush. The framework contains an abstract base class, Brush, for the different types of brushes. We just want to paint with a single color, so our best choice is to use a SolidBrush. Other useful brush classes include LinearGradientBrush, for filling with a gradient, or TextureBrush, which can tile an image to fill an area. Haven't you always dreamed of a brush like the one use to paint a chessboard in Santa's Workshop? Brushes also implement IDisposable, so we will use the using statement again. using(Brush fillBrush = new SolidBrush(fillColor)) Next, we finally get to fill our ellipse. graphics.FillEllipse(fillBrush, ellipseBound); The only thing we have left to do is to draw the stroke. To do that, we use a Pen, which is basically a brush with a width. In our case, we use a constructor that takes a color and width, but you can also create it from a brush, which means that you can draw your stroke using a gradient or pattern. By this time, you're probably well familiar with the using statement. using(Pen pen = new Pen(strokeColor, strokeWidth)) And using our pen, we now draw our final piece of art. graphics.DrawEllipse(pen, ellipseBound); Last, but not least, we return the bitmap we have just created. return image; We're now all done with our code. For the full listing of the code, click here. The image below shows an example output from the program using the following command: ellipseDrawer 50 100 20 yellow red If you want to continue exploring GDI+, I would suggest taking a look at the many properties and methods in the Graphics class. For example, you can make the result of our drawing look nicer with anti-aliasing by using the SmoothingMode property. Try out some of the other draw and fill methods. In my next article, I will look into the Graphics.DrawPath() method and the very powerful GraphicsPath class. Also, please email me suggestions for what you want me to cover in future articles, and I'll try to make your wishes come true. Niklas Gustavsson is a molecular biologist that took a extended break and is now working as a system architect/developer with web, Java and NET development. Return to ONDotnet.com
http://archive.oreilly.com/lpt/a/3681
CC-MAIN-2015-35
refinedweb
1,514
62.68
Web API in ASP.NET Web Forms Application Web API in ASP.NET Web Forms Application With the release of ASP.NET MVC 4 one of the exciting features packed in the release was ASP.NET Web API. Join the DZone community and get the full member experience.Join For Free Bugsnag monitors application stability, so you can make data-driven decisions on whether you should be building new features, or fixing bugs. Learn more. for building RESTful services. Microsoft shipped “ASP.NET and Web Tools 2012.2” update sometime in February 2013. This included updated ASP.NET MVC Templates, ASP.NET Web API, SignalR and Friendly URLs. So if you take the latest you get all the latest bits ASP.NET has to offer so far. Rest of this post assumes that you have installed the 2012.2 bits. I assume you know the Web API fundamentals like API Controller, Action and Route configuration as I wont get into discussing those. In this post, we will see how Web API can be created in a Web Form application or project. I will be using Visual Studio Express 2012 for Web as my IDE for this post. Create ASP.NET Web Form Project: To start out with, create a empty “ASP.NET Empty Web Application” or “ASP.NET Web Forms Application”. Both of these templates are Web Forms based. So doesn’t matter which one you choose. For the sake of the demo I will choose ASP.NET Empty Web Application. Fire up a Visual Studio Express 2012 for Web, File > New Project > ASP.NET Empty Web Application and give a name to the project. Fig 1: New Project Dialog Visual Studio will create the project and you should have the infrastructure ready. And here is how the solution will look like: Adding a Web API Controller: Since Web API is kind of API we are building lets bring in some best practice here. It is good to have a folder created with the name “Api” and store all your Web API controllers here. With the latest release of ASP.NET, you can create Controllers in any folder you like. So go ahead and create a new folder and name it API. Here is how the Solution Explorer will look like: Now lets add a Web API Controller. Right click on Api folder, select Add > New Item. You will find a new Item Template called “Web API Controller Class”. Select Web API Controller Class Item. Note that Visual Studio names the class as ValuesController1.cs – you can rename it to ValuesController. Make the changes and click Add. Fig 4: Adding Web API Controller Class Here is the glimpse of the code inside ValuesController class provided by default: Fig 5: Values Controller Class There is one magic Visual Studio does under the hood when you add a Web API Controller class. It adds the following packages which is required for making the Web API work: - Microsoft.AspNet.WebApi - Microsoft.AspNet.WebApi.Client - Microsoft.AspNet.WebApi.Core - Microsoft.AspNet.WebApi.WebHost - Microsoft.Net.Http - Newtonsoft.Json You can see that there is a packages.config file added to the solution and it is used by Nuget package manager to keep track of the packages. The above packages make it possible for Web API to be hosted within the Web Forms application or project. So, is this all to make the Web API run – well not yet. Web API works with the routes and we have not yet configured the routing engine to grab the request and see if it is web api request. Lets do that in next section. Setting up Web API Routes: One thing you will notice in the solution is – it does not contain a Global.asax file. Well that’s why the name of the template “ASP.NET Empty Web Application” its literally empty and you need to add things needed manually. So first lets go ahead and add a Global.asax. Right click on the project, select Add > New Item > Global Application Class. You will get the blank Global.asax i.e. the class will have all the event handler added up but empty. The event handler which is of importance for us is Application_Start. Lets now configure the Http Routing. As a best practice create a new folder called App_Start at the root of the project. Add a new class, call it WebApiConfig.cs and make it a static class. Idea behind naming the folder as App_Start is that, codes in this folder are the first to be called or executed when the app starts. So here is how the solution explorer looks like now: Now add the following code to WebApiConfig.cs. We are creating a static method called Register and it will register the Http routes for the application: public static void Register(HttpConfiguration config) { config.Routes.MapHttpRoute( name: "DefaultApi", routeTemplate: "api/{controller}/{id}", defaults: new { id = RouteParameter.Optional } ); } Listing 1: WebApiConfig Register method What we are doing in Register method is: - Get the HttpConfiguration object - Use the Routes table to configure routes - Map a Http Route to our API by providing a pattern - Default pattern is to look for “api/{controller}/{id} – where {id} is optional Here is the complete code of WebApiConfig.cs file: using System; using System.Collections.Generic; using System.Linq; using System.Web; using System.Web.Http; namespace WebAPIinWebForms.App_Start { public static class WebApiConfig { public static void Register(HttpConfiguration config) { config.Routes.MapHttpRoute( name: "DefaultApi", routeTemplate: "api/{controller}/{id}", defaults: new { id = RouteParameter.Optional } ); } } } Listing 2: WebApiConfig class Now, this is just a definition but where do we actually call it. Remember we added a Global.asax file – we will call the Register method from the Application_Start event handler. Here is the code to do that: protected void Application_Start(object sender, EventArgs e) { WebApiConfig.Register(GlobalConfiguration.Configuration); } Listing 3: Global.asax Application_Start event handler And we have a Web API created in Web Forms project now. Next lets see how to test it out. Testing Web API: At this moment if you build and run the project, browser may show you 403 and that is expected as we do not have an Default.aspx or index.html. Don’t worry as we are interested in testing the API controller. Remember the default patterns we set up for API routes – our Web API is available at a URL “:<port>/api/Values”. So type the URL and hit enter. IE will show you the response as a JSON value. This is because IE always sends the Accept header as application.json and Web API server implementation will respect the content negotiation and send the appropriate data type to the client. Fig 7: Testing API Controller in IE Here is the same response when seen from Firefox: Fig 8: Testing API Controller in Firefox Summary: This was a post intended to show case the fact that Web API are not meant only for ASP.NET MVC project types but it can be hosted even in ASP.NET Web Forms project too. Thanks to the tooling, Visual Studio does all the magic under the hoods when you add a Item template of type “Web API Controller Class”. We saw how easy it is to set up a api controller and test it. Hope this gives you a jump start on creating web API in web forms project. Do let me know if you have any feedback on this article. It will help me better myself. Till next time – Happy Coding. Code with Passion, Decode with Patience. Monitor application stability with Bugsnag to decide if your engineering team should be building new features on your roadmap or fixing bugs to stabilize your application.Try it free. }}
https://dzone.com/articles/web-api-aspnet-web-forms
CC-MAIN-2018-47
refinedweb
1,292
67.76
[Solved] Segfault on QApplication exit I've been debugging a segmentation fault that happens when closing a Qt application composed of several projects. We ended up trying this very simple test: main.cpp: @#include <QApplication> #include <QMainWindow> int main(int argc, char *argv[]) { // Method 1: CLOSES CORRECTLY /QApplication a(argc, argv); QMainWindow w; w.show(); return a.exec();/ // Method 2: CAUSES SEGFAULT QApplication *a = new QApplication(argc, argv); QMainWindow *w = new QMainWindow(); w->show(); return a->exec(); }@ Project file: @ QT += widgets TARGET = test TEMPLATE = app SOURCES += main.cpp@ I realized that the first method (commented code), always closes correctly and without errors while the second method which uses pointers causes a segfault on exit. This seems to happen randomly. This behavior appeared after migrating from Qt 4.6 to Qt 5.0.1, on CentOS 6.4. Anyone would have an idea why the method using pointers does not work correctly anymore? Is this method valid? Is this behavior normal? With your second method, the objects are allocated on the heap, but you never delete them! The memory leak that this causes probably doesn't matter much, as the process is going to exit anyway. Nonetheless it's bad practice to not clean up your heap objects properly. So please try: @main() { QApplication *a = new QApplication(argc, argv); QMainWindow *w = new QMainWindow(); w->show(); int ret = a->exec(); delete w; delete a; return ret; }@ Seems like many examples and tutorials I found don't delete objects on the stack declared, probably, as you said, because it doesn't matter in this case. Manually deleting the objects fixed the problem. Still wondering why we didn't notice this before since this is a C/C++ issue and not a Qt one. Anyway, case solved, and I'll stick to good practices. Thanks a lot! Well, you can assume that QApplication and QMainWindow will also allocate system resources, e.g. from the underlying windowing system. If you never destroy these objects, their destructors never get executed and thus they will never get a chance to do a proper clean up, e.g. of system resources. Here things can go wrong! BTW: If examples don't delete heap objects, then that's probably because the example code wants to illustrate a specific aspect, but is not intended to be used 1:1 in a real software and thus leaves out various "details". Though this is marked as solved, I can't see the reason for the SEGFAULT when no desctructors are called. Can anyone explain please?
https://forum.qt.io/topic/26658/solved-segfault-on-qapplication-exit
CC-MAIN-2018-09
refinedweb
423
66.13
Mario Hewardt is a senior design engineer with Microsoft, and has worked extensively in the Windows system level development area for the last seven years. Mario is the author of Advanced .NET Debugging on which this article is based. Courtesy Addison-Wesley Professional. All rights reserved. Editor's Note. The source code accompanying this article is available at the author's Advanced .NET Debugging website. The Windows operating system is a preemptive and multithreaded operating system. Multithreading refers to the capability to run any number of threads concurrently. If the system is a single processor machine, Windows creates the illusion of concurrent thread execution by allowing each thread to run for a short period of time (known as a "time quantum'). When that time quantum is exhausted, the thread is put to sleep and the processor switches to another thread (known as a "context switch"), and so on. On a multiprocessor machine, two or more threads are capable of running concurrently (one thread per physical processor). By being preemptive, all active threads in the system must be able to yield control of the processor to another thread at any point in time. Given that the operating system can take away control from a thread, developers must take care to always be in a state where control can safely be taken away. If all applications were single threaded, or if all the threads were running in isolation, synchronization would not be a problem. Alas, for efficiency sake, dependent multithreading is the norm today and also the source of a lot of bugs in applications. Dependent multithreading occurs when two or more threads need to work in tandem to complete a task. Code execution for a given task may, for example, be broken up between one or more threads (with or without shared resources) and hence the threads need to "communicate" with each other in regards to the order of thread execution. This communication is referred to as "thread synchronization" and is crucial to any multithreaded application. Thread Synchronization Primitives Internally, the Windows operating system represents a thread in a data structure known as the "thread execution block" (TEB). This data structure contains various attributes such as the thread identifier, last error, local storage, and so on. Listing 1 shows an abbreviated output of the different elements of the TEB data structure. 0:000> dt _TEB ntdll! +0x034 LastErrorValue : Uint4B +0x038 CountOfOwnedCriticalSections : Uint4B … +0xfca RtlExceptionAttached : Pos 9, 1 Bit +0xfca SpareSameTebBits : Pos 10, 6 Bits +0xfcc TxnScopeEnterCallback : Ptr32 Void +0xfd0 TxnScopeExitCallback : Ptr32 Void +0xfd4 TxnScopeContext : Ptr32 Void +0xfd8 LockCount : Uint4B +0xfdc ProcessRundown : Uint4B +0xfe0 LastSwitchTime : Uint8B +0xfe8 TotalSwitchOutTime : Uint8B +0xff0 WaitReasonBitMap : _LARGE_INTEGER All in all, on a Windows Vista machine, the TEB data structure contains right around 98 different elements. Although most of these elements aren't typically used when debugging .NET synchronization problems, it is important to be aware that Windows carries a lot of information about any given thread to accurately schedule execution. Much in the same way that Windows includes a thread data structure to maintain the state of a thread, so does the CLR. The CLR's version of the thread data structure is, not surprisingly, called Thread. The internals of the Thread class is not made public. One very useful command is the threads command, which outputs a summary of all the CLR threads currently in the process as well as individual state for each thread: Although the threads command gives us some insight into the CLR representation of a thread (such as the thread state, CLR thread ID, OS thread ID, etc.), the internal CLR representation is far more extensive. Even though the internal representation is not made public, we can use the Rotor source code to gain some insight into the general structure of the Thread class. The Rotor source files of interest are threads.h and threads.cpp located under the sscli20\clr\src\vm folder. Listing 2 shows a few examples of data members that are part of the Thread class. class Thread { … volatile ThreadState m_State; DWORD m_dwLockCount; DWORD m_ThreadId; LockEntry *m_pHead; LockEntry m_embeddedEntry; … } The m_State member contains the state of the thread (such as alive, aborted, etc.). The m_dwLockCount member indicates how many locks are currently held by the thread. The m_ThreadId member corresponds to the managed thread ID, and the last two members (m_pHead, m_embeddedEntry) correspond to the reader/writer lock state of the thread. If we need to take a closer look at a CLR thread (including the members above), we have to first find a pointer to an instance of a Thread class. This can easily be done by first using the threads command and looking at the ThreadOBJ column, which corresponds to the underlying Thread instance: We can see that the threads command shows that the first thread pointer is located at address 0x003b4528. We can then use the dd command to dump out the contents of the pointer. What if we want to find out the contents of the m_State member? To accomplish this, we have to first figure out the offset of this member in the object's memory layout. A couple of different strategies can be used. The first strategy is to look at the class definition and see if there are any members in close proximity that you already know the value of. If that is the case, you can simply dump out the contents of the object until you find the known member and subsequently find the target member by relative offset. The other strategy is to simply look at all the members in the class definition and find the offset of the target member by simply adding up all the sizes of previous members leading up to the member of interest. Let's use the latter strategy to find the m_State member. Looking at the class definition, we can see that the m_State member is in fact the very first member of the class. It then stands to reason that if we were to dump out the contents of the thread pointer, the very first field should be the state of the thread: < Interestingly enough, the first element (0x79f96af0) doesn't seem to resemble a thread's state. As a matter of fact, if we use the ln (list near) command, we can see the following: We are seeing the virtual function table pointer of the object. Although not terribly interesting from a debugging perspective, it can come in handy to convince ourselves that the pointer we are looking at is in fact a pointer to a valid thread object. Because we can safely ignore this pointer for our current purposes, the next value is 0x00000220. This value looks like it may represent a bitmask of sorts but to interpret this bitmask in the context of a thread state, we must first enumerate the various bits that constitute a thread state. The Thread class contains an enumeration that represents a thread's state called the ThreadState enumeration. This enumeration can yield important clues when debugging synchronization problems. Although the entire enumeration contains close to one hundred fields, some are more important than others when debugging synchronization issues. Table 1 shows the most interesting fields of the ThreadState enumeration. Based on Table 1 and our previous state 0x00000220, we can infer the following: - The thread is a background thread (0x00000200). - The thread is in a state where it can enter a Join (0x00000020). - The thread is a newly initialized thread (0x00000000). THREAD CLASS DISCLOSURE Although it may be useful to see the "internals" of a thread, it is important to realize that there is a good reason why this information is internal and not exposed through the threads command. Much of the information is an implementation detail and Microsoft reserves the right to change it at any time. Taking a dependency on these internal mechanisms is a dangerous prospect and should be avoided at all cost. Secondly, Rotor is a reference implementation and does not guarantee that the internals mimic the CLR source code in detail. Now that we have discussed how the CLR represents a thread internally, it is time to take a look at some of the most common synchronization primitives that the CLR exposes as well as how they are represented in the CLR itself. Events The event is a kernel mode primitive accessible in user mode via an opaque handle. An Event is a synchronization object that can take on one of two states: signaled or nonsignaled. When an event goes from the nonsignaled state to the signaled state (indicating that a particular event has occurred), a thread waiting on that event object is awakened and allowed to continue execution. Event objects are very commonly used to synchronize code flow execution between multiple threads. For example, the native Win32 API ReadFile can read data asynchronously by passing in a pointer to an OVERLAPPED structure. Figure 1 illustrates the flow of events. The ReadFile returns to the caller immediately and processes the read operation in the background. The caller is then free to do other work. After the caller is ready for the results of the read operation, it simply waits (using the WaitForSingleObject API) for the state of the event to become signaled. When the background read operations succeeds, the event is set to a signaled state, thereby waking up the calling thread, and allows execution to continue. There are two forms of event objects: manual reset and auto reset. The key difference between the two is what happens when the event is signaled. In the case of a manual reset event, the event object remains in the signaled state until explicitly reset, thereby allowing any number of threads waiting for the event object to be released. In contrast, the auto reset event only allows one waiting thread to be released before being automatically reset to the nonsignaled state. If there are no threads waiting, the event remains in a signaled state until the first thread tries to wait for the event. In the .NET framework, the manual reset event is exposed in the System.Threading.ManualResetEvent class and the auto reset event is exposed in the System.Threading.AutoResetEvent class. To take a closer look at an instance of either of the two classes of events, we can use the do command as shown in the following: Because the Event classes in the System.Threading namespace are simply wrappers over the underlying Windows kernel objects, the waitHandle member of the classes can be used to gain more insight into the underlying kernel mode object. We can use the handle debugger command with the waitHandle value: Here, we can see that the waitHandle with value 204 corresponds to an auto reset event that is currently in a waiting state.
http://www.drdobbs.com/architecture-and-design/advanced-net-debugging-synchronization/222002723
CC-MAIN-2015-48
refinedweb
1,800
59.23
Topic: Java Selection Sort Not finding your answer? Try searching the web for Java Selection Sort Answers to Common Questions How to Sort a Linked List in Java A linked list is one of the primary types of data structures in the programming world. It's an arrangement of nodes which contains both data and references pointing to the next node. To sort a linked list in Java, there's a linked list clas... Read More » Source:... How to Update the Select Box With JavaScript JavaScript allows you to set up a "select" box, which is the drop-down form element used to collect input from users. You can update and add a new value for users. This type of JavaScript feature is typically used when you add values to a s... Read More » Source: How to Sort the Printer Driver Selection List in Alphabetical Ord... If you want to sort the printer driver list in Windows XP, there are several ways you can do so. You can sort the list by type, size or name. If you want to view the list in alphabetical order, you must sort the drivers by name. This is use... Read More » Source:... More Common Questions Answers to Other Common Questions A sorted set is exactly what it sounds like: a Set implementation in which all elements are stored in a sorted order. To quote from the Java API, an object which implements the SortedSet interface is A Set that further provides a total orde... Read More » Source: Java is a programming language with a large library of classes that provide functionality for common operations such as renaming files. The key class for rename a file is the File class and its "RenameTo" method. Read More » Source: Javascript is a language used to automate some actions on a website. The language is run on the user's computer, so the programmer can change the "action" property in a form located in a web page. The action property controls the form's sub... Read More » Source:... Selection sort is one of the easiest method of sorting elements of array. In selection sort ,the begaining of the sorting with the comapairing first element of the array with other elements of array by this process smallest value in given a... Read More » Source: You sort a selected range of cells. Read More » Source:... string sorting means sorting the string array in a specific order an example is given below import java.io.*; public class stringArray {public static void main(String args[])throws IOException {String A[]=new String[10];int i=0,j=0;String t... Read More » Source: Java has a very efficient built in implementation of quick sort. You can use it on any array of primitives or Comparable Objects by invoking Arrays.sort(<array>) See related link. Read More » Source:...
http://www.ask.com/questions-about/Java-Selection-Sort
crawl-003
refinedweb
475
73.88
If you've been following very closely, the only linter errors left should read something like this: "'params' is missing in props validation". This opens the door to a whole area of React we haven't touched yet, but it's important because it makes your code easier to understand and helps reduce bugs – and, as you've just seen, you get linting errors if you don't do it! When running in development mode (i.e., everything we've done so far), React will automatically check all props you set on components to make sure they have the right data type. For example, if you say a component has a Message prop that is a string and required, React will complain if it gets set using a number or doesn't get set at all. For performance reasons this check only happens while you're developing your code – as soon as you switch to production, this goes away. ESLint is warning us because we don't tell React what data types our props should be. This is easily done using a set of predefined options such as React.PropTypes.string, React.PropTypes.number, and React.PropTypes.func, plus a catch-all "anything that can be rendered, including arrays of things that can be rendered": React.PropTypes.node. ESLint is telling us that the App component uses this.props.children without specifying what data type that is. That's easily fixed: add this directly after the end of the App class in App.js: src/pages/App.js App.propTypes = { children: React.PropTypes.node, }; Note: when I say "directly after the end" I mean after the closing brace for the class, but before the export default App line, like this: src/pages/App.js import React from 'react'; class App extends React.Component { render() { return ( <div> <h1>Unofficial GitHub Browser v0.1</h1> {this.props.children} </div> ); } } App.propTypes = { children: React.PropTypes.node, }; export default App; If you want to see what happens when React detects the wrong prop type being used, try using React.PropTypes.string in the snippet above. As you'll see, your page still loads fine, but an error message should appear in your browser's debug console. We need to add two more propTypes declarations in order to make our code get cleanly through linting. Both are the same, and say that the component can expect a params property that is an object. Add this directly after the end of the Detail class: src/pages/Detail.js Detail.propTypes = { params: React.PropTypes.object, }; And add this directly after the end of the User class: src/pages/User.js User.propTypes = { params: React.PropTypes.object, }; That's it! If you run the command npm run lint now you should see no more errors.!
http://www.hackingwithreact.com/read/1/41/how-to-add-react-component-prop-validation-in-minutes
CC-MAIN-2017-22
refinedweb
467
66.13
Online profiel overview screen. More... #include <online_profile_achievements.hpp> Online profiel overview screen. Callback before widgets are added. Clears all widgets. Reimplemented from OnlineProfileBase. True if a's goal progression is <= to b's. If they are equal, goalSort(a,b) != goalSort(b,a), ù as the bool can't handle the 3 values required to avoid this implement callback from parent class GUIEngine::Screen Called when entering this menu (after widgets have been added). Reimplemented from OnlineProfileBase. implement callback from parent class GUIEngine::Screen Callback when the xml file was loaded. Reimplemented from OnlineProfileBase. Called every frame. It will check if results from an achievement request have been received, and if so, display them. Reimplemented from GUIEngine::Screen. Which column to use for sorting.
https://doxygen.supertuxkart.net/classBaseOnlineProfileAchievements.html
CC-MAIN-2020-16
refinedweb
124
63.86
Cry about... .NET / C# Troubleshooting The name 'NNNN' does not exist in the current context (C#) Symptom: When compiling a C# application the compiler generates the following error: The name 'NNNN' does not exist in the current context where ' NNNN' is the name of a variable. If you are using VB.Net then the error message is slightly different (but means the same thing): 'NNNN' is not declared. It may be inaccessible due to its protection level. in this case please refer to the VB.Net version of this article "NNNN is not declared".: int number; numbr = 1; Here the identifier is defined as ' number' but used in the code as ' numbr'. The solution is to correct the spelling. - If the name is referring to an identifier then it may be that the reference simply needs to be qualified. For example: using System.Web.HttpContext; class Example string ExampleFunc() { return Application["name"]; . . [Error] Name 'Application' is not declared. Try replacing " Application" with " Current.Application". - The most common cause is that the namespace that defines the name is missing. Identify and import the required namespace - the table below should help. For example, with the error: The name 'Directory' does not exist in the current context The missing namespace is System.IO, so the solution is to add: using: The name 'HttpRuntime' does not exist in the current context or The name 'HttpContext' does not exist in the current context even though the line "using System.Web" is included. Remedy Add the reference to the project: - Expand the "References" shown in the solution explorer for the project. - Check that the required namespace is listed under "References". The required reference is listed in the table above (given under "Possible Cause 1"). If it is not listed then right click on "References" and select "Add reference...". In the "Add Reference" dialog box that will then appear, the reference you require will probably be on the ".NET" tab. For example, the solution to "HttpRuntime (or HttpContext) does not exist" (even though "Imports System.Web" is included in the file) is to ensure that "System.Web" is listed as one of the project References. These notes are believed to be correct for C# for the .NET 4, .NET 3, .NET 2 and .NET 1.1 frameworks, and may apply to other versions as well. For the corresponding VB.NET version of this article please see "NNNN is not declared". About the author: Brian Cryer is a dedicated software developer and webmaster. For his day job he develops websites and desktop applications as well as providing IT services. He moonlights as a technical author and consultant.
http://www.cryer.co.uk/brian/mswinswdev/ms_csharp_name_does_not_exist_in_current_context.htm
CC-MAIN-2018-09
refinedweb
439
58.69
Last call issues list for xpath20 (up to message 2004Mar/0246). This document identifies the status of Last Call issues on XML Path Language (XPath) 2.0 as of April 4, 2005. The XML Path Language (XPath) 2.0 has been defined jointly by the XML Query Working Group and the XSL Working Group (both part of the XML Activity). The April 4,Path]” 152 issue(s). 0 raised (0 substantive), 0 proposed, 152 decided, 0 announced and 0 acknowledged. Section 2.4.4.1 Matching a SequenceType and a Value Editorial Please replace "value" with "sequence of items". Editorial. Not on our list. MK believes it done. [Liam: done at Florida f2f, 2004-01-23 People note MikeK's text is correct, although redundant if we adopt Rys'. So, people approve MikeK's amendment. And, Srinivas will reply to this public comment. ] This DUPLICATE of qt-2004Feb1011-01 SECTION RESOLUTION: "qt-2004Feb1032-01 ORA-XP-395-E: Use of the word 'type'" "Changes in Type terminology, Don C" and an IR1 comment [IR1] MHK-XP-002 (XPath/2004Nov/0198): are all closed. Don's recent changes to the document have addressed these concerns. SECTION 2.6: Optional Features The spec defines "Static Typing Feature" as "XPath 2.0 defines an optional feature called the Static Typing Feature". There is no definition. Regards, Mark Scardina Oracle Corporation Done. SECTION > f) qt-2004Feb1026-01 ORA-XP-390-Q: Need for an error-free Static Analysis > RESOLVED by editorial action This comment builds on one aspect of David Carlisle's XSLT 2.0 comment David reported: <dc> Less-than and greater-than comparisons between strings have changed since XPath 1.0 <xsl:variable <xsl:variable <xsl:for-each which is checking that three "numbers" obtained from the source files satisfy a constraint that one lies between the other two. Is it really necessary for this to break in BC mode? Is it not possible for the mapping of <= to the underlying F&O operators is changed in BC mode to more closely match the behaviour in 1.0? While this is annoying it is actually less trouble to fix than the previous error, especially in this case, where the node sets such as @f in the expression really are only going to return one node so I would just need to add a couple of instances of number() (I hope:-) However if tehre are cases where the implicit existential quantification are used, it will be tricky for an end user to get right (easier for the system, I would have thought). </dc> First, an observation which I have made before but which may have been lost: Section 3.5.2 currently says: If XPath 1.0 compatibility mode is true, and at least one of the atomic values has a numeric type, then both atomic values are cast to to the type xs:double. It should say: If XPath 1.0 compatibility mode is true, and one of the atomic values has a numeric type and the other does not, then the value that does not have a numeric type is converted to a double using the fn:number function. (There are two changes here. Firstly, a value that is numeric is not changed, so a decimal comparison remains a decimal comparison. Secondly, the conversion is done using the number function rather than casting, so that "abc"<3 gives false rather than an error.) Second, a proposal to address David's concern about the compatibility problem. I suggest that in the case where both operands of <, >, <=, or >= are untypedAtomic, we should in BCM replicate the 1.0 behavior, but with a strong encouragement to implementors to issue a warning. Specifically: change rule 2 of 3.5.2 as follows. (The rules also need to be arranged so rule 2b takes precedence over the current rule 1): 2. If backwards compatibility mode is true, then: 2a. If one of the atomic values has a numeric type, and the other does not, then the value that does not have a numeric type is converted to a double using the fn:number function. 2b. If both of the atomic values have the type xdt:untypedAtomic, and the operator is one of <, >, <=, or >=, then both of the atomic values are converted to doubles using the fn:number function, and the processor should output a warning indicating that the comparison would be performed as a string comparison if backwards compatibility mode were false. The format and destination of this warning is implementation-defined. The warning may be output either during the analysis phase or during the evaluation phase. (Note: XPath 1.0 would attempt a numeric comparison even if one of the arguments was a string. So there is still a backwards incompatibility. However, it is far less likely to arise in practice.) I've made the warning a "should" rather than a "must" because there are environments where there is no way of communicating with the stylesheet author, and in any case we can't legislate against it being sent to /dev/null. Michael Kay The XSL and XQuery working groups, meeting jointly, today looked at comments concerning XPath backwards compatibility, including from Martin Duerst, and the XPath-related parts of from David Carlisle. The WGs agreed to accept a change proposal which removes most of the incompatibilities listed in Appendix H.1: that is, it takes XPath 2.0 running in backwards compatibility mode much closer to XPath 1.0 (and by implication, further from XPath 2.0 without BCM). If you have access to member-only areas, the details are at: The main effects of the change can be summarised as: * comparing anything to a singleton boolean works by converting the other operand to a boolean * the comparisons <, <=, >, and >= involving sequences containing any mixture of strings, untypedAtomic values, and numbers are done by converting the items in both operands to xs:double * all arithmetic is double arithmetic The remaining incompatibilities that we are aware of are: * the construct A < B < C is now a syntax error; it must be rewritten as (A < B) < C to achieve the same effect as XPath 1.0. * Certain strings such as "+INF", "1e5", and "+2" which converted to NaN in XPath 1.0 now convert to values other than NaN. So for example ("+2" > "-2") was false, and is now true. The proposal did not address any residual incompatibilities in the function library. I trust that these changes are acceptable. Michael Kay for the XSL and XQuery WGs. Sections 2.5.2 and 2.5.3 of the XPath book talk about "dynamic errors", but what they say is equally applicable to type errors raised during the evaluation phase. The examples make this clear: consider "For example, if a function parameter is never used in the body of the function, an implementation may choose whether to evaluate the expression bound to that parameter in a function call." For this example to be correct, the section must apply to all run-time errors, not only to so-called "dynamic errors". (The problem arises because of poor choice of terminology. We tend to imagine that all run-time errors are dynamic errors, but they are not.) While we are on the subject, here is a request for clarification. The expression concat("Title:", //title) raises a type error if the document contains more than one <title> element. Section 2.5.3 says: "an implementation is not required to search for data whose only possible effect on the result would be to raise an error" Assuming that section 2.5.3 applies to type errors as well as to dynamic errors, does this mean that in the above expression, the implementation can output the value of the first <title> element in the document, and avoid searching for any others? If so, we have reintroduced the first-item semantics of XPath 1.0 (and the corresponding efficiency) by the back door, and we should make this explicit, at least by including an example. Michael Kay CLOSED, RESOLVED (overtaken). [My apologies that these comments are coming in after the end of the Last Call comment period.] Section 2.2.3.2 The first paragraph following the definition of "the dynamic evaluation phase" states, "If the Static Typing Feature is not in effect, an implementation is allowed to raise type-related warnings during the static analysis phase, but it must proceed with the dynamic evaluation phase despite these warnings." However, in Formal Semantics Section 2.4.1, the second paragraph following the second numbered list states that "Dynamically typed implementations are required to find and report type errors during evaluation, but are permitted to report them during static analysis." XPath explicitly prohibits dynamically typed implementations from raising type errors during static analysis, while Formal Semantics explicitly permits it. The two specifications need to be made consistent. Thanks, Henry [Speaking on behalf of reviewers from IBM.] ------------------------------------------------------------------ Henry Zongaro Xalan development IBM SWS Toronto Lab T/L 969-6044; Phone +1 905 413-6044 mailto:zongaro@ca.ibm.com > a) qt-2004Feb0981-01 [XPath] IBM-XP-112: > May type errors be raised statically without Static Typing? The text has been changed in the meantime to say "may proceed" instead of "must proceed". Mike Kay: the language books permits a dynamic implementation to raise warnings & then stop, whereas FS gives it permission to raise errors. Don: we should not give anyone permission to raise warnings since we state that those are implementation dependent and can be raised for any reason. PaulC: should we change "warnings" to "errors" here (2.2.3.2 second paragraph, "If the Static Typing Feature is not in effect, an implementation is allowed to raise warnings...") Mike Kay: 2.3.1 has the text we need, so we could delete the earlier paragraph, as it's said elsewhere. Or if we're going to say it twice, copy the paragraph identically. Don: I've heard it suggested that the sentence we tweaked yesterday be copied into 2.2.3.2. I think that would be not harmful. Don will copy the sentence, and consider removing it from 2.3.1 if that makes sense. Query Lang [2.1.1, C.1] Default function namespace "[Definition: Default function namespace. This is a namespace URI. This namespace URI is used for any unprefixed QName appearing as the function name in a function call. The initial default function namespace may be provided by the external environmentor by a declaration in the Prolog of a module.]" But the table in appendix C.1 says that the default function namespace is fn. By 2.1.1, the spec does not make clear that the default function namespace is "" and appears to license implementations to not have a default function namespace at all, or have it bound to something else by default. For portability and overall simplicity, the default function namespace in a main module should simply be set. Solution: Replace the definition in 2.1.1 quoted above to: "The initial default function namespace is set to '' but may be overridden by a declaration in the Prolog of a module." RESOLUTION: "qt-2004Feb0832-01 [QT] CER-12 Default function namespace" is closed. moot. Offending sentence is no longer in the working draft. Query Lang [Appendix B] xs:string/xs:anyURI Given the lack of a formal derivation relation between xs:string and xs:anyURI, there is a serious usability issue for any function (such as the fn:doc function) that expects a URI as an argument. If the argument is declared as xs:string, users will get type errors if they pass data that happens to be xs:anyURI. Conversely, if the argument is declared as xs:anyURI, users will get type errors if they pass string literals. Either situation is likely to occur in practice. We therefore request that xs:anyURI and xs:string be subject to special promotion rules, such as those applying to xs:float and xs:double to avoid this problem. Specifically: A function that expects a parameter $p of type xs:string can be invoked with a value of type xs:anyURI and a function that expects a parameter $p of type xs:anyURI can be invoked with a value of type xs:string. RESOLUTION: "qt-2004Feb0825-01 [QT] CER-06 xs:string/xs:anyURI" is closed. Overtaken by events. SECTION 3.2: Path expressions It says that "//" at the beginning of a path expression is an abbreviation for fn:root(self::node()) treat as document-node()/ descendent-or-self::node(). As noted, this will cause an exception if the root is not a document node. This seems arbitrary. Why not permit // when the root of a node is just an element node, for example? - Steve B. Karun: Oracle withdraws the comment. No changes needed. SECTION. RESOLUTION: "qt-2004Feb0690-01 ORA-XQ-243-C: Need to clarify: optimization on XQuery expression should not raise new errors" is closed. Overtaken by events. See agenda item J7. SECTION 2.4.4.3: Matching an Element Test and an element Node Item 2)b) says "type-matches(TypeName, AT) is true, where AT is the type of the given element node. However, if the given element node has the nilled property, then this rule is satisfied only if TypeName is followed by the keyword nillable." This paragraph is confusing. I can come up with two different interpretations of it. There are four cases to consider, in a two-by-two matrix. One axis of the matrix is whether AT has xsi:nil='true' or not. The other axis is whether nillable is specified or not. One interpretation, which I think is the most literal, is to regard the sentence beginning "However" as an additional requirement if At has xsi:nil='true'. That is, to pass the test, the element must satisfy type-matches (TypeName, AT), and, nillable must be specified. This produces the following AT has xsi:nil='true' nillable specified satisfied AT has xsi:nil='true' nillable not specified not satisfied AT lacks xsi:nil='true' nillable specified not satisfied AT lacks xsi:nil='true' nillable not specified not satisfied The other interpretation I would express using the following language: b. type-matches(TypeName, ATT) is true, where AT is obtained from the type AT of the given element node by overriding the nillability of AT as follows: ATT is nillable if and only if the keyword nillable is specified. The two examples at the end of this section support the latter interpretation. - Steve B. RESOLUTION: qt-2004Feb0666-01 is closed. Overtaken by decisions on sequence type. SECTION. Don: It's important that no implementation be required to raise an error DECIDED. Reject qt-2004Feb0590-01 with no change to our documents. NOTE: This was decided after 12:30 EDT. Push back if you had to drop off the call. Section 3.12.5 Constructor Functions Technical We do not see a reason to disallow constructor functions for types that are not associated with a targetnamespace and would like to have them treated the same as any other constructor functions. Obviously, if the default function namespace is set to the F&O namespace, you would not be able to access them, but if I undeclare the default function namespace, I should have access to them. | J5. qt-2004Feb0530-01 [XQuery] MS-XQ-LC1-122 |> Recommendation: accept comment | | A-RESHO-10 on MRYs to write a more complete proposal of his solution | to qt-2004Feb0530-01 (MS-XQ-LC1-122), including the rewrite of | 4.14 and of the contruction section | DONE. See: | MR summarizes proposal. At a high level: MR would like to be able to have constructor functions for types declared in schemas that have no namespace. Some discussion of how this effects user-defined functions in no namespace. In short: they're still not allowed, it just opens up the possibility of constructor functions for types that have no namespace. DC expresses some reservations about functions in no namespace MR asserts that this is related to user-defined functions and does not apply to the case of constructor functions for user-defined types. Discussion wanders into the area of declaring user-defined functions in no namespace and the problems associated with doing so. Some discussion of query-related static compilation semantics in the presence of modules with functions declared in no namespace. AE: Where are we heading on this? We've got some unease and we've revisited some of the discussion. Proposal: ACCEPTED. Resolves: qt-2004Feb0530-01 Section? MK believes this done, and confirms that the current (July) spec incorporates the change. Done. Section 3.12.5 Constructor Functions Technical The signature should request an atomic value and then have the standard function invocation semantics perform atomization. Thus, replace T($x as item) as T with T($x as xdt:anyAtomicType) as T and fix the following sentence. MK believes this done, and confirms that the current (July) spec incorporates the change. Done. Section. RESOLUTION: qt-2004Feb0488-01: [QXquery] MS-XQ-LC1-080 is resolved by proposal adopted 2 weeks ago on effective boolean value. Section 3.5.2 General Comparisons Technical Casting from untyped to numeric should be to double only if other type is not decimal and do decimal otherwise. Reason: double compare is non-precise. > > h) qt-2004Feb0486-01 [XQuery] MS-XQ-LC1-078 > strong tendency towards status quo no action, closed. DECIDED - that no changes are required to the documents to resolve qt-2004Feb0486-01 RESOLVED with no changes to the documents. Section). MikeK: most useful feature of XSLT is grouping, and grouping essentially relies on eq (MS-XQ-LC1-076) CLOSED and REJECTED Section 3.3.1 Constructing Sequences Technical We should not allow heterogeneous sequences of nodes and atomic values. This adds lots of complexity and inefficiency with very little user value. MikeK: I use heterogeneous sequences and constructions all the time. E.g. as in (@A,10)[1] to get the default value for an attribute. (MS-XQ-LC1-075) CLOSED and REJECTED Section 3.2.2 Predicates Technical The current dispatch rules for predicates is problematic since one often has to defer to runtime whether an index or an effective Boolean value is calculated, even if one does static type inferencing, and since float/double imprecision can lead to unexpected/wrong results. Instead of doing position and fn:boolean() do the following for E[$x]: if $x instance of xs:decimal=> do position as described (note no float/double due to precision issues), if $x instance of node()* then fn:boolean($x), if instance of xs:boolean then $x, otherwise type error (may also add xs:string as an option). >. XQuery: serious limitation Casting is not permitted from xs:Qname to xs:string. This is a very serious limitation. This implies that we cannot create an attribute node whose value is of type Qname. XQuery should allow this operation. yes, RESOLVED and CLOSED by the approval of the serialization proposal (the "triple proposal") in our previous meeting (cf.) XQuery: request for simplification and symmetry Sequence type production is supposed to denote a type. However, in case of processing instructions it mentions the content of the PI. The text mentions in a couple of places that a type represented by a sequence type production helps filtering of items by their type. This is clearly incorrect for Pis since in this case it filters by content. Two questions: (a) what does this extra special case buys to the language? (b) how will this type/content be mapped to a Formal Semantics type ? (BEA_017) CLOSED as OVERTAKEN by events (no actions needed) Rationale: a function might wish to discriminate based on the name/target In the syntax, "$" VarName turns up in various places. This should be replaced by a non-terminal, e.g. VarRef, which would be defined as: VarRef ::= "$" VarName Also, the syntax seems to allow a space after the $, but none of the examples have a space after the $. The above rule would allow to forbid whitespace with a /* ws: explicit */. There seems to be no need to allow whitespace after the $, it will only confuse users. Regards, Martin. duplicate of issue 646 (rejected at meeting 191) CLOSED (as dup, cf. w3c-query-editors/2004Aug/02138 ), Scott will reply to this comment in the public. The handling of errors in cases such as 'and', 'or', 'some',... is really dangerous, because it is highly unpredictable, not only between different implementations but also in a single implementation. I'm pretty sure that the speed advantages can be obtained with other methods that don't introduce that much unpredictability. Regards, Martin. Jonathan: I think we carefully considered other designs and spent a lot of time on this... the use of indexes is important... PaulC: The objective is to produce a highly optimiseable language, at the expense of predictability of errors. DECISION: qt-2004Feb0391-01 closed with no changes Jonathan to respond. 2.2.5 says: "Enforcement of these consistency Constraints is beyond the scope of this specification." Who/what enforces these constraints? In case they are not enforced, what are they there for? Regards, Martin. comment says: "who enforces these [XPath] constraints?" Don: these constraints are not enforced. This book applies only if these constraints are true. Mike Kay: the constraints are a contract between the XPath implementation and its user. MSM: this is a useful editoral comment. There are two sorts of contraints -- things the processer must check, and things that must be true for the spec to apply... for this reader, he didn't know which it was. It's not hard for the editor to make it clearer. Don: "This spec does not define the result of an expression under any condition in which one of more of these constraints is not satisfied." is now in the spec and seems clear. (end of 1st para in XPath 2.2.5) DECISION: qt-2004Feb0386-01 closed with no changes MSM to reply. Dear XML Query WG and XSL WG, Below please find the I18N WGs comments on your last call document "XML Path Language (XPath) 2.0" (). Please note the following: - These comments have not yet been approved by the I18N WG. Please treat them as personal comments (unless and) until they are approved by the I18N WG. - Please address all replies to there comments to the I18N IG mailing list (w3c-i18n-ig@w3.org), not just to me. - The comments are numbered in square brackets [nn]. - Because XQuery in big parts is identical to XPath, many of these comments also apply to XQuery. We are confident that you can figure this out yourselves. [1] URIs: Throughout this and other specs, URIs have to be changed to IRIs. [2] Current date and time: It should be made explicit that this has to include a timezone. [3] Implicit time zone: This has to be removed. Using implicit conversions between timezoned and non-timezoned dates and times is way too prone to all kinds of subtle and not so subtle bugs. [4] 2.2.3.1 operation tree normalization: There are many different normalizations in this series of specifications. These should be clearly and carefully separated. [5] 3.1.1: The Note says that the characters used to delimit the literal must be different from the characters that are used to delimit the attribute. This is not strictly true, " or so can also be used. [6] 3.1.5: conversion rules: to what extent can strings, elements with string content, and elements allowing mixed content but actually only containing a string be mixed? This is important for some I18N applications. [7] 3.2.1.2: How to test for an element that only contains text (independent of type)? This is important for some I18N applications. [8] 3.5.1: What about elements that have more elaborate types (e.g. mixed), but that still don't contain anything more than a string? This is important for some I18N applications. [9] 3.10.2: How to cast between complex types? [10] References: The reference to ISO/IEC 10646 should be updated to the newest version. Regards, Martin. RESOLUTION: qt-2004Feb0389-01 is closed. We reject this comment. There are anomalies, but they are outweighed requirement to have deterministic operations. There are functions to allow you avoid the problems. The less namespaces, the easier to use. For example, I don't see any need to have the xdt namespace; these few types, if they are needed, should be added to XML Schema, and to that namespace. Regards, Martin. CLOSED, RESOLVED (responded) [Speaking on behalf of reviewers from IBM, not just personally.] Section 2.1.2 The definition of the "Current date and time" component states, "If invoked multiple times during the execution of a query or transformation, [fn:current-date, fn:current-time, and fn:current-dateTime ] always return the same result." XPath should only impose this requirement for a single expression. It is up to the host language to impose this sort of requirement across multiple expressions (like a transformation in XSLT). In addition, section C.2 indicates that the scope of "Current date and time" is "global", which "indicates that the value of the component remains constant throughout the XPath expression." That contradicts the statement quoted in 2.1.2. 2.1.2 should be changed to be consistent with C.2 Thanks, Henry ------------------------------------------------------------------ Henry Zongaro Xalan development IBM SWS Toronto Lab T/L 969-6044; Phone +1 905 413-6044 mailto:zongaro@ca.ibm.com > d) qt-2004Feb0372-01 [XPath] IBM-XP-106: Value of current date and time > across XPath expressions PaulC: the problem is that we have no concept of what an atomic unit of work might be. E.g. if you have two different xpath expressions that execute 10 hours apart in the same XSLT process they might get times that are hours apart, but if XSLT views its whole transformation as a single unit of work as a it might want the time to be the same Mike Kay: absolutely. He's not disagreeing, he's saying it's in the wrong document. It should be in XSLT, not Xpath. Don: in 2.1.2 we can change "query or transformation" to "expression" for XPath and leave it up to the host language to say something further, and in the XQuery version it should say "query", and in XSLT it should say "transformation". WG looks at the occurrences of "transformation" -- In Xpath book - current date/time, document order, definition of stable In the same places in F&O and also in fn:error We'd like for F&O and XPath not to use query or transformation (except in F&O examples). Editors to make this so. ACTION A-223REDWOOD-17: Mike Kay to ensure that XSLT handles time being constant over an entire transformation correctly and aligned with F&O and XPath. DECISION: qt-2004Feb0372-01 closed [Speaking on behalf of reviewers from IBM, not just personally.] Section 2.1.1 The definition of Statically known collections states, "If the argument to fn:collection is not a string literal that is present in statically-known collections, then the static type of fn:collection is node()?." The static type in this case should be "node()*" rather than "node()?". Thanks, Henry ------------------------------------------------------------------ Henry Zongaro Xalan development IBM SWS Toronto Lab T/L 969-6044; Phone +1 905 413-6044 mailto:zongaro@ca.ibm.com already corrected. Section See [449]: Don to fix. e.g. you can't define don:element() and make "don" your default prefix, and then call element() A-SJ04-31: Don to fix. Section 1 The first sentence of this section states, "The primary purpose of XPath is to address the nodes of [XML 1.0] trees." This should defer to Data Model on what levels of XML are supported. Otherwise, it should include XML 1.1 in some way. Thanks, Henry ------------------------------------------------------------------ Henry Zongaro Xalan development IBM SWS Toronto Lab T/L 969-6044; Phone +1 905 413-6044 mailto:zongaro@ca.ibm.com Already fixed in the document. XQuery 1.0: An XML Query Language W3C Working Draft 12 November 2003 A.2.2 Lexical Rules This section should be deleted, or at least made non-normative. I have made this point several times before: I've received a few replies, but never a satisfactory justification. This time, I'll presumably at least get an official WG response. Here's a summary of my objections to this section: (1) It's error-prone. (2) It's poorly defined. There's only a vague description of the automaton. There's no definition of: --- its possible configurations; --- how its configuration is changed by each different kind of transition; --- what its initial and accepting configurations are. Moreover, there's no description of how the automaton ascertains which pattern (of the currently legal patterns) matches the current input. (Note that most automata don't have to deal with this, because their "pattern" vocabulary is the same as their input vocabulary.) (3) It favours a particular implementation strategy, making conformance more difficult for anyone choosing to use a different strategy. All of which could be improved or excused if it were actually necessary, but: (4) It isn't necessary. It's redundant, given the rest of Appendix A. Or, if it actually *does* express a requirement not expressed elsewhere, then either it's a mistake, or it *should* be expressed explicitly elsewhere. And if you don't understand the implications of A.2.2 enough to *know* whether it expresses unique requirements, then that lack of knowledge alone should tell you that you can't risk making it normative. -Michael Dyck >: > ADOPTED. ISSUES RESOLVED qt-2004Feb0348-01, qt-2004Jan0396-01. Scott to reply to the commenters. X Covered (accepted) by the solution to qt-2004Feb0853-01 (adopted at teleconference 193). RESOLVED, CLOSED (moot), this had been already done XQuery 1.0: An XML Query Language W3C Working Draft 12 November 2003 Here are some comments from that did not receive a response from the WG. -------------------- [3] ExprComment This seems like a poor name for the symbol, given that it's not a kind of expression. (In fact, CompXmlComment would seem to have more of a claim to the name, since it actually is a kind of expression.) Why not just "Comment"? By the way, why did you drop single-line comments (# to line-end)? What is the grammatical/lexical effect of a comment? E.g., is foo(: comment :)bar equivalent to foobar or foo bar ? (And is the effect the same for Pragmas and MUExtensions?) -------- [9] DoubleLiteral Change ("e" | "E") to [eE] Change ("+" | "-") to [+-] [23] HexDigits Change ([0-9] | [a-f] | [A-F]) to [0-9a-fA-F] -------- 3.7.1 Direct Element Constructors "a pair of identical curly brace characters within the content of an element or attribute are interpreted by XQuery as a single curly brace character" [And similarly in A.2.2 ELEMENT_CONTENT.] An alternative would be to use character references (e.g., { and }). -Michael Dyck 1) Adapt rename ExprComment to Comment 2) Defer "Does comment act as whitespace?" to whitespace issues. Approved first third of the email. Rest is deferred: it needs to be broken up, creating new separate issues for this. Scott will do it. W3C XSL WG Technical Comment on XPath 2.0 Last Call Draft Comment on 2.4.2: Referenc to F&O should be added for the fn:data and fn:string functions. The sentences referring to dm:typed-value and dm:string-value should be removed. W3C XSL WG Technical Comment on XPath 2.0 Last Call Draft Comment on 2.4.1: XPath claims that the types are from W3C Schema, BUT says, in item 1., that some are subtypes of xdt:anyAtomicType (which is NOT in W3C Schema). If it uses "subtype" as being different from "derived from" subtype needs to be defined and it pointed out that "subtype"d things need not be "derived from" in W3C Schema. Figure 2 needs to say that the lines are "subtype" relationships and not "derived from" lines. Also it needs to say: - For types not in the xdt:-namespace subtype means derived by restriction for simple types and derived by restriction or extension for complex types. - For types in xdt:namespace it is as defined by F&O/FS/or the language docs. It needs to be verified that "derived" is only used in the W3C Schema sense. The caption for the figure should be changed to be speaking about "subtype relationships" rather than "type hierarchy" to stress that it is different from W3C Schema. Discussed, and finally APPROVED (and CLOSED) by accepting Don's proposal: In all of 2.4.1, substitute "subtype" with "derived from". Don't change the labels in the diagram (cf. ) W. The text has gone already. W3C XSL WG Technical Comment on XPath 2.0 Last Call Draft Comment on 2.2.3.2, 1st paragraph: The text is not a definition of "dynamic evaluation phase"; it says WHEN it occurs, not WHAT it is... Substitute this with "The dynamic evaluation phase is the phase during which the value of the expression is computed." I'm quite frightened by the long list of incompatibilities (even despite a special compatibility switch) between XPath 1.0 and XPath 2.0. Each of these items seems to be small, but overall, this has a large potential for confusion, subtle errors, and so on. Not a big problem for a spec, but a potential nightmare for deployment. My guess is that to avoid this, implementations will probably implement both XPath 1.0 and XPath 2.0. If that's the case, it would be easier to make the compatibility switch switch all features, rather than just part of them. Bringing XPath 1.0 and 2.0 closer together would be even better. Regards, Martin.. X > ACTION A-221-004 on Don to respond to non-member submitters of comments > whose resolutions are recorded in > CLOSED, RESOLVED (responded) (IBM-XQ-014) Appendix A: The definitions of QName and NCName contain references to Namespaces 1.0. These should be replaced by a statement that it is implementation-defined whether QName and NCName are defined by Namespaces 1.0 or by Namespaces 1.1. --Don Chamberlin tackled with message xsl-query/2004Nov/0121 (IBM-XQ-007) Section 3.2 (Path Expressions): The definition of a path expression should be revised to remove the restriction that the expression on the right side of "/" must return a sequence of nodes. The restriction should be retained for the expression on the left side of "/". In effect, this would permit the last step in a path to return one or more atomic values. This feature has recently been requested by Sarah Wilkin () who proposes the following rule: When evaluating E1/E2, if each evaluation of E2 returns a sequence of nodes, they are combined in document order, removing duplicates; if each evaluation of E2 returns a sequence of atomic values, the sequences are concatenated in the order generated; otherwise a type error is raised. Like all type errors, this error can be raised either statically or dynamically, depending on the implementation. This rule provides well-defined static and dynamic semantics for path expressions. To illustrate the usability advantages of this proposal, consider a document containing "employee" elements, each of which has child elements "dept", "salary", and "bonus". To find the largest total pay (salary + bonus) of all the employees in the Toy department, here is what I think many users will write: max( //employee[dept = "Toy"]/(salary + bonus) ) Unfortunately in our current language this is an error because the final step in the path does not return a sequence of nodes. The user is forced to write the following: max( for $e in //employee[dept = "Toy"] return ($e/salary + $e/bonus) ) This expression is complex and error-prone (users will forget the parentheses or will forget to use the bound variables inside the return clause). There is no reason why this query cannot be expressed in a more straightforward way. Users will try to write it as a path expression and will not understand why it fails. Another very common example is the use of data() to extract the typed value from the last step in a path, as in this case: //book[isbn="1234567"]/price/data(). This very reasonable expression is also an error and the user is forced to write data(//book[isbn="1234567"]/price). Note that I am NOT asking for a general-purpose mapping operator, which I think is not in general needed since we already have a for-expression. Instead, I think we should simply relax the unnatural and unnecessary restriction that is currently placed on path expressions. This will remove a frequent source of errors and will improve the usefulness of path expressions, without precluding us from introducing a general-purpose mapping operator later if a consensus emerges to do so. --Don Chamberlin >. (IBM-XQ-004) In the XQuery document, there should be no reference to "namespace nodes". Since XQuery does not support the namespace axis, there is no way for an expression to return a namespace node. All references to namespace nodes should be replaced by references to the "in-scope namespaces" of an element, a term that is already widely used in both the Data Model and the Functions and Operators documents. Nodes are needed only to preserve identity and to represent a position on an axis. Since namespace-prefix-bindings in XQuery have neither identity nor position, there is no need in XQuery for the concept of a "namespace node". See IBM-DM-031 for a related Data Model comment, but note that this proposal can be adopted independently of IBM-DM-031 () --Don Chamberlin IBM-DM-031: No need for namespace nodes (related: IBM-DM-021 (Feb0027-01) [preserving namespace prefixes) Don: element nodes are all of the concept that we need. PaulC: there are lot of specs that currently reference namespace nodes currently... Sharon: this impacts XSLT a lot Jonathan: I'd like the xquery language to never mention namespace nodes, and to make them deprecated. Don: element nodes have a property called inscope namespaces If you'd like to have those namespace nodes, then you can take those things and make them namespaces nodes. So, we can make namespace access a deprecated feature of XPath2.0. PaulC: concerned about other people on the table that think these are not deprecated (they might not use the *access*, but the data model). Sharon: any decision we take here, I please ask this to be contingent to the decision of the full XSL wg, as this impacts XSL a lot. Norm: sympathetic, but this doesn't seem to help the DM, as we'll need them anyway in the DM (the output of the namespace accessors). Don: my major concern is not to have the namespace nodes mentioned in the XQuery language book. I could live with this, and keeping the namespace nodes in the DM if really needed. PaulC: So, DM can maintain those nodes for those people who need to use them. Don: change property of elementnodes: the accessor should not return a set of namespace nodes, which is not helpful; instead, the language doc needs the inscope namespaces. A property like "namespace prefix binding". Karun: eg, F&O currently doesn't make references to the namespace nodes, but uses broader terms. *** * ACTION A-MAND-04 * on Norm * to expose the concept of namespace accessors in a formal way in the DM *** *** * ACTION A-MAND-05 * on Don * to build a proposal on how to get rid of the wording "namespace node" in the * XQuery document (coordinating with Norm) *** Note MikeK should be anyway consulted on this whole issue (A-MAND-04/A-MAND-05) (IB (Unnumbered): Clearly distinguish the following concepts in the language documents: "statically known namespaces" (a static property of an expression, found in the static context), vs. "in-scope namespaces" (a dynamic property of an element node). This change resolves comments/2004Feb/0210. Fixed throughout, using defined terms "in-scope namespaces" and "statically-known namespaces" This DISCHARGES the Unnumbered action. [...] AGREED to change to use "variable declarations" and "function declarations" for the static side. AGREED to keep 'in-scope' for schema definitions, "statically known" for document and collection. ACTION A-SJ04-24: Language Doc editor to change "In-scope variables" to "Variable declarations", "In-scope functions" to "Function declarations", "In-scope collations" to "Statically known collations", "Dynamic variables" to "Variable values" and "Function ..." to "Function implementations". . --Don Chamberlin (A-MAND-05): Revise the XQuery document to remove all references to "namespace nodes", using instead the term "in-scope namespaces", a property of an element node. The concept of namespace nodes (and the deprecated namespace axis) remain in the XPath document but are optional for host languages. These changes resolve comments/2004Feb/0207. The following parts of the XQuery document are affected by this item: 2. (Basics) Last paragraph and note. 3.7.1.2 (Namespace Declaration Attributes) 3.7.3.1 (Computed Element Constructors), local namespace declarations 3.7.4 (In-scope Namespaces of Constructed Elements) Corresponding changes to the Data Model document (defining "in-scope namespaces" as a property of an element node) are still pending (Norm's action item A-MAND-04). Done. In XQuery, "2.4.4 SequenceType Matching" says: In this case, an implementation is allowed (but is not required) to provide an implementation-dependent mechanism for determining whether the unknown type is compatible with the expected type". This should be implementation-defined. An implementation should document whether it provides such a mechanism, and the mechanism it uses (generally, to rely on information in the PSVI). Jonathan RESOLVED (CLOSED/MOOT) by the current document. Looking at appendix F of the XPath language book: Errors XP0001 and XP0002 between them do not cover all cases, because evaluation of an expression may depend on a value in the static context (e.g. the base URI). Error XP0008 includes XP0017 as a subset (and XP0017 starts "It is It is") XP0002 includes XP0018 as a subset XP0007 would appear to belong in F+O XP0006 includes XP0019 as a subset XP0055 applies to attribute tests as well as to element tests Using the namespace axis with a processor that doesn't support it raises error XP0021, but this is described in appendix F as an casting error.). There seem to be a number of errors that don't have codes allocated: - target type of a cast is not atomic - violation of the consistency constraints in section 2.2.5 - no entry in operator table for the actual types of the operands There seem to be cases where more specialized error codes would be useful: - "unknown types" in sequence type matching (2.4.4) should not give the same error as an incorrect but known type The description of casting in the XPath book duplicates the description of casting in F+O. The two descriptions allocate different error codes to the same error conditions. Which text is normative? The distinction between the two casting errors XP0021 and XP0029 isn't clear. Reading the descriptions in the appendix and the descriptions in the body of the document gives different impressions as to the difference between the two cases. In any event, it seems odd to have two different codes for these two subtly-different cases when other error codes are much less specialized. Note also typo in 3.2.1.1: "XPath defines a set of full set of axes" Michael Kay See (1) Errors XP0001 and XP0002 between them do not cover all cases, because evaluation of an expression may depend on a value in the static context (e.g. the base URI). Don reads out the definition of the static context, and that looks wider than MikeK expected, so he withdraws the comment: subissue (1): CLOSED and REJECTED (withdrawn) (2) Error XP0008 includes XP0017 as a subset (and XP0017 starts "It is It is"): subissue (2): CLOSED and ACCEPTED (3) XP0002 includes XP0018 as a subset: subissue (3) CLOSED because OVERTAKEN (no longer applicable) (4) XP0007 would appear to belong in F+O: subissue (4) CLOSED and ACCEPTED (5) XP0006 includes XP0019 as a subset: subset (5): CLOSED and REJECTED (6) XP0055 applies to attribute tests as well as to element tests: subissue (6): CLOSED as OVERTAKEN (no longer relevant) (7) Using the namespace axis with a processor that doesn't support it raises error XP0021, but this is described in appendix F as an casting error. Don: yesterday XP0021 was killed and we decided the language book would go ahead and use and F&O error code. MikeK: this error was overloaded, as it was used for something else too. subissue (7): CLOSED and ACCEPTED (XP0021 will have to be rewritten then, not deleted as previously agreed). (8)). subissue (8): CLOSED because OVETAKEN (by having accepted qt-2004Feb0154-01 subissue (5)) subissue (9a): CLOSED and ACCEPTED subissue (9b): CLOSED and REJECTED (no further actions) subissue (9c): CLOSED and REJECTED (no further actions) subissue (10): CLOSED and REJECTED (withdrawn) subissue (11): CLOSED and RESOLVED (done yesterday) subissue (12): CLOSED and ACCEPTED The following sentence occurs at the end of section H.2: <quote> It is not the case that these differences will always result in XPath 2.0 raising an error. In some cases, XPath 2.0 will return different results for the same expression. For example, the expression "4" < "4.0". This returns false in XPath 1.0, and true in XPath 2.0. </quote> I believe this statement is true of the incompatibilities listed in section H.1 (with backwards compatibility mode on) but is not true of the incompatibilities listed in section H.2 (further incompatiblities when BCM=off). The incompatibilities in section H.2 will always result in XPath 2.0 raising a type error for constructs that succeeded in XPath 1.0. Also, the example given relates to one of the incompatibilities listed in H.1. I think the sentence should simply be moved from H.2 to H.1. Michael Kay RESOLVED (and CLOSED): ACCEPTED, MikeK will take care of this. Hi,. done at meeting 166: ACCEPTED, Don to make the change and reply to the public comment. The current rules for backwards compatibility in general comparisons reads: <old> If XPath 1.0 compatibility mode is true, and at least one of the atomic values has a numeric type, then both atomic values are cast to to the type xs:double. </old> [Note also the typo: "to to"] This means that if both the values are xs:decimal values, they must both be converted to xs:double values for comparison. This seems unreasonable, given that two xs:decimal values can be unequal when compared as decimals, but equal when converted to doubles. Since it's entirely OK to compare two numeric values of different atomic type, there doesn't seem to be a good reason for converting both operands to double, rather than only converting one; nor is there a good reason for converting to xs:double if the value is already numeric. I suggest changing the rule to: <new> If XPath 1.0 compatibility mode is true, and if one of the atomic values has a numeric type and the other does not, then the value that is not numeric is cast to the type xs:double. </new> Michael Kay CLOSED (as DUPLICATE of qt-2004Feb1011-01) Some question: is anyAtomicType covered by item()? A: yes, in the sense that if item() is expected, then atomics are accepted. MH proposes change: under xpath 1.0 compatibioity mode, insert a prior rule: if the expected type is singleton or optional, and V is a sequence, take V[1]. Then apply existing rules (n.b. rule 3 becomes irrelevant and rules 1 and 2 can lose their subscript -- details left to Don). ACTION A-SJ04-44: Don to make these chagnes ACTION A-SJ04-45: Jonathan to update issues list marking issue ... as accepted. ACTION A-SJ04-46: Don C to reply to Priscilla. In Section 3.1.5, it says: "2. If the expected type is a numeric type, then the given value V is effectively replaced by fn:number(V[1])." This should apply only when it is expecting up to one numeric value. If it is expecting a sequence of (possibly several) numeric values, you wouldn't want it to use just the first one. For example, the expected type of the codepoints-to-string argument is xs:integer*. codepoints-to-string( (97, 98, 99) ) should return "abc", not just "a", regardless of XPath 1.0 compatibility mode. I realize that codepoints-to-string is not part of XPath 1.0, but it still seems that you wouldn't want its behavior to change because of XPath 1.0 compatibility mode. Thanks, Priscilla Walmsley Discussoin of what Priscilla is worried about. We notice a difference in wording between the language document adn f/o: the language document talks about "a numeric type" while f and o talks about float and double. Priscilla's quetion, however, is answered differently: integer* does not get treated as she fears, because xs:integer* is not "a numeric type" within the meaning of bullet item 2 of the compatibility-mode rule. ACTION A-SJ04-42: AM to reply to PW. ACTION A-SJ04-43: am to make F and O rules compatible with language book. 3.2 Path Expressions "Each evaluation of E2 must result in a (possibly empty) sequences of nodes; otherwise a type error is raised. [err:XP0019]" We feel XQuery is limited by its focus on nodes. The evaluation of E2 should be able to contain nodes or atomic values. The main purpose of this is to allow for a function at the end of a path. Generally this saves writing a loop. For example: let $root := <b><a> foo bar</a><a>baz faz</a></b> return $root/a/normalize-space(.) instead of let $root := <b><a> foo bar</a><a>baz faz</a></b> let $seq := $root/a let $result := for $item in $seq return normalize-space($item) return $result In addition, without this functionality ugly workarounds are required to obtain the value of context functions. For example: ("a", "b", "c" )/text{ position() } instead of the straightforward: ("a", "b", "c" )/position() --Sarah >. It is worth noting that the serialization process depends on the ability to cast a QName into a string (for example, when it is found in the value of an attribute such as xsi:type="xs:decimal"). Casting a QName into a string is currently not defined in the Functions and Operators document. This inconsistency between documents should be fixed. --Don Chamberlin yes, RESOLVED and CLOSED by the approval of the serialization proposal (the "triple proposal") in our previous meeting (cf.) Hi. ------------------------------------------------------------------------ Scott: inclined to keep the tables but putting them in non-normative section; and do the other appropriate modifications (eg as normative grammar notes, or constraints on the grammar). MaryH: yes, please don't force my implementation to behave in a specific way, do it in a neutral grammar. Don: however, we have this way an existence proof that the language is parseable PaulC: but, that's code, why should it be normative? Anybody wants to remove the tables? [none] So, options are either to stick with status quo (normative), or keep them but non-normative (Accept the comment). This still stays pending, we'll try to solve the other grammar issues first. editorial, so CLOSED, and Scott will reply as per w3c-query-editors/2004Aug/02138 >: > Kindly reconsider to amend production [54] to allow for an optional else clause. [54] IfExpr ::= <"if" "("> Expr ")" "then" ExprSingle ("else" ExprSingle)? with nested if-then-else tied through parentheses as suggested below No action required. This change was rejected in Cannes-Mandelieu. The duplicate of Feb662-01 (accepted at the San Jose' F2F) Scott thinks he has solved this. So: RESOLVED, and CLOSED, unless people come back to this in the next two days. Section 3.2 Path Expressions Technical fn:root(self::node()) treat as document-node() is not precise enough for meaningful static typing of path expressions of the form /a/b/c. We should use a type that is given via the static context as a new property such as the static type of a built-in context root item (similar to the notion of a context item). Should we force recommend to implementations to always infer the given type in treat as? 10 yes. 1 no (DF: it will kill a lot of optimizations.) In favor: 7. Opposed: 0. Carries. ACTION A-SJ04-39: Don C to make appropriate change to language document. This provides the loophole necessary to deal with the normalization of slash, and thus CLOSES this issue. ACTION: mr to respond publicly to this issue (once he gets the text). ACTION A-SJ04-40: Don C to respond publicly to this issue. Section. Visibility of nested comments are necessary to do nesting counting. Closed: Scott will add a note on how to do the matchup (a footnote called nested comment, explaining how they are done). Section 3.1.5 Function Calls Technical >From a usability point of view, we should also define promotion rules between xs:string and xs:anyURI and viceversa. This would allow us to pass a value of either type to a function such as fn:doc(). RESOLUTION: qt-2004Jan0210-01: [XQuery] MS-XQ-LC1-054 is closed. Resolved by previous changes. Section) Pending discussion of schema context paths in 27 April meeting. (Or was this decided yesterday?) Has been accepted. ACTION A-SJ04-38: MR to respond saying accepted. Pending discussion of schema context paths in 27 April meeting. (Or was this decided yesterday?) Has been accepted. ACTION A-SJ04-38: MR to respond saying accepted. Sections. > b) qt-2004Jan0191-01 [XQuery] MS-XQ-LC1-034 > - (MK) it's no longer relevant, solved by re-write of seq types production closed Section 2.4.3 SequenceType Syntax Technical Schema context overhead is too large for the benefit to be able to refer to elements of anonymous types in instance of/function signatures. We recommend to cut this feature from XQuery 1.0/XPath 2.0. Done: schema context is long gone. (Was this a dup?) Section 2.3.4 Input Sources Technical fn:collection should be dropped or made optional since it has only limited use in certain implementation contexts that understand the notion of a collection. [XQuery] MS-XQ-LC1-025 RESOLVED: Close with no action. Section 2.3.2 Atomization Technical fn:data() on mixed complex content statically needs to always raise a type error since an instance could be of a non-mixed complex content derived from the static type. Are we ok with a static error? Or should we allow atomization on all complex content. See also comment MS-DM-LC2-055 ( l) MR: resolved by decision in Mandelieu to make type value of mixed content an error. The atomization of a mixed content element in a static typing implementation will raise a static error, so Michael Rys is satisfied that the comment can remain closed. Some members of NCITS H2, including IBM, are planning to propose the embedding of XQuery expressions within SQL statements in the next version of SQL/XML. This feature is made possible by the addition of an XML data type in SQL/XML:2003. In adding this feature, errors in the evaluation of an XQuery expression will need to be reflected in SQLSTATE values, which have the following description: The character string value returned in an SQLSTATE parameter comprises a 2-character class value followed by a 3-character subclass value, each with an implementation-defined character set that has a one-octet character encoding form and is restricted to <digit>s and <simple Latin upper case letter>s. We suggest a small number of changes to XQuery to more easily support the reflection of XQuery errors in SQL. We believe that these changes will be helpful in the embedding of XQuery within other environments as well. 1. Ensure that the local part of QNames that are defined in XQuery use only 5 characters Add a statement such as the following: 2.5.x Identifying Errors The errors that are defined by this specification (indicated by "an error is raised: [...]") will use QNames with local parts that have the form XXYYY, where XX is one of "XP" or "XQ" and YYY consists of exactly 3 characters that are digits. 2. Change the existing error QNames Replace XP0xxx with XPxxx and XQ0xxx with XQxxx. 3. Define the "err:" namespace prefix Choose a namespace URI that does not change from one version of XQuery to another. 4. Disallow error values that are the empty sequence The evaluation of fn:error() and fn:error($arg), where $arg is the empty sequence, should have the effect of raising err:XQnnn (specific value to be assigned by the XQuery editors). This error would be defined as follows: XQnnn Unspecified dynamic error. 5. Make error values more regular The fn:error function accepts an argument of type item()?. We suggest that all errors should be identified by QNames, possibly with an associated non-QName value. The means by which the non-QName value is provided to a host environment is implementation-dependent. Specifically, when $arg is a value with a type other than xs:QName, then the evaluation of fn:error should have the effect of raising err:XQmmm. This error would be defined as follows: XQmmm Unspecified dynamic error with non-QName value. 6. More clearly identify errors that are defined by implementations and users We suggest that all errors reported to a host environment be defined in this specification. Errors raised by an implementation or a user with QName values would be identified by a standard error with an associated QName value. The means by which the QName value is provided to a host environment is implementation-dependent. Specifically, when $arg is a value with type xs:QName, but it is not a QName defined by this specification, then the evaluation of fn:error should have the effect of raising err:XQrrr. This error would be defined as follows: XQrrr Implementation-defined or user-defined error. 7. Change the error QNames defined in F&O Replace all of the QNames in Annex D, Error Summary (Non-Normative), with QNames of the form err:XFxxx. 8. Make the error QNames Normative in F&O Change the title of Annex D from " Error Summary (Non-Normative)" to just " Error Summary". Add the following sentence to the beginning of this annex: The error text provided with these errors is non-normative. 8. Define error QNames in Serialization Serialization states in numerous places, starting with Section 2, Serializing Arbitrary Data Models, bullet 2, "It is a serialization error if ..." Like XQuery and F&O, QNames should be associated with these errors. These QNames should have the form err:XSxxx. -- Andrew -------------------- Andrew Eisenberg IBM 5 Technology Park Drive Westford, MA 01886 andrew.eisenberg@us.ibm.com Phone: 978-399-5158 Fax: 978-399-5117 6. qt-2004Jan0091-01: [XQuery] IBM-XQ-001 - changes to error QNames > We suggest a small number of changes to XQuery to more easily support > the reflection of XQuery errors in SQL. We believe that these changes > will be helpful in the embedding of XQuery within other environments > as well. This was the IBM comment that lead to the proposal that we accepted. d) [XQuery] IBM-XQ-001 - changes to error QNames, Don c [AndrewE made a presentation on this] XQuery WG decided to change the language book to say that QNames for errors are not normative, following F&O. Originally this was because there was English text describing the errors, and because of internationalization concern. The question of whether the codes themselves are normative is undecided, but as Mike Kay pointed out, they are not testable within our spec, so possibly should not be normative. JimMelton noted that XQ had already adopted 8-character local parts. The err: prefix is a stylesheet error and Norm has already said he will fix it. It should be (said Jim) xdt: PaulCotton: Michael said the errors should be in no namespace. Mike Kay: I was under the impression that namespaces were there only for user-defined namespaces Norm: I think we couldn't decide and said we'd come back to it. Jim: it's orthogonal to this proposal. Jim: Andrew's argument stems from what I consider to be a very reasonable source -- that an important use of XQ is likely to be in conjunction with SQL. Unfortunately I disagree that we should make the XQ error space have the same space as the SQL error space. I think another approach would be better for SQL as well as for us. SQL has specifically set aside a range of error codes for SQL use and a range for implementors. The latter range includes all errors starting with X, so we can't use it. PaulC: the SQL standard also has a mechanism to get a message text, Andrew: we're thinking that if XQ had XQxxx, we'd use a lower range, such as AAxxx. PaulC: why can't you return the existing XQ error codes in the diagnostic area message text field, instead of doing a mapping in SQL? Jim: I think Andrew is trying to make XQ a first class citizen in SQL, but another approach might be to define a mapping between each of the XQ error codes and the appropriate subclass codes in SQL. I'd like to see the QNames of these error codes [in XQ] be normative. I belive there willbe more interfaces than just SQL. I'd prefer not to make all the others suffer simply because a desire to make it trivial for SQL. Liam: decouple specs, I'd prefer xq-xml-query-... i.e. include the W3C shortnames in the qname. Mike Kay: I hate short names too. They're things from the 1970s. XSLT uses 6 characters now, and it'd be an enormous pain to cut it down to five, and I wouldn't want to change it because stability in them is important. On the motivation, I'm very wary of trying to run 2 sets of unique values owned by distinct authorities into the same value space. The implication that we don't have freedom to use the other half of the value space in our next draft is abhorrent. MichaelR: if we go with the qname, I think the qname should be not human-readable because it's just an identifier. But I'm worried about making them normative, because some APIs might expose errors just as qnames. In SQL we don't have qnames, so a [SQL] mapping table is appropriate, We should keep the existing 8 character naming and have SQL/XML define a mapping, and the APIs also define a mapping, exposing it as a qname or just a string value. It'd be better not to use the prefixes on error codes, because prefixes aren't supposed to be significant, do people have to use the URI to look them up? on (4) in the proposal, I think it's OK, but a bit vacuous. on (5) I'd like to restrict the fn:error() argument to something less than item, but not allow more than one argument. I'd like a URI, so people can get more resources about the error. on (6) I think if we have a qname or a URI, ths is possible, but there's not a big difference between host and user defined functions. on (7) this should be done at an API level. We need to be explicit that the error text is not normative. So we should make clear that our error codes are not normative from a testing point of view, but we should tell implementors to provide a mapping to these codes. JonathanR: Andrew is proposing that we change our errors so they map clearly to SQL. straw poll, how many people feel that's something we should do? I.e. adopt part 1 adopt part1: 3 [all from IBM] not adopt: 6 abstains: 4 IBM can live with 8-character codes. Resolved: we have already decided 8-character error codes, with a specified breakdown. *** * ACTION A-TAMPA-26 * on Jim Melton * to find the email summarising the 8-char error codes * (cf. error qnames issue, ) * and send it to the WG list. Include a proposal for prefixes (XP, * XQ or whatever) if the decision didn't include one. *** (Don will then need to change the query book) [much more discussion followed] The ability to deal with null values (that translate into the XML data model very often as empty sequences as result of path expressions) is a clear necessity for most real world applications. Unfortunately in the current specification the value predicates (e.g. eq, lt, gt) raise dynamic errors when one of their arguments is an empty sequence and this poses significant problems while querying data with null values. In order to solve this problem the value comparisons should simply return the empty sequence when one of their arguments is an empty sequences. Please remark that the other form of comparisons (the general comparsion "=" and its family) do deal properly with the empty sequence, but have other properties that make them unsuitable for processing large volumes of data (e.g. they are non transitive). Thanks for your attention, best regards, Dana Florescu c) [XQuery] value comparisons and empty sequences, Dana F Status: Both WGs have adopted a solution to this problem. We simply need to assign responsibility for providing a description in response to this comment. Resolution: this change (3rd paragraph) has already been adopted. *** * ACTION A-TAMPA-25 * on Michael Rys * to respond to the public comment on value comparisons and empty sequences * (cf.) *** Hi. This is a comment on the last call XQuery WD. The subject was previously discussed on public-qt-comments, but due to an oversight on my part, wasn't concluded. That discussion can be found here; Mark. -- Mark Baker. Ottawa, Ontario, CANADA. b) XQuery and URIs, Mark B (reading "query" instead of "document" in the comment) Mike Kay: A URI quoting scheme for queries is already defined. You can use escape-uri or see the RFC 2396 for how to encode a query in a URI. *** * ACTION A-TAMPA-24: * on Liam * to respond to the public comment on "XQuery and URIs" * (cf. ) *** I This issue is confirmed as closed. I'm submitting this as a last-call comment because I keep hearing discontent about it, mainly from the XSLT user community. There have been many informal discussions on this in the XSL WG, with a fair amount of sentiment in favour, but I think it needs to go on the formal joint XPath agenda for a decision one way or the other. This is a personal rather than a Software AG comment. I believe that there is a clear need for a simple mapping operator in XPath 2.0. I will use the symbol "!" to represent this operator. The semantics are that: E1 ! E2 evaluates E2 for each item in E1, and returns the concatenation of the results, retaining order. E2 is evaluated with the context item set to the relevant item in the result of evaluating E1. For example: string-join(*!name(), ",") returns a list of names of the child elements of the context node sum(item!(@price*@qty)) returns the sum of price*quantity over all items (1 to 10)!(.*2) returns the numbers 2,4,6,... $emps!@date-of-birth returns the dates of birth of the employees in $emps, retaining the original sequence order. Why is this needed? Basically because the "for" expression is too heavyweight for the task. The "for" expression introduces range variables, which are only really needed when doing joins, and joins in XPath expressions are actually extremely unusual. The vast majority of "for" expressions actually used in XPath are not nested. By requiring users to use "for" expressions as the only mapping construct in the language, (a) they are forced to use a syntax that is unlike the rest of XPath, and (b) they have to switch idioms to use variables, instead of using the context item in the way that comes naturally - which is a very common source of errors. There has been some sentiment that a simple mapping operator would remove the need for "for" expressions in XPath entirely. I'm not arguing for that here (I think there is some merit in XPath being relationally complete). But for a language so heavily based on sequences, I think the simple mapping operator is really needed. Michael Kay done at the Redmond f2f: RESOLVED and CLOSED (REJECTED). See qt-2003Dec0061-01. The *************************************************************************** > f) 2003Nov/0302 DM expressing until-like queries in XPath 2.0 > > > Example in PDF is attached to: > > > David Carlisle's answer: > > Summary: I think we only need to get confirmation from the commenter > that they are happy. [there is now a reply at message 0313] JonathanR, MikeK: this isn't a request to change the language; it's an assertion that some queries require recursion, and can't be done just in XPath. *** * ACTION A-TAMPA-21 * on Mike Kay * to respond to te public comment * * saying thank you for the response, * and to explain it's an 80/20 decision. *** XQuery 1.0: An XML Query Language states (section 4.12: Function Declaration): "In XQuery 1.0, user-declared functions may not be overloaded. A user-declared function is uniquely identified by its expanded QName. ... Note: If a future version of XQuery supports overloading of user-declared functions, an ambiguity may arise between a function that takes a node as parameter and a function with the same name that takes an atomic value as parameter (since a function call automatically extracts the atomic value of a node when necessary). " Would it be possible to relax this rule and allow for function overloading based on the arity of the function parameters? (in other words, allow for ns:fn(a), ns:fn(a,b), ns:fn(a,b,c)). Such an approach would not lead to ambiguities, as a function would be uniquely identified by its expanded QName and the number of its arguments. At the same time some of the flexibility provided by overloading would remain at the hands of the users. Cheers, Panagiotis The request is to do the easy part of overloading now, based on arity. KK In XSLT this is allowed. AE: what's driving this? KK Action TAMPA-19 DF to write a proposal for arity-based. Open, pending completion of Tampa 19. Later revisited; various people's notes say this was decided on, and the onus is now on the editors. It's in Don's work list, no action number needed. The XQuery language draft says: "Each operation E1/E2 is evaluated as follows: Expression E1 is evaluated, and if the result is not a sequence of nodes, a type error is raised.[err:XP0019]" Shouldn't E1 be allowed to be the empty sequence? Can it be reworded "...if the result is not a (possibly empty) sequence of nodes..." Thanks, Priscilla done at the Redmond f2f: RESOLVED (CLOSED as MOOT): this has been already taken care of in the Jul draft. I would like to make a proposal to change the rules affecting the focus when evaluating a path expression in the form E1/E2. Specifically, I propose that when evaluating E2, the context position and context size should both be set to 1 (one), not to the position of the context node in the list of nodes selected by E1 and the size of this list. There are very few expressions that will be affected by this difference. Because E2 must return a sequence of nodes, you need to find an expression that uses position() or last() but still returns nodes. It's not enough to use position() or last() within a predicate, because a predicate changes the context. You need to find expressions such as a/b/remove($c, position()) or a/b/subsequence($c, position()) or a/b/(for $i in 1 to last() return $c[$i]) I have not found any expression in this category that is actually useful, or that cannot be written in a more natural way. But why change the rules? The reason is that the current rule makes the "/" operator non-associative. The result of (a/b)/remove($c, position()) is not the same as a/(b/remove($c, position()). Rewriting a/b/c as a/(b/c) can be an important rewrite for optimizers as it often reduces the need to perform a sort. To be fair, I should point out that (a) it's not difficult to detect path expressions that use position() or last() on the right hand side of "/", and (b) there are other situations that can make "/" non-associative: specifically, when one of the operands creates new nodes with distinct identity by means of a function call (or in XQuery, a node constructor) However, I think it's not reasonable to ask implementations to do a lot of extra work for the sake of constructs that will only ever be found in conformance tests. The rule that position() and last() should both be set to 1 is just as good as the present rule from a user perspective (there are plently of cases in XSLT where these values are always set to 1 already, e.g. when evaluating keys) and a lot easier from an implementor's perspective. There are no backwards compatibility implications, because XPath 1.0 restricted the rhs of "/" to be a step, which could not contain calls on position() or last() except in a predicate. Michael Kay The request was for position() and last() to be undefined (or illegal) on the right of "/" Don pointed out that all our XPath operators are defined independently of each other, and this would change that. MK pointed out that / and [] set position() and last() in 2.0 (in 1.0 there was no way to use these functions to the right of /) Dana pointed out that there are other things that destroy associativity of the / operator; MK felt that node construction was another. Karun: if the compiler is smart enough to re-order, it's smart enough to test for the use of position() and last(). Scott: it's another special case rule users have to remember. MKay: I'm hearing a lack of support for this proposal Resolution: We'd prefer to keep the status quo so that / and predicates have the same behaviour *** * ACTION A-TAMPA-18 * on Jonathan Robie * to reply to the public comment * * ([XPath] Focus for evaluating E1/E2) *** RESOLVED as this was already CLOSED in the San Jose' f2f (A-TAMPA-18) I RESOLUTION: Rejected, closed. ACTION A-SJ04-35: Mary Holstage to respond. Dear Niko, This is a response to the following message, which you posted to the XML Query Working Group's comments list: The XML Query Working Group considered your comment and decided to make no change. First note that the data model defines mappings both from the PSVI and from the vanilla Infoset, and given that the PSVI is an extension of the Infoset, an implementation is perfectly free to provide only the Infoset mapping. An implementation of XSLT is further free to provide provide facilities for handling documents as if there were no schema information available. So the part of your comments regarding allowing PSVI annotation to be disregarded is already an option available to implementors. With respect to dynamic versus static dispatch of operators such as "+", you don't need to bring date-time types into play to run into the issue. Since there is more than one numeric type, dynamic dispatch will still be necessary, given that integer addition and floating point addition are not the same. Type polymorphism requires late binding even for numeric types. The WG is not prepared to abandon polymorphism for arithmetic operators for different numeric types, nor to abandon those different numeric types. We appreciate your feedback on the XML Query specifications. Please let us know if this response is satisfactory. If not, please respond to this message, explaining your concerns. Mary Holstege | Iiko RESOLUTION: Rejected, closed. ACTION A-SJ04-35: Mary Holstage to respond. [[rejected]] > b) 2003Nov/0053 [XSLT2.0] PSVI, XPath and optimization > > Michael Kay's reply: > (discussion of static typing; the commentator's example has an input document that uses xs:type on instances) (xsi:type in the absence of validation is just an attribute) Resolution: XSL WG to discuss the building of the data model and PSVI annotations in the original input document with Niko, and also to consider schema-awareness declarations or flags, and to respond. [later in the same meeting...] MikeK gives a presentation on "Simplifying Durations". DataPower public comment (XSLT) in public-qt-comments/2003Nov/0053 People note making + less overloaded doesn't quite solve the associativity problem (eg, for floats). Dana: core that is as simple as possible, and then allow libraries to extend this Rys: people/vendors can define their own extension functions for this, so I am in favour of MikeK's proposal. Straw poll: people in favour of MikeK's proposal: 5 people not in favour of MikeK's proposal: 6 abstain: 5 So, we go to the "status quo prevail" line: can live with status quo: 12 can't live with status quo: 1 abstain: 3 So, resolved, status quo prevails. Hello, I am feeling the need of a built-in function, any() in XSLT.. please look, at the following e.g. <xsl:for-each <xsl:if </xsl:if> </xsl:for-each> any() function, would match to any node in the preceding-sibling::, and following-sibling:: *axis* any() function, would be equivalant to (preceding-sibling:: or self:: or following-sibling:: ) I am wondering, if this might be appropriate ?? Regards, Mukul __________________________________ Do you Yahoo!? Protect your identity with Yahoo! Mail AddressGuard RESOLUTION: comment was withdrawn, closed, no action, issues list needs to point to message in the same thread in which the commnter says "I'm happy". Thank you for your responses. I am happy with the answers.. Thanks Jeni, Thanks David Regards, Mukul RESOLUTION: comment was withdrawn, closed, no action, issues list needs to point to message in the same thread in which the commnter says "I'm happy". Section 2.3.2 Typed Value and String Value, bullet 3d states: "If the type annotation denotes a complex type with non-mixed complex content, then the typed value of the node is undefined. " Since non-mixed complex content includes a content type of "empty", and bullet c covers empty content, we think this text should instead say "If the type annotation denotes a complex type whose content type is elementOnly, then the typed value of the node is undefined." Note: it's now section 2.4.2 4d. RESOLUTION: qt-2003Nov0014-06 closed: this change has already been made. ACTION A-SJ04-33: Michael Sperberg-McQueen to respond to Schema WG to confirm the resolution. Note: it's now section 2.4.2 4d. RESOLUTION: qt-2003Nov0014-06 closed: this change has already been made ACTION A-SJ04-33: Michael Sperberg-McQueen to respond to Schema WG to confirm the resolution. SECTION Mark Scardina requests a better definition of Sequence Type. Status: Accepted and implemented. SECTION 2.4.4: SequenceType Maching "Module" is used in the 3rd paragragh and is an XQuery construct. Also the 2nd note refers to an XQuery function syntax. Regards, Mark Scardina Oracle Corporation SECTION 2.3: Documents In the first sentence it would be more precise to say "XPath is used in the processing of documents." Regards, Mark Scardina Oracle Corporation Overtaken by events. This section has changed. SECTION H.1: Incompatibilities when Compatibility Mode is true Number 8. states )" The last sentence in #8 is not true for special cases defined in #6 as this would result in 5e0>4 becoming NaN>4. Regards, Mark Scardina Oracle Corporation Pertains to a fine detail of XPath 1.0 Compatability Mode. Status: Refer this comment to Michael Kay for processing. SECTION 3.1.2: Variable Reference Shouldn't the the first sentence "A variable reference is a QName preceded by a $-sign" be a defintion? Regards, Mark Scardina Oracle Corporation SECTION 3.1.1: Literals The 2nd note states: "If a string literal is used in an XPath expression contained within the value of an XML attribute, the characters used to delimit the literal must be different from the characters that are used to delimit the attribute." Aren't the only characters this pertains to are ' and "? If so shouldn't the spec be explicit? Regards, Mark Scardina Oracle Corporation Delete this note from XQuery but not from XPath (because it's common to embed XPath in XML). And within XPath qualify it with respect to its use. AE: Is this legal: <element a='{'b'}'/>? DC: I believe that we decided this isn't legal: because people wanted to be able to use existing XML parsers to parse element constructors that might have things like this in them. AE: It's not a problem to process it, but a parser would find it problematic. MK: Does this note appear in XQuery as well as XPath? We allow all sorts of things in attributes in XQuery that aren't allowed in XML. I was under the impression that we allowed all sorts of stuff in an attribute constructor on the XQuery side. DC: I'll try to put a band-aid on the hole by saying we shouldn't have different rules in XQuery and XPath. MK: If we're trying to restrict what you can put in an attribute constructor in XQuery, then we've got to do a lot more work than this. HZ: It appears in XPath and it is explicitly talking about a value within an XML attribute. AE: Putting this in a note makes it non-normative. So I don't know what to do with this. MK: I suggest we remove this note from the XQuery document. HZ: There's a note in 3.5 of XPath about the "<" symbol, so we have a precedent. That doesn't appear in XQuery. MK: Embedding in XML is a design requirement for XPath but not for XQuery. Proposal: Delete this note from XQuery but not from XPath (because it's common to embed XPath in XML). And within XPath qualify it with respect to its use. Accepted. SECTION 2.4.4.2 : Matching an ItemType and an Item Shouldn't the 2nd sentence: "An AtomicType AtomicType matches an atomic value whose actual type is AT if type-matches(AtomicType, AT) is true." actually be "An atomic type, AtomicType matches an atomic value whose actual type is AT if type-matches(AtomicType, AT) is true." Regards, Mark Scardina Oracle Corporation Recommend to reject. Italics and underscores are used to distinguish the different uses of AtomicType (one of them is a reference to the name of a production). SECTION 2.2.3.2: Dynamic Evaluation Phase Should there be a definition of dynamic types here as they derive from the DM which exists prior to the Static or Dynamic Analysis phase? Regards, Mark Scardina Oracle Corporation Overtaken by events. The definition of dynamic type has been moved to "Data Model Generation" which occurs before the suggested location. [My apologies that these comments are coming in after the end of the Last Call comment period.] Hello, Following are comments on XPath that we believe to be editorial in nature. ------------------------------------------------------------------ Section 2 In the lead-in to the bulleted list, "prefixes(these" should be "prefixes (these". ------------------------------------------------------------------ Section 2.1.1 The definition of the default collation needs to indicate that there must be a (URI, collation) pair in the in-scope collations for which the collation is the default collation. The definition of fn:default-collation requires it. ------------------------------------------------------------------ Section 2.1.1 To avoid confusion, it might be better to rename the "Statically-known documents" component to "Statically-known document types" or something similar. That would avoid any need for a note to clarify the meaning. Even if the note is kept, the component would benefit from the name change. Similarly, "Statically-known collections" could be renamed "Statically known collection types". ------------------------------------------------------------------ Section 2.2.1 The second paragraph following the numbered list, states "For example, if the data model was derived from an input XML document, the dynamic types of the elements and attributes are derived from schema validation." The words "are derived" should be "might be derived", or something similarly vague. ------------------------------------------------------------------ Section 2.2.5 The first paragraph of this section states, "Enforcement of these consistency constraints is beyond the scope of this specification." Suggest replacing this sentence with, "An implementation is not required to detect whether the data model, static context and dynamic context obey these consistency constraints. The behavior of an implementation if a consistency constraint is violated is implementation-dependent." ------------------------------------------------------------------ Section 2.3.2 In the second bulleted list, the meaning of the term "function returns" isn't necessarily clear. The entire list might be better rephrased in terms of the parts of the relevant expressions that are atomized: "arithmetic operands", "function arguments", etc. At the very least, "and returns" should probably be removed from the list. ------------------------------------------------------------------ Section 3.2.1.1 In the paragraph following the bulleted list, the definitions of "forward axis" and "reverse axis" are phrased so that the self axis is both a forward and a reverse axis. Perhaps this paragraph should be made into explanatory material, with the normative definitions of forward and reverse axes being the explicit lists of axes in the paragraph that follows this one. ------------------------------------------------------------------ Section 3.2.3 In the bulleted list, the example "parent::node()" states that "If the context node is an attribute node, this expression returns the element node (if any) to which the attribute node is attached." The same should also be stated for namespace nodes. ------------------------------------------------------------------ Section 3.2.4 The last example in this section is "E/.". Although this is an interesting example, it doesn't belong in a section on abbreviated syntax. ------------------------------------------------------------------ Section 3.4 The second item in the numbered list indicates that the result of applying an arithmetic operator on an empty sequence is an empty sequence. Some rationale for this behaviour (as opposed to a dynamic error being reported) would be desirable. ------------------------------------------------------------------ Section 3.5.1 The third item in the numbered list states, "If the value of the first atomized operand is not comparable with the value of the second. . . ." This should be "If the type of the first atomized operand is not comparable with the type of the second. . . ." ------------------------------------------------------------------ Appendix B The "op" pseudo-prefix is used in this section, but its meaning is never explained. ------------------------------------------------------------------ Appendix G It would be helpful if this listed the implementation-defined and implementation-dependent features. This same comment applies to other specifications in this set. ------------------------------------------------------------------.] Sections 2.4.4 and 3.1.5 In section 2.4.4, the second paragraph beneath the first note refers to "the module in which the given type is encountered." In addition, in section 3.1.5, the third bullet under the definition of "function conversion rules" speaks of the module in which a function call takes place, and the module in which a function is defined. The term "module" is only defined for XQuery, not for XPath, and must not be used in these two places..] Section 2.4.2 The second example in list item 3 refers to "xs:IDREFS, which is a list datatype derived from the atomic datatype xs:IDREF." In the terminology of XML Schema, xs:IDREFS is derived by restriction from xs:anySimpleType. This should instead say, "xs:IDREFS, which is a list datatype whose item type is the atomic datatype xs:IDREF." Thanks, Henry [Speaking on behalf of reviewers from IBM.] ------------------------------------------------------------------ Henry Zongaro Xalan development IBM SWS Toronto Lab T/L 969-6044; Phone +1 905 413-6044 mailto:zongaro@ca.ibm.com SECTION 3.2.4: Abbreviated Syntax 3.2.4, bullet 3, gives an explanation of // as being effectively replaced by /descendant-or-self::node()/ However, this is only true for // used in non-initial positions. If // is used at the beginning of a path expression, it is effectively replaced by fn:root(self:node()) treat as document-node()/descendant-or-self::node() as explained in 3.2 Path Expressions. So we need to make it clear here. Or repeat the abbreviation of the initial '/' and '//' here in 3.2.4 as well. - Steve B. SECTION 3.2.1.1: Axes In 3.2.1.1 Axes, it defines the parent and descendant axis using 'transitive closure'. But it does not give the exact defintion of 'transitive closure'. Should it be defined ? - Steve B. Status: Recommend to reject. I believe the existing text is clear enough: "The descendant axis is defined as the transitive closure of the child axis; it contains the descendants of the context node (the children, the children of the children, and so on). SECTION. Appendix B2 Operator Mapping Editorial ge, le for xs:boolean are missing their return type.] As an important detail, the text in the spec should make clear that *:NCName also applies to the default namespace. Regards, Martin. "XPath allows functions to be called" sounds totally redundant. Please remove or improve wording. Regards, Martin. 2.5.3 in XPath should make it very clear that if the spec defines an error, that error has to be reported, and processing has to stop. Regards, Martin. Overtaken by events. The term "schema path" has been dropped. The term 'schema path' should be better defined and added to the glossary. regards, Martin. Martin Duerst requests a definition of "schema path". Status: Overtaken by events. This term no longer appears in the XPath book. The hierarchy shown in Fig. 2 has a box saying 'specific list and union types such as xs:IDREFS'. What about other, e.g. user-constructed, list and union types? Regards, Martin. Recomment to reject. Figure gives an example of a list type but is not intended to be exhaustive. 2. Recommend to reject. Suggests revising the definition of document order, but it has been debated extensively and is now consistent in all our documents. XPath 2.2.3.2 explains static and dynamic errors. It could be nice to have a table showing cases that are - static but not dynamic errors - dynamic but not static errors - static and dynamic errors - neither static nor dynamic errors Also, renaming the 'static typing feature' to 'strict typing feature' may make things easier to understand. Regards, Martin. Martin Duerst requests a table of static vs. dynamic errors, and requests renaming the static typing feature to "strict typing feature". Status: Recommend to reject. Static, dynamic, and type errors will be identified by their error codes, and also in the Summary of Errors appendix. Explanatory text indicates the processing phase (analysis or evaluation) during which each kind of error can be raised. The current name of the static typing feature was chosen to emphasize static analysis. The purpose of known documents/collections (both static and dynamic) needs to be better explained. For example, in the general case, is 'dynamic available documents' equivalent to all the documents on the Web? Regards, Martin. Martin Duerst requests expanded def'ns of Statically Known Documents etc. Status: Recommend to reject. I believe these terms are adequately defined. [Speaking on behalf of reviewers from IBM, not just personally.] Section 3.2.3 The twenty-sixth through the twenty-eighth bullets (which contain predicates of the form [attribute::type="warning"]) all indicate that they test whether the context node "has a type attribute with value warning." However, if the attribute is of a list type, the general comparison is true if at least one value in the atomized sequence has the value "warning". For instance, the string value of the attribute might be "info warning error", and the typed value might be the sequence of xs:string values ("info", "warning", "error"). Suggest changing "has a type attribute with value warning" to "has a type attribute, and the result of the equality general comparison between the attribute and "warning" is true." Thanks, Henry ------------------------------------------------------------------ Henry Zongaro Xalan development IBM SWS Toronto Lab T/L 969-6044; Phone +1 905 413-6044 mailto:zongaro@ca.ibm.com Requests changes to some examples illustrating paths. Status: Changed examples to use "eq" value comparisons instead of general comparisons. Description of examples is now correct. [Speaking on behalf of reviewers from IBM, not just personally.] Section 3.2.2 The paragraph between the two bulleted lists (which defines the terms "reverse axis" and "reverse document order") states, "By contrast, (preceding::foo)[1] returns the first foo element in document order, because the axis that applies to the [1] predicate is the child axis." This reference to the child axis applying to the predicate is carried over from how this was described in XPath 1.0. In XPath 2.0, this should state that, as described in 3.2.1, the items from a FilterStep are returned in the same order as the primary expression, which is document order in the case of an axis step..4.4 The second paragraph beneath the first note uses the adjectives "known" and "unknown" to describe types. The term "unknown types" might confuse the reader, as the same term is used in Data Model with a different meaning. Suggest replacing "known types" with "types in the in-scope schema definitions" and "unknown types" with "types that are not in the in-scope schema definitions", respectively. Thanks, Henry ------------------------------------------------------------------ Henry Zongaro Xalan development IBM SWS Toronto Lab T/L 969-6044; Phone +1 905 413-6044 mailto:zongaro@ca.ibm.com Recommend to reject. The paragraph defines its terms and uses them consistently. [Speaking Overtaken by events. Definition of document order has changed. [Speaking on behalf of reviewers from IBM, not just personally.] Section 2.1.2 The definition of "focus" in the fourth paragraph states that "The focus enables the processor to keep track of which nodes are being processed by the expression." However, the context item might not be a node..1.1 Some components of the static context are described as sets of pairs, in particular, the in-scope namespaces, in-scope variables, and in-scope collations. In each case, there is an implicit constraint on the component - namely, that the first item in any pair in each set is not associated with more than one value in the set. For instance, the in-scope namespaces must not be the set: {(pre,), (pre,)} Those consistency constraints should be stated explicitly. Thanks, Henry ------------------------------------------------------------------ Henry Zongaro Xalan development IBM SWS Toronto Lab T/L 969-6044; Phone +1 905 413-6044 mailto:zongaro@ca.ibm.com Requests explicit statement of certain constraints in Section 2.1.1 (Static Context). Status: No need to change Section 2.1.1. The requested constraints are stated in Appendix C. Section Dear Colleagues, This comment pertains to the Nov. 12 2003 version of XPath 2.0 [1]. [1] Lisa Martin, on behalf of the XML Schema Working Group ---------------------------------------------------- Section 2.4.4.3 Matching an ElementTest and an Element Node Bullet 2 states: 2. element(ElementName, TypeName) matches a given element node if: a. ... , and: b. type-matches(TypeName, AT) is true, where AT is the type of the given element node. ... The first example is: Example: element(person, surgeon) matches an non-nilled element node whose name is person and whose type annotation is surgeon. Given the rules for type-matches (ET, AT), shouldn't the example instead say " ... and whose type annotation is surgeon, or is a type derived from surgeon"? This comment applies to many examples in this, and following sections. Schema WG wants some text added to some examples. Status: Done Dear. First part done; second part overtaken by events (no more schema context paths). X Recommend to reject. Commentor doesn't like the term "tuples," wants to change it to "mappings". W3C XSL WG Editorial Comment on XPath 2.0 Last Call Draft Comment on 2.4.2: The data-model accessors do not belong in this section. It would be more useful if they were introduced in the "datamodel" section (second half of 2.2.1). Recommend to reject. Suggests that we move the input functions from "Concepts" to "Data Model Generation". W3C XSL WG Editorial Comment on XPath 2.0 Last Call Draft Comment on 2.3.4: It would be useful to refer to "available documents" and "available collections" in the dynamic context. W. W3C XSL WG Editorial Comment on XPath 2.0 Last Call Draft Comment on 2.3.2, 1st paragraph: The first two sentences of the current definition should be outside the "definition markup". The third sentence is the REAL definition. It is okay that XQuery copies parts of XPath. But it's very annoying for reviewers, implementers, and users that these sections are not clearly marked. Please mark them. Regards, Martin. Recommend to reject. Commentor wants the parts of the XQuery document that are in common with XPath to be "marked". This comprises more than half of the document. Comment. XPath section 2.4.4 contains the sentence: An unknown type might be encountered, for example, if the module in which the given type is encountered does not import the schema in which the given type is defined. Modules, of course, are an XQuery concept. I suggest changing the example to: An unknown type might be encountered if a source document has been validated using a schema that was not imported into the static context. (It might also be useful to add this example to the XQuery spec). There is another reference to modules in 3.1.5: If the function call takes place in a module other than the module in which the function is defined, this rule must be satisfied in both the module where the function is called and the module where the function is defined (the test is repeated because the two modules may have different in-scope schema definitions.) I suggest deleting this sentence from the XPath version of the document. Michael Kay In Section 3.2.1.1, the last bullet refers to the functions get-in-scope-namespaces and get-namespace-uri-for-prefix. "namespaces" should be changed to "prefixes", and the get- prefix should be removed (if it hasn't been already). Thanks, Priscilla Walmsley Minor nit: In Section 3.8, the example shows a book title as "Advanced Unix Programming". Two examples later (supposedly in the results of a query on the first example), the title has changed to "Advanced Programming in the Unix environment". I think the name of the book is actually "Advanced Programming in the Unix environment". Thanks, Priscilla Walmsley Hi Section A.1.1 and A.2.1 provide helpful grammar notes that are clearly visible in the preceding BNF. Section A.3 provides equally significant clarification that does not. For no very obvious reason gratuitous ElementName and AttributeName aliases for QName are provided, yet there is no FunctionName to which the A.3 text should be annotated. Therefore please replace QName by FunctionName in FunctionCall and add e.g. FunctionName ::= QName /* A.3 reserved names */ ------------------------------------------------------------------------ Accepted Hi The signature example should more clearly indicate that this is not XPath syntax, since "as" must be preceded by "cast", "castable" or "treat" according to Appendix A. ------------------------------------------------------------------------ Recommend to reject. The function signature is presented in the standard notation used in the Functions & Operators document. Section 3.2.1.1 Axes Editorial Either always use () in grammar (as in forwardstep) or do not as in reverse step. See [447] RESOLUTION: closed, fixed. see RESOLUTION: closed, fixed. Section 3.2.1.1 Axes Editorial Add node to parent that an attribute may have an element as a parent even though it is not a child of the element. Section /). Section 3.2.1 Steps Editorial Please remove "The result of the filter step consists of all the items returned by the primary expression for which all the predicates are true. If no predicates are specified, the result is simply the result of the primary expression.": This has been said in paragraph before and is also falsely implying that you can use a naïve conjunction of all the predicates. Also rewrite beginning of following sentence. M.Rys requests change in description of Filter Step. Status: Recommend to reject. This description has now moved to a new section (3.3.2) called Filter Expressions. The sentence objected to by M.Rys still exists in the new section, but I do not understand why it needs to be fixed. Section 3.1.5 Function Calls Editorial Please reword: "The result is a value of the function's declared return type." to "The result is an instance of the function's declared return type." Section 2.4.4.2 Matching an ItemType and a Value Editorial "document-node(E)": Note that PI, Comments and element may be interleaved. Probably done. Section has been reworded but not in exactly the way the commentor suggests. I think the meaning is clear. M.Rys requests that the explanation of the sequence type document(E) specify that PIs, comments, and element E may be interleaved. Status: Recommend to reject since the explanation already says: "matches any document node that contains exactly one element node, optionally accompanied by one or more comment and processing instruction nodes, if E is ...". I believe this is clear and accurate. Section 2.4.4.2 Matching an ItemType and a Value Editorial Please replace "xs:IDREF*." with "xs:IDREF+." Xs:IDREFS is a sequence of at least one. Section 2.4.4 SequenceType Matching Editorial Please qualify ET in first two rules as known type. Section 2.4.4 SequenceType Matching Editorial Add a better example for a case of unknown types. For example, instance of myInt is being passed to a function from a module that checks for an instance of xs:integer inside the function but does not know the type myInt. Recommend to reject. I believe unknown types are clearly explained. Section 2.4.4 SequenceType Matching Editorial Please reword: "actual type of a given value " -> "actual type of a given sequence of items" (see item MS-XQ-LC1-001) Recommend to reject. Another complaint about the word "value". Section 2.4 Types Editorial Please reword: "XQuery is a strongly typed language with a type system based on [XML Schema]. " -> "XQuery's type system is based on [XML Schema]. " Note that we have both strongly (e.g., xs:string) and weakly (e.g., xdt:untypedAtomic) typed components in XQuery. M.Rys requests deletion of the phrase "strongly typed." Status: Accepted and implemented. 3.2.1 Steps Editorial (esthetical) The production [52] uses additional parentheses around each pair "axis name" "::", whereas the production [53] doesn't. Please use a consistent notation. Kind regards, Oliver Becker /-------------------------------------------------------------------\ | ob|do Dipl.Inf. Oliver Becker | | --+-- E-Mail: obecker@informatik.hu-berlin.de | | op|qo WWW: | \-------------------------------------------------------------------/ See [447] RESOLUTION: closed, fixed. see A small inaccuracy in 3.2.2 Predicates it reads: By contrast, |(preceding::foo)[1]| returns the first |foo| element in document order, -> correct ...because the axis that applies to the |[1]| predicate is the |child| axis. -> I dont see any child axis here If I am not mistaken, it should read something like "because the expression that applies to the |[1]| predicate is itself in document order" (being a whole path expression, and not a simple reverse step) "2.4 Predicates An axis is either a forward axis or a reverse axis. An axis that only ever contains the context node or nodes that are after the context node in document order is a forward axis. An axis that only ever contains the context node or nodes that are before the context node in document order is a reverse axis. Thus, the ancestor, ancestor-or-self, preceding, and preceding-sibling axes are reverse axes; all other axes are forward axes. Since the self axis always contains at most one node, it makes no difference whether it is a forward or reverse axis.." This is the first paragraph of section 2.4 Predicates. It deals exclusively with axes and its place is in section 2.2 Axes. Thanks, Dimitre Novatchev. __________________________________ Do you Yahoo!? Free Pop-Up Blocker - Get it now 2: ________________________________________________________________________ Partly done. The incorrect reference to Prolog has been removed. Hyperlinks have not been added (I don't think this is necessary.) I commented on an earlier draft see Mike kay's response here that the spec was unclear on the order returned by single step path expressions. The document does now say (in 3.2.1) <p>The result of an <b>axis step</b> is always a sequence of zero or more nodes, and these nodes are always returned in document order. which clarifies the result order, however the phrase <p>In a sequence of nodes selected by an axis step, each node is assigned a context position that corresponds to its position in the sequence. If the axis is a forward axis, context positions are assigned to the nodes in document order, starting with 1. If the axis is a reverse axis, context positions are assigned to the nodes in reverse document order, starting with 1. This makes it possible to select a node from the sequence by specifying its position.</p> which appears a little later ought to have a clarifying note. In Xpath 1 this was clear enough as it was specifying the current node list which was the only ordered construct and was always a transient thing not a first class object, however in Xpath2 where the result of the expression is itself ordered I think that most readers will miss the distinction between "The result of an <b>axis step</b>" in the first quote (document order) and "In a sequence of nodes selected by an axis step" in the second (reverse order) In particular there should be an explicit example somewhere close to this point that points out that ancestor-or-self::*[1] is the root of the current document and (ancestor-or-self::*)[1] is the current node. David ________________________________________________________________________ This e-mail has been scanned for all viruses by Star Internet. The service is powered by MessageLabs. For more information on a proactive anti-virus service working around the clock, around the globe, visit: ________________________________________________________________________ Done. The requested example of numeric predicates on reverse axes is found in the Predicates section. Hello In Xpath 1, a reader does not have to read far in to the document (5 pages out of 35 in my "print preview") before seeing examples of the fundamental "path" nature of Xpath: child::para selects the para element children of the context node In the Xpath 2 document, I fear the majority of readers will have given up in despair before ever seeing a Path. The description of steps comes after mountains of dense only marginally interesting on facts on typing syntax, diagrams of possible processing models, etc. Error handling (2.5) and Optional features (2.6) come before the reader has even seen any basic expression syntax. This all seems to be backwards. I now have to wait until page 47 of 89 (ie over half way in) before seeing the example child::para selects the para element children of the context node I suspect that you are not going to want to completely re-structure the document this late in the process (although that would be worthwhile I think) but if you don't do that, could you at least expand section 2 Basics to have some usable (to an end-user) description of what a simple Xpath expression looks like? Sorry that this isn't a particularly constructive comment, but it's hard to suggest specific re-organisation without following the details of your document build process, whether reordering sections for example could be purely a stylesheet matter or would require re-writing the source xml, also I realise that the sources are shared with the Xquery doc, although probably these comments apply equally to that. David ________________________________________________________________________ This e-mail has been scanned for all viruses by Star Internet. The service is powered by MessageLabs. For more information on a proactive anti-virus service working around the clock, around the globe, visit: ________________________________________________________________________ David Carlisle: wants document reorganized to introduce simple paths sooner. Status: Recommend to reject. Specification is not a tutorial. There are many places in the spec where terminology from XML Schema is used, without references or links. For example, bullet 4.c of section 3.10.2 contains the following sentence: "The input value is first converted to a value in the lexical space of the target type by applying the whitespace normalization rules for the target type; a dynamic error is raised if the resulting lexical value does not satisfy the pattern facet of the target type." We suggest you add links where you use XML-Schema-specific terminology. Schema WG requests more links to XML Schema, and gives an example location. Status: Partially done. A link has been provided in the sample location. Exhaustive search for other possible locations has not been done. Commenter may wish to submit suggest more specific locations where links are requested. The specification contains lots of technical terms. We feel that a glossary section would be very useful. Section 2.1.1 Static Context: In-scope Type definitions In-scope type definitions are the type definitions that are in scope during processing of an expression. These include XML Schema built-in type definitions, some pre-defined XPath type definitions and possibly user-defined type definitions as well. From the XPath spec: "[Definition: In-scope type definitions. The in-scope type definitions always include the predefined types listed in 2.1.1.1 Predefined Types. Additional type definitions may be provided by the host language environment.]" XML Schema distinguishes named types, which are given a QName by the schema designer, must be declared at the top level of a schema, and are uniquely identified by their QName, from anonymous types, which are not given a name by the schema designer, must be local, and are identified in an implementation-dependent way. Both named types and anonymous types can be present in the in-scope type definitions." - (in general) 'QName' (in the sense of identifiers in the * lexical space of xsd:QName) and 'expanded QName' (in the sense of members of the value space of xsd:QName) do not seem to be distinguished consistently. (Cf. definition of in-scope functions in 2.1.1) - We think there needs to be more explicit text (similar to what is in the Schema Rec for QName resolution) on - mapping from lexical QNames to expanded QNames and, - mapping from expanded QNames to types (and other things, like functions). - For "must be declared at the top level of a schema", we suggest recasting to something like "are top level". Also "must be local" to "are local". (that is, you do NOT want RFC 2116. wording here) Partly done, and partly overtaken by events. Section 2.1.1 Static Context: In-scope Element Declarations. The paragraph includes "An element declaration includes information about the substitution groups to which this element belongs." What does "substitution groups" refer to? Is this different from the {substitution group affiliation} of an XML Schema element declaration? In particular, note that {substitution group affiliation} names a single element; from your mention of 'substitution groups' in the plural we believe you may have in mind (a) the {substitution group affiliation} property itself [i.e. the plural is a typo], (b) the transitive closure of {substitution group affiliation}, or (c) its inverse (the list of elements which are in the substitution group headed by this element.) Note also (just in case) that the ability to substitute an element A for an element B depends not only on the {substitution group affiliation} property of A's declaration (and those of other possibly intermediate elements) but also on the {block} property of B's declaration. We think you may wish to describe the relevant information in terms of the "effective substitution group" headed by B, with a reference to the XML Schema spec.. Section 2.3.2 Typed Value and String Value, bullet 2 states: "The typed value of an attribute node with the type annotation xdt:untypedAtomic is the same as its string value, as an instance of xdt:untypedAtomic. The typed value of an attribute node with any other type annotation is derived from its string value and type annotation in a way that is consistent with schema validation." We think this may be clearer if you replace "in a way that is consistent with schema validation" with "using the lexical-to-value-space mapping described in XML Schema Part 2 for the relevant type." Section 2.4.1 Sequence Type: this section introduces a path language to designate some schema components. This appears to duplicate a subset of the functionality of SCDs. The XML Schema WG is chartered to produce a specification of SCDs. From the charter [1]: "the definition of a free-standing specification describing how to name or refer to arbitrary components in an XML Schema; in existing discussions these are sometimes referred to as normalized universal names or NUNs". Schema context paths as defined here and in the Formal Semantics appear to us broadly consistent with our existing SCD design. We wish to make SCDs as useful as possible; we note that you haven't used the existing SCD syntax, and we would like to know if the existing SCD syntax could be changed to make it more usable in the Query context. We believe discussion is needed on this topic. ***** XML Schema WG - ACTION (2004-01-21): AV and MH to work with Scott Boag to clarify possibilities for and issues surrounding possible alignment between XML Schema SCPs and QT SCPs. ***** XML Query WG - A-TAMPA-14 on Scott and Asir to see whether Schema's SCDs could be integrated with query or adapted to Query in a reasonable way. Scott and I have been working on this for some time. We have put together a paper that describes two proposals for using schema component path (from SCDs) in XQuery and XPath expressions. We request the XML Schema, XML Query and XSL WGs to consider them. Special thanks are due to Mary Holstege (Mark Logic) for reviewing this paper and providing us with detailed comments. Paper is at, Note that we have read the discussion titled "ACTION A-MAND-12: element(N) and friends" [1], but our proposal is not currently informed by it. We *think* that one of our proposals (we have two) and Michael Rys' proposal are very similar, except that our proposal makes use of a subset of Schema Component Designators. Chalk this up to "great minds think alike", and, the very least our proposal together with Michael's may be some validation that the approach of dividing up simple element tests from potentially complex schema component matching make sense. [1] Overtaken by events. XQuery no longer has a syntax for schema context paths. Section 2.4.1.1 Sequence Type Matching, bullet 6 of "ElementTest" states: "element(P), where P is a valid schema context path beginning with a top-level element name or type name in the in-scope schema definitions and ending with an element name." - What is a "valid schema context path"? There *is* BNF for "SchemaContextPath", but no additional constraints. What happens if the leading QName is the name of a local element? And what are the rules for the SchemaContextStep (other than it is a QName)? The same comment applies to "AttributeTest". - If we read the BNF and the Formal Semantics properly, the schema context path cannot actually begin with a type name, but only with either an element name or with the keyword 'type(' followed by a type name and a closing ')' perhaps this description should be revised. Overtaken by events. XQuery no longer has a syntax for schema context paths. Section 3.10.4 Constructor Functions states: "For each user-defined top-level atomic type T in the in-scope type definitions that is in a namespace, a constructor function is effectively defined." Perhaps the word 'effectively' should be replaced with 'implicitly'? With the current wording, we found ourselves wondering: who defines this constructor function for user-defined top-level atomic types? Is it the user, is it magic, implementation defined or implementation dependent? We found the sentence construction here a little difficult. Perhaps "A constructor function is implicitly defined for each ..." We also found "the in-scope type definitions that is in a namespace" a bit difficult to parse at first. B.1 Type Promotion, bullet 2 states: "A value of type xs:decimal (or any type derived by restriction from xs:decimal) can be promoted to either of the types xs:float or xs:double. The result is the value of the target type that is closest to the original value." There should probably be an algorithm, or reference to an algorithm, that defines how to find the value that "is the closest to the original value". I recommend rejection of this comment unless it can be made more specific. It suggest that an algorithm is needed to specify how to find the float or double value that is closest to a given decimal value. I don't know of such an algorithm, but I am open to suggestions.
http://www.w3.org/2005/04/xpath-issues.html
CC-MAIN-2014-42
refinedweb
19,748
63.39
> > > 2) Introduce a <tagdef> or <roledef> for the purpose > > > of locating extension points as nested elements. > but I looked at the code and realized that the type > handling in ant would become too complicated - there I see your point. Increased complexity is not good. But is it really that complex? This new mechanism would only kick in when IH would overwise have rejected a nested element, and only add lookup into a new mapping when the current type has an add(Type) extension point. True, their can be tricky cases when a type has both an add(Condition) and a add(FileSelector), but that should be pretty rare, and could be thrown out as ambiguous. It could even be resolved using the undocumented ant:type attribute. > It would however be nice to get something that > allows add(Condition) or add(FileSelector) to work > without having to extend BaseCondition or whatever > is done for FileSelector . Precisely. Unless I'm mistaken, this can already be achieved now by using namespaces, and the condition antlib. So AssertTask could be made ConditionBase independent now, no? --DD --------------------------------------------------------------------- To unsubscribe, e-mail: dev-unsubscribe@ant.apache.org For additional commands, e-mail: dev-help@ant.apache.org
http://mail-archives.apache.org/mod_mbox/ant-dev/200609.mbox/%3C255d8d690609110824g4e7664e2hf34d3f4564bf1b96@mail.gmail.com%3E
CC-MAIN-2014-15
refinedweb
201
55.95
mbw2001 with the updater I used API v2 which is now dead. If you look here it was adapted for v3. Might be helpful (2012-06-28, 08:20)_Mikie_ Wrote: The one above they edited so I don't know what exactly is going on there. The original one from HTPC Manager should use the version.txt. I was just referencing the API because the original HTPC Manager uses the old API. Is there a scheduler to check regularly for updates or is it all manual? from apscheduler.scheduler import Scheduler SCHEDULE = Scheduler() SCHEDULE.add_interval_job(checkGithub, hours=6) SCHEDULE.start() (2012-07-04, 08:25)_Mikie_ Wrote: For everyone who used to use this I think that we are back to plus minus where we were before development stopped. (2012-07-05, 08:16)_Mikie_ Wrote: @SlackMaster Valid point. Maybe a chosen number with a "show more" to show everything? (2012-07-05, 18:30)maruchan Wrote: Any chance that CouchPotato functionality will be included at some point? Would be great to have a single interface for managing everything.
https://forum.kodi.tv/showthread.php?pid=1141310
CC-MAIN-2019-43
refinedweb
180
67.65
Dimitriy V. Masterov > This? I have been using Vim since January and it works great for me. I particularly enjoy being able to customize keyboard mappings, virtually no limits on the number of mappings. My _vimrc file follows. The Stata relevant lines are: autocmd FileType stata set tabstop=4 shiftwidth=4 expandtab smartindent map <F8> :!start "c:/program files/stata8/wstata.exe" do "%:p" Note the F8 mapping allows you to run do files. Patrick Joly -----_vimrc-----<begin here>---- set nocompatible source $VIMRUNTIME/vimrc_example.vim source $VIMRUNTIME/mswin.vim behave mswin colorscheme darkslategray " Win32 only - maximize Window (i.e. simulates Alt-space, then x) " simalt ~x " doesn't seem to work, in does from command-line though " maximize using :winpos and :set ... instead winpos 0 0 set lines=39 set columns=125 set shortmess+=I set sessionoptions+=unix,slash set sessionoptions+=winpos,resize set sessionoptions-=options set nobackup if has("gui_win32") set diffexpr=MyDiff() function MyDiff() let opt = '' if &diffopt =~ 'icase' | let opt = opt . '-i ' | endif if &diffopt =~ 'iwhite' | let opt = opt . '-b ' | endif silent execute '!"C:\Program Files\vim\vim62\diff" -a ' . opt . '"' . v:fname_in . '" "' . v:fname_new . '" > "' . v:fname_out . '"' endfunction endif if has("gui_running") if has("gui_gtk2") set guifont=Courier\ New\ 10 elseif has("x11") set guifont=-*-courier-medium-r-normal-*-*-180-*-*-m-*-* else set guifont=Courier_New:h10:cDEFAULT endif endif filetype on autocmd FileType perl set tabstop=4 shiftwidth=4 expandtab autocmd FileType stata set tabstop=4 shiftwidth=4 expandtab smartindent " enable POD syntax high-lighting let perl_include_POD = 1 " changes how Perl displays package names in references (such as $PkgName::VarName) let perl_want_scope_in_variables = 1 " complex variable declarations such as @{${var}} let perl_extended_vars = 1 " treat strings as a statement let perl_string_as_statement = 1 map <F8> :!start "c:/program files/stata8/wstata.exe" do "%:p" map <F5> :!perl "%" map <F6> :!start perl -d "%" map <F7> :!podchecker "%" map <F2> :buffers<CR>:sb map <F3> :buffers<CR>:e # map <C-Tab> <C-W>w map ,gm1 :%s/\([0-9-]\+\) \+\([0-9-]\+\)/\1,\2/gc map ,gm2 :%s/)\s*$/)./gc " t stands for tourism, v for vnumbers map ,tv1 :3,$s/^[^V]\+.*\n//gc map ,tv2 :%s/^\(V\d\+\) .*$/\1/gc map ,tv3 :3,$s!\n! !gc " v stands for vnumbers, (CANSIM II 'view vector directory') map ,v1 :%s/\s\+$/ map ,v2 :%s/\s\+\d\+-\d\+-\d\+.*$//g map ,v3 :%s/\n^\s\+/ /g map ,v4 :g/Current prices/d map ,v5 :%s/\d\.\d.[^\s]\+\s\+//g map ,v6 :%s/\(v\d\+\).*$\n/\1 /gc " perl one-liners map ,p1 :!perl -pi.bak -e 's/\r//g' "%:p" -----_vimrc-----<ends here>---- * * For searches and help try: * * *
http://www.stata.com/statalist/archive/2004-06/msg00624.html
CC-MAIN-2015-35
refinedweb
433
59.5
I would like to know how can I query an array of objects. For example I have an array object like CarList. So CarList[0] would return me the object Car. Car has properties Model and Make. Now, I want to use linq to query the array CarList to get the Make of a Car whose Model is say "bmw". I tried the following var carMake = from item in CarList where item .Model == "bmw" select s.Make; Could not find an implementation of the query pattern for source type CarList[] Add: using System.Linq; to the top of your file. And then: Car[] carList = ... var carMake = from item in carList where item.Model == "bmw" select item.Make; or if you prefer the fluent syntax: var carMake = carList .Where(item => item.Model == "bmw") .Select(item => item.Make); Things to pay attention to: item.Makein the selectclause instead if s.Makeas in your code. itemand .Modelin your whereclause
https://codedump.io/share/0R2YGQOmu3Vb/1/query-an-object-array-using-linq
CC-MAIN-2017-39
refinedweb
156
79.77
**I'm to the point I think where I can ask a user to input a string value, answer is Hourly. if hourly the user enters hours, then the program prints out value in currency format answer is Salary user enters which salary level Level 1 is for recent college grads Level 2 is for executives level one and level 2 have different salary levels and pay is calulated on a bi-weekly pay schedule** import java.io.*; public class Payroll { public static void main( String args[] )throws IOException { System.out.println("Are you an Hourly or Salary worked?"); BufferedReader br; br = new BufferedReader( new InputStreamReader( System.in ) );//gathers data from user String line = br.readLine(); if(line.equals("Hourly")) { System.out.println("Enter Hours Worked"); System.out.println("Enter Pay Rate"); } [COLOR="Red"] //a. If hourly then ask for the following //i. Hours worked //ii. Pay Rate //1. If the hours worked is greater than 40 hours then you must calculate total //pay for the first 40 hours at the pay rate given and then calculate hours //above 40 at time and a half[/COLOR] // else if(line.equals("Salary")) { System.out.println("Enter a Salary level 1 or 2"); } //Level 1 is for recent college grads //Level 2 is for executives else { System.out.println(" "); } } } //iii. The bottom line is just set to constant variables with the yearly //salary for either level and calculate the pay based on bi-weekly pay schedule. //4. Be able to print/show the results for the date taken in after all data has been entered. //5. Please put this into a loop that will allow more than one entry. //FORMAT THE RESULT AS A CURRENCY (w/ a $ sign in front of it)
https://www.daniweb.com/programming/software-development/threads/116925/payroll-application-sos
CC-MAIN-2017-26
refinedweb
290
64.3
Linking patterns At the heart of Linked Data is a simple idea: a client searches its “current data” for links, follows them (normally via HTTP), and receives “more data”. This “more data” is subject to a number of variables. Among them: - Significance and/or authority. “More data” can be critical to any processing application, or merely extra bits of limited interest. - Model and syntax. “More data” could be in any of a number of RDF serializations, or in other machine-readable formats like “plain old” XML, or even human-readable prose in HTML or PDF. - Vocabulary. In the case of machine-readable formats, especially RDF, the same information can be expressed with various common or custom vocabularies. - Size. Data about a city could include as little as its name, or as much as a complete list of streets and major buildings, potentially multiple MiB in size. Different applications have different needs when it comes to gathering “more data” about resources. For example, a generic RDF crawler might follow all links to build up a comprehensive database, but it might not be interested in non-RDF objects. On the other hand, specialized user agents may seek specific info about a resource of interest, and may be prepared to glean it from a variety of sources from RDF to natural language. All these applications could benefit from a shared set of linking patterns, which would allow a link to describe itself with regard to the above variables. Then, a Linked Data client could simply look at a link and decide if it wants to follow it or not. A number of such patterns already exist, but they are scattered over many documents, or often not specified at all, and there is some confusion as evidenced by a January 2011 thread on the public-lod mailing list. This wiki page aims to gather informal recommendations on common linking patterns. Contents “Follow your nose” The most basic linking mechanism in Linked Data stems from the original guideline: When someone looks up a URI, provide useful information, using the standards (RDF*, SPARQL) To dereference a URI in search for information about it is sometimes called to “follow your nose”. It works because URIs in Linked Data are HTTP URIs, and moreover specially designed for such usage. Information thus received is often though of as “authoritative”, because it comes from the same domain as the URI, so the “owner” of the information and the “owner” of the resource are the same. The Linked Data tutorial discusses in detail what this “authoritative” description should include. “Nose-following” does not work with information resources that cannot embed RDF. For example, the W3C logo in PNG format cannot describe itself at its own URI. In this case, other approaches can be followed. Existing RDF terms rdfs:seeAlso rdfs:seeAlso is a widely-used predicate for linking a resource to information about it. It is defined by the RDF Schema specification: A triple of the form: S rdfs:seeAlso Ostates that the resource O may provide additional information about S. It may be possible to retrieve representations of O from the Web, but this is not required. When such representations may be retrieved, no constraints are placed on the format of those representations. However, some believe that rdfs:seeAlso should only point to RDF data of limited size, in particular because of how it is used in the FOAF project and the Tabulator data browser. Other examples of seeAlso usage: - the WordPress SIOC exporter mirrors the blog’s pagination in its RDF output and uses rdfs:seeAlsoto point to other pages - Yahoo! SearchMonkey recommends rdfs:seeAlso(together with media:image) for pointing to an image representing a product rdfs:isDefinedBy The RDF Schema spec also introduces isDefinedBy—a subproperty of seeAlso that is defined thus: A triple of the form: S rdfs:isDefinedBy Ostates that the resource O defines S. This property is used for linking RDFS classes and properties to the schema documents that define them, see for example the organization ontology. Questions: what exactly does “define” mean? can isDefinedBy be used as an equivalent to nose-following when the latter is not available for some technical reasons? wdrs:describedby A predicate defined by the POWDER specification, intended for linking a resource to its description. It has been discussed as an equivalent to rdfs:isDefinedBy for instance (non-vocabulary) data, especially in the context of 303-less “toucan publishing”. So maybe it can be used as an equivalent to nose-following, again in cases where the latter is problematic. This is however complicated by an error in the POWDER spec. FIXME: The “toucan publishing” link is now broken. The current one might be . foaf:page and foaf:homepage foaf:page, defined in the FOAF vocabulary, “relates a thing to a document about that thing”. Since FOAF’s notion of “document” includes all kinds of data, foaf:page is theoretically equivalent to rdfs:seeAlso. In practice, however, foaf:page is more often? used to link to human-readable documents, whereas seeAlso is better suited for links to more RDF. foaf:homepage is a specialization of foaf:page: A 'homepage' in this sense is a public Web document, typically but not necessarily available in HTML format. The page has as a topicthe thing whose homepage it is. The homepage is usually controlled, edited or published by the thing whose homepage it is; as such one might look to a homepage for information on its owner from its owner. This works for people, companies, organisations etc. Note that foaf:homepage is an inverse functional property, i.e. two things cannot have the same foaf:homepage. XHTML vocabulary The XHTML+RDFa spec defines several navigational predicates, among them xhv:prev, xhv:next, xhv:section, xhv:first, xhv:last. It doesn’t mention any limits on the resources’ types, so presumably? they could be RDF as well as anything else. FIXME: where’s the equivalent for RDFa 1.1? - In particular, the Linked Data API uses these terms for linking between pages in list output. However, their semantics are broader, for example xhv:nextcan point to a next chapter in a book. dc:format A “format” property is available both in the new and old Dublin Core namespaces. It could be used in conjunction with linking predicates to indicate the format of data in advance. For example: </id/something> foaf:page </paper.pdf> . </paper.pdf> dc:format <> . → from this an LD client would know that paper.pdf is a PDF document, and there’s probably no use retrieving it (if the client only expects RDF triples). FIXME: what are the real URIs for Internet media types? xtypes </id/something> rdfs:seeAlso </mysterious-url> . </mysterious-url> rdf:type xtypes:Document-RDFSerialisation . See suggestions on improvement of this namespace to cjg@ecs.soton.ac.uk
http://www.w3.org/2001/sw/wiki/Linking_patterns
CC-MAIN-2014-52
refinedweb
1,139
54.22
Opened 5 years ago Closed 5 years ago Last modified 5 years ago #2820 closed Bug (Fixed) WIN(all) handle bug Description Strange bug for almost all functions related to the search window. If the window handle, for example 0x00110758, then the same window be found from 0x00000758 (LoWord) It generates such bugs as: WinExists('192.168.0.1 - WINDOW NOT EXISTS') - This CODE return TRUE and finds window handle 0x000000C0, but the original handle 0x003100C0 WinGetHandle ("192.txt - THIS WINDOW NOT REALY EXISTS") - this return not valid handle #include <WinAPI.au3> $h = WinGetHandle('[CLASS:SciTEWindow]') ConsoleWrite('Handle 1: ' & $h & @CRLF) $hLo = _WinAPI_LoWord($h) ConsoleWrite('Handle 2: ' & WinGetHandle($hLo) & @CRLF) Here function WinGetHandle return wrong result to. Bug is not observed on x64 systems. Autoit 3.3.10.0 - 3.3.x.x Attachments (0) Change History (11) comment:1 Changed 5 years ago by Jon comment:2 Changed 5 years ago by Jpm I don't know why you want to use only the low part of an handle an handle is 32-bit under Autoit-32 and 64-bit under AutoIt-64 WinGetHandle is supposed to work with title/text. Weird the low part is pointing to the same Windows just use the return vale as a whole comment:3 Changed 5 years ago by jchd18 comment:4 Changed 5 years ago by anonymous WinGetHandle is supposed to work with title/text. OK. But why WinExists("192.168.0.1 - WINDOW NOT EXISTS") returns 1? I don't have window with title "192.168.0.1 - WINDOW NOT EXISTS" and can't find window with handle 0x000000C0 (tried to use WinList, _WinAPI_EnumWindows, _WinAPI_EnumChildWindows). Or "192.168.0.1 - WINDOW NOT EXISTS" is not a string? comment:5 follow-up: ↓ 6 Changed 5 years ago by BrewManNH It doesnt't return 1 for me, it returns 0x0000000000000000 with an @error set to 1. I used this code, modified from the help file example, which should be just as valid. #include <MsgBoxConstants.au3> Example() Func Example() ; Run Notepad ;~ Run("notepad.exe") ; Wait 10 seconds for the Notepad window to appear. ;~ WinWait("[CLASS:Notepad]", "", 10) ; Retrieve the handle of the Notepad window using the classname of Notepad. Local $hWnd = WinGetHandle("[CLASS:Notepad]") If @error Then ConsoleWrite('@@ Debug(' & @ScriptLineNumber & ') : $Error code: ' & @error & @CRLF) ;### Debug Console ConsoleWrite(VarGetType($hWnd) & @CRLF) MsgBox($MB_SYSTEMMODAL, "", "An error occurred when trying to retrieve the window handle of Notepad.") Exit EndIf ConsoleWrite('@@ Debug(' & @ScriptLineNumber & ') : $Error code: ' & @error & @CRLF) ;### Debug Console ; Display the handle of the Notepad window. MsgBox($MB_SYSTEMMODAL, "", $hWnd) ; Close the Notepad window using the handle returned by WinGetHandle. WinClose($hWnd) EndFunc ;==>Example BTW, using WinExists instead of WinGetHandle returns 0 with an error of 0. comment:6 in reply to: ↑ 5 Changed 5 years ago by anonymous It doesnt't return 1 for me, it returns 0x0000000000000000 with an @error set to 1. I used this code, modified from the help file example, which should be just as valid. ... BTW, using WinExists instead of WinGetHandle returns 0 with an error of 0. It looks like this bug is only in x86 comment:7 Changed 5 years ago by anonymous Win7 x86 v 3.3.8.x MsgBox(0, "", WinExists("192.txt")) ; 0 MsgBox(0, "", WinExists(0x000000C0)) ; 0 MsgBox(0, "", WinExists(HWnd(0x000000C0))) ; 1 v 3.3.10.x + MsgBox(0, "", WinExists("192.txt")) ; 1 MsgBox(0, "", WinExists(0x000000C0)) ; 1 MsgBox(0, "", WinExists(HWnd(0x000000C0))) ; 1 Which version works correctly? comment:8 Changed 5 years ago by Jon Ok, something is wrong there. Let me run it through the debugger. comment:9 Changed 5 years ago by Jon It looks like it was a change trancexx made in Dec 2011 that just missed the 3.3.8.0 release and is only now showing up. The change was what to do when a window match didn't occur. Basically if a window match fails it tries to interpret passed strings as window handles and then tries again. Oddly "192.txt" messes up when it is forced to convert from a string into a handle and ends up evaluating as 192 or C0. And that happens to be a valid handle. This will possibly cause false matches with any string window titles that start with numbers. I'll have to remove the change or improve the conversion so that these errors don't occur. comment:10 Changed 5 years ago by Jon - Milestone set to 3.3.13.14 - Owner set to Jon - Resolution set to Fixed - Status changed from new to closed comment:11 Changed 5 years ago by anonymous How about MsgBox(0, "", HWnd("192.txt")) ; 0x000000C0 MsgBox(0, "", HWnd("log.txt")) ; 0x00000000 Guidelines for posting comments: - You cannot re-open a ticket but you may still leave a comment if you have additional information to add. - In-depth discussions should take place on the forum. For more information see the full version of the ticket guidelines here. Seems to be some weird quirk of the Windows API. Outside of autoit I can get a handle to a window 0x00960608 and yet IsWindow(0x608) seems to point to the same window. I can even get window text from the same windows using both handles.
https://www.autoitscript.com/trac/autoit/ticket/2820
CC-MAIN-2020-05
refinedweb
868
72.56
Some thoughts for a 2010 evolution of the Java Support in SCons The current support for Java in SCons is fairly simplistic and is rooted in a pre-Groovy, pre-Scala world. The following are some thoughts designed to start a review and refactoring (or reimplementation) of the JVM-based languiages support in SCons. Currently the DAG built by SCons is supposed to have knowledge about every single generated file so that the engine can work out which files need transformation. In a world where there is 1 -> 1 relationship between source file and generated file (as with C, C++, Fortran, etc.) or where there is 1 -> n relationship where the various n files can be named without actually undertaking the compilation, things are fine. For Java, and even more extremely for languages like Groovy, it is nigh on impossible to discover the names of the n files without running the compiler -- either in reality or in emulation. This means you have to reason in terms of the 1 -> 1 and not worry about the n-1 files that also get generated. This works fine for forward transformation, but SCons has an issue that none of the other systems have, it treats cleaning as "unbuild" not as a target "clean". For this it must know what got built. However, unless there is a post-processing stage that manipulates the stored DAG after compilation, there is no way this can happen. The idea of running the actual compiler to gain the information in order to decide whether to run the compiler is clearly not the way forward. The only other alternative is to have a special Java/Closure/Scala/JRuby/Jython SideEffect builder which realizes the ideas of the Ant glob "**/*.class". This is, in effect, what Maven, and Gradle do -- Ant and Gant require the user to specify explicitly what it means to clean so they use remove tasks explicitly. So SCons has to compromise here and allow for not having all build products specified in the DAG. Relying on only considering the 1 -> 1 transformation of: to determine whether to compile seems good enough, the reason being that all the other generated classes will always be regenerated anyway. The Big Downside is clearly what happens if someone removes or damages an untracked generated file? Well in this case the tests fail and the developer will usually do a "rebuild project" which means full clean out. This may seem strange to C, C++, Fortran, SCons folk, but it ends up being the fastest way of solving things in a number of cases, not just in the one above, but in the case where downloaded dependent jars are out of sync. C, C++, Fortran, etc. have the idea of compiling against a library, but it is assumed that the library is either there (in which case transformation proceeds) or it isn't (in which case transformation stops). The whole Autotools/Waf/SCons philosophy has been based on this. The Java build structure introduces an extra intermediate step -- if the jar I need isn't already here can I go and get it from the Maven repository (implemented using either Maven, Ivy or something built on them as Gradle does). This is something SCons should be able to handle in its DAG since this is the way ultimately Maven, Ivy and Gradle do things. Given the Object and Program builders, then Classes seems like the builder to cover Java, Groovy and Scala compilation since builder names generally indicate the target not the source. Then we can have Jar, War, Ear. These should be OSGi-fy from the outset so the BND tool will be a dependency that SCons will have to carry. Material on this page from 2005 and 2008 Multi-Step Java Builder patch and build example: by LeanidNazdrynau This is another Java-Build-Run-Code. Builds Java files with classpath containing any directory within the project, recursively parses the classpath directory to include all files .class and .jar files, within it. Can compile a single file or complete src directory. Runs, class files, with the same classpath principle. Easy to change and modify, by BabarAbbas. This is first implementation of multi-step Java builder. It re-defines next builders: Jar, JavaH, Java, JavaDir, JavaFile, where - Jar has src_builder=Java - JavaH has src_builder=Java Java has src_builder=JavaFile (but I am not using it now) JavaDir - build all java in directories (what current Java builder does), I just could not mixed together Java, which takes files as source and JavaDir, which takes directories as source in one builder declaration. I hope this builder can be combine with Java later on. They do use the same functions in javac.py tool. What you can do with it: With Multi-step builders you can simply define build Jar file and specify .java. Or you can add swig.py builder to it and use .i as input to Jar builder, like: - Jar(['Sample.i','A.java']) In this call swig builder will build .java from .i files and send it to Java builder, which will build .class files and send them to Jar builder, which will generate .jar file all in one call, so your Java can work similar as C/C++ builds. From patch above download java build example: project.zip file. This example tested on Windows with BuildDir set and duplicate=0. You have to have JDK and swig in your path. Example demostrate: - src/HelloApplet - build jar file from directory and pottentially sign it. #this is regulat Java build for scons import os Import ("env") denv=env.Copy() classes=denv.JavaDir(target='classes',source=['com']) #set correct path for jar denv['JARCHDIR']=os.path.join(denv.Dir('.').get_abspath(),'classes') denv.Jar('HelloApplet',classes) src/server - build classes from Java directories and create .war file, which includes built class files, WEB-INF and built HelloApplet.jar import os Import ("env") classes=env.JavaDir(target='classes',source=['com']) env['WARXFILES']=['SConscript','.cvsignore'] env['WARXDIRS']=['CVS'] env.War('scons',[classes,Dir('../WebContent'),'#/buildout/HelloApplet/HelloApplet.jar']) - src/jni - JNI interface. Will build Shared library and Jar file, Will use swig to generate interface Java and C++ files. Deactivate build in this directory if you do not have swig installed. Import ("env") denv=env.Copy() denv.Append(SWIGFLAGS=['-java']) denv.SharedLibrary('scons',['JniWrapper.cc','Sample.i']) denv['JARCHDIR']=denv.Dir('.').get_abspath() denv.Jar(['Sample.i','A.java']) - src/javah - JNI. Will build shared library and Jar file. Will use JavaH to generate C++ header files. Import('env') denv=env.Copy() denv['JARCHDIR']=denv.Dir('.').get_abspath() denv.Jar('myid','MyID.java') denv.JavaH(denv.Dir('.').get_abspath(),'MyID.java') denv.SharedLibrary('myid','MyID.cc')
http://www.scons.org/wiki/JavaSupport
CC-MAIN-2013-48
refinedweb
1,117
54.83
Proposed features/Harbour harbour=yes + harbour:name= + harbour:LOCODE=AAAAA The National Geospatial-Intelligence Agency provide a MS Access databse with information to approximately 64.000 harbours worldwide under the following link: NGA World Port Index. Some of the information that might be useful to the project includes: - World Port Index Number - Region Index - Main Port Name - Country Code - Latitude Degrees - Latitude Minutes - Latitude Hemisphere - Longitude Degrees - Longitude Minutes - Longitude Hemisphere - Harbour size code - Harbour type code - Shelter afforded code - Entrance restriction tide - Entrance restriction swell - Entrance restriction ice - Channel depth - Anchorage depth - Cargo pier depth - Oil terminal depth - Tide - Maximum size vessel code - Good holding ground - Turning area - First port of entry - US representative - ETA message - Pilotage available - Drydock --Neutronstar2 14:26, 6 August 2008 (UTC) A harbour (also port, North American English Harbor) is a place where ships, boats, and barges can seek shelter from stormy weather. This article describes some of the many features associated with a harbour and the approach from the sea. Sample harbour - Rock coast - Natural jetty - Stone coast - Footbridge or raft - Slipway - Quay - Bollard - Gate - Dry dock, tidal harbour - Pier - Anchorage - Sandy beach - Beach - Stone jetty - Harbour jetty with pier inside - Harbour jetty with rock fill outside - Coastline - Harbour as a node Mapping a harbour The harbour node Every harbour should be marked with a harbour=yes node to which all related properties are assigned as detailed below. A large harbour will also have a area using landuse=harbour and this may also be divided into several harbour basins which many have their own names. If, in a large harbour, there are several harbour basins designated for a specified uses, then each harbour basin can be tagged with its own icon. Please only tag harbours with international or nationwide importance as main-harbours. Main-harbours are also shown in lower zoom levels. The harbour basin is surrounded by jetties or piers but should not be tagged as a separate area of water. To contain that many properties a new namespace harbour: is used. Name The name=* should be the set to the official name for the harbour in the local language. An alternative name can be put in alt_name=*. Names in other languages can be put in name:de, name"en, name:fr etc. Type The type of the harour should be in the category field. Also: The harbour area The harbour site (land area, not area of water ), mostly fenced or outlined by hedge, barrier or access restriction is tagged on land using landuse=harbour. It should include: dry anchorage, car parks, service buildings, dockyard, storages, container handling area, garages, sanitation, etc and also various associated facilities including: restaurant, clubhouse, supermarket, ship chandler, etc., as long it is inside of premises. All fixed walls associated with a harbour should be mapped as being part of the natural=coastline or waterway=riverbank. These include external and internal harbour wall, wharfs, for every masonry pier and unfortified shores. Lighter elements, such as footbridges, rafts, landing-stages are deemed to be part of the sea rather than defining te edge of the sea. The coastline can be divided into separate segments with appropriate tagging, a pier may, for example, have two different tags on either side. natural=coastline + material=* ( concrete | masonry | boulder | tripode | steel | wood ) + mooring=* (Landing for …) - Size The size respectively importance of a harbour is sorted into 4 classes in the world port index: - Restrictions - Communication data - Schema Associated features A harbour contains and has many associated features. Here are details of some of them: Amenities Harbour relevant facilities e.g.: - harbour master (amenity=harbourmaster) - customs - ship fuel station (amenity=fuel) - drinking water (amenity=drinking_water) - trash disposal (amenity=waste_disposal) - oil recycling - toilets (amenity=toilets) - boat storage (amenity=boat_storage) Breakwater A breakwater is a solid or nearly solid structure used to reduce the power of the waves within a harbour or to reduce coastal erosion. A breakwater in OSM is defined as "stand alone" (no connection to the shore - if a wave protection building is connected to shore, see Groyne). It is not normally possible to to walk on a breakwater and that are not used to access moored boats. Use man_made=breakwater. Dolphin A dolphin is structure made of wood, metal (sometimes with hard rubber covering) or concrete sticking out of shallow water which can either serve as markings of navigational channels to mark shoals or as an extended mooring for vessels to extend a pier. Proposed tagging for a mooring 'dolphin' is as a node with seamark:type=mooring ; seamark:mooring:category=dolphin Entrance The harbour entrance is how boats and ships enter the harbour. This is not explicitly tagged but is likely to have navigation bouy=*s nearby and may be formed from a combination of the natural=coastline and man_made=breakwaters. Groyne A groyne is a timber structure or a linear pile of rocks stretching across a beach a short distance into the sea to reduce erosion. These should be tagged with man_made=groyne. Mooring A mooring is a place where boats can be fastened to a fixed object such as a bollard, pier, quay or the seabed and should use mooring=*. Moorings may be alongside a man_made=pier, on a sea wall or as a floating mooring buoy=* attached to the seabed. The mooring tag should be added to the relevant feature. In the case of a sea wall then the natual=coastline itself can be used. Do however be very careful when doing anything to the coastline as it can cause difficulties if the coastline gets broken. If it is a permanent anchorage it can be complemented with the name of the vessel mooring:name=* respectively mooring:operator=* for the operator. Pier A pier (also jetty, landing stage, footbridge) is a wooden or other solid structure on metal,wooden or concrete legs or floating which is used for the mooring, loading and unloading of vessels. Larger pier can include buildings and possibly amusement arcades all build on columns out over the sea. Piers should be tagged using man_made=pier. Use a linear way for narrower ones used to access boats and areas for larger piers with buildings. A floating=yes if the structure floats on the water Platform A Platform is a oil platform, offshore platform such as: Deepwater Horizon. Tag with harbour=yes and harbour:category=oil_platform -- Question. Is this relevant to the harbour article? PeterIto 08:28, 9 June 2011 (BST) Pontoon A pontoon is a huge anchored harbour facility or a moving swimming, sometimes self driving working platform. -- Question: should we map moving things? I suggest we only do if in practice it does not move. What tagging should be used for this feature? PeterIto 00:12, 8 June 2011 (BST) Quay A quay is a length of coast designed to allow vessels of some size to moor alongside the shore. In an industrial harbour a quay may have cranes and facilites to unload cargo. For passenger vessels it may have facilities for passengers to embark. A quay will normally be tagged as part of the coastline natural=coastline. -- Question: how should this be tagged? PeterIto 00:17, 8 June 2011 (BST) Slipway A slipway is used to launch a vessel and consists of a ramp or a carriage on rails. It continues under water until the water depth is reasonable. Tag with leisure=slipway and optionally also operator=* and operating=* ( hand | car | cable_winch | travellift ). Further information:o Ship launching, boat lift, Travellift Routes Only permanent routes are mapped. They begin at the start harbour via stopover harbours to the harbour of destination. More than one route on a passage is connected between the junctions to one node. Preferences "Harbour" for JOSM For Editor JOSM we have comfortable preferences for editing all relevant objects in harbours. The preferences are very detailed; the user can edit the most important data like a checklist. To use the "Harbour" preferences: - start JOSM and press key <F-12> - click on the left side the "coordinate grid" icon(3rd icon from top: "Settings for the map projection and data interpretation.") - click on the top on "Tagging Presets" tab - highlight in the lower window "Harbour", click "Activateand "OK" By doing this, the preferences "Harbour" will be copied in the upper window. - start JOSM again. The preference "Harbour" you will now find in JOSM-menu under "Preferences". Other resources LOCODE The World Port Index contains 4300 harbours in 400 regions. Every harbour has a 5-digit number as index. See also - Marine Mapping - Harbour map - Proposal Page - INT 1 international nautical chart symbols - OpenSeaMap (Example rendering of a sea chart) - World Port Index [1]
http://wiki.openstreetmap.org/wiki/Proposed_features/Harbour
crawl-003
refinedweb
1,442
51.07
This page provides information on changes in all released stable versions of the NDK. To download the latest stable version of the NDK or any currently available beta version, see the NDK downloads page. Android NDK, Revision r22b (March 2021)Changelog - Downloads - - Announcements - GNU binutils is deprecated and will be removed in an upcoming NDK release. Note that the GNU assembler ( as) is a part of this. If you are building with -fno-integrated-as, file bugs if anything is preventing you from removing that flag. If you're using asdirectly, use clanginstead. - LLD is now the default linker. ndk-build and our CMake toolchain file have also migrated to using llvm-ar and llvm-strip. - ndk-gdb now uses lldb as the debugger. gdb is deprecated and will be removed in a future release. To fall back to gdb, use --no-lldb option. But please file a bug explaining why you couldn't use lldb. std::filesystemsupport is now included. There are two known issues: - Issue 1258: std::filesystem::perm_options::nofollowmay not be honored on old devices. - Issue 1260: std::filesystem::canonicalwill incorrectly succeed when passed a non-existent path on old devices. Android NDK LTS, Revision r21e (January 2021)Changelog - Downloads - - Announcements 32-bit Windows is no longer supported. This does not affect the vast majority of users. If you do still need to build NDK apps from 32-bit versions of Windows, continue using NDK r20. For more information on this change within Android Developer tools, see the blog post on the topic. - LLD is now available for testing. AOSP has switched to using LLD by default and the NDK will follow (timeline unknown). Test LLD in your app by passing -fuse-ld=lldwhen linking. Note that Issue 843 will affect builds using LLD with binutils strip and objcopy as opposed to llvm-strip and llvm-objcopy. - The legacy toolchain install paths will be removed over the coming releases. These paths have been obsolete since NDK r19 and take up a considerable amount of space in the NDK. The paths being removed are: - platforms - sources/cxx-stl - sysroot - toolchains (with the exception of toolchains/llvm) make_standalone_toolchain.pyusers (though that script has been unnecessary since r19). For information on migrating away from the legacy toolchain layout, see the Build System Maintainers Guide for the NDK version you're using. - The Play Store will require 64-bit support when uploading an APK beginning in August 2019. Start porting now to avoid surprises when the time comes. For more information, see this blog post. - A macOS app bundle that is signed and notarized is now available for download from our wiki and our website. Note that because only bundles may use RPATHs and pass notarization, the traditional NDK package for macOS cannot be notarized. The SDK will continue to use the traditional package as the app bundle requires layout changes that would make it incompatible with Android Studio. The NDK is not quarantined when it is downloaded via the SDK manager, so is curently allowed by Gatekeeper. The SDK manager is currently the most reliable way to get the NDK for macOS. Android NDK, Revision r20b (June 2019)Changelog - Downloads - - Announcements - LLD is now available for testing. AOSP is in the process of switching to using LLD by default and the NDK will follow (timeline unknown). Test LLD in your app by passing -fuse-ld=lldwhen linking. - The Play Store will require 64-bit support when uploading an APK beginning in August 2019. Start porting now to avoid surprises when the time comes. For more information, see this blog post. - Added Android Q APIs. Android NDK, Revision r19c (January 2019)Changelog - Downloads - - Announcements - Developers should begin testing their apps with LLD. AOSP has switched to using LLD by default and the NDK will use it by default in the next release. BFD and Gold will be removed once LLD has been through a release cycle with no major unresolved issues (estimated r21). Test LLD in your app by passing -fuse-ld=lldwhen linking. Note: lld does not currently support compressed symbols on Windows. Issue 888. Clang also cannot generate compressed symbols on Windows, but this can be a problem when using artifacts built from Darwin or Linux. - The Play Store will require 64-bit support when uploading an APK beginning in August 2019. Start porting now to avoid surprises when the time comes. For more information, see this blog post. - Issue 780: Standalone toolchains are now unnecessary. Clang, binutils, the sysroot, and other toolchain pieces are now all installed to $NDK/toolchains/llvm/prebuilt/<host-tag>and Clang will automatically find them. Instead of creating a standalone toolchain for API 26 ARM, instead invoke the compiler directly from the NDK: $ $NDK/toolchains/llvm/prebuilt/For r19 the toolchain is also installed to the old path to give build systems a chance to adapt to the new layout. The old paths will be removed in r20. The /bin/armv7a-linux-androideabi26-clang++ src.cpp make_standalone_toolchain.pyscript will not be removed. It is now unnecessary and will emit a warning with the above information, but the script will remain to preserve existing workflows. If you're using ndk-build, CMake, or a standalone toolchain, there should be no change to your workflow. This change is meaningful for maintainers of third-party build systems, who should now be able to delete some Android-specific code. For more information, see the Build System Maintainers guide. - ndk-depends has been removed. We believe that ReLinker is a better solution to native library loading issues on old Android versions. - Issue 862: The GCC wrapper scripts which redirected to Clang have been removed, as they are not functional enough to be drop in replacements. Android NDK, Revision r18b (September 2018)Changelog - Downloads - - Announcements - GCC has been removed. - LLD is now available for testing. AOSP is in the process of switching to using LLD by default and the NDK will follow (timeline unknown). Test LLD in your app by passing -fuse-ld=lldwhen linking. - gnustl, gabi++, and stlport have been removed. - Support for ICS (android-14 and android-15) has been removed. Apps using executables no longer need to provide both a PIE and non-PIE executable. - The Play Store will require 64-bit support when uploading an APK beginning in August 2019. Start porting now to avoid surprises when the time comes. For more information, see this blog post. Android NDK, Revision r17c (June 2018)Changelog - Downloads - - Announcements - GCC is no longer supported. It will be removed in NDK r18. - libc++ is now the default STL for CMake and standalone toolchains. If you manually selected a different STL, we strongly encourage you to move to libc++. Note that ndk-build still defaults to no STL. For more details, see this blog post. - gnustl and stlport are deprecated and will be removed in NDK r18. - Support for ARMv5 (armeabi), MIPS, and MIPS64 has been removed. Attempting to build any of these ABIs will result in an error. - Support for ICS (android-14 and android-15) will be removed from r18. - The Play Store will require 64-bit support when uploading an APK beginning in August 2019. Start porting now to avoid surprises when the time comes. For more information, see this blog post. Android NDK, Revision 16b (December 2017)Changelog - Downloads - Announcements - The deprecated headers have been removed. Unified Headers are now simply "The Headers". For migration tips, see Unified Headers Migration Notes. - be removed when the other STLs are removed in r18. libc++is out of beta and is now the preferred STL in the NDK. Starting in r17, libc++is the default STL for CMake and standalone toolchains. If you manually selected a different STL, we strongly encourage you to move to libc++. For more details, see this blog post. - Support for ARM5 (armeabi), MIPS, and MIPS64 are deprecated. They will no longer build by default with ndk-build, but are still buildable if they are explicitly named, and will be included by "all", "all32", and "all64". Support for each of these has been removed in r17. Both CMake and ndk-build will issue a warning if you target any of these ABIs. - APIs Added native APIs for Android 8.1. To learn more about these APIs, see the Native APIs overview. For additional information about what's new and changed in this release, see this changelog. Android NDK, Revision 15c (July 2017)Changelog - Downloads - Announcements - Unified headers are enabled by default. To learn how to use these headers, see Unified Headers. - GCC is no longer supported. It is not removed from NDK yet, but is no longer receiving backports. It cannot be removed until after libc++ stabilizes enough to be the default, as some parts of gnustl are still incompatible with Clang. - Android 2.3 ( android-9) is no longer supported. The minimum API level target in the NDK is now Android 4.0 ( android-14). If your APP_PLATFORMis set lower than android-14, android-14is used instead. - CMake in NDK now supports building assembly code written in YASM to run on x86 and x86-64 architectures. To learn more, see Building assembly code. Note: The deprecated headers will be removed in an upcoming release. If you encounter any issues with these headers, please file a bug. For migration tips, see the unified headers migration notes. - APIs Added native APIs for Android 8.0. To learn more about these APIs, see the Native APIs overview. For additional information about what's new and changed in this release, see this changelog. Android NDK, Revision 14b (March 2017)Changelog - Downloads - Announcements - Unified headers: This release introduces platform headers that are synchronized and kept always up-to-date and accurate with the Android platform. Header-only bug fixes now affect all API levels. The introduction of unified headers fixes inconsistencies in earlier NDK releases, such as: - Headers in M and N were actually headers for L. - Function declarations in headers did not match their platform levels correctly; headers declared non-existent functions or failed to declare available functions. - Several of the old API levels had missing or incorrect constants that were in newer API levels. These new unified headers are not enabled by default. To learn how to enable and use these headers, see Unified Headers. - GCC deprecation: This release ends active support for GCC. GCC is not removed from the NDK just yet, but will no longer receive backports. As some parts of gnustl are still incompatible with Clang, GCC won't be entirely removed until after libc++ has become stable enough to be the default. For additional information about what's new and changed in this release, see this changelog. Android NDK, Revision 13b (October 2016) - Downloads - Announcements - likely be removed after that point. - Added simpleperf, a CPU profiler for Android. - r13b - Additional fixes for missing __cxa_bad_cast. - NDK NDK_TOOLCHAIN_VERSIONnow defaults to Clang. - libc++ has been updated to r263688. - We've reset to a (nearly) clean upstream. This should remove a number of bugs, but we still need to clean up libandroid_support before we will recommend it as the default. make-standalone-toolchain.shis now simply a wrapper around the Python version of the tool. There are a few behavioral differences. See the commit message for details. - Some libraries for unsupported ABIs have been removed (mips64r2, mips32r6, mips32r2, and x32). There might still be some stragglers. - Issues with crtbegin_static.o that resulted in missing atexit at link time when building a static executable for ARM android-21+ have been resolved: Issue 132 - Added CMake toolchain file in build/cmake/android.toolchain.cmake. - Known Issues - This is not intended to be a comprehensive list of all outstanding bugs. - Standlone toolchains using libc++ and GCC do not work. This seems to be a bug in GCC. See the commit message for more details. - Bionic headers and libraries for Marshmallow and N are not yet exposed despite the presence of android-24. Those platforms are still the Lollipop headers and libraries (not a regression from r11). - RenderScript tools are not present (not a regression from r11): Issue 7. Android NDK, Revision 12b (June 2016) - Downloads - Announcements - The ndk-buildcommand defaults to using Clang in r13. We will remove GCC in a subsequent release. - The make-standalone-toolchain.shscript will be removed in r13. Make sure make_standalone_toolchain.pysuits your needs. - Report issues to GitHub. - We have fixed ndk-gdb.py. (Issue 118) - We have updated NdkCameraMetadataTags.hso that it no longer contain an invalid enum value. - A bug in ndk-build that resulting in spurious warnings for static libraries using libc++ has been fixed. For more information about this change, see the comments here. - The OpenSLES headers have been updated for android-24. - NDK - We removed support for the armeabi-v7a-hard ABI. For more information, see this explanation. - Removed all sysroots for pre-GB platform levels. We dropped support for them in r11, but neglected to actually remove them. - Exception handling when using c++_shared on ARM32 now mostly works. The unwinder will now be linked into each linked object rather than into libc++ itself. For more information about this exception handling, see Known Issues. - Default compiler flags have been pruned. (Issue 27). - For complete information about these changes, see this change list. - Added a Python implementation of standalone toolchains: build/tools/make_standalone_toolchain.py. - Windows users no longer need Cygwin to use this feature. - We'll be removing the bash flavor in r13, so test the new one now. -fno-limit-debug-infohas been enabled by default for Clang debug builds. This change should improve debugability with LLDB. --build-idis now enabled by default. - Build ID will now be shown in native crash reports so you can easily identify which version of your code was running. NDK_USE_CYGPATHshould no longer cause problems with libgcc. (Android Issue 195486) - The -Wl, --warn-shared-textrel, and -Wl,--fatal-warningsoptions are now enabled by default. If you have shared text relocations, your app cannot load on Android 6.0 (API level 23) or higher. Text relocations have never been allowed for 64-bit apps. - Precompiled headers should work better. (Issue 14 and Issue 16) - Removed unreachable ARM (non-thumb) STL libraries. - Added Vulkan support to android-24. - Added Choreographer API to android-24. - Added libcamera2APIs for devices with INFO_SUPPORTED_HARDWARE_LEVEL_LIMITEDor above. For more information, see Camera Characteristics. -. - GCC - Synchronized with the ChromeOS GCC @ google/gcc-4_9 r227810. - Backported coverage sanitizer patch from ToT (r231296). - Fixed libatomic to not use ifuncs. (Issue 31) - Binutils - Silenced “Erratum 843419 found and fixed” info messages. - Introduced option --long-pltto fix internal linker error that occurs when linking huge arm32 binaries. - Fixed wrong run time stubs for AArch64. This was causing jump addresses to be calculated incorrectly for very large DSOs. - Introduced default option --no-apply-dynamicto work around a dynamic linker bug for earlier Android releases. - NDK r11 KI for dynamic_castdoes not work with Clang. We have fixed x86, stlport_static, and optimization. - GDB - Known Issues - x86 ASAN still does not work. For more information see the discussion on this change list. - Exception unwinding with c++_sharedstill does not work for ARM on Android 2.3 (API level 9) or Android 4.0 (API level 14). - Bionic headers and libraries for Android 6.0 (API level 23) and Android 7.0 (API level 24) are not yet exposed despite the presence of android-24. Those platforms are still the Android 5.0 (API level 21) headers and libraries (not a regression from r11). - RenderScript tools are not present (not a regression from r11). (Issue 7) - This changelog is not intended to be a comprehensive list of all outstanding bugs. __threadshould work for real this time. Android NDK, Revision 12 (June 2016) - Downloads - Announcements - The ndk-buildcommand will default to using Clang in an upcoming release. GCC will be removed in a later release. - The make-standalone-toolchain.shscript will be removed in an upcoming release. If you use this script, please plan to migrate to the make_standalone_toolchain.pyas soon as possible. - NDK - Removed support for the armeabi-v7a-hard ABI. See the explanation in the documentation. - Removed all sysroots for platform levels prior to Android 2.3 (API level 9). We dropped support for them in NDK r11, but neglected to actually remove them. - Updated exception handling when using c++_shared on ARM32 so that it mostly works (see Known Issues). The unwinder is now linked into each linked object rather than into libc++ itself. - Pruned the default compiler flags (NDK Issue 27). You can see details of this update in Change 207721. - Added a Python implementation of standalone toolchains in build/tools/make_standalone_toolchain.py. On Windows, you no longer need Cygwin to use this feature. Note that the bash flavor will be removed in an upcoming release, so please test the new one now. - Configured Clang debug builds to have the -fno-limit-debug-infooption is enabled by default. This change enables better debugging with LLDB. - Enabled the --build-idas a default option. This option causes an identifier to be shown in native crash reports so you can easily identify which version of your code was running. - Fixed issue with NDK_USE_CYGPATHso that it no longer causes problems with libgcc (Issue 195486). - Enabled the following options as default: -Wl,--warn-shared-textreland -Wl,--fatal-warnings. If you have shared text relocations, your app does not load on Android 6.0 (API level 23) and higher. Note that this configuration has never been allowed for 64-bit apps. - Fixed a few issues so that precompiled headers work better (NDK Issue 14, NDK Issue 16). - Removed unreachable ARM (non-thumb) STL libraries. - Added Vulkan support to android-24. - Added Choreographer API to android-24. - Added libcamera2 APIs for devices that support the INFO_SUPPORTED_HARDWARE_LEVEL_LIMITEDfeature level or higher. For more information, see the CameraCharacteristicsreference. -. - Fixed __threadso that it works for real this time. - GCC - Synchronized the compiler with the ChromeOS GCC @ google/gcc-4_9 r227810. - Backported coverage sanitizer patch from ToT (r231296). - Fixed libatomicto not use ifuncs (NDK Issue 31). - Binutils - Silenced the "Erratum 843419 found and fixed" info messages. - Introduced option --long-pltto fix an internal linker error when linking huge arm32 binaries. - Fixed wrong run time stubs for AArch64. This problem was causing jump addresses to be calculated incorrectly for very large dynamic shared objects (DSOs). - Introduced default option --no-apply-dynamicto work around a dynamic linker bug for earlier Android releases. - Fixed a known issue with NDK r11 where dynamic_castwas not working with Clang, x86, stlport_static and optimization. - GDB - Updated to GDB version 7.11. For more information about this release, see GDB News. - Fixed a number of bugs in the ndk-gdb.pyscript. - Known Issues - The x86 Address Sanitizer (ASAN) currently does not work. For more information, see Issue 186276. - Exception unwinding with c++_shareddoes not work for ARM on Android 2.3 (API level 9) or Android 4.0 (API level 14). - Bionic headers and libraries for Android 6.0 (API level 23) and higher are not yet exposed despite the presence of android-24. Those platforms still have the Android 5.0 (API level 21) headers and libraries, which is consistent with NDK r11. - The RenderScript tools are not present, which is consistent with NDK r11. (NDK Issue 7) - In NdkCameraMetadataTags.hheader file, the camera metadata tag enum value ACAMERA_STATISTICS_LENS_SHADING_CORRECTION_MAPwas listed by accident and will be removed in next release. Use the ACAMERA_STATISTICS_LENS_SHADING_MAPvalue instead. Android NDK, Revision 11c (March 2016) - Changes - Applied additional fixes to the ndk-gdb.pyscript. - Added an optional package name argument to the ndk-gdbcommand --attachoption. (Issue 13) - Fixed invalid toolchain paths for 32-bit Windows platform. (Issue 45) - Fixed the relative path for the ndk-whichcommand. (Issue 29) - Fixed use of cygpath for the libgcc compiler. (Android Issue 195486) Android NDK, Revision 11b (March 2016) - NDK - Important announcements - Changes ndk-gdb.pyis fixed. It had regressed entirely in r11. ndk-gdbfor Mac is fixed. - Added more top-level shortcuts for command line tools: ndk-depends. ndk-gdb. ndk-stack. ndk-which. This command had been entirely absent from previous releases. - Fixed standalone toolchains for libc++, which had been missing __cxxabi_config.h. - Fixed help documentation for --toolchainin make-standalone-toolchain.sh. - Clang - Errata - Contrary to what we reported in the r11 Release Notes, __threaddoes not work. This is because the version of Clang we ship is missing a bug fix for emulated TLS support. Android NDK, Revision 11 (March 2016) - Clang - Important announcements - We strongly recommend switching to Clang. - Clang has been updated to 3.8svn (r243773, build 2481030). - This version is a nearly pure upstream Clang. - The Windows 64-bit downloadable NDK package contains a 32-bit version of Clang. - Additions - Clang now provides support for emulated TLS. - The compiler now supports __threadby emulating ELF TLS with pthread thread-specific data. - C++11 thread_localworks in some cases, but not for data with non-trivial destructors, because those cases require support from libc. This limitation does not apply when running on Android 6.0 (API level 23) or newer. - Emulated TLS does not yet work with Aarch64 when TLS variables are accessed from a shared library. - GCC - Important announcements - GCC in the NDK is now deprecated in favor of Clang. - The NDK will neither be upgrading to 5.x, nor accept non-critical backports. - Maintenance for miscompiles and internal compiler errors in 4.9 will be handled on a case by case basis. - Removals - Removed GCC 4.8. All targets now use GCC 4.9. - Other changes - Synchronized google/gcc-4_9 to r224707. Previously, it had been synchronized with r214835. - NDK - Important announcements - The samples are no longer included in the NDK package. They are instead available on GitHub. - The documentation is no longer included in the NDK package. Instead, it is on the Android developer website. - Additions - Added a native tracing API to android-23. - Added a native multinetwork API to android-23. - Enabled libc, m, and dl to provide versioned symbols, starting from API level 21. - Added Vulkan headers and library to API level N. - Removals - Removed support for _WCHAR_IS_8BIT. - Removed sed. - Removed mclinker. - Removed Perl. - Removed from all versions of NDK libc, m, and dl all symbols which the platform versions of those libs do not support. - Partially removed support for mips64r2. The rest will be removed in the future. - Other changes - Changed ARM standalone toolchains to default to arm7. - You can restore the old behavior by passing specifying the -targetoption as armv5te-linux-androideabi. - Changed the build system to use -isystemfor platform includes. - Warnings that bionic causes no longer break app builds. - Fixed a segfault that occurred when a binary threw exceptions via gabi++. (Issue 179410) - Changed libc++’s inline namespace to std::__ndk1to prevent ODR issues with platform libc++. - All libc++ libraries are now built with libc++abi. - Bumped default APP_PLATFORMto Gingerbread. - Expect support for Froyo and older to be dropped in a future release. - Updated gabi++ _Unwind_Exceptionstruct for 64 bits. - Added the following capabilities to cpufeatures: - Detect SSE4.1 and SSE4.2. - Detect cpu features on x86_64. - Updated libc++abi to upstream r231075. - Updated byteswap.h, endian.h, sys/procfs.h, sys/ucontext.h, sys/user.h, and uchar.hfrom ToT Bionic. - Synchronized sys/cdefs.hacross all API levels. - Fixed fegetenv and fesetenvfor arm. - Fix end pointer size/alignment of crtend_*for mips64 and x86_64. - Binutils - Additions - Added a new option: --pic-veneer. - Removals - The 32-bit Windows package no longer contains ld.gold. You can instead get ld.gold from the 64-bit Windows package. - Changes - Unified binutils source between Android and ChromiumOS. For more information on this change, see the comments here. - Improved reliability of Gold for aarch64. Use -fuse-ld=goldat link time to use gold instead of bfd. The default will likely switch in the next release. - Improved linking time for huge binaries for Gold ARM back end (up to 50% linking time reduction for debuggable Chrome Browser). - GDB - Removals - Removed ndk-gdb in favor of ndk-gdb.py. - Changes - Updated gdb to version 7.10. - Improved performance. - Improved error messages. - Fixed relative project paths. - Stopped Ctrl-C from killing the backgrounded gdbserver. - Improved Windows support. - YASM - Changes - Updated YASM to version 1.3.0. - Known issues - x86 ASAN does not currently work. For more information, see the discussion here. - The combination of Clang, x86, stlport_static, and optimization levels higher than -O0causes test failures with dynamic_cast. For more information, see the comments here. - Exception handling often fails with c++_shared on ARM32. The root cause is incompatibility between the LLVM unwinder that libc++abi uses for ARM32 and libgcc. This behavior is not a regression from r10e. Android NDK, Revision 10e (May 2015) - Downloads - Important changes: - Integrated the workaround for Cortex-A53 Erratum 843419 into the aarch64-linux-android-4.9linker. For more information on this workaround, see Workaround for cortex-a53 erratum 843419. - Added Clang 3.6; NDK_TOOLCHAIN_VERSION=clangnow picks that version of Clang by default. - Removed Clang 3.4. - Removed GCC 4.6. - Implemented multithreading support in ld.goldfor all architectures. It can now link with or without support for multithreading; the default is to do it without. - To compile with multithreading, use the --threadsoption. - To compile without multithreading, use the --no-threadsoption. - Upgraded GDB/gdbserver to 7.7 for all architectures. - Removed the NDK package for 32-bit Darwin. - Important bug fixes: - Fixed a crash that occurred when there were OpenMP loops outside of the main thread. - Fixed a GCC 4.9 internal compiler error (ICE) that occurred when the user declared #pragma GCC optimize ("O0"), but had a different level of optimization specified on the command line. The pragmatakes precedence. - Fixed an error that used to produce a crash with the following error message: in add_stores, at var-tracking.c:6000 - Implemented a workaround for a Clang 3.5 issue in which LLVM auto-vectorization generates llvm.cttz.v2i64(), an instruction with no counterpart in the ARM instruction set. - Other bug fixes: - Made the following header and library fixes: - Fixed PROPERTY_*in media/NdkMediaDrm.h. - Fixed sys/ucontext.hfor mips64. - Dropped the Clang version check for __builtin_isnanand __builtin_isinf. - Added android-21/arch-mips/usr/include/asm/reg.hand android-21/arch-mips64/usr/include/asm/reg.h. - Fixed a spurious array-bounds warning that GCC 4.9 produced for x86, and reenabled the array bounds warning that GCC 4.9 had produced for ARM. The warning for ARM had previously been unconditionally disabled. - Fixed Clang 3.5 for mipsand mips64to create a writable .gcc_except_tablesection, thus matching GCC behavior. This change allows you to avoid the following linker warning: .../ld: warning: creating a DT_TEXTREL in a shared object - Backported a fix for compiler-rtissues that were causing crashes when Clang compiled for mips64. For more information, see LLVM Issue 20098. - Fixed Clang 3.5 crashes that occurred on non-ASCII comments. (Issue 81440) - Fixed stlport collate::compareto return -1and 1. Previously, it had returned arbitrary signed numbers. - Fixed ndk-gdbfor 64-bit ABIs. (Issue 118300) - Fixed the crash that the HelloComputeNDK sample for RenderScript was producing on Android 4.4 (Android API level 19). For more information, see this page. - Fixed libc++ __wrap_iterfor GCC. For more information, see LLVM Issue 22355. - Fixed .asmsupport for ABI x86_64. - Implemented a workaround for the GCC 4.8 stlportissue. (Issue 127773) - Removed the trailing directory separator \\from the project path in Windows. (Issue 160584) - Fixed a no rule to make targeterror that occurred when compiling a single .cfile by executing the ndk-build.cmdcommand from gradle. (Issue 66937) - Added the libatomic.aand libgomp.alibraries that had been missing from the following host toolchains: aarch64-linux-android-4.9 mips64el-linux-android-4.9 mipsel-linux-android-4.9 x86_64-4.9 - Other changes: - Added ld.goldfor aarch64. The default linker remains ld.bfd. To explicitly enable ld.gold, add -fuse-ld=goldto the LOCAL_LDFLAGSor APP_LDFLAGSvariable. - Built the MIPS and MIPS64 toolchains with binutils-2.25, which provides improved R6 support. - Made -fstandalone-debug(full debug info) a default option for Clang. - Replaced -fstack-protectorwith -fstack-protector-strongfor the ARM, AArch64, X86, and X86_64 toolchains for GCC 4.9, Clang 3.5, and Clang 3.6. - Added the --packagecommand-line switch to ndk-gdbto allow the build system to override the package name. (Issue 56189) - Deprecated -mno-ldc1-stc1for MIPS. This option may not work with the new -fpxxand -mno-odd-spregoptions, or with the FPXX ABI. - Added MIPS MSA and R6 detection to cpu-features. Android NDK, Revision 10d (December 2014) - Important changes: - Made GCC 4.8 the default for all 32-bit ABIs. Deprecated GCC 4.6, and will remove it next release. To restore previous behavior, either add NDK_TOOLCHAIN_VERSION=4.6to ndk-build, or add --toolchain=arm-linux-androideabi-4.6when executing make-standalone-toolchain.shon the command line. GCC 4.9 remains the default for 64-bit ABIs. - Stopped all x86[_64] toolchains from adding -mstackrealignby default. The NDK toolchain assumes a 16-byte stack alignment. The tools and options used by default enforce this rule. A user writing assembly code must make sure to preserve stack alignment, and ensure that other compilers also comply with this rule. (GCC bug 38496) - Added Address Sanitizer functionality to Clang 3.5 support to the ARM and x86 ABIs. For more information on this change, see the Address Sanitizer project. - Introduced the requirement, starting from API level 21, to use -fPIE -piewhen building. In API levels 16 and higher, ndk-build uses PIEwhen building. This change has a number of implications, which are discussed in Developer Preview Issue 888. These implications do not apply to shared libraries. - Important bug fixes: - Made more fixes related to A53 Errata #835769 in the aarch64-linux-android-4.9 linker. As part of this, GCC passes a new option, --fix-cortex-a53-835769, when -mfix-cortex-a53-835769(enabled by default) is specified. For more information, see this binutils message and this binutils message. - Documented a fix to a libc++ sscanf/vsscanfhang that occurred in API level 21. The fix itself had been implemented in r10c. (Issue 77988) - Fixed an AutoFDO ( -fauto-profile) crash that occurred with GCC 4.9 when -Oswas specified. (Issue 77571) - Other bug fixes: - Made the following header and library fixes: - Added posix_memalignto API level 16. Also, added a prototype in stdlib.hto API levels 16 to 19. (Issue 77861) - Fixed stdatomic.hso that it includes <atomic>only for C++11. - Modified the following headers for standalone use: sys/user.h, and gl2ext.h, dlext.h, fts.h, sgidefs.hfor API level 21. - Modifiedfrom sys/user.hto rename mxcsr_maskas mxcr_mask, and to change the data type for u_ar0 - Changed sysconf()return value type from intto long. - Fixed ndk-build's handling of thumbfor LOCAL_ARM_MODE: In r10d, ndk-build adds LOCAL_LDFLAGS+=-mthumbby default, unless one of the following conditions applies: - You have set LOCAL_ARM_MODEequal to arm. - You are doing a debug build (with settings such as APP_OPTIM=debugand AndroidManifest.xmlcontaining android:debuggable="true"), where ARM mode is the default in order to retain compatibility with earlier toolchains. (Issue 74040) - Fixed LOCAL_SRC_FILESin ndk-build to use Windows absolute paths. (Issue 74333) - Removed bash-specific code from ndk-gdb. (Issue 73338) - Removed bash-specific code from make-standalone-toolchain.sh. (Issue 74145) - Revised documentation concerning a fix for System.loadLibrary()transitive dependencies. (Issue 41790) - Fixed a problem that was preventing 64-bit packages from extracting on Ubuntu 14.04 and OS X 10.10 (Yosemite). (Issue 78148) - Fixed an issue with LOCAL_PCHto improve Clang support. (Issue 77575) - Clarified "requires executable stack" warning from ld.gold. (Issue 79115) unsigned longto struct user_regs_struct*. Android NDK, Revision 10c (October 2014) - Important changes: - Made the following changes to download structure: - Each package now contains both the 32- and the 64-bit headers, libraries, and tools for its respective platform. - STL libraries with debugging info no longer need be downloaded separately. - Changed everything previously called Android-Lto the official release designation: android-21. - Updated GCC 4.9 by rebasing to the - The -O2option now turns on vectorization, without loop peeling but with more aggressive unrolling. - Enhancements to FDO and LIPO - Added Clang 3.5 support to all hosts: NDK_TOOLCHAIN_VERSION=clangnow picks Clang 3.5. Note that: - ARM and x86 default to using the integrated assembler. If this causes issues, use -fno-integrated-asas a workaround. - Clang 3.5 issues more warnings for unused flags, such as the -finline-functionsoption that GCC supports. - Made it possible to enter ART debugging mode, when debugging on an Android 5.0 device using ART as its virtual machine, by specifying the art-onoption. For more information, see prebuilt/common/gdb/common.setupin the directory containing the NDK. - Removed support for Clang 3.3. - Deprecated GCC 4.6, and may remove it from future releases. - Updated mclinker to 2.8 with Identical Code Folding ("ICF") support. Specify ICF using the --icfoption. - Broadened arm_neon.hsupport in x86 and x86_64, attaining coverage of ~93% of NEON intrinsics. For more information about NEON support: - Navigate to the NDK Programmer's Guide ( docs/Programmers_Guide/html/), and see Architectures and CPUs > Neon. - Examine the updated hello-neonsample in samples/. - See Intel's guide to porting from ARM NEON to Intel SSE. - Documented support for _FORTIFY_SOURCEin headers/libs/android-21, which appeared in r10 (when android-21was still called Android-L), but had no documentation. For more detailed information, see Important bug fixes below. When migrating from projects using GCC, you can use -Wno-invalid-command-line-argumentand -Wno-unused-command-line-argumentto ignore the unused flags until you're able decide on what to do with them longer-term. - Important bug fixes: - Fixed an internal compiler error with GCC4.9/aarch64 that was causing the following error message (Issue 77564): internal compiler error: in simplify_const_unary_operation, at simplify-rtx.c:1539 - Fixed incorrect code generation from GCC4.9/arm. (Issue 77567) - Fixed an internal compiler error with GCC4.9/mips involving inline-assembly. (Issue 77568) - Fixed incorrect code that GCC4.9/arm was generating for x = (cond) ? y : x. (Issue 77569) - Fixed GCC4.9/aarch64 and Clang3.5/aarch64 to work around the Cortex-A53 erratum (835769) by default. Disable the workaround by specifying -mno-fix-cortex-a53-835769. - Other bug fixes: - Made the following header and library fixes to android-21: - Added more TV keycodes: android/keycodes.h - Added more constants and six new sensor functions to android/sensor.h: ASensorManager_getDefaultSensorEx, ASensor_getFifoMaxEventCount, ASensor_getFifoReservedEventCount, ASensor_getStringType, ASensor_getReportingMode, and ASensor_isWakeUpSensor. - Fixed stdatomic.hto improve compatibility with GCC 4.6, and provide support for the <atomic>header. - Added sys/ucontext.hand sys/user.hto all API levels. The signal.hheader now includes <sys/ucontext.h>. You may remove any existing definition of struct ucontext. - Added posix_memalignto API levels 17, 18, and 19. - Added the following functions to all architectures: android_set_abort_message, posix_fadvise, posix_fadvise64, pthread_gettid_np. - Added the required permissions to the native-media/AndroidManifest.xmlsample. (Issue 106640) - Added clock_nanosleepand clock_settimeto API level 21. (Issue 77372) - Removed the following symbols from all architectures: get_malloc_leak_info, free_malloc_leak_info, __srget, __swbuf, __srefill, __swsetup, __sdidinit, __sflags, __sfp, __sinit, __smakebuf, __sflush, __sread, __swrite, __sseek, __sclose, _fwalk, __sglue, __get_thread, __wait4, __futex_wake, __open, __get_tls, __getdents64, and dlmalloc. - Removed the following functions from the 64-bit architectures: basename_r, dirname_r, __isthreaded, _flush_cache(mips64). - Removed the following function from the 32-bit architectures: __signalfd4. - Changed the type of the third argument from size_tto intin the following functions: strtoll_l, strtoull_l, wcstoll_l, and wcstoull_l. - Restored the following functions to the 64-bit architecture: arc4random, arc4random_buf, and arc4random_uniform. - Moved cxa_*and the newand deleteoperators back to libstdc++.so. This change restores r9d behavior; previous versions of r10 contained placeholder files. - Restored MXU support in GCC 4.8 and 4.9 for mips. This support had been absent from r10 and r10b because those versions of GCC had been compiled with binutils-2.24, which did not support MXU. It now does. - Fixed --toolchain=in make-standalone-toolchain.shso that it now properly supports use of a suffix specifying a version of Clang. - Fixed the libc++/armeabi strtod()functions. - Made fixes to NDK documentation in docs/. - Other changes: - Enhanced cpu-featuresto detect ARMv8 support for the following instruction sets: AES, CRC32, SHA2, SHA1, and 64-bit PMULL/PMULL2. (Issue 106360) - Modified ndk-build to use *-gcc-ar, which is available in GCC 4.8, GCC 4.9, and Clang. Clang specifies it, instead of *-ar. This setting brings improved LTO support. - Removed the include-fixed/linux/a.out.hand include-fixed/linux/compiler.hheaders from the GCC compiler. (Issue 73728) - Fixed an issue related to -fltowith GCC 4.8 on Mac OS X. The error message read: .../ld: error: .../libexec/gcc/arm-linux-androideabi/4.9/liblto_plugin.so Symbol not found: _environ - Fixed a typo in build-binary.mk.(Issue 76992) - Important known issues: - Android NDK, Revision 10b (September 2014) - Important notes: - Because of the 512MB size restriction on downloadable packages, the following 32-bit items are not in the 32-bit NDK download packages. Instead, they reside in the 64-bit ones: - Android-L headers - GCC 4.9 - Currently, the only Renderscript support provided by the NDK is for 32-bit Renderscript with Android 4.4 (API level 19). You cannot build HelloComputeNDK (the only Renderscript sample) with any other combination of Renderscript (32- or 64-bit) and Android version. - To compile native-codec, you must use a 64-bit NDK package, which is where all the Android-L headers are located. - Important bug fixes: - - Other bug fixes: - Removed stdio.hfrom the include-fixed/directories of all versions of GCC. (Issue 73728.) - Removed duplicate header files from the Windows packages in the platforms/android-L/arch-*/usr/include/linux/netfilter*/directories. (Issue 73704.) - Fixed a problem that prevented Clang from building HelloComputeNDK. - Fixed atexit. (Issue 66595.) - Made various fixes to the docs in docs/and sources/third_party/googletest/README.NDK. (Issue 74069.) - Made the following fixes to the Android-L headers: - Added the following functions to ctype.hand wchar.h: dn_expand(), grantpt(), inet_nsap_addr(), inet_nsap_ntoa(), insque(), nsdispatch(), posix_openpt(), __pthread_cleanup_pop(), __pthread_cleanup_push(), remque(), setfsgid(), setfsuid(), splice(), tee(), twalk()(Issue 73719), and 42 *_l()functions. - Renamed cmsg_nxthdrto __cmsg_nxthdr. - Removed __libc_malloc_dispatch. - Changed the ptrace()prototype to long ptrace(int, ...);. - Removed sha1.h. - Extended android_dlextinfoin android/dlext.h. - Annotated __NDK_FPABI__for functions receiving or returning float- or double-type values in stdlib.h, time.h, wchar.h, and complex.h. - Other changes: - Android NDK, Revision 10 (July 2014) - Important changes: - Added 3 new ABIs, all 64-bit: arm64-v8a, x86_64, mips64.Note that: - GCC 4.9 is the default compiler for 64-bit ABIs. Clang is currently version 3.4. NDK_TOOLCHAIN_VERSION=clangmay not work for arm64-v8a and mips64. - Android-L is the first level with 64-bit support. Note that this API level is a temporary one, and only for L-preview. An actual API level number will replace it at L-release. - This release includes now includes. - The new GNU libstdc++ in Android-L contains all <tr1/cmath>Before defining your own math function, check _GLIBCXX_USE_C99_MATH_TR1to see a function with that name already exists, in order to avoid "multiple definition" errors from the linker. - The cpu-features library has been updated for the ARMv8 kernel. The existing cpu-features library may fail to detect the presence of NEON on the ARMv8 platform. Recompile your code with the new version. - Added a new platforms/android-L/API directory. It includes: - Updated Bionic headers, which had not changed from Android API levels 3 (Cupcake) to 19 (KitKat). This new version, for level L, is to be synchronized with AOSP. - New media APIs and a native-codec sample. - An updated Android.hheader for SLES/OpenSLES, enabling support for single-precision, floating-point audio format in AudioPlayer. - GLES 3.1 and AEP extensions to libGLESv3.so. - GLES2 and GLES3 headers updated to the latest official Khronos versions. - Added GCC 4.9 compilers to the 32-/64-bit ABIs. GCC 4.9 is the default (only) compiler for 64-bit ABIs, as previously mentioned. For 32-bit ABIs, you must explcitly enable GCC 4.9, as GCC 4.6 is still the default. - For ndk-build, enable 32-bit, GCC 4.9 building either by adding NDK_TOOLCHAIN_VERSION=4.9to Application.mk, or exporting it as an environment variable from the command line. - For a standalone toolchain, use the --toolchain=option in the make-standalone-toolchain.shscript. For example: --toolchain=arm-linux-androideabi-4.9. - Upgraded GDB to version 7.6 in GCC 4.8/4.9 and x86*. Since GDB is still at version GDB-7.3.x in GCC 4.6 (the default for ARM and MIPS), you must set NDK_TOOLCHAIN_VERSION=4.8or 4.9to enable ndk-gdb to select GDB 7.6. - Added the -mssse3build option to provide SSSE3 support, and made it the default for ABI x86 (upgrading from SSE3). The image released by Google does not contain SSSE3 instructions. - Updated GCC 4.8 to 4.8.3. - Improved ARM libc++ EH support by switching from gabi++ to libc++abi. For details, see the "C++ Support" section of the documentation. Note that: - All tests except for locale now pass for Clang 3.4 and GCC 4.8. For more information, see the "C++ Support" section of the documentation. - The libc++ libraries for X86 and MIPS libc++ still use gabi++. - GCC 4.7 and later can now use <atomic>. - You must add -fno-strict-aliasingif you use <list>, because __list_imp::_end_ breaks TBAA rules. (Issue 61571.) - As of GCC 4.6, LIBCXX_FORCE_REBUILD:=true no longer rebuilds libc++. Rebuilding it requires the use of a different compiler. Note that Clang 3.3 is untested. - mclinker is now version 2.7, and has aarch64 Linux support. - Added precompiled header support for headers specified by LOCAL_PCH. (Issue 25412). - Important bug fixes: - Fixed libc++ so that it now compiles std::feof, etc. (Issue 66668). - Fixed a Clang 3.3/3.4 atomic library call that caused crashes in some of the libc++ tests for ABI armeabi. - Fixed Clang 3.4 crashes that were occurring on reading precompiled headers. (Issue 66657). - Fixed the Clang 3.3/3.4 -O3assert on: - Fixed the following Clang 3.3/3.4 crash:). - Other bug fixes: - Fixed headers: - Fixed 32-bit ssize_tto be intinstead of long int. - Fixed WCHAR_MINand WCHAR_MAXso that they they take appropriate signs according to the architecture they're running on: - X86/MIPS: signed. - ARM: unsigned. - To force X86/MIPS to default to unsigned, use -D__WCHAR_UNSIGNED__. - To force wchar_tto be 16 bits, use -fshort-wchar. - Removed non-existent symbols from 32-bit libc.so, and added pread64, pwrite64, ftruncate64for Android API level 12 and higher. (Issue 69319). For more information, see the commit message accompanying AOSP change list 94137. - Fixed GCC warning about redefinition of putchar. Warning message reads: - Fixed make-standalone-toolchain.sh --stl=libc++so that it: - Copies cxxabi.h. (Issue 68001). - Runs in directories other than the NDK install directory. (Issues 67690 and 68647). - Fixed GCC/Windows to quote arguments only when necessary for spawning processes in external programs. This change decreases the likelihood of exceeding the 32K length limit. - Fixed an issue that made it impossible to adjust the APP_PLATFORMenvironment variable. - Fixed the implementation of IsSystemLibrary()in crazy_linker so that it uses strrchr()instead of strchr()to find the library path's true basename. - Fixed native-audio's inability to build in debug mode. - Fixed gdb's inability to print extreme floating-point numbers. (Issue 69203). - Fixed Clang 3.4 inability to compile with . include/stdio.h:236:5: warning: conflicts with previous declaration here [-Wattributes] int putchar(int);(Change list 91185). - Other changes: - Added arm_neon.hto the x86 toolchain so that it now emulates ~47% of Neon. There is currently no support for 64-bit types. For more information, see the section on ARM Neon intrinsics support in the x86 documentation. - Ported ARM/GOT_PREL optimization (present in GCC 4.6 built from the GCC google branch) to ARM GCC 4.8/4.9. This optimization sometimes reduces instruction count when accessing global variables. As an example, see the build.sh script in $NDK/tests/build/b14811006-GOT_PREL-optimization/. - Added ARM version for STL gabi++, stlport, and libc++. They now have both it and Thumb mode. - It is now possible to call the make-standalone-toolchain.sh script with --toolchain=x86_64-linux-android-4.9, which is equivalent to --toolchain=x86_64-4.9. Android NDK, Revision 9d (March 2014) - Important changes: - Added support for the Clang 3.4 compiler. The NDK_TOOLCHAIN_VERSION=clangoption now picks Clang 3.4. GCC 4.6 is still the default compiler. - Added: - When executing the. - The make-standalone-toolchain.shscript copies additional libaries under /harddirectories. Add the above CFLAGSand LFLAGSto your makefile to enable GCC or Clang to link with libraries in /hard. - Added the yasm assembler, as well as LOCAL_ASMFLAGSand EXPORT_ASMFLAGSflags for x86 targets. The ndk-buildscript uses prebuilts/*/bin/yasm*to build LOCAL_SRC_FILESthat have the .asmextension. - Updated MClinker to 2.6.0, which adds -gc-sectionssupport. - Added experimental libc++ support (upstream r201101). Use this new feature by following these steps: - Add APP_STL := c++_staticor APP_STL := c++_sharedin Application.mk. You may rebuild from source via LIBCXX_FORCE_REBUILD := true - Execute make-standalone-toolchain.sh --stl=libc++to create a standalone toolchain with libc++ headers/lib. CPLUSPLUS-SUPPORT.html. (Issue 36496) - Important bug fixes: - Fixed an uncaught throw from an unexpected exception handler for GCC 4.6/4.8 ARM EABI. (GCC Issue 59392) - Fixed GCC 4.8 so that it now correctly resolves partial specialization of a template with a dependent, non-type template argument. (GCC Issue 59052) - Added more modules to prebuilt python (Issue 59902): - Mac OS X: zlib, bz2, _curses, _curses_panel, _hashlib, _ssl - Linux: zlib, nis, crypt, _curses, and _curses_panel - Fixed the x86 and MIPS gdbserver event_getmsg_helper. - Fixed numerous issues in the RenderScript NDK toolchain, including issues with compatibility across older devices and C++ reflection. - Other bug fixes: - Header fixes: - Fixed a missing #include <sys/types.h>in android/asset_manager.hfor Android API level 13 and higher. (Issue 64988) - Fixed a missing #includein android/rect_manager.hfor Android API level 14 and higher. - Added JNICALLto JNI_OnLoadand JNI_OnUnloadin jni.h. Note that JNICALLis defined as __NDK_FPABI__For more information, see sys/cdefs.h. - Updated the following headers so that they can be included without the need to manually include their dependencies (Issue 64679): android/tts.h EGL/eglext.h fts.h GLES/glext.h GLES2/gl2ext.h OMXAL/OpenMAXSL_Android.h SLES/OpenSLES_Android.h sys/prctl.h sys/utime.h - Added sys/cachectl.hfor all architectures. MIPS developers can now include this header instead of writing #ifdef __mips__. - Fixed platforms/android-18/include/android/input.hby adding __NDK_FPABI__to functions taking or returning float or double values. - Fixed MIPS struct stat, which was incorrectly set to its 64-bit counterpart for Android API level 12 and later. This wrong setting was a regression introduced in release r9c. - Defined __PTHREAD_MUTEX_INIT_VALUE, __PTHREAD_RECURSIVE_MUTEX_INIT_VALUE, and __PTHREAD_ERRORCHECK_MUTEX_INIT_VALUEfor Android API level 9 and lower. - Added scalbln, scalblnf, and scalblnlto x86 libm.sofor APIs 18 and later. - Fixed a typo in sources/android/support/include/iconv.h. (Issue 63806) - Fixed gabi++ std::unexpected()to call std::terminate()so that a user-defined std::terminate()handler has a chance to run. - Fixed gabi++ to catch std::nullptr. - Fixed samples Teapot and MoreTeapots: - Solved a problem with Tegra 2 and 3 chips by changing specular variables to use medium precision. Values for specular power can now be less than 1.0. - Changed the samples so that pressing the volume button restores immersive mode and invalidates SYSTEM_UI_FLAG_IMMERSIVE_STICKY. Screen rotation does not trigger onSystemUiVisibilityChange, and so does not restore immersive mode. - Fixed the ndk-buildscript to add -rpath-link=$SYSROOT/usr/liband -rpath-link=$TARGET_OUTin order to use ld.bfdto link executables. (Issue 64266) - Removed -Bsymbolicfrom all STL builds. - Fixed ndk-gdb-py.cmdby setting SHELLas an environment variable instead of passing it to python.exe, which ignores the setting. (Issue 63054) - Fixed the make-standalone-toolchain.shscript so that the --stl=stlportoption copies the gabi++ headers instead of symlinking them; the cmd.exeand MinGW shells do not understand symlinks created by cygwin. - Other changes: - Applied execution permissions to all *cmdscripts previously intended for use only in the cmd.exeshell, in case developers prefer to use ndk-build.cmdin cygwin instead of the recommended ndk-buildscript. - Improved the speed of the make-standalone-toolchain.shscript by moving instead of copying if the specified destination directory does not exist. Android NDK, Revision 9c (December 2013) This is a bug-fix-only release. - Important bug fixes: - Fixed a problem with GCC 4.8 ARM, in which the stack pointer is restored too early. This problem prevented the frame pointer from reliably accessing a variable in the stack frame. (GCC Issue 58854) - Fixed a problem with GCC 4.8 libstdc++, in which a bug in std::nth_element was causing generation of code that produced a random segfault. (Issue 62910) - Fixed GCC 4.8 ICE in cc1/cc1plus with -fuse-ld=mcld, so that the following error no longer occurs: cc1: internal compiler error: in common_handle_option, at opts.c:1774 - Fixed -mhard-floatsupport for __builtinmath functions. For ongoing information on fixes for -mhard-floatwith STL, please follow Issue 61784. - Other bug fixes: - Header fixes: - Changed prototype of pollto poll(struct pollfd *, nfds_t, int);in poll.h. - Added utimensatto libc.sofor Android API levels 12 and 19. These libraries are now included for all Android API levels 12 through 19. - Introduced futimensinto libc.so, for Android API level 19. - Added missing clock_settime()and clock_nanosleep()to time.hfor Android API level 8 and higher. - Added CLOCK_MONOTONIC_RAW, CLOCK_REALTIME_COARSE, CLOCK_MONOTONIC_COARSE, CLOCK_BOOTTIME, CLOCK_REALTIME_ALARM,and CLOCK_BOOTTIME_ALARMin time.h. - Removed obsolete CLOCK_REALTIME_HRand CLOCK_MONOTONIC_HR. - In samples Teapot, MoreTeapots, and source/android/ndk_helper: - Changed them so that they now use a hard-float abi for armeabi-v7a. - Updated them to use immersive mode on Android API level 19 and higher. - Fixed a problem with Check_ReleaseStringUTFCharsin /system/lib/libdvm.sothat was causing crashes on x86 devices. - Fixed ndk-buildfails that happen in cygwin when the NDK package is referenced via symlink. - Fixed ndk-build.cmdfails that happen in windows cmd.exewhen LOCAL_SRC_FILEScontains absolute paths. (Issue 69992) - Fixed the ndk-stackscript to proceed even when it can't parse a frame due to inability to find a routine, filename, or line number. In any of these cases, it prints ??. - Fixed the - Fixed gabi++ so that it: - Does not use malloc() to allocate C++ thread-local objects. - Avoids deadlocks in gabi++ in cases where libc.debug.malloc is non-zero in userdebug/eng Android platform builds. - Other changes: - Added LOCAL_EXPORT_LDFLAGS. - Introduced the" - Provided the ability to rebuild all of STL with debugging info in an optional, separate package called android-ndk-r9c-cxx-stl-libs-with-debugging-info.zip, using the -goption. This option helps the ndk-stackscript provide better a stack dump across STL. This change should not affect the code/size of the final, stripped file. - Enhanced hello-jnisamples to report APP_ABIat compilation. - Used the artool in Deterministic mode (option -D) to build static libraries. (Issue 60705) Android NDK, Revision 9b (October 2013) - Important changes: - Updated include/android/*hand math.hfor all Android API levels up to 18, including the addition of levels 13, 15, 16 and 17. For information on added APIs, see commit messages for Changes 68012 and 68014. (Issues 47150, 58528, and 38423) - Added support for Android API level 19, including Renderscript binding. - Added support for -mhard-floatin the existing armeabi-v7a ABI. For more information and current restrictions on Clang, see tests/device/hard-float/jni/Android.mk. - Migrated from GNU Compiler Collection (GCC) 4.8 to 4.8.2, and added diagnostic color support. To enable diagnostic colors, set -fdiagnostics-color=auto, -fdiagnostics-color=always,or export GCC_COLORSas shown below: GCC_COLORS='error=01;31:warning=01;35:note=01;36:caret=01;32:locus=01:quote=01'For more information, see GCC Language Independent Options. - Added two new samples to demonstrate OpenGL ES 3.0 features: Teapot and MoreTeapots. These samples run on devices with Android 4.1 (API level 16) and higher. - Deprecated GCC 4.7 and Clang 3.2 support, which will be removed in the next release. - Important bug fixes: - Fixed problem with ARM GCC 4.6 thumb2failing to generate 16-bit relative jump tables. (GCC Issue) - Fixed GCC 4.8 internal compiler error (ICE) on g++.dg/cpp0x/lambda/lambda-defarg3.C. (Change 62770, GCC Issue) - Fixed a problem with Windows 32-bit *-gdb.exeexecutables failing to launch. (Issue 58975) - Fixed GCC 4.8 ICE when building bullet library. The error message is as follows: internal compiler error: verify_flow_info failed(Issue 58916, GCC Issue) - Modified GDB/ARM build to skip ARM.exidxdata for unwinding in prologue code and added a command ( set arm exidx-unwinding) to control exidx-based stack unwinding. (Issue 55826) - Fixed Clang 3.3 MIPS compiler problem where HI and LO registers are incorrectly reused. - Fixed issue with MIPS 4.7 ICE in dbx_reg_number. The error message is as follows: external/icu4c/i18n/decimfmt.cpp:1322:1: internal compiler error: in dbx_reg_number, at dwarf2out.c:10185(GCC Patch) - Other bug fixes: - Header fixes - Fixed the ARM WCHAR_MINand WCHAR_MAXto be unsigned according to spec (the X86/MIPS versions are signed). Define _WCHAR_IS_ALWAYS_SIGNEDto restore old behavior. (Issue 57749) - Fixed include/netinet/tcp.hto contain TCP_INFOstate enum. (Issue 38881) - Fixed the cdefs_elh.hmacro _C_LABEL_STRINGto stop generating warnings in the GCC 4.8 toolchain when using c++11 mode. (Issue 58135, Issue 58652) - Removed non-existent functions imaxabsand imaxdivfrom header inttypes.h. - Fixed issue with pthread_exit()return values and pthread_self(). (Issue 60686) - Added missing mkdtemp()function, which already exists in bionicheader stdlib.h. - Fixed problem building samples/gles3jniwith Clang on Android API level 11. - Fixed MCLinker to allow multiple occurrences of the following options: -gc-sectionsand --eh-frame-hdr. - Fixed MCLinker to accept the --no-warn-mismatchoption. - Modified cpu-featuresoption to not assume all VFPv4 devices support IDIV. Now this option only adds IDIV to white-listed devices, including Nexus 4. (Issue 57637) - Fixed problem with android_native_app_glue.cerroneously logging errors on event predispatch operations. - Fixed all operations on gabi++terminate and unexpected_handler to be thread-safe. - Fixed several issues with Clang -integrated-asoption so it can pass tests for ssax-instructionsand fenv. - Fixed GCC 4.6/4.7/4.8 compiler to pass the linker option --eh-frame-hdreven for static executables. For more information, see the GCC patch. - Fixed extra apostrophe in CPU-ARCH-ABIS.html. For more information, see NDK-DEPENDS.html. (Issue 60142) - Fixed extra quotes in ndk-build output on Windows. (Issue 60649) - Fixed Clang 3.3 to compile ARM's built-in, atomic operations such as __atomic_fetch_add, __atomic_fetch_sub, and __atomic_fetch_or. - Fixed Clang 3.3 ICE with customized vfprintf. (Clang issue) - Other changes: - Enabled OpenMP for all GCC builds. To use this feature, add the following flags to your build settings: LOCAL_CFLAGS += -fopenmp LOCAL_LDFLAGS += -fopenmpFor code examples, see tests/device/test-openmp - Reduced the size of ld.mcldsignificantly (1.5MB vs. ld.bfd3.5MB and ld.gold7.5MB), resulting in a speed improvement of approximately 20%. - Added. - Added gabi++array helper functions. - Modified GCC builds so that all libgcc.afiles are built with -funwind-tablesto allow the stack to be unwound past previously blocked points, such as __aeabi_idiv0. - Added Ingenic MXU support in MIPS GCC4.6/4.7/4.8 with new -mmxuoption. - Extended MIPS GCC4.6/4.7/4.8 -mldc1-sdc1to control ldxc1/sdxc1 too - Added crazy linker. For more information, see sources/android/crazy_linker/README.TXT. - Fixed bitmap-plasmato draw to full screen rather than a 200x200 pixel area. - Reduced linux and darwin toolchain sizes by 25% by creating symlinks to identical files. Android NDK, Revision 9 (July 2013) - Important changes: - Added support for Android 4.3 (API level 18). For more information, see STABLE-APIS.htmland new code examples in samples/gles3jni/README. - Added headers and libraries for OpenGL ES 3.0, which is supported by Android 4.3 (API level 18) and higher. - Added GNU Compiler Collection (GCC) 4.8 compiler to the NDK. Since GCC 4.6 is still the default, you must explicitly enable this option: - For ndk-buildbuilds, export NDK_TOOLCHAIN_VERSION=4.8or add it in Application.mk. - For standalone builds, use the --toolchain=option in make-standalone-toolchain.sh, for example: --toolchain=arm-linux-androideabi-4.8 Note: The -Wunused-local-typedefsoptionbuild option when building for kernels that do not support this feature. - Added Clang 3.3 support. The NDK_TOOLCHAIN_VERSION=clangbuild option now picks Clang 3.3 by default. Note: Both GCC 4.4.3 and Clang 3.1 are deprecated, and will be removed from the next NDK release. - Updated GNU Project Debugger (GDB) to support python 2.7.5. - Added MCLinker to support Windows hosts. Since ld.goldis the default where available, you must add -fuse-ld=mcldin LOCAL_LDFLAGSor APP_LDFLAGSto enable MCLinker. - Added ndk-dependstool which prints ELF library dependencies. For more information, see NDK-DEPENDS.html. (Issue 53486) - Important bug fixes: - Fixed potential event handling issue in android_native_app_glue. (Issue 41755) - Fixed ARM/GCC-4.7 build to generate sufficient alignment for NEON load and store instructions VST and VLD. (GCC Issue 57271) - Fixed a GCC 4.4.3/4.6/4.7 internal compiler error (ICE) for a constant negative index value on a string literal. (Issue 54623) - Fixed GCC 4.7 segmentation fault for constant initialization with an object address. (Issue 56508) - Fixed GCC 4.6 ARM segmentation fault for -Ovalues when using Boost 1.52.0. (Issue 42891) - Fixed libc.soand libc.ato support the wait4()function. (Issue 19854) - Updated the x86 libc.so and libc.a files to include the clone()function. - Fixed LOCAL_SHORT_COMMANDSbug where the linker.listfile is empty or not used. - Fixed GCC MIPS build on Mac OS to use CFI directives, without which ld.mcld --eh-frame-hdrfails frequently. - Fixed Clang 3.2 X86/MIPS internal compiler error in llvm/lib/VMCore/Value.cpp. (Change 59021) - Fixed GCC 4.7 64-bit Windows assembler crash. (Error: out of memory allocating 4294967280 bytes). - Updatedoption to restore previous behavior. - Fixed GDB crash when library list is empty. - Fixed GDB crash when using a stepicommand past a bx pcor blx pcThumb instruction. (Issue 56962, Issue 36149) - Fixed MIPS gdbserverto look for DT_MIPS_RLD_MAPinstead of DT_DEBUG. (Issue 56586) - Fixed a circular dependency in the ndk-build script, for example: If A->B and B->B, then B was dropped from build. (Issue 56690) - Other bug fixes: - Fixed the ndk-buildscript to enable you to specify a version of Clang as a command line option (e.g., NDK_TOOLCHAIN_VERSION=clang3.2). Previously, only specifying the version as an environment variable worked. - Fixed gabi++ size of _Unwind_Exceptionto be 24 for MIPS build targets when using the Clang compiler. (Change 54141) - Fixed the ndk-buildscript to ensure that built libraries are actually removed from projects that include prebuilt static libraries when using the ndk-build cleancommand. (Change 54461, Change 54480) - Modified the NDK_ANALYZE=1option to be less verbose. - Fixed gnu-libstdc++/Android.mkto include a backward/path for builds that use backward compatibility. (Issue 53404) - Fixed a problem where stlport newsometimes returned random values. - Fixed ndk-gdbto match the order of CPU_ABIS, not APP_ABIS. (Issue 54033) - Fixed a problem where the NDK 64-bit build on MacOSX chooses the wrong path for compiler. (Issue 53769) - Fixed build scripts to detect 64-bit Windows Vista. (Issue 54485) - Fixed x86 ntonl/swap32error: invalid 'asm': operand number out of range. (Issue 54465, Change 57242) - Fixed ld.goldto merge string literals. - Fixed ld.goldto handle large symbol alignment. - Updated ld.goldto enable the --sort-section=nameoption. - Fixed GCC 4.4.3/4.6/4.7 to suppress the -export-dynamicoption for statically linked programs. GCC no longer adds an .interpsection for statically linked programs. - Fixed GCC 4.4.3 stlportcompilation error about inconsistent typedefof _Unwind_Control_Block. (Issue 54426) - Fixed awkscripts to handle AndroidManifest.xmlfiles created on Windows which may contain trailing \rcharacters and cause build errors. (Issue 42548) - Fixed make-standalone-toolchain.shto probe the prebuilts/directory to detect if the host is 32 bit or 64 bit. - Fixed the Clang 3.2 -integrated-asoption. - Fixed the Clang 3.2 ARM EHABI compact model pr1and pr2handler data. - Added Clang -mllvm -arm-enable-ehabioption to fix the following Clang error: clang: for the -arm-enable-ehabi option: may only occur zero or one times! - Fixed build failure when there is no uses-sdkelement in application manifest. (Issue 57015) - Other changes: - Header Fixes - Modified headers to make __set_errnoan inlined function, since __set_errnoin errno.his deprecated, and libc.sono longer exports it. - Modified elf.hto include stdint.h. (Issue 55443) - Fixed sys/un.hto be included independently of other headers. (Issue 53646) - Fixed all of the MotionEvent_getHistoricalAPI family to take the const AInputEvent* motion_event. (Issue 55873) - Fixed malloc_usable_sizeto take const void*. (Issue 55725) - Fixed stdint.h to be more compatible with C99. (Change 46821) - Modified wchar.hto not redefine WCHAR_MAXand WCHAR_MIN - Fixed <inttypes.h>declaration for pointer-related PRIand SCNmacros. (Issue 57218) - Changed the) - Added more formatting in NDK docs/and miscellaneous documentation fixes. - Added support for a thin archive technique when building static libraries. (Issue 40303) - Updated script make-standalone-toolchain.shto support the stlportlibrary in addition to gnustl, when you specify the option --stl=stlport. For more information, see STANDALONE-TOOLCHAIN.html. - Updated the make-standalone-toolchain.shscript so that the --llvm-version=option creates the $TOOLCHAIN_PREFIX-clangand $TOOLCHAIN_PREFIX-clang++scripts in addition to clangand clang++, to avoid using the host's clang and clang++ definitions by accident. - Added two flags to re-enable two optimizations in upstream Clang but disabled in NDK for better compatibility with code compiled by GCC: - Added a . - Added a . - Added . - Downgraded the event severity from warning to info if) - Added the android_getCpuIdArm()and android_setCpuArm()methods to cpu-features.c. This addition enables easier retrieval of the ARM CPUID information. (Issue 53689) - Modified ndk-buildto use GCC 4.7's as/ldfor Clang compiling. Note: In GCC 4.7, monotonic_clockand is_monotonichave been renamed to steady_clockand is_steady, respectively. - Added the following new warnings to the ndk-buildscript: - Added warnings if LOCAL_LDLIBS/LDFLAGSare used in static library modules. - Added a warning if a configuration has no module to build. - Added a warning for non-system libraries being used in LOCAL_LDLIBS/LDFLAGSof a shared library or executable modules. - Updated build scripts, so that if APP_MODULESis not defined and only static libraries are listed in Android.mk, the script force-builds all of them. (Issue 53502) - Updated ndk-buildto support absolute paths in LOCAL_SRC_FILES. - Removed the *-gdbtuiexecutables, which are duplicates of the *-gdbexecutables with the -tuioption enabled. - Updated the build scripts to warn you when the Edison Design Group (EDG) compiler front-end turns _STLP_HAS_INCLUDE_NEXTback on. (Issue 53646) - Added the environment variable NDK_LIBS_OUTto allow overriding of the path for libraries/gdbserverfrom the default $PROJECT/libs. For more information, see OVERVIEW.html. - Changed ndk-build script defaults to compile code with format string protection -Wformat -Werror=format-security. You may set LOCAL_DISABLE_FORMAT_STRING_CHECKS=trueto disable it. For more information, see ANDROID-MK.html - Added STL pretty-print support in ndk-gdb-py. For more information, see NDK-GDB.html. - Added tests based on the googletest frameworks. - Added a notification to the toolchain build script that warns you if the current shell is not bash. Android NDK, Revision 8e (March 2013) - Important changes: - Added 64-bit host toolchain set (package name suffix *-x86_64.*). For more information, see CHANGES.HTMLand NDK-BUILD.html. - Added Clang 3.2 compiler. GCC 4.6 is still the default. For information on using the Clang compiler, see CHANGES.HTML. - Added static code analyzer for Linux/MacOSX hosts. For information on using the analyzer, see CHANGES.HTML. - Added MCLinker for Linux/MacOSX hosts as an experimental feature. The ld.goldlinker is the default where available, so you must explicitly enable it. For more information, see CHANGES.HTML. - Updated ndk-build to use topological sort for module dependencies, which means the build automatically sorts out the order of libraries specified in LOCAL_STATIC_LIBRARIES, LOCAL_WHOLE_STATIC_LIBRARIESand LOCAL_SHARED_LIBRARIES. For more information, see CHANGES.HTML. (Issue 39378) - Important bug fixes: - Fixed build script to build all toolchains in -O2. Toolchains in previous releases were incorrectly built without optimization. - Fixed build script which unconditionally builds Clang/llvm for MacOSX in 64-bit. - Fixed GCC 4.6/4.7 internal compiler error: gen_thumb_movhi_clobber at config/arm/arm.md:5832. (Issue 52732) - Fixed build problem where GCC/ARM 4.6/4.7 fails to link code using 64-bit atomic built-in functions. (Issue 41297) - Fixed GCC 4.7 linker DIV usage mismatch errors. (Sourceware Issue) - Fixed GCC 4.7 internal compiler error build_data_member_initialization, at cp/semantics.c:5790. - Fixed GCC 4.7 internal compiler error redirect_eh_edge_1, at tree-eh.c:2214. (Issue 52909) - Fixed a GCC 4.7 segfault. (GCC Issue) - Fixed <chrono>clock resolution and enabled steady_clock. (Issue 39680) - Fixed toolchain to enable _GLIBCXX_HAS_GTHREADSfor GCC 4.7 libstdc++. (Issue 41770, Issue 41859) - Fixed problem with the X86 MXX/SSE code failing to link due to missing posix_memalign. (Change 51872) - Fixed GCC4.7/X86 segmentation fault in i386.c, function distance_non_agu_define_in_bb(). (Change 50383) - Fixed GCC4.7/X86 to restore earlier cmovbehavior. (GCC Issue) - Fixed handling NULL return value of setlocale()in libstdc++/GCC4.7. (Issue 46718) - Fixed ld.goldruntime undefined reference to __exidx_startand __exidx_start_end. (Change 52134) - Fixed Clang 3.1 internal compiler error when using Eigen library. (Issue 41246) - Fixed Clang 3.1 internal compiler error including <chrono>in C++11 mode. (Issue 39600) - Fixed Clang 3.1 internal compiler error when generating object code for a method call to a uniform initialized rvalue. (Issue 41387) - Fixed Clang 3.1/X86 stack realignment. (Change 52154) - Fixed problem with GNU Debugger (GDB) SIGILL when debugging on Android 4.1.2. (Issue 40941) - Fixed problem where GDB cannot set source:linebreakpoints when symbols contain long, indirect file paths. (Issue 42448) - Fixed GDB read_program_headerfor MIPS PIE executables. (Change 49592) - Fixed STLportsegmentation fault in uncaught_exception(). (Change 50236) - Fixed STLportbus error in exception handling due to unaligned access of DW_EH_PE_udata2, DW_EH_PE_udata4, and DW_EH_PE_udata8. - Fixed Gabi++ infinite recursion problem with nothrow new[]operator. (Issue 52833) - Fixed Gabi++ wrong offset to exception handler pointer. (Change 53446) - Removed Gabi++ redundant free on exception object (Change 53447) - Other bug fixes: - Fixed NDK headers: - Removed redundant definitions of size_t, ssize_t, and ptrdiff_t. - Fixed MIPS and ARM fenv.hheader. - Fixed stddef.hto not redefine offsetofsince it already exists in the toolchain. - Fixed elf.hto contain Elf32_auxv_tand Elf64_auxv_t. (Issue 38441) - Fixed the #ifdefC++ definitions in the OpenSLES_AndroidConfiguration.hheader file. (Issue 53163) - Fixed STLportto abort after out of memory error instead of silently exiting. - Fixed system and Gabi++ headers to be able to compile with API level 8 and lower. - Fixed cpufeaturesto not parse /proc/self/auxv. (Issue 43055) - Fixed ld.goldto not depend on host libstdc++ and on Windows platforms, to not depend on the libgcc_sjlj_1.dlllibrary. - Fixed Clang 3.1 which emits inconsistent register list in .vsaveand fails assembler. (Change 49930) - Fixed Clang 3.1 to be able to compile libgabi++ and pass the test-stlporttests for MIPS build targets. (Change 51961) - Fixed Clang 3.1 to only enable exception by default for C++, not for C. - Fixed several issues in Clang 3.1 to pass most GNU exception tests. - Fixed scripts clangand clang++in standalone NDK compiler to detect -cc1and to not specify -targetwhen found. - Fixed ndk-buildto observe NDK_APP_OUTset in Application.mk. - Fixed X86 libc.soand lib.awhich were missing the sigsetjmpand siglongjmpfunctions already declared in setjmp.h. (Issue 19851) - Patched GCC 4.4.3/4.6/4.7 libstdc++ to work with Clang in C++ 11. (Clang Issue) - Fixed cygwin path in argument passed to HOST_AWK. - Fixed ndk-buildscript warning in windows when running from project's JNI directory. (Issue 40192) - Fixed problem where the ndk-buildscript does not build if makefile has trailing whitespace in the LOCAL_PATHdefinition. (Issue 42841) - Other changes: - Enabled threading support in GCC/MIPS toolchain. - Updated GCC exception handling helpers __cxa_begin_cleanupand __cxa_type_matchto have default visibility from the previous hidden visibility in GNU libstdc++. For more information, see CHANGES.HTML. - Updated build scripts so that Gabi++ and STLport static libraries are now built with hidden visibility except for exception handling helpers. - Updated build so that STLportis built for ARM in Thumb mode. - Added support for std::set_new_handlerin Gabi++. (Issue 52805) - Enabled FUTEXsystem call in GNU libstdc++. - Updated ndk-buildso that it no longer copies prebuilt static library to a project's obj/local/<abi>/directory. (Issue 40302) - Removed __ARM_ARCH_5*__from ARM toolchains/*/setup.mkscript. (Issue 21132) - Built additional GNU libstdc++ libraries in thumb for ARM. - Enabled MIPS floating-point madd/msub/nmadd/nmsub/recip/rsqrtinstructions with 32-bit FPU. - Enabled graphite loop optimizer in GCC 4.6 and 4.7 to allow more optimizations: -fgraphite, -fgraphite-identity, -floop-block, -floop-flatten, -floop-interchange, -floop-strip-mine, -floop-parallelize-all, and -ftree-loop-linear. (info) - Enabled pollyfor Clang 3.1 on Linux and Max OS X 32-bit hosts which analyzes and optimizes memory access. (info) - Enabled -fltoin GCC 4.7, 4.6, Clang 3.2 and Clang 3.1 on linux (Clang LTO via LLVMgold.so). MIPS compiler targets are not supported because ld.goldis not available. - Enabled --pluginand --plugin-optfor ld.goldin GCC 4.6/4.7. - Enabled --text-reorderfor ld.goldin GCC 4.7. - Configured GNU libstdc++ with _GLIBCXX_USE_C99_MATHwhich undefines the isinfscript in the bionic header. For more information, see CHANGES.html. - Added APP_LDFLAGSto the build scripts. For more information, see ANDROID-MK.html. - Updated build scripts to allow NDK_LOG=0to disable the NDK_LOG. - Updated build scripts to allow NDK_HOST_32BIT=0to disable the host developer environment 32-bit toolchain. - Changed the default GCC/X86 flags -march=and -mtune=from pentiumproand genericto i686and atom. - Enhanced toolchain build scripts: - Fixed a race condition in build-gcc.shfor the mingwbuild type which was preventing a significant amount of parallel build processing. - Updated build-gabi++.shand build-stlport.shso they can now run from the NDK package. (Issue 52835) - Fixed run-tests.shin the MSysutilities collection. - Improved 64-bit host toolchain and Canadian Cross build support. - Updated build-mingw64-toolchain.shscript to more recent version. - Added option to build libgnustl_static.aand stlport_static.awithout hidden visibility. Android NDK, Revision 8d (December 2012) - Important changes: - Added the GNU Compiler Collection (GCC) 4.7 compiler to the NDK. The GCC 4.6 compiler is still the default, so you must to explicitly enable the new version as follows: - For ndk-build, export the NDK_TOOLCHAIN_VERSION=4.7variable or add it to Application.mk. - For standalone builds, add the --toolchain=option to make-standalone-toolchain.sh, for example: --toolchain=arm-linux-androideabi-4.7 Note: This feature is experimental. Please try it and report any issues. - Added stlportexception support via gabi++. Note that the new gabi++ depends on dlopenand related code, meaning that: - You can no longer build a static executable using. - If your project links using . - Added a -guardsetting itself does not enable any -fstack-protector*options. - Added android_setCpu()function to sources/android/cpufeatures/cpu-features.cfor use when auto-detection via /procis not possible in Android 4.1 and higher. (Chromium Issue 164154) - Important bug fixes: - Fixed unnecessary rebuild of object files when using the ndk-buildscript. (Issue 39810) - Fixed a linker failure with the NDK 8c release for Mac OS X 10.6.x that produced the following error:. - Removed the -x c++options from the Clang++ standalone build script. (Issue 39089) - Fixed issues using the NDK_TOOLCHAIN_VERSION=clang3.1option in Cygwin. (Issue 39585) - Fixed the make-standalone-toolchain.shscript to allow generation of a standalone toolchain using the Cygwin or MinGW environments. The resulting toolchain can be used in Cygwin, MingGW or CMD.exe environments. (Issue 39915, Issue 39585) - Added missing SL_IID_ANDROIDBUFFERQUEUESOURCEoption in android-14 builds for ARM and X86. (Issue 40625) - Fixed x86 CPU detection for the ANDROID_CPU_X86_FEATURE_MOVBEfeature. (Issue 39317) - Fixed an issue preventing the Standard Template Library (STL) from using C++ sources that do not have a .cppfile extension. - Fixed GCC 4.6 ARM internal compiler error at reload1.c:1061. (Issue 20862) - Fixed GCC 4.4.3 ARM internal compiler error at emit-rtl.c:1954. (Issue 22336) - Fixed GCC 4.4.3 ARM internal compiler error at postreload.c:396. (Issue 22345) - Fixed problem with GCC 4.6/4.7 skipping lambda functions. (Issue 35933) - Other bug fixes: - NDK header file fixes: - Fixed __WINT_TYPE__and wint_tto be the same type. - Corrected typo in android/bitmap.h. (Issue 15134) - Corrected typo in errno.h. - Added check for the presence of __STDC_VERSION__in sys/cdefs.h. (Issue 14627) - Reorganized headers in byteswap.hand dirent.h. - Fixed limits.hto include page.hwhich provides PAGE_SIZEsettings. (Issue 39983) - Fixed return type of glGetAttribLocation()and glGetUniformLocation()from intto GLint. - Fixed __BYTE_ORDERconstant for x86 builds. (Issue 39824) - Fixed ndk-buildscript to not overwrite -Oswith -O2for ARM builds. - Fixed build scripts to allow overwriting of HOST_AWK, HOST_SED, and HOST_MAKEsettings. - Fixed issue for ld.goldon fsck_msdosbuilds linking objects built by the Intel C/C++ compiler (ICC). - Fixed ARM EHABI support in Clang to conform to specifications. - Fixed GNU Debugger (GDB) to shorten the time spent on walking the target's link map during solibevents. (Issue 38402) - Fixed missing libgcc.afile when linking shared libraries. - Other changes: - Backported 64-bit built-in atomic functions for ARM to GCC 4.6. - Added documentation for audio output latency, along with other documentation and fixes. - Fixed debug builds with Clang so that non-void functions now raise a SIGILLsignal for paths without a return statement. - Updated make-standalone-toolchain.shto accept the suffix -clang3.1which is equivalent to adding --llvm-version=3.1to the GCC 4.6 toolchain. - Updated GCC and Clang bug report URL to: s.html - Added ARM ELF support to llvm-objdump. - Suppressed treating c input as c++ warning for Clang builds. - Updated build so that only the 32-bit version of libiberty.ais built and placed in lib32/. Android NDK, Revision 8c (November 2012) -or add this environment variable setting to Application.mk. - For standalone builds, add -. - Added Gold linker. - Added checks for spaces in the NDK path to the ndk-build[.cmd]and ndk-gdbscripts, to prevent build errors that are difficult to diagnose. - Made the following changes to API level handling: - Modified build logic so that projects that specify android-10through android-13in APP_PLATFORM, project.propertiesor default.propertieslink against android-9instead of android-14. - Updated build so that executables using android-16 (Jelly Bean) or higher are compiled with the -fPIEoption for position-independent executables (PIE). A new APP_PIEoption allows you to control this behavior. See APPLICATION-MK.htmlfor details. Note: All API levels above 14 still link against platforms/android-14and no new platforms/android-Nhave been added. - Modified ndk-buildto provide warnings if the adjusted API level is larger than android:minSdkVersionin the project's AndroidManifest.xml. - Updated the cpu-featureshelper library to include more ARM-specific features. See sources/android/cpufeatures/cpu-features.hfor details. - Modified the long double on the X86 platform to be 8 bytes. This data type is now the same size as a double, but is still treated as a distinct type. - Updated build for APP_ABI=armeabi-v7a: - Modified this build type to pass the -march=armv7-aparameter to the linker. This change ensures that v7-specific libraries and crt*.oare linked correctly. - Added -mfpu=vfpv3-d16to ndk-buildinstead of the -mfpu=vfpoption used in previous releases. - Important bug fixes: - Fixed an issue where running make-standalone-toolchain.shwith root privileges resulted in the stand alone tool chain being inaccessible to some users. (Issue 35279) - All files and executables in the NDK release package are set to have read and execute permissions for all. - The ownership/group of libstdc++.ais now preserved when copied. - Removed redundant \rfrom Windows prebuilt echo.exe. The redundant \rcaused gdb.setupto fail in the GNU Debugger (GDB) because it incorrectly became part of the path. (Issue 36054) - Fixed Windows parallel builds that sometimes failed due to timing issues in the host-mkdirimplementation. (Issue 25875) - Fixed GCC 4.4.3 GNU libstdc++to not merge typeinfonames by default. For more details, see toolchain repo gcc/gcc-4.4.3/libstdc++-v3/libsupc++/typeinfo. (Issue 22165) - Fixed problem on nullcontext in GCC 4.6 cp/mangle.c::write_unscoped_name, where GCC may crash when the context is nulland dereferenced in TREE_CODE. - Fixed GCC 4.4.3 crashes on ARM NEON-specific type definitions for floats. (Issue 34613) - Fixed the STLportinternal _IteWrapper::operator*()implementation where a stale stack location holding the dereferenced value was returned and caused runtime crashes. (Issue 38630) - ARM-specific fixes: - Fixed ARM GCC 4.4.3/4.6 g++to not warn that the mangling of <va_list> was changed in GCC 4.4. The workaround using the -Wno-psabiswitch to avoid this warning is no longer required. - Fixed an issue when a project with . - Fixed binutils-2.21/ld.bfdto be capable of linking object from older binutils without tag_FP_arch, which was producing assertion fail error messages in GNU Binutils. (Issue 35209) - Removed Unknown EABI object attribute 44 warning when binutils-2.19/ldlinks prebuilt object by newer binutils-2.21 - Fixed an issue in GNU stdc++compilation with both -mthumband -march=armv7-a, by modifying make-standalone-toolchain.shto populate headers/libsin sub-directory armv7-a/thumb. (Issue 35616) - Fixed unresolvable R_ARM_THM_CALL relocation error. (Issue 35342) - Fixed internal compiler error at reload1.c:3633, caused by the ARM back-end expecting the wrong operand type when sign-extend from char. (GCC Issue 50099) - Fixed internal compiler error with negative shift amount. (GCC Issue) - Fixed -fstack-protectorfor X86, which is also the default for the ndk-buildx86 ABI target. - MIPS-specific fixes: - Fixed STLportendian-ness by setting _STLP_LITTLE_ENDIANto 1 when compiling MIPS libstlport_*. - Fixed GCC __builtin_unreachableissue when compiling LLVM. (GCC Issue 54369) - Backported fix for cc1compile process consuming 100% CPU. (GCC Issue 50380) - GNU Debugger-specific fixes: - Disabled Python support in gdb-7.x at build, otherwise the gdb-7.x configure function may pick up whatever Python version is available on the host and build gdbwith a hard-wired dependency on a specific version of Python. (Issue 36120) - Fixed ndk-gdbwhen APP_ABIcontains alland matchs none of the known architectures. (Issue 35392) - Fixed Windows pathname support, by keeping the :character if it looks like it could be part of a Windows path starting with a drive letter. (GDB Issue 12843) - Fixed adding of hardware breakpoint support for ARM in gdbserver. (GDB Issue) - Added fix to only read the current solibswhen the linker is consistent. This change speeds up solibevent handling. (Issue 37677) - Added fix to make repeated attempts to find solibbreakpoints. GDB now retries enable_break()during every call to svr4_current_sos()until it succeeds. (Change 43563) - Fixed an issue where gdbwould not stop on breakpoints placed in dlopen-edlibraries. (Issue 34856) - Fixed SIGILLin dynamic linker when calling dlopen(), on system where /system/bin/linkeris stripped of symbols and rtld_db_dlactivity()is implemented as Thumb, due to not preserving LSBof sym_addr. (Issue 37147) - Other bug fixes: - Fixed NDK headers: - Fixed arch-mips/include/asm/*code that was incorrectly removed from original kernel. (Change 43335) - Replaced struct member data __unusedwith __linux_unusedin linux/sysctl.hand linux/icmp.hto avoid conflict with #define __unusedin sys/cdefs.h. - Fixed fenv.hfor enclosed C functions with __BEGIN_DECLSand __END_DECLS. - Removed unimplemented functions in malloc.h. - Fixed stdint.hdefinition of uint64_tfor ANSI compilers. (Issue 1952) - Fixed preprocessor macros in <arch>/include/machine/*. - Replaced link.hfor MIPS with new version supporting all platforms. - Removed linux-unistd.h - Move GLibc-specific macros LONG_LONG_MIN, LONG_LONG_MAXand ULONG_LONG_MAXfrom <pthread.h>to <limits.h>. - Fixed a buffer overflow in ndk-stack-parser. - Fixed _STLP_USE_EXCEPTIONS, when not defined, to omit all declarations and uses of __Named_exception. Compiling and use of __Named_exceptionsettings only occurs when STLportis allowed to use exceptions. - Fixed building of Linux-only NDK packages without also building Windows code. Use the following settings to perform this type of build: ./build/tools/make-release.sh --force --systems=linux-x86 - Fixedoptions, you must provide your own __dso_handlebecause crtbegin_so.ois not linked in this case. The content of __dso_handledoes not matter, as shown in the following example code: extern "C" { extern void *__dso_handle __attribute__((__visibility__ ("hidden"))); void *__dso_handle; } - Fixed symbol decoder for ARM used in objdumpfor pltentries to generate a more readable form function@plt. - Removed the following symbols, introduced in GCC 4.6 libgcc.a, from the X86 platform libc.solibrary: __aeabi_idiv0, __aeabi_ldiv0, __aeabi_unwind_cpp_pr1, and __aeabi_unwind_cpp_pr2. - Removed unused .ctors, .dtors, and .eh_framein MIPS crt*_so.S. - Updated) - Other changes: - Removed arch-x86and arch-mipsheaders from platforms/android-[3,4,5,8]. Those headers were incomplete, since both X86 and MIPS ABIs are only supported at API 9 or higher. - Simplified c++ include path in standalone packages, as shown below. (Issue 35279) <path>/arm-linux-androideabi/include/c++/4.6.x-google to: <path>/include/c++/4.6/ - Fixed ndk-buildto recognize more C++ file extensions by default: .cc .cp .cxx .cpp .CPP .c++ .C. You may still use LOCAL_CPP_EXTENSIONto overwrite these extension settings. - Fixed an issue in samples/san-angelesthat caused a black screen or freeze frame on re-launch. - Replaced deprecated APIs in NDK samples. (Issue 20017) hello-gl2from android-5 to android-7 native-activityfrom android-9 to android-10 native-audiofrom android-9 to android-10 native-plasmafrom android-9 to android-10 - Added new branding for Android executables with a simpler scheme in section are deprecated. - Added a new script. - Important bug fixes: - Fixed LOCAL_SHORT_COMMANDSissues on Mac OS, Windows Cygwin environments for static libraries. List file generation is faster, and it is not regenerated to avoid repeated project rebuilds. - Fixed several issues in ndk-gdb: - Updated tool to pass flags -e, -dand -sto adb more consistently. - Updated tool to accept device serial names containing spaces. - Updated tool to retrieve /system/bin/linkinformation, so gdbon the host can set a breakpoint in __dl_rtld_db_dlactivityand be aware of linker activity (e.g., rescan solibsymbols when dlopen()is called). - Fixed ndk-build cleanon Windows, which was failing to remove ./libs/*/lib*.so. - Fixed ndk-build.cmdto return a non-zero ERRORLEVELwhen makefails. - Fixed libc.soto stop incorrectly exporting the __exidx_startand __exidx_endsymbols. - Fixed SEGVwhen unwinding the stack past __libc_initfor ARM and MIPS. - Important changes: - Added GCC 4.6 toolchain ( binutils2.21 with goldand GDB 7.3.x) to co-exist with the original GCC 4.4.3 toolchain ( binutils2.19 and GDB 6.6). - GCC 4.6 is now the default toolchain. You may set NDK_TOOLCHAIN_VERSION=4.4.3in Application.mkto select the original one. - Support for the goldlinker is only available for ARM and x86 architectures on Linux and Mac OS hosts. This support is disabled by default. Add LOCAL_LDLIBS += -fuse-ld=goldin Android.mkto enable it. - Programs compiled with -fPIErequire the new GDBfor debugging, including binaries in Android 4.1 (API Level 16) system images. - The binutils2.21 ldtool contains back-ported fixes from version 2.22: - Fixed ld --gc-sections, which incorrectly retains zombie references to external libraries. (more info). - Fixed ARM stripcommand to preserve the original p_alignand p_flagsin GNU_RELROsection if they are valid. Without this fix, programs built with -fPIEcould not be debugged. (mor e info) - Disabled sincos()optimization for compatibility with older platforms. - Updated build options to enable the Never eXecute (NX) bit and relro/ bind_nowprotections by default: - Added --noexecstackto assembler and -z noexecstackto linker that provides NX protection against buffer overflow attacks by enabling NX bit on stack and heap. - Added -z relroand -z nowto linker for hardening of internal data sections after linking to guard against security vulnerabilities caused by memory corruption. (more info: 1, 2) - These features can be disabled using the following options: - Disable NX protection by setting the --execstackoption for the assembler and -z execstackfor the linker. - Disable hardening of internal data by setting the -z norelroand -z lazyoptions for the linker. - Disable these protections in the NDK jni/Android.mkby setting the following options: LOCAL_DISABLE_NO_EXECUTE=true # disable "--noexecstack" and "-z noexecstack" DISABLE_RELRO=true # disable "-z relro" and "-z now" See docs/ANDROID-MK.htmlfor more details. - Added branding for Android executables with the */ } - Other bug fixes: - Fixed mips-linux-gnurelocation truncated to fit R_MIPS_TLS_LDMissue. (more info) - Fixed ldtool segfaults when using --gc-sections. (more info) - Fixed MIPS GOT_PAGEcounting issue. (more info) - Fixed follow warning symbol link for mips_elf_count_got_symbols. - Fixed follow warning symbol link for mips_elf_allocate_lazy_stub. - Moved MIPS .dynamicto the data segment, so that it is writable. - Replaced hard-coded values for symbols with correct segment sizes for MIPS. - Removed the . - Fixed wrong package names in samples hello-jniand two-libsso that the testsproject underneath it can compile. - Other Changes: - Changed locations of binaries: - Moved gdbserverfrom toolchain/<arch-os-ver>/prebuilt/gdbserverto prebuilt/android-<arch>/gdbserver/gdbserver. - Renamed x86 toolchain prefix from i686-android-linux-to i686-linux-android-. - Moved. - Moved libbfd.aand libintl.afrom lib/to lib32/. - Added and improved various scripts in the rebuild and test NDK toolchain: - Added build-mingw64-toolchain.shto generate a new Linux-hosted toolchain that generates Win32 and Win64 executables. - Improved speed of download-toolchain-sources.shby using the clonecommand and only using checkoutfor the directories that are needed to build the NDK toolchain binaries. - Added build-host-gcc.shand build-host-gdb.shscripts. - Added tests/check-release.shto check the content of a given NDK installation directory, or an existing NDK package. - Rewrote the tests/standalone/run.shstandalone tests . - Removed. By default, code is generated for ARM-based devices. You can add mipsto your APP_ABIdefinition in your Application.mkfile to build for MIPS platforms. For example, the following line instructs ndk-buildto build your code for three distinct ABIs: APP_ABI := armeabi armeabi-v7a mips Unless you rely on architecture-specific assembly sources, such as ARM assembly code, you should not need to touch your Android.mkfiles to build MIPS machine code. - You can build a standalone MIPS toolchain using the -. - Important bug fixes: - Fixed a typo in GAbi++ implementation where the result of dynamic_cast<D>(b)of base class object bto derived class Dis incorrectly adjusted in the opposite direction from the base class. (Issue 28721) - Fixed an issue in which make-standalone-toolchain.shfails to copy libsupc++.*. - Other bug fixes: - Fixed: - Important bug fixes: - Fixed GNU STL armeabi-v7a binaries to not crash on non-NEON devices. The files provided with NDK r7b were not configured properly, resulting in crashes on Tegra2-based devices and others when trying to use certain floating-point functions (e.g., cosf, sinf, expf). - Important changes: - Added support for custom output directories through the NDK_OUTenvironment variable. When defined, this variable is used to store all intermediate generated files, instead of $PROJECT_PATH/obj. The variable is also recognized by ndk-gdb. - Added support for building modules with hundreds or even thousands of source files by defining LOCAL_SHORT_COMMANDSto truein your Android.mk. This change forces the NDK build system to put most linker or archiver options into list files, as a work-around for command-line length limitations. See docs/ANDROID-MK.htmlfor details. - Other bug fixes: - Fixed: - Important bug fixes: - Updated sys/atomics.hto avoid correctness issues on some multi-core ARM-based devices. Rebuild your unmodified sources with this version of the NDK and this problem should be completely eliminated. For more details, read docs/ANDROID-ATOMICS.html. - Reverted to binutils2.19 to fix debugging issues that appeared in NDK r7 (which switched to binutils2.20.1). - Fixed ndk-buildon 32-bit Linux. A packaging error put a 64-bit version of the awkexecutable under prebuilt/linux-x86/binin NDK r7. - Fixed native Windows build ( ndk-build.cmd). Other build modes were not affected. The fixes include: - Removed an infinite loop / stack overflow bug that happened when trying to call ndk-build.cmdfrom a directory that was not the top of your project path (e.g., in any sub-directory of it). - Fixed a problem where the auto-generated dependency files were ignored. This meant that updating a header didn't trigger recompilation of sources that included it. - Fixed a problem where special characters in files or paths, other than spaces and quotes, were not correctly handled. - Fixed the standalone toolchain to generate proper binaries when using -lstdc++(i.e., linking against the GNU libstdc++C++ runtime). You should use -lgnustl_sharedif you want to link against the shared library version or -lstdc++for the static version. See docs/STANDALONE-TOOLCHAIN.htmlfor more details about this fix. - Fixed gnustl_sharedon Cygwin. The linker complained that it couldn't find libsupc++.aeven though the file was at the right location. - Fixed Cygwin C++ link when not using any specific C++ runtime through APP_STL. - Other changes: - When your application uses the GNU libstdc++runtime, the compiler will no longer forcibly enable exceptions and RTTI. This change results in smaller code. If you need these features, you must do one of the following: - Enable exceptions and/or RTTI explicitly in your modules or Application.mk. (recommended) - Define. - Fixed a rare bug where NDK r7 would fail to honor the LOCAL_ARM_MODEvalue and always compile certain source files (but not all) to 32-bit instructions. STLport: Refresh the sources to match the Android platform version. This update fixes a few minor bugs: - Fixed instantiation of an incomplete type - Fixed minor "==" versus "=" typo - Used memmoveinstead of memcpyin string::assign - Added better handling of IsNANorINF, IsINF, IsNegNAN, etc. For complete details, see the commit log. STLport: Removed 5 unnecessary static initializers from the library. - The GNU libstdc++ libraries for armeabi-v7a were mistakenly compiled for armeabi instead. This change had no impact on correctness, but using the right ABI should provide slightly better performance. - The. - Cygwin: ndk-buildno longer creates an empty "NUL" file in the current directory when invoked. - Cygwin: Added better automatic dependency detection. In the previous version, it didn't work properly in the following cases: - When the Cygwin drive prefix was not /cygdrive. - When using drive-less mounts, for example, when Cygwin would translate /hometo \\server\subdirinstead of C:\Some\Dir. - Cygwin:: - New features - Added official NDK APIs for Android 4.0 (API level 14), which adds the following native features to the platform: - Added native multimedia API based on the Khronos Group OpenMAX AL 1.0.1 standard. The new . - Updated the native audio API based on the Khronos Group OpenSL ES 1.0.1 standard. With API Level 14, you can now decode compressed audio (e.g. MP3, AAC, Vorbis) to PCM. For more details, see docs/opensles/index.htmland. - Added CCache support. To speed up large rebuilds, define the. - Added support for settingwhen calling ndk-buildfrom the command-line, which is a quick way to check that your project builds for all supported ABIs without changing the project's Application.mk file. For example: ndk-build APP_ABI=all - Added. - Shortened paths to source and object files that are used in build commands. When invoking . - Experimental features - You can now build your NDK source files on Windows without Cygwin by calling thedoes not work on Windows, so you still need Cygwin to debug. This feature is still experimental, so feel free to try it and report issues on the public bug database or public forum. All samples and unit tests shipped with the NDK successfully compile with this feature. - Important bug fixes - Imported shared libraries are now installed by default to the target installation. - Other changes. - Added a new C++ support runtime named gabi++. More details about it are available in the updated docs/CPLUSPLUS-SUPPORT.html. - Added a new C++ support runtime named gnustl_sharedthat corresponds to the shared library version of GNU libstdc++ v3 (GPLv3 license). See more info at docs/CPLUSPLUS-SUPPORT.html - Added support for RTTI in the STLport C++ runtimes (no support for exceptions). - Added support for multiple file extensions in LOCAL_CPP_EXTENSION. For example, to compile both foo.cppand bar.cxxas C++ sources, declare the following: LOCAL_CPP_EXTENSION := .cpp .cxx - Removed many unwanted exported symbols from the link-time shared system libraries provided by the NDK. This ensures that code generated with the standalone toolchain doesn't risk to accidentally depend on a non-stable ABI symbol (e.g. any libgcc.a symbol that changes each time the toolchain used to build the platform is changed) - Refreshed the EGL and OpenGLES Khronos headers to support more extensions. Note that this does not change the NDK ABIs for the corresponding libraries, because each extension must be probed at runtime by the client application.: - GLES 1.x - GLES 2.0: - Important bug fixes - Fixed the build when APP_ABI="armeabi x86"is used for multi-architecture builds. - Fixed the location of prebuilt STLport binaries in the NDK release package. A bug in the packaging script placed them in the wrong location. - Fixed atexit()usage in shared libraries with the x86standalone toolchain. - Fixed make-standalone-toolchain.sh --arch=x86. It used to fail to copy the proper GNU libstdc++ binaries to the right location. - Fixed the standalone toolchain linker warnings about missing the definition and size for the __dso_handlesymbol (ARM only). - Fixed the inclusion order of $(SYSROOT)/usr/includefor x86 builds. See the bug for more information. - Fixed the definitions of. - General notes: - Adds support for the x86 ABI, which allows you to generate machine code that runs on compatible x86-based Android devices. Major features for x86 include x86-specific toolchains, system headers, libraries and debugging support. For all of the details regarding x86 support, see docs/CPU-X86.htmlin the NDK package. By default, code is generated for ARM-based devices, but you can add x86 to your APP_ABIdefinition in your Application.mkfile to build for x86 platforms. For example, the following line instructs ndk-buildto build your code for three distinct ABIs: APP_ABI := armeabi armeabi-v7a x86 Unless you rely on ARM-based assembly sources, you shouldn't need to touch your Android.mkfiles to build x86 machine code. - You can build a standalone x86 toolchain using the --toolchain=x86-4.4.3option when calling make-standalone-toolchain.sh. See docs/STANDALONE-TOOLCHAIN.htmlfor more details. - The new. - Other changes:_LIBRARIESto work correctly with the new toolchain and added documentation for this in docs/ANDROID-MK.html. - Fixed a bug where code linked againstfails, it now prints the list of directories that were searched. This is useful to check that the NDK_MODULE_PATHdefinition used by the build system is correct. - When import-modulesucceeds,failures and improved error messages. <pthread.h>: Fixed the definition of PTHREAD_RWLOCK_INITIALIZERfor API level 9 (Android 2.3) and higher. - Fixed an issue where a module could import itself, resulting in an infinite loop in GNU Make. - Fixed a bug that caused the build to fail if LOCAL_ARM_NEONwas set to true (typo in build/core/build-binary.mk). - Fixed a bug that prevented the compilation of .sassembly files ( .Sfiles - Fixed the following ndk-buildissues: - A bug that created inconsistent dependency files when a compilation error occurred on Windows. This prevented a proper build after the error was fixed in the source code. - A Cygwin-specific bug where using very short paths for the Android NDK installation or the project path led to the generation of invalid dependency files. This made incremental builds impossible. - A typo that prevented the cpufeatures library from working correctly with the new NDK toolchain. - Builds in Cygwin are faster by avoiding calls to cygpath -mfrom GNU Make for every source or object file, which caused problems with very large source trees. In case this doesn't work properly, define NDK_USE_CYGPATH=1in your environment to use cygpath -mag_PATHenvironment variable from working properly when it contained multiple directories separated with a colon. - Thewas added to <netinet/in.h>. - Missing declarations for IN6_IS_ADDR_MC_NODELOCALand IN6_IS_ADDR_MC_GLOBALwere added to <netinet/in6.h>. - 'asm' was replaced with '__asm__': - Input subsystem (such as the keyboard and touch screen) - Access to sensor data (accelerometer, compass, gyroscope, etc). - Event loop APIs to wait for things such as input and sensor events. - Window and surface subsystem - Audio APIs based on the OpenSL ES standard that support playback and recording as well as control over platform audio effects - Access to assets packaged in an .apkfile. -. - Adds an EGL library that lets you create and manage OpenGL ES textures and services. - Adds new sample applications, native-plasmabuild command. - Adds support for easy native debugging of generated machine code on production devices through the new ndk-gdbcommand. - Adds a new Android-specific ABI for ARM-based CPU architectures, armeabi-v7a. The new ABI extends the existing armeabiABI to include these CPU instruction set extensions: - Thumb-2 instructions - VFP hardware FPU instructions (VFPv3-D16) - Optional support for ARM Advanced SIMD (NEON) GCC intrinsics and VFPv3-D32. Supported by devices such as Verizon Droid by Motorola, Google Nexus One, and others. - Adds a new cpufeaturesstaticlibrary, Google Playobjectsobject. Android NDK, Revision 1 (June 2009) Originally released as "Android 1.5 NDK, Release 1". - General notes: - Includes compiler support (GCC) for ARMv5TE instructions, including Thumb-1 instructions. - Includes system headers for stable native APIs, documentation, and sample applications.
https://developer.android.com/ndk/downloads/revision_history?authuser=2
CC-MAIN-2021-21
refinedweb
16,118
52.97
Reference Index Table of Contents pthread_spin_unlock - unlock a spin lock object (ADVANCED REALTIME THREADS) #include <pthread.h> int pthread_spin_unlock(pthread_spinlock_t *lock); The pthread_spin_unlock function shall release the spin lock referenced by lock which was locked via the pthread_spin_lock(3) or pthread_spin_trylock(3) functions. If there are threads spinning on the lock when pthread_spin_unlock is called, the lock becomes available and an unspecified spinning thread shall acquire the lock. Pthreads-w32 does not check ownership of the lock and it is therefore possible for a thread other than the locker to unlock the spin lock. This is not a feature that should be exploited. The results are undefined if this function is called with an uninitialized thread spin lock. Upon successful completion, the pthread_spin_unlock function shall return zero; otherwise, an error number shall be returned to indicate the error. The pthread_spin_unlock function may fail if: This function shall not return an error code of [EINTR]. The following sections are informative. None. Pthreads-w32 does not check ownership of the lock and it is therefore possible for a thread other than the locker to unlock the spin lock. This is not a feature that should be exploited. None. None. pthread_spin_destroy(3) , pthread_spin_lock
http://www.sourceware.org/pthreads-win32/manual/pthread_spin_unlock.html
CC-MAIN-2014-42
refinedweb
200
62.17
This. View Complete Post I have a problem with that because I believe the .aspx pages that are copied in cause massive errors (I can no longer view the site after import). Specifically, the problem: importing the absences list from a MOSS2007 Fab40 Absence tracking p Hi all, I have upload functionality in my application. User can upload excel files and the system save the file into a folder and reads all records one by one. We have a gridview where we display a link to processed excel file. we are also using XSLT version 1.0 for transformations. There is no problem with uploading and downloading 2003 and 2007 excel documents. Also 2010 excel document is uploading properly but unable to download it. Following is the namespace in xslt :" Can anyone please help in resoving
http://www.dotnetspark.com/links/36093-problem-with-list-columns-foundation-2010.aspx
CC-MAIN-2018-30
refinedweb
136
66.23
Article description This article demonstrates the basic techniques used to build "ASP.net Noughts & Crosses" (tic tac toe to our American friends). The game uses the native imaging and drawing features of the .net Framework to dynamically generate a JPEG image which displays the game board to the user. Players can take turns to click on the area of the image where they wish to make a move, their move is then submitted to the web server where, if legal, it is drawn onto the board. The application consists of two aspx web pages which each have an associated code behind page, the source code for the game can be found here Dynamically generating Images in ASP.net One of the more impressive features of ASP.net is the ability to dynamically generate images that can be viewed in a standard web browser, something that was not easy to achieve using previous versions of ASP. We'll start with an example which draws the four lines of the game grid and sends the output to the user as an image. The two main .net base classes used to create and edit images both belong to the System.Drawing namespace, these are: We will be creating a true colour JPEG, therefore the following code can be used to create a 24bit color Bitmap object 300 pixels wide and 300 high that will represent our image. Bitmap objGridBitmap = new Bitmap(300, 300, System.Drawing.Imaging.PixelFormat.Format24bppRgb); Next we need to create a Graphics object which will be used to draw onto our bitmap. This can be done using the static method FromImage() of the Graphics Class, the method has one parameter which is a reference to our bitmap. Graphics objGraphicsGrid = Graphics.FromImage(objGridBitmap); After clearing the surface with a white fill we can now use the draw methods of the Graphics object to draw onto the bitmap. To Begin with we'll draw a single horizontal black line 100 pixels from the top of our bitmap; this is the first line of the grid. We can draw a line using the DrawLine method of the Graphics Class, we will be passing three parameters into the method: Since we are drawing four lines we'll create a single Black pen object three pixels think which can be used to draw all the lines. As each line uses different points we will create two new points for each method call, these two points represent x and y coordinates from the top left corner of the bitmap. //Clear Graphics Object with white brushobjGraphicsGrid.Clear(Color.White);//Create a black PenPen objBlackPen = new Pen(Color.Black, 3);//Draw a LineobjGraphicsGrid.DrawLine(objBlackPen, new Point(0, 100), new Point(300, 100)); We can now draw the remaining three lines in the same fashion. //Draw three more linesobjGraphicsGrid .DrawLine(objBlackPen, new Point(0, 200), new Point(300, 200));objGraphicsGrid .DrawLine(objBlackPen, new Point(100, 0), new Point(100, 300));objGraphicsGrid .DrawLine(objBlackPen, new Point(200, 0), new Point(200, 300)); With the game grid complete we are ready to send the result to the users browser, this is achieved with just two more lines of code. We will use the Response object of the page to first set the content type of the page to image/JPEG and then stream our bitmap to the clients browser using the Save() method of the Bitmap class. //Set the content type to JpegResponse.ContentType = "image/jpeg";//Send image to users browserobjGridBitmap.Save(Response.OutputStream, System.Drawing.Imaging.ImageFormat.Jpeg); That's all there is to it, if all has gone well when we view this page in our web browser the result should be as below. Viewing the image in a html page It's important to realise that the aspx page which we've just created represents a single image and cannot contain any HTML. The reason for this is that when a browser displays a HTML page containing an image the browser actually makes two get requests to the web server, one for the HTML and another for the image. If we wish our dynamic image to appear in a HTML page we need to create a separate web page containing a HTML element that has our dynamic image aspx page as it's source. If we were to name our dynamic image page dynamicimage.aspx then the HTML tag could look like this: <img border="0" name="image1" src ="dynamicimage.aspx"> Interpreting user input To register user input the game uses a form input element of type 'image' which has the dynamic image aspx page as it's source. The reason for this is that when an image input is clicked in a browser the page's form is submitted and the x and y coordinates of where the user clicked on the image are sent to the server. We can then use this information to determine which part of the game grid we need to draw a move. Below is the HTML and the c# code needed to record the users mouse input:- aspx HTML<form name ="form1" action ="get" method ="#">< input</ form> codebehind c# code //Store the gamegrid.X querystring parameter in a local variableint intMouseX = Int32.Parse(Request.QueryString["gamegrid.X"]);//Store the gamegrid.Y querystring parameter in a local variableint intMouseY = Int32.Parse(Request.QueryString["gamegrid.Y"]); Passing information between pages using the Session Object Since there are two aspx pages in the application, one which interprets user input and another to generate the game graphics, it needs the ability to pass data between the two pages. There are a number of ways this can be achieved, however when designing the game I decided to share data using the Session object. As an ASP developer I have previously been very reluctant to use the Session object, this was mainly due to the effect it has on the scalability of web applications. Using sessions effectively meant that an application would not be run correctly on a web farm due to the way session object data was stored. This problem has been overcome in ASP.net with the option of using 'out of process' session databases when the application is run on a web farm. Consequently the session object has once again become a convenient container for small amounts of session data. Storing data in a session object is relatively straightforward, the following code will store an integer in a new session object called s_intMyData Session["s_intMyData"] = 5; Using the value in our new session object for comparison or assignment purposes will require the object being cast to the value it holds, in this case to int. The reason for this is that our session object is of type 'object' even though it is used to hold an int. The following code demonstrates using our session object //store an integer in a session variable in a local integer variableint intMyDataLocal = (int)Session["s_intMyData"];//compare an integer in a session variable with a literal integerif((int)Session["s_intMyData"] == 5){//Do something} I used the Session object in the game to hold an array of integers, we can similarly cast the object to an array of type int, this is demonstrated in the following code //Create new Session object which contains an array of integersSession["s_intMyArray"] = new int[]{1, 2, 3} ;//Store the session object in a local array int[] intMyArrayLocal = (int[])Session["s_intMyArray"]; Conclusion I've discussed most of the major elements that make up the game; the rest of the code is pretty self explanatory and extensively commented. I've only just touched upon the dynamic image generation abilities of ASP.net which offers incredibly feature rich graphics and imaging functionality, both programmatically and functionally, ASP.net is a huge leap forward from ASP 3.0. ASP.NET Naughts and Crosses Reflecting Data to .NET Classes: Part I - From HTML Forms
http://www.c-sharpcorner.com/UploadFile/markjohnson/ASPNetNoughtsCrosses11232005065109AM/ASPNetNoughtsCrosses.aspx
crawl-003
refinedweb
1,316
58.82
java - JSP-Interview Questions java hi.. snd some JSP interview Q&A and i wnt the JNI(Java Native Interface) concepts matrial thanks krishna Hi friend, Read more information. hi online multiple choice examination hi i am developing online multiple choice examination for that i want to store questions,four options,correct answer in a xml file using jsp or java?can any one help me? Please hi hi i want to develop a online bit by bit examination process as part of my project in this i am stuck at how to store multiple choice questions... can any one help me?????? We are providing you a simple application java - Java Interview Questions java i want to java&j2ee interview questions. Regards Akhilesh Kumar Hi friend, I am sending you a link. This link will help you. Read for more information. Servlet & Jsp - Java Interview Questions ); } } } Thanks Thanks a lot..You are doing a great help to me...Servlet & Jsp is it possible of communicating from SERVLET to JSP...; Hi Friend, You can also use RequestDispatcher to forward request from interview questin of java - Java Interview Questions interview questin of java what r the common question of core & addvance java in interview? Hi Garima, I am sending you a link. This link will help you. please visit for more information. http Help Very Very Urgent - JSP-Servlet Help Very Very Urgent Respected Sir/Madam, I am sorry..Actually... requirements.. Please please Its Very very very very very urgent... Thanks/Regards, R.Ragavendran.. Hi friend jsp - JSP-Interview Questions jsp i want to know how to take a number from a user with html form.please help me as soon as possible. Hi Friend, Try the following code: function isNumber(str){ if(/^-?\d+(\.\d+)?$/.test(form.num.value Hi - Hibernate Interview Questions Hi please send me hibernate interview questions Interview - Java Interview Questions . Thanking you. Pabitra kr debanth. Hi friend, I am sending you a link. This link will help you. Please visit for more information. Thanks jsp - Java Interview Questions JSP pages not loading On running the application.. it is not loading the JSP Pages. What could be the possible reason?Thanks for any Help! Hi friend<%@ page language="java" import="java.util.*" JSps - JSP-Interview Questions JSps HI, I want to use scriptlet code into my html:link tag. Is it possible? Kindly help me out. hi all - Java Beginners hi all hi, i need interview questions of the java asap can u please sendme to my mail Hi, Hope you didnt have this eBook. You.../Good_java_j2ee_interview_questions.html?s=1 Regards, Prasanth HI India website. Index | Ask Questions | Site Map Web Services... and offline is evolving very fast. New software development and testing techniques... Tutorial | J2ME Tutorial | JSP Tutorial | Core Java Tutorial Jsp - Java Interview Questions Need JSP Interview Questions Hi, I need JSP interview questions.Thanks tomcat - JSP-Interview Questions tomcat i have installed tomcat server but i have not got tomcat6.exe file in the bin directory.although i have got the icon (of apache tomcat) on the task bar.please help me soon. Hi Friend, If you got the zip to our site. Site map will help you in accessing the complete resource listing... Site Map We have organized our site map for easy access. You can browser though Site Map to reach the tutorials and information pages. We Interview question - JSP-Interview Questions Interview question Hi Friend, Can we inherit interface in JSP. Thank u in advance interview questions - EJB interview questions in Java Need interview questions in Java Hi,Please check the following linksJ2EE tutorial and documentations: Questions: of interview questions and their answers. Thanks thanks for your...hint Dear roseindia, i want the java interview question and the corresponding question answers. Hi Friend, Please visit saving data - JSP-Interview Questions to save the data in JSP and using crimson editor? Thank u very much. rodmel Hi friend, I am sending you data store code into the database...:// Thanks jsp - JSP-Interview Questions jsp what are the life cycles of jsp and give a brief description Hi friend, The lifecycle of jsp page life cycle of jsp... ----------------------------------------- Read for more information. Thanks Help with jQuery show hide div script - JSP-Interview Questions Help with jQuery show hide div script Hi guys, I have a jquery script for showing and hiding the content inside, between two divs, controlled... somebody please help. Thx in advance. content content interview question - Java Interview Questions interview question hello i want technical interview question in current year Hi Friend, Please visit the following links: Set Parameter - JSP-Interview Questions Set Parameter Hi, could someone please explain the process of setting parameter in the session from JSP with the help of code? Thanks! Hi,In your JSP page use the Set Tag, and set the scope attribute Programming help Very Urgent - JSP-Servlet Programming help Very Urgent Respected Sir/Madam, Actually my code... Please please its very urgent.. Thanks/Regards, R.Ragavendran.. Hi friend, Read for more information, jsp - JSP-Interview Questions jsp hi, What is the difference between page and pageContext in jsp.If any body know this answer please tell me thank u JSP - JSP-Interview Questions are the comments in JSP(java server pages)and how many types and what are they.Thanks inadvance. Hi friend, JSP Syntax XML Syntax... A comment marks text or lines that the JSP container additinal info - JSP-Interview Questions a certain processing. I hope you help me in this code. Please refer to my last questions. Regards, Hi Friend, You can use ArrayList class JSP - JSP-Interview Questions within a JSP page and Java Servlet. What is the difference between these objects and what would you use them for? Hi Read for more information. JSP - JSP-Interview Questions bitween include directive and include tag.Thanks in advance. Hi friend, jsp:include tag includes the jsp during runtime, whereas the <%@ include> includes the specified jsp during compilation time. If you modify a jsp jsp - JSP-Interview Questions jsp i want to know how to take value from user in jsp and not with javascript.help me. Hi Friend, Try it: Enter Name: Thanks jsp - JSP-Interview Questions open my jsp page in IE, it will not open related form.It will open only the code as it is.What is the reason for this.Thank you in advance. Hi friend... Hi, if i declare in declaration & same in scriptlets...?? Thanks Hi Friend, Whenever you declare the variables in the scriptlet, it is gone to the service method when jsp converted into servlet jsp - JSP-Interview Questions jsp what are the thinks in el (jsp) nesseay to take care wel codeing Hi Friend, Expression Language (EL), provides a way to simplify expressions in JSP. EL provides the ability to use run-time expressions outside jsp - JSP-Interview Questions JavaServer Pages Standard Tag Library (JSTL) Hi, I need some...;Hi,The JavaServer Pages Standard Tag Library (JSTL), is a component of the Java EE Web application development platform. It extends the JSP specification jsp retrive image - JSP-Interview Questions jsp retrive image How to Display Images in JSP retrieved from MYSQL database? plz help me.. Hi Friend, Please visit the following link: Thanks jsp - JSP-Interview Questions jsp i have installes tomcat server 5.0.how can i run a jsp...; Hi Friend, After installing the tomcat server,put the apache-tomcat folder.... After that create a jsp file 'hello.jsp' and put it into the 'application jsp interview Question - JSP-Interview Questions jsp interview Question What are taglibraries in jsp Hi Friend, Please visit the following link: Hope that it will be helpful for you. Thanks Failed to run Online Quiz Application in JSP - JSP-Interview Questions Failed to run Online Quiz Application in JSP Hi, I have been... and quizeApplication.jsp. Would you please help ? Though I did have my Tomcat and MySQl installed!! Thanks Royman Hi Friend, Please again send JSP - JSP-Interview Questions . But how to write the search i can't understand .. Can any body help me.... this code will develop using jsp only .. And another button i will create help needed - Java Interview Questions help needed What are the new features added in jdk 1.5? Hi friend, The following were the new features added in java 1.5.... * Autoboxing/unboxing ------------------------- This link will help you JSP - JSP-Interview Questions back option also.... Hi Friend, Try the following code: 'pagination.jsp' Pagination of JSP page Roll No Name Marks Difficult Interview Questions Difficult Interview Questions  ... opportunity will be enhanced. Difficult Interview Questions - Page 1...; Difficult Interview Questions - Page 3 Question : -Interview Questions jsp how we retrive a data using iterator in arraylist. give me examples Hi friend, Code to solve the problem : import java.util.ArrayList; import java.util.Iterator; public class ArrayListExam{ public help me in these - Java Interview Questions help me in these hello every body i have some question if you cam plz answer me it is important to me and these are the questions : 1)Write... use the corejava.* Hi Friend, 1) import java.util.*; public class Synchronized - Java Interview Questions Synchronized i want synchronized programs ? in java plz help me?any site Hi Friend, If you want to know about the synchronized method,please visit the following link: help plz - Java Interview Questions help plz 1 )write a program that does the following : a. prompts... , if the input string is abcd, the output is : edcba plz plz plz plz plz help me Hi Friend, Try the following: 1) import java.util. help plz - Java Interview Questions help plz 1 )write a program that does the following : a. prompts the user to input five decimal numbers representing the scores? b. prints...; Hi Friend, Try the following codes: 1)//Input only decimal number plz help - Java Interview Questions don't use "for" Hi Friend, Try the following code 1) import Java - Java Interview Questions Interview interview, Tech are c++, java, Hi friend, Now get more information. you can lean all type interview question by following link link. help me - Java Interview Questions interview questions java pdf Do you have any PDF for Java Interview questions bean - JSP-Interview Questions bean what is use bean in jsp? Hi Friend, Please visit the following links: Hope counter - JSP-Interview Questions using java technology,i explained we can do that by using jsp and servlets... will know how many times the site is viewed by the user. ////////////*****using below code u can get the number that how many times ur site is visited java - JSP-Interview Questions () and forward() methods? Hi JSP forward action transfers the control... file, another JSP file, or a servlet. It should be noted that the target file must be in the same application context as the forwarding JSP file java - Java Interview Questions /interviewquestions/ Here you will get lot of interview questions... ...or any of the sites....which handles these kind of questions... Hi...java hello sir this is suraj .i wanna ask u regarding interview java - Java Interview Questions modifiers Hi Just to let you know, you can find these difference anywhere while searching on google but if you want you can learn it in a interview preparation manner as well.. I have a java interview question URL from where you tomcat - JSP-Interview Questions tomcat how to run a program of jsp using tomcat server.please give me detailed procedure. Hi Friend, After installing and configuring... that create a jsp file 'hello.jsp'and put into the 'application' folder. To start JSP - Java Interview Questions JSP definition of the scope of the project Hi please find... a value to be reused in a single JSP page. The default scope is application."You must practice on JSP. Good Luck Debugging in jsp? - JSP-Interview Questions Debugging in jsp? Hi Friends, am newbie to jsp.How to debug error in jsp ArrayList - JSP-Interview Questions ); when I code this like in my jsp <%ArrayList<Integer> data= new.... Thanks, Hi Friend, We have send you the code which was working... or not. It seems that values are not getting from jsp to servlet. Thanks servlet - Servlet Interview Questions my Java Servlet page at time interval of three minutes? Hi,You can use the following attribute on your html(jsp or servlet generated ) page to refresh... by the URL of the site to load. Note that the "URL=" part is within the " Computer - JSP-Interview Questions for this hi friend, package javacode; import java.io.... ForwardServlet extends HttpServlet{ private static final String forwardTo = "/jsp/ResultServlets"; private static final String includeIn = "/jsp/ResultServlets javascript - Hibernate Interview Questions =null. i think my conditon is wrong can u help me i am sending code. package...==null && txt1==null) { rd=request.getRequestDispatcher("/WEB-INF/jsp...); session.setAttribute("servise",fservises); rd1=request.getRequestDispatcher("/WEB-INF/jsp javascript - JSP-Interview Questions javascript open new window popup How to open new window in popup view using JavaScript? hi friend,<html><title>new window page</title><head></head><body><FORM><INPUT type java, - JSP-Interview Questions java, hi.. define URI? wht is difference b/w URL and URI wht are the diffrent types Sessions in Servelts define wht is meant throw? define wht... as the default for HTTP traffic. Hi friend, URI : 1)(Uniform ContentType - JSP-Interview Questions as " +saveFile); } %> Hi Friend, Use method="POST Scriptless Jsp - JSP-Interview Questions Scriptless Jsp Hi Deepak, Can we create scriptless jsp, if so explain me how, with advantages. can we access database by using javascript only. Thank u in advance hi roseindia - Java Interview Questions hi roseindia advantages of object oriented progamming language? Hi friend, Some advantages of object oriented progamming language : 1) A provides modular structure . 2) The programming can be done with real jsp paging - JSP-Interview Questions /bank.shtml Thanks Hi...jsp paging I am working in JSP paging with ms-access but i have... Advance Thank u Hi friend, You check the Database Table interview questions - Java Interview Questions interview questions for Java Hi Any one can u please post the interview point of questions.Warm Regards, Satish Hi - Struts please help me. its very urgent Hi friend, Some points to be remember...Hi Hi Friends, I want to installed tomcat5.0 version please help me i already visit ur site then i can't understood that why i installed java - Servlet Interview Questions Java Design Patterns Interview Questions and Answers I need to know Java Design Patterns with the help of Interview Questions and Answers Hi malli,GenericServlet is the super class of HttpServlet calssjava soft change password servlets - JSP-Interview Questions change password servlets hi all, i need the codes to create a change password servlet. Hi, I dont have the time to write the code. But i... the database( Update the value for the password). Try this hope it will help Session tracking in login jsp program - JSP-Interview Questions press back button it should be show session is expired.Please help me. Send jsp code on my email id :kirankadam72@gmail.com Hi Friend, Please...Session tracking in login jsp program I have using jsp technology help today plz:( - Java Interview Questions help today plz:( write a program that promptes the user to enter the weight of a package in pounds, and then outputs the weight of the backage... "for" and import java.text.*; Hi Friend, Try the following code java interview - JSP-Interview Questions java interview what type of questions would be asked to a 3 years experience person in java? can anyone please provide list of topics or interview questions for 3 years experience in java help me plz - Java Interview Questions help me plz 1)write a java program that prompts the user to input a decimal number and print the number rounded to the nearest integer? 2)write...? plz answer my question Hi Friend, Try the following code: 1 plz help me - Java Interview Questions plz help me 1)Rewrite the method in exercise 10 such that it use the binary search algorithm instead. the linear search algorithm is suitable.... and thanks for you Hi Friend, Linear Search i need your help - Java Interview Questions than the other. Hi Friend, Try the following code: import Very new to Java Very new to Java hi I am pretty new to java and am wanting to create a programe for the rhyme 10 green bottles. 10 green bottles standing... actually help me with this that would be great Java Servlets - Java Interview Questions > go to for help and download. Hi Friend, 1... class.but it will have a one great feature.that is the class [pojo class] can't extends
http://roseindia.net/tutorialhelp/comment/11171
CC-MAIN-2014-10
refinedweb
2,784
65.83
Red Hat Bugzilla – Bug 222374 Review Request: paprefs - Management tool for PulseAudio Last modified: 2007-11-30 17:11:53 EST Spec URL: SRPM URL: Description: PulseAudio Preferences (paprefs) is a simple GTK based configuration dialog for the PulseAudio sound server. This is my first package and I need a sponsor. This is not a review I think calling the package pulseaudio-preferences would be more intuitive. I wouldn't know to install paprefs to get the pulseaudio preferences. PulseAudio website even refers to paprefs as pulseaudio volume control on the front page. According to PackageNamingGuidelines: "If a new package is considered an "addon" package that enhances or adds a new functionality to an existing Fedora Core or Fedora Extras package without being useful on its own, its name should reflect this fact." Some other issues to adress: * BuildRequires for lynx is commented out, lynx is needed to build * BuildRequires for desktop-file-utils commented out for some reason * "--add-category="X-Fedora" --vendor=" is depriciated, you should also add "--remove-category Application" * %dir is for owning a dir, but not the contents of that dir. since this package owns all files in %{_datadir}/paprefs, %dir %{_datadir}/paprefs is not needed. * Doesn't Require: any of the pulseaudio stack, while paprefs runs without it, I would imagine it to be pretty useless without pulseaudio actually installed Good: rpmlint silent, source matches upstream, spec looks good, runs, includes licencing information, Created attachment 146339 [details] New spec I agree that a name such as pulseaudio-preferences would make more sense. The reason I named it paprefs was for the sake of consistency. All current gtk based packages for pulseaudio have the name of their executable: pavumeter, padevchooser, pavucontrol, paman while all pulseaudio backend package names start with pulseaudio. Maybe should we rename the gtk based packages to the more meaningful pulseaudio-whatever. I am not utterly familiar with desktop-file-install, could you point me to the page that describe the deprecation of some of those parameters. ie: removing the "--vendor=" statement resulted in an rpmbuild failure with the below error: + desktop-file-install --dir /var/tmp/paprefs-0.9.5-1-root-emoret/usr/share/applications --remove-category Application /var/tmp/paprefs-0.9.5-1-root-emoret/usr/share/applications/paprefs.desktop Must specify the vendor namespace for these files with --vendor Re: BuildRequires I took the paman spec file as a template to derive the spec for paprefs. I have addressed all concerns regarding the issues you kindly outlined in Comment #1 and I think most would also apply to all gtk based pulseaudio packages (pavumeter, padevchooser, pavucontrol, paman), so you may want to provide feedback for those other packages as well. This package is realy useful for configuring pulseaudio without editing config files. Would be good to get it in for F7. As soon as this package is reviewed and marked APPROVED I'll commit it in cvs extras. So far no reviewer has looked at it... Why was this added to the CVS admin requests, even though this package is not approved? The whole process has been a bit confusing to me as well! I initially inquired for a sponsorship for the paprefs package but was instead granted it for alsa- plugins (Bug #222248). However I am not the assignee on that specific bug, so I can't do much with it. Anyways, it was my intend to respect the workflow and wait until I get this package reviewed before submitting it in CVS. Do you think you could spare a few cycles in a review of the paprefs package? If not I can just remove that CVS admin request. Request for review, rebuilt source rpm file: I really have no idea why this is still sitting around; it's really unfortunate. It builds fine and looks clean. I'm not sure I have any way to test it, but it runs without crashing and puts up some buttons, and nobody who understands what it's actually supposed to do has stepped up to review it, so.... * source files match upstream: afef8ecadcf81101ccc198589f8e8aadb0b7ec942703e69544613d6801c1c728 paprefs-0.9 (development, x86_64). * package installs properly * debuginfo package looks complete. * rpmlint is silent. * final provides and requires are sane: paprefs = 0.9.5-1.fc8 = libORBit-2.so.0()(64bit) libatk-1.0.so.0()(64bit) libatkmm-1.6.so.1()(64bit) libcairo.so.2()(64bit) libcairomm-1.0.so.1()(64bit) libgcc_s.so.1()(64bit) libgcc_s.so.1(GCC_3.0)(64bit) libgconf-2.so.4()(64bit) libgconfmm-2.6.so.1()(64bit) libgdk-x11-2.0.so.0()(64bit) libgdk_pixbuf-2.0.so.0()(64bit) libgdkmm-2.4.so.1()(64bit) libglade-2.0.so.0()(64bit) libglademm-2.4.so.1()(64bit) libglib-2.0.so.0()(64bit) libglibmm-2.4.so.1()(64bit) libgmodule-2.0.so.0()(64bit) libgobject-2.0.so.0()(64bit) libgthread-2.0.so.0()(64bit) libgtk-x11-2.0.so.0()(64bit) libgtkmm-2.4.so.1()(64bit) libpango-1.0.so.0()(64bit) libpangocairo-1.0.so.0()(64bit) libpangomm-1.4.so.1()(64bit) libpthread.so.0()(64bit) libsigc-2.0.so.0()(64bit) libstdc++.so.6()(64bit) libstdc++.so.6(CXXABI_1.3)(64bit) libstdc++.so.6(GLIBCXX_3.4)(64bit) libxml2.so.2()(64bit) pulseaudio-module-gconf * %check is not present; no test suite upstream. Package seems to run fine for me, although I have no audio on the test machine so I've no idea if it works. *. * GUI app; desktop files exists and looks good to me. APPROVED New Package CVS Request ======================= Package Name: paprefs Short Description: Management GUI tool for PulseAudio Owners: eric.moret@epita.fr Branches: FC-6 F-7 InitialCC: CVS done.
https://bugzilla.redhat.com/show_bug.cgi?id=222374
CC-MAIN-2017-39
refinedweb
955
52.9
iaan hinted to me that some people may be working on multiple language support for Boa's IDE. I would hope my idea here may help in this task, as well as really helping me out! I would like to float an idea I had to implement selectable language support in Boa. Of course I will quickly duck and cover just in case I have stumbled into a mine field :). I would like to add a _init_text method, I imagine that in this method I would see: def _init_text(self): self.ExitText = ('Back to Main Screen', 'Spanish', 'French')[GlobalLang] def _init_ctrls(self, prnt): self._init_text() wxFrame.__init__(self, size = wxSize(640, 480), id = wxID_BOAFRAME, ***SNIP*** self.Exit = wxButton(label = self.ExitText, id = wxID_BOAFRAMEEXIT, parent = self, name = 'Exit', siz ***SNIP*** I would expect to next have to change the Inspector's text input methods. Currently they assume all input is text. Some sort of adjustment is necessary to implement this change. I don't know how most people would prefer it done. I can imagine a couple of ways: 1- Stop implying the "'s. The user can enter any variable or 'text' into the control and then would be responsible for creating his own _init_text method for any variables necessary. Easy to implement quickly, but very manual for the user. 2- A global preferences setting for the project. Either Boa behaves as it does now, or it forces every text field to have a global number of language entries. Probably easier, but coarse control. 3- A check box for each text field, that would allow simple text entry, or create the appropriate _init_text entry. This may be too much control for what is necessary. Also, I don't know how much overhead adding all of the text entries into the class's memory space. Perhaps it would be more efficient to embed the ('english','spanish')[GlobalLang] in the object's instantiation call. Example in case I wasn't clear: self.Exit = wxButton(label = ('english','spanish')[GlobalLang], id = wxID_BOAFRAMEEXIT... The great feature of the _init_text method is this: I only know 1 language, therefore with projects that require additional languages it would be nice to be able to create all of the screens, then have 1 concise area (per screen) for a linguist to go to and fill in the matrix of text entries. I'm sorry for not doing my own research on how to implement this functionality, but I have tried to follow Boa's code, and I just don't understand object oriented/wxPython code enough yet. Although, I will try some experiments if someone can at least point me to the correct methods. Thank you for reading this far. Blaine Lee Blaine Lee wrote: > > > _______________________________________________ > Boa-constructor-users mailing list > Boa-constructor-users@... > First of all: Hi everybody. I'm new to the list and traffic seems low so I'll introduce myself. Nevertheless I hope to find lots of Python junkies here :-) I wanna get going with Python and chose Boa as my GUI for starts. Here's my first Problem when I start it from the commandline: qbert@...:~/Boa-0.0.5 > Boa.py : command not found : command not found I see a cross as mousecursor but nothing else. I klick, the speaker beeps Boa end with this; : command not found ./Boa.py: line 20: syntax error near unexpected token `trace_func(f' '/Boa.py: line 20: `def trace_func(frame, event, arg): qbert@...:~/Boa-0.0.5 > I'm running the SuSE 7.2 Pro distro with KDE 2.2.1. Thanks for the help in advance. I am using Boa 0.0.5 to develop code for an embedded Linux application. The problem I have is the screen colors change occasionally when I enter different Frames. Also, if a dialog is closed, the background it covered may change. Can someone point me where the problem is? I am hoping it is something like I have to declare background colors or some such thing... I have included the top of the frame declarations below. One other thing I am wondering about is that I have my frames nested i.e.: Fmain = BoaFrame( in Fmain's __init__ I have self.Fdiag = BoaFrame self.Foperator = BoaFrame etc... Is this the 'right' way to do things? Linux environment: Software: Debian 2.2 xfree86 3.3.6 wxPython 2.2.5 python 1.5.2 & 2.1.1 (happens under both) Hardware: processor / video NS Geode GXM w/5530 companion chip 640x480 I have also confirmed that colors change under VNC. FYI it is a virtual X server that acts as a frame buffer on the local machine and can be viewed on any connected machine with a viewer (therefore I think its not the xserver). Thank you Blaine Lee If its worth anything, here is the definition of one of the screens that frequently shows this problem: def _init_ctrls(self, prnt): wxFrame.__init__(self, size = wxSize(640, 480), id = wxID_BOAFRAME, title = 'Operator Screen', parent = prnt, name = 'BoaFrame', style = 0, pos = wxPoint(239, 306)) self._init_utils() self.SetBackgroundColour(wxColour(255, 255, 255)) self.SetFont(wxFont(10, wxMODERN, wxNORMAL, wxNORMAL, false, 'clean')) EVT_CLOSE(self, self.OnBoaframeClose) EVT_ACTIVATE(self, self.OnBoaframeActivate) self.Exit = wxButton(label = 'Back to Main Screen', id = wxID_BOAFRAMEEXIT, parent = self, name = 'Exit', size = wxSize(128, 59), style = 0, pos = wxPoint(8, 384)) EVT_BUTTON(self.Exit, wxID_BOAFRAMEEXIT, self.OnExitButton) self.Start = wxBitmapButton(bitmap = wxBitmap('start.bmp', wxBITMAP_TYPE_BMP), id = wxID_BOAFRAMESTART, validator = wxDefaultValidator, parent = self, name = 'Start', size = wxSize(72, 72), style = wxBU_AUTODRAW, pos = wxPoint(456, 368)) self.Start.SetBackgroundColour(wxColour(255, 255, 255)) EVT_BUTTON(self.Start, wxID_BOAFRAMESTART, self.OnStartButton) self.Stop = wxBitmapButton(bitmap = wxBitmap('stop.bmp', wxBITMAP_TYPE_BMP), id = wxID_BOAFRAMESTOP, validator = wxDefaultValidator, parent = self, name = 'Stop', size = wxSize(72, 72), style = wxBU_AUTODRAW, pos = wxPoint(544, 368)) EVT_BUTTON(self.Stop, wxID_BOAFRAMESTOP, self.OnStopButton) self.staticBitmap1 = wxStaticBitmap(bitmap = wxBitmap('logo.bmp', wxBITMAP_TYPE_BMP), id = wxID_BOAFRAMESTATICBITMAP1, parent = self, name = 'staticBitmap1', size = wxSize(160, 64), style = 0, pos = wxPoint(392, 30)) self.a = wxPanel(size = wxSize(264, 320), id = wxID_BOAFRAMEA, parent = self, name = 'a', style = wxTAB_TRAVERSAL, pos = wxPoint(16, 48)) I agree to receive quotes, newsletters and other information from sourceforge.net and its partners regarding IT services and products. I understand that I can withdraw my consent at any time. Please refer to our Privacy Policy or Contact Us for more details
https://sourceforge.net/p/boa-constructor/mailman/message/4675403/
CC-MAIN-2017-34
refinedweb
1,055
67.86
You may want to search: Jewellery Making Stainless Steel 18K Gold 11mm Coffee Bean Jewelry Set US $8.49-9.21 / Sets 12 Sets (Min. Order) new products 2017 innovative product hyderabadi jewellery 925 solid silver wedding band ring US $0.3-1.7 / Gram 70 Grams (Min. Order) 26998 Top engraving machine statement hyderabadi jewelry set US $7.0-7.0 / Pieces 12 Pieces (Min. Order) New arrival hyderabadi bangles wedding set in hot sale in hot sale US $1-5 / Set 50 Sets (Min. Order) rhinestone hyderabadi jewelry set US $1-5 / Piece 10 Dozens (Min. Order) 27267 anti-corrupt case handmade hyderabadi jewelry set US $14.4-14.4 / Pieces 12 Pieces (Min. Order) top Color Amethyst Beaded Bracelet Bangle for Women jewellery US $0.1-50 / Piece 100 Pieces (Min. Order) 2017 hyderabad bangles manufacturers indian aluminium bangles hyderabadi bangles US $0.6-3.2 / Pair 10 Pairs (Min. Order) Newest hyderabadi bangles wedding set in hot sale US $2-7 / Piece 12 Pieces (Min. Order) fashion gold plated jewelry cheaper wholesale indian hyderabadi crystal stontes wedding churi bangle set US $1-10 / Piece 1 Piece (Min. Order) import semi china jewelry hyderabadi jewellery earrings 2017 for women US $2-4 / Pair 50 Pairs (Min. Order) hyderabadi bangles beautiful bangles non square bangles US $2.55-4.85 / Piece 36 Pieces (Min. Order) 2017 new hot sale hyderabadi bangles wholesale US $1-5 / Piece 5 Pieces (Min. Order) Best Quality Hyderabadi Bangle Bracelet Wedding Set US $1-10 / Piece 100 Pieces (Min. Order) fashion stainless steel bible print hyderabadi bangles US $1-6 / Piece 100 Pieces (Min. Order) Stainless steel jewelry wholesale hyderabadi bangles US $1-10 / Piece 50 Pieces (Min. Order) Handmade Hyderabadi Navaratan Guluband - (Neck Tie) and Jhumkas (Earrings) US $180-190 / Piece 1 Unit (Min. Order) Multi Color Synthetic Corrundum & C.Z Studded Pearl Mesh Pattern Imitation Metal Jewellery Bangle US $2-50 / Unit 10 Units (Min. Order) Cheap wholesale acrylic crystal hyderabadi bangles US $0.1-3.0 / Piece 30 Dozens (Min. Order) Vogue jewellery bracelets metal green enamel boy bracelet bangles blank hyderabadi bangles US $0.5-6.8 / Piece 50 Pieces (Min. Order) Hyderabadi Royal Bangles US $500-550 / Box 100 Dozens (Min. Order) famous silver jewelry necklace brand 830377 china wholesale jewelry from dubai hyderabadi bangles US $2-6 / Piece 5 Pieces (Min. Order) pakistani jewellery US $28-35 / Unit 5 Units (Min. Order) Marlary Factory Sale Bangles With Logo Design Silver Jamaica Bangle Bracelets US $3.45-5.95 / Piece 5 Pieces (Min. Order) Natural hyderabadi garnet semi precious 9x7mm oval cabochon 2.08 cts loose gemstone for jewelry US $0.52-0.65 / Piece 1 Piece (Min. Order) Popular Premium stone bracelet men fish hook bracelet US $3.5-15.8 / Piece 1 Piece (Min. Order) New Fashionable Women Wear Imitation Jewelry Bangle - NBG2g73 US $2-50 / Unit 10 Units (Min. Order) - About product and suppliers: Alibaba.com offers 122 hyderabadi jewellery products. About 55% of these are silver jewelry, 13% are jewelry sets, and 10% are copper alloy jewelry. A wide variety of hyderabadi jewellery options are available to you, such as party, engagement, and anniversary. You can also choose from women's, men's, and children's. As well as from earrings, jewelry sets, and rings. And whether hyderabadi jewellery is zircon, diamond, or crystal, rhinestone. There are 65 hyderabadi jewellery suppliers, mainly located in Asia. The top supplying countries are China (Mainland), India, and Pakistan, which supply 49%, 41%, and 7% of hyderabadi jewellery respectively. Hyderabadi jewellery products are most popular in North America, South America, and Eastern Europe. You can ensure product safety by selecting from certified suppliers, including 20 with Other certification. Buying Request Hub Haven't found the right supplier yet ? Let matching verified suppliers find you. Get Quotation NowFREE Do you want to show hyderabadi jewellery or other products of your own company? Display your Products FREE now!
https://www.alibaba.com/showroom/hyderabadi-jewellery.html
CC-MAIN-2018-17
refinedweb
658
60.61
I'm currently having some trouble getting by raspberry pi to work with a NPN transistor. All I am trying to do is turn an L.E.D. (which has a forward voltage of 3V) on from an external battery and have my Pi control when to allow the battery current to flow through the L.E.D. via transistor. Here's my problem: I can't get the Pi to turn the transistor into the "off" state to turn off the L.E.D. I know my code is working properly, as I made a circuit without the transistor, but I can't get the darn L.E.D. to turn off with the transistor in the circuit. Here's my circuit: My Code: - Code: Select all import RPi.GPIO as gpio import time gpio.setmode(gpio.BCM) gpio.setup(14, gpio.OUT) while True: gpio.output(14, gpio.HIGH) time.sleep(1) gpio.output(14, gpio.LOW) time.sleep(1) I have tried several other GPIO pins but I still can't get the L.E.D. to turn off. The weird part is that if I replace pin 8 in my circuit with the ground pin, the L.E.D. will still stay on! If I completely unplug the Pi from the circuit, the L.E.D. is off. Do the pins on the Raspberry Pi constantly give off a small amount of current constantly or something? Please help me as I have tried several hours to figure this out to no avail. Thank You, Superroxfan
https://www.raspberrypi.org/forums/viewtopic.php?f=44&t=50745
CC-MAIN-2015-40
refinedweb
259
83.56
My project for my C++ class is to develop an application that will help me out in my programming life, and since I constantly have to use a calculator to determine the values in free fall acceleration I decided to write an application to do it for me. I was wondering if someone here could give me a quick inspection of my code to make sure it's all correct. It runs fine and there are no syntax errors what-so-ever, but I'm not 100 percent sure the math is right. #include <iostream> #include <Windows.h> using namespace std; int main() { float startPosY = 0; const float stopY = 0; float Acceleration = 0, InitialAcc = 0; float posY = 0; int sElapsed = 1; cout << "Please input the starting height of free fall in meters: "; cin >> startPosY; posY = startPosY; cout << "Please input the acceleration of the object in m/s: "; cin >> InitialAcc; Acceleration = InitialAcc; while (posY > 0) { posY -= Acceleration; Acceleration += InitialAcc; if (posY >= 0) { cout << "Current height: " << posY << " meters\nTime Elapsed: " << sElapsed << " seconds.\n\n"; sElapsed++; } else { posY = 0; cout << "Current height: " << posY << " meters.\nTime Elapsed: " << sElapsed << " seconds.\n"; } Sleep(1000); } cout << "\n\nThe object fell: " << startPosY << " meters in " << sElapsed << " seconds with a final acceleration of: " << Acceleration << "m/s\n"; system("pause"); } Thanks in advance, Jamie
https://www.daniweb.com/programming/software-development/threads/411452/free-fall-acceleration-just-need-a-check
CC-MAIN-2017-17
refinedweb
213
54.97
Asked by: Customizing the Project 2010 ribbon with a VSTO add-in General discussion RE: the Project Professional 2010 forum thread: Project 2010: adding ribbon, which discusses the SDK article How to: Add a Custom Command to the Ribbon, and the use of SetCustomUI in VBA. NOTE: The code in this post is explained in the Project 2010 SDK article, How to: Use Managed Code to Add a Custom Command to the Ribbon. It is instructive to do the same exercise with VSTO. There are many advantages to using a VSTO add-in, including the ability to easily publish the solution with ClickOnce. As the thread discussion notes, there is no good way to distribute the VBA solution in the Global.MPT to other users. Basically, use Visual Studio 2010 and create a new Project 2010 Add-in project. Use the .NET Framework 3.5. You can use C# or VB. - There are no changes necessary in the ThisAddIn.cs (or ThisAddIn.vb) file. - Right-click the project in Solution Explorer, and add a new item -- add a Ribbon (Visual Designer) item. In the code below, it is named ManualTaskColor.cs (or ManualTaskColor.vb). - In the ManualTaskcolor.cs [Design] view, drag a Tab from the Toolbox\Office Ribbon Controls to the ribbon. - Drag a Group to the new tab. - Drag a Button (or a ToggleButton) to the group. Change the labels, button image, etc. as you wish. - To match the VBA example in the SDK, you can set the OfficeImageID property of the button to DiagramTargetInsertClassic. - Select the new button in the ribbon Design view, click the Events icon in the Properties pane, and then double-click the Click event to create the button_Click event handler. Here is the C# code in the ManualTaskColor.cs file. The code is ported from the VBA code in the SDK article: using System; using Microsoft.Office.Tools.Ribbon; using MSProject = Microsoft.Office.Interop.MSProject; namespace RibbonAddIn { public partial class ManualTaskColor { private const int WHITE = 0xFFFFFF; private const int LIGHT_BLUE = 0xF0D9C6; MSProject.Application app; MSProject.Project project; private void ManualTaskColor_Load(object sender, RibbonUIEventArgs e) { app = Globals.ThisAddIn.Application; } private void tBtnManualTaskColor_Click(object sender, RibbonControlEventArgs e) { ToggleManualTasksColor(); } private void ToggleManualTasksColor() { project = app.ActiveProject; string column = "Name"; bool rowRelative = false; int rgbColor; foreach (MSProject.Task t in project.Tasks) { if ((t != null) && !(bool)t.Summary) { app.SelectTaskField(t.ID, column, rowRelative); rgbColor = app.ActiveCell.CellColorEx; if ((bool)t.Manual) { // Check whether the manual task color is white. if (rgbColor == WHITE) { app.Font32Ex(CellColor:LIGHT_BLUE); // Change the background to light blue. } else { app.Font32Ex(CellColor:WHITE); // Change the background to white. } } else { // The task is automatically scheduled, so change the background to white. app.Font32Ex(CellColor:WHITE); } } } } } } _________________Just for kicks, here is the VB code in the ManualTaskColor.vb file, in you do the project in VB. The code is ported from the C# example above: Imports Microsoft.Office.Tools.Ribbon Imports MSProject = Microsoft.Office.Interop.MSProject Public Class ManualTaskColor Private Const WHITE As Integer = &HFFFFFF Private Const LIGHT_BLUE As Integer = &HF0D9C6 Dim app As MSProject.Application Dim project As MSProject.Project Private Sub ManualTaskColor_Load(ByVal sender As System.Object, ByVal e As RibbonUIEventArgs) _ Handles MyBase.Load app = Globals.ThisAddIn.Application End Sub Private Sub tBtnManualTaskColor_Click(ByVal sender As System.Object, _ ByVal e As Microsoft.Office.Tools.Ribbon.RibbonControlEventArgs) _ Handles tBtnManualTaskColor.Click ToggleManualTasksColor() End Sub Sub ToggleManualTasksColor() project = app.ActiveProject Dim column As String = "Name" Dim rowRelative As Boolean = False Dim rgbColor As Integer For Each t As MSProject.Task In project.Tasks If (Not t Is Nothing) And (Not t.Summary) Then app.SelectTaskField(t.ID, column, rowRelative) rgbColor = app.ActiveCell.CellColorEx If (t.Manual) Then ' Check whether the manual task color is white. If (rgbColor = WHITE) Then app.Font32Ex(CellColor:=LIGHT_BLUE) ' Change the background to light blue. Else app.Font32Ex(CellColor:=WHITE) ' Change the background to white. End If Else ' The task is automatically scheduled, so change the background to white. app.Font32Ex(CellColor:=WHITE) End If End If End Sub End Class __________________ Have fun, --Jim - Edited by Jim Corbin Tuesday, June 29, 2010 7:46 PM Updated article in SDK All replies - Are there any examples where the ribbon is implemented in Project using ribbon xml with callbacks (VSTO and C#)? I am getting errors doing this in Project using the same callback signatures that work in other Office apps. If not, can the method shown here work with Visual Studio 2008? SharePoint, Project Server, and Project client were developed with .NET 3.5. Using .NET 4.0 in some cases doesn't work with Project Server applications, such as workflow and configuring WCF. It's probably taking a chance if you mix 3.5 and 4.0 components, but I haven't done a lot of testing with 4.0. - Visual Studio 2010 is required because it includes the templates for Project 2010 and correctly creates event handlers. The SDK update later in May includes an article that creates an event handler. I don't yet have an example using ribbon XML with callbacks. - The May update of the Project 2010 SDK includes the How to: Use Managed Code to Add a Custom Command to the Ribbon article, and the SDK download includes the complete code. To all -- I made a MSDN presentation on this to use the VSTO in VS2010 and VB.NET. I create a ribbon and some executable code. If you wish a copy of the code (it is free) you contact me by visiting my blog When I dig out the actual Microsoft link, I will edit this post. Jim jeaksel at yahoo dot com Jim, I am looking for resources on doing the opposite. I want to remove/hide things on the ribbon. One example is on the Project Center, under new I want to remove/hide the From SharePoint list option. Also, does anyone know how to find the ID of the existing ribbon controls. Jay Smith Project 2007 VSTO issue: I'm not sure if you've tried to create a VSTO add-in targeting Project 2007 - but I'd be glad if you could verify the following: - Create a VSTO addin for Project 2007 using VS2008 or VS2010 targeting 3.5 - the addin doesn't have to do anything at all - just using the code in th newly created "empty" template will suffice. - Debug the addin or publish/install - doesn't matter which - and open Project 2007 with the addin loaded - Create an Excel Visual Report - Close the Visual Report dialogue box - Close Project - Wham! Doesn't happen on Project 2010. /Lars Hammarberg Looking at the SDK pre-requisites here: it appears that XP is not supported. I am seeing crashes with a simple ribbon addin (similar to that above) for Project 2010 standard when tested on XP machines and am being told this is because XP is not supported. Can you categorically tell me if writing a VSTO addin for Project 2010, to customise a Ribbon is supported on Win 7 AND XP-SP3 (with .NET 3.5)? Many thanks Hi can you help me, look only want put colo to my bars!!! but how!! Imports Microsoft.Office.Interop button Dim pjApplication As New MSProject.Application pjApplication.FileNew(Template:="") pjApplication.Visible = True pjApplication.OptionsCalendar(StartWeekOnMonday:=True, HoursPerDay:=24, HoursPerWeek:=168, DaysPerMonth:=30, StartTime:="7:00 a.m.", FinishTime:="7:00 a.m.") pjApplication.ScreenUpdating = True 'diasble visible screenupdates until the code is completed Dim tsks As MSProject.Tasks Dim t As MSProject.Task tsks = pjApplication.ActiveProject.Tasks t = tsks.Add("Programa: " + sacadescripcion.ExecuteScalar.ToString) t.OutlineLevel = 1 t.Start = '01/01/2014' t.Finish = '31/01/2014' 'For Each t In tsks 'GanttBarFormat(tss:=Task.ID, MiddleColor:=pjGreen, StartColor:=pjGreen, EndColor:=pjGreen) 'Next and the for do not nothing, mark with red, not compile only color to the bar! im using vs2010 on VB forms, and project 2010 desktop version tnks a lot! - Edited by Donato Arzola Wednesday, February 19, 2014 12:30 AM
https://social.technet.microsoft.com/Forums/Azure/en-US/ca3019f4-3856-4b03-adeb-bd1e6f79df93/customizing-the-project-2010-ribbon-with-a-vsto-addin?forum=project2010custprog
CC-MAIN-2019-30
refinedweb
1,334
51.34
.0).0).0, shape keyword in the call to a stochastic variable creates multivariate array of (independent) stochastic variables. The array behaves like a NumPy array when used like one, and references to its tag.test_value attribute return NumPy arrays. The shape argument also solves the annoying case where you may have many variables $\beta_i, \; i = 1,...,N$ you wish to model. Instead of creating arbitrary names and variables for each one, like: beta_1 = pm.Uniform("beta_1", 0, 1) beta_2 = pm.Uniform("beta_2", 0, 1) ... we can instead wrap them into a single variable: betas = pm.Uniform("betas", 0, 1,.0) lambda_2 = pm.Exponential("lambda_2", 1.0)(size=20000) plt.hist(samples, bins=70, normed=True, histtype="stepfilled") plt.title("Prior distribution for $\lambda_1$") plt.xlim(0, 8); To frame this in the notation of the first chapter, though this is a slight abuse of notation, we have specified $P(A)$. Our next goal is to include data/evidence/observations $X$ into our model. PyMC3] parameter $\lambda$. Do we know $\lambda$? No. In fact, we have a suspicion that there are two $\lambda$ values, one for the earlier behaviour and one for the later = =("Artificial dataset") plt.xlim(0, 80) plt.legend(); It is okay that our fictional dataset does not look like our observed dataset: the probability is incredibly small it indeed would. PyMC3+1) plot_artificial_sms_dataset() Later we will see how we use this to make predictions and test the appropriateness of our models. A/B testing is a statistical design pattern for determining the difference of effectiveness between two different treatments. For example, a pharmaceutical company is interested in the effectiveness of drug A vs drug B. The company will test drug A on some fraction of their trials, and drug B on the other fraction (this fraction is often 1/2, but we will relax this assumption). After performing enough trials, the in-house statisticians sift through the data to determine which drug yielded better results. Similarly, front-end web developers are interested in which design of their website yields more sales or some other metric of interest. They will route some fraction of visitors to site A, and the other fraction to site B, and record if the visit yielded a sale or not. The data is recorded (in real-time), and analyzed afterwards. Often, the post-experiment analysis is done using something called a hypothesis test like difference of means test or difference of proportions test. This involves often misunderstood quantities like a "Z-score" and even more confusing "p-values" (please don't ask). If you have taken a statistics course, you have probably been taught this technique (though not necessarily learned this technique). And if you were like me, you may have felt uncomfortable with their derivation -- good: the Bayesian approach to this problem is much more natural. As this is a hacker book, we'll continue with the web-dev example. For the moment, we will focus on the analysis of site A only. Assume that there is some true $0 \lt p_A \lt 1$ probability that users who, upon shown site A, eventually purchase from the site. This is the true effectiveness of site A. Currently, this quantity is unknown to us. Suppose site A was shown to $N$ people, and $n$ people purchased from the site. One might conclude hastily that $p_A = \frac{n}{N}$. Unfortunately, the observed frequency $\frac{n}{N}$ does not necessarily equal $p_A$ -- there is a difference between the observed frequency and the true frequency of an event. The true frequency can be interpreted as the probability of an event occurring. For example, the true frequency of rolling a 1 on a 6-sided die is $\frac{1}{6}$. Knowing the true frequency of events like: are common requests we ask of Nature. Unfortunately, often Nature hides the true frequency from us and we must infer it from observed data. The observed frequency is then the frequency we observe: say rolling the die 100 times you may observe 20 rolls of 1. The observed frequency, 0.2, differs from the true frequency, $\frac{1}{6}$. We can use Bayesian statistics to infer probable values of the true frequency using an appropriate prior and observed data. With respect to our A/B example, we are interested in using what we know, $N$ (the total trials administered) and $n$ (the number of conversions), to estimate what $p_A$, the true frequency of buyers, might be. To setup a Bayesian model, we need to assign prior distributions to our unknown quantities. A priori, what do we think $p_A$ might be? For this example, we have no strong conviction about $p_A$, so for now, let's assume $p_A$ is uniform over [0,1]: import pymc3 as pm # The parameters are the bounds of the Uniform. with pm.Model() as model: p = pm.Uniform('p', lower=0, upper=1) Applied interval-transform to p and added transformed p_interval_ to model. Had we had stronger beliefs, we could have expressed them in the prior above. For this example, consider $p_A = 0.05$, and $N = 1500$ users shown site A, and we will simulate whether the user made a purchase or not. To simulate this from $N$ trials, we will use a Bernoulli distribution: if $X\ \sim \text{Ber}(p)$, then $X$ is 1 with probability $p$ and 0 with probability $1 - p$. Of course, in practice we do not know $p_A$, but we will use it here to simulate the data. #set constants p_true = 0.05 # remember, this is unknown. N = 1500 # sample N Bernoulli random variables from Ber(0.05). # each random variable has a 0.05 chance of being a 1. # this is the data-generation step occurrences = can be done for site B's response data to determine the analogous $p_B$. But what we are really interested in is the difference between $p_A$ and $p_B$. Let's infer $p_A$, $p_B$, and $\text{delta} = p_A - p_B$, all at once. We can do this using PyMC_trace["delta"] figsize(12.5, 10) #histogram of posteriors ax = plt.subplot(311) plt.xlim(0, .1) plt.hist(p_A_samples, histtype='stepfilled', bins=25, alpha=0.85, label="posterior of $p_A$", color="#A60628", normed=True) plt.vlines(true_p_A, 0, 80, linestyle="--", label="true $p_A$ (unknown)") plt.legend(loc="upper right") plt.title("Posterior distributions of $p_A$, $p_B$, and delta unknowns") ax = plt.subplot(312) plt.xlim(0, .1) plt.hist(p_B_samples, histtype='stepfilled', bins=25, alpha=0.85, label="posterior of $p_B$", color="#467821", normed=True) plt.vlines(true_p_B, 0, 80, linestyle="--", label="true $p_B$ (unknown)") plt.legend(loc="upper right") ax = plt.subplot(313) plt.hist(delta_samples, histtype='stepfilled', bins=30, alpha=0.85, label="posterior of delta", color="#7A68A6", normed=True) plt.vlines(true_p_A - true_p_B, 0, 60, linestyle="--", label="true delta (unknown)") plt.vlines(0, 0, 60, color="black", alpha=0.2) plt.legend(loc="upper right"); Notice that as a result of N_B < N_A, i.e. we have less data from site B, our posterior distribution of $p_B$ is fatter, implying we are less certain about the true value of $p_B$ than we are of $p_A$. With respect to the posterior distribution of $\text{delta}$, we can see that the majority of the distribution is above $\text{delta}=0$, implying there site A's response is likely better than site B's response. The probability this inference is incorrect is easily computable: # Count the number of samples less than 0, i.e. the area under the curve # before 0, represent the probability that site A is worse than site B. print("Probability site A is WORSE than site B: %.3f" % \ np.mean(delta_samples < 0)) print("Probability site A is BETTER than site B: %.3f" % \ np.mean(delta_samples > 0)) Probability site A is WORSE than site B: 0.208 Probability site A is BETTER than site B: 0.792 If this probability is too high for comfortable decision-making, we can perform more trials on site B (as site B has less samples to begin with, each additional data point for site B contributes more inferential "power" than each additional data point for site A). Try playing with the parameters true_p_A, true_p_B, N_A, and N_B, to see what the posterior of $\text{delta}$ looks like. Notice in all this, the difference in sample sizes between site A and site B was never mentioned: it naturally fits into Bayesian analysis. I hope the readers feel this style of A/B testing is more natural than hypothesis testing, which has probably confused more than helped practitioners. Later in this book, we will see two extensions of this model: the first to help dynamically adjust for bad sites, and the second will improve the speed of this computation by reducing the analysis to a single equation. Social data has an additional layer of interest occurred3 to dig through this noisy model, and find a posterior distribution for the true frequency of liars. Suppose 100 students are being surveyed for cheating, and we wish to find $p$, the proportion of cheaters. There are a few ways we can model this in PyMC3). Finally, the last line sums this vector and divides by float(N), produces a proportion. observed_proportion.tag.test_value array(0.5600000023841858):] plt.hist(p_trace, histtype="stepfilled", normed=True, alpha=0.85, bins=30, label="posterior distribution", color="#348ABD") plt.vlines([.05, .35], [0, 0], [5, 5], alpha=0.3) plt.xlim(0, 1) plt.legend(); With regards to the above plot, we are still pretty uncertain about what the true frequency of cheaters might be, but we have narrowed it down to a range between 0.05 to 0.35 (marked by the solid lines). This is pretty good, as a priori we had no idea how many students might have cheated (hence the uniform distribution for our prior). On the other hand, it is also pretty bad since there is a. where we include our observed 35 "Yes" responses. In the declaration of the pm.Binomial, we include value = 35 and observed = True._trace["freq_cheating"] plt.hist(p_trace, histtype="stepfilled", normed=True, alpha=0.85, bins=30, label="posterior distribution", color="#348ABD") plt.vlines([.05, .35], [0, 0], [5, 5], alpha=0.2) plt.xlim(0, 1) plt.legend(); N = 10 x = np.ones(N, dtype=object) with pm.Model() as model: for i in range(0, N): x[i] = pm.Exponential('x_%i' % i, (i+1.0)*3. The $\beta, \alpha$ parameters familiar with the Normal distribution already have probably seen $\sigma^2$ instead of $\tau^{ +, pm3. #(); All samples of $\beta$ are greater than 0. If instead the posterior was centered around 0, we may suspect that $\beta = 0$, implying that temperature has no effect on the probability of defect. Similarly, all $\alpha$ posterior values are negative and far away from 0, implying that it is correct to believe that $\alpha$ is significantly less than 0. 2.5% quantiles for "confidence interval" qs = mquantiles(p_t, [0.025, 0.975], occurring rationale = pm.Bernoulli("bernoulli_obs", p, = trace["bernoulli_sim"] print(simulations.shape) plt.title("Simulated dataset using posterior parameters") figsize(12.5, 6) for i in range(4): ax = plt.subplot(4, 1, i+1) plt.scatter(temperature, simulations[1000*i, :], color="k", s=50, alpha=0.6) (10000, 23).40 | 0 0 |()
https://nbviewer.jupyter.org/github/CamDavidsonPilon/Probabilistic-Programming-and-Bayesian-Methods-for-Hackers/blob/master/Chapter2_MorePyMC/Ch2_MorePyMC_PyMC3.ipynb
CC-MAIN-2018-51
refinedweb
1,886
58.89
4 Context: Trying to forecast some sort of consumption value (e.g. water) using datetime features and exogenous variables (like temperature). Take some datetime features like week days ( mon=1, tue=2, ..., sun=7) and months ( jan=1, ..., dec=12). A naive KNN regressor will judge that the distance between Sunday and Monday is 6, between December and January is 11, though it is in fact 1 in both cases. Domains hours = np.arange(1, 25) days = np.arange(1, 8) months = np.arange(1, 13) days >>> array([1, 2, 3, 4, 5, 6, 7]) type(days) >>> numpy.ndarray Function A custom distance function is possible: def distance(x, y, domain): direct = abs(x - y) round_trip = domain - direct return min(direct, round_trip) Resulting in: # weeks distance(x=1, y=7, domain=7) >>> 1 distance(x=4, y=2, domain=7) >>> 2 # months distance(x=1, y=11, domain=12) >>> 2 distance(x=1, y=3, domain=12) >>> 2 However, custom distance functions with Sci-Kit's KNeighborsRegressor make it slow, and I don't want to use it on other features, per se. Coordinates An alternative I was thinking of is using a tuple to represent coordinates in vector space, much like we represent the hours of the day on a round clock. def to_coordinates(domain): """ Projects a linear range on the unit circle, by dividing the circumference (c) by the domain size, thus giving every point equal spacing. """ # circumference c = np.pi * 2 # equal spacing a = c / max(domain) # array of x and y return np.sin(a*domain), np.cos(a*domain) Resulting in: x, y = to_coordinates(days) # figure plt.figure(figsize=(8, 8), dpi=80) # draw unit circle t = np.linspace(0, np.pi*2, 100) plt.plot(np.cos(t), np.sin(t), linewidth=1) # add coordinates plt.scatter(x, y); Clearly, this gets me the symmetry I am looking for when computing the distance. Question Now what I cannot figure out is: What data type can I use to represent these vectors best, so that the knn regressor automatically calculates the distance? Perhaps an array of tuples; a 2d numpy array? Attempt It becomes problematic as soon as I try to mix coordinates with other variables. Currently, the most intuitive attempt raises an exception: data = df.values The target variable, for simple demonstration purposes, is the categorical domain variable days. TypeError Traceback (most recent call last) TypeError: only size-1 arrays can be converted to Python scalars The above exception was the direct cause of the following exception: ValueError Traceback (most recent call last) <ipython-input-112-a34d184ab644> in <module> 1 neigh = KNeighborsClassifier(n_neighbors=3) ----> 2 neigh.fit(data, days) ValueError: setting an array element with a sequence. I just want the algorithm to be able to process a new observation (a coordinate representing the day of the week and temperature) and find the closest matches. I am aware the coordinate is, of course, a direct representation of the target variable, and thus leaks the answer, but it's about enabling the math of the algorithm. Thank you in advance. 1An alternative - it looks like there's a 'precomputed' option for distance, which will let you use the distance you "really" (?) want, and should not be slow, since there's no computation to be done. btw, I like your idea of converting to 2d (the unit circle), 2d numpy array would be the way to go here I think. There could be issues if you have both days and months, since the distance may not "know" to separate them - depends on the details of your setup. – bogovicj – 2020-08-11T18:17:17.127 In response to the attempt section, for this code neigh.fit(data, days)what are the shapes of dataand days? Am I understanding that you're predicting temperature from datetime? – bogovicj – 2020-08-12T13:54:37.940 Thank you, @bogovicj , for pointing that out. I have edited the post to clarify. Naively, I'd simply pass two columns for the algorithm to .fit(): day of week (int)and temperature (float). However, this gets me in trouble due to the mentioned lack of symmetry (it will compute Monday-Sunday=6). Instead, I try using coordinates. This gets me the desired symmetry but results in columns with nested arrays: coordinates (list of tuples/numpy array of numpy arrays)and temperature (float). The last part is my hurdle. – Robin Teuwens – 2020-08-12T15:48:22.237 Could it be that the answer is dead simple, and that is just to split the days_coordinates (x, y)into separate columns days_xand days_y? Thereafter, If I want one feature to be more important than the other I can standardize all features first, and then multiply them by custom weights? – Robin Teuwens – 2020-08-12T16:25:43.597 1Yes, the days_x anddays_y`into separate columns is the way to go if you take the unit circle approach. On feature importance - my intuition is that re-weighting features as you said will do what you want if by "feature importance", you mean "how much each feature matters for determining distance". – bogovicj – 2020-08-12T17:09:02.680
https://library.kiwix.org/datascience.stackexchange.com_en_all_2021-04/A/question/80130.html
CC-MAIN-2021-31
refinedweb
857
64.81
I’m trying to convert my project to use fmod, and i’ve got simple sounds playing etc, however, i would like to get the more advanced features available in designer. However, it seems that none of the eventsystem features are available to me. For example (and most pressingly) it seems that there is no EventSystem member of my FMOD namespace (nor is there EventSystem_Create) Am i missing an include or a library etc? it doesnt appear to be in any of the ones i got as part of the fmod Ex download, and there doesnt appear to be any with the designer download. Cheers. edit: Another post on this forum mentioned the same problem that was solved by including fmod_event.hpp – I do not have this header – where does one get it from? - Hernan asked 6 years ago - You must login to post comments Never mind – found it. If others have this problem in the future, the fmod_event api is located in "…/FMOD SoundSystem/FMOD Programmers API Win32/fmoddesignerapi/api/[inc, lib etc]" (i.e. fmoddesignerapi/api rather than api) - Hernan answered 6 years ago
http://www.fmod.org/questions/question/forum-36122/
CC-MAIN-2017-22
refinedweb
186
68.4
Thanks for the detailed post! Sorry it’s been some days since you’ve had a response. @croczey Can you chime in here? -Benny Thanks for the detailed post! Sorry it’s been some days since you’ve had a response. @croczey Can you chime in here? -Benny Hmm…I’m kind of confused as to what you are trying to accomplish. How MQTT works is you have a server that listens on port 1883 and clients that subscribe to and publish to the server. Subscribing will receive data from the server and publishing pushes data to the server. So…from what I read I’m getting that you want the current server/client roles reversed so the Pi is the “server” and Cayenne subscribes to it? Typically it’s not a good idea for a few reasons. 1. You have to add port forwarding to you router and 2. You now have a port open directly on the internet which can lead to security issues. If possible change your broker to client mode and connect to the Cayenne server normally. Hi Adam, Yes, it is exactly what I try; have the raspberry or other board as broker/server and the cayenne IOT as mqtt client. Also ideally I would like cayenne IOT on my local network and not on the internet site. Have already a VPN to my local Synology NAS and could make my own secure connection to the local cayenne IOT. But first, is it possible to reverse the roles and how do I do this ? I do not find info on the manually publishing/subscribing part of the documentation. How do I configure the cayenne IOT as a generic mqtt client subscriber?with the MQTTFX client?? Once subscribed, import the data to a sensor to be shown on the cayenne dashboard. Parsing the “brute” mqtt data into the cayenne format via a small config form without any programming codes. Advantage: any board, sensor,… could be integrated in the Cayenne IOT without any user programming. Make the mqtt for this board-sensor work with the doc of this board and sensor (not a Cayenne problem), in the cayenne subscribe to this mqtt data, configure with “form like screen” where the parsed data (unit, value, timestamp etc,) goes to the right field to show the data in a good format on the dashboard and you have a super tool for people without any programming experience. As long as the local cayenne IOT doesn’t work, can I at least start with the “non secure” port forwarding? Any help, tips, workaround how to do this would be greatly appreciated. Thanks Patrick Using the Cayenne servers as a client is not supported right now, and I don’t really think it should be. In the Cayenne world they are the server so anything sending data to Cayenne needs to be set up as a client. Generally anything collecting data and sending it to with MQTT will be set up as a client not a broker. The only scenario I can see your setup being required would be if you have created a local MQTT server on your Pi that sends out MQTT messages to clients that are subscribed, but in that case you wouldn’t need Cayenne anyway. I guess I’m still not following what you are using to send the data out. You mention MQTTFX, but what are you using to read your sensors and send the data with MQTT? Hi Adam, Thanks for the info that cayenne can not be used as client. My sensor is a small board (trinkerforge brick with temp, humidity and air pressure sensor) with his own IP (192.168.1.41) connected to a raspberry PI (192.168.1.20). On this Pi a small python script is running as a mqtt broker with software from tinkerforge to get the sensor data on a regular interval via my internal network. I do not us the raspberry GPIO’s . When using whatever client (mqttfx or other) I can subscribe to the pi and get de sensor(s) data. Saw in the “bring your own thing” doc that there is a MQTT manually publishing/subscribing paragraph so I thought it was possible to subscribe in the Cayenne software to get the raspberry mqtt data. In this paragraph they mention the mqttfx software to do this, so I installed it and tried to configure it without success. From there my forum question to see if someone could help me. But with what you told in the last comment, I think it is a impossible task. I loved the dashboard-sensor display used in the cayenne software. It is much better than the “brut data dump” of the usual mqtt clients. Hopeful that it ill be possible in the future. Thanks for your explanations. Patrick Got it, that makes sense now. Do you have the python script? It would be very easy to connect to the Cayenne servers and send the data as a client. You can see my post here if you want to try for yourself DHT11/DHT22 with Raspberry Pi Hi Adam, I got the info from the tinkerforge site and some help from the tinkerforge forum. I installed following steps on my raspberry with latest Jessie lite to make it a mqtt broker for my sensors: Last line is to start the Brick-MQTT-Proxy as process with 5 seconds interval to send the data Then using mqttfx as a client on my PC, I could subscribe to see the data. I configure the mqttfx as shown in my first post screenshots and I can see the data for the air-pressure sensor as an example. Can this help you? Your help is greatly appreciated. Thanks Patrick hi @dewit281. does it have to be on your local pc? cant you just send the sensor data to your Rpi then just publish the data to Cayenne using the API - BYOT from the Rpi? it doesnt have to be connected to the pin really. Sorry cant helped since im doing BYOT as well. i thought @adam is right, it’s much more easy. Hi Adam, Took your file and tried to make some modification to fit my settings Code: import paho.mqtt.client as mqtt import time import sys" topic_humidity = "v1/username/things/clientid/data/2" topic_air-press = “v1/username/things/clientid/data/3” while True: try: temp = mosquitto_sub -v -t tinkerforge/bricklet/temperature/t6Q/temperature humidity= mosquitto_sub -v -t tinkerforge/bricklet/humidity/uk9/humidity air-press=() “xxxx”, "yyyy"and “zzzz” I replaced by my by bring you own thong data. But when I run the script, I got an error in line 13: temp = mosquitto_sub -v -t tinkerforge/bricklet/temperature/t6Q/temperature I suppose I have to get the published (not subscribed) data from the sensor??? I have also trouble with the payload= Can you put me on the right track? Thanks Patrick I think we can go about this a little differently. Are you reading the sensor directly from the Pi that you are running this script? What is the sensor model number? We’ll get your data to the dashboard somehow haha. Hi Adam, I am running the script on the same PI that is my mqtt broker. I got a little bit further. I concentrated to one of my sensors, the temperature sensor. If I type mosquitto_sub -t /tinkerforge/bricklet/temperature/t6Q/temperature at the command prompt, I get this: pi@pi20:~/tinkerforge $ mosquitto_sub -t /tinkerforge/bricklet/temperature/t6Q/temperature {"_timestamp":1486541777.09539,“temperature”:2406} so I can get the timestamp and temp value of the temp sensor and every 5 seconds I get a new value. I modified the script (cayenne-mqtt.py) by removing the lines for humidity and air-pressure and while True: try: temp = “mosquitto_sub -t /tinkerforge/bricklet/temperature/t6Q/temperature” print temp #to see the value of temp if temp is not None: temp = “temp,c=” + str(temp) mqttc.publish(topic_temp, payload= temp , retain=True) sorry I did not finish my post; pressed the wrong button!!! I receive thi every 5 seconds: pi@pi20:~/tinkerforge $ python cayenne-mqtt.py mosquitto_sub -t /tinkerforge/bricklet/temperature/t6Q/temperature mosquitto_sub -t /tinkerforge/bricklet/temperature/t6Q/temperature So I did not get the timestamp and temp value in the temp variable. What do I do wrong? is this a wrong variable type issue or do I make a concept error? I also put the same “temp” in the mqttc.publish payload.(last line) is this ok? Thanks for your help Patrick “mosquitto_sub -t /tinkerforge/bricklet/temperature/t6Q/temperature” is a bash command but when you put that in your python code it’s just reading it as a string. Try this: import paho.mqtt.client as mqtt import time import sys import subprocess" #change username/clientid here topic_humidity = "v1/username/things/clientid/data/2" #change username/clientid here topic_air-press = "v1/username/things/clientid/data/3" #change username/clientid here while True: try: temp = subprocess.Popen(mosquitto_sub -v -t tinkerforge/bricklet/temperature/t6Q/temperature) humidity= subprocess.Popen(mosquitto_sub -v -t tinkerforge/bricklet/humidity/uk9/humidity) air-press= subprocess.Popen() Hi adam, Added line import subprocess. Modified the clientid in the topic_temp to the clientid value (xxxx) I received from cayenne without any quotes Modified temp to temp = subprocess.Popen(mosquitto_sub -t /tinkerforge/bricklet/temperature/t6Q/temperature) Changed the payload value to temp: if temp is not None: #temp = “temp,c=” + str(temp) mqttc.publish(topic_temp, payload= temp , retain=True) saved the file and ran it:) NameError: name ‘mosquitto_sub’ is not defined And receive this error message above. But I see the “bring your own thing dashboard” jumping from offline to something else in a fraction of seconds until the script stops with this error. So I think, the connection to cayenne dashboard is working, but no sensor data yet. Again, many thanks for helping me. Patrick oh, sorry. Put quotes around the command temp = subprocess.Popen(“mosquitto_sub -t /tinkerforge/bricklet/temperature/t6Q/temperature”) Hi Adam, Added the quotes and ran cayenne-mqtt.py”) File “/usr/lib/python2.7/subprocess.py”, line 710, in init errread, errwrite) File “/usr/lib/python2.7/subprocess.py”, line 1335, in _execute_child raise child_exception OSError: [Errno 2] No such file or directory thought it was a python 2 vs python 3 problem so ran: pi@pi20:~/tinkerforge python3 cayenne-mqtt.py Traceback (most recent call last): File "cayenne-mqtt.py", line 1, in <module> import paho.mqtt.client as mqtt ImportError: No module named 'paho' pi@pi20:~/tinkerforge As a absolute enthusiast beginner with python and after some Google search on both errors, I am stuck. I had an error about mixing spaces and tabs that I could resolve. but those errors are to much for me. What do I do wrong? Thanks Patrick Sorry about that again…not on a good start here haha. Popen requires more than 1 argument, I was thinking of subprocess.call when I put that in there. Try: temp = subprocess.Popen(“mosquitto_sub”, “-t /tinkerforge/bricklet/temperature/t6Q/temperature”) or temp = subprocess.call(“mosquitto_sub -t /tinkerforge/bricklet/temperature/t6Q/temperature”, shell=True) Hi Adam, With the first one solution I receive a error: buffersize must be an integer The second one seems to be working. here is a printscreen from the moment I logon to PI with putty: login as: pi pi@192.168.1.20 Feb 12 09:21:51 2017 from 192.168.1.2 pi@pi20:~ **cd /home/pi/tinkerforge** pi@pi20:~/tinkerforge python brick-mqtt-proxy.py --brickd-host 192.168.1.41 --brickd-port 4223 --broker-host localhost --broker-port 1883 --update-interval 5& [1] 2151 pi@pi20:~/tinkerforge python cayenne-mqtt.py {"_timestamp":1486889999.273564,"temperature":2456} {"_timestamp":1486890034.483148,"humidity":272} {"_timestamp":1486890034.895567,"air_pressure":1017743} {"_timestamp":1486890039.948072,"air_pressure":1017742} {"_timestamp":1486889999.273564,"temperature":2456} {"_timestamp":1486890034.483148,"humidity":272} {"_timestamp":1486890039.948072,"air_pressure":1017742} {"_timestamp":1486890045.001824,"air_pressure":1017750} {"_timestamp":1486890045.001824,"air_pressure":1017750} {"_timestamp":1486889999.273564,"temperature":2456} {"_timestamp":1486890034.483148,"humidity":272} {"_timestamp":1486890045.001824,"air_pressure":1017750} {"_timestamp":1486890049.521626,"humidity":271} {"_timestamp":1486890049.521626,"humidity":271} {"_timestamp":1486890049.521626,"humidity":271} {"_timestamp":1486890050.097411,"air_pressure":1017752} {"_timestamp":1486890050.097411,"air_pressure":1017752} {"_timestamp":1486890050.097411,"air_pressure":1017752} {"_timestamp":1486889999.273564,"temperature":2456} {"_timestamp":1486890049.521626,"humidity":271} {"_timestamp":1486890050.097411,"air_pressure":1017752} {"_timestamp":1486890055.1549,"air_pressure":1017749} {"_timestamp":1486890055.1549,"air_pressure":1017749} {"_timestamp":1486890055.1549,"air_pressure":1017749} {"_timestamp":1486890055.1549,"air_pressure":1017749} ^Cpi@pi20:~/tinkerforge first of allI have two questions: I have to manually start the broker at the command prompt every time I restart the pi with : cd /home/pi/tinkerforge python brick-mqtt-proxy.py --brickd-host 192.168.1.41 --brickd-port 4223 --broker-host localhost --broker-port 1883 --update-interval 5& can I be done automatically? I tried to put the commands in a .sh file and put it the crontab.(found this on Google). But only the python brick-mqtt-proxy.py start without the --brickd-host 192.168.1.41 --brickd-port 4223 --broker-host localhost --broker-port 1883 --update-interval 5& arguments. So I do not have the 5 seconds refreshment but one value. 2.is this normal that with the 5 seconds interval (from question 1), I receive every 5 seconds the values of my 3 sensors in random order and sometimes more than once; as example see the last 4 lines of my print screen; 4 identical timestamps with 4 identical air_pressure values. Is this normal mqtt behavior? Then I went to the cayenne dash board and the dashboard went from to this screen as soon I a ran the python cayenne-mqtt.py command at the command prompt. Yellow band at the top disappeared, so I think communication is working but still no sensor data. how do I fix this? Thanks Patrick Ok, that looks much better. Lets see your full updated script again and go from there. As far as running it on boot, you can use cron to do it. Put this line in your crontab: “@reboot python /home/pi/tinkerforge/brick-mqtt-proxy.py --brickd-host 192.168.1.41 --brickd-port 4223 --broker-host localhost --broker-port 1883 --update-interval 5&” I usually include an " &" at the end but it’s not necessary. I
https://community.mydevices.com/t/mqtt-manually-publishing-subscribing/2172/7
CC-MAIN-2019-35
refinedweb
2,391
65.22
New Visual Studio Web Application: The ASP.NET Core Razor Pages Jürgen Gutsch - 24 July, 2017 I think, everyone who followed the last couple of ASP.NET Community Standup session heard about Razor Pages. Did you try the Razor Pages? I didn't. I focused completely on ASP.NET Core MVC and Web API. With this post I'm going to have a first look into it. I'm going to try it out. I was also a little bit skeptical about it and compared it to the ASP.NET Web Site project. That was definitely wrong. You need to have the latest preview on Visual Studio 2017 installed on your machine, because the Razor Pages came with ASP.NET Core 2.0 preview. It is based on ASP.NET Core and part of the MVC framework. Creating a Razor Pages project Using Visual Studio 2017, I used "File... New Project" to create a new project. I navigate to ".NET Core", chose the "ASP.NET Core Web Application (.NET Core)" project and I chose a name and a location for that project. In the next dialogue, I needed to switch to ASP.NET Core 2.0 to see all the new available project types. (I will write about the other one in the next posts.) I selected the "Web Application (Razor Pages)" and pressed "OK". Program.cs and Startup.cs I you are already familiar with ASP.NET core projects, you'll find nothing new in the Program.cs and in the Startup.cs. Both files look pretty much the same. public class Program { public static void Main(string[] args) { BuildWebHost(args).Run(); } public static IWebHost BuildWebHost(string[] args) => WebHost.CreateDefaultBuilder(args) .UseStartup<Startup>() .Build(); } The Startup.cs has a services.AddMvc() and an app.UseMvc() with a configured route:=Home}/{action=Index}/{id?}"); }); } That means the Razor Pages are actually part of the MVC framework, as Damien Edwards always said in the Community Standups. The solution But the solution looks a little different. Instead of a Views and a Controller folders, there is only a Pages folder with the razor files in it. Even there are known files: the _layout.cshtml, _ViewImports.cshtml, _ViewStart.cshtml. Within the _ViewImports.cshtml we also have the import of the default TagHelpers @namespace RazorPages.Pages @addTagHelper *, Microsoft.AspNetCore.Mvc.TagHelpers This makes sense, since the Razor Pages are part of the MVC Framework. We also have the standard pages of every new ASP.NET project: Home, Contact and About. (I'm going to have a look at this files later on.) As every new web project in Visual Studio, also this project type is ready to run. Pressing F5 starts the web application and opens the URL in the browser: Frontend For the frontend dependencies "bower" is used. It will put all the stuff into wwwroot/bin. So even this is working the same way as in MVC. Custom CSS and custom JavaScript ar in the css and the js folder under wwwroot. This should all be familiar for ASP.NET Corer developers. Also the way the resources are used in the _Layout.cshtml are the same. Welcome back "Code Behind" This was my first thought for just a second, when I saw the that there are nested files under the Index, About, Contact and Error pages. At the first glimpse this files are looking almost like Code Behind files of Web Form based ASP.NET Pages, but are completely different: public class ContactModel : PageModel { public string Message { get; set; } public void OnGet() { Message = "Your contact page."; } } They are not called Page, but Model and they have something like an handler in it, to do something on a specific action. Actually it is not a handler, it is an an method which gets automatically invoked, if this method exists. This is a lot better than the pretty old Web Forms concept. The base class PageModel just provides access to some Properties like the Contexts, Request, Response, User, RouteData, ModelStates, ViewData and so on. It also provides methods to redirect to other pages, to respond with specific HTTP status codes, to sign-in and sign-out. This is pretty much it. The method OnGet allows us to access the page via a GET request. OnPost does the same for POST. Gues what OnPut does ;) Do you remember Web Forms? There's no need to ask if the current request is a GET or POST request. There's a single decoupled method per HTTP method. This is really nice. On our Contact page, inside the method OnGet the message "Your contact page." will be set. This message gets displayed on the specific Contact page: @page @model ContactModel @{ ViewData["Title"] = "Contact"; } <h2>@ViewData["Title"].</h2> <h3>@Model.Message</h3> As you can see in the razor page, the PageModel is actually the model of that view which gets passed to the view, the same way as in ASP.Net MVC. The only difference is, that there's no Action to write code for and something will invoke the OnGet Method in the PageModel. Conclusion This is just a pretty fast first look, but the Razor Pages seem to be pretty cool for small and low budget projects, e. g. for promotional micro sites with less of dynamic stuff. There's less code to write and less things to ramp up, no controllers and actions to think about. This makes it pretty easy to quickly start a project or to prototype some features. But there's no limitation to do just small projects. It is a real ASP.NET Core application, which get's compiled and can easily use additional libraries. Even Dependency Injection is available in ASP.NET Core RazorPages. That means it is possible to let the application grow. Even though it is possible to use the MVC concept in parallel by adding the Controllers and the Views folders to that application. You can also share the _layout.cshtml between both, the Razor Pages and the VMC Views, just by telling the _ViewStart.cshtml where the _layout.cshtml is. Don't believe me? Try this out: I'm pretty sure I'll use it in some of my real projects in the future.
http://asp.net-hacker.rocks/2017/07/24/razor-pages.html
CC-MAIN-2018-09
refinedweb
1,038
77.84
If you’ve been keeping up with my blog and tutorials, you’ll know that I’ve done quite a few posts on Ionic Framework. I’ve been hearing a lot about React Native lately so I figured it is time to give it a shot. There are 6,500 languages and roughly seven billion people in the world. Chances are your native language is only known by a small piece of the global population. You can boost downloads of your application and overall App Store Optimization (ASO) by accommodating a larger variety of languages. Last year, I did a tutorial regarding localization (l10n) and internationalization (i18n) in an Ionic Framework Android and iOS application. This time I’m going to go over the same, but in a React Native application for iOS. If this is your first time taking a look at React Native, you should note that as of right now it is only compatible with iOS. I believe Android is coming in the future, but not yet. This means you’ll need a Mac to do any development. Let’s start this tutorial by first creating a fresh React Native project. Assuming you have React Native installed on your machine, run the following from your Mac Terminal application: react-native init ReactProject cd ReactProject This project will take advantage of the React Native plugin react-native-i18n by Alexander Zaytsev to do any translating. With the project as your current working directory in your Terminal, run the following command: npm install react-native-i18n --save The dependencies are now in your project directory, but they need to also be added to the Xcode project. Open ReactProject.xcodeproj found in the root of your project. Now right click the Libraries directory found in the project tree of Xcode and choose Add Files to “ReactProject”…. You want to add RNI18n.xcodeproj found in your project’s node_modules/react-native-i18n directory. Next, add libRNI18n.a via the Link Binary With Libraries section of Build Phases in Xcode. The plugin has been added to your project, but not to your project’s code. Open index.ios.js found at your project’s root and include the following line: var I18n = require('react-native-i18n'); A lot of what comes next is coming from the plugin’s documentation. Per the documentation it recommends you use language fallbacks. For example, you can choose to add translations for en_US and en_GB, but with fallbacks enabled you can have a more generic translation for en instead. This fallback can be enabled by adding the following: I18n.fallbacks = true; Now let’s look at what it takes to add some translations. We will essentially be just creating an object with parent properties for whatever locale we wish to use. For example: I18n.translations = { en: { greeting: "Hello", goodbye: "Bye" }, fr: { greeting: "Bonjour", goodbye: "Au Revoir" }, es: { greeting: "Hola", goodbye: "Adios" } } In the above object we have three different languages all using the fallback value. We have a greeting and exiting word for English, Spanish, and French. These translated variables can be used like the following in our view: <Text>{I18n.t("greeting")}</Text> <Text>{I18n.t("goodbye")}</Text> To get a better idea of it all together, your full index.ios.js file might look like the following: 'use strict'; var React = require('react-native'); var I18n = require('react-native-i18n'); var { AppRegistry, StyleSheet, Text, View, } = React; var ReactProject = React.createClass({ render: function() { return ( <View style={styles.container}> <Text style={styles.welcome}> {I18n.t("greeting")} </Text> <Text style={styles.goodbye}> {I18n.t("goodbye")} </Text> </View> ); } }); var styles = StyleSheet.create({ container: { flex: 1, justifyContent: 'center', alignItems: 'center', backgroundColor: '#F5FCFF', }, welcome: { fontSize: 20, textAlign: 'center', margin: 10, }, goodbye: { textAlign: 'center', color: '#333333', marginBottom: 5, }, }); I18n.fallbacks = true; I18n.translations = { en: { greeting: "Hello", goodbye: "Bye" }, fr: { greeting: "Bonjour", goodbye: "Au Revoir" }, es: { greeting: "Hola", goodbye: "Adios" } } AppRegistry.registerComponent('ReactProject', () => ReactProject); Just like with Angular Translate for Ionic Framework, we can add localization and internationalization features to our React Native application. This is a terrific way to boost your App Store Optimization (ASO) as your application will now reach more audiences. A video version of this article can be seen below.
https://www.thepolyglotdeveloper.com/2015/08/internationalization-and-localization-in-your-react-native-app/
CC-MAIN-2022-21
refinedweb
701
56.05
Closed Bug 1370667 Opened 3 years ago Closed 3 years ago Does freebl _fips Power Up Self Test() have a good reason to run during startup (or ever for that matter)? Categories (NSS :: Libraries, enhancement, P1) Tracking (Not tracked) 3.33 People (Reporter: ehsan, Assigned: franziskus) References (Blocks 1 open bug) Details Attachments (3 files, 1 obsolete file) I was trying to figure out why FREEBL_GetVector() seems to take up so much time in this startup profile: I noticed that BL_POSTRan() calls freebl_fipsPowerUpSelfTest(). That seemed quite surprising! AFAIK Firefox isn't FIPS certified any more. Does this code have any business to run during our startup at all? Flags: needinfo?(dkeeler) Summary: Does → Does freebl_fipsPowerUpSelfTest() have a good reason to run during startup (or ever for that matter)? (In reply to :Ehsan Akhgari (needinfo please, extremely long backlog) from comment #0) ... > I noticed that BL_POSTRan() calls freebl_fipsPowerUpSelfTest(). That seemed > quite surprising! AFAIK Firefox isn't FIPS certified any more. Firefox certainly isn't. My understanding is people have been saying "well, if the crypto library is FIPS-certified, then it's fine". Of course, as far as I know no recent version of NSS (and Firefox essentially must have the most recent version at the time of release) has been FIPS-certified, but that's another story. > Does this code have any business to run during our startup at all? Not if NSS isn't in FIPS mode. I would file an NSS bug (or move this to NSS :: Libraries). Flags: needinfo?(dkeeler) I moved this over to NSS. It looks like the latest FIPS related changes introduced this without disabling it in non-FIPS mode. The call to freebl_fipsPowerUpSelfTest should be inside a #ifndef NSS_NO_INIT_SUPPORT. Assignee: nobody → franziskuskiefer Component: Security: PSM → Libraries Priority: -- → P1 Product: Core → NSS Target Milestone: --- → 3.32 Version: unspecified → other This is currently being worked on. Franziskus submitted a new patch yesterday that I'm currently reviewing. We'll just need one more review from RedHat and then we're good to go. Status: NEW → ASSIGNED Attachment #8897402 - Flags: review?(rrelyea) Comment on attachment 8897402 [details] [diff] [review] fips-startup-tests.patch Review of attachment 8897402 [details] [diff] [review]: ----------------------------------------------------------------- Just for the record, I still strongly prefer to keep the FIPS plumbing in and create a 'fake' FIPS mode where we don't actually verify the library integrity, but all the other stuff happens. I'm still worried that we won't be getting FIPS testing at all on the mozilla side. I also recognized that I've lost that battle, so I'm not holding Franziskus to that standard. (Just registering my objection for posterity). The r- is reviewing the patch assuming the goal is to have a completely non-FIPS compiled version of softoken as well as a full-FIPS version. The patch falls down in 2 cases: 1) in the non-FIPS case, the integrity checks should always FAIL. Anyone depending on those integrity checks should get failures if they aren't using the full-FIPS mode (in my proposed fake-FIPS mode, you want success, because you are 'faking' FIPS mode). 2) The unrelated rng change needs to allow reinitialization, so the goRNGInit variable needs to bel cleard on RNG_Shutdown if we are going to clear the rng_lock. The bike shedding comment is just my preference and does not need to be changed to get an r+. bob ::: lib/freebl/det_rng.c @@ +112,5 @@ > { > + if (rng_lock) { > + PZ_DestroyLock(rng_lock); > + rng_lock = NULL; > + } Hmm I think we also need to clear the coRNGInit variable. We need to support the case where we've shutdown NSS and restarted it. ::: lib/freebl/nsslowhash.c @@ +45,3 @@ > return 1; > } > +#endif /* NSS_FIPS_DISABLED */ Bike shedding: I would have preferred just to keep the function and always return 0. Doing it your way does save a call and some code, my way has less ifdefs. ::: lib/freebl/shvfy.c @@ +555,5 @@ > +PRBool > +BLAPI_VerifySelf(const char *name) > +{ > + return PR_TRUE; > +} Hmm, do we actually fail if we return FALSE here? We need to keep these entry points, but if something actually uses them, they should get a PR_FALSE return if we didn't actually verify the library. If we were keeping the 'fake' FIPS mode (do all the POST checks, keep the FIPS plumbing, just disable the SELF tests) like I recommended, then this would be correct, but if we aren't faking FIPS mode, we definitely should return PR_FALSE for both of these functions. Attachment #8897402 - Flags: review?(rrelyea) → review- Yeah, the PRNG fix is unrelated. It was only triggered by changes in here. I'll pull that out. > ::: lib/freebl/shvfy.c > Hmm, do we actually fail if we return FALSE here? Yes we should fail. That shouldn't break anything. I changed that. Return false now in the integrity check functions when FIPS is disabled. I pulled out the RNG change to make this cleaner. Attachment #8897402 - Attachment is obsolete: true Attachment #8898718 - Flags: review?(rrelyea) Comment on attachment 8898718 [details] [diff] [review] fips-startup-tests.patch v2 Review of attachment 8898718 [details] [diff] [review]: ----------------------------------------------------------------- r+ I also see you removed the RNG changes. I'm OK with the idea of them, but we do need to make sure we can reinitialize once we've finalized them. ::: lib/freebl/shvfy.c @@ +555,5 @@ > +PRBool > +BLAPI_VerifySelf(const char *name) > +{ > + return PR_FALSE; > +} Thanks! Attachment #8898718 - Flags: review?(rrelyea) → review+ Thanks Bob! Status: ASSIGNED → RESOLVED Closed: 3 years ago Resolution: --- → FIXED Target Milestone: 3.32 → 3.33 We should fix all.sh to automatically run or skip the fips.sh test, based on build config and environment. If the environment sets NSS_FORCE_FIPS, can we run fips.sh and also automatically set the environment variables NSS_TEST_ENABLE_FIPS? Attachment #8899451 - Flags: review?(franziskuskiefer) Comment on attachment 8899451 [details] [diff] [review] all-sh-fips.patch Review of attachment 8899451 [details] [diff] [review]: ----------------------------------------------------------------- lgtm Attachment #8899451 - Flags: review?(franziskuskiefer) → review+ Comment on attachment 8899451 [details] [diff] [review] all-sh-fips.patch Reopening, the commits from comment 10 resulted in the SSL tests no longer being executed (and the test scripts didn't detect it, which is another bug in the test scripts we should fix). Running tests for ssl TIMESTAMP ssl BEGIN: Mon Aug 21 10:56:57 UTC 2017 ssl.sh: SSL tests =============================== ssl.sh: Error: Unknown server mode 1 Status: RESOLVED → REOPENED Resolution: FIXED → --- This part of your commit seems to set an incorrect variable. NSS_SSL_TESTS contains a list of tests to be executed, which you replace with "1". Which variable did you really want to set? + // We don't run FIPS SSL tests + if (task.tests == "ssl") { + if (!task.env) { + task.env = {}; + } + task.env.NSS_SSL_TESTS = "1"; + } How about this approach to abort the tests, if invalid/unknown tests are requested? Attachment #8899812 - Flags: review?(franziskuskiefer) This enables SSL tests again and makes sure that all.sh fails when something like this happens again, i.e. NSS_SSL_TESTS has invalid values. Status: REOPENED → RESOLVED Closed: 3 years ago → 3 years ago Resolution: --- → FIXED Why did you introduce new symbol NSS_FIPS_DISABLED ? Why did you define NSS_FIPS_DISABLED only in gyp ? Why didn't you reuse build flag NSS_FORCE_FIPS ? Let's cleanup in bug 1402410, which showed a difference of behavior in make and gyp builds, non-fips.
https://bugzilla.mozilla.org/show_bug.cgi?id=1370667
CC-MAIN-2020-40
refinedweb
1,209
66.84
justify either the actual behaviour or what you believe the behaviour should be? Note that appealing to the C# standard will not help you here because the standard gets it wrong. I'll discuss how exactly the standard gets it wrong and what the justification is for the actual behaviour next time on FAIC. [UPDATE: I have reviewed the standard more carefully. The standard is actually correct, but really hard to read. I'll post a full analysis on Monday.] If you would like to receive an email when updates are made to this post, please register here R. Sorry, I prefer to err on the side of politeness... As far as B inheriting T.... I was trying to infer the inner workings here, to explain why in the world the example would yield int instead of string. Some sort of weird inheritance of T was the only way I could figure that this would happen. Obviously there's some assumption I'm making that is wrong. Either I don't understand the scoping rules, or I've got the order in which things happen all mixed up. I still think it _should_ output string. =P Sigh.... I'll wait for Monday. Took me a few minutes to grasp this one. The <T> is clearly referring to the containing class. Which means that b.M() shows a string. But the container class for c is b - which is A<Int>. So yes, that makes sense - just took a moment to get there. Hi, public class A<M> { public class X<T> { public class B : A<int> { public new void M() { System.Console.WriteLine(typeof(T).ToString()); System.Console.WriteLine(typeof(M).ToString()); } public class C : B { } } ... A<string>.X<double>.B.C c = new A<string>.X<double>.B.C(); c.M(); Prints string and double. And why do I need the new keyword in M here? Your postings are way shorter than Chris Brumme's when he was still posting, but I am finding myself equally confused after reading them (I love that). So I guess you can consider yourself much more efficient:-) Steve: Excellent example! Why is it that this DOESN'T reproduce the odd behaviour? The germane difference between the two is that in your example, A<T> does not contain anything called B. When I post what is really happening on Monday it'll be clear why this does what it does. You need the "new" because your method M shadows the in-scope type variable M from A<M>. We try to make it either illegal to shadow like this, or make you put the keyword "new" on one of them as a big waving red flag that says "there is a name ambiguity involving this method, so keep that in mind when you are reading this code". Today, the answer to Friday's puzzle . It prints "Int32". But why? Some readers hypothesized that M would Hi Eric, I have a confusion, you mentioned that ** the question hinges solely upon what "B" means in "class C : B", and in this case it means A<int>.B. ** So, if you consider the following code A<string>.B b = new A<string>.B(); Why in this case b.M() prints String not INT32 , here also "B" should be considered as A<int>.B ... what's your comment .... Hi Eric, If we modify the code like this .. { public void N() { Console.WriteLine(typeof(T).ToString()); public class B : A<int> public void M() { Console.WriteLine(typeof(T).ToString()); } public class C : B static void Main() A<string>.B b = new A<string>.B(); b.M(); b.N(); Here, b.M() shows String but b.N() shows Int32. I am confused how could it happen.....:)
http://blogs.msdn.com/ericlippert/archive/2007/07/27/an-inheritance-puzzle-part-one.aspx
crawl-002
refinedweb
626
77.03
Hello to all, the project that I want to present I have realized entirely by myself from the mechanics to the control software. In life I make the web programmer and I have thought to exploit this knowledge by applying them to the robotics world. I modeled every single piece of mechanics, I printed it in 3D with my Davinci 3D, assembled the electronics part and finally developed the control software, it was a wonderful experience for me that gave me the opportunity to understand best of all the production process of a real prototype. I hope to make this my achievement interesting ;-) Teacher Notes Teachers! Did you use this instructable in your classroom? Add a Teacher Note to share how you incorporated it into your lesson. Step 1: The Idea The idea is to create a erasable whiteboard that could write and draw in real time via a web interface and therefore also from a smartphone. What do I need? I do not know! I do this more than that for study and research, who knows if I will ever find a use ... though to think about it well though an idea I would have it! I would like (but I must have the consent of my wife) to place the whiteboard in a kitchen-like place hanging from the wall and automate the printing of automated messages such as some news via the web or today's weather, the days of the day recovered from Google Calendar, a handmade design, written notes when I'm in the car and so much more ... for now my child plays with the Tablet. I was strongly inspired by an existing project (iBoardbot). The iBoardbot project is very nice that it is based on the same principle with the difference that I am controlling the machine. To help you read this article I organized the contents in macro-sections: - Components - Working logic - Mechanical project - Electronics - Software Step 2: Components The project is mainly based on the use of a Raspberry Pi3, below list the components needed to assemble this board. Mechanical components - Wooden boards for the structure - ABS molded parts - Approximately 1mt. of T2 belt - 2 Pulleys GT2 - 2 mt. of GT2 belt - 1 iron stand for nema 17 - 1 stem steel diam. 10mm x 500mm - 1 6mm aluminum tube - 1 bearing - 1 aluminum profile to L - 1 white glass plate - Some small vines from wood - Screws, bolts and washers M3 various sizes - 2 self-lubricating bronze bearings diam. 10mm Electronic Components - Raspberry Pi 3 model B - Arduino Uno - CNC shield - 2 engines Nema 17 - 1 Servo SG90 9g - Voltage regulator IN 6-24V OUT dual USB 5V - Various electric leads - 12Volt power supply - 2 Driver pololu A4988 Step 3: Working Logic Choosing to adopt a Raspberry is dictated by the fact that having a linux operating system gives me the freedom to organize things to my liking without any kind of constraint, can go online with WiFi and last but not least, I can make it work like a web server, and this is the key to everything because properly configuring the home router can access from the outside to the Raspberry with a smartphone browser. In short, Raspberry is the webserver on which it runs a site with the interface to write and draw then the SVG format is sent, always to the Raspberry that converts it into GCODE and processes it to operate the engines. Later in the software section I will explain the processes in detail. Step 4: Mechanical Project Unlike all the other CNC I have made with recovery materials I designed it from zero in Rhino with the goal of printing ABS all the components needed and assembling it all without any surprises. I also developed all the software that can run on the web and command the headset to draw and, if necessary, delete the whiteboard. Regarding the bulkheads I tried to put everything in a box that was not too cluttered and easy to install because at the end of everything from that box I would like a single cable, that of the 220Volt, and nothing else, just like a small plotter. I have designed a light structure with few pieces but optimized for the purpose, I tried to miniaturize as much as possible the components especially those dedicated to the head. Step 5: Printing Parts Modeling the pieces was easy and if we wanted it too fast but the thing that made me lose a bit of time was to be able to print them respecting the final measures, I try to explain better ... For example, if I have a 3D model of a 20x20x20 cube with a central hole diam. 6 mm when printing the final measurements will never be those of the original design because the printing material (ABS in my case) tends to shrink during the cooling phase. The unpleasant thing is that, for example, the holes shrink and the outer parts shrink. When do the pieces shrink? boh! I've read that the shrinkage percentage may change from the brand of filament used and even the color. I solved the question by printing before the sample pieces and measuring the final size, then by making proportions I obtained a shrinkage percentage. This trick forced me to modify all 3D piece measurements with the consequent loss of time. In addition to this I found myself having to remodel some pieces because once printed they did not fit well or just forgot something (a hole, a pin, etc.) I realized that designing a prototype involves having to rework and review every single piece and re-adapt it because it works as it should, so you have to be ready to put everything into play if you encounter obstacles or obstacles. Just to make the idea the head I had to reprint 5 times and still is not the optimal situation (but it does its duty). Step 6: Assembly The moment of assembly is always exciting because surprises are always behind the corner. Fortunately everything went well and in 30 minutes the assembly was completed. maybe I spent more time documenting the project than realizing it :-) Step 7: Electronics The electronic components I adopted are mainly a Raspberry Pi 3, an Arduino Uno and a CNC Shield. Raspberry is the main heart because it manages all the operations and the input and output signals, Arduino manages the current flowing into the motors and actuates a servo to move the pen and the eraser, the CNC shield aboard the Pololu drivers A4988 they operate the Nema 17. In addition to these basic components there is a 12 Volt power supply, a voltage reducer, a relay some electric cable. The power supply supplies power to the CNC shield at Nema 17° and (via a voltage reducer with USB output) to the Raspberry and the servomotor. Arduino is powered directly from the Raspberry via a usb cable and to simplify some things I have installed the Arduino IDE on Raspberry so I can make the sketch on the fly directly from the Raspberry desktop interface even remotely via VNC. One of the goals I set for myself was to have an object without a lot of files coming out, transformers or external drivers, but with only the 220 V wire coming out. That said, I tried to put everything inside the structure. Step 8: Software The software part of this project is the most important thing because it is the one that allows you to manage every single behavior of the machine according to your needs. The user who wants to interact with the machine (or blackboard) will have a jQuery-based web app that will allow you to draw a free hand or write a text and then send it to the webserver (Raspberry) that will have to process the data , convert it to Gcode and finally transform it into engine signals. It may seem a complicated thing, but if you isolate all the various processes in small atomic operations everything becomes simpler and more comprehensible. The programming languages that should be known are: - Linux bash - PHP - Python - HTML - Javascript + jQuery The software parts required for this project are as follows: - PyCNC (engine and gcode management) - svg2gcode (convert from svg to gcode) - grecode (gcode manipulation) - easysvg (text generator in svg) - iBoard.class.php (main control class) - iBoard.js (client side javascript class that dialogs with iBoard.class.php) As I mentioned in the introduction, the underlying principle is the SVG format, which is very useful in the web to draw mainly graphics. If you try to open with the Notepad, any .SVG file will notice as the first thing that is a textual format and inside it there is an XML structure that contains tags with within the X, Y coordinates and is its own this aspect of obtaining Gcode. Webserver Apache + PHP The server configuration (Raspberry) is a bit laborious but nothing so complex. Install Apache2 sudo apt-get update sudo apt-get install apache2 Install PHP sudo apt-get install php5 libapache2-mod-php5 php5-mcrypt After installing Apache and PHP, you must configure Apache to run it with the "pi" and "pi" users because when the scripts run, the apache user will not be able to run them. The file that needs to be addressed is this /etc/apache2/httpd.conf edit these two lines: User pi Group pi Now you need to create a virtualhost and point it to the project directory. The folder to create a vhost is here: /etc/apache2/sites-enabled just create a .conf file for example iboard.yourdomain.com.conf inside the file just enter this configuration <virtualhost *:80> DocumentRoot /var/www/html/iBoard ServerName iboard.yourdomain.com<br> ErrorLog ${APACHE_LOG_DIR}/error.log CustomLog ${APACHE_LOG_DIR}/access.log combined RewriteEngine On <directory /var/www/html/iBoard> AllowOverride All </Directory> </VirtualHost> You must restart the apache service by making the changes sudo /etc/init.d/apache2 stop<br>sudo /etc/init.d/apache2 start To see if the configuration is successful, just open the browser of a PC on the network and type in the domain name you chose. If you have not configured the router to accept http incoming calls and if you have not yet registered a domain, just add it to the hosts file on your pc. On Windows the file is here: C:\Windows\System32\drivers\etc\hosts At the bottom of the file, you need to add a line like this [IP Raspberry] iboard.yourdomain.com eg. 192.168.1.70 iboard.yourdomain.com<br> Save the file and type on your PC browser, if it's done well you should see the Apache Welcome page. Stepper stepper library Without a software capable of sending signals to the stepper motor drivers this project would not exist. To be able to convert an instruction into motion, a library is required to interpret the data and handle the output signals. The peculiarity that this library has to have is that the gcode statements should be run in bash on command line. PyCNC Library (python) Before writing anything from scratch I did a long search and eventually found on GitHub a library written by Nikolay Khabarov in Python called PyCNC. Thankyou Nikolay! PyCNC was developed by a guy Nikolay Khabarov very kind and helpful because every question I asked him always answered me with precision and excellent solutions. At this link you can see the technical questions I had and the related answers. I explained briefly why I had contacted Nikolay. His library manages up to 5 axes with stepper motors while in my project I also have a small servo used to operate the pen and the eraser. According to Nikolay, servo management was possible but after doing several tests and modifying the sources of its code we both came to the conclusion that for what was written it was not possible to do so or it was better that as the steppers turned I was interfering with servo motor. The solution I adopted was to include in the project an Arduino Uno that will operate the pen the eraser and a relay for the management of the energy saving that I will explain later. To make the PyCNC library communicate with Arduino I had to make several changes to the Nikolay code. PyCNC looks at every single line of gcode and identifies commands (eg G1, G2, G3, M3, M5, etc.) and calls for specific functions that generate output signals for the motors. All engine parameters and pin configuration are in this file: PyCNC/cnc/config.py Here are some lines of configuration code: <p>MAX_VELOCITY_MM_PER_MIN_X = 10000<br>MAX_VELOCITY_MM_PER_MIN_Y = 10000 MAX_VELOCITY_MM_PER_MIN_Z = 600 MAX_VELOCITY_MM_PER_MIN_E = 1500 MIN_VELOCITY_MM_PER_MIN = 1 # Average velocity for endstop calibration procedure CALIBRATION_VELOCITY_MM_PER_MIN = 1000 # Stepper motors steps per millimeter for each axis. STEPPER_PULSES_PER_MM_X = 80.5 STEPPER_PULSES_PER_MM_Y = 39.75 STEPPER_PULSES_PER_MM_Z = 400 STEPPER_PULSES_PER_MM_E = 150 # ----------------------------------------------------------------------------- # Pins configuration. # Enable pin for all steppers, low level is enabled. STEPPERS_ENABLE_PIN = 26 STEPPER_STEP_PIN_X = 21 STEPPER_STEP_PIN_Y = 16 STEPPER_STEP_PIN_Z = 12 STEPPER_STEP_PIN_E = 8 STEPPER_DIR_PIN_X = 20 STEPPER_DIR_PIN_Y = 19 STEPPER_DIR_PIN_Z = 13 STEPPER_DIR_PIN_E = 7</p><p> <strong># Mirco added start PEN_PIN = 9 ERASER_PIN = 11 DRIVERS_PIN = 5 # Mirco added end</strong></p><p>SPINDLE_PWM_PIN = 4 FAN_PIN = 27 EXTRUDER_HEATER_PIN = 24 BED_HEATER_PIN = 22 EXTRUDER_TEMPERATURE_SENSOR_CHANNEL = 2 BED_TEMPERATURE_SENSOR_CHANNEL = 1 ENDSTOP_PIN_X = 23 ENDSTOP_PIN_Y = 10 ENDSTOP_PIN_Z = 25</p> What I did was inventing 5 new Gcode M333, M444, M555, M666, M777 commands and implementing them in PyCNC where each of them generates output signals on the Raspberry pin. In particular - M333 operates the relay to power the drivers - M444 disables the relay to power the drivers - M555 servo at rest 90° - M666 servo in the 120° position that drives the pen - M777 servo in position 60° that drives the eraser when starting PyCNC and launching one of the above-mentioned gcode commands, the Raspberry sends digital signals to the pins previously configured. In the file /PyCNC/cnc/gmachine.py I added the following instructions elif c == 'M333': # Mirco - turn ON the engine drivers self._driver_on() elif c == 'M444': # Mirco - turn OFF the engine drivers self._driver_off() elif c == 'M666': # Mirco - activates the pen self._write() elif c == 'M777': # Mirco - activates the eraser self._erase() elif c == 'M555': # Mirco - rest position self._headup() for example, code M666 always in the same file I defined the _write() def _write(self):<br> hal.write() and in the file PyCNC/cnc/hal_raspberry/hal.py I added the new function def write(): """ the pen is active with a digital output to the PEN_PIN pin """ logging.info("pen is active")<br> gpio.set(PEN_PIN) gpio.clear(ERASER_PIN) These are the main changes I've made for my purposes. To launch PyCNC to command line just enter the PyCNC directory and type sudo ./pycnc pi@raspiboard:~/Desktop/iBoard/scripts/PyCNC $ cd /var/www/html/iBoard/scripts/PyCNC/ pi@raspiboard:/var/www/html/iBoard/scripts/PyCNC $ sudo ./pycnc ---- ads111x is not detected ---- *************** Welcome to iBoard! (powered by PyCNC) *************** > In order to operate the pen or the eraser, it is necessary to create an ad hoc written gcode, other than the classic M3 M5. If we want to draw a square the code will look like this M555 G0 X10 Y10 M666 G1 X10 Y20 G1 X20 Y20 G1 X20 Y10 G0 X10 Y10 M555 Arduino Side Arduino things are much simpler because it is enough to configure pin inputs and read digital value. Let's assume this configuration: Gcode M666 command configured on GPIO24 as a digital gadget output GPIO24 pin connected to the Arduino PIN7 When typing the Gcode M666 instruction the Raspberry will output a digital signal to the Arduino PIN7, the Arduino sketch knows that when it comes to a digital signal on the PIN7 it will have to position the servo on the 120 ° position (then the position of the servo I had other problems I will talk later on). below the sketch I wrote and uploaded on Arduino Uno #include <Servo.h> Servo myservo; int pos = 0; int inputEraser = 7; int inputPen = 8; int inputRele = 6; int pinServo = 9; int pinRele = 5; int statusEraser = 0; int statusPen = 0; int statusRele = 0; int servoVoidAngle = 90; // currently not used int servoEraserAngle = 60; // currently not used int servoPenAngle = 115; // currently not used int current_angle = servoVoidAngle; // currently not used int servoVoid_ms = 1750; // milliseconds to handle precise rotation at 90°<br>int servoEraser_ms = 1550; // milliseconds to handle precise rotation at 60°<br>int servoPen_ms = 1950; // milliseconds to handle precise rotation at 115°<br>int current_ms = servoVoid_ms; String incomingByte = ""; void setup() { pinMode(inputEraser, INPUT); pinMode(inputPen, INPUT); pinMode(inputRele, INPUT); pinMode(pinRele, OUTPUT); Serial.begin(9600); myservo.attach(pinServo); } void loop() { if (Serial.available() > 0) { incomingByte = Serial.readString(); if(incomingByte) { myservo.writeMicroseconds(incomingByte.toInt()); Serial.println(incomingByte); } } if(true){ statusEraser = digitalRead(inputEraser); statusPen = digitalRead(inputPen); statusRele = digitalRead(inputRele); if(statusRele==HIGH){ digitalWrite(pinRele, LOW); }else{ digitalWrite(pinRele, HIGH); } if(statusPen==HIGH || statusEraser==HIGH){ if(statusPen==HIGH) { current_ms = servoPen_ms; myservo.writeMicroseconds(current_ms); } if(statusEraser==HIGH) { current_ms = servoEraser_ms; myservo.writeMicroseconds(current_ms); } }else{ current_ms = servoVoid_ms; myservo.writeMicroseconds(current_ms); } } } Anyone familiar with Arduino's sketch will soon notice that I did not specify degrees to handle the position of servo but I used the writeMicroseconds () function. The reason for this choice is because when I specified a 90 ° position, the servo positioned about 80 °, in short it was not exactly as I wanted. The writeMicroseconds feature allows you to handle your position more accurately. The relay that I mentioned first about managing energy savings will be used to disconnect power to engine drivers after a certain period of inactivity. at the moment the idle timer did not implement it in the sketch above but I will do it later on at project completion. Conversion from SVG to Gcode To convert an SVG file there are so many opesource modes and scripts around the network and good or bad all do the same thing instead of wasting time on "reinventing hot water" (by rewriting it from scratch) I've taken any one on GitHub in Python and customized it by changing some pieces of code to fit it with the purpose. The script in question is called svg2gcode, it has been written by Vishal Patil and can be downloaded from here. thanks Vishal Patil! The first important thing is to edit the config.py file where you can specify the maximum work area and the prefixes and suffixes to apply for each parsed shape. this below for example is the configuration that I have adopted: """G-code emitted at the start of processing the SVG file""" preamble = "g21\ng90\nM333\ng4 p1\ng28\nf9000" """G-code emitted at the end of processing the SVG file""" postamble = "g4 p0.3" """G-code emitted before processing a SVG shape""" shape_preamble = "g4 p0.1" """G-code emitted after processing a SVG shape""" shape_postamble = "g4 p0.1\nM555" """Print bed width in mm""" bed_max_x = 380 """Print bed height in mm""" bed_max_y = 220 """ Used to control the smoothness/sharpness of the curves. Smaller the value greater the sharpness. Make sure the value is greater than 0.1 """ smoothness = 0.1 In my case the maximum extension in X is 380 mm and in Y 220 mm, then I added the M333 and M555 controls to activate the current to the drivers and to place the resting head and some other small features in the initial preamble. I found myself forced to make some changes so I made a copy of the file calling it svg2gcodeMirco.py, this will be the file I'll call to generate the gcode that suits my needs. The part that I modified is the one at the bottom of the file, in particular I added the M666 command, a pause of 300ms (G4 p0.3) and a few comments to put in the Gcode to do debugging. print shape_preamble p = point_generator(d, m, smoothness) count = 1 for x,y in p: if x > 0 and x < bed_max_x and y > 0 and y < bed_max_y: if count == 2: print "G4 p0.3\nM666\nG4 p0.3" print "(startsegment)" print "G1 X%0.1f Y%0.1f" % (scale_x*x, scale_y*y) count += 1 else: print "(fuori misure x%s y%s)" % (x,y) print "(endsegment)" print shape_postamble To convert a .svg file to Gcode just go to the root folder where the svg2gcode files are located and launch this command cat myfile.svg | python svg2gcode.py the command above will show the newly generated gcode so if we want to write it on a file, just add the bigger symbol (>) and the name of the file we want to create, for example. cat myfile.svg | python svg2gcode.py > mygcode.nc<br> At this point, the road is downhill because there is nothing left to put together all the pieces and make them work properly. Grecode Generating Gcode is not enough because if you think about it, it has never been specified at what point the work area will have to be positioned in our design and besides this in the generated gcode there is a mirrored code problem in the sense that in the file SVG coordinates come from those web and in our machine may not always coincide. Grecode is a useful library that is concerned with remaining the Gcode, for example by moving the drawing to a specific coordinate, mirroring it, rotating it, scaling it, stitching it, and so on. Then, after generating the Gcode, you can decide whether to paste it into grecode to apply changes. iBoard.class.php I wrote this PHP class to put together all the components described above and to make them work in unison and command the whiteboard. The operating logic of this class is to generate a task (or job) for each drawing that is done by creating a folder with the inside of the single svg and the generated Gcode. Managing things this way I have the ability to partially delete the whiteboard asking for the deletion of a precise task. For partial deletion I mean that if for example drawing a star I can even delete it without having to go across the whiteboard from top to bottom. How does partial deletion occur? It's very simple, having the gcode saved for each drawing what i do is take the minimum and maximum coordinates of the design by parsing all the gcode lines and building on the fly another gcode file that drives the eraser making it move zigzag in the area which was the drawing. In addition to these basic features, I have added other ones to run pre-formatted code such as a welcome message or the total blackboard erase. easysvg (svg text generator in PHP) easysvg is a PHP class that is always downloaded from GitHub that allows you to convert a text into SVG. Someone is wondering: but why not generate them via the web? The reason is very simple, because creating a web text means using a special XML tag called text inside of which there are no xy coordinates, so the svg2gcode script will not handle it. There is a Javascript (RaphaelJS) library that manipulates svg and that it can convert text to svg path but after some tests I realized that it is a bit heavy to handle for the browser especially in the presence of a lot of text. The solution I adopted is to send POST to my class iBoard.class.php only text and some other parameters such as font-family, font-size, and alignment, then using the easysvg class generate a svg to run. The fonts generate them with online tools to convert a .ttf font into .svg and then create a webfont. If in the future I will find a way to simplify a text in path svg, I will implement it and update this document. The advantage of using a svg PHP generator in PHP is to automate writing processes on the blackboard, such as scheduling automatic weather messages, some news from the web, or appointments from Google Calendar. using easysvg is very simple require 'easySVG.php'; $svg = new EasySVG(); $svg->setFont("om_telolet_om-webfont.svg", 100, '#000000'); $svg->addText("Simple text display"); $svg->addAttribute("width", "800px"); $svg->addAttribute("height", "120px"); echo $svg->asXML();Console at command line In order to manage the automatic writing of texts I have also developed a command line console that always uses the iBoard.class.php class but which can be used in command line bash such as: php controller/cli.php wtriteText -text="ciao!" -align=top-left -font=comic -size=40 the instruction above will make the text "Hello!" appear on the whiteboard. 40 mm high with the comic character located at the top left. If, for example, I want to schedule the printing of the latest news from a RSS feed, I only need to create a CRON at 8:00 AM that reads the RSS feed XML and executes the above command with the text of the news then if I would like to download weather forecasts from open weather and print it under the news. The only limit is the fantasy ... Client Side Client side will have a jQuery-based HTML web interface that will allow the user to write text or draw hands-free with the mouse or with the smartphone's touch screen. I developed a small javascript class that has the task of dialoguing via Ajax with the web server by sending the SVG format design. Step 9: Considerations The whole project ended successfully with no special problems; the aspect I have to work on is still tied to the marker that with the passage of time tends to dry out and therefore no longer write. I still have to do some tests to see if there is such a resting position that the ink does not dry. At present when the pencil is at rest it stays in an inclined position with the tip pointing upwards slightly and I do not think it is very advantageous, I should try to keep the blackboard out on a pew so the pencil stays almost upright. If even the pencil is dry, you can make a change on the whiteboard so that in the rest position at X0 Y0 there is a kind of hole that acts as a stopper on the plane where the tip of the pen can be threaded, avoiding to stay in contact with the air. You can download project source files click on this link I hope you enjoyed this presentation, thanks for reading and the next ... First Prize in the Make It Move Contest 2017 First Prize in the Automation Contest 2017 38 Discussions 7 months ago Sir, please mention how to run the program? Is it in raspberry pi or in Linux? And please give me the link to create the app, please help me in this sir. Question 1 year ago sir, can you help me please i pay you ?? Question 1 year ago can i make it without cnc sheild 1 year ago Grande progetto, mi piace molto! 1 year ago anybody give me the critical analysis from project background and literature reviews about the project 1 year ago This is excellent very technical work. Even though its all written out right in front of me how you built it I'm still not sure I could copy it ;). Great job! Reply 1 year ago thankyou MechEngineerMike! 1 year ago Omg, you’re a genius! Do you sell this? Reply 1 year ago Hi Verolee, thanks for what you wrote but I do not sell it, if you're interested you can download the whole project. See you soon 1 year ago It's very detailed! Good job! :D Reply 1 year ago thankyou IgorF2! 1 year ago Sei un grande! W l'Italia! Reply 1 year ago Grazie kommy, w l'Italia!!!! 1 year ago WOW - this is fabulous. I have commercial embroidery machines and vinyl graphic cutters - your machine works on the same principal - you got my vote. 1 year ago Great Idea! I would flip the pen upside, gravity will help to last the pencil longer ;) Reply 1 year ago in fact I had already thought about it and thanks for the advice, I will do some tests 1 year ago BELLISSIMO! I just chose my winter project :-) Reply 1 year ago well, I wish you a good job and once you've finished I would be happy to see it. See you soon 1 year ago This is among the best projects I've seen on Instructables. Very cool! I hope you keep working on it. Try to raise some money for the prototype and make it into a flexible solution for any whiteboard out there. Connect the whole system with a camera, so that I could write on one whiteboard in Boston and on another whiteboard in San Francisco my writing would appear. Many possibilities for extension of the concept. Thanks for sharing. Reply 1 year ago thank you! I customized the texts by creating a font with my real writing ... the idea of the camera is very interesting ...
https://www.instructables.com/id/IBoard-Web-controlled-Whiteboard/
CC-MAIN-2019-35
refinedweb
4,866
57.91
Problem with Digital output timinguser_343349849 Aug 18, 2017 2:02 PM Hi, I am working on a project that needs to toggle an "H" Bridge at 125kHz. If I drive just one output pin on and off I get the exact same ON and OFF period. When I add a second drive pin the ON time gets longer than off time for some reason. I have attached the bundle for the project. It just uses NOPs to create delay for on and off (I am not sure how accuracte the cyDelayus routine is). If I uncomment NMOS1 only, I get 8us ON time and OFF time. If I comment out NMOS1 and use NMOS2 I also get 8us ON time and 8us OFF time. If I use Both then I get 10.7us ON time and 8us OFF time for BOTH. The program needs to set P1/ N2 for the first 1/2 cycle and P2/N1 for second half with symettrical 4us period. (During each half cycle it switches off the unused pairs of control) Thanks 1. Re: Problem with Digital output timingbob.marlowe Oct 25, 2014 4:08 AM (in response to user_343349849) David, you can rely on the CyDelay() function as you can rely on the clock precision. What you like to do, getting a precise timing for your h-bridge, could be done best with some hardware state-machines using LUTs. When a project using delay-functions gets expanded there will be the need for serving interrupts which will destroy any exact software-timings. Making sure that the bridge does not generate a shortcut can be done best in hardware. Bob 2. Re: Problem with Digital output timingbob.marlowe Oct 25, 2014 4:11 AM (in response to user_343349849) Additionally some infos you get when typing "h bridge" into the keyword search field on top of this page :-) Bob 3. Re: Problem with Digital output timinguser_14586677 Oct 25, 2014 5:45 AM (in response to user_343349849) This is one case where I would advocate external integrated H Bridge, like Infineon, TI, Freescale, Intersil, IR.... Reason is these have thermal protection, extensive OV protection, gate drivers, high currents due to gate drive and drain loads are internally routed to allow easy board layout, translate ground transients due to layout minimized. Also timing due to component differences better controlled and screened. Essentially these integrated parts become just a simple logic PWM interface from PSOC. You can even get parts that are galvanicaly isolated, eg. medical applications, etc.. Regards, Dana. 4. Re: Problem with Digital output timinguser_343349849 Oct 25, 2014 5:40 PM (in response to user_343349849) Thanks for the responses. I change the layout to have a 125 kHz clock going to a LUT. A control Reg (1 bit) also went to LUT input (used as "ENABLE" signal) The 2 output bits were control by LUT such that when ENABLE was "1", One output followed the Clock polarity while other was inverted. This worked ok as far as signal generation but my FET drivers and Fets were getting warm (even with light load of 1K across bridge). I have a reference design that is driven from a PIC controller that works fine and the only difference I can see is 1 machine cycle (250ns) delay between the pins being switched. The drivers for the FETS are inverted (TC4426). I apply a "1" to one of the ICS (this switches the High side P FET ON and the low side N FET OFF on left side of bridge) and I apply "0" to inputs on other side that switched P FET OFF and N FET ON. I wonder if there is small overlap and all are switching on for several hundre nanosecods and causing the heat? If so, is there a way I can always ensure 1 output is switched before the other? ie. The "0" on one side applied before the "1" on the other. I suspect I need to discretely control each pin from software instead of using clock and LUT in this case? 5. Re: Problem with Digital output timinguser_343349849 Oct 26, 2014 12:00 AM (in response to user_343349849) I took a look at some integrated solutions and found that ST produce a package that has a full bridge driver built into it with full protection. It also provides the required delay to prevent cross conduction when swithing. However, all of the models I found in the DMOS family (like the L6226) have upper switching frequency limit of 100kHz. Can anyone recommend a suitable device that will go a bit higher in frequency ? (with 2.5A current capavility or more). This does work out economical as the one device replaces 4 ICs (the 2 drivers and 2 HEX FET ICs I am using at the moment) 6. Re: Problem with Digital output timinguser_14586677 Oct 26, 2014 7:12 AM (in response to user_343349849) Did you try - ON Infineon IRF Fairchild TI Many have selector and filtering tools such as Regards, Dana. 7. Re: Problem with Digital output timinguser_343349849 Oct 26, 2014 6:43 PM (in response to user_343349849) It will take some tme to source alternate devices and lay up new PCB. In the meantime I would like to test my output circuit. Can anyone suggest a suitable method to achieve the following, the Pins need to be switched in order shown to avoid turning both FETS of a side of the bridge on at same time. I can get the switching sequence ok if I don't use hardware clock but have toruble gettings accurrate ON OFF times in code. (Or shoudl I disable interrupts during transmission and use NOP for delay and then reenable interrupts after? The CLOCK component works real well but I cannot figure out how to ensure port pins switch in sequence below as the cock state or enable function changes. Thanks 125kHz clock (4us HIGH, 4us LOW). Clock goes HIGH PMOS2 = 0 NMOS2 = 0 NMOS1 = 1 PMOS1 = 0 Clock goes LOW PMOS1 = 0 NMOS1 = 0 NMOS2 = 1 PMOS2 = 1 ALL OFF PMOS1 = 0 PMOS2 = 0 NMOS1 = 0 NMOS2 = 0 8. Re: Problem with Digital output timinguser_343349849 Oct 26, 2014 7:04 PM (in response to user_343349849) The strange thing is that if I do not enable the interrupts and attempt to toogle the pins, I cannot get them to toggle faster then 5us ON and 9.8us OFF. The datasheet mentions the pins operating faster than 1MHz in fast mode (I have set fast mode and push pull output). I am using the CY8CKIT Is this related to the "Write" API or is something else affecting the timing? Here is my simple code. #include <project.h> void LF_ONE(void) { PMOS2_Write(0); NMOS2_Write(0); NMOS1_Write(1); PMOS1_Write(1); asm("nop"); } void LF_ZERO(void) { PMOS1_Write(0); NMOS1_Write(0); NMOS2_Write(1); PMOS2_Write(1); asm("nop"); } void SendCarrierBurst(void) { LF_ONE(); LF_ZERO(); } int main() { PMOS1_Write(0); PMOS2_Write(0); NMOS1_Write(0); NMOS2_Write(0); for(;;) { SendCarrierBurst(); } } 9. Re: Problem with Digital output timinguser_14586677 Oct 26, 2014 7:50 PM (in response to user_343349849) You have seen the discussion in this ap note about speeding up pin toggling rate ? AN72382 - Using PSoC® 3 and PSoC 5LP GPIO Pins Regtards, Dana. 10. Re: Problem with Digital output timinguser_343349849 Oct 26, 2014 10:44 PM (in response to user_343349849) Thanks for the info. I redid the code using the fast IO method and was able to get period below 2us now. I disabled the interrupts, bit bashed the port bits (with nop'sto set exact required delay) and now it all works fine.
https://community.cypress.com/message/77471
CC-MAIN-2018-39
refinedweb
1,264
76.15
NAME sys/msg.h - XSI message queue structures SYNOPSIS #include <sys/msg.h> DESCRIPTION The <sys/msg.h> header shall define the following data types through typedef: msgqnum_t Used for the number of messages in the message queue. msglen_t Used for the number of bytes allowed in a message queue. These types shall be unsigned integer types that are able to store values at least as large as a type unsigned short. The <sys/msg.h> header shall define the following constant as a message operation flag: MSG_NOERROR No error if big message. The msqid_ds structure shall contain. The pid_t, time_t, key_t, size_t, and ssize_t types shall be defined as described in <sys/types.h> . The following shall be declared as functions and may also be defined as macros. Function prototypes shall be provided. int msgctl(int, int, struct msqid_ds *); int msgget(key_t, int); ssize_t msgrcv(int, void *, size_t, long, int); int msgsnd(int, const void *, size_t, int); In addition, .
http://manpages.ubuntu.com/manpages/lucid/man7/msg.h.7posix.html
CC-MAIN-2015-40
refinedweb
160
66.44
The following page details how to install PyGame 1.9.1 on a Mac OS X 10.6 (Snow Leopard) running the most recent 2.x version of Python (2.7.1). Here are some alternative methods for installing PyGame on osx: 1. Installed python but using visual python version. This is based on python 2.7.x In my system this installs files under Applications and /Library/Frameworks/Python.framework From this site: First, download and install the pure 32-bit Python, Python-2.7.3 (VPython does not work with Mac 64-bit/32-bit Python, but this 32-bit version of Python works fine on 64-bit Macs) Second, download and install VPython-Mac-Py2.7-5.74 This includes version 1.5.1 of numpy. The download of Python-2.7.3 is from the vpython site and designed to work with vpython. You can use the python installer from. However, you will have to install numpy. You can get numpy from 2. Installed the SDL libraries from dmg SDL 1.2.15 SDL_mixer 1.2.12 SDL_ttf 2.0.11 SDL_image 1.2.12 3. Installed the libjpeg and libpng libraries from dmg libpng v1.5.4 libjpeg 8c 4. Installed Xcode 4.4 from Apple apps. Need to add command line tools. To do this start Xcode and go to Preferences under Xcode menu. Choose the Download tab and select Components. Then install the Command Line Tools. 5. Installed XQuartz. Mountain Lion OS X no longer includes the X11 window system library. This is different from Lion OS X. XQuartz-2.7.2.dmg 6. Downloaded pygame tar file from pygame-1.9.1release.tar.gz Decompressed and extracted to create directory pygame1.9.1release 6. Before compilation of pygame: a. SDL_x header files refer to SDL as <SDL/SDL_yy.h> However, a SDL directory is not under the include directory of SDL (SDL/Headers). To fix this in a simple way: i. Went to directory /Library/Frameworks/SDL.framework/Headers then made a link as follows: ln -s SDL ./ 7. Changed to the pygame directory (normally pygame1.9.1release). Then switched to the super user. However, you can use the sudo command instead. I set the following compilation flags export CC='/usr/bin/gcc' export CFLAGS='-isysroot /Applications/Xcode.app/Contents/Developer/Platforms/MacOSX.platform/Developer/SDKs/MacOSX10.8.sdk -I/opt/X11/include -arch i386' export LDFLAGS='-arch i386' export ARCHFLAGS='-arch i386' You must specify using the original location of X11. The mac does not have a ld.so.conf and apparently the links generated by XQuartz in /usr do not work correctly. 8. Now execute: python config.py This should find the SDL, jpeg, png, and numpy libraries python setup.py build This will build in the directory before installing. It should complete with no errors. then python setup.py install 9. Confirmed that it worked: Out of super user mode and in a terminal shell python and within python import pygame this gave no error and a simple pygame program ran fine. GAMercier 2012-08-05 Set these environment flags before building... export CC='/usr/bin/gcc-4.0' CFLAGS='-isysroot /Developer/SDKs/MacOSX10.5.sdk -arch i386'export LDFLAGS='-arch i386' export ARCHFLAGS='-arch i386' If you get this error: /Developer/SDKs/MacOSX10.4u.sdk/usr/include/stdarg.h:4:25: error: stdarg.h: No such file or directory This link has a solution: first, get the following packages: SDL frameworks Tested with the latest python 2.7.x (2.7.1) framework: Open the DMG files and read the included Readme files to determine what needs to be moved where. The normal process is to go to each opened volume (/Volumes/SDL_something) directory, and run the appropriate following command: sudo cp -R SDL.framework /Library/Frameworks/SDL.framework sudo cp -R SDL_image.framework /Library/Frameworks/SDL_image.framework sudo cp -R SDL_ttf.framework /Library/Frameworks/SDL_ttf.framework sudo cp -R SDL_mixer.framework /Library/Frameworks/SDL_mixer.framework Unpack the tgz tar xvzf pyobjc-1.4.tar.gz cd pyobjc-1.4 python setup.py bdist_mpkg --open tar xvzf Numeric-24.2.tar.gz cd Numeric-24.2 //#if !defined(__sgi) // int gettimeofday(struct timeval *, struct timezone *); //#endif sudo python setup.py install pygame needs universal binaries to build right, but libpng and libjpeg sources don't build as universal binaries automatically... so you probably want to download the pre-built universal binaries for libpng & jpeg from ethan.tira-thompson.com. However you probably don't want to build against the dylib's for libpng and libjpeg if you are making an installer or will be using py2app, because then your distribution's imageext.so will not work without your clients also installing the libjpeg and libpng pacakges.Since the package above installs both dylib and .a versions to /usr/local/lib and the build process will use the dylib versions by default, this means you probably want to delete/rename the dylib versions Install the package sudo rm /usr/local/lib/libpng.dylib sudo rm /usr/local/lib/libjpeg.dylib Skip this step if you downloaded the pre-built universal binaries for libpng & jpeg tar xvzf jpegsrc.v6b.tar.gz cd jpeg-6b ./configure make sudo make install-lib Unpack the tbz tar xvjf libpng-1.2.16.tar.bz2 cd libpng-1.2.16 Build and install ./configure make sudo make install Unpack the tgz tar xvzf pygame-1.8.0rc3.tar.gz cd pygame-1.8.0rc3 Configure python config.py) python setup.py build
http://pygame.org/wiki/MacCompile
CC-MAIN-2015-35
refinedweb
928
62.75
Description Ink Out effect time is not getting set correctly. What MCU/Processor/Board and compiler are you using? Simulator What do you experience? So, I am writing unit tests in my serialization and ink out time is not being set. What do you expect? Ink out time being set correctly. Code to reproduce Bug is here, line 340 in lv_btn.c: uint16_t lv_btn_get_ink_out_time(const lv_obj_t * btn) { #if LV_USE_ANIMATION && LV_BTN_INK_EFFECT lv_btn_ext_t * ext = lv_obj_get_ext_attr(btn); return ext->ink_in_time; <<<<<<< ERROR #else (void)btn; /*Unused*/ return 0; #endif } ink_in_time is being returned when it should be ink in Use the ```c and ``` tags to format your code: /*You code here*/ Screenshot and/or video If possible, add screenshots and/or videos about the current issue.
https://forum.lvgl.io/t/bug-in-button-ink-out-time/851/4
CC-MAIN-2020-29
refinedweb
122
57.16
Deprecated report.txt foo bar -v and --report are both options. Assuming that --report takes one argument, report.txt is an option argument. foo and bar are positional arguments. are find, tar, and dd—all of which are mutant oddballs that have been rightly criticized for their non-standard syntax and confusing interfaces.) Lots of people want their programs to have “required options”. Think about it. If it’s required, then it’s not optional! If there is a piece of information that your program absolutely requires in order to run successfully, that’s what positional arguments are for. options at all: cp SOURCE DEST cp SOURCE ... DEST-DIR You can get pretty far with just that. Most cp implementations provide a bunch of options to tweak exactly how the files are copied: you can preserve mode and modification time, avoid following symlinks, ask before clobbering existing files, etc. But none of this distracts from the core mission of cp, which is to copy either one file to another, or several files to another directory.. While optparse is quite flexible and powerful, it’s also straightforward to use in most cases. This section covers the code patterns that are common to any optparse-based program. First, you need to import the OptionParser class; then, early in the main program, create an OptionParser instance: from optparse import OptionParser [...] parser = OptionParser() Then you can start defining options. The basic syntax is: parser.add_option(opt_str, ..., attr=value, ...) Each option has one or more option strings, such as -f or --file, and several option attributes that tell optparse what to expect and what to do when it encounters that option on the command line. Typically, each option will have one short option string and one long option string, e.g.: parser.add_option("-f", "--file", ...) You’re free to define as many short option strings and as many long option strings as you like (including zero), as long as there is at least one option string overall. The option strings passed to OptionParser.add_option() are effectively labels for the option defined by that call. For brevity, we will frequently refer to encountering an option on the command line; in reality, optparse encounters option strings and looks up options from them. Once all of your options are defined, instruct optparse to parse your program’s command line: (options, args) = parser.parse_args() (If you like, you can pass a custom argument list to parse_args(), but that’s rarely necessary: by default it uses sys.argv[1:].) parse_args() returns two values:. When dealing with many options, it is convenient to group these options for better help output. An OptionParser can contain several option groups, each of which can contain several options. An option group is obtained using the class OptionGroup: where: Return the OptionGroup to which the short or long option string opt_str (e.g. '-o' or '--option') belongs. If there’s no such OptionGroup, return None.: Print the version message for the current program (self.version) to file (default stdout). As with print_usage(), any occurrence of %prog in self.version is replaced with the name of the current program. Does nothing if self.version is empty or undefined. Same as print_version() but returns the version string instead of printing it.. The OptionParser constructor has no required arguments, but a number of optional keyword arguments. You should always pass them as keyword arguments, i.e. do not rely on the order in which the arguments are declared. There are several ways to populate the parser with options. The preferred way is by using OptionParser.add_option(), as shown in section Tutorial. add_option() can be called in one of two ways: The other alternative is to pass a list of pre-constructed Option instances to the OptionParser constructor, as in: option_list = [ make_option("-f", "--filename", action="store", type="string", dest="filename"), make_option("-q", "--quiet", action="store_false", dest="verbose"), ] parser = OptionParser(option_list=option_list) (make_option() is a factory function for creating Option instances; currently it is an alias for the Option constructor. A future version of optparse may split Option into several classes, and make_option() will pick the right class to instantiate. Do not instantiate Option directly.) Each option’s action determines what optparse does when it encounters this option on the command-line. The standard option actions hard-coded into optparse are: (If you don’t supply an action, the default is "store". For this action, you may also supply type and dest option attributes; see Standard option actions.). The following option attributes may be passed as keyword arguments to OptionParser.add_option(). If you pass an option attribute that is not relevant to a particular option, or fail to pass a required option attribute, optparse raises OptionError. (default: "store") Determines optparse‘s behaviour when this option is seen on the command line; the available options are documented here. (default: "string") The argument type expected by this option (e.g., "string" or "int"); the available option types are documented here. (default: derived from option strings) If the option’s action implies writing or modifying a value somewhere, this tells optparse where to write it: dest names an attribute of the options object that optparse builds as it parses the command line. The value to use for this option’s destination if the option is not seen on the command line. See also OptionParser.set_defaults(). (default: 1) How many arguments of type type should be consumed when this option is seen. If > 1, optparse will store a tuple of values to dest. For actions that store a constant value, the constant value to store. For options of type "choice", the list of strings the user may choose from. For options with action "callback", the callable to call when this option is seen. See section Option Callbacks for detail on the arguments passed to the callable. optparse.SUPPRESS_HELP. (default: derived from option strings) Stand-in for the option argument(s) to use when printing help text. See section Tutorial for an example.. Set parsing to not stop on the first non-option, allowing interspersing switches with command arguments. This is the default behavior. Returns the Option instance with the option string opt_str, or None if no options have that option string. Return true if the OptionParser has an option with option string opt_str (e.g., -q or --verbose). If the OptionParser has an option corresponding to opt_str, that option is removed. If that option provided any other option strings, all of those option strings become invalid. If opt_str does not occur in any option belonging to this OptionParser, raises ValueError. If you’re not careful, it’s easy to define options with conflicting option strings: parser.add_option("-n", "--dry-run", ...) [...] parser.add_option("-n", "--noisy", ...) (This is particularly true if you’ve defined your own OptionParser subclass with some standard options.) Every time you add an option, optparse checks for conflicts with existing options. If it finds any, it invokes the current conflict-handling mechanism. You can set the conflict-handling mechanism either in the constructor: parser = OptionParser(..., conflict_handler=handler) or with a separate call: parser.set_conflict_handler(handler) The available conflict handlers are: - "error" (default) - assume option conflicts are a programming error and raise OptionConflictError - "resolve" - resolve option conflicts intelligently (see below) As an example, let’s define an OptionParser that resolves conflicts intelligently and add conflicting options to it: parser = OptionParser(conflict_handler="resolve") parser.add_option("-n", "--dry-run", ..., help="do no harm") parser.add_option("-n", "--noisy", ..., help="be noisy") At this point, optparse detects that a previously-added option is already using the -n option string. Since conflict_handler is "resolve", it resolves the situation by removing -n from the earlier option’s list of option strings. Now --dry-run is the only way for the user to activate that option. If the user asks for help, the help message will reflect that: Options: --dry-run do no harm [...] -n, --noisy be noisy It’s possible to whittle away the option strings for a previously-added option until there are none left, and the user has no way of invoking that option from the command-line. In that case, optparse removes that option completely, so it doesn’t show up in help text or anywhere else. Carrying on with our existing OptionParser: parser.add_option("--dry-run", ..., help="new dry-run option") At this point, the original -n/--dry-run option is no longer accessible, so optparse removes it, leaving this help text: Options: [...] -n, --noisy be noisy --dry-run new dry-run option. OptionParser supports several other public methods: Set the usage string according to the rules described above for the usage constructor keyword argument. Passing None sets the default usage string; use optparse.SUPPRESS_USAGE to suppress a usage message. Print the usage message for the current program (self.usage) to file (default stdout). Any occurrence of the string %prog in self.usage is replaced with the name of the current program. Does nothing if self.usage is empty or not defined. Same as print_usage() but returns the usage string instead of printing it. OptionParser: types: TYPES and TYPE_CHECKER. A tuple of type names; in your subclass, simply define a new tuple TYPES that builds on the standard one. A dictionary mapping type names to type-checking functions. A type-checking function has the following signature: def check_mytype(option, opt, value) where option is an Option instance, opt is an option string (e.g., -f), and value parameter. Your type-checking function should raise OptionValueError if it encounters any problems. OptionValueError takes a single string argument, which is passed as-is to OptionParser‘s‘s): All actions must be listed in ACTIONS. “store” actions are additionally listed here. “typed” actions are additionally listed here. Actions that always take a type (i.e. whose options always take a value) are additionally listed here. The only effect of this is that optparse assigns the default type, "string", to options with no explicit type whose action is listed in ALWAYS_TYPED_ACTIONS. In order to actually implement your new action, you must override Option’s take_action() method and add a case that recognizes your action. For example, let’s add an "extend" action. This is similar to the standard "append" action, but instead of taking a single value from the command-line and appending it to an existing list, "extend" will take multiple values in a single comma-delimited string, and extend an existing list with them. That is, if --names is an "extend" option of type "string", the command line --names=foo,bar --names blah --names ding,dong would result in a list ["foo", "bar", "blah", "ding", "dong"] Again we define a subclass of Option: class MyOption(Option): ACTIONS = Option.ACTIONS + ("extend",) STORE_ACTIONS = Option.STORE_ACTIONS + ("extend",) TYPED_ACTIONS = Option.TYPED_ACTIONS + ("extend",) is an instance of the optparse_parser.Values class, which provides the very useful ensure_value() method. ensure_value() is essentially getattr() with a safety valve; it is called as values.ensure_value(attr, value) If the attr attribute of values doesn’t exist or is None, then ensure_value() first sets it to value, and then returns ‘value. This is very handy for actions like "extend", "append", and "count", all of which accumulate data in a variable and expect that variable to be of a certain type (a list for the first two, an integer for the latter). Using ensure_value() means that scripts using your action don’t have to worry about setting a default value for the option destinations in question; they can just leave the default as None and ensure_value() will take care of getting it right when it’s needed.
http://docs.python.org/3.3/library/optparse.html
CC-MAIN-2014-10
refinedweb
1,929
56.96
I'm using a Polyline from google API to create a route on the map. I know that I can only use .svg formatted image for my icon. I created a custom icon that is saved as .svg and I put it into our public folder. When I tried to access it as public/img.svg or /img.svg it's not working. Here is googles example I'm changing the path: google.maps.SymbolPath.FORWARD_CLOSED_ARROW to my custom path but it's not working. .svg public/img.svg /img.svg path: google.maps.SymbolPath.FORWARD_CLOSED_ARROW Can you please provide the snippet of code you're having trouble implementing? To access the public folder, you would either have to use "%PUBLIC_URL%/your_files" or process.env.PUBLIC_URL. "%PUBLIC_URL%/your_files" process.env.PUBLIC_URL For example: import SomeSymbol from '%PUBLIC_URL%/SomeSymbol.svg'; <img src={process.env.PUBLIC_URL + '/SomeSymbol.svg'} /> import SomeSymbol from '%PUBLIC_URL%/SomeSymbol.svg'; <img src={process.env.PUBLIC_URL + '/SomeSymbol.svg'} /> However, I'd encourage you to create a folder inside the src folder and import your assets there. More info:
http://lovelace.augustana.edu/q2a/index.php/5159/custom-icon-access-reading
CC-MAIN-2022-33
refinedweb
179
55
NAME | SYNOPSIS | DESCRIPTION | RETURN VALUES | ERRORS | USAGE | ATTRIBUTES | SEE ALSO #include <stdio.h>FILE *fopen(const char *filename, const char *mode); The fopen() function opens the file whose pathname is the string pointed to by filename, and associates a stream with it. The argument mode points to a string beginning with one of the following sequences:)). Opening a file with read mode (r as the first character in the mode argument) fails if the file does not exist or cannot be read.mixed determined not to refer to an interactive device. The error and end-of-file indicators for the stream are cleared. If mode is w, is w or w+ and the file did previously exist, upon successful completion, fopen() will mark for update the st_ctime and st_mtime fields of the file. The fopen() function will allocate a file descriptor as open(2)open() function
http://docs.oracle.com/cd/E19683-01/816-0213/6m6ne37va/index.html
CC-MAIN-2016-22
refinedweb
146
62.38
Entertain humans and pets alike with paper planes launched by your voice. In this project we will take a look at a combination of some old and new technologies to achieve something even older, entertainment. Maybe I won't have the timeline right but the technologies used in this projects are listed from the oldest to the newest below: - Paper Planes - Paper Plane Launcher - Arduino - Alexa Smart Home Skill Demo This is me launching a paper plane using the technology stack previously mentioned Paper Planes I'm using this simple model for the paper plane because it's sleek, easy to make and flies very well. I'm using paper rectangles of 6.5 x 8 cm and the instructions are shown below (taken from here) Here's an animation showing how the plane is built Paper Plane Launcher The Roof, the Bump and the Base To launch a paper plane we use two motors to move two wheels separated by a small distance. The body of the plane passes between the wheels and it gets pushed as if we have launched the plane ourselves. I built something similar to what is shown in this video and added some structure so that I didn't have to use my hands to make the planes take off. The two components are a small base and a roof with a "bump". Both components support the plane in position while it passes through the motors to get the push required to make it fly. I made a video describing the structure and you can see it in action below To create the whole structure I used cardboard and stick tape only. I started adding small rectangles of cardboard between the motors until I found the right number that separated them just enough to allow the planes to pass through while having a good contact with the wheels. Once the two motors were joint together, I started adding the roof and the base following a similar principleho using the same small pieces of cardboard and stick tape. See the video of me talking about the motors' front view. The Platform Besides the motors we also have a platform so that we can launch several planes without using our hands. The platform holds the plains in queue to be launched. It is connected to the motors by stick tape and the vibration of the motors plus the angle of the platform help the planes to move down so that the motors can push them through. I also made a video describing the Platform and its workings. To create the Platform you need two sheets of carton or cardboard. You don't need exact measures but make sure to start with big pieces so that you can trim them down until you find what works better. The final shape of the Platform mimicking the plane's shape was the result of trial and error as well as the height of the supporting structure. Now that we have a whole take-off platform let's wire the motors to make our planes fly. Arduino The Circuits I chose the Arduino MKR1000 because it already has WiFi support and good specs compared to other editions. The electronics of the whole system was taken from the Adafruit site where they show how to control DC Motors with an Arduino Uno R3. My circuit replicates almost exactly theirs differing in the amount of motors. In the code I removed the user input and variable speed features because the motors where going to be started by Alexa and they were going to run at full speed always. Here's the layout, schematics and PCB of the circuit I shared in Fritzing. There is an important warning on the Arduino Store Website. That's why I power the circuit using the VCC pin that outputs 3.3v. Maybe You can use the 5V pin but I lack electronics knowledge as well as tools and I didn't want to burn my Arduino. I tested it several times, left the motors on for several minutes and everything is working fine, however bear in mind that I didn't take any precautions to protect the Arduino. The Code The software side of the Arduino is quite simple as it acts like a remote switch turning on or off based on an incoming message. It can be seen in here void messageReceived(String &topic, String &payload) { Serial.println("Received " + topic + ": " + payload); // Set value to 1 if we receive "ON". Set it to 0 otherwise int value = payload == "ON" ? HIGH : LOW; // Turn on/off built-in led digitalWrite(led, value); // Turn on/off motors int speed = 255 * value; analogWrite(motorPin, speed); } The other part is the configuration to connect to the WiFi and to the MQTT server. You can take a look at the full code in here. To send/receive messages I used the shiftr.io service that provided me with the infrastructure and the code to listen to messages broadcasted to a specific topic. There you have namespaces which are like isolated projects. Inside a namespace you have tokens (key/secret pairs) that are used to authenticate against the mqtt host. Here's how you connect to your namespace client.connect(shiftrClientId, shiftrUsername, shiftrPassword) The clientId is the name of this node in the mqtt network, in the case of the Arduino is "MKR1000" while for the lambda is "lambda". (More about the lambda later) There is also a topic in the configuration of the mqtt connection. The topic is used to send and receive messages under a namespace; the Arduino subscribes to the /launch topic so that it can receive a notification to control the motors. client.subscribe(shiftrTopic); The whole configuration is shown below // MQTT Configuration const char shiftrHostname[] = "broker.shiftr.io"; const char shiftrUsername[] = SECRET_SHIFTR_USERNAME; const char shiftrPassword[] = SECRET_SHIFTR_PASSWORD; const char shiftrClientId[] = "MKR1000"; const char shiftrTopic[] = "/launcher"; And this is what we see in the monitor view of the arduino Ok, so that's all from the Arduino side. Now let's take a look to the Alexa side Alexa As you may already guess we use Alexa to turn on/off our Paper Plane Launcher. Alexa can't place the planes in position to take-off so that duty is on your side; once it's done, you can simply say Alexa, turn on Paper Plane Launcher And after a few seconds your planes will be feeling the air in their wings. Take a look at this 1 min demo where I show how I turn on and off the Paper Plane Launcher with my voice So, what do you need to do in the Alexa side? You need to create your developer account, create the smart home skill and create a lambda to handle the directives. The following steps will help you to build you own back-end. - 2. Make sure you complete the 5 steps before developing a Smart Home Skill - 3. Read about Managing Device Discovery for Your Alexa Smart Home Skill but do not use that code. The reason is that the whole payload format changed from the v2 to v3 and that code is out of date. You can take a look at the code I wrote to handle v3 directives. - 4. Add tests events to the lambda. Again, do not use the code in the article because it is outdated. You can find the structure for the v3 payload in this repo. - 5. Test your skill. I wish to tell you it was easy, but it wasn't. Those 5 steps are the result of many hours figuring out how to make everything work together. I think that I spent most of my time trying to make the outdated code work; you won't have that issue if you clone my repository and start from there. Let's dive into the lambda code to better understand how it works. Lambda The whole code is divided in 3 files. The handler (index.js), which receives the event, the responseBuilder which builds the object that Alexa expects as a response and the broadcaster to tell the Arduino through mqtt/shiftr.io that it's time to be turned on/off. We create a response using the responseBuilder and succeed immediately if the event is not a PowerControl event. The reason is that in the case of Discovery or ReportState, we don't need to interact with the Arduino and we can just simply reply to Alexa right away. const response = responseBuilder.buildResponseToEvent(event) if (!responseBuilder.isPowerControlRequest(event)) { return context.succeed(response) } If, on the other hand, the event is a PowerControl one we need to send the message to the Arduino so that it can turn on/off. This is done with the help of the broadcaster as shown below const value = responseBuilder.getPowerValue(event) broadcaster.send(value) .then(() => context.succeed(response)) We also inform Alexa about errors in case we have one const type = responseBuilder.ERROR_TYPES.INTERNAL_ERROR const response = responseBuilder.buildErrorResponse(event, type, error) context.succeed(response) In here we create the different responses based on the incoming event. To do so, I first identify the request type and build one reply or another. This builder can create Discovery, ReportState, Error, and PowerControl responses. They all work the same with small differences in structure. Don't forget to take a look to the tests. It has a simple task: send a message to a topic. The broadcaster connects to the host and publishes a value to the given topic; in that way we tell the Arduino to start/stop the motors of our launcher. The username and password are the same we use in the Arduino but instead of having a secret file we configure environment variables that can be read like this const { SHIFTR_USERNAME, SHIFTR_PASSWORD } = process.env and then used like this var client = mqtt.connect(`mqtt://${SHIFTR_USERNAME}:${SHIFTR_PASSWORD}@${HOST}`, { clientId: CLIENT_ID }) To configure environment variables in your lambda scroll down to that section and set the values Then, as soon as the message is sent we disconnect from the server because there's nothing else to do and because otherwise the lambda will keep on running until it's execution time has finished. Not disconnecting could cause unnecessary costs. client.publish(TOPIC, value, error => { client.end() ... }) If you want to test the broadcaster locally, you need to set the environment variables in the npm script like shown below ... "test": "SHIFTR_USERNAME=123 SHIFTR_PASSWORD=abc jest" ... I also included some extra scripts that will help you with your lambda development. Those were added in the package.json You can deploy your lambda just by running npm run deploy To do so, you need first to have configured the aws cli and also you need to change the --function-name in the update-lambda npm script. Running the deploy command will copy all files in the src folder and the node_modules folder to a build folder. It will then zip the build folder and execute the aws lambda command that will update the function's code. Since the name of the zip file is build.zip you need to make sure the handler in the lambda is set to build.handler as shown below. And that covers our lambda code to handle Alexa directives. Diagrams Let's see some diagrams to have a better understanding of how the pieces work together VUI Diagram The Voice-User-Interaction in this case is very simple because we are using the Smart Home Skill to control the paper plane launcher as a switch Cloud Components The communication between the parts happens as described in the image below Closing thoughts Building a semi-automated voice-activated paper plane launcher had a lot of obstacles in different fields. We started with our bare hands making small planes out paper, cutting and sticking together pieces of cardboard to build a launcher to then connect its motors to an unprotected circuit controlled by an Arduino that listens to messages in a /launcher mqtt topic where a lambda function posts values each time Alexa hears you saying the right words. (Phew...) The mixture of feelings is rich in variety but the prominent one is satisfaction. There's also a lot of gratitude and I want to create a section to give thanks. Special Thanks I created this project to participate in The Alexa and Arduino Smart Home Challenge, however the first idea was a completely different one and there were different people helping me with each idea. - For the Paper Plane Launcher Christine Gaulis: For the hours of conversations and support in this project in both ideas. Ivan Mottier from ZigoBot: Who took the time to listen to my broken french and provided me with the right components to build the paper plane launcher. - For the sound detector (asking Alexa if the water tap was open) From the Hackuarium Lab: Rachel Aronoff: For connecting me with the lab and some of its members. Also for showing me her project about the DNA in the cells from the cheeks and her fluor detector. Vanessa Lorenzo: For her help while we were in the lab, for showing me her own experiments and telling me about the music making using microorganisms. Luc Patiny: For his time explaining me the different ways I could process the sound in order to find the information I was looking for. Also for sharing with me his experience in Cali, Colombia, my own city. He also shared with me very useful resources like C.H.I.P and the ml.js libraries Christian Zufferey: Who by his own initiative contacted me through Vanessa Lorenzo to propose that I use a method he used back in 2007 to compare websites so that I could identify objects through the sound. And finally, from this repo in github to Alexandre Storelli: Who helped me by answering to my questions about audio signatures. ---------------------- I hope you can learn something from what I shared here and have a lot of fun while doing it. If you build a Paper Plane Launcher or something similar please let me know.
https://www.hackster.io/jonathanmv/alexa-launch-a-paper-plane-acf175
CC-MAIN-2018-47
refinedweb
2,361
68.81
12 January 2010 12:03 [Source: ICIS news] LONDON (ICIS news)--British Polythene Industries (BPI) expects its financial results for 2009 to be at the top end of current expectations, the company said on Tuesday, adding, however, that it will close another UK production facility in the face of lower volume demand. The group had continued to make positive progress since the release of an interim statement in November, it said. Net borrowings on 31 December were lower than previously expected and remained around similar levels to June 2009, it added. BPI reported a net profit of £6.1m (€6.8m/$9.8m) for the first half of 2009, up from £5.2m for the corresponding period of 2008 and the full-year net result of £2.8m. First-half 2009 sales were £231.4m from a comparable £265.9m. Net borrowing had been reduced to £55m at the end of June 2009 from £76m at the end of December 2008. BPI has seen volume demand fall and rise again through the downturn and its raw material prices followed suit. It said in November, however, that August, September and October had been better than the same months in 2008. The company would close its ?xml:namespace> BPI earlier had closed other The company said at the half year that trading overall had remained challenging. The second-half results would, it said, depend heavily on the raw material price cycle and whether Christmas retail spending was up to expectations. ($1 = £0.62/€0.69) For more on polyethyl
http://www.icis.com/Articles/2010/01/12/9324778/british-polythene-industries-expects-stronger-2009-results.html
CC-MAIN-2015-18
refinedweb
257
63.7
XML::Atom::Entry - Atom entry use XML::Atom::Entry; my $entry = XML::Atom::Entry->new; $entry->title('My Post'); $entry->content('The content of my post.'); my $xml = $entry->as_xml; my $dc = XML::Atom::Namespace->new(dc => ''); $entry->set($dc, 'subject', 'Food & Drink'); Creates a new entry object, and if $stream is supplied, fills it with the data specified by $stream. Automatically handles autodiscovery if $stream is a URI (see below). Returns the new XML::Atom::Entry object. On failure, returns undef. $stream can be any one of the following: This is treated as the XML body of the entry. This is treated as the name of a file containing the entry XML. This is treated as an open filehandle from which the entry XML can be read. Returns the content of the entry. If $content is given, sets the content of the entry. Automatically handles all necessary escaping.'); $entry->author($author); If called in scalar context, returns an XML::Atom::Link object corresponding to the first <link> tag found in the entry. If called in list context, returns a list of XML::Atom::Link objects corresponding to all of the <link> tags found in the entry. Adds the link $link, which must be an XML::Atom::Link object, to the entry as a new <link> tag. For example: my $link = XML::Atom::Link->new; $link->type('text/html'); $link->rel('alternate'); $link->href(''); $entry->add_link($link); Given an XML::Atom::Namespace element $ns and an element name $element, retrieves the value for the element in that namespace. This is useful for retrieving the value of elements not in the main Atom namespace, like categories. For example: my $dc = XML::Atom::Namespace->new(dc => ''); my $subj = $entry->get($dc, 'subject'); Just like $entry->get, but if there are multiple instances of the element $element in the namespace $ns, returns all of them. get will return only the first. Please see the XML::Atom manpage for author, copyright, and license information.
http://search.cpan.org/dist/XML-Atom/lib/XML/Atom/Entry.pm
CC-MAIN-2015-06
refinedweb
330
56.45