typeweight 2.0.0
typeweight: ^2.0.0
Flutter plugin to help you use more readable and clearer font weight definitions.
TypeWeight #
Since I am not really good at remembering font weights by their numbers, and, to be honest, even if that weren't a problem, human-readable names are better in my opinion. They are also better in communication.
Usage #
Install typeweight as a dependency in your project, then use it as a replacement for FontWeight.
Example #
import 'package:flutter/material.dart';
import 'package:typeweight/typeweight.dart';

void main() => runApp(MyApp());

class MyApp extends StatelessWidget {
  @override
  Widget build(BuildContext context) {
    return MaterialApp(
      title: 'TypeWeight Demo',
      home: Scaffold(
        body: Center(
          child: Text(
            'TypeWeight',
            style: TextStyle(
              fontWeight: TypeWeight.black,
            ),
          ),
        ),
      ),
    );
  }
}
License #
MIT © Frenco Jobs
Source: https://pub.dev/packages/typeweight/versions/2.0.0
The L. Frank Baum books
Glinda of Oz
Books inspired by Oz
Was
Visitors From Oz
Wicked
Oz Squad (comic book series)
Films
The Wizard of Oz
The Wiz
Return to Oz
Wild At Heart
The Tales of the Wizard of Oz (animated TV series)
The Wizard of Mars
Noder’s Notes - December 2008: I’m slowly noding the Oz books by L. Frank Baum.
Noded and linked: …, and now The Tin Woodman of Oz.
A few other Baum books -- The Life and Adventures of Santa Claus, A Kidnapped Santa Claus, The Sea Fairies and Sky Island -- are also noded.
In process: The next Oz Book. I'm way behind, and at this rate I won't finish the books until the end of the year decade...
All of Baum's Oz books are in the public domain. Publishing them here does not violate copyright law.
The original texts for these books were downloaded from Project Gutenberg. A few spelling corrections were made, as well as reformatting for E2. Some chapters have duplicate titles in other books, and have been namespaced to avoid confusion.
Hardlinks made with no nodes created may mean that they are coming. Requests for hardlinks and character nodes encouraged. /msg gnarl with any comments. Enjoy!
Ruth Plumly Thompson
Source: https://everything2.com/title/The+Great+OZ+Node
I would like to use TypeScript.NET in my Angular application. I am a newbie in TypeScript, and maybe the root of the problem is simple.
For example I would like to use StringBuilder.
First I installed TypeScript.NET via NuGet.
Then I added references to index.html:
<script src="source/System/Text/StringBuilder.js"></script>
<script src="source/System/Text/Utility.js"></script>
Here I am confused; I don't use any module system (e.g. CommonJS, AMD, etc.).
The TypeScript.NET NuGet package adds a dist folder to my project, which probably contains distribution builds for AMD, CommonJS, etc.; hence I reference the files from the source folder.
Now I want to use StringBuilder in my internal module.
module App.Company{
//in internal module
import IVersion = App.IVersion;
import IError = App.IError;
//TypeScript.NET
import StringBuilder from "../../../../source/system/text/stringbuilder";
}
Here I get a compilation error:
Import declarations in a namespace cannot reference a module
What is the solution?
Second, how can I simply reference all types from TypeScript.NET?

Source: http://www.dlxedu.com/askdetail/3/f0e672076de2b4c4629d865d90da2969.html
Introduction
We often need to pass data between Activities of an Android app. An easy way to do this is with
Intent.putExtra(), but if you have a lot of structured data to pass, Parcelable may be a better solution. In this post I'll show you how Parcelable makes it easy to serialize classes for sharing between Activities.
Why Parcelable?
Parcelable is an Android-only Interface. It allows developers to serialize a class so its properties are easily transferred from one activity to another. This is done by reading and writing of objects from Parcels, which can contain flattened data inside message containers.
Creating the Main Activity and Layout
Our main Activity will handle the collection of the book details. Let's start by setting up our
onCreate method.
package com.tutsplus.code.android.bookparcel;

import android.support.v7.app.AppCompatActivity;
import android.os.Bundle;

public class MainActivity extends AppCompatActivity {
    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        setContentView(R.layout.activity_main);
        //...
    }
}
Next, open activity_main.xml to set up the view's layout and appearance. You will need two text entry fields and a button for submission.
It should look like this:
<?xml version="1.0" encoding="utf-8"?>
<!-- Attribute values below were lost in extraction; the layout_* values are
     typical defaults, while the ids match the Activity code. -->
<LinearLayout xmlns:android="http://schemas.android.com/apk/res/android"
    android:layout_width="match_parent"
    android:layout_height="match_parent"
    android:orientation="vertical">

    <EditText
        android:id="@+id/title"
        android:layout_width="match_parent"
        android:layout_height="wrap_content"
        android:hint="Title" />

    <EditText
        android:id="@+id/author"
        android:layout_width="match_parent"
        android:layout_height="wrap_content"
        android:hint="Author" />

    <Button
        android:id="@+id/submit_button"
        android:layout_width="wrap_content"
        android:layout_height="wrap_content"
        android:text="Submit" />

</LinearLayout>
Now open your main Activity and link the view fields to your activity. You'll have to do it inside your onCreate() method, like this:
//...
final EditText mBkTitle = (EditText) findViewById(R.id.title);
final EditText mBkAuthor = (EditText) findViewById(R.id.author);
Button button = (Button) findViewById(R.id.submit_button);
To finish off MainActivity, you need to set up an onClickListener. This will be called whenever the Submit button is pressed. When that happens, the details entered are to be collected and sent to the next activity.
button.setOnClickListener(new View.OnClickListener() {
    @Override
    public void onClick(View v) {
        Book book = new Book(mBkTitle.getText().toString(), mBkAuthor.getText().toString());
        Intent intent = new Intent(MainActivity.this, BookActivity.class);
        intent.putExtra("Book", book);
        startActivity(intent);
    }
});
Here, you add an onClickListener to the Button instance you retrieved from your Activity layout. This code will be run whenever the Submit button is clicked.
Note that we simply pass the Book instance to putExtra(). As we'll see later, Parcelable takes care of flattening the book data into the Parcel so it can be passed via the Intent.
Now that the main Activity is complete, we need to create our BookActivity as well as the Book class to hold book info.
Create the Book Class
Let's create a Book class to hold info about each book.
public class Book implements Parcelable {
    // book basics
    private String title;
    private String author;
}
This class needs to implement Parcelable. This will enable the passing of the data from MainActivity to BookActivity.
We'll also add some standard getter functions and a constructor to quickly create an instance of the class.
// main constructor
public Book(String title, String author) {
    this.title = title;
    this.author = author;
}

// getters
public String getTitle() {
    return title;
}

public String getAuthor() {
    return author;
}
Write to the Parcel
The writeToParcel method is where you add all your class data to the parcel. This is done in preparation for transfer. This method will be passed a Parcel instance, which has a number of write methods that you can use to add each field to the parcel. Watch out: the order in which you write the data is important!
Here is how you do it.
// write object values to parcel for storage
public void writeToParcel(Parcel dest, int flags) {
    dest.writeString(title);
    dest.writeString(author);
}
Read Data Back From the Parcel
Just as the write method handles writing the data to be transferred, the constructor is used to read the transferred data back from the serialized Parcel. This method will be called on the receiving activity to collect the data.
Here's how it should look:
public Book(Parcel parcel) {
    title = parcel.readString();
    author = parcel.readString();
}
The receiving Activity will retrieve the Parcel and call the getParcelableExtra method to start the process of collecting the data. That will trigger the constructor we defined above, which will deserialize the data and create a new Book instance.
Parcelable.Creator
To complete your Parcelable class, you need to create a Parcelable.Creator instance and assign it to the CREATOR field. The Parcelable API will look for this field when it needs to deserialize an instance of your class that has been passed to another component.
public static final Parcelable.Creator<Book> CREATOR = new Parcelable.Creator<Book>() {
    @Override
    public Book createFromParcel(Parcel parcel) {
        return new Book(parcel);
    }

    @Override
    public Book[] newArray(int size) {
        // Use the requested size here; returning "new Book[0]" (as in the
        // original listing) is a bug.
        return new Book[size];
    }
};

// Also required by the Parcelable interface; 0 means the parcel holds no
// special objects such as file descriptors.
@Override
public int describeContents() {
    return 0;
}
This binds everything together. Its job is simple: it generates instances of your Parcelable class from a Parcel using the parcel data provided. The creator calls the constructor you defined above, passing it a Parcel object, and the constructor initializes the class attributes.
If your Parcelable class will have child classes, you'll need to take some extra care with the describeContents() method. This will let you identify the specific child class that should be created by the Parcelable.Creator. You can read more about how this works on Stack Overflow.
Book Activity and Layout
Now we can complete our app with the book Activity. Go ahead and create a new empty activity called BookActivity. Make the layout look like what I have below.
<?xml version="1.0" encoding="utf-8"?>
<!-- Attribute values below were lost in extraction; the ids match the
     Activity code. -->
<LinearLayout xmlns:android="http://schemas.android.com/apk/res/android"
    android:layout_width="match_parent"
    android:layout_height="match_parent"
    android:orientation="vertical">

    <TextView
        android:id="@+id/bk_title"
        android:layout_width="wrap_content"
        android:layout_height="wrap_content" />

    <TextView
        android:id="@+id/bk_author"
        android:layout_width="wrap_content"
        android:layout_height="wrap_content" />

</LinearLayout>
In activity_book.xml, you only need two TextView widgets, which will be used to show the title and author of the books.
Now let's set up our activity. Your activity should already look like this:
package com.tutsplus.code.android.bookparcel;

import android.support.v7.app.AppCompatActivity;
import android.os.Bundle;

public class BookActivity extends AppCompatActivity {
    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        setContentView(R.layout.activity_book);
    }
}
In this activity, you want to receive the data that was passed from your main Activity and display it in your views. Thus you will retrieve the instances of your TextViews using the ids set in your layout.
TextView mBkTitle = (TextView) findViewById(R.id.bk_title);
TextView mBkAuthor = (TextView) findViewById(R.id.bk_author);
Then, of course, you'll call getIntent() because you will be retrieving data in this activity. The data you will be retrieving is collected from the Book class using getParcelableExtra(). Next, you set the values of the TextViews to the data you collected. Here is how it is done.
Intent intent = getIntent();
Book book = intent.getParcelableExtra("Book");

mBkTitle.setText("Title:" + book.getTitle());
mBkAuthor.setText("Author:" + book.getAuthor());
Build your application and launch it, and you should see the little beauty you have just built.
Conclusion
In this post, you've seen how you can easily move objects from one activity to another. You no longer have to retrieve each data field you passed to the Intent object separately, and you don't have to remember the name that you passed each field under. Not only that, but this process is faster than Java's built-in serialization.
In this tutorial, you have learned how to use Parcelable to pass data from one activity to another. You can dive deeper into Parcelable by checking the official documentation.
In the meantime, check out some of our other posts about Android app development!
Source: https://code.tutsplus.com/tutorials/how-to-pass-data-between-activities-with-android-parcelable--cms-29559
Is there a better way to give elements knowledge of their parents and XPath in xml.etree.ElementTree?
I have the following code which works:
import xml.etree.ElementTree as etree

def get_path(self):
    parent = ''
    path = self.tag
    sibs = self.parent.findall(self.tag)
    if len(sibs) > 1:
        path = path + '[%s]' % (sibs.index(self) + 1)
    current_node = self
    while True:
        parent = current_node.parent
        if not parent:
            break
        ptag = parent.tag
        path = ptag + '/' + path
        current_node = parent
    return path

etree._Element.get_path = get_path
etree._Element.parent = None

class XmlDoc(object):
    def __init__(self):
        self.root = etree.Element('root')
        self.doc = etree.ElementTree(self.root)

    def SubElement(self, parent, tag):
        new_node = etree.SubElement(parent, tag)
        new_node.parent = parent
        return new_node

doc = XmlDoc()
a1 = doc.SubElement(doc.root, 'a')
a2 = doc.SubElement(doc.root, 'a')
b = doc.SubElement(a2, 'b')

print etree.tostring(doc.root), '\n'
print 'element:'.ljust(15), a1
print 'path:'.ljust(15), a1.get_path()
print 'parent:'.ljust(15), a1.parent, '\n'
print 'element:'.ljust(15), a2
print 'path:'.ljust(15), a2.get_path()
print 'parent:'.ljust(15), a2.parent, '\n'
print 'element:'.ljust(15), b
print 'path:'.ljust(15), b.get_path()
print 'parent:'.ljust(15), b.parent
Which results in this output:
<root><a /><a><b /></a></root>

element:       <Element a at 87e3d6c>
path:          root/a[1]
parent:        <Element root at 87e3cec>

element:       <Element a at 87e3fac>
path:          root/a[2]
parent:        <Element root at 87e3cec>

element:       <Element b at 87e758c>
path:          root/a/b
parent:        <Element a at 87e3fac>
Now this is drastically changed from the original code, but I'm not allowed to share that.
The functions aren't too inefficient, but there is a dramatic performance decrease when switching from cElementTree to ElementTree, which I expected; from my experiments, though, it seems like monkey patching cElementTree is impossible, so I had to switch.
What I need to know is whether there is a way to add a method to cElementTree, or a more efficient way of doing this, so I can gain some of my performance back.
Just to let you know, as a last resort I am thinking of implementing selected static typing and compiling with Cython, but for certain reasons I really don't want to do that.
Thanks for taking a look.
EDIT: Sorry for the wrong use of the term late binding. Sometimes my vocabulary leaves something to be desired. What I meant was "monkey patching."
EDIT: @Corley Brigman, Guy: Thank you very much for your answers, which do address the question. However (and I should have stated this in the original post), I had completed this project before using lxml, which is a wonderful library that made coding a breeze. But new requirements (this needs to be implemented as an addon to a product called Splunk) tie me to the Python 2.7 interpreter shipped with Splunk and eliminate the possibility of adding third-party libraries, with the exception of Django.
Answers
If you need parents, use lxml instead - it tracks parents internally, and is still C behind the scenes so it's very fast.
However... be aware that there is a tradeoff in tracking parents, in that a given node can only have a single parent. This isn't usually a problem, however, if you do something like the following, you will get different results in cElementTree vs. lxml:
p = Element('x')
q = Element('y')
r = SubElement(p, 'z')
q.append(r)
cElementTree:
dump(p)
<x><z /></x>
dump(q)
<y><z /></y>
lxml:
dump(p)
<x/>
dump(q)
<y>
  <z/>
</y>
Since parents are tracked, a node can only have one parent, obviously. As you can see, the element r is copied to both trees in cElementTree, and reparented/moved in lxml.
There are probably only a small number of use cases where this matters, but something to keep in mind.
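With the standard library this behavior can be checked directly: the element is not reparented, it simply ends up referenced from both trees. The snippet below is a small sketch of that experiment (Python 3 syntax; the original thread used Python 2):

```python
import xml.etree.ElementTree as ET

p = ET.Element('x')
q = ET.Element('y')
r = ET.SubElement(p, 'z')
q.append(r)

# There is no parent tracking in the stdlib implementation, so the very
# same element object now appears in both trees.
print(ET.tostring(p))  # b'<x><z /></x>'
print(ET.tostring(q))  # b'<y><z /></y>'
print(p[0] is q[0])    # True: one object, two "parents"
```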
You can just use XPath, for example:

import lxml.html

def get_path():
    for e in doc.xpath("//b//*"):
        print e
should work, didn't test it though...
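Given the constraint mentioned in the edit (stock Python 2.7 stdlib, no lxml), a common alternative to monkey patching is to build a child-to-parent map once per tree. The helper below is a sketch of that idea; the names are mine, not from the original code (shown in Python 3 syntax):

```python
import xml.etree.ElementTree as ET

def build_parent_map(root):
    # One pass over the tree; works with both ElementTree and cElementTree
    # because it never patches the element class.
    return {child: parent for parent in root.iter() for child in parent}

def get_path(elem, root, parent_map):
    # Walk up through the parent map, adding a [n] index when an element
    # has same-tag siblings (mirroring the original get_path output).
    path = []
    node = elem
    while node is not None:
        parent = parent_map.get(node)
        tag = node.tag
        if parent is not None:
            sibs = [c for c in parent if c.tag == tag]
            if len(sibs) > 1:
                tag += '[%d]' % (sibs.index(node) + 1)
        path.append(tag)
        node = parent
    return '/'.join(reversed(path))

root = ET.fromstring('<root><a/><a><b/></a></root>')
pmap = build_parent_map(root)
print(get_path(root[1][0], root, pmap))  # root/a[2]/b
```

The map must be rebuilt after the tree is modified, but for read-mostly documents this is usually much cheaper than falling back to pure-Python ElementTree.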
Currently I'm experimenting a little bit with recursive functions in Python. I've read some things on the internet about them, and I have also built some simple, functioning recursive functions myself. However, I'm still not sure how to use the base case.
I know that a well-designed recursive function satisfies the following rules:
def checkstring(n, string):
    if len(string) == 1:
        if string == n:
            return 1
        else:
            return 0
    if string[-1:] == n:
        return 1 + checkstring(n, string[0:len(string) - 1])
    else:
        return checkstring(n, string[0:len(string) - 1])

print(checkstring('l', 'hello'))
That is an absolutely fine and valid function. Just remember that for any scenario the recursive function can be called from, there should be a base case reachable by the recursion flow.
For example, take a look at the following (stupid) recursive function:
def f(n):
    if n == 0:
        return True
    return f(n - 2)
This function will never reach its base case (n == 0) if it is called with an odd number, like 5. You want to avoid scenarios like that and think about all possible base cases the function can get to (in the example above, those would be 0 and 1). So you would do something like
def f(n):
    if n == 0:
        return True
    if n == 1:
        return False
    if n < 0:
        return f(-n)
    return f(n - 2)
Now, that is a correct function (with several ifs that check whether the number is even).
Also note that your function will be quite slow. The reason for this is that Python string slices are slow: slicing works in O(n), where n is the length of the sliced string. Thus, it is recommended to try a non-recursive solution so that you do not re-slice the string each time.
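Since the recursion here only counts occurrences of a character, an iterative version (or the built-in str.count) avoids both the slicing cost and Python's recursion depth limit; a minimal sketch:

```python
def checkstring_iter(n, string):
    # Same result as the recursive checkstring, but O(len(string)) overall:
    # no new slices are created and no call stack is consumed.
    total = 0
    for ch in string:
        if ch == n:
            total += 1
    return total

print(checkstring_iter('l', 'hello'))  # 2, same as 'hello'.count('l')
```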
Also note that sometimes a function does not have a strict base case. For example, consider the following brute-force function that prints all existing combinations of 4 digits:
def brute_force(a, current_digit):
    if current_digit == 4:
        # This means that we already chose all 4 digits and
        # we can just print the result
        print a
    else:
        # Try to put each digit in the current_digit place and launch
        # recursively
        for i in range(10):
            a[current_digit] = i
            brute_force(a, current_digit + 1)

a = [0] * 4
brute_force(a, 0)
Here, because the function does not return anything but just considers different options, we do not have a base case.
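The same enumeration can also be written without recursion, which makes it easy to check that the recursive version visits all 10^4 combinations; a sketch:

```python
from itertools import product

count = 0
for combo in product(range(10), repeat=4):
    count += 1  # each combo is one assignment of the 4 digits
print(count)  # 10000
```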
# at beginning of file, along with other existing imports
import time

# near the end
previousTime = time.time()
count = 0
while (not xbmc.abortRequested and (time.time() - previousTime < 5)):
    previousTime = time.time()
    if (count == continuousWolDelay):
        xbmc.executebuiltin('XBMC.WakeOnLan("' + macAddress + '")')
        print 'WakeOnLan signal sent to MAC-Address ' + macAddress
        count = 0
    else:
        count += 1
    xbmc.sleep(1000)
print 'script.advanced.wol closing'
return
<favourite name="Advanced Wake On Lan">RunScript("script.advanced.wol",ActivateWindow(10025,plugin://plugin.video.eyetv.parser),True)</favourite>
(2012-10-08 08:45)tenzion Wrote: I have the video plugin "EyeTV Parser" installed on my XBMC. I am using Aeon Nox 3.6.1 and I have added a favourite on the home screen that activates the AWOL script and in turn should activate the EyeTV Parser video plugin. I am using an openelec build if that has any relevance.
Here's the code from favourites.xml
Code:
<favourite name="Advanced Wake On Lan">RunScript("script.advanced.wol",ActivateWindow(10025,plugin://plugin.video.eyetv.parser),True)</favourite>
But it doesn't work. The AWOL script part works as my remote host is woken up from sleep, but nothing else happens.
(2012-09-02 14:24)sw4y Wrote: Is it possible (maybe with parameters) to wake the server up when xbmc starts ("this is possible"), wait until it's awake and then start searching for new files in the libraries?
(2012-10-09 09:13)tenzion Wrote: But did you look at the syntax of the code I pasted from favourites.xml? Is it correct?
The "ActivateWindow" refers to 10025 which is "videolibrary" - is this correct when we're dealing with a video plugin? Or is "ActivateWindow" the correct built-in command to use?
<favourite name="Advanced Wake On Lan">RunScript("script.advanced.wol",ActivateWindow(10025,"plugin://plugin.video.eyetv.parser"),True)</favourite>
(2012-10-09 14:30)tenzion Wrote: Unfortunately, the "EyeTV Parser" addon does not support being added as a favourite,...
(2012-10-09 19:31)tenzion Wrote: Anyway, I've posted my xbmc.log on Pastebin, if you'd care to take a look...
19:00:54 T:140441375938304 NOTICE: WakeOnLan signal sent to MAC-Address c4:2c:03:12:74:99
19:00:54 T:140441375938304 ERROR: Error Type: <type 'exceptions.ValueError'>
19:00:54 T:140441375938304 ERROR: Error Contents: need more than 1 value to unpack
19:00:54 T:140441375938304 ERROR: Traceback (most recent call last):
File "/storage/.xbmc/addons/script.advanced.wol/default.py", line 114, in <module>
main()
File "/storage/.xbmc/addons/script.advanced.wol/default.py", line 81, in main
except socket.error, (errno, msg):
ValueError: need more than 1 value to unpack
Quote:- what have you entered in the AWOL-setting "IP-address or hostname"?
- what version of openelec are you using exactly?
- are you running the openelec-XBMC with a user with root-rights?
(2012-10-10 19:11)Skyler Wrote: If you like I could test the new version as well. Just write a pm. I use a AMD Fusion system with the current RC of openelec V2.
(2012-10-10 19:35)tenzion Wrote: Just need to make sure the syntax is correct with regards to activating the EyeTV parser plugin... | http://forum.kodi.tv/showthread.php?tid=121142&page=4 | CC-MAIN-2016-40 | refinedweb | 536 | 60.21 |
jsweet alternatives and similar libraries
Based on the "Miscellaneous" category.
Alternatively, view jsweet alternatives based on common mentions on social networks and blogs.
- FizzBuzz Enterprise Edition: a no-nonsense implementation of FizzBuzz made by serious businessmen for serious business purposes.
- JavaCV: Java interface to OpenCV, FFmpeg, and more.
- Simple Java Mail: Simple API, Complex Emails (Jakarta Mail SMTP wrapper).
- PipelinR: a lightweight command processing pipeline ❍ ⇢ ❍ ⇢ ❍ for your Java awesome app.
- yGuard: the open-source Java obfuscation tool working with Ant and Gradle by yWorks, the diagramming experts.
- JCuda: JCuda samples.
README
JSweet: a Java to JavaScript transpiler.
- JSweet is safe and reliable. It provides web applications with type-checking and generates fully type-checked JavaScript programs. It stands on Oracle's Java Compiler (javac) and on Microsoft's TypeScript (tsc).
- JSweet allows you to use your favorite JS library (JSweet+Angular2, JSweet+threejs, IONIC/Cordova, ...).
- JSweet enables code sharing between server-side Java and client-side JavaScript. JSweet provides implementations for the core Java libraries for code sharing and legacy Java migration purpose.
- JSweet is fast, lightweight and fully JavaScript-interoperable. The generated code is regular JavaScript code, which implies no overhead compared to JavaScript, and can directly interoperate with existing JavaScript programs and libraries.
How does it work? JSweet depends on well-typed descriptions of JavaScript APIs, so-called "candies", most of them being automatically generated from TypeScript definition files. These API descriptions in Java can be seen as headers (similarly to *.h header files in C) to bridge JavaSript libraries from Java. There are several sources of candies for existing libraries and you can easily build a candy for any library out there (see more details).
With JSweet, you take advantage of all the Java tooling (IDE's, Maven, ...) to program real JavaScript applications using the latest JavaScript libraries.
Java -> TypeScript -> JavaScript
Here is a first taste of what you get by using JSweet. Consider this simple Java program:
package org.jsweet;

import static jsweet.dom.Globals.*;

/**
 * This is a very simple example that just shows an alert.
 */
public class HelloWorld {
    public static void main(String[] args) {
        alert("Hi there!");
    }
}
Transpiling with JSweet gives the following TypeScript program:
namespace org.jsweet {
    /**
     * This is a very simple example that just shows an alert.
     */
    export class HelloWorld {
        public static main(args : string[]) {
            alert("Hi there!");
        }
    }
}
org.jsweet.HelloWorld.main(null);
Which in turn produces the following JavaScript output:
var org;
(function (org) {
    var jsweet;
    (function (jsweet) {
        /**
         * This is a very simple example that just shows an alert.
         */
        var HelloWorld = (function () {
            function HelloWorld() {
            }
            HelloWorld.main = function (args) {
                alert("Hi there!");
            };
            return HelloWorld;
        }());
        jsweet.HelloWorld = HelloWorld;
    })(jsweet = org.jsweet || (org.jsweet = {}));
})(org || (org = {}));
org.jsweet.HelloWorld.main(null);
More with the live sandbox.
Features
- Full syntax mapping between Java and TypeScript, including classes, interfaces, functional types, union types, tuple types, object types, string types, and so on.
- Extensive support of Java constructs and semantics added since version 1.1.0 (inner classes, anonymous classes, final fields, method overloading, instanceof operator, static initializers, ...).
- Over 1000 JavaScript libraries, frameworks and plugins to write Web and Mobile HTML5 applications (JQuery, Underscore, Angular, Backbone, Cordova, Node.js, and much more).
- A Maven repository containing all the available libraries in Maven artifacts (a.k.a. candies).
- Support for Java basic APIs as the J4TS candy (forked from the GWT's JRE emulation).
- An Eclipse plugin for easy installation and use.
- A Maven plugin to use JSweet from any other IDE or from the command line.
- A Gradle plugin to integrate JSweet with Gradle-based projects.
- A debug mode to enable Java code debugging within your favorite browser.
- A set of nice WEB/Mobile HTML5 examples to get started and get used to JSweet and the most common JavaScript APIs (even more examples in the Examples section).
- Support for bundles to run the generated programs in the most simple way.
- Support for JavaScript modules (commonjs, amd, umd). JSweet programs can run in a browser or in Node.js.
- Support for various EcmaScript target versions (ES3 to ES6).
- Support for async/await idiom
- ...
For more details, go to the language specifications (PDF).
Getting started
- Step 1: Install (or check that you have installed) Git, Node.js and Maven (commands git, node, npm and mvn should be in your path).
- Step 2: Clone the jsweet-quickstart project from Github:

  $ git clone
- Step 3: Run the transpiler to generate the JavaScript code:

  $ cd jsweet-quickstart
  $ mvn generate-sources
- Step 4: Check out the result in your browser:

  $ firefox webapp/index.html
- Step 5: Edit the project and start programming:
- Get access to hundreds of libs (candies)
- Refer to the language specifications to know more about programming with JSweet
- Eclipse users: install the Eclipse plugin to get inline error reporting, build-on-save, and easy configuration UI
More info at.
Examples
- Simple examples illustrating the use of various frameworks in Java (jQuery, Underscore, Backbone, AngularJS, Knockout):
- Simple examples illustrating the use of the Threejs framework in Java:
- Node.js + Socket.IO + AngularJS:
- Some simple examples to get started with React.js:
- JSweet JAX-RS server example (how to share a Java model between client and server):
- JSweet Cordova / Polymer example:
- JSweet Cordova / Ionic example:
- JSweet Angular 2 example:
- JSweet Angular 2 + PrimeNG:
Sub-projects
This repository is organized in sub-projects. Each sub-project has its own build process.
- JSweet transpiler: the Java to TypeScript/JavaScript compiler.
- JSweet core candy: the core APIs (JavaScript language, JavaScript DOM, and JSweet language utilities).
- JDK runtime: a fork from GWT's JRE emulation to implement main JDK APIs in JSweet/TypeScript/JavaScript.
- JSweet candy generator: a tool to generate Java APIs from TypeScript definition files, and package them as JSweet candies.
- JSweet documentation: JSweet documentation.
Additionally, some tools for JSweet are available in external repositories.
How to build
Please check each sub-project README file.
Contributing
JSweet uses Git Flow.
You can fork this repository. Default branch is develop. Please use git flow feature start myAwesomeFeature to start working on something great :)
When you are done, you can submit a regular GitHub Pull Request.
License
Please read the LICENSE file.
*Note that all licence references and agreements mentioned in the jsweet README section above are relevant to that project's source code only. | https://java.libhunt.com/jsweet-alternatives | CC-MAIN-2022-05 | refinedweb | 1,091 | 58.28 |
Hello to all, welcome to therichpost.com. In this post, I will show you how to add Bootstrap to Vue.js. I like Vue.js code: it is simple and clean.
Vue.js is a good competitor to AngularJS.
Here are the steps to add Bootstrap to Vue.js:
1. In your command line, run the command below:
npm install bootstrap
2. In the main.js file, add the code below:
import 'bootstrap/dist/css/bootstrap.css'
And this is done.
You can also see that the following files were updated:
package.json
package-lock.json
There is a lot of tricky code in Vue.js, and I will let you know about it all. Please do comment if you have any query related to this post. Thank you. Therichpost.com
My book on XForms (see Resources) landed on the shelves and online in the fall of 2003. Shortly thereafter, I started getting lots of e-mail questions about XForms, usually including a page or three of buggy XML source. Normally I'm good about answering e-mail, but sifting through pages of someone else's XML looking for common typos is neither fun nor productive. I had to find a better way.
I'm a huge believer in constructive laziness, so I decided to write an online tool that would accept an XForms document as input, and produce a report on any markup constructs that were either wrong or suspicious. From my e-mail archive, I had a reasonable sample of the kinds of mistakes people were making. Combining the two, I would have a powerful tool to help form authors help themselves.
The XForms 1.0 specification (see Resources) is defined as a number of elements, attributes, and content models. One thing it doesn't define, however, is a root element -- that is left to a host language to address. The two most common host languages are XHTML and SVG, but in principle almost any XML vocabulary could be used. Thus, the first job of an XForms validator is to extricate the XForms portions out of a document. For these, I've coined the term XForms islands.
Because XForms separates purpose from presentation, all but the most minimal form documents have at least two XForms islands, one for the XForms Model (the definition of what the form does) and one for the XForms User Interface (the definition of what the form looks like).
Listing 1 shows a simple XForms+XHTML document -- which may be too simple, as it contains a common mistake.
Listing 1. A common, but erroneous, XForms+XHTML document.
Listing 1 clearly has two XForms islands, one for x:model and one for x:input. The question is how the code should locate them. Actually, it's not that difficult, since an element can be identified as an XForms island if it meets two simple conditions:
- It must be in the XForms namespace
- It must not have any ancestors in the XForms namespace
While this test could have been done with XPath, I wanted to explore different ways for Python to perform XML processing. So I wrote a filter function that, given a node, determines whether it starts an XML island. Listing 2 shows the code.
Listing 2. Picking out XForms islands
The validator stores the resulting list of XForms islands for later
processing. Now, the mistake from Listing 1 (as described in the comment) is
that the
root element has inadvertently been
left in the default namespace -- namely that of XHTML. This mistake, and
several other similar problems, can be detected with XPath-based checks like the one shown in Listing 3, which returns any suspicious element
nodes.
Listing 3. Checking for namespace leakage with XPath
Note that since Listing 3 can't make any assumptions about the
structure of the surrounding host language, it makes heavy use of
XPath's
// abbreviation, which uses the
descendant-or-self axis to thoroughly search the
document under consideration. Note too that the namespace prefix
mappings (
xf:) in the XPath don't have to
match those used in the target document (
x:).
The test checks descendants of the
x:instance
element to see whether they have either the XForms namespace or the
namespace of the root node. This definitely qualifies as a heuristic,
since it is possible for perfectly valid XForms documents to trigger
this condition. On the other hand, the chances are pretty good that this
condition is an authoring error, like the one in Listing 1, and so the
validator spits out a warning.
Another area of frequent mistakes is matching up
IDs and
IDREFs.
This is partially a historical problem, since the mechanism for defining
an
ID relies on the presence of a DTD. Some tools -- depending largely on
the author's philosophy towards XML Schema and the Infoset -- also allow
IDs to be defined through XML Schema datatypes. In practice, however,
you'll often find little to go on other than the presence of attributes that
happen to be named
id.
This situation isn't pretty, but a real-world validator tool needs to
be aware of it. The validator looks at a list of all attributes known to
contain
IDREFs in XForms. First it tries the
built-in
id() function; if that doesn't
find a match, it resorts to an XPath test that checks for attributes named
either
id or
xml:id (based on an unfinished W3C draft -- see
Resources). Here's the code:
As a final step, the validator looks at each XForms island and
validates it using RELAX NG. This is more complicated than it sounds,
since several areas of XForms (such as
label)
can contain markup from the host language, not to mention additional
attributes that are allowed everywhere.
To deal with this, the validator uses a highly modularized RELAX NG schema for XForms, which is integrated into a highly permissive host language. By "highly modularized" I mean that every element definition, set of attributes on an element, and content model for an element is assigned a unique name that can be separately extended. Listing 4 shows how this works for a single element definition, using the handy compact syntax of RELAX NG.
Listing 4. RELAX NG modularized element definition
Note that even attributes that contain an
IDREF are labeled as
xsd:NCName, which performs only the lexical
validation of the attribute. As I mentioned earlier, actual checking of the
ID-to-
IDREF connections
happens at a different level. The primary
advantage of defining everything separately is that it's simple to
extend, for example by adding a
class
attribute to all form controls.
In fact, that's exactly what the host language schema does. This portion of the validator is still under development, but Listing 5 shows how the host language is defined.
Listing 5. XForms + host language definition
When included by the main schema for XForms, the code in Listing 5 extends the schema for XForms in a way that allows everyday constructs,
like
class and
id
attributes, to pass validation. Since the included bits of host language
can be almost anything, this is one area that will need ongoing
adjustments, based on actual usage as seen in the wild.
As the validator works, it keeps track of the running results as an in-memory XML file. At the conclusion, an XSL transformation converts the results into the final HTML that gets sent over the wire.
As always, namespaces are a tricky subject, especially for authors. Standards can make things easier, and two developing standards approach this problem in different ways.
One such suite of standards is Document Schema Definition Languages, or DSDL (see Resources); these languages are currently progressing variously towards becoming ISO Final Draft International Standards. Currently divided into 10 parts, DSDL is an implicit recognition of the complexity of the overall subject of validation. Individual parts include the definition for RELAX NG (part 2), Schematron (part 3), and a mechanism something like my XForms islands for selecting validation candidates out of a larger document (part 4). The remaining parts of DSDL cover other diverse areas like character repertoire validation and ways to combine various schema languages.
Another related standard and toolset is CAM, or "Content Assembly Mechanism" from OASIS. This technology allows business rules to define, validate, and compose documents; thus schema fragments can be brought together to define larger, compound documents.
All in all, mixed namespace validation in all its glory is a fertile area of XML development. The XForms validator is still a work in progress, as well as a great learning experience.
- Read the full text of Micah Dubinko's O'Reilly book XForms Essentials online. You can also order the book from the developerWorks Developer Bookstore.
- Try out the XForms Validator discussed in this article.
- Get more details on the underlying specifications for XForms 1.0 and XPath 1.0.
- Explore the concept of constructive laziness and its rich legacy. The most widely-known reference is perhaps from chapter 2 of Eric Raymond's The Cathedral and the Bazaar.
- Learn a convenient way to specify
IDness in XML by reading the in-progress
xml:idWorking Draft, which defines a reserved name of
xml:id.
- Leverage the power of RELAX NG, starting with the specifications and tutorials on the official site. developerWorks also focuses on this technology in the tutorial "Understanding RELAX NG" (December 2003) by Nicholas Chase.
- Visit the DSDL site, where a 10-part ISO specification covering all manner of XML validation is under way.
- Also visit the OASIS Content Assembly Mechanism (CAM) site, which describes another ongoing standardization effort related to validation.
-.
Micah Dubinko is a consultant and founder of Brain Attic, L.L.C., a software vendor and consultancy specializing in defeating information overload. He wrote XForms Essentials for O'Reilly Media and served on the Working Group that developed XForms 1.0. He lives and works in Phoenix, AZ. You can contact him at micah@dubinko.info. | http://www.ibm.com/developerworks/xml/library/x-xfvalid.html | crawl-002 | refinedweb | 1,530 | 51.99 |
NAME
hdlcdrv - HDLC amateur (AX.25) packet radio network driver
SYNOPSIS
#include <linux/hdlcdrv.h> linux/drivers/net/hdlcdrv.c extern inline void hdlcdrv_putbits(struct hdlcdrv_state * s, unsigned int bits); extern inline unsigned int hdlcdrv_getbits(struct hdlcdrv_state * s); extern inline void hdlcdrv_channelbit(struct hdlcdrv_state * s, unsigned int bit); extern inline void hdlcdrv_setdcd(struct hdlcdrv_state * s , int dcd); extern inline int hdlcdrv_ptt(struct hdlcdrv_state * s); void hdlcdrv_receiver(struct device *, struct hdlcdrv_state *); void hdlcdrv_transmitter(struct device *, struct hdlcdrv_state *); void hdlcdrv_arbitrate(struct device *, struct hdlcdrv_state *); int hdlcdrv_register_hdlcdrv(struct device * dev, struct hdlcdrv_ops * ops, unsigned int privsize, char * ifname, unsigned int baseaddr , unsigned int irq, unsigned int dma); int hdlcdrv_unregister_hdlcdrv(struct device * dev);
DESCRIPTION
This driver should ease the implementation of simple AX.25 packet radio modems where the software is responsible for the HDLC encoding and decoding. Examples of such modems include the baycom family and the soundcard modems. This driver provides a standard Linux network driver interface. It can even be compiled if Kernel AX.25 is not enabled in the Linux configuration. This allows this driver to be used even for userland AX.25 stacks such as Wampes or TNOS, with the help of the net2kiss utility. This driver does not access any hardware; it is the responsibility of an additional hardware driver such as baycom or soundmodem to access the hardware and derive the bitstream to feed into this driver. The hardware driver should store its state in a structure of the following form: struct hwdrv_state { struct hdlc_state hdrv; ... the drivers private state }; A pointer to this structure will be stored in dev->priv. hdlcdrv_register_hdlcdrv registers a hardware driver to the hdlc driver. dev points to storage for the device structure, which must be provided by the hardware driver, but gets initialized by this function call. ops provides information about the hardware driver and its calls. privsize should be sizeof(struct hwdrv_state). ifname specifies the name the interface should get. baseaddr, irq and dma are simply stored in the device structure. After this function succeeds, the interface is registered with the kernel. It is not running, however, this must be done with ifconfig ifname up. hdlcdrv_unregister_hdlcdrv shuts the interface down and unregisters it with the kernel. hdlcdrv_putbits delivers 16 received bits for processing to the HDLC driver. This routine merely stores them in a buffer and does not process them. It is thus fast and can be called with interrupts off. The least significant bit should be the first one received. hdlcdrv_getbits requests 16 bits from the driver for transmission. The least significant bit should be transmitted first. 
This routine takes them from a buffer and is therefore fast. It can be called with interrupts off. hdlcdrv_channelbit puts a single bit into a buffer, which can be displayed with sethdlc -s. It is intended for driver debugging purposes. hdlcdrv_setdcd informs the HDLC driver about the channel state (i.e. if the hardware driver detected a data carrier). This information is used in the channel access algorithm, i.e. it prevents the driver from transmitting on a half duplex channel if there is already a transmitter on air. hdlcdrv_ptt should be called by the hardware driver to determine if it should start or stop transmitting. The hardware driver does not need to worry about keyup delays. This is done by the HDLC driver. hdlcdrv_receiver actually processes the received bits delivered by hdlcdrv_putbits. It should be called with interrupts on. It guards itself against reentrance problems. hdlcdrv_transmitter actually prepares the bits to be transmitted. It should be called with interrupts on. It guards itself against reentrance problems. hdlcdrv_arbitrate does the channel access algorithm (p-persistent CSMA). It should be called once every 10ms. Note that the hardware driver must set the hdrv.par.bitrate field prior to starting operation so that hdlcdrv can calculate the transmitter keyup delay correctly.
HARDWARE DRIVER ENTRY POINTS
The hardware driver should provide the following information to the HDLC driver: struct hdlcdrv_ops { const char *drvname; const char *drvinfo; int (*open)(struct device *); int (*close)(struct device *); int (*ioctl)(struct device *, struct ifreq *, int); }; drvname and drvinfo are just for informational purposes. The following routines receive a pointer to the device structure, where they may find the io address, irq and dma channels. open must be provided. It is called during ifconfig ifname up and should check for the hardware, grab it and initialize it. It usually installs an interrupt handler which then gets invoked by the hardware. close must be provided. It is called during ifconfig ifname down and should undo all actions done by open, i.e. release io regions and irqs. ioctl may be provided to implement device specific ioctl’s.
IOCTL CALLS
The driver only responds to SIOCDEVPRIVATE. Parameters are passed from and to the driver using the following struct: struct hdlcdrv_ioctl { int cmd; union { struct hdlcdrv_params mp; struct hdlcdrv_channel_params cp; struct hdlcdrv_channel_state cs; unsigned int calibrate; unsigned char bits; } data; }; Since the 16 private ioctl request numbers for network drivers were not enough, the driver implements its own sub request number with cmd. The following numbers are implemented: HDLCDRVCTL_GETMODEMPAR returns the IO parameters of the modem in data.mp. This includes the io address, irq, eventually dma, and ports to output a PTT signal. HDLCDRVCTL_SETMODEMPAR sets the modem parameters. Only superuser can do this. Parameters can only be changed if the interface is not running (i.e. down). HDLCDRVCTL_GETCHANNELPAR returns the channel access parameters. HDLCDRVCTL_SETCHANNELPAR sets the channel access parameters. Only superuser can do this. They may also be changed using the kissparms command if using kernel AX.25 or the param command of *NOS. HDLCDRVCTL_GETSTAT statistics and status information, such as if a carrier is detected on the channel and if the interface is currently transmitting. HDLCDRVCTL_CALIBRATE instructs the driver to transmit a calibration pattern for the specified number of seconds. HDLCDRVCTL_GETSAMPLES returns the bits delivered by the hardware driver with hdlcdrv_channelbit. The bits are returned 8 at a time with the least significant bit the first one. This command may not be available, depending on debugging settings. HDLCDRVCTL_GETBITS returns the bits delivered by the hardware driver to the HDLC decoder. The bits are returned 8 at a time with the least significant bit the first one. This command may not be available, depending on debugging settings.
SEE ALSO
baycom (9), soundmodem (9), sethdlc (8), linux/drivers/net/hdlcdrv.c,
AUTHOR
hdlcdrv was written by Thomas Sailer, HB9JNX/AE4WA, (sailer@ife.ee.ethz.ch). | http://manpages.ubuntu.com/manpages/hardy/man9/hdlcdrv.9.html | CC-MAIN-2014-15 | refinedweb | 1,072 | 58.38 |
Ana checklist » History » Version 16
Version 16/30
(diff) -
Current version
Colton Hill, 08/16/2018 04:34 PM
MCC8 Analysis checklist¶
1. Beam Flash Timing Window:¶Each sample has a different timing window where the flashes occur, are yours correctly configured?
The following beam windows have length of 1.8 us: 1.6 us (beam spill) + 0.1 us at each end to accomodate time jitter)
- MC BNB+ MC cosmics: [3.10, 4.90]
- BNB data: [3.20, 5.00]
- BNB-EXT data: [3.60, 5.40]
- MC BNB + data cosmics overlaid: [3.50, 5.30]
- references
2. Reco2 Calibration data:¶In MCC8 if you use the standard production reco2 (pandora, trajcluster, PMA, etc.) there are two sets of calorimetry information, are you using the correct one?
- "calo" -- does not include the calibration information
- "cali" -- has had the calibration applied
- Only MCC 8.8 has the correct calibration, MCC8.11 (v06_26_01_21) is planned to also have the correct calibration
- Technote: MicroBooNE-doc-14754-v3
- Public note: MicroBooNE-doc-15584-v13
3. Space-charge corrections: Are you applying them?¶
If you try to match between reconstructed quantities and truth level information you need to account for the space charge effects that are applied at LArG4.
- references
4. Data-to-MC Normalization: Are you including empty files?¶
- references
5. Is your sample Prescaled?¶
- If you're using standard BNB, BNB EXT, or NuMI On-Beam samples, the chance that your events are prescaled is very unlikely! If you're using NuMI EXT or any Unbiased samples, then you'll likely need to be careful of the prescaling.
- To correct for any prescaling, you should apply this on a run-by-run basis. Zarko's POT counting tool in it's most recent (August 2018) form, should automatically account for any prescaling, removing this from a worry of any analyser.
- You can check if your data is prescaled by looking at the software trigger parameters. Constructing a SWTriggerHandle:
art::Handle<raw::ubdaqSoftwareTriggerData> SWTriggerHandle; e.getByLabel("daq", SWTriggerHandle); SWTriggerHandle->getPrescale("EXT_NUMIwin_FEMBeamTriggerAlgo");
where EXT_NUMIwin_FEMBeamTriggerAlgo is one of the many different trigger algorithms. You should check these based on the sample. Also be sure to include:
#include "uboone/RawData/utils/ubdaqSoftwareTriggerData.h"
6. Reco-MC-Truth¶
- references
7. Has your sample been passed through the data-quality monitoring?¶
- While most samples used by analysers are already passed through the basic form of data-quality monitoring (DQM), it never hurts to check.
- First look at the sam definition name you're using. Some of them will say "DQM" or "GoodRuns", etc.
- One of the ways to check if it has been applied is to simply do:
samweb describe-definition defname
- This should return meta-data regarding the definition, including how it is defined. If you check "Dimensions", this will tell you how the definition was constructed. Quite often when the DQM parameters are applied you will see:
and defname: dqm_neutrino2016_bnbextrevised
- This parameter indicates that the runs included in the sam definition also include runs which pass the DQM.
- Other possible methods? | https://cdcvs.fnal.gov/redmine/projects/uboonecode/wiki/Ana_checklist/16 | CC-MAIN-2021-17 | refinedweb | 505 | 50.63 |
The rules to the C++ standard library are mainly about containers, strings, and iostreams.
Curiously, there is no section to the algorithms of the standard template library (STL) in this chapter. Curiously, because there is a proverb in the C++ community: If you write an explicit loop, you don't know the algorithms of the STL. Anyway. Only for completeness, let me start with the first three rules which provide not much beef.
SL.1: Use libraries wherever possible, because reinventing the wheel is a bad idea. Additionally, you benefit from the work of others. This means you use already tested and well-defined functionality. This holds, in particular, true, if you SL.2: Prefer the standard library to other libraries. Imagine, for example, you hire someone. The benefit is that he already knows the library and you don't have to teach him your libraries. You save a lot of money and time. I once had a customer who named his infrastructure namespace std. Of course, if you want to have a lot of fun, do it. If not: SL.3: Do not add non-standard entities to namespace std.
std
The next rules to STL containers are more concrete.
The first rule is quite easy to argue.
array
vector
I assume you know a std::vector. One of the big advantages of a std::vector to a C array is it that the std::vector automatically manage its memory. Of course, that holds true for all further containers of the Standard Template Library. But now, let's have a closer look at the automatic memory management of std::vector.
// vectorMemory.cpp
#include <iostream>
#include <string>
#include <vector>
template <typename T>
void showInfo(const T& t,const std::string& name){
std::cout << name << " t.size(): " << t.size() << std::endl;
std::cout << name << " t.capacity(): " << t.capacity() << std::endl;
}
int main(){
std::cout << std::endl;
std::vector<int> vec; // (1)
std::cout << "Maximal size: " << std::endl;
std::cout << "vec.max_size(): " << vec.max_size() << std::endl; // (2)
std::cout << std::endl;
std::cout << "Empty vector: " << std::endl;
showInfo(vec, "Vector");
std::cout << std::endl;
std::cout << "Initialised with five values: " << std::endl;
vec = {1,2,3,4,5};
showInfo(vec, "Vector"); // (3)
std::cout << std::endl;
std::cout << "Added four additional values: " << std::endl;
vec.insert(vec.end(),{6,7,8,9});
showInfo(vec,"Vector"); // (4)
std::cout << std::endl;
std::cout << "Resized to 30 values: " << std::endl;
vec.resize(30);
showInfo(vec,"Vector"); // (5)
std::cout << std::endl;
std::cout << "Reserved space for at least 1000 values: " << std::endl;
vec.reserve(1000);
showInfo(vec,"Vector"); // (6)
std::cout << std::endl;
std::cout << "Shrinke to the current size: " << std::endl;
vec.shrink_to_fit(); // (7)
showInfo(vec,"Vector");
}
To spare typing I wrote the small function showInfo. This function returns for a vector its size and its capacity. The size of a vector is its number of elements, the capacity of a container is the number of elements a vector can hold without an additional memory allocation. Therefore, the capacity of a vector has at least to be as big as its size. You can adjust the size of a vector with its method resize; you can adjust the capacity of a container with its method reserve.
But, back to the program from top to bottom. I create (line 1) an empty vector. Afterwards, the program displays (line 2) the maximum numbers of elements a vector can have. After each operation, I return their size and capacity. That holds for the initialization of the vector (line 3), for the addition of four new elements (line 4), the resizing of the containers to 30 elements (line 5) and the reserving of additional memory for at least 1000 elements (line 6). With C++11, you can shrink with the method shrink_to_fit (line 7) the vector's capacity to its size.
Before I present the output of the program on Linux let me make a few remarks.
Okay, but what is the difference between a C array and a C++ array?
std::array combines the best from two worlds. On one hand, std::array has the size and efficiency of a C-array; on the other hand, std::array has the interface of a std::vector.
My small program compares the memory efficiency of a C array, a C++ array (std::array), and a std::vector.
// sizeof.cpp
#include <iostream>
#include <array>
#include <vector>
int main(){
std::cout << std::endl;
std::cout << "sizeof(int)= " << sizeof(int) << std::endl;
std::cout << std::endl;
int cArr[10] = {1, 2, 3, 4, 5, 6, 7, 8, 9, 10};
std::array<int, 10> cppArr = {1, 2, 3, 4, 5, 6, 7, 8, 9, 10};
std::vector<int> cppVec = {1, 2, 3, 4, 5, 6, 7, 8, 9, 10};
std::cout << "sizeof(cArr)= " << sizeof(cArr) << std::endl; // (1)
std::cout << "sizeof(cppArr)= " << sizeof(cppArr) << std::endl; // (2)
// (3)
std::cout << "sizeof(cppVec) = " << sizeof(cppVec) + sizeof(int) * cppVec.capacity() << std::endl;
std::cout << " = sizeof(cppVec): " << sizeof(cppVec) << std::endl;
std::cout << " + sizeof(int)* cppVec.capacity(): " << sizeof(int)* cppVec.capacity() << std::endl;
std::cout << std::endl;
}
Both, the C array (line1) and the C++ array (line 2) take 40 bytes. That is exactly sizeof(int) * 10. In contrast, the std::vector needs additional 24 bytes (line 3) to manage its data on the heap.
This was the C part of a std::array but the std::array supports the interface of a std::vector. This means, in particular, that std::array knows its size and, therefore, error-prone interfaces such as the following one are a heavy code-smell.
void bad(int* p, int count){
...
}
int myArray[100] = {0};
bad(myArray, 100);
// -----------------------------
void good(std::array<int, 10> arr){
...
}
std::array<int, 100> myArray = {0};
good(myArray);
When you use a C array as a function argument, you remove almost all type information and pass it as a pointer to its first argument. This is extremely error-prone because you have to provide the number of elements additionally. This will not hold if your function accepts an std::array<int, 100>.
If the function good is not generic enough, you can use a template.
template <typename T>
void foo(T& arr){
arr.size(); // (1)
}
std::array<int, 100> arr{};
foo(arr);
std::array<double, 20> arr2{};
foo(arr2);
Because a std::array knows its size, you can ask for it in line 1.
The next two rules to containers are quite interesting. In the next post, I give an answer to the question: When to use which 146 guests and no members online
Kubik-Rubik Joomla! Extensions
Read more...
Read more... | https://modernescpp.com/index.php/c-core-guidelines-the-standard-library | CC-MAIN-2021-43 | refinedweb | 1,107 | 64.91 |
char* path; ufy_t* ufy = ufy_new(NULL); ufy_config(ufy, path);
'path' may be NULL, or may be a string, referring to a location on the filesystem. If it is NULL, however, it will try to autoconfigure from known locations. If ufy_config isn't called at all, that means no configuration is prepended when the actual script arrives.
ufy_set_script(ufy_t* ufy, char* path, char* source); ufy_set_script_path(ufy_t* ufy, char* path);
Through these functions, we set the interpreter up with a script. In the first variety, 'path' may be NULL, but source may not be. The path is just auxillary here. In the second variety, 'path' may obviously not be NULL; it points to the place of the script on the filesystem.
mainthrd_t* mainthread = mainthread_new(NULL, ufy);
Here we create a new main-thread. A main-thread is the top of a tree of threads that run through the static ufy context. The only thing left to do now is running it:
inst_t* instance = NULL; thrd_run(mainthread, &instance);
When this function returns, the script has finished running. The 'instance' contains the resulting value of the script.
#include "ufy.h" static int mod_foo_decl (ctx_t* ctx, char* name) { return 0; } static int mod_foo_var (thrd_t* thread, char* name) { return 0; } void ufy_module_init (native_t* module, char* name, char* use) { module->load_declarations = mod_foo_decl; module->load_variables = mod_foo_var; }
The gist of it is this: you fill a pointed-to structure with functors that will, on various occasions, be called to provide the static and dynamic contexts related to the execution of an ufy script. The 'declarations' are called when the module is loaded, the 'variables' are called when the main thread starts running through the script. Now we have to fill one, or both, of those functions with our module-specific stuff. Like so:
static int mod_foo_fnc_bar (thrd_t* thread, lst_t* params, inst_t** result) { return 0; } static int mod_foo_decl (ctx_t* ctx, char* name) { ufy_add_function(ctx, "bar", mod_foo_fnc_bar); return 0; }
Again, this is a skeleton. This time for the native 'bar' function. Now, let's make this function 'bar' actually do something. Let's say it's a function that accepts either a single 'int' as a parameter, or a single 'string', and just do some reporting:
static int mod_foo_fnc_bar (thrd_t* thread, lst_t* params, inst_t** result) { switch (check_params_alt(params, 2, 1, INSTYPE_STRING, 1, INSTYPE_INT)) { case 0: fprintf(stderr, "You've passed a string !\n'%s'\n", PARAM_STRING(params,0)); break; case 1: fprintf(stderr, "You've passed an int !\n'%d'\n", PARAM_INT(params,0)); break; default: return thrd_pusherr(thread, ERR_PARAM, "bar: Need string or int"); } return 0; }The function implementation above contains four elements which are part of the ufy API, and which need further explanation:
*result = inst_new_int(3);The following instance instantiators are at your disposal:
inst_t* inst_new_void(); inst_t* inst_new_byte(unsigned char c); inst_t* inst_new_int(int i); inst_t* inst_new_float(double d); inst_t* inst_new_string(char* str); /* must be a malloced pointer */ inst_t* inst_
int mod_foo_decl (ctx_t* ctx, char* name) { ctx_add_function( ctx, ufy_new_sfunction( "join", "function join(string s, list l) {" " var result = string(pop(l));" " foreach (; elt; l) {" " result += s + string(elt);" " } " "}" ) ); }
The C code above is the implementation of a module; it's declaration callback, to be more precise. Inside it, we pass a string to the ufy_new_sfunction function, which parses it, and wraps it inside an ufy fnc_t* type.
There are a few conditions to doing this: you should always provide the complete function declaration. You should always use a 'function' declaration, even though you're actually creating a method to a class. If you think the string may have faulty syntax, you should check the return value of the ufy_new_sfunction function - it will return NULL if there are any errors. You may use the private modifier prefix in the declaration. | http://ufy.sourceforge.net/api.html | CC-MAIN-2018-05 | refinedweb | 621 | 56.49 |
Technical Support
On-Line Manuals
RL-ARM User's Guide (MDK v4)
#include <net_config.h>
void init_serial (void);
The init_serial function initializes the serial driver. The
function:
The init_serial function is part of RL-TCPnet. The
prototype is defined in net_config.h.
note
The init_serial function does not return any value.
com_getchar, com_putchar, com_tx_active
void init_serial (void) {
/* Initialize the serial interface */
rbuf.in = 0;
rbuf.out = 0;
tbuf.in = 0;
tbuf.out = 0;
tx_active = __FALSE;
/* Enable RxD1 and TxD1 pins. */
PINSEL0 = 0x00050000;
/* 8-bits, no parity, 1 stop bit */
U1LCR = 0x83;
/* 19200 Baud Rate @ 15MHz VPB Clock */
U1DLL = 49;
U1DLM = 0;
U1LCR = 0x03;
/* Enable RDA and THRE interrupts. */
U1IER = 0x03;
/* Enable UART1 interrupts. */
VICVectAddr14 = (U32)handler_UART1;
VICVectCntl14 = 0x27;
VICIntEnable = 1 <<. | http://www.keil.com/support/man/docs/rlarm/rlarm_init_serial.htm | CC-MAIN-2019-43 | refinedweb | 120 | 63.05 |
From: Paul A. Bristow (pbristow_at_[hidden])
Date: 2001-04-23 12:26:53
> -----Original Message-----
> From: John Maddock [mailto:John_Maddock_at_[hidden]]
> Sent: Sunday, April 22, 2001 12:02 PM
> To: INTERNET:boost_at_[hidden]
> Subject: [boost] Boost.MathConstants: Review
>
>
> How does this library interact with the POSIX standard, see
> for a
> listing, these don't cut and paste well :-(
>
> I assume that this library provides a superset of these values?
All the POSIX constants are only double precision,
there are only a few of them, and I think I include all of them
(I'll check this).
math.h - mathematical declarations
header provides for the following constants.
The values are of type double and are accurate within the precision of the
double type.
M_E Value of e
M_LOG2E Value of log2e
M_LOG10E Value of log10e
M_LN2 Value of loge2
M_LN10 Value of loge10
M_PI Value of
M_PI_2 Value of pi/2
M_PI_4 Value of pi/4
M_1_PI Value of 1/pi
M_2_PI Value of 2/pi
M_2_SQRTPI Value of 2/pi
M_SQRT2 Value of sqrt(2)
M_SQRT1_2 Value of 1/sqrt(2)
I personally don't like the naming - namespaces allow much nicer names.
We could use these as check, but only double precision. is much more relevant - but we are a bit ahead in
tackling the much simpler problem of constants.
Long term, I believe BOOST needs the functions that they plan to provide.
Introduction
Undoubtedly you are familiar with Abramowitz and Stegun's, Handbook of
Mathematical Functions (with Formulas, Graphs, and Mathematical Tables),
first published by the National Bureau of Standards in 1964, and still in
print.
A project is underway at the National Institute of Standards and Technology
(the heir to NBS) to develop a replacement for the Handbook. This will be a
major new mathematical reference source on the World Wide Web for special
functions and their applications.
About the project
For more information about the project, see the Project web pages; including
Upcoming Events and What's New!
Digital Library of Mathematical Functions Mockup
Finally, we present the Mockup of NIST's Digital Library of Mathematical
Functions. This mockup gives an idea of the current ideas about the design
and organization of the new Digital Library; these ideas are, of course,
subject to change. Content only is only available in a few places -- it is
only a mockup. Give it a look, and give us your responses.
Starting points:
Table of Contents
Help about the site.
Attached in html.
Paul
Boost list run by bdawes at acm.org, gregod at cs.rpi.edu, cpdaniel at pacbell.net, john at johnmaddock.co.uk | https://lists.boost.org/Archives/boost/2001/04/11278.php | CC-MAIN-2020-45 | refinedweb | 436 | 62.88 |
- Code: Select all
import numpy as np
import pandas as pd
a = pd.DataFrame(np.random.rand(5,5),columns=list('ABCDE'))
b = a.mean(axis=0)
>>> b
A 0.536495
B 0.522431
C 0.582918
D 0.600779
E 0.371422
dtype: float64
My application is to take the averaged values and insert them into another dataframe, e.g.
- Code: Select all
# Average parameters collected when "hour of day" (data.hod) is 13
tmp = pd.DataFrame()
tmp = pd.concat( [tmp,data[data.hod==13].mean(axis=0)], axis=0)
This is where it gets ticked off and gives the AttributeError: 'Series' object has no attribute '_data'
I would expect there exists some way of averaging a dataframe that does not involve converting the output to a Series. I know this is doable when the dataframe is multi-indexed, but that is not done here. Does anyone know how to perform this operation? | http://python-forum.org/viewtopic.php?f=6&t=7116&p=8999 | CC-MAIN-2016-40 | refinedweb | 154 | 70.9 |
We can add functional tests to our application to verify all of the functionality that we claim to have. Ferris provides you with the ability to test our application’s controllers in the same way that we tested our data model.
Before we get started testing, we need to add a root route so that goes to /posts.
Modify app/routes.py and remove these lines:
from webapp2 import Route from ferris.handlers.root import Root ferris_app.router.add(Route('/', Root, handler_method='root'))
Add these lines in their place:
from webapp2_extras.routes import RedirectRoute ferris_app.router.add(RedirectRoute('/', redirect_to='/posts'))
At this point opening should send us to.
Note
To run tests execute scripts/backend_test.sh or alternatively python ferris/scripts/test_runner.py app just as we did in the Data Model section.
Ferris provides an example test called SanityTest that resides in app/tests/backend/test_sanity.py.
Here’s the code for that test:
class SanityTest(AppTestCase): def testRoot(self): self.loginUser() resp = self.testapp.get('/') self.loginUser(admin=True) resp = self.testapp.get('/admin') self.assertTrue('Ferris' in resp)
Let’s walk through this:
We’re going build on this example and create tests to verify:
While this isn’t an exhaustive list, it will demonstrate how to test various aspects of our application.
In this test, we’ll create four posts: two from the first user, and two from the second user. We will then check the results of our list methods. Our tests should verify that the data was created and appears as expected.
First, let’s create our test class in app/tests/backend/test_posts.py:
from ferris.tests.lib import AppTestCase from app.models.post import Post class TestPosts(AppTestCase): def testLists(self): self.loginUser("user1@example.com")
We have a user logged in, so let’s create some posts as that user:
Post(title="Test Post 1").put() Post(title="Test Post 2").put()
Now let’s log in the second user and create some more posts:
self.loginUser("user2@example.com") Post(title="Test Post 3").put() Post(title="Test Post 4").put()
At this point we can now make requests and verify their content. Let’s start with /posts and verify that all of the posts are showing up:
resp = self.testapp.get('/posts') assert 'Test Post 1' in resp.body assert 'Test Post 2' in resp.body assert 'Test Post 3' in resp.body assert 'Test Post 4' in resp.body
Very well, let’s continue with /posts?mine and verify that only user2@example.com's posts are present:
resp = self.testapp.get('/posts?mine') assert 'Test Post 1' not in resp.body assert 'Test Post 2' not in resp.body assert 'Test Post 3' in resp.body assert 'Test Post 4' in resp.body
Additionally, let’s make sure the ‘edit’ links are present:
assert 'Edit' in resp.body
Let’s add a new method and make a request to /posts/add:
def testAdd(self): self.loginUser("user1@example.com") resp = self.testapp.get('/posts/add')
Now let’s get the form from the response, try to submit it without filling it out, and verify that it caused a validation error:
form = resp.form error_resp = form.submit() assert 'This field is required' in error_resp.body
With that in place, let’s fill out the form, submit it, and verify that it went through:
form['title'] = 'Test Post' good_resp = form.submit() assert good_resp.status_int == 302 # Success redirects us to list
Finally, load up the list and verify that the new post is there:
final_resp = good_resp.follow() assert 'Test Post' in final_resp
To test to make sure that a user can only edit his own posts, we’re going to need to create posts under two different users like we did before:
def testEdit(self): self.loginUser("user1@example.com") post_one = Post(title="Test Post 1") post_one.put() self.loginUser("user2@example.com") post_two = Post(title="Test Post 2") post_two.put()
Now, let’s load the edit page for post two. This should succeed:
self.testapp.get('/posts/:%s/edit' % post_two.key.urlsafe())
Finally, load the edit page for post one. We should expect this to fail:
self.testapp.get('/posts/:%s/edit' % post_one.key.urlsafe(), status=401) | http://ferris-framework.appspot.com/docs21/tutorial/6_functional_testing.html | CC-MAIN-2017-13 | refinedweb | 708 | 61.53 |
.
.
We use the"InterCallings Analysis "(ICA) to see (in an indented fashion) how kernel functions call each other.
For example, the sleep_on command is described in ICA below:
|sleep_on |init_waitqueue_entry -- |__add_wait_queue | enqueuing request |list_add | |__list_add -- |schedule --- waiting for request to be executed |__remove_wait_queue -- |list_del | dequeuing request |__list_del -- sleep_on ICA
The indented ICA is followed by functions' locations:
Note: We don't specify anymore file location, if specified just before.
In an ICA a line like looks like the following
function1 -> function2
means that < function1 > is a generic pointer to another function. In this case < function1 > points to < function2 >.
When we write:
function:
it means that < function > is not a real function. It is a label (typically assembler label).
In many sections we may report a ''C'' code or a ''pseudo-code''. In real source files, you could use ''assembler'' or ''not structured'' code. This difference is for learning purposes.
The advantages of using ICA (InterCallings Analysis) are many:
As all theoretical models, we simplify reality avoiding many details, such as real source code and special conditions..
Many years ago, when computers were as big as a room, users ran their applications with much difficulty and, sometimes, their applications crashed the computer.
To avoid having applications that constantly crashed, newer OSs were designed with 2 different operative modes:
|.
Once we understand that there are 2 different modes, we have to know when we switch from one to the other.
Typically, there are 2 points of switching:
System calls are like special functions that manage OS routines which live in Kernel Mode.
A system call can be called when we:
| | ------->|:
Special interest has the Timer IRQ, coming every TIMER ms to manage::
The task state is managed by its presence in a relative list: READY list and BLOCKED list.
The movement from one task to another is called ''Task Switching''. many computers have a hardware instruction which automatically performs this operation. Task Switching occurs in the following cases:
*:
CONS:.
Standard ISO-OSI describes a network architecture with the following levels:
The first 2 levels listed above are often implemented in hardware. Next levels are in software (or firmware for routers).
Many protocols are used by an OS: one of these is TCP/IP (the most important living on 3-4 levels).
The kernel doesn't know anything (only addresses) about first 2 levels of ISO-OSI.
In RX it:
frames packets sockets NIC ---------> Kernel ----------> Application | packets --------------> Forward - RX -
In TX stage it:
sockets packets frames Application ---------> Kernel ----------> NIC packets /|\ Forward ------------------- - TX -.
How can we solve segmentation and pagination problems? Using either 2 policies.
| .. | |____________________| ----->| Page 1 | | |____________________| | | .. | ____________________ | |____________________| | | |---->| Page 2 | | Segment X | ----| |____________________| | | | | .. | |____________________| | |____________________| | | .. | | |____________________| |---->| Page 3 | |____________________| | .. |
Process X, identified by Segment X, is split in 3 pieces and each of one is loaded in a page.
We do not have:
| | | | | | Offset2 | Value | | | /|\| | Offset1 | |----- | | | /|\ | | | | | | | | | | \|/| | | | | ------>| | \|/ | | | | Base Paging Address ---->| | | | | ....... | | ....... | | | | | Hierarchical Paging
The last function ''rest_init'' does the following:
Linux has some peculiarities that distinguish it from other OSs. These peculiarities include:.
Under the Linux kernel only 4 segments exist:
[syntax is ''Purpose [Segment]'']
Under Intel architecture, the segment registers used are:
So, every Task uses 0x23 for code and 0x2b for data/stack.
Under Linux 3 levels of pages are used, depending on the architecture. Under Intel only 2 levels are supported. Linux also supports Copy on Write mechanisms (please see Cap.10 for more information).
The answer is very very simple: interTask address conflicts cannot exist because they are impossible. Linear -> physical mapping is done by "Pagination", so it just needs to assign physical pages in an univocal fashion.
No. Page assigning is a dynamic process. We need a page only when a Task asks for it, so we choose it from free memory paging in an ordered fashion. When we want to release the page, we only have to add it to the free pages list.'' routine will set right bit in the vector describing softirq pending.
''wakeup_softirq'' uses ''wakeup_process'' to wake up ''ksoftirqd_CPU0'' kernel thread.
A module always contains:
If these functions are not in the module, you need to add 2 macros to specify what functions will act as init and exit module:
NOTE: a module can "see" a kernel variable only if it has been exported (with macro EXPORT_SYMBOL).
//!
This can be considered the most useful proc subdirectory. It allows you to change very important settings for your network kernel configuration.
core ipv4 ipv6 unix ethernet 802.
"ip_forward", enables or disables ip forwarding in your Linux box. This is a generic setting for all devices, you can specify each device you choose.
I think this is the most useful /proc entry, because it allows you to change some net settings to support wireless networks (see Wireless-HOWTO for more information).
Here are some examples of when you could use this setting:
This
Linux).
We'll see now an example of what happens when we send a TCP packet to Linux, starting from ''netif_rx [net/core/dev.c]'' call.
|netif_rx |__skb_queue_tail |qlen++ |* simple pointer insertion * |cpu_raise_softirq |softirq_active(cpu) |= (1 << NET_RX_SOFTIRQ) // set bit NET_RX_SOFTIRQ in the BH vector
Functions::
Description:
SERVER (LISTENING) CLIENT (CONNECTING) SYN <------------------- SYN + ACK -------------------> ACK <------------------- 3-Way TCP handshake
TODO
Here we view how "stack" and "heap" are allocated in memory
FF.. | | <-- bottom of the stack /|\ | | | higher | | | | stack values | | | \|/ growing | | XX.. | | <-- top of the stack [Stack Pointer] | | | | | | 00.. |_________________| <-- end of stack [Stack Segment] Stack
Memory address values start from 00.. (which is also where Stack Segment begins) and they grow going toward FF.. value.
XX.. is the actual value of the Stack Pointer.
Stack is used by functions for:
For example, for a classical function:
|int foo_function (parameter_1, parameter_2, ..., parameter_n) { |variable_1 declaration; |variable_2 declaration; .. |variable_n declaration; |// Body function |dynamic variable_1 declaration; |dynamic variable_2 declaration; .. |dynamic variable_n declaration; |// Code is inside Code Segment, not Data/Stack segment! |return (ret-type) value; // often it is inside some register, for i386 eax register is used. |} we have | | | 1. parameter_1 pushed | \ S | 2. parameter_2 pushed | | Before T | ................... | | the calling A | n. parameter_n pushed | / C | ** Return address ** | -- Calling K | 1. local variable_1 | \ | 2. local variable_2 | | After | ................. | | the calling | n. local variable_n | / | | ... ... Free ... ... stack | | H | n. dynamic variable_n | \ E | ................... | | Allocated by A | 2. dynamic variable_2 | | malloc & kmalloc P | 1. dynamic variable_1 | / |_______________________| Typical stack usage Note: variables order can be different depending on hardware architecture.
We have to distinguish 2 concepts:
Often Process is also called Task or Thread.
2 kind of locks:
Copy_on_write is a mechanism used to reduce memory usage. It postpones memory allocation until the memory is really needed.
For example, when a task executes the "fork()" system call (to create another task), we still use the same memory pages as the parent, in read only mode. When a task WRITES into the page, it causes an exception and the page is copied and marked "rw" (read, write).
1-) Page X is shared between Task Parent and Task Child Task Parent | | RO Access ______ | |---------->|Page X| |_________| |______| /|\ | Task Child | | | RO Access | | |---------------- |_________| 2-) Write request Task Parent | | RO Access ______ | |---------->|Page X| Trying to write |_________| |______| /|\ | Task Child | | | RO Access | | |---------------- |_________| 3-) Final Configuration: Either Task Parent and Task Child have an independent copy of the Page, X and Y Task Parent | | RW Access ______ | |---------->|Page X| |_________| |______| Task Child | | RW Access ______ | |---------->|Page Y| |_________| |______|
bbootsect.s [arch/i386/boot] setup.S (+video.S) head.S (+misc.c) [arch/i386/boot/compressed] start_kernel [init/main.c]
Descriptors are data structure used by Intel microprocessor i386+ to virtualize memory.
IR.]
Files:
Functions:
Called functions:
Linux.
Official Linux kernels and patches download site
Great documentation about Linux Kernel
Official Kernel Mailing list
Linux Documentation Project Guides | http://www.ibiblio.org/pub/Linux/docs/HOWTO/other-formats/html_single/KernelAnalysis-HOWTO.html | CC-MAIN-2014-41 | refinedweb | 1,282 | 57.06 |
I'm enjoying the book Effective C++. It has highlighted things that I've looked over, never heard of, or never even though of before! However, even with this book my understanding of C++ still doesn't seem to be solid.
For some time now I've been curious about how binary data is handled as well as developing efficient algorithms to improve performance. My curiosity grew after seeing a lot of posts here that strictly deal with data problems (conversions, casts, copying data, serialization and file I/O, binary algorithms, etc).
From what I understand, at some point or another, an object in C++ holds data based on the built-in types of C++ (and if it doesn't, I'd assume for that object to be a "tool" object, like pow or other math funcctions I suppose). What I want to understand is how to have better control of data and how it is represented in a class, structure, union or namespace. Basically a recommendation for a book that really goes in depth on how data is represented in all possible data containers.
For example, how is data (at the binary level) sorted in a struct or a class? In a union it's fairly straightforward (the amount of memory that's used in a union is shared across its members). For classes and structs I'd assume for it to be conditional (what separates the binary representation of the data, or is it simply "summed up?"). Additionally how does a namespace resolve data - is it different or the same as classes and structs?
Furthermore, is there a binary representation of the access level of the data, signature of the data, and parameters of the data? During serialization and file I/O, how is the information read and remembered?
So far, C++ has improved my understanding and ability to learn the concepts of programming in general. I am fairly confident that a book that covers a lot of these details in depth will improve my knowledge and help me progress without wondering how a lot of the good programmers here are just "so good." =)
If anyone has any recommendations of a book that has this information, please let me know!
Thanks,
-Alex | https://www.daniweb.com/programming/software-development/threads/140910/understanding-binary-data-and-algorithms | CC-MAIN-2017-26 | refinedweb | 374 | 58.32 |
this is my first question here so if I don't follow perfect etiquette please forgive me. I have been searching for an answer for about 6 hours so I figured it was time to ask. I'm new to Python, and I'm trying to automate data entry with Selenium. I'm doing well, but stuck at the part where I pull data from Excel. The data pulling isn't the problem, but telling my excel to delete the top row, or move onto the next, is. Here is a sample of my script:
import pandas as pd
from pandas import ExcelWriter
import numpy as np
xl = pd.ExcelFile('C:\\Users\\Steph_000\\Desktop\\students2.xlsx')
xl
wr = pd.ExcelWriter('C:\\Users\\Steph_000\\Desktop\\students2.xlsx')
xl.sheet_names
['Sheet1']
df = xl.parse('Sheet1') #This reads the sheet into a dataframe
df
cp = pd.read_excel('C:\\Users\\Steph_000\\Desktop\\students2.xlsx')
IDNum = cp[:1] #This pulls the first row as a variable
print(IDNum) #This is where it prints to the search bar to look the student up
chart = cp[1:] #This pulls everything but the first row as a variable
print(chart)
df.to_excel(wr, 'Sheet1')
wr.save() #This is supposed to overwrite it, but just saves it as blank
Look at example below:
Look() | https://codedump.io/share/BnlCoqI7Nl8f/1/trying-to-remove-top-row-from-excel-spreadsheet-using-python | CC-MAIN-2017-13 | refinedweb | 215 | 73.27 |
Microphone Wireling Tutorial
This Wireling uses the SPW2430 MEMS Microphone to detect sound and output an analog signal.
Technical Details
SPW2430 Specs
- Consists of an acoustic sensor, a low noise input buffer, and an output amplifier
- Low Current
- MaxRF protection
- Ultra-Stable performance
TinyDuino Power Requirements
- Voltage: 3.0V - 5.5V
- Current: 70-110 µA (Normal Mode). Due to the low current, this board can be run using the TinyDuino coin cell option
Pins Used
- A5/SCL - I²C Serial Clock line
- A4/SDA - I²C Serial Data line
Dimensions
- 10mm x 10mm (.394 inches x .394 inches)
- Height (from the lower bottom of Wireling to upper top Wireling Connector): 3.70mm (0.15 Microphone
Make the correct Tools selections for your development board. If unsure, you can double check the Help page that mentions the Tools selections needed for any TinyCircuits processor.
Upload Program
If you would like to use a different Wireling Port, simply change the value of micPin in the code and plug the Wireling into the corresponding port on the adapter. Note that on the Wireling Adapter TinyShield, A0 corresponds to Port 0, A1 to Port 1, and so on.
Open the Serial Monitor to see the analog value the microphone is picking up.
Code
/* TinyCircuits Microphone Wireling Example Sketch This sketch reads the analog value output by the microphone based on the volume of sound it receives. The best way to see data is via the Serial Plotter. A simple test can be done by speaking at a normal volume, clapping, or blowing gently into the microphone and looking at the data on screen. Written 15 July 2019 By Hunter Hykes Modified N/A By N/A */ #if defined (ARDUINO_ARCH_AVR) #define SerialMonitorInterface Serial #elif defined(ARDUINO_ARCH_SAMD) #define SerialMonitorInterface SerialUSB #endif float micPin = A0; // use port 0 float value = 0.0; void setup() { SerialMonitorInterface.begin(9600); while (!SerialMonitorInterface); // must open Serial Monitor to execute following code delay(100); } void loop() { value = analogRead(micPin); // read the input pin SerialMonitorInterface.println(value); // print value delay(1); }
Try clapping near the microphone or blowing into it to see a reading on the Analog pin.
Downloads
If you have any questions or feedback, feel free to email us or make a post on our forum. Show us what you make by tagging @TinyCircuits on Instagram, Twitter, or Facebook so we can feature it.
Thanks for making with us! | https://learn.tinycircuits.com/Wirelings/Microphone_Wireling_Tutorial/ | CC-MAIN-2022-33 | refinedweb | 396 | 53.21 |
My.
// BEGIN SPDBEX.CS using System; using System.Collections.Generic; using System.Text; using System.Data; using System.Data.SqlClient; using System.IO; namespace spdbex { class Program { static void Main(string[] args) { // replace this string with your // Sharepoint content DB connection string string DBConnString = "Server=DATABASESERVER;" + "Database=CONTENTDBNAME;Trusted_Connection=True;"; // create a DB connection SqlConnection con = new SqlConnection(DBConnString); con.Open(); // the query to grab all the files. SqlCommand com = con.CreateCommand(); com.CommandText = "SELECT ad.SiteId, ad.Id, ad.DirName," + " ad.LeafName, ads.Content" + " FROM AllDocs ad, AllDocStreams ads" + " WHERE ad.SiteId = ads.SiteId" + " AND ad.Id = ads.Id" + " AND ads.Content IS NOT NULL" + " Order by DirName"; //(4,(); } } } // END SPDBEX.CS
Thanks Keith. This works great.
tk
Comment by Todd Klindt — July 6, 2008 @ 11:25 pm
[...] Exporting site content from a SharePoint Content Database for recovery purposes [...]
Pingback by Links (7/6/2008) « Steve Pietrek - Everything SharePoint — July 7, 2008 @ 7:44 am
[...] Exporting Site Content From a SharePoint Content Database for Recovery Purposes (Keith Richie) [...]
Pingback by Dew Drop - July 7, 2008 | Alvin Ashcraft's Morning Dew — July 7, 2008 @ 6:46 pm
Keith
Loving it buddy !
Comment by Neil Hodgkinson — July 12, 2008 @ 5:49 am
Thanks for the excellent code – it’s a real life saver!
Comment by Mark Fertig — October 15, 2008 @ 12:58 am
Hi,
Thanks for this code, it’s been really useful. The only problem I have at the moment is we have quite a few folder paths that are longer than 248 characters. Do you know if there is a workaround to this?
Thanks
Comment by Al — November 11, 2008 @ 11:30 pm.
Comment by Keith Richie — November 11, 2008 @ 11:40 pm)
Comment by Al — November 12, 2008 @ 8:46 pm
Hi,
I managed to find some code on the web to allow me to use a DSN. Thanks once again.
Comment by Al — November 12, 2008 @ 9:11 pm
My god… you have just saved my life. This is the only way I was able to get the content after upgrading from WSS3 to MOSS 2007 and experiencing complete meltdown. You are a god among men- a gentleman, a saint!
Comment by John — November 17, 2008 @ 4:48 pm
I deleted a site from the site settings…but then hit cancel. From what I can tell the database is intact…can I rebuild my site with the code above
Comment by Virginia — December 14, 2008 @ 9:57 am.
Comment by Jehan — March 27, 2009 @ 3:31 pm
Thanks Jehan!
Comment by Keith Richie — March 27, 2009 @ 8:53 pm.
Comment by techiee — April 25, 2009 @ 1:06 pm
An error occurred when I execute spdbex.exe.
“ArgumentException”
How can I fix this error?
Comment by sungho — June 12, 2009 @ 1:44 pm
I need a bit more context here. the code snippet doesn’t take any arguments.
Do you get a stack trace after the error?
Comment by Keith Richie — June 23, 2009 @ 8:46 pm | http://blog.krichie.com/2008/07/06/exporting-site-content-from-a-sharepoint-content-database-for-recovery-purposes/ | crawl-002 | refinedweb | 499 | 68.57 |
This problem is based on an assignment from the deeplearning.ai course on convolutional neural networks (the lecture videos corresponding to the YOLO algorithm can be found here). The problem description is taken directly from the assignment.
Given a set of images (a car detection dataset), the goal is to detect objects (cars) in those images using a pre-trained YOLO (You Only Look Once) model, with bounding boxes. Many of the ideas are from the two original YOLO papers: Redmon et al., 2016 and Redmon and Farhadi, 2016.
Some Theory
Let's first clarify the concepts of classification, localization, and detection, and see how the object detection problem can be transformed into a supervised machine learning problem and subsequently solved using a deep convolutional neural network. As can be seen from the next figure,
- Image classification with localization aims to find the location of an object in an image by not only classifying the image (e.g., a binary classification problem: whether there is a car in an image or not), but also finding a bounding box around the object, if one found.
- Detection goes a level further by aiming to identify multiple instances of same/ different types of objects, by marking their locations (the localization problem usually tries to find a single object location).
- The localization problem can be converted to a supervised machine learning multi-class classification problem in the following way: in addition to the class label of the object to be identified, the output vector corresponding to an input training image must also contain the location (bounding box coordinates relative to image size) of the object.
- A typical output vector will contain 8 entries for this 4-class classification problem, as shown in the next figure: the first entry indicates whether or not an object from any of the 3 object classes is present. In case one is present in the image, the next 4 entries define the bounding box containing the object, followed by 3 binary values for the 3 class labels indicating the class of the object. In case none of the objects are present, the first entry will be 0 and the others will be ignored.
- Now moving from localization to detection, one can proceed in two steps as shown below in the next figure: first use small tightly cropped images to train a convolution neural net for image classification and then use sliding windows of different window sizes (smaller to larger) to classify a test image within that window using the convnet learnt and run the windows sequentially through the entire image, but it’s infeasibly slow computationally.
- However, as shown in the next figure, the convolutional implementation of the sliding windows by replacing the fully-connected layers by 1×1 filters makes it possible to simultaneously classify the image-subset inside all possible sliding windows parallelly, making it much more efficient computationally.
- The convolutional sliding windows, although computationally much more efficient, still has the problem of detecting the accurate bounding boxes, since the boxes don’t align with the sliding windows and the object shapes also tend to be different.
- The YOLO algorithm overcomes this limitation by dividing a training image into a grid of cells and assigning an object to a cell if and only if the center of the object falls inside that cell; that way, each object in a training image gets assigned to exactly one grid cell, and the corresponding bounding box is represented by coordinates relative to that cell. The next figure describes the details of the algorithm.
- In the test images, multiple adjacent grid cells may each think that an object actually belongs to them. To resolve this, the IoU (intersection over union) measure is used to find the maximum overlap, and the non-maximum-suppression algorithm is used to discard all the competing bounding boxes with low confidence of containing an object, keeping only the one with the highest confidence.
- Still, there is the problem of multiple objects falling in the same grid cell. Multiple anchor boxes (of different shapes) are used to resolve the problem, each anchor box of a particular shape being likely to eventually detect an object of a similar shape.
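The 8-entry target vector described above can be sketched in NumPy; the specific numbers and the 3-class setup are made up purely for illustration:

```python
import numpy as np

# Target vector layout for a 4-class problem (3 object classes + "no object"):
#   y = [pc, bx, by, bh, bw, c1, c2, c3]
# pc = 1 if any object is present, (bx, by) = box center, (bh, bw) = box
# height/width (all relative to the image size), (c1, c2, c3) = one-hot class.

# An image containing an object of class 2, centered slightly left of the
# image center and occupying about 30% of the image (made-up numbers):
y_object = np.array([1, 0.4, 0.5, 0.3, 0.3, 0, 1, 0])

# An image containing no object: pc = 0 and the remaining entries are
# ignored by the loss (written as zeros here):
y_no_object = np.zeros(8)

print(y_object.shape)
```

During training, the loss is computed on all 8 entries when pc = 1, and only on the first entry when pc = 0.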
The following figure shows the slides taken from the presentation You Only Look Once: Unified, Real-Time Object Detection in the CVPR 2016 summarizing the algorithm:
Problem Statement
Let’s assume that we are working on a self-driving car. As a critical component of this project, we’d like to first build a car detection system. To collect data, we’ve mounted a camera to the hood (meaning the front) of the car, which takes pictures of the road ahead every few seconds while we drive around.
The above pictures are taken from a car-mounted camera while driving around Silicon Valley. We would like to especially thank drive.ai for providing this dataset! Drive.ai is a company building the brains of self-driving vehicles.
We’ve gathered all these images into a folder and have labelled them by drawing bounding boxes around every car we found. Here’s an example of what our bounding boxes look like.
Definition of a box
If we have 80 classes that we want YOLO to recognize, we can represent the class label c either as an integer from 1 to 80, or as an 80-dimensional vector (with 80 numbers) one component of which is 1 and the rest of which are 0. Here we will use both representations, depending on which is more convenient for a particular step.
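For instance, converting between the two representations in NumPy (the class index 3 is an arbitrary choice for the example):

```python
import numpy as np

# Integer representation of a class label (index 3 is an arbitrary example):
c_int = 3

# Equivalent 80-dimensional one-hot representation:
c_onehot = np.zeros(80)
c_onehot[c_int - 1] = 1

# The two representations carry the same information:
print(np.argmax(c_onehot) + 1)
```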
In this exercise, we shall learn how YOLO works, then apply it to car detection. Because the YOLO model is very computationally expensive to train, we will load pre-trained weights for our use. The instructions for how to do it can be obtained from here and here.
YOLO
YOLO ("you only look once") is a popular algorithm because it achieves high accuracy while also being able to run in real time: it "only looks once" at the image, in the sense that it requires only one forward propagation pass through the network to make its predictions.
Model details: if we expand c into an 80-dimensional vector, each bounding box is then represented by 85 numbers (pc, bx, by, bh, bw, followed by the 80 class probabilities).
We will use 5 anchor boxes. So we can think of the YOLO architecture as the following: IMAGE (m, 608, 608, 3) -> DEEP CNN -> ENCODING (m, 19, 19, 5, 85).
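The shape of this encoding can be sketched with NumPy; random numbers stand in for a real network's output here:

```python
import numpy as np

m = 1                                   # batch size
raw = np.random.rand(m, 19, 19, 425)    # stand-in for the DEEP CNN output (5 x 85 = 425)

# Unflatten the last dimension into (5 anchor boxes, 85 numbers per box):
encoding = raw.reshape(m, 19, 19, 5, 85)

# For one grid cell and one anchor box:
pc          = encoding[0, 0, 0, 0, 0]    # confidence that an object is present
box         = encoding[0, 0, 0, 0, 1:5]  # (bx, by, bh, bw)
class_probs = encoding[0, 0, 0, 0, 5:]   # the 80 class probabilities

print(encoding.shape)
```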
Let’s look in greater detail at what this encoding represents.
Encoding architecture for YOLO.
Flattening the last two dimensions
Now, for each box (of each cell) we will compute the following element-wise product and extract a probability that the box contains a certain class.
Find the class detected by each box
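As a small NumPy sketch of this element-wise product and per-box class selection (random numbers stand in for real predictions):

```python
import numpy as np

np.random.seed(0)
box_confidence  = np.random.rand(19, 19, 5, 1)    # pc for each box
box_class_probs = np.random.rand(19, 19, 5, 80)   # c1..c80 for each box

# Element-wise product: the score of each class for each box
box_scores = box_confidence * box_class_probs     # (19, 19, 5, 80)

# Class detected by each box and the corresponding score
box_classes      = np.argmax(box_scores, axis=-1) # (19, 19, 5)
box_class_scores = np.max(box_scores, axis=-1)    # (19, 19, 5)

print(box_scores.shape, box_classes.shape)
```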
Here’s one way to visualize what YOLO is predicting on an image:
- For each of the 19×19 grid cells, find the maximum of the probability scores (taking a max across both the 5 anchor boxes and across different classes).
- Color that grid cell according to what object that grid cell considers the most likely.
Doing this results in this picture:
Each of the 19×19 grid cells colored according to which class has the largest predicted probability in that cell.
Each cell gives us 5 boxes. In total, the model predicts: 19x19x5 = 1805 boxes just by looking once at the image (one forward pass through the network)! Different colors denote different classes.
In the figure above, we plotted only boxes that the model had assigned a high probability to, but this is still too many boxes. You’d like to filter the algorithm’s output down to a much smaller number of detected objects. To do so, we’ll use non-max suppression. Specifically, we’ll carry out these steps:
- Get rid of boxes with a low score (meaning, the box is not very confident about detecting a class).
- Select only one box when several boxes overlap with each other and detect the same object.
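These two steps can be illustrated with a minimal pure-NumPy greedy NMS (the TensorFlow implementation used later in this post relies on a built-in function instead; the boxes and scores below are made up):

```python
import numpy as np

def iou_np(a, b):
    """IoU of two boxes given as (x1, y1, x2, y2)."""
    xi1, yi1 = max(a[0], b[0]), max(a[1], b[1])
    xi2, yi2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(xi2 - xi1, 0) * max(yi2 - yi1, 0)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)

def nms(boxes, scores, iou_threshold=0.5):
    """Greedy non-max suppression; returns indices of kept boxes."""
    order = np.argsort(scores)[::-1]          # highest score first
    keep = []
    while len(order) > 0:
        best = order[0]
        keep.append(best)
        # drop every remaining box that overlaps the best one too much
        order = np.array([i for i in order[1:]
                          if iou_np(boxes[best], boxes[i]) < iou_threshold])
    return keep

# Three overlapping detections of the same object plus one distant box:
boxes  = np.array([[0, 0, 2, 2], [0.1, 0, 2, 2.1], [0, 0.1, 2.1, 2], [5, 5, 7, 7]])
scores = np.array([0.9, 0.75, 0.6, 0.8])
print(nms(boxes, scores))   # the 0.9 box suppresses its two overlaps
```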
Filtering with a threshold on class scores
We are going to apply a first filter by thresholding. We would like to get rid of any box for which the class “score” is less than a chosen threshold.
The model gives us a total of 19x19x5x85 numbers, with each box described by 85 numbers. It’ll be convenient to rearrange the (19,19,5,85) (or (19,19,425)) dimensional tensor into the following variables:
- box_confidence: tensor of shape (19×19,5,1) containing pc (confidence probability that there’s some object) for each of the 5 boxes predicted in each of the 19×19 cells.
- boxes: tensor of shape (19×19,5,4) containing (bx,by,bh,bw) for each of the 5 boxes per cell.
- box_class_probs: tensor of shape (19×19,5,80) containing the detection probabilities (c1,c2,…c80) for each of the 80 classes for each of the 5 boxes per cell.
Exercise: Implement yolo_filter_boxes().
- Compute box scores by doing the element-wise product as described in the above figure.
- For each box, find:
- the index of the class with the maximum box score.
- the corresponding box score.
- Create a mask by using a threshold. The mask should be True for the boxes you want to keep.
- Use TensorFlow to apply the mask to box_class_scores, boxes and box_classes to filter out the boxes we don’t want.
We should be left with just the subset of boxes we want to keep.
Let’s first load the packages and dependencies that are going to be useful.
import argparse
def yolo_filter_boxes(box_confidence, boxes, box_class_probs, threshold = .6): """Filters YOLO boxes by thresholding on object and class confidence. Arguments: box_confidence -- tensor of shape (19, 19, 5, 1) boxes -- tensor of shape (19, 19, 5, 4) box_class_probs -- tensor of shape (19, 19, 5, 80) threshold -- real value, if [ highest class probability score = threshold) # Step 4: Apply the mask to scores, boxes and classes return scores, boxes, classes
Non-max suppression
Even after filtering by thresholding over the classes scores, we still end up a lot of overlapping boxes. A second filter for selecting the right boxes is called non-maximum suppression (NMS).
n this example, the model has predicted 3 cars, but it’s actually 3 predictions of the same car. Running non-max suppression (NMS) will select only the most accurate (highest probability) one of the 3 boxes.
Non-max suppression uses the very important function called “Intersection over Union”, or IoU.
Definition of “Intersection over Union”
Exercise: Implement iou(). Some hints:
- In this exercise only, we define a box using its two corners (upper left and lower right): (x1, y1, x2, y2) rather than the midpoint and height/width.
- To calculate the area of a rectangle we need to multiply its height (y2 – y1) by its width (x2 – x1)
-
In this code, we use the convention that (0,0) is the top-left corner of an image, (1,0) is the upper-right corner, and (1,1) the lower-right corner.. # Calculate the Union area by using Formula: Union(A,B) = A + B - Inter(A,B) # compute the IoU return iou.
Exercise: Implement yolo_non_max_suppression() using TensorFlow. TensorFlow has two built-in functions that are used to implement non-max suppression (so we don’t actually need to use your iou() implementation): # Use K.gather() to select only nms_indices from scores, boxes and classes return scores, boxes, classes
Wrapping up the filtering
It’s time to implement a function taking the output of the deep CNN (the 19x19x5x85 dimensional encoding) and filtering through all the boxes using the functions we’ve just implemented.
Exercise: Implement yolo_eval() which takes the output of the YOLO encoding and filters the boxes using score threshold and NMS. There’s just one last implementational detail we have to know. There’re a few ways of representing boxes, such as via their corners or via their midpoint and height/width. YOLO converts between a few such formats at different times, using the following functions (which are provided):
boxes = yolo_boxes_to_corners(box_xy, box_wh)
which converts the yolo box coordinates (x,y,w,h) to box corners’ coordinates (x1, y1, x2, y2) to fit the input of yolo_filter_boxes
boxes = scale_boxes(boxes, image_shape)
YOLO’s network was trained to run on 608×608 images. If we are testing this data on a different size image – for example, the car detection dataset had 720×1280 images – his step rescales the boxes so that they can be plotted on top of the original 720×1280 image.
def yolo_eval(yolo_outputs, image_shape = (720., 1280.), max_boxes=10, score_threshold=.6, iou_threshold=.5): """ Converts the output of YOLO encoding (a lot of boxes) to your predicted boxes along with their scores, box coordinates and classes. Arguments: yolo_outputs -- output of the encoding model (for image_shape of (608, 608, 3)), contains 4 tensors: box_confidence: tensor of shape (None, 19, 19, 5, 1) box_xy: tensor of shape (None, 19, 19, 5, 2) box_wh: tensor of shape (None, 19, 19, 5, 2) box_class_probs: tensor of shape (None, 19, 19, 5, 80) image_shape -- tensor of shape (2,) containing the input shape, in this notebook we use (608., 608.) (has to be float32 dtype) max_boxes -- integer, maximum number of predicted boxes you'd like score_threshold -- real value, if [ highest class probability score < threshold], then get rid of the corresponding box iou_threshold -- real value, "intersection over union" threshold used for NMS filtering Returns: scores -- tensor of shape (None, ), predicted score for each box boxes -- tensor of shape (None, 4), predicted box coordinates classes -- tensor of shape (None,), predicted class for each box """ # Retrieve outputs of the YOLO model # Convert boxes to be ready for filtering functions # Use one of the functions you've implemented to perform Score-filtering with a threshold of score_threshold # Scale boxes back to original image shape. # Use one of the functions you've implemented to perform Non-max suppression with a threshold of iou_threshold return scores, boxes, classes
Summary for YOLO:
- Input image (608, 608, 3)
- The input image goes through a CNN, resulting in a (19,19,5,85) dimensional output.
- After flattening the last two dimensions, the output is a volume of shape (19, 19, 425):
- Each cell in a 19×19 grid over the input image gives 425 numbers.
- 425 = 5 x 85 because each cell contains predictions for 5 boxes, corresponding to 5 anchor boxes, as seen in lecture.
- 85 = 5 + 80 where 5 is because (pc,bx,by,bh,bw) has 5 numbers, and and 80 is the number of classes we’d like to detect.
- We then select only few boxes based on:
- Score-thresholding: throw away boxes that have detected a class with a score less than the threshold.
- Non-max suppression: Compute the Intersection over Union and avoid selecting overlapping boxes.
- This gives us YOLO’s final output.
Test YOLO pretrained model on images
In this part, we are going to use a pre-trained model and test it on the car detection dataset. As usual, we start by creating a session to start your graph. Run the following cell.
sess = K.get_session()
Defining classes, anchors and image shape.
Recall that we are trying to detect 80 classes, and are using 5 anchor boxes. We have gathered the information about the 80 classes and 5 boxes in two files “coco_classes.txt” and “yolo_anchors.txt”. Let’s load these quantities into the model by running the next cell.
The car detection dataset has 720×1280 images, which we’ve pre-processed into 608×608 images.
class_names = read_classes(“coco_classes.txt”)
anchors = read_anchors(“yolo_anchors.txt”)
image_shape = (720., 1280.)
Training a YOLO model takes a very long time and requires a fairly large dataset of labelled bounding boxes for a large range of target classes. We are going to load an existing pretrained Keras YOLO model stored in “yolo.h5”. (These weights come from the official YOLO website, and were converted using a function written by Allan Zelener. Technically, these are the parameters from the “YOLOv2” model, but we will more simply refer to it as “YOLO” in this notebook.)
yolo_model = load_model(“yolo.h5”)
This loads the weights of a trained YOLO model. Here’s a summary of the layers our model contains.
yolo_model.summary()
____________________________________________________________________________________________
Layer (type) Output Shape Param # Connected to
===========================================================================
input_1 (InputLayer) (None, 608, 608, 3) 0
____________________________________________________________________________________________
conv2d_1 (Conv2D) (None, 608, 608, 32) 864 input_1[0][0]
____________________________________________________________________________________________
batch_normalization_1 (BatchNor (None, 608, 608, 32) 128 conv2d_1[0][0]
____________________________________________________________________________________________
leaky_re_lu_1 (LeakyReLU) (None, 608, 608, 32) 0 batch_normalization_1[0][0]
____________________________________________________________________________________________
max_pooling2d_1 (MaxPooling2D) (None, 304, 304, 32) 0 leaky_re_lu_1[0][0]
____________________________________________________________________________________________
conv2d_2 (Conv2D) (None, 304, 304, 64) 18432 max_pooling2d_1[0][0]
____________________________________________________________________________________________
batch_normalization_2 (BatchNor (None, 304, 304, 64) 256 conv2d_2[0][0]
____________________________________________________________________________________________
leaky_re_lu_2 (LeakyReLU) (None, 304, 304, 64) 0 batch_normalization_2[0][0]
____________________________________________________________________________________________
max_pooling2d_2 (MaxPooling2D) (None, 152, 152, 64) 0 leaky_re_lu_2[0][0]
____________________________________________________________________________________________
conv2d_3 (Conv2D) (None, 152, 152, 128 73728 max_pooling2d_2[0][0]
____________________________________________________________________________________________
batch_normalization_3 (BatchNor (None, 152, 152, 128 512 conv2d_3[0][0]
____________________________________________________________________________________________
leaky_re_lu_3 (LeakyReLU) (None, 152, 152, 128 0 batch_normalization_3[0][0]
____________________________________________________________________________________________
conv2d_4 (Conv2D) (None, 152, 152, 64) 8192 leaky_re_lu_3[0][0]
____________________________________________________________________________________________
batch_normalization_4 (BatchNor (None, 152, 152, 64) 256 conv2d_4[0][0]
____________________________________________________________________________________________
leaky_re_lu_4 (LeakyReLU) (None, 152, 152, 64) 0 batch_normalization_4[0][0]
____________________________________________________________________________________________
conv2d_5 (Conv2D) (None, 152, 152, 128 73728 leaky_re_lu_4[0][0]
____________________________________________________________________________________________
batch_normalization_5 (BatchNor (None, 152, 152, 128 512 conv2d_5[0][0]
____________________________________________________________________________________________
leaky_re_lu_5 (LeakyReLU) (None, 152, 152, 128 0 batch_normalization_5[0][0]
____________________________________________________________________________________________
max_pooling2d_3 (MaxPooling2D) (None, 76, 76, 128) 0 leaky_re_lu_5[0][0]
____________________________________________________________________________________________
conv2d_6 (Conv2D) (None, 76, 76, 256) 294912 max_pooling2d_3[0][0]
____________________________________________________________________________________________
batch_normalization_6 (BatchNor (None, 76, 76, 256) 1024 conv2d_6[0][0]
____________________________________________________________________________________________
leaky_re_lu_6 (LeakyReLU) (None, 76, 76, 256) 0 batch_normalization_6[0][0]
____________________________________________________________________________________________
conv2d_7 (Conv2D) (None, 76, 76, 128) 32768 leaky_re_lu_6[0][0]
____________________________________________________________________________________________
batch_normalization_7 (BatchNor (None, 76, 76, 128) 512 conv2d_7[0][0]
____________________________________________________________________________________________
leaky_re_lu_7 (LeakyReLU) (None, 76, 76, 128) 0 batch_normalization_7[0][0]
____________________________________________________________________________________________
conv2d_8 (Conv2D) (None, 76, 76, 256) 294912 leaky_re_lu_7[0][0]
____________________________________________________________________________________________
batch_normalization_8 (BatchNor (None, 76, 76, 256) 1024 conv2d_8[0][0]
____________________________________________________________________________________________
leaky_re_lu_8 (LeakyReLU) (None, 76, 76, 256) 0 batch_normalization_8[0][0]
____________________________________________________________________________________________
max_pooling2d_4 (MaxPooling2D) (None, 38, 38, 256) 0 leaky_re_lu_8[0][0]
____________________________________________________________________________________________
conv2d_9 (Conv2D) (None, 38, 38, 512) 1179648 max_pooling2d_4[0][0]
____________________________________________________________________________________________
batch_normalization_9 (BatchNor (None, 38, 38, 512) 2048 conv2d_9[0][0]
____________________________________________________________________________________________
leaky_re_lu_9 (LeakyReLU) (None, 38, 38, 512) 0 batch_normalization_9[0][0]
____________________________________________________________________________________________
conv2d_10 (Conv2D) (None, 38, 38, 256) 131072 leaky_re_lu_9[0][0]
____________________________________________________________________________________________
batch_normalization_10 (BatchNo (None, 38, 38, 256) 1024 conv2d_10[0][0]
____________________________________________________________________________________________
leaky_re_lu_10 (LeakyReLU) (None, 38, 38, 256) 0 batch_normalization_10[0][0]
____________________________________________________________________________________________
conv2d_11 (Conv2D) (None, 38, 38, 512) 1179648 leaky_re_lu_10[0][0]
____________________________________________________________________________________________
batch_normalization_11 (BatchNo (None, 38, 38, 512) 2048 conv2d_11[0][0]
____________________________________________________________________________________________
leaky_re_lu_11 (LeakyReLU) (None, 38, 38, 512) 0 batch_normalization_11[0][0]
____________________________________________________________________________________________
conv2d_12 (Conv2D) (None, 38, 38, 256) 131072 leaky_re_lu_11[0][0]
____________________________________________________________________________________________
batch_normalization_12 (BatchNo (None, 38, 38, 256) 1024 conv2d_12[0][0]
____________________________________________________________________________________________
leaky_re_lu_12 (LeakyReLU) (None, 38, 38, 256) 0 batch_normalization_12[0][0]
____________________________________________________________________________________________
conv2d_13 (Conv2D) (None, 38, 38, 512) 1179648 leaky_re_lu_12[0][0]
____________________________________________________________________________________________
batch_normalization_13 (BatchNo (None, 38, 38, 512) 2048 conv2d_13[0][0]
____________________________________________________________________________________________
leaky_re_lu_13 (LeakyReLU) (None, 38, 38, 512) 0 batch_normalization_13[0][0]
____________________________________________________________________________________________
max_pooling2d_5 (MaxPooling2D) (None, 19, 19, 512) 0 leaky_re_lu_13[0][0]
____________________________________________________________________________________________
conv2d_14 (Conv2D) (None, 19, 19, 1024) 4718592 max_pooling2d_5[0][0]
____________________________________________________________________________________________
batch_normalization_14 (BatchNo (None, 19, 19, 1024) 4096 conv2d_14[0][0]
____________________________________________________________________________________________
leaky_re_lu_14 (LeakyReLU) (None, 19, 19, 1024) 0 batch_normalization_14[0][0]
____________________________________________________________________________________________
conv2d_15 (Conv2D) (None, 19, 19, 512) 524288 leaky_re_lu_14[0][0]
____________________________________________________________________________________________
batch_normalization_15 (BatchNo (None, 19, 19, 512) 2048 conv2d_15[0][0]
____________________________________________________________________________________________
leaky_re_lu_15 (LeakyReLU) (None, 19, 19, 512) 0 batch_normalization_15[0][0]
____________________________________________________________________________________________
conv2d_16 (Conv2D) (None, 19, 19, 1024) 4718592 leaky_re_lu_15[0][0]
____________________________________________________________________________________________
batch_normalization_16 (BatchNo (None, 19, 19, 1024) 4096 conv2d_16[0][0]
____________________________________________________________________________________________
leaky_re_lu_16 (LeakyReLU) (None, 19, 19, 1024) 0 batch_normalization_16[0][0]
____________________________________________________________________________________________
conv2d_17 (Conv2D) (None, 19, 19, 512) 524288 leaky_re_lu_16[0][0]
____________________________________________________________________________________________
batch_normalization_17 (BatchNo (None, 19, 19, 512) 2048 conv2d_17[0][0]
____________________________________________________________________________________________
leaky_re_lu_17 (LeakyReLU) (None, 19, 19, 512) 0 batch_normalization_17[0][0]
____________________________________________________________________________________________
conv2d_18 (Conv2D) (None, 19, 19, 1024) 4718592 leaky_re_lu_17[0][0]
____________________________________________________________________________________________
batch_normalization_18 (BatchNo (None, 19, 19, 1024) 4096 conv2d_18[0][0]
____________________________________________________________________________________________
leaky_re_lu_18 (LeakyReLU) (None, 19, 19, 1024) 0 batch_normalization_18[0][0]
____________________________________________________________________________________________
conv2d_19 (Conv2D) (None, 19, 19, 1024) 9437184 leaky_re_lu_18[0][0]
____________________________________________________________________________________________
batch_normalization_19 (BatchNo (None, 19, 19, 1024) 4096 conv2d_19[0][0]
____________________________________________________________________________________________
conv2d_21 (Conv2D) (None, 38, 38, 64) 32768 leaky_re_lu_13[0][0]
____________________________________________________________________________________________
leaky_re_lu_19 (LeakyReLU) (None, 19, 19, 1024) 0 batch_normalization_19[0][0]
____________________________________________________________________________________________
batch_normalization_21 (BatchNo (None, 38, 38, 64) 256 conv2d_21[0][0]
____________________________________________________________________________________________
conv2d_20 (Conv2D) (None, 19, 19, 1024) 9437184 leaky_re_lu_19[0][0]
____________________________________________________________________________________________
leaky_re_lu_21 (LeakyReLU) (None, 38, 38, 64) 0 batch_normalization_21[0][0]
____________________________________________________________________________________________
batch_normalization_20 (BatchNo (None, 19, 19, 1024) 4096 conv2d_20[0][0]
____________________________________________________________________________________________
space_to_depth_x2 (Lambda) (None, 19, 19, 256) 0 leaky_re_lu_21[0][0]
____________________________________________________________________________________________
leaky_re_lu_20 (LeakyReLU) (None, 19, 19, 1024) 0 batch_normalization_20[0][0]
____________________________________________________________________________________________
concatenate_1 (Concatenate) (None, 19, 19, 1280) 0 space_to_depth_x2[0][0]
leaky_re_lu_20[0][0]
____________________________________________________________________________________________
conv2d_22 (Conv2D) (None, 19, 19, 1024) 11796480 concatenate_1[0][0]
____________________________________________________________________________________________
batch_normalization_22 (BatchNo (None, 19, 19, 1024) 4096 conv2d_22[0][0]
____________________________________________________________________________________________
leaky_re_lu_22 (LeakyReLU) (None, 19, 19, 1024) 0 batch_normalization_22[0][0]
____________________________________________________________________________________________
conv2d_23 (Conv2D) (None, 19, 19, 425) 435625 leaky_re_lu_22[0][0]
===========================================================================
Total params: 50,983,561
Trainable params: 50,962,889
Non-trainable params: 20,672
____________________________________________________________________________________________
Reminder: this model converts a pre-processed batch of input images (shape: (m, 608, 608, 3)) into a tensor of shape (m, 19, 19, 5, 85) as explained in the above Figure.
Convert output of the model to usable bounding box tensors
The output of yolo_model is a (m, 19, 19, 5, 85) tensor that needs to pass through non-trivial processing and conversion. The following code does this.
yolo_outputs = yolo_head(yolo_model.output, anchors, len(class_names))
We added yolo_outputs to your graph. This set of 4 tensors is ready to be used as input by our yolo_eval function.
Filtering boxes
yolo_outputs gave us all the predicted boxes of yolo_model in the correct format. We’re now ready to perform filtering and select only the best boxes. Lets now call yolo_eval, which you had previously implemented, to do this.
scores, boxes, classes = yolo_eval(yolo_outputs, image_shape)
Run the graph on an image
Let the fun begin. We have created a (sess) graph that can be summarized as follows:
- yolo_model.input is given to yolo_model. The model is used to compute the output yolo_model.output
- yolo_model.output is processed by yolo_head. It gives us yolo_outputs
- yolo_outputs goes through a filtering function, yolo_eval. It outputs your predictions: scores, boxes, classes
Exercise: Implement predict() which runs the graph to test YOLO on an image. We shall need to run a TensorFlow session, to have it compute scores, boxes, classes.
The code below also uses the following function:
image, image_data = preprocess_image(“images/” + image_file, model_image_size = (608, 608))
which outputs:
- image: a python (PIL) representation of your image used for drawing boxes. You won’t need to use it.
- image_data: a numpy-array representing the image. This will be the input to the CNN.
Important note: when a model uses BatchNorm (as is the case in YOLO), we will need to pass an additional placeholder in the feed_dict {K.learning_phase(): 0}.
def predict(sess, image_file): """ Runs the graph stored in "sess" to predict boxes for "image_file". Prints and plots the preditions. Arguments: sess -- your tensorflow/Keras session containing the YOLO graph image_file -- name of an image stored in the "images" folder. Returns: out_scores -- tensor of shape (None, ), scores of the predicted boxes out_boxes -- tensor of shape (None, 4), coordinates of the predicted boxes out_classes -- tensor of shape (None, ), class index of the predicted boxes Note: "None" actually represents the number of predicted boxes, it varies between 0 and max_boxes. """ # Preprocess your image # Run the session with the correct tensors and choose the correct placeholders in the # feed_dict. We'll need to use feed_dict={yolo_model.input: ... , output_image = scipy.misc.imread(os.path.join("out", image_file)) imshow(output_image) return out_scores, out_boxes, out_classes
Let’s Run the following cell on the following “test.jpg” image to verify that our function is correct.
Input
out_scores, out_boxes, out_classes = predict(sess, “test.jpg”)
The following figure shows the output after car detection. Each of the bounding boxes have the name of the object detected on the top left along with the confidence value.
Output (with detected cars with YOLO)
Found 7 boxes for test.jpg
car 0.60 (925, 285) (1045, 374)
car 0.66 (706, 279) (786, 350)
bus 0.67 (5, 266) (220, 407)
car 0.70 (947, 324) (1280, 705)
car 0.74 (159, 303) (346, 440)
car 0.80 (761, 282) (942, 412)
car 0.89 (367, 300) (745, 648)
The following animation shows the output Images with detected objects (cars) using YOLO for a set of input images.
What.
References: The ideas presented in this notebook came primarily from the two YOLO papers. The implementation here also took significant inspiration and used many components from Allan Zelener’s github repository. The pretrained .
Car detection dataset: Creative Commons License.
The Drive.ai Sample Dataset (provided by drive.ai) is licensed under a Creative Commons Attribution 4.0 International License.
One thought on “Autonomous Driving – Car detection with YOLO Model with Keras in Python”
@Meister yeah should notice about the citations of the original work at the very outset of the article before any allegation. | https://sandipanweb.wordpress.com/2018/03/11/autonomous-driving-car-detection-with-yolo-in-python/ | CC-MAIN-2020-24 | refinedweb | 4,178 | 55.98 |
12.4. Creating a Model to Correct PurpleAir Measurements
12.4.1. Loading in the Data
Let’s begin by loading in the cleaned dataset of matched AQS and PurpleAir PM2.5 readings. We do some data processing as we load in the CSV file. Since this section focuses on modeling, we’ve left it as an exercise to understand what the code does.
csv_file = 'data/cleaned_purpleair_aqs/Full24hrdataset.csv'
usecols = ['Date', 'ID', 'region', 'PM25FM', 'PM25cf1', 'TempC', 'RH', 'Dewpoint']
full = (pd.read_csv(csv_file, usecols=usecols, parse_dates=['Date'])
        .dropna())
full.columns = ['date', 'id', 'region', 'pm25aqs', 'pm25pa', 'temp', 'rh', 'dew']
full
12246 rows × 8 columns
We’ve included a brief explanation for each of the columns below.

date: the date of the reading
id: the PurpleAir sensor ID
region: the region where the sensor is located
pm25aqs: the PM2.5 reading from the AQS instrument
pm25pa: the PM2.5 reading from the PurpleAir sensor
temp: the temperature, in degrees Celsius
rh: the relative humidity, as a percentage
dew: the dew point (a higher dew point means more moisture in the air)
Before we start modeling, we want to understand the data. We’ll start by making some simple visualizations. First, we’ll make a plot of the weekly average AQS PM2.5 over time.
plt.figure(figsize=(8, 4))
(full.set_index('date')
 .resample('W')
 ['pm25aqs']
 .mean()
 .plot()
)
plt.ylabel('Weekly avg PM2.5 (AQS)');
We see that most PM2.5 values range between 5.0 and 15.0 µg m⁻³.
Next, we’ll plot the distribution of both AQS and PurpleAir PM2.5 readings.
def plot_dist(col):
    # We need to make sure the x- and y-axes have the same limits for both plots,
    # or else the plots will be difficult to compare
    sns.displot(data=full, x=col, stat='density', kde=True,
                aspect=3, height=3)
    plt.xlim([0, 60])
    plt.ylim([0, 0.11])
plot_dist('pm25aqs')
plot_dist('pm25pa')
We see that the distributions of PM2.5 readings are skewed to the right.
Lastly, we’ll plot PurpleAir against AQS readings.
sns.relplot(data=full, x='pm25aqs', y='pm25pa', s=30, aspect=1, height=4);
plt.xlim([0, 80])
plt.ylim([0, 80]);
Before starting this analysis, we expected that PurpleAir measurements would generally overestimate the PM2.5. And indeed, this is reflected in the scatter plot.
12.4.2. Our Modeling Procedure
First, let’s go over our modeling goals. We want to create a model that predicts PM2.5 as accurately as possible. To do this, our model can make use of the PurpleAir measurements, as well as the other variables in the data, such as the ambient temperature and relative humidity.
Here, we treat the AQS measurements as the true PM2.5 values. The AQS measurements are taken from carefully calibrated instruments and are actively used by the US government for decision-making, so we have reason to trust that the AQS PM2.5 values are accurate.
We plan to fit and compare several models on the data.
To do so, we first randomly split the data in
full into a
training and testing set.
For all of our models, we’ll fit the model on the training set and report
the root mean squared error (RMSE) on the test set.
Intuitively, the held-out test set mimics new measurements
that the model “doesn’t see” while we fit it.
This means that we can treat the test set error as an
estimator for the model’s real-world performance.
We’ll set aside 1000 observations for the test set.
np.random.seed(42)

n = len(full)
test_n = 1000

# Shuffle the row labels
shuffled = np.random.choice(n, size=n, replace=False)

# Split the data
test = full.iloc[shuffled[:test_n]]
train = full.iloc[shuffled[test_n:]]
train
11246 rows × 8 columns
We also plan to evaluate each model’s predictions on the test set, so we’ll define a function to compute the RMSE of a set of predictions.
def rmse(predictions):
    return np.sqrt(np.mean((test['pm25aqs'] - predictions)**2))
12.4.3. Model 1: A Simple Constant Model
Now, let’s fit our first model—a constant model like the ones we worked with in the previous Modeling and Estimation chapter. Constant models only predict one number. That is, our model \( f_{\theta}(x) \) is:

\[ f_{\theta}(x) = \theta \]
For example, if we set \( \theta = 5.5 \), then the model would predict that the AQS PM2.5 reading is always 5.5.
In our previous discussion of the constant model, we found that we minimize the mean squared error when we set \( \hat{\theta} \) to the mean of the response variable. In this case, we set \( \hat{\theta} \) to the mean AQS PM2.5 value, and our model is simply:

\[ f_{\hat{\theta}}(x) = \frac{1}{n} \sum_i \text{AQS}_i \]
def model_1(train):
    '''f(x) = θ'''
    mean = train['pm25aqs'].mean()
    def predict(data):
        return np.repeat(mean, len(data))
    return predict
Now, we can fit the model on the training set and check its accuracy on the test set.
predict = model_1(train)
rmse(predict(test))
5.446107182896083
We’ve found that this simple model has a test set RMSE of 5.45 µg m⁻³. Intuitively, this means that the model will usually be around 5.45 µg m⁻³ away from the actual AQS measurement. We can see this when we mark the mean on the distribution of AQS readings.
mean = train['pm25aqs'].mean()

plot_dist('pm25aqs')
plt.axvline(mean, c='red', linestyle='-.');
# Most points are within 5.36 µg m⁻³ away from the mean
This model is too simple to use in practice—it performs especially badly when PM2.5 values are high, which is exactly where we care about model accuracy the most! Still, this serves as a useful baseline for future models and demonstrates our modeling procedure.
12.4.4. Model 2: Adjusting PurpleAir by a Constant
Let’s fit a constant again, but with a twist: we’ll fit a model that adjusts PurpleAir measurements by a constant rather than simply outputting the average AQS measurement. Our model is:

\[ f_{\theta}(x_i) = \text{PA}_i + \theta \]
In this model, \( \text{PA}_i \) represents the PurpleAir measurement for a row \( x_i \) in the data, and \( \theta \) is a constant we once again need to fit.
We want to minimize the mean squared loss. Let \( \text{AQS}_i \) denote the AQS reading for row \( x_i \) of the data. Then, the mean squared loss is:

\[ L(\theta) = \frac{1}{n} \sum_i \left( \text{AQS}_i - (\text{PA}_i + \theta) \right)^2 \]
To minimize the loss, we can derive that \( \hat{\theta} = \frac{1}{n} \sum_i(\text{AQS}_i - \text{PA}_i) \). In other words, \( \hat{\theta} \) is the mean difference between AQS and PurpleAir readings. We’ve left the derivation as an exercise for the reader. After fitting on the training data, we have \( \hat{\theta} = -4.86 \):
np.mean(train['pm25aqs'] - train['pm25pa'])
-4.859246747707521
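As a quick numerical sanity check (a sketch on synthetic data, not the actual sensor readings), we can confirm that the mean difference between the two sets of readings minimizes the mean squared loss for this model:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-ins for AQS and PurpleAir readings
aqs = rng.uniform(2, 30, size=500)
pa = 2 * aqs + rng.normal(0, 2, size=500)

def mse(theta):
    return np.mean((aqs - (pa + theta)) ** 2)

# The minimizer derived above: the mean difference between AQS and PA
theta_hat = np.mean(aqs - pa)

# The loss at theta_hat is no larger than at nearby values of theta
assert all(mse(theta_hat) <= mse(theta_hat + d) for d in (-1, -0.1, 0.1, 1))
```

Since the loss is a quadratic function of \( \theta \), checking a few nearby values on either side of \( \hat{\theta} \) is enough to see the minimum.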
With this, we can implement our second model.
def model_2(train):
    '''f(x) = PA + θ'''
    mean_diff = np.mean(train['pm25aqs'] - train['pm25pa'])
    def predict(data):
        return data['pm25pa'] + mean_diff
    return predict
To compare our models, we’ve written a small helper function called
model_results that outputs the models and their RMSE on the test set.
We’ll call this function to compare the results of the two models we’ve
defined thus far.
model_results([model_1, model_2])
12.4.4.1. Why Did Model 2 Perform Worse?
Surprisingly, the second model has a worse RMSE than the first. Why did this happen? One way to diagnose these models is to subtract the model’s predictions from the actual observed AQS values. These differences are called the residuals.
def residuals(model):
    predict = model(train)
    return test['pm25aqs'] - predict(test)
Let’s plot the residuals for both models and compare.
def plot_residuals(models):
    model_names = pd.Index([model.__name__ for model in models], name='model')
    resids = (pd.concat(
        [test.assign(resid=residuals(model)) for model in models],
        keys=model_names)
        .reset_index(level=0))
    g = sns.relplot(data=resids, x='pm25pa', y='resid',
                    col='model', col_wrap=3, aspect=0.8, s=30)
    g.map(lambda **k: plt.axhline(y=0, color='black', linestyle=":"))
    g.set_xlabels('PA')
    g.set_ylabels('residuals')
plot_residuals([model_1, model_2])
Note that a residual of 0 means that the model correctly predicted the PM2.5. A positive residual means the model underestimated PM2.5, and a negative residual means the model overestimated PM2.5.
We see that both models perform poorly at higher PM2.5 measurements. Model 1, the simple constant model, tends to underestimate PM2.5. Likewise, Model 2, the model that adjusts PurpleAir by a constant, tends to overestimate PM2.5.
We saw earlier in this section that PurpleAir sensors tend to overestimate PM2.5. However, subtracting a constant value doesn’t seem to do enough to correct PurpleAir readings.
Next, we’ll fit linear models on the data. In this book, we cover linear models in later chapters (Chapters 15 and 19). However, we include linear models in this section in order to match Barkjohn’s analysis, and to produce the final model that is currently in real-world use.
12.4.5. Model 3: Simple Linear Regression
Making a scatter plot of PurpleAir and AQS measurements shows that a linear model can be appropriate:
By eyeballing this plot, we might guess that the PurpleAir PM2.5 is about twice as high as the actual PM2.5. This idea is encapsulated in the simple linear model: \( f_{\theta}(x) = \theta_0 + \theta_1 \cdot \text{PA} \)
To predict the PM2.5, this model multiplies the PurpleAir measurements by \( \theta_1 \), then adds a constant \( \theta_0 \).
12.4.5.1. Using a Linear Model for Calibration
This is a calibration scenario. We have PurpleAir sensors which we want to calibrate to match AQS sensors. Then, we can adjust the PurpleAir readings using the correction we get from calibration.
This is a two-step procedure:
1. Fit a calibration model that uses AQS measurements to predict PurpleAir measurements.
2. Invert the model to find \( \hat \theta_1 \) and \( \hat \theta_0 \).
In other words, we’ll fit this calibration model first: \( \text{PA} = b + m \cdot \text{AQS} \)
Then, we invert this model: \( \text{AQS} = \frac{\text{PA} - \hat b}{\hat m} = -\frac{\hat b}{\hat m} + \frac{1}{\hat m} \cdot \text{PA} \)
Which gives \( \hat \theta_1 = \frac{1}{\hat m} \) and \( \hat \theta_0 = - \frac{\hat b}{\hat m} \)
Why invert the model?
This procedure might seem a bit roundabout. Why not fit a linear model for \( f_{\theta}(x) \) that uses PurpleAir to predict AQS directly? This would get \( \hat \theta_1 \) and \( \hat \theta_0 \) without needing to invert anything.
Intuitively, linear models assume that the explanatory variable, or the variable we put on the x-axis, has no error. Linear models also assume that all the error occurs when we measure the response variable, or the variable we put on the y-axis. Thus, during calibration we treat the AQS measurements as the x-axis variable since these measurements have relatively little error. This allows us to model the error in PurpleAir measurements. If we fit \( f_{\theta}(x) \) directly, we’ll say the opposite: that the measurement error happens in the AQS readings rather than PurpleAir readings.
As a simpler example, let’s say we’re calibrating a weight scale. We could do this by first placing known weights—say 1 kg, 5 kg, and 10 kg—onto the scale, and seeing what the scale reports. Analogously, the AQS measurements are the known quantities, and we check what the PurpleAir sensors report.
For a more rigorous treatment of statistical calibration, see [Osborne, 1991].
Now, let’s define the model.
from sklearn.linear_model import LinearRegression

def model_3(train):
    '''f(x) = θ₀ + θ₁PA'''
    # Fit calibration model using sklearn
    X, y = train[['pm25aqs']], train['pm25pa']
    model = LinearRegression().fit(X, y)
    m, b = model.coef_[0], model.intercept_

    # Invert model
    theta_1 = 1 / m
    theta_0 = - b / m

    def predict(data):
        return theta_0 + theta_1 * data['pm25pa']
    return predict
model_results([model_1, model_2, model_3])
We see that the linear model performs considerably better than the other models we’ve tried. This is reflected in the residuals:
plot_residuals([model_1, model_2, model_3])
Happily, the residuals of the linear model show that it still performs relatively well even when PM2.5 is high.
Under this model, \( \hat \theta_1 = 0.52 \) and \( \hat \theta_0 = 1.54 \), so our fitted model predicts: \( f_{\hat \theta}(x) = 1.54 + 0.52 \cdot \text{PA} \)
12.4.6. Model 4: Incorporating Relative Humidity
The final model that Barkjohn selected was a linear model that also incorporated the relative humidity: \( f_{\theta}(x) = \theta_0 + \theta_1 \cdot \text{PA} + \theta_2 \cdot \text{RH} \)
Here, \( \text{PA}_i \) and \( \text{RH}_i \) refer to the PurpleAir PM2.5 and the relative humidity for row \( x_i \) of the data. This is an example of a multivariable linear regression model—it uses more than one variable to make predictions.
As before, we’ll fit the calibration model first: \( \text{PA} = b + m_1 \cdot \text{AQS} + m_2 \cdot \text{RH} \)
Then, we invert the calibration model to find the prediction model: \( \text{AQS} = \frac{\text{PA} - \hat m_2 \cdot \text{RH} - \hat b}{\hat m_1} \)
Which gives \( \hat \theta_0 = -\frac{\hat b}{\hat m_1} \), \( \hat \theta_1 = \frac{1}{\hat m_1} \), and \( \hat \theta_2 = - \frac{\hat m_2}{\hat m_1} \).
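To see where these come from, solve the fitted calibration model \( \text{PA} = \hat b + \hat m_1 \cdot \text{AQS} + \hat m_2 \cdot \text{RH} \) for AQS:

\( \text{AQS} = \frac{\text{PA} - \hat m_2 \cdot \text{RH} - \hat b}{\hat m_1} = -\frac{\hat b}{\hat m_1} + \frac{1}{\hat m_1} \cdot \text{PA} - \frac{\hat m_2}{\hat m_1} \cdot \text{RH} \)

Matching coefficients against \( f_{\theta}(x) = \theta_0 + \theta_1 \cdot \text{PA} + \theta_2 \cdot \text{RH} \) gives the three formulas above.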
def model_4(train):
    '''f(x) = θ₀ + θ₁PA + θ₂RH'''
    # Fit calibration model using sklearn
    X, y = train[['pm25aqs', 'rh']], train['pm25pa']
    model = LinearRegression().fit(X, y)
    [m1, m2], b = model.coef_, model.intercept_

    # Invert to find parameters
    theta_0 = - b / m1
    theta_1 = 1 / m1
    theta_2 = - m2 / m1

    def predict(data):
        return theta_0 + theta_1 * data['pm25pa'] + theta_2 * data['rh']
    return predict
model_results([model_1, model_2, model_3, model_4])
We see that out of the four models, the model that incorporates PurpleAir and relative humidity performs the best. This is also reflected in its residual plot.
plot_residuals([model_3, model_4])
From the residual plot, we can see that Model 4’s residuals are generally closer to 0. Model 4’s residuals trend downward less than Model 3’s, indicating a better model fit.
After fitting the model, we have: \( \hat \theta_0 = 5.77 \), \( \hat \theta_1 = 0.524 \), and \( \hat \theta_2 = -0.0860 \) so our fitted model predicts:
And thus, we’ve achieved our goal: we have a model to correct PurpleAir PM2.5 measurements! From our analysis, this model achieves a test set error of 2.58 µg m⁻³, which is a useful improvement over our baseline models. If the model can maintain this error rate for real-world data, it should be practically useful—for instance, a “moderate” PM2.5 level corresponds to 12-35 µg m⁻³, while an “unhealthy for sensitive groups” PM2.5 level corresponds to 35-55 µg m⁻³ [EPA, 2016]. Our residual plot shows that applying our correction brings most PurpleAir measurements within 5 µg m⁻³ of the true PM2.5, so we can reasonably recommend using PurpleAir sensors to report air quality.
This program checks whether a year (integer) entered by the user is a leap year or not, and displays the result.
To understand this program, you should have knowledge of the following C++ programming topics:
- C++ if, if…else and Nested if…else
All years that are perfectly divisible by 4 are leap years, except for century years (years ending with 00), which are leap years only if they are perfectly divisible by 400.
For example, 2012, 2004, and 1968 are leap years, but 1971 and 2006 are not. Similarly, 1200, 1600, 2000, and 2400 are leap years, while 1700, 1800, and 1900 are not.
Program to Check Whether a Year Is a Leap Year or Not
#include <stdio.h>

int main() {
    int year;

    printf("Enter a year: ");
    scanf("%d", &year);

    if (year % 4 == 0) {
        if (year % 100 == 0) {
            // century years are leap years only if divisible by 400
            if (year % 400 == 0)
                printf("%d is a leap year.", year);
            else
                printf("%d is not a leap year.", year);
        }
        else
            printf("%d is a leap year.", year);
    }
    else
        printf("%d is not a leap year.", year);

    return 0;
}
Output
Enter a year: 2014
2014 is not a leap year.
In the above C++ program, the user is asked to enter a year, and the program checks whether the entered year is a leap year or not using the rule described above.
Related Programs
- C++ Program to Check Whether Number is Even or Odd
- C++ Program to Check Whether a character is Vowel or Consonant
- C++ Program to Find Largest Number Of Three Numbers
Ask your questions and clarify your or others' doubts about checking leap years in C++ by commenting.
cmd.exe /c start %D/bin/loadp2 %B -CHIP -t -v
const LED1 = 56
const LED2 = 57
Searching serial ports for a P2
P2 version A found on serial port com9
Setting clock_mode to 10c1f08
Setting user_baud to 115200
C:/Propeller/spin2gui/samples/blink2.binary loaded
[ Entering terminal mode. Press ESC to exit. ]
00000 | con
00000 | led1 = 56
00000 | led2 = 57
00000 | dat
00000 000 | org 0
00000 000
00000 000 | entry
00000 000 00F00FF2 | cmp ptra, #0 wz
00004 001 1800905D | if_ne jmp #spininit
00008 002 30F003F6 | mov ptra, objptr
0000c 003 00F007F1 | add ptra, #0
00010 004 00FE65FD | hubset #255
00014 005 D404C0FD | calla #@_program
| COG_BSS_START
| fit 496
| orgh $400
| 'SEEMS TO MAKE A DIFFERENCE? First long is CPU Clock?
| long _CLOCKFREQ 'was long 80000000
| ...
| hubentry
cheezus wrote: »
I've still been using the proptool for the bulk of editing but I think it's time for me to look at some of the other editors. I've always used proptool and it's just that comfortable place to develop for the p1.
The one thing that I'm looking for in a toolchain is conditional compilation. If I understand correctly fastspin has a preprocessor, with built in directives?
pub nextfile(fbuf) | i, t, at, lns
{{
' Find the next file in the root directory and extract its
' (8.3) name into fbuf. Fbuf must be sized to hold at least
' 13 characters (8 + 1 + 3 + 1). If there is no next file,
' -1 will be returned. If there is, 0 will be returned.
}}
repeat
if (bufat => bufend)
t := pfillbuf
if (t < 0)
return t
if (((floc >> SECTORSHIFT) & ((1 << clustershift) - 1)) == 0)
fclust++
at := @buf + bufat
if (byte[at] == 0)
return -1
bufat += DIRSIZE
if (byte[at] <> $e5 and (byte[at][$0b] & $18) == 0)
lns := fbuf
repeat i from 0 to 10
byte[fbuf] := byte[at][i]
fbuf++
if (byte[at][i] <> " ")
lns := fbuf
if (i == 7 or i == 10)
fbuf := lns
if (i == 7)
byte[fbuf] := "."
fbuf++
byte[fbuf] := 0
fsize2:= brlong(at+$1c)
return 0
\sdspi.readblock(0, @buf)
cheezus wrote: »
Thanks as always for taking a look at that....
ersmith wrote: ».
sdbuffer byte 0[1024] ' 1024 byte buffer for sd card interface and also for sending one row 320 x3 = 960 bytes max picture size
buffer2 byte 0[512] ' 512 general purpose hub buffer
rambuffer word 0[1024] ' 512 byte (256 word) buffer easier for using with words, needed to double this
CON ''PINS
LCD_RS = |< 16 '
LCD_CS = |< 20 ' p5 -1 -d_en
''LCD_RD = |< 18 ' p6 -0 -d_r
''LCD_WR = |< 19 ' p6 -1_d_W
''LCD_RST = |< 21 ' GP3 & P10
LCD_PINS = LCD_CS
LCD_DIRS = LCD_PINS | LCD_RS | $FFFF
D_RST = 10
SPI_A0 = 10
SPI_A1 =11
SD_DO = 12
SD_CLK = 13
SD_DI = 14
SD_CS = 15
GROUP_EN = 21
CLOCK_PIN = 20
CON ''OTHER STUFFS
_1ms = 1_000_000 / 1_000 ' Divisor for 1 ms
'define latch bits
#1, _ramEnable, _ram_rW, _flash_rW, _dispEnable, _disp_rW, _flashEnable '' version 1, UGLY
_group0, _group1, _group2, _group3 '' group bits
''OBJ disp '' display object - SSD1963,SSD1289 specific - todo
VAR
word curx, cury, ScreenWidth, ScreenHeight
word BackFontColor, FontHeight
byte DisplayMode, orientation
byte rambuffer[512]
'cog stuffs
byte latchvalue, spiselect
PUB Testing | i
clkset(_SETFREQ, _CLOCKFREQ)
waitcnt(cnt+clkfreq*2)
Reset_Display
Start_SSD1963
'Draw(0, 0, 479, 799)
REPEAT
Draw(0, 0, 479, 799)
repeat (480 * 800)
Pixel(0)
Draw(0, 0, 479, 799)
repeat (480 * 800)
Pixel($ffff)
repeat
pause1ms(1000)
PUB pause1ms(period) | clkcycles '' Pause execution for period (in units of 1 ms).
clkcycles := ((clkfreq / _1ms * period) - 4296) #> 381 ' Calculate 1 ms time unit
waitcnt(clkcycles + cnt) ' Wait for designated time
''*********************************************************************************************************************************************************
' ***************** Start display routines *************************
PUB Draw(x1, y1, x2, y2) '' sets the pixel to x1,y1 and then fills the next (x2-x1)*(y2-y1) pixels
{
ifnot orientation ' landscape mode so swap x and y for 1963
result :=x1 ' swap x1 and y1
x1 := y1
y1 := result
result := x2 ' swap x2 and y2
x2 :=y2
y2 := result
}
DisplayEnable ' enable one or both displays
'' ssd1963
Lcd_Write_Com($002B)
Lcd_Write_Data(x1>>8)
Lcd_Write_Data(x1&$ff)
Lcd_Write_Data(x2>>8)
Lcd_Write_Data(x2&$ff)
Lcd_Write_Com($002A)
Lcd_Write_Data(y1>>8)
Lcd_Write_Data(y1&$ff)
Lcd_Write_Data(y2>>8)
Lcd_Write_Data(y2&$ff)
Lcd_Write_Com($002c)
{ '' ssd1289
Displaycmd($0044,(x2<<8)+x1)
Displaycmd($0045,y1)
Displaycmd($0046,y2)
Displaycmd($004e,x1)
Displaycmd($004f,y1)
Lcd_Write_Com($0022)
}
LCD_RS_High
PUB Pixel(pixelcolor) '' send out a pixel
Lcd_Write_Fast_Data(pixelcolor) ' need to set RS high at beginning of this group (ie call Draw first)
' it is more efficent to send one Draw command then lots of pixels than sending individual pixels
' but of course, you can send draw x,y,x,y where x and y are the same and then send one pixel
' ***********************************************************
'' DISPLAY SETTINGS
PRI DisplayEnable
latchvalue := ((latchvalue & $FC) | _group2) | _dispEnable
SetLatch( latchvalue )
OUTA |= LCD_PINS ' set /cs high
DIRA |= LCD_DIRS ' enable these pins for output
PRI SpinHubToDisplay(hubaddress,number)| i
' DisplayEnable
repeat i from 0 to number -1
pixel(long[hubaddress][i])
PRI Lcd_Write_Fast_Data(LCDlong) ' write RS elsewhere then skip the RS above as this is a latch output and takes longer
LCD_Writ_Bus(LCDlong)
PRI LCD_Writ_Bus(LCDLong)
LCDLong &= $0000_FFFF
OUTA &= %11111111_11111111_00000000_00000000 ' set P0-P15 to zero ready to OR
OUTA |= LCDLong ' merge with the word to output
LCD_CS_Low
LCD_CS_High
PRI LCD_CS_Low
OUTA &= !LCD_CS
PRI LCD_CS_High
OUTA |= LCD_CS
PRI LCD_RS_Low
OUTA &= !LCD_RS
PRI LCD_RS_High
OUTA |= LCD_RS
PRI SetLatch(value)
OUTA |= GROUP_EN
OUTA |= CLOCK_PIN
OUTA &= !$FF
value &= $FF
OUTA |= value
OUTA &= CLOCK_PIN
OUTA |= CLOCK_PIN
OUTA &= GROUP_EN
PRI SetCounter(address) | olv
olv := latchvalue
SetLatch($0)
OUTA &= CLOCK_PIN
OUTA &= !$F_FFFF
address &= $F_FFFF
OUTA |= address
OUTA |= CLOCK_PIN
SetLatch(olv)
PRI Reset_Display | olv
olv := latchvalue
SetLatch(_group3)
OUTA &= D_RST
pause1ms(10)
OUTA |= D_RST
SetLatch( olv )
DAT
VDP long 479 ' v pixels
HDP long 799 ' h pixels
HT long 928
HPS long 46
LPS long 15
VT long 525
VPS long 16
FPS long 8
HPW long 48
VPW long 16
PRI Start_SSD1963
Displaycmd($0006,$0000) 'Compare Register (2) POR-$0000
Displaycmd($0016,$EF1C) 'Horizontal Porch POR-$EFC1
Displaycmd($0017,$0003) 'Vertical Porch POR-$0003
Displaycmd($0007,$0233) 'Display Control '0033? POR-$0000
Displaycmd($000B,$0000) 'Frame cycle Control 'd308 POR-$D308
Displaycmd($000F,$0000) 'Gate Scan start position POR-$0000
Displaycmd($0041,$0000) 'Vertical Scroll Control (1) POR-$0000
Displaycmd($0042,$0000) 'Vertical Scroll Control (2) POR-$0000
Displaycmd($0048,$0000) 'First Window Start POR-$0000
Displaycmd($0049,$013F) 'First Window End POR-$013F
Displaycmd($004A,$0000) 'Second Window Start POR-$0000
Displaycmd($004B,$0000) 'Second Window End POR-$013F
Displaycmd($0044,$EF00) 'Horizontal Ram Address Postion POR-$EF00
Displaycmd($0045,$0000) 'Vertical Ram Address Start POR-$0000
Displaycmd($0046,$013F) 'Vertical Ram Address End POR-$013F
Displaycmd($0030,$0707)
Displaycmd($0031,$0204)
Displaycmd($0032,$0204)
Displaycmd($0033,$0502)
Displaycmd($0034,$0507)
Displaycmd($0035,$0204)
Displaycmd($0036,$0204)
Displaycmd($0037,$0502)
Displaycmd($003A,$0302)
Displaycmd($003B,$0302)
Displaycmd($0023,$0000) 'RAM write data mask (1) POR-$0000
Displaycmd($0024,$0000) 'RAM write data mask (2) POR-$0000
Displaycmd($0025,$8000) 'not in datasheet?
Displaycmd($004f,$0000) 'RAM Y address counter POR-$0000
Displaycmd($004e,$0000) 'RAM X address counter POR-$0000
Lcd_Write_Com($0022)
#define DEBUG
#define FEATURE1
#define FEATURE2
PUB myfunc(arg)
#ifdef DEBUG
ser.printf("reached myfunc, arg=%d\n", arg)
#endif
PUB do_output
#ifdef __P2__
p2 output code
#else
p1 output code
#endif
PUB do_input
#ifdef __P2__
p2 input code
#else
p1 input code
#endif
OBJ
#ifdef __P2__
p: "p2_routines.spin"
#else
p: "p1_routines.spin"
#endif
...
DAT
ssd1289_startlist
word $0000, $0001 ' Turn Oscillator on
word $0003, $A8A4 ' Power control (1)
word $000C, $0000 ' Power control (2)
...
word $FFFF, $FFFF ' end of list; some values that should never appear
PUB Displaylist(ptr) | reg, val
repeat
reg := word[ptr]
val := word[ptr+2]
if reg == $FFFF and val == $FFFF
return
Displaycmd(reg, val)
PRI Start_SSD1289
Displaylist(@ssd1289_startlist)
if packet==0
'Send filename and length
i:=strsize(@fbuf)
bytemove(@pdata,@fbuf,i+1)
p:=num.tostr(size,num#DEC) '' is fat.fsize
j:=strsize(p)
bytemove(@pdata+i+1,p,j+1)
repeat k from i+1+j+1 to 128
pdata[k]:=0
...
ser.decuns(size, 14) '' works for printing
ser.str(num.tostr(size,num#DSDEC14)) ' I do this kind of thing a lot
..
ersmith wrote: »
I wasn't aware that numbers.spin didn't work. I don't have a lot of time to debug right now but I'll try to take a look at it soon. All plain Spin code is supposed to work with fastspin (modulo size and speed issues), so if any doesn't it's a bug. It'd be great if you could narrow down what part of tostr() is failing. My guess is that there may be some bug in fastspin's operator precedence, but that's just a wild guess.
'' compiler whines about indention because of line directly above
if (val >= 0) '' but this seems to work okay
digit := val // base
val := val / base
if (digit => 0 and digit =< 9)
digit += "0"
else
digit := (digit - 10) + "A"
buf[i++] := digit
--digitsNeeded
while (val <> 0 or digitsNeeded > 0) and (i < 32)
if (signflag > 1)
tx(signflag)
'' now print the digits in reverse order
repeat while (i > 0)
tx(buf[--i])
Pri NumToStr(val, base, digitsNeeded) | i, digit, r1, q1
'' make sure we will not overflow our buffer
if (digitsNeeded > 32)
digitsNeeded := 32
'' accumulate the digits
i := 0
buf[i++] := 0 'z term string chz ' no like
if (val >= 0)
digit := val // base
val := val / base
if (digit => 0 and digit =< 9)
digit += "0"
else
digit := (digit - 10) + "A"
buf[i++] := digit
--digitsNeeded
while (val <> 0 or digitsNeeded > 0) and (i < 32)
return @buf + i ' return buffer address + ptr to most sig char
PUB NumToStr(value, places) | p, n
if places > 0
p := places
else
p :=1
n := value
repeat while n > 0
n := n / 10
if n > 0
p += 1
return str.integerToDecimal(value, p) +1
byte[@BCX0][Digits++ >> 1] += ||(Num // Base) << (4 * Digits&1)
con
BUFSIZ = 32
var
byte sbuf[BUFSIZ + 1] ' buffer for output
long i ' output index
' tx copies the character to the buffer
pub tx(c)
if i < BUFSIZ
sbuf[i++] := c
' fetch zero terminates the buffer, resets the index, and returns a pointer
' to the buffer
pub fetch
sbuf[i] := 0
i := 0
return @sbuf
#include "spin/std_text_routines.spinh"
' convert integer to decimal string
pub todec(n)
dec(n)
return fetch
' convert integer to hex string with w digits (default 8)
pub tohex(n, w=8)
hex(n, w)
return fetch
ersmith wrote: »
I think Numbers.spin is relying on the order of evaluation of side-effects like ++; that is, expressions like:
byte[@BCX0][Digits++ >> 1] += ||(Num // Base) << (4 * Digits&1)
seem to come out differently in standard Spin and fastspin (I think fastspin is evaluating the Digits++ on the left hand side before the Digits on the right hand side, but regular Spin does it the other way around). I'm going to have to think carefully about that one, but for now I suggest avoiding expressions that are that tricky (personally I find it confusing to read anyway!). Thanks for finding this!
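Until that's sorted out, one workaround is to hoist the increment into its own statement so the evaluation order no longer matters. A sketch (untested; it assumes the PropTool behavior, where both uses of Digits see the old value, is the intended one):

```spin
  ' same effect, with the side effect made explicit:
  byte[@BCX0][Digits >> 1] += ||(Num // Base) << (4 * (Digits & 1))
  Digits++
```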
PUB NumToStr(value, places) | p, n
if places == 0
p := 3
n := value
repeat while (n /= 10) > 10
if n > 10
p ++
else
p := places
return str.integerToDecimal(value, p-1) +1
movs :ld, line
movd :st, line
potatohead wrote: »
-1 is TRUE in SPIN, anything else is false. That was done to make booleans in conditional expressions easier. Just FYI.
value <<= (8 - digits) << 2
repeat digits
curx := TextChar(font,lookupz((value <-= 4) & $F : "0".."9", "A".."F"),Curx,Cury)
_texthex
mov COUNT_, #17
calla #pushregs_
mov local01, arg01
mov local02, arg02
mov local03, arg03
mov local04, local02
mov local05, #8
sub local05, local03
shl local05, #2
shl local04, local05
mov local02, local04
mov local06, local03
LR__0045
cmp local06, #0 wz
if_e jmp #LR__0046
mov local04, local01
mov local07, local02
rol local07, #4
mov local02, local07
mov local05, local02
and local05, #15
mov local08, local05
mov local09, #0
add ptr__dat__, ##5324
mov local10, ptr__dat__
sub ptr__dat__, ##5324
mov local11, local10
mov local12, #16
mov arg01, local08
mov arg02, local09
mov arg03, local11
mov arg04, local12
calla #__system___lookup
mov local13, result1
mov local14, local13
add objptr, ##3324
rdword local15, objptr
sub objptr, ##3324
add objptr, ##3326
rdword local16, objptr
sub objptr, ##3326
mov arg01, local04
mov arg02, local14
mov arg03, local15
mov arg04, local16
calla #_textchar
mov local17, result1
add objptr, ##3324
wrword local17, objptr
sub objptr, ##3324
mov local04, local06
sub local04, #1
mov local06, local04
jmp #LR__0045
LR__0046
mov ptra, fp
calla #popregs_
_texthex_ret
reta
CON
SPI_A0 = 16
…
DIR := $0003_0000
OUT := ((spi_select & %11) << SPI_A0)
I am using Windows 7.
1) I heard about issues with loadp2.exe so I grabbed and compiled the version from here:
I placed the newly compiled loadp2.exe version 0.007 into the /bin folder of spin2gui
2) When running spin2gui, I had to open the menu labeled "Commands" and select "Configure Commands..."
This brings up a window called "Executable Paths"
I pressed the [P2 Defaults] button, but had to change the text in "Run Command" to this path: Press OK to close the configure commands window.
3) I opened up the sample file blink2.bas. Changed the first lines to this, and saved:
4) Pressed the "Compile & Run" button.
Got this in the terminal, because I'm using -v for loadp2 (verbose mode):
5) Now I'm stuck. It looks like fastspin inserts a hubset #255 into the asm code. From various other comments, this isn't going to work on the P2-EVAL board.
Here is the beginning of the .lst file it generates for blink2.bas, with me having changed the pin numbers of led1 and led2.
When changing the .p2asm file to .spin2 as you described, and adding the _CLOCKFREQ constants from that message, I think I had to go down further below to set the clock frequency as well in the COG_BSS_START section, if present??
That is what normally happens. Assembly boils down to just being a set of directives to assemble a binary file. It may or may not be instructions. Could just be data.
I'm glad it's working for you. fastspin doesn't try to do any translation of P1 asm to P2 asm, so you're probably just getting lucky if your DAT section has much legacy P1 assembly code in it. The P2 instruction set is "mostly" a superset of P1, and I have kept a few P1 assembly conventions (e.g. fastspin accepts "wc,wz" as well as "wcz"), so some P1 asm will compile OK. But in many cases you'll need to put an "#ifdef __P2__" in the DAT section with the P2 specific assembly.
The Spin code (outside of DAT) should mostly translate fine, so the more you can put in Spin the better. Performance shouldn't be an issue at all, fastspin is compiling to hubexec code on P2, so your P2 compiled Spin code will probably run faster than hand-crafted P1 assembly.
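For example, a DAT section that needs different instructions on each chip might look like this (a sketch only; the pin number and mnemonics are illustrative, not taken from code in this thread):

```spin
DAT
              org     0
toggle
#ifdef __P2__
              drvnot  #56               ' P2-only instruction: toggle pin 56
#else
              xor     outa, ledmask     ' P1 has no drvnot; toggle via outa
#endif
ledmask       long    |< 16
```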
The one thing that I'm looking for in a toolchain is conditional compilation. If I understand correctly fastspin has a preprocessor, with built in directives?
I still haven't tried to run any code on the P2 yet, I really wanted to get something working on the p1 first. I finally got the Spin p1 version working and before going to bed I thought I'd try fastspin and... well it sure lives up to its name. It went from some 10s of seconds to draw a 320x240 blank screen to at least a refresh a second? This is with no work to optimize the spin code for speed. I'd LIKE to create an object that works for P1 and P2 and it seems doable but there's a lot of gotchas with the current codebase.
Is there a "fastspin" manual I'm missing?
Thanks as always to all the forum members here, you are awesome!
Yes. See the docs/spin.md file in the fastspin or spin2gui release. It's a basic subset of the C preprocessor, with #define, #ifdef / #else / #endif, and #include. There are some predefined symbols, like __P2__ if compiling for the P2 (again, documented in the spin.md file; that's a GitHub mark-down file, so readable as plain text).
Seems fastspin doesn't like the copy of fsrw I've been using, giving me an error that's got me scratching my head right now.
"fsrw_Ray.Spin(568) error: internal error : expected statement list, got 87"
So I go looking for an 87 in the code and can't find a dec, hex or binary... But line 568 is a blank line so I thought maybe something in the last function isn't properly formatted or something. I'm sure it's something simple really.
I was finally able to trim down Touch.Spin enough to compile the desktop app. I'm still rewriting the ASM driver in Spin... Got a long way to go and starting to wonder if I should just bypass this step and jump to P2asm. I wish I had the time and "resources" to devote to serious development... But, almost 3 year old twin boys who think playing with wires and cords is almost as amazing as beating each other with plastic baseball bats; a cat that thinks jumper wires are the best thing since hair ties and feathers; as well as preparing for a move all slow the process.
btw, I still haven't rtfm :P
*edit, added sources i'm trying to build, duh
In this case it looks like fastspin messed up compiling the line \sdspi.readblock(0, @buf) in the mount(basepin) function in fsrw_Ray.spin. I'll try to sort out what's wrong, but for now probably removing the \ will get it going.
When I started playing with this I wasn't even sure it would work but now I'm starting to see potential. Part of the reason for these experiments is I'm trying to understand ways to optimize as well as restructure things. As it stands now, there's a lot of code that only runs once like display inits. This is one place I've always thought things could be improved. I've thought about putting the init code into CSV files on the SD and just parse though a file for display init but it really seems best to have the display init code in EEPROM in case there's something wrong with the SD I can still init the display and print debugging info.
For a while I used a small program to init the display, start the SD and chain into another program that actually loaded the desktop. From here each program could be chained to and when finished kicks back to the EEPROM. This would check the SRAM to see if the desktop attributes were still in ram and if so skip loading the desktop and just display the pre-loaded desktop. The return to desktop from a running app is instant, which is nice because loading an 800x480 image from SD seems like it takes forever!
There were several problems with doing things this way but the biggest annoyance, other than loading the desktop for the first time, was switching between displays. The previous hardware design didn't use the LCD /RD and simply pulled up to 3v3. The new hardware should allow me to determine which controller is plugged in, instead of changing a line in the source code and recompiling the entire package. The biggest problem I'm having is figuring out how to handle 2 different resolutions, as for a long time the 320x240 "package" was completely separate from the 800x480.
Getting back the 3k is going to be tricky but with 1mw of SRAM at least I have some place to stuff data between programs run off SD. Its very interesting looking at the spin code and then seeing the pasm output. I haven't really noticed your compiler "slacking off" anywhere and the moving large constants is just to satisfy the FIT directive. I think I may have made some mistakes and need to double check pointer vs register. I'm also thinking that display init data could be overwritten for buffers if I could wrap my head around that mess.
I was having trouble with some piece of code overflowing the rambuffer so that's 4x larger than it needs to be. There's plenty of places where this code can be optimized, I'm just trying to understand the best ways to optimize for P1 and looking for ideas in the P2 rewrite.
At this point things look pretty simple: put all the display-specific stuff in a file. But every time I try to break things down it seems too complicated. In this example, DISPLAY_RS is controlled by its pin, but on the previous hardware (do I really have to support it? probably not?) it's controlled by a latch. Reset_Display, Set_Counter, and Set_Latch are new and totally untested.
Does anyone have tips for the best way to manage a HUGE program, that's meant to run across multiple hardware? There's already quite a few pieces to this puzzle and it seems like I'm either going to end up with lots of little files, or one REALLY long one. Ifdefs really add to character count and I still don't quite get the syntax.
As always, any advice appreciated!
In general I would try to factor out common code between your hardware into one object, and have the hardware specific parts in other objects (something like "common.spin", "driver1.spin", and "driver2.spin". That's how I did my VGA driver: there's a common file that has
all the text output routines, separate modules for setting up different modes (1024x768, 800x600, and so on) and then another common low level VGA driver that actually bangs the bits on the hardware.
#ifdef is a really powerful helper. I regularly do things like starting files with a block of #define lines (DEBUG, FEATURE1, FEATURE2, ...), and then within the code having #ifdef DEBUG ... #endif sections, to assist with debugging or for easily turning features on and off during development. For example when I'm done debugging I just comment out the "#define DEBUG" line, and the debug code is no longer compiled.
For big things like switching hardware or processors, I would suggest that #ifdef be used in large chunks, ideally at the highest level to control including whole objects rather than inside of functions. So don't scatter #ifdef __P2__ blocks inside every function; that leads to a rat's nest of #ifdefs and is hard to read (in my opinion). Instead I would put a single #ifdef in the OBJ section to select between something like "p2_routines.spin" and "p1_routines.spin", and then use "p.do_output", "p.do_input", etc.
In general I would definitely urge abstracting things and writing lots of little functions over writing large complicated functions. The smaller the pieces you break things up into, the easier the code will be to read and the less likely it is that bugs will creep in. Don't worry about creating even tiny functions -- in fact fastspin will happily inline any small functions (ones that are just a few lines), and at optimization level -O2 it will inline any function that's only used once, so it's OK to write your code in a very modular fashion.
The other tip I would have is to try to make things data driven where you're doing the same thing over and over but with different parameters. So for example your last function consists basically of a long list of Displaycmd(register, value) calls. I'd change this into a display list instead: a DAT table of register/value word pairs, ended by a sentinel, that a small Displaylist routine walks and feeds to Displaycmd.
I've made a lot of progress and am hopefully close to having something to show. I did a FB live video of the 'cardboard box notUnIpod' messing with the fonts. Now that I have a working FSRW I've been testing, I'm trying to get Ymodem working and getting close. Right now I'm banging my head against a wall because numbers.spin doesn't seem to work right. I took a quick look and can't see what would break toStr. I tried hijacking the number functions from std_text_routines.spinh but it's ugly...
Could use some sage advice since I'm a bit stuck and have several drivers that are getting close but still need to be debugged!
I had to fiddle with FSRW to make it return a file's size while walking through the directory and once I got that working the only thing missing (other than CHAIN) is damn ToStr. I think numbers.spin would be an ESSENTIAL have, as well as some string stuffs. I admit to being super lazy in this regard, but that's the point of libraries right??
I will try to narrow some specifics down, it seems that fromStr was working although I didn't realize I was using it at the time. I have noticed some bugs with operator precedence and will keep that in mind, as I have recently. I'm very liberal with my bracketing, mostly for human readability but libraries... I haven't played with optimization settings either so that could help narrow it down. I also noticed a bug with return values and will try to get a detail of what's working and what's not.
I've got a ton of code to debug myself and half the time when I find what could be a bug, it's from debugging by buggy code. Separating the two can be difficult in the moment and I rarely get a chance to go back and reproduce the bug, sorry.
One thing I'm noticing is std_text_routines.spinh has some interesting code that works as an include (pretty sure) but not directly.
I haven't tested this yet but here's what I was thinking a workaround for now
I've almost got ymodem working and that will help with testing significantly. I was sending small files okay but I think the serial port on my laptop is causing a problem with the longer transfers. 2 steps forward, 2 lightyears to go.
I've tried a bunch of different ways of doing this but it's as if p never increments? I don't get it!
*edit
ughhhh, it's a sign problem... I think I have a fix..
*aedit -
Still not sure why numbers.spin isn't working, but it's been broken for quite some time now, iirc. I actually wanted to use Kye's string engine in Ymodem for a while because I think it would simplify handling user input. I'm making progress again though and can deal with the signed/unsigned issue, since the only place it showed up is in the final total of the directory. I might need to change this to display total KB on disk instead of bytes, but that shouldn't break too much, other than the transfers of very large files that are probably impractical to transfer this way anyway.
std_text_routines.spinh isn't intended to work stand-alone, but rather converts any object with a "tx" method into one that also has "dec", "hex", and so on. A simple kind of decimal to string capability can be built with this by creating an object where "tx" just stores a character into a string:
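The same pattern, sketched in Python for illustration (not the actual Spin code): dec() is written purely in terms of tx(), so any object that provides tx() — a UART, a string buffer — gets decimal formatting for free.

```python
# Illustration of the std_text_routines idea: higher-level number
# formatting layered on top of a single tx(char) primitive.
class TextRoutines:
    def tx(self, ch):
        raise NotImplementedError   # supplied by the concrete object

    def dec(self, n):
        # decimal output built only from tx()
        if n < 0:
            self.tx('-')
            n = -n
        for ch in str(n):
            self.tx(ch)

class StringBuilder(TextRoutines):
    # tx() just stores each character into a string buffer
    def __init__(self):
        self.buf = []
    def tx(self, ch):
        self.buf.append(ch)
    def value(self):
        return ''.join(self.buf)
```

A simple number-to-string routine is then just StringBuilder().dec(n) followed by value().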
This is one that caught me up several times. One of the things I hate about "black box" object usage is code like this that's really hard to understand. When things break it becomes nearly impossible to debug. Kye's string engine works great, although it only handles signed decimal. I'm sure there are a few other formats that may get tricky; I'll have to see when I get there. Right now it looks a little ugly but it's working.
Now I'm to the place of testing the memory chips and was hoping to use the old XMM cache test to verify but now I have to wrap my head around not having movd / movs.
I'm happy that things are progressing and everything seems to be working well so far.
SRW_CHZrc3 works really well and the smartpin spi code seems very stable. There's a lot of possible optimizations but I think the next thing to work on is LUT sharing. I've got a build of Ymodem that can only send to the SD card, receive is broken but I think it has something to do with the smartpin serial?? Haven't really debugged this yet but the one nice thing I did notice is the limiting factor now is the serial, with transfer speed constant over any sysclock setting (as long as it's able to keep up with the baud). I've also been running 460800 baud, and had a few tests complete at 921600 baud, although I think self heating and sysclock >240mhz is a problem right now.
I'm also able to turn my tft lcd on and display some text. Started working on cog code for that, although SRAM is the next thing I need to verify. I've got some SPI code that should read the touch adc but have not even tried testing yet. Things are progressing though and it's amazing how fast this chip is!
(1) As noted above, the order of evaluation of things like x[Digit++] |= Digit was different between openspin and fastspin; fastspin followed GCC's convention (the Digit++ on the LHS got evaluated before the Digit on the RHS) but openspin did it the other way around.
(2) Numbers.spin relies on "x/0" to equal 0, whereas fastspin's divide routine returned -1 for this.
I think relying on the value of division by 0 is particularly ugly, but I'll change fastspin to match the original Spin since perhaps other code relies on this too
Thanks. fastspin has always done this for Spin code, and so that wasn't an issue. I suppose one could argue that making "x/0" be 0 ties in to it being "false", but that's tenuous, since valid division like "1/2" also produces 0. Anyway, since division by 0 is undefined there's no particular harm in making the result 0 like Spin does, although many processors do return $FFFFFFFF (maximum 32 bit integer) for that case, as it's the natural output of the division algorithm when used with a divisor of 0.
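To see where $FFFFFFFF comes from, here is a 32-bit restoring-division loop sketched in Python: with a divisor of 0 the "remainder >= divisor" test is always true, so every quotient bit ends up set.

```python
# Restoring (shift-and-subtract) unsigned 32-bit division.
# With d == 0 the comparison r >= d always succeeds, so all 32
# quotient bits are set and the result is 0xFFFFFFFF.
def udiv32(n, d):
    q = 0
    r = 0
    for i in range(31, -1, -1):
        r = (r << 1) | ((n >> i) & 1)
        if r >= d:          # always true when d == 0
            r -= d
            q |= 1 << i
    return q
```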
compiles to
That's broken my hex to ascii method currently.
I'm also having problems with something like this.
Again, I haven't had a chance to track this down and have OUT hard-coded to 0 for now. I tried a few different things and not quite sure what's going on.
Solved with a small wait, seems to be a race condition with the hardware? Need to track that down later when there's no ribbon cable
I've got a LOT of bugs in my own code to find still, I'm sure. And I still have a long way to go, but my project is starting to show signs of life. Hopefully @ersmith has some answers. | http://forums.parallax.com/discussion/comment/1475910 | CC-MAIN-2020-24 | refinedweb | 5,195 | 66.17 |
6.5 Examples
The following program will be used to demonstrate the effects of different optimization levels:
```c
#include <stdio.h>

double
powern (double d, unsigned n)
{
  double x = 1.0;
  unsigned j;
  for (j = 1; j <= n; j++)
    x *= d;
  return x;
}

int
main (void)
{
  double sum = 0.
```

The run-time of the program can be measured using the time command in the GNU Bash shell.
Here are some results for the program above, compiled on a 566MHz Intel Celeron with 16KB L1-cache and 128KB L2-cache, using GCC 3.3.1 on a GNU/Linux system:
```
$ gcc -Wall -O0 test.c -lm
$ time ./a.out
real    0m13.388s
user    0m13.370s
sys     0m0.010s
$ gcc -Wall -O1 test.c -lm
$ time ./a.out
real    0m10.030s
user    0m10.030s
sys     0m0.000s
$ gcc -Wall -O2 test.c -lm
$ time ./a.out
real    0m8.388s
user    0m8.380s
sys     0m0.000s
$ gcc -Wall -O3 test.c -lm
$ time ./a.out
real    0m6.742s
user    0m6.730s
sys     0m0.000s
$ gcc -Wall -O3 -funroll-loops test.c -lm
$ time ./a.out
real    0m5.412s
user    0m5.390s
sys     0m0.000s
```
The relevant entry in the output for comparing the speed of the resulting executables is the ‘user’ time, which gives the actual CPU time spent running the process. The other rows, ‘real’ and ‘sys’, record the total real time for the process to run (including times where other processes were using the CPU) and the time spent waiting for operating system calls. Although only one run is shown for each case above, the benchmarks were executed several times to confirm the results.
From the results it can be seen in this case that increasing the optimization level with -O1, -O2 and -O3 produces an increasing speedup, relative to the unoptimized code compiled with -O0. The additional option -funroll-loops produces a further speedup. The speed of the program is more than doubled overall, when going from unoptimized code to the highest level of optimization.
Note that for a small program such as this there can be considerable variation between systems and compiler versions. For example, on a Mobile 2.0GHz Intel Pentium 4M system the trend of the results using the same version of GCC is similar except that the performance with -O2 is slightly worse than with -O1. This illustrates an important point: optimizations may not necessarily make a program faster in every case.
I have a board with an ESP8266 chip running Micropython firmware v1.8.7. My requirement is to use WebREPL via the University Wi-Fi, which uses WPA2 Enterprise EAP-MSCHAPv2 authentication. My Google-fu so far has informed me that Arduino users have been able to connect to WPA2 Enterprise EAP-TLS (certificate based authentication) (link) but not (SSID, username, pwd) networks.
All the threads I've seen so far on the subject seem to be from mid-2016 at the very latest, so I'm wondering whether someone's been able to figure out how to do this since then. I've never dabbled in network related stuff before (nor am I a great programmer), so all the big words above are pretty new to me. I thus have the following questions:
I appreciate any help you guys can provide. If there's any relevant info I haven't included, please let me know and I'll edit it in.
Edit: @MaximilianGerhardt This is what works for me on a WPA2 Personal:
```python
import network
wlan = network.WLAN(network.STA_IF)
wlan.active(True)
wlan.connect('ssid', 'pwd')
wlan.ifconfig()

import webrepl
webrepl.start()
```
On a WPA2 Enterprise network, I had hoped changing this line would work, but no joy:

```python
wlan.connect('ssid',auth=WPA2_ENT,'user','pwd')
```
Thanks, I'll look into the Espressif Non-OS SDK V2.0.0 and see if I can make it work.
Since you're not using the Espressif C SDK, but the python "Micropython" firmware, this change has not been yet propagated into this python firmware.
You can see the mapping of the network functions (active(), connect(), ifconfig() etc) in the firmware here:. In line 115 you can also see the call to wifi_station_connect(), which is a native Espressif-SDK function. Thus you'll see, the firmware doesn't yet make use of the new functions for WPA2 authentication. In line 490 you can see all the available options for authentication:
```c
MP_OBJ_NEW_SMALL_INT(AUTH_OPEN),
MP_OBJ_NEW_SMALL_INT(AUTH_WEP),
MP_OBJ_NEW_SMALL_INT(AUTH_WPA_PSK),
MP_OBJ_NEW_SMALL_INT(AUTH_WPA2_PSK),
MP_OBJ_NEW_SMALL_INT(AUTH_WPA_WPA2_PSK)
```
WPA2 enterprise authentication is not yet one of them.
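As a side note, you can confirm which authentication mode the firmware sees for a given network with wlan.scan(): on the ESP8266 port each scan result is a tuple (ssid, bssid, channel, RSSI, authmode, hidden), where authmode maps onto the five constants above. The helper below is just an illustration (the mode-name table is an assumption based on that enumeration):

```python
# Map ESP8266 authmode values to readable names and summarize a scan.
AUTHMODES = {
    0: 'open',
    1: 'WEP',
    2: 'WPA-PSK',
    3: 'WPA2-PSK',
    4: 'WPA/WPA2-PSK',
}

def describe(scan_results):
    # scan_results: list of (ssid, bssid, channel, RSSI, authmode, hidden)
    return [(ssid, AUTHMODES.get(authmode, 'unknown (%d)' % authmode))
            for ssid, bssid, channel, rssi, authmode, hidden in scan_results]
```

On the device you would call describe(wlan.scan()); a WPA2 Enterprise network would show up with an authmode outside the table.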
So now I'd say your options are:
Hi
I'm trying to write a program that will print the results of the series
sqrt(1), sqrt(1)+sqrt(2), sqrt(1)+sqrt(2)+sqrt(3),...., sqrt(1)+sqrt(2)+sqrt(3)+...+sqrt(n)
where n is chosen by the user. The output should look something like
1, 2.414, 4.146, 6.146, 8.382, 10.831, .....
Here's what I have so far
```cpp
#include <iostream>
#include <cmath>
using namespace std;

void sumN();

double count;
int Q;
int i;

int main()
{
    cin >> Q;
    for (i = 1; i <= Q; i++)
        cout << sumN(i) << " ";
}

void sumN()
{
    for (int i = 0; i <= Q; i++) {
        sumN(i) += sqrt(i);
    }
}
```
Any help would be appreciated
Thanks | https://www.daniweb.com/programming/software-development/threads/393689/print-results-of-multiple-series-based-on-input-by-user | CC-MAIN-2018-43 | refinedweb | 115 | 77.27 |
Natural language processing (NLP) is the domain of artificial intelligence (AI) that focuses on the processing of data available in unstructured format, specifically textual format. It deals with the methods by which computers understand human language and ultimately respond or act on the basis of information that is fed to their systems.
According to analysts, 80 to 85 percent of business-relevant information originates in text format, which gives rise to the necessity of computational linguistics and text analytics in extracting meaningful information from a large collection of textual data.
In order to analyze large amounts of text, having a convenient structure to store the data is essential. One critical metric in text analysis is understanding the context and relationships between occurrences of words. A convenient data storage should intend to store the extracted meaning in a connected way.
Neo4j – a graph database platform – is able to connect bodies of text and establish context as to how they relate to each other. This also applies to words, sentences and documents. Such relationships are very useful when drawing inferences and insights quickly from the text at scale, which makes Neo4j suitable for NLP.
In this article, we’ll look at how graphs can be leveraged for NLP. We’ll learn how to load text data, process it for NLP, run NLP pipelines and build a knowledge graph from the extracted information in the Neo4j database. We’ll also take a look at how you can take a pre-existing graph and build a natural language query feature on top of it.
(Note: if you’re unfamiliar with the basic NLP terms, check out this glossary of basic NLP terms by KDNuggets.)
Neo4j and Natural Language Processing
Natural language processing is achievable by leveraging the power of graphs with Neo4j.
From a high-level perspective, elements of text are stored as nodes in the database and the connections between those words are stored as relationships. Tags and named entities are also stored as nodes connected to their respective elements or words of the text.
What you’ll need:
- Neo4j graph database
- Py2neo – Python driver for Neo4j
- GraphAware libraries (jar file)
- Stanford CoreNLP (jar file)
- Natural Language Toolkit (NLTK)
- SpaCy
- Python editor or environment (i.e. Jupyter Notebook)
With the plugin, all the text is broken down into tokens, then tagged and stored as nodes in the database. The occurrences of tags are stored as relationships.
In Neo4j, GraphAware allows users to create a pipeline in which they can select steps such as tokenization, stop-words removal and named-entity recognition (NER), among others. Users can also select which processor to use for the same (Stanford CoreNLP or OpenNLP).
After running the pipeline on the corpus (i.e article nodes), the pipeline annotates all parts of the text by separating elements of the sentences into tags, where each tag becomes a separate node and the tag relation becomes the relationship in the database.
Setting Up Your Database
Let’s walk through the process of loading the corpus into a Neo4j database. Here, our corpus is a collection of news stories from BBC business articles that were initially stored as text files. To work on it, we’ll need to include this text in our database. Moreover, we recommend adding the original file path as a property of every article node so that we can keep track of each article.
A simple Python program can iteratively grab all the text from each file and dump it into a Neo4j database. To do this, we’ll need the py2neo driver package, which is a community driver used to connect to the Neo4j database. We’ll also need a database to connect to in order to provide authentication details to the driver for database access.
We’ll first connect to the database using py2neo:
```python
graph = Graph(host='localhost', user='neo4j', password='password')
tx = graph.begin()
```
After, you will need to enter your database’s username and password in the respective fields, get all the articles and place them in the database:
```python
for filename in glob.glob(folder_path):
    with open(filename, 'r') as f:
        file_contents = f.read()
    Nodes = Node("Article", Text=str(file_contents), path=filename)
    print(Nodes)
    graph.create(Nodes)
    tx.merge(Nodes)
```
You’ll need to specify your folder path in the
glob()function.
In the
Node()function, we specify a label –
“Article”– for the nodes. Text in the documents is added to those nodes as property
“Text”, and path also becomes a property called
“path”.
graph.create()will create a graph with all the nodes, and
tx.merge()will actually load it into our database.
Now let’s go back in the database and check if our text has been loaded.
Bingo! We have an article as a node in the Neo4j database.
We now need to create something called an NLP pipeline, which is a series of operations that are performed on the text. GraphAware has provided a set of functions that can be programmed with Cypher to perform a variety of operations on the text, such as tokenization, entity recognition and dependency parsing.
This is how we’ll create a pipeline:
```
CALL ga.nlp.processor.addPipeline({
  name: "pipeline_name",
  textProcessor: 'com.graphaware.nlp.processor.stanford.ee.processor.EnterpriseStanfordTextProcessor',
  processingSteps: {tokenize: true, ner: true, dependencies: true,
                    relations: true, open: true, sentiment: true}
})
```
Please refer to the GraphAware documentation in order to explore and understand the methods of the GraphAware library for Neo4j.
Now, let’s check if our pipeline has been created:
CALL ga.nlp.processor.getPipeline()
You should have a pipeline created with the name "pipeline_name", and all the steps included.
Now that we have our pipeline, let’s run all the text we have through it and tokenize and tag it. The structure of graphs gives us an advantage of storing “what-goes-where” as the relationships and is very helpful for lookup.
In order to annotate all the text we have, we'll run the annotation function. For larger amounts of data, we recommend using the apoc.periodic.iterate function from Neo4j's APOC library to produce faster results.
```
CALL apoc.periodic.iterate(
  'MATCH (n:Article) RETURN n',
  'CALL ga.nlp.annotate({
      text: n.Text,
      id: id(n),
      pipeline: "pipeline_name",
      checkLanguage: false
    })
   YIELD result
   MERGE (n)-[:HAS_ANNOTATED_TEXT]->(result)',
  {batchSize: 1, iterateList: false})
```
This will break down the text, annotate it with respective identifiers and create necessary relationships.
By annotating everything and identifying what our content is, we can check how the text has been broken down from sentences to words to tags.
In the image above, the article (the pink node) branches into a sentence that has further been broken down into words (the red nodes). This will happen for every sentence in the article and for all the subsequent article nodes.
After decomposing the entire text into parts, let’s see if we can get some insight from the text using Cypher queries.
Let’s see the top five most mentioned organizations in these articles:
```
MATCH (n:NER_Organization)
RETURN n.value, size((n)<-[:HAS_TAG]-()) as Frequency
ORDER BY Frequency DESC LIMIT 5
```
Let’s also do the same for top five most mentioned people:
```
MATCH (n:NER_Person)
RETURN n.value, size((n)<-[:HAS_TAG]-()) as Frequency
ORDER BY Frequency DESC LIMIT 5
```
We can also check which companies have the most mentions in a negative context:
```
MATCH (tag:NER_Organization)-[]-(s:Sentence:Negative)-[]-(:AnnotatedText)-[]-(a:Article)
WHERE NOT tag:NER_Person AND NOT tag:NER_O
RETURN distinct tag.value, count(a) as articles
ORDER BY articles DESC;
```
In the process of building a knowledge graph, we'll extract some information regarding who works where and in what capacity or role:
```
MATCH (o:NE_Organization)-[]-(p:NE_Person)-[]-(t:TagOccurrence)
WHERE NOT t:NE_Person AND t.pos IN [['NN']]
RETURN DISTINCT p.value AS Person, collect(distinct t.value) as Title, o.value AS Company
ORDER BY Company
```
We can also construct meaningful relationships using these insights, hence moving towards actually building a knowledge graph.
Note that natural language processing inside a Neo4j database has allowed us to transform plaintext news articles into a knowledge graph. This is a sweet spot for Neo4j: we were able to turn 510 business news articles into meaningful insight!
In the first phase, we extracted information from raw text to create a knowledge graph and tie the pieces together. The knowledge graph integrates all of the information using links and helps reasoners derive new knowledge from the data.
Natural Language Query for Neo4j
Some of the main applications of our knowledge graph are conversational interface and natural language search for a database. In the following section, we’ll build a natural language query feature for our database and knowledge graph.
In order to map a question to the graph, we’ll need to break down the question to its atomic elements as well. We’ll do this in Python using SpaCy for NLP, and we'll also use the py2neo driver to connect to the graph.
Quite close to what we did in the graph, we will be running the question through an NLP pipeline to produce tags and receive linguistic identifiers regarding the words involved in the question. Identifying keywords in the question helps us seek out what exactly is asked.
First, we'll have to connect to our Neo4j database using py2neo:
```python
graph = Graph("bolt://localhost:7687", auth=("username", "password"))
tx = graph.begin()
```
Let's actually enter a question and work on it. For demonstration purposes, we’ll ask the question, "Where does Larry Ellison work?"
From here, SpaCy will tokenize the question and tag its contents into linguistic identifiers. These tokens – generated from their respective questions – will serve as our set of identifiers, and tell the system to look for suitable nodes in the graph.
This image below breaks down the question into words and tags them with its identifiers.
In the following block of code, we are running an NLP pipeline on the question which is tokenizing, tagging, performing named entity recognition and dependency parsing, and rendering it as the diagram above.
```python
nlp = en_core_web_sm.load()
doc = nlp(question)
ner = [(X.text, X.label_) for X in doc.ents]
print(filtered_question)
for token in doc:
    print((token.text, token.pos_, token.tag_, token.dep_))
print(ner)
displacy.render(doc)
```
We have a named entity recognized from the question which we have to use in the Cypher query as a parameter. For this, we can use our logic – that if the question contains any named entities, we’ll put them in a parameter according to their types (such as person, organization, location).
The block of code below checks if we have more than one named entities, or if the type of named entity is an organization. It then puts it in a key-value pair.
```python
if len(ner[0]) > 2:
    parms_2 = {"name1": name1, "name2": name2}
    print(parms_2)
else:
    parms = {}
    parms["names"] = names
    print("Parms: ", parms)

if ner[0][1] == "ORG":
    parms_1 = {}
    parms_1["org"] = [ner[0][0]]
    print(parms_1)
```
This is going to be used as a parameter in the following Cypher query:
```
Tagged names are: [['Larry', 'Ellison']]
Parms: {'names': ['Larry Ellison']}
```

As mentioned before, we will use the set of tokens generated from the question to identify and construct the Cypher query we want, by putting appropriate parameters into it.
Here, for the parameter, "Larry Ellison," our Cypher query would look something like this:
MATCH (p:Person) -[r:WORKS_AT]-> (o:Organization) WHERE p.value = "Larry Ellison"
By declaring components of our query as strings (variable=”(var:Label)”) and building the query using those components, we produce:
```python
query1 = '''
match {} {} {}
where p.value IN $person
return p.value as Name, r.value as Works_as, o.value as at
'''.format(label_p, works, label_o)
```
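For illustration, the string components could be defined like this (the exact label and relationship strings here are assumptions chosen to match the WORKS_AT query shown earlier):

```python
# Query components as strings, assembled into a Cypher query.
label_p = "(p:Person)"
works = "-[r:WORKS_AT]->"
label_o = "(o:Organization)"

query1 = '''
match {} {} {}
where p.value IN $person
return p.value as Name, r.value as Works_as, o.value as at
'''.format(label_p, works, label_o)
```

Swapping in different components (for example a LOCATED_IN relationship) produces a different query from the same template.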
Finally, we'll run the query shown below with the graph.run() function of py2neo to produce our results. Although this will generate a simple results table in textual format, we recommend taking this query and running it in the Neo4j browser in order to visualize relationships.
```python
for word, tag in tags:
    if word in ['work', 'do']:
        verb = word
        print('We have a verb: ', verb)
        print(graph.run(query1, parms).data())
```
Since we have the verb "work" in the question we asked, we can construct a query around that verb by selecting the appropriate relationship in the query.
So if you have a knowledge graph containing 'n' people, we can utilize this query to get the same information for all those people, implying that designing the right queries answers almost all questions for a knowledge graph.
While we can run the entire set of NLP operations inside a Neo4j database and then extract and create something valuable, we can also take an already available knowledge graph and use NLP to query it with a natural language interface.
Let’s take a look at how to do that!
NLP on Top of Neo4j
In this example, we will not be running NLP pipelines inside a Neo4j database. Instead, we will be using NLP libraries in Python to build a near-natural language querying feature on top of the Panama Papers dataset by ICIJ loaded into Neo4j. The Panama Papers dataset contains records of offshore investments by entities as well as various organizations' roles regarding these investments.
Essentially, we'll be running an NLP pipeline on the question instead and utilize the tokens from the question to produce the Cypher query we desire. The objective here is to create a conversational system on top of an existing graph database, and perhaps utilize graph algorithms such as PageRank to get the most influential entities in the graph.
We have implemented a simple conditional programming paradigm to build a query using string-building based on the tokens in the question.
Let's first connect to the Neo4j database:
```python
def connect():
    graph = Graph("bolt://localhost:7687", auth=("username", "password"))
    tx = graph.begin()
    print('Connected...')
```
We will then ask users to input a question. For this example, let's ask: ‘Which locations come under Panama jurisdiction?"
```python
def ask_question():
    question = input("INPUT: ")
    print("\n")
```
Now that we have a question, let's run some NLP functions on it.
First, let's tokenize and tag the question. The image below breaks down the question into words and tags them with its identifiers. This is how it should look:
Similar to what we did with the question before, in the code below, we are running an NLP pipeline for tokenizing, tagging, performing named entity recognition and dependency parsing, and rendering it as the diagram above.
```python
def tag_question():
    doc = nlp(question)
    tokens = [token.text for token in doc]
    pos = [pos.pos_ for pos in doc]
    tags = list(zip(tokens, pos))
    ner = [(ner.text, ner.label_) for ner in doc.ents]
```
Next, we need to build parameters based on the named entities in the question in order to use it in the Cypher query:
```python
def parms_builder():
    if len(ner) == 1:
        if (ner[0][1] == 'GPE') or (ner[0][1] == 'LOC'):
            if (ner[0][0] == "US") or (ner[0][0] == "USA"):
                country = 'United States'
            elif ner[0][0] == "UK":
                country = 'United Kingdom'
            else:
                country = ner[0][0]
            parms = {}
            parms["country"] = country
```
Now that we have tokens and parameters, we can start building the Cypher query based on these. Try to pick out at least two sets of unique tokens from the question and use them in the condition that selects the query. We can also use the any() and all() functions from Python to check against the elements in our token list.
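For example, a keyword rule over the token list can be written as:

```python
# Match a question's tokens against required / optional keyword sets.
tokens = ['where', 'does', 'larry', 'ellison', 'work']

def matches_all(tokens, keywords):
    # every keyword must appear in the question
    return all(k in tokens for k in keywords)

def matches_any(tokens, keywords):
    # at least one keyword must appear in the question
    return any(k in tokens for k in keywords)
```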
Once we are in the loop, we need to specify which node and relationship to look for, and pass them as parameters to the string builder as well.
Finally, we'll use the graph.run() function in py2neo to run the Cypher query and produce our results:
```python
if all(t in tokens for t in ['come', 'under', 'jurisdiction']):
    match_0 = "MATCH {} ".format(label_entity)
    query = match_0 + ("WHERE entity.jurisdiction_description CONTAINS $country "
                       "RETURN collect(distinct entity.countries) as Locations, "
                       "entity.jurisdiction_description as Jurisdiction")
    print(graph.run(query, parms).to_table())
```
In the table below, we can clearly see there are a lot of countries which indicate that entities from these locations used Panama as their jurisdiction of choice. (Note: The output has been reformatted to look more readable.)
This system has a limited scope for the types of questions that can be asked of it, as the rules for fetching a query are programmed manually without the use of machine learning (ML). However, by increasing the number and type of questions, it is quite possible to increase the model’s scope.
An ML approach was tried on a sample to detect the nodes automatically, but it did not yield sufficient accuracy for its use in this application. However, a larger vocabulary could result in improved accuracy.
Moving Forward
The path forward would be to replace dynamic string building with dynamic Cypher query building, or automatically building a Cypher query from the question. This would require complex modelling techniques as well as a suitable training-testing dataset underneath in order to produce the best results.
Octavian’s blog post regarding sequence translation is a good read for those who desire to obtain a deeper technical perspective. Another paper called "Cognitive Natural Language Search Using Calibrated Quantum Mesh" published by the Institute of Electrical and Electronics Engineers (IEEE) also leverages the graph structure for natural language search.
Circling back, the structure of the graph model makes natural language processing easier. Graphs have the power to transform raw data into information – such as knowledge graphs – and this ability makes Neo4j a good candidate for solving ever-growing problems in AI.
All code used in this blog post along with the data used in our examples can be found on Github.
Ready to learn more about how graph technology enhances your AI projects? Read the white paper, Artificial Intelligence & Graph Technology: Enhancing AI with Context & Connections
Get My Copy
Get My Copy | https://neo4j.com/blog/accelerating-towards-natural-language-search-graphs/ | CC-MAIN-2022-40 | refinedweb | 2,993 | 52.8 |
objd 0.3.3
objd is an Object Oriented framework for Building Minecraft Datapacks with ease
objD
Objective Builder Just for Datapacks
objD is a framework for developing Datapacks for Minecraft. It uses the Dart programming language.
Why a framework?
A framework is a set of utilities to reduce the amount of repetitive work. I've tried many ways in the past to achieve a quick and easy way to generate Datapacks for Minecraft: A own programming language mcscript, several graphical online generators at stevertus.com or premade utilities.
Instead of developing a language, why not using the tools and parser an other language gives you? By building a custom frame around it you get the reliability and the extensive debugging tools in Editors.
The generation of Datapacks is easy,fast and aims to be less repetitive and modular by utilizing the concept of Widgets as one of the main features.
Installation
[Temporary]
You need the Dart SDK for this framework. Download and install it from
I would also recommend Visual Studio Code along with the dart plugin to edit dart files very conveniently.
Make a new folder that will house your project wherever you want(I would recommend datapacks folder).
And inside of that create a file named
pubspec.yaml and another folder called
lib.
Open the pubspec.yaml file and add
```yaml
name: [unique_namespace]
dependencies:
  objd: ^0.3.3
```
Also remember to replace the
[unique_namespace] with your own project name.
And run
$ pub get
with the console in the new folder(VS code does this automatically)
Tip: You can also generate a full project with a console command. read more
Getting started
Let's get started and create our first Dart file, lib/main.dart.
Then we import the framework with:
import 'package:objd/core.dart';
Then we need to create a new datapack project:
```dart
void main(List<String> args){
  createProject(
    Project(
      name: "This is going to be the generated folder name",
      target: "./", // path for where to generate the project
      generate: CustomWidget() // The starting point of generation
    ),
    args
  );
}
```
Next of we need to build this custom Widget:
```dart
class CustomWidget extends Widget {
  @override
  Widget generate(Context context){

  }
}
```
To get more details on why we build it like that, check out the Widget documentation.
Like we can see the generate method, which is called on build, has to return another Widget, in our case an instance of the Pack class.
```dart
Widget generate(Context context){
  return Pack(
    name: "mypack",
    main: File( // optional
      'main'
    )
  );
}
```
What we are doing right now is to generate a new subpack with a name(This will be the namespace of your functions later) and a main file(runs every tick) with the name "main.mcfunction".
You can run the project already and the result should be a pack with an empty main.mcfunction file.
So lets add some functionality to our project in our main file. We can use the Log Widget to display a message to the player.
main: File( 'main', child: Log('Hello World') )
But how to add a list of Actions then? Well there's also an Widget for that:
For.of
child: For.of([ Log('Hello World'), Command('setblock 0 0 0 air') ])
So now we have a List of Widget, so we can use as many as we want
Whole code:
import 'package:objd/core.dart'; void main(List<String> args){ createProject( Project( name:"This is going to be the generated folder name", target:"./", generate: CustomWidget() ), args ); } class CustomWidget extends Widget { @override Widget generate(Context context){ return Pack( name:"mypack", main: File( // optional 'main', child: For.of([ Log('Hello World'), Command('setblock 0 0 0 air') ]) ) ); } }
Documentation and Examples #
The example folder contains a boilerplate to start off.
There are many more widgets for objD including basic Widgeds, Command Wrappers, Text and Util Widgets. So check out the documentation at or my youtube channel. In this video playlist are all objD related videos: | https://pub.dev/packages/objd | CC-MAIN-2020-34 | refinedweb | 652 | 63.29 |
Learn By Agenda application
Expand Messages
- I have put my latest release, "Learn By Agenda" on my web page at
This is a proof of concept text mode learning system based on space
interval recall techniques. I am using it myself to study `Olelu Hawai`i
with some modest success. Your mileage will vary; the user interface is a
little odd at first, and bugs are guaranteed to be present.
I only have one more app in the works, a diabetic monitor which I
currently use, but it is not ready for prime time. After that I will
cease to annoy everyone with my little toys!
Putting together all these sample apps and macros has been an interesting
learning experience for me, and quite entertaining. I hope someone out
there will find something of use, even if only as sample code.
Bob Newell
Santa Fe, New Mexico
- If deemed appropriate I'd like to start a little discussion here comparing
Agenda and ZOOT. This is prompted by some private e-mails saying that?
- Zoot has an option to automatically configure a rule
for the name of the category. The option can be turned
on or off. In that sense, it is a step beyond Agenda.
Zoot covers most of the areas that Agenda did. Some of
the things are configured a bit differently but, with
very few exceptions, there is almost nothing that
Agenda can do that Zoot can not.
One noteable feature that is missing is the ability to
have separators within views. I miss this in Zoot but
there are MANY things that Zoot can do that Agenda
could not.
-bs
--- Bob Newell <chungkuo@...> wrote:
> If deemed appropriate I'd like to start a little=====
> discussion here comparing
> Agenda and ZOOT. This is prompted by some private
>?
>
>
> To unsubscribe from this list simply send an e-mail
> to:
> pimlist-unsubscribe@yahoogroups.com
>
> Your use of Yahoo! Groups is subject to
>
>
>
-Bruce Sohn
Albuquerque, NM
__________________________________________________
Do you Yahoo!?
HotJobs - Search new jobs daily now
- Bruce Sohn wrote: "One noteable feature that is missing is the ability to
have separators within views. I miss this in Zoot but there are MANY things
that Zoot can do that Agenda could not."
Agreed. Agenda made (or makes) a distinction between raw data (the
items and notes in a database), how you organize the data (the category
hierarchy), and how you show the date (views). In Zoot, the category
hierarchy and the views are essentially merged. Zoot will not let you show
more than one view at a time. Agenda's approach, which is similar to the
set-up of several more conventional database products such as MS-Access and
Paradox, lets you tailor the end result much more precisely.
Agenda also lets you create extra date categories and has numeric and
unindexed categories that you don't find in Zoot.
Zoot's biggest advantage over Agenda is that it is much easier to
import data into the Windows program. One of Agenda's great drawbacks in
the days of DOS was that there was no easy way to take information from a
WordPerfect file or another application and pour it into an Agenda database.
Zoot clips text beautifully and can be set up to automatically the titles
and URL's of every web-site you visit. Large text files and be imported and
automatically split into manageable chunks as long as you can identify a
consistent delimiter.
Zoot also has a nifty "linking" feature. A hypertext link can be
established between any two items -- even items in different databases.
Over time, the user can build up a useful network of cross-references, a
little like the "Memex" device that V. Bush described in that famous article
in the Atlantic Monthly. This network supplements the folder system as a
way to subdivide and manage information.
Finally, Zoot offers an advanced query features that lets you search
across many databases at once. A brilliant idea. I wish the databases I
use at work had this feature.
James B. Salla
Chicago, IL
- On 5 Nov 2002 at 10:04, James Salla wrote:
> Finally, Zoot offers an advanced query features that lets youHi James,
> search
> across many databases at once. A brilliant idea. I wish the
> databases I use at work had this feature.
I have used Lotus Magellan to do this with Agenda files for years,
via Magellan's "explore" and "index" features. Very cool! Magellan
and Agenda seem like they were made for one another.
The premier DOS programs: Agenda, Magellan, Grandview, WordPerfect
6.1, and 123 remain very powerful and useful tools.
Cordially,
Charles
-----------------------------------------------------------
Charles Bradley
Hopewell Presbyterian Church, Columbia, TN
Union Grove Presbyterian Church, Columbia, TN
Grace Presbyterian Church, Hohenwald, TN
"Let Thy works praise Thee, that we may love Thee; and let us love
Thee, that
Thy works may praise Thee." Aurelius Augustine
cwbrad@...
-----------------------------------------------------------
Your message has been successfully submitted and would be delivered to recipients shortly. | https://groups.yahoo.com/neo/groups/pimlist/conversations/topics/2614 | CC-MAIN-2015-27 | refinedweb | 827 | 63.9 |
curl_easy_setopt − set options for a curl easy handle
#include <curl/curl.h>
CURLcode curl_easy_setopt(CURL *handle, CURLoption option, parameter);, will not be copied by the library. Instead you should keep them available until libcurl no longer needs them. Failing to do so will cause very odd behavior or even crashes. libcurl will need them until you call curl_easy_cleanup(3) or you set the same option again to use a different pointer.
The handle is the return code from a curl_easy_init(3) or curl_easy_duphandle(3) call.
CURLOPT_VERBOSE
Set the parameter to non-zero non-zero parameter tells the library to include the header in the body output. This is only relevant for protocols that actually have headers preceding the data (like HTTP).
CURLOPT_NOPROGRESS
A non-zero parameter tells the library to shut off the built-in progress meter completely.
Future versions of libcurl is likely to not have any built-in progress meter at all.
CURLOPT_NOSIGNAL
Pass a long. If it is non-zero,)
Consider building libcurl with ares support to enable asynchronous DNS lookups. It enables nice timeouts for name resolves without signals. and it will abort the transfer and return CURLE_WRITE_ERROR.
This function may be called with zero bytes data if the transfered told)
If you set the callback pointer to NULL, or doesn is also known. (Opion added in 7.12.3)
CURLOPT_IOCTLDATA
Pass a pointer that will be untouched by libcurl and passed as the 3rd argument in the ioctl callback set with CURLOPT_IOCTLFUNCTION. (Option added in 7.12.3)
CURLOPT_PROGRESSFUNCTION
Function pointer that should match the curl_progress_callback prototype found in <curl/curl.h>. This function gets called by libcurl instead of its internal equivalent with a frequent interval during operation (roughly once per second). Usage of the CURLOPT_PROGRESSFUNCTION callback is not recommended when using the multi interface.
CURLOPT_NOPROGRESS must be set to FALSE stream is the one you set with the CURLOPT_WRITEHEADER option. The callback function must return the number of bytes actually taken care of, or return -1 to signal error to the library (it will cause it to abort the transfer with a CURLE_WRITE_ERROR return code).. Using this function allows for example_ERRORBUFFER
Pass a char * to a buffer that the libcurl may store human readable error messages in. This may be more helpful than just the return code from curl_easy_perform. The buffer must be at least CURL_ERROR_SIZE big. non-zero parameter tells the library to fail silently if the HTTP code returned is equal to or larger than 400. The default action would be to return the page normally, ignoring that code.
CURLOPT_URL
The actual URL to deal with. The parameter should be a char * to a zero terminated string. The string must remain present until curl no longer needs it, as it doesn’t copy the that are supported.
The string given to CURLOPT_URL must be url-encoded and following the RFC 2396 ().
CURLOPT_URL is the only option that must be set before curl_easy_perform(3) is called. is set. The CURLOPT_PROXY option does however override any possibly set environment variables.
Starting with 7.14.1, the proxy host string can be specified the exact same way as the proxy environment variables,_SOCKS4 (added in 7.15.2) CURLPROXY_SOCKS5. The HTTP type is default. (Added in 7.10)
CURLOPT_HTTPPROXYTUNNEL
Set the parameter to non-zero to get the library to tunnel all operations through a given HTTP proxy. There is a big difference between using a proxy and to tunnel through it. If you don’t know what this means, you probably don’t want this tunneling option.
CURLOPT_INTERFACE
Pass a char * as parameter. This set. Note that port numbers are only valid 1 - 65535. (Added in 7.15.2)
CURLOPT_LOCALPORTRANGE
Pass a long. This is the number of attempts libcurl should do to find a working local port number. It starts with the given CURLOPT_LOCALPORT and adds one to the number for each retry. Setting this value to 1 or below will make libcurl do only one try for exact port number. Note that port numbers by nature is a scarce resource (0) to completely disable caching, or set to -1 to make the cached entries remain forever. By default, libcurl caches this info for 60 seconds.
CURLOPT_DNS_USE_GLOBAL_CACHE
Pass a long. If the value is non-zero, makse with the host and user name (to find the password only) or the a .netrc file in the current user’s home directory. (Added in 7.10.9)
CURLOPT_USERPWD
Pass a char * as parameter, which should be [user name]:[password] to use for the connection. Use CURLOPT_HTTPAUTH to decide authentication method.
When using NTLM, you can set authentication method.
CURLOPT_HTTPAUTH
Pass a long as parameter, which
is set to a bitmask, to tell libcurl what authentication
method(s) you want it to use. The available bits are listed
below._USERPWD
option. (Added in 7.10.6)
CURLAUTH_BASIC
HTTP Basic authentication. This is the default choice, and the only method that is in wide-spread use and supported virtually everywhere. This is sending be also used along with another)
CURLOPT_AUTOREFERER
Pass a non-zero parameter.
CURLOPT_FOLLOWLOCATION
A non-zero parameter.
CURLOPT_UNRESTRICTED_AUTH
A non-zero parameter_PUT
A non-zero parameter tells the library to use HTTP PUT to transfer data. The data should be set with CURLOPT_READDATA and CURLOPT_INFILESIZE.
This option is deprecated and starting with version 7.12.1 you should instead use CURLOPT_UPLOAD.
CURLOPT_POST
A non-zero parameter tells the library to do a regular HTTP post. This will also make the library use the a "Content-Type: application/x-www-form-urlencoded" header. (This is by far the most commonly used POST method).
Use the CURLOPT_POSTFIELDS option to specify what data to post and CURLOPT_POSTFIELDSIZE option. a non-zero value, it will automatically set CURLOPT_NOBODY to 0 (since 7.14.1).
If you issue a POST request and then want to make a HEAD or GET using the same re-used handle, you must explictly set the new request type using CURLOPT_NOBODY or CURLOPT_HTTPGET or similar.
CURLOPT_POSTFIELDS
Pass a char *.
Using POST with HTTP 1.1 implies the use of a "Expect: 100-continue" header. You can disable this header with CURLOPT_HTTPHEADER as usual.
To make multipart/formdata posts (aka rfc1867 contents as in ’Accept:’ (no data on the right side of the colon), the internally used header will get disabled. Thus, using this option you can add new headers, replace internal headers and remove internal headers. To add a header with no contents, make the contents. So if your alias is "MYHTTP/9.9", Libcurl will not treat the server as responding with HTTP version 9.9. Instead Libcurl will use the value set by option CURLOPT_HTTP_VERSION..
Using this option multiple times will only make the latest string override the previously request. non-zero to mark this as a new cookie "session". It will force libcurl to ignore all cookies it is about to load that are "session cookies" from the previous session. By default, libcurl always stores and loads all cookies, independent if they are session cookies)
CURLOPT_HTTPGET
Pass a long. If the long is non-zero, this forces the HTTP request to get back to GET. usable if a POST, HEAD, PUT or a custom request have been used previously using the same curl handle.
When setting CURLOPT_HTTPGET to a non-zero value,, an network interface name (under Unix) or just a ’-’ letter to let the library use your systems default IP address. Default FTP operations are passive, and thus won’t use PORT.
You disable PORT again and go back to using the passive version by setting this option to NULL.
CURLOPT_QUOTE
Pass a pointer to a linked list of FTP commands to pass to the server prior to your ftp request. This will be done before any other FTP commands are issued (even before the CWD command). The linked list should be a fully valid list of to append strings (commands) to the list, and clear the entire list afterwards with curl_slist_free_all(3). Disable this operation again by setting a NULL to this option.
CURLOPT_POSTQUOTE
Pass a pointer to a linked list of FTP commands to pass to the server after your ftp transfer request..
CURLOPT_FTPLISTONLY
A non-zero parameter tells the library to just list the names of an ftp directory, instead of doing a full directory listing that would include file sizes, dates etc.
This causes an FTP NLST command to be sent. Beware that some FTP servers list only files in their response to NLST; they might not include subdirectories and symbolic links.
CURLOPT_FTPAPPEND
A non-zero parameter tells the library to append to the remote file instead of overwrite it. This is only useful when uploading to an ftp site.
CURLOPT_FTP_USE_EPRT
Pass a long. If the value is non-zero, FALSE non-zero, it tells curl to use the EPSV command when doing passive FTP downloads (which it always does by default). Using EPSV means that it will first attempt to use EPSV before using PASV, but if you pass FALSE non-zero, curl will attempt to create any remote directory that it fails to CWD into. CWD is the command that changes working directory. (Added in 7.10.7) a non-zero value,_SSL
Pass a long using one of the
values from below, to make libcurl use your desired level of
SSL for the ftp transfer. (Added in 7.11.0)
CURLFTPSSL_NONE
Don’t attempt to use SSL.
CURLFTPSSL_TRY
Try using SSL, proceed as normal otherwise.
CURLFTPSSL_CONTROL
Require SSL for the control connection or fail with CURLE_FTP_SSL_FAILED.
CURLFTPSSL_ALL
Require SSL for all communication or fail with CURLE_FTP_SSL_FAILED.
CURLOPT_FTPSSLAUTH
Pass a long using one of the
values from below, to alter how libcurl issues "AUTH
TLS" or "AUTH SSL" when FTP over SSL is
activated (see CURLOPT_FTP_SOURCE_URL
When set, it enables a FTP third party transfer, using the set URL as source, while CURLOPT_URL is the target.
CURLOPT_SOURCE_USERPWD
Set "username:password" to use for the source connection when doing FTP third party transfers.
CURLOPT_SOURCE_QUOTE
Exactly like CURLOPT_QUOTE, but for the source host.
CURLOPT_SOURCE_PREQUOTE
Exactly like CURLOPT_PREQUOTE, but for the source host.
CURLOPT_SOURCE_POSTQUOTE
Exactly like CURLOPT_POSTQUOTE, but for the source host. very’.
CURLOPT_TRANSFERTEXT
A non-zero parameter_CRLF
Convert Unix newlines to CRLF newlines on transfers.). Pass a NULL to this option to disable the use of ranges.
CURLOPT_RESUME_FROM
Pass a long as parameter. It contains the offset in number of bytes that you want the transfer to start from. Set this option to 0 to make the transfer start from the beginning (effectively disabling resume). user instead of GET or HEAD when doing an HTTP request, or instead of LIST or NLST when doing an ftp directory listing. This is useful for doing DELETE or other more or less obscure HTTP requests. Don’t do this at will, make sure your server supports the command first. a non-zero value, non-zero parameter.
CURLOPT_INFILESIZE_LARGE
When uploading a file to a remote site, this option should be used to tell libcurl what the expected size of the infile is. This value should be passed as a curl_off_t. (Added in 7.11.0)
CURLOPT_UPLOAD
A non-zero parameter tells the library to prepare for an upload. The CURLOPT_READDATA and CURLOPT_INFILESIZEE or CURLOPT_INFILESIZE_LARGE and FTP.
The last modification time of a file is not always known and in such instances this feature will have no effect even if the given time condition would have not been met.
CURLOPT_TIMEVALUE
Pass a long as parameter. This should be the time in seconds since 1 jan 1970, and the time will be used in a condition as specified with CURLOPT_TIMECONDITION. on cumulative average during the transfer, the transfer will pause to keep the average rate less than or equal to the parameter value. (default: 0, unlimited)
CURLOPT_MAX_RECV_SPEED_LARGE
Pass a curl_off_t as parameter. If an upload exceeds this speed on cumulative average during the transfer, the transfer will pause to keep the average rate less than or equal to the parameter value. (default: 0, unlimited)
CURLOPT_MAXCONNECTS
Pass a long. The set number will be the persistent connection cache size. The set amount will be the maximum amount of simultaneously open connections that libcurl may cache. Default is 5, and there isn’t much point in changing this value unless you are perfectly aware of how this work and changes libcurl’s behaviour. This concerns connection using any of the protocols that support persistent connections.
When reaching the maximum limit, curl uses the CURLOPT_CLOSEPOLICY to figure out which of the existing connections to close to prevent the number of open connections to increase.
If you already have performed transfers with this curl handle, setting a smaller MAXCONNECTS than before may cause open connections to get closed unnecessarily.
CURLOPT_CLOSEPOLICY
Pass a long. This option sets what policy libcurl should use when the connection cache is filled and one of the open connections has to be closed to make room for a new connection. This must be one of the CURLCLOSEPOLICY_* defines. Use CURLCLOSEPOLICY_LEAST_RECENTLY_USED to make libcurl close the connection that was least recently used, that connection is also least likely to be capable of re-use. Use CURLCLOSEPOLICY_OLDEST to make libcurl close the oldest connection, the one that was created first among the ones in the connection cache. The other close policies are not support yet.
CURLOPT_FRESH_CONNECT
Pass a long. Set to non-zero non-zero to make the next transfer explicitly close the connection when done. Normally, libcurl keep all connections alive when done with one transfer in case there comes a succeeding one that can re-use them. This option should be used with caution and only if you understand what it does. Set to 0 to have libcurl keep the connection open for possibly. A non-zero parameter tells the library to perform any required proxy authentication and connection setup, but no data transfer.
This option is useful with the CURLINFO_LASTSOCKET option to curl_easy_getinfo(3). The library can set up the connection and then the application can obtain the most recently used socket for special data transfers. (Added in 7.15.2)
CURLOPT_SSLCERT
Pass a pointer to a zero terminated string as parameter. The string should be the file name of your certificate. The default format is "PEM" and can be changed with CURLOPT_SSLCERTTYPE.
CURLOPT_SSLCERTTYPE
Pass a pointer to a zero terminated string as parameter. The string should be the format of your certificate. Supported formats are "PEM" and "DER". (Added in 7.9.3)
CURLOPT_SSLCERTPASSWD
Pass a pointer to a zero terminated string as parameter. It will be used as the password required to use the CURLOPT_SSLCERT certificate.
This option is replaced by CURLOPT_SSLKEYPASSWD and should only be used for backward compatibility. You never needed a pass phrase to load a certificate but you need one to load your private key._SSLKEYPASSWD
Pass a pointer to a zero terminated string as parameter. It will be used as the password required to use the CURLOPT_SSLKEY private key..
CURLOPT_SSLVERSION
Pass a long as parameter to
control what version of SSL/TLS to attempt to use. The
available options are:
CURL_SSLVERSION_DEFAULT
The default action. When libcurl built with OpenSSL, this will attempt to figure out the remote SSL protocol version. Unfortunately there are a lot of ancient and broken servers in use which cannot handle this technique and will fail to connect. When libcurl is built with GnuTLS, this will mean SSLv3. nonzero value.
Note that option is by default set to the system path where libcurl’s cacert bundle is assumed to be stored, as established at build. (Added in 7.9.8).
The checking this option controls is of the identity that the server claims. The server could be lying. To control lying, see CURL, , − and + can be used as operators. Valid examples of cipher lists include ’RC4-SHA’, ´SHA1+DES´, ’TLSv1’ and ’DEFAULT’. The default list is normally set when you compile OpenSSL.
You’ll find more details about cipher lists on this URL:
CURLOPT_KRB4LEVEL
Pass a char * as parameter. Set the krb4 security level, this also enables krb4 awareness. This is a string, ’clear’, ’safe’, ’confidential’ or ’private’. If the string is set but doesn’t match one of these, ’private’ will be used. Set the string to NULL to disable kerberos4. The kerberos support only works for FTP.
CURLOPT_PRIVATE
Pass a char *, you MUST use the locking methods in the share handle. See curl_share_setopt(3) for details..
curl_easy_init(3), curl_easy_cleanup(3), curl_easy_reset(3), | http://man.cx/curl_easy_setopt(3) | CC-MAIN-2015-27 | refinedweb | 2,745 | 66.23 |
Hi Eric, An update... > $ echo '#include <fcntl.h>' > foo.c > $ bgxlc_r -E foo.c > fcntl.out Apparently my user hacked around an issue here. As produced, the configure script identified a fcntl.h under /bgsys/drivers/V1R4M2_200_2010-100508P/ppc/gnu-linux/powerpc-bgp-linux/sys-include/bits/ which broke his compilation due to the header beginning with #ifndef _FCNTL_H # error "Never use <bits/fcntl.h> directly; include <fcntl.h> instead." #endif That file is attached as fcntl_error.h. The user mucked around a bit and made the code include a different fcntl.h from "/bgsys/drivers/V1R4M2_200_2010-100508P/ppc/gnu-linux/powerpc-bgp-linux/sys-include/" which works okay. It is attached as fcntl_okay.h. It worked okay. > $ echo '#include <float.h>' > foo.c > $ bgxlc_r -E foo.c > float.out This produces a zero-length float.out. My user reports there's no float.h file sitting alongside other headers in the usual places. > We already special-case AIX, using a different mode of > '$CC -C -E' to make the preprocessed output more verbose; is there some > compiler switch for bgxlc_r (maybe -C, maybe some other spelling) that > makes the output more verbose? Trying again with -C (documented in the attached man page, in case that's helpful at all) a la > $ echo '#include <float.h>' > foo.c > $ bgxlc_r -C -E foo.c > float.out produces the output attached as float_with_-C.out. I'm happy to keep trying to provide what's necessary to get this sorted, but feel free to bail on the effort whenever this turns into a time suck. That said, thank you, Rhys
fcntl.out
Description: Binary data
fcntl_error.h
Description: Text Data
fcntl_okay.h
Description: Text Data
bgxlc_r.man.gz
Description: GNU Zip compressed data
float_with_-C.out
Description: Binary data | https://lists.gnu.org/archive/html/bug-gnulib/2013-10/msg00046.html | CC-MAIN-2021-25 | refinedweb | 297 | 62.85 |
Created on 2019-07-09 10:32 by Dschoni, last changed 2020-01-10 12:29 by pingchaoc.
A long description of the issue can be found on SO here:
TL;DR:
This fails on Windows:
from datetime import datetime
datetime.fromtimestamp(1).timestamp()
Looks like a similar problem to issue29097.
>>> from datetime import datetime
>>> d = datetime.fromtimestamp(1)
>>> d.timestamp()
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
OSError: [Errno 22] Invalid argument
Neijwiert tried to analyze it on Stack Overflow:
This indeed seems to be a duplicate of 29097, which is fixed in Python 3.7, so we can close this bug. Thank you for your report Dschoni, and thank you for finding the duplicate Ma Lin!
issue29097 fixed a bug in `datetime.fromtimestamp()`.
But this issue is about `datetime.timestamp()`, which is not fixed yet.
Ah, my mistake. The examples all use `datetime.fromtimestamp`, so I didn't notice that it was failing only on the `timestamp` side. Re-opening, thanks! | https://bugs.python.org/issue37527 | CC-MAIN-2020-40 | refinedweb | 163 | 68.67 |
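Not part of the original thread, but for readers hitting this on affected Python versions: a common cross-platform workaround is to keep the datetime timezone-aware, since `timestamp()` on an aware datetime does arithmetic against the epoch instead of calling the platform's `mktime()`-style conversion (a sketch, assuming Python 3.3+ for `datetime.timestamp()`):

```python
from datetime import datetime, timezone

# Aware round-trip: fromtimestamp() with an explicit tz returns an
# aware datetime, and timestamp() on an aware datetime computes
# (dt - epoch) directly, so no platform time conversion is involved
# and it behaves the same on Windows and on Unix.
dt = datetime.fromtimestamp(1, tz=timezone.utc)
print(dt.timestamp())  # 1.0
```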
So I found some code that's the Java equivalent of the bubble sort algorithm. I understand parts of it, but the thing is I don't get how it works, specifically the loops, which I assume do much of the work. Can somebody please explain it to me? I swear, if I had a thousand dollars I would give it to you if you explained it well, but all I've got is points to give... so please, I know it's tedious, but again, please:
public class Bubble {
    public static void main(String args[]) {
        System.out.println("Bubble sort algorithm:");
        int name[] = {7, 3, 4, 20, 6};
        for (int x = 0; x < name.length; x++) {
            System.out.print(name[x] + " ");
        }
        for (int x = name.length - 1; x > 0; x--) {
            for (int y = 0; y < name.length - 1; y++) {
                int temp = name[y];
                int tempo = name[y + 1];
                if (name[y] > name[y + 1]) {
                    name[y] = tempo;
                    name[y + 1] = temp;
                }
            }
        }
        System.out.println();
        for (int x = 0; x < name.length; x++) {
            System.out.print(name[x] + " ");
        }
    }
}
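Not an answer from the thread, but for reference, the same loop structure can be sketched in Python with comments on what each loop contributes (an illustration only; the Java code above is the original):

```python
def bubble_sort(name):
    # Outer loop: after each full pass, the largest remaining value has
    # "bubbled" to the end of the array, so at most len(name) - 1 passes
    # are ever needed; x just counts those passes down.
    for x in range(len(name) - 1, 0, -1):
        # Inner loop: walk the adjacent pairs left to right and swap
        # whenever the left element is bigger than the right one.
        for y in range(len(name) - 1):
            if name[y] > name[y + 1]:
                name[y], name[y + 1] = name[y + 1], name[y]
    return name

print(bubble_sort([7, 3, 4, 20, 6]))  # [3, 4, 6, 7, 20]
```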
This RFE is filed on behalf of thomask@netscape.com and the CMS team.

Today, NSS nicknames identify a subject name, not an individual cert. If a user has multiple certs with the same subject name, there is only one nickname that identifies all those certs. The nickname does not unambiguously identify any one of the certs that share that subject name.

When an NSS application calls functions such as CERT_FindCertByNickname or CERT_FindUserCertByUsage to obtain a cert (say, to be used as an SSL server cert or SSL client auth cert), NSS picks the most recently issued cert (that is currently within its validity period) with that nickname. The cert chosen this way is ALWAYS a valid cert for use at the present time. The cert returned by CERT_FindUserCertByUsage is always valid for the specified use at the present time. But nonetheless, the user may wish to use a different specific cert than the one chosen by the NSS functions named above.

So, Thomas proposes to change the operation of functions like those named above to make it possible to unambiguously identify a cert through those APIs. The idea is to effect this change without needing to change the products that use NSS. A new NSS shared library could allow users of existing server and client products to simply change the strings they use, and be able to identify specific particular certs.

The proposal is to have these functions first behave exactly as they do now, so as to maximize backward compatibility. Then, if the cert search fails to find a nickname match, parse the nickname according to an alternative syntax, and look for a cert according to that information. There have been several proposals for alternative nickname syntax, including:

Proposal 1) An RFC 2253 ASCII-encoded issuer name followed by an ASCII-encoded serial number.

Proposal 2) An NSS nickname (as today) followed by " serial#" and a serial number in ASCII-encoded decimal.
Proposal 3) An NSS nickname (as today) followed by an expiration date expressed as an ASCII encoded generalTime.
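To make Proposal 2 concrete, here is a hedged sketch (in Python, not NSS C; the helper name `parse_nickname_with_serial` is made up for illustration) of how a "<nickname> <serial>#" suffix could be split off from a plain nickname:

```python
def parse_nickname_with_serial(nickname):
    # Proposal 2 syntax sketch: "<nickname> <decimal-serial>#".
    # Returns (base_nickname, serial) when the trailing " <digits>#"
    # part is present, or (nickname, None) otherwise, so plain
    # nicknames keep their existing meaning.
    if not nickname.endswith("#"):
        return nickname, None
    base, sep, digits = nickname[:-1].rpartition(" ")
    if sep and digits.isdigit():
        return base, int(digits)
    return nickname, None

print(parse_nickname_with_serial("Server-Cert 4097#"))  # ('Server-Cert', 4097)
print(parse_nickname_with_serial("Server-Cert"))        # ('Server-Cert', None)
```

The key design point is that the serial suffix is only interpreted after the ordinary nickname lookup fails, which preserves backward compatibility for existing nicknames that happen to contain spaces or digits.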
See bug 210709 and the bugs on which it depends to see why I think Proposal 1 is not a good idea.
Hmm, just looked at NES's code. It looks like they do their own version of find-certificate-by-nickname. I will work with NES to solve this problem.

CERTCertificate*
SSLSocketConfiguration::FindServerCertFromNickname(const char* name) const
{
    CERTCertList* clist;
    clist = PK11_ListCerts(PK11CertListUser, NULL);
    CERTCertListNode* cln;
    for (cln = CERT_LIST_HEAD(clist); !CERT_LIST_END(cln, clist);
         cln = CERT_LIST_NEXT(cln)) {
        CERTCertificate* cert = cln->cert;
        const char* nickname = (const char*) cln->appData;
        if (!nickname) {
            nickname = cert->nickname;
        }
        if (0 == strcmp(name, nickname)) {
            // we found our cert
            return cert;
        }
    }
    return NULL;
}

void
SSLSocketConfiguration::set_cert_and_key()
{
    SECStatus secstatus = SECFailure;
    // check the certificate
    SECCertTimeValidity certtimestatus;
    // Get own certificate and private key
    // later, we may need to do a PK11_FindCertsFromNickname (note the plural)
    // because there may be several certs under the same nickname,
    // * for example in the case of certs signed by distinct CAs
    // * there can also be certs with different expiration dates, but the singular
    //   PK11_FindCertFromNickname always returns the most recent cert so we are OK
    //   for that case, except if a cert that isn't yet valid has been installed ...
    // * DH support: if both RSA and DSA certs need to be installed on the same
    //   server, we'll actually need to set both types of certs on the socket ...
    servercert = FindServerCertFromNickname((char*) servercertnickname.data());
    ...
For reference, here is the NES bug filed
Independently, I suggest we put the change into NSS: certutil is currently using PK11_FindCertsFromNickname() In mozilla/security/nss/lib/pk11wrap/pk11cert.c static long pk11_getSerialNumberFromNickname(char *nickname) { /* Alternative Nickname Format: <nickname> <serial># */ int len = 0; int i; char *nicknameDup = NULL; long rv = -1; int isNum = 0; nicknameDup = PORT_Strdup(nickname); len = PORT_Strlen(nicknameDup); if (len <= 0) { goto done; } if (nicknameDup[len-1] == '#') { nicknameDup[len-1] = '\0'; for (i=len-2; i >= 0; i--) { if (nicknameDup[i] == ' ') { if (isNum) { rv = atol(nicknameDup+i+1); } goto done; } if (nicknameDup[i] < '0' || nicknameDup[i] > '9') { goto done; } else { isNum = 1; } } } done: if (nicknameDup) PORT_Free(nicknameDup); return rv; } static CERTCertificate * pk11_FindCertFromNicknameWithSN(char *nickname, void *wincx) { char *nicknameDup = NULL; int i; long sn = -1; long certSN = -1; CERTCertificate *rvCert = NULL; CERTCertList *certList = NULL; CERTCertListNode *node = NULL; sn = pk11_getSerialNumberFromNickname(nickname); if (sn == -1) goto done; /* error */ nicknameDup = PORT_Strdup(nickname); for (i = PORT_Strlen(nicknameDup); i >= 0; i--) { if (nicknameDup[i] == ' ') { nicknameDup[i] = '\0'; goto next; } } next: certList = PK11_FindCertsFromNickname(nicknameDup, wincx); if (certList == NULL) { goto done; /* error */ } for (node = CERT_LIST_HEAD(certList); !CERT_LIST_END(node, certList); node = CERT_LIST_NEXT(node)) { CERTCertificate *cert = node->cert; certSN = DER_GetInteger(&cert->serialNumber); if (certSN == sn) { rvCert = CERT_DupCertificate(cert); goto done; /* success */ } } done: if (nicknameDup) PORT_Free(nicknameDup); if (certList) CERT_DestroyCertList(certList); return rvCert; } CERTCertificate * PK11_FindCertFromNickname(char *nickname, void *wincx) { ... 
    }
    /* Test if we are using the new serial number format */
    if (rvCert == NULL && pk11_getSerialNumberFromNickname(nickname) != -1) {
        rvCert = pk11_FindCertFromNicknameWithSN(nickname, wincx);
    }
    if (slot) {
        PK11_FreeSlot(slot);
    }
    if (nickCopy) PORT_Free(nickCopy);
    return rvCert;
loser:
    if (slot) {
        PK11_FreeSlot(slot);
    }
    if (nickCopy) PORT_Free(nickCopy);
    return NULL;
#endif
}
Thomas, Before you "fix" NES, let me explain why I had to implement my own version of findcertbynickname. The problem was that an identical cert could exist in multiple tokens, along with multiple private keys. Even though the NSS cert nickname specified the slot (token:nickname), the NSS APIs could return the cert structure with an undetermined slot, if there were multiple copies of the cert. Only NSS 4.0 could solve this problem with the API. Of course, this would be a major change for all NSS applications, and not just a simple recompile. If we change the definition of an NSS nickname to map to a unique cert rather than any cert matching a subject, we will still have to deal with the token issue. This isn't just a corner case for servers. Currently for instance, I have my email certs and private keys living both on a smartcard and in the softoken. Also, I think it is likely that some applications that are aware of the NSS indexing by subject, eg. PSM and possibly parts of the NES admin, will break if we make this change. Note that I'm not at all saying we shouldn't change it, but that this is a major change that is probably best dealt with in NSS 4.0 . Since we are now working on NSS 3.9, it seems like an appropriate time to start seriously scheduling NSS 4.0 work.
Bob, please work with Steve and Thomas to resolve this RFE.
We should not change the semantics of the current find cert by nickname call. Adding a new function with a new name may be reasonable.

Applications can get this current functionality without changes to NSS by creating their own FindCertByNicknameWithSerial():
  - Call CERT_CreateNicknameCertList() for the nickname part.
  - Walk the CertList looking for a cert with the requested serial number.

NOTE: This API is *not* guaranteed to identify a unique cert. The nickname is the nickname of the subject, not the issuer. Two certs may have the same subject and same serial number with different issuers.

bob
Bob, the purpose of this RFE is to avoid having to build new binaries of certain old NSS-based server products. The argument is that if the syntax accepted by NSS functions that use nicknames is extended to facilitate unique cert selection, then those old servers can be used with new NSS libs, without rebuilding the servers.
The problem is the only way to get that is to also change PK11_ListCerts to return these kinds of nicknames. That would break existing apps as well. Since it already requires a rebuild of those servers, then the problem should be fixed correctly.
I agree with Bob. There is no "fix" for this that can be done only in NSS. Some changes will have to be made to the server applications.
Can we not localize the change to PK11_FindCertFromNickname? I understand that the DS server team uses this function to retrieve the server certificate for SSL. If we worry about consistency issues, then maybe we should introduce a new function, PK11_FindCertFromSuperNickname(), as suggested. Then, in the long run, all servers should use this PK11_FindCertFromSuperNickname(). For server certificate selection in SSL configuration, we should provide both automatic and manual configuration. Maybe by default and under normal circumstances, we will use the automatic certificate selection provided in PK11_FindCertFromNickname(). But in some scenarios, administrators do want to be able to pick one particular certificate for whatever reason. Currently, we do not provide that functionality in NSS. I agree that the application could implement this feature. But if this is useful to all server products, why don't we put this feature in NSS? Providing a brand new infrastructure to support manual selection could be a lot of work. That is why I proposed to have a way to leverage the existing API.
Thomas, Even if we localize the change to PK11_FindCertFromNickname, the web server still won't work with that sole change, because it uses PK11_ListCerts to locate certs, not PK11_FindCertFromNickname. PK11_ListCerts returns a list of certs with their nickname. If we change it to return a nickname + SN, existing applications, including NES, will be broken by that change. So we would have to change the web server anyway. It can't work with just an NSS patch to PK11_FindCertFromNickname. DS may be different. I don't know about other products. I suspect there is no one single rule here as far as the certificate search goes. There are lots of different ways to find certificates in NSS. Nickname is only one of them. Other applications have moved away from using nicknames completely. It is not accurate to say that we don't provide the function you want in NSS. We don't provide a one-line function call to do what you want. However, you can certainly already perform the search you want by getting a subject list of all the certs under the given nickname, and then looking at their serial numbers and filtering to the specific one that you need. There is nothing in NSS that stops you from implementing what you need today. However, what you really want is to add functionality to a number of different NSS-based applications, and you believe that a change to NSS will automatically do that. My take on this is that it will not. I have been trying to tell you that in our previous meeting on the issue, as Bob is telling you now. I have no problem with adding a one-line function that does the search that you want, so that you can modify all the applications that need the new behavior to call this new cert search function. In standard NSS terminology, we would probably not modify the format of the nickname string, but instead have a function called "PK11_FindCertByNicknameAndSN" which would take 2 arguments - one for the nickname and one for the SN.
If we add this new function to do the search, it would be my preference to use that sort of prototype. However, I don't have a problem with putting the 2 arguments in a single string together as long as that argument is not called a nickname. I don't particularly like "SuperNickname", but we can probably find something less confusing.
No, I don't think we ever asked for any change to PK11_ListCerts. That function should continue to return the plain nickname. At least if we extend CERT_FindCertByNickname, users can override the nickname in their application's config file to add the serial number.
By the way - if we were able to make more extensive changes to the servers handling of certificate configuration, it would be to remove support for nicknames completely. They seem to cause far more confusion than is necessary.
Steve, Once more ... NES uses PK11_ListCerts to locate certs, not PK11_FindCertFromNickname. It works by enumerating the certs into a list, then matching the nickname string in the list against the string in the config file. This is why changes to CERT_FindCertFromNickname, or PK11_FindCertByNickname, won't help this particular application. There may be other cases. If we make the change, I don't think you will get all the results you expect. IMO we should not be lazy about this problem, because it will not work as you expect. We should look at each application that needs this new functionality, see exactly which functions they call, and modify them to call the new search function where needed. If we keep the 2 arguments in one string, the application changes should not be expensive - essentially renaming a couple of function calls. However, the application's code does need to be reviewed.
My issue is PK11_FindCertByNickname() has a semantic and a meaning. You want a new function with a different semantic. This new function needs to be distinct. Where it goes is six of one, half a dozen of the other. There is definitely server-specific code in NSS that clients never use. There is also shared code in servercore that implements functions unique to the way servers use NSS. This can be implemented in either place.

--------------------

So here's one proposal: We provide two new functions:

    char *PK11_GetStringForUniqueCert(CERTCertificate *);
    CERTCertificate *PK11_FindUniqueCertByString(const char *string);

Under the covers the string is the nickname and one of:
  - issue date
  - serial number
  - certificate hash

I personally prefer certificate hash, since it's the only one guaranteed to be unique, but we can discuss which it is. It is even possible that PK11_FindUniqueCertByString() will accept any of the three, properly identified.

Example: "ServerCert serial:05:06:7d:54:45:80:12" (assuming a verisign cert)

bob
RE: Steve and removing nicknames: That is certainly doable, and probably desirable. It doesn't require any changes to NSS. Other apps have done this already. bob
Is the certificate hash displayed by certutil? If it is not currently displayed by our utility, people may have a hard time coming up with the hash to do the listing.
Yes, certificate hash can be displayed in certutil, but certutil in all NSS releases <= 3.8.2 displays the wrong hash (bug 220016). This bug was recently fixed by Nelson in the upcoming NSS 3.9 release.
Moved to NSS 3.10. Should we resolve this bug WONTFIX or implement Bob's alternate proposal in comment 16?
Unsetting target milestone in unresolved bugs whose targets have passed. | https://bugzilla.mozilla.org/show_bug.cgi?id=210941 | CC-MAIN-2017-26 | refinedweb | 2,389 | 61.77 |
"Animated" ASCII Cactus
hey, would you like to plz visit this cool repl i made?
thanks so much!!!
hey, would you pls visit this cool repl i just made, and give me some tips and tricks!
thanks so much!!!
:)
Great job! I'd recommend using sys.stdout.write instead of print though since it is faster and also, I'd recommend using
"\n".join(cactus) to put the entire ASCII cactus into one print function for speed as well. I did it in my fork if you want to see what I mean .
@CodingCactus Though, now you should try and get the CEO of Repl.it to do this version in BASIC!
@Highwayman Well, I thought since the Repl.it CEO did @CodingCactus 's main logo in BASIC, maybe he could do this version too. I didn't know he already did it.
@CodingCactus Did you notice that two out of the three trending repls are yours? When the Repl.it CEO made a post with your logo, you were just famous. Now, you are ultra famous. About as famous as you can get on Repl.it. LOL!
@CodingCactus Also, it was just like yesterday when you got to 500 cycles and now you have 604...
@Highwayman Remember the thing I was doing trying to see all of the Repl.it languages not in the official list? I figured out why it wouldn't work... Repl.it has request rate limits. I ended up getting an error saying that access was denied to my IP address... Do you have any idea how to bypass the rate limits or at least do you know what the limit is so that I can slow down the program to not go over?
@CodingCactus I don't know how to do that... I just want to see all the languages Repl.it supports!
@AmazingMech2418 i dunno, do you get a new ip every time you go on the internet?? If so can you do a run until you get blocked (save stuff in file) then go off internet, then run again from where you left off?
@CodingCactus Do you have any idea what this means though?
I was doing it on a repl and then this happened after I tried to incorporate a proxy.
@CodingCactus I got it to work again, but now it is showing a 404 error for everything...
@CodingCactus yeah thats cool. They are used to make multiline comments. They also happen to be treated as strings so they can also be formatted and printed.
@CodingCactus no problem. U wrote a bunch of quotes when you only needed one set of comment strings. I dont want people to waste their time.
Nice!
I feel like you went through necessary work, is there a reverse function for strings in Python? If so, it would've used a single variable.
Also, the loop constantly joins the arrays, you could've done this:
cactus2 = "\n".join([ ..., ... ])
Now it just sets the variable and reads from it.
@StudentFires ik, the join was added later on, so it was just thrown in there, I'll change it for you :)
@CodingCactus Ha, you don't need to, I do stuff like that because I'm an optimization freak, I'm been doing some insane stuff when it comes to code recently.
Just wait until my template comes out, lol!
@CodingCactus It's definitely going to be something else... unfortunately it's not going to be too accessible, it'll be in C++ and JavaScript.
Most Repl users don't know C++.
@CodingCactus
You are a god and I just wanted to say that you are my hero and I look up to you. Keep up the great work! I look forward to seeing later projects! GREAT CODING!!!!
from termcolor import colored

s = colored('[]','grey',attrs=['concealed'])
O = colored('[]','white','on_white')
G = colored('[]','yellow','on_yellow')
f = colored('[]','cyan','on_cyan')
M = colored('[]','green','on_green')

print(M)
you are a god
@nt998302 it is known.
@Highwayman lol | https://replit.com/talk/share/Animated-ASCII-Cactus/33911 | CC-MAIN-2022-21 | refinedweb | 669 | 83.76 |
Hi,
I wrote the following code. It runs, except for one point that I don't know how to handle.
How can I get the total of temperature for EACH day? I got the figure to be 30 for all 5 days.
Thanks for help.
gogo
#include <iostream>
#include <string>
#include <cctype>
#include <iomanip>
using namespace std;
int main()
{
const int x = 5;
const int y = 3;
const string week[] = {"Mon", "Tue", "Wed", "Thu", "Fri"};
float day[x][y] = {0};
float average[y] = {0};
float sum = 0;
float sum1 =0;
int count = 0;
for (int i=0; i<x; i++)
{
cout << endl;
for (int j=0; j<y; j++)
{
cout << "Pls enter " << week[i] << "\'s temperature ";
cout << "with reading #" << (j+1) << " :";
cin >> day[i][j];
}
}
cout << endl;
for (int a=0; a<y; a++)
for (int b=0; b<x; b++)
average[a] += day[b][a];
cout << "Average morning temp is " << average[0]/x << "." << endl;
cout << "Average afternoon temp is " << average[1]/x << "." << endl;
cout << "Average evening temp is " << average[2]/x << "." << endl;
cout << endl;
for (int e=0; e<x; e++)
{
cout << week[e] << endl;
for (int f=0; f<y; f++)
{
cout << "reading # " << (f+1) << "is : " << day[e][f] << endl;
count = count++;
sum += day[e][f];
sum1 += day[0][f]; //Don't understand??
}
cout << endl;
}
cout << "Total 1-5 = " << sum << endl << endl;
for (int g=0; g<x; g++)
cout << week[g] << "\'s total is " << sum1 << endl << endl;
return 0;
} | http://cboard.cprogramming.com/cplusplus-programming/5453-multi-dimension-array-question.html | CC-MAIN-2015-35 | refinedweb | 244 | 76.25 |
What is Kuzzle #
If you're on this page it's probably because you need a backend for your mobile, web or IoT application.
Once again you were preparing to develop a backend from scratch... Well, maybe not entirely from scratch, because you probably planned to use some kind of framework (and there are a lot of them!) to make things easier for you.
Those frameworks allow you to develop faster by providing a predefined structure, classes and configurations.
However, you will still have to develop the majority of the basic features:
- Storing and searching data
- Permission management
- User authentication
- Access to data and functionalities through an API
Each of these features will take time. Time to develop but also time to:
- Debug
- Test
- Maintain
- Secure
In short, you were going to spend a lot of time on code that doesn't bring any value to your users but is nevertheless essential.
This time could have been used for many other things:
- Development of business functionalities
- UI / UX of frontend applications
- 100% coverage by automated tests
- Implementation of devops best practices
- Marketing of your product
- ...
It is on the basis of this failure to optimize development time that we decided to start developing Kuzzle 5 years ago and that we have been devoting all our efforts to it ever since.
How it works #
Kuzzle is a backend with ready-to-use features that can be extended in the same way as any framework.
When you start Kuzzle, you automatically have access to an API exposing a wide range of features, such as storing and searching data, permission management, and user authentication.
Then you can develop your custom business and high level features by extending Kuzzle API or modifying API methods behavior.
Example: Basic Kuzzle application
import { Backend } from 'kuzzle';

// Instantiate a new application
const app = new Backend('playground');

// Declare a "greeting" controller
app.controller.register('greeting', {
  actions: {
    // Declare a "sayHello" action
    sayHello: {
      handler: request => `Hello, ${request.input.args.name}`
    }
  }
});

// Start the application
app.start()
  .then(() => {
    app.log.info('Application started');
  });
Complete ecosystem #
In addition to Kuzzle, we are developing many other projects to facilitate the use of our backend.
All these projects are also available under the Apache-2 license on Github.
Admin Console #
The Admin Console is a Single Page Application (SPA) written in Vue.js.
It is used to manage its data and the user permissions system.
As it is a single-page application (SPA), no data related to your Kuzzle application will pass through our servers, so you can use the online version available at.
SDKs #
We develop many SDKs to facilitate the use of Kuzzle in applications.
These SDKs are available for the most common languages and the majority of frontend development platforms:
- Javascript / Typescript : Node, React, React Native, Vue.js, Angular, etc
- Dart : Flutter
- Csharp : Xamarin, .NET
- Java / Kotlin : Android, JVM
Kourou #
Kourou is a command line interface that facilitates development with Kuzzle.
It can be used in particular to execute any API action or even code snippets directly.
Business plugins #
We also develop and distribute plugins for Kuzzle.
These plugins allow you to use the functionalities of other services such as Amazon S3 or Prometheus.
The community is also able to develop and distribute its own plugins to enrich the ecosystem.
Expert Professional Support #
The Kuzzle backend and all our projects are developed by a team of engineers based in Montpellier, France.
Our multidisciplinary team of experts is capable of addressing any type of issue and assisting projects of all sizes.
You can thus pass the development and production phases with a relaxed spirit, knowing that you can rely on quality professional support.
Meet the community #
We federate a community of developers using Kuzzle around the world.
Whether you want to ask a question on StackOverflow, check out the Kuzzle awesome list, watch our video on YouTube, or discuss Kuzzle on Discord, the community and the core team will be there to help you. | https://doc.kuzzle.io/core/2/guides/introduction/what-is-kuzzle/ | CC-MAIN-2022-05 | refinedweb | 650 | 51.48 |
Python provides a variety of time and date processing methods, mainly in the time and datetime modules. In this article I will show you some time and datetime module examples and how to use them in Python.
1. Python Represents Time In Two Ways.
1.1 Timestamp.
The timestamp is relative to the offset in seconds from 1970.1.1 00:00:00. Timestamp is unique.
1.2 Struct Time Tuple.
Struct time is a tuple of time elements; a struct_time tuple has nine elements, which are defined as follows. Please note that the struct_time for the same timestamp varies depending on the time zone.
- tm_year : This element represent year. It is four digits, for example : 2019.
- tm_mon : Represent month, two digits, for example : 03, value range is 1 – 12.
- tm_mday : Represent day in month, two digits, for example : 29, value range is 1 – 31.
- tm_hour : Represent hour in a day, two digits, for example : 19, value range is 0 – 23.
- tm_min : Represent minutes, for example 19, value range 0 – 59.
- tm_sec : Represent seconds, for example 19, value range 0 – 61 (60 and 61 allow for leap seconds).
- tm_wday : Day of the week, 0 means Monday, value range is 0 – 6.
- tm_yday : Day of the year, value range is 1 – 366.
- tm_isdst : Represent daylight saving time, default value is -1.
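For example, the struct_time for the Unix epoch (timestamp 0, converted in UTC so the result does not depend on your time zone) has these field values:

```python
import time

t = time.gmtime(0)  # the Unix epoch, in UTC

# The nine struct_time fields described above:
print(t.tm_year, t.tm_mon, t.tm_mday)   # 1970 1 1
print(t.tm_hour, t.tm_min, t.tm_sec)    # 0 0 0
print(t.tm_wday)                        # 3 -> Thursday (Monday is 0)
print(t.tm_yday)                        # 1 -> first day of the year
```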
2. Python time Module.
In Python documentation, time is classified in the Generic Operating System Services. In other words, it provides functions closer to the Operating System level. As you can see from the documentation, the time module is built around Unix Timestamp. This module mainly includes a class struct_time, along with several other functions and related constants.
It is important to note that most functions in the time module invoke the same-name function of the OS platform's C library. Some functions are therefore platform-specific, and the same-name function may behave differently on different OS platforms.
Another point is that because it is based on Unix timestamps, the range of dates it can express is limited — roughly 1970–2038 on platforms with 32-bit time values. If you're writing code that deals with dates outside that range, you had better consider using the datetime module.
2.1 Python time Module Methods.
- Before invoking any time module methods, you should first import the time module.
import time
- time.sleep(secs) : Delay the specified time (seconds) and continue running.
>>> for i in range(10): ... print(i) ... time.sleep(2) ... 0 1 2 3 4 5 6 7 8 9 >>>
- time.localtime([secs]) : Convert a timestamp to a struct_time of the current time zone. If secs parameter is not provided, then use current time seconds.
>>> time.localtime(time.time()) time.struct_time(tm_year=2019, tm_mon=3, tm_mday=29, tm_hour=16, tm_min=24, tm_sec=9, tm_wday=4, tm_yday=88, tm_isdst=0)
- time.strftime(format[, t]) : Return a string representing the struct_time t according to the specified format string (if t is not provided, the current local time is used).
>>> time.strftime('%Y-%m-%d %H:%M:%S',time.localtime(time.time())) '2019-03-29 16:25:13'
- time.time() : Returns the timestamp of the current time.
>>> time.time() 1553847582.5042903
- time.mktime(t) : Convert a time.struct_time to a timestamp. This method is opposite to time.localtime() which receives a timestamp and returns a struct_time.
>>> s_t = time.localtime(1000000000) >>> s_t time.struct_time(tm_year=2001, tm_mon=9, tm_mday=9, tm_hour=9, tm_min=46, tm_sec=40, tm_wday=6, tm_yday=252, tm_isdst=0) >>> time.mktime(s_t) 1000000000.0
- time.gmtime([secs]) : Similar to the time.localtime() method, the gmtime() method convert a timestamp to a UTC time zone (0 time zone) based time.struct_time object.
>>> time.gmtime(time.time()) time.struct_time(tm_year=2019, tm_mon=3, tm_mday=29, tm_hour=8, tm_min=41, tm_sec=6, tm_wday=4, tm_yday=88, tm_isdst=0)
- time.clock() : It should be noted that the meaning of the returned value differs between operating systems. On UNIX systems, it returns "process time," which is a floating point number in seconds. On WINDOWS, the first call returns the actual running time of the process, and subsequent calls return the elapsed time since the first call. As the example below shows, time.clock() is deprecated since Python 3.3 and removed in Python 3.8; use time.perf_counter() or time.process_time() instead.
>>> time.clock() __main__:1: DeprecationWarning: time.clock has been deprecated in Python 3.3 and will be removed from Python 3.8: use time.perf_counter or time.process_time instead 0.094425
- time.asctime([t]) : Convert a tuple or struct_time object which represent time to a string form like ‘Fri Mar 29 19:11:31 2019’.
>>> time.asctime() 'Fri Mar 29 19:11:31 2019'
- time.ctime([secs]) : Converts a timestamp (a floating point number in seconds) to a string form like ‘Fri Mar 29 19:21:37 2019’.
>>> time.ctime() 'Fri Mar 29 19:21:37 2019'
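Putting a few of these together, a timestamp can be converted to a readable string and back without depending on the local time zone, using time.gmtime() together with calendar.timegm() (the UTC counterpart of time.mktime()):

```python
import time
import calendar

# timestamp -> struct_time (UTC) -> formatted string
ts = 0
st = time.gmtime(ts)
text = time.strftime('%Y-%m-%d %H:%M:%S', st)
print(text)  # 1970-01-01 00:00:00

# and back: calendar.timegm() converts a UTC struct_time to a timestamp
assert calendar.timegm(st) == ts
```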
3. Python datetime Module.
Datetime is a lot more advanced than time, you can think datetime module extends time module, datetime module provides more practical functions. The datetime module contains several classes, list as follows.
- timedelta : Mainly used to calculate the time span.
- tzinfo : Provide methods of time zone related.
- time : Provide time related methods only.
- date : Provide date related methods only.
- datetime : Provide both date and time operation methods.
The most used classes in the above list are datetime.datetime and datetime.timedelta. The other two, datetime.date and datetime.time, are not very different from datetime.datetime.
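The typical use of datetime.timedelta is arithmetic on datetime objects — adding or subtracting a span, or taking the difference between two datetimes:

```python
import datetime

dt = datetime.datetime(2019, 3, 29, 20, 0, 0)
span = datetime.timedelta(days=2, hours=4)

# adding / subtracting a span yields a new datetime
print(dt + span)  # 2019-04-01 00:00:00
print(dt - span)  # 2019-03-27 16:00:00

# the difference between two datetimes is a timedelta
print(datetime.datetime(2019, 4, 1) - dt)  # 2 days, 4:00:00
```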
3.1 datetime.datetime class properties and methods.
- datetime.year, datetime.month, datetime.day, datetime.hour, datetime.minute, datetime.second, datetime.microsecond : The meaning of each of these datetime.datetime properties is clear from its name.
Run the following commands in a python3 interactive console:

>>> import datetime
>>>
>>> dt = datetime.datetime.now()
>>> dt.year
2019
>>> dt.month
3
>>> dt.day
29
>>> dt.hour
19
>>> dt.minute
59
>>> dt.seconds
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
AttributeError: 'datetime.datetime' object has no attribute 'seconds'
>>> dt.second
29
>>> dt.microsecond
14285
- datetime.now([tz]) : Return a datetime object for the current time in the given timezone. tz is None or an instance of a datetime.tzinfo subclass.
- datetime.today() : Return current time datetime object according to local timezone.
- datetime.utcnow() : Return current time datetime object according to UTC timezone.
- datetime.fromtimestamp(timestamp[, tz]) : Build a datetime object from a Unix timestamp.
- datetime.date() : Return date object.
- datetime.time() : Return time object.
- datetime.replace(name=value) : datetime attributes are read-only; this method returns a copy of the datetime with the given attributes replaced.
>>> dt = datetime.datetime.now # do not forget () after the method, otherwise you will get below information. >>> dt <built-in method now of type object at 0x7f2c6b348ca0> >>> >>> >>> dt = datetime.datetime.now() >>> dt datetime.datetime(2019, 3, 29, 20, 42, 48, 689605) >>> dt1 = dt.replace(year=2018) >>> dt1 datetime.datetime(2018, 3, 29, 20, 42, 48, 689605) >>> dt datetime.datetime(2019, 3, 29, 20, 42, 48, 689605)
Below is an example about using above methods.
>>> import datetime >>> dt_now = datetime.datetime.now() >>> dt_now datetime.datetime(2019, 3, 29, 20, 9, 50, 454469) >>> dt_now.strftime('%Y-%m-%d %H:%M:%S') '2019-03-29 20:09:50' >>> dt_delta = datetime.timedelta(hours=24) >>> print(dt_now + dt_delta) 2019-03-30 20:09:50.454469 >>> print(dt_now - dt_delta) 2019-03-28 20:09:50.454469 | https://www.code-learner.com/how-to-use-python-time-datetime-module-examples/ | CC-MAIN-2020-40 | refinedweb | 1,185 | 62.34 |
The most common type of security hole in a webpage allows an attacker to execute commands on behalf of a user, but unknown to the user. The cross-site request forgery attack exploits the trust a website has already established with a user's web browser.
In this tutorial, we'll discuss what a cross-site request forgery attack is and how it's executed. Then we'll build a simple ASP.NET MVC application that is vulnerable to this attack and fix the application to prevent it from happening again.
What Is Cross-Site Request Forgery?
A cross-site request forgery (CSRF) attack tricks a victim's browser into sending a request the user never intended — typically to a site where the user is already logged in — exploiting the trust that site has already established with the user's browser.
How the Attack Works
The act of getting the victim to use a link does not require them clicking on a link. A simple image link could be enough:
<img src="" width="1" height="1" />
Including a link such as this on an otherwise seemingly innocuous forum post, blog comment, or social media site could catch a user unaware. More complex examples use JavaScript to build a complete HTTP post request and submit it to the target website.
Building a Vulnerable Web Application in ASP.NET MVC
To follow along, you'll need Visual Studio, which can be downloaded and installed from Microsoft.
Begin by creating a new project and choose to use the Internet Project template. Either View Engine will work, but here I'll be using the ASPX view engine.
We'll add one field to the UserProfile table to store an email address. Under Server Explorer expand Data Connections. You should see the Default Connection created with the information for the logins and memberships. Right click on the UserProfile table and click Open Table Definition. On the blank line under the UserName column, we'll add a new column for the email. Name the column EmailAddress, set its type to
nvarchar(MAX), and check the Allow Nulls option. Now click Update to save the new version of the table.
This gives us a basic template of a web application, with login support, very similar to what many writers would start out with trying to create an application. If you run the app now, you will see it displays and is functional. Press F5 or use DEBUG -> Start Debugging from the menu to bring up the website.
Let's create a test account that we can use for this example. Click on the Register link and create an account with any username and password that you'd like. Here I'm going to use an account called
testuser. After creation, you'll see that I'm now logged in as testuser. After you've done this, exit and let's add a page to this application to allow the user to change their email.
Before we create that page to change the email address, we first need to make one change to the application so that the code is aware of the new column that we just added. Open the
AccountModels.cs file under the
Models folder and update the
UserProfile class to match the following. This tells the class about our new column where we'll store the email address for the account.
[Table("UserProfile")] public class UserProfile { [Key] [DatabaseGeneratedAttribute(DatabaseGeneratedOption.Identity)] public int UserId { get; set; } public string UserName { get; set; } public string EmailAddress { get; set; } }
Open the
AccountController.cs file. After the
RemoveExternalLogins function, add the following code to create a new action. This will get the current email for the logged-in user and pass it to the view for the action:
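A minimal sketch of such a ChangeEmail action — the UsersContext and ChangeEmailModel names are assumptions based on the default Internet template, not confirmed by this article:

```csharp
public ActionResult ChangeEmail()
{
    using (var db = new UsersContext())
    {
        // Look up the profile row for the currently logged-in user
        var user = db.UserProfiles
                     .Single(u => u.UserName == User.Identity.Name);

        // Pass the current email to the view
        return View(new ChangeEmailModel { NewEmail = user.EmailAddress });
    }
}
```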
We also need to add the corresponding view for this action. This should be a file named
ChangeEmail.aspx under the
Views\Account folder:
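A sketch of what this view might contain, inferred from the form shown later in this article; the page directive and content placeholder details are assumptions:

```aspx
<%@ Page Title="" Language="C#" MasterPageFile="~/Views/Shared/Site.Master"
    Inherits="System.Web.Mvc.ViewPage<ChangeEmailModel>" %>

<asp:Content ID="Content1" ContentPlaceHolderID="MainContent" runat="server">
    <h2>Change Email</h2>
    <% using(Html.BeginForm()) { %>
        <%: Html.TextBoxFor(t=>t.NewEmail) %>
        <input type="submit" value="Change Email" />
    <% } %>
</asp:Content>
```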
This gives us a new page we can use to change the email address for the currently logged in user.
If we run this page and go to the
/Account/ChangeEmail action, we now see we currently do not have an email. But we do have a text box and a button that we can use to correct that. First though, we need to create the action which will execute, when the form on this page is submitted.
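A minimal sketch of the action that handles the posted form — again, the UsersContext and ChangeEmailModel names are assumptions based on the default Internet template:

```csharp
[HttpPost]
public ActionResult ChangeEmail(ChangeEmailModel model)
{
    using (var db = new UsersContext())
    {
        // Update the stored email for the currently logged-in user
        var user = db.UserProfiles
                     .Single(u => u.UserName == User.Identity.Name);

        user.EmailAddress = model.NewEmail;
        db.SaveChanges();
    }
    return RedirectToAction("ChangeEmail");
}
```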
After making this change, run the website and again go to the
/Account/ChangeEmail action that we just created. You can now enter a new email address and click the Change Email button and see that the email address will be updated.
Attacking the Site
As written, our application is vulnerable to a cross-site request forgery attack. Let's add a webpage to see this attack in action. We're going to add a page within the website that will change the email to a different value. In the
HomeController.cs file we'll add a new action named
AttackForm.
public ActionResult AttackForm()
{
    return View();
}
We'll also add a view for this named
AttackForm.aspx under the
/Views/Home folder. It should look like this:
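A sketch of the attack view, following the description below (a hidden form that JavaScript submits automatically, targeting a hidden iframe so the victim sees nothing); the exact markup is an assumption:

```aspx
<%@ Page Title="" Language="C#" MasterPageFile="~/Views/Shared/Site.Master"
    Inherits="System.Web.Mvc.ViewPage" %>

<asp:Content ID="Content1" ContentPlaceHolderID="MainContent" runat="server">
    <h2>You have been attacked!</h2>
    <iframe name="hiddenFrame" style="display:none"></iframe>
    <form id="attackForm" action="/Account/ChangeEmail" method="post"
          target="hiddenFrame">
        <input type="hidden" name="NewEmail" value="newemail@evilsite.com" />
    </form>
    <script type="text/javascript">
        // Submit the hidden form as soon as the page loads
        document.getElementById('attackForm').submit();
    </script>
</asp:Content>
```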
Our page helpfully announces its ill intent, which of course a real attack would not do. This page contains a hidden form that will not be visible to the user. It then uses Javascript to automatically submit this form when the page is loaded.
If you run the site again and go to the
/Home/AttackForm page, you'll see that it loads up just fine, but with no outward indication that anything has happened. If you now go to the
/Account/ChangeEmail page though, you'll see that your email has been changed to
newemail@evilsite.com. Here of course, we're intentionally making this obvious, but in a real attack, you might not notice that your email has been modified.
Mitigating Cross-Site Request Forgery.
ASP.NET makes this process easy, as CSRF support is built in. To use it, we only need to make two changes to our website.
Fixing the Problem
First, we must add the unique token to the form to change the user's email when we display it. Update the form in the
ChangeEmail.aspx view under
/Account/ChangeForm:
<% using(Html.BeginForm()) { %>
    <%: Html.AntiForgeryToken() %>
    <%: Html.TextBoxFor(t=>t.NewEmail) %>
    <input type="submit" value="Change Email" />
<% } %>
This new line:
<%: Html.AntiForgeryToken() %> tells ASP.NET to generate a token and place it as a hidden field in the form. In addition, the framework handles placing it in another location where the application can access it later to verify it.
If we load up the page now and look at the source, we'll see this new line, in the form, rendered to the browser. This is our token:
<input name="__RequestVerificationToken" type="hidden" value="..." />
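Conceptually, the defense is a shared secret: one copy of a random token is kept where only the legitimate site can reach it, another copy rides inside the form, and the server rejects any post where the two disagree. Here is a rough, framework-agnostic sketch of that idea in Python. This is not ASP.NET's actual implementation, and the function names are invented for illustration; only the hidden-field name matches what ASP.NET renders.

```python
import secrets

def render_change_email_form(session):
    # One copy of the token is stored server-side with the user's session...
    token = secrets.token_hex(16)
    session["csrf_token"] = token
    # ...and the other copy is embedded in the form as a hidden field.
    return {"NewEmail": "", "__RequestVerificationToken": token}

def handle_change_email_post(session, form):
    # A forged cross-site post cannot read the page, so it cannot know the token.
    if form.get("__RequestVerificationToken") != session.get("csrf_token"):
        raise PermissionError("anti-forgery token validation failed")
    return "email changed to %s" % form["NewEmail"]

session = {}
form = render_change_email_form(session)
form["NewEmail"] = "me@example.com"
print(handle_change_email_post(session, form))   # legitimate post succeeds

forged = {"NewEmail": "newemail@evilsite.com"}   # no token available to the attacker
try:
    handle_change_email_post(session, forged)
except PermissionError as exc:
    print("blocked:", exc)
```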
We also need to make a change to our action to let it know that we've added this token and that it should verify the token before accepting the posted form.
Again this is simple in ASP.NET MVC. At the top of the action that we created to handle the posted form, the one with the
[HttpPost] attribute added, we add the [ValidateAntiForgeryToken] attribute. This tells the framework to verify the anti-forgery token before the action is allowed to run.
Let's test this out. First go to the
/Account/ChangeEmail page and restore the email for your account to a known value. Then we can return to the
/Home/AttackForm page and again the attack code attempts to change our email. If you return to the
/Account/ChangeEmail page again, this time you'll see that your previously entered email is still safe and intact. The changes we made to our form and action have protected this page from the attack.
If you were to look at the attack form directly (easily done by removing the
<iframe> tags around the form on the attack page), you'll see the error that actually happens when the attack form attempts to post.
These two additional lines added to the site were enough to protect us from this attack.
Conclusion
In C#, the MaskedTextBox control provides input validation for user input on a form, such as dates and phone numbers. In other words, it applies a mask that distinguishes proper from improper user input. The MaskedTextBox class represents the Windows masked text box and provides various properties, methods, and events. It is defined in the System.Windows.Forms namespace.
This class is an enhanced version of the TextBox control: it supports a declarative syntax for accepting or rejecting user input, and when the control is displayed at run time, it represents the mask as a sequence of prompt characters and optional literal characters. In C#, you can create a MaskedTextBox in a Windows form in two different ways:
1. Design-Time: It is the easiest way to create a MaskedTextBox as shown in the following steps:
- Step 1: Create a windows form as shown in the below image:
Visual Studio -> File -> New -> Project -> WindowsFormApp
- Step 2: Next, drag and drop the MaskedTextBox control from the toolbox to the form.
- Step 3: After drag and drop you will go to the properties of the MaskedTextBox control to modify MaskedTextBox according to your requirement.
Output:
2. Run-Time: It is a little bit trickier than the above method. In this method, you can create a MaskedTextBox control programmatically with the help of the syntax provided by the MaskedTextBox class. The following steps show how to create a MaskedTextBox dynamically:
- Step 1: Create a MaskedTextBox control using the MaskedTextBox() constructor provided by the MaskedTextBox class.
// Creating a MaskedTextBox control MaskedTextBox mbox = new MaskedTextBox();
- Step 2: After creating the MaskedTextBox control, set the properties of the control provided by the MaskedTextBox class.
// Setting the properties // of MaskedTextBox mbox.Location = new Point(374, 137); mbox.Mask = "000000000"; mbox.Size = new Size(176, 20); mbox.Name = "MyBox"; mbox.Font = new Font("Bauhaus 93", 18);
- Step 3: Finally, add this MaskedTextBox control to the form using the following statement:
// Adding MaskedTextBox // control on the form this.Controls.Add(mbox);
Example:
Output: | https://www.geeksforgeeks.org/c-sharp-maskedtextbox-class/?ref=rp | CC-MAIN-2021-25 | refinedweb | 344 | 53.92 |
#include <typemodel.hh>
Base class for all type definitions
Definition at line 20 of file typemodel.hh.
Foreign => local mapping for merge() and isSame()
Definition at line 111 of file typemodel.hh.
Definition at line 24 of file typemodel.hh.
Definition at line 22 of file typemodel.cc.
Definition at line 27 of file typemodel.cc.
Returns true if
to can be used to manipulate a value that is described by
from
Definition at line 50 of file typemodel.cc.
The set of types this type depends upon
This method returns the set of types that are directly depended-upon by this type
Implemented in Typelib::Indirect, Typelib::Compound, Typelib::Enum, Typelib::Numeric, Typelib::OpaqueType, and Typelib::NullType.
Method that is implemented by type definitions to compare *this with
other.
If
equality is true, the method must check for strict equality. If it is false, it must check if
other can be used to manipulate values of type
*this.
For instance, let's consider a compound that has padding bytes, and assume that *this and
other have different padding (but are the same on every other aspects). do_compare should then return true if equality is false, and false if equality is true.
It must use rec_compare to check for indirections (i.e. for recursive calls).
The base definition compares the name, size and category of the types.
Reimplemented in Typelib::OpaqueType.
Definition at line 88 of file typemodel.cc.
Called by Type::merge when the type does not exist in
registry already. This method has to create a new type in registry that matches the type definition of *this.
All types referenced by *this must be moved to their equivalent in
registry.
Implemented in Typelib::Container, Typelib::Pointer, Typelib::Array, Typelib::Compound, Typelib::Enum, Typelib::Numeric, Typelib::OpaqueType, and Typelib::NullType.
Implementation of the actual resizing. Called by resize()
Reimplemented in Typelib::Array, Typelib::Indirect, and Typelib::Compound.
Definition at line 138 of file typemodel.cc.
The type name without the namespace
Definition at line 30 of file typemodel.cc.
The type category
Definition at line 33 of file typemodel.cc.
The type full name (including namespace)
Definition at line 29 of file typemodel.cc.
The type namespace
Definition at line 31 of file typemodel.cc.
Size in bytes of a value
Definition at line 37 of file typemodel.cc.
Returns the number of bytes that are unused at the end of the compound
Reimplemented in Typelib::Compound.
Definition at line 60 of file typemodel.cc.
true if this type is null
Definition at line 38 of file typemodel.cc.
Deep check that
other defines the same type as self. Basic checks on name, size and category are performed by ==
Definition at line 41 of file typemodel.cc.
Checks that
identifier is a valid type name
Merges this type into
registry: creates a type equivalent to this one in the target registry, reusing possible equivalent types already present in +registry+.
Definition at line 91 of file typemodel.cc.
Base merge method. The default implementation should be fine for most types.
Call try_merge to check if a type equivalent to *this exists in
registry, and do_merge to create a copy of *this in
registry.
Reimplemented in Typelib::Indirect.
Definition at line 112 of file typemodel.cc.
Called by the registry if one (or more) of this type's dependencies is aliased
The default implementation does nothing. It is reimplemented in types for which the name is built from the dependencies' name
Reimplemented in Typelib::Indirect, and String.
Definition at line 34 of file typemodel.cc.
Definition at line 39 of file typemodel.cc.
Definition at line 40 of file typemodel.cc.
Called by do_compare to compare +left+ to +right+. This method takes into account potential loops in the recursion
See do_compare for a description of the equality flag.
One type cannot be equal to two different types. So either we are already comparing left and right and it is fine (return true). Or we are comparing it with a different type and that means the two types are different (return false)
Definition at line 63 of file typemodel.cc.
Update this type to reflect a type resize. The default implementation will resize *this if it is listed in
new_sizes. new_sizes gets updated with the types that are modified.
This is not to be called directly. Only use Registry::resize().
Definition at line 124 of file typemodel.cc.
Changes the type name. Never use once the type has been added to a registry
Definition at line 32 of file typemodel.cc.
Changes the type size. Don't use that unless you know what you are doing. In particular, don't use it once the type is used in a Compound.
Definition at line 36 of file typemodel.cc.
Checks if there is already a type with the same name and definition than *this in
registry. If that is the case, returns it, otherwise returns NULL;
If there is a type with the same name, but whose definition mismatches, throws DefinitionMismatch
Definition at line 96 of file typemodel.cc.
Reimplemented in Typelib::Numeric.
Definition at line 41 of file typemodel.hh.
Definition at line 38 of file typemodel.hh.
Definition at line 40 of file typemodel.hh.
Reimplemented in Typelib::Numeric.
Definition at line 35 of file typemodel.hh. | http://docs.ros.org/en/groovy/api/typelib/html/classTypelib_1_1Type.html | CC-MAIN-2021-10 | refinedweb | 886 | 59.7 |
monads are fractals
On my way back from Uppsala, my mind wandered to a conversation I had with a colleague about the intuition of monads, which I pretty much butchered at the time. As I was mulling this over, it dawned on me.
List(List(1), List(2, 3), List(4))
The above is a
List of
List of
Int. We can intuitively crunch this into a
List of
Int like this:
scala> List(1, 2, 3, 4) res1: List[Int] = List(1, 2, 3, 4)
For
1 to form
List(1) we can also provide a single-parameter constructor
unit: A => F[A]. This allows us to crunch
1 and
4 along with
List(2, 3):
scala> List(List.apply(1), List(2, 3), List.apply(4)) res2: List[List[Int]] = List(List(1), List(2, 3), List(4))
The type signature of crunching, also known as
join, is
F[F[A]] => F[A].
monoids
The crunching operation reminded me of monoids, which consists of:
trait Monoid[A] { def mzero: A def mappend(a1: A, a2: A): A }
We can use monoid to abstract out operations on two items:
scala> List(1, 2, 3, 4).foldLeft(0) { _ + _ } res4: Int = 10 scala> List(1, 2, 3, 4).foldLeft(1) { _ * _ } res5: Int = 24 scala> List(true, false, true, true).foldLeft(true) { _ && _ } res6: Boolean = false scala> List(true, false, true, true).foldLeft(false) { _ || _ } res7: Boolean = true
One aspect of monoid I want to highlight here is that data type alone is not enough to define the monoid. The pair
(Int, +) forms a monoid. Or
Ints are a monoid under addition.
List is a monad under
++
When
List of
List of
Int crunches into a
List of
Int, it's obvious that it uses something like
foldLeft and
++ to make
List[Int].
scala> List(List.apply(1), List(2, 3), List.apply(4)).foldLeft(List(): List[Int]) { _ ++ _ } res8: List[Int] = List(1, 2, 3, 4)
But it could have been something else. For example, it could return a list of sums.
scala> List(List.apply(1), List(2, 3), List.apply(4)).foldLeft(List(): List[Int]) { (acc, xs) => acc :+ xs.sum } res9: List[Int] = List(1, 5, 4)
That's a contrived example, but it's important to think of the composition semantics that a monad encapsulates.
Option is a monad under...?
Let's look at
Option too. Remember the type signature of monadic crunching is
F[F[A]] => F[A], so what we need as examples are nested
Options, not a list of
Options.
scala> Some(None: Option[Int]): Option[Option[Int]] res10: Option[Option[Int]] = Some(None) scala> Some(Some(1): Option[Int]): Option[Option[Int]] res11: Option[Option[Int]] = Some(Some(1)) scala> None: Option[Option[Int]] res12: Option[Option[Int]] = None
Here's what I came up with to crunch
Option of
Option of
Int into an
Option of
Int.
scala> (Some(None: Option[Int]): Option[Option[Int]]).foldLeft(None: Option[Int]) { (_, _)._2 } res20: Option[Int] = None scala> (Some(Some(1): Option[Int]): Option[Option[Int]]).foldLeft(None: Option[Int]) { (_, _)._2 } res21: Option[Int] = Some(1) scala> (None: Option[Option[Int]]).foldLeft(None: Option[Int]) { (_, _)._2 } res22: Option[Int] = None
So
Option apparently is a monad under
_2. In this case I don't know if it's immediately obvious from the implementation, but the idea is to propagate
None, which represents a failure.
what about the laws?
So far we have two functions
join and
unit. We actually need one more, which is
map.
join: F[F[A]] => F[A]
unit: A => F[A]
map: F[A] => (A => B) => F[B]
Given
List[List[List[Int]]], we can write the associativity law by crunching the outermost list first or the middle list first. The following is from one of the chapter notes of Functional Programming in Scala:
scala> val xs: List[List[List[Int]]] = List(List(List(1,2), List(3,4)), List(List(5,6), List(7,8))) xs: List[List[List[Int]]] = List(List(List(1, 2), List(3, 4)), List(List(5, 6), List(7, 8))) scala> val ys1 = xs.flatten ys1: List[List[Int]] = List(List(1, 2), List(3, 4), List(5, 6), List(7, 8)) scala> val ys2 = xs map {_.flatten} ys2: List[List[Int]] = List(List(1, 2, 3, 4), List(5, 6, 7, 8)) scala> ys1.flatten res30: List[Int] = List(1, 2, 3, 4, 5, 6, 7, 8) scala> ys2.flatten res31: List[Int] = List(1, 2, 3, 4, 5, 6, 7, 8)
This can be generalized as:
join(join(m)) assert_=== join(map(m)(join))
Here are the identity laws also from the same notes:
join(unit(m)) assert_=== m join(map(m)(unit)) assert_=== m
This illustrates that we can define a monad without using
flatMap. In actual coding, however, we tend to deal with monads by chaining
flatMaps using
for comprehension, which combines
map and
join.
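These laws are easy to spot-check concretely. Here is a small Python paraphrase, with plain lists standing in for F and join as flatten; Python is used here only so the check stays dependency-free, and the structure mirrors the Scala above:

```python
def unit(a):
    return [a]                            # unit: A => F[A]

def join(mm):
    return [x for m in mm for x in m]     # join: F[F[A]] => F[A]

def fmap(m, f):
    return [f(x) for x in m]              # map: F[A] => (A => B) => F[B]

# associativity: join(join(m)) == join(map(m)(join))
m = [[[1, 2], [3, 4]], [[5, 6], [7, 8]]]
assert join(join(m)) == join(fmap(m, join)) == [1, 2, 3, 4, 5, 6, 7, 8]

# identity laws: join(unit(m)) == m and join(map(m)(unit)) == m
n = [1, 2, 3]
assert join(unit(n)) == n
assert join(fmap(n, unit)) == n

print("monad laws hold for lists")
```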
State monad
When writing in purely functional style, one pattern that arises often is passing a value that represents some state.
val (d0, _) = Tetrix.init() val (d1, _) = Tetrix.nextBlock(d0) val (d2, moved0) = Tetrix.moveBlock(d1, LEFT) val (d3, moved1) = if (moved0) Tetrix.moveBlock(d2, LEFT) else (d2, moved0)
The passing of the state object becomes boilerplate, and error-prone especially when you start to compose the state transition using function calls.
State monad is a monad that encapsulates state transition
S => (S, A).
After rewriting
Tetrix.nextBlock and
Tetrix.moveBlock functions to return
State[GameSate, A], we can write the above code as:
def nextLL: State[GameState, Boolean] = for { _ <- Tetrix.nextBlock moved0 <- Tetrix.moveBlock(LEFT) moved1 <- if (moved0) Tetrix.moveBlock(LEFT) else State.state(moved0) } yield moved1 nextLL.eval(Tetrix.init())
It's hard to say whether it's a good thing to be able to write
for comprehension since it possibly makes less sense to those who are not informed about the
State monad. One good thing is that we now have a type that automates passing
d0,
d1,
d2, ...
What I want to highlight here is that
State monad is a fractal just like
List.
moveBlock function returns a
State and
for comprehension is
State of
State. In the above example, two calls to
moveBlock function can be factored out:
def leftLeft: State[GameState, Boolean] = for { moved0 <- Tetrix.moveBlock(LEFT) moved1 <- if (moved0) Tetrix.moveBlock(LEFT) else State.state(moved0) } yield moved1 def nextLL: State[GameState, Boolean] = for { _ <- Tetrix.nextBlock moved <- leftLeft } yield moved nextLL.eval(Tetrix.init())
This allows us to create mini imperative style programs that can be combined functionally. Note the semantics of
for is limited to one monad at a time.
StateT monad transformer
In the above, my hypothetical
moveBlock returns
State[GameState, Boolean]. When it returns
false the block has either hit a wall or another block so no further action will be taken. If
true do something, is like a mantra of imperative programming. It's also a code smell for functional programming, because you likely want
Option[A] instead. To use
State and
Option simultaneously, we can use
StateT. Now all state transition will also be wrapped in
Option.
Suppose
nextBlock will place the current block at x position 1, and moving left beyond 0 will fail.
scala> import scalaz._, Scalaz._ import scalaz._ import Scalaz._ scala> :paste // Entering paste mode (ctrl-D to finish) type StateTOption[S, A] = StateT[Option, S, A] object StateTOption extends StateTInstances with StateTFunctions { def apply[S, A](f: S => Option[(S, A)]) = StateT[Option, S, A] { s => f(s) } } case class GameState(blockPos: Int) sealed trait Direction case object LEFT extends Direction case object RIGHT extends Direction case object DOWN extends Direction object Tetrix { def nextBlock = StateTOption[GameState, Unit] { s => Some(s.copy(blockPos = 1), ()) } def moveBlock(dir: Direction) = StateTOption[GameState, Unit] { s => dir match { case LEFT => if (s.blockPos == 0) None else Some((s.copy(blockPos = s.blockPos - 1), ())) case RIGHT => Some((s.copy(blockPos = s.blockPos + 1), ())) case DOWN => Some((s, ())) } } } // Exiting paste mode, now interpreting. scala> def leftLeft: StateTOption[GameState, Unit] = for { _ <- Tetrix.moveBlock(LEFT) _ <- Tetrix.moveBlock(LEFT) } yield () leftLeft: StateTOption[GameState,Unit] scala> def nextLL: StateTOption[GameState, Unit] = for { _ <- Tetrix.nextBlock _ <- leftLeft } yield () nextLL: StateTOption[GameState,Unit] scala> nextLL.eval(GameState(0)) res0: Option[Unit] = None scala> def nextRLL: StateTOption[GameState, Unit] = for { _ <- Tetrix.nextBlock _ <- Tetrix.moveBlock(RIGHT) _ <- leftLeft } yield () nextRLL: StateTOption[GameState,Unit] scala> nextRLL.eval(GameState(0)) res1: Option[Unit] = Some(())
The above shows that moving left-left failed, but calling right-left-left succeeded. In this simple example the monads stacked nicely, but this could get hairy.
scopt as a monad
Another thing I was thinking about on the plane was scopt, which is a command line parsing library. One of the issues that has been raised about scopt is that the parser it generates is not composable.
If you think about it, scopt is essentially a
State. You pass a config case class in one end, and after a series of transformations you get the config back. Here's hypothetical code showing how scopt could look:
val parser = { val builder = scopt.OptionParser.builder[Config]("scopt") import builder._ for { _ <- head("scopt", "3.x") _ <- opt[Int]('f', "foo") action { (x, c) => c.copy(foo = x) } _ <- arg[File]("<source>") action { (x, c) => c.copy(source = x) } _ <- arg[File]("<targets>...") unbounded() action { (x, c) => c.copy(targets = c.targets :+ x) } } yield () } parser.parse("--foo a.txt b.txt c.txt", Config()) match { case Some(c) => case None => }
If the
parser's type is
OptionParser[Unit], then
opt[Int] will also be a
OptionParser[A]. This allows us to factor out some of the options into a sub-parser and reuse it given
Config can be reused.
Free monad
Perhaps no other monads feels more fractal-like than
Free monads.
List and
Option are fractal too, but with
Free you're involved in the construction of a nanotech monomer, which then repeats itself to become a giant structure on its own.
For example, using
Tuple2[A, Next],
Free can form a monad that acts like a list by embedding another
Tuple2[A, Next] into
Next like
Tuple2[A, Tuple2[A, Next]], and so on.
What we end up is a data structure that's free of additional context other than the fact that it's a fractal. You're responsible for destructuring the result and do something meaningful. This approach could be simpler than monad transformer.
scala> import scalaz._, Scalaz._ import scalaz._ import Scalaz._ scala> :paste // Entering paste mode (ctrl-D to finish) case class GameState(blockPos: Int) sealed trait Direction case object LEFT extends Direction case object RIGHT extends Direction case object DOWN extends Direction sealed trait Tetrix[Next] object Tetrix { case class NextBlock[Next](next: Next) extends Tetrix[Next] case class MoveBlock[Next](dir: Direction, next: Next) extends Tetrix[Next] implicit val gameCommandFunctor: Functor[Tetrix] = new Functor[Tetrix] { def map[A, B](fa: Tetrix[A])(f: A => B): Tetrix[B] = fa match { case n: NextBlock[A] => NextBlock(f(n.next)) case m: MoveBlock[A] => MoveBlock(m.dir, f(m.next)) } } def nextBlock: Free[Tetrix, Unit] = Free.liftF[Tetrix, Unit](NextBlock(())) def moveBlock(dir: Direction): Free[Tetrix, Unit] = Free.liftF[Tetrix, Unit](MoveBlock(dir, ())) def eval(s: GameState, cs: Free[Tetrix, Unit]): Option[Unit] = cs.resume.fold({ case NextBlock(next) => eval(s.copy(blockPos = 1), next) case MoveBlock(dir, next) => dir match { case LEFT => if (s.blockPos == 0) None else eval(s.copy(blockPos = s.blockPos - 1), next) case RIGHT => eval(s.copy(blockPos = s.blockPos + 1), next) case DOWN => eval(s, next) } }, { r: Unit => Some(()) }) } // Exiting paste mode, now interpreting. scala> def leftLeft: Free[Tetrix, Unit] = for { _ <- Tetrix.moveBlock(LEFT) _ <- Tetrix.moveBlock(LEFT) } yield () leftLeft: scalaz.Free[Tetrix,Unit] scala> def nextLL: Free[Tetrix, Unit] = for { _ <- Tetrix.nextBlock _ <- leftLeft } yield () nextLL: scalaz.Free[Tetrix,Unit] scala> Tetrix.eval(GameState(0), nextLL) res0: Option[Unit] = None scala> def nextRLL: Free[Tetrix, Unit] = for { _ <- Tetrix.nextBlock _ <- Tetrix.moveBlock(RIGHT) _ <- leftLeft } yield () nextRLL: scalaz.Free[Tetrix,Unit] scala> Tetrix.eval(GameState(0), nextRLL) res1: Option[Unit] = Some(())
Except for the type signature, the program portion of the code is identical to the one using
StateTOption.
There's a bit of tradeoff on using this since we'll be responsible for implementing the context, but there's less mess on the type after the initial setup.
summary
Monads are self-repeating structure like fractals, which could be expressed as a function
join: F[F[A]] => F[A]. This property enables monadic values to be composed into larger monadic values. Just like monoid's
mappend,
join can encapsulate some additional semantics (for example
Option and
State). Whenever you find self-repeating structure, you might be looking at a monad.
The composition of monadic types could be achieved via monad transformers, but they are notorious for getting complicated.
Free may offer an alternative way of providing a monadic DSL.
A sphinx theme plugin support extension. #sphinxjp
A sphinx theme plugin extension.
Warning
For users: sphinxjp.themecore will be deprecated. Please use theme plugins with Sphinx-1.2.
Warning
For theme developers: sphinxjp.themecore’s ‘sphinx_themes’ entry point feature is provided on the Sphinx from 1.2(b3) release. However ‘sphinx_directives’ feature is not provided by the Sphinx.
If your theme plugin provides only the ‘sphinx_themes’ entry point, you need to remove the extensions = ['sphinxjp.themecore'] line from your documentation and remove the sphinxjp.themecore dependency from install_requires in setup.py. There is an example of the change needed to support both before and after Sphinx-1.2.
If your theme plugin provides the ‘sphinx_directives’ entry point too, you additionally need to write your setup() function in your extension's root package instead of, for example, setup_directive(), and you need to change your documentation's installation section to say something like: “set extensions=["sphinxjp.themes.s6"] instead of ‘sphinxjp.themecore’”. There is an example of the change needed to support both before and after Sphinx-1.2.
Features
- provides a theme template collection using the setuptools plugin mechanism.
Setup
Make environment with easy_install:
$ easy_install sphinxjp.themecore
Make your plugins
themes
If you want to integrate new theme, write sphinx_themes entry_points in your setup.py:
entry_points = """ [sphinx_themes] path = sphinxjp.themes.s6:get_path """
and write a get_path function that returns the path of the Sphinx themes directory. The Sphinx themes directory includes one or more theme directories.
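As a rough illustration, a typical get_path implementation just returns the directory inside the package where the theme directories ship. The "templates" folder layout below is an assumption for the sketch, not something mandated by sphinxjp.themecore:

```python
import os

def get_path():
    # Return the directory that contains one or more Sphinx theme directories.
    # Here we assume they ship in a "templates" folder next to this module.
    base = os.path.dirname(os.path.abspath(__file__))
    return os.path.join(base, 'templates')
```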
directives
If you want to integrate new directive, write sphinx_directives entry_points in your setup.py:
entry_points = """ [sphinx_directives] setup = sphinxjp.themes.s6:setup_directives """
and write a setup_directives function that receives an app argument and returns None. setup_directives works the same as a Sphinx extension's setup function. See the Sphinx extension documentation for more information.
Requirements
- Python 2.4 or later (3.x is not supported)
- sphinx 1.0.x
License
Licensed under the MIT license . See the LICENSE file for specific terms.
History
0.2.0 (2013/12/10)
- Part of the sphinxjp.themecore feature set was merged into Sphinx-1.2(b3).
0.1.3 (2011/7/9)
- fix fatal bug on version 0.1.2. sorry.
0.1.2 (2011/7/9)
- fixed issue #1: html_theme_path definition in conf.py discard all sphinxjp.themes.* paths.
0.1.1 (2011/7/6)
- fixed namespace package declaration missing, thank you togakushi!
0.1.0 (2011/2/6)
- first release
Today I re-engineered my frequency counter to output frequency to a computer via a USB interface. You might remember that I did this exact same thing two years ago, but unfortunately I fell victim to accidental closed source. When I rigged it the first time, I stupidly tried to get fancy and add a USB interface with V-USB, requiring special drivers and special software code to retrieve the data. The advantage was that the microcontroller spoke directly to the PC USB port via 2 pins requiring no extra hardware. The stinky part is that I've since lost the software I wrote necessary to decode the data. Reading my old post, I see I wrote "Although it's hard for me, I really don't think I can release this [microchip code] right now. I'm working on an idiot's guide to USB connectivity with ATMEL microcontrollers, and it would cause quite a stir to post that code too early." Obviously I never got around to finishing it, and I've since lost the code. Crap! I have this fancy USB "enabled" frequency counter, but no ability to use it. NOTE TO SELF: NEVER POST PROJECTS ONLINE WITHOUT INCLUDING THE CODE! I guess I have to crack this open again and see if I can reprogram it…
My original intention was just to reprogram the IC and add serial USART support, then use a little FTDI adapter module to serve as a USB serial port. That will be supported by every OS on the planet out of the box. Upon closer inspection, I realized I previously used an ATMega48 which has trouble being programmed by AVRDUDE, so I whipped up a new perf-board based around an ATMega8. I copied the wires exactly (which was stupid, because I didn’t have it written down which did what, and they were in random order), and started attacking the problem in software.
The way the microcontroller reads frequency is via the display itself. There are multiplexed digits, so some close watching should reveal the frequency. I noticed that there were fewer connections to the microcontroller than expected – a total of 12. How could that be possible? 8 seven-segment displays should be at least 7+8=15 wires. What the heck? I had to take apart the display to remind myself how it worked. It used a pair of ULN2006A darlington transistor arrays to do the multiplexing (as expected), but I also noticed it was using a CD4511BE BCD-to-7-segment driver to drive the digits. I guess that makes sense. That way 4 wires can drive 7 segments. 8+4=12 wires, which matches up. Now I feel stupid for not realizing it in the first place. Time to screw things back together.
Here’s the board I made. 3 wires go to the FTDI USB module (GND, VCC 5V drawn from USB, and RX data), black wires go to the display, and the headers are to aid programming. I added an 11.0592MHz crystal to allow maximum serial transfer speed (230,400 baud), but stupidly forgot to enable it in code. It’s all boxed up now, running at 8MHz and 38,400 baud with the internal RC clock. Oh well, no loss I guess.
I wasted literally all day on this. It was so stupid. The whole time I was kicking myself for not posting the code online. I couldn’t figure out which wires were for the digit selection, and which were for the BCD control. I had to tease it apart by putting random numbers on the screen (by sticking my finger in the frequency input hole) and looking at the data flowing out on the oscilloscope to figure out what was what. I wish I still had my DIY logic analyzer. I guess this project was what I built it for in the first place! A few hours of frustrating brute force programming and adult beverages later, I had all the lines figured out and was sending data to the computer.
With everything back together, I put the frequency counter back in my workstation and I’m ready to begin my frequency measurement experiments. Now it’s 9PM and I don’t have the energy to start a whole line of experiments. Gotta save it for another day. At least I got the counter working again!
Here’s the code that goes on the microcontroller (it sends the value on the screen as well as a crude checksum, which is just the sum of all the digits)
#define F_CPU 8000000UL
#include <avr/io.h>
#include <util/delay.h>
#include <avr/interrupt.h>

#define USART_BAUDRATE 38400
#define BAUD_PRESCALE (((F_CPU / (USART_BAUDRATE * 16UL))) - 1)

/* The original listing lost USART_Init(), USART_Transmit(), and the
   signature of sendNum(); the versions below are standard ATmega8
   equivalents, reconstructed so the code compiles. */
void USART_Init(void){
    UCSRB = (1 << RXEN) | (1 << TXEN);                   // enable RX and TX
    UCSRC = (1 << URSEL) | (1 << UCSZ0) | (1 << UCSZ1);  // 8 data bits
    UBRRH = (BAUD_PRESCALE >> 8);                        // set baud rate
    UBRRL = BAUD_PRESCALE;
}

void USART_Transmit(unsigned char data){
    while ((UCSRA & (1 << UDRE)) == 0) {}  // wait for an empty transmit buffer
    UDR = data;
}

void sendNum(int byte){
    if (byte==0){
        USART_Transmit(48);
    }
    while (byte){
        USART_Transmit(byte%10+48);
        byte-=byte%10;
        byte/=10;
    }
}

void sendBin(int byte){
    char i;
    for (i=0;i<8;i++){
        USART_Transmit(48+((byte>>i)&1));
    }
}

volatile char digits[]={0,0,0,0,0,0,0,0};
volatile char freq=123;

char getDigit(){
    char digit=0;
    if (PINC&0b00000100) {digit+=1;}
    if (PINC&0b00001000) {digit+=8;}
    if (PINC&0b00010000) {digit+=4;}
    if (PINC&0b00100000) {digit+=2;}
    if (digit==15) {digit=0;} // blank
    return digit;
}

void updateNumbers(){
    while ((PINB&0b00000001)==0){}
    digits[7]=getDigit();
    while ((PINB&0b00001000)==0){}
    digits[6]=getDigit();
    while ((PINB&0b00010000)==0){}
    digits[5]=getDigit();
    while ((PINB&0b00000010)==0){}
    digits[4]=getDigit();
    while ((PINB&0b00000100)==0){}
    digits[3]=getDigit();
    while ((PINB&0b00100000)==0){}
    digits[2]=getDigit();
    while ((PINC&0b00000001)==0){}
    digits[1]=getDigit();
    while ((PINC&0b00000010)==0){}
    digits[0]=getDigit();
}

int main(void){
    USART_Init();
    char checksum;
    char i=0;
    char digit=0;
    for(;;){
        updateNumbers();
        checksum=0;
        for (i=0;i<8;i++){
            checksum+=digits[i];
            sendNum(digits[i]);
        }
        USART_Transmit(',');
        sendNum(checksum);
        USART_Transmit('\n');
        _delay_ms(100);
    }
}
Here’s the Python code to listen to the serial port, though you could use any program (note that the checksum is just shown and not verified):
Here's the Python code to listen to the serial port, though you could use any program (note that the checksum is just shown and not verified). The port name was garbled in this copy, so substitute your own adapter's port:

import serial, time
import numpy

ser = serial.Serial("COM1", 38400)  # use your adapter's port name here
while True:
    line = ser.readline()
    print line
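Since the checksum is just the sum of the eight digits, verifying it on the PC side only takes a couple of lines. A sketch is below; one caveat worth noting is that the AVR's sendNum routine emits multi-digit numbers least-significant digit first, so a two-digit checksum arrives with its digits reversed, while the comma framing otherwise matches the firmware above:

```python
def parse_line(line):
    """Parse a line like '00123456,21' into (frequency, checksum_ok)."""
    digits, _, checksum = line.strip().partition(',')
    expected = sum(int(c) for c in digits)
    # If your firmware sends the checksum least-significant digit first,
    # compare against int(checksum[::-1]) instead.
    ok = bool(checksum) and int(checksum) == expected
    return int(digits), ok

print(parse_line("00000123,6"))    # -> (123, True)
print(parse_line("00000123,7"))    # -> (123, False)
```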
This is super preliminary, but I’ve gone ahead and tested heating/cooling an oscillator (a microcontroller clocked with an external crystal and outputting its signal with CKOUT). By measuring temperature and frequency at the same time, I can start to plot their relationship…
Anonymous
June 24, 2013 at 7:04 AM (UTC -5)
Interesting, the last graph seems to suggest that there is either some form of hysteresis in the temperature-frequency relationship, or your temperature sensing is lagging behind the actual temperature.
Scott Harden
June 24, 2013 at 7:35 AM (UTC -5)
Good call. I think the issue is that the temperature sensor is indeed lagging behind. I should slow everything down by using a larger thermal mass.
john
June 24, 2013 at 1:17 PM (UTC -5)
Hi Scott,
I did a frequency counter in a PIC84. Then gave it a 16 bit serial DAC hooked to a vari-cap controlled oscillator. Great VFO. The VFO ran at 50 mhz and had a PAL to down divide by powers of two.
Built an inphase and quad mixer with a very low noise pair of op-amps. Into SM5BSZ software on a 100MHZ pentium with an ISA audio board.
Made a simple and very good receiver. Really should have done an XMTR then :).
Probably worth thinking about replacing freq. counter with a PIC as well.
John, also a ham.
hassan shahsavari
March 8, 2014 at 4:43 AM (UTC -5)
Hi dear,
How are you?
I live in Iran.
I have a circuit that measures sound levels on a VU meter for 2 audio channels, with an LCD.
The idea is that if the sound cuts out, the LCD shows the quiet time for each specific radio frequency, and an alarm also sounds.
I would be grateful for any help you can give with building this device.
hassan shahsavari
March 8, 2014 at 4:45 AM (UTC -5) Link to this comment
h_shahsavar@yahoo.com
hassan shahsavari
March 8, 2014 at 4:47 AM (UTC -5) Link to this comment
i need that you help me | http://www.swharden.com/blog/2013-06-22-adding-usb-to-a-cheap-frequency-counter-again/ | CC-MAIN-2014-41 | refinedweb | 1,362 | 70.02 |
#include "stdafx.h" #include <iostream> using std::cin; using std::cout; int main() { int number1, number2, number3, number4, number5, number6, number7; int meh; int average; cout << "Please choose which function you would like for me to perform.\n1. Semester Grade (One class)\n2. Quarterly GPA\n"; cin >> meh; if ( meh == 1 ) { cout << "\n" << "Please enter first quarter percentage: "; cin >> number1; cout << "Please enter second quartrer percentage: "; cin >> number2; average = ( number1 + number2 ) / 2; cout << "Your semester grade is a: " << average << "\n"; } (Other If statements and stuff continues after this point)
11 replies to this topic
#1
Posted 24 August 2007 - 08:09 AM
Okay basically i'm trying to make a simple console program to calculate and spit out my gpa and what not. Anyway, I want to know how to loop back to the top of the main function once I finish a process so I do not have to keep restarting the program in order to do another calculation. (And yes there is another "If" statement, but I think this bit of code should be enough for someone to come up with an answer, thanks:
#2
Posted 24 August 2007 - 08:21 AM
There's several alternatives. You could use recursion, and call the main-function itself again, you could use goto-statements and labels, you could use a while-statement, etc. My suggestion is to use a while-statement.
The while-statement works in this way:
Let's say you want to do to print the message M to the user N times. Then the loop would look like this;
The while-statement works in this way:
while ( expression ) ... statement 1 ... statement 2 ... ... statement NThis is a pseudo-code, so do not try this yourself, it won't work. But it shows you how it works. The statements will be executed while the expression is true. So it actually only takes two values; false (0) and true (!false)
Let's say you want to do to print the message M to the user N times. Then the loop would look like this;
int N = 10; const char *M = "Hello, World"; while(N >= 0) { std::cout << M << std::endl; N--; }You need to decrement the value of N. If you don't, then the argument to the while-statement will never become false, because N will always be greater than 0; true. But when N reaches -1, then it's not greater than than zero anymore and the result will be; false.
#3
Posted 24 August 2007 - 11:34 AM
If you make your main function a void, you can just call it at the end of what you're doing with a simple
main();I attached something I just threw together, exe and the source. Let me know what you think
Attached Files
#4
Posted 24 August 2007 - 05:52 PM
Oh, okay I think I get how its working now. Thanks a bunch for your help both of you.
#5
Posted 24 August 2007 - 10:13 PM
Victor, what you're talking about is recursion. But your example is really weird. Usually, you never make a prototype for the main-function, it's simply not needed by the compiler nor linker. Next, in your example, you're not even including your Main.hpp-file.
#6
Posted 25 August 2007 - 10:26 AM
Okay, so I did some research about the:.
void main().
#7
Posted 25 August 2007 - 10:45 AM
Check this link as well. Written by the creator of C++ himself.
In what way are you confused? How it work, or...?
Please specify what you really want to do. If you just want to loop forever, over and over again, then you can just give it true as "argument."
In what way are you confused? How it work, or...?
Please specify what you really want to do. If you just want to loop forever, over and over again, then you can just give it true as "argument."
while(true){ /* ... loop forever ... */ }
#8
Posted 25 August 2007 - 02:32 PM
My bad. My example I made thinking along the lines of how you helped me a few months ago with a calculator. (Thread here). I used advice from that thread to repeat the calculator again. But again, sorry, my bad.
#9
Posted 25 August 2007 - 10:36 PM
It's alright. You weren't completely lost. It's actually possible to use your idea. You can call the main-function as much as you want to, like you showed in your code - but you don't need the prototype for main, like I said before.
#10
Posted 27 August 2007 - 05:20 PM
although...main should be returning an int on windoz and linux, since you pass back basic errorcodes.
#11
Posted 28 August 2007 - 08:23 AM
Recursively calling main is a potentially bad idea if you are declaring variables in it. It would be much better to have the logic wrapped in a while loop with an input to escape from the loop.
Programming is a branch of mathematics.
My CodeCall Blog | My Personal Blog
My CodeCall Blog | My Personal Blog
#12
Posted 30 August 2007 - 10:19 PM
You can, yes, call it recursivly, but you will see no commercial application EVER do this. A logic-loop with some input would be much better.
while(TRUE) { <Do whatever> if(cin.get() == 'q') { return 0; } } | http://forum.codecall.net/topic/38036-calling-another-functioin/ | crawl-003 | refinedweb | 900 | 80.31 |
Carousel¶
New in version 1.4.0.
The
Carousel widget provides the classic mobile-friendly carousel view
where you can swipe between slides.
You can add any content to the carousel and have it move horizontally or
vertically. The carousel can display pages in a sequence or a loop.
Example:
from kivy.app import App from kivy.uix.carousel import Carousel from kivy.uix.image import AsyncImage class CarouselApp(App): def build(self): carousel = Carousel(direction='right') for i in range(10): src = "" % i image = AsyncImage(source=src, allow_stretch=True) carousel.add_widget(image) return carousel CarouselApp().run()
Kv Example:
Carousel: direction: 'right' AsyncImage: source: '' AsyncImage: source: '' AsyncImage: source: '' AsyncImage: source: ''
Changed in version 1.5.0: The carousel now supports active children, like the
ScrollView. It will detect a swipe gesture
according to the
Carousel.scroll_timeout and
Carousel.scroll_distance properties.
In addition, the slide container is no longer exposed by the API.
The impacted properties are
Carousel.slides,
Carousel.current_slide,
Carousel.previous_slide and
Carousel.next_slide.
- class kivy.uix.carousel.Carousel(**kwargs)[source]¶
Bases:
kivy.uix.stencilview.StencilView
Carousel class. See module documentation for more information.
-)
- anim_cancel_duration¶
Defines the duration of the animation when a swipe movement is not accepted. This is generally when the user does not make a large enough swipe. See
min_move.
anim_cancel_durationis a
NumericPropertyand defaults to 0.3.
- anim_move_duration¶
Defines the duration of the Carousel animation between pages.
anim_move_durationis a
NumericPropertyand defaults to 0.5.
- anim_type¶
Type of animation to use while animating to the next/previous slide. This should be the name of an
AnimationTransitionfunction.
anim_typeis a
StringPropertyand defaults to ‘out_quad’.
New in version 1.8.0.
- clear_widgets(children=None, .
- current_slide¶
The currently shown slide.
current_slideis an
AliasProperty.
Changed in version 1.5.0: The property no longer exposes the slides container. It returns the widget you have added.
- direction¶
Specifies the direction in which the slides are ordered. This corresponds to the direction from which the user swipes to go from one slide to the next. It can be right, left, top, or bottom. For example, with the default value of right, the second slide is to the right of the first and the user would swipe from the right towards the left to get to the second slide.
directionis an
OptionPropertyand defaults to ‘right’.
- ignore_perpendicular_swipes¶
Ignore swipes on axis perpendicular to direction.
ignore_perpendicular_swipesis a
BooleanPropertyand defaults to False.
New in version 1.10.0.
- index¶
Get/Set the current slide based on the index.
indexis an
AliasPropertyand defaults to 0 (the first item).
- load_slide(slide)[source]¶
Animate to the slide that is passed as the argument.
Changed in version 1.8.0.
- loop¶
Allow the Carousel to loop infinitely. If True, when the user tries to swipe beyond last page, it will return to the first. If False, it will remain on the last page.
loopis a
BooleanPropertyand defaults to False.
- min_move¶
Defines the minimum distance to be covered before the touch is considered a swipe gesture and the Carousel content changed. This is a expressed as a fraction of the Carousel’s width. If the movement doesn’t reach this minimum value, the movement is cancelled and the content is restored to its original position.
min_moveis a
NumericPropertyand defaults to 0.2.
- next_slide¶
The next slide in the Carousel. It is None if the current slide is the last slide in the Carousel. This ordering reflects the order in which the slides are added: their presentation varies according to the
directionproperty.
next_slideis an
AliasProperty.
Changed in version 1.5.0: The property no longer exposes the slides container. It returns the widget you have added.
-.
- previous_slide¶
The previous slide in the Carousel. It is None if the current slide is the first slide in the Carousel. This ordering reflects the order in which the slides are added: their presentation varies according to the
directionproperty.
previous_slideis an
AliasProperty.
Changed in version 1.5.0: This property no longer exposes the slides container. It returns the widget you have added.
- remove_widget(widget, *args, **kwargs)[source]¶
Remove a widget from the children of this widget.
- Parameters
- widget:
Widget
Widget to remove from our children list.
>>> from kivy.uix.button import Button >>> root = Widget() >>> button = Button() >>> root.add_widget(button) >>> root.remove_widget(button)
- scroll_distance¶
Distance to move before scrolling the
Carouselin pixels. As soon as the distance has been traveled, the
Carouselwill start to scroll, and no touch event will go to children. It is advisable that you base this value on the dpi of your target device’s screen.
scroll_distanceis a
NumericPropertyand defaults to 20dp.
New in version 1.5.0.
- scroll_timeout¶
Timeout allowed to trigger the
scroll_distance, in milliseconds. If the user has not moved
scroll_distancewithin the timeout, no scrolling will occur and the touch event will go to the children.
scroll_timeoutis a
NumericPropertyand defaults to 200 (milliseconds)
New in version 1.5.0.
- slides¶
List of slides inside the Carousel. The slides are the widgets added to the Carousel using the
add_widgetmethod.
slidesis a
ListPropertyand is read-only. | https://kivy.org/doc/master/api-kivy.uix.carousel.html | CC-MAIN-2021-39 | refinedweb | 833 | 61.93 |
Thanks for digging into it. A fix in the next release will be fine. Rather than recompile IP, I worked around it by writing a C# class that overrides the property. -----Original Message----- From: Dino Viehland [mailto:dinov at exchange.microsoft.com] Sent: Thursday, April 26, 2007 4:45 PM To: Discussion of IronPython Subject: Re: [IronPython] can't override LayoutEngine property Wow, after digging into this some more I gotta say: this is very interesting and may be a CLR bug. The short of this is there's a bug here and there's at least a work around (from the perspective of you can rebuild IronPython :) ). In Src\IronPython\Compiler\Generation\NewTypeMaker.cs there's a function called OverrideSpecialName. It's signature needs a new parameter: private void OverrideSpecialName(Type type, MethodInfo mi, Dictionary<string, bool> specialNames) { and then that parameter needs to be used for the GetProperties call: PropertyInfo[] pis = type.GetProperties(BindingFlags.Instance | BindingFlags.Public | BindingFlags.NonPublic); There's 1 caller to this function in OverrideVirtualMethods and it just needs to pass type: OverrideSpecialName(type, mi, specialNames); Here's an explanation of what's going on. We are calling GetMethods() on Panel which returns the methods for Panel and all of its subtypes - including Control. We are then filtering those down to virtual methods and trying to find the property they are associated with. At that point in time we call GetProperties() on the method's declaring type (in this case, Control) to get all of the properties. We then go through each property trying to find the property who has a get or set method that matches this method (get_LayoutEngine). What's strange is that we come to a point where we have a get_LayoutEngine method declared on Control that we grabbed from Panel, and we have a get_LayoutEngine method declared on Control that we grabbed from Control's LayoutEngine property. 
For some reason these two methods aren't comparing equal even though they're the exact same method. Given that we have a workaround for IronPython I'll open a bug (9908 -) and this should go into the next stable release. I'll also follow up with the CLR team and see if they believe this is a bug and if so get that fixed as well. Thanks for reporting this... Hopefully the workaround won't be too much trouble until we get the next release out. -----Original Message----- From: users-bounces at lists.ironpython.com [mailto:users-bounces at lists.ironpython.com] On Behalf Of Jonathan Amsterdam Sent: Thursday, April 26, 2007 12:23 PM To: users at lists.ironpython.com Subject: Re: [IronPython] can't override LayoutEngine property Dino, Thanks for your prompt reply. Unfortunately, your suggestion doesn't work either (it gives the same result when invoked from C#). Moreover, the first way (name = property(...)) worked fine when I just built my own C# class with its own virtual property. Date: Thu, 26 Apr 2007 11:04:08 -0700 From: Dino Viehland <dinov at exchange.microsoft.com> Subject: Re: [IronPython] can't override LayoutEngine property To: Discussion of IronPython <users at lists.ironpython.com> Message-ID: <7AD436E4270DD54A94238001769C22278F5F482E0B at DF-GRTDANE-MSG.exchange.corp .microsoft.com> Content-Type: text/plain; charset="us-ascii" I believe currently you need to do: @property def get_MyLayoutEngine(self): return mle to get the get_ method to override this. A couple of people have ran into this and we don't seem to have a bug on it so I've opened bug #9902 () to make this more intuitive. From: users-bounces at lists.ironpython.com [mailto:users-bounces at lists.ironpython.com] On Behalf Of Jonathan Amsterdam Sent: Thursday, April 26, 2007 8:23 AM To: users at lists.ironpython.com Subject: [IronPython] can't override LayoutEngine property I'm trying to overrride the LayoutEngine property of Control so I can implement my own layout engine. 
I find that this doesn't work in IP. Accessing the LayoutEngine property in IP works fine, but it doesn't work from C#. //////////////////// C# code: public class MyCSharpClass { public static LayoutEngine getLayoutEngine(Panel p) { return p.LayoutEngine; } } //////////////////// Python code: class MyLayoutEngine(LayoutEngine): def Layout(self, parent, eventArgs): pass mle = MyLayoutEngine() class MyPanel(Panel): LayoutEngine = property(lambda self: mle) p = MyPanel() print "from Python:", p.LayoutEngine print "from C#:", MyCSharpClass.getLayoutEngine(p) //////////////////// output: from Python: <MyLayoutEngine object at 0x000000000000002B> from C#: System.Windows.Forms.Layout.DefaultLayout ======================================================================== ===================== Email transmissions can not. In addition,, disclosure of the parties to it, or any action taken or omitted to be taken in reliance on it, is strictly prohibited, and may be unlawful. If you are not the intended recipient please delete this email message. ======================================================================== ====================== _______________________________________________ users mailing list users at lists.ironpython.com | https://mail.python.org/pipermail/ironpython-users/2007-April/004809.html | CC-MAIN-2014-15 | refinedweb | 789 | 55.84 |
read with interest Jason Ferrara's post concerning embedding PyMOL in a QT app (back in January ). I've spent a short time experimenting and have encountered a problem or two - I was wondering if any insightful users had any pointers for me.
I've built pymol from the SourceForge source (1.4.1) on some Fedora linux x86 box. I initially jumped straight into the QT code, but experienced a few crashes. Taking things apart further, I've got a little script that consistently crashes with a segfault:
import sys
sys.path.append( os.path.join( os.environ['PYMOL_PATH'], 'modules' ) )
import pymol2
pymol = pymol2.PyMOL()
print 'pymol initialised'
pymol.cmd.set("internal_gui",0)
print 'we never see this'
$ python crash.py
> pymol initialised
> Segmentation fault (core dumped)
Should this ever work? The code is a snippet from Jason's approach. I appreciate that it doesn't do anything, but should I expect a crash? Getting personal with gdb the stack trace comes up with
0x00007fffed6830e6 in APIEnterBlocked (self=0xd3c148, args=<value optimized out>) at layer4/Cmd.c:160
160 PRINTFD(G, FB_API)
which seems relatively benign (there is a PyThread_get_thread_ident() call there).
Thanks for reading this far,
Dan
Dan O'Donovan
SBGrid Consortium
Harvard Medical School | https://sourceforge.net/p/pymol/mailman/pymol-users/?viewmonth=201110&viewday=5 | CC-MAIN-2017-39 | refinedweb | 207 | 59.3 |
Migrating from 2.x to 3.x - Android
This guide provides an introduction to the 3.x Programmable Video Android SDK and a set of guidelines to migrate an application from 2.x to 3.x.
Programming Model
The programming model has not changed from 2.x to 3.x. Refer to our 2.x migration guide for a refresher on the Video Android SDK models.
WebRTC
The media stack has been upgraded from WebRTC 57 to WebRTC 67. The process by which our team upgrades WebRTC has been improved and developers can expect a steadier cadence of releases with WebRTC upgrades moving forward.
Java 8
The Video Android SDK is now compiled with Java 8 features. As a result, consumers of the Video Android SDK must update their applications to use Java 8. Add the following to your application build.gradle to enable Java 8 features.
android { compileOptions { sourceCompatibility 1.8 targetCompatibility 1.8 } }
Optionally, you can also instruct Android Studio to optimize your project for Java 8 by clicking "Analyze -> Inspect Code".
RoomState to Room.State
RoomState has been moved to
Room.State. If you were using this enum before perform the following:
- Replace
import com.twilio.video.RoomState;with
import com.twilio.video.Room.State;
- Replace all usages of
RoomStatewith
Room.State
Need some help?
We all do sometimes; code is hard. Get help now from our support team, or lean on the wisdom of the crowd by visiting Twilio's Community Forums or browsing the Twilio tag on Stack Overflow. | https://www.twilio.com/docs/video/migrating-2x-3x-android-30 | CC-MAIN-2021-31 | refinedweb | 254 | 62.44 |
Chapter 37. Getting started with IPVLAN
This document describes the IPVLAN driver.
37.1. IPVLAN overview
IPVLAN is a driver for a virtual network device that can be used in container environment to access the host network. IPVLAN exposes a single MAC address to the external network regardless the number of IPVLAN device created inside the host network. This means that a user can have multiple IPVLAN devices in multiple containers and the corresponding switch reads a single MAC address. IPVLAN driver is useful when the local switch imposes constraints on the total number of MAC addresses that it can manage.
37.2. IPVLAN modes
The following modes are available for IPVLAN:
L2 mode
In IPVLAN L2 mode, virtual devices receive and respond to address resolution protocol (ARP) requests. The
netfilterframework runs only inside the container that owns the virtual device. No
netfilterchains are executed in the default namespace on the containerized traffic. Using L2 mode provides good performance, but less control on the network traffic.
L3 mode
In L3 mode, virtual devices process only L3 traffic and above. Virtual devices do not respond to ARP request and users must configure the neighbour entries for the IPVLAN IP addresses on the relevant peers manually. The egress traffic of a relevant container is landed on the
netfilterPOSTROUTING and OUTPUT chains in the default namespace while the ingress traffic is threaded in the same way as L2 mode. Using L3 mode provides good control but decreases the network traffic performance.
L3S mode
In L3S mode, virtual devices process the same way as in L3 mode, except that both egress and ingress traffics of a relevant container are landed on
netfilterchain in the default namespace. L3S mode behaves in a similar way to L3 mode but provides greater control of the network.
The IPVLAN virtual device does not receive broadcast and multicast traffic in case of L3 and L3S modes.
37.3. Overview of MACVLAN
The MACVLAN driver allows to create multiple virtual network devices on top of a single NIC, each of them identified by its own unique MAC address. Packets which land on the physical NIC are demultiplexed towards the relevant MACVLAN device via MAC address of the destination. MACVLAN devices do not add any level of encapsulation.
37.4. Comparison of IPVLAN and MACVLAN
The following table shows the major differences between MACVLAN and IPVLAN.
Note that both IPVLAN and MACVLAN do not require any level of encapsulation.
37.5. Creating and configuring the IPVLAN device using iproute2
This procedure shows how to set up the IPVLAN device using
iproute2.
Procedure
To create an IPVLAN device, enter the following command:
~]# ip link add link real_NIC_device name IPVLAN_device type ipvlan mode l2
Note that network interface controller (NIC) is a hardware component which connects a computer to a network.
Example 37.1. Creating an IPVLAN device
~]# ip link add link enp0s31f6 name my_ipvlan type ipvlan mode l2 ~]# ip link 47: my_ipvlan@enp0s31f6: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN mode DEFAULT group default qlen 1000 link/ether e8:6a:6e:8a:a2:44 brd ff:ff:ff:ff:ff:ff
To assign an
IPv4or
IPv6address to the interface, enter the following command:
~]# ip addr add dev IPVLAN_device IP_address/subnet_mask_prefix
In case of configuring an IPVLAN device in L3 mode or L3S mode, make the following setups:
Configure the neighbor setup for the remote peer on the remote host:
~]# ip neigh add dev peer_device IPVLAN_device_IP_address lladdr MAC_address
where MAC_address is the MAC address of the real NIC on which an IPVLAN device is based on.
Configure an IPVLAN device for L3 mode with the following command:
~]# ip neigh add dev real_NIC_device peer_IP_address lladdr peer_MAC_address
For L3S mode:
~]# ip route dev add real_NIC_device peer_IP_address/32
where IP-address represents the address of the remote peer.
To set an IPVLAN device active, enter the following command:
~]# ip link set dev IPVLAN_device up
To check if the IPVLAN device is active, execute the following command on the remote host:
~]# ping IP_address
where the IP_address uses the IP address of the IPVLAN device. | https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/8/html/configuring_and_managing_networking/getting-started-with-ipvlan_configuring-and-managing-networking | CC-MAIN-2021-39 | refinedweb | 678 | 54.36 |
Patch by Arnaud Bienner
Details
- Reviewers
-
- Commits
- rL349054: Make -Wstring-plus-int warns even if when the result is not out of bounds
rG57fc9582f97f: Make -Wstring-plus-int warns even if when the result is not out of bounds
rC350335: Make -Wstring-plus-int warns even if when the result is not out of bounds
rG8523c085e7ed: Make -Wstring-plus-int warns even if when the result is not out of bounds
rC349054: Make -Wstring-plus-int warns even if when the result is not out of bounds
rL350335: Make -Wstring-plus-int warns even if when the result is not out of bounds
Event Timeline
I found this warning to be really useful but was confused that it is not always triggered.
I initially discovered -Wstring-plus-char (and fixed some issues in the projects I work on where sometimes developers didn't realized the "+" wasn't doing to do concatenation because it was applied on char and char*).
Then I also activated -Werror=string-plus-int to reduce the risk of developers doing mistakes in our codebase.
Turns out that because we had some buggy code not catch by this warning for this reason: the result of the concatenation was not out of bounds, so the warning wasn't triggered.
I understand that point of view, but IMHO this warning should be activated even though, like for -Wstring-plus-char: it's easy to fix the warning by having a nicer, more readable code, and IMO code raising this warning is likely to indicate an issue in the code.
Having developers accidentally concatenating string and integer can happen (even more if when not concatenating literals directly).
But I've found a bug in our codebase even more tricky: when concatenating enum and literals char* strings:
We had a code like this (using Qt framework):
QString path = QStandardPaths::StandardLocation(QStandardPaths::DataLocation) + "/filename.ext";
The developer was trying to use QStandardPaths::standardLocations [1] static function (mind the lowercase s and the extra s at the end) which takes an enum of type QStandardPaths::StandardLocation. Similar name (so easy to do a typo) but very different meanings.
So instead of getting a string object and concatenating it with some literal strings, path was set to a truncated version of "/filename.ext".
This kind of bugs is hard to catch during code review process (and wasn't in my case).
Long story, but I think it's interesting to illustrate the need to have this warning applied to more cases.
[1]:
Nico: I've added you as reviewer since you originally implemented this warning (thanks for this BTW :) as said, it's really helpful), but feel free to delegate to someone else.
The usual process for adding / tweaking warnings is to build some large open-source project with the new / tweaked warning and note true and false positive rate. IIRC I did this on Chromium when I added the warning, and the out-of-bounds check prevented false positives without removing true positives. (...or maybe Richard asked for it during review? Did you try and find the review where this got added?)
That is, can you try building Chromium with this tweak and see what it does to true and false positive rate?
I found the commit: [1] which links to a related discussion [2] but there isn't much details. I wasn't sure if there was other discussions.
I will try to build Firefox with this change, and see how it goes (just need to find some time to do it).
I'll keep you posted.
[1]:
[2]:
So I built Firefox with my patched version of clang and the only place it found a warning was in ffmpeg [1], a case you already reported in [2] when you implemented that patch, so no new positive AFAICT.
Do you also want to try on Chromium and see how it goes? Or is Firefox enough? (it's a pretty big C++ codebase, so I would assume that's enough)
[1]: mozilla-central/media/ffvpx/libavutil/utils.c:73:42: warning: adding 'unsigned long' to a string does not append to the string [-Wstring-plus-int]
278:13.00 return LICENSE_PREFIX FFMPEG_LICENSE + sizeof(LICENSE_PREFIX) - 1;
[2]:
Firefox is good enough.
Thanks for finding -- this was from before we did reviews on phab, so I was able to find the review in my inbox. I can't find the full discussion online (we also moved list servers :-/ Fragments are at ). Here's the discussion between Richard and me about the behavior you're changing:
Richard: """
When building all of chromium and its many dependencies, the warning
did also find 3 false positives, but they all look alike: Ffmepg
contains three different functions that all look like this:
4 bugs and 3 false positives doesn't sound great to me. Have you considered
relaxing the warning in cases where the integral summand is a constant
expression and is in-bounds? Would this pattern have matched any of your
real bugs?
"""
Me: """
4 bugs and 3 false positives doesn't sound great to me.
True. All of the the 3 false positives are really the same snippet
though. (On the other hand, 3 of the 4 bugs are very similar as well.)
The bugs this did find are crash bugs, and they were in rarely run
error-logging code, so it does find interesting bugs. Maybe you guys
can run it on your codebase after it's in, and we can move it out of
-Wmost if you consider it too noisy in practice?
Have you considered
relaxing the warning in cases where the integral summand is a constant
expression and is in-bounds? Would this pattern have matched any of your
real bugs?
That's a good idea (for my bugs, the int was always a declrefexpr,
never a literal). The ffmpeg example roughly looks like this:
#define A "foo"
#define B "bar"
consume(A B + sizeof(A) - 1);
The RHS is just "sizeof(A)" without the "- 1", but since A B has a
length of 6, this still makes this warning go away. I added this to
the patch. With this change, it's 4 bugs / 0 false positives.
Note that this suppresses the warning for most enums which start at 0.
Or did you mean to do this only for non-enum constant expressions?
"""
Richard: """
Note that this suppresses the warning for most enums which start at 0.
Or did you mean to do this only for non-enum constant expressions?
I could imagine code legitimately wanting to do something like this:
template</*...*/>
struct S {
enum { Offset = /* ... */ };
static const char *str = "some string" + Offset;
};
That said, I'm happy for us to warn on such code. Whatever you prefer seems fine to me; we can refine this later, if a need arises.
"""
Given that discussion, your results on Firefox, and that this is useful to you sound like it should be fine to land this. So let's do that and see what happens. If anyone out there has any feedback on this change, we're all ears :-)
Thanks for the archeological work and finding the conversation related to the initial patch :)
Interesting that the last case you mentioned is exactly the bug I had in my project (though in your case this would have been intended).
Actually this has been failing for 8 hours. So reverted in r349117. Also reverted your attempt to update the test. It wasn't updating the right test: r349118
And I was just about to commit what I think is the work-around for this failure!
Like Adam pointed out, your fix in r349118 was not updating the correct test and caused an additional failure.
I was able to get the test passing by adding a flag to suppress the warning that you added here. I changed line 54 in test_diagnostics.py to be the following:
def test_diagnostic_range(self): tu = get_tu(source='void f() { int i = "a" + 1; }', flags=["-Wno-string-plus-int"]) # <-- This is line 54 that was changed. self.assertEqual(len(tu.diagnostics), 1)
If you apply that fix, the output should be as it originally was and the test passes.
Really sorry about breaking the build :( Thanks for reverting it.
Actually, I think we could fix that test by removing the " +1": AFAICT this test is checking that warnings are correctly emitted from clang, but not testing all the warnings. So the conversion from const char* to int is enough: no need to have an extra +1 at the end.
I will update my patch to update the test accordingly.
Fixed two tests broken by this change:
- test_diagnostics.py: AFAICT we are not testing all warnings here, but just that warnings are emitted correctly. The "+1" didn't seem to be useful, since the warning triggered was about the const char* being converted to an int (and this warning still applies)
- test/SemaCXX/constant-expression-cxx1y.cpp: is a false positive for -Wstring-plus-int so use the preferred, equivalent syntax
@thakis: do those additional changes look OK to you? Or do you want someone else to review those?
Just wanted to review the change to the Python script to ensure it remains compatible with Python2 and Python3, which is indeed the case; so LGTM on that aspect.
OK, thanks Serge! :)
For the record, I updated the patch one last time after it was accepted to remove my change to constant-expression-cxx1y.cpp since someone else did the same change in a separate commit meanwhile. | https://reviews.llvm.org/D55382 | CC-MAIN-2019-22 | refinedweb | 1,592 | 68.7 |
Java stores the string contants appearing in the source code in a pool. In other words when you have a code like:
String a = "I am a string"; String b = "I am a string";
the variables
a and
b will hold the same value. Not simply two strings that are equal but rather the very same string. In Java words
a == b will be true. However this works only for Strings and small integer and long values. Other objects are not interned thus if you create two objects that hold exactly the same values they are usually not the same. They may and probably be equal but not the same objects. This may be a nuisance some time. Probably when you fetch some object from some persistence store. If you happen to fetch the same object more than one time you probably would like to get the same object instead of two copies. In other words I may also say that you only want to have one single copy in memory of a single object in the persistence. Some persistence layers do this for you. For example JPA implementations follow this pattern. In other cases you may need to perform caching yourself.
In this example I will describe a simple intern pool implementation that can also be viewed on the stackoverflow topics. In this article I also explain the details and the considerations that led to the solution depicted there (and here as well). This article contains a but ore tutorial information than the original discussion.
Object pool
Interning needs an object pool. When you have an object and you want to intern that object you essentially look in the object pool to see if there is already an object equal to the one in hand. In case there is one we will use the one already there. If there is no object equal to the actual one then we put the actual object into the pool and then use this one.
There are two major issues we have to face during implementation:
- Garbage Collection
- Multi-thread environment
When an object is not needed anymore it has to be removed from the pool. The removal can be done by the application but that would be a totally outdated and old approach. One of the main advantage of Java over C++ is the garbage collection. We can let GC collect these objects. To do that we should not have strong references in the object pool to the pooled objects.
Reference
If you know what soft, weak and phantom references, just jump to the next section.
You may noticed that I did not simply say “references” but I said “strong references”. If you have learned that GC collects objects when there are no references to the object then it was not absolutely correct. The fact is that it is a strong reference that is needed for the GC to treat an object untouchable. To be even more precise the strong reference should be reachable travelling along other strong references from local variables, static fields and similar ubiquitous locations. In other word: the (strong) references that point point from one dead object to another does not count, they together will be removed and collected.
So if these are strong references, then presumably there are not so strong references you may think. You are right. There is a class named
java.lang.ref.Reference and there are three other classes that extend it. The classes are:
in the same package. If you read the documentation you may suspect that what we need is the weak one. Phantom is out of question for use to use in the pool, because phantom references can not be used to get access to the object. Soft reference is an overkill. If there are no strong references to the object then there is no point to keep it in the pool. If it comes again from some source, we will intern it again. It will certainly be a different instance but nobody will notice it since there is no reference to the previous one.
Weak references are the ones that can be use to get access to the object but does not alter the behavior of the GC.
WeakHashMap
Weak reference is not the class we have to use directly. There is a class named
WeakHashMap that refers to the key objects using soft references. This is actually what we need. When we intern an object and want to see if it is already in the pool we search all the objects to see if there is any equal to the actual one. A map is just the thing that implements this search capability. Holding the keys in weak references will just let the GC collect the key object when nobody needs it.
We can search so far, which is good. Using a map we also have to get some value. In this case we just want to get the same object, so we have to put the object into the map when it is not there. However putting there the object itself would ruin what we gained keeping only weak references for the same object as a key. We have to create and put a weak reference to the object as a key.
WeakPool
After that explanation here is the code. It just says if there is an object equal to the actual one then
get(actualObject) should return it. If there is none,
get(actualObject) will return null. The method
put(newObject) will put a new object into the pool and if there was any equal to the new one, it will overwrite the place of the old one with the new.
public class WeakPool<T> { private final WeakHashMap<T, WeakReference<T>> pool = new WeakHashMap<T, WeakReference<T>>(); public T get(T object){ final T res; WeakReference<T> ref = pool.get(object); if (ref != null) { res = ref.get(); }else{ res = null; } return res; } public void put(T object){ pool.put(object, new WeakReference<T>(object)); } }
InternPool
The final solution to the problem is an intern pool, that is very easy to implement using the already available
WeakPool. The
InternPool has a weak pool inside, and there is one single synchronized method in it
intern(T object).
public class InternPool<T> { private final WeakPool<T> pool = new WeakPool<T>(); public synchronized T intern(T object) { T res = pool.get(object); if (res == null) { pool.put(object); res = object; } return res; } }
The method tries to get the object from the pool and if it is not there then puts it there and then returns it. If there is a matching object already there then it returns the one already in the pool.
Multi-thread
The method has to be synchronized to ensure that the checking and the insertion of the new object is atomic. Without the synchronization it may happen that two threads check two equal instances in the pool, both of them find that there is no matching object in it and then they insert their version into the pool. One of them, the one putting its object later will be the winner overwriting the already there object but the looser also thinks that it owns the genuine single object. Synchronization solves this problem.
Racing with the Garbage Collector
Even though the different threads of the java application using the pool can not get into trouble using the pool at the same time we still should look at it if there is any interference with the garbage collector thread.
It may happen that the reference gets back null when the weak reference
get method is called. This happens when the key object is reclaimed by the garbage collector but the weak hash map in the weak poll implementation still did not delete the entry. Even if the weak map implementation checks the existence of the key whenever the map is queried it may happen. The garbage collector can kick in between the call of
get() to the weak hash map and to the call of
get() to the weak reference returned. The hash map returned a reference to an object that existed by the time it returned but, since the reference is weak it was deleted until the execution of our java application got to the next statement.
In this situation the
WeakPool implementation returns null. No problem.
InternPool does not suffer from this also.
If you look at the other codes in the beforementioned stackoverflow topics, you can see a code:
public class InternPool<T> { private WeakHashMap<T, WeakReference<T>> pool = new WeakHashMap<T, WeakReference<T>>(); public synchronized T intern(T object) { T res = null; // (The loop is needed to deal with race // conditions where the GC runs while we are // accessing the 'pool' map or the 'ref' object.) do { WeakReference<T> ref = pool.get(object); if (ref == null) { ref = new WeakReference<T>(object); pool.put(object, ref); res = object; } else { res = ref.get(); } } while (res == null); return res; } }
In this code the author created an infinite loop to handle this situation. Not too appealing, but it works. It is not likely that the loop will be executed infinite amount of time. Likely not more than twice. The construct is hard to understand, complicated. The morale: single responsibility principle. Focus on simple things, decompose your application to simple components.
Conclusion
Even though Java does interning only for String and some of the objects that primitive types are boxed to it is possible and sometimes desirable to do interning. In that case the interning is not automatic, the application has to explicitly perform it. The two simple classes listed here can be used to do that using copy paste into your code base or you can:
<dependency> <groupId>com.javax0</groupId> <artifactId>intern</artifactId> <version>1.0.0</version> </dependency>
import the library as dependency from the maven central plugin. The library is minimal containing only these two classes and is available under the Apache license. The source code for the library is on GitHub.
Why would you even create a shockingly inferior interner to Guava’s, let alone publish it in an article??
Dear SH. Thanks for your honest opinion.
The Guava code is not the best for juniors to learn from. It uses a lot of other Guava classes and methods that you have to know to understand how the code is working. It is not focusing on solving a single issue (it also contains a strong interner) and it does not contain the tutorial type of explanation. The code solves the interning in one method following the same pattern as the solution depicted on the stackoverflow topics that I referenced. As a consequence it implements the “hopefully stopping” infinite loop structure. The two classes, separation of concerns as described in this article leads to readable and simpler solution.
I am not absolutely sure this solution is OK. I consulted with some senior experts inside the company (we have more than 5000 developers) where I work; also some who have everyday, production experience with high performance, parallel Java application development. None of them saw any issue with the above code. It still does not mean there is none. If you see some: don’t keep it secret!
In this respect the Guava solution is presumably better from reliability point of view simply because of the number of users who actually use the code.
The ‘weak poll’ in the sentence – ‘This happens when the key object is reclaimed by the garbage collector but the weak hash map in the weak poll implementation still did not delete the entry.’-is write wrong? | https://www.javacodegeeks.com/2014/03/java-object-interning.html | CC-MAIN-2017-04 | refinedweb | 1,944 | 71.75 |
Brushless Motors as Rotary Encoders
May 20, 2012
Brushless motors can be used as rotary encoders. When the rotor is turned the three coils will produce AC waveforms, each one-third out of phase with the next coil. The period and amplitude depend on how fast the rotor is turned. Spinning the rotor faster will result in shorter periods and higher amplitudes. Feeding those waveforms into op amps does the trick. For this test I displayed a number on an LCD and used a buzzer to play a 5KHz tone for each clockwise movement and a 1KHz tone for each counter-clockwise movement. The motor is from an old floppy disk drive.
The op amps are used to turn each AC waveform into a square wave that can be interpreted by the microcontroller:
If an op amp is used as a simple comparator (waveform going to one input, with the other input tied to ground) there will be problems if you spin the rotor slowly or very fast. The slightest amount of noise will cause glitches. (The output will swing high or low when it shouldn't.)
A little positive feedback adds hysteresis to the circuit. This is simply a threshold which must be crossed in order to get the output to swing high or low. For my particular motor I needed the threshold to be approximately 70mV. If the threshold is too low noise can still get through and cause problems. If the threshold is too high you will not be able to sense slow rotations. The sweet spot will be different for every motor.
Positive feedback supplies part of the op amp output back into the non-inverting input. A voltage divider is used to provide the feedback, one end goes to the op amp output and the other end goes to ground. To get 70mV when the output is high I needed two resistors with a 54.3:1 ratio. To find the x:1 ratio, use: x = (output high voltage) / (desired mV) * 1000. To minimize current flow the two resistors should add up to more than 1K. The op amps I used will output about 3.8V when supplied with 5V so I used some 27K and 470 resistors that I had laying around which gave a 66mV threshold.
When an op amp input has a voltage below the negative supply voltage, most op amps will effectively short that input to ground through a diode. Since the motor is only being spun by hand this will not be a problem but some current limiting resistors should be in series with all motor leads to be on the safe side. Also keep the op amp voltage limits in mind when using larger motors or if you will be spinning the rotor at a significant speed.
I initially set up three op amps, one for each coil. This resulted in noticeable glitching at low speeds. The resistors I used for positive feedback have a 5% tolerance, resulting in the threshold for each coil being slightly different. I'm fairly certain that was the cause of the problem but I may be wrong. There is still a little glitching when using just two coils but it occurs far less often.
Here's the final schematic and code:
#include <LiquidCrystal.h> LiquidCrystal lcd(9,7,6,5,4,3); // RS, Enable, D4, D5, D6, D7 int count = 0; byte currentA, currentB, previousA, previousB = 0; void setup() { lcd.begin(16, 2); previousA = currentA = digitalRead(10); previousB = currentB = digitalRead(11); } void loop() { previousA = currentA; previousB = currentB; currentA = digitalRead(10); currentB = digitalRead(11); if(previousA == LOW && previousB == LOW && currentA == HIGH && currentB == LOW) { // clockwise count++; tone(2, 5000, 2); } else if(previousA == HIGH && previousB == HIGH && currentA == LOW && currentB == HIGH) { // clockwise count++; tone(2, 5000, 2); } else if(previousA == LOW && previousB == LOW && currentA == LOW && currentB == HIGH) { // counter-clockwise count--; tone(2, 1000, 2); } else if(previousA == HIGH && previousB == HIGH && currentA == HIGH && currentB == LOW) { // counter-clockwise count--; tone(2, 1000, 2); } lcd.setCursor(0, 0); lcd.print(count); lcd.print(" "); } | http://www.farrellf.com/projects/hardware/2012-05-20_Brushless_Motors_as_Rotary_Encoders/ | CC-MAIN-2018-05 | refinedweb | 674 | 69.92 |
This tutorial is the sixth chapter of our implementation of an AirBnB clone in React Native. In the previous chapter, we successfully implemented a loading modal. In case you need to get caught up, here are links to parts 1–5:
In part 6 of the implementation of our AirBnB clone, we’re going to continue from where we left off—implementing animated checkmarks on the login screen input fields, which validate if the email and the password are correct or not.
The implementation of animated checkmarks is pretty simple and can be used in many other cases as well. So it might be helpful for you to learn how to make use of them here. The idea is to show the checkmarks on the right side of the input fields when the email and password that we entered is correct.
This part of our clone implementation directly relates to part 2 of this tutorial series, where we implemented the Login UI—just in case any revision is needed.
So let’s get started!
This step doesn’t really relate to the actual implementation of animated checkmarks, but it might be helpful in many cases. Here, we’re going to change the keyboard style on the basis of which input type the input fields take.
In our Login screen, we have two input fields—one for an email and another for a password. So what we’re going to do is show the keyboard suitable for the email input field and the default keyboard for the password entry field. The code to implement this is provided in the code snippet below:
{% gist %}
In the code snippet above, we’ve initialized a keyboardType constant inthe InputFields.js file, which takes a value as email-address if inputType equals email; otherwise it’s default.
Next, we’re going to bind this to our TextInput component of the InputField.js file, as shown in the code snippet below:
{% gist %}
Now let’s test to see if this works in our emulator:
As a result, we can see that the keyboard style changes for the email fields.
In this step, we’re going to add the checkmarks on the right side of our input fields in Login Screen. The file we’re going to be working on is the InputField.js file. The idea here is to import the ‘checkmark’ icon from the react-native-vector-icons package and then add it to the Input fields.
First, we need to import the FontAwesome icons from the react-native-vector-icons package in the InputFields.js file, as shown below:
import Icon from ‘react-native-vector-icons/FontAwesome’;
Now we’re going to add those checkmark icons into our input fields by using the following piece of code:
{% gist %}
As we can see in the code snippet, we have defined an Icon with name ‘check’ above the TextInput component. This will display the icons on our input fields, as we can see in the emulator screenshot below:
But, the icons are visible on the left side of the input fields. So we need to correct the positioning of the checkmark icons to the right side of the input fields.
Here we’ll position the checkmark icons on the right side of the input fields. For that, we need to wrap the Icon component with the View component. Then, we need to bind some styles to our View component. The code and style are provided in the code snippet below:
{% gist %}
As we can see, the View component wraps the Icon component and checkmarkWrapper style is bound to it. The style configuration for the checkmarkWrapper is given below:
{% gist %}
As a result, we get the checkmark icons on the right side of the input fields, as shown the emulator screenshot below:
Now we need to show those checkmark icons with the animations that communicate whether or not the email and password we entered are correct....
As we start learning new technologies we want to start building something or work on a simple project to get a better understanding of the technology.
Hire dedicated React Native developers for your next project. As the top react native development company we offer you the best service as per your business needs.
Hire top react native app development company in New York to build and develop custom react native mobile apps for Android & iOS with the latest features. | https://morioh.com/p/46fa14cedccd | CC-MAIN-2020-40 | refinedweb | 738 | 65.96 |
Red Hat Bugzilla – Bug 523523
Review Request: clutter-gesture - Gesture Library for Clutter
Last modified: 2009-12-28 22:22:33 EST
SPEC:
SRPM:
Description:
This library allows clutter applications to be aware of gestures
and to easily attach some handlers to the gesture events. The library
supports both single and multi touch.
koji:
Here is the full review of the package:
* rpmlint: ?
rpmlint SRPMS/clutter-gesture-0.0.2-1.fc12.src.rpm RPMS/i686/clutter-gesture-* SPECS/clutter-gesture.spec
clutter-gesture-devel.i686: W: no-documentation
4 packages and 1 specfiles checked; 0 errors, 1 warnings.
Rpmlint looks clean - in general is a missing documentation acceptable for
devel packages. However, searching for clutter-gesture on moblin.org I've
found a document describing the clutter-gesture API:
Depending on its "up-to-date status", would it be an option
to package this as the documentation for the devel package?
* naming: OK
- name matches upstream
- spec file name matches package name
* sources: OK
- md5sum: de0e3e5c01f0a328cc04a46198776ebe clutter-gesture-0.0.2.tar.bz2
- sources matches upstream
- Source0 tag ok
- spectool -g works
* URL tag: ?
clutter-gesture has its own project page:
Would it make sense to use this one?
* License: TODO
- License LGPLv2+ acceptable
- License in spec file does not match the actual license:
In general it does, but one source file (engine/stroke.c) is actual under the MIT license:
- So the license in the spec file should be "LGPLv2+ and MIT" (according to ). Please also add a comment about this issue to the spec file.
- License file packaged
- It would be morer clearer if upstream would provide a standard license file with a proper name like COPYING (and not "NEWS" ;-) ). According to the Review guidelines the packager is encouraged to query upstream to include it. However this will not block the review. ;-)
* spec file written in English and legible: OK
* compilation: TODO
- supports parallel build: OK
- RPM_OPT_FLAGS are correctly used: OK
- builds in koji for F13 and F12: OK
- uncommon configure flags: IMHO it shouldn't be necessary to add "--with-pic", since "--enable-dynamic" is the default anyway
- uncommon CFLAGS: auto*/libtool usually take care of compiling with the correct
parameters for shared libs
- For testing purposes I've removed "--with-pic" and the CFLAGS definition at all and according to the output during compilation the library is still compiled with the correct flags.
* BuildRequires: OK
* locales handling: OK (n/a)
* ldconfig in %post and %postun: OK
* package owns all directories that it creates: OK
* no files listed twice in %files: OK
* file permissions: OK
- %defattr used
- actual permissions in packages ok
* %clean section: OK
* macro usage: OK
* code vs. content: OK (only code)
* large documentation into subpackage: OK (n/a)
* header files in -devel subpackage: OK
* header files: TODO
- clutter-gesture.h includes "config.h" and there is no "config.h" in
/usr/include/clutter-gesture/
- this means, it is not possible to include the public API
- possible solutions:
a) header file should not include "config.h"
b) use an conditional #include directive like
#ifdef HAVE_CONFIG_H
#include "config.h"
#endif
or
c) have a "config.h" in the include directory (this should not be done with the config.h generated by configure)
* static libraries in -static package: OK (n/a)
* package containing *.pc files must "Requires: pkgconfig": OK
* *.so link in -devel package: OK
* devel package requires base package using fully versioned dependency: OK
* packages must not contain *.la files: OK
* GUI applications must provide *.desktop file: OK (n/a)
* packages must not own files/dirs already owned by other packages: OK
* rm -rf $RPM_BUILD_ROOT at the beginning of %install: OK
* all filenames UTF-8: OK
* functional test: ?
How can I test this package? The tests in tests/ just crash if I try
to execute them...
* debuginfo sub-package: OK
- non-empty
Summary
-------
nice to have:
- Package API documentation
- Let URL tag point to specific project page on moblin.org
- What would be a proper functional test?
blocking:
- Add MIT to license tag
- Fix public header files
- Remove unusual %configure flags / CFLAGS
Fixed:
> - Let URL tag point to specific project page on moblin.org
> - Add MIT to license tag
> - Remove unusual %configure flags / CFLAGS
Reported upstream.
> - Package API documentation
> - Fix public header files
eom is the currently only supported test although there should be more support in Moblin 2.2
> - What would be a proper functional test?
Updated SRPM:
Spec location as before.
Thank you very much for the update.
(In reply to comment #2)
> Fixed:
> > - Let URL tag point to specific project page on moblin.org
> > - Add MIT to license tag
> > - Remove unusual %configure flags / CFLAGS
>
> Reported upstream.
> > - Package API documentation
> > - Fix public header files
I've reviewed the new package and besides the public header / pkgconfig issue it looks fine now.
Have you already heard something from upstream?
However, since it is possible to compile eom against clutter-gesture (and the test programs work), the header issue does not block the review.
The remaining minor non-blocking issues are:
1. Package API documentation
2. Fix public header files / pkgconfig
Please note, that /usr/lib/pkgconfig/clutter-gesture.pc also contains a placeholder which was not replaced by the configure script:
modlibexecdir=@modlibexecdir@
This must also be fixed.
-> APPROVED
Thanks for the review
New Package CVS Request
=======================
Package Name: clutter-gesture
Short Description: Gesture Library for Clutter
Owners: pbrobinson
Branches: F-12
InitialCC:
cvs done.
Built and in rawhide | https://bugzilla.redhat.com/show_bug.cgi?id=523523 | CC-MAIN-2017-51 | refinedweb | 905 | 55.34 |
0
Hello
Well I have this coding and I thought I set it up ok to catch this simple exception, but as of right now, the program continues to run after the exception happens and its not even catching it. where did I go wrong. Thank you all
Your rookie programmer
GCard
#include<iostream> using namespace std; void main(void) { float nNumber1, nNumber2; while(1) { cout << "Enter two integers to get their quotient:"; try{ cin >> nNumber1 >> nNumber2; if(nNumber2==0) { throw(nNumber2); } } catch(int nValue){ cout << "\nERROR: You tried to divide by" <<nValue<< "\n"; return; } cout << "The quotient is:" << (float)nNumber1/nNumber2<< "\n"; } } | https://www.daniweb.com/programming/software-development/threads/109141/why-doesn-t-this-program-catches-my-exception | CC-MAIN-2017-30 | refinedweb | 103 | 56.63 |
x:
5 3 0 0 255 16777215 24 24 24 0 0 24 24 0 24 24 24Sample Output:
24
水题,水题,绝逼大水题。本来怕容量太大的,结果。。一点问题都没有。
#include<iostream> #include<map> #include<stdio.h> using namespace std; int main(){ for(int n,m;scanf("%d%d",&n,&m)!=EOF;){ map<int,int>sence; for(int i = 0;i < n; i++){ for(int j = 0;j < m;j++){ int temp; scanf("%d",&temp); sence[temp]++; } } map<int,int>::iterator iter ; int max = 0; int max_num = 0; for(iter = sence.begin();iter != sence.end();iter++){ if(iter->second > max){ max = iter->second; max_num = iter->first; } } printf("%d\n",max_num); } return 0; } | https://blog.csdn.net/Win_Man/article/details/49965431 | CC-MAIN-2018-30 | refinedweb | 108 | 63.39 |
Xmonad/Using xmonad in Gnome
From HaskellWiki
Revision as of 08:05, 27 October 2010
1 Introduction
Xmonad makes an excellent drop-in replacement for Gnome's default window manager (metacity) giving you/sessions/required_components/windowmanager xmonad gconftool-2 -t string -s /desktop/gnome/applications/winow_manager/current xmonad
After startup I am running
killall compiz xmonad &
2.2.2.2.2 xmonad-0.9 for karmic
This mailing list thread has instructions to use a PPA for newer xmonad, dzen, xmobar.
2.3 Ubuntu Jaunty
At least 3 XMonad users have found that the ~/.gnomerc will not work on Jaunty Ubuntu when one is upgrading from Intrepid; apparently the ~/.gconf/ directory is incompatible or something, so Gnome/Ubuntu will not read .gnomerc and any settings in it will be ignored.
The work-around is essentially to remove .gconf entirely. On the next login, a fresh default .gconf will be created and .gnomerc will be read. This of course implies that one's settings and preferences will also be removed, and one will have to redo them. (Copying over selected directories from the old .gconf to the new one may or may not work.)
Or alternatively, the following worked for me (without touching .gconf or .gnomerc or exports): Add an xmonad launcher in the gnome-session-properties and then execute:
$ gconftool -t string -s /desktop/gnome/applications/window_manager/current xmonad $ gconftool -t string -s /desktop/gnome/session/required_components/windowmanager xmonad $ killall metacity; xmonad &
Also make sure to add the /usr/share/applications/xmonad.desktop file shown above, if it's not already present. This lets gnome know that xmonad is a windowmanager and where to look for it.
2.3.1 newer haskell and xmonad for jaunty
gspreemann's PPA has newer haskell toolchain without some of the setup problems others have had, as well as xmonad-0.9.
2.4 Ubuntu Intrepid
This forum thread has instructions for making Gnome play nice with xmonad on intrepid.
2.5.1 Using the Config.Gnome module
For xmonad-0.8 or greater, see Basic DE Integration for a simple three line
xmonad.hs configuration that:
- integrates docks and gnome-panel using ewmh's
- allows gap-toggling
- binds the gnome run dialog to mod-p, and mod-shift-q to save your session and logout
- otherwise keeps xmonad defaults.
It is a good starting point. You can then come back and add some of the features below once everything's working.
Once the Config.Gnome module set up, you may want to customize these gnome settings.
3.1.1 Keys
Use EZConfig to add keybindings. Note that you must use gnomeConfig whereever defaultConfig is mentioned.
import XMonad import XMonad.Util.EZConfig ]
3.1.2 ManageHooks
Be sure to include the default gnome manageHook when overriding manageHooks so that xmonad can leave a gap for gnome-panel:
main = xmonad gnomeConfig { ... , manageHook = composeAll [ manageHook gnomeConfig , title =? "foo" --> doShift "2" -- needs: import XMonad.Hooks.ManageHelpers (isFullscreen,doFullFloat) , isFullscreen --> doFullFloat ] ... }) ... }
(Using recent gnome and xmonad I found that it was necessary.) In 1 dimension: CycleWS:
main = xmonad gnomeConfig { modMask = mod4Mask } `additionalKeysP` [ ... -- moving workspaces , ("M-<Left>", prevWS ) , ("M-<Right>", nextWS ) , ("M-S-<Left>", shiftToPrev ) , ("M-S-<Right>", shiftToNext ) ]
4.4.2 In 2 dimensions: Plane
If Gnome is configured to lay out desktops in more than one line, it's possible to navigate with Ctrl+Alt+Up/Bottom also. The contrib module XMonad.Actions.Plane, available in the xmonad-0.8 or greater. The keybindings can be incorporated in with EZConfig as such:
import XMonad import XMonad.Config.Gnome import XMonad.Actions.Plane import XMonad.Util.EZConfig import qualified Data.Map as M ] `additionalKeys` -- NOTE: planeKeys requires xmonad-0.9 or greater M.toList (planeKeys mod4Mask GConf Finite)
Actions.WorkspaceCursors can be used to navigate workspaces arranged in three or more dimensions.
4 are several remedies for this.
* Run 'xmonad &' from a command line. * Quit X using Alt-Ctrl-Backspace. * Rebind mod+shift+q
4.)
, ("M-S-q", spawn "gnome-session-save --gui --logout-dialog") )
4 , ("M-S-l", spawn "gnome-screensaver-command -l") -- Logout , ("M1-M-S-l", spawn "gnome-session-save --gui --kill") -- Sleep , ("M1-S-'", spawn "gnome-power-cmd.sh suspend") -- Reboot , ("M1-S-,", spawn "gnome-power-cmd.sh reboot") -- Deep Sleep , ("M1-S-.", spawn "gnome-power-cmd.sh hibernate") -- Death , ("M1-S-p", spawn "gnome-power-cmd.sh shutdown")" } | https://wiki.haskell.org/index.php?title=Xmonad/Using_xmonad_in_Gnome&diff=37339&oldid=37270 | CC-MAIN-2016-30 | refinedweb | 725 | 52.36 |
El vie, 15-08-2008 a las 19:31 +0300, Vesa Jääskeläinen escribió:
> Javier Martín wrote:
> > WRT "kernel and modules going hand by hand", think about external
> > modules: if the drivemap module is finally rejected for introduction in
> > GRUB, I will not scrap it, but keep it as a module external to the
> > official GNU sources and possibly offer it in a web in the form of
> > patches to the official GRUB2. In this case, changes made to the kernel
> > would not take into account that module, which would break if I weren't
> > monitoring this list daily.
>
> Then it is really your problem ;)

Indeed, but bitrot is not just the realm of external modules: it's
happening right now even within the GRUB trunk, as you admit in the
"Build problems on powerpc" thread...

> > Additionally, the cost of this function in platforms which don't have
> > any structs registered yet, as the function could be a stub like this:
> >
> > void* grub_machine_get_platform_structure (int stidx)
> > {
> >   grub_error (GRUB_ERR_BAD_ARGUMENT, "Struct %d not supported", stidx);
> >   return 0;
> > }
> >
> > The kernel space taken would most likely be less than 50 bytes. For
> > i386-pc, it could be like this (also lightweight) function:
> >
> > void* grub_machine_get_platform_structure (int stidx)
> > {
> >   grub_errno = GRUB_ERR_NONE;
> >
> >   switch (stidx)
> >   {
> >   case GRUB_MACHINE_I386_IVT:
> >     return /* Call to asm function that runs SIDT in real mode */ ;
> >   case GRUB_MACHINE_I386_BDA:
> >     return (void*)0x400;
> >   default:
> >     grub_error (GRUB_ERR_BAD_ARGUMENT, "Struct %d not supported",
> >                 stidx);
> >     return 0;
> >   }
> > }
>
> And what lets assume couple of extra platforms... how about
> x86-32bit-efi and ppc. What do they do?
>
> Implement their own enum entries for those indexes and only use their
> own indices...?

At first, they would just have the stub which does not recognize any
index, but yes, i386-efi devs could decide that a certain
firmware-provided structure (like a video modes info table or such, I
don't know the internals of EFI) might be interesting to a module
they're creating, so they create an index for it and add it to the
version of the function in their platform. If I had not mentioned it
before, the function would be declared in a cross-platform file, but
_implemented_ in platform-specific files, and the indices would be
declared in the platform-specific machine.h. Thus, there would not be a
"single" indices namespace: structure #1 might be the IVT in i386-pc,
but some devices info table in powerpc-ieee1275.

> Where here we are sharing any code? (if we do not count the name of the
> fuction.) Interface is kinda useless if there is no possibility that
> no-one is sharing its functionality...

The idea is a single function to retrieve the addresses of
firmware-provided/used structures. This includes the IVT and BDA in
i386-pc, but as I said before it could also be used by other platforms
for their own structures. The alternative would be just creating such
"get struct X" functions on each platform as they are needed, but I
imagined that a single interface (with such a low cost in space) would
be a more elegant solution.

-Habbit
signature.asc
Description: This part of the message is digitally signed
Since 2018, every December, I try to work my way through Advent of Code, a set of 25 puzzles, one revealed each day of the month until Christmas day. It has been around since 2015 (I also tried working through the earlier years; check all of my solutions in my advent of code repo).
A short description from their about page: "Advent of Code is an Advent calendar of small programming puzzles for a variety of skill sets and skill levels that can be solved in any programming language you like."
The puzzles vary in difficulty, getting harder as the month goes on. This year, I'll be writing about my solutions, a little about each puzzle's rationale, and my thought process. I'm not planning to find the best solution for each problem, but I'll try to go one or two steps further on optimizations to showcase language features or get better run times. I'll also assume all my inputs lead to a valid result, so no significant error checking will be done.
I'll mainly use Python for the solutions. It's the language I'm most proficient in, and this year it's proven to have a lot of excellent tools for getting cleaner results.
We're on Day 6 at the time of writing, so I'll go over each day in this post, then update through the week. Follow along with me!
Day 1: Report Repair (link)
TO START: I absolutely love the stories every year. Every year, the main character is an elf, tasked with saving Christmas. This year though, we're going on a vacation. Christmas is safe! Some good news this year, at last :)
In the first part of day 1, we're tasked with processing an expense report (a list of numbers): we have to find the two entries that add up to 2020 and multiply them together. The input was super short, so I could have gone with the "brute force" approach: for each number, go over the list again and find the one that adds up to 2020 with it. Instead, I knew a simple trick to traverse the list only once: using a set as the data structure to hold the numbers, I can look up an item in constant time. For each number, I just need to check whether 2020 - number is in the set!
from typing import List

def part_one(report: List[int]):
    report_set = set(report)
    for number in report:
        if 2020 - number in report_set:
            return number * (2020 - number)
The second part presents a similar puzzle, but now we need to find 3 numbers that add up to 2020. At this point, I remembered Python's itertools.combinations, which returns the subsequences of a list with a given size. I can use it for part 1 as well, so I just wrote a generic solution:
from functools import reduce
from itertools import combinations
from operator import mul

def solve_with_combinations(report, n):
    for test in combinations(report, n):
        if sum(test) == 2020:
            return reduce(mul, test)

def part_one_combinations():
    return solve_with_combinations(report, 2)

def part_two_combinations():
    return solve_with_combinations(report, 3)
Iterating over all the combinations is O(n^2) for pairs and O(n^3) for triples in the worst case, but I found out I could get a guaranteed O(n^2) for part two. The solution involves sorting the list beforehand, then using a two-pointer technique: for each number in the list, I keep a pointer to the next number and one to the last number of the list. If the sum of the three is more than 2020, I decrease the end pointer to reduce the sum. If it's less than 2020, I increase the first pointer to get a larger sum. I repeat this for each item until I find the 3 numbers that match the requirements. I had to do a bit of research, so here's the source.
def best_performance_part_two(report):
    n = len(report)
    for i in range(n):
        left = i + 1
        right = n - 1
        while left < right:
            result = report[i] + report[left] + report[right]
            if result == 2020:
                return report[i] * report[left] * report[right]
            if result < 2020:
                left += 1
            else:
                right -= 1

best_performance_part_two(sorted(report))
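As a sanity check, here is the two-pointer version run against the sample report from the puzzle statement, where 979 + 366 + 675 = 2020:

```python
def best_performance_part_two(report):
    n = len(report)
    for i in range(n):
        left = i + 1
        right = n - 1
        while left < right:
            result = report[i] + report[left] + report[right]
            if result == 2020:
                return report[i] * report[left] * report[right]
            if result < 2020:
                left += 1
            else:
                right -= 1

# sample report from the puzzle statement
sample = [1721, 979, 366, 299, 675, 1456]
print(best_performance_part_two(sorted(sample)))  # 241861950
```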
Day 2: Password Philosophy (link)
On day 2, we're tasked with processing a list of passwords and checking if they follow a set policy. Each line of the input gives the policy and the password to check. The password policy indicates the lowest and highest number of times a given letter must appear for the password to be valid. A valid input looks like this:
1-3 a: abcde 1-3 b: cdefg 2-9 c: ccccccccc
For this one, I went with a regular expression to parse each line and a collections.Counter to check that the letter appears the right number of times. Not much to improve there.
import re
from collections import Counter
from typing import List

def part_one(passwords: List[str]):
    valid = 0
    expression = re.compile(r"(\d+)-(\d+) (.): (.*)")
    for password in passwords:
        if match := expression.match(password):
            min_, max_, letter, password = match.groups()
            count = Counter(password)[letter]
            if int(min_) <= count <= int(max_):
                valid += 1
    return valid
In part 2, the only difference is a reinterpretation of the policy. Each policy actually describes two positions in the password, and exactly one of these positions must contain the given letter. So I get the letters, add a set, and test if the set has size two (meaning the letters are different), and the given letter is in the set. There might definitely be better ways to check this, but here it goes:
import re
from typing import List

def part_two(passwords: List[str]):
    valid = 0
    expression = re.compile(r"(\d+)-(\d+) (.): (.*)")
    for password in passwords:
        if match := expression.match(password):
            pos_1, pos_2, letter, password = match.groups()
            letters = {password[int(pos_1) - 1], password[int(pos_2) - 1]}
            if len(letters) == 2 and letter in letters:
                valid += 1
    return valid
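As a quick check, here is a condensed, self-contained version of both policies run on the sample input above; policy one accepts the first and third passwords, while policy two accepts only the first:

```python
import re

expression = re.compile(r"(\d+)-(\d+) (.): (.*)")

def valid_part_one(line):
    a, b, letter, password = expression.match(line).groups()
    return int(a) <= password.count(letter) <= int(b)

def valid_part_two(line):
    a, b, letter, password = expression.match(line).groups()
    # exactly one of the two positions holds the letter (an XOR)
    return (password[int(a) - 1] == letter) != (password[int(b) - 1] == letter)

sample = ["1-3 a: abcde", "1-3 b: cdefg", "2-9 c: ccccccccc"]
print(sum(map(valid_part_one, sample)))  # 2
print(sum(map(valid_part_two, sample)))  # 1
```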
Day 3: Toboggan Trajectory (link)
In this one, the puzzle input is a section of a "map" where . represents an empty space and # represents a tree: the geography of an area you're going to be sliding through on a toboggan. You want to find the slope on the map where you hit the smallest number of trees (steering is hard in this area!).
The map is only a section of the whole geography: the pattern repeats to the right "many times." This was a hint that there might be a way to figure out where on the map you are without "gluing" those sections together.
Part 1 asks how many trees there are if you follow the slope "right 3, down 1", which means you walk 3 squares to the right, then one down. The map has many more rows than columns, so you'll end up in this "extended area." How can we read this map and count the trees without duplicating the lines to figure out what these hidden areas look like? The solution is keeping track of your position and, every time your coordinate lands outside the line, figuring out the new index as your position modulo the length of the line.
I made part 1 generic over any slope, knowing that I'd need it for more cases; here's the solution I landed on:
from itertools import count

def count_trees(right, down):
    # itertools.count yields numbers separated by `step`.
    # Think range() but unbounded. Really nice for this case!
    counter = count(step=right)
    total_trees = 0
    for i, line in enumerate(read_file()):
        # line is something like ".....#.##......#..##..........#"
        if i % down != 0:
            # If going down more than one row at a time, skip ahead to
            # the lines that are multiples of `down`.
            continue
        line = line.strip()
        # next(counter) returns the number of steps walked to the right.
        # If that lands past the end of the line, the modulo gives the
        # index of the matching square in the repeated section.
        position = next(counter) % len(line)
        # This is a nice trick with Python booleans: bool is a subclass
        # of int, and True == 1 :)
        total_trees += line[position] == "#"
    return total_trees
For part 2, we're just asked to check the tree count for other slopes (including one where you go down two rows at a time). I just passed these to the function above and multiplied the values.
from functools import reduce
from operator import mul

vals = [
    count_trees(1, 1),
    count_trees(3, 1),
    count_trees(5, 1),
    count_trees(7, 1),
    count_trees(1, 2),
]
print(reduce(mul, vals))
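To see the wrap-around logic working end to end, here is a self-contained version run on the 11-column sample map from the puzzle statement; slope (3, 1) hits 7 trees, and the product over all five slopes is 336:

```python
from functools import reduce
from operator import mul

SAMPLE_MAP = """\
..##.......
#...#...#..
.#....#..#.
..#.#...#.#
.#...##..#.
..#.##.....
.#.#.#....#
.#........#
#.##...#...
#...##....#
.#..#...#.#""".splitlines()

def count_trees_in(rows, right, down):
    return sum(
        row[(i // down) * right % len(row)] == "#"
        for i, row in enumerate(rows)
        if i % down == 0
    )

print(count_trees_in(SAMPLE_MAP, 3, 1))  # 7
slopes = [(1, 1), (3, 1), (5, 1), (7, 1), (1, 2)]
print(reduce(mul, (count_trees_in(SAMPLE_MAP, r, d) for r, d in slopes)))  # 336
```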
Day 4: Passport Processing (link)
This one felt like work. We're tasked with validating passports and checking if they have the required fields. Fields are those of a common passport (date of birth, issue date, country, etc.). Country is not required because "North Pole Credentials aren't issued by a country."
I used dataclasses: I read the input file and pass a key-value map of the results to the auto-generated constructor. If any of the required arguments is missing, the constructor raises a TypeError, which I catch, skipping the passport as invalid.
from dataclasses import dataclass

@dataclass
class Passport:
    byr: str  # Birth Year
    iyr: str  # Issue Year
    eyr: str  # Expiration Year
    hgt: str  # Height
    hcl: str  # Hair Color
    ecl: str  # Eye Color
    pid: str  # Passport ID
    cid: str = ""  # Country ID. The default value at the class definition
                   # makes this field not required.

def part_1():
    passports = []
    p = {}
    for line in read_file():
        if not line.strip():
            try:
                passports.append(Passport(**p))
            except TypeError:
                continue
            finally:
                p = {}
            continue
        values = line.strip().split(" ")
        for value in values:
            k, v = value.split(":")
            p[k] = v
    # last passport: the file doesn't end with a blank line
    try:
        passports.append(Passport(**p))
    except TypeError:
        pass
    return passports

first_pass_valid = part_1()
print(len(first_pass_valid))
Part 2 extends the validation, so I added a validate method to the Passport dataclass and called it for the valid passports from part 1.
import re

# Patterns implied by the puzzle rules (their definitions were elided
# in the original post):
size_re = re.compile(r"^(\d+)(cm|in)$")
hair_re = re.compile(r"^#[0-9a-f]{6}$")
pid_re = re.compile(r"^\d{9}$")

@dataclass
class Passport:
    # fields...

    def validate(self):
        assert 1920 <= int(self.byr) <= 2002
        assert 2010 <= int(self.iyr) <= 2020
        assert 2020 <= int(self.eyr) <= 2030
        h, unit = size_re.match(self.hgt).groups()
        if unit == "cm":
            assert 150 <= int(h) <= 193
        else:
            assert 59 <= int(h) <= 76
        assert hair_re.match(self.hcl)
        assert self.ecl in ["amb", "blu", "brn", "gry", "grn", "hzl", "oth"]
        assert pid_re.match(self.pid)

# ... part 1 ...

valid = 0
for passport in first_pass_valid:
    try:
        passport.validate()
        valid += 1
    except Exception:
        print(passport)
        continue
print(valid)
I almost skipped this one. It looks too much like my day-to-day work (validating forms against business logic and saving them is the bread and butter of web applications nowadays).
Day 5: Binary Boarding
This was a fun one. I should've noticed from the name of today's puzzle that there was an easier solution than writing the puzzle rules almost verbatim. Today we're looking through a list of boarding passes and "decoding" the seat IDs from the pass codes. From the day's instructions, 'a seat might be specified like FBFBBFFRLR, where F means "front", B means "back", L means "left", and R means "right"'. This defines a binary space partitioning. I then proceeded to write the algorithm exactly as the puzzle described it. Part 1 asks you to submit the highest seat ID. Here's the implementation:
def partition(code: str, count: int, lower_ch: str, upper_ch: str) -> int:
    left = 0
    right = 2 ** count
    for i in range(count):  # for each char in the code
        ch = code[i]
        mid = (right + left) // 2  # split the current region in two halves
        if ch == lower_ch:
            # if the char represents the "lower half" of the current
            # region, move the right pointer to the middle
            right = mid
        elif ch == upper_ch:
            # else, move the left pointer to the middle
            left = mid
    # you'll either end with the same number or there will be a
    # difference of 1. Return the minimum.
    return min(left, right)

def part_1():
    max_id = 0
    for code in read_file():
        # First 7 letters represent the row
        row = partition(code[:7], 7, "F", "B")
        # last 3 represent the column
        col = partition(code[-3:], 3, "L", "R")
        seat_id = row * 8 + col
        if seat_id > max_id:
            max_id = seat_id
    return max_id
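Checking it against the worked example from the puzzle statement: FBFBBFFRLR should decode to row 44, column 5, seat ID 357.

```python
def partition(code, count, lower_ch, upper_ch):
    left = 0
    right = 2 ** count
    for ch in code[:count]:
        mid = (right + left) // 2
        if ch == lower_ch:
            right = mid
        elif ch == upper_ch:
            left = mid
    return min(left, right)

code = "FBFBBFFRLR"  # worked example from the puzzle statement
row = partition(code[:7], 7, "F", "B")
col = partition(code[-3:], 3, "L", "R")
print(row, col, row * 8 + col)  # 44 5 357
```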
When discussing day 5 solutions with colleagues, one of them pointed out that the rules are just the steps to transform a binary number into its base-10 representation, where "F"/"B" and "L"/"R" stand for "0" and "1". The int constructor in Python can parse the string representation of a number in any base, which you pass as the second parameter, so int("1001", 2) returns 9.
def to_int(code, zero, one):
    code = code.replace(zero, "0").replace(one, "1")
    return int(code, base=2)

# ...
for code in read_file():
    row = to_int(code[:7], "F", "B")
    col = to_int(code[-3:], "L", "R")
    seat_id = row * 8 + col
Neat.
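The same worked example through the binary-conversion route. In fact, since the ID is row * 8 + col and the column takes exactly 3 bits, the whole 10-character code is already the seat ID in binary:

```python
def to_int(code, zero, one):
    code = code.replace(zero, "0").replace(one, "1")
    return int(code, base=2)

code = "FBFBBFFRLR"
seat_id = to_int(code[:7], "F", "B") * 8 + to_int(code[-3:], "L", "R")
print(seat_id)  # 357

# one conversion over the full code gives the same number
print(to_int(code.replace("F", "L").replace("B", "R"), "L", "R"))  # 357
```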
For part 2, we want to find the only missing seat ID in the list (the story's character lost their boarding pass!). I could not, for the life of me, figure out how to do that. The puzzle states that the back and the front of the airplane are empty, so you need to find the empty spot in the "middle." I went with the first idea in my mind: visualize the airplane after all seats are filled, print out the rows and columns, and manually find the seat ID.
def part_2_visualization():
    """
    Will print something like this with my input:

    ...
    086 -> ['#', '#', '#', '#', '#', '#', '#', '#']
    087 -> ['#', '#', '#', '#', '#', '#', '#', '#']
    088 -> ['#', '.', '#', '#', '#', '#', '#', '#']
    089 -> ['#', '#', '#', '#', '#', '#', '#', '#']
    090 -> ['#', '#', '#', '#', '#', '#', '#', '#']
    ...

    Meaning the free seat is in row 88, col 1.
    """
    aircraft = [["." for _ in range(8)] for _ in range(128)]
    for code in read_file():
        row = partition(code[:7], 7, "F", "B")
        col = partition(code[-3:], 3, "L", "R")
        aircraft[row][col] = "#"
    for i, x in enumerate(aircraft):
        print("{:0>3} -> {}".format(i, x))
Again, talking with colleagues made me see a programmatic solution. It's given that the plane is full, and the ID formula is row * 8 + col. The airplane has 8 columns, so seats in the same row all share the first "piece" of this equation, and the "col" part makes the IDs map onto all the integers from 0 to 1023 (127 * 8 + 7). With all the IDs calculated, I need to find the difference between the set of IDs I have and the set of all possible IDs.
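That the mapping from (row, col) to row * 8 + col covers every integer from 0 to 1023 exactly once is easy to check:

```python
ids = {row * 8 + col for row in range(128) for col in range(8)}
print(ids == set(range(1024)))  # True
```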
def part_2_for_real_now():
    ids = set()
    for code in read_file():
        row = partition(code[:7], 7, "F", "B")
        col = partition(code[-3:], 3, "L", "R")
        ids.add(row * 8 + col)
    # all possible IDs are between the first and last
    # non-empty seat
    seat = set(range(min(ids), max(ids))) - ids
    return seat.pop()
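An arithmetic variant of the same idea: exactly one ID in the occupied range is missing, so subtracting the sum of the seen IDs from the sum of the full arithmetic series recovers it, with no set needed. A small self-contained demo:

```python
def find_missing(ids):
    lo, hi = min(ids), max(ids)
    full = (lo + hi) * (hi - lo + 1) // 2  # sum of the series lo..hi
    return full - sum(ids)

# a full plane except for seat 613
ids = [i for i in range(400, 800) if i != 613]
print(find_missing(ids))  # 613
```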
Day 6: Custom Customs
This day was an exercise on Python's Counter data structure. The input represents the questions (marked a to z) people answered "yes" to in a customs declaration form. For part 1, we're interested in finding how many questions anyone in a group of people answered "yes" to. Each line is an individual, and an empty line separates groups.
Ah! Also, from this day on, I stopped separating the puzzles by parts. I'll write the solutions together and pull the repeated bits into functions for better organization.
So I just pass each line to a Counter instance and add them up for each group. Counter implements addition, so Counter('abc') + Counter('cde') is equivalent to the dictionary {'c': 2, 'a': 1, 'b': 1, 'd': 1, 'e': 1} (note that the key 'c' has the value 2, because it appears on both sides of the addition).
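The addition behaviour is easy to check directly:

```python
from collections import Counter

merged = Counter("abc") + Counter("cde")
print(merged == Counter({"c": 2, "a": 1, "b": 1, "d": 1, "e": 1}))  # True
```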
from collections import Counter

groups = []
current_group = Counter()
group_size = 0
for line in read_file():
    if line:
        current_group += Counter(line)
        group_size += 1
    else:
        groups.append([current_group, group_size])
        current_group = Counter()
        group_size = 0
# Note: this assumes the input ends with a blank line, so the last
# group is appended inside the loop.

print("--- part 1 ---")
# The "length" of each group counter is the number of unique answers
# for that group. I could use a `set` here: the actual count is not
# important for part 1.
print(sum(map(lambda c: len(c[0]), groups)))
Using Counters made part 2 super easy. We learn that we don't want to count how many questions anyone answered "yes" to, but the ones where everyone in the group answered "yes". For each group captured in part 1, I call most_common() on the counter, which returns each letter sorted by its count in descending order. If the count equals the group's size, that letter represents a question all individuals answered "yes" to.
total_count = 0
for group, count in groups:
    for _, ans_count in group.most_common():
        if ans_count == count:
            total_count += 1
        else:
            break
print(total_count)
#include <wx/fswatcher.h>
The wxFileSystemWatcher class allows receiving notifications of file system changes.
For the full list of change types that are reported see wxFSWFlags.
This class notifies the application about the file system changes by sending events of wxFileSystemWatcherEvent class. By default these events are sent to the wxFileSystemWatcher object itself so you can derive from it and use the event table
EVT_FSWATCHER macro to handle these events in a derived class method. Alternatively, you can use wxFileSystemWatcher::SetOwner() to send the events to another object. Or you could use wxEvtHandler::Bind() with
wxEVT_FSWATCHER to handle these events in any other object. See the fswatcher sample for an example of the latter approach.
Default constructor.
Destructor.
Stops all paths from being watched and frees any system resources used by this file system watcher object.
Adds path to currently watched files.
The path argument can currently only be a directory and any changes to this directory itself or its immediate children will generate the events. Use AddTree() to monitor the directory recursively.
Note that on platforms that use symbolic links, you should consider the possibility that path is a symlink. To watch the symlink itself and not its target you may call wxFileName::DontFollowLink() on path.
This is the same as Add(), but also recursively adds every file/directory in the tree rooted at path.
Additionally a file mask can be specified to include only files matching that particular mask.
This method is implemented efficiently on MSW and macOS, but should be used with care on other platforms for directories with lots of children (e.g. the root directory) as it calls Add() for each subdirectory, potentially creating a lot of watches and taking a long time to execute.
Note that on platforms that use symbolic links, you will probably want to have called wxFileName::DontFollowLink on path. This is especially important if the symlink targets may themselves be watched.
Retrieves all watched paths and places them in paths.
Returns the number of watched paths, which is also the number of entries added to paths.
Returns the number of currently watched paths.
Clears the list of currently watched paths.
Associates the file system watcher with the given handler object.
All the events generated by this object will be passed to the specified owner. | https://docs.wxwidgets.org/3.1.5/classwx_file_system_watcher.html | CC-MAIN-2021-31 | refinedweb | 384 | 56.25 |
Use the InterruptIn class to trigger an event when a digital input pin changes.
API
Warnings:
No blocking code in ISR: avoid any call to wait, infinite while loops, or blocking calls in general.
No printf, malloc or new in ISR: avoid any call to bulky library functions. In particular, certain library functions (such as printf, malloc and new) are non re-entrant, and their behavior could be corrupted when called from an ISR.

#include "mbed.h"

InterruptIn button(SW2);
DigitalOut led(LED1);
DigitalOut flash(LED4);

void flip() {
    led = !led;
}

int main() {
    // attach the address of the flip function to the rising edge
    button.rise(&flip);
    while(1) {
        // wait around, interrupts will interrupt this!
        flash = !flash;
        wait(0.25);
    }
}
Example
Try the following example to count rising edges on a pin.
#include "mbed.h"

class Counter {
public:
    Counter(PinName pin) : _interrupt(pin), _count(0) {
        // create the InterruptIn on the pin specified to Counter,
        // then attach the increment function of this counter instance
        _interrupt.rise(callback(this, &Counter::increment));
    }

    void increment() {
        _count++;
    }

    int read() {
        return _count;
    }

private:
    InterruptIn _interrupt;
    volatile int _count;
};

Counter counter(SW2);

int main() {
    while(1) {
        printf("Count so far: %d\n", counter.read());
        wait(2);
    }
}
Related
To read an input, see DigitalIn.
For timer-based interrupts, see Ticker (repeating interrupt) and Timeout (one-time interrupt). | https://docs.mbed.com/docs/mbed-os-api-reference/en/latest/APIs/io/InterruptIn/ | CC-MAIN-2018-30 | refinedweb | 231 | 55.34 |
Can someone explain how this bash script works? The part I don't understand is
""":"
#!/bin/sh
""":"
echo called by bash
exec python $0 ${1+"$@"}
"""
import sys
print 'called by python, args:',sys.argv[1:]
$ ./callself.sh xx
called by bash
called by python, args: ['xx']
$ ./callself.sh
called by bash
called by python, args: []
That's clever! In Bash, the """:" will be expanded into only :, which is the null command (it does nothing). So the next few lines will be executed, leading to exec. At that point, Bash ceases to exist, and the file is re-read by Python (its name is $0), and the original arguments are forwarded.
The ${1+"$@"} means: if $1 is defined, expand to "$@", which is the list of the original script arguments. If $1 is not defined, meaning the script got no arguments, the result is empty, so nothing else is passed, not even an empty string.
In Python, the """ starts a multi-line string, which includes the Bash commands and extends up to the closing """. So Python will jump right past it.
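The script above uses Python 2's print statement. A Python 3 variant of the same trick (assuming a python3 on the PATH; note that ${1+"$@"} was a workaround for very old shells, and plain "$@" is enough in any modern sh):

```shell
cat > callself3.sh <<'EOF'
#!/bin/sh
""":"
echo called by sh
exec python3 "$0" "$@"
"""
import sys
print('called by python, args:', sys.argv[1:])
EOF
chmod +x callself3.sh
./callself3.sh xx   # prints "called by sh", then the python line
```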
User's Guide
A complete example
<html>
<head>
    <title>${title}</title>
</head>
<body>
    <p>The following items are in the list:</p>
    <ul><%for element in list: output "<li>${element}</li>"%></ul>
    <p>I hope that you would like Brail</p>
</body>
</html>
The output of this program (assuming list is (1,2,3) and title is "Demo") would be:
<html>
<head>
    <title>Demo</title>
</head>
<body>
    <p>The following items are in the list:</p>
    <ul><li>1</li><li>2</li><li>3</li></ul>
    <p>I hope that you would like Brail</p>
</body>
</html>
And the rendered HTML will look like this:
----
The following items are in the list:
  * 1
  * 2
  * 3
I hope that you would like Brail
-----
Code Separators
Brail supports two code separators, <% %> and <?brail ?>. I find that <% %> is usually easier to type, but <?brail ?> allows you to have valid XML in the views, which is important for some use cases. Anything outside a <?brail ?> or <% %> block is sent to the output. ${user.Id} can be used for string interpolation.
The code separator types cannot be mixed: only one type of separator may be used per file.
Output methods
Since most of the time you will want to slice and dice text to serve the client, you need some special tools to aid you in doing this. Output methods are methods decorated by the [Html] / [Raw] / [MarkDown] attributes. An output method's return value is transformed according to the attribute that has been set on it. For instance, consider the [Html] attribute:
<%
[Html]
def HtmlMethod():
    return "Some text that will be <html> encoded"
end
%>
${HtmlMethod()}
The output of the above script would be:
Some text that will be &lt;html&gt; encoded
The output of a method with the [Raw] attribute is the same as it would be without it (it's supplied as a NullObject for the common case), but the output of the [MarkDown] attribute is pretty interesting. Here is the code:
<%
[MarkDown]
def MarkDownOutput():
    return "[Ayende Rahien](), __Rahien__."
end
%>
${MarkDownOutput()}
And here is the output:
<p><a href="">Ayende Rahien</a>, <strong>Rahien</strong>.</p>
Markdown is very interesting and I suggest you read about its usage.
Using variables
A controller can send the view variables, and the Boo script can reference them very easily:
My name is ${name}
<ul>
<%
for element in list:
    output "<li>${element}</li>"
end
%>
</ul>
Brail has all the normal control statements of Boo, which allows for very easy way to handle such things as:
<% output AdminMenu(user) if user.IsAdministrator %>
This will send the menu to the user only if he is administrator.
One thing to note about this is that we are taking the variable name and trying to find a matching variable in the property bag that the controller has passed. If the variable does not exist, this will cause an error, so pay attention to that. You can test that a variable exists by calling the IsDefined() method.
<%
if IsDefined("error"):
    output error
end
%>
Or, using the much clearer "?variable" syntax:
<% output ?error %>
The "?variable" syntax returns an IgnoreNull proxy, which can be safely used for null propagation, like this:
<%
# will output an empty string, and not throw a null reference exception
output ?error.Notes.Count
%>
This feature can make it easier to work with optional parameters, and possible null properties. Do note that it will work only if you get the parameter from the property bag using the "?variableName" syntax. You can also use this using string interpolation, like this:
Simple string interpolation: ${?error} And a more complex example: ${?error.Notes.Count}
In both cases, if the error variable does not exists, nothing will be written to the output.
Using sub views
There are many reasons that you may want to use a sub view in your views and there are several ways to do that in Brail. The first one is to simply use the common functionality. This gives a good solution in most cases (see below for a more detailed discussion of common scripts).
The other ways is to use a true sub view, in Brail, you do that using the OutputSubView() method:
Some html <?brail OutputSubView("/home/menu")?> <br/>some more html
You need to pay attention to two things here:
The rules for finding the sub view are as follows:

- If the sub view starts with a '/': the sub view is found using the same algorithm you use for RenderView()
- If the sub view doesn't start with a '/': the sub view is searched for starting from the current script's directory
A sub view inherits all the properties from its parent view, so you have access to anything that you want there.
You can also call a sub view with parameters, like you would call a function, you do it like this:
<?brail OutputSubView("/home/menu", { "var": value, "second_var": another_value } ) ?>
Pay attention to the brackets: what we have here is a dictionary that is passed to the /home/menu view. From the sub view itself, you can just access the variables normally. These variables, however, are not inherited from views to sub views.
Including files
Occasionally a need will arise to include a file "as is" in the output. This may be a script file or some common HTML; the point is not to interpret it, but simply to send it to the user. To do that, you just need this:
${System.IO.File.OpenText("some-file.js").ReadToEnd()}
Of course, this is quite a bit to write, so you can just put an import at the top of the file and then call the method without the namespace:
<% import System.IO %> ${File.OpenText("some-file.js").ReadToEnd()}
Principle of Least Surprise
In general, since NVelocity is the older view engine for now, I have tried to copy as much behavior as possible from NVelocityViewEngine. If you have a question about how Brail works, you can usually rely on the NVelocity behavior. If you find something different, that is probably a bug, so tell us about it.
Common Functionality
In many cases, you'll have common functionality that you'll want to share among all views. Just drop the files in the CommonScripts directory (most often this means Views\CommonScripts) and they will be accessible to any script in the site.
The language used to write the common scripts is the whitespace-agnostic derivative of Boo, not the normal one. This is so you won't have whitespace sensitivity in one place and not in the other.
The common scripts are normal Boo scripts and get none of the special treatment that the view scripts gets. An error in compiling one of the common scripts would cause the entire application to stop.
Here is an example of a script that sits on the CommonScripts and how to access it from a view:
Views\CommonScripts\SayHello.boo - The common script
def SayHello(name as string):
    return "Hello, ${name}"
end
Views\Home\HelloFromCommon.boo - The view using the common functionality
<% output SayHello("Ayende") %>
The output from the script:
Hello, Ayende
Symbols and dictionaries
Quite often, you need to pass a string to a method, and it can get quite cumbersome to understand when you have several such parameters. Brail support the notion of symbols, which allows to use an identifier when you need to pass a string. A symbol is recognized by a preceding '@' character, so you can use this syntax:
<% output SayHello( @Ayende ) %>
And it will work exactly as if you called SayHello( "Ayende" ). The difference is more noticable when you take into account methods that take components or dictionary parameters, such as this example:
<% component Grid, {@source: users, @columns: [@Id, @Name] } %>
Using a symbol allows a much nicer syntax than using the string alternative:
<% component Grid, {"source": users, "columns": ["Id", "Name"] } %>
Layouts
Using layouts is very easy: a layout is just a normal script that outputs the ChildOutput property somewhere. Here is an example:
Header
${ChildOutput}
Footer
31 March 2011 19:16 [Source: ICIS news]
LONDON (ICIS)--Indonesian producer PT Pupuk Kalimantan Timur (Kaltim) has awarded the contract for its new ammonia and urea plant.
The plant will be located in Bontang, East Kalimantan, Indonesia and will have a capacity of 2,700 tonnes/day of ammonia and 3,500 tonnes/day of urea. The companies will collaborate on the engineering, procurement and construction (EPC) phases of the project on a lump-sum turnkey basis.
The contract is worth $577m (€410m) and construction is expected to take 33 months. No date for the start of construction work was provided.
News of the award concludes the bidding process, which opened in September 2010 and received interest from companies around the world. This will be Kaltim's fifth fertilizer plant.
($1 = €0.71)
I would like to perform some data analysis. Specifically, I would like to analyze the potential correlation between the price of the CAC40 and Bitcoin. For that I did some data scraping, and I was able to import the values of the CAC40 and Bitcoin over the last two years. Here is the script below, using the yfinance package.
import yfinance as yf

cac = "^FCHI"
data = yf.Ticker(cac)
dataDF = data.history(interval="1d", start="2020-1-1", end="2022-1-1")
dataDF

btc = "BTC-USD"
data2 = yf.Ticker(btc)
dataDF2 = data2.history(interval="1d", start="2020-1-1", end="2022-1-1")
dataDF2
I get 6 columns (date, open price, higher price, lower price, close price, volume) for CAC40 and for Bitcoin.
Now I would like to analyze those results.
Could you give me the histogram and correlation graph scripts to highlight my results?
Thank you in advance for your answers !!
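A possible sketch of the requested analysis. It assumes the two DataFrames from the question; here synthetic random-walk series stand in for the downloads so the snippet is self-contained (the column names and figure styling are illustrative, not from the original post):

```python
import io

import numpy as np
import pandas as pd
import matplotlib
matplotlib.use("Agg")  # render off-screen; no display needed
import matplotlib.pyplot as plt

# Synthetic stand-ins for dataDF["Close"] (CAC40) and dataDF2["Close"] (BTC)
rng = np.random.default_rng(0)
dates = pd.date_range("2020-01-01", "2021-12-31", freq="D")
cac_close = pd.Series(5000 * np.exp(np.cumsum(rng.normal(0, 0.01, len(dates)))), index=dates)
btc_close = pd.Series(7000 * np.exp(np.cumsum(rng.normal(0, 0.04, len(dates)))), index=dates)

# Compare daily returns rather than raw prices: two trending price series
# can look correlated even when their day-to-day moves are unrelated.
returns = pd.DataFrame({"CAC40": cac_close.pct_change(),
                        "BTC": btc_close.pct_change()}).dropna()
corr = returns["CAC40"].corr(returns["BTC"])  # Pearson correlation of daily returns

# Histograms of each return series plus a scatter plot of one against the other
fig, axes = plt.subplots(1, 3, figsize=(12, 4))
axes[0].hist(returns["CAC40"], bins=50)
axes[0].set_title("CAC40 daily returns")
axes[1].hist(returns["BTC"], bins=50)
axes[1].set_title("BTC daily returns")
axes[2].scatter(returns["CAC40"], returns["BTC"], s=4)
axes[2].set_title(f"corr = {corr:.2f}")
buf = io.BytesIO()
fig.savefig(buf, format="png")  # with real data, use plt.savefig("corr.png") or plt.show()
```

With the real data, replace the synthetic series with dataDF["Close"] and dataDF2["Close"] after aligning the two indexes (Bitcoin trades on weekends, the CAC40 does not), for example with pd.concat([...], axis=1).dropna().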
On Sat, Feb 17, 2007 at 01:11:48PM +0000, Richard W.M. Jones wrote:
> Mark McLoughlin wrote:
> > Add a qemudLog() function which uses syslog() if we're in
> > daemon mode, doesn't output INFO/DEBUG messages unless
> > the verbose flag is set and doesn't output DEBUG messages
> > unless compiled with --enable-debug.
>
> You're all gonna hate this I know, but libvirtd handles syslog by
> forking an external logger(1) process. Messages sent to stderr go to
> syslog. This is partly necessary because the SunRPC code within glibc
> is a bit too happy to send debug messages to stderr & nowhere else.

Is this just with regard to the server side of SunRPC, or does it apply to the client side too? If using libvirt from command-line tools it won't be nice if SunRPC is spewing crap to STDERR.
Investigate current XML tools
Find tools that edit, validate, format, and compare XML, plus support XQuery, XPath, sitemaps, schemas, and RSS feeds
When you select tools to work with XML-related technologies, first determine your requirements. For example, if you typically do multiple tasks with XML (edit, validate, and more), consider an XML IDE with the appropriate functions. For a specific task (compare XML files or build a sitemap), consider a more focused tool for that single task.
In this article, investigate these categories to find XML tools that fit your needs:
- XML sitemap creators and validators
- RSS feed generators
- XML schema generators
- XML validators
- XML formatters
- XML editors
- XML tools
- XML open source tools
- XML IDEs
- XML Compare tools
- XQuery tools
- XPath tools
XML sitemap creator
An XML sitemap lists all the URLs for a website. See Related topics for links to all listed tools.
Several sitemap generation tools are now available:
- Google SiteMap Generator automatically generates a sitemap based on updates and traffic to your website when you deploy it on a web server.
- Gsite Crawler creates sitemaps. It is a Windows-based desktop tool.
- Apart from the downloadable tools, many online applications can generate sitemaps; two examples are:
- Sitemaps Builder creates sitemaps for Google, HTML, and text URLs.
- XML Sitemaps builds sitemaps in XML, ROR, Text, or HTML formats.
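For simple sites, the format these generators emit is small enough to produce yourself; a sketch using Python's standard library (the URLs are placeholders):

```python
import xml.etree.ElementTree as ET

NS = "http://www.sitemaps.org/schemas/sitemap/0.9"
ET.register_namespace("", NS)  # serialize without a namespace prefix

def build_sitemap(entries):
    """Build a sitemap XML string from (loc, lastmod) pairs."""
    urlset = ET.Element(f"{{{NS}}}urlset")
    for loc, lastmod in entries:
        url = ET.SubElement(urlset, f"{{{NS}}}url")
        ET.SubElement(url, f"{{{NS}}}loc").text = loc
        ET.SubElement(url, f"{{{NS}}}lastmod").text = lastmod
    return ET.tostring(urlset, encoding="unicode")

sitemap = build_sitemap([
    ("https://example.com/", "2011-01-01"),
    ("https://example.com/about", "2011-02-15"),
])
print(sitemap)
```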
XML sitemap validators
Sitemap validators are used to validate the sitemap generated for a website. A validator checks that the sitemap is valid for search engines to consume. See Related topics for links to all listed tools.
Check this list of sitemap validators:
- Automapit sitemap validator validates your sitemap to ensure acceptance by search engines.
- Sitemap XML validator checks your site map for valid XML code so you can correct errors before you submit it to search engines.
- XML sitemaps validator identifies any sitemap problems for you to resolve before you inform search engines.
- Online Merchant sitemap checker checks the XML headers in your sitemap.xml file for accuracy before you submit it.
RSS feed generators
RSS newsfeeds are a great way to keep your site visitors updated with the latest content added to your site. RSS feed generators are popular among people who wish to glance at the headlines of news sites (for example, CNN) or to know about the latest updates in the sports world. See Related topics for links to all listed tools.
Website developers can generate RSS feeds with these tools:
- IceRocket RSS builder is a simple interface that lets you add topics, links, and content to create RSS feeds for your website.
- Feedity creates RSS feeds for web pages, new, or products.
- RSSPect sets up RSS feeds for websites, documents, or podcasts.
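The RSS 2.0 documents these services produce have a small, fixed skeleton; a minimal hand-rolled sketch (the titles and links are made up):

```python
import xml.etree.ElementTree as ET

def build_rss(title, link, description, items):
    """Build an RSS 2.0 feed string from (title, link) item pairs."""
    rss = ET.Element("rss", version="2.0")
    channel = ET.SubElement(rss, "channel")
    ET.SubElement(channel, "title").text = title
    ET.SubElement(channel, "link").text = link
    ET.SubElement(channel, "description").text = description
    for item_title, item_link in items:
        item = ET.SubElement(channel, "item")
        ET.SubElement(item, "title").text = item_title
        ET.SubElement(item, "link").text = item_link
    return ET.tostring(rss, encoding="unicode")

feed = build_rss("Example Site", "https://example.com", "Latest updates",
                 [("First post", "https://example.com/posts/1"),
                  ("Second post", "https://example.com/posts/2")])
print(feed)
```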
XML schema generators
You can generate XML schemas from an XML instance. See Related topics for links to all listed tools.
Available tools include:
- Trang from ThaiOpenSource, a command-line based tool, generates XML Schema Definition (XSD) from XML.
- XMLBeans is a tool from Apache that provides several functions, one of which is schema generation using inst2xsd (Instance to Schema Tool).
- XML for ASP BuildXMLSchema is an online XML schema generator.
XML validators
You can validate XML instances against their schemas. See Related topics for links to all listed tools.
Use one of these online tools:
- XMLValidation.com validates your XML document against an XML schema or DTD declared in the document or performs a syntax check if no schema or DTD is declared.
- DecisionSoft.com Schema Validator validates a single schema plus an instance document and lists errors.
- W3C XML validator is a service that validates schema documents with the namespace URI.
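The syntax-check half of these services (well-formedness only, no schema) can be reproduced with Python's standard library; validating against an XSD additionally needs a third-party package such as lxml:

```python
import xml.etree.ElementTree as ET

def is_well_formed(xml_text):
    """Return (True, None) if the text parses as XML, else (False, error message)."""
    try:
        ET.fromstring(xml_text)
        return True, None
    except ET.ParseError as exc:
        return False, str(exc)

ok, _ = is_well_formed("<root><child/></root>")
bad, err = is_well_formed("<root><child></root>")  # child is never closed
print(ok, bad, err)
```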
XML formatters
XML formatting is an operation frequently performed on XML to make it readable. Most of the desktop XML tools provide this feature. To perform a quick format of XML content without installing any XML tools, try one of these online services. See Related topics for links to all listed tools.
- XMLIndent.com
- X01's online xml formatter
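The same quick formatting can also be done locally; a sketch with the standard library's minidom (the sample document is invented):

```python
import xml.dom.minidom as minidom

raw = '<config><server host="localhost" port="8080"/><debug>true</debug></config>'
pretty = minidom.parseString(raw).toprettyxml(indent="  ")
print(pretty)
```

Note that toprettyxml also inserts an XML declaration and adds whitespace text nodes between elements, so avoid it for whitespace-sensitive documents.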
XML editors
XML editors can help you clearly interpret your XML document with color highlights for elements, attributes, or plain text and indented content. Another advantage of using XML editors is that they have context-oriented options, such as the tree view which enables a user to traverse the various nodes of an XML document easily. They also validate and present you with warnings and errors when you don't close XML tags properly. See Related topics for links to all listed tools.
- Xerlin XML Editor, a Java™-based tool, creates and validates XML content. The editor is an open source tool with XSLT support, and it can also validate XML against DTDs and schemas.
- Jaxe Editor, another Java-based open source XML editor, supports exporting the content to PDF, HTML-based previewing with an XSLT, and multiple platforms.
- XMLFox, a freeware product, is an XML editor with a validator tool for creating well-formed XML documents and schemas. This editor also supports other XML operations.
XML tools
XSLT transformations are useful in converting one form of XML to another using stylesheets. A wide range of tools can assist you in this process; Tiger XSLT Mapper and Kernow are just two examples. See Related topics for links to all listed tools.
Tiger XSLT Mapper is a tool that novice users can easily use to map between XML structures. It automatically creates the mappings, which you can edit using the drag-and-drop GUI.
Kernow is a Java API that runs transformations programmatically. Kernow is a good choice when a developer must repeatedly run XSLT transformations using a visual interface.
A few web-based XSLT tools are also useful:
- XSLT Online Transformation
- W3C Online XSLT 2.0 Service
Developers who prefer browser-based plugins can check this list of useful XML plugins:
Mozilla Firefox
- XSL Results Add-on shows XSL transformation results (XSLT 1.0 or XSLT 2.0 through Saxon-B) of a document.
- XML Developer Toolbar adds use of standard XML tools from a browser toolbar.
Google Chrome
- XML Tree displays XML data in a user-friendly manner.
- XML Viewer is an XML viewer for Google Chrome.
XML open source tools
For users who cannot afford the cost of enterprise XML tools, open source tools are of great help. Active community contributions have made it possible to create very good XML open source tools. See Related topics for links to all listed tools.
The iXedit XML IDE includes several XML processing features:
- DTD validation
- DTD-based auto completion
- User templates
- XSLT processing
- Part-by-part editing
The Rinzo XML Editor is an Eclipse XML editor. Some of its features are:
- Namespace support
- Auto-completion of tags and attributes
- XML validation
This tool also provides features for working with Java elements:
- Auto-completing class names
- Opening a class definition
XPontus XML Editor is an open source Java-based tool that includes these features:
- Code formatting and completions
- XSL transformation
- DTD and schema generation
- XML validation
XML IDEs
XML IDE applications perform almost all the operations related to XML. You can choose from several IDEs with a variety of supported features. See Related topics for links to all listed tools.
XMLSpy is an XML IDE for authoring, editing, and debugging XML, XML schema, XSL/XSLT, XQuery, WSDL, and SOAP. Additional features include:
- A code generator
- A file converter
- A debugger
- A profiler
- Support for integrating into Visual Studio.NET and Eclipse IDE
- A database import wizard that enables you to import data from Microsoft® Access®
XML Marker is an XML editor that uses a synchronized table-tree and text display to show you a hierarchical and a tabular view of your XML data. This tool can load very large documents (hundreds of megabytes and even gigabytes in size). Other features include:
- A syntax-highlighting editor
- Table sorting
- Automatic indentation
- As-you-type syntax checking
Liquid XML Studio, a complete package of several XML tools bundled together, provides these tools:
- XML schema editor
- XML data-binding code generator
- WSDL editor
- XML editor
- Microsoft Visual Studio Integration
- Web service test client
- XPath expression builder
- HTML documentation generation
- XSLT editor and debugger
- Large file editor
- XML Diff - Compare XML files
Figure 1 shows a preview of the Liquid XML editor with a set of panels to manipulate the XML content.
Figure 1. A preview of Liquid XML Studio
<oXygen/> XML Editor is a full-featured XML IDE with support for an array of XML-related operations. Expert XML users can harness the benefits from the functionalities offered by this tool. A few of its features are:
- Intelligent XML editing
- XML validation
- XSL/XSLT support
- XQuery support
- XPath support
- Single-source XML publishing
- Support for Microsoft Office documents
Figure 2 shows a preview of the <oXygen/> XML Editor showing the source and a tree view of an XML document.
Figure 2. A preview of the <oXygen/> editor
Stylus Studio offers these features:
- XSLT and XQuery profilers
- Support for EDI
- Enterprise web service tools
- XML pipeline
- XML schema-awareness in XSLT 2.0 and XQuery 1.0
- XML publishing tools
XML Notepad from Microsoft helps developers in creating XML documents. It is a free tool and includes the XMLDiff tool that you can use to compare two XML files. The interface is simple and user friendly. This tool works on top of the .Net platform. The features of this tool are:
- Tree view synchronized with node text view
- Namespace support provided while copying and moving text
- Incremental search in both tree and text views
- Drag-and-drop support while making changes
- Unlimited undo and redo for edit operations
- Searching support with support for regex and XPath
- Loads documents up to 3MB quickly
- Instant XML schema validation
- Intellisense based on expected elements and attributes and enumerated simple type values
- Support for custom editors for date, dateTime, time datatypes, and other types such as color
- Built-in HTML viewer
- Support for XInclude
Figure 3 shows a preview of XML Notepad with a tree view of an XML file and its error panel.
Figure 3. A preview of XML Notepad
XML Copy Editor is a quick, validating XML editor. The tab feature allows you to edit multiple files at the same time. Other features include:
- DTD/XML Schema/RELAX NG validation
- XSLT and XPath support
- Pretty-printing and syntax highlighting
- Folding and tag completion
- Lossless import and export of Microsoft Word documents
- Support for XHTML, XSL, DocBook, and Text Encoding Initiative (TEI)
firstobject XML Editor is a free XML editor. The XML tree displayed from the XML document content can be edited directly facilitating easy traversal. Large files can be loaded into the tool for manipulation easily. Its features are:
- Fast, portable, and built on CMarkup
- No requirement for Java technology or MSXML
- Word wrap
- MSXML-based DTD validation
- Go To Line
- Show XPath
- Tabbed file editing
- C++ code generation
XRay XML Editor is a free XML IDE. This tool validates your XML document as you type. It has built-in support for W3C standards. It also has an HTML viewer to preview web pages built with XML. You can create three types of schemas: XSD, DTD, and XML-Data Reduced (XDR). Other features of this tool include:
- Real-time XSLT processing
- Real-time schema validation
- Integrated online tutorials about XML
XMLSpear is a free Java-based XML editor available for many platforms. It has advanced features such as interactive schema resolving, extensive XPath panel, and more. XML is displayed in three different formats, including tree table, element view, and source view. XMLSpear is available as Java web start software or as a stand-alone application. Additional features are:
- Support for XPath and XSLT
- Ability to generate complete XML documents from schema
- Multiple format of encoding support
- Integrated text and HTML plugin
- Real-time validation against schema or DTD as you type
- Schema generation from XML instances
- Tree editor for manipulating nodes
XMLmind, a multi-featured XML editor based on Java technology, is available for multiple platforms. It is better suited for experienced professionals than novice users. It presents an innovative way to edit XML documents and requires Java platform support. The features in XMLmind are:
- Conversion of XML documents into HTML help files, PDF, Eclipse help files, and many other formats
- Inclusion of a DITA converter
- Support for DocBook, JavaDoc, and XHTML and built-in templates for them
- Support for MathML document creation
- Editable commands
- Integrated XML parser and XSLT engine
ElfData XML Editor is a tool for Mac OS users. This XML IDE offers Unicode support and can check your XML document for well-formedness with and without a DTD. Tree mode and source mode are the two view modes available. Drag-and-drop support enables you to drag and drop XML elements. Searching is facilitated by two modes: source-find and tree-find mode. Other features in this tool include:
- XML 1.0 compliant
- Mac-like user interface
- Detailed error messages with assistance in debugging it
- "Send to Browser" option that enables you to preview your document in a browser
- Option to save pages as XHTML with DTD
XMetaL looks like a word processor. Like most of the XML IDEs, it can validate XML documents and supports schemas, DTDs, and XInclude. Other features are:
- Spell checking and auto correction
- Support for web help output
- Ability to convert XML documents into other formats like PDF, HTML, and many more
- XMetal connector integrates with content management systems (CMS) and source control systems such as SVN
- Unicode support creates XML documents in many languages
- DITA support with features such as a visualization, topic-oriented user interface for authoring of DITA content
XML Compare tools
Developers, editors, and writers often need to compare two versions of an XML document to track the changes. Though many text comparing tools are available, an exclusive XML comparing tool is efficient for many operations as it is XML aware. See Related topics for links to all listed tools.
The <oXygen/> XML Diff & Merge utility can compare files, directories, and ZIP-based archives. When you load the source and target documents into this tool, the differences are shown by coloring, and you can edit and move changes in both source and target files. It has many built-in comparing algorithms and has the ability to automatically choose algorithms based on the document content and size. It can do both word-level and character-level comparison. When you compare directories and archives you can choose to base it on the following parameters:
- Timestamp
- Content
- Binary comparison
Liquid XMLDiff has many XML-specific options such as removing whitespace, comments, and processor directives. This tool is advanced enough to predict whether attributes and elements are new, deleted, or have moved. This tool is available in the designer and developer edition of Liquid XML Studio.
ExamXML is a powerful tool to visually compare and merge the differences between XML documents. The input XML for comparison can be either from a file or from a database. ExamXML can also compare and save part of an XML document; you also can import to and export from Microsoft Excel® documents. ExamXML is available for several versions of Microsoft Windows®. The other features of this tool include:
- Validation of XML against DTD/XML schema
- Normalization of dates and numbers
- Drag-and-drop support
- XML documents displayed in tree view
DeltaXML can enable you to search, compare, merge, and synchronize changes to XML documents. It has Java API support, which facilitates the programmatic comparison of XML documents. It also has the ability to handle large files. The tool can output a delta file with the result of comparison. You can display this delta file directly or use XSL; you can process the delta file with other XML tools. The DeltaXML Sync tool can compare three XML documents and render the differences. In addition to the XML comparison function, it has some format-specific tools:
- DeltaXML DITA Compare
- DeltaXML DocBook Compare
- DeltaXML ODT Compare
- DeltaXML ODT Merge
XQuery tools
For advanced XML users, XQuery can be very helpful in querying and extracting content out of large XML documents. XQuery specific tools help you harness the power of XQuery and enable you to use high level features like mapping, debugging, and profiling. Some of the useful features provided by them include validation, auto complete, and previewing. See Related topics for links to all listed tools.
XMLSpy XQuery Editor provides syntax-highlighting and context-sensitive menus for XQuery. Its auto-code complete features enable you to create XQuery documents easily. It also has support for developing XQuery against XML-enabled databases. Other features include:
- Error isolation
- Simplified debugging
- Enhanced code performance
- Advanced text view
Stylus Studio XQuery Editor has an integrated XQuery Editor with a wide range of features that include intelligent code sensing, code completion, element constructors, functions, path expressions, and more. It is based on open XQuery architecture with support for the Saxon XQuery processor. The XQuery source tree window supports the drag-and-drop feature along with useful symbols and icons about the source file. Additional features are:
- Creation of XQuery scenarios
- XQuery preview
- Mapping of XQuery results preview to XQuery expressions
XQuery development tools for Eclipse assist creating, debugging, and executing XQuery in Eclipse. The tools also provide:
- Support for XQuery updates and scripting extensions
- Code completion and code templates
- Semantic checking and quick fixes
- Validation performed as you type
XPath tools
XPath specific tools are useful in visualizing your XPath evaluation results and can help you construct and validate XPath expressions. A couple of useful options provided by these tools include debugging XPath, auto completion, and searching databases using XPath. See Related topics for links to all listed tools.
SketchPath is an XPath editor and XML analysis and testing tool. It provides an IDE for developing and testing XPath expressions against XML documents. It uses the .NET Framework for XPath 1.0 evaluation and Saxon.NET for XPath 2.0. The other features include:
- Use of XPath variables within expressions
- XPath function assistant
- Built-in step-tracer and debugger
- Syntax coloring for expressions
XPath Visualizer is a free Microsoft Windows tool that runs your XPath queries on XML documents and visualizes the results. The input file can be from a file system or a URL, or you can paste into the tool as text. In this tool, you type the whole XPath query. The other features of this tool are:
- Automatic detection and display of the XML namespaces
- XPath query validation
- Automatic addition of the default XML namespace into query expressions and the option to remove XML namespace from any document
Web-based XPath tools are also available, including:
- XPath Query Expression Tool (XMLME.com)
- Simple online XPath tester
- XSLT Tryit Editor (W3Schools.com)
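For quick experiments without any of these tools, the XPath subset built into Python's standard library goes a long way (the sample document is invented):

```python
import xml.etree.ElementTree as ET

doc = ET.fromstring("""
<library>
  <book year="2003"><title>XML in a Nutshell</title></book>
  <book year="2011"><title>XQuery</title></book>
</library>
""")

titles = [t.text for t in doc.findall("./book/title")]                  # child axis
recent = [b.get("year") for b in doc.findall("./book[@year='2011']")]   # attribute predicate
print(titles, recent)
```

ElementTree supports only a subset of XPath 1.0; full XPath needs a library such as lxml or an external processor.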
Conclusion
Many available tools support XML-related technologies. As an XML user, you must analyze the requirements and choose the appropriate tool. For example, if you require many sophisticated operations, then you might select an XML IDE to have more functionalities such as editing, validation, and others. For a very specific task, such as comparing XML files, then you might choose an exclusive comparing tool.
Downloadable resources
Related topics
- Find the XML sitemap creator tools:
- Find the XML sitemap validators:
- Find the RSS feed generators:
- Find the XML schema generators:
- Find the XML validators:
- Find the XML editors:
- Find the XML tools:
- Kernow
- XSLT Online Transformation
- W3C Online XSLT 2.0 Service
- Mozilla Firefox: XSL Results Add-on
- Mozilla Firefox: XML Developer Toolbar Add-on
- Google Chrome: XML Tree
- Google Chrome: XML Viewer
- Find the XML open source tools:
- Find the XML IDEs:
- Find the XML Compare tools:
- Find the XQuery tools:
- Find the XPath tools:
- Comparison of XML editors (Wikipedia): Check out a list that compares the licensing, supported platforms, and features of various XML editors.
- Sitemap generators: Explore a list of links to tools and code snippets that generate or maintain sitemap files.
- XML area on developerWorks: Find the resources you need to advance your skills in the XML arena. See the XML technical library for a wide range of technical articles and tips, tutorials, standards, and IBM Redbooks
- IBM certification: Find out how you can become an IBM-Certified Developer.
- IBM product evaluation versions: Get your hands on application development tools and middleware products. | https://www.ibm.com/developerworks/library/x-xmltools/ | CC-MAIN-2017-34 | refinedweb | 3,396 | 51.38 |
Tried it. Doesn't change a thing. Means: I get about half the number of
warning messages, but that just corresponds to half the number of packets.
What helps a lot, but not to 100% (get bad keypresses anyway), is totally
deactivating the ACPI. Killing all processes that access /proc/acpi seems
again to help a bit. And the number of warnings seemingly increases with the
laptop temperature... In a really cold room I get nearly no warnings at all.
Jitter? Hardware that is simply broken?

Anyway --- with Dmitry's patches I get hardly ever little bad events, just
warnings --- and --- well... I can live with them,

Gunter.

On Today, Dmitry Torokhov wrote:
> From: Dmitry Torokhov <dtor_core@ameritech.net>
> Date: Sat, 10 Jan 2004 03:45:13 -0500
> To: Gunter Königsmann <gunter.koenigsmann@gmx.de>,
>     Gunter Königsmann <gunter@peterpall.de>
> Cc: linux-kernel@vger.kernel.org, Vojtech Pavlik <vojtech@suse.cz>,
>     Andrew Morton <akpm@osdl.org>
> Subject: [PATCH 1/2] Synaptics rate switching
>
> ===================================================================
>
> ChangeSet@1.1512, 2004-01-10 02:42:42-05:00, dtor_core@ameritech.net
>   Input: Allow switching between high and low reporting rate for Synaptics
>   touchpads in native mode. Synaptics support 2 report rates - 40
>   and 80 packets/sec; report rate must be set using Synaptics mode
>   set command. Rate is controlled by psmouse.rate parameter, values
>   greater or equal 80 will set 'high' rate.
>   (psmouse.rate defaults to 100)
>
>   Using low report rate should help slower systems or systems
>   spending too much time in SCI (ACPI).
>
>  psmouse.h   |    1 +
>  synaptics.c |    4 +++-
>  2 files changed, 4 insertions(+), 1 deletion(-)
>
> ===================================================================
>
> diff -Nru a/drivers/input/mouse/psmouse.h b/drivers/input/mouse/psmouse.h
> --- a/drivers/input/mouse/psmouse.h	Sat Jan 10 03:22:26 2004
> +++ b/drivers/input/mouse/psmouse.h	Sat Jan 10 03:22:26 2004
> @@ -67,6 +67,7 @@
>  int psmouse_command(struct psmouse *psmouse, unsigned char *param, int command);
>
>  extern int psmouse_smartscroll;
> +extern unsigned int psmouse_rate;
>  extern unsigned int psmouse_resetafter;
>
>  #endif /* _PSMOUSE_H */
> diff -Nru a/drivers/input/mouse/synaptics.c b/drivers/input/mouse/synaptics.c
> --- a/drivers/input/mouse/synaptics.c	Sat Jan 10 03:22:26 2004
> +++ b/drivers/input/mouse/synaptics.c	Sat Jan 10 03:22:26 2004
> @@ -214,7 +214,9 @@
>  {
>  	struct synaptics_data *priv = psmouse->private;
>
> -	mode |= SYN_BIT_ABSOLUTE_MODE | SYN_BIT_HIGH_RATE;
> +	mode |= SYN_BIT_ABSOLUTE_MODE;
> +	if (psmouse_rate >= 80)
> +		mode |= SYN_BIT_HIGH_RATE;
>  	if (SYN_ID_MAJOR(priv->identity) >= 4)
>  		mode |= SYN_BIT_DISABLE_GESTURE;
>  	if (SYN_CAP_EXTENDED(priv->capabilities))

--
The best ways are the most straightforward ways. When you're sitting around
scamming these things out, all kinds of James Bondian ideas come forth, but
when it gets down to the reality of it, the simplest and most straightforward
way is usually the best, and the way that attracts the least attention.
Also, pouring gasoline on the water and lighting it like James Bond doesn't
work either.... They tried it during Prohibition.
		-- Thomas King Forcade, marijuana smuggler
Hi there. Sorry if my question is stupid, but I'm a novice to Unity (also sorry for my bad English). I'm now making my first scene. It is a forest. As I added trees by painting them, I made tree colliders by creating a prefab with the tree model, adding a capsule collider to it, and replacing the original tree object with my prefab in the terrain tree-painting menu. But what about rocks? If I paint them as terrain details (like flowers, grass, etc.), they will look exactly as I want, but will have no colliders. I tried to paint them as trees (creating a prefab and adding a collider to it), but they still had no collider and also looked similar to each other. Of course I can add a collider to every place where I have a rock, but I've got like 50-60 rocks in my forest. Any solutions?
Select all rocks in the scene by selecting them in your hierarchy view. Then in your inspector view click Add Component, select Physics, then Mesh Collider. There are many options in the mesh collider when you add it; see what suits your needs.
Red.
BoredMormon, as I wrote in the post, I tried that, but it's not working.
Redeemer86, they don't exist in the hierarchy, since I painted them with the terrain tools. Adding rocks one by one as objects will make them look similar.
Answer by bubzy · Oct 10, 2014 at 09:24 AM
I use something like this. It's not a painter, and I'm not entirely sure how the painter works, but this will randomly populate a set area with rocks. Drag the model (including its collider) into the "item" field in the inspector, and set the other properties to customise the item spawns.
using UnityEngine;
using System.Collections;
public class itemspawner : MonoBehaviour {
// Use this for initialization
public Transform item;
public Vector2 spawnAreaBottomLeft = new Vector2(0,0);
public Vector2 spawnArea = new Vector2(100,100);
public int spawnQuantity = 10;
public bool randomSize = false;
public bool randomRotation = false;
public bool randomRotationY = false;
public float offsetY = 0f;
//public bool dominant = false;
Vector2 bLeft; //quicker! :D
public float exclusionZone = 10f;
//int i = 0;
public string priorityTag; //when this item is spawned, delete it if it collides with the priorityTag item
public void spawnItems()
{
bLeft = spawnAreaBottomLeft;
for(int i = 0; i < spawnQuantity; i++)
{
Transform temp;
Vector3 tempPos = new Vector3(Random.Range(bLeft.x,bLeft.x+spawnArea.x),0,Random.Range(bLeft.y,bLeft.y+spawnArea.y));
temp = Instantiate(item,tempPos+new Vector3(0,Terrain.activeTerrain.SampleHeight(tempPos)+offsetY,0),Quaternion.identity) as Transform;
if(randomSize)
{
float _randomSize = (float)Random.Range(1,10)/10;
temp.transform.localScale = new Vector3(_randomSize,_randomSize,_randomSize);
}
if(randomRotationY)
{
//Quaternion tempRot = new Quaternion(0f,Random.Range(0f,360f),0f,0f);
temp.transform.RotateAround(new Vector3(0,1,0),Random.Range(0,360));
}
if(randomRotation)
{
temp.transform.localRotation = Random.rotation;
}
temp.parent = gameObject.transform;
}
}
void Start () {
}
// Update is called once per frame
void Update () {
}
}
hope this helps
Thanks, but I don't understand a few things. 1. Should I add this script to my terrain or to my rock prefab? 2. Will it spawn rocks immediately, or how do I make it start?
I'm not being nasty or preachy here, but you should really follow a few tutorials first.
you can attach this to an empty GameObject and add the following line in the void Start() function:

void Start()
{
    spawnItems();
}
the way I did it was to have a separate scene manager object that referenced and called all of the spawn functions remotely, this is why the function is public.
What system monitoring tools are available?
I am looking for GUI, CLI, or web-based system monitoring tools that include basic functions such as:
- CPU Usage
- Ram Usage
- Swap Usage
- Disk Usage ( Space / I/O )
- Heat Monitoring
I know there are many tools I can use, but I am looking for a single tool that has these basic functions.
Glances - An eye on your system
Glances is free software (licensed under the LGPL) for monitoring your GNU/Linux or BSD operating system from a text interface. Glances uses the libstatgrab library to retrieve information from your system and is developed in Python.
Installation
Open a terminal (Ctrl+Alt+T) and run the following commands:
From Ubuntu 16.04 and above you can just type sudo apt install glances, but version 2.3 has this bug. Otherwise:
Easy Script Installation Glances
curl -L | sudo /bin/bash
OR
wget -O- | sudo /bin/bash
Manual Installation
sudo apt-get install python-pip build-essential python-dev lm-sensors sudo pip install psutil logutils bottle batinfo zeroconf netifaces pymdstat influxdb elasticsearch potsdb statsd pystache docker-py pysnmp pika py-cpuinfo bernhard sudo pip install glances
Basic usage
To start Glances, simply type glances in a terminal.
Cpu , Ram , Swap Monitoring
Disk Monitoring
System Heat Monitoring
If you type glances --help you will find -e, which enables the sensors module (Linux-only):

glances -e
Configuration file
You can set your thresholds in the Glances configuration file. On GNU/Linux, the default configuration file is located at /etc/glances/glances.conf.
Additional Sources: PyPI, Github, Linuxaria
Update
Monitoring a juju container, just as an example of how things look. In terminal 1, Glances is running in server mode. In terminal 2, the juju container is running apt-get update. In terminal 3, glances -c 192.168.1.103 connects Glances to the container's IP.
Glances CPU Usage
Glances itself seems to require periodic spikes of CPU usage while active, as evidenced by the built-in system monitor usage graph. If the graph is accurate, then by using Glances one gives up about 1/4 of a CPU on a system. This may have an effect for those who are monitoring CPU loads on servers.
:) , Yes it is @B4NZ41
Is it worth installing python just to have better statistics, especially on tiny servers? (e.g. amazon micro istances with <750MB ram)
@Razor Well i dont think so , its just for deskstop ( to monitor one pc )
Awesome tool. There's a package in Debian 8.0 Jessie (and probably in Ubuntu already).
best tool I've seen....
I strongly recommend against the 'easy' installation method suggested here! Piping data from the Internet to a privileged BASH interpreter is very insecure. If someone misconfigured the DNS, or hacked bit.ly, you could be installing anything to your system and you might never know.
I **don't recommend** the "Easy Script Installation", install only using packages.
To uninstall just `sudo pip uninstall glances`.
indicator-SysMonitor
Indicator-SysMonitor does a little, but does it well. Once installed and run, it displays CPU and RAM usage on your top panel. Simple.
Conky
One of my personal favourites
Screenlets: you'll find a bunch of differently styled CPU and RAM monitors included in the screenlets-all package available in the Ubuntu Software Center.
Glances
To install:
sudo apt-get install python-pip build-essential python-dev
sudo pip install Glances
sudo pip install PySensors
VMSTAT
Displays information about CPU, memory, processes, etc. For more parameters, use this command:
iostat --help
MPSTAT
The mpstat command line utility will display average CPU usage per processor. To run it, use simply this command:
mpstat
For CPU usage per processor, use this command:
mpstat -P ALL
Saidar
Saidar also allows to monitor system device activities via the command line.
You can install is with this command:
sudo apt-get install saidar
To start monitoring, run this command:
saidar -c -d 1
Stats will be refreshed every second.
GKrellM
GKrellM is a customizable widget with various themes that displays on your desktop system device information (CPU, temperature, memory, network, etc.).
To install GKrellM, run this command:
sudo apt-get install gkrellm
Monitorix
Monitorix is another application with a web-based user interface for monitoring system devices.
Install it with these commands:
sudo add-apt-repository ppa:upubuntu-com/ppa
sudo apt-get update
sudo apt-get install monitorix
@Thuener It's better for you just to read and search before such nonsense comment and yes it's ppa::upubuntu-com/ppa... refer to this link and i think better for you to remove the downvote :)
I have been using GKrellM and really like it, especially the temperatures sensor display. I wish they were graphical, however it lets me know how my laptop is doing as it has a over heating problem.
Following are the tools for monitoring a Linux system:

- System commands like top, free -m, vmstat, iostat, iotop, sar, netstat, etc.
For the last few years I have used:
System Load Indicator
available from Software Centre
nice one : System Load Indicator
top
top is monitoring software, listing all the processes with their CPU/RAM usage, overall CPU/RAM usage, and more. It is also mostly installed by default.
htop
htop is like an extended version of top. It has all the features from above, but you can see child processes and customize the display of everything. It also has colors.
iotop
iotop is specifically for monitoring hard drive I/O. It lists all processes and shows their hard drive usage for read and write.
where is heat monitoring ? and in your answer you have already included 3 utilities ... check the question **i am looking for a single tool that has some basic function **
With the three tools I am just giving different options for the OP, but I am dissapointed to say that none of those have heat monitoring
at least you have tried to answer the question ... thank you
You can use Python's psutil module to monitor most things. It's easiest to use an external module installer instead of building from source.
Note: These examples are written in Python 2.7
sudo apt-get install pip
sudo pip install psutil
Now that we have the modules installed, we can start coding.
First, create a file called usage.py.
gedit ~/usage.py
Start by importing psutil:

import psutil
Then, create a function to monitor the percentage your CPU cores are running at.
def cpu_perc():
    cpu_perc = psutil.cpu_percent(interval=1, percpu=True)
    for i in range(len(cpu_perc)):
        print "CPU Core", str(i+1), ":", str(cpu_perc[i]), "%"
Let's break that down a bit, shall we?
The first line, cpu_perc = psutil.cpu_percent(interval=1, percpu=True), finds the percentage that the cores in your CPU are running at and assigns the result to a list called cpu_perc.
This loop right here

for i in range(len(cpu_perc)):
    print "CPU Core", str(i+1), ":", str(cpu_perc[i]), "%"
is a for loop that prints out the current percentage of each of your CPU cores.
Let's add the RAM usage.
Create a function called ram_perc.
def ram_perc():
    mem = psutil.virtual_memory()
    mem_perc = mem.percent
    print "RAM: ", mem_perc, "%"
psutil.virtual_memory() gives a data set containing different facts about the RAM in your computer.
Next, you can add some facts about your network.
def net():
    net = psutil.net_io_counters()
    mbytes_sent = float(net.bytes_sent) / 1048576
    mbytes_recv = float(net.bytes_recv) / 1048576
    print "MB sent: ", mbytes_sent
    print "MB received: ", mbytes_recv
Since psutil.net_io_counters() only gives us information about packets sent and received in bytes, some converting was necessary.
To get some information about swap space, add this function.
def swap_perc():
    swap = psutil.swap_memory()
    swap_perc = swap.percent
    print "Swap: ", swap_perc, "%"
To report usage for the disks passed as command-line arguments (this uses sys.argv, so import sys as well):

def disks():
    for disk in sys.argv[1:]:
        usage = psutil.disk_usage(disk)
        print disk
        print "Megabytes total: ", float(usage.total) / 1048576
        print "Megabytes used: ", float(usage.used) / 1048576
        print "Megabytes free: ", float(usage.free) / 1048576
        print "Percentage used: ", usage.percent
The original output of psutil.disk_usage is this,
>>> psutil.disk_usage('/')
sdiskusage(total=21378641920, used=4809781248, free=15482871808, percent=22.5)
but you can also just receive total, used, free, or percent.
The completed program: (the aforementioned functions were combined)
import psutil, os, sys

mem_perc = 0     #init var
swap_perc = 0    #init var
mbytes_sent = 0  #init var
mbytes_recv = 0  #init var
cpu_perc = 0     #init var
swap = 0         #init var
mem = 0          #init var
net = 0          #init var

def disp(degree):
    global cpu_perc
    global swap
    global swap_perc
    global mem
    global mem_perc
    global net
    global mbytes_sent
    global mbytes_recv
    cpu_perc = psutil.cpu_percent(interval=1, percpu=True)
    swap = psutil.swap_memory()
    swap_perc = swap.percent
    mem = psutil.virtual_memory()
    mem_perc = mem.percent
    net = psutil.net_io_counters()
    mbytes_sent = float(net.bytes_sent) / 1048576
    mbytes_recv = float(net.bytes_recv) / 1048576
    os.system('clear')  #clear the screen
    print "-"*30
    print "CPU"
    print "-"*30
    print "CPU Temperature: ", degree, "'C"
    for i in range(len(cpu_perc)):
        print "CPU Core", str(i+1), ":", str(cpu_perc[i]), "%"
    print "-"*30
    print "MEMORY"
    print "-"*30
    print "RAM: ", mem_perc, "%"
    print "Swap: ", swap_perc, "%"
    print "-"*30
    print "NETWORK"
    print "-"*30
    print "MB sent: ", mbytes_sent
    print "MB received: ", mbytes_recv
    print "-"*30
    print "DISKS"
    print "-"*30
    for disk in sys.argv[1:]:  #print usage for each disk given on the command line
        usage = psutil.disk_usage(disk)
        print disk
        print "Megabytes total: ", float(usage.total) / 1048576
        print "Megabytes used: ", float(usage.used) / 1048576
        print "Megabytes free: ", float(usage.free) / 1048576
        print "Percentage used: ", usage.percent

def main():
    print("Press Ctrl+C to exit")
    while True:
        temp = open("/sys/class/thermal/thermal_zone0/temp").read().strip().lstrip('temperature :').rstrip(' C')
        temp = float(temp) / 1000
        disp(temp)

main()
The line temp = open("/sys/class/thermal/thermal_zone0/temp").read().strip().lstrip('temperature :').rstrip(' C') might not work with your hardware configuration.
Run this program from the command line. Pass the disks you want to monitor as arguments from the command line.
$ python usage.py /
Press Ctrl+C to exit
------------------------------
CPU
------------------------------
CPU Temperature:  39.0 'C
CPU Core 1 : 4.8 %
CPU Core 2 : 1.0 %
CPU Core 3 : 0.0 %
CPU Core 4 : 4.9 %
------------------------------
MEMORY
------------------------------
RAM:  33.6 %
Swap:  6.4 %
------------------------------
NETWORK
------------------------------
MB sent:  2.93382358551
MB received:  17.2131490707
------------------------------
DISKS
------------------------------
/
Megabytes total:  13952.484375
Megabytes used:  8542.6640625
Megabytes free:  4678.5703125
Percentage used:  61.2
/media/calvin/Data
Megabytes total:  326810.996094
Megabytes used:  57536.953125
Megabytes free:  269274.042969
Percentage used:  17.6
Hope this helps! Comment if you have any questions.
While this link may answer the question, it is better to include the essential parts of the answer here and provide the link for reference. Link-only answers can become invalid if the linked page changes.
@Ron - Okay, I'll add an edit to my post and show the basic scripting parts of sysmon in a couple of days. Thanks for the advice!
@muru - Nevermind, now it is working. Thanks for the link!
@muru - But, to answer your question, I started the code block with three backticks followed by the language I wanted the syntax to be highlighted in, and ended with three backticks.
@calthecoder the three-backticks method is not supported by SE's Markdown (yet).
@muru - Okay. Is GitHub the only place that supports it? It is in their Documentation section.
Other places do too, but SE doesn't. See for what's supported.
The sysstat package has a tool called sar that does all you want. It can also gather historical data so you can see what happened some time ago.
The free command is the simplest and easiest-to-use command to check memory usage on Linux/Ubuntu.
free -m
Another way to check memory usage is to read the /proc/meminfo file.
cat /proc/meminfo
The vmstat command with the -s option.
vmstat -s
The top command is generally used to check memory and CPU usage per process.
top
The htop command also shows memory usage along with various other details.
htop
To find out hardware information about the installed RAM.
sudo dmidecode -t 17
I love htop! Simple and good enough.
I like to use conky, which can be configured any way you like:
You can google conky and find 787,000 hits. There is something for everyone.
At the top of the display notice "Lock screen: 4 Minutes Brightness: 2074". These are generated by "Indicator-Sysmonitor" which allows you to display on the systray / application indicator using a bash script.
For a tutorial on setting up "Indicator-Sysmonitor" see: Can BASH display in systray as application indicator?
import "github.com/pressly/chi"
Package chi is a small, idiomatic and composable router for building HTTP services.
chi requires Go 1.7 or newer.
Example:
package main

import (
    "net/http"

    "github.com/go-chi/chi"
    "github.com/go-chi/chi/middleware"
)

func main() {
    r := chi.NewRouter()
    r.Use(middleware.Logger)
    r.Use(middleware.Recoverer)

    r.Get("/", func(w http.ResponseWriter, r *http.Request) {
        w.Write([]byte("root."))
    })

    http.ListenAndServe(":3333", r)
}
See github.com/go-chi/chi/_examples/ for more in-depth examples.
URL patterns allow for easy matching of path components in HTTP requests. The matching components can then be accessed using chi.URLParam(). All patterns must begin with a slash.
A simple named placeholder {name} matches any sequence of characters up to the next / or the end of the URL. Trailing slashes on paths must be handled explicitly.
A placeholder with a name followed by a colon allows a regular expression match, for example {number:\\d+}. The regular expression syntax is Go's normal regexp RE2 syntax, except that regular expressions including { or } are not supported, and / will never be matched. An anonymous regexp pattern is allowed, using an empty string before the colon in the placeholder, such as {:\\d+}
The special placeholder of asterisk matches the rest of the requested URL. Any trailing characters in the pattern are ignored. This is the only placeholder which will match / characters.
Examples:
"/user/{name}" matches "/user/jsmith" but not "/user/jsmith/info" or "/user/jsmith/" "/user/{name}/info" matches "/user/jsmith/info" "/page/*" matches "/page/intro/latest" "/page/*/index" also matches "/page/intro/latest" "/date/{yyyy:\\d\\d\\d\\d}/{mm:\\d\\d}/{dd:\\d\\d}" matches "/date/2017/04/01"
chain.go chi.go context.go mux.go tree.go
var (
    // RouteCtxKey is the context.Context key to store the request context.
    RouteCtxKey = &contextKey{"RouteContext"}
)
RegisterMethod adds support for custom HTTP method handlers, available via Router#Method and Router#MethodFunc
ServerBaseContext wraps an http.Handler to set the request context to the `baseCtx`.
URLParam returns the url parameter from a http.Request object.
URLParamFromCtx returns the url parameter from a http.Request Context.
Walk walks any router tree that implements Routes interface.
type ChainHandler struct {
    Middlewares Middlewares
    Endpoint    http.Handler
    // contains filtered or unexported fields
}
ChainHandler is a http.Handler with support for handler composition and execution.
func (c *ChainHandler) ServeHTTP(w http.ResponseWriter, r *http.Request)
type Context struct {
    Routes Routes

    // Routing path/method override used during the route search.
    // See Mux#routeHTTP method.
    RoutePath   string
    RouteMethod string

    // Routing pattern stack throughout the lifecycle of the request,
    // across all connected routers. It is a record of all matching
    // patterns across a stack of sub-routers.
    RoutePatterns []string

    // URLParams are the stack of routeParams captured during the
    // routing lifecycle across a stack of sub-routers.
    URLParams RouteParams

    // contains filtered or unexported fields
}
Context is the default routing context set on the root node of a request context to track route patterns, URL parameters and an optional routing path.
NewRouteContext returns a new routing Context object.
RouteContext returns chi's routing Context object from a http.Request Context.
Reset a routing context to its initial state.
RoutePattern builds the routing pattern string for the particular request, at the particular point during routing. This means, the value will change throughout the execution of a request in a router. That is why its advised to only use this value after calling the next handler.
For example,
func Instrument(next http.Handler) http.Handler {
    return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
        next.ServeHTTP(w, r)
        routePattern := chi.RouteContext(r.Context()).RoutePattern()
        measure(w, r, routePattern)
    })
}
URLParam returns the corresponding URL parameter value from the request routing context.
Middlewares type is a slice of standard middleware handlers with methods to compose middleware chains and http.Handler's.
Chain returns a Middlewares type from a slice of middleware handlers.
Handler builds and returns a http.Handler from the chain of middlewares, with `h http.Handler` as the final handler.
func (mws Middlewares) HandlerFunc(h http.HandlerFunc) http.Handler
HandlerFunc builds and returns a http.Handler from the chain of middlewares, with `h http.Handler` as the final handler.
Mux is a simple HTTP route multiplexer that parses a request path, records any URL params, and executes an end handler. It implements the http.Handler interface and is friendly with the standard library.
Mux is designed to be fast, minimal and offer a powerful API for building modular and composable HTTP services with a large set of handlers. It's particularly useful for writing large REST API services that break a handler into many smaller parts composed of middlewares and end handlers.
NewMux returns a newly initialized Mux object that implements the Router interface.
NewRouter returns a new Mux object that implements the Router interface.
func (mx *Mux) Connect(pattern string, handlerFn http.HandlerFunc)
Connect adds the route `pattern` that matches a CONNECT http method to execute the `handlerFn` http.HandlerFunc.
func (mx *Mux) Delete(pattern string, handlerFn http.HandlerFunc)
Delete adds the route `pattern` that matches a DELETE http method to execute the `handlerFn` http.HandlerFunc.
func (mx *Mux) Get(pattern string, handlerFn http.HandlerFunc)
Get adds the route `pattern` that matches a GET http method to execute the `handlerFn` http.HandlerFunc.
Group creates a new inline-Mux with a fresh middleware stack. It's useful for a group of handlers along the same routing path that use an additional set of middlewares. See _examples/.
Handle adds the route `pattern` that matches any http method to execute the `handler` http.Handler.
func (mx *Mux) HandleFunc(pattern string, handlerFn http.HandlerFunc)
HandleFunc adds the route `pattern` that matches any http method to execute the `handlerFn` http.HandlerFunc.
func (mx *Mux) Head(pattern string, handlerFn http.HandlerFunc)
Head adds the route `pattern` that matches a HEAD http method to execute the `handlerFn` http.HandlerFunc.
Match searches the routing tree for a handler that matches the method/path. It's similar to routing a http request, but without executing the handler thereafter.
Note: the *Context state is updated during execution, so manage the state carefully or make a NewRouteContext().
Method adds the route `pattern` that matches `method` http method to execute the `handler` http.Handler.
func (mx *Mux) MethodFunc(method, pattern string, handlerFn http.HandlerFunc)
MethodFunc adds the route `pattern` that matches `method` http method to execute the `handlerFn` http.HandlerFunc.
func (mx *Mux) MethodNotAllowed(handlerFn http.HandlerFunc)
MethodNotAllowed sets a custom http.HandlerFunc for routing paths where the method is unresolved. The default handler returns a 405 with an empty body.
func (mx *Mux) MethodNotAllowedHandler() http.HandlerFunc
MethodNotAllowedHandler returns the default Mux 405 responder whenever a method cannot be resolved for a route.
func (mx *Mux) Middlewares() Middlewares
Middlewares returns a slice of middleware handler functions.
Mount attaches another http.Handler or chi Router as a subrouter along a routing path. It's very useful to split up a large API as many independent routers and compose them as a single service using Mount. See _examples/.
Note that Mount() simply sets a wildcard along the `pattern` that will continue routing at the `handler`, which in most cases is another chi.Router. As a result, if you define two Mount() routes on the exact same pattern the mount will panic.
func (mx *Mux) NotFound(handlerFn http.HandlerFunc)
NotFound sets a custom http.HandlerFunc for routing paths that could not be found. The default 404 handler is `http.NotFound`.
func (mx *Mux) NotFoundHandler() http.HandlerFunc
NotFoundHandler returns the default Mux 404 responder whenever a route cannot be found.
func (mx *Mux) Options(pattern string, handlerFn http.HandlerFunc)
Options adds the route `pattern` that matches a OPTIONS http method to execute the `handlerFn` http.HandlerFunc.
func (mx *Mux) Patch(pattern string, handlerFn http.HandlerFunc)
Patch adds the route `pattern` that matches a PATCH http method to execute the `handlerFn` http.HandlerFunc.
func (mx *Mux) Post(pattern string, handlerFn http.HandlerFunc)
Post adds the route `pattern` that matches a POST http method to execute the `handlerFn` http.HandlerFunc.
func (mx *Mux) Put(pattern string, handlerFn http.HandlerFunc)
Put adds the route `pattern` that matches a PUT http method to execute the `handlerFn` http.HandlerFunc.
Route creates a new Mux with a fresh middleware stack and mounts it along the `pattern` as a subrouter. Effectively, this is a short-hand call to Mount. See _examples/.
Routes returns a slice of routing information from the tree, useful for traversing available routes of a router.
ServeHTTP is the single method of the http.Handler interface that makes Mux interoperable with the standard library. It uses a sync.Pool to get and reuse routing contexts for each request.
func (mx *Mux) Trace(pattern string, handlerFn http.HandlerFunc)
Trace adds the route `pattern` that matches a TRACE http method to execute the `handlerFn` http.HandlerFunc.
Use appends a middleware handler to the Mux middleware stack.
The middleware stack for any Mux will execute before searching for a matching route to a specific handler, which provides opportunity to respond early, change the course of the request execution, or set request-scoped values for the next http.Handler.
With adds inline middlewares for an endpoint handler.
Route describes the details of a routing handler. Handlers map key is an HTTP method
RouteParams is a structure to track URL routing parameters efficiently.
func (s *RouteParams) Add(key, value string)
Add will append a URL parameter to the end of the route param
type Router interface {
    http.Handler
    Routes

    // Use appends one or more middlewares onto the Router stack.
    Use(middlewares ...func(http.Handler) http.Handler)

    // With adds inline middlewares for an endpoint handler.
    With(middlewares ...func(http.Handler) http.Handler) Router

    // Group adds a new inline-Router along the current routing
    // path, with a fresh middleware stack for the inline-Router.
    Group(fn func(r Router)) Router

    // Route mounts a sub-Router along a `pattern` string.
    Route(pattern string, fn func(r Router)) Router

    // Mount attaches another http.Handler along ./pattern/*
    Mount(pattern string, h http.Handler)

    // Handle and HandleFunc adds routes for `pattern` that matches
    // all HTTP methods.
    Handle(pattern string, h http.Handler)
    HandleFunc(pattern string, h http.HandlerFunc)

    // Method and MethodFunc adds routes for `pattern` that matches
    // the `method` HTTP method.
    Method(method, pattern string, h http.Handler)
    MethodFunc(method, pattern string, h http.HandlerFunc)

    // HTTP-method routing along `pattern`
    Connect(pattern string, h http.HandlerFunc)
    Delete(pattern string, h http.HandlerFunc)
    Get(pattern string, h http.HandlerFunc)
    Head(pattern string, h http.HandlerFunc)
    Options(pattern string, h http.HandlerFunc)
    Patch(pattern string, h http.HandlerFunc)
    Post(pattern string, h http.HandlerFunc)
    Put(pattern string, h http.HandlerFunc)
    Trace(pattern string, h http.HandlerFunc)

    // NotFound defines a handler to respond whenever a route could
    // not be found.
    NotFound(h http.HandlerFunc)

    // MethodNotAllowed defines a handler to respond whenever a method is
    // not allowed.
    MethodNotAllowed(h http.HandlerFunc)
}
Router consisting of the core routing methods used by chi's Mux, using only the standard net/http.
Routes interface adds two methods for router traversal, which is also used by the `docgen` subpackage to generation documentation for Routers.
type WalkFunc func(method string, route string, handler http.Handler, middlewares ...func(http.Handler) http.Handler) error
WalkFunc is the type of the function called for each method and route visited by Walk.
Package chi imports 10 packages and is imported by 145 packages. Updated 2019-08-10.
task.ts:
module com.anicehumble {
    export enum Status {
        PENDING = 0,
        IN_PROGRESS = 1,
        COMPLETED = 2
    }

    export interface Task {
        uuid: com.anicehumble.uuid;
        ownedBy: string;
        title: string;
        status: Status;
    }
}
Then reading the value of the enum would result in a runtime error, 'com is not defined':
console.log(com.anicehumble.Status.IN_PROGRESS);
If we try to move the enum Status and make it node-compatible by putting it in a separate file without the module namespace and exporting the enum, task.ts will be interpreted as an external module instead. As such, all dotted names will be interpreted as external modules too, which results in compile-time errors if they are not imported. For example, TypeScript will complain that com.anicehumble has no exported member uuid even though there is one defined in an internal module, because all names under com.anicehumble will now be interpreted as external modules.
When a module becomes an external one like the code below, all members (e.g., uuid) under the module com.anicehumble must be moved to task.ts and can no longer be defined in an internal module in a separate file. Alternatively, if it will not be inlined in task.ts, uuid should be implemented as an external module and imported, like the importing of Enums from the-enums.ts below:
the-enums.ts:
export enum Status {
    PENDING = 0,
    IN_PROGRESS = 1,
    COMPLETED = 2
}
task.ts:
import * as Enums from './the-enums';

module com.anicehumble {
    export interface Task {
        uuid: com.anicehumble.uuid; // TypeScript will complain that com.anicehumble 'has no exported member uuid' even though it is defined in an internal module in a separate file.
        ownedBy: string;
        title: string;
        status: Enums.Status;
    }
}
To maintain the internal-ness of a TypeScript module, we will just simulate the enum, and then for the enum-using node, it will just import the external module version of the enum.
So let's define the task's status enum and make it an external module version of an enum instead. You might be wondering why we need to use const instead of enum. Later, the advantage of using const over enum will be apparent.
task-status-enum.ts
export const PENDING = 0;
export const IN_PROGRESS = 1;
export const COMPLETED = 2;
task.ts:
module com.anicehumble {
    export type Status = 0 | 1 | 2;

    export interface Task {
        uuid: com.anicehumble.uuid;
        ownedBy: string;
        title: string;
        status: Status;
    }
}
The internal module version of the enum is export type Status = 0 | 1 | 2. We will not be using the numbers directly in Node, as they would look like magic numbers; we just add that type to constrain the values that can be assigned to a Task's status. The code below uses a magic number:
const t: com.anicehumble.Task = <any>{};
t.status = 2; // magic number
For enum-using Node code, this is how the enum will be used:
app.ts:
import * as TaskStatus from './task-status-enum';

const t: com.anicehumble.Task = <any>{};
t.status = TaskStatus.IN_PROGRESS;
console.log(TaskStatus.IN_PROGRESS); // no more undefined error
And this will not work, as the status value is constrained to the values 0, 1, and 2 only:
t.status = 42; // compile error: 42 is not assignable to type Status
This will not compile as type Status and number are not type-compatible:
const n: number = 42;
t.status = n; // compile error: number is not assignable to type Status
and neither does this, even though the value 2 of n is among the allowable values of Status:
const n: number = 2;
t.status = n; // compile error: number is not assignable to type Status
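If a value does arrive as a plain number (say, parsed from JSON), it has to be narrowed before assignment. A minimal sketch — the toStatus helper below is not from the original post, just an illustration of literal-type narrowing:

```typescript
type Status = 0 | 1 | 2;

function toStatus(n: number): Status {
    if (n === 0 || n === 1 || n === 2) {
        return n; // inside this branch, n is narrowed to the literal type 0 | 1 | 2
    }
    throw new Error("invalid status: " + n);
}

const s: Status = toStatus(2); // accepted at compile time, checked at runtime
console.log(s); // 2
```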
And now for the interesting part: we can make things more discoverable if we give the simulated enum constants the type com.anicehumble.Status:
task-status-enum.ts
export const PENDING: com.anicehumble.Status = 0;
export const IN_PROGRESS: com.anicehumble.Status = 1;
export const COMPLETED: com.anicehumble.Status = 2;
By adding the type to the constants, the external module version of the simulated enum is now tied to the internal module version of the enum (i.e., export type Status = 0 | 1 | 2).
With a real enum, the type annotation cannot be added; the following is invalid:
export enum Status {
    PENDING : com.anicehumble.Status = 0,
    IN_PROGRESS: com.anicehumble.Status = 1,
    COMPLETED: com.anicehumble.Status = 2
}
Update
Had I known about TypeScript's const enum from the start, this post would not have been written. Use const enum; it's better than what was suggested here.
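A brief sketch of what that looks like, reusing the Status names from this post:

```typescript
const enum Status {
    PENDING = 0,
    IN_PROGRESS = 1,
    COMPLETED = 2
}

interface Task {
    title: string;
    status: Status;
}

const t: Task = { title: "write post", status: Status.IN_PROGRESS };

// A const enum is erased at compile time: the emitted JavaScript contains
// the inlined literal 1 here, so there is no runtime enum object that
// could be undefined in Node.
console.log(t.status); // 1
```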
Happy Coding! | https://www.anicehumble.com/2017/01/typescript-enum-node-that-works.html | CC-MAIN-2019-47 | refinedweb | 734 | 57.98 |
Contents
- Introduction
- Using Path Expressions To Match And Select Items
- Axis Steps
- Axis Specifiers
- Node Tests
- Shorthand Form
- Name Tests
- Wildcards in Name Tests
- Using Predicates In Path Expressions
- Positional Predicates
- Boolean Predicates
- Constructing Elements
- Element Constructors are Expressions
- Constructing Atomic Values
- Running The Cookbook Examples
- Further Reading
- Why didn't my path expression match anything?
- What if my input namespace is different from my output namespace?
- Why doesn't my return clause work?
- Why didn't my expression get evaluated?
- My predicate is correct, so why doesn't it select the right stuff?
- Why doesn't my FLWOR behave as expected?
- Why are my elements created in the wrong order?
- Why can't I use true and false in my XQuery?
A Short Path to XQuery
XQuery is a language for querying XML data or non-XML data that can be modeled as XML. XQuery is specified by the W3C.
Introduction
Where Java and C++ are statement-based languages, the XQuery language is expression-based. The simplest XQuery expression is an XML element constructor:
<recipe/>
This <recipe/> element is an XQuery expression that forms a complete XQuery. In fact, this XQuery doesn't actually query anything. It just creates an empty <recipe/> element in the output. But constructing new elements in an XQuery is often necessary.
An XQuery expression can also be enclosed in curly braces and embedded in another XQuery expression. This XQuery has a document expression embedded in a node expression:
<html xmlns="" xml:
It creates a new <html> element in the output and sets its id attribute to be the id attribute from an <html> element in the other.html file.
Using Path Expressions To Match And Select Items
In C++ and Java, we write nested for loops and recursive functions to traverse XML trees in search of elements of interest. In XQuery, we write these iterative and recursive algorithms with path expressions.
A path expression looks somewhat like a typical file pathname for locating a file in a hierarchical file system. It is a sequence of one or more steps separated by slash '/' or double slash '//'. Although path expressions are used for traversing XML trees, not file systems, in QtXmlPatterns we can model a file system to look like an XML tree, so in QtXmlPatterns we can use XQuery to traverse a file system. See the file system example.
Think of a path expression as an algorithm for traversing an XML tree to find and collect items of interest. This algorithm is evaluated by evaluating each step moving from left to right through the sequence. A step is evaluated with a set of input items (nodes and atomic values), sometimes called the focus. The step is evaluated for each item in the focus. These evaluations produce a new set of items, called the result, which then becomes the focus that is passed to the next step. Evaluation of the final step produces the final result, which is the result of the XQuery. The items in the result set are presented in document order and without duplicates.
With QtXmlPatterns, a standard way to present the initial focus to a query is to call QXmlQuery::setFocus(). Another common way is to let the XQuery itself create the initial focus by using the first step of the path expression to call the XQuery doc() function. The doc() function loads an XML document and returns the document node. Note that the document node is not the same as the document element. The document node is a node constructed in memory, when the document is loaded. It represents the entire XML document, not the document element. The document element is the single, top-level XML element in the file. The doc() function returns the document node, which becomes the singleton node in the initial focus set. The document node will have one child node, and that child node will represent the document element. Consider the following XQuery:
doc('cookbook.xml')//recipe
The doc() function loads the cookbook.xml file and returns the document node. The document node then becomes the focus for the next step //recipe. Here the double slash means select all <recipe> elements found below the document node, regardless of where they appear in the document tree. The query selects all <recipe> elements in the cookbook. See Running The Cookbook Examples for instructions on how to run this query (and most of the ones that follow) from the command line.
Conceptually, evaluation of the steps of a path expression is similar to iterating through the same number of nested for loops. Consider the following XQuery, which builds on the previous one:
doc('cookbook.xml')//recipe/title
This XQuery is a single path expression composed of three steps. The first step creates the initial focus by calling the doc() function. We can paraphrase what the query engine does at each step:
- for each node in the initial focus (the document node)...
- for each descendant node that is a <recipe> element...
- collect the child nodes that are <title> elements.
Again the double slash means select all the <recipe> elements in the document. The single slash before the <title> element means select only those <title> elements that are child elements of a <recipe> element (i.e. not grandchildren, etc). The XQuery evaluates to a final result set containing the <title> element of each <recipe> element in the cookbook.
Axis Steps
The most common kind of path step is called an axis step, which tells the query engine which way to navigate from the context node, and which test to perform when it encounters nodes along the way. An axis step has two parts, an axis specifier, and a node test. Conceptually, evaluation of an axis step proceeds as follows: For each node in the focus set, the query engine navigates out from the node along the specified axis and applies the node test to each node it encounters. The nodes selected by the node test are collected in the result set, which becomes the focus set for the next step.
In the example XQuery above, the second and third steps are both axis steps. Both apply the element(name) node test to nodes encountered while traversing along some axis. But in this example, the two axis steps are written in a shorthand form, where the axis specifier and the node test are not written explicitly but are implied. XQueries are normally written in this shorthand form, but they can also be written in the longhand form. If we rewrite the XQuery in the longhand form, it looks like this:
doc('cookbook.xml')/descendant-or-self::element(recipe)/child::element(title)
The two axis steps have been expanded. The first step (//recipe) has been rewritten as /descendant-or-self::element(recipe), where descendant-or-self:: is the axis specifier and element(recipe) is the node test. The second step (title) has been rewritten as /child::element(title), where child:: is the axis specifier and element(title) is the node test. The output of the expanded XQuery will be exactly the same as the output of the shorthand form.
To create an axis step, concatenate an axis specifier and a node test. The following sections list the axis specifiers and node tests that are available.
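For example, concatenating the parent:: axis specifier with the element(recipe) node test selects the <recipe> parent of each <title> element. This sketch assumes the cookbook.xml file as first introduced, before a namespace declaration is added to it later in this document:

```xquery
doc('cookbook.xml')//element(title)/parent::element(recipe)
```

Read from left to right: find every <title> element in the document, then navigate up the parent axis and keep the nodes that pass the element(recipe) node test.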
Axis Specifiers
An axis specifier defines the direction you want the query engine to take, when it navigates away from the context node. QtXmlPatterns supports the following axes.
Node Tests
A node test is a conditional expression that must be true for a node if the node is to be selected by the axis step. The conditional expression can test just the kind of node, or it can test the kind of node and the name of the node. The XQuery specification for node tests also defines a third condition, the node's Schema Type, but schema type tests are not supported in QtXmlPatterns.
QtXmlPatterns supports the following node tests. The tests that have a name parameter test the node's name in addition to its kind and are often called the Name Tests.
Shorthand Form
Writing axis steps using the longhand form with axis specifiers and node tests is semantically clear but syntactically verbose. The shorthand form is easy to learn and, once you learn it, just as easy to read. In the shorthand form, the axis specifier and node test are implied by the syntax. XQueries are normally written in the shorthand form. Here are some frequently used shorthand forms:
- name is shorthand for child::element(name)
- @name is shorthand for attribute::attribute(name)
- .. is shorthand for parent::node()
- . is shorthand for self::node()
- // is shorthand for /descendant-or-self::node()/
The XQuery language specification has a more detailed section on the shorthand form, which it calls the abbreviated syntax. More examples of path expressions written in the shorthand form are found there. There is also a section listing examples of path expressions written in the longhand form.
Name Tests
The name tests are the Node Tests that have the name parameter. A name test must match the node name in addition to the node kind. We have already seen name tests used:
doc('cookbook.xml')//recipe/title
In this path expression, both recipe and title are name tests written in the shorthand form. XQuery resolves these names (QNames) to their expanded form using whatever namespace declarations it knows about. Resolving a name to its expanded form means replacing its namespace prefix, if one is present (there aren't any present in the example), with a namespace URI. The expanded name then consists of the namespace URI and the local name.
But the names in the example above don't have namespace prefixes, because we didn't include a namespace declaration in our cookbook.xml file. However, we will often use XQuery to query XML documents that use namespaces. Forgetting to declare the correct namespace(s) in an XQuery is a common cause of XQuery failures. Let's add a default namespace to cookbook.xml now. Change the document element in cookbook.xml from:
<cookbook>
to...
<cookbook xmlns="http://cookbook/namespace">
This is called a default namespace declaration because it doesn't include a namespace prefix. By including this default namespace declaration in the document element, we mean that all unprefixed element names in the document, including the document element itself (cookbook), are automatically in the default namespace. Note that unprefixed attribute names are not affected by the default namespace declaration. They are always considered to be in no namespace. Note also that the URL we choose as our namespace URI need not refer to an actual location, and doesn't refer to one in this case. The namespace URI for elements and attributes prefixed with xml:, for example, is http://www.w3.org/XML/1998/namespace.
Now when we try to run the previous XQuery example, no output is produced! The path expression no longer matches anything in the cookbook file because our XQuery doesn't yet know about the namespace declaration we added to the cookbook document. There are two ways we can declare the namespace in the XQuery. We can give it a namespace prefix (e.g. c for cookbook) and prefix each name test with the namespace prefix:
declare namespace c = "http://cookbook/namespace"; doc('cookbook.xml')//c:recipe/c:title
Or we can declare the namespace to be the default element namespace, and then we can still run the original XQuery:
declare default element namespace "http://cookbook/namespace"; doc('cookbook.xml')//recipe/title
Both methods will work and produce the same output, all the <title> elements:
<title xmlns="http://cookbook/namespace">Quick and Easy Mushroom Soup</title> <title xmlns="http://cookbook/namespace">Cheese on Toast</title> <title xmlns="http://cookbook/namespace">Hard-Boiled Eggs</title>
But note how the output is slightly different from the output we saw before we added the default namespace declaration to the cookbook file. QtXmlPatterns automatically includes the correct namespace attribute in each <title> element in the output. When QtXmlPatterns loads a document and expands a QName, it creates an instance of QXmlName, which retains the namespace prefix along with the namespace URI and the local name. See QXmlName for further details.
One thing to keep in mind from this namespace discussion, whether you run XQueries in a Qt program using QtXmlPatterns, or you run them from the command line using xmlpatterns, is that if you don't get the output you expect, it might be because the data you are querying uses namespaces, but you didn't declare those namespaces in your XQuery.
Wildcards in Name Tests
The wildcard '*' can be used in a name test. To find all the attributes in the cookbook but select only the ones in the xml namespace, use the xml: namespace prefix but replace the local name (the attribute name) with the wildcard:
doc('cookbook.xml')//@xml:*
Oops! If you save this XQuery in file.xq and run it through xmlpatterns, it doesn't work. You get an error message instead, something like this: Error SENR0001, at line 1, column 1: Attribute xml:id can't be serialized because it appears at the top level. The XQuery actually ran correctly. It selected a bunch of xml:id attributes and put them in the result set. But then xmlpatterns sent the result set to a serializer, which tried to output it as well-formed XML. Since the result set contains only attributes and attributes alone are not well-formed XML, the serializer reports a serialization error.
Fear not. XQuery can do more than just find and select elements and attributes. It can construct new ones on the fly as well, which is what we need to do here if we want xmlpatterns to let us see the attributes we selected. The example above and the ones below are revisited in the Constructing Elements section. You can jump ahead to see the modified examples now, and then come back, or you can press on from here.
To find all the name attributes in the cookbook and select them all regardless of their namespace, replace the namespace prefix with the wildcard and write name (the attribute name) as the local name:
doc('cookbook.xml')//@*:name
To find and select all the attributes of the document element in the cookbook, replace the entire name test with the wildcard:
declare default element namespace "http://cookbook/namespace"; doc('cookbook.xml')/cookbook/@*
Using Predicates In Path Expressions
Predicates can be used to further filter the nodes selected by a path expression. A predicate is an expression in square brackets ('[' and ']') that either returns a boolean value or a number. A predicate can appear at the end of any path step in a path expression. The predicate is applied to each node in the focus set. If a node passes the filter, the node is included in the result set. The query below selects the recipe element that has the <title> element "Hard-Boiled Eggs".
declare default element namespace "http://cookbook/namespace"; doc("cookbook.xml")/cookbook/recipe[title = "Hard-Boiled Eggs"]
The dot expression ('.') can be used in predicates and path expressions to refer to the current context node. The following query uses the dot expression to refer to the current <method> element. The query selects the empty <method> elements from the cookbook.
declare default element namespace "http://cookbook/namespace"; doc('cookbook.xml')//method[string-length(.) = 0]
Note that passing the dot expression to the string-length() function is optional. When string-length() is called with no parameter, the context node is assumed:
declare default element namespace "http://cookbook/namespace"; doc('cookbook.xml')//method[string-length() = 0]
Actually, selecting an empty <method> element might not be very useful by itself. It doesn't tell you which recipe has the empty method:
<method xmlns="http://cookbook/namespace"/>
What you probably want to see instead are the <recipe> elements that have empty <method> elements:
declare default element namespace "http://cookbook/namespace"; doc('cookbook.xml')//recipe[string-length(method) = 0]
The predicate uses the string-length() function to test the length of each <method> element in each <recipe> element found by the node test. If a <method> contains no text, the predicate evaluates to true and the <recipe> element is selected. If the method contains some text, the predicate evaluates to false, and the <recipe> element is discarded. The output is the entire recipe that has no instructions for preparation:
<recipe xmlns="http://cookbook/namespace" xml:id="..."> <title>Hard-Boiled Eggs</title> <ingredient name="Eggs" quantity="3" unit="eggs"/> <time quantity="3" unit="minutes"/> <method/> </recipe>
The astute reader will have noticed that this use of string-length() to find an empty element is unreliable. It works in this case, because the method element is written as <method/>, guaranteeing that its string length will be 0. It will still work if the method element is written as <method></method>, but it will fail if there is any whitespace between the opening and ending <method> tags. A more robust way to find the recipes with empty methods is presented in the section on Boolean Predicates.
There are many more functions and operators defined for XQuery and XPath. They are all documented in the specification.
Positional Predicates
Predicates are often used to filter items based on their position in a sequence. For path expressions processing items loaded from XML documents, the normal sequence is document order. This query returns the second <recipe> element in the cookbook.xml file:
declare default element namespace "http://cookbook/namespace"; doc('cookbook.xml')/cookbook/recipe[2]
The other frequently used positional function is last(), which returns the numeric position of the last item in the focus set. Stated another way, last() returns the size of the focus set. This query returns the last recipe in the cookbook:
declare default element namespace "http://cookbook/namespace"; doc('cookbook.xml')/cookbook/recipe[last()]
And this query returns the next to last <recipe>:
declare default element namespace "http://cookbook/namespace"; doc('cookbook.xml')/cookbook/recipe[last() - 1]
Boolean Predicates
The other kind of predicate evaluates to true or false. A boolean predicate takes the value of its expression and determines its effective boolean value according to the following rules:
- An expression that evaluates to a single node is true.
- An expression that evaluates to a string is false if the string is empty and true if the string is not empty.
- An expression that evaluates to a boolean value (i.e. type xs:boolean) is that value.
- If the expression evaluates to anything else, it's an error (e.g. type xs:date).
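These rules mean that a bare name test can serve as an existence check in a predicate: an expression that selects a node is true, and one that selects nothing (the empty sequence) is false. The sketch below uses the wildcard form of the name test so it works whether or not the cookbook declares a namespace; it selects only the recipes that contain a <time> element:

```xquery
doc('cookbook.xml')//*:recipe[*:time]
```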
We have already seen some boolean predicates in use. Earlier, we saw a not so robust way to find the recipes that have no instructions. [string-length(method) = 0] is a boolean predicate that would fail in the example if the empty method element was written with both opening and closing tags and there was whitespace between the tags. Here is a more robust way that uses a different boolean predicate.
declare default element namespace "http://cookbook/namespace"; doc('cookbook.xml')/cookbook/recipe[method[empty(step)]]
This one uses the empty() function to test whether the method contains any steps. If the method contains no steps, then empty(step) will return true, and hence the predicate will evaluate to true.
But even that version isn't foolproof. Suppose the method does contain steps, but all the steps themselves are empty. That's still a case of a recipe with no instructions that won't be detected. There is a better way:
declare default element namespace "http://cookbook/namespace"; doc('cookbook.xml')/cookbook/recipe[not(normalize-space(method))]
This version uses the not() and normalize-space() functions. normalize-space(method) returns the contents of the method element as a string with all the whitespace normalized, i.e., the string value of each <step> element will have its whitespace normalized, and then all the normalized step values will be concatenated. If that string is empty, then not() returns true and the predicate is true.
We can also use the position() function in a comparison to inspect positions with conditional logic. The position() function returns the position index of the current context item in the sequence of items:
declare default element namespace "http://cookbook/namespace"; doc('cookbook.xml')/cookbook/recipe[position() = 2]
Note that the first position in the sequence is position 1, not 0. We can also select all the recipes after the first one:
declare default element namespace "http://cookbook/namespace"; doc('cookbook.xml')/cookbook/recipe[position() > 1]
Constructing Elements
In the section about using wildcards in name tests, we saw three simple example XQueries, each of which selected a different list of XML attributes from the cookbook. We couldn't use xmlpatterns to run these queries, however, because xmlpatterns sends the XQuery results to a serializer, which expects to serialize the results as well-formed XML. Since a list of XML attributes by itself is not well-formed XML, the serializer reported an error for each XQuery.
Since an attribute must appear in an element, for each attribute in the result set, we must create an XML element. We can do that using a for clause with a bound variable, and a return clause with an element constructor:
for $i in doc("cookbook.xml")//@xml:* return <p>{$i}</p>
The for clause produces a sequence of attribute nodes from the result of the path expression. Each attribute node in the sequence is bound to the variable $i. The return clause then constructs a <p> element around the attribute node. Here is the output:
<p xml:id="..."/> <p xml:id="..."/> <p xml:id="..."/>
The output contains one <p> element for each xml:id attribute in the cookbook. Note that XQuery puts each attribute in the right place in its <p> element, despite the fact that in the return clause, the $i variable is positioned as if it is meant to become <p> element content.
The other two examples from the wildcard section can be rewritten the same way. Here is the XQuery that selects all the name attributes, regardless of namespace:
for $i in doc("cookbook.xml")//@*:name return <p>{$i}</p>
And here is its output:
<p name="Fresh mushrooms"/> <p name="Garlic"/> <p name="Olive oil"/> <p name="Milk"/> <p name="Water"/> <p name="Cream"/> <p name="Vegetable soup cube"/> <p name="Ground black pepper"/> <p name="Dried parsley"/> <p name="Bread"/> <p name="Cheese"/> <p name="Eggs"/>
And here is the XQuery that selects all the attributes from the document element:
declare default element namespace "http://cookbook/namespace"; for $i in doc("cookbook.xml")/cookbook/@* return <p>{$i}</p>
And here is its output:
<p xmlns="http://cookbook/namespace" count="3"/>
Element Constructors are Expressions
Because node constructors are expressions, they can be used in XQueries wherever expressions are allowed.
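For example, an element constructor can appear as a branch of a conditional expression. In this sketch, the <warning> and <ok> element names are invented for illustration; the query returns one or the other constructed element depending on whether any recipe has an effectively empty <method>:

```xquery
if (exists(doc('cookbook.xml')//*:recipe[not(normalize-space(*:method))]))
then <warning>Some recipes have no instructions</warning>
else <ok/>
```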
Constructing Atomic Values
XQuery also has atomic values. An atomic value is a value in the value space of one of the built-in datatypes in the XML Schema language. These atomic types have built-in operators for doing arithmetic, comparisons, and for converting values to other atomic types. See the Built-in Datatype Hierarchy for the entire tree of built-in, primitive and derived atomic types. Note: Click on a data type in the tree for its detailed specification.
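As a small sketch, atomic values can be created with the constructor functions named after the built-in types and combined with the built-in operators. Run through xmlpatterns, the two-item sequence below should evaluate to 42 true:

```xquery
xs:integer("40") + 2,
xs:date("2011-07-18") < xs:date("2011-07-19")
```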
To construct an atomic value as element content, enclose an expression in curly braces and embed it in the element constructor:
<e>{sum((1, 2, 3))}</e>
Sending this XQuery through xmlpatterns produces:
<e>6</e>
To compute the value of an attribute, enclose the expression in curly braces and embed it in the attribute value:
declare variable $insertion := "example"; <p class="important {$insertion} obsolete"/>
Sending this XQuery through xmlpatterns produces:
<p class="important example obsolete"/>
Running The Cookbook Examples
Most of the XQuery examples in this document refer to the cookbook.xml example file from the Recipes Example. Copy the cookbook.xml to your current directory, save one of the cookbook XQuery examples in a .xq file (e.g., file.xq), and run the XQuery using Qt's command line utility:
xmlpatterns file.xq
Further Reading
There is much more to the XQuery language than we have presented in this short introduction. We will be adding more here in later releases. In the meantime, playing with the xmlpatterns utility and making modifications to the XQuery examples provided here will be quite informative. An XQuery textbook will be a good investment.
You can also ask questions on the XQuery mailing lists.
FunctX has a collection of XQuery functions that can be both useful and educational.
This introduction contains many links to the specifications, which, of course, are the ultimate source of information about XQuery. They can be a bit difficult, though, so consider investing in a textbook:
- XQuery 1.0: An XML Query Language - the main source for syntax and semantics.
- XQuery 1.0 and XPath 2.0 Functions and Operators - the builtin functions and operators.
The answers to these frequently asked questions explain the causes of several common mistakes that most beginners make. Reading through the answers ahead of time might save you a lot of head scratching.
Why didn't my path expression match anything?
The most common cause of this bug is failure to declare one or more namespaces in your XQuery. Consider the following query for selecting all the examples in an XHTML document:
doc("index.html")/html/body/p[@class="example"]
It won't match anything because index.html is an XHTML file, and all XHTML files declare the default namespace "http://www.w3.org/1999/xhtml" in their top (<html>) element. But the query doesn't declare this namespace, so the path expression expands html to {}html and tries to match that expanded name. But the actual expanded name is {http://www.w3.org/1999/xhtml}html. One possible fix is to declare the correct namespace in the XQuery:
declare namespace x = "http://www.w3.org/1999/xhtml"; doc("index.html")/x:html/x:body/x:p[@class="example"]
Another common cause of this bug is to confuse the document node with the top element node. They are different. This query won't match anything:
doc("myPlainHTML.html")/body
The doc() function returns the document node, not the top element node (<html>). Don't forget to match the top element node in the path expression:
doc("myPlainHTML.html")/html/body
What if my input namespace is different from my output namespace?
Just remember to declare both namespaces in your XQuery and use them properly. Consider the following query, which is meant to generate XHTML output from XML input:
declare default element namespace "http://www.w3.org/1999/xhtml";
<html>
    <body>
    {
        for $i in doc("testResult.xml")/tests/test[@status = "failure"]
        order by $i/@name
        return <p>{$i/@name}</p>
    }
    </body>
</html>
We want the <html>, <body>, and <p> nodes we create in the output to be in the standard XHTML namespace, so we declare the default namespace to be http://www.w3.org/1999/xhtml. That's correct for the output, but that same default namespace will also be applied to the node names in the path expression we're trying to match in the input (/tests/test[@status = "failure"]), which is wrong, because the names in testResult.xml are perhaps in the empty namespace. So instead we declare the XHTML namespace with a prefix, use the prefix on the constructed elements, and leave the path expression unprefixed. This one will probably work better:
declare namespace x = "http://www.w3.org/1999/xhtml";
<x:html>
    <x:body>
    {
        for $i in doc("testResult.xml")/tests/test[@status = "failure"]
        order by $i/@name
        return <x:p>{$i/@name}</x:p>
    }
    </x:body>
</x:html>
Why doesn't my return clause work?
Recall that XQuery is an expression-based language, not statement-based. Because an XQuery is a lot of expressions, understanding XQuery expression precedence is very important. Consider the following query:
for $i in(reverse(1 to 10)), $d in xs:integer(doc("numbers.xml")/numbers/number) return $i + $d
It looks ok, but it isn't. It is supposed to be a FLWOR expression comprising a for clause and a return clause, but it isn't just that. It has a FLWOR expression, certainly (with the for and return clauses), but it also has an arithmetic expression (+ $d) dangling at the end because we didn't enclose the return expression in parentheses.
Using parentheses to establish precedence is more important in XQuery than in other languages, because XQuery is expression-based. In this case, without parentheses enclosing $i + $d, the return clause only returns $i. The + $d will have the result of the FLWOR expression as its left operand. And, since the scope of variable $d ends at the end of the return clause, a variable out of scope error will be reported. Correct these problems by using parentheses.
for $i in(reverse(1 to 10)), $d in xs:integer(doc("numbers.xml")/numbers/number) return ($i + $d)
Why didn't my expression get evaluated?
You probably misplaced some curly braces. When you want an expression evaluated inside an element constructor, enclose the expression in curly braces. Without the curly braces, the expression will be interpreted as text. Consider a sum() expression used in an <e> element: with the braces placed correctly, <e>{sum((1, 2, 3))}</e> evaluates to <e>6</e>, but with the braces missing, <e>sum((1, 2, 3))</e> evaluates to <e>sum((1, 2, 3))</e>, because the expression is treated as literal element content.
My predicate is correct, so why doesn't it select the right stuff?
Either you put your predicate in the wrong place in your path expression, or you forgot to add some parentheses. Consider this input file doc.txt:
<doc>
    <p><span>1</span><span>2</span></p>
    <p><span>3</span><span>4</span></p>
    <p><span>5</span><span>6</span></p>
    <p><span>7</span><span>8</span></p>
    <p><span>9</span><span>a</span></p>
    <p><span>b</span><span>c</span></p>
    <p><span>d</span><span>e</span></p>
    <p><span>f</span><span>0</span></p>
</doc>
Suppose you want the first <span> element of every <p> element. Apply a position filter ([1]) to the /span path step:
let $doc := doc('doc.txt') return $doc/doc/p/span[1]
Applying the [1] filter to the /span step returns the first <span> element of each <p> element:
<span>1</span> <span>3</span> <span>5</span> <span>7</span> <span>9</span> <span>b</span> <span>d</span> <span>f</span>
Note: You can write the same query this way:
for $a in doc('doc.txt')/doc/p/span[1] return $a
Or you can reduce it right down to this:
doc('doc.txt')/doc/p/span[1]
On the other hand, suppose you really want only one <span> element, the first one in the document (i.e., you only want the first <span> element in the first <p> element). Then you have to do more filtering. There are two ways you can do it. You can apply the [1] filter in the same place as above but enclose the path expression in parentheses:
let $doc := doc('doc.txt') return ($doc/doc/p/span)[1]
Or you can apply a second position filter ([1] again) to the /p path step:
let $doc := doc('doc.txt') return $doc/doc/p[1]/span[1]
Either way the query will return only the first <span> element in the document:
<span>1</span>
Why doesn't my FLWOR behave as expected?
The quick answer is you probably expected your XQuery FLWOR to behave just like a C++ for loop. But they aren't the same. Consider a simple example:
for $a in (8, -4, 2) let $b := ($a * -1, $a) order by $a return $b
This query evaluates to 4 -4 -2 2 -8 8. The for clause does set up a for loop style iteration, which does evaluate the rest of the FLWOR multiple times, one time for each value returned by the in expression. That much is similar to the C++ for loop.
But consider the return clause. In C++ if you hit a return statement, you break out of the for loop and return from the function with one value. Not so in XQuery. The return clause is the last clause of the FLWOR, and it means: Append the return value to the result list and then begin the next iteration of the FLWOR. When the for clause's in expression no longer returns a value, the entire result list is returned.
Next, consider the order by clause. It doesn't do any sorting on each iteration of the FLWOR. It just evaluates its expression on each iteration ($a in this case) to get an ordering value to map to the result item from each iteration. These ordering values are kept in a parallel list. The result list is sorted at the end using the parallel list of ordering values.
The last difference to note here is that the let clause does not set up an iteration through a sequence of values like the for clause does. The let clause isn't a sort of nested loop. It isn't a loop at all. It is just a variable binding. On each iteration, it binds the entire sequence of values on the right to the variable on the left. In the example above, it binds (-8 8) to $b on the first iteration, (4 -4) on the second iteration, and (-2 2) on the third iteration. So the following query doesn't iterate through anything, and doesn't do any ordering:
let $i := (2, 3, 1) order by $i[1] return $i
It binds the entire sequence (2, 3, 1) to $i one time only; the order by clause only has one thing to order and hence does nothing, and the query evaluates to 2 3 1, the sequence assigned to $i.
Note: We didn't include a where clause in the example. The where clause is for filtering results.
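As a minimal sketch of the where clause, the following query iterates from 1 to 10 and keeps only the even values; it evaluates to 2 4 6 8 10:

```xquery
for $i in (1 to 10)
where $i mod 2 = 0
return $i
```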
Why are my elements created in the wrong order?
The short answer is your elements are not created in the wrong order, because when appearing as operands to a path expression, there is no correct order. Consider the following query, which again uses the input file doc.txt:
doc('doc.txt')//p/<p>{span/node()}</p>
The query finds all the <p> elements in the file. For each <p> element, it builds a <p> element in the output containing the concatenated contents of all the <p> element's child <span> elements. Running the query through xmlpatterns might produce the following output, which is not sorted in the expected order.
<p>78</p> <p>9a</p> <p>12</p> <p>bc</p> <p>de</p> <p>34</p> <p>56</p> <p>f0</p>
You can use a for loop to ensure that the order of the result set corresponds to the order of the input sequence:
for $a in doc('doc.txt')//p return <p>{$a/span/node()}</p>
This version produces the same result set but in the expected order:
<p>12</p> <p>34</p> <p>56</p> <p>78</p> <p>9a</p> <p>bc</p> <p>de</p> <p>f0</p>
Why can't I use true and false in my XQuery?
You can, but not by just using the names true and false directly, because they are name tests although they look like boolean constants. The simple way to create the boolean values is to use the builtin functions true() and false() wherever you want to use true and false. The other way is to invoke the boolean constructor:
xs:boolean("true")
No notes | http://qt-project.org/doc/qt-4.8/xquery-introduction.html | CC-MAIN-2013-48 | refinedweb | 5,827 | 62.68 |
Atomic core and packages
#1
Posted 18 July 2011 - 03:02 PM
atomic core
The idea is to keep the Yii2 core small, fast and flexible by moving things like zii widgets, web services and even active record out of it. Things to keep are base ones like autoloading, events, routing and CLI support.
packages
A package is some amount of structured code following a standard, plus documentation and a meta-description file. That's where all non-core code goes. The package concept will allow Yii2 to be used as an atomic micro-framework or as a full-blown solution like the current Yii.
Meta file should include package name, version, author, brief description, a list of packages and their versions it depends on.
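A hypothetical sketch of what such a meta file could look like (the JSON format, field names and package names here are invented for illustration, not a proposed standard):

```json
{
    "name": "yii-bootstrap",
    "version": "1.0.2",
    "author": "John Doe",
    "description": "Bootstrap widgets for Yii2",
    "depends": {
        "yii-core": ">=2.0.0",
        "yii-jquery": "*"
    }
}
```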
package manager
Package manager should be simple to use. When one tries to get a package, it should automatically fetch all dependencies. One should be able to add/remove any package at any time (of course, if it's not required by another one).
ability to store packages at third-party servers
Currently we have extensions and one of the problems is that it's time consuming to upload archives to Yii website when extension code is updated. Overall, it's less time consuming to keep your code in a single place such as GitHub or your own server. So package manager should be able to install packages from different locations.
package repository
Since it will be hard to find a right package it's a good idea to have a central repository. Same idea as current extensions but the difference is that one should only specify URL to the package meta file and Yii website will automatically update description, version number etc. from the package and provide a summary page. Another difference is that files will not be stored at Yii website.
official packages
Along with packages that are maintained by developers, we can review and "approve" some packages. Ones with decent documentation and code. These will be marked as "officially approved". It will solve the problem of choosing stable code to build a production system on.
#2
Posted 18 July 2011 - 04:34 PM
And from the other side, if we want to make Yii a really powerful and extensions-rich framework, we will need this package system. That's why now I'm tending to like this idea
#3
Posted 18 July 2011 - 04:40 PM
Package management built right into Yii would make it so much easier to manage packages, not only Yii official packages, but third-party packages as well. Especially third-party extensions.
Is it inspired by the Phundament2-derived work by Schmunk?
#4
Posted 18 July 2011 - 05:11 PM
- run static code analysis (coding conventions, API documentation depth, code complexity, nesting depth, ...)
- generate API reference
- run unit tests and measure code coverage
from all this, generate publicly visible rating with access to detailed information if requested. Ideally, this should happen automatically (say author creates a new tag in his repo, the server hosting that repo sends a notification. Yii site updates the package locally and starts the build).
Really loved to have something like this...
---
For the core, I'm not sure it would be such a good idea. After all, it adds complexity both for you as maintainer and for all users which probably only want to use a great framework. Having to search for components and plugging them together... I don't know. I prefer downloading a framework and being able to do all sorts of things with it (including especially WS and AR, but also widgets which I loved to be extended - different styles for CMenu is something I missed for example). Do you feel Yii has grown so fat? What do you hope to improve by separating the framework into smaller chunks?
#5
Posted 18 July 2011 - 06:16 PM
About the core, maybe it's a good idea to have those packages that are now considered part of it (AR, WS, widgets, etc.) pre-configured, so that programmers can remove the ones they don't need, rather than having to search for things that were always there before.
#6
Posted 18 July 2011 - 07:05 PM
#7
Posted 18 July 2011 - 08:01 PM
And I love the package repository, I always wanted something like that... I've used something like that in Typo3 CMS. I think that CMS has one of the best package managers, so we can take some ideas from it.
#8
Posted 19 July 2011 - 01:54 AM
#9
Posted 19 July 2011 - 04:59 AM
andy_s
Web based package installer is probably a good idea. Still, it should be doubled by console application in order to be able to automate installing process via shell scripts.
jacmoe
To be honest, I've never reviewed Phundament2. Is there a package manager?
Ben
Good idea about automating quality checks.
robregonm
We'll keep using lazy loading in Yii2. There's no other way to make framework fast and feature-rich at the same time.
Can you post some links to Typo3 manuals/implementation etc.?
Mike
Yes, PEAR is quite similar.
#10
Posted 19 July 2011 - 08:49 AM
The current Typo3 version uses an extension manager as is in these pages:
A glimpse:
A screenshot:
Extensions repository:
The new Typo3 v5.0 implements a slightly different way to manage the packages, it is based in a new PHP framework called FLOW3 (still in alpha):
I think we can take some ideas from Typo3, It is a bit complex sometimes (it's being improved in v.5.0) but it implements cool features and ideas.
Best.
#11
Posted 19 July 2011 - 09:37 AM
It is the start of a unified extension/module handling/installation/configuration framework.
Interesting idea.
It sounds very close to the idea of packages.
Although p3widgets currently is concerned with widgets, it is the general mechanism which is interesting.
#12
Posted 21 July 2011 - 07:43 AM
Additionally, maybe some official packages can be in incubation state, which mean that they are close to become official packages, but still not tested/stable enough.
#13
Posted 21 July 2011 - 01:47 PM
If one package === one namespace in Yii 2.0, then this would also make it possible to use pharchives, which would be neat - this would make it really easy to switch to a newer version of a package, and you can comfortably keep the old version of the package around, without storing/deploying (or checking in) hundreds (or even thousands) of individual files.
I like that idea a lot. Fewer files, less mess, easier switching between versions, faster deployment - good stuff.
#14
Posted 28 July 2011 - 04:43 PM
I am very pleased to see this discussion going on
First, isn't it like a package is just a standardized extension format?
Does not matter if a package contains a component or a module ... whatever.
@samdark
I totally agree with all your points.
And yes, there's kind of a package manger in phundament, it's called p3admin.
Check this posting, it's also shown in the video.
It basically just looks for a migration directory in modules and executes the migrations for database setup via yiic.
I think there should also be a convention about where to store files, i.e. protected/data/<packageName>
Every package should also provide a default configuration, which can be easily merged with the application configuration.
Configuration handling is also an issue in Yii, a package should be able to register itself within the application.
Regarding a package manager and a repository, what's about an yiic wrapper for git commands or for the GitHub API?
Console-based is a must.
I also strongly vote for GitHub! I've just started using it and found zero disadvantages compared to GoogleCode and svn.
And the ability to share code on GitHub is awesome.
Compatibility is also a thing to think about. Thinking especially about different jQuery versions here
But also cross-dependencies between packages.
Best regards,
schmunk
Fork on github
Follow phundament on Twitter
DevSystem: Mac OS X 10.7 - PHP 5.3 - Apache2 - Yii 1.1 / trunk - Firefox or Safari
#15
Posted 28 July 2011 - 07:29 PM
Mercurial!!
<sorry>
#16
Posted 01 August 2011 - 07:19 AM
The GUID should change with each major revision of a package, as an indication that it is no longer backwards compatible. Dependencies should be specified with GUID (as well as package name) so that users receive a warning if they're not using the right version of a dependency.
Note that while this eliminates "fighting" for a "package name", packages may still "fight" for a namespace, so that doesn't really address that problem. I think "package name" should be == root namespace of a package by convention, and the Yii site should maintain a namespace registry. When you enroll a package, you reserve the root namespace, so we avoid collisions.
Alternatively, we could use the Java-style convention where the vendor name is used as the root namespace, which virtually eliminates collisions - e.g. "mindplay\ImageManager" different from, say, "somebody\ImageManager". One problem with that idea, is that this is not a convention generally followed by the PHP community, so not much use when integrating a third-party library into a package.
Thoughts?
#17
Posted 01 August 2011 - 09:24 AM
Instead of GUID that's not too human friendly we can just use versions:
x.0.0 — Entirely new API.
1.x.0 — partially non BC.
1.0.x — BC changes.
Using vendor name in namespace can be a problem because of autoloader convention for namespace to follow file system. Namespace system at website can be a good solution.
#18
Posted 01 August 2011 - 12:20 PM
samdark, on 01 August 2011 - 09:24 AM, said:
Not an argument for GUID, but why would that be a problem? Presumably, the autoloader will allow you to point a root-namespace to a particular folder - that's a basic feature of any modern autoloader?
All the Yii packages will likely reside in namespaces under "Yii", so Yii itself already follows that convention.
#19
Posted 12 August 2011 - 01:42 PM
mindplay, on 01 August 2011 - 07:19 AM, said:
SPL is about to standardize on this convention - I think we should consider following that standard.
#20
Posted 13 August 2011 - 09:03 AM
For example:
- vendors/Mindplay/Annotations
- vendors/Mindplay/ImageManager
- etc.
Just one namespace => path mapping to configure, which makes it easy to install and configure third-party packages with many dependencies on other packages from the same third-party vendor.
Also, package managers need only know where the root "vendors" folder is located - they will then be able to automatically install packages and dependencies, without prompting you for paths to each individual package.
And finally, this eliminates "fighting" for package-names, since the full package-name is disambiguated by the vendor namespace-prefix.
In my opinion, this would work very well. | http://www.yiiframework.com/forum/index.php/topic/21694-atomic-core-and-packages/page__p__106186 | CC-MAIN-2013-20 | refinedweb | 1,840 | 62.58 |
System.Windows.Forms.MessageBox.Show("Hello World")
Yes, you can, then the class belongs to global namespace which has no name.
For commercial products, naturally, you wouldn’t want global namespace
.config files in .NET are supported through the API to allow storing
and retrieving information. They are nothing more than simple XML files,
sort of like what .ini files were before for Win32 apps.
Web deployment: the user always downloads the latest version of the code;
the program runs within security sandbox, properly written app will not require
additional security privileges.
Both methods do the same, Move and Resize are the names adopted from VB to ease migration to C#..
A hyphen ‘-’ would do it. Also, an ampersand ‘&\’ would underline the next letter.
Windows service is used for back end processing like
printing,creating setup of CD.it is not used for Gui
application. | http://www.megasolutions.net/qs/Windows_Net_Forms_Interview_Questions.aspx | CC-MAIN-2015-27 | refinedweb | 145 | 59.9 |
java.util.TreeMap.subMap() Method
Description
The subMap(K fromKey, boolean fromInclusive, K toKey, boolean toInclusive) method is used to return a view of the portion of this map whose keys range from fromKey to toKey. If fromKey and toKey are equal, the returned map is empty unless fromExclusive and toExclusive are both true. The returned map is backed by this map, so changes in the returned map are reflected in this map, and vice-versa.
Declaration
Following is the declaration for java.util.TreeMap.subMap() method.
public NavigableMap<K,V> subMap(K fromKey, boolean fromInclusive, K toKey, boolean toInclusive)
Parameters
fromKey--This is the low endpoint of the keys in the returned map.
fromInclusive--This is true if the low endpoint is to be included in the returned view.
toKey--This is the high endpoint of the keys in the returned map.
toInclusive--This is true if the high endpoint is to be included in the returned view.
Return Value
The method call returns a view of the portion of this map whose keys range from fromKey to toKey.
Exception
ClassCastException--This exception is thrown if fromKey and toKey cannot be compared to one another using this map's comparator.
NullPointerException--This exception is thrown if fromKey or toKey is null and this map uses natural ordering, or its comparator does not permit null keys.
IllegalArgumentException--This exception is thrown if fromKey is greater than toKey; or if this map itself has a restricted range, and fromKey or toKey lies outside the bounds of the range.
Example
The following example shows the usage of java.util.TreeMap.subMap()
package com.tutorialspoint; import java.util.*; public class TreeMapDemo { public static void main(String[] args) { // creating maps TreeMap<Integer, String> treemap = new TreeMap<Integer, String>(); Navigable a portion of the map"); treemapincl=treemap.subMap(1, true, 3, true); System.out.println("Sub map values: "+treemapincl); } }
Let us compile and run the above program, this will produce the following result.
Getting a portion of the map Sub map values: {1=one, 2=two, 3=three} | http://www.tutorialspoint.com/java/util/treemap_submap_inclusive.htm | CC-MAIN-2014-10 | refinedweb | 343 | 55.03 |
My name is Donavon West, a Live.com and Sidebar gadget developer at LiveGadgets.net and a Microsoft MVP for Windows Live Development. This is my first blog article here at LiveSide. Many of you have written gadgets for Live.com (what Microsoft is now referring to as "web gadgets"). The introduction of gadgets on Windows Live Spaces introduces many new challenges for developers. Here are a few:
With traditional Live.com gadgets, the person that placed a gadget on the page was the only person to view it. Windows Live Spaces a new paradigm. Now anyone with access to the Internet can view your gadget. But, you don't want to give everyone the right to change the gadget's settings. Because of this, gadgets now operate in one of two modes: author and visitor.
Let's examine the two modes using the iTunes gadget that I recently wrote. It displays album art, song title and artist from the iTunes most popular songs RSS feed. It also allows the user (we'll get to the definition of "user" in a moment) to select from a list of genres to display as well as the color of the virtual "iPod".
When you place the iTunes gadget on a page in Live.com, the user can adjust the various settings. This is what is called "author mode". In author mode, controls are exposed by the gadget to alter it's behavior. For example the user/author could select the genre Hip Hop/Rap. With Live.com there is only one mode: author.
With Windows Live Spaces, the author is the person who "owns" the site (i.e. yoursite.Spaces.Live.com). The "visitor" would be one of the visitors to the site. In many cases you do not want viewers to change certain aspects of the gadget. In the previous example, if you place the iTunes gadget on your space and set it up to display Hip Hop/Rap (because that's the kind of music you like), you may not want your users to alter this. Therefore, in visitor mode, setting controls are hidden (i.e. there is no "Change Genre" button).
Normally, when you want to save some user setting (again, like the genre mentioned above) in author mode, you would call p_args.module.setPreference() with the name/value pair that you would like to persist. This works fine when your gadget runs on Live.com. If you try and do the same thing in Live Spaces the underlying HTTP call returns a response code of 500 (SERVER ERROR) and your settings are lost.
What can you do about this? About the only thing you can do (short of praying to the Spaces God, or Gods if you are a BSG fan) is to write and host your own data store. But before you do, consider this; your gadget may end up of thousands of spaces around the world. This means thousands of database records to store the gadget's settings and potentially millions of HTTP hits on your server. If your hosting company charges by the megabyte, this could amount to a costly gadget.
So all you have to do is write a web service and call it with the user data and some unique ID. On Live.com you would do a module.GetId(), but when you try this on Live Spaces, you get some long string something like this:
GadgetGallery:
Not much good as this is the same ID that you will get on each of the thousands of Live Spaces on which your gadget resides. No, you need something to uniquely identify the Live Space that you gadget is installed on. Fortunately, I know the secret and equally as fortunate for you, I'm going to tell you what the secret is.
But first, here's a hint. On your Live Space, right-click somewhere on the background of your gadget's iframe and select properties. Take a look at the URL. Do you see anything that might be unique to the particular space you gadget is running on? Yes, you do. There is a "&host=foo.spaces.live.com" in the URL.
Sweet, you say. But how does that help me? Well I thought you might ask that. All you have to do is parse the URL and return all query string and hash (the stuff following the "#") parameters. When you are done, the value for host will be your unique ID. (Note: this ID is unique to the apace and not to an instance on the space; that is to say that you can only have one copy of your gadget on any given Live Space). Oh, I'm not going to write the parsing code for you. You'll have to figure that out for yourself. :)
With Live.com gadget, programmers just got used to setting the backgroundColor of their DIV class to white/#ffffff or leaving it blank to inherit the background color of the body. The body of a non-certified gadget is the controlled by the HTML page that loaded in the iframe. You gadget runs inside of a DIV on this HTML page. As any piece of code running on a web page, you have full control of the document (and thus the body element) of the page.
To successfully have a transparent background in an iframe, you need 2 things. 1) the parent iframe element must set attribute allowTransparency="true". As luck would have it, the Live Spaces people have been gracious enough to oblige.
The other part of the equation is to set the backgroundColor of the body element of the HTML page within the iframe to transparent. Here is the code to do just that:
if (window.parent != window.self) { document.body.style.backgroundColor = "transparent";}
If you look at the example shown above, the gadget on the left has the code above applied. The gadget on the right has not. Note that the 2 gadgets also have different background iPod images (one black and one white). This has nothing to do with the backgroundColor code.
Arrrrrrg! When a gadget is instantiated, it is passed 3 parameters: p_el, p_args and p_namespace (p_namespace if currently unused). p_el is a pointer to the DIV element where your gadget is bound to or "lives". p_args is an object that contains many useful properties. Here is what p_args looks like in VS2005 with the iTunes gadget running on Live.com compared to Live Spaces:
/: {...}
Wow, that's a big difference. I won't go into the meaning of them all, but some notable standouts include: defaults and feed. Of course it would be nice if id were set to the Live Spaces host name so we wouldn't have to go through all that trouble that we did above. Oh, and the "M" in ownermkt should be capitalized. ;)
To help to keep tabs on what p_args are supported on Live.com and Live Spaces, I've written a tool called TestPlatform. The manifest URL is:. It displays 2 of the most important aspects of the gadget framework, p_args and p_args.module.
If you want to see how your web gadget will look on Live Spaces (both author and view modes) and on Live.com, here is a link you should know about.
As Live Spaces matures and the line between gadget platforms starts to come together as one codebase, many of these issues will go away. Until then, I hope these tips help you develop better gadgets that play well on Live.com and Live Spaces.
donavon
PingBack from
i understand the problems with the gadgets that devellopers might have, but the gadgets being made are absolutely crap and so far i've used none that anyone besides MS has made.
for example the itunes one you have made. why would i want that? i have itunes on my PC and can access all of that info. now if you made it useful like to show the songs i've been listening to or something that might be a little more interesting. if i had the know-how to make my own gadget i'd try myself but i am not a developer i'm a user so.
Aloha! Donavon! ;-)
Microsoft Appears To Have Opted For "Gimmicks" Over "User Content" With Regard To Windows Live Spaces!
These "Gadgets" Appear To Be The Cause of EXTREMELY S-L-O-W Loading/FREEZING of Windows Live Spaces, Rendering Windows Live Spaces Practically Useless & The Source of MUCH User Discontent!
Therefore, I Believe MSN Groups: Is a Better Venue For Showcasing These "Gadgets" Than, Windows Live Spaces!
Mahalo! ...4 Your HARD Work!
Aloha!
;-)
punky,
It sounds like you wouldn't want the iTunes gadget, but's it's been downloaded nearly 50,000 times on the Microsoft Gallery so SOMEONE out there must see a use for it.
But seriously, it's not a very practical gadget. As I'm tracking it, it's appears as though it appeals mainly to 14-16 y/o girls from China so they can have a slide show featuring album art of the genre of music that they like so their friends can see it when they visit their Live Space. Not my cup-o-tea, but oh well.
And while I agree that the majority of the gadgets on the gallery are crap (there has been a rash of gadget "spam" as of late), there are some very well written gadgets out there. Personally I've written around 20 gadgets and maybe only 1 or 2 have been crap. ;) I even won an Xbox for one of my gadgets and a Creative Zen media player for another.
Comme vous le savez peut être déjà, les gadgets que vous développez pour la page d'accueil personnalisable
How do you go about adding a gadget to your Spaces page manually for testing purposes? So far that's eluded me, though I will admit that I haven't spent much time really thinking about Spaces gadgets.
Your article makes me want to implement a simple storage mechanism so I can start doing some more interesting Spaces-related gadgets.
Todd,
To add a gadget to your space, use this (from a post on thespacecraft, the Spaces team blog):
The permalink to the full article:
Thanks to this article, I finally went ahead and implemented my own preference storage mechanism. Soon, my "Xbox 360 Favorite Games" gadget () will have proper Spaces support (just as soon as it makes it through Gallery approval -- check back tomorrow morning). In the process, I've got a couple more questions that perhaps you or someone from Spaces can answer:
1. Is there any way to grab the style from a Space? Gadgets tend to assume a dark-on-light theme, but there are plenty of Space themes that are light-on-dark. By blindly setting the gadget background to transparent, you may end up with unreadable text (black text on a black or dark background).
2. Is there any way to allow a gadget to use more horizontal space? Gadgets squeeze fine if you put them into a narrower portion of your layout, but they don't stretch out all the way if you put them in a wider part.
3. Do you have any recommendation on how best to determine if a gadget is running on Spaces? The best I found was to check the window.location for the "host" parameter, or match the specific '"GadgetGallery:" + p_args.module.resolveUrl("gadget.xml")' string from p_args.module.getId(). Neither seems particularly robust.
Thanks for article Donavon. I have posted a response at
Today sees a small but valuable Windows Live Spaces update that tweaks and fixes the existing version
Notably for the gadget developer community we have enabled support for Get/Set Preferences with this release. See the post at
Thanks Greg. This is great news for gadget developers. I expect to see more quality gadgets available for Spaces in the near future.
I'm sorry, but about the transparency tips I found here:
I can say u it doesn't function at all. I have already submited three or four new versions of differents gadget implemented as u stated here but I didn't get any result. Just a white background. As the the theme of my space has a white background I could guess that the problem is from the source of the frame, I'm unable to override the *default setting*.
Thank u,
Daniele
To get u with more problems :-),
if you can tip the right people from your side,
the boxes containing the gadgets, can I call them webparts?, have a strange behaviour on the rendering of their size, particularly the height. I guess it is a css problem. In any way, when u load your space some *boxes* are of the right height, some are not at all. The result is usually a space with many blank spaces and the things in disorder. Otherwise u have to reload your space many times until the browser seems to eat it well.
Hopes this helps and u can find a fast solution.
Grazie
an other interesting parameter in the gadget url is "A" like "..aspx?a=true" that showes you if u are in edit mode of your space ("True") or not ("False")
ciao :-P
About the ccs problems, the problems rendering the height of the gadgets in Live Spaces.
I have noticed that if I right click the gadget and I refresh the start.com service which starts the gadget, the gadget is rendered always in the right way. So probably it is a problem of the page loading event or of the start.com service processing all the requests.
ciao
the ohio state university football team 2005
home page nc educational lottery
learn spanish free audio
learn spanish language online for free
illinois community college teaching jobs
wisconsin high school football playoffs 2006 wiaa
2007 high school football rankings national
free online bible study course by john quinn
This is great, have bookmarked. | http://www.liveside.net/developer/archive/2006/10/03/Live-Spaces_3A00_-The-challenges-facing-gadget-developers.aspx | crawl-002 | refinedweb | 2,353 | 72.56 |
XML::Filter::Dispatcher - Path based event dispatching with DOM support
  use XML::Filter::Dispatcher qw( :all );

  my $f = XML::Filter::Dispatcher->new(
      Rules => [
          'foo'  => \&handle_foo_start_tag,
          '@bar' => \&handle_bar_attr,

          ## Send any <snarf> elts and their contents to $handler
          'snarf//self::node()' => $handler,

          ## Print the text of all <description> elements
          'description' => [
              'string()' => sub { push @out, xvalue },
          ],
      ],

      Vars => {
          "id" => [ string => "12a" ],
      },
  );
WARNING: Beta code alert.
A SAX2 filter that dispatches SAX events based on "EventPath" patterns as the SAX events arrive. The SAX events are not buffered or converted to an in-memory document representation like a DOM tree. This provides for low lag operation because the actions associated with each pattern are executed as soon as possible, usually in an element's start_element() event method.

This differs from traditional XML pattern matching tools like XPath and XSLT (which is XPath-based) which require the entire document to be built in memory (as a "DOM tree") before queries can be executed. In SAX terms, this means that they have to build a DOM tree from SAX events and delay pattern matching until the end_document() event method is called.
A rule is composed of a pattern and an action. Each XML::Filter::Dispatcher instance has a list of rules. As SAX events are received, the rules are evaluated and one rule's action is executed. If more than one rule matches an event, the rule with the highest score wins; by default a rule's score is its position in the rule list, so the last matching rule in the list will be acted on.
A simple rule list looks like:
  Rules => [
      'a' => \&handle_a,
      'b' => \&handle_b,
  ],
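To see rules like these fire, the dispatcher is wired to a SAX parser like any other SAX2 handler. A minimal sketch (the handler subs and the sample XML here are invented for illustration):

```perl
use XML::SAX::ParserFactory;
use XML::Filter::Dispatcher;

## Hypothetical actions; any code references will do.
sub handle_a { print "start of an <a> element\n" }
sub handle_b { print "start of a <b> element\n" }

my $f = XML::Filter::Dispatcher->new(
    Rules => [
        'a' => \&handle_a,
        'b' => \&handle_b,
    ],
);

## The dispatcher is an ordinary SAX2 handler, so any SAX parser
## can feed it events as they arrive.
XML::SAX::ParserFactory->parser( Handler => $f )
    ->parse_string( "<a>one<b>two</b></a>" );
```

Each action runs as soon as its event is seen during the parse; nothing is buffered.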
There are several types of actions:
  Rules => [
      'a' => \&foo,
      'b' => sub { print "got a <b>!\n" },
  ],
  Handler => $h,   ## A downstream handler

  Rules => [
      'a' => "Handler",
      'b' => $h2,    ## Another handler
  ],
  Rules => [
      '//node()' => $h,
      'b'        => undef,
  ],
Useful for preventing other actions for some events.
  Rules => [
      'b' => \q{print "got a <b>!\n"},
  ],
Lower overhead than a CODE reference.
EXPERIMENTAL.
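The action types above can be mixed freely in a single rule list. A hedged sketch (here $h and $h2 stand for SAX handler objects constructed elsewhere, and the patterns are invented):

```perl
my $f = XML::Filter::Dispatcher->new(
    Handler => $h,                           ## downstream handler
    Rules => [
        'a'      => \&handle_a,              ## CODE reference action
        'b'      => "Handler",               ## name of a handler option
        'c'      => $h2,                     ## another handler object
        'c/skip' => undef,                   ## suppress matching events
        'd'      => \q{print "got a <d>\n"}, ## EXPERIMENTAL string action
    ],
);
```

Because later rules outscore earlier ones by default, ordering within this list matters when patterns overlap.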
Note: this section describes EventPath and discusses differences between EventPath and XPath. If you are not familiar with XPath you may want to skim those bits; they're provided for the benefit of people coming from an XPath background but hopefully don't hinder others. A working knowledge of SAX is necessary for the advanced bits.
EventPath patterns may match the document, elements, attributes, text nodes, comments, processing instructions, and (not yet implemented) namespace nodes. Patterns like this are referred to as "location paths" and resemble Unix file paths or URIs in appearance and functionality.
Location paths describe a location (or set of locations) in the document much the same way a filespec describes a location in a filesystem. The path /a/b/c could refer to a directory named c on a filesystem or a set of <c> elements in an XML document. In either case, the path indicates that <c> must be a child of <b>, <b> must be a child of <a>, and <a> is a root level entity. More examples later.
EventPath patterns may also extract strings, numbers and boolean values from a document. These are called "expression patterns" and are only said to match when the values they extract are "true" according to XPath semantics (XPath truth-ness differs from Perl truth-ness, see EventPath Truth below). Expression patterns look like string( /a/b/c ) or number( part-number ), and if the result is true, the action will be executed and the result can be retrieved using the xvalue method.
TODO: rename xvalue to be ep_result or something.
We cover patterns in more detail below, starting with some examples.
If you'd like to get some experience with pattern matching in an interactive XPath web site, there's a really good XPath/XSLT based tutorial and lab at.
Two kinds of actions are supported: Perl subroutine calls and dispatching events to other SAX processors. When a pattern matches, the associated action is executed.
This is perhaps best introduced by some examples. Here's a routine that runs a rather knuckleheaded document through a dispatcher:
  use XML::SAX::Machines qw( Pipeline );

  sub run { Pipeline( shift )->parse_string( <<XML_END ) }
  <stooges>
    <stooge name="Moe" hairstyle="bowl cut">
      <attitude>Bully</attitude>
    </stooge>
    <stooge name="Shemp" hairstyle="mop">
      <attitude>Klutz</attitude>
      <stooge name="Larry" hairstyle="bushy">
        <attitude>Middleman</attitude>
      </stooge>
    </stooge>
    <stooge name="Curly" hairstyle="bald">
      <attitude>Fool</attitude>
      <stooge name="Shemp" repeat="yes">
        <stooge name="Joe" hairstyle="bald">
          <stooge name="Curly Joe" hairstyle="bald" />
        </stooge>
      </stooge>
    </stooge>
  </stooges>
  XML_END
Let's count the number of stooge characters in that document. To do that, we'd like a rule that fires on almost all <stooge> elements:
  my $count;

  run( XML::Filter::Dispatcher->new(
      Rules => [
          'stooge' => sub { ++$count },
      ],
  ) );

  print "$count\n";  ## 7
Hmmm, that's one too many: it's picking up on Shemp twice since the document shows that Shemp had two periods of stoogedom. The second node has a convenient repeat="yes" attribute we can use to ignore the duplicate.

We can ignore the duplicate element by adding a "predicate" expression to the pattern to accept only those elements with no repeat attribute. Changing that rule to
'stooge[not(@repeat)]' => ...
or even the more pedantic
'stooge[not(@repeat) or not(@repeat = "yes")]' => ...
yields the expected answer (6).
Now let's try to figure out the hairstyles the stooges wore. To extract just the names of hairstyles, we could do something like:
  my %styles;

  run( XML::Filter::Dispatcher->new(
      Rules => [
          'stooge' => [
              'string( @hairstyle )' => sub { $styles{xvalue()} = 1 },
          ],
      ],
  ) );

  print join( ", ", sort keys %styles ), "\n";
which prints "bald, bowl cut, bushy, mop". That rule extracts the text of each hairstyle attribute and xvalue() returns it.
The text contents of elements like <attitude> can also be sussed out by using a rule like:
'string( attitude )' => sub { $styles{xvalue()} = 1 },
which prints "Bully, Fool, Klutz, Middleman".
Finally, we might want to correlate hairstyles and attitudes by using a rule like:
  my %styles;

  run( XML::Filter::Dispatcher->new(
      Rules => [
          'stooge' => [
              'concat(@hairstyle,"=>",attitude)' => sub {
                  $styles{$1} = $2 if xvalue() =~ /(.+)=>(.+)/;
              },
          ],
      ],
  ) );

  print map "$_ => $styles{$_}\n", sort keys %styles;
which prints:
  bald => Fool
  bowl cut => Bully
  bushy => Middleman
  mop => Klutz
When a blessed object such as $foo below is provided as an action for a rule:
  my $foo = XML::Handler::Foo->new();

  my $d = XML::Filter::Dispatcher->new(
      Rules => [
          'foo' => $foo,
      ],
      Handler => $h,
  );
the selected SAX events are sent to $foo.
If the event selected is a start_document() or start_element() event and it is selected without using the start-document:: or start-element:: axes, then the handler ($foo) replaces the existing handler of the dispatcher ($h) until after the corresponding end_...() event is received.
This causes the entire element (<foo>) to be sent to the temporary handler ($foo). In the example, each <foo> element will be sent to $foo as a separate document, so if (whitespace shown as underscores)
  <root>
  ____<foo>....</foo>
  ____<foo>....</foo>
  ____<foo>....</foo>
  </root>
is fed in to $d, then $foo will receive 3 separate <foo>...</foo> documents (start_document() and end_document() events are emitted as necessary) and $h will receive a single document without any <foo> elements:
  <root>
  ____
  ____
  ____
  </root>
This can be useful for parsing large XML files in small chunks, often in conjunction with XML::Simple or XML::Filter::XSLT.
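For instance, to pull each <foo> record out of a large document and serialize it separately, the handler could be an XML::SAX::Writer. A sketch (the element names are invented; the writer's Output-to-scalar-ref usage is its standard API):

```perl
use XML::SAX::ParserFactory;
use XML::SAX::Writer;
use XML::Filter::Dispatcher;

my $out = "";
my $w = XML::SAX::Writer->new( Output => \$out );

my $d = XML::Filter::Dispatcher->new(
    Rules => [
        ## Each matching <foo> arrives at $w as its own little document.
        'foo' => $w,
    ],
);

XML::SAX::ParserFactory->parser( Handler => $d )
    ->parse_string( "<root><foo>one</foo><junk/><foo>two</foo></root>" );

print $out;  ## the serialized <foo> chunks, one after another
```

The same shape works with any SAX handler in place of the writer, e.g. an XSLT filter that transforms each chunk.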
But what if you don't want $foo to see three separate documents? What if you're excerpting chunks of a document to create another document? This can be done by telling the dispatcher to emit the main document to $foo and using rules with an action of undef to elide the events that are not wanted. This setup:
  my $foo = XML::Handler::Foo->new();

  my $d = XML::Filter::Dispatcher->new(
      Rules => [
          '/'   => $foo,
          'bar' => undef,
          'foo' => $foo,
      ],
      Handler => $h,
  );
, when fed this document:
  <root>
  __<bar>hork</bar>
  __<bar>
  __<foo>....</foo>
  __<foo>....</foo>
  __<foo>....</foo>
  __<hmph/>
  __</bar>
  __<hey/>
  </root>
results in $foo receiving a single document of input looking like this:
  <root>
  __
  __<foo>....</foo>
  __<foo>....</foo>
  __<foo>....</foo>
  __<hey/>
  </root>
XML::Filter::Dispatcher keeps track of each handler and sends start_document() and end_document() at the appropriate times, so the <foo> elements are "hoisted" out of the <bar> element in this example without any untoward ..._document() events.
TODO: support forwarding to multiple documents at a time. At the present, using multiple handlers for the same event is not supported.
TODO: At the moment, selecting and forwarding individual events is not supported. When it is, any events other than those covered above will be forwarded individually.
XML::Filter::Dispatcher checks when it is first loaded to see if Devel::TraceSAX is loaded. If so, it will emit tracing messages. Typical use looks like
perl -d:Devel::TraceSAX script_using_x_f_dispatcher
If you are use()ing Devel::TraceSAX in source code, make sure that it is loaded before XML::Filter::Dispatcher.
TODO: Allow tracing to be enabled/disabled independently of Devel::TraceSAX.
XML::Filter::Dispatcher offers namespace support in matching and by providing functions like local-name(). If the documents you are processing don't use namespaces, or you only care about elements and attributes in the default namespace (ie without a "foo:" namespace prefix), then you need not worry about engaging XML::Filter::Dispatcher's namespace support. You do need it if your patterns contain the foo:* construct (that * is literal).
To specify the namespaces, pass in an option like
  Namespaces => {
      ""      => "uri0",    ## Default namespace
      prefix1 => "uri1",
      prefix2 => "uri2",
  },
Then use prefix1: and prefix2: wherever necessary in patterns.
A missing prefix on an element always maps to the default namespace URI, which is "" by default. Attributes are treated likewise, though this is probably a bug.
If your patterns contain prefixes (like the foo: in foo:bar), and you don't provide a Namespaces option, then the element names will silently be matched literally as "foo:bar", whether or not the source document declares namespaces. This may change, as it may cause too much user confusion.
XML::Filter::Dispatcher follows the XPath specification rather literally and does not allow :*, which you might think would match all nodes in the default namespace. To do this, add a prefix for the default namespace URI:
  Namespaces => {
      ""        => "uri0",    ## Default namespace
      "default" => "uri0",    ## Default namespace
      prefix1   => "uri1",
      prefix2   => "uri2",
  },
then use "default:*" to match it.
CURRENT LIMITATION: Currently, all rules must exist in the same namespace context. This will be changed when I need to change it (contact me if you need it changed). The current idea is to allow a special function "Namespaces( { .... }, @rules )" that enables a temporary namespace context, although abbreviated forms might be possible.
"EventPath" patterns are that large subset of XPath patterns that can be run in a SAX environment without a DOM. There are a few crucial differences between the environments that EventPath and XPath each operate in.
XPath operates on a tree of "nodes" where each entity in an XML document has only one corresponding node. The tree metaphor used in XPath has a literal representation in memory. For instance, an element <foo> is represented by a single node which contains other nodes.
EventPath operates on a series of events instead of a tree of nodes. For instance elements, which are represented by nodes in DOM trees, are represented by two event method calls, start_element() and end_element(). This means that EventPath patterns may match in a start_...() method or an end_...() method, or even both if you try hard enough.
The only times an EventPath pattern will match in an end_...() method are when the pattern refers to an element's contents or it uses the XXXX function (described below) to do so intentionally.

The tree metaphor is used to arrange and describe the relationships between events. In the DOM trees an XPath engine operates on, a document or an element is represented by a single entity, called a node. In the event streams that EventPath operates on, documents and elements are each represented by a pair of events.
EventPath is not a standard of any kind, but XPath can't cope with situations where there is no DOM, and there are some features that EventPath needs (start_element() vs. end_element() processing, for example) that are not compatible with XPath.
Some of the features of XPath require that the source document be fully translated in to a DOM tree of nodes before the features can be evaluated. (Nodes are things like elements, attributes, text, comments, processing instructions, namespace mappings etc).
These features are not supported and are not likely to be, you might want to use XML::Filter::XSLT for "full" XPath support (tho it be in an XSLT framework) or wait for XML::TWIG to grow SAX support.
Rather than build a DOM, XML::Filter::Dispatcher only keeps a bare minimum of nodes: the current node and its parent, grandparent, and so on, up to the document ("root") node (basically the /ancestor-or-self:: axis). This is called the "context stack", although you may not need to know that term unless you delve in to the guts.
EventPath borrows a lot from XPath including its notion of truth. This is different from Perl's notion of truth; presumably to make document processing easier. Here's a table that may help, the important differences are towards the end:
  Expression  EventPath  XPath  Perl
  ==========  =========  =====  ====
  false()     FALSE      FALSE  n/a (not applicable)
  true()      TRUE       TRUE   n/a
  0           FALSE      FALSE  FALSE
  -0          FALSE**    FALSE  n/a
  NaN         FALSE**    FALSE  n/a (not fully, anyway)
  1           TRUE       TRUE   TRUE
  ""          FALSE      FALSE  FALSE
  "1"         TRUE       TRUE   TRUE
  "0"         TRUE       TRUE   FALSE

   * To be regarded as a bug in this implementation
  ** Only partially implemented/supported in this implementation
Note: it looks like XPath 2.0 is defining a more workable concept for document processing that uses something resembling Perl's empty lists, (), to indicate empty values, so "" and () will be distinct and "0" can be interpreted as false like in Perl. XPath2 is not provided by this module yet and won't be for a long time (patches welcome ;).
All of this means that only a portion of XPath is available. Luckily, that portion is also quite useful. Here are examples of working XPath expressions, followed by known unimplemented features.
TODO: There is also an extension function available to differentiate between start_... and end_... events. By default, rules fire on start events:
  Expression       Event Type      Description (event type)
  ==========       ==========      ========================
  /                start_document  Selects the document node
  /a               start_element   Root elt, if it's "<a ...>"
  a                start_element   All "a" elements
  b//c             start_element   All "c" descendants of "b" elt.s
  @id              start_element   All "id" attributes
  string( foo )    end_element     matches at the first </foo> or <foo/> in
                                   the current element; xvalue() returns the
                                   text contained in "<foo>...</foo>"
  string( @name )  start_element   the first "name" attribute; xvalue()
                                   returns the text of the attribute.
There are several APIs provided: general, xstack, and EventPath variable handling.

The general API provides new(), xvalue(), xvalue_type(), and xrun_next_action().

The variables API provides xset_var() and xget_var().

The xstack API provides xadd(), xset(), xoverwrite(), xpush(), xpeek() and xpop().
All of the "xfoo()" APIs may be called as a method or, within rule handlers, called as a function:
  $d = XML::Filter::Dispatcher->new(
      Rules => [
          "/" => sub {
              xpush "foo\n";
              print xpeek;           ## Prints "foo\n"

              my $self = shift;
              print $self->xpeek;    ## Also prints "foo\n"
          },
      ],
  );

  print $d->xpeek;                   ## Yup, prints "foo\n" as well.
This dual nature allows you to import the APIs as functions and call them using a concise function-call style, or to leave them as methods and use object-oriented style.
Each call may be imported by name:
use XML::Filter::Dispatcher qw( xpush xpeek );
or by one of three API category tags:
use XML::Filter::Dispatcher ":general"; ## xvalue() use XML::Filter::Dispatcher ":variables"; ## xset_var(), xget_var() use XML::Filter::Dispatcher ":xstack"; ## xpush(), xpop(), and xpeek()
or en mass:
use XML::Filter::Dispatcher ":all";
  my $f = XML::Filter::Dispatcher->new(
      Rules => [    ## Order is significant
          "/foo/bar" => sub {
              ## Code to execute
          },
      ],
  );
Must be called as a method, unlike other API calls provided.
"string( foo )" => sub { my $v = xvalue }, # if imported "string( foo )" => sub { my $v = shift->xvalue }, # if not
Returns the result of the last EventPath expression evaluated; this is the result that fired the current rule. The example prints all text node children of <foo> elements, for instance.
For matching expressions, this is equivalent to $_[1] in action subroutines.
Returns the type of the result returned by xvalue. This is either a SAX event name or "attribute" for path rules ("//a"), or "" (for a string), "HASH" for a hash (note that struct() also returns a hash; these types are Perl data structure types, not EventPath types).
This is the same as xeventtype for all rules that don't evaluate functions like "string()" as their top level expression.
Returns the type of the current SAX event.
Runs the next action for the current node. Ordinarily, XML::Filter::Dispatcher runs only one action per node; this allows an action to call down to the next action.
This is especially useful in filters that tweak a document on the way by. This tweaky sort of filter establishes a default "pass-through" rule and then additional override rules to tweak the values being passed through.
Let's suppose you want to convert some mtimes from seconds since the epoch to a human readable format. Here's a set of rules that might do that:
  Rules => [
      'node()' => "Handler",    ## Pass everything through by default.

      'file[@mtime]' => sub {   ## intercept and tweak the node.
          my $attr = $_[1]->{Attributes}->{"{}mtime"};

          ## Localize the changes: never assume that it is safe
          ## to alter SAX elements on the way by in a general purpose
          ## filter.  Some smart aleck might send the same events
          ## to another filter with a Tee fitting or even back
          ## through your filter multiple times from a cache.
          local $attr->{Value} = localtime $attr->{Value};

          ## Now that the changes are localised, fall through to
          ## the default rule.
          xrun_next_action;

          ## We could emit other events here as well, but need not
          ## in this example.
      },
  ],
EventPath variables may be set in the current context using xset_var(), and accessed using xget_var(). Variables set in a given context are visible to all child contexts. If you want a variable to be set in an enclosed context and later retrieved in an enclosing context, you must set it in the enclosing context first, then alter it in the enclosed context, then retrieve it.
EventPath variables are typed.
EventPath variables set in a context are visible within that context and all enclosed contexts, but not outside of them.
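As a sketch of the set-then-alter idiom described above — element names are invented for illustration, but xset_var/xget_var and the nested rule form are from this documentation:

```perl
Rules => [
    'chapter' => sub {
        ## Set a default, visible throughout this <chapter>:
        xset_var( title => string => "untitled" );
    },
    'chapter/title' => [
        ## Alter the enclosing chapter's variable:
        'string()' => sub { xset_var( title => string => xvalue ) },
    ],
    'chapter//footnote' => sub {
        ## Retrieve it later in the enclosing chapter's context:
        warn "footnote in chapter '", xget_var( "title" ), "'\n";
    },
],
```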
"foo" => sub { xset_var( bar => string => "bingo" ) }, # if imported "foo" => sub { shift->xset_var( bar => boolean => 1 ) },
Sets an XPath variables visible in the current context and all child contexts. Will not be visible in parent contexts or sibling contexts.
Legal types are boolean, number, and string. Node sets and nodes are unsupported at this time, and "other" types are not useful unless you work in your own functions that handle them.
Variables are visible as $bar variable references in XPath expressions and using xget_var in Perl code. Setting a variable to a new value temporarily overrides any existing value, somewhat like using Perl's local.
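If the comparison to Perl's local is unfamiliar, here is a stand-alone pure-Perl refresher (nothing to do with the module itself) showing the same dynamic-scoping behaviour:

```perl
use strict;
use warnings;

our $depth = "outer";

sub report { return $depth }    # sees whatever value is currently in effect

sub inner {
    local $depth = "inner";     # overrides $depth until this sub returns
    return report();
}

print report(), "\n";   # outer
print inner(),  "\n";   # inner -- report() saw the localized value
print report(), "\n";   # outer again: the override unwound automatically
```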
"bar" => sub { print xget_var( "bar" ) }, # if imported "bar" => sub { print shift->xget_var( "bar" ) },
Retrieves a single variable from the current context. This may have been set by a parent or by a previous rule firing on this node, but not by children or preceding siblings.
Returns undef if the variable is not set (or if it was set to undef).
"bar" => sub { print xget_var_type( "bar" ) }, # if imported "bar" => sub { shift->xget_var_type( "bar" ) },
Retrieves the type of a variable from the current context. This may have been set by a parent or by a previous rule firing on this node, but not by children or preceding siblings.
Returns
undef if the variable is not set.
XML::Filter::Dispatcher allows you to register handlers using set_handler() and get_handler(), and then to refer to them by name in actions. These are part of the "general API".
You may use any string for handler names that you like, including strings with spaces. It is wise to avoid those standard, rarely used handlers recognized by parsers, such as:
  DTDHandler
  ContentHandler
  DocumentHandler
  DeclHandler
  ErrorHandler
  EntityResolver
  LexicalHandler
unless you are using them for the stated purpose. (List taken from XML::SAX::EventMethodMaker).
Handlers may be set in the constructor in two ways: by using a name ending in "Handler" and passing it as a top level option:
  my $f = XML::Filter::Dispatcher->new(
      Handler    => $h,
      FooHandler => $foo,
      BarHandler => $bar,
      Rules => [
          ...
      ],
  );
Or, for oddly named handlers, by passing them in the Handlers hash:
  my $f = XML::Filter::Dispatcher->new(
      Handlers => {
          Manny => $foo,
          Moe   => $bar,
          Jack  => $bat,
      },
      Rules => [
          ...
      ],
  );
Once declared in new(), handler names can be used as actions. The "well known" handler name "Handler" need not be predeclared.
For example, this forwards all events except the start_element() and end_element() events for the root element's children, thus "hoisting" everything two levels below the root up a level:
  Rules => [
      '/*/*'   => undef,
      'node()' => "Handler",
  ],
By default, no events are forwarded to any handlers: you must send individual events to individual handlers.
Normally, when a handler is used in this manner, XML::Filter::Dispatcher makes sure to send start_document() and end_document() events to it just before the first event and just after the last event. This prevents sending the document events unless a handler actually receives other events, which is what most people expect (the alternative would be to preemptively always send a start_document() to all handlers when the dispatcher receives its start_document(): ugh).
To disable this for all handlers, pass the SuppressAutoStartDocument => 1 option.
  $self->set_handler( $handler );
  $self->set_handler( $name => $handler );
The xstack is a stack mechanism provided by XML::Filter::Dispatcher that is automatically unwound after end_element, end_document, and all other events other than start_element or start_document. This sounds limiting, but it's quite useful for building data structures that mimic the structure of the XML input. I've found this to be common when dealing with data structures in XML and a creating nested hierarchies of objects and/or Perl data structures.
Here's an example of how to build and return a graph:
  use Graph;

  my $d = XML::Filter::Dispatcher->new(
      Rules => [
          ## These two create and, later, return the Graph object.
          'graph'      => sub { xpush( Graph->new ) },
          'end::graph' => \&xpop,

          ## Every vertex must have a name, so collect it and add it
          ## to the Graph object using its add_vertex( $name ) method.
          'vertex' => [
              'string( @name )' => sub { xadd },
          ],

          ## Edges are a little more complex: we need to collect the
          ## from and to attributes, which we do using a hash, then
          ## pop the hash and use it to add an edge.  You could
          ## also use a single rule, see below.
          'edge'      => [ 'string()' => sub { xpush {} } ],
          'edge/@*'   => [ 'string()' => sub { xset } ],
          'end::edge' => sub {
              my $edge = xpop;
              xpeek->add_edge( @$edge{"from","to"} );
          },
      ],
  );

  my $graph = QB->new( "graph", <<END_XML )->playback( $d );
  <graph>
      <vertex name="0" />
      <edge from="1" to="2" />
      <edge from="2" to="1" />
  </graph>
  END_XML

  print $graph, $graph->is_sparse ? " is sparse!\n" : "\n";
should print "0,1-2,2-1 is sparse!\n".
This is good if you can tell what object to add to the stack before seeing content. Some XML parsing is more general than that: if you see no child elements, you want to create one class to contain just character content, otherwise you want to add a container class to contain the child nodes.
A faster alternative to the 3 edge rules relies on the fact that SAX's start_element events carry the attributes, so you can actually use a single rule instead of the three shown above:
  'edge' => sub {
      xpeek->add_edge(
          $_[1]->{Attributes}->{"{}from"}->{Value},
          $_[1]->{Attributes}->{"{}to"  }->{Value},
      );
  },
Push values on to the xstack. These will be removed from the xstack at the end of the current element. The topmost item on the xstack is available through the peek method. Elements xpushed before the first element (usually in the start_document() event) remain on the stack after the document has been parsed and a call like
my $elt = $dispatcher->xpop;
can be used to retrieve them.
Tries to add a possibly named item to the element on the top of the stack and push the item on to the stack. It makes a guess about how to add items depending on what the current top of the stack is.
xadd $name, $new_item;
does this:
  Top of Stack     Action
  ============     ======
  scalar           xpeek .= $new_item;
  SCALAR ref       ${xpeek} .= $new_item;
  ARRAY ref        push @{xpeek()}, $new_item;
  HASH ref         push @{xpeek->{$name}}, $new_item;
  blessed object   xpeek->$method( $new_item );
The $method in the last item is one of (in order) "add_$name", "push_$name", or "$name".
After the above action, an
xpush $new_item;
is done.
$name defaults to the LocalName of the current node if it is an attribute or element, so
xadd $foo;
will DWYM. TODO: search up the current node's ancestry for a LocalName when handling other event types.
If no parameters are provided, xvalue is used.
If the stack is empty, it just xpush()es on the stack.
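Under the hood this kind of dispatch is just a switch on ref(). A simplified, stand-alone imitation — not the module's actual code, and handling only the "add_$name" method form rather than the full add_/push_/$name fallback chain — behaves like this:

```perl
use strict;
use warnings;
use Scalar::Util qw( blessed );

# Toy version of the xadd dispatch table, acting on an explicit $top.
sub add_to {
    my ( $top, $name, $new_item ) = @_;

    if    ( blessed $top )         { my $m = "add_$name"; $top->$m( $new_item ) }
    elsif ( ref $top eq 'ARRAY' )  { push @$top, $new_item }
    elsif ( ref $top eq 'HASH' )   { push @{ $top->{$name} }, $new_item }
    elsif ( ref $top eq 'SCALAR' ) { $$top .= $new_item }
    else                           { die "can't add to '$top'" }

    return $top;
}

my $h = {};
add_to( $h, vertex => $_ ) for "a", "b";
print "@{ $h->{vertex} }\n";   # a b
```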
Like xadd(), but tries to set a named value. Dies if the value is already defined (so duplicate values aren't silently ignored).
xset $name, $new_item;
does this:
  Top of Stack     Action
  ============     ======
  scalar           xpeek = $new_item;
  SCALAR ref       ${xpeek} = $new_item;
  HASH ref         xpeek->{$name} = $new_item;
  blessed object   xpeek->$name( $new_item );
Trying to xset any other types results in an exception.
After the above action (except when the top is a scalar or SCALAR ref), an
xpush $new_item;
is done so that more may be added to the item.
$name defaults to the LocalName of the current node if it is an attribute or element, so
xset $foo;
will DWYM. TODO: search up the current node's ancestry for a LocalName when handling other event types.
If no parameters are provided, xvalue is used.
Exactly like xset but does not complain if the value has already been xadd(), xset() or xoverwrite().
Rules => [ "foo" => sub { my $elt = $_[1]; xpeek->set_name( $elt->{Attributes}->{"{}name"} ); }, "/end::*" => sub { my $self = shift; XXXXXXXXXXXXXXXXXXXX } ],
Returns the top element on the xstack, which was the last thing pushed in the current context. Throws an exception if the xstack is empty. To check for an empty stack, use eval:
my $stack_not_empty = eval { xpeek };
To peek down the xstack, use a Perlish index value. The most recently pushed element is index number -1:
  xpeek( -1 );   ## Same as $self->xpeek
The first element pushed on the xstack is element 0:
  xpeek( 0 );
An exception is thrown if the index is off either end of the stack.
  my $d = XML::Filter::Dispatcher->new(
      Rules => [
          ....rules to build an object hierarchy...
      ],
  );

  my $result = $d->xpop;
Removes an element from the xstack and returns it. Usually called in an end_document handler or after the document returns, to retrieve a "root" object placed on the stack before the root element was started.
Handy for detecting a nonempty stack:
warn xpeek unless xstack_empty;
Because xpeek and xpop throw exceptions on an empty stack, xstack_empty is needed to detect whether it's safe to call them.
Handy for walking the stack:
  for my $i ( reverse 0 .. xstack_max ) {  ## from top to bottom
      use BFD; d xpeek( $i );
  }
Because xpeek and xpop throw exceptions on an empty stack, xstack_max may be used to walk the stack safely.
This section assumes familiarity with XPath in order to explain some of the particulars and side effects of the incremental XPath engine.
Expressions like 0, false(), 1, and 'a' have no location path and apply to all nodes (including namespace nodes and processing instructions).
A common mistake is to use && or == instead of and or =; EventPath follows XPath operator syntax, not Perl's.
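Concretely (the patterns here are invented for illustration):

```perl
## XPath operators -- correct:
'book[@lang = "en" and @binding]' => sub { ... },

## Perl operators -- will not parse:
## 'book[@lang == "en" && @binding]' => sub { ... },
```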
Only some axes can be reasonably supported within a SAX framework without building a DOM and/or queueing SAX events for in-document-order delivery.
On the other hand, lots of SAX-specific Axes are supported.
SAX does not guarantee that characters events will be aggregated as much as possible, the way text() nodes are in XPath. Generally, however, this is not a problem; instead of writing
"quotation/text()" => sub { ## BUG: may be called several times within each quotation elt. my $self = shift; print "He said '", $self->current_node->{Data}, "'\n'"; },
write
"string( quotation )" => sub { my $self = shift; print "He said '", xvalue, "'\n'"; },
The former is unsafe; consider the XML:
<quotation>I am <!-- bs -->GREAT!<!-- bs --></quotation>
Rules like .../text() will fire twice, which is not what is needed here.
Rules like string( ... ) will fire once, at the end_element event, with all descendant text of quotation as the expression result.
You can also place an XML::Filter::BufferText instance upstream of XML::Filter::Dispatcher if you really want to use the former syntax (but the GREAT! example will still generate more than one event due to the comment).
All axes are implemented except for those noted below as "todo" or "not soon".
Also except where noted, axes have a principal event type of start_element. This node type is used by the * node type test.
Note: XML::Filter::Dispatcher tries to die() on nonsensical paths like /a/start-document::* or //start-cdata::*, but it may miss some. This is meant to help in debugging user code; the eventual goal is to catch all such nonsense.
attribute:: (XPath, attribute)

child:: (XPath)

Selects start_element, end_element, start_prefix_mapping, end_prefix_mapping, characters, comment, and processing_instruction events that are direct "children" of the context element or document.

descendant:: (XPath)

descendant-or-self:: (XPath)
end:: (SAX, end_element)

Like child::, but selects the end_element event of the element context node.

This is usually used in preference to end-element:: due to its brevity.

Because this selects the end element event, most of the path tests that may follow other axes are not valid following this axis. self:: and attribute:: are the only legal axes that may occur to the right of this axis.
end-document:: (SAX, end_document)

Like self::, but selects the end_document event of the document context node.

Note: Because this selects the end document event, most of the path tests that may follow other axes are not valid following this axis. self:: is the only legal axis that may occur to the right of this axis.
end-element:: (SAX, end_element)

EXPERIMENTAL. This axis is not necessary given end::.

Like child::, but selects the end_element event of the element context node. This is like end::, but different from end-document::.

Note: Because this selects the end element event, most of the path tests that may follow other axes are not valid following this axis. attribute:: and self:: are the only legal axes that may occur to the right of this axis.
following:: (XPath, not soon)

following-sibling:: (XPath, not soon)

Implementing following axes will take some tricky postponement logic and is likely to wait until I have time. Until then, setting a flag in $self in one handler and checking it in another should suffice for most uses.
namespace:: (XPath, namespace, todo)

parent:: (XPath, todo (will be limited))

parent/ancestor paths will not allow you to descend the tree; that would require DOM building and SAX event queueing.
preceding:: (XPath, not soon)

preceding-sibling:: (XPath, not soon)

Implementing reverse axes will take some tricky postponement logic and is likely to wait until I have time. Until then, setting a flag in $self in one handler and checking it in another should suffice for most uses.
self:: (XPath)

start:: (SAX, start_element)

This is like child::, but selects the start_element events. This is usually used in preference to start-element:: due to its brevity.

start:: is rarely used to drive code handlers because rules that match document or element events already fire code handlers only on the start_element event and not the end_element event (however, when a SAX handler is used, such expressions send both start and end events to the downstream handler, so start:: has utility there).
start-document:: (SAX, start_document)

EXPERIMENTAL. This axis is easily confused with start-element::, and is not necessary given start::.

This is like self::, but selects only the start_document events.
start-element:: (SAX, start_element)

EXPERIMENTAL. This axis is not necessary given start::.

This is like child::, but selects only the start_element events.
Anything not on this list or listed as unimplemented is a TODO. Ring me up if you need it.
normalize-space() is equivalent to normalize-space(.).
Object may be a number, boolean, string, or the result of a location path:
  string( 10 );
  string( /a/b/c );
  string( @id );
string() is equivalent to string(.).
string-length() not supported; can't stringify the context node without keeping all of the context node's children in memory. Could enable it for leaf nodes, I suppose, like attrs and #PCDATA containing elts. Drop me a line if you need this (it's not totally trivial or I'd have done it).
See notes about node sets for the string() function above.
Converts strings, numbers, booleans, or the result of a location path (number( /a/b/c )).
Unlike real XPath, this dies if the object cannot be cleanly converted in to a number. This is due to Perl's varying level of support for NaN, and may change in the future.
number() is equivalent to number(.).
Many of these cannot be fully implemented in an event oriented environment.
No support for nodesets, though.
Supports limited nodesets, see the string() function description for details.
Some features are entirely or just currently missing due to the lack of nodesets or the time needed to work around their lack. This is an incomplete list; it's growing as I find new things not to implement.
No nodesets => no count() of nodes in a node set.
With SAX, you can't tell when you are at the end of what would be a node set in XPath.
I will implement pieces of this as I can. None are implemented as yet.
XPath has no concept of time; it's meant to operate on a tree of nodes. SAX has start_element and end_element events and start_document and end_document events.
By default, XML::Filter::Dispatcher acts on start events and not end events (note that all rules are evaluated on both, but the actions are not run on end_ events by default).
By including a call to the is-start-event() or is-end-event() functions in a predicate, the rule may be forced to fire only on end events, or on both start and end events (using a [is-start-event() or is-end-event()] idiom).
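As a rules fragment (the element name is invented; the predicate functions are the ones described above):

```perl
Rules => [
    ## Fires only at </quote> (or at <quote/>):
    'quote[is-end-event()]' => sub { ... },

    ## Fires at both <quote> and </quote>:
    'quote[is-start-event() or is-end-event()]' => sub { ... },
],
```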
text() handlers fire once per text node instead of once per characters() event.
add_rule(), remove_rule(), set_rules() methods.
Pass Assume_xvalue => 0 flag to tell X::F::D not to support xvalue and xvalue_type, which lets it skip some instructions and run faster.
Pass SortAttributes => 0 flag to prevent calling sort() for each element's attributes (note that Perl changes hashing algorithms occasionally, so setting this to 0 may expose ordering dependencies in your code).
NOTE: this section describes things that may change from version to version as I need different views in to the internals.
Set the option Debug => 1 to see the Perl code for the compiled ruleset. If you have GraphViz.pm and ee installed and working, set Debug => 2 to see a graph diagram of the intermediate tree generated by the compiler.
Set the env. var XFDSHOWBUFFERHIGHWATER=1 to see what events were postponed the most (in terms of how many events had to pile up behind them). This can be of some help if you experience lots of buffering or high latency through the filter. Latency meaning the lag between when an event arrives at this filter and when it is dispatched to its actions. This will only report events that were actually postponed. If you have a 0 latency filter, the report will list no events.
Set the env. var XFDOPTIMIZE=0 to prevent all sorts of optimizations.
perl, especially across some platforms that it apparently isn't easily supported on.
This is more of a frustration than a limitation, but this class requires that you pass in a type when setting variables (in the Vars ctor parameter or when calling xset_var). This is so that the engine can tell what type a variable is, since string(), number() and boolean() all treat the Perlian 0 differently depending on its type. In Perl the digit 0 means false, 0 or '0', depending on context, but it's a consistent semantic. When passing a 0 from Perl-land to XPath-land, we need to give it a type so that string() can, for instance, decide whether to convert it to '0' or 'false'.
...to Kip Hampton, Robin Berjon and Matt Sergeant for sanity checks and to James Clark (of Expat fame) for posting a Yacc XPath grammar where I could snarf it years later and add lots of Perl code to it.
Barrie Slaymaker <barries@slaysys.com>
You may use this module under the terms of the Artistic or GNU Public licenses, your choice. Also, a portion of XML::Filter::Dispatcher::Parser is covered by:
The Parse::Yapp module and its related modules and shell scripts are copyright (c) 1998-1999 Francois Desarmenien, France. All rights reserved. You may use and distribute them under the terms of either the GNU General Public License or the Artistic License, as specified in the Perl README file.
Note: Parse::Yapp is only needed if you want to modify lib/XML/Filter/Dispatcher/Grammar.pm | http://search.cpan.org/~rbs/XML-Filter-Dispatcher-0.52/lib/XML/Filter/Dispatcher.pm | CC-MAIN-2015-48 | refinedweb | 6,237 | 60.85 |
Flash sound tutorial
Contents
- 1 Overview
- 2 Basics
- 3 Sound imports to frames of the timeline
- 4 Attaching sound to buttons
- 5 Load and play sounds with ActionScript
- 6 Play sounds randomly with one button
- 7 Links
1 Overview
- Learning goals
- Use sound (attach sound to frames and button frames)
- Edit the sound envelope with the Flash tool
- Load sound files with ActionScript
- Play sound with ActionScript, both sound textures from the library and loaded sound files
- Prerequisites
- Flash CS6 desktop tutorial
- Flash drawing tutorial
- flash layers tutorial
- flash button tutorial
- Flash CS6 motion tweening tutorial or some other technique that uses the timeline
- Moving on
- The Flash article has a list of other tutorials.
- Flash Video component tutorial
-
Grab the various *.fla files from here:
- Alternative versions
- Flash CS3 sound tutorial (old version)
2 Basics

We shall explain the whole procedure using a simple animation example.
The animation with sound example shows a motion animation with a global music sound track and 4 layers with sound "textures" that are limited in time.
Source code:
- flash-cs6-cloud-animation-sound.fla
- The sound clips are already in the library. Consult Sound Assets if you are looking for some free sounds. Also consider using the built-in sound library: Menu Window -> Common libraries -> Sounds
3 Sound imports to frames of the timeline

3.1 Background sounds
Smaller sound files (not full CD tracks !) should be imported to the library.
- To import a sound file
- File->Import->Import To library (or drag and drop).
3.2 Attaching sound to a frame
- Step 1 - Create a new layer and import sound to a frame
You can attach sound to any frame via the properties panel
- Create a new layer for this sound (not mandatory, but good practice)
- later).
3.3 Editing sounds
- Editing sound with the Edit Envelope editor
- Click in the sound layer in some frame where you have sound
- In the Properties Panel, Click the Edit ... button next to the Effect: field
This opens the Edit Envelope editor, where you can edit the sound volume.

3.4 Example used
- flash-cs6-cloud-animation-sound.* from this directory
4 Attaching sound to buttons

- Button with sound files
- See the button with sound.
- Source: flash-cs3-button-sound.fla
5 Load and play sounds with ActionScript
It is better to load sounds with ActionScript if your sound file is large, e.g. a background music track, or if you want to trigger a sound as a result of some complex user interaction. Select the frame where the sound should start (typically in a "script" layer), then insert this kind of code with F9.
- Select the ActionScript tab
- Insert code like the following (the same load-and-play lines are used in the full example below on this page):

 var s:Sound = new Sound(new URLRequest("music.mp3"));
 s.play();
Stopping a single sound takes some more code, your_sound.stop() will not work, since you will have to stop a Sound channel (as opposed to just the sound file). Use the following code fragment. See the on/off button example just below for a complete example.
 var channel:SoundChannel;
 channel = s.play();
 .....
 channel.stop();
For an on/off button, use the following code.
 /* Click to Play/Stop Sound
    Clicking on the symbol instance stops the sound.
    Clicking on the symbol instance plays the specified sound again.
 */

 stop_start.addEventListener(MouseEvent.CLICK, start_stop_sound);

 var fl_SC:SoundChannel;
 var s:Sound = new Sound(new URLRequest("music.mp3"));

 // This variable keeps track of whether you want to play or stop the sound
 var fl_ToPlay:Boolean = true;

 function start_stop_sound(evt:MouseEvent):void
 {
     // want to play
     if (fl_ToPlay)
     {
         // Need to capture the SoundChannel that is used to play
         fl_SC = s.play();
     }
     else
     {
         fl_SC.stop();
     }
     // switch state of wanting to play
     fl_ToPlay = ! fl_ToPlay;
 }
Example code:
Example using the "play()" and "stopAll()" methods
For an example used in the Flash drag and drop tutorial, look at flash-cs3-drag-and-drop-matching-3.*
With this example you will learn how to play different sounds with one button.
import flash.utils.Dictionary;
import flash.events.MouseEvent;

// loading sounds (in our case the four sounds correspond to the sounds of the notes C#, D#, F#, G#)
var request:URLRequest = new URLRequest("");
var ci_diese:Sound = new Sound();
ci_diese.load(request);
var request_two:URLRequest = new URLRequest("");
var d_diese:Sound = new Sound();
d_diese.load(request_two);
var request_three:URLRequest = new URLRequest("");
var f_diese:Sound = new Sound();
f_diese.load(request_three);
var request_four:URLRequest = new URLRequest("");
var g_diese:Sound = new Sound();
g_diese.load(request_four);

var play_liste = 0;

// dictionary which associates numbers with sounds in order to play the sounds randomly
var dictSounds = new Dictionary();
dictSounds[1] = d_diese;
dictSounds[2] = ci_diese;
dictSounds[3] = f_diese;
dictSounds[4] = g_diese;

// changing the cursor into a hand over the button (in our case we created an instance on the scene and named it "play_sounds")
play_sounds.buttonMode = true;

// Listener on the button that will play our sounds
play_sounds.addEventListener(MouseEvent.MOUSE_DOWN, mouseDownHandler);

// this function will play the sound that corresponds to the random number.
// by using the random function we create numbers between 1 and 4.
function mouseDownHandler(event:MouseEvent):void {
    play_liste = Math.ceil(Math.random() * 4);
    dictSounds[play_liste].play();
}
7 Links
- Sound Assets (look this up if you need websites with free sounds)
7.1 Documentation
- Working with sound (Adobe), Using sounds, some AS2, no AS3
- SoundMixer (Adobe AS3 reference) | http://edutechwiki.unige.ch/en/Flash_sound_tutorial | CC-MAIN-2017-17 | refinedweb | 834 | 54.83 |
Setting up your robot using tf - boost::bind()
Hello all, I am having a hard time trying to understand ROS, C++, and associated concepts. This is probably not the best place to post the question, but I would be glad for some help with my doubts.
Here we go. From the "Setting up your robot using tf" Navigation Stack tutorials we have the following snippet of code:
ros::Timer timer = n.createTimer(ros::Duration(1.0), boost::bind(&transformPoint, boost::ref(listener)));
I am not sure that I really understand what is going on here. It seems to me that every second the functor created by boost::bind will be called as a callback. This functor, when called, calls the function transformPoint, passing it the listener argument wrapped in boost::ref(). In general, I suppose that this is what is happening, but I think I am missing something. So, I have some questions.
1_ Why is boost::ref used to pass the listener variable as an argument? Could I pass it directly, i.e. without boost::ref?
As an exercise, I tried to reproduce the same behavior in the following code:
#include <iostream>
#include "boost/bind.hpp"
#include "boost/ref.hpp"

void show(char a) {
    std::cout << a << std::endl;
}

int main() {
    char letter = 'a';
    boost::bind(&show, boost::ref(letter));
}
But it did not work as I was expecting: the function show was not called.
2_ Is it possible to reproduce the same behavior presented by the snippet of code above? If so, how?
I could reproduce a similar behavior with:
int main() {
    char letter = 'a';
    boost::bind(&show, _1)(boost::ref(letter));
}
But it is not in the same format as the one used in the ROS implementation.
Thanks in advance, any help is welcome.
ps. I already look for exemples in internet, but I could not find exactly the answer to my doubts. | https://answers.ros.org/question/231203/setting-up-your-robot-using-tf-boostbind/?sort=oldest | CC-MAIN-2019-51 | refinedweb | 319 | 66.03 |
Here is the assignment:
I need a program that will accept user input and stop when the user enters -1. The program needs to output the average of the numbers entered, and it also has to find the highest and lowest entry. The program has to be written using the given code, with the only changes permitted being at the bottom.
When I ask my instructor a question for my online class, the answer is usually "look in the book". The instructor has admitted that the assigned textbook sucks.
#include <iostream>
#include <cmath>
using namespace std;

int main()
{
    double scores[75];
    int counter = -1;
    do
    {
        counter++;
        cout << "Please enter a score (enter -1 to stop): ";
        cin >> scores[counter];
    } while (scores[counter] >= 0);
    // can not change above here
I do not want the solution given to me. I just need to figure out how to get it. | http://www.dreamincode.net/forums/topic/196972-manipulating-user-input-in-an-array/ | CC-MAIN-2016-50 | refinedweb | 145 | 63.83 |
Aslak Hellesoy wrote:
> Jörg Schaible wrote:
>
>> Hi folks,
>>
>> just a heads-up, since my commit did not show up at the scm list. I've
>> merged again HEAD into MX_PROPOSAL to get the latest stuff from Aslak
>> (LC & Visitors). I also fixed any problem within PicoContainer so that
>> the PicoTests all pass now. I will continue soon to look at all other
>> modules.
>>
Hey Jörg,
just a quick warning. I wasn't able to keep up with the latest
development of pico 1.1 so there might be some things in MX_PROPOSAL
that aren't ideal for 1.1. But if the tests pass it should work.
> Good stuff Jorg. I had a look a week ago, and it all looks good. -Except
> that some tests seemed to be commented out. (Don't remember exactly
> where). These must be commented back in before we merge back to HEAD.
I removed/commented the GenericCollection/Map/List stuff. I later added
support for arrays but stopped at this point due to some questions. You
never answered one of them tho:
public class SomethingGreedy {
public SomethingGreedy(Object[] everything) { ... }
}
This thing will receive _all_ components in the PicoContainer.
Everything there is in the whole tree. Depending on the Cyclic
dependency detection strategy used, it will either fail (since the container will
try to inject it into itself too) or pass and have a very packed component. I think
this is a security issue and we should think about it first.
> How about settling for a merge from MX_PROPOSAL to HEAD about a week
> from now? That should give most of us sufficient time to discuss any
> piece of code they don't agree with.
I'll try and take a look at what has changed so far. Still very busy. :(
/thomas | http://article.gmane.org/gmane.comp.java.picocontainer.devel/3491 | crawl-002 | refinedweb | 291 | 76.62 |
From Confused to Proficient: Three Key Elements and Implementation of Kubernetes Cluster Service
By Sheng Dong, Alibaba Cloud After-Sales Technical Expert
Generally, it is not easy to understand the concept of the Kubernetes cluster service. In particular, troubleshooting service-related issues with a faulty understanding leads to more challenges. For example, beginners find it hard to understand why a service IP address fails to be pinged and for experienced engineers, understanding the service-related iptables configuration is a great challenge.
This article explains the key principles and implementation of the Kubernetes cluster service.
Essence of the Kubernetes Cluster Service
Theoretically, the Kubernetes cluster service works as a Server Load Balancer (SLB) or a reverse proxy. It is similar to Alibaba Cloud SLB in that it has its own IP address and front-end port. Multiple pods (container groups) are attached to the service as back-end servers, each with its own IP address and listening port.
When the architecture of SLB plus back-end servers are combined with a Kubernetes cluster, it results in the most intuitive implementation method. The following diagram represents the method.
A node in the cluster functions as the SLB, similar to Linux Virtual Server (LVS), while the other nodes host the back-end container groups.
This implementation method has a single point of failure (SPOF). Kubernetes clusters are the result of Google's many years of automated O&M, and such an implementation deviates from the philosophy of intelligent O&M.
Built-in Correspondent
The Sidecar mode is the core concept in the microservices field and indicates that a correspondent is built-in. Those who are familiar with the service mesh must be familiar with the Sidecar mode. However, only a few people notice that the original Kubernetes cluster service is implemented based on the Sidecar mode.
In a Kubernetes cluster, a Sidecar reverse proxy is deployed on each node for service implementation. Access to cluster services is converted by the reverse proxy on the node to the access to back-end container groups of the services.
The following figure shows the relationships between nodes and the Sidecar proxies.
Translating the Kubernetes Cluster Service Into Reverse Proxy
The preceding two sections introduced that the Kubernetes cluster service works as an SLB or a reverse proxy, and that this reverse proxy is deployed on each cluster node as the node's Sidecar.
In this case, the kube-proxy controller of the Kubernetes cluster translates the service into the reverse proxy. For more information about how a Kubernetes cluster controller works, refer to the article about controllers in this series. The kube-proxy controllers are deployed on cluster nodes and monitor cluster status changes through API servers. When a service is created, kube-proxy translates the cluster service status and attributes into the reverse proxy configuration. Next, the reverse proxy implementation is done as shown in the following figure.
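The translation step can be pictured as a small state-sync function. The following is an illustrative sketch only (not the real kube-proxy code; all names and addresses are invented for the example): it turns the observed service state into DNAT-style mappings from a service address to its back-end pod addresses.

```python
# Illustrative sketch of the kube-proxy idea: translate cluster-service
# state into DNAT-style rules. All data here is invented for the example.

def build_dnat_rules(services):
    """Map (cluster IP, port) of each service to its list of pod endpoints."""
    rules = {}
    for svc in services:
        key = (svc["cluster_ip"], svc["port"])
        rules[key] = [(ep["ip"], ep["port"]) for ep in svc["endpoints"]]
    return rules

services = [
    {
        "cluster_ip": "10.96.0.100",
        "port": 80,
        "endpoints": [
            {"ip": "10.244.1.5", "port": 8080},
            {"ip": "10.244.2.7", "port": 8080},
        ],
    }
]

rules = build_dnat_rules(services)
print(rules[("10.96.0.100", 80)])
# [('10.244.1.5', 8080), ('10.244.2.7', 8080)]
```

The real controller re-runs this translation whenever it observes a service or endpoint change through the API server and writes the result into the node's iptables configuration.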
Implementation
Currently, the reverse proxy is implemented for Kubernetes cluster nodes in three modes, including userspace, iptables, and ipvs. This article further analyzes the implementation in iptables mode. The underlying network is based on the Flannel cluster network of Alibaba Cloud.
Filter Framework
Assume a scenario, where there is a house with a water inlet pipe and a water outlet pipe. Since, you cannot directly drink the water that enters the inlet pipe because it contains impurities, you might want to directly drink the water from the outlet pipe. Therefore, you cut the pipe and add a water filter to remove impurities.
After a few days, in addition, to directly drinking the water from the outlet pipe, you might want the water to be hot. Therefore, you must cut the pipe again and add a heater to it.
Certainly, it is not feasible to cut the water pipe and add a new device every time demands change. Also, after a certain point it is not possible to cut the pipe any further.
Therefore, there is a need for a new design. First, fix the incision of the water pipe. Use the preceding scenario as an example and ensure that the water pipe has only one incision position. Then, abstract two water processing methods, including physical change and chemical change.
Based on the preceding design, filter impurities, by adding an impurity filtering rule to the chemical change module. To increase the temperature, add a heating rule to the physical change module.
The filter framework is much better than the pipe cutting method. To design the framework, fix the pipe incision position and abstract two water processing methods.
Now check the iptables mode, or more accurately, how Netfilter works. Netfilter is a filter framework with five incisions on the pipeline for network packet sending and receiving and routing, including PREROUTING, FORWARD, POSTROUTING, INPUT, and OUTPUT. In addition, Netfilter defines several processing methods of network packets, such as NAT and filter.
Note that PREROUTING, FORWARD, and POSTROUTING greatly increase the complexity of Netfilter. Barring these functions, Netfilter is as simple as the water filter framework.
Node Network Overview
This section describes the overall network of Kubernetes cluster nodes. Horizontally, the network environment on nodes is divided into different network namespaces, including host network namespaces and pod network namespaces. Vertically, each network namespace contains a complete network stack, from applications to protocol stacks and network devices.
At the network device layer, use the cni0 virtual bridge to build a virtual LAN (VLAN) in the system. The pod network is connected to the VLAN through the veth pair. The cni0 VLAN communicates externally through the host route and the eth0 network port.
At the network protocol stack layer, implement the reverse proxy of cluster nodes by programming Netfilter.
Implementing the reverse proxy means performing Destination Network Address Translation (DNAT), that is, changing the destination of a data packet from the IP address and port of the cluster service to the IP address and port of a specific container group.
As shown in the Netfilter figure, NAT rules can be added at PREROUTING, OUTPUT, and POSTROUTING to change the source or destination address of a data packet.
For DNAT, the destination must be changed before the routing decision is made, so that the packet is then routed correctly. Therefore, the rules implementing the reverse proxy must be added to PREROUTING (for packets arriving from the network) and OUTPUT (for locally generated packets).
Use the PREROUTING rule to process the data flow for access from a pod to the service. After moving from veth in the pod network to cni0, a data packet enters the host protocol stack and is first processed by PREROUTING in Netfilter. After DNAT, the destination address of the data packet changes to the address of another pod, and then the data packet is forwarded to eth0 by the host route and sent to the correct cluster node.
The DNAT rule added to OUTPUT processes data packets sent from the host network to the service in a similar way: the destination address is changed before routing, so that the packet can be forwarded correctly.
Upgrade the Filter Framework
This section introduces Netfilter as a filter framework. In Netfilter, there are five incisions on the data pipeline and they process data packets. Although the fixed incision positions and the classification of network packet processing methods greatly optimize the filter framework, there is a need to modify the pipeline to handle new functions. In other words, the framework does not completely decouple the pipeline and the filtering function.
To decouple the pipeline and the filtering function, Netfilter uses the table concept. The table is the filtering center of Netfilter. The core function of the table is the classification (table) of filtering methods and the organization (chain) of filtering rules for each filtering method.
After the filtering function decouples from the pipeline, all the processing of data packets becomes the configuration of the table. The five incisions on the pipeline change to the data flow entry and exit, which send data flow to the filtering center and transmit the processed data flows along the pipeline.
As shown in the preceding figure, Netfilter organizes rules into a chain in the table. The table contains the default chains for each pipeline incision and the custom chains. A default chain is a data entry, which jumps to a custom chain to implement some complex functions. Custom chains bring obvious benefits.
To implement a complex filtering function, such as implementing the reverse proxy of Kubernetes cluster nodes, use custom chains to modularize the rules.
Implement the Reverse Proxy of the Service by Custom Chains
Implementation of the reverse proxy of cluster service indicates that custom chains implement DNAT of data packets in modularization mode. KUBE-SERVICE is the entry chain of the entire reverse proxy, which is the total entry of all services. The KUBE-SVC-XXXX chain is the entry chain of a specific service.
The KUBE-SERVICE chain jumps to the KUBE-SVC-XXXX chain of a specific service based on the service IP address. The KUBE-SEP-XXXX chain represents the address and port of a specific pod, that is, the endpoint. The KUBE-SVC-XXXX chain of a specific service uses a certain algorithm (generally a random algorithm) to jump to the endpoint chain.
As mentioned above, DNAT must change the destination address before the routing decision, so that data packets are routed correctly. Therefore, the KUBE-SERVICE chain is called from the default chains of PREROUTING and OUTPUT.
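To make the chain structure concrete, here is an illustrative sketch of the corresponding rules in the nat table. The addresses, ports, and chain-name hashes are invented for the example; in a real cluster the entry chain is spelled KUBE-SERVICES, and the generated rules carry additional match comments.

```
# Entry chain: match the service's cluster IP and port, jump to the per-service chain.
-A KUBE-SERVICES -d 10.96.0.100/32 -p tcp --dport 80 -j KUBE-SVC-XXXX

# Per-service chain: pick one endpoint (random load balancing across two pods).
-A KUBE-SVC-XXXX -m statistic --mode random --probability 0.5 -j KUBE-SEP-AAAA
-A KUBE-SVC-XXXX -j KUBE-SEP-BBBB

# Per-endpoint chains: DNAT to a concrete pod IP and port.
-A KUBE-SEP-AAAA -p tcp -j DNAT --to-destination 10.244.1.5:8080
-A KUBE-SEP-BBBB -p tcp -j DNAT --to-destination 10.244.2.7:8080
```

You can inspect the real rules on a node with iptables-save -t nat.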
Conclusion
This article provides a comprehensive understanding of the concept and implementation of the Kubernetes cluster service. The key points are as follows:
- The service essentially works as SLB.
- To implement service load balancing, Kubernetes uses the Sidecar mode, which is similar to a service mesh, instead of an exclusive LVS-type mode.
- kube-proxy is essentially a cluster controller. Another key point is the design of the Netfilter filter framework, which is what makes the iptables-based implementation of service load balancing possible.
A friend of mine was working on a programming exercise, and it turns out it was based on a chunk of math which I thought I should have seen before, but either have not seen or have forgotten. It's basically that the product of all primes less than some number n is less than or equal to e^n, and in the limit converges to precisely e^n. First of all, I wrote a chunk of code to test it out, at least for primes less than a million. Here's the code I wrote (I swiped my rather bizarre looking prime sieve code from a previous experiment with different sieving algorithms):
# Python 2
from math import sqrt, log

def sieveOfErat(end):
    if end < 2:
        return []

    # The array doesn't need to include even numbers
    lng = ((end/2)-1+end%2)

    # Create array and assume all numbers in array are prime
    sieve = [True]*(lng+1)

    # In the following code, you're going to see some funky
    # bit shifting and stuff, this is just transforming i and j
    # so that they represent the proper elements in the array

    # Only go up to square root of the end
    for i in range(int(sqrt(end)) >> 1):
        # Skip numbers that aren't marked as prime
        if not sieve[i]:
            continue
        # Unmark all multiples of i, starting at i**2
        for j in range((i*(i + 3) << 1) + 3, lng, (i << 1) + 3):
            sieve[j] = False

    # Don't forget 2!
    primes = [2]

    # Gather all the primes into a list, leaving out the composite numbers
    primes.extend([(i << 1) + 3 for i in range(lng) if sieve[i]])

    return primes

sum = 0.
for p in sieveOfErat(1000000):
    sum += log(p)
    print p, sum/p
Most of the code is just the sieve. In the end, instead of taking a very large product, we instead take the logarithm of both sides. This means that the sum of the logs should be nearly equal to n. The program prints out the value of the prime, and how sum compares to the value of p. Here’s a quick graph of the results:
Note, it’s not monotonically increasing, but it does appear to be converging. You can run this for higher and higher values and it does appear to be converging to 1.
This seems like a rather remarkable thing to me. The relationship between e and primes seems (to me) completely unobvious, I wouldn’t have any idea how to go about proving such a thing. A quick search on Wikipedia reveals this page on the primorial function but similarly gives little insight. Recalling Stirling’s approximation for ordinary factorials suggests that these large products are related to exponentials (Stirling’s approximation not only has a factor of e in it, but also the square root of two times pi as well), but the idea that the product of primes would precisely mimic powers of e seems deeply mysterious…
Any math geniuses out there care to point me at a (hopefully simple) explanation of why this might be? Or is the explanation far from simple? | http://brainwagon.org/2014/02/ | CC-MAIN-2016-18 | refinedweb | 529 | 58.76 |
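One pointer for the curious: the running sum computed above is known in analytic number theory as the (first) Chebyshev function,

```latex
\theta(x) \;=\; \sum_{p \le x} \log p,
\qquad\text{so}\qquad
\prod_{p \le x} p \;=\; e^{\theta(x)} .
```

The statement that θ(x)/x → 1 as x → ∞, which is exactly what the plot above suggests, is a classical result equivalent to the Prime Number Theorem, π(x) ∼ x/log x. So the appearance of e here is as deep as the Prime Number Theorem itself rather than an elementary accident.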
Code splitting with dynamic imports in Next.js
How to speed up your Next.js app with code splitting and smart loading strategies.
What will you learn?
This post explains different types of code splitting and how to use dynamic imports to speed up your Next.js apps.
Route-based and component-based code splitting
By default, Next.js splits your JavaScript into separate chunks for each route. When users load your application, Next.js only sends the code needed for the initial route. When users navigate around the application, they fetch the chunks associated with the other routes. Route-based code splitting minimizes the amount of script that needs to be parsed and compiled at once, which results in faster page load times.
While route-based code splitting is a good default, you can further optimize the loading process with code splitting on the component level. If you have large components in your app, it's a great idea to split them into separate chunks. That way, any large components that are not critical or only render on certain user interactions (like clicking a button) can be lazy-loaded.
Next.js supports dynamic
import(),
which allows you to import JavaScript modules (including React components)
dynamically and load each import as a separate chunk. This gives you
component-level code splitting and enables you to control resource loading so
that users only download the code they need for the part of the site that
they're viewing. In Next.js, these components are server-side rendered
(SSR)
by default.
Dynamic imports in action
This post includes several versions of a sample app that consists of a simple page with one button. When you click the button, you get to see a cute puppy. As you move through each version of the app, you'll see how dynamic imports are different from static imports and how to work with them.
In the first version of the app, the puppy lives in
components/Puppy.js. To
display the puppy on the page, the app imports the
Puppy component in
index.js with a static import statement:
import Puppy from "../components/Puppy";
To see how Next.js bundles the app, inspect the network trace in DevTools:
To preview the site, press View App. Then press Fullscreen.
Press Control+Shift+J (or Command+Option+J on Mac) to open DevTools.
Click the Network tab.
Select the Disable cache checkbox.
Reload the page.
When you load the page, all the necessary code, including the
Puppy.js
component, is bundled in
index.js:
When you press the Click me button, only the request for the puppy JPEG is added to the Network tab:
The downside of this approach is that even if users don't click the button to
see the puppy, they have to load the
Puppy component because it's included in
index.js. In this little example that's not a big deal, but in real-world
applications it's often a huge improvement to load large components only when
necessary.
Now check out a second version of the app, in which the static import is
replaced with a dynamic import. Next.js includes
next/dynamic, which makes it
possible to use dynamic imports for any components in Next:
import dynamic from "next/dynamic";

const Puppy = dynamic(() => import("../components/Puppy"));

// ...
Follow the steps from the first example to inspect the network trace.
When you first load the app, only
index.js is downloaded. This time it's
0.5 KB smaller (it went down from 37.9 KB to 37.4 KB) because it
doesn't include the code for the
Puppy component:
The
Puppy component is now in a separate chunk,
1.js, which is loaded only
when you press the button:
By default, Next.js names these dynamic chunks number.js, where number starts from 1.
In real-world applications, components are often much larger, and lazy-loading them can trim your initial JavaScript payload by hundreds of kilobytes.
Dynamic imports with custom loading indicator
When you lazy-load resources, it's good practice to provide a loading indicator
in case there are any delays. In Next.js, you can do that by providing an
additional argument to the
dynamic() function:
const Puppy = dynamic(() => import("../components/Puppy"), {
  loading: () => <p>Loading…</p>,
});
To see the loading indictor in action, simulate slow network connection in DevTools:
To preview the site, press View App. Then press Fullscreen.
Press Control+Shift+J (or Command+Option+J on Mac) to open DevTools.
Click the Network tab.
Select the Disable cache checkbox.
In the Throttling drop-down list, select Fast 3G.
Press the Click me button.
Now when you click the button it takes a while to load the component and the app displays the "Loading…" message in the meantime.
Dynamic imports without SSR
If you need to render a component only on the client side (for example, a chat
widget) you can do that by setting the
ssr option to
false:
const Puppy = dynamic(() => import("../components/Puppy"), {
ssr: false,
});
Conclusion
With support for dynamic imports, Next.js gives you component-level code splitting, which can minimize your JavaScript payloads and improve application load time. All components are server-side rendered by default and you can disable this option whenever necessary. | https://web.dev/code-splitting-with-dynamic-imports-in-nextjs/ | CC-MAIN-2020-29 | refinedweb | 870 | 66.54 |
----- Original Message -----
From: Costin Manolache <costin@eng.sun.com>
To: <tomcat-dev@jakarta.apache.org>
Sent: Sunday, April 02, 2000 4:34 AM
Subject: Re: Multi-homing/Virtual named hosts
Ok, it appears we are getting the wrong ends of sticks here, so I will try
and clarify my position and see how you can explain how I should progress. I
suspect this all stems a misunderstanding on my part from ContextManagers
and how they get passed around.
> > 2))
Context getContextByPath(String path) {
    ...
    lookup:
    do {
        ctx = cm.getContext(path);
        if (ctx == null) {
            int i = path.lastIndexOf('/');
            if (i > -1 && path.length() > 1) {
                path = path.substring(0, i);
                if (path.length() == 0) {
                    path = "/";
                }
            } else {
                // path too short
                break lookup;
            }
        } else {
        }
    } while (ctx == null);
    // no map - root context
    if (ctx == null) {
        ctx = cm.getContext( "" );
    }
    return ctx;
}
Now, let's see what cm.getContext(path) does:
public Context getContext(String name) {
    return (Context)contexts.get(name);
}
Hmmmmmmmmmm - this means that the context is being indexed on its _path_ -
which is absolutely no use in the situation of multiple contexts with the
_same_ path (two virtual hosts each with /, for example).
> ContextManager is the "controller" - it just maintains the collection of
> Contexts
> ( without knowing the semantics or relation between contexts), the
> collection of interceptors ( that will know how to manipulate Req/Resp and
> Contexts) and the adapters. It doesn't have to know about what's inside
> Context or Request.
> > 4) The Ajp12 (and others?) handler would then need to be fixed to get the
> > right context, as the context is being derived solely from the path, and
> > moved later on when the server name is found.
>
> No problem - it's done. The virtual host is already there, and all web
> servers we support (Apache, IIS, NES) can deal with virtual hosts.
> > 3) The ServerName property can appear not fully qualified, which means that
> > either (a) the server would need to fully qualify a requested domain name
> > (which shouldn't be too nasty as the name service mechanism on the local
> > machine should have cached it) or (b) require partial matching. Which should
> > be supported? I think partial matching will slow it down.
>
> I don't know - we should check with existing implementations (like apache).
Anyone out there know? It would save me a lot of work trolling through years
of feeping creaturism.
Richard | http://mail-archives.apache.org/mod_mbox/tomcat-dev/200004.mbox/%3C001601bf9c2e$0877f090$4bd90eca@sew.co.nz%3E | CC-MAIN-2016-30 | refinedweb | 390 | 66.44 |
Type Extensions (F#)
Type extensions let you add new members to a previously defined object type.
There are two forms of type extensions that have slightly different syntax and behavior. An intrinsic extension is an extension that appears in the same namespace or module, in the same source file, and in the same assembly (DLL or executable file) as the type being extended. An optional extension is an extension that appears outside the original module, namespace, or assembly of the type being extended. Intrinsic extensions appear on the type when the type is examined by reflection, but optional extensions do not. Optional extensions must be in modules, and they are only in scope when the module that contains the extension is open.
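The general form of a type extension, sketched here from the description that follows (typename, self-identifier, member-name, and member-body are placeholders):

```fsharp
// Extend an existing type with new members.
// 'end' is optional in lightweight (indentation-based) syntax.
type typename with
    member self-identifier.member-name =
        member-body
    // ... additional members ...
end
```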
In the previous syntax, typename represents the type that is being extended. Any type that can be accessed can be extended, but the type name must be an actual type name, not a type abbreviation. You can define multiple members in one type extension. The self-identifier represents the instance of the object being invoked, just as in ordinary members.
The end keyword is optional in lightweight syntax.
Members defined in type extensions can be used just like other members on a class type. Like other members, they can be static or instance members. These methods are also known as extension methods; properties are known as extension properties, and so on. Optional extension members are compiled to static members for which the object instance is passed implicitly as the first parameter. However, they act as if they were instance members or static members according to how they are declared. Implicit extension members are included as members of the type and can be used without restriction.
Extension methods cannot be virtual or abstract methods. They can overload other methods of the same name, but the compiler gives preference to non-extension methods in the case of an ambiguous call.
If multiple intrinsic type extensions exist for one type, all members must be unique. For optional type extensions, members in different type extensions to the same type can have the same names. Ambiguity errors occur only if client code opens two different scopes that define the same member names.
In the following example, a type in a module has an intrinsic type extension. To client code outside the module, the type extension appears as a regular member of the type in all respects.
module MyModule1 =
    // Define a type.
    type MyClass() =
        member this.F() = 100

    // Define type extension.
    type MyClass with
        member this.G() = 200

module MyModule2 =
    let function1 (obj1: MyModule1.MyClass) =
        // Call an ordinary method.
        printfn "%d" (obj1.F())
        // Call the extension method.
        printfn "%d" (obj1.G())
You can use intrinsic type extensions to separate the definition of a type into sections. This can be useful in managing large type definitions, for example, to keep compiler-generated code and authored code separate or to group together code created by different people or associated with different functionality.
In the following example, an optional type extension extends the System.Int32 type with an extension method FromString that calls the static member Parse. The testFromString method demonstrates that the new member is called just like any instance member.
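A sketch of such an extension (reconstructed from the description above, so treat the exact details as assumptions):

```fsharp
// Optional extension: add a FromString member to System.Int32.
// Optional extensions must live in a module.
module Int32Extensions =
    type System.Int32 with
        // Wraps the static Parse member.
        member this.FromString(s: string) =
            System.Int32.Parse(s)

open Int32Extensions

let testFromString (str: string) =
    let i = 0
    // The extension member is called just like an instance member.
    printfn "%d" (i.FromString(str))
```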
The new instance member will appear like any other method of the Int32 type in IntelliSense, but only when the module that contains the extension is open or otherwise in scope.
Before F# 3.1, the F# compiler didn't support the use of C#-style extension methods with a generic type variable, array type, tuple type, or an F# function type as the “this” parameter. F# 3.1 supports the use of these extension members.
For example, in F# 3.1 code, you can use extension methods whose signatures resemble C#-style extension-method syntax, that is, a static method whose first parameter is marked with the C# this modifier.
However, for a generic type, the type variable may not be constrained. You can now declare a C#-style extension member in F# to work around this limitation. When you combine this kind of declaration with the inline feature of F#, you can present generic algorithms as extension members.
Consider the following declaration:
By using this declaration, you can write code that resembles the following sample.
In this code, the same generic arithmetic code is applied to lists of two types without overloading, by defining a single extension member. | http://msdn.microsoft.com/en-us/library/dd233211(v=vs.120).aspx | CC-MAIN-2014-52 | refinedweb | 767 | 56.25 |
You can block almost all signals, with the notable exception of SIGKILL.
By default the kill command sends SIGTERM, which you can block.
Read about the sigaction system call to learn how to block signals.
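To make the distinction concrete, here is a small Python sketch of the same idea (the C equivalent uses sigaction/sigprocmask): SIGTERM's disposition can be changed, SIGKILL's cannot.

```python
import os
import signal

# Replace SIGTERM's default action (terminate the process) with "ignore".
old = signal.signal(signal.SIGTERM, signal.SIG_IGN)

# Sending ourselves SIGTERM now has no effect.
os.kill(os.getpid(), signal.SIGTERM)
survived = True

# SIGKILL is the exception: the kernel refuses to change its disposition.
try:
    signal.signal(signal.SIGKILL, signal.SIG_IGN)
    sigkill_ignorable = True
except (OSError, ValueError):
    sigkill_ignorable = False

signal.signal(signal.SIGTERM, old)  # restore the previous handler
```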
No; when you stop Tomcat, the application context is torn down and Spring
Integration doesn't have any control.
We introduced a new Orderly Shutdown feature in 2.2 using a JMX operation
that allows stopping all active components (pollers, JMS listener
containers, etc) then waiting some time for messages to quiesce. Some
endpoints (such as the http inbound) are aware of this state and won't
allow new requests to come in, while staying active to handle active
threads.
It's not a perfect solution but it covers the vast majority of use cases.
I have amended your code as per below:
Option Explicit

Dim result
result = MsgBox("Shutdown?", vbYesNo, "Yes/No Exm")

Select Case result
    Case vbYes
        MsgBox("shutting down ...")
        Dim objShell
        Set objShell = WScript.CreateObject("WScript.Shell")
        objShell.Run "C:\WINDOWS\system32\shutdown.exe -r -t 20"
    Case vbNo
        MsgBox("Ok")
End Select
The main issues were that "option explicit" has to be at the top, and as a
result the "result" variable then must be declared using the "dim" keyword.
The above code works fine when I executed it via the command line.
I also added a timeout of 20, but you can easily change this back to the
original value of 0.
You can use the appcmd command line utility for managing sites on IIS. It's
located in %systemroot%\system32\inetsrv\APPCMD. I think it is available in
IIS v7 and above only though, not sure if your using an older version of
IIS.
To stop and start a site, the command will look like the following:
%systemroot%\system32\inetsrv\APPCMD stop site <Your Site's Name>
%systemroot%\system32\inetsrv\APPCMD start site <Your Site's Name>
More info on the appcmd utility is here:
This is actually not JUnit but the external systems which run JUnit tests
(like Eclipse or Maven) that are responsible for terminating the JVM. Those
call System.exit, which stops all the threads. If JUnit did it, the external
system would have no chance to process the results.
You need to handle the WM_QUERYENDSESSION messsage. It's sent to each
application before Windows starts the shutdown process. Do what you need
quickly, because failure to respond rapidly enough causes the behavior
you're observing in FireFox, which is usually a sign of a badly designed
app (and the user may terminate it before you get a chance to finish).
interface
...
type
  TForm1 = class(TForm)
    procedure WMQueryEndSession(var Msg: TWMQueryEndSession);
      message WM_QUERYENDSESSION;
  end;

implementation

procedure TForm1.WMQueryEndSession(var Msg: TWMQueryEndSession);
begin
  // Do what you need to do (quickly!) before closing
  Msg.Result := 1; // nonzero = OK for the session to end
end;
(Just as an aside: The enabling/disabling of sounds is a per-user setting,
and you should have a very good need for inter
The if (running == 0) bit is pointless!
while (running == 1) {
    commands();
}
return 0;
Does exactly the same - once running is 0 it falls out the bottom of the
loop, and main returns. The whole idea of the global running is getting
into side effect programming, which is a bad thing!
Block all incoming requests:
Add a filter to your application with a boolean flag that decides whether to
accept requests; by default it accepts.
Add a ServletContextListener, and in its contextDestroyed() method set that
boolean so the filter rejects new requests.
@echo off && for /r %F in (*) do if %~zF==0 del "%F" > NUL
The > NUL is there because I can't recall if certain situations cause del to
try to produce output.
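As an aside, the equivalent zero-byte cleanup on Linux or macOS is usually done with find rather than a for /r loop:

```shell
# Create a scratch directory with one empty and one non-empty file.
tmp=$(mktemp -d)
touch "$tmp/empty.txt"
printf 'hi' > "$tmp/full.txt"

# Delete every zero-byte regular file under $tmp.
find "$tmp" -type f -size 0 -delete
```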
This usually happens when you format the namenode without cleaning up the
datanode, so the namespace IDs no longer match.
Probable solution:
Try deleting the /app/hadoop/tmp/dfs/data directory and restart the
datanode
Updated answer.
Some options:
In your terminal (dev mode basically), just type "Ctrl-C"
If you started it as a daemon (-d) find the PID and kill the process:
SIGTERM will shut Elasticsearch down cleanly
If running as a service, run something like service elasticsearch stop.
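The PID route is the standard Unix pattern: record the PID at startup, then send SIGTERM and confirm the process is gone. Generically (sleep stands in for the Elasticsearch process; Elasticsearch writes its own PID file if started with -p es.pid):

```shell
# Start a long-running process and remember its PID.
sleep 60 &
pid=$!

# Ask it to shut down cleanly.
kill -TERM "$pid"
wait "$pid" 2>/dev/null || true   # reap it

# Confirm it is gone: signal 0 just probes for existence.
alive=0
if kill -0 "$pid" 2>/dev/null; then
    alive=1
fi
```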
Previous answer. It's deprecated as of 1.6.
Yeah. See admin cluster nodes shutdown documentation
Basically:
# Shutdown local node
$ curl -XPOST ''
# Shutdown all nodes in the cluster
$ curl -XPOST ''
Specify a datastore path that is not in /tmp. By default /tmp is a memory
based filesystem and will therefore be cleared on each reboot.
For instance:
dev_appserver.py --datastore_path=/home/newapp_datastore /home/newapp
The issue is that you have not been given well-formed XML, and the parser
legitimately gets in a mess when it sees data that is not legal according to
the XML specification.
Thus you have to alter the XML and replace every raw & with &amp;
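If you are the one producing the XML, the cleanest fix is to escape text before it goes into the document; in Python, for example:

```python
from xml.sax.saxutils import escape

# escape() handles the characters XML reserves: &, < and >.
raw = "Fish & Chips <and> more"
print(escape(raw))  # Fish &amp; Chips &lt;and&gt; more
```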
We dealt with the same problem earlier this year. First, you need to look
through your log files to determine which site(s) are getting attacked. My
guess would be that the sites being hit are 1.5 sites; without exception
that was the case on our servers. If that is the case, then those sites
need to be let go. If they don't want to upgrade, they need to take their
sites elsewhere. Simple as that. You cannot risk your other sites and email
blacklists due to customers that don't want to upgrade. We don't allow J1.5
on our servers any more.
iGoogle has already expired but not the gadget link you have posted.
Currently it returns 200 OK with probably your desired message.
Update: It got expired too. No Gadgets are available at the moment.
Update: Available from here
This is code from the threading.py module:

import sys as _sys

class Thread(_Verbose):

    def _bootstrap_inner(self):
        # some code

        # If sys.stderr is no more (most likely from interpreter
        # shutdown) use self._stderr. Otherwise still use sys (as in
        # _sys) in case sys.stderr was redefined since the creation of
        # self.
        if _sys:
            _sys.stderr.write("Exception in thread %s:\n%s\n" %
                              (self.name, _format_exc()))
        else:
            # some code
might be helpful. The error you see comes from the else branch. So in your
case:
import sys as _sys

while True:
    if not _sys:
        break/return/die/whatever
    do_something()
    time.sleep(interval)
I'm not sure if it works though
I have not tried this with Spring Quartz. However, normally while using
Quartz, we shut down the scheduler gracefully. By gracefully, I mean that I
will shut down the scheduler only after executing all my pending jobs (jobs
which are currently executing but have not yet marked their completion).
For graceful shutdowns, we pass the argument true to the shutdown method.
Refer to the API here
I am eager to know how the Spring Quartz implementation does this.
I'd suggest creating a project directory for each VM you plan to use. If
you change into that empty project directory before doing vagrant init a
dedicated Vagrantfile is created for that project/VM, which then can be
customized to your needs. To use that customized Vagrantfile then, just run
vagrant up from inside your projects directory. Not sure if this solves
your problem but it's worth a try I guess. ;-)
Btw. you can check if your VM is running with the command vagrant status
[machine-name].
The org.eclipse.jetty.server.Server has a .stop() method you can call to
perform a graceful shutdown of the Jetty server instance.
Also, in Eclipse, be sure you have the "Debug View" open.
Window > Show View > Other > Debug
That will show you the list of running processes that any Eclipse plugin
started for you.
You might think you terminated it, but it might still be registered as
running.
I'm not sure if there is some shutdown method, but there are two methods
which can help. As it is written in the docs:
When the dispatcher no longer has any actors registered, how long will it
wait until it shuts itself down, defaulting to your akka configs "akka.
You can set akka.actor.default-dispatcher.shutdown-timeout in
reference.conf and then detach you actor from your dispatcher.
Try this solution:
"I was able to fix it: Uninstalling Snippet Designer and deleting the Code
Snippets folder on C:\Users\USER\Documents\Visual Studio 2012"
Taken | http://www.w3hello.com/questions/-Shut-down-of-asp-net-process- | CC-MAIN-2018-17 | refinedweb | 1,362 | 65.01 |
can i get the answer for this one plz
i cant find any answer i tried everything
2 Answers
Jennifer Nordell, Treehouse Teacher
Hi there Ali! First, I don't know which question you're referring to, nor am I likely to give a direct answer without knowing what you've tried thus far. Also, the questions are presented in a random order, so "this one" doesn't narrow down to which question you're referring. But here are some hints:
Q: What is the purpose of the following code statement?
Look around 3:23 of this video
This hint can be applied to three separate quiz questions.
Adobe.Illustrator.Canvas.Paint consists of 3 parts. The method which is the last item (Paint). The class, which is the next to last item (Canvas). And the namespace which is everything before the class. Remember, a namespace can contain a dot to represent a separation between the company and the particular product.
Hope this helps!
Aurelian Spodarec, 10,789 Points
Of course you can! The answer is in the video, if you re-watch it!
But on a serious note, try re-watching it. Sometimes when I learn, I might not get it the first time; in fact, I re-watch the videos all the time.
A X, 12,842 Points
ali raafat : Do you know how to post your code so we can see what you've done so far? | https://teamtreehouse.com/community/can-i-get-the-answer-for-this-one-plz | CC-MAIN-2022-40 | refinedweb | 243 | 81.53 |
I have a problem with the following problem:
Problem:
Implement a function count_words() in Python that takes as input a string s and a number n, and returns the n most frequently occurring words in s. The return value should be a list of tuples - the top n words paired with their respective counts [(<word1>, <count1>), (<word2>, <count2>), ...], sorted in descending count order.
You can assume that all input will be in lowercase and that there will be no punctuations or other characters (only letters and single separating spaces). In case of a tie (equal count), order the tied words alphabetically.
E.g.:
print count_words("betty bought a bit of butter but the butter was bitter",3)
Output:
[('butter', 2), ('a', 1), ('betty', 1)]
This is my solution:
"""Count words."""
from operator import itemgetter
from collections import Counter
def count_words(s, n):
"""Return the n most frequently occuring words in s."""
# TODO: Count the number of occurences of each word in s
words = s.split(" ");
words = Counter(words)
# TODO: Sort the occurences in descending order (alphabetically in case of ties)
print(words)
# TODO: Return the top n words as a list of tuples (<word>, <count>)
top_n = words.most_common(n)
return top_n
def test_run()
"""Test count_words() with some inputs."""
print(count_words("cat bat mat cat bat cat", 3))
print(count_words("betty bought a bit of butter but the butter was bitter", 3))
if __name__ == '__main__':
test_run()
You can sort them using the number of occurrence (in reverse order) and then the lexicographical order:
>>> lst = [('meat', 2), ('butter', 2), ('a', 1), ('betty', 1)]
>>> sorted(lst, key=lambda x: (-x[1], x[0]))  # -x[1] gives reverse (descending) count order
[('butter', 2), ('meat', 2), ('a', 1), ('betty', 1)]
The number of occurrence takes precedence over the lex. order.
In your case, use words.items() in place of the list I have used. You will
no longer need most_common, as sorted already does the job.
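Putting the counting and the tie-breaking sort together, the whole function collapses to a few lines:

```python
from collections import Counter

def count_words(s, n):
    """Return the n most frequent words in s as (word, count) tuples,
    with ties broken alphabetically."""
    counts = Counter(s.split())
    return sorted(counts.items(), key=lambda x: (-x[1], x[0]))[:n]

print(count_words("betty bought a bit of butter but the butter was bitter", 3))
# [('butter', 2), ('a', 1), ('betty', 1)]
```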
I commiserate with those who knew him for their loss.
Ongoing, my first-edition copy of K&R, which has been within arm's length of me whenever I've been programming for very nearly 30 years, will probably continue to get referenced weekly as it has for most of that time.
There are showmen, some legends in their lifetimes, and then there are the quiet men that just got on and did what needed to be done.
#include <stdio.h>

void main() {
    hat(0);
    while (elapsed_time() < 60) {
        moment_of_silence(1);
    }
    printf("Goodbye, world!\n");
    hat(1);
}
mortality ...
... sucks..
We have another thread for this at RIP Dennis Ritchie. The two threads were posted almost simultaneously.
Hello!
I previously solved a communication issue using a Nano 33 IoT. The key to that solution was to adjust the number of bits in the UART packets to have 7 data bits, no parity, and a single stop bit. When switching to the Nano 33 BLE Sense I found that using the standard “mySerial.begin(speed, config)” is only applying the speed, but the configuration never changes.
After some testing, it seems to be sending 1 start bit and 8 bits of data with no parity and 1 stop bit, regardless of what I enter for the config. I verified this using an oscilloscope and varying the config from Serial_5N1 to Serial_8N1 and varying the number of bits between 1 and 2 to check if a parity bit was ever generated.
Does anyone know how to adjust the UART packet length on a Nano 33 BLE Sense?
Here is the code I’ve been using to test the configuration settings:
#include <Arduino.h>

UART mySerial(digitalPinToPinName(6), digitalPinToPinName(5), NC, NC);

void setup() {
  mySerial.begin(1000000, SERIAL_7N1);
}

uint8_t x[5] = {0b00000000, 0b00010000, 0b00011000, 0b11111111, 0b01010101};

void loop() {
  mySerial.write(x, 1);
  mySerial.flush();
  delay(5);
}
Hi,
I have the code below and it works fine with the IDLE; however, it is not working with T.C.
import os
myPath = 'O:\\myDir' #this is my mapped network drive
os.chdir(myPath)
Can someone please shed a light!
What happens when you try it?
I got error message:
Python runtime error.
FileNotFoundError
[WinError3] The system cannot find the specific path specified: 'O:\\\\'
If you hard code the path in the os.chdir line, does it work? That will tell you if it is a syntax problem with the second line or a permissions problem and TestComplete can't get to that directory.
Hardcoded is not working either. For example:
os.chdir('O:\\myFolder')
It works fine when running with IDLE, but not with T.C., and I don't know why. How do I know if it's a permission problem with T.C.?
Are you able to browse to that folder and open the file from the machine where TC is installed?
Yes, I can. I am an admin of the system where T.C. is installed.
I used IDLE on the same machine, and did not have any problem.
Can we see your actual code please? It's hard to tell what's going on from made up examples. | https://community.smartbear.com/t5/TestComplete-General-Discussions/os-chdir-myPath-is-not-working/td-p/171740 | CC-MAIN-2021-17 | refinedweb | 213 | 86.4 |
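For reference, one way to surface this failure mode from the script itself (a sketch; the drive letter is illustrative): mapped network drives are per-user and per-session, so a process running under a different account, as TestComplete's runtime can, may simply not see O: at all.

```python
import os

def chdir_if_visible(path):
    """chdir only if the path is reachable from *this* process; report which."""
    if os.path.isdir(path):
        os.chdir(path)
        return True
    return False

if not chdir_if_visible(r'O:\myDir'):   # hypothetical mapped drive
    print('Path not visible from this process/session')
```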
Data alignment problems with sizeof and new
- From: astdsoftware <astdsoftware@xxxxxxxxxxxxxxxx>
- Date: Tue, 1 Nov 2005 13:12:03 -0800
I have a strange problem with a project involving classes with data alignment
padding. It appears that, in this particular project, the sizeof and new
operators are not taking the alignment padding into account when calculating
the size of an object. The struct member alignment is set to 8 (/Zp8), and
there are no #pragma pack directives. The test class below shows how, given a
class with the first member being a dword size followed by doubles, the first
double member is padded out to an offset of 8, increasing the memory
footprint by 4 bytes. Using the sizeof() operator outside the scope of the
class, the size returned is the sum of all of the members, but if I use the
sizeof operator inside the constructor, it calculates the right size.
The real problem is that the new operator only allocates enough memory for
the object without any padding, and when the constructor starts initializing
its data members, the last one writes beyond the memory block allocated for
the class. If I move this code to a different project, it works just fine,
but I can't find any project options or anything which might explain this
behavior.
Of course, the problem goes away if I change the alignment to /Zp4, simply
because the padding goes away, but that only works if I don't use anything
smaller than 4 bytes in size.
Any clues would be greatly appreciated!
-----------------------------------------------------------
// TestClass.h
#pragma once

class TestClass
{
public:
    TestClass(void);
    ~TestClass(void);

    unsigned int m_dwData;   // Offset 0
    double m_Data1;          // Offset 8
    double m_Data2;          // Offset 16 (0x10)
    double m_Data3;          // Offset 24 (0x18)
    double m_Data4;          // Offset 32 (0x20)
};
-------------------------------------------------------------
// TestClass.cpp
#include "StdAfx.h"
#include ".\testclass.h"

TestClass::TestClass(void)
    : m_dwData(0)
    , m_Data1(0)
    , m_Data2(0)
    , m_Data3(0)
    , m_Data4(0)
{
    int mysize = sizeof(*this); // Evaluates to 28h (Correct Value)
}

TestClass::~TestClass(void)
{
}
--------------------------------------------------------------
// Main.cpp
#include ".\testclass.h"

TestClass* pTest;
pTest = new TestClass;
int tsize = sizeof(*pTest); // Evaluates to 24h (Incorrect Value)
---------------------------------------------------------------
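For what it's worth, the layout rule at work here is easy to pin down in isolation: with default member alignment the double cannot sit right after the unsigned int, so the compiler pads, and a conforming sizeof must count that padding. A minimal standalone check (compile-time only):

```cpp
// Standalone check of the expected layout (on a typical 64-bit target,
// d lands at offset 8 and sizeof(Padded) is 16).
#include <cstddef>

struct Padded {
    unsigned int dw;  // offset 0, 4 bytes
    double d;         // starts at the next multiple of alignof(double)
};

// d is placed on an alignof(double) boundary...
static_assert(offsetof(Padded, d) % alignof(double) == 0, "d must be aligned");
// ...and the padding before it is counted in sizeof.
static_assert(sizeof(Padded) == offsetof(Padded, d) + sizeof(double),
              "sizeof includes alignment padding");
```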
89 Posts Tagged 'Ruby'
Vim: fun with filters
Vim lets you pipe text through an external filter. There are some obviously nice ways to use this in Linux, like
:!sort | uniq
which will sort all your lines, and then get rid of duplicate lines. But you can do things that are much more sophisticated if you write your own scripts which read from STDIN and output something back to STDOUT.
For example I wrote this Ruby script.
#!/usr/bin/ruby
del = %r{#{Regexp.escape ARGV[0]}} if ARGV[0]
del ||= %r{\s+}
STDIN.each do |line|
  puts '(' + line.strip.gsub(/'/, "''").split(del, -1).collect { |x| "'#{x}'" }.join(',') + '),'
end
This will take a line full of delimited fields, escape all the single-quotes, split into fields on the delimiter, wrap each field in single-quotes, put commas between the fields, wrap each line in (), and put a comma at the end of the line. You can either specify a delimiter, or don't specify one and it'll default to splitting on whitespace. I use this to turn a delimited ASCII file of data into a form suitable for an INSERT command in SQL. So if I start with this:
bob|2|3|Some description
chester|1|4|Another description
sarah|99|0|Let's try an apostrophe
and run this in Vim:
:%!sql_wrap.rb '|'
I get this:
('bob','2','3','Some description'),
('chester','1','4','Another description'),
('sarah','99','0','Let''s try an apostrophe'),
Or consider another simple example. This script will HTML-escape text:
#!/usr/bin/ruby
require 'cgi'
STDIN.each do |line|
  puts CGI::escapeHTML(line)
end
So it'll turn this:
Is 5 > 3? Yes, & isn't that neat?
into this:
Is 5 &gt; 3? Yes, &amp; isn't that neat?
RubyFacets
Today I found a really neat site, RubyFacets. Reminds me a bit of Perl's List::Util and List::MoreUtils; it's a bunch of methods to extend core classes in interesting ways.
A while back I posted about a way to prevent Ruby from raising an exception when trying to access an un-initialized subarray of a multidimensional array by extending NilClass. At RubyFacets I found something arguably more interesting: auto-initializing sub-hashes of a multi-dimensional hash.
The code:
def self.auto(*args)
  leet = lambda { |hsh, key| hsh[key] = new(&leet) }
  new(*args, &leet)
end
It took me a couple minutes to figure out what this is doing. The standard new method for class Hash takes a block; if you reference an uninitialized hash element (via the
[] method) that block will be called, which presumably assigns the element a default value (though it doesn't have to).
The above method assigns a default value to any uninitialized Hash elements referenced via
[]. The default value is a new Hash object. The new Hash object's constructor is also passed a block which assigns new Hash objects to uninitialized Hash elements. You can see above that the "leet" anonymous function contains a reference to itself. I find that mighty clever. This lets you do crazy things like
h = Hash.auto h['a']['lot']['of']['dimensions'] = true
and you'll get hashes the whole way down.
Lisp, part 2
Ruby date enumeration
The Date class in ruby provides an
upto method, so you can iterate over a series of dates.
Date.new(2000,1,1).upto( Date.new(2001,1,1) ) do |d|
  puts d
end
This counts by days, so it will print 365 values or so from 2000-01-01 to 2001-01-01.
What if you want to count by months? Being able to modify classes in Ruby makes this easy enough. Not sure this handles all situations, but it worked for what I needed.
class Date
  def +(n)
    if n == 0 then
      return self
    elsif self.month + n > 12
      return Date.new( self.year + (n.to_f / 12).ceil,
                       (self.month + n) % 12,
                       self.day )
    else
      return Date.new(self.year, self.month + n, self.day)
    end
  end

  def upto(max)
    date = self
    until date > max do
      yield date
      date = date + 1
    end
  end
end

Date.new(2000,1,1).upto( Date.new(2001,1,1) ) do |d|
  puts d
end
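Worth noting: the standard library's Date already supports month arithmetic via the >> operator, which also clamps end-of-month dates (something the naive patch above would raise on, since Jan 31 plus one month has no Feb 31):

```ruby
require 'date'

d = Date.new(2000, 1, 31)
jan_plus_1  = d >> 1    # Date#>> shifts forward by whole months
jan_plus_13 = d >> 13

# End-of-month clamping: Jan 31 + 1 month is Feb 29 in the leap year 2000,
# and + 13 months is Feb 28, 2001.
```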
Ruby > Perl
A while back I stopped coding in Perl and started using Ruby for mostly everything. Today I had occasion to use Perl again, because there's no good working equivalent to Perl's Spreadsheet::WriteExcel that works well in Ruby; this Ruby spreadsheet package is a bit too buggy. (It's not my choice to use Excel, but they're paying me to use it. I can't complain. Well, I can complain here I guess. And will.)
One thing I notice about Perl is that Perl sure does give your fingers a workout. I looked at the fairly simple 103-line Perl script I wrote, and it has exactly 118 dollar signs. That's a lot of Shift-$ finger reaching, if you think about it. Ruby doesn't even use curly braces around blocks; it uses do and end, which type quite nicely. Ruby does use a lot of pipes, but I can easily do a one-handed Shift-| maneuver if I lift my right hand off the home row. Think of the potential gains I will have when I'm older from the avoidance of arthritis alone.
Try to come up with a more petty gripe than this. I dare you..
Theseus and the Minotaur
Here's a little Java game that I found pretty entertaining.
When I got to the sixth puzzle I decided to see if I could write a program to solve these kinds of puzzles. I did; here it is in Ruby, featuring OOP goodness and a bit of recursion, but otherwise just brute force.
It only takes about .04 seconds to solve maze 9. It doesn't find an optimal solution; it tends to have Theseus wander around like a drunkard. Maybe it could be improved with heuristics, but I couldn't think of one. "Move towards the goal" doesn't work in general, because Theseus has to backtrack a lot on purpose to strand the Minotaur behind walls. It'll save one or two moves at most. "Move away from the Minotaur" or "Move toward the Minotaur" don't work because both are necessary many times. So I don't know. I only tried it on puzzle 6-9, but it seems to work.
FlashGot
I'm likely the last person in the world who heard of FlashGot, but better late than never. FlashGot is a Firefox plugin that lets you integrate with an external download manager program. It also lets you download every link on a page via a single menu command, which is either nice or overkill, depending on what you want to do.
Linux doesn't have many (any?) good download managers. There's D4X, but I never cared much for it. I installed GWGET but FlashGot didn't auto-recognize it, and I'm not going through any trouble to get it working.
However I still find FlashGot incredibly useful, for one reason: You can use a custom downloader executable. FlashGot will then call the executable and pass it the download URL as a command line argument. You can also pass other arguments (read about them all here) but the URL is all I really need.
The downloader I use is a simple Ruby script I wrote myself which calls wget. What's the point of this, you ask? Well, you can do some neat things like:
- Filter your downloads into directories by filetype, filename, source website, or any criteria at all.
- Spawn massive numbers of parallel downloads with a single click. (Probably not a good idea to hammer servers too much with this though, it's not nice.)
- Use all the power of wget, which includes:
  - custom timeout duration
  - download retrying
  - download resuming
  - filename timestamping
  - download speed throttling
  - FTP support
  - (perhaps my favorite) GOOD filename collision resolution, so if you download a file called 1.png and then download a file called 1.png from a different site, wget will save the second one as 1.png.1. This is something I miss from Safari. Firefox by default tends to ask you if you want to overwrite the old file, which gets very annoying very quickly.
You could even conceivably crawl a web page or do recursive downloads.
Let's say you want every MP3 you download to go into a "music" folder, every PNG you download to go into a "Pictures" folder, and ignore all other files. You could do something extremely simple like this (which I just wrote in 5 minutes and haven't tested):

#!/usr/bin/ruby
require 'fileutils'

begin
  ARGV.each do |arg|
    dir = ''
    if arg =~ /mp3/i then
      dir = '/home/chester/music'
    elsif arg =~ /png/i then
      dir = '/home/chester/pictures'
    else
      dir = nil
    end

    if dir then
      FileUtils.mkdir(dir) unless File.directory?(dir)
      Dir.chdir(dir) do
        `wget #{arg}`
      end
    end
  end
rescue Exception => e
  # If you want to see the output
  # when the script crashes, you
  # could log it here.
  raise e
end
Point FlashGot to this script and when you "FlashGot All", all linked PNGs and MP3s on a site will be downloaded and sorted, and all other links will be ignored. This would be very useful if you want to grab a whole page of wallpapers for example.
Shoot. | http://briancarper.net/tag/95/ruby?p=9 | CC-MAIN-2013-48 | refinedweb | 1,573 | 73.68 |